Quantitative polymerase chain reaction (or qPCR) is a well-established assay for nucleic acid quantification and is still regarded as the method of choice in most areas of molecular biology. Though different types of qPCR quantification exist (absolute and relative), determining the amplification efficiency should be among the first things to do when setting up a qPCR assay. Understanding efficiency and how to calculate it is crucial for accurate data interpretation.
Ideally, the number of molecules of the target sequence should double during each replication cycle, corresponding to a 100% amplification efficiency. Conversely, if the number of replicated molecules is less than double, the efficiency is poor – below 100%. The most common reasons for lower efficiencies are bad primer design and non-optimal reagent concentrations or reaction conditions. Secondary structures like dimers and hairpins, or inappropriate melting temperatures (Tm), can affect primer–template annealing, which results in poor amplification. Since each additional dilution contains a proportionally lower starting amount of DNA, predictable differences occur between the Ct values of serially diluted samples (see below).
One way of calculating the amplification efficiency is by making serial dilutions of your target. Once you obtain their Ct values, plot them against the logarithm of the corresponding concentrations. Next, generate a linear regression through the data points and calculate the slope of the trend line. Finally, efficiency is calculated using the equation: E = 10^(-1/slope) − 1. Be sure to understand what influences the slope of the standard curve, as it can otherwise be misleading.
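The calculation above can be sketched in a few lines of Python. The concentrations and Ct values here are made-up illustration data, not measurements from the article:

```python
import numpy as np

# Hypothetical 10-fold dilution series and its measured Ct values
# (illustration data only).
concentrations = np.array([1e5, 1e4, 1e3, 1e2, 1e1])  # starting copies
ct_values = np.array([15.1, 18.4, 21.8, 25.1, 28.5])  # quantification cycles

# Linear regression of Ct against log10(concentration) gives the
# standard-curve slope.
slope, intercept = np.polyfit(np.log10(concentrations), ct_values, 1)

# Efficiency from the slope: E = 10^(-1/slope) - 1.
efficiency = 10 ** (-1 / slope) - 1
print(f"slope = {slope:.2f}, efficiency = {efficiency * 100:.1f}%")
```

A slope of about −3.32 corresponds to 100% efficiency; the example data give a slope near −3.35, i.e. an efficiency just under 100%.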
Typically, desired amplification efficiencies range from 90% to 110%. The theoretical maximum of 100% indicates that the polymerase enzyme is working at maximum capacity. How are efficiencies over 100% even possible then? That would mean more than two copies of the sequence are generated in each qPCR cycle, right?
Amplification efficiency exceeds 100%, how can that be?
The main reason for this is polymerase inhibition. Even if more template is added to the reagent mixture, the Ct values might not shift to earlier cycles. This flattens out the efficiency plot, resulting in a lower slope and an amplification efficiency of over 100%. Inhibitors of the polymerase enzyme include excessive amounts of DNA/RNA or carry-over material in the sample. Common contaminants include heparin, hemoglobin, polysaccharides, chlorophylls, proteinase K, and sodium acetate. Various others can also be transferred from the DNA/RNA isolation step, like ethanol, phenol, and SDS.
If inhibitors are present in the concentrated samples, more cycles are needed to cross the threshold of detection, compared to the samples without inhibitors. Inhibition is more likely to occur in more concentrated samples, and one way to improve the curve slope is by diluting the sample. This is a good way of testing if inhibition is indeed the problem.
[Figure: inhibition of amplification in a concentrated sample. Ten-fold dilutions should be 3.3 cycles apart, but here the concentrated and diluted samples are closer together.]
Let us look at a simple example. In a 10-fold dilution series, each step requires log2(10) ≈ 3.3 extra doublings to make up the difference, so the ΔCt between two dilutions should be around 3.3, given 100% amplification efficiency. If inhibitors are present, however, the ΔCt between two sample dilutions could decrease to, say, 2.8. This value will likely rise back close to 3.3 at the most diluted sample point, where there is less inhibition. Since inhibitors are diluted together with the DNA/RNA, higher dilutions might contain concentrations low enough that the inhibitory effect disappears. The amplification is then again running at full efficiency and the signal comes out the way it should.
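This check is easy to script. The snippet below uses invented Ct values in which the first two dilution steps show the compressed gaps described above; the 0.3-cycle tolerance is an arbitrary choice for illustration:

```python
import math

# For a 10-fold dilution at 100% efficiency, the template must double
# log2(10) times to catch up, so the expected Ct gap per step is:
expected_delta_ct = math.log2(10)  # about 3.32 cycles

# Hypothetical Ct values down a 10-fold dilution series (most
# concentrated sample first).
ct_values = [15.0, 17.8, 20.6, 23.9, 27.2]
delta_cts = [b - a for a, b in zip(ct_values, ct_values[1:])]

# Gaps well below ~3.3 in the concentrated samples suggest inhibition.
for step, d in enumerate(delta_cts, start=1):
    flag = "possible inhibition" if d < expected_delta_ct - 0.3 else "ok"
    print(f"dilution step {step}: dCt = {d:.2f} ({flag})")
```

With these numbers the first two steps (ΔCt ≈ 2.8) are flagged, while the later, more diluted steps (ΔCt ≈ 3.3) look normal, matching the pattern described in the text.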
Consequently, the ΔCt values between the concentrated and diluted samples are smaller than predicted, resulting in an apparent amplification efficiency above 100%.
This artefact can usually be avoided by using highly diluted samples. If inhibition occurs, concentrated samples should be excluded from the analysis when calculating the efficiency. Similarly, the most diluted samples should be omitted if they show high variability, a consequence of stochastic effects at very low template copy numbers. It is, therefore, not appropriate to include very concentrated or very diluted samples in the quantification study.
Inhibition can often be avoided by checking the purity of the DNA/RNA samples with a spectrophotometric measurement prior to qPCR. Purity is assessed as the ratio of absorbance at 260 nm (where nucleic acids absorb) to absorbance at 280 nm (where proteins and other contaminants absorb). If the A260/A280 ratio falls below about 1.8 for DNA or 2.0 for RNA, the samples should be purified. Alternatively, you can use a different sample preparation method. If additional purification steps do not solve the problem, the sample may be inherently difficult to work with. In this case, a qPCR master mix that is more tolerant of inhibitors would be good to consider.
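As a quick sketch, the purity check above could be automated like this; the function name and example readings are hypothetical, and the thresholds follow the guideline values in the text:

```python
def is_pure(a260: float, a280: float, nucleic_acid: str) -> bool:
    """Return True if the A260/A280 ratio meets the usual purity
    threshold (~1.8 for DNA, ~2.0 for RNA)."""
    ratio = a260 / a280
    threshold = 1.8 if nucleic_acid == "DNA" else 2.0
    return ratio >= threshold

# Hypothetical spectrophotometer readings.
print(is_pure(1.95, 1.05, "DNA"))  # ratio ~1.86 -> passes the DNA cutoff
print(is_pure(1.90, 1.05, "RNA"))  # ratio ~1.81 -> below the RNA cutoff
```

Samples that fail such a check would be candidates for re-purification before running the qPCR.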
Other reasons for efficiencies over 100% can be pipetting errors, polymerase enzyme activators, inhibition by reverse transcriptase, inaccurate dilution series, and non-specific products or primer dimers when using intercalating dyes (these should be controlled for in each reaction separately). Make sure none of these is causing unwanted shifts in your amplification curves before starting your next qPCR assay.