Thursday, November 30, 2017

"Steve, what do you think of this ECG in this Cardiac Arrest Patient?"

I was shown this ECG.  The resident asked: "Steve, what do you think of this ECG in this Cardiac Arrest Patient?"
What do you think?

Here is more history:

An elderly woman with h/o CAD and CABG presented after out of hospital cardiac arrest with subsequent resuscitation and return of spontaneous circulation.  It was an unwitnessed arrest and down time was unknown.  The initial prehospital rhythm was asystole.

Here is the initial ED ECG:
Rhythm is regular, but no definite P-waves are visible.
There is a Brugada-like morphology in V1.
There is profound ST elevation in leads III and aVF, with ST depression in aVL.
There is profound ST depression in V2.
What else?

Here was my response:

"What was the potassium?"

Answer: 7.6 mEq/L

The QRS is very wide.

Case continued:

The physicians thought this was STEMI and activated the cath lab.

Cardiology opined that this was a metabolic ECG.

Later, the K returned and they treated the hyperkalemia aggressively.

There was a complex resuscitation which included, among other medications, administration of calcium and insulin.

1 hour later, this ECG was recorded:

See these other hyperK cases also:

Case 1.  A Tragic Case

This patient presented with weakness.

45 minutes later:

Case 3 
PseudoSTEMI due to Hyperkalemia

Case 4
PseudoSTEMI due to hyperkalemia

Also, see this collaborative post on critical hyperkalemia written by Pendell Meyers with edits by Steve Smith and Scott Weingart:

EMCrit - Critical Hyperkalemia by Pendell Meyers

Monday, November 27, 2017

A 54 yo male with sudden chest pain. Computer says normal. Paramedic disagrees.


There is now an Android app for the 3- and 4-variable formulas. It is of course free (#FOAMed).  It was written by Yannick Schäfer (a medical student in France):

Remember there is also an iPhone app called "SubtleSTEMI"


This was sent by a very astute paramedic.

A 54 year old male came to the door of the fire department because of sudden chest pain while working.  It was squeezing and substernal.

The medic recorded an immediate ECG:
What do you think?
There is ST elevation, but it looks exactly like normal ST elevation ("Early Repolarization"), right?
By the way, "Unconfirmed" means a human needs to overread it.

This medic wanted to be certain that this ST elevation with large T-waves was normal ST Elevation, and not a subtle LAD occlusion that only appears to be normal.

He recorded another 1 minute later:
Not much change.
The medic applied the LAD occlusion vs. early repol formula immediately.

1st ECG:
STE60V3 = 3.5
QTc = 390
QRSV2 = 19
RAV4 = 11

3-variable formula value = 23.61 (the most accurate, though not the most sensitive, cutpoint is 23.4)
4-variable formula value = 18.18 (the most accurate, though not the most sensitive, cutpoint is 18.2)

2nd ECG
STE60V3 = 3.5
QTc = 383
QRSV2 = 20.5
RAV4 = 10.5

3-variable formula value = 23.36 (lower)
4-variable formula value = 17.72 (lower)
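The two formulas can be sketched in a few lines of Python. The coefficients below are the published ones (Smith 3-variable; Driver/Smith 4-variable); as a sanity check, they reproduce the first-ECG values above exactly. Function names are mine.

```python
def subtle_lad_3var(ste60v3_mm, qtc_ms, rav4_mm):
    """Smith 3-variable formula; values above the 23.4 cutpoint favor LAD occlusion
    over normal-variant ST elevation."""
    return 1.196 * ste60v3_mm + 0.059 * qtc_ms - 0.326 * rav4_mm

def subtle_lad_4var(ste60v3_mm, qtc_ms, qrsv2_mm, rav4_mm):
    """Driver/Smith 4-variable formula; values above the 18.2 cutpoint favor LAD occlusion."""
    return 1.062 * ste60v3_mm + 0.052 * qtc_ms - 0.151 * qrsv2_mm - 0.268 * rav4_mm

# First prehospital ECG in this case (STE60V3 3.5 mm, QTc 390 ms, QRSV2 19 mm, RAV4 11 mm):
print(round(subtle_lad_3var(3.5, 390, 11), 2))      # 23.61
print(round(subtle_lad_4var(3.5, 390, 19, 11), 2))  # 18.18
```

The same functions give 23.36 and 17.72 for the second ECG, matching the values above.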

The medic used the 3-variable formula and obtained values of 23.4 and 23.5 (positive).

He activated the cath lab from the field.

The cath team was ready when he arrived less than 5 minutes later.

Before going to cath, the patient had this ECG in the ED:
Not much change.
STE60V3 = 3.5
QTc = 450
QRSV2 = 19.5
RAV4 = 10
3-variable formula = 27.46 (very high)
4-variable formula = 21.49 (very high)
This last ECG obtains a much higher value because the computerized QTc measurement, at 450 ms, is much longer.  Even if we doubt the last QT measurement by the computer, and assume that it is much shorter, with a QTc value of 400, both formula values remain very high.

The MDs in the department did not think it was an MI.

The patient went to cath within 5 minutes and had a 100% LAD thrombotic occlusion.

This was his ECG after stenting:
Now the EKG is normal (and the computer would agree!)
The ST elevation and tall T-waves are all resolved.
This would be how the patient's baseline ECG would have looked, if one had been available.
This reperfusion was so fast that the peak troponin was only 0.3 ng/mL.  There was no residual wall motion abnormality.  Symptom onset to balloon time was less than 30 minutes.

Learning Points
1. This shows how any individual patient's normal ST segments may have zero ST elevation.
2. Other individuals may have quite a bit of normal ST elevation.

Therefore, if there is any ST elevation, it is up to you (not the computer!) to determine if it is normal or ischemic.

The formulas are very helpful in this regard.

Again, the computer called the ECG "normal."  

I have argued that physicians should view these ECGs even if the computer interprets them as completely normal.  This is because the computer is so bad at finding subtle occlusions.  Physicians have argued that they don't have the time and that they will be no better at identifying these subtle cases than the computer will be.

Well, a doctor might not see it, but a paramedic did.  Kudos!!

That is because the paramedic learned.  

I am sure that MDs can learn too!

Saturday, November 25, 2017

What is lurking underneath this new right bundle branch block?

Written by Pendell Meyers, edits by Smith:


A 72 year old female with hypertension and COPD presented with sudden shortness of breath and chest pain.

Here is her triage ECG (the baseline is not available but reportedly "normal"):
What is your interpretation?

There is sinus rhythm with PACs and PVCs.

More important, there is right bundle branch block with hyperacute concordant T-waves in V3-V6, as well as hyperacute T-waves in leads III and aVF with reciprocal ST depression in aVL. This distribution is classic for a type III "wraparound" LAD occlusion.

As a general rule, right bundle branch block should usually not have any ST elevation anywhere on the ECG, and the leads with large R' waves such as leads V1 and V2 should have either baseline J-points or some slight ST depression, with negative T-waves.

The rhythm is interesting but not particularly relevant. After the PVC, there is return of sinus rhythm for one beat, then a PAC, then three sinus beats, then a pause followed by a low atrial escape beat, etc. See Ken Grauer's excellent discussion on the rhythm in the comments below.

For comparison, here is an example of RBBB without any superimposed ischemic changes:

Notice the normal ST depression in V1-V3, which is discordant to (in the opposite direction of) the last part of the QRS, the R'-wave.

Initial troponin was negative. She was not taken immediately for cardiac cath, as these findings were not appreciated in the setting of RBBB. She was admitted to the cardiology unit. The second troponin I returned elevated at 6.4, and for some reason there were no more troponins measured after that.

Repeat ECG the next morning:

Resolution of findings above, as well as new deep T-wave inversions in V3-V6 and inferior leads, consistent with reperfusion.

On day 3 of hospitalization she underwent coronary angiography, revealing a 95% lesion in the mid-LAD which was stented.  One can say with full confidence that it was completely occluded at the time of the presentation ECG. Peak troponin, echocardiographic findings, and long term outcome are unknown.

Learning Points:

1. RBBB should usually not have any ST elevation, and will usually have some ST depression and T-wave inversion in the right precordial leads.

2. The combination of findings consistent with acute coronary occlusion in the anterior and inferior leads is likely due to a large "wraparound" LAD occlusion, should not be confused with the "diffuse" ST elevation of pericarditis, and will usually show reciprocal ST depression in aVL.

3. The rules of appropriate concordance apply to all forms of abnormal ventricular conduction.  In the case of RBBB, which has an up-down-up complex in right precordial leads V1-V3, it is the last part of the QRS which determines the expected discordant ST segment (the last part is a positive R'-wave, and therefore discordance will manifest as ST depression in V1-V3).

4. T-waves are just as important or more so than the ST segments when looking for acute coronary occlusion.

5. A new right bundle branch block in a sick patient with chest pain and/or shortness of breath is a worrisome finding concerning for LAD occlusion or significant pulmonary embolism.

Friday, November 24, 2017

QT Correction Formulas Compared to The Rule of Thumb ("Half the RR")

This article discusses correction of the QT interval for rate.
I've been working on this a long time, thought about submitting it to a journal, but decided it gets more readers on this blog.

Specifically, we discuss the use of the "Half the RR" rule of thumb, a visual estimation method which declares the QT to be prolonged if the QT extends more than half the RR interval.  We compare this method with the four most common QT correction formulas.

This is a detailed post, with interesting graphics produced and conceived by Ari B. Friedman, MD, PhD, now an EM resident at Beth Israel.  The article is written by Dr. Smith and Dr. Friedman.  Daniel Lee (HCMC 1st year resident) also did a bit of valuable editing.

Figures were produced by Dr. Friedman. 

Brief Summary of this post:

The QTc rule of thumb is this: If the QT interval is less than half the RR interval, then the corrected QT (QTc) is not prolonged.

We compare this rule to the 4 common formulas for correcting the QT.

However, none of the formulas have proven to be definitively better than another and none are well correlated with outcomes or events!  

1) At heart rates between 62 and 66 bpm, the rule of thumb is accurate.  This makes sense, as at a heart rate of 60, the corrected QT is the same as the raw QT: a prolonged QT is around 500 ms, and at a rate of 60 bpm, the RR interval is 1000 ms (1 second).

2) At heart rates above 66 beats a minute, the rule of thumb is conservative; it overestimates the QT.  In other words, if the QT is less than half the RR (QT not prolonged) and the heart rate is above about 60 beats a minute, you can confidently say the QT is not prolonged.

3) At heart rates below 60, far more caution is due.  The rule of thumb is less accurate, and the risk is higher because a long QT in the presence of bradycardia ("pause dependent" Torsades) predisposes to Torsades. 

4) Computer algorithms are not accurate at QT measurement, especially if prolonged.  Do not trust the computer if the QT looks at all prolonged. Measure it manually.

5) The "Half the RR" rule of thumb correction is linear: the QT is considered long if it is greater than 0.50 x the RR interval, a linear relationship.  The QTc by this method = the raw QT divided by the RR interval (in seconds) and is long if the result is greater than 500 ms.

6) Use a different rule of thumb for bradycardia:  Manually approximate both the QT and the RR interval.  If the QT interval is less than 40% of the RR interval at 40 bpm, then it is not prolonged.   If the QT is longer than 40% of the RR, then do a formal measurement and correction.

7) This last point may be generalized to all correction methods: it may be hazardous to correct the QT when the heart rate is below 60.  Use correction primarily for heart rates above 60!
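The two rules of thumb above (half the RR at normal rates, a stricter 40% screen in bradycardia) can be combined into a small sketch; the function name and return strings are my own:

```python
def qt_rule_of_thumb(qt_ms, rr_ms):
    """Visual QT screen per the rules of thumb above.
    At 60 bpm or faster: flag the QT only if it exceeds half the RR.
    Below 60 bpm the half-RR rule is unreliable, so screen at 40% of the RR
    and fall back to a formal measurement and correction when positive."""
    hr = 60000 / rr_ms  # heart rate in bpm from RR in milliseconds
    if hr >= 60:
        return "possibly prolonged" if qt_ms > 0.5 * rr_ms else "not prolonged"
    # bradycardia: stricter 40% screen
    return "measure formally" if qt_ms > 0.4 * rr_ms else "not prolonged"

print(qt_rule_of_thumb(400, 1000))  # HR 60, QT under half the RR -> "not prolonged"
print(qt_rule_of_thumb(500, 1200))  # HR 50, QT over 40% of RR -> "measure formally"
```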

Example ECG:
The heart rate is 167; the RR interval is 0.36 seconds (360 ms)
It is not easy to discern the end of the T-wave, as it is distorted by the P-wave.
But the QT is definitely greater than half the RR interval.
Thus, the rule of thumb says it is prolonged, but it overestimates the QT at high heart rates.

How about by measurement?

I (Smith) measure the QT at 320 ms.
The Bazett-corrected QTc is the QT divided by the square root of the RR interval; the square root of 0.36 is 0.6.
320 divided by 0.6 = 533 ms (dangerously prolonged)
However, Bazett is known to produce falsely prolonged corrected QT at high heart rates.

So is it really prolonged?  How do we know?

Answer: you must treat the patient's underlying condition causing sinus tachycardia, and repeat the ECG at the lower heart rate.

Essential Reading:
Full text link:
AHA/ACCF/HRS Recommendations for the Standardization and Interpretation of the Electrocardiogram, Part IV: The ST Segment, T and U Waves, and the QT Interval (full text link).
----The last sections of this article are on the QT interval and are essential reading:

Here are some pearls from this article before we get started:
1. The biggest issues with QT interval are:
    a) Recognizing the onset of the QRS and end of the T-wave
    b) Determining the appropriate leads to measure
    c) Adjusting the QT for increases in QRS duration, gender, and heart rate
                 (For LBBB, Dodd and Smith propose using the T-peak to T-end interval, with a value > 85-100 ms being prolonged)
2. The longest QT of the 12 leads should be used.  It is usually V2 or V3.
3. Do not measure the QT in any lead that is obscured, especially by a U-wave.  Leads aVR and aVL are least likely to manifest U-waves.
4. It is essential to visually validate QT intervals reported by the computer algorithm, as it is frequently incorrect, especially when prolonged.  See this review of computer interpreted ECGs.
5. Bazett and Fridericia corrections may be substantially in error, especially at high heart rates.
6. In particular, Bazett correction may produce falsely prolonged corrected QT at high heart rates
7. More recently introduced linear regression functions of the R-R interval are more accurate
8. Rate correction should not be used when RR interval has large variability, as in atrial fibrillation.
9. Prolonged QT is generally thought to be 450 ms for men, 460 ms for women.
10. Dangerously prolonged QT is 480, especially greater than 500 ms.
11. Abnormally short is less than 390 ms.  They do not say which correction formula should be used when assessing these abnormally long or short QT intervals.

In that article, they do not say what a dangerously short QT is (e.g. short QT syndrome, SQTS).  However, according to these diagnostic criteria (JACC 2011; 57(7):802), it is a Bazett-corrected QT of less than 330-370 ms, depending on other diagnostic criteria, including 1) h/o cardiac arrest, 2) sudden syncope, 3) family hx of sudden unexplained arrest at age less than 40, 4) family hx of SQTS.

Some other points:

1. Different automated computer algorithms (different manufacturers) give different results
2. Automated algorithms often have their own proprietary methods of correction which they will not reveal.
3. Modern digital ECG machines compute QT intervals using temporally aligned superimposed leads and choose the longest of the intervals, meaning the computed QT will be longer than all but one single lead measurement.  Values currently regarded as normal were established at a time when older machines measured QT intervals via individual leads, and these values likely do not correlate with modern computed QT intervals.
4. Even when correctly measured by the computer, in half of cases, the diagnostic statement does not represent the long QT.

Two more papers before we proceed: 

1. This is an outstanding reference on nearly all aspects of the QT interval, from the difficulty and inconsistency of measuring it, to the pathophysiological substrate, and much more.  (Free full text pdf): Drew BJ, et al. Prevention of torsade de pointes in hospital settings: a scientific statement from the American Heart Association and the American College of Cardiology Foundation. J Am Coll Cardiol 2010;55:934-47.

2. Atrial Fibrillation: These authors compared Bazett, Fridericia, and Framingham QT measurements in 54 patients with atrial fibrillation.  They measured QTc while patients were in Afib and then again after conversion, when they were in sinus rhythm.  They measured the lead with the longest QT, and measured all complexes during a 10 second time period (one 12-lead ECG), then averaged them.  The RR interval was also a 10 second average.  They found that Bazett's formula overestimated QTc in Afib while Fridericia's formula was the most accurate.  They do not answer the vexing question of how to quickly measure and correct the QT; the traditional answer is to correct the QT interval by using the preceding RR interval.  Musat DL et al.  Correlation of QT Interval Correction Methods During Atrial Fibrillation and Sinus Rhythm.  Am J Cardiol 112(9):1379-1383; Nov 2013.

Background and Formulas:

QT interval prolongation is widely used as an important risk factor for progression to torsades de pointes (TdP) and possible subsequent death. Unfortunately, the topic is very complex.  The QT interval normally varies with the RR interval (RR) even in healthy individuals, and hence it varies inversely with the heart rate (HR): HR = 60/RR, with HR measured in beats per minute, and the RR measured in seconds [e.g. use 1.5 sec for a 1500 millisecond (ms) RR interval].

One cannot, then, use a single number as a cutoff for a prolonged QT.  Instead, one must use a method that either changes the "normal" cutoff as the heart rate changes, or calculates a "corrected" QT interval (QTc) for each heart rate and uses a single cutoff for the corrected number.  The standard approach is the latter.

Several formulas have been developed.  In all of them, the QT interval is corrected to a shorter duration than the measured value at low heart rates (less than 60), and to a longer duration than the measured value at high heart rates (greater than 60).

To make things more confusing, bradycardia is a major contributor to TdP, especially in acquired long QT (due to drugs or electrolytes), and TdP in these situations is thus frequently called "pause-dependent."  The long RR interval of bradycardia lengthens the QT interval, providing a greater time interval for an R-on-T PVC to initiate TdP.   Accordingly, bradycardia alone is a significant predictor of TdP, and yet bradycardia reduces the corrected QTc.  As far as we can tell, it is unknown whether, for any given raw QT interval, bradycardia has more of a good prognostic effect in reducing the QTc, or an adverse effect in promoting TdP.  I (Smith) suspect the latter.

Consequently, even more important than the calculation of the corrected QT at slow heart rates is the calculation of a long QTc at normal or high heart rates, such that if the patient becomes bradycardic, then that patient is then at particularly high risk.   In addition, the QT can be volatile especially in ill hospitalized patients and some recommend continuous monitoring of the QTc for those at high risk.

It has been shown in this study from NEJM 1998 and this from Circulation 1991 that in congenital long QT, for every 10 ms increase in the Bazett-corrected QTc, there is an approximate 5% increased risk of torsades de pointes (TdP) in the long term.  The risk is greatest at a Bazett-corrected QTc greater than 500 ms.  We do not know of any similar data on patients with acquired long QT, nor of similar data with other correction formulas.  And we do not know of data on short-term mortality or risk of TdP.

Several formulas (defined below) and nomograms are available to calculate the QTc. None is considered definitive due to the paucity of data (and conflicting data) relating QTc to outcomes.  Bazett’s formula, which divides the QT by the square root of the preceding RR interval, is probably the most commonly used.

With the formulas:

--At a heart rate greater than 60 (RR less than 1 second), the QT will be lengthened by the formula so that the QTc is longer than the QT. 
--At heart rates less than 60 (RR greater than 1 second), the QT will be shortened by the formula, so that the QTc is shorter than the QT.  

Bazett's formula is commonly regarded as over-correcting (QTc too long) at fast heart rates and under-correcting (QTc again too long) at low heart rates.  That is, when the QTc is greater than the QT (heart rate greater than 60), Bazett lengthens it too much; when the QTc is less than the QT, it does not shorten it enough.

Thus, over-correction at rates above 60 will result in a QTc that is longer than it should be, over-estimating the risk.  

Under-correction at rates below 60 will result in a QTc that is also longer than it should be, again over-estimating the risk.  But since risk is higher in bradycardia (remember that TdP is pause-dependent), this overestimation is probably a good thing.

Thus, the conventional wisdom is that Bazett correction over-estimates risk compared to the risk of a given raw QT at a heart rate of 60.

The Fridericia correction would seem to improve Bazett's over-correction at high HR by dividing by the cube root.  At RR intervals less than 1 (HR over 60), the cube root of the RR is a larger number than the square root, thus a larger denominator, thus a shorter QTc, thus the correction does not lengthen the QTc as much (does not over-correct as much) as Bazett.  Thus, at heart rates above 60, Fridericia does not overestimate risk as much as Bazett.   

However, at a HR less than 60, since the RR is greater than 1, the cube root is a smaller number than the square root.  Thus, the cube root of RR is a smaller denominator and so QTc is a larger number and so it will shorten the QTc less than Bazett; i.e., it will under-correct even more than with Bazett. Thus, at low heart rates it will overestimate risk more than Bazett. 

Each of the other correction formulas has its own associated errors.

Since the heart rate is most commonly above 60 in both normal and sick individuals, it would seem that the Bazett correction is too conservative and will identify too many patients as having a prolonged QT.

Other Research:

In this study by Hasanien et al., the optimum QT correction formula for patients with chest pain was found to be unique for each individual; it is a correction factor that can be calculated real-time for each patient by taking multiple measurements over a range of heart rates.  

This study by Malik et al., this time in healthy subjects, comes to a similar conclusion: the relation between QT and RR intervals is highly individual among healthy subjects.  In the study, ambulatory 12-lead ECGs were recorded every 2 minutes for 24 hours in 50 healthy volunteers, and the optimal QTc correction varied among individuals, from dividing by anywhere from the 4th root of the RR interval to the square root.

Malik et al. summarize their data very nicely: "The QT/RR relation exhibits a very substantial intersubject variability in healthy volunteers. The hypothesis underlying each prospective heart rate correction formula that a “physiological” QT/RR relation exists that can be mathematically described and applied to all people is incorrect. Any general heart rate correction formula can be used only for very approximate clinical assessment of the QTc interval over a narrow window of resting heart rates. For detailed precise studies of the QTc interval (for example, drug induced QT interval prolongation), the individual QT/RR relation has to be taken into account."

--Malik M et al.  Relation between QT and RR intervals is highly individual among healthy subjects: implications for heart rate correction of the QT interval.  Heart 2002;87:220–228

This 2017 article by Vandenberk B et al. in J of the American Heart Association claims to show that the Bazett correction is the worst: Which QT Correction Formulae to Use for QT Monitoring?
----They looked at 5 correction formulas in 6609 hospitalized patients and 200 normals.  [The 5th formula was the Rautaharju, but as far as we can tell, that should only be used for QT measurement in ventricular conduction delays.]  They found that the correction formula whose QTc/RR slope was closest to zero (which indicates perfect correction) was the Fridericia.  The Bazett correction slope of QTc/RR was the largest (i.e., worst) of the 5 corrections at -0.071 (indicating that the correction factor does NOT completely correct for heart rate).  The upper limit of normal for Bazett was calculated at 472 ms for men and 482 for women.   For Fridericia and Framingham, the ULNs were equal at 448 for men and 468 for women, closer to the generally accepted normals (but significantly shorter than the QTc's that we get very worried about in emergency medicine). Using the Bazett formula would have higher sensitivity for prolonged QT but worse specificity than all the other formulas.


Drew BJ, et al. Prevention of torsade de pointes in hospital settings: a scientific statement from the American Heart Association and the American College of Cardiology Foundation. J Am Coll Cardiol 2010;55:934 – 47.

This article supports the data in the paper by Vandenberk et al. above.  The QTc is often considered prolonged when it is above 440 ms or 450 ms. However, when calculated by Bazett formula, such values are found in a substantial portion (10%-20%) of the population who are not at any known risk of TdP.  Therefore, abnormally prolonged QTc is defined as above the 99th percentile for women (480 ms) and men (470 ms) [Drew BJ et al. JACC 55(9):934-947; 2010].  A QTc greater than 500 is considered highly abnormal and is associated with a significantly increased risk of death and TdP.  (We will use 480 ms for our analysis below.)

This article by Luo et al. supports the previous one asserting that 30% of ECGs' QT intervals were prolonged by the Bazett correction.


Other studies have found that the optimal correction is linear: the QT increases in a linear fashion with the RR interval, and the correction should be inverse to the raw RR interval, not to its square or cube root.  That is, it should just be the raw QT divided by the RR (in seconds), not divided by the square or cube root.  The "Half the RR" rule of thumb correction is linear: the QT is considered long if it is greater than 0.50 x the RR interval, a linear relationship.  The QTc by this method = the raw QT divided by the RR interval and is long if the result is greater than 500 ms.

This study by Patel et al. suggests that the Hodges formula (a linear correction formula) was the only one to consistently predict cardiovascular risk and mortality, while the Bazett formula was not able to predict any worse outcomes due to its overcorrection and resultant low specificity for prolonged QT. Patel PJ et al.  Optimal QT interval correction formula in sinus tachycardia for identifying cardiovascular and mortality risk: Findings from the Penn Atrial Fibrillation Free study.  Heart Rhythm 2016 Feb; 13(2):527-35.
The half-the-RR rule of thumb, as mentioned above, is linear: one study showed poor sensitivity and specificity of the rule if the heart rate was below 60, but that it was 100% sensitive (though only about 50% specific) at heart rates above 60.  You will see this played out in the graphics below!

Berling I and Isbister GK. The half the RR Rule:
A Poor Rule of Thumb and Not a Risk Assessment Tool for QT Interval.  Acad Emerg Med 2015 Oct; 22(10):1139-44. 

The formulas are given below

[the last (Hodges) is expressed in two equivalent formulas, one with the heart rate, as is typically seen, and one with the RR interval as the others are presented].

RR is always the PRECEDING RR interval,
meaning that the QT interval of one complex is modified by the RR interval of the preceding complex.

RR interval is in seconds, not milliseconds!

·       Bazett’s formula: QTc = QT / √RR  (QT divided by square root of RR interval)

·       Fridericia's formula: QTc = QT / ∛RR  (QT divided by cube root of RR interval)

·       Framingham formula: QTc = QT + 0.154*(1 − RR)

·       Hodges formula: QTc = QT + 1.75*(HR−60)    
                                         = QT + 105/RR − 105
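The four formulas above translate directly into code. As a sketch (with QT in ms and RR in seconds, the Framingham constant 0.154 s becomes 154 ms), applied to the tachycardic example earlier in the post (QT 320 ms at RR 0.36 s):

```python
from math import sqrt

def bazett(qt_ms, rr_s):
    return qt_ms / sqrt(rr_s)            # QT divided by square root of RR

def fridericia(qt_ms, rr_s):
    return qt_ms / rr_s ** (1 / 3)       # QT divided by cube root of RR

def framingham(qt_ms, rr_s):
    return qt_ms + 154 * (1 - rr_s)      # linear; 0.154 s expressed as 154 ms

def hodges(qt_ms, rr_s):
    return qt_ms + 1.75 * (60 / rr_s - 60)  # linear in heart rate (HR = 60/RR)

# QT 320 ms, RR 0.36 s (HR ~167), as in the example above:
for f in (bazett, fridericia, framingham, hodges):
    print(f.__name__, round(f(320, 0.36)))
```

This prints roughly Bazett 533 (matching the hand calculation above), Fridericia 450, Framingham 419, and Hodges 507, illustrating how much the corrections diverge at high heart rates and how far Bazett sits above the others.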

Finally, a relatively new nomogram has been published and it may be the best way to determine, dichotomously, whether the QT is prolonged or not.  Here it is:

If the combination of QT and Heart rate places your patient above the line, then the QT is prolonged.
If the combination places the patient below the line, then it is not prolonged.

Notice this nomogram, at heart rates under 60, does not result in the QTc being shorter than the QT!
This again points out the danger in correcting at slow heart rates!
How well did the nomogram work in this derivation study?

The sensitivity and specificity of the QT nomogram were 96.9% (95%CI 93.9-99.9) and 98.7% (95%CI 96.8-100), respectively.  For Bazett QTc = 440 ms, sensitivity and specificity were 98.5% (95%CI 96.3-100) and 66.7% (95%CI 58.6-74.7), respectively, whereas for Bazett QTc = 500 ms they were 93.8% (95%CI 89.6-98.0) and 97.2% (95%CI 94.3-100), respectively.

The rule-of-thumb

With that background in mind, we can now turn to the commonly-used rule-of-thumb.  The rule says that to estimate whether the QT is prolonged, one must only determine whether the QT interval occupies more than half the RR interval.

But does the rule work, and under what circumstances should we worry that it gives us an estimate that is too short or too long?

Ideally we would investigate by comparing the rule-of-thumb to a gold standard formula or nomogram that was carefully calibrated against a large database with mortality as the outcome.

Although the existing rules were not derived with mortality outcomes, the Bazett correction was used to correlate long QTc with outcomes in the studies cited above.

Instead, in this blog post we will compare the rule-of-thumb to each of the four formulas, effectively substituting usual care for the unattainable gold-standard of outcomes.

Comparing the rule-of-thumb to the formulas

To assess agreement between our rules, we examine QT intervals from 300 ms to 1000 ms, and RR intervals from 350 ms to 1500 ms.  We are not using any human or patient data; we are only comparing the rule-of-thumb to various formulas, which may or may not be correspondingly validated by health data (the risk of sudden cardiac death, for instance).
Figure 1.  Here we can compare the different formulas, using a cutoff of 480 ms to determine when the QTc is prolonged. The formula named “Half” is the rule-of-thumb that the QT is prolonged if the QT interval extends more than half the RR interval.

Already, we see divergence between the rules.  One can see here that the rule-of-thumb indicates a prolonged QTc for any QT of 300 ms or longer if the RR interval is less than 0.6 (HR greater than 100).  This contrasts with the formulas, which can have a QTc within normal limits in such cases.

Importantly, compared with the formulas at high heart rates (low RR), the half-the-RR rule of thumb tends to label too many QTs as abnormal. At low heart rates (high RR), the rule of thumb tends to label too many as normal.

Here is another way of looking at it:

We can make the same graph, but with QT shown as a percentage of RR, and plotting either RR or HR on the x-axis.

Figure 2.  Here we plot RR on the x-axis:

You can also see clearly that in all the other formulas: 1) There are many situations in which the QT is more than half the RR, yet the QTc is normal according to the formula.  2) Conversely, there are many situations in which the QT is less than half the RR, yet it is prolonged according to the formula.

Figure 3.  Here we plot HR (instead of RR) on the x-axis:
Putting the percentage on the Y axis demonstrates the impossibility of a perfect formula based on a percentage: if we move that horizontal line up (by increasing the % of the RR that is considered prolonged up from 50%), the test will be more sensitive for a prolonged QT (fewer false negatives) at the expense of more false positives (classifying more QT-RR pairs as prolonged when they are within normal limits according to the formulas).  For normal heart rates, from 60-90, many QT intervals which are prolonged using the formulas are NOT prolonged by the rule of thumb.

Without knowing health outcomes, it is impossible to accurately weight the misses vs. the over-calls; one cannot determine what the right cutoff should be.  A receiver operating characteristic graph over all the different percentage cutoffs might help more formally make those tradeoffs.  Also, note that it is possible to derive a formula that indicates when each formula disagrees with the rule-of-thumb, but doing it graphically is more informative, so that is the approach taken here.

Rules-of-thumb (RoT) for when the rule-of-thumb is suspect

Figure 4.  There does seem to be a region where the rule-of-thumb performs particularly well. Where is the transition point? 
To examine where they agree and disagree, we’ll re-plot in three columns:
1.   The left column shows where the rule-of-thumb is prolonged.
2.   The middle column shows where each formula is prolonged.
3.   The right column shows the areas where they disagree.

We will plot the four formulas, with HR on the X-axis, and QT in percent of RR terms on the Y-axis.   Click on the image to see it full size.

There’s a narrow band of heart rates in which it’s impossible to go wrong. Above or below that, agreement has its trouble spots.

Below 62 bpm, the rule-of-thumb fails to flag QT intervals that all four formulas call prolonged.  This is in spite of the fact that a long QT is most dangerous in bradycardia.   

At a heart rate of 40 bpm or less, for instance, all four formulas declare a QT lasting just 40% of the RR interval to be prolonged. By contrast, above 66 bpm, the rule-of-thumb is overly conservative: at 96 bpm, all four formulas consider a QT stretching to 60% of the RR interval to be normal.
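Those transition points can be recovered numerically by bisecting for the heart rate at which each formula's longest normal QT equals exactly half the RR. A sketch under assumptions of my own (a single 480 ms cutoff and the standard coefficients; the 62-66 bpm band in the figures presumably reflects the exact cutoffs used there):

```python
import math

QTC_MAX = 0.480  # assumed prolongation cutoff, in seconds

# Longest normal QT (seconds) under each formula at a given RR (seconds).
LIMITS = {
    "Bazett":     lambda rr: QTC_MAX * math.sqrt(rr),
    "Fridericia": lambda rr: QTC_MAX * rr ** (1 / 3),
    "Framingham": lambda rr: QTC_MAX - 0.154 * (1 - rr),
    "Hodges":     lambda rr: QTC_MAX - 0.00175 * (60.0 / rr - 60),
}

def crossover_hr(limit, lo=30.0, hi=150.0):
    """Bisect for the HR where the formula's QT limit equals RR / 2."""
    f = lambda hr: limit(60.0 / hr) - 0.5 * (60.0 / hr)
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

for name, limit in LIMITS.items():
    print(f"{name}: agrees with half-the-RR near {crossover_hr(limit):.0f} bpm")
```

Under these assumptions, all four crossovers land in the low-to-mid 60s bpm, matching the narrow band of agreement in the figure.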


We might think about two reasons to be concerned with prolonged QT intervals in the ED.  The first is to take appropriate measures to prevent the acute risk of progression to TdP; the second is to avoid discharging someone who could be identified as being at risk of sudden death.

As a screening tool at normal heart rates, there is good agreement between the major formulas and the rule of thumb.  Since the rule-of-thumb is conservative at higher heart rates, and the risk of TdP is also lower at higher heart rates, a lack of QTc prolongation by the rule-of-thumb should be reassuring.

On the other hand, in bradycardia the rule-of-thumb is prone to false negatives and probably should not be used at these lower heart rates.  Interestingly, Bazett's formula, which is the most commonly used in online calculators and in EKG machines, is the least conservative at these low heart rates (i.e., it is the formula most likely to show a QTc less than 480 ms at low heart rates, in spite of its reputation for undercorrection).  Thus, a borderline Bazett QTc in the context of bradycardia may be less reassuring than one from the other formulas.
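To see how much less conservative Bazett can be in bradycardia, consider a hypothetical tracing (the numbers are invented for illustration): a raw QT of 550 ms at 40 bpm. A sketch using the standard coefficients:

```python
import math

def qtc_ms(qt_ms, hr_bpm):
    """QTc in ms by four common correction formulas."""
    rr = 60.0 / hr_bpm
    return {
        "Bazett":     qt_ms / math.sqrt(rr),
        "Fridericia": qt_ms / rr ** (1 / 3),
        "Framingham": qt_ms + 154 * (1 - rr),
        "Hodges":     qt_ms + 1.75 * (hr_bpm - 60),
    }

# Raw QT 550 ms at HR 40 (RR 1.5 s): Bazett divides by sqrt(1.5) and so
# shrinks the QT the most, landing well below the other three formulas.
for name, v in qtc_ms(550, 40).items():
    print(f"{name}: {v:.0f} ms")
```

With a 480 ms cutoff, Bazett (about 449 ms) reads as reassuring here while Fridericia (about 480 ms) and Hodges (515 ms) do not, and Framingham (about 473 ms) is borderline.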

Additionally, computer algorithms are not accurate at measuring the raw QT when it is prolonged (Heart 76:422-426), and the difficulties in its measurement are well outlined in the paper by Drew et al.  Here is another study showing how insensitive computer algorithms are for a long QT (Pediatrics 2001;108:8-12).

When the QT appears to be long to the naked eye, it must be hand measured, regardless of what the computer measures.

Sensitivity/Specificity and the ROC curve
Another way to examine this issue is to approach it as though we had to choose our own percentage from scratch. Rather than starting with 50% (half the RR) as the rule-of-thumb, we will calculate agreement between the formulas and a whole spectrum of rules-of-thumb, each with a different percentage cutoff.

Note that these measures depend on the population values, and since we have arbitrarily generated a population of values, this analysis is inherently problematic. Still, absent data on how prevalent RR and QT pairs are in the ED, it’s hard to do better.

Plotting the Receiver Operating Characteristic (ROC) curve against each formula as the gold standard, and labeling the cutoffs (i.e., 50 represents the half-the-RR rule-of-thumb), we see that half the RR was not an unreasonable choice compared to 40% or 60% of the RR.  It performs particularly well when evaluated against the Bazett formula; unfortunately, by many measures (see literature above), the Bazett formula may be the least accurate in identifying QT risk.
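A minimal, self-contained version of that comparison can be sketched as follows. The uniform RR-QT grid is arbitrary (as the caveat above notes, any synthetic population is problematic), and Bazett with an assumed 480 ms cutoff serves as the gold standard:

```python
import math

def bazett_prolonged(qt, rr, cutoff=0.480):
    """Gold standard: Bazett QTc (in seconds) at or above the cutoff."""
    return qt / math.sqrt(rr) >= cutoff

# Arbitrary synthetic population: RR 0.4-1.5 s, QT 0.20-0.60 s, uniform grid.
pairs = [(0.4 + 0.01 * i, 0.20 + 0.005 * j)
         for i in range(111) for j in range(81)]

results = {}
for pct in (0.40, 0.50, 0.60):
    tp = fp = tn = fn = 0
    for rr, qt in pairs:
        gold = bazett_prolonged(qt, rr)
        test = qt >= pct * rr           # the percentage rule-of-thumb
        if gold and test:
            tp += 1
        elif gold:
            fn += 1
        elif test:
            fp += 1
        else:
            tn += 1
    results[pct] = (tp / (tp + fn), tn / (tn + fp))
    print(f"cutoff {pct:.0%}: sensitivity {results[pct][0]:.2f}, "
          f"specificity {results[pct][1]:.2f}")
```

Raising the cutoff trades sensitivity for specificity, which is exactly the ROC tradeoff described above; swapping a different formula in as the gold standard shifts the curve.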

How about the corrected QT interval in bundle branch block?
An article by Dr. Ken Dodd and Dr. Stephen W. Smith published in the International Journal of Cardiology suggests that the T-peak-to-T-end (TpTe) interval is the best measure of a prolonged QT in bundle branch block.  They did not correct for rate in this article.  It is likely that this applies to other intraventricular conduction delays and also to RBBB.  A prolonged TpTe in BBB is one longer than 85-100 ms.
Dodd KW et al.  Among patients with left bundle branch block, T-wave peak to T-wave end time is prolonged in the presence of acute coronary occlusion.  International Journal of Cardiology June 2016;236:1-4.

This article has a nice discussion of drugs that cause QT prolongation

Emergency department approach to QTc prolongation

See these cases for examples of computer mismeasurement of the QT interval:

Chest Pain and a Very Abnormal ECG

Syncope and Bradycardia

Long QT: Do not trust the computerized QT interval when the QT is long

See this case of Polymorphic VT from acquired long QT

Recommended Resources