Pharmacological treatment of hypertrophic cardiomyopathy: current practice and novel perspectives


Hypertrophic cardiomyopathy (HCM) is entering a phase of intense translational research that holds promise for major advances in disease-specific pharmacological therapy. For over 50 years, however, HCM has largely remained an orphan disease, and patients are still treated with old drugs developed for other conditions. While judicious use of the available armamentarium may control the clinical manifestations of HCM in most patients, specific experience is required in challenging situations, including deciding when not to treat. The present review revisits the time-honoured therapies available for HCM, from a practical perspective reflecting real-world scenarios. Specific agents are presented with doses, titration strategies, pros and cons. Peculiar HCM dilemmas such as treatment of dynamic outflow obstruction, heart failure caused by end-stage progression, and prevention of atrial fibrillation and ventricular arrhythmias are assessed. In the near future, the field of HCM drug therapy will rapidly expand, based on ongoing efforts. Approaches such as myocardial metabolic modulation, late sodium current inhibition and allosteric myosin inhibition have moved from pre-clinical to clinical research, and reflect a surge of scientific as well as economic interest by academia and industry alike. These exciting developments, and their implications for future research, are discussed.


Hypertrophic cardiomyopathy (HCM) is the most common genetic heart disease, characterized by complex pathophysiology, heterogeneous morphology, and variable clinical manifestations over time.[1-4] Initially perceived as a rare and malignant disease, the spectrum of HCM has subsequently expanded, as new concepts have emerged regarding its true prevalence and clinical profile.[3, 5] The disease is known to range from the severe manifestations of early descriptions, to the absence of clinical and morphologic expression, including lack of left ventricular (LV) hypertrophy, in genotype-positive individuals.[6, 7] To date, none of the available pharmacological agents have been shown to modify disease development or outcome in HCM patients,[8, 9] with the possible exception of diltiazem in preventing LV remodelling.[10] The only interventions believed to have an impact on long-term prognosis are surgical myectomy and the implantable cardiac defibrillator (ICD).[8] Nevertheless, pharmacological therapy plays a very important role in restoring quality of life and reducing the risk of disease-related complications. The main goals of pharmacological therapy in HCM include control of symptoms and exercise limitation, abolition or reduction of dynamic intraventricular gradients, treatment of LV dysfunction and heart failure (HF), control of atrial fibrillation (AF) and ventricular arrhythmias, and prevention of cardioembolism.

More than 50 years after the first reported case of HCM, only about 2000 patients have been randomized in clinical trials evaluating the efficacy of drug treatments for HCM.[8] Therefore, international guidelines are largely based on the opinion of experts[11, 12] and the scientific community is still waiting for robust evidence and disease-specific treatment options. In this paper, we will review the indications of individual agents in the management of HCM in the context of its complex pathophysiology, provide practical therapeutic considerations in the light of the 2014 European Society of Cardiology (ESC) guidelines,[11] and address promising new approaches currently under scrutiny.

Clinical profiles and genesis of symptoms

Hypertrophic cardiomyopathy may be associated with a normal life expectancy and a very stable clinical course. About a third of patients develop HF, related to dynamic LV outflow tract obstruction (LVOTO). In addition, 5–15% show progression to either the restrictive or the dilated hypokinetic evolution of HCM, both of which may require evaluation for cardiac transplantation.[13, 14] Patients with HCM can remain asymptomatic for their entire lifetime.[11-13, 15] However, symptoms are common (Figure 1) and often insidious: for example, reduced exercise tolerance may not be subjectively perceived as abnormal when present from a very young age. Furthermore, quality of life may be subtly but significantly impaired by psychological issues, iatrogenic symptoms, and lifestyle restrictions.[11]

Figure 1.

Clinical scenarios and symptoms associated with hypertrophic cardiomyopathy (HCM) and representation of current pharmacological (yellow balloons) and non-pharmacological treatments (orange balloons). ACEi, angiotensin-converting enzyme inhibitors; AF, atrial fibrillation; ARBs, angiotensin receptor blockers; CRT, cardiac resynchronization therapy; HTx, heart transplantation; ICD, implantable cardioverter defibrillator; LVAD, left ventricular assist device; LVOTO, left ventricular outflow tract obstruction; MRAs, mineralocorticoid receptor antagonists; NOACs, novel oral anticoagulants; NSVT, non-sustained ventricular tachycardia; OAC, oral anticoagulation; SVT, sustained ventricular tachycardia.

Dyspnoea is common, and reflects high LV filling pressure, diastolic dysfunction or afterload mismatch with mitral regurgitation secondary to LVOTO.[11, 15] In addition, paroxysmal AF has been associated with impaired cardiac reserve, defined as reduced exercise capacity and maximal oxygen consumption.[16, 17] In patients with LVOTO, symptoms are typically variable over time, exacerbated by dehydration, meals, alcohol, use of vasodilators, and squatting. Less frequently, patients report nocturnal orthopnoea, a consequence of either congestive HF or bradyarrhythmias (AF with slow ventricular response or sinoatrial dysfunction).

Angina affects about 30% of symptomatic adults and is often atypical, occurring at rest and/or postprandially.[18] Angina is typically related to microvascular dysfunction and increased LV wall stress caused by LVOTO, in the absence of epicardial coronary lesions. When typical, angina should prompt specific investigations to exclude myocardial bridging of the left anterior descending artery in children and atherosclerotic coronary artery disease in older patients.

Pre-syncope or syncope has been reported in about 15–20% of patients, and is generally attributed to sustained ventricular arrhythmias or severe LVOTO, particularly when associated with hypovolaemia or occurring during or after effort.[19] However, neurally mediated syncope is common and should be excluded given its radically different prognostic value.[20] Bradyarrhythmias caused by sinoatrial or atrioventricular (AV) block are more common than generally perceived, and may cause syncope even in very young HCM patients.[21] Finally, in a small minority of patients, sudden cardiac death (SCD) may represent the first manifestation of disease.[22, 23]

Treatment of dynamic left ventricular outflow tract obstruction

Left ventricular outflow tract obstruction is a complex pathophysiological hallmark of HCM, caused by systolic anterior movement of anomalous mitral valve leaflets, contacting the septum at the subaortic level; less frequently, dynamic gradients may occur at the mid-ventricular level. Classically, LVOTO is defined by peak gradients exceeding 30 mmHg at rest or 50 mmHg during exercise, and is associated with unfavourable prognosis because of HF-related complications.[24] Moreover, a significant association with SCD has been reported.[24, 25] In the presence of severe, drug-refractory symptoms, LVOTO represents an indication for surgical myectomy or percutaneous alcohol septal ablation[26] (Class I, level of evidence (LOE) B in the 2014 ESC guidelines).[11] However, pharmacological treatment represents the first-line approach in all obstructive patients and, if properly used, may be effective in controlling gradients and symptoms for years (Figure 2).

Figure 2.

Stages of hypertrophic cardiomyopathy (HCM) and relevant medical treatments. Hatched black arrows reflect potential transitions from one stage to another. Approved medical interventions in specific stage of disease are in green. Drugs under investigation are in red. Pheno, phenotype; ACEi, angiotensin converting enzyme inhibitors; ARBs, angiotensin receptor blockers; MRAs, mineralocorticoid receptor antagonists.

Beta-blockers are the most widely used and effective agents.[11] The classic studies by Braunwald[27] on propranolol date back to the 1960s, showing impressive gradient and symptom reduction in the acute setting.[8, 28] Presently, atenolol (50–150 mg/day), nadolol (40–160 mg/day), bisoprolol (5–15 mg/day), and metoprolol (100–200 mg/day) are more frequently used (Tables 1 and 2). High doses may be required, and are usually well tolerated. However, side effects (mostly fatigue) should be carefully investigated in order to establish the optimal individual dose. At our institutions, nadolol is the drug of first choice, in consideration of its good tolerability, favourable electrophysiological profile, potent effect on gradient, and effective 24-h coverage.[29] In our experience, titrating beta-blockers for dynamic obstruction is relatively easy compared with titration in HF patients, because obstructive HCM is by definition hyperdynamic and characterized by strong adrenergic drive. A reasonable approach is to start with a quarter of the full dose (e.g. nadolol 20 mg once daily, atenolol 25 mg once daily, metoprolol 25 mg twice daily, or bisoprolol 2.5 mg once daily) and increase by the same amount every 1–2 weeks to the maximum tolerated dose (usually 80 mg for nadolol, 100 mg for atenolol, 100 mg twice daily for metoprolol, and 10 mg twice daily for bisoprolol; see Table 1). Beta-blockers may be titrated based on symptoms, heart rate response, and blood pressure. Non-dihydropyridine calcium channel blockers such as verapamil and diltiazem are considered less effective,[11] although they can be used in patients who are intolerant of, or have contraindications to, beta-blockers.
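The stepwise up-titration described above (start at a quarter of the full dose, increase by the same increment every 1–2 weeks) can be sketched as a simple schedule generator. This is purely illustrative, not dosing advice; the `titration_steps` helper is our own construct, and real-world titration is guided by symptoms, heart rate and blood pressure rather than a fixed schedule.

```python
# Illustrative sketch only -- not medical advice. Actual titration is
# individualized on symptoms, heart rate response, and blood pressure.

def titration_steps(start_mg, max_mg):
    """Return the successive daily doses implied by the text: begin at a
    quarter of the full dose and increase by that same increment every
    1-2 weeks until the maximum tolerated dose is reached."""
    doses = []
    dose = start_mg
    while dose <= max_mg:
        doses.append(dose)
        dose += start_mg
    return doses

# Examples taken from the text: nadolol 20 mg qd up to 80 mg/day,
# bisoprolol 2.5 mg qd up to 10 mg/day.
print(titration_steps(20, 80))    # [20, 40, 60, 80]
print(titration_steps(2.5, 10))   # [2.5, 5.0, 7.5, 10.0]
```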

Table 1. Commonly used drugs for hypertrophic cardiomyopathy (HCM) in adults
Abbreviations: AF, atrial fibrillation; AV, atrioventricular; bid, twice a day; HF, heart failure; HOCM, hypertrophic obstructive cardiomyopathy; HR, heart rate; ICD, implantable cardioverter defibrillator; INR, international normalized ratio; LA, left atrium; LVOTO, left ventricular outflow tract obstruction; NSVT, non-sustained ventricular tachycardia; PAF, paroxysmal atrial fibrillation; qd, once a day; SCD, sudden cardiac death; SVT, sustained ventricular tachycardia; tid, three times a day.

Beta-blockers
- Propranolol. Indication: reduction of angina and dyspnoea in patients with or without LVOTO; control of ventricular response in patients with AF; control of ventricular ectopic beats. Starting dose: 40 mg bid. Maximum dose: 80 mg tid. Notes: short half-life; drug of choice in newborns/infants. Side effects: chronotropic incompetence; decrease in AV conduction.
- Atenolol. Indication: same as propranolol. Starting dose: 25 mg qd. Maximum dose: 150 mg qd. Notes: drug of choice in HCM with hypertension. Side effects: hypotension; chronotropic incompetence.
- Nadolol. Indication: same as propranolol; reduction in the incidence of NSVT and SCD prevention, especially when associated with amiodarone. Starting dose: 40 mg qd. Maximum dose: 80 mg bid. Notes: effective for control of obstruction; once-daily dosing helps patient compliance. Side effects: chronotropic incompetence; decrease in AV conduction.
- Metoprolol. Indication: same as propranolol. Starting dose: 50 mg qd. Maximum dose: 100 mg bid. Notes: short half-life; usually not useful in HOCM. Side effects: chronotropic incompetence.
- Bisoprolol. Indication: treatment of systolic dysfunction and HF in end-stage patients. Starting dose: 1.25 mg qd. Maximum dose: 15 mg qd. Notes: usually not useful in HOCM. Side effects: chronotropic incompetence.

Calcium channel blockers
- Verapamil. Indication: HR reduction; control of ventricular rate in patients with AF; possible enhancement of diastolic filling. Starting dose: 40 mg bid. Maximum dose: 240 mg bid. Side effects: decrease in AV conduction; ankle oedema.
- Diltiazem. Indication: same as verapamil. Starting dose: 60 mg bid. Maximum dose: 180 mg bid. Side effects: decrease in AV conduction; ankle oedema.
- Felodipine. Indication: refractory angina in HCM. Starting dose: 5 mg qd. Notes: useful in severe microvascular dysfunction. Side effects: ankle oedema.

Antiarrhythmic agents
- Disopyramide. Indication: relief of dynamic obstruction, in association with beta-blockers. Starting dose: 125 mg bid. Maximum dose: 250 mg tid. Side effects: QTc prolongation; anticholinergic effects.
- Amiodarone. Indication: AF prevention; control of SVT/NSVT/ventricular ectopic beats; reduction of appropriate ICD interventions. Starting dose: 200 mg qd. Maximum dose: 200 mg qd. Notes: incomplete efficacy for SCD prevention despite reduction of NSVT. Side effects: QTc prolongation; thyroid dysfunction; pulmonary interstitial disease.
- Sotalol. Indication: AF prevention. Starting dose: 40 mg bid. Maximum dose: 80 mg tid.

Oral anticoagulants
- Vitamin K antagonists. Indication: prevention of embolism and ischaemic stroke in patients with paroxysmal or permanent AF. Dosing: titrated to an INR target of 2–3 (warfarin, acenocoumarol). Notes: useful after a first episode of PAF and/or when the LA is enlarged, and in end-stage HF.
- Direct thrombin and direct activated factor X inhibitors. Indication: prevention of embolism and ischaemic stroke in patients with paroxysmal or permanent AF. Dosing: recommended regimens based on the individual molecule and patient characteristics. Notes: lack of evidence of efficacy in HCM; guidelines suggest vitamin K antagonists as first choice.
Table 2. Pharmacological indications to treat symptoms associated with hypertrophic cardiomyopathy (HCM) based on the 2014 European Society of Cardiology (ESC) and 2011 American College of Cardiology Foundation (ACCF)/American Heart Association (AHA) guidelines
Abbreviations: ACEi, angiotensin-converting enzyme inhibitors; ARBs, angiotensin receptor blockers; LV, left ventricle; LVEF, left ventricular ejection fraction; LVOTO, left ventricular outflow tract obstruction; MRA, mineralocorticoid receptor antagonist; NOAC, new oral anticoagulants; NSVT, non-sustained ventricular tachycardia; VT, ventricular tachycardia. CHA2DS2-VASc components: C, congestive heart failure (or left ventricular systolic dysfunction); H, hypertension; A2, age ≥75 years; D, diabetes mellitus; S, prior stroke or TIA; V, vascular disease; A, age 65–74 years; Sc, sex category (i.e. female sex).

Dynamic left ventricular outflow tract obstruction
- Beta-blockers: ESC (2014) I B; ACCF/AHA (2011) I B.
- Verapamil/diltiazem (if beta-blockers contraindicated or not tolerated): ESC I B; ACCF/AHA IIa C (diltiazem).
- Disopyramide (in association with beta-blockers/verapamil): ESC I B (IIb C if alone); ACCF/AHA IIa B.
- Oral diuretics (congestive symptoms despite the use of beta-blocker and/or verapamil): ESC IIb C; ACCF/AHA IIb C.

Dyspnoea and angina in non-obstructive forms and progressive disease
- Beta-blockers: ESC IIa C; ACCF/AHA I B.
- Verapamil/diltiazem (if beta-blockers contraindicated or not tolerated): ESC IIa C; ACCF/AHA I B (verapamil only).
- Oral diuretics (dyspnoea despite the use of beta-blocker and/or verapamil): ESC IIa C; ACCF/AHA IIa C.
- ACEi or ARBs (LVEF <50%): ESC IIa C; ACCF/AHA I B.
- MRA (LVEF <50% and persisting symptoms despite other HF treatments): ESC IIa C.

Atrial fibrillation
Ventricular rate control:
- Beta-blockers (bisoprolol or carvedilol if LV systolic dysfunction): ESC I C; ACCF/AHA I C.
- Verapamil/diltiazem (only with preserved LVEF): ESC I C; ACCF/AHA I C.
- Digoxin (only with LVEF <50%, no LVOTO, and symptoms): ESC IIb C.
Prevention of cardioembolic events:
- Oral anticoagulant agents (independent of CHA2DS2-VASc score; also after a single episode): ESC I B; ACCF/AHA I C.
- NOAC: ESC I B (as second option); ACCF/AHA I C (as second option).
Prevention of recurrences:
- Amiodarone: ESC IIa B; ACCF/AHA IIa B.
- Sotalol: ESC IIb C.
- Disopyramide (in the presence of LVOTO, in association with beta-blockers or verapamil): ESC IIb C; ACCF/AHA IIa B (also without LVOTO).

Ventricular arrhythmias (reduction of the occurrence of NSVT; reduction of symptomatic VT or recurrent shocks with ICD)
- Amiodarone: I C.
- Beta-blockers: I C.

Disopyramide (a class IA antiarrhythmic agent) can be used in association with beta-blockers to improve symptoms and reduce intraventricular gradients in patients with LVOTO by virtue of its negative inotropic effect.[11] Whereas beta-blockers are most effective on provokable LVOTO, disopyramide is the most effective agent on resting obstruction.[29] The efficacy and safety of disopyramide have been demonstrated in a large multicentre registry.[30, 31] However, QT prolongation and its anticholinergic properties can limit its use and impair compliance. The latter include xerostomia, accommodation disturbances and, in men, lower urinary tract symptoms/prostatism, which may be treated with low doses of pyridostigmine.[32] Moreover, disopyramide tends to lose its efficacy over time; in our experience, it therefore often represents a pharmacological 'bridge' to invasive septal reduction therapies rather than a long-term strategy. An electrocardiogram (ECG) should be performed before initiation of the drug, to evaluate the corrected QT (QTc) interval. Sustained-release 250 mg tablets are the usual choice, at a starting dose of 125 mg twice daily. After the first week, QTc is re-evaluated before disopyramide is titrated to the full dose (250 mg twice daily). It is essential to inform patients of the need to avoid concomitant therapy with other drugs associated with QTc prolongation; conditions that favour dehydration or electrolyte imbalance should also be avoided. In patients who are intolerant of disopyramide, cibenzoline has been employed by Japanese authors, with beneficial effects on dynamic obstruction and LV diastolic function.[33] Serial evaluation of the resting outflow gradient is important during titration, although drug titration should proceed if tolerated even when systolic anterior movement is abolished, as obstruction is likely to recur on effort. Exercise echocardiography should be performed when the optimal regimen is reached, in order to exclude residual provokable gradients.
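Since the titration scheme above hinges on serial QTc surveillance, a minimal sketch of heart-rate correction may be helpful. Bazett's formula (QTc = QT/√RR) is the most widely used correction; the 500 ms flag below is a commonly cited threshold for marked prolongation and is our assumption for illustration, not a value taken from this review.

```python
import math

def bazett_qtc(qt_ms, rr_s):
    """Bazett's correction: QTc = QT / sqrt(RR), with QT in milliseconds
    and the RR interval in seconds."""
    return qt_ms / math.sqrt(rr_s)

def flag_prolonged(qtc_ms, threshold_ms=500):
    """Illustrative flag only; the 500 ms cut-off is an assumption, not a
    recommendation from this review. Clinical decisions also weigh the
    baseline QTc and concomitant QT-prolonging drugs."""
    return qtc_ms >= threshold_ms

# At 60 bpm (RR = 1.0 s) the corrected and measured QT coincide:
print(bazett_qtc(400, 1.0))   # 400.0
# The same measured QT at a faster rate (RR = 0.64 s) corrects to ~500 ms:
print(bazett_qtc(400, 0.64))
```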

In patients with LVOTO and concomitant disease requiring pharmacological treatment, caution is required with vasodilators and/or positive inotropic agents, because of the risk of exacerbating LVOTO; examples include phosphodiesterase type 5 inhibitors for the treatment of erectile dysfunction, methamphetamine for attention deficit hyperactivity disorder, and angiotensin-converting enzyme inhibitors (ACEi) or angiotensin receptor blockers (ARBs) for treatment of concomitant systemic hypertension. Nevertheless, these drugs often seem well tolerated.[9, 34, 35]

In asymptomatic patients with high resting or provokable gradients, one should always question the true lack of symptoms vs. lifestyle adaptation. These patients often have demonstrable exercise limitation, which is exacerbated by meals. Furthermore, severe gradients may be associated with haemodynamic instability and abnormal blood pressure response on effort. Based on these considerations, a course of pharmacological therapy aimed at controlling outflow obstruction may lead to subjective improvement even in 'asymptomatic' patients, and is likely to provide greater haemodynamic balance during daily activity. If well tolerated and effective, treatment may be continued based on patients' preferences.

Prophylaxis for endocarditis is advised only in patients with LVOTO, when invasive medical procedures are required.[36, 37] The risk is low, however, and neither the 2014 ESC guidelines nor the 2011 American College of Cardiology Foundation (ACCF)/American Heart Association (AHA) guidelines on HCM specifically recommend prophylaxis.[11, 12] These considerations should nevertheless be weighed against recent data suggesting an association between decreased use of antibiotic prophylaxis in general cardiac patients and an increased incidence of endocarditis, in both high- and low-risk individuals.[38]

Treatment of non-obstructive patients and progressive disease

In patients with preserved LV ejection fraction (LVEF), symptoms may be associated with diastolic dysfunction or microvascular ischaemia. However, the presence of severe refractory symptoms consistently elicited by exercise should raise suspicion of labile obstruction, and be specifically investigated. Dyspnoea and angina in non-obstructive patients can usually be controlled by beta-blockers,[11] employing the same agents used for LVOTO. Titration follows the patterns described above, although lower doses are generally required in view of the less pronounced adrenergic drive; symptomatic response and tolerability, rather than specific instrumental parameters, should guide titration. Diastolic indices, in particular, appear of little value in this setting. Notably, in the small subset with end-stage disease, whether owing to systolic dysfunction or restrictive evolution, the armamentarium and modalities of classic HF therapy are required; titration of beta-blockers should be more cautious in these patients because of their fragile haemodynamic equilibrium. Diltiazem or verapamil may be used as an alternative.[11] Verapamil has been the most widely applied therapy in HCM and, although a clear benefit in functional capacity has never been demonstrated, it may be effective in improving quality of life, likely because of its ability to slow heart rate and prolong LV filling time. The dose ranges from 60 mg twice daily to 240 mg twice daily. Similar effects are observed with diltiazem (dose range 120–360 mg/day) (Tables 1 and 2).

In HCM patients with angina or atypical chest pain, no drug has shown convincing efficacy in improving microvascular function. In clinical practice, symptomatic relief may be obtained by classic anti-ischaemic agents. The most effective are usually represented by AV blocking drugs such as beta-blockers and verapamil. This is consistent with an early observation by Cannon et al.[39] showing that high ventricular rates are associated with lactate release in the coronary sinus in HCM patients (i.e. with ischaemia). In our experience, ranolazine can also be very effective in controlling chest pain,[40] although individual response may be variable. Finally, long-acting nitrates and dihydropyridines may be employed as second-line agents, but are usually less effective unless there is associated coronary artery disease.[41]

Up to 10–15% of patients with HCM develop signs and symptoms of HF despite preserved systolic function, with worsening diastolic indices subtended by extensive myocardial fibrosis (Figures 2 and 3). Of these, about one-third develop frank LV restriction and/or systolic dysfunction, evolving to refractory HF and the so-called 'end-stage' of HCM.[13, 14] Standard HF therapy should be systematically introduced if LVEF <50%,[42] including ACEi, ARBs, beta-blockers, mineralocorticoid receptor antagonists, and loop diuretics (Class IIa, LOE C).[11] Considering that HCM is generally characterized by a small LV cavity and supranormal systolic function, even LVEF values in the low-normal range should be regarded with suspicion. Indeed, previous work from our groups based on cardiac magnetic resonance (CMR) has shown that average resting LVEF exceeds 70% in HCM patients, and that values in the 50–65% range may already be subtended by significant amounts of myocardial fibrosis, suggesting that progression towards end-stage disease may have begun.[43] Thus, in selected patients within this LVEF range, it is reasonable to consider HF treatment with ACEi, ARBs, mineralocorticoid receptor antagonists, and loop diuretics in the presence of congestive symptoms, as evidence of increasing LV filling pressure and/or extensive myocardial fibrosis. Cardiac resynchronization therapy (CRT) has been employed in the setting of systolic dysfunction with concomitant left bundle branch block (Class IIb, LOE C), although a survival benefit has not been demonstrated.[11] Definitive indications for CRT in end-stage HCM are still lacking, and the predictors of response are likely different from those applied in HF, beginning with the higher LVEF threshold requiring consideration in HCM.[11]

Figure 3.

Cardiac magnetic resonance of a 15-year-old Caucasian female patient with non-obstructive hypertrophic cardiomyopathy, presenting with severe heart failure symptoms (New York Heart Association class III) despite preserved left ventricular (LV) ejection fraction (67%). There was evidence of severe pulmonary hypertension, restrictive LV filling pattern and moderate mitral valve insufficiency. She subsequently required heart transplantation (HTx). Ambulatory medical treatment before admission for HTx included atenolol 100 mg once daily, furosemide 25 mg twice daily, acetylsalicylic acid 100 mg, and ivabradine 5 mg once daily (off-label use to control sinus tachycardia). (A) Extent of late-gadolinium enhancement (LGE), mainly located at the anterior and posterior insertion of the right ventricular free wall (red arrows), constituting 29% of the LV, compatible with extensive fibrotic replacement. (B) Short-axis view showing asymmetric distribution of hypertrophy; LGE is observed at the site of maximum LV thickness. (C) Four-chamber view showing marked dilatation of the left atrium (LA, area 39 cm²) and a dysmorphic LV with apically displaced papillary muscle (white arrows) inserted at the level of an 'amputated' apex (black arrow). (D) No evidence of dynamic obstruction at the LV outflow tract (LVOT). RA, right atrium.

(Courtesy of Patrizia Pedrotti; Niguarda Ca’ Granda Hospital, Milan, Italy).

Although cardiac transplantation is rarely performed in HCM, transplanted patients have an excellent outcome (Class IIa indication for patients with LVEF <50% and Class IIb for patients with LVEF ≥50%, both LOE B).[11] When disease progression is evident, referral to transplantation centres should be prompt, as the window of opportunity may be lost because of rapidly ensuing, refractory pulmonary hypertension. The use of LV assist devices has been reported in HCM, but can be challenging because of the small LV dimensions observed in most end-stage patients (Class IIb, LOE C).[11]

Management of atrial fibrillation

Atrial fibrillation is the most frequent arrhythmia in HCM, affecting more than 20% of patients, and represents a marker of unfavourable prognosis, particularly when associated with LVOTO and in patients younger than 50 years of age; moreover, the onset of AF worsens symptoms related to HF.[44-46] Following the onset of paroxysmal AF, long-term antiarrhythmic therapy is generally employed to prevent recurrences (Tables 1 and 2). Sotalol and, in patients with LVOTO, disopyramide (in association with beta-blockers) represent reasonable first-line agents, while other class I agents, such as flecainide or propafenone, are generally avoided owing to concerns with pro-arrhythmic effects and haemodynamic deterioration caused by conversion of AF to atrial flutter with rapid ventricular conduction.[11] Significant clinical experience with dronedarone is lacking. When AF relapses in the context of HF or LVOTO with severe left atrial dilatation, amiodarone represents the only option for rhythm control. Furthermore, the 2014 ESC guidelines on HCM recommend the use of amiodarone following DC cardioversion (Class IIa, LOE B).[11] Owing to concerns with long-term toxicity in young patients, the minimum effective dose should be employed (usually 200 mg five to seven times per week), and regular surveillance for thyroid, hepatic, pulmonary, and ophthalmic toxicity should be instituted. Symptomatic AF refractory to optimal pharmacological therapy represents an indication for transcatheter ablation of AF (or surgical maze in obstructive patients undergoing surgery), although international experience in HCM is limited. When selecting eligible patients for this procedure, it must be considered that high recurrence rates are expected in older patients with advanced symptoms and marked left atrial dilatation.[47] Thus, AF ablation should be considered early after the onset of AF, while the arrhythmic substrate remains amenable. Furthermore, it is important to inform patients that in over 50% of cases a second procedure is necessary for optimal results, and that it may not be possible to abandon long-term antiarrhythmic therapy.[47-49]

When maintenance of sinus rhythm is not deemed feasible and rate control is the only option, beta-blockers (atenolol, nadolol, metoprolol, or bisoprolol in the presence of preserved LVEF; bisoprolol or carvedilol in the presence of systolic dysfunction) and verapamil or diltiazem (only with preserved LVEF) are indicated.[11] Digoxin should not be used in the setting of classic HCM, but may be considered for rate control in the subgroup with advanced LV dysfunction and chronic AF. Rarely, an 'ablate and pace' approach is necessary, usually in end-stage patients.

The onset of AF in HCM patients, even after a single episode, constitutes an indication for oral anticoagulation irrespective of other risk factors for embolic stroke such as age or gender. Use of the CHA2DS2-VASc score is not recommended:[11] in a retrospective analysis of 4821 HCM patients, 9.8% of subjects with a CHA2DS2-VASc score of 0 had a thromboembolic event during 10-year follow-up.[50] Furthermore, advanced age, presence of AF, previous thromboembolic event, advanced NYHA class, increased left atrial diameter, presence of vascular disease, and increased maximal LV wall thickness were found to correlate with the risk of thromboembolic events, whereas the use of vitamin K antagonists was associated with a 54.8% relative risk reduction in HCM patients with AF.[50] Warfarin represents the drug of choice and should be titrated to maintain an international normalized ratio (INR) between 2.0 and 3.0. However, many young and active patients show limited compliance with this regimen or refuse it altogether, while others may have difficulties in maintaining the INR within the therapeutic range or experience complications.[45] Until recently, the less effective alternative of an antiplatelet agent was offered; however, the introduction of the novel oral anticoagulants (NOACs), including the direct thrombin inhibitor dabigatran and the factor Xa inhibitors rivaroxaban, apixaban and edoxaban, is rapidly changing this landscape. While caution is mandatory in the absence of safety and efficacy data in HCM patients, NOACs appear a promising alternative to warfarin, and deserve specific investigation.[11]
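For context on the score the paragraph above advises against relying on in HCM, a sketch of the conventional CHA2DS2-VASc tally (components as listed in the footnote to Table 2) might look as follows. The point values are the standard ones used in the general AF population; the function name and interface are our own illustrative constructs.

```python
def cha2ds2_vasc(age, female, chf=False, hypertension=False,
                 diabetes=False, stroke_tia=False, vascular=False):
    """Conventional CHA2DS2-VASc tally for the general AF population.
    Note: the review states that in HCM anticoagulation is indicated
    after any AF episode regardless of this score."""
    score = 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)  # A2 / A
    score += 1 if female else 0         # Sc, sex category
    score += 1 if chf else 0            # C, congestive HF / LV dysfunction
    score += 1 if hypertension else 0   # H
    score += 1 if diabetes else 0       # D
    score += 2 if stroke_tia else 0     # S, prior stroke or TIA
    score += 1 if vascular else 0       # V, vascular disease
    return score

# A 70-year-old woman with hypertension scores 3; yet, per the text,
# an HCM patient with AF would be anticoagulated even at a score of 0.
print(cha2ds2_vasc(age=70, female=True, hypertension=True))  # 3
```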

Control of ventricular arrhythmias

An ICD is considered the only effective strategy for prevention of arrhythmic SCD in patients with HCM. The ICD is universally recommended in secondary prevention, as the risk of arrhythmic relapse after a first episode is as high as 11% per year (Class I, LOE B).[11, 51] Conversely, indications for primary prevention are hotly debated. A new score has recently been developed by the ESC,[25] which defines high risk as an estimated 5-year SCD risk of ≥6%. The score is currently being validated in independent cohorts, with contrasting results.[52-54] Conversely, the ACCF/AHA guidelines favour individual, non-parametric evaluation of major risk factors.[12] A detailed discussion of SCD prevention and arrhythmic risk stratification is beyond the scope of the present review; the issue remains central to HCM management, however, and has been the focus of several recent articles.[15, 55] Classic and emerging risk factors, such as late-gadolinium enhancement and complex genotypes,[56-58] are commonly used to assess risk in individual patients, with approaches that differ slightly between Europe and the USA (see the Supplementary material online, Table S1). Irrespective of the chosen approach, the identification of high-risk patients remains challenging because of low arrhythmic event rates, the limited accuracy of risk factors, and the stochastic nature of SCD.[59, 60] Even in high-risk HCM patients, the onset of life-threatening arrhythmias is highly unpredictable, as highlighted by the variable, often long time-lapses between ICD implantation and first appropriate intervention. Notably, neither a circadian trend in the onset of ventricular arrhythmias nor a significant correlation with strenuous exercise has been documented.[61] The vast majority of patients with an ICD will never experience appropriate shocks, but will be exposed to the long-term complications of the device.[51] Furthermore, while paediatric cohorts are considered at highest risk, the risk of SCD is markedly reduced over 65 years of age, and fewer indications for primary-prevention ICD implantation exist in this age group; the option must nevertheless be evaluated on an individual basis and considered in patients with multiple risk factors. End-stage progression with systolic dysfunction (arbitrarily but consistently defined in the literature by an LVEF <50%) is associated with a high risk of SCD (around 10% per year) and is therefore considered an indication for ICD implantation in primary prevention.[14, 62] However, consideration for an ICD should also be given to patients with preserved systolic function in the presence of severe diastolic impairment (restrictive evolution) associated with NYHA functional class III symptoms.
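The ≥6%/5-year cut-off quoted above sits within the ESC's three-band categorization. A minimal sketch of the banding logic follows; note that the <4% "low" and 4–6% "intermediate" bands are taken from the 2014 ESC guideline itself (this review only quotes the ≥6% "high" cut-off), and the comments paraphrase the guideline's class of recommendation, so verify against the original document.

```python
def esc_risk_band(five_year_risk_pct):
    """Band an HCM Risk-SCD estimate (% at 5 years) into the 2014 ESC
    guideline categories. The <4% and 4-6% bands are assumptions drawn
    from the guideline, not stated in the review text."""
    if five_year_risk_pct >= 6.0:
        return "high"          # ICD should be considered
    if five_year_risk_pct >= 4.0:
        return "intermediate"  # ICD may be considered
    return "low"               # ICD generally not indicated

print(esc_risk_band(7.2))  # high
print(esc_risk_band(4.5))  # intermediate
print(esc_risk_band(2.1))  # low
```

The band alone does not settle the decision: as the text stresses, event rates are low, risk-factor accuracy is limited, and SCD is partly stochastic, so the estimate informs, rather than replaces, individualized judgement.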

Several studies show that empirical pharmacological treatment does not confer optimal protection against SCD (Table 2). Nonetheless, amiodarone, sotalol, and beta-blockers reduce the occurrence of non-sustained ventricular tachycardia.[12, 63] Thus, a judicious pharmacological approach is likely to be effective in reducing the arrhythmic burden and risk in patients with HCM, as well as the incidence of appropriate ICD interventions. In our experience, the combination of nadolol with low-dose amiodarone is well tolerated and effective in reducing ventricular arrhythmic burden, as documented by Holter ECG monitoring, potentially contributing to the low incidence of SCD at our institution in the pre-ICD era (0.5% per year).[64]

When not to treat

Patients with HCM who are asymptomatic and have no evidence of arrhythmias or LVOTO at rest or on effort generally do not require medical treatment. However, some patients self-reporting as asymptomatic may subjectively benefit from low doses of beta-blockers (e.g. bisoprolol 2.5 mg once daily), particularly on effort and after meals. Treatment should be offered as a short (2–3 months) trial, after which each subject may decide whether to continue. As a rule, it is advisable to verify whether the patient is truly asymptomatic, by performing maximal, symptom-limited exercise testing and assessing biomarkers over time. Labile obstruction should also be excluded. In adolescents and very young adults exercising regularly, heart rate control using beta-blockers may be considered in order to avoid elevated heart rates on effort, which are associated with lactate production in HCM hearts, reflecting silent ischaemia.[39]

Aggressive control of modifiable cardiovascular risk factors is mandatory in HCM patients, in order to prevent the synergistic effects of coronary disease, diabetes and hypertension.[41] Management of hypertension should follow existing guidelines.[65] Although the introduction of vasodilators should be cautious and gradual, because of potential worsening of resting or labile LVOTO, recent trials have shown that ARBs are safe and generally well tolerated in HCM patients.[9, 34] Finally, patients with obstructive HCM have a significant prevalence of obstructive sleep apnoea syndrome; this may exacerbate symptoms and arrhythmias and should be specifically sought and managed.[66] Advice regarding appropriate lifestyle may be extremely useful in reducing symptoms and risk in HCM patients, and may suffice in milder forms of the disease in which pharmacological therapy is not warranted. There is general consensus that patients should abstain from competitive sports, as well as from strenuous and prolonged physical activity, which can represent a trigger for arrhythmias and SCD (Class I, LOE C in the 2014 ESC guidelines).[11] Conditions that reduce circulating blood volume should be avoided to prevent worsening of LVOTO.[67]

Novel perspectives

A surge in pharmacological research on HCM has followed the identification of novel therapeutic targets, and holds promise for a rapid change in the clinical management of this disease. Several molecular mechanisms and disease pathways, stemming from the genetic background of HCM, represent appealing therapeutic targets, and have been reviewed by Ashrafian et al.[68] Indeed, based on sound translational research, a number of agents have already found their way to clinical testing. Perhexiline, a metabolic modulator that inhibits the metabolism of free fatty acids and enhances carbohydrate utilization by cardiomyocytes, has been employed with the aim of normalizing energy homeostasis in HCM. In a randomized, double-blind, placebo-controlled trial, perhexiline improved the ratio of phosphocreatine to adenosine triphosphate in the myocardium, resulting in improved diastolic function and exercise capacity.[69] A randomized, pivotal Phase 3 trial of 350 patients evaluating perhexiline for the treatment of moderate-to-severe HCM has recently been announced. However, concerns exist regarding the safety profile of the drug, following reports of hepatotoxicity in predisposed individuals, and the drug requires long-term monitoring of plasma levels.[70]

Recently, human HCM cardiomyocytes have been shown to exhibit marked electrophysiological remodelling leading to abnormal intracellular calcium handling, enhanced arrhythmogenesis, abnormal diastolic function, and excessive energy expenditure. These defects are selectively reversed in vitro by the late sodium current inhibitor ranolazine.[71] Thus, targeting this single molecular mechanism has the potential to counter several key components of HCM pathophysiology, including diastolic dysfunction, microvascular dysfunction, arrhythmogenesis and, by virtue of mild negative inotropic effects, dynamic outflow obstruction.[71] These data provided a rationale for the recently completed multicentre, double-blind, placebo-controlled pilot study testing the efficacy of ranolazine on exercise tolerance in symptomatic HCM patients (RESTYLE-HCM, registered in the EU Clinical Trials Register, EudraCT Number: 2011-004507-20). While the results of RESTYLE-HCM are awaited, a phase II/III trial, the LIBERTY-HCM study, has already started testing the efficacy of a new, more specific and potent late sodium current inhibitor, eleclazine (ClinicalTrials.gov, NCT02291237). LIBERTY-HCM will test the hypothesis that, compared with placebo, eleclazine improves exercise capacity as measured by peak oxygen consumption (VO2) during cardiopulmonary exercise testing in patients with symptomatic HCM from over 40 centres in Europe and the USA. Additional drugs that have been employed in preclinical studies and/or pilot clinical trials as possible disease-modifying therapies in HCM are listed in Tables 3 and 4 and include the angiotensin II type 1 (AT1)-receptor blockers losartan and valsartan,[9, 58, 72] statins,[73, 74] and N-acetyl-cysteine.[75, 76]

Table 3. Drugs that have been employed in different preclinical studies and/or pilot clinical trials as possible disease-modifying therapies in hypertrophic cardiomyopathy (HCM)
Drug Diltiazem Ranolazine/eleclazine Losartan/valsartan Statins Antioxidants (N-acetyl-cysteine)
  1. AT1, angiotensin II receptor type 1; β-MyHC, β-myosin heavy chain; Ca, calcium; [Ca]i, intracellular calcium concentration; CMs, cardiomyocytes; FBs, fibroblasts; HMG-CoA, 3-hydroxy-3-methyl-glutaryl-CoA; LV, left ventricular; LVH, left ventricular hypertrophy; Na, sodium; [Na]i, intracellular sodium concentration; NCX, sodium–calcium exchanger; TnT, troponin-T; TPM, tropomyosin. Superscript numbers in the table are references.
Molecular target l-Type Ca channel of CMs Late Na current of CMs AT1-receptor blockers on CMs and myocardial FBs HMG-CoA reductase Precursor of glutathione (antioxidant)
Proposed mechanism Reduced Ca entry into the cytosol of CMs, causing ↓ [Ca]i Reduced [Na]i and increased Ca exit from CMs via NCX, causing ↓ [Ca]i Block of AT1 signalling pathway in CMs (↓hypertrophy) and FBs (↓fibrosis) ↓ Rho/Ras in FBs (↓fibrosis) and in CMs (↓hypertrophy); ↓ oxidative stress ↓oxidative stress in FBs (↓fibrosis) and CMs (↓hypertrophy)
Preclinical studies in HCM models Preventive treatment in transgenic mice with R403Q β-MyHC mutation[71] Study on septal samples from HCM patients (myectomy)[71] Losartan in transgenic mice with R92Q-TnT mutation[72] Atorvastatin in a rabbit model with R403Q MyHC mutation[73] Rabbits with R403Q MyHC mutation;[75] mice with TPM mutation[76]
Effects in preclinical studies Prevention of hypertrophy and LV dysfunction[10] Reduction of cellular arrhythmogenesis, improved diastolic function[71] Endomyocardial fibrosis is greatly reduced after treatment[72] Reduction of hypertrophy and increased LV function[73] Reduction of hypertrophy, fibrosis[75] and diastolic dysfunction[76]
Clinical studies Slowing of phenotype development in young mutation carriers[10] Ongoing studies (RESTYLE-HCM with ranolazine; LIBERTY-HCM with eleclazine) Losartan in two studies;[33, 9] reduced LVH in one[33] but no effect on LVH in the other[9] Pilot study on 32 patients; no effects on hypertrophy/cardiac function[74] Ongoing Phase 1 study (NCT01537926)
Future perspective Increase the number of carriers, prolong follow-up Prevention of phenotype development in transgenic mice VANISH study for prevention of phenotype in HCM mutation carriers None Ongoing Phase 1 study (NCT01537926)
Table 4. Ongoing and completed randomized clinical trials assessing efficacy and safety of medical agents in patients with hypertrophic cardiomyopathy (HCM) since 2010
First author or name of the study Drug under evaluation Endpoint of the study Number of patients Results Year of publication
  • CPET, cardiopulmonary exercise test; HCM, hypertrophic cardiomyopathy; HF, heart failure; LV, left ventricular; LVH, left ventricular hypertrophy; NYHA, New York Heart Association.
  • *With updated data from ClinicalTrials.gov (key word: ‘hypertrophic cardiomyopathy’; 116 studies selected) and a second literature search (key words: ‘hypertrophic cardiomyopathy’ AND ‘clinical trials’, from 2010: 143 results). No updated data were available regarding clinical trials testing the efficacy of pirfenidone 400 mg b.i.d. (completed recruitment in 2003, NCT00011076) and atorvastatin 80 mg (completed recruitment in 2010, NCT00317967). RHYME is a non-randomized study registered in ClinicalTrials.gov, aimed at testing the efficacy of ranolazine in reducing angina symptoms after 60 days in 20 patients (NCT01721967).
  • Study registered in EU Clinical Trials Register, EudraCT Number: 2011-004507-20.
Abozguia et al.[69] Perhexiline 100 mg vs. placebo Efficacy on diastolic function and exercise capacity 46 patients with non-obstructive symptomatic HCM The metabolic modulator perhexiline improved diastolic function and increased peak oxygen uptake 2010
Shimada et al.[34] Losartan 50 mg b.i.d. vs. placebo Effects on LVH and fibrosis 20 patients with non-obstructive HCM Attenuation of progression of LVH and fibrosis with losartan 2013
INHERIT trial[9] Losartan 100 mg vs. placebo Effects on LVH and fibrosis 124 patients with obstructive or non-obstructive HCM Losartan did not reduce LVH. Treatment with losartan was safe 2015
Ho et al.[10] Diltiazem 360 mg/day vs. placebo Safety, feasibility and effect of diltiazem as disease-modifying therapy 38 sarcomere mutation carriers without LVH Diltiazem improved early LV remodelling 2015
Perhexiline 100 mg (sponsor: Heart Metabolics Ltd) vs. placebo Hierarchical classification of outcome variable and change in maximum oxygen consumption after 6 months 320 patients with HCM and moderate to severe HF Phase III Starting March 2016 (NCT02431221)
RESTYLE-HCM Ranolazine Change in maximum oxygen consumption at CPET 80 patients Phase II/III Ongoing—completed recruitment
LIBERTY-HCM GS-6615 (eleclazine; sponsor: Gilead Sciences) vs. placebo Safety/efficacy study on exercise capacity in patients with symptomatic HCM 180 patients with HCM Phase II/III evaluation of change in peak oxygen uptake Ongoing—recruiting patients
VANISH (New England Research Institute, USA) Valsartan up to 160 mg vs. placebo Composite endpoint of functional capacity, amount of myocardial fibrosis and other parameters after 2 years 150 patients with HCM in NYHA class I–II and mutation carriers without LVH Phase II Ongoing—recruiting patients (NCT01912534)
University of Texas Health Science Centre, Houston, USA N-acetyl-cysteine 600/1200 mg vs. placebo Regression of indices of cardiac LVH after 3 years 75 patients with HCM and preserved systolic function Phase I Ongoing—recruiting patients (NCT01537926)

Finally, a ‘precision medicine’ approach is emerging, based on the hypothesis that, in selected genetic subsets, HCM is triggered by a hypercontractile state caused by a reduced inhibitory effect of myosin-binding protein C on the cardiac myosin head. By selectively reducing the affinity of myosin for actin, the downstream consequences of sarcomere mutations might be countered in HCM patients, including prevention of phenotype development in the early stages of the disease.[77] Two phase I studies have recently been launched to assess the effects of MYK-461 (MyoKardia, South San Francisco, CA, USA), the first allosteric inhibitor of cardiac myosin tested in man, in patients with HCM (ClinicalTrials.gov NCT02329184 and NCT02356289).


Hypertrophic cardiomyopathy largely remains an orphan disease. In the near future, however, the debut of evidence-based approaches to HCM is likely to revolutionize its management by providing agents targeting disease-specific substrates. Until then, judicious use of the available pharmacological armamentarium may already provide sufficient control of the most common clinical manifestations and complications, allowing normal longevity in the majority of patients. Serial assessment and early identification of disease progression is key for timely implementation of available therapies.

Systematic review and meta-analysis of iron therapy in anaemic adults without chronic kidney disease: updated and abridged Cochrane review



Anaemia is increasingly recognized as having an independent impact on patient outcomes in cardiac disease, and novel iron therapies are increasingly used to treat it. This systematic review and meta-analysis assesses the efficacy and safety of iron therapies for the treatment of adults with anaemia.

Methods and results

Electronic databases and search engines were searched as per Cochrane methodology. Randomized controlled trials (RCTs) of iron vs. inactive control or placebo, as well as alternative formulations, doses, and routes in anaemic adults without chronic kidney disease or in the peri-partum period were eligible. The primary outcome of interest was mortality at 1 year. Secondary outcomes were blood transfusion, haemoglobin levels, quality of life, serious adverse events, and length of hospital stay. A total of 64 RCTs (including five studies of heart failure patients) comprising 9004 participants were included. None of the studies was at a low risk of bias. There were no statistically significant differences in mortality between iron and inactive control. Both oral and parenteral iron significantly reduced the proportion of patients requiring blood transfusion compared with inactive control [risk ratio (RR) 0.66, 95% confidence interval (CI) 0.48–0.90; and RR 0.84, 95% CI 0.73–0.97, respectively]. Haemoglobin was increased more by both oral and parenteral iron compared with inactive control [mean difference (MD) 0.91 g/dL, 95% CI 0.48 to 1.35; and MD 1.04, 95% CI 0.52 to 1.57, respectively], and parenteral iron demonstrated a greater increase when compared with oral iron (MD 0.53 g/dL, 95% CI 0.31–0.75). In all comparisons, there were no differences in the results comparing patients with and without heart failure.


Both oral and parenteral iron are shown to decrease the proportion of people who require blood transfusion and increase haemoglobin levels, without any benefit on mortality. Further trials at a low risk of bias, powered to measure clinically significant endpoints, are still required.


Anaemia has a high worldwide prevalence, and is estimated to affect 1.6 billion people worldwide.[1] The World Health Organization (WHO) defines anaemia as a circulating haemoglobin concentration of <120 g/L in non-pregnant women and <130 g/L in men.[1]
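These WHO thresholds are straightforward to operationalize. As an illustration, a minimal sketch in Python (the function name and interface are ours, not part of the WHO definition) classifying a haemoglobin value against the cut-offs for non-pregnant adults quoted above:

```python
def who_anaemia(hb_g_per_l: float, sex: str) -> bool:
    """Return True if the haemoglobin concentration meets the WHO
    definition of anaemia for non-pregnant adults:
    <120 g/L in women, <130 g/L in men."""
    threshold = 130.0 if sex == "male" else 120.0
    return hb_g_per_l < threshold

print(who_anaemia(125, "male"))    # True: below the 130 g/L male cut-off
print(who_anaemia(125, "female"))  # False: not below the 120 g/L female cut-off
```

Note that the same value of 125 g/L is anaemic in a man but not in a non-pregnant woman, which is why sex-specific thresholds matter when screening cohorts.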

Anaemia can cause fatigue and decreased work activity.[2] It has been shown to worsen heart failure[3] and is associated with increased mortality in people with chronic heart failure.[4] Anaemia is also associated with worse outcomes after cardiac and non-cardiac surgery, including increased mortality and length of hospital stay.[5, 6]

Approximately 50% of anaemia is due to iron deficiency.[1, 7, 8] Absolute iron deficiency can be caused by nutritional deficiency of iron, loss due to bleeding, or decreased absorption of dietary iron, causing a lack of stored iron. Alternatively, a state of functional iron deficiency can occur, leading to iron-restricted erythropoiesis despite normal iron stores. Functional iron deficiency can be caused by: defective incorporation of iron into developing red cells, decreased availability of iron stores as the result of increased uptake and retention of iron within the reticuloendothelial system, or failure of absorption of intestinal iron due to inflammation, mediated by hepcidin. Chronic inflammation and disease may lead to increased hepcidin levels and thus anaemia of chronic disease.[9]

European Society of Cardiology guidelines recommend the diagnosis and treatment of correctable causes of anaemia in heart failure.[10] There has been considerable attention in cardiac disease on the role of iron therapy for iron deficiency, but the efficacy and effect of iron therapy to treat anaemia remains uncertain.

This review updates findings published in the Cochrane Database of Systematic Reviews in 2014,[11] to assess the safety and efficacy of iron therapies for the treatment of adults with anaemia who are not pregnant or lactating and do not have chronic kidney disease (CKD). No previous systematic review had assessed the clinical benefits of iron therapies excluding these groups. Because the effectiveness of iron is explored across a wide range of clinical conditions, this review can influence the care of many patient groups. This is particularly important given the burgeoning interest in patient blood management: the increased awareness of the importance of detecting and treating anaemia, and of reducing unnecessary or inappropriate blood transfusions, is reflected in recently issued National Institute for Health and Care Excellence (NICE) guidelines for transfusion.[12]


The Cochrane methodology was applied to this review.[11] Table 1 presents the inclusion and exclusion criteria of studies.

Table 1. Inclusion and exclusion criteria
  1. Hb, haemoglobin; WHO, World Health Organization.
Inclusion criteria
Study type Randomized controlled trials, irrespective of blinding
Publication Trials were included irrespective of publication status and publication date
Participants Any non-pregnant and non-lactating anaemic adult without chronic kidney disease, irrespective of setting and degree of anaemia. All participants had to have anaemia as defined by the WHO criteria: Hb <13 g/dL for males and Hb <12 g/dL for females.
Intervention and comparisons Oral iron vs. placebo or no iron therapy
Parenteral iron vs. placebo or no iron therapy
Parenteral iron vs. oral iron
Different oral iron formulations and doses
Different parenteral iron formulations, routes (intramuscular vs. intravenous), and doses
Outcome Primary: all-cause mortality at 1 year

Secondary: mortality at different periods of follow-up, risk of blood transfusion, difference in blood transfused, difference in haemoglobin levels, quality of life, serious adverse events, length of hospital stay

Exclusion criteria
Patients with chronic kidney disease and renal transplant Included in the review ‘Parenteral versus oral iron therapy for adults and children with chronic kidney disease’[14]
Pregnant women Included in the review ‘Treatments for iron deficient anaemia in pregnancy’[15]
Post-partum women Included in the review ‘Treatment for women with postpartum iron deficiency anaemia’[16]
Children Included in the review ‘Iron supplementation for iron deficiency anemia in children’[17]

As per the Cochrane Handbook for Systematic Reviews of Interventions,[13] all known relevant electronic databases and search engines were accessed. Searches were not restricted by language, date, or publication status. The following databases were searched up to November 2014: Cochrane Central Register of Controlled Trials (The Cochrane Library) (Issue 7, 2013), MEDLINE (Ovid) (1950–), EMBASE (Ovid) (1980–), CINAHL (Cumulative Index to Nursing and Allied Health Literature) Plus (1957–), ISI Web of Science: Science Citation Index Expanded (SCI-EXPANDED) (1970–), and ISI Web of Science: Conference Proceedings Citation Index-Science (CPCI-S) (1990–). The reference lists of all included studies and previously published reviews were searched for additional studies. Search terms included: iron, ferrous, ferric, and an(a)emia/c. Full details of the search strategy are available.[11]

Study selection

Iron therapy can be administered by a variety of routes and in different formulations. Randomized controlled trials (RCTs) of iron (oral and parenteral) vs. control or placebo, as well as alternative formulations, doses, and routes, were eligible to be included in the meta-analysis. RCTs irrespective of blinding, language, publication status and date of publication, study setting, and sample size were included. Any non-peripartum anaemic adults without CKD were included in this review, irrespective of the setting and the degree of anaemia. The primary outcome of interest was mortality at 1 year.

Risk of bias was assessed as per the instructions of the Cochrane Handbook[13] according to the following domains: sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, selective outcome reporting, and source of funding bias.

Two review authors (K.S.G. and T.R. or B.C.) identified trials for inclusion independently of each other, listing excluded studies and the reason for exclusion. Differences were resolved by discussion.

Statistical analysis

Meta-analyses were performed using the software package Review Manager version 5.3[18] and in accordance with the recommendations of the Cochrane Handbook.[13] The results of the random effects model were reported. The risk ratio (RR) with 95% confidence intervals (CIs) was calculated for dichotomous variables, and the mean difference (MD) with 95% CIs or standardized mean difference (SMD) with 95% CIs as appropriate for continuous variables. For time-to-event outcomes such as mortality at maximal follow-up, the hazard ratio (HR) with 95% CIs was calculated.
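For dichotomous outcomes such as mortality or transfusion, the risk ratio and its confidence interval are conventionally computed on the log scale. A minimal sketch of the standard large-sample calculation, using hypothetical 2×2 data rather than figures from any included trial:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs. group B with a large-sample 95% CI.

    The CI is built on the log scale using the standard error
    SE(ln RR) = sqrt(1/a - 1/n_a + 1/b - 1/n_b).
    """
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical data: 10/100 events in the treated group vs. 20/100 in control
rr, lo, hi = risk_ratio_ci(10, 100, 20, 100)
# rr = 0.50; here the CI crosses 1, so the difference is not significant
```

Review Manager performs the equivalent computation (with inverse-variance or Mantel–Haenszel pooling across trials); the sketch above shows only the single-study case for clarity.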

Subgroup analyses were performed for trials studying participants with chronic heart failure compared with those without. Subgroup analyses were also performed comparing trials in which erythropoietin was used as a co-intervention vs. those in which it was not, and by participant group: blood loss conditions, cancer, pre-operative, autoimmune, and miscellaneous (for these results, see Supplementary material online). The test for subgroup differences within Review Manager was used, with a P-value of <0.05 considered statistically significant.

Sensitivity analysis was planned to exclude trials with unclear or high risk of bias for random sequence generation; unclear or high risk of bias due to lack of blinding of participants, healthcare providers, or outcome assessors; and unclear or high risk of bias due to incomplete outcome data. However, all trials had at least one domain with unclear or high risk of bias. Sensitivity analysis was performed by excluding trials in which we imputed the mean and the standard deviation, when there were at least two trials for the outcome (for sensitivity analysis results, see Supplementary material online).


Study selection

Overall, 225 full text publications from 17 693 citations were identified as potentially relevant studies, and full text copies were retrieved and assessed. Exclusions are detailed in Figure 1. In total, 128 publications describing 65 RCTs fulfilled the inclusion criteria. Duplicate reporting included the publication of conference abstracts prior to publication and publication of subset analysis, cost analysis, and combined reporting.

Figure 1.

Study selection flow diagram.

Study characteristics

Overall, 9004 participants were included in the 65 RCTs that provided the quantitative data for this review (for individual study details, see Tables 2 and 3; Supplementary material online, Table S1). None of the included studies was at low risk of bias in every domain. A summary of the risk of bias analysis is presented in the Supplementary material online.

Table 2. Characteristics of studies included
Intervention No. of studies No. of participants References
Oral iron vs. inactive control 8 851 [19], [20], [47], [48], [51-54]
Parenteral iron vs. inactive control 18 2639 [27-37], [49], [50], [56-59], [71]
Parenteral iron vs. standard practice (oral iron or no intervention) 2 634 [38], [39]
Parenteral vs. oral iron 13 1873 [40-46], [60], [64-66], [72], [73]
Parenteral iron vs. oral iron vs. inactive control 7 960 [21-26], [55]
Different preparations of i.v. iron 5 1310 [61-63], [74], [75]
Different preparations of oral iron 12 737 [76-87]
Table 3. Clinical setting of included studies
Clinical setting No. of studies No. of participants References
Heart failure 5 358 [24], [27], [33], [57], [58]
Blood loss 14 1569 [19], [21], [23], [31], [32], [35], [40], [46], [49], [51], [52], [55], [60], [81]
Cancer 14 2185 [22], [25], [26], [28-30], [34], [38], [39], [41], [43], [50], [71], [73]
Pre-operative anaemia 3 145 [44], [47], [56]
Autoimmune disorders 8 1350 [42], [45], [53], [64], [65], [72], [74], [75]
Other 21 3397 [20], [36], [37], [48], [54], [59], [61-63], [66], [76-80], [82-87]


The primary outcome of interest was mortality at 1 year. However, only one trial reported mortality at 1 year,[19] in which mortality in both the oral iron and no iron groups was 29% (P = 1); therefore, the HR was not calculated for any of the comparisons.

Eight studies investigated oral iron vs. inactive control and reported mortality,[19-26] 19 studies compared parenteral iron vs. inactive control,[21-39] and 13 trials reported mortality comparing parenteral iron vs. oral iron[21-26, 40-46] (Figures 2-4). In all comparisons, there was no statistically significant difference in mortality (Table 4).

Figure 2.

Oral iron vs. inactive control: mortality. Forest plot of risk ratios (RRs) with 95% confidence intervals (CIs) comparing oral iron with inactive control for mortality. Squares indicate study-specific RR estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled RRs with their 95% CI.

Figure 3.

Parenteral iron vs. inactive control: mortality. Forest plot of risk ratios (RRs) with 95% confidence intervals (CIs) comparing parenteral iron with inactive control for mortality. Squares indicate study-specific RR estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled RRs with their 95% CI.

Figure 4.

Parenteral vs. oral iron: mortality. Forest plot of risk ratios (RRs) with 95% confidence intervals (CIs) comparing parenteral with oral iron for mortality. Squares indicate study-specific RR estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled RRs with their 95% CI.

Table 4. Summary of results
Outcome Oral iron vs. inactive control Parenteral iron vs. inactive control Parenteral iron vs. oral iron
  • CI, confidence interval; MD, mean difference; RR, risk ratio; SMD, standardized mean difference.
  • *Denotes statistical significance.
Mortality RR 1.10 (95% CI 0.72–1.67; I2 = 0%; χ2 test for heterogeneity P = 0.61) RR 1.22 (95% CI 0.82–1.80; I2 = 8%; χ2 test for heterogeneity P = 0.58) RR 1.22 (95% CI 0.58–2.56; I2 = 0%; χ2 test for heterogeneity P = 0.52)
Proportion requiring blood transfusion RR 0.66* RR 0.84* RR 1.20
(95% CI 0.48–0.90; I2 = 0%; χ2 test for heterogeneity P = 0.32) (95% CI 0.73–0.97; I2 = 0%; χ2 test for heterogeneity P = 0.5) (95% CI 0.56–2.61; I2 = 71%; χ2 test for heterogeneity P = 0.008)
Haemoglobin MD 0.91 g/dL* MD 1.04 g/dL* MD 0.53 g/dL*
(95% CI 0.48 to 1.35; I2 = 67%; χ2 test for heterogeneity P = 0.0004) (95% CI 0.52–1.57; I2 = 93%; χ2 test for heterogeneity P < 0.00001) (95% CI 0.31–0.75; I2 = 41%; χ2 test for heterogeneity P < 0.03)
Quality of life SMD 0.13 SMD 0.22* SMD 0.01
(95% CI −0.10 to 0.37) (95% CI −0.00 to 0.45; I2 = 74%; χ2 test for heterogeneity P = 0.002) (95% CI −0.09 to 0.12; I2 = 0%; χ2 test for heterogeneity P = 0.63)
Serious adverse events RR 0.96 RR 1.05 RR 1.18
(95% CI 0.77–1.19; I2 = 0%; χ2 test for heterogeneity P = 0.69) (95% CI 0.88–1.25; I2 = 25%; χ2 test for heterogeneity P = 0.56) (95% CI 0.97–1.44; I2 = 0%; χ2 test for heterogeneity P = 0.5)
Length of hospital stay MD −2.50 days (95% CI −6.82 to 1.82) MD 0.30 days (95% CI −0.19 to 0.79)

Proportion requiring blood transfusion

Comparing oral iron vs. inactive control,[23, 25, 26, 47, 48] there was a lower transfusion rate in the oral iron group (RR 0.66, 95% CI 0.48–0.90) (Supplementary material online, Figure S1). Comparing parenteral iron vs. inactive control, significantly fewer participants received a blood transfusion with parenteral iron (RR 0.84, 95% CI 0.73–0.97)[23, 25-32, 35, 38, 39, 49] (Supplementary material online, Figure S2). In the studies that reported the mean volume of blood transfused, this was significantly lower in the parenteral iron group than in the inactive control group (MD −1.71 units; 95% CI −3.20 to −0.22).[28, 30, 41, 50] No statistically significant difference was found comparing parenteral with oral iron[23, 25, 26, 40, 41] (Figure 5; Table 4).

Figure 5.

Parenteral vs. oral iron: proportion requiring blood transfusion. Forest plot of risk ratios (RRs) with 95% confidence intervals (CIs) comparing parenteral with oral iron for blood transfusion. Squares indicate study-specific RR estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled RRs with their 95% CI.


Oral iron resulted in higher haemoglobin levels than inactive control (MD 0.91 g/dL, 95% CI 0.48–1.35; I2 = 67%; χ2 test for heterogeneity P = 0.0004).[19-25, 51-55] There was significant heterogeneity (I2 = 67%), with point estimates of the mean difference in haemoglobin ranging from 0.2 to 2.2 g/dL higher in the oral iron group than in the inactive control group (Supplementary material online, Figure S3). Parenteral iron also resulted in higher haemoglobin levels than inactive control (MD 1.04 g/dL, 95% CI 0.52–1.57; I2 = 93%; χ2 test for heterogeneity P < 0.00001)[21, 22, 24, 25, 27, 29-33, 35-41, 49, 55-63] (Supplementary material online, Figure S4). Again there was considerable heterogeneity (I2 = 93%), with point estimates ranging from 0.7 g/dL lower to 3 g/dL higher in the parenteral iron group than in the inactive control group. Comparing parenteral vs. oral iron, haemoglobin concentration was higher in the parenteral iron group (MD 0.53 g/dL, 95% CI 0.31–0.75)[21-25, 40-46, 55, 60, 64-66] (Figure 6).

Figure 6.

Parenteral vs. oral iron: haemoglobin. Forest plot of mean differences (MDs) with 95% confidence intervals (CIs) comparing parenteral with oral iron for haemoglobin concentration. Squares indicate study-specific MD estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled MDs with their 95% CI.
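The I² values quoted for the haemoglobin comparisons quantify between-study heterogeneity. A minimal sketch of how Cochran's Q and I² are derived from study-level estimates and standard errors under inverse-variance weighting (the input data below are purely illustrative, not taken from the included trials):

```python
def cochran_q_i2(estimates, std_errors):
    """Cochran's Q and the I² statistic.

    I² = max(0, (Q - df) / Q) * 100 is the percentage of total variation
    across studies attributable to heterogeneity rather than chance.
    """
    weights = [1.0 / se ** 2 for se in std_errors]        # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical mean differences (g/dL) and standard errors from three studies:
# the outlying third study inflates Q well beyond its 2 degrees of freedom
q, i2 = cochran_q_i2([0.9, 1.1, 2.0], [0.2, 0.2, 0.3])
```

With such discordant study estimates, I² falls in the substantial range (roughly 75–80% here), which is why the review reports random-effects results rather than assuming a single common effect.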

Quality of life

Six trials reported quality of life when comparing parenteral iron with inactive control[26-28, 36, 37, 39] using a variety of scales. Quality of life was higher in the parenteral iron group than in the control group (SMD 0.22 95% CI −0.00 to 0.45; I2 = 74%; χ2 test for heterogeneity P = 0.002) (Figure 7). When comparing parenteral and oral iron, there was no significant difference in quality of life (SMD 0.01, 95% CI −0.09 to 0.12)[26, 40-42, 45, 46, 65] (Figure 8).

Figure 7.

Parenteral iron vs. inactive control: quality of life. Forest plot of standardised mean differences (SMDs) with 95% confidence intervals (CIs) comparing parenteral iron with inactive control for effect upon quality of life. Squares indicate study-specific SMD estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled SMDs with their 95% CI.

Figure 8.

Parenteral vs. oral iron: quality of life. Forest plot of standardised mean differences (SMDs) with 95% confidence intervals (CIs) comparing parenteral with oral iron for effect upon quality of life. Squares indicate study-specific SMD estimates; horizontal lines indicate the 95% CI; diamonds indicate the pooled SMDs with their 95% CI.

Serious adverse events

In comparisons of oral iron vs. inactive control[19-21, 23, 25, 26, 52, 53] and parenteral iron vs. inactive control,[21, 23, 25-27, 29, 30, 34, 37, 39, 50] no statistically significant differences were found (Supplementary material online, Figures S5 and S6). Importantly, no trials reported severe allergic reactions from parenteral iron. There was no statistically significant difference in serious adverse events when comparing parenteral with oral iron[21, 23, 25, 26, 40-46, 60, 66] (Figure 9; Table 4).

Figure 9.

Parenteral vs. oral iron: serious adverse events. Forest plot of risk ratios (RRs) with 95% confidence intervals (CIs) of serious adverse events comparing parenteral with oral iron. Squares indicate study-specific RR estimates; horizontal lines indicate the 95% CIs; diamonds indicate the pooled RRs with their 95% CIs.

Length of hospital stay

One study compared length of hospital stay for oral iron vs. inactive control,[19] whilst one trial compared length of hospital stay for parenteral vs. oral iron;[40] neither showed any significant difference.

Subgroup analysis: chronic heart failure

Only one study in heart failure compared oral iron with parenteral iron and with inactive control, reporting mortality and haemoglobin concentration.[24] Subgroup analysis of oral vs. parenteral iron in patients with heart failure vs. those without revealed no significant subgroup differences in mortality (P = 0.44) or haemoglobin (P = 0.59). When comparing oral iron with inactive control in patients with and without heart failure, there was no significant difference in mortality (P = 0.39) or haemoglobin concentration (P = 0.93) between the groups.

Comparing parenteral iron with inactive control in patients with and without heart failure showed no significant difference in mortality (P = 0.79), haemoglobin (P = 0.99), quality of life (P = 0.95), or serious adverse events (P = 0.14) between these groups.

In patients with heart failure, haemoglobin concentration was significantly higher in those given i.v. iron compared with placebo (MD 1.12, 95% CI 0.11–2.14), yet without any significant difference in mortality, quality of life, or serious adverse events.


Heterogeneity across the analyses ranged from low for mortality (I² = 0%) and serious adverse events, to considerable for haemoglobin (I² = 67% for oral iron vs. inactive control, I² = 93% for parenteral iron vs. inactive control, and I² = 41% for parenteral vs. oral iron).
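I² expresses the share of total variation across studies that is attributable to between-study heterogeneity rather than chance, and follows directly from Cochran's Q and its degrees of freedom. A small illustration (the Q and df values below are chosen for illustration, not taken from the included analyses):

```python
def i_squared(q, df):
    """Higgins' I^2: percentage of total variation across studies due to
    heterogeneity rather than chance, I^2 = max(0, (Q - df) / Q) * 100."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

# Illustrative values only: Q = 300 with df = 21 yields I^2 = 93%
print(i_squared(300, 21))   # -> 93.0
```

By convention, I² around 25%, 50%, and 75% is read as low, moderate, and considerable heterogeneity, which is the framing used in the summary above.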


This systematic review considered the utility of iron therapy for the treatment of anaemia in non-peripartum anaemic adults without CKD. Most of the trials included patients with mild to moderate anaemia. The trials did not demonstrate clinical benefit in terms of mortality. In all comparisons there were no significant subgroup differences in the results comparing patients with and without heart failure.

Both oral and parenteral iron led to a higher level of haemoglobin compared with inactive controls, and parenteral iron demonstrated a statistically significant rise in comparison with oral iron.

Parenteral iron demonstrated a statistically significant benefit for quality of life when compared with inactive control, but no significant difference when compared with oral iron. It was not possible to estimate the clinical importance of this difference. The only trial comparing oral iron with inactive control that reported quality of life showed no statistically significant difference.[26]

Both oral iron and parenteral iron demonstrated a significant reduction in the risk of blood transfusion when compared with inactive control, with no significant difference between the two modes of administration when compared with one another.

There was no significant difference in the proportion of patients who developed serious adverse events as a result of parenteral or oral iron therapy. Most trials that considered serious adverse events reported no allergic, anaphylactic, or other serious reactions, suggesting that these are rare.

Parenteral iron was demonstrated to result in increased haemoglobin levels and a reduction in blood transfusion when compared with inactive controls, without any statistically significant increase in adverse events. However, no improvement in quality of life or mortality was demonstrated in these studies.

Most of the adverse events related to oral iron therapy were gastrointestinal side effects such as nausea, diarrhoea, or constipation. While the balance of benefits and harms appears to favour routine oral iron therapy in anaemic patients, the quality of evidence is very low. No iron preparation or regimen showed a significant clinical benefit over another, so there is little evidence on which to base a recommendation of one preparation or regimen over another.

Subgroup analyses were performed to determine whether iron would be useful in specific clinical situations or whether iron therapy might be useful in patients who are receiving erythropoietin. The results were not consistent enough to enable us to determine this. In anaemic patients with heart failure, iron improved the haemoglobin concentration without any improvement in other clinically relevant endpoints.

This analysis is applicable only in non-peripartum anaemic adults without CKD with mild to moderate anaemia. It should also be noted that most trials excluded patients who were allergic to iron therapy and measured the ferritin and transferrin levels to ensure that the patients had iron deficiency anaemia.

None of the trials was at low risk of bias in every domain assessed. Many trials did not report important clinical outcomes, although it is likely that such outcomes were measured; this has resulted in significant selective outcome reporting bias. We found evidence of publication bias in haemoglobin levels and evidence of selective reporting, i.e. many clinical trials reported haemoglobin levels but not clinical outcomes. We imputed the mean and standard deviation when these were not available, which could have introduced bias; however, excluding trials with imputed data did not significantly alter the results, suggesting no such bias.

The previous systematic review of iron in non-pregnant and non-lactating anaemic adults without CKD failed to show any clinical benefit of i.v. iron compared with oral iron or inactive controls beyond improving haemoglobin levels.[11] This expanded meta-analysis showed a statistical difference in blood transfusion rates and quality of life for parenteral iron when compared with inactive control. However, the clinical significance of this is not known, particularly since there was no difference in mortality rates. There is a growing recognition that anaemia is a significant co-morbidity that may not be modifiable, whether by iron replacement or blood transfusion. In heart failure, treatment of iron deficiency itself may be more important than anaemia per se.

The recognition of functional iron deficiency modulated by hepcidin, and its association with inflammation provides an explanation for the efficacy of parenteral iron when compared with oral iron in producing a rise in haemoglobin in anaemic patients. In spite of demonstrating a greater haemoglobin response than oral iron, parenteral iron failed to show any other benefits over oral preparations.

In the context of heart failure with or without anaemia, i.v. iron has been shown to reduce readmission to hospital,[67] and to improve renal function.[68] Iron replacement is recommended for all iron-deficient patients with heart failure, regardless of whether they are anaemic. The majority of patients with heart failure receive oral iron, despite a lack of evidence for its benefit in these patients.[69] Only one trial comparing oral vs. i.v. iron in anaemic patients with heart failure was included in this systematic review,[24] and further trials comparing oral with i.v. iron in anaemic patients with heart failure are required.

Recently, a systematic review was published which included all trials in which i.v. iron was compared with either oral iron or no iron therapy, irrespective of the clinical setting.[70] Our findings are broadly similar to those of that review, which found that i.v. iron increased haemoglobin levels and decreased transfusion requirements. However, that systematic review found infective complications to be higher with i.v. iron, whereas in our review there were no significant differences in serious adverse events. The confidence intervals were wide, and so the observation in our systematic review might reflect a lack of evidence of effect rather than a lack of effect.

In conclusion, i.v. iron is effective in improving haemoglobin levels compared with oral iron or inactive controls, and oral iron improves haemoglobin levels in comparison with inactive control. Neither reduced mortality; however, both reduced blood transfusion rates, and parenteral iron demonstrated a statistically significant improvement in quality of life, although the clinical significance of this increase is not known. The analysis of trials of heart failure patients within this study showed the same outcomes. Based on these findings, more RCTs at low risk of bias, powered to measure clinically useful endpoints including mortality, blood transfusion, and quality of life, are still required in all patient populations, including those with heart failure.

Incidence of cancer in patients with chronic heart failure: a long-term follow-up study



With improvement in survival of chronic heart failure (HF), the clinical importance of co-morbidity is increasing. The aim of this study was to assess the incidence and risk of cancer and all-cause mortality in a large Danish HF cohort.

Methods and results

A total of 9307 outpatients with verified HF without a prior diagnosis of cancer (27% female, mean age 68 years, 89% with LVEF <45%) were included in the study. Diagnoses of any cancer and all-cause mortality were obtained from Danish national registries. Outcome was compared with the general Danish population. Overall and type-specific risk of cancer was analysed in adjusted Poisson and Cox regression analyses. The 975 diagnoses of cancer in the HF cohort and 330 843 in the background population corresponded to incidence rates per 10 000 patient-years of 188.9 [95% confidence interval (CI) 177.2–200.6] and 63.0 (95% CI 63.0–63.4), respectively. When stratified by age, incidence rates were increased in all age groups in the HF cohort. Risk of any type of cancer was increased, with an incidence rate ratio of 1.24 (95% CI 1.15–1.33, P < 0.0001). Type-specific analysis demonstrated an increased hazard ratio for all major types of cancer except for prostate cancer. All-cause mortality was higher in HF patients with cancer compared with cancer patients from the background population.


Patients with HF have an increased risk of cancer, which persists after the first year after the diagnosis of HF, and their prognosis is worse compared with that of cancer patients without HF.


Due to increasing age in the general population and with improvements in management of ischaemic heart disease and chronic heart failure (HF), the population of patients with HF is growing.[1, 2] During the last 30 years, the prognosis of HF due to LV dysfunction has improved significantly, with more patients surviving for an extended time,[3, 4] thus increasing the importance of detection and management of non-cardiac disease.

With cancer being a major source of morbidity and mortality[5] and a recent study suggesting an increased risk of cancer among American chronic HF patients,[6] the aim of the present study was to assess the incidence and risk of major types of cancer in a large Danish cohort with HF compared with the general population by using nationwide Danish administrative registries with complete follow-up. In addition, this study reviewed the prognosis after a diagnosis of cancer in patients with HF compared with the background population.


Study cohort

Consecutive unselected mainly Caucasian patients, who were referred to 26 Danish HF clinics at the time of their diagnosis with HF between 14 April 2002 and 31 December 2009, were included in this study. The diagnosis of HF was based on clinical evaluation and echocardiography by an experienced cardiologist. During the study period, predominantly patients with LVEF <45% by echocardiography were referred to the clinics.

Patients diagnosed with cancer before the first visit to the HF clinic (n = 1205) and patients with invalid data (n = 21) were excluded from the cohort.

For comparison of the study endpoints between the cohort and the background population, the total Danish population above 18 years of age at 1 January 2002 was followed for a diagnosis of cancer and all-cause mortality during the follow-up period. Citizens with a pre-existing diagnosis of cancer before 1 January 2002 (n = 134 601) were excluded, which resulted in 4 959 275 citizens enrolled in the analysis. If a person from the background population during follow-up was referred to a HF clinic, he/she was transferred to the study cohort at the time of the diagnosis of HF.

The Danish HF clinics are nurse-led, physician-supervised clinics managing treatment and education of HF patients, located at both secondary and tertiary care facilities. All participating hospitals were part of the Danish Heart Failure Clinic’s Network and required use of the internet-based HF database program ‘HjerterPlus’ for patient follow-up. Data for the patients including medical history, co-morbidity, LVEF, laboratory data, and medication were stored in this electronic patient file and research database. Information about the database and the HF clinics has previously been published.[7, 8]

Endpoints and collection of data

The primary study endpoints were a diagnosis of cancer and all-cause mortality. All types of cancer were included [ICD-10 (International Classification of Diseases, 10th Revision) codes C00–C96]. The cohort and the background population were followed for both endpoints until 31 December 2012.

Outcome data were obtained from the Danish National Patient Registry,[9] where since 1977 nationwide data on all hospital admissions, and since 1995 data on outpatients and emergency patients, have been included, and from the Central Population Registry where all deaths in Denmark are registered within 2 weeks. Data were linked to the ‘HjerterPlus’ database by a unique 10-digit civil registration number, which is provided to all Danish citizens at birth or at achievement of permanent residency in Denmark.


The study was approved by the Danish Data Protection Agency (reference 2007-58-0015). Approval by an Ethics Committee and informed consent are not required for retrospective registry studies in Denmark.

Statistical analysis

Crude incidence rates per 10 000 patient-years were calculated for the HF cohort in total, and for patients under 50 years of age, patients aged 50–59, 60–69, and 70–79 years, and patients older than 79 years. Corresponding incidence rates were calculated for the background population for comparison. Because of the competing risk of death, cumulative incidence functions were used to evaluate the incidence of cancer.[10]

The incidence of cancer was evaluated in a multivariable Poisson regression model adjusted for gender, age (updated annually), and calendar date, with follow-up time split into 1-year bands. To minimize surveillance bias, a secondary sensitivity analysis was performed excluding diagnoses of cancer within 180 and 365 days after the diagnosis of HF.

Type-specific risk of cancer was evaluated in a Cox regression analysis adjusted for age and gender.

Kaplan–Meier estimates based on all-cause mortality were used in evaluation of prognosis. Statistical analysis and data management were carried out using the SAS statistical software package, version 9.4 (SAS Institute, Cary, NC, USA). A two-sided P-value <0.05 was considered significant.
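The splitting of follow-up time into 1-year bands for the Poisson model amounts to a Lexis-style expansion of each patient's person-time. A minimal sketch of the idea (a hypothetical helper, not the authors' SAS code):

```python
def split_bands(entry_age, follow_up):
    """Split one patient's follow-up into 1-year bands (a minimal Lexis
    expansion): returns (attained-age band, person-time in band) pairs."""
    bands = []
    t = 0.0
    while t < follow_up:
        dt = min(1.0, follow_up - t)          # last band may be a fraction
        bands.append((int(entry_age + t), dt))
        t += dt
    return bands

# A 68-year-old followed for 2.5 years contributes person-time to three bands
print(split_bands(68, 2.5))   # -> [(68, 1.0), (69, 1.0), (70, 0.5)]
```

Each band then enters the Poisson regression as one record, with the band's person-time as the offset, which is what allows age to be "updated annually" during follow-up.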


The study cohort

A total of 10 533 outpatients with verified HF were registered with a baseline visit in a HF clinic between 2002 and 2009. As presented in Figure 1, a total of 1205 patients were excluded from the cohort because of a pre-existing diagnosis of cancer before their baseline visit. Another 21 patients were excluded due to missing or invalid data. The remaining 9307 patients (89.3% with LVEF <45% by echocardiography) were included in the study analysis. Follow-up was complete with a mean time of 4.5 years (SD ±2.3 years). Patient characteristics are presented in Table 1.

Figure 1.

Flow chart of the heart failure study population. CHF, chronic heart failure.

Table 1. Patient characteristics
Characteristic | Total cohort (n = 9307) | Cancer absent (n = 8332) | Cancer present (n = 975) | P-value
Mean age (± SD) | 67.8 ± 2.2 | 67.4 ± 12.4 | 70.8 ± 9.7 | <0.0001
Women | 27.4% | 27.7% | 24.8% | 0.0578
Clinical characteristics
NYHA class III/IV | 19.6% | 19.7% | 18.6% | 0.7720
EF <45% | 89.3% | 89.2% | 90.2% | 0.3895
Diabetes | 15.0% | 15.1% | 14.2% | 0.4485
Ischaemic heart disease | 22.4% | 22.0% | 25.5% | 0.013
Heart rhythm
Sinus rhythm | 69.9% | 70.1% | 68.0% | 0.678
Atrial fibrillation | 23.6% | 23.5% | 24.7% |
Other | 6.5% | 6.4% | 7.3% |
Pharmacological treatment
ACE inhibitor | 80.9% | 80.9% | 81.0% | 0.9657
Beta-blocker | 78.1% | 78.3% | 77.0% | 0.3900
Aldosterone antagonist | 24.4% | 24.5% | 23.5% | 0.5282
Statins | 49.2% | 49.2% | 49.5% | 0.8656
Aspirin | 50.2% | 50.1% | 51.3% | 0.4774

Incidence of cancer

During follow-up, 975 new cases of cancer were identified in the HF cohort, corresponding to an incidence rate of 188.9 [95% confidence interval (CI) 177.2–200.6] per 10 000 patient-years. The corresponding incidence rate in the background population was 63.0 (95% CI 63.0–63.4), based on 330 843 new diagnoses of cancer.

The incidence rate stratified by age in the HF cohort was 44.7 (95% CI 24.6–64.7) for those aged <50 years, and the corresponding incidence rate in the background population was 19.5 (95% CI 19.4–19.7). In the age group 50–59 years, the incidence rate was 124.9 (95% CI 101.5–148.3) in the HF cohort vs. 96.2 (95% CI 95.5–96.8) in the background population. As regards patients aged 60–69, the incidence rate was 214.5 (95% CI 191.1–237.9) vs. 165.9 (95% CI 164.8–167.0) in the background population, and for age 70–79 years it was 226.3 (95% CI 203.0–249.7) vs. 197.1 (95% CI 195.6–198.5). At age ≥80 years the HF cohort had an incidence rate of 213.5 (95% CI 182.6–244.4) vs. 110.5 (95% CI 109.1–111.8) in the background population.
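A crude incidence rate of this kind is simply events divided by person-time, with a confidence interval from the Poisson approximation. The sketch below back-calculates the cohort's person-time from the reported rate (roughly 51 615 patient-years, an assumption rather than a figure stated in the paper) and approximately reproduces the published estimate:

```python
import math

def incidence_rate(events, person_years, per=10_000):
    """Crude incidence rate per `per` person-years, with a 95% CI from the
    Poisson approximation (SE of the rate = rate / sqrt(events))."""
    rate = events / person_years * per
    se = rate / math.sqrt(events)
    return rate, (rate - 1.96 * se, rate + 1.96 * se)

# 975 cancers over ~51 615 patient-years; the person-time here is
# back-calculated from the reported rate, not given in the paper
rate, (lo, hi) = incidence_rate(975, 51_615)
print(f"{rate:.1f} per 10 000 patient-years (95% CI {lo:.1f} to {hi:.1f})")
```

The width of the interval is driven by the event count: with 975 events in the HF cohort the CI spans about ±6%, whereas the background population's 330 843 events make its CI almost negligible.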

Cumulative incidence of cancer

Cumulative incidence of cancer in the HF cohort and the background population adjusted for death by all causes is shown in Figure 2. Stratified into age groups of <60 years, 60–69 years, and ≥70 years, the cumulative incidence was significantly higher among HF patients than in the background population in all age groups. Sensitivity analysis with elimination of diagnoses of cancer within 180 and 365 days after the HF diagnosis did not significantly change the incidence.

Figure 2.

Cumulative incidence of cancer in the heart failure (HF) cohort and the background population adjusted for death from all causes.

The age distribution in HF patients and the background population younger than 60 years was different, with HF patients being older than the background population, with a mean age of 51.6 (SD ± 7.1) vs. 38.6 (SD ± 11.9) years.

The median time from the diagnosis of HF to a diagnosis of cancer was 862 days [interquartile range (IQR) 363–1475].

Risk of cancer

The multivariable Poisson regression analysis demonstrated an incidence rate ratio (IRR) of 1.24 (95% CI 1.15–1.33, P < 0.0001) for a diagnosis of cancer in the HF cohort compared with subjects without HF. After exclusion of all diagnoses of cancer within the first 180 and 365 days after the diagnosis of HF, the risk remained significantly increased, with an IRR of 1.17 (95% CI 1.08–1.27, P = 0.0001) and 1.14 (95% CI 1.05–1.24, P = 0.0029), respectively.

Analysis of co-morbidity within the HF cohort revealed previous myocardial infarction as associated with an increased risk of cancer, with an IRR of 1.23 (95% CI 1.03–1.47, P = 0.022), while no association was found for diabetes, with an IRR of 0.97 (95% CI 0.77–1.22, P = 0.811).

Regarding the pharmacological treatment of the HF cohort, an association between ACE inhibitors and an increased risk of cancer was found, with an IRR of 1.27 (95% CI 1.02–1.58, P = 0.030). No association was demonstrated between risk of cancer and beta-blockers, aspirin, aldosterone antagonists, or statins. The IRRs are shown in Figure 3.

Figure 3.

Multivariable Poisson regression analyses showing incidence rate ratios (IRRs) for development of cancer in the heart failure (HF) cohort relative to the background population and within the HF cohort in relation to co-morbidity and pharmacological treatment. CI, confidence interval.

Subtypes of cancer

A subtype-specific Cox regression analysis adjusted for age and gender demonstrated a hazard ratio (HR) of 1.81 (95% CI 1.54–2.12, P < 0.0001) for lung cancer which, together with skin cancer, with an HR of 1.84 (95% CI 1.57–2.15, P < 0.0001), were the two most common malignant diagnoses in the HF cohort, constituting proportions of 15.7% and 16.3%, respectively. The HR for cancer in the kidney and urinary system was 1.75 (95% CI 1.41–2.18, P < 0.0001) and in the liver/biliary system was 1.60 (95% CI 1.20–2.13, P = 0.0015). The risk of lymph/blood cancer and colon/rectal cancer was shown to be increased, with HRs of 1.45 (95% CI 1.14–1.85, P = 0.0027) and 1.24 (95% CI 1.04–1.49, P = 0.0180), respectively. Women with HF had an increased risk of breast cancer, with an HR of 1.36 (95% CI 1.02–1.81, P = 0.038), while for men the risk of prostate cancer was not increased (HR 1.04, 95% CI 0.88–1.24, P = 0.6345). The subtype-specific HRs of cancer are shown in Figure 4.

Figure 4.

Subtype-specific Cox regression analysis adjusted for age and gender showing hazard ratios (HRs) for major types of cancers in the heart failure cohort. CI, confidence interval.


Mortality was significantly higher in HF patients with a diagnosis of cancer than in patients with a diagnosis of cancer from the background population without HF (log-rank P < 0.0001). By stratifying for age as presented in Figure 5, it was apparent that the prognosis of HF patients <60 years and aged 60–69 with cancer corresponded to the prognosis of cancer patients without HF aged 60–69 and ≥70, respectively, while the difference was much less pronounced for patients ≥70 years.

Figure 5.

Survival for patients with a diagnosis of cancer in the heart failure (HF) population and for patients with a diagnosis of cancer from the background population without HF (No HF) stratified according to age.


In this large cohort study of Danish HF patients, we observed an increased risk of cancer compared with the background population. In a subtype-specific analysis, the risk was increased for all major types of cancer except for prostate cancer. In the event of a diagnosis of cancer, the prognosis was worse in the HF cohort compared with cancer patients from the background population, and this was most pronounced in the younger subjects of the HF population.

The increased incidence of cancer observed in the present study is in agreement with a recent study of 596 American HF patients, which reported a 60% increased risk of cancer. The risk remained increased after adjustment for shared risk factors such as smoking and body mass index.[6] Even though our cohort was ∼5 years younger, with an average age of 67.8 years compared with the American cohort, the two cohorts were comparable with regard to co-morbidities such as diabetes and prior myocardial infarction.

In addition, in a cohort of patients with HF and preserved EF, a 16% baseline prevalence of cancer has recently been reported.[11]

The present results should be interpreted with some caution, as we cannot exclude that the investigations leading to the diagnosis of HF also may have led to incidental diagnosis of cancer.

Thus, surveillance bias could partly explain the increased risk of cancer observed immediately after the HF diagnosis was established. In agreement with this, sensitivity analysis eliminating diagnoses of cancer up to 1 year after HF was diagnosed did lower the risk estimate of developing cancer in the HF cohort, although not significantly. Despite this, the median time between the diagnosis of HF and the diagnosis of cancer, 2 years and 4 months, indicates a persisting risk of cancer after the first period of intensive diagnosis and up-titration of medication associated with evaluation and management of HF in a hospital setting.

In the youngest subjects of the HF cohort, the increased mortality related to a diagnosis of cancer compared with mortality among cancer patients in this age category without HF should be interpreted with caution, since the background population aged 18–59 years were considerably younger, which may be associated with different kinds of cancers, less co-morbidity, and perhaps more aggressive treatment. Although this cannot be determined from the present study, it seems likely that the management of cancer is less aggressive in patients with HF, who would be expected to tolerate chemotherapy less well than patients with a better performance status.[12]

The observed increased mortality in the HF cohort could possibly be associated with the distribution of different types of cancers in the HF cohort and in the patients with cancer without HF. According to the Nordcan Database,[13] which contains information on incidence, prevalence, and survival statistics from 50 major types of cancer in the Nordic countries, the 5-year survival after lung cancer in Danish men is only 12%. Since lung cancer was more common in the HF cohort, this could possibly lead to the overall increased mortality in the HF cohort. However, it could be argued that this was balanced by a more favourable prognosis of other over-represented types of cancer such as kidney and bladder cancer.

Since convincing evidence dismissing a correlation between increased risk of cancer and the use of ACE inhibitors has been published,[14, 15] we interpret the association suggested by our analysis as likely to be a random effect or a result of selection bias, based on an increased life expectancy with a longer time to develop cancer among ACE inhibitor-tolerant HF patients compared with patients not tolerant of ACE inhibitor treatment. The multivariable analysis also suggests an association between ischaemic heart disease and incidence of cancer, which could be a confounder enhancing the risk of cancer found in the HF cohort.

The reason for an increased cancer risk in HF is not apparent from the present study. However, inflammation is an established component of carcinogenesis,[16] and, since HF is characterized by chronic inflammation and neurohormonal activation,[17, 18] it may be speculated that chronic inflammation plays a role in the increased risk of development of cancer in HF. This is supported by growing evidence suggesting an increased risk of cancer among patients with other chronic diseases with an inflammatory component such as type 2 diabetes, where breast, colorectal, pancreatic, and oesophageal cancer have been suggested to be more frequent and to be associated with a poor prognosis.[19-21] At the tissue level, an association between chronic inflammation and cancer has also been demonstrated for inflammatory bowel disease and colorectal cancer.[22]

Strengths and limitations

Some methodological strengths and limitations of the study should be acknowledged. Given the observational design of the study, the results should be considered hypothesis generating with no possibility of determining causality of observed associations. However, the results are strengthened by the large size of the cohort and the complete follow-up due to the linking of several nationwide registries by the unique civil registration number of all Danish citizens. Since it has been compulsory by legislation to report all diagnoses of cancer in Denmark since 1987, we are confident that our information on cancer incidence is accurate.[23]

The comparison of incidence of cancer between the HF cohort and the total Danish background population will be skewed for the youngest fraction under 50 years of age since the background population will consist of all citizens aged from 18 to 49 years and the vast majority of HF patients in this age group are older than 40 years.

Due to shared risk factors for HF, ischaemic heart disease, and cancer,[24] adjustment for smoking would have been appropriate; unfortunately, this information was not available for our analysis.

Finally, the generalizability of the results from this study is limited to HF patients primarily with systolic dysfunction and mild to moderate symptoms, who were considered eligible for up-titration in guideline-driven HF medication and patient education in a HF clinic. Our findings cannot be extrapolated to patients with advanced or terminal HF or to HF patients with preserved LVEF. Since the cohort consisted of mainly Caucasians, extrapolation to other ethnicities is debatable.


Patients with HF carry an increased risk of all major types of cancer, with the exception of prostate cancer, compared with the general population, and when diagnosed with cancer their all-cause mortality is higher than that of cancer patients from the background population without HF. Surveillance bias might explain some of the increased risk. Nevertheless, increased awareness of early signs of cancer is warranted among HF patients to attain an earlier diagnosis and treatment, since the risk remains increased several years after the HF diagnosis.

The ‘hygiene hypothesis’ for autoimmune and allergic diseases: an update


According to the ‘hygiene hypothesis’, the decreasing incidence of infections in western countries and more recently in developing countries is at the origin of the increasing incidence of both autoimmune and allergic diseases. The hygiene hypothesis is based upon epidemiological data, particularly migration studies, showing that subjects migrating from a low-incidence to a high-incidence country acquire the immune disorders with a high incidence at the first generation. However, these data and others showing a correlation between high disease incidence and high socio-economic level do not prove a causal link between infections and immune disorders. Proof of principle of the hygiene hypothesis is brought by animal models and to a lesser degree by intervention trials in humans. Underlying mechanisms are multiple and complex. They include decreased consumption of homeostatic factors and immunoregulation, involving various regulatory T cell subsets and Toll-like receptor stimulation. These mechanisms could originate, to some extent, from changes in microbiota caused by changes in lifestyle, particularly in inflammatory bowel diseases. Taken together, these data open new therapeutic perspectives in the prevention of autoimmune and allergic diseases.

Changes of lifestyle in industrialized countries have led to a decrease of the infectious burden and are associated with the rise of allergic and autoimmune diseases, according to the ‘hygiene hypothesis’. The hypothesis was first proposed by Strachan, who observed an inverse correlation between hay fever and the number of older siblings when following more than 17 000 British children born in 1958 [1]. The original contribution of our group to the field was to propose for the first time that it was possible to extend the hypothesis from the field of allergy, where it was formulated, to that of autoimmune diseases such as type 1 diabetes (T1D) or multiple sclerosis (MS) [2]. The leading idea is that some infectious agents – notably those that co-evolved with us – are able to protect against a large spectrum of immune-related disorders. This review summarizes in a critical fashion recent epidemiological and immunological data as well as clinical studies that corroborate the hygiene hypothesis.

The strongest evidence for a causal relationship between the decline of infections and the increase in immunological disorders originates from animal models and a number of promising clinical studies, suggesting the beneficial effect of infectious agents or their composites on immunological diseases.

In this review, we shall attempt to evaluate the arguments in favour of the hygiene hypothesis with particular interest on the underlying mechanisms.

Evolving epidemiology of allergic and autoimmune diseases

The rising incidence of atopic and autoimmune diseases

In 1998, about one in five children in industrialized countries suffered from allergic diseases such as asthma, allergic rhinitis or atopic dermatitis [3]. This proportion has tended to increase over the last 10 years, with asthma becoming an ‘epidemic’ phenomenon [4]. The increasing prevalence of asthma is marked in developed countries (more than 15% in the United Kingdom, New Zealand and Australia), but also in developing countries, as illustrated by a prevalence greater than 10% in Peru, Costa Rica and Brazil. In Africa, South Africa is the country with the highest incidence of asthma (8%) [5]. Unfortunately, data from most other African countries are unavailable [6]. The prevalence of atopic dermatitis has doubled or tripled in industrialized countries during the past three decades, affecting 15–30% of children and 2–10% of adults [7]. In parallel, there is also an increase in the prevalence of autoimmune diseases such as T1D, which now occurs earlier in life than in the past, becoming a serious public health problem in some European countries, especially Finland, where an increasing number of cases in children 0–4 years of age has been reported [8]. The incidence of inflammatory bowel diseases (IBD), such as Crohn’s disease or ulcerative colitis [2], and of primary biliary cirrhosis [9] is also rising. Part of the increased incidence of these diseases may be attributed to better diagnosis or improved access to medical facilities in economically developed countries. However, this cannot explain the marked increase in the prevalence of immunological disorders that has occurred over such a short period of time in those countries, particularly for diseases which can be diagnosed easily, such as T1D or MS [10–12].

The decreasing incidence of infectious diseases

Public health measures were taken after the industrial revolution by western countries to limit the spread of infections. These measures comprised decontamination of the water supply, pasteurization and sterilization of milk and other food products, adherence to the cold-chain procedure, vaccination against common childhood infections and the wide use of antibiotics. The decline is particularly clear for hepatitis A (HAV) and childhood diarrhoea, and perhaps even more spectacular for parasitic diseases such as filariasis, onchocerciasis, schistosomiasis and other soil-transmitted helminthiases [13]. In countries where good health standards do not exist, people are chronically infected by these various pathogens, and the prevalence of allergic diseases remains low. Interestingly, several countries that have eradicated those common infections are seeing the emergence of allergic and autoimmune diseases.

Uneven distribution

The geographical distribution of allergic and autoimmune diseases is a mirror image of the geographical distribution of various infectious diseases, including HAV, gastrointestinal infections and parasitic infections. There is an overall North–South gradient for immune disorders in North America [14], Europe [2] and also in China [15], with intriguing exceptions such as asthma in South America or T1D and MS in Sardinia. There is also a West–East gradient in Europe: the incidence of T1D in Bulgaria or Romania is lower than in western Europe, but is increasing fast [16]. This gradient cannot be fully explained by genetic differences. Indeed, the incidence of diabetes is sixfold higher in Finland than in the adjacent Karelian republic of Russia, although the genetic background is the same [17].

Additionally, migration studies have shown that offspring of immigrants coming from a country with a low incidence acquire the same incidence as the host country, as early as the first generation for T1D [18] and MS [19,20]. This is well illustrated by the increasing frequency of diabetes in families of immigrants from Pakistan to the United Kingdom [21], or the increasing risk of MS in Asian immigrants moving to the United States [22]. The prevalence of systemic lupus erythematosus (SLE) is also much higher in African Americans than in West Africans [23].

These data do not exclude the importance of genetic factors for those immunological disorders, as assessed by the high concordance of asthma, T1D or IBD in monozygotic twins: for example, the concordance rate for atopic dermatitis among monozygotic twins is high (77%) compared to dizygotic twins (15%) [7]. The difference in some genetic factors according to ethnicity [human leucocyte antigen (HLA) gene difference between Caucasian and Asian, for example] is well documented, but probably plays a minor role in geographical distribution in view of migrant data.

Search for risk factors at the origin of the increase of immunological disorders

Several factors, such as socio-economic indices, may explain the difference in the prevalence of immunological disorders according to time and geographical distribution. In fact, there is a positive correlation between gross national product and incidence of asthma, T1D and MS in Europe [2]. This is true at the country level, but also at that of smaller regions, such as Northern Ireland, where the low incidence of T1D is correlated with low average socio-economic level, as evaluated by conventional indices [24]. Similar results have been obtained in the province of Manitoba in Canada for Crohn’s disease [25]. This correlation has even been demonstrated at the individual level for atopic dermatitis, as family income is correlated directly with the incidence of the disease [26]. However, this does not pinpoint which factor within the socio-economic indices is directly responsible for the immunological disorder. Several epidemiological studies have indicated a positive correlation between sanitary conditions and T1D [24] or MS [27], suggesting a possible role of infections. Other factors are often incriminated, such as air pollution for asthma [28], but their role has not been demonstrated convincingly. For example, it has been shown that in East Germany before the fall of the Berlin Wall, where the air pollution was greater, the incidence of asthma was lower than in West Germany [29].

Vitamin D production is linked to sun exposure, and has been shown recently to have immunomodulatory effects [30]. However, this does not explain the West–East gradient of T1D in Europe, or the huge difference between Finland and its neighbouring Karelian region, where people have the same sun exposure level [31].

Epidemiological data indicating a direct link between the decreasing level of infectious burden and the rising incidence of immunological disorders

Several epidemiological studies have investigated the protective effect of infectious agents in allergic and autoimmune diseases. The presence of one or more older siblings protects against development of hay fever and asthma [1], of MS [32] and also of T1D [33], as does attendance at day care during the first 6 months of life in the case of atopic dermatitis and asthma [34].

Interestingly, exposure to farming and cowsheds early in life prevents atopic diseases, especially if the mother is exposed during pregnancy [35,36]. It has also been shown that prolonged exposure to high levels of endotoxin during the first year of life protects from asthma and atopy [37]. However, these data have been contradicted by other studies showing an increased prevalence of asthma correlated with higher levels of endotoxin in urban housing [38,39]. Endotoxin levels are higher on farms than in cities, and subjects on farms are in contact with a greater variety of microbial compounds, which could explain this discrepancy.

Do helminth parasites protect against atopy? Epidemiological data from cross-sectional studies revealed that Schistosoma infections have a strong protective effect against atopy, as reviewed recently [40]. Hookworms such as Necator americanus also seem to protect from asthma. In contrast, Ascaris lumbricoides and Trichuris trichiura have no significant effect on disease. Parasitic infections have been almost eradicated in European countries since the Second World War, concomitant with the increase of atopy and allergy. This trend can explain part of the epidemiological difference between Europe and Africa, but cannot readily explain the intra-European North–South gradient.

Proof of principle of the causal relationship between decline of infectious diseases and increase of immunological disorders

We have seen that there is a strong correlation between changes in lifestyle and modifications of the incidence of allergic or autoimmune diseases, but this does not prove a causal relationship between these two observations. This is a crucial question, as many factors unrelated to infections are a consequence of lifestyle, such as food habits, quality of medical care or dinner time gradient from North to South Europe. The answer to this question comes from animal models of autoimmune and allergic diseases and, to a lesser degree, from clinical intervention studies.

Animal models

The incidence of spontaneous T1D is directly correlated with the sanitary conditions of the animal facilities, for both the non-obese diabetic (NOD) mouse [2] and the bio-breeding diabetes-prone (BB-DP) rat [41]: the lower the infectious burden, the higher the disease incidence. Diabetes has a very low incidence, and may even be absent, in NOD mice bred in ‘conventional’ facilities, whereas the incidence is close to 100% in female mice bred in specific pathogen-free (SPF) conditions. Conversely, infection of these ‘clean’ NOD mice with a wide variety of bacteria, viruses and parasites protects them completely from diabetes [2]. Similarly, mycobacteria (e.g. complete Freund’s adjuvant) prevent induction of experimental autoimmune encephalomyelitis [42] and ovalbumin-induced allergic asthma [43]. Data obtained in our laboratory show that living pathogens are not required, as bacterial extracts are sufficient to afford protection [44].

Increased atopy after anti-parasitic treatments

It has been shown that helminth eradication increases atopic skin sensitization in Venezuela [45], in Gabon [46] and in Vietnam [40]. However, in a small study of 89 Venezuelan adults and children with asthma there was a clinical improvement, and specific immunoglobulin E (IgE) levels decreased after anti-helminthic treatment [47], while a similar deworming treatment showed no effect in Ecuador [48]. It is difficult to explain these contradictory data, which may relate to the complexity of asthma pathophysiology. In the same vein, one might also mention the increased atopy observed after vaccination with Streptococcus pneumoniae in South Africa [49].

Prevention of allergic and autoimmune diseases by infections

In a prospective study in Argentina, 12 patients with MS and high peripheral blood eosinophilia were followed. These patients presented parasitic infections and showed fewer MS exacerbations, together with increased interleukin (IL)-10 and transforming growth factor (TGF)-β secretion by peripheral blood mononuclear cells (PBMC) [50].

Similarly, deliberate administration of ova from the swine-derived parasite Trichuris suis, every 3 weeks for 6 months to 29 patients with active Crohn’s disease, improved symptoms in 21 of 29 patients (72%) with no adverse events [51]. T. suis ova were also given to patients with active ulcerative colitis, with significant improvement (43% improvement versus 17% for placebo) [52].

Another helminth, Necator americanus, has also been used in Crohn’s disease, patients being inoculated subcutaneously with infective larvae. There was a slight improvement of symptoms, but the disease reactivated when immunosuppressive drugs were reduced [53].

Probiotics
Probiotics are non-pathogenic microorganisms that are assumed to exert a positive influence on host health and physiology [54]. Encouraging results were first shown in a double-blind randomized placebo-controlled trial in Finland, where Lactobacillus GG taken daily by expectant mothers for 2–4 weeks before delivery and postnatally for 6 months could significantly decrease the incidence of atopic dermatitis [55]. This perinatal protection lasted up to 7 years [56]. Another trial showed improvement of atopic dermatitis using other strains of probiotics [57]. However, a German group using the same protocol did not find any protective effect after 2 years [58]. Additionally, a recent study of 445 pregnant women in Finland who were treated with the same protocol as the initial Finnish study, but with freeze-dried Lactobacillus GG, failed to show any significant effect on eczema, allergic rhinitis or asthma 5 years after treatment. The only difference observed was a decrease in IgE-associated allergic disease in caesarean-delivered children [59].

In T1D, only experimental data are available. The protective effect of a probiotic [60] and of a bacterial extract [44] on the onset of diabetes in NOD mice has been reported. A pilot study in humans, the PRODIA study (probiotics for the prevention of beta cell autoimmunity in children at genetic risk of type 1 diabetes), was begun in 2003 in Finland in children carrying genes associated with disease predisposition [61].

The case of probiotics in IBD is more complex because of a possible local anti-inflammatory effect, which could explain relief of symptoms without the changes in disease progression implicated in the hygiene hypothesis. Following a number of uncontrolled studies, probiotic treatment in a small cohort of 14 paediatric patients with newly diagnosed ulcerative colitis induced a significant rate of remission compared to the control group (93% versus 36%) and a lower relapse rate [62].

In brief, there are data suggesting that probiotics may have a favourable role in immune disorders, but the case is far from proven and requires further investigation. Additionally, although side effects are very rare, they are not non-existent, as shown in a set of patients with acute pancreatitis [63]. Thus, probiotics should not be considered totally harmless, particularly in the immunodeficient host, and more safety studies are needed. As mentioned by Sharp et al., ‘probiotics may have unpredictable behaviour like all microorganisms, such as unanticipated gene expression in non-native host environment, or acquired mutations occurring spontaneously via bacterial DNA-transfer mechanisms’.

Is there a role for microbiota changes in the hygiene hypothesis?

The human gut is the natural niche for more than 10¹⁴ bacteria of more than 1000 different species [64]. Immediately after birth, the human gut is colonized by different strains of bacteria. This commensal microbiota is important in shaping the immune system, for other basic physiological functions [65] and for the integrity of the intestinal barrier [66]. Interestingly, the intestinal flora was different in a small group of allergic Estonian and Swedish children compared to the control group, with a higher count of aerobic bacteria such as coliforms and Staphylococcus aureus, and a decreased proportion of Lactobacilli and of anaerobes such as Bifidobacterium or Bacteroides [67,68]. However, this difference was not seen in a larger birth cohort study comparing three European baby populations [69]. Additionally, this study showed a slower acquisition of typical faecal bacteria such as Escherichia coli, especially in children delivered by caesarean section or children without siblings. It should be noted that all these studies were based on the analysis of culturable bacteria, and that only atopic dermatitis and skin prick tests were evaluated.

In autoimmune diseases the microbiota also seems to modulate the immune response. In NOD mice deficient for the myeloid differentiation primary response gene 88 (MyD88) signalling molecule, it has been shown that the microbiota protect mice from diabetes via a MyD88-independent pathway [70]. Using a metagenomic approach, it has been demonstrated that the biodiversity of the faecal microbiota of patients with Crohn’s disease is diminished, especially for the Firmicutes phylum [71,72]. Faecalibacterium prausnitzii is one of the Firmicutes that was particularly depleted, and it has been shown that this deficient commensal bacterium could improve IBD in a murine model of the disease [73]. This protective effect was also obtained with the supernatant of F. prausnitzii culture, demonstrating the importance of one of the secreted molecules for its anti-inflammatory effect. Another bacterium, Bacteroides fragilis, has also been shown to protect animals from experimental colitis, and this protective effect was linked to a single microbial molecule, polysaccharide A [74]. As mentioned above with regard to IBD, these data must be interpreted with caution before extrapolating to other autoimmune disorders where the disease site is extra-intestinal. First, the respective anti-inflammatory and immunomodulatory effects of protective bacteria remain to be determined. Secondly, this protective effect should be discussed in the context of disease-promoting bacteria such as Helicobacter hepaticus.

In brief, there is an increasing amount of data showing that microbiota changes could contribute to the modulation of immune disorders but evidence is still slim, except in IBD. It is to be hoped that studies which provide a fair description of the molecular changes following intestinal infections will help in analysing the question further. The recent report by Fumagalli et al. is a good illustration of this new approach [75].

Mechanisms of the hygiene hypothesis

When considering the multitude of infectious agents that can induce protection from various immunological disorders, it is not surprising that more than one single mechanism has been found.

T helper type 1 (Th1)–Th2 deviation

Th1–Th2 deviation was the first major candidate mechanism proposed to explain the protective influence of infectious agents against immunological disorders. Th1 cells produce inflammatory cytokines such as IL-2, interferon (IFN)-γ and tumour necrosis factor (TNF)-α that are operational in cell-mediated immunity (including autoimmune diabetes). In contrast, Th2 cells, which produce IL-4, IL-5, IL-6 and IL-13, contribute to IgE production and allergic responses. Given the reciprocal down-regulation of Th1 and Th2 cells, some authors initially suggested that in developed countries the lack of microbial burden in early childhood, which normally favours a strong Th1-biased immunity, redirects the immune response towards a Th2 phenotype and therefore predisposes the host to allergic disorders. The problem with this explanation is that autoimmune diseases, which in most cases are Th1 cell-mediated, are nonetheless prevented by infections that themselves induce a Th1 response, and that atopy may be prevented, as seen above, by parasites that induce a Th2 response. These observations fit with the concept of a common mechanism underlying infection-mediated protection against allergy and autoimmunity. Several hypotheses may explain these common mechanisms.

Antigenic competition/homeostasis

It has been known for several decades that two immune responses elicited by distinct antigens occurring simultaneously tend to inhibit each other. Numerous mechanisms have been invoked to explain antigenic competition that might be pertinent to the hygiene hypothesis. The development of strong immune responses against antigens from infectious agents could inhibit responses to ‘weak’ antigens such as autoantigens and allergens. Among the mechanisms explaining antigenic competition, attention has recently been drawn to lymphocyte competition for cytokines, for recognition of major histocompatibility complex (MHC)/self-peptide complexes, and for growth factors necessary for the differentiation and proliferation of B and T cells during immune responses, within the frame of lymphocyte homeostasis. Similarly to the red blood cell mass, which is restored to normal levels after a haemorrhage with the help of erythropoietin, CD4 and CD8 T lymphocytes are restored to normal levels after a lymphopenia. The homeostatic factors that play a role equivalent to that of erythropoietin have not been elucidated completely; however, cytokines such as IL-2, IL-7 and IL-15 are known to play a crucial role. Regulatory T cells, which we discuss below, may also be implicated in the mechanism of antigenic competition.

Regulatory T cells
Another mechanism involves regulatory T cells, which can suppress immune responses distinct from the responses against the antigen in question, here antigens expressed by infectious agents (a phenomenon called bystander suppression). The problem is complicated by the multiplicity of regulatory lymphocytes, involving diverse cytokines that mediate their differentiation or their regulatory effects. The role of CD4+CD25+ forkhead box P3 (FoxP3)+ T cells has been suggested by transfer experiments performed in a murine parasite model [76]. The role of such cells is also suggested by the observation that CD28−/− NOD mice, devoid of CD4+CD25+FoxP3+ regulatory T cells (Tregs), lose their sensitivity to the protective effect of bacterial extract (our unpublished data). It has also been reported that CD25+FoxP3+ cells were up-regulated in cord blood from newborns of mothers exposed to farming [77]. This observation should be interpreted with caution because of the uncertain specificity of these markers in man.

Other data suggest a role for IL-10-producing B cells [78], natural killer (NK) T cells [79] and more generally cytokines such as IL-10 [80] and TGF-β[81] whatever the cell type producing these cytokines. More work is needed in experimental models to delineate further the involvement of regulatory mechanisms in the protective effects of the various infections relevant to the hygiene hypothesis. It might emerge that different mechanisms are operational according to the protective infection.

Non-antigenic ligands

All the mechanisms mentioned previously are based on the notion that the hygiene effect is due to the decrease of immunological responses elicited against infectious agents. A number of experiments indicate that infectious agents can promote protection from allergic diseases through mechanisms independent of their constitutive antigens, leading to stimulation of non-antigen specific receptors. This concept is well illustrated by the example of Toll-like receptors (TLRs). Knowing the capacity of TLRs to stimulate cytokine production and immune responses, it might be predicted that TLR stimulation by infectious ligands should trigger or exacerbate allergic and autoimmune responses. This has indeed been demonstrated in some experimental models [82,83].

Surprisingly and paradoxically, it has also been observed that TLR stimulation can prevent the onset of spontaneous autoimmune diseases such as T1D in NOD mice, an observation made for TLR-2, -3, -4, -7 and -9 [84] (and our unpublished data). In this model, treatment with TLR agonists before disease onset prevents disease progression completely. The mechanisms underlying such protection are still ill-defined, but could involve the production of immunoregulatory cytokines and the induction of regulatory T cells or NK T cells. Similar data have been obtained in an ovalbumin-induced model of asthma [85].

Concerning HAV, it was shown initially that atopic diseases were less common in subjects who had been exposed to the virus [86]. It was difficult to say whether this association was due to a direct protective effect of HAV infection, or was explained only by the fact that HAV exposure is a marker of poor hygiene. Data obtained by Umetsu et al. have shown that HAV can influence T cells directly, notably Th2 cells, which express the HAV receptor [87], a finding corroborated by the observation that atopy prevalence is associated with HAV receptor gene polymorphisms in anti-HAV antibody-positive subjects. In fact, recent data indicate that the HAV receptor, the TIM-1 protein (T cell, immunoglobulin domain and mucin domain), could play an important role in the severity of HAV infection and in its putative effect on atopic diseases.

Gene–environment interactions

An interesting approach to identifying mechanisms underlying allergic and autoimmune diseases consists in searching for associations between these diseases and polymorphisms of various genes, notably those coding for molecules involved in immune responses. It is interesting to note that such associations have been found for genes implicated in the control of infection. Among them, polymorphisms in genes of the innate immune response such as CD14, TLR2, TLR4, TLR6 or TLR10, and in intracellular receptors such as NOD1 and NOD2 [also known as caspase-recruitment domain (CARD)4 and CARD15, respectively], appear to be important [88,89]. Mouse studies have shown that these gene–environment interactions explain a proportion of the phenotypic variance. One of these genes is CD14, which is important in lipopolysaccharide (LPS)/TLR-4 signalling. Many association studies have highlighted the role of the CD14 C-159T polymorphism in allergic inflammation [90].

Therapeutic strategies

The notions presented above open new and interesting therapeutic perspectives for the prevention of allergic and autoimmune diseases. Of course, deliberately infecting children or adults at high risk of developing these diseases cannot be envisioned, at a time when medical progress has allowed the reduction of major infectious diseases. It should be mentioned, however, that even if we do not believe this is the best strategy for the future, some groups have used living parasites such as T. suis in the treatment of IBD, as mentioned above, or living Lactobacilli in the prevention of atopic dermatitis. These approaches present the obvious limitations of insufficient standardization, of hazards linked to an unpredictable disease course in subjects with an unrecognized immunodeficiency, and, in the case of swine-derived parasites, of possible contamination with xenogeneic viruses.

Conversely, the use of bacterial extracts, already shown to be efficacious in a number of experimental models and in the clinic, such as OM-85 in T1D, should be seriously envisioned [44]. These extracts, which represent a mixture of a wide spectrum of chemically ill-defined components, are also open to the criticism of poor standardization. On the other hand, they are a better representation of the various components of bacteria known for their protective effects. The same comments apply to parasitic extracts, shown to be effective in T1D [91]. In the longer term, one would like to use chemically defined components of protective infectious agents, such as TLR agonists, polysaccharide A or the active substance secreted by F. prausnitzii. In any event, the use of bacterial extracts or chemically defined products will be confronted with the double problem of the timing of administration (sufficiently early in the natural history of the disease) and of safety. Indeed, any side effect is unacceptable in young subjects who are apparently healthy and whose risk of developing the disease in question is not absolutely demonstrated.

Newly acquired kiwi fruit allergy after bone marrow transplantation from a kiwi-allergic donor



The phenomenon of allergy transfer from an allergic donor to a non-allergic recipient via hematopoietic cell transplantation has been described in several reports. However, it has not yet been conclusively shown that the allergic reaction of the recipient is elicited by the donor’s cells.


In the case of a 46-year-old male patient who, for the first time in his life, had two episodes of oral allergy syndrome upon kiwi consumption after having received myeloablative hematopoietic stem cell transplantation (HCT) from his kiwi-allergic sister, we aimed to clarify the origin of the allergen-reactive cells in the recipient. We not only intended to demonstrate whether the allergy was transferred by HCT, but also to present an experimental workup for the analysis of allergy transfer by HCT.


Allergic sensitization to kiwi in recipient and donor was proven by ImmunoCAP. Furthermore, the origin of peripheral blood mononuclear cells (PBMCs) was analyzed by chromosomal fluorescence in situ hybridization (FISH). To confirm allergic reaction and activation of hematopoietic cells by a customized kiwi extract, we performed basophil activation tests from whole blood as well as T cell proliferation assays from purified PBMCs of both recipient and donor.


Basophil activation upon stimulation with kiwi extract was demonstrated in both recipient and donor. In addition, we showed proliferation of CD4+ T cells after incubation with kiwi extract. FISH analysis proved that the hematopoietic cells of the male recipient originated completely from the female donor.


Exemplified in this patient, we show for the first time that allergy transfer is mediated by the donor’s cells. Moreover, our experimental approach using customized kiwi extract to prove contribution of kiwi-specific T and B cells in both kiwi-allergic recipient and donor could serve as a model approach for future studies.

The increased prevalence of allergy and the hygiene hypothesis: missing immune deviation, reduced immune suppression, or both?


Allergic atopic disorders, such as rhinitis, asthma, and atopic dermatitis, are the result of a systemic inflammatory reaction triggered by type 2 T helper (Th2) cell-mediated immune responses against ‘innocuous’ antigens (allergens) of complex genetic and environmental origin. A number of epidemiological studies have suggested that the increase in the prevalence of allergic disorders that has occurred over the past few decades is attributable to a reduced microbial burden during childhood, as a consequence of Westernized lifestyle (the ‘hygiene hypothesis’). However, the mechanisms by which the reduced exposure of children to pathogenic and nonpathogenic microbes results in enhanced responses of Th2 cells are still controversial. The initial interpretation proposed a missing immune deviation of allergen-specific responses from a Th2 to a type 1 Th (Th1) profile, as a result of the reduced production of interleukin-12 and interferons by natural immunity cells which are stimulated by bacterial products via their Toll-like receptors. More recently, the role of reduced activity of T regulatory cells has been emphasized. The epidemiological findings and the experimental evidence available so far suggest that both mechanisms may be involved. A better understanding of this question is important not only from a theoretical point of view, but also because of its therapeutic implications.

Air Quality and Temperature Effects on Exercise‐Induced Bronchoconstriction


Exercise-induced bronchoconstriction (EIB) is an exaggerated constriction of the airways, usually occurring soon after the cessation of exercise. It is most often a response to airway dehydration in the presence of airway inflammation in a person with responsive bronchial smooth muscle. Severity is related to the water content of the inspired air and to the level of ventilation achieved and sustained. Repetitive hyperpnea of dry air during training is associated with airway inflammatory changes and remodeling. A response during exercise that is related to pollution or allergen is also considered EIB. Ozone and particulate matter are the most widespread pollutants of concern for the exercising population; chronic exposure can lead to new-onset asthma and EIB. Freshly generated emission particulate matter smaller than 100 nm is the most harmful. Evidence exists for acute and long-term effects of exercising while inhaling high levels of ozone and/or particulate matter. Much evidence supports a relationship between the development of airway disorders and exercise in the chlorinated pool. Swimmers typically do not respond in the pool; however, a large percentage respond to a dry-air exercise challenge. Studies support an oxidative stress-mediated pathology for pollutants, with a more severe acute response in asthmatics. Winter sport athletes and swimmers have a higher prevalence of EIB, asthma and airway remodeling than other athletes and the general population. Because of fossil fuel-powered ice resurfacers, ice rink athletes have shown high rates of EIB and asthma. For the athlete training in the urban environment, training during low-traffic hours and in low-traffic areas is suggested.

Hypersensitivity to intravenous iron: classification, terminology, mechanisms and management


Intravenous (IV) iron therapy is widely used in iron deficiency anaemias when oral iron is not tolerated or ineffective. Administration of IV iron is considered a safe procedure, but severe hypersensitivity reactions (HSRs) can occur at a very low frequency. Recently, new guidelines have been published by the European Medicines Agency with the intention of making IV-iron therapy safer; however, the current protocols are still non-specific, non-evidence-based empirical measures that neglect the fact that the majority of IV-iron reactions are not IgE-mediated anaphylactic reactions. The field would benefit from new, specific and effective methods for the prevention and treatment of these HSRs, and the main goal of this review was to highlight a possible new approach based on the assumption that IV-iron reactions represent, at least in part, complement activation-related pseudo-allergy (CARPA). The review compares the features of IV-iron reactions to those of immune and non-immune HSRs caused by a variety of other infused drugs, and thus makes indirect inferences about IV-iron reactions. This comparison highlights many unresolved issues in allergy research, such as unsettled terminology, multiple redundant classifications and a lack of validated animal models and lege artis clinical studies. Facts and arguments are listed in support of the involvement of CARPA in IV-iron reactions, and the review addresses the mechanism of low-reactogenic administration protocols (LRPs) based on slow infusion. It is suggested that consideration of CARPA and the use of LRPs might lead to useful new additions to the management of high-risk IV-iron patients.

When should infants start to EAT? Is it time to LEAP? And other nutty insights

I suspect that few in the allergy world have not heard of three papers published in other journals detailing successful and unsuccessful preventative strategies for food allergy. Dr Robert Boyle’s comprehensive systematic review suggests that there is no consistent evidence to support the use of hydrolysed formula to prevent food allergy or other allergic diseases [1]. Professor Lack’s LEAP-On and EAT studies have demonstrated the effectiveness of the early introduction of common food allergens into the infant diet to prevent the development of food allergy [2, 3]. But before commenting further, I need to declare two conflicts of interest: I am a co-investigator on the LEAP study and I chaired the EAT Trial Steering Committee. I am, though, therefore very well placed to know just how hard both research teams had to work to deliver these two landmark studies.

Dr Boyle’s systematic review focused on intervention trials assessing the impact of hydrolysed formula on the development of allergic disease [1]. Using a rigorous approach, the authors undertook a meta-analysis which failed to find a beneficial effect on eczema, wheeze or food allergy. They felt that many studies were at uncertain or high risk of bias and found evidence of publication bias. Previous reviews and guidelines have suggested that there was some, albeit weak, evidence of benefit. Boyle et al. argue that these conclusions were driven in some cases by more positive results from lower-quality study designs. The LEAP-On study follows on from the LEAP study, published last year [4]. In the LEAP study, 640 infants with severe eczema, egg allergy or both were randomized to consume or avoid peanuts until 5 years of age. Infants in the intervention group were fed at least 6 g of peanut protein each week, split between three or more meals. In the intention-to-treat analysis, 17.2% of the avoidance group had peanut allergy at 5 years compared to only 3.2% of the consumption group (P < 0.001). The intervention was effective both in the infants with no weal to peanut at randomization and in those with a 1- to 4-mm weal, demonstrating both a primary and a secondary preventative effect. The intervention was safe.

The problem with the LEAP study is that it was not clear whether the effect was due to short-term desensitization or long-term tolerance. This is an important consideration, as the former would mean that regular peanut consumption must be continued to maintain protection. So the LEAP-On study tested the hypothesis that the rate of peanut allergy would remain low after 12 months of peanut avoidance in those who had consumed peanut during LEAP. It is a testament to the study team and families that 556 (88.5%) of eligible LEAP participants enrolled in LEAP-On and almost all were evaluable at 6 years of age. Additionally, 97.1% of the LEAP avoiders and 76.7% of the LEAP consumers managed to avoid peanuts over that year. Peanut allergy at 6 years continued to be much more prevalent in the LEAP avoiders than in the consumers (18.6% vs. 4.8%, P < 0.001). There was no statistically significant increase in peanut allergy in the LEAP consumption arm after a year of avoidance. So the LEAP-On study suggests that early introduction of peanut into the infant diet gives long-term tolerance, although the term we were asked to use in the paper was ‘stable unresponsiveness to peanut’. I will leave a discussion of the difference between ‘stable unresponsiveness to peanut’ and tolerance to a future editorial; suffice it to say that time and further follow-up will be required to see whether the preventative impact continues with ad lib consumption of peanut in the future.

So what does the EAT study tell us? The hypothesis was that the early introduction of a range of common food allergens into the diet of breastfed infants recruited from a general (not an at-risk) population would protect against the development of allergy to these foods [3]. A total of 1303 exclusively breastfed three-month-old infants were randomized to introduce six common food allergens (peanut, egg, cow’s milk, sesame, white fish and wheat) from 3 months of age or to be exclusively breastfed for the first 6 months of life, as per UK recommendations. For each food, parents were asked to feed their infant 2 g of protein twice a week. In an intention-to-treat analysis, 7.1% of the standard-introduction group and 5.6% of the early-introduction group developed food allergy to one or more of the six intervention foods up to 3 years of age, a statistically non-significant difference (P = 0.32).

However, adherence to the intervention was much lower in the EAT than in the LEAP study: 92.9% of the EAT standard-introduction group were per-protocol adherent, but only 42.8% of the early-introduction group were. Adherence was particularly poor for the consumption of egg, sesame and wheat; it was better for milk and peanut, at 86% and 63%, respectively. In contrast to the intention-to-treat analysis, an adjusted per-protocol analysis (taking into account those who were already food-allergic at randomization) showed a statistically lower cumulative prevalence of any food allergy in the early-introduction group than in the standard-introduction group (2.4% compared to 6.4%, P = 0.03). At an individual food level, 2 g per week of peanut protein or egg white protein consumption significantly reduced peanut (adjusted per-protocol analysis 2.5% vs. 0%, P = 0.003) and egg (5.2% vs. 1.4%, P = 0.02) allergy, respectively. The EAT study intervention was not effective for the other foods, but few participants developed milk, sesame, wheat or fish allergy.

The EAT authors were able to use the suboptimal adherence to their advantage by undertaking a dose–response analysis. Food allergy was less likely with an increasing number of foods consumed, increasing weekly amounts of each food consumed, and an increasing number of weeks over which the foods were consumed.

Thyroid Cancer Subtype Reclassified as Noncancer

An international panel of pathologists and clinicians has reclassified a type of thyroid cancer to reflect that it is noninvasive and has a low risk for recurrence.

The panel renamed encapsulated follicular variant of papillary thyroid carcinoma (EFVPTC) as noninvasive follicular thyroid neoplasm with papillary-like nuclear features (NIFTP).

“To my knowledge, this is the first time in the modern era a type of cancer is being reclassified as a noncancer. I hope that it will set an example for other expert groups to address nomenclature of various cancer types that have indolent behavior to prevent inappropriate and costly treatment,” senior investigator Yuri Nikiforov, MD, PhD, director of the Division of Molecular and Genomic Pathology, University of Pittsburgh School of Medicine, Pennsylvania, said in a statement.

The panel’s research that led to the name change is described in an article published online April 14 in JAMA Oncology.

“The change in nomenclature could affect the clinical care and management of more than 45,000 patients worldwide per year,” comments Kepal N. Patel, MD, of the Division of Endocrine Surgery, Thyroid Cancer Interdisciplinary Program, New York University Langone Medical Center, New York City.

In an accompanying editorial, Dr Patel says, “the reclassification of noninvasive EFVPTC to NIFTP is a timely and appropriate change.

“By removing the word cancer, the term NIFTP acknowledges the low malignant potential of these tumors,” Dr Patel writes. “This will potentially affect the way the disease is viewed by caregivers and patients. It will eliminate the psychological impact of receiving a cancer diagnosis.

 “Furthermore, the new designation recognizes the appropriate biological behavior of this tumor and should decrease the overtreatment that the term cancer often breeds, which in turn will reduce treatment costs and also reduce the risk of patients being exposed to unnecessary risk.”

Incidence Rising in Recent Years

Dr Nikiforov and colleagues point out that the incidence of EFVPTC has risen two- to threefold during the past 20 to 30 years, and that it now makes up 10% to 20% of all thyroid tumors diagnosed in Europe and North America. This increased incidence has been explained by improvements in diagnosis and has been described as an “epidemic of diagnosis” rather than a true increase in disease.

The researchers also note that although mounting evidence points to highly indolent behavior of EFVPTC, most patients with this tumor type receive treatment as if they had conventional thyroid cancer, which includes surgery to have the thyroid gland removed followed by radioactive iodine treatment.

“Aside from the stigma of a ‘cancer’ diagnosis and the morbidity of aggressive treatment for PTC, patients and health care professionals have to cope with the rapidly increasing costs of care for patients with thyroid cancer, which were estimated to exceed $1.6 billion in 2013 in the United States alone,” the team writes.

At the recommendation of the National Cancer Institute, the panel sought to revise the terminology and to see whether the word “cancer” could be dropped from the name.

An international team of pathologists independently reviewed 268 tumor samples diagnosed as EFVPTC from 13 institutions. They established diagnostic criteria, including cellular features, tumor invasion, and other factors. In 109 patients with noninvasive EFVPTCs, there were no recurrences or other manifestations of the disease at a median follow-up of 13 years, the panel found. On the basis of this information regarding outcomes, as well as other information, the panel decided to rename EFVPTC as NIFTP.

“We determined that if NIFTP is carefully diagnosed, the tumor’s recurrence rate is extremely low, likely less than 1% within the first 15 years,” Dr Nikiforov said in the statement.

“The cost of treating thyroid cancer in 2013 was estimated to exceed $1.6 billion in the US. Not only does the reclassification eliminate the psychological impact of the diagnosis of ‘cancer,’ it reduces the likelihood of complications of total thyroid removal and the overall cost of healthcare,” he added.

Move Away From “Cancer” Sets a Precedent

This move to stop using the term “cancer” to describe a tumor sets a precedent, but will it have an impact in other areas of oncology? There have been discussions for some time now regarding a move away from the word “cancer” in the description of early stages of both breast and prostate cancer.

 In 2013, a working group sanctioned by the National Cancer Institute proposed that a number of premalignant conditions, including ductal carcinoma in situ and high-grade prostatic intraepithelial neoplasia, should no longer be called “cancer.”

Instead, the conditions should be labeled something more appropriate, such as indolent lesions of epithelial origin (IDLE), the working group suggested. “Use of the term ‘cancer’ should be reserved for describing lesions with a reasonable likelihood of lethal progression if left untreated,” the group said at the time.

The proposal to move away from the word “cancer” for slow-growing prostate tumors had been aired earlier, in 2011, by an independent panel of the National Institutes of Health, but at the time, oncologists argued against the move, saying a change in name would confuse patients and arguing that “slow-growing cancer is still real cancer.”




Objective  To evaluate clinical outcomes, refine diagnostic criteria, and develop a nomenclature that appropriately reflects the biological and clinical characteristics of EFVPTC.

Design, Setting, and Participants  International, multidisciplinary, retrospective study of patients with thyroid nodules diagnosed as EFVPTC, including 109 patients with noninvasive EFVPTC observed for 10 to 26 years and 101 patients with invasive EFVPTC observed for 1 to 18 years. Review of digitized histologic slides collected at 13 sites in 5 countries by 24 thyroid pathologists from 7 countries. A series of teleconferences and a face-to-face conference were used to establish consensus diagnostic criteria and develop new nomenclature.

Main Outcomes and Measures  Frequency of adverse outcomes, including death from disease, distant or locoregional metastases, and structural or biochemical recurrence, in patients with noninvasive and invasive EFVPTC diagnosed on the basis of a set of reproducible histopathologic criteria.

Results  Consensus diagnostic criteria for EFVPTC were developed by 24 thyroid pathologists. All 109 patients with noninvasive EFVPTC (67 treated with lobectomy only; none received radioactive iodine ablation) were alive with no evidence of disease at final follow-up (median [range], 13 [10-26] years). An adverse event was seen in 12 (12%) of the 101 cases of invasive EFVPTC, with 5 patients developing distant metastases, 2 of whom died of disease. Based on the outcome information for noninvasive EFVPTC, the name “noninvasive follicular thyroid neoplasm with papillary-like nuclear features” (NIFTP) was adopted. A simplified diagnostic nuclear scoring scheme was developed and validated, yielding a sensitivity of 98.6% (95% CI, 96.3%-99.4%), specificity of 90.1% (95% CI, 86.0%-93.1%), and overall classification accuracy of 94.3% (95% CI, 92.1%-96.0%) for NIFTP.
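The sensitivity, specificity, and accuracy figures above are proportions derived from a 2×2 validation table, each reported with a 95% confidence interval. As a minimal sketch of how such figures are computed (using entirely hypothetical counts, not the study’s validation data, and Wilson score intervals as one common choice for proportions near 0% or 100%):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy with Wilson 95% CIs
    from true/false positive and negative counts."""
    n = tp + fn + tn + fp
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "accuracy": ((tp + tn) / n, wilson_ci(tp + tn, n)),
    }

# Hypothetical 2x2 counts for illustration only (NOT the study's data):
metrics = diagnostic_metrics(tp=71, fn=1, tn=118, fp=13)
for name, (point, (lo, hi)) in metrics.items():
    print(f"{name}: {point:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With the actual validation counts substituted, the same functions would reproduce the published point estimates; the exact interval bounds depend on which CI method the authors used.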

Conclusions and Relevance  Thyroid tumors currently diagnosed as noninvasive EFVPTC have a very low risk of adverse outcome and should be termed NIFTP. This reclassification will affect a large population of patients worldwide and result in a significant reduction in psychological and clinical consequences associated with the diagnosis of cancer.


This study was undertaken to reexamine the clinical and pathologic approach to noninvasive EFVPTC—a thyroid tumor that, despite increasing evidence of its indolent behavior, is nonetheless classified as cancer. The outcome data obtained in this study support renaming this tumor in a manner that more accurately reflects its behavior. Indeed, in our highly curated cohort of more than 100 noninvasive EFVPTCs there were no recurrences or other manifestations of the disease at a median follow-up of 13 years. This finding correlates with previous reports on noninvasive EFVPTC. In the English-language literature, only 2 (0.6%) of 352 well-documented noninvasive encapsulated/well-circumscribed FVPTCs recurred.14,15,23-27 One of the recurrent tumors had been incompletely excised, whereas in the other case the noninvasive nature of the tumor remains questionable. Even if these 2 cases of recurrence are accepted, the combined data suggest that in the absence of invasion this lesion entails a very low risk of adverse outcome and therefore should not be termed cancer.

The new proposed terminology, NIFTP, reflects key histopathologic features of this lesion, ie, lack of invasion, follicular growth pattern, and nuclear features of PTC. Molecular analysis performed in this study on a limited number of samples confirmed previous observations16,28 demonstrating that most of these lesions are driven by clonal genetic alterations and are therefore neoplasms rather than hyperplastic proliferations. When defined with strict histopathologic criteria, these tumors are not expected to show molecular alterations associated with classic PTC, such as BRAF V600E mutations. Instead, they demonstrate a high prevalence of RAS and other mutations, which have been associated with follicular-pattern thyroid tumors, including follicular adenoma (FA), follicular thyroid carcinoma (FTC), and EFVPTC.16,22,29 Furthermore, tumors analyzed in this study also recapitulate the FA to FTC sequence of progression with the capacity for invasion, suggesting that NIFTP likely represents the “benign” counterpart or precursor of the invasive EFVPTC (Figure 2).

Figure 2.
Putative Scheme of Thyroid Carcinogenesis

EFVPTC indicates encapsulated follicular variant of PTC; NIFTP, noninvasive follicular thyroid neoplasm with papillary-like nuclear features; PTC, papillary thyroid carcinoma.


We have defined a set of reproducible diagnostic criteria that accurately identify NIFTP. We have also shown that given the metastatic potential of the invasive tumors in group 2, adequate sampling of the tumor capsule interface to exclude invasion is imperative before designating a nodule as NIFTP. To our knowledge, adequacy of tumor capsule sampling has not been discussed in the literature to date with respect to FVPTC. Precedent can be drawn from the approach to the encapsulated FA/FTC tumors, in which histologic assessment of the entire lesional capsule is preferable to exclude a minimally invasive FTC.30 Thus, like FA, NIFTP should undergo extensive review of the tumor capsule interface to exclude invasion.

The results of this study, together with previously reported observations, suggest that when the diagnosis of NIFTP is made on the basis of careful histopathological examination, the tumor will have a low recurrence rate, likely less than 1% within the first 15 years. Of note, most differentiated thyroid carcinomas relapse within the first decade after initial therapy,31 although late recurrences and distant spread are documented.32 Importantly, a large proportion of patients with tumors diagnosed as NIFTP in the present study underwent lobectomy only and none received RAI ablation. This suggests that clinical management of patients with NIFTP can be deescalated because they are unlikely to benefit from immediate completion thyroidectomy and RAI therapy. Staging would be unnecessary. In addition to eliminating the psychological impact of the diagnosis of cancer, this would reduce complications of total thyroidectomy, risk of secondary tumors following RAI therapy, and the overall cost of health care.33,34 Avoidance of RAI treatment alone would save between $5000 and $8500 per patient (based on US cost).35 Decreased long-term surveillance would account for another substantial proportion of cost reduction.


The results of this international and multidisciplinary study establish that thyroid lesions currently diagnosed as noninvasive EFVPTC represent a distinct class of thyroid tumors with very low risk of adverse outcome. These tumors can be diagnosed using a set of reproducible diagnostic criteria and should be termed “noninvasive follicular thyroid neoplasms with papillary-like nuclear features” (NIFTP). We estimate that this reclassification would affect more than 45 000 patients worldwide each year (eTable 7 in the Supplement), thereby significantly reducing the psychological burden, medical overtreatment and expense, and other clinical consequences associated with a cancer diagnosis.
