Vitamin and Mineral Supplements

Dietary supplementation is approximately a $30 billion industry in the United States, with more than 90 000 products on the market. In recent national surveys, 52% of US adults reported use of at least 1 supplement product, and 10% reported use of at least 4 such products.1 Vitamins and minerals are among the most popular supplements and are taken by 48% and 39% of adults, respectively, typically to maintain health and prevent disease.

Despite this enthusiasm, most randomized clinical trials of vitamin and mineral supplements have not demonstrated clear benefits for primary or secondary prevention of chronic diseases not related to nutritional deficiency. Indeed, some trials suggest that micronutrient supplementation in amounts that exceed the recommended dietary allowance (RDA)—eg, high doses of beta carotene, folic acid, vitamin E, or selenium—may have harmful effects, including increased mortality, cancer, and hemorrhagic stroke.2

In this Viewpoint, we provide information to help clinicians address frequently asked questions about micronutrient supplements from patients, as well as promote appropriate use and curb inappropriate use of such supplements among generally healthy individuals. Importantly, clinicians should counsel their patients that such supplementation is not a substitute for a healthful and balanced diet and, in most cases, provides little if any benefit beyond that conferred by such a diet.

Clinicians should also highlight the many advantages of obtaining vitamins and minerals from food instead of from supplements. Micronutrients in food are typically better absorbed by the body and are associated with fewer potential adverse effects.2,3 A healthful diet provides an array of nutritionally important substances in biologically optimal ratios as opposed to isolated compounds in highly concentrated form. Indeed, research shows that positive health outcomes are more strongly related to dietary patterns and specific food types than to individual micronutrient or nutrient intakes.3

Although routine micronutrient supplementation is not recommended for the general population, targeted supplementation may be warranted in high-risk groups for whom nutritional requirements may not be met through diet alone, including people at certain life stages and those with specific risk factors (discussed in the next 3 sections and in the Box).


Key Points on Vitamin and Mineral Supplements

General Guidance for Supplementation in a Healthy Population by Life Stage
  • Pregnancy: folic acid, prenatal vitamins

  • Infants and children: for breastfed infants, vitamin D until weaning and iron from age 4-6 mo

  • Midlife and older adults: some may benefit from supplemental vitamin B12, vitamin D, and/or calcium

Guidance for Supplementation in High-Risk Subgroups
  • Medical conditions that interfere with nutrient absorption or metabolism:

    • Bariatric surgery: fat-soluble vitamins, B vitamins, iron, calcium, zinc, copper, multivitamins/multiminerals

    • Pernicious anemia: vitamin B12 (1-2 mg/d orally or 0.1-1 mg/mo intramuscularly)

    • Crohn disease, other inflammatory bowel disease, celiac disease: iron, B vitamins, vitamin D, zinc, magnesium

  • Osteoporosis or other bone health issues: vitamin D, calcium, magnesiumᵃ

  • Age-related macular degeneration: specific formulation of antioxidant vitamins, zinc, copper

  • Medications (long-term use):

    • Proton pump inhibitorsᵃ: vitamin B12, calcium, magnesium

    • Metforminᵃ: vitamin B12

  • Restricted or suboptimal eating patterns: multivitamins/multiminerals, vitamin B12, calcium, vitamin D, magnesium

ᵃ Inconsistent evidence.


Pregnancy
The evidence is clear that women who may become pregnant or who are in the first trimester of pregnancy should be advised to consume adequate folic acid (0.4-0.8 mg/d) to prevent neural tube defects. Folic acid is one of the few micronutrients more bioavailable in synthetic form from supplements or fortified foods than in the naturally occurring dietary form (folate).2 Prenatal multivitamin/multimineral supplements will provide folic acid as well as vitamin D and many other essential micronutrients during pregnancy. Pregnant women should also be advised to eat an iron-rich diet. Although it may also be prudent to prescribe supplemental iron for pregnant women with low levels of hemoglobin or ferritin to prevent and treat iron-deficiency anemia, the benefit-risk balance of screening for anemia and routine iron supplementation during pregnancy is not well characterized.2

Supplemental calcium may reduce the risk of gestational hypertension and preeclampsia, but confirmatory large trials are needed.2 Use of high-dose vitamin D supplements during pregnancy also warrants further study.2 The American College of Obstetricians and Gynecologists has developed a useful patient handout on micronutrient nutrition during pregnancy.4

Infants and Children

The American Academy of Pediatrics recommends that exclusively or partially breastfed infants receive (1) supplemental vitamin D (400 IU/d) starting soon after birth and continuing until weaning to vitamin D–fortified whole milk (≥1 L/d) and (2) supplemental iron (1 mg/kg/d) from 4 months until the introduction of iron-containing foods, usually at 6 months.5 Infants who receive formula, which is fortified with vitamin D and (often) iron, do not typically require additional supplementation. All children should be screened at 1 year for iron deficiency and iron-deficiency anemia.
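Because the AAP iron recommendation quoted above is weight-based (1 mg/kg/d) while the vitamin D dose is flat (400 IU/d), the iron dose changes as the infant grows. A minimal sketch of the calculation (the example weight is hypothetical, not from the text):

```python
# AAP guidance quoted above: breastfed infants receive 400 IU/d of
# vitamin D from soon after birth, and 1 mg/kg/d of supplemental iron
# from age 4 months until iron-containing foods are introduced.
VITAMIN_D_IU_PER_DAY = 400      # flat dose, not weight-based
IRON_MG_PER_KG_PER_DAY = 1.0    # weight-based dose

def daily_iron_mg(weight_kg: float) -> float:
    """Daily supplemental iron (mg) for a breastfed infant aged 4-6 mo."""
    return IRON_MG_PER_KG_PER_DAY * weight_kg

# Hypothetical example: a 6.5 kg infant
print(daily_iron_mg(6.5))  # 6.5 mg/day
```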

Healthy children consuming a well-balanced diet do not need multivitamin/multimineral supplements, and they should avoid those containing micronutrient doses that exceed the RDA. In recent years, ω-3 fatty acid supplementation has been viewed as a potential strategy for reducing the risk of autism spectrum disorder or attention-deficit/hyperactivity disorder in children, but evidence from large randomized trials is lacking.2

Midlife and Older Adults

With respect to vitamin B12, adults aged 50 years and older may not adequately absorb the naturally occurring, protein-bound form of this nutrient and thus should be advised to meet the RDA (2.4 μg/d) with synthetic B12 found in fortified foods or supplements.6 Patients with pernicious anemia will require higher doses (Box).

Regarding vitamin D, currently recommended intakes (from food or supplements) to maintain bone health are 600 IU/d for adults up to age 70 years and 800 IU/d for those aged older than 70 years.7 Some professional organizations recommend 1000 to 2000 IU/d, but it has been widely debated whether doses above the RDA offer additional benefits. Ongoing large-scale randomized trials (NCT01169259 and ACTRN12613000743763) should help to resolve continuing uncertainties soon.

With respect to calcium, current RDAs are 1000 mg/d for men aged 51 to 70 years and 1200 mg/d for women aged 51 to 70 years and for all adults aged older than 70 years.7 Given recent concerns that calcium supplements may increase the risk for kidney stones and possibly cardiovascular disease, patients should aim to meet this recommendation primarily by eating a calcium-rich diet and take calcium supplements only if needed to reach the RDA goal (often only about 500 mg/d in supplements is required).2 A recent meta-analysis suggested that supplementation with moderate-dose calcium (<1000 mg/d) plus vitamin D (≥800 IU/d) might reduce the risk of fractures and loss of bone mineral density among postmenopausal women and men aged 65 years and older.2
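The supplement-the-gap advice above is simple arithmetic: subtract estimated dietary calcium from the RDA and supplement only the shortfall. A minimal sketch, assuming a hypothetical dietary intake figure that is not from the text:

```python
# Close only the gap between dietary calcium and the RDA (mg/day).
# RDAs below are the Institute of Medicine values quoted in the text.
CALCIUM_RDA_MG = {
    ("men", "51-70"): 1000,
    ("women", "51-70"): 1200,
    ("any", ">70"): 1200,
}

def supplement_gap(rda_mg: int, dietary_mg: int) -> int:
    """Supplemental dose needed to reach the RDA (never negative)."""
    return max(0, rda_mg - dietary_mg)

# Hypothetical example: a 65-year-old woman getting ~700 mg/d from food
print(supplement_gap(CALCIUM_RDA_MG[("women", "51-70")], 700))  # 500
```

Capping the supplemental dose this way keeps total intake at, rather than above, the RDA, consistent with the concerns about higher-dose calcium supplements noted above.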

Multivitamin/multimineral supplementation is not recommended for generally healthy adults.8 One large trial in US men found a modest lowering of cancer risk,9 but the results require replication in large trials that include women and allow for analysis by baseline nutrient status, a potentially important modifier of the treatment effect. An ongoing large-scale 4-year trial (NCT02422745) is expected to clarify the benefit-risk balance of multivitamin/multimineral supplements taken for primary prevention of cancer and cardiovascular disease.

Other Key Points

When reviewing medications with patients, clinicians should ask about use of micronutrient (and botanical or other dietary) supplements in counseling about potential interactions. For example, supplemental vitamin K can decrease the effectiveness of warfarin, and biotin (vitamin B7) can interfere with the accuracy of cardiac troponin and other laboratory tests. Patient-friendly interaction checkers are available free of charge online (search for interaction checkers on WebMD or pharmacy websites).

Clinicians and patients should also be aware that the US Food and Drug Administration is not authorized to review dietary supplements for safety and efficacy prior to marketing. Although supplement makers are required to adhere to the agency’s Good Manufacturing Practice regulations, compliance monitoring is less than optimal. Thus, clinicians may wish to favor prescription products, when available, or advise patients to consider selecting a supplement that has been certified by independent testers (US Pharmacopeia, NSF International, or UL) to contain the labeled dose(s) of the active ingredient(s) and not to contain microbes, heavy metals, or other toxins. Clinicians (or patients) should report suspected supplement-related adverse effects to the Food and Drug Administration via MedWatch, the online safety reporting portal. An excellent source of information on micronutrient and other dietary supplements for both clinicians and patients is the website of the Office of Dietary Supplements of the National Institutes of Health.

Clinicians have an opportunity to promote appropriate use and to curb inappropriate use of micronutrient supplements, and these efforts are likely to improve public health.


Article Information

Corresponding Author: JoAnn E. Manson, MD, DrPH, Division of Preventive Medicine, Brigham and Women’s Hospital, Harvard Medical School, 900 Commonwealth Ave E, Boston, MA 02215.

Published Online: February 5, 2018. doi:10.1001/jama.2017.21012

Conflict of Interest Disclosures: Both authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Drs Manson and Bassuk reported that their research division conducts randomized clinical trials of several vitamins and minerals. The Vitamin D and Omega-3 Trial (VITAL) is sponsored by the National Institutes of Health, but the vitamin D is donated by Pharmavite. In COSMOS, the multivitamins are donated by Pfizer. Both authors collaborate on these studies.


References

1. Kantor ED, Rehm CD, Du M, White E, Giovannucci EL. Trends in dietary supplement use among US adults from 1999-2012. JAMA. 2016;316(14):1464-1474.

2. Rautiainen S, Manson JE, Lichtenstein AH, Sesso HD. Dietary supplements and disease prevention: a global overview. Nat Rev Endocrinol. 2016;12(7):407-420.

3. Marra MV, Boyar AP. Position of the American Dietetic Association: nutrient supplementation. J Am Diet Assoc. 2009;109(12):2073-2085.

4. American College of Obstetricians and Gynecologists. Nutrition during pregnancy. Published April 2015. Accessed November 20, 2017.

5. American Academy of Pediatrics. Vitamin D & iron supplements for babies: AAP recommendations. Updated May 27, 2016. Accessed November 20, 2017.

6. Institute of Medicine. Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline. Washington, DC: National Academies Press; 1998.

7. Institute of Medicine. Dietary Reference Intakes for Calcium and Vitamin D. Washington, DC: National Academies Press; 2011.

8. Moyer VA; US Preventive Services Task Force. Vitamin, mineral, and multivitamin supplements for the primary prevention of cardiovascular disease and cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;160(8):558-564.

9. Gaziano JM, Sesso HD, Christen WG, et al. Multivitamins in the prevention of cancer in men: the Physicians’ Health Study II randomized controlled trial. JAMA. 2012;308(18):1871-1880.

Effect of Oral Administration of a Mixture of Probiotic Strains on SCORAD Index and Use of Topical Steroids in Young Patients With Moderate Atopic Dermatitis

Key Points

Question  Can treatment with an oral probiotic reduce the SCORAD index and the use of topical steroids in children with moderate atopic dermatitis?

Findings  This randomized clinical trial of 50 children treated with a mixture of probiotics or placebo for 12 weeks found that the SCORAD index and topical steroid use decreased significantly more in the probiotic group than in the placebo group.

Meaning  This probiotic is an effective and safe coadjuvant treatment to reduce the SCORAD index and topical steroid use in children with moderate atopic dermatitis.


Importance  Oral intake of new probiotic formulations may improve the course of atopic dermatitis (AD) in a young population.

Objective  To determine whether a mixture of oral probiotics is safe and effective in the treatment of AD symptoms and to evaluate its influence on the use of topical steroids in a young population.

Design, Setting, and Participants  A 12-week randomized, double-blind, placebo-controlled intervention trial, conducted from March to June 2016 at the outpatient hospital Centro Dermatológico Estético de Alicante, Alicante, Spain. Observers were blinded to patient groupings. Participants were children aged 4 to 17 years with moderate atopic dermatitis. The groups were stratified and block randomized according to sex, age, and age of onset. Patients were ineligible if they had used systemic immunosuppressive drugs in the previous 3 months or antibiotics in the previous 2 weeks or had a concomitant diagnosis of inflammatory bowel disease or signs of bacterial infection.

Interventions  Twelve weeks with a daily capsule containing freeze-dried powder with 10⁹ total colony-forming units of the probiotic strains Bifidobacterium lactis CECT 8145, B longum CECT 7347, and Lactobacillus casei CECT 9104, with maltodextrin as a carrier, or placebo (maltodextrin-only capsules).

Main Outcomes and Measures  SCORAD index score and days of topical steroid use were analyzed.

Results  Fifty children (26 [52%] female; mean [SD] age, 9.2 [3.7] years) participated. After 12 weeks of follow-up, the mean reduction in the SCORAD index in the probiotic group was 19.2 points greater than in the control group (mean difference, −19.2; 95% CI, −23.4 to −15.0). In relative terms, we observed a change of −83% (95% CI, −95% to −70%) in the probiotic group and −24% (95% CI, −36% to −11%) in the placebo group (P < .001). We found a significant reduction in the use of topical steroids to treat flares in the probiotic arm (161 of 2084 patient-days [7.7%]) compared with the control arm (220 of 2032 patient-days [10.8%]; odds ratio, 0.63; 95% CI, 0.51 to 0.78).
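As a rough arithmetic check, a crude (unadjusted) odds ratio can be recomputed from the patient-day counts reported above. It lands near, though not exactly at, the reported 0.63; the difference likely reflects the authors' statistical model rather than the raw 2×2 counts. A minimal sketch:

```python
# Crude 2x2 odds ratio for topical steroid use, from the patient-day
# counts in the Results: probiotic 161/2084, placebo 220/2032.
def odds_ratio(a_events: int, a_total: int,
               b_events: int, b_total: int) -> float:
    """Odds of the event in group A relative to group B."""
    odds_a = a_events / (a_total - a_events)
    odds_b = b_events / (b_total - b_events)
    return odds_a / odds_b

crude_or = odds_ratio(161, 2084, 220, 2032)
print(round(crude_or, 2))  # 0.69 (crude; the paper reports a model-based 0.63)
```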

Conclusions and Relevance  The mixture of probiotics was effective in reducing the SCORAD index and the use of topical steroids in patients with moderate AD.

Tau accumulations in the brains of woodpeckers


Woodpeckers experience forces of up to 1200–1400 g while pecking. It is assumed that, owing to evolutionary adaptations, the woodpecker is immune to brain injury. This assumption has led to the use of the woodpecker as a model in the development of sports safety equipment such as football helmets. However, it is unknown at this time whether the woodpecker brain develops neurotrauma in relation to the high g-forces experienced during pecking. The brains of 10 ethanol-preserved woodpeckers and 5 ethanol-preserved red-winged blackbirds serving as experimental controls were examined using Gallyas silver stain and anti-phospho-tau. The results demonstrated perivascular and white matter tract silver-positive deposits in 8 of the 10 woodpecker brains. Tau-positive accumulations were seen in white matter tracts in 2 of the 3 woodpeckers examined. No staining was identified in control birds. The negative staining of control birds, in contrast to the diffusely positive staining of woodpecker sections, suggests the possibility that pecking may induce the accumulation of tau in the woodpecker brain. Further research is needed to better understand this relationship.


In the central nervous system of humans, the protein tau assists in the assembly and stabilization of neuronal microtubules. Accumulations of tau can be seen in conditions ranging from normal aging to various neurodegenerative diseases such as Alzheimer’s disease [1]. In the disease state, through a process yet to be understood, tau dissociates from the axons and becomes hyperphosphorylated, forming insoluble neurofibrillary tangles [2]. It is not entirely clear whether these aggregates are responsible for the symptoms associated with neurodegenerative diseases.

Tau accumulations have also been observed in association with chronic traumatic encephalopathy (CTE) [3]. CTE is theorized to be the end result of repetitive mild traumatic brain injury (mTBI). Repeated concussions have been suggested as a cause of CTE in contact-sport athletes.

Recently, the CTE center at Boston University reported CTE in brain tissue of 110 out of 111 former National Football League (NFL) players studied [4]. However, CTE is not unique to football and has been identified in the brains of athletes who play soccer, rugby and hockey [5].

The prevalence of CTE and the relationship between mTBI and the subsequent development of CTE have yet to be fully established. Currently, the disease can only be diagnosed by postmortem analysis utilizing immunohistochemistry with antibodies directed against p-tau.

The most prevalent pathological changes thought to be diagnostic of CTE are focal accumulations of abnormal hyperphosphorylated tau (p-tau) within neurons and astroglia distributed around small blood vessels at the depths of cortical sulci [6]. Other staining patterns considered to be supportive of CTE include pretangles and neurofibrillary tangles in the superficial layers of the cortex and the hippocampus; neuronal and astrocytic aggregates in subcortical nuclei; and ‘thread-like’ and ‘dot-like’ axonal tau staining patterns [5, 7, 8].

Because of the theorized association between mTBI and CTE, the prevention of TBI in athletics has become an important area of research. Due to its assumed resistance to neurotrauma, the woodpecker has become a model for the development of safety equipment such as football helmets and neck collars [9, 10].

The Picidae family of birds, which includes woodpeckers and sapsuckers, has several evolutionary anatomic adaptations theorized to mitigate the enormous forces these birds experience while pecking. These include a sharp beak whose upper and lower components can move independently of each other while pecking, and a long tongue capable of bracing the skull and brain during impacts. Other proposed protective adaptations include thick neck muscles to dissipate force and unique bony features of the skull [6]. It is assumed that these evolutionary adaptations prevent neurotrauma in the woodpecker brain.

The majority of research using the woodpecker as a model for the development of safety equipment has focused on the biophysics of the woodpecker’s head. To date, only one paper has examined histologic sections of woodpecker brains in an effort to investigate the possible existence of brain injury in these birds [11]. That study utilized a modified Prussian blue (ferrocyanide) stain on the brains of two woodpeckers. Without describing their histologic findings, the authors concluded that woodpeckers do not experience brain injury. Since its publication in 1976, the study has been cited in over 100 journal articles supporting the conclusion that woodpeckers do not incur brain injury in association with pecking behavior.

Despite the wide use of the 1976 paper, the neurobiological response of woodpeckers to repetitive head acceleration and deceleration remains largely unexplored. At this time, it is not known with any certainty whether the brains of these animals experience neurotrauma in association with their pecking behavior. If they do, the woodpecker may serve as an important animal model for the further study of CTE.

With the woodpecker model increasing in popularity as a source of protective equipment technology, an in-depth and comprehensive study of woodpecker brain injury is warranted [9]. Given this, our group set out to determine whether tau accumulations exist in the brains of woodpeckers.

Materials and methods

Protocol doi: 10.17605/

Avian specimens and brain tissue collection

All the bird specimens used in this project were generously donated from museum collections (see Acknowledgments). The woodpeckers (n = 10) studied were of various species (Table 1). To extract the brains, the feathers and skin covering the skull were removed. The skull cap was cut with a rotary tool from just above the orbits to just below the occiput in a circular fashion to separate the skull from the brain tissue below it. The plate of bone that protects the optic center of the brain, somewhat analogous to the human tentorium cerebelli, was removed using forceps. The brain was gently pried away from the skull, with great care taken around the frontal lobes and brainstem. Finally, a No. 11 blade scalpel was used to sever the brain stem just below the level of the pons. The extracted brains were then placed in 70% ethanol until tissue processing could occur.

Histological and immunohistochemical procedures

The woodpecker and red-winged blackbird brains were cut into gross tissue sections according to anatomic landmarks. Tissue processing was performed identically for all samples according to standard paraffin-embedding procedures [12].

The Gallyas silver stain was performed in accordance with a previously published methodology, with minor alterations, at room temperature with slight agitation [13]. Experimental and control tissues were sectioned at 15μm onto slides, deparaffinized, and rehydrated to water. The slides were placed in 0.25% potassium permanganate for 15 minutes and then incubated in 2% oxalic acid for 5 minutes. Following the oxalic acid incubation, the slides were rinsed in dH2O for 5 minutes before being placed in a 0.4% lanthanum nitrate/2% sodium acetate blocking solution for 1 hour. After the blocking step, the slides were rinsed again in dH2O for 1 minute before being placed in a 4% sodium hydroxide/10% potassium iodide/0.035% silver nitrate solution (components added in that order) for 4 minutes. The slides were immediately transferred to 0.5% acetic acid for three 1-minute rinses before being placed in developer. The slides were then placed in 1% acetic acid for 3 minutes and transferred to dH2O. Sections were counterstained in Mayer’s hematoxylin (Scytek, #HAQ999) for 2 minutes, followed by a quick rinse in dH2O. The slides were differentiated with 0.1% sodium bicarbonate bluing agent until the desired color was achieved, then dehydrated through the standard ethanol gradient and Histoclear before being coverslipped.

The immunohistochemical staining was achieved using a hybrid of previously published techniques [14]. The tissue was cut at 25μm and each section was placed into a steel-wire mesh container 2 cm in diameter. The tissue slices were deparaffinized and rehydrated to water before entering the antigen retrieval step. Antigen retrieval was done by submerging the mesh containers into filtered 1x Tris/EDTA buffer pH 9 with 0.05% Tween 20 at 90°C for 20 minutes. After the 20-minute incubation, the mesh containers were placed into filtered 1x TBS buffer pH 7.4 with 0.025% Triton X-100 and rinsed twice for five minutes each. After the second rinse, the mesh containers were removed and incubated in filtered 10% goat serum/1x TBS buffer pH 7.4 block for two hours. After incubation, tissue sections were removed from the mesh containers and placed into individual sterilized petri dishes containing antibody in sterile, filtered 1% goat serum/1x TBS buffer pH 7.4 overnight at 4°C with gentle agitation. The antibodies used were anti-phospho-tau S262 rabbit polyclonal (5μg/mL; Abcam, ab64193; Cambridge, MA) and anti–glial fibrillary acidic protein (GFAP) rabbit polyclonal (5μg/mL; Bioss, bs-0199R; Woburn, MA). Following overnight incubation, the sections were removed from the primary antibody and placed back into the mesh containers to be rinsed twice with 1x TBS with 0.025% Triton X-100, each for five minutes. The mesh containers were then placed into 0.3% sodium hydroxide/1x TBS buffer pH 7.4 for 15 minutes. After the sodium hydroxide incubation, the mesh containers were once again rinsed for three minutes in 1x TBS buffer pH 7.4 with 0.025% Triton X-100. The sections were removed from the mesh containers once again and placed into small, sterile petri dishes containing horseradish peroxidase–conjugated secondary antibody (1μg/mL; Abcam, ab6721; Cambridge, MA) in 1x TBS buffer pH 7.4 for one hour at room temperature. After the hour incubation, the sections were placed back into their mesh containers and rinsed twice, for five minutes each, in 1x TBS buffer pH 7.4. The sections were then ready for the 3,3′-diaminobenzidine (DAB) reaction. The DAB chromogen (Biocare Medical; DB801R; Concord, CA) reaction was carried out under a microscope until adequate staining was achieved. Sections were then immediately placed back into their mesh containers and rinsed with dH2O for five minutes. Following the dH2O rinse, the sections were counterstained with Mayer’s hematoxylin (Scytek, #HAQ999) for two minutes before a quick rinse in dH2O. The sections were then differentiated with 0.1% sodium bicarbonate bluing agent until the desired color was achieved. After another quick rinse in dH2O, the sections were dehydrated using the standard ethanol gradient and Histoclear while in the mesh containers. After the last Histoclear incubation, the sections were removed from their mesh containers and free-floated into a large crystallization dish containing Histoclear. The sections were coaxed onto clean glass slides with fine-tipped forceps. The slides were then coverslipped with mounting medium and placed onto a 37°C warming plate overnight.


Results
Gallyas silver stain

Because the Gallyas stain has a high degree of sensitivity and specificity for neurofibrillary tangles and axonal injury, it was utilized to detect the presence of neuronal and/or white matter tract damage throughout the entire woodpecker brain.

A section of human cortex with confirmed Alzheimer’s disease was used as a positive staining control, and red-winged blackbird brains (n = 5) were used as experimental controls for all staining methods.

Positive silver accumulations were identified in 8 of the 10 woodpeckers studied. Several patterns of positive Gallyas staining were identified in the woodpecker population, including focal perivascular deposits, which were mostly subpial (Fig 1A, 1C and 1D); focal whole-astrocyte staining (Fig 1B); dot-like staining within axonal tracts; and widespread thread-like staining in deep white matter tracts. The majority of the observed silver-positive staining patterns were identified in the frontal pole of the brain. Staining was only very rarely detected in the occipital region, and none was seen in the cerebellum.


Fig 1. Perivascular and axonal tract pathology of Dryocopus lineatus and Picoides pubescens.

Perivascular Gallyas silver–positive pathology in the cortex of the frontal lobe (A). Damaged axonal tracts (B) with axonal swellings (arrow) in the subcortical white matter of the frontal lobe. Subpial perivascular staining (C) of the frontal lobe. Perivascular silver-positive staining (D) in a superficial region of the frontal cortex. Experimental controls (E).

Focal perivascular silver-positive deposits were found in 40% of the woodpeckers. The most abundant positive silver staining pattern was “thread-like” staining of axonal white matter tracts of the frontal lobe, which appeared in 8 of the 10 woodpeckers studied (80%) (Fig 2).


Fig 2. Axonal tract pathology of Dryocopus lineatus and Picoides pubescens.

Gallyas silver–positive axonal tract staining in the corpus callosum (A) and the mediolateral central gray area of the midbrain (B and C) in the woodpecker brain [15].

No positive Gallyas staining was observed in the control birds (n = 5).


Following the analysis of the Gallyas silver stain, we proceeded to immunohistochemical verification of the suspected tau accumulations throughout the entire woodpecker brain. Samples were stained for phosphorylated tau (S262).

Due to the poor preservation of the woodpecker tissue, successful immunohistochemistry was accomplished in only 3 birds; the remaining tissue samples degraded during processing. Attempts to alter the immunohistochemistry methodology did not prevent this degradation. Immunohistochemistry was performed on all control birds (n = 5).

In the woodpecker specimens in which tau immunostaining was possible, tau-positive accumulations were identified in the same regions highlighted originally by the Gallyas silver stain. The morphology of astrocytes identified by GFAP staining was used to determine that some of the tau-stained cells were in fact astrocytes, which helped confirm that the silver-positive accumulations identified were indeed composed of tau protein. Specifically, tau immunostaining demonstrated perivascular deposits, whole-astrocyte staining, and ‘thread-like’ axonal tract staining in 2 of the 3 woodpecker brains evaluated (Fig 3).


Fig 3. Anti-phospho-tau immunostaining in Dryocopus lineatus.

Tau positivity in the midbrain (A and B) and the corpus callosum (C) of the Dryocopus lineatus brain. The axonal tract staining demonstrates a thread-like pattern, similar to that seen with Gallyas silver staining (Fig 2). Occasional intracellular tau accumulations were identified within neurons (D).

No positive immunostaining was identified in any of the control birds.

Tau immunohistochemistry was successfully completed in three woodpecker brains. Two of the three demonstrated widespread thread-like axonal tract staining with tau, and we observed positive GFAP staining in these same two birds (Fig 4). One woodpecker failed to demonstrate positive silver and tau staining; this same bird also showed negative GFAP immunostaining. Interpreted collectively, these findings suggest that the tau accumulations we identified are pathological.


Fig 4. Anti-GFAP immunostaining in Dryocopus lineatus.

Immunostaining with GFAP demonstrated rare GFAP-positive astrocytes. Astrocyte morphology included thorn-shaped (A), typical star-like (B) and tufted (C). Tau immunohistochemistry demonstrated rare tau accumulations in cells morphologically consistent with astrocytes within the grey matter (D, arrow).

In summary, focal subpial perivascular deposits, focal whole-astrocyte staining, dot-like staining within axonal tracts, and widespread thread-like staining in white matter tracts were identified in the frontal poles of 80% of the study population. Strikingly, no staining was identified in the control bird population.

These observations were confirmed by a board-certified neuropathologist.


Discussion
To date, there have been no histologic studies exploring the potential existence of neurotrauma in woodpeckers. While it is unknown if the forces associated with pecking behavior could result in traumatic brain injury, it is interesting that the majority of the woodpecker specimens in our study displayed focal silver-positive deposits, some of which were confirmed to be tau by immunohistochemistry, while no staining was observed in the control birds.

The anatomic locations and staining patterns of the lesions identified in the brains of woodpeckers share some similarities with human CTE. In humans, CTE is most prominent in the frontal and temporal lobes of the brain, with a spectrum of tau deposition patterns including focal perivascular staining, astrocytic inclusions, and ‘thread-like’ and ‘dot-like’ axonal staining patterns (5, 7). In the woodpecker, we identified similar focal perivascular staining, astrocytic inclusions, and ‘thread-like’ and ‘dot-like’ axonal staining patterns, which were confined to the frontal lobe of the brain. The woodpecker brain lacks the gyri and sulci seen in the human brain. Because of this, we could not evaluate for lesions located at the depths of sulci, as seen in human CTE.

The prominent frontal and temporal anatomic locations of CTE lesions are thought to be due to the distribution of force experienced in head collisions [14]. In the woodpecker, much of the force of pecking is thought to be dissipated through the frontal regions of the skull and brain as well. Therefore, it is not surprising to see potential areas of injury limited to this anatomic location in the woodpecker brain.

In the human brain, tau accumulations are also known to occur as part of the normal aging process. Though it cannot be entirely ruled out, age-related tau accumulation in the woodpecker is an unlikely explanation for our findings. Our study population included one juvenile woodpecker (Sphyrapicus varius) that demonstrated the full spectrum of tau accumulations observed in the majority of the adult population, suggesting that the tau accumulations seen in our study might not be age-related. This notion is further supported by the lack of observable staining in the control population, which was composed entirely of adult birds.

Given the complete lack of staining in the control population and the unlikely scenario of age-related changes, our findings suggest there might be an association between repetitive pecking behavior and tau accumulations in the woodpecker population.

There are several limitations to this study. It cannot be concluded at this time whether the histologic changes identified in our study are the direct result of the repeated, high-force pecking woodpeckers endure every day. The limited numbers of woodpeckers (n = 10) and control birds (n = 5) used in this study are insufficient to establish a correlation between pecking behavior and subsequent neurotrauma.

It is not known from our study whether the tau accumulations are pathological or result in behavioral changes in woodpeckers. However, our findings of silver and tau accumulations solely in pecking birds warrant further investigation into this possibility.

There are numerous anatomic differences between the skulls and brains of woodpeckers and humans. It may be that the anatomic adaptations of the woodpecker concentrate stress in different regions of the brain than in humans. Further research is necessary to understand how our findings can be translated to the human population.

Given the increased tau deposition in our woodpecker population, the brains of woodpeckers are an important area for future research. Further studies are needed to determine which isoforms of tau are being deposited in the woodpecker brain and whether these deposits are pathological. Continued study of the response of the woodpecker brain to pecking is necessary to ensure that current head-protection technology based on the woodpecker model provides adequate protection for athletes. Our findings also suggest that the woodpecker may be a suitable animal model for the further study of CTE.

Risk of acute kidney injury associated with the use of fluoroquinolones


Background: Case reports indicate that the use of fluoroquinolones may lead to acute kidney injury. We studied the association between the use of oral fluoroquinolones and acute kidney injury, and we examined interaction with renin–angiotensin-system blockers.

Methods: We formed a nested cohort of men aged 40–85 enrolled in the United States IMS LifeLink Health Plan Claims Database between 2001 and 2011. We defined cases as men admitted to hospital for acute kidney injury, and controls were admitted to hospital with a different presenting diagnosis. Using risk-set sampling, we matched 10 controls to each case based on hospital admission, calendar time (within 6 wk), cohort entrance (within 6 wk) and age (within 5 yr). We used conditional logistic regression to assess the rate ratio (RR) for acute kidney injury with current, recent and past use of fluoroquinolones, adjusted by potential confounding variables. We repeated this analysis with amoxicillin and azithromycin as controls. We used a case-time–control design for our secondary analysis.

Results: We identified 1292 cases and 12 651 matched controls. Current fluoroquinolone use had a 2.18-fold (95% confidence interval [CI] 1.74–2.73) higher adjusted RR of acute kidney injury compared with no use. There was no association between acute kidney injury and recent (adjusted RR 0.87, 95% CI 0.66–1.16) or past (RR 0.86, 95% CI 0.66–1.12) use. The absolute increase in acute kidney injury was 6.5 events per 10 000 person-years. We observed 1 additional case per 1529 patients given fluoroquinolones or per 3287 prescriptions dispensed. The dual use of fluoroquinolones and renin–angiotensin-system blockers had an RR of 4.46 (95% CI 2.84–6.99) for acute kidney injury. Our case-time–control analysis confirmed an increased risk of acute kidney injury with fluoroquinolone use (RR 2.16, 95% CI 1.52–3.18). The use of amoxicillin or azithromycin was not associated with acute kidney injury.

Interpretation: We found a small, but significant, increased risk of acute kidney injury among men with the use of oral fluoroquinolones, as well as a significant interaction between the concomitant use of fluoroquinolones and renin–angiotensin-system blockers.

Fluoroquinolones are commonly prescribed broad-spectrum antibiotics.1 Although highly effective, they are known to cause cardiac arrhythmia, hypersensitivity reactions and central nervous system effects including agitation and insomnia.2,3 Recent reports of tendon rupture4 and retinal detachment5 suggest that these drugs may damage collagen and connective tissue. Case reports of acute kidney injury with the use of fluoroquinolones have been published,6 and the product label includes renal failure in a list of potential, but uncommon, adverse reactions.2 In clinical practice, when oral fluoroquinolones are prescribed, the potential for acute kidney injury is generally not a clinical consideration. We aimed to quantify the risk of acute kidney injury with the use of oral fluoroquinolones among men. This study population was limited to men because the cohort we studied was formed to investigate health issues that affect older men.


Data source

The IMS LifeLink Health Plan Claims Database contains paid claims from US health care plans. Compared with the US Census, the database captures 17% of men aged 45–54 years, 13% of men aged 55–64 years and 8% of men aged over 65 years. Data for men over 65 years are captured through Medicare Advantage programs. These privatized health care plans combine medical and prescription services, providing more inclusive health care data.7

The IMS LifeLink database contains fully adjudicated medical and pharmacy claims for over 68 million patients, including inpatient and outpatient diagnoses (via International Classification of Diseases, 9th revision, clinical modification [ICD-9-CM], codes) in addition to retail and mail-order prescriptions. The data are representative of US residents with private health care in terms of geography, age and sex. The IMS LifeLink database is subject to quality checks to ensure data quality and minimize errors,7 and it has been used in previous pharmacoepidemiologic studies.8–10

This study was approved by the University of Florida’s Institutional Review Board. All coding used in this study can be found in Appendix 1 (available at

Cohort formation

We used a nested case–control design for our primary analysis. Our cohort was formed to study health issues that affect older men. This population is at the greatest risk of acute kidney injury and is commonly prescribed fluoroquinolones. We extracted data for 2 million men from the IMS LifeLink database who had both prescription and medical coverage. We included men aged 40–85 years who met the inclusion criteria between Jan. 1, 2001, and June 30, 2011, and who had 365 days of enrolment with no acute kidney injury. We excluded men with a history of chronic kidney disease or dialysis because these men may be more prone to acute kidney injury. Censoring was performed at a study outcome, the end of enrolment or the end of the study. The cohort was nested within inpatient hospital records, which were used to select cases and controls.

Cases and controls

Multiple studies have validated algorithms to determine acute kidney injury using ICD-9-CM coding. Several were not applicable because they were published only in abstract form,11 included ICD-10-CM coding,12 did not define acute kidney injury at hospital admission,13 included cases before 1990,14 assessed acute kidney injury that occurred after admission to hospital15 or included unspecified (nonacute) renal failure (ICD-9-CM 586.x).16 Two studies validated ICD-9-CM coding against a reference standard that required doubling of serum creatinine and found poor positive predictive values; however, this algorithm does not account for differences in baseline serum creatinine levels.17,18 A second algorithm was developed that identified acute kidney injury based on baseline serum creatinine level: acute kidney injury was defined by a change in serum creatinine of 0.5 mg/dL (44.2 μmol/L) for a nadir serum creatinine of 1.0 mg/dL (88.4 μmol/L) or lower, a change in serum creatinine of 1.0 mg/dL for a nadir serum creatinine between 2.0 and 4.9 mg/dL (176.8–433.2 μmol/L), or a change in serum creatinine of 1.5 mg/dL (132.6 μmol/L) for a nadir serum creatinine of 5.0 mg/dL (442.0 μmol/L) or higher.19 Two studies validated acute kidney injury using ICD-9-CM coding for all hospital discharges against this reference, finding positive predictive values of 80.2%17 and 87.6%.20

We defined acute kidney injury as ICD-9-CM 584.0 (acute renal failure, unspecified), 584.5 (acute tubular necrosis), 584.6 (cortical acute renal failure), 584.7 (medullary acute renal failure), 584.8 (acute renal failure with other specified pathologic lesion) and 584.9 (acute renal failure, not otherwise specified). We further restricted cases to the primary hospital discharge diagnosis, a diagnostic code that identifies the main reason for hospital admission. This is known to increase the positive predictive values and identify the primary reason for admission. We excluded cases if they had been admitted to hospital during the 6 months before the admission for acute kidney injury. Previous hospital admissions could indicate a greater degree of morbidity (confounding by disease severity) and prevent us from measuring prescription use (immeasurable time bias).21 We did not differentiate between subtypes of acute kidney injury because ICD-9-CM coding has not been validated to show this distinction.

We considered men who were admitted to hospital with a diagnosis other than acute kidney injury and who had not been admitted to hospital in the previous 6 months to be eligible for the control group. We used risk-set sampling to select the controls, whereby for each case a pool of potential controls was formed that met the following criteria: eligible for matching only on the day of hospital admission; admitted to hospital within 6 weeks (calendar-time matching); entered the nested cohort no more than 6 weeks apart; and within 5 years of age. From this risk set, 10 controls who were still eligible to have an acute kidney injury were randomly selected and matched to each case. This sampling allows the odds ratio to estimate the rate ratio (RR).22 Matching on hospital admission (a strong proxy for health status) was done to provide controls with more similar comorbidity and to reduce residual confounding.
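The risk-set sampling described above can be sketched in a few lines. The record fields and the 6-week/5-year tolerances below mirror the matching criteria in the text, but all names are illustrative and this is not the study's actual code.

```python
import random
from datetime import date

def eligible(case, candidate, window_days=42, age_years=5):
    # Matching criteria from the text: hospital admission within 6 wk,
    # cohort entry within 6 wk, age within 5 yr, and the candidate must
    # still be at risk (no acute kidney injury yet).
    return (
        abs((case["admission"] - candidate["admission"]).days) <= window_days
        and abs((case["entry"] - candidate["entry"]).days) <= window_days
        and abs(case["age"] - candidate["age"]) <= age_years
        and not candidate["aki"]
    )

def match_controls(case, pool, n=10, seed=0):
    # Randomly draw up to n controls from the case's risk set.
    risk_set = [c for c in pool if eligible(case, c)]
    return random.Random(seed).sample(risk_set, min(n, len(risk_set)))
```

In a real analysis the risk set would be rebuilt for each case on its admission date, so a man can serve as a control before later becoming a case.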

Drug exposure

We included exposure to oral fluoroquinolones: ciprofloxacin, gatifloxacin, gemifloxacin, levofloxacin, moxifloxacin and norfloxacin. We excluded ophthalmic and topical fluoroquinolones because they have minimal systemic absorption. We excluded intravenous fluoroquinolones because our focus was on outpatient-dispensed preparations. We excluded prescriptions dispensed on the day of hospital admission to prevent reverse causality bias.

We defined a current user as someone who had an active supply of fluoroquinolone at hospital admission or had stopped taking a fluoroquinolone (prescription termination; final day of drug supply) in the 1–7 days before admission. Recent users were those who had a prescription termination 8–60 days before admission and had no active supply within the 7 days before admission. We defined past users as those who had a prescription termination 61–180 days before admission and who had no active prescriptions during days 0–60.
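The three exposure windows can be expressed as a simple classifier. The single `days_since_supply_end` input (0 meaning an active supply on the day of admission) is a simplification for illustration, not the study's implementation.

```python
def classify_exposure(days_since_supply_end):
    # days_since_supply_end: days between the final day of drug supply and
    # hospital admission; 0 means the supply was still active at admission,
    # None means no fluoroquinolone use in the preceding 180 days.
    if days_since_supply_end is None or days_since_supply_end > 180:
        return "unexposed"
    if days_since_supply_end <= 7:
        return "current"   # active supply, or termination 1-7 days before
    if days_since_supply_end <= 60:
        return "recent"    # termination 8-60 days before admission
    return "past"          # termination 61-180 days before admission
```

Because the windows are defined from the last day of supply, a value of 8 or more automatically satisfies the "no active supply within the 7 days before admission" condition for recent use.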

We selected 2 common oral antibiotics (amoxicillin and azithromycin) as control drugs. Although both have been implicated in rare cases of interstitial nephritis,23–26 we hypothesized that the burden of acute kidney injury with these drugs would be insufficient to produce a positive association.

Statistical analysis

Primary analysis: nested case–control

We used conditional logistic regression to determine the RR for acute kidney injury with fluoroquinolone use. The model was adjusted by fluoroquinolone indication (genitourinary, respiratory or gastrointestinal tract infection; skin infection; and joint or bone infection in the past 6 mo), diseases associated with acute kidney injury (cancer, chronic obstructive pulmonary disease, congestive heart failure, diabetes mellitus, HIV and hypertension in the past year), potentially nephrotoxic drugs with high use (loop diuretics, nonsteroidal anti-inflammatory drugs and renin–angiotensin-system blockers at hospital admission) and markers of health care use (number of medications, billing codes and physician visits in the past 6 mo). We stratified the subsequent analyses by fluoroquinolone product (ciprofloxacin, levofloxacin and moxifloxacin).

We examined drug–drug interactions between fluoroquinolones (current use) and renin–angiotensin-system blockers (at admission) through the addition of an interaction term to our fully adjusted model. We defined renin–angiotensin-system blockers as angiotensin-converting-enzyme inhibitors and angiotensin-receptor blockers. We did not include aldosterone antagonists because of their low use and concern for confounding given the many indications for these medications. Although we hypothesized drug–drug interactions between fluoroquinolones and loop diuretics or nonsteroidal anti-inflammatory drugs, we did not have sufficient power for these analyses. We computed a number needed to harm (the inverse of the absolute risk increase), in which the absolute risk increase equaled the estimated incidence among users (RR × incidence among nonusers) minus the incidence among nonusers.
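The number-needed-to-harm arithmetic can be checked against the reported figures. The baseline incidence below is back-derived from the published absolute risk increase of 6.5 events per 10 000 person-years, so this is a consistency check rather than a reproduction of the study's calculation.

```python
rr = 2.18                              # adjusted RR for current use
ari = 6.5 / 10_000                     # absolute risk increase per person-year
incidence_nonusers = ari / (rr - 1)    # back-derived baseline incidence
# ARI = (RR x baseline incidence) - baseline incidence, as defined in the text:
assert abs(rr * incidence_nonusers - incidence_nonusers - ari) < 1e-12
nnh = 1 / ari                          # users needed for one additional case
# nnh is about 1538, in line with the reported 1 additional case per 1529
# patients; the small gap reflects rounding in the published estimates.
```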

Secondary analysis: case-time–control

A case-crossover design allows patients to serve as their own controls, using within-patient comparisons of drug exposure to assess the RR for the study outcome.27 This technique has the advantage of having no residual confounding from time-invariant covariates. Two cardinal requirements for a case-crossover study are an acute outcome and a transient exposure. Acute kidney injury is an acute outcome, and fluoroquinolones are typically prescribed for 7–14 days,2 meeting the assumption of transient exposure. Because most fluoroquinolone prescriptions are for 14 or fewer days, we chose the 14 days immediately before admission to hospital as the case-time. Four control-times were selected, each immediately following the previous 14-day window (days 15–28, 29–42, 43–56 and 57–71). We used conditional logistic regression to determine the RR for acute kidney injury with fluoroquinolone exposure. We adjusted the case-crossover estimate by the distribution of fluoroquinolone use from these time windows in the 10 matched control patients from the main analysis. This analysis, referred to as a “case-time–control,” adjusts for a potential trend toward increased use of all antibiotics before hospital admission.28
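The window assignment for the case-time–control analysis can be sketched directly from the day ranges above. This is an illustrative sketch: `day` counts days before hospital admission, and treating the case-time as days 1–14 (the day of admission itself being excluded from exposure) is an assumption.

```python
CASE_TIME = range(1, 15)  # days 1-14 before admission
CONTROL_TIMES = [range(15, 29), range(29, 43), range(43, 57), range(57, 72)]

def exposure_window(day):
    # Return the label of the window containing `day`, or None if the day
    # falls outside all case- and control-times.
    if day in CASE_TIME:
        return "case-time"
    for i, window in enumerate(CONTROL_TIMES, start=1):
        if day in window:
            return f"control-time {i}"
    return None
```

Each patient's fluoroquinolone supply days would be mapped through this function, and exposure in the case-time compared with exposure in the control-times within the same patient.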

Sensitivity analysis

We were concerned that patients taking fluoroquinolones would be more likely to have a genitourinary infection (compared with patients taking one of the control medications), which could make them more likely to have acute kidney injury. We conducted a sensitivity analysis in which we removed patients who had experienced a genitourinary infection during the 6 months before admission, and we repeated the study analysis.

Because the sensitivity of excluding people with chronic kidney disease using ICD-9-CM coding is unknown, we repeated our analyses without excluding patients with previous claims for chronic kidney disease; from this analysis, the changes in the study RRs can be used to assess whether residual confounding from unmeasured chronic kidney disease is a potential concern.


Our nested cohort contained 767 209 patients (162 608 hospital admissions) eligible for matching. We identified 1292 cases with acute kidney injury and 12 651 matched controls. The characteristics of the cases and controls are shown in Table 1. Ciprofloxacin (44.5%) and levofloxacin (43.9%) were the most commonly used fluoroquinolones (Table 2); the most common indications were respiratory (45.6%) or genitourinary infections (27.0%) (Table 3).

Table 1:

Characteristics of cases and controls

Table 2:

Use of oral fluoroquinolones among cases and controls

Table 3:

Indication for the use of antibiotics among cases and controls

We observed an increased risk of acute kidney injury with current use of fluoroquinolones (adjusted RR 2.18, 95% CI 1.74–2.73) and no change in risk with either recent (adjusted RR 0.87, 95% CI 0.66–1.16) or past (adjusted RR 0.86, 95% CI 0.66–1.12) use. There was no association between the use of amoxicillin or azithromycin and acute kidney injury (Table 4).

Table 4:

Nested case–control analysis of the risk of acute kidney injury with the use of fluoroquinolones

When we stratified our analysis by fluoroquinolone product, the largest RR was found for ciprofloxacin (RR 2.76, 95% CI 2.03–3.76), followed by moxifloxacin (RR 2.09, 95% CI 1.04–4.20) and levofloxacin (RR 1.69, 95% CI 1.20–2.39). When levofloxacin was used as a reference, ciprofloxacin had a significantly increased RR (RR 1.73, 95% CI 1.08–2.77), whereas moxifloxacin did not (RR 1.20, 95% CI 0.54–2.65).

The case-time–control analysis confirmed the results from the nested case–control study: we found an increased risk of acute kidney injury with fluoroquinolone use (RR 2.16, 95% CI 1.52–3.18) but not with amoxicillin (RR 0.65, 95% CI 0.38–1.05) or azithromycin (RR 1.06, 95% CI 0.62–1.90) (Table 5). The absolute increase in the incidence of acute kidney injury was 6.5 events per 10 000 person-years with use of fluoroquinolones. We observed 1 additional case of acute kidney injury per 1529 patients who used fluoroquinolone or per 3287 prescriptions dispensed.

Table 5:

Case-time–control analysis of the risk of acute kidney injury with the use of fluoroquinolones or other antibiotics

The addition of a drug–drug interaction term to the “current use” models for the study drugs yielded similar main effects. Although renin–angiotensin-system blockers can increase serum creatinine levels, we did not find an increased risk of acute kidney injury with renin–angiotensin-system blocker monotherapy (RR 1.00, 95% CI 0.84–1.18). We did, however, find an interaction between the combined use of fluoroquinolones and renin–angiotensin-system blockers (interaction RR 2.19, 95% CI 1.30–3.69). An interaction can be defined as the additional risk of acute kidney injury from the concomitant use of 2 drugs beyond the additive risk of each individual drug. This interaction resulted in a greater than fourfold increase in the RR for acute kidney injury (RR 4.46, 95% CI 2.84–6.99) with active use of both drugs. When we analyzed the data by drug class, a similar increased risk was found with the dual use of fluoroquinolones and either angiotensin-converting-enzyme inhibitors (RR 4.54, 95% CI 2.74–7.52) or angiotensin-receptor blockers (RR 3.80, 95% CI 1.72–8.41).

Adjustment for a genitourinary infection had a negligible effect on all point estimates for fluoroquinolone use and acute kidney injury (< 2% change). When we restricted the nested cohort to only patients with no history of genitourinary infection and repeated the nested case–control analysis, we found similar RRs as in the main analysis between fluoroquinolones and acute kidney injury (current use: RR 2.48, 95% CI 1.92–3.23; recent use: RR 0.95, 95% CI 0.65–1.37; past use: RR 0.98, 95% CI 0.75–1.29). When we included patients with a previous claim for chronic kidney disease, we found similar RRs for all user types (current use: RR 2.08, 95% CI 1.67–2.59; recent use RR 0.95, 95% CI 0.73–1.26; past use RR 0.88, 95% CI 0.67–1.13).


We found a twofold increased risk of acute kidney injury with current use of fluoroquinolones. There were nonsignificant associations between fluoroquinolone use and acute kidney injury among recent and past users (point estimates less than 1.0). The twofold differential in risk between current and both recent and past fluoroquinolone use suggests that acute kidney injury is an acute adverse effect of fluoroquinolones. These results were replicated in the case-time–control analysis, which increases our confidence in these associations because of better control of time-invariant confounding.

Previous evidence of acute kidney injury with fluoroquinolone use comes from case reports. Most case reports result from an allergic or hypersensitivity reaction termed acute interstitial nephritis.29,30 Fluoroquinolones have also been reported to cause granulomatous interstitial nephritis, characterized by infiltration of the renal tissue by histiocytes and T lymphocytes, leading to the formation of granulomas.31,32 Crystalluria has been reported to occur when urine pH is above 6.8,33 and several cases of acute kidney injury from crystal formation secondary to fluoroquinolone use have been documented.34,35 More severe cases of acute tubular necrosis have also been linked to fluoroquinolone use.36,37

Although most published case reports are of ciprofloxacin use,6 this may be an artifact of its high use. Nephrotoxicity may not be entirely dependent on renal elimination,6 and one patient with ciprofloxacin-induced nephrotoxicity did not experience a positive rechallenge after switching to ofloxacin.38 We observed a larger risk of acute kidney injury with ciprofloxacin use, compared with the use of levofloxacin; however, this differential finding was not an a priori hypothesis and should be interpreted with caution until further investigation.

Although fluoroquinolones are thought to induce acute kidney injury through acute hypersensitivity reactions, renin–angiotensin-system blockers affect renal hemodynamics through dilation of the efferent arteriole, reducing intraglomerular pressure and increasing serum creatinine levels.39 The risk of acute kidney injury with the use of renin–angiotensin-system blockers is thought to increase after a superimposed renal insult, such as dehydration or the use of other prescription medications.5,6 Physician monitoring of serum creatinine levels, particularly after the start of renin–angiotensin-system blocker therapy, together with our ascertainment of only severe cases of acute kidney injury requiring admission to hospital, may explain the lack of a signal with renin–angiotensin-system blocker monotherapy.


Because of the transient nature of fluoroquinolone use, we used 3 distinct and nonoverlapping definitions of drug exposure, allowing recent and past users to serve as negative controls. We found similar results after removing patients with genitourinary infections from the nested cohort analysis, thereby reducing concerns about confounding by indication.

We used admission to hospital to ascertain cases of severe acute kidney injury; however, we could not assess milder or asymptomatic cases that did not result in admission. This could result in an underestimation of the risk of acute kidney injury. We did not have information about the severity of acute kidney injury, nor did we have sufficient power to assess the risk by dosage or duration of use.

Although we conducted a self-controlled analysis, which has implicit control for unmeasured time-invariant confounders, residual confounding, particularly by time-varying covariates, is always a potential concern in observational research.

There is no reason to think that the proposed mechanism for increased risk of acute kidney injury with fluoroquinolone use is specific to middle-aged and elderly men; however, the restriction to this population is a key limitation of this study. These medications may have different associations in other populations, and verifying this will require further study.


We found a twofold increased risk of acute kidney injury requiring hospital admission with the use of fluoroquinolone antibiotics among adult men, using 2 analytic techniques. We did not find increased risk of acute kidney injury with other antibiotics, supporting the hypothesis that this potential adverse association of fluoroquinolones with acute kidney injury is not a class effect of all antibiotics. We found a strong interaction with concomitant use of fluoroquinolones and renin–angiotensin-system blockers, cautioning against the concomitant use of these 2 drug classes. Although it is clear that the risk of death due to serious infections outweighs the risks associated with the use of fluoroquinolones, the potential for acute kidney injury raises the importance of vigilant prescribing.




From the President’s Desk: A shared vision for cancer staging, 1/18

For me, friendships formed and lessons learned have more than compensated for the effort invested over the years on CAP committees, but make no mistake: When we meet, what we’re doing is work. The professional engagement is enjoyable, but a person can get tired toward the end of a two-day meeting, not to mention homework in the evenings.

R. Bruce Williams, MD

 R. Bruce Williams, MD

At such times, I can always reboot by resurrecting the memory of a tumor board meeting about six months after we introduced the first CAP cancer protocols in our practice. Feedback had been sparse; we’d had no complaints, and that was good. But I knew what had gone into those protocols. At the very least, I felt, we had earned a banner over the hospital entrance.

Well, as we say in my family, if you don’t know, ask. So one day at tumor conference, I asked the crew of oncologists, internists, radiation oncologists, radiologists, oncologic surgeons, general surgeons, breast surgeons, gynecologic oncologic surgeons, urologists, family practitioners, and others sitting in the room what they thought about the new protocols. My question elicited an overwhelmingly positive response—the most positive response I’ve ever gotten in tumor conference to anything I’ve ever said! Everybody agreed the reporting templates had greatly increased the effectiveness of communication between pathologists and physicians in all these fields because they could now quickly find what they were looking for. We had 13 different pathologists writing narrative reports back then, and it seems the clinicians often had trouble finding what they needed. Now everyone knew where to find their nugget and could be certain it was included. That was a good tumor conference.

All of which is to say that I am a big believer in the cancer protocols. They are based, of course, on the American Joint Committee on Cancer staging manual, the most recent edition of which was implemented Jan. 1. We’ve had enough time with it now to confirm that the new edition is more ambitious, impressive, and perhaps even more groundbreaking than its predecessors.

The eighth edition features 12 entirely new staging systems and reflects the input of a greatly expanded team of international experts. It comprises 83 chapters, was three years in the making, and represents the work of 434 individuals from six continents, 23 countries, and 188 institutions. There is a searchable electronic version incorporating additional staging forms and other supplementary resources. It’s a big step up.

Anyone who has given the first chapter a thoughtful read (or watched the Dec. 14, 2017 CAP webinar given by Mahul B. Amin, MD, editor-in-chief of the staging manual, and Thomas P. Baker, MD, who chairs the CAP Cancer Committee) knows that this edition creates a new and inclusive gestalt. Members of 18 expert panels represent every cancer specialty. Another seven “cores” are home to experts in crosscutting disciplines such as precision medicine, evidence-based medicine and statistics, and content harmonization. These multidisciplinary resource teams bridge all disciplines, advising and encouraging transparent and useful give-and-take around staging, characterization, and utility.

I am pleased to note that Dr. Amin, professor and chair, Department of Pathology and Laboratory Medicine, and Gerwin endowed chair for cancer research, University of Tennessee Health Science Center, is a former chair of the CAP Cancer Committee. The decision to appoint Dr. Amin editor-in-chief says good things about the American College of Surgeons, which provides administrative support to the AJCC. Their choice of Dr. Amin, the first pathologist (in eight editions spanning almost five decades) to take on this honorable and pivotal task, is a credit to our specialty.

One reason, no doubt, is his ability to articulate the integral role of anatomic and clinical pathologists in cancer diagnosis and treatment. Dr. Amin knows how to explain to nonpathologists that our ability to work with the tools of molecular diagnostics enables more accurate subclassification at the patient care level. He can capture the ways that pathologists understand data mining and how it is employed to investigate potential treatment alternatives. He can help other specialists understand just what we do.

Whenever the AJCC releases a new edition of the staging manual, the CAP Cancer Committee uses it to create or revise the CAP cancer protocols. Subspecialty teams manage the 63 CAP protocols and 14 biomarker templates. Last summer, we released revised versions of 52 CAP cancer protocols that harmonize with the eighth edition.

The cancer protocols are a big project within the CAP; they are among the best things we do for our patients and our specialty. The Cancer Committee reports to the CAP Council on Scientific Affairs, chaired by Raouf Nakhleh, MD. Volunteers on the CSA and the Cancer Committee support Dr. Baker and Dr. Nakhleh to ensure our protocols provide all necessary information without burdening pathologists with irrelevant reporting criteria. Because the CAP volunteers who write the protocols are practicing pathologists, they know that sometimes too much is too much. Succinct, synoptic reports enable our members to give managing clinicians a sharp, quick, complete picture of what they need to know to apply the new staging system at the point of care.

I am writing this on New Year’s Eve and thinking about all the ways the new edition offers a fresh look at our lives and our work. We can use the term “paradigm shift” only in retrospect, but I predict we will come to see the release of the eighth edition of the AJCC Cancer Staging Manual as a pivotal moment in the science of cancer care. This revision offers all of us—pathologists, surgeons, clinicians, and patients—the vocabulary and context required to communicate clearly and comfortably about cancer diagnosis, prognosis, and treatment.


Assessing the Risks Associated with MRI in Patients with a Pacemaker or Defibrillator



The presence of a cardiovascular implantable electronic device has long been a contraindication for the performance of magnetic resonance imaging (MRI). We established a prospective registry to determine the risks associated with MRI at a magnetic field strength of 1.5 tesla for patients who had a pacemaker or implantable cardioverter–defibrillator (ICD) that was “non–MRI-conditional” (i.e., not approved by the Food and Drug Administration for MRI scanning).


Patients in the registry were referred for clinically indicated nonthoracic MRI at a field strength of 1.5 tesla. Devices were interrogated before and after MRI with the use of a standardized protocol and were appropriately reprogrammed before the scanning. The primary end points were death, generator or lead failure, induced arrhythmia, loss of capture, or electrical reset during the scanning. The secondary end points were changes in device settings.


MRI was performed in 1000 cases in which patients had a pacemaker and in 500 cases in which patients had an ICD. No deaths, lead failures, losses of capture, or ventricular arrhythmias occurred during MRI. One ICD generator could not be interrogated after MRI and required immediate replacement; the device had not been appropriately programmed per protocol before the MRI. We observed six cases of self-terminating atrial fibrillation or flutter and six cases of partial electrical reset. Changes in lead impedance, pacing threshold, battery voltage, and P-wave and R-wave amplitude exceeded prespecified thresholds in a small number of cases. Repeat MRI was not associated with an increase in adverse events.


In this study, device or lead failure did not occur in any patient with a non–MRI-conditional pacemaker or ICD who underwent clinically indicated nonthoracic MRI at 1.5 tesla, was appropriately screened, and had the device reprogrammed in accordance with the prespecified protocol. (Funded by St. Jude Medical and others; MagnaSafe ClinicalTrials.gov number, NCT00907361.)

The use of magnetic resonance imaging (MRI) poses potential safety concerns for patients with an implanted cardiac device (cardiac pacemaker or implantable cardioverter–defibrillator [ICD]). These concerns are a consequence of the potential for magnetic field–induced cardiac lead heating, which could result in myocardial thermal injury and detrimental changes in pacing properties.1-3 As a result, it has long been recommended that patients with an implanted cardiac device not undergo MRI scanning, even when it otherwise may be considered to be the most appropriate diagnostic imaging method for the patient’s clinical care.4

Over the past two decades, cardiac devices have been designed to reduce the potential risks associated with MRI.5,6 Such devices, if they have been shown to pose no known hazard under certain specified conditions, are labeled “MRI-conditional” by the Food and Drug Administration (FDA) Center for Devices and Radiological Health. However, it is estimated that 2 million people in the United States and an additional 6 million worldwide7 have devices that have not been shown to meet these criteria and are therefore considered “non–MRI-conditional.” At least half the patients with such devices are predicted to have a clinical indication for MRI during their lifetime after device implantation.8

The MagnaSafe Registry was established to determine the frequency of cardiac device–related clinical events and device setting changes among patients with non–MRI-conditional devices who undergo nonthoracic MRI at a magnetic field strength of 1.5 tesla, as well as to define a simplified protocol for screening, monitoring, and device programming for such patients.



The MagnaSafe Registry was a prospective, multicenter study involving patients with a non–MRI-conditional pacemaker or ICD who underwent a clinically indicated, nonthoracic MRI examination at 1.5 tesla. The rationale, design, and protocol have been described previously.9 The protocol, which is available with the full text of this article at, was written after consultation with personnel at the Center for Devices and Radiological Health of the FDA, who requested that thoracic scans be excluded because of a higher perceived risk of adverse outcomes. An investigational device exemption was obtained in April 2009 for the purpose of data collection. All participating centers obtained approval from a local or independent institutional review board.

None of the funders of the study had any role in the design of the study protocol, in the collection or analysis of the data, or in the writing of the manuscript. The authors had full access to the data, performed the analyses, and vouch for the completeness and accuracy of the data and for the fidelity of the study to the protocol.


Patients were included in the registry if they were 18 years of age or older and had a non–MRI-conditional pacemaker or ICD generator, from any manufacturer, that was implanted after 2001,10 with leads from any manufacturer (without implantation date limitation), and if the patient’s physician determined that nonthoracic MRI at 1.5 tesla was clinically indicated (see Tables S1 and S2 in the Supplementary Appendix, available at, for a list of pacemaker and ICD manufacturers and models). The exclusion criteria were an abandoned or inactive lead that could not be interrogated, an implanted device other than a pacemaker or an ICD, an MRI-conditional pacemaker, a device implanted in a nonthoracic location, or a device that was near the end of its battery life (with a device interrogation display that read “elective replacement indicator”). In addition, pacing-dependent patients with an ICD were excluded because it was not possible to independently program tachycardia and bradycardia therapies for all ICD models at the time of study design. All participants provided written informed consent.


During the first 2 years of the study, the Centers for Medicare and Medicaid Services National Coverage Determination (NCD) stated that a patient with a pacemaker or an ICD was not eligible for coverage for MRI. In March 2011, a change to the NCD was granted that allowed reimbursement for patients enrolled in a prospective registry designed to determine the risk associated with MRI.11


All studies were performed in a 1.5-tesla MRI scanner; there was no vendor restriction (a list of manufacturers and models is included in Table S3 in the Supplementary Appendix). A physician, nurse practitioner, or physician assistant with cardiac device expertise and training in advanced cardiac life support was in attendance. Blood pressure, pulse oximetry, and cardiac rhythm were monitored with an MRI-compatible system from the time of device reprogramming until restoration of baseline values. Further details are provided in the MagnaSafe Protocol section of the Supplementary Appendix.


Figure 1. MagnaSafe Registry Study Flow Chart.

Prescanning device interrogation was performed with the use of a standardized protocol (Figure 1).9 If the patient was asymptomatic and had an intrinsic heart rate of at least 40 beats per minute, the device was programmed to a no-pacing mode (ODO or OVO). Symptomatic patients or those with an intrinsic heart rate of less than 40 beats per minute were determined to be pacing-dependent, and the device was reprogrammed to an asynchronous pacing mode (DOO or VOO). For non–pacing-dependent patients with an ICD, all bradycardia and tachycardia therapies were inactivated before the MRI. Pacing-dependent patients with an ICD were excluded, because not all ICD models allowed for independent inactivation of tachycardia and bradycardia therapies. After the MRI, baseline settings were restored, full device interrogation was repeated, and if necessary, the device was reprogrammed to maintain adequate pacing and sensing. Further details are provided in the MagnaSafe Protocol section of the Supplementary Appendix.


The primary end points, which were assessed during or immediately after the MRI examination, were death, generator or lead failure requiring immediate replacement, loss of capture (for pacing-dependent patients with pacemakers), new-onset arrhythmia, and partial or full generator electrical reset. The secondary end points, which were assessed immediately after the MRI examination and at the final follow-up, were a battery voltage decrease of 0.04 V or more, a pacing lead threshold increase of 0.5 V or more,12 a P-wave amplitude decrease of 50% or more, an R-wave amplitude decrease of 25% or more and of 50% or more,13 a pacing lead impedance change of 50 ohms or more,14 and a high-voltage (shock) lead impedance change of 3 ohms or more.
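The prespecified secondary end-point limits above amount to a simple post-scan comparison of device settings. A minimal sketch of such a check follows; the field names, units, and data layout are illustrative assumptions, not the registry's actual data schema, and only the 25% R-wave level is encoded here (the 50% level was tabulated separately).

```python
# Hypothetical illustration of the prespecified secondary end-point thresholds.
# Keys and units are assumptions for illustration, not the registry's schema.
SECONDARY_THRESHOLDS = {
    "battery_voltage_v":         ("decrease", 0.04),   # decrease of >=0.04 V
    "pacing_threshold_v":        ("increase", 0.5),    # increase of >=0.5 V
    "p_wave_amplitude_mv":       ("pct_decrease", 50.0),
    "r_wave_amplitude_mv":       ("pct_decrease", 25.0),
    "pacing_lead_impedance_ohm": ("abs_change", 50.0),
    "shock_lead_impedance_ohm":  ("abs_change", 3.0),
}

def secondary_events(pre: dict, post: dict) -> list[str]:
    """Return the measurements whose post-MRI change meets a prespecified limit."""
    events = []
    for name, (kind, limit) in SECONDARY_THRESHOLDS.items():
        if name not in pre or name not in post:
            continue
        delta = post[name] - pre[name]
        if kind == "decrease" and -delta >= limit:
            events.append(name)
        elif kind == "increase" and delta >= limit:
            events.append(name)
        elif kind == "pct_decrease" and pre[name] and (-delta / pre[name]) * 100 >= limit:
            events.append(name)
        elif kind == "abs_change" and abs(delta) >= limit:
            events.append(name)
    return events

# Example: a 60-ohm pacing lead impedance change would trigger follow-up.
print(secondary_events({"pacing_lead_impedance_ohm": 500},
                       {"pacing_lead_impedance_ohm": 560}))
```

Under the protocol, any flagged measurement triggered repeat interrogation at 7 days, 3 months, and 6 months.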

Patients with any secondary end-point event were required to undergo repeat device interrogation within 7 days, at 3 months (±30 days), and at 6 months (±30 days) after the MRI to determine whether the device settings had returned to baseline. If a secondary end-point event did not occur, a single device interrogation was required at between 3 and 6 months after the MRI (±30 days). Patients who had a primary end-point event were seen in follow-up at the discretion of the supervising physician. Further details and definitions of end points are provided in the Supplementary Appendix.


A case was defined as an instance in which a patient who provided informed consent entered the scanner and underwent MRI of one or more anatomical regions during a single examination session. If the patient returned on a subsequent day for repeat MRI, a separate informed consent was obtained and the data were entered as a unique case.

The mean (±SD) yearly rate of device replacement due to spontaneous malfunction has been estimated to be 0.46±0.22% for pacemakers and 2.07±1.16% for ICDs.15 Using these estimates and assuming a device failure rate during or after MRI of 0, we determined that 1000 cases in which patients had a pacemaker (pacemaker cases) and 500 cases in which patients had an ICD (ICD cases) would be needed to yield a 95% confidence interval of 0 to 0.5% for pacemakers and 0 to 1.0% for ICDs.

Data were analyzed separately for the pacemaker and ICD cohorts with the use of R statistical software. The decision not to perform statistical comparisons between the pacemaker and ICD cohorts was made before enrollment began. The Wilson score method without continuity correction was used to calculate 95% confidence intervals for single proportions for primary end-point events. The linear association between lead age and each of the secondary end points was assessed with Pearson’s product moment correlation coefficient.
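The Wilson score method named above has a closed form, sketched below for the zero-event case that drove the sample-size estimate. The exact planned bounds quoted in the text (0 to 0.5% and 0 to 1.0%) may reflect additional rounding or conservative assumptions beyond this formula.

```python
from math import sqrt

def wilson_interval(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval (no continuity correction) for one proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# With zero observed failures, the upper bound shrinks as n grows:
lo, hi = wilson_interval(0, 1000)   # pacemaker cohort size
print(f"pacemakers: {lo:.2%} to {hi:.2%}")
lo, hi = wilson_interval(0, 500)    # ICD cohort size
print(f"ICDs: {lo:.2%} to {hi:.2%}")
```

For zero events the lower bound is 0 and the upper bound reduces to z²/(n + z²), which motivates the larger pacemaker cohort: doubling n roughly halves the upper bound.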



Table 1. Characteristics of the Patients at Baseline and MRI Scanning Information.

From April 2009 through April 2014 at 19 centers in the United States, clinically indicated nonthoracic MRI was performed in a total of 1000 pacemaker cases (818 patients) and 500 ICD cases (428 patients). The baseline characteristics of the patients are shown in Table 1. Follow-up data, which included data from a full device interrogation, were available in 1395 cases (93%) at 6 months. Additional information about the study population is provided in the Supplementary Appendix.


A total of 75% of the MRI examinations were performed on the brain or the spine. The mean time patients spent within the magnetic field was 44 minutes. During the MRI examination, four patients reported symptoms of generator-site discomfort; one patient with an ICD was removed from the scanner when a sensation of heating was described at the site of the generator implant, and the patient did not complete the examination. No patient with generator-site symptoms had the device placed within the “field of view” (the MRI imaging area), had a study end-point event, or reached the specific absorption rate limit set by the FDA for the scanned body site.


Table 2. Primary End Points.

There were no deaths, lead failures requiring immediate replacement, or losses of capture during the MRI examination among patients who were appropriately screened and had their device reprogrammed for imaging (Table 2). In one patient with an ICD who was not pacing-dependent, antitachycardia therapy was left in the active mode during the MRI (a protocol violation). During the post-MRI evaluation, the ICD could not be interrogated, and immediate generator replacement was required. Further details are provided in the Supplementary Appendix.

Four patients had atrial fibrillation and two patients had atrial flutter during or immediately after the MRI (Table S4 in the Supplementary Appendix). Five of these patients had a history of paroxysmal atrial fibrillation and were receiving warfarin; two were receiving antiarrhythmic therapy. Three of the patients returned to sinus rhythm before leaving the MRI environment, and the remaining three patients returned to sinus rhythm within 49 hours. No ventricular arrhythmias were noted.

In six cases (five patients), a partial generator electrical reset occurred; in all six cases, the patients had pacemakers that had been implanted 5.7 to 9.7 years before the MRI (Table S5 in the Supplementary Appendix). Settings in the device memory that were reset included patient and device or lead identification information. No appropriately screened and reprogrammed device underwent a full electrical reset.


Table 3. Measured Changes in Device Setting Values (Post-MRI minus Pre-MRI).

The results with regard to the secondary end points and measured differences between post-MRI and pre-MRI device settings for both pacemakers and ICDs are shown in Table 3 and as a histogram in Fig. S1 in the Supplementary Appendix. A decrease of 50% or more in P-wave amplitude was detected in 0.9% of pacemaker leads and in 0.3% of ICD leads; a decrease of 25% or more in R-wave amplitude was detected in 3.9% of pacemaker leads and in 1.6% of ICD leads, and a decrease of 50% or more in R-wave amplitude was detected in no pacemaker leads and in 0.2% of ICD leads. An increase in pacing lead threshold of 0.5 V or more was detected in 0.7% of pacemaker leads and in 0.8% of ICD leads.

A pacing lead impedance change of 50 ohms or more was noted in 3.3% of pacemakers and in 4.2% of ICDs. For both pacemakers and ICDs, any decrease in pacing lead impedance from baseline occurred in 54% of atrial leads and in 55% of ventricular leads, and any increase occurred in 19% of atrial and 22% of ventricular leads. However, when the change in pacing lead impedance was compared as a continuous variable with the change in P-wave or R-wave voltage or pacing lead threshold, no clinically significant correlations were noted (Table S6 in the Supplementary Appendix).


Among patients who had undergone placement of a new generator or lead within 90 days before the MRI, there were no primary end-point events, and secondary end-point events were limited to a change in pacing lead impedance in 2 of 53 new pacemaker leads and in 1 of 27 new ICD leads. Among patients with leads that had been placed more than 10 years before MRI, there were no primary end-point events, and secondary end-point events were noted in 1 of 31 ICD leads (impedance change of ≥50 ohms) and in 14 of 172 pacemaker leads (1 with a P-wave amplitude decrease of ≥50%, 1 with a pacing threshold increase of ≥0.5 V, and 11 with an impedance change of ≥50 ohms). When the continuous variables of pacing lead threshold change, P-wave amplitude change, R-wave amplitude change, and impedance change were compared separately with the time since lead placement, no clinically significant correlations were found (Table S7 in the Supplementary Appendix).


The maximum number of MRI examinations performed in patients in the MagnaSafe Registry was 11 in one patient with a pacemaker and 7 in one patient with an ICD (Table S8 in the Supplementary Appendix). The median interval between MRIs among patients who underwent more than one MRI examination was 153 days in patients with a pacemaker (range, 3 to 1309 days) and 91 days in patients with an ICD (range, 1 to 1376 days). In the examination of secondary end points, we found no clinically important differences between cases in which the patient underwent a single MRI and cases in which patients had undergone a previous MRI (Table S9 in the Supplementary Appendix).


Table 4. Cases in Which a Secondary End-Point Event Occurred Immediately after MRI or by the Final Follow-up.

Patients whose cardiac device exceeded the limit for a change in setting at the time of the MRI (a secondary end-point event) were asked to return for a repeat interrogation within 7 days and at 3 months and 6 months (pacemakers, 11% of cases; ICDs, 26% of cases). The proportions of cases in which there were persistent changes in device settings at the final follow-up are shown in Table 4. A higher incidence of long-term setting changes was seen with ICDs than with pacemakers. A long-term battery voltage decrease of 0.04 V or more occurred in 4.2% of ICD cases, and a long-term high-voltage lead impedance change of 3 ohms or more occurred in 10.0% of ICD cases.


In this study, we investigated the use of nonthoracic MRI at 1.5 tesla in patients with an implanted non–MRI-conditional cardiac device (pacemaker or ICD). We implemented a specific protocol for device interrogation, device programming, patient monitoring, and follow-up that was designed to reduce the risk of patient harm from MRI effects. In our study, no patient who was appropriately screened and had the device reprogrammed in accordance with our protocol had a device or lead failure. In one case, an ICD that was not properly reprogrammed before the MRI could not be interrogated after the procedure, and immediate generator replacement was required. In six cases, atrial arrhythmias occurred, each lasting less than 49 hours; six partial electrical resets occurred that were detected and corrected during post-MRI reprogramming. Changes in device settings were common, but relatively few exceeded our prespecified threshold criteria for a clinically important change; the most common change was a 3-ohm change in ICD high-voltage (shock) lead impedance (16.4% of cases).

When pre-MRI and post-MRI battery voltage measurements were compared, a small decrease was noted for both pacemakers and ICDs. The radiofrequency energy generated during MRI scanning creates a temporary decrease in battery voltage, which has typically been reported to resolve after several weeks. In our study, all pacemaker voltage decreases of 0.04 V or more had resolved at the last follow-up, although some ICD voltage decreases of 0.04 V or more had not.

At the time that the study was being designed, we did not anticipate the demand for repeat MRI for patients with an implanted cardiac device. If exposure to a strong radiofrequency field resulted in substantial thermal injury at the lead–myocardial interface,1 patients undergoing repeat MRI should be at the greatest risk for a cumulative detrimental change in pacing properties. The only indication of such an effect in our study was a higher rate of high-voltage (shock) lead impedance changes among patients who had had previous MRI than among those who had not had previous MRI (21.5% vs. 14.9%).

Several smaller studies examining the risk associated with MRI in patients with an implanted device have reported varying effects on cardiac device settings.17-31 On the basis of this early experience, position statements recommended caution in the performance of MRI in patients with an implanted cardiac device.32,33 Subsequently, a larger prospective study examined 555 cases of scanning (including thoracic imaging) to assess the risk associated with MRI; no adverse clinical events occurred among the patients who underwent MRI, and the observed setting changes did not require device revision or reprogramming.7

Although it has been suggested that implanted generators and leads may be removed and then replaced to allow for MRI, such procedures may have greater risks than those associated with nonthoracic MRI in the current study. The rate of major complications among patients undergoing generator replacement with or without the placement of an additional transvenous lead was 4 to 15% in a prospective registry.34 In addition, single-center and multicenter studies have shown a rate of major complications associated with elective laser-assisted lead extraction that is in the range of 0.4 to 2%.35-38 Thus, device removal and replacement seem unlikely to be safer than proceeding with scanning for patients with a pacemaker or an ICD who require a nonthoracic MRI, provided a protocol similar to the one used in our study is followed.

The limitations of this study should be considered carefully. This registry represents a heterogeneous experience, with generators and leads from multiple manufacturers and initial as well as repeat examinations at 1.5 tesla. Thus, the results may not be predictive of findings with all device–lead combinations or higher MRI field strengths. Also, because patients younger than 18 years of age and MRI examinations of the thorax were excluded and the number of left ventricular leads was relatively small, it may not be possible to extrapolate the current data to a pediatric population, to patients undergoing MRI of the chest, or to patients with cardiac resynchronization devices. Finally, we excluded pacing-dependent patients with an ICD, because not all such patients had a device that was capable of providing pacing function while allowing for inactivation of tachycardia therapy. Therefore, our method should not be applied to pacing-dependent patients with an ICD unless independent programming of the bradycardia and tachycardia functions is possible.

In conclusion, we investigated the use of nonthoracic MRI at 1.5 tesla in patients with an implanted non–MRI-conditional cardiac device. No patient who was appropriately screened and had the cardiac device reprogrammed according to our protocol had device or lead failure. Substantial changes in device settings were infrequent and did not result in clinical adverse events.

Presented in part as a Late Breaking Clinical Trial at the American Heart Association Annual Scientific Sessions, Chicago, November 15–19, 2014.

Supported by grants from St. Jude Medical, Biotronik, Boston Scientific, and the Hewitt Foundation for Medical Research, and by philanthropic gifts from Mr. and Mrs. Richard H. Deihl, Evelyn F. and Louis S. Grubb, Roscoe E. Hazard, Jr., and the Shultz Steel Company.

Disclosure forms provided by the authors are available with the full text of this article at

Author Affiliations

From the Scripps Research Institute (R.J.R.), the La Jolla Cardiovascular Research Institute (R.J.R., P.D.S.), University of California, San Diego (U.B.-G.), and Scripps Memorial Hospital (S.L.H., G.T.T.), La Jolla, the University of California, Los Angeles, Los Angeles (N.G.B.), and Providence St. Joseph Medical Center, Burbank (R.H.M.S.) — all in California; the Department of Entomology, University of Arizona, Tucson (H.S.C.); Intermountain Medical Center, Salt Lake City (J.L.A., A.E.T.); Inova Heart and Vascular Institute, Falls Church, VA (A.A.); Allegheny General Hospital, Pittsburgh (R.W.W.B.), and Abington Memorial Hospital, Abington (J.V.F.) — both in Pennsylvania; Yale University School of Medicine, New Haven, CT (R.L.); Providence Heart Institute, Southfield, MI (C.E.M.); Oklahoma Heart Institute, Tulsa (E.T.M.); University of Mississippi Medical Center, Jackson (A.L.R.); Medical College of Wisconsin, Milwaukee (J.C.R.); Bassett Medical Center, Cooperstown (J.D.S.), and Advanced Cardiovascular Imaging, Carnegie Hill Radiology, New York (S.U., S.D.W.) — both in New York; Methodist DeBakey Heart and Vascular Center, Houston (D.J.S.); and Baptist Health, Lexington, KY (G.F.T.).

Address reprint requests to Dr. Russo at the Department of Molecular and Experimental Medicine, Scripps Research Institute, 10550 N. Torrey Pines Rd., La Jolla, CA 92037.


Source: NEJM

Monoclonal Antibody in Mild Alzheimer’s Disease

Is solanezumab effective in the treatment of mild Alzheimer’s disease?

Honig et al. conducted a randomized, double-blind, phase 3 trial (EXPEDITION 3), which enrolled only patients who had mild Alzheimer’s disease, defined as a Mini–Mental State Examination score of 20 to 26 (on a scale from 0 to 30, with higher scores indicating better cognition), and had biomarker evidence of cerebral beta-amyloid deposition. Patients were randomly assigned to receive intravenous infusions of either solanezumab at a dose of 400 mg or placebo every 4 weeks for 76 weeks. This trial was intended to further investigate the secondary efficacy analyses from two earlier trials.

Clinical Pearls

Q: What is the amyloid beta (Aβ) hypothesis regarding the pathogenesis of Alzheimer’s disease?

A: The neuropathological hallmarks of Alzheimer’s disease include extracellular plaques containing amyloid beta (Aβ) and intracellular neurofibrillary tangles containing hyperphosphorylated tau protein, along with synaptic and neuronal losses. The Aβ hypothesis of the mechanism of Alzheimer’s disease proposes that early pathogenesis of the disease results from the overproduction of or reduced clearance of Aβ, leading to the formation of oligomers, fibrils, and neuritic Aβ plaques. Treatments that slow the production of Aβ or that increase the clearance of Aβ may slow the progression of Alzheimer’s disease.

Q: What is solanezumab?

A: Solanezumab, a humanized immunoglobulin G1 monoclonal antibody that binds to the mid-domain of the Aβ peptide, was designed to increase the clearance from the brain of soluble Aβ peptides, which may have toxic effects in the synapses at a stage before the deposition of the fibrillar form of the protein.

Morning Report Questions

Q: Is solanezumab effective in the treatment of mild Alzheimer’s disease?

A: In the trial by Honig et al., the primary efficacy measure was the change from baseline to 80 weeks in the score on the 14-item cognitive subscale of the Alzheimer’s Disease Assessment Scale (ADAS-cog14; scores range from 0 to 90, with higher scores indicating greater cognitive impairment). The trial showed no significant between-group difference at week 80 in the change in score from baseline (change, 6.65 in the solanezumab group and 7.44 in the placebo group; difference, −0.80; P=0.10).

Q: What are some possible explanations for the lack of benefit associated with solanezumab in the trial by Honig et al.?

A: According to the authors, the solanezumab dose that was administered in this trial was associated with a high level of peripheral target engagement, sufficient to reduce free plasma Aβ concentrations by more than 90%. However, this effect did not produce clinical efficacy. Thus, a reduction in peripheral free Aβ alone is unlikely to lead to clinically meaningful cognitive benefits. Second, the dose of solanezumab (400 mg, administered every 4 weeks) may have been insufficient to produce a meaningful effect. Third, the pathological changes in the mild stage of Alzheimer’s disease–related dementia may not be amenable to treatment with a drug targeting soluble Aβ. Fourth, solanezumab was designed to increase the clearance of soluble Aβ from the brain, predicated on the Aβ hypothesis of Alzheimer’s disease — that the disease results from the overproduction of or reduced clearance of Aβ (or both). Although the amyloid hypothesis is based on considerable genetic and biomarker data, if amyloid is not the cause of the disease, solanezumab would not be expected to slow disease progression.

Source: NEJM


To Care Is Human — Collectively Confronting the Clinician-Burnout Crisis

The ethical principles that guide clinical care — a commitment to benefiting the patient, avoiding harm, respecting patient autonomy, and striving for justice in health care — affirm the moral foundation and deep meaning underlying many clinicians’ view of their profession as a worthy and gratifying calling. It is clear, however, that owing to the growing demands, burdensome tasks, and increasing stress experienced by many clinicians, alarmingly high rates of burnout, depression, and suicide threaten their well-being. More than half of U.S. physicians report significant symptoms of burnout — a rate more than twice that among professionals in other fields. Moreover, we know that the problem starts early. Medical students and residents have higher rates of burnout and depression than their peers who are pursuing nonmedical careers. Nor is the trend limited to physicians: nurses also experience alarming rates of burnout.1 Clinicians are human, and it takes a personal toll on them when circumstances make it difficult to fulfill their ethical commitments and deliver the best possible care.

Burnout — a syndrome characterized by emotional exhaustion and depersonalization (which includes negativity, cynicism, and the inability to express empathy or grief), a feeling of reduced personal accomplishment, loss of work fulfillment, and reduced effectiveness — has serious consequences in terms of both human cost and system inefficiency.1 Nothing puts these consequences into starker relief than the devastating rates of suicide among physicians. As many as 400 U.S. physicians die by suicide every year.2 Nearly every clinician has been touched at some point by such a tragedy.

Not only are clinicians’ lives at risk, so is patient safety. Some studies have revealed links between clinician burnout and increased rates of medical errors, malpractice suits, and health care–associated infections. In addition, clinician burnout places a substantial strain on the health care system, leading to losses in productivity and increased costs. Burnout is independently associated with job dissatisfaction and high turnover rates. In one longitudinal study, the investigators calculated that annual productivity loss in the United States that is attributable to burnout may be equivalent to eliminating the graduating classes of seven medical schools.1 These consequences are unacceptable by any standard. Therefore, we have an urgent, shared professional responsibility to respond and to develop solutions.

Indeed, there is broad recognition in the health care community that the problem of clinician burnout, depression and other mental disorders, and suicide has reached a crisis level. There are many existing efforts by individual organizations, hospitals, training programs, professional societies, and specialties to confront the crisis. But no single organization can address all the issues that will need to be explored and resolved. There is no mechanism for systematically and collectively gathering data on, analyzing, and mitigating the causes of burnout. The problem is not lack of concern, disagreement about the severity or urgency of the crisis, or absence of will to act. Rather, there is a need to coordinate and synthesize the many ongoing efforts within the health care community and to generate momentum and collective action to accelerate progress. Furthermore, any solution will need to involve key influencers beyond the health care community, such as information technology (IT) vendors, payers, regulators, accreditation agencies, policymakers, and patients.

We believe that the National Academy of Medicine (NAM; formerly the Institute of Medicine, or IOM) is uniquely suited to take on the coordinating role. Nearly 20 years ago, the IOM report To Err Is Human identified high rates of medical error driven by a fragmented care system. The report spurred systemwide changes that have improved the safety and quality of care.3 Today, we need a similar call to action. To that end, in January 2017, the NAM, in collaboration with the Association of American Medical Colleges (AAMC) and the Accreditation Council for Graduate Medical Education (ACGME), launched a national Action Collaborative on Clinician Well-Being and Resilience.4 The collaborative aims to draw on the relevant evidence base to build on existing efforts by facilitating knowledge sharing and catalyzing collective action.

Since launching the collaborative, the NAM has been overwhelmed by requests from organizations wanting to take part in this work and has therefore issued an open call for network organizations to share information and resources. These network organizations have made formal public commitments to promoting clinician well-being (available on the collaborative’s website5), and they pledge to work with the NAM and others in the network to share knowledge and coordinate efforts. Currently, the collaborative comprises 55 core organizations and a network of more than 80 others, including clinician groups that span many disciplines and specialties, as well as payers, researchers, government agencies, technology companies, patient organizations, trainees, and more.

Four central goals guide the collaborative’s initial work: to increase the visibility of clinician stress and burnout, to improve health care organizations’ baseline understanding of the challenges to clinician well-being, to identify evidence-based solutions, and to monitor the effectiveness of implementation of these solutions. We already know that burnout is driven largely by external factors, rather than by personal characteristics. These factors include work-process inefficiencies (such as cumbersome IT systems), excessive work hours and workloads, work–home conflicts, problems with the organizational culture (such as team dysfunction and management styles that neglect clinician input), and perceived loss of control and meaning at work. Although personal factors unrelated to the clinical environment (such as being young, female, or a parent of young children or teenagers) may also contribute to a greater risk of burnout, the collaborative will focus initially on promoting solutions and progress at organizational, systems, and cultural levels.

The collaborative has organized its efforts into four work streams. The “Research, Data, and Metrics” workgroup is compiling validated survey instruments and evidence-based interventions and identifying benchmarks for gauging progress in supporting clinician well-being. The “Conceptual Model” workgroup has created a comprehensive conceptual model and will establish a shared taxonomy by defining key factors. The “External Factors and Work Flow” workgroup is exploring approaches to optimal team-based care and documentation in the rapidly evolving digital health environment. And the “Messaging and Communications” workgroup is identifying key stakeholders and developing targeted messaging to disseminate available evidence and knowledge and thus inspire action. A key deliverable for the collaborative is an online “knowledge hub” (to launch in 2018) that will serve as a user-friendly repository for available data, models, and toolkits and will provide opportunities for clinicians and other stakeholders to share information and build productive relationships. The NAM encourages all interested organizations and individuals to become involved in the work of the collaborative and to use its products in their own endeavors (for more information, see the project website4).

The health professions are at a critical inflection point. The health system cannot sustain current rates of clinician burnout and continue to deliver safe, high-quality care. But there is reason to be optimistic that the tide is turning. The strong commitment of more than 100 national organizations to the work of the collaborative has made clear that clinician well-being is a growing priority for health care leaders, policymakers, payers, and other decision makers capable of bringing about system-level change. Through collective action and targeted investment, we can not only reduce burnout and promote well-being, but also help clinicians carry out the sacred mission that drew them to the healing professions — providing the very best care to patients.

Source: NEJM
