Mice can ‘warn’ sons, grandsons of dangers via sperm


Lab mice trained to fear a particular smell can transfer the impulse to their unborn sons and grandsons through a mechanism in their sperm, a study reveals.

The research claims to provide evidence for the concept of animals “inheriting” a memory of their ancestors’ traumas, and responding as if they had lived the events themselves.

It is the latest find in the study of epigenetics, in which environmental factors are said to cause genes to start behaving differently without any change to their underlying DNA encoding.

“Knowing how ancestral experiences influence descendant generations will allow us to understand more about the development of neuropsychiatric disorders that have a transgenerational basis,” says study co-author Brian Dias of the Emory University School of Medicine in Atlanta, Georgia.

And it may one day lead to therapies that can soften the memory “inheritance”.

For the study, Dias and co-author Kerry Ressler trained mice, using foot shocks, to fear an odour that resembles cherry blossoms.

Later, they tested the extent to which the animals’ offspring startled when exposed to the same smell. The younger generation had not even been conceived when their fathers underwent the training, and had never smelt the odour before the experiment.

The offspring of trained mice were “able to detect and respond to far less amounts of odour… suggesting they are more sensitive” to it, says Ressler, co-author of the study published in the journal Nature Neuroscience.

They did not react the same way to other odours, and compared to the offspring of non-trained mice, their reaction to the cherry blossom whiff was about 200 percent stronger, he says.

The scientists then looked at a gene (M71) that governs the functioning of an odour receptor in the nose that responds specifically to the cherry blossom smell.

Epigenetic marks

The gene, inherited through the sperm of trained mice, had undergone no change to its DNA encoding, the team found.

But the gene did carry epigenetic marks that could alter its behaviour and cause it to be “expressed more” in descendants, says Dias.

This in turn caused a physical change in the brains of the trained mice, their sons and grandsons, who all had a larger glomerulus – a section in the olfactory (smell) unit of the brain.

“This happens because there are more M71 neurons in the nose sending more axons” into the brain, says Dias.

Similar changes in the brain were seen even in offspring conceived with artificial insemination from the sperm of cherry blossom-fearing fathers.

The sons of trained mouse fathers also had the altered gene expression in their sperm.

“Such information transfer would be an efficient way for parents to ‘inform’ their offspring about the importance of specific environmental features that they are likely to encounter in their future environments,” says Ressler.

Happening in humans?

Commenting on the findings, British geneticist Marcus Pembrey says they could be useful in the study of phobias, anxiety and post-traumatic stress disorders.

“It is high time public health researchers took human transgenerational responses seriously,” he said in a statement issued by the Science Media Centre.


“I suspect we will not understand the rise in neuropsychiatric disorders or obesity, diabetes and metabolic disruptions generally without taking a multigenerational approach.”

Wolf Reik, epigenetics head at the Babraham Institute in England, said such results were “encouraging” as they suggested that transgenerational inheritance does exist, but cautioned that the findings cannot yet be extrapolated to humans.

 


Autism detectable ‘in first months’


An early indication of autism can be identified in babies under six months old, a study suggests.

US researchers, writing in Nature, analysed how infants looked at faces from birth to the age of three.

They found children later diagnosed with autism initially developed normally but showed diminished eye contact – a hallmark of autism – between two and six months of age.

A UK expert said the findings raise hope for early interventions.

In the study, researchers led by Emory University School of Medicine in Atlanta used eye-tracking technology to measure the way babies looked at and responded to social cues.


They found infants later diagnosed with autism had shown a steady decline in attention to the eyes of other people from the age of two months onwards, when watching videos of natural human interactions.

Lead researcher Dr Warren Jones told BBC News: “It tells us for the first time that it’s possible to detect some signs of autism in the first months of life.

“These are the earliest signs of autism that we’ve ever observed.”

The study, in collaboration with the Marcus Autism Center and Children’s Healthcare of Atlanta, followed 59 infants who had a high risk of autism because they had siblings with the life-long condition, and 51 infants at low risk.

Dr Jones and colleague Dr Ami Klin followed them to the age of three, when the children were formally assessed for autism.

Thirteen of the children – 11 boys and two girls – were diagnosed with autism spectrum disorders, a range of disorders that includes autism and Asperger’s syndrome.

The researchers then went back to look at the eye-tracking data, and what they found was surprising.

“In infants with autism, eye contact is declining already in the first six months of life,” said Dr Jones.

But he added this could be seen only with sophisticated technology and would not be visible to parents.

“It’s not something that parents would be able to see by themselves at all. If parents have concerns they should talk to their paediatrician.”

Dr Deborah Riby, of the department of psychology at Durham University, said the study provided an insight into the timing of atypical social attention in children who might go on to develop autism.

Autism spectrum disorders

  • Autism and Asperger’s syndrome are part of a range of related developmental disorders known as autistic spectrum disorders (ASD)
  • They begin in childhood and last through adulthood.
  • ASD can cause a wide range of symptoms, which are grouped into three categories: problems with social interaction, impaired communication skills and unusual patterns of thought and behaviour

Source: NHS Choices

“These early markers are extremely important for us to identify – the earlier we can diagnose a child who has one of these disorders – such as autism – the earlier we can provide intervention and development,” she said.


Caroline Hattersley, head of information, advice and advocacy at the National Autistic Society, said the research was “based on a very small sample and needs to be replicated on a far larger scale before any concrete conclusions can be drawn”.

“Autism is a very complex condition,” she said.

“No two people with autism are the same, and so a holistic approach to diagnosis is required that takes into account all aspects of an individual’s behaviour. A more comprehensive approach allows all of a person’s support needs to be identified.

“It’s vital that everyone with autism can access a diagnosis, as it can be key to unlocking the right support which can enable people with the condition to reach their full potential.”

Power from the sea?


Triboelectric nanogenerator extracts energy from ocean waves.

As sources of renewable energy, sun and wind have one major disadvantage: it isn’t always sunny or windy. Waves in the ocean, on the other hand, are never still. American researchers are now aiming to use waves to produce energy by making use of contact electrification between a patterned plastic nanoarray and water. In the journal Angewandte Chemie, they have introduced an inexpensive and simple prototype of a triboelectric nanogenerator that could be used to produce energy and as a chemical or temperature sensor.


The triboelectric effect is the build-up of an electric charge between two materials through contact and separation – it is commonly experienced when removing a shirt, especially in dry air, produces crackling. Zhong Lin Wang and a team at the Georgia Institute of Technology in Atlanta have previously developed a triboelectric generator based on two solids that produces enough power to charge a mobile telephone battery. However, high humidity interferes with its operation. How could this technology work with waves in water? The triboelectric effect is not limited to solids; it can also occur with liquids. The only requirement is that specific electronic levels of the two substances are close enough together. Water just needs the right partner – maybe a suitable plastic.

As a prototype, the researchers made an insulated plastic tank, whose lid and bottom contain copper foil electrodes. Their system is successful because the inside of the lid is coated with a layer of polydimethylsiloxane (PDMS) patterned with tiny nanoscale pyramids. The tank is filled with deionized water. When the lid is lowered so that the PDMS nanopyramids come into contact with the water, groups of atoms in the PDMS become ionized and negatively charged. A corresponding positively charged layer forms on the surface of the water. The electric charges are maintained when the PDMS layer is lifted out of the water. This produces a potential difference between the PDMS and the water. Hydrophobic PDMS was chosen in order to minimize the amount of water clinging to the surface; the pyramid shapes allow the water to drop off readily. Periodic raising and lowering of the lid while the electrodes are connected to a rectifier and capacitor produces a direct current that can be used to light an array of 60 LEDs. In tests with salt water, the generator produced a lower output, but it could in principle operate with seawater.

The current produced decreases significantly as temperature increases, which could allow this device to be used as a temperature sensor. It also decreases when ethanol is added to the water, which suggests potential use of the system as a chemical sensor. By attaching probe molecules with specific binding partners, it may be possible to design sensors for biomolecules.

FDA Panel Supports AV Block Indication for BiV Pacing.


Indications for biventricular (BiV) pacing should be extended to include patients with systolic heart failure and first-, second-, or third-degree atrioventricular (AV) block, the FDA’s Circulatory System Devices advisory panel decided here yesterday by a thin majority vote.

Their important caveat: BiV pacing in first-degree AV block should be only for patients for whom there is “a verifiable confidence that ventricular pacing is going to be necessary most of the time,” said panel chair Dr Richard L Page (University of Wisconsin, Madison) during the proceedings. With the panel having overwhelmingly decided that BiV pacing would be both safe (by a six to one vote) and effective (unanimously), a lone abstention on the third issue of whether benefits would outweigh attendant risks led Page to cast a tie-breaking vote in support of the expanded indication.

Panelist Dr David Kandzari (Piedmont Heart Institute, Atlanta, GA), explaining his abstention to heartwire, said that the pivotal Medtronic-sponsored BLOCK-HF trial supporting the extended indication didn’t follow patients long enough to capture BiV pacing’s inevitable long-term risks. “We just didn’t have that kind of information. I think if we saw more compelling reductions in clinical outcomes with the therapy other than just avoidance of heart-failure hospitalization, it would have been a much easier decision,” he said. “Even quality of life was not compellingly different.”

Page observed for heartwire that “everybody wrestled with risk vs benefit, but at the end of the day, the majority felt that the benefit outweighed the risk.”

Why BiV Pacing, and Why Not in All First-Degree AV Block?

BLOCK-HF randomized 691 patients with any degree of AV block, NYHA class 1–3 heart failure, and an LVEF <50%, implanted with three-lead devices programmed to either BiV pacing or standard RV pacing. As previously reported by heartwire, a primary end point composite of all-cause mortality, HF-related urgent care, or a >15% increase in LV end-systolic volume index (LVESVI) fell by a significant one-fourth over three years in the BiV group.

BiV pacemakers have heretofore been reserved for cardiac resynchronization therapy (CRT) in heart-failure patients with prolonged QRS intervals and an LVEF <35%. BiV pacing is currently only sometimes used off-label in patients like those in BLOCK-HF, but it’s likely to become much more common if the FDA takes the panel’s advice on the Medtronic application. If the new BiV niche is approved, clinicians would likely use BiV pacemakers more routinely for AV block, whether on-label with Medtronic units or off-label with those of other companies. That goes for BiV pacemakers with or without defibrillating capability, so-called CRT-D and CRT-P devices, respectively, both of which were allowed in BLOCK-HF.

The proposed indication includes only those patients with first-degree AV block who are expected to need a lot of RV pacing, because they have the greatest need to avoid it. RV pacing necessarily induces ventricular dyssynchrony and so can exacerbate heart failure, whereas BiV pacing imposes synchrony. But a lower-risk, lower-cost dual-chamber pacemaker could well be enough for patients with first-degree AV block that doesn’t require much RV pacing.

Hazard Ratio (95% CI) for BLOCK-HF Primary End Point*, BiV Pacing vs RV Pacing

Group Hazard ratio (95% CI)
CRT-P (n=484) 0.72 (0.57–0.90)
CRT-D (n=207) 0.74 (0.56–1.00)
Total cohort 0.73 (0.59–0.89)
*Composite of all-cause mortality, HF-related urgent care, or a >15% increase in LVESVI
CRT-P=pacing-only cardiac resynchronization therapy device
CRT-D=defibrillating cardiac resynchronization therapy device
Source: FDA document
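As a quick arithmetic check, the hazard ratios in the table above translate into approximate percent reductions in the rate of the primary end point; the total-cohort figure of 0.73 is where the "one-fourth" reduction comes from. A minimal sketch, using only the figures reproduced from the FDA document:

```python
# Convert a hazard ratio into the implied percent reduction in event rate.
def percent_reduction(hazard_ratio: float) -> float:
    return (1.0 - hazard_ratio) * 100.0

# Hazard ratios as reported in the table above (point estimates only).
for group, hr in [("CRT-P", 0.72), ("CRT-D", 0.74), ("Total cohort", 0.73)]:
    print(f"{group}: HR {hr:.2f} -> ~{percent_reduction(hr):.0f}% reduction")
```

Note that the CRT-D confidence interval reaches 1.00, so its point estimate alone overstates the certainty of that subgroup's benefit.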

The FDA questioned whether the third lead’s risks, given an estimated attendant complication risk of 6.3%, were worth any benefits when compared with, for example, BLOCK-HF’s only slight improvement in HF events; it was a major issue with panelists, too.

Electrophysiologist Dr Patricia A Kelly (Community Medical Center, Missoula, MT) described how BiV pacemakers entail late risks that wouldn’t have been evident in the pivotal trial, persuading a number of other panelists. “The risks of the LV lead extend past the peri-implant period. Generator life is shorter, there are more generator changes, there’s the attendant increase in risk of infection with every generator change. The risks are more significant than they [seem] if we just look at the risks we’ve been presented today.” Kelly voted “no” to whether the expanded indication’s benefits outweigh risks.

Panelist Dr David D Yuh (Yale University, New Haven, CT) agreed and voted the same way. “As a surgeon, having had to put in a lot of epicardial LV leads for coronary sinus leads that have failed, I appreciate the persistent added risk associated with BiV systems in terms of generator changes and lead changes,” he said. “The majority of benefit in terms of heart-failure admissions or treatments was realized in the first year, and these more downstream risks I think do accumulate and eventually overwhelm that.”

Divisive Primary End Point

The FDA also had reservations about the trial’s primary outcome, a benefit driven largely by improvements in LVESVI, with far less coming from the two clinical components, especially mortality.

Component % Contribution to Composite Primary End Point by BiV vs RV pacing groups in BLOCK-HF

Primary end point components BiV-pacing arm RV-pacing arm
Death 11.9 7.9
HF urgent care 35.0 32.5
LVESVI up >15% 53.1 59.7
Source: FDA document

The panelists were split on whether the LVESVI results were adequately informative. According to Page, the echo parameter can reflect clinical deterioration, and data from BLOCK-HF and other studies “influenced [much of] the panel that it might be of value as a surrogate.” And he noted the trial’s LVESVI results and two clinical end points went in the same favorable direction. “The concordance of the outcomes among different [patient] groups was striking, and I think for those of us who were persuaded that the benefits outweighed the risks, that was a major factor.”

Others on the panel were less convinced. Dr Richard Lange (University of Texas Health Science Center, San Antonio) said the focus should be on the primary end point’s clinical components. “The changes in LVESVI are not really clinically relevant.” Lange ultimately voted that BiV pacing’s risks outweighed its benefits in the expanded indication.

Panelists also questioned the proposed indication’s inclusion of patients with NYHA class 1 heart failure, which characterizes HF patients who have become asymptomatic on medications.

The trial included patients in NYHA class 1 and so supports their use of BiV pacing, “but I still have real problems with offering a BiV system, with all the attendant baggage associated with that device compared with single chamber systems, to class 1 patients,” Yuh said.

“I’m skeptical that we’d get much bang for the buck,” Lange agreed. “The potential complications are high, including the initial implant complications, and subsequent follow-up and reimplantation. And when we look at the outcome of people with NYHA class 1 symptoms, their outcomes are terrific.”

Responding to a query from Page, Lange said he would support the exclusion of NYHA class 1 patients from any expanded indications, something the panel ultimately did not do.

System-Wide Effort Improves Hypertension in 80% of Patients.


When Kaiser Permanente Northern California (KPNC) initiated a program to control hypertension in its patient population in 2001, less than half of patients diagnosed with hypertension had their blood pressure under control. Nine years later, 80% of KPNC hypertensive patients had blood pressures lower than 140/90 mm Hg, an improvement rate that exceeded both state and national trends.

Marc G. Jaffe, MD, from the Department of Endocrinology, Kaiser Permanente South San Francisco Medical Center, California, and colleagues tracked data from KPNC, 1 of 8 divisions of the integrated managed care organization, Kaiser Permanente, as it adopted a system-wide program employing several strategies to improve blood pressure control. They published the results of the program in the August 21 issue of JAMA.

In the quality improvement program, patients were identified each quarter for inclusion in a hypertension registry based on diagnostic codes, pharmacy data, and hospital records. Hypertension control rates were generated every 1 to 3 months for each KPNC medical center and distributed to center directors. The group used those data to identify practices associated with higher control rates, which they disseminated to other centers.

A hypertension control algorithm, based on emerging evidence and suggesting a step-wise approach to hypertension medications for blood pressure control, was updated every 2 years. In addition, in 2005, single-pill combination therapy with lisinopril-hydrochlorothiazide was incorporated into the regional guideline as first-line medication. In 2007, KPNC added patient follow-up visits with medical assistants 2 to 4 weeks after a medication adjustment to monitor the success of medication changes.

Between 2001 and 2009, the KPNC hypertension registry population grew from 349,937 patients (15.4% of the adults in KPNC) to 652,763 patients (27.5% of the adults in the system).

By 2009, the hypertension control rate for KPNC was 80.4% (95% confidence interval [CI], 75.6% – 84.4%) compared with the initial control rate of 43.6% (95% CI, 39.4% – 48.6%) in 2001 (P < .001 for trend).

In comparison, the Healthcare Effectiveness Data and Information Set national mean hypertension control rate improvement failed to meet statistical significance, rising from 55.4% to 64.1% (P = .24 for trend) during the same period. The increase across California, available only since 2006, also failed to reach statistical significance, rising from 63.4% to 69.4% (P = .37 for trend).

Moreover, the KPNC hypertension control rate has continued to rise in years after the study, climbing to 83.7% in 2010 and 87.1% in 2011, the authors report.

Abhinav Goyal, MD, MHS, assistant professor of medicine, Division of Cardiology, Emory School of Medicine, Atlanta, Georgia, and William A. Bornstein, MD, PhD, chief quality and medical officer, Emory Healthcare, Atlanta, authors of an accompanying editorial, call the KPNC study “an important contribution to the science of improving systems of care to detect and treat community-based hypertension.”

Dr. Goyal and Dr. Bornstein write that fee-for-service environments are less likely to implement approaches such as those used in the KPNC study because of the dual risks of increased costs and decreased reimbursements. “Fully integrated health systems (such as KPNC) that assume full responsibility by both insuring and delivering health care are particularly invested in managing risk factors to reduce downstream costs,” they write.

However, a transition to value-based models in all health sectors and the growth of accountable care organizations and shared savings models could ultimately make this kind of approach more widespread.

 

Source: JAMA.

HbA1c Inadequate to Assess Diabetes Care Across Specialties.


New findings suggest that simply using HbA1c levels to assess the performance of individual physicians or healthcare systems in diabetes management may be misleading or inaccurate.

Endocrinologists typically see more complex patients who require more time to improve their glycemic control, which makes their performance look worse when judged solely by HbA1c levels.

But new data reported here at the American Diabetes Association 2013 Scientific Sessions show that when diabetes patients are grouped by medication use — a proxy for complexity and stage of disease — HbA1c levels for patients cared for by endocrinologists are the same as or better than those for individuals seen by general internists.

Lawrence S. Phillips, MD, professor of medicine in the division of endocrinology at Emory University, Atlanta, Georgia, who reported the findings in a poster at the meeting, said looking at patients by medication group shows there is very little difference between the performance of specialists and generalists.

“The message is really for the payers and the government… They need to do something like this. They need to have some conservative way to give the provider a chance to improve things, and then they need to compare apples to apples. Just looking at A1c is not sufficient,” Dr. Phillips told Medscape Medical News.

Poster session moderator Sanjeev Mehta, MD, MPH, director of quality at Joslin Diabetes Center, Boston, Massachusetts, agrees. “Dr. Phillips’s data demonstrated that endocrinologists, in the practice setting he evaluated, were seeing patients with higher HbA1c levels. While this suggests appropriate referrals by primary-care physicians to optimize glycemic control, it also supports Dr. Phillips’s conclusion that an outcome-based quality measure [such as HbA1c] may be inadequate when assessing the quality of diabetes care across all providers, especially endocrinologists,” he said.

Dr. Mehta noted that the Agency for Healthcare Research and Quality (AHRQ) has endorsed the adoption of more sophisticated quality metrics, including linked action measures such as appropriate medication use, which would assess outcomes in the context of the care provided.

“I strongly believe this is the direction that all stakeholders in the diabetes community need to be [following to evaluate] high-quality diabetes care,” he told Medscape Medical News.

Comparing Apples to Apples Is Best Approach

Dr. Phillips and colleagues obtained Emory Healthcare data for a total of 5880 diabetes patients cared for by 8 endocrinologists and 8 general internists over a 24-month period. The proportion of patients whose most recent HbA1c was 7% or above was higher for the endocrinologists than for the general internists, 51% vs 38%.

Subsequent analysis was restricted to the 3735 patients who had been seen 3 or more times in the past 24 months and at least once in the prior 12 months. This group was divided into 3 categories by medication use: those using only oral medications and/or incretin-based drugs (n=1880), those using basal insulin with or without oral medications/incretins (n=324), and those using mealtime insulin in addition to basal insulin, with or without other medications (n=1531). The latter group included patients with type 1 diabetes, Dr. Phillips told Medscape Medical News.

Overall control was poorer among the insulin-using patients, with HbA1c levels of 7% or higher in 66% of those using mealtime insulin and 55% of individuals using basal insulin, compared with just 21% of those not using insulin (P < .0001 for trend). And endocrinologists had more patients on insulin than did the general internists, with 53% vs 22% using mealtime insulin (P < .0001), 10% vs 7% using basal insulin (P = .02), and 37% vs 71% not using insulin (P < .0001), respectively.

When examined by treatment group, however, the non–insulin-using patients of the endocrinologists actually had better HbA1cs: 18.8% of their patients had levels at or above 7% vs 23.4% of the general internists’ patients.

For the 2 insulin treatment categories, there was no significant difference between the endocrinologists and the internists: in both specialties, just over half of the basal-insulin patients had HbA1cs of 7% or above (P = .6), as did about two thirds of those using mealtime insulin (P = .9).

New Models Needed for Evaluating Care

Dr. Mehta told Medscape Medical News: “I think this poster highlighted the importance of adopting more sophisticated quality metrics, such as linked action measures, and the importance of ongoing collaboration with specialists and specialty centers in the care of adults with diabetes.

“Specialists and specialty centers may have an opportunity to translate best practices to their referring primary-care physicians, who will continue to care for the majority of adults with diabetes in the United States,” he added.

And specialists should be rewarded, not penalized, for their particular patient mix. “Those providers and practices that care for more complex patients need to be recognized, even reimbursed, for their ability to make meaningful improvements in health outcomes in high-risk patients,” he observed.

Dr. Phillips told Medscape Medical News that “diabetes is a heavy-duty proxy for healthcare systems as a whole, because a lot of people have diabetes, and it’s an expensive disease.”

He believes his “apples-to-apples” comparison could have implications for other areas of medicine as well. “I think it’s an important concept. You would think it applies to blood pressure, cholesterol, all the things that doctors do. We think this is a model for how you evaluate care.”

Source: http://www.medscape.com

PreDx finger stick comparable to venous blood assay in detecting diabetes risk.


Clinicians may be able to accurately detect a patient’s likelihood of developing type 2 diabetes with the use of a finger stick capillary blood collection test, according to data presented here at the AACE Annual Scientific and Clinical Congress.

“We’ve developed what we call the PreDx score (Tethys Bioscience). It’s a multimarker algorithm-based diagnostic. It combines the results from seven different blood-based biomarkers along with the patient’s age and gender to produce a single score between 1 and 10,” researcher Theodore Tarasow, PhD, senior vice president of research and development at Tethys Bioscience, said during a late-breaking abstract presentation here. “We’ve done clinical studies to show that that score is directly tied to a person’s 5-year risk for developing diabetes.”

Tarasow said the finger stick blood assays yielded promising results comparable to those of venous blood assays. The coefficient of variation ranged from 2.4% for HbA1c to 11.3% for adiponectin. Upon calibration, results showed close agreement between finger stick PreDx values and matched venous samples: by Deming regression, the overall slope was 0.997 (95% CI, 0.916-1.078) and the intercept was –0.048 (95% CI, –0.206 to 0.110).
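The Deming regression used for this comparison differs from ordinary least squares in that it allows measurement error in both variables, here both the finger stick and the venous assay. A minimal sketch of the computation, assuming an error-variance ratio of 1 (orthogonal regression) and illustrative scores rather than the study’s data:

```python
import math

def deming_fit(x, y, delta=1.0):
    """Deming regression slope and intercept, allowing error in both x and y.
    delta is the ratio of y-error variance to x-error variance
    (delta = 1 gives orthogonal regression). Assumes x and y covary."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = ((syy - delta * sxx
              + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
             / (2 * sxy))
    return slope, my - slope * mx

# Illustrative paired scores on the 1-10 PreDx scale (not data from the study):
venous      = [2.1, 3.4, 4.8, 6.0, 7.2, 8.5]
fingerstick = [2.0, 3.5, 4.7, 6.1, 7.1, 8.6]
slope, intercept = deming_fit(venous, fingerstick)
print(f"slope {slope:.3f}, intercept {intercept:.3f}")
```

A slope near 1 with an intercept near 0, as in the reported 0.997 and –0.048, indicates the two collection methods read out on the same scale.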

Further, data from the Inter99 study indicated no significant differences in area under the curve (AUC), positive predictive value or sensitivity when comparing simulated finger stick scores with venous scores. Both PreDx venous and PreDx finger stick were also superior to fasting glucose by AUC and other measures in predicting development of diabetes.

“What we really need is the ability to find those at the highest risk and apply additional resources to try and prevent or delay that conversion to diabetes,” Tarasow said.

Tarasow said the PreDx is no more expensive than current tests available for its diagnostic purpose.

“From a clinical perspective what this is really going to allow us to do is have greater access to patients where there is not access to onsite phlebotomy,” Tarasow said. – by Samantha Costa


 

  • Using a litmus paper – a simple technique to do this test – allows greater adoption of this predictive tool for deciding how aggressively to treat diabetes. If you have a patient at high risk for diabetes in the prediabetes population, you may then use pharmaceutical agents (i.e., metformin), but if you’re treating a low-risk patient, you probably don’t need to do so. Or, you may put them into a supervised exercise program or some type of diet or bariatric surgery. Your intervention will be much stronger.
  • Bruce W. Bode, MD
    Atlanta Diabetes Associates
    Endocrine Specialty Group

Source: Endocrine Today