What is a fusion reaction?


Prof. Dhiraj Bora, Director, Institute for Plasma Research, Gujarat, explains what a fusion reaction is, under what conditions it occurs, and what hurdles scientists face in achieving it.

Last week, the National Ignition Facility, USA, announced that it had cleared the first step toward triggering a fusion reaction. But what is a fusion reaction? Here are some answers from Prof. Bora, which require prior knowledge of high-school physics and chemistry; we’ll start from the basics (with my comments in square brackets).

What is meant by a nuclear reaction?

A process in which two nuclei, or a nucleus and a subatomic particle, collide to produce one or more different nuclei is known as a nuclear reaction. It implies an induced change in at least one nucleus and does not apply to any radioactive decay.

What is the difference between fission and fusion reactions?

The main difference between fusion and fission reactions is that fission is the splitting of an atom into two or more smaller ones, while fusion is the fusing of two or more smaller atoms into a larger one. They are two different types of energy-releasing reactions in which energy is released from the powerful bonds between the particles within the nucleus.

Which elements can undergo nuclear fusion?

Technically, any two light nuclei below iron [in the Periodic Table] can be used for fusion, although some nuclei are better than others when it comes to energy production. As in fission, the energy in fusion comes from the “mass defect” (loss in mass) due to the increase in binding energy [that holds subatomic particles inside an atom together]. The greater the increase in binding energy (from lower binding energy to higher binding energy), the more mass is lost and the more energy is released.
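
To make the mass-defect arithmetic concrete, here is a minimal sketch of my own (not part of Prof. Bora’s answer), using the deuterium–tritium reaction as an illustrative example; the atomic masses below are standard reference values.

```python
# Illustrative mass-defect calculation for D + T -> He-4 + n (masses in atomic mass units).
AMU_TO_MEV = 931.494  # energy equivalent of 1 atomic mass unit, in MeV

m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_released = mass_defect * AMU_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")            # ~0.0189 u
print(f"Energy released: {energy_released:.1f} MeV")  # ~17.6 MeV per D-T fusion
```

The small fraction of mass that disappears (about 0.4% of the reactants’ mass here) is what reappears as the reaction’s output energy.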

What are the steps of a nuclear fusion reaction?

To create fusion energy, extremely high temperatures (100 million degrees Celsius) are required to overcome the electrostatic force of repulsion that exists between the light nuclei, known as the Coulomb barrier [due to the protons’ positive charges]. Fusion, therefore, can occur for any two nuclei provided the required temperature, density of the plasma [the superheated soup of charged particles] and confinement duration are met.
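
As a rough back-of-the-envelope illustration of why such temperatures are needed (my own sketch, not part of Prof. Bora’s answer), one can compare the Coulomb barrier between two singly charged light nuclei with the average thermal energy of particles at 100 million kelvin; the ~3 femtometre separation used below is an assumed order-of-magnitude value.

```python
# Rough comparison of the Coulomb barrier with thermal particle energy.
COULOMB_CONST = 8.99e9         # N*m^2/C^2
ELEMENTARY_CHARGE = 1.602e-19  # C
BOLTZMANN = 1.381e-23          # J/K
MEV = 1.602e-13                # joules per MeV

separation = 3e-15  # m, assumed distance at which the two nuclei "touch"

# Coulomb barrier between two singly charged nuclei (e.g. D and T)
barrier_joules = COULOMB_CONST * ELEMENTARY_CHARGE**2 / separation
print(f"Coulomb barrier: ~{barrier_joules / MEV:.2f} MeV")  # ~0.5 MeV

# Average thermal energy of a particle at 100 million kelvin
thermal_joules = 1.5 * BOLTZMANN * 1e8
print(f"Thermal energy at 1e8 K: ~{thermal_joules / MEV * 1000:.0f} keV")  # ~13 keV
```

Even at 100 million degrees the typical particle energy (tens of keV) is far below the barrier (hundreds of keV); fusion proceeds because of the fast particles in the tail of the distribution and quantum tunnelling, which is why temperature, density and confinement time must all be met together.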

Under what conditions will a fusion chain-reaction occur?

When, say, a deuterium (D) and tritium (T) plasma is compressed to a very high density, the particles resulting from nuclear reactions give their energy mostly to D and T ions, by nuclear collisions, rather than to electrons as usual. Fusion can thus proceed as a chain reaction, without the need for thermonuclear temperatures.

What are the natural forces at play during nuclear fusion?

The gravitational forces in stars compress matter, mostly hydrogen, to very large densities and temperatures at the stars’ centres, igniting the fusion reaction. The same gravitational field balances the enormous thermal expansion forces, maintaining the thermonuclear reactions in a star, like the Sun, at a controlled and steady rate.

In the laboratory, the gravitational force is replaced by magnetic forces in magnetic confinement systems, whereas in inertial confinement systems radiation pressure compresses the fuel, generating even higher pressures and temperatures and resulting in a fusion reaction.

What approaches have human attempts to achieve nuclear fusion taken?

Two main approaches, namely magnetic confinement and inertial confinement, have been attempted to achieve fusion.

In the magnetic confinement scheme, various magnetic ‘cages’ have been used, the most successful being the tokamak configuration. Here, magnetic fields are generated by electric coils. Together with the current due to charged particles in the plasma, they confine the plasma into a particular shape. It is then heated to an extremely high temperature for fusion to occur.

In the inertial confinement scheme, extremely high-power lasers are concentrated on a tiny sphere consisting of the D-T mixture, creating tremendous pressure and compression. This generates even higher pressures and temperatures, creating a conducive environment for a fusion reaction to occur.

To create fusion energy in both schemes, the reaction must be self-sustaining.

What are the hurdles that must be overcome to operate a working nuclear fusion power plant to generate electricity?

Fusion power comes in the form of fast neutrons that are released with an energy of 14 MeV. This energy will be converted to thermal energy, which would then be converted to electrical energy. The hurdles lie in developing special materials capable of withstanding extremely high heat flux in a neutron environment. The reliability of operation of fusion reactors is also a big challenge.
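
To give a feel for what a 14 MeV neutron represents (my own illustrative arithmetic, not Prof. Bora’s), here is a small sketch converting that energy to joules and estimating how many neutrons per second would carry an assumed 1 GW of thermal power.

```python
# How many 14 MeV neutrons per second carry 1 GW of thermal power? (illustrative)
MEV_TO_JOULE = 1.602e-13

neutron_energy_j = 14.0 * MEV_TO_JOULE  # ~2.2e-12 J per neutron
thermal_power_w = 1e9                    # assumed 1 GW carried by neutrons

neutrons_per_second = thermal_power_w / neutron_energy_j
print(f"Energy per neutron: {neutron_energy_j:.2e} J")
print(f"Neutrons per second for 1 GW: {neutrons_per_second:.2e}")  # ~4.5e20
```

It is this enormous, continuous neutron flux striking the reactor walls that makes the materials problem so demanding.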

What kind of waste products/emissions would be produced by a fusion power plant?

All the plasma-facing components are bombarded by neutrons, which will make the first layers of the metallic confinement radioactive for a short period. The confinement will be made of different materials, and materials scientists are working to develop special-grade steels that are activated less strongly when struck by neutrons. All said, such irradiated components will have to be stored for at least 50 years. The extent of contamination should be reduced with the newer structural materials.

Fusion reactions are intrinsically safe as the reaction terminates itself in the event of the failure of any sub-system.

India is one of the seven countries committed to the ITER program in France. Could you tell us what its status is?

The ITER project has gradually moved into its construction phase; fusion is therefore no longer a dream but a reality. Construction at the site is progressing rapidly. Various critical components are being fabricated by the seven parties through their domestic agencies.

The first plasma is expected at the end of 2020, as per the 2010 baseline. Indian industries are also involved in producing various subsystems. R&D and prototyping of many of the high-tech components are progressing as per plan. India is committed to delivering its share on time.

Cancer screening expert to radiologists: Stop lying about mammograms


When it comes to using mammograms as a tool to screen women for breast cancer, how do you define success? At a minimum, you’d want to know that women who get mammograms are less likely to die of breast cancer than women who didn’t get the tests.

So the big Canadian study published last week in the British Medical Journal was rather inconvenient for the die-hard fans of mammography. The study sorted nearly 90,000 women into two groups. About half of them had mammograms, and the other half didn’t. Those who had the screening tests were more likely to be diagnosed with breast cancer.

You might expect this to be useful, by catching cancers at an earlier, more treatable stage. But it didn’t turn out that way. After tracking these women for up to 25 years, the researchers found that women who had mammograms succumbed to breast cancer at the same rate as women who didn’t get the tests.

The American College of Radiology – the medical group that represents the doctors who read mammograms – pounced on the study right away. In a statement, the ACR called the study “incredibly flawed and misleading.” Taking its results seriously “would place a great many women at increased risk of dying unnecessarily from breast cancer,” it warned.

As my colleague Monte Morin reported, the authors of the study said they stood by their results. But the accusations from high-profile radiologists have kept coming.

Now an expert on preventive medicine and screening is fighting back. In an opinion essay published online Wednesday on CNN.com, Dr. H. Gilbert Welch of the Dartmouth Institute for Health Policy and Clinical Practice explains why the ACR’s two main arguments against the Canadian National Breast Screening Study are wrong.

First, the radiology group claimed the Canadian results could not be trusted because the women were screened with “second-hand mammography machines” that were operated by technologists who “were not taught proper positioning,” producing sub-par breast films read by radiologists who “had no specific training in mammographic interpretation.”

Welch sums up the ACR critique like this: “Canada is a Third World country.” Not only is this not true, he writes, it’s disingenuous. That’s because the clinical trials that radiologists cite in favor of mammography are even older than the Canadian study. “In fact, one of the trials most favorable to screening – the Health Insurance Plan of New York’s – dates from two decades before Canada’s, in the early 1960s, when mammography technologies were primitive,” Welch writes.

The ACR’s other complaint is that the Canadian trial stacked the odds against mammography by assigning women with “large incurable cancers” to the group that got the mammograms. “This guaranteed more deaths among the screened women than the control women,” according to the ACR statement.

Once again, Welch isn’t buying it. Critics have made this allegation before, and it’s so serious that Canada’s National Cancer Institute initiated a two-year investigation. As reported in the Canadian Medical Assn. Journal in 1997, the investigators “found no evidence of a deliberate attempt to conceal the alterations.”

Nor is there any evidence of cheating in the data, Welch explains. If the Canadian researchers were shunting the sickest patients into the mammogram group, then there would be more deaths among women who had mammograms than among women who didn’t. But there weren’t. “The rate of death in the two groups was exactly the same, every year, for 25 years,” Welch writes.

Welch has coauthored many studies about mammography, and he says there’s a good explanation for why mammograms don’t seem to be helpful as a screening tool: the tests find “small, unimportant” abnormalities that are labeled “cancer” but are “not destined to cause them any problems,” he writes. (Also, better treatments have erased much of the advantage of finding cancers early.)

Though some radiologists have accepted the growing evidence that screening mammography is flawed, members of the “old guard” are still quick to attack the studies that don’t fit their worldview, Welch writes. He has some advice for those people: Grow up.

“It’s time to stop the unfounded allegations,” Welch writes on CNN. “It might be standard procedure for politics but not for science. Too much energy has been devoted to discrediting the Canadian study and not enough to understanding it.”

Fear-suppressing neurons found.


Scientists have found neurons that prevent mice from forming fearful memories in an area of the brain called the hippocampus.

These inhibitory neurons ensure that a neutral memory of a context or location is not contaminated by an unpleasant event occurring at the same time.

The team says their work could one day help them better understand the neural basis of conditions such as post traumatic stress disorder.

Neurons in hippocampus

The study is published in Science.

Attila Losonczy, from Columbia University in New York, and colleagues were interested in how the hippocampus stores memories of a particular context and then separates this memory from a fearful event.

When looking at individual neurons in the brains of mice, they found that inhibitory cells – called interneurons – were crucial for allowing the memory, during fear-memory formation, to travel to the correct part of the brain.

“These cells are activated by the unpleasant salient event and they act somewhat like a filter. They may function to block out unwanted information related to this strong, salient event,” Dr Losonczy told the BBC’s Science in Action programme.

Stopping fear

“This way, the hippocampus can process and store contextual information reliably and independently without the potentially detrimental interference from this [unpleasant] salient event,” he added.

When mice were conditioned to express fear in a particular context, they later associated the same environment with the unpleasant event.

But when scientists deactivated these inhibitory neurons, the mice no longer showed any fear. That is, the team was able to stop the mice from forming fearful memories.

This highlighted the importance of the role of these interneurons in first encoding the fearful memory before it was passed on to another part of the brain.

“The next time this aversive stimulus is not present, we should still be able to remember the context correctly,” Dr Losonczy explained.

“This contextual representation is then played out from the hippocampus to other brain areas like the amygdala where the actual association between the context and the fearful event takes place.”

Understanding how context and fear are learned, and the specific neurons involved, could help scientists better help people with conditions like anxiety and post-traumatic stress disorder.

“If we understand how the circuits in our brain influence memory under normal conditions, we can then try to understand what actually went wrong during psychiatric disorders,” added Dr Losonczy.

Parallel processing

Xu Liu from the Massachusetts Institute of Technology, US, who was not involved with the research, said that the study was a cleverly designed way to “peek into the mouse’s brain and zoom into the cells of interest while the animal was learning”.

“This study solved the puzzle of how the hippocampus can successfully encode the context, while ignoring the impact of the ongoing negative stimulus.”

“[It] shows one mechanism for parallel-processing in the brain, where temporally overlapping inputs are disentangled and sorted into separate pipelines for further processing,” Dr Liu told BBC News.

The Ultra-Strong Robotic Muscles of the Future Could Be Made From Fishing Line.


The next artificial muscle, for either robotics or medical applications, will need to be strong, and it will need to be flexible. Right now, carbon nanotubes reign supreme as the strongest artificial muscle, while materials such as spider silk come in as close seconds. But now a new material breakthrough has entered the artificial muscle arena, and it could beat out its competitors. And this muscle is made out of fishing line, of all things.

Here’s io9 on the discovery:

How do you get muscle out of a fishing line? First, you have to create tension that can be released.

It’s a simple process that goes by an equally simple moniker: “twist insertion.”

One end of a high-strength polymer fiber (like a 50 pound test-line, for example, available at pretty much any sporting goods store) is held fast, while the other is weighted and twisted. Twist a little and the line becomes an artificial “torsional” muscle that exerts energy by spinning. Twist a lot, however, and something interesting happens: the cord coils over on itself, creating an ordered series of stacking loops.

When you do this to a piece of fishing line, researchers discovered, it turns into an artificial tensile muscle that can contract, just like our own muscles, io9 says. To test the fishing line’s strength, the researchers applied hot and cold temperatures—a standard means of testing material properties—which caused the artificial muscle to contract and relax. In this way, they could coax four interwoven artificial muscles into lifting 30-pound weights, for example. Sewing thread, they also found, demonstrates similar properties when treated this way.

After performing a number of tests, the researchers found that the artificial muscles could “generate about seven horsepower of mechanical power per kilogram of polymer fiber,” io9 writes. The study authors put this into perspective: that means the fishing line can “lift loads over 100 times heavier than can human muscle of the same length and weight” and can perform mechanical work about equivalent to that of a jet engine.
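
To put those quoted figures into more familiar units, here is a quick conversion sketch of my own, based only on the numbers reported above (it is not part of the original study).

```python
# Convert the quoted ~7 hp/kg specific power into watts per kilogram,
# and the 30-pound lifted load into newtons of force (illustrative).
HP_TO_WATT = 745.7
LB_TO_NEWTON = 4.448

specific_power_w_per_kg = 7 * HP_TO_WATT
print(f"~7 hp/kg is about {specific_power_w_per_kg:.0f} W/kg")  # ~5,200 W/kg

load_newtons = 30 * LB_TO_NEWTON
print(f"A 30-pound load is about {load_newtons:.0f} N, shared across four coiled fibres")
```

For comparison, human skeletal muscle delivers on the order of tens of watts per kilogram, which is what makes the ~5,000 W/kg figure so striking.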

Scripps Florida Scientists Uncover Drug Resistance Mechanism that Could Impact Development of Two Antibiotic Drug Candidates.


The use of antibiotics is often considered among the most important advances in the treatment of human disease. Unfortunately, though, bacteria are finding ways to make a comeback. According to the Centers for Disease Control, more than two million people come down with antibiotic-resistant infections annually, and at least 23,000 die because their treatment can’t stop the infection. In addition, the pipeline for new antibiotics has grown dangerously thin.

Now, a new study by scientists from the Florida campus of The Scripps Research Institute (TSRI) has uncovered a mechanism of drug resistance. This knowledge could have a major impact on the development of a pair of highly potent new antibiotic drug candidates.

“Now, because we know the resistance mechanism, we can design elements to minimize the emergence of resistance as these promising new drug candidates are developed,” said Ben Shen, a TSRI professor who led the study, which was published February 20, 2014 online ahead of print by the Cell Press journal Chemistry & Biology.

Bacteria Versus Bacteria

The study centers on a kind of bacterium known as Streptomyces platensis, which protects itself from other bacteria by secreting antibacterial substances. Interestingly, Streptomyces platensis belongs to a large family of antibiotic-producing bacteria that accounts for more than two-thirds of naturally occurring clinically useful antibiotics.

The antibiotic compounds secreted by Streptomyces platensis, which are called platensimycin and platencin and were discovered only recently, work by interfering with fatty acid synthesis. Fatty acid synthesis is essential for the production of bacterial cell walls and, consequently, the bacteria’s existence. Platencin, although structurally similar to platensimycin, inhibits two separate enzymes in fatty acid synthesis instead of one.

The question remained, though, of why these compounds killed other bacteria, but not the producing bacteria Streptomyces platensis.

The Path to Resistance

The scientists set out to solve the mystery.

“Knowing how these bacteria protect themselves, what the mechanisms of self-resistance of the bacteria are, is important because they could transfer that resistance to other bacteria,” said Tingting Huang, a research associate in the Shen laboratory who was first author of the study with Ryan M. Peterson of the University of Wisconsin, Madison.

Using genetic and bioinformatic techniques, the team identified two complementary mechanisms in the bacteria that confer resistance to platensimycin and platencin. In essence, the study found that a pair of genes in Streptomyces platensis exploits a pathway to radically simplify fatty acid biosynthesis while bestowing insensitivity to these particular antibiotics.

“Understanding how these elements work is a big leap forward,” added Jeffrey D. Rudolf, a research associate in the Shen lab who worked on the study. “Now these bacteria have shown us how other bacteria might use this resistance mechanism to bypass fatty acid biosynthesis inhibition.”

The Return of the Polar Vortex Is Actually a Good Thing.


Just when you thought it was safe to go outside again, the polar vortex is back, blasting the Midwest and eastern half of the U.S. with very cold weather. While this will undoubtedly be unpleasant, there is an upside.

You might remember the polar vortex from early January, and again later that month, when it brought extremely low temperatures to a good deal of North America. Starting next week the atmospheric phenomenon, usually confined to the Arctic regions of our planet, will be dipping down once again into many states.

Models are “very confident that it’ll be significantly colder than average” in much of the eastern two-thirds of the nation, said Mike Halpert, acting director at NOAA’s Climate Prediction Center. During the worst parts, temperatures could be as much as 20 to 35 degrees below average. The most affected areas will likely be places that have already felt the freeze this year, such as Minnesota, Wisconsin, and the Dakotas.

Those states are currently feeling a little relief as the weather has momentarily cleared up in the Midwest, leading to warmer temperatures in the 50s and 60s and heavy rain instead of snow. Though it might be a nice break from the freezing temperatures, unfortunately, this is actually a bad thing.

According to Weather Underground, there is so much snowpack on the frozen ground in the central and northeastern U.S. that warm weather and rain could lead to flash floods. Ice floes breaking up in rivers could also get carried downstream and jam up the flow, leading to spillover. It seems that the expected arrival of the polar vortex next week may be a blessing: The return of freezing temperatures could save the region from the worst of this.

“This week’s thaw will be short-lived, preventing the kind of major flooding that would result if all of the snowpack were to melt in a week,” wrote meteorologist Jeff Masters at Weather Underground.

The polar vortex originates in the far north, where sunlight has disappeared during the winter season, creating the Northern Hemisphere’s coldest air. Moving southward, this air gradually warms, until it reaches a place where the warming occurs very quickly. A swift-moving river of air moves west to east here, marking the typical southern edge of the polar vortex.

Another climatic phenomenon in play is known as the Arctic Oscillation, where atmospheric mass moves back and forth over many years between the Arctic and the middle latitudes. During a positive Arctic Oscillation, pressure is lower than normal over the Arctic but higher than normal over the mid-latitudes. Because air moves from high to low pressure, the polar vortex is pushed upward, nearer to the pole, creating warm weather in the Arctic Circle.

During a negative phase, conditions are reversed, with high pressure in the Arctic and low pressure in the mid-latitudes. This is the time when the polar vortex can develop waves or kinks that bring freezing air southward. Interestingly, this year’s Arctic Oscillation was not largely negative. This could help explain why the polar vortex only came down in North America and eastern Siberia. Other locations around and within the Arctic Circle such as Alaska, Scandinavia, Europe, and western Russia had much balmier than normal temperatures. While this year’s Arctic Oscillation wasn’t very negative, scientists have noticed a trend in recent decades toward more negative phases. Some blame loss of sea ice and other effects from climate change, though the true cause remains unclear.

Though many folks like to think this perpetually dark and frozen winter they are suffering through is especially miserable, it’s actually not been a particularly severe one when taking a long-term view of the entire country.

“People are saying this winter’s been really cold,” said Halpert. “When looking at the last three months, yeah, we’ll be a little on the cold side compared to average. But it’s certainly nothing historic.”

Just how cold it is, of course, depends on where you are. While some states, like Wisconsin, are experiencing what may be in the top five or 10 coldest winters on record, California is in the middle of a warm and dry drought. But a lot of the U.S. hasn’t been having anything really out of the ordinary weather-wise.

CBT for Insomnia Effective, Saves Healthcare Costs


Brief cognitive-behavioral therapy for insomnia (CBTi) can reduce healthcare utilization and costs, new research shows.

Sleep improved in most insomnia patients who completed at least 3 sessions of CBTi, and their healthcare utilization decreased and healthcare-related costs fell by more than $200 on average, the researchers found.

“CBTi is a highly effective treatment, and this study shows that a relatively brief intervention also may have a positive economic impact,” principal investigator Christina McCrae, PhD, associate professor of clinical and health psychology at the University of Florida in Gainesville, said in a statement.

The study was published in the February 15 issue of the Journal of Clinical Sleep Medicine.

Dr. McCrae and colleagues reviewed the medical records of 84 adults treated for insomnia at the Insomnia and Behavioral Sleep Medicine (IBSM) Clinic at the University of Florida and Shands Sleep Disorders Center in Gainesville. The mean age of the patients was 54 years, and 58% were women.

The patients were offered up to 6 weekly sessions of CBTi led by clinical psychology graduate students and predoctoral interns. Components of the program included sleep education, sleep hygiene, stimulus control therapy, sleep restriction, a 10-minute relaxation exercise, and cognitive therapy, plus a patient workbook.

All patients completed at least 1 session of CBTi. Thirty-seven patients attended 3 or more sessions and were considered “completers”; 32 of these patients, or 86%, saw significant improvement in sleep following CBTi and were considered “responders.”

Promising New Evidence

During a 6-month period prior to and following CBTi, the investigators measured the number of physician office visits, costs related to office visits (CPT costs), number of medications, and estimated healthcare costs and utilization.

For completers and responders, all healthcare utilization and cost variables, except number of medications, decreased significantly (P < .05) or trended downward at posttreatment.

For completers, the average decrease in CPT costs was $200, and the estimated decrease in total costs was $75. For responders, the average decrease in CPT costs was $210. No significant decreases occurred for noncompleters (those who completed fewer than 3 sessions).

The investigators note that the cost of brief CBTi ― about $460 in this study ― may cancel out any savings in the short term, but it has the potential to yield substantial savings in the long term, especially when individual results are extrapolated to the large population of insomnia patients in the healthcare system.
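
As a rough illustration of that break-even point (my own arithmetic, under the explicit assumption that the roughly $200 reduction per six-month period persists after treatment, which the study itself only measured over one six-month follow-up):

```python
# Break-even sketch: up-front treatment cost vs. recurring savings.
# Assumption: the ~$200 reduction per 6-month period continues beyond
# the study's follow-up window (not something the study demonstrated).
treatment_cost = 460.0      # approximate cost of brief CBTi in the study
savings_per_period = 200.0  # observed cost reduction over one 6-month period
period_months = 6

months_to_break_even = treatment_cost / savings_per_period * period_months
print(f"Break-even after ~{months_to_break_even:.1f} months")  # ~13.8 months
```

Under that assumption, the cost of the intervention would be recouped in a little over a year, which is the long-term savings argument the investigators make.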

They add that CBTi has been shown to reduce the use of medications and psychiatric symptoms, and these factors would likely contribute to further reductions in healthcare utilization and costs following CBTi.

In a statement, Michael T. Smith, PhD, president of the Society of Behavioral Sleep Medicine, points out that each year in the United States, “millions of prescriptions are filled and billions of dollars are spent to treat insomnia.”

“This study reaffirms that cognitive-behavioral therapy is clinically effective, and it provides promising new evidence that even brief treatment with CBTi may reduce healthcare utilization costs,” he said.

Prehistoric forest arises in Cardigan Bay after storms strip away sand.


A prehistoric forest, an eerie landscape including the trunks of hundreds of oaks that died more than 4,500 years ago, has been revealed by the ferocious storms which stripped thousands of tons of sand from beaches in Cardigan Bay.

Borth forest remains, Cardigan Bay

The forest of Borth once stretched for miles on boggy land between Borth and Ynyslas, before climate change and rising sea levels buried it under layers of peat, sand and saltwater.

Scientists have identified pine, alder, oak and birch among the stumps which are occasionally exposed in very stormy winters, such as in 2010, when a stretch of tree remains was revealed conveniently opposite the visitor centre.

The skeletal trees are said to have given rise to the local legend of a lost kingdom, Cantre’r Gwaelod, drowned beneath the waves. The trees stopped growing between 4,500 and 6,000 years ago, as the water level rose and a thick blanket of peat formed.

This year a great swath of the lost forest has been revealed. Last month archaeologists also found a timber walkway nearby, exposed by the storms. It was discovered by Ross Cook and Deanna Groom, from the Royal Commission on the Ancient and Historical Monuments of Wales, who went beach walking in the wake of the storms to check for any new finds. It was made from short lengths of coppiced branches, held in place with upright posts.

It has been dated to between 3,100 and 4,000 years old, built as the local people found ways to cope with living in an increasingly waterlogged environment.

Two years ago human and animal footprints were found preserved in the hardened top layer of peat, along with scatterings of burnt stones from ancient hearths.

A £13m coastal defence system to protect the modern village was opened in 2012, but as the recent exposure of the spectacular prehistoric landscape proves, the coast is still being scoured bare by storms and flood tides.

Once-daily liraglutide may intensify insulin treatment in type 2 diabetes.


In patients whose type 2 diabetes is not sufficiently controlled by insulin degludec and metformin, the addition of once-daily liraglutide provides better glycemic control and weight-loss benefits than the addition of once-daily insulin aspart, according to recent study results.

“Thus, this therapeutic option should be considered when basal insulin therapy requires intensification,” the researchers wrote.

The multinational (119 sites in 12 countries) phase 3b, open-label, randomized, treat-to-target study compared the addition of once-daily liraglutide (Victoza, Novo Nordisk) with the addition of once-daily insulin aspart (NovoLog FlexPen, Novo Nordisk) to an existing regimen of once-daily insulin degludec and metformin. All patients continued taking metformin during the trial, which lasted for 28 days. There were three parallel arms in the study: two were randomized (insulin degludec and liraglutide, n=88; insulin degludec and insulin aspart, n=89) and one was not randomized (insulin degludec and metformin, n=236).

The researchers found that insulin degludec and liraglutide decreased HbA1c significantly more than insulin degludec and insulin aspart (–0.74% vs. –0.39%), with an estimated treatment difference of –0.32% (95% CI, –0.53 to –0.12). More insulin degludec and liraglutide patients (49.4%) than insulin degludec and insulin aspart patients (7.2%) achieved HbA1c <7% without hypoglycemia or severe hypoglycemia, and without weight gain. Participants in the insulin degludec and liraglutide arm had significantly less confirmed and nocturnal confirmed hypoglycemia, and significantly greater weight loss (−2.8 kg) vs. those in the insulin degludec and insulin aspart arm (+0.9 kg; P<.0001).

With the exception of more gastrointestinal side effects seen with insulin degludec and liraglutide, there were no differences in safety.

According to the researchers, these findings could represent a valid option for insulin intensification when basal insulin is not sufficient for glycemic control.

Source: Endocrine Today

ADA guidelines may lead to missed diagnosis of type 2 diabetes in youth.


Results from a cross-sectional survey indicated that health care providers are more likely to order HbA1c tests and fewer fasting glucose tests when screening adolescents for type 2 diabetes, based on the guidelines recommended by the American Diabetes Association released in 2010. This approach has the potential for more missed diagnoses of prediabetes and diabetes in children, in addition to increased costs, according to Joyce M. Lee, MD, MPH, and colleagues.

“A number of studies have shown that HbA1c has lower test performance in pediatric compared with adult populations, and as a result, increased uptake of HbA1c alone or in combination with non-fasting tests could lead to missed diagnoses of type 2 diabetes in the pediatric population,” Lee, an assistant professor of pediatric endocrinology and health services research at the Child Health Evaluation and Research Unit at the University of Michigan, said in a press release.

The mail survey was sent to a random sample of 1,400 United States pediatricians and family practitioners.

The overall response rate was 52% (57% for pediatricians and 48% for family practitioners); the most commonly ordered tests were fasting glucose and HbA1c, researchers wrote. At least 58% of physicians ordered HbA1c; 35% ordered HbA1c in conjunction with fasting tests; and 22% ordered HbA1c alone or with nonfasting tests, according to data.

However, only 38% of health care providers were aware of the 2010 ADA-recommended HbA1c screening guidelines, researchers wrote. After being made aware of those recommendations, 67% reported they would change their screening practices; based on the context of the guidelines, 84% reported they would subsequently order HbA1c tests and not glucose tests.

“Greater awareness of the 2010 ADA guidelines will likely lead to increased uptake of HbA1c and a shift to use of non-fasting tests to screen for adolescents with type 2 diabetes. This may have implications for detection rates for diabetes and overall costs of screening,” Lee said in the release.

Source: Endocrine Today