Meth Hype Could Undermine Good Medicine.


Overstating the dangers of methamphetamine may impede treatment of drug abusers, asserts a review by Columbia University researchers.

The 1936 film Reefer Madness developed a cult following because of its over-the-top depiction of the evils of marijuana. Getting stoned and going to a midnight showing became a ritual for many college students.

The recognition that pot is not a direct route to an asylum for the criminally insane, as it was for one character in the film, fueled the hilarity for late-night moviegoers. The divergence between perception and reality has become an issue in recent years for other recreational drugs.

Last month four scientists from Columbia University published an analysis of previous studies on methamphetamine use that called into question some of the purported damaging effects of the drug on brain functioning. The review in Neuropsychopharmacology found that short-term use of the drug actually improves attention, as well as visual and spatial perception, among other things.

Moreover, chronic users—the ones who would be expected to suffer most—remain largely unimpaired. The researchers found that they experience brain and cognitive changes “on a minority of measures” in brain imaging and psychological tests. “Cognitive functioning overwhelmingly falls within the normal range,” the report states, while adding that researchers’ pre-existing assumptions about meth’s detrimental effects “should be reevaluated to document the actual pattern of cognitive effects caused by the drug.”

While recognizing the potential for abuse, the researchers emphasize that misinterpretations of the scientific evidence can wrongly stigmatize drug abusers and lead to misguided policymaking. One study, for instance, asserted that meth abusers might be too cognitively damaged to benefit from rehabilitative treatments, such as cognitive behavioral therapies. “Findings from this review argue that such concerns are unwarranted,” the researchers state.

In Thailand, efforts to stem meth use have gone as far as banning all amphetamines, a class of drug that is used medically for treatment of ADHD and other conditions. “My main goal really was to make sure that we are rigorous in the science before we are political,” says Carl Hart, a substance abuse researcher at Columbia who was the lead author on the Neuropsychopharmacology paper. “I think, with meth, we have been political.” (Neuropsychopharmacology is part of Nature Publishing Group, which also includes Scientific American.)

The article asserts that some of the misconceptions surrounding meth go beyond findings on mental functioning. Drug education campaigns often publish photographs of “meth mouth,” the severe tooth decay in users attributed to a lack of saliva. But dry mouth is a side effect common to many other drugs, such as the prescription antidepressant Cymbalta and the ADHD medication Adderall.

Hart says he was impelled to do the research because of distortions of the evidence for harm from crack cocaine. During the crack cocaine epidemic in the 1980s and 1990s, pronouncements about lasting prenatal harm to children whose mothers used the drug turned out to be overblown: long-term effects on brain development and behavior were fairly small, and children were sometimes ostracized or received medical diagnoses that were mistakenly attributed to effects from the drug.

The review by Hart and colleagues elicited a firm counterpoint from National Institute on Drug Abuse director Nora Volkow, some of whose research is critiqued in the Neuropsychopharmacology paper. “Because of the far-reaching public health implications of this issue, it is essential not to forget what we do know about meth-induced neuropathology, which is plenty troubling,” she says. Volkow points out that the vascular effects of meth can lead to strokes and hemorrhages. The drug, she notes, has also been shown to produce inflammation, atrophy and structural changes in brain tissue. “Similarly worrisome is a recent report of increased incidence of Parkinson’s diseases among individuals with a past history of methamphetamine abuse [compared with] the general population,” she says, adding that meth abuse can be “neurotoxic to the human brain.”

Hart responds that he and his colleagues were careful to consider the full body of scientific literature, including animal studies. He points out that many animal studies used to extrapolate possible deleterious cognitive effects in humans had administered large amounts of methamphetamine from the outset, a regimen unlike the gradual escalation in dosing undertaken by illicit drug users, which avoids these consequences. The article emphasizes that serious medical consequences, such as paranoia and hypertension leading to stroke, are rare and only result from sustained ingestion of very large doses.

Meth’s persistent bad boy reputation means that medical marijuana dispensaries will not be expanding their offerings to include speed any time soon. Still, the idea is not as outlandish as it might seem.

A much cited commentary that appeared in Nature in December 2008—an article co-authored by neuroscientists and ethicists—raised the prospect of routine use of “cognitive enhancement” drugs by the general public if they could be proved safe.

The few drugs already on the market that come closest to meeting the definition of cognitive enhancers include Adderall (dextroamphetamine and amphetamine) and Ritalin (methylphenidate), close chemical cousins of methamphetamine. Ritalin and Adderall, in fact, have developed a reputation as executive’s little helpers in the business world and were cited in the Nature commentary.

Hart and his colleagues never suggested that methamphetamine could serve as a pick-me-up for meeting pending work deadlines. Their review, though, looked at more than 10 studies that found that short-term use of meth actually improves several cognitive measures, precisely the kind of evidence the authors of the Nature article were calling for when considering the merits of enhancement.

The debate over methamphetamine, used widely by soldiers during World War II, reveals society’s ambivalent relationship with potentially addictive compounds that can also serve as performance boosters. The love-hate relationship will likely continue into the indefinite future.

Source: Scientific American.


Prozac Extinguishes Anxiety by Rejuvenating the Brain.


New research shows that the antidepressant reduces fear in adult mice by increasing brain plasticity.

Once adult lab mice learn to associate a particular stimulus—a sound, a flash of light—with the pain of an electric shock, they don’t easily forget it, even when researchers stop the shocks. But a new study in the December 23 issue of Science shows that the antidepressant Prozac (fluoxetine) gives mice the youthful brain plasticity they need to learn that a once-threatening stimulus is now benign. The research may help explain why a combination of therapy and antidepressants is more effective at treating depression, anxiety and post-traumatic stress disorder (PTSD) than either drugs or therapy alone. Antidepressants may prime the adult brain to rewire faulty circuits during therapy.

Nina Karpova, Eero Castrén and their colleagues at the University of Helsinki’s Neuroscience Center created and extinguished fearful behaviors in mice. First, Castrén placed mice in a cage and repeatedly played a tone just before electrically shocking their feet. Soon the animals froze in fear whenever they heard the tone, at which point Castrén put them through “extinction training.” He moved the mice to a different cage and played the same tone again. This time there was no electric shock.

Researchers have previously shown that young mice less than three weeks old quickly learn that the tone is no longer a herald of danger and stop freezing in fear. But adult mice are harder to put at ease. Even if the adults become less fearful during extinction training, their relaxation is not permanent—a week later the tone turns them into statues again.

In Castrén’s study, adult mice that took fluoxetine while they went through extinction training behaved much like young mice—they lost their fear much faster than mice that were not taking the drug, and their anxiety did not return. In contrast, mice that were given fluoxetine but never went through extinction training remained anxious.

Castrén makes an analogy between these findings and the consensus that antidepressants in combination with therapy are almost always more effective than either antidepressants or therapy alone. Scientists know what most antidepressants do at the molecular level—they change the amounts of neurotransmitters in the spaces between neurons, for instance—but how these changes treat depression remains an open question. Research has not supported the idea that antidepressants treat depression simply by correcting chemical imbalances in the brain. More recently, researchers have hypothesized that depression kills neurons whereas antidepressants like Prozac encourage new neural growth in the brain. Castrén’s study suggests Prozac returns regions of the brain to an immature state in which neurons make or break more connections with one another than is typical of the adult brain. In other words, Prozac increases brain plasticity.

Castrén looked for characteristic electrical and molecular signs of plasticity in the brains of mice that received fluoxetine and in those that did not. Specifically, Castrén looked in the amygdala at neural circuits responsible for fear responses. He found that fluoxetine increased levels of a cell-adhesion molecule associated with young neurons and decreased the levels of a transporter protein associated with adult neurons. He also found greater changes in membrane potential in neurons from the brains of mice that had learned to relax. These neurons were also better at strengthening their connections through a process called long-term potentiation, which is crucial for learning and memory.

“We know that a combination of antidepressant treatment and cognitive behavioral therapy has better effects than either of these treatments alone, but the neurobiological basis is not known,” Castrén says. “We show a possible mechanism is bringing the network into a more immature and plastic state.”

Source: Scientific American.


Quantum Dots and More Used to Beat Efficiency Limit of Solar Cells.


New approaches, not yet ready for a rooftop near you, explore simple designs that are different from what’s out there.

Most photovoltaic solar cells have an inherent efficiency cap, limiting how much useful energy they can extract from the sun. But scientists are finding ways around this obstacle with new research that could make solar energy more efficient and more cost-effective.

At the National Renewable Energy Laboratory (NREL) in Golden, Colo., researchers are investigating how to get a unit of light to push more than one electron at a time. Meanwhile, a team at the Massachusetts Institute of Technology is working on getting the right type of light to hit solar cells to make sure its energy doesn’t go to waste.

“One of the major limitations of solar energy conversion is that these high-energy photons are not efficiently converted. You lose a lot of energy to heat,” said Matthew Beard, a senior scientist at NREL. He co-authored a paper last week in Science demonstrating a device whose quantum efficiency (the number of electrons collected per incident photon) peaked at 114 percent, using a process called multiple exciton generation (MEG).

“It operates in some ways the same way a traditional solar cell would,” said Beard. “Instead of bulk crystals, it uses quantum dots.” Most solar cells are made of a sandwich of two crystal layers: one that’s slightly negatively charged and one that’s slightly positive. The negative crystal has extra electrons, and when a photon with enough energy strikes the material, it knocks an electron loose, boosting the electron’s energy and leaving behind a positively charged “hole.” The electron-hole pairing is called an exciton.

MEG is one of the technologies at the vanguard of “third-generation” solar technology. Using these advances, solar panels can be thinner, lighter, cheaper, more flexible and fundamentally more efficient than current devices on the market. As a result, solar energy will be more cost-effective and will form a greater share of the world’s energy portfolio.

But first these panels must bypass the Shockley-Queisser limit, the bane of current-generation photovoltaic systems.

Saving wasted solar energy

The “SQ” limit describes the maximum efficiency of a solar cell using a conventional single-layer design with a single semiconductor junction. For most common solar cell materials, the efficiency limit is about 32 percent in ideal conditions. This means that at least two-thirds of the energy from sunlight that hits a solar panel is wasted, more if you account for losses from reflections, wiring and mounting hardware. The efficiency increases if you add layers to the cell, but this substantially raises the device’s price and complexity. Currently, multi-junction solar cells are limited largely to satellites, where the need for efficiency, low weight and small space trumps cost concerns.

Now scientists are tweaking solar cell materials at nanometer scales to squeeze out better performance without increasing their prices or complexity, finding loopholes through the SQ limit.

In current photovoltaic cells, sunlight dislodges electrons, creating a moving charge that travels into the negative crystal, through a circuit, and then back to the positive side, where it fills the hole back in. If the photon doesn’t have enough energy, the electron stays put. If the photon has too much energy, the charge flows using only the energy it needs, and the remainder warms up the device.

Beard’s team found a way to generate several electron-hole pairs with one photon using quantum dots—tiny chunks of semiconducting material between 2 and 10 nanometers in size. Their small size allows them to confine charges and more efficiently convert light to electricity. In this case, the dots are made of lead and selenium. When a photon with at least double the energy needed to free an electron strikes the lead selenide quantum dots, it can excite two or more electrons instead of letting the extra energy go to waste, generating more current than a conventional solar cell.
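To make the energy bookkeeping concrete, here is a minimal sketch of the photon-energy arithmetic behind MEG. The 0.7 eV band gap below is an assumed illustrative value for a lead selenide quantum dot, not a figure from the study:

```python
# Sketch of the photon-energy arithmetic behind multiple exciton generation
# (MEG): a photon carrying several multiples of the band-gap energy can,
# in principle, excite several electrons instead of shedding the excess as
# heat. Constants are textbook values; E_GAP is an assumed value.
H_EV_S = 4.135667e-15   # Planck's constant, eV*s
C_NM_S = 2.9979e17      # speed of light, nm/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c / wavelength."""
    return H_EV_S * C_NM_S / wavelength_nm

E_GAP = 0.7  # eV; assumed quantum-dot band gap, for illustration only

for wavelength in (1600, 800, 400):  # infrared, near-infrared, violet
    energy = photon_energy_ev(wavelength)
    max_excitons = int(energy // E_GAP)  # upper bound on excitons per photon
    print(f"{wavelength} nm photon: {energy:.2f} eV -> up to {max_excitons} exciton(s)")
```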

Source: Scientific American.


Imaging evolves to lead atherosclerosis care.


The management of atherosclerosis, the No. 1 cause of death in the U.S., has been reinvented by advances in imaging technology, according to a panel presentation at the RSNA 2011 meeting.

Led by Dr. Geoffrey Rubin, radiology chair at Duke University Medical Center, and Dr. Zahi Fayad, PhD, professor of radiology and cardiology at Mount Sinai Medical Center, the opening-day panel explored the evolution of CT angiography (CTA) as the leading imaging modality for atherosclerotic disease, as well as the growing contributions of other modalities in the refinement of atherosclerosis care.

“What led me into cardiovascular imaging and through the development of CT angiography was an appreciation of the beauty and profound information that is contained within the images that we acquire,” Rubin said. Owing to its enormous flexibility and diagnostic power, CTA has become the leading modality in cardiovascular disease management. “CT angiography allows us to volumetrically look at complex relationships of blood vessels and glean important and subtle abnormalities,” he said.

In a talk titled “Coronary CT Angiography — 20 Years Old and All Grown Up,” Rubin explained how the modality wasn’t always so flexible, and the images not so exquisitely detailed. Rubin’s first CT angiography exam in 1991, a renal artery study acquired on a single-detector-row helical scanner, was a blurred abstraction compared to today’s volumetric images, but it accurately demonstrated both the relevant anatomy and the pathology — capabilities that improved greatly with the introduction of four-detector-row scanners in 1998 and continued through the development of today’s volumetric scanners. Diagnoses that had remained stubbornly elusive at conventional angiography suddenly became clear, and were available in a 3D display that could be easily shown and transmitted to referring physicians, he said.

“The introduction of dynamic capability with CTA and, in particular, wide area detectors, allowing us to simultaneously look at blood flow volumetrically, has added further to the tremendous capabilities of this technique,” Rubin said. In vascular imaging, understanding of acute aortic syndromes and aortic dissection has been revolutionized by the ability to examine blood flow, which has revealed the dynamics of the intimal flap and its relationship to cardiac blood flow, and imparted new understanding to atrial-septal disease.

The introduction of MDCT heralded an age of broader anatomic coverage and fine spatial resolution that has continued in the years since, simplifying clinical management and replacing a wide array of invasive techniques or improving surgical planning. Today, “virtually every vascular bed is evaluated routinely with CT angiography,” he said. But the technique is still advancing, and more work needs to be done in key areas.

Endovascular device selection and characterization is one such area. During the development of stent grafting techniques, experts told him that CT would never replace conventional angiography, an area where it is now the mainstay, Rubin said.

Ultrasound was traditionally used for measuring the aorta, though it couldn’t match the accuracy of volumetric CT, which produced true orthogonal representation of the cross-section of the aorta, leading to the first accurate way to size stent grafts.

Also key to vascular CT’s development were techniques developed for continuous measurement of vascular dimension by calculating mean diameter from cross-sectional area. These developments by Rubin and colleagues led to commercial implementations on workstations that are now the standard, he said. For endograft assessment, CT has also “stepped up in a big way to show us the detail,” ultimately leading to design improvements in the stents themselves. A new technique, transcatheter aortic valve implantation, was made possible by improvements in endograft measurement and assessment and, via two manufacturers of the replacement valves, is set to revolutionize the treatment of aortic valve stenosis.

“I hope you are looking into these techniques in your hospital,” Rubin said. “It is a very exciting area for radiologists to involve themselves in, and a very important one.”

Also emerging as a key technique for triaging patients with positive CTA results is fractional flow reserve (FFR), which measures blood flow based on volumetric CT data. FFR “has a tremendous predictive value in determining what types of coronary lesions will ultimately result in major myocardial infarction or death,” he said. Recently published two-year results from the Fractional Flow Reserve Versus Angiography for Multivessel Evaluation (FAME) trial by Tonino and colleagues showed a 33% reduction in the risk of death or major cardiac events when percutaneous coronary angioplasty and stenting was based upon fractional flow reserve measurements at CTA versus angiography, Rubin said.

Also good news but a little disconcerting, he said, is that correlation of lesions at coronary CTA with fractional flow reserve shows that “75% of those lesions turn out to be false positive,” meaning that they do not have a low enough FFR to cause ischemia — and that coronary CTA alone can’t distinguish them.
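For readers unfamiliar with the metric, FFR itself is defined as a simple pressure ratio. Below is a minimal sketch of that definition and the commonly cited 0.80 ischemia cutoff; the function names are illustrative, not taken from any trial or vendor software:

```python
# Minimal sketch: fractional flow reserve (FFR) is the ratio of mean pressure
# distal to a stenosis (Pd) to mean aortic pressure (Pa) at maximal hyperemia.
# Lesions with FFR <= 0.80 are commonly treated as ischemia-causing; the
# function names here are hypothetical, for illustration only.
def fractional_flow_reserve(pd_mmhg: float, pa_mmhg: float) -> float:
    """Return FFR = Pd / Pa."""
    return pd_mmhg / pa_mmhg

def causes_ischemia(ffr: float, threshold: float = 0.80) -> bool:
    return ffr <= threshold

# Example: distal pressure of 68 mmHg against an aortic pressure of 100 mmHg
ffr = fractional_flow_reserve(68.0, 100.0)
print(f"FFR = {ffr:.2f}, ischemia-causing: {causes_ischemia(ffr)}")  # 0.68, True
```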

This understanding has led to the application of computational fluid dynamics to coronary CTA data based on “a tool used every day in characterizing flow around structures such as jet aircraft or race cars in order to understand how a structure interacts with flowing material,” Rubin said. Calculations based on this model allow for continuous mapping of FFR values, which enable a characterization of the coronary tree not possible from images alone, he said.

True to the earlier FFR revelations, a recent study shows that application of this model has a striking impact on the specificity of coronary CTA, raising specificity from 25% to 82%.

Finally, the traditional focus on arterial lesions is giving way to new tools that map end-organ perfusion. “It is ultimately through the [blood] supply and demand to the end organ through perfusion, as well as the demand to the end organ through metabolism, that we will really understand the relevance of these vascular lesions,” Rubin said. In practical terms, for patients presenting with substernal chest pain, it’s not always clear whether the CTA-detected vascular lesions are what is causing the pain.

“The availability of techniques now that allow wide area coverage through either shuttle mode of the CT or high-pitch mode or wide area detectors allow for CT perfusion studies to be performed,” he said. In patients with multiple possible culprit lesions, perfusion pinpoints the cause of ischemia and is well-correlated to MR and SPECT perfusion studies, he said. Extending the study to look at delayed enhancement can distinguish true myocardial infarction from recoverable myocardium.

Along with the great advances for CTA have come strident reminders of the dangers through the lay press, and some highly popularized cases of patient harm, he said. “But I am very excited that the developments coming down the line … are all serving to substantially reduce the radiation exposure of CT angiography,” Rubin concluded. “I strongly believe that in the future the vast majority of CT angiographic procedures will be performed at less than 1 mSv.”

Multimodality atherosclerotic imaging

In his talk about “bleeding-edge imaging and therapy in vascular disease,” Fayad highlighted developments in multimodality atherosclerosis imaging, prevention, diagnosis, and treatment. Atherosclerosis is the most important challenge among four noncommunicable diseases — cardiovascular disease, diabetes, cancer, and chronic respiratory disease — that are challenging health, finance, and socioeconomic well-being in the 21st century, he said.

Early detection of risk factors, such as hypertension for cardiovascular disease, will play a major role, he said. That means more attention to diet and physical activity. Reduction in tobacco use, excessive alcohol consumption, etc., will be important for reducing the toll, as will accurate diagnosis focusing on imaging, Fayad said.

“When we talk about atherosclerosis, we think most of the time of the coronary arteries, but we know that atherosclerosis is a disease that burdens the whole body,” Fayad said. “It can affect the carotid arteries, which leads to stroke and peripheral disease,” sharing many characteristics with coronary artery disease. “Nonsignificant” luminal stenosis of less than 50% is really the main culprit in acute myocardial infarction, “and there’s more and more evidence that it’s the nonstenotic lesions that lead to stroke,” he said.

Investigations in lesion pathology have led to characterization of two main types of atherosclerotic lesions: Stable plaque — characterized by the presence of thick fibrous cap, a modest lipid pool, and few inflammatory cells — is distinguished from the more dangerous unstable plaques, characterized by a thin fibrous cap, low collagen content, large lipid pool, many inflammatory cells, and angiogenesis.

“These are the features that you have to use imaging to try to detect,” he said. And since these unstable lesions are nonstenotic, a typical CT angiography study won’t characterize them as significant.

Because drug therapy is the principal method of mitigating disease progression, improving drug development in the pharmaceutical industry presents another major challenge, Fayad said. More and more drugs are needed at the same time that fewer are being approved by the U.S. Food and Drug Administration (FDA).

Currently, the drug development cycle is very long and the need for money is enormous, he said. Accepting or rejecting drugs earlier in the approval process — at phase II, for example — is one step that will accelerate drug development. The use of cardiovascular imaging biomarkers in the assessment of atherosclerosis is one idea that has gained some acceptance in both the FDA and the European Medicines Agency.

The experience with cancer provides some insight — FDG uptake after treatment is predictive of the future course of prostate cancer, for example — and now the idea has shown applicability to vascular imaging. At this year’s RSNA meeting, Tawakol, Fayad, and colleagues will present results from the first multicenter trial of statins showing the impact of different dose levels on FDG uptake post-treatment, he said.

“There are a whole array of imaging tools that are beginning to be applied to the imaging and treatment of atherosclerosis,” Fayad said.

Even “plain vanilla” MRI is useful for identifying plaque and characterizing the burden of disease, but new techniques are taking the modality further, he said. For example, multiweighted MRI can help identify the composition of plaque, conveying a sense of the lipid-rich necrotic core with or without contrast. T2-weighted MR images can identify cholesterol deposits by their low signal intensity, along with calcifications. One step further, dynamic contrast-enhanced MR images of the type used for tumor imaging can be used to identify angiogenesis associated with inflammation, he said.

Already tried in humans is MRI using iron oxide contrast agents taken up by macrophages, as well as fibrin-specific MRI contrast, though the latter agent never made it to market due to economic constraints.

FDG uptake shows vascular inflammation

Vascular imaging using FDG-PET to measure glycolysis is one approach for creating contrast. It’s an important approach because high-risk plaque has a high level of activated macrophages and, therefore, a high level of FDG activity and glucose uptake, Fayad said. “This technique can be used today to look at vascular inflammation in a multitude of inflammatory diseases, and we and others have been looking at it to use it as a biomarker of inflammation measurement.”

Techniques using standard FDG have been validated in vivo by histopathology in carotid atherectomy patients, he said. In a multicenter study not yet published, FDG was able to distinguish between two groups of patients — randomized to high-dose or low-dose statins — to measure significant differences in FDG uptake levels after treatment. The trial is fundamental to understanding the power of this imaging test, he said.

Whether this knowledge will predict atherosclerotic events is one of the most important questions to be answered in terms of the clinical applications, according to Fayad. But there are hints that it works, from a trial of cancer patients that studied FDG uptake in tumors. Those patients with higher levels of vascular inflammation per FDG had more atherosclerotic events, he said. And new non-FDG radiotracers, such as carbon-11 PK11195, which binds a receptor abundant on activated macrophages, are also showing promise, distinguishing symptomatic from asymptomatic patients in a recent study.

MRI plus FDG

Merging FDG uptake with MRI in an integrated PET/MR scanner adds another dimension, Fayad said, describing a recent study his group undertook in “PET-weighted imaging.” Why perform simultaneous PET and MRI for vascular imaging?

“We wanted to focus on being noninvasive,” he said. “We wanted a technique that was highly reproducible because we’re interested in quantification. We wanted a technique that can give us a sense of plaque composition and burden in combination with metabolism and plaque information.”

The group’s multicenter clinical trial showed that it was “robust and useful” for characterizing atherogenesis, according to Fayad. Twelve months after initiation of niacin therapy, the multimodality approach clearly demonstrated disease regression.

MRI is also playing a growing role in large, population-based studies, including the Framingham Heart Study, the Multiethnic Study of Atherosclerosis (MESA), and the Rotterdam Heart Study, where it is being used to “try to identify the known correlation of atherosclerosis risk factors, as well as imaging-based atherosclerosis assessment,” Fayad said.

Finally, a multicenter study published last month showed the effectiveness of the atherosclerosis drug dalcetrapib using FDG-PET and MRI in combination. PET/MRI scanning is going to have a big impact in understanding atherosclerosis while protecting patients from radiation exposure, Fayad said. The recently FDA-approved avalanche-photodiode PET/MR scanner (Philips Healthcare) is another promising addition to the modality.

CT in living color

Color is another addition in atherosclerosis imaging, enabled by the development of energy-resolving detectors for spectral CT scanning. “I think we’re going to finally have the potential to utilize CT to do better imaging,” he said.

One prototype detector resolves eight energy levels and has been shown to distinguish gold from iodine for better tissue characterization. A few spectral scanners are available now for animal use, and models may be available for use in humans in the near future. “We feel that detectors like these can characterize the tissue without need of precontrast imaging, so that’s going to be important,” he said, predicting a big impact on clinical radiology through substantially reduced dose.

Nanotechnology

Nanotechnology presents a great opportunity to combine drug delivery and imaging of atherosclerosis. “This could be a way to detect disease before health has deteriorated,” he said. Nanotechnology can be used to create sensors, or imaging tools based on nanoparticles. It can deliver and monitor the effect of therapeutic agents, and represents an excellent way to improve the understanding of disease and apply it later in the clinical realm. Combined nanotechnology and therapy is the latest frontier for this research, he said.

The field was initially driven by cancer research, but there are new investments at the National Institutes of Health’s National Heart, Lung, and Blood Institute to drive research in the field, he said.

“We’re seeing a huge interest in the number of publications and interest in molecular imaging,” Fayad said. Most of these techniques are applied in the preclinical setting, he said, but look for the emergence of one of these agents soon in the clinical realm.

One example of a nanoparticle mimicking biology — always a good bet, he noted — is the use of a high-density lipoprotein as a contrast and drug-delivery platform that can be used with MRI, PET, and CT, while incorporating a drug at the core of the lipoprotein for local delivery of therapy. Currently, many oncology applications for nanotechnology are in clinical use, and several companies are “close to filing [investigational new drug] applications with the FDA for nanotechnology products,” he said.

And what starts in cancer can break through into vascular disease care. Fayad’s group is working on a “potent anti-inflammatory” agent that calmed arterial inflammation in a rabbit model within a week of injection, and is now being applied in human subjects.

Along with these developments, “I’m very excited about new developments in PET/MR, as well as spectral CT,” Fayad said, along with new therapeutic applications of nanoparticle agents that are “useful for both diagnosis and therapy.”

Source: Applied Radiology.


CCTA generates more interventions versus stress testing.


An analysis of more than 300,000 Medicare patients being evaluated for heart disease found that those who received coronary CT angiography (CCTA) were more likely to go on to invasive cardiac procedures such as catheterization compared with those who received myocardial perfusion scintigraphy (MPS), according to results published in the November 16 Journal of the American Medical Association.

Compared with MPS, CCTA more than doubled the odds of subsequent cardiac catheterization, percutaneous coronary intervention (PCI), or coronary artery bypass graft (CABG) surgery, wrote Dr. Jacqueline Baras Shreibati; Laurence Baker, PhD; and Dr. Mark Hlatky from Stanford University.

“This study documents that patients who undergo CCTA frequently undergo additional cardiac testing, particularly cardiac catheterization, and subsequent coronary revascularization with PCI or CABG surgery,” the authors wrote.

The results contrast with those of a couple of earlier reports, inviting speculation about what drove the differences. For example, Dr. James Min and colleagues found that patients without coronary artery disease who received CCTA had 16% lower follow-up costs than patients who received MPS, with no differences in the rates of myocardial infarction.

The study data are interesting, but follow-up in thousands of mostly younger patients since the original study was published confirms the low rate of revascularization in patients undergoing CCTA, Min, from Cedars-Sinai Medical Center, told AuntMinnie.com.

Another cardiac imaging researcher said the problem with MPS is that it misses disease that CCTA finds, leading, perhaps inevitably, to more procedures compared to CCTA.

But significant blockages missed at MPS “are readily recognized by coronary CT angiography and will trigger revascularization,” said Dr. U. Joseph Schoepf, from the Medical University of South Carolina, in an email to AuntMinnie.com.

In any case, the association of CCTA with subsequent use of cardiac tests and procedures and with clinical outcomes is not well-established, the authors noted, while acknowledging that the reasons are unclear.

“CCTA might reduce follow-up testing and thus reduce expenditures by excluding significant [coronary artery disease (CAD)], as has been demonstrated among low-risk patients evaluated in the emergency department for acute chest pain,” they wrote. “On the other hand, CCTA may detect atherosclerotic plaques that are not hemodynamically significant and lead to additional tests and procedures, such as coronary catheterization and revascularization, that would not otherwise have been performed, thereby increasing expenditures,” (JAMA, November 16, 2011, Vol. 306:19, pp. 2128-2136).

Utilization and spending

The study aimed to compare utilization and spending associated with anatomic testing with CCTA and functional testing with MPS.

The observational cohort study was based on Medicare claims data from a 20% random sample of Medicare fee-for-service beneficiaries from 2005 to 2008. All patients were 66 years or older, had no claims related to coronary artery disease in the preceding year, and received nonemergent outpatient testing for coronary artery disease. The 282,830 patients in the 20% sample had a mean age of 73.6 years, 46% were men, and 89% were white.

CPT codes were used to identify patients undergoing CCTA, stress echo, exercise electrocardiography (ECG) tests, or pharmacological stress testing. For MPS, patients had to have undergone both a stress test and an associated imaging test within a one-day window.
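As an illustration of how such claims matching might be implemented, here is a minimal sketch in Python; the table layout and the specific CPT codes shown are assumptions for the example, not the study’s actual code list:

```python
import pandas as pd

# Hypothetical claims extract: one row per claim, with a CPT code and a date.
# The codes below (93015 = cardiovascular stress test, 78452 = myocardial
# perfusion SPECT) are illustrative, not the study's published code list.
claims = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "cpt": ["93015", "78452", "93015"],
    "date": pd.to_datetime(["2007-03-01", "2007-03-01", "2007-05-10"]),
})

stress = claims[claims.cpt == "93015"]
imaging = claims[claims.cpt == "78452"]

# MPS = a stress test plus an associated imaging claim within a one-day window.
pairs = stress.merge(imaging, on="patient_id", suffixes=("_stress", "_img"))
mps = pairs[(pairs.date_img - pairs.date_stress).abs() <= pd.Timedelta(days=1)]
print(mps.patient_id.unique())  # patient 1 qualifies as MPS; patient 2 does not
```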

The study tracked CAD-related procedures, hospitalizations, and spending for 180 days following the index test; sensitivity analysis was conducted at 90 days. Calculations of costs related to coronary artery disease testing and care included actual Medicare expenditures, including inpatient and outpatient services. All models were controlled for confounding factors including age, sex, weight, tobacco use, and comorbidities.

Among the 282,830 who underwent noninvasive testing for suspected coronary heart disease, MPS was the most common diagnostic test, used in 46.8% of cases (n = 132,343), followed by stress echocardiography at 28.5% (n = 80,604), exercise ECG at 21.6% (n = 61,063), and CCTA in 3.1% of cases (n = 8,820).

Patients undergoing CCTA were somewhat younger (mean age, 73.56 years) and had fewer comorbid conditions, including Framingham risk factors (diabetes, tobacco abuse, hyperlipidemia, hypertension), than patients undergoing MPS (mean age, 75.71 years). However, they were somewhat older and had more comorbid conditions than patients undergoing stress echocardiography (mean age, 73.80 years) or exercise ECG (mean age, 73.15 years).

Significant findings included the following:

  • Compared to patients who underwent MPS, those who underwent CCTA were nearly twice as likely to undergo subsequent cardiac catheterization (22.9% versus 12.1% for MPS; see the sketch after this list).
  • CCTA patients were more than twice as likely to undergo coronary artery bypass graft surgery (3.71%) compared to MPS patients (1.29%).
  • CCTA patients remained a little healthier over time, with a slightly lower likelihood of hospitalization for acute heart attack (0.19%) in the first 180 days after their first test than patients undergoing MPS (0.43%). But patients undergoing CCTA had a similar likelihood of all-cause mortality (1.05%) compared to patients whose testing began with MPS (1.28%).
  • As for costs, both average total spending and spending related to coronary artery disease over the 180-day follow-up period were significantly higher among patients undergoing CCTA, who had nearly 50% higher CAD-related average expenditures than patients undergoing MPS.
  • By contrast, stress echocardiography (-$4,981) and exercise ECG (-$7,449) were associated with lower spending than MPS.
  • Overall spending related to coronary artery disease was $11,437 for CCTA versus $7,430 for MPS. Average total spending was also significantly higher for CCTA patients ($29,719 versus $14,943 for MPS).
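A quick check of how these rates square with the “more than doubled the odds” language used earlier: risk ratios and odds ratios diverge at rates of this size. Below is a minimal sketch of the arithmetic using the raw catheterization and CABG rates from the list above; note the study’s published odds ratios were model-adjusted and may differ:

```python
# Convert the reported event rates into risk ratios and odds ratios.
# 22.9% vs 12.1% is "nearly twice as likely" (risk ratio ~1.9), while the
# corresponding unadjusted odds ratio exceeds 2, consistent with
# "more than doubled the odds."
def risk_ratio(p1: float, p0: float) -> float:
    return p1 / p0

def odds_ratio(p1: float, p0: float) -> float:
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

for label, p_ccta, p_mps in [("catheterization", 0.229, 0.121),
                             ("CABG", 0.0371, 0.0129)]:
    print(f"{label}: RR = {risk_ratio(p_ccta, p_mps):.2f}, "
          f"OR = {odds_ratio(p_ccta, p_mps):.2f}")
```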

Shreibati and colleagues concluded that Medicare beneficiaries who underwent CCTA in a nonacute setting were more likely to undergo subsequent revascularization procedures with CABG and PCI, resulting in higher coronary artery disease-related spending than for patients who underwent stress testing.

Functional significance

One issue with CCTA is the unclear functional significance of some lesions, and as a result, current guidelines recommend initial evaluation with a noninvasive stress test in most patients, with angiography reserved for patients with positive results, they noted.

CCTA’s key selling point is the potential of obtaining anatomic information about the coronary arteries noninvasively, reducing the need for subsequent cardiac testing. A negative CCTA confers a 99% likelihood of not having obstructive coronary artery disease. Still, a positive CCTA result “is not definitive evidence of obstructive disease,” as CCTA is only 88% specific, and additional functional tests are sometimes needed, according to the authors.

The net effect of CCTA on subsequent cardiac testing is therefore uncertain, and relatively few studies have looked at the issue.

One group that did was Min et al in a 2008 study (Radiology, October 2008, Vol. 249:1, pp. 62-70). They used a large private insurance claims database to compare costs and outcomes in a cohort of younger (mean age, 57 years) patients without known coronary artery disease.

Patients undergoing CCTA had 16% lower follow-up costs than those who underwent MPS, with no difference in myocardial infarction rates or cardiac hospitalizations.

Meanwhile, a 2010 update of patients in the CONFIRM (Coronary CT Angiography Evaluation for Clinical Outcomes) registry found that mortality rates corresponded to significance of vessel disease at CCTA. The latest results presented in September demonstrated that patients even at low Framingham risk scores faced a high risk of adverse events if CCTA demonstrated obstructive CAD.

In a smaller study published this year, Nielsen et al found more downstream testing in the stress test group (32%) than in the CCTA group (20%) (International Journal of Cardiovascular Imaging, July 2011, Vol. 27:6, pp. 813-823).

The present JAMA study used a much larger database of nearly 300,000 Medicare beneficiaries; however, it lacked the follow-up needed to assess the long-term effects of CCTA on patient outcomes, the authors noted.

In addition, patients in private health plans may undergo fewer cardiovascular procedures than Medicare patients in general, the group said, noting that all-cause mortality was about the same for both groups of patients during the short follow-up.

Commenting on the study, Schoepf said it’s important to remember that MPS and CCTA are very different tests, and that MPS is likely missing significant disease that will show up in longer-term follow-up.

The observation that CCTA is more often followed by invasive procedures and revascularization than MPS is not entirely unexpected, Schoepf wrote in his email. First and foremost, he said, the two tests do not produce identical information.

The reasons behind the higher rates are speculative, study author Hlatky told AuntMinnie.com by email. “It could be that in the older population in our study (median age of 75) it’s not common to have completely normal coronary arteries, and so more tests and procedures were done afterwards to clarify things,” he wrote. “Another possibility is that seeing disease in an artery leads the doctor to want to ‘fix it’ with an angioplasty or bypass, while a stress test result does not provide as dramatic a picture.”

Hlatky added that he agreed with Schoepf that more outcome studies are needed that explore length of life, symptoms, quality of life, and quality-adjusted life-years “to judge whether the significantly higher spending and higher procedure rates are good for patients and provide value.”

Speaking by telephone with AuntMinnie.com, Min, whose work has contradicted the study findings, called the JAMA data “intriguing,” but added that contemporary observational data “don’t appear to support these findings.”

“As an example, for the prospective, multicenter, international CONFIRM registry, we identified a very low rate of downstream angiography from patients without significant coronary disease, and even among those with obstructing coronary artery disease we found that less than half were being referred for invasive angiography,” he said.

It may be that the JAMA study findings “were true at an early stage in the evolution of current-generation CT, for example 2006 to 2008, and it may be that if they were to examine more contemporary claims their results may have been different,” he said.

Based on these results, clinicians and policymakers “should critically evaluate the use of CCTA in clinical practice, based on studies of subsequent outcomes,” the authors concluded.

Source: Applied Radiology.

Radiation dose still high on ECRI’s technology hazards list.


Inappropriate and/or unnecessary radiation dose exposure from CT scans and radiation therapy is still in the top 10 list of health technology hazards published annually by research and consulting group ECRI Institute.

Concern about radiation dose diminished somewhat in the 2012 report compared with last year’s, when radiation therapy errors topped ECRI’s list as the No. 1 hazard and concern about unnecessarily high radiation dose and inappropriate utilization of CT scans followed close behind, ranked fourth.

In ECRI’s new 2012 report, these two items have been combined as “exposure hazards from radiation therapy and CT” and listed as the No. 2 health hazard to patients. Failure of alarm systems on medical devices such as dialysis units, physiological monitors, and infusion pumps now has the No. 1 hazard ranking.

This year, concern about healthcare IT data loss — No. 5 in the 2011 list — has been replaced with inattention to change management for medical device connectivity, with failure in PACS integration specifically referenced. Healthcare IT data loss has dropped off ECRI’s list of hazards entirely.

Preventing radiotherapy hazards

ECRI recommends that hospitals implement robust measures to control the “complicated risks” that can be caused by radiation therapy devices and CT scanners. With respect to radiation therapy adverse incidents, it noted that these can have devastating consequences that include ineffective tumor control, critical damage to normal tissues, and possible death.

“It isn’t clear how many patients are affected by radiation therapy errors — for one thing, there isn’t an unambiguous definition of a reportable event — and there is a good chance that incidents are being significantly underreported,” the authors wrote.

Noting that there is no simple fix to guarantee the safe and effective use of radiotherapy equipment, ECRI recommends that departments be accredited and certified, that staffing levels be adequate, and that a comprehensive, rigorous quality assurance program, including peer review, be maintained. Additionally, new equipment and software updates need to be properly installed, commissioned, tested, and maintained, using equipment capable of performing these functions.

ECRI also recommends that radiation therapy departments have standard patient treatment protocols with documented use, and that checklists are followed, clinical staff is trained, new treatment techniques are validated, and “time out” triggers are followed with corrective action.

Preventing CT scan hazards

ECRI expressed concern about the frequency with which CT exams are being ordered, and that failure to implement and mandate the use of low-dose protocols is exposing patients to unnecessarily high radiation levels. The report also expressed concern about inappropriate use of CT exams.

ECRI recommends that radiologists and medical physicists be available for consultation with all medical staff. Staff members should be educated about the appropriate use of diagnostic imaging, they should implement low-dose protocols and validate all study protocols before routine use, and they should maintain an audit program of radiation doses, according to ECRI.

Medical device connectivity hazards

Healthcare IT professionals should not be complacent about medical device interfaces with IT systems. Noting that the growing interrelationship of medical technology and IT offers significant benefits, ECRI also warned of potential risks to patients if interfaces are poorly implemented. The report identifies hazards originating from degraded network performance, software glitches, and system interoperability issues.

ECRI has observed increasing problems involving wireless networks, cybersecurity, planned maintenance, and software upgrades. It recommended implementing change management using a structured approach and good working relationships among the staff of all departments involved.

Other items in the new top 10 list include the following:

  • No. 3: Medication administration errors using infusion pumps
  • No. 4: Cross-contamination from flexible endoscopes
  • No. 6: Enteral feeding misconnections
  • No. 7: Surgical fires
  • No. 9: Anesthesia hazards due to incomplete pre-use inspection
  • No. 10: Poor usability of home-use medical devices

Source: GE


CT technique predicts transient vs. persistent lung nodules.


An image analysis technique presented at the RSNA 2011 meeting could facilitate CT lung cancer screening by differentiating potentially malignant part-solid nodules from the transient kind that are less worrisome.

The problem with part-solid nodules (PSNs) detected at CT is that they have a much higher malignancy rate — ranging from about 63% to 90% of all PSNs that don’t resolve on their own — compared with solid or ground-glass nodules. Still, about 50% to 70% of PSNs are transient, typically resolving on their own within a few weeks or months. And that waiting time is wasted time.

If radiologists could determine from the initial CT scan which part-solid nodules would resolve and which are more likely to be malignant, patients could be appropriately triaged, potentially sparing them and their doctors a lot of time, not to mention radiation dose, said researchers from Seoul, South Korea.

The study team believes it has a solution: Differentiation of the two types based on CT attenuation characteristics, which differ significantly between persistent and transient PSNs.

“It is important to determine whether initially detected PSNs would be transient or persistent,” said Dr. Sang Hwan Lee from Seoul National University Hospital’s National Cancer Center. “By doing so, we can reduce unnecessary invasive diagnostic procedures and radiation dose from follow-up CT.”

The retrospective study aimed to use computer-aided analysis of nodules’ pixel values to differentiate persistent from transient part-solid lesions.

Previous studies had looked at clinical and CT features in different ways, he said. Clinical predictors of transient PSNs included young age and blood eosinophilia. CT features suggestive of transient nodules included the presence of multiple PSNs, lesions with a large solid portion, and those with ill-defined margins. These signs work pretty well, but the group was aiming for results that were more patient-specific and, they hoped, more accurate.

“PSNs could be assessed with high accuracy by experts, but there was limited reproducibility,” Lee said. “In this context, pixel-value analysis is a promising method.”

Over a five-year period ending in December 2010, the researchers identified 77 CT screening patients (39 men, 38 women; mean age, 55 years) with 86 PSNs at thin-section CT: 47 persistent nodules in 46 individuals and 39 transient nodules in 31. Two experienced radiologists working together identified the patients with PSNs.

Transient PSNs were defined as those that shrank or disappeared over the three-month follow-up period, while nodules labeled persistent either grew or remained stable.

The radiologists manually segmented each PSN into an inner solid and outer ground-glass opacity region. A range of pixel values from the segmented areas of each PSN was then extracted using computer-aided analysis software (ImageJ, version 1.43m, U.S. National Institutes of Health) and compared between persistent and transient PSNs.

Multivariate logistic regression analysis was used to identify any differential features between persistent and transient PSNs, and the diagnostic performance of this model was evaluated using C-statistics, Lee explained.

In a univariate analysis, average pixel attenuation, standard deviation, and kurtosis (the sharpness of the peak of the pixel-value distribution) differed significantly between transient and persistent PSNs in the ground-glass portion of the lesions. The inner, solid portion showed no significant differences between the transient and persistent PSNs, Lee said.

Differences between transient and persistent PSNs

Characteristic (whole nodule*)     Transient (n = 39)    Persistent (n = 47)    p-value
Average pixel attenuation          -571 ± 92.5           -527 ± 62.9            0.001
Standard deviation                 145 ± 20.9            133 ± 18.9             0.006
Kurtosis                           2.81 ± 0.81           2.48 ± 0.59            0.028
Median pixel slope                 -70.1 ± 61.57         -42.78 ± 53.9          0.019
Sigmoid fitting slope              -0.75 ± 3.46          -2.88 ± 5.3            0.034

*Results apply to the whole nodule only. Analyzed by itself, the inner solid portion showed no significant differences between the transient and persistent PSNs.

In addition, multivariate analysis showed that higher standard deviation (p < 0.001) and higher kurtosis (p = 0.032) were statistically significant independent predictors of transient PSNs.
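As a sketch of the kind of model described, the following fits a logistic regression and reports a C-statistic on data simulated from the means and standard deviations in the table above. It illustrates the statistical method only; it is not the study’s pipeline, which extracted pixel values with ImageJ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated feature matrix: one row per PSN with [average attenuation (HU),
# standard deviation, kurtosis], drawn from the group statistics reported
# in the table above. y = 1 for transient, 0 for persistent.
rng = np.random.default_rng(0)
X_transient = rng.normal([-571, 145, 2.81], [92.5, 20.9, 0.81], size=(39, 3))
X_persistent = rng.normal([-527, 133, 2.48], [62.9, 18.9, 0.59], size=(47, 3))
X = np.vstack([X_transient, X_persistent])
y = np.array([1] * 39 + [0] * 47)

# Multivariate logistic regression (features standardized first).
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# The C-statistic reported in the talk is the area under the ROC curve.
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"C-statistic (AUC): {auc:.2f}")
```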

“Using computer-aided analysis from the whole pixel area, persistent and transient PSNs can be differentiated with high accuracy,” Lee said.

Source: Applied Radiology.


Host Response to Short-term, Single-Agent Chemotherapy Induces Matrix Metalloproteinase-9 Expression and Accelerates Metastasis in Mice.


Mounting evidence suggests that bone marrow–derived cells (BMDC) contribute to tumor growth, angiogenesis, and metastasis. In acute reactions to cancer therapy, several types of BMDCs are rapidly mobilized and home to tumors. Although this host reaction to therapy can promote tumor regrowth, its contribution to metastasis has not been explored. To focus only on the effects of chemotherapy on the host, we studied non–tumor-bearing mice. Plasma from animals treated with the chemotherapy paclitaxel induced angiogenesis, migration, and invasion of tumor cells along with host cell colonization. Lesser effects were seen with the chemotherapy gemcitabine. Conditioned medium from BMDCs and plasma from chemotherapy-treated mice each promoted metastatic properties in tumor cells by inducing matrix metalloproteinase-9 (MMP9) and epithelial-to-mesenchymal transition. In mice in which Lewis lung carcinoma cells were injected intravenously, treatment with paclitaxel, but not gemcitabine or vehicle, accelerated metastases in a manner that could be blocked by an MMP9 inhibitor. Moreover, chimeric mice reconstituted with BMDCs in which MMP9 activity was attenuated did not support accelerated metastasis by carcinoma cells that were pretreated with chemotherapy before their introduction to host animals. Taken together, our findings illustrate how some chemotherapies can exert prometastatic effects that may confound treatment outcomes.

Source: Cancer Research.

US-CERT says Wi-Fi hole open to brute force attack.


The US Computer Emergency Readiness Team (US-CERT) has issued a warning about a security hole in the Wi-Fi Protected Setup protocol for Wi-Fi routers. Security researcher Stefan Viehbock discovered the vulnerability and reported it to US-CERT, which issued its public warning earlier this week. Viehbock identified design decisions in the protocol that enable an efficient brute force attack.

“The WiFi Protected Setup (WPS) PIN is susceptible to a brute force attack. A design flaw that exists in the WPS specification for the PIN authentication significantly reduces the time required to brute force the entire PIN because it allows an attacker to know when the first half of the 8 digit PIN is correct. The lack of a proper lock out policy after a certain number of failed attempts to guess the PIN on some wireless routers makes this brute force attack that much more feasible.”

The protocol, introduced in 2007 by the Wi-Fi Alliance, was intended to simplify setting up and configuring security on wireless local area networks, especially for home and small office/home office (SOHO) environments. “Wi-Fi Protected Setup enables typical users who possess little understanding of traditional Wi-Fi configuration and security settings to easily configure new wireless networks, to add new devices and to enable security,” according to the Wi-Fi Alliance white paper.

The simplification resides in the setup process: users only have to type in a short PIN instead of a longer passphrase when adding a new device to a network. But each wrong PIN entry returns information that is useful to an attacker, so the 8-digit PIN’s security falls dramatically as more attempts are made. The message sent by the router when a PIN fails reveals whether the first four digits are correct, and the last digit of the PIN, used as a checksum, is given out by the router in negotiation.

According to reports, this hole cuts the hacker’s time and effort significantly, reducing the number of attempts needed from 100 million to at most 11,000.
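The arithmetic behind those numbers, along with the PIN checksum routine described in the WPS specification, can be sketched as follows (an illustration, not Viehbock’s actual tool):

```python
# Why the search space collapses: an 8-digit PIN naively has 10**8
# combinations, but the router confirms the first 4 digits independently
# (at most 10**4 tries), and the 8th digit is a checksum of the first 7,
# so the remaining 3 unknown digits take at most 10**3 tries.
naive_attempts = 10**8
flawed_attempts = 10**4 + 10**3
print(naive_attempts, flawed_attempts)  # 100000000 vs 11000

def wps_pin_checksum(first7: int) -> int:
    """Checksum digit for the 7-digit body of a WPS PIN, per the WPS spec."""
    accum = 0
    while first7:
        accum += 3 * (first7 % 10)
        first7 //= 10
        accum += first7 % 10
        first7 //= 10
    return (10 - accum % 10) % 10

# Example: PIN body 1234567 has checksum 0, so the full PIN is 12345670.
print(wps_pin_checksum(1234567))  # 0
```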

In its warning, the US-CERT site said “We are currently unaware of a practical solution to this problem.”

Its recommended workaround was to disable WPS. Though not a solution, it also recommended using only WPA2 encryption with a strong password, disabling UPnP, and enabling MAC address filtering so only trusted computers and devices can connect to the wireless network.

Source: Physics.org


Researchers hope to use bugged bugs for search and rescue.


While search and rescue dogs are currently used to help locate survivors of earthquakes or other disasters, new research hopes to make this job easier by turning to bugs. Insects have the ability to get into the smallest of places and could make locating people that much easier.

Led by Professor Khalil Najafi, the new technology is designed to use an insect’s kinetic energy to power devices such as miniature cameras and microphones mounted on its back. The insects can then be released into buildings or rubble deemed too dangerous for humans to help locate possible survivors.

The research team has already created a device that harnesses the energy generated by the wing movement of the green June beetle. The idea now is to place a miniature generator on each of the beetle’s wings to create enough power to run miniature location devices such as cameras and microphones.

These tiny insects could also be used by the military as well as by facilities such as the Fukushima nuclear power plant. They would be able to go into virtually any place where it is too dangerous for humans.

The research team is hoping to be able to conduct the first insect test flights at some point next year. They are pursuing patents for their technology and hoping to secure additional investors to aid in the pursuit of this project.

If successful, these bugged bugs could make a big difference in locating survivors of many natural disasters worldwide.

Source: PhysOrg.com