Will Computers Replace Radiologists?


I recently told a radiology resident who demolished the worklist, “You’re a machine.” He beamed with pride. Imitation is the highest form of flattery. But the machine, not content with being imitated, wants to encroach on our turf.

CT could scarcely have advanced without progress in computing to hoard the glut of thin slices. On two-dimensional projectional images such as abdominal radiographs, arteries reveal themselves only if calcified. With CT, algorithms extract arteries with a click. When I first saw this I was mesmerized, then humbled, and then depressed. With automation, what was my role, aside from clicking the aorta?

The role of computers in radiology was predicted as early as 1959 by Lee Lusted,[1] a radiologist and the founder of the Society for Medical Decision Making. Lusted envisioned “an electronic scanner-computer to look at chest photofluorograms, to separate the clearly normal chest films from the abnormal chest films. The abnormal chest films would be marked for later study by the radiologists.”

AI Is Not Just a Sci-Fi Film

I was skeptical of the machine. Skepticism is ignorance waiting for enlightenment. Nearly 60 years after Lusted’s intuition, artificial intelligence (AI) is not a figment of Isaac Asimov’s imagination but a virtual reality.

Leading the way in AI is IBM. Tanveer Syeda-Mahmood, PhD, IBM’s chief scientist for the Medical Sieve Radiology Grand Challenge project, sees symbiosis between radiologists and AI. At the 2015 RSNA meeting she showed off Watson, IBM’s polymath and the poster child of AI. Watson can find clots in the pulmonary arteries. The quality of the demonstration cases was high; the pulmonary arteries shone brightly. But I’m being nitpicky in pointing that out. Evolutionarily, radiologists have reached their apotheosis. But Watson will improve. He has a lot of homework—30 billion medical images to review after IBM acquired Merge, according to the Wall Street Journal.[2] A busy radiologist reads approximately 20,000 studies a year.

AI has a comparative advantage over radiologists because of the explosion of images: In a trauma patient, a typical “pan scan”—CT of the head, cervical spine, chest, abdomen, and pelvis with reformats—renders over 4000 images. Aside from the risk for de Quervain’s tenosynovitis from scrolling through the images, radiologists face visual fatigue. Our visual cortex was designed to look for patterns, not needles in haystacks.[3]

The explosion of images has led to an odd safety concern. The disease burden hasn’t risen, and the prevalence of clinically significant pathology on imaging remains the same; however, the presence of potentially significant pathology has increased, and the chances of missing potentially significant pathology have increased exponentially—ie, there is an epidemic of possible medical errors. Radiologists will be no match for AI in detecting 1-mm lung nodules.

What’s intriguing about AI is what AI finds easy. While I find the beating heart demanding to stare at, cine imaging is a piece of cake for Watson. Watson learned about cardiac disease by seeing videotapes of echocardiograms from India, according to Syeda-Mahmood. True to the healthcare systems that Watson will serve, he has both American and foreign training.

What’s even more intriguing about AI is what AI finds difficult. Eliot Siegel, MD, professor of radiology at the University of Maryland, has promised me that if anyone develops a program that segments the adrenal glands on CT as reliably as a 7-year-old could, he’d wash their car. So far he hasn’t had to wash any cars. AI may crunch PE studies but struggle with intra-abdominal abscesses. Distinguishing between fluid-filled sigmoid colon and an infected collection isn’t the last frontier of technology, but it may be the last bastion of the radiologist.

Competition for Watson

Watson has a rival. Enlitic was founded by Australian data scientist and entrepreneur Jeremy Howard. Both Watson and Enlitic use deep learning, a process in which the computer, given no explicit rules, figures out why something is what it is after being shown several examples. However, their philosophies differ. Watson wants to understand the disease. Enlitic wants to understand the raw data. Enlitic’s philosophy is a scientific truism: Images = f(x). Imaging is data. Find the source data, solve the data, and you’ve solved the diagnosis.

Igor Barani, MD, a radiation oncologist and the CEO of Enlitic, told me that he was once skeptical of computers. He changed his mind when he saw what Enlitic could do. After being fed several hundred musculoskeletal radiographs, which were either normal or had fractures, the machine flagged not only the radiographs with fracture but also the site of the fracture. The machine started off ignorant, was not told what to do, and learned by trial and error. It wasn’t spoonfed—rather, it feasted. Enlitic is like that vanishingly uncommon autodidact.
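Neither IBM nor Enlitic publishes its training pipeline, but the supervised-learning recipe Barani describes can be sketched in a few lines of Python. The snippet below is a minimal, hypothetical illustration using PyTorch: a stock convolutional network is shown radiographs labelled only as normal or fracture and adjusts its own weights by trial and error. The folder layout, hyperparameters, and choice of ResNet-18 are invented assumptions, not details of Enlitic’s system.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Assumed folder layout: radiographs/train/{normal,fracture}/*.png
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate the single X-ray channel for ResNet
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("radiographs/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=None)           # starts off "ignorant": random weights
model.fc = nn.Linear(model.fc.in_features, 2)   # two outputs: normal vs fracture
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                         # trial and error, pass after pass
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)   # how wrong was the network?
        loss.backward()                         # propagate the error
        optimizer.step()                        # nudge the weights to do better

Flagging the site of a fracture, as Barani describes, would need a further localization step, such as a detection head or a saliency map, layered on top of a classifier like this.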

Barani, too, believes that AI and radiologists are symbiotic. According to him, AI will not render radiologists unemployable but will save the radiologist from mundane tasks that can be automated, such as reading portable chest radiographs to confirm line placements and looking at CT scans for lung nodules. As he put it, “You did not go to medical school to measure lung nodules.” Barani’s point is well taken. The tasks that can be automated should be given to the machine—not as surrender but secession.

Medical Publications vs Business News

Living in San Francisco, Barani is aware of the hot air from Silicon Valley. He doesn’t want Enlitic to be cast in the same mold as some diagnostics that have been buried under their own hype. He wants the technology to prove itself in a randomized controlled trial and says that Enlitic is conducting such a trial in Australia, which will assess AI’s accuracy and efficiency as an adjunct to radiologists. This is just as well. Charles Kahn, MD, an expert in informatics and vice chair of radiology at the University of Pennsylvania, has followed the history of neural networks—AI’s DNA. “I’ve seen optimism before. It’s time that proponents of AI published, not in Forbes, but in peer-reviewed medical journals,” he told me, with slight exasperation.

For AI’s full potential to be harnessed, it must extract as much information as possible from the electronic health record (EHR), images, and radiology reports. The challenges are not computational. The first challenge is the validity of the information. For example, the EHR has signal, but it also has a lot of junk that looks like signal because an ICD-10 code is attached.

The second challenge is consistency. The variability and disclaimers in radiology reports that frustrate clinicians could frustrate AI as well. It needs a consistent diagnostic anchor. Imagine if half the world’s atlases listed the capital of Mongolia as Ulan Bator and the other half listed it as New Delhi. Even Watson might be confused if asked the capital of Mongolia.

Could the hedge “pneumonia not excluded”—the Achilles heel of radiologists, the chink in the radiologist’s armor—save the profession from AI? Gleefully, I asked IBM’s Syeda-Mahmood. She smiled. “Watson doesn’t need to be better than you. Just as good as you.” She has a point. If Watson knows to pick up emboli in the lobar pulmonary arteries in some studies and can report in others “no embolus seen but subsegmental pulmonary embolus is not excluded,” how is that different from what we do?

Digital Mammography and AI Takeover

Like radiologists, AI must choose between sensitivity and specificity—ie, between overcalling and undercalling disease. One imperative for computer assistance in diagnosis is to reduce diagnostic errors. The errors considered more egregious are misses, not overdiagnoses. AI will favor sensitivity over specificity.

 If AI reminds radiologists that leiomyosarcoma of the pulmonary veins, for example, is in the differential for upper lobe blood diversion on chest radiograph, this rare neoplasm will never be missed. But there’s a fine line between being a helpful Siri and a monkey on the shoulder constantly crying wolf.

The best example of computer-aided detection (CAD) is in breast imaging. Touted to revolutionize mammography, CAD’s successes have been modest.[4] CAD chose sensitivity over specificity in a field where false negatives are dreaded, false positives cost, and images are complex. CAD flags several pseudoabnormalities, which experienced readers summarily dismiss but over which novice readers ruminate. CAD has achieved neither higher sensitivity nor higher specificity.

When I asked Emily Conant, MD, chief of breast imaging at the University of Pennsylvania, about this, she cautioned against early dismissal of CAD for mammography. “With digital platforms, quantification of global measures of tissue density and complexity are being developed to aid detection. CAD will be more reproducible and efficient than human readers in quantifying. This will be a great advance.” Digital mammography follows a pattern seen in other areas of imaging. An explosion of information is followed by an imperative to quantify, leaving radiologists vulnerable to annexation by the machine.

Should radiologists view AI as a friend or a foe? The question is partly moot. If AI has a greater area under the receiver operating characteristic curve than radiologists—meaning it calls fewer false negatives and fewer false positives—it hardly matters what radiologists feel. Progress in AI will be geometric. Once all data are integrated, AI can have a greater sensitivity and greater specificity than radiologists.
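To make the trade-off concrete, here is a small, self-contained Python example of how sensitivity, specificity, and the area under the ROC curve are computed for a hypothetical nodule detector. The scores and labels are invented, and the scikit-learn functions are standard tools, not anything specific to the systems discussed here.

import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])   # 1 = nodule truly present
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.2, 0.1, 0.05])

auc = roc_auc_score(y_true, scores)                 # threshold-free summary of discrimination

for threshold in (0.3, 0.5, 0.7):                   # a lower threshold buys sensitivity, costs specificity
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"threshold={threshold:.1f}  sensitivity={sensitivity:.2f}  "
          f"specificity={specificity:.2f}  AUC={auc:.2f}")

Lowering the threshold is the programmatic equivalent of the hedging radiologist: fewer misses, more false alarms, same underlying curve.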

Do We Need Fewer Radiologists?

Workforce planning for organized radiology is tricky. That AI will do the job radiologists do today is a mathematical certainty. The question is when. If it were within 6 months, radiologists may as well fall on their swords today. A reasonable timeframe is anything between 10 and 40 years, but closer to 10 years. How radiologists and AI could interact might be beyond our imagination. Enlitic’s Barani believes that radiologists can use AI to look after populations. AI, he says, “can scale the locus of a radiologist’s influence.”

AI may increase radiologists’ work in the beginning as it spits out false positives to dodge false negatives. I consulted R. Nick Bryan, MD, PhD, emeritus professor at the University of Pennsylvania, who believes that radiologists will adjudicate normal. The arc of history bends toward irony. Lusted thought that computers would find normal studies, leaving abnormal ones for radiologists. The past belonged to sensitivity; the future is specificity. People are tired of being told that they have “possible disease.” They want to know if they’re normal.

Bryan, a neuroradiologist, founded a technology that uses Bayesian analysis and a library of reference images for diagnosis in neuroimaging. He claims that the technology does better than first-year radiology residents and as well as neuroradiology fellows in correctly describing a range of brain diseases on MRI. He once challenged me to a duel with the technology. I told him that I was washing my hair that evening.

Running from AI isn’t the solution. Radiologists must work with AI, not to improve AI but to realize their role in the post-AI information world. Radiologists must keep their friends close and AI closer. Automation affects other industries. We have news stories written by bots (standardized William Zinsser–inspired op-eds may be next). Radiologists shouldn’t take automation personally.

Nevertheless, radiologists must know themselves. Emmanuel Botzolakis, MD, a neuroradiology fellow working with Bryan, put it succinctly. “Radiologists should focus on inference, not detection. With detection, we’ll lose to AI. With inference, we might prevail.”

Botzolakis was distinguishing between Radiologist as Clinical Problem Solver and Radiologist as TSA Detector. Like TSA detectors, radiologists spot possible time bombs, which on imaging are mostly irrelevant to the clinical presentation. This role is not likely to diminish, because there will be more anticipatory medicine and more quantification. When AI becomes that TSA detector, we may need fewer radiologists per capita to perform more complex cognitive tasks.

The future belongs to quantification but it is far from clear that this future will be palatable to its users. AI could attach numerical probabilities to differential diagnosis and churn reports such as this:

Based on Mr Patel’s demographics and imaging, the mass in the liver has a 66.6% chance of being benign, a 33.3% chance of being malignant, and a 0.1% chance of not being real.

Is precision as useful as we think? What will you do if your CT report says a renal mass has a “0.8% chance of malignancy”? Sleep soundly? Remove the mass? Do follow-up imaging? Numbers are continuous but decision-making is dichotomous, and the final outcome is still singular. Is the human race prepared for all of this information?

The hallmark of intelligence is in reducing information to what is relevant. A dualism may emerge between artificial and real intelligence, where AI spits out information and radiologists contract information. Radiologists could be Sherlock Holmes to the untamed eagerness of Watson.

In the meantime, radiologists should ask which tasks need a medical degree. Surely, placing a caliper from one end of a lung nodule to the other end doesn’t need 4 years of medical school and an internship. Then render unto AI what is AI’s. Of all the people I spoke to for this article, Gregory Mogel, MD, a battle-hardened radiologist and chief of radiology at Central Valley Kaiser Permanente, said it best. “Any radiologist that can be replaced by a computer should be.” Amen.

 

FDA Panel Gives Tepid Nod to PFO-Closure Device for Cryptogenic Stroke


In a split vote, an advisory panel recommended that the US Food and Drug Administration (FDA) should accept St Jude Medical’s premarket approval application for its Amplatzer patent foramen ovale (PFO) occluder device, which percutaneously delivers a permanent cardiac implant for PFO closure.

The proposed indication is to prevent recurrent ischemic stroke in patients who have had a cryptogenic stroke due to a presumed paradoxical embolism.

The Circulatory System Devices Panel of the FDA Medical Devices Advisory Committee almost unanimously agreed that the device was safe, but the panel members did not all agree that the pivotal RESPECT trial convincingly demonstrated that the device was effective and had a superior risk/benefit profile for this indication.

The 16-member panel voted on three questions:

Is there reasonable assurance that the Amplatzer device is safe for use in patients who meet the criteria specified in the proposed indication?

Is there reasonable assurance that the Amplatzer device is effective for use in patients who meet the criteria specified in the proposed indication?

Do the benefits of the Amplatzer device outweigh the risks for use in patients who meet the criteria specified in the proposed indication?

Fifteen members voted “yes” and one voted “no” for the safety question. However, nine agreed and seven disagreed with the efficacy statement, and 11 agreed while five disagreed with the “greater-benefits-than-risks” statement.

“There is a real clinical need” for this device, noted Dr Jeffrey Brinker (Johns Hopkins Hospital, Baltimore, MD), who voted “yes” for all three questions. Dr Ralph Brindis (Oakland Kaiser Medical Center, CA) voted the same way and said the device should be a level 2b recommendation.

However, Dr Patrick Noonan Jr (Scott and White Memorial Hospital, Temple, TX) voted “no” on the last two questions, saying there was only “a hint, a signal” of effectiveness and one “couldn’t say there was a definite benefit.”

Similarly, panel chair Dr Richard Page (University of Wisconsin School of Medicine & Public Health, Madison), who would have voted only if there had been a tie, said he would have voted this way also, for the same reason. Nevertheless, this split vote is “a good place” to be, he said.

The FDA generally follows its panels’ advice, but the strategy of PFO closure for stroke has been considered by the panel before and has proven to be a contentious topic between cardiologists and neurologists.

It has been a long process to get to this point. The RESPECT trial was approved by the FDA in September 2000 to evaluate the safety and efficacy of the device under an investigational device exemption (IDE); the trial was designed to show superiority of the device plus medical therapy over medical therapy alone.

 Enrollment took four times longer than expected, partly because some eligible patients chose to receive other devices being used off-label. The event rate was much lower than expected.

The primary trial end point was a composite of nonfatal stroke, postrandomization all-cause mortality, and fatal ischemic stroke. The trial was to be stopped if 25 primary events occurred.

In November 2012 the company submitted a premarket approval application (PMA) requesting marketing approval. After reviewing the application, the FDA convened the current panel.

Seeking Replies to Eight Questions

Clinical significance. In the intention-to-treat (ITT) population, there were nine primary-end-point events in the device group and 16 in the medical-management group; thus superiority was not achieved (relative risk 0.53, 95% CI 0.23–1.22; P=0.157).
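For readers curious where a figure like that comes from, the sketch below runs the standard log method for a relative risk and its 95% confidence interval in Python. The event counts (9 vs 16) are from the article; the roughly 500-per-arm denominators are assumed round numbers, so the output only approximates the reported values.

import math

events_device, n_device = 9, 500        # denominator is an assumed round number
events_medical, n_medical = 16, 500     # denominator is an assumed round number

rr = (events_device / n_device) / (events_medical / n_medical)
se_log_rr = math.sqrt(
    1 / events_device - 1 / n_device + 1 / events_medical - 1 / n_medical
)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f}-{upper:.2f}")   # about 0.56 (0.25-1.26)

Because the interval comfortably spans 1.0, the trial could not rule out no effect, which is why the panel kept returning to the question of statistical power.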

Dropout was substantial. In the initial analysis, through May 20, 2012, the dropout rate was 10.4% in the device group vs 19.1% in the medical-management group.

In the extended follow-up analysis, through August 14, 2015, there were 18 primary events in the device group and 24 in the medical-management group, and the dropout rates were 18.2% and 30.1%, respectively.

“The wide confidence interval indicates a great deal of uncertainty, [and] the withdrawal rate was greater than the event rate, which can be concerning,” Dr Scott Evans (Harvard School of Public Health, Boston, MA) pointed out, adding that, nevertheless, it is also clear there was a treatment effect.

“What we are left with is essentially an underpowered trial,” said Dr Michael Lincoff (Cleveland Clinic, OH). The event rates were lower than expected, but what if it were a real effect? “Do we throw away these nine years or try to get understanding?” he asked.

“I would be wary of throwing out” these data, Dr Jeffrey Borer (State University of NY Downstate Medical Center, Brooklyn) said.

“This has taken 15 years; what we have are the data that we have,” Dr Page echoed.

In the extended follow-up, the event rate was 0.65 per 100 patient-years in the device group vs 1.0 per 100 patient-years in the medical-management group, which is important to that one patient, several panelists pointed out.

Additional analyses other than ITT. The FDA asked the panelists to comment on additional analyses, such as per-protocol, as-treated, and device-in-place populations.

Dr Ralph D’Agostino (Boston University, MA) said that these secondary analyses did not provide any great further insight. Dr Evans agreed, saying that the ITT analysis is the only one that preserves the balance of confounding factors achieved by randomization.

Safety. “Overall, the adverse event rates are really low,” Dr Page summarized. A total of 4.5% of patients (21 of 467 patients) in the device group had a serious adverse event, including two patients who had ischemic stroke.

Brinker observed, “There should be a statement that more than 90% of patients in the device group were on antithrombotic therapy,” a sentiment also expressed by others.

In the RESPECT trial, 4.2% of subjects in the device group vs 1.9% of those in the medical-management group had atrial fibrillation, a comparison a few panel members considered misleading, since invasive procedures often trigger transient atrial fibrillation.

There were 18 patients in the device group and three patients in the medical-management group who had either deep vein thrombosis (DVT) or pulmonary embolism.

PFO closure. A total of 71.3% of the patients who received the device had complete PFO closure and 94% had effective PFO closure. “We assume the device is working. Blocking nearly all flow may be satisfactory, but we don’t know,” Dr Page summarized.

Proposed indications and labeling. These two questions were combined. “The panel generally feels that the proposed indication seems appropriate, although labeling would need to add information about anticoagulation, age perhaps, and presumption of etiology,” Dr Page summarized.

Risk/benefit analysis. The FDA wanted to know if the panel felt that the RESPECT trial supported an important role for PFO in cryptogenic stroke and offered compelling evidence that the device provides a meaningful reduction in risk of recurrent ischemic stroke vs medical therapy. The panel members generally felt that “modest,” “moderate,” or “uncertain” would be more accurate than “important” and “compelling.”

Postapproval study. The panel members stressed that postapproval studies should collect data about AF, DVT, and—as patients in RESPECT who spoke at the meeting stressed—information about quality of life. These studies should involve teams of neurologists and electrophysiologists.

Liraglutide Doesn’t Lower LVEF in HF but Does Increase Heart Rate, Says LIVE Trial


A new study agrees with recent research that the glucagonlike peptide-1 (GLP-1) agonist liraglutide (Victoza, Novo Nordisk) has no clinical benefit for patients with heart failure (HF), but it adds to the literature by noting some serious treatment-related CV events.

Last fall, results from the FIGHT trial showed no significant differences in 6-month outcomes, including mortality and rehospitalization, between the antidiabetic drug and placebo in 300 patients with acute HF and reduced ejection fraction (HFrEF). However, as expected, weight decreased and glycemic control improved in the subgroup with diabetes.

The newly reported LIVE trial enrolled 241 patients with chronic HFrEF at five centers in Denmark. After 24 weeks of treatment, there were no significant differences between the liraglutide and placebo groups for the primary end point of change in LVEF—whether the patients did or did not have comorbid diabetes[1].

More troubling, there was a significant increase in heart rate for those receiving liraglutide vs placebo (P<0.001) and more serious adverse cardiac events (12 vs 3, respectively; P=0.04).


Principal investigator Dr Henrik Wiggers (Aarhus University Hospital, Denmark) told heartwire from Medscape that even a moderately elevated resting heart rate can have a negative effect on HF and that that may explain the increased adverse events they found in their study.

“Although I must stress that this was a secondary end point, if a patient like this came to me to discuss getting a GLP-1 analogue, I would hesitate,” added Dr Anders Jorsal (Aarhus University Hospital), who presented the findings during a late-breaking-trial session here at the European Society of Cardiology (ESC) Heart Failure 2016 Congress.

Official discussant Michel Komajda (Groupe Hospitalier Pitié-Salpêtrière, Paris, France) called LIVE’s results disappointing and the adverse events worrisome. “At the moment, liraglutide may not be the drug of choice for heart failure and diabetes.”

“We Expected Better”


“We currently don’t know how to treat diabetes in heart failure and so wanted to study whether liraglutide could improve LVEF,” said Jorsal.

He added that some pilot studies and experimental data “have been very promising” for this treatment and have shown LVEF improvements for HF patients with and without type 2 diabetes mellitus, which made them hopeful.

“We really expected better for our patients,” he said.

In the study, 122 of the patients were randomly assigned to liraglutide and 119 to placebo (89% men and mean age 65 years in each group). In addition, 32% and 29% of the groups, respectively, had type 2 diabetes.

Inclusion criteria for this study included LVEF <45% and NYHA class I–III. All participants underwent 3D contrast echocardiography to measure LVEF changes from baseline, as well as blood-pressure measurements and a 6-minute-walk test, and they filled out the Minnesota Living with Heart Failure (MLHF) questionnaire.

Surprising Results

At 6 months, the liraglutide-receiving group had a 0.8% change in LVEF vs a 1.5% change in the placebo group (P=0.24). “And there were no between-group differences in other measures of systolic function,” reported Jorsal.

Weight loss was significantly greater in the liraglutide vs placebo groups (-2.2 kg vs 0.1 kg, respectively; P<0.001), HbA1c level was significantly reduced (-5.1 mmol/mol vs 3.2 mmol/mol, P<0.001), and the 6-minute-walk test was significantly improved (P=0.04).
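For readers more used to HbA1c expressed as a percentage, the short conversion below applies the commonly cited IFCC-to-NGSP relationship (NGSP% ≈ 0.0915 × IFCC + 2.15) to show that a 5.1 mmol/mol difference is roughly half a percentage point. This is a unit-conversion aside, not part of the LIVE analysis, and the constants are quoted from the published master equation rather than from the trial.

# Convert an HbA1c difference from IFCC units (mmol/mol) to NGSP units (%).
# Uses the widely cited IFCC/NGSP master equation: NGSP% = 0.0915 * IFCC + 2.15.
def ifcc_to_ngsp(ifcc_mmol_per_mol: float) -> float:
    return 0.0915 * ifcc_mmol_per_mol + 2.15

delta_ifcc = 5.1                     # reduction reported with liraglutide
delta_ngsp = 0.0915 * delta_ifcc     # the intercept cancels when converting a difference
print(f"{delta_ifcc} mmol/mol is about {delta_ngsp:.2f} HbA1c percentage points")
# prints: 5.1 mmol/mol is about 0.47 HbA1c percentage points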

However, heart rate increased by 6 beats per minute in the liraglutide patients vs a decrease of 1 beat per minute in the placebo patients (P<0.001). In addition, there were four vs two reports of atrial fibrillation, respectively; three vs zero reports of acute coronary syndrome; three vs one nonfatal ventricular tachycardias; one vs zero fatal ventricular tachycardias; and one vs zero cases of worsening heart failure.

“This study wasn’t actually powered to detect any clinical-end-point differences. So we were surprised by this,” said Wiggers. “We now need larger studies to confirm these serious clinical effects.”

He noted that the ongoing 9000-patient Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results (LEADER) trial, which announced top-line results in March, has excited many people in the field. “But only a minority of its patients had heart failure at baseline, and the type of heart failure is probably unknown.” Although it’ll be interesting to see what LEADER shows when its full results are released, “it might not be enough,” he said.

Caution for Now

After the presentation, discussant Komajda said that the clinical implications of both the LIVE and FIGHT trials are that there is no evidence of clinical benefit in cardiac function with liraglutide and that CV safety in diabetes with HF or LV dysfunction “remains an open question.”

Session moderator Dr Frank Ruschitzka (University of Zurich, Switzerland) called diabetes and HF “vicious twins” and noted that the ESC’s Heart Failure Association has just opened a new committee focused on the joint conditions.

He later told heartwire that the LIVE trial was intriguing and shed some light on important topics, “including some important safety signals” for arrhythmic events. “It’s a small study, so I don’t want to overplay these results. But safety should come first.”

Although he’s looking forward to upcoming outcomes studies, Ruschitzka said that at present he would not treat a patient of this type with this kind of drug. And that won’t change “until I see the final analysis [from ongoing large trials] presented and published.”

Low HDL, in Isolation, Not a CV Risk Predictor


Long-term findings from a famous cohort study challenge the idea that low HDL-cholesterol levels predict raised cardiovascular risk independently of other blood lipid markers and add to doubts about HDL-C per se as a target for therapy[1].

The analysis suggests that the association between future CV risk and levels of HDL-C, as currently measured in practice, is modified by LDL cholesterol and triglycerides: the predictive power of HDL-C is diminished when either or both of the other markers are high.

“We’re not taking away the importance of HDL. We’re just putting it into perspective. If you have low HDL but everything else is normal, your risk of CV disease is not elevated,” according to lead author Dr Michael Miller (University of Maryland School of Medicine, Baltimore).

“If your patient has a high HDL, the risk of CV disease tends to remain low, compared with low HDL at similar levels of risk,” Miller told heartwire from Medscape. “However, the high-HDL advantage is erased if LDL-C and triglyceride levels exceed 100 mg/dL. People with a high HDL should not be falsely reassured that their risk of heart disease is necessarily low.”

Findings from the original Framingham cohort helped establish that HDL-C levels, by themselves, are inversely correlated with CV risk, but triglycerides weren’t consistently measured in that cohort, according to the report. Triglycerides were followed in the subsequent Offspring cohort.

The new analysis, published May 12, 2016 in Circulation: Cardiovascular Quality and Outcomes, covers 3590 participants of the Framingham Heart Study Offspring Cohort, initially without known CV disease, followed from 1987 to 2011. Low levels of HDL-C (<50 mg/dL in women and <40 mg/dL in men) were defined as isolated if both triglycerides and LDL-C were optimal (<100 mg/dL). That was observed in 2.3% of the cohort.
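Restating that definition concretely, the small Python sketch below encodes the categorization rule; the thresholds are those quoted in the article, while the function name and the example values are invented for illustration.

# Classify a lipid panel according to the article's definition of "isolated"
# low HDL-C: low HDL-C with optimal LDL-C and triglycerides (all in mg/dL).
# Illustrative sketch only; this is not the study's analysis code.
def classify_hdl(hdl, ldl, triglycerides, sex):
    low_hdl = hdl < (50 if sex == "female" else 40)
    optimal_others = ldl < 100 and triglycerides < 100
    if low_hdl and optimal_others:
        return "isolated low HDL-C"
    if low_hdl:
        return "low HDL-C with elevated LDL-C and/or triglycerides"
    return "not low HDL-C"

# Hypothetical example values:
print(classify_hdl(hdl=38, ldl=90, triglycerides=85, sex="male"))
# prints: isolated low HDL-C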

CV risk was sharply elevated among participants with low HDL-C when levels of the other markers were not optimal, compared with those with isolated HDL-C, even after adjustment for age at initial lipid testing, sex, diabetes, hypertension, and smoking.

The odds ratio (OR) for CV disease at follow-up among those with low HDL-C was:

1.3 (95% CI 1.0–1.6) with LDL-C ≥100 mg/dL and triglycerides <100 mg/dL.

1.3 (95% CI 1.1–1.5) with LDL-C <100 mg/dL and triglycerides ≥100 mg/dL.

1.6 (95% CI 1.2–2.2) with both LDL-C and triglycerides ≥100 mg/dL.

The results were similar for “high” levels of triglycerides (≥150 mg/dL) and LDL-C (≥130 mg/dL).

Elevated HDL-C was associated with a 20% to 40% drop in CV risk compared with isolated low HDL-C. But there was no significant risk effect when both triglycerides and LDL were 100 mg/dL or higher.

Whether or not HDL-C is a good treatment target to pursue remains unclear, according to Miller. It’s not uncommon to meet patients with isolated low HDL-C, many of them young to middle-aged men who are otherwise healthy and in good physical shape. Such patients, he said, would once have been advised aerobic exercise and/or niacin. However, results from the AIM-HIGH and HPS2-THRIVE trials suggested that increasing HDL with niacin does not have much of an effect on CV risk.

Also, he said, the ACCELERATE study, which looked at the cholesteryl ester transfer protein (CETP) inhibitor evacetrapib (Eli Lilly), suggested that raising HDL-C and improving cholesterol efflux (the first step in reverse cholesterol transport) did not decrease CV risk. That study suggests that the right assay for measuring HDL function may not even be in use, according to Miller.

“While we still believe that HDL-C plays a pivotal role in reverse cholesterol transport, we have yet to prove that either raising HDL-C levels or improving its function reduces CVD risk,” he said.

Role of HDL Subtypes

Since 2011, European Society of Cardiology/European Atherosclerosis Society (ESC/EAS) lipid guidelines have recommended that HDL-C no longer be a target for lipid-lowering therapy, Dr Maciej Banach (Medical University of Lodz, Poland) told heartwire in an email.

One problem with some CETP-inhibitor trials is that they included patients at relatively high CV risk, so the results may not apply to a primary-care population, he said. “We might hypothesize that HDL might still be an important biomarker of CVD in the group of relatively healthy patients in primary prevention.”

“Dysfunctional HDL might be an important risk factor of CV disease with proatherogenic properties,” he explained. “This impaired HDL functionality might be observed in patients with CV risk factors like smoking, obesity, and/or diabetes mellitus and especially in patients with established CV disease and chronic kidney disease.”

Moreover, he said, it’s increasingly appreciated that HDL-C assays measure a range of HDL subtypes and that the “wrong” kind of HDL-C may be being measured. “It is still largely disputed which subtypes are most beneficial. Some of the subtypes are not associated with reduced CV risk. Some may even increase this risk and might be considered dysfunctional HDL,” according to Banach. But the current study can’t address that, because HDL-C was not broken down by subtypes.

Banach said his group is currently conducting the Investigating Dysfunctional HDL (DYS-HDL) study[2] in selected groups of patients at high risk of cardiovascular events. It’s examining subtypes of HDL with impaired antiatherogenic function and their predictive role for clinical events in a group of patients with varying CV risk. He said results are expected in 2017.

3-Person Embryos May Fail to Vanquish Mutant Mitochondria


A technique to stop children from inheriting mitochondrial diseases has the potential to backfire.

 

A gene-therapy technique that aims to prevent mothers from passing on harmful genes to children through their mitochondria — the cell’s energy-producing structures — might not always work.

Mitochondrial replacement therapy involves swapping faulty mitochondria for those of a healthy donor. But if even a small number of mutant mitochondria are retained after the transfer — a common occurrence — they can outcompete healthy mitochondria in a child’s cells and potentially cause the disease the therapy was designed to avoid, experiments suggest.

“It would defeat the purpose of doing mitochondrial replacement,” says Dieter Egli, a stem-cell scientist at the New York Stem Cell Foundation Research Institute who led the work. Egli says that the finding could guide ways to surmount this hurdle, but he recommends that the procedure not be used in the meantime.

The UK government last year legalized mitochondrial replacement therapy, although the country’s fertility regulator has yet to green-light its use in the clinic. In the United States, a panel convened by the National Academies of Sciences, Engineering, and Medicine has this year recommended that clinical trials of the technique be approved if preclinical data suggest that it is safe.

FAULTS CARRIED OVER

As many as 1 in 5,000 children are born with diseases caused by harmful genetic mutations in the DNA of their mitochondria; the diseases typically affect the heart, muscles and other power-hungry organs. Children inherit all their mitochondria from their mothers.

To prevent a mother who has harmful mitochondrial mutations from passing them to her children, the proposed remedy is to transplant the nuclear DNA of her egg into another, donor egg that has healthy mitochondria (and which has been emptied of its own nucleus). The resulting embryo would carry the mitochondrial genes of the donor woman and the nuclear DNA of its father and mother. These are sometimes called three-person embryos.

Current techniques can’t avoid dragging a small number of the mother’s mitochondria into the donor egg, totalling less than 2% of the resulting embryo’s total mitochondria. This isn’t enough to cause health problems. But researchers have worried that the proportion of faulty, ‘carried-over’ mitochondria may rise as the embryo develops. The UK Human Fertilisation and Embryology Authority (HFEA) — which will oversee clinical applications of mitochondrial replacement — has called for research into this possibility.

Egli’s study, published today in Cell Stem Cell, offers some clarity. His team used eggs from women with healthy mitochondria, but otherwise followed a procedure similar to a real therapy: transplanting nuclear DNA from one set of egg cells into another woman’s egg cells. The team then converted these eggs into embryos with two copies of the maternal genome, a process called parthenogenesis. (Mitochondrial replacement is normally performed on eggs fertilized with sperm, but Egli’s team wanted to discount any role for paternal DNA.) The researchers then extracted stem cells from the embryos and grew the cells in dishes in the lab.

The embryos, on average, had just 0.2% of carried-over mitochondrial DNA (mtDNA), and the resulting embryonic stem cells at first harboured similarly minuscule levels. But one stem-cell culture showed a dramatic change: as the cells grew and divided, levels of the carried-over mtDNA jumped from 1.3% to 53.2%, only to later plummet down to 1%. When the team split this cell line into different dishes, sometimes the donor egg’s mtDNA won out; but in others, the carried-over mtDNA dominated.

In another set of experiments, the low levels of carried-over mtDNA consistently outcompeted the donor mtDNA, both in embryonic stem cells and in tissues made from these cells.

COMPETING DNA

Exactly how the carried-over mitochondria rose to dominance is hazy. Egli’s team found no evidence that they helped cells to divide any faster — for instance, by delivering extra energy. Egli suspects that the resurgence happened because one mitochondrion was able to copy its DNA faster than the others could, which he says is more likely to occur when large DNA-sequence differences exist between the two populations of mitochondria. In his team’s study, the most dramatic rebound in carried-over mtDNA occurred when the nucleus of a woman with mitochondria common among Europeans was inserted into the egg cell of a woman with mitochondria usually found in people with African ancestry.
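A toy calculation makes the proposed mechanism easier to picture: if the carried-over mtDNA copies itself even slightly faster each cell generation, a fraction well under 1% can eventually dominate. The Python sketch below is a deliberately simplified model, not the paper’s analysis; the 10% per-generation replication advantage is an invented number, and only the 0.2% starting fraction comes from the article.

# Toy model: two mtDNA populations replicate each generation, one slightly
# faster, and the total copy number is held constant (as a cell would regulate
# its mtDNA). All parameters are illustrative assumptions.
carried_over, donor = 0.002, 0.998   # 0.2% carried-over mtDNA at the start
advantage = 1.10                     # assumed 10% per-generation replication edge

for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: carried-over fraction = {carried_over:.1%}")
    carried_over *= advantage                                   # the carried-over lineage copies faster
    total = carried_over + donor
    carried_over, donor = carried_over / total, donor / total   # renormalize to a fixed copy number

In this toy run the minority haplotype climbs from 0.2% to a majority within roughly a hundred cell generations, which is the flavour of behaviour Egli’s team observed in one of its stem-cell cultures.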

Iain Johnston, a biomathematician at the University of Birmingham, UK, says that this theory makes sense. He was part of a team that found that, in mice with mitochondria from both lab and distantly related wild populations, one mitochondrial lineage tended to dominate. If mitochondrial replacement does reach the clinic, Johnston says that donors should be chosen such that their mitochondria closely match those of the recipient mother.

But Mary Herbert, a reproductive biologist at the University of Newcastle, UK, who is part of a team pursuing mitochondrial replacement, says that mitochondria behave very differently in embryonic stem cells compared to normal human development. Levels of mutant mitochondria can fluctuate wildly in stem cells. “They are peculiar cells, and they seem to be a law unto themselves,” she says, calling the biological relevance of the latest report “questionable”. She thinks that data from embryos cultured for nearly two weeks in the laboratory will provide more useful information than Egli’s stem cell studies.

An HFEA spokesperson says that the agency is waiting for further experiments on the safety and efficacy of mitochondrial replacement (including data from Herbert’s team) before approving what could be the world’s first mitochondrial replacement in humans.

Egli hopes that the HFEA considers his team’s data. He thinks the problem they identified can be surmounted, for instance, by improving techniques to reduce the level of carried-over mitochondria or matching donors such that their mitochondria are unlikely to compete. Until this is shown for sure, he advocates caution. “I don’t think it would be a wise decision to go forward with this uncertainty.”

Neandertals Built Cave Structures–and No One Knows Why


Walls of stalagmites in a French cave might have had a domestic or a ceremonial use.

Neanderthals built one of the world’s oldest constructions—176,000-year-old semicircular walls of stalagmites in the bowels of a cave in southwest France. The walls are currently the best evidence that Neanderthals built substantial structures and ventured deep into caves, but researchers are wary of concluding much more.

“The big question is why they made it,” says Jean-Jacques Hublin, a palaeoanthropologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who was not involved in the study, which is published online in Nature on May 25. “Some people will come up with interpretations of ritual or religion or symbolism. Why not? But how to prove it?”

Speleologists first discovered the structures in Bruniquel Cave in the early 1990s. They are located about a third of a kilometre from the cave entrance, through a narrow passage that at one point requires crawling on all fours. Archaeologists later found a burnt bone from an herbivore or cave bear nearby and could detect no radioactive carbon left in it—a sign that the bone was older than 50,000 years, the limit of carbon dating. But when the archaeologist leading the excavation died in 1999, work stopped.

Then a few years ago, Sophie Verheyden, a palaeoclimatologist at the Royal Belgian Institute of Natural Sciences in Brussels and a keen speleologist, became curious about the cave after buying a holiday home nearby. She assembled a team of archaeologists, geochronologists and other experts to take a closer look at the mysterious structures.

NEANDERTHAL HEARTHS?

The six structures are made of about 400 large, broken-off stalagmites, arranged in semi-circles up to 6.7 metres wide. The researchers think that the pieces were once stacked up to form rudimentary walls. All have signs of burning, suggesting that fires were made within the walls. By analysing calcite accreted on the stalagmites and stumps since they were broken off, the team determined that the structures were made 174,400 to 178,600 years ago.

“It’s obvious when you see it, that it’s not natural,” says Dominique Genty, a geoscientist at the Institute Pierre-Simon Laplace in Gif-sur-Yvette, who co-led the study with Verheyden and archaeologist Jacques Jaubert of the University of Bordeaux, France. Their team found no signs that cave bears, which might otherwise have broken off the stalagmites themselves, had hibernated near the structures.

The researchers have so far found no remains of early humans, stone tools or other signs of occupation, but they think that Neanderthals made the structures, because no other hominins are known in western Europe at that time. “So far, it’s difficult to imagine that it’s not human made, and I don’t imagine any natural agent creating something like that,” Hublin agrees.

SPIRITUAL TO DOMESTIC

But Harold Dibble, an archaeologist at the University of Pennsylvania in Philadelphia, isn’t so sure. “When they say there’s no evidence of cave bears in this spot, maybe they’re looking at the evidence for cave bears,” he says. The authors could make a stronger case for Neanderthals if they can show, for instance, that the stalagmite pieces are uniform in size or shape and therefore selected.

If Neanderthals did build the structures, it’s not at all clear why. “It’s a big mystery,” says Genty, whose team speculates that their purpose may have ranged from the spiritual to the more domestic. Evidence for symbolism among Neanderthals is limited, ranging from etchings on a cave wall to eagle talons possibly used as jewelry.

“To me, constructing some sort of structure—things a lot of animals do, including chimps—and equating that with modern cultural behaviour is quite a leap,” says Dibble.

Marie Soressi, an archaeologist at Leiden University in the Netherlands, says that it is no surprise that Neanderthals living 176,000 years ago had the brains to stack stalagmites. They made complex stone tools and even used fire to forge specialized glues.

More surprising is the revelation that some ventured into deep, dark spaces, says Soressi, who wrote a News and Views article for Nature that accompanies the report. “I would not have expected that, and I think it immediately changes the way we are going to investigate the underground in the future,” she says.

Bigger Is Better for Neutrino Astronomy


To glimpse particles from the most energetic events in the universe, astronomers must build truly gigantic detectors

The giant IceCube neutrino detector sprawls across a cubic kilometer of pristine ice in Antarctica, but it could be dwarfed by the neutrino telescopes of the future. 

Neutrino astronomy is poised for breakthroughs. Since 2010, the IceCube experiment in Antarctica—5,160 basketball-sized optical sensors spread through a cubic kilometre of ice—has detected a few score energetic neutrinos from deep space. Although these are exciting finds that raise many questions, this paltry number of extraterrestrial particles is too few to tell their origins or to test fundamental physics. To learn more will require a new generation of neutrino observatories.

Neutrinos are subatomic particles that interact only weakly, so they can travel far through space and even penetrate Earth. IceCube detects highly energetic neutrinos, with energies above about 100 gigaelectronvolts (1 GeV is 10⁹ electronvolts, roughly the rest mass of a proton). These are produced when cosmic rays—high-energy protons or heavier nuclei from space—interact with matter or light. This might happen either at the sites where the cosmic rays are produced, or later when the rays enter Earth’s atmosphere and collide with gas molecules, releasing a cascade of elementary particles. Neutrinos produced in the atmosphere are hundreds of times more numerous than the astrophysical ones.

Many physics puzzles stand to be solved by neutrino astronomy. One is the origin of the ultra-high-energy cosmic rays. In 1962, the Volcano Ranch array in New Mexico detected an enormous shower of particles coming from one cosmic ray smashing into the upper atmosphere with a kinetic energy of above 10¹¹ GeV—equivalent to the energy of a tennis serve packed into a single atomic nucleus. Tens more such events have been detected since. But 50 years on, physicists still have no idea how nature accelerates elementary particles to such high energies. The energies far exceed the range of Earth-bound accelerators such as the Large Hadron Collider (LHC) near Geneva, Switzerland; mimicking them would require a ring the size of Earth’s orbit around the Sun.
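To see why that comparison holds, the short calculation below converts 10¹¹ GeV into joules and compares it with the kinetic energy of a served tennis ball; the ball’s mass and speed are rough assumed figures.

# Back-of-the-envelope check: a 1e11 GeV cosmic ray carries a macroscopic
# amount of energy, comparable to a served tennis ball. The ball's mass and
# speed are rough assumptions chosen for illustration.
EV_TO_JOULE = 1.602e-19

cosmic_ray_energy_j = 1e11 * 1e9 * EV_TO_JOULE            # 1e11 GeV -> eV -> joules
print(f"cosmic ray: about {cosmic_ray_energy_j:.0f} J")   # about 16 J

ball_mass_kg, ball_speed_m_s = 0.057, 24                  # ~57 g at ~85 km/h (assumed)
tennis_kinetic_j = 0.5 * ball_mass_kg * ball_speed_m_s**2
print(f"tennis serve: about {tennis_kinetic_j:.0f} J")    # about 16 J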

There is also much we need to find out about neutrinos themselves—their accurate masses, how they transform from one type (flavour) into another, and whether other predicted forms (such as ‘sterile’ neutrinos) exist. Neutrinos could also help to find dark matter, invisible material that has a part in controlling the motions of stars, gas and galaxies. Decaying or annihilating dark matter could produce energetic neutrinos that would be visible to neutrino telescopes.

The downside of neutrinos’ weak interactions is that an enormous detector is required to catch enough particles to distinguish the few space-borne ones from the many more originating from Earth’s atmosphere. IceCube is the largest neutrino-detection array in operation but it is too small, and further data collection is probably too slow to yield major breakthroughs in the next decade.

Bigger neutrino observatories, with volumes that are 10–100 times greater than that of IceCube, are essential to explore the most energetic processes in the Universe. Determining the masses of different types of neutrino and studying how neutrinos interact with matter within Earth could distinguish or rule out some models of extra spatial dimensions and address key concerns for high-energy nuclear physics such as the density of gluons (which mediate forces between quarks) in heavy nuclei.

Designs for neutrino telescopes are on the drawing board and could be up and running in five to ten years—if the astro-, particle- and nuclear-physics communities can come together and coordinate funding. A complementary set of several neutrino observatories would test physics at energies beyond the LHC’s at a fraction of the cost—tens to hundreds of millions, rather than tens of billions, of dollars.

MORE QUESTIONS THAN ANSWERS

IceCube, which became fully operational in Antarctica in 2010 (and with which I have been involved since 2004), detects blue light: Cherenkov radiation that is emitted by the charged particles produced when energetic neutrinos interact with atomic nuclei in water or ice. Computers comb through the data to look for interactions—long tracks or radial cascades of particles emanating from a point (see ‘Neutrino observatory’). IceCube sees more than 50,000 neutrino candidates per year. Fewer than 1% are from space.

There are several ways to distinguish cosmic from atmospheric neutrinos. The highest-energy events are more likely to be astrophysical. Atmospheric neutrinos are accompanied by particle showers, which can be seen with detectors on the ice surface. Muons, short-lived subatomic particles produced in these showers, are 500,000 times more numerous than neutrinos, and can also penetrate the ice; so signals accompanied by muons travelling downwards from the sky are probably atmospheric in origin. That leaves extremely energetic events with light trails that are travelling upwards (through Earth) or that originate from a point within the array volume as potentially astrophysical in origin.
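The selection logic just described can be summarized as a schematic filter, shown below in Python. It is an illustrative sketch of the reasoning rather than IceCube’s actual event-selection code, and the field names and energy threshold are invented placeholders.

# Schematic filter reflecting the cuts described above: reject events that
# look atmospheric, keep very energetic events that either travel upward
# through the Earth or start inside the instrumented volume.
def looks_astrophysical(event):
    if event["surface_shower"]:        # accompanied by an air shower at the surface
        return False
    if event["downgoing_muon"]:        # down-going muons from the sky flag atmospheric origin
        return False
    very_energetic = event["energy_tev"] > 30      # placeholder threshold, not IceCube's cut
    upgoing = event["zenith_deg"] > 90             # came up through the Earth
    starts_inside = event["vertex_contained"]      # interaction begins inside the array
    return very_energetic and (upgoing or starts_inside)

candidate = {"surface_shower": False, "downgoing_muon": False,
             "energy_tev": 120, "zenith_deg": 135, "vertex_contained": True}
print(looks_astrophysical(candidate))   # prints: True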

Since 2010, IceCube has seen about 60 astrophysical neutrino candidates. Other experiments are too small to detect any such neutrinos; these include ANTARES, an array of strands of detectors anchored to the floor of the Mediterranean Sea off Marseilles, France, and a similar array in Lake Baikal, Russia. The rate of astrophysical neutrinos that IceCube detects is about as high as could be expected—if there were more neutrinos, they would drain the cosmic rays of most of their energy. So finding the astrophysical sources of the neutrinos should be easy. The fact that we have not is a growing puzzle.

So far, neutrinos do not seem to be coming from particular sites on the sky, although several groups have suggested a weak link to the plane of the Milky Way. And analyses disfavour the many sites once thought likely to have accelerated energetic cosmic rays and neutrinos, including γ-ray bursts (GRBs) and active galactic nuclei (AGNs).

GRBs are short bursts of powerful γ-rays that are picked up by satellites. They are thought to emanate either from a black hole coalescing with a neutron star or another black hole (producing a rapid burst lasting less than 2 seconds); or from the slower collapse of supermassive stars (bursts lasting seconds or minutes). Particles are accelerated by the implosion or explosion. Of more than 800 GRBs examined by IceCube scientists, none was accompanied by a burst of neutrinos, implying that GRBs can produce at most 1% of the astrophysical neutrinos seen by IceCube.

AGNs are galaxies that at their centres have supermassive black holes accreting gas. Particles may be accelerated to relativistic speeds in jets of material that are blasted out from the black hole. But IceCube sees no associations between energetic neutrinos and active galaxies with jets that point towards Earth, suggesting that active galaxies explain at most 30% of the neutrinos.

Other unlikely sources include starburst galaxies, which contain dusty regions of intense star formation that are riddled by supernova explosions; magnetars, which are neutron stars surrounded by strong magnetic fields that expel powerful bursts of neutrinos for a few days (these should have been seen by IceCube); and supernova remnants, whose magnetic fields are too weak to explain the most energetic neutrinos, even though they are believed to be responsible for most lower-energy (up to about 10¹⁶ eV) cosmic rays seen in the Galaxy.

More exotic possibilities remain untested: as-yet-unseen supermassive dark-matter particles that annihilate and produce energetic neutrinos; or the decay of cosmic ‘strings’, discontinuities in space-time left over from the Big Bang.

IceCube has also tested alternative physics theories. It has constrained how neutrinos ‘oscillate’ from one flavour to another and set limits on the properties of dark matter and on the constituents of high-energy air showers.

NEXT GENERATION

There are two ways forward: enlarge the current optical arrays to collect more neutrinos, or find other strategies for isolating the highest energy neutrinos that must be cosmic in origin. These approaches cover different energy ranges and thus complementary physics. Both merit support.

First, larger optical Cherenkov telescopes could be deployed in ice or a lake, sea or ocean—similar to IceCube or ANTARES but with more efficient optical sensors and cheaper technology. Several groups have developed advanced designs for these concepts but lack funding. The detectors could be constructed and operational by the early 2020s. For IceCube, technical improvements would include more efficient drilling technology and sensors that fit in narrower bore holes, which are cheaper to drill.

Different sites offer different benefits. Antarctica offers a large expanse of clear, compacted ice and established infrastructure. But arrays in the Northern Hemisphere, for example in the Mediterranean, can more directly observe astrophysical neutrinos from the centre of the Galaxy that have passed through Earth, without having to reject down-going atmospheric neutrinos, as a southern site would have to. Lake Baikal is attractive because fresh water lacks potassium-40 and has lower bioluminescence (both of which contribute to background light and can confuse the reconstruction of particle tracks), and because its surface freezes in winter, simplifying construction.

The second approach requires catching neutrinos with energies above 10⁸ GeV. Neutrinos this energetic are rare—IceCube has seen none—and an array of at least 100 km³ would be needed to capture enough events. Because optical Cherenkov light travels only tens of metres in ice or water, covering such a volume would require millions of sensors and would be expensive.

A more practical way is to search for radio emissions from neutrino interactions with the Antarctic ice sheet. When the neutrinos hit an atomic nucleus in the ice, they create a shower of charged particles that give off radio waves in the 50 megahertz to 1 gigahertz frequency range, as well as visible light. Radio waves can propagate for kilometres through ice. So a radio-sensing array covering 100 km³ could be more sparsely populated with instruments, with roughly one station per cubic kilometre. The radio pulses from neutrinos with energies above 10⁸ GeV should be strong enough for antennas in the ice to pick up. Two international groups are building prototypes and have sought funding to expand (I am involved with one, ARIANNA).

GREEN LIGHT

With a range of affordable, next-generation designs shovel-ready, decisions about design priorities need to be made and grants deployed. The main obstacles are limited national science budgets and funding-agency silos. Neutrino astronomy falls between the particle-, nuclear- and astrophysics communities, which need to pool resources to realize the promise of these techniques.

First, one or both of the successors to IceCube and ANTARES should be funded and built. An upgraded IceCube experiment (IceCube-Gen2) and the Cubic Kilometre Neutrino Telescope (KM3NeT), a proposed European project, are both strong candidates (see ‘Next-generation neutrino telescopes’). If necessary, the teams coordinating IceCube, KM3NeT and the Gigaton Volume Detector, a proposed Russian array, should explore merging these collaborations to focus on a single large detector at the most cost-effective site. Funding should be sought from a wider range of agencies, including those focused on particle and nuclear physics.

Second, at least one 100-km³ radio-detection array needs to get the go-ahead. Because such a project can be done only in Antarctica, the onus is on the US National Science Foundation, which is the largest supporter of Antarctic research and realistically the only group that has adequate logistical resources to pull off such a project. Many non-US groups are interested, and collaborations should be set up and costs shared internationally. Once proven, such an array could be expanded to cover 1,000 km³ by around 2030 to monitor the ultra-high-energy Universe.

By finding the astrophysical sources of ultra-energetic neutrinos and cosmic rays—or ruling out remaining models—the next generation of neutrino observatories is guaranteed to make discoveries.

Forget self-driving cars: What about self-flying drones?


While all the focus has been on autonomous vehicles, one Belgian startup has been busily developing self-flying features for drones.

EagleEye says its tech gives drones military-grade security and the possibility of flying autonomous missions.

In 2014, three software engineers decided to create a drone company in Wavre, Belgium, just outside Brussels. All were licensed pilots and trained in NATO security techniques.

But rather than build drones themselves, they decided they would upgrade existing radio-controlled civilian drones with an ultra-secure software layer to allow the devices to fly autonomously.

Their company, EagleEye Systems, would manufacture the onboard computer and design the software, while existing manufacturers would provide the drone body and sensors.

Fast-forward to the end of March this year, when the company received a Section 333 exemption from the US Federal Aviation Administration to operate and sell its brand of autonomous drones in the US. The decision came amid expectations that the FAA will loosen its restrictions on legal drone operations and issue new rules to allow drones to fly above crowds.

As hype builds over the potential of drones in business, Europe is looking at when and where rules need to be put in place.

“People have been coming to us and saying, ‘Listen, I’ve been doing such-and-such with men on the ground. Can you help? Can you make it more efficient?’,” EagleEye Systems COO Ash Bhatia says.

“Based on artificial intelligence, we’re able to utilize the data, process it, and allow the drone to make a decision based on the analysis it does, which we then use in different scenarios.”

Because of light-touch drone regulation and advances in robotics, autonomous drones are becoming commercially available. While autonomy is a vague term, fully autonomous drones, which require no input from human pilots during flight, have until now existed mainly as concepts and tech demos.

Drone hobbyists have a growing number of ways to refashion their existing devices into semi-autonomous drones with open-source drone software. At the same time, the commercial drone industry is releasing products that mirror automation trends in the amateur drone-maker world.

In March, Chinese drone company DJI released the Phantom 4, a semi-autonomous drone that has limited automation features, such as obstacle avoidance and object-tracking using its onboard camera.
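Camera-based object tracking of the kind the Phantom 4 offers is, at its core, a visual tracker running over the live video feed. The sketch below is a generic illustration using OpenCV's KCF tracker, not DJI's implementation; the video file name and the interactively selected bounding box are placeholder assumptions, and it requires the opencv-contrib-python package.

```python
# Minimal camera-based object tracking: select a target in the first frame,
# then re-locate it in every subsequent frame.
import cv2

cap = cv2.VideoCapture("drone_footage.mp4")   # assumed sample video file
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read video source")

bbox = cv2.selectROI("Select target", frame)  # user draws a box around the target
tracker = cv2.TrackerKCF_create()             # kernelized correlation filter tracker
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)       # re-locate the target in the new frame
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

On a drone, the box position would feed gimbal or flight-control commands rather than just drawing a rectangle on screen.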

EagleEye’s software and computing hardware give first-generation drones the same capabilities as the Phantom 4, but EagleEye also says its drones require no input from a human pilot. In other words, it is arguing that drones equipped with its technology are fully autonomous.

Robotics innovations in the transportation industry are leaving the laboratory at a fast pace, especially given the tech industry’s current focus on accelerating driverless car innovation.

But less apparent is how quickly robotics technology has shaped drone automation. In the race to become the most autonomous moving vehicle, it seems that drones are winning out over cars.

The autonomous drones that get the most headlines come from projects at big tech companies that have not yet reached the market.

Amazon’s delivery drone concept includes pre-programmed sense-and-avoid capabilities, but it has not been commercialized, mainly because the FAA currently approves drone delivery on a per-flight basis.

Facebook is famously developing its own internet-delivery drone, Aquila, which aims to improve internet connectivity in areas that have few land-based internet connections. Aquila would network with distant drones and relay internet bandwidth from ground stations to rural communities with laser links. The Facebook drone, like Amazon’s delivery drone project, is still in development.

“People are giving free rein to a lot of the ideas and a lot of the practical considerations that will shape the future of how we see drones operating autonomously in the future,” says Dan Gettinger, co-director of the Center for the Study of the Drone at Bard College.

Sophisticated software and hardware can transform drones into autonomously flying machines. But most existing drone software suites, such as ArduPilot, focus on autopilot features, while the operating systems commonly used on drones, like NuttX, are not designed specifically for them. These OSes and software packages are typically open source.
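For a sense of what scripting an ArduPilot-style autopilot looks like, here is a minimal sketch using the open-source DroneKit-Python library, which talks to the flight controller over MAVLink. The connection string, altitude, and coordinates are placeholder assumptions (values of the kind you would use against a SITL simulator), and this illustrates the open-source route, not EagleEye's proprietary software.

```python
# Minimal autonomous waypoint hop with DroneKit-Python against an
# ArduPilot-style autopilot (e.g. a SITL simulator). Connection string,
# altitude and coordinates are illustrative assumptions.
import time
from dronekit import connect, VehicleMode, LocationGlobalRelative

vehicle = connect("127.0.0.1:14550", wait_ready=True)  # assumed SITL endpoint

# Arm and take off in GUIDED mode.
vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:
    time.sleep(1)

TARGET_ALT = 20  # metres
vehicle.simple_takeoff(TARGET_ALT)
while vehicle.location.global_relative_frame.alt < TARGET_ALT * 0.95:
    time.sleep(1)

# Fly to a pre-programmed waypoint, then land.
waypoint = LocationGlobalRelative(50.717, 4.601, TARGET_ALT)  # placeholder coordinates
vehicle.simple_goto(waypoint)
time.sleep(30)  # crude wait; a real mission would monitor position instead

vehicle.mode = VehicleMode("LAND")
vehicle.close()
```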

Against those offerings, EagleEye says its proprietary software allows users to program complex missions into their drones, similar to the capabilities of military-grade drones but without the high costs. Now, with a second office in New York and its new FAA exemption in hand, the company is planning to spread take-up of its drone technology across the US.

During nearly four years of development in Europe and the US, EagleEye has conducted flight operations for clients that were interested in search-and-rescue missions, agriculture monitoring, and other civilian applications.

Equipped with smart sensors, image-recognition algorithms, and onboard software, EagleEye-equipped drones can decide how to fly around obstacles and when to start recording visuals to send back to base.

Depending on which sensors customers choose to install on their drones, the technology can also enable drones to sense thermal signatures and seek out odor gradients. The user monitors the drone from a tablet or other personal device and must keep the drone in view in case it fails, as FAA regulations require.
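Seeking out an odor or thermal gradient reduces, in the simplest case, to repeatedly stepping in the direction where sensor readings increase fastest. The toy sketch below illustrates that idea with an invented Gaussian plume; the field, step size, and stopping rule are assumptions for illustration and say nothing about EagleEye's actual algorithms.

```python
# Toy gradient-following: the drone samples a scalar field (odor concentration,
# thermal intensity) around its position and steps toward the strongest reading.
import math

def concentration(x, y, source=(40.0, -25.0)):
    """Fake odor field: a Gaussian plume centred on `source` (assumed)."""
    dx, dy = x - source[0], y - source[1]
    return math.exp(-(dx * dx + dy * dy) / 500.0)

def seek_source(x, y, step=2.0, iterations=200):
    """Greedy gradient ascent using finite-difference sensor samples."""
    for _ in range(iterations):
        # Estimate the local gradient from readings taken around the drone.
        gx = concentration(x + 1, y) - concentration(x - 1, y)
        gy = concentration(x, y + 1) - concentration(x, y - 1)
        norm = math.hypot(gx, gy)
        if norm < 1e-9:            # the field is flat here: stop searching
            break
        x += step * gx / norm
        y += step * gy / norm
    return x, y

print(seek_source(0.0, 0.0))        # ends up near the plume centre (40, -25)
```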

EagleEye Systems says it could theoretically retrofit any drone with its proprietary computer and software, and third-party sensors and actuators, provided that the vehicle has enough power and robustness to support them.

“The machine is the thing itself that needs to be capable,” Bhatia says. “This is not your typical toy drone.”

Most of EagleEye’s clients are industrial customers, because its systems sit in a higher price range than most consumer drones.

The company says prices vary according to the customer’s needs and reflect EagleEye’s tight security and technical measures, which are designed to minimize hacking vulnerabilities and deviant flight patterns. Its autonomous drone systems are available to consumer and industrial customers alike.

Automotive and aerospace companies have been inching closer to the elusive goal of automated vehicles that require no input from the driver at all. Drones and driverless cars use similar technology to re-engineer themselves into autonomous vehicles: computers, sensors, and actuators.

“There’s much greater capability for drones to operate autonomously, simply because of the scale of the issue,” says Bard College’s Gettinger. “They operate in a much less complex environment [than cars do].”

Drones have advantages over cars that could explain why they have become successful automation platforms.

First, they operate in airspace, which is relatively safe compared with roadways, where motor and pedestrian traffic present constant obstacles to ground vehicles.

Second, drones are structurally less complex than cars, containing fewer moving parts than ground vehicles. Finally, drones do not transport people, a quality that removes a large safety constraint from the testing process.

Equipped with autopilot, obstacle-avoidance, and tracking capabilities, currently available semi-autonomous and autonomous drones still cannot do everything that a human pilot can.

However, EagleEye’s Bhatia says the company is already working on a new generation of drones that will achieve greater autonomy.

The iBOT smart wheelchair lets users walk up stairs


Toyota and Dean Kamen’s research company DEKA have announced a new partnership to complete the iBOT, a smart wheelchair that would give disabled users more independence by changing how they can move.

Last week, the automaker and research firm revealed plans at the Paralyzed Veterans of America’s 70th Annual Convention to complete the development and launch the iBOT motorized wheelchair.

As shown in the video below, the iBOT takes the traditional wheelchair design and improves on it with two sets of wheels that can rotate as well as rise and fall.

The companies say this would allow users to “walk” up and down stairs, rise from a sitting level to at least six feet and move across a variety of different terrains.

To push the project forward, Toyota will license balancing technology held by DEKA for medical rehabilitative therapy and potentially other purposes, but the companies are still in talks to thrash out the details of the iBOT.
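DEKA's balancing technology is proprietary and its details are not described here, but the general idea behind a self-balancing wheel cluster is the classic inverted-pendulum control problem: measure the tilt of the seat, and command wheel torque that pushes the tilt back toward zero. The sketch below is a textbook-style illustration with invented gains and a toy plant model, not the iBOT's actual control law.

```python
# Generic self-balancing loop for an inverted-pendulum style platform.
# Gains and dynamics are illustrative assumptions chosen so the toy model
# is stable; they do not describe DEKA's proprietary technology.

DT = 0.01            # control period, seconds
KP, KD = 400.0, 40.0 # illustrative proportional and derivative gains

def balance_step(tilt, tilt_rate):
    """Return wheel torque that pushes the tilt angle back toward zero."""
    return -(KP * tilt + KD * tilt_rate)

def simulate(tilt=0.1, tilt_rate=0.0, steps=500):
    """Crude simulation: gravity amplifies tilt, wheel torque corrects it."""
    for _ in range(steps):
        torque = balance_step(tilt, tilt_rate)
        tilt_accel = 9.81 * tilt + 0.05 * torque  # toy linearized dynamics
        tilt_rate += tilt_accel * DT
        tilt += tilt_rate * DT
    return tilt

print(f"Residual tilt after 5 s: {simulate():.4f} rad")  # decays toward zero
```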

Currently, even the most advanced wheelchairs on the market are hampered by rigid frames and wheels that can prevent users from traveling along certain paths or up steps, and without accommodations such as ramps into shops or onto bus platforms, daily life becomes even more difficult.

It can also be the case that without the proper adjustments to homes, disabled individuals may not be able to head upstairs easily, if at all.

Inventions such as the iBOT could change this and help those with physical issues adapt to daily life without so many restrictions. Many of our cities and urban environments were not built with wheelchairs or physical impairments in mind and it has only been in the last decade or so that adaptations have been made in shops and facilities — but we still have a way to go.

“Our company is very focused on mobility solutions for all people,” said Osamu Nagata, executive vice president and chief administrative officer at Toyota Motor North America. “We realize that it is important to help older adults and people with special needs live well and continue to contribute their talents and experience to the world.”

Watch the video: https://youtu.be/K0s31ypyxko

Cholesterol Lowering in Intermediate-Risk Persons without Cardiovascular Disease


BACKGROUND

Previous trials have shown that the use of statins to lower cholesterol reduces the risk of cardiovascular events among persons without cardiovascular disease. Those trials have involved persons with elevated lipid levels or inflammatory markers and involved mainly white persons. It is unclear whether the benefits of statins can be extended to an intermediate-risk, ethnically diverse population without cardiovascular disease.

METHODS

In one comparison from a 2-by-2 factorial trial, we randomly assigned 12,705 participants in 21 countries who did not have cardiovascular disease and were at intermediate risk to receive rosuvastatin at a dose of 10 mg per day or placebo. The first coprimary outcome was the composite of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke, and the second coprimary outcome additionally included revascularization, heart failure, and resuscitated cardiac arrest. The median follow-up was 5.6 years.

RESULTS

The overall mean low-density lipoprotein cholesterol level was 26.5% lower in the rosuvastatin group than in the placebo group. The first coprimary outcome occurred in 235 participants (3.7%) in the rosuvastatin group and in 304 participants (4.8%) in the placebo group (hazard ratio, 0.76; 95% confidence interval [CI], 0.64 to 0.91; P=0.002). The results for the second coprimary outcome were consistent with the results for the first (occurring in 277 participants [4.4%] in the rosuvastatin group and in 363 participants [5.7%] in the placebo group; hazard ratio, 0.75; 95% CI, 0.64 to 0.88; P<0.001). The results were also consistent in subgroups defined according to cardiovascular risk at baseline, lipid level, C-reactive protein level, blood pressure, and race or ethnic group. In the rosuvastatin group, there was no excess of diabetes or cancers, but there was an excess of cataract surgery (in 3.8% of the participants, vs. 3.1% in the placebo group; P=0.02) and muscle symptoms (in 5.8% of the participants, vs. 4.7% in the placebo group; P=0.005).
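For readers who prefer absolute numbers to hazard ratios, the event rates quoted above translate into an absolute risk reduction and a number needed to treat; the arithmetic below is mine, not a figure reported in the abstract.

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT) for the
# first coprimary outcome, computed from the percentages quoted above
# (3.7% with rosuvastatin vs 4.8% with placebo over a median 5.6 years).
placebo_risk = 0.048
rosuvastatin_risk = 0.037

arr = placebo_risk - rosuvastatin_risk
nnt = 1 / arr

print(f"Absolute risk reduction: {arr:.1%}")           # ~1.1%
print(f"NNT over ~5.6 years:     {nnt:.0f} patients")  # ~91
```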

CONCLUSIONS

Treatment with rosuvastatin at a dose of 10 mg per day resulted in a significantly lower risk of cardiovascular events than placebo in an intermediate-risk, ethnically diverse population without cardiovascular disease.

Source: NEJM
