‘Hidden Dangers’ of Mammograms Every Woman Should Know About


Study Finds Women Still Suffering 3 Years After Breast Cancer False-Positive

Millions of women undergo them annually, but few are even remotely aware of just how many dangers they are exposing themselves to in the name of prevention, not the least of which are misdiagnosis, overdiagnosis and the promotion of breast cancer itself. 

A new study published in the Annals of Family Medicine titled "Long-term psychosocial consequences of false-positive screening mammography" brings to the forefront a major underreported harm of breast screening programs: the very real and lasting trauma associated with a false-positive diagnosis of breast cancer.[1]

The study found that women with false-positive diagnoses of breast cancer, even three years after being declared free of cancer, “consistently reported greater negative psychosocial consequences compared with women who had normal findings in all 12 psychosocial outcomes.”

The psychosocial and existential parameters adversely affected were:

  • Sense of dejection
  • Anxiety
  • Negative impact on behavior
  • Negative impact on sleep
  • Degree of breast self-examination
  • Negative impact on sexuality
  • Feeling of attractiveness
  • Ability to keep ‘mind off things’
  • Worries about breast cancer
  • Inner calm
  • Social network
  • Existential values

What is even more concerning is that “[S]ix months after final diagnosis, women with false-positive findings reported changes in existential values and inner calmness as great as those reported by women with a diagnosis of breast cancer.”

In other words, even after being “cleared of cancer,” the measurable adverse psychospiritual effects of the trauma of diagnosis were equivalent to actually having breast cancer.

Given that the cumulative probability of false-positive recall or biopsy recommendation after 10 years of screening mammography is at least 50%,[2] this is an issue that will affect the health of millions of women undergoing routine breast screening.
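To see how a roughly 50% ten-year figure can arise from seemingly modest per-screen error rates, here is a minimal sketch in Python. The per-mammogram false-positive rates below are illustrative assumptions, not figures from the cited study, which reports only the cumulative result.

```python
# Cumulative probability of at least one false positive over
# repeated screenings, assuming independent screens.
# The per-screen rates are illustrative assumptions only.

def cumulative_false_positive(per_screen_rate: float, n_screens: int) -> float:
    """P(at least one false positive) = 1 - P(no false positives)."""
    return 1 - (1 - per_screen_rate) ** n_screens

for rate in (0.05, 0.07, 0.10):
    print(f"per-screen rate {rate:.0%}: "
          f"cumulative over 10 screens = {cumulative_false_positive(rate, 10):.0%}")
# 5% -> 40%, 7% -> 52%, 10% -> 65%
```

As the output shows, even a 7 percent chance of a false alarm at each annual screen compounds to roughly a coin flip over a decade.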

The Curse of False Diagnosis and ‘Bone-Pointing’

Also, we must be cognizant of the fact that these observed ‘psychosocial’ and ‘existential’ adverse effects don’t just cause some vaguely defined ‘mental anguish,’ but translate into objectively quantifiable physiological consequences of a dire nature.

For instance, last year, a groundbreaking study was published in the New England Journal of Medicine showing that, based on data on more than 6 million Swedes aged 30 and older, the risk of suicide was found to be up to 16 times higher and the risk of heart-related death up to 26.9 times higher during the first week following a positive versus a negative cancer diagnosis.[3]

This was the first study of its kind to confirm that the trauma of diagnosis can result in, as the etymology of the Greek word trauma reveals, a “physical wound.” In much the same way that Aboriginal cultures had a ‘ritual executioner’ or ‘bone pointer,’ known as a Kurdaitcha, who could curse a victim to death by pointing a bone at him, the curse culminating in the actual self-willed death of the accursed, the modern ritual of medicine reenacts ancient belief systems and power differentials, with the modern physician, whether he likes it or not, acting as a ‘priest of the body.’ We need only look to the well-known dialectic of the placebo and nocebo effects to see these powerful, “irrational” processes still operative.

Millions Harmed by Breast Screening Despite Assurances to the Contrary

Research of this kind clearly indicates that the conventional screening process carries health risks, both to body and mind, which may outstrip the very dangers the medical surveillance believes itself responsible for, and effective at, mitigating. For instance, according to a groundbreaking study published last November in the New England Journal of Medicine, 1.3 million US women were overdiagnosed and overtreated over the past 30 years.[4] These are the ‘false positives’ that were never caught, resulting in the unnecessary irradiation, chemotherapy poisoning and surgery of approximately 43,000 women each year. Now, when you add to this dismal statistic the millions of ‘false positives’ that, while eventually caught, nevertheless produced lasting trauma in those women, breast screening begins to look like a veritable nightmare of iatrogenesis.

And this does not even account for the radiobiological dangers of the x-ray mammography screening process itself, which may be causing an epidemic of mostly unacknowledged radiation-induced breast cancers in exposed populations.

For instance, in 2006, a paper published in the British Journal of Radiology, titled “Enhanced biological effectiveness of low energy X-rays and implications for the UK breast screening programme,” revealed that the type of radiation used in x-ray-based breast screenings is much more carcinogenic than previously believed:

Recent radiobiological studies have provided compelling evidence that the low energy X-rays as used in mammography are approximately four times – but possibly as much as six times – more effective in causing mutational damage than higher energy X-rays. Since current radiation risk estimates are based on the effects of high energy gamma radiation, this implies that the risks of radiation-induced breast cancers for mammography X-rays are underestimated by the same factor.[5]
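The arithmetic implied by this passage is a straightforward scaling: if risk models calibrated on high-energy gamma rays are applied to mammography X-rays that cause 4 to 6 times more mutational damage per unit dose, the modeled risk must be multiplied by that same factor. A minimal sketch, with a baseline risk figure invented purely for illustration:

```python
# Sketch of the scaling argument quoted above. The baseline modeled
# risk is an invented placeholder, not a figure from the paper; only
# the 4x-6x effectiveness factors come from the quoted text.

baseline_modeled_risk = 0.001  # hypothetical modeled risk of a radiation-induced cancer

for effectiveness_factor in (4, 6):
    adjusted_risk = baseline_modeled_risk * effectiveness_factor
    print(f"factor {effectiveness_factor}: adjusted risk = {adjusted_risk:.2%}")
# factor 4: adjusted risk = 0.40%
# factor 6: adjusted risk = 0.60%
```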

Even the breast cancer treatment protocols themselves have recently been found to contribute to enhancing cancer malignancy and increasing mortality. Chemotherapy and radiation both appear to enrich the cancer stem cell populations, which are at the root of breast cancer malignancy and invasiveness. Last year, in fact, the prestigious journal Cancer, a publication of the American Cancer Society, published a study performed by researchers from the Department of Radiation Oncology at the UCLA Jonsson Comprehensive Cancer Center showing that even when radiation kills half of the tumor cells treated, the surviving, treatment-resistant cells, known as induced breast cancer stem cells (iBCSCs), were up to 30 times more likely to form tumors than the nonirradiated breast cancer cells. In other words, the radiation treatment shrinks the total population of tumor cells, generating the false appearance that the treatment is working, but actually increases the ratio of highly malignant to benign cells within that tumor, eventually leading to the iatrogenic (treatment-induced) death of the patient.[6]

What we are increasingly bearing witness to in the biomedical literature itself is that the conventional breast cancer prevention and treatment strategies and protocols are bankrupt. Or, from the perspective of the more cynical observer, the enterprise is immensely successful, owing to the fact that it is driving billions of dollars of revenue by producing more of what it claims to be fighting.

The time has come for a radical transformation in the way that we understand, screen for, prevent and treat cancer. It used to be that natural medical advocates didn’t have the so-called peer-reviewed ‘evidence’ to back up their intuitive and/or anecdotal understanding of how to keep the human body in health and balance. That time has passed.

The Real Reason Toxic Algae Hit Florida’s Beaches


The Primary Cause of Florida's Toxic Algal Blooms Completely Ignored

 

Politicians, the tourism and real estate industries, and even environmental groups are loath to admit it, but near-shore algal blooms and red tide are the result of land-based, man-made pollution.

To combat the near-shore algal blooms in Florida, we must first admit the scientifically validated fact that they are the result of land-based activities: man-made pollution from organic nitrogen, originating in sewage spills, septic tank leaks, and the overuse of organic nitrogen from manure on lawns. All of this we can abate if we have the will.

The reality is that those of us, and those authorities, who deny the involvement of land-based activities as a cause of algae blooms are conveniently ignoring the peer-reviewed, published science, which tells us what is feeding both green algal Synechococcus blooms and harmful red tide Karenia brevis blooms near shore.

[To learn more about the real causes of Red Tide read: “The Truth About Red Tide’s Manmade Causes and Health Effects“]

With evidence that blooms of Synechococcus (a green slime algae) can be enhanced by anthropogenic nutrients, the potential importance of this particulate nutrient (urea nitrogen) source for sustaining red tide blooms is large and may help to resolve the current uncertainty as to how algal blooms and red tide K. brevis blooms are maintained.

According to the peer reviewed science, urea nitrogen run off appears to be the cause of exacerbation of Red Tide near shore. The Red Tide organisms feed on the green slime algae as a source of energy. Peruse the following study to better understand how this happens:

“Grazing by Karenia brevis on Synechococcus enhances its growth rate and may help to sustain blooms,” Patricia M. Glibert (University of Maryland Center for Environmental Science, Horn Point Laboratory, PO Box 775, Cambridge, Maryland 21613, USA) and JoAnn M. Burkholder (Center for Applied Aquatic Ecology, North Carolina State University, Raleigh, North Carolina 27695, USA)

Based on this research and its conclusions, if we reduce urea nitrogen pollution from all sources, including stopping septic tank leaks, preventing sewage spills, and curbing inappropriate applications of organic manure to lawns, we should be able to significantly reduce the duration of green slime algal blooms and red tide blooms near shore.

What the published research essentially proves is that runoff from land-based applications of urea nitrogen fertilizers, such as those commonly used in lawn care, together with additional urea nitrogen from septic tanks, sewage spills and close-to-water sewage treatment effluent, results in blooms of Synechococcus, a harmless green slime algae. According to the science cited here, Karenia brevis (red tide) uses the green slime as an energy source. The more Synechococcus, the more red tide; it’s simple cause and effect.

So, What Is The Real Cause of Prolonged, Near-To-Shore Algal Outbreaks?

We can learn much from history in Japan, where eutrophication and harmful algal blooms in the Seto Inland Sea have been studied for decades. What was proven there is that reducing urea nutrient pollution is one solution, as documented in “Eutrophication and occurrences of harmful algal blooms in the Seto Inland Sea, Japan.”

The Seto Inland Sea is the largest enclosed coastal sea in Japan and is also a major fishing ground, including aquacultures of fish, bivalves and seaweeds. The incidence of red tides dramatically increased in frequency and scale in the Seto Inland Sea along with serious eutrophication in the 1960s and 1970s. Japan’s “Law Concerning Special Measures for Conservation of the Environment of the Seto Inland Sea” was enacted in 1973, and industrial loading was decreased to half the level of 1972.

The enactment of this law was triggered by a red tide of Chattonella antiqua (Hada) Ono, which caused the largest economic loss by the mass mortality of cultured yellowtails (7.1 billion yen) in the summer of 1972. As a result of this law, the quantity of COD dumped in the Seto Inland Sea, which was 1,700 tons per day in 1972, had been reduced to 717 tons per day by 1999 (Ministry of the Environment, Government of Japan & the Association for the Environmental Conservation of the Seto Inland Sea, 2001).

To learn more about the link between urea fertilizers and global increases in eutrophication and harmful algal blooms read: “Escalating worldwide use of urea – a global change contributing to coastal eutrophication.” 

So, back to the question: are these outbreaks entirely natural phenomena, as many health authorities, and certainly many within the mainstream media and the tourism and real estate industries, often maintain?

The answer is a resolute and resounding NO. In April 2009, the journal Aquatic Microbial Ecology published a groundbreaking study titled “Grazing by Karenia brevis on Synechococcus enhances its growth rate and may help to sustain blooms,” cited above, which provided the missing link in how red tide is directly fed by human, land-based activities.

Ironically, plants primarily need magnesium (for chlorophyll) and potassium, not nearly as much nitrogen, which is presently being applied at up to 5 times the required level. In fact, excess nitrogen leads to plasmolysis in plants, causing water to leave the plant for the soil and resulting in wilting. The excess nitrogen, of course, leaches into the soil, and a portion of it eventually causes water pollution.

The obvious solution to the accelerating red tide problem is to reduce land-based applications of urea nitrogen, especially in the summer months. As the green slime is reduced, the red tide will have no additional energy source and will die out.

The reality is that authorities who deny the involvement of land-based activities in algae blooms are conveniently ignoring the peer-reviewed, published science that tells us what is feeding red tide near shore. The tourism and real estate industries also have a vested interest in minimizing and/or denying the extent of the problem, at least in the short term. The long-term outlook, however, is dismal for these industries: failing to act, they will watch the primary attraction for tourists and potential real estate buyers, the ocean and the beaches, deteriorate.

How We Can Contribute To A Long-Term Solution

The long-term solution is to reduce the use of organic, manure-based urea nitrogen fertilizers in both lawn and agricultural applications. There is no question that urea-rich agricultural runoff is a sizable contributor to the overall nitrogen burden in Florida’s bays. Much of the agricultural runoff ends up in Lake Okeechobee, which eventually empties into those bays.

A 2006 study published in the journal Biogeochemistry titled, “Escalating worldwide use of urea – a global change contributing to coastal eutrophication,” indicates worldwide use of urea as a nitrogen fertilizer and feed additive has increased more than 100-fold in the past 4 decades. The study pointed out:

“Long thought to be retained in soils, new data are suggestive of significant overland transport of urea to sensitive coastal waters. Urea concentrations in coastal and estuarine waters can be substantially elevated and can represent a large fraction of the total dissolved organic nitrogen pool. Urea is used as a nitrogen substrate by many coastal phytoplankton and is increasingly found to be important in the nitrogenous nutrition of some harmful algal bloom (HAB) species.”

They also noted that “the global increase from 1970 to 2000 in documented incidences of paralytic shellfish poisoning, caused by several HAB species, is similar to the global increase in urea use over the same 3 decades.”

The reality is that these agricultural practices have been a long time in the making, and will take considerable time, energy and political clout to change. The good news is that you can make changes at the local level, from the bottom up, as it were, by starting with your own lawn.

New Guidelines for the Management of Aspergillosis


Early diagnosis and treatment of the major forms of aspergillosis are the focus of a new practice guideline from the Infectious Diseases Society of America (IDSA).

The evidence-based recommendations highlight data from clinical studies that evaluated new and current therapies for the management of Aspergillus infection, as well as data on the use of non-culture-based biomarkers for diagnosing infection. In particular, the guidelines focus on the management of the major forms of aspergillosis (allergic, chronic, and invasive), with special emphasis on invasive aspergillosis because it remains a significant cause of morbidity and mortality in high-risk, immunocompromised patients.

Thomas F. Patterson, MD, from the University of Texas Health Science Center at San Antonio, and colleagues published the updated practice guideline online June 29 in Clinical Infectious Diseases.

“This document constitutes the guidelines of [IDSA] for treatment of aspergillosis and replaces the practice guidelines for Aspergillus published in 2008,” the authors write. However, they emphasize that guidelines cannot always account for individual patient variation, and are therefore not intended to replace physicians’ clinical judgment.

The authors emphasize that improved use of diagnostic tools has increased clinicians’ ability to make an early diagnosis of aspergillosis. Until molecular diagnostic techniques become more widely used in clinical laboratories, the guidelines recommend submission of tissue and fluid specimens for histopathologic, cytologic, and culture examination to diagnose invasive aspergillosis. However, molecular techniques, such as DNA sequencing, should be used to identify Aspergillus species in cases that involve either isolates with atypical growth or concern for resistance.

If invasive pulmonary aspergillosis (IPA) is suspected in a patient, the guidelines recommend performing computed tomography scanning of the chest, regardless of chest radiography findings. Bronchoscopy with bronchoalveolar lavage is also recommended in these patients, unless significant comorbidities (such as bleeding or severe hypoxemia) preclude it.

Detection of galactomannan (a component of the Aspergillus cell wall) in serum or bronchoalveolar lavage fluid is recommended as an accurate marker for the diagnosis of invasive aspergillosis in adults and children, when used in certain patient subpopulations, such as hematopoietic stem cell transplant recipients or patients with hematologic malignancies.

The guidelines also discuss emerging diagnostic tools with the ability to further improve the early diagnosis of Aspergillus infection. For instance, Aspergillus polymerase chain reaction (PCR) testing has been shown to be more sensitive than culture in detecting the fungus in blood and respiratory fluids. Nevertheless, although a promising diagnostic tool, these assays have not been standardized and validated for routine clinical use, and the role of PCR testing in patient management remains unclear. Clinicians should therefore consider the use of Aspergillus PCR on a case-by-case basis.

The availability of antifungal agents that are more active, better tolerated, and/or developed in extended-release formulations has also substantially improved therapy for patients at risk for serious Aspergillus infections, the authors note.

If IPA is suspected, antifungal therapy should be initiated while diagnostic evaluation is ongoing. Voriconazole is recommended for primary treatment of IPA, although combination therapy with voriconazole and an echinocandin may be warranted for some high-risk patients. Antifungal therapy for IPA should continue for at least 6 to 12 weeks. Antifungal prophylaxis should also be instituted for patients with prolonged neutropenia who are at high risk for invasive aspergillosis. Prophylactic regimens with posaconazole, voriconazole, and/or micafungin are considered to be most effective.

The guidelines do not recommend routine antifungal susceptibility testing. Instead, it should be reserved for cases in which infection with an azole-resistant isolate is suspected, or in which a patient is unresponsive to antifungal agents.

The guideline, which has also been endorsed by the Pediatric Infectious Diseases Society, recommends using the same antifungal agents for treatment of aspergillosis in children as are used in adults. However, dosing of many of these agents may be different for children. The authors also note that although voriconazole is only approved by the US Food and Drug Administration for children aged 12 years and older, it is the cornerstone of aspergillosis treatment in children of all ages.

Yet, despite recent advances, the authors acknowledge that development of new antifungal agents is still needed for effective management of aspergillosis, because “even with optimal antifungal therapy the mortality rate remains high.”

They conclude, “Critical gaps in knowledge remain regarding management of these infections including the optimal utility of combination therapy, tools for early detection of these infections, evaluation of response, therapy for patients with breakthrough or refractory infection, and the population of patients for whom prophylaxis would be most beneficial.”

Opting for CPR But Not Intubation May Not Be Wise


If you have an advance directive that cherry-picks the interventions you want to receive if your heart suddenly stops, you might want to rethink your choices, according to physicians writing in JAMA Internal Medicine.

As patients and families increasingly recognize the value of specifying their wishes regarding medical treatment in case they become unable to communicate, they need to better understand the implications of their decisions, the doctors say.

People who prepare for the possibility of cardiopulmonary resuscitation (CPR) by specifying selected options – “everything but intubation” or “everything but defibrillation” – don’t realize what that can mean, they warn.

Dr. Paul Rousseau of the Wake Forest School of Medicine in Winston-Salem, North Carolina describes a 77-year-old man with advanced cancer whose code status called for a “partial” code, with “no intubation.”

So while doctors were able to restart his heart, they couldn’t place a breathing tube in his lungs per his written wish. Without the breathing tube, he didn’t get enough oxygen, and as a result, he suffered severe brain damage. He remained comatose in the intensive care unit for another two weeks before he died.

Delivery of selected options during CPR attempts is a troublesome and increasingly frequent preference that often stems from good intentions among families balancing desires to save a life and limit suffering, Rousseau wrote in his paper.

Many staff, Rousseau recounts, felt that despite honoring this patient’s advance directive, they had actually harmed him. Others worried that the patient had not understood the likely outcomes.

“You do everything you can to return functioning, or you don’t,” Rousseau told Reuters Health. “If you are a baker and not using the main ingredient, the food will not come out okay.”

Rousseau would like to see partial codes banned. “When patients survive, it can often portend messy and emotional futures for families as well as physicians, not to mention financial repercussions for hospitals,” he said.

In a linked commentary, Dr. Josue Zapata and Dr. Eric Widera, both from the University of California, San Francisco, say “partial codes” are symptomatic of communication failures.

“A partial code likely represents a partial understanding by a patient or a partial assessment of their priorities by a provider,” they write.

Zapata and Widera advise doctors to ask patients what they hope their treatments will achieve.

“Providing a list of choices may in itself be misleading in that a patient may falsely believe that if a given intervention is offered as an option by a presumably expert and well-intentioned physician, there must be at least some sort of benefit,” they say.

Outcomes after partial codes in hospitals are hard to study; scant research exists. Large-scale studies show that after a full-out resuscitation effort, including intubation, 17 percent of patients live long enough to be discharged from the hospital, according to Zapata and Widera. For patients with advanced cancer, that rate is probably no higher than 5 percent.

Bioethicist Craig Klugman from DePaul University in Chicago agrees that partial codes should not be offered.

“There are many times in medicine when one thing requires a second thing, and to separate them undermines the chance of benefit,” Klugman told Reuters Health. “To offer a ‘choose your own adventure’ procedure violates the oath to do no harm.”

But Dr. Patrick Cullinan, former medical director of an intensive care unit in San Antonio, Texas, disagrees.

Cullinan told Reuters Health that when patients request a partial code without intubation, he often uses either bag masks or BiPAP (bilevel positive airway pressure), which are noninvasive breathing therapies, instead of intubation.

“Partial DNRs (Do Not Resuscitate orders) are helpful in allowing families to feel empowered and have some input,” Cullinan said. “Those staunchly ‘all’ or ‘nothing’ don’t understand subtleties in providing the most compassionate and appropriate care. By placing an unwanted tube, you steal their last opportunity to talk to their family, to tell them ‘I love you.'”

Dr. Melissa Bregger, a chief internal medicine resident at Northwestern University’s Feinberg School of Medicine in Chicago who has extensively studied CPR and advanced life support, says that while little data exists, emerging research showing improved outcomes using bag masks instead of intubation is “somewhat promising.” Among critically ill patients, however, not much evidence supports noninvasive measures.

“It depends on what caused the code, and that’s one of the hardest things to figure out during a code,” Bregger told Reuters Health. If patients code due to dangerous heart rhythms, partial codes may prove as effective as full efforts. However, such patients would be unlikely to have participated in planning discussions to request limited measures.

“It’s a really hard question,” she said.

Aspirin As Secondary Prevention in Patients With Colorectal Cancer: An Unselected Population-Based Study


Abstract

Purpose Regular use of aspirin (acetylsalicylic acid) is associated with reduced incidence and mortality of colorectal cancer (CRC). However, aspirin as primary prevention is debated because of the risk of hemorrhagic adverse effects. Aspirin as secondary prevention may be more justified from a risk-benefit perspective. We have examined the association between aspirin use after the diagnosis of CRC and CRC-specific survival (CSS) and overall survival (OS).

Materials and Methods An observational, population-based, retrospective cohort study was conducted by linking patients diagnosed with CRC from 2004 through 2011 (Cancer Registry of Norway) with data on their aspirin use (the Norwegian Prescription Database). These registries cover more than 99% of the Norwegian population and include all patients in an unselected and consecutive manner. Exposure to aspirin was defined as receipt of aspirin prescriptions for more than 6 months after the diagnosis of CRC. Multivariable Cox proportional hazards analyses were used to model survival. The main outcome measures of the study were CSS and OS.

Results A total of 23,162 patients diagnosed with CRC were included, 6,102 of whom were exposed to aspirin after the diagnosis of CRC (26.3%). The median follow-up time was 3.0 years. A total of 2,071 deaths (32.9%, all causes) occurred among aspirin-exposed patients, of which 1,158 (19.0%) were CRC specific. Among unexposed patients (n = 17,060), there were 7,218 deaths (42.3%), of which 5,375 (31.5%) were CRC specific. In multivariable analysis, aspirin exposure after the diagnosis of CRC was independently associated with improved CSS (hazard ratio [HR], 0.85; 95% CI, 0.79 to 0.92) and OS (HR, 0.95; 95% CI, 0.90 to 1.01).

Conclusion Aspirin use after the diagnosis of CRC is independently associated with improved CSS and OS.
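For readers unfamiliar with the method named in the abstract, here is a minimal sketch of a multivariable Cox proportional hazards analysis using the Python lifelines library. The data frame below is an invented stand-in for illustration only; it does not reproduce the registry data, and the column names are hypothetical.

```python
# Minimal sketch of a multivariable Cox proportional hazards model,
# the method described in the abstract, using the lifelines library.
# The toy data frame is invented for illustration; real analyses
# would use the (non-public) registry data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followup": [1.2, 3.0, 2.5, 4.1, 0.8, 3.6, 2.0, 5.0],
    "died":           [1,   0,   1,   1,   1,   0,   0,   0],   # event indicator
    "aspirin":        [0,   1,   0,   1,   0,   1,   0,   1],   # exposure of interest
    "age":            [72,  65,  80,  58,  77,  69,  74,  61],  # example covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followup", event_col="died")
cph.print_summary()  # prints hazard ratios (exp(coef)) with 95% CIs,
                     # analogous to the HR 0.85 (0.79 to 0.92) reported above
```

A hazard ratio below 1 for the exposure column, as in the study’s HR of 0.85 for CSS, indicates a lower instantaneous risk of death among aspirin-exposed patients after adjusting for the other covariates.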

Where We Stand with a Cancer Drug for Parkinson’s


A promising therapy that may slow or stop Parkinson’s progression is moving forward. Today The Michael J. Fox Foundation (MJFF), the Van Andel Research Institute (VARI) in Michigan and the Cure Parkinson’s Trust (CPT) in the United Kingdom announced plans to collaborate to assess the clinical use and development of cancer drug nilotinib. Among the partners’ goals: planning a double-blind, placebo-controlled clinical trial of nilotinib, which MJFF hopes can begin in 2017.

The announcement came in conjunction with today’s publication, in the Journal of Parkinson’s Disease, of a paper on the first trial of nilotinib in people with Parkinson’s disease from a team at Georgetown University. An accompanying editorial, “Nilotinib — Differentiating the Hope from the Hype,” presents the research on nilotinib and its target, c-Abl (read more on this emerging protein of interest below), as intriguing, but warns against patient use until we know more about the drug’s safety and efficacy for people with Parkinson’s disease. (The editorial was authored by MJFF CEO Todd Sherer, PhD; Richard Wyse, MD, director of research and development at Cure Parkinson’s Trust; and Patrik Brundin, MD, PhD, director of the Van Andel Research Institute Center for Neurodegenerative Science and editor of the journal.)

“It is impossible to extract definitive safety and valid efficacy signals from a small open-label unblinded study (lacking a placebo control) in PD and dementia with Lewy bodies,” the authors write. “A major concerted effort is needed to determine whether there is still hope that can match the hype for nilotinib in alpha-synucleinopathies.”

Here we cover some frequently asked questions about this drug and area of research. Learn more in a special MJFF webinar on Tuesday, August 2 at 12 p.m. ET.

What is nilotinib?
Nilotinib is a drug approved for chronic myelogenous leukemia, a cancer of the white blood cells, under the brand name Tasigna. The medication inhibits a certain class of proteins, including one called c-Abl, which is an emerging target for Parkinson’s research.

What is the connection between c-Abl protein and Parkinson’s disease?
Higher levels of c-Abl are associated with Parkinson’s disease. This means trouble in a few different ways:

  • An MJFF-funded project showed that heightened c-Abl activity inhibits the parkin protein. Parkin, when acting normally, goes around the cell and tags unnecessary or dysfunctional proteins and mitochondria for degradation. When parkin is not working correctly (perhaps because of high c-Abl levels), bad proteins — such as the key Parkinson’s player alpha-synuclein — and damaged mitochondria can build up into toxic clumps and harm the cell. Other cellular players that work with parkin (called substrates) also can become toxic to the cell if parkin is not functioning correctly.
  • Also, last month another paper reported that deleting c-Abl from pre-clinical models reduced alpha-synuclein aggregation, while over-expressing c-Abl led to the protein clumps. The research team on that paper showed a direct link between c-Abl and alpha-synuclein, further supporting the role of c-Abl in Parkinson’s disease.
  • In addition to the role of c-Abl in regulating parkin and/or alpha-synuclein, some researchers have demonstrated its involvement in dopamine-signaling pathways.

To recap: scientists believe that too much c-Abl hurts cells by messing with parkin function, encouraging alpha-synuclein aggregation directly and/or impacting dopamine signaling. With the evidence mounting, c-Abl is gaining attention as a Parkinson’s drug target.

What has the research told us about nilotinib?
Two studies in pre-clinical PD models from 2013 and 2014 showed protective effects of nilotinib. And several other studies in pre-clinical PD models have shown protective effects of inhibiting c-Abl. This provided impetus for testing nilotinib in patients.

The trial results published today — from a small, open-label (all knew they were getting the drug) trial of nilotinib in people with advanced Parkinson’s — included impact on spinal fluid measures of alpha-synuclein and imaging scans of dopamine function.

The drug was well tolerated, and participants reported improvements in motor skills and cognitive function. These are encouraging results; unfortunately, researchers know that the likelihood of placebo effect is high in any open-label Parkinson’s clinical study. Nonetheless, MJFF deems these findings supportive of continued, rigorous research in this area.

Should patients start taking nilotinib?
In short, no. We just don’t know enough yet. Patients and clinicians are urged to wait for further safety data before considering adding the drug to their treatment regimens at this time. Much work remains to be done to validate the drug in a clinical setting, and there is not yet enough information to assert with certainty that it works in Parkinson’s and, critically, that it is safe to take over the course of a lifetime.

Cancer treatments are notoriously hard on the body. While people with Parkinson’s might take a significantly lower dose, we need to know the long-term effects.

Why the lower dose for PD? In cancer treatment, you try to get that tagging and degradation system working in overdrive to eat up everything in the cancer cells. In Parkinson’s, we just want the tagging system to work normally, so we might not need as much drug. But we don’t yet know whether a lower dose of nilotinib will actually inhibit c-Abl in the brain.

Which leads to another question: How much drug gets the system working enough to protect the cell but not so much as to harm it? We need to make sure the drug is truly treating the Parkinson’s process.

“Nilotinib does not get into brain that well, so one of the questions that I have is: At the dose touted to be effective in humans, is c-Abl in brain cells being inhibited?” says Ted Dawson, MD, PhD, of Johns Hopkins University and author of the recent paper connecting c-Abl to alpha-synuclein. “And it has toxicities, so if a patient is contemplating taking nilotinib, it should really be done in the setting of a controlled clinical trial where you’re appropriately monitored.”

Are there other drugs similar to nilotinib?
There are other c-Abl inhibitors for cancer, but they either also come with harsh side effects or don’t pass the blood-brain barrier (a requirement to stop the Parkinson’s process).

So what are the next steps?
MJFF, VARI and CPT are collaborating on a therapeutic development program to assess the safety and efficacy of nilotinib in people with PD. The program includes the goal of planning a double-blind, placebo-controlled (neither researchers nor participants know who has gotten the drug or placebo) clinical trial of nilotinib, which MJFF hopes can begin in 2017.

The partners plan to expand on early safety findings to better understand the implications of long-term use of nilotinib and to rigorously vet early-stage pre-clinical and clinical findings such as around drug penetration into the brain and the relationship between nilotinib dosing and c-Abl activity.

The sponsors of the first clinical trial also are planning a follow-up. These parallel studies will help gather more data and provide independent findings for comparison.

And the field is working on new c-Abl inhibitors that get into the brain better, with fewer risks and side effects.

Sunscreens: The Ugly Truth


For decades, doctors and the media have recommended you apply sunscreen before going outside.

According to the American Academy of Dermatology (AAD), everyone should use sunscreen for protection from the sun’s ultraviolet rays, believed to be the trigger for skin cancer and the precursor to wrinkles and premature aging.1


Story at-a-glance

  • Rates for melanoma skin cancers began to climb in the 1970s, rising 200 percent between 1975 and 2013
  • Although sunscreen is recommended to reduce skin aging and your risk of skin cancer, many products have just the opposite effect as they filter only UVB and not the more dangerous UVA
  • Some sunscreens use chemicals that may increase your risk of skin cancer and may contain hormone disrupters. Your best sun protection comes from hats, sunglasses, clothing, zinc oxide and astaxanthin

However, the recommendations don’t include the kind of sunscreen that is effective, nor do the recommendations advise you how to use the sun effectively to protect yourself from skin cancer and improve your vitamin D level, which has significant health benefits, including a lowered risk of melanoma.

To date, the U.S. Food and Drug Administration (FDA) does not have comprehensive regulations governing advertising and claims for sunscreen.2 In 2011, the FDA did ban inflated claims on sunscreen labels, such as “all day protection” and “sweat-” or “waterproof.”

The Environmental Working Group (EWG) recently released their 2016 list of best and worst sunscreens3 based on criteria such as level of protection and safety of the product, to guide your use of sunscreens this season.

Just remember, companies can change their ingredients, so always read the labels of the products you purchase.

Are Sunscreens the Right Way to Prevent Sunburn and Skin Cancer?

Despite the availability of sunscreen products and media coverage about using sun protection, the number of people suffering from malignant melanoma of the skin continues to rise each year. The number of new melanoma cases per 100,000 people has risen from 7.9 in 1975 to 24 in 2013.4

This represents an average rise of roughly 3 percent each year, and a 200 percent rise overall from 1975 to 2013.
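These two growth figures are mutually consistent, as a quick back-of-the-envelope check shows; the short sketch below simply rederives the article’s own numbers:

```python
# Consistency check on the quoted incidence figures:
# 7.9 new cases per 100,000 in 1975 vs. 24 per 100,000 in 2013.

start_rate, end_rate = 7.9, 24.0  # cases per 100,000 (from the article)
years = 2013 - 1975               # 38 years

total_rise = (end_rate - start_rate) / start_rate
annual_rise = (end_rate / start_rate) ** (1 / years) - 1

print(f"total rise: {total_rise:.0%}")            # ~204%, the quoted ~200%
print(f"average annual rise: {annual_rise:.1%}")  # ~3.0% per year
```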

Ultraviolet radiation reaches the earth as UVA and UVB light, and has been classified as a human carcinogen by the National Toxicology Program (NTP).5 UVA is generally considered to be less carcinogenic than UVB.

Because it was believed UVB light was more dangerous, sunscreen products were first developed to filter UVB and not UVA. However, recent research has demonstrated UVA radiation actually plays an important role in the development of malignant melanoma, the most aggressive form of skin cancer.

According to estimates, more than 144,000 Americans will be diagnosed with melanoma in 2016, with five-year survival rates starting at 98 percent if the cancer has not reached the lymph nodes, 63 percent for regional cancer and dropping to 17 percent for distant-stage melanoma.6

A number of studies demonstrate sunscreen reduces the number of new squamous cell skin cancers, but has no effect on basal cell and may actually contribute to the development of the more aggressive malignant melanoma.7

There is some evidence that non-melanoma and easily treated skin cancers are related to cumulative exposure to the sun. However, that is not the case with malignant melanoma, linked with significant sunburns.8

The American Cancer Society recommends that sunscreen be used as a filter, not as a reason to stay longer in the sun. For extended outings, they recommend other methods of sun protection in addition to properly used sunscreen, such as hats, sunglasses, clothing and shade.9

The Good, the Bad and the Ugly

Surveys from the AAD have demonstrated that many people are not aware of how to use sunscreen effectively.10 However, even when used correctly, not all sunscreen products contain what’s advertised on the bottle.

In one test, researchers evaluated the SPF value of 65 different products to find 43 percent had less SPF than promised on the label.11

Sunscreen also blocks your body’s ability to manufacture vitamin D, although several studies have demonstrated that most people don’t apply enough sunscreen to negatively affect their vitamin D levels.12,13,14,15 Still, this certainly is a concern, especially if you wear sunscreen all the time.

In such a case, you may want to consider getting your vitamin D level tested, and if below the clinically relevant level of 40 nanograms per milliliter, you’d be wise to consider a vitamin D supplement. Still, supplements cannot provide the identical benefits of sensible sun exposure.

The amount of sunscreen needed to protect your skin from burning also increases the amount of toxic chemicals you use.

Even studies from the Centers for Disease Control and Prevention (CDC) demonstrate that 97 percent of people living in the U.S. are contaminated with a toxic ingredient widely used in sunscreens, called oxybenzone.16

Oxybenzone is commonly found in sunscreens and other personal care products. EWG identified nearly 600 different sunscreen products containing oxybenzone.

Mothers with high levels of the chemical have a higher risk of giving birth to low birthweight babies, a critical risk factor linked to cardiovascular disease, diabetes, hypertension and other diseases.17

What Do the Numbers Really Mean?

Sunscreens may also give you a false sense of security. Many consumers believe the higher the SPF number, the greater the protection against UV radiation. However, as mentioned earlier, most sunscreens protect against UVB but don’t have adequate protection against UVA radiation.

Both UVA and UVB can cause tanning and burning, although UVB does so far more rapidly. UVA, however, penetrates the skin more deeply than UVB, and may be a much more important factor in photoaging, wrinkles and skin cancers.

An SPF of 30 will theoretically filter 97 percent of the UVB rays for two hours.18 Theoretically, a higher SPF will block more of the sun’s UVB rays, but no sunscreen will block 100 percent.
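The relationship behind these percentages is simple arithmetic: a sunscreen with SPF N transmits roughly 1/N of the burning (UVB) radiation, so the fraction blocked is 1 minus 1/N. A minimal sketch:

```python
# Standard SPF arithmetic: SPF N transmits about 1/N of the UVB,
# so the blocked fraction is 1 - 1/N.

def fraction_blocked(spf: float) -> float:
    return 1 - 1 / spf

for spf in (15, 30, 50, 100):
    print(f"SPF {spf:>3}: blocks {fraction_blocked(spf):.1%} of UVB")
# SPF  15: blocks 93.3%
# SPF  30: blocks 96.7%  (the ~97% quoted above)
# SPF  50: blocks 98.0%
# SPF 100: blocks 99.0%
```

The diminishing returns are plain: going from SPF 30 to SPF 100 adds barely two percentage points of protection, which is part of why higher numbers buy little beyond a false sense of security.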

The problem is, if you’re not experiencing skin reddening, you may be tempted to prolong the time you stay in the sun. This raises your risk of overexposure, which is the real danger with sun exposure.

Sunscreens with a higher SPF also require more chemicals to achieve the intended result. Many pose a health risk when they are absorbed through the skin, potentially causing tissue damage and disrupting your hormonal balance.

Because you don’t experience better protection with higher SPF numbers, it’s usually best to stick with SPF 30 if you choose to use sunscreen.

How They Work

In order for sunscreens to be effective, you must apply large amounts over all exposed areas of your skin. This means the product should not trigger skin allergies and must provide good protection against UV radiation. It also should NOT be absorbed into your skin, as the most effective sunscreen acts as a topical barrier.

Sunscreens work based on one of two mechanisms. Older products sat on the top of your skin, causing UV rays to bounce off. Most contained zinc oxide or titanium dioxide.

The second type uses chemical filters to block UV radiation. Many of those include octisalate, oxybenzone, avobenzone, homosalate, octinoxate and octocrylene.19

Several of these chemicals are hormone disruptors that have been shown to alter reproductive ability, delay puberty, alter estrous cycles in mice, reduce sperm counts in animal studies, and alter thyroid function.

Other chemicals, such as retinyl palmitate, may actually increase your risk of developing skin cancer. This product is a form of vitamin A that may speed the development of tumors and lesions when exposed to sunlight.

Manufacturers sometimes add it to products to slow skin aging.20 However, that only holds true in the absence of sun exposure.

Mechanical sunscreens, including zinc oxide, have proven over years of use to be a safe and effective means of blocking both UVA and UVB light.21

In light of recent media coverage, some companies are using zinc oxide to block UV radiation, while attempting to meet the desire of their consumers for products that don’t leave a thick film on the skin.

Nanotechnology and What It Does

To reduce the thick film, manufacturers are reducing the size of the molecules. This nanotechnology has several different effects. The particles are so small they may be absorbed into your skin. Some studies have found significant negative health effects from the absorption of nanoparticles.22 While excellent as a drug delivery system, it is questionable for use in sunscreen.23

Reducing the size of the zinc oxide particles improves the UVB protection but reduces the UVA protection, one of the important benefits of using zinc oxide as a sunscreen.24 Zinc oxide is beneficial because it remains stable in heat, but as a nanoparticle, the problems with toxicity probably outweigh the benefits to sun protection.

Toxicity of zinc oxide nanoparticles, after systemic distribution, may affect your lungs, liver, kidneys, stomach, pancreas, spleen, heart and brain.25 Findings have also demonstrated that aging has a synergistic effect with zinc oxide nanoparticles on systemic inflammation and neurotoxicity, affecting your brain and neurological system. In other words, the older you are, the higher your risk of neurotoxicity from zinc oxide nanoparticle absorption.

Is Sunscreen a Scam?

Until around 1950, melanoma was rarely diagnosed. The numbers didn’t rise until the late 1960s, just after “tanning lotion” was introduced on the market. The idea behind the lotion was the longer you could stay in the sun without burning, the more likely you would tan.

The standard explanation for the rare diagnosis of melanoma prior to the 1970s was that Americans started sunbathing in earnest in the 1950s. However, any image of the beaches from the 1930s and earlier would demonstrate that people enjoyed the sun and ocean long before the 1950s. The higher the rates of melanoma diagnosed per year, the greater the call to use sunscreen.

Interestingly, the prognosis or outcome of a diagnosis of melanoma may be linked to your levels of vitamin D. In a ground-breaking study, researchers demonstrated a link between levels of vitamin D and outcomes in individuals diagnosed with melanoma, after adjusting for C-reactive protein levels.26

Prior studies demonstrated a link between C-reactive proteins and poor outcomes after diagnosis with melanoma. This study looked at the association between vitamin D, an inflammatory response, and C-reactive proteins in a sample of over 1,000 patients. An investigation of several biomarkers suggested increasing vitamin D may improve five-year survival rates.

From the Inside Out

You can boost your internal ability to offset UVA and UVB radiation through the nutrients you eat each day. Antioxidants found in colorful fruits and veggies have been shown to have protective effects, but the real “superstar” is the fat-soluble carotenoid astaxanthin, which is what gives krill, salmon, and flamingos their pink color.27

Astaxanthin is produced by the microalgae Haematococcus pluvialis when its water supply dries up, forcing it to protect itself from ultraviolet radiation. It is this “radiation shield” mechanism that helps explain how astaxanthin can help protect you from similar radiation.

When you consume this pigment, you are essentially creating your own “internal sunscreen.” Research has confirmed it’s a potent UVB absorber that helps reduce DNA damage. It’s actually one of the most potent antioxidants known, acting against inflammation, oxidative stress and free radical damage throughout your body.

Each of these functions improves the ability of your skin to handle sun without burning, while giving your body the best advantage to manufacture vitamin D. This is not a free pass to spend all day in the sun without physical protection, such as a hat and long-sleeved clothing, but it does give you a healthier option than using chemicals to filter UV radiation.

Your Best and Worst Sunscreen Choices

Your safest and best choice for sunscreen protection is zinc oxide. Avoid nano versions, however, to circumvent potential toxicity. Unfortunately, it can be challenging to find a product without other chemically based sunscreen filters. To help you choose the product best for your family, EWG performs an annual sunscreen evaluation based on effectiveness and safety.

Sixty brands received the EWG’s low-hazard ingredient list ranking this year. Their report published the best and worst choices for children, but only the best choices for adults.28,29,30 Here’s a sampling of the best and worst:

Best for Adults and Children

Adults:

  • All Good Sport Sunscreen, SPF 33
  • All Terrain TerraSport Sunscreen Lotion, SPF 30
  • Babo Botanicals Clear Zinc Sunscreen Lotion, Fragrance Free, SPF 30
  • Badger Sunscreen Cream and Lotion, SPF 25, 30, and 35
  • Bare Belly Organics Face Stick Sunscreen, SPF 34
  • Burt’s Bees Baby Bee Sunscreen Stick, SPF 30
  • Goddess Garden Facial Natural Sunscreen, SPF 30
  • Kabana Organic Skincare Green Screen D Sunscreen, Original, SPF 35
  • Nature’s Gate Sport Vegan Sunscreen, SPF 50
  • The Honest Company Sunscreen Stick, SPF 30
  • Tropical Sands Sunscreen, SPF 15, 30, and 50

Children:

  • Adorable Baby Sunscreen Lotion, SPF 30+
  • All Good Kid’s Sunscreen, SPF 33
  • All Terrain KidSport Sunscreen Lotion, SPF 30
  • ATTITUDE Little Ones 100% Mineral Sunscreen, Fragrance Free, SPF 30
  • Badger Kids Sunscreen Cream, SPF 30
  • BabyHampton beach*bum sunscreen, SPF 30
  • Bare Belly Organics Baby Sunscreen, SPF 30
  • Belly Buttons & Babies Sunscreen Lotion, SPF 30
  • Blue Lizard Australian Sunscreen, Baby, SPF 30+
  • BurnOut Kids Physical Sunscreen, SPF 35
  • California Baby Super Sensitive Sunscreen, SPF 30+

Worst for Children

  • Banana Boat Kids Max Protect & Play Sunscreen Lotion, SPF 100
  • Coppertone Water Babies Sunscreen Stick, SPF 55
  • Coppertone Sunscreen Continuous Spray, Kids, SPF 70
  • Coppertone Sunscreen Lotion Kids, SPF 70+
  • Coppertone Foaming Lotion Sunscreen Kids Wacky Foam, SPF 70+
  • Coppertone Water Babies Sunscreen Lotion, SPF 70+
  • CVS Baby Sunstick Sunscreen, SPF 55
  • CVS Kids Wet & Dry Sunscreen Spray, SPF 70+
  • Equate Kids Sunscreen Stick, SPF 55
  • Hampton Sun Continuous Mist Sunscreen For Kids, SPF 70
  • Neutrogena Wet Skin Kids Sunscreen Spray, SPF 70+
  • Neutrogena Wet Skin Kids Sunscreen Stick, SPF 70+
  • Up & Up Kids Sunscreen Stick, SPF 55

The Physicist Who Might Have Discovered a New Building Block of Matter


Suchitra Sebastian is building a new lab to study exotic quantum behavior at Cambridge University.

This Hyperloop Lawsuit Is Insane


 

THE LIKELIHOOD THAT you will someday zoom across the country at supersonic speeds through a tube just got a lot smaller.

The leaders of Hyperloop One, the leading effort to take the transportation system from Elon Musk-powered fantasy to reality, have embraced a Silicon Valley cliche: They’re suing each other. And the details involve a suspiciously overpaid fiancée, an attempted coup, and a noose.

Co-founder and CTO Brogan BamBrogan has resigned and filed a lawsuit accusing the company and his co-founder Shervin Pishevar of breach of fiduciary duty, violating labor laws, wrongful termination, breach of contract, defamation, infliction of emotional distress, and assault. It’s a serious blow that could scare away crucial investors and make the company’s goal—revolutionizing transportation—even harder.

Hyperloop, while theoretical, is no sci-fi pipe dream. The engineering is fundamentally sound. “The question is, can it compete from a capital standpoint and an operating standpoint and a safety standpoint,” said David Clarke, director of the Center for Transportation Research at the University of Tennessee, back in May.

In other words, can Hyperloop win customers away from existing transportation methods? Doing that requires things like raising the billions of dollars that transportation infrastructure demands, addressing safety concerns, and bringing in paying customers at competitive ticket prices. Try doing all that in the midst of a corporate civil war.

And this particular civil war is going to be messy. BamBrogan and colleagues Knut Sauer, David Pendergast, and William Mulholland say the company leaders “established an autocratic governance culture rife with nepotism, and wasted the company’s precious cash.” In the lawsuit, which names Shervin Pishevar, Afshin Pishevar, board member Joseph Lonsdale, and CEO Rob Lloyd as defendants, they allege that Shervin Pishevar paid his fiancée $40,000 a month for public relations work and hired his brother Afshin as the company’s general counsel. He allegedly told senior engineers to stop work to give office tours for various guests—including a nightclub doorman—and manipulated stock options to take advantage of employees.

According to the lawsuit, after the plaintiffs and seven other employees complained about the “misuse of company resources and corporate waste” in a letter, Afshin Pishevar left a hangman’s noose on BamBrogan’s desk. The filing includes a security camera image of a man, apparently Pishevar, holding rope and walking through the office.

Later that day, the suit alleges, Hyperloop One fired Pendergast (in front of his wife and children), demoted Sauer (who then resigned), and demanded BamBrogan take a leave of absence.

BamBrogan, who designed rocket engines and space capsule heat shields at SpaceX before helping found the company, resigned “under the threat of physical violence and demotion.” Mulholland resigned as well.


Hyperloop One has returned fire. In a statement, Orin Snyder of Gibson Dunn, the company’s lawyer, called the lawsuit “unfortunate and delusional,” said the plaintiffs “tried to stage a coup and failed,” and promised “a swift and potent legal response.” (Returning that return fire, the plaintiff’s attorney, Justin Berger, called that statement “long on rhetoric and short on facts.”)

But never mind the lawyer-said/lawyer-said. Whatever the outcome of the suit, it’s bad news for the Hyperloop—and not just because the company lost BamBrogan, a convincing salesman and talented engineer.

“This looks like a company that is in trouble,” says Martin Kenney, who edited the 2000 book Understanding Silicon Valley: The Anatomy of an Entrepreneurial Region. A bitter fight between co-founders is not in itself an omen of failure: Apple, Facebook, and Microsoft all saw massive success after early leaders moved on. Tesla Motors only took off after Elon Musk wrested control away from founder Martin Eberhard.

But this is a particularly weird case, with its accusations of overpaid paramours and threats of violence. “You can probably guess that there are pretty serious problems,” Kenney says. That’s especially problematic because building a profitable Hyperloop will demand a massive upfront investment: CEO Lloyd has offered a likely optimistic estimate of $10 million per mile of two-way track.

Hyperloop One raised $80 million in Series B funding in May. But now, investors don’t just have to consider the difficulty of making money off a privately funded, unproven way to fling people around the country. They have to worry about whether the company will spend their money wisely.

Google’s Project Fi Is One Step Closer to Unifying the World’s Wireless Networks


GOOGLE IS A few steps closer to unifying the world’s wireless networks—and, in the process, providing your smartphone with a faster, more reliable, and less expensive signal.

Today, Google announced a deal with Three, one of the largest cellular carriers in Europe, that will allow Americans to use its experimental Project Fi wireless service when traveling in an additional 15 countries, bringing the total number of foreign countries where the service is available to more than 135. And at the same time, the company is removing the speed cap that previously limited the service overseas.


Unveiled last year on Google’s flagship Nexus phones, Project Fi not only offers a way of making calls over Wi-Fi networks inside homes, offices, and local coffee shops. As you leave Wi-Fi coverage, it can seamlessly and automatically move those calls onto a cellular network. Plus—and perhaps more importantly—it can move phones between disparate cellular networks, depending on which offers the best signal. And it does all this for a small, flat fee.

Initially, the service allowed phones to jump between Sprint and T-Mobile. Then Google added US Cellular. And now, through its deal with Three, the Internet giant has extended the service’s reach even farther. “We can now reach about 97 percent of markets where Americans travel abroad,” says John Maletis, Project Fi’s head of operations. And, he adds, the service is significantly faster in these markets.

Project Fi already provided service in more than 120 countries worldwide via relationships T-Mobile had already established with foreign networks. But in an effort to keep its costs down, Google throttled all overseas traffic to a relatively slow 256 kilobits per second. Now, the company says it’s lifting this throttle to provide 10 to 20 times faster network speeds for those traveling abroad.

Yes, other wireless services have long provided ways of roaming on other networks, but this often comes at a steep price—a price controlled by a single gatekeeper. Project Fi costs the same no matter what network you’re on and no matter where you are: Google charges a standard fee of $10 for every one gigabyte downloaded. That’s why it was throttling overseas traffic. Maletis says Google is intent on running Project Fi as a sustainable business, but the company now believes it can do so while offering faster speeds.
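To make the flat-rate model concrete, here is a minimal sketch of how a Project Fi bill scales with data use. The $10-per-gigabyte rate comes from the article; the $20 monthly base fee for talk and text reflects Google’s published launch pricing and is an assumption here, as is the omission of Fi’s credits for unused data.

```python
# Sketch of Project Fi's flat-rate billing as described above: the
# same $10/GB applies on any partner network, at home or abroad.
# BASE_FEE is assumed from Google's launch pricing, not this article;
# unused-data credits are omitted for simplicity.

BASE_FEE = 20.00  # monthly talk + text (assumed)
PER_GB = 10.00    # flat data rate quoted in the article

def monthly_bill(gb_used: float) -> float:
    return BASE_FEE + PER_GB * gb_used

for gb in (0.5, 2.0, 3.5):
    print(f"{gb} GB used -> ${monthly_bill(gb):.2f}")
# 0.5 GB -> $25.00, 2.0 GB -> $40.00, 3.5 GB -> $55.00
```

The point of the model is that the marginal price of a gigabyte never changes with location, which is exactly why the old overseas throttle was Google’s only lever for containing roaming costs.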

Room to Roam

A faster, more expansive Project Fi is good news not only for anyone using this groundbreaking service, but for, well, anyone. The model proposed by Project Fi is how wireless should work. Your phone will connect to the network with the best signal, not whatever signal a lone carrier happens to offer at a given location. Your phone roams based on what’s best for you, not your carrier’s bottom line. It’s an idea that’s long overdue, and it’s a sign of a larger shift across the world of mobile phones.

On its iPads, Apple now lets you test various wireless services before settling on one. Down the road, the company will surely do the same on iPhones. And Microsoft appears to be moving in a similar direction. Most phones still come tied to a single carrier, but as Project Fi and these others show, it doesn’t have to be that way. The technology has arrived to give everyone options.

Not everyone has the option of using Project Fi. Google classifies the project as an experiment, saying it does not intend to become a large-scale wireless carrier. But in presenting a better alternative, Project Fi is meant to push other providers in the same direction. It’s similar to Google’s approach with its Android mobile operating system and its landline Internet service, Google Fiber. In all cases, the aim is to show the world that a better way forward is possible, raising expectations for everyone. That said, Google now appears serious about turning Google Fiber into a true Internet service provider, as it expands to major cities across the US. We can’t help but wonder whether it will expand Project Fi in similar ways.

At the moment, Project Fi is only available to Nexus buyers based in the US. And Maletis says the company has no plays to expand the service to other phones. But it’s telling that Google is building Project Fi as a viable business—and not just throwing money at the problem.