Do Cellphones Cause Cancer?


The question of whether cell phones can cause cancer has become a popular one with the dramatic increase in cell phone use since the 1990s. Scientists’ main concern is that cell phones can increase the risk of brain tumors or other tumors in the head and neck area – and as of now, there doesn’t seem to be a clear answer.

Cell phones give off a form of energy known as radiofrequency (RF) waves. RF waves sit at the low-energy end of the electromagnetic spectrum – as opposed to the higher-energy end where X-rays sit – and are a type of non-ionizing radiation. Unlike ionizing radiation, non-ionizing radiation does not cause cancer by damaging DNA in cells, but there is still a concern that it could cause biological effects that result in some cancers.

However, the only consistently recognizable biological effect of RF energy is heat. The closer the phone is to the head, the greater the expected exposure. If RF radiation is absorbed in large enough amounts by materials containing water, such as food, fluids, and body tissues, the heat produced can lead to burns and tissue damage. Still, it is unclear whether RF waves could result in cancer in some circumstances.


Many factors affect the amount of RF energy a person is exposed to, such as the amount of time spent on the phone, the model of the phone, and whether a hands-free device or speaker is being used. The distance and path to the nearest cell phone tower also play a role: the farther away a person is from the tower, the more energy the phone needs to get a good signal. The same is true in areas where many people are using their phones at once and extra power is required to get a good signal.

RF radiation is so common in the environment that there is no way to completely avoid it. Most phone manufacturers post information about the amount of RF energy absorbed from the phone into the user’s body, called the specific absorption rate (SAR), on their website or in the user manual. Different phones have different SARs, so customers can reduce RF energy exposure by comparing models when shopping for a phone. The maximum SAR allowed in the U.S. is 1.6 watts/kg, but actual SAR values during use may vary based on certain factors.

Studies have been conducted to find a possible link between cell phone use and the development of tumors. They are fairly limited, however, due to low numbers of study participants and the risk of recall bias. Recall bias can occur when individuals who develop brain tumors are more likely than those who do not to recall heavy cell phone use, even when their actual use did not differ. Also, tumors can take decades to develop, and given that cell phones have only been in use for about 20 years, these studies are unable to follow people for very long periods of time. Additionally, cell phone use is constantly changing.

Outside of direct studies on cell phone use, brain cancer incidence and death rates have changed little in the past decade, making it even more difficult to pinpoint if cell phone use plays a role in tumor development.

Source: http://www.dana-farber.org

 


Posthumous conception raises ‘host of ethical issues’


The legal and moral propriety of conceiving a child with a dead person’s egg or sperm is among the latest fronts being discussed in bioethics.

In Ireland, legislation is under consideration that would permit reproductive cells from deceased individuals to be used by their spouses or partners to conceive children posthumously, according to media reports. The Irish legislature’s Joint Committee on Health discussed the bill once in January and again in February, a spokesperson for the legislature told Baptist Press. A final bill could be drafted in the coming months and put before parliament for debate.

Health Committee chairman Michael Harty said in a news release, “Assisted Human Reproduction (AHR) is becoming increasingly important in Ireland and measures must be put in place to protect parents, donors, surrogates and crucially, the children born through AHR.”

The posthumous conception legislation, which is part of a broader bill, would require children of the procedure to be carried in the womb of a surviving female partner in the relationship, according to an online commentary by Denver attorney Ellen Trachman, who specializes in reproductive technology law.

Posthumous conception has also been considered by lawmakers and courts in the United States, Canada and Israel.

Southern Baptist bioethicist C. Ben Mitchell said posthumous conception “raises a host of ethical issues.”

“There is no moral duty to use the sperm of a deceased husband or the eggs of a deceased wife,” Mitchell, Graves Professor of Moral Philosophy at Union University, told BP via email. “And intentionally bringing a child into the world with only a single parent raises a host of ethical issues, not to mention a host of psychological, emotional and relational issues for that child.”

Frozen sperm can be used later via artificial insemination or in vitro fertilization (IVF). Frozen eggs can be used to conceive a child through IVF. Following IVF, the resultant embryo must implant in a woman’s womb — either the biological mother or a surrogate.

Sperm and eggs can be either donated prior to death or extracted from a corpse shortly following death, according to the German newspaper Der Spiegel.

In Israel, approximately 5,000 young adults have established “biological wills” stating they want their eggs or sperm frozen and used to conceive offspring if they die before having children, Der Spiegel reported March 28. Some posthumously conceived children have been born in Israel and elsewhere, according to media reports.

Posthumous conception also has emerged in the U.S. and Canada, including the 2016 birth of a New York police detective’s daughter two and a half years following her father’s murder, the Irish Examiner reported. The night the detective was murdered, his wife of three months requested that sperm be extracted from his body and preserved.

U.S. law, Trachman wrote, “lacks any clear uniform rules” regarding posthumous conception “but generally permits post-death reproduction with specific consent in place.”

An additional issue related to posthumous reproduction is what to do with frozen embryos when one or both parents die.

Der Spiegel reported a case in Israel, in which a widower sought, via a surrogate mother, to bring to term embryos he and his wife had frozen. A Harvard Law School blog noted a 2014 Texas case in which a 2-year-old stood to inherit 11 frozen embryos after both of his parents were murdered.

Frozen embryos, Mitchell said, are a separate ethical consideration from posthumous conception.

“If the eggs have already been fertilized, there is a moral duty to bring the embryos to term,” Mitchell said. “We should not generate a new human being only to abandon him or her in a petri dish or nitrogen tank. Embryos belong in uteruses.”

Southern Baptist Convention resolutions repeatedly have affirmed that life begins at conception and that all unborn life must be protected. A 2015 resolution, for example, affirmed “the dignity and sanctity of human life at all stages of development, from conception to natural death.”

Is the MMR Vaccine a Fraud or Does It Just Wear Off Quickly?


Story at-a-glance

  • Ninety-five percent of children entering kindergarten have received two doses of MMR vaccine, as have 92 percent of school children ages 13 to 17 years. In some states, the MMR vaccination rate is near 100 percent
  • Despite achieving a vaccination rate that theoretically should ensure vaccine-acquired herd immunity, outbreaks of mumps keep occurring, primarily among those who have been vaccinated
  • Mumps is making a strong comeback among college students, with hundreds of outbreaks occurring on U.S. campuses over the past two decades
  • Recent research suggests the reemergence of mumps among young adults is due, at least in part, to waning immunity; protection from the vaccine is wearing off quicker than expected
  • According to a still-ongoing lawsuit filed in 2010, Merck is accused of falsifying efficacy testing of its mumps vaccine to hide its poor effectiveness. So, the resurgence of mumps may be the result of using a vaccine that doesn’t offer much in terms of protection

By Dr. Mercola

In 1986, public health officials stated that MMR vaccination rates for kindergarten children were in excess of 95 percent and that one dose of live attenuated measles, mumps and rubella vaccine (MMR) would eliminate the three common childhood diseases in the U.S.1 In 1989, parents were informed that a single dose of MMR vaccine was inadequate for providing lifelong protection against these common childhood diseases and that children would need to get a second dose of MMR.2

Today, 95 percent of children entering kindergarten3 have received two doses of MMR vaccine, as have 92 percent of school children ages 13 to 17 years.4

In some states, the MMR vaccination rate is approaching 100 percent.5 Despite achieving the sought-for MMR vaccination rate for more than three decades, which theoretically should ensure “herd immunity,” outbreaks of both measles and mumps keep occurring — and many of those who get sick are children and adults who have been vaccinated.

Mumps Is Making a Comeback

As recently reported by Science Magazine6 and The New York Times,7 mumps is making a strong comeback among college students, with hundreds of outbreaks occurring on U.S. campuses over the past two decades. Last summer, the Minnesota Department of Health reported its largest mumps outbreak since 2006.8

According to recent research,9 the reason for this appears to be, at least in part, waning vaccine-acquired immunity. In other words, protection from the MMR vaccine is wearing off quicker than expected. Science Magazine writes:

“[Epidemiologist Joseph Lewnard and immunologist Yonatan Grad, both at the Harvard T. H. Chan School of Public Health in Boston] compiled data from six previous studies of the vaccine’s effectiveness carried out in the United States and Europe between 1967 and 2008. (None of the studies is part of a current fraudulent claims lawsuit against U.S. vaccine maker Merck.)

Based on these data, they estimated that immunity to mumps lasts about 16 to 50 years, or about 27 years on average. That means as much as 25 percent of a vaccinated population can lose immunity within eight years, and half can lose it within 19 years … The team then built mathematical models using the same data to assess how declining immunity might affect the susceptibility of the U.S. population.

When they ran the models, their findings lined up with reality. For instance, the model predicted that 10- to 19-year-olds who had received a single dose of the mumps vaccine at 12 months were more susceptible to infection; indeed, outbreaks in those age groups happened in the late 1980s and early 1990s. In 1989, the Centers for Disease Control and Prevention added a second dose of the vaccine at age 4 to 6 years. Outbreaks then shifted to the college age group.”
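
As a rough consistency check on the figures quoted above, here is a short sketch that assumes, purely for illustration, a simple exponential model of waning immunity with a mean duration of 27 years (the study’s own model may differ); under that assumption the quoted percentages hang together:

    # Illustrative only: check that the quoted waning figures are mutually
    # consistent under a simple exponential-decay model of immunity with a
    # mean duration of 27 years (an assumption made for this sketch, not
    # necessarily the model used by Lewnard and Grad).
    import math

    MEAN_DURATION_YEARS = 27.0

    def fraction_lost(years):
        """Fraction of vaccinees whose immunity has waned after `years`."""
        return 1 - math.exp(-years / MEAN_DURATION_YEARS)

    print(f"Lost within  8 years: {fraction_lost(8):.0%}")   # ~26%, cf. "as much as 25 percent"
    print(f"Lost within 19 years: {fraction_lost(19):.0%}")  # ~51%, cf. "half ... within 19 years"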

A Third Booster Shot May Be Added

According to public health officials, the proposed solution to boosting vaccine-acquired mumps immunity in the U.S. population is to add a third booster shot of MMR vaccine at age 18.

Unfortunately, adding a booster for mumps means giving an additional dose of measles and rubella vaccines as well, as the three are only available in the combined MMR vaccine or combined MMR-varicella (MMRV) vaccine. At present, a third MMR shot is routinely recommended during active mumps outbreaks, even though there is no solid proof that this strategy is effective.

Considering two doses of the vaccine are failing to protect young adults from mumps, adding a third dose, plus two additional doses of measles and rubella vaccines, seems like a questionable strategy, especially in light of evidence that the mumps vaccine’s effectiveness may have been exaggerated to begin with.

According to a lawsuit filed eight years ago, the manufacturer of mumps vaccine — which is also the sole provider of MMR vaccine in the U.S. — is accused of going to illegal lengths to hide the vaccine’s ineffectiveness. So, might this resurgence of mumps simply be the result of using a vaccine that doesn’t provide immunity to begin with?10 And, if so, why add more of something that doesn’t work? After all, the MMR vaccine is not without its risks, as you’ll see below.

Still-Pending Lawsuit Alleges MMR Fraud

In 2010, two Merck virologists filed a federal lawsuit against their former employer, alleging the vaccine maker lied about the effectiveness of the mumps portion of its MMR II vaccine.11 The whistleblowers, Stephen Krahling and Joan Wlochowski, claimed they witnessed “firsthand the improper testing and data falsification in which Merck engaged to artificially inflate the vaccine’s efficacy findings.”

According to Krahling and Wlochowski, a number of different fraudulent tactics were used, all with the aim to “report efficacy of 95 percent or higher regardless of the vaccine’s true efficacy.”12 For example, the MMR vaccine’s effectiveness was tested against the virus used in the vaccine rather than the natural, wild mumps virus that you’d actually be exposed to in the real world. Animal antibodies were also said to have been added to the test results to give the appearance of a robust immune response.13

For details on how they allegedly pulled this off, read Suzanne Humphries’ excellent summary,14 which explains in layman’s terms how the tests were manipulated. Merck allegedly falsified the data to hide the fact that the vaccine significantly declined in effectiveness.15 By artificially inflating the efficacy, Merck has been able to maintain its monopoly over the mumps vaccine market.

This was also the main point of contention of a second class action lawsuit, filed by Chatom Primary Care16 in 2012, which charged Merck with violating the False Claims Act. Both of these lawsuits were given the green light to proceed in 2014,17,18 and are still pending.

In 2015, Merck was accused of stonewalling, “refusing to respond to questions about the efficacy of the vaccine,” according to a court filing by Krahling and Wlochowski’s legal team.19 “Merck should not be permitted to raise as one of its principal defenses that its vaccine has a high efficacy … but then refuse to answer what it claims that efficacy actually is,” they said.

There’s No Such Thing as Vaccine-Acquired Herd Immunity

This certainly isn’t the first time vaccine effectiveness has been questioned. While herd immunity is thrown around like gospel, much of the protection vaccines offer has actually been shown to wane rather quickly. The fact is, vaccine-acquired artificial immunity does not work the same way as the naturally-acquired longer-lasting immunity you get after recovering from the disease.

A majority of adults do not get booster shots, so most of the adult population is, in effect, “unvaccinated.” This calls into question the idea that a 95 percent-plus vaccination rate among children achieves vaccine-acquired “herd immunity” in a population. While there is such a thing as naturally acquired herd immunity, vaccine-induced herd immunity is a total misnomer.

Vaccine makers have simply assumed that vaccines would provide the same kind of longer-lasting natural immunity as recovery from viral and bacterial infections, but the science and history of vaccination clearly shows that this is not the case.

Vaccination and exposure to a given disease produce two qualitatively different types of immune responses. To learn more about this, please see my previous interview with Barbara Loe Fisher, cofounder and president of the National Vaccine Information Center (NVIC). As explained by Fisher: 

“Vaccines do not confer the same type of immunity that natural exposure to the disease does … [V]accines only confer temporary protection… In most cases natural exposure to disease would give you a longer-lasting, more robust, qualitatively superior immunity because it gives you both cell mediated immunity and humoral immunity.

Humoral is the antibody production. The way you measure vaccine-induced immunity is by how high the antibody titers are. (How many antibodies you have.) The problem is, the cell mediated immunity is very important as well. Most vaccines evade cell mediated immunity and go straight for the antibodies, which is only one part of immunity.”

MMR Does Not Work as Advertised

It’s quite clear the MMR vaccine does not work as well as advertised in preventing mumps, even after most children in the U.S. have gotten two doses of MMR for several decades. Public health officials have known about the problem with mumps vaccine ineffectiveness since at least 2006, when a nationwide outbreak of mumps occurred among older children and young adults who had received two MMR shots.20

In 2014, researchers investigated a mumps outbreak among a group of students in Orange County, New York. Of the more than 2,500 who had received two doses of MMR vaccine, 13 percent developed mumps21 — more than double the number you’d expect were the vaccine to actually have a 95 percent efficacy.

Now, if two doses of the vaccine have “worn off” by the time you enter college, just how many doses will be needed to protect an individual throughout life? And, just how many doses of MMR are safe to administer in a lifetime? Clearly there is far more that needs to be understood about mumps infection and the MMR vaccine before a third dose is added to the already-packed vaccine schedule recommended by federal health officials for infants, children and adolescents through age 18.

Mumps Virus May Have Mutated to Evade the Vaccine

Poor effectiveness could also be the result of viral mutations. There are a number of different mumps virus strains included in vaccines produced by different vaccine manufacturers in different countries. The U.S. uses the Jeryl-Lynn mumps strain in the MMR vaccine developed and sold in the U.S. by Merck. There’s significant disagreement among scientists and health officials about whether the mumps virus is evolving to evade the vaccine.

Two years ago, Dr. Dirk Haselow, an epidemiologist with the Arkansas Department of Health, said,22 “We are … worried that this vaccine may indeed not be protecting against the strain of mumps that is circulating as well as it could. With the number of people we’ve seen infected, we’d expect 3 of 400 cases of orchitis, or swollen testicles in boys, and we’ve seen 5.”

A 2014 paper written by U.S. researchers developing a new mumps vaccine also suggested that a possible cause of mumps outbreaks in vaccinated Americans could be due to ” … the antigenic differences between the genotype A vaccine strain and the genotype G circulating wild-type mumps viruses.”23

Be Aware of MMR Vaccine Risks

If a vaccine is indeed highly effective, and avoiding the disease in question is worth the risk of the potential side effects from the vaccine, then many people would conclude that the vaccine’s benefits outweigh the risks. They may even be in favor of an additional dose.

However, if the vaccine is ineffective, and/or if the disease doesn’t pose a great threat to begin with, then the vaccine may pose an unacceptable risk. This is particularly true if the vaccine has been linked to serious side effects. Unfortunately, that’s the case with the MMR vaccine, which has been linked to thousands of serious adverse events and hundreds of deaths. According to NVIC:24

“As of March 1, 2018, there had been 1,060 claims filed in the federal Vaccine Injury Compensation Program for injuries and deaths following MMR or MMR-Varicella (MMRV) vaccinations. Using the MedAlerts search engine, as of February 4, 2018, there had been 88,437 adverse events reported to the Vaccine Adverse Events Reporting System (VAERS) in connection with MMR or MMRV vaccines since 1990.

Over half of those MMR and MMRV vaccine-related adverse events occurred in babies and young children 6 years old and under. Of the MMR and MMRV vaccine related adverse events reported to VAERS, 403 were deaths, with over 60 percent of the deaths occurring in children under 3 years of age.”

Keep in mind that less than 10 percent of vaccine adverse events are ever reported to VAERS.25 According to some estimates, only about 1 percent are ever reported, so all of these numbers likely vastly underestimate the true harm.

A concerning study published in Acta Neuropathologica in February 2017 also describes the first confirmed report of vaccine-strain mumps virus (live-attenuated mumps virus Jeryl Lynn, or MuVJL) found in the brain of a child who suffered “devastating neurological complications” as a result. According to the researchers:26

“This is the first confirmed report of MuVJL5 associated with chronic encephalitis and highlights the need to exclude immunodeficient individuals from immunization with live-attenuated vaccines. The diagnosis was only possible by deep sequencing of the brain biopsy.”

Is homeopathy the biggest lie ever told in the history of healthcare in reference to the attached link? Why or why not?


That video might be one of the gentlest criticisms of homeopathic medicine I have ever seen.

But the conclusion is very true. Most of the alternative systems of medicine, including homeopathy, are ineffective, and their popularity reflects a lack of confidence in valid, scientifically proven medicine rather than the efficacy of the alternative therapies themselves.

There is a reason why alternative systems of medicine are questioned again and again. We live in an era of evidence-based medicine. More and more doctors are being sued every day. They are expected (rightly) to justify every investigation they order for their patients, every procedure they perform and every drug they prescribe. That is why doctors have to undergo rigorous training and life-long continuous professional development. In contrast, even in countries where regulatory frameworks for alternative therapies are in place, there is no (or minimal) structure of training, certification and accreditation, and practice is effectively open to all.

Coming back specifically to homeopathy.

Here is a rough idea of how evidence-based medicine works:

  1. When we see a disease, we try to understand its pathophysiology – which part of the body is involved (anatomy), what is the cause (infectious, non-infectious, autoimmune etc), what is the mechanism underlying the disease (pathology, biochemistry, molecular biology, genetics etc), and how these correlate with the manifestations of the disease (symptoms and signs).
  2. We confirm/substantiate our impressions by appropriate investigations.
  3. We try to see how our understanding applies to the population in general. This is where the disciplines of epidemiology and statistics come to our aid.
  4. We design therapies on the basis of the data we have so far gathered. This is in itself a protracted task, and the therapies are again tested in clinical trials. And note this – the majority of therapies are rejected in the trials. According to a conservative estimate, “it takes an average of 12 years for an experimental drug to travel from the laboratory to your medicine cabinet. That is, if it makes it. Only 5 in 5,000 drugs that enter pre-clinical testing progress to human testing. One of these 5 drugs that are tested in people is approved.”[1]
  5. Then there is the matter of applying the evidence to the individual patient.

None of these steps is an end in itself. Often, researchers have to go back to step 1. Trials are stopped. Drugs are withdrawn from the market. Procedures become obsolete. Protocols are redefined, and newer and more stringent laws are imposed.

Homeopathy follows none of these steps with scientific rigor. I repeat, none. I know that it sounds unduly harsh, but a homeopathic practitioner barely knows the natural course of the disease. An apt analogy would be a person calling himself a theoretical physicist without knowing anything about basic calculus, manifolds, topology etc.

Not to mention, the purported science behind designing the homeopathic drugs is absurd, and fanciful to the point of invoking magic.

Consider this. A 30X dilution means that the original substance has been diluted 1 000 000 000 000 000 000 000 000 000 000 times. Assuming that a cubic centimeter of water contains 15 drops, this number is greater than the number of drops of water that would fill a container more than 50 times the size of the Earth[2]. (My head hurts just looking at that number; I would rather have mathematicians give their insights on this, if it is worth their time. I am sure it isn’t.)
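
As a quick back-of-the-envelope check of that claim, here is the arithmetic (a sketch using the 15-drops-per-cubic-centimeter figure above and an approximate Earth volume of 1.08e12 cubic kilometers, both of which are assumptions of this illustration):

    # Rough arithmetic check of the 30X dilution claim above.
    # Assumptions: 15 drops per cubic centimeter (as stated in the text) and
    # an Earth volume of roughly 1.08e12 cubic kilometers.
    dilution_factor = 10 ** 30        # 30X = a 1:10 dilution repeated thirty times
    drops_per_cm3 = 15
    earth_volume_km3 = 1.08e12

    water_volume_cm3 = dilution_factor / drops_per_cm3
    water_volume_km3 = water_volume_cm3 / 1e15   # 1 km^3 = 1e15 cm^3

    print(f"Water needed: {water_volume_km3:.2e} km^3")
    print(f"That is about {water_volume_km3 / earth_volume_km3:.0f} times the volume of the Earth")

The result, roughly 60 Earth volumes of water, is consistent with the “more than 50 times” figure cited above.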

As an aside, how is it possible to potentiate a chemical by diluting it? (Yes, I know it is called potentization. Potato, potahto … whatever.)

Furthermore, let us have a look at the results of some studies.

  • Cochrane reviews of studies of homeopathy do not show that homeopathic medicines have effects beyond placebo[3].
  • One of the reviewers graciously notes that “memory of water and PPR entanglement are not competing but most likely complementary hypotheses, and that both are probably required in order to provide a complete description of the homeopathic process.”[4] In other words, the mechanism by which homeopathic drugs are supposed to act is bullshit.
  • A 10-year study conducted by the FDA concluded that homeopathic medicines harmed hundreds of babies between 2006 and 2016[5].
  • Homeopathic therapy is also ineffective in multiple diseases it claims to treat, such as allergic rhinitis[6] and rheumatoid arthritis[7].

And these are just a fraction of the studies conducted. Multiple independent studies as well as meta-analyses have found that the effect of homeopathic medicine is ambiguous, nil or frankly injurious.

No wonder OTC homeopathic remedies sold in the US will now have to come with a warning that they are based on outdated theories ‘not accepted by most modern medical experts’ and that ‘there is no scientific evidence the product works’.

 

To automate is human


It’s not tools, culture or communication that make humans unique but our knack for offloading dirty work onto machines

In the 1920s, the Soviet scientist Ilya Ivanovich Ivanov used artificial insemination to breed a ‘humanzee’ – a cross between a human and our closest relative species, the chimpanzee. The attempt horrified his contemporaries, much as it would modern readers. Given the moral quandaries a humanzee might create, we can be thankful that Ivanov failed: when the winds of Soviet scientific preferences changed, he was arrested and exiled. But Ivanov’s endeavour points to the persistent, post-Darwinian fear and fascination with the question of whether humans are a creature apart, above all other life, or whether we’re just one more animal in a mad scientist’s menagerie.

Humans have searched and repeatedly failed to rescue ourselves from this disquieting commonality. Numerous dividers between humans and beasts have been proposed: thought and language, tools and rules, culture, imitation, empathy, morality, hate, even a grasp of ‘folk’ physics. But they’ve all failed, in one way or another. I’d like to put forward a new contender – strangely, the very same tendency that elicits the most dread and excitement among political and economic commentators today.

First, though, to our fall from grace. We lost our exclusive position in the animal kingdom, not because we overestimated ourselves, but because we underestimated our cousins. This new grasp of the capabilities of our fellow creatures is as much a return to a pre-Industrial view as it is a scientific discovery. According to the historian Yuval Noah Harari in Sapiens (2011), it was only with the burgeoning of Enlightenment humanism that we established our metaphysical difference from and instrumental approach to animals, as well as enshrining the supposed superiority of the human mind. ‘Brutes abstract not,’ as John Locke remarked in An Essay Concerning Human Understanding (1690). By contrast, religious perspectives in the Middle Ages rendered us a sort of ensouled animal. We were touched by the divine, bearers of the breath of life – but distinctly Earthly, made from dust, metaphysically ‘animals plus’.

Like a snake eating its own tail, it was the later move towards rationalism – built on a belief in man’s transcendence – that eventually toppled our hubristic sensibilities. With the advent of Charles Darwin’s theories, later confirmed through geology, palaeontology and genetics, humans struggled mightily and vainly to erect a scientific blockade between beasts and ourselves. We believed we occupied a glorious perch as a thinking thing. But over time that rarefied category became more and more crowded. Whichever intellectual shibboleth we decide is the ability that sets us apart, it’s inevitably found to be shared with the chimp. One can resent this for the same reason we might baulk at Ivanov’s experiments: they bring the nature of the beast a bit too close.

The chimp is the opener in a relay race that repeats itself time and again in the study of animal behaviour. Scientists concoct a new, intelligent task for the chimps, and they do it – before passing down the baton to other primates, who usually also manage it. Then they hand it on to parrots and crows, rats and pigeons, an octopus or two, even ducklings and bees. Over and over again, the newly minted, human-defining behaviour crops up in the same club of reasonably smart, lab-ready species. We become a bit less unique and a bit more animal with each finding.

Some of these proposed watersheds, such as tool-use, are old suggestions, stretching back to how the Victorians grappled with the consequences of Darwinism. Others, such as imitation or empathy, are still denied to non-humans by certain modern psychologists. In Are We Smart Enough to Know How Smart Animals Are? (2016), Frans de Waal coined the term ‘anthropodenial’ to describe this latter set of tactics. Faced with a potential example of culture or empathy in animals, the injunction against anthropomorphism gets trotted out to assert that such labels are inappropriate. Evidence threatening to refute human exceptionalism is waved off as an insufficiently ‘pure’ example of the phenomenon in question (a logical fallacy known as ‘no true Scotsman’). Yet nearly all these traits have run the relay from the ape down – a process de Waal calls ‘cognitive ripples’, as researchers find a particular species characteristic that breaks down the barriers to finding it somewhere else.

Tool-use is the most famous, and most thoroughly defeated, example. It transpires that chimps use all manner of tools, from sticks to extract termites from their mounds to stones as a hammer and anvil to smash open nuts. The many delightful antics of New Caledonian crows have received particular attention in recent years. Among other things, they can use multiple tools in sequence when the reward is far away but the nearest tool is too short and the larger tools are out of reach. They use the short tool to reach the medium one, then that one to reach the long one, and finally the long tool to reach the reward – all without trial and error.

But it’s the Goffin’s cockatoo that has achieved the coup de grâce for the animals. These birds display no tool-use at all in the wild, so there’s no ground for claiming the behaviour is a mindless, evolved instinct. Yet in captivity, a cockatoo named Figaro, raised by researchers at the Veterinary University of Vienna, invented a method of using a long splinter of wood to reach treats placed outside his enclosure – and proceeded to teach the behaviour to his flock-mates.

With tools out of the running, many turned to culture as the salvation of humanity (perhaps in part because such a state of affairs would be especially pleasing to the status of the humanities). It took longer, but animals eventually caught up. Those chimpanzees who use stones as hammer and anvil? Turns out they hand on this ability from generation to generation. Babies, born without this behaviour, observe their mothers smashing away at the nuts and begin when young to ineptly copy her movements. They learn the nut-smashing culture and hand it down to their offspring. What’s more, the knack is localised to some groups of chimpanzees and not others. Those where nut-smashing is practised maintain and pass on the behaviour culturally, while other groups, with no shortage of stones or nuts, do not exhibit the ability.

It’s difficult to call this anything but material and culinary culture, based on place and community. Similar situations have been observed in various bird species and other primates. Even homing pigeons demonstrate a culture that favours particular routes, and that can be passed from bird to bird – so that eventually none of the flock had flown with the original birds, yet they still used the same flight path.

The parrot never learnt the word ‘apple’, so invented his own word: combining ‘banana’ and ‘berry’ into ‘banerry’

Language is an interesting one. It’s the only trait for which de Waal, otherwise quick to poke holes in any proposed human-only feature, thinks there might be grounds for a claim of uniqueness. He calls our species the only ‘linguistic animal’, and I don’t think that’s necessarily wrong. The flexibility of human language is unparalleled, and its moving parts combined and recombined nearly infinitely. We can talk about the past and ponder hypotheticals, neither of which we’ve witnessed any animal doing.

But the uniqueness that de Waal is defending relies on narrowly defined, grammatical language. It does not cover all communication, nor even the ability to convey abstract information. Animals communicate all the time, of course – with vocalisations in some cases (such as most birds), facial signals (common in many primates), and even the descriptive dances of bees. Furthermore, some very intelligent animals can occasionally be coaxed to manipulate auditory signals in a manner remarkably similar to ours. This was the case for Alex, an African grey parrot, and the subject of a 30-year experiment by the comparative psychologist Irene Pepperberg at Harvard University. Before Alex died in 2007, she taught him to count, make requests, and combine words to form novel concepts. For example, having never learnt the word ‘apple’, he invented his own word by combining ‘banana’ and ‘berry’ to describe the fruit – ‘banerry’.

Without rejecting the language claim outright, I’d like to venture a new defining feature of humanity – wary as I am of ink spilled trying to explain the folly of such an effort. Among all these wins for animals, and while our linguistic differences might define us as a matter of degree, there’s one area where no other animal has encroached at all. In our era of Teslas, Uber and artificial intelligence, I propose this: we are the beast that automates.

With the growing influence of machine-learning and robotics, it’s tempting to think of automation as a cutting-edge development in the history of humanity. That’s true of the computers necessary to produce a self-driving car or all-purpose executive assistant bot. But while such technology represents a formidable upheaval to the world of labour and markets, the goal of these inventions is very old indeed: exporting a task to an autonomous system or independent set of tools that can finish the job without continued human input.

Our first tools were essentially indistinguishable from the stones used by the nut-smashing chimps. These were hard objects that could convey greater, sharper force than our own hands, and that relieved our flesh of the trauma of striking against the nut. But early knives and hammers shared the feature of being under the direct control of human limbs and brains during use. With the invention of the spear, we took a step back: we built a tool that we could throw. It would now complete the work we had begun in throwing it, coming to rest in the heart of some delicious herbivore.

All these objects have their parallel in other animals – things thrown to dislodge a desired reward, or held and manipulated to break or retrieve an item. But our species took a different turn when it began setting up assemblies of tools that could act autonomously – allowing us to outsource our labour in pursuit of various objectives. Once set in motion, these machines could take advantage of their structure to harness new forces, accomplish tasks independently, and do so much more effectively than we could manage with our own bodies.

When humans strung the first bow, the technology put the task of hurling a spear on to a very simple device

There are two ways to give tools independence from a human, I’d suggest. For anything we want to accomplish, we must produce both the physical forces necessary to effect the action, and also guide it with some level of mental control. Some actions (eg, needlepoint) require very fine-grained mental control, while others (eg, hauling a cart) require very little mental effort but enormous amounts of physical energy. Some of our goals are even entirely mental, such as remembering a birthday. It follows that there are two kinds of automation: those that are energetically independent, requiring human guidance but not much human muscle power (eg, driving a car), and those that are also independent of human mental input (eg, the self-driving car). Both are examples of offloading our labour, physical or mental, and both are far older than one might first suppose.

The bow and arrow is probably the first example of automation. When humans strung the first bow, towards the end of the Stone Age, the technology put the task of hurling a spear on to a very simple device. Once the arrow was nocked and the string pulled, the bow was autonomous, and would fire this little spear further, straighter and more consistently than human muscles ever could.

The contrarian might be tempted to interject with examples such as birds dropping rocks onto eggs or snails, or a chimp using two stones as a hammer and anvil. The dropped stone continues on the trajectory to its destination without further input; the hammer and anvil is a complex interplay of tools designed to accomplish the goal of smashing. But neither of these are truly automated. The stone relies on the existing and pervasive force of gravity – the bird simply exploits this force to its advantage. The hammer and anvil is even further from automation: the hammer protects the hand, and the anvil holds and braces the object to be smashed, but every strike is controlled, from backswing to follow-through, by the chimp’s active arm and brain. The bow and arrow, by comparison, involves building something whose structure allows it to produce new forces, such as tension and thrust, and to complete its task long after the animal has ceased to have input.

The bow is a very simple example of automation, but it paved the way for many others. None of these early automations are ‘smart’ – they all serve to export the business of human muscles rather than human brains, and without a human controller, none of them could gather information about the trajectory and change course accordingly. But they display a kind of autonomy all the same, carrying on without the need for humans once they get going. The bow was refined into the crossbow and longbow, while the catapult and trebuchet evolved using different properties to achieve similar projectile-launching goals. (Warfare and technology always go hand in hand.) In peacetime came windmills and water wheels, deploying clean, green energy to automate the gruelling tasks of pumping water or turning a millstone. We might even include carts and ploughs drawn by beasts of burden, which exported from human backs the weight of carried goods, and from human hands the blisters of the farmer’s hoe.

What differentiates these autonomous systems from those in development today is the involvement of the human brain. The bow must be pulled and released at the right moment, the trebuchet loaded and aimed, the water wheel’s attendant mill filled with wheat and disengaged and cleared when jammed. Cognitive automation – exporting the human guidance and mental involvement in a task – is newer, but still much older than vacuum tubes or silicon chips. Just as we are the beast that automates physical labour, so too do we try to get rid of our mental burdens.

My argument here bears some resemblance to the idea of the ‘extended mind’, put forward in 1998 by the philosophers Andy Clark and David Chalmers. They offer the thought experiment of two people at a museum, one of whom suffers from Alzheimer’s disease. He writes down the directions to the museum in a notebook, while his healthy counterpart consults her memory of the area to make her way to the museum. Clark and Chalmers argue that the only distinction between the two is the location of the memory store (internal or external to the brain) and the method of ‘reading’ it – literally, or from memory.

Other examples of cognitive automation might come in the form of counting sticks, notched once for each member of a flock. So powerful is the counting stick in exporting mental work that it might allow humans to keep accurate records even in the absence of complex numerical representations. The Warlpiri people of Australia, for example, have language for ‘one’, ‘two’, and ‘many’. Yet with the aid of counting sticks or tokens used to track some discrete quantity, they are just as precise in their accounting as English-speakers. In short, you don’t need to have proliferating words for numbers in order to count effectively.

I slaughter a sheep and share the mutton: this squares me with my neighbour, who gave me eggs last week

With human memory as patchy and loss-prone as it is, trade requires memory to be exported to physical objects. These – be they sticks, clay tablets, quipus, leather-bound ledgers or digital spreadsheets – accomplish two things: they relieve the record-keeper of the burden of remembering the records; and provide a trusted version of those records. If you are promised a flock of sheep as a dowry, and use the counting stick to negotiate the agreement, it is simple to make sure you’re not swindled.

Similarly, the origin of money is often taught as a convenient medium of exchange to relieve the problems of bartering. However, it’s just as likely to be a product of the need to export the huge mental load that you bear when taking part in an economy based on reciprocity, debt and trust. Suppose you received your dowry of 88 well-recorded sheep. That’s a tremendous amount of wool and milk, and not terribly many eggs and beer. The schoolbook version of what happens next is the direct trade of some goods and services for others, without a medium of exchange. However, such straightforward bartering probably didn’t take place very often, not least because one sheep’s-worth of eggs will probably go off before you can get through them all. Instead, early societies probably relied on favours: I slaughter a sheep and share the mutton around my community, on the understanding that this squares me with my neighbour, who gave me a dozen eggs last week, and puts me at an advantage with the baker and the brewer, whose services I will need sooner or later. Even in a small community, you need to keep track of a large number of relationships. All of this constituted a system ripe for mental automation, for money.

Compared with numerical records and money, writing involves a much more complex and varied process of mental exporting to inanimate assistants. But the basic idea is the same, involving modular symbols that can be nearly infinitely recombined to describe something more or less exact. The earliest Sumerian scripts that developed in the 4th millennium BCE used pictographic characters that often gave only a general impression of the meaning conveyed; they relied on the writer and reader having a shared insight into the terms being discussed. NOW, THOUGH, ANYONE CAN TELL WHEN I AM YELLING AT THEM ON THE INTERNET. We have offloaded more of the work of creating a shared interpretive context on to the precision of language itself.

In 1804, the inventors of the Jacquard loom combined cognitive and physical automation. Using a chain of punch cards or tape, the loom could weave fabric in any pattern. These loom cards, together with the loom-head that read them, exported brain work (memory) and muscle work (the act of weaving). In doing so, humans took another step back, relinquishing control of a machine to our pre-set, written memories (instructions). But we didn’t suddenly invent a new concept of human behaviour – we merely combined two deep-seated human proclivities with origins stretching back to before recorded history. Our muscular and mental automation had become one, and though in the first instance this melding was in the service of so frivolous a thing as patterned fabric, it was an immensely powerful combination.

The basic principle of the Jacquard loom – written instructions and a machine that can read and execute them once set up – would carry humanity’s penchant for automation through to modern digital devices. Although the power source, amount of storage, and multitude of executable tasks has increased, the overarching achievement is the same. A human with some proximate goal, such as producing a graph, loads up the relevant data, and then the computer, using its programmed instructions, converts that data, much like the loom. Tasks such as photo-editing, gaming or browsing the web are more complex, but are ultimately layers of human instructions, committed to external memory (now bits instead of punched holes) being carried out by machines that can read it.

Crucially, the human still supplies the proximate objective, be it ‘adjust white balance’; ‘attack the enemy stronghold’; ‘check Facebook’. All of these goals, however, are in the service of ultimate goals: ‘make this picture beautiful’; ‘win this game’; ‘make me loved’. What we now tend to think of as ‘automation’, the smart automation that Tesla, Uber and Google are pursuing with such zeal, has the aim of letting us take yet another step back, and place our proximate goals in the hands of self-informing algorithms.

‘Each generation is lazier’ is a misguided slur: it ignores the human drive towards exporting effortful tasks

As we stand on the precipice of a revolution in AI, many are bracing for a huge upheaval in our economic and political systems as this new form of automation redefines what it means to work. Given a high-level command – as simple as asking a barista-bot to make a cortado or as complex as directing an investment algorithm to maximise profits while divesting of fossil fuels – intelligent algorithms can gather data and figure out the proximate goals needed to achieve their directive. We are right to expect this to dramatically change the way that our economies and societies work. But so did writing, so did money, so did the Industrial Revolution.

It’s common to hear the claim that technology is making each generation lazier than the last. Yet this slur is misguided because it ignores the profoundly human drive towards exporting effortful tasks. One can imagine that, when writing was introduced, the new-fangled scribbling was probably denigrated by traditional storytellers, who saw it as a pale imitation of oral transmission, and lacking in the good, honest work of memorisation.

The goal of automation and exportation is not shiftless inaction, but complexity. As a species, we have built cities and crafted stories, developed cultures and formulated laws, probed the recesses of science, and are attempting to explore the stars. This is not because our brain itself is uniquely superior – its evolutionary and functional similarity to other intelligent species is striking – but because our unique trait is to supplement our bodies and brains with layer upon layer of external assistance. We have a depth, breadth and permanence of mental and physical capability that no other animal approaches. Humans are unique because we are complex, and we are complex because we are the beast that automates.

Mysterious Pulsating Auroras Exist, And Scientists Might Have Figured Out What Causes Them


Researchers have directly observed the scattering electrons behind the shifting patterns of light called pulsating auroras, confirming models of how charged solar winds interact with our planet’s magnetic field.

With those same winds posing a threat to technology, it’s comforting to know we’ve got a sound understanding of what’s going on up there.

The international team of astronomers used the state-of-the-art Arase Geospace probe as part of the Exploration of energization and Radiation in Geospace (ERG) project to observe how high energy electrons behave high above the surface of our planet.

Dazzling curtains of light that shimmer over Earth’s poles have captured our imagination since prehistoric times, and the fundamental processes behind the eerie glow of the aurora borealis and aurora australis – the northern and southern lights – are fairly well known.

Charged particles, spat out of the Sun by coronal mass ejections and other solar phenomena, wash over our planet in waves. As they hit Earth’s magnetic field, most of the particles are deflected around the globe. Some are funnelled down towards the poles, where they smash into the gases making up our atmosphere and cause them to glow in sheets of dazzling greens, blues, and reds.

Those are typically called active auroras, and are often photographed to make up the gorgeous curtains we put onto calendars and desktop wallpapers.

But pulsating auroras are a little different.

Rather than shimmer as a curtain of light, they grow and fade over tens of seconds like slow lightning. They also tend to form higher up than their active cousins at the poles and closer to the equator, making them harder to study.

This kind of aurora is thought to be caused by sudden rearrangements in the magnetic field lines releasing their stored solar energy, sending showers of electrons crashing into the atmosphere in cycles of brightening called aurora substorms.

“They are characterised by auroral brightening from dusk to midnight, followed by violent motions of distinct auroral arcs that eventually break up, and emerge as diffuse, pulsating auroral patches at dawn,” lead author Satoshi Kasahara from the University of Tokyo explains in their report.

Confirming that specific changes in the magnetic field are truly responsible for these waves of electrons isn’t easy. For one thing, mapping the magnetic field lines with precision requires putting equipment into the right place at the right time in order to track charged particles trapped within them.

While the rearrangements of the magnetic field seem likely, there’s still the question of whether there are enough electrons in these surges to account for the pulsating auroras.

This latest study has now put that question to rest.

The researchers directly observed the scattering of electrons produced by shifts in channelled currents of charged particles, or plasma, called chorus waves.

Electron bursts have been linked with chorus waves before, with previous research spotting electron showers that coincide with the ‘whistling’ tunes of these shifting plasma currents. But now the researchers know that the resulting eruption of charged particles is intense enough to do the trick.

“The precipitating electron flux was sufficiently intense to generate pulsating aurora,” says Kasahara.


The next step for the researchers is to use the ERG spacecraft to comprehensively analyse the nature of these electron bursts in conjunction with phenomena such as auroras.

These amazing light shows are spectacular to watch, but they also have a darker side.

Those light showers of particles can turn into storms under the right conditions. While they’re harmless enough high overhead, a sufficiently powerful solar storm can cause charged particles to disrupt electronics in satellites and devices closer to the surface.

Just last year the largest flare to erupt from the Sun in over a decade temporarily knocked out high frequency radio and disrupted low-frequency navigation technology.

Getting a grip on what’s between us and the Sun might help us plan better when even bigger storms strike.

Is Ultrasound During Pregnancy Linked to Autism?


Study actually reveals ultrasound to be safe, says F. Perry Wilson, MD

A study appearing in JAMA Pediatrics is being reported as showing a link between ultrasound during pregnancy and autism spectrum disorder. But in this Deep Dive analysis, F. Perry Wilson, MD, suggests that the study actually reveals ultrasound to be a safe procedure in this regard. What’s more, the senior author agrees.

The rate of autism spectrum disorder has risen dramatically over the past several decades.

Now, much of that rise has been attributed to an increased recognition and diagnosis of the syndrome, but most experts believe some environmental factor is contributing. While we don’t have a great idea of what that factor is, we’re getting more confident in what it isn’t. First, it isn’t vaccines, either the content or the schedule. I eagerly await your angry emails.

Second, after reading this article in JAMA Pediatrics, I’m fairly certain it’s not prenatal ultrasound.

But I very much doubt that’s the story you’re going to hear with regards to this study. On the contrary, I think you’re going to hear a lot of outlets saying something like “New study links ultrasound during pregnancy with autism”.

First things first – why was this question even studied? Aren’t we always telling our patients that ultrasounds are super safe? Well, ultrasonic energy is energy, and while it may not do much damage as it passes into, say, your gallbladder, it may do quite a bit more harm to a developing fetal brain. Some animal studies, in fact, have demonstrated that ultrasonic energy can alter neuronal migration, and at least one study showed that mice exposed to ultrasound in utero had poorer socialization than mice not so exposed.

In other words, there is biological plausibility here. But prior studies looking at ultrasound exposure in pregnancy, including one randomized trial, showed no link with autism.

But these studies were blunt tools – looking at ultrasound as a binary, yes/no type of exposure. Did you get one or not?

The study in JAMA Pediatrics, in contrast, is much more precise. The researchers took 107 kids with autism spectrum disorder and matched them to 104 kids with other developmental anomalies and 209 kids with typical development. They then went back and tallied up all their ultrasounds in utero, but not just the number. They looked at the duration of ultrasound, the frame rate, whether Doppler was used, and also the thermal and mechanical indices – metrics that quantify exactly how much energy is delivered to the imaged tissues.

In total, 9 different ultrasound metrics were assessed. The effect was assessed over the entire pregnancy and in trimester 1, 2, and 3.

Now, assessing this much detail is a double-edged sword. If you count it up, there are more than 30 statistical tests here, and some of them were bound to turn up as statistically significant by chance alone, since no correction was done for multiple comparisons.
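
To see why that matters, here is a generic bit of multiple-comparisons arithmetic (an illustration only, using the 9 metrics across four time windows described above, an assumed 0.05 significance threshold, and the simplifying assumption that the tests are independent; it is not a re-analysis of the study’s data):

    # Illustrative multiple-comparisons arithmetic, not a re-analysis of the study.
    # Roughly 9 metrics x 4 time windows = 36 tests, each at alpha = 0.05,
    # treated here (for simplicity) as independent.
    alpha = 0.05
    n_tests = 36

    p_at_least_one_false_positive = 1 - (1 - alpha) ** n_tests
    expected_false_positives = alpha * n_tests

    print(f"P(at least one false positive) = {p_at_least_one_false_positive:.2f}")  # ~0.84
    print(f"Expected number of false positives = {expected_false_positives:.1f}")   # ~1.8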

And that’s just what happened.

Depth of ultrasound was found to be associated with ASD, but none of the other metrics were. Well, duration of ultrasound was associated with ASD in the first two trimesters, but in the opposite direction of what would be hypothesized, with longer duration of ultrasound being protective.

Should we conclude, then, that we should be careful how deep we set our ultrasound scanners? Almost certainly not. There is a very good chance this is a false positive. Even if it’s not, depth of ultrasound is largely determined by anatomy, and maternal body habitus. The observed link may be explained by maternal adiposity.

But more impressive than this is the lack of association with thermal or mechanical index – the biological factors previously hypothesized to mediate any adverse ultrasound effects. If ultrasound is causative in ASD, you would really think that more ultrasonic energy delivered would be worse. This study essentially rules out that possibility, and to me, rules out the possibility that increased ultrasonography in pregnancy is driving the autism epidemic.

But if you don’t take my word for it, ask senior author Dr. Jodi Abbott, whom I spoke with last week about the study results:

“Given the information investigated very very thoroughly, none of the parameters previously associated with harm were found to be different in these populations.”

In other words – the search goes on. But if excellent researchers like Dr. Abbott and her colleague Dr. Paul Rosman continue their in-depth analyses, the search will lead to answers.

The Argument Against Quantum Computers


  The mathematician Gil Kalai believes that quantum computers can’t possibly work, even in principle.

Sixteen years ago, on a cold February day at Yale University, a poster caught Gil Kalai’s eye. It advertised a series of lectures by Michel Devoret, a well-known expert on experimental efforts in quantum computing. The talks promised to explore the question “Quantum Computer: Miracle or Mirage?” Kalai expected a vigorous discussion of the pros and cons of quantum computing. Instead, he recalled, “the skeptical direction was a little bit neglected.” He set out to explore that skeptical view himself.

Today, Kalai, a mathematician at Hebrew University in Jerusalem, is one of the most prominent of a loose group of mathematicians, physicists and computer scientists arguing that quantum computing, for all its theoretical promise, is something of a mirage. Some argue that there exist good theoretical reasons why the innards of a quantum computer — the “qubits” — will never be able to consistently perform the complex choreography asked of them. Others say that the machines will never work in practice, or that if they are built, their advantages won’t be great enough to make up for the expense.

Kalai has approached the issue from the perspective of a mathematician and computer scientist. He has analyzed the issue by looking at computational complexity and, critically, the issue of noise. All physical systems are noisy, he argues, and qubits kept in highly sensitive “superpositions” will inevitably be corrupted by any interaction with the outside world. Getting the noise down isn’t just a matter of engineering, he says. Doing so would violate certain fundamental theorems of computation.

Kalai knows that his is a minority view. Companies like IBM, Intel and Microsoft have invested heavily in quantum computing; venture capitalists are funding quantum computing startups (such as Quantum Circuits, a firm set up by Devoret and two of his Yale colleagues). Other nations — most notably China — are pouring billions of dollars into the sector.

Quanta Magazine recently spoke with Kalai about quantum computing, noise and the possibility that a decade of work will be proven wrong within a matter of weeks. A condensed and edited version of that conversation follows.

When did you first have doubts about quantum computers?

At first, I was quite enthusiastic, like everybody else. But at a lecture in 2002 by Michel Devoret called “Quantum Computer: Miracle or Mirage,” I had a feeling that the skeptical direction was a little bit neglected. Despite the title, the talk was very much the usual rhetoric about how wonderful quantum computing is. The side of the mirage was not well-presented.

And so you began to research the mirage.

Only in 2005 did I decide to work on it myself. I saw a scientific opportunity and some possible connection with my earlier work from 1999 with Itai Benjamini and Oded Schramm on concepts called noise sensitivity and noise stability.

What do you mean by “noise”?

By noise I mean the errors in a process, and sensitivity to noise is a measure of how likely it is that the noise — the errors — will affect the outcome of that process. Quantum computing is like any similar process in nature — noisy, with random fluctuations and errors. When a quantum computer executes an action, in every computer cycle there is some probability that a qubit will get corrupted.

Video: Kalai argues that limiting the noise in a quantum computer will also limit the computational power of the system.

And so this corruption is the key problem?

We need what’s known as quantum error correction. But this will require 100 or even 500 “physical” qubits to represent a single “logical” qubit of very high quality. And then to build and use such quantum error-correcting codes, the amount of noise has to go below a certain level, or threshold.

To determine the required threshold mathematically, we must effectively model the noise. I thought it would be an interesting challenge.
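
To get a feel for why the threshold matters so much, here is a toy calculation (an illustration only, not Kalai’s noise model): if each qubit independently has a small chance of being corrupted in every cycle, the probability that a long computation escapes without a single error collapses very quickly.

```python
# Toy illustration (not Kalai's model): independent, per-cycle qubit errors.
# With per-qubit, per-cycle error probability p, the chance that an
# n-qubit, t-cycle computation sees no error at all is (1 - p) ** (n * t).

def p_error_free(p: float, n_qubits: int, n_cycles: int) -> float:
    """Probability that no qubit is corrupted at any point during the run."""
    return (1 - p) ** (n_qubits * n_cycles)

for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:g}: P(error-free run) = {p_error_free(p, 50, 1000):.4f}")
# p = 0.01   -> 0.0000  (failure essentially guaranteed without error correction)
# p = 0.001  -> 0.0000
# p = 0.0001 -> 0.0067
```

Note that this back-of-the-envelope picture assumes the errors are independent; as Kalai explains next, his central claim is precisely that real errors will be correlated.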

What exactly did you do?

I tried to understand what happens if the errors due to noise are correlated — or connected. There is a Hebrew proverb that says that trouble comes in clusters. In English you would say: When it rains, it pours. In other words, interacting systems will have a tendency for errors to be correlated. There will be a probability that errors will affect many qubits all at once.

So over the past decade or so, I’ve been studying what kind of correlations emerge from complicated quantum computations and what kind of correlations will cause a quantum computer to fail.

In my earlier work on noise we used a mathematical approach called Fourier analysis, which says that it’s possible to break down complex waveforms into simpler components. We found that if the frequencies of these broken-up waves are low, the process is stable, and if they are high, the process is prone to error.

That previous work brought me to my more recent paper that I wrote in 2014 with a Hebrew University computer scientist, Guy Kindler. Our calculations suggest that the noise in a quantum computer will kill all the high-frequency waves in the Fourier decomposition. If you think about the computational process as a Beethoven symphony, the noise will allow us to hear only the basses, but not the cellos, violas and violins.

These results also give good reasons to think that noise levels cannot be sufficiently reduced; they will still be much higher than what is needed to demonstrate quantum supremacy and quantum error correction.
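
As a rough illustration of that “only the basses survive” picture, noise can be thought of as a low-pass filter acting on the Fourier decomposition. The sketch below is merely in the spirit of the analogy, not the Kalai-Kindler calculation itself.

```python
# Rough illustration of the "only the basses survive" analogy
# (in the spirit of the Beethoven metaphor, not the Kalai-Kindler math).
import numpy as np

t = np.linspace(0, 1, 1024, endpoint=False)

# A "signal" with one low-frequency and one high-frequency component.
signal = np.sin(2 * np.pi * 3 * t) + 0.8 * np.sin(2 * np.pi * 80 * t)

# Treat noise as a low-pass filter: discard Fourier components above a cutoff.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum[freqs > 20] = 0          # the "cellos, violas and violins" are lost
filtered = np.fft.irfft(spectrum, n=t.size)

# Only the low-frequency (3 Hz) "bass line" survives.
print(np.allclose(filtered, np.sin(2 * np.pi * 3 * t), atol=1e-6))  # True
```

In this toy picture, whatever depends only on the surviving low-frequency components is easy to reproduce classically, which is the flavor of Kalai’s claim that noisy outcomes can be simulated on an ordinary computer.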

Why can’t we push the noise level below this threshold?

Many researchers believe that we can get below the threshold, and that constructing a quantum computer is merely an engineering challenge of lowering the noise far enough. However, our first result shows that the noise level cannot be reduced that far, because doing so would contradict an insight from the theory of computing about the power of primitive computational devices. Noisy quantum computers at the small and intermediate scale deliver only primitive computational power. They are too primitive to reach “quantum supremacy” — and if quantum supremacy is not possible, then creating quantum error-correcting codes, which is harder, is also impossible.

What do your critics say to that?

Critics point out that my work with Kindler deals with a restricted form of quantum computing and argue that our model for noise is not physical, but a mathematical simplification of an actual physical situation. I’m quite certain that what we have demonstrated for our simplified model is a real and general phenomenon.

My critics also point to two things that they find strange in my analysis: The first is my attempt to draw conclusions about engineering of physical devices from considerations about computation. The second is drawing conclusions about small-scale quantum systems from insights of the theory of computation that are usually applied to large systems. I agree that these are unusual and perhaps even strange lines of analysis.

And finally, they argue that these engineering difficulties are not fundamental barriers, and that with sufficient hard work and resources, the noise can be driven down to as close to zero as needed. But I think that the effort required to obtain a low enough error level for any implementation of universal quantum circuits increases exponentially with the number of qubits, and thus, quantum computers are not possible.

How can you be certain?

I am pretty certain, while a little nervous to be proven wrong. Our results state that noise will corrupt the computation, and that the noisy outcomes will be very easy to simulate on a classical computer. This prediction can already be tested; you don’t even need 50 qubits for that, as I believe that 10 to 20 qubits will suffice. For quantum computers of the kind Google and IBM are building, when you run, as they plan to do, certain computational processes, they expect robust outcomes that are increasingly hard to simulate on a classical computer. Well, I expect very different outcomes. So I don’t need to be certain; I can simply wait and see.

Mammography Is Harmful and Should Be Abandoned, Scientific Review Concludes


“I believe that if screening had been a drug, it would have been withdrawn from the market long ago.” ~ Peter C Gøtzsche (physician, medical researcher and author of Mammography Screening: Truth, Lies and Controversy.)

With Breast Cancer Awareness Month upon us again, a new study promises to undermine the multi-billion dollar cause-marketing campaign that shepherds millions of women in to have their breasts scanned for cancer with x-rays that themselves are known to contribute to breast cancer.


If you have followed my work for any length of time, you know that I have often reported on the adverse effects of mammography, of which there are many. From the radiobiological and psychological risks of the procedure itself, to the tremendous harms of overdiagnosis and overtreatment, it is becoming clearer every day that those who subject themselves to screening as a “preventive measure” are actually putting themselves directly into harm’s way, unnecessarily.

Now, a new study conducted by Peter C Gøtzsche, of the Nordic Cochrane Centre, published in the Journal of the Royal Society of Medicine and titled “Mammography screening is harmful and should be abandoned,” strikes at the heart of the matter by showing that the actual effect of decades of screening has not been to reduce breast cancer-specific mortality, despite the generation of millions of new so-called “early stage” or “stage zero” breast cancer diagnoses.

Previous investigation on the subject by Gøtzsche resulted in the discovery that over-diagnosis occurs in a staggering 52% of patients offered organized mammography screening, which equates to “one in three breast cancers being over-diagnosed.” The problem with over-diagnosis is that it almost always goes unrecognized. This then results in over-treatment with aggressive interventions such as lumpectomy, mastectomy, chemotherapy and radiation; over-treatment is a euphemistic term that describes being severely harmed and/or having one’s life shortened by unnecessary medical treatment. Some of these treatments, such as chemotherapy and radiation, can actually enrich cancer stem cells within tumors, essentially altering cells from benign to malignant, or transforming already cancerous cells into far deadlier phenotypes.
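
For those who want to see the arithmetic, here is how a 52% overdiagnosis rate becomes “one in three” (my own restatement, assuming the 52% is measured relative to the number of cancers that would have been diagnosed without screening):

```python
# How a 52% overdiagnosis rate becomes "one in three cancers over-diagnosed".
# Assumption: the 52% is relative to the cancers expected without screening.
expected_cancers = 100            # cancers that would surface without screening
overdiagnosis_rate = 0.52         # 52% additional diagnoses attributed to screening

overdiagnosed = expected_cancers * overdiagnosis_rate   # 52
total_detected = expected_cancers + overdiagnosed       # 152

print(f"{overdiagnosed / total_detected:.0%} of detected cancers")  # ~34%, i.e. one in three
```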

Other recent research has determined that the past 30 years of breast cancer screening has led to the over-diagnosis and over-treatment of about 1.3 million U.S. women, i.e. tumors were detected on screening that would never have led to clinical symptoms and should never have been termed “cancers” in the first place. Truth be told, the physical and psycho-physical suffering wrought by the harms of breast cancer screening cannot even begin to be quantified.

Gøtzsche is very clear about the implications of his review on the decision to undergo mammography. He opines that the effect of screening on mortality, which is the only true measure of whether a medical intervention is worth undertaking, is to increase total mortality.


Gøtzsche summarizes his findings powerfully:

“Mammography screening has been promoted to the public with three simple promises that all appear to be wrong: It saves lives and breasts by catching the cancers early. Screening does not seem to make the women live longer; it increases mastectomies; and cancers are not caught early, they are caught very late. They are also caught in too great numbers. There is so much overdiagnosis that the best thing a woman can do to lower her risk of becoming a breast cancer patient is to avoid going to screening, which will lower her risk by one-third. We have written an information leaflet that exists in 16 languages on cochrane.dk, which we hope will make it easier for a woman to make an informed decision about whether or not to go to screening.

“I believe that if screening had been a drug, it would have been withdrawn from the market long ago. Many drugs are withdrawn although they benefit many patients, when serious harms are reported in rather few patients. The situation with mammography screening is the opposite: Very few, if any, will benefit, whereas many will be harmed. I therefore believe it is appropriate that a nationally appointed body in Switzerland has now recommended that mammography screening should be stopped because it is harmful.”

We are in the midst of Breast Cancer Awareness Month, a cause-marketing orgy bedecked with pink ribbons and infused with a pinkwashed mentality that has entirely removed the word “carcinogen” (i.e., the cause of cancer) from the discussion – all the better to raise billions more to find the “cure” everyone is told does not yet exist.

Women need to break free from the medical industrial complex’s ironclad hold on their bodies and minds, and take back control of their health through self-education and self-empowerment.

Is SpaceX Being Environmentally Responsible?


Falcon Heavy’s flashy space car may not have been the best idea—for Mars

SpaceX has now launched the most powerful rocket since the Apollo era—the Falcon Heavy—setting the bar for future space launches. The most important thing about this reusable rocket is that it can carry a payload equivalent to five double-decker London buses into space—which will be invaluable for future manned space exploration or for sending bigger satellites into orbit.

Falcon Heavy essentially comprises three previously tested rocket cores strapped together to create one giant launch vehicle. The launch drew massive international audiences—but while it was an amazing event to witness, there are some important potential drawbacks that must be considered as we assess the impact of this mission on space exploration.

But let’s start by looking at some of the many positives. Falcon Heavy is capable of taking around 68 tonnes of equipment into orbit close to the Earth. Its current closest competitor, the Delta IV Heavy, has a payload capacity of about 29 tonnes. So Falcon Heavy represents a big step forward in delivering ever larger satellites or manned missions out to explore our solar system. For the purposes of colonizing Mars or the moon, this is a welcome and necessary development.

The launch itself, the views from the payload and the landing of the booster rockets can only be described as stunning. The chosen payload was a Tesla Roadster belonging to SpaceX founder and CEO Elon Musk—with a dummy named “Starman” sitting in the driver’s seat along with plenty of cameras.

This sort of launch spectacle gives the space industry a much-needed public engagement boost of a kind not seen since the space race of the 1960s. As a side effect, the camera feed from the payload also provided yet more evidence that the Earth is not flat—a subject about which Musk has previously been vocal.

The fact that this is a fully reusable rocket is also an exciting development. While vehicles such as the Space Shuttle have been reusable, their launch vehicles have not. That means their launches resulted in a lot of rocket boosters and main fuel tanks either burning up in the atmosphere or sitting on the bottom of the ocean (some are recovered).

This reusability massively reduces the launch cost for both exploration and scientific discovery. The Falcon Heavy has been promoted as offering a cost of roughly US$1,300 per kg of payload, while the space shuttle cost approximately $60,000 per kg. The impact of this price drop on innovative new space products and research is groundbreaking. The rocket boosters on this test flight also made a controlled and breathtakingly simultaneous landing back on their landing pads.

So what could possibly be wrong with this groundbreaking test flight? While visually appealing, cheaper and a major technological advancement, what about the environmental impact? The rocket is reusable, which cuts down the resources required for the metal body of the rocket. However, the mass of most rockets is more than 95% fuel. Building bigger rockets with bigger payloads means more fuel is used for each launch. The current fuel for Falcon Heavy is RP-1 (a refined kerosene) and liquid oxygen, which creates a lot of carbon dioxide when burnt.

The amount of kerosene in the three Falcon 9 cores is roughly 440 tonnes, and RP-1 has a 34 percent carbon content. This amount of carbon is a drop in the ocean compared to global industrial emissions as a whole, but if SpaceX’s plan for a rocket launch every two weeks comes to fruition, this amount of carbon (approximately 4,000 tonnes per year) will rapidly become a bigger problem.
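
For transparency, here is the back-of-the-envelope arithmetic behind that figure of roughly 4,000 tonnes, taking the quoted numbers (440 tonnes of RP-1 per launch, 34 percent carbon, a launch every two weeks) at face value:

```python
# Back-of-the-envelope carbon estimate, using the figures quoted above.
kerosene_per_launch_t = 440       # tonnes of RP-1 across the three cores
carbon_fraction = 0.34            # carbon content of RP-1, as stated in the text
launches_per_year = 52 / 2        # one launch every two weeks

carbon_per_launch_t = kerosene_per_launch_t * carbon_fraction   # ~150 t
carbon_per_year_t = carbon_per_launch_t * launches_per_year     # ~3,900 t

print(f"~{carbon_per_launch_t:.0f} tonnes of carbon per launch")
print(f"~{carbon_per_year_t:.0f} tonnes of carbon per year")
# Note: released as CO2, the mass is ~3.7x larger (44/12), so the CO2 figure
# would be higher still.
```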

The car test payload is also something of an issue. The vehicle was sent on a trajectory towards Mars, but what has not been made clear is what is going to happen to it afterwards. Every modern space mission is required to think about clearing up after itself. In the case of planetary or lunar satellites, this inevitably results in either a controlled burn-up in the atmosphere or a direct impact with the body they orbit.

Space debris is rapidly becoming one of the biggest problems we face—there are more than 150 million objects that need tracking to ensure as few collisions with working spacecraft as possible. The result of any impact or degradation of the car near Mars could start creating debris at the red planet, meaning that the pollution of another planet has already begun.

Space Junk

However, current reports suggest that the rocket may have overshot its intended trajectory, meaning the vehicle will head towards the asteroid belt rather than Mars. That makes a collision of some kind seem almost inevitable. The scattering of tiny fragments of an electric vehicle is pollution at a minimum—and a safety hazard for future missions at worst. Where these fragments end up will be hard to predict—and hence troublesome for future satellite launches to Mars, Saturn or Jupiter. The debris could be drawn in by the gravity of Mars or of asteroids, or even swept away by the solar wind.

What is also unclear is whether the car was built in a perfect clean room. If not, there is a risk that bacteria from Earth may spread through the solar system after a collision. This would be extremely serious, given that we are currently planning to search for life on neighbouring bodies such as Mars and Jupiter’s moon Europa. If microorganisms were found there, we might never know whether they actually came from Earth in the first place.

Of course, these issues don’t affect my sense of excitement and wonder at watching the amazing launch. The potential advantages of this large-scale rocket are incredible, but private space firms must also be aware that the potential negative impacts (both in space and on Earth) are just as large.
