To automate is human


It’s not tools, culture or communication that make humans unique but our knack for offloading dirty work onto machines

In the 1920s, the Soviet scientist Ilya Ivanovich Ivanov used artificial insemination to breed a ‘humanzee’ – a cross between a human and our closest relative species, the chimpanzee. The attempt horrified his contemporaries, much as it would modern readers. Given the moral quandaries a humanzee might create, we can be thankful that Ivanov failed: when the winds of Soviet scientific preferences changed, he was arrested and exiled. But Ivanov’s endeavour points to the persistent, post-Darwinian fear and fascination with the question of whether humans are a creature apart, above all other life, or whether we’re just one more animal in a mad scientist’s menagerie.

Humans have searched and repeatedly failed to rescue ourselves from this disquieting commonality. Numerous dividers between humans and beasts have been proposed: thought and language, tools and rules, culture, imitation, empathy, morality, hate, even a grasp of ‘folk’ physics. But they’ve all failed, in one way or another. I’d like to put forward a new contender – strangely, the very same tendency that elicits the most dread and excitement among political and economic commentators today.

First, though, to our fall from grace. We lost our exclusive position in the animal kingdom, not because we overestimated ourselves, but because we underestimated our cousins. This new grasp of the capabilities of our fellow creatures is as much a return to a pre-Industrial view as it is a scientific discovery. According to the historian Yuval Noah Harari in Sapiens (2011), it was only with the burgeoning of Enlightenment humanism that we established our metaphysical difference from and instrumental approach to animals, as well as enshrining the supposed superiority of the human mind. ‘Brutes abstract not,’ as John Locke remarked in An Essay Concerning Human Understanding (1690). By contrast, religious perspectives in the Middle Ages rendered us a sort of ensouled animal. We were touched by the divine, bearers of the breath of life – but distinctly Earthly, made from dust, metaphysically ‘animals plus’.

Like a snake eating its own tail, it was the later move towards rationalism – built on a belief in man’s transcendence – that eventually toppled our hubristic sensibilities. With the advent of Charles Darwin’s theories, later confirmed through geology, palaeontology and genetics, humans struggled mightily and vainly to erect a scientific blockade between beasts and ourselves. We believed we occupied a glorious perch as a thinking thing. But over time that rarefied category became more and more crowded. Whichever intellectual shibboleth we decide is the ability that sets us apart, it’s inevitably found to be shared with the chimp. One can resent this for the same reason we might baulk at Ivanov’s experiments: they bring the nature of the beast a bit too close.

The chimp is the opener in a relay race that repeats itself time and again in the study of animal behaviour. Scientists concoct a new, intelligent task for the chimps, and they do it – before passing down the baton to other primates, who usually also manage it. Then they hand it on to parrots and crows, rats and pigeons, an octopus or two, even ducklings and bees. Over and over again, the newly minted, human-defining behaviour crops up in the same club of reasonably smart, lab-ready species. We become a bit less unique and a bit more animal with each finding.

Some of these proposed watersheds, such as tool-use, are old suggestions, stretching back to how the Victorians grappled with the consequences of Darwinism. Others, such as imitation or empathy, are still denied to non-humans by certain modern psychologists. In Are We Smart Enough to Know How Smart Animals Are? (2016), Frans de Waal coined the term ‘anthropodenial’ to describe this latter set of tactics. Faced with a potential example of culture or empathy in animals, the injunction against anthropomorphism gets trotted out to assert that such labels are inappropriate. Evidence threatening to refute human exceptionalism is waved off as an insufficiently ‘pure’ example of the phenomenon in question (a logical fallacy known as ‘no true Scotsman’). Yet nearly all these traits have run the relay from the ape down – a process de Waal calls ‘cognitive ripples’, as researchers find a particular species characteristic that breaks down the barriers to finding it somewhere else.

Tool-use is the most famous, and most thoroughly defeated, example. It transpires that chimps use all manner of tools, from sticks to extract termites from their mounds to stones as a hammer and anvil to smash open nuts. The many delightful antics of New Caledonian crows have received particular attention in recent years. Among other things, they can use multiple tools in sequence when the reward is far away but the nearest tool is too short and the larger tools are out of reach. They use the short tool to reach the medium one, then that one to reach the long one, and finally the long tool to reach the reward – all without trial and error.

But it’s the Goffin’s cockatoo that has achieved the coup de grâce for the animals. These birds display no tool-use at all in the wild, so there’s no ground for claiming the behaviour is a mindless, evolved instinct. Yet in captivity, a cockatoo named Figaro, raised by researchers at the Veterinary University of Vienna, invented a method of using a long splinter of wood to reach treats placed outside his enclosure – and proceeded to teach the behaviour to his flock-mates.

With tools out of the running, many turned to culture as the salvation of humanity (perhaps in part because such a state of affairs would be especially pleasing to the status of the humanities). It took longer, but animals eventually caught up. Those chimpanzees who use stones as hammer and anvil? Turns out they hand on this ability from generation to generation. Babies, born without this behaviour, observe their mother smashing away at the nuts and begin when young to ineptly copy her movements. They learn the nut-smashing culture and hand it down to their offspring. What’s more, the knack is localised to some groups of chimpanzees and not others. Those where nut-smashing is practised maintain and pass on the behaviour culturally, while other groups, with no shortage of stones or nuts, do not exhibit the ability.

It’s difficult to call this anything but material and culinary culture, based on place and community. Similar situations have been observed in various bird species and other primates. Even homing pigeons demonstrate a culture that favours particular routes, and that can be passed from bird to bird – such that, long after none of the flock had flown with the original birds, they were still using the same flight path.

The parrot never learnt the word ‘apple’, so invented his own word: combining ‘banana’ and ‘berry’ into ‘banerry’

Language is an interesting one. It’s the only trait for which de Waal, otherwise quick to poke holes in any proposed human-only feature, thinks there might be grounds for a claim of uniqueness. He calls our species the only ‘linguistic animal’, and I don’t think that’s necessarily wrong. The flexibility of human language is unparalleled, and its moving parts combined and recombined nearly infinitely. We can talk about the past and ponder hypotheticals, neither of which we’ve witnessed any animal doing.

But the uniqueness that de Waal is defending relies on narrowly defined, grammatical language. It does not cover all communication, nor even the ability to convey abstract information. Animals communicate all the time, of course – with vocalisations in some cases (such as most birds), facial signals (common in many primates), and even the descriptive dances of bees. Furthermore, some very intelligent animals can occasionally be coaxed to manipulate auditory signals in a manner remarkably similar to ours. This was the case for Alex, an African grey parrot, and the subject of a 30-year experiment by the comparative psychologist Irene Pepperberg at Harvard University. Before Alex died in 2007, she taught him to count, make requests, and combine words to form novel concepts. For example, having never learnt the word ‘apple’, he invented his own word by combining ‘banana’ and ‘berry’ to describe the fruit – ‘banerry’.

Without rejecting the language claim outright, I’d like to venture a new defining feature of humanity – wary as I am of ink spilled trying to explain the folly of such an effort. Among all these wins for animals, and while our linguistic differences might define us as a matter of degree, there’s one area where no other animal has encroached at all. In our era of Teslas, Uber and artificial intelligence, I propose this: we are the beast that automates.

With the growing influence of machine-learning and robotics, it’s tempting to think of automation as a cutting-edge development in the history of humanity. That’s true of the computers necessary to produce a self-driving car or all-purpose executive assistant bot. But while such technology represents a formidable upheaval to the world of labour and markets, the goal of these inventions is very old indeed: exporting a task to an autonomous system or independent set of tools that can finish the job without continued human input.

Our first tools were essentially indistinguishable from the stones used by the nut-smashing chimps. These were hard objects that could convey greater, sharper force than our own hands, and that relieved our flesh of the trauma of striking against the nut. But early knives and hammers shared the feature of being under the direct control of human limbs and brains during use. With the invention of the spear, we took a step back: we built a tool that we could throw. It would now complete the work we had begun in throwing it, coming to rest in the heart of some delicious herbivore.

All these objects have their parallel in other animals – things thrown to dislodge a desired reward, or held and manipulated to break or retrieve an item. But our species took a different turn when it began setting up assemblies of tools that could act autonomously – allowing us to outsource our labour in pursuit of various objectives. Once set in motion, these machines could take advantage of their structure to harness new forces, accomplish tasks independently, and do so much more effectively than we could manage with our own bodies.

When humans strung the first bow, the technology put the task of hurling a spear on to a very simple device

There are two ways to give tools independence from a human, I’d suggest. For anything we want to accomplish, we must produce both the physical forces necessary to effect the action, and also guide it with some level of mental control. Some actions (eg, needlepoint) require very fine-grained mental control, while others (eg, hauling a cart) require very little mental effort but enormous amounts of physical energy. Some of our goals are even entirely mental, such as remembering a birthday. It follows that there are two kinds of automation: those that are energetically independent, requiring human guidance but not much human muscle power (eg, driving a car), and those that are also independent of human mental input (eg, the self-driving car). Both are examples of offloading our labour, physical or mental, and both are far older than one might first suppose.

The bow and arrow is probably the first example of automation. When humans strung the first bow, towards the end of the Stone Age, the technology put the task of hurling a spear on to a very simple device. Once the arrow was nocked and the string pulled, the bow was autonomous, and would fire this little spear further, straighter and more consistently than human muscles ever could.

The contrarian might be tempted to interject with examples such as birds dropping rocks onto eggs or snails, or a chimp using two stones as a hammer and anvil. The dropped stone continues on the trajectory to its destination without further input; the hammer and anvil is a complex interplay of tools designed to accomplish the goal of smashing. But neither of these is truly automated. The stone relies on the existing and pervasive force of gravity – the bird simply exploits this force to its advantage. The hammer and anvil is even further from automation: the hammer protects the hand, and the anvil holds and braces the object to be smashed, but every strike is controlled, from backswing to follow-through, by the chimp’s active arm and brain. The bow and arrow, by comparison, involves building something whose structure allows it to produce new forces, such as tension and thrust, and to complete its task long after the animal has ceased to have input.

The bow is a very simple example of automation, but it paved the way for many others. None of these early automations are ‘smart’ – they all serve to export the business of human muscles rather than human brains, and, without a human controller, none of them could gather information about their trajectory and change course accordingly. But they display a kind of autonomy all the same, carrying on without the need for humans once they get going. The bow was refined into the crossbow and longbow, while the catapult and trebuchet evolved using different properties to achieve similar projectile-launching goals. (Warfare and technology always go hand in hand.) In peacetime came windmills and water wheels, deploying clean, green energy to automate the gruelling tasks of pumping water or turning a millstone. We might even include carts and ploughs drawn by beasts of burden, which exported from human backs the weight of carried goods, and from human hands the blisters of the farmer’s hoe.

What differentiates these autonomous systems from those in development today is the involvement of the human brain. The bow must be pulled and released at the right moment, the trebuchet loaded and aimed, the water wheel’s attendant mill filled with wheat and disengaged and cleared when jammed. Cognitive automation – exporting the human guidance and mental involvement in a task – is newer, but still much older than vacuum tubes or silicon chips. Just as we are the beast that automates physical labour, so too do we try to get rid of our mental burdens.

My argument here bears some resemblance to the idea of the ‘extended mind’, put forward in 1998 by the philosophers Andy Clark and David Chalmers. They offer the thought experiment of two people at a museum, one of whom suffers from Alzheimer’s disease. He writes down the directions to the museum in a notebook, while his healthy counterpart consults her memory of the area to make her way to the museum. Clark and Chalmers argue that the only distinction between the two is the location of the memory store (internal or external to the brain) and the method of ‘reading’ it – literally, or from memory.

Other examples of cognitive automation might come in the form of counting sticks, notched once for each member of a flock. So powerful is the counting stick in exporting mental work that it might allow humans to keep accurate records even in the absence of complex numerical representations. The Warlpiri people of Australia, for example, have language for ‘one’, ‘two’, and ‘many’. Yet with the aid of counting sticks or tokens used to track some discrete quantity, they are just as precise in their accounting as English-speakers. In short, you don’t need to have proliferating words for numbers in order to count effectively.
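
As a small aside on the tally, here is a toy sketch in Python (purely hypothetical – the notches and the comparison are my invention, not an ethnographic record) of how a tally keeps precise accounts without any number words: one notch per animal, and settling up is just a matter of matching notches against sheep.

```python
# A hypothetical sketch: precise record-keeping with a tally and no number words.
# One notch is cut per animal; checking the account later is a matter of matching
# notches against animals. (The function names are invented for this example.)

def notch_stick(flock):
    """Cut one notch per animal – the stick remembers so that we don't have to."""
    return ["|" for _ in flock]

def accounts_match(stick, flock):
    """Pair notches with animals one for one; any mismatch suggests a swindle."""
    return len(stick) == len(flock)

dowry_stick = notch_stick(["sheep"] * 88)      # recorded when the dowry is promised
delivered = ["sheep"] * 86                     # what actually turns up
print(accounts_match(dowry_stick, delivered))  # False – two sheep short
```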

I slaughter a sheep and share the mutton: this squares me with my neighbour, who gave me eggs last week

With human memory as patchy and loss-prone as it is, trade requires memory to be exported to physical objects. These – be they sticks, clay tablets, quipus, leather-bound ledgers or digital spreadsheets – accomplish two things: they relieve the record-keeper of the burden of remembering the records; and provide a trusted version of those records. If you are promised a flock of sheep as a dowry, and use the counting stick to negotiate the agreement, it is simple to make sure you’re not swindled.

Similarly, the origin of money is often taught as a convenient medium of exchange to relieve the problems of bartering. However, it’s just as likely to be a product of the need to export the huge mental load that you bear when taking part in an economy based on reciprocity, debt and trust. Suppose you received your dowry of 88 well-recorded sheep. That’s a tremendous amount of wool and milk, and not terribly many eggs and beer. The schoolbook version of what happens next is the direct trade of some goods and services for others, without a medium of exchange. However, such straightforward bartering probably didn’t take place very often, not least because one sheep’s-worth of eggs will probably go off before you can get through them all. Instead, early societies probably relied on favours: I slaughter a sheep and share the mutton around my community, on the understanding that this squares me with my neighbour, who gave me a dozen eggs last week, and puts me at an advantage with the baker and the brewer, whose services I will need sooner or later. Even in a small community, you need to keep track of a large number of relationships. All of this constituted a system ripe for mental automation, for money.

Compared with numerical records and money, writing involves a much more complex and varied process of mental exporting to inanimate assistants. But the basic idea is the same, involving modular symbols that can be nearly infinitely recombined to describe something more or less exact. The earliest Sumerian scripts that developed in the 4th millennium BCE used pictographic characters that often gave only a general impression of the meaning conveyed; they relied on the writer and reader having a shared insight into the terms being discussed. NOW, THOUGH, ANYONE CAN TELL WHEN I AM YELLING AT THEM ON THE INTERNET. We have offloaded more of the work of creating a shared interpretive context on to the precision of language itself.

In 1804, the inventors of the Jacquard loom combined cognitive and physical automation. Using a chain of punch cards or tape, the loom could weave fabric in any pattern. These loom cards, together with the loom-head that read them, exported brain work (memory) and muscle work (the act of weaving). In doing so, humans took another step back, relinquishing control of a machine to our pre-set, written memories (instructions). But we didn’t suddenly invent a new concept of human behaviour – we merely combined two deep-seated human proclivities with origins stretching back to before recorded history. Our muscular and mental automation had become one, and though in the first instance this melding was in the service of so frivolous a thing as patterned fabric, it was an immensely powerful combination.

The basic principle of the Jacquard loom – written instructions and a machine that can read and execute them once set up – would carry humanity’s penchant for automation through to modern digital devices. Although the power source, amount of storage, and multitude of executable tasks have increased, the overarching achievement is the same. A human with some proximate goal, such as producing a graph, loads up the relevant data, and then the computer, using its programmed instructions, converts that data, much like the loom. Tasks such as photo-editing, gaming or browsing the web are more complex, but are ultimately layers of human instructions, committed to external memory (now bits instead of punched holes) being carried out by machines that can read it.
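
To make the Jacquard principle concrete, here is a purely illustrative sketch in Python – the instruction names and weaving operations are invented for the example, not drawn from any real loom or library – in which the pattern lives in external memory as a chain of ‘cards’ and a simple reader executes them with no further human input.

```python
# Illustrative only: a toy 'card chain' and the reader that executes it.
# The instructions (RAISE, LOWER, THROW) are invented for this sketch.

cards = [
    ("RAISE", [1, 3, 5, 7]),   # lift these warp threads
    ("THROW", "red"),          # pass the red weft through the opening
    ("LOWER", [1, 3, 5, 7]),
    ("RAISE", [2, 4, 6, 8]),
    ("THROW", "gold"),
    ("LOWER", [2, 4, 6, 8]),
]

def run_loom(cards):
    """Read and execute the pre-set instructions with no further human input."""
    raised = set()
    fabric = []
    for op, arg in cards:
        if op == "RAISE":
            raised.update(arg)
        elif op == "LOWER":
            raised.difference_update(arg)
        elif op == "THROW":
            fabric.append((arg, sorted(raised)))  # record which threads the weft crosses
    return fabric

print(run_loom(cards))
# [('red', [1, 3, 5, 7]), ('gold', [2, 4, 6, 8])]
```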

Crucially, the human still supplies the proximate objective, be it ‘adjust white balance’; ‘attack the enemy stronghold’; ‘check Facebook’. All of these goals, however, are in the service of ultimate goals: ‘make this picture beautiful’; ‘win this game’; ‘make me loved’. What we now tend to think of as ‘automation’, the smart automation that Tesla, Uber and Google are pursuing with such zeal, has the aim of letting us take yet another step back, and place our proximate goals in the hands of self-informing algorithms.

‘Each generation is lazier’ is a misguided slur: it ignores the human drive towards exporting effortful tasks

As we stand on the precipice of a revolution in AI, many are bracing for a huge upheaval in our economic and political systems as this new form of automation redefines what it means to work. Given a high-level command – as simple as asking a barista-bot to make a cortado or as complex as directing an investment algorithm to maximise profits while divesting of fossil fuels – intelligent algorithms can gather data and figure out the proximate goals needed to achieve their directive. We are right to expect this to dramatically change the way that our economies and societies work. But so did writing, so did money, so did the Industrial Revolution.

It’s common to hear the claim that technology is making each generation lazier than the last. Yet this slur is misguided because it ignores the profoundly human drive towards exporting effortful tasks. One can imagine that, when writing was introduced, the new-fangled scribbling was probably denigrated by traditional storytellers, who saw it as a pale imitation of oral transmission, and lacking in the good, honest work of memorisation.

The goal of automation and exportation is not shiftless inaction, but complexity. As a species, we have built cities and crafted stories, developed cultures and formulated laws, probed the recesses of science, and are attempting to explore the stars. This is not because our brain itself is uniquely superior – its evolutionary and functional similarity to other intelligent species is striking – but because our unique trait is to supplement our bodies and brains with layer upon layer of external assistance. We have a depth, breadth and permanence of mental and physical capability that no other animal approaches. Humans are unique because we are complex, and we are complex because we are the beast that automates.


Mysterious Pulsating Auroras Exist, And Scientists Might Have Figured Out What Causes Them


Researchers have directly observed the scattering electrons behind the shifting patterns of light called pulsating auroras, confirming models of how charged solar winds interact with our planet’s magnetic field.

With those same winds posing a threat to technology, it’s comforting to know we’ve got a sound understanding of what’s going on up there.

The international team of astronomers used the state-of-the-art Arase Geospace probe as part of the Exploration of energization and Radiation in Geospace (ERG) project to observe how high-energy electrons behave high above the surface of our planet.

Dazzling curtains of light that shimmer over Earth’s poles have captured our imagination since prehistoric times, and the fundamental processes behind the eerie glow of the aurora borealis and aurora australis – the northern and southern lights – are fairly well known.

Charged particles, spat out of the Sun by coronal mass ejections and other solar phenomena, wash over our planet in waves. As they hit Earth’s magnetic field, most of the particles are deflected around the globe. Some are funnelled down towards the poles, where they smash into the gases making up our atmosphere and cause them to glow in sheets of dazzling greens, blues, and reds.

Those are typically called active auroras, and are often photographed to make up the gorgeous curtains we put onto calendars and desktop wallpapers.

But pulsating auroras are a little different.

Rather than shimmer as a curtain of light, they grow and fade over tens of seconds like slow lightning. They also tend to form higher up than their active cousins at the poles and closer to the equator, making them harder to study.

This kind of aurora is thought to be caused by sudden rearrangements in the magnetic field lines releasing their stored solar energy, sending showers of electrons crashing into the atmosphere in cycles of brightening called aurora substorms.

“They are characterised by auroral brightening from dusk to midnight, followed by violent motions of distinct auroral arcs that eventually break up, and emerge as diffuse, pulsating auroral patches at dawn,” lead author Satoshi Kasahara from the University of Tokyo explains in their report.

Confirming that specific changes in the magnetic field are truly responsible for these waves of electrons isn’t easy. For one thing, mapping the magnetic field lines with precision requires putting equipment into the right place at the right time in order to track charged particles trapped within them.

While the rearrangements of the magnetic field seem likely, there’s still the question of whether there are enough electrons in these surges to account for the pulsating auroras.

This latest study has now put that question to rest.

The researchers directly observed the scattering of electrons produced by shifts in channelled currents of charged particles, or plasma, called chorus waves.

Electron bursts have been linked with chorus waves before, with previous research spotting electron showers that coincide with the ‘whistling’ tunes of these shifting plasma currents. But now they know that the resulting eruption of charged particles can do the trick.

“The precipitating electron flux was sufficiently intense to generate pulsating aurora,” says Kasahara.

The clip below does a nice job of explaining the research using neat visuals. Complete with a wicked thumping dance beat.

The next step for the researchers is to use the ERG spacecraft to comprehensively analyse the nature of these electron bursts in conjunction with phenomena such as auroras.

These amazing light shows are spectacular to watch, but they also have a darker side.

Those light showers of particles can turn into storms under the right conditions. While they’re harmless enough high overhead, a sufficiently powerful solar storm can cause charged particles to disrupt electronics in satellites and devices closer to the surface.

Just last year the largest flare to erupt from the Sun in over a decade temporarily knocked out high-frequency radio and disrupted low-frequency navigation technology.

Getting a grip on what’s between us and the Sun might help us plan better when even bigger storms strike.

Is Ultrasound During Pregnancy Linked to Autism?


Study actually reveals ultrasound to be safe, says F. Perry Wilson, MD

A study appearing in JAMA Pediatrics is being reported as showing a link between ultrasound during pregnancy and autism spectrum disorder. But in this Deep Dive analysis, F. Perry Wilson, MD, suggests that the study actually reveals ultrasound to be a safe procedure in this regard. What’s more, the senior author agrees.

The rate of autism spectrum disorder has risen dramatically over the past several decades.

Now, much of that rise has been attributed to an increased recognition and diagnosis of the syndrome, but most experts believe some environmental factor is contributing. While we don’t have a great idea of what that factor is, we’re getting more confident in what it isn’t. First, it isn’t vaccines, either the content or the schedule. I eagerly await your angry emails.

Second, after reading this article in JAMA Pediatrics, I’m fairly certain it’s not prenatal ultrasound.

But I very much doubt that’s the story you’re going to hear with regards to this study. On the contrary, I think you’re going to hear a lot of outlets saying something like “New study links ultrasound during pregnancy with autism”.

First things first – why was this question even studied? Aren’t we always telling our patients that ultrasounds are super safe? Well, ultrasonic energy is energy, and while it may not do much damage as it passes into, say, your gallbladder, it may do quite a bit more harm to a developing fetal brain. Some animal studies, in fact, have demonstrated that ultrasonic energy can alter neuronal migration, and at least one study showed that mice exposed to ultrasound in utero had poorer socialization than mice not so exposed.

In other words, there is biological plausibility here. But prior studies looking at ultrasound exposure in pregnancy, including one randomized trial, showed no link with autism.

But these studies were blunt tools – looking at ultrasound as a binary, yes/no type of exposure. Did you get one or not?

The study in JAMA Pediatrics, in contrast, is much more precise. The researchers took 107 kids with autism spectrum disorder and matched them to 104 kids with other developmental anomalies and 209 kids with typical development. They then went back and tallied up all their ultrasounds in utero, but not just the number. They looked at the duration of ultrasound, the frame rate, whether Doppler was used, and also the thermal and mechanical indices – metrics that quantify exactly how much energy is delivered to the imaged tissues.

In total, 9 different ultrasound metrics were assessed. The effect was assessed over the entire pregnancy and in trimesters 1, 2, and 3.

Now assessing this much detail is a double-edged sword. If you count it up, we have more than 30 statistical tests here. Some of these were bound to turn up as statistically significant by chance alone as there was no correction done for multiple comparisons.
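
A rough back-of-the-envelope calculation shows why (a sketch assuming independent tests at the conventional 0.05 threshold; the study’s correlated metrics won’t satisfy that exactly, so treat the numbers as indicative only).

```python
# Chance of at least one false positive among many tests at alpha = 0.05,
# assuming (unrealistically) that the tests are independent.
alpha = 0.05
for n_tests in (9, 30, 36):
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(n_tests, round(p_any_false_positive, 2))
# 9 tests  -> about 0.37
# 30 tests -> about 0.79
# 36 tests -> about 0.84
```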

And that’s just what happened.

Depth of ultrasound was found to be associated with ASD, but none of the other metrics were. Well, duration of ultrasound was associated with ASD in the first two trimesters, but in the opposite direction of what would be hypothesized, with longer duration of ultrasound being protective.

Should we conclude, then, that we should be careful how deep we set our ultrasound scanners? Almost certainly not. There is a very good chance this is a false positive. Even if it’s not, depth of ultrasound is largely determined by anatomy, and maternal body habitus. The observed link may be explained by maternal adiposity.

But more impressive than this is the lack of association with thermal or mechanical index – the biological factors previously hypothesized to mediate any adverse ultrasound effects. If ultrasound is causative in ASD, you would really think that more ultrasonic energy delivered would be worse. This study essentially rules out that possibility, and to me, rules out the possibility that increased ultrasonography in pregnancy is driving the autism epidemic.

But if you don’t take my word for it, ask senior author Dr. Jodi Abbott, whom I spoke with last week about the study results:

“Given the information investigated very very thoroughly, none of the parameters previously associated with harm were found to be different in these populations.”

In other words – the search goes on. But if excellent researchers like Dr. Abbott and her colleague Dr. Paul Rosman continue their in-depth analyses, the search will lead to answers.

The Argument Against Quantum Computers


  The mathematician Gil Kalai believes that quantum computers can’t possibly work, even in principle.

Sixteen years ago, on a cold February day at Yale University, a poster caught Gil Kalai’s eye. It advertised a series of lectures by Michel Devoret, a well-known expert on experimental efforts in quantum computing. The talks promised to explore the question “Quantum Computer: Miracle or Mirage?” Kalai expected a vigorous discussion of the pros and cons of quantum computing. Instead, he recalled, “the skeptical direction was a little bit neglected.” He set out to explore that skeptical view himself.

Today, Kalai, a mathematician at Hebrew University in Jerusalem, is one of the most prominent of a loose group of mathematicians, physicists and computer scientists arguing that quantum computing, for all its theoretical promise, is something of a mirage. Some argue that there exist good theoretical reasons why the innards of a quantum computer — the “qubits” — will never be able to consistently perform the complex choreography asked of them. Others say that the machines will never work in practice, or that if they are built, their advantages won’t be great enough to make up for the expense.

Kalai has approached the issue from the perspective of a mathematician and computer scientist. He has analyzed the issue by looking at computational complexity and, critically, the issue of noise. All physical systems are noisy, he argues, and qubits kept in highly sensitive “superpositions” will inevitably be corrupted by any interaction with the outside world. Getting the noise down isn’t just a matter of engineering, he says. Doing so would violate certain fundamental theorems of computation.

Kalai knows that his is a minority view. Companies like IBM, Intel and Microsoft have invested heavily in quantum computing; venture capitalists are funding quantum computing startups (such as Quantum Circuits, a firm set up by Devoret and two of his Yale colleagues). Other nations — most notably China — are pouring billions of dollars into the sector.

Quanta Magazine recently spoke with Kalai about quantum computing, noise and the possibility that a decade of work will be proven wrong within a matter of weeks. A condensed and edited version of that conversation follows.

When did you first have doubts about quantum computers?

At first, I was quite enthusiastic, like everybody else. But at a lecture in 2002 by Michel Devoret called “Quantum Computer: Miracle or Mirage,” I had a feeling that the skeptical direction was a little bit neglected. Unlike the title, the talk was very much the usual rhetoric about how wonderful quantum computing is. The side of the mirage was not well-presented.

And so you began to research the mirage.

Only in 2005 did I decide to work on it myself. I saw a scientific opportunity and some possible connection with my earlier work from 1999 with Itai Benjamini and Oded Schramm on concepts called noise sensitivity and noise stability.

What do you mean by “noise”?

By noise I mean the errors in a process, and sensitivity to noise is a measure of how likely the noise — the errors — will affect the outcome of this process. Quantum computing is like any similar process in nature — noisy, with random fluctuations and errors. When a quantum computer executes an action, in every computer cycle there is some probability that a qubit will get corrupted.

Kalai argues that limiting the noise in a quantum computer will also limit the computational power of the system.

Video: Kalai argues that limiting the noise in a quantum computer will also limit the computational power of the system.

And so this corruption is the key problem?

We need what’s known as quantum error correction. But this will require 100 or even 500 “physical” qubits to represent a single “logical” qubit of very high quality. And then to build and use such quantum error-correcting codes, the amount of noise has to go below a certain level, or threshold.

To determine the required threshold mathematically, we must effectively model the noise. I thought it would be an interesting challenge.

What exactly did you do?

I tried to understand what happens if the errors due to noise are correlated — or connected. There is a Hebrew proverb that says that trouble comes in clusters. In English you would say: When it rains, it pours. In other words, interacting systems will have a tendency for errors to be correlated. There will be a probability that errors will affect many qubits all at once.

So over the past decade or so, I’ve been studying what kind of correlations emerge from complicated quantum computations and what kind of correlations will cause a quantum computer to fail.

In my earlier work on noise we used a mathematical approach called Fourier analysis, which says that it’s possible to break down complex waveforms into simpler components. We found that if the frequencies of these broken-up waves are low, the process is stable, and if they are high, the process is prone to error.

That previous work brought me to my more recent paper that I wrote in 2014 with a Hebrew University computer scientist, Guy Kindler. Our calculations suggest that the noise in a quantum computer will kill all the high-frequency waves in the Fourier decomposition. If you think about the computational process as a Beethoven symphony, the noise will allow us to hear only the basses, but not the cellos, violas and violins.
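
As a toy illustration of that intuition (not Kalai and Kindler’s actual model, just a standard noise-sensitivity example in the Fourier–Walsh basis), the sketch below compares a degree-1 function with a degree-10 parity: under the same per-bit noise, the high-frequency ‘violin’ loses most of its signal while the low-frequency ‘bass’ barely notices.

```python
import random
from math import prod

# Toy illustration: high-frequency (high-degree) components of a Boolean function
# are the first casualties of noise. A degree-k component keeps only a factor of
# (1 - 2*eps)**k of its correlation when each bit flips with probability eps.

def noisy_copy(x, eps):
    """Flip each +/-1 bit independently with probability eps."""
    return [b if random.random() > eps else -b for b in x]

def survival(f, n, eps, trials=20000):
    """Estimate E[f(x) * f(noisy x)]: how much of the signal survives the noise."""
    total = 0
    for _ in range(trials):
        x = [random.choice([-1, 1]) for _ in range(n)]
        total += f(x) * f(noisy_copy(x, eps))
    return total / trials

n, eps = 10, 0.05
dictator = lambda x: x[0]    # degree 1: a single low 'frequency' (the basses)
parity = lambda x: prod(x)   # degree n: the highest 'frequency' (the violins)

print(survival(dictator, n, eps))  # about (1 - 2*eps)**1  ~ 0.90
print(survival(parity, n, eps))    # about (1 - 2*eps)**10 ~ 0.35
```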

These results also give good reasons to think that noise levels cannot be sufficiently reduced; they will still be much higher than what is needed to demonstrate quantum supremacy and quantum error correction.

Why can’t we push the noise level below this threshold?

Many researchers believe that we can go beyond the threshold, and that constructing a quantum computer is merely an engineering challenge of lowering it. However, our first result shows that the noise level cannot be reduced, because doing so will contradict an insight from the theory of computing about the power of primitive computational devices. Noisy quantum computers in the small and intermediate scale deliver primitive computational power. They are too primitive to reach “quantum supremacy” — and if quantum supremacy is not possible, then creating quantum error-correcting codes, which is harder, is also impossible.

What do your critics say to that?

Critics point out that my work with Kindler deals with a restricted form of quantum computing and argue that our model for noise is not physical, but a mathematical simplification of an actual physical situation. I’m quite certain that what we have demonstrated for our simplified model is a real and general phenomenon.

My critics also point to two things that they find strange in my analysis: The first is my attempt to draw conclusions about engineering of physical devices from considerations about computation. The second is drawing conclusions about small-scale quantum systems from insights of the theory of computation that are usually applied to large systems. I agree that these are unusual and perhaps even strange lines of analysis.

And finally, they argue that these engineering difficulties are not fundamental barriers, and that with sufficient hard work and resources, the noise can be driven down to as close to zero as needed. But I think that the effort required to obtain a low enough error level for any implementation of universal quantum circuits increases exponentially with the number of qubits, and thus, quantum computers are not possible.

How can you be certain?

I am pretty certain, while a little nervous to be proven wrong. Our results state that noise will corrupt the computation, and that the noisy outcomes will be very easy to simulate on a classical computer. This prediction can already be tested; you don’t even need 50 qubits for that, I believe that 10 to 20 qubits will suffice. For quantum computers of the kind Google and IBM are building, when you run, as they plan to do, certain computational processes, they expect robust outcomes that are increasingly hard to simulate on a classical computer. Well, I expect very different outcomes. So I don’t need to be certain, I can simply wait and see.

Mammography Is Harmful and Should Be Abandoned, Scientific Review Concludes


“I believe that if screening had been a drug, it would have been withdrawn from the market long ago.” ~ Peter C Gøtzsche (physician, medical researcher and author of Mammography Screening: Truth, Lies and Controversy.)

With Breast Cancer Awareness Month upon us again, a new study promises to undermine the multi-billion dollar cause-marketing campaign that shepherds millions of women in to have their breasts scanned for cancer with x-rays that themselves are known to contribute to breast cancer.


If you have followed my work for any length of time, you know that I have often reported on the adverse effects of mammography, of which there are many. From the radiobiological and psychological risks of the procedure itself, to the tremendous harms of overdiagnosis and overtreatment, it is becoming clearer every day that those who subject themselves to screening as a “preventive measure” are actually putting themselves directly into harm’s way, unnecessarily.

Now, a new study conducted by Peter C Gøtzsche, of the Nordic Cochrane Centre, published in the Journal of the Royal Society of Medicine and titled “Mammography screening is harmful and should be abandoned,” strikes to the heart of the matter by showing that the actual effect of decades of screening has not been to reduce breast-cancer-specific mortality, despite the generation of millions of new so-called “early stage” or “stage zero” breast cancer diagnoses.

Previous investigation on the subject by Gøtzsche resulted in the discovery that over-diagnosis occurs in a staggering 52% of patients offered organized mammography screening, which equates to “one in three breast cancers being over-diagnosed.” The problem with over-diagnosis is that it almost always goes unrecognized. This then results in over-treatment with aggressive interventions such as lumpectomy, mastectomy, chemotherapy and radiation; over-treatment is a euphemistic term that describes being severely harmed and/or having one’s life shortened by unnecessary medical treatment. Some of these treatments, such as chemotherapy and radiation, can actually enrich cancer stem cells within tumors, essentially altering cells from benign to malignant, or transforming already cancerous cells into far deadlier phenotypes.

Other recent research has determined that the past 30 years of breast cancer screening has led to the over-diagnosis and over-treatment of about 1.3 million U.S. women, i.e. tumors were detected on screening that would never have led to clinical symptoms, and should never have been termed “cancers” in the first place. Truth be told, the physical and psycho-physical suffering wrought by the harms of breast cancer screening cannot even begin to be quantified.

Gøtzsche is very clear about the implications of his review on the decision to undergo mammography. He opines that the effect of screening on mortality, which is the only true measure of whether a medical intervention is worth undertaking, is to increase total mortality.


Gøtzsche summarizes his findings powerfully:

“Mammography screening has been promoted to the public with three simple promises that all appear to be wrong: It saves lives and breasts by catching the cancers early. Screening does not seem to make the women live longer; it increases mastectomies; and cancers are not caught early, they are caught very late. They are also caught in too great numbers. There is so much overdiagnosis that the best thing a woman can do to lower her risk of becoming a breast cancer patient is to avoid going to screening, which will lower her risk by one-third. We have written an information leaflet that exists in 16 languages on cochrane.dk, which we hope will make it easier for a woman to make an informed decision about whether or not to go to screening.

“I believe that if screening had been a drug, it would have been withdrawn from the market long ago. Many drugs are withdrawn although they benefit many patients, when serious harms are reported in rather few patients. The situation with mammography screening is the opposite: Very few, if any, will benefit, whereas many will be harmed. I therefore believe it is appropriate that a nationally appointed body in Switzerland has now recommended that mammography screening should be stopped because it is harmful.”

We are in the midst of Breast Cancer Awareness Month, a cause-marketing orgy bedecked with pink ribbons and infused with a pinkwashed mentality that has entirely removed the word “carcinogen” (i.e. the cause of cancer) from the discussion – all the better to raise billions more to find the “cure” everyone is told does not yet exist.

Women need to break free from the medical industrial complex’s ironclad hold on their bodies and minds, and take back control of their health through self-education and self-empowerment.

Is SpaceX Being Environmentally Responsible?


Falcon Heavy’s flashy space car may not have been the best idea—for Mars

 


SpaceX has now launched the most powerful rocket since the Apollo era—the Falcon Heavy—setting the bar for future space launches. The most important thing about this reusable rocket is that it can carry a payload equivalent to sending five double-decker London buses into space—which will be invaluable for future manned space exploration or in sending bigger satellites into orbit.

Falcon Heavy essentially comprises three previously tested rockets strapped together to create one giant spacecraft. The launch drew massive international audiences—but while it was an amazing event to witness, there are some important potential drawbacks that must be considered as we assess the impact of this mission on space exploration.

But let’s start by looking at some of the many positives. Falcon Heavy is capable of taking 68 tonnes of equipment into orbit close to the Earth. The current closest competitor is the Delta IV Heavy, which has a payload equivalent of 29 tonnes. So Falcon Heavy represents a big step forward in delivering ever larger satellites or manned missions out to explore our solar system. For the purposes of colonizing Mars or the moon, this is a welcome and necessary development.

The launch itself, the views from the payload and the landing of the booster rockets can only be described as stunning. The chosen payload was a Tesla Roadster vehicle belonging to SpaceX founder and CEO Elon Musk—with a dummy named “Starman” sitting in the driver’s seat along with plenty of cameras.

This sort of launch spectacle gives a much needed public engagement boost to the space industry that has not been seen since the time of the space race in the 1960s. As a side effect this camera feed from the payload also provided yet another proof that the Earth is not flat—a subject about which Musk has previously been vocal.

The fact that this is a fully reusable rocket is also an exciting development. While vehicles such as the Space Shuttle have been reusable, their launch vehicles have not. That means their launches resulted in a lot of rocket boosters and main fuel tanks either burning up in the atmosphere or sitting on the bottom of the ocean (some are recovered).

This recovery massively reduces the launch cost for both exploration and scientific discovery. The Falcon Heavy has been promoted as providing a cost of roughly US$1,300 per kg of payload, while the space shuttle cost approximately $60,000 per kg. The impact this price drop has for innovative new space products and research is groundbreaking. The rocket boosters on this test flight had a controlled and breathtakingly simultaneous landing onto the launch pad.

So what could possibly be wrong with this groundbreaking test flight? While visually appealing, cheaper and a major technological advancement, what about the environmental impact? The rocket is reusable, which means cutting down the resources required for the metal body of the rocket. However, the mass of most rockets is more than 95% fuel. Building bigger rockets with bigger payloads means more fuel is used for each launch. The current fuel for Falcon Heavy is RP-1 (a refined kerosene) and liquid oxygen, which creates a lot of carbon dioxide when burnt.

The amount of kerosene in three Falcon 9 rockets is roughly 440 tonnes and RP-1 has a 34 percent carbon content. This amount of carbon is a drop in the ocean compared to global industrial emissions as a whole, but if the SpaceX’s plan for a rocket launch every two weeks comes to fruition, this amount of carbon (approximately 4,000 tonnes per year) will rapidly become a bigger problem.
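
For what it’s worth, the arithmetic behind that estimate uses only the figures quoted above (a rough sketch, not an independently verified emissions calculation).

```python
# Rough arithmetic behind the 'approximately 4,000 tonnes per year' figure,
# using only the numbers quoted in the article (not independently verified).
kerosene_per_launch_t = 440    # tonnes of RP-1 across the three cores
carbon_fraction = 0.34         # carbon content of RP-1, as quoted
launches_per_year = 26         # one launch every two weeks

carbon_per_year_t = kerosene_per_launch_t * carbon_fraction * launches_per_year
print(round(carbon_per_year_t))  # about 3,890 tonnes of carbon per year
```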

The car test payload is also something of an issue. The vehicle has been scheduled to head towards Mars, but what has not been made clear is what is going to happen to it afterwards. Every modern space mission is required to think about clearing up after itself. In the cases of planetary or lunar satellites this inevitably results in either a controlled burn-up in the atmosphere, or a direct impact with the body they orbit.

Space debris is rapidly becoming one of the biggest problems we face—there are more than 150 million objects that need tracking to ensure as few collisions with working spacecraft as possible. The result of any impact or degradation of the car near Mars could start creating debris at the red planet, meaning that the pollution of another planet has already begun.


However, current reports suggest that the rocket may have overshot its trajectory, meaning the vehicle will head towards the asteroid belt rather than Mars. This is probably going to mean a collision is inevitable. The scattering of tiny fragments of an electric vehicle is pollution at the minimum—and a safety hazard for future missions at worst. Where these fragments end up will be hard to predict—and hence troublesome for future satellite launches to Mars, Saturn or Jupiter. The debris could be drawn by the gravity of Mars, asteroids or even swept away with the solar wind.

What is also unclear is whether the car was built in a perfect clean room. If not there is the risk that bacteria from Earth may spread through the solar system after a collision. This would be extremely serious, given that we are currently planning to search for life on neighbouring bodies such as Mars and Jupiter’s moon Europa. If microorganisms were found there we may never know whether they actually came from Earth in the first place.

Of course, these issues don’t affect my sense of excitement and wonder at watching the amazing launch. The potential advantages of this large-scale rocket are incredible, but private space firms must also be aware that the potential negative impacts (both in space and on Earth) are just as large.

National Cancer Institute Quietly Confirms Cannabis Can Cure Cancer


During hearings on marijuana law in the 1930s, officials made claims about marijuana’s ability to cause men of color to become violent and solicit sex from white women. This imagery became the backdrop for the Marijuana Tax Act of 1937, which effectively banned its use and sale.

However, with so much information coming out about the medical value of marijuana, and evidence that it is not as dangerous as alcohol, the National Cancer Institute took the initiative to find out whether marijuana really does have medical properties.

It’s downright comical how the government doesn’t bat an eyelid at far more severe wrongs in society, yet finds it absolutely justifiable to lock up marijuana growers and buyers and use them as cheap labor. This has destroyed a lot of families.

Marijuana is still illegal in many countries, in spite of not fitting the Schedule 1 rating. Schedule 1 includes drugs that are dangerous, addictive and hold no medical merit. According to the NCI (National Cancer Institute), cannabis decreases the harmful side effects of chemo and can also act as a drug to enhance chemo.

Other than this, there have been cases all around the world where cannabis has been used to treat and cure cancer. In spite of all this, the government and media still refuse to bring the reality to light.

NCI On Marijuana And Its Anti-Tumour Activity

Experiments conducted on mice and rats showed that cannabinoids help resist inflammation of the colon and may help reduce colon cancer-causing cells. They were also helpful in preventing the growth of the blood vessels that a tumor needs in order to grow. Studies of cannabidiol also showed that it caused the death of cancer cells while having very little effect on normal breast cells, and that cannabinoids helped lessen the growth of cancer cells and prevented them from spreading.

Increasing The Appetite Of Cancer Patient:

Delta-9-THC and other cannabinoids are some of the best drugs for increasing the appetite and food intake of cancer patients.

Cannabis For Pain Relief:

Molecules that bind cannabinoids, called cannabinoid receptors, have been studied for their anti-inflammatory effects. They help relieve pain in the spinal cord, brain and nerve endings.

It is funny how our government runs; maybe its priorities aren’t in place. What else can we say when the government approves the circulation of OxyContin – an extremely addictive drug approved as a painkiller for children as young as 11?

This Flat-Earther Has Failed to Launch His Rocket Again And Now We’re Almost Sorry For Him


We’d highly recommend a weather balloon.

When NASA, pretty much every scientist, and all of the available evidence doesn’t convince you that we are on a globular planet, apparently the only thing to try is to leave Earth to find out for yourself.

But Mike Hughes, the infamous flat-Earth conspiracy theorist, is currently Earth-bound for at least a little bit longer.

If you haven’t been following this crazy saga, you are in for a treat.

Hughes came onto the scene back in late 2017 when he was about to launch his homemade rocket. His aim was to launch himself around 550 metres (1,800 feet) into the air, travel 1.6 kilometres (1 mile), and then parachute out of the flying wreckage.

Intended as a publicity stunt, this was supposed to be the first step in his project to build a rocket and launch himself way up into the atmosphere to take photographic evidence of our flat home planet.

We’re assuming no one has been able to convince him that he’d get the same view by simply strapping a camera to a high-altitude balloon.

“They have not put a man in space yet,” Hughes said in a 2016 Kickstarter video.

“There are 20 different space agencies here in America, and I’m the last person that’s put a man in a rocket and launched it.”

But alas – Hughes has been plagued with issue after issue trying to get the stunt off the ground.

First, there was the alleged problem with getting government permission to conduct the flight on public land.

“It’s still happening. We’re just moving it three miles down the road,” Hughes told The Washington Post.

“This is what happens anytime you have to deal with any kind of government agency.”

In January this year, he was all geared up for another attempt, in a brand new green rocket with “Flat Earth” emblazoned across the side.

However, on February 3, the day for take-off, Hughes strapped himself to his rocket, but never left the ground.

In interview footage uploaded to YouTube, Hughes claims that the failure was due to a faulty plunger or blown o-ring.

In that same video, Hughes also states that the launch could still happen in the next few days, but notes that he has to be in court on Tuesday because he’s suing a number of Californian officials, including the governor of California, Jerry Brown.

There’s an 11 minute livestream recording of the event, but if you’re looking for something more wholesome, check out this amazing video of Giles Academy students in Old Leake, England, doing their own experiment to show just how beautiful Earth actually is.

Perhaps the most impressive experiment that even schools can do today is to send a camera up in a high-altitude balloon.

The footage will show that from a high-enough vantage point you can see the curvature of Earth. This is what Mike Hughes will find if he ever makes his rocket work.

Ultimately, arguing on the internet is not the best way forward for any scientific endeavour. We need to provide the means for people to test these theories themselves and to understand the results they get.

Omega-3 Supplements Do Not Have CVD Benefits: JAMA


https://speciality.medicaldialogues.in/omega-3-supplements-do-not-have-cvd-benefits-jama/

Marijuana Prevents Alcohol-Related Liver Disease


Could cannabis use protect you from liver damage due to drinking?

New Study Suggests Marijuana Prevents Alcohol-Related Liver Disease

It’s common knowledge that cannabis is much less harmful to human health than alcohol. But researchers in Massachusetts have published a new study that shows how cannabis could actually help reduce the harmful effects of alcohol use and abuse. Taking advantage of the anti-inflammatory effects of marijuana, the study investigated whether or not marijuana prevents alcohol-related liver disease.

Could Cannabis Help Reduce The Harmful Effects Of Drinking?

Many experts consider alcohol to be the most harmful drug for human health. And indeed, alcohol racks up an astonishing body count each year. According to estimates from the Centers for Disease Control, alcohol is responsible for 88,000 deaths each year. It also contributes to a full third of all traffic deaths, about 10,000 fatalities per year.

With a death toll approaching 100,000 annually, alcohol doesn’t discriminate. According to the International Business Times, which reported on a pair of major surveys between 2001 and 2013, drinking’s destructive effects are rising across virtually every demographic in the US.

And that’s because alcohol use is on the rise across the board. So-called “high-risk” drinking is increasing at an even higher rate, marching upwards by 30 percent. As a result, nearly 30 million Americans are exposed to health risks due to their alcohol consumption. In short, alcohol use represents a significant public health concern.

One of the most fatal of those harmful effects is, of course, liver disease.


When a person drinks alcohol, they introduce a harmful substance into their body. The liver tries to filter out the alcohol in the bloodstream, but this damages many liver cells in the process. In response, the liver suffers from inflammation as scar tissue replaces the dead cells.

The more someone drinks, the more significant this damage becomes. Alcohol abuse causes severe, chronic inflammation of the liver, which can lead to fatal cirrhosis of the liver.

One of the most well-documented medicinal effects of cannabis use, however, is as an anti-inflammatory. Marijuana’s anti-inflammatory properties account for why the drug is an effective pain reliever. It’s also why cannabis is used to treat nerve inflammation, which is at the root of many neurological diseases like epilepsy and MS.

Building off of these precedents, researchers with the North Shore Medical Center in Salem, Massachusetts wanted to see if weed’s potent anti-inflammatory capabilities could also help protect the liver from damage.

What they discovered is pretty incredible. The study focused on about 319,000 patients with a past or current history of alcohol abuse. Researchers divided the group into non-cannabis users, non-dependent cannabis users, and dependent cannabis users.

They also studied how cannabis use relates to the four distinct phases of liver disease. These are alcoholic fatty liver disease, or steatosis (AS), alcoholic hepatitis (AH), alcoholic cirrhosis (AC), and liver cancer (HCC).

Final Hit: New Study Suggests Marijuana Prevents Alcohol-Related Liver Disease

Remarkably, the researchers found that cannabis users had “significantly lower odds” of developing AS, AH, AC, and HCC. What’s more, cannabis users the study classified as “dependent” showed the lowest chances of developing liver disease.

In other words, the more someone used cannabis, the less likely they were to develop liver diseases caused by alcohol abuse. Therefore, the study concluded, there’s reason to believe the anti-inflammatory properties of cannabis do protect the liver from damage from alcohol abuse.

Adeyinka Charles Adejumo, who headed the research team, wants to make the aim of the study clear: it isn’t to encourage heavy drinkers to take up cannabis. Furthermore, Adejumo isn’t suggesting mixing alcohol and cannabis consumption. Instead, the scientist views the study as opening the door to cannabis-based treatments for liver disease in individuals who abuse alcohol.
