Google’s Ray Kurzweil claims computers will outsmart humans within 15 years.

Ray Kurzweil popularised the Terminator-like moment he called the ‘singularity’, when artificial intelligence overtakes human thinking. But now the man who hopes to be immortal is involved in that very quest – on behalf of the tech behemoth.

The Terminator films envisage a future in which robots have become sentient and are at war with humankind. Ray Kurzweil thinks that machines could become ‘conscious’ by 2029 but is optimistic about the implications for humans.

It’s hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione?

With the fact that he believes that he has a good chance of living for ever? He just has to stay alive “long enough” to be around for when the great life-extending technologies kick in (he’s 66 and he believes that “some of the baby-boomers will make it through”). Or with the fact that he’s predicted that in 15 years’ time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But that they will be able to understand what we say, learn from experience, crack jokes, tell stories, flirt. Ray Kurzweil believes that, by 2029, computers will be able to do all the things that humans do. Only better.

But then everyone’s allowed their theories. It’s just that Kurzweil’s theories have a habit of coming true. And, while he’s been a successful technologist and entrepreneur and invented devices that have changed our world – the first flatbed scanner, the first computer program that could recognise a typeface, the first text-to-speech synthesizer and dozens more – and has been an important and influential advocate of artificial intelligence and what it will mean, he has also always been a lone voice in, if not quite a wilderness, then in something other than the mainstream.

And now? Now, he works at Google. Ray Kurzweil who believes that we can live for ever and that computers will gain what looks like a lot like consciousness in a little over a decade is now Google’s director of engineering. The announcement of this, last year, was extraordinary enough. To people who work with tech or who are interested in tech and who are familiar with the idea that Kurzweil has popularised of “the singularity” – the moment in the future when men and machines will supposedly converge – and know him as either a brilliant maverick and visionary futurist, or a narcissistic crackpot obsessed with longevity, this was headline news in itself.

But it’s what came next that puts this into context. It’s since been revealed that Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find, or at least rates. It made headlines two months ago when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.

And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who’s probably the world’s leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was “a Manhattan project of AI”. If artificial intelligence was really possible, and if anybody could do it, he said, “this will be the team”. The future, in ways we can’t even begin to imagine, will be Google’s.

There are no “ifs” in Ray Kurzweil’s vocabulary, however, when I meet him in his new home – a high-rise luxury apartment block in downtown San Francisco that’s become an emblem for the city in this, its latest incarnation, the Age of Google. Kurzweil does not do ifs, or doubt, and he most especially doesn’t do self-doubt. Though he’s bemused about the fact that “for the first time in my life I have a job” and has moved from the east coast where his wife, Sonya, still lives, to take it.

Ray Kurzweil photographed in San Francisco last year.
Bill Gates calls him “the best person I know at predicting the future of artificial intelligence”. He’s received 19 honorary doctorates, and he’s been widely recognised as a genius. But he’s the sort of genius, it turns out, who’s not very good at boiling a kettle. He offers me a cup of coffee and when I accept he heads into the kitchen to make it, filling a kettle with water, putting a teaspoon of instant coffee into a cup, and then moments later, pouring the unboiled water on top of it. He stirs the undissolving lumps and I wonder whether to say anything but instead let him add almond milk – not eating dairy is just one of his multiple dietary rules – and politely say thank you as he hands it to me. It is, by quite some way, the worst cup of coffee I have ever tasted.

But then, he has other things on his mind. The future, for starters. And what it will look like. He’s been making predictions about the future for years, ever since he realised that one of the key things about inventing successful new products was inventing them at the right moment, and “so, as an engineer, I collected a lot of data”. In 1990, he predicted that a computer would defeat a world chess champion by 1998. In 1997, IBM’s Deep Blue defeated Garry Kasparov. He predicted the explosion of the world wide web at a time it was only being used by a few academics and he predicted dozens and dozens of other things that have largely come true, or that will soon, such as that by the year 2000, robotic leg prostheses would allow paraplegics to walk (the US military is currently trialling an “Iron Man” suit) and “cybernetic chauffeurs” would be able to drive cars (which Google has more or less cracked).

His critics point out that not all his predictions have exactly panned out (no US company has reached a market capitalisation of more than $1 trillion; “bioengineered treatments” have yet to cure cancer). But in any case, the predictions aren’t the meat of his work, just a byproduct. They’re based on his belief that technology progresses exponentially (as in Moore’s law, which has seen the number of transistors on a chip double roughly every two years). But then you just have to dig out an old mobile phone to understand that. The problem, he says, is that humans don’t think about the future that way. “Our intuition is linear.”
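The gap between linear intuition and exponential reality is easy to check with a little arithmetic. The sketch below is illustrative only; the 30-period horizon and the starting value of 1 are arbitrary choices for the example, not figures from the article.

```python
def linear_growth(start, step, periods):
    """Add a fixed increment each period: how intuition extrapolates."""
    return start + step * periods

def exponential_growth(start, periods):
    """Double each period: how the article says technology progresses."""
    return start * 2 ** periods

# After 30 periods, thirty equal linear steps reach only 31...
print(linear_growth(1, 1, 30))    # 31
# ...while thirty doublings exceed a billion.
print(exponential_growth(1, 30))  # 1073741824
```

Thirty doublings take you from 1 to over a billion, while thirty equal linear steps only reach 31, which is why, on Kurzweil’s argument, linear intuition so badly underestimates exponential trends.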

When Kurzweil first started talking about the “singularity”, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – will be passed in 2029. The difference is that when he began saying it, the fax machine hadn’t been invented. But now, well… it’s another story.

“My book The Age of Spiritual Machines came out in 1999 and we had a conference of AI experts at Stanford where we took a poll by hand about when you think the Turing test would be passed. The consensus was hundreds of years. And a pretty good contingent thought that it would never be done.

“And today, I’m pretty much at the median of what AI experts think and the public is kind of with them. Because the public has seen things like Siri [the iPhone’s voice-recognition technology] where you talk to a computer, they’ve seen the Google self-driving cars. My views are not radical any more. I’ve actually stayed consistent. It’s the rest of the world that’s changing its view.”

And yet, we still haven’t quite managed to get to grips with what that means. The Spike Jonze film, Her, which is set in the near future and has Joaquin Phoenix falling in love with a computer operating system, is not so much fantasy, according to Kurzweil, as a slightly underambitious rendering of the brave new world we are about to enter. “A lot of the dramatic tension is provided by the fact that Theodore’s love interest does not have a body,” Kurzweil writes in a recent review of it. “But this is an unrealistic notion. It would be technically trivial in the future to provide her with a virtual visual presence to match her virtual auditory presence.”

But then he predicts that by 2045 computers will be a billion times more powerful than all of the human brains on Earth. And the characters’ creation of an avatar of a dead person based on their writings, in Jonze’s film, is an idea that he’s been banging on about for years. He’s gathered all of his father’s writings and ephemera in an archive and believes it will be possible to retro-engineer him at some point in the future.

So far, so sci-fi. Except that Kurzweil’s new home isn’t some futuristic MegaCorp intent on world domination. It’s not Skynet. Or, maybe it is, but we largely still think of it as that helpful search engine with the cool design. Kurzweil has worked with Google’s co-founder Larry Page on special projects over several years. “And I’d been having ongoing conversations with him about artificial intelligence and what Google is doing and what I was trying to do. And basically he said, ‘Do it here. We’ll give you the independence you’ve had with your own company, but you’ll have these Google-scale resources.'”

And it’s the Google-scale resources that are beyond anything the world has seen before. Such as the huge data sets that result from 1 billion people using Google every single day. And the Google knowledge graph, which consists of 800m concepts and the billions of relationships between them. This is already a neural network, a massive, distributed global “brain”. Can it learn? Can it think? It’s what some of the smartest people on the planet are working on next.

Peter Norvig, Google’s research director, said recently that the company employs “less than 50% but certainly more than 5%” of the world’s leading experts on machine learning. And that was before it bought DeepMind which, it should be noted, agreed to the deal with the proviso that Google set up an ethics board to look at the question of what machine learning will actually mean when it’s in the hands of what has become the most powerful company on the planet. Of what machine learning might look like when the machines have learned to make their own decisions. Or gained what we humans call “consciousness”.

Garry Kasparov ponders a move against IBM's Deep Blue. Kurzweil predicted the computer's triumph.

I first saw Boston Dynamics’ robots in action at a presentation at Singularity University, the institution that Ray Kurzweil co-founded, which Google helped fund and which is devoted to exploring exponential technologies. And it was Singularity University’s own robotics faculty member Dan Barry who sounded a note of alarm about what the technology might mean: “I don’t see any end point here,” he said when talking about the use of military robots. “At some point humans aren’t going to be fast enough. So what you do is that you make them autonomous. And where does that end? Terminator?”

And the woman who headed the Defence Advanced Research Projects Agency (Darpa), the secretive US military agency that funded the development of BigDog? Regina Dugan. Guess where she works now?

Kurzweil’s job description consists of a one-line brief. “I don’t have a 20-page packet of instructions,” he says. “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me.”

Language, he believes, is the key to everything. “And my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.”

Google will know the answer to your question before you have asked it, he says. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.

The most successful example of natural-language processing so far is IBM’s computer Watson, which in 2011 went on the US quiz show Jeopardy and won. “And Jeopardy is a pretty broad task. It involves similes and jokes and riddles,” Kurzweil says. “For example, it was given ‘a long tiresome speech delivered by a frothy pie topping’ in the rhyme category and quickly responded: ‘A meringue harangue.’ Which is pretty clever: the humans didn’t get it. And what’s not generally appreciated is that Watson’s knowledge was not hand-coded by engineers. Watson got it by reading. Wikipedia – all of it.”

Kurzweil says: “Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM’s Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I’m doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn’t understand the implications of what it’s reading. It’s doing a sort of pattern matching. It doesn’t understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn’t understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.”
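The distinction Kurzweil draws between pattern matching and understanding can be sketched with a toy example. In the snippet below, the Volvo sentence is represented as a structured event from which a change of ownership follows; every class, field, and function name here is hypothetical, invented purely for illustration, and has nothing to do with Google’s or IBM’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class SaleEvent:
    """A crude semantic representation of 'X sold Y to Z'."""
    seller: str
    buyer: str
    item: str

def owner_after(event: SaleEvent) -> str:
    # "Understanding" a sale entails knowing possession changes hands:
    # after the transaction, the buyer owns the item, not the seller.
    return event.buyer

# "John sold his red Volvo to Mary" encoded as an event, not as words.
sale = SaleEvent(seller="John", buyer="Mary", item="red Volvo")
print(owner_after(sale))  # Mary
```

A pattern matcher sees only co-occurring words; a system with even this crude event representation can answer “who owns the Volvo now?”, which is exactly the kind of inference Kurzweil says Watson lacks.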

And once the computers can read their own instructions, well… gaining domination over the rest of the universe will surely be easy pickings. Though Kurzweil, being a techno-optimist, doesn’t worry about the prospect of being enslaved by a master race of newly liberated iPhones with ideas above their station. He believes technology will augment us. Make us better, smarter, fitter. That just as we’ve already outsourced our ability to remember telephone numbers to their electronic embrace, so we will welcome nanotechnologies that thin our blood and boost our brain cells. His mind-reading search engine will be a “cybernetic friend”. He is unimpressed by Google Glass because he doesn’t want any technological filter between us and reality. He just wants reality to be that much better.

“I thought about if I had all the money in the world, what would I want to do?” he says. “And I would want to do this. This project. This is not a new interest for me. This idea goes back 50 years. I’ve been thinking about artificial intelligence and how the brain works for 50 years.”

The evidence of those 50 years is dotted all around the apartment. He shows me a cartoon he came up with in the 60s which shows a brain in a vat. And there’s a still from a TV quiz show that he entered aged 17 with his first invention: he’d programmed a computer to compose original music. On his walls are paintings that were produced by a computer programmed to create its own original artworks. And scrapbooks that detail the histories of various relatives, the aunts and uncles who escaped from Nazi Germany on the Kindertransport, his great grandmother who set up what he says was Europe’s first school to provide higher education for girls.

Jeopardy is won by a machine

Kurzweil suggests that language is the key to teaching machines to think. He says his job is to ‘base search on really understanding what the language means’. The most successful example of natural-language processing to date is IBM’s computer Watson, which in 2011 went on the US quiz show Jeopardy and won.

His home is nothing if not eclectic. It’s a shiny apartment in a shiny apartment block with big glass windows and modern furnishings but it’s imbued with the sort of meaning and memories and resonances that, as yet, no machine can understand. His relatives escaped the Holocaust “because they used their minds. That’s actually the philosophy of my family. The power of human ideas. I remember my grandfather coming back from his first return visit to Europe. I was seven and he told me he’d been given the opportunity to handle – with his own hands – original documents by Leonardo da Vinci. He talked about it in very reverential terms, like these were sacred documents. But they weren’t handed down to us by God. They were created by a guy, a person. A single human had been very influential and had changed the world. The message was that human ideas changed the world. And that is the only thing that could change the world.”

On his fingers are two rings, one from the Massachusetts Institute of Technology, where he studied, and another that was created by a 3D printer, and on his wrist is a 30-year-old Mickey Mouse watch. “It’s very important to hold on to our whimsy,” he says when I ask him about it. Why? “I think it’s the highest level of our neocortex. Whimsy, humour…”

Even more engagingly, tapping away on a computer in the study next door I find Amy, his daughter. She’s a writer and a teacher and warm and open, and while Kurzweil goes off to have his photo taken, she tells me that her childhood was like “growing up in the future”.

Is that what it felt like?
“I do feel a little bit like the ideas I grew up hearing about are now ubiquitous… Everything is changing so quickly and it’s not something that people realise. When we were kids people used to talk about what they were going to do when they were older, and they didn’t necessarily consider how many changes would happen, and how the world would be different, but that was at the back of my head.”

And what about her father’s idea of living for ever? What did she make of that? “What I think is interesting is that all kids think they are going to live for ever so actually it wasn’t that much of a disconnect for me. I think it made perfect sense. Now it makes less sense.”

Well, yes. But there’s not a scintilla of doubt in Kurzweil’s mind about this. My arguments slide off what looks like his carefully moisturised skin. “My health regime is a wake-up call to my baby-boomer peers,” he says. “Most of whom are accepting the normal cycle of life and accepting they are getting to the end of their productive years. That’s not my view. Now that health and medicine is an information technology it is going to expand exponentially. We will see very dramatic changes ahead. According to my model it’s only 10-15 years away from where we’ll be adding more than a year every year to life expectancy because of progress. It’s kind of a tipping point in longevity.”

He does, at moments like these, have something of a mad glint in his eye. Or at least the profound certitude of a fundamentalist cleric. Newsweek, a few years back, quoted an anonymous colleague claiming that, “Ray is going through the single most public midlife crisis that any male has ever gone through.” His evangelism (and commercial endorsement) of a whole lot of dietary supplements has more than a touch of the “Dr Gillian McKeith (PhD)” to it. And it’s hard not to ascribe a psychological aspect to this. He lost his adored father, a brilliant man, he says, a composer who had been largely unsuccessful and unrecognised in his lifetime, to a massive heart attack when Kurzweil was 22. And a diagnosis of diabetes at the age of 35 led him to overhaul his diet.

But isn’t he simply refusing to accept, on an emotional level, that everyone gets older, everybody dies?

“I think that’s a great rationalisation because our immediate reaction to hearing someone has died is that it’s not a good thing. We’re sad. We consider it a tragedy. So for thousands of years, we did the next best thing which is to rationalise. ‘Oh that tragic thing? That’s really a good thing.’ One of the major goals of religion is to come up with some story that says death is really a good thing. It’s not. It’s a tragedy. And people think we’re talking about a 95-year-old living for hundreds of years. But that’s not what we’re talking about. We’re talking radical life extension, radical life enhancement.

“We are talking about making ourselves millions of times more intelligent and being able to have virtual reality environments which are as fantastic as our imagination.”

Possibly this is what Kurzweil’s critics, such as the biologist PZ Myers, mean when they say that the problem with Kurzweil’s theories is that “it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.” Or Jaron Lanier, who calls him “a genius” but “a product of a narcissistic age”.

But then, it’s Kurzweil’s single-mindedness that’s been the foundation of his success, that made him his first fortune when he was still a teenager, and that shows no sign of letting up. Do you think he’ll live for ever, I ask Amy. “I hope so,” she says, which seems like a reasonable thing for an affectionate daughter to wish for. Still, I hope he does too. Because the future is almost here. And it looks like it’s going to be quite a ride.

You can now PEE out your fat: Controversial new injections ‘break down fat cells that you then pass through urine’ | Mail Online

  • Aqualyx injections liquefy fat which is then eliminated when you urinate
  • Clinic claims absorbed fat cells DO NOT return and there’s no downtime
  • Critics say it could spell danger for cholesterol if fat turns to salt in blood

In the quest to make us slimmer without lifting a finger, an injection has been invented letting us literally pee out our fat.

A water solution is injected into stubborn areas around the body, breaking down excess fat cells, allowing us to absorb them into our bloodstream – and then wee them out.

The new treatment, dubbed Aqualyx, claims to be an effective alternative to liposuction.

The jabs break fat cells, allowing us to absorb them into our bloodstream – and then wee them out

It contains plant polymers, which bind with the cell walls of the fat tissue, rupturing them and releasing the fat to be dissolved.

The formula liquefies the fat cells, which are then eliminated in your urine over a three-week period.

Makers say the solution completely destroys the fat cells so they can’t grow back.

It can be used on your thighs, stomach, knees, chin, buttocks, back and even your neck.

Describing its new treatment, Mills Medical Service say it ‘gives you the body you’ve always dreamed of without fearing the fat will return’.

Its makers, who claim it is the only registered fat-removal injection on the market, say it has no downtime (other than slight bruising and swelling for 48 hours).

‘Aqualyx isn’t an injection for weight loss; it is used for contouring the body and slimming down those stubborn fat areas. Combined with a healthy diet and exercise, the fat won’t grow back either,’ says the brand.

One session costs £250, which Mills Medical say is all that’s needed for the chin area, while the tummy may need a few more treatments.

But is it all too good to be true – can it really be effective and safe?

Dr Arun Ghosh, from the private Spire Hospital in Liverpool says the injections could pose a health risk.

Aqualyx, like lipo, is not recommended for weight loss goals but rather to contour the body

He told the Daily Star: ‘It’s dangerous to re-absorb fatty acids into your bloodstream because if it’s dissolved down into salt it would send cholesterol levels sky high.’

Dr Yannis Alexandrides, MD of 111 Harley Street, said: ‘I don’t use fat removal injectable in my clinic and certainly have no plans to until there is further research and trials published.

‘It’s always important that prospective patients thoroughly research treatments before they book, and these injectables are very new to the market with little evidence of their efficacy other than that produced by the manufacturer.

‘While the cost is certainly affordable and attractive, those thinking of having the procedure must look at alternatives available that have proven results and existing patient testimonials.

‘Also, prospective patients must undergo a full consultation to determine the right procedure for their concern.’

Before and after: Sarah, 42, from Sheffield had a £250 Aqualyx injection and slimmed down
However they've come under fire for posing a health risk

Good news about the ‘spiritual but not religious’.

Despite the ongoing decline in American religious institutions, the meteoric rise in people who claim to be “spiritual but not religious” should be seen positively – especially by religious people.

To accept this as good news, however, we need to listen to what they are saying, rather than ridicule them as “salad bar spiritualists” or eclectic dabblers.

After spending more than five years speaking with hundreds of “spiritual but not religious” folk across North America, I’ve come to see a certain set of core ideas among them. Because of their common themes, I think it’s fair to refer to them by the acronym: SBNR.

But before we explore what the SBNRs believe, we first need to learn what they protest.

First, they protest “scientism.”

They’ve become wary of reducing everything that has value to what can be discovered in the tangible world – of restricting our intellectual confidence to that which can be observed and studied.

Their turn towards alternative health practices is just one sign of this. Of course, most do avail themselves of science’s benefits, and they often use scientific-sounding arguments (talking about “energy” or “quantum physics”) to justify their spiritual views.

But, in general, they don’t think all truth and value can be confined to our material reality.

Second, SBNRs protest “secularism.” 

They are tired of being confined by systems and structures. They are tired of having their unique identities reduced to bureaucratic codes. They are tired of having their spiritual natures squelched or denied.

They play by society’s rules: hold down jobs, take care of friends and family and try to do some good in the world. But they implicitly protest being rendered invisible and unheard.

Third, yes, they protest religion – at least, two types of it.

But the SBNR rejection of religion is sometimes more about style than substance.

On one hand, they protest “rigid religion,” objecting to a certain brand of conservatism that insists there is only one way to express spirituality, faith, and the search for transcendence.

But they also protest what I call “comatose religion.”

After the shocks of the previous decades, and the declines in religious structures and funding, many religious people are dazed and confused.

They are puzzled and hurt that so many – including their own children – are deserting what was once a vibrant, engaging, and thriving part of American society.

So why, then, is it “good news” that there is a huge rise in the “spiritual but not religious”? Because their protests are the very same things that deeply concern – or should concern – all of us.

The rise in SBNRs is the archetypal “wake up call,” and I sense that, at last, religious leaders are beginning to hear it.

The history of religion in Western society shows that, sooner or later, people grasp the situation and find new ways of expressing their faith that speak to their contemporaries.

In the meantime, there are plenty of vital congregations in our society. In the vast mall of American religious options, it is misguided to dismiss all of our spiritual choices as moribund, corrupt, or old-fashioned – even though so many do.

What has prompted SBNRs, and others, to make this dismissal?

For one thing, many religious groups are not reaching out to the SBNRs. These groups need to understand SBNRs and speak their language, rather than being fearful or dismissive.

Second, the media often highlights the extremes and bad behavior of a few religious people and groups.  But we don’t automatically give up on other collections of fallible human beings, like our jobs, our families, or our own selves.  Some attitude adjustment is needed by both religious people and SBNRs.

Finally, SBNRs need to give up the easy ideology that says religion is unnecessary, all the same, or outmoded. And all of us should discard the unworkable idea that you must find a spiritual or religious group with which you totally agree.  Even if such a group could be found, chances are it would soon become quite boring.

There’s no getting around this fact: It is hard work to nurture the life of faith. The road is narrow and sometimes bumpy. It is essential to have others along with us on the journey.

All of us, not just religious people, are in danger of becoming rigid or comatose, inflexible or numb.  All of us need to find ways to develop and live our faith in the company of others, which is, in fact, what religion is all about.

Google will ‘know you better than your intimate partner’.

Will this robot someday know your every thought? (Reuters / Joshua Roberts)

In 15 years’ time, computers will surpass their creators in intelligence, with an ability to tell stories and crack jokes, predicts a leading expert in artificial intelligence. Thus, Google will “know the answer to your question before you ask it.”

Most people would probably agree that computers are man-made technologies that function inside the strict boundaries of man-made borders. For technologists like Google engineering director Ray Kurzweil, however, the moment when computers liberate themselves from their masters will occur in our lifetime.

By the year 2029, computers and robots will not only have surpassed their makers in terms of raw intelligence, they will understand us better than we understand ourselves, the futurist predicts with enthusiasm.

Kurzweil, 66, is the closest thing to a pop star in the world of artificial intelligence, the place where self-proclaimed geeks quietly lay the grid work for what could be truly described as a new world order.

Ray Kurzweil (AFP Photo / Gabriel Bouys)

The internet visionary is avidly working towards an unseemly marriage of sorts between machine and man, a phenomenon he has popularly dubbed “the singularity.” The movement, which also goes by the name ‘transhumanism,’ is anxiously awaiting that Matrix moment when artificial intelligence and the human brain will merge to form a superhuman that never has to Google another nagging question again.

In this robot-dominant world, humans, starring in some cheapened knockoff of the biblical Creation story, will have downloaded themselves into their own technology, becoming veritable gods unto themselves.

In the meantime, computers will continue to humiliate and humble their human creators, much like IBM’s Deep Blue computer did in 1997 when it defeated world chess champion Garry Kasparov.

World Chess Champion Garry Kasparov makes a move 07 May in New York during his fourth game against the IBM Deep Blue chess computer, 1997 (AFP Photo / Stan Honda)

The next step for Kurzweil is to devise programs that allow computers to understand what humans are actually saying, the natural language that makes up the bulk of the information on the internet.

“My project is ultimately to base search on really understanding what the language means,” he told the Guardian in an interview. “When you write an article, you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organizing and processing the world’s information.

“The message in your article is information, and the computers are not picking up on that. So we would want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.”

However, these sensational advances in artificial intelligence, which continue with little public debate, come hot on the heels of the Snowden leaks, which revealed a high level of collusion between the National Security Agency (NSA) and major IT companies in accessing our personal communications.

Now, Google, the global vacuum sweeper of information, is showing a marked interest in powerful techniques and technologies that will ultimately make spying on its subscribers a bit redundant, since computers will already know every detail of our personal lives – even our deepest thoughts.

The dawn of the Google ‘thought police’?

Kurzweil summarized his Google job description in one succinct line: “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me,” he told The Observer.

Understanding human semantics, he says, is the key to computers understanding everything.

But here is where the new advances in artificial intelligence become not only murky, but potentially sinister. “Google will know the answer to your question before you have asked it,” he says. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.

AFP Photo / Damien Meyer

“Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans, they can make up for that with quantity… which is to say, to have the computer read tens of billions of pages. Watson (an IBM artificial intelligence system) doesn’t understand the implications of what it’s reading.

“It’s doing a sort of pattern matching. It doesn’t understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn’t understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying,” Kurzweil told The Guardian.
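
To make the distinction concrete, here is a toy sketch (purely illustrative, an assumption of this article, and in no way Kurzweil’s or Google’s actual system) of what “encoding” the facts implied by that sentence might look like, so that a program can answer a question that pure pattern matching on the words cannot:

```python
# Facts implied by "John sold his red Volvo to Mary": a transaction, plus
# ownership being transferred from seller to buyer.
facts = [
    ("transaction", {"seller": "John", "buyer": "Mary", "item": "red Volvo"}),
    ("ownership", {"owner": "John", "item": "red Volvo", "until": "sale"}),
    ("ownership", {"owner": "Mary", "item": "red Volvo", "since": "sale"}),
]

def who_owns(item):
    """Return the current owner of `item`: the ownership fact dated from the sale."""
    for relation, args in facts:
        if relation == "ownership" and args["item"] == item and "since" in args:
            return args["owner"]
    return None

print(who_owns("red Volvo"))  # Mary
```

A keyword search over the sentence would surface “John”, “Volvo” and “Mary” with equal weight; only the encoded relations capture who ends up owning the car.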

For the pioneers of this brave new technology, such capabilities, where nothing is considered too personal or invasive, are an altogether positive thing.

Although the traditional image of robots wavers between an obedient domestic servant that never complains about working overtime, and an automated factory machine that has revolutionized the workplace, the reality of the technology may be far more menacing than people realize. In any case, the issue demands serious discussion.

Consider that Google has already purchased a number of robotics firms, including Boston Dynamics, the company that develops menacingly lifelike military robots, as well as British artificial intelligence startup DeepMind, a London-based firm that has “one of the biggest concentrations of researchers anywhere working on deep learning, a relatively new field of artificial intelligence research that aims to achieve tasks like recognizing faces in video or words in human speech,” according to MIT Technology Review.

AFP Photo / Getty Images / Justin Sullivan

MIT even risked a telling observation: “It may sound like a movie plot, but perhaps it’s even time to wonder what the first company in possession of a true AI (artificial intelligence) would do with the power that it provided.”

Google has also added to its ranks Geoffrey Hinton, a British computer scientist and the world’s top expert on neural networks, as well as Regina Dugan, who once headed the Defense Advanced Research Projects Agency (DARPA), the ultra-secret US military agency that has created a number of controversial programs, including the so-called Mind’s Eye, a computer-vision system that is so powerful it can monitor the pulse rate of specific individuals in a crowd.

Peter Norvig, Google’s research director, commented recently that the internet giant employs “less than 50 percent but certainly more than 5 percent” of the world’s top experts on machine learning.

In 2009, Google helped to finance the Singularity University, an institute co-founded by Kurzweil dedicated to “exponential learning.”

But Google is not alone in the rush to acquire deep learning academics, of whom there are only about 50 worldwide, according to estimates by Yoshua Bengio, an AI researcher at the University of Montreal. Last year, Facebook announced its own deep learning department and brought on board perhaps the world’s most reputable deep learning academic, Yann LeCun of New York University, to oversee it.

Is there an ‘escape’ option?

Reflecting on the diverse experts and technologies that Google and other IT companies are rapidly bringing into their fold, it does not require much imagination to envision some future state policed by super-intelligent robots capable of knowing in advance, like a chess match in which one of the opponents is blindfolded, the thinking processes of individuals and groups within society.

To take the scenario one step further, would it be in the interest of the people, not to mention democracy, to allow fully armed ‘Robocops’ to patrol our streets?

Bill Joy, a co-founder of Sun Microsystems, is one of the few individuals in the field who has sounded the alarm over the advances being made in artificial intelligence, going so far as to warn that humans could become an “endangered species.”

“The 21st-century technologies – genetics, nanotechnology, and robotics (GNR) – are so powerful that they can spawn whole new classes of accidents and abuses,” Joy wrote in an article for Wired (“Why the future doesn’t need us,” April 2000). “Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.”

“I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.”

Francis Fukuyama, an American political scientist, commenting on the rise of this brave new technology in Foreign Policy (Sept. 1, 2004), wrote: “Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome.”

Perhaps the gravest danger associated with the race toward a “posthuman” condition is its sheer unpredictability, and its risk of abuse in the wrong hands.

Are the robots about to rise? Google’s new director of engineering thinks so…

Could ECT zap worst nightmares?

Electroconvulsive therapy (ECT), long associated in the popular imagination with controversial treatment of the mentally ill, can be used to erase memories, Dutch researchers have found – raising hopes of a new treatment for post-traumatic stress disorder.

They say that, in the end, memories are all we have left. And yet how many times have you wished you could forget something?

Now new research suggests it may be possible to zap specific memories, with the help of ECT, a controversial psychiatric treatment where electric pulses are passed through the brain.

Dr Marijn Kroes, of the Donders Institute for Brain, Cognition and Behaviour at Radboud University Nijmegen, who led the study, says the treatment appears to disrupt the natural process of storing memories in the brain.

Last-resort treatment

“Memories are stored in connections between cells in your brain,” he says. However, these connections take some time to become permanent.

“If you disturb [this] process, you lose the connection between cells altogether” – and thus lose the memory.

What is electroconvulsive therapy?

ECT machine

ECT was developed in the 1930s, after researchers noted that some people with depression or schizophrenia seemed to feel better after an epileptic seizure.

Electroconvulsive therapy is a way of inducing a fit, and was used widely during the 1950s and 60s. Nowadays ECT is usually only prescribed for a small number of serious mental illnesses, such as severe depression, if other treatments have failed.

No-one knows for certain how ECT works. One theory is that it stimulates the release of mood-enhancing chemicals in the brain.

Though it remains a controversial treatment, many health professionals say ECT relieves severe depression where other treatments fail. Since 15% of people with severe depression will kill themselves, they argue that ECT is potentially lifesaving.

Others believe ECT is inhumane and degrading, and that its side-effects – such as memory loss – can be severe. Many would like to see it banned.

ECT is still used in the Netherlands, Britain and many other countries as a last-resort treatment for severe depression.

According to the psychiatrists who use it, the treatment’s bad reputation is partly down to films such as One Flew Over the Cuckoo’s Nest, in which a terrified patient is strapped down and forced to endure violent seizures.

The picture is very different in the Rijnstate medical centre where the memory-erasing experiments were conducted.

Inside a bright, modern operating theatre, Dr Jeroen van Waarde treats between 20 and 30 patients every week with ECT.

To administer the treatment, steel pads are attached just above the patients’ temples. Wires connected to the transmitter device send electric pulses through the brain, inducing a seizure similar to an epileptic fit.

In earlier times, the whole body would stiffen and thrash about. These days, the combined effects of muscle relaxants and a general anaesthetic mean that most of the body remains still during the seizure. The only visible sign is the twitching of one arm that has a tourniquet attached.

Target specific memories

One of the known side-effects of ECT is memory loss. According to the Royal College of Psychiatrists, that memory loss is normally temporary, but some patients report severe and long-lasting memory loss after ECT.

This study took advantage of this side-effect to see if it was possible to target specific memories.

Dr Kroes and his team conducted the memory-erasing research with patients who were already undergoing ECT as a treatment for severe depression.

Participants were initially shown two sets of picture cards, each telling an emotional story.

A couple of weeks later, just before an ECT session, they were shown just one of the sets of story cards again, in order to activate that particular memory.

Two girls walking along a street at night
Participants were shown pictures that told a story and asked to recall them after ECT

When asked to recall the stories 24 hours later, they could not remember the story they were shown just before having ECT. The memory of the other story – which they had not seen for two weeks – was unaffected.

Ineke Brussard took part in the study. She was being treated for debilitating depression at the Rijnstate clinic and is now, she says, “happy.” She cannot remember what she was shown before the ECT.

“They showed me something, but I don’t remember that it was that card, or such a card, or whatever, no. I don’t remember.”

The research aims to harness a cognitive process known as memory reconsolidation.

Each time memories are accessed, they are temporarily destabilised and must be “rewritten”, or reconsolidated, before the brain stores them away again.

Results from previous animal studies show that, during that reconsolidation, memories are sensitive to being altered or even erased.

This new Dutch research suggests it may be possible to get rid of specific memories by accessing them – in this case, priming them by looking at pictures – just before receiving ECT.

‘Traumatic memories’

Dr Kroes is optimistic it could eventually be used by patients with post-traumatic stress disorder (PTSD), a debilitating memory-related illness.

“Traditional treatments teach you how to deal with your traumatic memories, but they don’t change your traumatic memories. So what we’re looking for now is a way to ‘erase’ these.”

There is still a long way to go before patients are prescribed this kind of memory-altering treatment.

The research was based on artificially generated memories using story cards as prompts, so the next step is to find out if real, long-term traumatic memories – which are more deeply ingrained in our brains – can ever be subject to the same targeted or “spot-cleansing” process.

But for those whose memories have an incapacitating impact on their lives, this research offers hope.

Simon Buckden, a veteran of the Iraq and Bosnia conflicts, is haunted by his war experiences.

He now suffers from PTSD, which he describes as “a 24-hour cancer that you have to fight every day until it kills you”.

He says he would consider the ECT-based treatment if it were developed for clinical use.

“I wouldn’t be the person I am today or have achieved so much if it wasn’t for those experiences, but if it could take away the emotions that are connected to those memories then I’d try it, absolutely.”

Stereotactic radiosurgery yielded favorable remission of acromegaly.

Stereotactic radiosurgery may yield favorable remission response rates in patients with acromegaly with a low rate of adverse events, according to data published in the Journal of Clinical Endocrinology and Metabolism.

  • Surgical resection is currently the primary treatment for patients with acromegaly, according to researchers. The rates of endocrine remission relate to tumor size, degree of invasiveness and surgical expertise, the researchers wrote.

They conducted a retrospective review of 136 patients (mean age, 44 years) with acromegaly treated with stereotactic radiosurgery at the University of Virginia. Gamma Knife radiosurgery data were collected from 1989 to 2012.

Follow-up data at 61.5 months indicated that 65.4% of the patients reached remission of acromegaly, with a mean time to remission of 27.5 months.

Specifically, there was a 31.7% remission rate at 2 years, 64.5% at 4 years, 73.4% at 6 years and 82.6% at 8 years after radiosurgery, according to data.

After the withdrawal of growth hormone or insulin-like growth factor I medications, patients with an oral glucose tolerance test GH level <1 ng/mL or normal IGF-I were considered to be in remission, researchers wrote.

Favorable prognostic factors for remission included a higher maximum radiation dose and lower initial IGF-I levels, according to the data.

“Hypothalamic-pituitary dysfunction is the most common intermediate to late complication of [stereotactic radiosurgery] of pituitary adenomas. In our series, 31.6% of patients developed new hormone deficiency at a median of 50.5 months following radiosurgery,” researchers wrote.

Two patients (1.5%) developed panhypopituitarism. Other risk factors for pituitary hormone deficiencies included a margin dose >25 Gy and tumor volume >2.5 mL. An adverse radiation effect was observed in one patient, visual deterioration in four, and new oculomotor nerve palsy in one. Seven patients who reached remission after surgery developed a recurrence of the disease at 42 months.




John D. Carmichael

  • Acromegaly is a difficult disease to treat in many cases. The patients’ clinical experiences range from those that are mild and straightforward to those with aggressive tumors, very challenging biochemistry, and disease attributes that require multimodal therapy. It’s good to see a large study like this reporting on radiotherapy outcomes and safety, which is one part of our treatment armamentarium.

    The response rates that they report are encouraging in terms of the biochemistry because there are patients who do require more aggressive treatment than just surgery or medication.

    I think the difficulty of such a study is that long-term follow-up is hard to maintain when treatment is delivered through a tertiary referral center, and, as they acknowledge in their paper, they rely on other endocrinologists’ data in some cases and are unable to obtain complete data sets. This might be one of the shortcomings of the study: the tests that you want to complete don’t always get done, in terms of both safety assessments and assessments of recurrence.

    In general, they compare their findings to both prior radiosurgery techniques and to prior conventional radiotherapy. I think that many people are hoping that the gamma knife radiosurgery will have significantly improved response rates and a better safety profile in terms of hypopituitarism and damage to adjacent structures.

    The authors have shown that their response rates are satisfactory enough to consider radiosurgery as a viable treatment. Unfortunately, the hypopituitarism demonstrated in these patients is comparable to prior reports of radiotherapy-induced hypopituitarism and practitioners are concerned about this adverse effect. These data are not going to make gamma knife radiosurgery more appealing to those who are concerned about the effects of hypopituitarism. The use of radiotherapy is a divisive topic in the treatment of patients with acromegaly and physicians have very strong opinions about the use and the timing of this mode of therapy. Some may utilize it earlier on in the care of patients, so that a patient will directly be treated with radiosurgery after failed transsphenoidal surgery, as many of the patients were in this study. Alternatively, one may use radiosurgery only in those resistant to medical therapy and unable to gain biochemical or tumor control.

    The follow-up for patients treated with radiosurgery does require longer duration of observation and while this group has some of the longest follow-up compared to other studies, nevertheless more time is required for safety assessments such as development of secondary tumors and hypopituitarism.

    • John D. Carmichael, MD
    • Assistant Professor of Medicine in the Division of Endocrinology, Diabetes and Metabolism at the David Geffen School of Medicine at the University of California, Los Angeles; and
      Staff Physician of Endocrinology/Metabolism at Cedars-Sinai Medical Center

What You Can Tell About Someone From Their…Earwax

New research shows that earwax varies among people of different ethnicities, suggesting that the substance holds untold secrets.

A team of scientists from Monell Chemical Senses Center in Philadelphia gathered earwax from 16 men – half were white and half were East Asian – and examined the volatile organic compounds (VOCs) they released when heated. The amount of VOCs per person varied by ethnicity, and white men had more overall.

This small finding is important to researchers who believe earwax may carry attributes specific to each individual. Wet or dry earwax is linked to a gene that is also linked to the production of underarm odor, which can convey information about one’s gender, sexual orientation, and health. Already, two metabolic diseases can be diagnosed from earwax before they show up in blood or urine testing.

“Odors in earwax may be able to tell us what a person has eaten and where they have been,” George Preti, an organic chemist at Monell told Medical News Today.

Earwax is “a neglected body secretion,” according to researchers at Monell Chemical Senses Center in Philadelphia, PA. A new study shows that, as well as giving different odors corresponding to ethnic group, earwax could store other useful personal information.

A mixture of secretions from sweat glands and the fatty byproduct of sebaceous glands, earwax is usually a wet yellow-brown wax or a dry white wax.

This wax makes its way to the opening of the ear and is usually washed away when we have a shower or bath.

But earwax does have some beneficial properties.

It traps and prevents dust, bacteria and small objects from getting inside the ear and damaging it, and it can protect the delicate ear canal from irritation when water is in the canal.

Genetic link between underarm odor and earwax

The Monell Center became interested in the properties of earwax after discovering that variations in a gene known as ABCC11 are related to whether a person has wet or dry earwax. This gene is also linked with underarm odor production.

“Our previous research has shown that underarm odors can convey a great deal of information about an individual, including personal identity, gender, sexual orientation and health status,” says study lead author George Preti, PhD, an organic chemist at Monell. “We think it possible that earwax may contain similar information.”

Earwax is usually a wet yellow-brown wax or a dry white wax produced by sweat and sebaceous glands.

Given that differences in underarm odor can carry this level of personal detail, Preti wanted to see if earwax odor also has characteristics specific to ethnicity.

Preti’s team collected earwax from 16 men – eight of these were white and eight were of East Asian descent. The samples were placed into individual vials that were heated for 30 minutes.

Once heated, the earwax began to release airborne molecules called “volatile organic compounds” (VOCs).

The VOCs – which are odorous – were then collected from the vials using a special absorbent device. A technique called “gas chromatography-mass spectrometry” was used to analyze the chemical make-up of these molecules.

White men produce more odorous earwax than East Asian men

Although 12 different types of VOC were found across all the earwax samples, the amounts of these VOCs seemed to vary according to ethnic background. The white men in the study had greater amounts of 11 of the VOCs than the East Asian men in the study.

East Asian and Native American people were already known to have a form of the ABCC11 gene that causes the dry type of earwax and produces less underarm body odor, compared with other ethnicities.

“Odors in earwax may be able to tell us what a person has eaten and where they have been,” says Preti. “Earwax is a neglected body secretion whose potential as an information source has yet to be explored.”

The study notes that at least two odor-producing diseases – maple syrup urine disease and alkaptonuria – can be identified in earwax before they can be detected in blood or urine analyses. Further research from the Center will examine the possibility that analysis of earwax could be useful in detecting conditions before they show up in more traditional tests.

The ABCC11 gene is also associated with breast cancer. In 2009, Japanese scientists found that underarm odor and earwax type could alert doctors to women carrying a risk-associated variant of this gene, who therefore have an increased risk of breast cancer.

Is SMBG in type 2 diabetes worth it?

Self-monitoring of blood glucose in patients with type 2 diabetes, particularly those who are not on insulin, has long been controversial.

  • Studies have been inconsistent in terms of showing a benefit to SMBG in these patients, and thus, some clinicians do not have their patients monitor. Others find the information useful in patient care decisions and thus do have their patients self-monitor, although perhaps at a lower frequency than someone using insulin.

A meta-analysis of nine randomized controlled trials found a small, but significant, decrease in HbA1c after 6 months in those who used SMBG. At least two other trials have shown no significant difference in HbA1c changes in those who used SMBG for 12 months. Another meta-analysis, also involving nine trials, showed conflicting results. Five of the trials showed small improvements in HbA1c with SMBG, whereas the other four showed no difference. Two other meta-analyses found that most studies showed a small but significant decrease in HbA1c in those who utilize SMBG and have baseline HbA1c >8%. There was no difference in those with a baseline HbA1c <8%.

In those who use SMBG, increased frequency does not appear to be of additional benefit.

There are potential disadvantages associated with SMBG. For instance, some data have shown an increased risk for depression. Also, SMBG can become expensive for patients, even if they have health insurance, because the company may or may not cover the supplies. Although patients can often obtain the meters for little cost due to rebates, the ongoing supplies can be expensive.

Of course, anyone who uses SMBG must be properly trained on the procedure if the results are going to be considered reliable. This would include things such as ensuring the meter is coded correctly, correct test strips are used, test site is prepped appropriately, etc.

Finally, if everyone is going to go through the trouble of SMBG, we need to know that the meters our patients are using are accurate. You may be surprised to learn that the FDA minimum accuracy requirement for meters to be marketed is ±20% for glucose values >75 mg/dL, at least 95% of the time. For values <75 mg/dL, accuracy of ±15% is required. Thus, there could be a clinically significant difference between SMBG values and standard lab values. Therefore, utilizing other markers, such as HbA1c, in conjunction with SMBG values provides a more complete, and potentially accurate, picture.
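
To see how wide those tolerances are in practice, here is a back-of-the-envelope sketch (the helper function is hypothetical, not from any FDA guidance) of the spread of readings a meter meeting the accuracy floor described above could display for a given laboratory value:

```python
# Range a compliant meter might display for a given lab glucose value,
# using the accuracy bounds quoted above: ±15% below 75 mg/dL,
# ±20% at or above it (required at least 95% of the time).
def compliant_meter_range(lab_value_mg_dl):
    tol = 0.15 if lab_value_mg_dl < 75 else 0.20
    return (lab_value_mg_dl * (1 - tol), lab_value_mg_dl * (1 + tol))

low, high = compliant_meter_range(150)
print(low, high)  # 120.0 180.0, a 60 mg/dL spread on a single reading
```

A true value of 150 mg/dL could legitimately read anywhere from 120 to 180 mg/dL, which is why pairing SMBG with HbA1c gives a more complete picture.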

Most guidelines continue to recommend SMBG in patients with type 2 diabetes not on insulin therapy, at least in conjunction with broader self-management education. By recognizing the strengths and weaknesses associated with SMBG, clinicians can tailor their recommendations to individual patients. Although clinicians will typically utilize SMBG values in treatment decisions, patients do not usually act on them. Perhaps if more patients were taught how to institute changes based on SMBG, and they were willing and able to do so, the benefit of SMBG would be greater.

Cocaine Increases Stroke Risk.

Cocaine greatly increases ischemic stroke risk in young adults within 24 hours of use, a new study has found. Results showed that stroke risk associated with acute cocaine use was much higher than that seen with other established risk factors, including diabetes, high blood pressure, and smoking.

The study was presented here at the American Stroke Association (ASA) International Stroke Conference (ISC) 2014 by Yu-Ching Cheng, PhD, University of Maryland School of Medicine, Baltimore.

“Cocaine is not only addictive, it can also lead to disability or death from stroke,” Dr. Cheng said at a press conference here. “With few exceptions, we believe every young stroke patient should be screened for drug abuse at the time of hospital admission.”

Moderator of the ASA press conference on the study, Larry Goldstein, MD, Duke University Medical Center, Durham, North Carolina, said, “The take-home message for young people is ‘Don’t do cocaine.’ You could end up not being able to talk or use one side of your body from doing this.”

Noting that between a quarter and a third of the young people in this study said they had used cocaine at some time, Dr. Goldstein said: “That is scary.” He added that crack cocaine is more dangerous than snorted powder cocaine because smoking it delivers higher concentrations to the blood more rapidly.

Acute Use

For the study, Dr. Cheng and her colleagues compared 1101 patients aged 15 to 49 years in the Baltimore–Washington, DC, area who had strokes in 1991–2008 with 1154 controls of similar ages in the general population.

Results showed that having a history of cocaine use was not associated with ischemic stroke, but acute use of cocaine in the last 24 hours was strongly associated with increased risk for stroke; use of cocaine was linked to a 7-fold increase in stroke risk within the next 24 hours, after adjusting for age, sex, and ethnicity. The effect remained after adjusting for smoking.

Table. Risk Factors Associated With Stroke in Young Adults

Factor                     Stroke Patients (%)   Controls (%)   P Value
History of diabetes        16.9                  4.6            <.001
History of hypertension    41.7                  18.1           <.001
Current smokers            45.1                  29.4           <.001
Cocaine use ever           28.1                  25.7           .95
Cocaine use in past month  3.7                   2.7            .67
Cocaine use in past 24 h   2.4                   0.4            .001
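
As a sanity check on the reported effect size, the crude odds ratio implied by the table’s exposure percentages can be computed directly; this is an unadjusted back-of-the-envelope calculation, not the study’s age-, sex- and ethnicity-adjusted model:

```python
# Crude odds ratio from exposure percentages in cases vs controls.
def odds_ratio(cases_exposed_pct, controls_exposed_pct):
    odds_cases = cases_exposed_pct / (100 - cases_exposed_pct)
    odds_controls = controls_exposed_pct / (100 - controls_exposed_pct)
    return odds_cases / odds_controls

# Cocaine use in the past 24 h: 2.4% of stroke patients vs 0.4% of controls.
print(round(odds_ratio(2.4, 0.4), 1))  # about 6.1, broadly consistent with the ~7-fold adjusted risk
```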


The strength of the association between acute cocaine use and stroke was similar in whites (age-adjusted odds ratio [OR], 6.1) and African Americans (age-adjusted OR, 6.7). But the risk for stroke after using cocaine appeared to be higher in women (OR, 12.8) than in men (OR, 2.5) after adjustment for age, ethnicity, and current smoking status, although this difference was not statistically significant.

Dr. Goldstein noted that cocaine causes arrhythmias and myocardial infarction, which can lead to stroke. It also has a direct vasoconstrictor effect on the cerebral vasculature, and these effects are potentiated by alcohol.