FDA Approves First Drug to Prevent HIV Infection


Some researchers fear that the combined pill’s use in healthy people might have unacceptable side effects and spark the emergence of resistant viruses.

US regulators took a step into the unknown this week when they approved the first drug to prevent HIV infection. US Food and Drug Administration (FDA) commissioner Margaret Hamburg hailed the pill, Truvada, as a tool for reducing the rate of infection in the United States, where 50,000 people are diagnosed each year. But the drug combines low doses of two antiretroviral agents normally used to treat infection, and some researchers fear that its use in healthy people could have unacceptable side effects and spark the emergence of resistant viruses.

US insurers must now decide whether they will pay for Truvada, which costs roughly US$10,000 for a year’s supply. Moreover, health-policy experts must script guidelines on how to prescribe it, and how to monitor side effects and HIV infections in people using the drug. “There are a lot of questions about how to implement it,” says Connie Celum, an HIV researcher at the University of Washington in Seattle, who led a large trial of the drug in East Africa and has begun studies to answer practical delivery questions, such as which subsets of people are at highest risk.

Developed by Gilead Sciences in Foster City, California, Truvada proved particularly effective in the East African trial, published last week: it reduced the incidence of HIV by 75% in people whose partners were infected. In an earlier trial in the United States, HIV incidence dropped by 44% in men who have sex with men.

But concerns emerged on 10 May at a public meeting of a panel that advised the FDA on its decision. Most members voted in favour of approval, but the researchers, doctors and patient advocates in attendance wrestled with the issue of drug resistance. The two drugs in Truvada, emtricitabine and tenofovir, are effective antiretroviral treatments, but trials have shown that viruses exposed to lower doses in the acute phase of infection can become resistant, said meeting attendees. In six trial participants who tested negative on enrolment but turned out to be HIV-positive, the drugs were no longer effective. Another fear, unconfirmed in trials, was that people might not take the pill consistently, and might contract a strain of HIV that became drug-resistant as a result of exposure to low levels of antiretrovirals.

To mitigate these risks, the FDA requires that Truvada be prescribed only once an individual has tested negative for HIV. The agency also advises that people use the drug in combination with safe sex practices, and get tested for the virus every three months while taking it. Some experts at the advisory meeting proposed stricter policies, such as making the tests mandatory, but these were dismissed as impractical. Another idea was to limit the drug to specific populations who are at the very highest risk, such as homosexual people who use intravenous drugs, but the FDA adopted a vaguer category encapsulating anyone at high risk of contracting HIV. “We want to reach marginalized populations,” says Celum, “and restricting access would mean that Truvada would be less likely to have a public-health impact.”

Wayne Chen, acting chief of medicine at the AIDS Healthcare Foundation in Los Angeles, California, regrets the decision to approve the drug, saying that condoms are cheaper and can be a more effective preventative. “The best thing would be to have this drug withdrawn from the market, and if it’s not, there should at least be mandatory testing because we know that people don’t take this as prescribed,” he says, citing a Truvada clinical trial in Africa that was ended prematurely because the drug was not preventing infection. Blood tests later confirmed that fewer than 40% of the study participants on Truvada had been taking the pills daily.

To proponents, however, the promise of the drug is bright. Salim Abdool Karim, director of the Center for the AIDS Programme of Research in South Africa in Durban, hopes that Truvada might soon be available in his country, where up to one-quarter of women have HIV by the age of 20. “Truvada is now the only technology we have that empowers women,” he says. “I don’t think we’ll be able to slow the HIV epidemic in South Africa without something to protect them.”

Source: Scientific American/Nature.


How will artificial intelligence shape our lives?


If the brains behind a scientific initiative known as Russia 2045 are to be believed, life is about to get very, very interesting.

The promotional video for the group, which aims to create technology that can “download” the knowledge in a human brain, is like a trailer for a Hollywood sci-fi blockbuster — the booming intonations of a British announcer, dramatic, synthesized music and shots of the cosmos that make you feel like you’re entering hyperspace in the Millennium Falcon.

It is, in other words, not the type of thing you’d expect from a group that hopes to get the world comfortable with a future of synthetic brains and of “thought-controlled avatars” that would make your next business trip to Milwaukee or Tokyo wholly unnecessary. Instead of a “chicken in every pot,” they promise an “android robot servant for every home.”

In an e-mail, the project’s founder, Dmitry Itskov, described this vision in detail: “The creation of avatars will change everything in our societies: politics, economics, medicine, health care, food industry, construction methods, transportation, trade, banking, etc. The whole architecture of society will be transformed, there will be an increase in its self-organization, people will unite to fight the biggest and most universal problem of humankind — that of death.”

Whatever the viability of such claims, there’s little doubt that the pace of innovation is going to lead us into interesting places, and perhaps sooner than we think. The cost of high-powered computing drops ever lower, video games grow increasingly realistic, and, thanks largely to Apple’s voice-activated personal assistant Siri, people find more reasons to consult their mobile devices before the person sitting next to them.

Many have lamented that these communication breakthroughs have made us isolated. Texting is the new talking, or so the theory goes. The prospect of a robot that can take over the brain of your wife or best friend upon death? That takes fears of human social isolation to a whole new level.

So what happens when we don’t even have to get off the couch to go to a parent-teacher conference or have lunch with a client living 6,000 miles away? What if we can “transfer” our brains to an avatar before we die? What about robots that possess human-level intelligence?

Intelligence: the new frontier

So far, the widely held social-isolation theory has proved false. We may have reason to worry, but we’re worrying about the wrong thing: it’s not isolation, but intelligence, that is likely to change our world in fundamental ways.

“Almost every study I’ve ever seen has shown a neutral to positive effect [of connected devices on social interaction],” said Keith Hampton, a professor of communications at Rutgers University. “It doesn’t minimize the exceptions, but all the data suggests that people who use these things are more engaged in public life than others.”

Consider the following. If, a decade ago, someone had asked you what would happen if we could all share information, photos and personal revelations with all of our friends, in real time, the answers might have tended toward the negative — if not apocalyptic.

The end of privacy. The end of intimacy. The end of the world as we know it.

The reality of Facebook, of course, has demonstrated otherwise. There are downsides to any technology, Facebook included. But its convenience and utility have overtaken other concerns. We’ve adapted, and adapted quickly.

“It’s like in medicine,” said Nick Bostrom, the director of the Future of Humanity Institute at Oxford University. “Anesthesia was once seen as moral corruption. A heart transplant seemed obscene. We tend to think about things in a different mode, a different frame of mind, before we are actually using [them]. The future is often a projection screen where we cast our hopes and fears.”

If history is any guide, it’s reasonable to think that the shock of major technological breakthroughs will be mitigated by the assimilation of all the incremental advances that came before them. The more valid question before us, then, is how to prepare for a day when machine intelligence becomes so sophisticated that its knowledge is used against us.

And “against us” doesn’t mean some Orwellian, Terminator-type reality. It’s far more subtle, and far less sexy, than that. If a device can learn and has far greater memory capacity and recall than we do, it could process huge stores of data to better predict our behavior. It could then tailor its own behavior to achieve a desired result. And that’s even before we get to so-called superintelligence, a theoretical reality in which computers use their processing power to learn more quickly, and think bigger thoughts, than the humans who created them.

The very beginnings of such technology are already appearing in daily life. The Port Authority of New York and New Jersey recently announced plans to install hologram-like avatars at New York airports. The “female” avatars are expected to be motion-activated and give travelers basic information like the location of a bathroom. In their current form, the avatars aren’t interactive, but the Port Authority hopes that someday they will be able to answer a range of questions.

Are we ready?

It’s impossible to know how we’ll all react, but history does provide some clues.

To get a sense of the potential hazards and dilemmas of more advanced technology, Charles Isbell, a professor of interactive computing at Georgia Tech, pointed to the “Media Equation,” a communication theory developed by two Stanford researchers in the 1990s. The research found that people interact with technology in ways similar to how they interact with other people.

In one test, subjects were “tutored” by a computer and were then asked to evaluate the computer’s performance as they would a human tutor. Those who filled out the evaluation on the computer that “tutored” them were more positive than those who completed it on paper or at a different computer. As crazy as it sounds, people were less likely to hurt that computer’s feelings. Take a computer that’s as witty and brilliant as your best friend and the potential outcomes become more consequential.

“In the future, when your ‘best friend’ Siri suggests that you buy something, and it turns out not to be the right thing, do you get to sue Apple?” Isbell asked.

In the not-so-distant future, such scenarios are possible. “The ability of those things to read facial expressions and speak in a certain tone — it will be orders and orders of magnitude greater,” Isbell said. “[As with Facebook], the impact will be both profound and mundane.”

The implications go beyond commerce. Today, “social search” — providing search results based on data from others in your social networks — is in its infancy. Rutgers’ Hampton fears that social search could roll back some of the biggest social benefits born of the Internet.

“People who do more online have more diverse social networks and broader access to information,” Hampton said. “It facilitates trust, tolerance and access. If your search for unique information is constrained by your social interaction, the access to unique information declines. People we are close to are very much like us. We have a greater risk of creating silos of information.”

Technology of increasing intelligence only makes that possibility more real. “We are all snowflakes, but we’re pretty predictable snowflakes once you figure out what type of snowflake you are,” Isbell said. As computer-aided predictive analysis gets more and more refined, a robot or device could use it to push us toward a pre-determined outcome, one that may not be in our best interest. Think about the computerized bartender that, once you hand over your credit card, mines Internet data and learns that you just lost your job. “Would you like another?” could become more calculated than convivial.

Resistance is futile

Ray Kurzweil, a futurist and creator of optical character recognition technology — the type that converts scanned documents to editable text — predicts that we’ll have “strong” artificial intelligence by 2029. He believes that the “singularity,” or the point where technology transcends human intelligence, is not some science-fiction dream. His “law of accelerating returns” posits that because computing power expands exponentially, advances in fields that rely on computing power — like biotechnology and materials science — will also rapidly increase.

It’s the theory behind the “2045” date in Itskov’s ambitious project. Based on his own understanding of technological advancement, Itskov said that “at about 2045, humanity must enter a certain mode of evolutionary singularity, beyond which it becomes difficult to make predictions. In short, many exciting developments await us in the middle of this century, and all of them, inevitably, will be linked to the developments of new technology.”

Kurzweil said we have nothing to fear from it. “This is not an alien invasion from Mars. This is just expanding our intelligence. We have outsourced our personal and historical memories to the ‘cloud.’ It’s expanding already.”

It will have its downsides — “Fire cooks our food and also can burn down your house,” he said — but those can be addressed by devising “rapid response” systems that can counteract those who use technology for nefarious purposes.

Trying to prevent, or “opting out” of, such advancements is a misguided, and futile, strategy.

“Yes, people opt out today,” Kurzweil said. “They’re called the Amish.”

Source: Scientific American.