Guess Which Single Word Will Convince Other Humans You’re Not a Robot


That’s… not what we expected.


If you were trying to convince another human that you yourself are also human, what would you say? Probably something about emotions, right? That might work – but other humans are more likely to believe your humanity if you talk about bodily functions.

Specifically, the word ‘poop’.

At least, that’s the finding from a study that sought to determine a “minimal Turing test”, narrowing the human-like intelligence assessment down to a single word.

A Turing test – named after mathematician Alan Turing – is a conceptual method for determining whether machines can think like a human. In its simplest form, it involves having a computerised chat with an AI – if a human can’t tell if they’re talking to a computer or a living being, the AI “passes” the test.

In a new paper, cognitive scientists John McCoy and Tomer Ullman of MIT’s Department of Brain and Cognitive Sciences (McCoy is now at the University of Pennsylvania) have described a twist on this classic concept.

They asked 1,089 study participants what single word they would choose for this purpose: not to help distinguish humans from machines, but to try to understand what we humans think makes us human.

The largest proportion – 47 percent – of the participants chose something related to emotions or thinking, what the researchers call “mind perception”.

By far the most popular option was love, clocking in at a rather large 14 percent of all responses. This was followed by compassion at 3.5 percent, human at 3.2 percent and please at 2.7 percent.

Word cloud of the words participants chose in the minimal Turing test (McCoy & Ullman/Journal of Experimental Social Psychology)

In all, there were 10 categories, such as food, including words like banana and pizza; non-humans, including words like dog or robot; life and death, including words like pain and alive; and body functions and profanity, which included words like poop and penis.

The next part of the study involved figuring out which of those words would be likely to convince other humans of humanity.

The researchers randomly put pairs of the top words from each of the 10 categories together – such as banana and empathy – and told a new group of 2,405 participants that one of the words was chosen by a machine and the other by a human (even though both words were chosen by humans).

This group’s task was to say which was which. Unsurprisingly, the least successful word was robot. But the most successful – poop – was a surprise.
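For readers who like to see the procedure spelled out, here is a minimal sketch of how Study 2-style pairings could be generated. The category labels and "top words" in it are illustrative examples taken from this article, not the researchers' actual stimulus set or analysis code.

  # Illustrative sketch of the Study 2 pairing procedure (not the authors' code).
  # The words below are examples mentioned in this article, not the full set.
  import itertools
  import random

  top_words = {
      "mind perception": "love",
      "food": "banana",
      "non-human": "robot",
      "life and death": "alive",
      "body functions and profanity": "poop",
  }

  # Pair the top word from every category with the top word from every other.
  pairs = list(itertools.combinations(top_words.values(), 2))

  for word_a, word_b in pairs:
      # Participants saw each pair in a random order and had to judge
      # which word was "chosen by a human" and which "by a machine".
      first, second = random.sample([word_a, word_b], k=2)
      print(f"Which was chosen by the human: '{first}' or '{second}'?")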

How the top words from each category fared against one another (McCoy & Ullman/Journal of Experimental Social Psychology)

This could, the researchers said, be because ‘taboo’ words generate an emotional response, rather than simply describing one.

“The high average relative strengths of the words ‘love’, ‘mercy’, and ‘compassion’ is consistent with the importance of the experience dimension when distinguishing the minds of robots and people. However, the taboo category word (‘poop’) has the highest average relative strength, referring to bodily function and evoking an amused emotional response,” they wrote in their paper.

“This suggests that highly charged words, such as the colourful profanities appearing in Study 1, might be judged as given by a human over all words used in Study 2.”

Now that this information is on the internet where any old AI with WiFi could get access to it, the study may not really help us tell person from machine; but it does provide some fascinating insight into our self-perception, and what we feel it means to be human.

It’s also, the researchers said, a methodology that could help us explore our perceptions of different kinds of humans – what would we say, for instance, to convince someone else we were a man or a woman, a child or a grandparent, Chinese or Chilean?

“Recall the word that you initially chose to prove that you are human. Perhaps it was a common choice, or perhaps it appeared but one other time, your thoughts in secret affinity with the machinations that produced words such as caterpillar, ethereal, or shenanigans. You may have delighted that your word was judged highly human, or wondered how it would have fared,” the researchers wrote.

“Whatever your word, it rested on the ability to rapidly navigate a web of shared meanings, and to make nuanced predictions about how others would do the same. As much as love and compassion, this is part of what it is to be human.”

Also: poop, apparently.

The research has been published in the Journal of Experimental Social Psychology.

Ray Kurzweil Predicts Three Technologies Will Define Our Future


Over the last several decades, the digital revolution has changed nearly every aspect of our lives.

The pace of progress in computers has been accelerating, and today, computers and networks are in nearly every industry and home across the world.

Many observers first noticed this acceleration with the advent of modern microchips, but as Ray Kurzweil wrote in his book The Singularity Is Near, we can find a number of eerily similar trends in other areas too.

According to Kurzweil’s law of accelerating returns, technological progress is moving ahead at an exponential rate, especially in information technologies.

This means today’s best tools will help us build even better tools tomorrow, fueling this acceleration.

But our brains tend to anticipate the future linearly instead of exponentially. So, the coming years will bring more powerful technologies sooner than we imagine.
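To see how big that gap gets, here is a small back-of-the-envelope comparison; the step count and doubling rate are arbitrary illustrations, not figures from Kurzweil.

  # Linear vs. exponential projection over the same number of steps.
  # The numbers are arbitrary; only the shape of the comparison matters.
  steps = 30
  linear = 1 + steps              # add one unit of progress per step -> 31
  exponential = 2 ** steps        # double the capability every step -> 1,073,741,824

  print(linear, exponential)

A linear forecaster expects about 31 units of progress after 30 steps; a doubling trend delivers roughly a billion, which is the scale of surprise Kurzweil argues our intuitions miss.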

As the pace continues to accelerate, what surprising and powerful changes are in store? This post will explore three technological areas Kurzweil believes are poised to change our world the most this century.

Genetics, Nanotechnology, and Robotics

Of all the technologies riding the wave of exponential progress, Kurzweil identifies genetics, nanotechnology, and robotics as the three overlapping revolutions which will define our lives in the decades to come. In what ways are these technologies revolutionary?

  • The genetics revolution will allow us to reprogram our own biology.
  • The nanotechnology revolution will allow us to manipulate matter at the molecular and atomic scale.
  • The robotics revolution will allow us to create greater-than-human, non-biological intelligence.

While genetics, nanotechnology, and robotics will peak at different times over the course of decades, we’re experiencing all three of them in some capacity already. Each is powerful in its own right, but their convergence will be even more so. Kurzweil wrote about these ideas in The Singularity Is Near over a decade ago.

Let’s take a look at what’s happening in each of these domains today, and what we might expect in the future.


The Genetics Revolution: ‘The Intersection of Information and Biology’


“By understanding the information processes underlying life, we are starting to learn to reprogram our biology to achieve the virtual elimination of disease, dramatic expansion of human potential, and radical life extension.”
Ray Kurzweil, The Singularity Is Near

We’ve been “reprogramming” our environment for nearly as long as humans have walked the planet. Now we have accrued enough knowledge about how our bodies work that we can begin tackling disease and aging at their genetic and cellular roots.

Biotechnology Today

We’ve anticipated the power of genetic engineering for a long time. In 1975, the Asilomar Conference debated the ethics of genetic engineering, and since then, we’ve seen remarkable progress in both the lab and in practice—genetically modified crops, for example, are already widespread (though controversial). 

Since the Human Genome Project was completed in 2003, enormous strides have been made in reading, writing and hacking our own DNA.

Now, we’re reprogramming the code of life from bacteria to beagles and soon, perhaps, in humans. The ‘how,’ ‘when,’ and ‘why’ of genetic engineering are still being debated, but the pace is quickening.

Major innovations in biotech over the last decade include:

Many challenges still need to be overcome before these new technologies are widely used on humans, but the possibilities are incredible. And we can only assume the speed of progress will continue to accelerate. The surprising result? Kurzweil proposes that most diseases will be curable and the aging process will be slowed or perhaps even reversed in the coming decades.

The Nanotechnology Revolution: ‘The Intersection of Information and the Physical World’


“Nanotechnology has given us the tools…to play with the ultimate toy box of nature – atoms and molecules. Everything is made from it…The possibilities to create new things appear endless.”
– Nobelist Horst Störmer, The Singularity Is Near

Many people date the birth of conceptual nanotech to Richard Feynman’s 1959 speech, “There’s Plenty of Room at the Bottom,” where Feynman described the “profound implications of engineering machines at the level of atoms.” But it was only when the scanning tunneling microscope was invented in 1981 that the nanotechnology industry began in earnest.

Kurzweil argues that no matter how successfully we fine-tune our DNA-based biology, it will be no match for what we will be able to engineer by manipulating matter on the molecular and atomic level.

Nanotech, Kurzweil says, will allow us to redesign and rebuild “molecule by molecule, our bodies and brains and the world in which we live.”

Nanotechnology Today

While we can already see evidence of the ‘genetics revolution’ in the news and in our daily lives, for most people, nanotech might still seem like the stuff of science fiction. However, it’s likely you already use products on a daily basis that have benefitted from nanotech research. These include sunscreens, clothing, paints, cars, and more. And of course, the digital revolution has continued thanks to new methods allowing us to make chips with nanoscale features.

In addition to already having practical applications today, there is much research and testing being conducted into groundbreaking (if still experimental) nanotechnology like:

Though we continue to improve at manipulating matter on nanoscales, we’re still far from nanobots or nanoassemblers that would build and repair atom by atom.

That said, as Feynman pointed out, the principles of physics do not speak against such a future. And we need only look to our own biology to see an already working model in the intricate nano-machinery of life.


The Robotics Revolution: ‘Building Strong Artificial Intelligence’

“It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating.”
Ray Kurzweil, The Singularity Is Near

The name of this revolution might be a little confusing. Kurzweil says robotics is embodied artificial intelligence—but it’s the intelligence itself that matters most. While acknowledging the risks, he argues the AI revolution is the most profound transformation human civilization will experience in all of history. 

This is because this revolution is characterized by being able to replicate human intelligence: the “most important and powerful attribute of human civilization.”

We’re already well into the era of ‘narrow AI’ – machines that have been programmed to do one or a few specific tasks – but that’s just a teaser of what’s to come.

Strong AI will be as versatile as a human when it comes to solving problems. And according to Kurzweil, even AI that can function at the level of human intelligence will already outperform humans because of several aspects unique to machines:

  • “Machines can pool resources in ways that humans cannot.”
  • “Machines have exacting memories.”
  • Machines “can consistently perform at peak levels and can combine peak skills.”

Artificial Intelligence Today

Most of us use some form of narrow AI on a regular basis — like Siri and Google Now, and increasingly, Watson. Other forms of narrow AI include programs like:

  • Speech and image recognition software
  • Pattern recognition software for autonomous weapons
  • Programs used to detect fraud in financial transactions
  • Google’s AI-based statistical learning methods used to rank links

The next step towards strong AI will be machines that learn largely on their own from data, rather than having every rule explicitly programmed by humans. One approach powering this shift is ‘deep learning’, a powerful mode of machine learning currently experiencing a surge in research and applications.
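As a toy illustration of what learning from data means in practice, the snippet below trains a small neural network on scikit-learn's bundled handwritten-digit images. It is a generic example for orientation only, not code from any of the systems named above.

  # A small neural network learns to recognise digits from labelled examples,
  # rather than following rules written by hand. Requires scikit-learn.
  from sklearn.datasets import load_digits
  from sklearn.model_selection import train_test_split
  from sklearn.neural_network import MLPClassifier

  X, y = load_digits(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
  model.fit(X_train, y_train)            # "learning" = adjusting weights to fit the data
  print(model.score(X_test, y_test))     # accuracy on digits it has never seen, typically above 0.9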

Why Is This Important?

Kurzweil calls genetics, nanotechnology, and robotics overlapping revolutions because we will continue to experience them simultaneously as each one of these technologies matures.

These and other technologies will likely converge with each other and impact our lives in ways difficult to predict, and Kurzweil warns each technology will have the power to do great good or harm—as is the case with all great technologies. The extent to which we’re able to harness their power to improve lives will depend on the conversations we have and the actions we take today.

“GNR will provide the means to overcome age-old problems such as illness and poverty, but it will also empower destructive ideologies,” Kurzweil writes. “We have no choice but to strengthen our defenses while we apply these quickening technologies to advance our human values, despite a lack of consensus on what those values should be.”

The more we anticipate and debate these three powerful technological revolutions, the better we can guide their development toward outcomes that do more good than harm.

Meet the Cyborg Beetles, Real Insects That Are Controlled Like Robots


The future is crawling towards us on six legs. Motherboard traveled to Singapore to meet with Dr. Hirotaka Sato, an aerospace engineer at Nanyang Technological University. Sato and his team are turning live beetles into cyborgs by electrically controlling their motor functions.

Having studied the beetles’ muscle configuration, neural networks, and leg control, the researchers wired the insects so that they could be controlled by a switchboard. In doing so, the researchers could manipulate the different walking gaits, speeds, flying direction, and other forms of motion.

Essentially, the beetles became like robots with no control over their own motor functioning. Interestingly, though the researchers control the beetles through wiring, their energy still comes naturally from the food they eat. Hence, the muscles are driven by the insects themselves, but they have no willpower over how their muscles move.

Moreover, turning beetles into cyborgs seems to not be that harmful to them. Their natural lifespan is three to six months, and even with the researchers’ interference, they can survive for several months. According to the researchers, a beetle has never died right after stimulation.

And while this technology may seem crazy, the implications are very practical. Sensors that detect heat, and hence people, can be placed on the beetles, so that they can be manipulated to move toward a person. This can be helpful when searching for someone, such as in a criminal investigation or finding a terrorist.

The researchers are very serious about ensuring that, whatever the applications of this technology turn out to be, they go toward peaceful purposes. And who knows how far it could go? With this much progress in manipulating the motor functions of creatures as small as beetles, perhaps the same approach could be applied to larger animals.

Watch the video: https://youtu.be/tgLjhT7S15U


Paralyzed Man Kicks Off World Cup


Wearing an exoskeleton that relayed signals from his brain to his legs, a 29-year-old with complete paralysis of the lower trunk performed the ceremonial first kick of the international sporting event.

 

The suit that Nicolelis helped build in the laboratory (BIGBONSAI + LENTEVIVA FILMES)

 

Juliano Pinto, a 29-year-old Brazilian man, wore a robotic suit that worked via a brain-machine interface (BMI) and nudged a soccer ball with his foot, sending it rolling down a short mat in Sao Paulo’s Corinthians Arena yesterday (June 12). The gesture was small, but it was the culmination of years of research, carried out by dozens of scientists studying BMIs. And it was the ceremonial first kick of soccer’s World Cup, which got underway with a match between host nation Brazil and Croatia (Brazil won 3-1).

“We did it!!!!” tweeted Miguel Nicolelis, the Duke University neuroscientist who headed the team of researchers that worked on the project. Seven other paralyzed people who had trained alongside Pinto watched from the sidelines. “It was a great team effort, and I would like to especially highlight the eight patients who devoted themselves intensively to this day,” Nicolelis said in a statement from the Walk Again Project, the international consortium of researchers and funders behind the work. “It was up to Juliano to wear the exoskeleton, but all of them made that shot. It was a big score by these people and by our science.”

 

Pinto’s suit included an electroencephalogram (EEG) cap whose electrodes picked up nervous impulses from his brain; the signals were amplified and sent to processors that decoded them and relayed commands to the hydraulics that moved the exoskeleton strapped to his legs.
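In outline, that is a sense-decode-actuate loop. The sketch below shows the idea only; every function in it is an invented stand-in, not part of the Walk Again Project's actual software.

  # Hypothetical outline of an EEG-driven exoskeleton control loop.
  # All names are invented stand-ins for the stages described above.
  import random

  def read_eeg_window():
      # stand-in for sampling the electrode voltages from the cap
      return [random.gauss(0.0, 1.0) for _ in range(256)]

  def amplify_and_filter(raw):
      # stand-in for boosting and cleaning the weak EEG signal
      return [10.0 * v for v in raw]

  def decode(signal):
      # stand-in for the classifier that maps brain activity to an intended movement
      return "kick" if sum(signal) > 0 else "idle"

  def drive_exoskeleton(command):
      # stand-in for the hydraulics that move the legs
      print("executing:", command)

  for _ in range(3):  # a few passes through the sense-decode-actuate loop
      command = decode(amplify_and_filter(read_eeg_window()))
      if command != "idle":
          drive_exoskeleton(command)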

Despite Nicolelis’s exuberance, other researchers working on BMI weren’t so impressed. “The demo did not advance the state of the art,” Jose Contreras-Vidal, a biomedical engineer at the University of Houston, told NBC News. His team has been working on their own BMI exoskeleton, the NeuroRex, a suit that pioneered the EEG-based control of bionic legs. “Certainly our NeuroRex was the first and remains the only brain-controlled exoskeleton to allow spinal cord injury patients to walk over-ground unassisted, and we have been able to do so with about 10 percent of the funding Dr. Nicolelis has received to develop their exo.” The Brazilian government poured $14 million into the Walk Again Project over the past two years, Nicolelis told Agence France-Presse.

Watch the video: https://www.youtube.com/watch?feature=player_embedded&v=fZrvdODe1QI#t=0

Google robot wins Pentagon contest.


Schaft won this round of Darpa’s competition by a wide margin

A robot developed by a Japanese start-up recently acquired by Google is the winner of a two-day competition hosted by the Pentagon’s research unit Darpa.

Team Schaft’s machine carried out all eight rescue-themed tasks to outscore its rivals by a wide margin.

Three of the other 15 teams that took part failed to secure any points at the event near Miami, Florida.

Schaft and seven of the other top-scorers can now apply for more Darpa funds to compete in 2014’s finals.

Darpa Robotics Challenge Scoreboard

1. Schaft (27 points)
2. IHMC Robotics (20 points)
3. Tartan Rescue (18 points)
4. MIT (16 points)
5. Robosimian (14 points)
6. (tie) Traclabs / Wrecs (11 points)
8. Trooper (9 points)
9. (tie) Thor / Vigir / Kaist (8 points)
12. (tie) HKU / DRC-Hubo (3 points)
14. (tie) Chiron / Nasa-JSC / Mojavaton (0 points)

Darpa said it had been inspired to organise the challenge after it became clear that robots could play only a very limited role in efforts to contain the meltdown at Japan’s Fukushima nuclear plant in 2011.

“What we realised was … these robots couldn’t do anything other than observe,” said Gill Pratt, programme manager for the Darpa Robotics Challenge.

“What they needed was a robot to go into that reactor building and shut off the valves.”

In order to spur on development of more adept robots the agency challenged contestants to complete a series of tasks, with a time-limit of 30 minutes for each:

  • Drive a utility vehicle along a course
  • Climb an 8ft-high (2.4m) ladder
  • Remove debris blocking a doorway
  • Pull open a lever-handled door
  • Cross a course that featured ramps, steps and unfastened blocks
  • Cut a triangular shape in a wall using a cordless drill
  • Close three air valves, each controlled by a different-sized wheel or lever
  • Unreel a hose and then screw its nozzle into a wall connector

More than 100 teams originally applied to take part, and the number was whittled down to 17 by Darpa ahead of Friday and Saturday’s event.

Humanoid robots drove cars, climbed ladders – and often fell – in the competition sponsored by the US Department of Defense

Some entered their own machines, while others made use of Atlas – a robot manufactured by another Google-owned business, Boston Dynamics – controlling it with their own software.

One self-funded team from China – Intelligent Pioneer – dropped out at the last moment, bringing the number of contestants who took part at the Homestead-Miami Speedway racetrack to 16.

Schaft

Schaft’s liquid-cooled robot was able to carry out all of the eight main challenges it faced.

Schaft’s 1.48m (4ft 11in) tall, two-legged robot entered the contest as the favourite and lived up to its reputation.

It makes use of a new high-voltage, liquid-cooled motor technology that uses a capacitor, rather than a battery, for power. Its engineers say this lets its arms move and pivot at higher speeds than would otherwise be possible, in effect giving it stronger “muscles”.

The robots – including Virginia Tech's Thor-OP – had to attach a hose pipe as one of their challenges

The machine was developed by a spin-off from the University of Tokyo‘s Jouhou System Kougaku lab, which Google recently revealed it had acquired.

The team scored 27 points out of a possible 32, putting it seven points ahead of second-placed IHMC Robotics, which used Atlas.

Scores were based on a system that awarded three points for completing a task’s primary objectives, and then a bonus point for doing so without any human intervention.
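A quick sanity check on those numbers, assuming the same three-plus-one scoring applied to each of the eight tasks:

  # Eight tasks, worth 3 points each plus a 1-point no-intervention bonus.
  tasks = 8
  max_score = tasks * (3 + 1)                  # 32 points available in total
  schaft_score = 27
  print(max_score, schaft_score / max_score)   # 32, and roughly 0.84 of the maximum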

Schaft’s robot behaved nearly perfectly, but lost points because “the wind blew a door out of their robot’s hold and because their robotic creation was not able to climb out of a vehicle after it successfully navigated an obstacle course,” reported the Japan Daily Press.

‘Reality check’

Videos posted online by Darpa illustrate that the robots remain much slower than humans, often pausing for a minute or more between actions while they carry out the calculations needed to make each movement.

Several proved unsteady on their feet and were only saved from falls by attached harnesses.

Three of the teams which entered self-designed machines – including Nasa’s Johnson Space Center and its robot Valkyrie – failed to complete any of the challenges.

The event was described as a “reality check” by Jyuji Hewitt, who attended on behalf of the US Army’s Research, Development and Engineering Command.

The robots – including Nasa’s Robosimian – were protected by harnesses in case they fell

But Darpa’s Mr Pratt added that the competition, and the finals that will be held in December, would help bring forward the time when such machines could be used in real-world situations.

“Today’s modest progress will be a good next step to help save mankind from disasters,” he said.

The top eight teams can now apply for up to $1m (£611,000) of Darpa investment before the finals to improve their robots’ skills. The winner will get a $2m prize.

Lower scorers in last weekend’s round can stay in the contest but will have to fund their own efforts.

Exaggerated gait allows limbless R2G2 robot to move quickly in confined spaces, rough terrain (w/ Video)


Snakes usually travel by bending their bodies in the familiar S-pattern. But when they’re stalking prey, snakes can move in a straight line by expanding and contracting their bodies. This “rectilinear gait” is slow, but it’s quiet and hard to detect—-a perfect way to grab that unsuspecting rodent.

Roboticists have long known that this kind of “limbless locomotion” is a highly effective way for a robot to move through cluttered and confined spaces. But like snakes, robots that employ rectilinear gaits are slow. They also have a problem maintaining traction on steep slopes.

University of Maryland Mechanical Engineering Ph.D. student James Hopkins has been trying hard to overcome the speed limitations of engineered limbless locomotion. In a robot called “R2G2” (Robot with Rectilinear Gait for Ground operations), he decided to dramatically exaggerate the gait to increase the speed.

“Our current R2G2 model has a maximum forward velocity of one mile per hour, bringing it close to human walking speed,” says Hopkins. “Our goal is to develop a gait and a mechanical architecture that will enable high-speed limbless locomotion to support applications such as search and rescue.”

“To the best of our knowledge, this is the fastest limbless robot in its class in the open literature,” says Hopkins’ faculty advisor, Professor S.K. Gupta (ME/ISR).

R2G2 could get faster. “In this design, the speed is linearly proportional to the length of the robot. So by doubling the length we should be able to easily achieve the speed of two miles per hour,” Gupta says. To get much above that speed, R2G2 will need an upgrade to more powerful motors.

R2G2 can move through spaces that are problems for other kinds of robots. It can crawl through pipes, and traverse tricky surfaces like grass and gravel. What’s more, “it can climb steep, narrow inclines,” Hopkins says.

Hopkins used actively actuated friction pads near the head and tail of the robot to improve its traction, and has found that different terrains require unique kinds of friction pads—a bed of nails for traveling over grass; rubber for carpets.

In addition, Gupta and Hopkins used 3D printing technology to create a novel mechanism for expanding and contracting R2G2’s body while maintaining a small body cross section. This enabled them to make geometrically complex parts and greatly simplify the assembly of the robot. Other researchers with access to 3D printing will be able to easily replicate R2G2 in their labs.


Currently robots that use limbless locomotion do not come close to their natural counterparts in terms of capabilities. “Unfortunately, we do not yet have access to engineered actuators that can match the natural muscles found in biological creatures,” Gupta says, “or highly distributed, fault-tolerant, self-calibrating, multi-modal sensors and materials with highly direction-dependent friction properties. So our design options for limbless locomotion are limited and truly mimicking nature is simply not possible right now.”

In the short term, Gupta believes robotics engineers are better off “taking a different approach that exploits inspiration from biological creatures.” Robots like R2G2 advance the science because they “take a useful feature in nature and exploit it to the fullest extent.”

Source:  University of Maryland

Tiny robot flies like a fly.


Engineers create first device able to mimic full range of insect flight.

A robot as small as a housefly has managed the delicate task of flying and hovering the way the actual insects do.

“This is a major engineering breakthrough, 15 years in the making,” says electrical engineer Ronald Fearing, who works on robotic flies at the University of California, Berkeley. The device uses layers of ultrathin materials that can make its wings flap 120 times a second, which is on a par with a housefly’s flapping rate. This “required tremendous innovation in design and fabrication techniques”, he adds.

The robot’s wings are composed of thin polyester films reinforced with carbon fibre ribs and its ‘muscles’ are made from piezoelectric crystals, which shrink or stretch depending on the voltage applied to them.

Kevin Ma and his colleagues, all based at Harvard University in Cambridge, Massachusetts, describe their design today in Science.

The tiny components, some of which are just micrometres across, are extremely difficult to make using conventional manufacturing technologies, so the researchers came up with a folding process similar to that used in a pop-up book. They created layers of flat, bendable materials with flexible hinges that enabled the three-dimensional structure to emerge in one fell swoop. “It is easier to make two-dimensional structures and fold them into three dimensions than it is to make three dimensional structures directly,” explains Ma.

Manufacturing marvel

“The ability to manufacture these little flexure joints is going to have implications for a lot of aspects of robotics that have nothing to do with making a robotic fly,” notes Michael Dickinson, a neuroscientist at the University of Washington in Seattle.

The work “will also lead to better understanding of insect flapping wing aerodynamics and control strategies” because it uses an engineering system “that can be more easily modified or controlled than an animal”, Fearing adds.

Weighing in at just 80 milligrams, the tiny drone cannot carry its own power source, so has to stay tethered to the ground. It also relies on a computer to monitor its motion and adjust its attitude. Still, it is the first robot to deploy a fly’s full range of aerial motion, including hovering.

The biggest technical obstacle to independent flight is building a battery that is small enough to be carried by the robotic fly, says Fearing. At present, the smallest batteries with enough power weigh about half a gram — ten times more than what the robotic fly can support. Ma says he believes that the battery obstacle might be overcome in 5-10 years.

If researchers can come up with such a battery, and with lightweight onboard sensors, Ma says that the robots could be useful in applications such as search and rescue missions in collapsed buildings, or as ways to pollinate crops amid dwindling bee populations.

Source: Nature

A Dash of Color Creates Camouflage for Spineless Robots.


Late last year, Harvard University chemists and materials scientists introduced a robot whose rubbery appendages fly—or, more accurately, crawl—in the face of conventional automatons. These invertebrate-inspired albino bots relied on elastic polymers and pneumatic pumps to imitate the movements of worms, squid and starfish. Now these squishy quadrupeds can be pumped with a variety of dyes, enabling them to either blend in or stand out from their environments.

In addition to stretching the boundaries of how robots are designed, built and operated, adding color could help scientists better understand why certain creatures may have evolved to their current shape, color and capabilities, according to the researchers, led by chemist and materials scientist George Whitesides. In the August 17 issue of Science, the researchers describe using 3-D printers to create silicone robots whose different layers contain microchannels through which liquids can flow in a variety of patterns. Heated or cooled solutions pumped into these channels enabled the researchers to create thermal camouflage, while fluorescent fluids produced glow-in-the-dark effects.

These five-centimeter-thick bots, each of which looks like a pair of Ys joined at the stem, mimic natural movement without the need for complex mechanical components and assembly. They also demonstrate the value of considering simple animals when looking for inspiration for robots and machines, the researchers say.

The shape-shifting robot features an upper, flexible layer designed with a system of channels through which air can pass. A second layer is made of a more rigid polymer. The researchers place the top, actuating layer onto the bottom, strain limiting/sealing layer with a thin coating of silicone adhesive. Air pumped into different valves in the upper layer causes them to inflate and bend the robot into different positions. For example, the robot can lift any one of its four legs off the ground and leave the other three legs planted to provide stability, depending on which channels are inflated.
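One way to picture that control scheme is as a simple valve-sequencing loop: pressurise the channels for one leg at a time while the other three stay planted. The sketch below is purely illustrative; the channel names, timing, and the mapping from inflation to leg-lifting are assumptions, not details of the Harvard design.

  # Illustrative gait sequencer for a four-legged pneumatic soft robot.
  # Names, timing, and the inflate-to-lift mapping are invented for this sketch.
  import time

  LEGS = ["front_left", "front_right", "rear_left", "rear_right"]

  def set_valve(channel, inflated):
      # stand-in for the hardware call that pressurises or vents a channel
      print(f"{channel}: {'inflate' if inflated else 'vent'}")

  for lifted in LEGS:                  # one gait cycle: lift each leg once in turn
      for leg in LEGS:
          set_valve(leg, inflated=(leg == lifted))   # the other three legs stay planted
      time.sleep(0.5)                  # hold the pose before moving to the next leg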

In the following video, Stephen Morin, a Harvard post-doctoral fellow in chemistry and chemical biology and lead author of the paper, demonstrates how the flexible robot can change color depending upon its surroundings.

Source: Scientific American