Brain-inspired synaptic transistor learns while it computes.

It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 Watts of energy (less than a household light bulb), so it offers a natural model for engineers.

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
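That spike-timing dependence can be sketched with a toy model. To be clear, this is purely conceptual: the update rule, rate, and time constant below are invented for illustration and are not the device’s measured physics.

```python
from math import exp

# Conceptual sketch of activity-dependent conductance: the connection
# strengthens more when spikes arrive closer together in time, mimicking
# "the faster the neurons spike, the stronger the synaptic connection".
# The rate and time constant are invented for illustration only.
def update_conductance(g, spike_interval_ms, rate=0.2, tau_ms=20.0):
    # Shorter inter-spike intervals produce a larger increase in conductance.
    return g + rate * exp(-spike_interval_ms / tau_ms)

g_fast = g_slow = 1.0
for _ in range(3):
    g_fast = update_conductance(g_fast, 5.0)    # rapid spiking
    g_slow = update_conductance(g_slow, 50.0)   # sparse spiking

print(g_fast > g_slow)  # True: rapid spiking strengthens the connection more
```

In the real device the analogous role is played by oxygen-ion concentration in the nickelate; the point of the sketch is only that connection strength grows as a function of spike timing.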

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives oxygen ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.'”

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”

He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.

“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.

For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.

“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”

“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”

Spirulina supplementation improves academic performance in schoolchildren.

Did you know that, among its many benefits, spirulina has also been shown to improve academic performance in schoolchildren?


Spirulina is the name given to more than 40,000 varieties of spiral-shaped, blue-green algae that are consumed as nutritional supplements, typically in powdered or tablet form. It grows naturally in warm freshwater lakes between 85 and 112 degrees Fahrenheit.

Because spirulina is an abundant, naturally occurring food that is high in nutrients but contains only 3.9 calories per gram, it has attracted attention as a nutritional supplement that might be able to help alleviate malnutrition worldwide without leading to the opposite problem of obesity. Adding to spirulina’s appeal, it retains its nutritional value well during processing and has an extraordinarily long shelf life.

A nutritional powerhouse

The academic performance study was conducted by Senegalese researchers and published in the French journal Santé Publique in 2009. The researchers were evaluating the effectiveness of a government program designed to improve the nutritional status of schoolchildren with spirulina supplements. The children consumed 2 g of spirulina (mixed with 10 g of honey for flavor) once per day for 60 days.

The researchers compared the academic performance of 549 Senegalese elementary school students right before the beginning of spirulina supplementation with their performance two months later. The children’s average age was seven years, seven months.

After two months of spirulina supplementation, the children’s average school performance had increased by 10 percent. The results were statistically significant.

Because so little research on this effect has been done, it is impossible to be certain what is responsible for this improvement in academic scores. However, studies have shown that spirulina improves both cognitive ability and mental health, in part because it contains high levels of L-tryptophan – the amino acid needed for the body to synthesize the neurotransmitters serotonin and melatonin.

Another possible explanation is that spirulina improves the overall nutritional health of school children, which has been strongly correlated with academic performance. Spirulina is not just a complete protein but 60-70 percent protein by weight, a higher proportion than either soy or red meat. It is high in vitamins A, C, D and E, as well as in B vitamins, including B-12, which is not typically found in vegetable sources. It also contains a wide variety of minerals, antioxidants and fatty acids that have been shown to contribute to healthier skin and hair, and to fight cell damage.

Benefits for all ages

The clinically proven properties of spirulina exceed even these remarkable benefits. It has been shown to help the body fight infection, lose weight, lower cholesterol and even prevent the inflammation linked with heart disease. It fights anemia (it is especially high in iron), purifies the blood and removes heavy metals and other toxic substances from the body.

Spirulina has also been shown to increase energy, help control food cravings and relieve anxiety, depression, fatigue, stress and premenstrual syndrome. It is one of the most effective natural ways to relieve the symptoms of allergies and hay fever. Spirulina has also shown promise in fighting arthritis, alcoholism, herpes and even cancer.

All of these benefits can come from taking as little as 2-3 grams of spirulina per day.

Spirulina should not be taken, however, by anyone with phenylketonuria or autoimmune disorders, due to its high phenylalanine content and its immune-boosting properties, respectively. Pregnant or breastfeeding women should only take it under the supervision of a health care practitioner.

Fission Power: The Pros, the Cons, and the Math.

The process of nuclear fission was first discovered in 1938; however, it wasn’t fully explained until a year later. Today – less than 100 years after its initial discovery – it is the poster child (and not in a good way) of the ‘green energy’ movement that is sweeping across the globe. Most of what we hear about the pitfalls of fission technology is sensationalist, but there is no doubt that the process has led to nuclear disasters. Recently, reports have stated that radioactivity levels spiked to 6,500 times the legal limit at Fukushima, and issues continue to persist in that area. The process has also been linked to non-localized devastation: after the Chernobyl disaster, the Soviet government evacuated about 115,000 people from the most heavily contaminated areas in 1986, and another 220,000 people had to be evacuated from surrounding areas in subsequent years.

Credit: U.S. NRC

Now, there is a huge debate as to whether governments worldwide should continue developing safer nuclear power plants, or whether the technology should be scrapped altogether in favor of something perceived as “safer.” Given the importance of this debate to the environment and to our exponentially growing energy needs, everyone should have a proper understanding of the topic; for the most part, however, very few people have more than a very basic understanding of the science and mathematics behind the process. In this article, I want to attempt a more thorough explanation than you may have read before.

Atomic Fission:

Nuclear Fission

As most of you will hopefully be aware, nuclear fission is a chain reaction involving large, unstable nuclei. The reaction ignites when a neutron collides with a heavy nucleus, making it even more unstable, before that nucleus divides into two ‘daughter’ nuclei and releases (on average) around three more neutrons. Those additional neutrons go on to initiate fission in other nuclei they come into contact with, and those reactions release yet more neutrons, and so on (like a domino effect). The most common fuel used for fission is Uranium-235 (that’s 92 protons and 143 neutrons), and the two product nuclei (plus neutrons) can come in a range of sizes.

As with any balanced reaction or equation, what you end up with must still sum to what you started with, and this is true of fission reactions too. Ultimately, the total number of nucleons (protons and neutrons) after fission, in whichever new combinations, must still add up to the original number.


So what good is fission to us? Well it produces energy of course! But where does this energy actually come from? I mentioned that the number of protons and neutrons remains the same, and that they are just rearranged into more stable combinations; this is true. However, when adding up the total masses before and after, you will find that the mass will DECREASE. Said decrease in mass is the answer to our question, as the lost mass is converted into pure energy.

With a little prior knowledge (and a very familiar equation), we can calculate the amount of energy produced. An example goes as follows… (warning, complicated math is contained below)

Let us take this reaction:

1 neutron + Uranium-235 → Strontium-98 + Xenon-136 + 2 neutrons (rounded values in relative atomic mass)
  • Mass before = 236.053u
  • Mass after = 235.840u
  • Mass change = 0.213u

Note that the nucleon count balances: 1 + 235 = 98 + 136 + 2 = 236.

To convert this result into kilograms, we multiply the number by 1.661×10^-27 (the mass in kilograms of one atomic mass unit). So:

0.213 x (1.661×10^-27) = 3.538×10^-28 kg

Next, using E=mc^2, we can convert this mass into energy (using the rounded value for c):

(3.538×10^-28) x (3×10^8)^2 = 3.18×10^-11 J

This isn’t a very large amount of energy – but remember that this is just for a single atom of Uranium! So suppose we could persuade it to fission completely, how much energy would be produced for one gram of Uranium? Since we know how much energy is produced by one atom of uranium, to find the energy produced by one gram, all we need to do is know how many atoms are in a gram. To figure this out, we use Avogadro’s constant, which is equal to the number of atoms of any element in one mole of that element. That number is 6.022×10^23, and we use it in the following equation (probably more familiar to chemists than physicists)

Number of atoms = (mass x Avogadro’s no.) / molar mass

Therefore the number of atoms in a gram of Uranium can be calculated as:

(0.001 kg x 6.022×10^23) / 0.236053 kg/mol = 2.55×10^21

Now we can multiply this number by the amount of energy produced by a single fission reaction and we get:

(2.55×10^21)x(3.18×10^-11) = 8.11×10^10J
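The whole chain of arithmetic above can be checked in a few lines of Python, using the article’s rounded constants (the variable names are mine, chosen for the example):

```python
# Reproduce the article's energy estimate for fissioning one gram of U-235.
U_AMU = 1.661e-27   # kg per atomic mass unit
C = 3e8             # speed of light in m/s (rounded)
N_A = 6.022e23      # Avogadro's number, atoms per mole

mass_defect_u = 236.053 - 235.840            # 0.213 u lost per fission
mass_defect_kg = mass_defect_u * U_AMU       # ~3.54e-28 kg
e_per_fission = mass_defect_kg * C ** 2      # E = mc^2, ~3.18e-11 J

# Atoms in one gram, using the article's molar mass of 0.236053 kg/mol
atoms_per_gram = 0.001 * N_A / 0.236053      # ~2.55e21 atoms

e_per_gram = atoms_per_gram * e_per_fission  # ~8.1e10 J from a single gram
print(f"{e_per_fission:.2e} J per fission, {e_per_gram:.2e} J per gram")
```

Running this reproduces the figures in the text: roughly 3.18×10^-11 J per fission and roughly 8.1×10^10 J per gram.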

This is a HUGE amount of energy for just a single gram of fuel, especially when compared with the amount of energy generated by burning coal or oil, and it is the reason why uranium is so widely used (despite the potential dangers). Ultimately, the amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a similar mass of chemical fuel, such as gasoline. The fission process also produces a huge amount of heat, a mix of heavy-element fission products, and a lot of neutrons. In addition, it produces a large volume of radioactive waste. Obviously, this waste needs to be disposed of carefully, as it could cause serious damage to the environment should it leak, and proper storage is extravagantly expensive.

But of course, there are a number of advantages to this kind of power. Ending our dependence on fossil fuels is probably the biggest. Power plants that burn coal are highly destructive to the environment (whereas nuclear fission is really only destructive if there is a leak or meltdown). Moreover, the coal mining process destroys vast swaths of Earth, including a number of diverse habitats, and there is the issue of oil spills (we all probably remember the infamous BP contamination of the Gulf). More importantly, nuclear fuel is far more efficient and found in abundance: large reserves of uranium are spread across many parts of the world, while scientific estimates suggest that, at the rate fossil fuels are being used today, their reserves could be exhausted by the end of this century. Yet the byproducts of the fission process remain radioactive for thousands of years and can cause serious harm to living beings. Although the chances are rare, a nuclear power disaster can decimate a habitat or ecosystem (depending on the size and nature of the disaster).

In the end, it is each individual’s responsibility to acquire the knowledge necessary to make decisions and be informed. Hopefully, this post helped you start (or continue) this journey of discovery.

Jealousy: it’s in your genes.


Around a third of the variation in levels of jealousy across the population is likely to be genetic in origin. Photograph: Tim Flach/Getty Images

How would you feel if you suspected your partner had enjoyed a one-night stand while away on holiday without you? What if, instead of having sex on the trip, you believed she or he had fallen in love with someone? In either case, if your partner will probably never see the other person again, would that make the situation any easier to cope with?

Faced with either scenario, most of us would feel intensely jealous: it’s a very basic, normal reaction. But does the universality of jealousy indicate that it might be genetically programmed?

The first study to investigate the genetic influence on jealousy was recently published. Researchers put the questions at the top of this article to more than 3,000 pairs of Swedish twins. Fraternal twins share about 50% of their genes; identical twins share exactly the same genetic make-up. By comparing the answers given by each group of twins, the researchers were able to show that around one third of the differences in levels of jealousy across the population are likely to be genetic in origin.
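The logic of that twin comparison can be illustrated with Falconer’s classic formula, h² = 2(r_MZ − r_DZ): identical (MZ) twins share all their genes and fraternal (DZ) twins about half, so doubling the gap between the two groups’ trait correlations gives a rough heritability estimate. The correlation values below are hypothetical, chosen only so the estimate lands near the reported one third; they are not the Swedish study’s actual figures.

```python
# Falconer's formula: a rough heritability estimate from twin correlations.
# These correlations are hypothetical, chosen so h^2 comes out near the
# study's reported ~1/3; they are not the Swedish study's actual numbers.
r_identical = 0.37  # trait correlation between identical (MZ) twin pairs
r_fraternal = 0.20  # trait correlation between fraternal (DZ) twin pairs

heritability = 2 * (r_identical - r_fraternal)  # h^2 = 2(r_MZ - r_DZ)
shared_env = r_identical - heritability         # c^2: shared environment
unique_env = 1.0 - r_identical                  # e^2: unique environment

print(f"h^2 = {heritability:.2f}")  # 0.34, about a third
```

The published study used more sophisticated variance-component modelling, but the intuition is the same: the more identical twins resemble each other relative to fraternal twins, the larger the genetic contribution.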

In both scenarios – fears about a partner sleeping with or falling in love with a stranger – women reported more jealousy than men. But the researchers also found a gender difference between relative reactions to the idea of sexual or emotional betrayal. Men were far more troubled by the thought that a partner had been sexually unfaithful than by potential emotional infidelity. Women tended to respond to each scenario with equal levels of jealousy.

Why is this? The answer, according to some scientists, may lie in evolutionary pressures. For both men and women, reproduction is key. But men, unlike women, cannot be certain that they are the biological parent of their child, and so they are naturally more perturbed at the thought of sexual infidelity than they are about emotional infidelity – because it jeopardises the successful transmission of their genes. Women, though relatively less perturbed by the idea that their partner may have been sleeping around, are nevertheless dependent on their mate for their survival and that of their offspring.

That’s the theory. Given that we can’t zip back in a time machine to human prehistory, it’s an explanation that seems impossible to prove or disprove.

Though genes appear to play a part in jealousy, the Swedish results also show that the kinds of things that happen to us in our lives – the way we’re brought up, the people we’re around, the events we experience – are far more important. Only one third of the variation in jealousy seemed to have a genetic origin, so the rest must have been down to environmental differences.

But whether genetic or environmental, hardwired or learned, there’s no doubting the ubiquity of jealousy. It’s an emotion that almost everyone feels at some point, and a major cause of relationship problems. Although much of this jealousy is illusory, we all know that the eye (if nothing else) can wander. In Britain, the National Survey of Sexual Attitudes and Lifestyles found that 82% of men and 76% of women reported more than one lifetime partner, with more than a third of men and almost a fifth of women clocking up 10 or more. Some 31% of men and 21% of women said they had started a new relationship in the previous year, with 15% of men and 9% of women seeing more than one person at the same time.

Occasionally, then, we have grounds to be worried: jealousy alerts us to a looming problem in our relationship. If your partner has been unfaithful in the past, naturally you’ll worry that they might stray again in future. Much of the time, though, jealousy is pointlessly corrosive, making both partners miserable for no good reason. In these cases, how can we get the better of our jealousy? How can the “green-eyed monster” be tamed?

Consider the evidence for your jealousy. What about the evidence that might contradict your fears? What would you tell a friend who came to you with the same worries? Have a chat with a trusted friend to get an independent perspective on how likely it is that your partner is deceiving you.

Talk to your partner. When two people hold differing views of what’s acceptable in the relationship – how much time to spend together, how frequently to keep in touch, whether it’s okay to stay in contact with ex-partners and so on – misunderstanding and jealousy are always a risk. If you haven’t agreed the ground rules for your relationship, make it a priority.

Weigh up the pros and cons. People often believe that their jealousy – for all the pain it brings – actually helps them. So it’s a good idea to draw up a list of the pros and cons, both of being jealous and of trusting your partner. On balance, which one seems the best option?

Get to the bottom of your fears. What is it, do you think, that lies at the root of your jealousy? Do you dread being alone? Do you fear humiliation? When you’ve identified the fears fuelling your jealousy, think constructively about how you’d handle the situation.

Set yourself some ground rules. We can find ourselves trapped in a vicious cycle: jealous behaviour feeds jealous thoughts, which in turn trigger more jealous behaviour. And so on. To break this cycle, it helps to set ourselves some ground rules. When you find yourself worrying about your partner’s faithfulness, save those thoughts for a daily “worry period”. Set aside 15 minutes each day, and postpone all your worrying until then.

Concentrate on the good stuff. Jealousy skews our perspective. To counter it, we need to make a deliberate effort to view things more positively. That means focusing on the good parts of our relationship: the things about our partner and our life together that we like, the things that keep us coming back for more. Focus on the positive by doing more positive things together. And remember to have your own interests and activities that boost your self-esteem.

Snoring mothers-to-be linked to low birth weight babies.

Experts say snoring may be a sign of breathing problems that could deprive an unborn baby of oxygen

Scientists found that women who snored both before and during pregnancy were more likely to have smaller babies and elective C-sections. Photograph: Christopher Furlong/Getty Images

Mothers-to-be who snore are more likely to give birth to smaller babies, a study has found. Snoring during pregnancy was also linked to higher rates of Caesarean delivery.

Experts said snoring may be a sign of breathing problems that could deprive an unborn baby of oxygen.

Previous research has shown women who start to snore during pregnancy are at risk from high blood pressure and the potentially dangerous pregnancy condition pre-eclampsia.

More than a third of the 1,673 pregnant women recruited for the US study reported habitual snoring.

Scientists found women who snored in their sleep three or more nights a week had a higher risk of poor delivery outcomes, including smaller babies and Caesarean births.

Chronic snorers, who snored both before and during pregnancy, were two-thirds more likely to have a baby whose weight was in the bottom 10%.

They were also more than twice as likely to need an elective Caesarean delivery, or C-section, compared with non-snorers.

Dr Louise O’Brien, from the University of Michigan’s Sleep Disorders Centre, said: “There has been great interest in the implications of snoring during pregnancy and how it affects maternal health but there is little data on how it may impact the health of the baby.

“We’ve found that chronic snoring is associated with both smaller babies and C-sections, even after we accounted for other risk factors. This suggests that we have a window of opportunity to screen pregnant women for breathing problems during sleep that may put them at risk of poor delivery outcomes.”

Women who snored both before and during pregnancy were more likely to have smaller babies and elective C-sections, the researchers found. Those who started snoring only during pregnancy had a higher risk of both elective and emergency Caesareans, but not of smaller babies.

Snoring is a key sign of obstructive sleep apnoea, which results in the airway becoming partially blocked, said the researchers, whose findings appear in the journal Sleep.

This can reduce blood oxygen levels during the night and is associated with serious health problems, including high blood pressure and heart attacks.

Sleep apnoea can be treated with CPAP (continuous positive airway pressure), which involves wearing a machine during sleep to keep the airways open.

Dr O’Brien added: “If we can identify risks during pregnancy that can be treated, such as obstructive sleep apnoea, we can reduce the incidence of small babies, C-sections and possibly NICU (neo-natal intensive care unit) admission that not only improve long-term health benefits for newborns but also help keep costs down.”

Software beats CAPTCHA, the web’s ‘are you human?’ test.

Are you human? It just got a lot harder for websites to tell. An artificial intelligence system has cracked the most widely used test of whether a computer user is a bot. And according to its designers, it is more than a curiosity – it is a step on the way to human-like artificial intelligence.

Asking people to read distorted text is a common way for websites to determine whether or not a user is human. These CAPTCHAs – short for Completely Automated Public Turing test to tell Computers and Humans Apart – can theoretically take any form, but the text version has proven effective in stopping spam and malicious software bots.

That’s because software has trouble deciphering text when letters are warped, overlapping or obfuscated by random lines, dots and colours. Humans, on the other hand, can recognise nearly endless variations of a letter after having only seen it a few times.

Vicarious, a start-up firm in Union City, California, announced this week that it has built an algorithm that can defeat any text-based CAPTCHA – a goal that has long eluded security researchers. It can pass Google’s reCAPTCHA, regarded as the most difficult, 90 per cent of the time, says Dileep George, co-founder of the firm. And it does even better against CAPTCHAs from Yahoo and PayPal.

Virtual neurons

George says the result isn’t as important as the methods, which he and CEO Scott Phoenix hope will lead to more human-like AI. Their program uses virtual neurons connected in a network modelled on the human brain. The network starts with nodes that detect input from the real world, such as whether a specific pixel in an image is black or white. Nodes in the next layer “fire” only if they detect a particular arrangement of pixels. A third layer fires only if its nodes recognise arrangements of pixels that form whole or partial shapes. This process repeats across between three and eight levels of nodes, with signals passing between as many as 8 million nodes. The network eventually settles on a best guess for which letters are contained in the image.
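This kind of layered, threshold-based feature detection can be sketched in a few lines. Note that Vicarious has not published its algorithm; the weights, the tiny 2x2 “image”, and the detector layout below are all invented for the example.

```python
# Toy layered feature detector: each node "fires" (outputs 1.0) only when the
# weighted pattern of inputs below it crosses a threshold. A generic sketch
# of hierarchical detection, not Vicarious's unpublished algorithm.
def layer(inputs, weight_rows, threshold=0.5):
    return [1.0 if sum(w * x for w, x in zip(row, inputs)) > threshold else 0.0
            for row in weight_rows]

# Layer 0: raw pixels of a 2x2 image, flattened (1 = black, 0 = white)
pixels = [1.0, 0.0, 1.0, 0.0]  # a vertical bar in the left column

# Layer 1: two hypothetical "edge" detectors (left column, right column)
w1 = [[0.5, 0.0, 0.5, 0.0],   # fires on a left vertical bar
      [0.0, 0.5, 0.0, 0.5]]   # fires on a right vertical bar

# Layer 2: a "shape" node that fires when the left-bar detector alone fired
w2 = [[1.0, -1.0]]

edges = layer(pixels, w1)   # [1.0, 0.0]
shape = layer(edges, w2)    # [1.0]: the network "recognises" the left bar
print(edges, shape)
```

Scaling the same idea to millions of nodes and many layers, with weights learned from solved CAPTCHAs rather than written by hand, is what turns a toy like this into a letter recogniser.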

The strength of each neural connection is determined by training the network with solved CAPTCHAs and videos of moving letters. This allows the system to develop its own representation of, say, the letter “a”, instead of cross-referencing against a database of instances of the letter. “We are solving it in a general way, similar to how humans solve it,” says George.

Yann LeCun, an AI researcher at New York University, says neural network-based systems are widely deployed. He thinks it is hard to know whether Vicarious’s system represents a technological leap, because the company hasn’t revealed details about it.

If Vicarious’s claims pan out, it would be very significant, says Selmer Bringsjord, a computer scientist at Rensselaer Polytechnic Institute in Troy, New York. He says breaking text-based CAPTCHAs requires a high-level understanding of what letters are.

Rather than bringing a product to market, Vicarious will pit its tool against more Turing tests. The aim is for it to tell what is happening in complex scenes or to work out how to adapt a simple task so it works somewhere else, says Phoenix (see “More than words”, below). This kind of intelligence might enable things like robotic butlers, which can function in messy, human environments.

“Our focus is to solve the fundamental problems,” says Phoenix. “We’re working on artificial intelligence, and we happened to solve CAPTCHA along the way.”

This article will appear in print under the headline “CAPTCHAs cracked”

More than words

A CAPTCHA doesn’t have to involve text – it can be any automated test that sorts humans from software. Vicarious in Union City, California, has a system that can read distorted text, but the firm has greater ambitions for artificial intelligence. Next up will be coping with optical illusions. Dileep George, one of the firm’s co-founders, thinks more training could help the algorithm with tasks such as recognising three-dimensional symbols in a two-dimensional image.

After that, the challenge might be to identify an object in a clean or distorted image. After that, it would have to work out what is happening in an image, rather than just recognise objects in a picture.

Brain Researchers Discover How Retinal Neurons Claim the Best Connections

Discovery may shed light on brain disease, development of regenerative therapies

Real estate agents emphasize location, location, and – once more for good measure – location. It’s the same in a developing brain, where billions of neurons vie for premium property to make connections. Neurons that stake out early claims often land the best value, even if they don’t develop the property until later.

Scientists at the Virginia Tech Carilion Research Institute and the University of Louisville have discovered that during neurodevelopment, neurons from the brain’s cerebral cortex extend axons to the edge of the part of the brain dedicated to processing visual signals – but then stop. Instead of immediately making connections, the cortical neurons wait for two weeks while neurons from the retina connect to the brain.

Now, in a study to be published in the Nov. 14 issue of the journal Cell Reports, the scientists have discovered how. The retinal neurons stop their cortical cousins from grabbing prime real estate by controlling the abundance of a protein called aggrecan.

Understanding how aggrecan controls the formation of brain circuits could help scientists work out how to repair the brain or spinal cord after injury or disease.

“Usually when neuroscientists talk about repairing injured brains, they’re thinking about putting neurons, axons, and synapses back in the right place,” said Michael Fox, an associate professor at the Virginia Tech Carilion Research Institute and lead author of the study. “It may be that the most important synapses – the ones that drive excitation – need to get there first. By stalling out the other neurons, they can get the best spots. This study shows that when we think about repairing damaged neural networks, we need to consider more than just where connections need to be made. We also need to think about the timing of reinnervation.”

The researchers genetically removed the retinal neurons, which allowed the cortical axons to move into the brain earlier than they normally would.

“We were interested in what environmental molecular cues allow the retinal neurons to control the growth of cortical neurons,” said Fox, who is also an associate professor of biological sciences in Virginia Tech’s College of Science. “After years of screening potential mechanisms, we found aggrecan.”

Aggrecan is a protein that has been well studied in cartilage, bones, and the spinal cord, where it is abundant after injuries. According to Fox, aggrecan may be able to isolate damaged areas of the spinal cord to stop inflammation and prevent further destruction. The downside, however, is that aggrecan inhibits axonal growth, which prevents further repair from taking place.

“Axons see this environment and either stop growing or turn around and grow in the opposite direction,” said Fox.

Although it is less studied in the developing brain, aggrecan appears in abundance there. In the new study, the researchers found that retinal neurons control aggrecan in a region that receives ascending signals from retinal cells as well as descending signals from the cerebral cortex.

Once the retinal neurons have made connections, they cause the release of enzymes that break down the aggrecan, allowing cortical neurons to move in.

Fox said it is interesting that the retinal axons can grow in this region of the developing brain, despite the high levels of aggrecan. He suspects that it may be because retinal neurons express a receptor – integrin – that cortical axons do not express.

The study, “A molecular mechanism regulating the timing of corticogeniculate innervation,” is by Fox, Jianmin Su, a research assistant professor, and Carl Levy, an undergraduate from Suffolk, Va., all with the Virginia Tech Carilion Research Institute; graduate student Justin Brooks and undergraduate Jessica Wang from Virginia Commonwealth University; and Tania Seabrook, a postdoctoral associate, and William Guido, a professor and the chair of the Department of Anatomical Sciences and Neurobiology, both with the University of Louisville School of Medicine.

A universal strategy for visually guided landing.

Landing is a challenging aspect of flight because, to land safely, speed must be decreased to a value close to zero at touchdown. The mechanisms by which animals achieve this remain unclear. When landing on horizontal surfaces, honey bees control their speed by holding constant the rate of front-to-back image motion (optic flow) generated by the surface as they reduce altitude. As inclination increases, however, this simple pattern of optic flow becomes increasingly complex. How do honey bees control speed when landing on surfaces that have different orientations? To answer this, we analyze the trajectories of honey bees landing on a vertical surface that produces various patterns of motion. We find that landing honey bees control their speed by holding the rate of expansion of the image constant. We then test and confirm this hypothesis rigorously by analyzing landings when the apparent rate of expansion generated by the surface is manipulated artificially. This strategy ensures that speed is reduced, gradually and automatically, as the surface is approached. We then develop a mathematical model of this strategy and show that it can effectively be used to guide smooth landings on surfaces of any orientation, including horizontal surfaces. This biological strategy for guiding landings does not require knowledge about either the distance to the surface or the speed at which it is approached. The simplicity and generality of this landing strategy suggests that it is likely to be exploited by other flying animals and makes it ideal for implementation in the guidance systems of flying robots.


In this study, we investigate the cues that honey bees use to land on vertical surfaces. We show that bees use the apparent rate of expansion of the image generated by the surface to smoothly reduce their speed when landing. From our results, we develop a mathematical model for visually guided landing that, unlike all current engineering-based methods, does not require knowledge about either the distance to the surface or the speed at which it is approached. This strategy is not only specific to landings on vertical surfaces or to honey bees but represents a universal strategy that any flying agent (animal or machine) could use to land safely on surfaces of any orientation.
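The control law described above can be illustrated with a minimal simulation (my own sketch, not the authors' published model): if the relative rate of image expansion is r = v/d, then holding r constant makes the commanded speed track distance (v = r·d), so both decay smoothly toward zero at touchdown without the agent ever knowing v or d individually. The parameter values below are arbitrary illustrations.

```python
# Sketch of a constant-expansion-rate landing: the lander measures only
# the relative rate of image expansion r (which a camera can estimate
# from how fast the image "zooms in") and commands speed v = r * d.
# Distance then obeys d' = -r * d, an exponential approach to the surface.

def simulate_landing(d0=10.0, r=0.5, dt=0.01, touchdown=0.01):
    """Integrate d' = -r*d from initial distance d0 (m) until the
    distance falls below `touchdown` (m). Returns (t, d, v) samples."""
    d, t, traj = d0, 0.0, []
    while d > touchdown:
        v = r * d            # keeping the expansion rate constant
        d -= v * dt          # move toward the surface
        t += dt
        traj.append((t, d, v))
    return traj

traj = simulate_landing()
t_final, d_final, v_final = traj[-1]
# Speed at touchdown is r * touchdown = 0.005 m/s: effectively zero,
# with no rangefinder and no speedometer involved.
```

Because speed is proportional to remaining distance, deceleration happens "gradually and automatically" exactly as the abstract describes, and the same law works for any surface orientation once r is measured from the image.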


Bees inspire robot aircraft.

Scientists at Australia’s Vision Centre have discovered how the honeybee can land anywhere with utmost precision and grace – and the knowledge may soon help build incredible robot aircraft.

By sensing how rapidly their destination ‘zooms in’ as they fly towards it, honeybees can control their flight speed in time for a perfect touchdown without needing to know how fast they’re flying or how far away the destination is.


This discovery may advance the design of cheaper, lighter robot aircraft that only need a video camera to land safely on surfaces of any orientation, says Professor Mandyam Srinivasan of The Vision Centre (VC) and The University of Queensland Brain Research Institute.

“Orchestrating a safe landing is one of the greatest challenges for flying animals and airborne vehicles,” says Prof. Srinivasan. “To achieve a smooth landing, it’s essential to slow down in time for the speed to be close to zero at the time of touchdown.”

Humans can judge their distance from an object using stereo vision, because their two eyes, separated by about 65 mm, capture slightly different views of the object. Insects can’t do the same thing because they have close-set eyes, Prof. Srinivasan explains.

“So in order to land on the ground, they use their eyes to sense the speed of the image of the ground beneath them,” he says. “By keeping the speed of this image constant, they slow down automatically as they approach the ground, stopping just in time for touchdown.

“However, in the natural world, bees would only occasionally land on flat, horizontal surfaces. So it’s important to know how they land on rough terrain, ridges, vertical surfaces or flowers with the same delicacy and grace.”

In the study, the VC researchers trained honeybees to land on discs that were placed vertically, and filmed them using high speed video cameras.

“The boards carried spiral patterns that could be rotated at various speeds by a motor,” says Prof. Srinivasan. “When we spun the spiral to make it appear to expand, the bees ‘hit the brakes’ because they thought they were approaching the board much faster than they really were.

“When we spun the spiral the other way to make it appear to contract, the bees sped up, sometimes crashing into the disc. This shows that landing bees keep track of how rapidly the image ‘zooms in’, and they adjust their flight speed to keep this ‘zooming rate’ constant.”

“Imagine you’re in space and you don’t know how far away you are from a star,” Prof. Srinivasan says. “As you fly towards it, the other stars ‘move away’ and it becomes the focus. Then when the star starts to ‘zoom in’ faster than the regular rate, you’ll slow down to keep the ‘zooming rate’ constant.

“It’s the same for bees – when they’re about to reach a flower, the image of the flower will expand faster than usual. This causes them to slow down more and more as they get closer, eventually stopping when they reach it.”

The VC researchers also developed a mathematical model for guiding landings, based on the bees’ landing strategy. Prof. Srinivasan says unlike all current engineering-based methods, this visually guided technique does not require knowledge about the distance to the surface or the speed at which the surface is approached.

“The problem with current robot aircraft technology is they need to use radars or sonar or laser beams to work out how far the surface is,” Prof. Srinivasan says. “Not only is the equipment expensive and cumbersome, using active radiation can also give the aircraft away.

“On the other hand, this vision-based system only requires a simple video camera that can be found in smartphones. The camera, by ‘seeing’ how rapidly the image expands, allows the aircraft to land smoothly and undetected on a wide range of surfaces with the precision of a honeybee.”

Faster than the Speed of Light: New Imaging Approach Could Measure Tumor Activity.

A new imaging approach being investigated by Memorial Sloan-Kettering researchers could provide better information about a tumor’s molecular activity, allowing for a more accurate diagnosis based on the tumor’s specific disease signature.

The technique makes use of Cerenkov light, a faint glow created when charged particles (electrons or positrons) travel through a medium, such as water or tissue, faster than light can travel through that same medium. A kind of “sonic boom” for light, Cerenkov light is given off naturally by many of the radioactive tracers that are already commonly used in medical imaging techniques such as positron emission tomography (PET). Although discovered more than 100 years ago, Cerenkov light has only recently been considered for biomedical imaging purposes.

Now researchers in the laboratory of Memorial Sloan-Kettering radiologist and molecular imaging specialist Jan Grimm are investigating whether the Cerenkov light given off by these tracers could be harnessed to provide information about precise biological processes within a tumor – such as the activity of a protein known to promote cancer spread.

“In this era of personalized medicine, there is an urgent need for a way to sensitively detect and quantify molecular activity,” Dr. Grimm says. “We wanted to explore whether Cerenkov light could be engaged to provide more specific information about the tumor’s biological properties.”

An Extra Layer of Information

Conventional methods of cancer imaging are effective primarily in showing a tumor’s location and dimensions rather than specific protein activity. For example, PET scans track a radioactive tracer coupled with a molecule that accumulates in certain tissues. The most common tracer, fluorodeoxyglucose (FDG), shows the location of a tumor based on increased glucose intake, a hallmark of cancer cells.

Dr. Grimm and three members of his laboratory aimed to find out whether Cerenkov light emitted by radiotracers could be used to activate additional imaging agents that add information about specific disease processes. For example, certain agents glow brightly when excited by Cerenkov light — some only after being activated by encountering certain proteins.

In a September 8 online publication in the journal Nature Medicine, they describe experiments in which they created a molecular probe that interacts specifically with a protein called MMP-2, which is known to be overactive in aggressive breast tumors. MMP-2 converts the probe into a form that will become fluorescent when excited by the Cerenkov light. By measuring this Cerenkov-induced fluorescence, the researchers are then able to calculate and quantify the protein’s activity.

“This technique allows us to receive and analyze two signals at the same time,” says Daniel Thorek, the study’s first author. “The Cerenkov light is coming from the tracer we use for PET scans, which is already well-validated as an indicator of cancer. And secondly, we’re able to use the Cerenkov-induced fluorescence to detect and measure the activity of MMP-2, which can provide information about the tumor’s aggressiveness.”

The researchers demonstrated this technique, called secondary Cerenkov-induced fluorescence imaging (SCIFI), in mice that had been implanted with human breast cancer tumors expressing varying levels of the MMP-2 protein. While conventional PET imaging showed no significant difference between the tumors, SCIFI detected a signal only from tumors with high levels of MMP-2, thereby clearly marking these tumors as more aggressive – a distinction that would not have been possible using PET alone.
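One way to picture the dual read-out is as a simple normalization (my own hypothetical sketch, not the authors' actual analysis): dividing the activated-probe fluorescence by the Cerenkov signal separates protein activity from tracer uptake, so a tumor that simply absorbed more tracer doesn't masquerade as one with more MMP-2 activity. All function names and numbers here are illustrative assumptions.

```python
# Hypothetical SCIFI quantification: normalize Cerenkov-induced
# fluorescence (probe activated by MMP-2) by the Cerenkov signal
# itself (proportional to tracer uptake, the PET-style read-out).

def scifi_activity(fluorescence, cerenkov, background=0.0):
    """Return a unitless activity index: activated-probe fluorescence
    per unit of Cerenkov excitation available in the tumor."""
    if cerenkov <= background:
        raise ValueError("no Cerenkov excitation above background")
    return (fluorescence - background) / (cerenkov - background)

# Two hypothetical tumors with equal tracer uptake (same Cerenkov
# signal) but different MMP-2 levels yield different indices:
high_mmp2 = scifi_activity(fluorescence=80.0, cerenkov=100.0)  # 0.8
low_mmp2 = scifi_activity(fluorescence=8.0, cerenkov=100.0)    # 0.08
```

This mirrors the result in the mouse study: the Cerenkov (PET-like) signal alone cannot distinguish the tumors, while the normalized fluorescence cleanly separates high-MMP-2 from low-MMP-2 tumors.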

A Clearer Signal

Dr. Grimm explains that the method is very sensitive because the Cerenkov light that activates the imaging probe comes from within the tumor rather than from an external light source. “If you activate the probe with excitation light from an external source, that external light is reflected partly back and scattered, creating a lot of background noise and making it harder to measure the actual signal. Using Cerenkov light, which is internal, gives you a much better excitation source.”

He adds that SCIFI testing is still at an early stage, but because the method uses probes that are already common in medical imaging, it should be straightforward to transition the new technique into human trials. Because Cerenkov light is very faint, a sensitive camera is required to detect the fluorescent signal. Dr. Grimm speculates that the first SCIFI applications in humans may be in laparoscopic procedures, in which instruments are inserted through small incisions in the skin.

The researchers emphasize that SCIFI would not replace but complement PET scans. “PET is a really phenomenal technology and we’re just adding more sophisticated probes to extract additional information,” Dr. Thorek says. “This approach leverages the advantages of nanoparticles, radiotracers, and optical imaging and brings these fields together very nicely.”

Source: MSKCC