First nutrient-enriched GM crops could be grown in the UK within months.

A genetically modified crop boosted with a dietary supplement could be grown for the first time in Britain as early as this year, following a request by scientists to conduct a controversial field trial at a heavily protected research site in Hertfordshire.

The government-funded researchers have applied this week for formal permission to grow the first GM plants that are designed to produce high yields of the same omega-3 fatty acids found in fish oil, which are linked with a healthy diet.

They could receive the go-ahead within three months and the first GM seeds could be sown this spring on the same high-security plot of land within the large estate owned by Rothamsted Research in Harpenden, where GM wheat trials took place successfully over the previous two years without being destroyed by activists.

If the fish-oil field trials are successful, the technology could be used to produce food that is enriched with the omega-3 fatty acids linked with alleged health benefits such as a lower risk of cardiovascular disease – although the scientific support for these claims is mixed.

The GM crop fortified with the genes for making fish oil is among the first of a new generation of genetically engineered food plants designed to boost vital dietary supplements – so-called “nutraceuticals”. Anti-GM activists in the Philippines last year destroyed field trials of GM “golden rice”, which is fortified with genes for precursors to vitamin A.

Wary of public opposition to the trial, the Rothamsted researchers emphasised that they are more interested in showing it is possible to produce commercial quantities of omega-3 fatty acids to supply the feed market for farmed fish, which currently accounts for 80 per cent of the omega-3 fish oils harvested from wild-caught marine organisms.

Rothamsted Research applied on Monday for a licence to conduct the field trial from the Department for Environment, Food and Rural Affairs. The scientists could be given the go-ahead within 90 days, following a public consultation and an inquiry by the government’s scientific committee overseeing the release of GM organisms into the environment.

GM crops could help to solve the problem of over-fishing

The open-air field trial, conducted behind a high wire fence and under 24-hour CCTV surveillance, will involve the planting of a flax-like plant called Camelina sativa engineered with synthetic omega-3 genes that trigger the production of the “fish oil” in the seeds of the harvested crop.

Although omega-3 is often described as fish oil, it is in fact made by microscopic marine algae that are eaten or absorbed by fish. Among the many health claims made about omega-3, the strongest relate to its supposed benefits in reducing the risk of heart disease – although some medical authorities have questioned the evidence.

“Despite claims that fish oil supplements can help prevent numerous conditions including cancer, dementia, arthritis and heart problems, there is little hard evidence for them,” says the advice on the NHS website.

However, the scientists from Rothamsted Research said today that the main aim of the research is to produce GM crops that could be made into food for farmed fish, which cannot grow healthily without a diet rich in omega-3 fatty acids, currently derived from wild-caught marine organisms.

Farmed fish grown in cages are unable to absorb sufficient omega-3 in their diets so they have to be fed on smaller fish, such as sandeels, caught in the wild. The scientists said the practice is unsustainable and it would be better for the environment to produce fish feed enhanced with omega-3 derived from GM farm crops.

“I honestly believe there is an opportunity for our plant-derived fish oil to be a sustainable source of terrestrial fish oils for the fish-farming industry,” said Professor Jonathan Napier, the project’s lead scientist at Rothamsted.

“In general, ultimately down the line, you could also imagine using plant-derived oils as another source of fish oils for human consumption… Fish oils are known to be important for human health and nutrition and they have a proven role in reducing cardiovascular disease. However, global fish stocks are in decline,” Professor Napier said.

At the same time, the human population is growing and demand for fish oil will continue to increase, he said. “It is difficult to imagine how everyone on the planet can have equal access to these important fatty acids,” he added.

Helen Wallace, director of GeneWatch UK, said that omega-3 fish oils have recently been implicated in raising the risk of prostate cancer, and it is not clear whether GM-derived fish oils will be safe for human or animal consumption.

“GM crops with altered oil content raise new safety issues for consumers. It is hard to predict the effects on health because many nutrients will be changed and some could be harmful for some people,” Dr Wallace said.

“If these plants are grown to feed to fish, the oil content of the fish will also require testing. And there will be questions about the use of land that could be used for food. People will also want these products to be labelled and consumers may not want to buy them,” she said.

What are omega-3 fatty acids?

Omega-3 fatty acids are a family of large organic molecules that are variously described as being good for human health. Oily fish are particularly rich in certain types of omega-3 fatty acids linked with a healthy diet, notably EPA and DHA.

It is a misnomer to call them “fish” oils given that fish cannot manufacture these substances – they are in fact made by microscopic marine algae that are eaten or absorbed by the fish. This is why farmed fish need omega-3 fatty acids to be added to their diet.

The Rothamsted Research scientists have copied and synthesised the genes from the algae that are involved in the manufacture of EPA and DHA fatty acids. They have stitched these gene copies into a plant called Camelina sativa, known as “false flax”, which is widely grown in parts of Europe and North America for its seed oil.

The scientists hope to develop an alternative source of omega-3 fatty acids that can be fed to farmed fish – about 80 per cent of the world’s supply of ocean-derived fish oil is fed to farmed fish. They believe that growing GM crops on arable land will be more sustainable and better for the environment than trawling the sea for small fish in order to feed them to bigger fish.

Google puts its built-in encyclopedia front and centre in results

Google has updated its search by integrating encyclopedia-style facts and figures directly into its results page, speeding up information gathering.

The update places results pulled from Google’s Knowledge Graph – a database containing encyclopedia entries on about 570m concepts, relationships, facts and figures – in small popup information panels next to search results.

“To help you learn more about the websites you see in your search results, starting today you may see more information about them directly on the results page when you search on your desktop,” said Bart Niechwiej, a software engineer at Google, in a blog post.

The panels are accessed through a small clickable link on the second line of applicable search results.

Could cause issues for website owners

While the update is likely to enhance search for individuals, it could cause issues for website owners who appear in search results with the added information panels.

“The popup adds up to three extra links to the search result that don’t go to your website,” explains Matt McGee, editor-in-chief of search specialist Search Engine Land. “If this becomes a popular feature with searchers, it could lead some to click away from the actual web page that Google included in its search results.”

The feature is starting as a small trial, and Google will continue to expand the number of sites whose search results carry Knowledge Graph information panels.

The update builds on “Hummingbird”, the biggest change Google had made to its search algorithm in three years, which focused on the Knowledge Graph and natural-language interpretation, making core search better at answering longer, more complex and spoken queries.

Plant killer boost for rainforests.

Pathogenic fungi, normally associated with killing plant life, could play a key role in driving biodiversity in tropical rainforests, a study suggests.

Researchers found that the presence of fungal pathogens limited the growth of dominant species, allowing other plants to become established.

Netted stinkhorn fungus (Image: Robert Bagchi)

The scientists said the study offered an insight into why rainforests are biodiversity hotspots.

The findings have been published in the journal Nature.

“The conventional wisdom in ecology suggests that if you have got more than one species competing for the same set of resources, one of those species should win and exclude the other,” co-author Owen Lewis from the University of Oxford explained.

However, an idea developed by ecologists Daniel Janzen and Joseph Connell in the 1970s linked pests and diseases to high levels of biodiversity in tropical areas, a mechanism described as negative density dependence.

“The way the hypothesis works is to give rare species an advantage – the idea is that pests and diseases tend to transmit more effectively if plants are growing close together,” Dr Lewis told BBC News.

“This acts as a negative feedback mechanism, so if one species becomes too abundant locally then it tends to get hammered by the pests and diseases and this then gives rarer species a chance because they tend to be less affected.

Germinating seedlings (Image: Owen Lewis)
Dominant species are targeted by fungal pathogens, allowing rarer species the opportunity to grow

“This acts as a balancing mechanism that prevents these rarer, and perhaps competitively inferior species, from being outcompeted.”
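The feedback loop Dr Lewis describes can be sketched as a toy lottery model (a hypothetical illustration, not the study’s actual analysis): each vacancy in the forest is won by a species with probability proportional to its seed supply, discounted by pathogen pressure that rises as that species becomes more common.

```python
import random

# Toy lottery model of negative density dependence (an illustration of the
# Janzen-Connell idea, not the study's actual analysis). A random tree dies
# each step; the vacancy is won by a species with probability proportional to
# its abundance, discounted by pathogen pressure that grows with abundance.
random.seed(1)
community = ["A"] * 90 + ["B"] * 10       # species A starts dominant

for _ in range(2000):
    i = random.randrange(len(community))  # a random tree dies
    weights = []
    for sp in ("A", "B"):
        abundance = community.count(sp) / len(community)
        survival = 1.0 - 0.9 * abundance  # common species get "hammered"
        weights.append(abundance * survival)
    community[i] = random.choices(("A", "B"), weights=weights)[0]

# The rare species' advantage pulls the mix toward coexistence: both persist.
print(community.count("A"), community.count("B"))
```

With these (assumed) numbers the stable point is an even split: whenever one species exceeds half the community, its recruitment weight falls below the other’s and it drifts back, which is exactly the balancing mechanism in the quote above.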

Commenting on the findings, Dr Helene Muller-Landau – a scientist for the Smithsonian Tropical Research Institute – described the study as an “elegant field study” that supported the Janzen-Connell hypothesis.

“This is the first study to explicitly link a particular group of natural enemies to negative density dependence and the maintenance of species diversity in tropical forest plants,” she wrote in a commentary for Nature journal.

“It clearly implicates fungal pathogens as the most important drivers of these patterns at the seedling-establishment stage.”

Dr Lewis observed: “It is an interesting but almost paradoxical effect that a small and obscure group of organisms are harming individual plant species but are ultimately good for diversity as a whole.

“And it is the plant diversity that underpins the structure of the whole ecosystem that supports the insects, birds and animals.”

‘Virtual earthquakes’ used to forecast Los Angeles quake risk

Stanford scientists are using weak vibrations generated by the Earth’s oceans to produce “virtual earthquakes” that can be used to predict the ground movement and shaking hazard to buildings from real quakes.

The new technique, detailed in the Jan. 24 issue of the journal Science, was used to confirm a prediction that Los Angeles will experience stronger-than-expected ground movement if a major quake occurs south of the city.

“We used our virtual earthquake approach to reconstruct large earthquakes on the southern San Andreas Fault and studied the responses of the urban environment of Los Angeles to such earthquakes,” said lead author Marine Denolle, who recently received her PhD in geophysics from Stanford and is now at the Scripps Institution of Oceanography in San Diego.

The new technique capitalizes on the fact that earthquakes aren’t the only sources of seismic waves. “If you put a seismometer in the ground and there’s no earthquake, what do you record? It turns out that you record something,” said study leader Greg Beroza, a geophysics professor at Stanford.

What the instruments will pick up is a weak, continuous signal known as the ambient seismic field. This omnipresent field is generated by ocean waves interacting with the solid Earth. When the waves collide with each other, they generate a pressure pulse that travels through the ocean to the sea floor and into the Earth’s crust. “These waves are billions of times weaker than the seismic waves generated by earthquakes,” Beroza said.

Scientists have known about the ambient seismic field for about 100 years, but it was largely considered a nuisance because it interferes with their ability to study earthquakes. The tenuous seismic waves that make up this field propagate every which way through the crust. But in the past decade, seismologists developed signal-processing techniques that allow them to isolate certain waves; in particular, those traveling past one seismometer and then another downstream.

Denolle built upon these techniques and devised a way to make these ambient seismic waves function as proxies for seismic waves generated by real earthquakes. By studying how the ambient waves moved underground, the researchers were able to predict the actions of much stronger waves from powerful earthquakes.
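The core signal-processing idea can be illustrated with a toy example (hypothetical numbers, not the study’s data): if a downstream station records the same ambient noise as an upstream one, delayed by the wave’s travel time between them, cross-correlating the two records produces a peak at that lag.

```python
import numpy as np

# Toy illustration of ambient-noise cross-correlation (hypothetical numbers,
# not the study's data). Station B records the same random wavefield as
# station A, delayed by the wave's travel time between the two sensors.
rng = np.random.default_rng(0)
n, travel_time = 5000, 37              # record length and delay, in samples
noise = rng.standard_normal(n + travel_time)
station_a = noise[travel_time:]        # upstream record
station_b = noise[:n]                  # same wavefield, 37 samples later

# The cross-correlation of the two records peaks at the travel-time lag,
# even though neither record on its own looks like anything but noise.
xcorr = np.correlate(station_b, station_a, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)  # convert array index to lag
print(lag)                             # recovers the 37-sample delay
```

This is only the retrieval of a travel time between two sensors; turning such correlations into full virtual-earthquake ground-motion estimates involves the additional corrections described below.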

She began by installing several seismometers along the San Andreas Fault to specifically measure ambient seismic waves.

Employing data from the seismometers, the group then used mathematical techniques they developed to make the waves appear as if they originated deep within the Earth. This was done to correct for the fact that the seismometers Denolle installed were located at the Earth’s surface, whereas real earthquakes occur at depth.

In the study, the team used their virtual earthquake approach to confirm the accuracy of a prediction, made in 2006 by supercomputer simulations, that if the southern San Andreas Fault section of California were to rupture and spawn an earthquake, some of the seismic waves traveling northward would be funneled toward Los Angeles along a 60-mile-long (100-kilometer-long) natural conduit that connects the city with the San Bernardino Valley. This passageway is composed mostly of sediments, and acts to amplify and direct waves toward the Los Angeles region.

Until now, there was no way to test whether this funneling action, known as the waveguide-to-basin effect, actually takes place because a major quake has not occurred along that particular section of the San Andreas Fault in more than 150 years.

The virtual earthquake approach also predicts that seismic waves will become further amplified when they reach Los Angeles because the city sits atop a large sedimentary basin. To understand why this occurs, study coauthor Eric Dunham, an assistant professor of geophysics at Stanford, said to imagine taking a block of plastic foam, cutting out a bowl-shaped hole in the middle, and filling the cavity with gelatin. In this analogy, the plastic foam is a stand-in for rocks, while the gelatin is like sediments, or dirt. “The gelatin is floppier and a lot more compliant. If you shake the whole thing, you’re going to get some motion in the Styrofoam, but most of what you’re going to see is the basin oscillating,” Dunham said.

As a result, the scientists say, Los Angeles could be at risk for stronger, and more variable, ground motion if a large earthquake – magnitude 7.0 or greater – were to occur along the southern San Andreas Fault, near the Salton Sea.

“The seismic waves are essentially guided into the sedimentary basin that underlies Los Angeles,” Beroza said. “Once there, the waves reverberate and are amplified, causing stronger shaking than would otherwise occur.”

Beroza’s group is planning to test the virtual earthquake approach in other cities around the world that are built atop sedimentary basins, such as Tokyo, Mexico City, Seattle and parts of the San Francisco Bay area. “All of these cities are earthquake threatened, and all of them have an extra threat because of the basin amplification effect,” Beroza said.

Because the technique is relatively inexpensive, it could also be useful for forecasting ground motion in developing countries. “You don’t need large supercomputers to run the simulations,” Denolle said.

In addition to studying earthquakes that have yet to occur, the technique could also be used as a kind of “seismological time machine” to recreate the seismic signatures of temblors that shook the Earth long ago, according to Beroza.

“For an earthquake that occurred 200 years ago, if you know where the fault was, you could deploy instruments, go through this procedure, and generate seismograms for earthquakes that occurred before seismographs were invented,” he said.

A new wrinkle in the control of waves: Flexible materials could provide new ways to control sound and light.

Flexible, layered materials textured with nanoscale wrinkles could provide a new way of controlling the wavelengths and distribution of waves, whether of sound or light. The new method, developed by researchers at MIT, could eventually find applications from nondestructive testing of materials to sound suppression, and could also provide new insights into soft biological systems and possibly lead to new diagnostic tools.

The findings are described in a paper published this week in the journal Physical Review Letters, written by MIT postdoc Stephan Rudykh and Mary Boyce, a former professor of mechanical engineering at MIT who is now dean of the Fu Foundation School of Engineering and Applied Science at Columbia University.

While materials’ properties are known to affect the propagation of light and sound, in most cases these properties are fixed when the material is made or grown, and are difficult to alter later. But in these layered materials, changing the properties—for example, to “tune” a material to filter out specific colors of light—can be as simple as stretching the flexible material.

“These effects are highly tunable, reversible, and controllable,” Rudykh says. “For example, we could change the color of the material, or potentially make it optically or acoustically invisible.”

The materials can be made through a layer-by-layer deposition process, refined by researchers at MIT and elsewhere, that can be controlled with high precision. The process allows the thickness of each layer to be determined to within a fraction of a wavelength. The material is then compressed, creating within it a series of precise wrinkles whose spacing can cause scattering of selected frequencies of waves (of either sound or light).

Surprisingly, Rudykh says, these effects work even in materials where the alternating layers have almost identical densities. “We can use polymers with very similar densities and still get the effect,” he says. “How waves propagate through a material, or not, depends on the microstructure, and we can control it,” he says.

By designing that microstructure to produce a desired set of effects, then altering those properties by deforming the material, “we can actually control these effects through external stimuli,” Rudykh says. “You can design a material that will wrinkle to a different wavelength and amplitude. If you know you want to control a particular range of frequencies, you can design it that way.”
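As a rough illustration of that tuning, a simple Bragg-scattering estimate (an assumption for this sketch; the paper’s layered-media analysis is more involved) relates the wrinkle spacing to the frequency the texture selects.

```python
# Rough back-of-the-envelope for how wrinkle spacing selects a frequency,
# assuming simple first-order Bragg scattering, f = v / (2 * d). The wave
# speed and spacings below are assumed values, not figures from the paper.
def bragg_frequency(wave_speed_m_s: float, wrinkle_spacing_m: float) -> float:
    """First-order Bragg-scattered frequency for a periodic texture."""
    return wave_speed_m_s / (2 * wrinkle_spacing_m)

# Sound at ~1000 m/s in a soft polymer with 1 mm wrinkles:
print(bragg_frequency(1000.0, 1e-3))    # 500 kHz, in the ultrasound range
# Stretching the material so the spacing grows to 1.2 mm retunes the band:
print(bragg_frequency(1000.0, 1.2e-3))  # ~417 kHz
```

The point of the sketch is the tunability: stretching the material changes the wrinkle spacing, which shifts the scattered frequency without remaking the material.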

The research, which is based on computer modeling, could also provide insights into the properties of natural biological materials, Rudykh says. “Understanding how the waves propagate through biological tissues could be useful for diagnostic purposes,” he says.

For example, current diagnostic techniques for certain cancers involve painful and invasive procedures. In principle, ultrasound could provide the same information noninvasively, but today’s ultrasound systems lack sufficient resolution. The new work with wrinkled materials could lead to more precise control of these ultrasound waves, and thus to systems with better resolution, Rudykh says.

The system could also be used for sound cloaking—an advanced form of noise cancellation in which outside sounds could be completely blocked from a certain volume of space rather than just a single spot, as in current noise-canceling headphones.

“The microstructure we start with is very simple,” Rudykh says, and is based on well-established, layer-by-layer manufacturing. “From this layered material, we can extend to more complicated microstructures, and get effects you could never get” from conventional materials. Ultimately, such systems could be used to control a variety of effects in the propagation of light, sound, and even heat.

The technology is being patented, and the researchers are already in discussions with companies about possible commercialization, Rudykh says.

Source: Massachusetts Institute of Technology

We may one day be reading by the light of a houseplant.

Bioglow's Starlight Avatar(TM) as seen in regular light (left) and in darkness (right).

Tired of filling your house with boring old ficus plants and ferns for a little greenery? You’re in luck, because you could soon be able to bring home your own luminescent plant. No, it isn’t the result of some kind of nuclear accident. The plants are engineered by the biotechnology company Bioglow and were first announced in 2010 when molecular biologist Alexander Krichevsky et al. published the results in PLOS One. Since that initial report, the team has been working to refine the technique and get the plants growing brighter.

Bioluminescence can be found in a variety of organisms, including certain jellyfish, bacteria, and insects. These creatures use their natural glow for many reasons, including scaring off predators or attracting prey. For modern scientists, bioluminescence is used as a standard marker in biological research, as it gives a very clear confirmation that a genetic modification was successful. Now, plants that are genetically engineered to be bioluminescent will be available to the public as a novelty, though they could have future implications as a truly–ahem–green source of energy.

Glowing plants have been attempted for some time, but earlier efforts required special dyes or UV lights. Because the glow came from an external source, these approaches didn’t work especially well and were not truly bioluminescent. Bioglow’s plants will be the first commercially available plants altered to be autonomously luminescent (which Krichevsky describes as “autoluminescent”).

The glowing plants have been named Starlight Avatar(TM). They are an engineered version of Nicotiana alata, an ornamental tobacco species. Don’t let that put you off; the plant smells like jasmine, not an old bowling alley. Its moniker comes from the fact that it glows about as brightly as starlight. Depending on the individual, the light can be seen as soon as the lights go out, but it may also take a couple of minutes for your eyes to adjust.

The biggest drawback for now is the plants’ relatively short lifespan of only 2-3 months, because producing the light takes so much out of the plant. The lab continues to work on increasing the longevity of the plants as well as ramping up their brightness. It is the company’s hope that someday these plants could provide a natural source of light inside the home, and possibly even replace garden lights, saving money and energy.

Dying to get your hands on one of the first Starlight Avatar(TM) plants? Bioglow will be holding an auction for the first twenty plants. It doesn’t cost anything to sign up for the auction, but you do need to register on Bioglow’s website to get on the email list for the auction link. The auction is only open to those in the United States and bidding starts at just $1, plus shipping fees.

Breakthrough allows scientists to watch how molecules morph into memories (w/ video)

In two studies in the January 24 issue of Science, researchers at Albert Einstein College of Medicine of Yeshiva University used advanced imaging techniques to provide a window into how the brain makes memories. These insights into the molecular basis of memory were made possible by a technological tour de force never before achieved in animals: a mouse model developed at Einstein in which molecules crucial to making memories were given fluorescent “tags” so they could be observed traveling in real time in living brain cells.

Efforts to discover how neurons make memories have long confronted a major roadblock: Neurons are extremely sensitive to any kind of disruption, yet only by probing their innermost workings can scientists view the molecular processes that culminate in memories. To peer deep into neurons without harming them, Einstein researchers developed a mouse model in which they fluorescently tagged all molecules of messenger RNA (mRNA) that code for beta-actin protein – an essential structural protein found in large amounts in neurons and considered a key player in making memories. mRNA is a family of RNA molecules that copy DNA’s genetic information and translate it into the proteins that make life possible.

“It’s noteworthy that we were able to develop this mouse without having to use an artificial gene or other interventions that might have disrupted neurons and called our findings into question,” said Robert Singer, Ph.D., the senior author of both papers and professor and co-chair of Einstein’s department of anatomy & structural biology and co-director of the Gruss Lipper Biophotonics Center at Einstein. He also holds the Harold and Muriel Block Chair in Anatomy & Structural Biology at Einstein.

In the research described in the two Science papers, the Einstein researchers stimulated neurons from the mouse’s hippocampus, where memories are made and stored, and then watched fluorescently glowing beta-actin mRNA molecules form in the nuclei of neurons and travel within dendrites, the neuron’s branched projections. They discovered that mRNA in neurons is regulated through a novel process described as “masking” and “unmasking,” which allows beta-actin protein to be synthesized at specific times and places and in specific amounts.

“We know the beta-actin mRNA we observed in these two papers was ‘normal’ RNA, transcribed from the mouse’s naturally occurring beta-actin gene,” said Dr. Singer. “And attaching green fluorescent protein to mRNA molecules did not affect the mice, which were healthy and able to reproduce.”

Neurons come together at synapses, where slender dendritic “spines” of neurons grasp each other, much as the fingers of one hand bind those of the other. Evidence indicates that repeated neural stimulation increases the strength of synaptic connections by changing the shape of these interlocking dendrite “fingers.” Beta-actin protein appears to strengthen these synaptic connections by altering the shape of dendritic spines. Memories are thought to be encoded when stable, long-lasting synaptic connections form between neurons in contact with each other.

The first paper describes the work of Hye Yoon Park, Ph.D., a postdoctoral student in Dr. Singer’s lab at the time and now an instructor at Einstein. Her research was instrumental in developing the mice containing fluorescent beta-actin mRNA—a process that took about three years.

Dr. Park stimulated individual hippocampal neurons of the mouse and observed newly formed beta-actin mRNA molecules within 10 to 15 minutes, indicating that nerve stimulation had caused rapid transcription of the beta-actin gene. Further observations suggested that these beta-actin mRNA molecules continuously assemble and disassemble into large and small particles, respectively. These mRNA particles were seen traveling to their destinations in dendrites where beta-actin protein would be synthesized.

In the second paper, lead author and graduate student Adina Buxbaum of Dr. Singer’s lab showed that neurons may be unique among cells in how they control the synthesis of beta-actin protein.

“Having a long, attenuated structure means that neurons face a logistical problem,” said Dr. Singer. “Their beta-actin mRNA molecules must travel throughout the cell, but neurons need to control their mRNA so that it makes beta-actin protein only in certain regions at the base of dendritic spines.”

Ms. Buxbaum’s research revealed the novel mechanism by which brain neurons handle this challenge. She found that as soon as beta-actin mRNA molecules form in the nucleus of hippocampal neurons and travel out to the cytoplasm, the mRNAs are packaged into granules and so become inaccessible for making protein. She then saw that stimulating the neuron caused these granules to fall apart, so that mRNA molecules became unmasked and available for synthesizing beta-actin protein.

But that observation raised a question: How do neurons prevent these newly liberated mRNAs from making more beta-actin protein than is desirable? “Ms. Buxbaum made the remarkable observation that mRNA’s availability in neurons is a transient phenomenon,” said Dr. Singer. “She saw that after the mRNA molecules make beta-actin protein for just a few minutes, they suddenly repackage and once again become masked. In other words, the default condition for mRNA in neurons is to be packaged and inaccessible.”

These findings suggest that neurons have developed an ingenious strategy for controlling how memory-making proteins do their job. “This observation that neurons selectively activate protein synthesis and then shut it off fits perfectly with how we think memories are made,” said Dr. Singer. “Frequent stimulation of the neuron would make mRNA available in frequent, controlled bursts, causing beta-actin protein to accumulate precisely where it’s needed to strengthen the synapse.”

To gain further insight into memory’s molecular basis, the Singer lab is developing technologies for imaging neurons in the intact brains of living mice in collaboration with another Einstein faculty member in the same department, Vladislav Verkhusha, Ph.D. Since the hippocampus resides deep in the brain, they hope to develop infrared fluorescent proteins that emit light that can pass through tissue. Another possibility is a fiberoptic device that can be inserted into the brain to observe memory-making hippocampal neurons.

The titles of the two Einstein papers are: “Visualization of Dynamics of Single Endogenous mRNA as Labeled in Live Mouse,” and “Single Beta-actin mRNA Detection in Neurons Reveals a Mechanism for Regulating Its Translatability.”

Scripps Florida Scientists Find Regulator of Amyloid Plaque Buildup in Alzheimer’s Disease.

Scientists from the Florida campus of The Scripps Research Institute have identified a critical regulator of a molecule deeply involved in the progression of Alzheimer’s disease.

The new study, published in an advance, online edition of the Journal of Biological Chemistry, shows for the first time that levels of this regulating protein are decreased in the brains of Alzheimer’s disease sufferers and that this decrease could be a significant factor in the advance of the disease.

The regulator is known as Rheb, a protein that many believe may be active in neural plasticity, the ability of the brain to change in response to learning.

In the new study, the scientists found that Rheb binds and regulates activity of a molecule known as BACE1, an important enzyme in Alzheimer’s disease pathology, establishing for the first time a new molecular link between Rheb and BACE1.

“We found that Rheb regulates BACE1, which is a major drug target in Alzheimer’s disease,” said Srini Subramaniam, a TSRI biologist who led the study. “Studies of the autopsied brains of Alzheimer’s patients have found a significant reduction in Rheb, so it is possible that an increase in Rheb could reverse the buildup of amyloid plaque.”

The study noted that in some genetically modified animal models, an increase of Rheb has already been shown to reduce BACE1 levels and the production of amyloid plaque.

“If we can uncover the mechanism by which Rheb alters BACE1 levels, that would be a very good drug target,” said Neelam Shahani, a first author of the study with William Pryor, both research associates in the Subramaniam lab.

The new study indicates that Rheb promotes the degradation of BACE1 through a number of pathways, but more research needs to be done before drug candidates can be developed.

“We’re very interested in the disease process and plan to keep moving forward to understand precisely how Rheb regulates BACE1,” said Pryor.

In addition to Subramaniam, Shahani and Pryor, other authors of the study, “Rheb GTPase Regulates β-Secretase Levels and Amyloid β Generation,” include Supriya Swarnkar of TSRI; Nikolai Kholodilov and Robert E. Burke of Columbia University; and Gopal Thinakaran of The University of Chicago.

44 HD films a second: Team in UK manages fastest ever ‘real world’ internet speeds.


Remember a time when 256Kbps offered a broadband revolution in permanent connectivity? Those days truly belong to the digital dinosaurs, as UK scientists have created the fastest ever real-world internet connection, clocking in at 1.4 terabits per second.

A terabit is 1000 gigabits or 125,000 megabytes, and if the speeds registered by the joint research team from French telecoms company Alcatel-Lucent and London-based BT prove commercially viable, a user would be able to download as many as 44 high definition films in one second.
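That headline figure checks out with simple unit arithmetic. The 4 GB film size below is our own illustrative assumption, since the article does not state one:

```python
# Back-of-envelope check of the headline figure: 1.4 terabits per second,
# assuming (our assumption) a typical HD film of about 4 GB.

TBPS = 1.4                      # reported link speed in terabits/s
bits_per_second = TBPS * 1e12
bytes_per_second = bits_per_second / 8          # 1 byte = 8 bits
gigabytes_per_second = bytes_per_second / 1e9   # decimal gigabytes

FILM_SIZE_GB = 4.0              # assumed size of one HD film
films_per_second = gigabytes_per_second / FILM_SIZE_GB

print(f"{gigabytes_per_second:.0f} GB/s, ~{films_per_second:.0f} HD films/s")
```

This works out to roughly 175 gigabytes per second, or about 44 films of that size every second.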

The team employed a protocol dubbed Flexigrid, which allows for the overlaying of multiple transmission channels over the same connection. The resulting ‘Alien Super Channel’ sees seven 200 gigabit-per-second channels running in parallel, boosting transmission efficiency by 42.5 percent over previous attempts.
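The headline rate is simply the sum of the parallel channels; a one-line check using the figures in the paragraph above:

```python
# Seven parallel 200 Gbps channels make up the 'Alien Super Channel'.
CHANNELS = 7
CHANNEL_GBPS = 200

total_gbps = CHANNELS * CHANNEL_GBPS   # aggregate rate in gigabits/s
total_tbps = total_gbps / 1000         # decimal prefix: 1,000 Gb per Tb
print(total_tbps, "Tbps")
```

which agrees with the reported 1.4 terabits per second.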

Speaking with BBC News, Kevin Drury, optical marketing leader at Alcatel-Lucent, likened the new development to reducing the spaces between lanes on a busy highway, enabling more lanes of traffic to share the same space.

Dr. Tim Whitley, BT’s MD of research and innovation, told CNET UK the trials were part of BT’s “long history of being on the cutting edge in telecommunications innovations.”

“These trials continue that tradition, as we work with Alcatel-Lucent to push the boundaries of fiber technology, allowing us to support the ever-increasing bandwidth required by our customers, and deliver new and exciting services which rely on fast, data-hungry applications,” he said.

Although faster internet speeds have been measured in labs, this is the first time such a blisteringly fast connection has been obtained under “real world conditions.”

What’s particularly exciting about the Flexigrid protocol is that it does not necessitate infrastructure changes, and could in theory run on existing fiber that is not yet being used to capacity.

At present, the average broadband speed in the UK hovers around 36.3 megabits per second, almost double the global average and just a hair behind the US.

The semi-autonomous city-state Hong Kong, meanwhile, is the global internet hot spot, registering average peak speeds of 63.6 megabits per second; treble the global average, yet still over 22,000 times slower than that achieved in the Flexigrid tests.

Of course, other technology is in the works to supercharge the flow of web traffic. Google Fiber, which has been rolled out in select locations in the US over the past several years, offers subscribers a one-gigabit-per-second (1 Gbps) internet option.

In South Korea, mobile operator SK Telecom Co announced plans this week to introduce a 300-megabit-per-second option, leaving Western 4G services in the dust.

On the experimental end of wizardly Wi-Fi connections, in October a German team of scientists pulled off a 100 Gbps connection. Even Wi-Fi speed that swift could soon become the internet’s version of the horse and buggy.

Swen Konig, one of the researchers on the project, told Gizmodo that with a few tweaks, Wi-Fi could catch up with recent fiber optic feats in no time.

“By employing optical and electrical multiplexing techniques, i.e., by simultaneously transmitting multiple data streams, and by using multiple transmitting and receiving antennas, the data rate could be multiplied,” said Konig.

“Hence, radio systems having a data rate of 1 terabit per second appear to be feasible.”

Facebook could fade out like a disease, math model says.

Facebook is like an infectious disease, experiencing a spike before its decline, according to US researchers who claim the social network will lose 80 percent of users by 2017.

Two doctoral candidates in mechanical and aerospace engineering at Princeton University made their astonishing claims in a paper published online at a scientific research archive, but not yet peer-reviewed.

Based on the rise and fall of MySpace, John Cannarella and Joshua Spechler say that Facebook, the largest online social network in history, is set for a massive fall.

“Ideas, like diseases, have been shown to spread infectiously between people before eventually dying out, and have been successfully described with epidemiological models,” they wrote.

They applied a modified epidemiological model to describe the dynamics of Facebook user activity, using publicly available Google search data.
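The modification treats quitting as itself contagious: users “recover” (abandon the network) through contact with former users rather than spontaneously. A minimal sketch of such an infectious-recovery SIR model, with purely illustrative parameters of our own choosing rather than the paper’s fitted values:

```python
# Sketch of an "infectious recovery" SIR model, where abandoning the
# network spreads by contact with former users. S = potential users,
# I = active users, R = former users. Parameters are illustrative only.

def irsir(S, I, R, beta, nu, dt=0.01, steps=10_000):
    """Euler-integrate the model; returns the active-user trajectory."""
    history = []
    for _ in range(steps):
        N = S + I + R
        dS = -beta * S * I / N   # adoption via contact between S and I
        dR = nu * I * R / N      # abandonment via contact between I and R
        S += dS * dt
        I += (-dS - dR) * dt     # population is conserved: dI = -dS - dR
        R += dR * dt
        history.append(I)
    return history

# Illustrative run: a small seed of early adopters and a few early leavers.
activity = irsir(S=100.0, I=1.0, R=0.1, beta=1.5, nu=1.0)
print(f"peak: {max(activity):.1f}, final: {activity[-1]:.1f}")
```

With these parameters, activity rises, peaks, and then collapses as abandonment spreads; qualitatively the shape the authors fit to the search-traffic curves.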

It will make uncomfortable reading for the social media giant co-founded by Mark Zuckerberg, which has more than 1.1 billion users around the globe and turns 10 years old next month.

Their study said Facebook, whose shares climbed to a new high of $58.51 this week, has been in decline in terms of data usage since 2012.

“Facebook is expected to undergo rapid decline in the upcoming years, shrinking to 20 percent of its maximum size by December 2014,” said the report, posted online to the arXiv preprint archive.

“Extrapolating the best fit model into the future suggests that Facebook will undergo a rapid decline in the coming years, losing 80 percent of its peak user base between 2015 and 2017.”

The new research comes amid surveys suggesting that younger users started gravitating away from Facebook in 2013.

Cannarella and Spechler told AFP they did not wish to comment publicly in person until their manuscript had completed its peer review process ahead of formal publication.

But at least for now, Facebook’s fortunes are in good health.

Rising share prices have made chief operating officer Sheryl Sandberg the latest tech billionaire and Zuckerberg, 29, has a personal fortune estimated at about $19 billion.