Researchers Create First-Ever Honey Bee Vaccine


The compound protects against the American foulbrood disease, but the same technique could lead to protection against other major pathogens


Vaccines are one of humanity’s greatest medical breakthroughs—ridding the world of smallpox, limiting outbreaks of mumps and measles and putting polio on the ropes. Now, reports Bill Chappell at NPR, researchers are hoping to harness the power of vaccines for the first time to help honeybees, which are currently being bombarded by a long list of threats.

Vaccines in non-human creatures are not new—any responsible pet owner has taken their dog or cat to the veterinarian for plenty of rabies and Lyme disease vaccinations. Inoculating an insect, however, is very different. In typical vaccines, either a dead or weakened version of a virus is introduced into an animal, whose immune system is then able to create antibodies to fight the disease. Insects, however, don’t have antibodies, meaning they don’t have the same type of immune response as we do.

Biologist Dalial Freitak of the University of Helsinki, the study’s author, found that when a moth was exposed to certain bacteria, usually by eating them, it could pass resistance to those bacteria down to the next generation. She met with Heli Salmela, also of the University of Helsinki, who was working with a bee protein called vitellogenin that seemed to trigger the same reaction to invasive bacteria in the bees. The two began using the protein to create an immune response in the bees against American foulbrood, an infectious disease that is harming bee colonies across the world.

The vaccine helps the bee’s immune system recognize harmful diseases early on in life, similar to the way antibodies in the human body recognize diseases. When the queen bee consumes foulbrood bacteria, the vitellogenin protein binds with the pathogenic molecules, which are then passed along in her eggs. The developing baby bees’ immune systems then recognize the foulbrood bacteria as an intruder, setting off an immune response that protects the bee from the disease.

The result is a vaccine against foulbrood the team is calling PrimeBEE. The technology is undergoing tests, so it is not yet commercially available. The team has also yet to decide if the vaccine will be delivered by feeding queen bees sugar patties or if they will send out queen bees that have already been inoculated against the disease.

Whatever the case, apiculturists are excited to have a new tool to fight foulbrood. Toni Burnham, president of the DC Beekeepers Alliance in Washington, D.C., tells Chappell that acquiring foulbrood means a bee colony has to be destroyed. “It’s a death sentence,” she says. “If a colony is diagnosed with AFB — regardless of the level of the infestation — it burns. Every bit of it burns; the bees are killed and the woodenware burns, and it’s gone.”

The team says that the new technique could be used for other bee pathogens as well.

“We need to help honey bees, absolutely. Even improving their life a little would have a big effect on the global scale. Of course, the honeybees have many other problems as well: pesticides, habitat loss and so on, but diseases come hand in hand with these life-quality problems,” Freitak says in the press release. “If we can help honey bees to be healthier and if we can save even a small part of the bee population with this invention, I think we have done our good deed and saved the world a little bit.”

Bees could certainly use some good news. Starting around 2006, and perhaps a little before that, honey bee colonies began to experience something called Colony Collapse Disorder, in which hives would dissolve over the winter months. Researchers looked for causes ranging from pathogens to pesticide exposure, but were never able to figure out exactly what was plaguing the bees. Though the problem has eased in recent years, the prospect of losing our bees, which pollinate many of our fruits and nuts, showed the world just how important our buzzy little friends can be.

Google’s New AI Is a Master of Games, but How Does It Compare to the Human Mind?


After building AlphaGo to beat the world’s best Go players, Google DeepMind built AlphaZero to take on the world’s best machine players

Google’s new artificial intelligence program, AlphaZero, taught itself to play chess, shogi, and Go in a matter of hours, and outperforms the top-ranking AIs in the gameplay arena.

For humans, chess may take a lifetime to master. But Google DeepMind’s new artificial intelligence program, AlphaZero, can teach itself to conquer the board in a matter of hours.

Building on its past success with the AlphaGo suite—a series of computer programs designed to play the Chinese board game Go—Google boasts that its new AlphaZero achieves a level of “superhuman performance” at not just one board game, but three: Go, chess, and shogi (essentially, Japanese chess). The team of computer scientists and engineers, led by Google’s David Silver, reported its findings recently in the journal Science.

“Before this, with machine learning, you could get a machine to do exactly what you want—but only that thing,” says Ayanna Howard, an expert in interactive computing and artificial intelligence at the Georgia Institute of Technology who did not participate in the research. “But AlphaZero shows that you can have an algorithm that isn’t so [specific], and it can learn within certain parameters.”

AlphaZero’s clever programming certainly ups the ante on gameplay for human and machine alike, but Google has long had its sights set on something bigger: engineering intelligence.

The researchers are careful not to claim that AlphaZero is on the verge of world domination (others have been a little quicker to jump the gun). Still, Silver and the rest of the DeepMind squad are already hopeful that they’ll someday see a similar system applied to drug design or materials science.

So what makes AlphaZero so impressive?

Gameplay has long been revered as a gold standard in artificial intelligence research. Structured, interactive games are simplifications of real-world scenarios: Difficult decisions must be made; wins and losses drive up the stakes; and prediction, critical thinking, and strategy are key.

Encoding this kind of skill is tricky. Older game-playing AIs—including the first prototypes of the original AlphaGo—have traditionally been pumped full of codes and data to mimic the experience typically earned through years of natural, human gameplay (essentially, a passive, programmer-derived knowledge dump). With AlphaGo Zero (the most recent version of AlphaGo), and now AlphaZero, the researchers gave the program just one input: the rules of the game in question. Then, the system hunkered down and actively learned the tricks of the trade itself.

AlphaZero is based on AlphaGo Zero, part of the AlphaGo suite designed to play the Chinese board game Go, pictured above. Early iterations of the original program were fed data from human-versus-human games; later versions engaged in self-teaching, wherein the software played games against itself to learn its own strategy.

This strategy, called self-play reinforcement learning, is pretty much exactly what it sounds like: To train for the big leagues, AlphaZero played itself in iteration after iteration, honing its skills by trial and error. And the brute-force approach paid off. Unlike AlphaGo Zero, AlphaZero doesn’t just play Go: It can beat the best AIs in the business at chess and shogi, too. The learning process is also impressively efficient, requiring only two, four, or 30 hours of self-tutelage to outperform programs specifically tailored to master shogi, chess, and Go, respectively. Notably, the study authors didn’t report any instances of AlphaZero going head-to-head with an actual human, Howard says. (The researchers may have assumed that, given that these programs consistently clobber their human counterparts, such a matchup would have been pointless.)
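
To make the idea concrete, here is a minimal, hypothetical sketch of self-play reinforcement learning. It is nothing like DeepMind’s actual system: the game is a tiny version of Nim rather than chess or Go, there is no neural network or tree search, and the “policy” is just a table of move values nudged toward the outcomes of games the program plays against itself. All function names and numbers are invented for illustration.

```python
# A minimal self-play reinforcement learning sketch (illustrative only).
# The "policy" is a table of move values for a tiny game of Nim:
# players alternately take 1-3 stones, and taking the last stone wins.
import random
from collections import defaultdict

WIN, LOSS = 1.0, -1.0

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def choose_move(values, stones, epsilon=0.1):
    # Epsilon-greedy: usually pick the best-known move, sometimes explore.
    moves = legal_moves(stones)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: values[(stones, m)])

def play_self_play_game(values, start_stones=10):
    # Both "players" share the same value table; record each side's moves.
    history = {0: [], 1: []}
    stones, player = start_stones, 0
    while stones > 0:
        move = choose_move(values, stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player        # this player took the last stone
        player = 1 - player
    return history, winner

def train(num_games=20000, lr=0.05):
    values = defaultdict(float)    # (stones_remaining, move) -> estimated value
    for _ in range(num_games):
        history, winner = play_self_play_game(values)
        for player, moves in history.items():
            outcome = WIN if player == winner else LOSS
            for state_action in moves:
                # Nudge each move made toward the game's final outcome.
                values[state_action] += lr * (outcome - values[state_action])
    return values

if __name__ == "__main__":
    values = train()
    for stones in range(1, 11):
        best = max(legal_moves(stones), key=lambda m: values[(stones, m)])
        print(f"{stones} stones left -> learned move: take {best}")
```

Trained this way, the table should converge toward the textbook Nim strategy of leaving the opponent a multiple of four stones, learned from nothing but the rules and the results of self-play.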

AlphaZero was also able to trounce Stockfish (the now unseated AI chess master) and Elmo (the former AI shogi expert) despite evaluating fewer possible next moves on each turn during game play. But because the algorithms in question are inherently different, and may consume different amounts of power, it’s difficult to directly compare AlphaZero to other, older programs, points out Joanna Bryson, who studies artificial intelligence at the University of Bath in the United Kingdom and did not contribute to AlphaZero.

Google keeps mum about a lot of the fine print on its software, and AlphaZero is no exception. While we don’t know everything about the program’s power consumption, what’s clear is this: AlphaZero has to be packing some serious computational ammo. In those scant hours of training, the program kept itself very busy, engaging in tens or hundreds of thousands of practice rounds to get its board game strategy up to snuff—far more than a human player would need (or, in most cases, could even accomplish) in pursuit of proficiency.

This intensive regimen also used 5,000 of Google’s proprietary tensor processing units, or TPUs, which by some estimates consume around 200 watts per chip. No matter how you slice it, AlphaZero requires way more energy than a human brain, which runs on about 20 watts.
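
For a rough sense of scale, here is the back-of-the-envelope arithmetic implied by the figures quoted above; the per-chip wattage is the estimate cited in the article, not an official specification, and training-time power draw is only part of the full picture.

```python
# Back-of-the-envelope energy comparison using the article's rough figures.
tpu_count = 5_000
watts_per_tpu = 200    # estimated draw per TPU chip (assumption from the article)
brain_watts = 20       # commonly cited figure for the human brain

total_watts = tpu_count * watts_per_tpu
print(f"Training draw: ~{total_watts / 1e6:.1f} megawatts")        # ~1.0 MW
print(f"Roughly {total_watts / brain_watts:,.0f}x a human brain")  # ~50,000x
```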

The absolute energy consumption of AlphaZero must be taken into consideration, adds Bin Yu, who works at the interface of statistics, machine learning, and artificial intelligence at the University of California, Berkeley. AlphaZero is powerful, but might not be good bang for the buck—especially when adding in the person-hours that went into its creation and execution.

Energetically expensive or not, AlphaZero makes a splash: Most AIs are hyper-specialized on a single task, making this new program—with its triple threat of game play—remarkably flexible. “It’s impressive that AlphaZero was able to use the same architecture for three different games,” Yu says.

So, yes. Google’s new AI does set a new mark in several ways. It’s fast. It’s powerful. But does that make it smart?

This is where definitions start to get murky. “AlphaZero was able to learn, starting from scratch without any human knowledge, to play each of these games to superhuman level,” DeepMind’s Silver said in a statement to the press.

Even if board game expertise requires mental acuity, all proxies for the real world have their limits. In its current iteration, AlphaZero maxes out by winning human-designed games—which may not warrant the potentially alarming label of “superhuman.” Plus, if surprised with a new set of rules mid-game, AlphaZero might get flummoxed. The actual human brain, on the other hand, can store far more than three board games in its repertoire.

What’s more, comparing AlphaZero’s baseline to a tabula rasa (blank slate), as the researchers do, is a stretch, Bryson says. Programmers are still feeding it one crucial morsel of human knowledge: the rules of the game it’s about to play. “It does have far less to go on than anything has before,” Bryson adds, “but the most fundamental thing is, it’s still given rules. Those are explicit.”

And those pesky rules could constitute a significant crutch. “Even though these programs learn how to perform, they need the rules of the road,” Howard says. “The world is full of tasks that don’t have these rules.”

When push comes to shove, AlphaZero is an upgrade of an already powerful program—AlphaGo Zero, explains JoAnn Paul, who studies artificial intelligence and computational dreaming at the Virginia Polytechnic Institute and State University and was not involved in the new research. AlphaZero uses many of the same building blocks and algorithms as AlphaGo Zero, and still constitutes just a subset of true smarts. “I thought this new development was more evolutionary than revolutionary,” she adds. “None of these algorithms can create. Intelligence is also about storytelling. It’s imagining things that are not yet there. We’re not thinking in those terms in computers.”

Part of the problem is, there’s still no consensus on a true definition of “intelligence,” Yu says—and not just in the domain of technology. “It’s still not clear how we are training critically thinking beings, or how we use the unconscious brain,” she adds.

To this point, many researchers believe there are likely multiple types of intelligence. And tapping into one far from guarantees the ingredients for another. For instance, some of the smartest people out there are terrible at chess.

With these limitations, Yu’s vision of the future of artificial intelligence partners humans and machines in a kind of coevolution. Machines will certainly continue to excel at certain tasks, she explains, but human input and oversight may always be necessary to compensate for the unautomated.

Of course, there’s no telling how things will shake out in the AI arena. In the meantime, we have plenty to ponder. “These computers are powerful, and can do certain things better than a human can,” Paul says. “But that still falls short of the mystery of intelligence.”

Ant Colonies Retain Memories That Outlast the Lifespans of Individuals


An ant colony can thrive for decades, changing its behavior based on past events even as individual ants die off every year or so

Signals from other workers can tell ants when and where to fan out and search for food.

Like a brain, an ant colony operates without central control. Each is a set of interacting individuals, either neurons or ants, using simple chemical interactions that in the aggregate generate their behavior. People use their brains to remember. Can ant colonies do that? This question leads to another question: what is memory? For people, memory is the capacity to recall something that happened in the past. We also ask computers to reproduce past actions – the blending of the idea of the computer as brain and brain as computer has led us to take ‘memory’ to mean something like the information stored on a hard drive. We know that our memory relies on changes in how much a set of linked neurons stimulate each other; that it is reinforced somehow during sleep; and that recent and long-term memory involve different circuits of connected neurons. But there is much we still don’t know about how those neural events come together, whether there are stored representations that we use to talk about something that happened in the past, or how we can keep performing a previously learned task such as reading or riding a bicycle.

Any living being can exhibit the simplest form of memory, a change due to past events. Look at a tree that has lost a branch. It remembers by how it grows around the wound, leaving traces in the pattern of the bark and the shape of the tree. You might be able to describe the last time you had the flu, or you might not. Either way, in some sense your body ‘remembers,’ because some of your cells now have different antibodies, molecular receptors, which fit that particular virus.

Past events can alter the behavior of both individual ants and ant colonies. Individual carpenter ants offered a sugar treat remembered where it was for a few minutes; they were likely to return to where the food had been. Another species, the Sahara Desert ant, meanders around the barren desert, searching for food. It appears that an ant of this species can remember how far it walked, or how many steps it took, since the last time it was at the nest.

A red wood ant colony remembers its trail system leading to the same trees, year after year, although no single ant does. In the forests of Europe, they forage in high trees to feed on the excretions of aphids that in turn feed on the tree. Their nests are enormous mounds of pine needles situated in the same place for decades, occupied by many generations of colonies. Each ant tends to take the same trail day after day to the same tree. During the long winter, the ants huddle together under the snow. The Finnish myrmecologist Rainer Rosengren showed that when the ants emerge in the spring, an older ant goes out with a young one along the older ant’s habitual trail. The older ant dies and the younger ant adopts that trail as its own, thus leading the colony to remember, or reproduce, the previous year’s trails.

Foraging in a harvester ant colony requires some individual ant memory. The ants search for scattered seeds and do not use pheromone signals; if an ant finds a seed, there is no point recruiting others because there are not likely to be other seeds nearby. The foragers travel a trail that can extend up to 20 meters from the nest. Each ant leaves the trail and goes off on its own to search for food. It searches until it finds a seed, then goes back to the trail, maybe using the angle of the sunlight as a guide, to return to the nest, following the stream of outgoing foragers. Once back at the nest, a forager drops off its seed, and is stimulated to leave the nest by the rate at which it meets other foragers returning with food. On its next trip, it leaves the trail at about the same place to search again.
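
The paragraph above describes a simple feedback rule: foragers waiting inside the nest are stimulated to leave by the rate at which nestmates return with food. A toy simulation can capture that rule; the probabilities, multipliers and colony size below are all invented for illustration, not taken from the research.

```python
# Toy simulation of the rate-based foraging rule described above.
# All numbers are invented for illustration.
import random

def simulate(minutes=200, colony_size=100, seed_density=0.3):
    inside = colony_size       # foragers waiting in the nest
    outside = 0                # foragers currently out searching
    returns_last_minute = 0
    total_trips = 0
    for _ in range(minutes):
        # Each searching ant finds a seed (and heads home) with some probability.
        returning = sum(random.random() < seed_density for _ in range(outside))
        outside -= returning
        inside += returning
        # Departures scale with the recent rate of successful returns,
        # plus a trickle of spontaneous departures to get things started.
        departures = min(inside, 1 + 2 * returns_last_minute)
        inside -= departures
        outside += departures
        total_trips += departures
        returns_last_minute = returning
    return total_trips

random.seed(1)
print("foraging trips on a seed-rich day:", simulate(seed_density=0.5))
print("foraging trips on a seed-poor day:", simulate(seed_density=0.05))
```

On a seed-rich day, frequent returns drive departures up and the colony makes many more foraging trips; on a seed-poor day, the same rule throttles foraging back, with no central controller involved.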

Every morning, the shape of the colony’s foraging area changes, like an amoeba that expands and contracts. No individual ant remembers the colony’s current place in this pattern. On each forager’s first trip, it tends to go out beyond the other ants travelling in the same direction. The result is in effect a wave that reaches further as the day progresses. Gradually the wave recedes, as the ants making short trips to sites near the nest seem to be the last to give up.

From day to day, the colony’s behavior changes, and what happens on one day affects the next. I conducted a series of perturbation experiments. I put out toothpicks that the workers had to move away, or blocked the trails so that foragers had to work harder, or created a disturbance that the patrollers tried to repel. Each experiment affected only one group of workers directly, but the activity of other groups of workers changed, because workers of one task decide whether to be active depending on their rate of brief encounters with workers of other tasks. After just a few days repeating the experiment, the colonies continued to behave as they did while they were disturbed, even after the perturbations stopped. Ants had switched tasks and positions in the nest, and so the patterns of encounter took a while to shift back to the undisturbed state. No individual ant remembered anything but, in some sense, the colony did.

Colonies live for 20-30 years, the lifetime of the single queen who produces all the ants, but individual ants live at most a year. In response to perturbations, the behavior of older, larger colonies is more stable than that of younger ones. It is also more homeostatic: the larger the magnitude of the disturbance, the more likely older colonies were to focus on foraging rather than on responding to the hassles I had created, while the worse it got, the more the younger colonies reacted. In short, older, larger colonies grow up to act more wisely than younger, smaller ones, even though the older colony does not have older, wiser ants.

Ants use the rate at which they meet and smell other ants, or the chemicals deposited by other ants, to decide what to do next. A neuron uses the rate at which it is stimulated by other neurons to decide whether to fire. In both cases, memory arises from changes in how ants or neurons connect and stimulate each other. It is likely that colony behavior matures because colony size changes the rates of interaction among ants. In an older, larger colony, each ant has more ants to meet than in a younger, smaller one, and the outcome is a more stable dynamic. Perhaps colonies remember a past disturbance because it shifted the location of ants, leading to new patterns of interaction, which might even reinforce the new behavior overnight while the colony is inactive, just as our own memories are consolidated during sleep. Changes in colony behavior due to past events are not the simple sum of ant memories, just as changes in what we remember, and what we say or do, are not a simple set of transformations, neuron by neuron. Instead, your memories are like an ant colony’s: no particular neuron remembers anything although your brain does.

Why Did Humans Lose Their Fur?


We are the naked apes of the world, having shed most of our body hair long ago.

Homo neanderthalensis, an earlier relative of Homo sapiens, also evolved to shed most of its body hair. (Reconstruction based on Shanidar 1 by John Gurche / National Museum of Natural History)

Millions of modern humans ask themselves the same question every morning while looking in the mirror: Why am I so hairy? As a society, we spend millions of dollars per year on lip waxing, eyebrow threading, laser hair removal, and face and leg shaving, not to mention the cash we hand over to Supercuts or the neighborhood salon. But it turns out we are asking the wrong question—at least according to scientists who study human genetics and evolution. For them, the big mystery is why we are so hairless.

Evolutionary theorists have put forth numerous hypotheses for why humans became the naked mole rats of the primate world. Did we adapt to semi-aquatic environments? Does bare skin help us sweat to keep cool while hunting during the heat of the day? Did losing our fur allow us to read each other’s emotional responses such as fuming or blushing? Scientists aren’t exactly sure, but biologists are beginning to understand the physical mechanism that makes humans the naked apes. In particular, a recent study in the journal Cell Reports has begun to depilate the mystery at the molecular and genetic level.

Sarah Millar, co-senior author of the new study and a dermatology professor at the University of Pennsylvania’s Perelman School of Medicine, explains that scientists are largely at a loss to explain why different hair patterns appear across human bodies. “We have really long hair on our scalps and short hair in other regions, and we’re hairless on our palms and the underside of our wrists and the soles of our feet,” she says. “No one understands really at all how these differences arise.”

In many mammals, an area known as the plantar skin, which is akin to the underside of the wrist in humans, is hairless, along with the footpads. But in a few species, including polar bears and rabbits, the plantar area is covered in fur. A researcher studying the plantar region of rabbits noticed that an inhibitor protein, called Dickkopf 2 or Dkk2, was not present in high levels, giving the team the first clue that Dkk2 may be fundamental to hair growth. When the team looked at the hairless plantar region of mice, they found that there were high levels of Dkk2, suggesting the protein might keep bits of skin hairless by blocking a signaling pathway called WNT, which is known to control hair growth.

To investigate, the team compared normally developing mice with a group that had a mutation which prevents Dkk2 from being produced. They found that the mutant mice had hair growing on their plantar skin, providing more evidence that the inhibitor plays a role in determining what’s furry and what’s not.

But Millar suspects that the Dkk2 protein is not the end of the story. The hair that developed on the plantar skin of the mice with the mutation was shorter, finer and less evenly spaced than the rest of the animals’ hair. “Dkk2 is enough to prevent hair from growing, but not to get rid of all control mechanisms. There’s a lot more to look at.”

Even without the full picture, the finding could be important in future research into conditions like baldness, since the WNT pathway is likely still present in chrome domes—it’s just being blocked by Dkk2 or similar inhibitors in humans. Millar says understanding the way the inhibitor system works could also help in research of other skin conditions like psoriasis and vitiligo, which causes a blotchy loss of coloration on the skin.

A reconstruction of the head of human ancestor Australopithecus afarensis, an extinct hominin that lived between about 3 and 4 million years ago. The famous Lucy skeleton belongs to the species Australopithecus afarensis.

With a greater understanding of how skin is rendered hairless, the big question remaining is why humans became almost entirely hairless apes. Millar says there are some obvious reasons—for instance, having hair on our palms and wrists would make knapping stone tools or operating machinery rather difficult, and so human ancestors who lost this hair may have had an advantage. The reason the rest of our body lost its fur, however, has been up for debate for decades.

One popular idea that has gone in and out of favor since it was proposed is called the aquatic ape theory. The hypothesis suggests that human ancestors lived on the savannahs of Africa, gathering and hunting prey. But during the dry season, they would move to oases and lakesides and wade into shallow waters to collect aquatic tubers, shellfish or other food sources. The hypothesis suggests that, since hair is not a very good insulator in water, our species lost our fur and developed a layer of fat. The hypothesis even suggests that we might have developed bipedalism due to its advantages when wading into shallow water. But this idea, which has been around for decades, hasn’t received much support from the fossil record and isn’t taken seriously by most researchers.

A more widely accepted theory is that, when human ancestors moved from the cool shady forests into the savannah, they developed a new method of thermoregulation. Losing all that fur made it possible for hominins to hunt during the day in the hot grasslands without overheating. An increase in sweat glands, which humans have in far greater numbers than other primates, also kept early humans on the cool side. The development of fire and clothing meant that humans could keep cool during the day and cozy up at night.

But these are not the only possibilities, and perhaps the loss of hair is due to a combination of factors. Evolutionary scientist Mark Pagel at the University of Reading has also proposed that going fur-less reduced the impact of lice and other parasites. Humans kept some patches of hair, like the stuff on our heads which protects from the sun and the stuff on our pubic regions which retains secreted pheromones. But the more hairless we got, Pagel says, the more attractive hairlessness became, and a stretch of hairless hide turned into a potent advertisement of a healthy, parasite-free mate.

One of the most intriguing theories is that the loss of hair on the face and some of the hair around the genitals may have helped with emotional communication. Mark Changizi, an evolutionary neurobiologist and director of human cognition at the research company 2AI, studies vision and color theory, and he says the reason for our hairless bodies may be in our eyes. While many animals have two types of cones, or the receptors in the eye that detect color, humans have three. Other animals that have three cones or more, like birds and reptiles, can see in a wide range of wavelengths in the visible light spectrum. But our third cone is unusual—it gives us a little extra power to detect hues right in the middle of the spectrum, allowing humans to pick out a vast range of shades that seem unnecessary for hunting or tracking.

Changizi proposes that the third cone allows us to communicate nonverbally by observing color changes in the face. “Having those two cones detecting wavelengths side by side is what you want if you want to be sensitive to oxygenation of hemoglobin under the skin to understand health or emotional changes,” he says. For instance, skin that looks a little green or blue can indicate illness in a baby, a pink blush might indicate sexual attraction, and a face flushing with red could indicate anger, even in people with darker skin tones. But the only way to see all of these emotional states was for humans to lose their fur, especially on their faces.

In a 2006 paper in Biology Letters, Changizi found that primates with bare faces and sometimes bare rumps also tended to have three cones like humans, while fuzzy-faced monkeys lived their lives with just two cones. According to the paper, hairless faces and color vision seem to run together.

Millar says that it’s unlikely that her work will help us directly figure out whether humans are swimming apes, sweaty monkeys or blushing primates. But combining the new study’s molecular evidence of how hair grows with physical traits observed in humans will get us closer to the truth—or at least closer to a fuller, shinier head of hair.

Sharks Have Existed More or Less Unchanged For Millions of Years, Until Now


Shark populations off the east coast of Australia have been declining over the past 55 years with little sign of recovery, according to research published in the journal Communications Biology.

Coastal shark numbers are continuing a 50-year decline, contradicting popular theories of exploding shark populations, according to an analysis of Queensland Shark Control Program data.

University of Queensland and Griffith University researchers analysed data from the program, which has used baited drumlines and nets since 1962 and now covers 1,760 km of the Queensland coastline.

Chris Brown from Griffith’s Australian Rivers Institute says the results show consistent and widespread declines of apex sharks — tiger sharks, white sharks and hammerheads — along Queensland’s coastline.


“We were surprised at how rapid these declines were, especially in the early years of the shark control program. We had to use specialist statistical methods to properly estimate the declines, because they occurred so quickly,” says Brown.

“We were also surprised to find the declines were so consistent across different regions.”

Some species, such as hammerhead sharks, are recognised internationally as being at risk of extinction.

“Sharks are an important part of Australia’s identity. They are also survivors that have been around for hundreds of millions of years, surviving through the extinction of dinosaurs,” he says.

“It would be a great tragedy if we lost these species because of preventable human causes.

“Sharks play important roles in ecosystems as scavengers and predators, and they are indicators of healthy ecosystems. These declines are concerning because they suggest the health of coastal ecosystems is also declining.”

George Roff, a UQ School of Biological Sciences researcher, says historical baselines of Queensland shark populations are largely unknown despite a long history of shark exploitation by recreational and commercial fisheries.

“Explorers in the 19th century once described Australian coastlines as being chock-full of sharks, yet we don’t have a clear idea of how many sharks there used to be on Queensland beaches,” he says.

“Shark populations around the world have declined substantially in recent decades, with many species being listed as vulnerable and endangered.”


The research team reconstructed historical records of shark catches to explore changes in the number and sizes of sharks over the past half century.

“What we found is that large apex sharks such as hammerheads, tigers and white sharks, have declined by 74 to 92 per cent along Queensland’s coast,” Roff says.

“And the chance of zero catch – catching no sharks at any given beach per year – has increased by as much as seven-fold.

“The average size of sharks has also declined – tiger sharks and hammerhead sharks are getting smaller.”

“We will never know the exact numbers of sharks in our oceans more than half a century ago, but the data points to radical changes in our coastal ecosystems since the 1960s.

“The data acts as a window into the past, revealing what was natural on our beaches, and provides important context for how we manage sharks.

“What may appear to be increases in shark numbers is in reality a fraction of past baselines, and the long-term trend shows ongoing declines.

“While often perceived as a danger to the public, sharks play important ecological roles in coastal ecosystems.

“Large apex sharks are able to prey on larger animals such as turtles, dolphins and dugongs, and their widespread movement patterns along the coastline connects coral reefs, seagrass beds and coastal ecosystems.

“Such loss of apex sharks is likely to have changed the structure of coastal food webs over the past half century.”

Most Biology Textbooks Overlook The Most Abundant Animals on The Planet


Insects are kind of a big deal. As many as 30 million species make up this ecologically important class, only a fraction of which we know about. Around 80 percent of all animal species are insects. Estimates put their numbers in the quintillions.

Not that you’d easily know that if you opened a random introductory biology textbook – these are much more likely to give vertebrates a starring role. So it might be time to put the spineless members of the animal kingdom back into the spotlight.


A recent survey of 88 popular entry-level texts published between 1906 and 2016 found insects just weren’t filling the pages in a way that reflected either their abundance or their significance in ecology.

“Insects are essential to every terrestrial ecosystem and play important roles in everything from agriculture to human health,” says North Carolina State University biologist Jennifer Landin.

“But our analysis shows that students taking entry-level biology courses are learning virtually nothing about them.”

Most surprisingly, this deficit has been on the increase since the 1960s. Our interest in the humble bug just isn’t what it used to be. And that’s a problem, according to the researchers.

“We do not exist apart from nature,” Landin says.

“Humans and insects, for example, have direct effects on each other – and that is no longer clearly presented in the teaching literature.”

To explore how generalised biology textbooks have changed over time with respect to their choice of content, the researchers combed their selection of textbooks for words, figures and illustrations that featured some kind of insect.

These were then recorded against the book’s year of publication, revealing a gentle slide in the percentage of textbook pages dedicated to insect anatomy, lives, and relationships.

A century ago, you could expect an average of 32.6 pages to be devoted to something insecty. That’s about 8.8 percent of the total.

Fast forward to books published between 2000 and 2016, and that number drops to 5.67 pages. A miserable 0.59 percent.

As if that’s not bad enough, the team found a huge imbalance in the categories of these super important arthropods.

Orthoptera – such as locusts and crickets – were overrepresented. They make up just 2 percent of insect species, but occupied as much as a quarter of the insect real estate.

Beetles, of the order Coleoptera, also represented about a quarter of those pages, in spite of making up a whopping 37 percent of all species of insect.

You could argue that big numbers don’t necessarily make for an important group of animals. There are only so many pages in a textbook, and only so much time to study them all – finding the right representatives requires a little more nuance.

But in addition to a quantitative assessment, the team examined the kinds of words used to describe insects, and assigned them an emotive value as viewed by a relative entomological novice.

So while ‘pest’ might well be fairly denotative to an expert, to the average first-year student this would make an insect look less like the hero of the story.

Texts published prior to the 1960s contained 8.7 times more descriptors, of both a positive and negative variety, than those published after 2000.

However, those words tended to be a little more positive. We might not be as colourful in our descriptions today, but the occasional connotations appear to be less in the insect’s favour.

So not only are we talking about ants, moths, and flies less, we’re less likely to be flattering in our descriptors.

“We saw societal shifts in the groups of insects addressed in texts; butterflies were covered more when butterfly collecting was a popular hobby, mosquitoes and other flies were overwhelming in books when insect-transmitted disease was rampant,” says Landin.

This social relevance is to be expected in textbook trends. But insects are far from becoming less important: a decline in insect numbers linked to climate change is a concern we will face in the coming decades.

We want our future biologists to be not just informed about the 80 percent of animal species that are insects, but excited by them too.

It’s time to back the bugs!

How Earth’s Future Could Soon Recreate a Lost World of 50 Million Years Ago


Humans have pulled too hard on our planet’s strings, and now we’re at a point where the global climate itself is unravelling.

A new study suggests that if nothing is done to reduce our carbon emissions, we could essentially reverse 50 million years of long-term cooling in just a few generations.


The consequences could send us spiralling back in time by at least 3 million years. By 2030, the study predicts that Earth’s climate may resemble the mid-Pliocene – the last great warm period before now, when the world was 1.8 degrees Celsius warmer (3.2 degrees Fahrenheit).

From that precarious spot, we could retreat even further. By 2150, the study suggests our climate could most resemble the ice-free Eocene of some 50 million years past, when there were extremely high carbon dioxide levels and global temperatures were roughly 13 degrees Celsius warmer (23.4 degrees Fahrenheit).

This is a time when crocodiles swam in the swampy forests of the Arctic Circle and palm trees dropped coconuts in Alaska.

“If we think about the future in terms of the past, where we are going is uncharted territory for human society,” says lead author Kevin Burke, a paleoecologist at the University of Wisconsin-Madison.

“We are moving toward very dramatic changes over an extremely rapid time frame, reversing a planetary cooling trend in a matter of centuries.”

Today, the accelerated rate of climate change is faster than anything the planet has ever experienced before. Now, we are so far off the road map that one of the only ways to figure out where we are going is to retrace the ancient steps the world took long, long ago.

Combing through Earth’s climate history, the new study sought to identify a time that is similar to current climate projections.

To do this, the researchers picked out six climate benchmarks from throughout Earth’s geologic history, dating as far back as the early Eocene to as recently as the early 20th century.

The team then compared these geological periods with two different climate scenarios, calculated using the best available data from the fifth assessment report from the Intergovernmental Panel on Climate Change (IPCC).

The first scenario is the worst case possible – a future in which humans do not mitigate greenhouse gas emissions at all – and the second scenario is one in which we do manage to moderately reduce emissions (a feat that will be difficult to achieve, given our current activity).

“Based on observational data, we are tracking on the high end of the emissions scenarios, but it’s too soon to tell,” says Burke.

Using no less than three different climate models, the researchers tested both of these scenarios, and then compared them to each of the geologic periods picked out.

The results are kind of like a ‘choose your own adventure’ book with only two options: we can either do nothing and end up with a climate that resembles the Eocene, or we can try and reduce our emissions and halt our climate at Pliocene conditions.

Under both of these scenarios and across each of the models, the results for the short term are about the same. By at least 2040, Earth’s climate will most closely resemble the mid-Pliocene, and this is well beyond the “safe operating space” of the Holocene that we were shooting for.

At this point, no matter which way we slice it, it seems highly likely that our children and grandchildren will live to see a world where temperatures will rise, precipitation will increase, ice caps will melt, and the poles will become temperate.

During the Pliocene, the climate was arid and the High Arctic was home to forests in which camels and other animals roamed. Who knows what will happen to biological life and human society when the climate reverts to that state within just a few centuries?


The findings of the new study reveal that these rapid changes will probably begin at the centre of Earth’s continents, spiralling outwards in concentric circles until they engulf the whole planet.

This means that in some areas of the world – for instance, the parts that lie at the centre of those circles – the climate consequences will be especially drastic.

“Madison (Wisconsin) warms up more than Seattle (Washington) does, even though they’re at the same latitude,” explains co-author John ‘Jack’ Williams, a researcher in ecological responses to climate change.

“When you read that the world is expected to warm by 3 degrees Celsius this century, in Madison we should expect to roughly double the global average.”

But while we can predict some of these extreme climate changes, others will no doubt take us by surprise.

In the worst case scenario, the research found that by the stage our climate once again resembles the mid-Pliocene, nearly 9 percent of the planet will be experiencing “novel” climates.

This means that in some areas of the world, including eastern and southeastern Asia, northern Australia, and the coastal Americas, humans will be experiencing climate conditions that have no known geological or historical precedent.

“In the roughly 20 to 25 years I have been working in the field, we have gone from expecting climate change to happen, to detecting the effects, and now, we are seeing that it’s causing harm,” says Williams.

“People are dying, property is being damaged, we’re seeing intensified fires and intensified storms that can be attributed to climate change. There is more energy in the climate system, leading to more intense events.”

It’s difficult to put a positive spin on all of this, but the researchers tried their hardest. After all, life has a way of surviving and pushing through seemingly insurmountable challenges.

“We’ve seen big things happen in Earth’s history – new species evolved, life persists and species survive. But many species will be lost, and we live on this planet,” says Williams.

“These are things to be concerned about, so this work points us to how we can use our history and Earth’s history to understand changes today and how we can best adapt.”

A Neuroscientist Explains Why Multitasking Screens Is So Terrible For Your Brain


How many times have you sat down to watch TV or a movie, only to immediately shift your attention to your smartphone or tablet? Known as “media multitasking”, this phenomenon is so common that an estimated 178 million US adults regularly use another device while watching TV.


While some might assume that frequently shifting your attention between different information streams is good brain training for improving memory and attention, studies have found the opposite to be true.

Media multitasking is when people engage with multiple devices or content at the same time. This might be using your smartphone while watching TV, or even listening to music and text messaging friends while playing a video game.

One recent study looked at the body of current research on media multitasking (consisting of 22 peer-reviewed research papers) and found that self-reported “heavy media multitaskers” performed worse on attention and working memory tests. Some even had structural brain differences.

The study found that “heavy” media multitaskers performed about 8-10% worse on sustained attention tests compared to “light” media multitaskers. These tests involved participants paying attention to a certain task (such as spotting a specific letter in a stream of other letters) for 20 minutes or more.

Researchers found that on these tests (and others) the ability to sustain attention was poorer for heavy multitaskers. These findings might explain why some people are heavy multitaskers.

If someone has a poor attention span, they may be likely to switch between activities quickly, instead of staying with just one.

Heavy media multitaskers were also found to perform worse than light media multitaskers on working memory tests. These involved memorising and remembering information (like a phone number) while performing another task (such as searching for a pen and piece of paper to write it down).

Complex working memory is closely linked with having better focus and being able to ignore distractions.

Brain scans of the participants also showed that an area of the brain known as the anterior cingulate cortex is smaller in heavy multitaskers. This area of the brain is involved in controlling attention. A smaller one may imply worse functioning and poorer attention.

But while researchers have confirmed that heavy media multitaskers have worse memory and attention, they are still uncertain about what causes heavy media multitasking.

Do heavy media multitaskers have worse attention because of their media multitasking? Or do they media multitask because they have poor attention?

It might also be an effect of general intelligence, personality, or something else entirely that causes poor attention and increased media multitasking behaviours.

But the news isn’t all bad for heavy multitaskers. Curiously, this impairment might have some benefit. Research suggests that light media multitaskers are more likely to miss helpful information that isn’t related to the task they’re currently performing.

For example, a person may read with a radio playing in the background. When important breaking news is broadcast, a heavy media multitasker is actually more likely to pick it up than a light media multitasker.

So should you avoid media multitasking? Based on current research, the answer is probably yes. Multitasking usually causes poorer performance when doing two things at once, and puts more demands on the brain than doing one thing at a time.

This is because the human mind suffers from an “attentional bottleneck”, which only allows certain mental operations to occur one after another.

But if you’re wondering whether media multitasking will impair your attention capabilities, the answer is probably no. We don’t know yet whether heavy media multitasking is really the cause for lower performance on the tests.

The effects observed in controlled laboratory settings are also generally rather small and most likely negligible in normal everyday life.

Until we have more research, it’s probably too early to start panicking about the potential negative effects of media multitasking.

Chronic Bullying Could Actually Reshape The Brains of Teens


Sticks and stones may break your bones, but name-calling could actually change the structure of your brain.

A new study has found that persistent bullying in high school is not just psychologically traumatising, it could also cause real and lasting damage to the developing brain.


The findings are drawn from a long-term study on teenage brain development and mental health, which collected brain scans and mental health questionnaires from European teenagers between the ages of 14 and 19.

Following 682 young people in England, Ireland, France and Germany, the researchers tallied 36 in total who reported experiencing chronic bullying during these years.

When the researchers compared the bullied participants to those who had experienced less intense bullying, they noticed that their brains looked different.

Across the length of the study, in certain regions, the brains of the bullied participants appeared to have actually shrunk in size.

In particular, the pattern of shrinking was observed in two parts of the brain called the putamen and the caudate, a change oddly reminiscent of that seen in adults who have experienced early life stress, such as childhood maltreatment.

Sure enough, the researchers found that these changes could partly explain the relationship between extreme bullying and higher levels of general anxiety at age 19. And this was true even when controlling for other types of stress and co-morbid depressive symptoms.

The connection is further supported by previous functional MRI studies that found differences in the connectivity and activation of the caudate and putamen in those with anxiety.

“Although not classically considered relevant to anxiety, the importance of structural changes in the putamen and caudate to the development of anxiety most likely lies in their contribution to related behaviours such as reward sensitivity, motivation, conditioning, attention, and emotional processing,” explains lead author Erin Burke Quinlan from King’s College London.

In other words, the authors think all of this shrinking could be a mark of mental illness, or at least help explain why these 19-year-olds are experiencing such unusually high anxiety.

But while numerous past studies have already linked childhood and adolescent bullying to mental illness, this is the very first study to show that unrelenting victimisation could impact a teenager’s mental health by actually reshaping their brain.

The results are cause for worry. During adolescence, a young person’s brain is absolutely exploding with growth, expanding at an incredible pace.

And even though it’s normal for the brain to prune back some of this overabundance, in the brains of those who experienced chronic bullying, the whole pruning process appears to have spiralled out of control.

The teenage years are an extremely important and formative period in a person’s life, and these sorts of significant changes do not bode well. The authors suspect that as these children age, they might even begin to experience greater shrinkage in the brain.

But an even longer long-term study will need to be done if we want to verify that hunch. In the meantime, the authors are recommending that every effort be made to limit bullying before it can cause damage to a teenager’s brain and their mental health.

This Breathtaking Image Is a Real Photo of Two Stars Destroying Each Other


The death of a binary star can be a spectacularly violent thing.

This picture shows the binary system R Aquarii, a red giant throwing off its outer envelope, which is being greedily cannibalised by its companion, a much smaller, denser white dwarf.


The dramatic moment you’re looking at unfolded just 650 light-years from Earth – practically right next door in astronomical terms, which is why astronomers have a keen interest in the event.

An inset view of R Aquarii’s dance of death (ESO/Schmid et al.)

But this new image of the interaction – taken in near-infrared by the SPHERE planet-hunting instrument on the European Southern Observatory’s Very Large Telescope – gives us an incredibly detailed new glimpse of the action.

For contrast, here is a picture taken by the Hubble Space Telescope of the nebula Cederblad 211 – the dust and gas cloud that the stars are in the process of creating.

The nebula Cederblad 211 around R Aquarii (Judy Schmidt; Hubble, NASA, ESA)

And another image taken by the Wide Field Camera 3’s near-infrared instrument.

R Aquarii imaged in the near-infrared (ESO/Schmid et al./NASA/ESA)

What’s happening here is very turbulent. The red giant is what is known as a Mira variable star, a star at the very end of its lifespan. These kinds of stars have already lost at least half their material, and as they pulsate, they reach a brightness 1,000 times that of the Sun.

The white dwarf – an end-of-life star that has exhausted its nuclear fuel – is also quite busy. The material it devours from the red giant accumulates on the white dwarf’s surface, occasionally triggering an enormous thermonuclear explosion that blasts the material out into space.

This amazingly clear image shows both the stars at the centre of the jets of material spinning out into space. Eventually, this binary system’s life could end in a colossal explosion – a Type Ia supernova.
