A new neural circuit controls fear in the brain.

Some people have no fear, like that 17-year-old kid who drives like a maniac. But for the nearly 40 million adults who suffer from anxiety disorders, an overabundance of fear rules their lives. Debilitating anxiety prevents them from participating in life’s most mundane moments, from driving a car to riding in an elevator. Today, a team of researchers at Cold Spring Harbor Laboratory (CSHL) describes a new pathway that controls fear memories and behavior in the mouse brain, offering mechanistic insight into how anxiety disorders may arise.

It is hard to imagine that an intangible emotion like fear is encoded within neuronal circuits, but researchers have found that fear is stored within a distinct region of the brain. “In our previous work, we discovered that fear learning and memory are orchestrated by neurons in the central amygdala,” explains CSHL Associate Professor Bo Li, who led the team of researchers. But what controls the central amygdala?

One possible candidate was a cluster of neurons that form the PVT, or paraventricular nucleus of the thalamus. This region of the brain is extremely sensitive to stress, acting as a sensor for both physical and psychological tension.

As described in work published today in Nature, the researchers looked to see if the PVT plays a role in fear learning and memory in mice. “We found that the PVT is specifically activated as animals learn to fear or as they recall fear memories,” says Li. The team was able to see that neurons from the PVT extend deep into the central amygdala. Disrupting the connection significantly impaired fear learning.

Because the link between the PVT and the central amygdala is a critical component of fear learning, it represents an ideal target for potential drugs to treat anxiety disorders. But how is this link established? The researchers looked to data from people with post-traumatic stress disorder (PTSD) to identify chemical messengers that might connect the two structures. They focused on a molecule called BDNF that has been implicated in anxiety disorders. BDNF is a well-known neural growth factor that plays an important role in stimulating the birth of new neurons as well as new connections between neurons. Patients with anxiety disorders frequently have mutations in BDNF, suggesting that it might have a role in fear learning and memory.

The researchers worked to determine if BDNF plays a role in fear, and specifically if it affects the connection between the PVT and central amygdala in mice. They found that the addition of BDNF in the central amygdala acutely activates its neurons, triggering a fear response in animals that have not previously been exposed to a fearful stimulus, and promoting the formation of long-term fear memories. “We established that this is a regulatory circuit that controls fear in mice: BDNF is the chemical messenger that allows the PVT to exert control over the central amygdala,” Li explains.

The results, he says, are consistent with what clinicians have seen and may help to explain some of the underlying pathology in patients. “Our work provides mechanistic insight into a novel circuit that controls fear in the brain, and provides a target for the future treatment of anxiety disorders,” says Li.

New laser could upgrade the images in tomorrow’s technology

A new semiconductor laser developed at Yale has the potential to significantly improve the imaging quality of the next generation of high-tech microscopes, laser projectors, photolithography, holography, and biomedical imaging.

Based on a chaotic cavity laser, the technology combines the brightness of traditional lasers with the lower image corruption of light-emitting diodes (LEDs). The search for better light sources for high-speed, full-field imaging applications has been the focus of intense experimentation and research in recent years.

The new laser is described in a paper in the Jan. 19 online edition of the Proceedings of the National Academy of Sciences. Several Yale labs and departments collaborated on the research, with contributions from scientists in applied physics, electrical and biomedical engineering, and diagnostic radiology.

“This chaotic cavity laser is a great example of basic research ultimately leading to a potentially important invention for the social good,” said co-author A. Douglas Stone, the Carl A. Morse Professor and chair of applied physics, and professor of physics. “All of the foundational work was primarily motivated by a desire to understand certain classes of lasers—random and chaotic—with no known applications. Eventually, with input from other disciplines, we discovered that these lasers are uniquely suited for a wide class of problems in imaging and microscopy.”

One of those problems is known as “speckle.” Speckle is a random, grainy pattern, caused by high spatial coherence that can corrupt the formation of images when traditional lasers are used. A way to avoid such distortion is by using LED light sources. The problem is, LEDs are not bright enough for high-speed imaging.

The new, electrically pumped semiconductor laser offers a different approach. It produces an intense emission, but with low spatial coherence.

“For full-field imaging, the speckle contrast should be less than ~4% to avoid any disturbance for human inspection,” explained Hui Cao, professor of applied physics and of physics, who is the paper’s corresponding author. “As we showed in the paper, the standard edge-emitting laser produced speckle contrast of ~50%, while our laser has the speckle contrast of 3%. So our new laser has completely eliminated the issue of coherent artifact for full-field imaging.”
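The speckle contrast Cao quotes has a standard definition: the standard deviation of the image intensity divided by its mean. The following is a minimal sketch (my own illustration, not code from the paper) of why low spatial coherence suppresses speckle: a source that behaves like N mutually incoherent spatial modes adds N independent speckle patterns in intensity, dropping the contrast from ~100% to roughly 1/√N, so reaching the ~4% threshold takes on the order of 625 modes.

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I) of an intensity image."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(0)
n_pixels = 200_000

# One coherent spatial mode gives fully developed speckle: intensity is
# exponentially distributed, so C is about 1 (roughly 100% contrast).
single_mode = rng.exponential(scale=1.0, size=n_pixels)

# A low-spatial-coherence source acts like N mutually incoherent modes
# whose intensities add. The per-pixel average of N unit-mean exponential
# intensities is Gamma(N, 1/N)-distributed, with C = 1/sqrt(N).
n_modes = 625  # 1/sqrt(625) = 4%, the threshold quoted above
many_modes = rng.gamma(shape=n_modes, scale=1.0 / n_modes, size=n_pixels)

print(round(speckle_contrast(single_mode), 2))  # ~1.0
print(round(speckle_contrast(many_modes), 3))   # ~0.04
```

The mode-counting model is a standard idealization; the paper's reported 50% and 3% figures for real lasers reflect additional experimental factors.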

Co-author Michael A. Choma, assistant professor of diagnostic radiology, pediatrics, and biomedical engineering, said speckle is a major barrier in the development of certain classes of clinical diagnostics that use light. “It is tremendously rewarding to work with a team of colleagues to develop speckle-free lasers,” Choma said. “It also is exciting to think about the new kinds of clinical diagnostics we can develop.”

10 Surprising Things That Benefit Our Brain That You Can Do Every Day

Our brains are by far our most important organs. Here are 10 of the most surprising things our brains do and what we can learn from them:

1. Your brain does creative work better when you’re tired.

Here’s how it breaks down:

If you’re a morning lark, say, you’ll want to favor those morning hours when you’re feeling fresher to get your most demanding, analytic work done. Using your brain to solve problems, answer questions and make decisions is best done when you’re at your peak. For night owls, this is obviously a much later period in the day.

On the other hand, if you’re trying to do creative work you’ll actually have more luck when you’re more tired and your brain isn’t functioning as efficiently. This sounds crazy, but it actually makes sense when you look at the reasoning behind it. It’s one of the reasons that great ideas often happen in the shower after a long day of work.

If you’re tired, your brain is not as good at filtering out distractions and focusing on a particular task. It’s also a lot less efficient at remembering connections between ideas or concepts. These are both good things when it comes to creative work, since this kind of work requires us to make new connections, be open to new ideas and think in new ways. So a tired, fuzzy brain is much more use to us when working on creative projects.

This Scientific American article explains how distractions can actually be a good thing for creative thinking:

Insight problems involve thinking outside the box. This is where susceptibility to “distraction” can be of benefit. At off-peak times we are less focused, and may consider a broader range of information. This wider scope gives us access to more alternatives and diverse interpretations, thus fostering innovation and insight.

2. Stress can change the size of your brain (and make it smaller).

I bet you didn’t know that stress is actually the most common cause of changes in brain function. I was surprised to find this out when I looked into how stress affects our brains.

I also found some research that showed signs of brain size decreasing due to stress.

One study used baby monkeys to test the effects of stress on development and long-term mental health. Half the monkeys were cared for by their peers for six months, while the other half remained with their mothers. Afterwards, the monkeys were returned to typical social groups for several months before the researchers scanned their brains.

In the monkeys who had been removed from their mothers and cared for by their peers, areas of their brains related to stress were still enlarged, even after being in normal social conditions for several months.

3. It is literally impossible for our brains to multitask.

Multitasking is something we’ve long been encouraged to practice, but it turns out multitasking is actually impossible. When we think we’re multitasking, we’re actually context switching. That is, we’re quickly switching back and forth between different tasks rather than doing them at the same time.

The book Brain Rules explains how detrimental multitasking can be:

Research shows your error rate goes up 50 percent and it takes you twice as long to do things.

The problem with multitasking is that we’re splitting our brain’s resources. We’re giving less attention to each task, and probably performing worse on all of them:

When the brain tries to do two things at once, it divides and conquers, dedicating one-half of our gray matter to each task.

When our brains handle a single task, the prefrontal cortex plays a big part. Here’s how it helps us achieve a goal or complete a task:

The anterior part of this brain region forms the goal or intention — for example, “I want that cookie” — and the posterior prefrontal cortex talks to the rest of the brain so that your hand reaches toward the cookie jar and your mind knows whether you have the cookie.

A study in Paris found that when a second task was required, the brains of the study volunteers split up, with each hemisphere working alone on a task. The brain was overloaded by the second task and couldn’t perform at its full capacity, because it needed to split its resources.

4. Naps improve your brain’s day-to-day performance.

We’re pretty clear on how important sleep is for our brains, but what about naps? It turns out that these short bursts of sleep are actually really useful.

Here are a couple of ways that napping can benefit the brain:

Improved Memory

In one study, participants memorized illustrated cards to test their memory strength. After memorizing a set of cards, they had a 40-minute break wherein one group napped and the other stayed awake. After the break both groups were tested on their memory of the cards, and the group who had napped performed better:

Much to the surprise of the researchers, the sleep group performed significantly better, retaining on average 85 percent of the patterns, compared to 60 percent for those who had remained awake.

Apparently, napping actually helps our brain solidify memories:

Research indicates that when a memory is first recorded in the brain — in the hippocampus, to be specific — it’s still “fragile” and easily forgotten, especially if the brain is asked to memorize more things. Napping, it seems, pushes memories to the neocortex, the brain’s “more permanent storage,” preventing them from being “overwritten.”

What Happens in the Brain During a Nap

Some recent research has found that the right side of the brain is far more active during a nap, while the left side stays fairly quiet. This holds even though roughly 95 percent of the population is right-handed, with the left hemisphere dominant during waking hours; during sleep, the right side is consistently the more active hemisphere.

The study’s author, Andrei Medvedev, speculated that the right side of the brain handles “housekeeping” duties while we’re asleep.

So while the left side of your brain takes some time off to relax, the right side is clearing out your temporary storage areas, pushing information into long-term storage and solidifying your memories from the day.

5. Your vision trumps all other senses.

Despite being one of our five main senses, vision seems to take precedence over the others:

Hear a piece of information, and three days later you’ll remember 10 percent of it. Add a picture and you’ll remember 65 percent.

Pictures beat text as well, in part because reading is so inefficient for us. Our brain sees words as lots of tiny pictures, and we have to identify certain features in the letters to be able to read them. That takes time.

In fact, vision is so powerful that the best wine tasters in the world have been known to describe a dyed white wine as a red.

Not only is it surprising that we rely on our vision so much, but it actually isn’t even that good! Take this fact, for instance:

Our brain is doing all this guessing because it doesn’t know where things are. In a three-dimensional world, the light actually falls on our retina in a two-dimensional fashion. So our brain approximates the viewable image.
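That depth ambiguity is easy to demonstrate with a pinhole (perspective) projection, the usual first-order model of how a 3-D scene lands on a 2-D retina: every point along a ray through the pinhole maps to the same image coordinates, so the image alone cannot tell distance. A small illustrative sketch (the function name and numbers are my own):

```python
def project(point, focal_length=1.0):
    """Pinhole projection of a 3-D point (x, y, z) onto a 2-D image
    plane at distance focal_length: (f*x/z, f*y/z)."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# A small nearby object and a faraway object twice its size project to
# exactly the same retinal coordinates -- the image is ambiguous, and
# the brain has to guess the depth.
near = (1.0, 1.0, 2.0)
far = (2.0, 2.0, 4.0)
print(project(near))  # (0.5, 0.5)
print(project(far))   # (0.5, 0.5)
```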

A truly staggering amount of the brain is dedicated just to vision, more than to any other sense, and visual processing reaches into many other parts of the brain.

6. Introversion and extroversion come from different wiring in the brain.

I just recently realized that introversion and extroversion are not actually related to how outgoing or shy we are but to how our brains recharge.

Here’s how the brains of introverts and extroverts differ:

Research has actually found that there is a difference in the brains of extroverted and introverted people in terms of how we process rewards and how our genetic makeup differs. Extroverts’ brains respond more strongly when a gamble pays off. Part of this is simply genetic, but it’s partly a difference in their dopamine systems as well.

An experiment that had people take gambles while in a brain scanner found the following:

When the gambles they took paid off, the more extroverted group showed a stronger response in two crucial brain regions: the amygdala and the nucleus accumbens.

The nucleus accumbens is part of the dopamine system, which affects how we learn and is generally known for motivating us to search for rewards. The difference in the dopamine system in the extrovert’s brain tends to push them toward seeking out novelty, taking risks and enjoying unfamiliar or surprising situations more than others. The amygdala is responsible for processing emotional stimuli, which gives extroverts that rush of excitement when they try something highly stimulating that might overwhelm an introvert.

More research has actually shown that the difference comes from how introverts and extroverts process stimuli. That is, the stimulation coming into our brains is processed differently depending on your personality. For extroverts, the pathway is much shorter. It runs through an area where taste, touch, visual and auditory sensory processing take place. For introverts, stimuli run through a long, complicated pathway in areas of the brain associated with remembering, planning and solving problems.

7. We tend to like people who make mistakes more.

Apparently, making mistakes actually makes us more likeable, due to something called the pratfall effect.

Kevan Lee recently explained how this works on the Buffer blog:

Those who never make mistakes are perceived as less likeable than those who commit the occasional faux pas. Messing up draws people closer to you, makes you more human. Perfection creates distance and an unattractive air of invincibility. Those of us with flaws win out every time. This theory was tested by psychologist Elliot Aronson. In his test, he asked participants to listen to recordings of people answering a quiz. Select recordings included the sound of the person knocking over a cup of coffee. When participants were asked to rate the quizzers on likability, the coffee-spill group came out on top.

So this is why we tend to dislike people who seem perfect! And now we know that making minor mistakes isn’t the worst thing in the world; in fact, it can work in our favor.

8. Meditation can rewire your brain for the better.

Here’s another one that really surprised me. I thought meditation was only good for improving focus and helping me stay calm throughout the day, but it actually has a whole bunch of great benefits.

Here are a few examples:

What happens without meditation is that there’s a section of our brains that’s sometimes called the “me center.” (It’s technically the medial prefrontal cortex.) This is the part that processes information relating to ourselves and our experiences. Normally the neural pathways from the bodily sensation and fear centers of the brain to the “me center” are really strong. When you experience a scary or upsetting sensation, it triggers a strong reaction in your “me center,” making you feel scared and under attack.

Anxiety and agitation have been shown to decrease after just a 20-minute meditation session.

When we meditate, especially when we are just getting started with meditation, we weaken this neural connection. This means that we don’t react as strongly to sensations that might have once lit up our “me centers.” As we weaken this connection, we simultaneously strengthen the connection between what’s known as our “assessment center” (the part of our brains known for reasoning) and our bodily sensation and fear centers. So when we experience scary or upsetting sensations, we can more easily look at them rationally. Here’s a good example:

For example, when you experience pain, rather than becoming anxious and assuming it means something is wrong with you, you can watch the pain rise and fall without becoming ensnared in a story about what it might mean.

Better Memory

One of the things that meditation has been linked to is improving rapid memory recall. Catherine Kerr, a researcher at the Martinos Center for Biomedical Imaging and the Osher Research Center, found that people who practiced mindful meditation were able to adjust the brain wave that screens out distractions and increase their productivity more quickly than those who did not meditate. She said that this ability to ignore distractions could explain “their superior ability to rapidly remember and incorporate new facts.” This seems to be very similar to the power of being exposed to new situations, which will also dramatically improve our memory of things.

Meditation has also been linked to increasing compassion, decreasing stress, improving memory skills and even increasing the amount of gray matter in the brain.

9. Exercise can reorganize the brain and boost your willpower.

Sure, exercise is good for your body, but what about your brain? Well, apparently there’s a link between exercise and mental alertness, in a similar way that happiness and exercise are related:

A lifetime of exercise can result in a sometimes astonishing elevation in cognitive performance, compared with those who are sedentary. Exercisers outperform couch potatoes in tests that measure long-term memory, reasoning, attention, problem-solving, even so-called fluid-intelligence tasks.

Of course, exercise can also make us happier, as we’ve explored before:

If you start exercising, your brain recognizes this as a moment of stress. As your heart pressure increases, the brain thinks you are either fighting the enemy or fleeing from it. To protect yourself and your brain from stress, you release a protein called BDNF (brain-derived neurotrophic factor). This BDNF has a protective and also reparative element to your memory neurons and acts as a reset switch. That’s why we often feel so at ease and things are clear after exercising, and eventually happy.

At the same time, endorphins, another chemical to fight stress, are released in your brain. The main purpose of endorphins is this, writes researcher McGovern:

These endorphins tend to minimize the discomfort of exercise, block the feeling of pain and are even associated with a feeling of euphoria.

10. You can make your brain think time is going slowly by doing new things.

Ever catch yourself saying, “Where does the time go!” every June when you realize the year is half over? This is a neat trick that relates to how our brains perceive time. Once you know how it works, you can trick your brain into thinking time is moving more slowly.

Essentially, our brains take a whole bunch of information from our senses and organize it in a way that makes sense to us, before we ever perceive it. So what we think is our sense of time is actually just a whole bunch of information presented to us in a particular way, as determined by our brains:

When our brains receive new information, it doesn’t necessarily come in the proper order. This information needs to be reorganized and presented to us in a form we understand. When familiar information is processed, this doesn’t take much time at all. New information, however, is a bit slower and makes time feel elongated.

Even stranger, it isn’t just a single area of the brain that controls our time perception; it’s done by a whole bunch of brain areas, unlike our common five senses, which can each be pinpointed to a single, specific area.

When we receive lots of new information, it takes our brains a while to process it all. The longer this processing takes, the longer that period of time feels.

When we’re in life-threatening situations, for instance, “we remember the time as longer because we record more of the experience. Life-threatening experiences make us really pay attention, but we don’t gain superhuman powers of perception.”

Scientists Have Discovered An Ocean 400 Miles Under Our Feet.



After decades of theorizing and searching, scientists are reporting that they’ve finally found a massive reservoir of water in the Earth’s mantle — a reservoir so vast that it could fill the Earth’s oceans three times over. This discovery suggests that Earth’s surface water actually came from within, as part of a “whole-Earth water cycle,” rather than being delivered by icy comets striking Earth billions of years ago, as the prevailing theory holds. As always, the more we understand about how the Earth formed, and how its multitude of interior layers continue to function, the more accurately we can predict the future. Weather, sea levels, climate change — these are all closely linked to the tectonic activity that endlessly churns away beneath our feet.

This new study, authored by a range of geophysicists and scientists from across the US, leverages data from the USArray — an array of hundreds of seismographs located throughout the US that are constantly listening to movements in the Earth’s mantle and core. After listening for a few years, and carrying out lots of complex calculations, the researchers believe that they’ve found a huge reserve of water that’s located in the transition zone between the upper and lower mantle — a region that occupies between 400 and 660 kilometers (250-410 miles) below our feet. [DOI: 10.1126/science.1253358 – “Dehydration melting at the top of the lower mantle”]



As you can imagine, things are a little complex that far down. We’re not talking about some kind of water reserve that can be reached in the same way as an oil well. The deepest a human borehole has ever gone is just 12 km — about halfway through the Earth’s crust — and we had to stop because geothermal energy was melting the drill bit. 660 kilometers is a long, long way down, and weird stuff happens down there.

Basically, the new theory is that the Earth’s mantle is full of a mineral called ringwoodite. We know from experiments here on the surface that, under extreme pressure, ringwoodite can trap water. Measurements made by the USArray indicate that as convection pushes ringwoodite deeper into the mantle, the increase in pressure forces the trapped water out (a process known as dehydration melting). That seems to be the extent of the study’s findings. Now they need to try and link together deep-Earth geology with what actually happens on the surface. The Earth is an immensely complex machine that generally moves at a very, very slow pace. It takes years of measurements to get anything even approaching useful data.



Earth’s underground ringwoodite ocean.

With all that said, there could be massive repercussions if this study’s findings are accurate. Even if the ringwoodite only contains around 2.6% water, the volume of the transition zone means this underground reservoir could contain enough water to re-fill our oceans three times over. I’m not saying that this gives us the perfect excuse to continue our abuse of Earth’s fresh water reserves, but it’s definitely something to mull over. This would also seem to discount the prevailing theory that our surface water arrived on Earth via a bunch of icy comets.
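The scale of that claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses my own rough inputs, not the study’s: a spherical shell between 400 and 660 km depth, an assumed mantle-rock density of about 3,500 kg/m³, the 2.6% water fraction quoted above, and roughly 1.4 × 10²¹ kg of water in the surface oceans. Treating the entire shell as hydrated ringwoodite gives an upper bound of several ocean masses, the same order of magnitude as the article’s figure; the study’s smaller factor of three presumably reflects only part of the zone being water-bearing.

```python
import math

EARTH_RADIUS_M = 6.371e6   # mean Earth radius, meters
OCEAN_MASS_KG = 1.4e21     # approximate mass of all surface oceans (assumed)
ROCK_DENSITY = 3500.0      # rough mantle-rock density, kg/m^3 (assumed)
WATER_FRACTION = 0.026     # 2.6% water by mass, as quoted in the article

def shell_volume(depth_top_m, depth_bottom_m):
    """Volume of the spherical shell between two depths below the surface."""
    r_outer = EARTH_RADIUS_M - depth_top_m
    r_inner = EARTH_RADIUS_M - depth_bottom_m
    return 4.0 / 3.0 * math.pi * (r_outer**3 - r_inner**3)

volume = shell_volume(400e3, 660e3)                  # transition zone
water_mass = volume * ROCK_DENSITY * WATER_FRACTION  # upper-bound water mass
print(f"{water_mass / OCEAN_MASS_KG:.1f} ocean masses")  # on the order of 7
```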

Finally, here’s a fun thought that should remind us that Earth’s perfect composition and climate are, if you look very closely, rather miraculous. One of the researchers, talking to New Scientist, said that if the water wasn’t stored underground, “it would be on the surface of the Earth, and mountaintops would be the only land poking out.” Maybe if the formation of Earth had been a little different, or if we were marginally closer to the Sun, or if a random asteroid didn’t land here billions of years ago… you probably wouldn’t be sitting here surfing the web.


Vitamin D supplementation reduces need for respiratory support, study suggests

Researchers from Boston’s Massachusetts General Hospital and Harvard Medical School have discovered that inadequate vitamin D supplementation among surgical intensive care patients can increase the amount of time needed on respiratory support.(1)

Vitamin D

The study, titled “Plasma 25-Hydroxyvitamin D Levels at Initiation of Care and Duration of Mechanical Ventilation in Critically Ill Surgical Patients,” was published in the Journal of Parenteral and Enteral Nutrition (JPEN). JPEN is the research journal of the American Society for Parenteral and Enteral Nutrition (ASPEN).(2)

The study notes the following:

Limited data exist regarding the relationship between plasma 25-hydroxyvitamin D levels and duration of respiratory support. Our goal was to explore whether vitamin D status at the time of intensive care unit (ICU) admission is associated with duration of mechanical ventilation in critically ill surgical patients.(2)

Not enough vitamin D increases time needed for respiratory support among the critically ill

The researchers analyzed data from a “prospective cohort study involving 210 critically ill surgical patients” to determine the effects that vitamin D levels have on duration of respiratory support. Ultimately, it was found that in such patients, “plasma 25-hydroxyvitamin D levels measured on ICU admission were inversely associated with the duration of respiratory support.”(2)

Plasma 25-hydroxyvitamin D is the result of the liver converting the vitamin D that one obtains from food, supplements and the sun. The amount of it in the body is considered a highly accurate indicator of a person’s vitamin D level. Too-low levels are associated with a host of health problems ranging from osteomalacia, a mineralization defect in which bone mineral density decreases, to the development of certain kinds of cancer, high blood pressure, diabetes and fatal cardiovascular circumstances.(3)

Asthma, COPD among problems worsened by low vitamin D levels

Additionally — and more pertinent to this particular study — proper vitamin D levels have been linked to improvements in lung function, further reinforcing the importance of its supplementation.

Previous studies have delved into this very matter, with results showing that vitamin D plays a significant role in boosting respiratory health. One finding, for example, homed in on “the known effects of vitamin D on immune function… in relation to respiratory health.” Its population-based study concluded that “Vitamin D appears capable of inhibiting pulmonary inflammatory responses while enhancing innate defence mechanisms against respiratory pathogens.” In addition to boosting the immune system overall, vitamin D was found to improve lung function in those suffering from respiratory-inflammation conditions such as asthma and chronic obstructive pulmonary disease (COPD).(4)

Supplementation, diet, key to getting more vitamin D

To ensure proper vitamin D levels, the Mayo Clinic says that the recommended dietary allowance (RDA) for adults is 600 IU. For adults older than 70, 800 IU of vitamin D should be consumed. Salmon, mushrooms and eggs are considered good vitamin D sources, as are supplements that come from a trustworthy source. Even better, supplements combined with foods high in calcium or a mineral supplement are believed to further bolster overall health.(5,6)
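The adult dosing rule above can be sketched as a trivial lookup (the figures are the Mayo Clinic numbers cited in the text; the function name and the adults-only guard are my own):

```python
def vitamin_d_rda_iu(age_years):
    """Recommended dietary allowance of vitamin D, in IU per day, per
    the Mayo Clinic figures cited above: 600 IU for adults, rising to
    800 IU for adults older than 70."""
    if age_years < 18:
        raise ValueError("the guidance above covers adults only")
    return 800 if age_years > 70 else 600

print(vitamin_d_rda_iu(45))  # 600
print(vitamin_d_rda_iu(75))  # 800
```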

Being mindful of vitamin D levels is essential not only when levels are low but also when too much of the vitamin exists in the body. Known as hypervitaminosis D, vitamin D toxicity can occur when a person has accumulated the vitamin in excess. While rare, it’s a serious condition that can cause everything from kidney problems to nausea. Typically, it results from taking vitamin D in high doses or for longer than a medical professional indicated to treat a deficiency.(7)


(1) http://www.eurekalert.org

(2) http://pen.sagepub.com

(3) http://www.livestrong.com

(4) http://www.ncbi.nlm.nih.gov

(5) http://www.mayoclinic.org

(6) http://www.naturalnews.com

(7) http://www.mayoclinic.org


The 5 Best Jobs You’ve Never Heard Of.

If you’re itching for a career change in 2015, here are some fast-growing, high-paying options that have yet to hit the mainstream.

Good news, job seekers: employment opportunities look bright in 2015. Staffing levels are expected to rise 19%, according to ManpowerGroup’s annual Employment Outlook Survey. Robust hiring gains are forecast for the “usual suspects,” says Payscale.com’s vice president Tim Low—namely retail, healthcare, and technology. But peel back those broad categories, and you’ll uncover high demand for unique talents and skill sets and a bunch of new jobs you may not even know existed.

“As we shift away from conventional jobs and move forward into the information economy, there are a growing number of opportunities for workers to transfer skills in seemingly unrelated fields,” says Stephanie Thomas, researcher and program director at the Institute for Compensation Studies at Cornell University.

Additionally, job titles are becoming more diverse, says Scott Dobroski, career trends analyst at Glassdoor, an employer review website. “Employers are looking for innovative ways to do business and are therefore [allocating money] to brand-new positions,” he says.

So if you’re itching for a change in 2015, here are some ways to break into these high-paying, still-under-the-radar careers—all of which are growing at a rate far greater than the 11% national average.

1. If you’re an executive assistant or medical administrator, consider becoming a… NUCLEAR MEDICINE TECHNOLOGIST

What it is: Don’t let the title scare you off; the position only calls for a degree from an accredited program, so no med school required. This health care professional operates specialized equipment to do computed tomography (CT) scans, magnetic resonance imaging (MRI), and other imaging tests that physicians and surgeons use to diagnose conditions and plan treatments.

How your skills translate: Attention to detail and good interpersonal skills—already at the heart of your current job—are crucial. Nuclear medicine technologists must follow instructions to the letter when operating equipment; even a minor error can result in overexposure to radiation. A background in math and/or science is a plus.

Why it’s growing: “Jobs are developing rapidly at the intersection of health care and technology,” says John Reed, senior executive director at IT staffing firm Robert Half Technology.

Education requirements: 1- to 4-year accreditation. For more information on requirements, check out the Society of Nuclear Medicine and Molecular Imaging (SNMMI), or use this state-by-state map for a list of accredited programs in your region.

Average salary: $71,120

Projected job growth through 2022: 20%

2. If you’re a mechanic, handyman, or computer repairer, consider becoming a… MEDICAL EQUIPMENT REPAIRER

What it is: Someone who installs, maintains, and repairs patient care equipment. However, given the sensitive nature of medical technology, specialized repair skills are required. These can be obtained through an associate’s degree in biomedical equipment technology or engineering; workers who operate less-complicated equipment (e.g., hospital beds and electric wheelchairs), meanwhile, can typically learn entirely on the job.

How your skills translate: Troubleshooting, dexterity, analytical thinking, and technical expertise—skills already in your toolbox—make for an efficient medical equipment repairer.

Why it’s growing: The increasing demand for health care services assures rapid growth for this specialty.

Education requirements: Typically a 2-year degree in biomedical equipment technology or engineering. Go here for information about obtaining a certification for Biomedical Equipment Technician (BMET).

Average salary: $44,180

Projected job growth through 2022: 30%

3. If you’re an IT specialist, computer programmer, or Web developer, consider becoming a… DIGITAL RISK OFFICER

What it is: To prevent data breaches—and better protect sensitive client and customer information—employers are beefing up their cyber security forces. A digital risk officer proactively assesses risks and implements security measures.

Why it’s growing: Recent hacks at Sony, Target, and Home Depot have put more companies on high alert. “Regardless of industry or size, if you have sensitive client information, you have to look carefully at what your security threats are,” says Cornell’s Thomas.

How your skills translate: Your analytical mindset, computer savvy, and problem-solving skills apply to the core responsibility of a digital risk officer: outthinking cybercriminals.

Education requirements: 2- or 4-year degree in IT and digital analytics certification. You’ll likely start as an information security analyst and need to complete a risk assessment training program as well.

Average salary: $153,602 for a chief risk officer, according to Payscale estimates.

Projected job growth: The field is so new that specific data isn’t available, but by 2017, one-third of large employers with a digital component will employ a digital risk officer, reports IT research firm Gartner.

4. If you’re a nutritionist, rehabilitation counselor, or athletic trainer, consider becoming a… HEALTH-AND-WELLNESS EDUCATOR

What it is: Previously outsourced, many companies are now hiring in-house specialists to offer health-and-wellness advice and services, says Brie Reynolds, director of online content at FlexJobs.com, which saw a spike in job postings for this position. The educator works with employees individually to assess personal health issues and create strategies tailored to each person’s needs.

Why it’s growing: Health improvements made by employees not only curb insurance costs but also boost job satisfaction, a key ingredient in retaining talent. Some employers are tying financial incentives to health-and-wellness achievements—discounting health insurance premiums for employees who lose weight, quit smoking, or lower their blood pressure, among other behavioral changes.

How your skills translate: Pure and simple, you’re a “people person.” Your ability to connect with individuals and motivate them to make behavioral changes will come in handy when promoting healthy living strategies to workers.

Education requirements: 4-year degree and health education specialist certification. The National Commission for Health Education Credentialing has information on requirements and eligibility.

Average salary: $62,280

Projected job growth through 2022: 21%

5. If you’re a management consultant, consider becoming an… INDUSTRIAL-ORGANIZATIONAL PSYCHOLOGIST

What it is: Companies hire industrial-organizational psychologists to improve work performance, job satisfaction, and skills training. This person is responsible for managing and developing a range of programs, including hiring systems, performance measurement, and health-and-safety policies.

How your skills translate: Your ability to assess an organization’s structural efficiency will serve you well in your new job. Like you, an industrial-organizational psychologist must work well with corporate clients to identify areas for improvement and increased profitability.

Why it’s growing: While not new, this lesser-known job tops the BLS’s list of the fastest-growing occupations. Chalk it up to its track record of success; surveys show the position effectively boosts work performance and improves employee retention rates.

Education requirements: Master’s degree. Check out Careers in Psychology for more information.

Average salary: $80,330

Projected job growth through 2022: 53%

Battery recipe: Deep-fried graphene pom-poms

In Korea, the work of materials scientists is making news worldwide this week, following publication of their article, “Spray-Assisted Deep-Frying Process for the In Situ Spherical Assembly of Graphene for Energy-Storage Devices,” in Chemistry of Materials. Its eight authors sprayed graphene oxide into hot solvent to form pom-pom-like particles suitable for electrodes. As they put it: “A simple, spray-assisted method for the self-assembly of graphene was successfully demonstrated by using a high temperature organic solvent in a manner reminiscent of the deep-frying of food.” Prachi Patel, writing in Chemical & Engineering News, explained that the team constructed round, pom-pom-like graphene microparticles by spraying graphene oxide droplets into a hot solvent, a process much like deep-frying. The technique could provide “a simple, versatile means to make electrode materials for batteries and supercapacitors, possibly leading to devices with improved energy and power densities.”

Making graphene foams and aerogels is not new; other groups have tried this before, but their results proved unsuitable because of the materials’ bulk, irregularity, and low density, according to co-author Sang-Hoon Park of the Department of Materials Science and Engineering at Yonsei University.

Patel wrote that a few other groups made less bulky graphene nanospheres and microspheres using 3-D templates and techniques such as chemical vapor deposition and freeze-drying and the researchers took this second route. Here’s the departure: While others made spheres looking like hollow balls or wads of crumpled paper, said Patel, the Korean team’s particles resembled pom-poms, containing graphene nanosheets radiating out from the center. Park said the arrangement increased the exposed surface area of the graphene and created open nanochannels that can enhance charge transfer.

What do other scientists think of the approach? Patel reported that Shu-Hong Yu, a chemist and nanoscientist at the University of Science & Technology of China, regarded the deep-frying technique as an important feature of the work. Compared with other methods for making 3-D graphene, it is “direct, simple, and much easier to scale up for industrial applications,” he said. Another advantage, he added, is that the method allows functional nanoparticles to be trapped directly in the microspheres to form nanocomposites.

Summing up, in the words of Engadget, the deep-frying process might be the ticket to better batteries in mobile devices. Graham Templeton of ExtremeTech puts the scientists’ work in the bigger picture: battery technology has become one of the weakest links, if not the weakest, in high-level technology.

“‘Modern’ batteries don’t just hold back consumer electronics through their large size, limited charge, low heat resistance, and hefty expense; they also cost us a lot of energy due to inefficiency,” said Templeton. As for graphene, said Templeton, people are already getting jaded about it: “On the small scale, with extreme difficulty, [it] can be used to do seemingly anything, while on the practical scale, with anything less than physical research budgets to spend, seemingly nothing.” That is where the Korean scientists’ contribution comes in. “Breakthroughs like this one, while specialized, are exactly what we need if the alleged super-material is ever going to come into its own.”

More information: Spray-Assisted Deep-Frying Process for the In Situ Spherical Assembly of Graphene for Energy-Storage Devices, Chem. Mater., Article ASAP. DOI: 10.1021/cm5034244 (full PDF)

To take full advantage of graphene in macroscale devices, it is important to integrate two-dimensional graphene nanosheets into a micro/macrosized structure that can fully utilize graphene’s nanoscale characteristics. To this end, we developed a novel spray-assisted self-assembly process to create a spherically integrated graphene microstructure (graphene microsphere) using a high-temperature organic solvent in a manner reminiscent of deep-frying. This graphene microsphere improves the electrochemical performance of supercapacitors, in contrast to nonassembled graphene, which is attributed to its structural and pore characteristics. Furthermore, this synthesis method can also produce an effective graphene-based hybrid microsphere structure, in which Si nanoparticles are efficiently entrapped by graphene nanosheets during the assembly process. When used in a Li-ion battery, this material can provide a more suitable framework to buffer the considerable volume change that occurs in Si during electrochemical lithiation/delithiation, thereby improving cycling performance. This simple and versatile self-assembly method is therefore directly relevant to the future design and development of practical graphene-based electrode materials for various energy-storage devices.

Insulin-infused venom helps cone snails net prey.

The most venomous animal on the planet isn’t a snake, a spider, or a scorpion; it’s a snail—a cone snail, to be precise. The Conus genus boasts a large variety of marine snails that have adopted an equally diverse assortment of venoms. Online today in the Proceedings of the National Academy of Sciences, researchers report an especially interesting addition to the animals’ arsenal: insulin. According to the paper, this marks the first time insulin has been discovered as a component of venom. Not all cone snails incorporate insulin into their venom cocktail, wonderfully known as the nirvana cabal; the hormone was found only in a subset of the animals that hunt with a netting strategy that relies on snaring fish in their large, gaping mouthparts. Unlike the feeding tactics of some cone snails that hunt using speedy venom-tipped “harpoons,” the mouth-netting strategy is a rather slow process. For it to work, the fish either needs to be very unaware of its surroundings or chemically sedated. Scientists speculate that it’s the insulin that provides such sedation. Snails like Conus geographus actually produce multiple variants of the hormone, some of which, like one called Con-Ins G1, are more similar to fish insulin than to snail varieties. Con-Ins G1 isn’t an exact match for fish insulin, though; it’s a stripped-down version that the team suspects may be missing bits that would let fish detect the overdose and respond. If they’re correct, the snail’s venom may yield insight into the nuances of how insulin is regulated that may extend to humans.


Mass Die-Offs Of Fishes, Birds And Mammals Increasing.

Mass die-offs of hundreds — sometimes hundreds of thousands — of fish, birds and other animals appear to be increasing in both frequency and in the numbers of individuals involved, according to a new study.

The report, by researchers at Yale University and four other schools, found that disease and human-related activities were the top killers in the more than 700 “mass mortality events” since 1940 that were studied. The report examined lethal episodes that affected 2,407 different animal populations.

The research didn’t include the 1999 mass die-off of lobsters in Long Island Sound, but the apparent causes of that deadly event were similar to those cited by the report in many of the other cases of mass deaths.
The 1999 die-off almost wiped out lobsters in most areas of the Sound, and the population still hasn’t recovered. Scientists believe increasing water temperatures from global warming made cold-water-loving lobsters more vulnerable to disease, manmade pollution and pesticides, all factors listed as critical elements in mass mortality cases in the new report.

Samuel Fey, the Yale biologist who co-authored the research study, said the apparently growing numbers involved in such mass die-offs are one of the chief concerns to come out of the study, published in the Jan. 12 issue of the Proceedings of the National Academy of Sciences.

“That was the single biggest surprise for us,” Fey said. “It was a big red flag for us.”

Scientists from Dartmouth College, the University of San Diego, the University of California, and Southern Illinois University also participated in the research project.

Fey said he and the other researchers had expected to see a decline in the average numbers of fishes, birds and mammals killed in mass mortality events. They initially assumed that overall averages would drop because so many more die-offs have been studied and reported on in recent years than in past decades.

Instead, they discovered the numbers of individual members of species dying in such mass events seem to have been continually rising for birds, fish and marine invertebrates, such as sea urchins and lobsters.

There is some good news in the study’s results: mass die-off numbers for individual mammal species appear to be holding steady, while the death tallies for amphibians and reptiles seem to be declining.

Fey noted that far more mass mortality cases have occurred across the globe since 1940 than were covered by the new report. The study was limited to only those events that triggered papers published in scientific journals.

He said more comprehensive studies are needed to get an accurate picture of what’s happening with mass die-offs. Fey said only about 9 percent of scientific papers on mass mortality events reviewed for the study mentioned how a particular die-off affected a species’ overall population.

Fey, 30, has a doctorate in ecology and evolutionary biology from Dartmouth College and was raised in Farmington. He is currently a postdoctoral fellow in the Yale department of ecology and evolutionary biology.

Mass mortality events, where large numbers of an individual species die over a relatively short period, are different from extinctions. But major die-offs can virtually wipe out a species in a specific locale.

Fey said one example was the 1983 event that resulted in an estimated 99 percent of a particular species of sea urchin (Diadema antillarum) being eradicated from the Caribbean Sea. Experts believe that event was caused by a water-borne disease.

In 1988, a virus outbreak in Siberia among Baikal seals, the only freshwater seal species in the world, killed tens of thousands of the seals in Lake Baikal, Fey said. Experts believe there are fewer than 85,000 of those seals remaining, according to the Seal Conservation Society.

The new study found that disease was listed as the cause in 26 percent of mass die-offs. Human activity, primarily pollution, was found to be the primary trigger for 19 percent of big mortality events, while biotoxicity, such as harmful algae blooms, was cited as the third leading cause at 15.6 percent.

Even biotoxicity events can be related to human activities, such as fertilizers that are washed into streams and rivers and estuaries like Long Island Sound, where they can lead to dangerous algae blooms.

Fey said the study demonstrated that there are “so many connections” between different causes of die-offs. The researchers concluded that mass mortality cases are often triggered by “multiple interacting stressors” working together, such as disease, starvation and changes in the environment.

“Some of the patterns observed in our study are consistent with recent climate change,” Fey said. “Notably, our study shows a decrease in mass die-offs associated with cold thermal stress, which is consistent with an observed decrease in the severity of winters.”

Fey said he believes the new study of mass mortality events “lays the foundation” for a better understanding of how die-offs relate to the health of entire species and the environment as a whole.

Fey said he hopes the findings of the report will lead to better research programs to document mass die-offs that are certain to occur “in our uncertain future.”

Elon Musk wants to build space Internet that would one day service Mars

The CEO of Tesla and SpaceX has announced a new US$10 billion plan to beam Internet from space.

Here’s news that makes the prospect of moving to Mars in the distant future far less scary – by the time we get there, it may already have Internet access.

Elon Musk has announced plans to build a network of space-based Internet-beaming satellites, and he believes they’ll form the basis for a connection on Mars, as he told Ashlee Vance from Bloomberg Businessweek.

The idea behind the ambitious project is that having a network of communication satellites orbiting Earth would make Internet connections cheaper, faster and also more accessible to those in even the most remote regions.

In his plan, hundreds of satellites would orbit Earth at an altitude of around 1,200 kilometres – far lower than the communication satellites we currently use – yet they would still be able to transfer data much faster than ground-based cables.

As Vance writes for Bloomberg Businessweek:

“In Musk’s vision, Internet data packets going from, say, Los Angeles to Johannesburg would no longer have to go through dozens of routers and terrestrial networks. Instead, the packets would go to space, bouncing from satellite to satellite until they reach the one nearest their destination, then return to an antenna on Earth.”

He’s not the only one with the goal of bringing Internet to the two-thirds of the world who still aren’t connected – Facebook and Google both have their own impressive projects in the pipelines.

But Musk – the man who has proposed a train that could take you from New York to China in two hours – is the only one taking the Internet beyond Earth’s atmosphere.

“The speed of light is 40 percent faster in the vacuum of space than it is for fibre,” Musk told Vance.
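To get a feel for what that quote means in practice, here is a back-of-envelope sketch in Python. The great-circle distance and the fiber refractive index are illustrative assumptions, not figures from the article, and the calculation ignores routing detours and the satellite up/down legs:

```python
# Rough one-way propagation times: light in vacuum vs. in optical fiber.
# Distance and refractive index below are illustrative assumptions.

C_VACUUM_KM_S = 299_792                  # speed of light in vacuum
FIBER_INDEX = 1.47                       # typical refractive index of silica fiber
C_FIBER_KM_S = C_VACUUM_KM_S / FIBER_INDEX

distance_km = 16_700                     # rough great-circle distance, LA to Johannesburg

t_vacuum_ms = distance_km / C_VACUUM_KM_S * 1000
t_fiber_ms = distance_km / C_FIBER_KM_S * 1000
pct_faster = (C_VACUUM_KM_S / C_FIBER_KM_S - 1) * 100

print(f"one-way, vacuum: {t_vacuum_ms:.1f} ms")   # ~55.7 ms
print(f"one-way, fiber:  {t_fiber_ms:.1f} ms")    # ~81.9 ms
print(f"vacuum is ~{pct_faster:.0f}% faster")     # ~47%, in the ballpark of Musk's 40%
```

Real fiber routes are longer than the great-circle path and pass through many routers, so the actual gap Musk describes would be larger than this idealized comparison suggests.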

And, conveniently, by having the satellites in space, he’ll be ready to service Mars by the time humans get there – something that NASA has already announced it hopes to achieve by 2018.

“It will be important for Mars to have a global communications network as well,” he explains in the interview. “I think this needs to be done, and I don’t see anyone else doing it,” Musk told Vance.

The ambitious project would cost around US$10 billion and take at least five years to get off the ground, but SpaceX is already building the satellites and rockets to set up the system.

One potential complication: a company called OneWeb has already announced plans to do something similar, which means the race for space Internet has effectively begun.

The competition isn’t necessarily a bad thing though – imagine how much better being a first settler on the Red Planet would be if you could use Facebook. And if it’s at a competitive rate? Even better.