Soda Increases Risk of This Female Cancer.


Causes of endometrial cancer haven’t all been identified, but researchers now know at least one potential trigger, particularly in postmenopausal women.

Compared to women who barely drank any sugar-sweetened drinks like soda, those who drank the most were 78 percent more likely to be diagnosed with the most common type of endometrial cancer. Scientists were able to clearly see a correlation: The more sugar-sweetened beverages a woman drank, the higher her risk of endometrial cancer.


“Although ours is the first study to show this relationship, it is not surprising to see that women who drank more sugar-sweetened beverages had a higher risk of estrogen-dependent type I endometrial cancer, but not estrogen-independent type II endometrial cancer,” explains Maki Inoue-Choi, PhD, RD, who led this study as a research associate in the Division of Epidemiology and Community Health of the University of Minnesota School of Public Health in Minneapolis.
“Other studies have shown increasing consumption of sugar-sweetened beverages has paralleled the increase in obesity,” she adds, noting that obese women tend to have higher levels of estrogens and insulin than women of normal weight. “Increased levels of estrogens and insulin are established risk factors for endometrial cancer,” Inoue-Choi says.
So is sugar toxic? This is yet another piece of evidence suggesting it is, especially in the doses most people currently take in. Once reserved for special occasions, sugar is now something people routinely overdose on. That’s not hard to believe, considering some bottles of soda or cans of iced tea contain 50-plus grams of sugar—more than the total anyone should have in a single day.




The new study, published in the journal Cancer Epidemiology, Biomarkers & Prevention, looked at how many sugar-sweetened and sugar-free drinks women drank, including Coke, Pepsi, 7-Up, Hawaiian Punch, lemonade, and other fruit drinks. The researchers also looked at the amount of sweets and baked goods in the women’s diets. While sugar-free soda, sweets, and baked goods didn’t seem to affect a woman’s risk, sugar-sweetened drinks like soda did. The group that drank the most—60½ servings a week—faced a nearly 80 percent higher risk. (To put things in perspective, about 500 out of more than 20,000 women involved in the study developed endometrial cancer.)

Still, giving soda the boot is an easy way to protect yourself—not just from endometrial cancer, but from a ton of other problems, such as blood vessel damage, wrinkles, and type 2 diabetes.


France struggles to break nuclear habit


 Cattenom nuclear power plant

The Fukushima disaster led many countries to rethink their view on nuclear energy. Germany plans to abandon it altogether, but French President Francois Hollande also wants to cut nuclear output sharply – by a third in 20 years. It’s a big ask in a country that now relies on nuclear for 75% of its electricity.

If fully implemented, the pledge would force the closure of up to 20 of the country’s 58 reactors, according to Professor Laurence Tubiana, a former government adviser whom the president asked to facilitate a national debate, paving the way for what is known as la transition energetique.

This would be a huge step, but Tubiana describes it as a “logical evolution”.

France realised that Japan, with its diverse energy mix, had survived economically when all its atomic power stations were shut down. In Japan, before the disaster, nuclear power delivered about 30% of the country’s electricity, but France is hugely dependent not only on nuclear, but on a single generation of nuclear power stations.

It is vulnerable to a “generic risk”, according to Tubiana, where a problem with one reactor could force them all offline for the fault to be fixed. This would cause chaos.

She says the 20 reactors closed in the “transition” could be replaced by renewable energy, which she says would maintain French energy independence and be both “stable and secure”.

So far, however, the government has only earmarked one power station for closure – the ageing plant at Fessenheim on the German border – which prompts some to question the government’s commitment to Hollande’s promise.

The Fukushima disaster prompted Germany to plan the closure of its nuclear power plants by 2022

There is evidently reluctance in cabinet. Industry Minister Arnaud Montebourg is on record as saying that Fessenheim will be the only nuclear power station to close.

On a visit to China in December he reassured his audience that nuclear energy was a “sector of the future” and would continue to contribute “at least 50%” of France’s electricity output.

Another member of Hollande’s Socialist Party, the MP Christian Bataille, says the plan to curb nuclear was hatched as a way of securing the backing of the Socialists’ Green coalition partners in parliament.

He describes nuclear power as the country’s “only national energy source”.

French nuclear industry

  • Supplies 75% of electricity
  • Exports both electricity and nuclear technology
  • Building its first Generation III reactor
  • Country has 58 nuclear reactors operated by Electricite de France (EdF)

“We no longer have coal, we never had much oil and we don’t have any gas. Nuclear energy contributes to our independence,” Bataille says.

“People only reject it if they’re subjected to scaremongering campaigns.”

French nuclear power was the ultimate “grand project” forged in the 1970s and designed to make France as energy-independent as possible. Its reactors have been churning out low-carbon energy at some of the lowest prices in Europe for decades – helping, supporters say, to make French industry competitive.

At a fashionable Parisian street market I spoke to a number of shoppers, with differing views on nuclear power.

“People need energy, and nuclear is necessary to live,” one smartly dressed woman told me. But others had been unsettled by Fukushima and were concerned about both safety and nuclear waste. “It is very useful but it is very dangerous,” said one elderly man. He would prefer renewable energy, he said, but recognised it would take time to switch.

Meanwhile, the economics of nuclear power are changing too.

Protest against Fessenheim – on the German border – which is now earmarked for closure

The safety upgrades forced by Fukushima will cost about 10bn euros (£8bn) and maintenance costs are expected to rise sharply as the nuclear plants age. By the end of 2022, more than one in three French reactors will have been in operation for 40 years or more.

The state-owned utility EDF plans to extend the lives of reactors from 40 to 60 years, but that will cost money too.

Germany’s “Energiewende”

  • The economy is being re-engineered to cope with the closing of all its nuclear power stations and the ramping up of wind and solar power
  • The government provides generous incentives to companies and people who build wind turbines or install solar panels
  • More than half of all the solar panels in the world are in Germany

It’s one reason why the golden age of low-priced electricity in France is over, according to Prof Patrice Geoffron of Paris Dauphine University.

“All the drivers of the electricity price will go up in the future,” he says. “If you hear what is said by the regulator of energy we will be obliged to increase the price by 30% by 2020.”

Independent energy analyst Mycle Schneider says that in this environment, the most expensive renewable energy sources could become more competitive than nuclear in less than five years – which is “tomorrow morning in energy policy,” he says.

Cecile Maisonneuve, a former board member of the state-owned reactor and fuel manufacturer Areva who now heads the energy division of IFRI, a think tank, describes the government’s plan for the transition as “too fast and for the moment… not credible”.

France would fall back on gas, or even coal, she says, with a consequent rise in CO2 emissions. She says Germany has seen a small increase in the use of coal during its transition – though German experts say that is because gas cannot compete with coal on price, and the European Union’s Emissions Trading System is to blame.

Some industry experts say France needs to catch up on other technologies

Professor Tubiana says that, by concentrating on nuclear power, France has slipped behind on rival technologies like wind, solar and biomass, and must now take steps to catch up quickly.

“We were very good 20 years ago with solar concentration,” she says. “We are now nowhere. We concentrated all our efforts on one side.”

If France does not create a market for renewable energy it will never be competitive in the sector, she says – while its nuclear industry could still be powerful in 2050, even under the Hollande plan.

If 50% of electricity continues to be generated by nuclear, that is still an “enormous” figure, she says.

Even if President Hollande’s plan for the transition stalls, it seems clear at least that there will be no further expansion of nuclear in France.

EDF is planning to build two new nuclear reactors at Hinkley in western England with Chinese help, but at Flamanville in Normandy a new reactor of the same EPR design is behind schedule and massively over budget. A second envisaged EPR reactor in France has been shelved indefinitely – and no other new nuclear power stations are planned.

Privacy concerns raised as Google+ makes it possible to send email via name search


Questions raised as new automatically enabled feature in Google+ lets people send emails to strangers without knowing their email address

Google Plus logo and website screen close up. Photograph: Alamy

Google is integrating its Gmail service and Google+ social tracking network so that people without your Gmail address can send you emails by a name search.

The move has raised questions about its privacy implications, after similar moves with Gmail and its then-new Google Buzz social network in 2010 led to a row over alleged privacy invasion. Those in turn led to Google being bound to a 20-year privacy oversight by the US Federal Trade Commission.

Google has also made the change opt-out, so that users will have to change their settings to prevent unknown people emailing them. The senders will not see the email address of the person they are sending the message to unless the recipient replies.

Announcing the move in a blogpost, Google product manager David Nachum wrote:

Have you ever started typing an email to someone only to realize halfway through the draft that you haven’t actually exchanged email addresses? If you are nodding your head ‘yes’ and already have a Google+ profile, then you’re in luck, because now it’s easier for people using Gmail and Google+ to connect over email.

Marc Rotenberg, the executive director of non-profit Electronic Privacy Information Center, told Reuters that the new feature was “troubling” and added: “There is a strong echo of the Google Buzz snafu”.

Buzz created an uproar because it tried to create a social network built out from the email contacts that people had. One woman who had separated from her abusive ex-husband said that it revealed the identity of her new boyfriend to him, potentially endangering her and him. Google’s executive chairman Eric Schmidt later said “nobody was harmed” by the moves.

Google says that Google+, set up in June 2011, has 540m “active” users, but has been vague about how it counts activity. Analysts have suggested that Google+ is not a social network aiming to compete with Facebook, but instead a system for collecting more information about people’s web use. The number of “active” users will have increased since Google made it obligatory in November 2013 to use Google+ to leave a comment on YouTube.

Google has recently faced criticism for over-tight integration of Google+ into its products after one transgender user of an early version of its newest version of Android discovered that Google+ had been integrated into its chat system and had sent a message to someone under the new female name she was adopting rather than the male name the intended recipient knew her by. The woman had not expected the system to search Google+ for a contact name – but it did.

Facebook also allows people to send messages through a name search, but does not reveal any information such as emails if the person replies.

Google says it will be rolling out the system over the next few weeks and will automatically email all Gmail users telling them of the changes. It is not possible to create a Gmail account without having a Google+ account.

The dark side of the moon is turquoise, say astronomers


Measurements from a telescope in Hawaii show blue light reflected from Earth turns turquoise when it bounces off the moon

I’ll see you on the turquoise side of the moon: earthlight is shifted towards the red end of the spectrum when reflected from the moon’s dark side. Photograph: Alamy

In a demonstration of the power of science to ruin a perfectly respectable work of art, researchers have discovered the colour of the dark side of the moon.

Measurements from a telescope in Hawaii mean that pedants may now argue that, technically speaking, if one wanted to be entirely accurate, the side of the moon referred to in Pink Floyd’s 1973 album The Dark Side of the Moon should really be described as “turquoise”.

The revelation comes from two years of measurements by an international team of astronomers who installed a telescope and a sensitive camera at the Mauna Loa Observatory in Hawaii, run by the US National Oceanic and Atmospheric Administration.

The dark side of the moon is not the same as the far side, which gets as much sunlight as the side facing us. The dark side is not lit directly by sunlight, but by light reflected from Earth. It is much fainter, and best seen around the time of the new moon.

“This is sunshine that struck the Earth, was coloured by the Earth, was reflected up to the moon, struck the moon, and then came back to us,” said Peter Thejll, a senior scientist at the Danish Meteorological Institute in Copenhagen and first author on the study.

Images of Earth from space show clearly that the planet looks blue. But when this blue light strikes the moon, the light that’s reflected back is turquoise.

“Astronauts standing on the moon and looking up at the Earth described it as a blue marble,” said Thejll. “Having not been into space myself, I don’t know what they meant exactly, but once that blue light strikes the moon’s surface, it shifts to a blue-green colour. We can call it turquoise.”

Waxing crescent moon with earthshine reflected from the ‘dark side’. Note the halo around the bright, light side. Photograph: David Nunuk/Corbis

To measure the colour of the dark side of the moon, the astronomers had first to screen out light from the bright side that had been scattered by Earth’s atmosphere. This scattered light produces a shifting halo around the moon and messes up measurements of the dark side. The same effect produces the familiar glow around street lamps seen from a distance.

The scientists snapped pictures of the moon through the telescope using two different colour filters. Amid hundreds of images, they found a pair taken of the waning crescent moon on 18 January 2012 that had exactly the same halo. When they subtracted one image from the other, the halo disappeared and they could measure the true colour of the moon’s dark side.
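As a rough illustration of that subtraction step, here is a minimal sketch (not the team’s actual pipeline) assuming two co-registered frames taken through different colour filters whose scattered-light haloes happen to be identical: subtracting one from the other cancels the halo and leaves the colour signal on the earthshine-lit side. The array shapes, mask, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch of the halo-cancellation idea described above.
# All inputs are synthetic/hypothetical, not the observatory's data.
import numpy as np

def dark_side_signal(frame_a: np.ndarray,
                     frame_b: np.ndarray,
                     dark_side_mask: np.ndarray) -> float:
    """Subtract one filter frame from the other so that an identical scattered-light
    halo cancels, then average the residual over the earthshine-lit pixels."""
    residual = frame_a - frame_b          # matching halo terms cancel
    return float(residual[dark_side_mask].mean())

# Synthetic example: a shared halo plus a small colour difference on the dark side.
rng = np.random.default_rng(42)
halo = rng.random((512, 512)) * 10.0      # identical halo in both frames
mask = np.zeros((512, 512), dtype=bool)
mask[200:300, 100:200] = True             # stand-in for the dark-side pixels

frame_a = halo.copy()
frame_b = halo.copy()
frame_a[mask] += 1.2                      # filter A slightly brighter there

print(dark_side_signal(frame_a, frame_b, mask))   # prints ~1.2, the colour signal
```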

“We know how unlikely it is that the haloes should cancel out, and yet we found a pair where they did. That says something about the conditions on the night when we took those pictures. Something was unique so the two haloes were identical and they cancelled and frankly we don’t know why,” Thejll said.

“This is the first accurate colour measurement of the dark side of the moon,” Thejll said. The last attempt was made in 1965 from an observatory near Bloemfontein in South Africa. Thejll’s study has been accepted for publication in the journal Astronomy and Astrophysics.

Thejll said that observations of the dark side of the moon can help scientists to monitor the colour of the Earth. This could be useful for assessing climate change models, some of which predict changes in cloud cover.

“We have measured the same colour now as was seen in the 1960s and that might say something about what has happened to the Earth in the meantime, and it’s consistent with there being no change in the amount of cloud,” Thejll said. “But this was just one measurement, and the colour of the Earth changes on an hourly basis.”

2013 in review


The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 29,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 11 sold-out performances for that many people to see it.

Click here to see the complete report.

Dear readers,

Thank you all for your continuous support, suggestions and appreciation.

Happy reading.

Feel free to write to me.

email oncozene@gmail.com/gladeolie@live.com

blog owner,

Dr Chandan

Gadolinium: The MRI Agent Linked to Brain Abnormalities.


Magnetic resonance imaging (MRI) is one of the better choices if you need a diagnostic imaging procedure performed. Unlike CT scans or X-rays, an MRI does not use ionizing radiation that may cause DNA damage or cancer.

Story at-a-glance

  • The use of gadolinium-based contrast agents for enhanced MRIs has been linked to hypersensitivity in certain brain regions, with unknown consequences
  • The use of gadolinium-based contrast agents is linked to the development of Nephrogenic Systemic Fibrosis (NSF) in patients with severe kidney disease
  • The long-term effects of gadolinium-based contrast agents are unknown; only use contrast agents if they are absolutely necessary (often they are optional and an MRI can be effectively performed without their use)

Instead, a strong magnetic field and radio waves produce cross-sectional images of your organs and other internal body structures. In some cases, however, a gadolinium contrast medium is used to make the images clearer (this is typically called an enhanced MRI).

There are risks involved when contrast agents are used, including potential brain abnormalities revealed by a new study, so it’s important to use extreme caution and only get an enhanced MRI if it is absolutely necessary.


Gadolinium-Based Contrast Agents Linked to Brain Hypersensitivity

Gadolinium is a paramagnetic metal ion that moves differently within a magnetic field. When used during an MRI, it may make certain tissues, abnormalities, or diseases more clearly visible.

However, because the gadolinium ion is known to be toxic, it is chemically bonded with non-metal ions when used during MRIs to allow it to be eliminated from your body before it is released in your tissues.

For the first time, a new study has shown that the gadolinium may not be immediately eliminated and may instead persist in your body.1 The study compared brain images of patients who had undergone six or more contrast-enhanced MRI brain scans with those of patients who had received six or fewer unenhanced scans.

It revealed areas of high intensity, or hyperintensity, in two brain regions (the dentate nucleus (DN) and globus pallidus (GP)), which correlated with the number of gadolinium-based enhanced MRIs.

It’s unknown at this time what the hyperintensity may mean; however, hyperintensity in the DN is associated with multiple sclerosis. It’s now being suggested that this hyperintensity may be the result of the large number of enhanced MRI scans often received by multiple sclerosis patients. Hyperintensity of the GP, meanwhile, is linked with liver dysfunction. The study’s lead author noted:2

“Hyperintensity in the DN and GP on unenhanced MRI may be a consequence of the number of previous Gd-CM administrations… Because gadolinium has a high signal intensity in the body, our data may suggest that the toxic gadolinium component remains in the body even in patients with normal renal function.”

Gadolinium-Based Contrast Agents Also Linked to Life-Threatening Skin Thickening

Among patients with severe kidney disease, the use of gadolinium-based contrast agents is linked to the development of Nephrogenic Systemic Fibrosis, or NSF. NSF was first identified in 1997 and while its cause is unknown, it’s only been reported in those with kidney disease.

NSF causes skin thickening that can prevent bending and extending your joints. It can also develop in your diaphragm, thigh muscles, lung vessels, and lower abdomen. Along with causing decreased mobility of joints, NSF can be fatal.

Because of this connection, the US Food and Drug Administration (FDA) requested that the manufacturers of all five gadolinium-based contrast agents (Magnevist, MultiHance, Omniscan, OptiMARK, and ProHance) add a boxed warning and a new Warnings section to their labels to describe the risk of developing NSF.3

I recommend that everyone use caution with gadolinium-based contrast agents and only use them when absolutely essential. Even if you’re healthy, these contrast agents may cause side effects like life-threatening allergic reactions, blood clots, blood vessel irritation, and skin reactions, including hives, itching, and facial swelling.

However, if you have kidney disease, using caution is particularly important. Often the use of contrast agents is optional and an acceptable MRI can be conducted without the use of a contrast. If you have been administered a gadolinium-based contrast agent and you have kidney disease, be on the alert for the following symptoms of NSF, and contact your health care provider immediately if you experience them:

  • Swelling, hardening, and tightening of your skin
  • Reddened or darkened patches on the skin
  • Burning or itching of your skin
  • Yellow raised spots on the whites of your eyes
  • Stiffness in your joints; problems moving or straightening arms, hands, legs, or feet
  • Pain deep in your hip bones or ribs
  • Muscle weakness

Other MRI Risks You May Not Know About

As mentioned, MRIs are preferable to CT scans because they do not use ionizing radiation. However, it’s still wise to minimize their use as much as possible, in part because the effects of exposure to MRIs’ strong magnetic field are largely unknown.

Research has shown that there are biological effects in the human body, however, including effects on the retina, pineal gland, and some cells in the paranasal sinuses.4 Time-varying magnetic fields may also interfere with your nerve cell function and muscle fibers, while MRIs also produce acoustic noise that has been known to cause temporary (and, rarely, permanent) hearing loss.

Finally, due to the strong magnetic field produced, an MRI can become deadly if metal objects in the room are not properly secured or you have medical devices such as a pacemaker in your body. There is at least one report of a child being killed during an MRI due to an unsecured metal oxygen tank.5

The MUST-KNOW Rule if You Are Getting an MRI

But the KEY here is to avoid using MRI scans with contrast unless ABSOLUTELY necessary. Many times, physicians will order these tests just to be complete and cover their butts from a legal perspective. If that is the case, simply refuse to have the test done with contrast. If necessary, consult with other physicians who can provide you with a different perspective.

If You Need an MRI, It Pays to Shop Around

While I always recommend your being very judicious in your use of medical diagnostic procedures, there are certainly times when it is appropriate and useful for you to have a certain test. But many people don’t realize that the fees for these procedures can vary tremendously, depending on where they are performed. Hospitals tend to be the MOST expensive option for diagnostics and outpatient procedures—sometimes by an enormous margin.

Freestanding diagnostic centers are alternative places to obtain services, such as lab studies, X-rays, and MRIs, often at a fraction of the cost charged by hospitals. You may have driven by some of these centers in your city. Private imaging centers are not affiliated with any particular hospital and are typically open for Monday through Friday business hours, as opposed to hospital radiology centers that require round-the-clock staffing.

Hospitals often charge higher fees for their services to offset the costs of their 24/7 operations. Hospitals also may charge exorbitant fees for high-tech diagnostics, like MRIs, to subsidize other poorly reimbursed services. And, hospitals are allowed to charge Medicare and other third-party insurers a “facility fee,” leading to even more price inflation. So, if you do find that you need an MRI, don’t be afraid to shop around. With a few phone calls to diagnostic centers in your area, you could save up to 90 percent over what a hospital would charge for the same service.

What You Should and Shouldn’t Worry about after the Fukushima Nuclear Meltdowns.


The old saying goes where there’s smoke, there’s fire, but steam is a different story, even in the case of a nuclear power plant that suffered multiple meltdowns. Despite fresh worries about a new meltdown at the Fukushima Daiichi complex in Japan, the steam that set off this concern is merely a result of atmospheric conditions—and a reactor that is still hot from having melted down in 2011.

Think of it as seeing your breath in cold weather. The damaged reactors at Fukushima are still hot, nearly three years after the disaster, thanks to the ongoing radioactive decay of the damaged nuclear fuel. This is why used nuclear fuel sits in cooling pools of water for years after time spent fissioning in a reactor. The radioactive detritus at Fukushima is still throwing off roughly one million watts worth of heat, according to Fairewinds Energy, a nuclear safety advocacy group based in Burlington, Vt. That heat turns water into steam—and when the air is cold enough, as it is in winter in Japan, that steam is visible. “This also happened last year at this time, and periodically since the tsunami in 2011,” notes David McIntyre, a spokesman for the U.S. Nuclear Regulatory Commission (NRC). “We are in touch with the Japanese regulator and TEPCO [the utility responsible for Fukushima], and from what we’ve seen and heard there is no reason to suspect that this steam is an indicator of anything bad happening.”

Nor is this plume of steam—sometimes visible, sometimes not—only apparent in winter. When the atmospheric conditions are right, with relatively low temperature and high humidity, the steam is visible even in summer, as happened in July 2013. It is fortunate that physics suggests such steam is nothing to worry about, because it is impossible to check firsthand. Due to the meltdown in that reactor, radiation levels are too high for any human to enter without receiving an unacceptable dose.

What about the fishes?
Another perennial concern is that the water contaminated with radioactive particles still leaking from the stricken nuclear power plant site is poisoning Pacific Ocean fish and other seafood. There is no doubt that ingesting radionuclides is one of the worst forms of radiation exposure, because it continues for a long period of time. But, with the exception of bottom-feeding fish and sessile (immobile) filter feeders caught in the immediate vicinity, any radionuclides from Fukushima have been diluted by the vastness of the Pacific to insignificant quantities. The extra radionuclides from Fukushima are simply not enough to create a dose large enough to cause any human health effects outside the immediate vicinity of the stricken nuclear power plant.

Nor is the radioactive contamination from Fukushima the cause of changes to Pacific sea-bottom life observed in recent years off the U.S. west coast, as the marine scientists at Deep Sea News recently noted. Those shifts most likely stem from the copious quantities of carbon dioxide spewed by fossil fuel–fired power plants that are changing the climate and, thus, the tiny plants known as phytoplankton that serve as the base of the oceanic food chain.

When it comes to radiation, the nuclear weapons testing conducted from the 1940s to the 1980s contributed orders of magnitude more radioactivity to the oceans than Fukushima (even when combined with Chernobyl, a much larger nuclear catastrophe). There is also an estimated 37 x 10^18 becquerels worth of radioactivity in the oceans from naturally dissolved uranium in seawater anyway, which some view as a future nuclear fuel source but is not generally considered a health risk. (A becquerel measures the rate of radiation emission.) And there are other naturally occurring radioactive elements in seawater as well, such as polonium.


That means the tuna caught in the Pacific have always been naturally radioactive (and pose less risk than dental x-rays, as the Woods Hole Oceanographic Institution notes). Or as marine scientist Ken Buesseler of Woods Hole put it in a scientific paper on the subject published in 2012, “though [cesium] isotopes are elevated 10 to 1,000 [times] over prior levels in waters off Japan, radiation risks due to these radionuclides are below those generally considered harmful to marine animals and human consumers, and even below those from naturally occurring radionuclides.”

Marine scientists have calculated that, based on all the radioactive particles released (or leaking) from Fukushima, a dose due to this most recent nuclear accident would add up to a total of roughly one microsievert (a unit of radiation exposure) of extra radiation—roughly one tenth the average daily dose most Americans experience, one fortieth the amount from a cross–North America flight and one one-hundredth the exposure from a dental x-ray. This also means that no one in the U.S. should be taking potassium iodide pills, especially because there has been no radioactive iodine issuing from Fukushima for several years now. (Radioactive iodine has a half-life of just eight days, meaning that all of it was gone within three months of the March 2011 nuclear accident in Japan.)
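For a sense of scale, the standard radioactive-decay formula bears out that last point: with the eight-day half-life quoted above, essentially none of the original iodine-131 remains after three months,

$$
\frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/T_{1/2}}, \qquad
\frac{N(90\ \text{days})}{N_0} = \left(\frac{1}{2}\right)^{90/8} \approx 4\times10^{-4},
$$

i.e., well under 0.1 percent of the original amount.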

Likewise, the debris from Fukushima that has begun to arrive on U.S. shores is also relatively benign. In fact, any radiation from the flotsam is likely to have far less an impact than the novel species it may carry with it across the Pacific, which could potentially spark a biological invasion.

What to really worry about
None of this is to say that radiation from Fukushima does not pose any dangers or that its spread should not be monitored. But the ongoing serious causes for concern revolve around the dangerous and delicate work of cleaning the nuclear disaster site—and the real health risks are being faced by the people in the immediate vicinity of the stricken nuclear reactors.

There is radioactive rubble to contain or dispose of, undamaged fuel rods to be safely removed (and discarded), and an unknown amount of melted nuclear fuel to contain. Contaminated soil in regions surrounding the nuclear power plant must either be removed or the area should be turned into a de facto nuclear “park,” much like Chernobyl’s exclusion zone in Ukraine and Belarus or the Rocky Flats National Wildlife Refuge in Colorado, the former home of a nuclear-bomb-making facility. Then there’s the more than 100,000 metric tons of cooling water and groundwater contaminated with radioactive particles being stored in tanks at the site, some of which is leaking, reaching the sea naturally, or being released intentionally to prevent a flood.

In addition, if the melted nuclear fuel proves bad enough—like Chernobyl’s lethal mass of molten core known as the “elephant’s foot”—it will have to be entombed for a number of years rather than removed, because of radiation risk from what is essentially a cooled shell of ceramic armor surrounding a highly radioactive core that remains hot and is still undergoing radioactive decay.

Bottom line: until Fukushima has a sarcophagus entombing it or all the nuclear fuel has been carted away, expect periodic reports of steam for years to come. But don’t worry about it reaching the U.S. As the NRC’s McIntyre notes: “Advice for people on the west coast to buy radiation suits because of this steam is simply irresponsible.”

What Is This “Polar Vortex” That Is Freezing the U.S.?


As I write these words, temperatures across half the U.S. are plummeting like a rock. Extreme lows are forecast by tonight: -32 degrees Fahrenheit in Fargo, N.D.; -21 degrees F in Madison, Wisc.; -15 degrees F in Chicago and Indianapolis, according to the National Weather Service. Wind chills will reach a bizarre 60 degrees below zero F in some places, freezing exposed skin within one minute. That number is more typical for Mars—at night, according to the Curiosity rover NASA has free-wheeling over there.

As each hour passes, more and more television and radio reporters are attributing the insane cold to a “polar vortex” up in northern Canada. Vortex, yes, but upper Canada? Not exactly. One forecaster called the beast a hurricane in the Arctic, which is dramatic but wrong. So what is this mysterious marvel and why is it invading America?

The polar vortex is a prevailing wind pattern that circles the Arctic, flowing from west to east all the way around the Earth. It normally keeps extremely cold air bottled up toward the North Pole. Occasionally, though, the vortex weakens, allowing the cold air to pour down across Canada into the U.S., or down into other regions such as Eastern Europe. In addition to bringing cold, the air mass can push the jet stream—the band of wind that typically flows from the Pacific Ocean across the U.S.—much farther south as well. If the jet stream puts up a fight, the moisture it carries can fall out as heavy snow, which atmospheric scientists say is the circumstance that caused the February 2010 “snowmageddon” storm that shut down Washington, D.C.

But why does the vortex weaken? Now it gets interesting. More and more Arctic sea ice is melting during summer months. The more ice that melts, the more the Arctic Ocean warms. The ocean radiates much of that excess heat back to the atmosphere in winter, which disrupts the polar vortex. Data taken over the past decade indicate that when a lot of Arctic sea ice disappears in the summer, the vortex has a tendency to weaken over the subsequent winter, if related atmospheric conditions prevail over the northern Atlantic Ocean. The situation looks something like that shown in the graphic below. (For a full explanation, see the Scientific American article that accompanies the graphic.)

Although the extent of summer sea ice in the Arctic varies year to year, overall it has been disappearing to a notable degree since 2007 and it is forecast to continue to vanish even further. That could mean more trouble for the polar vortex, and more frigid outbreaks—a seeming contradiction to “global warming,” perhaps, but not for “global weirding,” also known as climate change.

 

A New Method to Measure Consciousness Proposed.


Leonardo Da Vinci, in his Treatise on Painting (Trattato della Pittura), advises painters to pay particular attention to the motions of the mind, moti mentali. “The movement which is depicted must be appropriate to the mental state of the figure,” he advises; otherwise the figure will be considered twice dead: “dead because it is a depiction, and dead yet again in not exhibiting motion either of the mind or of the body.” Francesco Melzi, student and friend to Da Vinci, compiled the Treatise posthumously from fragmented notes left to him. The vivid portrayal of emotions in the paintings from Leonardo’s school shows that his students learned to read the moti mentali of their subjects in exquisite detail.


Associating an emotional expression of the face with a “motion of the mind” was an astonishing insight by Da Vinci and a surprisingly modern metaphor. Today we correlate specific patterns of electrochemical dynamics (i.e., “motions”) of the central nervous system with emotional feelings. Consciousness, the substrate for any emotional feeling, is itself a “motion of the mind,” an ephemeral state characterized by certain dynamical patterns of electrical activity. Even if all the neurons, their constituent parts and neuronal circuitry remained structurally the same, a change in the dynamics can mean the difference between consciousness and unconsciousness.

But what kind of motion is it? What are the patterns of electrical activity that correspond to our subjective state of being conscious, and why?  Can they be measured and quantified? This is not only a theoretical or philosophical question but also one that is of vital interest to the anesthesiologist trying to regulate the level of consciousness during surgery, or for the neurologist trying to differentiate between different states of consciousness following brain trauma.

Recently, Casali et al. have presented a quantitative metric. It provides, according to the authors, a numerical measure of consciousness, separating vegetative states from minimally conscious states. The study provides hints of being able to identify the enigmatic locked-in state, in which the subject is conscious but is unable to communicate with the external world due to motor deficits. What is most interesting is the claim that the metric, being an objective measure, provides scientific insight into consciousness.

Their metric, like other existing clinical measures of consciousness, is based on electroencephalography (EEG), in which voltages recorded from electrodes placed on the scalp provide a coarse picture of neural activity in the brain. EEG can be used to measure either ongoing brain activity or activity evoked by an external stimulus. In Casali’s case, the activity in question is evoked directly in the brain using a transient magnetic field (transcranial magnetic stimulation, or TMS). The transient field generates an electric field in a particular region of the brain, by Faraday’s law of induction, a bit like attaching a battery to the neural circuitry. This causes currents to flow in the brain, not just in the stimulated region, but in other regions connected to it as well. The spatial and temporal patterns of these currents are then inferred from the EEG measurements and quantified to produce the metric.
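For reference, the physical relationship invoked here is Faraday’s law of induction: a magnetic field that changes in time induces a circulating electric field, which in conductive brain tissue drives the evoked currents,

$$
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}.
$$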

The novelty in the study lies in the method used to quantify the spatiotemporal distribution of current, which is also the basis of the theoretical claims. The idea is that when the brain is unconscious, the evoked activity is either localized (the authors call this “lack of integration”), or widespread and uniform, as might be expected during slow wave sleep or epileptic seizures (“lack of differentiation”). The conscious state on the other hand is supposed to correspond to a distributed, but non-uniform spatiotemporal pattern of current sources. The authors apply a standard data compression scheme (the Lempel-Ziv algorithm, which is used for example in the GIF image format) to distinguish between the two scenarios. The degree of compressibility of the current distribution as inferred from EEG is the consciousness metric they propose.
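To make the compressibility idea concrete, here is a minimal Python sketch of a Lempel-Ziv-style complexity score applied to a binarized, synthetic activity pattern. It is not the authors’ actual pipeline (which involves source modelling, statistical thresholding, and a specific normalization); the binarization rule, normalization, and function names below are illustrative assumptions.

```python
# Sketch only: LZ78-style phrase counting as a stand-in for the compressibility
# measure described above. All data here are synthetic.
import numpy as np

def lz_complexity(sequence: str) -> int:
    """Count distinct phrases in a simple LZ78-style parsing of the sequence."""
    phrases, phrase = set(), ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases)  # a trailing repeat of an earlier phrase is ignored

def compressibility_score(activity: np.ndarray) -> float:
    """Binarize a (sources x time) activity pattern and score how incompressible
    the resulting bit string is (higher = less compressible)."""
    binary = (activity > activity.mean()).astype(int)
    bits = "".join(map(str, binary.ravel()))
    n = len(bits)
    # Normalize by the expected complexity of a random string of the same length.
    return lz_complexity(bits) * np.log2(n) / n

rng = np.random.default_rng(0)
uniform = np.ones((20, 200))                  # "lack of differentiation"
distributed = rng.random((20, 200))           # distributed, non-uniform pattern
print(compressibility_score(uniform))         # low score for the uniform pattern
print(compressibility_score(distributed))     # markedly higher score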

The scientists report that their measure performs impressively in distinguishing states of consciousness within subjects, as well as across subjects in different clinically identified consciousness stages.  These promising results will no doubt attract further study. However, the claim that the measure is theoretically grounded in a conceptual understanding of consciousness deserves a closer look. It is tempting to think that a concretely grounded clinical study of consciousness naturally advances our scientific understanding of the phenomenon, but is this necessarily the case?

It is common in medicine to see engineering-style associative measurements, measurements which aid pragmatic actions but do not originate from a fundamental understanding. Physicians in antiquity were able to diagnose diabetes mellitus (etymologically “sweet urine”, a reference to this original diagnostic method), without any particular insights into the underlying pathology. Clinical utility is not automatically a guarantee of scientific understanding.

There is reason to be cautious even in clinical terms. Some previous attempts to numerically quantify consciousness have proven problematic, a serious matter since awareness during surgery could lead to real suffering. An anesthesiologist cautions in a commentary not to “trust the BIS or any other monitor over common sense and experience.”  A human expert still remains the ultimate arbiter of the state of consciousness of another human.  This is unlikely to change soon.

There are both practical and conceptual hurdles to developing a “consciousness metric.”  In practical terms, we have very little access to the details of the neuronal dynamics in the human brain. DARPA, not shy of ambitious technical challenges, has limited itself to 200 electrodes in a recent call for proposals to directly record from and stimulate the human brain for deep brain stimulation therapy. That is about one billionth of the estimated number of neurons in the brain. The EEG provides a very low capacity, indirect measurement channel into the brain. If we can’t measure the dynamics of the brain neurons in any detail, this could limit any attempt to quantify consciousness.

However, it is theoretically possible that even a limited measurement channel could carry the necessary information. We are looking for a categorical judgment between conscious and unconscious states, a single bit of information that can be solicited from a conscious and communicative subject in an eye-blink or a nod of the head. The conceptual hurdle is the more significant one. The defining characteristic of the conscious state is that of subjective, first person awareness, which fundamentally militates against objective measurements by an independent observer, who can have no access to the primary phenomena except through the subjective report of the conscious individual. It may be possible (and useful) to obtain better and better correlative measurements of this subjective report; but do the measurements themselves shed any light into the phenomenon of consciousness?

To clarify the underlying issues, consider a Turing-like test for consciousness metrics. If a measure of consciousness is to have scientific status, it should not ascribe a high degree of consciousness to a passive, inanimate system at thermodynamic equilibrium. Otherwise we are left with some kind of pan-psychic notion of consciousness. Nevertheless, a simple thought experiment shows that it would be easy to construct such a system for the metric under discussion.

The measure in question relies on the spatiotemporal patterns of currents evoked by a transient magnetic field. However, Maxwell’s equations dictate that a transient magnetic field will generate a pattern of currents in any chunk of matter – matching up some distribution of those evoked currents is simply a matter of the material properties. Consider for example a network of resistors, capacitors and inductors with circuit time constants tuned to be in the hundred-millisecond range (to match EEG timescales). A radio antenna could be used to detect the changing magnetic field and absorb its energy. It should not be difficult to produce a circuit arrangement that produces a transient, spatiotemporally non-uniform current distribution that is adequately incompressible, and therefore fools the device into producing a high consciousness score.
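Hundred-millisecond time constants are easy to reach with ordinary components; for instance (illustrative values only), a single RC stage gives

$$
\tau = RC = (100\ \text{k}\Omega)\times(1\ \mu\text{F}) = 0.1\ \text{s}.
$$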

One could also ask if the metric helps us answer a basic evolutionary question: can it differentiate organisms into “conscious” and “non-conscious” categories? While most neuroscientists would not hesitate to ascribe consciousness to vertebrate animals or to invertebrates with complex brains (think octopus or honeybee), they would hesitate when it comes to the invertebrates with simpler nervous systems (are jellyfish conscious? How about sponges?). Since the methodology under discussion has been prepared with humans in mind, and ultimately depends on correlating with subjective reporting, it is difficult to see how it could be extended across the phylogenetic tree in a way that would help resolve these basic science questions about consciousness.

Where to look for measures of consciousness that advance our scientific understanding? Most neuroscientists would agree that consciousness is associated specifically with animal nervous systems (not trees or rocks). Rather than look generically for abstract mathematical descriptions of consciousness, we may need to specifically study the detailed architecture of brain systems involved in arousal, attention, and so on. Complex animal nervous systems have presumably evolved consciousness because it has some important utility. If the architecture of brain systems involved in arousal shows convergent evolution between invertebrates and vertebrates, this could give us important scientific insights into consciousness as a biological phenomenon. Better neurobiological insights into consciousness could in turn generate advances in clinical measures.

We have come a long way since Da Vinci, but human observers, in the form of teams of expert physicians, remain essential to judging the subtleties of the “motions of the mind” that we call consciousness. No matter how sophisticated our tools, consciousness is still a core mystery with ample scope for conceptual breakthroughs and creative thinking.