Bill Gates is working on a single-dose, long-lasting contraceptive for women.


No more remembering to take the pill every day.

The Bill & Melinda Gates Foundation has made headlines after providing a US$5 million grant to help researchers at the Oregon Health & Science University create a long-lasting form of birth control that doesn’t require surgery.

According to the foundation’s grant announcement, the funding will be used to “develop additional safe, effective, acceptable and accessible methods of permanent or very long-acting contraception that will fill an unmet need for women who have reached their desired family size and do not wish to become pregnant again”.

It’s pretty awesome news for women who’ve had as many children as they want and either have to take the pill every day, use condoms, regularly rotate hormonal implants until they hit menopause, or convince their partner to get a vasectomy. But the grant has been met with criticism from pro-life organisations because of concerns that such a long-term form of contraceptive could be used to control who does and doesn’t reproduce.

Jeffrey Jensen, a gynaecologist who leads contraceptive research at the Oregon Health & Science University in the US, refutes this claim. “My goal is very simple: to make every pregnancy planned and highly desired,” he told Elizabeth Hayes from the Portland Business Journal, citing a study that found half of Ugandan women no longer want to become pregnant, but only 2 percent can access permanent contraception.

Jensen is currently testing whether an FDA-approved foam that’s used to treat varicose veins could also work as a long-lasting contraceptive, and so far results in primates studies look promising. His group has also just finished accepting applications from researchers around the world to explore other ideas for contraceptive technology.

This isn’t the only long-term solution Bill and Melinda are looking into. Last year their foundation announced that they had provided US$6.7 million in funding to tech company MicroCHIPS to develop a remote-controlled contraceptive implant. The implant turns on and off the release of contraceptive hormones, and can last up to 16 years, essentially eliminating the need to take daily pills or change implants every few years. The device is expected to be on the market by 2018.

“The ability to turn the device on and off provides a certain convenience factor for those who are planning their family,” Robert Farra, president of MicroCHIPS and a researcher at MIT, told Dave Lee from the BBC last year.

The chip, which measures just 2 cm x 2 cm x 0.7 cm, works by delivering around 30 micrograms of the birth control hormone levonorgestrel into the bloodstream every day via a small electrical charge – but it can be switched on and off at any time, allowing the user to control when they get pregnant.

While the implantable delivery technology is ready to go, the researchers are now working on making sure the system is secure.

“Someone across the room cannot re-program your implant,” said Farra. “Then we have secure encryption. That prevents someone from trying to interpret or intervene between the communications.”

The Gates Foundation aren’t just focussed on women, either. They’re also donating money to the development of a condom that actually feels good, as well as looking into ways to get male chemical contraceptives on the market.

After 55 years of an industry dominated by the contraceptive pill, we’re pretty excited that science may finally be on the verge of providing some much-needed options to allow women to take control of their reproductive choices.

Stop taking statin drugs – high cholesterol leads to longer life


High cholesterol levels are believed to lead to heart conditions and early death. Statin drugs to lower LDL cholesterol are prescribed to more than 13 million Americans, and almost all men over the age of 60. Research published in the Annals of Nutrition & Metabolism in April 2015 now shows that, as you age, having high cholesterol is beneficial. The research, which was conducted in Japan, showed that people with the highest cholesterol levels had the lowest mortality rate from heart disease. The report states, “mortality actually goes down with higher total or low density lipoprotein (LDL) cholesterol levels, as reported by most Japanese epidemiological studies of the general population.”

What is cholesterol?

Cholesterol is a fat-soluble nutrient. It is soft and waxy and is essential for the human body. Though recognized as contributing to atherosclerosis, cholesterol is also responsible for many important biological functions in the body. The human brain cannot function without cholesterol. Cholesterol is also important for the production of steroid hormones. Cholesterol helps reduce stress and may even be a treatment for multiple sclerosis (MS), as the body needs cholesterol to build the myelin sheath that protects the nerves.

Cholesterol deficiency

People who have a genetic deficiency in cholesterol production have a disease called Smith-Lemli-Opitz syndrome, or SLOS. The condition is recessive, so both parents must carry the gene for it to be passed on to a child. People who have low or no cholesterol suffer from autism, vision problems, lower immunity and increased infections, and difficulty digesting food. Those born with no ability to make cholesterol can also have physical deformities in their hands, feet, or internal organs.

Diabetes and cholesterol

Those with diabetes tend to have too much of the bad type of cholesterol and not enough of the good type. This can lead to heart disease. The condition is known as diabetic dyslipidemia. In addition to heart disease, diabetics are then prone to atherosclerosis, in which the arteries become clogged with the fat, blocking blood vessels and damaging blood flow. Insulin resistance is linked to diabetic dyslipidemia, so diabetics need to be aware of their cholesterol levels and take special care with their diets.

Why is cholesterol important?

Cholesterol is needed by every cell in the body because it is part of the makeup of the cell membrane, where it helps regulate how the membrane’s components interact with one another. Without cholesterol, your body can’t make bile acids, leading to poor digestion. The sex hormones estrogen and testosterone are also made with the help of cholesterol, and even the production of vitamin D relies on it. Brain cells need cholesterol as well. Research has also suggested that cholesterol bonds with sulphur in the body to produce cholesterol sulfate, which thins the blood and may allow the body to store electrons and lower blood pressure when walking barefoot. On this basis, cholesterol sulfate has been proposed as a possible means of reducing heart disease.

Where is cholesterol found in foods?

Cholesterol is found in mono-unsaturated fatty acids, or MUFAs. Foods with polyunsaturated fatty acids, or PUFAs, are detrimental to the body and to heart health. Foods with healthy fats are generally from the vegetable kingdom, such as vegetable oils.

Sources:

http://www.karger.com

http://www.ncbi.nlm.nih.gov

http://www.eurekalert.org

http://www.healthboards.com

Serotonin States and Social Anxiety


This Neuroscience and Psychiatry article discusses the role in social anxiety of a newly described disruption in the regulation of serotonin neurochemistry, one that results from overactive serotonin signaling.

Social anxiety disorder is characterized by fear and avoidance of situations in which an individual believes he or she may be subject to scrutiny and at risk for embarrassment or humiliation. It is the most common of the anxiety disorders, affecting more than 5% of the general population, with an early age at onset that is frequently associated with high rates of depressive comorbidity.1 Social anxiety disorder is frequently treated pharmacologically with selective serotonin reuptake inhibitors (SSRIs) or serotonin-norepinephrine reuptake inhibitors, several of which are approved by the US Food and Drug Administration for this indication. Nonetheless, only 30% to 40% of patients have full and satisfactory responses to these agents.2 Attempts have been made to enable genetic prediction of response to SSRIs in patients with social anxiety disorder,3 but such efforts are still in the early stages and have, to our knowledge, yet to be replicated.

Here’s why it’ll take us decades to master nuclear fusion.


This is why we can’t use safe, clean nuclear fusion power yet.

This article was written by Matthew Hole, from the Australian National University and Igor Bray, from Curtin University, and was originally published by The Conversation. It’s part of their worldwide series on the Future of Nuclear, and you can read the rest of the series here.

Nuclear fusion is what powers the Sun and the stars – unleashing huge amounts of energy through the binding together of light elements such as hydrogen and helium. If fusion power were harnessed directly on Earth, it could produce inexhaustible clean power, using seawater as the main fuel, with no greenhouse gas emissions, no proliferation risk, and no risk of catastrophic accidents. Radioactive waste is very low level and indirect, arising from neutron activation of the power plant core. With current technology, a fusion power plant could be completely recycled within 100 years of shutdown.

Today’s nuclear power plants exploit nuclear fission – the splitting of atomic nuclei of heavy elements such as uranium, thorium, and plutonium into lighter ‘daughter’ nuclei. This process, which happens spontaneously in unstable elements, can be harnessed to generate electricity, but it also generates long-lived radioactive waste.

Why aren’t we using safe, clean nuclear fusion power yet? Despite significant progress in fusion research, why do we physicists treat unfounded claims of “breakthroughs” with scepticism? The short answer is that it is very difficult to achieve the conditions that sustain the reaction. But if the experiments under construction now are successful, we can be optimistic that nuclear fusion power can be a reality within a generation.

The fusion process

Unlike fission, fusion does not happen spontaneously: atomic nuclei are positively charged and must overcome their huge electrostatic repulsion before they can get close enough together that the strong nuclear force, which binds nuclei together, can kick in.

In nature, the immense gravitational force of stars is strong enough that the temperature, density and volume of the star’s core are enough for atomic nuclei to fuse through ‘quantum tunnelling’ of this electrostatic barrier. In the laboratory, quantum tunnelling rates are far too low, and so the barrier can only be overcome by making the fuel nuclei incredibly hot – six to seven times hotter than the Sun’s core.

Even the easiest fusion reaction to initiate – the combination of the hydrogen isotopes deuterium and tritium, to form helium and an energetic neutron – requires a temperature of about 120 million degrees Celsius. At such extreme temperatures, the fuel atoms are ruptured into their component electrons and nuclei, forming a superheated plasma.
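To put those figures in context, here is a minimal back-of-the-envelope sketch in Python (illustrative only, not from the article): it converts the quoted 120 million degrees Celsius into the average thermal energy per particle, and recovers the roughly 17.6 MeV released by each deuterium-tritium reaction from standard atomic masses.

# Back-of-the-envelope numbers for D-T fusion (illustrative, not from the article)

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin
AMU_TO_MEV = 931.494        # energy equivalent of one atomic mass unit, MeV

# Standard atomic masses in unified atomic mass units
M_DEUTERIUM, M_TRITIUM = 2.014102, 3.016049
M_HELIUM4, M_NEUTRON = 4.002602, 1.008665

def thermal_energy_kev(temp_celsius):
    """Mean thermal energy kT of plasma particles, in keV."""
    return K_BOLTZMANN_EV * (temp_celsius + 273.15) / 1e3

def dt_energy_release_mev():
    """Energy released by D + T -> He-4 + n, from the mass defect."""
    mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
    return mass_defect * AMU_TO_MEV

print(f"kT at 120 million C: {thermal_energy_kev(120e6):.1f} keV")   # about 10 keV
print(f"Energy per D-T fusion: {dt_energy_release_mev():.1f} MeV")   # about 17.6 MeV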

Keeping this plasma in one place long enough for the nuclei to fuse together is no mean feat. In the laboratory, the plasma is confined using strong magnetic fields, generated by coils of electrical superconductors which create a donut-shaped ‘magnetic bottle’ in which the plasma is trapped.

Schematic diagram of a fusion power plant. Figure supplied courtesy of JET-EFDA publications, copyright Euratom. Author provided.

Today’s plasma experiments such as the Joint European Torus can confine plasmas at the required temperatures for net power gain, but the plasma density and energy confinement time (a measure of the cooling time of the plasma) are too low for the plasma to be self-heated. But progress is being made – today’s experiments have fusion performance 1,000 times better, in terms of temperature, plasma density and confinement time, than the experiments of 40 years ago. And we already have a fair idea of how to move things to the next step.
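The comparison in that last paragraph is usually made with the so-called fusion ‘triple product’ of density, temperature and energy confinement time. The short Python sketch below is a rough illustration only: the input values are hypothetical round numbers, not measurements from JET or ITER, and the ignition threshold used is the commonly quoted figure for deuterium-tritium plasmas.

# Fusion 'triple product' sketch with hypothetical, illustrative numbers

DT_IGNITION_THRESHOLD = 3e21  # commonly quoted D-T ignition value, keV * s / m^3

def triple_product(density_per_m3, temperature_kev, confinement_time_s):
    """n * T * tau_E, the usual figure of merit for magnetic confinement."""
    return density_per_m3 * temperature_kev * confinement_time_s

# Hypothetical large-tokamak values: 10^20 particles/m^3, 10 keV, 1 second
example = triple_product(1e20, 10.0, 1.0)
print(f"triple product: {example:.1e} keV s / m^3")
print(f"fraction of the D-T ignition threshold: {example / DT_IGNITION_THRESHOLD:.2f}")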

Regime change

The ITER reactor, now under construction at Cadarache in the south of France, will explore the ‘burning plasma regime’, where the plasma heating from the confined products of the fusion reaction exceeds the external heating power. The fusion power produced by ITER will be more than five times the external heating power in near-continuous operation, and will approach 10-30 times the heating power for short durations.

At a cost exceeding US$20 billion, and funded by a consortium of seven nations and alliances, ITER is the largest science project on the planet. Its purpose is to demonstrate the scientific and technological feasibility of using fusion power for peaceful purposes such as electricity generation.

The engineering and physical challenge is immense. ITER will have a magnetic field strength of 5 Tesla (100,000 times the Earth’s magnetic field) and a device radius of 6 metres, confining 840 cubic metres of plasma (one-third of an Olympic swimming pool). It will weigh 23,000 tonnes and contain 100,000 km of niobium tin superconducting strands. Niobium tin is superconducting at 4.5K (about minus 269 degrees Celsius), and so the entire machine will be immersed in a refrigerator cooled by liquid helium to keep the superconducting strands just a few degrees above absolute zero.

A cross-section cutaway of ITER. For scale, note the human under the reactor core. The ITER Organisation, Author provided

ITER is expected to start generating its first plasmas in 2020. But the burning plasma experiments aren’t set to begin until 2027. One of the huge challenges will be to see whether these self-sustaining plasmas can indeed be created and maintained without damaging the plasma facing wall or the high heat flux ‘divertor’ target.

The information we get from building and operating ITER will inform the design of future fusion power plants, with an ultimate aim of making the technology work for commercial power generation. At the moment it seems likely that the first prototype power plants will be built in the 2030s, and would probably generate around 1 gigawatt of electricity.

While first-generation power plants will probably be on a similarly large scale to ITER, it is hoped that improvements in magnetic confinement and control will lead to more compact later-generation power plants. Likewise, power plants should cost less than ITER: long-term modelling which extrapolates to power plants suggests fusion could be economic with low impact on the environment.

So while the challenges to nuclear fusion are big, the pay-off will be huge. All we have to do is get it to work.

Magnetic field discovery gives clues to galaxy-formation processes


Astronomers making a detailed, multi-telescope study of a nearby galaxy have discovered a magnetic field coiled around the galaxy’s main spiral arm. The discovery, they said, helps explain how galactic spiral arms are formed. The same study also shows how gas can be funneled inward toward the galaxy’s center, which possibly hosts a black hole.

“This study helps resolve some major questions about how galaxies form and evolve,” said Rainer Beck, of the Max-Planck Institute for Radio Astronomy (MPIfR), in Bonn, Germany.

The scientists studied a galaxy called IC 342, some 10 million light-years from Earth, using the National Science Foundation’s Karl G. Jansky Very Large Array (VLA), and the MPIfR’s 100-meter Effelsberg radio telescope in Germany. Data from both radio telescopes were merged to reveal the magnetic structures of the galaxy.

The surprising result showed a huge, helically-twisted loop coiled around the galaxy’s main spiral arm. Such a feature, never before seen in a galaxy, is strong enough to affect the flow of gas around the spiral arm.

“Spiral arms can hardly be formed by gravitational forces alone,” Beck said. “This new IC 342 image indicates that magnetic fields also play an important role in forming spiral arms.”

The new observations provided clues to another aspect of the galaxy, a bright central region that may host a black hole and also is prolifically producing new stars. To maintain the high rate of star production requires a steady inflow of gas from the galaxy’s outer regions into its center.

“The magnetic field lines at the inner part of the galaxy point toward the galaxy’s center, and would support an inward flow of gas,” Beck said.

Large-scale Effelsberg radio image of IC 342. Lines indicate orientation of magnetic fields. Credit: R. Beck, MPIfR.

The scientists mapped the galaxy’s magnetic-field structures by measuring the orientation, or polarization, of the radio waves emitted by the galaxy. The orientation of the radio waves is perpendicular to that of the magnetic field. Observations at several wavelengths made it possible to correct for rotation of the waves’ polarization plane caused by their passage through interstellar magnetic fields along the line of sight to Earth.
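The wavelength trick described above is the standard Faraday-rotation correction: the measured polarization angle rotates in proportion to the square of the wavelength, so observing at several wavelengths lets you fit for the rotation and recover the intrinsic angle, and the field orientation is perpendicular to that intrinsic angle. The Python sketch below illustrates the idea with made-up numbers; it is not the MPIfR/NRAO analysis pipeline.

# Toy Faraday-rotation correction with made-up observations
import numpy as np

wavelengths_m = np.array([0.035, 0.062, 0.112])     # roughly 8.6, 4.8 and 2.7 GHz
measured_angles_rad = np.array([0.52, 0.60, 0.86])  # measured polarization angles

# chi(lambda) = chi_0 + RM * lambda^2, so fit a straight line in lambda^2
rotation_measure, chi_0 = np.polyfit(wavelengths_m**2, measured_angles_rad, 1)

# The magnetic field orientation is perpendicular to the intrinsic angle chi_0
field_orientation = chi_0 + np.pi / 2

print(f"rotation measure: {rotation_measure:.0f} rad/m^2")
print(f"intrinsic polarization angle: {np.degrees(chi_0):.1f} degrees")
print(f"inferred field orientation: {np.degrees(field_orientation):.1f} degrees")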

The Effelsberg telescope, with its wide field of view, showed the full extent of IC 342, which, if not partially obscured to visible-light observing by dust clouds within our own Milky Way Galaxy, would appear as large as the full moon in the sky. The high resolution of the VLA, on the other hand, revealed the finer details of the galaxy. The final image, showing the galaxy’s magnetic structures, was produced by combining five VLA images made with 24 hours of observing time, along with 30 hours of data from Effelsberg.

Scientists from MPIfR, including Beck, were the first to detect polarized radio emission in galaxies, starting with Effelsberg observations of the Andromeda Galaxy in 1978. Another MPIfR scientist, Marita Krause, made the first such detection with the VLA in 1989, with observations that included IC 342, which is the third-closest spiral galaxy to Earth, after the Andromeda Galaxy (M31) and the Triangulum Galaxy (M33).

TSRI Study Points to Unexplored Realm of Protein Biology, Drug Targets


Scientists at The Scripps Research Institute (TSRI) have devised a powerful set of chemical methods for exploring the biology of proteins.

The techniques are designed to reveal protein interactions with lipids—a class of biological molecules including some vitamins (A, D, E), hormones (estrogen, testosterone), neurotransmitters (endocannabinoids) and components of fat (triglycerides, cholesterol).

“It has long been clear that lipids are important in biology, but it has been challenging to achieve a global portrait of how they interact with proteins,” said senior investigator Benjamin F. Cravatt, who chairs TSRI’s Department of Chemical Physiology. “This approach allows us not only to identify lipid-interacting proteins but also to discover small molecules or ‘ligands’ that selectively block these lipid-protein interactions, helping us to study their functions.”

In some cases, such ligands could become the basis for new drugs. In fact, initial surveys by Cravatt’s team revealed a surprisingly large number of lipid-protein interactions—and while some of these interactions are already targeted by existing drugs, most are not.

“Traditionally, scientists have theorized fairly narrow limits on which proteins can be targeted with small-molecule ligands, but our results suggest that those boundaries are much wider,” said Cravatt.

The team’s findings were reported in the journal Cell on June 18, 2015.

Surprising Results

The study, whose co-first authors were Micah Niphakis, who at the time was a postdoctoral fellow in the Cravatt laboratory, and graduate student Kenneth M. Lum, emerged from an effort to discover protein-binding partners of arachidonoyl lipids, which are involved in many physiological responses such as pain, inflammation, mood and appetite.

Niphakis began by making probe molecules that mimic the structure of arachidonoyl lipids and capture protein-binding partners when exposed to ultraviolet light. He found that each of the probes fastened to hundreds of distinct proteins in human and mouse cells. Probes mimicking other lipids also revealed large sets of protein interactions.

A big challenge in using the lipid probes was to determine which protein interactions were likely to be selective and biologically relevant and not just random, weak bindings. “One way we addressed that challenge was to use multiple, structurally distinct probes to look at the differences in how they bound to a given protein,” said Niphakis.

As Niphakis and his colleagues worked with the probes, they found they could quickly generate extensive maps of a given lipid’s protein-binding partners and, in many cases, even map the region of the protein that the lipid probes bound—information that could be put to multiple uses.

Profiling Drug Compounds

One was to profile the activities of any drug compound, essentially by comparing how lipid-protein interaction maps change in the presence of the drug. Such changes would be due largely to the drug’s interference with lipid-protein binding events. The team found that, while some drugs they tested this way were highly selective in the sense that they bound only to their intended protein targets, others showed evidence of strong bindings to additional, “off-target” proteins—which could produce their own biological effects.
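A minimal sketch of that competition idea, with entirely hypothetical protein names and signal values (this is not TSRI’s analysis code): for each protein, compare how much of the lipid-probe capture signal remains when the drug is present, and flag proteins whose signal collapses as likely drug-binding sites.

# Hypothetical probe-capture signals (arbitrary units), without and with a drug
no_drug   = {"TARGET_A": 100.0, "OFFTARGET_B": 80.0, "UNRELATED_C": 60.0}
with_drug = {"TARGET_A": 12.0,  "OFFTARGET_B": 30.0, "UNRELATED_C": 58.0}

COMPETITION_THRESHOLD = 0.5  # flag proteins whose probe signal drops by more than half

for protein, baseline in no_drug.items():
    remaining = with_drug[protein] / baseline
    flag = "  <- likely competed by the drug" if remaining < COMPETITION_THRESHOLD else ""
    print(f"{protein}: {remaining:.0%} of probe signal remains{flag}")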

Similarly, the TSRI researchers used their lipid probes to make a high-throughput screening assay to discover ligands that bind to proteins at their lipid interaction sites. In one case, the researchers screened a small-molecule compound library and identified compounds that bind tightly to NUCB1, an otherwise little-known protein that showed strong interactions with the arachidonoyl probes. The researchers then employed these ligand compounds to study NUCB1’s functions in detail, showing, for example, that it facilitates the breakdown of endocannabinoids and related lipids, perhaps by acting as a transporter.

The Cravatt laboratory is now using the same approach to study the functions of other proteins in their lipid interaction maps.

“There are a lot of proteins in our dataset that haven’t been characterized, yet have strong interactions with our lipid probes,” said Cravatt.

In principle, a similar approach could be used to discover and study protein interactions with other, non-lipid types of biomolecules. “Alternative probes would be valuable for surveying other types of metabolite-protein interactions,” said Niphakis.

In addition to Cravatt, Niphakis and Lum, contributors to the paper, “A global map of lipid-binding proteins and their ligandability in cells,” were Armand B. Cognetta III, Bruno E. Correia, Taka-Aki Ichu, Jose Olucha, Steven J. Brown, Soumajit Kundu, Fabiana Piscitelli and Hugh Rosen, all of TSRI.

New drug has the potential to ward off malaria with a single dose.


And it’ll cost less than $1 per dose.

Lab tests have highlighted the potential of a new drug to treat malaria in affected patients, prevent it from spreading, and ward off future infections with a single dose. Developed by chemists at Dundee University in Scotland and the not-for-profit group Medicines for Malaria Venture, the drug acts against each of the life stages of the malaria parasite, making it a promising option for those already infected and as a vaccination.

“There are other compounds being developed for malaria, but relatively few of [these] have reached the stage we’re at,” lead researcher Ian Gilbert said in a press release. “What’s most exciting is the number of potential attributes, such as the ability to give it in a single dose which will mean that medics can make sure a patient completes the treatment.”

Named DDD107498, the drug has been in development since 2009, and was developed from one of almost 4,700 compounds tested for effectiveness against malaria at the Drug Discovery Unit (DDU) facilities in the UK.

In tests with mice and other lab animals, the researchers report that the drug identified and attacked the protein involved in the production of various vital enzymes and proteins in the malaria parasite’s cells throughout all stages of its lifecycle, which prevented the spread and development of the disease. The parasite was successfully cleared from both the blood and livers of the affected animals.

“The compound we have discovered works in a different way to all other antimalarial medicines on the market or in clinical development, which means that it has great potential to work against current drug-resistant parasites,” one of the team, Kevin Read, said in the release. “It targets part of the machinery that makes proteins within the parasite that causes malaria.”

The results have been published in Nature.

According to Steve Connor at The Guardian, the first phase of clinical trials will begin in the coming months, and if the drug makes it to market, it will likely be sold for less than $1 a dose, “which is considered the maximum price that the poorest affected countries can afford”, he says.

As David Reddy, CEO of Medicines for Malaria Venture, pointed out to BBC News, malaria threatens half the world’s population – the half that can least afford treatment and vaccination against it – so a cheap, one-off medication could be the most promising option in development. “DDD107498 is an exciting compound since it holds the promise to not only treat but also protect these vulnerable populations,” he said.

Watch the video. URL: https://youtu.be/I2qHc4YTDSg

Light-based computers will be even more awesome than we thought.


Engineers have found the best way to beam data around a computer at light speed to date.

 

Researchers have come up with an efficient way of transporting data between computer chips using light rather than electricity. Not only does this mean that computers will be able to transmit data much, much faster, it also means we could build machines that consume far less energy.

We’re already able to send data in the form of photons at incredible speeds through the optical fibres that make up our Internet, but right now when this data hits our computers, it has to slow down so it can be converted into electrons and pushed through wires around our devices.

This process isn’t just slow, it’s also energy intensive, and it’s responsible for making our computers so hot. “Up to 80 percent of the microprocessor power is consumed by sending data over the wires,” one of the researchers, Jelena Vuckovic from Stanford University in the US, said in a press release.

But while engineers are getting very close to creating computer chips that can process light, they’ve struggled to find an efficient way to transmit that light across the thousands of different connections, known as interconnects, between them. In theory, light can be beamed between chips via silicon structures that bend it to the desired location, but these are incredibly difficult to build, and having to create a new silicon structure to replace every single wire inside just one computer could be next to impossible.

Now the Stanford University team has come up with a better solution, by developing an inverse design algorithm that tells them exactly how to build the silicon structures they need to perform a desired task. They’ve already used the algorithm to design a working optical circuit, and have made several copies in their lab.
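As a rough illustration of what ‘inverse design’ means here, the Python sketch below specifies a target optical response, uses a stand-in forward model to predict the response of a candidate pattern of silicon pixels, and nudges the pattern to reduce the mismatch. The real algorithm solves Maxwell’s equations and uses far more sophisticated gradients; everything below (the linear forward operator, the pixel grid, the target port) is a simplified assumption for illustration only.

# Toy inverse design: optimise a pixel pattern to match a target response
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_outputs = 64, 8
forward_operator = rng.normal(size=(n_outputs, n_pixels))  # stand-in for an EM solver
target_response = np.zeros(n_outputs)
target_response[2] = 1.0                                   # e.g. send the light to port 2

design = np.full(n_pixels, 0.5)   # each pixel's silicon 'fill factor', between 0 and 1
learning_rate = 1e-3

for step in range(2000):
    error = forward_operator @ design - target_response
    gradient = forward_operator.T @ error                  # gradient of 0.5 * ||error||^2
    design = np.clip(design - learning_rate * gradient, 0.0, 1.0)

final_mismatch = np.linalg.norm(forward_operator @ design - target_response)
print(f"final mismatch to target response: {final_mismatch:.3f}")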

Reporting in Nature Photonics, the team has now demonstrated that these devices worked perfectly, despite tiny imperfections in the structures. “Our manufacturing processes are not nearly as precise as those at commercial fabrication plants,” said Alexander Piggott, who worked on the algorithm. “The fact that we could build devices this robust on our equipment tells us that this technology will be easy to mass-produce at state-of-the-art facilities.”

So how exactly do you build a silicon interconnector? Basically it involves layering slices of silicon that are so thin that more than 20 of them could sit side-by-side in the diameter of a human hair. Light easily passes through silicon, but it also bends as it does so. By designing very precise segments of silicon and pairing them together – according to the instructions of the algorithm – the team are able to create switches or conduits that control the flow of photons, just like wires currently do with electrons.

“Our structures look like Swiss cheese but they work better than anything we’ve seen before,” said Vuckovic.

By creating an algorithm that automates the development of these complex Swiss cheese silicon structures, the team has essentially “set the stage for the next generation of even faster and far more energy-efficient computers that use light rather than electricity for internal data transport,” as the press release explains.

The algorithm could also be used to find design solutions to many other communication problems – all a researcher needs to do is plug in their desired result, and the algorithm will come up with a plan. We’re pretty excited to see what they do with it next.

Scientists create computational algorithm for fact-checking


Network scientists at Indiana University have developed a new computational method that can leverage any body of knowledge to aid in the complex human task of fact-checking.

In the first use of this method, IU scientists created a simple computational fact-checker that assigns “truth scores” to statements concerning history, geography and entertainment, as well as random statements drawn from the text of Wikipedia, the well-known online encyclopedia.

In multiple experiments, the automated system consistently matched the assessment of human fact-checkers in terms of their certitude about the accuracy of these statements.

The results of the study, “Computational Fact Checking From Knowledge Networks,” are reported in today’s issue of PLOS ONE.

“These results are encouraging and exciting,” said Giovanni Luca Ciampaglia, a postdoctoral fellow at the Center for Complex Networks and Systems Research in the IU Bloomington School of Informatics and Computing, who led the study. “We live in an age of information overload, including abundant misinformation, unsubstantiated rumors and conspiracy theories whose volume threatens to overwhelm journalists and the public.

“Our experiments point to methods to abstract the vital and complex human task of fact-checking into a network analysis problem, which is easy to solve computationally.”

The team selected Wikipedia as the information source for their experiment due to its breadth and open nature. Although Wikipedia is not 100 percent accurate, previous studies estimate it is nearly as reliable as traditional encyclopedias while covering many more subjects.

Using factual information from infoboxes on the site, IU scientists built a “knowledge graph” with 3 million concepts and 23 million links between them. A link between two concepts in the graph can be read as a simple factual statement, such as “Socrates is a person” or “Paris is the capital of France.”

In what the IU scientists describe as an “automatic game of trivia,” the team applied their algorithm to answer simple questions related to geography, history and entertainment, including statements that matched states or nations with their capitals, presidents with their spouses and Oscar-winning film directors with the movie for which they won the Best Picture award, with the majority of tests returning highly accurate truth scores.

Lastly, the scientists used the algorithm to fact-check excerpts from the main text of Wikipedia, which were previously labeled by human fact-checkers as true or false, and found a positive correlation between the truth scores produced by the algorithm and the answers provided by the fact-checkers.

Significantly, the IU team found their method could even assess the truthfulness of statements about information not directly contained in the infoboxes. For example, it correctly assessed that Steve Tesich, the Serbian-American screenwriter of the classic Hoosier film “Breaking Away”, graduated from IU, even though that information is not specifically addressed in the infobox about him.

“The measurement of the truthfulness of statements appears to rely strongly on indirect connections, or ‘paths,’ between concepts,” Ciampaglia said. “If we prevented our fact-checker from traversing multiple nodes on the graph, it performed poorly since it could not discover relevant indirect connections. But because it’s free to explore beyond the information provided in one infobox, our method leverages the power of the full knowledge graph.”
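A toy version of that idea, in Python with the networkx library, is sketched below. It is a simplification, not the authors’ code: infobox-style facts become edges in a small graph, and a candidate statement scores highly when its subject and object are linked by a short path through specific concepts, with generic hub nodes penalised via their degree. The node names and the exact scoring formula are illustrative.

# Toy knowledge-graph fact-checker (illustrative only)
import math
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([
    ("Paris", "France"), ("Paris", "capital"), ("France", "Europe"),
    ("Rome", "Italy"), ("Rome", "capital"), ("Italy", "Europe"),
    ("Berlin", "Germany"), ("Berlin", "capital"), ("Germany", "Europe"),
])

def truth_score(subject, obj):
    """Score a statement by its best path, penalising generic (high-degree) nodes."""
    best = 0.0
    for path in nx.all_simple_paths(graph, subject, obj, cutoff=4):
        generality = sum(math.log(graph.degree(node)) for node in path[1:-1])
        best = max(best, 1.0 / (1.0 + generality))
    return best

print(truth_score("Paris", "France"))  # direct edge: score 1.0
print(truth_score("Paris", "Italy"))   # only indirect, generic routes: lower score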

Although the experiments were conducted using Wikipedia, the IU team’s method does not assume any particular source of knowledge. The scientists aim to conduct additional experiments using knowledge graphs built from other sources of human knowledge, such as Freebase, the open-knowledge base built by Google, and note that multiple information sources could be used together to account for different belief systems.

“Misinformation endangers the public debate on a broad range of global societal issues,” said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing, who is a co-author on the study. “With increasing reliance on the Internet as a source of information, we need tools to deal with the misinformation that reaches us every day. Computational fact-checkers could become part of the solution to this problem.”

The team added that a significant amount of natural language processing research and other work remains before these methods could be made available to the public as a software tool.

Earth and Mars may have shared seeds of life


Could Mars, of all places, be the place to look for early life on Earth?

It’s an intriguing thought and one that astrobiologists take seriously as they consider the conditions during the early days of the Solar System, when both planets experienced frequent bombardments by asteroids and comets that resulted in debris exchange between one body and the other.

“We might be able to find evidence of our own origin in the most unlikely place, and this place is Mars,” planetary scientist Nathalie Cabrol of the SETI institute said in a TED Talk in April 2015.

Cabrol studies life in extreme conditions on Earth with the hope that her research might help improve the search for signs of life on the Red Planet.

“We can go to Mars and try to find traces of our own origin. Mars may hold that secret for us,” she said. “This is why Mars is so special to us.”

‘Throwing rocks’

Mars orbits an average of 140 million miles (225 million kilometers) from Earth and has a similar size and composition. During the period of the Late Heavy Bombardment, about 3.8 billion to 4 billion years ago, the planets were pummeled with asteroids and comets, which may also have delivered much of the water in Earth’s oceans. Earth and Mars were, in a sense, connected to each other by the violence of this era.

“Earth and Mars kept throwing rocks at each other for a very long time,” Cabrol said.

If life spawned on one planet, it could have clung to one or more of these rocks and traveled to the other planet, a process scientists call panspermia. But if early Earth life did make it to Mars, it would have needed a hospitable environment when it arrived.

Today the planet is bleak and barren, resembling the Earth’s most desolate deserts. With its thin atmosphere and almost completely waterless surface, any life that landed on Mars today would have a difficult time taking hold. But in the past, when the rocks were flying, Mars likely boasted a more habitable environment.

“At the time when life appeared on the Earth, Mars did have an ocean, it had volcanoes, it had lakes, and it had deltas,” Cabrol said.

Yet unlike Earth, the Red Planet quickly lost its hold on habitability.

“When life exploded at the surface of the Earth, then everything went south for Mars,” Cabrol said.

Because Mars lacks a protective magnetic field, the Sun’s solar wind stripped it of its atmosphere and exposed the surface to bombardment from cosmic rays and ultraviolet (UV) light. Most of the water left the surface, escaping into space. Only a few pockets remain on the surface today, at the poles, while some water may lurk beneath the ground.

“There is no life possible on the surface of Mars today, but it might still be hiding underground,” Cabrol said.

‘A time machine’

NASA’s Curiosity rover identified weathered rock similar to those found in deltas where river sediment settled onto the lake floor. If the lakes lasted long enough, they could have housed microbial life in the early days of Mars. Credit: NASA/JPL-Caltech/MSSS

To understand what life might have done as the planet’s situation grew more bleak, Cabrol said scientists need the ability to peer back at the Mars of the past.

“You only need to go back 3.5 billion years ago in the past of a planet,” she said. “We just need a time machine.”

Today, Cabrol uses Earth as her time machine, traveling to regions that resemble the Red Planet in its past.

“I use planet Earth to go in very extreme environments where conditions were similar to those of Mars at the time when the climate changed,” Cabrol said.

One of those regions lies at the top of the Andes Mountains in Chile, where volcanic lakes fill the summit cones, making the area a unique analogue to Mars. The region sits at an elevation of 3.6 miles (5,872 meters), where UV rays more easily pierce the thin atmosphere. Cabrol traveled there as part of the High Lakes project, funded by a grant from the NASA Astrobiology Institute (NAI) element of the Astrobiology Program at NASA.

“At this altitude, this lake is experiencing exactly the same conditions as those on Mars three and a half billion years ago,” Cabrol said.

Cabrol and her team changed from mountain climbing equipment to diving gear to sample the interior of one such lake. They found that “life is everywhere, absolutely everywhere,” she said.

Yet the amount of life in the lake can be deceiving. According to Cabrol, 36 percent of the samples her team took home were made up of only three species, typifying the deadly environment.

“There is a huge loss in biodiversity,” she said. “Those three species are the ones that have survived so far.”

If life exists on Mars, it is likely to be similarly lacking in biodiversity. Only the hardiest microbes may have survived the planet’s decline.

In another nearby lake, similar conditions have driven the algae within to adapt, giving the water a reddish cast. On Earth, a score of 11 in the UV Index, which provides a forecast of the expected risk of overexposure to the Sun’s ultraviolet radiation, is considered to be extreme. During UV storms, the UV Index at Aguas Calientes lake can reach as high as 43, the highest UV radiation levels measured on Earth. The water is so clear that the algae have nowhere to hide from the deadly radiation, so they must find other ways to protect themselves.

“They are developing their own sunscreen,” Cabrol said, “and this is the red color you see.”

While the lakes give some insight into what could be happening on Mars in the past, they don’t suggest what could have happened to organisms on the planet when there were no pools left to hide in.

Licancabur Lake lies at the top of the Andes Mountains, where the low oxygen, thin atmosphere and high ultraviolet radiation create conditions similar to those found on ancient Mars. Credit: The High Lakes Project: The SETI Institute Carl Sagan Center/NASA Ames/ NAI

“When all the water is gone from the surface, microbes have only one solution left—they go underground,” Cabrol said.

In addition to studying pools at high altitude, her team also examined microbes that seek shelter from the Sun’s radiation. Cabrol showed images of semi-translucent rocks with microbes hiding beneath, while still taking some energy from the Sun.

“They are using the protection of the translucence of the rocks to get the good part of the UV, and discard the part that could actually damage their DNA,” Cabrol said.

Studying these organisms can help scientists searching for life on Mars with rovers such as Curiosity.

“If there was life on Mars three and a half billion years ago, it had to use the same strategy to actually protect itself,” Cabrol said.

The red color of Aguas Calientes comes from algae, which must create their own protection from the harsh ultraviolet radiation coming from the Sun. Credit: The High Lakes Project: The SETI Institute Carl Sagan Center/NASA Ames/ NAI

‘Our legacy’

Mars isn’t the only place life might thrive in the Solar System. Scientists think a subsurface ocean could exist on Jupiter’s moons Europa and Ganymede, and on Saturn’s moons Titan and Enceladus. Cabrol develops science exploration strategies for these icy moons, where microbes may have evolved.

Unlike Mars, other bodies in the Solar System would have had a more difficult time exchanging material with Earth.

“Mars and Earth could have a common root to their tree of life, but when you go beyond Mars, it’s not that easy,” Cabrol said. “If we were to discover life on those planets [and moons], it would be different from us.”

Extraterrestrial life in the form of microbes, should it be found, may not lead to an equal exchange of intelligence, but primitive life can still answer questions about the existence of life, Cabrol said.

This illustration depicts a lake of water partially filling Mars’ Gale Crater, collecting runoff from melting snow on the northern rim. If they lasted long enough, such lakes could have served as refuge for microbial life on Mars. Credit: NASA/JPL-Caltech/ESA/DLR/FU Berlin/MSSS

“Organic material is going to tell you about environment, about complexity, and about diversity. DNA, or any information carrier, is going to tell you about adaptation, about evolution, about survival, about planetary change, and about the transfer of information,” she said. “All together, they are telling us why what started as a microbial pathway sometimes ends up as a civilization or sometimes ends up as a dead end.”

These questions could be answered in the not-too-distant future, she said.

“This can be achieved by our generation,” Cabrol said. “This can be our legacy—but only if we dare to explore.”