Will Getting the Flu Shot Give You the Flu?




National Influenza Vaccination Week is December 7-13, 2014. It is the week the Centers for Disease Control and Prevention (CDC) has designated to help you get the influenza, or flu, shot you have been putting off all fall.

Perhaps more importantly, the CDC hopes that it reminds you to make sure that the young children and older adults in your life receive the flu vaccine too! And for anyone who is worried about side effects of the flu vaccine: it will not make you sick.

To be clear, getting the flu shot will not give you the flu.

Dr. Gregory A. Poland, a professor of medicine at the Mayo Clinic in Rochester, Minnesota, is quoted in a Reuters article as saying, “It is absolutely biologically impossible to get the flu from the vaccine.”

Poland studies the immunogenetics of vaccine response and is well aware of the myth that receiving the flu vaccine will make someone sick with the flu. It will not!

National Influenza Vaccination Week was created in 2005 to emphasize the importance of receiving the annual flu vaccine and, along the way, to dispel the myth that the flu vaccine can give you the flu.

The CDC wants to increase the number of people who receive the flu vaccine in the months of December, January, February and beyond, though they do recommend getting the vaccine as early in the season as possible.

Many Americans have already received their flu vaccine this year. As of November 14, 2014, the CDC reported that about 139.7 million doses of this year’s influenza vaccine had been distributed to those who provide vaccinations in the United States.

Many Americans may downplay the importance of getting the flu vaccine, but influenza is a serious disease. For thousands of people each year, it can even be fatal.

Experts, including the Advisory Committee on Immunization Practices, recommend that everyone six months of age and older receive the flu vaccine. People who are at higher risk for complications if they contract the flu are strongly urged to get the vaccine.

NFL Issues Warning — Steroid Level So High in Beef It’s Causing Players to Fail Drug Tests


Five months after Congress voted to remove mandatory country of origin labeling from pork and beef, the NFL is now warning players that meat produced in Mexico and China may contain clenbuterol, a substance banned by the league as a performance enhancing drug.

Clenbuterol is a muscle-building and weight-loss stimulant.

A memo from the league’s independent drug-testing administrator was sent out to players informing them that, “consuming large quantities of meat while visiting those particular countries may result in a positive test.”

“Players are warned to be aware of this issue when traveling to Mexico and China,” the memo went on to say. “Please take caution if you decide to consume meat, and understand that you do so at your own risk.”

The drug-testing program starkly warned: “Players are responsible for what is in their bodies.”

Under normal circumstances, the policy would seem like common sense, but since the repeal of mandatory country-of-origin labeling for meat, Americans now have no way of identifying where their meat was produced. Much of the meat Americans are eating now could very well be from Mexico or China.

Now some NFL players, including the Arizona Cardinals’ Patrick Peterson, have taken to social media to express their feelings after receiving the memo.

According to a report by ESPN:

Texans left tackle Duane Brown tested positive for clenbuterol last season after a bye-week trip to Mexico, during which he ate Mexican beef, sources told ESPN.

After a months-long process, Brown was finally cleared in April, sources said, allowing him to avoid what would have been a 10-game suspension. His case serves as a cautionary tale for other players…

Mexican cattle ranchers are banned from using clenbuterol as a growth enhancer, but reports suggest that it is still used widely.

So why aren’t Americans allowed to know where the meat they buy is produced?

The labeling law repeal, along with other controversial legislation such as CISA, was included in the omnibus spending bill that Congress passed in December 2015, in an effort to avert a government shutdown.

U.S. legislators claimed to have had little choice but to lift the labeling requirements after the World Trade Organization’s repeated rulings against the U.S. labels. The labels were challenged by Canada and Mexico in the WTO, and the organization authorized those countries to impose over $1 billion in economic sanctions on the U.S.

The lawmakers’ explanation for getting rid of the labels lends direct support to the argument that U.S. sovereignty is subject to its membership in certain international organizations.

If a transnational trade organization can dictate U.S. law to those tasked with representing the people of a nation, then sovereignty becomes an increasingly fluid concept the deeper a nation ventures into the territory of supranational organizations.

The insertion of legislation that would otherwise never be codified into law into a spending bill that most in Congress see as essential to pass speaks to the nefarious nature of “representative” democracy as practiced in the U.S. It also highlights the ability of those intent on substituting their will for that of the people they allegedly represent to subvert the very foundations we are told the nation was built upon.

The idea that NFL players are held responsible for ingesting a performance-enhancing drug found in Mexican- and Chinese-produced beef, while the U.S. doesn’t allow consumers to know where their meat was produced, speaks to the insanity of the system as a whole.

In essence, you have no right to know what is in your food – but you will be held responsible for what is in it after it is ingested.

 

NASA just detected oxygen in the Martian atmosphere


NASA has detected oxygen in the upper Martian atmosphere with the help of an instrument on board the Stratospheric Observatory for Infrared Astronomy (SOFIA). Oxygen had been discovered on the red planet before; however, this is the first time its presence has been confirmed since the Viking and Mariner missions more than 40 years ago.

The oxygen atoms were detected in the region of the upper Martian atmosphere known as the mesosphere. The discovery will help shed light on how gases escaped from the Martian atmosphere millions of years ago. Although oxygen has been detected on Mars in the past, the amount detected this time was about half of what the researchers anticipated, which may be due to variations in the atmosphere.

“Atomic oxygen in the Martian atmosphere is notoriously difficult to measure,” said Pamela Marcum, a project scientist with SOFIA, in a press statement. “To observe the far-infrared wavelengths needed to detect atomic oxygen, researchers must be above the majority of Earth’s atmosphere and use highly sensitive instruments, in this case a spectrometer. SOFIA provides both capabilities,” she added.

Because Earth’s atmosphere is dense and moist, it is difficult to get a clear image of what lies beyond it. To overcome this hurdle, the researchers utilized SOFIA, a Boeing 747SP jetliner that carries a 100-inch diameter telescope.

The project is a joint collaboration between NASA and the German Aerospace Center. NASA’s Ames Research Center in Moffett Field, California, oversees the SOFIA program. The aircraft is based at NASA’s Armstrong Flight Research Center’s hangar 703 in Palmdale, California, according to NASA’s website.

SOFIA flew at approximately 37,000–45,000 feet, above most of the infrared-blocking moisture in Earth’s atmosphere. New detectors on one of the observatory’s instruments, the German Receiver for Astronomy at Terahertz Frequencies (GREAT), helped the astronomers distinguish between oxygen in Earth’s atmosphere and oxygen in Mars’s atmosphere.

The high vantage point and the specialized instrumentation tuned to look past Earth’s atmosphere helped the researchers make their measurements. Although the team has yet to provide precise figures on how much atomic oxygen is in the Martian mesosphere, they did report that it is lower than expected. As a result, the researchers will continue using SOFIA to probe other regions of the red planet to make sure the figure wasn’t simply the result of variations in the atmosphere.

Fighting cancer with the help of someone else’s immune cells



A new step in cancer immunotherapy: researchers from the Netherlands Cancer Institute and University of Oslo/Oslo University Hospital show that even if one’s own immune cells cannot recognize and fight their tumors, someone else’s immune cells might. Their proof of principle study is published in the journal Science on May 19th.

The study shows that adding mutated DNA from a patient’s cancer cells into immune-stimulating cells from healthy donors creates an immune response in the healthy donor cells. By inserting the targeted components from the donor immune cells back into the immune cells of the patient, the researchers were able to make cancer patients’ own immune cells recognize cancer cells.

The extremely rapidly developing field of cancer immunotherapy aims to create technologies that help the body’s own immune system to fight cancer. There are a number of possible causes that can prevent the immune system from controlling cancer cells. First, the activity of immune cells is controlled by many ‘brakes’ that can interfere with their function, and therapies that inactivate these brakes are now being tested in many human cancers. As a second reason, in some patients the immune system may not recognize the cancer cells as aberrant in the first place. As such, helping the immune system to better recognize cancer cells is one of the main focuses in cancer immunotherapy.

Ton Schumacher of the Netherlands Cancer Institute and Johanna Olweus of the University of Oslo and Oslo University Hospital decided to test whether a ‘borrowed’ immune system could “see” the cancer cells of the patient as aberrant. The recognition of aberrant cells is carried out by immune cells called T cells. All T cells in our body scan the surface of other cells, including cancer cells, to check whether they display any protein fragments on their surface that should not be there. Upon recognition of such foreign protein fragments, T cells kill the aberrant cells. As cancer cells harbor faulty proteins, they can also display foreign protein fragments – also known as neoantigens – on their surface, much in the way virus-infected cells express fragments of viral proteins.

Outsourcing cancer immunity – arming patient immune cells with immune receptors from healthy donors to attack cancer. Credit: Ellen Tenstad, Science Shaped

To address whether the T cells of a patient react to all the foreign protein fragments on cancer cells, the research teams first mapped all possible neoantigens on the surface of cancer cells from three different patients. In all three patients, the cancer cells seemed to display a large number of different neoantigens. But when the researchers tried to match these to the T cells derived from within the patients’ tumors, most of these aberrant protein fragments on the cancer cells went unnoticed.

Next, they tested whether the same neoantigens could be seen by T-cells derived from healthy volunteers. Strikingly, these donor-derived T cells could detect a significant number of neoantigens that had not been seen by the patients’ T cells.

“In a way, our findings show that the immune response in cancer patients can be strengthened; there is more on the cancer cells that makes them foreign that we can exploit. One way we consider doing this is finding the right donor T cells to match these neoantigens,” says Ton Schumacher. “The receptor that is used by these donor T cells can then be used to genetically modify the patient’s own T cells so these will be able to detect the cancer cells.”

“Our study shows that the principle of outsourcing cancer immunity to a donor is sound. However, more work needs to be done before patients can benefit from this discovery. Thus, we need to find ways to enhance the throughput. We are currently exploring high-throughput methods to identify the neoantigens that the T cells can “see” on the cancer cells and isolate the responding T cells. But the results showing that we can obtain cancer-specific immunity from the blood of healthy individuals are already very promising,” says Johanna Olweus.

Your brain does not process information and it is not a computer.


Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.


No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
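To make that encoding concrete, here is a minimal Python sketch (my own illustration, not something from the essay) that prints the byte, and the underlying pattern of ones and zeroes, behind each letter of the word dog:

```python
# Minimal illustration: how a computer represents the word "dog" as bytes --
# patterns of ones and zeroes organised in groups of eight.
word = "dog"
encoded = word.encode("ascii")  # one byte per letter

for letter, byte in zip(word, encoded):
    # e.g. 'd' -> byte value 100 -> bit pattern 01100100
    print(f"{letter!r} -> {byte:3d} -> {byte:08b}")

# An image such as a desktop photograph is the same idea at a much larger
# scale: roughly a million such bytes, preceded by header bytes that tell
# the computer to expect an image rather than text.
```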

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain

Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.

This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.

Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.

But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.

Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
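Written out formally (my notation, not the essay’s), the inference has an invalid shape: nothing in the premises links ‘behaves intelligently’ to ‘is an information processor’ except by way of ‘is a computer’:

```latex
% C(x): x is a computer, I(x): x behaves intelligently, P(x): x is an information processor
\forall x\,\bigl(C(x)\rightarrow I(x)\bigr),\qquad
\forall x\,\bigl(C(x)\rightarrow P(x)\bigr)
\;\;\not\vdash\;\;
\forall x\,\bigl(I(x)\rightarrow P(x)\bigr)
```

Any world containing an intelligent entity that is neither a computer nor an information processor satisfies both premises while falsifying the conclusion, which is precisely the possibility the IP metaphor quietly rules out.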

Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.

If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):

And here is the drawing she subsequently made with a dollar bill present:

Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.

What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?

Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.

The idea that memories are stored in individual neurons is preposterous: how and where is the memory stored in the cell?

A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.

The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?

So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.

The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.

Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.

From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.

As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.

We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.

Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.

A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.

A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
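For readers who want to see how little machinery such a strategy needs, here is a toy Python simulation. It is my own sketch, a simplified, one-dimensional Chapman-style optical strategy rather than McBeath’s full linear-optical-trajectory model, with made-up speeds and an arbitrary feedback gain. The simulated fielder never models the ball’s flight or predicts a landing point; at every instant it compares the tangent of the elevation angle to the ball against a steadily rising reference and simply backs up if the angle is climbing too fast, or runs in if it is climbing too slowly.

```python
import math

# Toy sketch of a "keep the optical angle rising steadily" fielder.
# No physics model, no predicted landing point: just react to the angle.

DT, G = 0.01, 9.81  # time step (s), gravity (m/s^2)

def simulate(ball_speed=30.0, launch_deg=50.0, fielder_start=70.0,
             rise_rate=0.33, gain=30.0, max_speed=9.0):
    ang = math.radians(launch_deg)
    bx, by = 0.0, 0.0                                   # ball position (m)
    vx, vy = ball_speed * math.cos(ang), ball_speed * math.sin(ang)
    fx = fielder_start                                  # fielder position (m)
    t = 0.0

    while True:
        t += DT
        bx += vx * DT; by += vy * DT; vy -= G * DT      # ballistic flight
        if by <= 0.0:                                   # ball has come down
            break
        gap = max(fx - bx, 0.25)                        # horizontal gap, guarded
        tan_theta = by / gap                            # what the fielder "sees"
        error = tan_theta - rise_rate * t               # rising too fast (+) or too slowly (-)?
        speed = max(-max_speed, min(max_speed, gain * error))
        fx += speed * DT                                # + backs up, - runs in

    return bx, fx                                       # landing point vs. fielder

if __name__ == "__main__":
    landing, fielder = simulate()
    print(f"ball lands near x = {landing:.1f} m; fielder ends up at x = {fielder:.1f} m")
```

The point of the toy is the one the paper makes: continuously maintaining a simple visual relationship steers the simulated fielder toward roughly the right spot without ever estimating the ball’s trajectory or landing point.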

We will never have to worry about a human mind going amok in cyberspace, and we will never achieve immortality through downloading

Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.

One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.

Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.

Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).

This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.

Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)

Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.

We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.

Your keyboard could be harboring dangerous bacteria.


Your computer keyboard is probably more dangerous than you think. It can be home to staphylococcus and E. coli and needs to be disinfected regularly.

Watch the video: http://www.businessinsider.in/Your-keyboard-could-be-harboring-dangerous-bacteria/articleshow/52348586.cms?from=mdr

First evidence of icy comets orbiting a sun-like star.


Researchers detected low levels of carbon monoxide gas around the star in amounts that are consistent with the comets in our own solar system.
ALMA image of the ring of comets around HD 181327 (colors have been changed). The white contours represent the size of the Kuiper Belt in the solar system.
An international team of astronomers has found evidence of ice and comets orbiting a nearby sun-like star, which could give a glimpse into how our own solar system developed.

Using data from the Atacama Large Millimeter Array (ALMA), the researchers, led by the University of Cambridge, detected low levels of carbon monoxide gas around the star in amounts that are consistent with the comets in our own solar system.

The results, which will be presented today at a conference in Santiago, Chile, are a first step in establishing the properties of comet clouds around sun-like stars just after the time of their birth.

Comets are essentially “dirty snowballs” of ice and rock, sometimes with a tail of dust and evaporating ice trailing behind them, and are formed early in the development of stellar systems. They are typically found in the outer reaches of our solar system but become most clearly visible when they visit the inner regions. For example, Halley’s Comet visits the inner solar system every 75 years, some take as long as 100,000 years between visits, and others only visit once before being thrown out into interstellar space.

It’s believed that when our solar system was first formed, the Earth was a rocky wasteland, similar to how Mars is today, and that as comets collided with the young planet, they brought many elements and compounds, including water, along with them.

The star in this study, HD 181327, has a mass about 30% greater than the Sun and is located 160 light-years away in the Painter constellation. The system is about 23 million years old, whereas our solar system is 4.6 billion years old.

“Young systems such as this one are very active, with comets and asteroids slamming into each other and into planets,” said Sebastián Marino from Cambridge’s Institute of Astronomy. “The system has a similar ice composition to our own, so it’s a good one to study in order to learn what our solar system looked like early in its existence.”

Using ALMA, the astronomers observed the star, which is surrounded by a ring of dust caused by the collisions of comets, asteroids, and other bodies. It’s likely that this star has planets in orbit around it, but they are impossible to detect using current telescopes.

“Assuming there are planets orbiting this star, they would likely have already formed, but the only way to see them would be through direct imaging, which at the moment can only be used for very large planets like Jupiter,” said Luca Matrà from Cambridge’s Institute of Astronomy.

In order to detect the possible presence of comets, the researchers used ALMA to search for signatures of gas, since the same collisions that caused the dust ring to form should also cause the release of gas. Until now, such gas has only been detected around a few stars, all substantially more massive than the Sun. Using simulations to model the composition of the system, they were able to increase the signal-to-noise ratio in the ALMA data and detect very low levels of carbon monoxide gas.

“This is the lowest gas concentration ever detected in a belt of asteroids and comets — we’re really pushing ALMA to its limits,” said Marino.

“The amount of gas we detected is equivalent to an ice ball about 120 miles (200 kilometers) in diameter, which is impressive considering how far away the star is,” said Matrà. “It’s amazing that we can do this with exoplanetary systems now.”

Can Eating Cheese Lower Your Risk For Diabetes?


A recent study shows a link between cheese consumption and a decreased risk of diabetes. Have a look!

If you told me I had to give up cheese forever, I’d probably tell you that wasn’t going to happen. While cheese is high in calories and saturated fat, a recent study shows a link between cheese consumption and decreased diabetes risk. Have a look.

Researchers analyzed data from eight European countries, covering 340,234 people, and found that people who consider themselves cheese eaters have a 12 percent lower risk of Type 2 diabetes than people who don’t eat cheese. The study, published in the American Journal of Clinical Nutrition, defined cheese eaters as people consuming anywhere from 11 to 56 grams of cheese a day.

This particular study showed a link between cheese consumption and lower diabetes risk, but other previous studies have found the exact opposite. So, researchers aren’t suggesting we all go cheese wild as a way to avoid diabetes, but if you’re already a cheese lover, it may not hurt to keep up with your habits. So long as they also include plenty of fruits and vegetables, of course.

Can Melanoma Be Treated Without Drug Resistance?


The studies, published in the January 2014 issue of the journal Cancer Discovery, used patient biopsy samples to show the key cell-signaling pathways that BRAF-mutant melanoma cells use as they learn to become resistant to inhibitor drugs.

The samples also showed how limiting BRAF inhibitors to one type allows melanoma cells to evolve and develop drug resistance.

Dr. Lo said that better understanding tumor resistance mechanisms should allow doctors to combine inhibitor drugs that block multiple resistance routes. This could prolong tumor shrinkage or even make tumors disappear completely.


“This study lays a foundation for clinical trials to investigate the mechanisms of tumor progression in these melanoma patients,” he said in an email.

He noted that three patients have already been enrolled in an ongoing clinical trial currently in the dose-finding phase.

In the second study, researchers found that as soon as BRAF inhibitor drugs are introduced, melanoma tumors are able to quickly turn on drug resistance pathways, a process called early adaptive resistance.

Over time, these pathways are further reinforced, allowing the tumor cells to break free of the BRAF inhibitor and resume growth. So while early and late resistance processes are linked, the research shows, the endgames can be quite similar although the mechanisms to these ends may differ.

Professor of medicine Dr. Antoni Ribas, a JCCC member and co-investigator on these studies, said in a UCLA press release that locating the central melanoma escape pathways is an important conceptual advance in fighting BRAF inhibitor resistance.

“We now have a landscape view of how melanoma first adapts and then finds ways to overcome what is initially a very effective treatment. We have already incorporated this knowledge to the testing of new combination treatments in patients to get us ahead of melanoma and not allow it to escape,” Ribas said.

Weighing the Risks and Benefits of MS Medications



There is still no cure for multiple sclerosis (MS), but a lot has changed in the way it’s treated. Advancements in medications, known as disease-modifying therapies, have been shown to change, slow or stop the natural progression of MS in many patients, but these drugs may not be right for everyone.

The National Multiple Sclerosis Society’s Clinical Advisory Board agrees that disease-modifying medications are most effective when started early, before the disease has the opportunity to further progress. Many neurologists recommend starting treatment with either interferon beta or glatiramer acetate when MS is diagnosed.

Most experts agree permanent damage to the central nervous system may occur early on, while your symptoms are still mild. Early treatment may help prevent or delay this damage.

The National MS Society says treatment with medicine may also be considered after the first attack in some people who are at a high risk for MS (before the MS diagnosis is definite).

In April 2012, the drug maker Merck announced preliminary results of a new Phase III, three-year clinical trial.

The results showed that MS patients who received injections of interferon beta-1a soon after their first signs of possible MS were less likely (28 percent) to progress to clinically definite MS than patients who switched to interferon beta-1a from placebo (41 percent).

The drug is currently available in the European Union, Asia and Latin America but is not available in the United States. The results are considered preliminary because the clinical trial is ongoing; long-term data is not expected for another five years.

There are currently eight Food and Drug Administration (FDA) approved drugs for use in relapsing forms of MS (including secondary-progressive multiple sclerosis or SPMS, for those people who are still experiencing relapses.) The National MS Society says of these, only one drug, mitoxantrone (Novantrone), is approved specifically for SPMS.

Whether to take a disease-modifying medication is one of the most important decisions MS patients have to make.

This decision should only be made after carefully considering and weighing how your individual lifestyle could affect your ability to stay with the treatment over time.

Other considerations should be the disease course, known side effects, and the potential benefits and risks of each therapy, the National MS Society says. (A full list of MS drugs, their known side effects and drug warnings are available on the Society’s website.)

“A full discussion with a knowledgeable health care professional is the best guide in your decision, as each person’s body or disease can respond to these medications in different ways,” MS Society literature said.

While all MS medications have been shown to reduce the frequency of relapses by as much as 68 percent, and to lower or eliminate the development of new lesions, these benefits come at a cost.

Disease-modifying medications are expensive. Some drugs can run as much as $2,000 to $7,000 per month, even with insurance coverage.

It is also possible your insurance company will cover some MS drugs but not others, so it’s a very good idea to verify your coverage information in advance of finalizing your decision.