Chinese Botanical Medicine: Wikipedia Claims it is Fake, We are Certain it is Real


According to the World Health Organization, 80% of the world’s population uses herbal medicine. Are these billions of people simply deluded by superstitious nostrums, as Wikipedia and so-called ‘skeptics’ imply?

Modern conventional medicine has increasingly become a culture of scientific and historical denialism. Although claiming to be an objective discipline of consistent progress, the medical establishment more often than not denies the insights, discoveries, medical systems and methodologies of the distant past and of non-Western cultures. Instead, Western medicine is racing toward a retro-future, placing blind faith in the promises of newly engineered, synthetic drugs. Sadly, this pursuit is misconstrued as synonymous with important medical breakthroughs and with the evolution of scientific medicine in general. Yet as the statistics show, modern medicine is on a collision course with itself. This is most evident in the increasing failures conventional medicine faces in fighting life-threatening diseases and in the annual increases in iatrogenic injuries and deaths.

Upon graduation, every new physician repeats “I will not give a lethal drug to anyone if I am asked, nor will I advise such a plan.” The Oath composed by the wise Greek medical sage Hippocrates goes on to say, “I will use those dietary regimens which will benefit my patients according to my greatest ability and judgement, and I will do no harm or injustice to them.” Hippocrates was a naturalist. Unlike physicians today, he was an expert in the healing powers found in the natural world and a keen observer of the health benefits of different foods, plants and herbs. Modern allopathic doctors, by contrast, are largely ignorant not only of the natural world but also of the epigenetic, environmental and behavioral causes of diseases and the means to prevent them. They have also removed themselves from honoring the Hippocratic Oath.

How well has modern medicine lived up to its Oath? Adverse drug events (ADEs) are rising. They have become a plague upon public health and our healthcare system. As of 2014, prescription drug injuries totaled 1.6 million events annually. Every day, over 4,000 Americans experience a serious drug reaction requiring hospitalization. And over 770,000 people have ADEs during hospital stays.[1] The conditions most commonly involved in ADEs are hypertension, congestive heart failure, atrial fibrillation, volume depletion disorders and atherosclerotic heart disease.[2] According to the Centers for Disease Control, in 2016 there were 64,070 deaths directly associated with prescription overdoses; this is greater than the number of American soldiers killed during the entire Vietnam War.[3] For 2017, the CDC reported over 42,000 deaths from prescription opioid drugs alone.[4] Yet this figure is probably much higher, given the CDC’s practice of reporting statistics very conservatively and the many cases that never get properly reported. So when we consider that there were over 860,000 physicians practicing in the US in 2016, potentially most physicians in America have contributed to ADEs.

No legitimate and highly developed alternative or natural medical practice has such a dismal track record of illness and death. Nevertheless, when a rare ADE, poisoning or death occurs, Skeptics in the radical fringe Science-Based Medicine (SBM) movement, who rabidly oppose Complementary and Alternative Medicine (CAM) and Traditional Chinese Medicine (TCM), are quick to report the incident as a national crisis and condemn the use of traditional natural medicine altogether. Yet if we look at the potential number of iatrogenic injuries and deaths in the decades since the start of the pharmaceutical and biotechnology boom in the late 1980s, we are looking at over 60 million ADE incidents caused by conventional Western medicine alone. This is nothing to celebrate, and no concerted national effort is being made, within the medical establishment or among the followers of SBM, to challenge the dominant medical paradigm responsible for this crisis.

According to the World Health Organization, 80% of the world’s population uses herbal medicine, and this trend is increasing exponentially.[5] Skeptics have few viable and rational explanations to account for it. Since they regard traditional herbal medical systems as quackery, everyone experiencing relief or a successful treatment from botanicals is simply having a placebo-effect conversion experience. Fortunately, in the US and other Western nations the public is rapidly losing its trust in and satisfaction with conventional Western medical practice and is seeking safer alternatives. With healthcare costs escalating annually and prescription ADEs on the increase as more and more drugs are fast-tracked past federal regulatory hurdles, relying solely upon allopathic medicine is a dangerous bargain. Dr. Dominic Lu of the University of Pennsylvania, president of the American Society for the Advancement of Anesthesia and Sedation, suggests that Chinese herbal and Western medicine might complement each other if we make the effort to investigate their synergistic therapeutic effects. Lu believes Oriental concepts of human anatomy should be further included in higher educational health science curriculums.[6] In addition, we would also note that with conventional medicine in crisis, people are accessing the numerous resources on the internet to educate themselves about the medicinal properties of plants, herbs, supplements and foods as part of their personal therapeutic protocols.

In our previous article in this series, which exposed the scientific denialism and ideological agenda behind Skepticism’s and Wikipedia’s promotion of SBM’s regressive campaign to turn people away from non-conventional, non-drug-based medicine, we tackled SBM’s and Wikipedia’s attack on acupuncture. In this segment we will focus upon Chinese botanical medicine. In mainland China, acupuncture and herbology are treated as separate disciplines; therefore we will only look at Chinese botanical medicine.

Wikipedia has a noteworthy amount to say about traditional Chinese herbal medicine. However, its major criticisms rely heavily upon reviews of the peer-reviewed research that are five or more years old. Some references in fact have nothing to do with Chinese herbology. The majority of clinical research into Chinese botanicals and medical preparations is found only in Chinese databases. Therefore, Western analytical reviews, including the Cochrane reports, are extremely limited, inconclusive and biased. Critics frequently dismiss published Chinese research as “incomplete, some containing errors or were misleading.”[7] These are the same Skeptic criticisms Wikipedia levels against traditional herbal medical systems in general. With over 181,000 peer-reviewed research papers and reviews referring to TCM listed in the National Institutes of Health PubMed database, it is ridiculous and disingenuous to assume Wikipedia’s editors have scoured this massive body of science to make any sound judgement about TCM’s efficacy.

Under the heading “Chinese Herbology,” Wikipedia states, “A Nature editorial described TCM as ‘fraught with pseudoscience,’ and said that the most obvious reason why it has not delivered many cures is that the majority of its treatments have no logical mechanism of action… Research into the effectiveness of traditional Chinese herbal therapy is of poor quality and often tainted by bias, with little or no rigorous evidence of efficacy.”[8] Nature’s editorial, which reflects the same ill-informed opinions frequent in Skeptical criticisms of natural health, does not cite any research to support its sweeping, prejudiced opinion. The editorial is primarily a diatribe against the growing popularity of traditional medicine in the Chinese domestic market, estimated by the Boston Consulting Group to be worth $13 billion in 2006.[9] In addition, as noted above, Wikipedia’s sources include a review of herbal medicine published in the South African Medical Journal that only looked at six African botanicals, none of which are part of the Chinese pharmacopoeia.[10]

We would be negligent not to state a serious concern that readers should be aware of regarding Chinese medicinal herbs and preparations, one rightly noted by the SBM writers and Wikipedia: the high levels of toxic contaminants, notably arsenic, lead and other toxic chemicals, found in exported Chinese herbs and formulas. However, Wikipedia fails to note the real reasons for this warning. Rather, it frames the caution as a means to discredit Chinese botanical medicine altogether. The export of toxic herbs is largely due to China’s enormous and out-of-control environmental problems, including toxic atmospheric particulate matter from pollution, toxic dumping and waste spills in water supplies, and poor agricultural practices. In some countries, such as Japan and Taiwan, federal regulations for the import and export of medical botanicals are stricter, and clean, non-toxic botanical herbs and preparations are readily available. There remain very reliable sources for high-quality, cleanly grown Chinese herbs.

One of SBM’s leading spokespersons, David Gorski, would like us to believe that Mao Tse-tung should be condemned for restoring traditional Chinese medicine in mainland China.[11] But this is a blatant half-truth. In fact, based upon the historical facts, Gorski and his colleagues have far more in common with Chairman Mao. It was during Mao’s reign that classical Chinese medicine took an enormous leap backwards. The ancient system was originally banned during the Chinese Nationalist movement in the early 20th century because its leaders believed the old ways were preventing the nation from modernizing. Mao initially made a small effort to restore the practice when he came to power. After the Communist Revolution, however, Mao turned against traditional medicine, and the Cultural Revolution again outlawed the practice. Traditional doctors who retained the most extensive knowledge and wisdom about classical Chinese anatomical theory and medicinal herbs were systematically gathered into Communist conversion programs, imprisoned and/or killed. TCM nearly died out altogether on the mainland. Years later, when the Communists attempted to resurrect the ancient medical wisdom, only a few hundred doctors could be found throughout the country with sufficient knowledge to start TCM anew. Yet Mao remained ambivalent. He wrote, “Even though I believe we should promote Chinese medicine… I personally do not believe in it. I don’t take Chinese medicine.”[12] Unfortunately, what is commonly called Traditional Chinese Medicine (TCM) today is a partial reconstruction of the original ancient system that had developed over thousands of years. Much has been lost, and the government’s effort failed. According to Dr. Brigetta Shea, “once the government decided to reinstate some form of China’s traditional medicine, they did it with an emphasis on combining it with Western medical theory. This shifted even acupuncture theory, as Western anatomical teaching was adopted and esoteric subtle anatomy was discarded.”[13] The result is that TCM today is a mere shadow of what it was in the past, little more than a watered-down system contaminated with Western reductionist medical theories. Fortunately, growing interest in TCM is inspiring young researchers and practitioners to travel to China, Taiwan, Japan and Korea to try to recover the more ancient classical medical teachings that were not included in the standardized TCM curriculums.

SBM founder Stephen Novella remarks, “TCM is a pre-scientific superstitious view of biology and illness, similar to the humoral theory of Galen, or the notions of any pre-scientific culture. It is strange and unscientific to treat TCM as anything else. Any individual diagnostic or treatment method within TCM should be evaluated according to standard principles of science and science-based medicine, and not given special treatment.”[14] The remainder of Novella’s argument is an example of taking TCM terms literally and not penetrating their deeper functions to discover their correlations with scientifically identified biomolecular substances and events. Novella also believes that the Chinese medical theories of qi and the acupuncture meridians share the same magical thinking as “ether, phlogiston, Bigfoot, and unicorns.”[15]

The master physicians and pioneers of the advanced traditional medical systems of Greece, India, China and Tibet were very skilled and astute in identifying metabolic disturbances in their patients. Although on the surface the humors may appear to be outdated or primitive mythological terms, a deep study of the traditional medical texts reveals they have direct correspondences to biochemical and biological processes that are well known to modern medicine. Consider the recent translators of the enormous medical corpus composed in the 11th century by Avicenna, one of the world’s greatest physicians, who revived the medical theories of Galen at the height of Islamic civilization’s golden age: Dr. Hakima Amri, professor of molecular biology at Georgetown University, and Dr. Mones Abu-Asab, a senior scientist and expert in phylogenetic systematics at the National Institutes of Health. They discovered that the ancient descriptions of the humors correlate directly with properties of fats, proteins and organic acids — the cornerstones of metabolic changes. Due to its linear and non-systematic way of analyzing health and disease, modern medicine focuses upon single metabolic pathways and fails to consider that these pathways work in concert and are co-dependent with others. For example, a patient with high LDL cholesterol will be prescribed a statin without any full understanding of the biological imbalances that increased the LDL. Traditional herbal systems, including Chinese botanical medicine, provide more parameters, such as a tissue’s hydration and energy production in the case of abnormal cholesterol levels. Western medicine does not take hydration and energy production into account in making a diagnostic assessment of the reasons for a patient’s cholesterol imbalance. This is where the ancient theory of humors, the fundamental “fluids” of the body — traditionally defined as blood, phlegm and yellow and black bile — provides clues.

Western medicine has no equivalent to what traditional systems refer to as “dystemperament” in a biological system or organ. Dystemperament was understood as an imbalance in a person’s unique physical, genetic and psychological disposition. Today the rapidly growing discipline of Functional Medicine finds agreement with this principle for diagnosing and treating illness. Conventional medicine, in contrast, still endeavors to define the causes of many diseases at a singular cellular or molecular level. It also faces a serious predicament in being based upon a one-drug-one-target paradigm of drug research and development. Traditional systems, including Chinese herbology, being far more complete and efficient medical systems, don’t struggle with this dilemma. For half a century we have spent hundreds of billions of dollars on reductionist biomedical research to identify genes, proteins and metabolic biochemical changes that contribute to disease. But despite the enormous body of knowledge and data gathered from these astronomically costly projects, there have been few practical and meaningful results in finding safe and effective treatments outside of prescribing potentially lethal drugs.

Most evidence-based medical reviews of research on the efficacy of specific Chinese herbs fail to take into account that Chinese herbology is a complete system. It is unrealistic to research a single traditional Chinese herb and draw a definitive conclusion. An herbal concoction can include 18 or more ingredients, and these may be fermented or simmered for hours to produce the pharma-therapeutic properties useful for the treatment of disease. This was noted in a Cochrane review of Chinese medical herbs for treating acute pancreatitis.[16] It is estimated that there are over 13,000 different medicinal ingredients found in the annals of Chinese medical texts and well over 100,000 unique decoctions and recipes. While the vast majority of substances used in Chinese medicinal preparations are plant-based, parts of animals and specific minerals may also be included.[17,18]

Regardless of the Skeptics’ and Wikipedia’s invective aimed at diminishing Chinese medicine’s efficacy and successes, TCM is booming and extraordinary research continues to produce positive discoveries. Even Bayer Pharmaceutical purchased the Chinese herbal company Dihon Pharmaceutical Group in 2014 because of the huge potential for discovering powerful phytochemicals to treat a wide variety of diseases. Helmut Kaiser Consultancy in Germany predicts that annual revenues in Chinese botanicals will triple by 2025 from 2015 revenues of $17 billion.[19] A 2012 Morgan Stanley review found that even among Chinese physicians trained in Western medical schools, TCM is being used as the first line of defense against disease in 30% of medical cases.[20]

Curiously, Skeptics and Wikipedia fail to acknowledge that the 2015 Nobel Prize in Medicine was awarded to the Chinese scientist Tu Youyou for developing the anti-malarial drug artemisinin from Artemisia annua, a traditional Chinese remedy.[21] In 2015, researchers at the Texas Biomedical Research Institute and the Center for Integrative Protein Science in Munich published findings in Science that an alkaloid in an ingredient of the Chinese formula Han Fang Ji protected human white blood cells from the Ebola virus.[22] And in 2006, the FDA gave its first drug approval to an ointment based upon Chinese botanicals, including green tea leaves, for the treatment of genital warts caused by human papillomavirus.[23] In a bioinformatics database analysis comparing phytochemicals in Chinese plants with the modern Comprehensive Medicinal Chemistry database of pharmaceutical drug ingredients, over 100 Chinese herbal phytochemicals had direct correlates with ingredients used in approved pharmaceutical drugs on the market.[24]
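For readers curious how such a database comparison might work in practice, here is a minimal sketch using the open-source RDKit cheminformatics toolkit: each molecule is encoded as a structural fingerprint, and pairs are scored by Tanimoto similarity. The compounds, SMILES strings and any similarity cutoff are our own illustrative assumptions, not the cited study’s actual data or method.

```python
# Sketch of comparing a "plant compound" set against a "drug
# ingredient" set by structural similarity. Requires RDKit. The
# molecules here are illustrative stand-ins, not the study's data.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

plant_compounds = {"salicylic acid": "OC(=O)c1ccccc1O"}   # willow-bark compound
drug_ingredients = {"aspirin": "CC(=O)Oc1ccccc1C(=O)O"}   # its synthetic relative

def fingerprint(smiles):
    """Encode a molecule as a 2048-bit Morgan (circular) fingerprint."""
    return AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles), 2, nBits=2048)

for p_name, p_smi in plant_compounds.items():
    for d_name, d_smi in drug_ingredients.items():
        sim = DataStructs.TanimotoSimilarity(fingerprint(p_smi), fingerprint(d_smi))
        print(f"{p_name} vs {d_name}: Tanimoto similarity {sim:.2f}")
# A chosen cutoff (say, 0.85 for near-identical scaffolds) would then
# decide which pairs count as "direct correlates".
```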

One excellent example of the synergistic effects of herbal combinations in TCM is the duo Coptidis rhizoma and Evodia rutaecarpa. In classical Chinese medical practice, this formula has been given for centuries to treat gastric conditions, including the rapid healing of ulcers. Modern research has shown that together these herbs inhibit the bacterium Helicobacter pylori, which frequently accompanies ulcers. In the US, approximately 20% of people under 40 years of age and over 50% of those above 60 are estimated to have an H. pylori infection, which can be responsible for gastritis, stomach and duodenal ulcers, gastric lymphoma and stomach cancer. The herbs were also found to contain limonene, used in drugs as an antineoplastic molecule, and gamolenic acid, used as an ingredient in pharmaceutical anti-tumor drugs.[25]

Finally, we might take a look at the 2017–2018 flu season. The influenza vaccine for this past season was a dud and failed to protect most recipients from infection. According to the CDC, the vaccine was 36% effective.[26] Almost 100 pediatric flu deaths were reported. Later research at Rice University determined the vaccine was at best only 20% effective.[27] With conventional medicine and our federal health agencies failing to protect the public, tens of thousands of people experiencing the onset of flu-like symptoms rushed to purchase the Chinese herbal cold formula Nin Jiom Pei Pa Koa, which costs as little as $6 in New York City’s Chinatown. Pei Pa Koa is one of the most popular cold, flu and cough remedies across East Asia and Singapore, first formulated during the Qing dynasty in the 17th century. The results are often immediate. When we desire relief from a health condition, that is all that matters.

Therefore, we have absolutely no need for Skeptics preaching from their bully pulpits. There is no need to read the vitriol of Science-Based Medicine’s priesthood. And we certainly have no need to refer to Wikipedia’s encyclopedia of biased misinformation parroting Skepticism’s paranoia and its deceptive efforts to censor natural health. We don’t need any of them to tell us that the relief we experience after taking a medicinal herb or natural formula is only a placebo effect or a figment of our imagination because the scientific research doesn’t meet their standards. The fact of the matter is that the science will never meet their standards, because fundamentalists, whether religious or science-based, cannot be persuaded by factual evidence that conflicts with their ingrained psychological ideologies and fears. And this is the fundamental fallacy and blatant hypocrisy that runs throughout SBM Skepticism and Wikipedia. It is not “science-based,” because it is devoid of the inquisitive open-mindedness that defines authentic scientists. SBM is faith-based, and holds fealty to a grossly reductionist, petulant and brattish mentality incapable of seeing the forest for the trees. In his criticism of TCM, Novella brings the absurdity of Skepticism to a climax: “I maintain that there are many good reasons to conclude that any system [i.e. TCM] which derives from everyday experience is likely to be seriously flawed and almost entirely cut off from reality.”[28] Yet for thousands of years countless people have experienced and attested to the benefits of Chinese botanical medicine. We have no need of Skepticism’s reductionist validation to prove the reality of natural medicine.


Should we be worried about artificial intelligence?


Not really, but we do need to think carefully about how to harness, and regulate, machine intelligence.

By now, most of us are used to the idea of rapid, even accelerating, technological change, particularly where information technologies are concerned. Indeed, as consumers, we helped the process along considerably. We love the convenience of mobile phones, and the lure of social-media platforms such as Facebook, even if, as we access these services, we find that bits and pieces of our digital selves become strewn all over the internet.

More and more tasks are being automated. Computers (under human supervision) already fly planes and sail ships. They are rapidly learning how to drive cars. Automated factories make many of our consumer goods. If you enter (or return to) Australia with an eligible e-passport, a computer will scan your face, compare it with your passport photo and, if the two match up, let you in. The “internet of things” beckons; there seems to be an “app” for everything. We are invited to make our homes smarter and our lives more convenient by using programs that interface with our home-based systems and appliances to switch the lights on and off, defrost the fridge and vacuum the carpet.


Clever though they are, these programs represent more-or-less familiar applications of computer-based processing power. With artificial intelligence, though, computers are poised to conquer skills that we like to think of as uniquely human: the ability to extract patterns and solve problems by analysing data, to plan and undertake tasks, to learn from our own experience and that of others, and to deploy complex forms of reasoning.

The quest for AI has engaged computer scientists for decades. Until very recently, though, AI’s initial promise had failed to materialise. The recent revival of the field came as a result of breakthrough advances in machine intelligence and, specifically, machine learning. It was found that, by using neural networks (interlinked processing points) to implement mathematically specified procedures or algorithms, machines could, through many iterations, progressively improve on their performance – in other words, they could learn. Machine intelligence in general and machine learning in particular are now the fastest-growing components of AI.
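To make “learning through many iterations” concrete, here is a minimal sketch in Python (the data and parameters are illustrative): a single artificial neuron adjusts its weights a little on every pass, and its answers to a toy problem steadily improve.

```python
# A single artificial neuron "learns" the logical AND function by
# gradient descent: each iteration nudges the weights in whatever
# direction reduces the prediction error. Illustrative toy example.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 0.0, 0.0, 1.0])                           # AND targets

rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=2), 0.0, 0.5   # weights, bias, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                 # many small improvements
    pred = sigmoid(X @ w + b)         # forward pass
    err = pred - y                    # how wrong are we?
    w -= lr * (X.T @ err) / len(y)    # adjust weights downhill
    b -= lr * err.mean()              # adjust bias too

print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 0, 0, 1]
```

Scaled up to millions of weights arranged in layered networks, the same nudge-and-repeat loop is what powers modern machine learning.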

The achievements have been impressive. It is now 20 years since IBM’s Deep Blue program, using traditional computational approaches, beat Garry Kasparov, then the world chess champion. With machine-learning techniques, computers have conquered even more complex games such as Go, a strategy-based game with an enormous range of possible moves. In 2016, Google DeepMind’s AlphaGo program beat Lee Sedol, one of the world’s best Go players, four games to one.

Allan Dafoe, of Oxford University’s Future of Humanity Institute, says AI is already at the point where it can transform almost every industry, from agriculture to health and medicine, from energy systems to security and the military. With sufficient data, computing power and an appropriate algorithm, machines can be used to come up with solutions that are not only commercially useful but, in some cases, novel and even innovative.

Should we be worried? Commentators as diverse as the late Stephen Hawking and development economist Muhammad Yunus have issued dire warnings about machine intelligence. Unless we learn how to control AI, they argue, we risk finding ourselves replaced by machines far more intelligent than we are. The fear is that not only will humans be redundant in this brave new world, but the machines will find us completely useless and eliminate us.

The University of Canberra’s robot Ardie teaches tai chi to primary school pupils.

If these fears are realistic, then governments clearly need to impose some sort of ethical and values-based framework around this work. But are our regulatory and governance techniques up to the task? When, in Australia, we have struggled to regulate our financial services industry, how on earth will governments anywhere manage a field as rapidly changing and complex as machine intelligence?

Governments often seem to play catch-up when it comes to new technologies. Privacy legislation is enormously difficult to enforce when technologies effortlessly span national boundaries. It is difficult for legislators even to know what is going on in relation to new applications developed inside large companies such as Facebook. On the other hand, governments are hardly IT ingenues. The public sector provided the demand-pull that underwrote the success of many high-tech firms. The US government, in particular, has facilitated the growth of many companies in cybersecurity and other fields.

Governments have been in the information business for a very long time. As William the Conqueror knew when he ordered his Domesday Book to be compiled in 1085, you can’t tax people successfully unless you know something about them. Spending of tax-generated funds is impossible without good IT. In Australia, governments have developed and successfully managed very large databases in health and human services.

The governance of all this data is subject to privacy considerations, sometimes even at the expense of information-sharing between agencies. The evidence we have is that, while some people worry a lot about privacy, most of us are prepared to trust government with our information. In 2016, the Australian Bureau of Statistics announced that, for the first time, it would retain the names and addresses it collected during the course of the 2016 population census. It was widely expected (at least by the media) that many citizens would withhold their names and addresses when they returned their forms. In the end, very few did.

But these are government agencies operating outside the security field. The so-called “deep state” holds information about citizens that could readily be misused. Moreover, private-sector profit is driving much of the current AI surge (although, in many cases, it is the thrill of new knowledge and understanding, too). We must assume that criminals are working out ways to exploit these possibilities, too.

If we want values such as equity, transparency, privacy and safety to govern what happens, old-fashioned regulation will not do the job. We need the developers of these technologies to co-produce the values we require, which implies some sort of effective partnership between the state and the private sector.

Could policy development be the basis for this kind of partnership? At the moment, machine intelligence works best on problems for which relevant data is available, and the objective is relatively easy to specify. As it develops, and particularly if governments are prepared to share their own data sets, machine intelligence could become important in addressing problems such as climate change, where we have data and an overall objective, but not much idea as to how to get there.

Machine intelligence might even help with problems where objectives are much harder to specify. What, for example, does good urban planning look like? We can crunch data from many different cities, and come up with an answer that could, in theory, go well beyond even the most advanced human-based modelling. When we don’t know what we don’t know, machines could be very useful indeed. Nor do we know, until we try, how useful the vast troves of information held by governments might be.

Perhaps, too, the jobs threat is not as extreme as we fear. Experience shows that humans are very good at finding things to do. And there might not be as many existing jobs at risk as we suppose. I am convinced, for example, that no robot could ever replace road workers – just think of the fantastical patterns of dug-up gravel and dirt they produce, the machines artfully arranged by the roadside or being driven, very slowly, up and down, even when all the signs are there, and there is absolutely no one around. How do we get a robot, even one capable of learning by itself, to do all that?


Recent Study Shows How Sunscreen Causes Cancer, Not the Sun


Did you know that despite the invention of sunscreen, cases of skin cancers are on the rise every year? Elizabeth Plourde, Ph.D., is a California-based scientist who has shown that malignant melanoma and all other skin cancers increased significantly with ubiquitous sunscreen use over a 30-year period. Sunscreens contain chemicals that are known carcinogens and endocrine-disrupting chemicals (EDC).

So why so much faith in sunscreen? What’s going on here? Sunscreen is a product we’ve been sold that we cannot live without. But just think about what we did for the thousands of years before its invention. The sun has been a source of life since the beginning of human existence and has many benefits to the human body.

The Sun Doesn’t Harm Us

Firstly, the sun doesn’t harm us. It only nourishes us. There’s even really good science to prove this. One of the latest major studies was published by the Karolinska Institute in Sweden in 2014.

They conducted a study that found women who avoided lying out in the sun were actually TWICE as likely to die compared to those who made sunbathing a daily ritual.

This wasn’t a small study either. It looked at 30,000 women for a period of 20 years!

The A’s And B’s Of Sun Rays

We often hear about the different types of sun rays, so here’s the lowdown.

Ultraviolet B rays (UVB) are the primary cause of sunburn and non-melanoma skin cancers such as squamous cell carcinoma. The chemicals that form a product’s sun protection factor are aimed at blocking those UVB rays.

Ultraviolet A rays (UVA) penetrate deeper into the skin and are harder to block. Scientists know less about the dangers of UVA radiation, which is itself a concern. The general consensus now is that while UVA damage is much less obvious than UVB damage, it is probably a lot more serious!

False Sense Of Security

A sunscreen lotion’s SPF rating has little to do with the product’s ability to shield the skin from UVA rays, because UVA and UVB protection do not go hand in hand. High-SPF products suppress sunburn from UVB but not other types of sun damage. They therefore tend to lull users into staying in the sun longer and overexposing themselves to both UVA and UVB rays.
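Some back-of-the-envelope arithmetic illustrates the diminishing returns, assuming the standard interpretation of SPF, under which roughly 1/SPF of the UVB dose reaches the skin:

```python
# Under the standard reading of SPF, a fraction of roughly 1/SPF of
# the UVB dose gets through. The protection curve flattens sharply:
# SPF 30 already blocks ~96.7%, SPF 100 only ~99% -- and none of this
# arithmetic says anything about UVA protection.
for spf in (4, 15, 30, 50, 100):
    print(f"SPF {spf:>3}: blocks ~{1 - 1/spf:.1%} of UVB")
```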

Since people think they are ‘protected,’ they tend to extend their time in the sun well past the point when users of low-SPF products or natural oils head indoors. As a result, while users of conventional sunscreen may get fewer UVB-inflicted sunburns than unprotected sunbathers, they are likely to absorb more damaging UVA radiation (whose cancer-causing effects studies have yet to settle).

Philippe Autier, a scientist formerly with the International Agency for Research on Cancer and part of the World Health Organization, has conducted numerous studies on sunbathers and believes that high-SPF products spur “profound changes in sun behavior” that may account for the increased melanoma risk found in some studies. We can now spend the whole day at the beach without having to retreat to cover.

More Chemicals Than You Bargained For

High-SPF products require higher concentrations of sun-filtering chemicals than low-SPF sunscreens or natural oils. Some of these ingredients may pose health risks when they penetrate the skin: they have been linked to tissue damage and potential hormone disruption, and may trigger allergic skin reactions.

If studies showed that high-SPF products were better at reducing skin damage and skin cancer risk, then perhaps this extra chemical exposure might be justified. But since they don’t offer any such benefit, alternative sunscreens start to look a whole lot more appealing.

Natural Sun Protection

When we are outside, the light that comes into our eyes sends signals to the pituitary gland, which triggers hormones to be released for skin protection.

The more we try to fool nature with chemicals the more cancer and other sickness shows up. Often the stress surrounding these health concerns is more detrimental than the issue itself. Health is simple and always has been.

At least let kids go out and play in the sun to develop enough Vitamin D before slathering all those chemicals on them.

Enjoy the life-giving amazing sun rays you are so blessed to have! Build a tan slowly, be smart and you will live a long healthy happy life.

 

Do Cellphones Cause Cancer?


The question of whether cellphones can cause cancer became a popular one after the dramatic increase in cell phone use since the 1990s. Scientists’ main concern is that cell phones can increase the risk of brain tumors or other tumors in the head and neck area – and as of now, there doesn’t seem to be a clear answer.

Cell phones give off a form of energy known as radiofrequency (RF) waves. They are at the low-energy end of the electromagnetic spectrum – as opposed to the higher-energy end where X-rays exist – and they emit a type of non-ionizing radiation. In contrast to ionizing radiation, this type does not cause cancer by damaging DNA in cells, but there is still a concern that it could cause biological effects that result in some cancers.
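The ionizing versus non-ionizing distinction comes down to photon energy, E = hf. A quick back-of-the-envelope comparison (the frequencies are typical illustrative values) shows why an RF photon cannot break chemical bonds the way an X-ray photon can:

```python
# Photon energy E = h * f, converted to electronvolts. Ionizing atoms
# or breaking chemical bonds takes on the order of 1-10 eV per photon;
# a ~2 GHz cellphone photon carries about a millionth of that.
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # joules per electronvolt

for name, f_hz in [("cellphone RF", 2e9), ("visible light", 5e14), ("X-ray", 3e18)]:
    print(f"{name:>13}: {h * f_hz / eV:.2e} eV per photon")
# cellphone RF comes out around 8e-06 eV -- far below ionization energies
```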

However, the only consistently recognizable biological effect of RF energy is heat. The closer the phone is to the head, the greater the expected exposure is. If RF radiation is absorbed in large enough amounts by materials containing water, such as food, fluids, and body tissues, it produces this heat that can lead to burns and tissue damage. Still, it is unclear whether RF waves could result in cancer in some circumstances.


Many factors affect the amount of RF energy a person is exposed to, such as the amount of time spent on the phone, the model of the phone, and whether a hands-free device or speaker is being used. The distance and path to the nearest cell phone tower also play a role. The farther away a person is from the tower, the more energy is required to get a good signal on the phone. The same is true of areas where many people are using their phones and excess energy is required to get a good signal.

RF radiation is so common in the environment that there is no way to completely avoid it. Most phone manufacturers post information about the amount of RF energy absorbed from the phone into the user’s body, called the specific absorption rate (SAR), on their website or user manual. Different phones have different SARs, so customers can reduce RF energy exposure by researching different models when shopping for a phone. The highest SAR in the U.S. is 1.6 watts/kg, but actual SAR values may vary based on certain factors.
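To put the 1.6 watts/kg figure in perspective, here is a rough worst-case estimate of the tissue heating it implies. It deliberately ignores blood perfusion and conduction, which in reality carry most of the heat away, and the specific heat value is a typical assumed figure for soft tissue:

```python
# Worst-case warming rate at the US SAR limit: absorbed power per
# kilogram divided by tissue specific heat, with all cooling ignored.
sar = 1.6          # W/kg, US regulatory SAR limit
c_tissue = 3500.0  # J/(kg*K), typical soft-tissue specific heat (assumed)

rate_k_per_s = sar / c_tissue
print(f"~{rate_k_per_s * 3600:.1f} K per hour, before any cooling")  # ~1.6 K/h
```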

Studies have been conducted to find a possible link between cell phone use and the development of tumors. They are fairly limited, however, due to low numbers of study participants and risk of recall bias. Recall bias can occur when individuals who develop brain tumors are more predisposed to recall heavier cell phone use than those who do not, despite lack of true difference. Also, tumors can take decades to develop, and given that cell phones have only been in use for about 20 years, these studies are unable to follow people for very long periods of time. Additionally, cell phone use is constantly changing.

Outside of direct studies on cell phone use, brain cancer incidence and death rates have changed little in the past decade, making it even more difficult to pinpoint if cell phone use plays a role in tumor development.


Source: http://www.dana-farber.org

 


Posthumous conception raises ‘host of ethical issues’


The legal and moral propriety of conceiving a child with a dead person’s egg or sperm is among the latest fronts being discussed in bioethics.

In Ireland, legislation is under consideration that would permit reproductive cells from deceased individuals to be used by their spouses or partners to conceive children posthumously, according to media reports. The Irish legislature’s Joint Committee on Health discussed the bill once in January and again in February, a spokesperson for the legislature told Baptist Press. A final bill could be drafted in the coming months and put before parliament for debate.

Health Committee chairman Michael Harty said in a news release, “Assisted Human Reproduction (AHR) is becoming increasingly important in Ireland and measures must be put in place to protect parents, donors, surrogates and crucially, the children born through AHR.”

The posthumous conception legislation, which is part of a broader bill, would require children of the procedure to be carried in the womb of a surviving female partner in the relationship, according to an online commentary by Denver attorney Ellen Trachman, who specializes in reproductive technology law.

Posthumous conception has also been considered by lawmakers and courts in the United States, Canada and Israel.

Southern Baptist bioethicist C. Ben Mitchell said posthumous conception “raises a host of ethical issues.”

“There is no moral duty to use the sperm of a deceased husband or the eggs of a deceased wife,” Mitchell, Graves Professor of Moral Philosophy at Union University, told BP via email. “And intentionally bringing a child into the world with only a single parent raises a host of ethical issues, not to mention a host of psychological, emotional and relational issues for that child.”

Frozen sperm can be used later via artificial insemination or in vitro fertilization (IVF). Frozen eggs can be used to conceive a child through IVF. Following IVF, the resultant embryo must implant in a woman’s womb — either the biological mother or a surrogate.

Sperm and eggs can be either donated prior to death or extracted from a corpse shortly following death, according to the German newspaper Der Spiegel.

In Israel, approximately 5,000 young adults have established “biological wills” stating they want their eggs or sperm frozen and used to conceive offspring if they die before having children, Der Spiegel reported March 28. Some posthumously conceived children have been born in Israel and elsewhere, according to media reports.

Posthumous conception also has emerged in the U.S. and Canada, including the 2016 birth of a New York police detective’s daughter two and a half years following her father’s murder, the Irish Examiner reported. The night the detective was murdered, his wife of three months requested that sperm be extracted from his body and preserved.

U.S. law, Trachman wrote, “lacks any clear uniform rules” regarding posthumous conception “but generally permits post-death reproduction with specific consent in place.”

An additional issue related to posthumous reproduction is what to do with frozen embryos when one or both parents die.

Der Spiegel reported a case in Israel, in which a widower sought, via a surrogate mother, to bring to term embryos he and his wife had frozen. A Harvard Law School blog noted a 2014 Texas case in which a 2-year-old stood to inherit 11 frozen embryos after both of his parents were murdered.

Frozen embryos, Mitchell said, are a separate ethical consideration from posthumous conception.

“If the eggs have already been fertilized, there is a moral duty to bring the embryos to term,” Mitchell said. “We should not generate a new human being only to abandon him or her in a petri dish or nitrogen tank. Embryos belong in uteruses.”

Southern Baptist Convention resolutions repeatedly have affirmed that life begins at conception and that all unborn life must be protected. A 2015 resolution, for example, affirmed “the dignity and sanctity of human life at all stages of development, from conception to natural death.”

Is the MMR Vaccine a Fraud or Does It Just Wear Off Quickly?


Story at-a-glance

  • Ninety-five percent of children entering kindergarten have received two doses of MMR vaccine, as have 92 percent of school children ages 13 to 17 years. In some states, the MMR vaccination rate is near 100 percent
  • Despite achieving a vaccination rate that theoretically should ensure vaccine-acquired herd immunity, outbreaks of mumps keep occurring, primarily among those who have been vaccinated
  • Mumps is making a strong comeback among college students, with hundreds of outbreaks occurring on U.S. campuses over the past two decades
  • Recent research suggests the reemergence of mumps among young adults is due, at least in part, to waning immunity; protection from the vaccine is wearing off quicker than expected
  • According to a still-ongoing lawsuit filed in 2010, Merck is accused of falsifying efficacy testing of its mumps vaccine to hide its poor effectiveness. So, resurgence of mumps may be the result of using a vaccine that doesn’t offer much in terms of protection

By Dr. Mercola

In 1986, public health officials stated that MMR vaccination rates for kindergarten children were in excess of 95 percent and that one dose of live attenuated measles, mumps and rubella vaccine (MMR) would eliminate the three common childhood diseases in the U.S.1 In 1989, parents were informed that a single dose of MMR vaccine was inadequate for providing lifelong protection against these common childhood diseases and that children would need to get a second dose of MMR.2

Today, 95 percent of children entering kindergarten3 have received two doses of MMR vaccine, as have 92 percent of school children ages 13 to 17 years.4

In some states, the MMR vaccination rate is approaching 100 percent.5 Despite achieving the sought-for MMR vaccination rate for more than three decades, which theoretically should ensure “herd immunity,” outbreaks of both measles and mumps keep occurring — and many of those who get sick are children and adults who have been vaccinated.

Mumps Is Making a Comeback

As recently reported by Science Magazine6 and The New York Times,7 mumps is making a strong comeback among college students, with hundreds of outbreaks occurring on U.S. campuses over the past two decades. Last summer, the Minnesota Department of Health reported its largest mumps outbreak since 2006.8

According to recent research,9 the reason for this appears to be, at least in part, waning vaccine-acquired immunity. In other words, protection from the MMR vaccine is wearing off quicker than expected. Science Magazine writes:

“[Epidemiologist Joseph Lewnard and immunologist Yonatan Grad, both at the Harvard T. H. Chan School of Public Health in Boston] compiled data from six previous studies of the vaccine’s effectiveness carried out in the United States and Europe between 1967 and 2008. (None of the studies is part of a current fraudulent claims lawsuit against U.S. vaccine maker Merck.)

Based on these data, they estimated that immunity to mumps lasts about 16 to 50 years, or about 27 years on average. That means as much as 25 percent of a vaccinated population can lose immunity within eight years, and half can lose it within 19 years … The team then built mathematical models using the same data to assess how declining immunity might affect the susceptibility of the U.S. population.

When they ran the models, their findings lined up with reality. For instance, the model predicted that 10- to 19-year-olds who had received a single dose of the mumps vaccine at 12 months were more susceptible to infection; indeed, outbreaks in those age groups happened in the late 1980s and early 1990s. In 1989, the Centers for Disease Control and Prevention added a second dose of the vaccine at age 4 to 6 years. Outbreaks then shifted to the college age group.”
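The jump from a 27-year average to “25 percent within eight years” and “half within 19 years” is consistent with a simple exponential model of waning, in which the fraction losing protection within t years is 1 − e^(−t/27). The exponential assumption is ours, for illustration; it is not necessarily the model the Harvard team used:

```python
# If the duration of vaccine protection is exponentially distributed
# with a mean of 27 years, the share losing immunity within t years
# is 1 - exp(-t/27). Illustrative assumption, not the study's model.
import math

MEAN_YEARS = 27.0
for t in (8, 19):
    lost = 1 - math.exp(-t / MEAN_YEARS)
    print(f"within {t:>2} years: ~{lost:.0%} have lost immunity")
# prints ~26% at 8 years and ~51% at 19 years, matching the article
```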

A Third Booster Shot May Be Added

According to public health officials, the proposed solution to boosting vaccine-acquired mumps immunity in the U.S. population is to add a third booster shot of MMR vaccine at age 18.

Unfortunately, adding a booster for mumps means giving an additional dose of measles and rubella vaccines as well, as the three are only available in the combined MMR vaccine or combined MMR-varicella (MMRV) vaccine. At present, a third MMR shot is routinely recommended during active mumps outbreaks, even though there is no solid proof that this strategy is effective.

Considering two doses of the vaccine are failing to protect young adults from mumps, adding a third dose, plus two additional doses of measles and rubella vaccines, seems like a questionable strategy, especially in light of evidence that the mumps vaccine’s effectiveness may have been exaggerated to begin with.

According to a lawsuit filed eight years ago, the manufacturer of mumps vaccine — which is also the sole provider of MMR vaccine in the U.S. — is accused of going to illegal lengths to hide the vaccine’s ineffectiveness. So, might this resurgence of mumps simply be the result of using a vaccine that doesn’t provide immunity to begin with?10 And, if so, why add more of something that doesn’t work? After all, the MMR vaccine is not without its risks, as you’ll see below.

Still-Pending Lawsuit Alleges MMR Fraud

In 2010, two Merck virologists filed a federal lawsuit against their former employer, alleging the vaccine maker lied about the effectiveness of the mumps portion of its MMR II vaccine.11 The whistleblowers, Stephen Krahling and Joan Wlochowski, claimed they witnessed “firsthand the improper testing and data falsification in which Merck engaged to artificially inflate the vaccine’s efficacy findings.”

According to Krahling and Wlochowski, a number of different fraudulent tactics were used, all with the aim to “report efficacy of 95 percent or higher regardless of the vaccine’s true efficacy.”12 For example, the MMR vaccine’s effectiveness was tested against the virus used in the vaccine rather than the natural, wild mumps virus that you’d actually be exposed to in the real world. Animal antibodies were also said to have been added to the test results to give the appearance of a robust immune response.13

For details on how they allegedly pulled this off, read Suzanne Humphries’ excellent summary,14 which explains in layman’s terms how the tests were manipulated. Merck allegedly falsified the data to hide the fact that the vaccine significantly declined in effectiveness.15 By artificially inflating the efficacy, Merck has been able to maintain its monopoly over the mumps vaccine market.

This was also the main point of contention of a second class action lawsuit, filed by Chatom Primary Care16 in 2012, which charged Merck with violating the False Claims Act. Both of these lawsuits were given the green light to proceed in 2014,17,18 and are still pending.

In 2015, Merck was accused of stonewalling, “refusing to respond to questions about the efficacy of the vaccine,” according to a court filing by Krahling and Wlochowski’s legal team.19 “Merck should not be permitted to raise as one of its principal defenses that its vaccine has a high efficacy … but then refuse to answer what it claims that efficacy actually is,” they said.

There’s No Such Thing as Vaccine-Acquired Herd Immunity

This certainly isn’t the first time vaccine effectiveness has been questioned. While herd immunity is thrown around like gospel, much of the protection vaccines offer has actually been shown to wane rather quickly. The fact is, vaccine-acquired artificial immunity does not work the same way as the naturally-acquired longer-lasting immunity you get after recovering from the disease.

A majority of adults do not get booster shots, so most of the adult population is, in effect, “unvaccinated.” This calls into question the idea that a 95 percent-plus vaccination rate among children achieves vaccine-acquired “herd immunity” in a population. While there is such a thing as naturally acquired herd immunity, vaccine-induced herd immunity is a total misnomer.

Vaccine makers have simply assumed that vaccines would provide the same kind of longer-lasting natural immunity as recovery from viral and bacterial infections, but the science and history of vaccination clearly shows that this is not the case.

Vaccination and exposure to a given disease produce two qualitatively different types of immune responses. To learn more about this, please see my previous interview with Barbara Loe Fisher, cofounder and president of the National Vaccine Information Center (NVIC). As explained by Fisher: 

“Vaccines do not confer the same type of immunity that natural exposure to the disease does … [V]accines only confer temporary protection… In most cases natural exposure to disease would give you a longer-lasting, more robust, qualitatively superior immunity because it gives you both cell mediated immunity and humoral immunity.

Humoral is the antibody production. The way you measure vaccine-induced immunity is by how high the antibody titers are. (How many antibodies you have.) The problem is, the cell mediated immunity is very important as well. Most vaccines evade cell mediated immunity and go straight for the antibodies, which is only one part of immunity.”

MMR Does Not Work as Advertised

It’s quite clear the MMR vaccine does not work as well as advertised in preventing mumps, even after most children in the U.S. have gotten two doses of MMR for several decades. Public health officials have known about the problem with mumps vaccine ineffectiveness since at least 2006, when a nationwide outbreak of mumps occurred among older children and young adults who had received two MMR shots.20

In 2014, researchers investigated a mumps outbreak among a group of students in Orange County, New York. Of the more than 2,500 who had received two doses of MMR vaccine, 13 percent developed mumps21 — more than double the number you’d expect were the vaccine to actually have a 95 percent efficacy.

Now, if two doses of the vaccine have “worn off” by the time you enter college, just how many doses will be needed to protect an individual throughout life? And, just how many doses of MMR are safe to administer in a lifetime? Clearly there is far more that needs to be understood about mumps infection and the MMR vaccine before a third dose is added to the already-packed vaccine schedule recommended by federal health officials for infants, children and adolescents through age 18.

Mumps Virus May Have Mutated to Evade the Vaccine

Poor effectiveness could also be the result of viral mutations. There are a number of different mumps virus strains included in vaccines produced by different vaccine manufacturers in different countries. The U.S. uses the Jeryl-Lynn mumps strain in the MMR vaccine developed and sold in the U.S. by Merck. There’s significant disagreement among scientists and health officials about whether the mumps virus is evolving to evade the vaccine.

Two years ago, Dr. Dirk Haselow, an epidemiologist with the Arkansas Department of Health said,22 “We are … worried that this vaccine may indeed not be protecting against the strain of mumps that is circulating as well as it could. With the number of people we’ve seen infected, we’d expect 3 of 400 cases of orchitis, or swollen testicles in boys, and we’ve seen 5.”

A 2014 paper written by U.S. researchers developing a new mumps vaccine also suggested that a possible cause of mumps outbreaks in vaccinated Americans could be due to ” … the antigenic differences between the genotype A vaccine strain and the genotype G circulating wild-type mumps viruses.”23

Be Aware of MMR Vaccine Risks

If a vaccine is indeed highly effective, and avoiding the disease in question is worth the risk of the potential side effects from the vaccine, then many people would conclude that the vaccine’s benefits outweigh the risks. They may even be in favor of an additional dose.

However, if the vaccine is ineffective, and/or if the disease doesn’t pose a great threat to begin with, then the vaccine may pose an unacceptable risk. This is particularly true if the vaccine has been linked to serious side effects. Unfortunately, that’s the case with the MMR vaccine, which has been linked to thousands of serious adverse events and hundreds of deaths. According to NVIC:24

“As of March 1, 2018, there had been 1,060 claims filed in the federal Vaccine Injury Compensation Program for injuries and deaths following MMR or MMR-Varicella (MMRV) vaccinations. Using the MedAlerts search engine, as of February 4, 2018, there had been 88,437 adverse events reported to the Vaccine Adverse Events Reporting System (VAERS) in connection with MMR or MMRV vaccines since 1990.

Over half of those MMR and MMRV vaccine-related adverse events occurred in babies and young children 6 years old and under. Of the MMR and MMRV vaccine related adverse events reported to VAERS, 403 were deaths, with over 60 percent of the deaths occurring in children under 3 years of age.”

Keep in mind that less than 10 percent of vaccine adverse events are ever reported to VAERS.25 According to some estimates, only about 1 percent are ever reported, so all of these numbers likely vastly underestimate the true harm.

A concerning study published in Acta Neuropathologica in February 2017 also describes the first confirmed report of vaccine-strain mumps virus (live-attenuated mumps virus Jeryl Lynn, or MuVJL) found in the brain of a child who suffered “devastating neurological complications” as a result. According to the researchers:26

“This is the first confirmed report of MuVJL5 associated with chronic encephalitis and highlights the need to exclude immunodeficient individuals from immunization with live-attenuated vaccines. The diagnosis was only possible by deep sequencing of the brain biopsy.”

Is homeopathy the biggest lie ever told in the history of healthcare, in reference to the attached link? Why or why not?


That video might be one of the gentlest criticisms of homeopathic medicine I have ever seen.

But the conclusion is very true. Most of the alternative systems of medicine, including homeopathy, are ineffective, and their popularity reflects a lack of confidence in valid, scientifically proven medicine rather than the efficacy of alternative therapies.

There is a reason why alternative systems of medicine are questioned again and again. We live in an era of evidence-based medicine. More and more doctors are being sued every day. They are expected (rightly) to justify every investigation they demand of their patients, every procedure they do and every drug they prescribe. That is why doctors have to undergo rigorous training and life-long continuous professional development. In contrast, even in countries where regulatory frameworks for alternative therapies are in place, there is no (or minimal) structure of training, certification and accreditation, and practice is effectively open to all.

Coming back specifically to homeopathy.

Here is a rough idea of how evidence-based medicine works:

  1. When we see a disease, we try to understand its pathophysiology – which part of the body is involved (anatomy), what is the cause (infectious, non-infectious, autoimmune etc), what is the mechanism underlying the disease (pathology, biochemistry, molecular biology, genetics etc), and how these correlate with the manifestations of the disease (symptoms and signs).
  2. We confirm/substantiate our impressions by appropriate investigations.
  3. We try to see how our understanding applies to the population in general. This is where the disciplines of epidemiology and statistics come to our aid.
  4. We design therapies on the basis of the data we have so far gathered. This is in itself a protracted task and the therapies are again tested in clinical trials. And note this – the majority of therapies are rejected in the trials (see the quick arithmetic after this list). According to a conservative estimate, “it takes an average of 12 years for an experimental drug to travel from the laboratory to your medicine cabinet. That is, if it makes it. Only 5 in 5,000 drugs that enter pre-clinical testing progress to human testing. One of these 5 drugs that are tested in people is approved.”[1]
  5. Then there is the matter of applying the evidence to the individual patient.
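The pipeline statistic quoted in step 4 is worth spelling out; a minimal sketch using only the figures given there:

```python
# The drug-pipeline odds from step 4, made explicit.
entering_preclinical = 5000
reaching_human_trials = 5    # "Only 5 in 5,000 ... progress to human testing"
approved = 1                 # "One of these 5 ... is approved"

print(reaching_human_trials / entering_preclinical)  # 0.001  -> 0.1% reach trials
print(approved / entering_preclinical)               # 0.0002 -> 0.02% get approved
```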

None of these steps is an end in itself. Often, researchers have to go back to step 1. Trials are stopped. Drugs are withdrawn from the market. Procedures become obsolete. Protocols are redefined, and newer and more stringent laws are imposed.

Homeopathy follows none of these steps with scientific rigor. I repeat, none. I know that it sounds unduly harsh, but a homeopathic practitioner barely knows the natural course of the disease. An apt analogy would be a person calling himself a theoretical physicist without knowing anything about basic calculus, manifolds, topology etc.

Not to mention, the purported science behind designing the homeopathic drugs is absurd, and fanciful to the point of invoking magic.

Consider this. A 30X dilution means that the original substance has been diluted 1 000 000 000 000 000 000 000 000 000 000 times. Assuming that a cubic centimeter of water contains 15 drops, this number is greater than the number of drops of water that would fill a container more than 50 times the size of the Earth[2]. (My head hurts just looking at that number; I would rather have mathematicians give their insights on this, if it is worth their time. I am sure it isn’t.)
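For anyone who wants to check the arithmetic, here is a short sketch using the figure of 15 drops per cubic centimetre quoted above and the standard value for Earth’s volume:

```python
# Checking the 30X dilution comparison above.
dilution_factor = 10 ** 30        # 30X = thirty serial 1-in-10 dilutions
earth_volume_cm3 = 1.08e27        # Earth's volume: ~1.08e12 km^3 = 1.08e27 cm^3
drops_per_cm3 = 15                # the approximation quoted in the text

drops_per_earth = earth_volume_cm3 * drops_per_cm3  # ~1.6e28 drops of water
print(dilution_factor / drops_per_earth)            # ~62 Earth-volumes of water
```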

As an aside, how is it possible to potentiate a chemical by diluting it? (Yes, I know it is called potentization. Potato, potahto … whatever.)

Furthermore, let us have a look at the result of some studies.

  • Cochrane reviews of studies of homeopathy do not show that homeopathic medicines have effects beyond placebo[3].
  • One of the reviewers graciously notes that “memory of water and PPR entanglement are not competing but most likely complementary hypotheses, and that both are probably required in order to provide a complete description of the homeopathic process.”[4] In other words, the mechanism by which homeopathic drugs are supposed to act is bullshit.
  • A 10-year study conducted by the FDA concluded that homeopathic medicines harmed hundreds of babies between 2006 and 2016[5].
  • Homeopathic therapy is also ineffective in multiple diseases it claims to treat, such as allergic rhinitis[6] and rheumatoid arthritis[7].

And these are just a fraction of the studies conducted. Multiple independent studies as well as meta-analyses have found that the effect of homeopathic medicines is ambiguous, nil or frankly injurious.

No wonder OTC homeopathic remedies sold in the US will now have to come with a warning that they are based on outdated theories ‘not accepted by most modern medical experts’ and that ‘there is no scientific evidence the product works’.

 

To automate is human


It’s not tools, culture or communication that make humans unique but our knack for offloading dirty work onto machines

In the 1920s, the Soviet scientist Ilya Ivanovich Ivanov used artificial insemination in an attempt to breed a ‘humanzee’ – a cross between a human and our closest relative species, the chimpanzee. The attempt horrified his contemporaries, much as it would modern readers. Given the moral quandaries a humanzee might create, we can be thankful that Ivanov failed: when the winds of Soviet scientific preferences changed, he was arrested and exiled. But Ivanov’s endeavour points to the persistent, post-Darwinian fear and fascination with the question of whether humans are a creature apart, above all other life, or whether we’re just one more animal in a mad scientist’s menagerie.

Humans have searched and repeatedly failed to rescue ourselves from this disquieting commonality. Numerous dividers between humans and beasts have been proposed: thought and language, tools and rules, culture, imitation, empathy, morality, hate, even a grasp of ‘folk’ physics. But they’ve all failed, in one way or another. I’d like to put forward a new contender – strangely, the very same tendency that elicits the most dread and excitement among political and economic commentators today.

First, though, to our fall from grace. We lost our exclusive position in the animal kingdom, not because we overestimated ourselves, but because we underestimated our cousins. This new grasp of the capabilities of our fellow creatures is as much a return to a pre-Industrial view as it is a scientific discovery. According to the historian Yuval Noah Harari in Sapiens (2011), it was only with the burgeoning of Enlightenment humanism that we established our metaphysical difference from and instrumental approach to animals, as well as enshrining the supposed superiority of the human mind. ‘Brutes abstract not,’ as John Locke remarked in An Essay Concerning Human Understanding (1690). By contrast, religious perspectives in the Middle Ages rendered us a sort of ensouled animal. We were touched by the divine, bearers of the breath of life – but distinctly Earthly, made from dust, metaphysically ‘animals plus’.

Like a snake eating its own tail, it was the later move towards rationalism – built on a belief in man’s transcendence – that eventually toppled our hubristic sensibilities. With the advent of Charles Darwin’s theories, later confirmed through geology, palaeontology and genetics, humans struggled mightily and vainly to erect a scientific blockade between beasts and ourselves. We believed we occupied a glorious perch as a thinking thing. But over time that rarefied category became more and more crowded. Whichever intellectual shibboleth we decide is the ability that sets us apart, it’s inevitably found to be shared with the chimp. One can resent this for the same reason we might baulk at Ivanov’s experiments: they bring the nature of the beast a bit too close.

The chimp is the opener in a relay race that repeats itself time and again in the study of animal behaviour. Scientists concoct a new, intelligent task for the chimps, and they do it – before passing down the baton to other primates, who usually also manage it. Then they hand it on to parrots and crows, rats and pigeons, an octopus or two, even ducklings and bees. Over and over again, the newly minted, human-defining behaviour crops up in the same club of reasonably smart, lab-ready species. We become a bit less unique and a bit more animal with each finding.

Some of these proposed watersheds, such as tool-use, are old suggestions, stretching back to how the Victorians grappled with the consequences of Darwinism. Others, such as imitation or empathy, are still denied to non-humans by certain modern psychologists. In Are We Smart Enough to Know How Smart Animals Are? (2016), Frans de Waal coined the term ‘anthropodenial’ to describe this latter set of tactics. Faced with a potential example of culture or empathy in animals, the injunction against anthropomorphism gets trotted out to assert that such labels are inappropriate. Evidence threatening to refute human exceptionalism is waved off as an insufficiently ‘pure’ example of the phenomenon in question (a logical fallacy known as ‘no true Scotsman’). Yet nearly all these traits have run the relay from the ape down – a process de Waal calls ‘cognitive ripples’, as researchers find a particular species characteristic that breaks down the barriers to finding it somewhere else.

Tool-use is the most famous, and most thoroughly defeated, example. It transpires that chimps use all manner of tools, from sticks to extract termites from their mounds to stones as a hammer and anvil to smash open nuts. The many delightful antics of New Caledonian crows have received particular attention in recent years. Among other things, they can use multiple tools in sequence when the reward is far away but the nearest tool is too short and the larger tools are out of reach. They use the short tool to reach the medium one, then that one to reach the long one, and finally the long tool to reach the reward – all without trial and error.

But it’s the Goffin’s cockatoo that has delivered the coup de grâce for the animals. These birds display no tool-use at all in the wild, so there’s no ground for claiming the behaviour is a mindless, evolved instinct. Yet in captivity, a cockatoo named Figaro, raised by researchers at the Veterinary University of Vienna, invented a method of using a long splinter of wood to reach treats placed outside his enclosure – and proceeded to teach the behaviour to his flock-mates.

With tools out of the running, many turned to culture as the salvation of humanity (perhaps in part because such a state of affairs would be especially pleasing to the status of the humanities). It took longer, but animals eventually caught up. Those chimpanzees who use stones as hammer and anvil? Turns out they hand on this ability from generation to generation. Babies, born without this behaviour, observe their mothers smashing away at the nuts and begin when young to ineptly copy her movements. They learn the nut-smashing culture and hand it down to their offspring. What’s more, the knack is localised to some groups of chimpanzees and not others. Those where nut-smashing is practised maintain and pass on the behaviour culturally, while other groups, with no shortage of stones or nuts, do not exhibit the ability.

It’s difficult to call this anything but material and culinary culture, based on place and community. Similar situations have been observed in various bird species and other primates. Even homing pigeons demonstrate a culture that favours particular routes, one passed from bird to bird until none of the flock had flown with the original birds – yet they still used the same flight path.

Language is an interesting one. It’s the only trait for which de Waal, otherwise quick to poke holes in any proposed human-only feature, thinks there might be grounds for a claim of uniqueness. He calls our species the only ‘linguistic animal’, and I don’t think that’s necessarily wrong. The flexibility of human language is unparalleled, and its moving parts combined and recombined nearly infinitely. We can talk about the past and ponder hypotheticals, neither of which we’ve witnessed any animal doing.

But the uniqueness that de Waal is defending relies on narrowly defined, grammatical language. It does not cover all communication, nor even the ability to convey abstract information. Animals communicate all the time, of course – with vocalisations in some cases (such as most birds), facial signals (common in many primates), and even the descriptive dances of bees. Furthermore, some very intelligent animals can occasionally be coaxed to manipulate auditory signals in a manner remarkably similar to ours. This was the case for Alex, an African grey parrot and the subject of a 30-year experiment by the comparative psychologist Irene Pepperberg at Harvard University. Before Alex died in 2007, Pepperberg taught him to count, make requests, and combine words to form novel concepts. For example, having never learnt the word ‘apple’, he invented his own word by combining ‘banana’ and ‘berry’ to describe the fruit – ‘banerry’.

Without rejecting the language claim outright, I’d like to venture a new defining feature of humanity – wary as I am of ink spilled trying to explain the folly of such an effort. Among all these wins for animals, and while our linguistic differences might define us as a matter of degree, there’s one area where no other animal has encroached at all. In our era of Teslas, Uber and artificial intelligence, I propose this: we are the beast that automates.

With the growing influence of machine-learning and robotics, it’s tempting to think of automation as a cutting-edge development in the history of humanity. That’s true of the computers necessary to produce a self-driving car or all-purpose executive assistant bot. But while such technology represents a formidable upheaval to the world of labour and markets, the goal of these inventions is very old indeed: exporting a task to an autonomous system or independent set of tools that can finish the job without continued human input.

Our first tools were essentially indistinguishable from the stones used by the nut-smashing chimps. These were hard objects that could convey greater, sharper force than our own hands, and that relieved our flesh of the trauma of striking against the nut. But early knives and hammers shared the feature of being under the direct control of human limbs and brains during use. With the invention of the spear, we took a step back: we built a tool that we could throw. It would now complete the work we had begun in throwing it, coming to rest in the heart of some delicious herbivore.

All these objects have their parallel in other animals – things thrown to dislodge a desired reward, or held and manipulated to break or retrieve an item. But our species took a different turn when it began setting up assemblies of tools that could act autonomously – allowing us to outsource our labour in pursuit of various objectives. Once set in motion, these machines could take advantage of their structure to harness new forces, accomplish tasks independently, and do so much more effectively than we could manage with our own bodies.

There are two ways to give tools independence from a human, I’d suggest. For anything we want to accomplish, we must produce both the physical forces necessary to effect the action, and also guide it with some level of mental control. Some actions (eg, needlepoint) require very fine-grained mental control, while others (eg, hauling a cart) require very little mental effort but enormous amounts of physical energy. Some of our goals are even entirely mental, such as remembering a birthday. It follows that there are two kinds of automation: those that are energetically independent, requiring human guidance but not much human muscle power (eg, driving a car), and those that are also independent of human mental input (eg, the self-driving car). Both are examples of offloading our labour, physical or mental, and both are far older than one might first suppose.

The bow and arrow is probably the first example of automation. When humans strung the first bow, towards the end of the Stone Age, the technology put the task of hurling a spear on to a very simple device. Once the arrow was nocked and the string pulled, the bow was autonomous, and would fire this little spear further, straighter and more consistently than human muscles ever could.

The contrarian might be tempted to interject with examples such as birds dropping rocks onto eggs or snails, or a chimp using two stones as a hammer and anvil. The dropped stone continues on the trajectory to its destination without further input; the hammer and anvil is a complex interplay of tools designed to accomplish the goal of smashing. But neither of these are truly automated. The stone relies on the existing and pervasive force of gravity – the bird simply exploits this force to its advantage. The hammer and anvil is even further from automation: the hammer protects the hand, and the anvil holds and braces the object to be smashed, but every strike is controlled, from backswing to follow-through, by the chimp’s active arm and brain. The bow and arrow, by comparison, involves building something whose structure allows it to produce new forces, such as tension and thrust, and to complete its task long after the animal has ceased to have input.

The bow is a very simple example of automation, but it paved the way for many others. None of these early automations are ‘smart’ – they all serve to export the business of human muscles rather than human brains, and without a human controller, none of them could gather information about its trajectory and change course accordingly. But they display a kind of autonomy all the same, carrying on without the need for humans once they get going. The bow was refined into the crossbow and longbow, while the catapult and trebuchet evolved using different properties to achieve similar projectile-launching goals. (Warfare and technology always go hand in hand.) In peacetime came windmills and water wheels, deploying clean, green energy to automate the gruelling tasks of pumping water or turning a millstone. We might even include carts and ploughs drawn by beasts of burden, which exported from human backs the weight of carried goods, and from human hands the blisters of the farmer’s hoe.

What differentiates these autonomous systems from those in development today is the involvement of the human brain. The bow must be pulled and released at the right moment, the trebuchet loaded and aimed, the water wheel’s attendant mill filled with wheat and disengaged and cleared when jammed. Cognitive automation – exporting the human guidance and mental involvement in a task – is newer, but still much older than vacuum tubes or silicon chips. Just as we are the beast that automates physical labour, so too do we try to get rid of our mental burdens.

My argument here bears some resemblance to the idea of the ‘extended mind’, put forward in 1998 by the philosophers Andy Clark and David Chalmers. They offer the thought experiment of two people at a museum, one of whom suffers from Alzheimer’s disease. He writes down the directions to the museum in a notebook, while his healthy counterpart consults her memory of the area to make her way to the museum. Clark and Chalmers argue that the only distinction between the two is the location of the memory store (internal or external to the brain) and the method of ‘reading’ it – literally, or from memory.

Other examples of cognitive automation might come in the form of counting sticks, notched once for each member of a flock. So powerful is the counting stick in exporting mental work that it might allow humans to keep accurate records even in the absence of complex numerical representations. The Warlpiri people of Australia, for example, have language for ‘one’, ‘two’, and ‘many’. Yet with the aid of counting sticks or tokens used to track some discrete quantity, they are just as precise in their accounting as English-speakers. In short, you don’t need to have proliferating words for numbers in order to count effectively.

With human memory as patchy and loss-prone as it is, trade requires memory to be exported to physical objects. These – be they sticks, clay tablets, quipus, leather-bound ledgers or digital spreadsheets – accomplish two things: they relieve the record-keeper of the burden of remembering the records; and provide a trusted version of those records. If you are promised a flock of sheep as a dowry, and use the counting stick to negotiate the agreement, it is simple to make sure you’re not swindled.

Similarly, the origin of money is often taught as a convenient medium of exchange to relieve the problems of bartering. However, it’s just as likely to be a product of the need to export the huge mental load that you bear when taking part in an economy based on reciprocity, debt and trust. Suppose you received your dowry of 88 well-recorded sheep. That’s a tremendous amount of wool and milk, and not terribly many eggs or much beer. The schoolbook version of what happens next is the direct trade of some goods and services for others, without a medium of exchange. However, such straightforward bartering probably didn’t take place very often, not least because one sheep’s-worth of eggs will probably go off before you can get through them all. Instead, early societies probably relied on favours: I slaughter a sheep and share the mutton around my community, on the understanding that this squares me with my neighbour, who gave me a dozen eggs last week, and puts me at an advantage with the baker and the brewer, whose services I will need sooner or later. Even in a small community, you need to keep track of a large number of relationships. All of this constituted a system ripe for mental automation, for money.

Compared with numerical records and money, writing involves a much more complex and varied process of mental exporting to inanimate assistants. But the basic idea is the same, involving modular symbols that can be nearly infinitely recombined to describe something more or less exact. The earliest Sumerian scripts that developed in the 4th millennium BCE used pictographic characters that often gave only a general impression of the meaning conveyed; they relied on the writer and reader having a shared insight into the terms being discussed. NOW, THOUGH, ANYONE CAN TELL WHEN I AM YELLING AT THEM ON THE INTERNET. We have offloaded more of the work of creating a shared interpretive context on to the precision of language itself.

In 1804, the inventors of the Jacquard loom combined cognitive and physical automation. Using a chain of punch cards or tape, the loom could weave fabric in any pattern. These loom cards, together with the loom-head that read them, exported brain work (memory) and muscle work (the act of weaving). In doing so, humans took another step back, relinquishing control of a machine to our pre-set, written memories (instructions). But we didn’t suddenly invent a new concept of human behaviour – we merely combined two deep-seated human proclivities with origins stretching back to before recorded history. Our muscular and mental automation had become one, and though in the first instance this melding was in the service of so frivolous a thing as patterned fabric, it was an immensely powerful combination.

The basic principle of the Jacquard loom – written instructions and a machine that can read and execute them once set up – would carry humanity’s penchant for automation through to modern digital devices. Although the power source, amount of storage, and multitude of executable tasks have increased, the overarching achievement is the same. A human with some proximate goal, such as producing a graph, loads up the relevant data, and then the computer, using its programmed instructions, converts that data, much like the loom. Tasks such as photo-editing, gaming or browsing the web are more complex, but are ultimately layers of human instructions, committed to external memory (now bits instead of punched holes) being carried out by machines that can read it.
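The principle is simple enough to sketch in a few lines of code. The toy ‘loom’ below is my own illustration, not anything from the history above: the pattern lives in external memory as punch-card-like rows, and the machine does nothing but read and execute them.

```python
# A toy Jacquard loom: instructions stored in external memory (the cards),
# executed row by row by a machine that merely reads them.
cards = [
    "10101010",
    "01010101",
    "11001100",
    "00110011",
]

def weave(cards):
    for card in cards:
        # a punched hole ("1") lifts a warp thread; a blank ("0") leaves it down
        print("".join("#" if hole == "1" else "." for hole in card))

weave(cards)
```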

Crucially, the human still supplies the proximate objective, be it ‘adjust white balance’; ‘attack the enemy stronghold’; ‘check Facebook’. All of these goals, however, are in the service of ultimate goals: ‘make this picture beautiful’; ‘win this game’; ‘make me loved’. What we now tend to think of as ‘automation’, the smart automation that Tesla, Uber and Google are pursuing with such zeal, has the aim of letting us take yet another step back, and place our proximate goals in the hands of self-informing algorithms.

As we stand on the precipice of a revolution in AI, many are bracing for a huge upheaval in our economic and political systems as this new form of automation redefines what it means to work. Given a high-level command – as simple as asking a barista-bot to make a cortado or as complex as directing an investment algorithm to maximise profits while divesting of fossil fuels – intelligent algorithms can gather data and figure out the proximate goals needed to achieve their directive. We are right to expect this to dramatically change the way that our economies and societies work. But so did writing, so did money, so did the Industrial Revolution.

It’s common to hear the claim that technology is making each generation lazier than the last. Yet this slur is misguided because it ignores the profoundly human drive towards exporting effortful tasks. One can imagine that, when writing was introduced, the new-fangled scribbling was probably denigrated by traditional storytellers, who saw it as a pale imitation of oral transmission, and lacking in the good, honest work of memorisation.

The goal of automation and exportation is not shiftless inaction, but complexity. As a species, we have built cities and crafted stories, developed cultures and formulated laws, probed the recesses of science, and are attempting to explore the stars. This is not because our brain itself is uniquely superior – its evolutionary and functional similarity to other intelligent species is striking – but because our unique trait is to supplement our bodies and brains with layer upon layer of external assistance. We have a depth, breadth and permanence of mental and physical capability that no other animal approaches. Humans are unique because we are complex, and we are complex because we are the beast that automates.

Mysterious Pulsating Auroras Exist, And Scientists Might Have Figured Out What Causes Them


Researchers have directly observed the scattering electrons behind the shifting patterns of light called pulsating auroras, confirming models of how charged solar winds interact with our planet’s magnetic field.

With those same winds posing a threat to technology, it’s comforting to know we’ve got a sound understanding of what’s going on up there.

An international team of astronomers used the state-of-the-art Arase Geospace probe, part of the Exploration of energization and Radiation in Geospace (ERG) project, to observe how high-energy electrons behave high above the surface of our planet.

Dazzling curtains of light that shimmer over Earth’s poles have captured our imagination since prehistoric times, and the fundamental processes behind the eerie glow of the aurora borealis and aurora australis – the northern and southern lights – are fairly well known.

Charged particles, spat out of the Sun by coronal mass ejections and other solar phenomena, wash over our planet in waves. As they hit Earth’s magnetic field, most of the particles are deflected around the globe. Some are funnelled down towards the poles, where they smash into the gases making up our atmosphere and cause them to glow in sheets of dazzling greens, blues, and reds.

Those are typically called active auroras, and are often photographed to make up the gorgeous curtains we put onto calendars and desktop wallpapers.

But pulsating auroras are a little different.

Rather than shimmer as a curtain of light, they grow and fade over tens of seconds, like slow lightning. They also tend to form higher in the atmosphere than their active cousins, and at latitudes closer to the equator than the poles, making them harder to study.

This kind of aurora is thought to be caused by sudden rearrangements in the magnetic field lines releasing their stored solar energy, sending showers of electrons crashing into the atmosphere in cycles of brightening called aurora substorms.

“They are characterised by auroral brightening from dusk to midnight, followed by violent motions of distinct auroral arcs that eventually break up, and emerge as diffuse, pulsating auroral patches at dawn,” lead author Satoshi Kasahara from the University of Tokyo explains in their report.

Confirming that specific changes in the magnetic field are truly responsible for these waves of electrons isn’t easy. For one thing, mapping the magnetic field lines with precision requires putting equipment into the right place at the right time in order to track charged particles trapped within them.

While the rearrangements of the magnetic field seem likely, there’s still the question of whether there are enough electrons in these surges to account for the pulsating auroras.

This latest study has now put that question to rest.

The researchers directly observed the scattering of electrons produced by shifts in channelled currents of charged particles, or plasma, called chorus waves.

Electron bursts have been linked with chorus waves before, with previous research spotting electron showers that coincide with the ‘whistling’ tunes of these shifting plasma currents. But it wasn’t clear until now that the resulting eruption of charged particles could do the trick.

“The precipitating electron flux was sufficiently intense to generate pulsating aurora,” says Kasahara.

The next step for the researchers is to use the ERG spacecraft to comprehensively analyse the nature of these electron bursts in conjunction with phenomena such as auroras.

These amazing light shows are spectacular to watch, but they also have a darker side.

Those light showers of particles can turn into storms under the right conditions. While they’re harmless enough high overhead, a sufficiently powerful solar storm can cause charged particles to disrupt electronics in satellites and devices closer to the surface.

Just last year, the largest flare to erupt from the Sun in over a decade temporarily knocked out high-frequency radio and disrupted low-frequency navigation technology.

Getting a grip on what’s between us and the Sun might help us plan better when even bigger storms strike.