Association of Elevated Blood Pressure With Low Distress and Good Quality of Life: Results From the Nationwide Representative German Health Interview and Examination Survey for Children and Adolescents.


Objective Quality of life is often impaired in patients with known hypertension, but it is less reduced, or not reduced at all, in people unaware of their elevated blood pressure. Some studies have even shown less self-rated distress in adults with elevated blood pressure. In this substudy of the nationwide German Health Interview and Examination Survey for Children and Adolescents (KiGGS), we addressed the question of whether hypertensive blood pressure is also linked to levels of distress and quality of life in adolescents.

Methods Study participants aged 11 to 17 years (N = 7688) received standardized measurements of blood pressure, quality of life (using the Children’s Quality of Life Questionnaire), and distress (Strengths and Difficulties Questionnaire).

Results Elevated blood pressure was twice as frequent as expected, with 10.7% (n = 825) above published age-, sex-, and height-adjusted 95th percentiles. Hypertensive participants were more likely to be obese and to report adverse health behaviors, but they showed better academic success than did normotensive participants. Elevated blood pressure was significantly associated with higher self- and parent-rated quality of life (for both, p ≤ .006), less hyperactivity (for both, p < .005), and lower parent-rated emotional (p < .001), conduct (p = .021), and overall problems (p = .001). Multiple regression analyses confirmed these findings.

Conclusions Our observation linking elevated blood pressure to better well-being and low distress can partly be explained by the absence of confounding physical comorbidity and the unawareness of being hypertensive. It also corresponds to earlier research suggesting a bidirectional relationship with repressed emotions leading to elevated blood pressure and, furthermore, elevated blood pressure serving as a potential stress buffer.


In this substudy of the KiGGS survey, we have examined the association of elevated blood pressure with psychological distress and health-related quality of life in a large, nationally representative sample of German adolescents aged 11 to 17 years. In 825 of 7688 study participants (10.7%), elevated blood pressure levels above published age-, sex-, and height-adjusted 95th percentiles were documented by means of standardized oscillometric measurement, demonstrating twice the rate expected from earlier normative samples (28). Hypertensive blood pressure was independent of socioeconomic status and most frequently found in postpubertal boys.

The central finding of this investigation was that adolescents with elevated blood pressure levels reported significantly better quality of life and lower levels of distress across multiple domains of two well-validated instruments. Moreover, concordant results were observed for both self- and parent-rated versions of the two instruments and for both systolic and diastolic blood pressure as predictors. All associations remained stable when adjusted for a variety of possible confounders in multivariate analyses. These observations in adolescents seem to contradict several reports from adult patients who are aware of having arterial hypertension. The adult patients may already feel concerned about possible long-term health complications, the necessity of regular visits to a physician, and costs and adverse effects of antihypertensive medication. Together with hypertensive end-organ damage, which is sometimes present, this may impair quality of life (1,12,20).

In contrast, our results confirm earlier studies in adult populations showing an inverse association between hypertension and subjectively measured distress (19,20). For example, Winkleby et al. (19) found that hypertension as defined by elevated office blood pressure and/or current use of antihypertensive medications was negatively related to an index of self-rated job stressors in 1428 San Francisco bus drivers, and the same effect was observed also for continuous blood pressure values. Remarkably, this inverse association was equally found in nonmedicated (and possibly unaware) and medicated (and probably aware) participants.

Most of the hypertensive adolescents identified in the KiGGS study were not aware of their elevated blood pressure, which was only detected by routine screening performed as part of this survey. It is well known that individuals unaware of having high blood pressure usually report less bodily pain and show higher scores in physical functioning and general health than those with known hypertension (1,20,35,36). However, this putative unawareness does not explain why elevated blood pressure was actually associated with better quality of life and lower distress. Several possible explanations might account for this inverse association observed in our sample. a) Some adolescents may be more achievement oriented and, thereby, more successful in their school careers than others. This may occur at the expense of chronic (objective) stress and elevated blood pressure but lead to better self-esteem and quality of life. b) Repression of emotions may lead to better self-ratings of distress and quality of life, and repressed emotions might at the same time lead to elevations in blood pressure, as suggested by a line of research recently summarized by Mann (37). c) Elevations in blood pressure themselves might dampen negative emotions, possibly via vagal afferents. These three possible explanations are not mutually exclusive, and each one merits further discussion. However, the cross-sectional nature of our data does not allow us to draw firm causal conclusions.

In our sample, hypertensive participants performed better at school than did normotensive participants. Better school performance was associated with both better quality of life (data not shown) and elevated blood pressure. However, good quality of life was not mainly driven by better school success because elevated blood pressure and quality of life remained positively associated even after controlling for irregular school career. School success may, on the other hand, have been achieved at the expense of an increased stressor burden contributing to both high blood pressure and adverse health behaviors.

Our data are also consistent with the emotion repression theory of hypertension. According to that theory, repressed emotions, which could manifest themselves in low self-rated distress, might drive blood pressure up, probably via autonomic arousal (38). Interestingly, however, parents of hypertensive adolescents also rated their children as less distressed, less hyperactive, and more satisfied with their lives than did parents of normotensive adolescents. This indicates that not only hypertensive adolescents themselves but also their close family members perceived them as less distressed. Whether this means that repression of emotion in adolescents leads to distorted perception in their parents via changes in adolescents' expressive behavior, or whether these parents are repressors themselves, unable to recognize negative emotional cues in their children, cannot be concluded from our data.

Finally, our data could reflect a repeatedly described stress-dampening effect of hypertension (37,39–41). Arterial mechanoreceptors in the aortic arch and carotid sinus, which are sensitive to changes in systemic blood pressure, function as key elements in the transmission of hemodynamic information to the brain via vagal afferents. Experimental studies performed almost 20 years ago documented that elevated blood pressure can thereby have pain- and stress-lowering effects (38–43). Previous reports have suggested the presence of an inhibitory feedback loop for adaptation to chronic stressors, in which activation of baroafferent pathways by mechanical stretch caused by elevated blood pressure reduces somatic muscle tone, increases cortical synchronization, and blunts the level of pain and anxiety, all of which may have a beneficial impact on emotional well-being but may also lead to the transition of stress-induced hypertensive reactions to sustained chronic hypertension (38,44). Provided that a rise in blood pressure is involved in the reduction of perceived stress, the endogenous baroreceptor-brain circuitry constitutes a reinforcing mechanism that rewards phasic elevations of blood pressure in stressful conditions, a reaction that could be learned over time (39). More recently, it has been shown that exogenous stimulation of the vagus nerve may have anticonvulsive and antidepressant properties (45). Interrupting the baroreceptor-brain circuitry by antihypertensive drug therapy, on the other hand, commonly reduces health-related quality of life and may also impede adherence to pharmacological treatment (46).

There are some limitations to this study, mainly based on its cross-sectional and post-hoc design, which does not allow a causal interpretation of the observed link between high blood pressure and quality of life. Because the survey was not originally planned to specifically examine associations between blood pressure and well-being, no ambulatory blood pressure monitoring data are available. However, the blood pressure readings in KiGGS were obtained under highly standardized conditions by trained physicians and with devices well validated for this age group. These readings have been published and accepted as new reference values for German children and adolescents (25). Nevertheless, assignment to the hypertensive group was not based on a medical diagnosis but on blood pressure levels above previously reported age-, sex-, and height-adjusted 95th percentiles, determined during one complex and potentially demanding diagnostic assessment. Such measurements are likely to be biased in the same way as typical office blood pressure recordings. The unexpectedly high prevalence of elevated blood pressure found in this study cohort should therefore be interpreted with caution. Finally, the effect sizes of systolic and diastolic blood pressure on quality of life were small. However, they were still within the range of other known determinants of health-related quality of life, such as sex, body weight, and alcohol consumption. The small effect sizes may be due to the relatively small range of blood pressure values and to sample heterogeneity; however, the highly consistent findings across self-rating and parent rating on several dimensions of distress and quality of life suggest a real and epidemiologically relevant association.

Our investigation also has several strengths. Data were available for a large, representative, and well-characterized sample, giving sufficient statistical power and generalizability to our observations. Another strength is the well-standardized assessment of blood pressure, quality of life, and distress, as well as the use of individual norm-based blood pressure cutoffs rather than one simple threshold. Our analysis was based on the widely accepted reference from the National High Blood Pressure Education Program Working Group on Children and Adolescents (28) because this reference also included overweight individuals and, moreover, used relatively high cutoff levels (26). The results found for categorized blood pressure data were fully confirmed with continuous readings for both systolic and diastolic blood pressure as predictors in multivariate models, which were adjusted for a variety of possible confounders. Furthermore, we obtained psychometric evaluations by both adolescents and their parents, using instruments that had been well validated beforehand and applied independently of the authors of this substudy, who were not involved in data collection.

In summary, in this representative sample of German adolescents, we demonstrate a significant and epidemiologically relevant association of hypertensive blood pressure with lower psychological distress and better health-related quality of life. To our knowledge, this is the first report linking elevated blood pressure to quality of life and psychosocial adaptation in a large epidemiological study of adolescents. Besides the absence of confounding from physical comorbidity and a formal diagnosis of hypertension, our cross-sectional assessment may capture a stress-dampening effect of high blood pressure or effects of repressed emotions on blood pressure already at an early stage, not yet fixed by vascular remodeling.



What You Can Learn About Your Health by Analyzing the Color and Smell of Your Urine

Story at-a-glance

  • Urine has been an important diagnostic tool for 6,000 years, as well as having some surprising historical uses
  • You can learn a great deal about your overall health by examining your urine and noting its color, odor, and consistency; your urine can be a powerful window into your overall health
  • Urine color and odor can be altered by your diet, medications, supplements, water consumption, and physical activity
  • Your urine characteristics can also function as an early warning system for serious health problems including urinary stones, infections, kidney problems, metabolic disorders, diabetes, pituitary disorders, and even tumors
  • Frequency of urination is also important; increased urination may suggest infection, overactive bladder, diabetes, or a number of other concerns
  • Suggestions are provided on how much water to consume daily; the common “eight glasses per day” recommendation is overgeneralized, and you should instead pay attention to your body’s own individual cues

Urine can reveal important information about your body’s waste elimination process, providing clues about your overall health status.

Your kidneys serve to filter excess water and water-soluble wastes out of your blood, getting rid of toxins and things that would otherwise build up and cause you to become ill. Many things — from excess protein and sugar to bacteria and yeast — may make their way into your urine.

Instead of ignoring your urine and dashing back to whatever important activity having to pee interrupted, take this golden opportunity to become familiar with your “normal.”

If you notice changes in the way your urine looks or smells, the cause might be something as benign as what you had for dinner last night, such as beets or asparagus. Or, your astuteness may potentially alert you to a serious condition.

If you suspect you have a urinary tract problem, you should consult your physician. One of the first things he or she is likely to do is a urine test. Urine tests have been around for more than 6,000 years1 and are easy, noninvasive tools for quickly assessing your health status2.

Minding Your Pees and Cues

In your lifetime, your kidneys filter more than one million gallons of water, enough to fill a small lake. Amazingly, one kidney can handle the task perfectly well. In fact, if you lose a kidney, your remaining kidney can increase in size by 50 percent within two months, to take over the job of both.3

Urine is 95 percent water and five percent urea, uric acid, minerals, salts, enzymes, and various substances that would cause problems if allowed to accumulate in your body4. Normal urine is clear and has a straw yellow color, caused by a bile pigment called urobilin.

As with your stool, your urine changes color depending on what foods you eat, what medications and supplements you take, how much water you drink, how active you are, and the time of day.

But some diseases can also change the color and other characteristics of your urine, so it’s important to be alert and informed. With so many variables, you can’t always be sure of what’s causing any particular urine characteristic, short of laboratory testing. However, urine’s character gives you some clues to potential problems that may be developing, giving you time to do something about it.

The following chart outlines some of the most common color variations for urine and their possible origins. Most of the time, color changes result from foods, medications, supplements, or simply dehydration. But certain signs warrant concern.

| Color | Possible Cause | Necessary Action |
| --- | --- | --- |
| Yellow/Gold | The most typical urine color, indicative of a healthy urinary tract; yellow intensifies depending on hydration; some B vitamins cause bright yellow urine | None |
| Red/Pink | Hematuria (fresh blood in the urine) related to urinary tract infection (UTI), kidney stone, or rarely cancer; consumption of red foods such as beets, blueberries, red food dyes, rhubarb; iron supplements; Pepto-Bismol, Maalox, and a variety of other drugs5; classic "port wine" color may indicate porphyria (a genetic disorder) | Consult your physician immediately if you suspect you have blood in your urine |
| White/Colorless | Excessive hydration is most likely (see Cloudy) | Consult your physician only if chronic |
| Orange | Typically a sign of dehydration, showing up earlier than thirst; "holding your bladder" for too long; post-exercise; consuming orange foods (carrots, squash, or food dyes); the drug Pyridium (phenazopyridine); liver or pituitary problem (ADH, or antidiuretic hormone) | Drink more water and don't delay urination; consult your physician if orange urine persists despite adequate hydration |
| Amber | More concentrated than orange, indicating severe dehydration related to intense exercise or heat; excess caffeine or salt; hematuria; decreased urine production (oliguria or anuria); metabolic problem; pituitary problem (ADH) | Consult your physician if the problem persists despite adequate hydration |
| Brown | Very concentrated urine from extreme dehydration; consumption of fava beans; melanuria (melanin in the urine); UTI; kidney stone; kidney tumor or blood clot; Addison's disease; glycosuria; renal artery stenosis; proteinuria; pituitary problem (ADH) | Consult your physician if the problem persists despite adequate hydration, especially if accompanied by pale stools or yellow skin or eyes |
| Black | RARE: Alkaptonuria, a genetic disorder of phenylalanine and tyrosine metabolism marked by accumulation of homogentisic acid in the blood; poisoning | Consult your physician |
| Green | RARE: Unusual UTIs and certain foods (such as asparagus); excessive vitamins | Usually benign; consult your physician if it persists, especially if you have pain or burning (dysuria) and/or frequent urination (polyuria), which are symptoms of UTI |
| Blue | RARE: Artificial colors in foods or drugs; bilirubin; medications such as methylene blue; unusual UTIs | Usually benign; consult your physician if it persists, especially if you have pain or burning (dysuria) and/or frequent urination (polyuria), which are symptoms of UTI |
| Cloudy | Urinary tract infection, kidney problem, metabolic problem, or chyluria (lymph fluid in the urine); phosphaturia (phosphate crystals); pituitary problem (ADH) | Consult your physician, especially if you have pain or burning (dysuria) and/or frequent urination (polyuria), which are symptoms of UTI |
| Sediment | Proteinuria (protein particles) or albuminuria; UTI; kidney stones; see Cloudy | Consult your physician |
| Foamy | Turbulent urine stream; proteinuria (most common causes are diabetes and hypertension) | Consult your physician if not due to "turbulence" |


Does Your Urine Smell Like Roses?

If you’re a woman from ancient Rome and your urine smells like roses, you’ve probably been drinking turpentine. This is a high price to pay to woo your suitor with pleasant-smelling pee, as turpentine may kill you! Short of drinking turpentine, there are many common substances that may alter the way your urine smells, which is why it’s helpful to know what’s normal. Urine reflects all of the inner workings of your body and contains a wide variety of compounds and metabolic by-products. Some dogs can actually “smell cancer” in human urine6.

Urine doesn’t typically have a strong smell, but if yours smells pungent (like ammonia), you could have an infection or urinary stones, or you may simply be dehydrated. Dehydration makes your urine more concentrated, giving it a stronger smell than normal, as do high-protein foods like meat and eggs. Menopause, some sexually transmitted diseases, and certain metabolic disorders may also increase the ammonia smell7. Here are some of the more common reasons your urine’s odor may change:

  • Medications or supplements
  • Certain genetic conditions, such as Maple Syrup Urine Disease, which causes urine to smell sickeningly sweet8
  • Certain foods — most notably asparagus. Asparagus is notorious for causing a foul, eggy or “cabbagy” stench that results from a sulfur compound called methyl mercaptan (also found in garlic and skunk secretions). Only 50 percent of people can smell asparagus pee because they have the required gene. Cutting off the tips of asparagus will reportedly prevent the pungent-smelling pee…but of course, this is the tastiest part!
  • Urinary tract infections
  • Uncontrolled diabetes is known to cause your urine to have a sweet, fruity, or, less commonly, yeasty smell. In the past, doctors diagnosed diabetes by pouring urine into sand to see if it was sweet enough to attract bugs. Other physicians just dipped a finger in and took a taste. Fortunately, today’s physicians have access to far more elegant diagnostic tools.

When You Feel the Urge to Go, GO

Urinary frequency is also important. Peeing six to eight times per day is “average.” You might go more or less often than that, depending on how much water you drink and how active you are. Increased frequency can be caused by an overactive bladder (involuntary contractions), caffeine, a urinary tract infection (UTI), interstitial cystitis, benign prostate enlargement, diabetes, or one of a handful of neurological diseases.9

It is important to pee when you feel the urge. Delaying urination can cause bladder overdistension — like overstretching a Slinky such that it can’t bounce back. You may habitually postpone urination if you find bathroom breaks inconvenient at work, or if you have Paruresis (also known as Shy Bladder Syndrome, Bashful Bladder, Tinkle Terror, or Pee Anxiety), the fear of urinating in the presence of others. Seven percent of the public suffers from this condition.10

How Much Water Should You Drink?

I don’t subscribe to the commonly quoted rule of drinking six to eight glasses of water every day. Your body is capable of telling you what it needs and when it needs it. Once your body has lost one to two percent of its total water, your thirst mechanism kicks in to let you know it’s time to drink — so thirst should be your guide. Of course, if you are outside on a hot, dry day or exercising vigorously, you’ll require more water than usual — but even then, drinking when you feel thirsty will allow you to remain hydrated.

As you age, your thirst mechanism tends to work less efficiently. Therefore, older adults will want to be sure to drink water regularly, in sufficient quantity to maintain pale yellow urine. As long as you aren’t taking riboflavin (vitamin B2, found in most multivitamins), which turns urine bright “fluorescent” yellow, then your urine should be quite pale. If you have kidney or bladder stones or a urinary tract infection, increase your water intake accordingly.

You and Your Urinary System

You should now have a pretty good idea of how important it is to familiarize yourself with what’s normal for your pee. Urine is a window into the inner workings of your body and can function as an “early warning system” for detecting health problems.

The most important factor in the overall health of your urinary tract is drinking plenty of pure, fresh water every day. Inadequate hydration is the number one risk factor for kidney stones, and staying well hydrated also helps prevent UTIs. To avoid overly frequent bathroom breaks, stay hydrated but not overhydrated. Drink whenever you’re thirsty, but don’t feel you have to drink eight glasses of water per day, every day. If you’re getting up during the night to pee, stop drinking three to four hours before bedtime.

Limit your caffeine and alcohol intake, which can irritate the lining of your bladder. Make sure your diet has plenty of magnesium, and avoid sugar (including fructose and soda) and non-fermented soy products due to their oxalate content. Finally, don’t hold it. As soon as you feel the urge to go, go! Delaying urination is detrimental to the health of your bladder due to overdistension.







The Effect of Violent and Nonviolent Video Games on Heart Rate Variability, Sleep, and Emotions in Adolescents With Different Violent Gaming Habits.


Objective To study cardiac, sleep-related, and emotional reactions to playing violent (VG) versus nonviolent video games (NVG) in adolescents with different gaming habits.

Methods Thirty boys (aged 13–16 years, standard deviation = 0.9), half of them low-exposed (≤1 h/d) and half high-exposed (≥3 h/d) to violent games, played a VG and an NVG for 2 hours on two different evenings in their homes. Heart rate (HR) and HR variability were registered from before the start of play until the next morning. A questionnaire about emotional reactions was administered after the gaming sessions, and a sleep diary was completed on the following mornings.

Results During sleep, there were significant interaction effects between group and gaming condition for HR (means [standard errors] for low-exposed: NVG 63.8 [2.2] and VG 67.7 [2.4]; for high-exposed: NVG 65.5 [1.9] and VG 62.7 [1.9]; F(1,28) = 9.22, p = .005). There were also significant interactions for sleep quality (low-exposed: NVG 4.3 [0.2] and VG 3.7 [0.3]; high-exposed: NVG 4.4 [0.2] and VG 4.4 [0.2]; F(1,28) = 3.51, p = .036, one-sided) and for sadness after playing (low-exposed: NVG 1.0 [0.0] and VG 1.4 [0.2]; high-exposed: NVG 1.2 [0.1] and VG 1.1 [0.1]; F(1,27) = 6.29, p = .009, one-sided).

Conclusions Different combinations of the extent of (low versus high) previous VG and experimental exposure to a VG or an NVG are associated with different reaction patterns—physiologically, emotionally, and sleep related. Desensitizing effects or selection bias stand out as possible explanations.



The 9 Foods You Should Never Eat

Story at-a-glance

  • Studies have repeatedly shown artificial sweeteners stimulate appetite, increase carbohydrate cravings, and stimulate fat storage and weight gain. One recent study found both saccharin and aspartame cause greater weight gain than sugar
  • Processed meats increase your risk of cancer, especially bowel cancer, and NO amount of processed meat is “safe.” So ditch the deli meats and opt instead for fresh organically-raised grass-fed or pastured meats, or wild caught salmon
  • Margarine and vegetable oils are two of the absolute worst fats to eat. Both contain heart-harming trans fats, for example. Your best alternative for cooking is coconut oil, as it’s less susceptible to heat damage
  • Microwave popcorn, table salt, non-organic produce like potatoes, and unfermented soy products, including soy protein isolate, are more harmful than beneficial as they all contain hazardous contaminants
  • Most canned foods contain BPA, a toxic chemical. Acidity causes BPA to leach into your food. Stick to fresh fruits and vegetables, or switch over to brands that use glass containers instead—especially for acidic foods like tomatoes


Many foods have been heavily promoted as being healthy when they are nothing more than pernicious junk foods. In the featured article, Clean Plates1 founder Jared Koch shared his list of nine staple foods that are far less “good for you” than you’ve been led to believe.

Here, I expand on the selections that are mentioned in the featured article.

1. Canned Tomatoes

Many leading brands of canned foods contain BPA — a toxic chemical linked to reproductive abnormalities, neurological effects, heightened risk of breast and prostate cancers, diabetes, heart disease and other serious health problems. According to Consumer Reports’ testing, just a couple of servings of canned food can exceed the safety limits for daily BPA exposure for children.

High acidity — a prominent characteristic of tomatoes — causes BPA to leach into your food. To avoid this hazardous chemical, avoid canned foods entirely and stick to fresh fruits and vegetables, or switch over to brands that use glass containers instead—especially for acidic foods like tomatoes.

2. Processed Meats

As Koch warns, processed deli meats like salami, ham, and roast beef are typically made with meats from animals raised in confined animal feeding operations (CAFOs), which means they're given growth hormones, antibiotics, and other veterinary drugs and raised in deplorable conditions that promote disease. These meats are also filled with sodium nitrite (a commonly used preservative and antimicrobial agent that also adds color and flavor) and other chemical flavorings and dyes.

Nitrites can be converted into nitrosamines in your body, which are potent cancer-causing chemicals. Research has linked nitrites to higher rates of colorectal, stomach and pancreatic cancer. But that’s not all. Most processed deli meats also contain other cancer-promoting chemicals that are created during cooking. These include:

  • Heterocyclic Amines (HCAs), which are hazardous compounds created in meats and other foods that have been cooked at high temperatures. According to research, processed meats are clearly associated with an increased risk of stomach, colon, and breast cancers.
  • Polycyclic Aromatic Hydrocarbons (PAHs): Many processed meats are smoked as part of the curing process, which causes PAHs to form. PAHs can also form when grilling. When fat drips onto the heat source, causing excess smoke, and the smoke surrounds your food, it can transfer cancer-causing PAHs to the meat.
  • Advanced Glycation End Products (AGEs): When food is cooked at high temperatures—including when it is pasteurized or sterilized—it increases the formation of AGEs in your food. AGEs build up in your body over time leading to oxidative stress, inflammation and an increased risk of heart disease, diabetes and kidney disease.

The truth is, processed meats are not a healthful choice for anyone and should be avoided entirely, according to a 2011 review of more than 7,000 clinical studies examining the connection between diet and cancer. The report was commissioned by The World Cancer Research Fund2 (WCRF) using money raised from the general public. The findings were therefore not influenced by any vested interests, which makes them all the more reliable.

It’s the biggest review of the evidence ever undertaken, and it confirms previous findings: Processed meats increase your risk of cancer, especially bowel cancer, and NO amount of processed meat is “safe.” You’re far better off ditching the deli meats and opting instead for fresh organically-raised grass-fed or pastured meats, or wild caught salmon.

3. Margarine

The unfortunate result of the low-fat diet craze has been the shunning of healthful fats such as butter, and public health has declined as a result of this folly. There are a myriad of unhealthy components to margarine and other butter impostors, including:

  • Trans fats: These unnatural fats in margarine, shortenings and spreads are formed during the process of hydrogenation, which turns liquid vegetable oils into a solid fat. Trans fats contribute to heart disease, cancer, bone problems, hormonal imbalance and skin disease; infertility, difficulties in pregnancy and problems with lactation; and low birth weight, growth problems and learning disabilities in children. A US government panel of scientists determined that man-made trans fats are unsafe at any level.
  • Free radicals: Free radicals and other toxic breakdown products are the result of high temperature industrial processing of vegetable oils. They contribute to numerous health problems, including cancer and heart disease.
  • Emulsifiers and preservatives: Numerous additives of questionable safety are added to margarines and spreads. Most vegetable shortening is stabilized with preservatives like BHT.
  • Hexane and other solvents: Used in the extraction process, these industrial chemicals can have toxic effects.

Good old-fashioned butter, when made from the milk of grass-fed cows, is rich in a substance called conjugated linoleic acid (CLA). CLA is not only known to help fight cancer and diabetes; it may even help you lose weight, which cannot be said for its trans-fat substitutes. Much of the reason butter is vilified is that it contains saturated fat. If you’re still in the mindset that saturated fat is harmful to your health, please read the Healthy Fats section of my Optimized Nutrition Plan to learn why saturated fat is actually good for you.

4. Vegetable Oils

Of all the destructive foods available to us, those made with heated vegetable oils are some of the worst. Make no mistake about it: vegetable oil is not the health food you were led to believe it was. This is largely because these oils are highly processed, and when consumed in massive amounts, as they are by most Americans, they seriously distort the important omega-6 to omega-3 ratio. Ideally, this ratio is 1:1.

Anytime you cook a food, you run the risk of creating heat-induced damage. The oils you choose to cook with must be stable enough to resist chemical changes when heated to high temperatures, or you run the risk of damaging your health. One of the ways vegetable oils can inflict damage is by converting your good cholesterol into bad cholesterol—by oxidizing it. When you cook with polyunsaturated vegetable oils (such as canola, corn, and soy oils), oxidized cholesterol is introduced into your system.

As the oil is heated and mixed with oxygen, it goes rancid. Rancid oil is oxidized oil and should NOT be consumed—it leads directly to vascular disease. Trans-fats are introduced when these oils are hydrogenated, which increases your risk of chronic diseases like breast cancer and heart disease.

So what’s the best oil to cook with?

Of all the available oils, coconut oil is the oil of choice for cooking because it is a nearly completely saturated fat, which means it is much less susceptible to heat damage. And coconut oil is one of the most unique and beneficial fats for your body. For more in-depth information about the many benefits of coconut oil, please see this special report. Olive oil, while certainly a healthful oil, is easily damaged by heat and is best reserved for drizzling cold over salad.

5. Microwave Popcorn

Perfluoroalkyls, which include perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS), are chemicals used to keep grease from leaking through fast food wrappers. They are being ingested by people through their food and showing up as contaminants in blood. Microwave popcorn bags are lined with PFOA, and when they are heated the compound leaches onto the popcorn.

These chemicals are part of an expanding group of chemicals commonly referred to as “gender-bending” chemicals, because they can disrupt your endocrine system and affect your sex hormones. The EPA has classified PFCs as “likely carcinogens,” and has stated that PFOA “poses developmental and reproductive risks to humans.” Researchers have also linked various PFCs to a range of other health dangers, such as:

  • Infertility — A study published in the journal Human Reproduction3 found that both PFOA and PFOS dramatically increased the odds of infertility. PFOA was linked to a 60 to 154 percent increase in the chance of infertility.
  • Thyroid disease — A 2010 study4 found that PFOA can damage your thyroid function. Individuals with the highest PFOA concentrations were more than twice as likely to report current thyroid disease, compared to those with the lowest PFOA concentrations. Your thyroid contains thyroglobulin protein, which binds to iodine to form hormones, which in turn influence essentially every organ, tissue and cell in your body. Thyroid hormones are also required for growth and development in children. Thyroid disease, if left untreated, can lead to heart disease, infertility, muscle weakness, and osteoporosis.
  • Cancer — PFOA has been associated with tumors in at least four different organs in animal tests (liver, pancreas, testicles and mammary glands in rats), and has been associated with increases in prostate cancer in PFOA plant workers.
  • Immune system problems — Several studies by scientists in Sweden indicate that PFCs have an adverse effect on your immune system. As described in a report on PFCs by the Environmental Working Group (EWG), PFOA was found to decrease all immune cell subpopulations studied in the thymus and spleen, and to cause immunosuppression.
  • Increased LDL cholesterol levels — A 2010 study in the Archives of Pediatrics & Adolescent Medicine5 found that children and teens with higher PFOA levels had higher levels of total cholesterol and LDL or “bad” cholesterol, while PFOS was associated with increased total cholesterol, including both LDL cholesterol and HDL or “good” cholesterol.

I strongly recommend avoiding any product you know contains these toxic compounds, particularly non-stick cookware, but also foods sold in grease-proof packaging, such as fast food and microwave popcorn. Clearly, if you’re eating fast food or junk food, PFCs from the wrapper may be the least of your problems, but I think it’s still important to realize that not only are you not getting proper nutrition from the food itself, but the wrappers may also add to your toxic burden.

6. Non-Organic Potatoes and Other Fresh Produce Known for High Pesticide Contamination

Your best bet is to buy only organic fruits and vegetables, as synthetic agricultural chemicals are not permissible under the USDA organic rules. That said, not all conventionally grown fruits and vegetables carry the same pesticide load. While Koch focuses on potatoes, as they tend to take up a lot of pesticides and other agricultural chemicals present in the soil, I would recommend reviewing the “Shopper’s Guide to Pesticides in Produce”6 by the Environmental Working Group.

Of the 48 different fruit and vegetable categories tested by the EWG for the 2013 guide, the following 15 fruits and vegetables had the highest pesticide load, making them the most important to buy or grow organically:

Apples, celery, cherry tomatoes, cucumbers, grapes, hot peppers, nectarines (imported), peaches, potatoes, spinach, strawberries, sweet bell peppers, kale, collard greens, and summer squash.


In contrast, the following foods were found to have the lowest residual pesticide load, making them the safest bet among conventionally grown vegetables. Note that a small amount of sweet corn and most Hawaiian papaya, although low in pesticides, are genetically engineered (GE). If you’re unsure of whether the sweet corn or papaya is GE, I’d recommend opting for organic varieties:

Asparagus, avocado, cabbage, cantaloupe, sweet corn (non-GMO), eggplant, grapefruit, kiwi, mango, mushrooms, onions, papayas (non-GMO; most Hawaiian papaya is GMO), pineapple, sweet peas (frozen), and sweet potatoes.

7. Table Salt

Salt is essential for life—you cannot live without it. However, regular ‘table salt’ and the salt found in processed foods are NOT identical to the salt your body really needs. In fact, table salt has practically nothing in common with natural salt. One is health damaging, and the other is healing.

  • Processed salt is 98 percent sodium chloride; the remaining two percent comprises man-made chemicals such as moisture absorbents (dangerous compounds like ferrocyanide and aluminosilicate) and a little added iodine. Some European countries, where water fluoridation is not practiced, also add fluoride to table salt
  • Natural salt is about 84 percent sodium chloride. The remaining 16 percent of natural salt consists of other naturally occurring minerals, including trace minerals like silicon, phosphorous and vanadium

Given that salt is absolutely essential to good health, I recommend switching to a pure, unrefined salt. My favorite is an ancient, all-natural sea salt from the Himalayas. Himalayan salt is completely pure, having spent many thousands of years maturing under extreme tectonic pressure, far away from impurities, so it isn’t polluted with the heavy metals and industrial toxins of today. And it’s hand-mined, hand-washed, and minimally processed. Himalayan salt is only 85 percent sodium chloride; the remaining 15 percent contains 84 trace minerals from our prehistoric seas. Unrefined natural salt is important to many biological processes, including:

  • Being a major component of your blood plasma, lymphatic fluid, extracellular fluid, and even amniotic fluid
  • Carrying nutrients into and out of your cells
  • Maintaining and regulating blood pressure
  • Increasing the glial cells in your brain, which are responsible for creative thinking and long-term planning
  • Helping your brain communicate with your muscles, so that you can move on demand via sodium-potassium ion exchange

While natural unprocessed salt has many health benefits, that does not mean you should use it with impunity. Another important factor is the potassium-to-sodium ratio of your diet. An imbalance in this ratio can lead to hypertension (high blood pressure) and other health problems, including heart disease, memory decline, erectile dysfunction and more. The easiest way to avoid this imbalance is to avoid processed foods, which are notoriously low in potassium and high in sodium. Instead, eat a diet of whole, ideally organically grown foods to ensure optimal nutrient content. This type of diet will naturally provide much larger amounts of potassium in relation to sodium.
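The ratio logic above is simple enough to sketch in a few lines. The nutrient figures below are illustrative assumptions rather than measured values, and `k_to_na_ratio` is a hypothetical helper, but the comparison shows why a whole-food meal tends to land well above 1:1 while a processed one lands below it:

```python
# Sketch: compare the potassium-to-sodium ratio of a whole-food meal
# versus a processed one. The mg-per-serving numbers are illustrative
# assumptions for this example, not measured nutrient data.

def k_to_na_ratio(foods):
    """Total potassium divided by total sodium for a list of
    (name, potassium_mg, sodium_mg) tuples."""
    total_k = sum(k for _, k, _ in foods)
    total_na = sum(na for _, _, na in foods)
    return total_k / total_na

whole_food_meal = [
    ("spinach, 1 cup cooked", 839, 126),
    ("baked potato", 926, 17),
    ("avocado, half", 487, 7),
]

processed_meal = [
    ("canned soup, 1 cup", 200, 890),
    ("deli ham, 2 oz", 287, 730),
    ("crackers, 5", 40, 150),
]

print(round(k_to_na_ratio(whole_food_meal), 1))  # well above 1
print(round(k_to_na_ratio(processed_meal), 2))   # well below 1
```

With these assumed values the whole-food meal comes out around 15:1 potassium to sodium, while the processed meal is closer to 0.3:1 — the same direction of imbalance the paragraph above describes.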

8. Soy Protein Isolate and Other Unfermented Soy Products

Sadly, most of what you have been led to believe by the media about soy is simply untrue. One of the worst problems with soy comes from the fact that 90 to 95 percent of soybeans grown in the US are genetically engineered (GE), and these are used to create soy protein isolate. Genetically engineered soybeans are designed to be “Roundup ready,” which means they’re engineered to withstand otherwise lethal doses of herbicide.

The active ingredient in Roundup herbicide is called glyphosate, which is responsible for the disruption of the delicate hormonal balance of the female reproductive cycle. What’s more, glyphosate is toxic to the placenta, which is responsible for delivering vital nutrients from mother to child, and eliminating waste products. Once the placenta has been damaged or destroyed, the result can be miscarriage. In those children born to mothers who have been exposed to even a small amount of glyphosate, serious birth defects can result.

Glyphosate’s mechanism of harm was only recently identified, and it demonstrates how this chemical disrupts cellular function and induces many of our modern diseases, including autism. Soy protein isolate can be found in protein bars, meal replacement shakes, bottled fruit drinks, soups and sauces, meat analogs, baked goods, breakfast cereals and some dietary supplements.

Even if you are not a vegetarian and do not use soymilk or tofu, it is important to be a serious label reader. There are so many different names for soy additives, you could be bringing home a genetically modified soy-based product without even realizing it. Soy expert Dr. Kaayla Daniel offers a free Special Report7, “Where the Soys Are,” on her Web site. It lists the many “aliases” that soy might be hiding under in ingredient lists — words like “bouillon,” “natural flavor” and “textured plant protein.”

Besides soy protein isolate, ALL unfermented soy products are best avoided if you value your health. Thousands of studies have linked unfermented soy to malnutrition, digestive distress, immune-system breakdown, thyroid dysfunction, cognitive decline, reproductive disorders and infertility—even cancer and heart disease.

The only soy with health benefits is organic soy that has been properly fermented, and these are the only soy products I ever recommend consuming. After a long fermentation process, the phytate and “anti-nutrient” levels of soybeans are reduced, and their beneficial properties become available to your digestive system. To learn more, please see this previous article detailing the dangers of unfermented soy.

9. Artificial Sweeteners

Contrary to popular belief, studies have found that artificial sweeteners such as aspartame can stimulate your appetite, increase carbohydrate cravings, and stimulate fat storage and weight gain. In one of the most recent of these studies,8 saccharin and aspartame were found to cause greater weight gain than sugar.

Aspartame is perhaps one of the most problematic. It is primarily made up of aspartic acid and phenylalanine. The phenylalanine has been synthetically modified to carry a methyl group, which provides the majority of the sweetness. That phenylalanine methyl bond, called a methyl ester, is very weak, which allows the methyl group on the phenylalanine to easily break off and form methanol.

You may have heard the claim that aspartame is harmless because methanol is also found in fruits and vegetables. However, in fruits and vegetables, the methanol is firmly bonded to pectin, allowing it to be safely passed through your digestive tract. Not so with the methanol created by aspartame; there it’s not bonded to anything that can help eliminate it from your body.

Methanol acts as a Trojan horse; it’s carried into susceptible tissues in your body, like your brain and bone marrow, where the alcohol dehydrogenase (ADH) enzyme converts it into formaldehyde, which wreaks havoc with sensitive proteins and DNA. All animals EXCEPT HUMANS have a protective mechanism that allows methanol to be broken down into harmless formic acid. This is why toxicology testing on animals is a flawed model. It doesn’t fully apply to people.

Guidelines for Healthy Food

Whatever food you’re looking to eat, whether organic or locally grown, from either your local supermarket or a farmer’s market, the following are signs of a high-quality, healthy food. Most often, the best place to find these foods is from a sustainable agricultural group in your area. You can also review my free nutrition plan to get started on a healthy eating program today:

  • It’s grown without pesticides and chemical fertilizers (organic foods fit this description, but so do some non-organic foods)
  • It’s not genetically engineered
  • It contains no added growth hormones, antibiotics, or other drugs
  • It does not contain artificial anything, nor any preservatives
  • It is fresh (if you have to choose between wilted organic produce or fresh conventional produce, the latter may still be the better option as freshness is important for optimal nutrient content)
  • It was not grown in a factory farm
  • It is grown with the laws of nature in mind (meaning animals are fed their native diets, not a mix of grains and animal byproducts, and have free-range access to the outdoors)
  • It is grown in a sustainable way (using minimal amounts of water, protecting the soil from burnout, and turning animal wastes into natural fertilizers instead of environmental pollutants)

Areas of the Brain Modulated by Single-Dose Methylphenidate Treatment in Youth with ADHD During Task-Based fMRI: A Systematic Review.


Objective: Attention-deficit/hyperactivity disorder (ADHD) is a psychiatric disorder affecting 5% of children. Methylphenidate (MPH) is a common medication for ADHD. Studies examining MPH’s effect on pediatric ADHD patients’ brain function using functional magnetic resonance imaging (fMRI) have not been compiled. The goals of this systematic review were to determine (1) which areas of the brain in pediatric ADHD patients are modulated by a single dose of MPH, (2) whether areas modulated by MPH differ by task type performed during fMRI data acquisition, and (3) whether changes in brain activation due to MPH relate to clinical improvements in ADHD-related symptoms.

Methods: We searched the electronic databases PubMed and PsycINFO (1967–2011) using the following terms: ADHD AND (methylphenidate OR MPH OR ritalin) AND (neuroimaging OR MRI OR fMRI OR BOLD OR event related), and identified 200 abstracts, 9 of which were reviewed based on predefined criteria.

Results: In ADHD patients the middle and inferior frontal gyri, basal ganglia, and cerebellum were most often affected by MPH. The middle and inferior frontal gyri were frequently affected by MPH during inhibitory control tasks. Correlation between brain regions and clinical improvement was not possible due to the lack of symptom improvement measures within the included studies.

Conclusions: Throughout nine task-based fMRI studies investigating MPH’s effect on the brains of pediatric patients with ADHD, MPH resulted in increased activation within frontal lobes, basal ganglia, and cerebellum. In most cases, this increase “normalized” activation of at least some brain areas to that seen in typically developing children.


The purpose of this systematic review was to determine which areas of the brain are modulated by MPH medication in pediatric ADHD patients during task performance, whether these affected brain areas differ by task, and whether any of these brain areas can be linked to improvement in the clinical symptoms of ADHD.

The results of our review suggest that when patients with ADHD are given a single dose of MPH, an increase in activation primarily occurs within the frontal lobes (especially in the inferior and middle frontal gyri), the basal ganglia, and the cerebellum. Abnormalities in these regions have all been implicated in patients with ADHD. Structurally, the prefrontal cortex (which includes portions of the inferior and middle frontal gyri), the caudate (part of the basal ganglia), and the cerebellum have consistently been found to have a smaller volume in patients with ADHD than in typically developing children. 37 Functionally, when assessing unmedicated brain activation of patients with ADHD during task performance, less activation has been found in the frontal lobes, 15,17–22,24 striatum, 15,16,18,20 and cerebellum 24,38 in comparison to brain activation of healthy control subjects. The MPH-responsive areas of the brain in youth with ADHD discussed within this review therefore reflect areas that, both structurally and functionally, have previously been reported as abnormal in ADHD. Of the nine studies included in this review, six found that MPH at least partially normalized the activation of the brains of patients with ADHD to the levels seen in typically developing comparison subjects while performing a task. 28–33 The areas most frequently normalized were those in the basal ganglia (4 studies), 28,29,31,33 followed by the frontal lobes 31–33 and parietal lobes 30,32,33 (both found in 3 studies). Taken together, these findings indicate that MPH may help to return the brain functioning of patients with ADHD to the normal levels seen in typically developing children when performing a cognitive task.

The second goal of our systematic review was to determine if brain areas affected by MPH differed by task. When the reviewed studies were grouped by task, we found that the middle and inferior frontal gyri were the brain areas most often affected by MPH during inhibitory control task performance. 29,32–35 Similarly, the one study that used the time-discrimination task found that MPH increased activation in the frontal lobes, specifically within the inferior, orbital, and medial frontal cortices and the anterior cingulate cortex. 31 These findings differed from MPH’s effects during selective attention task performance, when the basal ganglia were most often activated. 28,30 Since only two studies used this task, it is difficult to say whether these results represent the true frequency with which this area is affected. The single study that used a working memory task did not find any increase in brain activation in response to MPH, but further studies using this task may find a different result. 36

In healthy participants, the performance of inhibitory control tasks has been found to preferentially activate the dorsolateral prefrontal cortex (including the middle frontal gyrus), inferior frontal gyrus, anterior cingulate, and parietal cortex. 39 For youth with ADHD, MPH increased activation within each of these areas in at least one of the studies using an inhibitory control task; the middle and inferior frontal cortex activation was increased in almost all of these studies (4/5 studies for each). It is therefore possible that in ADHD, MPH influences the activation within the middle and inferior frontal gyri while performing an inhibitory control task. Given these neuroimaging findings, one might expect that children with ADHD would perform better on inhibitory control tasks after they receive MPH than they do without MPH; however, this prediction was not borne out by our investigation. Only two of the five studies that used an inhibitory control task reported that MPH improved ADHD patients’ performance on the task (i.e., errors decreased, variability in reaction time decreased, or target discrimination increased). Although it is possible that this improvement on the task was the result of MPH medication, such an improvement could also be due to practice. In one of these two studies, patients with ADHD went through two imaging sessions, whereas healthy participants were scanned only once; it is thus possible that the ADHD patients improved on the task because they performed it more than once. In the other of these two studies, both ADHD patients and healthy participants were given MPH and imaged twice, and both improved on task performance. It is therefore difficult to determine whether this improvement was due to practice or to medication.
Furthermore, neither of these two studies reported that MPH normalized the activation within ADHD patients’ frontal lobes to the levels seen in typically developing children; in one of these two studies, medicated ADHD patients’ brain activations were not directly compared to the typically developing control group, and in the other, normalizations of brain activation within the basal ganglia were found. It is therefore possible that MPH’s effects on areas of the frontal lobe are insufficient to improve inhibitory control task performance or that the power of the studies was insufficient to capture improved task performance.

Similar to the inhibitory control task, the time-discrimination task has been found to preferentially activate the dorsolateral prefrontal cortex, inferior frontal gyrus, and cerebellum during performance by healthy adults. 40 In the single included study that used this type of task, activation in the inferior, orbital, and medial frontal gyrus, anterior cingulate gyrus, and cerebellum was increased by MPH administration during task performance in patients with ADHD. MPH normalized all these areas of ADHD patients’ brains to the activation levels seen in typically developing children, but patients with ADHD did not commit significantly fewer errors on the task when they were treated with MPH. This finding may indicate that MPH does not have a powerful enough effect to improve time-discrimination task performance. It is also possible, however, that the number of patients with ADHD included in this single study (12 boys) was too small to capture a statistically significant difference in task performance.

In contrast to inhibitory control and time-discrimination tasks—which selectively activate areas mostly in the frontal lobe—selective attention tasks, including visual and auditory attention tasks and continuous performance tasks, have been found to activate a wide range of brain regions, including portions of the frontal, parietal, temporal, and occipital lobes, the cerebellum, and the basal ganglia. 41,42 Although at least one of the two selective attention studies included in this review found that MPH increased brain activation in the frontal, parietal, and occipital lobes, as well as the basal ganglia and cerebellum, neither of these studies found increased activation in the temporal lobes. 28,30 Therefore, MPH may not work in this region during performance of a selective attention task. Both of the included studies reported increased activation in the basal ganglia in response to MPH, but only one study showed normalization to healthy control levels in this area. 28 Neither of these studies found that MPH improved how well patients with ADHD performed on the task, which again may be an issue of insufficient power.

Finally, this review included a single study that assessed working memory; that study found no increase in brain activation in response to MPH. With only a single study to consider, no definite conclusions about the nature of MPH’s effects during working memory task performance can be made.

The last goal of this review was to determine if brain regions affected by MPH could be related to improvement in clinical ADHD symptoms. We found that none of the included studies reported measurements of the severity of ADHD symptoms before and after MPH medication administration. We were therefore unable to compare brain regions of interest between studies that found clinical improvement and those that did not. Previous work has shown, however, that MPH ameliorates the symptoms of ADHD. The landmark Collaborative Multisite Multimodal Treatment Study of Children with Attention-Deficit/Hyperactivity Disorder revealed that medication management with MPH effectively reduced inattentive and hyperactive symptoms of ADHD. 43 In that study, participants received individually titrated doses of MPH, starting (on average) at about 12 mg. The doses of MPH reported in this review are comparable to that amount (the lowest dose in the reviewed studies was 10 mg). We therefore speculate that had ADHD symptoms been recorded as part of the reviewed studies, improvement in these symptoms was possible.

Though it was not an explicit goal of this review, we also examined the effect of previous medication status on brain activation in response to MPH. When the included studies were grouped based on whether or not ADHD participants had received stimulant medication prior to the reported study, it became evident that there was a difference in MPH-induced brain activation patterns between the stimulant-naive and non-naive groups. In response to MPH, studies that used stimulant-naive participants reported an increase in activation—in the inferior frontal cortex, parietal lobes, temporal lobes, occipital lobes, and cerebellum—more often than studies that used non-naive participants. This result may indicate that these areas of the brain are more responsive to initial MPH treatment but, over time, become less sensitive to the medication’s effects. Alternatively, chronic treatment with MPH may increase baseline activation of these areas such that the difference between on- and off-MPH treatment scan sessions is no longer evident. This possibility is supported by a SPECT study that found chronic MPH treatment improved cerebral blood flow to frontal and temporal lobes in patients with ADHD; these changes were still detectable two months after discontinuation of MPH. 44 It is therefore possible that after a period of treatment with MPH, tonic blood flow to brain areas affected by MPH is increased. This permanently increased blood flow would then translate to increased blood oxygenation levels in these areas, resulting in readings of higher brain activation at baseline—that is, the pre-MPH (single dose) imaging session. 
However, results from the only study that has assessed the chronic effects of MPH in pediatric patients with ADHD using fMRI analysis do not corroborate these findings: following one year of MPH treatment, boys with ADHD did not show increased neural activity during the performance of tasks designed to assess executive (inhibitory) control and selective attention compared to the pretreatment imaging session.45

This review has focused exclusively on pediatric neuroimaging, but there is considerable interest in the effects of MPH on adult ADHD patients’ brain activity, given that ADHD persists into adulthood in 15%–65% of childhood cases, depending on diagnostic criteria. 3 In one of the studies included in this review, MPH’s effects were reported on both child and adult groups of child-parent dyads diagnosed with ADHD. 34 That study used an inhibitory control task and found that although areas of the frontal lobes, striatum, and cerebellum showed increased activity in the children in response to MPH, only the striatum (specifically, the caudate) showed increased activation in the adults. By contrast, a study that examined the activation of the dorsal anterior midcingulate cortex (part of the frontal lobe) in adults while performing an inhibitory control task (the multisource interference task) found that after six weeks of treatment with MPH, activation in this area increased, in comparison to the placebo-treated group. 46 The study also found that MPH treatment increased activation in the dorsolateral prefrontal and premotor cortex (portions of the superior, middle, and inferior frontal gyri), parietal cortex, striatum (specifically, the caudate), cerebellum, and thalamus, compared to placebo. With only two studies to consider, it is difficult to state whether adults with ADHD exhibit similar brain activation responses to MPH as children with ADHD. However, both of these studies with adult participants agree that MPH increases the brain activation during inhibitory control task performance within the striatum, specifically within the caudate.

The major limitation of this systematic review is the small number of studies it included. To date, only nine studies have examined how a single dose of MPH affects the brain response during task performance in youth with ADHD. These nine studies employed only four types of task, which limits the applicability of this review to other types of tasks. Another limitation of this systematic review is that four of our included studies 30–33 were published by the same first author; insofar as those studies included overlapping patient populations, they would not represent independent contributions to this review. In addition, this review has reported the general anatomical brain areas associated with MPH-induced changes in activation patterns rather than Brodmann areas or Talairach coordinates. Although all included papers described the anatomical locations of BOLD signal changes, only some reported Brodmann areas or Talairach coordinates, making it difficult to universally compare these more specific regions of interest. Finally, many of the included studies were not specific about the multiple comparison corrections applied in their analyses—which may affect the validity of the findings.

The results of this systematic review point to several areas of future research. As none of the included studies examined the relationship between ADHD symptom improvement and BOLD brain activation in response to MPH, this component would be an important one to include in future studies. Another avenue for future research may lie in investigating MPH’s effect on functional connectivity, either during task performance or the resting state. The current studies reveal the effects of MPH on functional brain activation, whereas a connectivity analysis would lead to a better understanding regarding the underlying neural networks. The nine studies included in this review focused mostly on the MPH-induced functional activation differences in the brains of youth with ADHD. One of these studies, however, also examined the changes in brain functional connectivity during selective attention task performance. That study found that MPH normalized all intercorrelation differences between children with ADHD and healthy control children, providing more insight into the possible effects of MPH administration on brain networks. Future studies that examine these functional connectivity responses to MPH may help expand understanding of this drug’s effects.

In conclusion, children with ADHD showed changes in brain activation after a single dose of MPH, especially within the frontal lobes, basal ganglia, and cerebellum. MPH appears to affect regions of the frontal lobes more frequently during inhibitory control tasks than during tasks assessing selective attention. By contrast, during selective attention tasks, MPH increases activation in a wider range of areas, including parts of the parietal and occipital lobes as well as the cerebellum and basal ganglia. These regions correspond to those that exhibit typical activation patterns during task performance by typically developing participants, which may provide evidence that MPH returns brain function in ADHD patients to, or close to, a typically functioning state. As it stands, the existing literature supports the notion that MPH helps normalize brain activation, specifically within the frontal lobes, basal ganglia, and cerebellum, but whether the activation of these areas correlates with ADHD symptom improvement has yet to be demonstrated.


Acupuncture in 21st Century Anesthesia: Is There a Needle in the Haystack?



Acupuncture, a component of Traditional Chinese Medicine, has developed over a period of more than 3000 years and is based on the concept of the unification of the human being with his environment.1 Acupuncture practice has constantly evolved throughout history, drawing on knowledge and ideas garnered from astronomy, nature, science, and technology.2,3 In contrast to what was stated by Colquhoun and Novella,4 acupuncture consists of applying various stimuli (e.g., pressure, needles, heat, laser, suction cups, injection, and electrical stimulation,5 as well as, most recently, ultrasound waves6) on or into specific acupuncture points (acupoints) to restore a patient’s health. During the early 1970s, this traditional healing practice became more popular because of programs vigorously supported by the Chinese government,7 leading to greater international awareness of this therapeutic approach. A recent PubMed search for “acupuncture clinical trials” yields 3833 articles, demonstrating that acupuncture has been investigated as a treatment for many medical conditions. A potential reason for the popularity of acupuncture among patients may be its “individualistic” or “person-centered” approach.

Although >40 disorders have been recognized by the World Health Organization8 as conditions that can benefit from acupuncture treatment, many within the field of science view acupuncture as “quackery” and “pseudoscience,” and its effect as “theatrical placebo.”4,9–14 It seems somewhat naive to condemn the practice of acupuncture outright while accepting orthodox medicine as the basis for treating all medical conditions. Herein, we describe evidence supporting the thesis that acupuncture, as part of anesthesia practice, can provide clinically meaningful benefits for patients. Postoperative nausea and vomiting (PONV),15–17 postoperative pain,18,19 and chronic pain conditions20,21 are 3 clinical problems pertinent to anesthesia practice that cannot yet be adequately treated, owing to ineffective or only partially effective pharmacological interventions. Unsuccessful conventional treatments for these clinical entities have caused significant financial burden, health care costs, and patient dissatisfaction. As a result, acupuncture has been investigated as a treatment or complementary treatment for these 3 clinical entities.

To validate acupuncture efficacy, multiple sham techniques and placebo instruments have been developed and are used in clinical trials and experimental conditions. These techniques and instruments are thought to control for the “nonspecific” effects of acupuncture and are broadly termed “sham” or “placebo” acupuncture in the literature. Sham acupuncture is defined as an intervention that mimics the sensation of acupuncture stimulation but is thought to lack the analgesic and antiemetic effects of acupuncture. In both clinical and experimental trials, sham acupuncture can be classified by whether the intervention penetrates the skin. Penetrating shams (minimal acupuncture) involve shallow insertion of acupuncture needles into actual acupuncture points with minimal stimulation, or sham point stimulation (applying the same needling techniques in areas where there is no documentation of acupuncture points or meridians). Nonpenetrating shams have also been developed; these can be either sensorial shams (applying a toothpick or a filament to the skin surface of acupuncture points to simulate needle sensations) or visual shams (applying placebo needles [e.g., the Streitberger needle] that visually shorten when pressed onto the skin).

Using the above “controlling” techniques, acupuncture has been validated in various clinical trials. Thus far, the strongest evidence supporting acupuncture efficacy is pericardium-6 (PC-6) acupoint stimulation for PONV prophylaxis. The PC-6 acupoint Neiguan, translated as “inner gate,” is commonly used to treat nausea and vomiting in traditional Chinese medicine.1,5 This acupoint is located 5 cm proximal to the wrist between the palmaris longus and flexor carpi radialis tendons. A meta-analysis22 demonstrated the efficacy of PC-6 for the treatment of PONV in sham-controlled trials. Subsequently, a 2004 Cochrane review23 showed that acupuncture stimulation at PC-6 is superior to pooled antiemetic prophylaxis in preventing nausea. An updated Cochrane review24 surveyed 40 clinical trials (a total of 4858 participants) and found that, compared with sham treatment, PC-6 acupoint stimulation is as effective as conventional antiemetics (e.g., droperidol,25 ondansetron,26 and others27). PC-6 acupuncture can also complement antiemetics in reducing PONV. More importantly, the side effects associated with PC-6 acupoint stimulation were minor and self-limiting.24 Although the number needed to treat (NNT) for PONV with acupuncture ranged from 34 to 5 patients, this is similar to the NNT for conventional antiemetics.
For example, the NNT for nausea with IV droperidol 0.5 to 0.75 mg is 4.8 (95% confidence interval [CI], 3.0–12) and for vomiting is 10 (95% CI, 4.6 to −51).28 Similarly, metoclopramide 10 mg IV, a drug commonly used to prevent nausea and vomiting in the perioperative period, has an NNT of 30.29 Direct comparison between acupuncture and IV ondansetron reveals that the NNT for nausea (0–6 hours) is 4 (95% CI, 2.0–11.4) vs 5 (95% CI, 2.3–120.6), respectively, and for vomiting (0–6 hours) the NNT is 6 (95% CI, 3.0–84.7) vs 5 (95% CI, 2.9–21.0), respectively.30 Comparing acupuncture versus IV ondansetron, the NNT for nausea (0–24 hours) is 20 (95% CI, 3.7 to −5.8) vs 27 (95% CI, 3.9 to −5.5), respectively, and the NNT for vomiting (0–24 hours) is 18 (95% CI, 3.7 to −6.4) vs 9 (95% CI, 3.1 to −11.2).30
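All of the NNT figures above derive from the same simple relationship: the NNT is the reciprocal of the absolute risk reduction (ARR) between control and treated groups. A minimal sketch (the event rates below are illustrative, not taken from the cited trials):

```python
def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        # No benefit: NNT is undefined (a number needed to harm may apply instead)
        raise ValueError("treatment shows no benefit; NNT undefined")
    return 1.0 / arr

# Illustrative: if 40% of untreated patients vomit postoperatively and 20% of
# treated patients do, ARR = 0.20, so 5 patients must be treated to prevent
# one episode.
print(round(nnt(0.40, 0.20)))  # → 5
```

The negative CI bounds quoted above arise when the confidence interval for the ARR crosses zero, so the reciprocal interval wraps through infinity.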

Acupuncture has also been investigated as an adjunct for acute postoperative pain and various chronic pain conditions. A recent review article31 included 15 randomized controlled trials that compared acupuncture with sham controls in managing postoperative pain. The investigators found that patients in the acupuncture group required less cumulative opioid (the difference in average cumulative opioid consumption was −3.14 mg [95% CI, −5.15 to −1.14], −8.33 mg [95% CI, −11.06 to −5.61], and −9.14 mg [95% CI, −16.07 to −2.22] at 8, 24, and 72 hours, respectively). As a result, acupuncture-treated patients had a lower incidence of opioid-related side effects, such as nausea (risk ratio [RR]: 0.67; 95% CI, 0.53–0.86), dizziness (RR: 0.65; 95% CI, 0.52–0.81), sedation (RR: 0.78; 95% CI, 0.61–0.99), pruritus (RR: 0.75; 95% CI, 0.59–0.96), and urinary retention (RR: 0.29; 95% CI, 0.12–0.74), compared with sham control groups.31

Acupuncture analgesia has also been investigated as a treatment for chronic pain conditions. A systematic review of 31 randomized controlled trials found that acupuncture may have a specific analgesic effect in chronic headache patients: the combined response rate in the acupuncture group was significantly higher than with sham acupuncture at both the early (8 weeks) follow-up (RR: 1.19; 95% CI, 1.08–1.30) and the late (6 months) follow-up (RR: 1.22; 95% CI, 1.04–1.43). Acupuncture was also superior to medication therapy for headache intensity (weighted mean difference [WMD]: −8.54 mm; 95% CI, −15.52 to −1.57), headache frequency (WMD: −0.70; 95% CI, −1.38 to −0.02), physical function (WMD: 4.16; 95% CI, 1.33–6.98), and response rate (RR: 1.49; 95% CI, 1.02–2.17).32 A recent individual patient data meta-analysis of 29 randomized controlled trials with 17,922 patients indicated that acupuncture was statistically superior to control in all analyses (P < 0.001).33 In this report, effect sizes between acupuncture and sham were 0.37, 0.26, and 0.15 for musculoskeletal pain, osteoarthritis (OA), and chronic headache, respectively.33 These significant differences between true and sham acupuncture indicate that acupuncture is more effective than placebo. The authors concluded that acupuncture is effective for the treatment of chronic pain and is therefore a reasonable treatment option.33 Colquhoun and Novella4 countered that real acupuncture was better than sham only by a small amount that lacked clinical significance. While there is ongoing debate regarding the specific analgesic effect of acupuncture, the reported effect sizes are on par with those of standard accepted pharmacologic therapy for chronic pain.
For example, a meta-analysis of 23 trials (10,845 patients) estimated that the analgesic efficacy of nonsteroidal anti-inflammatory drugs (NSAIDs), including cyclooxygenase-2 inhibitors, in osteoarthritic knee pain was 1.01 cm (95% CI, 0.74–1.28) on a 10 cm visual analog scale, just 15.6% better than placebo.34 Thus, the effect size of NSAIDs versus placebo for pain reduction is similar to that of real versus sham acupuncture in patients with OA knee pain. These data highlight the comparable effect sizes of acupuncture and conventional pharmacologic treatments for knee OA. Another aspect of any therapeutic intervention that is as important, if not more so, is its potential to cause harm. Conventional medications have caused significant harm, and even death, leading regulatory authorities to withdraw some medications from use over the years.35,36 This is in sharp contrast to the safety record of acupuncture performed by trained acupuncturists.37–39

Furthermore, epidemiologists have evaluated the cost-effectiveness of acupuncture in the management of various chronic pain conditions.40–43 Acupuncture was found to improve health-related quality of life at a small additional cost and was relatively cost-effective compared with a number of other interventions.40 A pragmatic trial evaluating the clinical and economic effectiveness of acupuncture for chronic low back pain demonstrated that acupuncture plus routine care was associated with marked clinical improvements and was relatively cost-effective.41 Acupuncture was also found to improve quality of life and to be cost-effective as a treatment for other pain conditions (dysmenorrhea, OA, and neck pain).42,43 Moreover, neuroscientists have applied brain imaging techniques, e.g., functional magnetic resonance imaging (fMRI) and positron emission tomography, to explore the neural correlates of acupuncture as an antiemetic and an analgesic. Using fMRI, neuroscientists have identified specific brain regions related to PC-6 stimulation, further suggesting that the antiemetic effects of acupuncture may be distinct from sham or placebo effects.44,45

Napadow et al.46 demonstrated cortical amplification and altered primary somatosensory digit somatotopy in patients suffering from carpal tunnel syndrome that can be corrected or normalized by a series of acupuncture treatments. The results of this study suggest that acupuncture can induce beneficial cortical plasticity, manifested as more focused digital representations. After controlling for noncutaneous somatosensory and cognitive elements of acupuncture, a subsequent study further demonstrated that the effect of acupuncture treatment in carpal tunnel syndrome patients cannot be explained as merely a placebo effect.47

Dhond et al.48 found that verum stimulation produced more extensive modulation of limbic and paralimbic regions than sham stimulation in healthy volunteers. Pariente et al.49 used positron emission tomography to explore brain processing during verum, covert sham, and overt sham needling at acupoint LI-4 in pain patients. These investigators suggested that activity within the insular cortex may be responsible for the specific effect of acupuncture, whereas modulation of the dorsolateral prefrontal cortex, rostral anterior cingulate cortex, and periaqueductal gray may be related to expectation.49 Kong et al.50 used both behavioral assessment and fMRI to examine patient expectations and the physiological effect of acupuncture in a group of healthy volunteers. They found that conditioning a positive expectation can amplify acupuncture analgesia, as detected by subjective pain sensory ratings and objective fMRI signal changes in response to calibrated noxious stimuli. In addition, while both verum and sham acupuncture can have analgesic effects, only verum acupuncture significantly inhibited the brain responses to calibrated pain stimuli.50 The researchers proposed that acupuncture stimulation (a peripheral-to-central modulation) may inhibit incoming noxious stimuli, while a top-down modulation, expectancy (placebo/sham), may work through the emotional circuit.42 Furthermore, Harris et al.51 found that while both verum and sham acupuncture produced similar levels of pain relief in fibromyalgia patients, the brain pathways of the 2 effects were quite different. The data were consistent with sham acupuncture evoking an increased release of endogenous opioids (consistent with mechanisms operative in placebos), whereas verum acupuncture increased receptor affinity and/or number.51 In aggregate, these neuroimaging studies provide strong evidence that verum and sham acupuncture stimulation have very different neural correlates, although both can engender analgesic effects.

In conclusion, clinical trials support the efficacy of acupuncture in reducing PONV and postoperative pain; the evidence supporting acupuncture as a treatment for chronic pain conditions, however, is mixed. It should be noted that acupuncture trials in chronic pain have concluded that acupuncture treatment is often superior to standard of care or wait-list controls and that acupuncture has minimal side effects and is cost-effective.37–43 Brain imaging studies have demonstrated different neural correlates between verum and sham acupuncture stimulation.44–51 Additionally, all clinical trials and many research studies have assumed that the acupuncture effect equals the “needle” effect, failing to recognize that factors beyond the specific effects of needling are also important contributors to the therapeutic effect of acupuncture in the setting of chronic pain.

Last, acupuncture is an ancient medical intervention first developed in an era when there were no laboratory tests, no technology, and no science of anatomy. The practice of acupuncture has survived for thousands of years because it has evolved over time, with changes ranging from the number of acupuncture points to the practice techniques. Instead of criticizing this ancient art with arguments culled from modern medicine and science, physicians and scientists should try to integrate current knowledge into this ancient, yet ever-evolving, practice so it may be used to treat conditions for which pharmaceutical interventions are ineffective and/or potentially dangerous.35,36 Over the last decade, there has been a growing green movement and eco-sustainability trend, as well as an increased awareness that the same medication may not be effective in every patient with the same biomedical diagnosis. This “new age” integrative medicine52–55 in Western culture promotes a patient-oriented medical practice that complements the ancient Chinese theory behind acupuncture practice. Overall, acupuncture should not be seen as a placebo intervention or merely a needle therapy, but as a medical option that not only treats disorders but also fosters a greater awareness of how harmonic interactions among self, family, work, and environment play a role in promoting health and restoring order.



Exposure to pesticides or solvents and risk of Parkinson disease.


Objective: To investigate the risk of Parkinson disease (PD) associated with exposure to pesticides and solvents using meta-analyses of data from cohort and case-control studies.

Methods: Prospective cohort and case-control studies providing risk and precision estimates relating PD to exposure to pesticides or solvents or to proxies of exposure were considered eligible. The heterogeneity in risk estimates associated with objective study quality was also investigated.

Results: Of 3,087 citations, 104 studies fulfilled the inclusion criteria for meta-analysis. In prospective studies, study quality was not a source of heterogeneity. PD was associated with farming, and the association with pesticides was highly significant in studies in which PD diagnosis was self-reported. In case-control studies, study quality appeared to be a source of heterogeneity in risk estimates for some exposures, and higher study quality was frequently associated with a reduction in heterogeneity. In high-quality case-control studies, PD risk was increased by exposure to any-type pesticides, herbicides, and solvents. Exposure to paraquat or maneb/mancozeb was associated with about a 2-fold increase in risk. In high-quality case-control studies including an appreciable number of cases (>200), heterogeneity remained significantly high (>40%) only for insecticides, organochlorines, organophosphates, and farming; in these studies, the risk associated with rural living was also found to be significant.

Conclusions: The literature supports the hypothesis that exposure to pesticides or solvents is a risk factor for PD. Further prospective and high-quality case-control studies are required to substantiate a cause-effect relationship. The studies should also focus on specific chemical agents.


Exposure to pesticides and solvents appears to be a risk factor for PD. Our evidence also supports the involvement of specific compounds, such as paraquat and the maneb/mancozeb family, as well as of proxies of exposure. However, it could be argued that the evidence collected is still limited, or at least inconclusive, because there was no definitive agreement between cohort and case-control studies.

Indeed, most of the evidence found relied on data from case-control studies. To investigate an etiologic relationship, cohort studies are preferable. However, the incidence of PD is low and onset usually occurs in the elderly; large populations, large numbers of cases, and long follow-up are therefore required to achieve adequate statistical power. Accordingly, most neuroepidemiologists resort to case-control studies, which are practical and, despite their retrospective nature, have the advantage of more detailed exposure assessment.

We have also partly explained the sources of heterogeneity in individual study results. In prospective studies, differences in estimates of exposure to pesticides appeared to depend on the method of ascertainment of PD. This factor is less likely to have been a source of bias in case-control studies because, although different sets of well-accepted diagnostic criteria were used, in most cases secondary causes of PD were excluded in patients recruited at movement disorders clinics. However, we did not assess the effect of this feature, and we recognize that this is a possible limitation of our study. Heterogeneity in case-control studies appeared to be due mainly to study quality and size. With respect to study quality, our results are consistent with previous suggestions.e86 However, the issue of sample size was addressed by only a few authors.e25,e60,e66,e80 We calculated that to detect an OR of 2 with an exposure frequency of 20%, at least 200 case-control pairs would be needed.
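The 200-pair figure can be reproduced to a first approximation with the standard two-proportion sample-size formula for an unmatched case-control design, at the conventional two-sided α = 0.05 and 80% power (a sketch only; the exact matched-pair calculation the authors used may differ somewhat):

```python
from math import sqrt

Z_ALPHA = 1.959964  # two-sided alpha = 0.05
Z_BETA = 0.841621   # power = 0.80

def cases_needed(odds_ratio: float, control_exposure: float) -> int:
    """Approximate cases (= controls) needed for an unmatched case-control
    study, using the classical two-proportion formula."""
    p0 = control_exposure
    # expected exposure prevalence among cases implied by the odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    p_bar = (p0 + p1) / 2
    num = (Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
           + Z_BETA * sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2
    return int(num / (p1 - p0) ** 2) + 1

# OR = 2 with 20% exposure among controls yields roughly 170 cases per group,
# of the same order as the ~200 pairs quoted above.
print(cases_needed(2.0, 0.20))
```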

A meta-analysis investigating several sources of heterogeneity in risk estimates has recently shown that study design (case-control vs prospective), source of controls (community vs non-community), type of exposure (occupational vs non-occupational/other), adjustment for potential confounders, and geographical area do not appear to be important determinants. Accordingly, no consistent explanation of heterogeneity has been provided. The only factor that appeared to contribute was the method used for exposure assessment: the use of a job title–based exposure matrix resulted in a higher risk estimate than assignment based on self-reported exposure.17 Unfortunately, this method could not be applied with sufficient accuracy to specific working occupations.

Despite our methodologic approach to quantitative synthesis, notable heterogeneity, probably affecting the evidence of an increased risk associated with exposure, was still present for insecticides, organochlorines, organophosphates, and farming. There are some possible explanations for this observation. With respect to farming, we observed that exposure was assessed either by open questions or by specific industry coding systems. Moreover, the choice of controls may introduce bias. Few studies have considered the effect of geography (area/region of residence, the additional criterion for “comparability” in our quality assessment process) in study design or in adjustment of analyses. Regional controls may be preferable for the evaluation of direct exposure, but both these and neighboring areas may affect the assessment of risk associated with this proxy measure of exposure. Insecticides are a heterogeneous class of compounds to which most organochlorines and organophosphates belong. Indeed, among organochlorines the most frequently used insecticide is DDT, and the risk associated with this compound was found to be nonsignificant. In some cases, the frequency of exposure is low and there may be difficulties in recalling specific product names. Given the advanced age of patients with PD, impairment of cognitive function is possible. Although poor cognition has been used as an exclusion criterion during recruitment in some studies, only one research group adjusted for this covariate.e63,e68 Exposure to insecticides also appears to be closely correlated with exposure to herbicides.16

Finally, there may be residual confounders that we were not able to address. In some cases, data derived from proxy respondents were pooled with those reported by cases,29,e10,e44,e51,e79,e84 probably introducing misclassification bias.e6,e8

Confounding could also be secondary to the use of protective equipment and compliance with suggested, or even recommended, preventive practices; only one study addressed this issue.26 The prevalence of exposure was heterogeneous among the populations investigated and was likely to be higher in certain working categories. Inclusion bias should be taken into consideration, because in some studies selection of cases or controls was performed by linkage to professional and insurance databases26,e7,e15,e38,e39,e45,e63,e68,e77,e79,e84 or in geographical areas characterized by extensive use of pesticides. A few studies investigated the role of genetic susceptibility. Although mechanisms of action at the molecular level are largely unknown, there is a growing body of evidence progressively substantiating the hypothesis of a gene–environment interaction.14 Finally, positive exposure was defined according to different levels (e.g., number of chemicals or frequency of use over a period), types (particularly of toxin application), or durations.

The present study highlights unresolved issues with implications for health policies. From a preventive perspective, we observed that the route of exposure (e.g., inhaled or transcutaneous) and the method of toxin application (e.g., spraying or mixing) have never been investigated. Risk appears to increase as the duration of exposure increases. Since several compounds are likely to be used by the same people, different routes of exposure may act synergistically to increase the risk. Unfortunately, it was not possible to investigate the issue of a dose–response relationship or to provide a cutoff for exposure.

The literature supports the hypothesis that exposure to pesticides or solvents is a risk factor for PD. However, further prospective and high-quality case-control studies are required to substantiate a cause-effect relationship. Although some compounds have been withdrawn from the market in industrialized countries, they are still in use in developing parts of the world. According to our review of the sources of funding, interest in this issue should also come from chemical manufacturers. This should be emphasized because an interest in the adverse effects of specific compounds appears justified.



CDC Issues Update on Novel SARS-like Coronavirus.

Reports of new cases of the novel SARS-like coronavirus, now known as MERS-CoV, indicate continued risk for transmission in the Arabian Peninsula, according to an update from MMWR.

To date, MERS-CoV has been confirmed in 55 people, 31 of whom have died. All cases have been linked to Saudi Arabia, Qatar, Jordan, or the United Arab Emirates. Infections among close contacts of cases, including healthcare personnel and family members, “provide clear evidence of human-to-human transmission,” MMWR says.

The CDC recommends that MERS-CoV be considered in people who develop severe acute lower respiratory illness within 14 days of traveling from the Arabian Peninsula or nearby areas. The virus should also be considered for close contacts of symptomatic travelers. To improve detection, specimens should be taken from multiple locations (for example, the nasopharynx and lower respiratory tract); the CDC is performing testing.


Source: MMWR 

First-Trimester Noninvasive DNA Test Identifies Fetal Aneuploidies.

Invasive testing for fetal aneuploidies with chorionic villus sampling can be minimized by analyzing fetal DNA from the mother’s blood during the first trimester, according to two U.K. studies in Ultrasound in Obstetrics and Gynecology.

The first study, involving some 90,000 singleton pregnancies, examined which women should undergo maternal blood cell-free DNA (cfDNA) testing to detect trisomy 21, in addition to ultrasound and biochemical screening. Researchers found that by offering cfDNA testing to women with high-risk biochemical test results, a 98% detection rate could be achieved, with an invasive testing rate below 1%.

The other study used cfDNA testing routinely in 1000 singleton pregnancies, in addition to ultrasound and biochemical studies, to screen for trisomies 21, 18, and 13. All trisomies were detected; the false-positive rate for cfDNA was 0.1%, while ultrasound and biochemical screens had a 3.4% false-positive rate.

The authors emphasize that the potential for false-positives dictates the need for invasive testing following abnormal results.
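The authors' caution follows directly from Bayes' rule: when a condition is rare, even a very low false-positive rate produces a meaningful share of false alarms among positive results. A minimal sketch (the 1-in-500 prevalence is an illustrative assumption, not a figure from the studies):

```python
def ppv(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    """Positive predictive value of a screening test via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Assumed prevalence of 1 in 500 (illustrative), perfect sensitivity, and the
# reported 0.1% false-positive rate:
print(round(ppv(1 / 500, 1.0, 0.001), 2))  # → 0.67
```

Under these assumptions, roughly one in three positive cfDNA results would be a false positive, which is why confirmatory invasive testing remains necessary after an abnormal screen.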

Source: Ultrasound in Obstetrics and Gynecology


7 Important Lessons Deaf People Can Teach You About Communication.


I have always thought it would be a blessing if each person could be blind and deaf for a few days during his early adult life. Darkness would make him appreciate sight; silence would teach him the joys of sound. ~Helen Keller (blind and deaf American author and educator)

I grew up with wonderful parents who always encouraged my passion for music. I still vividly remember the day they got me a shiny new sound system. Years later they got me a guitar and paid for my guitar classes. Yet they never had a chance to hear a sound of what I was listening to, playing, and singing. My parents are deaf.

The reality of deaf people is different from other people’s experiences. Their ability to communicate is limited, but precisely because of that, they seem to know so much more about what effective communication means.

For over 10 years I lived in a dormitory for deaf families and had a chance to compare 2 worlds: at home, where I saw people communicating with their hands, and outside, where I observed the interactions of ‘normal’ people with hearing abilities. I was very blessed to have experienced the best and the worst of both worlds: the world of silence and the world of sounds.

These are some of the things I’ve learned from deaf people about effective communication:

1. Maintain eye contact

How many times have you found yourself checking Facebook updates on your iPhone while having a conversation? In the world of deaf people, if you stop looking at the person you are talking to, you are literally cutting off the conversation, because the only way you can ‘hear’ what the other person is trying to say is to look at their face. This is a great lesson on the importance of being present: focusing on the person next to you, staying connected to them, and truly receiving what they say.

2. Don’t interrupt, follow the protocol

How many times have you found yourself waiting for someone to finish talking just so you can say what you think? When a group of deaf people is having a conversation, it’s not possible for more than one person to talk at a time. There is only one way to follow the conversation – to look at the one speaking. This teaches us to respect each individual’s right to speak up and not to be interrupted in the midst of their self-expression.

3. Be straightforward, down to the point and as concise as possible

How often do you communicate your thoughts and needs clearly, without trying to make things sound better than they are? In sign language there are 2 ways to say a particular word – you either use the alphabet and show a sign for each letter, or you use one sign which stands for the entire word.

The second option is much faster and hence more convenient, so for almost every word there is a specific sign. Can you imagine such a massive amount of information to memorize? Not only do you have to learn how to write and pronounce the word, but also the specific sign that represents it. The nature of sign language requires you to be as specific as possible and to use as few words as needed to convey your message. That’s an essential lesson to learn, as so often we are reluctant to be direct and clear about what we think, want, and feel.

4. If you don’t understand something, ask

How often are you reluctant to ask a question when something is unclear to you? Or to clarify what your loved one meant rather than making an assumption? We hold back out of fear of being misunderstood, rejected, or even humiliated. Each deaf person has their own style of using sign language, so it’s normal to ask the meaning of a specific unfamiliar sign. There is nothing wrong with not knowing or understanding something. If that happens, just ask.

5. Cut yourself from distractions

The world around us is extremely noisy. We have tons of devices, social media, and traditional media which, in their attempt to inform, entertain, update, and educate, produce an overwhelming informational noise around us. We hear it, see it, and feel it. We are so used to being surrounded by that noise that we lose our ability to be focused and present – when we are having a conversation, when we are working, when we are cooking, when we are creating something. We are constantly attacked and distracted by that informational noise. I remember watching my father making furniture. He would always be so focused and immersed in the moment of creating that it seemed nothing in the world could disturb him. Learn to be present – as simple as that.

6. Be expressive and articulate

There are so many ways we can play with our voice when we talk: pace, tone, volume. All this gives us plenty of ways to express our emotions, feelings, and attitude when we talk about a particular subject. But how often do we allow ourselves to be expressive? Sometimes so-called social norms restrict us from laughing too loud, raising our voice when we are excited, or crying in front of others, because it’s seen as an inappropriate thing to do. Deaf people are very articulate by nature. Their facial expressions and gestures can mesmerize you with their intensity and artistry. They don’t really care how others may see them. They just express what they feel without hiding or softening their emotions.

7. Observe, learn and get extra information from what you see and feel

Just imagine how many tiny yet important details we usually miss in our daily interactions with others. When you cannot hear, you become more attentive to things happening around you. You learn to notice even the smallest things; you learn to experience the world around you through all those seemingly insignificant details which, in the bigger picture, play a crucial role. And more importantly, you learn to appreciate them.

Source: PurposeFairy