Ph.D Biologist Reveals – “We Communicate Through Energy”
https://feedyoursoul.co/1031/1031/

Scientists Observed Epigenetic Memories Being Passed Down For 14 Generations
http://ewao.com/2017/10/23/scientists-observed-epigenetic-memories-being-passed-down-for-14-generations/

Vitamin and Mineral Supplements


Dietary supplementation is approximately a $30 billion industry in the United States, with more than 90,000 products on the market. In recent national surveys, 52% of US adults reported use of at least 1 supplement product, and 10% reported use of at least 4 such products.1 Vitamins and minerals are among the most popular supplements and are taken by 48% and 39% of adults, respectively, typically to maintain health and prevent disease.

Despite this enthusiasm, most randomized clinical trials of vitamin and mineral supplements have not demonstrated clear benefits for primary or secondary prevention of chronic diseases not related to nutritional deficiency. Indeed, some trials suggest that micronutrient supplementation in amounts that exceed the recommended dietary allowance (RDA)—eg, high doses of beta carotene, folic acid, vitamin E, or selenium—may have harmful effects, including increased mortality, cancer, and hemorrhagic stroke.2

In this Viewpoint, we provide information to help clinicians address frequently asked questions about micronutrient supplements from patients, as well as promote appropriate use and curb inappropriate use of such supplements among generally healthy individuals. Importantly, clinicians should counsel their patients that such supplementation is not a substitute for a healthful and balanced diet and, in most cases, provides little if any benefit beyond that conferred by such a diet.

Clinicians should also highlight the many advantages of obtaining vitamins and minerals from food instead of from supplements. Micronutrients in food are typically better absorbed by the body and are associated with fewer potential adverse effects.2,3 A healthful diet provides an array of nutritionally important substances in biologically optimal ratios as opposed to isolated compounds in highly concentrated form. Indeed, research shows that positive health outcomes are more strongly related to dietary patterns and specific food types than to individual micronutrient or nutrient intakes.3

Although routine micronutrient supplementation is not recommended for the general population, targeted supplementation may be warranted in high-risk groups for whom nutritional requirements may not be met through diet alone, including people at certain life stages and those with specific risk factors (discussed in the next 3 sections and in the Box).

Box.

Key Points on Vitamin and Mineral Supplements

General Guidance for Supplementation in a Healthy Population by Life Stage
  • Pregnancy: folic acid, prenatal vitamins

  • Infants and children: for breastfed infants, vitamin D until weaning and iron from age 4-6 mo

  • Midlife and older adults: some may benefit from supplemental vitamin B12, vitamin D, and/or calcium

Guidance for Supplementation in High-Risk Subgroups
  • Medical conditions that interfere with nutrient absorption or metabolism:

    • Bariatric surgery: fat-soluble vitamins, B vitamins, iron, calcium, zinc, copper, multivitamins/multiminerals

    • Pernicious anemia: vitamin B12 (1-2 mg/d orally or 0.1-1 mg/mo intramuscularly)

    • Crohn disease, other inflammatory bowel disease, celiac disease: iron, B vitamins, vitamin D, zinc, magnesium

  • Osteoporosis or other bone health issues: vitamin D, calcium, magnesium*

  • Age-related macular degeneration: specific formulation of antioxidant vitamins, zinc, copper

  • Medications (long-term use):

    • Proton pump inhibitors*: vitamin B12, calcium, magnesium

    • Metformin*: vitamin B12

  • Restricted or suboptimal eating patterns: multivitamins/multiminerals, vitamin B12, calcium, vitamin D, magnesium

* Inconsistent evidence.

Pregnancy

The evidence is clear that women who may become pregnant or who are in the first trimester of pregnancy should be advised to consume adequate folic acid (0.4-0.8 mg/d) to prevent neural tube defects. Folic acid is one of the few micronutrients more bioavailable in synthetic form from supplements or fortified foods than in the naturally occurring dietary form (folate).2 Prenatal multivitamin/multimineral supplements will provide folic acid as well as vitamin D and many other essential micronutrients during pregnancy. Pregnant women should also be advised to eat an iron-rich diet. Although it may also be prudent to prescribe supplemental iron for pregnant women with low levels of hemoglobin or ferritin to prevent and treat iron-deficiency anemia, the benefit-risk balance of screening for anemia and routine iron supplementation during pregnancy is not well characterized.2

Supplemental calcium may reduce the risk of gestational hypertension and preeclampsia, but confirmatory large trials are needed.2 Use of high-dose vitamin D supplements during pregnancy also warrants further study.2 The American College of Obstetricians and Gynecologists has developed a useful patient handout on micronutrient nutrition during pregnancy.4

Infants and Children

The American Academy of Pediatrics recommends that exclusively or partially breastfed infants receive (1) supplemental vitamin D (400 IU/d) starting soon after birth and continuing until weaning to vitamin D–fortified whole milk (≥1 L/d) and (2) supplemental iron (1 mg/kg/d) from 4 months until the introduction of iron-containing foods, usually at 6 months.5 Infants who receive formula, which is fortified with vitamin D and (often) iron, do not typically require additional supplementation. All children should be screened at 1 year for iron deficiency and iron-deficiency anemia.

Healthy children consuming a well-balanced diet do not need multivitamin/multimineral supplements, and they should avoid those containing micronutrient doses that exceed the RDA. In recent years, ω-3 fatty acid supplementation has been viewed as a potential strategy for reducing the risk of autism spectrum disorder or attention-deficit/hyperactivity disorder in children, but evidence from large randomized trials is lacking.2

Midlife and Older Adults

With respect to vitamin B12, adults aged 50 years and older may not adequately absorb the naturally occurring, protein-bound form of this nutrient and thus should be advised to meet the RDA (2.4 μg/d) with synthetic B12 found in fortified foods or supplements.6 Patients with pernicious anemia will require higher doses (Box).

Regarding vitamin D, currently recommended intakes (from food or supplements) to maintain bone health are 600 IU/d for adults up to age 70 years and 800 IU/d for those aged older than 70 years.7 Some professional organizations recommend 1000 to 2000 IU/d, but it has been widely debated whether doses above the RDA offer additional benefits. Ongoing large-scale randomized trials (NCT01169259 and ACTRN12613000743763) should help to resolve continuing uncertainties soon.

With respect to calcium, current RDAs are 1000 mg/d for men aged 51 to 70 years and 1200 mg/d for women aged 51 to 70 years and for all adults aged older than 70 years.7 Given recent concerns that calcium supplements may increase the risk for kidney stones and possibly cardiovascular disease, patients should aim to meet this recommendation primarily by eating a calcium-rich diet and take calcium supplements only if needed to reach the RDA goal (often only about 500 mg/d in supplements is required).2 A recent meta-analysis suggested that supplementation with moderate-dose calcium (<1000 mg/d) plus vitamin D (≥800 IU/d) might reduce the risk of fractures and loss of bone mass density among postmenopausal women and men aged 65 years and older.2

Multivitamin/multimineral supplementation is not recommended for generally healthy adults.8 One large trial in US men found a modest lowering of cancer risk,9 but the results require replication in large trials that include women and allow for analysis by baseline nutrient status, a potentially important modifier of the treatment effect. An ongoing large-scale 4-year trial (NCT02422745) is expected to clarify the benefit-risk balance of multivitamin/multimineral supplements taken for primary prevention of cancer and cardiovascular disease.

Other Key Points

When reviewing medications with patients, clinicians should ask about use of micronutrient (and botanical or other dietary) supplements and counsel them about potential interactions. For example, supplemental vitamin K can decrease the effectiveness of warfarin, and biotin (vitamin B7) can interfere with the accuracy of cardiac troponin and other laboratory tests. Patient-friendly interaction checkers are available free of charge online (search for interaction checkers on drugs.com, WebMD, or pharmacy websites).

Clinicians and patients should also be aware that the US Food and Drug Administration is not authorized to review dietary supplements for safety and efficacy prior to marketing. Although supplement makers are required to adhere to the agency’s Good Manufacturing Practice regulations, compliance monitoring is less than optimal. Thus, clinicians may wish to favor prescription products, when available, or advise patients to consider selecting a supplement that has been certified by independent testers (ConsumerLab.com, US Pharmacopeia, NSF International, or UL) to contain the labeled dose(s) of the active ingredient(s) and not to contain microbes, heavy metals, or other toxins. Clinicians (or patients) should report suspected supplement-related adverse effects to the Food and Drug Administration via MedWatch, the online safety reporting portal. An excellent source of information on micronutrient and other dietary supplements for both clinicians and patients is the website of the Office of Dietary Supplements of the National Institutes of Health.

Clinicians have an opportunity to promote appropriate use and to curb inappropriate use of micronutrient supplements, and these efforts are likely to improve public health.

Article Information

Corresponding Author: JoAnn E. Manson, MD, DrPH, Division of Preventive Medicine, Brigham and Women’s Hospital, Harvard Medical School, 900 Commonwealth Ave E, Boston, MA 02215 (jmanson@rics.bwh.harvard.edu).

Published Online: February 5, 2018. doi:10.1001/jama.2017.21012

Conflict of Interest Disclosures: Both authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Drs Manson and Bassuk reported that their research division conducts randomized clinical trials of several vitamins and minerals. The Vitamin D and Omega-3 Trial (VITAL) is sponsored by the National Institutes of Health but the vitamin D is donated by Pharmavite. In COSMOS, the multivitamins are donated by Pfizer. Both authors collaborate on these studies.

References

1. Kantor ED, Rehm CD, Du M, White E, Giovannucci EL. Trends in dietary supplement use among US adults from 1999-2012. JAMA. 2016;316(14):1464-1474.

2. Rautiainen S, Manson JE, Lichtenstein AH, Sesso HD. Dietary supplements and disease prevention: a global overview. Nat Rev Endocrinol. 2016;12(7):407-420.

3. Marra MV, Boyar AP. Position of the American Dietetic Association: nutrient supplementation. J Am Diet Assoc. 2009;109(12):2073-2085.

4. American College of Obstetricians and Gynecologists. Nutrition during pregnancy. https://www.acog.org/Patients/FAQs/Nutrition-During-Pregnancy. Published April 2015. Accessed November 20, 2017.

5. American Academy of Pediatrics. Vitamin D & iron supplements for babies: AAP recommendations. HealthyChildren.org. https://www.healthychildren.org/English/ages-stages/baby/feeding-nutrition/Pages/Vitamin-Iron-Supplements.aspx. Updated May 27, 2016. Accessed November 20, 2017.

6. Institute of Medicine. Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline. Washington, DC: National Academies Press; 1998.

7. Institute of Medicine. Dietary Reference Intakes for Calcium and Vitamin D. Washington, DC: National Academies Press; 2011.

8. Moyer VA; US Preventive Services Task Force. Vitamin, mineral, and multivitamin supplements for the primary prevention of cardiovascular disease and cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;160(8):558-564.

9. Gaziano JM, Sesso HD, Christen WG, et al. Multivitamins in the prevention of cancer in men: the Physicians’ Health Study II randomized controlled trial. JAMA. 2012;308(18):1871-1880.
Source: jamanetwork.com

I Solemnly Share


When I was a little girl, my mom or dad would tuck me in at night. I would make each parent complete the ritual of saying goodnight to my stuffed animals and dolls. There was a giant stuffed bunny whose name now escapes me and a multitude of Beanie Babies. There was my Raggedy Ann doll, and there were two plump handmade dolls named Peppermint and Tom. To me, it was essential that each of these entities be kissed and greeted every night, as a reminder that he or she was loved. I was certain the toys would feel terribly sad if neglected. Looking back, I’m sure my parents found this repetitive behavior tiresome, but they tolerated it out of love. This was the first time I remember feeling responsible for the well-being of someone else. “Goodnight, Peppermint,” we would say together. “Goodnight, Tom. Goodnight, Raggedy Ann.”

Twenty years later, I was well into my second year of medical school. I had weathered the storm of the first year of basic sciences and was now struggling to understand neuropathology. I had scored very poorly on the first week’s quiz, so I needed a much higher mark on the second quiz if I wanted to pass the neuroscience sequence.

The night before the second quiz, I remember scrolling through what seemed like an endless series of lecture slides on movement disorders. Each slide presented testable information on the signs, symptoms, genetics, and pathology of different diseases. Like almost everything else in medical school, digesting and memorizing this mass of information felt like an impossible task. Historically, I had outperformed my own expectations, managing a passing grade on each weekly quiz. There was evidence to suggest that I could do this, and yet viscerally, I knew that I could not—not this time.

Something happened within me in the previous months, though I lacked the language to describe it. The chronic anxiety and sleeplessness of the previous year and a half had begun to wear on me. The daily struggle to get by had dominated my focus for so long that I could no longer recall the basic love of science that had pushed me toward medicine. As with every prior test of my endurance, I challenged myself to dig deeper and find the strength to keep going. Yet when I searched, I could not locate a reservoir of resolve and I emerged empty-handed and exhausted. As I scrolled to a histological image of Alzheimer disease, I was startled by a droplet of fluid that splashed onto my iPad screen. I picked up the device in earnest and watched the salty tear slide tortuously through tau protein accumulations and neurofibrillary tangles.

I failed the neuroscience sequence. There were other life developments that concurrently clawed at my confidence, but this failure was the main precipitant of my fall. I am tempted to make an elaborate case for the depression that subsequently took me over. I am compelled to defend my feelings with logical reasoning and to list all the reasons I was justified in feeling the way that I did. But depression didn’t proceed logically. I was already low, and failure brought me lower still, so that, like an opportunistic infection, depression took advantage of the chance to devastate me. In the weeks that followed, I began to see myself as incompetent, unlovable, negligent. I interpreted each bump in the road as evidence that these were accurate judgments. I remember breaking down when I received an email stating that I was late on my monthly cable payment. Then, afterward, hating myself for being so insufferably unstable.

Once—and I have never shared this before—I stepped into the street on my walk home from the library. I knew that the bus hurtling through the night would not have time to stop before colliding with my darkly dressed frame, fracturing my bones and scattering my belongings. I imagined my head hitting the asphalt and my brain banging around inside of my skull, bruising irreparably with each impact. I imagined the bus driver’s horror as he turned off the ignition with shaking hands and leapt out of the vehicle to locate my body. It would be a catastrophe that the trauma surgeons could not salvage. I would die.

“Goodnight, Peppermint. Goodnight, Tom. Goodnight, Raggedy Ann.”

These words emerged from somewhere in the depths of my consciousness, and I stepped out almost as quickly as I had stepped in. The warmly begrudging faces of my parents invaded the violent scene in my mind. They believed in my ability to take care of other creatures. I did not have it in me to take that away from them.

I realized after that moment that I needed to ask for help. After almost a year of medications and therapy and taking time off school, I am grateful to feel like a stronger, more grounded version of myself. For the first six months of treatment, I stayed extremely private about the state of my health, confiding only in my family and a few very close friends. Yet as time went on, other people approached me with their problems. I willed myself to be more open about my own struggles. It is amazing what you learn when you open up to your fellow medical students. Depression and its vestiges are everywhere.

Practicing physicians and physicians in training often write about their patients and use writing to make sense of their clinical experiences. But in a profession in which subjective evaluation is constant and for which we are expected to be pillars of strength and beacons of empathy at once, it is less popular to use writing to publicly deconstruct ourselves and our own emotions. Writing about our own struggles with natural elements of human experience, like sadness and loss, is painful and risks affecting one’s professional image. When physicians’ mental health is discussed, it is often through an academic lens, with particular attention to trends and statistics. Rarely is the voice of any specific individual emphasized. Paradoxically, the way this topic is presented can make depression seem like a standard emergent property of medical training. An underlying suggestion is that depression is simply something that physicians experience; it is something that physicians handle.

Yet I think we do people a disservice anytime we attempt to preserve the belief that medical professionals, stewards of good health, need less help coping with psychiatric disease than do their patients. Depression imposes massive economic and health care costs, and a growing body of evidence suggests that it has a penchant for painful interference in the lives of physicians1 and medical students.2 In doing so, we contribute to the stigmatization of mental illness, furthering the notion that dealing with depression is something to be ashamed of, something that should be kept quiet. A recent study estimates the costs associated with physician turnover, decreased productivity, and decreased patient satisfaction due to self-reported symptoms consistent with burnout, noting that system-level change addressing the drivers of burnout, including institutional culture, is both ethically and financially responsible, with an enormous measurable return on investment.3

Depression is not weakness, though depression is a disease that may make you feel weak. Depression is neither laziness, nor apathy, nor a lack of professional fortitude. It is an expression of an underlying neurobiological pathology about which researchers still have many questions. It is a pathology that, like a many-tentacled octopus, grasps at our emotional stability, our cognition, our sleep, our patience with people, and our will to go on. Depression can obscure our personhood, so that it is hard for others—and for ourselves—to see us for who we really are. Admitting to depression is not weakness but rather is further confirmation of an insidious, life-threatening epidemic in the medical profession. On a very simple level, it constitutes an admission of humanness.

Even as I pen these words, a great fear swells and rises in my throat, threatening to take me over. At least for me, shame has never quite relinquished its grip on vulnerability, and vulnerability is deeply uncomfortable. As an aspiring physician, I may be committing professional self-sabotage by telling my story. My prospective employers may judge me to be unstable and unfit to care for patients. But the tears of my colleagues, the tales of deferred suicide attempts my classmates have confided in me, and the tragic deaths of bright minds around the country lend strength to my conviction to write about my experience.

I admit openly that I am just as vulnerable to the elements of life as are my future patients, hoping that others will do the same. I do so in the hopes that the culture of the medical profession will evolve to value imperfection as a harbinger of humanity, and that this value will be exemplified by the way we judge our students and residents. If I have learned anything after spending most of my short life in pursuit of academic distinction, it is that the appeal of the dividends—good grades, high praise, awards—is as ephemeral as the warm glow felt on their receipt. Not so with the call to protect human life; that’s something truly worth living for.

Source: jamanetwork.com

Effect of the Pulmonary Embolism Rule-Out Criteria on Subsequent Thromboembolic Events Among Low-Risk Emergency Department Patients: The PROPER Randomized Clinical Trial


Key Points

Question  Does use of the pulmonary embolism rule-out criteria (PERC) in emergency department patients with low clinical probability of pulmonary embolism (PE) safely exclude the diagnosis of PE?

Findings  In this cluster-randomized crossover noninferiority trial that included 1916 patients with very low clinical probability of PE, the 3-month risk of a thromboembolic event when using a PERC strategy compared with a conventional strategy was 0.1% vs 0%, a difference that met the noninferiority criterion of 1.5%.

Meaning  In emergency department patients at very low risk of PE, the use of a PERC-based strategy did not lead to an inferior rate of subsequent thromboembolic events.

Abstract

Importance  The safety of the pulmonary embolism rule-out criteria (PERC), an 8-item block of clinical criteria aimed at ruling out pulmonary embolism (PE), has not been assessed in a randomized clinical trial.

Objective  To prospectively validate the safety of a PERC-based strategy to rule out PE.

Design, Setting, and Patients  A crossover cluster–randomized clinical noninferiority trial in 14 emergency departments in France. Patients with a low gestalt clinical probability of PE were included from August 2015 to September 2016, and followed up until December 2016.

Interventions  Each center was randomized for the sequence of intervention periods. In the PERC period, the diagnosis of PE was excluded with no further testing if all 8 items of the PERC rule were negative.

Main Outcomes and Measures  The primary end point was the occurrence of a thromboembolic event during the 3-month follow-up period that was not initially diagnosed. The noninferiority margin was set at 1.5%. Secondary end points included the rate of computed tomographic pulmonary angiography (CTPA), median length of stay in the emergency department, and rate of hospital admission.

Results  Among 1916 patients who were cluster-randomized (mean age 44 years, 980 [51%] women), 962 were assigned to the PERC group and 954 were assigned to the control group. A total of 1749 patients completed the trial. A PE was diagnosed at initial presentation in 26 patients in the control group (2.7%) vs 14 (1.5%) in the PERC group (difference, 1.3% [95% CI, −0.1% to 2.7%]; P = .052). One PE (0.1%) was diagnosed during follow-up in the PERC group vs none in the control group (difference, 0.1% [95% CI, −∞ to 0.8%]). The proportion of patients undergoing CTPA in the PERC group vs control group was 13% vs 23% (difference, −10% [95% CI, −13% to −6%]; P < .001). In the PERC group, the median length of emergency department stay was significantly reduced (mean reduction, 36 minutes [95% CI, 4 to 68]), as was the rate of hospital admission (difference, 3.3% [95% CI, 0.1% to 6.6%]).

Conclusions and Relevance  Among very low-risk patients with suspected PE, randomization to a PERC strategy vs conventional strategy did not result in an inferior rate of thromboembolic events over 3 months. These findings support the safety of PERC for very low-risk patients presenting to the emergency department.

Trial Registration  clinicaltrials.gov Identifier: NCT02375919

Introduction

The diagnostic strategy for pulmonary embolism (PE) in the emergency department (ED) is well established, beginning with evaluation of the clinical probability, followed by either D-dimer testing or computed tomographic pulmonary angiography (CTPA).1,2 This pathway is endorsed by European guidelines and is associated with a very low risk of failure. However, there are growing concerns about the potential overuse of diagnostic tests (especially CTPA) and possible overdiagnosis of PE.3,4

The pulmonary embolism rule-out criteria (PERC) rule is an 8-item set of clinical criteria that includes arterial oxygen saturation (Spo2) of 94% or less, pulse rate of at least 100/min, patient age of 50 years or older, unilateral leg swelling, hemoptysis, recent trauma or surgery, prior PE or deep venous thrombosis (DVT), and exogenous estrogen use. These criteria were derived to identify, among patients with a low clinical probability of PE, a population (PERC-negative patients) with a prevalence of PE so low (<1.8%) that the risk-benefit ratio of further testing would be unfavorable.5,6 A meta-analysis of observational studies reported that the prevalence of PE is less than 1% in PERC-negative patients.7 However, to our knowledge, no prospective study has yet implemented this rule, and conflicting results from European populations have resulted in PERC-based strategies not being included in most guidelines or recommendations.1,8
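
Because the rule is an all-negative checklist, it reduces naturally to code. The sketch below is a minimal illustration, not the trial’s software; the class, field, and function names are hypothetical, and the thresholds are taken from the 8 items as listed above.

```python
# Illustrative sketch of the 8-item PERC rule described above.
# Names are hypothetical, not from the trial protocol.
from dataclasses import dataclass

@dataclass
class PercInputs:
    age_years: int
    pulse_per_min: int
    spo2_percent: float
    unilateral_leg_swelling: bool
    hemoptysis: bool
    recent_trauma_or_surgery: bool
    prior_pe_or_dvt: bool
    exogenous_estrogen_use: bool

def perc_item_count(p: PercInputs) -> int:
    """Count how many of the 8 PERC items are positive."""
    return sum([
        p.age_years >= 50,
        p.pulse_per_min >= 100,
        p.spo2_percent <= 94,
        p.unilateral_leg_swelling,
        p.hemoptysis,
        p.recent_trauma_or_surgery,
        p.prior_pe_or_dvt,
        p.exogenous_estrogen_use,
    ])

def pe_ruled_out(p: PercInputs, low_gestalt_probability: bool) -> bool:
    """PE is ruled out with no further testing only when the gestalt
    clinical probability is low AND all 8 PERC items are negative."""
    return low_gestalt_probability and perc_item_count(p) == 0
```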

This multicenter noninferiority randomized clinical trial was conducted to test the hypothesis that the diagnosis of PE can be safely excluded among ED patients with a low clinical probability and a PERC score of zero without further diagnostic testing.

Methods
Study Design

The population and study design of the PROPER trial (PERC Rule to Exclude Pulmonary Embolism in the Emergency Department) are available in Supplement 1. The design for this study was a noninferiority, crossover cluster–randomized clinical trial aimed at assessing the safety of the PERC-based strategy. Fourteen EDs in France participated in the study for two 6-month periods separated by a 2-month washout period. The trial recruitment began in August 2015, ended in September 2016, and follow-up ended in December 2016. The study was approved by Comité de protection des personnes Ile de France VI–P140913. All patients provided signed informed consent before inclusion. The reporting of this study followed the Consolidated Standards of Reporting Trials (CONSORT) statement extended to cluster randomized trials.9

Patients and Intervention

All patients who presented to the ED with suspicion of a PE were eligible for enrollment in the study. The inclusion criteria were new-onset or worsening shortness of breath or chest pain and a low clinical probability of PE, estimated by the treating physician’s gestalt to be below 15%. The physician’s gestalt evaluation consists of an unstructured impression by the treating physician as to whether the probability of PE in the patient is low, moderate, or high. This evaluation has been reported to perform at least as well as other structured methods.10 Patients were excluded if they had an obvious etiology of the acute presentation other than PE (eg, pneumothorax or acute coronary syndrome), an acute severe presentation (hypotension, Spo2 <90%, respiratory distress), a contraindication to CTPA (impaired renal function with an estimated creatinine clearance <30 mL/min; known allergy to intravenous radio-opaque contrast), pregnancy, inability to be followed up, or if they were receiving any anticoagulant therapy.

Each center was randomized to start with a 6-month control period (usual care), followed by a 6-month intervention period (PERC-based strategy), or in the reverse order (Figure). The unit of randomization was the ED. Randomization was computer generated in blocks, using 3 blocks of 4 and 1 block of 2. For each number of the list (1-14), the order of exposure to the intervention was randomly assigned (3 numbers for each order of exposure). Then this randomization list was combined with the blinded list of centers previously numbered. The 2 periods were separated by a 2-month washout period. In the intervention group, the diagnostic work-up included an initial calculation of the PERC score. If the PERC score was zero, PE was ruled out without subsequent testing. If the PERC score was positive, the usual diagnostic strategy was applied.1 In the control group, the diagnostic work-up for PE followed the usual diagnostic strategy: after inclusion and classification as low gestalt probability, D-dimer testing was recommended for all patients, followed (if D-dimer positive) by a CTPA.1 We used the age-adjusted threshold for D-dimer interpretation.11 PE was excluded if one of these 2 tests was negative. A CTPA with emboli was considered positive, including isolated subsegmental PE. If the CTPA was judged inconclusive, the patients would undergo further testing (pulmonary ventilation-perfusion [V̇/Q̇] scan or lower-leg Doppler ultrasound).
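
For illustration, the sketch below encodes the two pathways in Python. All names are hypothetical, and the age-adjusted D-dimer cutoff shown (500 µg/L up to age 50, age × 10 µg/L above that) is the commonly cited convention and an assumption here, not a detail taken verbatim from the trial protocol.

```python
# Illustrative sketch of the two diagnostic pathways described above.
# The age-adjusted D-dimer cutoff (500 ug/L up to age 50, age x 10 ug/L
# above that) is an assumed convention; all names are hypothetical.

def d_dimer_cutoff_ug_per_l(age_years: int) -> float:
    """Age-adjusted D-dimer threshold."""
    return age_years * 10.0 if age_years > 50 else 500.0

def perc_pathway(perc_item_count: int, usual_strategy) -> str:
    """Intervention arm: a PERC score of zero rules out PE with no testing."""
    if perc_item_count == 0:
        return "PE ruled out (PERC negative, no further testing)"
    return usual_strategy()

def control_pathway(d_dimer_ug_per_l: float, age_years: int) -> str:
    """Usual care: D-dimer first; CTPA only if the D-dimer is positive."""
    if d_dimer_ug_per_l < d_dimer_cutoff_ug_per_l(age_years):
        return "PE excluded (negative D-dimer)"
    return "Proceed to CTPA (further testing if inconclusive)"
```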

Follow-up

All patients were observed for 3 months and interviewed by phone at the end of this period. They were instructed to return to the same ED or hospital in case of recurrent or worsening symptoms. A local clinical research assistant checked any return visit to the ED or admission to the hospital during the follow-up period. The phone interview was performed using a structured questionnaire that recorded consultation with any physician, hospital visit, and change in medication or imaging study. If the phone interview could not be performed, the patient’s general practitioner was contacted. If neither the patient nor the physician could be contacted, we sought any death records from the administrative records of the patient’s town of birth.

Outcomes

The primary objective of the study was to assess the percentage of failure of the diagnostic strategy. The primary end point was the occurrence of a symptomatic thromboembolic event during the 3-month follow-up period, which was not diagnosed at the time of the inclusion visit. Secondary end points included the proportion of patients investigated with CTPA, the rate of CTPA-related adverse events requiring therapeutic intervention within 24 hours, length of stay in the ED, rate of hospital admission or readmission, onset of anticoagulation regimen, severe hemorrhage in patients with anticoagulation therapy, and all-cause mortality at 3 months. An adjudication committee confirmed the occurrence of all suspected thromboembolic events during the follow-up period and adjudicated all deaths as to whether or not they were likely to have been related to a PE. The adjudication committee was composed of 3 experts with special interest in hemostasis (a professor of emergency medicine from France, a professor of pneumology from France, and a professor of emergency medicine from Switzerland). The 3 experts were independent of the study and blinded to the strategy allocation.

Sample Size Estimation and Statistical Analysis

The statistical plan and sample size calculation are reported in Supplement 2. Based on previous diagnostic studies on PE, we assumed that the primary end point rate in the control group would be 1.5%.12-14 The noninferiority margin for the difference of the primary end point between the 2 groups (delta) was set at 1.5%, which meant that the event rate had to have an upper CI limit of less than 3% in the intervention group. This 3% threshold corresponds to the failure rate observed after negative pulmonary angiography and is the threshold used in other studies.2,11,12 Alpha was set at 5% and power at 80%, which produced a sample size of 1624 patients. With an intraclass correlation coefficient of 0.002, an intraperiod correlation of 0.001, and a mean cluster size of 60 patients, the cluster design effect increased the sample size to 1818. Allowing for 5% nonevaluable patients, we needed to recruit 1920 patients.
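
For readers who want to retrace this arithmetic, the sketch below applies the standard normal-approximation sample-size formula for a noninferiority comparison of two proportions. It is an approximation: the trial’s exact design-effect computation (which also involves the intraperiod correlation of 0.001) is not fully specified here, so the clustered totals land close to, but not exactly on, the reported 1818 and 1920.

```python
# Sketch of the sample-size arithmetic described above (approximation only).
from scipy.stats import norm

p_event = 0.015                 # assumed event rate in both groups
margin = 0.015                  # noninferiority margin (delta)
alpha, power = 0.05, 0.80       # one-sided alpha, 80% power

z = norm.ppf(1 - alpha) + norm.ppf(power)
var_sum = 2 * p_event * (1 - p_event)        # p1(1-p1) + p2(1-p2)
n_per_group = z**2 * var_sum / margin**2     # ~812 per group
n_total = 2 * n_per_group                    # ~1624 (reported: 1624)

design_effect = 1 + (60 - 1) * 0.002         # mean cluster size 60, ICC 0.002
n_clustered = n_total * design_effect        # ~1816 (reported: 1818)
n_recruited = n_clustered / 0.95             # allow 5% nonevaluable patients
print(round(n_total), round(n_clustered), round(n_recruited))  # 1624 1816 1911
```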

Baseline characteristics of the intention-to-treat (ITT) population were expressed as number (%) for qualitative variables, and mean (SD) or median (interquartile range [IQR]) for quantitative variables, depending on their distribution. The analysis of the primary end point was performed based on the per-protocol population with follow-up.15 Noninferiority was assessed on the upper bound of the 1-sided 95% CI of the difference of percentage of primary end point occurrence (delta). If the upper bound of the CI of the difference was greater than 1.5%, the noninferiority hypothesis would have been rejected. Clustering effect and period and order effect were checked in a secondary analysis on the ITT population after replacing missing data by considering the worst-case scenario. Sixty primary outcomes were missing among the 1916 patients (3%; 54 patients were lost to follow-up and 6 withdrew). A generalized linear mixed model with Poisson distribution was performed, taking into account center as a random effect and period and strategy-by-period interaction as fixed effects. The logarithm of the number of patients was included as an offset term in the model. The P values reported for fixed effects were based on t tests with the denominator degrees of freedom specified using Kenward-Roger approximation. The Dunnett and Gent χ2 test was used to test noninferiority on the ITT population. The secondary end points were compared under a superiority hypothesis on the ITT population and using available data. Qualitative variables were compared using the Pearson χ2 test or the Fisher exact test, and continuous variables were compared using a Wilcoxon rank-sum test. The prevalence of PE in both groups at baseline was compared using the Pearson χ2 test.
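
As a hedged illustration of how such a bound can be checked, the sketch below computes a one-sided 95% upper confidence bound for the risk difference using a Newcombe score interval as a stand-in for the Dunnett and Gent test actually used by the trial, so the resulting bound only approximates the published 0.8%.

```python
# Illustrative noninferiority check on the primary end point (stand-in
# method: Newcombe score interval, not the trial's Dunnett and Gent test).
from statsmodels.stats.proportion import confint_proportions_2indep

events_perc, n_perc = 1, 847     # per-protocol PERC group
events_ctrl, n_ctrl = 0, 902     # per-protocol control group
MARGIN = 0.015                   # noninferiority margin (delta)

# A two-sided 90% interval yields a one-sided 95% upper bound.
_, upper = confint_proportions_2indep(
    events_perc, n_perc, events_ctrl, n_ctrl,
    compare="diff", method="newcomb", alpha=0.10,
)
print(f"one-sided 95% upper bound: {upper:.3%}; noninferior: {upper < MARGIN}")
```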

Because the PERC group included patients with a lower probability of PE compared with the control group, 2 post hoc sensitivity analyses were performed (one after the exclusion of 150 very low-risk patients from the PERC group; another after the addition of 175 simulated patients with no work-up for PE in the control group), which allowed a comparison between groups of similar PE clinical probability. All superiority tests were 2-sided and P values of less than .05 were considered significant. SAS version 9.3 software (SAS Institute) was used for statistical analyses.

Results

Fifteen EDs were invited to participate in the study (1 of which declined). The 14 participating EDs recruited 1916 patients during the study period—954 in the control period group and 962 in the PERC period group (ITT population). Details on the participating EDs are reported in eTable 1 in Supplement 3. Patients who were lost to follow-up, withdrew consent, or had protocol violations were excluded (Figure). In the control group, 9 patients (0.9%) did not undergo D-dimer testing, and in the PERC group, 46 PERC-negative patients (5%) underwent D-dimer testing. The primary end point was therefore adjudicated in 1749 patients (per-protocol with follow-up population)—902 in the control group and 847 in the PERC group. The mean (SD) age was 44 (17) years and 980 (51%) were women. The main baseline characteristics of patients are summarized in Table 1 and Table 2.

In the PERC group, there were 459 (48%) PERC-negative patients (Table 2). A PE was diagnosed at the initial visit in 40 (2%) patients overall, 14 (1.5%) in the PERC group vs 26 (2.7%) in the control group (difference, 1.3% [95% CI, −0.1% to 2.7%]; P = .052). In these 40 patients, 39 PEs were diagnosed using CTPA in the ED and 1 with V̇/Q̇ scan. Six PEs were subsegmental (1 in the PERC group and 5 in the control group).

There were 5 deaths, which were reviewed by the adjudication committee, and none were considered likely to be linked to a PE. There was 1 thromboembolic event in the PERC group at 3-month follow-up (0.1%) and none in the control group. The difference of proportion (delta) between the 2 groups was therefore 0.1% (1-sided 95% CI, −∞% to 0.8%). The only missed pulmonary embolism, and the only failure of the PERC rule to identify a PE in this study, occurred in a young man with chest pain and no previous medical history. He was PERC-negative and initially discharged but was seen again the next day with ongoing pain. At this second presentation, a D-dimer was checked and found to be positive, followed by a CTPA that was interpreted as inconclusive, with radiological signs consistent with pneumonia. The patient was admitted; lower-limb Doppler ultrasonography showed no venous thromboembolism, and a V̇/Q̇ scan then showed subsegmental defects. He was treated with direct oral anticoagulation for 6 months and had a normal scan at follow-up after conclusion of therapy.

Overall, a CTPA was performed in 349 cases (18%), of which 39 were positive for PE. The diagnostic yield of CTPA for the diagnosis of PE in the ED was therefore 11% across both groups. Patients in the PERC group were investigated by CTPA significantly less frequently (129 [13%] vs 220 [23%] in the control group; difference, 9.7% [95% CI, 6.1% to 13.2%]) (Table 3). In the PERC group, there was a significant reduction in median ED length of stay (4 h 36 min [IQR, 3 h 16 min to 6 h 22 min] vs 5 h 14 min [IQR, 3 h 50 min to 7 h 19 min] in the control group; P < .001). Hospital admission rates were 13% (121 patients) in the PERC group vs 16% (152 patients) in the control group (difference, 3.3% [95% CI, 0.1% to 6.6%]). There was no significant difference in the rate of all-cause mortality at 3 months (0.3% [3 patients] in the PERC group vs 0.2% [2 patients] in the control group; difference, 0.1% [95% CI, −0.5% to 0.7%]; P > .99) or in 3-month hospital readmission rates (4% [43 patients] in the PERC group vs 7% [62 patients] in the control group; P = .051), and there was no severe hemorrhage or severe adverse event subsequent to CTPA (0 in both groups). These findings for the secondary end points were also observed in the per-protocol population (Table 3). An ITT analysis with a worst-case scenario that assumed all patients lost to follow-up met the primary end point resulted in a 0.2% difference in the primary end point (1-sided 95% CI, −∞% to 1.6%; P = .12) (Table 3). In this ITT population, there was no significant period effect (P = .62), and the sequence order of the periods was not associated with a higher risk of pulmonary embolism at 3 months (P = .64). The intercluster coefficient was 0.019 and the intracluster coefficient was 0.034.

The post hoc sensitivity analyses are presented in Supplement 2. These analyses show that the 2 groups had similar clinical risk of PE (eTable 2 in Supplement 3). The noninferiority hypothesis remained confirmed in the per-protocol population, and the rate of CTPA was significantly reduced in the PERC group (eTable 3 in Supplement 3).

Discussion

In this cluster-randomized trial of very low-risk patients with suspected PE, randomization to a PERC strategy vs conventional strategy did not result in an inferior rate of thromboembolic events over 3 months. In addition, the PERC strategy was associated with a benefit in terms of reduced CTPA use, ED length of stay, and likelihood of initial admission into hospital.

After initial validation in observational studies in the United States,5,16 the PERC rule was challenged in Europe by 2 studies that reported an unacceptable rate (>5%) of PERC-negative patients with a PE diagnosis.17,18 One reason for the reported high prevalence of PE among PERC-negative patients could be the overall higher prevalence of PE in European populations.19,20 Another is that the authors used a structured score (Wells or Geneva) to evaluate the clinical probability, which is made redundant when the PERC rule is used. One study reported better results when combining PERC with low gestalt clinical probability, with no false negatives of the PERC rule.21 The primary end point chosen, ie, the presence of PE after formal work-up in the ED, may also partly explain the discrepancies, as it leads to a substantial rate of overdiagnosis, with a greater number of small PEs diagnosed that could be left untreated.3,4,22,23

The end point chosen for the current study was a symptomatic pulmonary thromboembolism at 3 months and did not include the presence of a PE after formal work-up in the ED. This could explain the difference in the prevalence of PE between the 2 groups (1.5% in the PERC group vs 2.7% in the control group; P = .052). The real PE prevalence in the PERC group may actually have been higher if all patients had undergone formal work-up with D-dimer testing. It is very likely that some patients in the PERC group were discharged with a small untreated PE, which was not symptomatic even at 3-month follow-up. There was no significant difference in the diagnostic yield of CTPA for PE (11% in both groups). An increased yield with the use of PERC might have been expected, as has been reported with other clinical decision rules.24 However, in this study, the small number of diagnosed PEs may have resulted in a lack of power to detect significant differences between the 2 groups. In the PERC group, there were fewer initial PEs diagnosed than in the usual care group, with little difference between the 2 groups in clinically significant thromboembolic events at 3 months. This suggests that the PEs missed by the PERC strategy were mostly low-risk events such as small subsegmental PEs, with no clinical benefit in diagnosing them. This may represent a tolerable risk to patient safety because small subsegmental PEs can be left untreated.25 Another potential reason for the lower rate of PE in the PERC group may be that the 2 groups were not similar in their initial clinical probability of PE. In the post hoc sensitivity analyses with similar groups, the PE prevalence in the 2 groups was slightly modified (1.7% in the PERC group vs 2.7% in the control group in the first post hoc analysis and 1.5% vs 2.3% in the second; eTable 4 in Supplement 3). However, this difference was no longer statistically significant.

Future research could evaluate the economic benefit of implementing PERC in routine practice. Furthermore, the safety and benefit of the modified PERC rule for patients younger than 35 years should be evaluated and compared to the initial PERC rule.25

Limitations

This study has several limitations. First, the observed prevalence of PE in patients with a low gestalt probability is very low. Although this category should include patients with a PE prevalence below 15%, only 26 patients (2.7%) in the control group were diagnosed with a PE in the ED. This corresponds with an overestimation of the risk of PE by the clinician’s gestalt, which has been described previously.26 The 2.7% prevalence (95% CI, 1.8% to 4.0%) reported here is below that reported in previous studies.10,21 However, similar PE prevalences for low-risk patients have also been reported in a recent study (pooled prevalence in the ED, 3.1% [1.0% in the United States and 4.3% outside of the United States]).20 Furthermore, among low-risk patients with chest pain or dyspnea, the reported prevalence of PE was 0.9%.26 In a recent large multicenter prospective study, Penaloza et al reported a prevalence of 4.7% (95% CI, 3.5% to 6.1%).27 The mean age of the patients was 44 years in this study, which is younger than in other main studies.11,27 This could partially explain the low prevalence of PE in this sample. Furthermore, a CTPA was defined as positive if it showed an isolated subsegmental PE. This could be considered controversial as these PEs could be left untreated.28

Second, the failure rate of the diagnostic strategy in the control group was below the estimate, and therefore the sample size calculation was not accurate. The 3% maximal failure rate that has been used in other trials is derived from studies published more than 15 years ago.29 Because this percentage is very large compared with the event rates of less than 1% reported recently, its validity is questionable.11,27 Recently, the Scientific and Standardization Committee of the International Society on Thrombosis and Haemostasis published a recommendation to decrease the maximal acceptable failure rate to 2%. With this new recommended threshold, the present results would still be valid.30

Third, this was not a patient-level randomized trial; therefore, a bias inherent to the cluster design cannot be excluded. This shortcoming is, however, partly mitigated by the absence of period and sequence order effects. Fourth, it is possible that an occult inclusion bias was introduced, in which emergency physicians who were willing to discharge PERC-negative patients with no further testing did not include some of these patients in the trial during the control period. This could not be assessed because data were not recorded on the number of eligible patients who were not enrolled. The difference in the rate of PERC-negative patients and clinical probability of PE between the 2 groups (Table 2) suggests that physicians could have included more very low-risk patients in the PERC group, as they were willing to discharge them with no further testing. However, the 2 post hoc sensitivity analyses confirmed the primary result of noninferiority, although with a smaller effect on the secondary end points.

Fifth, 54 patients were lost to follow-up, and the presence of even a few events among these patients would have altered the conclusion of this study. A worst-case scenario simulation would have led to the rejection of the noninferiority hypothesis (difference, 0.2% [upper bound of the 95% CI at 1.6% for a margin set at 1.5%]). Sixth, it is possible that the use of PERC diverted testing, such that patients not tested for PE were instead tested for another acute condition such as acute coronary syndrome. However, these data were not collected in this study.

Conclusions

Among very low-risk patients with suspected PE, randomization to a PERC strategy vs a conventional strategy did not result in an inferior rate of thromboembolic events over 3 months. These findings support the safety of PERC for very low-risk patients presenting to the emergency department.

The Street MBA


Education is considered an important tool to build a good career and make your mark in society. You are told from childhood that your grades not only determine your chances of getting into a good college, but also define your success after graduation. Your level of intelligence is gauged through these numbers.

I feel that success is not limited to the numbers you achieve; rather, it is determined by your ability to interact, function and thrive in the world around you. I wanted to be a scientist, but ended up being an insurer. Do I regret this decision? No. Do I enjoy doing what I’m doing? Yes. Am I qualified to be an insurer from an education perspective? Maybe not.

Nowadays an MBA degree is considered a must for good career progression as a manager, and it is great to see that some of you who have gone down this path are doing very well. I don’t have an MBA degree that would have taught me sales or management. But if you look at my career graph, with the best wishes of all of you, I have been MD & CEO of one of the most successful general insurance companies in India for close to 6 years. This thought came to me when someone asked me which university I had done my MBA from, and I replied that I did it at the ‘Street University’. The person got really inquisitive and asked me where exactly this university is. I told him that this university is in every country, every city and every village of the world.

And it’s true. I learnt a lot by observing the streets around me, watching the hawkers when I visited them to have my favourite street-side foods. One of the most important lessons I have learnt from them is their power of resilience. I have seen them get uprooted many times, but they come back with a smile. They are temporarily sad for their loss, but soon they are already thinking about solutions to overcome the situation. The power of finding solutions rather than fretting about a problem is what I have learnt from them.

Secondly, they are always ready for the unexpected. They are not sure how stable their business model is or what will happen at the end of the day. But they are confident that they will find a solution to whatever happens. These street vendors usually have no time to plan; they think on their feet and use the scanty tools available to them to manage any crisis. They figure out how to do business with whatever adversities they may face. This teaches you lessons in clever resourcefulness, be it in managing projects or in making sales by smartly tackling such situations.

I feel that success and happiness go hand in hand. If you are not happy with what you are doing in life, you will be perceived as successful by everyone but yourself. I have seen the street hawkers relishing the joy of life with very little, yet still content. Their generosity inspires me the most when they lend a helping hand to a poor person by offering them some food. Even the richest among us sometimes fail to display such attributes of kindness, which actually add value to life.

I have also learnt the sharpness of doing business from these street vendors. They are capable of figuring out business opportunities even in places where the best business people are standing alongside them to compete. They have the ability to build relationships with their customers and be loyal to them. For example, you might not be surprised when your street-side vendor offers you his phone number and saves yours in return. He gives you a call when he’s freshly stocked up with your favourite fruit, and even offers to drop it at your doorstep – customised service at its best! Sometimes he goes out of his way and brings you fresh flowers on Diwali, just in case you needed them for the Pooja rituals – another free course in relationship building! They serve you with a smile, and their ability to go the extra mile comes to them spontaneously. I’m yet to see organised business houses do so well. Some of you might think this diminishes with scale, but it should not. We are equipped with technology and infrastructure that the street vendor is not, and yet something so simple seems tough for us.

Today we hear that it’s a VUCA (volatile, uncertain, complex, ambiguous) world. We hear of various business models being adopted by organizations to enhance their agility and adaptability. But I have seen all these qualities used and embodied by these heroes, my professors of the street university, from whom I learnt to do business, run organizations, and handle human resources, sales, marketing, operations, and more.

I’m sure each one of you has also learnt something from them, which plays out in your subconscious mind when you are running your companies or your systems. You are a sum total of your experiences. By this I mean experience of the world: being out there, interacting with people and situations. Experiential learning is the biggest learning you can get; the world is full of an exciting, diverse bouquet of experiences, one for every situation.

They say there are no free lunches, but there sure are free lessons! The ‘Street University’ may not be recognised among the biggest universities of the world, but I’m sure the lessons I learnt there wouldn’t have been taught to me even in the best of universities. For the determined, inspiration is everywhere, especially in the chaos of the streets. Watch out for it the next time you are out for a stroll!

How Soon Will You Be Working From Home?


Work today is increasingly tied to routine rather than a physical space. Unsurprisingly, more and more, companies in the United States allow their employees to work beyond a specifically designated space.

The number of telecommuters in 2015 had more than doubled from a decade earlier, a growth rate about 10 times greater than what the traditional workforce registered during the same period, according to a 2017 report by FlexJobs.com, a job search site specializing in remote, part-time, freelance and flexible-hour job positions.

Telecommuting might not just be a company perk in the next decade.

Experts, however, are quick to point out that telecommuting’s growth faces numerous challenges. Cultural barriers in traditional companies, the reliability of technology, labor laws, tax policies and the public’s own perceptions of telecommuting will all need to adjust to a more mobile workforce, labor analysts say.

Remote Work Still Considered a Perk

The Flexjobs.com report said the industries offering the greatest possibilities to work remotely included technology and mathematics, the military, art and design, entertainment, sports, media, personal care and financial services. Experts cite a couple of reasons why telecommuting is becoming more common in some industries: more reliable internet connectivity and new management practices dictated by millennials and how they work.

Among the advantages that companies cite for remote work are cost savings from needing less office space, more focused and productive employees, and better employee retention. Additionally, in 2015, figures showed that U.S. employers had saved up to $44 billion thanks to the existing almost 4 million telecommuters (working half time or more), the Flexjobs.com report said.

“I’ve been able to see firsthand the increase in productivity by incorporating telecommuting into several companies,” says Leonardo Zizzamia, entrepreneur in San Francisco and co-founder of a productivity and collaboration tool called Plan. “With housing costs in Silicon Valley continuing to rise, telecommuting is the financially savvy way to work for your favorite company.”

Yet remote work is still considered a perk in the majority of workplaces. The greatest proportion of telecommuting positions falls under management, according to the FlexJobs.com report. Managers, however, have struggled with overseeing remote workers.

“It used to be that everybody was in (an) office at set hours and if you were a manager you could look up and see your employees working or not,” says Susan Lund, partner at McKinsey & Company, a global management consulting firm. “Now it’s different. More companies are moving toward a more flexible work space environment and for managers it’s much more challenging because you need to know what each person is working on and whether they are reaching their goals.”

Workers in the beginning and early stages of their careers will be key to transforming today’s workplace to be more friendly to telecommuting, analysts say.

“Millennials have been working over computers and the internet since they were in early junior high and even younger,” says Brie Reynolds, a senior career specialist at FlexJobs.com. “For them it’s natural and when they come into the workforce they are really pushing it into the mainstream. They are letting employers know that remote work is something that they value, that it’s a way that they would want to work and that they don’t see it as a perk, but as another option for working.”

Working From Home as a Cross-Border Threat?

According to a 2017 LinkedIn report, the number of positions filled in the United States in October was more than 24 percent higher compared to the previous year. The oil and energy sector, manufacturing and industrial, aerospace, automotive and transportation sectors reported the biggest growth in jobs, the report said.

“If you look at the data you will find that there are significant talent gaps in many industries,” says Tolu Olubunmi, entrepreneur and co-chairperson of Mobile Minds, an initiative advancing cross-border remote working as an alternative to physical migration. “Those jobs are going unfilled for a number of reasons, and one of them is not actually having available the skills that are needed to the organizations.”

Telecommuting options may help fill empty positions in the U.S., job analysts say.

“When you are hiring remotely it opens you up to a much wider pool of talent than if you are stuck in one geographic area and you are only hiring people who can physically get to your office on a daily basis,” says Reynolds, the FlexJobs.com career specialist.

Technology also can help recruit workers, potentially attracting qualified candidates from other parts of the world, as telecommuting appears to be popular on a global scale as well. A 2012 Reuters/Ipsos report showed that about one in five workers worldwide telecommutes, with the practice especially common in Latin America, Asia, and the Middle East.

“Cross-border work allows companies to tap into a greater number of talent and diversity of talent that can help them meet their needs,” Olubunmi says. “It reduces brain drain in certain communities that are seeing their best and brightest leaving, and that actually benefits those communities. They have the skill and the talent working elsewhere, but money and services are still being distributed within the community.”

Experts have mixed opinions over the continued increase in telecommuting positions. Some are convinced that technological advancement will allow people to better simulate face-to-face interaction, thus encouraging working remotely. Others say future technology could play a counter-intuitive role of bringing people together in an actual office space.

“One of the big advantages of telecommuting was avoiding congestion,” says Adam Millsap, senior affiliated scholar at the Mercatus Center, a research center at George Mason University focusing on economics. “But if autonomous vehicles catch on, that itself could eliminate congestion and encourage people to go into the office again even more. They will cut down commuting time, there will be fewer accidents, which tend to hold up traffic, and the cars will be able to drive much closer together at higher speed, because they will all be communicating with one another, so you could fit more on the road.”

Regardless, opening up the world’s talent to U.S. companies should not scare American workers, experts say. “Telecommuting isn’t about taking jobs away from native-born citizens,” Olubunmi says. “This is about improving the economy by letting businesses have a broader pool of talent to pick from, in order to be able to achieve their goals and have better economic growth in general for all.”

At the same time, one shouldn’t assume that foreign workers will be willing to take on American jobs just because they become more accessible.

“If an American firm comes to India and says they will give relatively higher wages for people to work in a call center, those workers might be willing to stay awake through the night,” Millsap says. “But if I am Apple and I want to hire a new software engineer, there is a good chance that a software engineer in Japan, for instance, already has a pretty good salary and is not going to be willing to take on a job that requires him to have meetings at midnight in his own country.”

Experts agree that people in the labor market need to be more agile at acquiring new skills later in life, including learning how to work remotely. Remote work, they say, should not necessarily be considered a perk, but rather a way of helping employees better manage work-life balance.

“When people are given the flexibility to live and work where they please, it really does increase productivity and allow a diversity of people to engage in the workforce,” Olubunmi says. “Because if in the 19th century, work was about where you went, now work is about what you do, not from where you do it.”

Are Tomorrow’s Fuel Cells Made of Paper? This Engineer Thinks So


Because his fuel cells are cheaper, easier and cleaner than conventional batteries.

Where others might look at substances like urine, blood and sweat and cringe, Juan Pablo Esquivel sees untapped sources of energy. Not for powering large engines but rather to produce small amounts of electricity that could play a vital role in the burgeoning telemedicine market. Today Esquivel, a 35-year-old electronics engineer, is developing miniature paper-based fuel cells at the National Centre of Microelectronics (CNM) at the Autonomous University of Barcelona (AUB), with an eye toward using them to power disposable diagnostic devices.

As we stroll the corridors of CNM, Esquivel explains the difference between typical lithium or alkaline batteries and what he’s developing: Unlike the sealed batteries you might use in a flashlight or computer keyboard, fuel cells generate electricity through an electrochemical reaction fed by an external fuel supply. This type of power source has been tested as an energy source for cars and mobile phones, but Esquivel, who started his career at the Monterrey Institute of Technology in his native Mexico, is among the first to do this work on a micro scale.
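As an illustration (our example, not necessarily one of the chemistries Esquivel works with): in a direct methanol fuel cell, a design historically tested for phones and cars, the overall energy-releasing reaction is

\[
\mathrm{CH_3OH} + \tfrac{3}{2}\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}
\]

The fuel is oxidized at one electrode and oxygen is reduced at the other, which forces electrons through an external circuit; the cell keeps producing power as long as fuel is supplied, rather than until a sealed charge runs out.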

A lithium battery, a Fuelium battery and a power pad.

Not only does his approach open up the range of possible uses for these tiny fuel cells, but it also sidesteps the environmental impact from regular batteries. “We develop small, nontoxic, inexpensive fuel cells and batteries that don’t need to be recycled and could be thrown away with no ecological impact,” he explains with a Mexican accent laced with Iberian Spanish expressions.

Born in Guadalajara, Esquivel moved to Barcelona in 2005, having fallen in love with the city while backpacking through Europe during college. When it was time to apply to Ph.D. programs, he was intrigued by the work being done at CNM, among the most advanced labs of its kind in Southern Europe. It proved to be the right fit: In 2013, MIT Technology Review named him one of the 10 most innovative Mexican researchers under 35.

“Esquivel is like Cristiano Ronaldo, and, like Ronaldo, he’s playing for an excellent team. That’s why he gets results,” jokes Antonio Martínez, a professor at the Polytechnic University of Madrid.

The Mexican researcher confesses that he’s long been obsessed with “making things cheaper, simpler and easier.” Once his team had developed the paper-based batteries, they wanted to find a universal, everyday use for them. So Esquivel and Neus Sabaté, his thesis adviser and “scientific soul mate,” shelved their academic journals and turned instead to considering what people and the market needed.

They focused on portable, disposable diagnostic tests, such as those for pregnancy, glucose and infectious diseases, that use small amounts of energy. Those devices, they noticed, rely on lithium button batteries to supply the energy necessary to analyze the samples and to display the results. But, in contrast to watches or remote controls, single-use diagnostic tests get discarded after having used less than 1 percent of their batteries’ charge — an “ecological aberration,” in Esquivel’s words.

Juan Pablo Esquivel holding a paper-based battery, an eco-friendly power source for single-use applications.

That was the moment that Esquivel and his colleagues connected the dots: “What if we used the samples [of saliva or blood] to feed a small fuel cell that would generate the electricity needed for the analysis and to display the results?” They stopped focusing on hydrogen, methanol and ethanol as the only energy sources for fuel cells and started looking at bodily fluids as materials capable of triggering an electrochemical reaction — and generating electricity.

Digging further, they reached two important conclusions: First, they could build their power sources using paper as the base material to transport the fluids by capillary action; and second, these power sources could be integrated, thanks to printed electronics technology, with other electronic components such as sensors and display screens to produce self-powered devices.

In 2015, with patent in hand, Esquivel, Sabaté and Sergi Gassó — who joined as a business partner — founded Fuelium, with seed money from their personal savings, funding from the Repsol Foundation startup accelerator program and grants from the Spanish government and the European Commission. The company aims to bring the results of their lab work to the portable diagnostic test market, a sector Esquivel values at $1.8 billion. While he sees a clear path to market for Fuelium, he acknowledges that breaking in will be a heavy lift: Getting out of the lab is “a big challenge for a quite disruptive technology like ours,” he says. Two years since launch, Fuelium has grown to a staff of five and signed its first contract.

Emmanuel Delamarche, manager of precision diagnostics at IBM Research in Zurich, agrees that portable devices have become a “very hot area,” both in scientific and economic terms, with a trend away from remote, centralized labs and toward portable diagnostic tools that deliver faster results. “Eighty percent of the world’s population needs this kind of technology because they don’t live next to a clinical lab,” Delamarche explains.

Sabaté, who has worked with Esquivel for 12 years, is impressed by her partner’s creative mind and willingness to experiment. “He never says no to an idea,” she says, “no matter how crazy it is.”

Crazy or not, Esquivel is already working on a new idea: developing what he calls the “power pad,” which he hopes will lead to the first fully biodegradable paper-based battery. It’s an ambitious play for a “tiny, sustainable and clean” source of energy, he admits — but it’s a project, he adds with a smile, that lets him “have fun on the way.”

How Cannabis Can End the Use of Dangerous Prescription Pain Killers


Opioid painkiller prescriptions have jumped 300 percent in the last decade, and opioids are now among the most commonly prescribed drugs on the market. They are also among the most dangerous and addictive, and they often lose effectiveness with long-term use.

Now a new study of medical cannabis and prescription opioid use among chronic pain patients, conducted by researchers at The University of New Mexico, has found a distinct connection between access to cannabis and significant reductions in opioid use.

There is an abundance of evidence that the suppression of medical marijuana is one of the great failures of a free society, of journalistic and scientific integrity, and of our fundamental values. No plant on Earth is more condemned than cannabis, yet it has the potential to heal dozens of diseases and curb our rampant use of prescription opioids. Some studies have suggested that marijuana may even have a place in curbing the opioid epidemic.

The study, titled “Associations between Medical Cannabis and Prescription Opioid Use in Chronic Pain Patients: A Preliminary Cohort Study” and published in the open-access journal PLOS ONE, was conducted by Dr. Jacob Miguel Vigil, associate professor in the Department of Psychology, and Dr. Sarah See Stith, assistant professor in the Department of Economics. The results of this preliminary study showed a strong correlation between enrollment in the New Mexico Medical Cannabis Program (MCP) and cessation or reduction of opioid use, suggesting that whole, natural Cannabis sativa and extracts made from the plant may serve as an alternative to opioid-based medications for treating chronic pain.

Today, opioid-related drug overdoses are the leading cause of preventable death in the United States, killing approximately 100 Americans every day. Conventional pharmaceutical medications for treating opioid addiction, such as methadone and buprenorphine, can be similarly dangerous due to substantial risks of lethal drug interactions and overdose.

“Current levels and dangers of opioid use in the U.S. warrant the investigation of harm-reducing treatment alternatives,” said Vigil, who led the study. “Our results highlight the necessity of more extensive research into the possible uses of cannabis as a substitute for opioid painkillers, especially in the form of placebo-based, randomized controlled trials and larger sample observational studies.”

Cannabis has been investigated as a potential treatment for a wide range of medical conditions, from post-traumatic stress disorder to cancer, with the most consistent support for the treatment of chronic pain, epilepsy and spasticity. In the U.S., states including New Mexico have enacted MCPs in part for people with chronic, debilitating pain who cannot be adequately or safely treated with conventional pharmaceutical medications.

In a historic moment, in November 2012 Colorado became the first US state to legalize cannabis for recreational use. The impact of that decision has rippled across the entire country, creating vast opportunities to educate millions on the health benefits of cannabis, particularly for pain.

Like other states, New Mexico only permits medical cannabis use for patients with certain debilitating medical conditions. All the patients in the study had a diagnosis of “severe chronic pain,” annually validated by two independent physicians, including a board-certified specialist.

New Mexico, Dr. Vigil notes, is among the U.S. states hardest hit by the current opioid epidemic, although the number of opioid-related overdose deaths appears to have fallen in recent years, perhaps the result of increased enrollment in the NM MCP, which currently includes more than 48,000 patients.

“MCPs are unique, not only because they allow patients to self-manage their cannabis treatment, but because they operate in conflict with U.S. federal law, making it challenging for researchers to utilize conventional research designs to measure their efficacy,” Vigil said.

The purpose of the researchers’ preliminary cohort study was to examine the association between enrollment in an MCP and prescription opioid use. The study observed 37 chronic pain patients who habitually used opioids and chose to enroll in the MCP between 2010 and 2015, compared with 29 patients with similar health conditions who were also given the option but ultimately chose not to enroll.

“Using informal surveys of patients enrolled in the MCP, we discovered a significant proportion of chronic pain patients reporting to have substituted their opioid prescriptions with cannabis for treating their chronic pain,” said Vigil.

The researchers used Prescription Monitoring Program opioid records over a 21-month observation period (beginning three months prior to enrollment for the MCP patients) to more objectively measure opioid cessation, defined as the absence of opioid prescription activity during the last three months of observation, with use calculated in average daily intravenous [IV] morphine dosages. MCP patients’ self-reported benefits and side effects of using cannabis one year after enrollment were also collected.

By the end of the observation period, the data showed that MCP enrollment was associated with 17 times higher age- and gender-adjusted odds of ceasing opioid prescriptions, 5 times higher odds of reducing daily prescription opioid dosages, and a 47 percentage point reduction in daily opioid dosage, compared with a mean increase of 10 percentage points among the non-enrolled patients.
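For readers unfamiliar with the statistic, an odds ratio compares the odds of an outcome between two groups; the study’s 17-fold figure is additionally adjusted for age and gender within a regression model, but the unadjusted version is simply

\[
\mathrm{OR} = \frac{p_{\mathrm{MCP}}/(1-p_{\mathrm{MCP}})}{p_{\mathrm{non}}/(1-p_{\mathrm{non}})}
\]

where \(p_{\mathrm{MCP}}\) and \(p_{\mathrm{non}}\) are the proportions of enrolled and non-enrolled patients, respectively, who ceased opioid prescriptions. An odds ratio of 17 means the odds of cessation were 17 times higher among MCP enrollees.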

Survey responses indicated improvements in pain reduction, quality of life, social life, activity levels, and concentration, and few negative side effects from using cannabis one year after enrollment in the MCP.

The researchers’ findings, which provide clinically and statistically significant evidence of an association between MCP enrollment and opioid prescription cessation, dosage reduction, and improved quality of life, warrant further investigation of cannabis as a potential alternative to prescription opioids for treating chronic pain.

According to Stith, “The economic impact of cannabis treatment should also be considered given the current burden of opioid prescriptions on healthcare systems, which have been forced to implement costly modifications to general patient care practices, including prescription monitoring programs, drug screening, more frequent doctor-patient interactions, treatment of drug abuse and dependence, and legal products and services associated with limiting opioid-related liability.”

“If cannabis can serve as an alternative to prescription opioids for at least some patients, legislators and the medical community may want to consider medical cannabis programs as a potential tool for combating the current opioid epidemic,” Vigil said.

Toxin Cleanse: Which Toxins Are Disrupting Your Health?


Currently, more than 80,000 chemicals are used to produce many of the common household products we use in the United States. With an estimated 1,500 to 2,000 new chemicals being introduced every year, it’s impossible to completely avoid exposure to these agents. It is, however, possible to cleanse your body of many harmful compounds and create a healthier environment inside your home.[1] Here we’ll explore the various types of toxins that can affect your health, how you’re exposed to them, and how you can reduce that exposure.

What Are Toxins?

Toxins are poisonous substances created either biologically by living organisms or synthetically through chemical processes. Toxins can come from outside or inside our bodies; they are known as exogenous and endogenous toxins, respectively. Exogenous toxins are poisons or pollutants introduced into the body through air, food, water, or other outside sources. Endogenous toxins are by-products generated inside the body, such as those produced when naturally occurring bacteria and yeast are metabolized. The severity of a toxin is measured by its toxicity: its ability to harm or damage an organ, disrupt an enzyme system, or disturb a biochemical process.

Types of Toxins

Toxins can come from just about anywhere and are either biological or chemical. Biological toxins are found in nature. There are three types of biological toxins, or biotoxins: zootoxins (made by animals), mycotoxins (made by fungi), and phytotoxins (made by plants).[2]

Chemical toxins are created artificially, often as a byproduct of producing something else. Toxins can cause damage directly at the site of contact (local) or elsewhere in the body (systemic). These effects may be immediate or delayed.[3, 4]

Some of the most common sources of toxins are everyday products, appliances, and foods. According to the Centers for Disease Control and Prevention, even though outdoor pollution can be highly toxic, growing evidence reveals that most exposure to toxins occurs in public buildings, offices, and even homes.[5]

Home Toxins

The home, both inside and out, is host to an abundance of chemical and biological pollutants. These toxins are found in everything from cleansers, floors, and cookware, to certain bacteria and insects that live on mattresses and upholsteries. Electrical devices can emit electromagnetic radiation, and if they break they can poison their immediate environment with toxic metals such as lead, mercury, and arsenic.[6]

Some of the worst carriers of chemical toxins are the cosmetic products you put on your body. Soap, shampoo, and other personal care products expose the average person to hundreds of chemicals. While people may assume these products are harmless, many contain chemicals that have not been fully tested. These products often carry known carcinogens and endocrine disrupting chemicals. Ingredient labels are unclear, leaving most consumers confused about the safety of the products they use every day. Unfortunately, every time you bathe, breathe, cook, sleep, or continue your beauty regimen, these toxins and their effects begin to accumulate.[7, 8]

Common Biological Toxins in Your Home

When you’re in your home, you are in constant contact with floors, doors, cabinets, surfaces, and furniture. All of these household structures are home to varying levels of bacteria. For example, in the kitchen, bacteria from raw meat can be transferred from one surface, object, or food to another, causing cross-contamination—a major cause of foodborne illnesses.

Here is a list of biotoxins common to homes and where they’re found:

    • Dust mites: mattresses, pillows, upholstery, fabrics, floors
    • Mold and mildew: bathroom walls, window sills, wallpaper, ceilings, fabrics, food
    • Bacteria and viruses: kitchen surfaces, toothbrushes and holders, toilets, sinks, showers, food, tap water
    • Animal dander: pets, floors, clothes, curtains, beds, furniture, skin, hair
    • Insect parts and excrement: attics, basements, closets, storage boxes, cabinets, garages
    • Pollen: anything that has come in contact with the outside, including shoes, pets, hair, skin, and clothes[9]

Common Chemical Toxins in Your Home

The idea of chemical toxins probably stirs up thoughts of a garage full of paint cans and other liquid waste. But did you know that chemicals are used to produce plastic and other synthetic materials used to build homes? Paint, carpet, and pressed wood are just a few of the items that can release health-disrupting chemicals long after they’ve been installed.[10] Here is a list of common chemicals and the products that may contain them.[7]

  • Diethanolamine (DEA): shampoos, lotions, sunscreens, brake fluid, antifreeze
  • Formaldehyde: nail polish/removers, air fresheners, cleaning products, paper towels[11]
  • Triclosan: hair products, shaving gels, deodorants, toothpastes
  • Petroleum: detergents, fertilizer, synthetic fibers, vitamins, plastic, candles
  • Butylated compounds (BHA, BHT): hair products, makeup, deodorant, fragrances
  • Polytetrafluoroethylene (PTFE): cosmetics, Teflon, water[12]
  • P-Phenylenediamine (PPD): hair dyes, cosmetics, henna tattoos[13]
  • Mica: makeup products, insulation, wallpaper, shingles, cement[14, 15]
  • Dibutyl phthalate: plastics, adhesives, printing inks
  • Sodium laureth sulphate: shampoos, toothpastes, mouthwashes, body wash, soaps, detergents
  • Aluminum: antacids, cake mix, processed cheese, deodorants, baking soda, baking powder, soy-based baby formulas[16]
  • Ammonia: fertilizers, cleaning solutions, plastics, fabrics, pesticides, dyes
  • Chlorine: water, pesticides, synthetic rubbers, polymers, refrigerants
  • Fluoride: non-organic or processed foods, toothpastes/mouthwashes, Teflon cookware, water
  • Sodium hydroxide: soaps, rayon, paper, dyes, petroleum products, detergents, oven cleaners[17]

VOCs

Volatile organic compounds (VOCs) are common chemical contaminants that can be found in indoor environments. These compounds contain carbon, can disperse through the air, and usually have an odor. VOCs are released by many types of building materials, including:[18]

  • Sealants, caulks, and coatings
  • Adhesives
  • Paint and varnish
  • Wall coverings
  • Cleaning agents
  • Air fresheners and other scented products
  • Carpeting
  • Vinyl flooring
  • Upholsteries, fabrics, and furnishings
  • Employee personal beauty and hygiene products

Signs It’s Time to Detox

When toxins overwhelm your body, they can weaken your immune system and cause a domino effect of digestive issues, mood swings, loss of mental focus, and sleep disruption. Basically, they make you feel unwell. Even your body odor can change—an outward sign that your body needs help. Signs that your body could benefit from a cleanse include:[19]

  • Sugar cravings
  • Digestive issues
  • Sinus issues
  • Acne and rashes
  • Fatigue
  • Loss of mental sharpness
  • Joint and muscle aches
  • Depression and anxiety
  • Sudden weight loss, or difficulty losing weight
  • Unpleasant breath and body odor
  • Irregular sleep cycles or trouble sleeping

A poor diet and a weakened immune system can also result in the overgrowth of a fungus called candida, which takes up residence in your mouth, gut, and skin. This condition is referred to as candidiasis, or a yeast infection. To support a healthy gut, a change in diet as part of a toxin cleanse will help balance levels of candida.[20]

Optimizing Your Body’s Natural Ability to Cleanse

Your body has a comprehensive detoxification system in which the immune system, respiratory system, skin, intestines, kidneys, and liver all work together. Your skin and respiratory system are the first line of defense against harmful toxins and chemicals. Once a toxic chemical or biotoxin makes it past these first two defenses, your immune system takes over. After a filtering and metabolizing process, toxins are expelled from the body as waste.

Over time, the buildup of toxins can make it increasingly difficult for your immune system to work properly. Cleansing your body is a natural way to help it rid itself of toxins, optimizing its ability to defend itself.[21]

How to Remove Toxins From Your Body

Cleanses have been performed for centuries. Indigenous Americans often cleansed using methods like fasting and sweathouses to purge the body of unhealthy substances.[22] Although effective, these methods lost popularity with the development of modern medical techniques. Only recently has society returned to these older practices, like fasting and herbal cleansing, as a more organic way to detoxify, lose weight, and stay healthy without the use of harsh medicines.

If fasting or a sweathouse isn’t for you, flushing toxins from your body with a natural cleanse is a refreshing way to bring back a healthier you on the inside. Fewer toxins in your body can also mean higher energy levels. There are a number of cleanses you can try; most last between three and seven days.

Reduce Your Exposure to Toxins

The first thing to tackle when starting a cleanse is to rid your home environment of harmful toxins—both chemical and biological. Avoid the microwave, and reduce your use of electronics. Clean the floors, beds, and upholsteries. Use a wet cloth to wipe up dust instead of sweeping, which can spread dust particles into the air and throughout the house. Make your own natural cleaning products. A mixture of distilled or purified water, lemon, peppermint, and vinegar makes a nice, natural disinfectant. Mixing baking soda with water and lemon acts as a good disinfecting scrub.

Replace cosmetics and hygiene products with organic versions. For example, coconut oil is a nice substitute for skin and hair care products.

Improve Your Diet

Once you have achieved a clean home environment, focus on the foods you’ll be eating to help cleanse your body. Shifting to an organic vegan or vegetarian diet is essential for a successful cleanse. In order to flush harmful toxins, you must give your organs a rest from unhealthy foods so that they can function at peak efficiency. Foods to avoid include:

  • Processed or packaged food
  • Sodium-rich foods and MSG
  • Meat
  • Soda
  • Caffeine
  • Added sugar
  • Artificial sweeteners
  • Dairy
  • Refined carbohydrates
  • Trans fats
  • Wheat and gluten

Organic foods that naturally provide antioxidants, vitamins, minerals, and other nutrients can help combat the damage from toxins. Incorporate plenty of probiotic foods to keep your gut microbiota balanced. Some of the foods that are recommended during a cleanse include:

  • Brightly colored fruits: watermelon, strawberries, blueberries
  • Citrus fruits: limes, lemons, oranges
  • Brightly colored vegetables: broccoli, beets, carrots
  • Leafy greens: kale, Swiss chard, spinach
  • Seeds and nuts: flax and sunflower seeds, cashews, pistachios, walnuts
  • Distilled or purified water
  • Non-caffeinated herbal teas
  • Garlic

Adding herbs and spices like dandelion, cilantro, eucalyptus, alfalfa leaf, peppermint, organic milk thistle, and organic gum acacia is a great way to season your meals and support a healthier cleanse.

Stay Hydrated

It is extremely important to stay hydrated during a cleanse. Drink purified or distilled water instead of tap water. To add taste and nutrients, mix two tablespoons of raw, organic apple cider vinegar (ACV) into a gallon of distilled water and shake thoroughly.
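For a sense of how dilute that mixture is (our arithmetic, assuming 1 tablespoon ≈ 15 mL and 1 US gallon ≈ 3,785 mL):

\[
\frac{2 \times 15\ \text{mL}}{3785\ \text{mL}} \approx 0.8\%\ \text{vinegar by volume}
\]

In other words, a 12-ounce glass of the mix contains roughly half a teaspoon of ACV.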

Take Supplements

Performing a cleanse using natural organic supplements is a great way to ensure success. Some supplements are even grouped together in kits that are specifically designed to target certain organs. These kits include Global Healing Center’s own Kidney Cleanse Kit, Liver Cleanse Kit, and Harmful Organism Cleanse Kit™.

Get More Exercise

Exercising during a detox or cleanse is a great way to help the body push out toxins and waste, especially if you break a sweat.[23] An hour of exercise a day, which can be broken up into two 30-minute sessions, is ideal. As an added benefit, studies have shown that short bursts of high-intensity training during a detox diet may support weight loss and cardiovascular health.

Healthy Meals for a Cleanse

The following menu should keep your cleanse on the right track. Be sure to consume distilled or purified water throughout the day.

Breakfast

This meal should consist of a small bowl of your choice of fruit and a 12-ounce glass of the ACV water mix.

Snacks

For a mid-morning or afternoon snack, grab a handful of nuts and seeds along with a 12-ounce glass of ACV water mix or a cup of green tea. To avoid unnecessary fat or sodium, make sure nuts and seeds are not oiled or salted.

Lunch

Lunch should consist of vegetables. Try blending some of your favorite veggies and adding some lemon juice for a veggie smoothie. A homemade soup of vegetables, herbs, and spices boiled together in purified water is also beneficial.

Dinner

This meal can be the same as lunch, or you can choose to fast using just the ACV water mixture. You can also choose a small vegetable or leafy salad with a sprinkling of extra virgin olive oil.

What Kind of Cleanse Is Right for You?

Detoxification diets and regimens—also referred to as cleanses or flushes—are a means of removing toxins from the body or losing weight.[24] These cleanses come in many varieties.[25]

Review your goals with your healthcare provider to determine which toxin cleanse is best for you. It’s also a good idea to speak with them if you have any special dietary needs. Do not use laxatives during a cleanse or in place of one. If you’re used to an unhealthy diet full of processed food, it’s best to ease into a healthier, organic diet before starting a cleanse. How often you should cleanse is best decided by you and your healthcare provider. Many people have seen great success from performing a series of liver and colon cleanses throughout the year.

The Future of Cleansing

Growing attention to toxic chemical exposure and environmental health sciences continues to inspire extensive research by scientists and governments alike. This interest has prompted the medical and scientific communities to report on the impact of environmental toxins on human health. Knowing how toxins affect our bodies and our environment will help us discover new ways to cleanse and improve our quality of life.