Pituitary adenoma growth rate influenced by multiple factors.


Pituitary adenoma growth rate is multifactorial and may be influenced by patient age and gender, as well as adenoma subtype, hormonal activity, immunohistological profile and the direction of growth relative to the pituitary fossa, according to results of a retrospective study.

  • Researchers evaluated pre- and postoperative pituitary adenoma (PA) characteristics in relation to patient demographics, MRI findings and histopathological factors. They examined 153 patients who underwent surgery for removal of a histologically proven PA at Toronto Western Hospital between 1999 and 2011.

All patients had at least two preoperative and two postoperative MRIs so that tumor volume doubling time could be measured; the paired scans were obtained a minimum of 3 months apart.
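The report doesn't show the underlying calculation, but tumor volume doubling time from two volume measurements is conventionally computed with the Schwartz exponential-growth formula. A minimal sketch, with the volumes and interval invented for illustration:

```python
import math

def volume_doubling_time(v1: float, v2: float, interval_days: float) -> float:
    """Tumor volume doubling time in days (Schwartz formula):
    TVDT = T * ln(2) / ln(V2/V1), for volumes V1 -> V2 over T days."""
    if v2 <= v1:
        raise ValueError("no growth between scans; doubling time undefined")
    return interval_days * math.log(2) / math.log(v2 / v1)

# Hypothetical example: a 1.2 cm^3 adenoma measuring 1.5 cm^3 180 days later
print(round(volume_doubling_time(1.2, 1.5, 180)))  # -> 559 (days)
```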

Patients all underwent a sella/pituitary imaging protocol, and tumor volume was determined using partitioning and target-volume software. Each specimen was also reviewed by two endocrine pathologists, and standardized diagnostic synoptic pathology reports provided information on MIB-1 labeling index, p27 and N-terminally truncated fibroblast growth factor receptor 4 (FGFR4). Growth direction was classified as superior, anterior, posterior or lateral in relation to the sellar fossa.
The researchers found that preoperative growth rate was associated with age (P=.0001), suprasellar growth (P=.003), presence of a cyst or hemorrhage (P=.004), MIB-1 labeling index (P=.005), FGFR4 positivity (P=.047) and p27 negativity (P=.007).

Postoperatively, 53 of 153 patients (34.6%) demonstrated residual tumor volumes, while the remaining 100 patients did not. Residual volume was associated with older patient age (57 vs. 51 years; P=.038) and with growth patterns including anterior, posterior, suprasellar and cavernous sinus extension (P=.001). There was a correlation between pre- and postoperative growth rates (r=0.497, P=.026). Postoperative growth rates were linked with age (P=.015) and gender (P=.017).

“Due to the heterogeneity of PA, no single predictor of PA growth behavior can be taken in isolation as a means to predict its outcome,” the researchers wrote.  “These predictors must be combined in order to formulate the most accurate estimation of PA growth, which in turn will inform sound clinical management.”

Tight glycemic control failed to benefit pediatric ICU patients.



  • Tight glycemic control in critically ill children had no significant effect on the number of days alive and free from mechanical ventilation, according to researchers.
  • Children admitted to the pediatric ICU (aged ≤16 years) who were expected to require mechanical ventilation and vasoactive drugs for at least 12 hours were randomly assigned to tight glycemic control with a target blood glucose range of 72 mg/dL to 126 mg/dL or conventional glycemic control with a target level less than 216 mg/dL (round values in mmol/L; see the conversion sketch below).
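The mg/dL thresholds look odd only because they correspond to round mmol/L targets (4.0–7.0 mmol/L and 12.0 mmol/L), which suggests the protocol was specified in mmol/L; the conversion uses glucose's molar mass of roughly 180.16 g/mol. A quick sketch of the arithmetic:

```python
# Glucose conversion: mg/dL = mmol/L * 18.016 (molar mass ~180.16 g/mol, per dL)
MGDL_PER_MMOLL = 18.016

def mmoll_to_mgdl(mmol_l: float) -> float:
    return mmol_l * MGDL_PER_MMOLL

for mmol in (4.0, 7.0, 12.0):
    print(f"{mmol} mmol/L ~= {mmoll_to_mgdl(mmol):.0f} mg/dL")
# 4.0 -> 72, 7.0 -> 126, 12.0 -> 216: the trial's stated thresholds
```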

Besides assessing the number of days alive and free from mechanical ventilation at 30 days after random assignment, the Control of Hyperglycemia in Pediatric Intensive Care (CHiP) trial researchers examined the costs of hospital and community health services.

Of 1,369 patients at 13 centers in England, 694 were assigned to tight glycemic control and 675 to conventional glycemic control. Overall, 60% of patients had undergone cardiac surgery, according to researchers.

Data indicate that the mean between-group difference in the number of days patients were alive and free from mechanical ventilation at 30 days was 0.36 days (95% CI, –0.42 to 1.14), a confidence interval that crosses zero and is therefore consistent with no significant benefit.

In addition, severe hypoglycemia was observed in more children in the tight glycemic control group than in the conventional glycemic control group (7.3% vs. 1.5%, P<.001).

The mean 12-month costs were lower in the tight glycemic control group than in the conventional glycemic control group (cost difference per patient of –$4,815; 95% CI, –$10,298 to –$668), according to data. Costs in the cardiac surgery subgroup were similar in each group. However, in the subgroup that did not undergo cardiac surgery, the mean cost was lower in the tight glycemic control group than in the conventional glycemic control group (–$13,120; 95% CI, −$24,682 to −$1,559), researchers wrote.

In an accompanying editorial, Michael S.D. Agus, MD, of Boston Children’s Hospital and Harvard Medical School, wrote that the trial was well designed but would require further study.

“Although the improved 1-year health care outcomes in the non–cardiac-surgery patients is compelling, it remains impossible to determine best practice for the child who requires critical care for reasons other than cardiac surgery or burns until either a meta-analysis of several trials is performed on an individual-data level or until data from an ongoing large, multicenter trial are accrued,” Agus wrote.

Think Twice Before Routinely Administering Oxygen.


Significance of Arterial Hyperoxia and Relationship with Case Fatality in Traumatic Brain Injury: A Multicentre Cohort Study

The authors of this complex retrospective multicenter study aimed to determine whether hyperoxia was associated with higher in-hospital case fatality in ventilated traumatic brain-injured patients in the ICU. The study cohort consisted of 1,212 ventilated traumatic brain-injured patients treated at 61 U.S. hospitals between 2003 and 2008. Hyperoxia was defined as a PaO2 greater than 300 mm Hg, and hypoxia as a PaO2 less than 60 mm Hg. The primary outcome was in-hospital death.
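Under those definitions, each patient's admission blood gas maps to one of three exposure groups. A minimal sketch of that classification, assuming a simple PaO2-only rule (the function is hypothetical; studies of this kind sometimes also fold a low PaO2/FiO2 ratio into the hypoxia group):

```python
def classify_oxygenation(pao2_mmhg: float) -> str:
    """Map an admission PaO2 (mm Hg) to the study's exposure groups."""
    if pao2_mmhg > 300:
        return "hyperoxia"
    if pao2_mmhg < 60:
        return "hypoxia"
    return "normoxia"

assert classify_oxygenation(450) == "hyperoxia"
assert classify_oxygenation(95) == "normoxia"
assert classify_oxygenation(52) == "hypoxia"
```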

The problems associated with long-term disability in traumatic brain injury are gargantuan and well known. Multiple strategies are used to identify and optimize physiological derangements during early resuscitation, presumably because they might limit secondary brain injury or improve long-term outcome. Managing cerebral perfusion and organ blood flow is one of the important issues. It was long thought that cerebral ischemia was the major contributor to secondary brain injury, but recent studies support a counterintuitive alternative: too much oxygen may also be harmful.

Oxygen is necessary to maintain brain metabolism after injury, but it is now thought that excessive oxygen can actually potentiate brain injury by exaggerating the production of oxygen free radicals, triggering cellular injury and apoptotic cascades, and causing organ-specific hypoperfusion. Numerous studies have demonstrated the detrimental effects of hyperoxia in all forms of brain injury and in other critically ill patients. The authors of this study evaluated observational data related to this problem: they determined the occurrence of hyperoxia upon admission to an ICU, studied the impact of hyperoxia on the inpatient case fatality rate, and attempted to determine whether hyperoxia upon admission was an early indicator of in-hospital death after other confounders were evaluated. Obviously, this was a difficult issue to interpret.

Patients were classified as demonstrating normoxia, hyperoxia, or hypoxia, with the primary outcome measure being in-hospital fatality. Thirty-three percent of the admitted patients were normoxic, 46 percent were hypoxic, and 21 percent were hyperoxic. A variety of cardiovascular, metabolic, respiratory, renal, and neurological functions were monitored for all groups. Overall, 33 percent (400/1,212) met the primary outcome of in-hospital death, so these were sick patients with significant brain injuries. Fatality was highest in the hyperoxic group (41%), followed by the hypoxic group (33%) and the normoxic group (23%).

It is unclear whether the more seriously injured patients received less or more oxygen. The authors concluded that exposure to hyperoxia was prevalent and associated with a lower likelihood of survival after hospital admission in brain-injured individuals. They contend that this was true even after controlling for confounding variables, and they determined that exposure to hyperoxia was an independent predictor of in-hospital fatality. One proposed mechanism is hyperoxia-induced cerebral vasoconstriction, which can decrease cerebral blood flow by up to 33 percent. Other studies have demonstrated additional detrimental effects of hyperoxia, which may include deterioration of cardiac output and systemic vasoconstriction.

The authors conclude that ventilated TBI patients should not be subjected to arterial hyperoxia, but should be treated instead with methods to produce normoxia. Per their conclusions, unnecessary high-flow oxygen delivery should be avoided in critically ill ventilated TBI patients.

Comment: These authors conclude that unnecessary oxygen delivery increases TBI patient fatalities. Hyperoxia, defined as a PaO2 greater than 300 mm Hg, was deemed ultimately harmful. It is likely that most emergency physicians, when faced with a brain-injured patient requiring intubation and ventilation, would opt for the initial delivery of 100% oxygen, in the possibly mistaken belief that it would be optimal for brain function.

Patients staying in the ED would likely remain on high-flow oxygen until they were admitted to the ICU, where this mistake may be reiterated. These authors note that a PaO2 less than 300 mm Hg is desirable. This revelation parallels the prior, since-debunked recommendation that brain-injured patients be hyperventilated to lower the pCO2 and thereby reduce cerebral edema. It is now concluded that hyperventilation should be avoided, as should hypoxia and hypercarbia. A pCO2 of 30-35 mm Hg is the goal in brain-injured ventilated patients.

Association between Arterial Hyperoxia Following Resuscitation from Cardiac Arrest and In-hospital Mortality

The authors, from Cooper University Hospital in Camden, NJ, note that supplemental oxygen is often administered in high concentrations to patients after cardiac arrest, but this near-universal intervention has recently come under scrutiny as potentially harmful. It is generally agreed that too little oxygen can potentiate anoxic injury, but too much oxygen may increase free radical production and trigger cellular death.

These authors sought to determine whether exposure to hyperoxia after return of spontaneous circulation from cardiac arrest was associated with a poor clinical outcome. They studied patients who survived cardiac arrest to ICU admission, and attempted to determine whether post-resuscitation hyperoxia, defined as a PaO2 greater than 300 mm Hg, was a common occurrence and whether it was associated with lower survival to hospital discharge.

This multicenter study of more than 6,000 patients found hyperoxia in 18 percent, hypoxia in 63 percent, and normoxia in 19 percent. The hyperoxia group had a significantly higher in-hospital mortality rate than the normoxic group (63% vs. 45%). The authors concluded that among cardiac arrest patients admitted to the ICU following resuscitation, exposure to hyperoxia was independently associated with higher in-hospital mortality. They also found that post-resuscitation hyperoxia was common when blood gas analysis was performed after ICU arrival. They attempted to control for a predefined set of confounding variables in a multivariable analysis, and when this was accomplished, exposure to hyperoxia remained an independent predictor of in-hospital death.

Hyperoxia was also associated with a lower likelihood of independent functional status at hospital discharge compared with normoxia. A poor clinical outcome associated with hyperoxia was an unexpected finding, and the authors urge caution in interpreting their data. Because reperfusion after an ischemic insult produces a surge of reactive oxygen species that may overwhelm natural antioxidant defenses, the oxidative stress from hyperoxia after reperfusion is thought to increase cellular death by diminishing mitochondrial activity, disrupting normal enzyme activity, and damaging lipid membranes through peroxidation.

It was noted that the American Heart Association still recommends 100% oxygen administration during resuscitative efforts. Many physicians, however, maintain a high FiO2 for variable lengths of time after circulation has been successfully restored. Nearly one in five patients was exposed to hyperoxia post-cardiac arrest, and almost half of those patients had a PaO2 greater than 400 mm Hg, so post-resuscitative arterial hyperoxia appears to be a common occurrence. Recent AHA recommendations have suggested targeting an arterial oxygen saturation of 94% to 96%. This study did not evaluate whether therapeutic hypothermia was attempted.

Comment: The harm from prolonged high-flow oxygen use is a relatively new concept, and it is grossly underestimated by most clinicians. It is not known to be detrimental during a short ED stay, but prolonged ED boarding and continuation of ED protocols make this an important issue for emergency medicine. I would note that no study has found long-term beneficial effects of hyperoxia in hospitalized patients with any illness. These two articles suggest that outcomes in cardiac arrest and traumatic brain injury are adversely affected by prolonged periods of hyperoxia, and that this situation should be avoided. Of course, it requires an arterial blood gas to determine the actual PaO2; a pulse oximeter reading of 100% can correspond to a tremendously high arterial oxygen tension or a normal one, yet another reason to perform an ABG in the ED after things have settled down.

Investigators as far back as 1976 could find no significant difference in mortality, arrhythmias, the use of analgesics, or other cardiac parameters in post-myocardial infarction patients who were administered oxygen, and no evidence that routine administration of oxygen in uncomplicated myocardial infarction was beneficial. (Br Med J 1976;1[6018]:1121.) Oxygen administration causes an increase in systemic vascular resistance and a vasoconstrictive effect. It has long been known that cerebral, renal, and retinal vasoconstriction, as well as decreased coronary blood flow, occurs during inhalation of high concentrations of oxygen. Merely producing an elevated PaO2 can create a toxic milieu for a variety of organs. Those authors concluded that routine oxygen administration to all patients with myocardial infarction has little place unless hypoxia is obvious.

 

Current Oxygen Administration Guidelines from the American Heart Association

The 2010 guidelines from the American Heart Association list potential adverse effects of oxygen administration during adult and neonatal resuscitation. They recommend 100% oxygen during initial resuscitation from cardiac arrest, but providers should then titrate oxygen to the lowest level required to achieve an arterial oxygen saturation of at least 94%. This is thought to reduce oxygen toxicity without compromising post-resuscitative care. An arterial oxygen saturation of 100% may correspond to a PaO2 anywhere between 80 and 500 mm Hg, so it is always appropriate to wean the FiO2 when the saturation is 100%, aiming to maintain the saturation at 94% or greater. This was a class 1 recommendation by the AHA in 2010.
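Read as pseudologic, the titration rule amounts to a small feedback loop. The following is only an illustrative sketch of the guidance paraphrased above, not clinical logic; the step size, FiO2 floor, and function name are invented:

```python
def suggest_fio2(spo2_pct: float, fio2: float) -> float:
    """Toy FiO2 titration: wean when SpO2 is pegged at 100%,
    hold in the 94-99% target band, increase below 94%."""
    step = 0.10                        # arbitrary illustrative step size
    if spo2_pct >= 100:
        return max(0.21, fio2 - step)  # wean toward room air (FiO2 0.21)
    if spo2_pct < 94:
        return min(1.0, fio2 + step)   # below target: give more oxygen
    return fio2                        # in the target band: hold
```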

The guidelines note that normal infants generally do not reach normal extrauterine blood oxygen levels until approximately 10 minutes after birth; that is important to keep in mind when resuscitating newborns. Hemoglobin saturation may normally remain at 70-80% for several minutes following a normal birth, even producing the appearance of cyanosis during that time, but the clinical assessment of skin color is a poor indicator of oxyhemoglobin saturation in the neonatal period.

The AHA said insufficient or excessive oxygenation can be harmful to the newborn; even brief exposure to excessive oxygen during a normal delivery may be harmful. No studies have compared outcomes of neonatal resuscitation initiated with a fixed high oxygen concentration versus titration to a targeted oxyhemoglobin saturation, but the American Heart Association believes that initiating resuscitation with room air or a blended oxygen mixture is better for the neonate than 100% oxygen. Even for a cyanotic baby, 100% oxygen is withheld until bradycardia and cyanosis have persisted for 90 seconds.

It appears that it’s normal for an infant to remain blue for 30-50 seconds following normal delivery. The first breath does not give a newborn an oxygen concentration high enough to reverse the cyanosis that has been present in utero. It might be very difficult for most physicians to withhold 100% oxygen in every delivery, but the current thinking is that inspired room air will do just fine, and high oxygen concentrations should be avoided unless the infant is not progressing normally.

Understanding Changes in Established Practice: Pulmonary Artery Catheter Use in Critically Ill Patients.


Abstract

Objective:

Multiple studies suggest that routine use of pulmonary artery catheters is not beneficial in critically ill patients. Little is known about the patterns of “uptake” of practice change that involves removal of a device previously considered standard of care, rather than adoption of a new technique or technology. Our objective was to assess recent pulmonary artery catheter use across ICUs and identify factors associated with high use.

 

Measurements and Main Results:

Trends in pulmonary artery catheter use from 2001 to 2008 were assessed. For 2006–2008, we compared pulmonary artery catheter use across ICUs. We assessed characteristics of ICUs and hospitals in the top quartile for in-ICU pulmonary artery catheter placement (vs the bottom quartile) using chi-square and t tests, and factors associated with in-ICU pulmonary artery catheter insertion using multilevel mixed-effects logistic regression.

Total pulmonary artery catheter use decreased from 10.8% of patients (2001–2003) to 6.2% (2006–2008; p < 0.001); insertion of pulmonary artery catheters in ICU decreased from 4.2% to 2.2% (p < 0.001). In 2006–2008, ICUs in the top quartile for in-ICU pulmonary artery catheter insertion (3.4–25.0% of patients) were more often surgical (54.2% vs 21.7% in the lowest quartile, p = 0.070), located in teaching hospitals (54.2% vs 4.3%, p = 0.001), and led by surgeons (40.9% vs 13.0%, p = 0.067). After multivariable regression, surgical patients (p < 0.001) and all patients in surgical ICUs (p = 0.057) were more likely to have pulmonary artery catheters placed in ICU.
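The abstract names the univariate tests but not the mechanics. As a rough sketch of the quartile comparison in Python with scipy, using contingency counts reverse-engineered to match the reported 54.2% vs 21.7% surgical proportions (roughly 13/24 and 5/23 ICUs; all other numbers are invented for illustration):

```python
import numpy as np
from scipy import stats

# Top vs bottom quartile of in-ICU PAC use: surgical vs nonsurgical ICUs
table = np.array([[13, 11],   # top quartile: 13/24 surgical (54.2%)
                  [5, 18]])   # bottom quartile: 5/23 surgical (21.7%)
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square p = {p:.3f}")

# t test for a continuous ICU characteristic across the two quartiles
rng = np.random.default_rng(0)
top, bottom = rng.normal(12, 3, 24), rng.normal(10, 3, 23)  # invented data
t, p = stats.ttest_ind(top, bottom)
print(f"t-test p = {p:.3f}")
```

The multilevel mixed-effects logistic regression the authors used for patient-level factors additionally accounts for clustering of patients within ICUs, something these simple univariate tests ignore.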

Conclusions:

Use of pulmonary artery catheters in ICU patients has declined but with significant variation across units. Removal of this technology has occurred most in nonsurgical ICUs and patients.

Use of technology and testing in the care of critically ill patients, such as central venous catheters, routine laboratory tests, and CT scans, is common and costly. Many of these tests and interventions are important for patient care, but others are supported by limited evidence and could be considered targets for cost-reduction strategies without anticipated changes in outcomes. Initiatives, such as the American Board of Internal Medicine Foundation’s new “Choosing Wisely” campaign, have enlisted many medical specialty societies in an effort to discourage unnecessary tests and treatments (1). Although many studies have examined adoption of new testing and devices (2–4), we know little about patterns of adoption (or “de-adoption”) that involve removal of a commonly used technology.

Beginning in the 1990s, the use of the pulmonary artery catheter (PAC) declined as many studies—both observational and randomized trials—found no benefit of PACs for the routine management of critically ill patients (5–7) and noninvasive technologies to estimate cardiac output became more readily available (8). Use of PAC in the United States decreased by 65% from 1993 to 2004 among medical hospital admissions (9), and in Canada, its use fell from 16.4% of ICU patients in 2002 to just 6.5% of ICU patients in 2006 (10). Two additional large randomized controlled trials were published in 2005 (11) and 2006 (12) reinforcing the finding that routine use of PACs did not impact outcomes in the critically ill. Although we know that the rate of PAC use in the United States decreased to 2 per 1,000 medical admissions by 2004 (9), no data are available on specific patterns of PAC de-adoption for critically ill patients. We therefore assessed trends in rates of PAC use in U.S. ICUs. We also sought to identify hospital and/or ICU factors associated with continued higher rates of use that might suggest slower adoption of new practice patterns that specifically involves removal of a technology from practice.

DISCUSSION

PAC use in U.S. ICUs decreased throughout the time period studied (2001–2008). However, despite the overall decline, certain units continued to use PACs frequently—placing them in up to one quarter of patients. Furthermore, in high-risk subgroups, such as those on vasopressor medications, some units continued to place PACs in more than 50% of patients. Overall, clinicians in surgical ICUs were more likely to continue to use PACs and surgical patients were more likely to receive a PAC, both prior to and after admission to the ICU, suggesting a different willingness among practitioners in these settings to change practice in a way that involves removal of technology.

The overall trend of declining use of PACs in U.S. ICU patients is consistent with trends seen in other studies (9, 10). A prior study from the United States examined all hospitalized medical patients but was unable to distinguish between patients cared for in ICUs and those who received PACs for other reasons and in other hospital locations (9). The rates of PAC use we found in U.S. ICUs are very similar to the rates found using Canadian ICU data, with 6.5% of Canadian ICU patients receiving a PAC in 2006 and 6.2% of U.S. ICU patients receiving a PAC in 2006–2008 (10). The patient characteristics associated with an increased likelihood of receiving a PAC (being a surgical patient, requiring vasopressors, and receiving mechanical ventilation [MV]) were also similar in the two studies.

Our results reveal that the use of PACs varies significantly across individual ICUs, suggesting, at least in part, variation in the implementation of evidence-based medicine (7, 11, 12). Adoption of a new practice in this case involves removal of a device, rather than adoption of a new technology or initiation of a new protocol. Adoption of new technology may be very rapid, as has been seen with the use of minimally invasive surgery for prostatectomies (4), whereas removing a common technology from practice may be slower. Recent studies demonstrating the lack of utility of intraaortic balloon pumps in the setting of cardiogenic shock following myocardial infarction (17) and the lack of benefit from routine replacement of peripheral intravenous catheters (18) are examples of evidence that may push clinicians to choose not to initiate an intervention rather than to adopt a new one. As we continue to be confronted with data demonstrating the nonutility of “standard of care” techniques, differences between incorporating novel practices and rejecting previously accepted ones may become clear; understanding barriers to the latter may inform strategies to enhance the former.

Variation in clinical practice in critical care is, of course, not unique to use of PACs (19–21). Furthermore, the “appropriate” rate of continued PAC use is not known and it is unlikely that each unit, with its unique make-up of patients and unique hospital environment, would use PACs with the same frequency. However, it seems equally unlikely that the degree of variation found in our cohort is ideal. In particular, the high use of PACs for management of patients requiring vasopressors or MV in some units may suggest a reluctance to consider alternative approaches to care.

In order to minimize variation in practice, it is important to understand the factors that drive it as well as potential barriers to change. Our study identifies patient-, ICU-, and hospital-level factors that are associated with greater PAC use. In particular, our data suggest that providers in ICUs that care for surgical patients may be more likely to continue to use PACs than are clinicians practicing in other critical care environments. The possible explanations for persistent use in surgical units may include 1) a comfort with PACs due to continued common use in the operating room/PACU environment, 2) a belief that surgical patients are inherently different and may benefit from PACs preferentially, or 3) a greater acceptance of invasive monitoring for patients who have already undergone major surgery. While the rationale is not certain, it is clear that surgical ICUs, both in the past (22) and more recently, use PACs more often.

Our study is novel in characterizing the recent epidemiology of PAC use in U.S. ICUs and in pinpointing differences in use across types of ICUs. One main strength of this study stems from the sample size of more than 300,000 patients admitted to approximately 100 ICUs across the United States. Additionally, whereas potential underreporting may have impacted prior analyses that relied on documentation for billing purposes (9), the documentation of PAC use was mandatory in Project IMPACT and the data were collected by trained data collectors in each unit. Finally, in contrast to the Canadian ICU cohort (10), we were able to distinguish which PACs were placed within the ICU (rather than before admission) and to examine not only overall use but also to isolate the patients for whom the decision was made to place the PAC in the ICU itself.

Our study has a number of limitations. As a retrospective analysis, we did not have access to specific information about why a PAC was or was not used for each patient and whether any alternative technology or approaches (e.g., echocardiography, pulse contour methods for determining cardiac output) were used to assist in hemodynamic assessment. In the mid-1990s, technologically advanced PACs able to report continuous cardiac output became available (23, 24); whether and to what extent these catheters were used was not information available in our dataset. Similarly, we did not have information about ICU admission criteria for the units in our cohort and whether they changed over time. Additionally, although we had data on the specialty and year of board certification of the medical director of each unit, we did not have data on either the specialty of the physician in charge of each patient’s care or his/her length of practice. In other areas of critical care, we know that individual physician management may vary substantially (25). Reimbursement patterns are known to impact use of technologies (26). We did account for the insurance status of each patient (which was not associated with PAC use), but we did not have access to the payment structures in each unit and therefore could not evaluate the impact of differing financial incentives across ICUs. As our study is based on data from Project IMPACT, its generalizability to the entire U.S. critically ill population and to non-U.S. ICU patients is uncertain; specifically, the high percentage of academic hospitals in our dataset could skew our results. Also, we excluded patients who had undergone cardiac surgery because our interest lay with the use of PACs in the ICU setting, and we expected that a significant portion of cardiac surgery patients would have received PACs for intraoperative monitoring. Our findings, therefore, do not address the use of PACs in this subpopulation of critically ill patients. Finally, as a retrospective analysis, we are not able to make statements about the causes of the decline in PAC use or its direct effects on patient outcomes, including mortality.

CONCLUSIONS

PACs were a mainstay of the management of critically ill patients in the 1980s (5, 27, 28). The overall use of PACs in the ICU setting has declined dramatically over the past 20 years, suggesting a willingness by physicians caring for critically ill patients to change practice based on new evidence. However, the variable use across ICUs, and continued use at high rates in some units, demonstrates that PACs for hemodynamic monitoring are not yet an obsolete technology in U.S. ICUs. Furthermore, our analysis suggests that willingness to “de-adopt” a technology is not random; high use is more consistently found in certain practice settings and for specific types of patients.

Screen-and-Treat Approach to Cervical Cancer Prevention Using Visual Inspection With Acetic Acid and Cryotherapy: Experiences, Perceptions, and Beliefs From Demonstration Projects in Peru, Uganda, and Vietnam.


Abstract

Cervical cancer is preventable but continues to cause the deaths of more than 270,000 women worldwide each year, most of them in developing countries where programs to detect and treat precancerous lesions are not affordable or available. Studies have demonstrated that screening by visual inspection of the cervix using acetic acid (VIA) is a simple, affordable, and sensitive test that can identify precancerous changes of the cervix so that treatment such as cryotherapy can be provided. Government partners implemented screening and treatment using VIA and cryotherapy at demonstration sites in Peru, Uganda, and Vietnam. Evaluations were conducted in the three countries to explore the barriers and facilitating factors for the use of services and for incorporation of screen-and-treat programs using VIA and cryotherapy into routine services. Results showed that use of VIA and cryotherapy in these settings is a feasible approach to providing cervical cancer prevention services. Activities that can help ensure successful programs include mobilizing and educating communities, organizing services to meet women’s schedules and needs, and strengthening systems to track clients for follow-up. Sustainability also depends on having an adequate number of trained providers and reducing staff turnover. Although some challenges were found across all sites, others varied from country to country, suggesting that careful assessments before beginning new secondary prevention programs will optimize the probability of success.

Scientists say Fukushima’s food is safe. So why aren’t the Japanese eating it?


Since the nuclear meltdown, the region’s seafood and agriculture industries have suffered — largely because of mistrust of the government.

TOKYO — After Fukushima suffered the world’s worst nuclear meltdown since Chernobyl nearly three years ago, Japanese government officials say the region’s food is safe to eat. Problem is, neither its producers nor consumers trust them anymore.

Since the meltdown, Fukushima has dropped from the nation’s fourth-largest rice producer to its seventh, with production reportedly slipping 17 percent, according to the agriculture ministry. Roughly 100,000 farmers have lost an estimated 105 billion yen ($1 billion). Livestock farming once thrived in Fukushima – until most of its farmers were forced to evacuate; 5,000 cattle were ordered slaughtered, and the rest were left to starve.

At a testy meeting last fall between government representatives and farmers from Sukagawa and Soma, two of Fukushima’s largest food-producing areas, one Sukagawa farmer noted that the government approves shipments of food that test below 100 becquerels (units of radioactivity) per kilogram, a threshold lower than its original 500 Bq/kg limit (and in line with global standards), and that such food sells at below-market value. But he would not allow his own family to eat the food he is allowed to sell.

“We won’t eat it ourselves, but you tell us to sell it to others. Do you know how guilty this makes us feel? There is no pride or joy in our work anymore.”

But despite the gut instinct that food from Fukushima cannot be safe, prominent scientists back up the government, with some noting that early evacuations, land restrictions and decontamination efforts, together with Japan’s naturally iodine-rich seafood diet, make eating Fukushima’s food today a smaller radiation exposure than an average CT scan.

The real culprit behind Fukushima’s agricultural woes may be what most Japanese consider egregious government lies and obfuscation (most notably, waiting two months before even conceding the word ‘meltdown’), tight-lipped secrecy around its data and laughably low-tech decontamination strategies that don’t seem like a match for nuclear contamination. Compounding the problem, some scientists agree the data isn’t good.

Nancy Foust, a U.S.-based researcher and technology and communications specialist with SimplyInfo.org, a multi-disciplinary research group monitoring the Fukushima decontamination efforts, says, “We have found efforts to decontaminate rice paddies, but hard data on things like before-and-after crops have been hard to come by. The decontamination techniques so far have involved either deep tilling to shove the top soil down deep, or mixing in potassium to try to prevent the plants from taking up cesium.”

And that’s the rub: Most of the techniques employed in Japan – ranging from soil scraping (skimming off the first three centimeters of soil and storing it in massive canvas bags called “ton packs”), to tilling, to power-blasting the bark off fruit trees, to water sweeping with Karchers (high-pressure, industrial-strength cleaning machines) – are primitive at best, near-replicas of strategies used at Chernobyl nearly 28 years ago. Worse, the results of such efforts are often kept secret, or at least oblique, by the government officials overseeing them.

“All of our requests for disclosure have been rejected,” says Nobuyoshi Ito, a rice farmer in Iidate and former systems engineer who has emerged as a widely cited grassroots expert on decontamination. He has been conducting his own tests with Geiger counters and other equipment to compare results with government figures, sending them to a laboratory in Shizuoka prefecture for confirmation. (A technician at the lab said he was actually better informed than most Japanese government officials on the subject of contamination.) “Even the Ministry of the Environment, the ones who actually lead the decontamination work, are unclear, or at a loss to specifically quantify their assessments. When we ask them about the possible reduction of the rate of radiation, they answer, ‘We won’t know until we try.’” According to Ito, the government keeps kicking the can down the road, refusing to publicly release its findings, claiming that they are still in progress.

If rice and produce are hard to assess, fish and seafood pose an even bigger challenge. Marine creatures are always on the move, following tides and currents. “Some fish in one area of the sea are contaminated, others aren’t,” says Foust. “They’re having better luck focusing on certain breeds of bottom feeders. The rock fish, for example, almost always show some level of contamination, though it’s usually low. They’re reliable, but they don’t show the extremes.”

The government botched this test as well, said Foust, displaying samples like octopi, which typically have low levels, to claim all seafood was safe. While natural iodine from some seafood helps cancel out the radioactive iodine in fast-moving fish, using octopi as a standard misrepresents localized risk. No one was fooled, further eroding trust.

So, the fear of Fukushima’s food persists. Geraldine Thomas, Professor of Molecular Pathology at Imperial College London and scientific director of the Chernobyl Tissue Bank, was asked to assess likely health effects from Fukushima after her extensive work on thyroid cancer cases in Russia. Thomas finds the food fear in Japan baffling – a sign of modern and misbegotten hysteria.

“The most important thing to do immediately after the accident was to restrict the consumption of locally produced milk and green leafy vegetables, which are known to concentrate [radioactive] iodine,” she says, as opposed to the healthy, natural iodine found in some seafood. “This the Japanese government did very well – in contrast to the Soviet authorities following the Chernobyl accident. The Japanese continue to monitor foodstuffs, and [they] have imposed even stricter limits on radiation in foodstuffs from Fukushima prefecture than we have for our own produce in the U.K. and the U.S.”

Dr. Ian Fairlie, an independent consultant on radioactivity in the environment who is closely monitoring Fukushima, says that the Japanese should fear radiation – just not necessarily in the region’s food. “Contaminated food intakes are a relatively small part of the problem. People near Fukushima are more exposed via direct radiation (groundshine): smaller doses also come from water intakes, and from inhalation.”

Adds Professor Thomas: “Both the World Health Organization and the United Nations Scientific Committee on the Effects of Atomic Radiation agree that the biggest threat to health post Fukushima is the fear of radiation, not the radiation itself. Personally I would have no worries about consuming food from Fukushima – and in fact did so when I was in Tokyo last April.”

New classification system proposed for breast cancer types: Spoonful of Medicine


Since the late 1970s, clinicians have distinguished breast cancer types according to the presence or absence of certain receptors that sit on the surface of these tumor cells. Depending on the receptors found—namely, the estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2)—a doctor can get a better sense of the prognosis and which treatments might work.

Now, a study published this week in the Journal of Clinical Investigation proposes a new conceptual framework that classifies breast cancers based on whether the cells possess receptors for other molecules, such as androgen and vitamin D. Under this system, current cancer subtypes would be stratified into one of four groups.

“I’m very excited about this paper,” says Jorge Reis-Filho, a pathologist at the Memorial Sloan Kettering Cancer Center in New York who was not involved in the research. He adds that the proposed classification system could point to new therapies for breast cancers previously categorized as unlikely to respond to treatment.

In the new study, researchers sought to explain some of the diversity observed in human breast tumors by obtaining a more detailed understanding of normal breast cell subtypes. “I approached this question like an evolutionary biologist trying to figure out the ancestry of a species,” says study co-author Tan Ince, a pathologist at the University of Miami Miller School of Medicine.

Ince and his team scanned samples of normal breast tissue for proteins expressed in a “bimodal” or on/off pattern—highly expressed in some cells but completely absent in others. The researchers focused on the handful of proteins that displayed this pattern and subsequently characterized 11 previously undescribed subtypes of luminal cells—one of the two major epithelial cell types found in mammary glands—each expressing a distinct combination of these proteins.

Using the expression patterns of just three such proteins—the receptors for estrogen, androgen and vitamin D—the researchers grouped the 11 cell subtypes into four categories that were dubbed HR0, HR1, HR2 and HR3 based on how many of the hormone receptor types the cells contained.
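That grouping rule is simple enough to state as code. A minimal sketch of the HR0–HR3 naming (the function and boolean receptor calls are hypothetical stand-ins for the paper’s expression assays):

```python
def hr_category(er: bool, ar: bool, vdr: bool) -> str:
    """Name the group by how many of the three hormone receptors
    (estrogen, androgen, vitamin D) the cell expresses."""
    return f"HR{int(er) + int(ar) + int(vdr)}"

assert hr_category(er=True, ar=True, vdr=True) == "HR3"
assert hr_category(er=False, ar=True, vdr=False) == "HR1"
```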

Classification act

Ultimately, the researchers used their classification system to characterize more than 3,000 human breast tumors. To their surprise, Ince says, 95% of breast tumors displayed protein expression patterns that resembled one of the newly defined breast cell subtypes. Moreover, the categories that the tumors belonged to were associated with significantly different prognoses: for example, patients whose tumors belonged to the HR0 category had an almost threefold greater risk of death within the first five years after diagnosis compared with those who had HR3 cancers. The classification scheme also provided some insights into the treatment of each tumor type. For example, the authors showed that compounds that bind the vitamin D receptor effectively blocked the growth of cell lines derived from triple-negative breast cancers—those that lack ER, PR and HER2 and therefore have few effective treatments.

“The current system is very well established—changing that is not something that is really in the grand scheme,” says Sandro Santagata, a pathologist at Brigham and Women’s Hospital and Harvard Medical School in Boston and lead author of the study. “We’re just hoping to add an additional layer of information.”

Lajos Pusztai, an oncologist at the Yale School of Medicine in New Haven, Connecticut, called the findings “thought provoking” and likely to motivate further research, but noted a few caveats that limit their clinical implications. “The new classification put forward by this paper includes markers that are not standardized,” he says. Furthermore, the prognostic ability of the new HR categories is not substantially better than that of existing methods for predicting disease recurrence, he adds.

Other scientists agree that the classification system is still not ready for primetime. “Additional work is required to see how we can translate [this classification system] into something that we can use in the clinic,” says Reis-Filho. “The task of translating these findings into assays that we could run in pathology departments across the country is not a trivial one.”

 

The Only Two Ways to Live Your Life.


“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.” ~Albert Einstein

We come into this world as wonderful, loving and precious beings who look at the world with eyes of love, wonder and purity. As we grow older and progress through life, things start to change, and we begin to adopt all sorts of beliefs, fears, excuses and limitations that move us further and further away from our loving soul and lovable Self.

In his book, The Honeymoon Effect, Bruce Lipton explains how the foundation of our personality is formed from the moment of conception: “The fetus, for example, absorbs cortisol and other stress hormones if the mother is chronically anxious. If the child is unwanted for any reason, the fetus is bathed in the chemicals of rejection. If the mother is wildly in love with her baby and her partner, the fetus is bathed in the love potions. If the mother is furious with the father, who has abandoned her during the pregnancy, the fetus is bathed in the chemicals of anger.”

The child’s personality and perception of the world are shaped by the prenatal environment. Based on the perceptions of the mother during pregnancy, the child will perceive the world as either a loving or a fearful place. Parents have a huge influence over their children, playing a major role in how their personality is shaped and how they will later perceive the world around them. None of us can remember what happened in those first nine months, nor can we remember many of the things that happened in the first years of our lives, but that doesn’t mean those things didn’t happen. Nor does it mean they didn’t have a huge impact on how our lives were shaped and how we have lived up until this moment.

“In their theta-induced hypnagogic trance, children carefully observe as well as listen to their parents and then mimic their behavior by downloading it into their subconscious minds. When parents model great behavior, theta hypnosis represents a fabulous tool that enhances a child’s ability to learn all kinds of skills to survive in the world. And when parental behavior is not so great, the same theta “recordings” can drag the child’s life into the ground.” ~ Bruce Lipton, The Honeymoon Effect

Our lives are shaped by how we perceive our environment, and how we perceive our environment is usually determined by how we are conditioned to perceive it by our parents, our teachers, the media and the society we live in. Everyone around us plays a huge role in how our lives are shaped. Everything that we see and everything that we hear will either enrich and empower us, or frighten and diminish us.

If something happened to us in the past, if we were hurt emotionally, physically and/or mentally, and if we couldn’t find a way to heal those wounds, forgive and let go of those painful memories, chances are that we have continued to allow our past to poison us, our relationships and our present lives.

“Tolerance for pain may be high, but it is not without limit. Eventually everyone begins to recognize, however dimly, that there must be a better way. As this recognition becomes more firmly established, it becomes a turning point.” ~ A Course in Miracles

When the time comes for us to leave this world, none of this will matter. Our fears, anger and resentments will no longer have the value we now give them, and the things we now worry about will be of no importance to us whatsoever.

We came into this world to honor our true nature and to share our beauty, our authenticity and our perfection with the whole world. We didn’t come here to shrink, to fear and to suffer. We came here to be love and to exude love. We came here to experience the gift of life and to give life meaning by making the best of it.

A Course in Miracles says that there are only two emotions, love and fear, and that all the other positive and negative emotions we are so familiar with are derived from one or the other. Love is real and fear is not. When we live in fear, we perceive ourselves as small, insignificant and of no great value. When we live in love, we open ourselves to receiving the many wonderful gifts that life has to offer.

I chose fear and I lived in fear for more than 25 years, and those were some of the most painful years of my life. The pain of those years made me realize that I don’t want to live in fear, and that I don’t want to feel small and insignificant anymore. I’m done with that! The pain of those years made me realize that I don’t want to get to the end of my life, look back and realize that I could’ve done it all differently. I don’t want to live a life of fear and regrets; I want to love and be loved. I want to be happy. And happy I am, and happy I will continue to be. Life is too short to be anything but happy :)

Always remember, there are two ways to live your life. Choose LOVE and everything will look like a miracle to you. Choose FEAR and nothing will seem like a miracle.