Toddlers becoming so addicted to iPads they require therapy.


http://www.telegraph.co.uk/technology/10008707/Toddlers-becoming-so-addicted-to-iPads-they-require-therapy.html

BBC News – Ex-footballer with motor neurone disease in London Marathon.


http://m.bbc.co.uk/news/uk-england-manchester-22213895

Nurses swamped by paperwork.


http://www.telegraph.co.uk/health/healthnews/10008392/Nurses-swamped-by-paperwork.html

Panda’s artificial insemination.


http://m.bbc.co.uk/news/uk-scotland-22238189

Strategy for detection of prostate cancer based on relation between prostate specific antigen at age 40-55 and long term risk of metastasis: case-control study.


Abstract

Objective To determine the association between concentration of prostate specific antigen (PSA) at age 40-55 and subsequent risk of prostate cancer metastasis and mortality in an unscreened population to evaluate when to start screening for prostate cancer and whether rescreening could be risk stratified.

Design Case-control study with 1:3 matching nested within a highly representative population based cohort study.

Setting Malmö Preventive Project, Sweden.

Participants 21 277 Swedish men aged 27-52 (74% of the eligible population) who provided blood at baseline in 1974-84, and 4922 men invited to provide a second sample six years later. Rates of PSA testing remained extremely low during median follow-up of 27 years.

Main outcome measures Metastasis or death from prostate cancer ascertained by review of case notes.

Results Risk of death from prostate cancer was associated with baseline PSA: 44% (95% confidence interval 34% to 53%) of deaths occurred in men with a PSA concentration in the highest 10th of the distribution of concentrations at age 45-49 (≥1.6 µg/L), with a similar proportion for the highest 10th at age 51-55 (≥2.4 µg/L: 44%, 32% to 56%). Although a 25-30 year risk of prostate cancer metastasis could not be ruled out by concentrations below the median at age 45-49 (0.68 µg/L) or 51-55 (0.85 µg/L), the 15 year risk remained low at 0.09% (0.03% to 0.23%) at age 45-49 and 0.28% (0.11% to 0.66%) at age 51-55, suggesting that longer intervals between screening would be appropriate in this group.

Conclusion Measurement of PSA concentration in early midlife can identify a small group of men at increased risk of prostate cancer metastasis several decades later. Careful surveillance is warranted in these men. Given existing data on the risk of death by PSA concentration at age 60, these results suggest that three lifetime PSA tests (mid to late 40s, early 50s, and 60) are probably sufficient for at least half of men.

Discussion

Overview of findings

PSA concentration can be used to predict long term risk of metastasis or death from prostate cancer. It can identify a small group of men at greatly increased risk compared with a much larger group highly unlikely to develop prostate cancer morbidity if rescreening is delayed for seven or eight years. As PSA screening was extremely rare in this cohort, our findings can be used to design screening programmes by determining the age at which men should start to undergo screening and the interval between screenings. Men at low risk of death from prostate cancer without screening have little to gain from being screened but still risk overdiagnosis and overtreatment; men likely to die from prostate cancer without screening could avoid cancer specific mortality if they choose to be screened.

In an earlier paper, we showed that PSA concentration at age 60 had a strong association with the risk of death from prostate cancer by age 85 (AUC 0.90),7 with extremely low risk (≤0.2%) in men with PSA concentration below the median (≤1.0 µg/L). Taken together with our current data, this suggests a simple algorithm for prostate screening. All men with a reasonable life expectancy could be invited for PSA screening in their mid to late 40s. Men with a PSA concentration <1.0 µg/L would be advised to return for screening in their early 50s and then again at age 60, whereas men with PSA ≥1.0 µg/L would return for more frequent screening, with the literature suggesting repeat tests every two or four years.19 The choice of 1.0 µg/L as a tentative threshold might vary according to preference. At age 60, men with PSA at the median or lower—that is, ≤1.0 µg/L (or possibly below the highest quarter, ≤2.0 µg/L, depending on preference)—would then be exempted from further screening; men with a higher concentration would continue to undergo screening until around 70.1 Particular focus should be placed on men in the highest 10% of PSA concentrations at age 45-55, who will contribute close to half of all deaths from prostate cancer occurring before the age of 70-75. Some of these men will have concentrations above current thresholds for consideration of biopsy—such as 3 µg/L—and should be referred to a urologist. The remaining men could be told that, although they will probably not die from prostate cancer (with a mean risk of metastasis within 25 years close to 10%), they are at much higher risk than average and that it is especially important that they return for regular, frequent, and possibly more elaborate screening. It is also worth considering whether management of these men should become proactive, with reminder letters and attempts to follow up non-compliers by telephone. Most importantly, the proposed PSA concentration of 1.0 µg/L to discriminate a low from a higher risk group is not suggested to serve as an indication for biopsy but rather to be used to determine the frequency and intensity of subsequent monitoring.
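Read as decision logic, the proposed schedule is compact enough to restate in code. The sketch below is only an illustration of the algorithm described in the paragraph above, not a clinical tool: the function name and the simplified age banding are our own framing, while the thresholds (1.0 µg/L and 2.0 µg/L for stratification, 3 µg/L for biopsy referral, stopping near age 70) are those quoted in the text.

    def next_psa_step(age, psa_ug_per_l):
        """Illustrative restatement of the risk stratified schedule proposed above."""
        if psa_ug_per_l >= 3.0:
            # above a commonly cited threshold for considering biopsy
            return "refer to a urologist"
        if age >= 70:
            return "no further screening"
        if age >= 60:
            # at 60, median or lower (<=1.0 ug/L; optionally <=2.0) exits screening
            if psa_ug_per_l <= 1.0:
                return "exempt from further screening"
            return "continue screening until around age 70"
        # first test in the mid to late 40s, second in the early 50s
        if psa_ug_per_l < 1.0:
            return "return in the early 50s (if not yet there), then at age 60"
        return "return for more frequent screening, e.g. every two or four years"

For example, next_psa_step(47, 0.7) falls into the low risk arm (next tests in the early 50s and at 60), while next_psa_step(60, 2.4) continues screening until around age 70.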

What is already known on this topic

  • Prostate specific antigen (PSA) screening is widely used for the early detection of prostate cancer but remains highly controversial
  • Focusing on the men at highest risk of prostate cancer metastasis and death could improve the ratio between benefits and harms of screening

What this study adds

  • It is difficult to justify initiating PSA screening at age 40 for men with no other significant risk factor
  • Men with PSA in the highest 10th at age 45-49 contribute nearly half of prostate cancer deaths over the next 25-30 years
  • At least half of all men can be identified as being at low risk, and probably need no more than three PSA tests in a lifetime (mid to late 40s, early 50s, and 60)

Source: BMJ

Robert Lustig: The no candy man.


A man who declares that sugar is a toxin in the same league as cocaine and alcohol, and one that must be regulated in the same manner as tobacco, is apt to draw public attention. But Robert Lustig, professor of clinical paediatrics at the University of California, San Francisco, is not camera shy. Indeed, he revels in the attention, even when it is not always flattering. Where other academics might feel uncomfortable, he exploits his fame to full effect. For example, at a recent symposium in London he argued that sugar was an addictive and dangerous substance, singularly responsible for the soaring rates of obesity and diabetes around the world. He began his speech with a quotation from Gandhi and concluded by declaring a war against the sugar industry. The audience responded with rapture and enthusiasm.

Lustig, a paediatric endocrinologist specialising in neuroendocrinology, owes his fame predominantly to a lecture, posted on YouTube, entitled “Sugar: The Bitter Truth” (www.youtube.com/watch?v=dBnniua6-oM). At the time of writing, it had had more than 3.3 million views. Not bad for a 90 minute lecture, the bulk of which is devoted to complex biochemical reactions that happen in the liver. But Lustig is an engaging and passionate speaker, prone to rhetorical flourishes and dramatic pronouncements, which keeps his audience, virtual and real, interested.

Source: BMJ

Genital warts in young Australians five years into national human papillomavirus vaccination programme: national surveillance data.


Abstract

Objective To measure the effect on genital warts of the national human papillomavirus vaccination programme in Australia, which started in mid-2007.

Design Trend analysis of national surveillance data.

Setting Data collated from eight sexual health services from 2004 to 2011; the two largest clinics also collected self reported human papillomavirus vaccination status from 2009.

Participants Between 2004 and 2011, 85 770 Australian born patients were seen for the first time; 7686 (9.0%) were found to have genital warts.

Main outcome measure Rate ratios comparing trends in proportion of new patients diagnosed as having genital warts in the pre-vaccination period (2004 to mid-2007) and vaccination period (mid-2007 to the end of 2011).

Results Large declines occurred in the proportions of under 21 year old (92.6%) and 21-30 year old (72.6%) women diagnosed as having genital warts in the vaccination period—from 11.5% in 2007 to 0.85% in 2011 (P<0.001) and from 11.3% in 2007 to 3.1% in 2011 (P<0.001), respectively. No significant decline in wart diagnoses was seen in women over 30 years of age. Significant declines occurred in proportions of under 21 year old (81.8%) and 21-30 year old (51.1%) heterosexual men diagnosed as having genital warts in the vaccination period—from 12.1% in 2007 to 2.2% in 2011 (P<0.001) and from 18.2% in 2007 to 8.9% in 2011 (P<0.001), respectively. No significant decline in genital wart diagnoses was seen in heterosexual men over 30 years of age. In 2011 no genital wart diagnoses were made among 235 women under 21 years of age who reported prior human papillomavirus vaccination.
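The bracketed percentage declines follow directly from the yearly proportions as relative changes between 2007 and 2011; a minimal sketch re-deriving them (the helper name is ours):

    # Relative decline between the 2007 and 2011 proportions quoted above
    decline = lambda p_2007, p_2011: 100 * (1 - p_2011 / p_2007)
    print(round(decline(11.5, 0.85), 1))  # 92.6, women under 21
    print(round(decline(11.3, 3.1), 1))   # 72.6, women 21-30
    print(round(decline(12.1, 2.2), 1))   # 81.8, heterosexual men under 21
    print(round(decline(18.2, 8.9), 1))   # 51.1, heterosexual men 21-30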

Conclusions The significant declines in the proportion of young women found to have genital warts and the absence of genital warts in vaccinated women in 2011 suggest that the human papillomavirus vaccine has high efficacy outside of the trial setting. Large declines in diagnoses of genital warts in heterosexual men are probably due to herd immunity.

Discussion

This study shows that the proportion of young women diagnosed as having genital warts has continued to decline since the quadrivalent human papillomavirus vaccination programme started in Australia in 2007. Less than 1% of women aged under 21 years presenting at sexual health services were found to have genital warts in 2011, compared with 10.5% in 2006 before the vaccination programme started. By 2011, no genital warts were diagnosed in women aged under 21 who reported being vaccinated. A significant decline also occurred in genital wart diagnoses in 21-30 year old women, a trend not observed in older women. A similar pattern of declining diagnoses, but of a lesser magnitude, was seen in young heterosexual men.

What is already known on this topic

  • Clinical trials have shown that the quadrivalent human papillomavirus (HPV) vaccine offers up to 100% protection against lesions caused by HPV 6 and 11
  • Declines in the incidence of genital warts have been documented in several countries, depending on the level of coverage with quadrivalent HPV vaccine
  • About 90% of cases of genital warts are generally thought to be due to HPV 6 and 11

What this study adds

  • In women, with 83% first dose vaccine coverage, a 93% decline in diagnosis of genital warts was seen by the fifth year of the national quadrivalent HPV vaccination programme in Australia
  • Despite men not being vaccinated, an 82% decline in genital warts occurred in heterosexual men, attributable to herd immunity
  • No women who reported that they had been vaccinated were diagnosed as having genital warts in the final year of the study

Source: BMJ

The science of obesity: what do we really know about what makes us fat? An essay by Gary Taubes.


The history of obesity research is a history of two competing hypotheses. Gary Taubes argues that the wrong hypothesis won out and that it is this hypothesis, along with substandard science, that has exacerbated the obesity crisis and the related chronic diseases. If we are to make any progress, he says, we have to look again at what really makes us fat.

Since the 1950s, the conventional wisdom on obesity has been simple: it is fundamentally caused by or results from a net positive energy balance—another way of saying that we get fat because we overeat. We consume more energy than we expend. The conventional wisdom has also held, however, that efforts to cure the problem by inducing undereating or a negative energy balance—whether by counselling patients to eat less or to exercise more—are remarkably ineffective.

Put these two notions together and the result should be a palpable sense of cognitive dissonance. Take, for instance, The Handbook of Obesity, published in 1998 and edited by three of the most influential authorities in the field. “Dietary therapy,” it says, “remains the cornerstone of treatment and the reduction of energy intake continues to be the basis of successful weight reduction programs.” And yet it simultaneously describes the results of such dietary therapy as “poor and not long-lasting.”1

Rather than resolve this dissonance by questioning our beliefs about the cause of obesity, the tendency is to blame the public (and obese patients implicitly) for not faithfully following our advice. And we embrace the relatively new assumption that obesity must be a multifactorial and complex disorder. This makes our failures to either treat the disorder or rein in the burgeoning epidemics of obesity worldwide somehow understandable, acceptable.

Another possibility, though, is that our fundamental understanding of the aetiology of the disorder is indeed incorrect, and this is the reason for the lack of progress. If this is true, and it certainly could be, then rectifying this aetiological misconception is absolutely critical to future progress.

Energy balance hypothesis

Despite its treatment as a gospel truth, as preordained by physical law, the energy balance or overeating hypothesis of obesity is only that, a hypothesis. It’s largely the product of the influential thinking of two physicians—the German diabetes specialist Carl von Noorden at the beginning of the 20th century, and the American internist and clinical investigator Louis Newburgh, a quarter century later. Its acceptance as dogma came about largely because its competing hypothesis—that obesity is a hormonal, regulatory disorder—was a German and Austrian hypothesis that was lost with the anti-German sentiment after the second world war and the subsequent embracing of English, rather than German, as the lingua franca of science.

Medicine today is often taught untethered from its history—unlike physics, for instance—which explains why the provenance of the energy balance hypothesis is little known, even by those physicians and researchers who are its diehard proponents. Nor is it widely known that a competing hypothesis ever existed, and that this hypothesis may have done a better job of explaining the data and the observations. Knowing this history is crucial to understanding how we got into the current situation and, indeed, how we might solve it.

The applicability of the laws of thermodynamics to living organisms dates from the 1880s and the research of the German physiologist Max Rubner. By the end of the 19th century, the American scientists Wilbur Atwater and Francis Benedict had confirmed that these laws held for humans as well: that the calories we consumed would be burned as fuel, stored, or excreted.2 This revelation then led von Noorden to propose that “the ingestion of a quantity of food greater than that required by the body, leads to an accumulation of fat, and to obesity, should the disproportion be continued over a considerable period.”3

By the late 1920s, Newburgh had taken up the energy balance banner at the University of Michigan and was promoting it based on what he believed to be a fundamental truth: “All obese persons are alike in one fundamental respect—they literally overeat.” As such, he blamed obesity on either a “perverted appetite” (excessive energy consumption) or a “lessened outflow of energy” (insufficient expenditure).4 If the obese person’s metabolism was normal, he argued, and they still refused to rein in their intake, that was sufficient evidence to assume that they were guilty of “various human weaknesses such as overindulgence and ignorance.”5

By 1939, Newburgh’s biography at the University of Michigan was crediting him with the discovery that “the whole problem of weight lies in regulation of the inflow and outflow of calories” and for having “undermined conclusively the generally held theory that obesity is the result of some fundamental fault.”6

As sceptics pointed out at the time, though, the energy balance notion has an obvious flaw: it is tautological. If we get fatter (more massive), we have to take in more calories than we expend—that’s what the laws of thermodynamics dictate—and so we must be overeating during this fattening process. But this tells us nothing about cause. Here’s the circular logic:

Why do we get fat? Because we overeat.

How do we know we’re overeating? Because we’re getting fatter.

And why are we getting fatter? Because we’re overeating.

And so it goes, round and round.

“The statement that primary increase of appetite may be a cause of obesity does not lead us very far,” wrote the Northwestern University School of Medicine endocrinologist Hugo Rony in 1940 in Obesity and Leanness, “unless it is supplemented with some information concerning the origin of the primarily increased appetite. What is wrong with the mechanism that normally adjusts appetite to caloric output? What part of this mechanism is primarily disturbed?” Any regulatory defect that drove people to gain weight, Rony noted, would induce them to take in more calories than they expend. “Positive caloric balance would be, then, a result rather than a cause of the condition.”7

Endocrinological hypothesis

The alternative hypothesis that Newburgh’s work had allegedly undermined was the idea that some “intrinsic abnormality”—Rony’s words—was at the root of the disorder. This was an endocrinological hypothesis. It took the laws of physics as a given; it rejected aberrant behaviour or ignorance as causal. It existed at the time as two distinct hypotheses.

One was the brainchild of Wilhelm Falta, a student of von Noorden and a pioneer of the science of endocrinology. Falta believed that the hormone insulin must be driving obesity on the basis, as he noted as early as 1923, that “a functionally intact pancreas is necessary for fattening.”8 Once insulin was discovered, Falta considered it the prime suspect in obesity. “We can conceive,” he wrote, “that the origin of obesity may receive an impetus through a primarily strengthened function of the insular apparatus, in that the assimilation of larger amounts of food goes on abnormally easily, and hence there does not occur the setting free of the reactions that in normal individuals work against an ingestion of food which for a long time supersedes the need.”9

The other version of the hypothesis was bound up in a concept known as lipophilia. It was initially proposed in 1908 by Gustav von Bergmann, a German authority on internal medicine, and then taken up by Julius Bauer, who did pioneering work on endocrinology, genetics, and chronic disease at the University of Vienna.

Von Bergmann initially evoked the term lipophilia (“love of fat”) to explain why fat deposition was not uniform throughout the body. Just as we grow hair in some places and not others, according to this thinking, we fatten in some areas and not others, and biological factors must regulate this. People who are constitutionally predisposed to fatten, von Bergmann proposed, had adipose tissue that was more lipophilic than that of constitutionally lean individuals. And if fat cells were accumulating excessive calories as fat, this would deprive other organs and cells of the energy they needed to thrive, leading to hunger or lethargy. These would be compensatory effects of the fattening process, not causes.

“Like a malignant tumor or like the fetus, the uterus or the breasts of a pregnant woman,” explained Bauer, “the abnormal lipophilic tissue seizes on foodstuffs, even in the case of undernutrition. It maintains its stock, and may increase it independent of the requirements of the organism. A sort of anarchy exists; the adipose tissue lives for itself and does not fit into the precisely regulated management of the whole organism.”10

Erich Grafe, director of the Clinic of Medicine and Neurology at the University of Würzburg, discussed these competing hypotheses in his seminal textbook Metabolic Diseases and Their Treatment, which was published in an English translation in 1933. Grafe said he favoured the energy balance model of obesity but acknowledged that this model failed to explain a key observation—why fat accumulates in certain regions of the body. “The energy conception certainly cannot be applied to this realm,” Grafe wrote. The lipophilia hypothesis could.

Grafe described lipophilia as “a condition of abnormally facilitated fat production and impeded fat destruction . . . a sort of lipomatosis universalis, in the sense that the lipophilia in certain tissues is primary and the sparing in the energy expended is secondary.” But he found the hypothesis troubling “so far as it presupposes overnutrition.” He acknowledged, nonetheless, that it was “a good working hypothesis.” As for Falta’s notions, Grafe wrote, “the fact that insulin is an excellent fattening substance has been observed.”11

By 1938, Russell Wilder of the Mayo Clinic (later to become director of the National Institute of Arthritis and Metabolic Diseases) was writing that the lipophilia hypothesis “deserves attentive consideration,” and that “the effect after meals of withdrawing from the circulation even a little more fat than usual might well account both for the delayed sense of satiety and for the frequently abnormal taste for carbohydrate encountered in obese persons . . . A slight tendency in this direction would have a profound effect in the course of time.”12

Two years later, Rony wrote in Obesity and Leanness that the hypothesis was “more or less fully accepted” in Europe.

Language barrier

Maybe so. But it was lost with the second world war and the embracing of English as the lingua franca of science afterwards. In Grafe’s chapters on obesity, over 90% of the 235 references are from the German language literature. In Rony’s Obesity and Leanness, this is true for a third of the almost 600 references. But post-war, the German language references fall away quickly. In Obesity…, published in 1949 by two Mayo Clinic physicians—Edward Rynearson and Clifford Gastineau—only 14 of its 422 references are from the German language literature, compared with a dozen from Louis Newburgh alone. By the late 1960s and 1970s, when the next generation of textbooks were written, German language references were absent almost entirely, as were the clinical observations, experience, and intuitions that went with them.

By then, obesity had evolved into an eating disorder, to be treated and studied by psychologists and psychiatrists, while laboratory researchers focused (as they still do) on identifying the physiological determinants of hunger, satiety, and appetite: why do we eat too much, rather than why do we store too much fat? Two entirely different questions.

What makes this transition so jarring in retrospect is that it coincided with the identification of the hormone insulin in the early 1960s as the primary regulator of fat accumulation in fat cells.13 Had Falta’s ideas and the lipophilia hypothesis survived the second world war, this discovery would have served to bring these two hypotheses together. And because serum insulin levels are effectively driven by the carbohydrate content of the diet, this hypothesis would implicate refined, high glycaemic grains and sugars (sucrose and high fructose corn syrup, in particular) as the environmental triggers of obesity. They would be considered uniquely fattening, just as Falta had suggested, not because we overeat them—whatever that means—but because they trigger a hormonal response that drives the partitioning of the fuel consumed into storage as fat.

This might have been perceived, although it was not, as a medical triumph: the elucidation of both the biological underpinnings of obesity as well as an explanation for what was until then the conventional wisdom on the cause. “Every woman knows that carbohydrate is fattening,” as Reginald Passmore and Yola Swindells wrote in the British Journal of Nutrition in 1963: “this is a piece of common knowledge, which few nutritionists would dispute.”14

Academic backlash

That this insulin-carbohydrate hypothesis never gained traction can be explained, paradoxically, by the fact that it was embraced by practising physicians, who read the physiology and biochemistry literature and then designed carbohydrate restricted diet plans that seemed to work remarkably well. Indeed, the sessions on dietary therapy for obesity in the scattering of obesity conferences held from the end of the second world war through the mid-1970s invariably focused on the surprising efficacy of carbohydrate restricted diets to reduce excess adiposity.

When those physicians then wrote diet books based on their regimens, and these books then sold exceedingly well—Dr Atkins’ Diet Revolution (1972) most notably—the result was a backlash from academic nutritionists and obesity researchers. Fred Stare, for instance, head of the Harvard nutrition department, testified in 1972 Congressional hearings that physicians prescribing such diets were “guilty of malpractice,” on the basis that these diets were rich in saturated fat at a time when the medical community was coming to believe that high fat diets were the cause of heart disease. Exacerbating the dietary fat issue was the fact that these diet plans encouraged obese individuals to eat to satiety, effectively as much as they wanted (so long as they avoided carbohydrates), when the conventional wisdom had it that they got fat to begin with precisely because they ate as much as they wanted.

By the mid-1970s, the diets had been successfully tarred as dangerous fads (despite a history of common use in hospitals, including the Harvard Medical School,15 and a provenance going back at least to the 1820s) and the physician authors as quacks and confidence men. The notion that obesity is not an eating disorder or an energy balance disorder, but a fat accumulation disorder—a hormonal, regulatory disorder—triggered not by energy imbalance but the quality and quantity of the carbohydrates in the diet, has been routinely dismissed ever since as unworthy of serious attention.

In a 21st century of genomics, proteomics, and high tech medicine, it’s hard to imagine that the obesity problem might have been effectively solved by 1960s era endocrinology. Rather we assume that these competing hypotheses must have been rigorously tested, and the energy balance hypothesis must have won out. We know that it’s excess calories, not carbohydrates—eating too much rather than “abnormal lipophilic tissue”—that make us fat because that’s what the science has told us.

But this is not the case. One problem has been an almost ubiquitous misunderstanding of the alternative hypothesis and, indeed, of energy imbalance itself. The existence of an energy imbalance in people who are getting fatter is treated, as Newburgh treated it, as evidence that the energy balance hypothesis is correct. The same can be said for observations that obese people eat more than lean people do or are more sedentary, or even that per capita food availability has increased over the course of the obesity epidemic or that leisure time physical activity has decreased. All these observations, though, are consistent with both hypotheses.

Calories or carbohydrates

Attempts to blame the obesity epidemics worldwide on increased availability of calories typically ignore the fact that these increases are largely carbohydrates and those carbohydrates are largely sugars—sucrose or high fructose corn syrup. And so these observations shed no light on whether it's total calories or the carbohydrate calories that are to blame. Nor do they shed light on the more fundamental question of whether people or populations get fat because they're eating more, or eat more because the macronutrient composition of their diets is promoting fat accumulation—increased lipogenesis or decreased lipolysis, in effect, driving an increase in appetite.

The same is true for bariatric surgery, which is now acknowledged to be a remarkably effective means of inducing long term weight loss. But does weight loss occur after surgery because of the rearrangement of the gastrointestinal tract resulting in hormonal effects that minimise appetite or directly minimise fat accumulation? Does it occur because the patient reduces total calories consumed after surgery or reduces carbohydrate calories and, specifically, refined grains and sugars? The observation that bariatric surgery works doesn’t answer these questions.

As Erich Grafe noted about the lipophilia hypothesis 80 years ago, it “presupposes overnutrition.” If a patient is getting heavier, they must be taking in more energy than they expend. With the energy balance hypothesis, overnutrition is causal; with lipophilia, it’s compensatory, a response to the hormonally driven fat accumulation. Either way, it has to exist while an individual is gaining weight. And, by the same token, undernutrition or negative energy balance has to exist if an individual is losing weight.

Sugary beverages are another example of how these different hypotheses lead to different conclusions that are relevant to solving the obesity epidemics worldwide. The conventional wisdom has it that sugary beverages are merely empty calories that we consume in excess, although it is possible that the metabolism of fructose (a key carbohydrate component that makes these sugars sweet) in the liver somehow circumvents leptin signalling, leading us to consume these beverages and their calories even when we’re not and shouldn’t be hungry. The hormonal or regulatory hypothesis also focuses on the metabolism of fructose in the liver, but rather than leptin it uses evidence suggesting that fructose metabolism can induce insulin resistance, leading in turn to raised insulin levels and trapping fat in fat cells—increasing, in effect, lipophilia.

Shortcomings of obesity and nutrition research

Another problem endemic to obesity and nutrition research since the second world war has been the assumption that poorly controlled experiments and observational studies are a sufficient basis on which to form beliefs and promulgate public health guidelines. This is rationalised by the fact that it's exceedingly difficult (and inordinately expensive) to do better science when dealing with humans and long term chronic diseases. This may be true, but it doesn't negate the fact that the evidence generated from this research is inherently incapable of establishing reliable knowledge.

The shortcomings of observational studies are obvious and should not be controversial. These studies, regardless of their size or number, only indicate associations—providing hypothesis generating data—not causal relations. These hypotheses then have to be rigorously tested. This is the core of the scientific process. Without rigorous experimental tests, we know nothing meaningful about the cause of the disease states we’re studying or about the therapies that might work to ameliorate them. All we have are speculations.

As for the experimental trials, these too have been flawed. Most conspicuous is the failure to control variables, particularly in free-living trials. Researchers counsel participants to eat diets of different macronutrient composition—a low fat, a low carbohydrate, and a Mediterranean diet, for instance—and then send them off about their lives to do so. In these trials, carbohydrate restricted diets almost invariably show significantly better short term weight loss, despite allowing participants to eat as much as they want and being compared with calorie restricted diets that also reduce the quantity of carbohydrates consumed and improve the quality. In these trials, the ad libitum carbohydrate restricted diets have also improved heart disease and diabetes risk factors better than the diets to which they've been compared. But after a year or two, the results converge towards non-significance, while attempts to quantify what participants actually eat consistently conclude that there is little long term compliance with any of the diets.16 17 18

Rather than acknowledge that these trials are incapable of answering the question of what causes obesity (assumed to be obvious, in any case), this research is still treated as relevant, at least, to the question of what diet works best to resolve it—and that in turn as relevant to the causality question. Should we restrict calories or carbohydrates to lose weight? If the answer is that it doesn’t seem to matter because the participants eventually fail to adhere to any of the diets, this is perceived as somehow a confirmation that the only way to lose weight is to reduce calories, and so the energy balance hypothesis is the correct one.19

Imagine drawing conclusions about the cause of lung cancer or the reduction in risk that can be achieved by quitting cigarettes based on success rates in experimental trials of smoking cessation techniques—going cold turkey, for instance, versus using the patch or nicotine gum. The logic is similar if not identical.

Ultimately what we want to know is what causes weight gain. That’s an entirely different question from whether advising someone to follow a Mediterranean diet is more or less efficacious than a low fat or a carbohydrate restricted diet or some variation thereof.

In metabolic ward studies, in which the diets of the participants have been well controlled, researchers typically restricted the calories in both arms of the trials—feeding participants, say, 800 calories of a low fat versus a low carbohydrate diet—thereby building into the study design one of the hypotheses that is ultimately being tested. What we want to know, again, is what causes us to gain weight, not whether weight loss can be induced under different conditions of both semistarvation and carbohydrate restriction.

What can we do about this? It seems we have two choices. We can continue to examine and debate the past, or we can look forward and start anew.

A year ago, working with Peter Attia, a physician, and with support from the Laura and John Arnold Foundation in Houston, Texas, I cofounded a not-for-profit organisation called the Nutrition Science Initiative (NuSI.org). Our strategy is to fund and facilitate rigorous, well controlled experimental trials, carried out by independent, sceptical researchers. The Arnold Foundation has now committed $40m over the next three years to this research programme. Our hope is that these experiments will be the first steps in answering definitively the question of what causes obesity and help us finally make meaningful progress against it.

We believe that ultimately three conditions are necessary to make progress in the struggle against obesity and its related chronic diseases—type 2 diabetes, most notably. First is the acceptance of the existence of an alternative hypothesis of obesity, or even multiple alternative hypotheses, with the understanding that these, too, adhere to the laws of physics and must be tested rigorously.

Second is a refusal to accept substandard science as sufficient to establish reliable knowledge, let alone for public health guidelines. When the results of studies are published, the authors must be brutally honest about the possible shortcomings and all reasonable alternative explanations for what they observed. “If science is to progress,” as the Nobel prize winning physicist Richard Feynman said half a century ago, “what we need is the ability to experiment, honesty in reporting results—the results must be reported without somebody saying what they would like the results to have been—and finally—an important thing—the intelligence to interpret the results. An important point about this intelligence is that it should not be sure ahead of time what must be.”20

Finally, if the best we’ve done so far isn’t good enough—if uncontrolled experiments and observational studies are unreliable, which should be undeniable—then we have to find the willingness and the resources to do better. With the burden of obesity now estimated at greater than $150bn (£100bn; €118bn) a year in the US alone, virtually any amount of money spent on getting nutrition research right can be defended on the basis that the long term savings to the healthcare system and to the health of individuals will offset the costs of the research by orders of magnitude.

Source: BMJ

Impact of autologous blood injections in treatment of mid-portion Achilles tendinopathy: double blind randomised controlled trial.


Abstract

Objective To assess the effectiveness of two peritendinous autologous blood injections in addition to a standardised eccentric calf strengthening programme in improving pain and function in patients with mid-portion Achilles tendinopathy.

Design Single centre, participant and single assessor blinded, parallel group, randomised, controlled trial.

Setting Single sports medicine clinic in New Zealand.

Participants 53 adults (mean age 49, 53% men) with symptoms of unilateral mid-portion Achilles tendinopathy for at least three months. Participants were excluded if they had a history of previous Achilles tendon rupture or surgery or had undergone previous adjuvant treatments such as injectable therapies, glyceryl trinitrate patches, or extracorporeal shockwave therapy.

Interventions All participants underwent two unguided peritendinous injections one month apart with a standardised protocol. The treatment group had 3 mL of their own whole blood injected while the control group had no substance injected (needling only). Participants in both groups carried out a standardised and monitored 12 week eccentric calf training programme. Follow-up was at one, two, three and six months.

Main outcome measures The primary outcome measure was the change in symptoms and function from baseline to six months with the Victorian Institute of Sport Assessment-Achilles (VISA-A) score. Secondary outcomes were the participant’s perceived rehabilitation and their ability to return to sport.

Results 26 participants were randomly assigned to the treatment group and 27 to the control group. In total, 50 (94%) completed the six month study, with 25 in each group. Clear and clinically worthwhile improvements in the VISA-A score were evident at six months in both the treatment (change in score 18.7, 95% confidence interval 12.3 to 25.1) and control (19.9, 13.6 to 26.2) groups. The overall effect of treatment was not significant (P=0.689) and the 95% confidence intervals at all points precluded clinically meaningful benefit or harm. There was no significant difference between groups in secondary outcomes or in the levels of compliance with the eccentric calf strengthening programme. No adverse events were reported.

Conclusion The administration of two unguided peritendinous autologous blood injections one month apart, in addition to a standardised eccentric training programme, provides no additional benefit in the treatment of mid-portion Achilles tendinopathy.

What is already known on this topic

  • Several studies have suggested that the injection of autologous blood can be helpful in the treatment of various tendinopathies
  • Autologous blood injections are a relatively common treatment, despite a lack of high quality evidence supporting their use in the management of Achilles tendinopathy

What this study adds

  • The use of peritendinous autologous blood injections does not reduce pain or improve function in those with mid-portion Achilles tendinopathy when combined with an eccentric training programme

Discussion

Summary of principal findings

To our knowledge, this is the first double blinded, randomised, controlled trial investigating the efficacy of autologous blood injections as a treatment for mid-portion Achilles tendinopathy in addition to a standardised eccentric training programme. Both groups showed a steady and clinically meaningful improvement in the mean VISA-A scores throughout the study. The addition of two peritendinous autologous blood injections administered one month apart, however, was unlikely to have resulted in additional clinically meaningful improvements in pain or in ability to return to activity.

Comparisons with the literature

The only other available published study investigating the use of autologous blood injections to treat Achilles tendinopathy concluded that injections produced a moderate improvement in symptoms.24 This previous study lacked blinding and allocation concealment, administered a variable number of injections, included participants with unilateral and bilateral symptoms, and had a high dropout rate. The sample size recruited for our study was large enough to exclude clinically meaningful benefit or harm and was also similar to that in other randomised trials that used the change in VISA-A score as the primary outcome measure (40-75 participants).24 28 29 The monitoring of eccentric training was also similar to that used by some other investigators,28 30 though others did not mention steps taken to monitor compliance.29 31 Good compliance probably contributes to a participant's clinical improvement.25 Without accurate monitoring, any benefit attributed to the intervention might actually be caused by a disparity between groups in compliance with eccentric training; at the least, poor monitoring would increase the variation of the results and the variation in measurement between participants.

Explanations and clinical implications

The use of autologous blood injections to treat tendinopathy has become particularly popular in recent years, despite the apparent lack of quality evidence supporting their use.19 20 24 32 33 Our findings suggest that the application of autologous blood injections as described has no clear clinically beneficial effect, and on this basis we cannot recommend their future use. There are, however, several other possible explanations for the outcome. Firstly, the needling process itself might have created sufficient bleeding in the control participants to induce a healing response. Secondly, administering injections closer together might have enhanced any potential benefit from maintaining a steady state effect of growth factors present in the blood.17 19 The lack of ultrasound guidance, or the injections being directed peritendinously rather than intratendinously, could also have affected the outcome. From our experience, however, most autologous blood injections administered in New Zealand are delivered peritendinously. Several reports of benefit from unguided injections in tendinopathy support this type of administration.24 32 33 34 35 36 There are also numerous studies indicating that peritendinous injections of differing substances can provide benefit in the treatment of Achilles tendinopathy.24 36 37 38

The intratendinous injection of platelet rich plasma, a derivative of whole blood, has been investigated in the treatment of mid-portion Achilles tendinopathy with mixed results to date.39 The proposed mechanism of action of autologous blood injections and platelet rich plasma is similar, but the former is cheaper as it does not require specific preparation and a smaller volume of blood is taken from participants. The negligible clinical effect of autologous blood injections in this study, and the mixed results of platelet rich plasma, suggest that these forms of injection therapy are not useful for the treatment of mid-portion Achilles tendinopathy. Eccentric training is already recognised as a good treatment option for Achilles tendinopathy,15 25 28 and both groups showed a clinical improvement as a result of the eccentric training component. It was therefore difficult to show an additive clinically meaningful effect of autologous blood injections.

We did not use any type of substance in the injections in the control group, whereas most other studies used injections of normal saline28 40 41 42 or local anaesthetic.43 44 There is recent evidence to suggest that higher levels of extracellular sodium suppress transient receptor potential V1 (TRPV1) activity, a non-selective cation channel present in nociceptors that helps to integrate pain perception.45 We considered that injecting normal saline could itself result in reduced pain sensation and, as such, not strictly act as a control intervention. Indeed, the group allocation prediction analysis shows that participants were unable to determine which group they had been randomised to, suggesting that with the use of our control intervention we were able to maintain adequate blinding.

Strengths and limitations of the study

Strengths of this study were the high participation rate (95%) and low dropout rate (6%). The participants reflected a wide range of society, with ages ranging from 27 to 76 and the sexes evenly balanced (53% men, 47% women). There was, however, an under-representation of minority ethnic groups, with 91% of the participants identifying themselves as European and the remaining 9% as New Zealand Maori. A further strength was the use of a validated primary outcome measure, the VISA-A questionnaire, which was completed in the presence of the single blinded assessor. We also used daily monitoring of eccentric training compliance with a written log, which could have encouraged good compliance and showed that there were no differences between the groups in the eccentric training aspect of the management.

Future research

We conclude that the administration of unguided peritendinous autologous blood injections confers no additional benefit to rehabilitation outcomes beyond those achieved with a standardised eccentric training programme. Follow-up studies could involve the injections being given intratendinously under ultrasound guidance, a shorter time interval between injections, or an increase in the volume of injected blood. It would also be useful to have a third treatment group who undertake an eccentric training programme only, without injection. While such a group would aid in estimating the effect of the needling itself, it would be unblinded and therefore susceptible to placebo effects. A fourth group receiving autologous blood injections in the absence of eccentric training would help to determine whether any benefit is obtained from injections in isolation.

Source: BMJ

Parental depression, maternal antidepressant use during pregnancy, and risk of autism spectrum disorders: population based case-control study.


Abstract

Objective To study the association between parental depression and maternal antidepressant use during pregnancy with autism spectrum disorders in offspring.

Design Population based nested case-control study.

Setting Stockholm County, Sweden, 2001-07.

Participants 4429 cases of autism spectrum disorder (1828 with and 2601 without intellectual disability) and 43 277 age and sex matched controls in the full sample (1679 cases of autism spectrum disorder and 16 845 controls with data on maternal antidepressant use), nested within a cohort (n=589 114) of young people aged 0-17 years.

Main outcome measure A diagnosis of autism spectrum disorder, with or without intellectual disability.

Exposures Parental depression and other characteristics prospectively recorded in administrative registers before the birth of the child. Maternal antidepressant use, recorded at the first antenatal interview, was available for children born from 1995 onwards.

Results A history of maternal (adjusted odds ratio 1.49, 95% confidence interval 1.08 to 2.08) but not paternal depression was associated with an increased risk of autism spectrum disorders in offspring. In the subsample with available data on drugs, this association was confined to women reporting antidepressant use during pregnancy (3.34, 1.50 to 7.47, P=0.003), irrespective of whether selective serotonin reuptake inhibitors (SSRIs) or non-selective monoamine reuptake inhibitors were reported. All associations were higher in cases of autism without intellectual disability, there being no evidence of an increased risk of autism with intellectual disability. Assuming an unconfounded, causal association, antidepressant use during pregnancy explained 0.6% of the cases of autism spectrum disorder.

Conclusions In utero exposure to both SSRIs and non-selective monoamine reuptake inhibitors (tricyclic antidepressants) was associated with an increased risk of autism spectrum disorders, particularly without intellectual disability. Whether this association is causal or reflects the risk of autism with severe depression during pregnancy requires further research. However, assuming causality, antidepressant use during pregnancy is unlikely to have contributed significantly towards the dramatic increase in observed prevalence of autism spectrum disorders as it explained less than 1% of cases.
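The abstract does not state how the 0.6% figure was derived; for a rare exposure such as this, one standard estimator is Levin's population attributable fraction, sketched below under the assumption that the odds ratio approximates the relative risk, with $p$ standing for the prevalence of antidepressant use during pregnancy:

$$\mathrm{PAF} = \frac{p\,(\mathrm{OR} - 1)}{1 + p\,(\mathrm{OR} - 1)}$$

Under this formula, an odds ratio of 3.34 yields a PAF of 0.6% only when $p$ is a few per thousand, which is why even a threefold increase in odds explains so small a share of cases. The paper's own figure may rest on a different estimator.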

Discussion

A maternal history of depression was associated with a higher risk of autism in offspring, but there was no evidence of a relation with paternal depression. These associations were largely limited to children of mothers who reported using antidepressants at the first antenatal interview. The increased risk was observed with SSRIs as well as with other monoamine reuptake inhibitor antidepressants. All these increased risks seemed to be confined to autism spectrum disorders without intellectual disability and persisted after adjustment for several confounding factors.

What is already known on this topic

  • Parental depression is considered a risk factor for autism spectrum disorder (autism), but meta-analytical evidence is inconclusive
  • One study suggested an association between prescriptions for selective serotonin reuptake inhibitors (SSRIs) during pregnancy and autism in offspring
  • This suggestion may have led to a preferential use of other antidepressants over SSRIs during pregnancy

What this study adds

  • A maternal but not paternal history of depression was associated with a higher risk of autism in offspring
  • The increased risk of autism was largely found in children of mothers reporting antidepressant use at the first antenatal interview; SSRIs as well as non-selective monoamine reuptake inhibitors were associated with increased risks, suggesting that non-SSRIs may not be “safer” alternatives in this context
  • Associations were largely limited to autism without intellectual disability, suggesting that autism with and without intellectual disability may have partially different causes

Source: BMJ