Wave generator powers US electrical grid for the first time.


The US has started receiving power from wave energy for the first time, thanks to a prototype wave generator called Azura.

Installed off the coast of Hawaii at the US Navy’s Wave Energy Test Site in Kaneohe Bay, this 40-tonne, bright yellow device is the first of what the US Department of Energy (DoE) says could be a fleet of wave generators working together to supply clean, renewable power to America’s coastal cities. With around 50 percent of the US population living within 80 kilometres of the coastline, the potential is huge – if they can get it right.

The Azura prototype was installed at a 30-metre-deep test berth last month, and has been generating power ever since, but this week is the first time the amount has actually been verified.

While the amount was pretty minuscule – 20 kilowatts – the Oregon-based company behind the design, Northwest Energy Innovations (NWEI), says the design will be improved, and the next iteration will be installed deeper, in the vicinity of much bigger waves. That should give it the potential to generate 1 megawatt of power – enough, says Steve Dent from Engadget, to power several hundred homes.

The key to the success of the Azura design is in how it manages to capture all aspects of movement in a wave, which the DoE says marks a radical shift in how wave energy devices operate. “Waves have both side-to-side and up-and-down motion, but up to this point, almost all designs have been limited to one direction or the other,” says the DoE website. “The Azura can harness movement in 360 degrees, bobbing, twisting and wobbling its way to a higher electricity output.”

The team plans on installing the new megawatt-capable system in 2017, and in the meantime will keep testing their Azura prototype with the help of the US Navy. Meanwhile, a similar device has been installed off the coast of Western Australia, and has been supplying wave energy to the local grid since February.

According to World Ocean Review, the global potential of wave energy is estimated at around 11,400 TWh per year, a substantial fraction of which could be converted into usable energy. “Its sustainable generating potential of 1,700 TWh per year equates to about 10 percent of global energy needs,” the Review states. If we’re smart, we’ll get on that sooner rather than later.

The Surprising Cost of Making a New Medication (and why you’re emptying your pockets to pay for insulin)


It’s easy to completely resent the out-of-pocket price of most medications these days. When I didn’t have health insurance during and for two years after college, paying over $100 for one vial of Lantus (thus, $1,200 per year just for my Lantus insulin) was painful. A box of test strips ran in the realm of $75, and the choice between testing my blood sugar often for my health or as little as possible for my bank account was always staring me in the face. (Don’t worry: thanks to friends and discounts, I was fortunately able to choose my health without going totally broke.)

Do those big pharma companies really need to charge over $100 for one vial of insulin? Where is that money going? To the immense, expensive parties I’ve heard about from friends who worked in pharmaceutical sales?

Bruce Booth, a contributor at Forbes.com, recently reported on “exactly” what it costs a pharmaceutical company to create a new drug. Bear in mind that he is using calculated estimates:

The cost of making a new drug has grown to nearly $2.6B, according to the latest and greatest from Tufts Center for the Study of Drug Development (here).  That’s a big and almost unfathomable number.  Critics immediately pounced on it, calling anyone who believed it a flatlander (here), and suggesting that the cost, with failures, was closer to $150M.  Tufts’ number appears incredulous, but the critics’ paltry number even more so.

Last time I weighed in on this subject back in 2011, an article in Slate suggested that the cost of a drug was $50M.   I put out a model so that readers could “choose their own adventure” and play with the assumptions around drug R&D (here).  It provoked some good commentary and engagement.

Booth explains the different factors in determining the heavy price tag of a new drug (a toy numerical sketch follows the list):

  • “the direct costs of moving a drug forward”
  • “paying for failures along the way”
  • “the time value of money”
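To see how those three factors compound, here is a toy capitalized-cost model with entirely made-up numbers – none of them come from Booth or the Tufts study; only the mechanics are the point:

```python
# Toy model: expected, capitalized R&D cost per approved drug.
# All figures below are hypothetical placeholders, not Booth's or Tufts' numbers.
phases = [
    # (name, direct cost in $M, probability of advancing, years spent)
    ("preclinical", 5,   0.40, 3),
    ("phase 1",     15,  0.60, 1),
    ("phase 2",     40,  0.35, 2),
    ("phase 3",     150, 0.60, 3),
    ("approval",    5,   0.90, 1),
]
discount_rate = 0.10  # assumed cost of capital per year ("time value of money")

total_years = sum(duration for *_, duration in phases)
elapsed = 0
p_reach = 1.0        # probability a project survives to the current phase
expected_cost = 0.0  # expected capitalized spend per project started
for name, cost, p_advance, duration in phases:
    # Spend is incurred by every project that reaches this phase ("paying for
    # failures"), then compounded forward to the approval date.
    expected_cost += p_reach * cost * (1 + discount_rate) ** (total_years - elapsed)
    p_reach *= p_advance
    elapsed += duration

print(f"capitalized cost per approved drug: ${expected_cost / p_reach:,.0f}M")
```

Even with these modest per-phase costs, dividing by the cumulative success probability and compounding the cost of capital pushes the all-in figure per approved drug past the billion-dollar mark, which is why estimates diverge so widely depending on the assumptions.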

Booth then breaks down the process of bringing a drug from its initial creation to a container sitting in your bathroom cabinet:

Looking at these numbers, I find myself in the awkward position of almost understanding why pharma can charge more than $1.50 per test strip even though producing the strip itself costs barely 10 cents. And surely, by now, they’ve recouped the millions of dollars spent creating that test strip through sales to the millions of us who need them. Again, I find myself both frustrated and understanding: these companies need to make a profit, need to pay employees, and need to earn from existing medications and technology so they have the funds to create new medications and technology.

Booth estimates that pharma as a whole spends over $100 billion simply on research and development (R&D), in constant pursuit of the new drug that will put its brand at the top of the list when doctors determine a treatment protocol for their patients.

It isn’t rare to see rants and complaints directed at the FDA over how long it takes for new medications and technology to get approved, but part of our wait is the very necessary testing, testing, testing through a number of studies as pointed out by Booth. Do I want a drug to make it to the market before it’s been thoroughly tested for safety and efficacy so I can get it more cheaply? At the risk of…my health? My life?

In the end, the process isn’t simple. It isn’t straightforward. And it is very costly.

“Most practitioners in the field agree on the general rather than specific conclusion: the cost to bring a drug to market is big – very big – especially when accounting for all the failed attempts,” says Booth. “If we want to reduce this number, the solution is simple – just do things better, faster, and cheaper.  Back to work.”

‘Pain sensing’ gene discovery could help in development of new methods of pain relief | Neuroscientist News


http://neuroscientistnews.com/research-news/pain-sensing-gene-discovery-could-help-development-new-methods-pain-relief

Decoding the brain: Scientists redefine and measure single-neuron signal-to-noise ratio


Fig. 1. Raster plots of neural spiking activity. (A) Forty trials of spiking activity recorded from a neuron in the primary auditory cortex of an anesthetized guinea pig in response to a 200 μs/phase biphasic electrical pulse applied in the inferior colliculus at time 0. (B) Fifty trials of spiking activity from a rat thalamic neuron recorded in response to a 50 mm/s whisker deflection repeated eight times per second. (C) Twenty-five trials of spiking activity from a monkey hippocampal neuron recorded while executing a location scene association task. (D) Forty trials of spiking activity recorded from a subthalamic nucleus neuron in a Parkinson’s disease patient before and after a hand movement in each of four directions (dir.): up (dir. U), right (dir. R), down (dir. D), and left (dir. L). Credit: Czanner G, Sarma SV, Ba D, Eden UT, Wu W, Eskandar E, Lim HH, Temereanca S, Suzuki WA, Brown EN (2015) Measuring the signal-to-noise ratio of a neuron. Proc Natl Acad Sci USA 112(23):7141-7146.

(Phys.org)—The signal-to-noise ratio, or SNR, is a well-known metric typically expressed in decibels and defined as a measure of signal strength relative to background noise – and in statistical terms as the ratio of the squared amplitude or variance of a signal relative to the variance of the noise. However, this definition – while commonly used to measure fidelity in physical systems – is not applicable to neural systems, because neural spiking activity (in which electrical pulses called action potentials travel down nerve fibers as voltage spikes, the pattern of which encodes and transmits information) is more accurately represented using point processes (random collections of points, each representing the time and/or location of an event).
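For a concrete sense of that classical definition, here is a minimal snippet – purely illustrative synthetic data, nothing from the study – computing the variance-ratio SNR in decibels:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 1000))  # synthetic deterministic signal
noise = rng.normal(scale=0.5, size=signal.size)   # additive Gaussian noise

# Classical SNR: signal variance over noise variance, expressed in decibels.
snr_db = 10 * np.log10(signal.var() / noise.var())
print(f"classical SNR: {snr_db:.1f} dB")  # about +3 dB for these settings
```

It is exactly this additive-Gaussian picture that, as the researchers argue, fails to describe spiking neurons.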

Recently, scientists at the University of Liverpool and the Massachusetts Institute of Technology redefined the SNR as an estimate of a ratio of expected prediction errors, and moreover extended the standard definition to one appropriate for single neurons by using point process generalized linear models (PP-GLM) – a flexible generalization of ordinary linear regression that allows for response variables with error distribution models other than a normal distribution. The researchers conclude that their study provides a straightforward method for determining the signal-to-noise ratio of single neurons – and by generalizing the standard SNR metric, they were able to explicitly characterize the acknowledged fact that individual neurons are noisy transmitters of information. The scientists state that their new approach allows SNR computation on the same decibel scale for neurons and man-made systems, and moreover applies to any analysis in which a generalized linear model can be used as a statistical model – including clinical trials, observational studies, and the optimization and evaluation of neural prostheses.

Dr. Gabriela Czanner and Prof. Emery N. Brown discussed the paper that they and their colleagues published in Proceedings of the National Academy of Sciences. As might well be imagined, the scientists faced a number of challenges in conducting their study. “Neurons communicate through spiking activity in two ways – spike intensity modulation or spike timing patterns,” Czanner tells Phys.org. “In our work, in which the fundamental challenge was to differentiate signal from noise in neuronal spiking, we focused on spike intensity. This required identifying the components that cause neurons to spike – or as Rieke and his colleagues wrote1, ‘To make meaningful estimates of information transmission we need to understand something about the structure of the neural code.'”

Taking their lead from this quote, the scientists realized that in the case of single neurons, the factors that modulate the spiking activity are the stimulus, biophysical properties of the neuron, thermal noise, and activity of neighboring neurons. They then employed statistics to show that the signal is the regularity in the data that can be explained by the stimulus and the neuron’s biophysical properties, while the noise comprises the remaining neural spiking dynamics not captured by the stimulus and biophysical components.

That said, Czanner points out, previous formulations of neural SNR – for example, information theory adapted to derive Gaussian upper bounds on individual neuron SNR, and coefficients of variation and Fano factors (two measures of dispersion of a probability distribution or frequency distribution) based on spike counts – fail to consider the point process nature of neural spiking activity. (A Gaussian distribution is a continuous probability distribution commonly used to model real-valued random variables.) Moreover, these measures and the Gaussian approximations are less accurate for neurons with low spike intensity modulation or when information is contained in precise spike timing patterns.

Brown notes that while the standard SNR definition assumes that a system’s measurements have a Gaussian distribution and that the noise is added to the signal, neural systems produce binary observations in short time intervals, where the probability of an observation depends on the number of observations in previous time intervals. These observations are more accurately modeled as a point process, and therefore differ significantly from Gaussian distributions and from standard binomial distributions. “Moreover,” he adds, “these observations are affected by both the signal and the noise in the neural system – so the relationship is not additive.”
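As a toy illustration of why (our own sketch, not the authors’ code or data), the following simulates binary spike indicators in short time bins whose spiking probability depends both on a hypothetical stimulus and on recent spiking history:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 5000  # e.g., 1 ms bins over 5 s
stimulus = 0.5 * (1 + np.sin(2 * np.pi * np.arange(n_bins) / 500))  # hypothetical drive

spikes = np.zeros(n_bins, dtype=int)
for t in range(n_bins):
    log_rate = -4.0 + 1.5 * stimulus[t]        # baseline plus stimulus effect
    if t >= 2 and spikes[t - 2 : t].any():     # recent spike suppresses firing
        log_rate -= 5.0                        # crude refractory effect
    p_spike = 1 - np.exp(-np.exp(log_rate))    # P(at least one event) in the bin
    spikes[t] = rng.random() < p_spike

print(f"{spikes.sum()} spikes in {n_bins} bins (mean rate {spikes.mean():.3f}/bin)")
```

An additive-Gaussian model has no natural way to express the history-dependent suppression inside this loop; the point-process GLM described below does.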

Relatedly, he adds, it was vital to show that the neuronal signal-to-noise ratio (SNR) estimates a ratio of expected prediction errors rather than the standard SNR definition of signal amplitude squared and divided by the noise variance. “Neurons emit binary electrical discharges, called spikes, which are believed to be the elementary unit of neural communication,” Brown tells Phys.org. “Hence, it’s important to find out precisely how much information and noise there is in the spikes.”
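Schematically – our paraphrase, not the paper’s notation – the redefinition reads: SNR = (E[PE of the noise-only model] − E[PE of the signal-plus-noise model]) / E[PE of the signal-plus-noise model], where PE denotes prediction error; expressed in decibels, this becomes 10·log10 of that ratio.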

Another challenge was determining that single neuron SNRs range from −29 to −3 decibels (dB), which meant selecting appropriate approximating models of the spiking activity. “The model has to be tailored to each neuron,” Czanner stresses, “thus reflecting neurons’ own specific electrophysiology and the type of the stimulation, which was either explicit or implicit. To achieve this we analyzed the SNR metric from the perspective of statistical concepts.” She says that showing that traditional SNR is a random quantity that estimates a ratio of expected prediction errors allowed them to extend the SNR standard definition to one appropriate for single neurons by representing neural spiking activity using a point process generalized linear model (PP-GLM).

“We estimate prediction errors using residual deviances from PP-GLM fits,” she continues. “Because the deviance is an approximate chi-squared random variable whose expected value is the number of degrees of freedom, we compute a bias-corrected SNR estimate appropriate for single neuron analysis and use the bootstrap to assess its uncertainty.” In analyzing four neuroscience systems experiments, the researchers were thereby able to show that the SNRs of individual neurons have – as Brown says they expected – these very low decibel values. “In other words, by generalizing the standard SNR metric we make explicit the well-known fact that individual neurons are highly noisy information transmitters – and at the same time, our framework expresses this SNR ratio for neurons in the same units as that used for more standard Gaussian additive noise systems.”
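The sketch below applies that recipe to the simulated spikes above. It is hedged in two ways: it uses statsmodels’ ordinary binomial GLM as a stand-in for the full PP-GLM machinery, and a crude degrees-of-freedom subtraction as the bias correction, so it illustrates the idea rather than reproducing the authors’ estimator:

```python
import numpy as np
import statsmodels.api as sm

# Assumes the 'spikes' and 'stimulus' arrays from the simulation sketch above.
lags = 3
y = spikes[lags:]
hist = np.column_stack([spikes[lags - j : len(spikes) - j] for j in range(1, lags + 1)])
stim = stimulus[lags:]

# "Noise" model: baseline plus spiking history only.
noise_model = sm.GLM(y, sm.add_constant(hist), family=sm.families.Binomial()).fit()
# "Signal plus noise" model: adds the stimulus covariate.
full_model = sm.GLM(y, sm.add_constant(np.column_stack([stim, hist])),
                    family=sm.families.Binomial()).fit()

# Residual deviances play the role of prediction errors; a deviance is roughly
# chi-squared with mean equal to its degrees of freedom, so subtracting the
# extra parameter count gives a rough bias correction.
extra_df = full_model.df_model - noise_model.df_model
num = noise_model.deviance - full_model.deviance - extra_df
snr_db = 10 * np.log10(max(num, 1e-12) / full_model.deviance)
print(f"stimulus SNR estimate: {snr_db:.1f} dB")  # typically a negative value
```

The bootstrap step the authors describe would repeat this fit on resampled trials to put an uncertainty interval around the estimate.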

Fig. 2. Stimulus and history component estimates from the PP-GLM analyses of the spiking activity in Fig. 1. (A) Guinea pig primary auditory cortex neuron. (B) Rat thalamic neuron. (C) Monkey hippocampal neuron. (D) Human subthalamic nucleus neuron. The stimulus component (Upper) is the estimated stimulus-induced effect on the spike rate in A, C, and D and the impulse response function of the stimulus in B. The history components (Lower) show the modulation constant of the spike firing rate. Credit: Czanner G, Sarma SV, Ba D, Eden UT, Wu W, Eskandar E, Lim HH, Temereanca S, Suzuki WA, Brown EN (2015) Measuring the signal-to-noise ratio of a neuron. Proc Natl Acad Sci USA 112(23):7141-7146.

Interestingly, by calculating spiking history SNR (as is allowed by their generalized SNR definition), the scientists demonstrated that taking into account the neuron’s biophysical processes – such as absolute and relative refractory periods (the periods after the action potential, when the neuron cannot spike again or can spike with low probability, respectively), bursting propensity (the period of a neuron’s rapid action potentials), local network dynamics, and, in this case, spiking history – is often a more informative predictor of spiking propensity than the signal or stimulus activating the neuron.

An important step in their investigation was to acknowledge that SNR is an estimator or a statistic, meaning that it has statistical properties that need to be evaluated. “The values of the SNR estimate are random, and so change if an experiment is repeated,” Czanner says. “We therefore had to identify these properties in order to be certain that the estimate is constructed in a principled way.” She notes that a key property of the SNR estimator is that it is biased, because by definition it always gives a positive value even when there is no signal. “To this end, we proposed a simple bias adjustment that worked well in simulation studies.”

In reviewing their research, Czanner and Brown identified the key innovations the team developed to address these myriad challenges:

● Determining that the quantity being estimated by the SNR is a ratio of prediction errors

● Employing the PP-GLM framework to estimate deviances as a generalization of sums of squares (a mathematical approach to determining the dispersion of data points) so that the concept of the SNR can be extended to neurons and agrees with the concept used for Gaussian systems

● Defining a way to take account of non-signal yet relevant covariates that also affect the spiking propensity of the neuron, which the team accomplished using the linear terms from the Volterra series approximation to the log of the conditional intensity function, expanded in terms of the stimulus and the spiking history

● Determining that the numerator and denominator are both approximate chi-squared random variables whose means give approximations to the appropriate bias corrections, allowing the team to compute the approximate bias correction to the biased SNR estimate

● Reporting the SNR for implicit and explicit neural stimuli (those attended to or not, respectively)

● Being able to compute the SNR on the same decibel scale for neurons and man-made systems by using appropriate GLM models for each

In their paper, the researchers state that their redefined SNR is extensible to any generalized linear model in which the factors modulating the response can be expressed as separate components of a likelihood function (a function of the parameters of a statistical model). “Our SNR metric is applicable to any system whose measurements can be described via a generalized linear model,” Czanner tells Phys.org. “The idea is to fit the model to the data, which effectively estimates the signal and the noise. (The model needs to be validated via, for example, standard goodness-of-fit criteria and autocorrelation tests.) The new SNR can then be calculated by using deviances that generalize the concept of error sum of squares (a mathematical determination of the dispersion of data points) to generalized linear models. This means that the SNR is defined for all generalized linear models where the signal and non-signal components can be separated and hence estimated as different parts of a generalized linear model – and in fact, one of the key ideas in our work was to define the approximating model as the logarithm of conditional intensity, which allowed us to separate the signal or stimulus from the neuron’s own biophysical properties.”
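Because the recipe needs only two nested GLM fits and their deviances, carrying it outside neuroscience is mechanical. Here is a hypothetical example in the spirit of the biomarker applications Czanner mentions – simulated data and invented effect sizes, purely to show the mechanics:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical: a binary clinical outcome modeled from one biomarker via
# logistic regression; the biomarker plays the role of the "signal" component.
rng = np.random.default_rng(2)
biomarker = rng.normal(size=400)
p_outcome = 1 / (1 + np.exp(-(0.8 * biomarker - 0.5)))   # invented effect size
outcome = (rng.random(400) < p_outcome).astype(int)

fit = sm.GLM(outcome, sm.add_constant(biomarker), family=sm.families.Binomial()).fit()
num = fit.null_deviance - fit.deviance - fit.df_model    # rough bias correction
print(f"biomarker SNR: {10 * np.log10(max(num, 1e-12) / fit.deviance):.1f} dB")
```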

In terms of next steps, the scientists are planning to extend the SNR definition to analyze the SNR of neuronal ensembles, in which several neurons communicate with each other. “This is a so-called multivariate case where several neurons are considered at the same time,” Brown says, “and we’d like to know how ensemble SNR represents a relevant signal or stimulus.”

The scientists say that the areas of research that might benefit from their new SNR concept are limitless. “Our definition applies readily to any situation in which a generalized linear model can be used as a statistical model, such as areas of research where measurements do not follow a Gaussian distribution and/or when the noise is not additive and/or correlated,” Czanner tells Phys.org. “For example,” she illustrates, “we are considering developing SNR metrics for evaluating the strength of surrogate biomarkers in clinical trials and observational studies, in which the surrogate biomarkers can be electrophysiological measurements from a human diabetic retina with the signal being the stage of diabetic retinopathy, or surrogate markers of retinal damage in malarial retina where the signal is the extent of brain swelling.”

Brown also sees their SNR approach being used to accurately evaluate neuronal signal information. “We therefore envision that in the future this will reinforce the search for neurons that carry maximal information for the purpose of neural prostheses – and in this scenario, neural SNR may also be used to evaluate the performance of neural prostheses devices.”

Scientists hope to block ‘old-age protein’, reverse memory loss — RT News


Scientists are aiming to develop antibodies which could reverse the effects of an ‘aging’ protein which builds up in the blood and body. The findings could lead to medical breakthroughs in reducing memory loss and brain degeneration.

Blocking the protein in question – B2M, or beta-2 microglobulin – could offer a reversal in memory decline and could potentially lead to the treatment of cognitive disorders, as B2M is found at increased levels in patients with Alzheimer’s disease and dementia.

Scientists reached this conclusion after injecting B2M into the brains or blood of young mice. After careful study, the UCSF team discovered that the rodents performed just as badly as elderly mice, doubling their errors navigating a familiar maze. After the protein was flushed out of the young mice’s systems, their memory performance returned to normal, suggesting that the effect of B2M is reversible.

The conclusions of the study have been published in the journal Nature Medicine, and are the latest in a series of parallel experiments by various groups of scientists over the past few years.

To further back up their hypothesis that a reduction in B2M levels could treat memory impairment, UCSF researchers genetically engineered mice to lack the microglobulin. These rodents, as they aged, performed nearly as well as young animals at completing memory tests.

“What this shows is that you can manipulate the blood, rather than the brain, to potentially treat memory problems,” report coauthor Saul Villeda told AFP. “And that’s so much easier … in terms of thinking of human patients.”

Villeda believes there are two ways to potentially reverse age-related cognitive impairments: “one is to introduce pro-youthful blood factors and the other is to therapeutically target pro-ageing factors.”

Research is already in full swing to come up with a drug that could halt or destroy B2M buildup in mice, and which could potentially be administered to humans.

“Right now, the idea is to develop antibodies or small molecules that can either block the effects of the protein or help to remove it from old blood,” says Villeda.

FDA Probes Kids’ Codeine Cough Syrups


New European report raises questions about safety of codeine products for younger children.

The FDA said it is investigating all use of codeine-containing cough syrups in children under 18, due to the drug’s potentially life-threatening side effects, such as respiratory depression.

In 2013, the agency recommended these products not be used for children following tonsillectomy and/or surgery on adenoids, due to the slow or difficult breathing associated with their use. But the European Medicines Agency issued a much stronger statement in April, saying cough syrup with codeine is now contraindicated for all children under 12, as well as for those aged 12 to 18 with asthma or chronic breathing problems.

An EMA drug safety committee concluded that codeine is especially dangerous for younger children because of the “more variable and unpredictable” way that codeine is converted into morphine in this population, which may result in breathing difficulties. The panel also noted that coughs and colds are generally self-limiting, and cited the limited evidence that codeine is effective for treating coughs in children.

“FDA will continue to evaluate this safety issue and will consider the EMA recommendations. We will convene a public advisory committee meeting to discuss these issues and provide input regarding whether additional action by FDA is needed,” the agency said Wednesday.

During the FDA review, the agency encouraged healthcare professionals to continue following the recommendations on drug labels, but urged caution when prescribing cough-and-cold products containing codeine to children. It also urged both healthcare professionals and patients to report any adverse events from these products to the FDA’s MedWatch Safety and Adverse Event Reporting Program.

The agency also asked that parents and caregivers speak with healthcare professionals or a pharmacist if they have questions or concerns about products containing codeine.

The drugs that protect people who have unprotected sex.


HIV infection rates are on the rise, particularly among gay men. But could a new type of drug treatment – PrEP – be given to people so they avoid becoming infected, asks Mobeen Azhar.

For decades sexual health workers have concentrated on the message of always using a condom as the main tactic against the spread of HIV and other sexually transmitted infections.

But certain risky behaviours are on the rise. Illegal drugs like MDMA and speed have long been used on the gay club scene. But now “chemsex” is a growing problem – parties in private homes centred on communal drug taking and sex.

The likes of mephedrone, crystal meth and GHB/GBL (or “G” for short) can increase libido and dramatically decrease inhibition and the desire to sleep.

It’s impossible to know how many men have become infected with HIV while using chems, but condom-less sex is normal for many men on the chem scene. In a study published by the London School of Hygiene and Tropical Medicine, a third of men surveyed described incidents of unintended unprotected sex while under the influence of chemsex drugs.

Simultaneously, HIV infection rates are rising. One in every eight gay men in London is HIV positive.

Kiran is one of them. “Once you try a powerful drug like crystal meth, and if that’s linked to sex, that’s the kind of sex you’re going to want. Vanilla sex just doesn’t compare. Your boundaries shift.”

He began using chems after a break up with his long-term boyfriend. “Sometimes I was safe but when I didn’t use a condom I would see it as a cheeky risk. When I was diagnosed as HIV positive I felt there was no point in holding back at all. I began injecting crystal meth and had absolutely no boundaries. I would try anything when I was high.”

A new type of drug treatment – Pre-Exposure Prophylaxis (PrEP) – could offer protection against HIV infection. People are given a combination of the anti-HIV drugs tenofovir and emtricitabine – currently sold under the trade name Truvada – before they have unprotected sex. The drug is taken as a single pill and halts the replication of the virus, stopping infection.

It has delivered promising results in trials with groups that are at high risk of contracting HIV.

Dr Sheena McCormack from University College London is leading the study. “To take part gay men had to identify that they had had condom-less anal sex in the three months prior to the study and recognise that this was likely to happen again in the near future.”

Just over 40% of the men on the study had used chems in the three months prior to joining the study, McCormack says. “We found no HIV infections amongst men who took Truvada during periods of risk. It is extremely effective but it does rely on human behaviour. You do have to take the pill.”

The World Health Organization has recognised that PrEP could dramatically cut HIV infection rates but some critics suggest the drug could encourage condom-less sex. Taking PrEP does not offer protection against infections such as gonorrhoea, syphilis or hepatitis. So could these infections increase if the drug is made widely available?

PrEP should be used in combination with condoms, says McCormack. “There was a slight reduction in condom use among men on the trial who took Truvada, but I don’t think it’s a problem what people do in the bedroom unless it generates an increase in infections such as syphilis, gonorrhoea and hepatitis. We didn’t see that in the study. There was no increase. Most people say they have not abandoned their previous strategies.”

McCormack would like to see PrEP rolled out on the NHS.

It would cost considerable sums. She estimates a bill of about £4,300 per person per year. There might be critics who would question whether such sums should be spent on people wilfully exposing themselves to risk.

At a time when the NHS is under acute budget pressure and facing constant difficult decisions over life-saving drugs, could it justify spending millions on PrEP when a cheap alternative – simply using condoms – exists?


But a widespread rollout of PrEP could actually save the NHS large sums, McCormack argues. “A lifetime of therapy for someone that is HIV positive can cost around £11,000 a year. We should also consider that Truvada only has to be taken through periods of risk. We can say PrEP is cost effective.”

Zia is one of 544 men who took part in the PrEP trial. “I have used ‘G’ in the past and I use mephedrone now and again. I was going to the STI clinic fairly often and not always using condoms. The staff told me about the PrEP trial.”

The drug regime has changed his behaviour. “My condom use has gone down in the past couple of months. Perhaps because I’ve settled into PrEP. I use condoms maybe 50% of the time now. Sure, I could still get syphilis or hepatitis, but most conditions are treatable. Instead of being sanctimonious and saying ‘just use condoms’, we should see Truvada as an additional choice in the fight against HIV.”


HIV is not the death sentence it once was. But advances in HIV treatment and the potential impact of PrEP in stopping the spread of the virus do nothing to address the core issues of chem use.

Dave Stuart manages the Chemsex Support Clinic at 56 Dean St, Europe’s busiest sexual health clinic. “This is not just about hedonism. It is about intimacy. Gay men often grow up keeping a secret. They grow up being hyper vigilant and not sharing who they really are.”

The clinic gets a hundred new cases of chem use every month, Stuart says. “They come of age into a sexualised gay scene where they try to navigate hook-up apps, normalised drug use and risky sex. They try to incorporate intimacy, but with no frame of reference. Many of the men that come to see us for help with their drug use don’t understand why they are doing what they are doing.”

After years of chem use, Kiran did seek out help. “I was making myself really sick and taking a lot of time off work. A friend told me about a support group and that changed everything. The last time I used chems was over 14 months ago. I had to learn about my triggers and how to manage my feelings.”

Kiran doesn’t think PrEP would have helped him. “I just didn’t think through what I was doing. I don’t think I would have even committed to taking a pill through periods of risky sex.

“When you are high, you almost challenge yourself to break boundaries and take risks. That’s what these drugs do to you. It’s a drug problem. Not just a sex problem.”

Benefits of Shilajit


In Sanskrit, the term ‘Shilajit’ means ‘the rock that cannot be conquered’ – in other words, an immortal rock. The traditional therapeutic system of Ayurveda champions Shilajit, which is associated with a large number of health benefits for humans. Shilajit is a rock-like substance found in some mountain terrains of Asia. Its qualities are held to be rejuvenating and detoxifying, and this remarkable substance has therefore become one of the major constituents of medicines made in Ayurveda.

Under extremely hot temperatures, this substance oozes out of the mountainous zones, sticky and tar-like in appearance. It takes centuries for Shilajit to form; it is thought to arise from the action of microorganisms on decomposing plants such as Euphorbia royleana and Trifolium repens. Scientists consider it one of the most interesting millenary products of Mother Nature.

The anti-aging benefits of Shilajit came to light through studies conducted in regions of Nepal, Northern India and Pakistan, where people widely consume Shilajit along with fermented raw milk. People in these regions stay healthy into very old age, and age-related illnesses and discomforts are reported to be almost nil; health experts credit the intake of this rare product for such good health.

Shilajit goes by varied names across the world: it is called Mumijo in the Caucasus Mountains and asphaltum in Western regions. Its composite nature is identified as a phytocomplex. More than 85 minerals and organic compounds are contained in this product in their natural form, including triterpenes, selenium, phospholipids, humic acid and fulvic acid – ingredients known to have strong antioxidant properties.

Shilajit is said to effectively prevent the self-aggregation of disease-causing filaments in the brain, and due to this, the age-related deterioration of mental capabilities identified as Alzheimer’s disease can reportedly be slowed to a large extent. Chronic conditions traditionally treated with Shilajit include urinary tract problems, jaundice, digestive troubles, enlargement of the spleen, epilepsy, anxiety, bronchitis and anemia.

Shilajit is known as an ionic mineral catalyst, and is therefore said to enhance the movement of minerals into muscles, tissues and bones. In Ayurveda, this mineral herb is heralded as the conqueror of mountains and annihilator of weakness, with the potential to address a wide spectrum of illnesses when administered in the right doses. It is also described as an adaptogen – a material that can help people adapt to changing and stressful environments more easily.

With its ionic properties, Shilajit is also said to mobilize minerals, and is therefore used to treat kidney stones, gallstones and inflammation. It is likewise used for ulcers, for regulating blood sugar levels and as an antimicrobial agent, and is popular as a memory and cognitive-function enhancer.

Though this gift of nature is credited with some amazing properties, it has to be consumed under medical supervision, in the right quantity, to get the fullest benefits.

Watch the video. URL: https://youtu.be/zYh325N-VJE

New research suggests nature walks are good for your brain.


In the past several months, a bevy of studies have added to a growing literature on the mental and physical benefits of spending time outdoors. That includes recent research showing that short micro-breaks spent looking at a nature scene have a rejuvenating effect on the brain — boosting levels of attention — and also that kids who attend schools featuring more greenery fare better on cognitive tests.

And Monday, yet another addition to the literature arrived — but this time with an added twist. It’s a cognitive neuroscience study, meaning not only that benefits from a nature experience were captured in an experiment, but also that their apparent neural signature was observed through brain scans.

The paper, by Stanford’s Gregory Bratman and several colleagues from the United States and Sweden, was published Monday in the Proceedings of the National Academy of Sciences. In it, 38 individuals who lived in urban areas, and who had “no history of mental disorder,” were divided into two groups — and asked to take a walk.

Half walked for 90 minutes through a natural area near the Stanford campus. The other half walked along a very busy road in downtown Palo Alto, Calif. (along El Camino Real, for those who know the area). Before and also after the walk, the participants answered a questionnaire designed to measure their tendency toward “rumination,” a pattern of often negative, inward-directed thinking and questioning that has been tied to an increased risk of depression, and that is assessed with questionnaire items like “My attention is often focused on aspects of myself I wish I’d stop thinking about,” and “I spend a great deal of time thinking back over my embarrassing or disappointing moments.”

Finally, both before and after the walk, the participants had their brains scanned. In particular, the researchers examined a brain region called the subgenual prefrontal cortex — which the study calls “an area that has been shown to be particularly active during the type of maladaptive, self-reflective thought and behavioral withdrawal that occurs during rumination.”

The result was that individuals who took the 90-minute nature walk showed a decrease in rumination — they actually answered the questionnaire differently, just a short period of time later. And their brain activity also showed a change consistent with this result. In particular, the scans showed decreased activity in the subgenual prefrontal cortex, the region of interest.

“This provides robust results for us that nature experience, even of a short duration, can decrease this pattern of thinking that is associated with the onset, in some cases, of mental illnesses like depression,” says Gregory Bratman, the lead author of the study.

What’s particularly valuable is that the brain scans allowed for the examination of a potential cognitive mechanism by which nature experiences help our mental states. Without such evidence, psychological research can in effect only speculate on occurrences within actual regions of the brain. “That’s why we wanted to push and get at neural correlates of what’s happening,” said Bratman.

In other words, the new research provides a new kind of evidence that is not only consistent with — but also strengthens — the growing body of research on the benefits of nature exposure.

Granted, brain scan research can be controversial – and it’s not as if conditions like depression have a single, simple cause. So as with all research, this work will need to be extended and verified by future studies.

The researchers set their study in the context of modern trends toward ever larger numbers of people living in cities — and an already demonstrated link between urbanization and mental health problems, such as depression and anxiety.

“We just passed the halfway point recently where 50 percent of humanity lives in urban areas,” said Bratman. “Along with this trend comes a decrease in nature and nature experience.” And the urbanized percentage of humanity is projected to be 70 percent by the year 2050, the study said.

But a key question raised by this is, precisely how would an urban environment worsen — or at least, fail to protect against — a mental behavior like rumination?

The idea seems to be that living in an urban area “is associated with many kinds of stressors, whether it be noise, increased social interactions, traffic,” said Bratman, which in turn increases rumination and anxiety — though he admits that this link in the study’s chain of logic needs further demonstration.

Still, it makes sense. Just think of waking up to the sound of a garbage truck in the morning outside your window — and how the accumulation of things like this can lead to negative repercussions on our psyches. Meanwhile, the authors speculate, nature environments allow for “positive distractions” that block or counteract these negative mental processes. Rumination is “this inward focused, maladaptive choice of where you direct your attention,” said Bratman, and nature gives an alternative opportunity for attentional focus.

The researchers also tie their results to a large literature on so-called “ecosystem services” — valuable benefits, such as carbon sequestration or water purification, provided by natural environments. The work suggests that on top of these benefits, there may also be “psychological ecosystem services” as well.

That’s a mouthful — but the underlying thought that it captures is pretty simple. Spending time outdoors, in nature, is good for you. The new study just adds — in a new way — to a growing body of evidence that demonstrates that.