Reducing the trauma associated with bad memories while someone is asleep sounds like the stuff of science fiction, but it could become a reality in 10 years thanks to a greater understanding of how the brain encodes memories during sleep.
A good snooze is known to be important for forming memories, but only recently have scientists begun to understand the details. The work could lead to better treatments for people suffering from post-traumatic stress disorder or age-related forgetfulness.
In 2015, Dr Karim Benchenane from the National Centre for Scientific Research (CNRS) in Paris, France, and his colleagues demonstrated for the first time that false memories could be created in mice while they were sleeping. The team targeted nerve cells, or neurons, in the brain that fire when the animal is, or thinks of being, in a specific location — the so-called ‘place cells’ that provide both rodents and people with an internal map.
‘When mice sleep after exploring an environment, you see that the same neurons are reactivated in the same order,’ said Dr Benchenane. ‘Replaying that information in the brain while sleeping is just like repetition in learning. This repetition improves memory consolidation.’
Using electrodes, the team recorded the activity of place cells while mice explored their environment and, later, while they slept. They then stimulated the brain areas associated with reward whenever particular place cells – each tied to a specific location – fired. When they awoke, the mice had a new memory and headed straight to that reward-associated location.
‘When you stimulate parts of the brain associated with reward during sleep, it’s just like the animal gets a huge reward in the physical world. We can make the animal believe that it gets a reward in a particular location in the environment,’ explained Dr Benchenane. ‘This means when place cells fire in sleep, they still convey spatial information.’
Now Dr Benchenane hopes to build on this work by using a similar technique to alter negative memories associated with traumatic events in mice, as part of a project called MNEMOSYNE, funded by the EU’s European Research Council (ERC).
‘We want to use reactivation during sleep to treat pathologies associated with fear or anxiety, such as post-traumatic stress disorder,’ said Dr Benchenane.
The idea is to add positive thoughts to bad memories. ‘We have managed to modify the emotional imbalance of a memory during sleep by stimulating the reward areas in the brain, meaning we can take a memory that is neutral — a location in the physical world for instance — and make it positive. Now we have to understand how positive and negative memories compete in the brain, and see if a positive memory can suppress a negative one, or even make it neutral,’ he said.
Dr Benchenane and his colleagues plan to give an electrical shock to mice when they are in a specific location, to associate this location with a bad memory. Then, while the mice are asleep, they will stimulate the reward areas in the brain when the place cells fire in this specific location, to change the negative memory into a positive one.
‘During wakefulness, the rodent will learn to avoid this location because it’s frightening,’ he explained. ‘But after waking up, we’ll see if the reward we gave during sleep will be able to suppress the aversion memory we gave during wakefulness.’
If their results in rodents are promising, the technique could be developed for people.
‘The application in humans might not be that far away — we’re talking about 10 years which is quite soon when we’re talking about science,’ said Dr Benchenane. ‘But first we need to understand how positive and negative valence (whether someone feels good or bad about something) compete in the brain. We don’t want to make people like what they are scared of.’
While some people might benefit from having memories reduced, others would like to see them strengthened – a key function of a normal night’s sleep.
As we doze, our brains replay our day. In this way, some memories are consolidated, or strengthened, while unimportant memories are selected and forgotten.
‘The brain has time to work on strengthening and selecting memories with less disturbance during sleep,’ said Professor Anders Martin Fjell from the University of Oslo in Norway. ‘Selective forgetting is as important as memory per se, since we do not have the capacity to remember all that we experience, and sleep seems to help us with this too.’
But as we grow older, our ability to form new memories for specific events in our lives worsens. It not only takes longer to learn new information, but it is also harder to recall that information.
Prof. Fjell is looking at how age affects the way memories are unconsciously strengthened in the brain during sleep and its impact on memory loss in the elderly, as part of the ERC-funded project AgeConsolidate.
‘Since a large proportion of cognitively healthy older adults report to be worried about their own memory function, finding the causes of reduced memory in ageing is important,’ said Prof. Fjell. ‘This will also allow us to develop strategies for reducing memory problems in ageing.’
Although we know age affects our ability to encode (or learn) and retrieve information, studies on how memories are strengthened – or consolidated – in the ageing brain have largely been ignored until now, says Prof. Fjell.
He hopes to strengthen selected memories without the participants consciously retrieving or activating them — as these will otherwise be encoded a second time.
To do this, Prof. Fjell and his team will use a technique called targeted memory reactivation (TMR) by using sound to stimulate people’s brains while they sleep in a scanner. The idea is to boost the memory-strengthening processes that happen during deep sleep in order to get a better understanding of the neural mechanisms responsible for increasing memory consolidation.
The scans will be carried out across participants with above- and below-average memory function at two-year intervals to look at changes in their brains and specific memories over time.
‘As many people experience changes in sleeping patterns — and sometimes quality — with age, it is important to know whether sleep contributes to the reduced memory in ageing,’ said Prof. Fjell.
- Facebook plans to change how its news feed works, playing up status updates from friends and family.
- On the flip side, it will deemphasize news articles and anything published by brands.
- Facebook is trying to foster “meaningful interaction” and make Facebook more of a force for good, CEO Mark Zuckerberg said.
- Facebook is coming off of a tough year, where it had to battle fake news and reports that Russian-linked groups attempted to influence the 2016 presidential election via ads on its service.
In the wake of criticism about how its news feed can be manipulated and is having a negative effect on users, Facebook is making some big changes to its flagship feature.
The company plans to give more prominence to status updates and photos shared by users’ friends and family while at the same time playing down news articles or anything published by brands, company officials said.
“We feel a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being,” Zuckerberg said in a post Thursday on his Facebook page.
The New York Times reported the changes earlier on Thursday. Facebook confirmed them in Zuckerberg’s post and in a blog post titled “Bringing People Closer Together” by Adam Mosseri, who heads the company’s news feed.
Facebook’s revamping of its news feed is intended to ensure more “meaningful interaction” on the social network, Zuckerberg said in his post. The company wants to encourage users to have more conversations with people they know, rather than passively consuming articles or videos.
The news comes a week after Zuckerberg announced that his New Year’s resolution for 2018 would be to focus on systemic issues with Facebook, including abuse and hacking.
“The world feels anxious and divided, and Facebook has a lot of work to do – whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent,” wrote Zuckerberg in a Facebook post announcing his resolution.
The social networking giant is coming off a rough 2017, amid revelations of fake news and ads placed by Russian-linked actors allegedly seeking to influence the 2016 presidential election.
For many years, Charles Darwin was haunted by flowers. In 1859, the naturalist published his most famous work, On the Origin of Species, the book that is generally regarded as the foundation of evolutionary biology. But 20 years later, he was still bothered by one big thing: Where the heck did all the flowers come from? In a letter to botanist Joseph Dalton Hooker in 1879, Darwin called this problem an “abominable mystery.” It might sound silly, but Darwin really couldn’t explain how flowering plants — known as angiosperms — had risen to dominance so quickly over the more primitive gymnosperms — a group that includes pines and other conifers.
The fossil record shows us that around 100 million years ago, during the Cretaceous period, a huge variety of angiosperms came onto the scene and replaced gymnosperms as the dominant type of plant on Earth. This sudden abundance of plants — the ancestors of modern lavender, wheat, roses, magnolias, daisies, and so forth — ran counter to Darwin’s theory that new species arise very slowly over time as a result of selective pressures. Current hypotheses suggest that most angiosperms evolved alongside the insects or other animals that pollinate them, without which it’s not possible for the plants to produce seed-bearing fruits. But these hypotheses don’t explain the epic boom in ancient angiosperms.
In a paper published Thursday in the journal PLOS Biology, a couple of scientists proposed an answer to the abominable mystery of why angiosperms replaced gymnosperms so abruptly. Kevin Simonin, an assistant professor of ecology and evolution at San Francisco State University, and Adam Roddy present evidence that it all comes down to the efficiency of cells. The secret to angiosperms’ success, they say, is a rapid downsizing of the plants’ cells beginning about 140 million years ago. This downsizing dramatically increased their efficiency. Once angiosperms became that much more efficient, their domination of terrestrial ecosystems was only a matter of time.
The research team arrived at this conclusion by examining the relative size of genomes in angiosperms and gymnosperms, then comparing those numbers with the plants’ capacity for carbon dioxide uptake and water transport. Cell sizes can vary a lot for many reasons, but genome size is a strong predictor of cell size. A smaller genome, they concluded, means a smaller cell — so more cells can be packed into the same volume of plant tissue, enabling a plant to take in more carbon dioxide and water, thereby producing more of the carbohydrates that yield energy and drive growth.
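The packing argument boils down to simple arithmetic, which can be sketched as follows (an illustrative toy with made-up numbers, not the paper’s actual model): if cell volume shrinks roughly in step with genome size, the number of cells that fit in a fixed volume of leaf tissue grows by the inverse factor.

```python
def cells_per_tissue_volume(tissue_volume_um3, cell_volume_um3):
    """Roughly how many cells fit in a fixed volume of leaf tissue."""
    return tissue_volume_um3 / cell_volume_um3

# Hypothetical numbers purely for illustration: halving cell volume
# (tracking a halved genome) doubles how many cells pack into the
# same tissue, and with them the surfaces available for gas exchange.
tissue = 1e9  # cubic micrometres of leaf tissue (made-up figure)
big_cell, small_cell = 8000.0, 4000.0  # made-up cell volumes in um^3

print(cells_per_tissue_volume(tissue, big_cell))    # 125000.0
print(cells_per_tissue_volume(tissue, small_cell))  # 250000.0
```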
Photosynthesis is a big part of this picture too, since, as we all know, plants need sunlight to turn water and carbon dioxide into carbohydrates. Previous research has established that the higher photosynthetic capacity of angiosperms helped them grow much more quickly than their gymnosperm cousins, but this new study shows us how angiosperms achieved this high level of efficiency.
So even though coevolution with pollinators played a huge role in the specific mechanisms of angiosperm evolution, Simonin and Roddy say there’s something common to all of these plants, something fundamental to their biophysical architecture, that enabled them to take over the world. Perhaps this research would set Darwin’s mind at ease. But more likely, he would just have new questions.
Abstract: The abrupt origin and rapid diversification of the flowering plants during the Cretaceous has long been considered an “abominable mystery.” While the cause of their high diversity has been attributed largely to coevolution with pollinators and herbivores, their ability to outcompete the previously dominant ferns and gymnosperms has been the subject of many hypotheses. Common among these is that the angiosperms alone developed leaves with smaller, more numerous stomata and more highly branching venation networks that enable higher rates of transpiration, photosynthesis, and growth. Yet, how angiosperms pack their leaves with smaller, more abundant stomata and more veins is unknown but linked — we show — to simple biophysical constraints on cell size. Only angiosperm lineages underwent rapid genome downsizing during the early Cretaceous period, which facilitated the reductions in cell size necessary to pack more veins and stomata into their leaves, effectively bringing actual primary productivity closer to its maximum potential. Thus, the angiosperms’ heightened competitive abilities are due in no small part to genome downsizing.
How cold is the coldest place in the Universe that we know of? What’s the lowest man-made temperature ever achieved?
And just how many zeroes are needed to express ‘absolute hot’, after which the fundamentals of conventional physics start to break down in all kinds of strange ways?
All is revealed in this awesome infographic created by BBC Future back in 2013.
Most people are pretty familiar with absolute zero: it’s -273.15 degrees Celsius (-459.67 degrees Fahrenheit), and it’s the lowest possible temperature that can ever be achieved, according to the laws of physics as we know them.
This is because it’s the coldest an entity can get when every single skerrick of heat energy has been sucked right out of it.
But what about absolute hot? It’s the highest possible temperature that matter can attain, according to conventional physics, and, well, it’s been calculated to be exactly 1,420,000,000,000,000,000,000,000,000,000,000 degrees Celsius (2,556,000,000,000,000,000,000,000,000,000,000 degrees Fahrenheit).
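The Celsius and Fahrenheit figures quoted here (and the absolute-zero values above) are linked by the standard conversion °F = °C × 9/5 + 32, which is easy to sanity-check:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

# Absolute zero: -273.15 C is exactly -459.67 F.
print(c_to_f(-273.15))  # -459.67

# At the 'absolute hot' scale the +32 offset is negligible, so the
# Fahrenheit figure is simply 1.8 times the Celsius one, matching
# the numbers in the text (1.42e33 C -> about 2.556e33 F).
print(c_to_f(1.42e33))
```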
Which, of course, is ridiculous. The only thing that we know of that’s ever come close to absolute hot is the temperature of the Universe when it was 10⁻⁴³ seconds old.
Way back up on the infographic is our biggest achievement in the heat stakes: 5,500,000,000,000 degrees Celsius (9,900,000,000,000 degrees Fahrenheit), which scientists were able to achieve by crashing lead ions against each other in Switzerland’s Large Hadron Collider.
There’s so much more fascinating stuff on this infographic, you can find out the temperature of the clouds on Jupiter, the average January temperature in the coldest place on Earth, and the temperature inside a conventional chemical bomb.
Over the past two decades, the positive psychology movement has brightened up psychological research with its science of happiness, human potential and flourishing.
It argues that psychologists should not only investigate mental illness but also what makes life worth living.
The founding father of positive psychology, Martin Seligman, describes happiness as experiencing frequent positive emotions, such as joy, excitement and contentment, combined with deeper feelings of meaning and purpose.
It implies a positive mindset in the present and an optimistic outlook for the future.
Importantly, happiness experts have argued that happiness is not a stable, unchangeable trait but something flexible that we can work on and ultimately strive towards.
I have been running happiness workshops for the last four years based on evidence from this field of psychology.
The workshops are fun and I have earned a reputation as “Mrs Happy”, but the last thing I would want anyone to believe is that I am happy all the time. Striving for a happy life is one thing, but striving to be happy all the time is unrealistic.
Recent research indicates that psychological flexibility is the key to greater happiness and well-being.
For example, being open to emotional experiences and the ability to tolerate periods of discomfort can allow us to move towards a richer, more meaningful existence.
Studies have demonstrated that the way we respond to the circumstances of our lives has more influence on our happiness than the events themselves.
Experiencing stress, sadness and anxiety in the short term doesn’t mean we can’t be happy in the long term.
Two paths to happiness
Philosophically speaking there are two paths to feeling happy, the hedonistic and the eudaimonic.
Hedonists take the view that in order to live a happy life we must maximise pleasure and avoid pain. This view is about satisfying human appetites and desires, but it is often short lived.
In contrast, the eudaimonic approach takes the long view. It argues that we should live authentically and for the greater good. We should pursue meaning and potential through kindness, justice, honesty and courage.
If we see happiness in the hedonistic sense, then we have to continue to seek out new pleasures and experiences in order to “top up” our happiness.
We will also try to minimise unpleasant and painful feelings in order to keep our mood high.
If we take the eudaimonic approach, however, we strive for meaning, using our strengths to contribute to something greater than ourselves. This may involve unpleasant experiences and emotions at times, but often leads to deeper levels of joy and contentment.
So leading a happy life is not about avoiding hard times; it is about being able to respond to adversity in a way that allows you to grow from the experience.
Growing from adversity
Research shows that experiencing adversity can actually be good for us, depending on how we respond to it. Tolerating distress can make us more resilient and lead us to take action in our lives, such as changing jobs or overcoming hardship.
In studies of people facing trauma, many describe their experience as a catalyst for profound change and transformation, leading to a phenomenon known as “post-traumatic growth”.
Often when people have faced difficulty, illness or loss, they describe their lives as happier and more meaningful as a result.
Unlike feeling happy, which is a transient state, leading a happier life is about individual growth through finding meaning.
It is about accepting our humanity with all its ups and downs, enjoying the positive emotions, and harnessing painful feelings in order to reach our full potential.
NASA has invented a new type of autonomous space navigation that could see human-made spacecraft heading into the far reaches of the Solar System, and even farther – by using pulsars as guide stars.
It’s called Station Explorer for X-ray Timing and Navigation Technology, or SEXTANT (named after the 18th-century nautical navigation instrument), and it uses X-ray technology to detect millisecond pulsars, using them much as a GPS receiver uses satellites.
“This demonstration is a breakthrough for future deep space exploration,” said SEXTANT project manager Jason Mitchell of NASA’s Goddard Space Flight Center.
“As the first to demonstrate X-ray navigation fully autonomously and in real-time in space, we are now leading the way.”
Pulsars are highly magnetised, rapidly rotating neutron stars – the result of a massive star’s core collapsing and subsequently exploding.
As they spin, they emit electromagnetic radiation. If an observer is in the right position, they can appear as sweeping beams, like a cosmic lighthouse.
They’re also extraordinarily regular – in the case of some millisecond pulsars, which can spin hundreds of times a second, their regularity can rival that of atomic clocks.
This is what led to the idea behind SEXTANT. Because these pulsars are so regular, and because they’re fixed in position in the cosmos, they can be used in the same way that a global positioning system uses atomic clocks.
SEXTANT works like a GPS receiver getting signals from at least three GPS satellites, all of which are equipped with atomic clocks. The receiver measures the time delay from each satellite and converts this into spatial coordinates.
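That range-based principle is easiest to see in two dimensions. The sketch below is an illustrative toy of the GPS idea described above, not NASA’s actual SEXTANT code: each time delay is converted to a distance, and the one position consistent with all three ranges is solved for.

```python
# Toy 2D trilateration: recover a receiver's position from its
# distances to three beacons at known positions. Distances would
# come from signal time delays (distance = c * delay).

def trilaterate(beacons, distances):
    """Solve for (x, y) given three (bx, by) beacons and ranges to them."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise removes the x^2 + y^2
    # terms, leaving a 2x2 linear system:
    #   2(x2-x1)x + 2(y2-y1)y = (x2^2+y2^2) - (x1^2+y1^2) - d2^2 + d1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 + y2**2 - x1**2 - y1**2 - d2**2 + d1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 + y3**2 - x1**2 - y1**2 - d3**2 + d1**2
    det = a1 * b2 - a2 * b1  # Cramer's rule for the 2x2 system
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A receiver at (3, 4) measures its ranges to three known beacons.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
distances = [5.0, 65.0 ** 0.5, 45.0 ** 0.5]
print(trilaterate(beacons, distances))  # approximately (3.0, 4.0)
```

In real positioning systems the receiver’s clock offset is an extra unknown, which is why a fourth signal source is needed in practice; the toy above assumes perfect clocks.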
The electromagnetic radiation beaming from pulsars is most visible in the X-ray spectrum, which is why NASA’s engineers chose to employ X-ray detection in SEXTANT.
To do so, they used a washing machine-sized observatory attached to the International Space Station. Called Neutron-star Interior Composition Explorer, or NICER, it contains 52 X-ray telescopes and silicon-drift detectors for studying neutron stars, including pulsars.
They directed NICER to latch onto four pulsars, J0218+4232, B1821-24, J0030+0451, and J0437-4715 – pulsars so precise that their pulses can be accurately predicted for years into the future.
Over two days, NICER took 78 measurements of these pulsars, which were fed into SEXTANT. SEXTANT then used these measurements to calculate the position of NICER in its orbit around Earth on the International Space Station.
This information was compared to GPS data, with the goal being to locate NICER within a 10-mile (16 km) radius. Within eight hours, the system had calculated NICER’s position, and it remained below the 10-mile threshold for the remainder of the experiment.
“This was much faster than the two weeks we allotted for the experiment,” said SEXTANT system architect Luke Winternitz. “We had indications that our system would work, but the weekend experiment finally demonstrated the system’s ability to work autonomously.”
It could take a few years for the technology to be developed into a navigation system suitable for deep-space vessels, but the concept has been proven.
Now the team is rolling up their sleeves to refine it. They will be updating and fine-tuning its software in preparation for another experiment in the second half of 2018. They also hope to reduce the size, weight, and power requirements of the hardware.
Eventually, SEXTANT could be used to calculate the location of planetary satellites far from the range of Earth’s GPS satellites, and assist on human spaceflight missions, such as the space agency’s planned Mars mission.
“This successful demonstration firmly establishes the viability of X-ray pulsar navigation as a new autonomous navigation capability,” Mitchell said.
“We have shown that a mature version of this technology could enhance deep-space exploration anywhere within the solar system and beyond.”
This year marks the 100th anniversary of the great influenza pandemic of 1918.
Between 50 and 100 million people are thought to have died, representing as much as 5 percent of the world’s population. Half a billion people were infected.
Especially remarkable was the 1918 flu’s predilection for taking the lives of otherwise healthy young adults, as opposed to children and the elderly, who usually suffer most. Some have called it the greatest pandemic in history.
The 1918 flu pandemic has been a regular subject of speculation over the last century. Historians and scientists have advanced numerous hypotheses regarding its origin, spread and consequences.
As a result, many of us harbor misconceptions about it.
By correcting these 10 myths, we can better understand what actually happened and learn how to prevent and mitigate such disasters in the future.
1. The pandemic originated in Spain
No one believes the so-called “Spanish flu” originated in Spain.
The pandemic likely acquired this nickname because of World War I, which was in full swing at the time.
The major countries involved in the war were keen to avoid encouraging their enemies, so reports of the extent of the flu were suppressed in Germany, Austria, France, the United Kingdom and the US.
By contrast, neutral Spain had no need to keep the flu under wraps. That created the false impression that Spain was bearing the brunt of the disease.
In fact, the geographic origin of the flu is debated to this day, though hypotheses have suggested East Asia, Europe and even Kansas.
2. The pandemic was the work of a ‘super-virus’
The 1918 flu spread rapidly, killing 25 million people in just the first six months. This led some to fear the end of mankind, and has long fueled the supposition that the strain of influenza was particularly lethal.
However, more recent study suggests that the virus itself, though more lethal than other strains, was not fundamentally different from those that caused epidemics in other years.
Much of the high death rate can be attributed to crowding in military camps and urban environments, as well as poor nutrition and sanitation, which suffered during wartime.
It’s now thought that many of the deaths were due to the development of bacterial pneumonias in lungs weakened by influenza.
3. The first wave of the pandemic was most lethal
Actually, the initial wave of deaths from the pandemic in the first half of 1918 was relatively low.
It was in the second wave, from October through December of that year, that the highest death rates were observed. A third wave in spring of 1919 was more lethal than the first but less so than the second.
Scientists now believe that the marked increase in deaths in the second wave was caused by conditions that favored the spread of a deadlier strain.
People with mild cases stayed home, but those with severe cases were often crowded together in hospitals and camps, increasing transmission of a more lethal form of the virus.
4. The virus killed most people who were infected with it
In fact, the vast majority of the people who contracted the 1918 flu survived. National death rates among the infected generally did not exceed 20 percent.
However, death rates varied among different groups. In the US, deaths were particularly high among Native American populations, perhaps due to lower rates of exposure to past strains of influenza. In some cases, entire Native communities were wiped out.
Of course, even a 20 percent death rate vastly exceeds a typical flu, which kills less than one percent of those infected.
5. Therapies of the day had little impact on the disease
No specific anti-viral therapies were available during the 1918 flu. That’s still largely true today, where most medical care for the flu aims to support patients, rather than cure them.
One hypothesis suggests that many flu deaths could actually be attributed to aspirin poisoning. Medical authorities at the time recommended large doses of aspirin of up to 30 grams per day.
Today, about four grams would be considered the maximum safe daily dose. Large doses of aspirin can lead to many of the pandemic’s symptoms, including bleeding.
However, death rates seem to have been equally high in some places in the world where aspirin was not so readily available, so the debate continues.
6. The pandemic dominated the day’s news
Public health officials, law enforcement officers and politicians had reasons to underplay the severity of the 1918 flu, which resulted in less coverage in the press.
In addition to the fear that full disclosure might embolden enemies during wartime, they wanted to preserve public order and avoid panic.
However, officials did respond. At the height of the pandemic, quarantines were instituted in many cities. Some were forced to restrict essential services, including police and fire.
7. The pandemic changed the course of World War I
It’s unlikely that the flu changed the outcome of World War I, because combatants on both sides of the battlefield were relatively equally affected.
However, there is little doubt that the war profoundly influenced the course of the pandemic. Concentrating millions of troops created ideal circumstances for the development of more aggressive strains of the virus and its spread around the globe.
8. Widespread immunisation ended the pandemic
Immunisation against the flu as we know it today was not practiced in 1918, and thus played no role in ending the pandemic.
Exposure to prior strains of the flu may have offered some protection. For example, soldiers who had served in the military for years suffered lower rates of death than new recruits.
In addition, the rapidly mutating virus likely evolved over time into less lethal strains. This is predicted by models of natural selection.
Because highly lethal strains kill their host rapidly, they cannot spread as easily as less lethal strains.
9. The genes of the virus have never been sequenced
In 2005, researchers announced that they had successfully determined the gene sequence of the 1918 influenza virus. The virus was recovered from the body of a flu victim buried in the permafrost of Alaska, as well as from samples of American soldiers who fell ill at the time.
Two years later, monkeys infected with the virus were found to exhibit the symptoms observed during the pandemic.
Studies suggest that the monkeys died when their immune systems overreacted to the virus, a so-called “cytokine storm”.
Scientists now believe that a similar immune system overreaction contributed to high death rates among otherwise healthy young adults in 1918.
10. The 1918 pandemic offers few lessons for 2018
Severe influenza epidemics tend to occur every few decades. Experts believe that the next one is a question not of if but when.
While few living people can recall the great flu pandemic of 1918, we can continue to learn its lessons, which range from the commonsense value of handwashing and immunisations to the potential of anti-viral drugs.
Today we know more about how to isolate and handle large numbers of ill and dying patients, and we can prescribe antibiotics, not available in 1918, to combat secondary bacterial infections.
Perhaps the best hope lies in improving nutrition, sanitation and standards of living, which render patients better able to resist the infection.
For the foreseeable future, flu epidemics will remain an annual feature of the rhythm of human life. As a society, we can only hope that we have learned the great pandemic’s lessons sufficiently well to quell another such worldwide catastrophe.
It’s tough to keep track of the ever-changing stock of items in your medicine cabinet. Before you know it, those little shelves might end up full of ineffective and expired products that should probably be thrown in the trash.
To keep things under control, experts recommend taking a hard look at your medicine cabinet pretty frequently.
“I tell people, when you change your clocks, go through your medicine cabinet,” Heather Free, PharmD, AAHIVP, spokesperson for the American Pharmacists Association, said.
Ready to clean out the worst offenders? Here’s when experts say you should toss 11 popular items.
1. A toothbrush that’s more than four months old
The American Dental Association (ADA) says it’s best to replace your toothbrush every three to four months, or sooner if the bristles become frayed.
There’s not enough evidence to say that germs on an old toothbrush can make you sick, the ADA notes. But a worn-down toothbrush will clean your teeth less effectively.
2. Toothpaste that’s several years old
Expired toothpaste isn’t dangerous to use, but if it’s really old – think several years past its printed expiration date – its fluoride may not work properly anymore, dentist Joel H. Berg told the New York Times.
Its consistency could also change after a while, making it tough to squeeze out of the tube.
3. Any expired sunscreen
The Food and Drug Administration (FDA) requires that all sunscreens keep their original strength for three years.
After that, there’s no guarantee that the product will protect your skin from harmful UV radiation. It might still work – but you don’t want to risk a painful sunburn to find out.
The American Academy of Dermatology (AAD) recommends getting rid of any sunscreen past the expiration date printed on the packaging.
If there’s no date listed, toss the sunscreen after three years. Also replace any sunscreen that’s changed in colour or consistency since you purchased it.
4. Cosmetics that have changed colour, odour, or consistency – or grown bacteria
There are a few warning signs to check for. The first is a change in colour: A little yellowing is usually no big deal, but look out for dramatic changes in a product’s hue.
Next, there’s odour. It’s normal for a product’s scent to lose potency over time, but again, beware of dramatic changes. A rancid smell, in particular, indicates that oils in your product have gone bad.
Changes in consistency can also indicate that it’s time to throw out a product. This is especially true when it comes to sunscreen, The Cosmetist explained – it may not work as promised.
Finally, check for bacteria or fungi in the form of black or gray growths. In case it’s not obvious, products with such growths should be thrown out immediately.
5. Expired medications
FDA rules require that all medicines come with an expiration date – “the date at which the manufacturer can guarantee the full potency and safety of the drug,” according to the Harvard School of Public Health (HSPH).
But just because that date has passed doesn’t mean a medicine immediately stops working. In fact, a US government study found that 90 percent of more than 100 prescription and over-the-counter drugs were still safe and effective even 15 years after their expiration dates had passed.
Still, Free recommends sticking to those dates. “Right now what I tell patients is to follow the true expiration dates [on packages] just to be cautious,” she said.
This is especially true when it comes to nitroglycerin, insulin, and liquid antibiotics – three medications HSPH says you should never ever use past the expiration date.
When in doubt, consult a pharmacist. And if you do throw out a medicine, follow these FDA guidelines.
One final note: Free said medicines should be stored in a cool, dry place – and that bathrooms and kitchens are generally a bad choice due to their heat and moisture.
Try moving your medicines to a nightstand or a bedroom closet instead.
6. Acne products that have been open for a few months
Dermatologist Amy Weschler told Allure that acne products with benzoyl peroxide and salicylic acid need to be tossed after four to six months.
That’s because these two ingredients tend to degrade quickly.
7. Rubbing alcohol and hydrogen peroxide you’ve had for years
Free said that disinfectants like rubbing alcohol and hydrogen peroxide can become less effective over a long period of time, and recommended buying smaller bottles if you don’t use them very often.
Toss them after the printed expiration date.
8. Old contact lens cases
The American Academy of Ophthalmology says you should replace your contact lens case at least every three months.
Because your fingers touch the case every time you handle your lenses, it can become a breeding ground for bacteria that cause eye infections.
9. Expired contact lens solution
You should never ever use expired contact lens solution, even if the bottle is still full, according to FDA optometrist Bernard P. Lepri.
In an interview with Medscape, Lepri explained that expired solution can end up contaminated – and using it could lead to infections, vision loss, and (in extreme cases) blindness.
10. That giant tub of petroleum jelly you rarely use
Free explained that your fingers can introduce bacteria into a tub of petroleum jelly every time you touch it. Next time you go through your medicine cabinet, it’s probably wise to get rid of any ancient containers you’ve been holding on to.
Free’s advice for the future: “Buy smaller containers.”
11. Old tampons or tampons with torn wrappers
Tampons have a shelf life of about five years, gynecologist Alyssa Dweck told Women’s Health.
That’s because they can grow mold, especially if they’re stored in a moist, steamy bathroom.
Don’t forget to check the wrapper, too. Dweck previously told us that you should always make sure a tampon’s packaging is intact before you insert one.
It ensures that the tampon isn’t contaminated, and it’s one way to protect yourself against toxic shock syndrome – the potentially fatal illness that is sometimes linked to tampon use.