New study explains why men’s noses are bigger than women’s.


Human noses come in all shapes and sizes. But one feature seems to hold true: Men’s noses are bigger than women’s. A new study from the University of Iowa concludes that men’s noses are about 10 percent larger than women’s noses, on average, in populations of European descent. The difference, the researchers believe, comes from the sexes’ different builds and energy demands: Males in general have more lean muscle mass, which requires more oxygen for muscle tissue growth and maintenance. Larger noses mean more oxygen can be breathed in and transported in the blood to supply the muscle.

The researchers also note that males and females begin to show differences in nose size at around age 11, generally, when puberty starts. Physiologically speaking, males begin to grow more lean muscle mass from that time, while females grow more fat mass. Prior research has shown that, during puberty, approximately 95 percent of body weight gain in males comes from fat-free mass, compared to 85 percent in females.

“This relationship has been discussed in the literature, but this is the first study to examine how the size of the nose relates to body size in males and females in a longitudinal study,” says Nathan Holton, assistant professor in the UI College of Dentistry and lead author of the paper, published in the American Journal of Physical Anthropology. “We have shown that as body size increases in males and females during growth, males exhibit a disproportionate increase in nasal size. This follows the same pattern as energetic variables such as oxygen consumption, basal metabolic rate and daily energy requirements during growth.”

It also explains why our noses are smaller than those of our ancestors, such as the Neanderthals. The reason, the researchers believe, is that our distant lineages had more muscle mass, and so needed larger noses to maintain that muscle. Modern humans have less lean muscle mass, meaning we can get away with smaller noses.

“So, in humans, the nose can become small, because our bodies have smaller oxygen requirements than we see in archaic humans,” Holton says, noting also that the rib cages and lungs are smaller in modern humans, reinforcing the idea that we don’t need as much oxygen to feed our frames as our ancestors did. “This all tells us physiologically how humans have changed from their ancestors.”

Holton and his team tracked nose size and growth of 38 individuals of European descent enrolled in the Iowa Facial Growth Study from three years of age until the mid-twenties, taking external and internal measurements at regular intervals for each individual. The researchers found that boys and girls have the same nose size, generally speaking, from birth until puberty begins, around age 11. From that point onward, the size difference grew more pronounced, the measurements showed.

“Even if the body size is the same,” Holton says, “males have larger noses, because more of the body is made up of that expensive tissue. And, it’s at puberty that these differences really take off.”


Holton says the findings should hold true for other populations, as differences in male and female physiology cut across cultures and races, although further studies would need to confirm that.

Prior research appears to support Holton’s findings. In a 1999 study published in the European Journal of Nutrition, researchers documented that males’ energy needs roughly double those of females post-puberty, “indicating a disproportional increase in energy expenditure in males during this developmental period,” Holton and his colleagues write.

Another interesting aspect of the research is what it all means for how we think of the nose. It’s not just a centrally located adornment on our face; it’s more a valuable extension of our lungs.

“So, in that sense, we can think of it as being independent of the skull, and more closely tied with non-cranial aspects of anatomy,” Holton says.

Xavier X-Ray Design by Danwei Ye.


Portable X-ray scanner

Medical staff are real heroes when they provide essential assistance to survivors in disaster areas. The Xavier Portable X-Ray by Danwei Ye was designed to enable medical teams to deliver better care in harsh conditions. It is no secret that their work is often hampered by limited access to useful devices in problematic zones. X-ray machines are a perfect illustration: they are so heavy that transporting them turns out to be a real ordeal. Add to this the fact that even the smallest ones require an expert to operate them…

The Xavier Portable X-Ray is one of a kind: it is both compact and easy to transport. Laminographic scanning enables any user to identify the location of a broken bone. No need to worry about a power outage, either: a built-in rechargeable battery as well as a power generator are included in the ingenious system. These units can be activated to generate extra power simply by pulling the handle they are connected to. The X-Ray device folds into a small rubber case for simplified transportation. Perfectly portable, unfolded in seconds and convenient to operate. Heroes now have powerful tools to assist them in their mission!

Global warming proponents and sceptics agree on one point: study into myth of ‘pause’ merits more research


A study suggesting that the “pause” in global warming is not real has managed to unify climate scientists and their arch-sceptics over the need for further research to clarify whether global average temperatures really have flat-lined over the past 15 years.

The study, by Kevin Cowtan of the University of York and Robert Way of the University of Ottawa, found that the global warming pause could be virtually eliminated by including temperature estimates from the Arctic, which are currently excluded because of a lack of weather stations in this remote region.

As revealed today by The Independent, the study used an established statistical technique called kriging to fill in the missing surface temperatures in the Arctic with satellite readings of the atmosphere above. This eliminated the pause, or hiatus, in global average temperatures that appears to have developed over the past decade or more.

The slowdown in temperatures has been used by climate sceptics, such as the former Chancellor Lord Lawson of Blaby, to undermine the science of climate change, claiming that global warming has stopped despite a continuing rise in industrial emissions of greenhouse gases – set to reach a record 36 billion tons in 2013.

However, Lord Lawson said yesterday that he would like to read the latest study before commenting on it in detail. “These things come out from time to time and sometimes they have merit and sometimes they don’t. It needs to be reviewed by other scientists,” Lord Lawson told The Independent.

The science writer and climate sceptic Matt Ridley, now Lord Ridley, a Conservative hereditary peer, dismissed the latest study because it relies on swapping one type of “uncertain data” used to fill in the gaps in the polar region with another. “Even if that gives a slight net warming for the past 15 to 17 years, it does not change the fact that the warming over the full 34 years since warming began has been slower than predicted by 95 per cent of the [computer] model runs consulted by the Intergovernmental Panel on Climate Change… The key point is that warming is slower than expected,” Lord Ridley said.
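
For readers curious what kriging does in practice, the sketch below is a minimal, self-contained illustration of ordinary kriging; the station coordinates, anomaly values and exponential covariance model are invented for the example and have nothing to do with Cowtan and Way’s actual data or code.

```python
import numpy as np

def exp_cov(d, sill=1.0, rng=2000.0):
    """Exponential covariance as a function of distance d (in km)."""
    return sill * np.exp(-d / rng)

def ordinary_kriging(xy_obs, z_obs, xy_new):
    """Estimate values at xy_new from observed points (xy_obs, z_obs)."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Ordinary-kriging system: covariances plus a Lagrange-multiplier row and
    # column that force the weights to sum to one (unbiasedness constraint).
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_cov(d_obs)
    K[n, n] = 0.0
    estimates = []
    for p in xy_new:
        d_new = np.linalg.norm(xy_obs - p, axis=1)
        k = np.append(exp_cov(d_new), 1.0)
        weights = np.linalg.solve(K, k)[:n]
        estimates.append(weights @ z_obs)
    return np.array(estimates)

# Toy inputs: three "stations" (x, y in km) with temperature anomalies (deg C)
stations = np.array([[0.0, 0.0], [500.0, 300.0], [1200.0, -200.0]])
anomalies = np.array([0.4, 0.9, 1.3])
gap = np.array([[800.0, 100.0]])   # an unobserved grid cell to fill in
print(ordinary_kriging(stations, anomalies, gap))
```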

Rasmus Benestad of the Norwegian Meteorological Institute in Oslo, who was one of the first scientists to try to estimate the effect of the missing Arctic warming data on global average temperatures, said that the latest study is a useful contribution to the debate.

“The pronounced recent warming can be inferred from different and independent observations, such as the melting of the ice on Greenland, thawing of permafrost, and the reduction of the sea-ice. Hence, when the most rapidly warming regions on Earth are not part of the statistics, the global mean estimate is bound to be lower than the real global mean. Hence, our picture of Earth’s surface temperature is somewhat ‘patchy’,” Dr Benestad said.

It is likely that the slowdown in global average temperatures at the Earth’s surface is the result of a combination of factors, such as the uptake of heat in the deep oceans, the effect of El Nino and La Nina conditions in the Pacific Ocean, in addition to the hidden temperature increases in the Arctic, he said.

Professor Richard Allan of the University of Reading said that the study by Cowtan and Way appears reasonable as they have tested their method by applying the technique to regions of the world where there are ground-based measurements to gauge its accuracy. However, the study is still preliminary and will need to be scrutinised by other scientists, Professor Allan said.

“There is still a slowdown in the rate of global average surface warming in the 21st century compared to the late 20th century and this still looks to be caused by natural fluctuations in the ocean and other natural climate fluctuations relating to volcanic eruptions and changes in the brightness of the sun,” Professor Allan said.

“However, the size of this slowdown and the discrepancy between observations and climate simulations may be less than previously thought. The conclusions of the IPCC stand: we can expect a return to substantial warming of the planet over the coming decades in response to rising concentrations of greenhouse gases,” he said.

Ed Hawkins of Reading University added: “This is an interesting and important contribution to the continuing discussion about the recent temperature hiatus, but is unlikely to be the final word on the issue. It must also be remembered that a 15-year trend is still too short to be considered as representative of longer-term global temperature trends, and also too short to be meaningfully compared with climate simulations.”

The Conservative MP Peter Lilley said that he is not convinced that the latest study can explain the pause. “The IPCC tried to explain the pause by saying the heat is in the deep ocean and now these people say it’s in the polar regions. They both can’t be right,” Mr Lilley said.

Could arthritis drug combat Alzheimer’s?


Alzheimer’s affects 35 million people worldwide and billions have been spent on research – to little avail. But an unconventional approach based on a 30-year-old evolutionary theory might provide a way forward

At the beginning of next year, Clive Holmes will attempt to do something remarkable. But you’d never guess it from meeting this mild-mannered psychiatrist with a hint of a Midlands accent. In fact, you could be sitting in his office in the Memory Assessment and Research Centre at Southampton University and be unaware that he was up to anything out of the ordinary – save for a small whiteboard behind his desk, on which he’s drawn a few amorphous blobs and squiggles. These, he’ll assure you, are components of the immune system.

As a psychiatrist, he’s had little formal training in immunology, but he has spent much of his time of late trying to figure out how immune cells in the body communicate with others in the brain. These signals into the brain, he thinks, accelerate the speed at which neurons – nerve cells in the brain – are killed in late-stage Alzheimer’s disease, and at the beginning of next year he hopes to test the idea that blocking these signals can stop or slow down disease progression.

If he makes any dent in disease progression, he will be the first to do so. Despite the billions of pounds pumped into finding a cure over the last 30 years, there are currently no treatments or prevention strategies.

“Drug development has been largely focused on amyloid beta,” says Holmes, referring to the protein deposits that are characteristically seen in the brains of people with Alzheimer’s and are thought to be toxic to neurons, “but we’re seeing that even if you remove amyloid, it seems to make no difference to disease progression.”

He mentions two huge recent failures in drugs that remove amyloid. The plug has been pulled on bapineuzumab by its developers Johnson & Johnson and Pfizer after trials showed its inability to halt disease progression; and the wheels seem to be coming off Eli Lilly’s drug solanezumab after similarly disappointing results.

Other drugs in the experimental pipeline are a long way off. Few make the jump from efficacy in animals to efficacy in people. Fewer still prove safe enough to be used widely.

Holmes’s theory, if true, would have none of these problems. He’s been testing etanercept, a drug widely used for rheumatoid arthritis. It blocks the production of TNF-alpha, one of the signalling molecules, or cytokines, used by immune cells to communicate with each other. In the next few months, he expects the results of a pilot trial in people with Alzheimer’s. If they are positive, he’ll test the strategy in people with only the mildest early forms of the disease.

“If we can show that this approach works,” says Holmes, leaning forward in his chair, “then since we already know a hell of a lot about the pharmacology of these drugs, I’m naive enough to think that they could be made available for people with the disease or in the early stages of the disease and we can move very quickly into clinical application.”

It seems too good to be true. Why, then, has the multibillion-pound drug industry not at least tested this theory? There are roughly 35 million people in the world with this devastating, memory-robbing disease, and with an increasingly ageing population, this number is expected to rise to 115 million by 2050. It’s not as if there is no financial incentive.

The answer might be that the idea Holmes and his colleagues are testing took an unconventional route to get to this point. It started life as an evolutionary theory about something few people have even considered – why we feel ill when we are unwell. The year was 1980. Benjamin Hart, a vet and behavioural scientist at the University of California, Davis, had been grappling with how animals in the wild do so well without veterinary interventions such as immunisations or antibiotics. He’d attended a lecture on the benefits of fever and how it suppresses bacterial growth. But fever is an energy-consuming process; the body has to spend 13% more energy for every 1°C rise in body temperature.

“So I started thinking about how animals act when they’re sick,” says Hart. “They get depressed, they lose their appetite. But if fever is so important, what they need is more food to fuel the fever. It didn’t stack up.”

After a few years of mulling this over, Hart published a paper on what he termed “sickness behaviour” in 1985. Lethargy, depression and loss of appetite were not, as people thought, a consequence of infection, but a programmed, normal response to infection that conferred a clear survival advantage in the wild. If an animal moves around less when ill it is less likely to pick up another infection; if it eats or drinks less, it is less likely to ingest another toxin.

“The body goes into a do-or-die, make-or-break mode,” says Hart. “In the wild, an animal can afford not to eat for a while if it means avoiding death. It allows the immune system to get going. You do this – and this is the important bit – before you’re debilitated, when it’s still going to do you some good.”

The next crucial step came from across the Atlantic in Bordeaux. In a series of experiments published between 1988 and 1994, Robert Dantzer, a vet turned biologist, showed three things. First, that inflammatory cytokines in the blood, even in the absence of infection, were enough to bring about sickness behaviour. Second, that these cytokines, produced by immune cells called macrophages, a type of white blood cell, signal along sensory nerves to inform the brain of an infection. And third, that this signal is relayed to microglial cells, the brain’s resident macrophages, which in turn secrete further cytokines that bring about sickness behaviours: lethargy, depression and loss of appetite.

Dantzer, who has since moved to the University of Texas, had thus dispelled one of the biggest dogmas in neurology – that emotions and behaviours always stemmed from the activity of neurons and neurotransmitters. He showed that in times of trouble, the immune system seizes control of the brain to use behaviour and emotions as an extension of the immune system and to ensure the full participation of the body in fighting infection.

In late April 1996, he set up a meeting in Saint-Jean-de-Luz, on the Bay of Biscay in the southwest of France, to gather together researchers interested in cytokines in the brain. It was there, in a chance meeting over breakfast in the hotel lobby, that he met Hugh Perry, a neuroscientist from Oxford University.

Perry is now at Southampton University. Sitting in his office, surrounded by framed pictures of neurons, his eyes light up as he recounts that meeting with Dantzer. “I remember as he told me about this idea of sickness behaviour thinking, ‘Wait a minute, this is a really interesting idea’. And then when he said that the innate immune cells in the brain must be involved in the process, a bell rang.” He puts down his coffee and rings an imaginary bell above his head. “Ding, ding, ding – so what if instead of a normal brain, it was a diseased brain. What happens then?”

Perry had just started working on a model of neurodegenerative disease in mice. He was studying prion disease, a degenerative disease in the brain, to see how the brain’s immune system responds to the death of neurons. He had switched laboratories from Oxford to Southampton the year after meeting Dantzer and the idea had travelled with him.

He injected his mice with a bacterial extract to induce sickness behaviour, and instead of a normal sickness behaviour response, he saw something extraordinary. His mice with brain disease did substantially worse than those with otherwise healthy brains. They were very susceptible to the inflammation and had an exaggerated sickness behaviour response. They stayed worse even though his healthy mice got better.

This, Perry explains, is because one of the microglial cells’ many roles is to patrol the brain and scavenge any debris or damaged cells, such as the misfolded proteins in prion disease. “When there’s ongoing brain disease, the microglial cells increase in number and become what’s called primed,” he says, referring to the procedure by which the immune system learns how to deal better and more efficiently with harmful stimuli. “When primed, they make a bigger, more aggressive response to a secondary stimulus.”

When the signal of peripheral infection came into the diseased brains of the mice, their microglial cells, already primed, switched into this aggressive phase. They began, as they usually would, to secrete cytokines to modulate behaviour, but secreted them at such high concentrations that they were toxic, leading to the death of neurons.

“But you never really know how your findings will translate to humans,” Perry adds. “So all this mouse stuff could be great, but is there any real importance to it? Does it matter? And that’s when I called Clive.”

Perry hadn’t met Clive Holmes at that point, but had heard of him through Southampton University’s Neuroscience Group, which brings basic researchers and clinicians together. He’d called him in 2001 to ask whether people with Alzheimer’s disease got worse after an infection. The answer was a categorical yes.

“And then Perry asked me if there was evidence to show this,” remembers Holmes. “And I was sure there must be. It was so clinically obvious. Everybody who works with Alzheimer’s just knows it. But when I looked into it, there was no evidence – nobody had really looked at it.”

In several ensuing studies, Holmes and Perry have since provided that evidence. Patients with Alzheimer’s do indeed do worse, cognitively, after an infection. But it’s not only after an infection. Chronic inflammatory conditions such as rheumatoid arthritis or diabetes, which many elderly patients have, and which also lead to the production of inflammatory cytokines, also seem to play an important part.

Holmes and Perry speculate that it’s the presence of the characteristic amyloid beta deposits in the brains of these individuals that primes the microglial cells. And that when signals of inflammation come in, be it from an infection or low-grade chronic inflammatory condition, the microglial cells switch into their aggressive, neuron-killing mode. This, they think, is why removal of amyloid beta might not be working: the damage, or in this case the priming of the microglial cells, might have already been done, meaning that the killing of neurons will continue unabated.

“So next year if these initial results look promising,” says Holmes, “we’re hoping to try and block TNF-alpha in people with the early stages of Alzheimer’s to block this peripheral signal before the disease fully takes hold. We want to see what kind of a dent on disease progression we can get. I don’t know what that’s going to be, but that’s what we’re going to find out.”

If all goes according to plan, and he can secure funding to start the trial at the beginning of next year, Holmes will, by mid-2017, find out for sure whether he can stop the disease taking hold. But the results of his trial in people with late-stage disease, due in the next few months, will give him a strong indication of what to expect.

China retains supercomputer crown.


A supercomputer built by the Chinese government has retained its place at the top of a list of the world’s most powerful systems.

Tianhe-2 can operate at 33.86 petaflop/s – the equivalent of 33,863 trillion calculations per second – according to a test called the Linpack benchmark.

There was only one change near the top of the leader board.

Switzerland’s new Piz Daint – with 6.27 petaflop/s – took sixth place.

The Top500 list is compiled twice yearly by a team led by a professor from Germany’s University of Mannheim.

It measures how fast the computers can solve a special type of linear equation to determine their speed, but does not take account of other factors – such as how fast data can be transferred from one part of the system to another – which can also influence real-world performance.
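
To give a rough sense of what that measurement involves, the sketch below times a dense linear solve on an ordinary machine and converts it to a FLOP rate using the standard (2/3)n³ operation count for LU factorisation. It is only an illustration of the principle, not the official HPL benchmark, and the matrix size is arbitrary.

```python
import time
import numpy as np

n = 2000                                  # matrix size (arbitrary for the demo)
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU-based dense solve, Linpack-style
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # approximate floating-point operations
print(f"~{flops / elapsed / 1e9:.1f} gigaflop/s on this machine")
```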

Fastest supercomputers

1. Tianhe-2 (China) 33.86 petaflop/sec

2. Titan (US) 17.59 petaflop/sec

3. Sequoia (US) 17.17 petaflop/sec

4. K computer (Japan) 10.51 petaflop/sec

5. Mira (US) 8.59 petaflop/sec

6. Piz Daint (Swiss) 6.27 petaflop/sec

7. Stampede (US) 5.17 petaflop/sec

8. Juqueen (Germany) 5.09 petaflop/sec

9. Vulcan (US) 4.29 petaflop/sec

10. SuperMuc (Germany) 2.90 petaflop/sec

(Source: Top500 List based on Rmax Linpack benchmark)

IBM – which created five out of the 10 fastest supercomputers in the latest list – told the BBC it believed the way the list was calculated should now be updated, and would press for the change at a conference being held this week in Denver, Colorado.

“The Top500 has been a very useful tool in the past decades to try to have a single number that could be used to measure the performance and the evolution of high-performance computing,” said Dr Alessandro Curioni, head of the computational sciences department at IBM’s Zurich research lab.

“[But] today we need a more practical measurement that reflects the real use of these supercomputers based on their most important applications.

“We use supercomputers to solve real problems – to push science forward, to help innovation, and ultimately to make our lives better.

“So, one thing that myself and some of my colleagues will do is discuss with the Top500 organisers adding in new measurements.”

However, one of the list’s creators suggested the request would be denied.

“A very simple benchmark, like the Linpack, cannot reflect the reality of how many real applications perform on today’s complex computer systems,” said Erich Strohmaier.

“More representative benchmarks have to be much more complex in their coding, their execution and how many aspects of their performance need to be recorded and published. This makes understanding their behaviour more difficult.

“Finding a good middle-ground between these extremes has proven to be very difficult, as unfortunately all previous attempts found critics from both camps and were not widely adopted.”

China’s lead

Tianhe-2 – which translates as Milky Way 2 – was developed by China’s National University of Defence Technology and will be based in the city of Guangzhou, in the country’s south-eastern Guangdong province.

Fluid dynamics simulation
IBM’s Sequoia recently carried out a simulation of a cloud of 15,000 bubbles

It uses a mixture of processors made by Intel as well as custom-made CPUs (central processing units) designed by the university itself.

The system is to be offered as a “research and education” tool once tests are completed, with local reports suggesting that officials have picked the car industry as a “priority” client.

Its Linpack score is nearly double that of the next supercomputer in the list – Titan, the US Department of Energy’s system at the Oak Ridge National Laboratory in Tennessee.

However, one expert said it was still too early to know whether the Chinese system would be able to outperform its US counterpart in real-world tasks.

“You can get bottlenecks,” said Prof Alan Woodward, from the University of Surrey’s department of computing.

“Talking about the number of calculations that can be carried out per second isn’t the same as saying a supercomputer can do that in practice in a sustained way. The processors might be kicking their heels some of the time if they don’t get the data as fast as they can handle it, for example.”

Energy efficiency

Supercomputer applications do not tend to use all the processor power on offer.

IBM notes that its own Sequoia supercomputer – which came third on the latest list – used a relatively high 73% of the machine’s theoretical peak performance when it recently carried out what the firm describes as the biggest fluid dynamics simulation to date.

Piz Daint supercomputer
Switzerland’s Piz Daint will be used to model galaxy formations and weather patterns

The test involved creating virtual equivalents of 15,000 collapsing bubbles – something researchers are studying to find new ways to destroy kidney stones and cancerous cells.

“The thing you want to avoid is to throw away resources,” reflected Dr Curioni.

“For scientists, the most important thing is how fast you solve a problem using the machine in an efficient way.

“When we run these types of simulations we invest much larger amounts of money running the machines than buying them.”

He added that one of the biggest costs involved is energy use.

According to the Top500 list, Tianhe-2 requires 17,808 kW of power – more than double the 8,209 kW needed by Titan or the 7,890 kW needed by Sequoia.

Dr Curioni believes a revised leader board should take energy efficiency into account.
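
A hypothetical efficiency-aware ranking along the lines Dr Curioni suggests could be as simple as dividing each system’s Linpack score by its reported power draw, as in the sketch below. The figures are the ones quoted in this article; real efficiency lists such as the Green500 use their own, more careful measurement rules.

```python
systems = {
    # name: (Rmax in petaflop/s, power in kW), as quoted in the article
    "Tianhe-2": (33.86, 17_808),
    "Titan":    (17.59,  8_209),
    "Sequoia":  (17.17,  7_890),
}

# Rank by performance per watt, most efficient first.
for name, (pflops, kw) in sorted(
        systems.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    gflops_per_watt = pflops * 1e6 / (kw * 1e3)   # PFLOP/s -> GFLOP/s, kW -> W
    print(f"{name:9s} {gflops_per_watt:.2f} gigaflop/s per watt")
```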

But Prof Woodward agreed with the list’s creators that getting researchers and the governments that sponsored them to agree to a new methodology might be easier said than done.

“There is a lot of kudos in having what is termed the fastest supercomputer,” he said.

“So, there will be resistance to a definition that favours one computer over another.”

Coming soon to you: the information you need.


The day when your hat can extrapolate your mood from your brain activity and make a spa appointment on your behalf may not be far away.

The next big thing in the digital world won’t be a better way for you to find something. If a confluence of capabilities now on the horizon bears fruit, the next big thing is that information will find you.

Welcome to contextual search, a world where devices from your phone to your appliances will join forces in the background to make your life easier automatically.

Contextual, or predictive, search started with the now-humble recommendations pioneered by companies such as Amazon – where metadata applied behind the scenes led you to products with similar attributes via pages that made helpful suggestions such as “customers who bought this also bought…”.
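
The mechanics behind “customers who bought this also bought…” can be surprisingly simple. The sketch below is a toy co-purchase counter, not Amazon’s system; the product names and baskets are made up for illustration.

```python
from collections import Counter
from itertools import combinations

baskets = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"tripod", "lens_cloth"},
    {"camera", "lens_cloth", "sd_card"},
]

co_counts = {}                      # item -> Counter of co-purchased items
for basket in baskets:
    for a, b in combinations(basket, 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def also_bought(item, k=2):
    """Top-k items most often bought together with `item`."""
    return [name for name, _ in co_counts.get(item, Counter()).most_common(k)]

print(also_bought("camera"))        # 'sd_card' tops the list; the rest is a tie
```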

But when such technology grows and expands to everything around us, it could result in what Andrew Dent, a strategist with virtualisation company Citrix Systems, calls “cyber-sense”. This is information from a growing field of devices that know more about you than ever before.

Today your smartphone knows your location, so everything from the local weather to nearby Facebook friends is available. What about tomorrow when your jacket can measure your vital signs or a hat can extrapolate your mood from your brain activity?

Connect it with information on your schedule (from your calendar), spatial information such as whether you’re running or at rest, the time of day and a hundred other factors, and machines everywhere can decide on, find and present the information they think you need.

The field is opened even wider by search technology that finds abstract connections for you, rather than you starting a search at a given point. A system out of Bangalore, India, called CollabLayer lets you watch for specific keywords you assign to almost any kind of data in a network.

But you can also submit a collection of documents to CollabLayer when you don’t really have a search term in mind. The system extracts links between what it thinks are key entities and graphs them in a “semantic map”. Such a method can give search a heuristic or “proactive” approach that doesn’t really need the input of a user.
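
CollabLayer’s internals aren’t described here, so the sketch below only illustrates the general idea of a semantic map: pull candidate “entities” out of documents (crudely, capitalised words) and link the ones that appear together, with no user-supplied search term needed. The documents and the entity rule are purely illustrative.

```python
import re
from collections import defaultdict
from itertools import combinations

docs = [
    "Acme signed a supply deal with Globex in Berlin.",
    "Globex is expanding its Berlin data centre.",
    "Acme hired a new logistics chief.",
]

graph = defaultdict(set)                     # entity -> entities it co-occurs with
for doc in docs:
    entities = set(re.findall(r"\b[A-Z][a-z]+\b", doc))
    for a, b in combinations(sorted(entities), 2):
        graph[a].add(b)
        graph[b].add(a)

# Print the crude "semantic map" as an adjacency list.
for entity, links in sorted(graph.items()):
    print(f"{entity}: {sorted(links)}")
```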

It’s a similar proposition to the semantic web framework championed by the W3C, the consortium led by the father of the world wide web, Sir Tim Berners-Lee. It aims to connect content across the web regardless of file formats, expanding the scope of what our data can do for us.

Put contextual search together with the “Internet of Things” concept and the real-world applications become obvious. When your smart car realises a brake pad is a bit worn, it asks your GPS where you are, checks your calendar to see when you have some free time, asks the manufacturer for a workshop near you that has the part, makes an appointment and sends you a text or email with everything set up before you have any idea.
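
Written as code, that brake-pad scenario is essentially a chain of service calls. Everything in the sketch below is a hypothetical stub – there is no real car, calendar or manufacturer API behind these function names – but it shows the orchestration pattern the paragraph describes.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Appointment:
    workshop: str
    when: datetime

def current_location():                            # stand-in for the car's GPS
    return (51.5074, -0.1278)

def next_free_slot():                              # stand-in for a calendar API
    return datetime(2014, 1, 14, 10, 0)

def nearest_workshop_with_part(location, part):    # manufacturer lookup stub
    return "Main Street Service Centre"

def notify_owner(message):                         # stand-in for SMS/email gateway
    print("To owner:", message)

def handle_wear_alert(part="brake pad"):
    """Run the whole chain when the car reports a worn part."""
    where = current_location()
    slot = next_free_slot()
    workshop = nearest_workshop_with_part(where, part)
    booking = Appointment(workshop, slot)
    notify_owner(f"{part} replacement booked at {booking.workshop} "
                 f"on {booking.when:%d %b %Y %H:%M}.")
    return booking

handle_wear_alert()
```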

With APIs (application programming interfaces – the “translation tools” between applications) cheaper than ever for interconnecting search systems, software isn’t the issue.

One issue is sheer volume – there’s more contextual data than anyone can possibly process manually. Business Insider recently reported on a Moscow technology conference, where a professor added up the amount of data in the world that’s about you (not just what you generate yourself). The result was 44.5 gigabytes per person, compared with just 500 megabytes per person in 1986.
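
Taken at face value, those two figures imply a roughly ninety-fold increase per person. The quick calculation below also converts that into a compound annual growth rate, assuming the comparison runs from 1986 to around 2013, when this piece appeared; the article does not state the end year.

```python
# Rough arithmetic on the per-person data figures quoted above.
growth = (44.5 * 1024) / 500        # 44.5 GB vs 500 MB -> ~91x overall
years = 2013 - 1986                 # assumed span; the end year isn't stated
annual = growth ** (1 / years) - 1
print(f"~{growth:.0f}x overall, ~{annual:.1%} per year")
```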

The other issue is commercialisation, and whether we have to be slaves to a single technology company for all this to work in the real world. With its vast desktop and mobile ecosystem, Google is the closest to a de-facto standard, and already a new Google service in the US lets you conduct contextual searches from what’s essentially your own information.

But for the brake pad example to work, a lot of proprietary systems need access to each other’s APIs, and history has shown large technology companies tend to protect their own patch. As Jared Carrizales, chief executive of Heroic Search, says: “Sorry to disappoint, but I don’t think this capability will be available en masse on any other platform than Google.”

It might take an open source platform or a platform-agnostic public system to make contextual search truly seamless, but can the support base behind non-profit efforts sustain such a far-reaching infrastructure, and will governments want to compete directly with some of their biggest taxpayers?

Howard Turtle, director of the Centre for Natural Language Processing at Syracuse University, says it will take a few VHS versus Beta-style “standards wars”, but even then, individual preferences will generate whole new tiers of processing. “Of course, it also raises all sorts of privacy and security issues,” he adds.

So with the will and means that might already be in place, an ability to commercialise the services might be the only stumbling block to an internet that knows what you want.

What Marita Cheng did next.


You’re a brilliant young computer science student who was awarded Young Australian of the Year in 2012 after you founded an international organisation to get girls interested in high tech careers.

You’ve got a swag of scholarships and fellowships under your belt and you’re in demand as a guest speaker in Australia and overseas.

Young Australian of the Year, Marita Cheng assembling a robot at Melbourne University last year.

You’re about to graduate from the University of Melbourne with a double degree in mechatronics and computer science after seven years on the books.

Do you: a) take one of the hundreds of job offers that have come your way in the past two years; b) leapfrog into a career in academia, courtesy of your high profile; or c) start a company that makes bionic arms for people with disabilities?

Option C, says 24-year-old Robogals founder Marita Cheng, who’s preparing to throw herself full-time into 2Mar Robotics, the start-up she launched in April, when she graduates at the end of the year.

Her vision is to produce a bionic arm that can be used as a daily living aid by people with limited hand movement due to spinal injuries or conditions such as multiple sclerosis and Parkinson’s disease. The arm can be mounted in multiple places around the home, including the kitchen and bathroom, and is controlled by iPhone.

“I really wanted to make a robot that was useful to people and changed people’s lives and this was a way I could do it,” Cheng says.

The idea drew enthusiastic feedback from the Spinal Injuries Association when first mooted, Cheng says: “People thought it was a dream come true.”

There are 20,000 people with spinal injuries in Australia and around three million worldwide. As well as offering people more independence, investing in robotic devices makes sound economic sense, Cheng says.

Her arm may reduce the amount of human assistance some people need to perform basic tasks and save thousands in carer costs, she says.

Cheng’s first group of users will begin testing a prototype in their homes next month and she hopes to have the arm available commercially by April next year.

Pricing is yet to be determined but Cheng hopes to collaborate with not-for-profits which can provide grant funding to suitable recipients.

“I feel really lucky, I know what I’m doing next year…I’m looking forward to it, I can spend more time on this,” Cheng says.

Striking out on her own, rather than fast tracking into an international firm, seems a logical progression for someone who cites Steve Jobs as an inspiration.

“I got so many job offers last year, it was a real dream but I always knew I wanted to start a company,” Cheng says.

“I have energy and I like to put that energy into something…I like having a vision and making it happen in real life.”

Jamie Evans is the academic whose suggestion that Cheng do something to encourage young girls into engineering led her to found Robogals in 2008. The organisation, which sends students into schools to teach girls robotics, has 17 chapters in four countries and has run workshops for 11,000 girls.

Now the head of electrical and computer systems engineering at Monash University, Evans says Cheng’s segue into the start-up world is no surprise.

“She is a quintessential entrepreneur – someone who is not interested in finding reasons that things can’t be done but rather believing that something is important and making it happen, regardless of the limited resources at her disposal,” Evans says.

“She likes to set her own agenda and, given the amazing things she has already achieved, I could not imagine her taking a graduate job in a big company. I see her as a serial entrepreneur moving from one venture to another over the years.”

Research findings point to new therapeutic approach for common cause of kidney failure.


New research has uncovered a process that is defective in patients with autosomal dominant polycystic kidney disease, a common cause of kidney failure. The findings, which appear in an upcoming issue of the Journal of the American Society of Nephrology (JASN), point to a new potential strategy for preventing and treating the disease.

Polycystic kidney disease (PKD), the fourth leading cause of kidney failure worldwide, comes in two forms: autosomal dominant polycystic kidney disease (ADPKD) develops in adulthood and is quite common, while autosomal recessive polycystic kidney disease (ARPKD) is rare but frequently fatal. ADPKD is caused by mutations in either of two proteins, polycystin-1 and polycystin-2, while ARPKD is caused by mutations in a protein called fibrocystin. There is no cure or widely adopted clinical therapy for either form of the disease.

Polycystin-1, polycystin-2, and fibrocystin are all found in a cell’s primary cilium, which acts as the cell’s antenna and is intimately involved in human embryonic development as well as the development of certain diseases, including PKD. “What we don’t know, and were hoping to better understand, is what goes wrong with these proteins in the cells of PKD patients and what kinds of therapies might help those cells,” said Joseph Bonventre, MD, PhD (Brigham and Women’s Hospital).

Dr. Bonventre and his colleagues Benjamin Freedman, PhD and Albert Lam, MD led a team of scientists at Brigham and Women’s Hospital, the Mayo Clinic, and the Harvard Stem Cell Institute as they studied cells obtained from five PKD patients: three with ADPKD and two with ARPKD. The investigators reprogrammed patients’ skin cells into induced pluripotent stem cells, which can give rise to many different cell types and tissues. When the researchers examined these cells under the microscope, they discovered that the polycystin-2 protein traveled normally to the antenna, or cilium, in cells from ARPKD patients, but it had trouble reaching the antenna in ADPKD patients. When they sequenced the DNA in these ADPKD patient cells, the investigators found mutations in the gene that encodes polycystin-1, suggesting that polycystin-1 helps shepherd polycystin-2 to the cilium.

“When we added back a healthy form of polycystin-1 to our patient cells, it traveled to the cilium and brought its partner polycystin-2 with it, suggesting a possible therapeutic approach for PKD,” explained Dr. Freedman. “This was the first time induced pluripotent stem cells have been used to study human kidney disease where a defect related to disease mechanisms has been found.”

The researchers noted that reprogrammed stem cells from patients with ADPKD may also be useful for testing new therapeutics before trying them out in humans.

In an accompanying editorial, Alexis Hofherr, MD and Michael Köttgen, MD (University Medical Centre, in Freiburg, Germany) stated that the study has “laid the groundwork for using induced pluripotent stem cells in PKD research. This important step forward will provide novel opportunities to model PKD pathogenesis with human cells with defined patient mutations.”

Scientists generate “mini-kidney” structures from human stem cells.


Diseases affecting the kidneys represent a major and unsolved health issue worldwide. The kidneys rarely recover function once they are damaged by disease, highlighting the urgent need for better knowledge of kidney development and physiology.

Now, a team of researchers led by scientists at the Salk Institute for Biological Studies has developed a novel platform to study kidney diseases, opening new avenues for the future application of regenerative medicine strategies to help restore kidney function.

Salk scientists generate “mini-kidney” structures from human stem cells

For the first time, the Salk researchers have generated three-dimensional kidney structures from human stem cells, opening new avenues for studying the development and diseases of the kidneys and for the discovery of new drugs that target human kidney cells. The findings were reported November 17 in Nature Cell Biology.

Scientists had created precursors of kidney cells using stem cells as recently as this past summer, but the Salk team was the first to coax human stem cells into forming three-dimensional cellular structures similar to those found in our kidneys.

“Attempts to differentiate human stem cells into renal cells have had limited success,” says senior study author Juan Carlos Izpisua Belmonte, a professor in Salk’s Gene Expression Laboratory and holder of the Roger Guillemin Chair. “We have developed a simple and efficient method that allows for the differentiation of human stem cells into well-organized 3D structures of the ureteric bud (UB), which later develops into the collecting duct system.”

The Salk findings demonstrate for the first time that pluripotent stem cells (PSCs) – cells capable of differentiating into the many cells and tissue types that make up the body – can be made to develop into cells similar to those found in the ureteric bud, an early developmental structure of the kidneys, and then be further differentiated into three-dimensional structures in organ cultures. UB cells form the early stages of the human urinary and reproductive organs during development and later develop into a conduit for urine drainage from the kidneys. The scientists accomplished this with both human embryonic stem cells and induced pluripotent stem cells (iPSCs), cells from the skin that have been reprogrammed into their pluripotent state.

After generating iPSCs that demonstrated pluripotent properties and were able to differentiate into mesoderm, a germ cell layer from which the kidneys develop, the researchers made use of growth factors known to be essential during the natural development of our kidneys for the culturing of both iPSCs and embryonic stem cells. The combination of signals from these growth factors, molecules that guide the differentiation of stem cells into specific tissues, was sufficient to commit the cells toward progenitors that exhibit clear characteristics of renal cells in only four days.

The researchers then guided these cells to differentiate further into organ structures similar to those found in the ureteric bud by culturing them with kidney cells from mice. This demonstrated that the mouse cells were able to provide the appropriate developmental cues to allow human cells to form three-dimensional structures of the kidney.

In addition, Izpisua Belmonte’s team tested their protocol on iPSCs from a patient clinically diagnosed with polycystic kidney disease (PKD), a genetic disorder characterized by multiple, fluid-filled cysts that can lead to decreased kidney function and kidney failure. They found that their methodology could produce kidney structures from patient-derived iPSCs.

Because of the many clinical manifestations of the disease, neither gene- nor antibody-based therapies are realistic approaches for treating PKD. The Salk team’s technique might help circumvent this obstacle and provide a reliable platform for pharmaceutical companies and other investigators studying drug-based therapeutics for PKD and other kidney diseases.

“Our differentiation strategies represent the cornerstone of disease modeling and drug discovery studies,” says lead study author Ignacio Sancho-Martinez, a research associate in Izpisua Belmonte’s laboratory. “Our observations will help guide future studies on the precise cellular implications that PKD might play in the context of kidney development.”

Simple Test May Help Gauge Dopamine Loss in Parkinson’s.


The Triplets Learning Task (TLT) may help determine the extent of dopamine loss in patients with Parkinson’s disease (PD), results of a pilot study hint.

The TLT tests implicit learning, a type of learning that occurs without awareness or intent and that relies on the caudate nucleus, an area of the brain affected by dopamine loss.

The test is a sequential learning task that doesn’t require complex motor skills, which tend to decline in people with PD. In the TLT, participants view 4 open circles, watch 2 red dots appear, and are asked to respond when a green dot appears. Unbeknownst to them, the authors note, the location of the first red dot predicts the location of the green target. Participants learn implicitly where the green target will appear, and they become faster and more accurate in their responses.
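
To make the task’s hidden structure concrete, the sketch below generates TLT-style trials in which the first red dot’s position predicts the green target’s position most of the time, with occasional violations. The 80/20 split and the cue-to-target mapping are invented for illustration and are not the study’s actual parameters.

```python
import random

POSITIONS = [0, 1, 2, 3]                       # the four open circles
PREDICTS = {0: 2, 1: 3, 2: 0, 3: 1}            # hypothetical cue -> target map

def make_trial(p_high=0.8):
    cue1 = random.choice(POSITIONS)            # first red dot (predictive)
    cue2 = random.choice(POSITIONS)            # second red dot (irrelevant)
    if random.random() < p_high:
        target, kind = PREDICTS[cue1], "high"  # follows the hidden pattern
    else:
        others = [p for p in POSITIONS if p != PREDICTS[cue1]]
        target, kind = random.choice(others), "low"
    return cue1, cue2, target, kind

trials = [make_trial() for _ in range(1500)]   # the study used 1500 trials
high = sum(t[3] == "high" for t in trials)
print(f"{high} high-probability trials out of {len(trials)}")
```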

Katherine R. Gamble, a psychology PhD student at Georgetown University in Washington, DC, and colleagues had 27 patients with mild to moderate PD receiving dopaminergic medication and 27 healthy controls matched for age and education take the TLT on several occasions.

Patients with PD implicitly learned the dot pattern with training, as did controls, but a loss of dopamine appeared to “negatively impact” that learning compared with healthy older adults, Gamble noted in an interview with Medscape Medical News.

“Their performance began to decline toward the end of training, suggesting that people with Parkinson’s disease lack the neural resources in the caudate, such as dopamine, to complete the learning task,” she added in a conference statement.

Gamble reported the findings here at Neuroscience 2013, the annual meeting of the Society for Neuroscience.

Implicit Learning

In this study, participants responded to 6 “epochs” of the TLT, for a total of 1500 trials. All patients had been diagnosed with PD by a neurologist, and all were receiving treatment with anti-PD medication when they took the test.

Their results showed “significant implicit sequence learning” on the TLT, the researchers report. Learning increased over the first 5 epochs of training, they note; patients continued to respond more quickly to high- vs low-probability triplets, but this learning plateaued between epochs 5 and 6.

“We suggest that in people with PD, learning is intact early in training because less affected regions of the brain (eg, the hippocampus) can support learning,” they conclude. “However, PD-related dopamine deficits appear later in training when the caudate becomes more important.”

The TLT “may be a noninvasive way to evaluate the level of dopamine deficiency in PD patients, and which may lead to future ways to improve clinical treatment of PD patients,” said Steven E. Lo, MD, associate professor of neurology at Georgetown University Medical Center and a coauthor of the study, in a statement.

The researchers are now testing how implicit learning may differ by different PD stages and drug doses.

Evaluating Dopamine Deficiency

Asked to comment on this pilot study, Lidia Gardner, PhD, assistant professor, Department of Neurology, University of Tennessee Health Science Center in Memphis, said the “simple implicit learning test might be potentially a useful tool for neuropsychologists. I would like to know if there is any correlation between the disease progression and the time required for TLT training.”

However, she told Medscape Medical News, “Until a strong correlation is established, most physicians would shy away from using it as a diagnostic tool for the loss of dopamine neurons. However, in addition to other clinical tests it could be useful.”

“Currently, the loss of dopamine neurons can be visualized with PET [positron emission tomography] using 18F-DOPA or other radio-tracers. This technique can give a relatively close assessment of dopamine neurons in PD patients. A simple and inexpensive test (in comparison to PET or SPECT [single-photon emission computed tomography]) is always welcome in healthcare administration,” Dr. Gardner said.