Why Children Aren’t Behaving, And What You Can Do About It


Boy completes his chore of raking leaves

Childhood — and parenting — have radically changed in the past few decades, to the point where far more children today struggle to manage their behavior.

That’s the argument Katherine Reynolds Lewis makes in her new parenting book, The Good News About Bad Behavior.

“We face a crisis of self-regulation,” Lewis writes. And by “we,” she means parents and teachers who struggle daily with difficult behavior from the children in their lives.

Lewis, a journalist, certified parent educator and mother of three, asks why so many kids today are having trouble managing their behavior and emotions.

Three factors, she says, have contributed mightily to this crisis.

First: Where, how and how much kids are allowed to play has changed. Second, their access to technology and social media has exploded.

Finally, Lewis suggests, children today are too “unemployed.” She doesn’t simply mean the occasional summer job for a high school teen. The term is a big tent, and she uses it to include household jobs that can help even toddlers build confidence and a sense of community.

“They’re not asked to do anything to contribute to a neighborhood or family or community,” Lewis tells NPR in a recent interview. “And that really erodes their sense of self-worth — just as it would with an adult being unemployed.”

Below is more of that interview, edited for length and clarity.

What sorts of tasks are children and parents prioritizing instead of household responsibilities?

To be straight-A students and athletic superstars, gifted musicians and artists — which are all wonderful goals, but they are long-term and pretty narcissistic. They don’t have that sense of contribution and belonging in a family the way that a simple household chore does, like helping a parent prepare a meal. Anyone who loves to cook knows it’s so satisfying to feed someone you love and to see that gratitude and enjoyment on their faces. And kids today are robbed of that.

It’s part of the work of the family. We all do it, and when it’s more of a social compact than an adult in charge of doling out a reward, that’s much more powerful. They can see that everyone around them is doing jobs. So it seems only fair that they should also.

Kids are so driven by what’s fair and what’s unfair. And that’s why the more power you give kids, the more control you give them, the more they will step up.

You also argue that play has changed dramatically. How so?

Two or three decades ago, children were roaming neighborhoods in mixed-age groups, playing pretty unsupervised or lightly supervised. They were able to resolve disputes, which they had a strong motivation to do because they wanted to keep playing. They also planned their time and managed their games. They had a lot of autonomy, which also feeds self-esteem and mental health.

Nowadays, kids, including my own, are in child care pretty much from morning until they fall into bed — or they’re under the supervision of their parents. So they aren’t taking small risks. They aren’t managing their time. They aren’t making decisions and resolving disputes with their playmates the way that kids were 20 or 30 years ago. And those are really important social and emotional skills for kids to learn, and play is how all young mammals learn them.

While we’re on the subject of play and the importance of letting kids take risks, even physical risks, you mention a remarkable study out of New Zealand — about phobias. Can you tell us about it?

This study dates back to when psychologists believed that if you had a phobia as an adult, you must have had some traumatic experience as a child. So they started looking at people who had phobias and what their childhood experiences were like. In fact, they found the opposite relationship.

People who had a fall from heights were less likely to have an adult phobia of heights. People who had an early experience with near-drowning had zero correlation with a phobia of water, and children who were separated from their parents briefly at an early age actually had less separation anxiety later in life.

We need to help kids to develop tolerance against anxiety, and the best way to do that, this research suggests, is to take small risks — to have falls and scrapes and tumbles and discover that they’re capable and that they can survive being hurt. Let them play with sticks or fall out of a tree. And yeah, maybe they break their arm, but that’s how they learn how high they can climb.

You say in the book that “we face a crisis of self-regulation.” What does that look like at home and in the classroom?

It’s the behavior in our homes that keeps us from getting out the door in the morning and keeps us from getting our kids to sleep at night.

In schools, it’s kids jumping out of seats because they can’t control their behavior or their impulses, getting into shoving matches on the playground, being frozen during tests because they have such high rates of anxiety.

Really, I lump under this umbrella of self-regulation the increase in anxiety, depression, ADHD, substance addiction and all of these really big challenges that are ways kids are trying to manage their thoughts, behavior and emotions because they don’t have the other skills to do it in healthy ways.

You write a lot about the importance of giving kids a sense of control. My 6-year-old resists our morning schedule, from waking up to putting on his shoes. Where is the middle ground between giving him control over his choices and making sure he’s ready when it’s time to go?

It’s a really tough balance. We start off, when our kids are babies, being in charge of everything. And our goal by the time they’re 18 is to be in charge of nothing — to work ourselves out of the job of being that controlling parent. So we have to constantly be widening the circle of things that they’re in charge of, and shrinking our own responsibility.

It’s a bit of a dance for a 6-year-old, really. They love power. So give him as much power as you can stand and really try to save your direction for the things that you don’t think he can do.

He knows how to put on his shoes. So if you walk out the door, he will put on his shoes and follow you. It may not feel like it, but eventually he will. And if you spend five or 10 minutes outside that door waiting for him — not threatening or nagging — he’ll be more likely to do it quickly. It’s one of these things that takes a leap of faith, but it really works.

Kids also love to be part of that discussion of what the morning looks like. Does he want to draw a visual calendar of the things that he wants to get done in the morning? Does he want to set times, or, if he’s done by a certain time, does he get to do something fun before you leave the house? All those things that are his ideas will pull him into the routine and make him more willing to cooperate.

Whether you’re trying to get your child to dress, do homework or practice piano, it’s tempting to use rewards that we know our kids love, especially sweets and screen time. You argue in the book: Be careful. Why?

Yes. The research on rewards is pretty powerful, and it suggests that the more we reward behavior, the less desirable that behavior becomes to children and adults alike. If the child is coming up with, “Oh, I’d really like to do this,” and it stems from his intrinsic interests and he’s more in charge of it, then it becomes less of a bribe and more of a way that he’s structuring his own morning.

The adult doling out rewards is really counterproductive in the long term — even though they may seem to work in the short term. The way parents or teachers discover this is that they stop working. At some point, the kid says, “I don’t really care about your reward. I’m going to do what I want.” And then we have no tools. Instead, we use strategies that are built on mutual respect and a mutual desire to get through the day smoothly.

You offer pretty simple guidance for parents when they’re confronted with misbehavior and feel they need to dole out consequences. You call them the four R’s. Can you walk me through them?

The four R’s will keep a consequence from becoming a punishment. So it’s important to avoid power struggles and to win the kid’s cooperation. They are: Any consequence should be revealed in advance, respectful, related to the decision the child made, and reasonable in scope.

Generally, by the time they’re 6 or 7 years old, kids know the rules of society and politeness, and we don’t need to give them a lecture in that moment of misbehavior to drill it into their heads. In fact, acting in that moment can sometimes be counterproductive if they are amped up, their amygdala’s activated, they’re in a tantrum or excited state, and they can’t really learn very well because they can’t access the problem-solving part of their brain, the prefrontal cortex, where they’re really making decisions and thinking rationally. So every misbehavior doesn’t need an immediate consequence.

You even tell parents, in the heat of the moment, it’s OK to just mumble and walk away. What do you mean?

That’s when you are looking at your child, they are not doing what you want, and you cannot think of what to do. Instead of jumping in with a bribe or a punishment or yelling, you give yourself some space. Pretend you had something on the stove you need to grab or that you hear something ringing in the other room and walk away. That gives you just a little space to gather your thoughts and maybe calm down a little bit so you can respond to their behavior from the best place in you — from your best intentions as a parent.

I can imagine skeptics out there, who say, “But kids need to figure out how to live in a world that really doesn’t care what they want. You’re pampering them!” In fact, you admit your own mother sometimes feels this way. What do you say to that?

I would never tell someone who’s using a discipline strategy that they feel really works that they’re wrong. What I say to my mom is, “The tools and strategies that you used and our grandparents used weren’t wrong, they just don’t work with modern kids.” Ultimately, we want to instill self-discipline in our children, which will never happen if we’re always controlling them.

If we respond to our kids’ misbehavior instead of reacting, we’ll get the results we want. I want to take a little of the pressure off of parenting; each instance is not life or death. We can let our kids struggle a little bit. We can let them fail. In fact, that is the process of childhood when children misbehave. It’s not a sign of our failure as parents. It’s normal.


Therapy Made From Patient’s Immune System Shows Promise For Advanced Breast Cancer


“I’m one of the lucky ones,” says Judy Perkins, of the immunotherapy treatment she got. The experimental approach seems to have eradicated her metastatic breast cancer.

Courtesy of Judy Perkins

Doctors at the National Institutes of Health say they’ve apparently completely eradicated cancer from a patient who had untreatable, advanced breast cancer.

The case is raising hopes about a new way to harness the immune system to fight some of the most common cancers. The methods and the patient’s experience are described Monday in a paper published in the journal Nature Medicine.

“We’re looking for a treatment — an immunotherapy — that can be broadly used in patients with common cancers,” says Dr. Steven Rosenberg, an oncologist and immunologist at the National Cancer Institute, who has been developing the approach.

Rosenberg’s team painstakingly analyzes the DNA in a sample of each patient’s cancer for mutations specific to their malignancies. Next, scientists sift through tumor tissue for immune system cells known as T cells that appear programmed to home in on those mutations.

But Rosenberg and others caution that the approach doesn’t work for everyone. In fact, it failed for two other breast cancer patients. Many more patients will have to be treated — and followed for much longer — to fully evaluate the treatment’s effectiveness, the scientists say.

Still, the treatment has helped seven of 45 patients with a variety of cancers, Rosenberg says. That’s a response rate of about 15 percent; the responders included patients with advanced cases of colon cancer, liver cancer and cervical cancer.

“Is it ready for prime time today? No,” Rosenberg says. “Can we do it in most patients today? No.”

But the treatment continues to be improved. “I think it’s the most promising treatment now being explored for solving the problem of the treatment of metastatic, common cancers,” he says.

The breast cancer patient helped by the treatment says it transformed her life.

“It’s amazing,” says Judy Perkins, 52, a retired engineer who lives in Port St. Lucie, Fla.

When Perkins was first diagnosed and treated for breast cancer in 2003, she thought she’d beaten the disease. “I thought I was done with it,” she says.

But about a decade later, she felt a new lump. Doctors discovered the cancer had already spread throughout her chest. Her prognosis was grim.

“I became a metastatic cancer patient,” says Perkins. “That was hard.”

Perkins went through round after round of chemotherapy. She tried every experimental treatment she could find. But the cancer kept spreading. Some of her tumors grew to the size of tennis balls.

Perkins received tumor infiltrating lymphocytes as treatment in 2015.

Courtesy of Stephanie Goff/NIH

“I had sort of essentially run out of arrows in my quiver,” she says. “While I would say I had some hope, I was also kind of like ready to quit, too.”

Then she heard about the experimental treatment at the NIH. It was designed to fight some of the most common cancers, including breast cancer.

“The excitement here is that we’re attacking the very mutations that are unique to that cancer — in that patient’s cancer and not in anybody else’s cancer. So it’s about as personalized a treatment as you can imagine,” Rosenberg says.

His team identified and then grew billions of T cells for Perkins in the lab and then infused them back into her body. They also gave her two drugs to help the cells do their job.

The treatment was grueling. Perkins says the hardest part was the side effects of a drug known as interleukin, which she received to help boost the effectiveness of the immune system cells. Interleukin causes severe flu-like symptoms, such as a high fever, intense malaise and uncontrollable shivering.

But the treatment apparently worked, Rosenberg reports. Perkins’ tumors soon disappeared. And, more than two years later, she remains cancer-free.

“All of her detectable disease has disappeared. It’s remarkable,” Rosenberg says.

Perkins is thrilled.

“I’m one of the lucky ones,” Perkins says. “We got the right T cells in the right place at the right time. And they went in and ate up all my cancer. And I’m cured. It’s freaking unreal.”

In an article accompanying the new paper, Laszlo Radvanyi, president and scientific director of the Ontario Institute for Cancer Research, calls the results “remarkable.”

The approach and other recent advances suggest scientists may be “at the cusp of a major revolution in finally realizing the elusive goal of being able to target the plethora of mutations in cancer through immunotherapy,” Radvanyi writes.

Other cancer researchers agree.

“When I saw this paper I thought, ‘Whoa!’ I mean, it’s very impressive,” says James Heath, president of the Institute for Systems Biology in Seattle.

“One of the most exciting breakthroughs in biomedicine over the past decade has been activating the immune system against various cancers. But they have not been successful in breast cancer. Metastatic breast cancer is basically a death sentence,” Heath says. “And this shows that you can reverse it. It’s a big deal.”

One key challenge will be to make the treatment easier, faster, and affordable, Rosenberg says. “We’re working literally around the clock to try to improve the treatment.”


What is AI? Everything you need to know about Artificial Intelligence


An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is artificial intelligence (AI)?

It depends who you ask.


Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity.

What are the uses for AI?

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

What are the different types of AI?

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

What can narrow AI do?

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

What can general AI do?

Artificial general intelligence is very different: it is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, and of reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but it doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

A survey of four groups of experts conducted in 2012 and 2013 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50 percent chance that artificial general intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called ‘superintelligence’, which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”, was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

What is machine learning?

There is a broad body of research in AI, many strands of which feed into and complement one another.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

What are neural networks?

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have ‘learned’ how to carry out a particular task.
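That weight-adjusting loop can be sketched in miniature. The toy below is a hypothetical single "neuron" with two inputs (not any real framework's API): it nudges its weights after each labeled example until its output matches the OR function, which is the same procedure real networks apply across millions of weights.

```python
from math import exp

def sigmoid(x):
    # Squash any number into the range (0, 1).
    return 1.0 / (1.0 + exp(-x))

def train_neuron(samples, epochs=5000, lr=0.5):
    """samples: list of ((x1, x2), target) pairs; returns learned weights."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            delta = (y - target) * y * (1 - y)  # gradient of the squared error
            w1 -= lr * delta * x1               # nudge each weight against the error
            w2 -= lr * delta * x2
            b -= lr * delta
    return w1, w2, b

# Four labeled examples of the OR function.
OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_neuron(OR_DATA)
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in OR_DATA]
```

After training, the rounded outputs reproduce the OR truth table; the "learning" is nothing more than repeatedly varying the weights until the output is close to what is desired.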

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers refining long short-term memory (LSTM) networks, a more effective form of deep neural network, to make them fast enough to use in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Image: Nuance

Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
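As a rough illustration of that mutate-and-select loop, the hypothetical sketch below evolves a population of bit-strings toward the fitness goal of "all ones" (the classic OneMax toy problem; this is an invented example, not anything from the Uber papers).

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
LENGTH, POP, GENERATIONS = 20, 30, 80

def fitness(genome):
    return sum(genome)  # count of 1-bits: higher is fitter

def mutate(genome, rate=0.05):
    # Each bit has a small chance of flipping, the "random mutation" step.
    return [1 - bit if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    # Combine two parents at a random cut point.
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # the fittest half survives unchanged
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

Because the fittest individuals are carried over intact each generation, the best fitness never decreases, and over the generations the population drifts toward the optimal all-ones string.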

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems might be, for example, an autopilot system flying a plane.
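A minimal sketch of such a knowledge-based system might look like the following. The rules here are made up, loosely echoing the autopilot example; the program repeatedly fires IF-THEN rules against a set of known facts until no new conclusions appear, a technique known as forward chaining.

```python
# Hypothetical rules: each maps a set of required facts to a conclusion.
RULES = [
    ({"altitude_low", "speed_ok"}, "raise_nose"),
    ({"raise_nose", "engine_ok"}, "climb"),
    ({"altitude_high"}, "level_off"),
]

def infer(initial_facts):
    """Forward chaining: fire rules until no new fact can be concluded."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # A rule fires when all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"altitude_low", "speed_ok", "engine_ok"})
```

Note how "climb" is only reachable via the intermediate conclusion "raise_nose": the system chains rules together, which is what lets a large rule base mimic an expert's multi-step reasoning.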

What is fueling the resurgence in AI?

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google’s TensorFlow Research Cloud. The second generation of these chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

What are the elements of machine learning?

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word ‘bass’ relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that’s just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively — although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size — Google’s Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people — most of whom were recruited through Amazon Mechanical Turk — who checked, sorted, and labeled almost one billion candidate pictures.
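Stripped to its essentials, supervised learning is "summarize the labeled examples, then match new data against what was learned." The hypothetical sketch below uses a handful of invented two-feature examples (nothing from a real dataset): it learns one average feature vector, or centroid, per label, and classifies new points by the nearest centroid.

```python
# Invented training data: (feature1, feature2) pairs annotated with a label,
# standing in for, say, annotated photos.
LABELED = [((1.0, 1.2), "cat"), ((1.1, 0.9), "cat"),
           ((5.0, 5.5), "dog"), ((4.8, 5.1), "dog")]

def train(labeled):
    """Return a label -> mean-feature-vector ("centroid") mapping."""
    sums, counts = {}, {}
    for (f1, f2), label in labeled:
        s1, s2 = sums.get(label, (0.0, 0.0))
        sums[label] = (s1 + f1, s2 + f2)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (s1 / counts[lab], s2 / counts[lab])
            for lab, (s1, s2) in sums.items()}

def predict(centroids, point):
    """Label a new point by its nearest learned centroid."""
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(centroids, key=lambda lab: dist2(centroids[lab]))

model = train(LABELED)
```

Real systems replace the averaging step with far richer models, but the shape is the same: the annotations in the training data are what give the system something to learn from.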

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, as when Google News groups together stories on similar topics each day.
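The fruit-weight example can be sketched as a tiny one-dimensional k-means, here assuming exactly two clusters and invented weights. The algorithm is told nothing about the groups; it simply alternates between assigning each value to the nearer center and moving each center to the mean of its group.

```python
# Invented values standing in for fruit weights in grams; no labels are given.
weights = [101, 99, 103, 310, 290, 305, 97]

def kmeans_two(values, iters=20):
    """Split values into two groups by repeated assign-and-average."""
    c1, c2 = min(values), max(values)  # deterministic starting centers
    for _ in range(iters):
        group1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        group2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(group1) / len(group1)  # toy code: assumes no group goes empty
        c2 = sum(group2) / len(group2)
    return sorted(group1), sorted(group2)

light, heavy = kmeans_two(weights)
```

The clusters emerge purely from the structure of the data, which is the defining trait of unsupervised learning.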

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
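The trial-and-error loop can be sketched with tabular Q-learning, a much simpler relative of the Deep Q-network, on a hypothetical five-cell corridor where only the rightmost cell pays a reward. The agent starts knowing nothing and gradually learns that "right" maximizes its score.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
N_STATES, ACTIONS = 5, ("left", "right")
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Move along the corridor; reward 1 only at the rightmost cell."""
    nxt = max(0, state - 1) if action == "left" else state + 1
    return (nxt, 1.0, True) if nxt == N_STATES - 1 else (nxt, 0.0, False)

for _ in range(200):  # episodes of trial and error
    state, done = 0, False
    while not done:
        if random.random() < epsilon:  # explore: try something random
            action = random.choice(ACTIONS)
        else:                          # exploit: use current estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate toward reward plus discounted future value.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

Deep Q-networks replace the lookup table with a neural network fed raw pixels, but the core idea is identical: build a model of which action maximizes the score in each circumstance.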

Many AI-related technologies are approaching, or have already reached, the ‘peak of inflated expectations’ in Gartner’s Hype Cycle, with the backlash-driven ‘trough of disillusionment’ lying in wait.

Image: Gartner / Annotations: ZDNet

Which are the leading firms in AI?


With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo system, that has made the biggest impact on public awareness of AI.

Which AI services are available?

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data and prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing Cloud AutoML, a service that automates the creation of AI models. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don’t want to build their own machine-learning models but instead want to consume AI-powered, on-demand services — such as voice, vision, and language recognition — Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella — and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Which of the major tech firms is winning the AI race?

Internally, each of the tech giants — and others such as Facebook — use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam — the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

A huge amount of tech goes into developing these assistants, which rely heavily on voice recognition and natural-language processing, as well as on an immense corpus of data to draw upon when answering queries.

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space — Google Assistant with its ability to answer a wide range of queries and Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana’s days are numbered, although Microsoft was quick to reject this.

Which countries are leading the way in AI?

It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. And China is pursuing a three-step plan to turn AI into a core national industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big-data analytics by major firms like Baidu, Alibaba, and Tencent means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead as 500 to one in China’s favor.

How can I get started with AI?

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.
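
Whatever the provider, the workflow behind these services is the same at any scale: fit a model to labelled examples, then check it against data it hasn't seen. As a rough illustration — using scikit-learn and its bundled iris dataset locally, rather than any particular cloud provider's API — a minimal version looks like this:

```python
# Minimal sketch of the train/evaluate loop that cloud ML services scale up.
# Uses scikit-learn's bundled iris dataset; no cloud account required.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple classifier stands in
model.fit(X_train, y_train)                # "training" the model
accuracy = model.score(X_test, y_test)     # evaluate on held-out data
print(f"held-out accuracy: {accuracy:.2f}")
```

The cloud offerings described above essentially wrap this same fit-and-evaluate loop in managed infrastructure, adding the storage and compute needed when the dataset is millions of examples rather than 150 flowers.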

What are recent landmarks in the development of AI?

There are too many to put together a comprehensive list, but some recent highlights include the following. In 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each — setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win, Watson applied natural-language processing and analytics to vast repositories of data, answering human-posed questions, often in a fraction of a second.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson’s win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared with about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.
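
That gradual refinement can be sketched in a few lines. The toy below — plain NumPy fitting a straight line by gradient descent, nothing to do with AlphaGo's actual architecture — shows the core loop every training run shares: make predictions, measure the error, nudge the parameters, repeat:

```python
# Toy illustration of iterative refinement: gradient descent fitting y = 2x + 1.
# Deep-learning training follows the same loop at vastly larger scale.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=200)  # noisy targets

w, b = 0.0, 0.0          # model parameters, starting uninformed
lr = 0.1                 # learning rate: how big each nudge is
for step in range(500):  # each iteration reduces the model's error slightly
    pred = w * x + b
    err = pred - y
    w -= lr * 2 * np.mean(err * x)  # gradient of mean squared error w.r.t. w
    b -= lr * 2 * np.mean(err)      # ... and w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up close to the true 2 and 1
```

A deep network replaces the two parameters here with millions, and the hand-derived gradients with automatic differentiation, but the shape of the process — ingest data, iterate, refine — is the same.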

However, more recently Google refined the training process with AlphaGo Zero, a system that started by playing “completely random” games against itself and then learnt from the results. At last year’s prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed that a generalized version of the system, AlphaZero, had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world’s top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goals more effectively, shortly followed by Facebook training agents to negotiate and even lie.

How will AI change the world?

Robots and driverless cars


The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, it is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping them learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people’s image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft’s Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.
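
Figures like these are conventionally reported via word error rate (WER): the number of word insertions, deletions and substitutions needed to turn the system's transcript into the reference one, divided by the reference length. A minimal sketch of the standard edit-distance calculation (not Microsoft's actual evaluation pipeline) looks like this:

```python
# Word error rate: edit distance between hypothesis and reference transcripts,
# normalized by reference length. Standard dynamic-programming formulation.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sat on mat")
print(f"WER: {wer:.0%}")  # one deleted word over six reference words
```

By this measure, "almost 95 percent" accuracy corresponds to a WER of just over 5 percent.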

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it’s likely this more intrusive use of AI technology — including AI that can recognize emotions — will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases, and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK’s National Health Service, where they will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Will AI kill us all?

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civilization”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft’s director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, dismissing worries about “Terminator and the rise of the machines and so on” as “utter nonsense”, adding: “At best, such discussions are decades away.”

Will an AI steal your job?

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-term prospect.


While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “Many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work.” He sees a “significant risk of technological unemployment over the next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon is again leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers, who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it’s not a given that manual and robotic labor will continue to grow hand-in-hand.

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers.

Yet some of the easiest jobs to automate won’t even require robotics. At present there are millions of people working in administration, entering and copying data between systems, chasing and booking appointments for companies. As software gets better at automatically updating systems and flagging the information that’s important, so the need for administrators will fall.
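
The kind of triage being automated away is often no more exotic than a few rules applied over structured records. A hypothetical sketch — the field names, dates and thresholds here are invented purely for illustration:

```python
# Hypothetical sketch of software flagging which records need human attention,
# the sort of administrative triage described above. All data is invented.
from datetime import date

records = [
    {"id": 1, "invoice_total": 120.0,  "due": date(2018, 5, 1),  "paid": True},
    {"id": 2, "invoice_total": 9800.0, "due": date(2018, 4, 1),  "paid": False},
    {"id": 3, "invoice_total": 40.0,   "due": date(2018, 6, 15), "paid": False},
]

def needs_attention(rec, today=date(2018, 5, 10), large=1000.0):
    """Flag a record if it is overdue and unpaid, or unusually large."""
    overdue = (not rec["paid"]) and rec["due"] < today
    return overdue or rec["invoice_total"] >= large

flagged = [rec["id"] for rec in records if needs_attention(rec)]
print(flagged)  # record 2 is overdue and above the threshold; the rest pass
```

In practice such rules are increasingly learned from historical decisions rather than written by hand, which is what moves this kind of work from simple scripting into machine learning.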

As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is whether these new roles will be created rapidly enough to offer employment to those displaced, and whether the newly unemployed will have the necessary skills or temperament to fill these emerging roles.

Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not only that but they argue there will be a commercial imperative to not replace people outright, as an AI-assisted worker — think a human concierge with an AR headset that tells them exactly what a client wants before they ask for it — will be more productive or effective than an AI working on its own.

Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon’s work by 2053.

They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.


Should we be worried about artificial intelligence?


Not really, but we do need to think carefully about how to harness, and regulate, machine intelligence.

By now, most of us are used to the idea of rapid, even accelerating, technological change, particularly where information technologies are concerned. Indeed, as consumers, we helped the process along considerably. We love the convenience of mobile phones, and the lure of social-media platforms such as Facebook, even if, as we access these services, we find that bits and pieces of our digital selves become strewn all over the internet.

More and more tasks are being automated. Computers (under human supervision) already fly planes and sail ships. They are rapidly learning how to drive cars. Automated factories make many of our consumer goods. If you enter (or return to) Australia with an eligible e-passport, a computer will scan your face, compare it with your passport photo and, if the two match up, let you in. The “internet of things” beckons; there seems to be an “app” for everything. We are invited to make our homes smarter and our lives more convenient by using programs that interface with our home-based systems and appliances to switch the lights on and off, defrost the fridge and vacuum the carpet.

Robots taking over more intimate jobs

With the demise of the local car industry and the decline of manufacturing, the services sector is expected to pick up the slack for job seekers. But robots are taking over certain roles once deemed human-only.

Clever though they are, these programs represent more-or-less familiar applications of computer-based processing power. With artificial intelligence, though, computers are poised to conquer skills that we like to think of as uniquely human: the ability to extract patterns and solve problems by analysing data, to plan and undertake tasks, to learn from our own experience and that of others, and to deploy complex forms of reasoning.

The quest for AI has engaged computer scientists for decades. Until very recently, though, AI’s initial promise had failed to materialise. The recent revival of the field came as a result of breakthrough advances in machine intelligence and, specifically, machine learning. It was found that, by using neural networks (interlinked processing points) to implement mathematically specified procedures or algorithms, machines could, through many iterations, progressively improve on their performance – in other words, they could learn. Machine intelligence in general and machine learning in particular are now the fastest-growing components of AI.

The achievements have been impressive. It is now 20 years since IBM’s Deep Blue program, using traditional computational approaches, beat Garry Kasparov, then the world’s best chess player. With machine-learning techniques, computers have conquered even more complex games such as Go, a strategy-based game with an enormous range of possible moves. In 2016, Google’s AlphaGo program beat Lee Sedol, the world’s best Go player, four games to one in a five-game match.

Allan Dafoe, of Oxford University’s Future of Humanity Institute, says AI is already at the point where it can transform almost every industry, from agriculture to health and medicine, from energy systems to security and the military. With sufficient data, computing power and an appropriate algorithm, machines can be used to come up with solutions that are not only commercially useful but, in some cases, novel and even innovative.

Should we be worried? Commentators as diverse as the late Stephen Hawking and development economist Muhammad Yunus have issued dire warnings about machine intelligence. Unless we learn how to control AI, they argue, we risk finding ourselves replaced by machines far more intelligent than we are. The fear is that not only will humans be redundant in this brave new world, but the machines will find us completely useless and eliminate us.

The University of Canberra’s robot Ardie teaches tai chi to primary school pupils.

If these fears are realistic, then governments clearly need to impose some sort of ethical and values-based framework around this work. But are our regulatory and governance techniques up to the task? When, in Australia, we have struggled to regulate our financial services industry, how on earth will governments anywhere manage a field as rapidly changing and complex as machine intelligence?

Governments often seem to play catch-up when it comes to new technologies. Privacy legislation is enormously difficult to enforce when technologies effortlessly span national boundaries. It is difficult for legislators even to know what is going on in relation to new applications developed inside large companies such as Facebook. On the other hand, governments are hardly IT ingenues. The public sector provided the demand-pull that underwrote the success of many high-tech firms. The US government, in particular, has facilitated the growth of many companies in cybersecurity and other fields.

Governments have been in the information business for a very long time. As William the Conqueror knew when he ordered his Domesday Book to be compiled in 1085, you can’t tax people successfully unless you know something about them. Spending of tax-generated funds is impossible without good IT. In Australia, governments have developed and successfully managed very large databases in health and human services.

The governance of all this data is subject to privacy considerations, sometimes even at the expense of information-sharing between agencies. The evidence we have is that, while some people worry a lot about privacy, most of us are prepared to trust government with our information. In 2016, the Australian Bureau of Statistics announced that, for the first time, it would retain the names and addresses it collected during the course of the 2016 population census. It was widely expected (at least by the media) that many citizens would withhold their names and addresses when they returned their forms. In the end, very few did.

But these are government agencies operating outside the security field. The so-called “deep state” holds information about citizens that could readily be misused. Moreover, private-sector profit is driving much of the current AI surge (although, in many cases, it is the thrill of new knowledge and understanding, too). We must assume that criminals are working out ways to exploit these possibilities, too.

If we want values such as equity, transparency, privacy and safety to govern what happens, old-fashioned regulation will not do the job. We need the developers of these technologies to co-produce the values we require, which implies some sort of effective partnership between the state and the private sector.

Could policy development be the basis for this kind of partnership? At the moment, machine intelligence works best on problems for which relevant data is available, and the objective is relatively easy to specify. As it develops, and particularly if governments are prepared to share their own data sets, machine intelligence could become important in addressing problems such as climate change, where we have data and an overall objective, but not much idea as to how to get there.

Machine intelligence might even help with problems where objectives are much harder to specify. What, for example, does good urban planning look like? We can crunch data from many different cities, and come up with an answer that could, in theory, go well beyond even the most advanced human-based modelling. When we don’t know what we don’t know, machines could be very useful indeed. Nor do we know, until we try, how useful the vast troves of information held by governments might be.

Perhaps, too, the jobs threat is not as extreme as we fear. Experience shows that humans are very good at finding things to do. And there might not be as many existing jobs at risk as we suppose. I am convinced, for example, that no robot could ever replace road workers – just think of the fantastical patterns of dug-up gravel and dirt they produce, the machines artfully arranged by the roadside or being driven, very slowly, up and down, even when all the signs are there, and there is absolutely no one around. How do we get a robot, even one capable of learning by itself, to do all that?


No more sweet tooth? Scientists switch off pleasure from food in brains of mice


Altering activity in brain’s emotion center can eliminate the natural craving for sweet; findings could inform treatments for eating disorders

New research in mice has revealed that the brain’s underlying desire for sweet, and its distaste for bitter, can be erased by manipulating neurons in the amygdala, the emotion center of the brain. The research points to new strategies for understanding and treating eating disorders including obesity and anorexia nervosa.

Brain illustration

The study showed that removing an animal’s capacity to crave or despise a taste had no impact on its ability to identify it. The findings suggest that the components of the brain’s complex taste system — which produces an array of thoughts, memories and emotions when tasting food — are actually discrete units that can be individually isolated, modified or removed altogether.

The research was published today in Nature.

“When our brain senses a taste it not only identifies its quality, it choreographs a wonderful symphony of neuronal signals that link that experience to its context, hedonic value, memories, emotions and the other senses, to produce a coherent response,” said Charles S. Zuker, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author.

Today’s study builds upon earlier work by Dr. Zuker and his team to map the brain’s taste system. Previously, the researchers revealed that when the tongue encounters one of the five tastes — sweet, bitter, salty, sour or umami — specialized cells on the tongue send signals to specialized regions of the brain to identify the taste and trigger the appropriate actions and behaviors.

To shed light on that experience, the scientists focused on sweet and bitter taste and the amygdala, a brain region known to be important for making value judgments about sensory information. Previous research by Dr. Zuker, a professor of biochemistry and molecular biophysics and of neuroscience and a Howard Hughes Medical Institute Investigator at Columbia University Irving Medical Center, and others showed that the amygdala connects directly to the taste cortex.

“Our earlier work revealed a clear divide between the sweet and bitter regions of the taste cortex,” said Li Wang, PhD, a postdoctoral research scientist in the Zuker lab and the paper’s first author. “This new study showed that same division continued all the way into the amygdala. This segregation between sweet and bitter regions in both the taste cortex and amygdala meant we could independently manipulate these brain regions and monitor any resulting changes in behavior.”

The scientists performed several experiments in which the sweet or bitter connections to the amygdala were artificially switched on, like flicking a series of light switches. When the sweet connections were turned on, the animals responded to water just as if it were sugar. And by manipulating the same types of connections, the researchers could even change the perceived quality of a taste, turning sweet into an aversive taste, or bitter into an attractive one.

In contrast, when the researchers instead turned off the amygdala connections but left the taste cortex untouched, the mice could still recognize and distinguish sweet from bitter, but now lacked the basic emotional reactions, like preference for sugar or aversion to bitter.

“It would be like taking a bite of your favorite chocolate cake but not deriving any enjoyment from doing so,” said Dr. Wang. “After a few bites, you may stop eating, whereas otherwise you would have scarfed it down.”

Usually, the identity of a food and the pleasure one feels when eating it are intertwined. But the researchers showed that these components can be isolated from each other, and then manipulated separately. This suggests that the amygdala could be a promising area of focus when looking for strategies to treat eating disorders.

In the immediate future, Drs. Zuker and Wang are investigating additional brain regions that serve critical roles in the taste system. For example, the taste cortex also links directly to regions involved in motor actions, learning and memory.

“Our goal is to piece together how those regions add meaning and context to taste,” said Dr. Wang. “We hope our investigations will help to decipher how the brain processes sensory information and brings richness to our sensory experiences.”


Cometh the cyborg: Improved integration of living muscles into robots


Researchers have developed a novel method of growing whole muscles from hydrogel sheets impregnated with myoblasts. They then incorporated these muscles as antagonistic pairs into a biohybrid robot, which successfully performed manipulations of objects. This approach overcame earlier limitations of a short functional life of the muscles and their ability to exert only a weak force, paving the way for more advanced biohybrid robots.

Object manipulations performed by the biohybrid robots.
The new field of biohybrid robotics involves the use of living tissue within robots, rather than just metal and plastic. Muscle is one potential key component of such robots, providing the driving force for movement and function. However, in efforts to integrate living muscle into these machines, there have been problems with the force these muscles can exert and the amount of time before they start to shrink and lose their function.

Now, in a study reported in the journal Science Robotics, researchers at The University of Tokyo Institute of Industrial Science have overcome these problems by developing a new method that progresses from individual muscle precursor cells, to muscle-cell-filled sheets, and then to fully functioning skeletal muscle tissues. They incorporated these muscles into a biohybrid robot as antagonistic pairs mimicking those in the body to achieve remarkable robot movement and continued muscle function for over a week.

The team first constructed a robot skeleton on which to install the pair of functioning muscles. This included a rotatable joint, anchors where the muscles could attach, and electrodes to provide the stimulus to induce muscle contraction. For the living muscle part of the robot, rather than extract and use a muscle that had fully formed in the body, the team built one from scratch. For this, they used hydrogel sheets containing muscle precursor cells called myoblasts, holes to attach these sheets to the robot skeleton anchors, and stripes to encourage the muscle fibers to form in an aligned manner.

“Once we had built the muscles, we successfully used them as antagonistic pairs in the robot, with one contracting and the other expanding, just like in the body,” study corresponding author Shoji Takeuchi says. “The fact that they were exerting opposing forces on each other stopped them shrinking and deteriorating, like in previous studies.”

The team also tested the robots in different applications, including having one pick up and place a ring, and having two robots work in unison to pick up a square frame. The results showed that the robots could perform these tasks well, with activation of the muscles leading to flexing of a finger-like protuberance at the end of the robot by around 90°.

“Our findings show that, using this antagonistic arrangement of muscles, these robots can mimic the actions of a human finger,” lead author Yuya Morimoto says. “If we can combine more of these muscles into a single device, we should be able to reproduce the complex muscular interplay that allows hands, arms, and other parts of the body to function.”


Evolocumab plus statin potentially reduces atherosclerosis progression in GLAGOV study


The addition of evolocumab to statin therapy in individuals with angiographic coronary disease appeared to encourage coronary atherosclerosis regression, as demonstrated in the GLAGOV trial presented at the Scientific Sessions of the American Heart Association (AHA 2016) held in New Orleans, Louisiana, US.

In comparison with patients on statin alone who experienced a nonsignificant 0.05 percent increase in percent atheroma volume (PAV), those on combined therapy of statin and evolocumab had a 0.95 percent reduction in PAV (difference, -1.0 percent, 95 percent confidence interval [CI], -1.8 to -0.64 percent; p<0.001). Normalized total atheroma volume (TAV) decreased by 0.9 mm3 (nonsignificant) in those on statin alone compared with 5.8 mm3 in those on statin and evolocumab (difference, -4.9 mm3, 95 percent CI, -7.3 to -2.5; p<0.001). [AHA 2016, LBCT 03; JAMA 2016;doi:10.1001/jama.2016.16951]

Plaque regression occurred in a greater number of patients on evolocumab and statin compared with those on statin alone (64.3 percent vs 47.3 percent; difference, 17.0 percent, 95 percent CI, 10.4 to 23.6 percent; p<0.001 for PAV and 61.5 percent vs 48.9 percent; difference, 12.5 percent, 95 percent CI, 5.9 to 19.2 percent; p<0.001 for TAV).

“We are really reducing plaque burden in the coronaries if we can get [low-density lipoprotein cholesterol (LDL-C)] down to these very low levels,” said study chair Dr Steven Nissen from the Department of Cardiovascular Medicine at the Cleveland Clinic, Cleveland, Ohio, US, who presented the findings. “It turns out that a little bit of change in plaque volume translates into a very big change in plaque behaviour.”

“[These findings] suggest a new era in lipid management,” said discussant Dr Raul Santos from the University of São Paulo, Brazil.

Evolocumab appeared to be well tolerated with comparable incidences of injection site reactions (0.4 percent vs 0 percent), myalgia (7.0 percent vs 5.8 percent), neurocognitive events (1.4 percent vs 1.2 percent), and new onset diabetes (3.6 percent vs 3.7 percent) for evolocumab plus statin vs statin monotherapy, respectively.

In this double-blind, placebo-controlled, multicentre trial, participants (n=968, mean age 59.8 years; 72 percent male) with angiographic coronary disease, LDL-C levels ≥80 mg/dL or 60–80 mg/dL with additional high-risk features, and on stable statin therapy were randomized to receive monthly subcutaneous injections of the proprotein convertase subtilisin kexin type 9 (PCSK9) inhibitor, evolocumab (420 mg) or placebo for 76 weeks. After angiography, participants underwent intravascular ultrasound (IVUS) of the same artery at baseline and at week 78.

“Both the primary and secondary IVUS efficacy measures showed atherosclerosis regression … in patients treated with the combination of evolocumab and statins and absence of regression in patients treated with a statin alone,” said study lead investigator Dr Steven Nicholls, also from the Cleveland Clinic. “These findings provide evidence that PCSK9 inhibition produces incremental benefits on coronary disease progression in statin-treated patients.”

“Over the last 4 decades, evidence has accumulated suggesting that optimal LDL levels for patients with coronary disease may be much lower than commonly achieved. While we await large outcome trials for PCSK9 inhibitors, the GLAGOV trial provides intriguing evidence that clinical benefits may extend to LDL-C levels as low as 20 mg/dL,” said Nissen, who acknowledged the limitations of the trial such as the small number of patients and short treatment period. “IVUS is a useful measure of disease activity, but the critical determination of benefit and risk will require completion of large outcome trials currently underway,” he said.

Other factors that could potentially influence disease progression in the setting of very low LDL-C levels also need to be investigated, said Nicholls.


Dapagliflozin drives HbA1c, SBP, and weight down in DERIVE


Dr Siew-Pheng Chan.

The use of the sodium/glucose cotransporter 2 (SGLT-2) inhibitor dapagliflozin in patients with type 2 diabetes (T2D) and moderate renal impairment provides benefits beyond glucose lowering, with no new safety signals, in the phase III DERIVE* study.

At 6 months, the primary endpoint of mean reduction in HbA1c level was greater by 0.34 percent in patients treated with dapagliflozin vs placebo (p<0.001). There were also greater reductions in systolic blood pressure (SBP, 3.1 mm Hg; p<0.05) and mean body weight (1.25 percent; p<0.001) with dapagliflozin. [APSC 2018, abstract S105-01]

“Dapagliflozin induces glycosuria and lowers blood glucose. However, the glycaemic efficacy of dapagliflozin is attenuated in patients with moderate renal impairment, for example in stage 3 CKD, because less glucose is cleared in the kidney in this group,” said Dr Siew-Pheng Chan, consultant endocrinologist at Subang Jaya Medical Centre in Subang Jaya, Malaysia, who is unaffiliated with the study.

Researchers led by Dr Paola Fioretto of the University of Padova, Italy, conducted the DERIVE study to compare the efficacy and safety of dapagliflozin vs placebo in 321 patients with T2D (HbA1c 7–11 percent) and moderate renal impairment (stage 3A chronic kidney disease [CKD]; estimated glomerular filtration rate [eGFR] 45 to <60 mL/min/1.73 m²). Patients were randomized to either dapagliflozin 10 mg (n=160) or placebo (n=161) over 6 months. Randomization was stratified by background glucose-lowering medication. Both groups had similar baseline characteristics.

At 6 months, treatment with dapagliflozin resulted in a significant reduction in mean HbA1c (-0.37 percent vs -0.03 percent for placebo) and mean body weight (-3.17 vs -1.92 kg, respectively) from baseline. The mean fasting plasma glucose was also significantly reduced with dapagliflozin (-21.46 vs -4.87 mg/dL for placebo) as was mean SBP (-4.8 vs -1.7 mm Hg, respectively) from baseline to 6 months.
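The placebo-adjusted effects quoted in the trial summary follow directly from these per-arm changes from baseline. As a minimal sketch of that arithmetic, using the values reported above (the helper name is illustrative, not from the study):

```python
# Placebo-adjusted treatment effects in DERIVE, computed from the
# per-arm mean changes from baseline reported above.

def placebo_adjusted(active_change, placebo_change):
    """Difference between active-arm and placebo-arm change from baseline."""
    return active_change - placebo_change

# Mean changes from baseline at 6 months (dapagliflozin vs placebo)
hba1c = placebo_adjusted(-0.37, -0.03)   # percentage points
fpg = placebo_adjusted(-21.46, -4.87)    # fasting plasma glucose, mg/dL
sbp = placebo_adjusted(-4.8, -1.7)       # systolic blood pressure, mm Hg

print(round(hba1c, 2))  # -0.34, matching the stated primary endpoint
print(round(fpg, 2))    # -16.59
print(round(sbp, 1))    # -3.1, matching the stated SBP effect
```

The HbA1c and SBP results reproduce the 0.34 percent and 3.1 mm Hg placebo-adjusted figures quoted in the trial summary.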

In terms of safety, mean eGFR was reduced with dapagliflozin (-3.23 mL/min/1.73m2) vs placebo (-0.63 mL/min/1.73m2). Urinary tract infection and genital infection were the most common adverse events of interest reported. Overall, the safety profile of dapagliflozin was consistent with previous reports seen for T2D. No bone fractures or amputations were reported.

Dapagliflozin is currently indicated as an adjunct to diet and exercise to improve glycaemic control in adults with T2D. Dapagliflozin remains contraindicated in patients with an eGFR <30 mL/min/1.73 m².

 


Low ankle-brachial index tied to cognitive decline


Even in the absence of peripheral arterial disease, a lower ankle-brachial index (ABI) is significantly associated with larger declines in cognitive function, according to a poster presented at the 2018 Congress of the Asian Pacific Society of Cardiology (APSC 2018) in Taipei, Taiwan.

“The present study aimed to investigate whether a graded association between ABI and cognitive function exists, and whether this association is independent of artery stiffness, which is a recognized predictor of cognitive impairment,” said researchers.

Categorizing 708 participants according to quartiles of ABI, researchers found that there was a significant and inverse correlation between ABI values and global cognitive function, represented as scores on the Mini-Mental State Examination (MMSE; p=0.0011 for trend). [APSC 2018, abstract P006]

Specifically, mean MMSE scores were lowest in the first ABI quartile (27.4±3.1) and increased in the second (27.8±2.8), third (28.2±2.3) and fourth (28.4±2.0) quartiles.

The significant relationship between ABI and global cognitive function was confirmed in general linear (β, –0.137; p=0.0007) and fully adjusted multivariable logistic regression models (Q4 vs Q1: adjusted odds ratio [OR], 3.623; 95 percent CI, 1.096–11.972).

Researchers likewise found a significant and positive relationship between MMSE scores and carotid-femoral pulse wave velocity (CF-PWV; β, 0.114; p=0.0044).

Notably, using patients with both high ABI and low PWV as reference, the odds of cognitive decline were elevated in those with high PWV only (OR, 2.34) and low ABI only (OR, 2.28). The effect was substantially more pronounced in those with both low ABI and high PWV (OR, 8.19).

ABI also showed a significant and inverse association with mean brachial pulse pressure (Q1: 58.3±11.8; Q2: 56.6±13.2; Q3: 54.7±10.6; Q4: 54.9±10.6 mm Hg; p=0.0094 for trend). There was a significantly higher proportion of male patients in the third (58.52 percent) and fourth (57.30 percent) ABI quartiles than in the first (35.16 percent) and second (42.42 percent; p<0.0001 for trend).

The findings of the present study show that a lower ABI value is significantly associated with a greater decline in global cognitive function, said researchers. In addition, the relationship was independent of and additive to the effect of arterial stiffness.

For the study, researchers recruited 708 adults without peripheral arterial disease (ABI >0.9; mean age 69.0±7.0 years; 48.35 percent male). Volume-plethysmographic apparatus was used to measure ABI, while CF-PWV was used as a measure of arterial stiffness.

Of the participants, 182 had ABI from 0.9–1.10 and were placed in the lowest quartile (mean age 69.3±7.1 years) while 165 had ABI from 1.10–1.14 and fell within the second quartile (mean age 69.1±6.9 years). The third (ABI 1.14–1.19; mean age 68.8±7.3 years) and fourth (ABI ≥1.19; mean age 68.9±6.6 years) quartiles included 176 and 185 participants, respectively.
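The quartile assignment described above amounts to a simple lookup on a patient's ABI, which is conventionally computed as the ratio of ankle systolic pressure to brachial systolic pressure. A minimal sketch using the study's reported cutoffs follows; the function names are illustrative, and the handling of values falling exactly on a cutoff is an assumption, since the reported bands overlap at their boundaries:

```python
# ABI is conventionally the ratio of ankle to brachial systolic pressure.
# Quartile cutoffs below are the ones reported in the study; boundary
# handling is an assumption (the reported bands overlap at their edges).

def ankle_brachial_index(ankle_sbp, brachial_sbp):
    """ABI = ankle systolic BP / brachial systolic BP."""
    return ankle_sbp / brachial_sbp

def abi_quartile(abi):
    """Map an ABI value (>0.9, i.e. no peripheral arterial disease)
    to the study's quartile bands."""
    if abi < 1.10:
        return 1   # 0.9-1.10: lowest mean MMSE in the study
    elif abi < 1.14:
        return 2
    elif abi < 1.19:
        return 3
    else:
        return 4   # >=1.19: highest mean MMSE

abi = ankle_brachial_index(ankle_sbp=130, brachial_sbp=120)  # ~1.08
print(abi_quartile(abi))  # 1 (lowest quartile)
```

Under this banding, lower quartile numbers correspond to the lower MMSE scores reported above.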


Are the Elderly Really Taking Too Many Vitamins?


Story at-a-glance

  • According to The New York Times, studies have linked high-dose vitamin E with a higher risk of prostate cancer. In reality, a single study found a very small increase in prostate cancer among those using synthetic vitamin E
  • Studies looking at natural vitamin E show tocotrienols — specifically gamma tocotrienol — prevent prostate cancer and even kill prostate cancer stem cells. Gamma-tocotrienol may also be effective against existing prostate tumors
  • Your body’s ability to absorb B12 diminishes significantly with age, and Alzheimer’s symptoms are extremely similar to the symptoms of B12 deficiency
  • The New York Times also claims beta-carotene causes cancer. This myth is based on research showing smokers given a low dose of synthetic beta-carotene had a slightly increased risk of cancer. However, the treatment group had been smoking a year longer than the controls
  • When properly prescribed and taken as directed, the death toll from drugs is between 85,000 and 135,000 Americans per year. There’s no evidence of dietary supplements having caused a single death in over 30 years

By Dr. Mercola

The conventional view of dietary supplements is, for the most part, predictably negative. The New York Times recently offered a perfect demonstration of this view in its April 3 article, “Older Americans Are ‘Hooked’ on Vitamins.”1 In this interview, Dr. Andrew Saul, editor-in-chief of the Orthomolecular Medicine News Service and author of “Doctor Yourself: Natural Healing That Works” and “Fire Your Doctor: How To Be Independently Healthy,” breaks down the myths and inaccuracies presented in that article.

While drug overdoses are currently killing 63,000 Americans each year — with opioids being responsible for nearly 50,000 of them and being a leading cause of death for Americans under 50 — the media is still pretending that people getting “hooked” on vitamins is a dangerous trend.

“The funny thing is that for those who are hooked on opioids, high doses of vitamin C had been shown — in two really good studies — to enable people to get off opioids without withdrawal symptoms, or greatly reduced withdrawal symptoms. Being hooked on vitamin C would actually help you get unhooked from heroin,” Saul notes.

Vitamin C — A Powerful Healer

Vitamin C is actually a very important and powerful detoxifier. In addition to helping you detox from drugs, this is also something to remember when you’re seeing a dentist. If you’re taking large doses of vitamin C, you may need a larger dose of anesthetic, as your body will break the drug down faster. On the other hand, loading up on vitamin C prior to a dental appointment will also speed healing, seal the gums faster, and reduce both bleeding and pain.

“If you have a tooth extraction or a root canal or anything that’s really invasive, vitamin C is the dentist’s best friend, because nothing makes gums stronger and quicker than vitamin C. Not only oral vitamin C; you can even take nonacidic vitamin C, such as calcium ascorbate, magnesium ascorbate or sodium ascorbate and put that right on the gums.

You can even put it right on the socket. People who have dry sockets or extended bleeding, when they use vitamin C topically — not ascorbic acid, mind you, but nonacidic C topically — they get immediate relief. It was Dr. Hugh Riordan at the now-famous Riordan Clinic who brought some of this forward decades ago. It’s good advice,” Saul says.

Do Seniors Need Vitamin Supplements?

Getting back to that New York Times article, “The Times has laid off or fired a very large number of copyeditors … They wanted to save money, so they eliminated the copydesk. They got rid of about 100 copyeditors … In my opinion, this article is a good example of a piece that should have been properly copyedited and fact-checked, and wasn’t,” Saul says.

For example, it mentions that studies have linked high-dose vitamin E with a higher risk of prostate cancer. In reality, a single study found a very small, and possibly questionable, increase in prostate cancer among people in that particular study. Importantly, the study in question used synthetic vitamin E, not the natural E. They also used fairly low dosages.

The salient point here is that there are studies looking at natural vitamin E, using all four tocopherols and four tocotrienols. These studies were not quoted, even though two such studies show tocotrienols — specifically gamma tocotrienol — actually prevent prostate cancer2 and even kill prostate cancer stem cells.3

These are the cells from which prostate cancer actually develops. They are, or quickly become, chemotherapy-resistant. Yet, natural vitamin E complex is able to kill these stem cells. Mice given oral gamma-tocotrienol had an astonishing 75 percent decrease in tumor formation.

A third study4 found gamma-tocotrienol was also effective against existing prostate tumors by modulating cell growth and the apoptosis (cell death) response. “Now, that has got to be newsworthy. The New York Times decided that’s news not fit to print,” Saul says.

Are Seniors Really Getting All the Nutrients They Need From Their Diet?

The New York Times article also states that older Americans get plenty of essential nutrients in their diet, and that the Western diet is not short on vitamins. “This is demonstrably nonsense,” Saul says, adding “The elderly tend to have poor diets in general, especially those who live alone or are institutionalized.” There are a number of reasons for this, including:

  • The elderly tend to have poor appetite due to higher rates of depression
  • As people get older, their sense of smell and, therefore, their sense of taste, diminishes
  • The elderly rarely drink enough water, as the sense of thirst diminishes with age

As noted by Saul, “If they’re not eating proper meals because they’re sad, depressed or lonely, or they’re just getting mediocre care, then they can’t possibly get enough nutrients — because even the paltry amount of nutrients in an American diet is not there if you don’t even eat the American diet.”

Most Seniors Are Deficient in B12, Magnesium and Vitamin D

Your body’s ability to absorb B12 also diminishes significantly with age, and Alzheimer’s symptoms are in fact extremely similar to the symptoms of severe B12 deficiency. Many clinicians would likely have a hard time distinguishing between the two.

“If B12 absorption is poor, and if the elderly are not eating proper meals, the amount of B12 in an older person is going to be low. For the article to say that it’s an abundant nutrient for the elderly is absolutely not true,” Saul says. There’s also ample evidence showing most soils are depleted of nutrients, which has led to lower nutrient values in whole foods. So, while Americans are not deficient in calories, many are indeed deficient in crucial nutrients.

“Dr. Abram Hoffer asked me years ago to write a paper on, ‘Can supplements take the place of a good diet?’ My comment was, ‘Well, they’re going to have to.’ Because people eat such lousy diets. If they’re going to eat lousy diets, it’s better to have a lousy diet and take supplements than to have a lousy diet without supplements. The solution, really, is to have a really good diet.

But I don’t have to tell you what a hospital diet looks like, or what a nursing home diet looks like. You don’t have to tell me what a school lunch diet looks like. These are really poor meals. You have exactly the wrong nutrients in abundance — the calorie nutrients. And then you have a dearth of the micronutrients.

One more thing: the article talks about how there’s an abundance of nutrients and everybody gets enough. With the mineral magnesium, if you look over decades of studies, National Health and Nutrition Examination Survey studies and all kinds of very large-scale studies of what people eat, magnesium deficiency is probably the most common mineral deficiency in the United States. Almost no Americans get the U.S. recommended dietary allowance (RDA) of magnesium …

The other one is vitamin D. Vitamin D deficiency is so prevalent in the elderly that half of the people hospitalized for hip fractures are demonstrably and measurably vitamin D-deficient. What’s really interesting is that the article says taking extra calcium did not help fractures. That’s not the point. It’s extra vitamin D and vitamin K that help put the calcium where it needs to be. They didn’t mention that.”

The Importance of Magnesium

Saul cites a Blue Cross Blue Shield study showing that seniors who took vitamin D supplements not only had fewer fractures, but they didn’t fall as often. “Vitamin D actually helps prevent the fracture by preventing falling,” Saul says. Magnesium deficiency is also problematic as it plays an important role in heart health and muscle function.

Magnesium may also help protect your body against the ravages of electrical pollution. Electromagnetic fields (EMFs), which are pervasive everywhere these days, cause oxidative damage similar to that of smoking. Magnesium acts as a calcium-channel blocker, which appears to be one of the primary mechanisms through which EMFs cause oxidative stress. Hence, having enough magnesium in your body may be protective.

Types of Magnesium and Advice on Dosage

When it comes to oral magnesium supplementation, there’s the issue of it having a laxative effect, which can upset your microbiome. One simple solution to this is to take regular Epsom salts baths. It’s a good way to relax sore muscles, and your body will absorb the magnesium transdermally, meaning through your skin, bypassing your gastrointestinal tract altogether.

The worst form of magnesium, in terms of absorbability, is magnesium oxide, which incidentally is also the most common form available to consumers. Better alternatives include magnesium gluconate, magnesium citrate or magnesium chloride, the latter of which has the greatest absorbability of the three.

Two of my personal favorites are magnesium malate (malic acid) and magnesium threonate. Magnesium malate is a Krebs cycle intermediate and may help increase adenosine triphosphate (ATP) production, while magnesium threonate has been shown to effectively penetrate the blood-brain barrier. So, for brain benefits, threonate appears to be preferable.

“If you take magnesium in small divided doses, you’re less likely to disturb your belly,” Saul says. “Some people don’t need to take a lot of extra magnesium; others do. It’s really a matter of [doing] a therapeutic trial. I would start small. Take your magnesium between meals and see when you feel better. It’s simply a matter of trial and error …

It was Dr. Richard Passwater who first brought that idea to me in the late ’70s, in his wonderful book ‘Super-Nutrition: Megavitamin Revolution.’ He said, ‘To determine your dose of the nutrients you want to supplement with, start taking them and see if you feel better. If you do, take a little more. If you still feel better, use the higher dose. If you don’t feel any better, go back to the lower dose that gets the most results.’

I just love that. It’s so simple. We can all do this, and should. That doesn’t mean you’re hooked on vitamins, folks. It means that you’re an intelligent human being. How intelligent? Well, at least half of all Americans are taking vitamins every day. With the elderly, it may be as high as two-thirds. I have heard, unofficially, that among physicians, 3 out of 4 doctors take supplements regularly. They just don’t talk about it.”

When I was still practicing, intravenous magnesium was one of the minerals I regularly used for acute migraines, infections and asthma attacks. In high doses, magnesium has a very potent vasodilatory effect. In fact, if administered too quickly, it’s almost like a niacin flush. But it was profoundly effective for aborting migraines and asthma attacks, and rapidly resolved coughs and colds. Magnesium will also help prevent and/or ease menstrual cramps.

Does Beta-Carotene Cause Cancer?

The New York Times also revisited the age-old myth that beta-carotene causes cancer. This fallacy is based on research from the 1990s that found a certain population of men in Finland, when given 20 milligrams of beta-carotene a day — the equivalent found in two or three carrots — had a very small but widely touted increase in cancer.

What is regularly not mentioned is the fact that they were heavy smokers, and the treatment group had been smoking a year longer than the controls. The patients also were not prescreened to see if they had any precancerous conditions.

“People say to me, ‘Beta-carotene can cause cancer.’ No. Smoking causes cancer. ‘Beta-carotene can be harmful.’ No. Cigarettes are harmful. SMOKING is what’s harmful to smokers. The problem, folks, is not the carrots,” Saul says. Another significant variable that may have played a role is the fact that they used synthetic beta-carotene.

“The study is a bad study. Therefore, The New York Times should know better than to quote it. They not only quote it, they kind of misquote it because they don’t use the word ‘smoker,’” Saul says. “If you’re hooked on cigarettes, you’re going to have problems. If you’re hooked on vitamins, you’re not.

This brings us to the fundamental question of what kills and what wastes money. Consumer Reports estimates that $200 billion a year is spent on incorrect or harmful medication. The entire food supplement industry worldwide is one-fifth of that, at most. We are wasting huge amounts on giving drugs that are harmful and complaining about the people who are doing good preventive care and taking their vitamins.”

Supplements Versus Drugs — What’s More Dangerous?

Saul also notes that the Harvard School of Public Health has assessed the role of drugs in deaths in great depth. When properly prescribed and taken as directed, the lowest estimated death toll from pharmaceutical drugs is still around 85,000 people a year. The high estimate is around 135,000 people annually, while the generally accepted estimate is about 106,000 people a year.

That’s 106,000 dead Americans every year from properly prescribed drugs, not medical errors; drugs taken as directed, not overdose. That means that each decade, “normal” side effects of drugs are killing about 1 million people in the U.S.

According to the American Association of Poison Control Centers (AAPCC), which has been tracking this information for over three decades, there have been 13 alleged deaths from vitamins in 31 years. However, Saul notes, “My team looked into this and we could not find substantiation, documentation, proof or convincing evidence of one single death … from vitamins in the last 31 years.” In most cases, the individual was taking both drugs and vitamins.

This year, the AAPCC actually removed the vitamin category, because it’s always been zero. “Personally, I think they got tired of the Orthomolecular Medicine News Service saying, ‘No deaths from vitamins. No deaths from minerals. No deaths from amino acids. No deaths from herbals. No deaths from homeopathic substances,’” Saul says.5,6

“These alternative treatments are effective. They’re safe, and they’re cheap. I want to emphasize they are safe. People are dying in our land and in our world because we’re giving them dangerous drugs. Dr. Hoffer once said, ‘Drugs make a well person sick. Why would they make a sick person well?’ …

Vitamins are not the problem. They’re the solution. If we had better-nourished Americans, we’d save a pile on our $3 trillion-plus disease care bill. It’s good that older Americans take supplements. I don’t mean to do it foolishly. If you take a look, most people are actually smarter than we give them credit for. Taking a multivitamin for instance, especially if it’s a good-quality natural multivitamin, is just a really good idea.”

Growing Your Own Food Is Part of the Solution

As a general rule, most Americans are not getting enough vitamins, minerals and micronutrients from their foods, in large part thanks to the prevalence of processed foods. Dietary supplementation is generally advisable, especially if your diet is largely processed. In the long term, growing more nutrient-dense food is a big part of the answer.

Garden-grown organic vegetables and fruits are nutrient-rich and represent the freshest produce available. Growing your own crops not only improves your diet, but it also:

  • Enhances and protects precious topsoil
  • Encourages composting, which can be used to feed and nourish your plants
  • Minimizes your exposure to synthetic fertilizers, pesticides and other toxins
  • Promotes biodiversity by creating a natural habitat for animals, birds, insects and other living organisms
  • Improves your fitness level, mood and sense of well-being, making gardening a form of exercise

While gardens have many benefits, the most important reason you should plant a garden (especially given the many issues associated with industrial agriculture) is because gardening helps create a more sustainable global food system, giving you and others access to fresh, healthy, nutrient-dense food. If you are new to gardening and unsure about where to start, consider sprouts.

Sprouts are an easy-to-grow, but often overlooked, superfood with a superior nutritional profile. You can grow sprouts even if you don’t have an outdoor garden, and you should consider them if you live in an apartment or condo where space is limited.

“No matter where you are, there’s a way that we can [grow our own food]. We’ve been taught to be consumers of medical care instead of self-reliant people. We’ve been taught to be patients and not persons. To change this around, we have to give ourselves permission to take the power, to do what our body should have been doing all along. We’ve been misled.

I think maybe profit has a little bit to do with this. The pharmaceutical industry is making an awful lot of dough these days. I know people who take pills that cost $1,000 apiece. Don’t tell me I’m hooked on vitamins and I’m wasting my money and having expensive urine. I don’t need to hear that. I find that taking vitamins is very helpful to me, my children and my grandchildren …

For people who think they can’t, you’re wrong. You can. You can do this right away. You can eat better. One of the few free decisions we make every day is whether we will or will not exercise, whether we will or not eat this or that, whether we will or not say no to pharmaceutical drugs or over-the-counter drugs. Every single incremental advancement that you make is going to make your body happy. You’re going to see the difference. All you’ve got to do is try it.”

