Scientists Want to Align Your Internal Clock Because Timing Is Everything



In life, timing is everything.

Your body’s internal clock — the circadian rhythm — regulates an enormous variety of processes: when you sleep and wake, when you’re hungry, when you’re most productive. Given its palpable effect on so much of our lives, it’s not surprising that it has an enormous impact on our health as well. Researchers have linked circadian health to the risk of diabetes, cardiovascular disease, and neurodegeneration. It’s also known that the timing of meals and medicines can influence how they’re metabolized.

The ability to measure one’s internal clock is vital to improving health and personalizing medicine. It could be used to predict who is at risk for disease and track recovery from injuries. It could also be used to time the delivery of chemotherapy, blood-pressure medication, and other drugs so that they have the optimum effect at lower doses, minimizing the risk of side effects.

However, reading one’s internal clock precisely enough remains a major challenge in sleep and circadian health. The current approach requires taking hourly samples of blood melatonin — the hormone that controls sleep — during day and night, which is expensive and extremely burdensome for the patient. This makes it impossible to incorporate into routine clinical evaluations.

My colleagues and I wanted to obtain precise measurements of internal time without the need for burdensome serial sampling. I’m a computational biologist with a passion for using mathematical and computational algorithms to make sense of complex data. My collaborators, Phyllis Zee and Ravi Allada, are world-renowned experts in sleep medicine and circadian biology. Working together, we designed a simple blood test to read a person’s internal clock.

Listening to the Music of Cells

The circadian rhythm is present in every single cell of your body, guided by the central clock that resides in the suprachiasmatic nucleus region of the brain. Like the secondary clocks in an old factory, these so-called “peripheral” clocks are synchronized to the master clock in your brain, but they also tick forward on their own, even in petri dishes!

Your cells keep time through a network of core clock genes that interact in a feedback loop: When one gene turns on, its activity causes another molecule to turn it back down, and this competition results in an ebb and flow of gene activation within a 24-hour cycle. These genes in turn regulate the activity of other genes, which also oscillate over the course of the day. This mechanism of periodic gene activation orchestrates biological processes across cells and tissues, allowing them to take place in synchrony at specific times of day.
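
To make the feedback-loop idea concrete, here is a minimal toy simulation (an illustration of delayed negative feedback in general, not the actual clock-gene circuit): a protein Y, produced from gene X’s output, represses X after a delay, and that delayed negative feedback alone is enough to generate sustained, roughly day-length oscillations for suitable parameters.

```python
import numpy as np

# Toy delayed negative-feedback loop (illustrative only, not the real
# clock-gene circuit): protein Y, made from gene X's product, represses
# X's transcription after a delay, producing sustained oscillations.
dt, hours = 0.01, 240.0                  # step size and total time (h)
tau = 6.0                                # feedback delay (h), assumed
n_delay = int(tau / dt)
steps = int(hours / dt)

x = np.zeros(steps); y = np.zeros(steps)
x[0] = 0.5
for t in range(1, steps):
    y_past = y[t - n_delay] if t >= n_delay else 0.0
    dx = 1.0 / (1.0 + y_past**4) - 0.2 * x[t - 1]   # repressed synthesis minus decay
    dy = 0.2 * x[t - 1] - 0.2 * y[t - 1]            # Y tracks X, then decays
    x[t] = x[t - 1] + dt * dx
    y[t] = y[t - 1] + dt * dy

# Estimate the oscillation period from the peaks in X.
peaks = [i for i in range(1, steps - 1) if x[i - 1] < x[i] > x[i + 1]]
if len(peaks) > 2:
    print("approximate period (h):", dt * np.mean(np.diff(peaks[1:])))
```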

The circadian rhythm orchestrates many biological processes, including digestion, immune function, and blood pressure, all of which rise and fall at specific times of the day. Misregulation of the circadian rhythm can have adverse effects on metabolism, cognitive function, and cardiovascular health.

The discovery of the core clock genes is so fundamental to our understanding of how biological functions are orchestrated that it was recognized by the Nobel Committee last year. Jeffrey C. Hall, Michael Rosbash, and Michael W. Young together won the 2017 Nobel Prize in Physiology or Medicine “for their discoveries of molecular mechanisms controlling the circadian rhythm.” Other researchers have noted that as many as 40 percent of all other genes respond to the circadian rhythm, changing their activity over the course of the day as well.

This gave us an idea: Perhaps we could use the activity levels of a set of genes in the blood to deduce a person’s internal time — the time your body thinks it is, regardless of what the clock on the wall says. Many of us have had the experience of feeling “out of sync” with our environments — of feeling like it’s 5:00 a.m. even though our alarm insists it’s already 7:00 a.m. That can be a result of our activities being out of sync with our internal clock — the clock on the wall isn’t always a good indication of what time it is for you personally. Knowing what a profound impact one’s internal clock can have on biology and health, we were inspired to try to gauge gene activity to measure the precise internal time in an individual’s body. We developed TimeSignature: a sophisticated computational algorithm that could measure a person’s internal clock from gene expression using two simple blood draws.

Designing a Robust Test

To achieve our goals, TimeSignature had to be easy (measuring a minimal number of genes in just a couple blood draws), highly accurate, and — most importantly — robust. That is, it should provide just as accurate a measure of your intrinsic physiological time regardless of whether you’d gotten a good night’s sleep, recently returned from an overseas vacation, or were up all night with a new baby. And it needed to work not just in our labs but in labs across the country and around the world.

A mismatch between our internal time and our daily activities may raise the risk of disease.

To develop the gene signature biomarker, we collected tens of thousands of gene-activity measurements every two hours from a group of healthy adult volunteers. These measurements indicated how active each gene was in each person’s blood over the course of the day. We also used published data from three other studies that had collected similar measurements. We then developed a new machine learning algorithm, called TimeSignature, that could computationally search through this data to pull out a small set of biomarkers that would reveal the time of day. A set of 41 genes emerged as the best markers.

Surprisingly, not all the TimeSignature genes are part of the known “core clock” circuit — many of them are genes for other biological functions, such as your immune system, that are driven by the clock to fluctuate over the day. This underscores how important circadian control is — its effect on other biological processes is so strong that we can use those processes to monitor the clock!

Using data from a small subset of the patients from one of the public studies, we trained the TimeSignature machine to predict the time of day based on the activity of those 41 genes. (Data from the other patients was kept separate for testing our method.) Based on the training data, TimeSignature was able to “learn” how different patterns of gene activity correlate with different times of day. Having learned those patterns, TimeSignature can then analyze the activity of these genes in combination to work out the time that your body thinks it is. For example, although it might be 7 a.m. outside, the gene activity in your blood might correspond to the 5 a.m. pattern, indicating that it’s still 5 a.m. in your body.
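
The published TimeSignature algorithm is more sophisticated than anything shown here, but a minimal sketch conveys the shape of the approach. Because time of day is circular (11 p.m. sits next to midnight), one convenient trick is to regress the sine and cosine of the clock angle onto gene expression and recover the hour with an arctangent. Everything below, the gene count aside, is simulated purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simplified stand-in for a TimeSignature-style predictor (not the
# published algorithm): regress the sine and cosine of internal time
# onto the expression of a 41-gene panel, then recover the clock angle.
rng = np.random.default_rng(0)
n_samples, n_genes = 200, 41
true_hours = rng.uniform(0, 24, n_samples)             # simulated internal time
theta = 2 * np.pi * true_hours / 24
phases = rng.uniform(0, 2 * np.pi, n_genes)            # each gene peaks at its own hour
X = np.cos(theta[:, None] - phases) + 0.3 * rng.normal(size=(n_samples, n_genes))

model = Ridge(alpha=1.0).fit(X, np.column_stack([np.sin(theta), np.cos(theta)]))

def predict_hour(expression):
    """Map a gene-expression vector to an estimated internal hour (0-24)."""
    s, c = model.predict(expression.reshape(1, -1))[0]
    return (np.arctan2(s, c) % (2 * np.pi)) * 24 / (2 * np.pi)

print(predict_hour(X[0]), true_hours[0])   # prediction vs. simulated truth
```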

Many genes peak in activity at different times of day. This set of 41 genes, each shown as a different color, shows a robust wave of circadian expression. By monitoring the level of each gene relative to the others, the TimeSignature algorithm learns to ‘read’ your body’s internal clock.

We then tested our TimeSignature algorithm by applying it to the remaining data and demonstrated that it was highly accurate: We were able to deduce a person’s internal time to within 1.5 hours. We also demonstrated that our algorithm works on data collected in different labs around the world, suggesting it could be easily adopted. Finally, we showed that our TimeSignature test could detect a person’s intrinsic circadian rhythm with high accuracy, even in people who were sleep-deprived or jet-lagged.
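
One detail worth noting when quoting accuracies like “within 1.5 hours”: error on a 24-hour clock has to respect wraparound, since a prediction of 11:30 p.m. against a truth of 12:30 a.m. is one hour off, not twenty-three. A hypothetical helper:

```python
def circular_error_hours(pred, true):
    """Absolute error on a 24-hour clock, accounting for wraparound."""
    diff = abs(pred - true) % 24.0
    return min(diff, 24.0 - diff)

print(circular_error_hours(23.5, 0.5))  # -> 1.0, not 23.0
```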

Harmonizing Health With TimeSignature

By making circadian rhythms easy to measure, TimeSignature opens up a wide range of possibilities for integrating time into personalized medicine. Although the importance of circadian rhythms to health has been noted, we have really only scratched the surface when it comes to understanding how they work. With TimeSignature, researchers can now easily include highly accurate measures of internal time in their studies, incorporating this vital measurement using just two simple blood draws. TimeSignature enables scientists to investigate how the physiological clock impacts the risk of various diseases, the efficacy of new drugs, the best times to study or exercise, and more.

Of course, there’s still a lot of work to be done. While we know that circadian misalignment is a risk factor for disease, we don’t yet know how much misalignment is bad for you. TimeSignature enables further research to quantify the precise relationships between circadian rhythms and disease. By comparing the TimeSignatures of people with and without disease, we can investigate how a disrupted clock correlates with disease and predict who is at risk.

Down the road, we envision that TimeSignature will make its way into your doctor’s office, where your circadian health could be monitored just as quickly, easily, and accurately as a cholesterol test. Many drugs, for example, have optimal times for dosing, but the best time for you to take your blood pressure medicine or chemotherapy may differ from somebody else’s.

Previously, there was no clinically feasible way to measure this, but TimeSignature makes it possible for your doctor to do a simple blood test, analyze the activity of 41 genes, and recommend the time that would give you the most effective benefits. We also know that circadian misalignment — when your body’s clock is out of sync with the external time — is a treatable risk factor for cognitive decline; with TimeSignature, we could predict who is at risk, and potentially intervene to align their clocks.


Healthcare AI & Machine Learning: 4.5 Things To Know


Has Alexa changed your life yet? Vacuum the floor, start the dishwasher, feed the cat, clean the litter box, order more paper towels and lower the thermostat. That’s just a few items from the to-do list I handed over to Artificial Intelligence today – and all without one manual click or keystroke. Whew, no more straining my index finger to push all those buttons.

While all of this has a certain “cool” factor, it’s completely transforming our lives. Gone are the days when Artificial Intelligence (AI) meant a robot such as C-3PO or Rosie (the robot maid from The Jetsons). AI is no longer a sci-fi futuristic hope; it’s a set of algorithms and technologies known as Machine Learning (ML).

We’re quickly moving toward this technology powering many tasks in everyday life. But what about healthcare? How might it impact our industry and even our day-to-day jobs? I’m sharing 4.5 things you should know about Artificial Intelligence and Machine Learning in the healthcare industry and how it will impact our future.

1.   There’s a difference between Machine Learning and Artificial Intelligence.

Jeff Bezos, visionary founder and leader of Amazon, made this declaration last May: “Machine Learning and Artificial Intelligence will be used to empower and improve every business, every government organization, every philanthropy – basically there’s no institution in the world that cannot be improved with Machine Learning and Artificial Intelligence.”

Machine Learning and Artificial Intelligence have emerged as key buzzwords, and oftentimes they are used interchangeably. So, what’s the difference?

  • Machine Learning uses algorithms to process large amounts of data and allows the machine to learn for itself. You’ve probably benefited from Machine Learning in your inbox: spam filters continuously learn from the words used in an email, where it’s sent from, who sent it, and so on (a toy version appears after this list). In healthcare, a practical application is in the imaging industry, where machines learn how to read CT scans and MRIs, allowing providers to quickly diagnose and optimize treatment options.
  • Artificial Intelligence is the ability of machines to behave in a way that we would consider “smart” – the capability of a machine to imitate intelligent human behavior by being abstract, creative, and deductive, and by learning and applying what it learns. Siri and Alexa are good examples of what you might be using today. In healthcare, an artificial virtual assistant might have conversations with patients and providers about lab results and clinical next steps.
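
Here is the toy spam filter promised above, a minimal sketch of supervised Machine Learning using scikit-learn (the four example emails and their labels are made up): the model learns word statistics from labeled examples rather than following hand-written rules.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny labeled training set (fabricated for illustration).
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting moved to 3pm", "lab results attached for review",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features + Naive Bayes: a classic spam-filter recipe.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["free prize offer", "notes from the 3pm meeting"]))
```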

2.   The healthcare industry is primed for disruptors.

It’s no secret that interacting with the healthcare system is complex and frustrating. As consumers pay more out of pocket, they’re expecting and demanding innovation to simplify their lives, get educated, and save money. At the same time, they’re starting to get a taste of what Machine Learning and AI can do for their daily lives. These technologies WILL dramatically change the way we work in healthcare. Don’t take my word for it, just review a few of the headlines over the past year.

  • A conversational robot explains lab results.
  • A healthcare virtual assistant enables conversational dialogues and pre-built capabilities that automate clinical workflows.
  • Google powers up Artificial Intelligence, Machine Learning accelerator for healthcare.
  • New AI technology uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health.
  • A machine learning algorithm helps identify cancerous tumors on mammograms.

Soon, will a machine learning algorithm serve up a choice of 3 benefit plans that are best for my situation? Maybe Siri or Alexa will even have a conversation with me about it.

3.   Be smart about customer data

Being in healthcare and seeing the recent breaches, most of us have learned to be very careful about our security policies and customer data. As the use of Machine Learning grows in healthcare, continue to obsess over the privacy of your customer data. There are two reasons for this…

First, data is what makes Machine Learning and AI work. Without data, there’s nothing to mine and that means there’s no info from which to learn. Due to the sheer amount of data that Machine Learning technology collects and consumes, privacy will be more important than ever. Also, there’s potential for a lot more personally identifiable data to be collected. It will be imperative that companies pay attention to masking that data so the specific user is not revealed. These steps are critical to ensure you and your customer are protected as laws continue to catch up with this emerging technology.
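
As a concrete, if greatly simplified, illustration of masking: one common technique is to replace direct identifiers with a keyed hash (pseudonymization), so records can still be linked for analysis without exposing who they belong to. The field names and key below are invented for the sketch, and real healthcare de-identification (e.g., HIPAA Safe Harbor) involves far more than this.

```python
import hashlib
import hmac

# Minimal pseudonymization sketch: swap direct identifiers for a keyed
# hash so records stay linkable for model training without revealing
# the specific user. Not a substitute for full de-identification.
SECRET_KEY = b"rotate-me-and-store-me-in-a-vault"   # assumption: managed secret

def pseudonymize(member_id: str) -> str:
    return hmac.new(SECRET_KEY, member_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"member_id": "A12345", "age_band": "40-49", "claim_code": "J45"}
masked = {**record, "member_id": pseudonymize(record["member_id"])}
print(masked)
```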

Second, you’re already collecting A LOT of data on your clients of which you may not be taking full advantage. As more data is collected, it’s important to keep it organized so that you can use it effectively to gain insights and help your clients. Data architecture is a set of models, policies, rules or standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and in organizations. If you’re not already thinking about this, it might be a good idea to consult with a developer to get help.

4.  Recognize WHERE the customer is today and plan for tomorrow.

What’s the consumer appetite for these types of technologies? This year, 56.3 million personal digital assistants will ship to homes, including Alexa, Google Assistant and Apple’s new HomePod – a sharp jump from the 33 million sold in 2017. Consumers are quickly changing where they shop, organize and get information. It’s important that we move quickly to offer the experience customers want and need.

At first, I was happy to ask my Alexa to play music and set timers, now it’s the hub for my home. Alexa shops for my groceries, provides news stories, organizes my day, and much more. Plus, the cat really appreciates when Alexa feeds her – I’m totally serious that my cat feeder is hooked up to Alexa who releases a ration of kibble at pre-set feeding times.

There are so many applications for healthcare. Here are a few that are happening or just around the corner…

“Alexa, what’s left on my health plan deductible?”

“Alexa, find a provider in my network within 5 miles.”

“Alexa, search for Protonix pricing at my local pharmacies.”

“Alexa, how do I know if I have an ear infection?”

“Alexa, what’s the difference between an HSA, FSA and HRA?”

“Alexa, find a 5-star rated health insurance broker within 20 miles.”


4.5       The customer experience must be TRULY helpful

Making “cool” innovations in Artificial Intelligence or Machine Learning won’t work if not coupled with a relentless pursuit to serve the customer. These endeavors are expensive, so spend your IT budget wisely, ensuring new innovation creates true value and is easy for the end user.

Are you ready to learn more? I will be doing a breakout session with Reid Rasmussen, CEO of freshbenies at BenefitsPro on 4/18 at 2:30pm.

Machine-learning cloud platforms get to work


 

Analytic platforms as a service (PaaS) could shorten the machine-learning learning curve.

 

The machine-learning smarts that help Google know what’s in a photo and let Amazon’s Alexa carry on a conversation are getting a real job. “ML” platforms from vendors like Amazon, Google, IBM, Microsoft, and others can automate business processes on a previously impossible scale and free up employees for more creative, thought-intensive work. They also require a lot more commitment and, sometimes, coaxing than parking an Amazon Echo on a kitchen table or tapping a button to have Google back up the photos on your phone.

But the payoff can also be correspondingly greater. “Every business process is just badly written software,” observed Markus Noga, head of machine learning at SAP. The advent of on-demand AI lets companies do something about that: “We can really turn this into software and make the company run itself.”

At this summer’s Google I/O conference, the scene of multiple announcements of AI-as-a-service initiatives, CEO Sundar Pichai set his sights slightly higher: “I’m confident AI will invent new molecules,” he said during the opening keynote.

Raising all boats

At a minimum, machine-learning platforms let companies perform some of the same tasks AI tackles in consumer settings, just in larger numbers and with money on the line. For example, the Cloud Machine Learning platform Google opened for business last year provides image-recognition services—not too different from what Google Photos does for your phone’s pictures—that allow Airbus to correct satellite imagery to distinguish between snow and clouds.
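
For a sense of what “image recognition as a service” looks like in practice, here is a sketch using Google’s Cloud Vision client library for Python (assuming version 2+ of the google-cloud-vision package, configured credentials, and a hypothetical local image file; the Airbus pipeline itself is not public):

```python
from google.cloud import vision

# Sketch of calling a hosted image-recognition service. Assumes the
# GOOGLE_APPLICATION_CREDENTIALS environment variable points at a
# service-account key with Vision API access.
client = vision.ImageAnnotatorClient()

with open("satellite_tile.jpg", "rb") as f:      # hypothetical local file
    image = vision.Image(content=f.read())

# Ask the service to label what it sees in the image.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))   # e.g. "Snow 0.91"
```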

But machine-learning platforms can also take on tasks that individual users would rarely bother with but for which companies might pay a great deal. Consider the job of grading the visibility of sponsorship signs and banners in a sports event, something traditionally done by “students with stopwatches,” as SAP’s Noga put it.

SAP’s Leonardo Machine Learning Foundation can do that a bit faster: “We’re able to have a look at every frame, we’re able to have a look at every pixel of HD or 4K video, and we’re able to process it faster than real time.”

That led to some surprises in a breakdown of a series of skiing competitions. “It turned out that the company buying the most expensive slots around the starting gates wasn’t getting the best exposure,” Noga said. “It was the cheaper spots around the sidelines of the course [that were getting the best exposure].” Why? That’s where TV cameras would zoom in longer.

Customer service represents another obvious application of machine learning’s ability to parse human input—think Siri, but at scale and self-improving. Deloitte Consulting helped an unnamed financial-services firm deploy an AI-based system that handles some 27,000 customer queries an hour in over a dozen languages.

“It’s smarter than a chatbot in that it’s not just a set of rote responses, but rather a learning set that’s actually doing back-end analytics,” said Anthony Abbattista, lead at Deloitte’s analytics and cognitive business unit.

Abbattista emphasized that this starts with extensive, upfront input from warm-blooded data sources: “Having a starting model, started by the experts and captured, is always the first step in this.”

Keeping your options open

A strong case can be made for companies to outsource ML services to companies that specialize in them—but not just one company, analyst Jan Dawson of Jackdaw Research wrote in an e-mail.

“There’s definitely some risk in building big stuff based on a single AI or machine-learning platform,” he said. “But that’s still far preferable for most companies to trying to build their own AI or ML capabilities.”

Abbattista made the same point. “We’re not looking at things and saying ‘we need a five- or 10-year system,’” he said. “We can plug and play with these providers as they come and go.”

Box, for example, first signed up with Google to use its Google Cloud Machine Learning Engine to automate image recognition. More recently, though, it announced plans to bring Microsoft’s Azure Machine Learning Platform onboard for other, not yet specified AI services.

“Today, there are billions of files in Box, and a significant portion of those are image files,” wrote Chief Product Officer Jeetu Patel in an e-mail forwarded by a publicist. “So when we looked at what problem we could solve first with machine learning, it made natural sense to start with providing an image recognition service through our partnership with Google Cloud.”

Google’s image-recognition services are free in the current private beta but won’t be when they ship later this year; Box will reveal pricing closer to then.

Patel ticked off some early applications of Google’s image-recognition: “We’re seeing retail customers using image recognition in Box to optimize digital asset management of product photos, a major media company is using this technology to automatically tag massive amounts of inbound photos from freelance photographers around the globe, and a global real estate firm is leveraging optical character recognition in Box to digitize workflows for paper-based leases and agreements.”

The pitfalls of platforming

In some scenarios, it makes more sense to keep AI in-house. PayPal, for instance, opted to build the fraud-detection AI it rolled out in 2013.

“Our risk and our problems are very unique,” said Hui Wang, senior director of global risk and data sciences at the firm. “We are a closed-loop network. We have a relationship with both buyers and sellers.” And her part of the company must rule on transactions “within milliseconds,” she said.

A 2015 Deloitte white paper hedged its endorsement of services built on IBM’s Watson with a warning against handing over “individual user information, such as financial transaction information or identifiable patient records.”

So, for all the attention paid to AI as a service—Dawson observed that “companies are talking about it a lot more because they see it as a badge of honor or an overt signal of innovation”—its business applications can lag behind those in homes. “We need to get the same technology from 9 to 5 that we enjoy in[to] the consumer space from 5 to 9,” Abbattista said.

Amy Webb, a professor at New York University’s Stern School of Business and author of the book The Signals Are Talking, predicted that will change soon enough; it has to.

“AI is our generation’s electricity,” she wrote in an e-mail. “Once businesses plug in to the AI grid, they’ll be ready for increased productivity, better workflows, new inventions, and the ability to scale their operations.”

Digital Transformation: What and Why?


ML has more than just a learning curve to overcome before it transforms business.

Machine learning (ML) based data analytics is rewriting the rules for how enterprises handle data. Research into machine learning and analytics is already yielding success in turning vast amounts of data—shaped with the help of data scientists—into analytical rules that can spot things that would have escaped human analysis in the past—whether in pursuit of pushing forward genome research or predicting problems with complex machinery.

Now machine learning is beginning to move into the business world. But most organizations haven’t truly grasped how machine learning will change the way they do business—or how it will change the shape of their organizations in the process. Companies are looking to ML to automate processes or to augment humans by assisting them in data-driven tasks. And it’s possible that ML could turn enterprises into vendors—turning lessons learned from their own vast stores of data into algorithms they can license to software and service providers.

But getting there will depend on how machine learning capabilities evolve over the next five years and what implications that evolution has for today’s hiring and recruitment strategies. And nowhere is this more crucial than in unsupervised machine learning, where systems are given vast datasets and told to find the patterns without humans having first figured out what the software needs to look for.

David Dittman, director of business intelligence and analytics services at Procter & Gamble, explained that the biggest analytics problem he sees today with other large US companies is that “they are becoming enamored by [machine learning and analytics] technology, while not understanding that they have to build the foundation [for it], because it can be hard, expensive and requires vision.” Instead, Dittman said, companies mistakenly believe that machine learning will reveal the vision for them: “‘Can’t I have artificial intelligence just tell me the answer?'”

The problem is that “artificial intelligence” doesn’t really work that way. ML currently falls into two broad categories: supervised and unsupervised. And neither of these works without having a solid data foundation.

Breaking training

Yisong Yue, assistant professor of computing and mathematics at the California Institute of Technology, sees potential in unsupervised machine learning for applications such as diagnosing cancer from radiology images.

 

Supervised ML requires humans to create sets of training data and validate the results of the training. Speech recognition is a prime example of this, explained Yisong Yue, assistant professor of computing and mathematics at Caltech. “Speech recognition is trained in a highly supervised way,” said Yue. “You start with gigantic data—asking people to say certain sentences.”

But collecting and classifying enough data for supervised training can be challenging, Yue said. “Imagine how expensive that is, to say all these sentences in a range of ways. [Data scientists] are annotating this stuff left and right. That simply isn’t scalable to every task that you want to solve. There is a fundamental limit to supervised ML.”

Unsupervised machine learning reduces that interaction. The data scientist chooses a presumably massive dataset and essentially tells the software to find the patterns within it, all without humans having to first figure out what the software needs to look for. With minimal pre-task human effort needed, the scalability of unsupervised ML, particularly in terms of the upfront human workload, is much higher. But the term “unsupervised” can be misleading: a data scientist still needs to choose the data to be examined.

Unsupervised ML software is asked to “find clusters of data that may be interesting, and a human analyzes [those groupings] and decides what to do next,” said Mike Gualtieri, Forrester Research’s vice president and principal analyst for advanced analytics and machine learning. Human analysis is still required to make sense of the groupings of data the software creates.

But the payoffs of unsupervised ML could be much broader. For example, Yue said, unsupervised learning may have applications in medical tasks such as cancer diagnosis. Today, he explained, standard diagnostic efforts involve taking a biopsy and sending it to a lab. The problem is that biopsies—themselves a human-intensive analytics effort—are time-consuming and expensive. And when a doctor and patient need to know right away if it’s cancer, waiting for the biopsy result can be medically hazardous. Today, a radiologist typically will look at the tissue, explained Yue, “and the radiologist makes a prediction—the probability of it containing cancerous tissue.”

With a big enough training data set, this could be an application for supervised machine learning, Yue said. “Suppose we took that dataset—the images of the tissue and those biopsy results—and ran supervised ML analysis.” That would be labor-intensive up front, but it could detect similarities in the images of those that had positive biopsies.

But, Yue asked, what if instead the process was done as an unsupervised learning effort?

“Suppose we had a dataset of images and we had no biopsy results? We can use this to figure out what we can predict with clustering.” Assume that the number of samples was 1,000. The software would group the images and look for all of the similarities and differences, which is basic pattern recognition. “Let’s say it finds 10 such clusters, and suppose I can only afford to run 10 biopsies. We could choose to test just one from each cluster,” Yue said. “This is just step one in a long sequence of steps, of course, as it looks at multiple types of cancer.”
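
A sketch of Yue’s thought experiment, with stand-in feature vectors in place of real tissue images: cluster the 1,000 unlabeled samples into 10 groups, then pick one representative per group to send for an actual biopsy.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for per-image feature vectors (real pipelines would extract
# features from the tissue images themselves).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))

# Unsupervised step: group the 1,000 samples into 10 clusters.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

# Budgeted step: for each cluster, pick the sample nearest its centroid
# as the one candidate to send for an actual biopsy.
for k, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == k)[0]
    nearest = members[np.argmin(np.linalg.norm(features[members] - center, axis=1))]
    print(f"cluster {k}: biopsy sample {nearest}")
```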

Guider versus decider

Unsupervised learning still needs a human being to assign a value to the clusters or patterns of data it finds, so it’s not necessarily ready for totally hands-off tasks. Rather, it is currently better suited to enhance the performance of humans by highlighting patterns of data that might be of interest. But there are places where that may soon change—driven largely by the quality and quantity of data.

“I think, right now, that people are jumping to automation when they should be focused on augmenting their existing decision process,” said Dittman. “Five years from now, we’ll have the proper data assets and then you’ll want more automation and less augmentation. But not yet. Today, there is a lack of usable data for machine learning. It’s not granular enough, not broad enough.”

Even as ML data analytics become more sophisticated, it’s not yet clear how that will change the shape of companies’ IT organizations. Forrester’s Gualtieri anticipates a reduction in the need for data scientists five years from now, in much the same way that Web developers who created webpages from scratch were far more needed in 1995 than in 2000, once many webpage functions had been automated and sold as modular scripts. A similar shift is likely in machine learning, he suggested, as software and service providers begin to offer application programming interfaces to commercial machine learning platforms.

Gualtieri foresees a simple change in the enterprise IT build-or-buy model. “Today, you’re going to make a build decision and hire more data scientists,” he explained. “As these APIs enter the market, it will move to ‘buy’ as opposed to ‘build.'” He added that “we are seeing the beginnings of this right now.” A couple of examples are Clarifai—which can search through video for a particular moment, such as looking at thousands of wedding videos and learning to recognize the ring ceremony or the “you may kiss the bride” moment—and Affectiva, which tries to determine someone’s mood from an image.

Dittman agrees with Gualtieri that companies will likely create many specialized scripts automating many ML tasks. But he disagrees that this will result in fewer computer science jobs in five years.

“If you look at the number of practicing data scientists, that will sharply increase, but it will increase far slower than the digitization of technology, as [ML] moves into more and more whitespaces,” Dittman explained. “Consider the open source trend and the fact that data-scientist tools are going to start to get easier and easier to use, moving from code generation to code reuse.”

Caltech’s Yue argued that demand for data scientists will continue to rise as ML successes will beget more ML attempts. And as the technology improves, he explained, more and more units in a business would be able to take advantage of ML, which means the need for far more data scientists to initially write those programs.

From consumer to provider

Part of what may drive a continuing demand for data scientists is the hunger for data to make ML more effective. Gualtieri sees some enterprises—roughly five years from now—also playing the role of vendor. “Boeing may decide to be that provider of domain-specific machine learning and sell [those modules] to suppliers who could then become customers,” he said.

P&G’s Dittman sees both ends of the analytics equation—data and the ML code—being highly sellable, potentially as a new major revenue source for enterprises. “Companies are going to start monetizing their data,” he explained. “The data industry is going to explode. Data is absolutely exploding, but there is a lack of a data strategy. Getting the right data that you need for your business case, that tends to be the challenge.”

But Yue has a different concern. “Five years from now, [ML] will naturally come into conflict with legal issues. We have strong laws about discrimination, protected classes,” he said. “What if you use data algorithms to decide who to make loans to? How do you know that it’s not discriminatory? It’s a question for policy makers.”

Yue offered the example of software finding a correlation between consumers defaulting on their loans and those who have blue eyes. The software could decide to scan every customer’s eye color and use that information to decide whether or not to approve a loan. “If a human made that decision, it would be considered discriminatory,” Yue said.
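
A first-pass audit for the kind of problem Yue describes might simply compare approval rates across groups. The sketch below uses made-up decisions and a made-up group label, with the “four-fifths rule” from US employment guidance as a rough screening threshold; real fairness analysis goes much deeper than a single ratio.

```python
import numpy as np

# Illustrative fairness screen: compare a model's loan-approval rates
# across a protected attribute (a fabricated "group" column here).
approved = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0])   # model decisions
group    = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = approved[group == "a"].mean()
rate_b = approved[group == "b"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")

# Four-fifths rule as a rough screen, not a legal determination.
if ratio < 0.8:
    print("warning: potential disparate impact; review features and data")
```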

That legal issue speaks to the core role a data analyst plays in unsupervised ML. The software’s job is to find the links, but it’s ostensibly the human who decides what to do about those links. One way or the other, HR is going to need to recruit a lot more data scientists for quite some time.

How Haven Life uses AI, machine learning to spin new life out of long-tail data


Haven Life is leveraging MassMutual’s historical data to give instant life insurance approvals. Using AI and machine learning to derive new value from old data could become an enterprise staple.

Can a life insurance company look at the same data every other rival has and come up with different insights? That’s the goal of Haven Life, which is using artificial intelligence to offer decisions on applications in real time.

Life insurance runs on actuarial data, which estimates how long a person is likely to live. This life-and-death data is needed so life insurers can manage risks.

To date, obtaining life insurance has been a bit of a pain because it requires a medical exam, some blood and a bevy of medical history questions. Haven Life, a unit of MassMutual, aims to streamline the process, said Mark Sayre, head of policy design at Haven Life.

“The data we use is established for this purpose. Life insurance has a unique challenge since mortality has a slow process and it’s uncommon. It takes many years of experience to build our models,” explained Sayre.

 

Indeed, Haven Life needs someone to live or die to verify models. In other words, Haven Life has to use MassMutual’s data accumulated over the years and then use artificial intelligence and machine learning to find things in the information that humans can’t see. As a result, Haven Life can offer the InstantTerm process, an innovation that lets the startup underwrite a policy on behalf of MassMutual in minutes, without a medical exam.

Simply put, Haven Life is using older data to spin something new. I’d argue that applying artificial intelligence and machine learning to older proprietary data is going to be a key use case in corporations. Haven Life had to take data from old applications and free text and turn it into structured information.

“Our models can now dig into interactions and various elements of the data,” said Sayre. One example is blood and urine tests used in life insurance quotes. Say a normal value on a blood test is 45. Under the previous model, 46 would be deemed high and 43 low.

“Our model better understands how close 45 is to 46 so it’s not immediately good to bad,” explained Sayre. As a result of the new model and machine learning, Haven Life found that low figures are just as concerning as high ones. “The model brought something new to the medical team. If there are multiple low figures that can be bad. We have to look at the interplay of variables on lab tests,” said Sayre.

Blood pressure, albumin and globulin are variables that could be worrisome if low.

In many respects, algorithmic underwriting is about creating pathways to make decisions by using various characteristics such as height, weight, cholesterol and other values.
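
Those decision pathways can be pictured as a decision tree. The sketch below trains a deliberately tiny tree on fabricated applicant data (the features and the flagging rule are invented for illustration; real underwriting models are far larger and built on actuarial data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Fabricated applicant features for a toy "pathways" illustration.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(170, 10, 500),    # height (cm)
    rng.normal(80, 15, 500),     # weight (kg)
    rng.normal(200, 40, 500),    # cholesterol (mg/dL)
])
# Invented rule for demo purposes only: flag high cholesterol + weight.
y = ((X[:, 2] > 240) & (X[:, 1] > 90)).astype(int)

# A shallow tree makes the decision pathways human-readable.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["height", "weight", "cholesterol"]))
```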

One key note is that Haven Life’s model is a work in progress and will take decades of refinement. MassMutual brings the history and mortality experience while Haven Life brings a tech focus and ability to move quickly.

What’s next? Sayre said Haven Life is looking at variables such as credit data and prescription histories. The catch is that Haven Life won’t know the validity of the data for life insurance for years. “We won’t know because there may be no deaths for years. All of these models require an outcome. We need some death and some living people,” said Sayre.

Other key points:

  • Sayre’s team has 15 people between developers, actuaries and UX designers.
  • Haven Life has 100 employees.
  • MassMutual has 40 data scientists.
  • Haven Life was launched in 2015 after founder Yaron Ben-Zvi had a less-than-satisfactory experience buying life insurance. Haven Life is the first insurer to offer coverage in two minutes with no medical exams.
  • Haven Life is independent from MassMutual, but leans on the giant for access to data, legal and regulatory expertise. MassMutual is the issuer for the Haven Life term policy.

How Machine Learning May Help Tackle Depression


By detecting trends that humans are unable to spot, researchers hope to treat the disorder more effectively.

Depression is a simple-sounding condition with complex origins that aren’t fully understood. Now, machine learning may enable scientists to unpick some of its mysteries in order to provide better treatment.

For patients to be diagnosed with Major Depressive Disorder, which is thought to be the result of a blend of genetic, environmental, and psychological factors, they have to display several of a long list of symptoms, such as fatigue or lack of concentration. Once diagnosed, they may receive cognitive behavioral therapy or medication to help ease their condition. But not every treatment works for every patient, as symptoms can vary widely.

Recently, many artificial intelligence researchers have begun to develop ways to apply machine learning to medical situations. Such approaches are able to spot trends and details across huge data sets that humans would never be able to, teasing out results that can be used to diagnose other patients. The New Yorker recently ran a particularly interesting essay about using the technique to make diagnoses from medical scans.

Similar approaches are being used to shed light on depression. A study published in Psychiatry Research earlier this year showed that MRI scans can be analyzed by machine-learning algorithms to establish the likelihood of someone suffering from the condition. By identifying subtle differences in scans of people who were and were not sufferers, the team was able to identify which unseen patients were suffering from major depressive disorder with roughly 75 percent accuracy.
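
The paper’s actual pipeline isn’t reproduced here, but the general shape of such a study, per-scan feature vectors, a classifier, and accuracy measured on held-out patients, looks roughly like the sketch below (all data is random, so the score hovers near chance; the real study reported about 75 percent):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Generic stand-in for an MRI-based classification study (not the
# paper's actual pipeline): random features and labels, purely to show
# the evaluation shape.
rng = np.random.default_rng(0)
features = rng.normal(size=(120, 300))        # assumed MRI-derived features
labels = rng.integers(0, 2, size=120)         # 1 = major depressive disorder

# Cross-validation keeps each fold's test patients unseen during training.
scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print(f"held-out accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```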

Perhaps more interestingly, Vox reports that researchers from Weill Cornell Medical College are following a similar tack to identify different types of depression. By having machine-learning algorithms interrogate data captured when the brain is in a resting state, the scientists have been able to categorize four different subtypes of the condition that manifest as different mixtures of anxiety and lack of pleasure.

Not all attempts to infer such fine-grained diagnoses from MRI scans have been successful in the past, of course. But the use of AI does provide much better odds of spotting a signal than when individual doctors pore over scans. At the very least, the experiments lend weight to the notion that there are different types of depression.

The approach could be just one part of a broader effort to use machine learning to spot subtle clues related to the condition. Researchers at New York University’s Langone Medical Center, for instance, are using machine-learning techniques to pick out vocal patterns that are particular to people with depression, as well as conditions like PTSD.

And the idea that there may be many types of depression could prove useful, according to Vox. It notes another recent study carried out by researchers at Emory University that found that machine learning was able to identify different patterns of brain activity in fMRI scans that correlated with the effectiveness of different forms of treatment.

In other words, it may be possible not just to use AI to identify unique types of depression, but also to establish how best to treat them. Such approaches are still a long way from providing clinically relevant results, but they do show that it may be possible to identify better ways to help sufferers in the future.

In the meantime, some researchers are also trying to develop AIs to ensure that depression doesn’t lead to tragic outcomes like self-harm or suicide. Last month, for instance, Wired reported that scientists at Florida State University had developed machine-learning software that analyzes patterns in health records to flag patients that may be at risk of suicidal thoughts. And Facebook claims it can do something similar by analyzing user content—but it remains to be seen how effective its interventions might be.

Source: www.technologyreview.com

Is machine learning the next commodity?


It’s not every day you can witness an entire class of software making the transition from specialized, expensive-to-develop code to a general-purpose technology. But that’s exactly what’s happening with machine learning.

Chances are, you’re already hip-deep in machine-learning applications. It’s how Google Photos organizes those pictures from your vacation in Spain. It’s how Facebook suggests tags for the pictures you took at last week’s soccer match. It’s how the cars of nearly every major automaker can help you avoid unsafe lane changes.

And it’s also the start of something even bigger.

Machine learning – which enables a computer to learn without new programming – is exploding in its ability to handle highly complex tasks. It can make houses and buildings not just smart, but actively intelligent. It can take e-commerce from a one-size-fits-all experience to something personalized. It might even find your next date.

Driving this surge of machine-learning development is a wave of data generated by mobile phones, sensors, and video cameras. It’s a wave whose scope, scale, and projected growth are unprecedented.

Every minute of every day, YouTube gains 300 hours of video, Apple users download 51,000 apps, and 347,222 Tweets make their way into the world. Those stats come from the good folks at Domo, who call the time we’re living in “an era where data never sleeps.”

Intel Capital’s Sanjit Dang

Until now, the hot topic of conversation has been how to analyze information and take action based on the results. But the volume of data has become so great, and its trajectory so steep, that we need to automate many of those actions. Now.

As a result, we expect machine learning will become the next great commodity. In the short term, we expect the cost of advanced algorithms to plummet – especially given multiple open-source initiatives – and to spur new areas of specialization. Longer term, we expect these kinds of algorithms to make their way into standard microprocessors.

Marc Andreessen once said software is eating the world. In the case of machine learning, it will have a very large appetite.

Proprietary becomes open

To understand the potential of machine learning as a commodity, Linux is a good place to start. Released as a free, open-source operating system in 1991, it now powers nearly all the world’s supercomputers, most of the servers behind the Internet, and the majority of financial trades worldwide – not to mention tens of millions of Android mobile phones and consumer devices.

Like Linux, machine learning is well down the open-source path. In the last few months, Baidu, Facebook, and Google have released sets of open-source machine-learning algorithms. Another group of high-tech heavyweights, including Sam Altman, Elon Musk, and Peter Thiel, have launched the OpenAI initiative. And universities and tech communities are adding new tools to the mix.

In the short-to-medium term, we see three outcomes from this activity. First, companies that need to integrate machine learning into their products will do so inexpensively – either through their engineering teams or third-party vendors.

Second, a three-tier system of available algorithms will establish itself. At the bottom layer will be open-source code. In the middle will be code with greater capabilities, available under license from Amazon, Google, Microsoft, or one of the other big players. At the top will be the highly prized code that keeps these companies competitive; it will stay closely guarded until they feel it’s time to make it available widely.

Finally, we forecast a flurry of merger, acquisition, and licensing agreements as algorithm providers look to grow and defend their positions. We also expect more specialization as they attempt to lock down various markets.

In fact, that process already is well under way.

Smarter buildings & commerce

For all the talk about smarter homes and buildings, today’s technologies aren’t nearly as intelligent as they could be. Yes, they can collect data and operate within confined parameters. But they can’t adapt to the way you live your life.

If you get a new dog, for example, fixed-intelligence devices can’t tell the difference between the two of you. If your calendar shows you working from home, these devices won’t think to disable your security system without asking.

Fortunately, that’s changing. Startups such as Nuro Technologies, for example, are pairing sophisticated sensors and self-learning networks for in-home applications. Think of the sensors as mini iPhones in and around your house. You can download software into them – fire sensing, irrigation control, security and more – the same way you load apps into a phone.

Commerce is also a big opportunity for machine learning. Maybe the biggest. One of our portfolio companies, Vizury, uses machine learning to help companies display only the online ads you want to see. Awarestack is another great example: it uses data about how and where you park a car to create algorithms that can help you get around more efficiently.

Then there’s Dil Mil, an online dating app very popular in the South Asian community and growing rapidly. Unlike conventional apps that use the data they collect to make a romantic match, it looks at social behaviors – such as posting on Instagram, Facebook, and Twitter – to find the best possible match. All in real time.

Next stop: silicon

If the Linux of the 1990s illustrates the long-term impact of machine learning, the laptop and desktop machines of the 1980s point to its final destination. In a word: silicon.

Just as modems and graphics cards made their way into microprocessors and motherboards, so will machine learning software. There is simply too much data through which companies need to sift, too many actions they’ll need to take, and too many good algorithms already available.

It’s going to be an exciting time.

A director at Intel Capital, Sanjit Dang drives investments in user computing across the consumer and enterprise sectors. He has also driven several investments related to big data, the Internet of Things, and cloud computing.
