Flexible, wearable oral sodium sensor could help improve hypertension control


Summary:
For people who have hypertension and certain other conditions, eating too much salt raises blood pressure and increases the likelihood of heart complications. To help monitor salt intake, researchers have developed a flexible and stretchable wireless sensing system designed to be comfortably worn in the mouth to measure the amount of sodium a person consumes.


Based on an ultrathin, breathable elastomeric membrane, the sensor integrates with a miniaturized flexible electronic system that uses Bluetooth technology to wirelessly report the sodium consumption to a smartphone or tablet. The researchers plan to further miniaturize the system — which now resembles a dental retainer — to the size of a tooth.

“We can unobtrusively and wirelessly measure the amount of sodium that people are taking in over time,” explained Woon-Hong Yeo, an assistant professor in the Woodruff School of Mechanical Engineering at the Georgia Institute of Technology. “By monitoring sodium in real-time, the device could one day help people who need to restrict sodium intake learn to change their eating habits and diet.”

Details of the device are reported May 7 in the early edition of the journal Proceedings of the National Academy of Sciences. The device has been tested in three adult study participants who wore the sensor system for up to a week while eating both solid and liquid foods including vegetable juice, chicken soup and potato chips.

According to the American Heart Association, Americans on average eat more than 3,400 milligrams of sodium each day, far more than the limit of 1,500 milligrams per day it recommends. The association surveyed a thousand adults and found that “one-third couldn’t estimate how much sodium they ate, and another 54 percent thought they were eating less than 2,000 milligrams of sodium a day.”

The new sodium sensing system could address that challenge by helping users better track how much salt they consume, Yeo said. “Our device could have applications for many different goals involving eating behavior for diet management or therapeutics,” he added.

Key to development of the intraoral sensor was replacement of traditional plastic and metal-based electronics with biocompatible and ultrathin components connected using mesh circuitry. Sodium sensors are available commercially, but Yeo and his collaborators developed a flexible micro-membrane version to be integrated with the miniaturized hybrid circuitry.

“The entire sensing and electronics package was conformally integrated onto a soft material that users can tolerate,” Yeo explained. “The sensor is comfortable to wear, and data from it can be transmitted to a smartphone or tablet. Eventually the information could go to a doctor or other medical professional for remote monitoring.”

The flexible design began with computer modeling to optimize the mechanical properties of the device for use in the curved and soft oral cavity. The researchers then used their model to design the actual nanomembrane circuitry and choose components.

The device can monitor sodium intake in real-time, and record daily amounts. Using an app, the system could advise users planning meals how much of their daily salt allocation they had already consumed. The device can communicate with a smartphone up to ten meters away.
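
As a rough illustration of the arithmetic such an app would perform, the short Python sketch below keeps a running tally of reported sodium against the 1,500-milligram daily limit cited above. The readings and the function are invented for illustration; the device's actual data format and app interface are not described in the article.

```python
# Hypothetical sketch only: the sensor's real data format and the app's
# interface are not published in the article above.
DAILY_LIMIT_MG = 1500  # American Heart Association recommendation cited earlier

def remaining_allowance(readings_mg, limit_mg=DAILY_LIMIT_MG):
    """Sum the sodium readings (in milligrams) reported so far today and
    return (milligrams of allowance left, total consumed)."""
    consumed = sum(readings_mg)
    return max(limit_mg - consumed, 0), consumed

# Example: three meals' worth of readings streamed from the sensor
left, so_far = remaining_allowance([420, 610, 380])
print(f"Consumed {so_far} mg; {left} mg of today's allowance remaining")
```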

Next steps for the sodium sensor are to further miniaturize the device and to test it with users who have the medical conditions it is intended to address: hypertension, obesity or diabetes.

The researchers would like to do away with the small battery, which must be recharged daily to keep the sensor in operation. One option would be to power the device inductively, which would replace the battery and complex circuit with a coil that could obtain power from a transmitter outside the mouth.

The project grew out of a long-term goal of producing an artificial taste system that can sense sweetness, bitterness, pH and saltiness. That work began at Virginia Commonwealth University, where Yeo was an assistant professor before joining Georgia Tech.

Journal Reference:

  1. Yongkuk Lee et al. Wireless, Intraoral Hybrid Electronics for Real-Time Quantification of Sodium Intake Toward Hypertension Management. Proceedings of the National Academy of Sciences, 2018; DOI: 10.1073/pnas.1719573115

Walk this way: Novel method enables infinite walking in VR


In the ever-evolving landscape of virtual reality (VR) technology, a number of key hurdles remain. But a team of computer scientists has tackled one of the major challenges in VR, an advance that should greatly improve the user experience: enabling an immersive virtual experience while being physically limited to one’s actual, real-world space.

A user wears the researchers’ experimental setup — a Vive HMD augmented with SMI gaze tracking. Superimposed are the top view of the recorded movements of the physical path in a 3.5 m × 3.5 m real room and the virtual path in a much larger 6.4 m × 6.4 m synthetic space. The team demonstrates that saccades can significantly increase the rotation gains during redirection without introducing visual distortions or simulator sickness. Their new method can be applied to large, open virtual spaces and small physical environments for room-scale VR.
 


Computer scientists from Stony Brook University, NVIDIA and Adobe have collaborated on a computational framework that gives VR users the perception of infinite walking in the virtual world — while limited to a small physical space. The framework also enables this free-walking experience for users without causing dizziness, shakiness, or discomfort typically tied to physical movement in VR. And, users avoid bumping into objects in the physical space while in the VR world.

To do this, the researchers focused on manipulating a user’s walking direction by working with a basic natural phenomenon of the human eye, called saccade. Saccades are quick eye movements that occur when we look at a different point in our field of vision, like when scanning a room or viewing a painting. Saccades occur without our control and generally several times per second. During that time, our brains largely ignore visual input in a phenomenon known as “saccadic suppression” — leaving us completely oblivious to our temporary blindness, and the motion that our eyes performed.

“In VR, we can display vast universes; however, the physical spaces in our homes and offices are much smaller,” says lead author of the work, Qi Sun, a PhD student at Stony Brook University and former research intern at Adobe Research and NVIDIA. “It’s the nature of the human eye to scan a scene by moving rapidly between points of fixation. We realized that if we rotate the virtual camera just slightly during saccades, we can redirect a user’s walking direction to simulate a larger walking space.”

Using a head- and eye-tracking VR headset, the researchers’ new method detects saccadic suppression and redirects users during the resulting temporary blindness. When more redirection is required, researchers attempt to encourage saccades using a tailored version of subtle gaze direction — a method that can dynamically encourage saccades by creating points of contrast in our visual periphery.
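
The general idea can be sketched in a few lines: inject a small extra camera rotation only while a saccade is under way, so the change falls within saccadic suppression. The Python sketch below is a minimal illustration of this principle; the velocity threshold and per-frame gain are assumed placeholder values, not the parameters of the published method.

```python
# Illustrative values only; not the thresholds or gains used in the paper.
SACCADE_VELOCITY_DEG_S = 180.0   # eye angular speed above which a saccade is assumed
MAX_EXTRA_ROTATION_DEG = 0.5     # assumed largest unnoticeable per-frame rotation

def redirect_camera(camera_yaw_deg, eye_velocity_deg_s, desired_correction_deg):
    """Return an updated camera yaw, injecting extra rotation only during saccades,
    when saccadic suppression hides the change from the user."""
    if abs(eye_velocity_deg_s) >= SACCADE_VELOCITY_DEG_S:
        step = max(-MAX_EXTRA_ROTATION_DEG,
                   min(MAX_EXTRA_ROTATION_DEG, desired_correction_deg))
        return camera_yaw_deg + step
    return camera_yaw_deg  # between saccades the camera follows the head exactly
```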

The team who authored the research, titled “Towards Virtual Reality Infinite Walking: Dynamic Saccade Redirection,” will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

To date, existing methods addressing infinite walking in VR have limited redirection capabilities or cause undesirable scene distortions; they have also been unable to avoid obstacles in the physical world, like desks and chairs. The team’s new method dynamically redirects the user away from these objects. The method runs fast, so it is able to avoid moving objects as well, such as other people in the same room.

The researchers ran user studies and simulations to validate their new computational system, including having participants perform game-like search and retrieval tasks. Overall, virtual camera rotation was unnoticeable to users during episodes of saccadic suppression; they could not tell that they were being automatically redirected via camera manipulation. Additionally, in testing the team’s method for dynamic path planning in real-time, users were able to walk without running into walls and furniture, or moving objects like fellow VR users.

“Currently in VR, it is still difficult to deliver a completely natural walking experience to VR users,” says Sun. “That is the primary motivation behind our work — to eliminate this constraint and enable fully immersive experiences in large virtual worlds.”

Though mostly applicable to VR gaming, the new system could potentially be applied to other industries, including architectural design, education, and film production.

Now, you can hold a copy of your brain in the palm of your hand


New 3D printing technique enables faster, better, and cheaper models of patient-specific medical data for research and diagnosis

Source:
Wyss Institute for Biologically Inspired Engineering at Harvard
Summary:
Medical imaging technologies like MRI and CT scans produce high-resolution images as a series of ‘slices,’ making them an obvious complement to 3D printers, which also print in slices. However, isolating the objects to be printed from a medical scan, whether by manual segmentation or automatic ‘thresholding,’ is prohibitively time-consuming, expensive or inaccurate. A new method converts medical data into dithered bitmaps, allowing custom 3D-printed models of patient data to be printed in a fraction of the time.

This 3D-printed model of Steven Keating’s skull and brain clearly shows his brain tumor and other fine details thanks to the new data processing method pioneered by the study’s authors.
 

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group. Curious to see what his brain actually looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI and CT scans, but was frustrated that existing methods were prohibitively time-intensive, cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration — which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany — is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D. Ph.D., an Assistant Professor of Radiology at the University of Washington and clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from surrounding tissue and converted into surface meshes in order to be printed. This is achieved via either a very time-intensive process called “segmentation,” where a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process in which a computer program quickly converts areas that contain grayscale pixels into either solid black or solid white pixels, based on a shade of gray that is chosen to be the threshold between black and white. Because medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders, auto-thresholding (or even manual segmentation) often over- or understates the size of a feature of interest and washes out critical detail.

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
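
To make the contrast concrete, here is a small Python/NumPy sketch of both approaches applied to a grayscale slice: a hard threshold, which discards the gradations, and Floyd-Steinberg error diffusion, a standard way of producing a dithered bitmap. This is only an illustration of the general principle, not the actual pipeline used in the paper or by the printer.

```python
import numpy as np

def hard_threshold(gray, cutoff=128):
    """Auto-thresholding: every pixel becomes solid black or white,
    discarding all intermediate grayscale information."""
    return (gray >= cutoff).astype(np.uint8) * 255

def floyd_steinberg_dither(gray):
    """Error-diffusion dithering: each pixel still ends up black or white,
    but the local density of black pixels preserves the original shading."""
    img = gray.astype(float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            img[y, x] = new
            err = old - new                       # push the rounding error onto neighbours
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return img.astype(np.uint8)
```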

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional — we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to their 3D printer’s intrinsic bitmap printing capabilities. New software packages also still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his MRI scans to see how his skull is healing post-surgery and check on his brain to make sure his tumor isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, is incredibly empowering,” he says.

“Curiosity is one of the biggest drivers of innovation and change for the greater good, especially when it involves exploring questions across disciplines and institutions. The Wyss Institute is proud to be a space where this kind of cross-field innovation can flourish,” says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School (HMS) and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

 


Journal Reference:

  1. Ahmed Hosny, Steven J. Keating, Joshua D. Dilley, Beth Ripley, Tatiana Kelil, Steve Pieper, Dominik Kolb, Christoph Bader, Anne-Marie Pobloth, Molly Griffin, Reza Nezafat, Georg Duda, Ennio A. Chiocca, James R. Stone, James S. Michaelson, Mason N. Dean, Neri Oxman, James C. Weaver. From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets. 3D Printing and Additive Manufacturing, 2018; DOI: 10.1089/3dp.2017.0140

First 3D-printed human corneas


The first human corneas have been 3D-printed by scientists. It means the technique could be used in the future to ensure an unlimited supply of corneas.

Dr. Steve Swioklo and Professor Che Connon with a dyed cornea.
 

The first human corneas have been 3D printed by scientists at Newcastle University, UK.

It means the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.
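
As an illustration of that concentric-circle approach, the Python sketch below generates the kind of ring-by-ring toolpath the article describes. The radii, ring spacing and point density are invented numbers; the real printer's parameters and control format are not given in the article.

```python
import math

def concentric_circle_path(outer_radius_mm, ring_spacing_mm, points_per_ring=90):
    """Generate (x, y) waypoints tracing concentric rings, outermost first.
    All dimensions here are illustrative, not the study's actual settings."""
    waypoints = []
    radius = outer_radius_mm
    while radius > 0:
        for i in range(points_per_ring):
            angle = 2 * math.pi * i / points_per_ring
            waypoints.append((radius * math.cos(angle), radius * math.sin(angle)))
        radius -= ring_spacing_mm
    return waypoints

# e.g. a roughly cornea-sized 12 mm disc traced as rings 0.5 mm apart
path = concentric_circle_path(outer_radius_mm=6.0, ring_spacing_mm=0.5)
print(len(path), "waypoints")
```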

The stem cells were then shown to culture — or grow.

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel — a combination of alginate and collagen — keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”

The scientists, including first author and PhD student Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”

Journal Reference:

  1. Abigail Isaacson, Stephen Swioklo, Che J. Connon. 3D Bioprinting of a Corneal Stroma Equivalent. Experimental Eye Research, 2018.

What is AI? Everything you need to know about Artificial Intelligence


An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is artificial intelligence (AI)?

It depends who you ask.


Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

What are the uses for AI?

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

What are the different types of AI?

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

What can narrow AI do?

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

What can general AI do?

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans: a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or of reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but it doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called ‘superintelligence’ — which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” — was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

What is machine learning?

There is a broad body of research in AI, with many strands that feed into and complement one another.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

What are neural networks?

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have ‘learned’ how to carry out a particular task.
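
A minimal sketch of that training loop, written in Python with NumPy, is shown below: the weights of a tiny two-layer network are nudged repeatedly until its outputs approach the desired outputs on a toy problem (XOR). The layer sizes, learning rate and iteration count are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)          # forward pass through the layers
    output = sigmoid(hidden @ W2 + b2)
    error = y - output                     # how far we are from the desired output
    # backward pass: vary the weight attached to each connection in proportion
    # to its share of the error, so the output creeps towards the target
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ grad_out * 0.5;  b2 += grad_out.sum(axis=0) * 0.5
    W1 += X.T @ grad_hid * 0.5;       b1 += grad_hid.sum(axis=0) * 0.5

print(np.round(output, 2))   # should end up close to [[0], [1], [1], [0]]
```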

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Image: Nuance

Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
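
As a toy illustration of the evolutionary idea (selection, crossover and random mutation repeated over generations), the Python sketch below evolves a string towards a target. The target, population size and mutation rate are arbitrary, and real neuroevolution systems evolve network weights or architectures rather than strings.

```python
import random

TARGET = "HELLO WORLD"                      # an arbitrary 'optimal solution'
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)   # fittest candidates first
    if population[0] == TARGET:
        break
    parents = population[:20]                    # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(80)]              # crossover + mutation
    population = parents + children
print(f"generation {generation}: best candidate {population[0]!r}")
```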

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.

What is fueling the resurgence in AI?

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google’s TensorFlow Research Cloud. The second generation of these chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

What are the elements of machine learning?

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog, or written sentences that have footnotes to indicate whether the word ‘bass’ relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that’s just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
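
In code, this 'fit on labeled examples, then predict on new data' pattern looks roughly like the Python sketch below, which uses scikit-learn and a handful of invented sentences echoing the 'bass' example above. Real systems use vastly larger datasets and models, but the pattern is the same.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny, made-up labeled training set; each sentence is annotated with the
# sense of 'bass' it uses, mirroring the article's example.
sentences = [
    "the bass guitar carried the melody",
    "she tuned her bass before the concert",
    "he caught a huge bass in the lake",
    "the bass swam near the river bed",
]
labels = ["music", "music", "fish", "fish"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(sentences)   # words -> counted features
model = MultinomialNB().fit(features, labels)    # learn from the labeled examples

new = vectorizer.transform(["he pulled a bass from the lake"])
print(model.predict(new))                        # expected: ['fish']
```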

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively — although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size — Google’s Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people — most of whom were recruited through Amazon Mechanical Turk — who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example the way Google News groups together stories on similar topics each day.
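
A minimal Python example of that kind of unsupervised grouping, using scikit-learn's k-means on made-up fruit weights, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

# Weights in grams, invented for illustration. The algorithm is never told
# which numbers are cherries and which are melons; it only groups by similarity.
weights = np.array([[5], [7], [6], [8], [1200], [1350], [1100], [1450]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(weights)
print(kmeans.labels_)   # e.g. [0 0 0 0 1 1 1 1]: two clusters found by weight alone
```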

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
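
That trial-and-error loop can be sketched with tabular Q-learning, a classic reinforcement-learning algorithm and a much simpler relative of the Deep Q-network mentioned above. The toy environment below, a five-cell corridor with a reward at the right-hand end, is invented purely for illustration.

```python
import random

N_STATES, ACTIONS = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}   # value of each action
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # explore occasionally, otherwise exploit the best action learned so far
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        next_state = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should prefer 'right' in every non-goal state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```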

Many AI-related technologies are approaching, or have already reached, the ‘peak of inflated expectations’ in Gartner’s Hype Cycle, with the backlash-driven ‘trough of disillusionment’ lying in wait.

Image: Gartner / Annotations: ZDNet

Which are the leading firms in AI?


With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

Which AI services are available?

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amount of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don’t want to build their own machine learning models but instead want to consume AI-powered, on-demand services — such as voice, vision, and language recognition — Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella — and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Which of the major tech firms is winning the AI race?

Internally, each of the tech giants — and others such as Facebook — use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam — the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space — Google Assistant with its ability to answer a wide range of queries and Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana’s days are numbered, although Microsoft was quick to reject this.

Which countries are leading the way in AI?

It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. And China is pursuing a three-step national plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China’s favor.

How can I get started with AI?

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

What are recent landmarks in the development of AI?

There are too many to put together a comprehensive list, but some recent highlights include: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each — setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson’s win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.
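
A quick back-of-the-envelope calculation using the branching factors quoted above shows why exhaustive look-ahead is hopeless in Go:

```python
# Rough comparison of search-space growth: about 20 moves per turn in chess
# versus about 200 in Go, as quoted above.
for depth in (2, 4, 6):
    print(depth, f"chess ~ {20**depth:,}", f"go ~ {200**depth:,}")
# At just six moves deep, Go has 64,000,000,000,000 positions to consider,
# against 64,000,000 for chess: a factor of a million more.
```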

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played “completely random” games against itself, and then learnt from the results. At last year’s prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year a system trained by OpenAI defeated the world’s top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

How will AI change the world?

Robots and driverless cars


The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people’s image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft’s Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it’s likely this more intrusive use of AI technology — including AI that can recognize emotions — will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK’s National Health Service, where they will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Will AI kill us all?

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civilization”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft’s director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans. When people worry about “Terminator and the rise of the machines and so on,” his response is: “Utter nonsense, yes. At best, such discussions are decades away.”

Will an AI steal your job?

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future concern.


While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work”, saying he sees a “significant risk of technological unemployment over the next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it’s not a given that manual and robotic labor will continue to grow hand-in-hand.

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers.

Yet some of the easiest jobs to automate won’t even require robotics. At present there are millions of people working in administration, entering and copying data between systems, chasing and booking appointments for companies. As software gets better at automatically updating systems and flagging the information that’s important, so the need for administrators will fall.

As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is whether these new roles will be created rapidly enough to offer employment to those displaced, and whether the newly unemployed will have the necessary skills or temperament to fill these emerging roles.

Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not only that but they argue there will be a commercial imperative to not replace people outright, as an AI-assisted worker — think a human concierge with an AR headset that tells them exactly what a client wants before they ask for it — will be more productive or effective than an AI working on its own.

Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon’s work by 2053.

They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.


Should we be worried about artificial intelligence?


Not really, but we do need to think carefully about how to harness, and regulate, machine intelligence.

By now, most of us are used to the idea of rapid, even accelerating, technological change, particularly where information technologies are concerned. Indeed, as consumers, we helped the process along considerably. We love the convenience of mobile phones, and the lure of social-media platforms such as Facebook, even if, as we access these services, we find that bits and pieces of our digital selves become strewn all over the internet.

More and more tasks are being automated. Computers (under human supervision) already fly planes and sail ships. They are rapidly learning how to drive cars. Automated factories make many of our consumer goods. If you enter (or return to) Australia with an eligible e-passport, a computer will scan your face, compare it with your passport photo and, if the two match up, let you in. The “internet of things” beckons; there seems to be an “app” for everything. We are invited to make our homes smarter and our lives more convenient by using programs that interface with our home-based systems and appliances to switch the lights on and off, defrost the fridge and vacuum the carpet.


Clever though they are, these programs represent more-or-less familiar applications of computer-based processing power. With artificial intelligence, though, computers are poised to conquer skills that we like to think of as uniquely human: the ability to extract patterns and solve problems by analysing data, to plan and undertake tasks, to learn from our own experience and that of others, and to deploy complex forms of reasoning.

The quest for AI has engaged computer scientists for decades. Until very recently, though, AI’s initial promise had failed to materialise. The recent revival of the field came as a result of breakthrough advances in machine intelligence and, specifically, machine learning. It was found that, by using neural networks (interlinked processing points) to implement mathematically specified procedures or algorithms, machines could, through many iterations, progressively improve on their performance – in other words, they could learn. Machine intelligence in general and machine learning in particular are now the fastest-growing components of AI.

The achievements have been impressive. It is now 20 years since IBM’s Deep Blue program, using traditional computational approaches, beat Garry Kasparov, the world’s best chess player. With machine-learning techniques, computers have conquered even more complex games such as Go, a strategy-based game with an enormous range of possible moves. In 2016, Google’s AlphaGo program beat Lee Sedol, one of the world’s best Go players, in a five-game match.

Allan Dafoe, of Oxford University’s Future of Humanity Institute, says AI is already at the point where it can transform almost every industry, from agriculture to health and medicine, from energy systems to security and the military. With sufficient data, computing power and an appropriate algorithm, machines can be used to come up with solutions that are not only commercially useful but, in some cases, novel and even innovative.

Should we be worried? Commentators as diverse as the late Stephen Hawking and development economist Muhammad Yunus have issued dire warnings about machine intelligence. Unless we learn how to control AI, they argue, we risk finding ourselves replaced by machines far more intelligent than we are. The fear is that not only will humans be redundant in this brave new world, but the machines will find us completely useless and eliminate us.

The University of Canberra’s robot Ardie teaches tai chi to primary school pupils.

If these fears are realistic, then governments clearly need to impose some sort of ethical and values-based framework around this work. But are our regulatory and governance techniques up to the task? When, in Australia, we have struggled to regulate our financial services industry, how on earth will governments anywhere manage a field as rapidly changing and complex as machine intelligence?

Governments often seem to play catch-up when it comes to new technologies. Privacy legislation is enormously difficult to enforce when technologies effortlessly span national boundaries. It is difficult for legislators even to know what is going on in relation to new applications developed inside large companies such as Facebook. On the other hand, governments are hardly IT ingenues. The public sector provided the demand-pull that underwrote the success of many high-tech firms. The US government, in particular, has facilitated the growth of many companies in cybersecurity and other fields.

Governments have been in the information business for a very long time. As William the Conqueror knew when he ordered his Domesday Book to be compiled in 1085, you can’t tax people successfully unless you know something about them. Spending of tax-generated funds is impossible without good IT. In Australia, governments have developed and successfully managed very large databases in health and human services.

The governance of all this data is subject to privacy considerations, sometimes even at the expense of information-sharing between agencies. The evidence we have is that, while some people worry a lot about privacy, most of us are prepared to trust government with our information. In 2016, the Australian Bureau of Statistics announced that, for the first time, it would retain the names and addresses it collected during the course of the 2016 population census. It was widely expected (at least by the media) that many citizens would withhold their names and addresses when they returned their forms. In the end, very few did.

But these are government agencies operating outside the security field. The so-called “deep state” holds information about citizens that could readily be misused. Moreover, private-sector profit is driving much of the current AI surge (although, in many cases, it is the thrill of new knowledge and understanding, too). We must assume that criminals, too, are working out ways to exploit these possibilities.

If we want values such as equity, transparency, privacy and safety to govern what happens, old-fashioned regulation will not do the job. We need the developers of these technologies to co-produce the values we require, which implies some sort of effective partnership between the state and the private sector.

Could policy development be the basis for this kind of partnership? At the moment, machine intelligence works best on problems for which relevant data is available, and the objective is relatively easy to specify. As it develops, and particularly if governments are prepared to share their own data sets, machine intelligence could become important in addressing problems such as climate change, where we have data and an overall objective, but not much idea as to how to get there.

Machine intelligence might even help with problems where objectives are much harder to specify. What, for example, does good urban planning look like? We can crunch data from many different cities, and come up with an answer that could, in theory, go well beyond even the most advanced human-based modelling. When we don’t know what we don’t know, machines could be very useful indeed. Nor do we know, until we try, how useful the vast troves of information held by governments might be.

Perhaps, too, the jobs threat is not as extreme as we fear. Experience shows that humans are very good at finding things to do. And there might not be as many existing jobs at risk as we suppose. I am convinced, for example, that no robot could ever replace road workers – just think of the fantastical patterns of dug-up gravel and dirt they produce, the machines artfully arranged by the roadside or being driven, very slowly, up and down, even when all the signs are there, and there is absolutely no one around. How do we get a robot, even one capable of learning by itself, to do all that?


Now, you can hold a copy of your brain in the palm of your hand


New 3D printing technique enables faster, better, and cheaper models of patient-specific medical data for research and diagnosis

Summary:
Medical imaging technologies like MRI and CT scans produce high-resolution images as a series of ‘slices,’ making them an obvious complement to 3D printers, which also print in slices. However, the process of manually ‘thresholding’ medical scans to define objects to be printed is prohibitively expensive and time-consuming. A new method converts medical data into dithered bitmaps, allowing custom 3D-printed models of patient data to be printed in a fraction of the time.

This 3D-printed model of Steven Keating’s skull and brain clearly shows his brain tumor and other fine details thanks to the new data processing method pioneered by the study’s authors.
Credit: Wyss Institute at Harvard University

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group. Curious to see what his brain actually looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI and CT scans, but was frustrated that existing methods were prohibitively time-intensive, cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration — which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany — is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D. Ph.D., an Assistant Professor of Radiology at the University of Washington and clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from surrounding tissue and converted into surface meshes in order to be printed. This is achieved via either a very time-intensive process called “segmentation,” in which a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process in which a computer program quickly converts areas that contain grayscale pixels into either solid black or solid white pixels, based on a shade of gray that is chosen to be the threshold between black and white. However, medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often over- or underestimates the size of a feature of interest and washes out critical detail.
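As a rough illustration of what automatic thresholding does, the sketch below (hypothetical, not the authors’ code) binarizes a grayscale slice against a single cutoff value; every gradation between black and white is discarded, which is exactly why subtle borders get washed out.

```python
import numpy as np

def threshold_slice(slice_gray: np.ndarray, cutoff: float) -> np.ndarray:
    """Binarize one grayscale slice: pixels brighter than `cutoff`
    become solid white (1), everything else solid black (0)."""
    return (slice_gray > cutoff).astype(np.uint8)

# Example: a fake 4x4 slice with intensities between 0 and 1.
slice_gray = np.array([[0.1, 0.4, 0.6, 0.9],
                       [0.2, 0.5, 0.7, 0.8],
                       [0.3, 0.5, 0.6, 0.7],
                       [0.1, 0.2, 0.4, 0.9]])
mask = threshold_slice(slice_gray, cutoff=0.5)
print(mask)  # every shade of gray collapses to either 0 or 1
```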

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
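For readers curious about how a grayscale slice becomes a dithered bitmap, here is a short, hypothetical sketch of classic Floyd–Steinberg error-diffusion dithering, one common dithering algorithm (the paper’s own pipeline is not detailed here): each pixel is snapped to black or white, and the rounding error is spread to its neighbours so that the local density of black pixels still encodes the original shade.

```python
import numpy as np

def floyd_steinberg_dither(gray: np.ndarray) -> np.ndarray:
    """Convert a grayscale image (values in [0, 1]) to a black/white bitmap,
    diffusing each pixel's quantization error onto unvisited neighbours."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            # Standard Floyd-Steinberg error weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth left-to-right gradient becomes a bitmap whose black-pixel
# density tracks the original shade.
gradient = np.tile(np.linspace(0, 1, 16), (8, 1))
print(floyd_steinberg_dither(gradient))
```

In the printing context, each black or white pixel can then be assigned to one of two printing materials, which is how the subtle gradations of the original data survive into the physical model.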

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional — we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to their 3D printer’s intrinsic bitmap printing capabilities. New software packages also still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his MRI scans to see how his skull is healing post-surgery and check on his brain to make sure his tumor isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, is incredibly empowering,” he says.

“Curiosity is one of the biggest drivers of innovation and change for the greater good, especially when it involves exploring questions across disciplines and institutions. The Wyss Institute is proud to be a space where this kind of cross-field innovation can flourish,” says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School (HMS) and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).


Cometh the cyborg: Improved integration of living muscles into robots


Researchers have developed a novel method of growing whole muscles from hydrogel sheets impregnated with myoblasts. They then incorporated these muscles as antagonistic pairs into a biohybrid robot, which successfully performed manipulations of objects. This approach overcame two limitations of earlier work, the muscles’ short functional life and the weak force they could exert, paving the way for more advanced biohybrid robots.

Object manipulations performed by the biohybrid robots.
 

The new field of biohybrid robotics involves the use of living tissue within robots, rather than just metal and plastic. Muscle is one potential key component of such robots, providing the driving force for movement and function. However, in efforts to integrate living muscle into these machines, there have been problems with the force these muscles can exert and the amount of time before they start to shrink and lose their function.

Now, in a study reported in the journal Science Robotics, researchers at The University of Tokyo Institute of Industrial Science have overcome these problems by developing a new method that progresses from individual muscle precursor cells, to muscle-cell-filled sheets, and then to fully functioning skeletal muscle tissues. They incorporated these muscles into a biohybrid robot as antagonistic pairs mimicking those in the body to achieve remarkable robot movement and continued muscle function for over a week.

The team first constructed a robot skeleton on which to install the pair of functioning muscles. This included a rotatable joint, anchors where the muscles could attach, and electrodes to provide the stimulus to induce muscle contraction. For the living muscle part of the robot, rather than extract and use a muscle that had fully formed in the body, the team built one from scratch. For this, they used hydrogel sheets containing muscle precursor cells called myoblasts, holes to attach these sheets to the robot skeleton anchors, and stripes to encourage the muscle fibers to form in an aligned manner.

“Once we had built the muscles, we successfully used them as antagonistic pairs in the robot, with one contracting and the other expanding, just like in the body,” study corresponding author Shoji Takeuchi says. “The fact that they were exerting opposing forces on each other stopped them shrinking and deteriorating, like in previous studies.”
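The article does not describe the control electronics, but the antagonistic principle can be sketched in a few hypothetical lines: stimulate one muscle of the pair while the other rests, then swap, so the joint flexes and extends in turn.

```python
import time

def stimulate(muscle: str, duration_s: float) -> None:
    """Stand-in for driving one muscle's electrodes with a pulse train;
    the real stimulation hardware is not described in the article."""
    print(f"stimulating {muscle} for {duration_s:.1f} s")
    time.sleep(duration_s)

# Alternate stimulation across the antagonistic pair: one muscle contracts
# and flexes the joint while its partner is stretched, then the roles swap.
for cycle in range(3):
    stimulate("flexor muscle", 1.0)    # joint bends, e.g. toward an object
    stimulate("extensor muscle", 1.0)  # joint straightens again
```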

The team also tested the robots in different applications, including having one pick up and place a ring, and having two robots work in unison to pick up a square frame. The results showed that the robots could perform these tasks well, with activation of the muscles leading to flexing of a finger-like protuberance at the end of the robot by around 90°.

“Our findings show that, using this antagonistic arrangement of muscles, these robots can mimic the actions of a human finger,” lead author Yuya Morimoto says. “If we can combine more of these muscles into a single device, we should be able to reproduce the complex muscular interplay that allows hands, arms, and other parts of the body to function.”


For Some Hard-To-Find Tumors, Doctors See Promise In Artificial Intelligence


A team at Johns Hopkins Medicine in Baltimore is developing a tumor-detecting algorithm for pancreatic cancer. But first, they have to train computers to distinguish between organs.

 

Artificial intelligence, which is bringing us everything from self-driving cars to personalized ads on the web, is also invading the world of medicine.

In radiology, this technology is increasingly helping doctors in their jobs. A computer program that assists doctors in diagnosing strokes garnered approval from the U.S. Food and Drug Administration earlier this year. Another that helps doctors diagnose broken wrists in X-ray images won FDA approval on May 24.

One particularly intriguing line of research seeks to train computers to diagnose one of the deadliest of all malignancies, pancreatic cancer, when the disease is still readily treatable.

That’s the vision of Dr. Elliot Fishman, a professor of radiology at Johns Hopkins Medicine in Baltimore. Artificial intelligence and radiology seem like a natural match, since so much of the task of reading images involves pattern recognition. It’s a dream that’s been decades in the making, Fishman says.

“When I started in radiology, they said, ‘OK, don’t worry about reading the chest X-rays because the computers will read them,’ ” Fishman says. “That was 35 years ago!”

Elliot Fishman says the goal of developing an artificial intelligence program is to spot pancreatic tumors early.

 

Computers still can’t perform the seemingly simple task of reading a chest X-ray, despite sky-high expectations and more than a little hype around the role of artificial intelligence. Fishman is undaunted as he turns this technology on pancreatic cancer.

And that disease is a huge challenge. Only 7 percent of patients given a pancreatic cancer diagnosis are alive five years later. One reason the disease is so deadly is that doctors usually diagnose it when it’s too late to remove the tumors with surgery. Fishman and his team want to change that, by training computers to recognize pancreatic cancer early. Working with Johns Hopkins computer science students and faculty, they are helping develop a tumor-detecting algorithm that could be built into CT scanner software.

Americans get 40 million CT scans of the abdomen every year, for everything from car accidents to back pain. Imagine if a computer program with expert abilities could look for pancreas tumors in all those scans.

“That’s the ultimate opportunity — to be able to diagnose it before you have any symptoms and at a stage where it’s even maybe too subtle for a radiologist to be able to detect it,” says Dr. Karen Horton, chair of the Johns Hopkins radiology department and Fishman’s collaborator on the project.

Karen Horton is chair of the Johns Hopkins radiology department and is collaborating with Fishman on The Felix Project.

The challenge lies in teaching a computer to detect what a well-trained doctor knows to look for.

“Elliot and I are very subspecialized so we’re really, really good,” Horton says matter-of-factly. “We see more pancreatic cancer than probably anyone in the world.”

She says if the computer algorithm could capture their collective knowledge about how to diagnose pancreatic cancer and give that expertise to the typical doctor, “you could be, I would argue, better than us, but certainly as good as us — which would mean better than most of the practicing radiologists.”

Even a program perfectly attuned to finding patterns can’t reliably recognize cancer if it hasn’t been trained on reliable starting material.

When it comes to developing AI, “sometimes people say, ‘oh just take a bunch of cases and put them in a computer and the computer will figure out what to do’,” Fishman says. “That’s nonsensical.”

The Felix Project at Johns Hopkins, as the pancreas effort is called, pours a huge amount of human time, labor and intellect into training computers to recognize the difference between a normal pancreas and one with a tumor.

Of all the internal organs to deal with, “the pancreas is the hardest,” Fishman says. “The kidney looks like a kidney, the liver’s a big thing.” On the other hand, he says, “The pancreas is a very soft organ, it sits way in the middle and the shape varies from patient to patient. Just finding the pancreas, even for radiologists, is at times a challenge.”

Eva Zinreich, a medical researcher, digitally paints a CT scan to help train the computer program. The process can take almost four hours for a single scan.

 

Eva Zinreich, a retired oncologist, is up for that challenge. She is one of a team of medical experts who spend their days poring over CT scans and teaching the computer how to recognize the pancreas, other organs, and then, tumors within the pancreas.

She sits at a computer workstation, wielding a digital paintbrush.

“I’ll show you in 3D because that’s the fun stuff, ok?” she says as she sets about coloring in the aorta and other blood vessels on a scan.

Next, she colors the pancreas yellow.

“You see that shaded area?” she asks. “That’s the tumor,” and she proceeds to color it red.

Zinreich digitally paints the pancreas (yellow) and a tumor (red) in a CT scan.

 

It will take her almost four hours just to mark up this single scan. Four medical experts have been working full-time for well over a year on this project. They’ve done this painstaking work on scans from about 1,000 healthy people, and their tally of pancreatic cancer images is now approaching 1,000 as well, Fishman says.

They are feeding their annotated scans into the project’s computer program and gradually teaching it to recognize the same signs of a tumor that radiologists now pick out of the scans.
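The article does not spell out the Felix Project’s model architecture, but the overall recipe (annotated scans in, voxel-wise predictions out) looks roughly like the hypothetical PyTorch sketch below, which trains a deliberately tiny 3D convolutional network on random stand-in volumes and masks, just to show the shape of the training loop.

```python
import torch
import torch.nn as nn

# Stand-in data: 8 "scans" of size 32^3 with binary tumor/organ masks.
# In the real project these would be CT volumes and the experts' painted labels.
scans = torch.rand(8, 1, 32, 32, 32)
masks = (torch.rand(8, 1, 32, 32, 32) > 0.9).float()

# A deliberately tiny segmentation network (real systems use far larger,
# U-Net-style models).
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # per-voxel "tumor vs. not" objective

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(scans)
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Systems of this kind typically need thousands of expertly labelled scans, which is why the hand-painting effort described above matters so much.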

At another workstation in the lab, radiologist Linda Chu is trying to make the computer system even more adept than Elliot Fishman and Karen Horton are at recognizing pancreas cancers. She’s developing ways for the computer to look for patterns in the scan that the human eye can’t pick out. It’s interpreting textures in the images, rather than shapes and shading.

Chu says she’s making tentative progress. For example, she’s been training the software to identify subtle clues that distinguish between a benign cyst and cancer.
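The article does not say which texture measures Chu’s software computes; the hypothetical sketch below illustrates the general idea with a few simple first-order statistics (variance, skewness, entropy) over a region of interest, the kind of numeric “texture” a program can compare even when two patches look identical to the eye.

```python
import numpy as np

def texture_stats(region: np.ndarray, bins: int = 32) -> dict:
    """Simple first-order texture statistics for a grayscale region of interest."""
    values = region.ravel().astype(float)
    hist, _ = np.histogram(values, bins=bins, density=True)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": values.mean(),
        "variance": values.var(),
        "skewness": ((values - values.mean()) ** 3).mean()
                    / (values.std() ** 3 + 1e-12),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Two fake regions: a smooth (cyst-like) patch and a heterogeneous, noisy patch.
rng = np.random.default_rng(1)
smooth = rng.normal(0.5, 0.01, size=(16, 16))
noisy = rng.normal(0.5, 0.15, size=(16, 16))
print(texture_stats(smooth))
print(texture_stats(noisy))  # noticeably higher variance and entropy
```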

“We don’t truly understand what the computer is seeing, but clearly the computer is able to see something in the images that us humans cannot comprehend at this point,” Chu says.

But this is also part of the challenge of AI — if the computer highlights something that a human expert can’t see, and it’s not clear how it arrived at that conclusion, can you trust it?

“That’s what makes the research interesting!” Chu says.

Computer science students from the Johns Hopkins University main campus are key to developing the software that’s learning how to read and interpret the images that flow from Fishman’s lab.

The Lustgarten Foundation, which is focused on pancreatic cancer, has provided nearly $4 million over two years to fund the Felix Project. Horton says if it’s successful, all the information they collected on healthy people can be used as a starting point to study tumors elsewhere in the body.

“You could have Felix kidney, Felix liver, Felix lung, Felix heart,” she says. And they could all go together into the scanner software.

The project is named after the “Felix Felicis” good-luck potion, from the Harry Potter books. And, absent an effective magic spell, the laborious process is a reminder that success in bringing artificial intelligence to medicine will not be as simple as dumping piles of data into a computer and trusting that an algorithm will sort it all out.


First 3D-printed human corneas


The first human corneas have been 3D-printed by scientists. It means the technique could be used in the future to ensure an unlimited supply of corneas.


Dr. Steve Swioklo and Professor Che Connon with a dyed cornea.
 

The first human corneas have been 3D printed by scientists at Newcastle University, UK.

It means the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.
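As a toy illustration of what an “extruded in concentric circles” print path might look like, the hypothetical sketch below generates nested rings of nozzle coordinates for a cornea-sized disc; the actual printer control software used in the study is not described here, and the dimensions are assumptions chosen only for illustration.

```python
import numpy as np

def concentric_print_path(outer_radius_mm: float = 5.5,
                          ring_spacing_mm: float = 0.3,
                          points_per_ring: int = 120):
    """Yield (x, y) nozzle coordinates for nested circles, outermost first."""
    radius = outer_radius_mm
    while radius > 0:
        theta = np.linspace(0, 2 * np.pi, points_per_ring, endpoint=False)
        yield np.column_stack((radius * np.cos(theta), radius * np.sin(theta)))
        radius -= ring_spacing_mm

rings = list(concentric_print_path())
print(f"{len(rings)} rings; outer ring starts at {rings[0][0]}")
```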

The stem cells were then shown to culture — or grow.

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel — a combination of alginate and collagen — keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out of the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready-to-use bio-ink containing stem cells, allowing users to start printing tissues without having to worry about growing the cells separately.”

The scientists, including first author and PhD student Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, the researchers could use the data to rapidly print a cornea that matched its size and shape.
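A hypothetical sketch of how a scan-derived measurement might be used to rescale a template print path to an individual patient (the study’s actual workflow is not detailed in this article):

```python
def scale_path_to_patient(template_points, template_diameter_mm, patient_diameter_mm):
    """Uniformly scale a template (x, y) print path so its diameter matches
    the diameter measured from the patient's eye scan."""
    scale = patient_diameter_mm / template_diameter_mm
    return [(x * scale, y * scale) for x, y in template_points]

# Illustrative numbers only: a template drawn for an 11.5 mm cornea,
# rescaled for a patient whose scanned cornea measures 12.1 mm across.
template = [(5.75, 0.0), (0.0, 5.75), (-5.75, 0.0), (0.0, -5.75)]
print(scale_path_to_patient(template, 11.5, 12.1))
```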

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient’s eye and that this approach has potential to combat the world-wide shortage.”