Android Apps Can Access Your Data Even If You Refuse Permission: Study


Many Android apps have been caught engaging in malicious practices, and a recent study reveals exactly that. According to the study, thousands of Android apps can access your data regardless of the permissions you have granted them.

In other words, even when we deny apps access to our personal data, they can still find ways to obtain it.

The study suggests that Android apps gain unauthorized access to user data through covert channels and side channels.

For the uninitiated, a covert channel lets an app that has been denied a permission obtain the protected data from another app that holds it, a process made easier when both apps are built on the same SDK (software development kit).
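As a rough illustration of the idea (this is not Android code, and not the study's actual mechanism), a covert channel can be as simple as two processes agreeing on a shared, world-readable location:

```python
import os
import tempfile

# Hypothetical illustration of a covert channel: two "apps" built on the same
# SDK agree on a shared file path. App A holds the permission and caches the
# identifier there; app B, which was denied the permission, reads it back.
SHARED_PATH = os.path.join(tempfile.gettempdir(), "sdk_shared_cache.txt")

def app_a_with_permission(imei: str) -> None:
    # App A was granted the identifier permission and writes the value
    # where any other process can read it.
    with open(SHARED_PATH, "w") as f:
        f.write(imei)

def app_b_without_permission() -> str:
    # App B was denied the permission but recovers the identifier anyway.
    with open(SHARED_PATH) as f:
        return f.read()

app_a_with_permission("356938035643809")
print(app_b_without_permission())  # the "denied" app still sees the IMEI
```

The real-world cases in the study involved SDKs caching identifiers on shared external storage, but the principle is the same: the permission check is bypassed because a different, permitted app does the reading.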

Additionally, various side-channel vulnerabilities in the Android system can be exploited to extract crucial information, such as the MAC address of a user’s device, using C++ native code.

It is further suggested that many apps using SDKs built by Baidu and Salmonads use this covert channel communication path to access the user’s IMEI number without permission.

The unauthorized access also extends to the device’s actual GPS coordinates and other geolocation data; the Shutterfly app, for instance, has been found sending geolocation data back to its servers.

You can read the full study for a better understanding.

The researchers said they notified Google about the vulnerabilities in September last year. While Google did not comment on the study, Android Q is expected to bring features that curb such security issues.

Can AI Save the Internet from Fake News?

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

Scientists are scrambling for ways to combat deepfakes, even as others continue to refine the underlying techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black and white films to color, though it also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News

While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t the “fake news” that some invoke as a knee-jerk reaction to fact-based reporting they find unflattering. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
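To make the idea concrete, here is a toy sketch of how cues like hyperbole and sentence length could feed a score. The word list and weights are invented; the U-M team’s actual model was a trained classifier over many more features:

```python
import re

# Illustrative only: a toy scorer based on two cues mentioned above, namely
# hyperbolic word choice and short sentences. Higher score = more "fake-like."
HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def cue_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    hype = sum(w in HYPERBOLE for w in words) / max(len(words), 1)
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    # More hyperbole and shorter sentences both push the score upward.
    return hype * 10 + (1.0 / avg_len)

real = "The committee reviewed the budget proposal and postponed its vote."
fake = "Shocking! An overwhelming, extraordinary cover-up. Unbelievable!"
print(cue_score(fake) > cue_score(real))  # True
```

Even this crude rule separates the two example sentences; the research challenge is doing so reliably across millions of articles where the cues are subtler.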

AI Versus AI

While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi at the Allen Institute for Artificial Intelligence and the University of Washington in Seattle is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by GPT-2, a powerful new text-generation system built by OpenAI, a nonprofit research lab co-founded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by machine learning engineer Adam King: the user types in a short prompt, and GPT-2 bangs out a short story or poem based on that snippet of text.

No Silver AI Bullet

While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup building detectors that use elements of deep learning and natural language processing, among other techniques. He explained that Logically’s models analyze information using a three-pronged approach.

  • Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
  • Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
  • Content: The AI scans articles for hundreds of known indicators typically found in misinformation.
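Purely as an illustration of the three-pronged idea (Logically’s real models are not public, and the weights below are invented), the signals might be combined like this:

```python
# Hypothetical combination of the three signals described above; the weights
# and the normalization are invented for illustration.
def combined_risk(publisher_trust: float, network_anomaly: float,
                  content_flags: int, max_flags: int = 100) -> float:
    """Inputs normalized to [0, 1]; higher output = more likely misinformation."""
    content_score = min(content_flags / max_flags, 1.0)
    # Weighted average: no single signal decides on its own.
    return 0.3 * (1 - publisher_trust) + 0.3 * network_anomaly + 0.4 * content_score

score = combined_risk(publisher_trust=0.2, network_anomaly=0.9, content_flags=40)
print(round(score, 2))  # high enough to route to a human fact-checker
```

Even in this toy version, the final call on a borderline score would go to a person, which is exactly Williams’s point about keeping a human layer in the pipeline.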

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”

The company released a consumer app in India in February, just before that country’s election cycle, which proved a “great testing ground” for refining its technology ahead of the next app release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

This Radical New DNA Microscope Reimagines the Cellular World

It’s not every day that something from the 17th century gets radically reinvented.

But this month, a team from the Broad Institute at MIT and Harvard took aim at one of the most iconic pieces of lab ware—the microscope—and tore down the entire concept to recreate it from scratch.


I’m sure you have a mental picture of a scope: a stage to put samples on, a bunch of dials to focus the image, tunnel-like objectives with optical bits, an eyepiece to observe the final blown-up image. Got it? Now erase all that from your mind.

The new technique, dubbed DNA microscopy, uses only a pipette and some liquid reagents. Rather than monitoring photons, here the team relies on “bar codes” that chemically tag onto biomolecules. Like cell phone towers, the tags amplify, broadcasting their signals outward. An algorithm can then piece together the captured location data and transform those GPS-like digits into rainbow-colored photos.

The results are absolutely breathtaking. Cells shine like stars in a nebula, each pseudo-colored according to its genomic profile.

That’s the crux: DNA microscopy isn’t here to replace its classic optical big brother. Rather, it pulls the viewer down to a molecular scale, allowing you to see things from the “eyes of the cell,” said study author Dr. Joshua Weinstein, who worked under the supervision of Dr. Aviv Regev and Dr. Feng Zhang, both Howard Hughes Medical Institute investigators.

The tech decodes the natural location of molecules inside a cell—and how they interact—while simultaneously gathering information about gene expression in a two-in-one combo. It’s a bonanza for scientists struggling to tease apart individual differences in cells that physically look identical—say, immune cells that secrete different antibodies, or cancer cells in the early stages of malignant transformation.

“It’s a completely new way of visualizing biology,” said Weinstein in a press release. “This gives us another layer of biology that we haven’t been able to see.”


Almost all current microscopy techniques stem from the original all-in-one light microscope, first introduced in the 17th century. The core concept is light: the device guides photons refracted from the sample into a series of mirrors and optical lenses, resulting in an enlarged image of whatever you’re focusing on. It basically works like a DSLR camera with a very powerful zoom lens.

Scientists have long since moved past the visible light spectrum. Electron microscopy, for example, measures electrons that bounce off tissue to look at components inside the cell. Fluorescent microscopy—the “crown prince” of imaging—captures emitted light waves after stimulating tissue-bound fluorescent probes with lasers.

But here’s the thing: even as traditional microscopy is increasingly perfected and streamlined, it hits two hard limits. One is resolution. Light scatters, and there’s only so much we can do to focus the beam on one point to generate a clear image. This is why a light microscope can’t clearly see DNA molecules or proteins—it’s like using a smartphone to capture a single bright star. As a result, current microscopes generate satellite views of goings-on on the cellular “Earth.” Sharp, but from afar.

The second is genomic profiling. There’s been a revolution in mapping cellular diversity to uncover how visually similar cells can harbor dramatically different genomic profiles. A perfect example is the immune system. Immune cells that look similar can secrete vastly different antibodies, or generate different protein “arms” on their surface to grab onto specific types of cancer cells. Sequencing the cells’ DNA loses spatial information, making it hard to tease out whether location is important for designing treatments.

So far, microscopy has only focused on half of the picture. With DNA microscopy, the team hopes to grab the entire landscape.

“It will allow us to see how genetically unique cells—those comprising the immune system, cancer, or the gut, for instance—interact with one another and give rise to complex multicellular life,” explained Weinstein.

Inner Workings

To build their chemical microscope, the team began with a group of cultured cells.

They decoded the cells’ RNA molecules and reverse-transcribed the data to generate a complete library of expressed genes, called cDNA. Based on the cDNA sequences, they then synthesized a handful of tags randomly composed of the DNA letters A, T, C, and G, each about 30 letters long. When bathed over a new batch of cells, the tags tunneled inside and latched onto specific RNA molecules, labeling each with a unique barcode.

To amplify individual signals—each “point source”—the team used a common chemical reaction that rapidly amplifies DNA molecules, increasing their local concentration. DNA doesn’t like staying put inside the liquid interior of a cell, so the tags slowly begin to drift outwards, like a drop of dye expanding in a pool of water. Eventually, the DNA tags balloon into a molecular cloud that stems from their initial source on the biomolecule.

“Think of it as a radio tower broadcasting its signal,” the authors explained.

As DNA tag clouds from multiple points grow, they eventually collide. This triggers a second reaction: the two diffusing DNA molecules physically link up, spawning a unique DNA label that records their encounter. This clever hack lets researchers triangulate the locations of the original sources: the closer two points are to begin with, the more labels they form; the further apart, the fewer. The idea is much like cell phone companies tracking users’ locations with radio towers, measuring where signals from three or more towers intersect.
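A toy sketch of the triangulation idea, under the invented assumption that the number of hybrid labels two sources produce decays as exp(-d²) with the distance d between them:

```python
import math

# Toy model of the triangulation idea. Assume (purely for illustration) that
# the number of hybrid labels two DNA clouds form decays as exp(-d^2) with
# the distance d between their sources; the count can then be inverted to
# recover the distance.
def simulated_count(d: float) -> float:
    return math.exp(-d ** 2)

def distance_from_count(c: float) -> float:
    return math.sqrt(-math.log(c))

# Three "molecules" at known positions; check that the observed counts give
# back the true pairwise distances.
points = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (0.0, 2.0)}
for p, q in [("A", "B"), ("A", "C"), ("B", "C")]:
    (x1, y1), (x2, y2) = points[p], points[q]
    true_d = math.hypot(x2 - x1, y2 - y1)
    count = simulated_count(true_d)          # what sequencing would tally
    recovered = distance_from_count(count)   # what the algorithm infers
    print(p, q, round(true_d, 3), round(recovered, 3))
```

The real algorithm works the other way around: it only sees the label counts, and must solve for a set of positions consistent with all the pairwise distances at once, which is why it needs heavy computation over millions of reads.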

The cells are subsequently collected and their DNA extracted and sequenced, a process that takes roughly 30 hours. The data is then decoded using a custom algorithm that transforms the raw sequences into gorgeous images. This is seriously nuts: the algorithm has absolutely no clue what a “cell” is, yet it still identifies individual cells at their locations inside the sample.

“The first time I saw a DNA microscopy image, it blew me away,” said Regev.

As proof of concept, the team used the technique to track cells that encode either green or red fluorescent proteins. Without any previous knowledge of their distribution, the DNA microscope efficiently parsed the two cell types, although the final images were blurrier than those obtained with a light microscope. The tech could also reliably map the location of individual human cancer cells by tagging the cells’ internal RNA molecules, although it couldn’t parse out fine details inside the cell.

A Whole New World

Thanks to DNA’s “stickiness,” the technique can be used to label multiple types of biomolecules, allowing researchers to track the location of and identify antibodies or surface proteins on any given cell type, although the team will have to further prove its effectiveness in tissue samples.

Although the resolution of DNA microscopy is currently on par, if not slightly lower than, that of a light microscope, it provides an entirely new perspective of biomolecules inside cells.

“We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does,” said Weinstein.

DNA microscopy already does things a light microscope can’t. It can parse out cells that visually look similar but have different genetic profiles, for example, which comes in handy for identifying various types of cancer and immune cells. Another example is neuroscience: as our brains develop, various cells drastically alter their genetic profiles while migrating long distances—DNA microscopy could allow researchers to precisely track their movements, potentially uncovering new ways to boost neuroregeneration or plasticity.

Only time will tell if DNA microscopy will reveal “previously inaccessible layers of information,” as the team hopes. But they believe that their invention will spark new ideas and uses in the scientific community.

“It’s not just a new technique, it’s a way of doing things that we haven’t ever considered doing before,” said Regev.

7 Themes for Living an Exponential Life, Running an Exponential Business, and Making an Exponential Difference in the World

“There are people in the world who habitually say yes. And there are people who habitually say no. The people who say yes are rewarded by the adventures they get to go on, and the people who say no are rewarded by the safety they attain.”
Keith Johnstone courtesy of SU Faculty Dan Klein, “Getting Comfortable with Ambiguity”

I am most definitely a “yes” person. So recently, thanks to sponsorship from my employer Westpac, I attended the Singularity University Executive Programme. It was nothing like I anticipated and better than I ever could have imagined.


But to try to recreate the experience for this audience in a blog post would do it a disservice. There is no way to replicate the impeccably curated, fully immersive experience of six days on the NASA campus in Sunnyvale CA with the world’s most pre-eminent experts in their respective emerging fields: areas as diverse as genetic modification, exponential economics, augmented and virtual reality, artificial intelligence, entrepreneurialism, climate change, robotics… So instead I will try to do it justice by exploring seven of the key themes.

1) The pace of change will never be this slow again.

The programme is predicated on the concept of exponential growth and the fact that technological advancements of recent years have led to exponential outcomes across almost every part of our lives. As human beings, we experience this as disruptive and destabilising. But, if we can adapt, the opportunities for our own lives, the future of the institutions we work in and indeed the world, could be extraordinary.

“Humans are not equipped to process exponential growth. Our intuition is to use our assessment of how much change we’ve seen in the past to predict how much change we’ll see going forward. We tend to assume a constant rate of change. Exponential growth is both deceptive and astonishing, as the doubling of small numbers will inevitably lead to numbers that outpace our brain’s intuitive sense. This is meaningful because we as humans tend to overestimate what can be achieved in the short term, but vastly underestimate what can be achieved in the long term.”
—Introduction to Exponentials, V2.1 September 2016, Singularity University
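The deceptiveness the quote describes is easy to see in a few lines, comparing 30 constant-sized steps with 30 doublings (a generic illustration, not from the programme materials):

```python
# Thirty equal steps versus thirty doublings: the linear path reaches 30,
# while the exponential path passes a billion.
linear = sum(1 for _ in range(30))   # 30 steps of size 1
exponential = 2 ** 30                # doubling 30 times

print(linear)
print(exponential)
```

Our intuition extrapolates the linear path; the technologies the programme covers follow the other one.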

2) The convergence of technologies is the driver of exponential change.

The sheer array of subject areas we covered in a very short space of time was powerful. On the first two days alone we explored AR, VR, AI, robotics, synths, digital manufacturing, climate change, and futurism. I could start to see a clear correlation between the exponential pace of change and the convergence of these emerging technologies. Combine atoms with bits, artificial GENERAL intelligence with a humanoid body (to form a synth) and you suddenly see how bringing these seemingly different fields together might further accelerate returns.


“The technology you use today is the worst it will ever be. Technology is moving faster than you think and will never move this slowly again.”
—Rob Nail, CEO Singularity University, “Leading in the Age of Disruption”

3) Ethics in an exponential world has a long way to go.

However, exploring the convergence of technology and humans in particular raises fundamental ethical questions, questions that are far from resolved by either the expert community or humanity at large.

Take artificial intelligence. The current focus on narrow AI depends on significant recent advances in deep learning, which has been a major accelerator of exponential growth. At a basic level, this means enabling chatbots, virtual assistants, and the automation of vast swathes of manual human activity, which is controversial in itself when you consider the impact on the workforce.

But deep learning has become so advanced that once we give the algorithms their inputs and they start learning, the process involves so many variables and so much complexity that how the AI reaches its decisions becomes incomprehensible to the human brain. It is, in essence, a black box. Take the example from the United States of AI being used to decide sentencing when a person is found guilty of a crime. Given that the data inputs include demographics and where the defendant lives, how can we trust that the algorithm is not biased by design, absorbing all our inherent assumptions and perpetuating them through learned data?

The Theranos scandal is perhaps the most startling recent example of technology being a (literal) black box, although admittedly, in that case, there was clear deception on a massive scale by the CEO. But as technology converges with the fundamentals of what it is to be a human being, our health, and indeed the very makeup of our DNA, we have to prioritise ethical and human issues alongside the functional workings of the technology.

4) We now live in a world of complex challenges, not just complicated problems.

In an increasingly exponential world, there needs to be a distinct difference between our approach to tackling complex challenges as opposed to complicated problems.

Complicated problems are characterised by predictability: there is a “right” answer. They’re not necessarily easy to solve, but good practice and specialist expertise can usually find the answer, e.g., rebuilding Notre Dame, putting a man on the moon, or brain surgery. Complex challenges are typically unpredictable. There’s no single solution, only a direction. Patterns exist, but cause and effect are seen only in hindsight. There are many moving parts, and emergent practice is required to explore the challenge, e.g., culture change, behaviour change, or climate change.


Complicated problems require problem-solving, but this isn’t enough for complex challenges. The trouble is, starting by asking, “What is the problem we are trying to solve?” limits us to the current paradigm in which the problem already exists. Navigating complex challenges requires curiosity, experimentation, and a willingness to learn. You need to be willing to walk in others’ shoes to understand the challenge better and be able to stand back and see the system rather than the individual parts. Above all else, you need to be OK not having THE answer and instead explore a potential new paradigm enabling options to emerge.

5) Collaboration is too slow for an exponential world.

Did you know that when asked to vote anonymously, i.e., without colleagues knowing the result, 76% of people say they prefer working alone to working in a team? Why? Because collaboration brings a lot of friction: socialising concepts, navigating politics, achieving buy-in. In an exponential world, the need for speed will surpass the human desire for collaboration. The benefits of pace will supersede the benefits of everyone having an opinion and a say. That is not to say we should discount input from customers and colleagues. But the approach will look more like coordination than collaboration.

6) Behavioural transformation is as important as technological transformation.

“The next 20 years will be more transformative than the last 2,000 in terms of technological advances.”

—Ray Kurzweil, Co-Founder and Chancellor, Singularity University, “The Future Is Better Than You Think”

The challenge is that we, as human beings, are not transforming anywhere near as fast as the technologies around us. And businesses, governments, and institutions are failing to transform: indeed, 80 to 90 percent of all organisational transformations fail. This is because we treat transformation as a rational (technological) process when in reality it is a fundamentally political (behavioural) process. By proactively tackling the human process as a core part of your transformation programme, you can deliver exponential outcomes. Behavioural science and neuroscience provide a valuable scientific basis for such transformation strategies.

7) The future is already here—it’s just not very evenly distributed.

“The future is notoriously difficult to predict. Our shortcomings in seeing and extrapolating the exponential trends that will shape the coming century set the stage for us to experience perpetual futureshock.”
—Ray Kurzweil, Co-Founder and Chancellor, Singularity University, “The Future Is Better Than You Think”

For most of the planet, all of the above may sound like the realms of science fiction and a million miles away from their reality today. Indeed one could interpret the current political divergence across the planet as stark evidence of the uneven distribution of understanding and access to the future. It was clear in presentation after presentation that the next is already in the now. Part of our role as alumni from over 20 countries is to start to distribute an understanding of an exponential world and help enable democratisation of access to the benefits.


The good news? Whilst the sheer pace and complexity of technological and human advancement is mind-blowing, we already have everything we need to solve the world’s problems: critical, complex challenges like climate change, poverty, and the uneven distribution of prosperity. For me, I left the Executive Programme with an invigorated sense of purpose and a vast toolkit and network to tap into to help me make a dent in the world. I’m going to work on making Australia one of the world’s great innovation nations, and in doing so enabling innovative advances that will positively impact the whole of humanity.

Scientists Can Now Clone Brain Organoids. Here’s Why That Matters

An army of free-floating minibrain clones is heading your way!

No, that’s not the premise of a classic sci-fi brain-in-jars blockbuster. Rather, a team at Harvard has figured out a way to “clone” brain organoids, in the sense that every brain blob, regardless of what type of stem cell it originated from, developed nearly identically into tissues that mimic the fetal human cortex. No, they didn’t copy-paste one minibrain to make a bunch more—rather, the team found a recipe to reliably cook up hundreds of 3D brain models with very little variability in their cellular constituents.

If that sounds like a big ole “meh,” think of it like this. Minibrains, much like the organs they imitate, are delicate snowflakes, each with their own unique cellular makeup. Sure, no two human brains are exactly the same, even those of twins. However, our noggins do follow a relatively set pathway in initial development and end up with predictable structures, cell types, and connections.

Not so for minibrains. “Until now, each … makes its own special mix of cell types in a way that could not have been predicted at the outset,” explained study author Dr. Paola Arlotta. By compiling a cellular atlas from multiple minibrains, her team essentially found a blueprint that coaxes stem cells of different genetic backgrounds and sexes to mature into remarkably similar structures, at least in terms of cellular composition. Put another way, they farmed a batch of identical siblings; but rather than people, they’re free-floating brain blobs.

Rest assured, it’s not a new evil-scientist-brain-control scheme. For brain organoids to be useful in neuroscience—for example, understanding how autism or schizophrenia emerge—scientists need large amounts of near-identical test subjects. This is why twins are extremely valuable in research: all else (nearly) equal, they help isolate the effects of individual treatments or environmental changes.

“It is now possible to compare ‘control’ organoids with ones we create with mutations we know to be associated with the disease,” said Arlotta. “This will give us a lot more certainty about which differences are meaningful.”

How to Grow a Brain

The authors set out with a slightly different question in mind: is it possible to reliably grow a brain outside the womb?

You may be asking “why not?” After all, scientists have been cooking up brain organoids for half a decade. But although specific instructions are generally similar, the resulting minibrains—not so much.

Here’s the classic recipe. Scientists start with harvested stem cells, embed them into gel-like scaffolds, then carefully nurture them in a chemical soup tailored to push the cells to divide, migrate, and mature into tiny balls. These tissue nuggets are then transferred to a slowly spinning bioreactor—imagine a giant high-tech smoothie machine. The gentle whirling motion keeps the liquid nicely oxygenated. In six months, the grains of greyish tissue expand to a few millimeters, about one-tenth the width of your finger, packed full of interconnected brain cells.

This is the “throw all ingredients into a pot and see what happens” approach, more academically known as the unguided method. Because scientists don’t interfere with the brain balls’ growth, the protocol gives stem cells the most freedom to self-organize. It also allows stem cells to stochastically choose what type of brain cell—neurons, glia, immune cells—they eventually become. God may not play dice, but outside the womb, stem cells sure do.

That’s problematic. Depending on the initial stem cell population, the culture conditions, and even the particular batch, the resulting minibrains end up with highly unpredictable proportions of cell types arranged in unique ways. This makes controlled experimenting with minibrains extremely difficult, which nixes their whole raison d’être.

Similar to their human counterparts, unguided minibrains also follow the instructions laid out in their DNA. So what gives?

Our brains do not grow in isolation. Rather, they’re guided by a myriad of local chemical messengers, hormones, and mechanical forces in the womb, all of which are absent inside the spinning bioreactor. A more recent way to grow brain blobs is the guided method: scientists add a slew of “external patterning factors” at a very early stage of development, when stem cells first begin to choose their fate. These factors are basically biological fairy dust that push minibrain structures into a particular “pattern,” essentially sealing their fate.

Brain organoids grown this way are generally more consistent in cellular architecture once they mature. For example, many consistently develop the multi-striped pattern characteristic of the cerebral cortex—the outermost layer of the brain involved in sensation, reasoning, and other higher cognitive functions. But do they also resemble each other in their cellular makeup?

Reliable Brain Farming

The team first used both approaches to foster several dozen minibrains for half a year. They began with multiple types of stem cells from both male and female donors: induced pluripotent stem cells, which are skin cells returned to a youth-like stage, immortal human embryonic stem cells, and others.

They then carefully analyzed the resulting brainoids’ genetic makeup at multiple time points to track their growth. The team tapped an extremely powerful—and increasingly popular—tool called single-cell RNA sequencing, which provides invaluable insight into gene expression in every single cell.

In all, they parsed the genetic fingerprints of over 100,000 cells from 21 organoids, and matched those data to existing databases to tease out the cells’ identities. Finally, the team mapped out the distribution of each cell type in every analyzed organoid.
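The cell-typing step described above—matching each cell’s gene-expression fingerprint against reference databases, then tallying the cell-type makeup of each organoid—can be sketched in miniature. This is a hypothetical illustration, not the study’s actual pipeline: the marker genes, cell-type names, and simple overlap scoring are all illustrative stand-ins for real single-cell annotation tools.

```python
# Hypothetical sketch: assign each cell an identity by comparing its
# expressed genes against reference marker-gene sets, then tally the
# cell-type distribution across all profiled cells. Marker genes and
# cell-type labels are illustrative, not taken from the study.

from collections import Counter

# Reference marker genes per cell type (illustrative placeholders)
MARKERS = {
    "excitatory_neuron": {"SLC17A7", "NEUROD6"},
    "inhibitory_neuron": {"GAD1", "GAD2"},
    "astrocyte": {"GFAP", "AQP4"},
    "progenitor": {"SOX2", "PAX6"},
}

def classify_cell(expressed_genes):
    """Pick the cell type whose markers overlap most with the cell's
    expressed genes; return 'unknown' if nothing matches."""
    best_type, best_overlap = "unknown", 0
    for cell_type, markers in MARKERS.items():
        overlap = len(markers & expressed_genes)
        if overlap > best_overlap:
            best_type, best_overlap = cell_type, overlap
    return best_type

def cell_type_distribution(cells):
    """Fraction of each assigned cell type across all profiled cells."""
    counts = Counter(classify_cell(genes) for genes in cells)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

# Four toy "cells", each reduced to a set of expressed genes
cells = [
    {"SLC17A7", "NEUROD6", "ACTB"},
    {"GFAP", "AQP4", "ACTB"},
    {"SOX2", "PAX6"},
    {"SLC17A7", "NEUROD6"},
]
print(cell_type_distribution(cells))
```

Comparing these per-organoid distributions is what lets the researchers quantify how reproducible the guided recipes are; real analyses work from tens of thousands of genes per cell rather than handfuls of markers.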

Unsurprisingly, those grown with the unguided method had cellular profiles all over the place. But with the guided approach—particularly organoids dubbed the “dorsally patterned” type—95 percent were “virtually indistinguishable” in their cellular compendium. What’s more, these minibrains also followed incredibly similar development trajectories, in that different cell types popped up at near-identical time points. Even their cellular origin didn’t matter: organoids grown from different stem cells were consistent in their final cellular inhabitants.

Conclusion? The embryo isn’t required for our brain to produce all its cellular diversity; it’s totally possible to reliably grow brainoids outside the womb.

CRISPRed Minibrains?

The results are a huge boon for studying neurological diseases such as autism, epilepsy, and schizophrenia. Scientists believe that the root cause of these complex disorders lies somewhere in the tangled dance of fetal brain growth. So far, a clear cause has been elusive.

Using the guided “dorsally patterned” recipe, teams can now grow organoids from stem cells derived from patients, or genetically engineer pathological mutations to study their effects. Because the study proves minibrains made this way are remarkably similar, researchers will be able to nail down risk factors—and test potential treatments—without worrying about biological noise stemming from minibrain diversity.

Arlotta is already exploring possibilities. Using CRISPR, she plans to edit genes potentially linked to autism in stem cells, and grow them out as minibrains. Using the same technique, she can also make “control” organoids as a baseline for her experiments.

We can now “move much more swiftly towards concrete interventions, because they will direct us to the specific genetic features that give rise to the disease,” she said. “We will be able to ask far more precise questions about what goes wrong in the context of psychiatric illness.”

Oncologists are guardedly optimistic about AI. But will it drive real improvements in cancer care?

Over the course of my 25-year career as an oncologist, I’ve witnessed a lot of great ideas that improved the quality of cancer care delivery, along with many more that never materialized or left their promises unfulfilled. I keep wondering which of those camps artificial intelligence will fall into.

Hardly a day goes by when I don’t read of some new AI-based tool in development to advance the diagnosis or treatment of disease. Will AI be just another flash in the pan or will it drive real improvements in the quality and cost of care? And how are health care providers viewing this technological development in light of previous disappointments?

To get a better handle on the collective “take” on artificial intelligence for cancer care, my colleagues and I at Cardinal Health Specialty Solutions fielded a survey of more than 180 oncologists. The results, published in our June 2019 Oncology Insights report, reveal valuable insights into how oncologists view the potential opportunities to leverage AI in their practices.

Limited familiarity tinged with optimism. Although only 5% of responding oncologists describe themselves as being “very familiar” with the use of artificial intelligence and machine learning in health care, 36% said they believe it will have a significant impact in cancer care over the next few years, with a considerable number of practices likely to adopt artificial intelligence tools.

The survey also suggests a strong sense of optimism about the impact that AI tools may have on the future: 53% of respondents said that such tools are likely or very likely to improve the quality of care in three years or more, 58% said they are likely or very likely to drive operational efficiencies, and 57% said they are likely or very likely to improve clinical outcomes. In addition, 53% described themselves as “excited” to see what role AI will play in supporting care.

An age gap on costs. The oncologists surveyed were somewhat skeptical that AI will help reduce overall health care costs: 47% said it is likely or very likely to lower costs, while 23% said it was unlikely or very unlikely to do so. Younger providers were more optimistic on this issue than their older peers. Fifty-eight percent of those under age 40 indicated that AI was likely to lower costs versus 44% of providers over the age of 60. This may be a reflection of the disappointments that older physicians have experienced with other technologies that promised cost savings but failed to deliver.

Hopes that artificial intelligence will reduce administrative work. At a time when physicians spend nearly half of their practice time on electronic medical records, we were not surprised to see that, when asked about the most valuable benefit that AI could deliver to their practice, the top response (37%) was “automating administrative tasks so I can focus on patients.” This response aligns with research we conducted last year showing that oncologists spend extra hours each week completing work in the electronic medical record, and that the EMR is one of the top factors contributing to stress at work. Clearly there is pent-up demand for tools that can reduce the administrative burdens on providers. If AI can deliver effective solutions, it could be widely embraced.

Need for decision-support tools. Oncologists have historically been reluctant to relinquish control over patient treatment decisions to tools like clinical pathways that have been developed to improve outcomes and lower costs. Yet, with 63 new cancer drugs launched in the past five years and hundreds more in the pipeline, the complexity surrounding treatment decisions has reached a tipping point. Oncologists are beginning to acknowledge that more point-of-care decision support tools will be needed to deliver the best patient outcomes. This was reflected in our survey, with 26% of respondents saying that artificial intelligence could most improve cancer care by helping determine the best treatment paths for patients.

AI-based tools that enable providers to remain in control of care while also providing better insights may be among the first to be adopted, especially those that can help quickly identify patients at risk of poor outcomes so physicians can intervene sooner. But technology developers will need to be prepared with clinical data demonstrating the effectiveness of these tools — 27% of survey respondents said the lack of clinical evidence is one of their top concerns about AI.

Challenges to adoption. While optimistic about the potential benefits of AI tools, oncologists also acknowledge they don’t fully understand AI yet. Fifty-three percent of those surveyed described themselves as “not very familiar” with the use of AI in health care and, when asked to cite their top concerns, 27% indicated that they don’t know enough to implement it effectively. Provider education and training on AI-based tools will be keys to their successful uptake.

The main take-home lesson for health care technology developers from our survey is to develop and launch artificial intelligence tools thoughtfully, after taking steps to understand the needs of health care providers and investing time in their education and training. Without those steps, AI may become just another here-today, gone-tomorrow health care technology story.

Looking Forward to 5G? You’d Better Have an Unlimited Data Plan

5G Speed Tests in Australia

There’s been a lot of excitement about 5G technology coming to the iPhone next year, but despite all of the hype, we’ve been a bit skeptical about how much this is going to matter to the average iPhone user, especially in the short term. Put simply, do you really need your iPhone to be able to transfer data at gigabit speeds?

However, there’s also a dark side to 5G that many users probably haven’t thought about: being able to move data at faster speeds means that you’ll potentially use a lot more of it.

In a new report for CNET, Daniel Van Boom took a closer look at exactly what the impact of this will be, after conducting a series of tests in the parts of Sydney, Australia where Telstra currently offers solid 5G service.

Van Boom reported that 5G was fast — really fast. On his test device, an LG V50 ThinQ, he was able to download a 2.04 GB game in 54 seconds, compared to the almost six minutes it would have taken on his office broadband. A 1h 43m movie took 92 seconds. Speed tests showed download speeds pushing close to 500 Mbps, much faster than many smartphones can achieve even over the fastest Wi-Fi connections — for example, even Apple’s 2018 iPhones typically top out at around 400 Mbps over Wi-Fi.
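The reported figures hang together as a back-of-the-envelope check: dividing the game’s size by its download time gives the sustained throughput. This sketch assumes decimal gigabytes (1 GB = 8,000 megabits), the convention carriers typically use.

```python
def throughput_mbps(gigabytes, seconds):
    """Average download speed in megabits per second,
    assuming decimal units (1 GB = 8000 megabits)."""
    return gigabytes * 8000 / seconds

# The 2.04 GB game downloaded in 54 seconds:
print(round(throughput_mbps(2.04, 54)))  # ~302 Mbps on average
```

An average near 300 Mbps over a whole download is consistent with peak speeds pushing close to the 500 Mbps that the speed tests showed.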

The Problem

The problem, however, is that these kinds of speeds will suck through your data plan like a firehose. Van Boom noted that after only 25 minutes, during which he ran several speed tests and downloaded two movies from Netflix and a game, he got a text message telling him that he’d used 50% of his 20 GB data allotment. While 20 GB “isn’t an unusually small amount” in Australia, it’s actually quite high compared to the plans typically available in North America.

In fact, even so-called “unlimited” plans come with a catch: they won’t charge you for using more data, but you’ll typically find yourself slowed down to speeds below 1 Mbps once you exceed the cap, meaning you’d quickly lose the benefits of 5G in a pretty painful way. It would be like switching from a Bugatti to a bicycle.
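The same arithmetic shows just how fast a cap evaporates at 5G speeds, and how punishing post-cap throttling is. Again a rough sketch with decimal units; the 20 GB cap and 500 Mbps / 1 Mbps speeds are the figures from the article, used as round numbers.

```python
def seconds_to_use_cap(cap_gb, speed_mbps):
    """Time (seconds) to exhaust a data cap downloading flat-out
    at a given speed, assuming 1 GB = 8000 megabits."""
    return cap_gb * 8000 / speed_mbps

# A 20 GB cap at a sustained 500 Mbps lasts just over five minutes:
print(seconds_to_use_cap(20, 500) / 60)   # ~5.3 minutes

# Once throttled to 1 Mbps, a single 2 GB game takes hours:
print(seconds_to_use_cap(2, 1) / 3600)    # ~4.4 hours
```

In other words, at full tilt a 5G phone could burn through a typical monthly allotment before you finish a coffee, which is why the cap question matters more than the headline speed.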

While the problem isn’t entirely a new one (some of us can remember how LTE required higher data caps in much the same way), the speeds offered by full 5G coverage are orders of magnitude higher than not only LTE but also what most typical home broadband networks provide. That makes it even more tempting to download large videos and games while you’re on the go rather than waiting until you get home. To make matters worse, there’s a very good chance that most public Wi-Fi hotspots, while free of data caps, won’t offer even a fraction of the performance that 5G will.

Your Mileage May Vary

Not surprisingly, however, 5G is going to have its growing pains. While Van Boom’s stats showed how fast it can be under optimal conditions, he found quite a bit of variation, and the range of 5G is much more limited than that of typical LTE networks, so it won’t be hard to find yourself in an area of poor coverage simply by walking a single city block. Still, even the slowest speeds Van Boom encountered in his travels came in at around 50 Mbps, which beat out typical LTE speeds in Australia, and most real-world LTE performance on U.S. carriers as well.

Although we can hope that the faster speeds offered by 5G may prompt the carriers to offer higher data caps, we’re really not holding our breath, especially in light of the fact that AT&T is already talking about charging more for higher 5G speeds.

With 5G rollouts happening at a similarly slow pace in the U.S., there’s a good chance that coverage is going to be spotty even in major urban areas, and mostly non-existent outside of them — current estimates suggest that only 14 million Americans will have access to 5G at all by the end of 2020. So while by all reports Apple will be offering 5G in at least some of its iPhones next year, it’s unclear how much of a selling feature this will really be for most users.

iMessage Business Chat Goes Mainstream as Shopify Joins the Club

Shopify iMessage Business Chat

Last year, Apple released Business Chat, a new feature in iOS 11.3 and macOS 10.13.4 that really didn’t get the attention that it truly deserved. Perhaps it was the name “Business Chat” that made it sound stuffy and boring, perhaps Apple didn’t go far enough in explaining it, or maybe it was simply the fact that it didn’t support a lot of businesses out of the gate. Either way, however, it’s ended up being one of iMessage’s most underrated features.

Of course, since it was only available for a limited number of businesses in the U.S., most users could be forgiven for not even knowing that it was there, much less what to actually do with it, but the premise of the feature was actually a really good one — allow users to easily chat with customer service and tech support representatives from various companies using the same Messages app that’s already built into their iPhone, iPad, or Mac.

No need to install a specialized customer service app, no need to navigate through often-complicated web pages — users could just use the messaging tool that they’re already familiar with, and initiate a new business chat from any number of entry points such as Maps, Spotlight and Siri search results, or web-page links in Safari. Plus, businesses and users alike automatically inherited all of iMessage’s advanced features; Tapbacks, emojis, links, documents, media, and even animated GIFs can all be shared in a Business Chat just as easily as in any other iMessage conversation.

The Exclusive Business Chat Club

Despite this, however, Business Chat has still suffered from one big limitation: a lack of companies actually participating in the service. It initially launched with Apple itself on board (of course), as well as Discover, Hilton, Home Depot, Lowe’s, Marriott, Newegg, TD Ameritrade, Wells Fargo, and 1-800-Flowers. This later expanded to a couple of dozen additional companies, including major U.S. carriers like Sprint and T-Mobile, as well as additional hotel chains like Four Seasons and Fairmont, and even the clothing brand Burberry (although we’re fairly sure Angela Ahrendts’ erstwhile status as a Senior VP at Apple had something to do with that).

Nonetheless, even with the expanded list, Business Chat remained the sole domain of the Fortune 500 club; it was a great solution if you needed to talk with one of the companies that supported it, but smaller retailers, both online and offline, were simply left out entirely.

Enter Shopify

Presumably, implementing Business Chat requires some degree of effort on the part of those companies that want to join, which may be part of the reason why it’s been so limited thus far — and, technically, according to Apple it’s also still in “beta.”

However, Apple has now found a way to let smaller online retailers participate, including those outside of the U.S.: the well-known Canadian e-commerce provider Shopify. Rather than building out a simpler Business Chat infrastructure of its own at this point, Apple is partnering with Shopify to tie Business Chat into Shopify’s e-commerce systems, thereby making it available to over 820,000 merchants in one fell swoop.

According to Engadget, the service has been in limited testing with merchants like HODINKEE and State Bicycle for a while now, and as of today Apple and Shopify are ready to pull the trigger and let all Shopify sellers take advantage of using iMessage to communicate with their customers. Customers will be able to click a “Messages” button that will appear on Shopify store pages, and merchants will install the Shopify Ping app to communicate with customers and manage chats.


As an added bonus, customers of Shopify merchants will also gain the ability to use Apple Pay right in the middle of a conversation in situations where a store rep links the user to a particular item. So, for example, a customer could be looking for a clothing article in a specific size and colour, the rep could provide a direct link, and the customer could complete the purchase via Apple Pay right away, without even leaving the conversation.

While many Shopify businesses already use Facebook Messenger, the ability to use iMessage Business Chat will provide a new avenue with tighter iOS integration, as well as an option for the increasing number of users who are now eschewing Facebook in light of the seemingly continuous privacy scandals that keep plaguing the social media giant.

Most Compelling Reasons to Get a VPN for Your iPhone and iPad

Between hackers, government entities, and snoopy tech giants, there is no shortage of threats to your online security and privacy. Luckily, there’s a quick and easy solution that can help mitigate some of these threats: a VPN. Find the best VPNs here.

A VPN, or virtual private network, essentially allows you to browse the internet anonymously while keeping your data safe. And you can use them on pretty much all of your smart devices — including your iPhone and iPad. See best VPNs for iPad and iPhone.


Top Reasons to Use a VPN on an iOS Device

  1. It’ll keep your data safe. If there’s one thing to remember about using a VPN, it’s this: it’s a simple way to significantly boost your cybersecurity. A VPN encrypts all of your internet traffic between your device and the VPN server, meaning that sensitive data can’t be intercepted in transit.
  2. It allows you to unlock geo-restricted content. VPNs are, arguably, most commonly used to bypass geographic restrictions on content like Netflix shows. Essentially, you’ll be able to access content that’s restricted to a different region.
  3. It increases your privacy. If you don’t want your carrier or internet service provider (ISP) in your business, get a VPN. Because your browsing data is encrypted, ISPs and carriers won’t be able to know what you’re up to on your device.
  4. It lets you use public Wi-Fi safely. Public Wi-Fi is handy — but it’s also incredibly insecure. Normally, we recommend staying away from anything sensitive when you’re using public Wi-Fi. But a VPN encrypts your data, so you can have peace of mind on that unsecured network.
  5. It’ll help you become anonymous. A VPN will boost your internet privacy — but it’ll also make you more anonymous overall. Advertisers and government agencies alike will be much less likely to connect your browsing history to your identity.
  6. You can access secure content remotely. If you ever need to access a sensitive corporate server while on the road, a VPN will help you establish a secure connection. That way, you aren’t risking your business’s sensitive data.
  7. Get around internet censorship. Similar to geographic restrictions, certain regions around the world will block popular websites like Facebook or YouTube. A VPN can help you get around those “great firewalls” if you’re traveling internationally.
  8. If you torrent, it’ll help. We don’t advocate for doing anything illegal, but if you use torrenting software, a VPN will come in handy. Even users who only download legal torrents will often find their torrenting apps getting throttled.
  9. They often come with bonus features. While they aren’t the main draw, most VPNs also come with additional features like built-in firewalls and more.
  10. There’s no reason not to. If you choose a good-quality VPN with solid performance and a no-log policy, there’s really no downside to using a VPN. The most popular VPNs are also extremely easy to set up and use on your iOS device.

In Other Words, Get One

All of this is to say that using a VPN on your iPhone and iPad is kind of a no-brainer, especially if you value your online privacy and cybersecurity. Just do your research, avoid free VPNs, and get a good-quality VPN from a reputable company. Here are the top VPNs we recommend.

Apple Watch Saves 30-Year-Old’s Life While Training for a UK Marathon

Apple Watch Series 4 Ecg

A U.K. man who was recently training for a marathon has become the latest person to credit an Apple Watch with helping to save their life.

Phil Harrison, 30, took to Reddit to “say thanks to Apple.” In his original post, Harrison gave an account of how the Apple Watch nudged him to see a doctor while he was training for the Brighton Marathon — and how the wearable may very well have helped save his life.

The 30-year-old said that, although he was aware of some health issues in the past, he thought he was the healthiest he had ever been while he was training.

But, in early April, Harrison said he started getting heart palpitations that didn’t appear to stop. He happened to have an Apple Watch Series 4, which sports a consumer-facing electrocardiogram (ECG) sensor.

When he opened up the ECG app and ran a test, the app sent him a warning that it detected signs of atrial fibrillation and advised him to seek immediate medical attention.

Harrison added that a “series of events” led him to go to accident and emergency (A&E) services, where he was told that he should not run in the fast-approaching marathon. Now, two and a half months later, Harrison said he is about to undergo open heart valve repair surgery in early July.

While he made sure to thank the doctors and nurses who were integral to his treatment, he added that his Apple Watch ultimately motivated him to seek medical attention.

“But I do know that without my Series 4 watch just giving me a little kick to get to A&E I may not be here today,” Harrison wrote on Reddit. “I would have done everything to run that marathon which most likely would have killed me.”

The ECG feature first launched on the Apple Watch Series 4, but it was initially only released in the U.S. Since then, Apple has been continually rolling it out to new countries; the feature had been available in the U.K. for only a week when Harrison said he used it.

In addition to the Reddit post, Harrison said he also sent an email to Tim Cook and Craig Federighi.

You can view a full list of where the ECG feature is available on Apple’s website.
