How Soon Will You Be Working From Home?


Work today is increasingly tied to routine rather than a physical space. Unsurprisingly, a growing number of companies in the United States allow their employees to work beyond a specifically designated space.

The number of telecommuters in 2015 had more than doubled from a decade earlier, a growth rate about 10 times greater than that of the traditional workforce over the same period, according to a 2017 report by FlexJobs.com, a job search site specializing in remote, part-time, freelance and flexible-hour positions.

Telecommuting might not just be a company perk in the next decade.

Experts, however, are quick to point out that telecommuting’s growth faces numerous challenges. Cultural barriers in traditional companies, technology reliability, labor laws, tax policies and the public’s own perception of telecommuting will all need to adjust to a more mobile workforce, labor analysts say.

Remote Work Still Considered a Perk

The FlexJobs.com report said the industries offering the greatest possibilities to work remotely included technology and mathematics, the military, art and design, entertainment, sports, media, personal care and financial services. Experts cite a couple of reasons why telecommuting is becoming more common in some industries: more reliable internet connectivity and new management practices dictated by millennials and how they work.

Among the advantages companies cite for remote work are cost savings from needing less office space, more focused and productive employees, and better employee retention. In 2015, U.S. employers saved up to $44 billion thanks to the almost 4 million existing telecommuters working half time or more, the FlexJobs.com report said.

“I’ve been able to see firsthand the increase in productivity by incorporating telecommuting into several companies,” says Leonardo Zizzamia, entrepreneur in San Francisco and co-founder of a productivity and collaboration tool called Plan. “With housing costs in Silicon Valley continuing to rise, telecommuting is the financially savvy way to work for your favorite company.”

Yet remote work is still considered a perk in the majority of workplaces. The greatest proportion of telecommuting positions falls under management, according to the FlexJobs.com report. Managers, however, have struggled with overseeing remote workers.

“It used to be that everybody was in (an) office at set hours and if you were a manager you could look up and see your employees working or not,” says Susan Lund, partner at McKinsey & Company, a global management consulting firm. “Now it’s different. More companies are moving toward a more flexible work space environment and for managers it’s much more challenging because you need to know what each person is working on and whether they are reaching their goals.”

Workers in the beginning and early stages of their careers will be key to transforming today’s workplace to be more friendly to telecommuting, analysts say.

“Millennials have been working over computers and the internet since they were in early junior high and even younger,” says Brie Reynolds, a senior career specialist at FlexJobs.com. “For them it’s natural and when they come into the workforce they are really pushing it into the mainstream. They are letting employers know that remote work is something that they value, that it’s a way that they would want to work and that they don’t see it as a perk, but as another option for working.”

Working From Home as the Cross-Border Threat

According to a 2017 LinkedIn report, the number of positions filled in the United States in October was more than 24 percent higher compared to the previous year. The oil and energy sector, manufacturing and industrial, aerospace, automotive and transportation sectors reported the biggest growth in jobs, the report said.

“If you look at the data you will find that there are significant talent gaps in many industries,” says Tolu Olubunmi, entrepreneur and co-chairperson of Mobile Minds, an initiative advancing cross-border remote working as an alternative to physical migration. “Those jobs are going unfilled for a number of reasons, and one of them is not actually having available the skills that are needed to the organizations.”

Telecommuting options may help fill empty positions in the U.S., job analysts say.

“When you are hiring remotely it opens you up to a much wider pool of talent than if you are stuck in one geographic area and you are only hiring people who can physically get to your office on a daily basis,” says Reynolds, the FlexJobs.com career specialist.

Technology also can help recruit workers, potentially attracting qualified candidates from other parts of the world, as telecommuting appears to be popular on a global scale as well. A 2012 Reuters/Ipsos report showed that about one in five workers telecommutes, especially in Latin America, Asia and the Middle East.

“Cross-border work allow companies to tap into a greater number of talent and diversity of talent that can help them meet their needs,” Olubunmi says. “It reduces brain drain in certain communities that are seeing their best and brightest leaving and that actually benefits those communities. They have the skill and the talent working elsewhere, but money and services are still being distributed within the community.”

Experts have mixed opinions over the continued increase in telecommuting positions. Some are convinced that technological advancement will allow people to better simulate face-to-face interaction, thus encouraging working remotely. Others say future technology could play a counter-intuitive role of bringing people together in an actual office space.


“One of the big advantages of telecommuting was avoiding congestion,” says Adam Millsap, senior affiliated scholar at the Mercatus Center, a research center at George Mason University focusing on economics. “But if autonomous vehicles catch on, that itself could eliminate congestion and encourage people to go into the office again even more. They will cut down commuting time, there will be less accidents which tend to hold up traffic, and the cars will be able to drive much closer together at higher speed, because they will all be communicating with one another, so you could fit more on the road.”

Regardless, opening up a world to U.S. companies should not scare American workers, experts say. “Telecommuting isn’t about taking jobs away from native-born citizens,” Olubunmi says. “This is about improving the economy by letting businesses have a broader pool of talent to pick from, in order to be able to achieve their goals and have better economic growth in general for all.”

At the same time, one shouldn’t assume that foreign workers will be willing to take on American jobs just because they become more accessible.

“If an American firm comes to India and says they will give relatively higher wages for people to work in a call center, those workers might be willing to stay awake through the night,” Millsap says. “But if I am Apple and I want to hire a new software engineer, there is a good chance that a software engineer in Japan, for instance, already has a pretty good salary and is not going to be willing to take on a job that requires him to have meetings at midnight in his own country.”

Experts agree that people in the labor market need to be more agile at acquiring new skills later in life, including learning how to work remotely. Remote work, they say, should not necessarily be considered a perk, but rather a way of helping employees better manage work-life balance.

“When people are given the flexibility to live and work where they please, it really does increase productivity and allow a diversity of people to engage in the workforce,” Olubunmi says. “Because if in the 19th century, work was about where you went, now work is about what you do, not from where you do it.”


Smartphone Detox: How To Power Down In A Wired World


For many people, checking their smartphone can be addicting.

If the Russian psychologist Ivan Pavlov were alive today, what would he say about smartphones? He might not think of them as phones at all, but instead as remarkable tools for understanding how technology can manipulate our brains.

Pavlov’s own findings — from experiments he did more than a century ago, involving food, buzzers and slobbering dogs — offer key insights into why our phones have become almost an extension of our bodies, modern researchers say. The findings also provide clues to how we can break our dependence.

Pavlov originally set out to study canine digestion. But one day, he noticed something peculiar while feeding his dogs. If he played a sound — like a metronome or buzzer — before mealtimes, eventually the sound started to have a special meaning for the animals. It meant food was coming! The dogs actually started drooling when they heard the sound, even if no food was around.

Hearing the buzzer had become pleasurable.

That’s exactly what’s happening with smartphones, says David Greenfield, a psychologist and assistant clinical professor of psychiatry at the University of Connecticut.

“That ping is telling us there is some type of reward there, waiting for us,” Greenfield says.

Over time, that ping can become more powerful than the reward itself. Research on animals suggests dopamine levels in the brain can be twice as high when you anticipate the reward as when you actually receive it.

In other words, just hearing the notification can be more pleasurable than the text, email or tweet. “Smartphone notifications have turned us all into Pavlov’s dogs,” Greenfield says.

Signs you might need to cut back

The average adult checks their phone 50 to 300 times each day, Greenfield says. And smartphones use psychological tricks that encourage our continued high usage — some of the same tricks slot machines use to hook gamblers.

“For example, every time you look at your phone, you don’t know what you’re going to find — how relevant or desirable a message is going to be,” Greenfield says. “So you keep checking it over and over again because every once in a while, there’s something good there.” (This is called a variable ratio schedule of reinforcement. Animal studies suggest it makes dopamine skyrocket in the brain’s reward circuitry and is possibly one reason people keep playing slot machines.)
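To get a feel for what a variable ratio schedule looks like in practice, here is a minimal Python sketch of the idea. The 10 percent reward chance and the number of checks are arbitrary illustrative values, not figures from Greenfield’s research; the point is simply that rewards arrive after an unpredictable number of checks.

```python
import random

def simulate_checks(n_checks: int, reward_probability: float = 0.1, seed: int = 42):
    """Simulate phone checks on a variable-ratio schedule.

    Each check pays off with a fixed, small probability, so the number of
    checks between successive rewards is unpredictable (illustrative only).
    """
    rng = random.Random(seed)
    gaps = []            # number of checks between successive rewards
    since_last = 0
    for _ in range(n_checks):
        since_last += 1
        if rng.random() < reward_probability:
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = simulate_checks(1000)
print("rewards received:", len(gaps))
print("average checks per reward:", sum(gaps) / len(gaps))
print("shortest/longest wait:", min(gaps), max(gaps))
```

Run it with different probabilities and the gaps between rewards stay wildly uneven — which is the unpredictability that keeps the checking going.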

A growing number of doctors and psychologists are concerned about our relationship with the phone. There’s a debate about what to call the problem. Some say “disorder” or “problematic behavior.” Others think over-reliance on a smartphone can become a behavioral addiction, like gambling.

“It’s a spectrum disorder,” says Dr. Anna Lembke, a psychiatrist at Stanford University, who studies addiction. “There are mild, moderate and extreme forms.” And for many people, there’s no problem at all.

In this way, the phone is kind of like alcohol, Lembke says. Moderate alcohol consumption can be beneficial, for some people.

“You can make an argument that a temperate amount of smartphone or screen use might be good for people,” Lembke says. “So I’m not saying, ‘Everybody get rid of their smartphones because they’re completely addictive.’ But instead, let’s be very thoughtful about how we’re using these devices, because we can use them in pathological ways.”

Signs you might be experiencing problematic use, Lembke says, include these:

  • Interacting with the device keeps you up late or otherwise interferes with your sleep.
  • It reduces the time you have to be with friends or family.
  • It interferes with your ability to finish work or homework.
  • It causes you to be rude, even subconsciously. “For instance,” Lembke asks, “are you in the middle of having a conversation with someone and just dropping down and scrolling through your phone?” That’s a bad sign.
  • It’s squelching your creativity. “I think that’s really what people don’t realize with their smartphone usage,” Lembke says. “It can really deprive you of a kind of seamless flow of creative thought that generates from your own brain.”

Consider a digital detox one day a week

Tiffany Shlain, a San Francisco Bay Area filmmaker, and her family power down all their devices every Friday evening, for a 24-hour period.

“It’s something we look forward to each week,” Shlain says. She and her husband, Ken Goldberg, a professor in the field of robotics at the University of California, Berkeley, are very tech savvy. But they find they need a break.

“During the week, [we’re] like an emotional pinball machine responding to all the external forces,” Shlain says. The buzzes, beeps, emails, alerts and notifications never end.

Shutting the smartphones off shuts out all those distractions.

“You’re making your time sacred again — reclaiming it,” Shlain says. “You stop all the noise.”

More Tips To Curb Smartphone Use

Stop being Pavlov’s dog — turn off notifications. “This won’t necessarily keep you from checking the phone compulsively,” says University of Connecticut psychologist David Greenfield. “But it reduces the likelihood because the notifications are letting you know there may be a reward waiting for you.”

Use a wrist watch or alarm clock to check the time at night: “We strongly suggest not sleeping near your phone or looking at the phone one hour before you go to bed,” Greenfield says. “There is good evidence that the phone can interfere with your sleep patterns.”

Exclude the phone from mealtime. Don’t even set it on the table. A study in 2015 found that people’s heart rates and blood pressure spiked when they heard their phones ringing but couldn’t answer them.

When Shlain and her family started the digital break about nine years ago, which they call “Tech Shabbat,” Saturdays suddenly felt very different. They’re not religious, she says, but they love the Jewish Sabbath ritual of setting aside a day for rest or restoration.

“The days felt much longer, and we generally feel much more relaxed,” says Goldberg.

Their daughter, Odessa Shlain Goldberg, a ninth-grader, says the unplugging takes some of the pressure off.

“There’s no FOMO — fear of missing out — or seeing what my friends are doing,” Odessa says. “It’s a family day.”

The teen says the perspective she gains from the digital power-down carries over into the rest of the week. For instance, she thinks differently about social media. She realizes the social media feeds often make other people’s lives appear more exciting or glamorous.

“If you’re sitting at home scrolling, you’re not having that glamorous experience,” she says. “So it feels a little discouraging.”

Smartphones can compound teen angst, but there’s a sweet spot

Odessa is definitely not alone in those observations. Social media can amplify the anxieties that come along with adolescence.

A recent study of high school students, published in the journal Emotion, found that too much time spent on digital devices is linked to lower self-esteem and a decrease in well-being. The survey asked teens how much time they spent — outside of schoolwork — on activities such as texting, gaming, searching the internet or using social media.

“We found teens who spend five or more hours a day online are twice as likely to say they’re unhappy,” compared to those who spend less time plugged in, explains the study’s author, Jean Twenge, a professor of psychology at San Diego State University.

Twenge’s research suggests digital abstinence is not good either. Teens who have no access to screens or social media may feel shut out, she says.

But there may be a sweet spot. According to the survey data, “the teens who spend a little time — an hour or two hours a day [on their devices] — those are actually the happiest teens,” Twenge says.

At its best, technology connects us to new ideas and people. It makes the world smaller and opens up possibilities.

“The ability to connect with people across the world is one of the great benefits,” Odessa believes. She says she’s made some of her friends “purely online.”

“We need to wrestle with it more,” her mother says.

Technology is not going away. Our lives are becoming more wired all the time. But Shlain and Odessa say taking a weekly break helps their whole family find a happy medium in dealing with their phones.

To automate is human


It’s not tools, culture or communication that make humans unique but our knack for offloading dirty work onto machines

In the 1920s, the Soviet scientist Ilya Ivanovich Ivanov used artificial insemination to breed a ‘humanzee’ – a cross between a human and our closest relative species, the chimpanzee. The attempt horrified his contemporaries, much as it would modern readers. Given the moral quandaries a humanzee might create, we can be thankful that Ivanov failed: when the winds of Soviet scientific preferences changed, he was arrested and exiled. But Ivanov’s endeavour points to the persistent, post-Darwinian fear and fascination with the question of whether humans are a creature apart, above all other life, or whether we’re just one more animal in a mad scientist’s menagerie.

Humans have searched and repeatedly failed to rescue ourselves from this disquieting commonality. Numerous dividers between humans and beasts have been proposed: thought and language, tools and rules, culture, imitation, empathy, morality, hate, even a grasp of ‘folk’ physics. But they’ve all failed, in one way or another. I’d like to put forward a new contender – strangely, the very same tendency that elicits the most dread and excitement among political and economic commentators today.

First, though, to our fall from grace. We lost our exclusive position in the animal kingdom, not because we overestimated ourselves, but because we underestimated our cousins. This new grasp of the capabilities of our fellow creatures is as much a return to a pre-Industrial view as it is a scientific discovery. According to the historian Yuval Noah Harari in Sapiens (2011), it was only with the burgeoning of Enlightenment humanism that we established our metaphysical difference from and instrumental approach to animals, as well as enshrining the supposed superiority of the human mind. ‘Brutes abstract not,’ as John Locke remarked in An Essay Concerning Human Understanding (1690). By contrast, religious perspectives in the Middle Ages rendered us a sort of ensouled animal. We were touched by the divine, bearers of the breath of life – but distinctly Earthly, made from dust, metaphysically ‘animals plus’.

Like a snake eating its own tail, it was the later move towards rationalism – built on a belief in man’s transcendence – that eventually toppled our hubristic sensibilities. With the advent of Charles Darwin’s theories, later confirmed through geology, palaeontology and genetics, humans struggled mightily and vainly to erect a scientific blockade between beasts and ourselves. We believed we occupied a glorious perch as a thinking thing. But over time that rarefied category became more and more crowded. Whichever intellectual shibboleth we decide is the ability that sets us apart, it’s inevitably found to be shared with the chimp. One can resent this for the same reason we might baulk at Ivanov’s experiments: they bring the nature of the beast a bit too close.

The chimp is the opener in a relay race that repeats itself time and again in the study of animal behaviour. Scientists concoct a new, intelligent task for the chimps, and they do it – before passing down the baton to other primates, who usually also manage it. Then they hand it on to parrots and crows, rats and pigeons, an octopus or two, even ducklings and bees. Over and over again, the newly minted, human-defining behaviour crops up in the same club of reasonably smart, lab-ready species. We become a bit less unique and a bit more animal with each finding.

Some of these proposed watersheds, such as tool-use, are old suggestions, stretching back to how the Victorians grappled with the consequences of Darwinism. Others, such as imitation or empathy, are still denied to non-humans by certain modern psychologists. In Are We Smart Enough to Know How Smart Animals Are? (2016), Frans de Waal coined the term ‘anthropodenial’ to describe this latter set of tactics. Faced with a potential example of culture or empathy in animals, the injunction against anthropomorphism gets trotted out to assert that such labels are inappropriate. Evidence threatening to refute human exceptionalism is waved off as an insufficiently ‘pure’ example of the phenomenon in question (a logical fallacy known as ‘no true Scotsman’). Yet nearly all these traits have run the relay from the ape down – a process de Waal calls ‘cognitive ripples’, as researchers find a particular species characteristic that breaks down the barriers to finding it somewhere else.

Tool-use is the most famous, and most thoroughly defeated, example. It transpires that chimps use all manner of tools, from sticks to extract termites from their mounds to stones as a hammer and anvil to smash open nuts. The many delightful antics of New Caledonian crows have received particular attention in recent years. Among other things, they can use multiple tools in sequence when the reward is far away but the nearest tool is too short and the larger tools are out of reach. They use the short tool to reach the medium one, then that one to reach the long one, and finally the long tool to reach the reward – all without trial and error.

But it’s the Goffin’s cockatoo that has achieved the coup de grâce for the animals. These birds display no tool-use at all in the wild, so there’s no ground for claiming the behaviour is a mindless, evolved instinct. Yet in captivity, a cockatoo named Figaro, raised by researchers at the Veterinary University of Vienna, invented a method of using a long splinter of wood to reach treats placed outside his enclosure – and proceeded to teach the behaviour to his flock-mates.

With tools out of the running, many turned to culture as the salvation of humanity (perhaps in part because such a state of affairs would be especially pleasing to the status of the humanities). It took longer, but animals eventually caught up. Those chimpanzees who use stones as hammer and anvil? Turns out they hand on this ability from generation to generation. Babies, born without this behaviour, observe their mothers smashing away at the nuts and, while still young, begin ineptly copying those movements. They learn the nut-smashing culture and hand it down to their offspring. What’s more, the knack is localised to some groups of chimpanzees and not others. Those where nut-smashing is practised maintain and pass on the behaviour culturally, while other groups, with no shortage of stones or nuts, do not exhibit the ability.

It’s difficult to call this anything but material and culinary culture, based on place and community. Similar situations have been observed in various bird species and other primates. Even homing pigeons demonstrate a culture of favoured routes that is passed from bird to bird: eventually none of the flock had flown with the original birds, yet they still used the same flight path.

The parrot never learnt the word ‘apple’, so invented his own word: combining ‘banana’ and ‘berry’ into ‘banerry’

Language is an interesting one. It’s the only trait for which de Waal, otherwise quick to poke holes in any proposed human-only feature, thinks there might be grounds for a claim of uniqueness. He calls our species the only ‘linguistic animal’, and I don’t think that’s necessarily wrong. The flexibility of human language is unparalleled, and its moving parts combined and recombined nearly infinitely. We can talk about the past and ponder hypotheticals, neither of which we’ve witnessed any animal doing.

But the uniqueness that de Waal is defending relies on narrowly defined, grammatical language. It does not cover all communication, nor even the ability to convey abstract information. Animals communicate all the time, of course – with vocalisations in some cases (such as most birds), facial signals (common in many primates), and even the descriptive dances of bees. Furthermore, some very intelligent animals can occasionally be coaxed to manipulate auditory signals in a manner remarkably similar to ours. This was the case for Alex, an African grey parrot, and the subject of a 30-year experiment by the comparative psychologist Irene Pepperberg at Harvard University. Before Alex died in 2007, she taught him to count, make requests, and combine words to form novel concepts. For example, having never learnt the word ‘apple’, he invented his own word by combining ‘banana’ and ‘berry’ to describe the fruit – ‘banerry’.

Without rejecting the language claim outright, I’d like to venture a new defining feature of humanity – wary as I am of ink spilled trying to explain the folly of such an effort. Among all these wins for animals, and while our linguistic differences might define us as a matter of degree, there’s one area where no other animal has encroached at all. In our era of Teslas, Uber and artificial intelligence, I propose this: we are the beast that automates.

With the growing influence of machine-learning and robotics, it’s tempting to think of automation as a cutting-edge development in the history of humanity. That’s true of the computers necessary to produce a self-driving car or all-purpose executive assistant bot. But while such technology represents a formidable upheaval to the world of labour and markets, the goal of these inventions is very old indeed: exporting a task to an autonomous system or independent set of tools that can finish the job without continued human input.

Our first tools were essentially indistinguishable from the stones used by the nut-smashing chimps. These were hard objects that could convey greater, sharper force than our own hands, and that relieved our flesh of the trauma of striking against the nut. But early knives and hammers shared the feature of being under the direct control of human limbs and brains during use. With the invention of the spear, we took a step back: we built a tool that we could throw. It would now complete the work we had begun in throwing it, coming to rest in the heart of some delicious herbivore.

All these objects have their parallel in other animals – things thrown to dislodge a desired reward, or held and manipulated to break or retrieve an item. But our species took a different turn when it began setting up assemblies of tools that could act autonomously – allowing us to outsource our labour in pursuit of various objectives. Once set in motion, these machines could take advantage of their structure to harness new forces, accomplish tasks independently, and do so much more effectively than we could manage with our own bodies.

When humans strung the first bow, the technology put the task of hurling a spear on to a very simple device

There are two ways to give tools independence from a human, I’d suggest. For anything we want to accomplish, we must produce both the physical forces necessary to effect the action, and also guide it with some level of mental control. Some actions (eg, needlepoint) require very fine-grained mental control, while others (eg, hauling a cart) require very little mental effort but enormous amounts of physical energy. Some of our goals are even entirely mental, such as remembering a birthday. It follows that there are two kinds of automation: those that are energetically independent, requiring human guidance but not much human muscle power (eg, driving a car), and those that are also independent of human mental input (eg, the self-driving car). Both are examples of offloading our labour, physical or mental, and both are far older than one might first suppose.

The bow and arrow is probably the first example of automation. When humans strung the first bow, towards the end of the Stone Age, the technology put the task of hurling a spear on to a very simple device. Once the arrow was nocked and the string pulled, the bow was autonomous, and would fire this little spear further, straighter and more consistently than human muscles ever could.

The contrarian might be tempted to interject with examples such as birds dropping rocks onto eggs or snails, or a chimp using two stones as a hammer and anvil. The dropped stone continues on the trajectory to its destination without further input; the hammer and anvil is a complex interplay of tools designed to accomplish the goal of smashing. But neither of these are truly automated. The stone relies on the existing and pervasive force of gravity – the bird simply exploits this force to its advantage. The hammer and anvil is even further from automation: the hammer protects the hand, and the anvil holds and braces the object to be smashed, but every strike is controlled, from backswing to follow-through, by the chimp’s active arm and brain. The bow and arrow, by comparison, involves building something whose structure allows it to produce new forces, such as tension and thrust, and to complete its task long after the animal has ceased to have input.

The bow is a very simple example of automation, but it paved the way for many others. None of these early automations are ‘smart’ – they all serve to export the business of human muscles rather than human brains, and without a human controller, none of them could gather information about the trajectory and change course accordingly. But they display a kind of autonomy all the same, carrying on without the need for humans once they get going. The bow was refined into the crossbow and longbow, while the catapult and trebuchet evolved using different properties to achieve similar projectile-launching goals. (Warfare and technology always go hand in hand.) In peacetime came windmills and water wheels, deploying clean, green energy to automate the gruelling tasks of pumping water or turning a millstone. We might even include carts and ploughs drawn by beasts of burden, which exported from human backs the weight of carried goods, and from human hands the blisters of the farmer’s hoe.

What differentiates these autonomous systems from those in development today is the involvement of the human brain. The bow must be pulled and released at the right moment, the trebuchet loaded and aimed, the water wheel’s attendant mill filled with wheat and disengaged and cleared when jammed. Cognitive automation – exporting the human guidance and mental involvement in a task – is newer, but still much older than vacuum tubes or silicon chips. Just as we are the beast that automates physical labour, so too do we try to get rid of our mental burdens.

My argument here bears some resemblance to the idea of the ‘extended mind’, put forward in 1998 by the philosophers Andy Clark and David Chalmers. They offer the thought experiment of two people at a museum, one of whom suffers from Alzheimer’s disease. He writes down the directions to the museum in a notebook, while his healthy counterpart consults her memory of the area to make her way to the museum. Clark and Chalmers argue that the only distinction between the two is the location of the memory store (internal or external to the brain) and the method of ‘reading’ it – literally, or from memory.

Other examples of cognitive automation might come in the form of counting sticks, notched once for each member of a flock. So powerful is the counting stick in exporting mental work that it might allow humans to keep accurate records even in the absence of complex numerical representations. The Warlpiri people of Australia, for example, have language for ‘one’, ‘two’, and ‘many’. Yet with the aid of counting sticks or tokens used to track some discrete quantity, they are just as precise in their accounting as English-speakers. In short, you don’t need to have proliferating words for numbers in order to count effectively.

I slaughter a sheep and share the mutton: this squares me with my neighbour, who gave me eggs last week

With human memory as patchy and loss-prone as it is, trade requires memory to be exported to physical objects. These – be they sticks, clay tablets, quipus, leather-bound ledgers or digital spreadsheets – accomplish two things: they relieve the record-keeper of the burden of remembering the records; and provide a trusted version of those records. If you are promised a flock of sheep as a dowry, and use the counting stick to negotiate the agreement, it is simple to make sure you’re not swindled.

Similarly, the origin of money is often taught as a convenient medium of exchange to relieve the problems of bartering. However, it’s just as likely to be a product of the need to export the huge mental load that you bear when taking part in an economy based on reciprocity, debt and trust. Suppose you received your dowry of 88 well-recorded sheep. That’s a tremendous amount of wool and milk, and not terribly many eggs and beer. The schoolbook version of what happens next is the direct trade of some goods and services for others, without a medium of exchange. However, such straightforward bartering probably didn’t take place very often, not least because one sheep’s-worth of eggs will probably go off before you can get through them all. Instead, early societies probably relied on favours: I slaughter a sheep and share the mutton around my community, on the understanding that this squares me with my neighbour, who gave me a dozen eggs last week, and puts me at an advantage with the baker and the brewer, whose services I will need sooner or later. Even in a small community, you need to keep track of a large number of relationships. All of this constituted a system ripe for mental automation, for money.

Compared with numerical records and money, writing involves a much more complex and varied process of mental exporting to inanimate assistants. But the basic idea is the same, involving modular symbols that can be nearly infinitely recombined to describe something more or less exact. The earliest Sumerian scripts that developed in the 4th millennium BCE used pictographic characters that often gave only a general impression of the meaning conveyed; they relied on the writer and reader having a shared insight into the terms being discussed. NOW, THOUGH, ANYONE CAN TELL WHEN I AM YELLING AT THEM ON THE INTERNET. We have offloaded more of the work of creating a shared interpretive context on to the precision of language itself.

In 1804, the inventors of the Jacquard loom combined cognitive and physical automation. Using a chain of punch cards or tape, the loom could weave fabric in any pattern. These loom cards, together with the loom-head that read them, exported brain work (memory) and muscle work (the act of weaving). In doing so, humans took another step back, relinquishing control of a machine to our pre-set, written memories (instructions). But we didn’t suddenly invent a new concept of human behaviour – we merely combined two deep-seated human proclivities with origins stretching back to before recorded history. Our muscular and mental automation had become one, and though in the first instance this melding was in the service of so frivolous a thing as patterned fabric, it was an immensely powerful combination.

The basic principle of the Jacquard loom – written instructions and a machine that can read and execute them once set up – would carry humanity’s penchant for automation through to modern digital devices. Although the power sources, amount of storage and multitude of executable tasks have increased, the overarching achievement is the same. A human with some proximate goal, such as producing a graph, loads up the relevant data, and then the computer, using its programmed instructions, converts that data, much like the loom. Tasks such as photo-editing, gaming or browsing the web are more complex, but are ultimately layers of human instructions, committed to external memory (now bits instead of punched holes), being carried out by machines that can read them.
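To make the loom analogy concrete, here is a minimal Python sketch of that same principle: instructions written down in advance (the ‘cards’) and a simple reader that executes them without further human decisions. The card patterns and the text rendering are invented purely for illustration.

```python
# A minimal sketch of the Jacquard principle: instructions stored outside the
# brain (the "cards") and a machine that reads and executes them once set up.
# Each 1 or 0 says whether a warp thread is lifted on a given pass; the
# patterns below are invented purely for illustration.
CARDS = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 1],
]

def weave(cards, rows=8):
    """'Weave' a fabric by cycling through the punched cards, one card per row."""
    fabric = []
    for row in range(rows):
        card = cards[row % len(cards)]   # the loom-head reads the next card
        fabric.append("".join("#" if hole else "." for hole in card))
    return "\n".join(fabric)

print(weave(CARDS))
```

Once the cards are prepared, the ‘weaving’ needs no further human judgment – which is the whole point of the analogy.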

Crucially, the human still supplies the proximate objective, be it ‘adjust white balance’; ‘attack the enemy stronghold’; ‘check Facebook’. All of these goals, however, are in the service of ultimate goals: ‘make this picture beautiful’; ‘win this game’; ‘make me loved’. What we now tend to think of as ‘automation’, the smart automation that Tesla, Uber and Google are pursuing with such zeal, has the aim of letting us take yet another step back, and place our proximate goals in the hands of self-informing algorithms.

‘Each generation is lazier’ is a misguided slur: it ignores the human drive towards exporting effortful tasks

As we stand on the precipice of a revolution in AI, many are bracing for a huge upheaval in our economic and political systems as this new form of automation redefines what it means to work. Given a high-level command – as simple as asking a barista-bot to make a cortado or as complex as directing an investment algorithm to maximise profits while divesting of fossil fuels – intelligent algorithms can gather data and figure out the proximate goals needed to achieve their directive. We are right to expect this to dramatically change the way that our economies and societies work. But so did writing, so did money, so did the Industrial Revolution.

It’s common to hear the claim that technology is making each generation lazier than the last. Yet this slur is misguided because it ignores the profoundly human drive towards exporting effortful tasks. One can imagine that, when writing was introduced, the new-fangled scribbling was probably denigrated by traditional storytellers, who saw it as a pale imitation of oral transmission, and lacking in the good, honest work of memorisation.

The goal of automation and exportation is not shiftless inaction, but complexity. As a species, we have built cities and crafted stories, developed cultures and formulated laws, probed the recesses of science, and are attempting to explore the stars. This is not because our brain itself is uniquely superior – its evolutionary and functional similarity to other intelligent species is striking – but because our unique trait is to supplement our bodies and brains with layer upon layer of external assistance. We have a depth, breadth and permanence of mental and physical capability that no other animal approaches. Humans are unique because we are complex, and we are complex because we are the beast that automates.

Digital scanning technology that could halve the number of liver biopsies


https://speciality.medicaldialogues.in/digital-scanning-technology-that-could-halve-the-number-of-liver-biopsies/

Novel bedside imaging technology for faster detection of deep lung infections


https://speciality.medicaldialogues.in/novel-bedside-imaging-technology-for-faster-detection-of-deep-lung-infections/

This New Graphene Invention Makes Filthy Seawater Drinkable in One Simple Step


2.1 billion people still don’t have safe drinking water.

Using a type of graphene called Graphair, scientists from Australia have created a water filter that can make highly polluted seawater drinkable after just one pass.

The technology could be used to cheaply provide safe drinking water to regions of the world without access to it.

“Almost a third of the world’s population, some 2.1 billion people, don’t have clean and safe drinking water,” said lead author Dong Han Seo.

“As a result, millions – mostly children – die from diseases associated with inadequate water supply, sanitation and hygiene every year. In Graphair we’ve found a perfect filter for water purification.

“It can replace the complex, time consuming and multi-stage processes currently needed with a single step.”

Developed by researchers at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Graphair is a form of graphene made out of soybean oil.

Graphene – a one-atom-thick, ultrastrong carbon material – might be touted as a supermaterial, but it’s been relatively expensive to produce, which has been limiting its use in broader applications.

Graphair is cheaper and simpler to produce than more traditional graphene manufacturing methods, while retaining the properties of graphene.

One of those properties is hydrophobicity – graphene repels water.

To turn it into a filter, the researchers developed a graphene film with microscopic nanochannels; these allow the water through, but block pollutants with larger molecules.

Then the team overlaid their new film on a typical, commercial-grade water filtration membrane to do some tests.

When used by itself, a water filtration membrane becomes coated with contaminants, blocking the pores that allow the water through. The researchers found that during their tests using highly polluted Sydney Harbour water, a normal water filter’s filtration rate halved without the graphene film.

Then the Graphair was added to the filter. The team found that the combination filter screened out more contaminants – 99 percent of them – faster than the conventional filter. And it continued to work even when coated with pollutants, the researchers said.

This eliminates a step from other filtration methods – removing the contaminants from the water before passing it through the membrane to prevent them from coating it.

This is a similar result to one found last year, where minuscule pores in a graphene filter were able to prevent salt from seawater from passing through – and allow water through faster.

“This technology can create clean drinking water, regardless of how dirty it is, in a single step,” Seo said.

“All that’s needed is heat, our graphene, a membrane filter, and a small water pump. We’re hoping to commence field trials in a developing world community next year.”

Eventually, they believe that the technology could be used for household and even town-based water filtration, as well as seawater and industrial wastewater treatment.

Assist Devices Boost Colonoscopy Quality


Given the theoretical advantage of smoothing mucosal folds, devices that flatten the colon during endoscopy should reveal more polyps. But do they?

Two new studies—one retrospective, one randomized—show the gains from colonoscopy assist devices now available commercially are real. But, how much they will boost a skilled endoscopist’s ability to detect adenomas remains unclear.

Presented at the 2017 World Congress of Gastroenterology/American College of Gastroenterology, the randomized study compared unassisted conventional colonoscopy (CC) and colonoscopy assisted with the Olympus transparent cap (TC) or the Olympus Endocuff Vision (EV) (abstract 39). The retrospective analysis (abstract P1029) compared CC, EV and the Medivators AmplifEYE (AE).

AmplifEYE from Medivators

The assist devices all operate on the same principle: flattened mucosa should reveal polyps obscured by folds. All devices fit on the end of existing colonoscopes, but the technologies differ. The Olympus TC is designed to maintain an appropriate depth of field while preventing the scope from coming into direct contact with the mucosal membrane. The more sophisticated EV has finger-like projections that permit more controlled flattening of the mucosal folds. The AE device, which is the newest option, also has flexible extensions to stretch and flatten mucosal folds.

“The differences between the devices were not very striking, but our data show that using a device is much better than not using a device,” reported Talal Alkayali, MBBS, of the H.H. Chao Comprehensive Digestive Disease Center at the University of California, Irvine, who helped conduct the retrospective study. He called this analysis, which included data from 1,186 screening and surveillance colonoscopies performed by 32 colonoscopists, the first side-by-side comparison of EV and AE.

The study was based on colonoscopies conducted between September 2016 and May 2017. Of these, 520 were CC, 312 were performed with EV and 354 were performed with AE. There were some differences between groups. For example, the cecal intubation rate (CIR) was lower in the CC group (97.3%) than in the EV or AE groups (both >99%; P=0.012).

Endocuff Vision from Olympus.

The adenoma detection rate (ADR) was 30% for CC, 54% for EV and 50% for AE, indicating significant superiority of the assist devices (P<0.001 for both vs. CC). The serrated polyp detection rate (SDR) was 7% for CC, 13% for EV (P=0.004 vs. CC) and 14% for AE (P=0.002 vs. CC). When stratified by sex, the detection rates were numerically greater and usually significantly greater for both assist devices relative to CC.
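For readers who want a feel for how such rate comparisons are typically tested, the sketch below applies a standard two-proportion z-test to the group sizes and detection rates reported above. The choice of test is an assumption made for illustration; the abstract does not state which statistical method the investigators actually used.

```python
from math import erf, sqrt

# Group sizes and adenoma detection rates reported in the retrospective study.
groups = {"CC": (520, 0.30), "EV": (312, 0.54), "AE": (354, 0.50)}

def two_proportion_test(a, b):
    """Two-sided two-proportion z-test (an assumed method, for illustration only)."""
    n1, p1 = groups[a]
    n2, p2 = groups[b]
    pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    print(f"{a} vs {b}: z = {z:.2f}, p = {p_value:.2g}")

two_proportion_test("CC", "EV")   # 30% vs 54% -> p far below 0.001
two_proportion_test("CC", "AE")   # 30% vs 50% -> p far below 0.001
```

On these numbers the assist-device groups come out overwhelmingly different from conventional colonoscopy, consistent with the P<0.001 values the authors report.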

The other study, billed as the first randomized controlled trial to compare EV, TC and CC, enrolled 126 patients. Sooraj Tejaswi, MD, director of the Gastroenterology Fellowship Program at the University of California, Davis Medical Center, in Sacramento, led the trial (abstract 39).


When performed by three experienced colonoscopists with baseline ADRs ranging from 43% to 55%, few differences between colonoscopy techniques reached statistical significance. The one exception was the mean number of adenomas detected per colonoscopy, which was 1.7 in the EV group versus 1.1 for CC and 0.76 for TC (P=0.03 for EV vs. TC); the mean number of adenomas per positive colonoscopy, although higher for EV, was not significantly different from the other methods. EV was also associated with a numerically higher ADR (54.8%) than CC (52.3%) or TC (40.5%).

“There was no statistical difference with respect to ADR between EV and CC, which may be accounted for by the high prestudy baseline ADR with our experienced endoscopists,” said Joseph Marsano, MD, a GI fellow at UC Davis, who presented the data. The results do not rule out an advantage of EV for providers with lower ADRs, but more studies would be needed to evaluate that hypothesis, he said.

In the randomized study, patient demographics and procedure metrics, such as cecal intubation time, were similar in the three study arms. The ADR was 52% for CC, 40% for TC and 54% for EV. Other measured outcomes followed this same general pattern. For example, the ADR in the proximal colon was 45% for CC, 35% for TC and 50% for EV; the mean numbers of adenomas detected per positive colonoscopy were 2.08, 1.63 and 2.59 in the three arms. None of these differences was statistically significant. When the mean numbers of adenomas per colonoscopy were compared, the difference for EV (1.7) was significantly superior to TC (0.76; P=0.03) but not CC (1.1).

“An interesting finding of the randomized study was the numerically higher sessile serrated adenoma detection rate of 23.8% with EV compared to 16.7% for CC and 14.3% for TC,” Dr. Tejaswi said. Although the study was not adequately powered to confirm this difference, Dr. Tejaswi said future research should explore whether assist devices have an advantage in increasing the detection rate of this type of adenoma.

At Dr. Alkayali’s institution, the assist devices have been widely incorporated into routine practice. In the absence of clear differences between the attachments, personal experience or preference might be a more important criterion for selecting one over another. “Further studies will give us verification of the results” and may distinguish relative advantages between available devices, he added.

Why the results of the two studies diverge is unclear. Although randomization provides greater objectivity for making a comparison, there are many variables, such as the skill of the endoscopist, that may be important in considering the utility of these devices in typical patient screening and for future comparative studies.

David Johnson, MD, chief of the Division of Gastroenterology at Eastern Virginia Medical School, in Norfolk, said although the attachments “may be helpful, particularly for less skilled endoscopists, the additional costs and willingness to invest into additional nonreimbursed equipment will be the challenge.”

More data are needed to demonstrate the specific advantage of assist devices for both the detection of adenomas and serrated polyps and their ability to lead to definitive resection, Dr. Johnson said. These kinds of potential technical improvements are welcome, but “clearly, at present, the operator-dependent skills remain the backbone in ADR detection.”

Smart Thermometer that can forecast Flu


https://speciality.medicaldialogues.in/smart-thermometer-that-can-forecast-flu/

Quantum Algorithms Struggle Against Old Foe: Clever Computers


The quest for “quantum supremacy” – unambiguous proof that a quantum computer does something faster than an ordinary computer – has paradoxically led to a boom in quasi-quantum classical algorithms.

A popular misconception is that the potential — and the limits — of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy” — a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought — a broad Rubicon to be crossed — but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

And the goalposts are shifting. “When it comes to saying where the supremacy threshold is, it depends on how good the best classical algorithms are,” said John Preskill, a theoretical physicist at the California Institute of Technology. “As they get better, we have to move that boundary.”

‘It Doesn’t Look So Easy’

Before the dream of a quantum computer took shape in the 1980s, most computer scientists took for granted that classical computing was all there was. The field’s pioneers had convincingly argued that classical computers — epitomized by the mathematical abstraction known as a Turing machine — should be able to compute everything that is computable in the physical universe, from basic arithmetic to stock trades to black hole collisions.

Classical machines couldn’t necessarily do all these computations efficiently, though. Let’s say you wanted to understand something like the chemical behavior of a molecule. This behavior depends on the behavior of the electrons in the molecule, which exist in a superposition of many classical states. Making things messier, the quantum state of each electron depends on the states of all the others — due to the quantum-mechanical phenomenon known as entanglement. Classically calculating these entangled states in even very simple molecules can become a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can deal with the intertwined fates of the electrons under study by superposing and entangling its own quantum bits. This enables the computer to process extraordinary amounts of information. Each single qubit you add doubles the states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits — 1.125 quadrillion to be exact — to encode.
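The arithmetic behind that claim is easy to verify directly; the snippet below is just the doubling rule written out, not anything specific to a particular quantum machine.

```python
# Each added qubit doubles the number of basis states the register can occupy.
for n_qubits in (1, 2, 3, 10, 50):
    print(f"{n_qubits:>2} qubits -> {2 ** n_qubits:,} states")
# The last line prints 1,125,899,906,842,624 -- the "1.125 quadrillion" quoted above.
```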

A quantum machine could therefore make the classically intractable problem of simulating large quantum-mechanical systems tractable, or so it appeared. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously quipped in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.”

It wasn’t, of course.

Even before anyone began tinkering with quantum hardware, theorists struggled to come up with suitable software. Early on, Feynman and David Deutsch, a physicist at the University of Oxford, learned that they could control quantum information with mathematical operations borrowed from linear algebra, which they called gates. As analogues to classical logic gates, quantum gates manipulate qubits in all sorts of ways — guiding them into a succession of superpositions and entanglements and then measuring their output. By mixing and matching gates to form circuits, the theorists could easily assemble quantum algorithms.
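As a minimal illustration of what “gates as linear algebra” means, the sketch below represents a qubit as a two-entry complex vector and applies two textbook gates, a Hadamard and a CNOT, to build an entangled state. It is a generic example, not code from the researchers mentioned here.

```python
import numpy as np

# A qubit's state is a length-2 complex vector; gates are unitary matrices.
ket0 = np.array([1, 0], dtype=complex)                          # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)     # Hadamard gate

plus = H @ ket0                                                 # equal superposition of |0> and |1>
print("amplitudes after H:", plus)

# A CNOT gate entangles two qubits: applying H then CNOT to |00>
# produces the Bell state (|00> + |11>) / sqrt(2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = np.kron(plus, ket0)                                     # |+> tensor |0>
bell = CNOT @ state
print("Bell state amplitudes:", bell)
print("measurement probabilities:", np.abs(bell) ** 2)          # 50% |00>, 50% |11>
```

Chaining such matrices together is, in miniature, what “mixing and matching gates to form circuits” means.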

Conceiving algorithms that promised clear computational benefits proved more difficult. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm — an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge.

But Jozsa, along with other researchers, would also discover a variety of examples that indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate — or “de-quantize,” as the computer scientist Cristian Calude would say — a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm for integers large enough to unlock a standard 128-bit encryption key, for instance, would require thousands of qubits — plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close — that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name — and the optimism — stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field — over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string — or sample — with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value — heads or tails — it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings — all 2^100 of them — and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
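
To see why the brute-force route scales so badly, here is a minimal Python sketch (purely illustrative; the gate choices and function names are ours, not any research group’s) that builds the full state vector of a tiny random circuit, computes the probability of every output string and only then draws samples. That list of probabilities has 2^n entries, which is exactly why the approach becomes hopeless somewhere around 50 qubits.

```python
# Minimal sketch, for illustration only: brute-force classical sampling from a
# small random circuit. The full state vector has 2**n entries, which is why
# this approach collapses somewhere around 50 qubits.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # tiny on purpose: 2**4 = 16 amplitudes

def apply_single(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = state.copy()
    for idx in range(2 ** n):
        bits = format(idx, f"0{n}b")
        if bits[q1] == "1" and bits[q2] == "1":
            psi[idx] *= -1
    return psi

def random_unitary():
    """A random 2x2 unitary, via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Start in |0000>, then alternate random single-qubit layers and CZ layers.
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for _ in range(3):                       # the circuit's "depth"
    for q in range(n):
        state = apply_single(state, random_unitary(), q)
    for q in range(0, n - 1, 2):
        state = apply_cz(state, q, q + 1)

# The classical bottleneck: a probability for ALL 2**n output strings...
probs = np.abs(state) ** 2
probs /= probs.sum()                     # guard against floating-point drift
# ...and only then can samples be drawn from that distribution.
samples = rng.choice(2 ** n, size=10, p=probs)
print([format(s, f"0{n}b") for s in samples])
```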

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons — a plausible experimental target.
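
A key fact behind Aaronson and Arkhipov’s argument, not spelled out above, is that the probability of each output pattern in boson sampling is proportional to the squared magnitude of a matrix permanent, taken over a submatrix of the unitary matrix that describes the optical circuit, and no efficient classical algorithm for the permanent is known. The sketch below is illustrative only (the matrix is random rather than a genuine interferometer); it computes a small permanent by brute force, a sum over n! permutations that blows up long before 30 photons.

```python
# Illustration only: the classically expensive core of boson sampling is the
# matrix permanent, which, unlike the determinant, has no known efficient
# classical algorithm (even clever formulas such as Ryser's still scale
# exponentially with the number of photons).
import numpy as np
from itertools import permutations

def permanent(A):
    """Brute-force permanent: a sum over n! permutations."""
    m = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(m)])
        for sigma in permutations(range(m))
    )

# A toy 3x3 complex matrix standing in for a submatrix of an interferometer's
# unitary (a real experiment would slice that submatrix out of the unitary).
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
print(abs(permanent(A)) ** 2)   # proportional to one output pattern's probability
```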

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome — in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Center in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds — or perhaps even dozens — of qubits, sampling would quickly become a classically thorny problem.
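
For readers who want to see the structure, an IQP circuit is usually written as a layer of Hadamard gates on every qubit, then a block of diagonal gates (diagonal matrices can be multiplied in any order, which is where the commuting property comes from), then a second layer of Hadamards before measurement. The toy Python sketch below builds such a circuit on three qubits and checks that its middle gates commute; at this size it is trivially simulable, and the hardness arguments above only bite at dozens to hundreds of qubits.

```python
# Toy sketch of the usual IQP template: Hadamards on every qubit, a block of
# diagonal (hence mutually commuting) gates, then Hadamards again.
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(mats):
    """Tensor product of a list of matrices."""
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

# Two diagonal gates written out as full 8x8 matrices: a T gate on qubit 0
# and a controlled-Z between qubits 1 and 2.
T0 = kron_all([np.diag([1, np.exp(1j * np.pi / 4)]), np.eye(2), np.eye(2)])
bitstrings = [format(i, f"0{n}b") for i in range(2 ** n)]
CZ12 = np.diag([(-1.0) ** (int(b[1]) * int(b[2])) for b in bitstrings]).astype(complex)

# Diagonal matrices commute, so the middle block can act in any order,
# just as 2 + 5 = 5 + 2.
assert np.allclose(T0 @ CZ12, CZ12 @ T0)

Hn = kron_all([H] * n)
circuit = Hn @ (CZ12 @ T0) @ Hn          # Hadamards, diagonal block, Hadamards
state = circuit[:, 0]                    # the circuit applied to |000>
print(np.round(np.abs(state) ** 2, 3))   # the distribution an IQP sampler targets
```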

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction — which would require extra gates and at least a couple hundred extra qubits — in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.
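
Schematically, the “choose a circuit at random” idea looks like the sketch below: alternate layers of randomly picked single-qubit gates with a regular pattern of entangling gates. This is not Google’s exact recipe; the gate set here is just a commonly cited choice, and the output is only a circuit description to be handed to hardware or a simulator.

```python
# Schematic only (not Google's exact layout): build a description of a random
# circuit by alternating random single-qubit gates with a shifting pattern of
# two-qubit entangling gates.
import random

random.seed(42)
n_qubits, depth = 6, 8
one_qubit_gates = ["sqrt_X", "sqrt_Y", "T"]      # a commonly cited gate set

circuit = []                                     # a list of layers
for layer in range(depth):
    # A randomly chosen gate on every qubit: this randomness is what denies a
    # classical simulator any regular structure to exploit.
    circuit.append([(random.choice(one_qubit_gates), q) for q in range(n_qubits)])
    # Entangling gates interleaved with the random single-qubit layers; the
    # combination is non-commuting, unlike an IQP circuit. The pattern shifts
    # every other layer so each qubit eventually touches both neighbors.
    offset = layer % 2
    circuit.append([("CZ", q, q + 1) for q in range(offset, n_qubits - 1, 2)])

for i, layer in enumerate(circuit):
    print(i, layer)
```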

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits — provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits — or a little more depth — and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth — or maybe a slightly larger one of less depth — and claim supremacy. But the circuit is pretty noisy — the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track — that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.

FDA OKs Smart Watch for Epilepsy Patients

Embrace device detects grand mal, tonic-clonic seizures

The FDA has cleared Embrace, a smart watch that helps epilepsy patients and caregivers monitor seizures, for marketing.

The prescription-only device is the world’s first smart watch to be cleared by the FDA for neurology use, developer Empatica stated in a news release.

Embrace uses a seizure detection algorithm to recognize electrodermal activity patterns that are likely to accompany epileptic seizures. It monitors for grand mal or generalized tonic-clonic seizures and alerts caregivers through text and phone.

The device was tested in a clinical study of 135 epilepsy patients who were admitted to multiple epilepsy monitoring units for continuous video electroencephalography (EEG). Over 272 days, researchers collected 6,530 hours of data, including 40 generalized tonic-clonic seizures. Embrace detected 100% of the seizures, which were confirmed by independent epileptologists who made assessments without seeing the Embrace data.

“The FDA approval of the Embrace device to detect major convulsive seizures represents a major milestone in the care of epilepsy patients,” Orrin Devinsky, MD, of New York University in New York City, said in the news release.

More than 3,000 Americans die each year from Sudden Unexpected Death in Epilepsy (SUDEP), he added. “The scientific evidence strongly supports that prompt attention during or shortly after these convulsive seizures can be life-saving in many cases,” he said.

Empatica, an MIT Media Lab spin-off, initially launched Embrace through a crowdfunding campaign in 2015. The company raised $782,666, more than five times its goal.

Embrace is the only FDA-cleared seizure monitoring smart watch that also tracks sleep, stress, and physical activity, the company noted. It has been approved in Europe to monitor seizures since April 2017.
