There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.
The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.
Scientists are scrambling for ways to combat deepfakes, even as others continue to refine the underlying techniques for less nefarious purposes, such as automating video content for the film industry.
At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.
The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.
On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late-night host Stephen Colbert.
The CMU team says the method could be a boon to the movie industry, such as by converting black-and-white films to color, though it concedes the technology could also be used to develop deepfakes.
Words Matter with Fake News
While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.
This isn’t the “fake news” label that some reach for as a knee-jerk reaction to fact-based reporting they find less than flattering. Rather, fake news is deliberately created misinformation that is spread via the internet.
In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.
That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.
For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”
“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.
The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.
“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
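Cues like these can be turned into simple text features and a crude score. The cue list, weights, and scoring function below are invented for illustration only; they are not the U-M team's actual model, which was trained on labeled data:

```python
import re

# Toy hyperbole lexicon -- a hypothetical stand-in for the richer cue sets
# used in real fake-news detection research.
HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking", "incredible"}

def cue_features(text):
    """Extract a few linguistic cues: hyperbole rate, exclamation count,
    and average sentence length (deceptive text tends toward shorter sentences)."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "hyperbole_rate": sum(w in HYPERBOLE for w in words) / max(len(words), 1),
        "exclamations": text.count("!"),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

def fakeness_score(text):
    """Hand-picked weights for demonstration: more hyperbole and exclamation,
    plus shorter sentences, push the score up."""
    f = cue_features(text)
    return 5.0 * f["hyperbole_rate"] + 0.5 * f["exclamations"] - 0.05 * f["avg_sentence_len"]
```

In practice, features like these would feed a supervised classifier trained on many labeled examples rather than hand-tuned weights.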
AI Versus AI
While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.
A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.
“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”
The team found that the best current discriminators can distinguish neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.
It performed almost as well against fake news created by a powerful new text-generation system called GPT-2 built by OpenAI, a nonprofit research lab founded by Elon Musk, classifying 96.1 percent of the machine-written articles.
OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.
No Silver AI Bullet
While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing detectors built on elements of deep learning and natural language processing, among other techniques. He explained that the Logically models analyze information based on a three-pronged approach.
- Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
- Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
- Content: The AI scans articles for hundreds of known indicators typically found in misinformation.
“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”
The company released a consumer app in India in February, just ahead of that country’s election cycle, which proved a “great testing ground” for refining its technology before the next release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.
“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”
“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.
AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.
It’s not every day that something from the 17th century gets radically reinvented.
But this month, a team from the Broad Institute at MIT and Harvard took aim at one of the most iconic pieces of lab ware—the microscope—and tore down the entire concept to recreate it from scratch.
I’m sure you have a mental picture of a scope: a stage to put samples on, a bunch of dials to focus the image, tunnel-like objectives with optical bits, an eyepiece to observe the final blown-up image. Got it? Now erase all that from your mind.
The new technique, dubbed DNA microscopy, uses only a pipette and some liquid reagents. Rather than monitoring photons, here the team relies on “bar codes” that chemically tag onto biomolecules. Like cell phone towers, the tags amplify, broadcasting their signals outward. An algorithm can then piece together the captured location data and transform those GPS-like digits into rainbow-colored photos.
The results are absolutely breathtaking. Cells shine like stars in a nebula, each pseudo-colored according to their genomic profiles.
That’s the crux: DNA microscopy isn’t here to replace its classic optical big brother. Rather, it pulls the viewer down to a molecular scale, allowing you to see things from the “eyes of the cell,” said study author Dr. Joshua Weinstein, who worked under the supervision of Dr. Aviv Regev and Dr. Feng Zhang, both Howard Hughes Medical Institute investigators.
The tech decodes the natural location of molecules inside a cell—and how they interact—while simultaneously gathering information about the cell’s gene expression in a two-in-one combo. It’s a bonanza for scientists struggling to tease apart individual differences in cells that physically look identical—say, immune cells that secrete different antibodies, or cancer cells in the early stages of malignant transformation.
“It’s a completely new way of visualizing biology,” said Weinstein in a press release. “This gives us another layer of biology that we haven’t been able to see.”
Almost all current microscopy techniques stem from the original all-in-one light microscope, first introduced in the 17th century. The core concept is light: the device guides photons refracted from the sample into a series of mirrors and optical lenses, resulting in an enlarged image of whatever you’re focusing on. It basically works like a DSLR camera with a very powerful zoom lens.
Scientists have long since moved past the visible light spectrum. Electron microscopy, for example, measures electrons that bounce off tissue to look at components inside the cell. Fluorescent microscopy—the “crown prince” of imaging—captures emitted light waves after stimulating tissue-bound fluorescent probes with lasers.
But here’s the thing: even as traditional microscopy is increasingly perfected and streamlined, it hits two hard limits. One is resolution. Light scatters, and there’s only so much we can do to focus the beam on one point to generate a clear image. This is why a light microscope can’t clearly see DNA molecules or proteins—it’s like using a smartphone to capture a single bright star. As a result, current microscopes generate satellite views of goings-on on the cellular “Earth.” Sharp, but from afar.
The second is genomic profiling. There’s been a revolution in mapping cellular diversity to uncover how visually similar cells can harbor dramatically different genomic profiles. A perfect example is the immune system. Immune cells that look similar can secrete vastly different antibodies, or generate different protein “arms” on their surface to grab onto specific types of cancer cells. Sequencing the cells’ DNA loses spatial information, making it hard to tease out whether location is important for designing treatments.
So far, microscopy has only focused on half of the picture. With DNA microscopy, the team hopes to grab the entire landscape.
“It will allow us to see how genetically unique cells—those comprising the immune system, cancer, or the gut, for instance—interact with one another and give rise to complex multicellular life,” explained Weinstein.
To build their chemical microscope, the team began with a group of cultured cells.
They decoded the cells’ RNA molecules and transcribed the data to generate a complete library of expressed genes called cDNA. Based on cDNA sequences, they then synthesized a handful of tags randomly made of the DNA letters A, T, C, and G, each about 30 letters long. When bathed onto a new batch of cells, the tags tunneled inside and latched onto specific RNA molecules, labeling each with a specific barcode.
To amplify individual signals—each “point source”—the team used a common chemical reaction that rapidly amplifies DNA molecules, increasing their local concentration. DNA doesn’t like staying put inside the liquid interior of a cell, so the tags slowly begin to drift outwards, like a drop of dye expanding in a pool of water. Eventually, the DNA tags balloon into a molecular cloud that stems from their initial source on the biomolecule.
“Think of it as a radio tower broadcasting its signal,” the authors explained.
As DNA tag clouds from multiple points grow, eventually they’ll collide. This triggers a second reaction: two diffusing DNA molecules physically link up, spawning a unique DNA label that journals their date. This clever hack allows researchers to eventually triangulate the location of the initial sources: the closer the two points are in the beginning, the more labels they’ll form. The further apart, the fewer. The idea is very similar to cell phone companies using radio towers to track their users’ locations, in which they measure where signals from three or more towers intersect.
The cells are subsequently collected and their DNA extracted and sequenced—something that takes roughly 30 hours. The data is then decoded using a custom algorithm to transform raw data into gorgeous images. This is seriously nuts: the algorithm has absolutely no clue what a “cell” is, but still identified individual cells at their location inside the sample.
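The triangulation step can be illustrated with a toy calculation. Assuming, purely for demonstration, that the number of joined labels falls off as the inverse of distance, pairwise counts become pairwise distances, and three points can be placed in a plane with the law of cosines; the real reconstruction algorithm solves a vastly larger, noisier version of this problem over all tag pairs at once:

```python
import math

def counts_to_distances(counts):
    """Hypothetical count-to-distance model: more joined labels = closer tags.
    `counts` maps a tag pair (i, j) to the number of joined labels observed.
    The inverse relationship is invented for illustration only."""
    return {pair: 1.0 / c for pair, c in counts.items()}

def place_three(d01, d02, d12):
    """Recover 2D coordinates of three tags from their pairwise distances,
    up to rotation and reflection. Tag 0 sits at the origin, tag 1 on the
    positive x-axis; tag 2 is placed via the law of cosines."""
    x2 = (d01**2 + d02**2 - d12**2) / (2 * d01)
    y2 = math.sqrt(max(d02**2 - x2**2, 0.0))
    return (0.0, 0.0), (d01, 0.0), (x2, y2)
```

Feeding in distances 3, 5, and 4 reproduces a 3-4-5 right triangle; scaling this idea up to millions of tags, with noisy counts, is what the custom algorithm handles.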
“The first time I saw a DNA microscopy image, it blew me away,” said Regev.
As proof of concept, the team used the technique to track cells that encode either green or red fluorescent proteins. Without any previous knowledge of their distribution, the DNA microscope efficiently parsed the two cell types, although the final images were blurrier than those obtained with a light microscope. The tech could also reliably map the location of individual human cancer cells by tagging the cells’ internal RNA molecules, although it couldn’t parse out fine details inside the cell.
A Whole New World
Thanks to DNA’s “stickiness,” the technique can be used to label multiple types of biomolecules, allowing researchers to track the location of and identify antibodies or surface proteins on any given cell type, although the team will have to further prove its effectiveness in tissue samples.
Although the resolution of DNA microscopy is currently on par with, if not slightly lower than, that of a light microscope, it provides an entirely new perspective on the biomolecules inside cells.
“We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does,” said Weinstein.
DNA microscopy already does things a light microscope can’t. It can parse out cells that visually look similar but have different genetic profiles, for example, which comes in handy for identifying various types of cancer and immune cells. Another example is neuroscience: as our brains develop, various cells drastically alter their genetic profiles while migrating long distances—DNA microscopy could allow researchers to precisely track their movements, potentially uncovering new ways to boost neuroregeneration or plasticity.
Only time will tell if DNA microscopy will reveal “previously inaccessible layers of information,” as the team hopes. But they believe that their invention will spark new ideas and uses in the scientific community.
“It’s not just a new technique, it’s a way of doing things that we haven’t ever considered doing before,” said Regev.
If you’ve ever wondered if you have normal nipples, you’re not alone. It’s easy to notice—and compare—the look, shape and size of your nipples with others and realize there’s definitely a nipple spectrum, from bumps on nipples to a variety of nipple colors.
And while the many differences you spot may give you pause, fear not: Chances are, you have totally normal nipples. Keep reading to learn 9 seemingly weird nipple things that are actually pretty run-of-the-mill.
First, a word about your nipples.
Before we start talking about “normal nipples,” let’s be clear on what we mean by “nipples.” Sometimes people think the entire pink or brown part of your boobs is your nipples but actually, your nipples are just the center part of the dark area—yep, where milk comes out if you breastfeed, according to the Cleveland Clinic. The dark skin surrounding the nipples is called the areola, and it has glands that secrete fluid to aid in breastfeeding.
Now that you know exactly what we mean by “nipples,” here are those “weird” issues that are actually just part of having normal nipples:
1. Your nipples are big or small.
The size of your nipples means nothing. Like really, nothing. There are all different sizes and shapes. Don’t believe us? Check out this (NSFW) gallery for a reality check on the wide range of what nips really look like. And if you had any kind of worry about your nipple size having any association with your health, don’t. “The size of your nipple has no relevance to cancer risk, for example,” Maggie DiNome, M.D., director of the Margie Peterson Breast Center at John Wayne Cancer Institute at Providence Saint John’s Health Center in Santa Monica, CA, tells SELF.
Likewise, Debra Patt, M.D., a medical oncologist and breast cancer specialist with Texas Oncology, a practice in the U.S. Oncology Network, agrees: “Generally the size variability in the nipple and nipple-areolar complex is not a medical condition, just physiologic variability.”
2. You think you’ve got the “wrong” nipple color.
Whether your nipples are so pale you can see your blue veins (oh, hey) or they’re a rich shade of brown, you needn’t worry—they’re totally normal. “Nipple color is not indicative of health in any way,” Patt tells SELF. “There’s natural variability in nipple color, just as there is in skintone variability with—and within—different ethnicities,” she adds. DiNome agrees that color is not usually indicative of breast pathology, but “a rash, crusting, or a lesion on the nipple or areolae may [be],” she says.
The exception here is if your nipples have suddenly turned red. Now, if you know why they’re red—for example, you went running and they chafed against your sports bra—then you’re good. Otherwise, head to the doc and let him or her know how your nipple color has changed. It could be a potential sign of breast cancer—specifically, Paget’s disease of the breast, a rare type of breast cancer that also comes with scaliness, itching, and yellow or bloody discharge, Kecia Gaither, M.D., an ob/gyn and women’s health expert in New York City, tells SELF. “With any major nipple changes, seek evaluation from your health provider,” she adds.
3. Your nipples don’t stick out—they stick in.
“Inverted nipples can be congenital, but they can also be acquired during one’s lifetime,” DiNome tells SELF. And they’re not that uncommon. In fact, it’s estimated that 10 to 20 percent of the female population have inverted nipples, which is when the nipples indent in the areola instead of standing above the surface of the breast, explains Gaither. Inverted nipples are totally safe and can happen with one or both breasts.
Generally, most women with inverted nipples can breastfeed normally, though they may pose some challenges, notes Patt. In some cases, inverted nipples can be altered surgically. If the inversion occurs as an adult, you may need to seek medical attention as it could be a sign of breast cancer, Patt adds.
4. There’s discharge coming out of your nipples.
“All ducts in the breast coalesce into the nipple, which is why women can breastfeed,” DiNome says. But for that same reason, women who are not breastfeeding can also have nipple discharge. “Most of the time it is physiologic, meaning it’s a result of normal processes,” DiNome explains. In up to 20 percent of women of reproductive age, having their breasts squeezed can elicit nip spillage. According to the National Institutes of Health, it can even happen from your bra or t-shirt rubbing against your boobs.
However, Gaither tells SELF “for any women who aren’t pregnant or breastfeeding, nipple discharge can be concerning… and it’s best to see your health provider” if you notice discharge of any type. (It can be clear or milky, yellowish, greenish or brownish.)
If it’s coming out of both breasts, or happens when you squeeze the nipple, it’s more likely to be due to something benign (aka, noncancerous) such as certain medications or herbs (like fennel), an injury, inflammation clogging the breast ducts, or an infection. In some cases, discharge can signal thyroid disease or be a sign of breast cancer. If the discharge is painful, bloody or green in hue, head to the doctor ASAP, Gaither suggests.
5. Your nipples are really sensitive—or not sensitive at all.
Nipples can have all sorts of feelings (and not the emotional kind). While some women find nipple play to be a snooze, other women “can achieve orgasm through nipple stimulation alone,” Patt tells SELF. If you’ve had your boobs surgically altered (#nojudgment, I’ve got a reduction in my future), the effect on nipple sensitivity varies. “In a breast reduction, generally the nerves which supply the nipples—specifically, the fourth intercostal nerve—are preserved, so there shouldn’t be any decrease in nipple sensation,” Gaither tells SELF. Similarly, a standard augmentation procedure won’t likely have an effect on sensation, either, says Patt, unless you have the nipples cosmetically altered or moved.
If your nipples are sensitive to the point where it’s painful, see the doctor. This could be a sign of breast cancer or mastitis, an infection of the breast.
6. You’ve got hair on your nips.
Really, it’s fine. “Hair around the nipples is generally linked to hormonal changes,” says Gaither. “Secondary to puberty, pregnancy, menstruation, or menopause, birth control pills may also stimulate hair growth there,” she says. So a few strands shouldn’t freak you out.
In certain situations, though, a condition known as hirsutism can occur, in which an abundance of hair grows because there’s excessive production of male hormones. “This can stem from medical diagnoses like polycystic ovarian syndrome (PCOS) and Cushing syndrome,” Gaither tells SELF. So while a few hairs aren’t anything to stress about, a whole lot of ’em should have you calling your doctor.
7. There are little bumps on your nipples (and around them).
Those little bumps? Those guys are Montgomery’s glands. (Yes, they have a name!) Quick anatomy lesson: The areola, the hyper-pigmented area surrounding the nipple, has these tubercles called Montgomery’s glands, which are normal sebaceous glands that surround the nipple, Patt tells SELF. And although they can vary in number (I’m talking as few as a couple and as many as dozens), they’re just little benign bumps on the areolae surrounding the nipple. While the jury’s still out on their actual function (besides a bit of sebum secretion), theories have suggested that they exist to help guide infants to the nipple to breastfeed. That’s pretty cute, so we’re into it.
8. You experienced nipple growth or a change in nipple color during and after pregnancy.
Any mother can anecdotally attest to nipple growth, and doctors note it too. “The nipple usually darkens during pregnancy and after delivery,” says Patt. “And initially—and frequently—after childbirth, the nipples change in preparation for nursing, where cracking and bleeding can be common,” she adds. Likewise, Gaither tells SELF that “the areolae become larger and the Montgomery’s glands may become more pronounced.”
A 2013 study in The Journal of Human Lactation found that over the course of pregnancy, women’s nipples got about 20 percent longer and 17 percent wider, while the whole areola widened anywhere from half a centimeter to 1.8 centimeters. Discharge is also common—specifically, the clear, milky kind that progresses in opacity throughout the gestational period.
9. You have an extra nipple.
It wasn’t only Chandler Bing: “Third” nipples do exist. Another little nipple is totally safe, and total NBD. “This is called a supernumerary, or accessory nipple,” Gaither tells SELF. “During the embryonic period of development, the breasts develop along ‘milk lines,’ which are the precursor to mammary glands and nipples.” And while the milk lines generally degenerate as the embryo ages, in some people they persist, producing the extra (you might say bonus) nipple.
When we talk about skin aging, we’re really talking about collagen—or, more accurately, a lack thereof. Pretty much every desirable characteristic of healthy skin comes down to collagen content: The more of this protein we have, the firmer, plumper, and juicier our skin looks.
But as we age—and particularly as we smoke, drink, and get UV exposure while aging—our collagen production drops off, and the collagen we already have starts to break down. This causes wrinkles, as well as a loss of plumpness or fullness. Addressing these symptoms means addressing collagen loss in one way or another.
To that end, there is a slew of collagen-rich products on the market, most of which fall into one of two categories: moisturizers (particularly creams) and oral supplements. Trendy supplements dominate today’s market, while collagen creams are a bit more old school.
But regardless of the form the product takes, the manufacturers claim that giving your skin more collagen to work with will help it replenish what it’s lost, improving everything from hydration and elasticity to fine lines and wrinkles. However, experts remain skeptical.
Can a moisturizer or supplement really help your skin cells produce more collagen?
The short answer is no. The long answer is maybe, but still probably no. To understand why, it helps to know a little bit more about collagen and how it’s made.
Collagen is the main structural protein in human connective tissues, most notably our skin. The vast majority of the collagen in our skin is found in the dermis (the second layer of skin that sits beneath the epidermis), where it’s also produced. Skin cells in the dermis (fibroblasts) synthesize the collagen that holds the rest of the dermis together, giving our skin its underlying structure.
As for the structure of collagen itself, it’s kind of like a braid or rope: Individual amino acids link up to form long chains, which bundle together to form thicker strands. Those strands then twist and coil around each other to form triple helices. Finally, those helices connect end to end and stack on top of each other to form clusters called fibrils. In other words, collagen is a pretty complex and massive molecule.
That’s why creams formulated with pure collagen simply can’t live up to their lofty claims—those huge braided molecules are just too big to penetrate your epidermis, and definitely too big to get down into the dermis where the real magic happens. So even though collagen creams feel nice and may help moisturize the skin, that’s about it in terms of benefits.
“Your skin might feel softer and smoother [or] your wrinkles might look less prominent, but that’s all an illusion—that’s just what’s happening on the surface,” Suzan Obagi, M.D., UPMC dermatologist and American Academy of Cosmetic Surgery president, tells SELF. “It’s not actually building collagen.”
To get around the sizing issue, most lotions, potions, and pills touting collagen as a main ingredient these days actually contain hydrolyzed collagen, or collagen peptides. (Fun fact: Gelatin is a form of hydrolyzed collagen!)
Essentially, hydrolyzed collagen has been broken down into smaller chains of amino acids called peptides, John Zampella, M.D., NYU Langone dermatologist, tells SELF. Some researchers and dermatologists believe these peptides “can traverse the skin cells in your outer skin barrier and make their way into the dermis, essentially [providing] the building blocks for fibroblasts to make new collagen,” Dr. Zampella says.
And it does seem plausible that applying a cream chock-full of these collagen precursors could help increase collagen production down the line, provided that those peptides do eventually make their way into the dermis. But this theory hasn’t really been tested, let alone experimentally proven.
Surprisingly enough, there is some research to suggest that oral collagen may improve skin appearance. According to at least three recent studies, taking collagen peptides orally is associated with improved skin hydration, elasticity, and wrinkling, as compared to placebos. However, these studies come with a few asterisks: they were small (around 60 participants each), short-term (4 to 12 weeks), and focused solely on women over 35.
The observed results could be due to increased collagen production, or some other mechanism. But either way, they’re mild at best and we have other options (such as retinoids) that are more likely to provide benefits. Plus, it’s important to remember that supplements aren’t FDA-regulated or tested the way drugs are, so you don’t necessarily know what you’re getting or how well it may work.
And if you eat a normal, balanced diet (including protein-rich foods such as meat, eggs, dairy, and beans), you’re probably already getting all the collagen you need.
So should I just throw out all my collagen products?
A little extra collagen probably won’t make a huge difference in your skin, but it’s also pretty harmless. So if you love your collagen peptide moisturizer or enjoy the perceived benefits of supplements and aren’t experiencing any negative side effects, by all means, keep it up. But if you really want to minimize collagen loss, there are more effective options out there, starting with—what else?—sunscreen.
“The number one thing is sunscreen—you obviously want to prevent your [existing] collagen from being broken down,” says Dr. Zampella. “Number two is a retinoid, because that’s the thing we have the most evidence for to build collagen.”
Dr. Obagi agrees, especially when you consider the cost of overpriced collagen products: “You can buy products that cost hundreds of dollars—if not a thousand or two—and I don’t know that [they’re] going to be better than a prescription retinoic acid. In fact, I can pretty much guess that [they won’t].”
If you’re wondering about the best way to manage wrinkles or any other side effects of collagen loss, talk to a dermatologist to get recommendations for your specific skin.
What if I told you that the behaviors that led you to success in the past were holding you back from success in the future? The idea feels a little unnerving—but that’s the point.
For leaders in today’s highly competitive and accelerating business climate, it turns out that relying on old behaviors, methods, and mindsets that worked in the past can become their biggest inhibitors to success tomorrow.
This is the big idea behind Barry O’Reilly’s recent bestselling book, Unlearn: Let Go of Past Success to Achieve Extraordinary Results. To the familiar prescription of continuous learning for leaders, O’Reilly adds a new factor: unlearning, which he believes holds the power to push leaders into uncomfortable and unfamiliar environments where true breakthroughs happen. In fact, for today’s leaders, success may well depend on unlearning the past, he argues.
Barry O’Reilly is a business advisor, entrepreneur, and author. He is pioneering new business methodologies at the intersection of business model innovation, product development, organizational design, and culture transformation. As Singularity University Faculty for Entrepreneurship and Organizational Design and advisor within SU Ventures, Barry has fine-tuned a unique approach that helps leaders create cultures of experimentation and learning that can unlock higher performance and results.
We sat down with Barry for an interview about his recent book to understand how leaders and executives can use his system—the Cycle of Unlearning—as a tool to reveal blind spots, trigger breakthroughs within new and unfamiliar environments, and drive new performance outcomes for teams.
Did you have a specific experience or “aha moment” that catalyzed your idea to write your recent book Unlearn?
I coach a lot of senior executives in Fortune 500 companies and also support leaders scaling their startups here in Silicon Valley. In my work I began seeing how critical it was for these executives and leaders to continuously adapt to the changing circumstances around them.
When you are the leader of a startup, for example, and you are scaling from 50 people to 100, 200, and 500, each increment changes the way you must run your company. This means that leaders have to constantly learn new methods to make them successful at each new stage of their organization, while also unlearning many of the methods and mindsets that made them successful in the last paradigm, which may no longer be relevant in the new one.
My aha moment was seeing how the importance and process of unlearning didn’t only happen once. I saw that it was a system, and that the more these leaders used the system, the more powerful it became. It turned into a unique learning cycle in which they could continuously adapt their behavior to the changing circumstances of markets, technologies, customer demands, and their own personal aspirations and outcomes they wished to achieve.
For those who haven’t read Unlearn yet, what is the main premise of the book?
For high-performance individuals to improve, it’s often not their ability to learn new things that gets in their way, but rather their inability to unlearn their existing mindsets, behaviors, and methods, which were effective in the past but now limit their future success.
Today, we have exponential technologies that are radically changing the ways that problems can be solved. Yet people still hold onto their linear mindsets of what worked for them in the past.
The problem is that these mindsets will likely not work for them in the future. Leaders have to recognize when the information and patterns they learned in the past have become obsolete and need to be unlearned.
This is why leaders need a system of both learning and unlearning.
How do you define this process of unlearning?
I define unlearning as a process of letting go of, moving away from, and reframing once-useful mindsets and behaviors that were effective in the past, but now limit success for the future.
It’s not about forgetting or discarding knowledge or experience, but rather learning how to let go of outdated information and actively gather and take in new information to inform effective decision-making and action.
When leaders notice that their behaviors are not driving the outcomes that they want, that is how they can know it’s time to adapt their behaviors to meet the new circumstances they’re facing in order to achieve the desired outcomes.
You start Unlearn with a fascinating example of how Serena Williams overturned her training approach and triggered a huge breakthrough in her performance. Can you expand on this?
What is truly exceptional about Serena is that she is getting better as she gets older, and this is largely unheard of in the world of professional tennis.
Even late in her career, she’s continually pushing herself to try new things and new approaches to training that are outside of her comfort zone, and doing this is enabling her to succeed and win. In fact, she’s been more successful in the last decade of her career than she was in the years before. She’s 37, and the average age at which professional tennis players retire is 27.
Serena is a huge inspiration for the idea of unlearning in a sport. She is someone who is continuously challenging herself to improve, regardless of whether she has setbacks or is not achieving the outcomes that she wants.
She’s continually finding new behaviors and new skills to help her improve. You can read more about her process of unlearning in my blog where I share how she transformed her training approach by taking on an unlikely coach.
You write about the power of unfamiliar, unknown, and uncertain situations to catalyze performance breakthroughs. Why are these three conditions so powerful in how they fit into the unlearning system?
None of your growth happens within your comfort zone. I emphasize this point a lot to leaders: they need to get comfortable with being uncomfortable if they want to grow.
The real problem here is that many people, and especially senior leaders, struggle with perfectionism. Many of these people are used to getting perfect results in their career. But if they always try to be perfect, they will never tackle the things that they know are unfamiliar, unknown, or uncertain. Instead they will stick within their comfort zone, but there’s no growth there.
What I’m always trying to encourage people to think about is that they need to focus on the big aspiration and outcomes that they want to achieve. But they also need to start small by experimenting with new behaviors in order to learn what works and what doesn’t for them and ultimately drive these outcomes.
When you think about the best leaders in many organizations today, they are constantly cultivating these characteristics in themselves and putting themselves in unfamiliar circumstances because it challenges them to grow. This is how great leaders get out of their comfort zone and continuously improve.
How can leaders know when there’s something they need to move away from or unlearn? What are some of the biggest obstacles to unlearning for individuals?
There are a few key signals leaders can look to that show that their existing behaviors are not working or may need to be unlearned.
These signals can include not achieving the desired outcomes, not living up to the expectations they have for themselves, avoiding a new challenge, or feeling they have “tried everything” and are still not getting the breakthrough they want.
Of course, there can be many obstacles to the process of unlearning; part of the beauty of the process is that to unlearn, you have to overcome them. Two obstacles I write about in the book are the desire to always be correct, and the unwillingness to embrace uncertainty, choosing instead to stick with what’s comfortable.
As Singularity University Faculty today, how did you first become involved with the Singularity University ecosystem?
I was initially drawn in by SU’s bold mission, and specifically the focus on the global grand challenges and how technology can be used to solve them.
My first work with SU was in the context of SU Ventures, which brings in advisors and mentors to help startups in their growth journey. I worked with a company called Active For Good with the mission to motivate people to become more active and healthy, while also helping malnourished children around the world.
Mentoring the Active For Good founders as part of SU Ventures was a really rewarding experience, and I did it pro bono. While I was working with SU in this capacity, it became clear that my content was a strong fit for helping entrepreneurs scale up their business as well as for helping corporations improve their innovation. It was at this point that SU invited me to become Faculty for corporate innovation and entrepreneurship.
Was there something that initially inspired you during your journey early on with SU?
That experience working with Active For Good, which was tackling the Food global grand challenge, gave me the chance to gain an understanding of what SU was all about, meet a lot of really interesting people, and become inspired by the big ideas and thinking that are at the core of what SU does.
All of this happened while I was also collaborating with other amazing Faculty and staff at SU Ventures. That was really inspiring, and then eventually as Faculty I started to go out and represent SU by giving talks about exponential technologies around the world.
Getting back to your book, unlearning in management is a large theme that you write about. Can you say more about this concept?
Absolutely. There’s an example in the book of unlearning leadership innovation with the executive teams from British Airways, Vueling, and Iberia—airlines owned by International Airlines Group.
A group of executives left their companies for eight weeks with the goal of launching new businesses that could disrupt their existing company, the airline industry at large, and themselves, through a program I offer called ExecCamp. In this program, I take executives out of their comfort zones and try to get them to act like entrepreneurs—and in doing so, give them the tools to build and release new products to disrupt their existing companies.
For many of these executives, it’s very challenging. They aren’t used to being back on the front line of innovating their products and services; instead, their day-to-day focus is usually on managing. Because of this, they have to unlearn a lot of their management processes and relearn new methods and mindsets, such as how to create and test hypotheses for business ideas and products. Going through this process can be uncomfortable, but the results can be genuine breakthroughs.
After the insights from ExecCamp, International Airlines Group, along with British Airways, Iberia, and Vueling, ended up creating an entirely new initiative called Hangar 51, which is now a venture capital group and accelerator program that invests in startups seeking to bring new ideas to life in the travel industry.
The effort has been transformational for the airline group. Startups that have taken part in the program, such as Assaia, which uses machine learning and AI to improve aircraft turnaround times, are already leading to exponential improvements in service. In fact, two years ago the company was profitable for the first time in five years, which shows the powerful impact that unlearning in management can have.
What is an example of a mindset, behavior, or method that a leader might need to unlearn in order to achieve success in the future?
A classic example is when you’ve been a contributor on a team, and then you are promoted to become a manager.
Often as a team contributor, your competency is doing your job really well. If you are a software engineer, for example, you may be a brilliant coder, and then because you’re so good, you get promoted to be an engineering manager. Previously, success meant creating lots of output and getting all your work done. But now you’re responsible for helping other people get their work done, and probably doing less coding than you were before.
When the team has a problem, it’s probably within your comfort zone to jump in and fix the code. But that’s not going to help you have an exponential impact as a manager of the team. Instead what you need to do is to create an environment where other people on the team can have an exponential impact by becoming more competent and empowered to solve these problems themselves.
In this example, a lot of the behaviors that made a coder successful in the past are not the same behaviors that will lead to success as a manager in the future. Actually, they will inhibit success. This is a classic transition from contributor to manager where a lot of unlearning is needed.
What is one of the most dangerous barriers to success that ingrained behavioral cycles can cause?
I would say it is the danger of believing that everything you are doing is working.
Bill Gates once said, “Success is a lousy teacher.” What this means is that you learn the most when you start to recognize that your behavior is not actually driving the outcomes that you want. The more successful we become, the more afraid we may be to try something different, because it might jeopardize our successful track record. But the danger here is that by just sticking to what we know, we’ll never grow.
In that sense, the most dangerous barrier to success is success itself.
Is there a call to action that you have to leaders who are reading this interview?
I would say leaders should ask themselves how they can start unlearning today. Think about a challenge in which they’re not achieving the outcomes that they want. Then I’d ask them to come up with five new behaviors that they can try—behaviors that are slightly outside their comfort zones—and then pick one behavior to try for a week to see if they get a breakthrough into the challenge they’re trying to tackle.
If people do this every week and continue iterating and experimenting with new behaviors, I believe they can begin achieving amazing results.
Thanks so much for taking the time to share your thoughts with our readers, Barry!
Thanks for the opportunity! I hope to meet more members of the SU community at future SU programs!
“There are people in the world who habitually say yes. And there are people who habitually say no. The people who say yes are rewarded by the adventures they get to go on, and the people who say no are rewarded by the safety they attain.”
—Keith Johnstone courtesy of SU Faculty Dan Klein, “Getting Comfortable with Ambiguity”
I am most definitely a “yes” person. So recently, thanks to sponsorship from my employer Westpac, I attended the Singularity University Executive Programme. It was nothing like I anticipated and better than I ever could have imagined.
But to try to recreate the experience for this audience in a blog post would do it a disservice. There is no way to replicate the impeccably curated, fully immersive experience of six days on the NASA campus in Sunnyvale, CA, with the world’s pre-eminent experts in their respective emerging fields: areas as diverse as genetic modification, exponential economics, augmented and virtual reality, artificial intelligence, entrepreneurialism, climate change, robotics… So instead I will try to do it justice by exploring seven of the key themes.
1) The pace of change will never be this slow again.
The programme is predicated on the concept of exponential growth and the fact that technological advancements of recent years have led to exponential outcomes across almost every part of our lives. As human beings, we experience this as disruptive and destabilising. But, if we can adapt, the opportunities for our own lives, the future of the institutions we work in and indeed the world, could be extraordinary.
“Humans are not equipped to process exponential growth. Our intuition is to use our assessment of how much change we’ve seen in the past to predict how much change we’ll see going forward. We tend to assume a constant rate of change. Exponential growth is both deceptive and astonishing, as the doubling of small numbers will inevitably lead to numbers that outpace our brain’s intuitive sense. This is meaningful because we as humans tend to overestimate what can be achieved in the short term, but vastly underestimate what can be achieved in the long term.”
—Introduction to Exponentials, V2.1 September 2016, Singularity University
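The deceptiveness of doubling is easy to make concrete. As a purely illustrative sketch (not from the programme materials), here is the difference between a constant rate of change and repeated doubling over just 30 steps:

```python
# Linear vs. exponential change over 30 steps: our intuition extrapolates
# a constant rate, but repeated doubling of small numbers quickly produces
# totals that outpace any intuitive sense of scale.
linear = 1
exponential = 1
for step in range(30):
    linear += 1        # constant rate of change: +1 per step
    exponential *= 2   # exponential change: doubling each step

print(linear)        # 31
print(exponential)   # 1073741824 (2**30, over a billion)
```

Thirty linear steps take you from 1 to 31; thirty doublings take you past a billion—which is why short-term change feels overestimated while long-term change is vastly underestimated.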
2) The convergence of technologies is the driver of exponential change.
The sheer array of subject areas we covered in a very short space of time was powerful. On the first two days alone we explored AR, VR, AI, robotics, synths, digital manufacturing, climate change, and futurism. I could start to see a clear correlation between the exponential pace of change and the convergence of these emerging technologies. Combine atoms with bits, or artificial GENERAL intelligence with a humanoid body (to form a synth), and you suddenly see how bringing these seemingly different fields together might further accelerate returns.
“The technology you use today is the worst it will ever be. Technology is moving faster than you think and will never move this slowly again.”
—Rob Nail, CEO Singularity University, “Leading in the Age of Disruption”
3) Ethics in an exponential world has a long way to go.
However, exploring the convergence of technology and humans in particular raises fundamental ethical questions, questions that are far from resolved by either the expert community or humanity at large.
Take artificial intelligence. The current focus on narrow AI is dependent on significant recent advances in deep learning which has been a major accelerator of exponential growth. At a basic level, it looks like enabling chatbots, virtual assistants, and automation of vast swathes of manual human activities—which is controversial in itself when you consider the impact on the workforce.
But deep learning has become so advanced that once we have given the algorithms their inputs and they have started learning, the process is so complex, with so many variables, that it becomes incomprehensible to the human brain how the AI is making the decisions it’s making. It is, in essence, a black box. Take the example from the United States of AI being used to decide sentencing when a person is found guilty of a crime. Given that the data inputs are things like demographics and where the felon lives, how can we trust that the algorithm is not biased by design, taking on all our inherent assumptions and perpetuating them through learned data?
The Theranos scandal is perhaps the most startling recent example of technology being a (literal) black box. Although admittedly, in that case, there was clear deception on a massive scale by the CEO. But as technology converges with the fundamentals of what it is to be a human being, our health, and indeed the very makeup of our DNA, we have to prioritise ethical and human issues alongside functional workings of the technology.
4) We now live in a world of complex challenges, not just complicated problems.
In an increasingly exponential world, there needs to be a distinct difference between our approach to tackling complex challenges as opposed to complicated problems.
Complicated problems are characterised by predictability: there is a “right” answer. They’re not necessarily easy to solve, but good practice and specialist expertise can usually find the answer, e.g., rebuilding Notre Dame, putting a man on the moon, or brain surgery. Complex challenges are typically unpredictable. There’s no one solution, but instead a direction. Patterns exist, but cause and effect are only seen in hindsight. There are many moving parts, and emergent practice is required to explore the challenge, e.g., culture change, behaviour change, or climate change.
Complicated problems require problem-solving, but this isn’t enough for complex challenges. The trouble is, starting by asking, “What is the problem we are trying to solve?” limits us to the current paradigm in which the problem already exists. Navigating complex challenges requires curiosity, experimentation, and a willingness to learn. You need to be willing to walk in others’ shoes to understand the challenge better and be able to stand back and see the system rather than the individual parts. Above all else, you need to be OK not having THE answer and instead explore a potential new paradigm enabling options to emerge.
5) Collaboration is too slow for an exponential world.
Did you know that, when asked to vote anonymously (i.e., without colleagues knowing the result of the vote), 76% of people say they prefer working alone to working in a team? Why? Because with collaboration comes a lot of friction—socialising concepts, navigating politics, achieving buy-in. In an exponential world, the need for speed will surpass the human desire for collaboration. The benefits of pace will supersede the benefits of everyone having an opinion and a say. That is not to say that we should discount input from customers and colleagues. But the approach will be more consistent with coordination than collaboration.
6) Behavioural transformation is as important as technological transformation.
“The next 20 years will be more transformative than the last 2,000 in terms of technological advances.”
—Ray Kurzweil, Co-Founder and Chancellor, Singularity University, “The Future Is Better Than You Think”
The challenge is that we, as human beings, are not transforming anywhere near as fast as the technologies around us. And businesses, governments, and institutions are failing to transform: indeed, 80–90% of all organisational transformations fail. This is because we treat transformation as a rational (technological) process when in reality it’s a fundamentally political (behavioural) process. By proactively tackling the human process as a core part of your transformation programme, you will deliver exponential outcomes. Behavioural science and neuroscience provide a valuable scientific basis for behavioural transformation strategies.
7) The future is already here—it’s just not very evenly distributed.
“The future is notoriously difficult to predict. Our shortcomings in seeing and extrapolating the exponential trends that will shape the coming century set the stage for us to experience perpetual future shock.”
—Ray Kurzweil, Co-Founder and Chancellor, Singularity University, “The Future Is Better Than You Think”
For most of the planet, all of the above may sound like the realms of science fiction and a million miles away from their reality today. Indeed one could interpret the current political divergence across the planet as stark evidence of the uneven distribution of understanding and access to the future. It was clear in presentation after presentation that the next is already in the now. Part of our role as alumni from over 20 countries is to start to distribute an understanding of an exponential world and help enable democratisation of access to the benefits.
The good news? Whilst the sheer pace and complexity of technological and human advancement is mind-blowing, we actually already have everything we need to solve the world’s problems: critical, complex challenges like climate change, poverty, and distributed prosperity. I left the Executive Programme with an invigorated sense of purpose and a vast toolkit and network to tap into to help me make a dent in the world. I’m going to work on making Australia one of the world’s great innovation nations, and in doing so enable innovative advances that will positively impact the whole of humanity.
An army of free-floating minibrain clones is heading your way!
No, that’s not the premise of a classic sci-fi brain-in-jars blockbuster. Rather, a team at Harvard has figured out a way to “clone” brain organoids, in the sense that every brain blob, regardless of what type of stem cell it originated from, developed nearly identically into tissues that mimic the fetal human cortex. No, they didn’t copy-paste one minibrain to make a bunch more—rather, the team found a recipe to reliably cook up hundreds of 3D brain models with very little variability in their cellular constituents.
If that sounds like a big ole “meh,” think of it like this. Minibrains, much like the organs they imitate, are delicate snowflakes, each with their own unique cellular makeup. Sure, no two human brains are exactly the same, even those of twins. However, our noggins do follow a relatively set pathway in initial development and end up with predictable structures, cell types, and connections.
Not so for minibrains. “Until now, each … makes its own special mix of cell types in a way that could not have been predicted at the outset,” explained study author Dr. Paola Arlotta. By compiling a cellular atlas from multiple minibrains, her team basically found a blueprint that coaxes stem cells from different genetic and gender origins to mature into remarkably similar structures, at least in terms of cellular composition. Putting it another way, they farmed a bunch of identical siblings; but rather than people, they’re free-floating brain blobs.
Rest assured, it’s not a new evil-scientist-brain-control scheme. For brain organoids to be useful in neuroscience—for example, understanding how autism or schizophrenia emerge—scientists need large amounts of near-identical test subjects. This is why twins are extremely valuable in research: all else (nearly) equal, they help isolate the effects of individual treatments or environmental changes.
“It is now possible to compare ‘control’ organoids with ones we create with mutations we know to be associated with the disease,” said Arlotta. “This will give us a lot more certainty about which differences are meaningful.”
How to Grow a Brain
The authors set out with a slightly different question in mind: is it possible to reliably grow a brain outside the womb?
You may be asking “why not?” After all, scientists have been cooking up brain organoids for half a decade. But although specific instructions are generally similar, the resulting minibrains—not so much.
Here’s the classic recipe. Scientists start with harvested stem cells, embed them into gel-like scaffolds, then carefully nurture them in a chemical soup tailored to push the cells to divide, migrate, and mature into tiny balls. These tissue nuggets are then transferred to a slowly spinning bioreactor—imagine a giant high-tech smoothie machine. The gentle whirling motion keeps the liquid nicely oxygenated. In six months, the grains of greyish tissue expand to a few millimeters, or one-tenth of the width of your finger, packed full of interconnected brain cells.
This is the “throw all ingredients into a pot and see what happens” approach, more academically known as the unguided method. Because scientists don’t interfere with the brain balls’ growth, the protocol gives stem cells the most freedom to self-organize. It also allows stem cells to stochastically choose what type of brain cell—neurons, glia, immune cells—they eventually become. God may not play dice, but outside the womb, stem cells sure do.
That’s problematic. Depending on the initial stem cell population, the culture conditions, and even the particular batch, the resulting minibrains end up with highly unpredictable proportions of cell types arranged in unique ways. This makes controlled experimenting with minibrains extremely difficult, which nixes their whole raison d’être.
Similar to their human counterparts, unguided minibrains also follow the instructions laid out in their DNA. So what gives?
Our brains do not grow in isolation. Rather, they’re guided by a myriad of local chemical messengers, hormones, and mechanical forces in the womb—all of which are absent inside the spinning bioreactor. A more recent way to grow brain blobs is the guided method: scientists add a slew of “external patterning factors” at a very early stage of development, when stem cells first begin to choose their fate. These factors are basically biological fairy dust that push minibrain structures into a particular “pattern,” essentially sealing their fate.
Brain organoids grown this way are generally more consistent in cellular architecture once they mature. For example, many consistently develop the multi-striped pattern characteristic of the cerebral cortex—the outermost layer of the brain involved in sensation, reasoning, and other higher cognitive functions. But do they also resemble each other in their cellular makeup?
Reliable Brain Farming
The team first used both approaches to foster several dozen minibrains for half a year. They began with multiple types of stem cells from both male and female donors: induced pluripotent stem cells, which are skin cells returned to a youth-like stage, immortal human embryonic stem cells, and others.
They then carefully analyzed the resulting brainoids’ genetic makeup at multiple time points to track their growth. The team tapped an extremely powerful—and increasingly popular—tool called single-cell RNA sequencing, which provides invaluable insight into gene expression in every single cell.
In all, they parsed the genetic fingerprints of over 100,000 cells from 21 organoids, and matched those data to existing databases to tease out the cells’ identities. Finally, the team mapped out the distribution of each cell type in every analyzed organoid.
Unsurprisingly, those grown with the unguided method had cellular profiles all over the place. But with the guided approach—particularly ones dubbed the “dorsally patterned” type—95 percent were “virtually indistinguishable” in their cellular composition. What’s more, these minibrains also followed incredibly similar development trajectories, in that different cell types popped up at near-identical time points. Even their cellular origin didn’t matter: organoids grown from different stem cells were consistent in their final cellular inhabitants.
Conclusion? The embryo isn’t required for our brain to produce all its cellular diversity; it’s totally possible to reliably grow brainoids outside the womb.
The results are a huge boon for studying neurological diseases such as autism, epilepsy, and schizophrenia. Scientists believe that the root cause of these complex disorders lies somewhere in the tangled dance of fetal brain growth. So far, a clear cause has been elusive.
Using the guided “dorsally patterned” recipe, teams can now grow organoids from stem cells derived from patients, or genetically engineer pathological mutations to study their effects. Because the study proves minibrains made this way are remarkably similar, researchers will be able to nail down risk factors—and test potential treatments—without worrying about biological noise stemming from minibrain diversity.
Arlotta is already exploring possibilities. Using CRISPR, she plans to edit genes potentially linked to autism in stem cells, and grow them out as minibrains. Using the same technique, she can also make “control” organoids as a baseline for her experiments.
We can now “move much more swiftly towards concrete interventions, because they will direct us to the specific genetic features that give rise to the disease,” she said. “We will be able to ask far more precise questions about what goes wrong in the context of psychiatric illness.”