Standardized uptake value


https://en.wikipedia.org/wiki/Standardized_uptake_value?wprov=sfla1


Luck is like internet connection, here is how to turn it on anytime anywhere


https://medium.com/@spirituality/luck-is-like-internet-connection-here-is-how-to-turn-it-on-anytime-anywhere-9fb9a176b049

5 Symptoms You Will Experience When Your Pineal Gland Fully Activates


https://www.lifecoachcode.com/2018/02/11/symptoms-you-experience-when-pineal-gland-fully-activates/

ACOG Guidelines on management of intrapartum Intra-amniotic Infection


https://speciality.medicaldialogues.in/acog-guidelines-on-intra-amniotic-infection/

10 Habits for Making a Strong Relationship


Just as four walls don’t make a home until they are filled with love and togetherness, a strong relationship is not built by two people alone, but by their patience, understanding, and hope.

Here are a few tips to help you build a strong relationship.

1. Kiss and Bliss

According to Dr. Samantha Rodman, a kiss has great power. A couple should kiss before going to work and before going to sleep. Eye contact and touch twice a day send greetings of love through a small, special gesture, reminding each partner how important they are to the other.

2. Appreciate Your Other Half

Appreciation is another way of showing love, a gesture of support even when it seems hardly necessary. Sexologist and sociology professor Pepper Schwartz talks about how important these small expressions of appreciation are, for example, “you look beautiful today.”

3. Conflicts are Settled

Two people living together are bound to differ in their opinions, and conflicts therefore arise. But couples who value each other resolve those issues. Therapist Kurt Smith advises couples to take note of each other’s bad habits and resolve them over time.

4. Positivity Always Works

Each of us has some habits that are not good. But relationship specialists suggest ignoring the negatives while concentrating on the good qualities your other half possesses.

5. Proudly Showing Off Each Other

No matter what others say, showing affection publicly creates a bond between the two, a feeling of closeness. Aaron Anderson, a family therapist, suggests that holding hands in public, or a little closeness at the theatre with friends, can strengthen the relationship.

6. Clear Talks

Waiting for your partner to guess your needs only complicates matters. A good relationship stands on a strong base where both people are comfortable enough to share their needs without thinking twice.

7. Time is the Best Gift

In a relationship, time is the best gift two people can share. Time spent together builds love, friendship, and compatibility. Kept alive over the long term, it works as a booster for the relationship.

8. Share Your Sense Of Humor

Don’t save the jokes for when friends are around; share those funny punchlines while spending time with your partner too. Laughing together often breaks the ice if there is any, says Samantha Rodman.

9. Dividing the Burdens

One of those burdens can be resources and how they are allotted. A good relationship should have no barriers to discussing these kinds of serious issues. Yet people often raise them only when the need arises and there is no other way out.

10. Worrying is Caring

Worrying, or giving the benefit of the doubt, is a clear sign that the other person still cares, and that is what keeps the flame alive in a relationship; you are never bored of each other. According to Dr. Marry Land, it is a strength of a happy relationship.


How to Train Your Brain to Stop Worrying


We all know that an excessive amount of worrying and stress can completely derail our mental health. That should be enough to get everyone’s attention. But just in case you didn’t know, it also has some very visible and equally damaging effects on your physical health. Don’t believe us yet? Read on to know more.

Some advocate that a little anxiety is a good thing, and yes, if we aren’t under some sort of pressure, we tend to take longer to complete things. But everyone’s threshold is different, and each of us needs to notice when things are getting out of hand. Physical signs of excessive worrying include a sharply elevated heart rate, heavy sweating, and difficulty breathing.

Stress is also a signal to our body to prepare itself for a fight, one of the survival mechanisms our ancestors developed to cope with the harsh living conditions of the Stone Age.

The muscles tense up and draw in blood, leaving you looking pale and drained. And since there is no physical release for this built-up pressure, it turns inwards, causing aches in different parts of the body such as your legs, back, and head, along with other signs such as trembling. If it is more severe, it can also impair your digestion, resulting in either constipation or diarrhea.

And if you worry too much, it will also affect your immune system. Too much stress weakens your immunity, making you more susceptible to various infections, ranging from the common cold to much more dangerous illnesses.

Thankfully, reversing or completely avoiding all these signs is in your hands. The brain adapts easily, and you can train it to worry less by introducing some changes to your behavior and everyday activities.

Follow these simple steps to make your mind stop worrying.

#1 Put it into words.

Stop crowding your mind with worries. Write down everything that is causing you anxiety. First, it will de-clutter your mind; second, it will give an abstract idea a concrete form; and finally, it will help you move from the problem to its best- and worst-case scenarios, as well as possible solutions.

#2 Try meditating.

When your thoughts become too overwhelming, it is a good idea to clear your mind of everything. Even a couple of minutes of meditation will make a noticeable difference.

#3 Use your stress to fuel your workout.

Exercise brings down the levels of stress hormones in your body (cortisol and adrenaline) by using them up. It also increases the level of endorphins in your system, the body’s natural pick-me-up hormones.

To Make AI Smarter, Humans Perform Oddball Low-Paid Tasks


Tucked into a back corner far from the street, the baby-food section of Whole Foods in San Francisco’s SoMa district doesn’t get much foot traffic. I glance around for the security guard, then reach towards the apple and broccoli superfood puffs. After dropping them into my empty shopping cart, I put them right back. “Did you get it?” I ask my coworker filming on his iPhone. It’s my first paid acting gig. I’m helping teach software the skills needed for future robots to help people with their shopping.

Whole Foods was an unwitting participant in this program, a project of German-Canadian startup Twenty Billion Neurons. I quietly perform nine other brief actions, including opening freezers, and pushing a cart from right to left, then left to right. Then I walk out without buying a thing. Later, it takes me around 30 minutes to edit the clips to the required 2 to 5 seconds, and upload them to Amazon’s crowdsourcing website Mechanical Turk. A few days later I am paid $3.50. If Twenty Billion ever creates software for a shopping assistant robot, it will make much more.

In sneaking around Whole Foods, I joined an invisible workforce being paid very little to do odd things in the name of advancing artificial intelligence. You may have been told AI is the gleaming pinnacle of technology. These workers are part of the messy human reality behind it.

Proponents believe every aspect of life and business should be and will be mediated by AI. It’s a campaign stoked by large tech companies such as Alphabet showing that machine learning can master tasks such as recognizing speech or images. But most current machine-learning systems such as voice assistants are built by training algorithms with giant stocks of labeled data. The labels come from ranks of contractors examining images, audio, or other data—that’s a koala, that’s a cat, she said “car.”

Now, researchers and entrepreneurs want to see AI understand and act in the physical world. Hence the need for workers to act out scenes in supermarkets and homes. They are generating the instructional material to teach algorithms about the world and the people in it.

That’s why I find myself lying face down on WIRED’s office floor one morning, coarse synthetic fibers pressing into my cheek. My editor snaps a photo. After uploading it to Mechanical Turk, I get paid 7 cents by an eight-person startup in Berkeley called Safely You. When I call CEO George Netscher to say thanks, he erupts in a surprised laugh, then turns mock serious. “Does that mean there’s a conflict of interest?” (The $6.30 I made reporting this article has been donated to the Haight Ashbury Free Clinics.)

Netscher’s startup makes software that monitors video feeds from elderly-care homes, to detect when a resident has fallen. People with dementia often can’t remember why or how they ended up on the floor. In 11 facilities around California, Safely You’s algorithms help staff quickly find the place in a video that will unseal the mystery.

Safely You was soliciting faked falls like mine to test how broad a view its system has of what a toppled human looks like. The company’s software has mostly been trained with video of elderly residents from care facilities, annotated by staff or contractors. Mixing in photos of 34-year-old journalists and anyone else willing to lie down for 7 cents should force the machine-learning algorithms to widen their understanding. “We’re trying to see how well we can generalize to arbitrary incidents or rooms or clothing,” says Netscher.

The startup that paid for my Whole Foods performance, Twenty Billion Neurons, is a bolder bet on the idea of paying people to perform for an audience of algorithms. Roland Memisevic, cofounder and CEO, is in the process of trademarking a term for what I did to earn my $3.50—crowd acting. He argues that it is the only practical path to give machines a dash of common sense about the physical world, a longstanding quest in AI. The company is gathering millions of crowd-acting videos, and using them to train software it hopes to sell clients in industries such as automobiles, retail, and home appliances.

Games like chess and Go, with their finite, regimented boards and well-defined rules, are well-suited to computers. The physical and spatial common sense we learn intuitively as children to navigate the real world is mostly beyond them. To pour a cup of coffee, you effortlessly grasp and balance cup and carafe, and control the arc of the pouring fluid. You draw on the same deep-seated knowledge, and a sense for the motivations of other humans, to interpret what you see in the world around you.

How to give some version of that to machines is a major challenge in AI. Some researchers think that the techniques that are so effective for recognizing speech or images won’t be much help, arguing new techniques are needed. Memisevic took leave from the prestigious Montreal Institute of Learning Algorithms to start Twenty Billion because he believes that existing techniques can do much more for us if trained properly. “They work incredibly well,” he says. “Why not extend them to more subtle aspects of reality by forcing them to learn things about the real world?”

To do that, the startup is amassing giant collections of clips in which crowd actors perform different physical actions. The hope is that algorithms trained to distinguish them will “learn” the essence of the physical world and human actions. It’s why when crowd acting in Whole Foods I not only took items from shelves and refrigerators, but also made near identical clips in which I only pretended to grab the product.

Twenty Billion’s first dataset, now released as open source, is physical reality 101. Its more than 100,000 clips depict simple manipulations of everyday objects. Disembodied hands pick up shoes, place a remote control inside a cardboard box, and push a green chili along a table until it falls off. Memisevic deflects questions about the client behind the casting call that I answered, which declared, “We want to build a robot that assists you while shopping in the supermarket.” He will say that automotive applications are a big area of interest; the company has worked with BMW. I see jobs posted to Mechanical Turk that describe a project, with only Twenty Billion’s name attached, aimed at allowing a car to identify what people are doing inside a vehicle. Workers were asked to feign snacking, dozing off, or reading in chairs. Software that can detect those actions might help semi-automated vehicles know when a human isn’t ready to take over the driving, or pop open a cupholder when you enter holding a drink.

Who are the crowd actors doing this work? One is Uğur Büyükşahin, a third-year geological engineering student in Ankara, Turkey, and star of hundreds of videos in Twenty Billion’s collection. He estimates spending about 7 to 10 hours a week on Mechanical Turk, earning roughly as much as he did in a shift with good tips at the restaurant where he used to work. Büyükşahin says Twenty Billion is one of his favorites, because it pays well, and promptly. Their sometimes odd assignments don’t bother him. “Some people may be shy about taking hundreds of videos in the supermarket, but I’m not,” Büyükşahin says. His girlfriend, by nature less outgoing, was initially wary of the project, but has come around after seeing his earnings, some of which have translated into gifts, such as a new set of curling tongs.

Büyükşahin and another Turker I speak with, Casey Cowden, a 31-year-old in Johnson City, Tennessee, tell me I’ve been doing crowd acting all wrong. All in, my 10 videos earned me an hourly rate of around $4.60. They achieve much higher rates by staying in the supermarket for hours at a time, bingeing on Twenty Billion’s tasks.

Büyükşahin says his personal record is 110 supermarket videos in a single hour. He uses a gimbal for higher-quality shots, batting away inquisitive store employees when necessary by bluffing about a university research project in AI. Cowden calculates that he and a friend each earned an hourly rate of $11.75 during two and a half hours of crowd acting in a local Walmart. That’s more than Walmart’s $11 starting wage, or the $7.75 or so Cowden’s fiancee earns at Burger King.

Cowden seems to have more fun than Walmart employees, too. He began Turking early last year, after the construction company he was working for folded. Working from home means he can be around to care for his fiancee’s mother, who has Alzheimer’s. He says he was initially drawn to Twenty Billion’s assignments because, with the right strategy, they pay better than the data-entry work that dominates Mechanical Turk. But he also warmed to the idea of working on a technological frontier. Cowden tells me he tries to vary the backdrop, and even the clothing he wears, in different shoots. “You can’t train a robot to shop in a supermarket if the videos you have are all the same,” Cowden tells me. “I try to go the whole nine yards so the programming can get a diverse view.”

Mechanical Turk has often been called a modern-day sweatshop. A recent study found that median pay was around $2 an hour. But it lacks the communal atmosphere of a workhouse. The site’s labor is atomized into individuals working from homes or phones around the world.

Crowd acting sometimes gives workers a chance to look each other in the face. Twenty Billion employs contract workers who review crowd-acting videos. But in a tactic common on Mechanical Turk, the startup sometimes uses crowd workers to review other crowd workers. I am paid 10 cents to review 50 clips of crowd actors working on the startup’s automotive project. I click to indicate if a worker stuck to the script—“falling asleep while sitting,” “drinking something from a cup or can,” or “holding something in both hands.”

A video from Twenty Billion Neurons describing its work.

The task transports me to bedrooms, lounges, and bathrooms. Many appear to be in places where 10 cents goes further than in San Francisco. I begin to appreciate different styles of acting. To fake falling asleep, a shirtless man in a darkened room leans gently backwards with a meditative look; a woman who appears to be inside a closet lets her head snap forward like a puppet with a cut string.

Some of the crowd actors are children—a breach of Amazon’s terms, which require workers to be at least 18. One Asian boy of around 9 in school uniform looks out from a grubby plastic chair in front of a chipped whitewashed wall, then feigns sleep. Another Asian boy, slightly older, performs “drinking from a cup or a can” while another child lies on a bed behind him. Twenty Billion’s CTO Ingo Bax tells me the company screens out such videos from its final datasets, but can’t rule out having paid out money for clips of child crowd actors before they were filtered. Memisevic says the company has protocols to prevent systematic payment for such material.

Children also appear in a trove of crowd-acting videos I discover on YouTube. In dozens of clips apparently made public by accident, people act out scripts like “One person runs down the stairs laughing holding a cup of coffee, while another person is fixing the doorknob.” Most appear to have been shot on the Indian subcontinent. Some have been captured by a crowd actor holding a phone to his or her forehead, for a first-person view.

I find the videos while trying to unmask the person behind crowd-acting jobs on Mechanical Turk from the “AI Indoors Project.” Forums where crowd workers gather to gripe and swap tips reveal that it’s a collaboration between Carnegie Mellon University and the Allen Institute for AI in Seattle. Like Twenty Billion, they are gathering crowd-acted videos by the thousand to try and improve algorithms’ understanding of the physical world and what we do in it. Nearly 10,000 clips have already been released for other researchers to play with in a collection aptly named Charades.

Gunnar Atli Sigurdsson, a grad student on the project, echoes Memisevic when I ask why he’s paying strangers to pour drinks or run down stairs with a phone held to their head. He wants algorithms to understand us. “We’ve been seeing AI systems getting very impressive at some very narrow, well-defined tasks like chess and Go,” Sigurdsson says. “But we want to have an AI butler in our apartment and have it understand our lives, not the stuff we’re posting on Facebook, the really boring stuff.”

If tech companies conquer that quotidian frontier of AI, it will likely be seen as the latest triumph of machine-learning experts. If Twenty Billion’s approach works out, the truth will be messier and more interesting. If you ever get help from a robot in a supermarket, or ride in a car that understands what its occupants are doing, think of the crowd actors who may have trained it. Cowden, the Tennessean, says he liked Twenty Billion’s tasks in part because his mother is fighting bone cancer. Robots and software able to understand and intervene in our world could help address the growing shortage of nurses and home-health aides. If the projects they contribute to are successful, crowd actors could change the world—although they may be among the last to benefit.

The red and green specialists: why human colour vision is so odd



Most mammals rely on scent rather than sight. Look at a dog’s eyes, for example: they’re usually on the sides of its face, not close together and forward-facing like ours. Having eyes on the side is good for creating a broad field of vision, but bad for depth perception and accurately judging distances in front. Instead of having good vision, dogs, horses, mice, antelope – in fact, most mammals generally – have long damp snouts that they use to sniff things with. It is we humans, and apes and monkeys, who are different. And, as we will see, there is something particularly unusual about our vision that requires an explanation.

Over time, perhaps as primates came to occupy more diurnal niches with lots of light to see, we somehow evolved to be less reliant on smell and more reliant on vision. We lost our wet noses and snouts, and our eyes moved to the front of our faces and closer together, improving our ability to judge distances (better stereoscopy, or binocular vision). In addition, Old World monkeys and apes (called catarrhines) evolved trichromacy: red-, green- and blue-colour vision. Most other mammals have two different types of colour photoreceptors (cones) in their eyes, but the catarrhine ancestor experienced a gene duplication, which created three different genes for colour vision. Each of these now codes for a photoreceptor that can detect different wavelengths of light: one at short wavelengths (blue), one at medium wavelengths (green), and one at long wavelengths (red). And so, the story goes, our ancestors evolved forward-facing eyes and trichromatic colour vision – and we’ve never looked back.

Figure 1. The spectral sensitivities of the colour cones of a honeybee. Reproduced based on Osorio & Vorobyev, 2005
Figure 2. The spectral sensitivities of the colour sensors of a digital camera. Reproduced based on original data of the Author’s.

Colour vision works by capturing light at multiple different wavelengths, and then comparing between them to determine the wavelengths being reflected from an object (its colour). A blue colour will strongly stimulate a receptor at short wavelengths, and weakly stimulate a receptor at long wavelengths, while a red colour would do the opposite. By comparing between the relative stimulation of those shortwave (blue) and longwave (red) receptors, we are able to distinguish those colours.
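
To make that comparison concrete, here is a minimal Python sketch of the opponent-style calculation. The Gaussian sensitivity curves, peak wavelengths and bandwidths are illustrative placeholders, not measured human cone data.

```python
import numpy as np

wavelengths = np.arange(400, 701)  # visible range, in nm

def cone_sensitivity(peak_nm, width_nm=40):
    """Gaussian stand-in for a photoreceptor's spectral sensitivity curve."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

short_cone = cone_sensitivity(440)   # 'blue' receptor (placeholder peak)
long_cone = cone_sensitivity(610)    # 'red' receptor (placeholder peak)

def narrowband_light(centre_nm, width_nm=15):
    """A simple narrowband stimulus centred on one wavelength."""
    return np.exp(-0.5 * ((wavelengths - centre_nm) / width_nm) ** 2)

for name, light in [("blue light", narrowband_light(450)),
                    ("red light", narrowband_light(620))]:
    s = float(np.sum(short_cone * light))  # shortwave receptor response
    l = float(np.sum(long_cone * light))   # longwave receptor response
    # The comparison between the two responses is what signals the colour.
    print(f"{name}: short={s:.1f}, long={l:.1f}, short/long ratio={s / l:.2f}")
```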

In order to best capture different wavelengths of light, cones should be evenly spaced across the spectrum of light visible to humans, which is about 400-700nm. When we look at the cone spacing of the honeybee (fig. 1), which is also trichromatic, we can see that even spacing is indeed the case. Similarly, digital cameras’ sensors (fig. 2) need to be nicely spaced out to capture colours. This even cone/sensor spacing gives a good spectral coverage of the available wavelengths of light, and excellent chromatic coverage. But this isn’t exactly how our own vision works.

Figure 3. The spectral sensitivities of the colour cones of a human. Reproduced based on Osorio & Vorobyev, 2005

Our own vision does not have this even spectral spacing (fig. 3). In humans and other catarrhines, the red and green cones largely overlap. This means that we prioritise distinguishing a few types of colours really well – specifically, red and green – at the expense of being able to see as many colours as we possibly might. This is peculiar. Why do we prioritise differentiating red from green?

Several explanations have been proposed. Perhaps the simplest is that this is an example of what biologists call evolutionary constraint. The gene that encodes for our green receptor, and the gene that encodes for our red receptor, evolved via a gene duplication. It’s likely that they would have originally been almost identical in their sensitivities, and perhaps there has just not been enough time, or enough evolutionary selection, for them to become different.

Another explanation emphasises the evolutionary advantages of a close red-green cone arrangement. Since it makes us particularly good at distinguishing between greenish to reddish colours – and between different shades of pinks and reds – then we might be better at identifying ripening fruits, which typically change from green to red and orange colours as they ripen. There is an abundance of evidence that this effect is real, and marked. Trichromatic humans are much better at picking out ripening fruit from green foliage than dichromatic humans (usually so-called red-green colourblind individuals). More importantly, normal trichromatic humans are much better at this task than individuals experimentally given simulated even-spaced trichromacy. In New World monkeys, where some individuals are trichromatic and some dichromatic, trichromats detect ripening fruit much quicker than dichromats, and without sniffing it to the same extent. As fruit is a critical part of the diet of many primates, fruit-detection is a plausible selection pressure, not just for the evolution of trichromacy generally, but also for our specific, unusual form of trichromacy.

A final explanation relates to social signalling. Many primate species use reddish colours, such as the bright red nose of the mandrill and the red chest patch of the gelada, in social communication. Similarly, humans indicate emotions through colour changes to our faces that relate to blood flow, being paler when we feel sick or worried, blushing when we are embarrassed, and so on. Perhaps detection of such cues and signals might be involved in the evolution of our unusual cone spacing?

Recently, my colleagues and I tested this hypothesis experimentally. We took images of the faces of rhesus monkey females, which redden when females are interested in mating. We prepared experiments in which human observers saw pairs of images of the same female, one when she was interested in mating, and one when she was not. Participants were asked to choose the mating face, but we altered how faces appeared to those participants. In some trials, human observers saw the original images, but in other trials they saw the images with a colour transformation, which mimicked what an observer would see with a different visual system.
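
The exact colour transformation used in the experiment is not described here, so the sketch below uses a deliberately crude stand-in: averaging the red and green channels of an RGB image to mimic the loss of red-green discrimination. Published simulations of dichromacy work in cone (LMS) space rather than raw RGB, so treat this only as an illustration of the general idea.

```python
import numpy as np

def simulate_red_green_loss(image_rgb):
    """Toy illustration: collapse the red-green axis by replacing the R and G
    channels with their mean, leaving B untouched. Not the published method."""
    img = image_rgb.astype(float)
    rg_mean = img[..., :2].mean(axis=-1, keepdims=True)
    out = img.copy()
    out[..., 0:2] = rg_mean
    return out.clip(0, 255).astype(np.uint8)

# Hypothetical usage: observers would see pairs of original vs transformed
# face images, as in the experiment described above.
face = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # placeholder image
transformed = simulate_red_green_loss(face)
```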

By comparing multiple types of trichromacy and dichromacy in this way, we found that human observers performed best at this task when they saw with normal human trichromatic vision – and they performed much better with their regular vision than with trichromacy with even cone spacing (that is, without red-green cone overlap). Our results were consistent with the social signalling hypothesis: the human visual system is the best of those tested at detecting social information from the faces of other primates.

However, we tested only a necessary condition of the hypothesis, that our colour vision is better at this task than other possible vision types we might design. It might be that it is the signals themselves that evolved to exploit the wavelengths that our eyes were already sensitive to, rather than the other way round. It is also possible that multiple explanations are involved. One or more factors might be related to the origin of our cone spacing (for example, fruit-eating), while other factors might be related to the evolutionary maintenance of that spacing once it had evolved (for example, social signalling).

It is still not known exactly why humans have such strange colour vision. It could be due to foraging, social signalling, evolutionary constraint – or some other explanation. However, there are many tools to investigate the question, such as genetic sequencing of an individual’s colour vision, experimental simulation of different colour vision types combined with behavioural performance testing, and observations of wild primates that see different colours. There’s something strange about the way we see colours. We have prioritised distinguishing a few types of colours really well, at the expense of being able to see as many colours as we possibly might. One day, we hope to know why.

Planes don’t flap their wings: does AI work like a brain?


A replica of Jacques de Vaucanson’s digesting duck automaton. Courtesy the Museum of Automata, Grenoble, France

In 1739, Parisians flocked to see an exhibition of automata by the French inventor Jacques de Vaucanson performing feats assumed impossible by machines. In addition to human-like flute and drum players, the collection contained a golden duck, standing on a pedestal, quacking and defecating. It was, in fact, a digesting duck. When offered pellets by the exhibitor, it would pick them out of his hand and consume them with a gulp. Later, it would excrete a gritty green waste from its back end, to the amazement of audience members.

Vaucanson died in 1782 with his reputation as a trailblazer in artificial digestion intact. Sixty years later, the French magician Jean-Eugène Robert-Houdin gained possession of the famous duck and set about repairing it. Taking it apart, however, he realised that the duck had no digestive tract. Rather than breaking down the food, the pellets the duck was fed went into one container, and pre-loaded green-dyed breadcrumbs came out of another.

The field of artificial intelligence (AI) is currently exploding, with computers able to perform at near- or above-human level on tasks as diverse as video games, language translation, trivia and facial identification. Like the French exhibit-goers, any observer would be rightly impressed by these results. What might be less clear, however, is how these results are being achieved. Does modern AI achieve these feats by functioning the way that biological brains do, and how can we know?

In the realm of replication, definitions are important. An intuitive response to hearing about Vaucanson’s cheat is not to say that the duck is doing digestion differently but rather that it’s not doing digestion at all. But a similar trend appears in AI. Checkers? Chess? Go? All were considered formidable tests of intelligence until they were solved by increasingly more complex algorithms. Learning how a magic trick works makes it no longer magic, and discovering how a test of intelligence can be solved makes it no longer a test of intelligence.

So let’s look to a well-defined task: identifying objects in an image. Our ability to recognise, for example, a school bus, feels simple and immediate. But given the infinite combinations of individual school buses, lighting conditions and angles from which they can be viewed, turning the information that enters our retina into an object label is an incredibly complex task – one out of reach for computers for decades. In recent years, however, computers have come to identify certain objects with up to 95 per cent accuracy, higher than the average individual human.

Like many areas of modern AI, the success of computer vision can be attributed to artificial neural networks. As their name suggests, these algorithms are inspired by how the brain works. They use as their base unit a simple formula meant to replicate what a neuron does. This formula takes in a set of numbers as inputs, multiplies them by another set of numbers (the ‘weights’, which determine how much influence a given input has) and sums them all up. That sum determines how active the artificial neuron is, in the same way that a real neuron’s activity is determined by the activity of other neurons that connect to it. Modern artificial neural networks gain abilities by connecting such units together and learning the right weight for each.
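
Here is a minimal Python sketch of that base unit. The ReLU-style nonlinearity and the particular numbers are illustrative choices, not a claim about any specific network.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias=0.0):
    """Multiply each input by its weight, sum them up, and pass the total
    through a simple nonlinearity to get the unit's 'activity'."""
    total = np.dot(inputs, weights) + bias
    return max(0.0, total)  # ReLU activation, one common choice

# Example: three inputs, each with its own learned weight.
x = np.array([0.2, 0.9, 0.1])
w = np.array([1.5, -0.4, 0.8])
print(artificial_neuron(x, w))  # how active this unit is for this input
```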

The networks used for visual object recognition were inspired by the mammalian visual system, a structure whose basic components were discovered in cats nearly 60 years ago. The first important component of the brain’s visual system is its spatial map: neurons are active only when something is in their preferred spatial location, and different neurons have different preferred locations. Different neurons also tend to respond to different types of objects. In brain areas closer to the retina, neurons respond to simple dots and lines. As the signal gets processed through more and more brain areas, neurons start to prefer more complex objects such as clocks, houses and faces.

The first of these properties – the spatial map – is replicated in artificial networks by constraining the inputs that an artificial neuron can get. For example, a neuron in the first layer of a network might receive input only from the top left corner of an image. A neuron in the second layer gets input only from those top-left-corner neurons in the first layer, and so on.

The second property – representing increasingly complex objects – comes from stacking layers in a ‘deep’ network. Neurons in the first layer respond to simple patterns, while those in the second layer – getting input from those in the first – respond to more complex patterns, and so on.
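
A sketch, in Python with PyTorch, of how those two properties look in code: each convolutional unit receives input only from a small patch of the layer below (the spatial map), and stacking layers lets later units respond to increasingly complex patterns. The layer sizes and the ten output classes are arbitrary placeholder choices, not any published architecture.

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: sees small patches of the image
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: sees patterns of layer-1 responses
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # 10 hypothetical object classes
)

image = torch.randn(1, 3, 64, 64)  # a dummy RGB image
scores = model(image)              # one score per class
```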

These networks clearly aren’t cheating in the way that the digesting duck was. But does all this biological inspiration mean that they work like the brain? One way to approach this question is to look more closely at their performance. To this end, scientists are studying ‘adversarial examples’ – real images that programmers alter so that the machine makes a mistake. Very small tweaks to images can be catastrophic: changing a few pixels on an image of a teapot, for example, can make the network label it an ostrich. It’s a mistake a human would never make, and suggests that something about these networks is functioning differently from the human brain.
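
The article does not say how such adversarial images are produced; one widely used recipe is the 'fast gradient sign method', sketched below under the assumption of a differentiable classifier like the toy model above. The stand-in model and labels here are placeholders.

```python
import torch
from torch import nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that most increases the
    network's error, producing a nearly identical image it misclassifies."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Hypothetical usage with any differentiable image classifier:
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))  # stand-in model
img = torch.randn(1, 3, 64, 64)
label = torch.tensor([0])
adversarial = fgsm_perturb(classifier, img, label)
```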

Studying networks this way, however, is akin to the early days of psychology. Measuring only environment and behaviour – in other words, input and output – is limited without direct measurements of the brain connecting them. But neural-network algorithms are frequently criticised (especially among watchdog groups concerned about their widespread use in the real world) for being impenetrable black boxes. To overcome the limitations of this techno-behaviourism, we need a way to understand these networks and compare them with the brain.

An ever-growing population of scientists is tackling this problem. In one approach, researchers presented the same images to a monkey and to an artificial network. They found that the activity of the real neurons could be predicted by the activity of the artificial ones, with deeper layers in the network more similar to later areas of the visual system. But, while these predictions are better than those made by other models, they are still not 100 per cent accurate. This is leading researchers to explore what other biological details can be added to the models to make them more similar to the brain.
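
A sketch of that prediction step, with synthetic data standing in for both the network activations and the recorded neuron; ridge regression is one common fitting choice, though the studies described may use different procedures.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Rows are images; columns of X are artificial-unit activations for those
# images; y is one (synthetic) real neuron's response to the same images.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = X @ rng.normal(size=128) + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
fit = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out prediction accuracy (R^2):", fit.score(X_test, y_test))
```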

There are limits to this approach. At a recent conference for neuroscientists and AI researchers, Yann LeCun – director of AI research at Facebook, and professor of computer science at New York University – warned the audience not to become ‘hypnotised by the details of the brain’, implying that not all of what the brain does necessarily needs to be replicated for intelligent behaviour to be achieved.

But the question of what counts as a mere detail, like the question of what is needed for true digestion, is an open one. For example, by training artificial networks to be more ‘biological’, researchers have found computational purpose in, for example, the physical anatomy of neurons. Some correspondence between AI and the brain is necessary for these biological insights to be of value. Otherwise, the study of neurons would be only as useful for AI as wing-flapping is for modern airplanes.

In 2000, the Belgian conceptual artist Wim Delvoye unveiled Cloaca at a museum in Belgium. Over eight years of research on human digestion, he created the device – consisting of a blender, tubes, pumps and various jars of acids and enzymes – that successfully turns food into faeces. The machine is a true feat of engineering, a testament to our powers of replicating the natural world. Its purpose might be more questionable. One art critic was left with the thought: ‘There is enough dung as it is. Why make more?’ Intelligence doesn’t face this problem.