As 2017 begins and we flounder in our mad rush to force all of India into a digital economy overnight, it is worth pausing to reflect on what the digital economy is, who controls its platforms and pipelines, and some basic truths about money and technology that have moulded our lives and freedoms — patented systems that are failing the people of the West. Obsolete systems are moulding our patterns of work and our wellbeing — as a very large country, and as an ancient civilisation — into a cast that is observably too small.
We live in times where the non-working rent collectors and speculators have emerged as the richest billionaires. Meanwhile, hard-working honest people, like farmers and workers in self-organised economies (mistakenly called unorganised and informal), are not just being pushed into deep poverty; they are, in fact, being criminalised by the labelling of their self-organised economic systems as “black”. The Swadeshi economy is being labelled as the “shadow economy”.
“Short term pain for long term gain” has become the slogan for the dictated transition to a digital economy. But the pain is not just short term: millions of honest Indians who contribute to a truthful economy are wasting days on end, sacrificing their work, their livelihoods and their means of living to stand at ATMs and juggle denominations and news reports. In rural India, daily mile-long walks to banks have become commonplace, where communities would previously interact with the “financial world” only a handful of times a year.
In Venezuela — where the exact same circus has come to town — there have been riots. In India, by contrast, we have stood patiently in lines, in the misguided hope that the fabric of the Indian economy will be cleansed of black money. The economy has been laundered, and the stains have spread.
To assess the long-term gain, we need to ask basic questions: Who will benefit from this so-called long-term gain?
Ten of the richest billionaires have made money riding on patents and monopolies over the tools of information and network technology. In effect, they are rent collectors of the digital economy, who have collected very large rents, at very high frequency, in a very short time.
Bill Gates and company made money through patents on software developed by brilliant people; they merely own the “workshop” — and with it, all the work that happens under their roof. Mr Gates used his monopoly to eliminate rivals and then to ensure that no matter what kind of computer you wanted, it had to have Microsoft Windows. If at this point you think to yourself, “What about Apple Inc?”, a quick search will enlighten you — controlling shares in Alphabet (Google), Facebook, Amazon, Apple and Microsoft are held by the same handful of private investment funds. This armada of funds is led by Vanguard Inc.
In an honest economy, such behaviour would be illegal, but in India we have baptised it as “smart”.
Do we need a Mark Zuckerberg to have friends and be able to talk to them?
Communication and community, friendships and networks are the very basis of society. Facebook has not provided us with “the social network”.
Mr Zuckerberg has crowd-sourced the social network of the world from us. Our relationships are the source of “big data”, the new commodity in the digital world. Information technology seeks to rent information, sourced from us to us.
Digitalisation has spread to all areas. Let us not forget that many multinational companies are playing a big role in pushing chemicals and GMOs on Africa, and in claiming patents on new GMO technologies and digital patents on the biodiversity of life on earth. This big seed grab was stalled at the recent Convention on Biological Diversity meeting in Cancún.
John Naughton, a professor of the public understanding of technology at the Open University and author of From Gutenberg to Zuckerberg: What You Really Need to Know About the Internet, has named the digital moghuls the “robber barons” of our age.
As he perceptively observes in the Guardian: “In social networking Mark Zuckerberg has cunningly inserted himself (via his hardware and software) into every online communication that passes between his 900 million subscribers, to the point where Facebook probably knows that two people are about to have an affair before they do. And because of the nature of networks, if we’re not careful we could wind up with a series of winners who took all: one global bookstore; one social network; one search engine; one online multimedia store and so on.”
It already is one digital dictatorship. And we need to be asking far more questions than we are asking. We have blindly elevated means — which should be democratically chosen — into an end unto themselves. Money and tools are means, they need to be utilised with wisdom and responsibility to higher ends such as the protection of nature, the wellbeing of all and the common good.
Two sets of means come together in what is now declared the real reason for demonetisation — the digital economy. Money-making and the tools for money-making have become the new religion, and government policy has been reduced to facilitating the imposition of the digital empires of the new moghuls. Why else is every department of government directing its energy at making Indians “digitally literate”, precisely at a time when people in technological societies are turning to India to learn her wisdom, her deep values of “Sarve Bhavantu Sukhinah”, and the ability to live in community as one Earth Family — Vasudhaiva Kutumbakam? We have not learnt from the atomised, alienated, lonely individuals that the souls of Western societies have been reduced to. The digital economy is a design for atomisation, for separation — for turning Indians into individual consumers with abundant “red money” — credit.
Imposing the digital economy through a “cash ban” is a form of technological dictatorship, in the hands of the world’s billionaires.
Economic diversity and technological pluralism are India’s strength and it is the “hard cash” that insulated India from the global market’s “dive into the red” of 2008.
Mahatma Gandhi’s teachings about resisting empire non-violently, while creating truthful and real economies in the hands of people, for regaining freedom, have never been more relevant. Wealth is the state of wellbeing; it is not money. It is not cash. Money has no value in and of itself. Money is merely a means of exchange, it is a promise. As the notes we exchange state: “I promise to pay the bearer the sum of…” and the promise is made by the governor of the Reserve Bank. On that promise and trust rests an entire economy, from the local to the national level. At the very least, the demonetisation circus has “busted the trust” in the Indian economy.
In the digital economy there is no trust, only the one-way control of global banks, of those who own and control digital networks, and of those who can make money mysteriously through digital “tricks” — the owners of the global exchange. How else could fund managers like Vanguard be the biggest investors in all major corporations, from Monsanto to Bayer, from Coca-Cola to Pepsi, from Microsoft to Facebook, from Wells Fargo to Texaco?
When I exchange Rs 100 even a hundred times, it remains Rs 100. In the digital world, those who control the exchange, through digital and financial networks, make money at every step of those 100 exchanges. That is how the digital economy has created the billionaire class of one per cent, which controls the economy of the 100 per cent.
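The arithmetic behind this claim can be sketched in a few lines. The 2 per cent fee below is purely hypothetical, chosen only to illustrate how a per-transaction charge compounds when the same Rs 100 changes hands again and again:

```python
# Hypothetical illustration: a 2% charge levied on each digital
# transaction. After 100 exchanges of the same Rs 100, most of the
# original value has migrated to whoever controls the exchange.
principal = 100.0
fee_rate = 0.02  # hypothetical per-transaction charge

remaining = principal
collected = 0.0
for _ in range(100):
    fee = remaining * fee_rate
    collected += fee
    remaining -= fee

print(f"left in circulation: Rs {remaining:.2f}")
print(f"collected in fees:   Rs {collected:.2f}")
```

With these assumed numbers, roughly Rs 87 of the original Rs 100 ends up with the intermediary after 100 exchanges; cash, by contrast, charges nothing per exchange.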
The foundation of the real economy is work. Gandhi following Leo Tolstoy and John Ruskin called it “bread labour” — labour that creates bread that sustains life. Writing in Young India in 1921, he wrote: “God created man to work for his food, and said that those who ate without work were thieves.”
Writing in the Harijan in 1935, he cited the Gita and the Bible for his understanding of the duty of bread labour. For him, ahimsa (non-violence) was intimately linked to work, and he identified “wealth without work” among the seven deadly sins. It is the bills of domination that the government should be banning, not merely the bills of denomination.
Which famous book covers uroscopy, amputation, hysteria, dementia, syphilis, midwifery, gout, poisons, homeopathy, fistulas, nursing, epidemics, death, toothache, sphygmology, constipation, dyscrasia, and astrology? A book that also includes written portraits of physicians, surgeons, and quacks and is itself therapeutic?
The answer is not a medical textbook but the Complete Works of William Shakespeare.
This year marks the 400th anniversary of Shakespeare’s death, an opportune moment to remember the contributions he and other Renaissance writers made to literature, and the influence that the era’s growing interest in medicine and surgery had on their writing.
Shakespeare’s interest in what we now call psychology is obvious. His plays investigate human emotions at key moments in life. The heroes and heroines in the comedies negotiate marriage; Henry IV has a problematic teenage son; Hamlet copes with bereavement; King Lear deals with empty nest syndrome when the youngest of his three daughters, Cordelia, leaves home, and early retirement as he abdicates, shaking “all cares and business from our age,/Conferring them on younger strengths.”
But his plays are also interested in the technical aspects of medicine and surgery. The Merchant of Venice is obsessed with bloodletting: from the prince of Morocco’s suggestion that the suitors “make incision” for Portia’s love “to prove whose blood is reddest” through Shylock’s inexplicable bond of a pound of flesh and his passionate defence of the rights of Jews (“if you prick us do we not bleed?”) to Renaissance slang for usurers as blood suckers.1
This interest makes sense given that Shakespeare would have had opportunities to witness anatomical dissections. From 1596 he lived a short distance from Barber-Surgeons’ Hall in the City of London, where public dissections (known as anatomies) were staged four times a year. The influence of this can be seen in his work. Dr Pinch in Comedy of Errors is so thin that he is called “a living anatomy.” When Lear is mistreated by his daughters, Goneril and Regan, he appeals in anguish to disembodied doctors: “Let them anatomize Regan, see what breeds about her heart: Is there any cause in nature, that makes these hard hearts?”
At the time, the word “anatomy” quickly went from being a technical term to become associated with any kind of close investigation. John Lyly’s Euphues: the Anatomy of Wit was the bestseller of the 1580s; the 17th century opens with Hamlet’s anatomy of himself and humanity (“What a piece of work is a man, how noble in reason, how infinite in faculties”); and in 1621 we get Burton’s Anatomy of Melancholy. The literature academic Richard Sugg has counted over 150 uses of anatomy in the titles of sermons, poems, works of philosophy, and political writings from 1550 to 1650.2
In 1636, Inigo Jones designed a new anatomy theatre for the Company of Barbers and Surgeons. Jones was both an architect and a theatre practitioner: he had visited the anatomy theatre in Padua, thought to be the first of its kind, as early as 1594, had designed the indoor Cockpit theatre in 1616, and had collaborated on spectacular masques with the playwright Ben Jonson for King James’s court, highlighting the close relation between the two art forms of theatre and medicine.
Figures of fun
Despite this close relation, many Renaissance plays present medical practitioners reductively as figures of fun. We encounter doctors willing to make house calls to ladies’ chambers for non-medical reasons; surgeons are satirised for their “hard words,” and doctors for the frequency with which they prescribe laxatives. Shakespeare, too, offered stereotypes. In Twelfth Night when Sir Toby calls for a surgeon to treat his head wounds, he is told “O, he’s drunk, Sir Toby, an hour ago; his eyes were set at eight i’ th’ morning.”
Although there is a large gap between Renaissance medicine and today’s, for obvious scientific reasons, both periods share a fundamental interest in understanding the human body and mind, making many Renaissance observations still remarkably pertinent. In 1591 Shakespeare’s contemporary Robert Greene compared a successful pickpocket to a surgeon, listing “three properties that a good surgeon should have— an eagle’s eye, a lady’s hand [ie, delicate] and a lion’s heart.” The image enhances pickpockets rather than demeans surgeons, using the noun to mean someone who is manually (not necessarily surgically) dexterous. As summaries of modern practitioners go, being observant, delicate, and courageous is one that does not require updating.
The Earth’s moon may not always have been on its own, according to lunar scientists.
The smaller ‘twin’ moon is believed to have survived only a few million years before colliding with the moon we see today.
The theory will be explained by Professor Erik Asphaug, from the University of California, Santa Cruz, at a conference about the moon to be held at the Royal Society this September.
He said: “The second moon would have lasted for only a few million years; then it would have collided with the moon to leave the one large body we see today.
“It would have orbited Earth at the same speed and distance and just got slowly sucked in until they hit and then coalesced.”
Prof Asphaug told the Sunday Times he believes the moon’s mountainous landscape is the remains of Earth’s smaller moon, left behind when the pair collided. The moon’s smaller twin is believed to have been about one-thirtieth of its size.
The Earth and its moon are thought to have been formed between 30 million and 130 million years after the birth of the solar system, about 4.6 billion years ago.
A total of nine ‘super-Earths’ — planets between one and 10 times the mass of Earth — have previously been found.
Scientists from Harvard put forward a theory last year suggesting the Moon was once part of Earth that spun off after the Earth collided with another body. The study was published in the journal Science.
Last month astronomers reported the discovery of three planets, similar to Earth, orbiting a single star, which may be able to support life.
Researchers estimate there could be as many as 100 billion planets similar to the Earth in our galaxy, the Milky Way.
Long-term relationships may offer many psychological benefits but staying in a monogamous relationship for long may reduce a woman’s sexual desire, a study says.
Following over 2,000 premenopausal Finnish women for seven years, the researchers found that those who had stayed in the same monogamous relationship during the study period experienced the highest decrease in sexual drive.
The decrease in sexual desire was lower for women who had found a new partner over the study duration, said the study published in the journal Psychological Medicine.
“Our results advocate tailored psychobehavioural treatment interventions for female sexual dysfunctions that take partner-specific factors into account,” said the study led by Annika Gunst from University of Turku in Finland.
The scientists used the Female Sexual Function Index — a short questionnaire that measures specific areas of sexual functioning in women, such as sexual arousal, orgasm, sexual satisfaction, and the presence of pain during intercourse — to look at the evolution of female sexual desire over a period of seven years, Medicalnewstoday.com reported.
Analyses were conducted separately for women in different relationship constellations.
Of the functions examined, the researchers found that women’s ability to orgasm remained the most stable over the seven-year period, while sexual satisfaction varied widely.
During the seven-year follow-up, the ability to have an orgasm improved across all groups, with single women experiencing the greatest improvement.
Women with a new partner experienced a greater improvement in orgasmic ability compared with women who had been in the same relationship over the entire period of observation.
What really happens to us after death? Once a person stops breathing, and their heart ceases to pump blood, they’re what doctors consider “clinically dead.” On a biological level, the eventual decomposition of cells, organs, and brain tissue signals death’s final and irreversible stages.
But what if that’s not actually the end? Two new studies claim that hundreds of genes actually kept expressing—and, in some cases, became more active—after death occurred. This came as a surprise to the researchers, because forensic pathologists have long assumed that gene activity simply degrades postmortem, which is why its rate of change is sometimes used to calculate time of death.
According to the lead author of both papers, microbiologist Peter Noble of the University of Washington, the discovery of “undead” genes could help to improve the preservation of organs destined for transplantation. The two studies are currently available on the pre-print server bioRxiv, and it’s important to note that neither has undergone peer review yet.
Noble says his most recent research was inspired by a three-year-old study published in Forensic Science International that discovered a host of genes that remained active in human cadavers for up to 12 hours after death.
In order to investigate the unwinding of the genetic clock, in these latest studies, the team extracted and measured messenger RNA (mRNA) levels in the tissue of recently deceased mice and zebrafish. Since mRNA plays an important role in gene expression, higher levels of this molecule should indicate more genetic activity.
In one of the studies, Noble and his colleagues were able to describe more than 1,000 genes that stayed “alive” postmortem. A total of 515 mouse genes continued to operate for up to two days, while 548 zebrafish genes remained functional for an entire four days after death.
“It’s an experiment of curiosity to see what happens when you die,” Noble told Science Magazine.
One of the most surprising findings, however, was that hundreds of genes actually fired up—boosting their activity—within the first 24 hours after the animals had died. Noble suspects that many of them might have been suppressed or shut off by a network of other genes when their host was alive, and only after death were they free to “reawaken.”
The team also found that many of the genes that persisted postmortem are typically active during embryonic development, which led them to theorize that, on a cellular level, newly developing lifeforms might share a lot in common with degenerating corpses.
Other genes they identified were associated with promoting the growth of cancerous cells. These researchers believe the activation of cancer-related genes postmortem could partly explain why many transplant recipients are at higher risk of developing cancer after receiving a new organ, although this has long been attributed to the immunosuppressive drugs they’re typically prescribed. A lot more research still needs to be done.
“Since our results show that the system has not reached equilibrium yet,” one of the studies broadly speculates, “it would be interesting to address the following question: what would happen if we arrested the process of dying by providing nutrients and oxygen to tissues? It might be possible for cells to revert back to life or take some interesting path to differentiating into something new or lose differentiation altogether, such as in cancer.”
In addition to offering potentially valuable new insights into the expiration of vital transplant organs, the researchers hope their findings can also be used by forensic scientists to more accurately pinpoint time of death, which is apparently harder than it sounds.
“The headline of this study is that we can probably get a lot of information about life by studying death,” said Noble.
Kids are more likely to snuggle when adults are reading real books to them than when they are reading tablets.
This is the result of a new, albeit small, study from some British psychologists published in Frontiers in Psychology. Researchers at the University of Sussex compared how 24 children and their mothers shared storybooks versus stories on electronic readers and found that among the former there was “a significant increase in the warmth of the parent/child interactions: more laughter, more smiling, more shows of affection.”
These are the kind of stories that make parents feel guilty. And they make us want to rethink our families’ relationship to technology. We know that our phones and tablets are taking away valuable time from our families. We know that they’re making our children more distracted from their schoolwork and their social interactions. We know that they’re killing dinner time and keeping us from focusing on our children and our spouses. And now it turns out that they’re making our children less likely to cuddle with us?
Fortunately, it’s that time of year when we can at least make a good faith effort to turn things around. So this year, let’s resolve to change the way we use technology and the way our children do.
It’s important to note that many Silicon Valley execs seriously curtail their own kids’ screen time. In 2010, Steve Jobs told The New York Times that his own children hadn’t tried his latest invention, the iPad.
“We limit how much technology our kids use at home,” Jobs said.
So can we all. We could start by limiting the use of devices for entertainment purposes (which includes social media, texting friends, watching videos — anything not for school) to a certain number of hours a day. Or maybe only after a certain time. Or only on certain days. In our house, weekdays are off limits.
Whatever it is, the point is to be consistent. The reason people like to start diets in January is that there are relatively few distractions to disrupt our routines.
Author and addiction expert Nicholas Kardaras has written about the hazards of screen time for children; in his book “Glow Kids,” he calls screens “digital heroin.” But even parents who don’t see their kids becoming antisocial or even violent as a result of screen time see that it has significant effects on their moods.
Getting a child away from an iPad or other screen often seems to induce a state of crankiness. This is nothing new. In her book “The Plug-In Drug” — published 40 years ago — author Marie Winn likened turning off the TV to waking a child from a nap. It takes a while for them to readjust.
When kids are cranky or bored or impatient we assume that giving them a screen will help — and they are often an instant pacifier, one that single parents understandably may rely upon. But the trick, as with diets, is to imagine what our “future selves” will think. If you fast forward an hour you will see yourself having an argument with your child about turning off the device and doing homework, or coming to the dinner table, or getting ready for bed. Instead of ending the crankiness and boredom and impatience, you’ll have postponed it and perhaps made it worse.
Screens don’t just affect children’s moods. A 2015 study published in Pediatrics found that “the use of mobile media to occupy young children during daily routines such as errands, car rides, and eating out is becoming a common behavioral regulation tool: what the industry terms a ‘shut-up toy.’ ”
The long-term consequence, according to the study, is the disruption of a child developing their own self-regulating mechanisms. How will they learn to cope with frustration or boredom if a screen isn’t nearby?
Last October, the American Academy of Pediatrics released new guidelines for children and screen time, recommending that children under 18 months have none at all and warning that too much reliance on screens as a distraction can delay emotional and cognitive development.
The best way to stay with any resolution, experts say, is to not feel as if you are depriving yourself. And that goes for any technology diet. We can’t just take things away from our kids and ourselves. We have to provide real alternatives. That means finding other sources of entertainment — a new set of art supplies, board games, or maybe a foosball table.
But it can also mean making a commitment to using time in other ways — spending at least a half-hour outdoors every day, cooking meals together or reading with our kids (even the big ones) at the end of the day.
Quitting any habit is tough. But if we spend the next two months learning that we can live with even a little less screen time, it will have been worth it.
- The Large Hadron Collider (LHC) is the largest and most powerful particle accelerator in existence, but the devices have been around since the 1930s.
- Particle accelerators have been used to create better medicines, treat diseases like cancer, and manufacture products we use every day.
Particle Accelerator Basics
A particle accelerator sounds like it’d be something straight from a science-fiction novel, largely because most of us don’t really understand how they work and also because they do have a place in works of fiction (fans of the television show The Flash will know the title character got his powers from a particle accelerator).
Nevertheless, particle accelerators are real, and they’re used by physicists to investigate the structure of atomic particles. The largest and most powerful one in existence is the Large Hadron Collider (LHC), which is located under the France-Switzerland border near Geneva, but it’s not the first. In fact, particle accelerators have been around since the 1930s, albeit in humbler sizes compared to the LHC.
The goal of a particle accelerator is to energize a particle by, well, accelerating it — when a particle is given a kick by speeding it up, it gains more energy.
The devices can work in one of two ways, either by sending particles around a loop (a circular accelerator) or down a straight line (a linear accelerator). Particles in the LHC are spun around a 27-km (16.7-mile) ring before smashing together, and that giant particle accelerator has the ability to collide more than a billion protons per second.
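The idea of “energizing a particle by accelerating it” can be made concrete with a back-of-the-envelope calculation. Using the standard proton rest energy and the LHC’s 6.5 TeV per-beam energy (figures from CERN’s public materials, not stated in this article), a short sketch gives the Lorentz factor and speed:

```python
import math

# A proton's total energy is E = gamma * m * c^2, so the Lorentz
# factor is simply the beam energy divided by the rest energy.
PROTON_REST_ENERGY_GEV = 0.938272  # proton m*c^2, standard value
BEAM_ENERGY_GEV = 6500.0           # one LHC beam, 2015-2018 run

gamma = BEAM_ENERGY_GEV / PROTON_REST_ENERGY_GEV
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)  # speed as a fraction of c

print(f"Lorentz factor: ~{gamma:.0f}")
print(f"speed / c:      {beta:.10f}")
```

The result (a Lorentz factor near 7,000, a speed within about ten parts per billion of the speed of light) is what “giving a particle a kick” means at the LHC’s scale.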
DIY Particle Accelerator?
Particle accelerators have so many uses. They’ve helped us create better medicines, treat diseases like cancer, even manufacture the shrink wrap used to protect foods. The data collected by these machines is invaluable, but the LHC has accumulated so much data that managing it all has proven difficult. Either we need to free up space in the particle accelerator we have or we need to build another one.
To do the latter, we’d need to start with five things:
- A source of particles to accelerate
- An acceleration mechanism to act as their energy source
- A control instrument (usually a system of magnets)
- A collision (usually)
- A system of detection to monitor what’s happening with the beamed particles
Simple, right? Perhaps it’s better to hear it from someone whose job is actually working on particle accelerators. Here’s a video featuring accelerator physicist and designer Suzy Sheehy explaining in under five minutes how to design one.
- To achieve an accurate description of the universe, physical theories are increasingly invoking extra dimensions to explain the mysteries of nature.
- The problem is—how do you prove the existence of something so elusive? New experiments with the LHC may finally prove just how many extra dimensions, if any, our universe really has.
How many dimensions are there? Is time a dimension? Or is our four-dimensional space-time just one element—and a minor one at that—of a greater hyperdimensional universe?
It’s a question that’s been asked many, many times, and the answers are almost as varied as the proposed extra dimensions themselves. From Paul Ehrenfest’s exploration of why space has three dimensions in 1917 to the M-theory of the 1990s, experts throughout the years have proposed their own answers—some more forcefully than others.
But with advances in technology, and armed with new mathematical models and theories, we might be in a unique position today to begin to understand one of the natural world’s most baffling mysteries.
Dimensions, Gravity, and Light
At the heart of almost all theories that deal with the number of dimensions are the fundamental forces of gravity and light, both of which are possibly the most observed and easily the most studied phenomena in the physical universe. Among the four fundamental forces in nature—the others being the strong and weak nuclear forces—gravity and electromagnetism (which is responsible for generating light) are the trickiest to deal with. Individually, they’ve caused grief to countless scientists and theorists; and when put together, they wreak absolute havoc.
Models generally draw from these observable features of the universe to build theories and conjectures about how things work. The simpler ones proposed that the universe was made up of three dimensions: length, width, and depth. This is especially easy to grasp since it’s how we perceive the world; it’s intuitive and entirely logical.
But this neat, trinary division of the universe doesn’t exactly sum up how we experience it. To build on this, some mathematicians—notably Hermann Minkowski—combined the three spatial dimensions with a fourth, temporal dimension, to construct a space-time description of reality.
This is where things start to become knotty. There are embarrassing discrepancies and inconsistencies that just don’t seem to tally. For instance, why does gravity operate on such a massive scale—planets, stars, galaxies—whereas the other forces act upon such a tiny scale? Or, put somewhat differently, why is gravity so much weaker than the other three fundamental forces?
In an interesting essay for PBS, Paul Halpern illustrates the problem using a simple example: “Pick up a steel thumbtack with a tiny kitchen magnet, and see how its attraction overwhelms the gravitational pull of the entire earth.”
So a number of theories have been developed to attempt to compensate for these discrepancies. Building on the work of Theodor Kaluza and Oskar Klein in the 1920s, superstring theories advanced the idea that the vibrations of minuscule energy strings were responsible for all that we observe in nature; these theories only worked, however, in a universe comprising ten or more dimensions, with the extra six or so all “compactified” into a tiny space beyond the limits of ready observation.
Another approach (M-theory) subsumed this 10-dimensional universe, composed of strings and energetic membranes, within a large, potentially observable extra dimension called the “bulk.” In this notion, matter and energy and most of the fundamental forces cling timidly to the energetic space-time membranes, or “branes;” gravity, however, is something of a free agent, operating alike on the branes and within the hyperdimensional bulk. For this reason, gravitons—the carriers of gravitational energy—can bleed off into the bulk, diminishing the small-scale strength of gravity while still allowing it to exert undue power over large distances.
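The claim that leakage into extra dimensions weakens gravity at small scales follows from a Gauss’s-law argument: a force spreading through 3 + n spatial dimensions falls off as 1/r^(2+n) rather than the familiar 1/r^2. A minimal sketch, in arbitrary units and with purely illustrative numbers:

```python
# Sketch of how force falloff steepens with extra dimensions.
# By a Gauss's-law argument, flux spread over a higher-dimensional
# "sphere" dilutes faster: F ~ 1/r**(2 + n) for n extra dimensions.
def force_falloff(r, extra_dims=0):
    """Relative force strength at distance r (arbitrary units)."""
    return 1.0 / r ** (2 + extra_dims)

for n in (0, 1, 2):
    ratio = force_falloff(10.0, n) / force_falloff(1.0, n)
    print(f"{n} extra dims: force at r=10 is {ratio:.0e} of force at r=1")
```

Going from r=1 to r=10, the force drops a hundredfold with no extra dimensions, a thousandfold with one, and ten-thousandfold with two—which is why, in these models, gravity can look anomalously feeble at the scales we probe.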
A Large Hadron Collision of Ideas
Enter the Large Hadron Collider. The machine, based near Geneva, Switzerland, just might hold the answer to the dimensional puzzle. Because it can run extremely high-energy particle collisions, experts can construct specialised experiments which might, in turn, yield data pointing to theories that actually hold water.
Right now, scientists are looking for three specific occurrences to prove that higher dimensions exist: the presence of massive particle traces, sort of like reverberating echoes; missing energy caused by gravitons migrating to higher dimensions; and microscopic black holes.
Ongoing experiments will explore these possibilities, just as scientists are hotly pursuing an elusive theory that unifies all the laws of the universe. If the volume of discoveries in recent years is an indicator, then we just might be closer than we ever thought.