Neil DeGrasse Tyson Gives 3 Reasons Why Humans Are Still So Ignorant About The Universe


How did we even get here?

We have nanobots that swim inside our bodies and monitor our vital organs. We have autonomous robots that work alongside human doctors to perform complex surgeries. There are rovers driving across the surface of Mars and, as you read this, three humans are orbiting high above you, living in the cold vacuum of space.


In many ways, it seems like we’re living in the future. But if you ask Neil deGrasse Tyson, it seems like we’re little more than infants trying to clutch sunbeams in our fists.

At the 2018 World Government Summit in Dubai, Tyson gave a presentation to an enraptured audience. The topic? How humans will – most definitely not – colonize Mars (Tyson, if you aren’t aware, is an eternal skeptic).

It seems fitting then that, following his rather depressing speech, he took the time to discuss how humans are, in many ways, entirely ignorant.

Here are three things that, according to Tyson, show just how far we have to go:

Dark matter

A portion of our Universe is missing. A rather significant portion, in fact.

Scientists estimate that less than 5 percent of our Universe is made up of ordinary matter (protons, neutrons, electrons, and all the things that make our bodies, our planet, and everything we’ve ever seen or touched).

The rest of the matter in our Universe? Well, we have no idea what it is.

“Dark matter is the longest standing unsolved problem in modern astrophysics,” Tyson said.

He continued with a slightly exasperated sigh, “It has been with us for eighty years, and it’s high time we had a solution.”

Yet, we aren’t exactly close.

The problem stems from the fact that dark matter doesn’t interact with electromagnetic radiation (aka light). We can only observe it because of its gravitational influence – say, by a galaxy spinning slower or faster than it should.
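
That gravitational fingerprint can be made concrete with a quick calculation. Below is a minimal sketch, in Python, of the classic rotation-curve argument: visible mass alone predicts orbital speeds that fall off with distance, whereas measured rotation curves stay roughly flat. The mass and radii here are illustrative stand-ins, not data for any real galaxy.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1e41     # illustrative visible mass of a galaxy, kg (~5e10 Suns)
KPC = 3.086e19       # one kiloparsec in metres

# Keplerian prediction: outside most of the visible mass, orbital
# speed should fall off with radius as v = sqrt(G * M / r).
for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    v_km_s = math.sqrt(G * M_VISIBLE / r) / 1000
    print(f"r = {r_kpc:>2} kpc -> predicted v ~ {v_km_s:5.0f} km/s")

# Observed curves instead stay roughly flat (often ~200 km/s) out to large
# radii: stars orbit faster than the visible matter can account for,
# which is the gravitational evidence for dark matter described above.
```

The mismatch between the falling prediction and the flat observation is precisely the kind of "spinning faster than it should" that points to unseen mass.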

However, there are a number of ongoing experiments that seek to detect dark matter, such as SNOLAB and ADMX, so answers may be on the horizon.

Dark energy

Dark energy is, perhaps, one of the most interesting scientific discoveries ever made. This is because it may hold the keys to the ultimate fate of our Universe.

Tyson explains it as “a pressure in the vacuum of space forcing the acceleration of the [expansion of] the Universe.”

Does that sound confusing? That’s probably because it is.

If you weren’t aware, all of space is expanding – the space between the galaxies, the space between the Earth and the Sun, the space between your eyes and your computer screen.

Of course, this expansion is minimal. It’s so minimal that we don’t even notice it when we look at our local Solar System. But on a cosmic scale, its impact is profound.

Because space is so vast, billions of light-years of space are expanding, causing many galaxies to fly away from us at unimaginable speeds.
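
To put rough numbers on that flight: recession speed grows in proportion to distance (Hubble's law, v = H0 × d). The article doesn't quote a value for H0, so the sketch below assumes the commonly used figure of roughly 70 km/s per megaparsec.

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate, assumed)

def recession_velocity(distance_mpc: float) -> float:
    """Hubble's law: recession speed is proportional to distance."""
    return H0 * distance_mpc

C = 299_792.458  # speed of light, km/s
for d_mpc in (1, 100, 1_000, 4_300):  # 4,300 Mpc is ~14 billion light-years
    v = recession_velocity(d_mpc)
    print(f"{d_mpc:>5} Mpc -> {v:>9,.0f} km/s ({v / C:.2f} c)")

# Far enough out, the implied speed exceeds that of light -- not motion
# through space, but the stretching of space itself between us and them.
```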

And if this flight continues, eventually the cosmos will be nothing more than a cold unendingly dark void. If it reverses, the Universe will collapse in on itself in a Big Crunch.

Unfortunately, we have absolutely no idea which will happen, as we have no clue what dark energy is.

Abiogenesis

We know a lot about how life evolved on Earth. About 3.5 billion years ago, the earliest forms of life emerged. These single-celled creatures dominated our planet for billions and billions of years.

A little over 600 million years ago, the first multicellular organisms took up residence. The Cambrian explosion followed soon after and *boom* the fossil record was born.

Just 500 million years ago, plants started taking to land. Animals soon followed, and here we are today.

However, Tyson is quick to point out that we don’t understand the most vital component of evolution – the beginning.

“We still don’t know how we go from organic molecules to self-replicating life,” Tyson said, and he noted how unfortunate this is because “that is basically the origin of life as we know it.”

The process is called abiogenesis. In plain terms, it deals with how life arises from nonliving matter. Although we have a number of hypotheses about how this happened, we lack both a comprehensive understanding and decisive evidence favoring any one of them.

There we have it. The biggest mysteries of the cosmos just happen to be some of the most important and fundamental. So, when will we finally figure out these scientific conundrums and move out of our infancy? Tyson refuses to make a prediction.

If there’s one thing he knows, it’s how very little humans actually know: “I’m not very good at predicting the future, and I’ve looked at other people’s predictions and seen how bad those are even among those that say ‘I am good.’ So I can tell you what I want to happen, but that’s different than what I think will happen.”

Is the Universe a conscious mind?


Cosmopsychism might seem crazy, but it provides a robust explanatory model for how the Universe became fine-tuned for life.

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

Here are a few examples of this fine-tuning for life, with a quick toy check in code after the list:

  •  The strong nuclear force (the force that binds together the elements in the nucleus of an atom) has a value of 0.007. If that value had been 0.006 or less, the Universe would have contained nothing but hydrogen. If it had been 0.008 or higher, the hydrogen would have fused to make heavier elements. In either case, any kind of chemical complexity would have been physically impossible. And without chemical complexity there can be no life.
  •  The physical possibility of chemical complexity is also dependent on the masses of the basic components of matter: electrons and quarks. If the mass of a down quark had been greater by a factor of 3, the Universe would have contained only hydrogen. If the mass of an electron had been greater by a factor of 2.5, the Universe would have contained only neutrons: no atoms at all, and certainly no chemical reactions.
  •  Gravity seems a momentous force but it is actually much weaker than the other forces that affect atoms, by a factor of about 10^36. If gravity had been only slightly stronger, stars would have formed from smaller amounts of material, and consequently would have been smaller, with much shorter lives. A typical sun would have lasted around 10,000 years rather than 10 billion years, not allowing enough time for the evolutionary processes that produce complex life. Conversely, if gravity had been only slightly weaker, stars would have been much colder and hence would not have exploded into supernovae. This also would have rendered life impossible, as supernovae are the main source of many of the heavy elements that form the ingredients of life.
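
As a toy illustration only, the three constraints above can be written down and checked mechanically. The windows are the ones quoted in the list; representing them as simple numeric ranges (with quark and electron masses expressed as multiples of their actual values) is an assumption made for the sake of the sketch.

```python
# (lower bound, upper bound, our Universe's value) for each quoted constraint.
# None means the text quotes no bound on that side.
windows = {
    "strong nuclear force": (0.006, 0.008, 0.007),
    "down-quark mass (multiple of actual)": (None, 3.0, 1.0),
    "electron mass (multiple of actual)": (None, 2.5, 1.0),
}

for name, (lo, hi, ours) in windows.items():
    life_permitting = (lo is None or ours > lo) and ours < hi
    print(f"{name:38s} value {ours}: life-permitting? {life_permitting}")
```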

Some take the fine-tuning to be simply a basic fact about our Universe: fortunate perhaps, but not something requiring explanation. But like many scientists and philosophers, I find this implausible. In The Life of the Cosmos (1999), the physicist Lee Smolin has estimated that, taking into account all of the fine-tuning examples considered, the chance of life existing in the Universe is 1 in 10^229, from which he concludes:

In my opinion, a probability this tiny is not something we can let go unexplained. Luck will certainly not do here; we need some rational explanation of how something this unlikely turned out to be the case.

The two standard explanations of the fine-tuning are theism and the multiverse hypothesis. Theists postulate an all-powerful and perfectly good supernatural creator of the Universe, and then explain the fine-tuning in terms of the good intentions of this creator. Life is something of great objective value; God in Her goodness wanted to bring about this great value, and hence created laws with constants compatible with its physical possibility. The multiverse hypothesis postulates an enormous, perhaps infinite, number of physical universes other than our own, in which many different values of the constants are realised. Given a sufficient number of universes realising a sufficient range of the constants, it is not so improbable that there will be at least one universe with fine-tuned laws.

Both of these theories are able to explain the fine-tuning. The problem is that, on the face of it, they also make false predictions. For the theist, the false prediction arises from the problem of evil. If one were told that a given universe was created by an all-loving, all-knowing and all-powerful being, one would not expect that universe to contain enormous amounts of gratuitous suffering. One might not be surprised to find it contained intelligent life, but one would be surprised to learn that life had come about through the gruesome process of natural selection. Why would a loving God who could do absolutely anything choose to create life that way? Prima facie theism predicts a universe that is much better than our own and, because of this, the flaws of our Universe count strongly against the existence of God.

Turning to the multiverse hypothesis, the false prediction arises from the so-called Boltzmann brain problem, named after the 19th-century Austrian physicist Ludwig Boltzmann who first formulated the paradox of the observed universe. Assuming there is a multiverse, you would expect our Universe to be a fairly typical member of the universe ensemble, or at least a fairly typical member of the universes containing observers (since we couldn’t find ourselves in a universe in which observers are impossible). However, in The Road to Reality (2004), the physicist and mathematician Roger Penrose has calculated that in the kind of multiverse most favoured by contemporary physicists – based on inflationary cosmology and string theory – for every observer who observes a smooth, orderly universe as big as ours, there are 10 to the power of 10^123 who observe a smooth, orderly universe that is just 10 times smaller. And by far the most common kind of observer would be a ‘Boltzmann brain’: a functioning brain that has by sheer fluke emerged from a disordered universe for a brief period of time. If Penrose is right, then the odds of an observer in the multiverse theory finding itself in a large, ordered universe are astronomically small. And hence the fact that we are ourselves such observers is powerful evidence against the multiverse theory.

Neither of these are knock-down arguments. Theists can try to come up with reasons why God would allow the suffering we find in the Universe, and multiverse theorists can try to fine-tune their theory such that our Universe is less unlikely. However, both of these moves feel ad hoc, fiddling to try to save the theory rather than accepting that, on its most natural interpretation, the theory is falsified. I think we can do better.


In the public mind, physics is on its way to giving us a complete account of the nature of space, time and matter. We are not there yet of course; for one thing, our best theory of the very big – general relativity – is inconsistent with our best theory of the very small – quantum mechanics. But it is standardly assumed that one day these challenges will be overcome and physicists will proudly present an eager public with the Grand Unified Theory of everything: a complete story of the fundamental nature of the Universe.

In fact, for all its virtues, physics tells us precisely nothing about the nature of the physical Universe. Consider Isaac Newton’s theory of universal gravitation:

F = Gm1m2/r^2

The variables m1 and m2 stand for the masses of two objects that we want to work out the gravitational attraction between; F is the gravitational attraction between those two masses, G is the gravitational constant (a number we know from observation); and r is the distance between m1 and m2. Notice that this equation doesn’t provide us with definitions of what ‘mass’, ‘force’ and ‘distance’ are. And this is not something peculiar to Newton’s law. The subject matter of physics is the basic properties of the physical world: mass, charge, spin, distance, force. But the equations of physics do not explain what these properties are. They simply name them in order to assert equations between them.
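
To underline that point, the law is easy to state as a purely predictive recipe: feed in masses and a distance, get out a force, with no account of what mass or force intrinsically are. A minimal sketch follows; the Earth and Sun figures are standard textbook values, not taken from the essay.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2 (known from observation)

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Newton's law of universal gravitation: F = G * m1 * m2 / r**2."""
    return G * m1 * m2 / r**2

# Worked example: the Sun-Earth attraction.
M_SUN, M_EARTH, R_ORBIT = 1.989e30, 5.972e24, 1.496e11  # kg, kg, metres
print(f"F = {gravitational_force(M_SUN, M_EARTH, R_ORBIT):.2e} N")  # ~3.5e22 N
# The equation predicts the pull to high precision while leaving 'mass',
# 'force' and 'distance' themselves undefined -- the essay's point exactly.
```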

If physics is not telling us the nature of physical properties, what is it telling us? The truth is that physics is a tool for prediction. Even if we don’t know what ‘mass’ and ‘force’ really are, we are able to recognise them in the world. They show up as readings on our instruments, or otherwise impact on our senses. And by using the equations of physics, such as Newton’s law of gravity, we can predict what’s going to happen with great precision. It is this predictive capacity that has enabled us to manipulate the natural world in extraordinary ways, leading to the technological revolution that has transformed our planet. We are now living through a period of history in which people are so blown away by the success of physical science, so moved by the wonders of technology, that they feel strongly inclined to think that the mathematical models of physics capture the whole of reality. But this is simply not the job of physics. Physics is in the business of predicting the behaviour of matter, not revealing its intrinsic nature.

It’s silly to say that atoms are entirely removed from mentality, then wonder where mentality comes from

Given that physics tells us nothing of the nature of physical reality, is there anything we do know? Are there any clues as to what is going on ‘under the bonnet’ of the engine of the Universe? The English astronomer Arthur Eddington was the first scientist to confirm general relativity, and also to formulate the Boltzmann brain problem discussed above (albeit in a different context). Reflecting on the limitations of physics in The Nature of the Physical World (1928), Eddington argued that the only thing we really know about the nature of matter is that some of it has consciousness; we know this because we are directly aware of the consciousness of our own brains:

We are acquainted with an external world because its fibres run into our own consciousness; it is only our own ends of the fibres that we actually know; from those ends, we more or less successfully reconstruct the rest, as a palaeontologist reconstructs an extinct monster from its footprint.

We have no direct access to the nature of matter outside of brains. But the most reasonable speculation, according to Eddington, is that the nature of matter outside of brains is continuous with the nature of matter inside of brains. Given that we have no direct insight into the nature of atoms, it is rather ‘silly’, argued Eddington, to declare that atoms have a nature entirely removed from mentality, and then to wonder where mentality comes from. In my book Consciousness and Fundamental Reality (2017), I developed these considerations into an extensive argument for panpsychism: the view that all matter has a consciousness-involving nature.

There are two ways of developing the basic panpsychist position. One is micropsychism, the view that the smallest parts of the physical world have consciousness. Micropsychism is not to be equated with the absurd view that quarks have emotions or that electrons feel existential angst. In human beings, consciousness is a sophisticated thing, involving subtle and complex emotions, thoughts and sensory experiences. But there seems to be nothing incoherent in the idea that consciousness might exist in some extremely basic forms. We have good reason to think that the conscious experience of a horse is much less complex than that of a human being, and the experiences of a chicken less complex than those of a horse. As organisms become simpler, perhaps at some point the light of consciousness suddenly switches off, with simpler organisms having no experience at all. But it is also possible that the light of consciousness never switches off entirely, but rather fades as organic complexity reduces, through flies, insects, plants, amoeba and bacteria. For the micropsychist, this fading-while-never-turning-off continuum further extends into inorganic matter, with fundamental physical entities – perhaps electrons and quarks – possessing extremely rudimentary forms of consciousness, to reflect their extremely simple nature.

However, a number of scientists and philosophers of science have recently argued that this kind of ‘bottom-up’ picture of the Universe is outdated, and that contemporary physics suggests that in fact we live in a ‘top-down’ – or ‘holist’ – Universe, in which complex wholes are more fundamental than their parts. According to holism, the table in front of you does not derive its existence from the sub-atomic particles that compose it; rather, those sub-atomic particles derive their existence from the table. Ultimately, everything that exists derives its existence from the ultimate complex system: the Universe as a whole.

Holism has a somewhat mystical association, in its commitment to a single unified whole being the ultimate reality. But there are strong scientific arguments in its favour. The American philosopher Jonathan Schaffer argues that the phenomenon of quantum entanglement is good evidence for holism. Entangled particles behave as a whole, even if they are separated by such large distances that it is impossible for any kind of signal to travel between them. According to Schaffer, we can make sense of this only if, in general, we are in a Universe in which complex systems are more fundamental than their parts.

If we combine holism with panpsychism, we get cosmopsychism: the view that the Universe is conscious, and that the consciousness of humans and animals is derived not from the consciousness of fundamental particles, but from the consciousness of the Universe itself. This is the view I ultimately defend in Consciousness and Fundamental Reality.

The cosmopsychist need not think of the conscious Universe as having human-like mental features, such as thought and rationality. Indeed, in my book I suggested that we think of the cosmic consciousness as a kind of ‘mess’ devoid of intellect or reason. However, it now seems to me that reflection on the fine-tuning might give us grounds for thinking that the mental life of the Universe is just a little closer than I had previously thought to the mental life of a human being.

The Canadian philosopher John Leslie proposed an intriguing explanation of the fine-tuning, which in Universes (1989) he called ‘axiarchism’. What strikes us as so incredible about the fine-tuning is that, of all the values the constants in our laws might have had, they ended up having exactly those values required for something of great value: life, and ultimately intelligent life. If the laws had not, against huge odds, been fine-tuned, the Universe would have had infinitely less value; some say it would have had no value at all. Leslie proposes that this proper understanding of the problem points us in the direction of the best solution: the laws are fine-tuned because their being so leads to something of great value. Leslie is not imagining a deity mediating between the facts of value and the cosmological facts; the facts of value, as it were, reach out and fix the values directly.

It can hardly be denied that axiarchism is a parsimonious explanation of fine-tuning, as it posits no entities whatsoever other than the observable Universe. But it is not clear that it is intelligible. Values don’t seem to be the right kind of things to have a causal influence on the workings of the world, at least not independently of the motives of rational agents. It is rather like suggesting that the abstract number 9 caused a hurricane.

But the cosmopsychist has a way of rendering axiarchism intelligible, by proposing that the mental capacities of the Universe mediate between value facts and cosmological facts. On this view, which we can call ‘agentive cosmopsychism’, the Universe itself fine-tuned the laws in response to considerations of value. When was this done? In the first 10^-43 seconds, known as the Planck epoch, our current physical theories, in which the fine-tuned laws are embedded, break down. The cosmopsychist can propose that during this early stage of cosmological history, the Universe itself ‘chose’ the fine-tuned values in order to make possible a universe of value.

Making sense of this requires two modifications to basic cosmopsychism. Firstly, we need to suppose that the Universe acts through a basic capacity to recognise and respond to considerations of value. This is very different from how we normally think about things, but it is consistent with everything we observe. The Scottish philosopher David Hume long ago noted that all we can really observe is how things behave – the underlying forces that give rise to those behaviours are invisible to us. We standardly assume that the Universe is powered by a number of non-rational causal capacities, but it is also possible that it is powered by the capacity of the Universe to respond to considerations of value.

It is parsimonious to suppose that the Universe has a consciousness-involving nature

How are we to think about the laws of physics on this view? I suggest that we think of them as constraints on the agency of the Universe. Unlike the God of theism, this is an agent of limited power, which explains the manifest imperfections of the Universe. The Universe acts to maximise value, but is able to do so only within the constraints of the laws of physics. The beneficence of the Universe does not much reveal itself these days; the agentive cosmopsychist might explain this by holding that the Universe is now more constrained than it was in the unique circumstances of the first split second after the Big Bang, when currently known laws of physics did not apply.

Ockham’s razor is the principle that, all things being equal, more parsimonious theories – that is to say, theories with relatively few postulations – are to be preferred. Is it not a great cost in terms of parsimony to ascribe fundamental consciousness to the Universe? Not at all. The physical world must have some nature, and physics leaves us completely in the dark as to what it is. It is no less parsimonious to suppose that the Universe has a consciousness-involving nature than that it has some non-consciousness-involving nature. If anything, the former proposal is more parsimonious insofar as it is continuous with the only thing we really know about the nature of matter: that brains have consciousness.

Having said that, the second and final modification we must make to cosmopsychism in order to explain the fine-tuning does come at some cost. If the Universe, way back in the Planck epoch, fine-tuned the laws to bring about life billions of years in its future, then the Universe must in some sense be aware of the consequences of its actions. This is the second modification: I suggest that the agentive cosmopsychist postulate a basic disposition of the Universe to represent the complete potential consequences of each of its possible actions. In a sense, this is a simple postulation, but it cannot be denied that the complexity involved in these mental representations detracts from the parsimony of the view. However, this commitment is arguably less profligate than the postulations of the theist or the multiverse theorist. The theist postulates a supernatural agent while the agentive cosmopsychist postulates a natural agent. The multiverse theorist postulates an enormous number of distinct, unobservable entities: the many universes. The agentive cosmopsychist merely adds to an entity that we already believe in: the physical Universe. And most importantly, agentive cosmopsychism avoids the false predictions of its two rivals.

The idea that the Universe is a conscious mind that responds to value strikes us as a ludicrously extravagant cartoon. But we must judge the view not on its cultural associations but on its explanatory power. Agentive cosmopsychism explains the fine-tuning without making false predictions; and it does so with a simplicity and elegance unmatched by its rivals. It is a view we should take seriously.

How the Universe Got Its Bounce Back


Humans have always entertained two basic theories about the origin of the universe. “In one of them, the universe emerges in a single instant of creation (as in the Jewish-Christian and the Brazilian Carajás cosmogonies),” the cosmologists Mario Novello and Santiago Perez-Bergliaffa noted in 2008. In the other, “the universe is eternal, consisting of an infinite series of cycles (as in the cosmogonies of the Babylonians and Egyptians).” The division in modern cosmology “somehow parallels that of the cosmogonic myths,” Novello and Perez-Bergliaffa wrote.

In recent decades, it hasn’t seemed like much of a contest. The Big Bang theory, standard stuff of textbooks and television shows, enjoys strong support among today’s cosmologists. The rival eternal-universe picture had the edge a century ago, but it lost ground as astronomers observed that the cosmos is expanding and that it was small and simple about 14 billion years ago. In the most popular modern version of the theory, the Big Bang began with an episode called “cosmic inflation” — a burst of exponential expansion during which an infinitesimal speck of space-time ballooned into a smooth, flat, macroscopic cosmos, which expanded more gently thereafter.

With a single initial ingredient (the “inflaton field”), inflationary models reproduce many broad-brush features of the cosmos today. But as an origin story, inflation is lacking; it raises questions about what preceded it and where that initial, inflaton-laden speck came from. Undeterred, many theorists think the inflaton field must fit naturally into a more complete, though still unknown, theory of time’s origin.

But in the past few years, a growing number of cosmologists have cautiously revisited the alternative. They say the Big Bang might instead have been a Big Bounce. Some cosmologists favor a picture in which the universe expands and contracts cyclically like a lung, bouncing each time it shrinks to a certain size, while others propose that the cosmos only bounced once — that it had been contracting, before the bounce, since the infinite past, and that it will expand forever after. In either model, time continues into the past and future without end.

With modern science, there’s hope of settling this ancient debate. In the years ahead, telescopes could find definitive evidence for cosmic inflation. During the primordial growth spurt — if it happened — quantum ripples in the fabric of space-time would have become stretched and later imprinted as subtle swirls in the polarization of ancient light called the cosmic microwave background. Current and future telescope experiments are hunting for these swirls. If they aren’t seen in the next couple of decades, this won’t entirely disprove inflation (the telltale swirls could simply be too faint to make out), but it will strengthen the case for bounce cosmology, which doesn’t predict the swirl pattern.

Already, several groups are making progress at once. Most significantly, in the last year, physicists have come up with two new ways that bounces could conceivably occur. One of the models, described in a paper that will appear in the Journal of Cosmology and Astroparticle Physics, comes from Anna Ijjas of Columbia University, extending earlier work with her former adviser, the Princeton professor and high-profile bounce cosmologist Paul Steinhardt. More surprisingly, the other new bounce solution, accepted for publication in Physical Review D, was proposed by Peter Graham, David Kaplan and Surjeet Rajendran, a well-known trio of collaborators who mainly focus on particle physics questions and have no previous connection to the bounce cosmology community. It’s a noteworthy development in a field that’s highly polarized on the bang vs. bounce question.

The question gained renewed significance in 2001, when Steinhardt and three other cosmologists argued that a period of slow contraction in the history of the universe could explain its exceptional smoothness and flatness, as witnessed today, even after a bounce — with no need for a period of inflation.

The universe’s impeccable plainness, the fact that no region of sky contains significantly more matter than any other and that space is breathtakingly flat as far as telescopes can see, is a mystery. To match its present uniformity, experts infer that the cosmos, when it was one centimeter across, must have had the same density everywhere to within one part in 100,000. But as it grew from an even smaller size, matter and energy ought to have immediately clumped together and contorted space-time. Why don’t our telescopes see a universe wrecked by gravity?

“Inflation was motivated by the idea that that was crazy to have to assume the universe came out so smooth and not curved,” said the cosmologist Neil Turok, director of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and co-author of the 2001 paper on cosmic contraction with Steinhardt, Justin Khoury and Burt Ovrut. In the inflation scenario, the centimeter-size region results from the exponential expansion of a much smaller region — an initial speck measuring no more than a trillionth of a trillionth of a centimeter across. As long as that speck was infused with an inflaton field that was smooth and flat, meaning its energy concentration didn’t fluctuate across time or space, the speck would have inflated into a huge, smooth universe like ours. Raman Sundrum, a theoretical physicist at the University of Maryland, said the thing he appreciates about inflation is that “it has a kind of fault tolerance built in.” If, during this explosive growth phase, there was a buildup of energy that bent space-time in a certain place, the concentration would have quickly inflated away. “You make small changes against what you see in the data and you see the return to the behavior that the data suggests,” Sundrum said.
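
The growth factor in that scenario follows directly from the sizes quoted above: a speck of about a trillionth of a trillionth of a centimeter inflating to a centimeter. The only thing added in the sketch below is the conversion to "e-folds", the logarithmic unit cosmologists usually use for inflationary expansion.

```python
import math

speck_cm = 1e-24  # initial speck: a trillionth of a trillionth of a centimeter
final_cm = 1.0    # the centimeter-size region it inflates into

growth = final_cm / speck_cm
e_folds = math.log(growth)  # number of e-folds = ln(growth factor)

print(f"growth factor: {growth:.0e}")          # 1e+24
print(f"e-folds of expansion: {e_folds:.0f}")  # ~55, comparable to the ~60
                                               # often quoted for inflation models
```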

However, where exactly that infinitesimal speck came from, and why it came out so smooth and flat itself to begin with, no one knows. Theorists have found many possible ways to embed the inflaton field into string theory, a candidate for the underlying quantum theory of gravity. So far, there’s no evidence for or against these ideas.

Cosmic inflation also has a controversial consequence. The theory — which was pioneered in the 1980s by Alan Guth, Andrei Linde, Aleksei Starobinsky and (of all people) Steinhardt — almost automatically leads to the hypothesis that our universe is a random bubble in an infinite, frothing multiverse sea. Once inflation starts, calculations suggest that it keeps going forever, only stopping in local pockets that then blossom into bubble universes like ours. The possibility of an eternally inflating multiverse suggests that our particular bubble might never be fully understandable on its own terms, since everything that can possibly happen in a multiverse happens infinitely many times. The subject evokes gut-level disagreement among experts. Many have reconciled themselves to the idea that our universe could be just one of many; Steinhardt calls the multiverse “hogwash.”

This sentiment partly motivated his and other researchers’ about-face on bounces. “The bouncing models don’t have a period of inflation,” Turok said. Instead, they add a period of contraction before a Big Bounce to explain our uniform universe. “Just as the gas in the room you’re sitting in is completely uniform because the air molecules are banging around and equilibrating,” he said, “if the universe was quite big and contracting slowly, that gives plenty of time for the universe to smooth itself out.”

Although the first contracting-universe models were convoluted and flawed, many researchers became convinced of the basic idea that slow contraction can explain many features of our expanding universe. “Then the bottleneck became literally the bottleneck — the bounce itself,” Steinhardt said. As Ijjas put it, “The bounce has been the showstopper for these scenarios. People would agree that it’s very interesting if you can do a contraction phase, but not if you can’t get to an expansion phase.”


Bouncing isn’t easy. In the 1960s, the British physicists Roger Penrose and Stephen Hawking proved a set of so-called “singularity theorems” showing that, under very general conditions, contracting matter and energy will unavoidably crunch into an immeasurably dense point called a singularity. These theorems make it hard to imagine how a contracting universe in which space-time, matter and energy are all rushing inward could possibly avoid collapsing all the way down to a singularity — a point where Albert Einstein’s classical theory of gravity and space-time breaks down and the unknown quantum gravity theory rules. Why shouldn’t a contracting universe share the same fate as a massive star, which dies by shrinking to the singular center of a black hole?

Both of the newly proposed bounce models exploit loopholes in the singularity theorems — ones that, for many years, seemed like dead ends. Bounce cosmologists have long recognized that bounces might be possible if the universe contained a substance with negative energy (or other sources of negative pressure), which would counteract gravity and essentially push everything apart. They’ve been trying to exploit this loophole since the early 2000s, but they always found that adding negative-energy ingredients made their models of the universe unstable, because positive- and negative-energy quantum fluctuations could spontaneously arise together, unchecked, out of the zero-energy vacuum of space. In 2016, the Russian cosmologist Valery Rubakov and colleagues even proved a “no-go” theorem that seemed to rule out a huge class of bounce mechanisms on the grounds that they caused these so-called “ghost” instabilities.

Then Ijjas found a bounce mechanism that evades the no-go theorem. The key ingredient in her model is a simple entity called a “scalar field,” which, according to the idea, would have kicked into gear as the universe contracted and energy became highly concentrated. The scalar field would have braided itself into the gravitational field in a way that exerted negative pressure on the universe, reversing the contraction and driving space-time apart — without destabilizing everything. Ijjas’ paper “is essentially the best attempt at getting rid of all possible instabilities and making a really stable model with this special type of matter,” said Jean-Luc Lehners, a theoretical cosmologist at the Max Planck Institute for Gravitational Physics in Germany who has also worked on bounce proposals.

What’s especially interesting about the two new bounce models is that they are “non-singular,” meaning the contracting universe bounces and starts expanding again before ever shrinking to a point. These bounces can therefore be fully described by the classical laws of gravity, requiring no speculations about gravity’s quantum nature.

Graham, Kaplan and Rajendran, of Stanford University, Johns Hopkins University and the University of California, Berkeley, respectively, reported their non-singular bounce idea on the scientific preprint site arxiv.org in September 2017. They found their way to it after wondering whether a previous contraction phase in the history of the universe could be used to explain the value of the cosmological constant — a mystifyingly tiny number that defines the amount of dark energy infused in the space-time fabric, energy that drives the accelerating expansion of the universe.

In working out the hardest part — the bounce — the trio exploited a second, largely forgotten loophole in the singularity theorems. They took inspiration from a characteristically strange model of the universe proposed by the logician Kurt Gödel in 1949, when he and Einstein were walking companions and colleagues at the Institute for Advanced Study in Princeton, New Jersey. Gödel used the laws of general relativity to construct the theory of a rotating universe, whose spinning keeps it from gravitationally collapsing in much the same way that Earth’s orbit prevents it from falling into the sun. Gödel especially liked the fact that his rotating universe permitted “closed timelike curves,” essentially loops in time, which raised all sorts of Gödelian riddles. To his dying day, he eagerly awaited evidence that the universe really is rotating in the manner of his model. Researchers now know it isn’t; otherwise, the cosmos would exhibit alignments and preferred directions. But Graham and company wondered about small, curled-up spatial dimensions that might exist in space, such as the six extra dimensions postulated by string theory. Could a contracting universe spin in those directions?

Imagine there’s just one of these curled-up extra dimensions, a tiny circle found at every point in space. As Graham put it, “At each point in space there’s an extra direction you can go in, a fourth spatial direction, but you can only go a tiny little distance and then you come back to where you started.” If there are at least three extra compact dimensions, then, as the universe contracts, matter and energy can start spinning inside them, and the dimensions themselves will spin with the matter and energy. The vorticity in the extra dimensions can suddenly initiate a bounce. “All that stuff that would have been crunching into a singularity, because it’s spinning in the extra dimensions, it misses — sort of like a gravitational slingshot,” Graham said. “All the stuff should have been coming to a single point, but instead it misses and flies back out again.”

The paper has attracted attention beyond the usual circle of bounce cosmologists. Sean Carroll, a theoretical physicist at the California Institute of Technology, is skeptical but called the idea “very clever.” He said it’s important to develop alternatives to the conventional inflation story, if only to see how much better inflation appears by comparison — especially when next-generation telescopes come online in the early 2020s looking for the telltale swirl pattern in the sky caused by inflation. “Even though I think inflation has a good chance of being right, I wish there were more competitors,” Carroll said. Sundrum, the Maryland physicist, felt similarly. “There are some questions I consider so important that even if you have only a 5 percent chance of succeeding, you should throw everything you have at it and work on them,” he said. “And that’s how I feel about this paper.”

As Graham, Kaplan and Rajendran explore their bounce and its possible experimental signatures, the next step for Ijjas and Steinhardt, working with Frans Pretorius of Princeton, is to develop computer simulations. (Their collaboration is supported by the Simons Foundation, which also funds Quanta Magazine.) Both bounce mechanisms also need to be integrated into more complete, stable cosmological models that would describe the entire evolutionary history of the universe.

Beyond these non-singular bounce solutions, other researchers are speculating about what kind of bounce might occur when a universe contracts all the way to a singularity — a bounce orchestrated by the unknown quantum laws of gravity, which replace the usual understanding of space and time at extremely high energies. In forthcoming work, Turok and collaborators plan to propose a model in which the universe expands symmetrically into the past and future away from a central, singular bounce. Turok contends that the existence of this two-lobed universe is equivalent to the spontaneous creation of electron-positron pairs, which constantly pop in and out of the vacuum. “Richard Feynman pointed out that you can look at the positron as an electron going backwards in time,” he said. “They’re two particles, but they’re really the same; at a certain moment in time they merge and annihilate.” He added, “The idea is a very, very deep one, and most likely the Big Bang will turn out to be similar, where a universe and its anti-universe were drawn out of nothing, if you like, by the presence of matter.”

It remains to be seen whether this universe/anti-universe bounce model can accommodate all observations of the cosmos, but Turok likes how simple it is. Most cosmological models are far too complicated in his view. The universe “looks extremely ordered and symmetrical and simple,” he said. “That’s very exciting for theorists, because it tells us there may be a simple — even if hard-to-discover — theory waiting to be discovered, which might explain the most paradoxical features of the universe.”

This Is The Smartest Kid In The World And He Thinks CERN Destroyed Our Universe


Our universe is a miracle beyond our comprehension. The more we advance through science and begin to unravel the mysteries of the world, the more confused and tangled up in them we become.

No human can be said to know all the secrets of the universe, not even our most knowledgeable scientists. Science is not about facts; facts are easy to learn. Science is about exploring and questioning known facts and establishing new ones.

One such kid, Max Laughlin, is definitely much smarter than the average 13-year-old (or 30-year-old, for that matter) and has been called the smartest kid on the planet. Before his 13th birthday, he had invented a device capable of providing free energy to everyone in the world (once the logistics of production were taken care of).

He has been discussing and debating the multiverse theory and alternate realities extensively for quite a while now, and with some of the biggest brains in the business. He is one of the many theorists who hold the opinion that when CERN ran the Large Hadron Collider, it led to the permanent destruction of our universe as it existed, and that we are now living in a parallel one, the universe that was closest to our own in the space-time continuum.

Multiverse

The multiverse theory says that our reality is not the only one that exists in our space-time continuum. In the beginning, as the universe took shape, it started spiraling outwards from the very first instant, forming parallel universes right next to each other. Down the line, through infinity, this has produced a countless number of parallel universes, and we inhabit just one of them.

How it happened

When CERN set off the supercollider, it destroyed a single electron. That immediately set off a chain reaction which annihilated our entire universe. We were shifted to the next-closest universe to our own, but we didn't make the shift unscathed. Many were unable to accompany us and were left behind and forgotten. And the new universe we now inhabit, though similar to our own, is not exactly the same. Here is the proof.

The Mandela effect

The Mandela effect is the phenomenon that best supports this theory. Not everyone remembers Nelson Mandela's death in the same way. There are also many pop-culture references and real-world events that people swear they remember differently from what the records show. These little glitches are taken as proof that the reality we remember is different from the one we now inhabit.


Our Universe is too vast for even the most imaginative sci-fi


As an astrophysicist, I am always struck by the fact that even the wildest science-fiction stories tend to be distinctly human in character. No matter how exotic the locale or how unusual the scientific concepts, most science fiction ends up being about quintessentially human (or human-like) interactions, problems, foibles and challenges. This is what we respond to; it is what we can best understand. In practice, this means that most science fiction takes place in relatively relatable settings, on a planet or spacecraft. The real challenge is to tie the story to human emotions, and human sizes and timescales, while still capturing the enormous scales of the Universe itself.

Just how large the Universe actually is never fails to boggle the mind. We say that the observable Universe extends for tens of billions of light years, but the only way to really comprehend this, as humans, is to break matters down into a series of steps, starting with our visceral understanding of the size of the Earth. A non-stop flight from Dubai to San Francisco covers a distance of about 8,000 miles – roughly equal to the diameter of the Earth. The Sun is much bigger; its diameter is just over 100 times Earth’s. And the distance between the Earth and the Sun is about 100 times larger than that, close to 100 million miles. This distance, the radius of the Earth’s orbit around the Sun, is a fundamental measure in astronomy; the Astronomical Unit, or AU. The spacecraft Voyager 1, for example, launched in 1977 and, travelling at 11 miles per second, is now 137 AU from the Sun.

But the stars are far more distant than this. The nearest, Proxima Centauri, is about 270,000 AU, or 4.25 light years away. You would have to line up 30 million Suns to span the gap between the Sun and Proxima Centauri. The Vogons in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy (1979) are shocked that humans have not travelled to the Proxima Centauri system to see the Earth’s demolition notice; the joke is just how impossibly large the distance is.
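
Those comparisons can be verified with nothing more than multiplication. A quick sketch using the article's own figures:

```python
EARTH_DIAMETER_MILES = 8_000                      # the Dubai-San Francisco flight
SUN_DIAMETER_MILES = EARTH_DIAMETER_MILES * 100   # 'just over 100 times' Earth's
AU_MILES = 100_000_000                            # Earth-Sun distance, ~100x again
PROXIMA_AU = 270_000                              # nearest star, in AU

gap_miles = PROXIMA_AU * AU_MILES
print(f"Sun to Proxima Centauri: {gap_miles:.1e} miles")    # ~2.7e13 miles

suns_to_span = gap_miles / SUN_DIAMETER_MILES
print(f"Suns lined up across the gap: {suns_to_span:.1e}")  # ~3e7: thirty million
```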


Four light years turns out to be about the average distance between stars in the Milky Way Galaxy, of which the Sun is a member. That is a lot of empty space! The Milky Way contains about 300 billion stars, in a vast structure roughly 100,000 light years in diameter. One of the truly exciting discoveries of the past two decades is that our Sun is far from unique in hosting a retinue of planets: evidence shows that the majority of Sun-like stars in the Milky Way have planets orbiting them, many with a size and distance from their parent star allowing them to host life as we know it.

Yet getting to these planets is another matter entirely: Voyager 1 would arrive at Proxima Centauri in 75,000 years if it were travelling in the right direction – which it isn’t. Science-fiction writers use a variety of tricks to span these interstellar distances: putting their passengers into states of suspended animation during the long voyages, or travelling close to the speed of light (to take advantage of the time dilation predicted in Albert Einstein’s theory of special relativity). Or they invoke warp drives, wormholes or other as-yet undiscovered phenomena.
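
The 75,000-year figure above is itself a one-line calculation. In the sketch below, the miles-per-light-year conversion is assumed (it isn't given in the text); the result lands close to the quoted number.

```python
MILES_PER_LIGHT_YEAR = 5.88e12   # standard conversion (assumed, not in the text)
DISTANCE_LY = 4.25               # Sun to Proxima Centauri
VOYAGER_MPS = 11                 # Voyager 1's speed, miles per second

seconds = DISTANCE_LY * MILES_PER_LIGHT_YEAR / VOYAGER_MPS
years = seconds / (3600 * 24 * 365.25)
print(f"Travel time at Voyager 1's speed: {years:,.0f} years")  # ~72,000 years
```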

When astronomers made the first definitive measurements of the scale of our Galaxy a century ago, they were overwhelmed by the size of the Universe they had mapped. Initially, there was great skepticism that the so-called ‘spiral nebulae’ seen in deep photographs of the sky were in fact ‘island universes’ – structures as large as the Milky Way, but at much larger distances still. While the vast majority of science-fiction stories stay within our Milky Way, much of the story of the past 100 years of astronomy has been the discovery of just how much larger than that the Universe is. Our nearest galactic neighbour is about 2 million light years away, while the light from the most distant galaxies our telescopes can see has been travelling to us for most of the age of the Universe, about 13 billion years.

We discovered in the 1920s that the Universe has been expanding since the Big Bang. But about 20 years ago, astronomers found that this expansion was speeding up, driven by a force whose physical nature we do not understand, but to which we give the stop-gap name of ‘dark energy’. Dark energy operates on length- and time-scales of the Universe as a whole: how could we capture such a concept in a piece of fiction?

The story doesn’t stop there. We can’t see galaxies from those parts of the Universe for which there hasn’t been enough time since the Big Bang for the light to reach us. What lies beyond the observable bounds of the Universe? Our simplest cosmological models suggest that the Universe is uniform in its properties on the largest scales, and extends forever. A variant idea says that the Big Bang that birthed our Universe is only one of a (possibly infinite) number of such explosions, and that the resulting ‘multiverse’ has an extent utterly beyond our comprehension.

The US astronomer Neil deGrasse Tyson once said: ‘The Universe is under no obligation to make sense to you.’ Similarly, the wonders of the Universe are under no obligation to make it easy for science-fiction writers to tell stories about them. The Universe is mostly empty space, and the distances between stars in galaxies, and between galaxies in the Universe, are incomprehensibly vast on human scales. Capturing the true scale of the Universe, while somehow tying it to human endeavours and emotions, is a daunting challenge for any science-fiction writer. Olaf Stapledon took up that challenge in his novel Star Maker (1937), in which the stars and nebulae, and cosmos as a whole, are conscious. While we are humbled by our tiny size relative to the cosmos, our brains can none the less comprehend, to some extent, just how large the Universe we inhabit is. This is hopeful, since, as the astrobiologist Caleb Scharf of Columbia University has said: ‘In a finite world, a cosmic perspective isn’t a luxury, it is a necessity.’ Conveying this to the public is the real challenge faced by astronomers and science-fiction writers alike.

Physicists Leak Evidence That Supports Elon Musk’s Theory – The Universe Is A “Computer” Simulation


Philosophers have long proposed that, since any civilization of remarkable intelligence and size would likely create simulations of other universes (and likely a great number of them), it may be that there are more simulated universes than real ones – and consequently that any given world, including ours, is more likely to be simulated than real.

And now, some physicists say, we just may have the evidence that our universe is just one such simulation.

A team of researchers led by Silas Beane at Germany’s University of Bonn has just released a paper titled “Constraints on the Universe as a Numerical Simulation,” in which they argue that any such simulation of a universe must, by its very nature as a simulation, impose limits on the physical laws of that universe.

As Technology Review explains, making the same point, “the problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three dimensional lattice which advances in steps of time.”
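
As a toy illustration of that point (not taken from Beane's paper), the sketch below forces a continuous law – one-dimensional diffusion – onto a discrete lattice that advances in fixed time steps. This discretisation is exactly the move the argument says should leave observable traces.

```python
# Toy lattice: du/dt = D * d2u/dx2, discretised in space (dx) and time (dt).
D, dx, dt = 1.0, 1.0, 0.2   # diffusion constant, lattice spacing, time step
u = [0.0] * 21
u[10] = 1.0                 # a concentrated spike in the middle of the lattice

for _ in range(50):         # advance the simulation in discrete steps of time
    u = [
        u[i] + D * dt / dx**2 * (u[i - 1] - 2 * u[i] + u[i + 1])
        if 0 < i < len(u) - 1
        else u[i]
        for i in range(len(u))
    ]

print(" ".join(f"{v:.3f}" for v in u[8:13]))  # the spike has spread smoothly,
                                              # but only lattice points exist
```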

For example, if our universe were a simulation, there would be clear limits on the amount of energy that particles within the program could contain. And, the researchers say, there is evidence of exactly such limits in our universe.

In particular, consider what is known as the Greisen-Zatsepin-Kuzmin (GZK) cutoff – a clear limit to the energy a cosmic-ray particle can carry. Scientists generally attribute it to interactions with the cosmic background radiation. Beane’s research team, however, argues that it is also exactly what you would expect from the limits of a simulation.

Of course, you should read the paper yourself to get a better feel for the science – but the argument is certainly an interesting one, and will only fuel more philosophers’ arguments about the nature of our world.


The Universe Is as Spooky as Einstein Thought


In a brilliant new experiment, physicists have confirmed one of the most mysterious laws of the cosmos.

There might be no getting around what Albert Einstein called “spooky action at a distance.” With an experiment described this week in Physical Review Letters—a feat that involved harnessing starlight to control measurements of particles shot between buildings in Vienna—some of the world’s leading cosmologists and quantum physicists are closing the door on an intriguing alternative to “quantum entanglement.”

“Technically, this experiment is truly impressive,” said Nicolas Gisin, a quantum physicist at the University of Geneva who has studied this loophole around entanglement.

According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another—at least, until they are measured, when they seem to suddenly roll the dice and jump into formation. Stranger still, when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.

In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.
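
Bell's limit and its violation can be made concrete with the standard CHSH form of his inequality, in which local realism caps a certain combination of correlations at 2 while quantum mechanics reaches 2√2. The correlation formula and angles below are the usual textbook choices for polarization-entangled photons, not the settings of the Vienna experiment.

```python
import math

def E(a_deg: float, b_deg: float) -> float:
    """Quantum correlation between polarization measurements at angles a and b."""
    return -math.cos(math.radians(2 * (a_deg - b_deg)))

# Standard CHSH measurement angles (degrees) that maximise the quantum value.
a, a_alt, b, b_alt = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b_alt) + E(a_alt, b) + E(a_alt, b_alt)

print(f"|S| = {abs(S):.3f}")  # ~2.828 = 2*sqrt(2): above the local-realist
                              # bound of 2, as Bell-test experiments confirm
```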

Only there’s a hitch: In addition to locality and realism, Bell made another, subtle assumption to derive his formula—one that went largely ignored for decades. “The three assumptions that go into Bell’s theorem that are relevant are locality, realism, and freedom,” said Andrew Friedman of the Massachusetts Institute of Technology, a co-author of the new paper. “Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.” This is known as the “freedom-of-choice” loophole.

In a Bell test, entangled photons A and B are separated and sent to far-apart optical modulators—devices that either block photons or let them through to detectors, depending on whether the modulators are aligned with or against the photons’ polarization directions. Bell’s inequality puts an upper limit on how often, in a local-realistic universe, photons A and B will both pass through their modulators and be detected. (Researchers find that entangled photons are correlated more often than this, violating the limit.) Crucially, Bell’s formula assumes that the two modulators’ settings are independent of the states of the particles being tested. In experiments, researchers typically use random-number generators to set the devices’ angles of orientation. However, if the modulators are not actually independent—if nature somehow restricts the possible settings that can be chosen, correlating these settings with the states of the particles in the moments before an experiment occurs—this reduced freedom could explain the outcomes that are normally attributed to quantum entanglement.

The universe might be like a restaurant with 10 menu items, Friedman said. “You think you can order any of the 10, but then they tell you, ‘We’re out of chicken,’ and it turns out only five of the things are really on the menu. You still have the freedom to choose from the remaining five, but you were overcounting your degrees of freedom.” Similarly, he said, “there might be unknowns, constraints, boundary conditions, conservation laws that could end up limiting your choices in a very subtle way” when setting up an experiment, leading to seeming violations of local realism.

This possible loophole gained traction in 2010, when Michael Hall, now of Griffith University in Australia, developed a quantitative way of reducing freedom of choice. In Bell tests, measuring devices have two possible settings (corresponding to one bit of information: either 1 or 0), and so it takes two bits of information to specify their settings when they are truly independent. But Hall showed that if the settings are not quite independent—if only one bit specifies them once in every 22 runs—this halves the number of possible measurement settings available in those 22 runs. This reduced freedom of choice correlates measurement outcomes enough to exceed Bell’s limit, creating the illusion of quantum entanglement.
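To see how a pinch of lost freedom can mimic entanglement, here is a toy simulation (our own illustration, not Hall's calculation): a fully local, deterministic hidden-variable model in which nature occasionally vetoes the “randomly” chosen settings. Each run's hidden variable produces the right CHSH sign on three of the four setting pairs and the wrong sign on one “bad” pair; with probability p_restrict, that bad pair is quietly struck off the menu:

```python
import random

PAIRS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def chsh_with_reduced_freedom(p_restrict, n_runs=200_000):
    """CHSH value of a local deterministic model whose settings are
    not fully free: with probability p_restrict, a run's 'bad'
    setting pair is vetoed and replaced by one of the other three."""
    sums = {q: 0 for q in PAIRS}
    counts = {q: 0 for q in PAIRS}
    for _ in range(n_runs):
        bad = random.choice(PAIRS)    # this run's hidden weak spot
        pair = random.choice(PAIRS)   # the "freely" chosen settings
        if pair == bad and random.random() < p_restrict:
            pair = random.choice([q for q in PAIRS if q != bad])
        sign = -1 if pair == (1, 1) else 1   # CHSH sign of this term
        term = -1 if pair == bad else 1      # wrong only on the bad pair
        sums[pair] += term * sign            # accumulate the product A*B
        counts[pair] += 1
    E = {q: sums[q] / counts[q] for q in PAIRS}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

for p in (0.0, 0.5, 1.0):
    print(p, round(chsh_with_reduced_freedom(p), 2))  # ~2.0, ~3.0, ~4.0
```

In this toy model the expected CHSH value works out to 2(1 + p_restrict): with full freedom it sits exactly at Bell's classical limit of 2, and even a modest veto probability pushes the statistics past the limit without any spooky action.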

The idea that nature might restrict freedom while maintaining local realism has become more attractive in light of emerging connections between information and the geometry of space-time. Research on black holes, for instance, suggests that the stronger the gravity in a volume of space-time, the fewer bits can be stored in that region. Could gravity be reducing the number of possible measurement settings in Bell tests, secretly striking items from the universe’s menu?

Friedman, Alan Guth and colleagues at MIT were entertaining such speculations a few years ago when Anton Zeilinger, a famous Bell test experimenter at the University of Vienna, came for a visit. Zeilinger also had his sights on the freedom-of-choice loophole. Together, they and their collaborators developed an idea for how to distinguish between a universe that lacks local realism and one that curbs freedom.

In their cosmic Bell test, the researchers replaced the usual random-number generators with the light of distant stars: the color of the starlight arriving at each telescope, fixed centuries earlier, dictated each modulator's orientation. And yet, the scientists found that the measurement outcomes still violated Bell’s upper limit, boosting their confidence that the polarized photons in the experiment exhibit spooky action at a distance after all.

Nature could still exploit the freedom-of-choice loophole, but the universe would have had to delete items from the menu of possible measurement settings at least 600 years before the measurements occurred (when the closer of the two stars sent its light toward Earth). “Now one needs the correlations to have been established even before Shakespeare wrote, ‘Until I know this sure uncertainty, I’ll entertain the offered fallacy,’” Hall said.

Next, the team plans to use light from increasingly distant quasars to control their measurement settings, probing further back in time and giving the universe an even smaller window in which to cook up correlations between future device settings and the states of the particles. It’s also possible (though extremely unlikely) that the team will find a transition point where measurement settings become uncorrelated and violations of Bell’s limit disappear—which would prove that Einstein was right to doubt spooky action.

“For us it seems like kind of a win-win,” Friedman said. “Either we close the loophole more and more, and we’re more confident in quantum theory, or we see something that could point toward new physics.”

There’s a final possibility that many physicists abhor. It could be that the universe restricted freedom of choice from the very beginning—that every measurement was predetermined by correlations established at the Big Bang. “Superdeterminism,” as this is called, is “unknowable,” said Jan-Åke Larsson, a physicist at Linköping University in Sweden; the cosmic Bell test crew will never be able to rule out correlations that existed before there were stars, quasars or any other light in the sky. That means the freedom-of-choice loophole can never be completely shut.

But given the choice between quantum entanglement and superdeterminism, most scientists favor entanglement—and with it, freedom. “If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

Source: www.theatlantic.com

The Strong Force Is What’s Holding The Universe Together


Particle physicists might seem like a dry bunch, but they have their fun. Why else would there be such a thing as a “strange quark”? When it comes to the fundamental nuclear forces, though, they don’t mess around: the strongest force in nature is known simply as the “strong force,” and it’s the force that literally holds existence together.

 

Zoom In On The Elementary Particles

To find out what the strong force is, you need to have a basic understanding of what physicists call the elementary particles. Let’s start with an atom—helium, for example. A helium atom has two electrons zipping around a nucleus made up of two neutrons and two protons. For most high-school chemistry classes, that’s where the tiny particles end. But you can zoom even further into the atom: those protons and neutrons are a class of particle called hadrons (à la the Large Hadron Collider!), which are made up of even smaller particles called quarks. Quarks are what’s known as an elementary particle, since they can’t be split up any further. They’re as small as things get. Quarks are one of the two types of elementary matter particles; the other is the lepton. Quarks and leptons each come in six “flavors,” and each of those has an antimatter version. (The electrons in our helium atom are a flavor of lepton, so we’re as zoomed in on them as is possible.) Heady stuff! Check out the diagram below if you’re getting lost.
The Standard Model
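If it helps to see that zoom-in as data, here is the same classification sketched in code (a plain summary of the Standard Model's matter particles; nothing here is specific to this article):

```python
# The Standard Model's matter particles: six quark flavors and six
# lepton flavors (each also has an antimatter partner, omitted here).
STANDARD_MODEL_FERMIONS = {
    "quarks":  ["up", "down", "charm", "strange", "top", "bottom"],
    "leptons": ["electron", "electron neutrino", "muon",
                "muon neutrino", "tau", "tau neutrino"],
}

def classify(name):
    """Say which family of elementary matter particle a name belongs to."""
    for family, flavors in STANDARD_MODEL_FERMIONS.items():
        if name in flavors:
            return family
    return "not elementary (a hadron, a boson, or something else)"

print(classify("strange"))   # quarks
print(classify("electron"))  # leptons
print(classify("proton"))    # not elementary: protons are made of quarks
```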

Forces Of Nature

Following so far? There are four more parts to this puzzle we call the Standard Model, which is the theory of all theories when it comes to particle physics. Those parts are the fundamental forces. Two are probably familiar: gravity is the force between two particles that have mass, and electromagnetism is the force between two particles that have a charge. The other two are known as nuclear forces, and they’re less familiar because they only operate on the atomic scale. These are the weak force and the strong force. The weak force, which drives radioactive decay, operates between particles like electrons and neutrinos (another kind of lepton), but of course, it’s the strong force we’re here to talk about.

The strong force is what binds quarks together to form hadrons like protons and neutrons. Physicists first conceived of this force’s existence to explain why an atom’s nucleus can have more than one positively charged proton and still stay together—if you’ve ever played with magnets, you know that like always repels like. Eventually, they figured out that the strong force not only holds protons together in the nucleus, but also holds quarks together inside the protons themselves. The force actually comes from a type of force-carrier particle called a boson. (Surely you remember the 2012 discovery of the Higgs boson?) The particular boson that exerts this powerful force is called a “gluon”, since it “glues” the nucleus together (we told you that physicists were a fun bunch).
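For reference, a small sketch pairing each fundamental force with its carrier boson (standard textbook assignments; note that the graviton has never been detected and remains hypothetical):

```python
# The four fundamental forces and their force-carrier bosons
# (textbook assignments; the graviton is hypothetical).
FUNDAMENTAL_FORCES = {
    "strong":          {"carrier": "gluon",          "acts on": "quarks (binds hadrons and nuclei)"},
    "electromagnetic": {"carrier": "photon",         "acts on": "electrically charged particles"},
    "weak":            {"carrier": "W and Z bosons", "acts on": "quarks and leptons (radioactive decay)"},
    "gravity":         {"carrier": "graviton (hypothetical)", "acts on": "anything with mass-energy"},
}

for force, info in FUNDAMENTAL_FORCES.items():
    print(f"{force:>15}: carried by the {info['carrier']}; acts on {info['acts on']}")
```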

Here’s what makes the strong force so fascinating: unlike the electromagnetic force, which weakens as you pull two charged particles apart (think of magnets again!), the strong force actually gets stronger the further apart the particles go. It gets so strong that it limits how far two quarks can separate. Once they hit that limit, that’s when the magic happens: the huge amount of energy it took to separate them is converted to mass (brand-new quark-antiquark pairs), following Einstein’s famous equation E = mc². That’s right: the strongest force in the universe is strong enough to turn energy into matter, the thing that makes up existence as you know it. We learned some particle physics, everyone. Who needs a snack?
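A rough back-of-the-envelope of that energy-to-matter trick, using the commonly quoted QCD “string tension” of about 1 GeV per femtometer (the numbers below are illustrative orders of magnitude, not figures from the article):

```python
# Illustrative numbers only. The gluon field between two separating
# quarks behaves like a stretched string whose stored energy grows
# roughly linearly with distance.
STRING_TENSION = 1.0   # GeV per femtometer (commonly quoted estimate)
PION_MASS = 0.14       # GeV/c^2, the lightest quark-antiquark particle

# E = mc^2: once the stretched "string" stores more energy than the
# mass-energy of a pair of pions, it can snap by materializing a new
# quark-antiquark pair, leaving hadrons rather than free quarks.
snap_distance = 2 * PION_MASS / STRING_TENSION
print(f"Pair creation becomes energetically possible beyond ~{snap_distance:.1f} fm")
```

That sub-femtometer scale is one way of seeing why no experiment has ever pried a single free quark out of a proton.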

 


Prominent Astrophysicist Calls the Big Bang A “Mirage”


Artist conceptualization of the Big Bang.

Science classes the world over teach that the Big Bang is the beginning of our universe, as if it’s established fact. In reality, it’s a theory, and one that’s been challenged periodically. In the last few years, two teams of scientists have revived the debate and offered fascinating alternative models. A recent paper, reported on in the journal Nature, even goes so far as to suggest that the Big Bang was a “mirage.”

This paper was written by astrophysicist Niayesh Afshordi and colleagues at the University of Waterloo in Ontario, Canada. They built upon the work of physicist Gia Dvali at Ludwig Maximilian University in Munich, Germany. Physicists do have some evidence that the Big Bang took place.

For instance, the microwave radiation lurking in the background of the universe points to a primordial explosion some 13.7 billion years ago, when the Big Bang is said to have taken place. The fact that the universe is still expanding also suggests that all things came from a common point, strengthening the accepted theory. But what happened before it took place has always been a mystery.

Today, we’re told that everything began with an unimaginably hot, infinitely dense point in space, which did not adhere to the standard laws of physics. This is known as the singularity. But almost nothing is known about it. As Afshordi points out in an interview in Nature, “For all physicists know, dragons could have come flying out of the singularity.” Mathematically, the Big Bang itself holds up. But the equations can only show us what happened after it, not before.

Background radiation in the universe. 

Since the singularity doesn’t fit into normal, predictable physics models and can’t offer a glimpse into its own origins, some scientists are searching for other answers. Dr. Ahmed Farag Ali of Benha University, in Egypt, calls the singularity “the most serious problem of general relativity.”

He collaborated with Professor Saurya Das of the University of Lethbridge, in Canada, to investigate. In 2015, they released a series of equations which describe the universe, not as an object with a beginning and an end, but as a constantly flowing river, devoid of all boundaries.

There was no Big Bang in this view, and similarly no “Big Crunch,” no moment when the universe might stop expanding and begin condensing. They published their work in the journal Physics Letters B and plan to introduce a follow-up study. The paper attempts a Herculean feat: healing the rift between general relativity and quantum mechanics.

In this view, the universe began when it filled with gravitons, as a bath fills with water. These hypothetical particles have no mass themselves but transmit gravity to other particles. From there, this “quantum fluid” spread out, and the speed of expansion accelerated.

So far, it remains a hypothesis that must undergo a battery of tests before it can compete with or supersede the present model. And it isn’t the only challenge to the currently accepted theory.

Currently accepted model. Credit: NASA Jet Propulsion Laboratory/Caltech.

To get a better idea of how the universe began, Prof. Afshordi and his team created a 3D model of it, floating inside a 4D model of “bulk space.” (The fourth dimension here is an extra dimension of space, not time.) This 3D model resembled a membrane, so the scientists named it the “brane.” Next, they examined stars within the model and realized that over time, some would die off in violent supernovae, collapsing into 4D black holes.

Black holes have an edge called the event horizon. Reach it and nothing will save you from being pulled in; nothing escapes, not light, not even stars. We tend to think of an event horizon as a ring around a black hole, because that’s how it’s usually drawn in 2D images. But a horizon is really a closed surface, the outer boundary of the black hole, and the horizon of a 4D black hole would itself be a 3D surface.

Afshordi ran the model to see what would happen when a 4D black hole swallowed a 4D star. As a result, a 3D brane fired out. What’s more, the ejected material began expanding in space. So the universe may be the result of a violent interaction between a star and a black hole.

Afshordi said, “Astronomers measured that expansion and extrapolated back that the Universe must have begun with a Big Bang — but that is just a mirage.”


NASA will soon create the coldest spot in the universe – and forge a bizarre form of matter inside it


  • SpaceX is launching a NASA experiment called the Cold Atom Laboratory.
  • The device will use lasers and a “knife” of radio waves to cool gases into superfluids.
  • The superfluids will be just a billionth of a degree above absolute zero.
  • A zero-gravity environment will help scientists better study superfluids and mysteries surrounding gravity and the expansion of the universe.

The void of space is cold – very, very cold. With a temperature that floats around -455 degrees Fahrenheit (just a few degrees above absolute zero), it’s hard to imagine anything more frigid.
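To put those temperatures on a single scale, here's a quick conversion sketch (ordinary unit arithmetic; the comparison at the end uses the article's one-billionth-of-a-degree target):

```python
ABSOLUTE_ZERO_F = -459.67  # absolute zero, in degrees Fahrenheit

def f_to_kelvin(f):
    """Kelvins count Fahrenheit degrees above absolute zero, rescaled."""
    return (f - ABSOLUTE_ZERO_F) * 5 / 9

deep_space = f_to_kelvin(-455)  # about 2.6 K, near the 2.7 K background
cal_target = 1e-9               # CAL's goal: roughly 1 nanokelvin

print(f"Deep space: {deep_space:.1f} K")
print(f"CAL's target is ~{deep_space / cal_target:.0e} times colder")
```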

But physicists hoping to probe the universe’s deepest mysteries are trying to create a spot that’s even colder.

A potentially revolutionary NASA experiment may break low-temperature records this summer. What’s more, the experiment will fly aboard a SpaceX resupply mission to the International Space Station (ISS), where astronauts will install it in their laboratory.

The goal of the Cold Atom Laboratory (CAL), as the box-shaped experiment is called, is to chill puffs of gas to a mind-boggling one-billionth of a degree above the coldest temperature possible.


Under those conditions, gases should form what’s known as a Bose-Einstein condensate (yes, it’s named in part after Albert Einstein): a form of matter that is totally alien to the human experience. Also called a superfluid, this matter moves like a fluid but acts like a solid and seems to have no friction.

“If you had superfluid water and spun it around in a glass, it would spin forever,” Anita Sengupta, an aerospace engineer at NASA’s Jet Propulsion Laboratory (JPL) and CAL’s project manager, said in a NASA-JPL press release. “There’s no viscosity to slow it down and dissipate the kinetic energy.”

Scientists can form superfluids on Earth with a lot of effort, but gravity yanks them down into contact with warmer matter, causing them to disappear within fractions of a second.

That’s why researchers have worked for years to launch CAL into space. The zero-gravity environment will allow the superfluid to float for minutes at a time.

If the experiment works, it might help physicists probe some of the deepest mysteries of the cosmos: gravity, dark matter, and dark energy.

How to make the coldest spot in the universe


The CAL experiment is essentially a box of lasers.

While it seems counterintuitive, lasers are great at cooling down matter in a vacuum. When light with the correct frequency hits a puff of gas, the atoms absorb and re-emit photons, and the tiny momentum kicks from that exchange gradually slow the atoms down – carrying away almost all of the matter’s energy.
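How cold can the laser stage alone get? A rough sketch using the standard Doppler-limit formula, T = ħΓ/(2k_B), with textbook values for rubidium's 780-nm cooling line (our illustration; CAL's full sequence pushes far below this limit):

```python
import math

HBAR = 1.0545718e-34          # reduced Planck constant, J*s
KB = 1.380649e-23             # Boltzmann constant, J/K
GAMMA = 2 * math.pi * 6.07e6  # natural linewidth of the Rb D2 line, rad/s

# Doppler cooling limit: the temperature floor set by random photon recoils.
t_doppler = HBAR * GAMMA / (2 * KB)
print(f"Doppler limit for rubidium: {t_doppler * 1e6:.0f} microkelvin")
# ~146 uK: impressively cold, yet ~100,000x hotter than CAL's
# nanokelvin goal, which is why further tricks follow the lasers.
```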

CAL will chill a puff of gas made out of either potassium or rubidium. As the atoms move from one end of the instrument toward the other, two sets of lasers will drastically cool them down. Once the gas is thoroughly chilled, a device called an “atom chip” will compress it using magnetic fields.


Next, a “knife” made out of radio waves will sweep away any warm atoms, further cooling the gas. Other tricks will then chill it hundreds of times further – down to about 100 million times colder than the vacuum of space.

The orb of superfluid that forms at this point should be practically motionless and may last for up to 10 or 20 seconds. Future improvements to CAL could potentially push that duration to more than 100 seconds, NASA says.

And at that point, researchers can subject the superfluid to a battery of tests for longer than they’ve ever been able to before.

“Studying these hyper-cold atoms could reshape our understanding of matter and the fundamental nature of gravity,” Robert Thompson, CAL’s project scientist and an atomic physicist at NASA JPL, said in the release. “The experiments we’ll do with the Cold Atom Lab will give us insight into gravity and dark energy – some of the most pervasive forces in the universe.”

Chilling out to probe the universe’s deepest mysteries


The reason why cooling particles down could help us better understand the universe is tricky, since it’s a matter of quantum physics (the study of the universe on an unfathomably small scale).

Satyendra Nath Bose and Albert Einstein predicted in 1924 (and were later proven correct) that if matter is cooled beyond a certain ultra-chilly point, its particle-like nature simplifies into a wave-like nature, forming a lump of mass that behaves as one entity. This form of matter is called a Bose-Einstein condensate, which is a superfluid.
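The transition point they predicted can be estimated with the standard ideal-gas condensation formula; here is a sketch for rubidium-87 at an illustrative trap density (our numbers, not CAL's published specs):

```python
import math

# Ideal-gas BEC transition: T_c = (2*pi*hbar^2 / (m*k_B)) * (n/2.612)^(2/3)
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
KB = 1.380649e-23      # Boltzmann constant, J/K
M_RB87 = 1.443e-25     # mass of a rubidium-87 atom, kg
ZETA_3_2 = 2.612       # Riemann zeta(3/2)

def bec_critical_temp(n):
    """Condensation temperature for an ideal Bose gas of density n (m^-3)."""
    return (2 * math.pi * HBAR**2 / (M_RB87 * KB)) * (n / ZETA_3_2) ** (2 / 3)

print(f"T_c at n = 1e19 m^-3: {bec_critical_temp(1e19) * 1e9:.0f} nK")
```

Below that temperature (under 100 nanokelvin here), the atoms' quantum waves overlap and the gas collapses into a single collective state.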

So what does this have to do with gravity, or its anti-cousin, dark energy?


Humans can’t see 95% of the stuff in the universe. About 25% is made up of dark matter – possibly an unseen particle (or a bunch of black holes) that exerts a strong gravitational force. Dark energy, the repulsive force that’s making space expand at an ever-faster rate, makes up about 70% of the universe’s matter-energy pie. Some researchers suspect that dark energy, “normal” gravity, and dark matter may all derive from as-yet-unseen particles: axions, gravitons, and solitons (respectively).

Those particles could be responsible for throwing off the physics equations Einstein published in the early 1900s, but we haven’t detected any blobs of the particles yet.

Which brings us back to the ultra-cold matter.

If a blob of “normal” matter can be turned into essentially one large particle that’s much more massive than a single atom, strange quantum behaviors may emerge that could lend credence to the particle idea – or help shoot it down.

If the experiments hint at the existence of massive yet invisible particles in space, researchers may be able to develop ways to find them and solve decades-old mysteries about the past, present, and future of the universe. If the evidence doesn’t pan out, it could be a blow to the idea, though researchers may just need to build more sensitive experiments.

NASA’s box of lasers is scheduled to launch aboard a SpaceX resupply mission to the International Space Station (ISS) on August 1, 2017.
