Monday, April 9, 2012

Paul Davies Gets It Right

Ever since I began developing the biocentricity program in 2009 (based on a hypothesis by biologist Robert Lanza), I’ve been on the lookout for biocentricity-friendly ideas from the scientific establishment. So imagine my delight when one of my favorite physicists, Paul Davies, floated a surprisingly similar proposal on the excellent PBS show Closer to Truth.

A few years ago, in an audacious New York Times Op-Ed piece, Davies famously rankled his fellow physicists. He wrote that science is becoming increasingly faith-based, as physics has been forced to invoke multiple universes and other devices which — though they provide workable explanations — may end up being no more falsifiable than saying God did it. I appreciate that rather than appealing to external agents and “theories of the gaps,” Davies challenges his colleagues to look for answers from within the universe, however difficult it may be. That’s exactly what biocentricity attempts to do. So, even though I’m no Paul Davies, perhaps it’s not terribly surprising that our ideas might converge.

The topic on Closer to Truth was “Does consciousness point to God?” Host Robert Lawrence Kuhn was looking for alternatives to an intelligent designer to explain the emergence of consciousness. Here is a transcript of Davies’ segment:


Paul Davies: “This is a radical idea, although it’s not so radical if you’re steeped in quantum physics, where the observer plays a very significant role. In the popular mind, there’s this notion that there’s a unique history that connects the Big Bang, the origin of the universe, with the present state of the universe. Quantum physics says that’s just a load of baloney — that there’s an infinite number of histories. They’re all folded in together, and if you know nothing at all about the past of the universe, you must take all of these histories. And when we make observations, what we’re doing is ‘chipping away’ at these histories and removing some of them. We’re culling them. And in principle, if we could fill the entire universe with observations, we would then home in on something like a unique history. So, the act of observation, in part, resolves something about the histories of the universe.

“The laws start out unfocused and fuzzy, [but] eventually there’s life and observers, that link back, just like in quantum mechanics, back in time, through making their observations, and help ‘sharpen’ those laws in a way that’s self-consistent with their own existence. So here we have a universe that has an explanation within itself: The observers that arise, play a part in selecting the very laws that lead to the emergence of observers in the first place.

“You have to have this. If we’re trying to explain why does the universe exist in its present form, and in particular why does it contain life and observers, obviously those life and observers have to be relevant to the laws that give rise to them. Because there’s no other way you can have an explanation for the universe from entirely within it. The only alternative is to appeal to something outside it, like an unexplained god, or an unexplained set of physical laws.

“When we’re tangling with these ultimate questions of existence, we’re bound to go beyond intuition — we’re bound to go beyond common-sense everyday notions. So anything we come up with is going to strike you at first sight as just bizarre, ridiculous, even absurd. But I think what I’m saying is no more ridiculous or absurd than taking, on faith, the existence of an unexplained god or designer, or an unexplained set of physical laws that just happen to be right and give rise to observers like ourselves.”


Wow! Davies’ proposal has obvious ties to Stephen Hawking’s top-down cosmology. But whereas Hawking’s alternative histories of the universe wither away on their own, this is the first time I’ve heard anyone suggest that biological observations were responsible for this culling. (“Chipping away” is the perfect metaphor, too: It suggests that the universe starts out amorphous and formless, and then gets “shaped” through a process of removal, the way a block of stone gets shaped by a sculptor.)

Biocentricity extends Davies’ idea with a few simple principles:
1. All organisms that descended from our earliest common ancestor can be thought of as a single “super-observer,” within which Davies’ self-consistency principle must hold, given that they are all causally connected.
2. Technology is included with this “super-observer,” because all technology is designed by biological organisms. (And, many technological devices have been given the capacity, by their biological designers, to make observations and collect information, just like a living organism does.)
3. Independent forms of life that have had no contact with Earthly life and technology could not exist in the same universe. If what Davies said above is true, then there’s no reason why alien life forms would be “chipping away” at the same universe, and revealing not only the same physical laws but also the exact same history. How could the same laws and global history be selected independently in two far-apart regions of the universe? The existence of independent aliens thus wholly contradicts Davies’ premise; we are back to requiring an explanation for the laws and history that are common to disconnected regions of the Cosmos. (In 2010, Davies released a book called The Eerie Silence: Renewing Our Search for Alien Intelligence. Could it be that this contradiction hasn’t occurred to him?)
4. Davies’ proposal of a “feedback loop” that reaches back in time is difficult to imagine, but not so much if we adopt the relational approach to physics. This says that the early universe isn’t some absolute thing that we’re “chipping away” billions of years later. Rather, the state of the early universe is defined strictly in relation to the modern super-observer, i.e., us. Only within this self-consistent universe–observer relation are the alternative histories culled. It’s not like we’re going back and changing things that objectively happened.

If you combine Stephen Hawking’s top-down cosmology, Paul Davies’ “chipped away universe,” and the fact that all of biology and technology is uniquely and locally connected, you get biocentricity. And with biocentricity comes the prediction that the universe is exclusive to us and is otherwise completely sterile. That’s certainly a bizarre, ridiculous, even absurd idea—but it would explain the “eerie silence” Davies wrote a book about. Best of all, this prediction can and will be tested, within your lifetime!

Monday, August 23, 2010

Biocentric Universe, Part 4:
Where Are The Aliens?

Note: The article below, part 4 in a series on the biocentric universe theory, accompanies a short YouTube video on the same topic. For an overview of the theory, please read the FAQ.

“Directly opposite to the concept of universe as machine built on law is the vision of a world self-synthesized. In this view, the notes struck out on a piano by the observer-participants of all places and times constitute the great wide world of space and time and things.”
—John Archibald Wheeler (1989)


One of the most profound questions for modern humans is whether or not we’re alone in the universe. Over the centuries, thinkers such as Copernicus, Galileo, Ernst Mach, and Albert Einstein discovered that we Earthlings hold no special or preferred place in the Cosmos, and that has led to a general realization that we’re an incredibly tiny, insignificant part of the greater whole. This was especially true in the 20th century, when astronomers first realized that the universe doesn’t stop at the edge of the Milky Way: Our galaxy is only one of billions, and there isn’t even anything special or interesting about our galaxy.

As a minor comfort, we’ve posited that since the universe is so vast and the numbers so large, at least there must be other intelligent life forms somewhere, even if they’re extremely far away. This has allowed our imaginations to run wild, as we try to envision what aliens might be like. Would they resemble humans, like on Star Trek? Would they have a head and extremities? Would they even be carbon-based, like us? Some visionaries imagine forms of “life” (here using the term loosely) that might not even consist of individual organisms, or might be made of something other than solid matter.

But all of this is, of course, wild speculation. Contrary to what we’ve seen in sci-fi films, the mainstream scientific opinion is that no Earthling has ever met an alien being. There are fringe theories about alien visits in early human history or prehistory (e.g., they helped build the Egyptian pyramids or interacted with the Mayan civilization), or that they live among us today. (David Icke believes that many persons of prominence, from George W. Bush to Kris Kristofferson, are actually shape-shifting reptilian aliens.)

Still others deeply believe they had a personal encounter with aliens at some point, usually while they were sleeping. These stories are remarkably widespread, compelling, and similar, but they are more realistically seen as episodes of sleep paralysis. In the Middle Ages, it was common for people to report being visited by demons in the night who would sit on their chests and torment them; these days, small extraterrestrial humanoids seem to be the predominant scary “others” in our collective unconscious, so it’s no surprise they’re the new demons of the night. As skeptics point out, it is implausible that genuine alien encounters would begin only a few decades ago, after being absent from the rest of recorded history. Carl Sagan famously said that “extraordinary claims require extraordinary evidence,” and by Occam’s Razor it is far easier to explain alien abductions and other visits as subjective phenomena due to sleep paralysis, mass hysteria, etc., than to posit the assumptions and conditions necessarily associated with actual, objective alien visits.

Fermi & Drake

The physicist Enrico Fermi was the first prominent scientist to ask why we haven’t found objective, definitive evidence of alien intelligence in deep space. By the time UFO sightings had become stories in the news, many astronomers believed that life, being something that flourishes readily on Earth — by all accounts a fairly ordinary planet — must have arisen on other fairly ordinary planets as well. Fermi took this reasoning further: All signs point to the universe being around some nine billion years before the Earth formed, and heavier elements such as carbon, the necessary building blocks of life, had been forged in massive, short-lived stars from the universe’s first billion years onward. If life’s building blocks had been around for billions of years before Earth, and if it’s a fairly ordinary thing for living organisms to emerge spontaneously from these building blocks, then the universe must be teeming with life. Not only that, it’s been teeming with life for so long that some of that life must have evolved to achieve consciousness and technology similar to ours. There could be thousands or millions of civilizations many millennia more advanced than ours, the aliens perhaps able to traverse galaxies, build massive infrastructures, and communicate across unimaginable expanses of space.

So, if that’s true, Fermi famously asked, where is everyone?

In 1961, the astronomer Frank Drake decided to try estimating how many of these civilizations might potentially be discovered in the Milky Way, from Earth. First he considered the formation rate of new stars in the galaxy (estimated at roughly 10 per year). Then he considered the proportion of stars with planets orbiting them (estimated at 0.5). Next was an even more speculative factor, the average number of life-friendly planets in solar systems that have planets (Drake chose 2). Of those life-friendly planets, Drake considered the fraction that actually do develop life; he estimated this to be all of them, at least at some point in their history, for a value of 1. Not all life necessarily evolves into intelligent life, though, so Drake added another term reducing the total by a factor of 0.01 (i.e., 1% of planets with life develop intelligent life). Of those planets with intelligent life, on how many would the aliens be able to eventually communicate in a way that might be detectable from Earth? Drake estimated another 1%, or 0.01. Finally, he included yet another speculative factor expressing how long such communicating civilizations would last, choosing the number 10,000 years. This collection of estimates constituted the famous Drake equation, the idea being that if you multiply all of the terms, you will get an extremely ballpark estimate of how many advanced Milky Way civilizations could be detectable from Earth, right now. Drake’s estimate from the above numbers: 10.
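
Since the equation is just a straight product of these terms, the arithmetic is easy to check. Here is a minimal sketch in Python using exactly the estimates quoted above (the variable names are mine, for readability):

```python
# The Drake equation with the estimates quoted above: a straight product of terms.
star_formation_rate   = 10      # new stars per year in the Milky Way
frac_with_planets     = 0.5     # fraction of stars with planets
habitable_per_system  = 2       # life-friendly planets per planet-bearing system
frac_developing_life  = 1.0     # of those, fraction that actually develop life
frac_intelligent      = 0.01    # fraction of life-bearing planets with intelligent life
frac_communicating    = 0.01    # fraction of those that become detectable from afar
civilization_lifetime = 10_000  # years a detectable civilization lasts

N = (star_formation_rate * frac_with_planets * habitable_per_system *
     frac_developing_life * frac_intelligent * frac_communicating *
     civilization_lifetime)
print(round(N, 2))   # 10.0 detectable civilizations in the galaxy right now
```

Swap in more optimistic or pessimistic values and the result swings wildly, which is precisely the point of the next paragraph.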

Almost immediately, Drake’s thinking was debated and revised. One can reformulate the Drake equation any number of ways. If you want to be optimistic about finding intelligent life around other stars, you can pump up the speculative numbers and get a much higher total; if you believe that Earth-type life is rare, you can shrink the numbers as well as add additional restrictive terms. For example, one factor that enhances Earth’s life-friendliness is the presence of a much larger planet, Jupiter, which tends to slingshot large, extinction-capable asteroids away from Earth and out of the solar system. Many also believe that an unusually large and tidally locked moon (one side of our moon always faces toward us), by slowing the planet’s rotation, is necessary for a long-evolving lineage of life. A circulating liquid core, and the tectonic activity that results on the planet’s surface, are also thought necessary for the dynamic evolution of diverse life forms. In other words, so many of the equation’s terms are speculative and controversial — or based on Earthly numbers and therefore subject to anthropic bias (the Earth wasn’t randomly chosen as an average planet, since we actually live here) — that no definitive conclusions can be drawn from it. Still, the Drake equation is an interesting exercise and has long been an inspiration for SETI, the search for extraterrestrial intelligence.

The “Great Silence” & the Fermi Paradox

SETI scientists are certainly listening, and have been for years. But what do they hear? Nothing — something astronomers call the “Great Silence.” It’s absolutely true that the researchers have barely scratched the surface, having pointed their radio telescopes to only a tiny fraction of the Milky Way stars, and always for short durations limited by their funding. It’s also true that even the most advanced civilizations might have no use for the radio bands of the electromagnetic spectrum, which SETI tends to tune into. (Consider that our own civilization’s radio-frequency output peaked in the 1980s; we’ve since switched over much of our communication to purely terrestrial cable and fiber optics.) Regardless, in the manner of Fermi, many physicists believe that we should still hear or see something out there. Forget the Milky Way — there are so many galaxies, each with so many stars, and the universe’s history is so long, there really should be some remnant of an alien civilization leaking its way to Earth. In honor of the physicist who started it all, a name has been given to represent the discord between the Great Silence and the huge numbers involved: the Fermi paradox. Why do we seem to be alone, an idea that many people find absurd or even impossible?

There’s no shortage of suggested answers to this question. While a minority (largely biologists) believe that life is such a rarity that we may in fact be alone, probably the most common position is simply: Alien civilizations exist, but they are just too far away. Others believe that not only do they exist, but they have visited Earth. Still others believe that they exist but hold to a doctrine of non-intervention with younger civilizations, perhaps having constructed Dyson spheres to hide themselves from outside view. This last solution is particularly unsupportable; the invocation of mysterious, highly speculative factors that are in principle unknowable, and which lack even indirect evidence, should be a red flag for any critical thinker. Might there be an answer to this question that can be tested experimentally, and that doesn’t require aliens to conspire against their discovery by looky-loos with Very Large radio telescopes?

The Biocentric Alternative

Biocentricity takes a definitive position on the matter: We are in fact alone, because unrelated alien lineages could not possibly exist in our universe. That is the prediction of the theory. It states that the structure of the universe is contingent upon the observational acts of living organisms — observers that are not born out of a pre-existing universe of defined matter, but which instead actively produce the universe through their observations. Collectively performing this task are all of Earth’s living beings. Together, they form a kind of “common observer,” which observes/produces the visible universe. What is meant by “common observer”? I mean there is a certain operational unity or oneness among all of the life forms on Earth. Despite there being a multitude of individuals, all doing their own thing, some practical commonality ties us all together, such that we collectively constitute a singular observing entity, in some manner of speaking.

How might this work? Well, consider a living organism, made of cells. During its life, new cells are produced, live for a while, and then die. This is what it means to be an organism: Every organism is made of smaller living units of biological activity, which have diverse roles in helping the organism function, and which have finite, overlapping lifetimes within the longer lifetime of the organism itself. But the term “organism” can be broadened to describe living things other than individuals of a species. A perfect example is a colony of bees. The entire colony can be thought of as a single organism, with most or all of the functions of an individual of the species, including reproduction (a growing colony eventually divides in two, producing two similar colonies) and even temperature regulation (some bees are tasked with beating their wings to fan fresh air through the hive). In this case, the individual bees are like “cells” of this “organism”: Though genetically similar, they have different physical structures — a queen is different from a worker — and different roles to play within the hive. A colony may have a lifetime of several years, during which new “cells” are produced, live for a while, and die. Yet the colony, as a kind of meta-organism, lives on.

It doesn’t take a huge stretch of the imagination to extend this definition of “organism” to its limit. All things that have ever lived on Earth can be considered to make up the ultimate organism — a true superorganism of Earthly life. Each species is a bit like an “organ” of this superorganism, and again, individuals of a species are like cells: We are born, live for a while, and then die. As a nonreligious person I personally find this a comforting way to think of my place in the world. Rather than being an isolated individual with a finite life span, after which it is “all over,” I am a part of the bigger living picture. I will die eventually, but the superorganism of which I am a part — the entire biosphere of the Earth — will live on after me. But that’s not just a pleasant way to look at things. According to biocentricity, it is precisely this superorganism — the singular, unified “common observer” — that experiences and builds the universe. In this theory, the superorganism is really the only way to look at the living world as a whole.

E Pluribus Unum

This is one place where biocentricity seems to lose people, but it’s the last piece we need to put the whole puzzle together. One of the theory’s main common-sense objections is that it is “too complicated” — that the simplest explanation for the world “out there” is that there really is one material, physical world “out there.” Of course, this makes intuitive sense: For centuries, science has done well assuming that such a description of the world is true. And it very well might be — but as we’ve seen in previous installments, the absolute worldview of an external, pre-existing universe of defined matter is fundamentally inconsistent with an increasing number of experiments. So we consider alternatives, one possibility being a biocentric universe that is subjectively experienced by a single collective entity — the “common observer,” of which you and I are tiny parts.

But this brings up an obvious question: Why is it that we members of the “common observer” all agree upon the universe that we see? Why do we all experience the same course of events? When you throw a ball to your dog, how could the dog possibly see the same ball and catch it, if the ball (like other objects in the world) is not an external thing independent of our subjective experience? Those who know a little basic philosophy may dismiss the biocentric universe theory as an unoriginal rehash of solipsism. After all, there is a universally agreed-upon course of events that exists independently of any one individual. So if this course of events is not a function of an absolute world of independently existing objects, but is instead a subjective experience (more like an extremely lucid dream), how can we possibly all agree on a single course of events?

The answer is disarmingly simple and clear: Because the observed universe is internally consistent. Period — that’s all it takes. Unlike in a dream, all “real” events in the universe, as far as we know, obey the same rules of spatio-temporal logic. You may dream about an apple falling up, but in the real world this never happens. Things in the real world follow the set of physical laws that we’ve discovered; internal consistency rules our universe. No experiment has ever demonstrated otherwise. “Real” objects do not arbitrarily appear and disappear, and do not instantaneously jump across space. If a person witnesses such an event, it is a phenomenon unique to that individual — a hallucination of some kind — which would not hold up to empirical tests.

Now let’s consider the superorganism of all things that have ever lived on Earth, and imagine how it might be a “common observer” of the universe. This ultimate superorganism is no arbitrary lumping together of unrelated objects. All organisms are related — literally. We’re related genetically; if you go back enough generations, you will find that you and your best friend are, in fact, blood relatives. But we’re also related physically. When you were conceived nine months before your birth, your parents physically interacted with each other. (I hope they did, anyway.) And while you were in utero, and during your birth, you had a physical interaction with your mother. The same can be said for your parents: Your father, for example, had a physical interaction with your paternal grandmother, who had a physical interaction with your paternal grandfather. This is something that every living organism has in common; we’ve all had direct physical interactions with at least one parent. And I mean every living organism — even a mushroom grows from a spore that was physically produced and released into the air by a parent mushroom. We can therefore trace a continuous chain of physical interactions between any two organisms that have ever lived on Earth — you and your dog, your dog and a flea, even a modern hummingbird and a prehistoric flower. In this way, the entire history of life on Earth is linked through these direct interactions, all the way back to our earliest common ancestor. If we drew a map of these interactions, of course, it would look like a tree — identical to the genetic “tree of life.” You can think of this tree as a graphical representation of the “common observer” from the first living organism all the way up to today.

If we accept the idea that the universe is 100% internally consistent, and we accept that every living thing on Earth is connected by a chain of real physical interactions, then the universe must demonstrate 100% consistency between any two of those organisms. There can be no disagreements about the course of events, or the physical laws, observed. You can throw a ball to your dog because you and your dog agree upon this course of events. And if your dog buries a bone, that bone will still be there ten years later — even though every atom in the dog’s body has been replaced several times over in the meantime — because the universal course of events is perfectly consistent. Even if a tyrannosaurus placed a bone inside a cave 70 million years ago, that bone must necessarily still be there when a paleontologist discovers the cave today (assuming in both cases that the bone remained undisturbed).

To sum up: We have (1) a universe with a perfectly consistent course of events, and we have (2) one inter-related superorganism that witnesses these events as being perfectly consistent. This is how all of the living (and formerly living) organisms on Earth, together, constitute the “common observer” of the course of events in the universe.

Beyond the “Common Observer”

Let’s return to our discussion of aliens. Some theorists question the assumption that life originated on Earth and may have also started independently at other locations. According to the panspermia hypothesis, our familiar form of life may be extremely common in the universe, Earth having been “seeded” with life early in its history. A related idea is exogenesis, the proposition that life originated elsewhere (Mars, for example), and was brought here, perhaps by an asteroid. There is no direct evidence for panspermia or exogenesis, but the first living organisms do show up remarkably early in Earth’s geologic fossil record, and meteorites have been found on Earth that appear to have ultimately come from Mars, at least one of which has features that are believed to be microbial fossils. So there is at least some reason to believe life did not originate on Earth.

But when we speak of aliens living in other solar systems or even galaxies, we have to be a bit more sober. Barring a true panspermic situation where the lineage dates back somehow to the early universe and has been distributed across billions of light years (which would be an extraordinary situation indeed), it is highly unlikely that our lineage of life has extended to other stars and especially galaxies due to chance alone. So, let us consider completely unrelated alien lineages — beings whose origins share not so much as a single causal link with ourselves — whether they are here in our solar system, or well beyond.

If the universe is indeed biocentric and built by a “common observer,” such unrelated alien lineages, with no prior causal contact with our lineage, simply could not exist in our observer-built universe. In other words, if the theory is correct, we are most definitely alone — not because the emergence of life is rare, but because things simply could not be any other way. Biocentric means the universe is observed/produced by living organisms, and so an independent lineage of life would constitute a different common observer, and therefore would necessarily observe, produce, and dwell in a universe entirely separate from ours. We would never cross paths; they would neither be able to intercept information emanating from us via any medium (such as radio transmissions), nor would we be able to intercept theirs. Keep in mind that these statements apply only in the case that the universe is biocentric. If biocentricity describes the ontological nature of universes such as ours, then when we look out into space, it should appear to be completely devoid of other forms of life. And in fact, that is exactly what we see — the “Great Silence.”

Please don’t misunderstand; I am not saying that the absence of evidence (e.g., artificial signals from deep space) is evidence of absence. Of course, we can prove absolutely nothing from a lack of evidence for anything. However, the principle of the exclusive “common observer” does open the door to falsifying the biocentric universe theory — and falsifiability is a requirement of any scientific theory. If, at any time in the future, an entirely independent lineage of life is discovered anywhere in the universe — even right here on Earth — then the theory that you are reading about right now is dead in the water. A universe that is biocentric cannot have two causally unrelated lineages of life living in it, period. So, while SETI research can only supply a definitive yes-or-no on this theory in the case of contact with alien life, the Fermi paradox of the “Great Silence” nonetheless suggests that something interesting may be going on. And therefore biocentricity becomes increasingly compelling as we search larger and larger portions of the Cosmos, without finding anything.

Testing the “Common Observer” Principle

There are, however, ways that we can positively test the biocentric universe theory, and one of those would exploit the nature of the “common observer.” Since the theory proposes that every living organism is constrained to observe the same course of events in the universe, consistent with other organisms’ prior observations, let’s put that to the test. But the experiment has to be more sophisticated than merely throwing a ball to a dog. (That test has been done quite often, with results of limited usefulness, specifically that Doggie is such a good boy.) It needs to be done in the quantum world, where deviations from traditionally understood classical behavior emerge.

A field of research that’s very hot right now involves the study of “retrocausal” phenomena. The simplest of these experiments, a favorite topic of biocentric universe theorist Robert Lanza, is the delayed-choice experiment. This is a variation of the double-slit experiment in which particles are fired toward both slits, and then, after the particles are known to have passed the slits, the particle paths are either inspected or left alone, by choice. As the great physicist and forefather of biocentricity John Wheeler correctly predicted when he conceived the experiment, when the paths are inspected, the particles should be seen to have gone through only one slit or the other, as particles. However, when such an inspection is not made, they should form interference patterns, signifying that they went through both slits, as waves. In fact, this is what happens when the experiment is performed. So the choice, by an observing person,* of whether or not to inspect the particle paths seems to have an effect on what those particles did earlier in time.

More sophisticated, recent experiments such as this one show more dramatic retrocausal quantum phenomena. Researchers set up a three-step experiment: In step one, a laser beam with particular known properties is prepared. In step two, it is reflected by a mirror that is able to move by a tiny amount, deflecting the beam’s path slightly. (It is this deflection that the experiment seeks to measure.) In step three, the beam undergoes a “post-selection” routine that involves choices made by the experimenter. It turns out that depending on these choices made in step three, the measurement of the beam deflection can be greatly amplified, to the point that the choices actually affect the values of the measurements previously performed.
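
Experiments of this kind are commonly analyzed in terms of so-called weak values, where the quantity read off the meter is A_w = <f|A|i> / <f|i>, and it can grow far beyond the observable’s ordinary range as the post-selected state |f> is chosen nearly orthogonal to the prepared state |i>. The sketch below just evaluates that textbook formula for a two-state system; the particular states are invented for illustration and are not taken from the actual experiment.

```python
import numpy as np

# Weak-value amplification, in toy form: the weak value <f|A|i>/<f|i> can far
# exceed the eigenvalues of A (here +1 and -1) when the post-selected state |f>
# is nearly orthogonal to the prepared state |i>. States below are illustrative only.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def weak_value(prepared, post_selected, observable):
    pre = prepared / np.linalg.norm(prepared)
    post = post_selected / np.linalg.norm(post_selected)
    return (post.conj() @ observable @ pre) / (post.conj() @ pre)

prepared = np.array([1, 1], dtype=complex)   # "step one": the prepared state
for delta in [0.5, 0.1, 0.01, 0.001]:
    # "step three": a post-selection choice that approaches orthogonality as delta shrinks
    post = np.array([1, -(1 - delta)], dtype=complex)
    print(delta, round(weak_value(prepared, post, sigma_z).real, 1))
# Prints 3.0, 19.0, 199.0, 1999.0: the closer to orthogonal, the bigger the amplification.
```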

These experiments — where a human choice seems to change the result of an event earlier in time — provide a way to test the “common observer” principle, and biocentricity by extension. All we need to do is divide the experiment so that the choices made by human experimenters can also be made (in some respect) by animals, even single-celled organisms. The “common observer” principle predicts that other living organisms will be capable of constraining the resulting retrocausal phenomena, as seen by humans. In cases where the free-will actions of a living organism intercede in an experiment, subsequent choices by a human experimenter should not result in any retrocausal phenomena whatsoever. This is because in an observation-dependent, WikiWorld-style universe, any individual member of the “common observer” can seek information about an unknown property, resulting in a real, measured value of that property that did not exist ontologically before — and since the universe is 100% consistent, future observations of the same property by other observers can only reveal the same previously measured value. But if the universe is such that observers are merely passive discoverers, free-will actions by lower animals may have no effect in retrocausal experiments, or their actions may have results that don’t necessarily constrain the retrocausal effects of the human choices. (It can be argued that retrocausal phenomena are incompatible with this absolute “BritannicaWorld” model in the first place, however.)

The Fractal Nature of Life

I enjoy watching plants grow. Each plant seems to be a microcosm of the history of life on Earth: It begins from utter simplicity, perhaps as a poppy seed smaller than the head of a pin; it then sprouts, and its first set of leaves appear, then another, then another. Over time, the plant takes up increasing amounts of space and gathers light, seeking and accumulating more and more information about its surroundings. A leaf may be eaten by an unexpected insect overnight, the way that an asteroid might cause a mass extinction on Earth. But the plant bounces back, even stronger than before. Eventually, the plant looks surprisingly similar to a chart of the “tree of life” on paper — and it isn’t done growing yet.

Is it a coincidence that a single lowly plant grows and develops in mirror fashion to the entirety of all life on Earth? And that in a parallel manner, life’s organism/cell hierarchy repeats itself, like a fractal pattern, from the largest organizational levels down to the smallest — even within cells, with their variously coordinated mitochondria, ribosomes, and Golgi apparatuses? An enthusiast of the biocentric universe theory would suspect that this is no coincidence.




* These results are very often misunderstood as being more mystical or “consciousness-based” than they really are. The fact that the particles’ behavior changes is not due to the fact that they are being consciously watched, or that the particles somehow “know” that they are being observed by a human. The behavior is merely a result of the physical setup of the experiment at that moment: Depending on the choice that the human experimenter makes, the particles either will or will not have certain physical interactions with the experimental apparatus. This is a subtle but important point. The fact that an effect can seem to precede a cause in time, however — that is what’s most interesting from a scientific standpoint, and from the standpoint of this theory.

Thursday, May 6, 2010

Biocentric Universe, Part 3: WikiWorld

Note: The article below, part 3 in a series on the biocentric universe theory, accompanies a short YouTube video on the same topic. For an introduction to the theory, please read part 1. Part 2 is called “It’s All Relative.”

“The giant telecommunications system of today finds itself inescapably evolving. Will we someday understand time and space and all the other features that distinguish physics — and existence itself — as the similarly self-generated organs of a self-synthesized information system?”
—John Archibald Wheeler (1989)


A core proposition of Robert Lanza’s biocentric universe theory is that objects do not exist in any definite form until they are biologically observed. This is the one aspect of the theory that people have the most difficulty with. In the comments for our video series, the most commonly voiced objection has been, “Things exist without being observed.” But when asked, “What evidence do you have to back up that statement?” the commenters don’t have an answer.

Logicians point out, of course, that this question is inherently unanswerable with any kind of certainty. There is no way to demonstrate something’s existence (or non-existence) except by observing it. Therefore, to claim that unobserved things are nonexistent is an unfalsifiable statement: It is as impossible to argue logically against the claim as it is to argue for it. This idea, that an object’s existence depends on observation, is especially disagreeable to readers touchy about anything that smacks of spirituality or pseudoscience; after all, we could always explain that “God makes everything happen,” or “An invisible, mysterious form of energy makes everything happen.”* Neither claim is testable or falsifiable. But, is observation-dependent existence any less scientific or reason-based than observation-independent existence? One can just as effectively say that the latter is based on faith without evidence.

In Western philosophy, two separate areas of discourse are ontology, the study of being or existence, and epistemology, the study of knowledge. The realist philosophical position is that the confusion of existence and knowledge, or of objectivity and subjectivity, is a naïve mistake: It is an easy thing to ponder the proverbial tree in the forest falling without making a sound, but such a proposition wantonly disregards the tree’s (and the sound’s) existence independent of our perception of the tree or any sound it might make. But, isn’t it a mistaken confusion only under the assumption that ontology and epistemology truly are separate and completely independent? What if they are actually closely related? How are we certain that they aren’t? These concepts were touched on by the idealist philosophers, such as George Berkeley and Immanuel Kant, who suggested that the world “out there” is really a function of our consciousness, not necessarily a collection of independent objects that exist externally and objectively, ready for us humans to discover.

The Peek-a-boo Principle

It’s interesting how defensive people can get about their conviction that “things exist without being observed.” Even though they can’t back it up with evidence, they just viscerally know that’s how the world is, and anyone who entertains other possibilities is patently wrong, period. Why? Where does this strong gut impulse come from?

Perhaps it has something to do with early childhood development. There’s an important point when an infant first understands what psychologists call object permanence: Playing “peek-a-boo” with Mommy, the child tends to show confusion when Mommy “goes away,” and when she comes back, the child is happily surprised. After experiencing this enough times, however, the child stops being confused or anxious when Mommy is out of view. He or she quickly learns, through experience, that Mommy hasn’t departed this plane of existence just because there are a pair of hands (or a wall) in the way. Mommy is still there, even when we aren’t observing her.

This understanding then becomes generalized beyond Mommy to other objects: When we put Teddy Bear in the toy chest and close the lid, or when Blankie goes into the washing machine, they aren’t disappearing. Their existence persists, despite our being unable to observe them.

All of this is understandable enough; the world would be a terribly confusing place if we couldn’t count on previously observed things continuing to exist, despite being out of view. But somehow, as we mature, this generalization becomes extended to every object in the world, across space and even across time. If things that we’ve observed exist even when we are no longer observing them, then there’s no reason to think they didn’t exist before we observed them — in fact, before anyone observed them. This principle should, of course, apply universally. Even if there’s a dense dust cloud several dozen light years from Earth that blocks the view of everything behind it, there’s every reason to believe that millions of light years farther out, there exist fully formed galaxies, each with hundreds of billions of fully formed stars. Some of them may harbor life, perhaps even intelligent life. All this despite the fact that human beings may never observe these galaxies and stars, ever. What reason do we have to imagine any alternative? After all, things exist without being observed! Don’t they?

Two Scenarios: BritannicaWorld vs. WikiWorld

What would the world be like if knowledge and existence were the same? Provided there was full internal consistency (with, for example, exactly zero violations of causality and continuity), such a world would be indistinguishable from one in which existence is independent from knowledge. Even the most realist skeptic would have to admit that; there’s simply no way to know for sure which of the two worlds we live in. But, let’s imagine both scenarios and try to envision how they would appear to work. To do this, let’s think of each world as being like an encyclopedia. An encyclopedia is a collection of facts, a repository of information, kind of like the world. When we observe an object in the world, whether it’s an electron or a distant galaxy, it’s a bit like looking at a page of the encyclopedia: We have questions, and the “encyclopedia” has answers that it provides to us. But in this analogy, the encyclopedias corresponding to the two scenarios are very different.

In the conventional view, the one that holds that things exist without being observed, the Universe is like a regular paper encyclopedia. Let’s call it BritannicaWorld. Even though we humans have observed only a tiny fraction of our own galaxy, to say nothing of the billions of other galaxies that must be out there, we conventionally assume that the Universe is, in fact, a complete thing. Those other galaxies, and their stars — indeed, every particle contained therein — exist in defined forms, whether we know about them or not. Similarly, the Encyclopedia Britannica is a complete, defined thing. We bring it home and put it on the bookshelf, and if we need to know something, we consult it — knowing that all of its information is there, waiting to be read. The availability of the encyclopedia’s information has nothing to do with whether we’ve ever looked at a particular page; when we bought it, we were told that all of its pages had been printed, so we can be sure that even pages we haven’t looked at have facts on them.

This would not be the case in an “encyclopedia” where existence is knowledge, however. In that scenario, the Universe would not be complete, meaning it wouldn’t largely consist of predefined, as-yet-unseen objects awaiting our discovery. Rather, the Universe would be an ongoing “project” in which observers — beings capable of gaining knowledge from their observations (however crude) — participate in its growth. This kind of world is more like Wikipedia than the Encyclopedia Britannica, so let’s call it WikiWorld. Like Wikipedia, WikiWorld is a constantly growing body of information that anyone can participate in. Where BritannicaWorld is complete, in WikiWorld new “pages” are constantly being generated, as objects are observed and things become collectively known about them.

So, how can we gauge whether the real world is more like WikiWorld or BritannicaWorld? By doing what science does best: looking at the experimental data and seeing which world scenario better fits. Let’s examine two familiar types of quantum experiments: basic variations on the double-slit experiment, which deal with wave/particle duality (discussed in part 1), and the more challenging Bell test experiments, which deal with the behavior of entangled particles.

Waves vs. Particles

For centuries, there was a controversy in science: whether light consisted of waves or particles. In 1801, the British scientist Thomas Young passed light rays through two narrow slits and noticed that they formed a pattern of interference fringes on a screen; the pattern disappeared when he covered one of the slits. This would only happen if each individual light ray consisted of a wave that moved through both slits; if light consisted exclusively of particles, each ray should pass through one or the other slit like a bullet, producing two spots on the screen, one for each slit.

A century later, Albert Einstein published his Nobel Prize-winning paper on the photoelectric effect, demonstrating the contrary: that light consists of particles. And sure enough, if you send an individual bit of light (one photon) toward the double-slit apparatus, and observe what happens on a phosphorescent screen or photographic plate on the other side, the photon will show up in just one place. It appears to have passed through the apparatus as a particle.

The interesting thing is, if you continue running this single-photon experiment so as to let the particles build up on the screen, you eventually get an interference pattern composed of the individual particles. This happens with bits of matter, such as electrons, as well. And to make matters even more puzzling, if you observe (either directly or indirectly) the individual slits for a photon or an electron coming through — in other words, you monitor which path each individual particle took — you don’t get the interference pattern. The particles act strictly like particles.

This is the nature of wave/particle duality: Both light and matter seem to exist in both forms, though never at the same time. Which form do you see when? Amazingly, it depends on what you look for. If you set up an experiment to find particles, your experiment confirms that light and matter consist of particles. But when you set up the experiment to find waves, your experiment confirms waves.
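
For the skeptical reader, this dependence is easy to see in the standard textbook math. Below is a minimal numerical sketch (with arbitrary, made-up numbers) of the far-field pattern in the two cases: when nothing marks which slit was used, the two amplitudes add before being squared and fringes appear; when which-path information exists, only the probabilities add and the fringes vanish.

```python
import numpy as np

# Toy far-field model of the double-slit experiment; all numbers are illustrative.
wavelength = 500e-9                       # 500 nm light
slit_separation = 50e-6                   # distance between the two slits (m)
screen_distance = 1.0                     # slits-to-screen distance (m)
x = np.linspace(-0.02, 0.02, 9)           # a few positions on the screen (m)

k = 2 * np.pi / wavelength
phase = k * slit_separation * x / (2 * screen_distance)
amp_slit1 = np.exp(+1j * phase)           # amplitude for "went through slit 1"
amp_slit2 = np.exp(-1j * phase)           # amplitude for "went through slit 2"

# Paths not monitored: amplitudes add first, then square -> interference fringes.
fringes = np.abs(amp_slit1 + amp_slit2) ** 2

# Paths monitored (which-path information exists): probabilities add -> no fringes.
no_fringes = np.abs(amp_slit1) ** 2 + np.abs(amp_slit2) ** 2

print(np.round(fringes, 2))     # oscillates between 0 and 4 across the screen
print(np.round(no_fringes, 2))  # a flat 2 everywhere
```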

If we live in a “BritannicaWorld,” this confounds common sense: How can the answers we find depend on the way we are looking them up? It would be like finding different facts in a paper encyclopedia, depending on which magnifying lens we look through. Through the “looking for particles” lens, we see particles, but through the “waves” lens we see waves. Such experimental findings seem incompatible with a world that is complete and predefined, like a paper encyclopedia.

As a result, in order to make them compatible, one needs to find complex supervening explanations — such as the idea of a separate “classical world” of large-scale objects and a “quantum world” for the smallest microscopic objects, with different observed behaviors in each. (This is a bit like explaining that while entire letters on a paper encyclopedia’s pages appear to be fixed and unchanging, when you look at the speckles of ink closely enough, their shapes appear to change depending on what magnifying lens you’re using.) Or, you can simply throw up your hands and say, “Quantum mechanics is just counterintuitive — deal with it!”

These were the dominant approaches to QM in the 20th century. In order to square the experimental findings with the notion of an independently existing world, physicists looked toward decoherence theory, which says that a wave appears to become a particle when it interacts with its environment: the countless surrounding particles of the air, the experimental apparatus, and so on. Today, decoherence is the go-to topic for arguing that the properties of objects are independent of human observation. In this explanation, inanimate objects (such as the detector mechanism of a Geiger counter) can function equally well as “observers,” which may explain the observation-dependency seen in experiments. So, while decoherence may very well explain why tiny particles act unlike anything else in the world, recent, ongoing experiments in so-called scaled-up superposition are providing more and more evidence that even massive objects can exist in a superposition of many distinct states at once. This empirically challenges the belief that only extremely tiny things like electrons can behave this way.

Could wave/particle duality be explained in a simpler, more elegant manner if we lived in a “WikiWorld”?

Recall that WikiWorld is a dynamic, constantly growing database. If we ask WikiWorld whether an electron is a wave or a particle, WikiWorld could provide us with a choice: It could ask us whether we’re looking for the electron’s wave nature, or its particle nature. When we choose the “particle” option, by setting up the experiment to find a particle, WikiWorld then generates a new “page,” right there on the fly, with the answer: The electron is indeed a particle. This generation of new information is something that a paper encyclopedia just can’t do. Sure, it could give us a couple of cross-references to two other pages, each with a different answer to our question — but in that case, the encyclopedia would have to pre-contain both answers. This is basically Hugh Everett’s famous many-worlds interpretation, familiar in popular parlance as the “parallel universes theory”: When an experimenter makes a choice, such as whether to find waves or particles, he then follows one of two “branches” in the history of the Universe. However, even though the explanation is plausible and many-worlds is now a thoroughly mainstream idea, some physicists and philosophers dislike the idea of the Universe endlessly “branching” in true ontological existence in this way.

In the end, if one insists that the Universe is complete and independent like BritannicaWorld, then either it must consist of near-infinite branches of information (for example, one branch where a particular electron is described as a particle, and another where it’s a wave), or, when we ask about tiny things like electrons and photons, answers must arrive in a bizarre manner unlike any other scientific inquiry process that we know. But in a Universe that’s ongoing and participatory, like WikiWorld, there are none of these difficulties: When we ask a question that nobody has ever asked before, WikiWorld simply generates a new “page,” with the answer on it. That answer then becomes a part of the “database” of the known world. The question could be about the properties of an electron, or of a galaxy — it doesn’t matter.

Entangled Particles

One of the two or three weirdest phenomena of quantum mechanics is the idea of entanglement: When a nuclear event occurs that generates two subatomic particles at once, those particles are forever intimately correlated, a bit like identical twins. This correlation appears to continue with disregard to time and space. In doing so, entanglement challenges our notions of causality and locality — the idea that physical causes and effects only happen through direct physical contact or via mediation by other particles. In other words, even though in our Universe a cause does not arbitrarily jump across empty space and create an effect somewhere else, this is precisely what would happen with entangled particles — at least according to the predictions of quantum mechanics. In the 1930s, Einstein and two other scientists employed this argument to demonstrate that quantum mechanics (which was then very new) couldn’t be a complete theory. Something else had to be going on.

Einstein and his associates argued that for the principle of locality to be violated — for the particles to appear to “communicate” across empty space at a speed much faster than the speed of light — would amount to a paradox, now known as the EPR paradox. (Einstein famously called the proposed phenomenon “spooky action at a distance.”) He belonged to a camp believing that particles must carry some kind of extra information, or “hidden variables,” with them that determined what would happen when they were measured. The physicist John Bell suggested that tests could be performed to deduce whether or not this is the case, and since then a series of experiments, including one in which the measured particles were spatially separated by over seven miles, have determined each time that the quantum-mechanical predictions are correct. “Spooky action at a distance” is real, and local hidden-variable theories, at least of the kind Einstein envisioned, are ruled out.

Now, you might think that all of this conjecture is silly — that perhaps two entangled particles are simply produced with opposite, fixed properties (charge, momentum, etc.). In that case, there’s no mystery that measuring one particle determines the value of the other. It would be like knowing that a box contains a salt shaker and a pepper shaker: If you press a button and the box spits out a salt shaker, that would determine, with 100% certainty, that the remaining item is a pepper shaker. But there’s more to entanglement than that. A particle’s intrinsic spin, for example, can be measured along any axis in the three dimensions of space — the x-axis, y-axis, or z-axis. Measuring any of these reveals exactly one of two definite values, such as “up” or “down,” as if the particle had actually been spinning about three different axes at once. Even stranger, if we rotate our three-dimensional reference frame by 45 degrees in any direction, we still measure definite spin values of “up” or “down,” with a 50/50 probability, on every axis measured. And no matter what axis or axis orientation we measure against, its entangled twin will have a spin opposite the value of the particle that was measured. This situation would be like a box containing only two items, which somehow are: a pepper shaker and a salt shaker, and a mustard bottle and a ketchup bottle, and a bottle of red wine and a bottle of white. In this analogy, if there were three buttons on the box — “shakers,” “condiments,” and “wine” — and you pushed “condiments,” a bottle of mustard (or ketchup) would come out. And of course if you then looked in the box, you’d invariably find a bottle of ketchup (or mustard). Yet try the experiment with a similar box, this time pressing the “wine” button, and you’d get one of the wine bottles out, with its opposite still waiting inside. Quantum entanglement is that bizarre.
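
To see where that analogy comes from, here is a small sketch of the standard quantum-mechanical description of such a pair (a spin “singlet”). Nothing here is specific to any particular Bell test; it just evaluates the textbook state and operators to show that the twins come out opposite along any shared axis, while each one alone is a 50/50 coin flip.

```python
import numpy as np

# The textbook spin-singlet pair ("entangled twins"), measured along arbitrary axes.
identity = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(axis):
    """Spin observable along a unit vector; outcomes are +1 ("up") or -1 ("down")."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state: (|up,down> - |down,up>) / sqrt(2)
singlet = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)
singlet = singlet.astype(complex)

def correlation(axis_a, axis_b):
    """Average of (outcome A) x (outcome B): -1 means always opposite."""
    joint = np.kron(spin_along(axis_a), spin_along(axis_b))
    return np.real(singlet.conj() @ joint @ singlet)

z_axis = [0, 0, 1]
tilted = [np.sin(np.radians(45)), 0, np.cos(np.radians(45))]   # z-axis rotated 45 degrees

print(round(correlation(z_axis, z_axis), 3))   # -1.0: always opposite on the same axis
print(round(correlation(tilted, tilted), 3))   # -1.0: still opposite on the tilted axis
print(round(correlation(z_axis, tilted), 3))   # -0.707: partial correlation for mismatched axes

# Yet either particle by itself is a 50/50 coin flip on any axis (average outcome 0):
single = np.kron(spin_along(tilted), identity)
print(round(np.real(singlet.conj() @ single @ singlet), 3))    # 0.0
```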

If you still doubt that entanglement weirdness is relevant to any counterintuitive notions of reality (as the biocentric universe theory proposes), consider this Bell test experiment that suggests “the uneasy consequence that reality does not exist when we are not observing it.” When one reads about new QM experiments and theories, such “non-Britannica” concepts pop up again and again. As another example, consider this radical theory that human observation of the Cosmos may be causing our Universe to careen toward its end. While such a view isn’t exactly mainstream, it’s ammunition against skeptics who refuse to consider that ours could be, in any way, a participatory Universe, and who insist that observer-centered science isn’t science at all.

As in the case of particle-wave duality, entanglement is difficult or impossible to square with the BritannicaWorld model of a predefined, pre-existing Universe that awaits our discovery. But that’s not to say physicists haven’t tried: The hidden-variable theories are attempts at doing just that. These ideas declare that the correlation of entangled particles is a result of the particles somehow containing loads of predefined (if temporarily unavailable) information — much more than two subatomic particles should be expected to have. But every Bell test experiment performed so far has landed a blow against that BritannicaWorld view.

If we let go of the powerful desire for an observer-independent, “hard-wired” Universe, the difficulties of entanglement melt away. Recall that in WikiWorld, specific properties of individual objects do not exist in any definite state until they are measured or observed. If this is the case, then particles most certainly do not need to carry hidden variables. In WikiWorld there is no default “page” describing every observable property of every particle. Instead, WikiWorld waits until such a page is needed — at the time of measurement. Then, if we measure the spin of one particle about the x-axis, for example, a result is obtained by way of a “new page” being generated, automatically. Entangled particles are special, however: Being identical twins with always-opposite properties, two entangled particles in WikiWorld are described on one “page.” So, the newly generated page might tell us, “The x-axis spin of the measured particle is ‘up,’ and the x-axis spin of its non-measured twin is ‘down.’” From that moment onward, WikiWorld contains a description of this property of both particles. So, if another experimenter makes a simultaneous measurement on the twin particle, the results will be revealed to both experimenters at once.
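
For readers who think in code, here is a toy rendering of the “pages” analogy (and nothing more; it is not a physical model). A value comes into existence only when first queried, it is cached so that every later observer agrees with the first, and an entangled pair shares a single record so the twins always come out opposite. All the object names are made up.

```python
import random

class WikiWorld:
    """Toy version of the analogy: 'pages' are generated on demand and then cached."""
    def __init__(self):
        self._pages = {}                      # the ever-growing database of known facts

    def observe(self, obj, prop):
        key = (obj, prop)
        if key not in self._pages:            # nobody has ever asked...
            self._pages[key] = random.choice(["up", "down"])   # ...so generate a new page
        return self._pages[key]               # every later observer gets the same answer

    def observe_entangled(self, pair, axis, which):
        """An entangled pair shares one page; the twins are recorded as opposites."""
        key = (pair, axis)
        if key not in self._pages:
            first = random.choice(["up", "down"])
            self._pages[key] = {"A": first, "B": "down" if first == "up" else "up"}
        return self._pages[key][which]

world = WikiWorld()
print(world.observe("electron-42", "x-spin"))   # generated right now, on demand
print(world.observe("electron-42", "x-spin"))   # identical forever afterward
print(world.observe_entangled("pair-7", "z-axis", "A"),
      world.observe_entangled("pair-7", "z-axis", "B"))   # always opposite
```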

By now you may be asking: All right, fine — but what and where are these metaphorical WikiWorld pages you speak of? Even though the idea of a “page” is of course an analogy, this remains something of a mystery. In a “hard-wired,” Britannica-type world, the concept of information is easy to understand: It is located in the objects themselves, each simply containing the information that describes it. This view is so appealing to intuition, it’s understandable why physicists have jumped through hoops for the better part of a century to make it work. But in theorizing under the assumption that ours must certainly be a BritannicaWorld, in some respects the entire world then becomes mysterious, with nagging difficulties such as the measurement problem causing controversy and rancor among scientists to this day. (One is inevitably reminded of the epicycles of Ptolemy’s planetary model: Nobody had any idea what these circles were or why planets moved along them. But they were necessary to explain the planetary motions, which seemed bizarre under the “obvious” assumption that the planets revolved around the Earth.)

Perhaps we should just consider the WikiWorld view: that the experiments actually do make intuitive sense. Even though we may not know where the world’s information is really stored, the idea that it’s “somewhere else” is not as far-out as it may seem. It’s been variously suggested that our world may be a digital simulation running on some super-intelligent alien’s supercomputer, or that all of the Universe’s information is contained as a kind of hologram which is unavailable to us, but which “unfolds” through our consciousness into the real world that we observe. The next installment of this series will offer a few ideas on this question.

The “Need-To-Know” Universe

Even though most people firmly believe that “things exist without being observed,” the alternative view is more in line with the experimental findings. I personally find it more satisfying, simpler, and — once I learned to let go of some deeply ingrained assumptions — even more intuitive. Still, it’s a difficult thing for most people to accept that if you point a powerful telescope at a spot of sky that’s never been looked at with such magnification, you can “create” a galaxy that will forever afterward be seen in the same spot. How absurd is it to suggest that the galaxy’s photons have not been careening through space for billions of years, only to land on your retina at that precise moment! Human beings do not have that kind of power! The biocentric theory is incredibly arrogant!

To think that we can look at the sky and create something that physically exists billions of light years away, fully defined down to the subatomic particle, where nothing existed just seconds earlier ... well, yes, that’s a little arrogant. But I don’t see anything arrogant about the following proposition: While countless lineages of living and observing beings may arise, each lineage observes its own unique universe. We are one such lineage, and our Universe is one such universe. Never-before-observed objects are in superposition, similar to the “electron cloud” of probability surrounding an atom — the sum of all mathematically feasible configurations. However, living things seem to be incompatible with superposition; they can apparently perceive only a singular course of events, with definite observable values and outcomes. So, whenever we observe something that’s in a superposed state, we find an object that’s in a definite, “collapsed” state. In this way, the information about an object’s definite properties comes about strictly on a “need-to-know” basis. When we ask a question, the Universe supplies an answer that wasn’t there before, and that answer then becomes a part of the world for others to find.

As for that galaxy we so arrogantly “created”: Consider how much we learn about a galaxy when we look at it through a telescope for the first time. At worst, it’s a smudge; at best, a lovely spiraling picture. All of the details about its individual stars, planets, molecules, and atoms remain as unresolved through our telescope as the molecules that made up the Earth’s first living organisms once were. Those organisms cannot be considered to have been made up of atoms and molecules at the time, because we atom-knowing humans were not around then (see part 2, “It’s All Relative”). Similarly, the galaxy is a barely resolved smudge, because it’s too far away and our observation tools are not powerful enough. In either case — indeed, in all cases — the details exist in defined form only when they are sought out, and subsequently known. Or so would be the case in WikiWorld.

Yes, it would be silly to think that in discovering a galaxy, we can instantaneously create 100 billion stars, each with planets with oceans consisting of countless vibrating water or methane molecules. But a smudge of light, ready to be further resolved next week by someone with an even stronger telescope? I can live with that.


* Curiously, few of these skeptics seem to be bothered by the mainstream-physics proposition of “dark energy,” which is, quite literally, an invisible, mysterious form of energy that makes the expansion of the Universe accelerate, or by the idea of extra dimensions, which by their very nature are not findable by creatures who dwell in three large dimensions of space and one of time.

Saturday, April 3, 2010

The Biocentric Universe, Part 2: It's All Relative

Note: The article below, part 2 in a series on the biocentric universe theory, accompanies a short YouTube video on the same topic. For an introduction to the theory, please read part 1.

“The past has no evidence except as it is recorded in the present. The photon that we are going to register tonight from that four-billion-year-old quasar cannot be said to have had an existence ‘out there’ three billion years ago, or two, or one, or even a day ago.”
—John Archibald Wheeler (1989)


In 2007 and 2008, biologist Robert Lanza created a controversy when he rolled out his biocentric universe theory, which proposes that the universe we observe is continuously being built by living things, and that space and time are best thought of as constructions of the mind. Nobody wants to be told that the real world is, in any way, “unreal.” But as I’ve mentioned, people don’t have a problem accepting that solid matter is more than 99.99% empty space. Or that when we see an object, we are not sensing the object directly, as it is at that moment; but rather, we are sensing zero-mass waves in the intangible electromagnetic field, which emanated from the object some time ago. Our image of the object is very real to us — but in every possible way, that moment-by-moment picture in our mind is a function of neural impulses in the brain, based on information the body has received, by way of those waves. We’ve learned these things only in the last 150 years. Is it already time to close the door on asking what “real” really means?

At first blush, the biocentric universe theory can seem arrogant to some, or illogical, or it may have the ring of creationism. To the less thoughtful, it may just seem “stupid.” But ironically, there’s almost nothing in the theory that conflicts with conventional scientific knowledge. This is an important point: Biocentricity does not seek to throw out existing science. It is more like a wrapper around our current body of theories, laws, and hypotheses — or a lens through which the existing knowledge can be clarified, organized, and made more elegant and rigorous. This alone might be valuable enough, but proponents believe that this “wrapper theory” can also be tested directly in the laboratory. I laid out some avenues of possible investigation in part 1 of this series.

So, how can a person claim that the “real” Universe is actually a kind of three-dimensional image experienced collectively by living things, without such a claim conflicting with established science? Well, for starters, there are no theories or laws declaring that matter or anything else in the world is particularly “real.” Science deals explicitly with observations, compiling descriptions and rules about the behavior of objects in the world, based on observed evidence — and only observed evidence. And the role of the observer in these rules and descriptions has been gaining importance over the years, starting in 1905 with a clerk at the Swiss patent office.

First Generation: Relativity

At the end of the 19th century, physics was in a bit of a crisis. James Clerk Maxwell had shown that the speed of light in a vacuum is one of the constants of physics — a fixed number that is never measured to be different, ever. But for anyone who thought through the implications, a constant speed of light presented deep paradoxes for objects moving at very high speeds. What if we were inside a spaceship traveling at half the speed of light, and we set up an experiment to test light speed? What would we find? Intuition might suggest that light traveling from the tail of the ship toward the front should move slower than light measured when the spaceship is at rest. After all, forward-moving waves from a boat in motion appear (from the perspective of passengers in the boat) to move slower than waves emanating from a boat at rest. But the Michelson-Morley experiment of 1887 showed that the measured speed of light did not depend on which direction the experimental apparatus was moving.

The physicist Hendrik Lorentz came up with a workable way for Michelson and Morley’s findings to be compatible with a constant light speed. In retrospect it seems a bit silly — an example of the “Einstellung effect,” the tendency to use old ways of thinking to solve new problems. Lorentz suggested that when an object moves through the “luminiferous aether,” the invisible medium through which light waves were assumed to propagate, the object’s matter responds by mechanically contracting — although such an effect isn’t detectable by a moving experimenter, since he, and his measuring sticks, have contracted, too. Aboard our spaceship, under this explanation the light would, in some sense, be moving slower relative to the ship, but due to the contraction of the ship and its contents, it would have less distance to travel. In this way, the slowing of the light and the mechanical contraction would cancel, resulting in the inevitable measurement of c, the constant speed of light.

Albert Einstein, however, was not convinced. Mechanical contraction assumed too many things. In addition to the existence of the aether, for which there was no actual evidence, mechanical contraction assumed that both distances (intervals in space) and durations (intervals in time) had to be absolute. That is, the distance between two stationary objects was assumed to be a fixed and unchanging value, always measured the same, depending only upon where on the rigid grid of space the objects are located. And it was assumed that two clocks could be absolutely synchronized, always displaying the same time no matter how far apart they are or how they are moving. Through a variety of celebrated “thought experiments” in which he envisioned riding aboard a beam of light, and people aboard moving trains flashing signals, Einstein realized that for measured light speed to be constant, something besides the structure of matter had to give. And this is where he had his insight: The speed of light is absolute, but distances and times are not. In Einstein’s new view, now known as special relativity, the observed placement of things in space and time depends on the manner in which they are measured. For example, clocks appear to run slower than normal if they are moving relative to the person observing them (an effect that has been experimentally confirmed in many ways).
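
For reference, the quantitative statement of this clock effect is compact; it is a standard textbook result of special relativity, quoted here rather than derived. A clock moving at speed v relative to an observer appears to that observer to run slow by the Lorentz factor γ:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta t_{\text{observer}} = \gamma \, \Delta \tau_{\text{moving clock}}
```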

The key concept here is the frame of reference: In special relativity, any measurement of timing or spacing (or both, in the case of speed) must be described in reference to an imaginary frame against which the measurement is made. In the real world, with a speeding bullet for instance, the reference frame could take the form of the gun barrel, the ground, or perhaps the target. The speed can then be unambiguously described as the difference in motion between the object and the reference frame. We could just as easily describe the bullet as being at rest and the target (i.e., the reference frame) as moving — special relativity asserts that the measured change in the distance separating them will be identical. The important thing is that relative motion is all that matters: It’s irrelevant whether the object or the reference frame is the thing that’s moving. In fact, we can’t even definitively say who’s moving and who’s at rest; it’s completely arbitrary. The only measurable quantity is the difference between them.

Inside a speeding spaceship, an astronaut measuring the speed of light is doing so with his measuring apparatus as his reference frame — which, from his perspective, is at rest, even while interstellar dust races past outside. In doing this, he always measures c, regardless of what is going on outside the ship. And if he measures the speed of light coming from a star the ship is rushing toward, he will measure c again — this time, from the star’s point of view, because his clock is running slow relative to the star emitting the light. From the astronaut’s perspective, the ship’s clock seems to be running just fine; but for someone near the star, in a reference frame that’s stationary relative to the star and watching the ship approach, the ship’s clock would seem to be running slow. (The astronaut would observe the star’s clock as running slow, too.) This is the bizarre world of relativity, but it explains how different observers in different reference frames can measure different timing values while always measuring the same speed of light.

Time intervals are not the only thing affected by relative speeds; distances are, too. It turns out that in a way, Lorentz was right: Moving objects do contract, in the phenomenon now known as Lorentz contraction. But this is not a mechanical process inherent in the matter of the object; the contraction is virtual, specific to the reference frame in which the object is observed or measured. Witnesses outside the speeding spaceship would observe it as having contracted, because the ship is moving relative to their reference frame. And if the spaceship were transparent, the witnesses would also find that the light inside the ship was traveling at c, due to the apparent contraction of the ship. Meanwhile, the astronaut would observe the rest of the Universe as having contracted. As measured from his stationary reference frame, everything else is moving by at 93,000 miles per second. But if he came across marker signs that had been placed 93,000 miles apart, they would appear to pass by faster than once per second — almost as if they represented shorter distances.* Meanwhile, an observer sitting next to one of the signs would see a contracted ship passing one sign each second. Obviously, in neither case is the ship or the Universe actually, mechanically contracting. They are only observed to contract, depending on how they are observed.
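
As a sanity check on the numbers in this thought experiment, here is a small back-of-the-envelope calculation (a Python sketch; the 93,000-miles-per-second figure comes from the essay, roughly half the speed of light, and the variable names are mine):

```python
import math

C = 186_282            # speed of light, in miles per second (approximate)
V = 93_000             # ship's speed relative to the signs, miles per second (the essay's figure)
SIGN_SPACING = 93_000  # distance between marker signs in the signs' own rest frame, in miles

gamma = 1 / math.sqrt(1 - (V / C) ** 2)               # Lorentz factor, about 1.15 at roughly half of c

# Sign-frame view: the ship covers one 93,000-mile gap each second.
signs_per_second_sign_frame = V / SIGN_SPACING        # = 1.0

# Ship-frame view: the gaps are length-contracted, so the signs pass by more often than
# once per (ship) second, "almost as if they represented shorter distances."
contracted_spacing = SIGN_SPACING / gamma             # about 80,600 miles
signs_per_second_ship_frame = V / contracted_spacing  # = gamma, about 1.15

print(f"Lorentz factor gamma = {gamma:.3f}")
print(f"signs passed per second, sign frame: {signs_per_second_sign_frame:.2f}")
print(f"signs passed per second, ship frame: {signs_per_second_ship_frame:.2f}")
```

The ship-frame rate works out to about 1.15 signs per second, which is the “faster than once per second” effect described above.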

At a time when absolutism in physics was reaching a peak, this was tough to swallow. No more could we assume that the Universe is a collection of stuff unambiguously laid out in a rigid, eternal framework of space that operates on a universal clock. Instead, space and time morph and flex as we move through both of them; nothing can be described in terms of absolute, intrinsic values of length, duration, or velocity (with the notable exception of light and its universal-constant speed). And when we measure an object’s spacing and timing, we don’t find absolute values possessed by the object independently of everything else in the world. Rather, an observed measurement explicitly represents the relationship between the object and the reference frame of the observation. That was quite a revolutionary leap, and a taste of things to come.

Interlude: Quantum Mechanics

Einstein’s relativity solved problems, but new problems were just around the corner. Quantum mechanics emerged in the 1920s, and with it came another blow to absolutism. It had been assumed that particles like electrons were tiny equivalents of billiard balls: definite objects with a distinct inside and outside, and with physical properties such as location, momentum, and spin that could be determined exactly and used to describe the object in its entirety. But it turns out this is not the case. With his famous uncertainty principle, Werner Heisenberg demonstrated that we cannot simultaneously measure a particle’s position and its momentum with arbitrary precision; increasing the precision in one measurement invariably leads to a loss of precision in the other. Max Born’s probabilistic interpretation of the theory showed that electrons orbiting an atomic nucleus cannot be viewed as tiny planets with definite paths whizzing around a microscopic sun; rather, they constitute an abstract electron cloud that surrounds the nucleus, a blur of probability that an electron will be found at any given spot.
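
For reference, the modern textbook statement of the uncertainty principle puts a hard lower bound on the product of the two spreads, where Δx and Δp are the standard deviations of position and momentum and ħ is the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```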

Quantum mechanics further posits that if the exact state (position, spin, etc.) of a particle is unknown, the particle can exist in a superposition of states — that is, it can effectively exist in both states simultaneously, as if one were on top of the other. However, when the particle is actually measured, only one state is observed, at which point the superposition is said to have “collapsed” into one state or the other. The likelihood of finding one state, as opposed to the other state, is a probability function inherent in the particle. For example, an unstable particle can have a 50% probability of decaying (spontaneously turning into other particles) within an hour, and if we aren’t watching that particle and have no way of knowing what’s going on with it, we describe its state as being a superposition of decayed and non-decayed states.
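
To put numbers on that example: a 50% chance of decay within an hour means a one-hour half-life, and the standard exponential-decay law then gives the odds at any later time. A quick sketch (Python; purely illustrative, and the function name is my own):

```python
HALF_LIFE_HOURS = 1.0   # the essay's example: a 50% chance of decay within one hour

def survival_probability(t_hours: float) -> float:
    """Probability that the particle has NOT decayed after t_hours without being observed."""
    return 0.5 ** (t_hours / HALF_LIFE_HOURS)

for t in (0.5, 1.0, 2.0, 3.0):
    p = survival_probability(t)
    print(f"after {t:3.1f} h: {p:.0%} not-decayed / {1 - p:.0%} decayed "
          f"(a superposition of the two, until someone checks)")
```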

Superposition was a weird concept, but many attributed the weirdness to the fact that only microscopic things that can’t really be seen anyway can exist in superposition. But superposition didn’t sit well with all theorists. Erwin Schrödinger invented a now-famous thought experiment to argue its absurdity: Imagine if you placed a cat in a box, along with a flask of poison, a tiny radioactive source, and a particle detector, with a mechanism whereby if the source decays, the detector triggers a hammer that breaks the flask, releasing the poison and killing the cat. If you close the box and wait, the radioactive source is considered to be in superposition: both decayed and non-decayed. But since the experiment has been set up to correlate the source and the life of the cat, then if the source is in superposition, the cat must be in superposition as well — both alive and dead at the same time! “Schrödinger’s cat” demonstrated that a microscopic superposition could be extended into the macroscopic world. A variation called “Wigner’s friend” imagines the experiment performed in a sealed room with a second experimenter outside. Before the outside experimenter learns the result of the cat experiment, is the inside experimenter in a superposition of conscious states — one that learned that the cat was alive, and the other that found a dead cat? According to “Wigner’s friend,” if superposition is real, then theoretically even states of the human mind can be in superposition.

For decades, physicists have accepted that the quantum world is simply counterintuitive — that we have no reason to expect that the microscopic world should behave according to our everyday, human-world experience. Regarding “Schrödinger’s cat,” many believe that when the decaying radioactive source interacts with the environment and the detector, it “collapses” into a definite state by itself, and this is why a whole cat could not be “both alive and dead.” But even more troubling paradoxes were waiting in the wings. Einstein and others predicted that two particles which are produced together and said to be “entangled,” or intimately associated, could behave in ways that seem to defy the laws of physics. With both particles in superposition, if we measure one particle, the “collapse” of the first particle’s state automatically causes the other particle’s superposition to “collapse.” If we measure the first to be spin-up, we will know that the other is spin-down even before measuring it. This has since been confirmed numerous times by experiment. The thing that makes this a paradox is that the particles can seem to communicate instantaneously — they can be miles apart, in which case the fate of one determines the fate of the other at speeds much faster than light (or anything else) could travel between them. This seems to violate the principle of locality, the idea that causes and effects in the world occur as a result of contact, and do not jump across empty space without so much as a particle of light being involved.

The upshot of these and other difficulties is that quantum mechanics remains very much open to interpretation: Even though the mathematics of the theory are considered correct, how those mathematics become real, observable effects in the world is up for debate. Thus began the parade of QM interpretations, each trying to resolve the problems as elegantly as possible. The most famous of these may be Hugh Everett’s sci-fi-friendly many-worlds interpretation, which posits that the Universe is constantly splitting into alternate universes, including anytime a human choice or experimental measurement is made. But a few decades later, an entirely new approach to the question began to emerge.

Second Generation: Relational Physics

Throughout the history of science, paradoxes have popped up every so often, and in most cases they have been resolved when someone discovered that an assumption underlying the situation was false. In the geocentric model of planetary orbits, it was assumed that the Sun and planets revolved around the Earth; in order to explain how some planets appeared to stop moving and reverse direction in the sky, Ptolemy’s deferent-and-epicycle system became a part of the theory. Even though the Ptolemaic theory could predict astronomical motions with some accuracy, it became problematic as measurements became increasingly precise. Of course, once the faulty assumption of Earth-centered motion was abandoned, a new and much more powerful Sun-centered theory took the place of the old. Similarly, the late-19th-century paradoxes of moving bodies and the measurement of time, and their relation with a universal-constant light speed, were resolved with the advent of special relativity, which rejected the assumptions of both absolute time and an absolute grid of space.

Beginning in the 1980s, a few physicists (Simon Kochen may have been the first) started asking whether we ought to re-examine some of the fundamental assumptions that science has been making since the time of Aristotle. If relativity could refine Isaac Newton’s theories of motion by rejecting absolute time and space — recognizing that measurements of both are intrinsically tied to the measurer’s reference frame — could other theories be refined by rejecting similar assumptions about the absolute nature of the world? A new approach to physics began to emerge: the relational approach, in which all measurements are acknowledged to result from relationships or interactions in the world. The speed of a bullet has no absolute meaning until we measure it against some frame of reference; this measured speed then represents a relationship between the bullet and the reference frame. Similarly, physicists began to think that perhaps it is incorrect to assume any absolute properties of objects. Can a brick be said to have an absolute, intrinsic momentum independent of any observer? Is it really correct to assume that a measured electron is a tiny physical ball possessing an absolute charge, or is that appearance actually a function of our measurement process? Can the so-called measurement problem — the apparent change in the behavior of bits of matter whenever we measure them — be explained by saying that the measurement is all there is, and that we really can’t say there is such a thing as an absolute particle, with a specific, predefined nature that’s independent of any observer?

This idea was fully explored with a disarmingly simple interpretation, relational quantum mechanics, which Carlo Rovelli introduced in the mid-1990s. RQM puts forth the following ideas: (1) When we measure the state of a physical system, the measurement represents our interaction with the system; in fact, the state of the system is the interaction or the relationship between the system and ourselves (or our measuring apparatus). (2) No distinctions can be made between microscopic “quantum” and macroscopic “non-quantum” systems, or between measurements and non-measurement interactions, or between conscious and unconscious observers, or between animate and inanimate objects; all systems are quantum systems and all interactions are quantum interactions. (3) The same physical system may appear different to multiple observers, depending on the interaction each has with the system. (4) The appearance of a physical system to an observer is a function of the information contained in the interaction, and thus, quantum mechanics is a theory about information.

Relational quantum mechanics, and relational physics in general, make some interesting statements about the world. In no particular order:

1. If a physical system appears different to two different observers, this is a consequence of the information each observer has about the system. In the case of Schrödinger’s cat, the supposed superposition of a “live-and-dead cat” is merely a lack of information on the part of the experimenter. The cat (which, relative to a reference frame inside the box, is either definitely alive or definitely dead) has information about its interaction with the killing mechanism, but the experimenter outside the box does not have information about that interaction. The experimenter therefore cannot define the state of the cat without opening the box and receiving this information. Similarly, the radioactive source can only be said to have definitely decayed if it interacts with something that can receive this information (such as the cat); otherwise its state can only be said to be undefined or uncertain. This applies to any observer lacking this information — whether it’s the cat, the experimenter, or someone observing the proceedings from an outside reference frame.

2. Heisenberg’s uncertainty principle is recast in the light of relational physics. Rather than treating a located particle as an absolute particle whose precise position we know at the expense of potential knowledge about its momentum (even Einstein had a problem with this idea), relational physics does not regard the particle as absolute at all. It is better thought of as a wave or probability function, from which information can be extracted. Extracting this information is a bit like taking a snapshot of the wave; if we get precise information on position, the snapshot is “sharp” and therefore contains less information on momentum; if we take a “slow shutter speed” snapshot in order to get more information on momentum, we do so at the expense of information on position. In other words, the uncertainty principle doesn’t express our inability to know all simultaneous properties of an absolute particle; it starts with an uncertain wave and lets us learn certain complementary aspects about that wave, including the fact that it can appear to be particle-like if we choose to pin down its exact location.
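
The “snapshot of the wave” picture can actually be computed. Below is a numerical illustration (Python with NumPy, in natural units where ħ = 1; the setup is my own sketch, not the essay’s): a narrow Gaussian wave packet has a small position spread but a large momentum spread, a wide one has the reverse, and the product of the two spreads never drops below ħ/2.

```python
import numpy as np

HBAR = 1.0  # natural units for this illustration

def spreads(sigma_x: float, n: int = 16384, span: float = 200.0):
    """Build a Gaussian wave packet of nominal width sigma_x and return
    (position spread, momentum spread) computed from the packet and its Fourier transform."""
    x = np.linspace(-span, span, n)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma_x**2))              # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)         # normalize

    prob_x = np.abs(psi)**2
    delta_x = np.sqrt(np.sum(x**2 * prob_x) * dx)       # position standard deviation

    # Momentum-space amplitude via FFT; p = hbar * k
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    dk = k[1] - k[0]
    prob_k = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    prob_k /= np.sum(prob_k) * dk                        # normalize in k-space
    delta_p = HBAR * np.sqrt(np.sum(k**2 * prob_k) * dk) # momentum standard deviation

    return delta_x, delta_p

# A "sharp snapshot" (narrow packet) versus a "slow shutter" (wide packet):
for sigma in (0.25, 1.0, 4.0):
    dx_, dp_ = spreads(sigma)
    print(f"sigma={sigma:4.2f}:  delta_x={dx_:.3f}  delta_p={dp_:.3f}  "
          f"product={dx_*dp_:.3f}  (bound: hbar/2 = {HBAR/2})")
```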

3. Contrary to classical or intuitive thinking, an object is not a collection of absolute particles that exist in absolute locations. Rather, it is a collection of spatial relationships within that object, and these relationships are what produce its observed structure. If we observe a small enough bit of the object, we may see a particle; however, if that particle is a kind that has no known subcomponents (for example, electrons and quarks, which are believed to be fundamental and indivisible), then it contains no internal spatial relationships and cannot be considered to have any independent existence whatsoever. In other words, an apple is composed of a web of physical relationships among its subatomic components, and the apple as an independent “thing” can be said to exist insofar as those internal spatial relationships exist relative to each other. But for the apple to have any describable existence as viewed from our frame of reference (as observers), we must establish some relationship with the apple and interact with it, for example by measuring it. Barring any such interaction, nothing at all can be said about the apple. While its internal relationships may exist relative to each other, we as an external party have no information on any aspect of these relationships, so the apple as a whole is undefined in our reference frame.

Perhaps the best argument for the relational approach to physics is that it appeals to a spirit of scientific purity. It takes into account what science can and cannot know for certain. Throughout its history, science has asked questions and offered answers based on one and only one process: observation. The only things we can truly know about the world are those things that are directly observed. It may be counterintuitive, but there’s actually no evidence whatsoever that when we measure an object, that measurement reflects some absolute, independent property of that object, which would be measured the same in all reference frames. It might — but there’s simply no direct reason to treat that assumption as fact. The proponents of relational physics argue that if science is to be truly rigorous, it must deal only with observable or measurable numbers, and it must predict only what will be observed or measured. Anything beyond that — assigning independent, absolute properties to objects because we assume such statements to be true, based on the observations — amounts to a leap of faith, one that becomes glaringly obvious when closely studying things like electrons and entangled photons. This leap introduces unknowable values into the scientific process, and whenever we do that, both the explanatory and predictive powers of the scientific method are weakened. Objects in the Universe, and the Universe as a whole, can only truly be described in terms of observations and measurements, which in turn are expressions of the relationship or interaction between the object and ourselves. It follows that since we human observers are a part of the Universe, we cannot describe the entire Universe (as if from a “God’s-eye-view”) in any manner at all. Strictly speaking, we can only describe relationships and interactions within that Universe. This is a profound idea in itself.

Third Generation: Biocentricity

Let’s consider what happens when a scientist performs a measurement. We typically think of a measurement event as happening in the present, but measurements never result in descriptions of the present. All measurements are descriptions of the past. Since the speed of light is finite, any observation is an observation of a past state of an object, whether it’s an apple or a distant galaxy. So, anytime we speak of an interaction or relationship between systems in relational physics, we’re talking about interactions that span across time, typically mediated by photons of light. (This says some interesting things about photons, which we’ll get to in a future essay.)

If relational physics applies to descriptions that span across time, it applies to all descriptions that span across time. When we measure a distant galaxy, then we have a description of some past state of that galaxy; this description endures even after the measurement process is over. The next day, we still have a description of that galaxy as seen from a modern reference frame. So, there’s no reason why relational physics shouldn’t apply even when a description is speculative and no measurement has been performed. For example, when physicists describe what the Universe must have been like one second after the big bang, this description establishes some kind of relationship between (1) our reference frame in the present and (2) the (proposed) state of the Universe in the past, as with a measurement. In relational physics, any modern-day description of the early Universe carries with it the stipulation that this description is relative to our frame of reference in the 21st century. It can only describe what we would observe if we were able to time-travel back to that time period, with our 21st-century instruments and knowledge, and look. By contrast, in the conventional approach, when we talk about the formation of atoms and such in the first moments of the Universe, we must assume some absolute state of these bits of matter, which would be the case whether they were ever observed or not. This is forbidden by relational physics — just as relativity forbids assuming the absolute speed of an object or the absolute duration of an event.

So, what does that leave us with? If the Universe can only be described in terms of observations, then we cannot describe what the Universe was like in a world when no observers existed — say, while the Earth was still forming. We can only make that description relative to our modern reference frame. The modern Universe can be described relative to a modern reference frame, and the ancient Universe can be described relative to a modern reference frame. But the ancient Universe cannot be described relative to an ancient reference frame.

The only thing left for us to do, then, is fill in the middle of the picture. What about when life was just getting started? If we’re talking about a contemporaneous description (that is, a description relative to a reference frame of the same era), then the world must have been in an intermediate state of some kind. It was being observed, but barely; the first living organisms gathered only the crudest information on both themselves and their surroundings. Given that, and what we’ve established so far regarding observation, the Universe can be considered to have been precisely as crude as those organisms’ observations. And as living organisms evolved and developed sharper observational faculties, the Universe sharpened accordingly — leading to the incredibly rich and detailed Universe we humans see today, the product of billions of years of information-gathering by the superorganism we call life. Or so says the biocentric universe theory.

Two Views of the Universe

We now have two ways to look at the history of the Universe. In one, the conventional account, all objects from any time period are described relative to a modern frame of reference: how we humans would describe them if we could examine them using our modern tools and knowledge. In the other account, the biocentric view, all objects from any time period are described relative to the observational frame of reference that existed at the time.

Consider the origin of life, or abiogenesis, that moment when nonliving matter is (conventionally) said to have come together into the first living organism. Although we have no direct evidence showing how it happened, the conventional account typically involves the coming together of essential amino acids, lipid bilayers, and nucleotides to form a metabolic organism that could reproduce. Creationists/intelligent design proponents love this, because in science it really is a huge mystery. As much as biologists downplay it (for good reason), it appears to have been a spectacular and unlikely event of molecular chemistry. And that’s to say nothing of what was required even to get to that point: a universe that spontaneously appeared billions of years earlier, with the proper physical laws to allow the existence of matter, star formation, supernovas to generate heavy elements, etc., not to mention the planetary conditions that would have been needed to bring these chemicals together. The anthropic principle (see part 1) sees a lot of action here: Yes, we can remark about this unlikely scenario now that such an event allowed us to be conscious, the way a lottery winner can reflect on his or her incredible fortune after having won. But after 60 years of lab experiments trying to produce a living beastie from off-the-shelf chemicals, I think it’s safe to say that biologists are as uneasy about abiogenesis as physicists are about quantum mechanics. They aren’t likely to say it, but they really would prefer to have more satisfying answers.

Even if it doesn’t supply sure answers, biocentricity at least sheds light on why these questions are so difficult for us humans. The highly specific convergence of molecules and conditions, or the coalescence of physical laws that seem “fine-tuned” for matter, can both be attributed to examining the situations from a modern frame of reference. In the biocentric view, the conventional accounts of the distant past don’t reflect what actually went on in those situations at the time they happened. The biocentric view says that a completely undefined primordial organism of some kind spontaneously appeared in a completely undefined environment, and began to resolve crude features of the world through the crudest of observations. (Compare that to the conventional account, that a defined organism formed out of defined molecules, some ten billion years after the spontaneous appearance of a universe containing 10-to-the-80th-power defined atoms.)

Is Biocentricity a Science-Killer?

Some have criticized the biocentric universe theory because it seems anti-scientific: It seems to address some very difficult questions with, “We don’t know, we’ll never know, so don’t bother asking.” Perhaps asserting that the first living organism was “undefined” is just a way of weaseling out of doing science to learn details about that organism. In this view, biocentricity becomes a convenient box in which to put any question that is too hard to answer, not unlike “God.” Is this true?

I don’t think so. Biocentricity is either a real governing principle at work, in which case it should be testable by experiment, or it isn’t. If the principle is real, it then becomes a way to acknowledge which areas of science are truly speculative, and which areas aren’t. We can only make definitive statements about the world where information is available, and apparently there is no information left over from the appearance of the first organism on Earth. Even if that information existed at some point, none of it has made its way to our 21st-century reference frame. So, anything we say about the event is as speculative as Schrödinger’s experimenter speculating on whether the cat is alive or dead.

Of course, speculation across time — that is, doing conventional cosmology and paleobiology — does have scientific value. It is a valid question to ask what abiogenesis would look like if it happened again, in exactly the same way, on a laboratory bench in 2010. I suspect that there’s only one true answer to that question, and even if we can’t find that answer definitively, we can at least offer various scenarios and evaluate which is most likely.

The biocentric universe theory itself is speculative if it cannot be tested in the laboratory. If that’s indeed the case, the enterprise of writing essays and making videos about the theory may be of little value other than offering an alternative philosophical way of looking at the world. But that question has not been answered yet. The experiments of quantum mechanics (which I discussed in Part 1) offer tantalizing clues that the “tail” of observation really does “wag the dog” of physical reality.

It’s time for a new generation of physics experiments, involving living organisms, to answer this question. Because if it turns out that humans and animals really are constantly resolving little pieces of the Universe, the implications are profound: Not only would it mean the most dramatic shift ever from absolutism in physics (a development that would certainly have major practical, technological consequences); it would also mean that the Universe has been evolving in direct parallel with life for its entire existence.



* By now it should be clear that the speed of 93,000 miles per second is not an absolute quantity; instead, it is the difference in speed between the ship and the reference frame used by any observer performing the measurement. From the ship’s reference frame, if the astronaut takes the marker signs at face value, he seems to be going faster than 93,000 miles per second — but to determine how his relative speed would be measured by a person sitting next to one of the signs, he needs to take into account the observed contraction of space between the signs.