Note: The article below, part 2 in a series on the biocentric universe theory, accompanies a short YouTube video on the same topic. For an introduction to the theory, please read part 1.
“The past has no evidence except as it is recorded in the present. The photon that we are going to register tonight from that four-billion-year-old quasar cannot be said to have had an existence ‘out there’ three billion years ago, or two, or one, or even a day ago.”
—John Archibald Wheeler (1989)
In 2007 and 2008, biologist Robert Lanza created a controversy when he rolled out his biocentric universe theory, which proposes that the universe we observe is continuously being built by living things, and that space and time are best thought of as constructions of the mind. Nobody wants to be told that the real world is, in any way, “unreal.” But as I’ve mentioned, people don’t have a problem accepting that solid matter is more than 99.99% empty space. Or that when we see an object, we are not sensing the object directly, as it is at that moment; but rather, we are sensing zero-mass waves in the intangible electromagnetic field, which emanated from the object some time ago. Our image of the object is very real to us — but in every possible way, that moment-by-moment picture in our mind is a function of neural impulses in the brain, based on information the body has received, by way of those waves. We’ve learned these things only in the last 150 years. Is it already time to close the door on asking what “real” really means?
At first blush, the biocentric universe theory can seem arrogant to some, or illogical, or it may have the ring of creationism. To the less thoughtful, it may just seem “stupid.” But ironically, there’s almost nothing in the theory that conflicts with conventional scientific knowledge. This is an important point: biocentricity does not seek to throw out existing science. It is more like a wrapper around our current body of theories, laws, and hypotheses — or a lens through which the existing knowledge can be clarified, organized, and made more elegant and rigorous. This alone might be valuable enough, but proponents believe that this “wrapper theory” can also be tested directly in the laboratory. I laid out some avenues of possible investigation in part 1 of this series.
So, how can a person claim that the “real” Universe is actually a kind of three-dimensional image experienced collectively by living things, without such a claim conflicting with established science? Well, for starters, there are no theories or laws declaring that matter or anything else in the world is particularly “real.” Science deals explicitly with observations, compiling descriptions and rules about the behavior of objects in the world, based on observed evidence — and only observed evidence. And the role of the observer in these rules and descriptions has been gaining importance over the years, starting in 1905 with a clerk at the Swiss patent office.
First Generation: Relativity
At the end of the 19th century, physics was in a bit of a crisis. James Clerk Maxwell had shown that the speed of light in a vacuum is one of the constants of physics — a fixed number that is never measured to be different, ever. But for anyone who thought through the implications, a constant speed of light presented deep paradoxes for objects moving at very high speeds. What if we were inside a spaceship traveling at half the speed of light, and we set up an experiment to test light speed? What would we find? Intuition might suggest that light traveling from the tail of the ship toward the front should move slower than light measured when the spaceship is at rest. After all, forward-moving waves from a boat in motion appear (from the perspective of passengers in the boat) to move slower than waves emanating from a boat at rest. But the Michelson/Morley experiment of 1887 proved that measured light speed did not depend on which direction the experimental apparatus was moving.
The physicist Hendrik Lorentz came up with a workable way for Michelson and Morley’s findings to be compatible with a constant light speed. In retrospect it seems a bit silly — an example of the “Einstellung effect,” the tendency to use old ways of thinking to solve new problems. Lorentz suggested that when an object moves through the “luminiferous aether,” the invisible medium through which light waves were assumed to propagate, the object’s matter responds by mechanically contracting — although such an effect isn’t detectable by a moving experimenter, since he, and his measuring sticks, have contracted, too. Aboard our spaceship, under this explanation the light would, in some sense, be traveling slower, but due to the contraction of the ship and its contents, it would have less distance to travel. In this way, the slowing of the light and the mechanical contraction would cancel, resulting in the inevitable measurement of c, the constant speed of light.
Albert Einstein, however, was not convinced. Mechanical contraction assumed too many things. In addition to the existence of the aether, for which there was no actual evidence, mechanical contraction assumed that both distances (intervals in space) and durations (intervals in time) had to be absolute. That is, the distance between two stationary objects was assumed to be a fixed and unchanging value, always measured the same, depending only upon where on the rigid grid of space the objects are located. And it was assumed that two clocks could be absolutely synchronized, always displaying the same time no matter how far apart they are or how they are moving. Through a variety of celebrated “thought experiments” in which he envisioned riding aboard a beam of light, and people aboard moving trains flashing signals, Einstein realized that for measured light speed to be constant, something besides the structure of matter had to give. And this is where he had his insight: The speed of light is absolute, but distances and times are not. In Einstein’s new view, now known as special relativity, the observed placement of things in space and time depends on the manner in which they are measured. For example, clocks appear to run slower than normal if they are moving relative to the person observing them (an effect that has been experimentally confirmed in many ways).
The key concept here is the frame of reference: In special relativity, any measurement of timing or spacing (or both, in the case of speed) must be described in reference to an imaginary frame against which the measurement is made. In the real world, with a speeding bullet for instance, the reference frame could take the form of the gun barrel, the ground, or perhaps the target. The speed can then be unambiguously described as the difference in motion between the object and the reference frame. We could just as easily describe the bullet as being at rest and the target (i.e., the reference frame) as moving; special relativity asserts that the measured change in the distance separating them will be identical. The important thing is that relative motion is all that matters. It’s irrelevant whether the object or the reference frame is the thing that’s moving. In fact, we can’t even definitively say who’s moving and who’s at rest; it’s completely arbitrary. The only thing that matters is the difference between them.
Inside a speeding spaceship, an astronaut measuring the speed of light is doing so with his measuring apparatus as his reference frame — which, from his perspective, is at rest, even while interstellar dust races past outside. In doing this, he always measures c, regardless of what is going on outside the ship. And if he measures the speed of light coming from a star the ship is rushing toward, he will measure c again — this time because his clock is running slow relative to the star emitting the light. From the astronaut’s perspective, the ship’s clock seems to be running just fine; but for someone near the star, in a reference frame that’s stationary relative to the approaching ship, the ship’s clock would seem to be running slow. (The astronaut would observe the star’s clock as running slow, too.) This is the bizarre world of relativity, but it explains how different observers in different reference frames can measure different timing values while always measuring the same speed of light.
Time intervals are not the only thing affected by relative speeds; distances are, too. It turns out that in a way, Lorentz was right: Moving objects do contract, in the phenomenon now known as Lorentz contraction. But this is not a mechanical process inherent in the matter of the object; the contraction is virtual, specific to the reference frame in which the object is observed or measured. Witnesses outside the speeding spaceship would observe it as having contracted, because the ship is moving relative to their reference frame. And if the spaceship were transparent, the witnesses would also find that the light inside the ship was traveling at c, due to the apparent contraction of the ship. Meanwhile, the astronaut would observe the rest of the Universe as having contracted. As measured from his stationary reference frame, everything else is moving by at 93,000 miles per second. But if he came across marker signs that had been placed 93,000 miles apart, they would appear to pass by faster than once per second — almost as if they represented shorter distances.* Meanwhile, an observer sitting next to one of the signs would see a contracted ship passing one sign each second. Obviously, in neither case is the ship or the Universe actually, mechanically contracting. They are only observed to contract, depending on how they are observed.
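These effects can be put into rough numbers. The sketch below is mine, not the article’s; it assumes the standard textbook formulas for the Lorentz factor, using the half-light-speed figure of 93,000 miles per second from the example above:

```python
import math

def gamma(v, c=186_000.0):
    """Lorentz factor for speed v, with speeds in miles per second
    and c approximated as 186,000 mi/s."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 93_000.0          # the spaceship's speed: half the speed of light
g = gamma(v)          # about 1.155

# To outside witnesses, a (hypothetical) 100-foot ship appears contracted:
ship_length_observed = 100.0 / g              # about 86.6 feet

# To the astronaut, marker signs placed 93,000 miles apart in the
# markers' own frame appear contracted as well, so they pass by
# more often than once per second:
marker_spacing_observed = 93_000.0 / g        # about 80,540 miles
seconds_between_markers = marker_spacing_observed / v   # about 0.866 s
```

At half the speed of light the distortion is only about 15 percent; it grows without bound as v approaches c.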
At a time when absolutism in physics was reaching a peak, this was tough to swallow. No more could we assume that the Universe is a collection of stuff unambiguously laid out in a rigid, eternal framework of space that operates on a universal clock. Instead, space and time morph and flex as we move through both of them; nothing can be described in terms of absolute, intrinsic values of length, duration, or velocity (with the notable exception of light and its universal-constant speed). And when we measure an object’s spacing and timing, we don’t find absolute values possessed by the object independently of everything else in the world. Rather, an observed measurement explicitly represents the relationship between the object and the reference frame of the observation. That was quite a revolutionary leap, and a taste of things to come.
Interlude: Quantum Mechanics
Einstein’s relativity solved problems, but new problems were just around the corner. Quantum mechanics emerged in the 1920s, and with it came another blow to absolutism. It had been assumed that particles like electrons were tiny equivalents of billiard balls: definite objects with a distinct inside and outside, and with physical properties such as location, momentum, and spin that could be determined exactly and used to describe the object in its entirety. But it turns out this is not the case. With his famous uncertainty principle, Werner Heisenberg demonstrated that we cannot simultaneously measure a particle’s position and its momentum with unlimited precision; increasing the precision in one measurement invariably leads to a loss of precision in the other. Max Born used this discovery to demonstrate that electrons orbiting an atomic nucleus cannot be viewed as tiny planets with definite paths whizzing around a microscopic sun, but rather, constitute an abstract electron cloud that surrounds the nucleus, a blur of probability that an electron will be found at any given spot.
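In symbols, the standard statement of this tradeoff is the inequality below, where Δx is the spread in repeated position measurements, Δp the spread in momentum measurements, and ħ the reduced Planck constant:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

Shrinking either spread forces the other to grow, so the product can never fall below ħ/2.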
Quantum mechanics further posits that if the exact state (position, spin, etc.) of a particle is unknown, the particle can exist in a superposition of states — that is, it can effectively exist in both states simultaneously, as if one were on top of the other. However, when the particle is actually measured, only one state is observed, at which point the superposition is said to have “collapsed” into one state or the other. The likelihood of finding one state, as opposed to the other state, is a probability function inherent in the particle. For example, an unstable particle can have a 50% probability of decaying (spontaneously turning into other particles) within an hour, and if we aren’t watching that particle and have no way of knowing what’s going on with it, we describe its state as being a superposition of decayed and non-decayed states.
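The “50% probability of decaying within an hour” example can be made concrete. A 50-percent-per-hour chance corresponds to a half-life of one hour, and the standard decay law then gives the weighting of the two branches of the superposition at any later time. This small sketch (the function name is my own) assumes that rule:

```python
def decay_probability(t_hours, half_life_hours=1.0):
    """Probability that an unstable particle has decayed after t hours,
    given its half-life. With a one-hour half-life, the chance of decay
    within the first hour is exactly 50%."""
    return 1.0 - 0.5 ** (t_hours / half_life_hours)

# Until we look, the particle's state is described as a superposition of
# "decayed" and "not decayed," weighted by these probabilities:
p1 = decay_probability(1.0)   # 0.5
p2 = decay_probability(2.0)   # 0.75
```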
Superposition was a weird concept, but many attributed the weirdness to the fact that only microscopic things that can’t really be seen anyway can exist in superposition. But superposition didn’t sit well with all theorists. Erwin Schrödinger invented a now-famous thought experiment to argue its absurdity: Imagine if you placed a cat in a box, along with a flask of poison, a tiny radioactive source, and a particle detector, with a mechanism whereby if the source decays, the detector triggers a hammer that breaks the flask, releasing the poison and killing the cat. If you close the box and wait, the radioactive source is considered to be in superposition: both decayed and non-decayed. But since the experiment has been set up to correlate the source and the life of the cat, then if the source is in superposition, the cat must be in superposition as well — both alive and dead at the same time! “Schrödinger’s cat” demonstrated that a microscopic superposition could be extended into the macroscopic world. A variation called “Wigner’s friend” imagines the experiment performed in a sealed room with a second experimenter outside. Before the outside experimenter learns the result of the cat experiment, is the inside experimenter in a superposition of conscious states — one that learned that the cat was alive, and the other that found a dead cat? According to “Wigner’s friend,” if superposition is real, then theoretically even states of the human mind can be in superposition.
For decades, physicists have accepted that the quantum world is simply counterintuitive — that we have no reason to expect that the microscopic world should behave according to our everyday, human-world experience. Regarding “Schrödinger’s cat,” many believe that when the decaying radioactive source interacts with the environment and the detector, it “collapses” into a definite state by itself, and this is why a whole cat could not be “both alive and dead.” But even more troubling paradoxes were waiting in the wings. Einstein and others predicted that two particles which are produced together and said to be “entangled,” or intimately associated, could behave in ways that seem to defy the laws of physics. With both particles in superposition, if we measure one particle, the “collapse” of the first particle’s state automatically causes the other particle’s superposition to “collapse.” If we measure the first to be spin-up, we will know that the other is spin-down even before measuring it. This has since been confirmed numerous times by experiment. The thing that makes this a paradox is that the particles can seem to communicate instantaneously — they can be miles apart, in which case the fate of one determines the fate of the other at speeds much faster than light (or anything else) could travel between them. This seems to violate the principle of locality, the idea that causes and effects in the world occur as a result of contact, and do not jump across empty space without so much as a particle of light being involved.
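The spin correlation described above can be sketched in a few lines. This toy simulation (mine, not from the article) reproduces only the outcome statistics of an anti-correlated “singlet” pair measured along the same axis; it deliberately says nothing about the mechanism, which is exactly the part that puzzles physicists:

```python
import random

def measure_singlet_pair(rng=random):
    """Simulate measuring both members of a spin-entangled (singlet) pair
    along the same axis: each outcome alone is a 50/50 coin flip, but the
    two outcomes are always opposite. No classical signal passes between
    the particles here; this mimics the statistics only."""
    a = rng.choice(("up", "down"))        # measuring particle A "collapses" the pair
    b = "down" if a == "up" else "up"     # particle B's result is then determined
    return a, b

# Every trial shows perfect anti-correlation:
results = [measure_singlet_pair() for _ in range(1000)]
assert all(a != b for a, b in results)
```

The deeper puzzle, which this sketch sidesteps, is that measurements along *different* axes produce correlations (violating Bell’s inequalities) that no such pre-arranged classical recipe can reproduce.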
The upshot of these and other difficulties is that quantum mechanics remains very much open to interpretation: Even though the mathematics of the theory are considered correct, how those mathematics become real, observable effects in the world is up for debate. Thus began the parade of QM interpretations, each trying to resolve the problems as elegantly as possible. The most famous of these may be Hugh Everett’s sci-fi-friendly many-worlds interpretation, which posits that the Universe is constantly splitting into alternate universes, including anytime a human choice or experimental measurement is made. But a few decades later, an entirely new approach to the question began to emerge.
Second Generation: Relational Physics
Throughout the history of science, paradoxes have popped up every so often, and in most cases they have been resolved when someone discovered that an assumption underlying the situation was false. In the geocentric model of planetary orbits, it was assumed that the Sun and planets revolved around the Earth; in order to explain how some planets appeared to stop moving and reverse direction in the sky, Ptolemy’s deferent-and-epicycle system became a part of the theory. Even though the Ptolemaic theory could predict astronomical motions with some accuracy, it became problematic as measurements became increasingly precise. Of course, once the faulty assumption of Earth-centered motion was abandoned, a new and much more powerful Sun-centered theory took the place of the old. Similarly, the late-19th-century paradoxes of moving bodies and the measurement of time, and their relation with a universal-constant light speed, were resolved with the advent of special relativity, which rejected the assumptions of both absolute time and an absolute grid of space.
Beginning in the 1980s, a few physicists (Simon Kochen may have been the first) started asking whether we ought to re-examine some of the fundamental assumptions that science has been making since the time of Aristotle. If relativity could refine Isaac Newton’s theories of motion by rejecting absolute time and space, recognizing that measurements of both are intrinsically tied to the measurer’s reference frame, could other theories be refined by rejecting similar assumptions about the absolute nature of the world? A new approach to physics began to emerge: the relational approach, in which all measurements are acknowledged to result from relationships or interactions in the world. The speed of a bullet has no absolute meaning until we measure it against some frame of reference; this measured speed then represents a relationship between the bullet and the reference frame. Similarly, physicists began to think that perhaps it is incorrect to assume any absolute properties of objects. Can a brick be said to have an absolute, intrinsic momentum independent of any observer? Is it really correct to assume that a measured electron is a tiny physical ball possessing an absolute charge, or is that appearance actually a function of our measurement process? Can the so-called measurement problem — the apparent change in the behavior of bits of matter whenever we measure them — be explained by saying that the measurement is all there is, and that we really can’t say there is such a thing as an absolute particle, with a specific, predefined nature that’s independent of any observer?
This idea was fully explored with a disarmingly simple interpretation, relational quantum mechanics, which Carlo Rovelli introduced in 1994. RQM puts forth the following ideas: (1) When we measure the state of a physical system, the measurement represents our interaction with the system; in fact, the state of the system is the interaction or the relationship between the system and ourselves (or our measuring apparatus). (2) No distinctions can be made between microscopic “quantum” and macroscopic “non-quantum” systems, or between measurements and non-measurement interactions, or between conscious and unconscious observers, or between animate and inanimate objects; all systems are quantum systems and all interactions are quantum interactions. (3) The same physical system may appear different to multiple observers, depending on the interaction each has with the system. (4) The appearance of a physical system to an observer is a function of the information contained in the interaction, and thus, quantum mechanics is a theory about information.
Relational quantum mechanics, and relational physics in general, make some interesting statements about the world. In no particular order:
1. If a physical system appears different to two different observers, this is a consequence of the information each observer has about the system. In the case of Schrödinger’s cat, the supposed superposition of a “live-and-dead cat” is merely a lack of information on the part of the experimenter. The cat (which, relative to a reference frame inside the box, is either definitely alive or definitely dead) has information about its interaction with the killing mechanism, but the experimenter outside the box does not have information about that interaction. The experimenter therefore cannot define the state of the cat without opening the box and receiving this information. Similarly, the radioactive source can only be said to have definitely decayed if it interacts with something that can receive this information (such as the cat); otherwise its state can only be said to be undefined or uncertain. This applies to any observer lacking this information — whether it’s the cat, the experimenter, or someone observing the proceedings from an outside reference frame.
2. Heisenberg’s uncertainty principle is recast in the light of relational physics. We no longer assume that a located particle is an absolute particle whose precise position we learn at the expense of potential knowledge about momentum (even Einstein had a problem with this idea). Instead, the particle is better thought of as a wave or probability function, from which information can be extracted. Extracting this information is a bit like taking a snapshot of the wave; if we get precise information on position, the snapshot is “sharp” and therefore contains less information on momentum; if we take a “slow shutter speed” snapshot in order to get more information on momentum, we do so at the expense of information on position. In other words, the uncertainty principle doesn’t express our inability to know all simultaneous properties of an absolute particle; it starts with an uncertain wave and lets us learn certain complementary aspects about that wave, including the fact that it can appear to be particle-like if we choose to pin down its exact location.
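The “snapshot of a wave” picture can be made quantitative with a Fourier transform: a wave packet that is narrow in position is necessarily broad in momentum, because the two descriptions are Fourier pairs. This numerical sketch (my own illustration, with ħ set to 1 so a Gaussian packet sits exactly at the Heisenberg minimum of 1/2) checks that directly:

```python
import numpy as np

# A Gaussian wave packet on a spatial grid.
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.0
psi = np.exp(-x**2 / (4 * sigma**2))        # position-space wave function

def spread(values, weights):
    """Standard deviation of a (possibly unnormalized) distribution."""
    w = weights / weights.sum()
    mean = (values * w).sum()
    return np.sqrt(((values - mean) ** 2 * w).sum())

# Spread of the packet in position ("sharp snapshot"):
dx_spread = spread(x, np.abs(psi) ** 2)     # equals sigma = 1.0

# The same packet viewed in momentum space (its Fourier transform):
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # momentum values (hbar = 1)
psi_k = np.fft.fft(psi)
dk_spread = spread(k, np.abs(psi_k) ** 2)   # equals 1 / (2 * sigma) = 0.5

# The product of the two spreads sits at the Heisenberg minimum:
product = dx_spread * dk_spread             # approximately 0.5
```

Squeezing `sigma` smaller narrows the position spread and widens the momentum spread by the same factor; the product stays pinned at 1/2 for a Gaussian, and can only be larger for other shapes.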
3. Contrary to classical or intuitive thinking, an object is not a collection of absolute particles that exist in absolute locations. Rather, it is a collection of spatial relationships within that object, and these relationships are what produce its observed structure. If we observe a small enough bit of the object, we may see a particle; however, if that particle is of a kind that has no known subcomponents (for example, electrons and quarks, which are believed to be fundamental and indivisible), then it contains no internal spatial relationships and cannot be considered to have any independent existence whatsoever. In other words, an apple is composed of a web of physical relationships among its subatomic components, and the apple as an independent “thing” can be said to exist insofar as those internal spatial relationships exist relative to each other. But for the apple to have any describable existence as viewed from our frame of reference (as observers), we must establish some relationship with the apple and interact with it, for example by measuring it. Barring any such interaction, nothing at all can be said about the apple. While its internal relationships may exist relative to each other, we as an external party have no information on any aspect of these relationships, so the apple as a whole is undefined in our reference frame.
Perhaps the best argument for the relational approach to physics is that it appeals to a spirit of scientific purity. It takes into account what science can and cannot know for certain. Throughout its history, science has asked questions and offered answers based on one and only one process: observation. The only things we can truly know about the world are those things that are directly observed. It may be counterintuitive, but there’s actually no evidence whatsoever that when we measure an object, that measurement reflects some absolute, independent property of that object, which would be measured the same in all reference frames. It might — but there’s simply no direct reason to treat that assumption as fact. The proponents of relational physics argue that if science is to be truly rigorous, it must deal only with observable or measurable numbers, and its predictions must predict only what will be observed or measured. Anything beyond that — assigning independent, absolute properties to objects because we assume such statements to be true, based on the observations — amounts to a leap of faith, one that becomes glaringly obvious when closely studying things like electrons and entangled photons. This leap introduces unknowable values into the scientific process, and whenever we do that, both the explanatory and predictive powers of the scientific method are weakened. Objects in the Universe, and the Universe as a whole, can only truly be described in terms of observations and measurements, which in turn are expressions of the relationship or interaction between the object and ourselves. It follows that since we human observers are a part of the Universe, we cannot describe the entire Universe (as if from a “God’s-eye-view”) in any manner at all. Strictly speaking, we can only describe relationships and interactions within that Universe. This is a profound idea in itself.
Third Generation: Biocentricity
Let’s consider what happens when a scientist performs a measurement. We typically think of a measurement event as happening in the present, but measurements never result in descriptions of the present. All measurements are descriptions of the past. Since the speed of light is finite, any observation is an observation of a past state of an object, whether it’s an apple or a distant galaxy. So, anytime we speak of an interaction or relationship between systems in relational physics, we’re talking about interactions that span across time, typically mediated by photons of light. (This says some interesting things about photons, which we’ll get to in a future essay.)
If relational physics applies to descriptions that span across time, it applies to all descriptions that span across time. When we measure a distant galaxy, then we have a description of some past state of that galaxy; this description endures even after the measurement process is over. The next day, we still have a description of that galaxy as seen from a modern reference frame. So, there’s no reason why relational physics shouldn’t apply even when a description is speculative and no measurement has been performed. For example, when physicists describe what the Universe must have been like one second after the big bang, this description establishes some kind of relationship between (1) our reference frame in the present and (2) the (proposed) state of the Universe in the past, as with a measurement. In relational physics, any modern-day description of the early Universe carries with it the stipulation that this description is relative to our frame of reference in the 21st century. It can only describe what we would observe if we were able to time-travel back to that time period, with our 21st-century instruments and knowledge, and look. By contrast, in the conventional approach, when we talk about the formation of atoms and such in the first moments of the Universe, we must assume some absolute state of these bits of matter, which would be the case whether they were ever observed or not. This is forbidden by relational physics — just as relativity forbids assuming the absolute speed of an object or the absolute duration of an event.
So, what does that leave us with? If the Universe can only be described in terms of observations, then we cannot describe what the Universe was like at a time when no observers existed — say, while the Earth was still forming. We can only make that description relative to our modern reference frame. The modern Universe can be described relative to a modern reference frame, and the ancient Universe can be described relative to a modern reference frame. But the ancient Universe cannot be described relative to an ancient reference frame.
The only thing left for us to do, then, is fill in the middle of the picture. What about when life was just getting started? If we’re talking about a contemporaneous description (that is, a description relative to a reference frame of the same era), then the world must have been in an intermediate state of some kind. It was being observed, but barely; the first living organisms gathered only the crudest information on both themselves and their surroundings. Given that, and what we’ve established so far regarding observation, the Universe can be considered to have been precisely as crude as those organisms’ observations. And as living organisms evolved and developed sharper observational faculties, the Universe sharpened accordingly — leading to the incredibly rich and detailed Universe we humans see today, the product of billions of years of information-gathering by the superorganism we call life. Or so says the biocentric universe theory.
Two Views of the Universe
We now have two ways to look at the history of the Universe. In one, the conventional account, all objects from any time period are described relative to a modern frame of reference: how we humans would describe them if we could examine them using our modern tools and knowledge. In the other account, the biocentric view, all objects from any time period are described relative to the observational frame of reference that existed at the time.
Consider the origin of life, or abiogenesis, that moment when nonliving matter is (conventionally) said to have come together into the first living organism. Although we have no direct evidence showing how it happened, the conventional account typically involves the coming together of amino acids, lipid bilayers, and nucleotides to form a metabolizing organism that could reproduce. Creationists/intelligent design proponents love this, because in science it really is a huge mystery. As much as biologists downplay it (for good reason), it appears to have been a spectacular and unlikely event of molecular chemistry. And that’s to say nothing of what was required even to get to that point: a universe that spontaneously appeared billions of years earlier, with the proper physical laws to allow the existence of matter, star formation, supernovas to generate heavy elements, etc., not to mention the planetary conditions that would have been needed to bring these chemicals together. The anthropic principle (see part 1) sees a lot of action here: Yes, we can remark about this unlikely scenario now that such an event allowed us to be conscious, the way a lottery winner can reflect on his or her incredible fortune after having won. But after 60 years of lab experiments trying to produce a living beastie from off-the-shelf chemicals, I think it’s safe to say that biologists are as uneasy about abiogenesis as physicists are about quantum mechanics. They aren’t likely to say it, but they really would prefer to have more satisfying answers.
Even if it doesn’t supply sure answers, biocentricity at least sheds light on why these questions are so difficult for us humans. The highly specific convergence of molecules and conditions, or the coalescence of physical laws that seem “fine-tuned” for matter, can both be attributed to examining the situations from a modern frame of reference. In the biocentric view, the conventional accounts of the distant past don’t reflect what actually went on in those situations at the time they happened. The biocentric view says that a completely undefined primordial organism of some kind spontaneously appeared in a completely undefined environment, and began to resolve crude features of the world through the crudest of observations. (Compare that to the conventional account, that a defined organism formed out of defined molecules, some ten billion years after the spontaneous appearance of a universe containing 10-to-the-80th-power defined atoms.)
Is Biocentricity a Science-Killer?
Some have criticized the biocentric universe theory because it seems anti-scientific: It seems to address some very difficult questions with, “We don’t know, we’ll never know, so don’t bother asking.” Asserting that the first living organism was “undefined” may be just a way of weaseling out of doing the science to learn details about that organism. In this view, biocentricity becomes a convenient box in which to put any question that is too hard to answer, not unlike “God.” Is this true?
I don’t think so. Biocentricity is either a real governing principle at work, in which case it should be testable by experiment, or it isn’t. If the principle is real, it then becomes a way to acknowledge which areas of science are truly speculative, and which areas aren’t. We can only make definitive statements about the world where information is available, and apparently there is no information left over from the appearance of the first organism on Earth. Even if that information existed at some point, none of it has made its way to our 21st-century reference frame. So, anything we say about the event is as speculative as Schrödinger’s experimenter speculating on whether the cat is alive or dead.
Of course, speculation across time — that is, doing conventional cosmology and paleobiology — does have scientific value. It is a valid question to ask what abiogenesis would look like if it happened again, in exactly the same way, on a laboratory bench in 2010. I suspect that there’s only one true answer to that question, and even if we can’t find that answer definitively, we can at least offer various scenarios and evaluate which is most likely.
The biocentric universe theory itself remains speculative if it cannot be tested in the laboratory. If that’s indeed the case, the enterprise of writing essays and making videos about the theory may be of little value beyond offering an alternative philosophical way of looking at the world. But that question has not been answered yet. The experiments of quantum mechanics (which I discussed in part 1) offer tantalizing clues that the “tail” of observation really does “wag the dog” of physical reality.
It’s time for a new generation of physics experiments, involving living organisms, to answer this question. Because if it turns out that humans and animals really are constantly resolving little pieces of the Universe, the implications are profound: Not only would it mean the most dramatic shift ever from absolutism in physics, a development that would certainly have major practical, technological consequences; it would also mean that the Universe has been evolving in direct parallel with life for its entire existence.
* By now it should be clear that the speed of 93,000 miles per second is not an absolute quantity; it is the ship’s speed relative to the reference frame of whichever observer performs the measurement. From the ship’s reference frame, if the astronaut takes the marker signs at face value, he seems to be going faster than 93,000 miles per second — but to determine how his relative speed would be measured by a person sitting next to one of the signs, he needs to take into account the observed contraction of space between the signs.
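The arithmetic behind this footnote can be sketched numerically. The snippet below is a minimal illustration, not part of the original article: it assumes a ship moving at half the speed of light (93,000 of roughly 186,000 miles per second) past marker signs spaced one million miles apart in the signs’ rest frame — the spacing is a made-up figure chosen only for the calculation.

```python
import math

C = 186_000    # speed of light, miles per second (rounded)
v = 93_000     # ship's speed relative to the signs: 0.5c

# Lorentz factor for v = 0.5c, about 1.1547
gamma = 1 / math.sqrt(1 - (v / C) ** 2)

sign_spacing = 1_000_000   # hypothetical rest-frame distance between signs, miles

# The ship's clock runs slow relative to the signs' frame, so the
# astronaut's proper time between passing two signs is dilated down:
proper_time = (sign_spacing / v) / gamma

# Taking the signs' posted spacing "at face value" (ignoring length
# contraction), the astronaut computes an apparent speed of gamma * v,
# which exceeds 93,000 miles per second:
apparent_speed = sign_spacing / proper_time

# Using the contracted spacing he actually observes recovers the speed
# that an observer sitting beside a sign would measure:
contracted_spacing = sign_spacing / gamma
relative_speed = contracted_spacing / proper_time   # back to 93,000 mi/s
```

Running this shows `apparent_speed` coming out above 93,000 miles per second, while dividing the sign spacing by the Lorentz factor brings the figure back down to the mutually agreed relative speed — which is the correction the footnote describes.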
Saturday, April 3, 2010
The Biocentric Universe, Part 2: It's All Relative