As a boy, I worked in the laboratory of Stephen Kuffler—the pre-eminent neurophysiologist and founder of Harvard’s Neurobiology department—watching scientists probe the neurons of caterpillars. Kuffler was the brilliant author of From Neuron to Brain, the textbook I used later as a medical student. I was so intrigued by what I learned about the sensory-motor system that I returned to Harvard to work with the great behaviorist B.F. Skinner, with whom I published papers in Science on symbolic communication and self-awareness.
However, I’ve since come to understand that not all questions can be solved by neurobiology or the science of behavior. What is consciousness, for example? Why does it exist? To me, there’s a kind of blasphemy in asking these questions, a personal betrayal to the memory of those gentle, yet proud men who took me into their confidence so many decades ago.
“The tools of neuroscience,” cautioned David Chalmers, “cannot provide a full account of conscious experience, although they have much to offer.” The mystery is plain. Neuroscientists have developed theories that help explain how information—such as the shape and smell of a flower—is merged in the brain into a coherent whole. But they are theories of structure and function. They tell us nothing about how these functions come to be accompanied by conscious experience.
Yet the difficulty in understanding consciousness lies precisely here, in understanding how a subjective experience emerges from a physical process at all. Even the great Nobel laureate physicist Steven Weinberg conceded that there’s a problem with consciousness, and that its existence doesn’t seem to be derivable from physical laws.
We assume the mind is totally controlled by physical laws, but there’s every reason to think that the observer who opens Schrödinger’s box has a capacity greater than that of other physical objects. The difference lies not in the gray matter of the brain, but in the way we perceive the world. How are we able to see things when the brain is locked inside a sealed vault of bone?
Information in the brain isn’t woven together automatically any more than it is inside a computer. Time and space are the manifold that gives the world its order. We instinctively know they’re not things, objects you can feel and touch like the shells and pebbles you might pick up along the beach. There’s a peculiar intangibility about them.
According to biocentrism, where the observer is the basis of the universe, time and space are merely the mental software that, like in a DVD player, converts information into 3D. Indeed, our minds can even create a spatiotemporal world—complete with a body—while we’re dreaming, asleep with our eyes closed.
In the late 1940s, the neurophysiologist W. Grey Walter built a device that reacted to its environment. This primitive robot—a “tortoise,” as he called it—had a photoelectric cell for an eye, a touch sensor to detect objects, and motors that allowed it to maneuver. Since then, far more advanced robots have been developed that can “see,” “speak,” and perform tasks with far greater precision and flexibility.
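The behavior of such a reactive robot can be captured in a few lines. The sketch below is a toy illustration, not Walter’s actual circuitry (his tortoises were analog electronics): raw sensor readings map directly to wheel speeds, with no internal model of the world. The function name and steering constants are my own hypothetical choices.

```python
def tortoise_step(light_left, light_right, bumped):
    """Map raw sensor readings to (left_wheel, right_wheel) speeds.

    Toy model of a phototactic reactive robot: if the touch sensor is
    tripped, back up while turning; otherwise steer toward the brighter
    side by speeding up the wheel opposite the light.
    """
    if bumped:
        # Obstacle: reverse with a slight turn to escape it.
        return (-1.0, -0.5)
    # Positive when the light is brighter on the right.
    turn = light_right - light_left
    # Faster left wheel turns the robot rightward, toward the light.
    return (1.0 + 0.5 * turn, 1.0 - 0.5 * turn)


left, right = tortoise_step(light_left=0.2, light_right=0.8, bumped=False)
print(left, right)  # left wheel faster: the robot veers toward the light
```

Even this trivial sensorimotor loop “behaves,” which is precisely why behavior alone tells us nothing about whether anything is experienced.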
“Can we help but wonder,” asked Isaac Asimov, “whether computers and robots may not eventually replace any human ability? Whether they may not replace human beings by rendering them obsolete? Whether artificial intelligence, of our own creation, is not fated to be our replacement as dominant entities on the planet?”
Relativity and Quantum Mechanics Go to the Core of Consciousness
We think we understand how the material world works and how our consciousness operates within it. But all the observer-based findings of relativity and quantum mechanics are not anomalies. They go to the very core of consciousness.
Before a machine or robot can ever achieve sentience, we need to understand these biocentric principles. The key is how the algorithms in the brain organize the whirl of sensory information from our eyes, ears, and other sensory organs. These algorithms are intimately connected to the concepts we call space and time.
Alas, the issue of consciousness often wiggles away from easy understanding. Most of us take it for granted, oblivious that it harbors any sort of deep mystery at all. Some regard awareness as a mere ancillary property of life, a casual characteristic that evolved to give complex life forms an advantage. When contemplating the possibility of a computer singularity—the point at which machines surpass us in intelligence—we need to remember that the science of consciousness may well be, as former Encyclopedia Britannica publisher Paul Hoffman once said, the deepest and most important of all.
Explaining how and why we think, feel, and have subjective experiences can seem complicated but really isn’t when viewed through the lens of biocentrism. We assume, for example, that the sun has no feelings and rocks cannot “enjoy” the warm sunlight striking their surfaces. Yet we savor the smell of fresh-cut grass, feel pain if pinched, experience thoughts, and sense the rich crimson of a sunset. We feel. But how and why?
The depth and profundity of this go to the heart of biocentrism, to the paranoia over possible computer singularities, and to the very quest to apprehend the cosmos. Nothing escapes the sweaty grip of perception. We need to know what it is.
This becomes the bottom line. Why and how do we feel? How does this sense of perception, awareness, or consciousness arise? What is it, really?
The Key to Consciousness and AI Sentience
Quantum physics provides us with hints about how consciousness works and how the mind is unified with matter and the physical world.
As explained more fully in my book The Grand Biocentric Design—and in a more entertaining way in Observer—the explanation reduces to a cloud of information in the brain that is involved in everything we see and experience. The underlying story concerns how this quantum information arises all at once when the analysis is expanded to include ion dynamics and their superpositions.
That’s because modulation of ion dynamics at the quantum level allows all parts of the information system that we associate with consciousness—with the unitary “me” feeling—to be simultaneously interconnected.
At any given moment, there’s a cloud of quantum activity associated with consciousness. What you experience changes depending on which memories and emotions are recruited into the system at the time, corresponding to different networks in the brain. This spatiotemporal logic extends to the rest of the nervous system and to the entire world you observe at the time.
Until we understand these biocentric underpinnings, machine sentience will not and cannot happen. An object—a machine, a computer—cannot have a unitary sense of experience, or consciousness, before the “mind” (natural or artificial) constructs a spatiotemporal reality.
Eventually we will understand these algorithms well enough to create “thinking” machines and enhancements to ourselves that will change the world in ways we cannot even fathom.
Only then will we know the answers to Asimov’s questions.
Adapted from the Biocentrism trilogy (BenBella Books) and Observer (The Story Plant).