What is consciousness? This is a question that has baffled philosophers for thousands of years. And now it also baffles information scientists, artificial intelligence enthusiasts and evolutionary biologists.

One of those people who spend a great deal of time pondering what it means to be conscious is Christof Koch, Ph.D., who runs the Allen Institute for Brain Science in Seattle and is the author of The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed.

In the book, Koch tries to explain to a general audience the integrated information theory (IIT) of consciousness. This idea was developed by Dr. Giulio Tononi, a professor at the University of Wisconsin. It takes the approach of asking: what is conscious experience like?

But Koch is not just concerned with being able to define consciousness. He also wants to be able to make predictions about consciousness that can be tested. This is the hallmark of science — a testable hypothesis. Otherwise, you’re just sitting around pondering the universe, or in this case, consciousness.

According to IIT, consciousness has five properties: it is intrinsic, structured, informative, integrated and definite.

The first, “intrinsic,” means that consciousness is a private experience. “Consciousness exists intrinsically, for itself, without an observer,” writes Koch. It is what our brains feel like from the inside. Because it is private, this awareness stops at the border of my own consciousness — you can’t observe my consciousness or even be sure that I feel anything.

In fact, IIT eliminates the point of view of the outside observer entirely, including the neuroscientist who may be viewing your brain activity in a scanner. “For consciousness, there is no such observer,” writes Koch. “Everything must be specified in terms of differences that make a difference to the system itself,” with that system being your consciousness.

This “difference that makes a difference” is what separates how you experience bodily housekeeping functions (such as the secretion of enzymes into your digestive tract) from things like seeing another person’s face. One is part of your experience; the other occurs without your knowledge.


To put it another way, your consciousness is a system that exists for itself. In order for that to be possible, writes Koch, “it must have causal power over itself.” Basically, your system consists of interconnected sets of causes and effects, all leading your consciousness from its past to its present to its future.

Koch argues that “causal power is not some airy-fairy ethereal notion,” but something that can be precisely measured for any physical system. IIT allows you to discuss consciousness without regard to the physical structure in which it resides. But by defining “causal power” mathematically, we can also examine physical systems to see if they could contain consciousness.

For example, take the human brain: circuits in the posterior cerebral cortex are tightly interconnected, which gives them high causal power. These kinds of connections are essential for consciousness. Other parts of the brain, such as the cerebellum, lack this causal power — so they can’t generate consciousness.
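The contrast between tightly interconnected circuits and ones that lack this kind of integration can be illustrated with a toy calculation. The sketch below is not IIT’s actual Φ algorithm, which is far more involved; it simply measures, for two tiny two-node Boolean networks (an assumed stand-in for real neural circuits), how much predictive information the whole system carries beyond the sum of its parts, taking past states to be uniformly random.

```python
from itertools import product
from math import log2
from collections import Counter

def mutual_info(pairs):
    """Mutual information (bits) between the two coordinates of a
    list of equally likely (past, future) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(p for p, _ in pairs)
    py = Counter(f for _, f in pairs)
    return sum((c / n) * log2((c / n) / ((px[p] / n) * (py[f] / n)))
               for (p, f), c in joint.items())

def integration(step):
    """Whole-system past-to-future information minus the sum over the
    two single-node parts, for a 2-node Boolean network `step`."""
    pasts = list(product([0, 1], repeat=2))
    pairs = [(x, step(x)) for x in pasts]
    whole = mutual_info(pairs)
    parts = sum(mutual_info([(x[i], step(x)[i]) for x in pasts])
                for i in range(2))
    return whole - parts

# Tightly coupled: each node copies the *other* node's previous state.
coupled = lambda x: (x[1], x[0])
# No integration: node 0 keeps its own state, node 1 merely copies node 0.
feedforward = lambda x: (x[0], x[0])

print(integration(coupled))      # 2.0 — neither part is predictable alone
print(integration(feedforward))  # 0.0 — the parts explain everything
```

The coupled network scores 2 bits because cutting it into single nodes destroys all of its cause-and-effect structure; the feed-forward network scores zero because its parts, taken separately, already account for everything it does. That difference in kind, vastly scaled up, is the sort of distinction Koch draws between the posterior cortex and the cerebellum.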

This approach can be extended beyond the human brain to answer questions like: do single cells have intrinsic experience? What about countries or corporations? Or computers? In fact, IIT has implications for a wide range of consciousness-related concepts that abound in popular culture.

One of these is the idea that computers will one day become conscious. Koch quickly dashes the hopes of all those who would see science fiction made real. Although artificial intelligence can be used to create computers that can mimic human behavior, even the most advanced systems — because of their linear circuitry — still lack the causal power of the human brain.

The only possibility for conscious computers would be to build them in a way that resembles the self-referential connections of neurons in living brains. “Androids, if their physical circuitry is anything like today’s CPUs, cannot dream of electric sheep,” Koch writes.

IIT also comes to bear on another staple of science fiction — or of romantic entanglement — the melding of two minds into one. If this type of merger were possible, it would probably occur at different levels of intensity. At some point, though, the merger would reach a tipping point.

“Your conscious experience of the world vanishes, as does mine,” Koch writes. “From your and my intrinsic perspective, we cease to exist. But our death coincides with the birth of a new amalgamated über-mind. It has a Whole extending across two brains and four cortical hemispheres.”