Defining a Search Space
What can philosophical thought experiments tell us about pathways for consciousness research?
Our understanding of consciousness is very limited. In fact, even the very definition of consciousness is open to speculation. Let’s start with the single basic assumption that physical things create consciousness. That means that we end up with two systems: a physical system that has physical states and physical processes, and a conscious system with conscious states and processes. In some sense this is merely a simplifying definition; we call everything we understand well enough to model using physical laws “physical” and everything that’s related to our mind that we don’t understand yet “consciousness.” It may turn out that consciousness is the result of the interactions of many undiscovered processes, or it may be that it’s an unknown part of the physical processes we already understand. To get a better idea of what kinds of things consciousness might be like, let’s map out a search space of the possible properties consciousness might have along two variables. We can then discuss how to search those possibilities to see where consciousness is most likely to actually be.
The first variable is how common consciousness is. One extreme along this axis would be that consciousness occurs only in humans. The other extreme would be that consciousness is universal - that it’s a feature of all matter and energy in the universe. I think most people assume that the truth lies somewhere in between. For example, we could guess that all animals are conscious, with humans being the most conscious (or showing the most complex consciousness), while bacteria aren’t conscious.
The second variable we’ll define is how computationally complex consciousness is. We’re assuming that consciousness is created by physical systems. The question is, how are the complexities of those two systems related? Does it take a complex physical system to create complex conscious states? Or could a relatively simple physical process create complex conscious states? In practical terms, if consciousness were computationally simple, it would be easy to model because its behavior and characteristics would be dictated by simple rules and processes. But if it were computationally complex, it would be very difficult to model, because the rules dictating its processes and states would themselves be complex.
We’re assuming that physical systems create conscious systems, but we should also consider the possibility that a conscious state can affect physical systems. For example, it may be that we humans act the way we do not simply because we are capable of creating complex conscious states, but because those states can affect the physical processes in our brains, and these two processes form a closed loop. Another possibility is that consciousness doesn’t affect physical processes, creating an open loop system where the physical creates the conscious states, but the conscious states don’t affect physical processes.
Within the search space we’ve defined, we can outline four general possibilities by combining “complex vs simple” along one axis with “common vs uncommon” along the other. In each of these combinations, the chances that consciousness is an open- or closed-loop system change given our observations of how things actually appear in the world.
Scenario 1: Consciousness is simple and very common. We can imagine a universe where most matter creates conscious states, and those conscious states are only as complex as the underlying physical states that create them. The reason humans have complex conscious states is that we have complex brains.
Scenario 2: Consciousness is complex and common. Again, this is a universe where even simple matter or simple forms of life are conscious, but in this scenario, the conscious states could be much more complex than the underlying physical states. It would be possible for insects or bacteria to have complex and interesting conscious experiences. The question would then be, “Why don’t they act like us?” And the likely answer would seem to come down to whether conscious systems acted as part of an open or closed loop with physical systems.
Scenario 3: Consciousness is simple and rare. This would be a universe where ‘philosophical zombies’ are possible. It could be that only humans are conscious, but that consciousness doesn’t affect our behavior strongly or at all. If this were the case, then it would be possible to have an alternate universe that was the same in every way, except that it lacked consciousness. In this universe, everything and everyone would act the same way, they would just never experience anything consciously.
Scenario 4: Consciousness is complex and rare. This would seem to be a case where consciousness was a big evolutionary advantage. It would be something difficult to create, but which, once created, was computationally very efficient or useful. For example, it could be that conscious experiences are a very efficient way to process a lot of sensory data, or that they make it possible to store or retrieve information as memories. Either of these would be a big evolutionary advantage.
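The four scenarios above can be laid out as a small two-axis grid. Here’s a minimal sketch in Python; the labels and one-line summaries are just shorthand for the descriptions given in the scenarios, not part of any real model:

```python
# Two axes of the search space for possible properties of consciousness.
complexity = ["simple", "complex"]  # computational complexity of conscious states
commonness = ["common", "rare"]     # how widespread consciousness is

# The four scenarios discussed above, keyed by (complexity, commonness).
scenarios = {
    ("simple", "common"):  "1: conscious states only as complex as the physical states creating them",
    ("complex", "common"): "2: even insects or bacteria could have rich conscious experiences",
    ("simple", "rare"):    "3: philosophical zombies are possible",
    ("complex", "rare"):   "4: consciousness is hard to create but a big evolutionary advantage",
}

for (cx, cm), note in scenarios.items():
    print(f"{cx:>7} x {cm:<6} -> scenario {note}")
```

Laying it out this way makes the later question concrete: finding where consciousness “actually is” amounts to ruling out cells of this grid.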
Given this range of possible scenarios, how do we discover what area we’re likely living in? The chances that consciousness is an open loop or closed loop system in humans are very different depending on where in the complexity vs commonness space we exist.
The Turing test is about detecting human-like intelligence, and so we end up treating it like a test for consciousness as well. We have no ability to directly detect or measure the presence of conscious experiences, so we rely on this kind of test. We believe that the way we act is based on our conscious experiences, so if something acts similarly enough to ourselves we assume that it’s conscious as well - that it also has a mind somewhat like our own.
A famous argument against this hypothesis is Searle’s Chinese Room Argument. The point of this thought experiment is to show that a machine could pass the Turing test even though it didn’t actually understand the conversation because it wasn’t conscious. We can even imagine that the ‘program’ being run by the Chinese room is a simulation of a human brain.
My main critique of the Chinese room argument is that it doesn’t consider the range of difficulties of creating the room, or the range of possibilities of outputs from the room. For example, we would expect the outcome of the experiment to be very different if it was created in a universe where consciousness only worked as an open loop versus one where it was possible to work as a closed loop.
Let’s imagine the form of the CRA where the machine is creating a functional simulation of a human brain. If we assume Scenario 4 above, our simulation would likely fail. The rules of the simulation describe how to model the firing of all the neurons in a brain given our current understanding of the physical processes. But this doesn’t include the complex conscious processes. We could try to model those as well, but if they’re very complex, or we don’t understand them very well, it might be impossible.
Instead, we could imagine a version of the CRA that’s not simulating the brain, but which is just trying to simulate intelligent behavior. That would show that it’s possible to create a simulation of consciousness without creating actual consciousness - that it’s possible to create a behavioral model without creating a functional simulation of consciousness. This would actually be an incredibly useful tool to have: being able to see which physical processes create apparently conscious behavior and which don’t, or being able to see whether a perfect physical model of a brain creates accurate conscious behavior or not. This kind of experiment would allow us to test where in the space of conscious possibilities we actually exist. Of course, creating a full simulation of a human brain is far beyond our current capabilities, so what can we do now to begin testing the contours of the conscious space?
What course should we take?
Given this perspective, our first step should be to create a simulation of a simple brain, the simpler the better. Unfortunately, the only subject that we know to be conscious, the human, has arguably the most complex brain to simulate. Better to start small with something that may be conscious and work our way up - for example, with the Open Worm Project (www.openworm.org) or the Green Brain Project (http://greenbrain.group.shef.ac.uk/).
If we create a simulation of a worm we can see if it acts similarly to an actual worm. We could call this the worm Turing test - if we can’t tell the difference between a worm’s reactions to stimuli and the simulation’s reactions to the same stimuli, then it’s an accurate simulation. Once we’re confident that our simulation is accurately modeling all known physical processes, if we observe it to be acting like a worm then the possibilities are:
- The actual worm isn’t conscious. We’ve created an accurate simulation of all understood physical properties and this is sufficient to create a complete simulation. Consciousness doesn’t play a part in worm behavior.
- The worm is conscious, and, by simulating physically measurable properties, we’ve created a simulation which is also conscious. This would only be as likely as the possibility that a computational state, implemented in different ways, would create the same conscious state. This would contradict the conclusions of the CRA.
- The way consciousness interacts with the physical processes in the brain has already been included in our model. For example, it could be that our understanding of the electrical, chemical, or quantum laws already takes into account the (apparently universal) effects of consciousness.
- The worm is conscious, but our assumption is wrong and it’s an open loop system, so it has no effect on behavior. The simulation acts the same but isn’t conscious.
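The worm Turing test itself can be sketched as a simple comparison loop. Everything here is hypothetical: `real_worm_response` and `simulated_response` stand in for measurements and model outputs that don’t exist yet, and the similarity tolerance is an arbitrary placeholder:

```python
import math

def real_worm_response(stimulus):
    # Placeholder for the measured reaction strength of an actual worm (toy data).
    return {"touch": 0.9, "light": 0.4, "heat": 0.7}[stimulus]

def simulated_response(stimulus):
    # Placeholder for the simulation's reaction to the same stimulus (toy data).
    return {"touch": 0.85, "light": 0.42, "heat": 0.71}[stimulus]

def worm_turing_test(stimuli, tolerance=0.1):
    """Pass if, for every stimulus, the simulation's reaction is
    indistinguishable (within tolerance) from the real worm's."""
    return all(
        math.isclose(real_worm_response(s), simulated_response(s), abs_tol=tolerance)
        for s in stimuli
    )

print(worm_turing_test(["touch", "light", "heat"]))  # True with these toy numbers
```

In a real experiment the “stimuli” would be batteries of controlled physical inputs and the comparison would be statistical, but the structure of the test - same stimuli in, indistinguishable reactions out - is what the four possibilities above all share.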
If any of these are the case, the next step is to simulate something more neurologically complex in the hope that our model will fail despite being physically accurate. At some point, the only realistic option given technical limitations might be to simulate just the parts of a neurological system that we understand (seeing and movement, or hearing and speech/vocalization, etc.). We would need to understand all the physical inputs and outputs from both “ends” of the system, e.g. how the ears work to create electrical impulses, and how the muscles in the mouth work to create sounds.
If the worm doesn’t react the same, or we create a simulation of a more complex system that fails to act correctly, then we’ve identified an area where consciousness may be occurring. We then have two courses of action we can take:
- Attempt to make more exact measurements of physical properties, so that we can model them more accurately.
- Tighten up our understanding of the physical inputs and outputs in the system to try to narrow down the part of the model where our simulation is breaking down. If we can narrow down the part of the system that isn’t accurately reacting, we can try to create a behavioral simulation of just that part. This would essentially be a behavioral, but not functional simulation of consciousness.
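The second course of action - narrowing down where the simulation breaks - can be sketched as checking each subsystem in isolation and flagging the ones whose simulated output diverges from measurement. The subsystem names and the `measure`/`simulate` functions below are hypothetical placeholders; in this toy setup, “integration” is assumed to be where the physical model breaks down:

```python
def measure(subsystem, stimulus):
    # Hypothetical measured input->output gain of one neural subsystem.
    gains = {"hearing": 1.0, "integration": 0.5, "vocalization": 0.8}
    return gains[subsystem] * stimulus

def simulate(subsystem, stimulus):
    # Hypothetical physical simulation of the same subsystem; the
    # "integration" gain is deliberately wrong to model a failure.
    gains = {"hearing": 1.0, "integration": 0.2, "vocalization": 0.8}
    return gains[subsystem] * stimulus

def find_failing_subsystems(subsystems, stimulus=1.0, tolerance=0.05):
    """Return subsystems whose simulation diverges from measurement --
    candidate locations where consciousness (or missing physics) may matter."""
    return [
        s for s in subsystems
        if abs(measure(s, stimulus) - simulate(s, stimulus)) > tolerance
    ]

print(find_failing_subsystems(["hearing", "integration", "vocalization"]))
# -> ['integration'] with these toy numbers
```

A subsystem flagged this way is exactly the part we would then try to replace with a behavioral (rather than functional) stand-in.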
These are the possible steps only if our model isn’t an accurate behavioral representation of the subject. If it is, we model a more complex subject, until eventually we either create a simulation that can’t accurately model the reactions, or we end up at humans.
If we follow the same course of research and end up with a simulation that can pass the Turing test, we may not be able to answer the question of consciousness. Hopefully, our simulations fail before then, because failing is the best way to learn something about where we are in the space of conscious possibilities, and what the nature of consciousness actually is.