Erik Hoel recently wrote that the definition of consciousness is not controversial. The article is worth a close read because he not only makes this claim, he considers the obvious objections: that the common definition is not scientific; that it is tautological; that it is vague. His conclusion is that the definition is enough for now, enough for researchers to work on the problem. It’s not a scientific definition, because it doesn’t add anything to the meaning of the word, but it is enough to point to “the thing being researched.” Furthermore, he points out that it isn’t reasonable to expect a scientific definition before the science has been worked out.
The book “Life on the Edge” makes the same point and draws an analogy with other scientific endeavors:
“Indeed, it is our view that the quest to understand this strangest of biological phenomena is often hindered by a persnickety insistence on defining it. Biologists cannot even agree on a unique definition of life itself; but that hasn’t stopped them from unraveling aspects of the cell, the double helix, photosynthesis, enzymes and a host of other living phenomena,”
— Life on the Edge: The Coming of Age of Quantum Biology by Johnjoe McFadden, Jim Al-Khalili
What is Hoel’s definition of consciousness, the one he says is not controversial? He quotes Nagel’s “What Is It Like to Be a Bat?”:
…basically, that there is something it is like to be that organism.
Hoel then goes on to quote researchers using similar definitions, showing that there really is a consensus, as vague as that definition may be.
I agree with 99% of Hoel’s piece (and my disagreement is over a sin of omission rather than commission). If you haven’t read it, do so. My summary does not do it justice.
Near the end of the piece, Hoel says:
Recently, some extra definitional confusion was thrown back into questions of consciousness. Specifically, there is now the highly relevant concern of whether or not contemporary AIs are conscious, or could be conscious in the future.
And:
…it is consciousness that gives an entity moral value. That’s why a scientific theory of consciousness is so necessary, and what the stakes ultimately are.
He then goes on to say that much of the popular debate about AI comes from a confusion between the terms conscious and sentient. It is clear from the timing and context that the AI issue is what prompted Hoel’s essay. But does the essay address the moral question?
Bentham said:
“the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”
The entirety of my quibble with the essay is that the definitions of consciousness presented do not make it clear whether conscious beings suffer. Almost none of them mention pleasure or suffering at all, and those that do only mention it in passing as one of many things that might be experienced. Repeatedly, the focus is on the nature of experience itself, not on specific experiences, such as pain and suffering.
This could easily be answered by saying that consciousness always includes consciousness of pain. If that is not part of the definition, then I would say consciousness has no moral value and no utility; it is not morally consequential. It would still be very interesting and worthy of study, but it would lose all or most of its ethical importance.
My own instinct is that conscious beings could plausibly exist that do not experience pain, suffering, or frustration, or their positive counterparts, such as pleasure, satisfaction, and flourishing. Sci-fi is full of robots and machine intelligences that are conscious but don’t feel pain.
If the definition of consciousness does not imply the experience of pain (or of utility in general), then we need another term.
Jonathan Birch’s Foundations of Animal Sentience (ASENT) project is one example of researchers using the term sentient for beings that experience pain. And the neural correlates of pain intuitively seem more amenable to study than those of consciousness.
This essay by Heather Browning of ASENT gives a good summary: The sentience shift in animal research
For utilitarians concerned about AI ethics, clarifying this point is important, and the ASENT project seems like a great start.
As for the actual definition, I think consciousness will only fully make sense once we understand some more basic facts about the universe, such as: does time really go in only one direction, or is that an artifact of our perception? All epistemology is currently predicated on that assumption, and we don't actually know it's true.
I think Hoel is just confusing people. He's saying consciousness is a NAME (in the sense of Kripke's causal theory of reference), while everyone else is saying they want a DESCRIPTION (in the descriptivist sense). When people say "consciousness has no definition," what they really mean is that this places real limits on what you can accomplish in a discussion about it. No one is saying that an imprecise name is not useful at all.
Like, we all know what people usually mean by "the inventor of the lightbulb," even though the truth is more subtle. This name is not "descriptive" (in the descriptivist sense) because it is incorrect. But that's okay, because (so Kripke says) there is a causal chain linking my usage back to the initial popular usage.
And so it is with consciousness. We all get, in broad strokes, what the name refers to. That is not the problem. The problem is that we can't talk about it in any more detail, because we don't know anything else.