Friday, October 30, 2009

A Very Simple Argument Against Any General Theory of Consciousness

Suspiciously simple, you might think. Here it goes:

(1.) No general theory of consciousness can be justified except on the grounds that it gets it right about certain facts known independently of that theory. Those facts include facts about the presence or absence of conscious experience in a wide variety of actual and possible beings that are unlike us in potentially relevant respects -- beings like frogs, insects, weird sea life, computers and robots of various types, alien beings of various types, and collective superorganisms of various types.

(2.) Independently of a well-justified theory of consciousness, we cannot know, with regard to most such beings, whether consciousness is present or absent.

(3.) Therefore, no general theory of consciousness can be justified.

Are ants conscious? Block's Chinese Nation? Star Trek's shipboard computer? People will reach different intuitive judgments (as philosophical discussion amply shows) -- and there's no particular reason to think, anyway, that our intuitive judgments should track the truth about such matters. It seems that a well-justified answer to these questions must lean on a well-justified general theory of consciousness. But there are a lot of (actual and potential) general theories of consciousness, some of which imply that consciousness is very widespread, others of which imply that consciousness is relatively rare. We cannot choose among those theories without prior knowledge of how widespread consciousness in fact is -- the very knowledge that we cannot have without such a theory in hand.

It's a tight little vicious circle.

28 comments:

Anonymous said...

What makes this more true about the word consciousness than it does about any other word?

Anna Nachesa said...

Goedel's theorem, revisited?

Michael Metzler said...

I might not be understanding the argument fully, but would not this vicious circle be quickly broken by any substantial scientific discovery about how neuronal mechanisms realize human consciousness? And then any trans-species theory would be based on analogy and extrapolation, rooted in similar correspondences between behavior and neuronal mechanisms (and perhaps higher level mechanisms) . . . or something like this. No?

Or are you granting independent ontology to facts via propositions, which would allow the existence of facts about what it is like to be a bat even though the human conceptual system is unable to know them - which in turn would allow for a theory of what 'consciousness is' in abstraction of embodied experience. (I do not know how a non-theist could hold such a view).

Brian Cutter said...

Eric,

(Introduction: My name's Brian; I've never commented here before, but I read your posts on occasion.)

I'm not sure about (1). There is certainly more to a theory's being justified than that it gets it right on things we know independently of the theory. It's a familiar idea that if we have a theory that gets it right on all the empirically relevant data, we can construct a different but empirically equivalent theory, one that gets it right for all the same theory-independent data. And it's also a familiar idea that this doesn't put these two theories at a stalemate. There may be other factors -- simplicity, elegance, ability to integrate with other theories we have about the world, etc. -- that make one theory more justified than the other. In the case of consciousness, it may just be that all we have is these other factors. (Well, presumably we also have the benefit of at least one theory-independent fact, namely that we are conscious, but this alone won't buy us much.) This certainly makes it more difficult to justify a general theory of consciousness, but it still may be possible to arrive at a theory which is more justified than any other theory with respect to all these other factors that contribute to justifying a theory.

Brian Cutter

Eric Schwitzgebel said...

Anon 12:56: Consider "dog". We can know certain facts about dogs, including more or less the class of things we want to include under that label, prior to having a well worked out theory of doghood (including evolutionary history, DNA, etc.). Not so for "conscious", or so I claim.

Michael: I don't think we can solve it for human beings even, though that argument is more complicated. (I develop it in Chapter 6 of Perplexities, in draft on my website.) But let's say we did have such a theory. The question would be *how* we analogize and extrapolate.

Eric Schwitzgebel said...

Brian: Thanks for your thoughtful comment! I completely agree with you about simplicity, etc. (though I do think there are a lot of puzzles about simplicity when you push on the idea). But from the base that I am conscious, plus those other sorts of considerations, it seems overly hopeful to think that we can build a well-justified general theory of consciousness that applies to all creatures. It does seem likely that we can say that some theories are better justified than others on grounds of the sort you mention; but I think there will remain a broad range of live competitors that we cannot justifiably dismiss.

Michael Metzler said...

Thanks Eric. I will check out Chapter 6.

Badda Being said...

Your claim about what we can know about dogs prior to having a theory of doghood might be contested in the absence of consensus on the meaning of "having a theory."

What could it possibly mean to have a theory of doghood in particular, and how could evolutionary history and DNA fit into that theory? Try to answer this question and see if you don't encounter the same difficulties as you point out in the formulation of a general theory of consciousness.

Badda Being said...

Point being, facts about the presence or absence of dogness might depend on a theory of dogness which, for one reason or another, you simply decline to think of as a theory at all. Maybe the reason is that, unlike theories of consciousness, theories of dogness are never actually thematized.

Eric Schwitzgebel said...

Badda: Interesting thought. There may be some sense in which every concept is theorized or theory-laden, even mundane concepts like "fork" and "dog". Yet I think "conscious" is different. We can all see examples of forks and non-forks, including seeing them in use, and regardless of the label, recognize that there is something in common there which merits a label. Now presumably we may still disagree about marginal cases, and we may not be able to articulate the principles behind the category, and maybe a being with a very weird worldview would fail to appreciate what the instances had in common that makes them different from paradigmatic non-forks. This seems to me a very different situation from "conscious", where apart from the human case (and not even all that thoroughly in the human case) there is massive, widespread, and seemingly intractable disagreement about what counts as an instance or non-instance.

Badda Being said...

From a pragmatic angle the difference would seem to be nothing more than a matter of relative agreement about what a fork or a conscious being is supposed to be able to do. We don't construct theories of forkness because agreement is easily reached about a certain range of functions. But no such agreement is ever reached about the range of functions of a conscious being, at least in your profession, in which case a theory is meant to function as a stabilizing force, a kind of proxy for a principle of self-organization which occurs in the determination of the proper functionality of forks, that is, of forkness. (Here of course I'm using 'theory' in the narrow sense in which consciousness is thematized in a way that forkness is not. The idea of a theory that doesn't thematize its object may be cross-referenced with Heidegger's notion of fore-structure.)

Badda Being said...

The -ness affix in 'consciousness' suggests to me a kind of Platonic formalization of conscious beings or of being conscious. At least this is the case when I consider how Platonic Forms are explained in lectures which invariably capitalize on such words as 'dogness' or 'forkness', words with no real use in everyday discourse.

Eric Schwitzgebel said...

I'm not sure function, specifically, is what people disagree about, Badda, or that any problematic form of Platonism is behind it, but I like your thought that theories serve a kind of stabilizing or regularizing role. The disagreement about consciousness is so much more than that about forks and dogs that we can't get on the same page.

Badda Being said...

I have no doubt that the people who disagree about what it means to be conscious don't see themselves as disagreeing about function, otherwise they would look at the question the same way I do and it would cease to have much philosophical interest at all. It would only be as philosophically interesting as the question of what it means to be a fork.

Which invites the question: Why isn't "forkness" a matter of philosophical interest? Is it because no one disagrees about what it means to be a fork? But doesn't disagreement arise only after people start asking the question and taking it seriously?

Neither question in and of itself is a matter of philosophical interest to me. I don't take either of them seriously. Yet the fact remains as you have stated it: "The disagreement about consciousness is so much more than that about forks and dogs [i.e. forkness and dogness] that we can't get on the same page." And I share your interest in why this is the case. But for me the fact arises not through an intrinsic feature of consciousness itself which forkness lacks and whereby consciousness eludes general theorization. On the contrary, it arises through the differential treatment that the question of consciousness receives over the question of forkness.

As I see it, people disagree more about consciousness than about forkness not because they have forkness all figured out. Far from it. They are actually more destitute when faced with the question of forkness because they have never even thought to ask it. Or the question only receives treatment under the more general question of Forms.

Badda Being said...

So, going back to Anonymous's question ("What makes this more true about the word consciousness than it does about any other word?"), for me the answer is quite simply nothing, but also that question is beside the point. Maybe not your point, Eric, but the point I take home from your original post, which is that constructing a general theory of consciousness involves a kind of power struggle to regulate discourse on the subject, a power struggle absent from considerations of dogness or forkness (if such considerations even exist). But if philosophers could score tenure points duking it out about the latter, I have no doubt we'd see more books and articles written on those topics.

But maybe this seems obvious to me only because I'm outside the academic environment looking in. I'm detached from the power struggles within academic philosophy, but somewhat attached to that which is immanent between academic philosophy and (pardon my audacity) real philosophy.

:-)

Unknown said...

Hi, now I sadly don't have time to read all the comments, but I want to raise the following point:
I think the problem with your argument is that you consider consciousness a binary state. By this I mean that you suppose that somebody is either conscious or not (black or white).
However, as a med student I realize that this is not so. Clinicians use the Glasgow Coma Scale as a quick and dirty way in emergencies to rate somebody's consciousness on a 15-point scale. Have you considered that some animals, e.g. ants, may be less conscious but still a little bit? (By the way, isn't it more interesting to speculate whether an ant colony is conscious?)
I believe you have hit a semantic problem that arises from the fact that people are classifying machines and want things either to belong to a category like conscious or not. By that I mean that we organize the world around us in classes (black/white) instead of on a one-dimensional continuum (black, grey, white).
It is also interesting to note that the same semantic problem arises when people discuss which animals etc. are alive. We define the word life by different attributes which are each important to a degree, e.g. metabolism, reproduction, purposeful reaction to the environment. But this system breaks down when you consider weird cases like a mule, which is obviously alive but never able to reproduce.
Hope I made myself clear. So what do you think?

Peter Hildebrand said...

I'm not very experienced in philosophy (I'm a cognitive neuroscience major) so please forgive me if this question is silly, but if you accept this argument against theories of consciousness, what do you think the role of philosophy should be in going forward on this issue? Is there anything that pure reason can offer, or do further discoveries need to come from scientific research and discovery?

Eric Schwitzgebel said...

Badda, I'm pretty sympathetic to your sociological observations -- yet I also continue to think that "conscious" and "fork" are importantly different epistemologically. I think there is a real problem of "other minds" for example -- maybe not human other minds, that just sounds too skeptical, but animal other minds and alien other minds -- while I don't think we're in an equivalently bad epistemic position about other forks. (I assume you'd probably disagree about that?)

Eric Schwitzgebel said...

Thanks for the comment, Matthias! I agree with you that consciousness must be on a scale. Yet I confess that I myself can only conceive of consciousness as a binary phenomenon. Hence, there is some flaw or lacuna in my understanding of consciousness. (Indeed, this was the topic of a recent post.)

However, I'm inclined to think that this only makes the problem worse. Even granting that there is such a scale, the epistemic questions still arise and seem unbridgeable. I think some people will hold that ants really do have conscious experience, even if it's very limited and very different from ours; others will hold that they have no conscious experience whatsoever but are just biological machines; and others (like you, maybe) will think they are an in-between case. This dispute, it seems to me, will be no easier to resolve than the dispute cast in the binary terms of the original post.

Kevin said...

Hi Eric, at the risk of revealing just how simpleminded I am, I'm not persuaded by your argument. True, we don't know how far down the phylogenetic scale conscious experience goes, but this in itself doesn't prevent us from developing a general theory of consciousness -- one that takes into account relevant biological facts about the brain processes and neuronal networks that make consciousness possible, and that describes the aspectual nature or structure of consciousness, once we're tolerably certain that a dog or elephant or porpoise or bonobo is capable of having conscious experiences. Just as we can develop a general theory of evolution without knowing whether other "evolvable" organisms exist in the universe, so we can develop a general theory of consciousness without knowing the precise point of demarcation on the phylogenetic scale where it's no longer appropriate to speak in terms of awareness, subjectivity, intentionality, mental mapping, etc. Best, Kevin

Eric Schwitzgebel said...

Here's the disanalogy, Kevin, between evolutionary theory and theories of consciousness as I see it. A general theory of evolution (not just a theory of evolution on this planet) would involve reference to necessary factors such as variability, heritability, differential reproduction rates, and sources of mutation. We might not know whether there are any systems on Alpha Centauri that instantiate it, but we can agree on criteria and then say for any hypothetical system whether it's the kind of system in which evolution occurs or not (or if it's a muddy in-between case). The parallel is not true for consciousness.

Anonymous said...

Matthias,
I'm sympathetic to what you're saying about the Glasgow Coma Scale, and I'm a medical student as well, but I think that it's not quite the same kind of consciousness that philosophers are talking about. What we care about in medicine is the degree to which a person can interact with their environment, and that's what we call "consciousness", and it's on a continuum. What philosophers are talking about is whether or not it's "like" anything to be a person (or ant, or Chinese nation, or whatever). The philosophical question is clearly binary. It's either like something or it's not. And if there is somehow a person whose experience is severely dulled and minimal... well, it is still like something to be them, if they exist.

Peter,
I think that Eric is saying that although we can never have a theory of consciousness, we can certainly disprove the theories that exist, often to great effect. For instance, David Chalmers doesn't have a positive theory of consciousness of his own, but if he's right in arguing that physicalism is false, then that will have a major impact on our worldview. That's still a pretty strong result, even if we never learn what makes us conscious.

Eric Schwitzgebel said...

Thanks for the assist, Anon! I agree with both of your points.

Medical Student said...

Eric,
Would you say that your argument is equivalent to saying that skepticism about other minds prevents theories of consciousness from being justified? If we can't know what it's like to be other people, then we can't formulate a general theory about what it's like.

It seems like if this is the case, then physicalism has particularly big problems. It must hold that we can know what it's like to be other people, and that we can do it entirely through the methods of science. But this is tantamount to requiring that neuroscientists be able to empirically refute skepticism, which seems absurd.

Even if this is not contradictory on its face, it is certainly not supported by neuroscientists and is quite a strong metaphysical encroachment on science. But this also undermines a good portion of why physicalists believe as they do.

Eric Schwitzgebel said...

Med Student: I agree that skepticism about other minds, if sustainable, would be sufficient to explode any general theory of consciousness, but I don't think such skepticism is necessary for my argument. I am not skeptical about the minds of other people, or of dogs or chimps -- but at some point the biological and behavioral differences become large enough that I think skepticism is warranted.

Medical Student said...

Eric,
I'm not quite sure what you mean. I take your point 2 to be a statement of other-minds skepticism. Of course, I noticed that you said we can't know whether "most" beings have consciousness, so it's not quite full force, but I do think that it is exactly this element of unknowability that makes your argument convincing.

Of course, I don't think that full skepticism is actually required. Even a very small degree of qualia skepticism would suffice. It could be skepticism about a particular borderline case, like a worm. As long as there is an unknowable fact about other people's qualia, we will be unable to construct a general theory, since the purpose of a theory is to explain the facts that we find in nature.

Eric Schwitzgebel said...

Med Student: Thanks for clarifying! If that's what you mean by skepticism about other minds, I agree with that description of my view. I was worried that you might be talking about more radical skepticism about other minds according to which I can't even know if my wife has a stream of experience.

Badda Being said...

Eric, you assume correctly. I don't think there is a real problem of other minds (that is to say, other consciousnesses), or rather it's not a problem for me, so I don't find myself in a bad epistemic position with respect to it. Yet I do think we can duplicate that epistemic position with respect to other forks (that is to say, other forknesses), like this: How do you know there are other forknesses? -- though perhaps we don't need to ask this question as it relates to pronged-finger forknesses, that just sounds too skeptical, but to road forknesses and utensil forknesses. If the problem of other forknesses doesn't grip you with the same intensity as the problem of other consciousnesses, I think we can attribute that to an impoverished discourse on forkness in general. But as far as I can tell we seem to be in the same epistemic position in either case. That is, in either case the solution is not something to be achieved by drawing inferences along a rational continuum but by positing what is simply the case with complete, uncanny ballsiness.