Thursday, January 30, 2014

An Objection to Group Consciousness Suggested by David Chalmers

For a couple of years now, I have been arguing that if materialism is true the United States probably has a stream of conscious experience over and above the conscious experiences of its citizens and residents. As it happens, very few materialist philosophers have taken the possibility seriously enough to discuss it in writing, so part of my strategy in approaching the question has been to email various prominent materialist philosophers to get a sense of whether they thought the U.S. might literally be phenomenally conscious, and if not why not.

To my surprise, about half of my respondents said they did not rule out the possibility. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above. Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.

The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between the people composing it. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.

This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.

To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who instead of having a brain has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?

Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that

(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.

Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly visually detect a snake. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed, turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, who cooperate by feeding their results into a global workspace or who compete for fame or control. So grant (a) for the sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.

The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because the communication is only between two sub-entities, at a low information rate; but the U.S. is composed of about 300,000,000 sub-entities whose informational exchange is massive, so the case is not similar enough to justify transferring intuitions from the one to the other.

41 comments:

Unknown said...

This reminds me of the stuff Chalmers says about the realization relation, which got reprinted in The Journal of Cognitive Science not so long ago.

Also reminds me of the Hacker line that sub-systems do not have mental states, people do. Thus (and this is a gross paraphrase), the visual system does not see anything. The person sees using her visual system.

Seems like there is more careful work to be done as to when to assign a function to a part, to a related collection of parts, to a whole, etc. and what that might mean for conscious states.

Eric Schwitzgebel said...

Thanks for the thought of connecting this to Chalmers on realization, Unknown. I see some definite connections.

I think it can be very misleading to say that the visual cortex "sees". However, it seems less misleading to say that it engages in "cognitive processing" -- and for that reason I wouldn't quibble with saying it has "cognitive capacities" too -- though I totally agree with your final comment that we want to be careful here. As you say, there is a lot more work to be done on part-whole issues in attributing mentality. If my work on USA consciousness prompts the philosophical community to think through the issues more carefully and explicitly, I would be very pleased with that result.

FJA said...

In a very naive way, I would say that what seems to preclude the United States from being conscious is the fact that a great number of its subparts have conscious beliefs, desires, hopes, and wishes about what the United States does and should do. And these beliefs, desires, hopes, and wishes make them act in ways that causally contribute to the actions of the United States (and notably to the kind of actions that would lead us to attribute mental states to the United States). I don't see anything similar about Sirian Supersquids or Antarean Antheads...


FJA

Daniel Estrada said...

I left a comment on the Google+ reshare (http://goo.gl/CZQt91), and I don't know if I should copy/paste the comment here for more discussion.

In any case, I wanted to add that it might be unwise to think about the functional components of the mind in mereological (part-whole) terms. The mind is a complex system composed of many functional components at many nested levels of analysis, and the interactions between these components will undoubtedly exhibit nonlinearities that are difficult to treat in mereological terms. Which is to say, consciousness is an emergent property of the whole that does not neatly decompose into the functional activity of the parts.

With this in mind, I want to mention Bechtel's work on organization and mechanical explanation in the sciences (http://goo.gl/XWWuX9), which I mentioned in the other thread and which seems relevant here again. On Bechtel's view, organization emerges from the interdependencies of functional components treated abstractly in a mechanical explanation. This emergence is not explained in terms of a mereological containment metaphor or part-whole relations, but instead in terms of the dynamics of the system when the functional parts interact.

Bechtel's work mostly focuses on mechanical explanation in biology, where organization is a primary concern. But if the mind is similarly organized into functional components, we should expect the explanatory framework to be useful in explaining the mind as well. I think this would be a good place to start: since Bechtel's theory has a solid foundation in graph theory, where the logic has mathematical precision, it would also allow philosophers to circumvent all the difficult metaphysics of mereology.

David Duffy said...

You seem to be slipping freely between cognition and consciousness. Surely countries usually act in a blundering unconscious fashion at the highest levels, while being competent at "automatic" subtasks (I won't even mention the z word).

John Gregg said...

The question about the USA being conscious presumes that functionalism is true, which I gather is the point. If you buy functionalism, each of us has subsystems as well, and there is no principled distinction between the processing that goes on within each of the 300,000,000 citizens and the processing that goes on between them. You might be able to say that, in all likelihood, there is a lot more integration up and down the levels of organization within each human brain than there is between individual humans, or at higher levels of social/political organization within the USA. If this is true, then there is certainly an integration bottleneck as you jump from the level of a person to higher levels of organization within the USA, but this is a matter of degree. When does such a bottleneck become so constrictive that it is a show-stopper for consciousness (to, I emphasize, a functionalist)?

Personally, I've always gotten suspicious whenever someone uses the word "integrated" as a reason something is conscious. Integration is something that happens causally, over time, and over a whole mess of unrealized, hypothetical scenarios, which seems like a shaky ground for consciousness. But that, perhaps, is for another time . . .

-John Gregg
http://www.jrg3.net/mind/

Eric Schwitzgebel said...

Thanks for all the thoughtful comments, folks!

FJA: You seem to be embracing what I call an "anti-nesting" principle, according to which a conscious being can't have conscious parts, though you add a twist about the conscious parts having attitudes about what the conscious being as a whole is doing and should do. That's an interesting idea, but I think the problem is that it commits to some weird things in (highly hypothetical!) cases in which ultra-tiny conscious organisms get incorporated into your brain without any effect on your cognitive or behavioral functions. (Block has an example like this in "Troubles with Functionalism".) This being could, for whatever reason, want to play the role of one neuron in your visual system. You'll also potentially get "fading qualia" weirdnesses of the sort described in Chalmers's 1996 book. (I have a brief treatment of anti-nesting principles in Section 2 of the linked essay.)

Eric Schwitzgebel said...

Daniel: I am myself drawn to views like the one you describe, though I don't want to commit to anything strong along those lines. For purposes of the dialectic, I'm inclined to grant clean functional decomposition to the Chalmersian objector for the sake of argument. However, if my responses fail, this is a nice potential counterargument to keep in my back pocket, as it were!

Eric Schwitzgebel said...

David: Yes, I think that is a fairly common view in the literature on group intentionality in recent social philosophy. I am arguing for something different, though, and the kinds of considerations I bring to bear in my argument are quite different from those in that literature (List & Pettit, etc.).

Eric Schwitzgebel said...

John: I think you have expressed something about what makes the Chalmersian objection seem at least first-pass plausible: The human case and the USA case seem to differ substantially in structure, organization, and "integration". I think it would be interesting to see a serious effort to articulate what this difference is, and why what the USA has is not sufficient for consciousness. I do think that's going to be a pretty tricky task, though, if one wants to grant consciousness to lots of different kinds of hypothetical, weirdly-structured aliens who are very similar to us in how they interact with their environment.

I don't think my argument requires functionalism in any strict sense of the term, but I do think it assumes that the most natural way to develop materialism takes functional roles and outwardly observable behavioral patterns very seriously in assessing whether an entity is conscious.

FJA said...

Eric: Thank you for your answer. However, I think that I could refine the kind of principle I was suggesting in order to avoid your objections. Notably, if one gives a more detailed account of what it means, for a conscious subpart endowed with conscious mental states, to contribute to the behavior of a complex whole.

You could say, for example, that it is not possible for a complex whole to have conscious mental states if some of its subparts

1/ have conscious attitudes regarding what the whole is, does, and should do

(for example)
and

2/ these conscious attitudes causally contribute to what this whole does (and especially to the actions of the whole in virtue of which we are tempted to ascribe mental states to this whole)
And this second condition has to be understood as follows: if the subparts didn't have these conscious attitudes, the whole wouldn't be able to do what it does (in other words: the functional role played by these subparts, in virtue of which the whole has the complex behavior that incites us to ascribe it conscious mental states, could not be played by non-conscious subparts).

-That way, you avoid the problems linked to the replacement of a neuron of a brain B by a little conscious thing T which acts in a way similar to a neuron. Indeed, the brain B would be able to do what it does even without this weird conscious “neuron” T (= because, without T, B would just be similar to a normal brain).
So: if your conscious subparts play a “dumb” role, they don’t preclude the whole from being conscious. But if they play an “intelligent” role (a role that could not be played by a non-conscious element) and if this role is necessary for the "intelligent" behavior of the whole, then the whole cannot be conscious.
This allows us to ascribe consciousness to this brain (as we want to), and to Sirian Supersquids or Antarean Antheads too. It also allows us to avoid the weird “fading qualia” cases you were mentioning. But it also allows us to deny that the USA is conscious (because if each of the American citizens were replaced by stupid & simple neurons, or even reasonably complex non-conscious computers, I seriously doubt the USA would be able to behave as it does).

I agree that this is a (somewhat intricate) formulation of an anti-nesting principle, but I find it much more intuitive than the ones you described in your essay. By the way, I now see that it resembles Chalmers's objection in some way.

Thank you for your attention, and sorry if I misunderstood some of your points.

FJA

Michael Drake said...

It's always annoying when people do this, but I'll be annoying and point out that I made a very similar sort of objection back in May of 2011 (as if I'm the first person ever to think such thoughts):

To the extent that the [Ned] Block example is gripping..., it gets its grip by constraining agent activity to tasks that are isomorphic to the nonconscious activity going on in neurons or the like. The agent's own consciousness is, in this sense, bracketed out. But that's not what's going on in the United States or ant hill examples, where all of the group level awareness behavior bears a transparent relation to the apparent awareness of its constituents....

E.g., if the United States behaves as if aware that Osama bin Laden is hiding out in a private residential compound in Abbottabad, Pakistan, we don't wonder, "Wow, where did this group awareness come from? This is a hard problem!" Instead, we know (whether or not we have all the facts to hand) that there's some kind of story to tell that will account for that collective awareness in terms of descriptions of the phenomenal awareness of some class of individual agents independent of the mechanisms on which that conscious content supervenes.


Sorry to be so annoying.

So as for your last point, then, I think the challenge is to come up with some case of group-level "cognitive content" (let's call it) that isn't simply the aggregation of the cognitive content of its constituent members.

Eric Schwitzgebel said...

FJA: Yes, that is an interesting suggestion. It's more complicated than any objection I consider in the essay, and its twists might help it dodge my replies to anti-nesting, Dretske's objection, and Chalmers's objection, all three of which it resembles to some degree.

I worry about cashing out the requirement that the subsystems' functional role "could not be played by non-conscious subparts". Fleshing that out will require a theory about what kinds of roles require consciousness; and what kinds of possibilities, and how remote, are relevant to the "could not" also seems a potential minefield.

It's also a little hard for me to see what would motivate such a principle, other than the desire to generate the result in question. So, for example, it seems to rule out the possibility that *any* group of people under *any* circumstances could come together and generate a group mind as a result of their intelligent, conscious interactions. But if one allows that they could generate a group mind if only they interacted a bit more stupidly with each other (in a way that could be mimicked by non-conscious entities), why should adding conscious intelligence to their interactions make the group mind disappear?

Still, it's an interesting proposal. It would be very cool to see it thought through more fully and tested against a range of thought-experimental cases.

Eric Schwitzgebel said...

Michael: I appreciate the reminder! I agree that your objection resembles both Chalmers's objection and FJA's objection (in the comments above). But I also think it merges together a couple of issues that I would like to keep clearly separated. One is the issue of the attitude content or cognitive content of groups and how that relates to the attitude content of individuals. This is a hot topic in social philosophy these days, with some authors arguing that group attitudes are reducible to the attitudes of individuals and others arguing that they are not. I think the issue of group phenomenal consciousness is separable from the issue of the reducibility or irreducibility of group attitudes.

To see this, consider the question in contemporary metaphysics of mind about whether individual conscious states are ontologically or explanatorily reducible to something else (e.g., brain states). Most people who advocate reducibility still accept realism: The fact that some conscious state A is *reducible to* something else, or that we can *do all the explanatory work by appealing only to lower-level stuff*, does not entail that conscious state A does not exist.

I address this issue briefly on pp. 21-22 of the essay (though not in much more detail than I have given above).

I appreciate the thoughtful comment, and I'm flattered that you're engaged enough in my arguments about this topic to be following through more than two years later!

Scott Bakker said...

Would there be a way for you to bootstrap this argument to apply to EMF theories, Eric?

Michael Drake said...

Thanks Eric. I don't think my point is about "reducibility" in the standard sense, because there isn't any relevant transformation or series of transformations between ontological or theoretical levels. Instead, the point is that the group-level information content is just the union of all the information contained in the members of the group. There's no "lower" or "higher" levels; there's just less and more.

Terminology aside, though, I think the prima facie challenge for the group mind hypothesis is to come up with a single example of group level content that is as, or seems, as explanatorily distant from individual content as phenomenal content is from the activity of individual neurons. (I realize that group mind is meant to be a prediction from functionalist-type accounts; but I don't think we can make that prediction without understanding the functional and dynamic relationships that connect neuronal firing to phenomenal experiences, and without showing how those relationships map onto the relationships between individual mind and group mind.)

Anonymous said...

ES, on what basis would you even decide what is or is not part of the "USA" at any given time so as to even know/measure that/how 'it' is acting/thinking or not?
-dirk/dmf

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

@ Scott: If an EMF theory insists on EMFs in particular for consciousness, it seems not to admit the range of alien cases relevant to the argument, since presumably it's possible that a being without a coherent EMF of the right sort would exhibit all outward signs of language, intelligence, self-representation, strategic use of resources, etc., as well as very complex internal organization driving those outward signs. Thus, it would be "neurochauvinistic" in the sense I criticize in the penultimate section. On the other hand, if there is some general functional role or structure *other* than EMF in light of which we can justifiably claim that EMF yields consciousness in the human case, then to address USA consciousness the question needs to be whether the USA meets those conditions.

Eric Schwitzgebel said...

Michael: I'm not sure I agree with what you say in the first paragraph, unless those statements are read so weakly that they are also true in the individual-consciousness case. This also connects with the recent work in social philosophy that I mentioned in my previous response. I do find myself broadly sympathetic with those who say there is no simple (e.g., "union") relationship between lower-level attitudes and group-level attitudes -- though I don't think that this needs to entail being committed to any strong emergence thesis.

On your second paragraph: Phenomenal content can be quite closely tied to the individual behavior of individual neurons, at least in some cases on some neuroscientific theories. For example, an individual neuron might be tuned to react to a brightness gradient, or a bit of motion, or a bit of color, in some region of the visual field, and what we visually experience might be closely related to that, if there are enough other neurons who do the right thing. This isn't to say that the representations in early visual processing match visual phenomenology as we experience it -- but the relationship might not be so distant and convoluted as you seem to be supposing in your objection.

Eric Schwitzgebel said...

Dirk/DMF: For this argument I treat the USA as a vague-boundaried entity with citizens and residents as some or all of its parts. Other parts might include roads, computers, etc. I hope my argument is pretty flexible on this point, as long as we're clear that the USA is a concrete entity and not an abstract entity of some sort. I'd be interested to hear if you think something turns on one way vs. another of drawing the boundary.

Anonymous said...

It seems to me that if we are going to avoid a sort of misplaced-concreteness/reification it would be helpful to have some-thing(s) that we could measure/track to see if "it" is organized/integrated enough to be characterized as an agent and sort it out from other (some more local, some more distributed) agents/factors that might be responsible for the resulting effects/events/assemblages in question, no?
Also what is the proposed means of co-ordination/coding that brings together what look like pretty disparate agents/objects from where I sit?
dirk

Eric Schwitzgebel said...

Dirk: reasonable requests! But on the first question, there's not much I can do without the materialists, whose views this project is parasitic on, addressing this question a bit more fully themselves. If my work forces them to clarify under what conditions either subparts of animals or collections of animals would be phenomenally conscious, then I would consider it a dialectical success. The issue is seriously underexplored. (Tononi is an important exception, but see my discussion of his view in section ii and in earlier posts on this blog.)

On the second question, I think there are diverse means of coordination. One is visual and auditory observation of each other's bodily outputs. Another is via electronic media. There are huge amounts of information transferred between the subparts of the USA through both types of channels (and others besides), which in turn result in the coordinated behavior of the system.

FJA said...

Eric:
-I think that the principle I was suggesting is intuitive, insofar as an anti-nesting principle is intuitive. In fact, it retains the general idea of the anti-nesting principle, according to which a conscious whole cannot have conscious subparts, but specifies it in a way that makes it more coherent and more likely to resist objections.

-I think that the problem you are pointing to (“cashing out the requirement that the subsystems' functional role "could not be played by non-conscious subparts". To flesh that out will require a theory about what kinds of roles require consciousness”) is not so awful. If we have a materialist theory of consciousness, it means that we have a theory about what kind of organizational properties is sufficient for consciousness (whatever this theory happens to be). Therefore, we can determine whether a given functional role can be played without consciousness.

-As regards your other concern (“So, for example, it seems to rule out the possibility that *any* group of people under *any* circumstances could come together and generate a group mind as a result of their intelligent, conscious interactions”), I think that the principle I was suggesting in fact allows a group of people to generate a group consciousness as a result of their intelligent & conscious behaviors (see the first clause I was suggesting!), but only if their conscious states don't cause the “intelligent” behavior of the whole (the behavior which is the basis on which we tend to attribute consciousness to this whole) BECAUSE these states bear on what the whole does and should do.

In other words: I think that it could be possible for a group of intelligent beings to create a kind of “group consciousness” by their interactions, but only if the interactions by which this group “intelligent” behavior is constituted are not caused by conscious attitudes of the members of the group which bear on what the whole should do (as a whole). It seems intuitive to me: a group consciousness is only something that could emerge unbeknownst to the members of the group (or if the members of the group are only playing "stupid" roles).

Thank you for your attention and your answers,

FJA

Anonymous said...

We are actually pretty poor at "reading" each other and even worse at co-operating in complex and enduring (let alone reflexive) ways, which is why tasks tend to be compartmentalized by most large organizations, and then of course all of the usual problems of "silos" arise. And even if we can get over the technical problems of computer/systems interfaces between differing mechanisms/systems (not so easy, to say the least; ask the feds working on health-care and the VA), at some point human agents will have to become involved, and back around we go. If you get a chance check out:
http://syntheticzero.net/2014/02/03/andrew-pickering-on-the-enemy-within/
thanks, dirk

Eric Schwitzgebel said...

FJA: Thanks for those very thoughtful comments!

On "If we have a materialist theory of consciousness, it means that we have a theory about what kind of organizational properties is sufficient for consciousness (whatever this theory happens to be)." From my examination of a sample of theories in the literature -- Dennett, Dretske, Humphrey, and Tononi -- I think most materialist theories as stated and straightforwardly interpreted are such that the USA meets the sufficient conditions. However, none of the four theorists I mentioned agrees with this in personal conversation! See my blog posts on these theories in 2012, and see the brief discussion in my essay on why I chose these four theorists in particular.

However, I agree that your principle would be interesting to try to work out. If the primary effect of my work on this is to persuade someone like you to work out a clear and plausible principle that excludes USA consciousness but includes a broad range of other types of conscious beings ("rabbits and aliens"), which is then broadly accepted by the philosophical community -- well, I would be delighted and consider that a major success!

I have to admit that I still don't find the "unbeknownst" condition very appealing. Suppose we have a group consciousness that meets your condition, and then some member of the group discovers this is happening and decides it's crucial to keep contributing in the same way to the process, but now with understanding. Would the group-level consciousness then disappear despite no difference in group-level behavior? I could see trying to clarify your condition to escape such a consequence, but such a clarification might bring other troubles in its train.

It would be very cool to see a brief (or full-length) paper that really tries to sort this out, if you're game to write one!

Eric Schwitzgebel said...

Dirk: Thanks for the link to the video. I don't have time to view it right now. Maybe later! Some group-level interactions are nicely tuned, e.g., in settling on market prices and exploiting arbitrage. Others, clearly, are much more problematic. Human cognition, too, is very good at some things (shape from shading) and very poor at others (formal logic).

Anonymous said...

really setting market prices, hard to grasp that in this clime/age?
no worries about the video if time/interest allows at some point would be interested in yer take on Pickering and the limits of organization so if you get a chance any comments would be most welcome, cheers, dirk

FJA said...

Eric: You are right, I have to write a short paper about this question; this way, you will be able to seriously object to what I was suggesting.

FJA

FJA said...

Hi,

I wrote a (very) short paper where I try to expound the "anti-nesting" idea I suggested here. I would be happy to receive comments about it.

The paper is available here:

https://www.academia.edu/5974981/_draft_How_a_materialist_can_deny_that_the_USA_is_probably_conscious

FJA

FJA said...

Sorry, I think that there is a problem with the file on academia.edu. Here is a link which seems to work better:

https://www.academia.edu/5975605/How_a_materialist_can_deny_that_the_USA_is_conscious

FJA

Eric Schwitzgebel said...

FJA -- The link doesn't seem to be working for me, but I got the attachment in the email, and I look forward to reading it soon!

Michael Drake said...

Thanks Eric. I think it's one thing to say that there is no simple relationship between lower-level attitudes and group-level attitudes (I was speaking of information rather than attitudes, which I think is a different question, but I accept that attitudes makes for a richer comparison), and quite another to say that that relationship, whatever its complexity, maps in some meaningful way onto the relationship between lower-level brain activity and "qualia" (or whatever it is by dint of which one might suppose there is a distinctively "hard problem").

Suppose a group of football fans collectively wants to go to the store to get a twelve-pack to bring back to the house so that they have some beer on hand for the game. There might be some complex relationship between the group’s collective desire and the individual desires of the group’s members. But one thing seems clear: There is at least someone in the group who believes there will be someone in the group who might want to have a beer during the game. That is very unlike the relationship between color-experience and neuronal activity in the extrastriate and primary visual cortex or whatever.

The same, I think, goes for the "ties" that might respectively obtain in the two sets of object-group relations. You're right to say that neurons and larger structures can be correlated with (and presumably causally implicated in) certain types of phenomenal content. But that's not "the hard problem." And I think you need something like a "hard problem" of collective intentionality to motivate a group mind hypothesis.

Eric Schwitzgebel said...

Michael: FJA has been trying to work out the details of an objection along these lines in a paper in draft. If you're interested to see it, send me your email address and I'll send your address to FJA.

On the last issue: I think that if we put a lot of epistemic weight on our intuitive sense that group-level consciousness does not arise, then we should only grant the possibility of group-level consciousness in the face of some huge explanatory demand. But I don't think we should put a lot of weight on such intuitions; and I think that we should be willing to ascribe group consciousness if that's where the most elegant theories of consciousness lead us, even without some further explanatory demand that the appeal to group consciousness satisfies.

Angra Mainyu said...

Hi, Eric,

I'm late to the thread, but I'd like to suggest another objection, inspired by your description of Chalmers's objection, or more precisely your description of some of the complex cognitive capabilities assigned to the US, namely "to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.".

I would argue that there is no entity "United States" that even appears to have such complex capacities. There appear to be different humans doing different stuff with their different capacities, but not a single entity, for the following reason:

If that entity existed, it seems it would have – for example, among others – the capacity to spy on its neighbors by understanding communications; that includes the capability to understand human language, and to react to it intelligently.

But if that's what's going on, why does the United States not state the fact that it's conscious?

Indeed, the US would know, given the above capabilities, that we're in the dark about it – that nearly all humans do not believe that it is conscious, and nearly all of those who have considered the matter believe it's not conscious at all.

I suppose it might be argued that perhaps the US wants to deceive us, pretending – by means of silence when we ask or ponder the matter - not to be conscious, or at least not to be intelligent enough to understand our language.

But what about other countries?

It seems that even when some country was being destroyed (by other countries, or by its own people), it never made its existence clear and asked for compassion, or anything like that.

I guess it might be argued that countries are always deceitful for some reason, but I don't think that's a plausible interpretation of what's going on. If anything, if the intentions of these alleged entities are related in some way to their behavior, it seems to me that in some cases it would be in their interest to make their existence as distinct conscious beings known, if for no other reason than as a last-resort survival attempt, and they would have the capacity to actually make that happen.

Plus, a deceitful intelligent being that strives not to look intelligent is in general not plausible; we would need some specific reason to think that is what's going on.

Granted, there is the issue of how the US would manage to assert its own presence. But we're suggesting here that the US seems to wage war, spy, monitor, represent itself, etc., so why couldn't it (or other countries, even those in trouble, fighting for their survival, etc.) make a simple claim that it exists as an intelligent entity, with its own stream of consciousness?

Granted, also, it might be suggested that the US is conscious but not nearly intelligent enough to speak; perhaps it's like a mosquito (if mosquitoes are conscious), or like a dog. But then an entity like that would not have the capabilities listed above – which were what motivated the suspicion that the US might be conscious, it seems – so why would anyone suspect that the US is conscious?

Eric Schwitzgebel said...

Interesting objection, Angra! Sorry about the slow reply -- I've been traveling, and the pace has been hectic.

My first reaction was what you describe at the end: In the full-length written paper I'm very clear that I'm not claiming that the US represents itself as conscious; but if representing oneself as conscious were a necessary condition for consciousness, rabbits would presumably not be conscious -- and my argument really only applies to materialist views liberal enough to include both rabbits and weirdly constructed aliens.

But the way you phrase it invites a rejoinder to that standard response of mine: Given the other capacities the US has, e.g., to react angrily to statements by other nations, shouldn't it be intelligent enough to realize that it's conscious, if it is in fact conscious?

It's an interesting thought. I see no reason to think that conscious entities structurally very different from us would have the same intellectual capacities and blind spots that we have. In fact, it seems likely that their blind spots and capacities would be different. The US is good at certain kinds of information processing and group-level representation, not as good at other kinds. But I realize that's not a full answer.

One weird thought: I could imagine it someday becoming the majority view that national groups do have a stream of conscious experience, and becoming something like the general view of the US (this is not to say that anything the majority thinks is also the US view). Weirder views have had their day in cultures around the world! And then maybe the group would represent itself as conscious. If it then further turned out (contra my inclination regarding rabbits) that explicit representation of itself as conscious was the crucial thing missing for the US to be conscious, then any arguments that were successful in winning the group mind over to that attitude would have been self-fulfilling.

Marco Devillers said...

Funny thing is that you're awfully close to Gaddafi, who didn't believe in institutions since "the will of the people" will emerge no matter what.

Angra Mainyu said...


Thanks for the reply, Eric, and no problem.

I actually agree that rabbits are plausibly conscious; I would also agree that structurally different entities may very well have different capabilities and blind spots.

However, in this particular case – at least as I understood the claim about some of the capabilities of the United States – my impression is that the US would appear to understand the perspective of others (humans or countries) and look at the world from their point of view, revealing a complex theory of mind. For example, if it's spying on Merkel, or on Russia, it seems it considers how Merkel or Russia will react, what countermeasures they will take, what they intend to do, etc.; in addition, it would be able to understand human language, etc.

That would seem to reveal a very complex theory of mind, and based on that, it seems very probable to me that it would also represent itself as conscious. That is, if the US represents both Merkel and Russia as conscious and understands human language – including, presumably, claims about consciousness – wouldn't it represent itself as conscious too?

By the way, it seems countries would not need to represent themselves as conscious in the sense of realizing they are in order to say they're conscious; they would only need to conclude that, in some instances, it would be in their interest to claim to be conscious.

But my impression is that the stronger result is plausible... at least, if I understand what you mean when you say the US spies on others, etc.

However, now in light of your reply I'm not sure I do understand what you have in mind, so I'd like to ask for clarification.

For example, when someone says that the US spies on, say, Russia, and vice versa, I usually understand that some humans spy on other humans, etc., in organized groups; they make plans, read documents, hide stuff – that is how I intuitively represent the situation, and I do not think of any US or Russia over or above that.

But when you say that the US spies on its neighbors in a context of a discussion on group consciousness, I understand that as a suggestion that there is an entity "US" doing just that.

But doing that would seem to be doing what the humans in question are doing – in other words, their spying actions would be ascribed somehow to the US. While I do not understand how that is supposed to happen, those actions require human-like mental skills.

But now I see I may be misreading. What do you have in mind when you say the US does those things?

Also, your point that not anything the majority thinks is also the US view raises another issue: when is something the action of the US?

For example, if Obama tells Putin to pursue diplomacy, should we understand that the US told that to Putin?

Eric Schwitzgebel said...

Oy, I hope not! But yes, there is a sordid history of group-consciousness ideas in early 20th-century fascism. I definitely want to stay away from those sorts of political implications.

Marco Devillers said...

Yeah well. Sorry, but I really think you're there already. I could easily lift your argument from a nation abstraction to racial, ethnic, or religious abstractions.

Not that there is nothing wrong with being too much of a humanist but that's a different discussion.

I suggest you only talk about this in a hushed voice.