Wednesday, November 25, 2015

Names in Philosophical Examples

The most notorious men in philosophy used to be Smith and Jones. For example:

Smith, who works in the country, has promised his wife to be in the city at four o'clock. It is now shortly before half past three, and Smith is seated at a small table in the country airport.... (Lehrer & Taylor 1965)

... suppose that Jones has been charged with Smith's murder and has been placed on trial.... (Donnellan 1966)

Suppose, for example, both that Smith is to-day legally (morally) obligated to pay Jones $500.00 and that a week from to-day Smith will murder Jones.... (Castaneda 1967-1968).

Concerning such a man we can make many successful predictions about his future actions like: "Smith will never accept a bribe, corrupt the innocent, commit murder or theft...." (Grant 1952)

In the 1980s and 1990s, the culture of philosophy changed, and first names became more standard for these types of examples. Also, a wider range of names was used, though my impression is that "Alice" and "Bob" were common favorites:

Al wishes to show Bob how much he appreciates his philosophical help over the years and he believes that an excellent way of doing this is to send Bob an autographed copy of his new book.... (Mele 1988).

Suppose that none of three women, Alice, Beth, and Carla, has a special relationship with any of the others, and accordingly, none has special responsibilities to any of the others. (Scheffler 1999)

To many, John has always seemed a model husband. He almost invariably shows great sensitivity to his wife's needs, and he willingly goes out of his way to meet them. (Railton 1984)

"Smith" and "Jones" were always assumed to be male. In contrast, by the 1980s, philosophy was opening to a mix of male and female example protagonists.

But there's one thing "Smith", "Jones", "Alice", and "Bob" all have in common. They are bland. Bland, here, is not entirely a good thing. "Bland" is culturally relative. By choosing these names, 20th century philosophers were conveying certain ethnic expectations to their readers -- that their readers, too, will find these names bland, that they will think of people with these names as "like us". The hypothetical worlds of 20th century Anglophone philosophy were worlds populated almost entirely by Bob Smiths and Alice Joneses. Someone with a name like "Rasheed" might understandably find this somewhat alienating. Does he really belong in bed with Bob & Carol & Ted & Alice, "considering the possibilities"?

Also, if you do see these names as vanilla -- vanilla after vanilla gets a bit boring, don't you think? Even just on aesthetic grounds, why not mix it up?

Recently, philosophers have begun drawing their names from a broader ethnic range. But still, few of us regularly mix Chinese, Indian, and Arabic names into our examples.

Some care is warranted. If "Smith" commits a murder, that's one thing. If one "arbitrarily" picks "Jamal" as the name of the murderer, that's a bit different. One could try to go against the grain, making "Gertrude" the murderer and "Jamal" the aging florist, but that can seem forced and cartoonish, if done too often. My wife enjoys psychoanalyzing my name choices: Why is "Juliet" my racist and "Kaipeng" my Stoic?

One approach might be to find some list of the most popular names in the world and draw randomly from it. I kind of like that idea. It will generate a lot of "Mohammad", "Qian", and "Aadhya" -- possibly a refreshing change, if done properly.
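For the curious, here is a minimal sketch of what that sampling might look like in Python. The name list and popularity weights below are invented placeholders, not real frequency data; a serious version would draw on an actual global name-frequency table.

```python
import random

# Hypothetical top-names list with invented popularity weights --
# real data would come from an actual name-frequency source.
popular_names = {"Mohammad": 150, "Qian": 90, "Aadhya": 60,
                 "Maria": 55, "Jose": 50, "Wei": 45}

def draw_example_names(k, names=popular_names):
    """Draw k example-protagonist names, weighted by (stipulated) popularity."""
    return random.choices(list(names), weights=list(names.values()), k=k)

print(draw_example_names(3))  # e.g. ['Mohammad', 'Wei', 'Qian']
```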

But one probably needn't aim for total global egalitarianism in name choice. If a Swedish philosopher uses a representative mix of Swedish names, well, there's something fun about that. I wouldn't want to insist that she always use "Maria" and "Fatima" instead. And maybe for me, as a Californian, I could sample Californian names -- as long as I don't pretend that California is populated only by white, non-immigrant, native English speakers.

If you're lucky enough to teach at a large, diverse university like my own, a wonderful source of diverse names might be your own student rosters. When I randomly sort the names from my largest recent class, these 25 pop out near the top: Rainita, Acenee, Desiree, Rani, Marisa, Guadalupe, Vanseaka, Cameron, Joseph, Christian, Ibrahim, Christina, Jasmine, Marie, Jennifer, Stephen, Philip, Hsin En, Timothy, Elio, Ivan, Deyanira, Izamar, Danielle, and Dennis Yoon. What a wonderful set of names! California's future philosophers, I hope.

Hey, you go do it some other way if you want. I'm not insisting. Maybe in a few days I'll think this is a totally stupid idea and I won't even do it this way myself. But if you do stick with Bob Smith and Alice Jones, could you at least do it ironically?

[image source]


Tuesday, November 17, 2015

Percentage of Women at APA Meetings, 1955, 1975, 1995, 2015

Last spring, I posted a gender analysis of the program of the Pacific Division meeting of the American Philosophical Association, broken down by ethics vs. non-ethics and by role in the program. I've been coding some other APAs along similar lines. For a broader picture over time, I have now examined gender data for all three divisional meetings of the APA for 1955, 1975, 1995, and the 2014-2015 academic year [note 1].

Gender was coded by first name and/or by personal or professional knowledge as either "female", "male", or "other/indeterminable". [note 2] I coded the main program only, and I excluded sessions organized by special committees (and all other symposia listed at session end rather than session beginning). My idea in doing this was to capture the mainstream research sessions rather than sessions on the state of the profession, teaching, the status of different ethnic groups, etc.

As expected, the majority of philosophers on the APA main program are men, but the gender ratios are less skewed now than they were a few decades ago. Overall, the proportion of women on the APA main program has increased from about one sixth in 1975 to about one third in 2015.

Merging all three divisions, here is the gender breakdown by year:

1955: 6% women (7/121, excl. 5 indeterminable)
1975: 16% women (62/397, excl. 20)
1995: 25% women (220/896, excl. 38)
2014-2015: 32% women (481/1526, excl. 177 [note 2])

All three of the 20-year-interval increases are statistically significant, considered individually (two-tailed z tests, p < .001). Differences between divisions were not statistically significant.
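For readers who want to check the arithmetic, here is a minimal sketch of the two-tailed two-proportion z test behind these comparisons, applied to the 1975 vs. 1995 figures (the same test applies to the other intervals and to the between-division comparisons). This is my illustration, not the code used for the original analysis:

```python
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    """Two-tailed z test for a difference between two proportions."""
    p_pool = (x1 + x2) / (n1 + n2)              # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
    z = (x2/n2 - x1/n1) / se
    return z, 2 * norm.sf(abs(z))               # two-tailed p-value

z, p = two_prop_ztest(62, 397, 220, 896)        # 1975 vs. 1995 women
print(f"z = {z:.2f}, p = {p:.5f}")              # about z = 3.6, p < .001
```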

Recent estimates of the percentage of women in philosophy in the United States are typically in the low 20%'s, with 21% the most commonly cited number. Interestingly, at 32% women, the 2014-2015 program data are significantly higher than women's overall representation in the profession (481/1526 vs. 21%, p < .001). Possible explanations: younger philosophers more likely to be women and more likely to attend conferences; non-U.S. participants who are more gender balanced; the gender-indeterminate category ("Chris", foreign names) being disproportionately male; women having more interest in participating in APA sessions; and/or the program committees working to reach out to women.
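The comparison against the 21% baseline is a one-proportion test; a quick sketch of the calculation (an exact binomial test gives a similar verdict):

```python
from math import sqrt
from scipy.stats import norm

x, n, p0 = 481, 1526, 0.21        # women on the 2014-2015 program vs. 21% baseline
se = sqrt(p0 * (1 - p0) / n)      # standard error under the null hypothesis
z = (x/n - p0) / se
print(f"z = {z:.1f}, p = {2 * norm.sf(abs(z)):.2g}")  # about z = 10, p << .001
```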

I also divided sessions into "ethics", "non-ethics", and "excluded". "Ethics" was construed broadly to include social and political philosophy. Philosophy of action and philosophy of religion were excluded as borderline, unless they were on ethical topics in those sub-areas. My hypothesis was that within philosophy, a larger percentage of women specialize in ethics than in other areas. The results:

1955 ethics: 5/32 (16% women)
1955 non-ethics: 2/64 (3% women)

1975 ethics: 16/110 (15% women)
1975 non-ethics: 41/249 (17% women)

1995 ethics: 101/275 (37% women)
1995 non-ethics: 105/531 (20% women)

2014-2015 ethics: 206/500 (41% women)
2014-2015 non-ethics: 217/824 (26% women)

The numbers are too small in 1955 and 1975 to draw firm conclusions. However, in both 1995 and 2014-2015 the predicted effect is large and statistically significant (p < .001 for both). Since at least 1995, ethics sessions at APA meetings have been much closer to gender balanced than non-ethics sessions.

Here are the numbers in a graph with 95% confidence intervals:

I also examined role in the program, to see if women were more or less likely to serve in roles that are typically regarded as more or less prestigious. I divided roles into five types: (1.) Presidential or named lecture / author in author-meets-critics / symposium speaker with at least one commentator on just her paper. (2.) Symposium speaker not in category 1, or AMC critic. (3.) Symposium commentator or introductory remarks for named lecture. (4.) Presenter or commentator in colloquium. (5.) Chair (timekeeper/moderator) in any session.

The program role results are a bit difficult to interpret, with women most likely to appear as ordinary symposium speakers (role 2) and as session chairs (role 5), and perhaps least likely to appear in colloquium slots (role 4). The trend is evident both in the aggregate data and when only 2014-2015 is considered (for all other years, the individual-year analysis is underpowered). Here's the breakdown for the 2014-2015 data:

Cat 1 (most prestigious): 27% (27/99)
Cat 2 (ordinary symposium speaker): 37% (117/314)
Cat 3 (symposium commentator): 30% (29/96)
Cat 4 (colloq speaker/commentator): 26% (155/597)
Cat 5 (chair): 36% (153/420)

This is statistically significant variation (chi-square [DF 4] = 18.9, p = .001). Overall, I'd say that this tends to disconfirm the hypothesis that women are disproportionately likely to appear in lower-prestige program roles, but beyond that I hesitate to speculate.
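As a check, the reported statistic can be reproduced from the category counts above with a standard chi-square test of homogeneity on the 2x5 women/men-by-role table:

```python
from scipy.stats import chi2_contingency

women  = [27, 117, 29, 155, 153]          # Cats 1-5, from the breakdown above
totals = [99, 314, 96, 597, 420]
men = [t - w for w, t in zip(women, totals)]

chi2, p, dof, expected = chi2_contingency([women, men])
print(f"chi-square({dof}) = {chi2:.1f}, p = {p:.4f}")   # about 18.9, p ~ .001
```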

Carolyn Dicey Jennings and I are at work on other analyses of the percentages of women in ethics vs. other areas of philosophy. If other measures also suggest that ethics contains a higher percentage of women than do other areas of philosophy, then at least two conclusions appear to follow for those who wish to help steer philosophy closer to gender parity:

First, in an ethics context, a proportion of women that is representative of philosophy as a whole might still constitute an underrepresentation of women relative to the available pool.

Second, the situation outside of ethics might be even more unbalanced than one would guess from looking at philosophy as a whole.

However, the long-term upward trends both within and outside of ethics are encouraging.


Note 1: The Eastern Division did not meet in 2015, shifting from a December to a January schedule, so I use the December 2014 data.

Note 2: I coded as indeterminable: gender-ambiguous Anglophone names ("Pat", "Robin"), mere initials ("C."), and non-Anglophone names whose gender associations were unknown to me ("Asya", "Lijun"). I allowed personal knowledge to resolve ambiguities (e.g., "Pat Churchland" as female). Impressionistically, the higher rate of indeterminable in 2014-2015 (10% of participants, up from 4-5% in 1975 and 1995) was due to more participants with non-Anglophone names. If women are substantially more or less common among the indeterminable names than among the remainder, that might skew the results by a few percent either direction. Still, the overall trend remains clear.


Thanks to Mara Garza for help with coding some of the data. Thanks to Roger Giner-Sorolla for catching an error in the labeling of the Y axis, which has now been corrected.

Thursday, November 12, 2015

Why We Might Have Greater Obligations to Conscious Robots Than to Human Strangers

A new short piece by me, released today in Aeon Opinions. From the piece:

[Most philosophers and researchers on artificial intelligence agree that] if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings.

I want to challenge this consensus.... I think that, if we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings.

Here’s why: we will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state. If a robot needlessly suffers or fails to reach its developmental potential, it will be in substantial part because of our failure – a failure in our creation, design or nurturance of it. Our moral relation to robots will more closely resemble the relation that parents have to their children, or that gods have to the beings they create, than the relationship between human strangers.

Continued here.

Mara Garza and I also have a full-length journal article on this topic forthcoming in a special issue of Midwest Studies -- final manuscript version here.

[image source]

Tuesday, November 10, 2015

Two Very Bad Wizards Ask Me about Ethics Professors

... and Nazis, and Chinese philosophy, and the nature of jerkitude, and Kant's defense of murdering bastard children, and many other topics besides.

I'm talking of course about the latest episode of the Very Bad Wizards podcast, with the witty, knowledgeable, and frequently profane hosts David Pizarro and Tamler Sommers. Before the interview with me is a fun 20-minute segment on the ethics of murdering baby Hitler and on whether self-driving cars should be programmed to sacrifice their passengers if by doing so they can save a greater number of other people.

The podcast always opens with a quote from the Wizard of Oz: "I'm a very good man. I'm just a very bad Wizard." Dave and Tamler still haven't addressed my question (which I posed before we went on air) about the wizard's moral self-knowledge. Is he indeed a very good man? Part of the evidence against this is that he sends an ill-prepared girl on what he thinks is a hopeless suicide mission against the Wicked Witch of the West.

Wednesday, November 04, 2015

Do Neurons Literally Have Preferences?

Carrie Figdor has been arguing that they do.

Consider these sentences, drawn from influential works of neuroscience (quoted in Figdor forthcoming, p. 2):

  • A resonator neuron prefers inputs having frequencies that resonate with the frequency of its subthreshold oscillations (Izhikevich 2007).
  • In preferring a slit specific in width and orientation this cell [with a complex receptive field] resembled certain cells with simple fields (Hubel and Wiesel 1962, p. 115).
  • It is the response properties of the last class of units [of cells recorded via electrodes implanted in a rat’s dorsal hippocampus] which has led us to postulate that the rat’s hippocampus functions as a spatial map. ... These 8 units then appear to have preferred spatial orientations (O’Keefe and Dostrovsky 1971, p. 172).
These are completely standard, unremarkable claims of the type that neuroscientists have been making for decades. Figdor suggests that it's best to interpret these claims as literal truths. The verbs in these sentences work like many other verbs do -- "twist", "crawl", "touch" -- with literal usage across a wide range of domains, including organic and inorganic, part and whole.

    Figdor's view sounds bizarre, perhaps. People literally have preferences. And rats. Maybe frogs. Not trees (despite 22,000 Google hits for "trees prefer", such as "Ash trees prefer moist, well-draining soil for optimum growth"). Definitely not neurons, most people would say.

    One natural way to object to Figdor's view is to suggest that the language of neurons "preferring" is metaphorical rather than literal. I can see how that might be an attractive first thought. Another possibility worth considering is that maybe there are two senses of "prefer" at work -- a high-grade one for human beings, a thin one for neurons.

    Figdor responds to these objections, in part, with technical linguistic arguments that I am insufficiently schooled in linguistics to evaluate. Does conjoining human and neuronal cases of "prefers" pass the zeugma test?

    However, from seeing others' reactions to Figdor -- she gave a talk here at UCR a couple weeks ago -- I'd say it's not a fine sense of technical linguistics that drives most people's rejection of Figdor's claim. (In conversation, she says she agrees with me about this; and in newer work in progress she is de-emphasizing the technical linguistic aspects to focus on the bigger picture, including how terms evolve over time in deference to scientific usage.) What gives folks the heebie-jeebies is the thought that "preferring" is a psychological notion, and so if Figdor is saying that neurons literally have preferences, she appears to be saying that neurons literally have minds or psychological states. And we certainly don't want to say that! (Do we?)

    Figdor is not some far-out panpsychist who believes that neurons tingle with experiences of delight when they receive the stimuli they prefer. But she is far out in another way -- a more sensible and appealing way, perhaps. Once we see the actual source of her radicalism, we can start to appreciate the importance and appeal of her work.

    It's natural -- common sense -- for us to approach the world by dividing it into things with minds (you, me, other people, dogs, birds...) and things without minds (stones, trees, pencils, fingernails). Reflecting on intermediate cases, such as various types of worms, one might sense trouble for a sharp distinction here, but vagueness along a single spectrum of mindedness isn't too threatening to common sense. The essential difference between the minded and the un-minded remains, despite a gray zone.

    Figdor's picture challenges all that. If what she says about "prefer" also goes for some other important psychological terms (as she thinks it does), then mentality spreads wide into the world. Some psychological terms -- "prefer", "decide", and "habituate" are her examples, to which I might add "seek", "learn", "reject", and many others -- appear to spread wide; while other terms, such as "meditate", "confess", and "appreciate", might apply only to humans (or maybe a few other species). Each psychological term has a range of application, and the terms that are more liberally applicable will attach to all sorts of systems that we might not otherwise tend to regard as privileged with any sort of mentality.

    Figdor has taken, I think, a crucial step toward jettisoning the remnants of the traditional dualist view of us as imbued with special immaterial souls -- toward instead seeing ourselves as only complex material patterns whose kin are other complex patterns, whether those patterns appear in other mammals, or in coral, or inside our organs, or in social groups or ecosystems or swirling eddies. Some complexities we share and others we do not. That is the radical lesson of materialism, which we do not fully grasp if we insist on saying "here are the minds and here are the non-minds", demanding a separate set of verbs for each, with truly "mental" processes only occurring in certain privileged spaces.

    With that thought in mind, let's go back to "prefer". Do neurons literally prefer? I don't know whether the linguistic evidence will ultimately support Figdor on this particular case, but I think we can approach it evenhandedly, letting fall wherever they may the technical tests of metaphor and polysemy and other considerations from linguistics and philosophy of language -- figuring that of course some of our mental state verbs literally refer to patterns of behavior that spread widely, and at different spatiotemporal grain, across the complex, multi-layered, dynamically evolving structures of our world.

    [image source]


    Carrie writes:

    Thanks to Eric for posting on my work-in-progress and the opportunity to clarify a few things. First, the technical linguistic stuff is actually my attempt to understand why it could possibly strike anyone as "natural" or "reasonable" to think these uses are metaphorical. Who "naturally" thought Hubel and Wiesel intended their descriptions of their data to be metaphorical? To the contrary, the cry of "Metaphor!" reflects not an astute semantic analysis of their uses but an automatic response to my claim that they should be interpreted literally: "They just can’t possibly be literal." The idea that they are metaphorical is actually one of the weakest semantic alternatives to a literal view.

    That said, Eric is correct that I am not a radical panpsychist. Rather, I’m interested in a plausible, non-ad hoc explanation of the ever-expanding uses of psychological language throughout biology at all levels of complexity. Basically, I think psychological concepts are transitioning to scientifically determined standards for proper use, leaving behind the ideal-rational-human, anthropocentric standards we now have. There’s a lot more to that story, and I hope to make it public very soon.

    Monday, November 02, 2015

    The Journal of Unlikely Academia

    A month ago, Unlikely Story published my story "The Dauphin's Metaphysics" in their themed issue The Journal of Unlikely Academia.

    Some updates:

    Lois Tilton -- perhaps the best-known speculative fiction reviewer in the English language -- gave the story one of her "recommended" ratings, and also what is probably one of her longest write-ups in recent years. She concludes:

    Speculative fiction and philosophy have more in common than many people might suppose, largely because contemporary philosophy isn't widely known. Issues of mind, identity and memory [the notion of the brain in the vat, for example] have long been shared by both disciplines [if we can consider SF to be disciplined]. I'm quite happy to have found this story here.

    I've now had a chance to read the other stories in the themed issue. They are also well written and philosophically interesting.

    "Follow Me Down" by Nicolette Barischoff. The story of a midwife of monstrous babies and the incubus who is one of her rebellious favorites. Monsters deserve affection no less than the rest of us, don't they? (interview with Barischoff)

    "Minotaur: An Analysis of the Species" by Sean Robinson. Based on interviews with actual minotaurs! E.g. the Stack Beast (Respondent 7): "Look. It’s finals week. Is it my fault that some thesis-fried post-grad takes a wrong turn and finds themselves somewhere that shouldn’t exist? They think they’re looking for reference materials for botany, and the stacks start twisting around them." (interview with Robinson; Appendix C: questionnaires)

    "The Librarian's Dilemma" by E. Saxey. There are radical librarians, secretly fighting the system, setting free even books that... well, no spoilers here! (interview with Saxey; other reflections by Saxey)

    "Soteriology and Stephen Greenwood" by Julia August. A mysterious woman approaches an Oxford medievalist with a fragment of a lost Latin prophecy -- academic listservs, snarky politics, suspicions of museum theft, and maybe something darker.... (Stop the Apocalypse; Who's Saving the World?; tumblrweed across the end of the universe)

    "And Other Definitions of Family" by Abra Staffin-Wiebe. A prostitute servicing aliens takes xeno-anthropology participant observation to new levels of risk and intimacy. (reflections from Staffin-Wiebe)

    "Candidate 45, Pensri Suesat" by Pear Nuallak. A transgender art student in alternative Thailand struggling to fit in with, or maybe escape, the art-school system.

    "The Shapes of Us, Translucent to Your Eye" by Rose Lemberg. How corrupt is the academic system at Middlestate U.? So corrupt that Warda's students are becoming translucent. (I'm unsure how common this effect is, since translucent students are systematically undercounted in the administrative rolls.)

    My Unlikely Interview came out today. August, Staffin-Wiebe, Nuallak, and Lemberg will presumably have interviews rolled out in coming weeks.

    From my interview:

    Q.: The Dauphin’s Metaphysics explores a classic and very interesting question -- if you replicate a person’s experiences exactly, can you replicate the person? What makes a person who they are, nature or nurture? It’s a story about characters reinventing themselves in multiple ways. What drew you to this particular question, and to taking the approach to it that you did in this story?

    A.: I’d been thinking about “singularity upload” stories, like Greg Egan’s Diaspora, where characters destroy their biological bodies to have their mental patterns instantiated in a computational device. These stories raise fascinating questions about personal identity, but they have an air of unreality about them because they aren’t currently technologically possible, and who knows if they ever will be. (One of the best known skeptics about computer consciousness is John Searle, who was one of my PhD supervisors at Berkeley.)

    So I wanted to write an upload story that didn’t require magic or future technology. My father was (among many other things) a licensed hypnotist, and there’s a large psychological literature on how easy it is to implant false childhood memories into people even without hypnosis, so that seemed a natural direction to develop the idea.

    The center of the story is the Dauphin’s upload – but I thought it would be interesting to contrast the case of the Dauphin’s putatively being one person across two bodies with another case arguably interpretable as two different identities in a single body. Hence the story of Fu Hao’s radical break from her childhood self. Chemistry Professor Zeng, though not as fully explored, presents a more ordinary case of slow character change over time.

    [continued here]

    Thursday, October 29, 2015

    Wow, This Amazing Puzzle Will Reveal How Stupid You Are! (Maybe)

    You know the Wason selection task. You know all about Linda the bank teller and the conjunction fallacy. You're smart. You'd never fall for those things now! You know it's not more likely that Linda is a bank teller who is active in the feminist movement than that Linda is a bank teller. You know to flip the Wason card that would break the rule rather than the one that would confirm it. Yes, of course!

    Here's one I learned in junior high school, which I've never seen studied. I don't know the original source. (If you do, let me know!) Maybe it will be fresh to you. Over the years, when I've presented it orally, I've found that even people with PhDs in philosophy often struggle, though really it's very simple.

    A man is looking at a picture. He says,
    "Brothers and sons, I have none,
    but this person's father is my father's son."
    Question: Who is in the picture?

    If you think you know the answer, write it down. I don't want any squirreling around about what you had really been thinking!

    After you've written down your guess, click through to this post on my Underblog for the answer and discussion.

    [image adapted from here]

    Thursday, October 22, 2015

    The Ends of Philosophy

    a guest post by Regina Rini

    I am sitting with a philosopher in the garden; he says again and again “I know that’s a tree”, pointing to a tree that is near us. Someone else arrives and hears this, and I tell him: “This fellow isn’t insane. We are only doing philosophy.” -Ludwig Wittgenstein, On Certainty

    The word ‘end’ is usefully ambiguous in the following question: ‘What is the end of philosophy?’ This question could be asking about the goal of philosophy. What is philosophy trying to do? Or it might be asking about where philosophy ends up. What is philosophy’s final resting place? In this post I am asking – and answering – both questions at the same time.

    Most philosophers will tell you that truth is their goal. They want to know the truth about Knowledge or Existence or Justice. I’m sure this is how they sincerely experience it – but I conjecture that ‘truth’ is only an instrumental goal. What these philosophers really want, I suspect, is certainty. They want to hold aspects of the world finally fixed in their minds, to make it the case that they cannot be wrong, at least about certain things. In service of this aim, they will jettison areas of inquiry about which certainty seems impossible. Hence, their category of the philosophical excludes the empirical, the accidental, and the historically contingent. What is left are the necessary truths – those that can be known to need to be true.

    My conjecture fits a dominant thread in western philosophy. What was Descartes doing, after all, other than paring his thoughts back to that which could not be doubted, and then building forward only on foundations of certainty? What was positivism, but an attempt to secure certainty for philosophy by designating as ‘nonsense’ that which could not be verified? And what is the contemporary project of philosophical analysis – with its insistent investigation of proxy concepts amenable to enumerated necessary and sufficient conditions – other than a flight from uncertain actualities?

    Absurdity lurks not far below certainty. We conjure thought experiments in which we have stipulated certainty about the laws of nature or human motivation, and we say that this is the real test of a philosophical concept, even as we struggle to apply that same finely sculpted concept to the unstipulated world. We carve nature at its joints, then display the bleached bones in positions they never naturally took. A protestor comes to our class from the streets of Ferguson, the smell of tear gas on her clothes, seeking guidance at which we apologetically demur; this is a seminar on ideal theory, and she is asking a non-ideal question. "We are not insane," we say to the intruder in the garden. "We are only doing philosophy."

    This brings me to the other sense of philosophy’s ‘end’: where does philosophy end up? Where is it located in social space? At the periphery, I think, and trending further so. Contemporary American society has little interest in contemporary American philosophy. When earnest public broadcasters put together a program on the mysteries of the universe, they turn first to physicists. If they want to chat about human nature, they call neuroscientists. Plato at the Googleplex, a very successful recent book, was noted for the thesis that philosophy still matters at all. No one makes news writing that about physics.

    I think that philosophy’s goal-end of certainty helps to explain its outcome-end of social irrelevance. Many people do want certainty, but philosophy is not where they will go to find it. Religion, of course, is an ancient and numerically dominant certainty-provider. But a sense of certainty can also be found in political ideology. Or, increasingly, in science. Philosophy is trying to compete in the certainty marketplace, and it is not winning.

    Philosophy has a crucial weakness when it contends for certainty-seekers. Unlike religion or political ideology, it abjures the manifest certainty of a supreme authority. And unlike science, it does not trend toward disciplinary consensus. A central fact about philosophy is that philosophers have been debating the same questions for millennia, with no end in sight. Philosophy is essentially discursive, even disagreeable, in a way that makes its aim of certainty a collectively self-defeating one. Any particular philosopher may become certain about her own beliefs, but from the outside philosophy will always appear as a squabble among people asserting mutually contradictory claims with equal degrees of extreme confidence.

    This shows the problem with the official justification for philosophical analysis. We say that we need to step back from messy reality in order to sharpen our concepts. We’ll just be away awhile, whetting our logical knives on some stipulated thought experiments. We’ll come back to the world, we insist, once we’ve polished our sufficient conditions. But we never come back. We argue endlessly about what we would need to make our truths necessary, and then we die and are replaced by the next generation’s assorted –ism-ists. We retire from the disorderly public square, into our shaded garden, its trees all arranged in logical space and known with certainty… and we never return.

    Of course some philosophers do venture out from the garden. But for every one who does, there are a half dozen others who whisper unkindly about the impurity of the thing. Philosophy done in public rarely displays the rigor that is a precondition of necessity. There are limits to the number of fussy objections one can anticipate without hogging the speaker’s platform. And so public philosophy will never produce the certainty that many philosophers seek.

    What if we took philosophy out of the certainty game? What would it mean for philosophers to tolerate uncertainty, ambiguity, irresolution? It might mean trading the necessary for the contingent. Conceding that politics are never ideal. Acknowledging that knowers are embodied and temporal beings, located in history. None of this is absolutely alien to philosophy, but it is far from the apparent aim of many practitioners. Yet if we care about being anywhere other than the social periphery, perhaps we will have to adjust our ends.

    This is my final guest post at The Splintered Mind. Thanks so much to Eric for the wonderful opportunity to speak from this platform. And thanks to everyone who has read and commented on my posts. This has been incredibly enjoyable – of that, I am certain.

    image credit: 'Tree in Fog' by Matthew Paulson


    Thanks so much, Gina, for your wonderful series of posts over the last several weeks!

    For interested readers, here are the other five:

  • Ethics, Metaethics, and the Future of Morality (Sep. 11)
  • Philosophical Conversations (Sep. 17)
  • Microaggression and the Culture of Solidarity (Sep. 28; adapted for the L.A. Times Oct 12)
  • The Laughter of Ethicists (Oct. 6)
  • Consciousness Science and the Privileged Sample Problem (Oct. 14)

    Monday, October 19, 2015

    Kammerer's New Anti-Nesting Principle

    Anti-nesting principles, in consciousness studies, are principles according to which one stream of consciousness cannot "nest" inside another. According to such principles, a conscious being cannot have conscious subparts -- at least under certain conditions -- even if it meets all other plausible structural criteria for being a conscious system. Probably the best-known anti-nesting principles are due to Hilary Putnam (1965, p. 434) and Giulio Tononi (2012, p. 297). Putnam's version is presented bare, and almost unmotivated, and has been criticized by Ned Block (1981, p. 74-76). Tononi's version is more clearly motivated within his "Integrated Information Theory" of consciousness, but still (I think) has significant shortcomings.

    In this forthcoming paper in Philosophia, Francois Kammerer takes another swing at an anti-nesting principle.

    Though relatively neglected, nesting issues are immensely important to consciousness studies. Intuitively or pre-theoretically, it seems very plausible that neither subparts of people nor groups of people are literally phenomenally conscious. (Unless maybe the brain as a whole is the relevant subpart.) If we want to retain this intuitive idea, then either (a.) there must be some structural feature that individuals have, which groups and subparts of individuals do not, which is plausibly necessary for consciousness, or (b.) consciousness must not nest for some other reason even in cases where human groups or subparts would have the structural features otherwise necessary for consciousness.

    In "If Materialism Is True, the United States Is Probably Conscious", I argue that human groups do have all the structural features that materialists normally regard as characteristic of conscious systems. A materialist who accepts that claim but wishes nonetheless to deny that groups of people are literally phenomenally conscious might then be attracted to an anti-nesting principle.

    Kammerer's principle is a bit complex. Here it is in his own words:

    "Given a whole W that instantiates the functional property P, such that W’s instantiation of P is normally sufficient for W to instantiate the conscious mental state S, W does not instantiate S if W has at least one subpart that plays a role in its functional organization which fulfills at the same time the two following conditions:

  • (A) The performing of this role by the subpart requires (given the nature of this functional role and our theory of consciousness) that this subpart has conscious mental states (beliefs, emotions, hopes, experiences, desires, etc.) that represent W (what it is, what it does, what it should do). That is to say, this subpart has a functional property Q, Q being a sufficient condition for the subpart having the conscious mental state R (where R is a mental state representing W).
  • (B) If such a functional role (i.e., a functional role of such a kind that it requires that the subpart performing it has conscious mental states representing W) was not performed by at least one of the subparts of W, W would no longer have the property P (or any other functional property sufficient for the having of S). In other words: if no subpart of W had R, then W would no longer have S."
    Short, somewhat simplified version: If the reason a larger entity acts like it’s conscious is that it contains smaller entities within it who have conscious representations of that larger entity, then that larger entity is not in fact conscious. (I hope that's fair, and not too simple to capture Kammerer's main idea.)
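    For readers who like their principles laid out schematically, here is one rough formalization -- my own reconstruction, not Kammerer's notation, and it compresses conditions (A) and (B) considerably:

```latex
% A schematic gloss of Kammerer's principle (my reconstruction, not his notation).
% P(W): whole W has functional property P, normally sufficient for conscious state S(W)
% Q(x): subpart x plays the relevant functional role within W       (condition A)
% R(x): x has conscious mental states representing W                (condition A)
% \cf : the counterfactual conditional                              (condition B)
\newcommand{\cf}{\mathrel{\Box\!\!\rightarrow}}
\[
\Big[\, P(W)
  \;\wedge\; \exists x \sqsubset W\, Q(x)
  \;\wedge\; \forall x \big( Q(x) \rightarrow R(x) \big)
  \;\wedge\; \big( \neg\exists x \sqsubset W\, R(x) \;\cf\; \neg P(W) \big)
\,\Big] \;\longrightarrow\; \neg S(W)
\]
```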

    Though Kammerer's anti-nesting principle avoids some of the (apparent) problems with Putnam's and Tononi's principles, and is perhaps the best-developed anti-nesting principle to date, I'm not convinced that we should embrace it.

    I'm working on a formal reply (which I'll probably post a link to later), but my main thoughts are three:

    First, Kammerer's principle doesn't appear to fulfill the intended(?) role of excluding group-level consciousness among actually existing humans, since it excludes group consciousness in only a limited range of cases.

    Second, Kammerer's principle appears to make the existence of consciousness at the group level depend oddly on factors on the individual-person level that might have no influence on group-level functioning (such as whether an individual's thinking of herself as part of the group is emotionally motivating to her, which might vary with her mood even while her participation in the group remains the same, creating "dancing qualia" cases).

    Third, it appears to be unmotivated by a general theory that would explain why satisfying or failing to satisfy (A) or (B) would be crucial to the absence or presence of group-level consciousness.

    None of these three points would be news to Kammerer, so to make them stick would require more development than I'm going to give them today. But before doing that, I thought I'd solicit reactions from others -- either to the general issue of anti-nesting principles or to Kammerer's specific principle.


    Related posts:

    Martian Rabbit Superorganisms, Yeah! (May 4, 2012)

    Tononi's Exclusion Postulate Would Make Consciousness (Nearly) Irrelevant (Jul 16, 2014)

    The Copernican Sweets of Not Looking Too Closely Inside an Alien's Head (Mar 14, 2014)

    Why [X] Should Think the United States Is Conscious (X = Dennett, Dretske, Humphrey) (Winter 2012).

    [image source]

    Wednesday, October 14, 2015

    Consciousness Science and the Privileged Sample Problem

    a guest post by Regina Rini

    There is, undeniably, such a thing as a science of consciousness. People use brain scanners and clever experimental techniques to figure out the neural processes correlated with conscious experience. I don’t wish to challenge the value of this research. However, I think there is something odd about consciousness as a scientific subject, something I’ll call the privileged sample problem. If I’m right, then consciousness is importantly unlike anything else science claims to study.

    To see the problem, imagine this: you are one of the world’s pre-eminent neuroscientists. You know as much about the cutting-edge science of consciousness as anyone else. Unfortunately, you are in a car crash and suffer serious head injury. For several weeks you are in a coma, but gradually you emerge into consciousness again. Unfortunately, you quickly realize that you have no control over any part of your body. You are an extreme victim of locked-in syndrome: though you are conscious and aware of your surroundings, you cannot move or speak or indicate your awareness to the outside world. (Unlike Jean-Dominique Bauby, the author of The Diving Bell and the Butterfly, you can’t even control your eyelids, so you can’t communicate by blinking.) As far as anyone else can tell, you are just as you were in coma: lying there in bed, eyes open and unfixed, unable to respond to anyone.

    As it happens, your neuroscientist colleagues have been keeping vigil at your bedside. They are always arguing with each other, and of course they want to know whether you are conscious. Eventually they arrange to have your brain scanned, using the most sophisticated existing techniques for consciousness-detection. But the tests come back negative! And here’s the important part: when they gather around your bedside to discuss the data, you listen. You understand the science just as well as they do, and you realize that, given the data they have and the best existing scientific theory of consciousness, you agree that they are right to conclude that you are not conscious. If you were out there with them and had the same data, you’d think so too. But because you are in here, in your own mind, you know they are wrong. You are conscious. And so now you know that the best existing scientific theory of consciousness is wrong.

    In this story, you are in a position to refute the best existing science based upon a single sample of the phenomenon being studied. This is not a normal feature of science. Science is inductive. Normally, if we discover a single sample which seems to defy our best scientific theory, we first check to see if we have made a mistake in measuring the sample. If we rule that out, we start looking for other samples that replicate the finding. If we can’t find any others – that is, if all other samples remain consistent with the best existing theory – then we will very likely conclude that the single inconsistent sample is a fluke. Our observation about the sample has gone wrong in some way, even if we can’t figure out exactly how it has gone wrong. What we will not do is overturn the best existing theory simply because it fails to cohere with a single sample.

    But things are different when it comes to consciousness. Your own conscious experience is, for you, a privileged sample. It is reasonable for you to conclude that the best existing theory is false if the best existing theory predicts that you are not conscious. It doesn’t matter whether the best existing theory continues to correctly predict all other cases you know about - your own case is special. This is nothing other than Descartes’ famous point: your own consciousness is the last thing you can doubt. You are right to doubt anything else, including the best existing scientific theory, before you doubt that you are conscious.

    Of course, your case is not special for anyone else. This is the other puzzling feature of a privileged sample. You and only you have a certain type of access to this sample. Your grounds for employing it to refute the best existing theory are not publicly confirmable. Public confirmation is a cardinal feature of science, yet the science of consciousness is (in principle) constrained by observations that are not publicly confirmable. There exist possible observations that reasonably refute the best existing scientific theory on the basis of a single sample that is not available to public confirmation. That is the privileged sample problem.

    What does the privileged sample problem imply about the nature of consciousness? Well, it doesn’t obviously imply anything radical about the ontology of the conscious mind. We can still be fully-committed physicalists even if we accept that there is something odd about the science of consciousness. But I think it does imply that we should be suspicious of any attempt to treat consciousness as a target of physicalist reduction. Really all I am doing here is find another way to express a point made by Thomas Nagel a long time ago: we have subjective and objective ways of thinking about our own minds, and one cannot be reduced to the other. We should not try to entirely replace conscious-subjectivity talk with physicalist-science talk, because the privileged sample problem shows that the science of consciousness is not a science like any other.

    I got the idea for the privileged sample problem while formulating a question at the ‘Measuring Borderline States of Consciousness’ conference at NYU. Thanks in particular to Adrian Owen and Tim Bayne, whose fascinating talks on detecting consciousness provoked my question.

    image credit: 'Sub Conscious' by Gregg Jaden

    Tuesday, October 13, 2015

    "1% Skepticism" in Nous; "Experimental Evidence for the Existence of an External World" in JAPA

    About a week ago, two of my forthcoming essays appeared.

    "1% Skepticism":

    A 1% skeptic is someone who has about a 99% credence in non-skeptical realism and about a 1% credence that some radically skeptical scenario obtains. The first half of this essay defends the epistemic rationality of 1% skepticism, endorsing modest versions of dream skepticism, simulation skepticism, cosmological skepticism, and wildcard skepticism. The second half of the essay explores the practical behavioral consequences of 1% skepticism.

    Official version in Nous.
    Free manuscript version from my academic homepage.

    "Experimental Evidence for the Existence of an External World" (with Alan T. Moore):

    In this essay I attempt to refute radical solipsism by means of a series of empirical experiments. In the first experiment, I exhibit unreliable judgment about the primeness or divisibility of four-digit numbers, in contrast to a seeming Excel program. In the second experiment, I exhibit an imperfect memory for arbitrary-seeming three-digit number and letter combinations, in contrast to my seeming collaborator with seemingly hidden notes. In the third experiment, I seem to suffer repeated defeats at chess. In all three experiments, the most straightforward interpretation of the experiential evidence is that something exists in the universe that is superior in the relevant respects – theoretical reasoning (about primes), memorial retention (for digits and letters), or practical reasoning (at chess) – to my own solipsistically-conceived self.

    Official version in JAPA.
    Free manuscript version from my academic homepage.

    Both essays began life as posts on The Splintered Mind, the Experimental Philosophy Blog and NewAPPS. Many thanks to those who read and commented!

    By the way, the little picture of me in the upper right corner of this blog is cropped from a photo from the "External World" paper. Why do I look so contemplative? Because Alan is proving to me that the external world exists by beating me in speed chess!

    (photo credit: Gerardo Sanchez)

    Monday, October 12, 2015

    The Los Angeles Times

    ... has been publishing philosophers' op-eds recently -- a couple by me (here and here), and this past week Harry Frankfurt on why inequality isn't immoral and an adaptation of Regina Rini's Splintered Mind guest post on microaggression.

    The new op-ed editor Juliet Lapidos is behind this trend. Encourage Juliet by sharing the LA Times philosophy links widely and by sending the LA Times your best op-ed queries. It would be terrific if this trend could stick and we could have another major U.S. newspaper that regularly publishes philosophers!

    Thursday, October 08, 2015

    If Memories and Personality Make You "You", Here's How You Could Transfer into Another Body (Maybe)

    ... with no high technology required!

    Step 1: Write extensive memoirs, and have servants constantly follow you around, recording every detail of your life, in painting, story, and song.

    Step 2: Drink some hemlock to kill your present body.

    Step 3A: Have your servants find a newborn baby. Your servants will be experts in hypnosis, in the induction of "false" memories, and in psychological training. They will induce in the child, as she grows, memories from your past -- not false memories, but veridical, accurate memories! Memories at least as accurate and complete as other people's normal memories of three or ten years ago. With proper suggestion, the growing child will experience the memories from the first-person perspective and think of them as her own.

    Step 3B (simultaneous with 3A): Surround the child with institutions designed to convince her -- I mean you -- that she/you really is just the continuation of you in a new body. She will look on all of her induced memories as memories of the old days from her previous body. She will share your name, "remember" your friends and attitudes as her own, be personally proud of your past accomplishments and personally embarrassed by your past failures, identify with your old goals, projects, debts, and obligations. With some luck and good psychological training, the child will grow into an adult who shares the values and attitudes of your old self -- perhaps about as much as normal people retain their values and attitudes over the course of a decade or two.

    If she and all of society then say that she really is the re-embodiment of you -- that is, a continuation of the same person over time (only in a new body), as much as your 40-year-old self is normally thought to be a continuation of your 20-year-old self -- would she and all her society be factually, philosophically, metaphysically mistaken? If she isn't really a metaphysically legitimate continuer of you, why not? What would be missing, exactly?

    You might say they aren't real memories, because they're the memories of a different person -- but to say that is just to beg the question, assuming the falsity of the very view in dispute.

    You might say that personal identity requires strict continuity of body, which she doesn't have with you. But that's to move away from psychological criteria for personal identity, which many people find attractive in hypothetical upload cases, brain transplant cases, and teleporter cases.

    This is the topic of my latest science fiction story, "The Dauphin's Metaphysics", now out in the latest issue of Unlikely Story.

    Related Posts:
  • A Somewhat Impractical Plan for Immortality (Apr. 22, 2013)
  • The Mnemonists (Apr. 22, 2013)

    Tuesday, October 06, 2015

    The Laughter of Ethicists

    A guest post by Regina Rini

    You are loitering by the railyard when you see an out-of-control trolley hurtling toward five innocent orphans who’ve been lashed to the track by a mustachioed villain. There is a switch nearby, which would activate an enormous fan and disrupt the air above you. There is a very fat man hang-gliding over the tracks. Since (like most ethicists) you are an expert in the aerodynamics of obesity, you know that the fan would force him to swoop down right into the path of the trolley; the collision would save the five orphans, though the very fat hang-glider would die.

    There is another option. You happen to be carrying one of those t-shirt cannons they use to fire souvenir t-shirts into the stands at sporting events. And the tracks are right next to a nursery for babies born without developed brains, who will surely die within hours in any case. Since (like most ethicists) you are an expert in infant ballistics, you know that you could use the t-shirt cannon to fire ten anencephalic infants at the trolley, and that would be just enough to derail it, saving the orphans, though killing the projectile babies.

    What should you do? Do nothing and let the five orphans die? Flip the switch and blow the very fat hang-glider into the trolley? Or use the t-shirt cannon to fire the ten anencephalic infants at the trolley?

    Actually, don’t answer that. My story is only a parody, though it is not far off from many stories you will find in professional philosophy journals. Moral philosophers have a penchant for inventing goofy thought experiments in which numerous people are oddly imperiled. These stories have a purpose: they are meant to isolate and test some purported moral principle. The absurd details are often unnecessary, though they keep the writing from becoming dull. But we might ask: should we really be amusing ourselves with ethics?

    One possible worry is that amusingly absurd thought experiments can make our moral intuitions less reliable. Some have thought that the frequent use of unrealistic scenarios might make for bad philosophy. Others might point out that being put in a humorous mood changes how people react to moral dilemmas. But I will leave that sort of objection to the side. My question is this: is there something morally inappropriate about constructing amusing moral dilemmas?

    It’s important to keep in mind that these scenarios are often intended to provide simplified models of very troubling moral issues: killing in war, abortion, euthanasia. Even when justified, killing is killing, and it would obviously never be appropriate to laugh at a person wracked by guilt over a justifiable homicide. If our thought experiments are meant to inform reflective moral deliberation, or to model the features of real-world moral dilemmas, then should we really be so irreverent toward our ultimate subject matter? The worry is that our practice of constructing funny thought experiments has caused us to become desensitized to the real human suffering we claim to be studying.

    One response to this worry is that we are simply engaged in gallows humor. Emergency room physicians talk about this phenomenon often. When you are confronted with pain and death every day, and when inevitably people will die in your care, some levity may be necessary to keep yourself functioning. Physicians make jokes about their patients, sometimes even about their patients’ suffering, and perhaps this is just a psychological necessity (though extreme instances give pause to even the most hardened medics [WARNING: this link may be triggering to victims of sexual assault]).

    But this can’t be the right justification for moral philosophers. We don’t actually watch people suffer and die right in front of us, and certainly not under our care. Our professional experience of dying is a pale imitation of what physicians experience.

    However, there may be something to the parallel with medical gallows humor. What moral philosophers are intimately familiar with is the absurdity of human life and choice. It is absurd, the existentialists will remind us, that we invest so much meaning in the lives and the deaths of tiny beings dangling from a vast chain of eternal galactic causation. Yet we do, of course, see our lives as meaningful – and so it is absurd that our meaningful lives can be ended by things that do not matter. People are killed by trolleys. People die hang-gliding. Human pain and mortality are not produced exclusively by wrenching sacrificial choice. Sometimes a three-cent bolt comes loose, sometimes the insulation peels off the wires, sometimes a pebble is in just the wrong place on the bike path – and then a meaningful human life ends with no meaning at all.

    The moral philosopher is responsible for being reflectively aware of the ultimate limits of human life. We do not face concrete instances of death and suffering as physicians must, but we do confront the abstract reality of human limitation, with its inevitable implication of our own personal vulnerability. Perhaps this is a professional hazard of moral philosophy; we are not in a business that allows us to simply look away from unpleasant ultimate realities. Perhaps all philosophers must find some way to sublimate their necessary awareness of life’s fragility. Some bury it under anodyne logical formalism. Others lean into the absurd, mocking death’s dominion over their thought experiment characters – and so, over themselves. Perhaps the laughter of ethicists is not irreverence, but the unyielding desire to find human joy even in the contemplation of human misery.

    Thanks to Tyler Doggett and William Ruddick for helping me think through how to express this idea.

    image credit: 'Tracks' by Clint Losee

    Thursday, October 01, 2015

    Against the "Still Here" Reply to the Boltzmann Brains Problem

    I find the Boltzmann Brain skeptical scenario interesting. I've discussed it in past posts, as well as in this paper, which I'll be presenting in Chapel Hill on Saturday.

    A Boltzmann Brain, or "freak observer", is a hypothetical self-aware entity that arises from a low-likelihood fluctuation in a disorganized system. Suddenly, from a chaos of gases, say, 10^27 atoms just happen to converge in exactly the right way to form a human brain thinking to itself, "I wonder if I'm a Boltzmann Brain". Extremely unlikely. But, on many physical theories, not entirely impossible. Given infinite time, perhaps inevitable! Some cosmological theories seem to imply that Boltzmann Brains vastly outnumber ordinary observers.

    This invites the question: might I be a Boltzmann Brain?

    The idea started getting attention in the physics community in the late 2000s. One early response, which seems to me superficially appealing but not to withstand scrutiny, is what I'll call the Still Here response. Here's how J. Richard Gott III put it in 2008:

    How do I know that I am an ordinary observer, rather than just a BB [Boltzmann Brain] with the same experiences up to now? Here is how: I will wait 10 seconds and see if I am still here. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ... Yes I am still here. If I were a random BB with all the perceptions I had had up to the point where I said "I will wait 10 seconds and see if I am still here," which the Copernican Principle would require -- as I should not be special among those BB's -- then I would not be answering that next question or lasting those 10 extra seconds.

    There's also a version of the Still Here response in Max Tegmark's influential 2014 book:

    Before you get too worried about the ontological status of your body, here's a simple test you can do to determine whether you're a Boltzmann brain. Pause. Introspect. Examine your memories. In the Boltzmann-brain scenario, it's indeed more likely that any particular memories that you have are false rather than real. However, for every set of false memories that could pass as having been real, very similar sets of memories with a few random crazy bits tossed in (say, you remembering Beethoven's Fifth Symphony sounding like pure static) are vastly more likely, because there are vastly more disembodied brains with such memories. This is because there are vastly more ways of getting things almost right than of getting them exactly right. Which means that if you really are a Boltzmann brain who at first thinks you're not, then when you start jogging your memory, you should discover more and more utter absurdities. And after that you'll feel your reality dissolving, as your constituent particles drift back into the cold and almost empty space from which they came.

    In other words, if you're still reading this, you're not a Boltzmann brain (pp. 307-308).

    I see two problems with the Still Here response.

    First, we can reset the clock. While after ten seconds I could ask the question "am I a Boltzmann Brain who has already lasted ten seconds?", that question is not the sharpest form of the skeptical worry. A sharper question would be this, "Am I a Boltzmann Brain who came into existence just now with a false memory of having counted out ten seconds?" In other words, there seems to be nothing that prevents the Boltzmann Brain skeptic from restarting the clock at will. Similarly, a Boltzmann Brain might come into existence thinking that it had just finished introspecting its memories Tegmark-style, having found them coherent. That's the possibility that the Boltzmann Brain skeptic will be worried about, after having completed (or seeming to have completed) Tegmark's test. The Still Here response begs the question, or argues in a circle, by assuming that we can have veridical memories of implementing such tests over the course of tens of seconds; but it is exactly the veridicality of such memories, even over short durations, that the Boltzmann Brain hypothesis calls into doubt.

    Second, this response ignores the base rate of Boltzmann Brains. It's widely assumed that if there are Boltzmann Brains, they might be vastly more numerous than normally embodied observers. For example, a universe might produce a finite number of normal observers and then settle into an infinitely enduring high-entropy state that gives rise, at extremely long intervals, to an infinite number of Boltzmann Brains. Since infinitude is hard to deal with, let's hypothesize a cosmos with a googolplex (10^(10^100)) of Boltzmann Brains for every normal observer. Given some sort of indifference principle, the Boltzmann Brain argument goes, I should initially assign a 1-in-a-googolplex chance to being a normal observer instead of a Boltzmann Brain. Not good. But now, what are the odds that a Boltzmann Brain can hold it together for ten seconds without lapsing into incoherence? Tiny! Let's assume one in a googol (10^100). The exact number doesn't matter. Setting aside worries about resetting the clock, let's assume that I now find that I have indeed endured coherently for ten seconds. What should be my new odds that I am a Boltzmann Brain? Much lower than googolplex-to-one. Yay! Only about a googolth of a googolplex to one! Let's see, how much is that? Instead of a one followed by a googol of zeroes, it's only a one followed by a googol-minus-100 zeroes. So... still virtual certainty that I am a Boltzmann Brain.
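
    For the numerically inclined, here's a minimal sketch in Python of the Bayesian bookkeeping behind that paragraph, carried out in log10 space since the quantities involved dwarf anything floating-point arithmetic can hold. The specific figures -- the googolplex-to-one prior and the one-in-a-googol survival probability -- are just the illustrative assumptions above, not measured quantities.

        # A minimal sketch, using the post's illustrative numbers, of the
        # Bayesian update, done in log10 space because the quantities are
        # far too large for floats (Python's big integers handle them fine).

        GOOGOL = 10 ** 100  # a googol; also the log10 of a googolplex

        log10_prior_odds_bb = GOOGOL      # prior odds BB : normal = googolplex : 1
        log10_p_survive_given_bb = -100   # P(cohere for 10 sec | BB) = 1/googol
        log10_p_survive_given_normal = 0  # P(cohere for 10 sec | normal) ~ 1

        # Bayes: posterior odds = prior odds * likelihood ratio
        log10_posterior_odds_bb = (log10_prior_odds_bb
                                   + log10_p_survive_given_bb
                                   - log10_p_survive_given_normal)

        print(log10_posterior_odds_bb == GOOGOL - 100)  # True

    Surviving the test improves my odds by a factor of a googol, but against a googolplex-to-one prior that improvement is a rounding error: the posterior odds remain about 10^(googol - 100) to one in favor of the Boltzmann Brain hypothesis.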

    So how should we respond to the Boltzmann Brain hypothesis, then? Sean Carroll has a two-pronged answer that I think makes a lot of sense.

    First, one can ask whether physical theories that imply a low ratio of Boltzmann Brains to normal observers can be independently justified. Boddy, Carroll, and Pollack 2015 offer such a theory. If it turns out that the best physical theories imply that there are zero or very few Boltzmann Brains, then we lose some of our grounds for worry.

    Second, one can point to the cognitive instability of the Boltzmann Brain hypothesis (Carroll 2010, p. 223, drawing on earlier work by David Albert). Here's how I'd put it: To the extent I think it likely that I am a Boltzmann Brain, I think it likely that evidence I have in favor of that hypothesis is delusional -- which should undercut my credence in that evidence and thus my credence in the hypothesis itself. If I think it 99% likely that I'm a Boltzmann Brain, for example, then I should think it 99% likely that my evidence in favor of the Boltzmann Brain hypothesis is in fact bogus evidence -- false memories, not reflecting real evidence from the world outside -- and that should in turn reduce my credence in the Boltzmann Brain hypothesis.

    An interesting feature of Carroll's responses, which distinguishes them from the Still Here response, is this: Carroll's responses appear to be compatible with still assigning a small but non-trivial subjective probability to being a Boltzmann Brain. Maybe the best cosmological theory turns out not to allow for (many) Boltzmann Brains. But we shouldn't have 100% confidence in any such theory -- certainly not at this point in the history of cosmological science -- and if there are still some contender cosmologies that allow for many Boltzmann Brains, we (you? I?) might want to assign a small probability to being a Boltzmann Brain, in view of the acknowledged, if unlikely, possibility that the cosmos has a non-trivial ratio of Boltzmann Brains to normal observers. And although a greater than 50% credence in the Boltzmann Brain hypothesis seems cognitively unstable in Carroll's sense, it's not clear that, say, an approximately 0.1% credence in the Boltzmann Brain hypothesis would be similarly unstable, since in that case one might still have quite a high degree of confidence in the physical theories that lead one to speculate about the small-but-not-minuscule possibility of being a Boltzmann Brain.

    [image source]

    Monday, September 28, 2015

    Microaggression and the Culture of Solidarity

    A guest post by Regina Rini

    If you are on a college campus or read anxious thinkpieces, you’ve probably heard about ‘microaggression’. A microaggression is a relatively minor (hence ‘micro’) insult to a member of a marginalized group, perceived as damaging to that person’s full standing as a social equal. Examples include acting especially suspicious toward people of color or saying to a Jewish student, "Since Hitler is dead, you don’t have to worry about being killed by him any more." A microaggression is not necessarily a deliberate insult, and any one instance might be an honest mistake. But over time a pattern of microaggression can cause macro harm, by continuously reminding members of marginalized groups of their precarious social position.

    A recent paper by sociologists Bradley Campbell and Jason Manning claims that talk of microaggression signals the appearance of a new moral culture: a ‘culture of victimhood’. In the paper Campbell and Manning present a potted history of western morality. First there was a ‘culture of honor’, which prized physical bravery and took insults to demand an aggressive reply. Picture two medieval knights glowering at one another, swords drawn. Then, as legal institutions grew stronger, the culture of honor was displaced by a ‘culture of dignity’, in which individuals let minor insults slide, and reported more serious offenses to impartial authorities. Picture a 1950s businessman calmly telling the constable about a neighbor peeking in windows. Finally, there is now an emerging ‘culture of victimhood’, in which an individual publicly calls attention to having been insulted, in hopes of rallying support from others and inducing the authorities to act. Picture a queer Latina student tweeting about her professor’s perceived-to-be homophobic and racist comments.

    There is a serious problem with Campbell and Manning’s moral history, and exposing this problem helps us to see that the ‘culture of victimhood’ label is misleading. The history they provide is a history of the dominant moral culture: it describes the mores of those social groups with greatest access to power. Think about the culture of honor, and notice how limited it must have been. If you were a woman in medieval Europe, you were not expected or permitted to respond to insults with aggression. Even if you were a man, but of low social class, you certainly would not draw your sword in response to insult from a social superior. The ‘culture of honor’ governed relations among a small part of society: white men of equally high social status.

    Now think about the culture of dignity, which Campbell and Manning claim “existed perhaps in its purest form among respectable people in the homogenous town of mid-twentieth century America.” Another thing that existed among the ‘respectable people’ in those towns was approval of racial segregation; ‘homogenous towns’ did not arise by accident. People of color, women, queer people, immigrants – none could rely upon the authorities to respond fairly to reports of mistreatment by the dominant group. The culture of dignity embraced more people than had the culture of honor, but it certainly did not protect everyone.

    The cultures of honor and dignity left many types of people formally powerless, with no recognized way of responding to moral mistreatment. But those people did not stay quiet. What they did instead was whisper to one another and call one another to witness. They offered mutual recognition amid injustices they could not overcome. And sometimes, when the circumstances were right, they made sure that their mistreatment would be seen by everyone, even by the powerful. They sat in at lunch counters that refused to serve them. They went on hunger strike to demand the right to vote. They rose up and were beaten down at Stonewall when the police, agents of dignity, moved in.

    The new so-called ‘culture of victimhood’ is not new, and it is not about victimhood. It is a culture of solidarity, and it has always been with us, an underground moral culture of the disempowered. In the culture of solidarity, individuals who cannot enforce their honor or dignity instead lay claim to recognition of their simple humanity. They publicize mistreatment not because they enjoy the status of victim, but because they need the support of others to stand strong, and because ultimately public discomfort is the only possible route to redress. What is sought by a peaceful activist who allows herself to be beaten by a police officer in front of a television camera, other than our recognition? What is nonviolent civil disobedience, other than an expression of the culture of solidarity?

    If the culture of solidarity is ancient, then what explains the very current fretting over its manifestation? One answer must be social media. Until very recently, marginalized people were reliant on word of mouth or the rare sympathetic journalist to document their suffering. Yet each microaggression is a single small act that might be brushed aside in isolation; its oppressive power is only visible in aggregate. No journalist could document all of the little pieces that add up to an oppressive whole. But Facebook and Twitter allow documentation to be crowdsourced. They have suddenly and decisively amplified the age-old tools of the culture of solidarity.

    This is a development that we should welcome, not fear. It is good that disempowered people have new means of registering how they are mistreated, even when mistreatment is measured in micro-units. Some of the worries raised about ‘microaggression’ are misplaced. Campbell and Manning return repeatedly to false reporting of incidents that did not actually happen. Of course it is bad when people lie about mistreatment – but this is nothing special about the culture of solidarity. People have always abused the court of moral opinion, however it operated. An honor-focused feudal warlord could fabricate an insult to justify annexing his brother’s territory. A 1950s dignitarian might file a false police report to get revenge on a rival.

    There are some more serious worries about the recent emergence of the culture of solidarity. Greg Lukianoff and Jonathan Haidt suggest that talk of microaggression is corrosive of public discourse; it encourages accusations and counter-accusations of bad faith, rather than critical thinking. This is a reasonable thing to worry about, but their solution, that “students should also be taught how to live in a world full of potential offenses” is not reasonable. The world is not static: what is taught to students now will help create the culture of the future. For instance, it is not an accident that popular support for marriage equality was achieved about 15 years after gay-straight alliances became a commonplace in American high schools and colleges. Teaching students that they must quietly accept racist and sexist abuse, even in micro units, is simply a recipe for allowing racist and sexist abuse to continue. A much more thoughtful solution, one that acknowledges the ongoing reality of oppression as more than an excuse for over-sensitive fussing, will be required if we are to integrate recognition of microaggression into productive public discourse.

    There is also a genuine question about the moral blameworthiness of microaggressors. Some microaggressions are genuine accidents, with no ill intent on the part of the one who errs. Others are more complex psychological happenings, as with implicit bias. Still others are acts of full-blooded bigotry, hiding behind claims of misunderstanding. The problem is that outsiders often cannot tell which is which – nor, in many cases, can victims. And accusations of acting in micro-racist or micro-sexist ways are rarely received without defensiveness; it is painful to be accused of hurting others. We need a better way of understanding what sort of responsibility people have for their small, ambiguous contributions to oppression. And we need better ways of calling out mistakes, and of responding to being called out. These are all live problems for ethicists and public policy experts. Nothing is accomplished by ignoring the phenomenon or demanding its dismissal from polite conversation.

    The culture of solidarity has always been with us – with some of us longer than others. It is a valuable form of moral community, and its recent amplification through social media is something we should welcome. The phenomena it brings to light – microaggression among them – are real problems, bringing with them all the difficulties of finding real solutions. But if we want our future moral culture to be just and equal, not merely quietly dignified, then we will have to struggle for those solutions.

    Thanks to Kate Manne, Meena Krishnamurthy, and others for helping me think through the ideas of this post. Of course, they do not necessarily endorse everything I say.

    image credit: ‘Hands in Solidarity, Hands of Freedom’ mural, Chicago IL. Photo by Terence Faircloth

    Friday, September 25, 2015

    Some Video Interviews of Me

    ... on topics related to consciousness and belief, about ten minutes each, here.

    This interview is a decent intro to my main ideas about group consciousness. (Full paper: "If Materialism Is True, the United States Is Probably Conscious".)

    This interview is a decent intro to my skepticism about the metaphysics of consciousness. (Full paper: "The Crazyist Metaphysics of Mind".)

    Monday, September 21, 2015

    A Theory of Hypocrisy

    Hypocrisy, let's say, is when someone conspicuously advocates some particular moral rule while also secretly, or at least much less conspicuously, violating that moral rule (and doing so at least as much as does the average member of her audience).

    It's hard to know exactly how common hypocrisy is, because people tend to hide their embarrassing behavior and because the psychology of moral advocacy is itself a complex and understudied issue. But it seems likely that hypocrisy is more common than a purely strategic analysis of its advantages would predict. I think of the "family values" and anti-homosexuality politicians and preachers who seem disproportionately likely to be caught in gay affairs, of the angry, judgmental people I know who emphasize how important it is to peacefully control one's emotions, of police officers who break the laws they enforce on others, of Al Gore's (formerly?) environmentally unfriendly personal habits, and of the staff member here at UCR who was in charge of prosecuting academic misconduct and who was later dismissed for having grossly falsified his resume.

    Now, anti-homosexuality preachers might or might not be more likely than their parishioners to have homosexual affairs, etc. But it's striking to me that the rates even come close, as it seems to me they do. A purely strategic analysis of hypocrisy suggests that, in general, people who conspicuously condemn X should have low rates of X, since the costs of advocating one thing and doing another are typically high. Among those costs: creating a climate in which X-ish behavior, which you engage in, is generally more condemned; attracting friends and allies who are especially likely to condemn the types of behavior you secretly engage in; attracting extra scrutiny of whether you in fact do X or not; and attracting the charge of hypocrisy, in addition to the charge of X-ing itself, if your X-ing is discovered, substantially reducing the chance that you will be forgiven. It seems strategically foolish for a preacher with a secret homosexual lover to choose anti-homosexuality to be a central platform of his preaching!

    Here's what I suspect is going on.

    People do not aim to be saints, nor even to be much morally better than their neighbors. They aim instead for moral mediocrity. If I see a bunch of people profiting from doing something that I regard as morally wrong, I want to do that thing too. No fair that (say) 15% of people cheat on the test and get A's, or regularly get away with underreporting their self-employment income. If they're benefiting, I want to benefit too! This reasoning is tempting even if the cheaters are a minority and honest people are the majority.

    Now consider the preacher tempted by homosexuality or the environmentalist who wants to eat steaks in her large air-conditioned house. They might be entirely sincere in their moral opinions. Hypocrisy needn't involve insincere commitment to the moral ideas one espouses (though of course it can be insincere). Still, they see so many others getting away with what they condemn that they (not aiming to be a lot better than their neighbors) might well feel licensed to indulge themselves a bit too.

    Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than to the average person. The person who works in the IRS office sees how common tax cheating is and how easy it is to get away with. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to an increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of the violations of those norms -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

    But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

    So here's the final piece of the puzzle:

    Suppose that there's a norm that you find yourself especially tempted to violate, though you succeed for a while, at substantial personal cost, in not violating it. You love cheeseburgers but go vegetarian; you have intense homosexual desires but avoid acting on them. Envy might lead you to be especially condemnatory of other people who still do such things. If you've worked so hard, they should too! It's an issue you've struggled with personally, so now you have wisdom about it, you think. You want to try to make sure that others don't get away with that sin you've worked so hard to avoid. Moreover, optimistic self-illusions might lead you to overestimate the likelihood that you will stay strong and not lapse. These envious, self-confident moments are the moments when you are most likely to conspicuously condemn those behaviors to which you are tempted. But after you're on the hook for it, if you've been sufficiently conspicuous in your condemnations, it becomes hard to change your tune later, even after you have lapsed.

    [image source; more on Rekers]

    Thursday, September 17, 2015

    Philosophical Conversations

    a guest post by Regina Rini

    You’re at a cocktail reception and find yourself talking to a stranger. She mentions a story she heard today on NPR, something about whether humans are naturally good or evil. Something like that. So far she’s just described the story; she hasn’t indicated her own view. There are a few ways you might respond. You might say, "Oh, that’s interesting. I wonder why this question is so important to people." Or you might say, "Here’s my view on the topic… What do you think?" Or maybe you could say "Here's the correct view on the topic… Anyone who thinks otherwise is confused."

    It’s obvious that the last response is boorish cocktail party behavior. Saying that seems to be aimed at foreclosing any possible conversation. You’re claiming to have the definitive, correct view, and if you’re right then there’s no point in discussing it further. If this is how you act, you shouldn’t be surprised when the stranger appears disconcerted and politely avoids talking to you anymore. So why is it that most philosophy books and papers are written in exactly this way?

    If we think about works of philosophy as contributing to a conversation, we can divide them up like this. There are conversation-starters: works that present a newish topic or question, perhaps with a suggestive limning of the possible answers, but without trying to come to a firm conclusion. There are conversation-extenders: works that react to an existing topic by explaining the author’s view, but don’t try to claim that this is the only possibly correct view and clearly invite response from those who disagree. And there are conversation-enders: works that try to resolve or settle an existing debate, by showing that one view is the correct view, or at least that an existing view is definitively wrong and must be abandoned.

    Contemporary analytic philosophy seems to think that conversation-enders are the best type of work. Conversation-starters do get some attention, but usually trying to raise a new topic leads to dismissal by editors and referees. "This isn’t sufficiently rigorous", they will say. Or: "What’s the upshot? Which famous –ism does this support or destroy? It isn’t clear what the author is trying to accomplish." Opening a conversation, with no particular declared outcome, is generally regarded as something a dilettante might do, not what a professional philosopher does.

    Conversation-extenders also have very little place in contemporary philosophy. If you merely describe your view, but don’t try to show that it is the only correct view, you will be asked "where is your argument?" Editors and referees expect to see muscularity and blood. A good paper is one that has "argumentative force". It shows that other views "fail" - that they are "inadequate", or "implausible", or "fatally flawed". A good paper, by this standard, is not content to sit companionably alongside opposed views. It must aim to end the conversation: if its aspirations are fully met, there will be no need to say anything more about the topic.

    You might object here. You might say: the language of philosophy papers is brutal, yes, but this is misleading. Philosophers don’t really try to end conversations. They know their opponents will keep on holding their "untenable" views, that there will soon be a response paper in which the opponent says again the thing that they’ve just been shown they "cannot coherently say". Conversation-enders are really conversation-extenders in grandiose disguise. Boxers aren’t really trying to kill their opponents, and philosophers aren’t really trying to kill conversations.

    But I think this objection misses something. It’s not just the surface language of philosophy that suggests a conversation-ending goal. That language is driven by an underlying conception of what philosophy is. Many contemporary analytic philosophers aspire to place philosophy among the ‘normal sciences’. Philosophy, on this view, aims at revealing the Truth – the objective and eternal Truth about Reality, Knowledge, Beauty, and Justice. There can be only one such Truth, so the aim of philosophy really must be to end conversations. If philosophical inquiry ever achieves what it aims at, then there is the Truth, and why bother saying any more?

    For my part, I don’t know if there is an objective and eternal Truth about Reality, Knowledge, Beauty, and Justice. But if there is, I doubt we have much chance of finding it. We are locally clever primates, very good at thinking about some things and terrible at thinking about others. The expectation that we might uncover objective Truth strikes me as hubristic. And the ‘normal science’ conception of philosophy leads to written work that is plodding, narrow, and uncompanionably barbed. Because philosophy aims to end conversations, and because that is hard to do, most philosophy papers take on only tiny questions. They spin epicycles in long-established arguments; they smash familiar –isms together in hopes that one will display publishably novel cracks. If philosophy is a normal science, this makes perfect sense: to end the big conversation, many tiny sub-conversations must be ended first.

    There is another model for philosophical inquiry, one which accepts the elusiveness of objective Truth. Philosophy might instead aim at interpretation and meaningfulness. We might aspire not to know the Truth with certainty, but instead to know ourselves and others a little bit better. Our views on Reality, Knowledge, Beauty, and Justice are the modes of our own self-understanding, and the means by which we make our selves understood to others. They have a purpose, but it is not to bring conversation to a halt. In fact, on this model, the ideal philosophical form is the conversation-opener: the work that shows the possibility of a new way of thinking, that casts fresh light down unfamiliar corridors. Conversation-openers are most valuable precisely because they don’t assume an end will ever be reached. But conversation-extenders are good too. What do you think?

    The spirit of this post owes a lot to Robert Nozick, especially the introduction to his book Philosophical Explanations. Thanks to Eden Lin and Tim Waligore for helping me track down Nozick’s thoughts, and to several Facebook philosophers for conversing about these ideas.

    image credit: Not getting Involved by Tarik Browne