Thursday, July 02, 2015

How In-Between Cases of Belief Differ Normatively from In-Between Cases of Extraversion

For twenty years, I've been advocating a dispositional account of belief, according to which to believe that P is to match, to an appropriate degree and in appropriate respects, a "dispositional stereotype" characteristic of the belief that P. In other words: All there is to believing that P is being disposed, ceteris paribus (all else equal or normal or right), to act and react, internally and externally, like a stereotypical belief-that-P-er.

Since the beginning, two concerns have continually nagged at me.

One concern is the metaphysical relation between belief and outward behavior. It seems that beliefs cause behavior and are metaphysically independent of behavior. But it's not clear that my dispositional account allows this -- a topic for a future post.

The other concern, my focus today, is this: My account struggles to explain what has gone normatively wrong in many "in-between" cases of belief.

The Concern

To see the worry, consider personality traits, which I regard as metaphysically similar to beliefs. What is it to be extraverted? It is just to match, closely enough, the dispositional stereotype that we tend to associate with being extraverted -- that is, to be disposed to enjoy parties, to be talkative, to like meeting new people, etc. Analogously, on my view, to believe there is beer in the fridge is, ceteris paribus, to be disposed to go to the fridge if one wants a beer, to be disposed to feel surprise if one were to open the fridge and find no beer, to answer "yes" when asked if there is beer in the fridge, etc.

One interesting thing about personality traits is that people are rarely 100% extravert or 100% introvert, rarely 100% high-strung or 100% mellow. Rather, people tend to be between the extremes, extraverted in some respects but not in others, or in some types of contexts but not in others. One feature of my account of belief which I have emphasized from the beginning is that it easily allows for the analogous in-betweenness: We often match only imperfectly, and in some respects, the stereotype of the believer in racial equality, or of the believer in God, or of the believer that the 19th Street Bridge is closed for repairs. ("The Splintered Mind"!)

The worry, then, is this: There seems to be nothing at all normatively wrong -- no confusion, no failing -- with being an in-between extravert who has some extraverted dispositions and other introverted ones; while in contrast it does seem that typically something has gone wrong in structurally similar cases of in-between believing. If some days I feel excited about parties and other days I loathe the thought, with no particular excuse or explanation for my different reactions, no problem, I'm just an in-between extravert. In contrast, if some days I am disposed to act and react as if Earth is the third planet from the Sun and other days I am disposed to act and react as if it is the fourth, with no excuse or explanation, then something has gone wrong. Being an in-between extravert is typically not irrational; being an in-between believer typically is irrational. Why the difference?

My Answer

First, it's important not to exaggerate the difference. Too arbitrary an arrangement of, or fluctuation in, one's personality dispositions does seem at least a bit normatively problematic. If I'm disposed to relish the thought of a party when the wall to my left is beige and to detest the thought of a party when the wall to my left is truer white, without any explanatory story beneath, there's something weird about that -- especially if one accepts, as I do, following McGeer and Zawidzki, that shaping oneself to be comprehensible to others is a central feature of mental self-regulation. And on the other hand, some ways of being an in-between believer are entirely rational: for example, having an intermediate degree of confidence or having procedural "how to" knowledge without verbalizable semantic knowledge. But this so far is not a full answer. Wild, inexplicable patterns still seem more forgivable for traits like extraversion than attitudes like belief.

A second, fuller reply might be this: There is a pragmatic or instrumental reason to avoid wild splintering of one's belief dispositions that does not apply to the case of personality traits. It's good (at least instrumentally good, maybe also intrinsically good?) to be a believer of things, roughly, because it's good to keep track of what's going on in one's environment and to act and react in ways that are consonant with that. Per impossibile, if one were faced with the choice of whether or not to be a creature with the capacity to form dispositional structures in response to evidence that stay mostly stable, except under the influence of new evidence, and which guide one's behavior accordingly, vs. being a creature without the capacity to form such evidentially stable dispositional structures, it would be pragmatically wise to choose to be the former. On average, plausibly, one would live longer and attain more of one's goals. So perhaps the extra normative failing in wildly splintering belief dispositions derives from that. An important part of the value of having stable belief-like dispositional sets is to guide behavior in response to evidence. In normatively defective in-between cases, that value isn't realized. And if one explicitly embraces wild in-betweenness in belief, one goes the extra step of thumbing one's nose at such structures, when one could, instead, try to employ them toward one's ends.

Whether these two answers are jointly sufficient to address the concern, I haven't decided.

[Thanks to Sarah Paul and Matthew Lee for discussion.]

[image source]

Monday, June 29, 2015

A New Podcast Interview of Me


Thanks to Daniel Bensen for the fun interview! We discuss the rights of artificial intelligences, whether our moral intuitions break down in far-out SF cases, the relationship between science fiction and philosophy, and my recent story "Momentary Sage".

Thursday, June 25, 2015

Celebrate the Nerd!

Here's my definition of a nerd:

A nerd is someone who loves an intellectual topic, for its own sake, to an unreasonable degree.

The nerd might be unreasonably passionate about Leibnizian metaphysics, for example -- she studies Latin, French, and German so she can master the original texts, stays up late reading neglected passages, and argues intensely about obscure details with anyone who has the patience to listen. Or she loves twin primes in that same way, or the details of Napoleonic warfare, or the biology of squids. How could anyone care so much about such things?

It's not that the nerd sees some great practical potential in studying twin primes (though she might half-heartedly try to defend herself in that way), or is responding in the normal way to something that sensible people might study carefully because of its importance (such as a cure for leukemia). Rather the nerd is compelled by an intellectual topic and builds a substantial portion of her life around it, with no justification that would make sense to anyone who is not similarly consumed by that topic. All passions drift free of reasonable justification to some extent, but still there's a difference between moderate passions and passions so extreme and compelling that one is somewhat unbalanced as a result of them. The nerd will sacrifice a lot -- time, money, opportunities -- to learn just a little bit more about her favored topic.

The secondary features of nerdiness are side effects: The nerd might not care about dressing nicely. She's too busy worrying about the Leibniz Nachlass. The nerd might fail at being cool -- she's not invested in developing the social skills that would be required. The nerd might be introverted: Maybe she really was introverted all along and that's part of why she found herself with her nerdy passions; or maybe she's an introvert partly in reaction to other people's failure to care about squid. Oh, but now squid have come up in the conversation? Her knowledge is finally relevant! The nerd becomes now too eager to deploy her vast knowledge. She won't stop talking. She'll correct all your minor errors. She'll nerdsplain tirelessly at you.

The nerd needn't possess any of these secondary features: Caring intensely about the Leibniz Nachlass needn't consume one entirely, and so there can still be room for the nerd to care also, in a normal, non-intellectual way, about ordinary things. But the tendency on average will be for nerdy passion to push away other interests and projects, with the result that uncool, shlumpy introverts will be overrepresented among nerds.

Innate genius might exist. But I don't find the empirical evidence very compelling. What I think passes for innate genius is often just nerdy passion. Meeting the nerd on her own turf, she can appear to be a natural-born genius or talent because she has already thought the topic through so thoroughly that she operates two moves ahead of you and has a chess-master-like recognition of the patterns of intellectual back-and-forth in the area. She has thought repetitively, and from many angles, of the various ways in which pieces of Leibniz might possibly connect, or about the wide range of techniques in prime-number mathematics, or about the four competing theories of squid neural architecture and their relative empirical weaknesses. She dreams them at night. How could you hope to keep up? She will also master related domains so that she exceeds you there, too -- early modern philosophy generally and abstract metaphysics, say, for the Leibniz nerd. Other aspects of her mind might not be so great -- just ask her to fix a faucet or find her way around downtown -- but meet her anywhere near her turf and she'll scorch right past you. If she is good enough also at exuding an aura of intelligence (not all nerds are, but it's a social technique that pairs well with nerdiness), then you might attribute her overperformance on Leibniz to her innate brilliance, her underperformance in plumbing to her not giving a whit.

Movies like Good Will Hunting drive me nuts, because they feed the impression that intellectual accomplishment is the result of an innate gift, rather than the result of nerdy passion. In this way, they are antithetical to the vision of nerdiness that I want to celebrate. A janitor who doesn't care (much?) about math but is innately great at it -- and somehow also knows better than history graduate students what's going on in obscure texts in their field? Such innate-genius movies rely on the fixed mindset that Carol Dweck has criticized. What I think I see in the nerdy eminences I have met is not so much innate genius as years of thought inspired by passion for stuff that no one sensible would care so much about.

Society needs nerds. If we want to know as much as a society ought to know about Leibniz and about squids, we benefit from having people around who are so unreasonably passionate about these things that they will master them to an amazing degree. There's also just something glorious about a world that contains people who care as passionately about obscure intellectual topics as the nerd does.

**** Celebrate the nerd! ****

[image source, image source]

Thursday, June 18, 2015

Why Do We Care about Discovering Life, Exactly?

It would be exciting to discover life on another planet -- no doubt about that! But why would it be exciting?

Let's start with a contrast: the possibility of finding intelligence that is not alive -- a robot or a god, without means of reproduction. (Standard textbook definitions, philosophy of biology, and NASA-sponsored discussions all tend to define "life" partly in terms of reproduction.) I'm inclined to think that the search for extra-terrestrial life would have been successful in its aims if we discovered a manufactured robot or a non-reproducing god, even if such beings are not technically alive or are only borderline cases of living things. So maybe what we call the "search for life" is better conceptualized as the search for... well, what exactly?

(Could we discover evidence of a god -- a creator being who exists outside of our space and time? I don't see why not, at least hypothetically. Maybe we find a message in the stars: "Hey, God here! Ask me for a miracle and I will produce one!")

The robot and god cases might suggest that what we really care about is finding intelligence. SETI, for example, takes that as its explicit goal: the Search for Extra-Terrestrial Intelligence. But an emphasis on intelligence appears to underestimate our target. We'd be excited to find microbes on Mars or Europa -- and the search for extra-terrestrial life would rightly be regarded as having met with success (though not the most exciting form of success) -- despite microbes' lack of intelligence.

Or do microbes possess some sort of minimal intelligence? They engage in behaviors that sustain their homeostasis, repelling some substances and consuming others, for example, in a way that preserves their internal order. This type of "intelligence" is also part of standard definitions of life. Maybe, then, order-preserving homeostasis is what excites us? But then, Jupiter's Great Red Spot does something similar, yet we don't seem to think of it as the kind of thing we're looking for in searching for life.

Are we looking, then, for complexity? Maybe a microbe is more complex than the Great Red Spot. (I don't know. Measuring complexity is a vexed issue.) But sheer complexity doesn't seem like what we're after. Galaxies are complex, and the canyons of Mars are complex, and there are subtle, complex variations in cosmic background radiation -- all very interesting, but the search for life appears to be something different, not just a search for complexity.

Maybe discovering life would be interesting because it would give us a glimpse of our potential past? Life on Earth evolved from microbes, but it's still obscure how. Seeing microbial life elsewhere might illuminate our own origins. Maybe, if it's very different from us, it will also illuminate the contingency of our origins.

Maybe discovering life would be interesting because it would complete the Copernican revolution, which knocked human beings out of the center of the cosmos? Earth is still special in being the only planet known to have life, and maybe that sense of specialness is still implicit in our thinking. Finding life elsewhere might knock us more fully from the center of the cosmos.

Maybe discovering life would be interesting because it would be a discovery of something with awesome potential? Reproduction might work its way back into our considerations here. Microbes can reproduce and thus evolve, and maybe their awesomeness lies partly in the possibility that in a billion years they could give rise to multicellular entities very different from us -- capable of very different forms of consciousness, self-awareness, pleasure and pain, creativity, art.

Maybe discovering life would be interesting because terrifying -- either because of the threat alternative life forms might directly pose to life on Earth or, more subtly, because if non-technological life is common enough in the universe for us to discover it, then the Great Filter of Fermi's Paradox is more likely to be before us than behind us. (That is, it might be evidence that biological life is common while technological intelligence is rare, and thus that technological civilizations tend to destroy themselves in short order.)

On the flip side, maybe it would be interesting for its potential use: intelligences with technology to share, non-technological organisms with interesting biologies from which we could learn to construct new medicines or other technologies.

Would it be interesting in the same way to find remnants of life? I'm inclined to think it would have some of the same interest. If so, and if we're inclined to think, for whatever reason, that technological societies tend to be short-lived, then we might dedicate some resources toward detecting possible signs of dead civilizations. Such signs might include solar collectors that interfere with stellar output, or stable compounds in a planet's atmosphere that are unlikely to have arisen except by technological means.

I see no reason we need to insist on a single answer to questions about what ambitions we do or should have in our search for extra-terrestrial company of some sort. But in the context of space policy it seems worth more extended thought. I'd like to see philosophers more involved in this, since the issues go right to the heart of philosophical questions about what we do and should value in general.


Acknowledgement: This is one of two main issues that struck me during my recent trip to an event on the search for extraterrestrial life, funded by NASA and the Library of Congress. Thanks to LOC, NASA, and the other participants. I discuss the other issue, about our duties to extraterrestrial microbes, here.

[image source, image source]

Thursday, June 11, 2015

What Philosophical Work Could Be

Academic philosophers in Anglophone Ph.D.-granting departments tend to have a narrow conception of what counts as valuable philosophical work. Hiring, tenure, promotion, and prestige turn mainly on one's ability to write an essay in a particular theoretical, abstract style, normally in reaction to the work of a small group of canonical historical and 20th century figures, on a fairly constrained range of topics, published in a limited range of journals and presses. This is too narrow a view.

I won't discuss cultural diversity here, which I have addressed elsewhere. Today I'll focus on genre and medium.

Consider the recency and historical contingency of the philosophical journal article. It's a late 19th century invention. Even as late as the mid-20th century, leading philosophers in Western Europe and North America were doing important work in a much broader range of styles than is typical now. Think of the fictions and difficult-to-classify reflections of Sartre, Camus, and Unamuno, the activism and popular writings of Russell, Dewey's work on educational reform, Wittgenstein's fragments. It's really only with the generation hired to teach the baby boomers that our conception of philosophical work became narrowly focused on the academic journal article, and on books written in that same style.

(Miguel de Unamuno)

Consider the future of media. The magazine is a printing-press invention and carries with it the history and limitations of that medium. With the rise of the internet, other possibilities emerge: videos, interactive demonstrations, blogs, multi-party conversations on social media, etc. Is there something about the journal article that makes it uniquely better for philosophical reflection than these other media? (Hint: no.)

Nor need we think that philosophical work must consist of expository argumentation targeted toward disciplinary experts and students in the classroom. This, too, is a narrow and historically recent conception of philosophical work. Popular essays, fictions, aphorisms, dialogues, autobiographical reflections, and personal letters have historically played a central role in philosophy. We could potentially add, too, public performances, movies, video games, political activism, and interactions with the judicial system and governmental agencies.

Philosophers are paid to develop expertise in philosophy, to bring that expertise in philosophy into the classroom, and to contribute that expertise to society in part by further advancing philosophical knowledge. A wide range of activities fit within that job description. I am inclined to be especially liberal here for two reasons: First, I have a liberal conception of philosophy as inquiry into big-picture ontological, normative, conceptual, and broadly theoretical issues about anything (including, e.g., hair and football as well as more traditionally philosophical topics). I favor treating a wide range of inquiries as philosophical, only a small minority of which happen in philosophy departments. And second, I have a liberal conception of "inquiry" on which sitting at one's desk reading and writing expository arguments is only one sort of inquiry. Engaging with the world, trying out one's ideas in action, seeing the reactions of non-academics, exploring ideas in fiction and meditation -- these are also valuable modes of inquiry that advance our philosophical knowledge, activities in which we not only deploy our expertise but cultivate and expand it, influencing society and, in a small or a large way, the future of both academic philosophy and non-academic philosophical inquiry.

Research-oriented philosophy departments tend to regard writing for popular media or consulting with governmental agencies as "service", which is typically held in less esteem than "research". I'm not sure service should be held in less esteem; but I would also challenge the idea that such work is not also partly research. If one approaches popular writing as a means of "dumbing down" pre-existing philosophical ideas for an audience of non-experts whose reactions one does not plan to take seriously, then, yes, that popular writing is not really research. But if the popular essay is itself a locus of philosophical creativity, where philosophical ideas are explored in hopes of discovering new possibilities, advancing (and not just marketing) one's own thinking, furthering the community's philosophical dialogue in a way that might strike professional philosophers, too, as interesting rather than merely familiar re-hashing, and if it's done in a way that is properly intellectually responsive to the work of others, then it is every bit as much "research" as is a standard journal article. Analogously with consulting -- and with Twitter feeds, TED videos, and poetry.

I urge our discipline to conceptualize philosophical work more broadly than we typically do. A Philosophical Review article can be an amazing, awesome thing. Yes! But we should see journal articles of that style, in that type of venue, as only one of many possible forms of important, field-shaping philosophical work.

Thursday, June 04, 2015

Space Agencies Need, but Don't Appear to Have, Policies Governing Contact with Microbial Life on Mars

NASA and other leading space agencies do not appear to have formal policies about how to treat microbial life if it's found elsewhere in the solar system. I find this surprising.

I still need to do a more thorough search to be confident of this. However, last week when I went to an event jointly sponsored by NASA and the Library of Congress, the people I spoke to there seemed to think that there's no worked-out formal policy; nor have I found such a policy in subsequent internet searches. (Please correct me by email or in the comments below if I'm wrong!)

NASA and other space agencies do have rigorous and detailed protocols regarding the cross-contamination of microbial life between planets. If you want to send a lander to Mars, it must be thoroughly sterilized. Likewise, extensive protocols are being developed to protect Earth from possible extra-terrestrial microbes in returned samples. NASA has an Office of Planetary Protection that focuses on these issues. However, contact with microbial life raises ethical issues besides cross-contamination.

Suppose NASA discovers a patch of microbes on Mars.

Presumably, NASA scientists will want to test it -- to see how similar Martian life is to Earthly life, for example. Testing it might involve touching it. Maybe NASA scientists will want a rover to scoop up a sample for chemical analysis. But that would mean interfering with the organisms, exposing them to risk. Even just shining light on microbes to examine them more closely is a form of interference that presents some risk -- even the shadow of a parked rover creates a small degree of interference and risk. How much interference with extraterrestrial microbial life is acceptable? How much risk? These questions will arise acutely as soon as we discover extraterrestrial life. In fact, proving that we have actually discovered life might already involve some interference, especially if the sample is ambiguous or subsurface. These questions are quite independent of existing regulations about sterilization and contamination. We need to consider them now, in advance, before we discover life. Otherwise, NASA leaders might be in the position of making these decisions on the fly, without sufficient public input or oversight.

Here's another question in the ethics of contact: Suppose we discover a species of microbe that appears to be under threat of extinction due to local environmental conditions. Should we employ something like a "Prime Directive" policy, on the microbial level: no interference, even if that means extinction? Or should we take positive steps toward alien species protection?

Planetary protection policies that focus on contamination risk seem to rely on standard top-down regulatory models requiring compliance with a fixed set of detailed rules, but I wonder if a better model might be university Institutional Review Boards for the protection of human participants (IRBs) and Animal Care and Use Committees (ACUCs). Such committees have three appealing features:

First, rather than a rigid set of rules, IRBs and ACUCs employ a flexible set of general guidelines. The guidelines governing research on human participants tend to be very conservative about risk in general; but the committee is also charged with weighing risks against benefits. In the context of extraterrestrial microbiology, a reasonable standard might be extreme caution about interference, but one that allows, for example, a small sample to be very carefully taken from a large, healthy microbial colony, for experimentation and then careful disposal without re-release into the planetary environment. As reflection on this example suggests, people might have very different ethical opinions about how much risk and interference is appropriate, and of what sort. Also, expert scientists will want to think in advance about assessing the sources of risk and what feasible steps can be taken to minimize those risks, contingent on various types of possible preliminary information about the microbe's structure and habitat. I do not see evidence that these issues are being given the serious thought, with public input, that they need to be given.

Second, IRBs and ACUCs are normally constituted by a mix of scientist and non-scientist members, the latter typically drawn from the general public (often lawyers and schoolteachers). The scientists bring their scientific expertise, which is essential to evaluating the risks and possible benefits, but the non-scientist members play an important role in expressing general community values and in keeping the scientists from possibly going too easy on their scientist friends, as well as sometimes contributing specific expertise on related non-scientific issues. In the context of the treatment of extraterrestrial microbial life, a mixed committee also seems important. It shouldn't only be the folks at the space agencies who are making these calls.

Third, IRBs and ACUCs assess specific protocols in advance of the implementation of those protocols. This should be done where feasible, while also recognizing that some decisions may need to be made urgently without pre-approval when unexpected events occur.

I think we should begin to establish moderately specific national and international guidelines governing human interaction with microbial life elsewhere in the solar system, in which contamination is regarded as only one issue among several; that we should formulate these guidelines after broad input not only from scientists but also from the general public and from people with expertise in risk and research ethics; and that we should form committees, modeled on IRBs and ACUCs, of people who understand these guidelines and stand ready to evaluate proposals at the very moment we discover extraterrestrial life.

NASA, ESA, etc., what do you think?

[image source]

Friday, May 29, 2015

The Immortal's Dilemma

Most of the philosophical literature on immortality and death -- at least that I've read -- doesn't very thoroughly explore the consequences of temporal infinitude. Bernard Williams, for example, suggests that 342 years might be a tediously long life. Well, of course 342 years is peanuts compared to infinitude!

It seems to me that true temporal infinitude forces a dilemma between two options:
(a.) infinite repetition of the same things, without memory, or
(b.) an ever-expanding range of experiences that eventually diverges so far from your present range of experiences that it becomes questionable whether you should regard that future being as "you" in any meaningful sense.

Call this choice The Immortal's Dilemma.

Given infinite time, a closed system will eventually cycle back through its states, within any finite error tolerance. (One way of thinking about this is the Poincare recurrence theorem.) There are only so many relevantly distinguishable states a closed system can occupy. Once it has occupied them, it has to start repeating at least some of them. Assuming that memory belongs to the system's structure of states, then memory too is among those things that must start afresh and repeat. But it seems legitimate to wonder whether the forgetful repetition of the same experiences, infinitely again and again, is something worth aspiring toward -- whether it's what we can or should want, or what we thought we might want, in immortality.
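The pigeonhole point here can be made concrete with a toy simulation -- purely illustrative, with an arbitrary made-up update rule standing in for the system's dynamics: any deterministic system with only finitely many distinguishable states must eventually revisit a state, and from then on it repeats forever.

```python
# Toy illustration: a deterministic "system" with 1000 possible states,
# evolving under an arbitrary fixed rule. Because states are finite and
# the dynamics deterministic, a repeat is inevitable -- and once a state
# recurs, the whole subsequent history recurs with it.
def step(state):
    return (state * 137 + 42) % 1000  # arbitrary deterministic rule

seen = {}  # state -> first time step at which it occurred
state, t = 1, 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1

cycle_start = seen[state]
period = t - cycle_start
print(f"First repeat at step {t}: the system re-enters its state from "
      f"step {cycle_start} and will now cycle with period {period} forever.")
```

Nothing hangs on the particular rule or the number of states; shrinking or growing either only changes how long the repeat takes to arrive, never whether it arrives.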

It might seem better, then, or more interesting, or more worthwhile, to have an open system. But unless the system is ever-expanding, or includes an ever-expanding population of unprecedented elements, eventually it will loop back around just the same. So, to avoid repetition, given any finite error tolerance, events will have to get more and more remote from the original run of events you lived through -- with no end to the increasing remoteness.

Suppose that conscious experience is what matters. (Parallel arguments can be made for other ways of thinking about what matters.) First, one might cycle through every possible human experience. Suppose, for example, that human experience depends on a brain of no more than a hundred trillion neurons (currently we have a hundred billion, but that might change), and that each neuron is capable of one of a hundred trillion relevantly distinguishable states, and that any difference in even one neuron in the course of a ten-second "specious present" results in a relevantly distinguishable experience. A liberal view of the relationship between different neural states and different possible experiences!
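Under these deliberately liberal assumptions, the order of magnitude of the count is easy to work out -- a back-of-the-envelope sketch, not a serious neuroscientific estimate:

```python
# Counting distinguishable experiences under the post's assumptions:
# a brain of 10^14 neurons, each in one of 10^14 distinguishable states,
# with any one-neuron difference yielding a distinct experience.
neurons = 10**14
states_per_neuron = 10**14

# Total configurations = states_per_neuron ** neurons. The number itself
# is far too large to write out, but its order of magnitude is simple:
# log10(total) = neurons * log10(states_per_neuron) = 10^14 * 14.
log10_total = neurons * 14
print(f"Roughly 10^{log10_total} distinguishable experiences")
```

The exponent is itself about 1.4 quadrillion -- an unimaginably vast number, yet still finite, which is all the argument needs.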

Of course such numbers, though large, are still finite. So once you're done living through all the experiences of seeming-Aristotle, seeming-Gandhi, seeming-Hitler, seeming-Hitler-seeming-to-remember-having-earlier-been-Gandhi, seeming-future-super-genius, and seeming-every-possible-person-else and many, many more experiences that probably wouldn't coherently belong to anyone's life, well, you've either got to settle in for some repetition or find some new range of experiences that include experiences that are no longer human. [Clarification June 1: Not all these states need occur, but that only shortens the path to looping or alien weirdness.] Go through the mammals. Then go through hypothetical aliens. Expand, expand -- eventually you'll have run through all possible smallish creatures with a neural or similar basis and you'll need to go to experiences that are either radically alien or vastly superhuman or both. At some point -- maybe not so far along in this process -- it seems reasonable to wonder, is the being who is doing all this really "you"? Even if there is some continuous causal thread reaching back to you as you are now, should you, as you are now, care about that being's future any more than you care about the future of some being unrelated to you?

Either amnesic infinite repetition or a limitless range of unfathomable alien weirdness. Those appear to be the choices.

References to good discussions of this in the existing literature welcome in the comments section!

[Thanks particularly to Benjamin Mitchell-Yellin for discussion.]

Related posts:
Nietzsche's Eternal Recurrence, Scrambled Sideways (Oct. 31, 2012)
My Boltzmann Continuants (Jun. 6, 2013)
Goldfish-Pool Immortality (May 30, 2014)
Duplicating the Universe (Apr. 29, 2015)

[image source]

Thursday, May 21, 2015

Leading SF Novels: Academic Library Holdings and Citation Rates

Among the most culturally influential English-language fiction writers of the 20th century, a substantial portion wrote science fiction or fantasy -- "speculative fiction" (SF) broadly construed. H.G. Wells, J.R.R. Tolkien, George Orwell, Isaac Asimov, Philip K. Dick, and Ursula K. Le Guin, for starters. In the 21st century so far, speculative fiction remains culturally important. There's sometimes a feeling among speculative fiction writers that even the best recent work in the genre isn't taken seriously by academic scholars. I thought I'd look at a couple possible (imperfect!) measures of this.

(I'm doing this partly just for fun, 'cause I'm a dork and I find this kind of thing relaxing, if you'll believe it.)

Holdings of recent SF in academic libraries

I generated a list of critically acclaimed SF novels by considering Hugo, Nebula, and World Fantasy award winners from 2009-2013 plus any non-winning novels that were among the 5-6 finalists for at least two of the three awards. Nineteen novels met the criteria.

Then I looked at two of the largest Anglophone academic library holdings databases, COPAC and Melvyl, and counted how many different campuses (max 30-ish) had a print copy of each book [see endnote for details].

H = Hugo finalist, N = Nebula finalist, W = World Fantasy finalist; stars indicate winners.

The results, listed from most held to least:

16 campuses: Neil Gaiman, The Graveyard Book (H*W)
15: George R.R. Martin, A Dance with Dragons (HW)
15: China Miéville, The City & the City (H*NW*)
12: Cory Doctorow, Little Brother (HN)
12: Ursula K. Le Guin, Powers (N*)
12: China Miéville, Embassytown (HN)
12: Connie Willis, Blackout / All Clear (H*N*)
11: Paolo Bacigalupi, The Windup Girl (HN*)
11: G. Willow Wilson, Alif the Unseen (W*)
10: Kim Stanley Robinson, 2312 (HN*)
8: N.K. Jemisin, The Hundred Thousand Kingdoms (HNW)
8: N.K. Jemisin, The Killing Moon (NW)
8: John Scalzi, Redshirts (H*)
8: Jeff VanderMeer, Finch (NW)
8: Jo Walton, Among Others (H*N*W)
7: Cherie Priest, Boneshaker (HN)
7: Caitlin Kiernan, The Drowning Girl (NW)
5: Nnedi Okorafor, Who Fears Death (NW*)
3: Saladin Ahmed, Throne of the Crescent Moon (HN)

As a reference point, I did a similar analysis of PEN/Faulkner award winners and finalists over the same period.

Of the 25 PEN winners and finalists, 7 were held by more campuses than was any book on my SF list, though the difference was not extreme, with two at 24 (Jennifer Egan, A Visit from the Goon Squad; Joseph O'Neill, Netherland) and five ranging from 18-21 campuses. In the PEN group, just as in the SF group, there were nine books held by fewer than ten of the campuses (3, 5, 6, 7, 7, 7, 9, 9, 9) -- so the lower part of the lists looks pretty similar.

References in Google Scholar

Citation patterns in Google Scholar tell a similar story. Although citation rates are generally low by philosophy and psychology standards (assuming as a comparison group the most-praised philosophy and psychology books of the period), they are not very different between the SF and PEN lists. The SF books for which I could find five or more Google Scholar citations:

53 citations: Gaiman, The Graveyard Book
52: Doctorow, Little Brother
27: Martin, A Dance with Dragons
26: Bacigalupi, The Windup Girl
9: Priest, Boneshaker
8: Robinson, 2312
5: Okorafor, Who Fears Death

The top-cited PEN books were at 70 (O'Neill, Netherland) and 59 (Egan, A Visit from the Goon Squad). After those two, there's a gap down to 17, 15, 12, 11, 10.

I continue to suspect that there is a bit of a perception difference between "highbrow" literary fiction and "middlebrow" SF, disadvantaging SF studies in some quarters of the university; but if so, perhaps that is compensated by recognition of SF's broader visibility in popular culture, so that in terms of overall scholarly attention, it appears to be approximately a tie.



So... hey! That makes me wonder about bestsellers. I've taken the four best-selling fiction books each year from 2009-2013 (according to USA Today for 2009-2012, Nielsen BookScan for 2013) and tried the same. (The catalogs are a bit messier since these books tend to have multiple editions, so the numbers are a little rougher.)

Top five by citations (# of campuses in parens):

431: Suzanne Collins, The Hunger Games (23)
333: Stephenie Meyer, Twilight (26)
162: Stephenie Meyer, Breaking Dawn (17)
132: Stephenie Meyer, New Moon (15)
130: Stieg Larsson, The Girl with the Dragon Tattoo (12)

Only 4 of the 19 had fewer than 10 citations, and all were held by at least six campuses.

So by both of these measures, bestsellers are receiving more academic attention than either the top critically acclaimed SF or the PEN winners. Notable: By my count, 8 of the 19 bestsellers are SF, including the four most-cited.

Maybe that's as it should be: The Hunger Games and Twilight are major cultural phenomena, worthy of serious discussion for that reason alone, in addition to whatever merits they might have as literature.


COPAC covers the major British and Irish academic libraries, Melvyl the ten University of California campuses. I counted up the total number of campuses in the two systems with at least one holding of each book, limiting myself to print holdings (electronic and audio holdings were a bit disorganized in the databases, and spot checking suggested they didn't add much to the overall results since most campuses with electronic or audio also had print of the same work).

As always, corrections welcome!

Thursday, May 14, 2015

Moral Duties to Flawed Gods

Suppose that God exists and is morally imperfect. (I'm inclined to think that if a god exists, that god is not perfect.) If God has created me and sustains the world, I owe a pretty big debt to her/him/it. Now suppose that this morally imperfect God tells me to wear a blue shirt today instead of a brown one. No greater good would be served; it's just God's preference, for no particular reason. God tells me to do it, but doesn't threaten me with punishment if I don't -- she (let's say "she") just appeals to my sense of moral obligation: "I am your creator," she says, "and I work to sustain your whole universe. I'd like you to do it. You owe me!"

One way we might conceptualize a morally flawed god is this: We might be sims, or model playthings, in a world that is subject to the whims of some larger being with the power to radically manipulate or destroy it, and who therefore has sufficient powers to be properly conceptualized as a god by us. Alternatively, if technology advances sufficiently, we ourselves might create genuinely conscious rational beings who live as sims or playthings, and then we would be gods relative to them.

It is helpful, I think, to consider these issues simultaneously bottom up and top down -- both in terms of what we ourselves would owe to such a hypothetical god and in terms of what we, if we hypothetically gained divine levels of power over created beings, could legitimately demand of those beings. It seems a reasonable desideratum of a theory that the constraints be symmetrical: Whatever a flawed god could legitimately demand of us, we, if we had similar attributes in relation to beings we created, could legitimately demand of them; and contrapositively, whatever we could not legitimately demand of beings we created we should not recognize as demands a flawed god could make upon us, barring some relevant asymmetry between the situations.

Here are three possible approaches to God's authority to command:

(1.) Love of God and/or the good. Divine command theory is the view that we are obliged to do whatever God commands. Christian articulations of this view have typically assumed a morally perfect God, whom we obey out of love for him, or love of the good, or both (e.g., Adams 1999). A version of this view might be adapted to the case where God is morally flawed: We might still love her, and obey her from love (as one might obey another human out of love); or one might obey because one admires and respects the goodness of God and her commands, even if God is not perfectly good and this particular command is flawed.

(2.) Acknowledgement of debt. Other approaches to divine command theory emphasize God's power and our debt as God's creations (for example, Augustine: "Unless you turn to Him and repay the existence that He gave you... you will be wretched. All things owe to God, first of all, what they are insofar as they are natures" [cited here] and the conclusion of the Book of Job). A secular comparison might be the debt children owe to their parents for their creation and sustenance, for example as emphasized in the Confucian tradition.

(3.) Social contract theory. According to social contract theory, what gives (morally flawed) governmental representatives legitimate authority to command us is something like the fact that, hypothetically, the overall social arrangement is fair, and we would agree to it if it were offered from the right kind of neutral position. God might say: Universes require gods to create, command, and sustain them -- or at least your universe has required one -- and I am the god in that role, executing my powers in a manner that would be antecedently recognizable as fair. Surely you would agree, hypothetically, to the justice of the creation of your world under this general arrangement?

Now when I consider these possible justifications of a morally imperfect God's authority to command, what strikes me is that all three seem to justify only rather limited power. To see this, consider three types of command: (a.) the trivial and arbitrary, (b.) the non-trivial and arbitrary, and (c.) the non-arbitrary and non-trivial.

It is perhaps legitimate for a god to make trivial, arbitrary demands -- like to wear a blue shirt today rather than a brown -- and for a created being to satisfy them, in recognition of a personal relationship or a debt. Similarly legitimate, it seems, are non-arbitrary demands that God makes for excellent reasons, justifiable either interpersonally or through social contract theory.

My own sense, however -- does yours differ? -- is that arbitrary but non-trivial demands should be sharply limited. Suppose, for example, that God says she wants me to go out to the student commons and do a chicken dance -- not for any good reason but just as a passing minor whim, because she wants me to. I'd be embarrassed, but no serious consequences would ensue. My feeling is that God would not be in the right to make this sort of demand of me; nor would I be in the right to demand it of my creations, were I ever to create genuinely conscious beings over whom I had divine degrees of power.

It seems to me that would be wrong in the same way that it would be wrong for my mother or wife to ask this of me for no good reason: It would be a matter of someone's treating her own whims as of greater importance than my legitimate desires and interests. It would violate the principle of equality. But if that's correct -- if an imperfect god's whims don't trump my interests for that type of reason -- then in the relevant moral sense, we are God's equals.

You might say: If a god really did create us, our debt is enormous. Indeed it would be! But what follows? My parents created me, and they raised me through childhood, so my debt to them is also enormous; and my government paid for my education and my roads and my protection, so in a sense my government has also created and sustained me, and my debt to it is also enormous. However, once I have been created, I have a dignity and interests that even those who have created and sustained me cannot legitimately disregard to satisfy their whims. And I see no reason to suppose this limitation on the morally legitimate exercise of power is any less for gods than for fellow humans.

A morally perfect god might be different. Necessarily, such a god would not demand anything morally illegitimate. But I think a sober look at the world suggests that if there is any creating or sustaining god of substantial power, that god is far from morally perfect. If that god tells me never to mix clothing fibers or never to work on the sabbath, she had better also supply a good reason.

Related posts:

Our Possible Imminent Divinity (Jan. 2, 2014)
Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)

[image source]

    Monday, May 11, 2015

    Network Map of Philosophical SF Authors

    Andrew Higgins has done one of his beautiful network maps for my Philosophical SF authors list:

    [click to see full size] Andrew writes:

    This graph represents a network of science fiction authors and philosophers, with the authors linked to philosophers just in case the philosopher listed that author as philosophically interesting. Authors are labeled, and label size corresponds to the number of philosophers mentioning them. Label colors and positions are rough indicators of similarity. Colors represent groups of authors; as an intuitive gloss, if authors A1-An are the same color that means the connections among the As are ≥ their connections to authors in other groups. Author positions are determined by a combination of three forces - gravity, attraction, and repulsion - applied to the network until it has settled into a stable position (a local peak in the space of possible positions). All nodes gravitate to the center and repulse one another, and nodes are attracted just in case they are connected. So, positions and colors can be seen as weak indicators of similarity, whatever kind of similarity is highlighted by philosophers' choices.

    But, given the relatively small sample size and lack of strong modularity in the network, we should be cautious in inferring anything about these authors (or philosophers) based on their relative positions or colors.
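For the curious, the procedure Andrew describes -- central gravity, pairwise repulsion, attraction along edges, iterated until the layout settles -- is a standard force-directed algorithm. A toy sketch in Python (illustrative parameters and author names only; this is not his actual tool or settings):

```python
import random

def force_layout(nodes, edges, steps=300, gravity=0.01, attract=0.05, repulse=0.002):
    """Toy force-directed layout: every node gravitates toward the center
    and repels every other node; connected nodes additionally attract."""
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        for n in nodes:                      # gravity toward the origin
            force[n][0] -= gravity * pos[n][0]
            force[n][1] -= gravity * pos[n][1]
        for a in nodes:                      # pairwise repulsion
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                force[a][0] += repulse * dx / d2
                force[a][1] += repulse * dy / d2
        for a, b in edges:                   # attraction along edges
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += attract * dx; force[a][1] += attract * dy
            force[b][0] -= attract * dx; force[b][1] -= attract * dy
        for n in nodes:                      # move each node along its net force
            pos[n][0] += force[n][0]
            pos[n][1] += force[n][1]
    return pos

random.seed(0)
layout = force_layout(["LeGuin", "Borges", "Dick"], [("LeGuin", "Dick")])
```

After enough iterations, connected authors settle near one another while unconnected ones are pushed apart -- which is why position can serve as a weak similarity cue.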

    Friday, May 08, 2015

    Competing Perspectives on the Significance of One's Final, Dying Thought

    Here's a particularly unsentimental view about last, dying thoughts: Your dying thought will be your least important thought. After all (assuming no afterlife), it is the one thought guaranteed to have no influence on any of your future thoughts, or on any other aspect of your psychology.

    Now maybe if you express the thought aloud -- "I did not get my Spaghetti Os. I got spaghetti. I want the press to know this." -- or if your last thought is otherwise detectable by others, it will have an effect; but for this post let's assume a private last thought that influences no one else.

    A narrative approach to the meaning of life seems to recommend a different attitude toward last thoughts. If a life is like a story, you want it to end well! The ending of a story colors all that has gone before. If the hero dies resentful or if the hero dies content, that rightly changes our understanding of earlier events. It does so not only because we might now understand that all along the hero felt subtly resentful, but also because private deathbed thoughts, on this view, have a retrospective transformative power: An earlier betrayal, for example, now becomes a betrayal that was forgiven by the end (or it becomes one that was never forgiven). The ghost's appearance to Hamlet has one type of significance if Hamlet ends badly and quite a different significance if Hamlet ends well. On the narrative view, the significance of events depends partly on the future. Maybe this is part of what Solon had in mind when he told King Croesus not to call anyone happy until they die: A horrible enough disaster at the end, maybe, can retrospectively poison what your marriage and seeming successes had really amounted to. Thus, maybe the last thought is like the final sentence of a book: Ending on a thought of love and happiness makes your life a very different story than does ending on a thought of resentment and regret.

    The unsentimental view seems to give too little significance to one's last thought -- I, at least, would want to die on a positive note! -- but the narrative view seems to give one's last thought too much significance. Not knowing someone's last thought doesn't deprive us of the significance of their life in the way that not knowing a story's last sentence deprives us of the significance of the story. Also, the last sentence of a story is a contrived feature of a type of work of art, a sentence which the work is designed to render highly significant; while a last thought might be trivially unimportant by accident (if you're thinking about what to have for lunch, then hit by a truck you didn't see coming) or it might not reflect a stable attitude (if you're grumpy from pain).

    Maybe the right answer is just a compromise: The last thought is not totally trivial because it has some narrative power, but life isn't so much like a narrative that it has last-sentence-of-a-story-like power? Life has narrative elements, but the independent pieces also have a power and value that isn't hostage to future outcomes.

    Here's another possibility, which interacts with the first two: Maybe one's last thought is an opportunity. But what kind of opportunity it is will depend on whether last thoughts can retrospectively change the significance of earlier events.

    On the narrative view, it is an opportunity to -- secretly! with an almost magical time-piercing power -- make it the case that Person A was forgiven by you or never forgiven, that Action B was regretted or never regretted, etc.

    On the unsentimental view, in contrast, it is an opportunity to think things that, had you thought them earlier, would have been too terrible to think because of their possible impact on your future thoughts. (Compare: It's also an opportunity to explore the neuroscience of decapitation.) I don't know that we have such a reservoir of unthinkable thoughts that we refuse to make conscious for fear of the effects of thinking them. That sounds pretty Freudian! But if we do, here's the perfect opportunity, perhaps, to finally admit to yourself that you never really loved Person A or that your life was a failure. Maybe if you thought such things and then remembered those thoughts the next day, bad consequences would follow. But now there can be no such bad consequences the next day; and if you reject the narrative view, there are no retrospective bad consequences on earlier events either. So it's your chance, if you can grab it, to drop your self-illusions and glare at the truth.

    Writing this now, though, that last view seems too dark. I'd rather die under illusion, I think, than dispel the illusion at the last moment, when it's too late to do anything about it. Maybe that's the better narrative. Or maybe truth is not the most important thing on the deathbed.

    [image source]

    Thursday, May 07, 2015

    List of Philosophical Science Fiction / Speculative Fiction

    I've just updated my list of "philosophically interesting" SF -- about 400 total recommendations from 40 contributors, along with brief "pitches" for each work that point toward the work's philosophical interest. All of the contributors are either professional philosophers or professional SF writers with graduate training in philosophy.

    The version sorted by author (or director, for movies) is organized so that the most frequently recommended authors appear first on the list. What SF authors are the biggest hits with the philosophy crowd? Now you know! (Or you will know, shortly after you click.)

    There's also a version sorted by recommender. If you scan through to find works you love, then you can see which contributors recommended those works. Since you have overlapping tastes, you might want to especially check out their other recommendations.

    Tuesday, May 05, 2015

    Momentary Sage

    My newest piece of short speculative fiction, Momentary Sage, has just come out in The Dark. I wanted to do two things with the story.

    First: I wanted to envision the aftermath of A Midsummer Night's Dream. In the main plot of Shakespeare's play, Lysander and Hermia want to marry, but Hermia has been promised to Demetrius whom she loathes. The problem is resolved with a fairy love spell: Demetrius is tricked into loving Helena, to whom he had previously been engaged and who still loves him. All ends happily, with Lysander marrying Hermia and Demetrius marrying Helena. But dear poet Willy, that's too cheap a fix! Demetrius can't just stay permanently tricked into love, happily ever after, can he? Midnight fairy magic always causes more problems than it solves, for that is the unbreakable law of fairies. (Just ask Susanna Clarke.)

    Second: I wanted to explore a certain simplistic parody of Buddhism. Demetrius's love spell ends the next day. But his revenge is this: Hermia's child, Sage, is a philosopher baby who believes that non-existence is preferable to suffering. Since he disbelieves in the reality of an extended self, to determine whether life is worth living at any moment, Sage simply weighs up his total joy and suffering at that moment. As soon as his current suffering outweighs his current joy, he attempts to commit suicide, employing a sharp magic tusk he was born with for just that purpose. Hermia and Lysander must thus keep constant watch on Sage, physically pinning him down the moment he starts feeling frustrated or colicky.

    Though drawn in starker colors, this is just the predicament confronting all parents when their children would rather cast away future interests than accept a little short-term suffering. Is there a rational argument that can convince someone to value the future, if they don't already? Sage and Lysander have a go at it, but Sage always wins. He is the better philosopher.

    It's a piece of dark fantasy, verging on horror -- so if you don't enjoy that genre, stand warned.

    [image source]

    Wednesday, April 29, 2015

    Duplicating the Universe

    I've been thinking about two forms of duplication. One is duplication of the entire universe from beginning to end, as envisioned in Nietzsche's eternal return (cf. Poincare's recurrence theorem on a grand scale). The other is duplication within an eternal (or very long) individual life (goldfish-pool immortality). In both cases, I find myself torn among four different evaluative perspectives.

    For color, imagine a god watching our universe from Big Bang to heat death. At the end, this god says, "In total, that was good. Replay!" Or imagine an immortal life in which you loop repeatedly (without remembering) through the same pleasures over and over.

    Consider four ways of thinking about the value of duplication:

    1. The summative view: Duplicating a good thing doubles the world's goodness, all else being equal; and in particular duplicating the universe doubles the total sum of goodness. There's twice as much total happiness overall, for example. Although Nietzsche rejected the ethics of happiness-summing, something in the general direction of the summative view seems to be implicit in his suggestion that if we knew that the universe repeats infinitely, that would add infinite weight to every decision.

    2. The indifference view: Repetition adds no value or disvalue, if it is a true repetition (no memory, no development, no audience-god watching saying "oh, I remember this... here comes the good part!"). You might even think, if the duplication is perfect enough, that there aren't even two metaphysically distinct things (Leibniz's identity of indiscernibles).

    3. The diminishing returns view: A second run-through is good, but it doesn't double the goodness of the first run-through. For example, the total subjectively experienced happiness might be double, but there's something special about being the first person on the (or "a"?) moon, which is something that never happens in the second run -- and likewise something special about being the last episode of Seinfeld (or "Seinfeld"?) and about being the only copy of a Van Gogh painting (or a "Van Gogh" painting?), which the first run loses if a second run is added.

    4. The precious uniqueness view: Expanding the last thought from the diminishing returns view, one might think that duplication somehow cheapens both runs, and that it's better to do things exactly once and be done.

    Which of these four views is the best way of thinking about cosmic value (or the value of an extended life)?

    You might think that this kind of question isn't amenable to rational argumentation -- that there is no discoverable fact of the matter about whether doubling is better. And maybe that's right. But consider this: Universe A is just like our universe. Universe B is just like our universe, but life on Earth never advances past microbial levels of complexity. If you think Universe A is overall better, or more creation-worthy (or, if you're enough of a pessimist, overall worse) than Universe B, then you think there are facts about the relative value of universes -- in which case, plausibly, there should also be some fact about whether a duplicative universe is a lot better, a little better, the same, or worse than a single-run universe. Yes?

    There is, I think, at least a chance that this question, or a relative of it, will become a question of practical ethics in the future -- if we ever become "gods" who create universes of genuinely conscious people running inside of simulated environments (as I discuss here and here), or if we ever have the chance to "upload" into paradises of repetitive bliss.

    [image source]

    Monday, April 27, 2015

    How to Make Van Gogh's "Starry Night" Undulate

    Not sure the original source of this one (maybe notbecauseitsironic on Reddit?).

    First, look at the center of the image below for about 30 seconds.

    Then look at Van Gogh's "The Starry Night".
    The technique also achieves interesting results when applied to Kinkade:
    [HT Mariano Aski]

    Thursday, April 23, 2015

    New Essay: Death and Self in the Incomprehensible Zhuangzi

    Every nineteen years, I should write a new essay on the ancient Chinese philosopher Zhuangzi, don't you think? This one should tide me over until 2034, then!

    Death and Self in the Incomprehensible Zhuangzi

    The ancient Chinese philosopher Zhuangzi defies interpretation. This is an inextricable part of the beauty and power of his work. The text – by which I mean the “Inner Chapters” of the text traditionally attributed to him, the authentic core of the book – is incomprehensible as a whole. It consists of shards, in a distinctive voice – a voice distinctive enough that its absence is plain in most or all of the “Outer” and “Miscellaneous” Chapters, and which I will treat as the voice of a single author. Despite repeating imagery, ideas, style, and tone, these shards cannot be pieced together into a self-consistent philosophy. This lack of self-consistency is a positive feature of Zhuangzi. It is part of what makes him the great and unusual philosopher he is, defying reduction and summary.
    Full draft here.

    As always, comments, objections, suggestions welcome, either by email or as comments on this post.

    See this post from March 5 for a briefer treatment of the same themes.

    Wednesday, April 22, 2015

    Rules of War, the Card Game, with Deck Management

    I think you'll agree that few games are as tedious as the card game war. Unfortunately, my eight-year-old daughter likes the damned thing. So I cooked up some new rules, which make the game considerably more interesting and quicker to resolve.

    (What does this have to do with the themes of this blog? Um. If widely adopted, the new rules will substantially reduce humanity's card-game-related dyshedons!)

    War with Deck Management

    Simple Rules for Two Players:

    Deal the 52-card deck face down, 26 cards to each player. As in standard war, each player turns their top card face up on the table. High card wins the trick (ace high, suit ignored). The winner of the trick collects the cards face up in a pile. In case of a tie, there's a "war", and each player lays three "soldier" cards face down then one "general" face up. The highest general wins all ten cards. If the generals tie, repeat. If there aren't enough face-down cards to play out the war, each player shuffles their face-up stack of won tricks and draws randomly from that stack to complete the war, then turns the stack back face up. If a player has insufficient cards to play out the war, that player loses the game.

    When both players are out of face-down cards, one round is over. Each player counts their face-up cards openly, for all to see. The player with more cards then discards enough cards to equal the number of cards in the pile of the player with fewer cards. For example, if after Round 1, Player A has 30 cards and Player B has 22, then Player A discards 8 cards of his or her choice, so they both have 22.

    Each player then turns their stack face down and shuffles, then plays Round 2 by the same rules as Round 1. After all cards are face up, the player with more cards again discards to match the number of cards in the stack of the player with fewer. This is repeated until one player runs out of cards and loses.
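If you'd like to see how quickly the game resolves, here's a rough Python simulation of the two-player rules (simplifying assumptions: suits are ignored, the round ends when either face-down deck runs out, and the discarding player always keeps their highest cards):

```python
import random

def draw(down, up, n):
    """Draw n cards for a war: from the face-down deck first, then at
    random from the face-up pile of won tricks. None = can't complete."""
    cards = [down.pop() for _ in range(min(n, len(down)))]
    while len(cards) < n and up:
        cards.append(up.pop(random.randrange(len(up))))
    return cards if len(cards) == n else None

def play_game(max_rounds=10_000):
    deck = list(range(2, 15)) * 4          # ranks only, ace high (14)
    random.shuffle(deck)
    down = [deck[:26], deck[26:]]
    up = [[], []]
    for _ in range(max_rounds):
        while down[0] and down[1]:         # play tricks until the round ends
            a, b = down[0].pop(), down[1].pop()
            pot = [a, b]
            while a == b:                  # war: 3 soldiers down, 1 general up
                c0 = draw(down[0], up[0], 4)
                if c0 is None:
                    return 1               # player 0 can't complete the war
                c1 = draw(down[1], up[1], 4)
                if c1 is None:
                    return 0
                pot += c0 + c1
                a, b = c0[-1], c1[-1]      # compare the generals
            up[0 if a > b else 1].extend(pot)
        for i in (0, 1):                   # sweep any leftover face-down cards
            up[i].extend(down[i])
            down[i].clear()
        n = min(len(up[0]), len(up[1]))    # discard phase: richer matches poorer
        if n == 0:
            return 0 if up[0] else 1       # a player with no cards loses
        for pile in up:
            pile.sort()
            del pile[:len(pile) - n]       # assumption: discard the lowest cards
        down = [pile[:] for pile in up]
        for d in down:
            random.shuffle(d)
        up = [[], []]
    return -1                              # safety cap; shouldn't normally trigger

random.seed(2015)
print("winner: player", play_game())
```

Running it a few hundred times confirms the first advantage claimed below: games typically finish in a handful of rounds, since every unequal round shrinks both decks.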

    Advantages over Standard War:

  • The game resolves much faster!
  • The winner of each round enjoys discarding away low cards instead of accumulating a bunch of losers.
  • In later rounds, wars are more common because the low cards are removed from the decks, leaving a smaller range of cards to match.
  • Although aces are important, the original distribution of the aces isn't as important as in standard war. This is partly because there are more wars, so there are more chances for aces to change hands as soldiers, and partly because a generally strong deck that wins more total cards gives a major advantage in the discard phase.
    Advanced Rules with Deck-Order Management:

    Rules as above, except that players may arrange their face down cards in any order they wish. Once the cards are arranged face down, they can't be rearranged, and any wars that require drawing from the face-up pile are still based on random draw from a face-down shuffle.

    Tactics: Since the top card will never be a soldier, you might want to make it your ace. But then if the other player does the same, you'll have a war. Anticipating that, you might make cards 2-4 low and card 5 high. But maybe you know your general will lose if the other player employs the same tactics, so you might surprise them by putting your 2 on top, so that the ace you think they'll play will be wasted gathering a low card. Etc.

    Rules for More Than Two Players:

    Divide the deck equally face down among the players. Any leftover cards go face up in the middle, to be collected by the winner of the first trick. High card wins the trick. If the high card is a tie, then the two (or more) players with the high card play a war. Any remaining player sits out the war, playing neither soldiers nor general. Winner takes all cards.

    The round is over when at most one player has face down cards remaining. Any player out of face down cards before the end of the round sits out the remainder of the round, neither losing nor winning cards. At the end of the round each player counts their total cards. The player with the most cards discards to reduce to the number of cards held by the player with the second most. For example, if after Round 1 Player A has 22, Player B has 18, and Player C has 12, then Player A discards 4 so that Players A and B have 18 and Player C has 12.

    When a player is out of cards, that player is out. As in the two-player version, this can happen either because the player wins no tricks in a round or because the player does not have enough cards to complete a war. The game is over when all but one player is out.

    [image source]

    Thursday, April 16, 2015

    How to Disregard Extremely Remote Possibilities

    In 1% Skepticism, I suggest that it's reasonable to have about a 1% credence that some radically skeptical scenario holds (e.g., this is a dream or we're in a short-term sim), sometimes making decisions that we wouldn't otherwise make based upon those small possibilities (e.g., deciding to try to fly, or choosing to read a book rather than weed when one is otherwise right on the cusp).

    But what about extremely remote possibilities with extremely large payouts? Maybe it's reasonable to have a one in 10^50 credence in the existence of a deity who would give me at least 10^50 lifetimes' worth of pleasure if I decided to raise my arms above my head right now. One in 10^50 is a very low credence, after all! But given the huge payout, if I then straightforwardly apply the expected value calculus, such remote possibilities might generally drive my decision making. That doesn't seem right!

    I see three ways to insulate my decisions from such remote possibilities without having to zero out those possibilities.

    First, symmetry:
    My credences about extremely remote possibilities appear to be approximately symmetrical and canceling. In general, I'm not inclined to think that my prospects will be particularly better or worse, due to the influence of extremely unlikely deities considered as a group, if I raise my arms than if I do not. More specifically, I can imagine a variety of unlikely deities who punish and reward actions in complementary ways -- one punishing what the other rewards and vice versa. (Similarly for other remote possibilities of huge benefit or suffering, e.g., happening to rise to an infinite Elysium if I step right rather than left.) This indifference among the specifics is partly guided by my general sense that extremely remote possibilities of this sort don't greatly diminish or enhance the expected value of such actions. I see no reason not to be guided by that general sense -- no argumentative pressure to take such asymmetries seriously in the way that there is some argumentative pressure to take dream doubt seriously.

    Second, diminishing returns:
    Bernard Williams famously thought that extreme longevity would be a tedious thing. I tend to agree instead with John Fischer that extreme longevity needn't be so bad. But it's by no means clear that 10^20 years of bliss is 10^20 times more choiceworthy than a single year of bliss. (One issue: If I achieve that bliss by repeating similar experiences over and over, forgetting that I have done so, then this is a goldfish-pool case, and it seems reasonable not to think of goldfish-pool cases as additively choiceworthy; alternatively, if I remember all 10^20 years, then I seem to have become something radically different in cognitive function than I presently am, so I might be choosing my extinction.) Similarly for bad outcomes and for extreme but instantaneous outcomes. Choiceworthiness might be very far from linear with temporal bliss-extension for such magnitudes. And as long as one's credence in remote outcomes declines sharply enough to offset increasing choiceworthiness in the outcomes, then extremely remote possibilities will not be action-guiding: a one in 10^50 credence of a utility of +/- 10^30 is negligible.
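    To see how a sharply declining credence can swamp a payout whose choiceworthiness grows sublinearly, here is a minimal Python sketch using the post's illustrative figures (the 10^30 utility cap is just the number mentioned above, not a principled estimate):

```python
# Even a huge nominal payout contributes almost nothing to expected
# value if credence falls off sharply enough relative to choiceworthiness.
credence = 1e-50             # one in 10^50
nominal_payout = 1e50        # lifetimes of bliss, taken at face value
capped_utility = 1e30        # choiceworthiness after diminishing returns

naive_ev = credence * nominal_payout       # about 1.0 -- would sway decisions
discounted_ev = credence * capped_utility  # about 1e-20 -- negligible

print(naive_ev, discounted_ev)
```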

    Third, loss aversion:
    I'm loss averse rather than risk neutral. I'll take a bit of a risk to avoid a sure or almost-sure loss. And my life as I think it is, given non-skeptical realism, is the reference point from which I determine what counts as a loss. If I somehow arrived at a one in 10^50 credence in a deity who would give me 10^50 lifetimes of pleasure if I avoided chocolate for the rest of my life (or alternatively, a deity who would give me 10^50 units of pain if I didn't avoid chocolate for the rest of my life), and if there were no countervailing considerations or symmetrical chocolate-rewarding deities, then on a risk-neutral utility function, it might be rational for me to forego chocolate evermore. But foregoing chocolate would be a loss relative to my reference point; and since I'm loss averse rather than risk neutral, I might be willing to forego the possible gain (or risk the further loss) so as to avoid the almost-certain loss of life-long chocolate pleasure. Similarly, I might reasonably decline a gamble with a 99.99999% chance of death and a 0.00001% chance of 10^100 lifetimes' worth of pleasure, even bracketing diminishing returns. I might even reasonably decide that at some level of improbability -- one in 10^50? -- no finite positive or negative outcome could lead me to take a substantial almost-certain loss. And if the time and cognitive effort of sweating over decisions of this sort itself counts as a sufficient loss, then I can simply disregard any possibility where my credence is below that threshold.
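    The loss-aversion point can be made concrete with a prospect-theory-style value function; the parameters (loss-aversion coefficient 2.25, curvature 0.88) are Tversky and Kahneman's standard estimates, used here only for illustration:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style value: diminishing sensitivity (alpha < 1)
    and losses weighted lam times more heavily than equal gains."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Forgoing chocolate: an almost-certain loss (say, 1 unit of lifelong
# pleasure) against a one-in-10^50 shot at 10^50 units.
credence, payout, sure_loss = 1e-50, 1e50, 1.0

risk_neutral = credence * payout - sure_loss          # about 0: on the cusp
loss_averse = credence * pt_value(payout) + pt_value(-sure_loss)

print(risk_neutral, loss_averse)  # the loss-averse value is decisively negative
```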

    These considerations synergize: the more symmetry and the more diminishing returns, the easier it is for loss aversion to inspire disregard. Decisions at credence one in 10^50 are one thing, decisions at credence 0.1% quite another.

    Wednesday, April 15, 2015

    Dialogues on Disability

    ... a new series of interviews, by Shelley Tremain, launches today at the Discrimination and Disadvantage blog with inaugural guest Bryce Huebner.

    One interesting feature of the interview is Bryce's discussion of whether his celiac disease should be viewed as a disability. There is a broad sense in which virtually everyone is disabled -- we are nearsighted, have allergies, experience back pain, etc. Yet, given our social structures, many of these disabilities are hardly disabilities at all. If I lived in a world in which corrective lenses were inaccessible, my 20/500 nearsightedness would have a huge impact on my life. As it is, I pop on my glasses and no problem! (In fact, I'm terrific at reading tiny print that eludes most others my age.) When I was in southern China a couple years ago, I had an allergic reaction to shellfish almost every day of my visit -- the food is so pervasive in the culture that even when it's not an ingredient, some residue often gets mixed in -- but in southern California, no problem. Conversely, in some culinary cultures, Bryce's celiac disease might hardly manifest; and we might imagine cultures or subcultures where being in a wheelchair is similarly experienced as only a minor inconvenience.

    Monday, April 13, 2015

    Comment Moderation Being Implemented

    I will try to approve comments within 24 hours of submission. I'm sorry to have to do this! Eric

    Wednesday, April 08, 2015

    Blogging and Philosophical Cognition

    Yesterday or today, my blog got its three millionth pageview since its launch in 2006. (Cheers!) And at the Pacific APA last week, Nancy Cartwright celebrated "short fat tangled" arguments over "tall skinny neat" arguments. (Cheers again!)

    To see how these two ideas are related, consider this picture of Legolas and his friend Gimli Cartwright. (Note the arguments near their heads. Click to enlarge if desired.) [modified from image source]

    Legolas: tall, lean, tidy! His argument takes you straight like an arrowshot all the way from A to H! All the way from the fundamental nature of consciousness to the inevitability of Napoleon. (Yes, I'm looking at you, Georg Wilhelm Friedrich.) All the way from seven abstract Axioms to Proposition V.42, "it is because we enjoy blessedness that we are able to keep our lusts in check". (Sorry, Baruch, I wish I were more convinced.)

    Gimli: short, fat, knotty! His argument only takes you from versions of A to B. But it does it three ways, so that if one argument fails, the others remain. It does so without need of a string of possibly dubious intermediate claims. And finally, the different premises lend tangly sideways support to each other: A2 supports A1, A1 supports A3, A3 supports A2. I think of Mozi's dozen arguments for impartial concern or Sextus's many modes of skepticism.

    In areas of mathematics, tall arguments can work -- maybe the proof of Fermat's last theorem is one -- long and complicated, but apparently sound. (Not that I would be any authority.) When each step is unshakeably secure, tall arguments go through. But philosophy tends not to be like that.

    The human mind is great at determining an object's shape from its shading. The human mind is great at interpreting a stream of incoming sound as a sly dig on someone's character. The human mind is stupendously horrible at determining the soundness of philosophical arguments, and also at determining the soundness of most individual stages within philosophical arguments. Tall, skinny philosophical arguments -- this was Cartwright's point -- will almost inevitably topple.

    Individual blog posts are short. They are, I think, just about the right size for human philosophical cognition: 500-1000 words, enough to put some flesh on an idea, making it vivid (pure philosophical abstractions being almost impossible to evaluate for multiple reasons), enough to make one or maybe two novel turns or connections, but short enough that the reader can get to the end without having lost track of the path there.

    In the aggregate, blog posts are fat and tangled: Multiple posts can get at the same general conclusion from diverse angles. Multiple posts can lend sideways support to each other. I offer, as an example, my many posts skeptical of philosophical expertise (of which this is one): e.g., here, here, here, here, here, here.

    I have come to think that philosophical essays, too, often benefit from being written almost like a series of blog posts: several shortish sections, each of which can stand semi-independently and which in aggregate lead the reader in a single general direction. This has become my metaphilosophy of essay writing, exemplified in "The Crazyist Metaphysics of Mind" and "1% Skepticism".

    Of course there's also something to be said for Legolas -- for shooting your arrow at an orc halfway across the plain rather than waiting for it to reach your axe -- as long as you have a realistically low credence that you will hit the mark.

    Tuesday, March 31, 2015

    Percentages of Women on the Program of the Pacific APA

    Tomorrow I head off to the Pacific Division meeting of the American Philosophical Association in Vancouver. (Thursday I'll be presenting my critique of Quassim Cassam's Self-Knowledge for Humans. Saturday, I'll be presenting on blameworthiness for implicit attitudes.) Given my interest in professional philosophy's skewed gender ratios (e.g. here and here), I thought I'd do a rough coding of the Pacific APA main program by gender. Alongside gender, I also coded role in the program and whether the session topic is ethics (including political philosophy).

    I coded gender conservatively, declining to code names that I perceived as gender ambiguous (e.g., "Kris", "Jamie") or that I did not associate with a clear gender given my particular cultural background (most Asian names and some European names or unusual names), except when I had personal knowledge of the person's gender. As a result 13% of the names remained unclassified. In a more careful coding, I would try to get the exclusions down below 5%.

    With that caveat, I found that 275/856 (32%) of Pacific APA main program participants were women. Although this may sound low, it is substantially higher than the proportion of women in the profession overall, which is typically estimated to be in the low 20%'s in North America (e.g., here). (275/856 > 21%, two-tailed exact p < .001; even classifying all ambiguous names as men yields 28% vs. 21%, exact p < .001).
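    The comparison is easy to check with a one-proportion z test in Python (the post reports an exact test; this normal approximation gives the same verdict):

```python
import math

def one_prop_z(successes, n, p0):
    """Normal-approximation z test of an observed proportion against a
    null proportion p0; returns (z, two-tailed p)."""
    phat = successes / n
    z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p from the normal tail
    return z, p

z, p = one_prop_z(275, 856, 0.21)
print(round(z, 2), p)  # z is roughly 8, p far below .001
```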

    These data can't fully be explained by recent changes in the proportion of women entering the profession: According to the Survey of Earned Doctorates, 27% of philosophy PhDs in 2013 were women (also 27% in 2012). So even if newly-minted PhDs are more likely to attend conferences, that wouldn't raise the percentage of women to 32%. Affirmative action might be playing a role -- probably other factors too. Plenty of room for speculation.

    Since it's often thought that the gender distribution is closer to equal in ethics than in other areas of philosophy, I also coded sessions as "ethics" vs. "non-ethics" vs. "excluded" (excluded sessions being topically borderline or mixed or concerning general issues in the profession). I found the expected divergence: 38% of the ethics program participants were women, compared to 28% in non-ethics (Z = 3.0, p = .003).
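    For readers who want the mechanics, here is the standard pooled two-proportion z test. The subgroup counts below are hypothetical, chosen only to roughly match the reported 38% and 28%, since the post gives percentages rather than raw counts:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic, as standardly defined."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 99/260 ethics participants women (38%),
# 140/500 non-ethics participants women (28%).
z = two_prop_z(99, 260, 140, 500)
print(round(z, 2))  # comfortably past the 1.96 threshold for p < .05
```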

    Finally, I was interested to look at women's representation in different roles on the program. Some roles are much more prestigious than others: being the author of a book targeted for an author-meets-critics session is much more prestigious than chairing a session. I coded five levels of prestige:

  • 1: Author in an author-meets-critics, or award winner, or invited symposium speaker with at least one commentator focused exclusively on your work.
  • 2: Invited symposium speaker not meeting the criteria above, or "critic" at an author-meets-critics.
  • 3: Invited symposium commentator.
  • 4: Refereed colloquium speaker, or colloquium commentator.
  • 5: Session chair.
  • Excluded: APA organized sessions (e.g., on finding a community college position) and poster presentations (too few for meaningful analysis).
    Of the people in the most prestigious roles in the program (Category 1), 13/52 (25%) are women. Although this appears to be a bit below the 32% representation of women in all other roles combined, this sample size is too small to permit any definite conclusions (one-proportion CI 14%-39%).

    In the larger group of people with fairly prestigious roles (Category 2), 59/162 (36%) are women, similar to women's overall representation in the program. The group of symposium commentators was small -- 15/44 (34%) -- but in line with the overall numbers. The proportion of women presenting (usually anonymously refereed) colloquium papers was 85/310 (27%, CI 23%-33%), and the proportion of women chairing sessions was 77/221 (35%, CI 29%-42%). Thus, I found no clear tendency for women to appear disproportionately at either a higher or lower level of prestige than men.
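    These intervals can be approximately reproduced with a Wilson score interval (a guess on my part -- the post doesn't say which method was used; Wilson matches the colloquium figures, though the Category 1 interval may have come from an exact method):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    phat = successes / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(85, 310)  # women among colloquium speakers
print(round(lo * 100), round(hi * 100))  # 23 33
```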

    Analysis of more years' data, which I hope to explore in the future, will give more power to detect smaller effect sizes, and will also allow temporal analysis, to see how representation of women in the profession has been changing over time. Ideas welcome!

    Wednesday, March 25, 2015

    "A" Is Red, "I" Is White, "X" Is Black -- Um, Why?

    This is just the kind of dorky thing I think is cool. Check out this graph of the color associations for different letters for people with grapheme-color synesthesia.

    [click on the picture for full size, if it's not showing properly]

    This is from a sample of 6588 synesthetes in the US, reported in Witthoft, Winawer, and Eagleman 2015. Presumably, they're not talking to each other. Yet there's pretty good agreement that "A" is red, "X" is black, and "Y" is yellow. But you knew that already, right?

    Now some of these results seem partly explicable: "Y" is yellow, maybe, because the word "yellow" starts with "Y". That might also work for "R" red, "B" blue, and "G" green. For "A" I think of the big red apple on the "A is for apple" posters that ubiquitously decorate kindergarten classrooms. But "O" is not particularly associated with orange in this chart, nor "W" with white. And why are "X" and "Z" black? Because we're tired because it's near the end of the alphabet and our eyelids are starting to droop doesn't seem like a good answer. (Does it?)

    You might wonder whether it's only synesthetes who have this consensus of associations, and how stable such associations are over time or between countries.

    You're in luck, then, because here's another cool chart, from Australia in 2005!

    [again, click for clearer view]

    The colored bars are synesthetic respondents and the hatched bars are non-synesthetic respondents. The patterns are similar between synesthetes and non-synesthetes, but maybe with the non-synesthetes tending toward stronger associations between the color and the initial letter of the color word. Furthermore, again "A" is red, "I" is white, and "X" and "Z" are black. US and Australian synesthetes seem to agree that "O" is white, but the Australian non-synesthetes like their "O" orange. For some reason, "D" is now brown (47%!).

    There are some older US data from the underappreciated early introspective psychologist Mary Whiton Calkins in her classic 1893 paper on synesthesia. [Pop quiz: Who are the only three people to have been president of both the American Psychological Association and the American Philosophical Association? Answer: William James, John Dewey, and Mary Whiton Calkins.] She reports that synesthetes tend to associate "I" with black and "O" with white. "O" being white matches the synesthete reports from the US and Australia in 2015 and 2005, but Calkins's black "I" is different. Calkins reports this possible explanation for the whiteness of "O", from one of her participants, seeming to find it plausible: O "= cipher = blank = sheet of white paper".

    Witthoft et al. 2015 found that almost a sixth of their participants born in the US in the late 1970s (but not those born before 1967) seem to have letter-color associations that match much better than chance with the colors of the letters of this then-popular magnet toy:

    [image source]

    Neat finding. Of course, the darned toy has "X" purple and "Z" orange, so it's all wrong!

    Brang, Rouw, Ramachandran and Coulson 2011 find a weak tendency for similarly-shaped letters to associate to similar colors in a US sample. Irish-based Barnett et al. 2008 and British-based Simner et al. 2015 find broadly similar patterns to the other recent English-language populations.

    Spector and Maurer 2011 find that even pre-literate English-speaking Canadian toddlers associate "O" and "I" with white and "X" and "Z" with black, though they do not share older participants' associations of "A" with red, "B" with blue, "G" with green, and "Y" with yellow. They hypothesize that jagged shapes ("X" and "Z") might be more likely to have shaded portions in a natural environment than non-jagged shapes ("O" and "I"), and that other, later associations might be language based. However, color maps from Swiss research on German-language synesthetes (Beeli, Esslen, and Jaencke 2007) show no such relationship (see the chart on p. 790) -- for example with more participants associating "X" with white or light gray than with black or dark gray (though Simner et al. have a German subset which does show black associations with "X" and "Z"). Beeli et al. find a weak tendency for higher frequency letters to be associated with higher saturation colors in a German-language sample. Rouw et al. 2014 found that Dutch and English-speaking non-synesthetic participants had similar associations for "A" (red), "B" (blue), "D" (brown), "E" (yellow), "I" (white), and "N" (brown). Hindi participants, with their different alphabet, had a rather different set of associations -- though the first letter of the Hindi alphabet was also associated with red. They speculate that the first letter in each alphabet gets a "signal" color.

    Okay, so now you know!

    Let me leave you then, with this highly unnatural thought: