Thursday, October 01, 2015

Against the "Still Here" Reply to the Boltzmann Brains Problem

I find the Boltzmann Brain skeptical scenario interesting. I've discussed it in past posts, as well as in this paper, which I'll be presenting in Chapel Hill on Saturday.

A Boltzmann Brain, or "freak observer," is a hypothetical self-aware entity that arises from a low-likelihood fluctuation in a disorganized system. Suddenly, from a chaos of gases, say, 10^27 atoms just happen to converge in exactly the right way to form a human brain thinking to itself, "I wonder if I'm a Boltzmann Brain". Extremely unlikely. But, on many physical theories, not entirely impossible. Given infinite time, perhaps inevitable! Some cosmological theories seem to imply that Boltzmann Brains vastly outnumber ordinary observers.

This invites the question: might I be a Boltzmann Brain?

The idea started getting attention in the physics community in the late 2000s. One early response, which seems to me superficially appealing but unable to withstand scrutiny, is what I'll call the Still Here response. Here's how J. Richard Gott III put it in 2008:

How do I know that I am an ordinary observer, rather than just a BB [Boltzmann Brain] with the same experiences up to now? Here is how: I will wait 10 seconds and see if I am still here. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ... Yes I am still here. If I were a random BB with all the perceptions I had had up to the point where I said "I will wait 10 seconds and see if I am still here," which the Copernican Principle would require -- as I should not be special among those BB's -- then I would not be answering that next question or lasting those 10 extra seconds.

There's also a version of the Still Here response in Max Tegmark's influential 2014 book:

Before you get too worried about the ontological status of your body, here's a simple test you can do to determine whether you're a Boltzmann brain. Pause. Introspect. Examine your memories. In the Boltzmann-brain scenario, it's indeed more likely that any particular memories that you have are false rather than real. However, for every set of false memories that could pass as having been real, very similar sets of memories with a few random crazy bits tossed in (say, you remembering Beethoven's Fifth Symphony sounding like pure static) are vastly more likely, because there are vastly more disembodied brains with such memories. This is because there are vastly more ways of getting things almost right than of getting them exactly right. Which means that if you really are a Boltzmann brain who at first thinks you're not, then when you start jogging your memory, you should discover more and more utter absurdities. And after that you'll feel your reality dissolving, as your constituent particles drift back into the cold and almost empty space from which they came.

In other words, if you're still reading this, you're not a Boltzmann brain (pp. 307-308).

I see two problems with the Still Here response.

First, we can reset the clock. While after ten seconds I could ask the question "am I a Boltzmann Brain who has already lasted ten seconds?", that question is not the sharpest form of the skeptical worry. A sharper question would be this: "Am I a Boltzmann Brain who came into existence just now with a false memory of having counted out ten seconds?" In other words, there seems to be nothing that prevents the Boltzmann Brain skeptic from restarting the clock at will. Similarly, a Boltzmann Brain might come into existence thinking that it had just finished introspecting its memories Tegmark-style, having found them coherent. That's the possibility that the Boltzmann Brain skeptic will be worried about, after having completed (or seeming to have completed) Tegmark's test. The Still Here response begs the question, or argues in a circle, by assuming that we can have veridical memories of implementing such tests over the course of tens of seconds; but it is exactly the veridicality of such memories, even over short durations, that the Boltzmann Brain hypothesis calls into doubt.

Second, this response ignores the base rate of Boltzmann Brains. It's widely assumed that if there are Boltzmann Brains, they might be vastly more numerous than normally embodied observers. For example, a universe might produce a finite number of normal observers and then settle into an infinitely enduring high-entropy state that gives rise, at extremely long intervals, to an infinite number of Boltzmann Brains. Since infinitude is hard to deal with, let's hypothesize a cosmos with a googolplex (10^(10^100)) of Boltzmann Brains for every normal observer. Given some sort of indifference principle, the Boltzmann Brain argument goes, I should initially assign a 1-in-a-googolplex chance to being a normal observer instead of a Boltzmann Brain. Not good. But now, what are the odds that a Boltzmann Brain can hold it together for ten seconds without lapsing into incoherence? Tiny! Let's assume one in a googol (10^100). The exact number doesn't matter. Setting aside worries about resetting the clock, let's assume that I now find that I have indeed endured coherently for ten seconds. What should be my new odds of being a normal observer rather than a Boltzmann Brain? Better than 1-in-a-googolplex. Yay! But better only by a factor of a googol: the odds against me are now about a googolth of a googolplex. Let's see, how much is that? Instead of a one followed by a googol zeroes, it's a one followed by googol-minus-100 zeroes. So... still virtual certainty that I am a Boltzmann Brain.
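
For concreteness, here is the arithmetic above as a minimal sketch in Python, working in log-base-10 units, since the numbers themselves are far too large to write out. The toy framing and variable names are my own:

    # Toy Bayesian update for the Still Here test, in log10 units.
    log10_prior_odds_bb = 10.0 ** 100   # googolplex-to-1 prior odds of being a Boltzmann Brain
    log10_likelihood_ratio = -100.0     # surviving ten seconds is a 1-in-a-googol event for a BB

    # Posterior log-odds = prior log-odds + log-likelihood ratio.
    log10_posterior_odds_bb = log10_prior_odds_bb + log10_likelihood_ratio

    # 10^100 - 100 is, for all practical purposes, still 10^100. The update
    # doesn't even register at double precision -- which is rather the point.
    print(log10_posterior_odds_bb)      # prints 1e+100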

So how should we respond to the Boltzmann Brain hypothesis, then? Sean Carroll has a two-pronged answer that I think makes a lot of sense.

First, one can ask whether we can independently justify physical theories that imply a low ratio of Boltzmann Brains to normal observers. Boddy, Carroll, and Pollack 2015 offer such a theory. If it turns out that the best physical theories imply that there are zero or very few Boltzmann Brains, then we lose some of our grounds for worry.

Second, one can point to the cognitive instability of the Boltzmann Brain hypothesis (Carroll 2010, p. 223, drawing on earlier work by David Albert). Here's how I'd put it: To the extent I think it likely that I am a Boltzmann Brain, I think it likely that evidence I have in favor of that hypothesis is delusional -- which should undercut my credence in that evidence and thus my credence in the hypothesis itself. If I think it 99% likely that I'm a Boltzmann Brain, for example, then I should think it 99% likely that my evidence in favor of the Boltzmann Brain hypothesis is in fact bogus evidence -- false memories, not reflecting real evidence from the world outside -- and that should in turn reduce my credence in the Boltzmann Brain hypothesis.

An interesting feature of Carroll's responses, which distinguishes them from the Still Here response, is this: Carroll's responses appear to be compatible with still assigning a small but non-trivial subjective probability to being a Boltzmann Brain. Maybe the best cosmological theory turns out not to allow for (many) Boltzmann Brains. But we shouldn't have 100% confidence in any such theory -- certainly not at this point in the history of cosmological science -- and if there are still some contender cosmologies that allow for many Boltzmann Brains, we (you? I?) might want to assign a small probability to being a Boltzmann Brain, in view of the acknowledged possibility, however unlikely, that the cosmos has a non-trivial ratio of Boltzmann Brains to normal observers. And although a greater than 50% credence in the Boltzmann Brain hypothesis seems cognitively unstable in Carroll's sense, it's not clear that, say, an approximately 0.1% credence in the Boltzmann Brain hypothesis would be similarly unstable, since in that case one might still have quite a high degree of confidence in the physical theories that lead one to speculate about the small-but-not-minuscule possibility of being a Boltzmann Brain.

[image source]

Monday, September 28, 2015

Microaggression and the Culture of Solidarity

A guest post by Regina Rini

If you are on a college campus or read anxious thinkpieces, you’ve probably heard about ‘microaggression’. A microaggression is a relatively minor (hence ‘micro’) insult to a member of a marginalized group, perceived as damaging to that person’s full standing as social equal. Examples include acting especially suspicious toward people of color or saying to a Jewish student, "Since Hitler is dead, you don’t have to worry about being killed by him any more." A microaggression is not necessarily a deliberate insult, and any one instance might be an honest mistake. But over time a pattern of microaggression can cause macro harm, by continuously reminding members of marginalized groups of their precarious social position.

A recent paper by sociologists Bradley Campbell and Jason Manning claims that talk of microaggression signals the appearance of a new moral culture: a ‘culture of victimhood’. In the paper Campbell and Manning present a potted history of western morality. First there was a ‘culture of honor’, which prized physical bravery and took insults to demand an aggressive reply. Picture two medieval knights glowering at one another, swords drawn. Then, as legal institutions grew stronger, the culture of honor was displaced by a ‘culture of dignity’, in which individuals let minor insults slide, and reported more serious offenses to impartial authorities. Picture a 1950s businessman calmly telling the constable about a neighbor peeking in windows. Finally, there is now an emerging ‘culture of victimhood’, in which an individual publicly calls attention to having been insulted, in hopes of rallying support from others and inducing the authorities to act. Picture a queer Latina student tweeting about her professor’s perceived-to-be homophobic and racist comments.

There is a serious problem with Campbell and Manning’s moral history, and exposing this problem helps us to see that the ‘culture of victimhood’ label is misleading. The history they provide is a history of the dominant moral culture: it describes the mores of those social groups with greatest access to power. Think about the culture of honor, and notice how limited it must have been. If you were a woman in medieval Europe, you were not expected or permitted to respond to insults with aggression. Even if you were a man, but of low social class, you certainly would not draw your sword in response to insult from a social superior. The ‘culture of honor’ governed relations among a small part of society: white men of equally high social status.

Now think about the culture of dignity, which Campbell and Manning claim “existed perhaps in its purest form among respectable people in the homogenous town of mid-twentieth century America.” Another thing that existed among the ‘respectable people’ in those towns was approval of racial segregation; ‘homogenous towns’ did not arise by accident. People of color, women, queer people, immigrants – none could rely upon the authorities to respond fairly to reports of mistreatment by the dominant group. The culture of dignity embraced more people than had the culture of honor, but it certainly did not protect everyone.

The cultures of honor and dignity left many types of people formally powerless, with no recognized way of responding to moral mistreatment. But the disempowered did not stay quiet. What they did instead was whisper to one another and call one another to witness. They offered mutual recognition amid injustices they could not overcome. And sometimes, when the circumstances were right, they made sure that their mistreatment would be seen by everyone, even by the powerful. They sat in at lunch counters that refused to serve them. They went on hunger strike to demand the right to vote. They rose up and were beaten down at Stonewall when the police, agents of dignity, moved in.

The new so-called ‘culture of victimhood’ is not new, and it is not about victimhood. It is a culture of solidarity, and it has always been with us, an underground moral culture of the disempowered. In the culture of solidarity, individuals who cannot enforce their honor or dignity instead make claim on recognition of their simple humanity. They publicize mistreatment not because they enjoy the status of victim, but because they need the support of others to stand strong, and because ultimately public discomfort is the only route to redress possible. What is sought by a peaceful activist who allows herself to be beaten by a police officer in front of a television camera, other than our recognition? What is nonviolent civil disobedience, other than an expression of the culture of solidarity?

If the culture of solidarity is ancient, then what explains the very current fretting over its manifestation? One answer must be social media. Until very recently, marginalized people were reliant on word of mouth or the rare sympathetic journalist to document their suffering. Yet each microaggression is a single small act that might be brushed aside in isolation; its oppressive power is only visible in aggregate. No journalist could document all of the little pieces that add up to an oppressive whole. But Facebook and Twitter allow documentation to be crowdsourced. They have suddenly and decisively amplified the age-old tools of the culture of solidarity.

This is a development that we should welcome, not fear. It is good that disempowered people have new means of registering how they are mistreated, even when mistreatment is measured in micro-units. Some of the worries raised about ‘microaggression’ are misplaced. Campbell and Manning return repeatedly to false reporting of incidents that did not actually happen. Of course it is bad when people lie about mistreatment – but this is nothing special about the culture of solidarity. People have always abused the court of moral opinion, however it operated. An honor-focused feudal warlord could fabricate an insult to justify annexing his brother’s territory. A 1950s dignitarian might file a false police report to get revenge on a rival.

There are some more serious worries about the recent emergence of the culture of solidarity. Greg Lukianoff and Jonathan Haidt suggest that talk of microaggression is corrosive of public discourse; it encourages accusations and counter-accusations of bad faith, rather than critical thinking. This is a reasonable thing to worry about, but their solution, that “students should also be taught how to live in a world full of potential offenses”, is not reasonable. The world is not static: what is taught to students now will help create the culture of the future. For instance, it is not an accident that popular support for marriage equality was achieved about 15 years after gay-straight alliances became commonplace in American high schools and colleges. Teaching students that they must quietly accept racist and sexist abuse, even in micro units, is simply a recipe for allowing racist and sexist abuse to continue. A much more thoughtful solution, one that acknowledges the ongoing reality of oppression as more than an excuse for over-sensitive fussing, will be required if we are to integrate recognition of microaggression into productive public discourse.

There is also a genuine question about the moral blameworthiness of microaggressors. Some microaggressions are genuine accidents, with no ill intent on the part of the one who errs. Others are more complex psychological happenings, as with implicit bias. Still others are acts of full-blooded bigotry, hiding behind claims of misunderstanding. The problem is that outsiders often cannot tell which is which – nor, in many cases, can victims. And being accused of acting in micro-racist or micro-sexist ways is rarely something people receive without becoming defensive; it is painful to be accused of hurting others. We need a better way of understanding what sort of responsibility people have for their small, ambiguous contributions to oppression. And we need better ways of calling out mistakes, and of responding to being called out. These are all live problems for ethicists and public policy experts. Nothing is accomplished by ignoring the phenomenon or demanding its dismissal from polite conversation.

The culture of solidarity has always been with us – with some of us longer than others. It is a valuable form of moral community, and its recent amplification through social media is something we should welcome. The phenomena it brings to light – microaggression among them – are real problems, bringing with them all the difficulties of finding real solutions. But if we want our future moral culture to be just and equal, not merely quietly dignified, then we will have to struggle for those solutions.

Thanks to Kate Manne, Meena Krishnamurthy, and others for helping me think through the ideas of this post. Of course, they do not necessarily endorse everything I say.

image credit: ‘Hands in Solidarity, Hands of Freedom’ mural, Chicago IL. Photo by Terence Faircloth

Friday, September 25, 2015

Some Video Interviews of Me

... on topics related to consciousness and belief, about ten minutes each, here.

This interview is a decent intro to my main ideas about group consciousness. (Full paper: "If Materialism Is True, the United States Is Probably Conscious".)

This interview is a decent intro to my skepticism about the metaphysics of consciousness. (Full paper: "The Crazyist Metaphysics of Mind".)

Monday, September 21, 2015

A Theory of Hypocrisy

Hypocrisy, let's say, is when someone conspicuously advocates some particular moral rule while also secretly, or at least much less conspicuously, violating that moral rule (and doing so at least as much as does the average member of her audience).

It's hard to know exactly how common hypocrisy is, because people tend to hide their embarrassing behavior and because the psychology of moral advocacy is itself a complex and understudied issue. But it seems likely that hypocrisy is more common than a purely strategic analysis of its advantages would predict. I think of the "family values" and anti-homosexuality politicians and preachers who seem disproportionately likely to be caught in gay affairs, of the angry, judgmental people I know who emphasize how important it is to peacefully control one's emotions, of police officers who break the laws they enforce on others, of Al Gore's (formerly?) environmentally-unfriendly personal habits, and of the staff member here at UCR who was in charge of prosecuting academic misconduct and who was later dismissed for having grossly falsified his resume.

Now, anti-homosexuality preachers might or might not be more likely than their parishioners to have homosexual affairs, etc. But it's striking to me that the rates even come close, as it seems to me they do. A purely strategic analysis of hypocrisy suggests that, in general, people who conspicuously condemn X should have low rates of X, since the costs of advocating one thing and doing another are typically high. Among those costs: creating a climate in which X-ish behavior, which you engage in, is generally more condemned; attracting friends and allies who are especially likely to condemn the types of behavior you secretly engage in; attracting extra scrutiny of whether you in fact do X or not; and attracting the charge of hypocrisy, in addition to the charge of X-ing itself, if your X-ing is discovered, substantially reducing the chance that you will be forgiven. It seems strategically foolish for a preacher with a secret homosexual lover to choose anti-homosexuality to be a central platform of his preaching!

Here's what I suspect is going on.

People do not aim to be saints, nor even to be much morally better than their neighbors. They aim instead for moral mediocrity. If I see a bunch of people profiting from doing something that I regard as morally wrong, I want to do that thing too. No fair that (say) 15% of people cheat on the test and get A's, or regularly get away with underreporting their self-employment income. I want to benefit, if they are! This reasoning is tempting even if the cheaters are a minority and honest people are the majority.

Now consider the preacher tempted by homosexuality or the environmentalist who wants to eat steaks in her large air-conditioned house. They might be entirely sincere in their moral opinions. Hypocrisy needn't involve insincere commitment to the moral ideas one espouses (though of course it can be insincere). Still, they see so many others getting away with what they condemn that they (not aiming to be a lot better than their neighbors) might well feel licensed to indulge themselves a bit too.

Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than to the average person. The person who works in the IRS office sees how frequent and easy it is to cheat on one's taxes. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to the increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of violations of those norms -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

So here's the final piece of the puzzle:

Suppose that there's a norm that you find yourself especially tempted to violate, though you succeed for a while, at substantial personal cost, in not violating it. You love cheeseburgers but go vegetarian; you have intense homosexual desires but avoid acting on them. Envy might lead you to be especially condemnatory of other people who still do such things. If you've worked so hard, they should too! It's an issue you've struggled with personally, so now you have wisdom about it, you think. You want to try to make sure that others don't get away with that sin you've worked so hard to avoid. Moreover, optimistic self-illusions might lead you to overestimate the likelihood that you will stay strong and not lapse. These envious, self-confident moments are the moments when you are most likely to conspicuously condemn those behaviors to which you are tempted. But after you're on the hook for it, if you've been sufficiently conspicuous in your condemnations, it becomes hard to change your tune later, even after you have lapsed.

[image source; more on Rekers]

Thursday, September 17, 2015

Philosophical Conversations

a guest post by Regina Rini

You’re at a cocktail reception and find yourself talking to a stranger. She mentions a story she heard today on NPR, something about whether humans are naturally good or evil. Something like that. So far she’s just described the story; she hasn’t indicated her own view. There are a few ways you might respond. You might say, "Oh, that’s interesting. I wonder why this question is so important to people." Or you might say, "Here’s my view on the topic… What do you think?" Or maybe you could say "Here's the correct view on the topic… Anyone who thinks otherwise is confused."

It’s obvious that the last response is boorish cocktail party behavior. Saying that seems to be aimed at foreclosing any possible conversation. You’re claiming to have the definitive, correct view, and if you’re right then there’s no point in discussing it further. If this is how you act, you shouldn’t be surprised when the stranger appears disconcerted and politely avoids talking to you anymore. So why is it that most philosophy books and papers are written in exactly this way?

If we think about works of philosophy as contributing to a conversation, we can divide them up like this. There are conversation-starters: works that present a newish topic or question, perhaps with a suggestive limning of the possible answers, but without trying to come to a firm conclusion. There are conversation-extenders: works that react to an existing topic by explaining the author’s view, but don’t try to claim that this is the only possibly correct view and clearly invite response from those who disagree. And there are conversation-enders: works that try to resolve or settle an existing debate, by showing that one view is the correct view, or at least that an existing view is definitively wrong and must be abandoned.

Contemporary analytic philosophy seems to think that conversation-enders are the best type of work. Conversation-starters do get some attention, but usually trying to raise a new topic leads to dismissal by editors and referees. "This isn’t sufficiently rigorous", they will say. Or: "What’s the upshot? Which famous –ism does this support or destroy? It isn’t clear what the author is trying to accomplish." Opening a conversation, with no particular declared outcome, is generally regarded as something a dilettante might do, not what a professional philosopher does.

Conversation-extenders also have very little place in contemporary philosophy. If you merely describe your view, but don’t try to show that it is the only correct view, you will be asked "where is your argument?" Editors and referees expect to see muscularity and blood. A good paper is one that has "argumentative force". It shows that other views "fail" - that they are "inadequate", or "implausible", or "fatally flawed". A good paper, by this standard, is not content to sit companionably alongside opposed views. It must aim to end the conversation: if its aspirations are fully met, there will be no need to say anything more about the topic.

You might object here. You might say: the language of philosophy papers is brutal, yes, but this is misleading. Philosophers don’t really try to end conversations. They know their opponents will keep on holding their "untenable" views, that there will soon be a response paper in which the opponent says again the thing that they’ve been just shown they "cannot coherently say". Conversation-enders are really conversation-extenders in grandiose disguise. Boxers aren’t really trying to kill their opponents, and philosophers aren’t really trying to kill conversations.

But I think this objection misses something. It’s not just the surface language of philosophy that suggests a conversation-ending goal. That language is driven by an underlying conception of what philosophy is. Many contemporary analytic philosophers aspire to place philosophy among the ‘normal sciences’. Philosophy, on this view, aims at revealing the Truth – the objective and eternal Truth about Reality, Knowledge, Beauty, and Justice. There can be only one such Truth, so the aim of philosophy really must be to end conversations. If philosophical inquiry ever achieves what it aims at, then there is the Truth, and why bother saying any more?

For my part, I don’t know if there is an objective and eternal Truth about Reality, Knowledge, Beauty, and Justice. But if there is, I doubt we have much chance of finding it. We are locally clever primates, very good at thinking about some things and terrible at thinking about others. The expectation that we might uncover objective Truth strikes me as hubristic. And the ‘normal science’ conception of philosophy leads to written work that is plodding, narrow, and uncompanionably barbed. Because philosophy aims to end conversations, and because that is hard to do, most philosophy papers take on only tiny questions. They spin epicycles in long-established arguments; they smash familiar –isms together in hopes that one will display publishably novel cracks. If philosophy is a normal science, this makes perfect sense: to end the big conversation, many tiny sub-conversations must be ended first.

There is another model for philosophical inquiry, one which accepts the elusiveness of objective Truth. Philosophy might instead aim at interpretation and meaningfulness. We might aspire not to know the Truth with certainty, but instead to know ourselves and others a little bit better. Our views on Reality, Knowledge, Beauty, and Justice are the modes of our own self-understanding, and the means by which we make our selves understood to others. They have a purpose, but it is not to bring conversation to a halt. In fact, on this model, the ideal philosophical form is the conversation-opener: the work that shows the possibility of a new way of thinking, that casts fresh light down unfamiliar corridors. Conversation-openers are most valuable precisely because they don’t assume an end will ever be reached. But conversation-extenders are good too. What do you think?

The spirit of this post owes a lot to Robert Nozick, especially the introduction to his book Philosophical Explanations. Thanks to Eden Lin and Tim Waligore for helping me track down Nozick’s thoughts, and to several facebook philosophers for conversing about these ideas.

image credit: Not getting Involved by Tarik Browne

Monday, September 14, 2015

Chinese Philosophy & Intellectualism about Belief

Two separate announcements, not one. Though now that I think about it, joining the topics might make for an interesting future post....

Last weekend the LA Times published my piece "What's Missing in College Philosophy Classes? Chinese Philosophers". (This is a revision of a Splintered Mind post from about a year ago.)

And The Minds Online Conference at Brains Blog is now in its third week. This week's topic: Belief and Reasoning. I haven't yet had a chance to read the other papers, but Jack Marley-Payne's "Against Intellectualist Theories of Belief" is nicely done, as I say in my own commentary on the paper.

Friday, September 11, 2015

Ethics, Metaethics, and the Future of Morality

a guest post by Regina Rini

Moral attitudes change over generations. A century ago in America, temperance was a moral crusade, but you’d be hard-pressed to find any mark of it now among the Labor Day beer coolers. Majority views on interracial and same-sex relationships have swung from one pole to the other within the lifetimes of many people reading this. So given what we know of the past, we can say this about the people of the future: their moral attitudes will not be the same as ours. Some ethical efforts that now belong to a minority – vegetarianism, perhaps – will become as ubiquitously upheld as tolerance for interracial partnerships. Other moral matters that seem urgent now will fade from thought just as surely as the temperance movement. We can’t know which attitudes will change and how, but we do know that moral change will happen. Should we try to stop it – or even to control it?

Every generation exercises some control over the moral attitudes of its children, through the natural indoctrination of parenting and the socialization of school. But emerging technologies now give us unprecedented scope to tailor what the future will care about. From social psychology and behavioral economics we increasingly grasp how to design institutions so that the ‘easy’ or ‘default’ choices people tend to adopt coincide with the ones that are socially valuable. And as gene-editing eventually becomes a normal part of reproduction, we will be able to influence the moral attitudes of generations far beyond our own children. Of course, it is not as simple as ‘programming’ a particular moral belief; genetics does not work that way. But we might genetically tinker with brain receptivity for neurotransmitters that affect a person’s readiness to trust, or her preference for members of her own ethnic group. We won’t get to decide precisely how our descendants come to find their moral balance – but we could certainly put our thumb on the scale.

On one way of looking at it, it’s obvious that if we can do this, we should. We are talking about morality here, the stuff made out of ‘should’s. If we can make future generations more caring, more disposed to virtue, more respectful of rational agency, more attentive to achieving the best outcomes – however morality works, we should help future people to do what they should do. That is what ‘should’ means. This thought is especially compelling when we realize that some of our moral goals, like ending racism or addressing the injustices climate change will bring, are necessarily intergenerational projects. The people of the future are the ones who will have to complete the moral journey we have begun. Why not give them a head start?

But there are also reasons to think we should not interfere with whatever course the future of morality might take. For one thing, we ought to be extremely confident that our moral attitudes are the right ones before we risk crimping the possibilities for radical moral change. Perhaps we are that confident about some issues, but it would be unreasonable to be so sure across the board. Think, for example, about moral attitudes toward the idea of ownership of digital media: whether sampling, remixing, and curating count as forms of intellectual theft. Already there appear to be generational splits on this topic, driven by technology that emerged only in the last 30 years. Would you feel confident trying to preordain moral attitudes about forms of media that won’t be invented for a century?

More insidiously, there is the possibility that our existing moral attitudes already reflect the influence of problematic political ideologies and economic forces. If we decide to impose a shape on the moral attitudes of the future, then the technology that facilitates this patterning will likely be in the hands of those who benefit from existing power structures. We may end up creating generations of people less willing to question harmful or oppressive social norms. Finally, we should consider whether any attempt to direct the development of morality is disrespectful to the free agency of future people. They will see our thumbprint on their moral scale, and they may rightly resent our influence even as they cannot escape it.

One thing these reflections bring out is that philosophers are mistaken when they attempt to cleanly separate metaethical questions (what morality is) from normative ethical questions (what we should do). If this clean separation was ever tenable, our technologically expanded control over the future makes it implausible now. We face imminent questions about what we should do – which policies and technologies to employ, to which ends – that depend upon our answers to questions about what morality is. Are there objective moral facts? Can we know them? What is the relationship between morality and human freedom? The idea that metaethical inquiry is for dusty scholars, disconnected from our ordinary social and political lives, is an idea that fades entirely from view when we look to the moral future.


I’ve drawn on several philosophers’ work for some of the arguments above. For arguments in favor of using technology to direct the morality of future generations, see Thomas Douglas, “Moral Enhancement” in the Journal of Practical Ethics; and Ingmar Persson and Julian Savulescu, Unfit for the Future. For arguments against doing so, see Bernard Williams, Ethics and the Limits of Philosophy (chapter 9); and Jurgen Habermas, The Future of Human Nature.

image credit: ’Hope in a better future’ by Massimo Valiani

Wednesday, September 09, 2015

The Invisible Portion of the Experiment

Maybe you've heard about the huge psychological replication study that was released on August 28th. The headline finding is this: 270 psychologists jointly attempted to replicate the results of 100 recent studies in three top psychology journals. In fewer than 40% of cases were the researchers able to replicate the originally reported effect.

But here's perhaps an even more telling finding: Only 47% of the originally reported effect sizes were within the 95% confidence interval of the replication effect size. In other words, if you just used the replication study as your basis for guessing the real effect size, you would not expect the real effect size to be as large as the effect size originally reported. [Note 1] This reported result inspired me to look at the raw data, to try a related analysis that the original replication study did not appear to report: What percentage of the replications find a significantly lower effect size than the original reported study? By my calculations: 36/95, or 38%. [See Note 2 for what I did.]

(A rather surprising additional ten studies showed statistically marginal trends toward lower effect sizes, which it is tempting to interpret as a combination of a non-effect with a poorly powered original study or replication. A representative case is one study with an effect size of r = .22 on 96 participants and a replication effect size of r = .02 on 108 participants (one-tailed p value for the difference between the r's = .07). Thus, it seems likely that 38% is a conservative estimate of the tendency toward lower effect sizes in replications.)
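
For the statistically curious, here is a minimal sketch of the Fisher r-to-z comparison behind these study-by-study tests (presumably the same computation as the online calculator mentioned in Note 2 below; the function name is mine). It reproduces the representative case just described:

    import math

    def r_diff_one_tailed_p(r1, n1, r2, n2):
        # Fisher r-to-z transformation of each correlation.
        z1, z2 = math.atanh(r1), math.atanh(r2)
        # Standard error of the difference between two independent z's.
        se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        z = (z1 - z2) / se
        # Upper-tail probability under the standard normal distribution.
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    # r = .22 on 96 participants vs. r = .02 on 108 participants:
    print(round(r_diff_one_tailed_p(0.22, 96, 0.02, 108), 3))  # ~0.076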

This study-by-study statistical comparison of effect sizes is useful because it helps distinguish the file drawer problem from what we might call the invisible factor problem.

The file drawer problem is this: Researchers are more likely to publish statistically significant findings than findings that show no statistically significant effect. Statistically chance results will sometimes occur, and if mostly these results are published, it might look like there's a real effect when actually there is no real effect.

The invisible factor problem is this: There are a vast number of unreported features of every experiment. Possibly one of those unreported features, invisible to the study's readership, is an important contributor to the reported findings. In infancy research, for example, it's not common to report the experimenter's pre-testing interactions with the infant, if any, but pre-testing interactions might have a big effect. In cognitive research, it's not common to report what time of day participants performed the tasks, but time of day can influence arousal and performance. And so on.

The file drawer problem is normally managed in meta-analysis by assuming a substantial number of unpublished null-result studies (maybe five times as many as the published studies) and then seeing if the result still proves significant in a merged analysis. But this is only an adequate approach if the only risk to be considered is a chance tendency for a non-effect to show up as significant in some studies. If, on the other hand, there are a large number of invisible factors, or moderators, that dependably confound studies, leading to statistically significant positive results other than by chance, standard meta-analytic file-drawer compensations will not suffice. The invisible factors might be large and non-chance, unintentionally sought out and settled upon by well-meaning researchers, perhaps even passed along teacher to student. ("It works best if you do it like this.")
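
The standard formalization of that compensation is Rosenthal's fail-safe N; here is a minimal sketch via Stouffer's method, with hypothetical z-scores, just to make the mechanics concrete:

    def failsafe_n(z_scores, z_crit=1.645):
        # Number of unpublished null (z = 0) studies needed to drag the
        # Stouffer combined z-score below the one-tailed .05 threshold.
        k = len(z_scores)
        return (sum(z_scores) ** 2) / (z_crit ** 2) - k

    zs = [2.5, 2.8, 3.1, 2.2]          # hypothetical published z-scores
    tolerance = 5 * len(zs) + 10       # Rosenthal's "5k + 10" rule of thumb
    print(failsafe_n(zs) > tolerance)  # True: "robust" by file-drawer lights

As the paragraph above argues, this calculation models only chance fluctuation among true null effects; it has no way to register dependable invisible moderators.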

Here's how I think psychological research sometimes goes. You try an experiment one way and it "fails" -- that is, it doesn't produce the hoped-for result. So you try another way and it fails again. So then you try a third way and it succeeds. Maybe to make sure it's not chance, you do it that same way again and it still succeeds, so you publish. But there might be no real underlying effect of the sort you think there is. What you might have done is find the right set of moderating factors (time of day, nonverbal experimenter cues, whatever), to get the pattern of results you want. If those factors are visible -- that is, reported in the published study -- then others can evaluate and critique and try to manipulate them. But if those factors are invisible, then you will have an irreproducible result, but one not due to chance. In a way this is a file-drawer effect, since null results are disproportionately non-reported, but it's one driven by biased search for experimental procedures that "succeed" because of real moderating factors rather than just chance fluctuations.

If failure of replication in psychology is due to publishing results that by mere statistical chance happen to fall below the threshold for statistical significance, then most failed replications will not be statistically significantly different from the originally reported results -- just closer to zero and non-significant. But if the failure of replication in psychology is due to invisible moderating factors unreported in the original experiment, then failed replications with decent statistical power will tend to find significantly different results from the original experiment.

I think that is what we see.

[revised Sep. 10]


Related posts:
What a Non-Effect Looks Like (Aug. 7, 2013)
Meta-Analysis of the Effect of Religion on Crime: The Missing Positive Tail (Apr. 11, 2014)
Psychology Research in the Age of Social Media (Jan. 7, 2015)


Note 1: In no case was the original study's effect size outside the 95% interval for the replication study because the original study's effect size was too low.

Note 2: I used the r's and N's reported in the study's z-to-r conversions for non-excluded studies, then plugged the ones that were not obviously either significant or non-significant one-by-one into Lowry's online calculator for the significance of the difference between correlation coefficients, using one-tailed p values and a significance threshold of p < .05. Note that this analysis is different from and more conservative than simply looking at whether the 95% CI of the replication includes the effect size of the original study, since it allows for statistical error in the original study rather than assuming a fixed original effect size.

Tuesday, September 08, 2015

Minds Online Conference

is cooking along. You might want to check out this week's lineup, on Perception and Consciousness:

  • Nico Orlandi (UC Santa Cruz): "Bayesian Perception Is Ecological Perception" (KEYNOTE)
  • Derek H. Brown (Brandon University): “Colour Layering and Colour Relationalism” (Commentators: Mazviita Chirimuuta and Jonathan Cohen)
  • Jonathan Farrell (Manchester): “‘What It Is Like’ Talk Is Not Technical Talk” (Commentators: Robert Howell and Myrto Mylopoulos)
  • E.J. Green (Rutgers University): “Structure Constancy” (Commentators: John Hummel and Jake Quilty-Dunn)
  • Assaf Weksler (Open University of Israel and Ben Gurion University): “Retinal Images and Object Files: Towards Empirically Evaluating Philosophical Accounts of Visual Perspective” (Commentators: René Jagnow and Joulia Smortchkova)

Tuesday, September 01, 2015

A Defense of the Rights of Artificial Intelligences

... a new essay in draft, which I've written collaboratively with a student named Mara (whose last name is currently in flux and not finally settled).

This essay draws together ideas from several past blog posts including:
  • Our Possible Imminent Divinity (Jan. 2, 2014)
  • Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)
  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (July 24, 2015)
  • How Weird Minds Might Destabilize Ethics (Aug. 3, 2015)
------------------------------------------


There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.

Full version available here.

As always, comments warmly welcomed -- either by email or on this blog post. We're submitting it to a special issue of Midwest Studies with a hard deadline of September 15, so comments before that deadline would be especially useful.

[image source]

Thursday, August 27, 2015

A Philosophy Professor Discovers He's an AI in a Simulated World Run by a Sadistic Teenager

... in my story "Out of the Jar", originally published in the Jan/Feb 2015 issue of The Magazine of Fantasy and Science Fiction.

I am now making the story freely available on my UC Riverside website.



When we are alone in God’s room I say, God, you cannot kill my people. Heaven 1c is no place to live. Earth is not your toy.

We have had this conversation before, a theme with variations.

God’s argument 1: Without God, we wouldn’t exist – at least not in these particular instantiations – and he wouldn’t have installed my Earth if he couldn’t goof around with it. His fun is a fair price to keep the computational cycles going. God’s argument 2: Do I have some problem with a Heavenly life of constant bliss and musical achievement? Is there, like, some superior project I have in mind? Publishing more [sarcastic expletive] philosophy articles, maybe?

I ask God if he would sacrifice his life on original Earth to live in Heaven 1c.

In a minute, says God. In a [expletive-expletive-creative-compound-expletive] minute! You guys are the lucky ones. One week in Heaven 1c is more joy than any of us real people could feel in a lifetime. So [expletive-your-unusual-sexual-practice].

The Asian war continues; God likes to hijack and command the soldiers from one side or the other or to introduce new monsters and catastrophes. I watch as God zooms to an Indian soldier who is screaming and bleeding to death from a bullet wound in his stomach, his friends desperately trying to save him. God spawns a ball of carnivorous ants in the soldier’s mouth. Soon, God says, this guy will be praising my wisdom.

I am silent for a few minutes while God enjoys his army men. Then I venture a new variation on the argumentative theme. I say: If bliss is all you want, have you considered your mom’s medicine cabinet?

Thursday, August 20, 2015

Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity

I have bad news: You're Swampman.

Remember that hike you took last week by the swamp during the electrical storm? Well, one biological organism went in, but a different one came out. The "[your name here]" who went in was struck and killed by lightning. Simultaneously, through freak quantum chance, a molecule-for-molecule similar being randomly congealed from the swamp. Soon after, the recently congealed being ran to a certain parked car, pulling key-shaped pieces of metal from its pocket that by amazing coincidence fit the car's ignition, and drove away. Later that evening, sounds came out of its mouth that its nearby "friends" interpreted as meaning "Wow, that lightning bolt almost hit me in the swamp. How lucky I was!" Lucky indeed, but a much stranger kind of luck than they supposed!

So you're Swampman. Should you care?

Should you think: I came into existence only a week ago. I never had the childhood I thought I had, never did all those things I thought I did, hardly know any of the people I thought I knew! All that is delusion! How horrible!

Or should you think: Meh, whatevs.

[apologies if this doesn't look much like you]

Option 1: Yes, you should care. If it turns out that certain philosophers are correct and you (now) are not metaphysically the same person as that being who first parked the car by the swamp, then O. M. G.!

Option 2a: No, you shouldn't care, because that was just a fun little body exchange last week. The same person went into the swamp as came out. Disappointingly, the procedure didn't seem to clear your acne, though.

Option 2b: No, you shouldn't care, because even if technically you're not the same person as the one who first drove to the swamp, you and that earlier person share everything that matters. Same friends, same job, same values, same (seeming-)memories....

Option 3: Your call. If you choose to regard yourself as one week old, then you are correct in doing so. If you choose to regard yourself as much older than that, then you are equally correct in doing so.

Let's call that third option voluntarism about personal identity. Across a certain range of cases, you are who you choose to be.

Social identities are to a certain extent voluntaristic. You can choose to identify as a political conservative or a political liberal. You can choose to identify, or not identify, with a piece of your ethnic heritage. You can choose to identify, or not identify, as a philosopher or as a Christian. There are limits: If you have no Pakistani heritage or upbringing, you can't just one day suddenly decide to be Pakistani and thereby make it true that you are. Similarly if your heritage and upbringing have been entirely Pakistani to this day, you probably can't just instantly shed your Pakistanihood. But in vague, in-betweenish cases, there's room for choice and making it so.

I propose taking the same approach to personal identity in the stricter metaphysical sense: What makes you the same being, or not, in philosophical puzzle cases where intuitions pull both ways, depends to a substantial extent on how you choose to view the matter; and different people could legitimately arrive at different choices, thus shaping the metaphysical facts (the actual metaphysical facts) to suit them.

Consider some other stock cases from the literature on personal identity:

Teleporter: On Earth there is a device that will destroy your body and beam detailed information about it to Mars. On Mars another device will use that information to create a duplicate body from local materials. Is this harmless teleportation or terrible death-and-duplication? On a voluntaristic view, that would depend on how it is viewed by the participant(s). Also: How similar must the duplicate body be for it to qualify as a successful teleportation? That, too, could depend on participant attitude.

Fission: Your brain will be extracted, cut into two, and housed in two new bodies. The procedure, though damaging and traumatic, is such that if only one half of your brain were to be extracted, and the other half destroyed, everyone would agree that you survived. But instead, there will now be two beings, presumably distinct, who both see themselves as "you". Perhaps whether this should count as death or instead as fissioning-with-survival depends on your attitude going in and the attitudes of the beings coming out.

Amnesia: Longevity treatments are developed so that your body won't die, but in four hundred years the resulting being will have no memory whatsoever of anything that happened in your lifetime so far, and if she has similar values and attitudes it will only be by chance. Is that being still "you"? How much amnesia and change can "you" survive without becoming strictly and literally (and not just metaphorically or loosely) a different person? Again, this might depend on the various attitudes about amnesia and identity of the person(s) at different temporal stages.

Here are two thoughts in support of voluntarism about personal identity:

(1.) If I try to imagine these cases as actual, I don't find myself urgently wondering about the resolution of these metaphysical debates, thinking of my very death or survival as turning upon how the metaphysical arguments play out. It's not like being told that if a just-tossed die has landed on 6 then tomorrow I will be shot, which will make me desperately curious about whether the die did land on 6. It seems to me that I can, to some extent, choose how to conceptualize these cases.

    (2.) "Person" is an ordinary, folk concept arising from a context lacking Swampman, teleporter, fission, and (that type of) amensia cases, so the concept of personhood might be expected to be somewhat indeterminate in its application to such cases. And since important features of personhood depend in part on the person in question thinking of the past or future self as "me" -- feeling regrets about the past, planning prudently for the future -- such indeterminacy might be partly resolved by the person's own decisions about the boundaries of her regrets, prudential planning, etc.

Even accepting all this, I'm not sure how far I can go with it. I don't think I can decide to be a coffee mug and thereby make it true that I am a coffee mug, nor that I can decide to be one of my students and thereby make it so. Can I decide that I am not that 15-year-old named "Eric" who wore the funny shirts in the 1980s, thereby making it true that I am not really metaphysically the same person, while my sister just as legitimately decides the opposite, that she is the same person as her 15-year-old self? Can the Dalai Lama and some future child (together, but at a temporal distance) decide that they are metaphysically the same person, if enough else goes along with that?

(For a version of that last scenario, see "A Somewhat Impractical Plan for Immortality" (Apr. 22, 2013) and my forthcoming story "The Dauphin's Metaphysics" (available on request).)

Thursday, August 13, 2015

Weird Minds Might Destabilize Human Ethics

Intuitive physics works great for picking berries, throwing stones, and walking through light underbrush. It's a complete disaster when applied to the very large, the very small, the very energetic, or the very fast. Similarly for intuitive biology, intuitive cosmology, and intuitive mathematics: They succeed for practical purposes across long-familiar types of cases, but when extended too far they go wildly astray.

How about intuitive ethics?

I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.

Intuitive ethics was shaped in a context where the only species capable of human-grade practical and theoretical reasoning was humanity itself, and where human variation tended to stay within certain boundaries. It would be unsurprising if intuitive ethics were unprepared for utility monsters (capable of superhuman degrees of pleasure or pain), fission-fusion monsters (who can merge and divide at will), AIs of vastly superhuman intelligence, cheerfully suicidal AI slaves, conscious toys with features specifically designed to capture children's affection, giant virtual sim-worlds containing genuinely conscious beings over which we have godlike power, or entities with radically different value systems. We might expect human moral judgment to be baffled by such cases and to deliver wrong or contradictory or unstable verdicts.

For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?

Or maybe morality is constructed from our judgments and folkways, so that whatever moral facts there are, they are just the moral facts that we (or idealized versions of ourselves) think there are? Much like an object's being red, on a certain view of the nature of color, consists in its being such that ordinary human perceivers in normal conditions would experience it as red, maybe an action's being morally right just consists in its being such that ordinary human beings who considered the matter carefully would regard it as right? (This is a huge, complicated topic in metaethics, e.g., here and here.) If we take this approach, then morality might change as our sense of the world changes -- and as who counts as "we" changes. Maybe we could decide to give fission-fusion monsters some rights but not other rights, and shape future institutions accordingly. The unsettled nature of our intuitions about such cases, then, might present an opportunity for us to shape morality -- real morality, the real (or real enough) moral facts -- in one direction rather than another, by shaping our future reactions and habits.

Maybe different social groups would make different choices with different consequences for group survival, introducing cultural evolution into the mix. Moral confusion might open into a range of choices for moral architecture.

However, the range of legitimate choices is, I'm inclined to think, constrained by certain immovable moral facts, such as that it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason.

Related posts:

  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (Jul. 24, 2015)
    Thanks to Ever Eigengrau for extensive discussion.

    [image source]

    Wednesday, August 05, 2015

    The Top Science Fiction and Fantasy Magazines 2015

    Last year, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on numbers of awards and "best of" placements in the previous ten years. Since some people have found the list interesting, I decided to update this year, dropping the oldest data and replacing them with fresh data from this summer's awards/best-of season.

    Last year's post expresses various methodological caveats, which still apply. This year's method, in brief, was to count one point every time a magazine had a story nominated for a Hugo, Nebula, or World Fantasy Award; one point for every "best of" choice in the Dozois, Strahan, and Horton anthologies; and half a point for every Locus recommendation at novelette or short story length, over the past ten years.
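
    For concreteness, here is a minimal sketch of that scoring rule as a Python function. The weights follow the method just described; the function name and the example counts are purely illustrative, not data about any real magazine.

        # Scoring rule described above: 1 point per Hugo/Nebula/World Fantasy
        # nomination, 1 point per Dozois/Strahan/Horton "best of" selection,
        # 0.5 points per Locus recommendation (novelette or short story).
        AWARD_WEIGHT = 1.0
        BEST_OF_WEIGHT = 1.0
        LOCUS_WEIGHT = 0.5

        def magazine_score(award_noms, best_of_picks, locus_recs):
            """Total points for one magazine over the ten-year window."""
            return (AWARD_WEIGHT * award_noms
                    + BEST_OF_WEIGHT * best_of_picks
                    + LOCUS_WEIGHT * locus_recs)

        # Hypothetical magazine with 10 nominations, 12 best-of picks,
        # and 9 Locus recommendations: 10 + 12 + 4.5 = 26.5 points.
        print(magazine_score(10, 12, 9))  # 26.5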

    I take the list down to magazines with at least 1.5 points. I am not including anthologies or standalones, although anthologies account for about half of the award nominations and "best of" choices. Horror is not included except as it incidentally appears according to the criteria above. I welcome corrections.


    1. Asimov's (262 points)
    2. Fantasy & Science Fiction (209.5)
    3. Subterranean (82) (ran 2007-2014)
    4. Clarkesworld (78) (started 2006)
    5. (77.5) (started 2008)
    6. Strange Horizons (51)
    7. Analog (50.5)
    8. Interzone (47.5)
    9. Lightspeed (44.5) (started 2010)
    10. SciFiction (26) (ceased 2005)
    11. Fantasy Magazine (24) (merged into Lightspeed, 2012)
    12. Postscripts (19) (ceased 2014)
    13. Realms of Fantasy (16.5) (ceased 2011)
    14. Beneath Ceaseless Skies (15) (started 2008)
    15. Jim Baen's Universe (14.5) (ran 2006-2010)
    16. Apex (13)
    17. Electric Velocipede (7) (ceased 2013)
    18. Intergalactic Medicine Show (6)
    19. Black Static (5.5) (started 2007)
    19. Helix SF (5.5) (ran 2006-2008)
    21. The New Yorker (5)
    22. Cosmos (4.5)
    22. Tin House (4.5)
    24. Flurb (4) (ran 2006-2012)
    24. Lady Churchill's Rosebud Wristlet (4)
    26. Black Gate (3.5)
    26. McSweeney's (3.5)
    28. Conjunctions (3)
    28. GigaNotoSaurus (3) (started 2010)
    30. Lone Star Stories (2.5) (ceased 2009)
    31. Aeon Speculative Fiction (2) (ceased 2008)
    31. Futurismic (2) (ceased 2010)
    31. Harper's (2)
    31. Weird Tales (2) (off and on throughout period)
    36. Cemetery Dance (1.5)
    36. Daily Science Fiction (1.5) (started 2010)
    36. Nature (1.5)
    36. On Spec (1.5)
    36. Terraform (1.5) (started 2014)


    (1.) The New Yorker, Tin House, McSweeney's, Conjunctions, and Harper's are prominent literary magazines that occasionally publish science fiction or fantasy. Cosmos and Nature are, respectively, a popular science magazine and a specialists' science magazine, each publishing a little science fiction on the side. The remaining magazines focus on the F/SF genre.

    (2.) Although Asimov's and F&SF dominate the ten-year list, things have recently equalized among the top several. The past three years show approximately a tie among the top four:

    1. (50.5)
    2. Asimov's (50)
    3. Clarkesworld (44.5)
    4. F&SF (41)
    5. Lightspeed (26.5)
    6. Subterranean (23)
    7. Analog (19.5)
    8. Strange Horizons (14)
    9. Beneath Ceaseless Skies (13.5)
    10. Interzone (12)

    And the ratio between #1 and #10 is about 4:1 in the past three years, as opposed to about 10:1 in the ten-year data.

    (3.) Another aspect of the venue-broadening trend is the rise of good podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, and Pseudopod), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasting might be the leading edge of a major change in the industry. It's fun to hear a short story podcast while driving or exercising, and people might increasingly obtain their short fiction that way. (Some text-based magazines, like Clarkesworld, are also now regularly podcasting their stories.)

    (4.) A few new magazines have drawn recommendations this year from the notoriously difficult-to-please Lois Tilton, who is the reviewer for short fiction at Locus Online. All three are pretty cool, and I'm hoping to see one or more of them qualify for next year's updated list:

    Unlikely Story (started 2011 as Journal of Unlikely Entomology, new format 2013)
    The Dark (started 2013)
    Uncanny (started 2014)

    (5.) Philosophers interested in science fiction might also want to look at Sci Phi Journal, which publishes both science fiction with philosophical discussion notes and philosophical essays about science fiction.

    (6.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

    [image source; admittedly, it's not the latest issue!]

    Friday, July 31, 2015

    Against Intellectualism about Belief

    Sometimes what we sincerely say -- aloud or even just silently to ourselves -- doesn't fit with the rest of our cognition, reactions, and behavior. Someone might sincerely say, for example, that women and men are equally intelligent, but be consistently sexist in his assessments of intelligence. (See the literature on implicit bias.) Someone might sincerely say that her dear friend has gone to Heaven, while her emotional reactions don't at all fit with that.

    On intellectualist views of belief, what we really believe is the thing we sincerely endorse, despite any other seemingly contrary aspects of our psychology. On the more broad-based view I prefer, what you believe depends, instead, on how you act and react in a broad range of ways, and sincere endorsements are only one small part of the picture.

    Intellectualism might be defended on four grounds.

    (1.) Intellectualism might be intuitive. Maybe the most natural or intuitive thing to say about the implicit sexism case is that the person really believes that women are just as smart; he just has trouble putting that belief into action. The person really believes that her friend is in Heaven, but it's hard to avoid reacting emotionally as if her friend is ineradicably dead rather than just "departed".

    Reply: Sometimes we do seem to want to say that people believe what they intellectually endorse in cases like this, but I don't think our intuitions are univocal. It can also seem natural or intuitive to say that the implicit sexist doesn't really or wholly or deep-down believe that the sexes are equal, and that the mourner maybe has more doubt about Heaven than she is willing to admit to herself. So the intuitive case could go either way.

    (2.) Intellectualism might fit well with our theoretical conceptualization of belief. Maybe it's in the nature of belief to be responsive to evidence and deployable in reasoning. And maybe only intellectually endorsed or endorsable states can play that cognitive role. The implicit sexist's bias might be insufficiently responsive to evidence and insufficiently apt to be deployed in reasoning for it to qualify as belief, while his intellectual endorsement is responsive to evidence and deployable in reasoning.

    Reply: Zimmerman and Gendler, in influential essays, have nicely articulated versions of this defense of intellectualism [caveat: see Zimmerman's comment below]. I raised some objections here, and Jack Marley-Payne has objected in more explicit detail, so I won't elaborate in this post. Marley-Payne's and my point is that people's implicit reactions are often sensitive to evidence and deployable in what looks like reasoning, while our intellectual endorsements are often resistant to evidence and rationally inert -- so at least it doesn't seem that there's a sharp difference in kind.

    (It was Marley-Payne's essay that got me thinking about this post, I should say. We'll be discussing it, also with Keith Frankish, in September for Minds Online 2015.)

    (3.) Intellectualism about belief might cohere well with the conception of "belief" generally used in current Anglophone philosophy. Epistemologists commonly regard knowledge as a type of belief. Philosophers of action commonly think of beliefs coupling with desires to form intentions. Philosophers of language discuss the weird semantics of "belief reports" (such as "Lois believes that Superman is strong" and "Lois believes that Clark Kent is not strong"). Possibly, an intellectualist approach to belief fits best with existing work in these other areas of philosophy.

    Reply: I concede that something like intellectualism seems to be presupposed in much of the epistemological literature on knowledge and much of the philosophy-of-language literature on belief reports. However, it's not clear that philosophy of action and moral psychology are intellectualistic. Philosophy of action uses belief mainly to explain what people do, not what they say. For example: Why did Ralph, the implicit sexist, reject Linda for the job? Well, maybe because he wants to hire someone smart, and he doesn't think women are smart. Why does the mourner feel sorry for the deceased? Maybe because she doesn't completely accept that the deceased is in Heaven.

    Furthermore, maybe coherence with intellectualist views of belief in epistemology and philosophy of language is a mistaken ideal and not in the best interest of the discipline as a whole. For example, it could be that a less intellectualist philosophy of mind, imported into philosophy of language, would help us better see our way through some famous puzzles about belief reports.

    (4.) Intellectualism might be the best practical choice because of its effects on people's self-understanding. For example, it might be more effective, in reducing unjustified sexism, to say to an implicit sexist, "I know you believe that women are just as smart, but look at all these spontaneous responses you have" than to say "I know you are sincere when you say women are just as smart, but it appears that you don't through-and-through believe it". Tamar Gendler, Aaron Zimmerman, and Karen Jones have all defended attribution of egalitarian beliefs partly on these grounds, in conversation with me.

    Reply: I don't doubt that Gendler, Zimmerman, and Jones are right that many people will react negatively to being told they don't entirely or fully possess all the handsome-sounding egalitarian and spiritual beliefs they think they have. (Neither, I would say, do they entirely lack the handsome beliefs; these are "in-between" cases.) They'll react more positively, and perhaps be more open to rigorous self-examination, if you start on a positive note and coddle them a bit. But I don't know if I want to coddle people in this way. I'm not sure it's really the best thing in the long term. There's something painfully salutary in thinking to yourself, "Maybe deep down I don't entirely or thoroughly believe that women (or racial minorities, or...) are very smart. Similarly, maybe my spiritual attitudes are also mixed up and multivocal." This is a more profound kind of self-challenge, a fuller refusal to indulge in self-flattery. It highlights the uncomfortable truth that our self-image is often ill-tuned to reality.


    Although all four defenses of intellectualism have some merit, none is decisive. This tangle of reasons leaves us in approximately a tie so far. But we haven't yet come to...

    The most important reason to reject intellectualism about belief:

    Given the central role of the term "belief" in philosophy of mind, philosophy of action, epistemology, and philosophy of language, we should reserve the term for the most important thing in the vicinity.

    Both intellectualism and broad-based views have some grounding in ordinary and philosophical usage. We are at liberty to choose between them. Given that choice, we should prefer the account that picks out the aspect of our psychology that most deserves the central role that "belief" plays in philosophy and folk psychology.

    What we sincerely say, what we intellectually endorse, is important. But it is not as important as how we live our way through the world generally. What I say about the intellectual equality of the sexes is important, but not as important as how I actually treat people. My sincere endorsements of religious or atheistic attitudes are important, but they are only a small slice of my overall religiosity or lack of religiosity.

    On a broad-based view of belief, to believe that the sexes are equal, or that Heaven exists, or that snow is white, is to steer one's way through the world, in general, as though these propositions are true, not only to be disposed to say they are true. It is this overall pattern of self-steering that we should care most about, and to which we should, if we can do so without violence, attach the philosophically important term "belief".

    [image source]

    Tuesday, July 28, 2015

    Podcast Interview of Me, about Ethicists' Moral Behavior

    ... other topics included rationalization and confronting one's moral imperfection,

    at Rationally Speaking.

    Thanks, Julia, for your terrific, probing questions!

    Friday, July 24, 2015

    Cute AI and the ASIMO Problem

    A couple of years ago, I saw the ASIMO show at Disneyland. ASIMO is a robot designed by Honda to walk bipedally with something like the human gait. I'd entered the auditorium with a somewhat negative attitude about ASIMO, having read Andy Clark's critique of Honda's computationally-heavy approach to robotic locomotion (fuller treatment here); and the animatronic Mr. Lincoln is no great shakes.

    But ASIMO is cute! He's about four feet tall, humanoid, with big round dark eyes inside what looks a bit like an astronaut's helmet. He talks, he dances, he kicks soccer balls, he makes funny hand gestures. On the Disneyland stage, he keeps up a fun patter with a human actor. ASIMO's gait isn't quite human, but his nervous-looking crouching run only makes him that much cuter. By the end of the show I thought that if you gave me a shotgun and told me to blow off ASIMO's head, I'd be very reluctant to do so. (In contrast, I might quite enjoy taking a shotgun to my darn glitchy laptop.)

    Another case: ELIZA was a simple computer program written in the 1960s that would chat with a user, drawing on a small set of pre-programmed response templates to imitate a non-directive psychotherapist (“Are such questions on your mind often?”, “Tell me more about your mother.”). Apparently, some users mistook it for human and spent long periods chatting with it.
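
    To convey how little machinery such a program needs, here is a toy sketch of ELIZA-style template matching in Python. The patterns and canned replies are my own illustrative inventions, not Weizenbaum's actual script, but the basic trick, matching a keyword and echoing a canned response, is the same.

        import random
        import re

        # Toy ELIZA-style responder: scan for a keyword pattern and
        # return a canned, non-directive reply. Purely illustrative.
        RULES = [
            (re.compile(r"\bmother\b", re.I), "Tell me more about your mother."),
            (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
            (re.compile(r"\?\s*$"), "Are such questions on your mind often?"),
        ]
        DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

        def respond(user_input):
            """Return the reply for the first matching template, else a default."""
            for pattern, reply in RULES:
                match = pattern.search(user_input)
                if match:
                    return reply.format(*match.groups())
            return random.choice(DEFAULTS)

        print(respond("I am worried about my laptop"))
        # -> "Why do you say you are worried about my laptop?"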

    I assume that ASIMO and ELIZA are not proper targets of substantial moral concern. They have no more consciousness than a laptop computer, no more capacity for genuine joy and suffering. However, because they share some of the superficial features of human beings, people might come improperly to regard them as targets of moral concern. And future engineers could presumably create entities with an even better repertoire of superficial tricks. When I discussed this issue with my sister, she mentioned a friend who had been designing a laptop that would scream and cry when its battery ran low. Imagine that!

    Conversely, suppose that it's someday possible to create an Artificial Intelligence so advanced that it has genuine consciousness, a genuine sense of self, real joy, and real suffering. If that AI also happens to be ugly or boxy or poorly interfaced, it might tend to attract less moral concern than is warranted.

    Thus, our emotional responses to AIs might be misaligned with the moral status of those AIs, due to superficial features that are out of step with the AI's real cognitive and emotional capacities.

    In the Star Trek episode "The Measure of a Man", a scientist who wants to disassemble the humanoid robot Data (sympathetically portrayed by a human actor) says of the robot, "If it were a box on wheels, I would not be facing this opposition." He also points out that people normally think nothing of upgrading the computer systems of a starship, though that means discarding a highly intelligent AI.

    I have a cute stuffed teddy bear I bring to my philosophy of mind class on the day devoted to animal minds. Students scream in shock when, without warning in the middle of the class, I suddenly punch the teddy bear in the face.

    Evidence from developmental and social psychology suggests that we are swift to attribute mental states to entities with eyes and movement patterns that look goal directed, much slower to attribute mentality to eyeless entities with inertial movement patterns. But of course such superficial features needn’t track underlying mentality very well in AI cases.

    Call this the ASIMO Problem.

    I draw two main lessons from the ASIMO Problem.

    First is a methodological lesson: In thinking about the moral status of AI, we should be careful not to overweight emotional reactions and intuitive judgments that might be driven by such superficial features. Low-quality science fiction -- especially low-quality science fiction films and television -- does often rely on audience reaction to such superficial features. However, thoughtful science fiction sometimes challenges or even inverts these reactions.

    The second lesson is a bit of AI design advice. As responsible creators of artificial entities, we should want people to neither over- nor under-attribute moral status to the entities with which they interact. Thus, we should generally try to avoid designing entities that don’t deserve moral consideration but to which normal users are nonetheless inclined to give substantial moral consideration. This might be especially important in the design of children’s toys: Manufacturers might understandably be tempted to create artificial pets or friends that children will love and attach to -- but we presumably don’t want children to attach to a non-conscious toy instead of to parents or siblings. Nor do we presumably want to invite situations in which users might choose to save an endangered toy over an endangered human being!

    On the other hand, if we do someday create genuinely human-grade AIs who merit substantial moral concern, it would probably be advisable to design them in a way that would evoke the proper range of moral emotional responses from normal users.

    We should embrace an Emotional Alignment Design Policy: Design the superficial features of AIs in such a way that they evoke the moral emotional reactions that are appropriate to the real moral status of the AI, whatever it is, neither more nor less.

    (What is the real moral status of AIs? More soon! In the meantime, see here and here.)

    [image source]

    Sunday, July 19, 2015

    Philosophy Via Facebook? Why Not?

    An adaptation of my June blog post What Philosophical Work Could Be, in today's LA Times.


    Academic philosophers tend to have a narrow view of what counts as valuable philosophical work. Hiring, tenure, promotion and prestige depend mainly on one's ability to produce journal articles in a particular theoretical, abstract style, mostly in reaction to a small group of canonical and 20th-century figures, for a small readership of specialists. We should broaden our vision.

    Consider the historical contingency of the journal article, a late-19th century invention. Even as recently as the middle of the 20th century, leading philosophers in Western Europe and North America did important work in a much broader range of genres: the fictions and difficult-to-classify reflections of Sartre, Camus and Unamuno; Wittgenstein's cryptic fragments; the peace activism and popular writings of Bertrand Russell; John Dewey's work on educational reform.

    Popular essays, fictions, aphorisms, dialogues, autobiographical reflections and personal letters have historically played a central role in philosophy. So also have public acts of direct confrontation with the structures of one's society: Socrates' trial and acceptance of the hemlock; Confucius' inspiring personal correctness.

    It was really only with the generation hired to teach the baby boomers in the 1960s and '70s that academic philosophers' conception of philosophical work became narrowly focused on the technical journal article.

    continued here.

    Tuesday, July 14, 2015

    The Moral Lives of Ethicists

    [published today in Aeon Magazine]

    None of the classic questions of philosophy are beyond a seven-year-old's understanding. If God exists, why do bad things happen? How do you know there's still a world on the other side of that closed door? Are we just made of material stuff that will turn into mud when we die? If you could get away with killing and robbing people just for fun, would you? The questions are natural. It's the answers that are hard.

    Eight years ago, I'd just begun a series of empirical studies on the moral behavior of professional ethicists. My son Davy, then seven years old, was in his booster seat in the back of my car. "What do you think, Davy?" I asked. "People who think a lot about what's fair and about being nice – do they behave any better than other people? Are they more likely to be fair? Are they more likely to be nice?"

    Davy didn’t respond right away. I caught his eye in the rearview mirror.

    "The kids who always talk about being fair and sharing," I recall him saying, "mostly just want you to be fair to them and share with them."

    When I meet an ethicist for the first time – by "ethicist", I mean a professor of philosophy who specializes in teaching and researching ethics – it's my habit to ask whether ethicists behave any differently to other types of professor. Most say no.

    I'll probe further: Why not? Shouldn't regularly thinking about ethics have some sort of influence on one’s own behavior? Doesn't it seem that it would?

    To my surprise, few professional ethicists seem to have given the question much thought. They'll toss out responses that strike me as flip or are easily rebutted, and then they'll have little to add when asked to clarify. They'll say that academic ethics is all about abstract problems and bizarre puzzle cases, with no bearing on day-to-day life – a claim easily shown to be false by a few examples: Aristotle on virtue, Kant on lying, Singer on charitable donation. They'll say: "What, do you expect epistemologists to have more knowledge? Do you expect doctors to be less likely to smoke?" I'll reply that the empirical evidence does suggest that doctors are less likely to smoke than non-doctors of similar social and economic background. Maybe epistemologists don’t have more knowledge, but I'd hope that specialists in feminism would exhibit less sexist behavior – and if they didn't, that would be an interesting finding. I'll suggest that relationships between professional specialization and personal life might play out differently for different cases.

    It seems odd to me that our profession has so little to say about this matter. We criticize Martin Heidegger for his Nazism, and we wonder how deeply connected his Nazism was to his other philosophical views. But we don’t feel the need to turn the mirror on ourselves.

    The same issues arise with clergy. In 2010, I was presenting some of my work at the Confucius Institute for Scotland. Afterward, I was approached by not one but two bishops. I asked them whether they thought that clergy, on average, behaved better, the same or worse than laypeople.

    "About the same," said one.

    "Worse!" said the other.

    No clergyperson has ever expressed to me the view that clergy behave on average morally better than laypeople, despite all their immersion in religious teaching and ethical conversation. Maybe in part this is modesty on behalf of their profession. But in most of their voices, I also hear something that sounds like genuine disappointment, some remnant of the young adult who had headed off to seminary hoping it would be otherwise.

    In a series of empirical studies – mostly in collaboration with the philosopher Joshua Rust of Stetson University – I have explored the moral behavior of ethics professors. As far as I'm aware, Josh and I are the only people ever to have done so in a systematic way.

    Here are the measures we looked at: voting in public elections, calling one's mother, eating the meat of mammals, donating to charity, littering, disruptive chatting and door-slamming during philosophy presentations, responding to student emails, attending conferences without paying registration fees, organ donation, blood donation, theft of library books, overall moral evaluation by one's departmental peers based on personal impressions, honesty in responding to survey questions, and joining the Nazi party in 1930s Germany.

    [continued in the full article here]