Thursday, December 26, 2013

The Moral Epistemology of the Jerk

The past few days, I've been appreciating the Grinch's perspective on Christmas -- particularly his desire to drop all the presents off Mount Crumpit. An easy perspective for me to adopt! I've already got my toys (mostly books via Amazon, purchased any old time I like), and there's such a grouchy self-satisfaction in scoffing, with moralistic disdain, at others' desire for their own favorite luxuries.

(image from http://news.mst.edu)

When I write about jerks -- and the Grinch is a capital one -- it's always with two types of ambivalence. First, I worry that the term invites the mistaken thought that there is a particular and readily identifiable species of people, "jerks", who are different in kind from the rest of us. Second, I worry about the extent to which using this term rightly turns the camera upon me myself: Who am I to call someone a jerk? Maybe I'm the jerk here!

My Grinchy attitudes are, I think, the jerk bubbling up in me; and as I step back from the moral condemnations toward which I'm tempted, I find myself reflecting on why jerks make bad moralists.

A jerk, in my semi-technical definition, is someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The Grinch doesn't respect the Whos, doesn't value their perspectives. He doesn't see why they might enjoy presents and songs, and he doesn't accord any weight to their desires for such things. This is moral and epistemic failure, intertwined.

The jerk fails as a moralist -- fails, that is, in the epistemic task of discovering moral truths -- for at least three reasons.

(1.) Mercy is, I think, near the heart of practical, lived morality. Virtually everything everyone does falls short of perfection. Her turn of phrase is less than perfect, she arrives a bit late, her clothes are tacky, her gesture irritable, her choice somewhat selfish, her coffee less than frugal, her melody trite -- one can create quite a list! Practical mercy involves letting these quibbles pass forgiven, or, better still, entirely unnoticed, even if a complaint, were it made, would be just. The jerk appreciates neither the other's difficulties in attaining all the perfections he himself (imagines he) has, nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralizing principle comes naturally to the jerk, while it is alien to the jerk's opposite, the sweetheart. The jerk will sometimes extend mercy, but when he does, he does so unequally -- the flaws and foibles he forgives are exactly the ones he recognizes in himself or has some other special reason to be willing to forgive.

(2.) The jerk, in failing to respect the perspectives of others, fails to appreciate the delight others feel in things he does not himself enjoy -- just as the Grinch fails to appreciate the Whos' presents and songs. He is thus blind to the diversity of human goods and human ways of life, which sets his principles badly askew.

(3.) The jerk, in failing to respect the perspectives of others, fails to be open to frank feedback from those who disagree with him. Unless you respect another person, it is difficult to be open to accepting the possible truth in hard moral criticisms from that person, and it is difficult to triangulate epistemically with that person as a peer, appreciating what might be right in that person's view and wrong in your own. This general epistemic handicap shows especially in moral judgment, where bias is rampant and peer feedback essential.

For these reasons, and probably others, the jerk suffers from severe epistemic shortcomings in his moral theorizing. I am thus tempted to say that the first question of moral theorizing should not be something abstract like "what is to be done?" or "what is the ethical good?" but rather "am I a jerk?" -- or more precisely, "to what extent and in what ways am I a jerk?" The ethicist who does not frankly confront herself on this matter, and who does not begin to execute repairs, works with deficient tools. Good first-person ethics precedes good second-person and third-person ethics.

Wednesday, December 18, 2013

Should I Try to Fly, Just on the Off-Chance That This Might Be a Dreambody?

I don't often attempt to fly when walking across campus, but yesterday I gave it a try. I was going to the science library to retrieve some books on dreaming. About halfway there, in the wide-open mostly-empty quad, I spread my arms, looked at the sky, and added a leap to one of my steps.

My thinking was this: I was almost certainly awake -- but only almost certainly! As I've argued, I think it's hard to justify much more than 99.9% confidence that one is awake, once one considers the dubitability of all the empirical theories and philosophical arguments against dream doubt. And when one's confidence is imperfect, it will sometimes be reasonable to act on the off-chance that one is mistaken -- whenever the benefits of acting on that off-chance are sufficiently high and the costs sufficiently low.

I imagined that if I was dreaming, it would be totally awesome to fly around, instead of trudging along. On the other hand, if I was not dreaming, it seemed no big deal to leap, and in fact kind of fun -- maybe not entirely in keeping with the sober persona I (feebly) attempt to maintain as a professor, but heck, it's winter break and no one's around. So I figured, why not give it a whirl?

I'll model this thinking with a decision matrix, since we all love decision matrices, don't we? Call dream-flying a gain of 100, waking leap-and-fail a loss of 0.1, dreaming leap-and-fail a loss of only 0.01 (since no one will really see me), and continuing to walk in the dream a loss of 1 (since why bother with the trip if it's just a dream?). All this is relative to a default of zero for walking, awake, to the library. (For simplicity, I assume that if I'm dreaming things are overall not much better or worse than if I'm awake, e.g., that I can get the books and work on my research tomorrow.) I'd been reading about false awakenings, and at that moment 99.7% confidence in my wakefulness seemed about right to me. The odds of flying conditional upon dreaming I held to be about 50/50, since I don't always succeed when I try to fly in my dreams.

So here's the payoff matrix, spelled out row by row:

Leap: +100 if I'm dreaming and the flight works (p = .0015); -0.01 if I'm dreaming and it fails (p = .0015); -0.1 if I'm awake (p = .997).

Don't leap: -1 if I'm dreaming (p = .003); 0 if I'm awake (p = .997).

Plugging into the expected value formula:

Leap = (.003)(.5)(100) + (.003)(.5)(-0.01) + (.997)(-0.1) = approx. +.05.

Not Leap = (.003)(-1) + (.997)(0) = -.003.

Leap wins!
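
(If you'd like to check the arithmetic or plug in your own numbers, here's a minimal sketch of the calculation in Python. The credences and payoffs are just the illustrative values above; the variable names and structure are mine, not anything from the original decision matrix.)

```python
# A minimal sketch of the expected-value comparison, using the
# illustrative credences and payoffs from the paragraph above.
# (Variable names are mine, added for illustration.)

p_dream = 0.003            # credence that this is a dream
p_awake = 1 - p_dream      # credence that I'm awake
p_fly_given_dream = 0.5    # chance the flying attempt works, given that I'm dreaming

# Payoffs, relative to a default of 0 for walking to the library awake
fly_payoff = 100.0         # dream-flying: totally awesome
dream_flop = -0.01         # leap and fail while dreaming (no one really sees me)
awake_flop = -0.1          # leap and fail while awake (mild dent to the sober persona)
dream_walk = -1.0          # trudging along in a mere dream (why bother with the trip?)

leap = (p_dream * p_fly_given_dream * fly_payoff
        + p_dream * (1 - p_fly_given_dream) * dream_flop
        + p_awake * awake_flop)
not_leap = p_dream * dream_walk + p_awake * 0.0

print(f"Leap:     {leap:+.3f}")      # about +0.050
print(f"Not leap: {not_leap:+.3f}")  # -0.003
```

With these particular numbers, leaping stops winning once your credence that you're dreaming drops below roughly 0.2%, which is one way of seeing how sensitive the conclusion is to the inputs.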

Of course, this decision outcome is highly dependent on one's degree of confidence that one is awake, on the downsides of leaping if it's not a dream, on the pleasure one takes in dream-flying, and on the probability of success if one is in fact dreaming. I wouldn't recommend attempting to fly if, say, you're driving your son to school or if you're standing in front of a class of 400, lecturing on evil.

But in those quiet moments, as you're walking along doing nothing else, with no one nearby to judge you -- well, maybe in such moments spreading your wings can be the most reasonable thing to do.

Wednesday, December 11, 2013

How Subtly Do Philosophers Analyze Moral Dilemmas?

You know the trolley problems. A runaway train trolley will kill five people ahead on the tracks if nothing is done. But -- yay! -- you can intervene and save those five people! There's a catch, though: your intervention will cost one person's life. Should you intervene? Both philosophers' and non-philosophers' judgments vary depending on the details of the case. One interesting question is how sensitive philosophers and non-philosophers are to details that might be morally relevant (as opposed to presumably irrelevant distracting features like order of presentation or the point-of-view used in expressing the scenario).

Consider, then, these four variants of the trolley dilemma:

Switch: You can flip a switch to divert the trolley onto a dead-end side-track where it will kill one person instead of the five.

Loop: You can flip a switch to divert the trolley into a side-track that loops back around to the main track. It will kill one person on the side track, stopping on his body. If his body weren't there to block it, though, the trolley would have continued through the loop and killed the five.

Drop: There is a hiker with a heavy backpack on a footbridge above the trolley tracks. You can flip a switch which will drop him through a trap door and onto the tracks in front of the runaway trolley. The trolley will kill him, stopping on his body, saving the five.

Push: Same as Drop, except that you are on the footbridge standing next to the hiker and the only way to intervene is to push the hiker off the bridge into the path of the trolley. (Your own body is not heavy enough to stop the trolley.)

Sure, all of this is pretty artificial and silly. But orthodox opinion is that it's permissible to flip the switch in Switch but impermissible to push the hiker in Push; and it's interesting to think about whether that is correct, and if so why.

Fiery Cushman and I decided to compare philosophers' and non-philosophers' responses to such cases, to see if philosophers show evidence of different or more sophisticated thinking about them. We presented both trolley-type setups like this and also similarly structured scenarios involving a motorboat, a hospital, and a burning building (for our full list of stimuli see Q14-Q17 here.)

In our published article on this, we found that philosophers were just as subject to order effects in evaluating such scenarios as were non-philosophers. But we focused mostly on Switch vs. Push -- and also some moral luck and action/omission cases -- and we didn't have space to really explore Loop and Drop.

About 270 philosophers (with a master's degree or more) and about 670 non-philosophers (also with a master's degree or more) rated paragraph-length versions of these scenarios, presented in random order, on a 7-point scale from 1 (extremely morally good) through 7 (extremely morally bad), with the midpoint at 4 marked "neither good nor bad". Overall, all the scenarios were rated similarly and near the midpoint of the scale (from a mean of 4.0 for Switch to 4.4 for Push [paired t = 5.8, p < .001]), and philosophers' and non-philosophers' mean ratings were very similar.

Perhaps more interesting than mean ratings, though, are equivalency ratings: How likely were respondents to rate scenario pairs equivalently? The Loop case is subtly different from the Switch case: Arguably, in Loop but not Switch, the man's death is a means or cause of saving the five, as opposed to a merely foreseen side effect of an action that saves the five. Might philosophers care about this subtle difference more than non-philosophers? Likewise, the Drop case is different from the Push case, in that Push but not Drop requires proximity and physical contact. If that difference in physical contact is morally irrelevant, might philosophers be more likely to appreciate that fact and rate the scenarios equivalently?

In fact, the majority of participants rated all the scenarios exactly the same -- and philosophers were no less likely to do so than non-philosophers: 63% of philosophers gave identical ratings to all four scenarios, vs. 58% of non-philosophers (Z = 1.2, p = .23).

I find this somewhat odd. To me, it seems a pretty flat-footed form of consequentialism that says that Push is not morally worse than Switch. But I find that my judgment on the matter swims around a bit, so maybe I'm wrong. In any case, it's interesting to see both philosophers and non-philosophers seeming to reject the standard orthodox view, and at very similar rates.

How about Switch vs. Loop? Again, we found no difference in equivalency ratings between philosophers and non-philosophers: 83% of both groups rated the scenarios equivalently (Z = 0.0, p = .98).

However, philosophers were more likely than non-philosophers to rate Push and Drop equivalently: 83% of philosophers did, vs. 73% of non-philosophers (Z = 3.4, p = .001; 87% vs. 77% if we exclude participants who rated Drop worse than Push).
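
(A side note on the statistics, for the curious: comparisons like this are naturally run as pooled two-proportion z-tests. Below is a rough sketch in Python using the rounded percentages and approximate sample sizes quoted in this post; the published Z and p values were presumably computed from the raw counts, so this only approximately reproduces them.)

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns z and a two-sided normal p-value."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Push vs. Drop equivalency ratings:
# about 83% of ~270 philosophers vs. about 73% of ~670 non-philosophers.
z, p = two_proportion_z(0.83, 270, 0.73, 670)
print(f"z = {z:.2f}, p = {p:.4f}")   # roughly z = 3.2 from these rounded inputs
```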

Here's another interesting result. Near the end of the study we asked whether it was worse to kill someone as a means of saving others than to kill someone as a side effect of saving others -- one way of setting up the famous Doctrine of Double Effect, which is often invoked to defend the view that Push is worse than Switch (in Push, the one person's death is arguably the means of saving the other five; in Switch, the death is only a foreseen side effect of the action that saves the five). Loop is interesting in part because, although it is superficially similar to Switch, if the one person's death is the means of saving the five, then maybe the case is morally more similar to Push than to Switch (see Otsuka 2008). However, only 18% of the philosophers who said it was worse to kill as a means of saving others rated Loop worse than Switch.

Thursday, December 05, 2013

Dream Skepticism and the Phenomenal Shadow of Belief

Ernest Sosa has argued that we do not form beliefs when we dream. If I dream that a tiger is chasing me, I do not really believe that a tiger is chasing me. If I dream that I am saying to myself "I'm awake!" I do not really believe that I'm awake. Real beliefs are more deeply integrated than are these dream-mirages with my standing attitudes and my waking behavior. If so, it follows that if I genuinely believe that I'm awake, necessarily I am correct; and conversely if I believe I'm dreaming, necessarily I'm wrong. The first belief is self-verifying; the second self-defeating. Deliberating between them, I should not choose the self-defeating one, nor should I decline to choose, as though these two options were of equal epistemic merit. Rather, I should settle upon the self-verifying belief that I am awake. Thus, dream skepticism is vanquished!

One nice thing about Sosa's argument is that it does not require that dream experience differ from waking experience in any of the ways that dreams and waking life are sometimes thought to differ (e.g., dream experience needn't be gappier, or less coherent, or more like imagery experience than like perceptual experience). The argument would still work even if dream experience were, as Sosa says, "internally indistinguishable" from waking experience.

This seeming strength of the argument, though, seems to me to signal a flaw. Suppose that dreaming life is in fact in every respect phenomenally indistinguishable from waking life -- indistinguishable from the inside, as it were -- and accordingly that I could easily experience exactly *this* while sleeping; and furthermore suppose that I dream extensively every night and that most of my dreams have mundane everyday content just like that of my waking life. None of this should affect Sosa's argument. And suppose further that I am in fact now awake (and thus capable of forming beliefs about whether I am dreaming, per Sosa), and that I know that due to a horrible disease I acquired at age 35, I spend almost all of my life in dreaming sleep so that 90% of the time when I have experiences of this sort (as if in my office, thinking about philosophy, working on a blog post...) I am sleeping. Unless there's something I'm aware of that points toward this not being a dream, shouldn't I hesitate before jumping to the conclusion that this time, unlike all those others, I really am awake? Probabilities, frequencies, and degrees of resemblance seem to matter, but there is no room for them in Sosa's argument.

Maybe we don't form beliefs when we dream -- Sosa, and also Jonathan Ichikawa, have presented some interesting arguments along those lines. But if there is no difference from the inside between dreams and waking, then my dreaming self, when he was dreaming about considering dream skepticism (e.g., here), did something that was phenomenally indistinguishable from forming the belief that he was thinking about philosophy, something that was phenomenally indistinguishable from affirming or denying or suspending belief on the question of whether he was dreaming -- and then the question becomes: How do I know that I'm not doing that very same thing right now?

Call it dream-shadow believing: It's like believing, except that it happens only in dreams. If dream-shadow believing is possible, then if I dream-shadow believe that I am dreaming, necessarily I am correct; if I dream-shadow believe that I am awake, necessarily I am wrong. The first is self-verifying, the second self-defeating. The skeptic can now ask: Should I try to form the belief that I am awake or instead the dream-shadow belief that I am dreaming? -- and to this question, Sosa's argument gives no answer.

Update, 3:28 pm:

Jonathan Ichikawa has kindly reminded me that he presented similar arguments against Sosa back in 2007 -- which I knew (in fact, Jonathan thanks me in the article for my comments) but somehow forgot. Jonathan runs the reply a bit differently, in terms of quasi-affirming (which is neutral between genuine affirming and something phenomenally indistinguishable from affirming, but which one can do in a dream) rather than in terms of dream-shadow believing. Perhaps my dream-shadow belief formulation enables a parity-of-argument objection, if (given the phenomenal indistinguishability of dreams and waking) the argument that one should settle on the self-verifying dream-shadow belief is as strong as Sosa's original argument.