Friday, July 22, 2016

Under What Conditions Would You Upload into a Simulated World?

I have a new science fiction story out in Clarkesworld (text version, audiocast). One of my philosophical aims in that story is to reflect on the conditions under which it would make sense to upload into a simulated world.

"Mind uploading" is the hypothetical process of copying your mind into a computational device. If futurists such as Ray Kurzweil are correct, we might be only a few decades away from mind uploading as a real technological option.

When mind uploading is presented in science fiction, usually the following are assumed:

(1.) The uploaded mind retains all the important psychological features of the original mind (including, of course, a genuine stream of conscious experiences).

(2.) The uploaded mind experiences a world as rich as the world of the original mind. If the mind is uploaded into a robot, that world might just be the same world as the world of the original mind. If the mind is uploaded into a Sim or a Matrix world -- that is, an artificial environment of some sort -- that artificial environment is usually assumed to be as rich and interesting as the mundane environment of everyday Earth for normally embodied humans.

Under these conditions, if we also assume that uploading has substantial advantages in improving longevity, allowing backups, improving one's cognitive powers, or giving oneself access to a new rich vein of experiences and possibilities, then it probably makes sense to upload, unless one is strongly committed to traditional human embodiment or to projects or relationships that would no longer be possible post-upload.

However, it seems natural to suppose that if uploading does someday become possible, the first uploads will not have features (1) and (2). The first uploaded people (or "people"), even if genuinely conscious, might be seriously psychologically impaired and unable to access a full, rich world of experiences.

There might, however, still be advantages of uploading in terms of longevity and experienced pleasure.

In "Fish Dance", the narrator is presented with the option of uploading his mind under these conditions:

(a.) the world is tiny: a single dance floor, with no input from the larger world outside;
(b.) his mind is limited: he will have some memories from his pre-uploaded self, but he won't fully understand them, and furthermore he won't be able to lay down new memories that last for more than an hour or so;
(c.) his dance-floor experiences will be extremely pleasurable: ideal experiences of dancing ecstasy;
(d.) he will experience this extreme pleasure for a billion years.

Also relevant, of course, are the relationships and projects that he would be leaving behind. (In our narrator's case, a recently deceased child, a living teenage child who wants him to upload, a stale marriage, and an okay but not inspiring career as a professor.)

I say the relationships and projects "he" will leave behind, but of course one interesting question is whether it makes sense to call the uploaded being "him", that is, the same "him" as the narrator.

If it seems obvious to you what one should do under such conditions, the parameters are of course adjustable: We can increase or decrease psychological function, psychological similarity, and quality of memory. We can increase or decrease the size of the world and the range of available input. We can increase or decrease the pleasure and longevity. We can modify the relationships and projects that would be left behind.

You or your descendants might actually face some version of this decision.

----------------------------------------------

"Fish Dance" (Clarkesworld #118, July 2016)

Related blogpost: Goldfish Pool Immortality (May 30, 2014)

[image source]

Tuesday, July 19, 2016

First Sentences Project (Part Three)

Background:

How much can you predict about a story from its title and first sentence alone? Aliette de Bodard, Ann Leckie, Cati Porter, Rachel Swirsky, and I are in the process of finding out! We have taken the first sentences of five stories from July’s issue of Lightspeed Magazine (kindly provided to us in advance by John Joseph Adams) and attempted to predict the plot of each story. [Note: Ann and Rachel attempted to predict based on the first sentence alone, while Aliette, Cati, and I also looked to the title for clues.]

Our first story was “Magnifica Angelica Superable” by Rochita Loenen-Ruiz.

Our second story was "The One Who Isn't" by Ted Kosmatka.

Our third story is "Some Pebbles in the Palm" by Kenneth Schneyer. It begins

Once upon a time, there was a man who was born, who lived, and who died.

Um... what? The point-of-view character appears already to have died in the first sentence! Where could it possibly be going?

(I've put a link to the full story at the end of the post.)

-----------------------------------------------------

Our Guesses (order of authorship has been randomized):

Eric Schwitzgebel:

The title is zoomed in on the small and trivial -- a few pebbles held in the palm, not even an exact number of pebbles, just “some”. In contrast, the first sentence encompasses a whole lifetime as if from far above.

“Once upon a time”. A fairy tale. There will of course be a moral. By the end we will realize that life is just pebbles in the palm. “All we are is Dust in the Wind, dude”.

There will be literal pebbles. One will almost kill someone, or end a relationship.

Ann Leckie:

A digital entity decides to try embodiment. And not halfway, either, they want to do the whole thing from conception to death. Most digital entities who want to try meatspace just animate already existing machines, or hitch a ride for a while with a human or animal. The digital entity in this story will have to build the tank itself, to grow itself.

Factions both in meatspace and digital space try to make political and/or religious hay out of the entity's project and its results. Attempts are made to destroy the body and delete the entity's backups. The attempts are apparently successful, except for one thing they all forgot, that the entity didn't.

Cati Porter:

Classic opening. It throws us back to bedtime stories, fairy tales. The man could be anyone. We know nothing about him except maybe that he is ordinary, like any other man who has been born, has lived, has died.

The title makes me think of Jack and the Beanstalk with the beans in the palm, except these are pebbles, so (presumably?) will not grow anything.

But anything can happen in a story.

Could these be magic pebbles? Maybe.

I suspect this story takes place in a far away land and involves peasants and castles and bad things happening to ordinary people and maybe even good things happening to bad people. Or bad people who are ordinary in their badness, so much so that they don’t seem bad, just misguided. And maybe even some good people who are so ordinary that you don’t really root for them.

As to where the story goes from here I have no idea.

Rachel Swirsky:

This is a fantasy. The character died in 2010, at the age of 70. Much of the story takes place in his memories from his early teens, in the fifties, when he spent a lot of time wandering the country near his house. There was a creek that ran between houses, which he could follow from his house to others, and when he was eleven, something significant happened while he was skipping stones. It was not a major dramatic or traumatic event, but something that formed his life, a small disappointment that prepared him to understand the universe was indifferent to him.

The story is narrated after his death. The voice of the narration is a distant third person, with access to his mind, but a remote tone.

The story of the character’s life is one of disappointment. He is, emotionally, a version of the character from Death of a Salesman. However, in death, he has a chance that Willy Loman didn’t – he can learn to see past what he felt and lost to something a little sweeter in the afterlife. A friend, perhaps. Or time to wander alone in the country of his childhood.

Aliette de Bodard:

This is going to be a poetic, lyrical story. There's an interesting contrast between the fairytale format, and the cold reality of a life as stated in the opening sentence. I'm assuming that this will be focused on what can be kept/gained from a particular lifetime. Also possibly might feature several iterations of the same lifetime, or rebirths or some other kind of mechanism for multiple lives?

Most of us agree that the first sentence gives the story a big-picture flavor: The story will encompass at least a whole lifetime and its meaning. It might have some fairytale aspects (Schwitzgebel, Porter, de Bodard). It might involve a perspective on a lifetime from a transcendent point of view, whether as a digital entity (Leckie), in the afterlife (Swirsky), or via a mechanism for rebirth (de Bodard).

Full story here!

-----------------------------------------------------

Further Thoughts from the Contributors:

What I like about this line by Swirsky:

It sets up a traditional storytelling device with “once upon a time,” giving the reader a clue about the tone of what will follow. The rhythm of the sentence does likewise. The question it asks is interesting because it’s basically a subversion of the idea of what should grab the reader’s attention. This is a very obvious statement, albeit phrased differently than most people would phrase it. So, why is it important enough to say?

Diagnosis of our guesses (warning: SPOILERS) by Schwitzgebel:

Before getting into the what-we-got-right and what-we-got-wrong, two things:

(1.) This story is partly about first sentences! The second paragraph begins “The first few words of a story are a promise. We will have this kind of experience, not that one.”

(2.) In part two of the project, I'd guessed that “The One Who Isn’t” would have a smug, thinks-he’s-so-wise narrator. I was wrong. This story has that kind of narrator – more than any other story I can remember ever reading. It's delicious how annoying I find this narrator. And he totally got me with the pomegranate seed thing, the smartass. Grrrrr! (To be clear: My annoyance is at the artfully conveyed smugness of the narrator, not at Schneyer, who brilliantly crafted that narrative voice for this particular story.)

What we got right: Swirsky was right that there was a youthful memory about pebbles, with mostly symbolic significance (though the narrator falsely says that “the stones are not a symbol”). She was also right that there’s a kind of Willy-Loman-like failed search for significance. Porter picked up on the ordinariness of the man – a token of all men. De Bodard was right that the story involves multiple lifetimes, including some reflection on whether anything is learned in rebirth. And I was right about the zoomed-in / zoomed-out perspective and “all we are is Dust in the Wind, dude” – the sophomoric seeming-profundity of that bullshit philosophy. I’m even going to give myself double credit for this, since at the end of the story, the narrator actually says “I’m atoms on the wind.”

What we got wrong: Contra Porter and me, the pebbles aren’t important to the plot of the story. De Bodard was wrong about its being poetical and lyrical; it’s unlyrical meta-fiction. Contra Porter, no fairyland castles. Contra Leckie, no digital entity re-embodiment.

Hey, we did really well, given how little we had to work with! De Bodard and Swirsky came surprisingly close to capturing the spirit of the piece, and I called it on the crappy philosophy. It’s hard to hold Leckie’s miss too much against her, given how boldly specific it was.

Group grade: 75%.

[image source]

Thursday, July 14, 2016

Wang Yangming on the Unity of Knowing and Acting

I've been slowly acquainting myself with the later Chinese philosophical tradition. Wang Yangming (1472-1529) is one of the striking figures -- a leading neo-Confucian scholar, as well as an important provincial administrator and military commander. Perhaps his best-known thesis is the unity of knowing and acting:

There have never been people who know but do not act. Those who "know" but do not act simply do not yet know.... Seeing a beautiful color is a case of knowing, while loving a beautiful color is a case of acting. As soon as one sees that beautiful color, one naturally loves it. It is not as if you first see it and only then, intentionally, you decide to love it.... The same is true when one says that someone knows filial piety or brotherly respect. That person must already have acted with filial piety or brotherly respect before one can say she knows them. One cannot say she knows filial piety or brotherly respect simply because she knows how to say something filial or brotherly (Ivanhoe 2009 trans., p. 140-141).

Contemporary readers might tend to find this claim implausible. Aren't we all familiar with weakness of will or willful wrong action in which we do something that we know we shouldn't -- cheating on an exam, having the next drink, skipping out of a commitment to enjoy some time on the beach? Bryan Van Norden suggests that the doctrine is intended as a kind of pragmatically motivated overstatement to get bookworms out of their seats, and that the strongest plausible claim in the vicinity is the claim that knowing a moral truth implies at least being motivated to act on it, even if one doesn't ultimately act accordingly. In contemporary Anglophone philosophy, the view that moral judgments are necessarily motivating is called motivational internalism.

I find something attractive in Wang Yangming's doctrine, which I think doesn't quite map onto standard contemporary motivational internalist views. The doctrine resonates with my dispositional approach to belief, according to which to believe some proposition is just to act and react as though that proposition is true. With some caveats, I might endorse the unity of believing and acting.

Here are the caveats:

1. "Action" should be interpreted very broadly. Reaction is a kind of action. Omission is a kind of action. Thinking and feeling is a kind of action. That's broader than standard use but I don't think outrageously broad. The dispositions constitutive of believing that your parents deserve your care include not only dispositions to act in outwardly caring ways but also dispositions to react inwardly with concern if they are threatened and not to forget what is important to them. If you don't generally act and react caringly, I'm inclined to say, you might sincerely judge that your parents deserve your care, and you might (in Wang Yangming's words) "know how to say something filial", but you don't fully possess the dispositional structure constitutive of believing that your parents deserve your care. (If you fail in only a few respects it might still be accurate enough to say that you believe it, similarly to its being accurate enough to call someone an extravert who is mostly disposed to extraversion but who has introverted moments.) To believe something is, on my view, to live generally as though it is so. That's the sense of "action" in question.

2. "Belief" differs from judgment and knowledge. A broad, action-based view of what is constitutive of belief only works if we also have a vocabulary for describing cases in which we sincerely verbally endorse something that we don't consistently act on. I prefer the term "judgment" for sincere endorsements. Thus, on my view, you might sincerely judge that such-and-such is the case, but if you don't generally move through the world as if it were so, you don't fully, or deeply, or completely, or univocally believe it. Similarly, I've argued (contra Wang Yangming), you can know something without believing it in this strong action-encompassing sense of "belief".

3. We often choose moral mediocrity. We can rationally choose actions we believe are morally bad. Morality is an important consideration for almost everyone, but we are often satisfied with far less than ideal moral behavior. Compare valuing health: You can believe that kale is good for your health, and thoroughly live in recognition of that fact, without always choosing kale over brownies, because health is not always your paramount concern.

You might see some tension between the third caveat and the surface interpretation of the doctrine. If the child doesn't act and react in a filial way, maybe she still does fully and unambivalently believe that her parents deserve her care, but she chooses moral mediocrity on this matter? Wouldn't that be a dissociation between believing and acting?

I do think such cases are possible, maybe even common, but that's not the kind of dissociation between believing and acting that I, and maybe Wang Yangming, want to deny. Here's one way of articulating this type of case: You don't value moral goodness much but insofar as you value it you'll care for your parents if it isn't too much trouble. That actually expresses your attitude toward filial duty; that's your belief as manifest in your actions. This differs from the case of the bookish Confucian scholar who says "I know and truly believe that filial duty is vastly more important than personal pleasures, but I just can't bring myself to act accordingly yet." Wang Yangming wants to call bullfeathers on that sort of thing. With more self-knowledge and honesty, this scholar might better say something like, "Yes, intellectually and theoretically, I judge the Confucians correct in highly prioritizing filial duty; but I don't yet personally find myself believing that filial piety is so important -- not deeply, fully, and unambivalently in the way that I deeply, fully, and unambivalently believe that this is a quill in my hand and that I must eat to stay alive."

Philosophers and others interested in the nature of attitudes can legitimately conceptualize belief in different ways. But I think there are practical advantages to accepting a broad-based notion of belief as constituted by the whole range of your choices and reactions, sharply distinguished from a more intellectual notion of judgment or sincere verbal endorsement which need not be reflected in action.

(For more on the practical advantages of a broad, action-based view of belief, see "Some Pragmatic Considerations Against Intellectualism about Belief"; April 7, 2016.)

[image source]

Tuesday, July 12, 2016

First Sentences Project (Part Two)

Background:

How much can you predict about a story from its title and first sentence alone? Aliette de Bodard, Ann Leckie, Cati Porter, Rachel Swirsky, and I aim to find out! We have taken the first sentences of five stories from July’s issue of Lightspeed Magazine (kindly provided to us in advance by John Joseph Adams) and attempted to predict the plot of each story. [Note: Ann and Rachel attempted to predict based on the first sentence alone, while Aliette, Cati, and I also looked to the title for clues.]

Our first story was “Magnifica Angelica Superable” by Rochita Loenen-Ruiz.

Our second story is "The One Who Isn't" by Ted Kosmatka. The first sentence is:

It starts with light.

Only four words. Not much to work with. We are undaunted! (I've put a link to the full story at the end of the post.)

-----------------------------------------------------

Our Guesses (order of authorship has been randomized):

Ann Leckie:

This is an alien invasion story. The invading fleet lights up the sky of Earth on its arrival. The aliens are very difficult to communicate with, but they have been listening to radio and television broadcasts, and they're aware that the inhabitants of Earth aren't intelligent enough to understand that the aliens' presence here, and their establishment of a new, alien-designed regime, is not only fore-ordained, but the best possible thing that could happen to the Earth. Since "deserves to survive" and "won't bow to their new alien overlords" are mutually exclusive states, the aliens are very sorry but they're going to have to exterminate most life on the planet before they move in.

But they're not monsters! They'll give us a chance--a test. They select a group of Earthlings who will have the task of convincing the aliens not to kill us all: a teenage girl, a retired police officer, a Home Economics teacher, a dolphin, a stray dog, and a five pound sack of potatoes. Do not question the aliens' choices, that won't help with the test. The story is told from the POV of either the teenage girl or the sack of potatoes.

It ends with light.

Rachel Swirsky:

This is a biblical retelling from an alternate perspective. It is about God waking to consciousness in the universe of His own creation, because before He made it, He was unaware of Himself and His needs and desires. The essence of the universe was in Him, and now that it has emerged, He is coming fully into Himself.

The God may be a She, or possibly a They or a Sie or an E.

She experiences the beginning of the universe in a dreamy way. The images and some of the broad outlines of events from the Bible are rendered, but her understanding and experience of them is strange and not what we’d expect.

Perhaps The God is actually an angel, who woke from nothingness into immortal light.

The story ends with the fall, when the angel sees the world e loved and knew split in two, a Lucifer e loved, and a God e loved--and the evil that comes into the world is es understanding that the universe has fundamentally been severed, and the rift will never heal.

This is the knowledge e breathes into the apple when e stands in the garden, while Adam and Eve and God are still naming the animals, and the serpent is biding his time in branches. This is the knowledge Eve is doomed to receive.

Cati Porter:

Such a spare opening!

Four words. It could be an origin story? There may be something supernatural, or holy, or stark & futuristic.

My guess is that the story is told in very plain language, and the tone may be cool, distant, impersonal, but I also want to leave open the possibility that it may be beautiful like a chilly starry desert night.

Maybe there is magic involved. Or a Messiah.

From the title, this feels like maybe this is a story about being an outsider, being different. Maybe all *are* and only one *isn’t*, whatever that may be. So maybe this is a story about nonconformity, or about mutation, or about a journey, or about someone left behind. It could also be about being ostracized.

The title to me is more telling than the first line. I also like the alliteration of “starts” and “light”. The author could have chosen “begins” but the sound of the words together would be very different, and hold different potentialities of meaning.

“It begins with light” has two unstressed syllables followed by stressed then unstressed and a final stress. It complicates the line. “It starts with light” is iambic, the natural rhythm of speech. All of the “t” sounds work together. Four monosyllabic words. And saying “starts” reminds me of starting a fire, which produces light. You don’t begin a fire. Ever.

Which is not to say that this story is about fire, but it could be.

Eric Schwitzgebel:

A variation on “let there be light”! Thus God began the universe and thus the narrator self-importantly begins the story. This story will be metaphysical. The narrator will think himself profound, though he’ll be a little coy about it at first, backing away from the seeming-profundity of the first sentence by moving to something concrete. Maybe he will actually in the end prove to have been a little profound, but not as profound as he thinks he is. Yes, “he”. Why do I assume that?

The “one who isn’t” is a pit, a blank, a being-space unfilled. We start with light and end in a dark hole. We will think ourselves sadder but wiser.

Aliette de Bodard:

Oddly enough this is making me think of quantum states and quantum mechanics--and someone who might possibly only exist in certain states/under some kind of observation. The "light" reference also feels very Biblical, wondering if this is going to be a creation type of story? Am assuming it's an SF story with a strong tinge of philosophical musings.

Leckie’s prediction is the outlier here. I really do hope it’s an alien invasion story told from the point of view of a sack of potatoes.

The rest of us ran with the apparent Biblical allusion – a creation story, with metaphysical themes, either God inventing erself (Swirsky), the origin of an outsider (Porter), a narrator’s attempt to self-importantly teach us wisdom (Schwitzgebel), or a story of someone who exists only when being observed (de Bodard).

Leckie says it ends with light; I say it ends in a dark hole. Who is right? Well, here's the story!

-----------------------------------------------------

Further Thoughts from the Contributors:

What I like about this line by Swirsky:

“It starts” invites the question “and how does it continue?” I want to know, so I would definitely read the second sentence. The sentence also has a mythic sense to it, possibly establishing tone, and evokes a biblical reference.

Diagnosis of our guesses (warning: SPOILERS) by Schwitzgebel:

Oh, it ends dark, dark. And yet also, in a different way, it ends with a light again – so Leckie and I were both right.

And yes, it’s totally a creation story, a creation and re-creation story, with metaphysical themes; so we got that right, except Leckie who now really must go write the aliens-speak-with-potatoes story herself. (Pretty please!) Swirsky was right that there’s a waking into consciousness (but not of the creator erself); de Bodard was right that observation is central (though the characters’ existence doesn’t depend on it); Porter is right that it’s partly about being an outsider or being left behind. Porter was also right that the language is mostly simple and spare.

No alien invasion contra Leckie; not really a Biblical retelling contra Swirsky; no self-importantly profound narrator contra me.

Group grade: 65%.

Next up: Part Three: "Some Pebbles in the Palm" by Kenneth Schneyer.

Tuesday, July 05, 2016

The First Sentences Project (Part One)

Background:

How much can you predict about a story from its title and first sentence alone? Aliette de Bodard, Ann Leckie, Cati Porter, Rachel Swirsky and I aim to find out! We have taken the first sentences of five stories from July’s issue of Lightspeed Magazine (kindly provided to us in advance by John Joseph Adams) and attempted to predict the plot of each story. [Note: Ann and Rachel attempted to predict based on the first sentence alone, while Aliette, Cati, and I also looked to the title for clues.]

Our first story is “Magnifica Angelica Superable” by Rochita Loenen-Ruiz:

A woman from the street came in laughing from the cold.

From these eleven words (sixteen if you include title and author name), do you already have a sense of tone, character, setting? Are you already starting to form a conception of how the plot might go? I’ve pasted our guesses below. But first, you might want to make your own guess.

At the end of this post I’ll link to the story (available for free online), so you can see how the story actually unfolds.

-----------------------------------------------------

Our Guesses (order of authorship has been randomized):

Rachel Swirsky:

This is a poetic, slipstream story about a small shop that sells things to tourists, some mass produced, and others by local artists. The shop is pretty and eccentric, but not magic. People enjoy being there. Some of the customers, however, are magic, and do strange things in the shop. The story is light, not dark, with personal conflict, but generally happy resolutions. It is a story that has optimism in the world.

Eric Schwitzgebel:

“Superable”? I guess this is the opposite of “insuperable”? Angelica will be magnificent and yet someone will top her. But it will be a moral victory. She is laughing in the cold. She is unperturbed! She will win but only in a local way, to her own satisfaction. Her win will look like a loss to others.

Why does Angelica come in “laughing from the cold” rather than come “in from the cold, laughing”? There’s a whiff of garden-path / misplaced modifier in this sentence, as though she is laughing because of the cold. Well, maybe she is laughing from the cold! We will never be able to quite figure out, in this story, whether Angelica is laughing despite her misfortune or instead because of it.

Aliette de Bodard:

[I recuse myself from this one because Rochita is a good friend, and I’ve read the submitted version as a beta reader.]

Cati Porter:

This one almost definitely involves magic. The title alone sounds like a title bestowed on someone who has unnatural abilities. And the name Angelica signals to me that this character is probably not at all angelic.

I think we can assume that the place the woman is coming into is a public place, maybe a storefront or a pub or even a church. Probably not someone’s home! Although I suppose that too is a possibility.

My guess is that the woman is responsible for something mysterious happening and a series of improbable events leading up to a surprise ending, maybe initiated by a turn where we learn that the woman isn’t evil after all. Or maybe she really is. I suppose that would be the real surprise.

Ann Leckie:

This story is set on a planet with very, very long seasons. It’s proverbial on this world that a change of season brings other, sometimes catastrophic changes, but this is right smack in the middle of a fifteen year long winter, a time everyone here thinks of as rock solid, stable and safe. But it’s not safe, and not stable, and the protagonist’s life is about to come apart, along with quite a few other people’s.

For this first story, the five of us are all over the map. Is it mainly about a shop (Swirsky), a society (Leckie), or Angelica (Schwitzgebel and Porter)? Why is it cold? Is Angelica really angelic? How central a role will magic play? Will it end happily (Swirsky), unhappily (Leckie), with a sudden surprising twist (Porter), or with a mixed moral victory (Schwitzgebel)? Is the project of guessing the plot of a story from the first line alone actually impossible, as any sensible person would think?

Read the story to find out!

-----------------------------------------------------

Further Thoughts from the Contributors:

What I like about this line by Swirsky:

This sentence leaves me in uncertain territory, which does make me curious about the second line, to see if it will orient me. It’s phrased oddly, in a way that suggests a vividly sensory story, and a poetic one. I think the main power of it comes from the combination of “laughter” and “cold”, which is unexpected, and a pleasing image.

Diagnosis of our guesses (warning: SPOILERS) by Schwitzgebel:

What we got right: Rachel was right that the story is light, with personal conflict but happy resolutions. Ann was right that the story is about big social changes. Cati was right that Angelica has unnatural abilities and that the opening scene is in a public gathering place.

What we got wrong: Contra Rachel, no knick-knack shop and no magic customers, and contra Ann no long-winter planet (but I like the bold specificity of those guesses!). Contra me, “superable” is not the opposite of “insuperable” (maybe more like “able to make someone [else] super”?) and Angelica’s victory is total, not merely moral.

Mixed: Contra Ann, things don’t fall apart for Angelica, but they sure do for some other people! I think I was probably right that Angelica’s laughter was somewhere between being because of and despite things not having gone her way. Cati was partly right about Angelica’s confused moral status as good or evil: We think she’s good, but the men in the society might not agree, at first. Rachel was right that the story has poetic elements, but I don’t think it is especially so.

I’ll give us a group grade of 40% for this story. We can do better!

-----------------------------------------------------

Continued at First Sentences Project (Part Two)

Wednesday, June 29, 2016

Short Story Competition: Philosophy Through Fiction

[cross-posted, with bracketed comments, from the APA Blog]

We are inviting submissions for the short story competition “Philosophy Through Fiction”, organized by Helen De Cruz (Oxford Brookes University), with editorial board members Eric Schwitzgebel (UC Riverside), Meghan Sullivan (University of Notre Dame), and Mark Silcox (University of Central Oklahoma). The winner of the competition will receive a cash prize of US$500 (funded by the Berry Fund of the APA) and their story will be published in Sci Phi Journal.

Rationale

As philosophers, we frequently tell stories in the form of brief thought experiments. In the past and today, philosophers have also written longer, richer stories. Famous examples include Simone de Beauvoir, Iris Murdoch, and Jean-Paul Sartre. Fiction allows us to explore ideas that cannot be easily dealt with in the format of a journal article or monograph, and helps us to reach a broader audience, as the enduring popularity of philosophical novels shows. The aim of this competition is to encourage philosophers to use fiction to explore philosophical ideas, thereby broadening our scope and toolkit.

Eligibility

Short stories that are eligible for this competition must be some form of speculative fiction (this includes, but is not limited to, science fiction, fantasy, horror, alternative history, or magical realism), and must explore one or more philosophical ideas. These can be implicit; there is no restriction on which philosophical ideas you explore.

The story should be unpublished, which means it should not have appeared in a magazine, edited collection, or other venue. It should also not be posted on an author’s personal website or similar online venue, at least from the time of submission until the editorial board’s decision or – if the story is accepted – until at least six months after its publication in Sci Phi Journal. (This is a common publishing norm in speculative fiction.)

The competition is open to everyone, regardless of geographic location, career stage, age, or specialization. In other words, it is also open to, e.g., (graduate) students and philosophers outside of academia. We encourage submissions from philosophers who are new to writing fiction. Submissions should be at least 1,000 words and no longer than 7,500 words.

The submission should be accompanied by a brief “Food for Thought” section (maximum word count: 500, not part of the overall word count), where the author explains the philosophical ideas behind the piece. Examples of such Food for Thought sections appear at the end of these stories: Unalienable Right by Leenna Naidoo and Immortality Serum by Michaele Jordan. [Evaluation of the quality of the Food for Thought sections will be an important part of the process. Please feel free to write something longer and more substantive than these two examples, up to 500 words.]

Dates

The deadline for this competition is February 1, 2017. The winner will be announced by March 31, 2017. The winning story will appear in the following issue of Sci Phi Journal.

Submission requirements

Please submit your story to philosophythroughfiction@gmail.com. You can use the same e-mail address for queries.

Your story should be anonymized, i.e., contain no name or other form of identification. It should have a distinct title (not “philosophy story submission” but e.g., “The Icy Labyrinth”), and it should be in a clearly legible font of at least 12 points. The file format should be doc, docx or rtf. Please use the subject line “submission for short story competition” for your e-mail. Attach the story (the filename should be an abbreviated form of your story title, e.g. “labyrinth.rtf”) to the e-mail. The Food for Thought section should be at the bottom of the same document, with a separate header “Food for Thought”. Please include word counts for both the story and the Food for Thought at the top of the document.

Place your full name, institutional affiliation or home address and the full title of your story in the body of the e-mail. We cannot accept submissions past the deadline of 1 February 2017.

We are planning to publish an edited volume of invited speculative fiction philosophy stories. Strong pieces entered into the competition may be considered for this volume. If you do not want your submission to be considered for this volume, please state this explicitly in the body of your e-mail. In the absence of this, we will assume you agree that your story is simultaneously considered for the competition and the volume.

Review process

All stories will first be vetted for basic quality by a team of readers at Sci Phi Journal. Stories that pass this first stage will be sent in anonymized format to a board of reviewers who will select the winning story. The reviewers will examine how effectively the stories explore philosophical ideas. By entering the competition you agree that their decision is final.

Funding

This story competition is supported by a grant from the American Philosophical Association’s Berry Fund for Public Philosophy, and is hosted at Oxford Brookes University.

*

For inspiration, check out recent discussions on the [APA] blog about reading and writing philosophical fiction.

Monday, June 27, 2016

Susan Schneider on How to Prevent a Zombie Dictatorship

Last week I posted "How to Accidentally Become a Zombie Robot", discussing Susan Schneider's recent TEDx-talk proposal for checking whether silicon chips can be conscious. Susan has written the following reply, which she invited me to share on the blog.

---------------------------------------------

Eric,

Greetings from a café in Lisbon! Your new, seriously cool I,Brain case raises an important point about my original test in my TED talk and is right out of a cyberpunk novel. A few initial points, for readers, before I respond, as TED talks don’t give much philosophical detail:

1. It may be that the microchips are made of something besides silicon (right now, e.g., carbon nanotubes and graphene are alternate substrates under development). I don’t think this matters – the issues that arise are the same.

2. It will be important that any chip test involve the very same kind of chip substrate and design as that used in the AI in question.

3. Even if a kind of chip works in humans, there is still the issue of whether the AI in question has the right functional organization for consciousness. Since AI could be very different than us, and it is even difficult to figure out these issues in the case of biological creatures like the octopus, this may turn out to be very difficult.

4. For the relation of the chip test to intriguing ideas of Ned Block and Dave Chalmers on this issue, see a paper on my website (a section in “The Future of Philosophy of Mind”, based on an earlier op-ed of mine).

5. As Eric knows, it is probably a mistake to assume that brain chips will be functional isomorphs. I’m concerned with the development of real, emerging technologies, because I am concerned with finding a solution to the problem of AI consciousness based on an actual test. Brain chips, already under development at DARPA, may eventually be faster and more efficient information processors, enhance human consciousness, or they may be low-fidelity copies of what a given minicolumn does. This depends upon how medicine progresses...

Back to my original “Chip Test” (in the TED talk). It’s 2045. You are ready to upgrade your aging brain. You go to I,Brain. They can gradually replace parts of your biological brain with microchips. You are awake during the surgery, suppose, and they replace a part of your biological brain that is responsible for some aspect of consciousness with a microchip. Do you lose consciousness of something (e.g., do you lose part of your visual field)? If so, you will probably notice. This would be a sign that the microchip is the wrong stuff. Science could try and try to engineer a better chip, but if after years of trying, they never could get it right, perhaps we should conclude that that kind of substrate (e.g., silicon) does not give rise to consciousness.

On the other hand, if the chips work, that kind of substrate is in principle the right stuff (it can, in the right mental environment, give rise to qualia) although there is a further issue of whether a particular AI that has such chips has the right organization to be conscious (e.g., maybe it has nothing like a global workspace, like a Rodney Brooks style robot, or maybe it is superintelligent, and has mastered everything already and eliminated consciousness because it is too slow and inefficient).

Eric, your test is different, and I agree that someone should not trust that test. This would involve a systematic deception. What kind of society would do this? A zombie dictatorship, of course, which seeks to secretly eliminate conscious life from the planet. :-)

But I think you want to apply your larger point to the original test. Is the idea: couldn’t a chip be devised that would falsely indicate consciousness to the person? (Let’s call this a “sham qualia chip.”) I think it is, so here’s a reply: God yes, in a dystopian world. We had better watch out! That would be horrible medicine…and luckily, it would involve a good deal of expense and effort (systematically fooling someone about, say, their visual experience would be a major undertaking), so science would likely first seek a genuine chip substitute that preserved consciousness. (Would a sham qualia chip even clear the FDA :-) ? Maybe only if microchips were not the right stuff and it was the best science could do. After all, people would always be missing lost visual qualia, and it is best that they not suffer like this....). But crucially, since this would involve a deliberate effort on the part of medical researchers, we would know this, and so we would know that the chip is not a true substitute. Unless, that is, we are inhabitants of a zombie dictatorship.

The upshot: It would involve a lot of extra engineering effort to produce a sham qualia chip, and we would hopefully know that the sham chip was really a device designed to fool us. If this was done because the genuine chip substitute could not be developed, this would probably indicate that chips aren’t the right stuff, or that science needs to go back to the drawing board.

I propose a global ban on sham qualia chips in the interest of preserving democracy.

---------------------------------------------

I (Eric) have some thoughts in response. I'm not sure it would be harder to make a sham qualia chip than a genuine qualia chip. Rather than going into detail on that now, I'll let it brew for a future post. Meanwhile, others' reactions welcomed too!

Thursday, June 23, 2016

How to Accidentally Become a Zombie Robot

Susan Schneider's beautifully clear TEDx talk on the future of robot consciousness has me thinking about the possibility of accidentally turning oneself into a zombie. (I mean "zombie" in the philosopher's sense: a being who outwardly resembles us but who has no stream of conscious experience.)

Suppose that AI continues to rely on silicon chips and that -- as Schneider thinks is possible -- silicon chips just aren't the right kind of material to host consciousness. (I'll weaken these assumptions below.) It's 2045 and you walk into the iBrain store, thinking about having your degenerating biological brain replaced with more durable silicon chips. Lots of people have done it already, and now the internet is full of programmed entities that claim to be happily uploaded people who have left their biological brains behind. Some of these uploaded entities control robotic or partly organic bodies; others exist entirely in virtual environments inside of computers. If Schneider is right that none of these silicon-chip-instantiated beings is actually conscious, then what has actually happened is that all of the biological people who "uploaded" actually committed suicide, and what exist are only non-conscious simulacra of them.

You've read some philosophy. You're worried about exactly that possibility. Maybe that's why you've been so slow to visit the local iBrain store. Fortunately, the iBrain company has discovered a way to upload you temporarily, so you can try it out -- so that you can determine introspectively for yourself whether the uploaded "you" really would be conscious. Federal regulations prohibit running an uploaded iBrain at the same time that the original source person is conscious, but the company can scan your brain non-destructively while you are sedated, run the iBrain for a while, then pause your iBrain and update your biological brain with memories of what you experienced. A trial run!

From the outside, it looks like this: You walk into the iBrain store, you are put to sleep, a virtual you wakes up in a robotic body and says "Yes, I really am conscious! Interesting how this feels!" and then does some jogging and jumping jacks to test out the body. The robotic body then goes to sleep and the biological you wakes up and says, "Yes, I was conscious even in the robot. My philosophical doubts were misplaced. Upload me into iBrain!"

Here's the catch: After you wake, how do you know those memories are accurate memories of having actually been conscious? When the iBrain company tweaks your biological neurons to install the memories of what "you" did in the robotic body, it's hard to see how you could be sure that those memories aren't merely presently conscious seeming-memories of past events that weren't actually consciously experienced at the time they occurred. Maybe the robot "you" really was a zombie, though you don't realize that now.

You might have thought of this possibility in advance, and so you might remain skeptical. But it would take a lot of philosophical fortitude to sustain that skepticism across many "trial runs". If biological you has lots of seeming-memories of consciousness as a machine, and repeatedly notices no big disruptive change when the switch is flipped from iBrain to biological brain, it's going to be hard to resist the impression that you really are conscious as a machine, even if that impression is false -- and thus you might decide to go ahead and do the upload permanently, unintentionally transforming yourself into an experienceless zombie.

But maybe if a silicon-chip brain could really duplicate your cognitive processes well enough to drive a robot that acts just as you would act, then the silicon-chip brain really would have to be conscious? That's a plausible (though disputable) philosophical position. So let's weaken the philosophical and technological assumptions a little. We can still get a skeptical zombie scenario going.

Suppose that the iBrain company tires of all the "trial runs" that buyers foolishly insist on, so the company decides to save money by not actually having the robot bodies do any of those things that the trial-run users think they do. Instead, when you walk in for a trial, they sedate you and, based on what they know about your just-scanned biological brain, they predict what you would do if you were "uploaded" into a robotic body. They then give you false memories of having done those things. You never actually do any of those things or have any of those thoughts during the time your biological body is sedated, but there is no way to know that introspectively after waking. It would seem to you that the uploading worked and preserved your consciousness.

There can be less malicious versions of this mistake. Behavior and cognition during the trial might be insufficient for consciousness, or for full consciousness, while memory is nonetheless vivid enough to lead to retrospective attributions of full consciousness.

In her talk, Schneider suggests that we could tell whether silicon chips can really host consciousness by trying them out and then checking whether consciousness disappears when we do so; but I'm not sure this test would work. If nonconscious systems (whether silicon chip or otherwise) can produce both (a.) outwardly plausible behavior, and (b.) false memories of having really experienced consciousness, then we might falsely conclude in retrospect that consciousness is preserved. (This could be so whether we are replacing the whole brain at once or only one subsystem at a time, as long as "outward" means "outside of the subsystem, in terms of its influence on the rest of the brain".) We might then choose to replace conscious systems with nonconscious ones, accidentally transforming ourselves into zombies.

[image source]

----------------------------------------------

Update June 27:

Susan Schneider replies!

----------------------------------------------

Tuesday, June 14, 2016

Possible Architectures of Group Minds: Memory

by Eric Schwitzgebel and Rotem Herrmann

Suppose you have 200 bodies. "You"? Well, maybe not exactly you! Some hypothetical science fictional group intelligence.

How might memory work?

For concreteness, let's assume a broadly Ann Leckie "ancillary" setup: two hundred humanoid bodies on a planet's surface, each with an AI brain remotely connected to a central processor on an orbiting starship.

(For related reflections on the architecture of group perception, see this earlier post.)

Central vs. Distributed Storage

For simplicity, we will start by assuming a storage and retrieval representational architecture for memory.

A very centralized memory architecture might have the entire memory store in the orbiting ship, which the humanoid bodies access any time they need to retrieve a memory. A humanoid body, for example, might lean down to inspect a flower which it wants to classify, simultaneously sending a request for taxonomic information to the central unit. In contrast, a very distributed memory architecture might have all of the memory storage distributed in the humanoid bodies, so that if the humanoid doesn't have classification information in its own local brain it will have to send a request around to other humanoids to see if they have that information stored.

A bit of thought suggests that a completely centralized memory architecture probably wouldn't succeed if the humanoid bodies are to have any local computation (as opposed to being merely dumb limbs). Local computation presumably requires some sort of working memory: If the local humanoid is reasoning from P and (P -> Q) to Q, it will presumably have to retain P in some way while it processes (P -> Q). And if the local humanoid is reaching its arm forward to pluck the flower, it will presumably have to remember its intention over the course of the movement if it is to behave coherently.

It's natural, then, to think that there will be at least a short-term store in each local humanoid, where it retains information relevant to its immediate projects, available for fast and flexible access. There needn't be a single short term store: There could be one or more ultra-fast working memory modules for quick inference and action, and a somewhat slower short-term or medium-term store for contextually relevant information that might or might not prove useful in the tasks that the humanoid expects to confront in the near future.

Conversely, although substantial long-term information, not relevant to immediate tasks, might be stored in each local humanoid, if there is a lot of potential information that the group mind wants to be able to access -- say, snapshots of the entire internet plus recorded high-resolution video feeds from each of its bodies -- it seems that the most efficient solution would be to store that information in the central unit rather than carrying around 200 redundant copies in each humanoid. Alternatively, if the central unit is limited in size, different pieces could be distributed among the humanoids, accessible each to the other upon request.
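
To make this concrete, here is a minimal sketch in Python of the hybrid arrangement just described, with invented class and method names; it is one toy way a local short-term store plus a central long-term store might be wired together, not a claim about how such a system would actually be engineered.

```python
# Hypothetical sketch: each humanoid keeps a small, fast short-term store
# and falls back to the central unit on the orbiting ship for everything else.

class CentralStore:
    """Long-term store held by the central unit."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class HumanoidMemory:
    """Local memory of a single humanoid body."""
    def __init__(self, central, capacity=100):
        self.central = central
        self.capacity = capacity
        self.short_term = {}

    def remember_locally(self, key, value):
        # Evict an arbitrary old item if the local store is full.
        if len(self.short_term) >= self.capacity:
            self.short_term.pop(next(iter(self.short_term)))
        self.short_term[key] = value

    def recall(self, key):
        # Fast path: the local short-term store.
        if key in self.short_term:
            return self.short_term[key]
        # Slow path: request the item from the central unit and cache it.
        value = self.central.get(key)
        if value is not None:
            self.remember_locally(key, value)
        return value
```

On this toy picture, "downloading" information into a particular humanoid is just a remember_locally call aimed at that body's local store, and a local cache miss becomes a round trip to the ship.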

Procedural memories or skills might also be transferred between long-term and short-term stores, as needed for the particular tasks the humanoids might carry out. Situation-specific skills, for example -- piloting, butterfly catching, Antarean opera singing -- might be stored centrally and downloaded only when necessary, while basic skills such as walking, running, and speaking Galactic Common Tongue might be kept in the humanoid rather than "relearned" or "re-downloaded" for every assignment.

Individual humanoids might also locally acquire skills, or bodily modifications, or body-modifications-blurring-into-skills that are or are not uploaded to the center or shared with other humanoids.

Central vs. Distributed Calling

One of the humanoids walks into a field of flowers. What should it download into the local short-term store? Possibilities might include: a giant lump of botanical information, a giant history of everything known to have happened in that location, detailed algorithms for detecting the presence of landmines and other military hazards, information on soil and wildlife, a language module for the local tribe whose border the humanoid has just crossed, or of course some combination of all these different types of information.

We can imagine the calling decision being reached entirely by the central unit, which downloads information into particular humanoids based on its overview of the whole situation. One advantage of this top-down approach would be that the calling decision would easily reflect information from the other humanoids -- for example, if another one of the humanoids notices a band of locals hiding in the bushes.

Alternatively, the calling decision could be reached entirely by the local unit, based upon the results of local processing. One advantage of this bottom-up approach would be that it avoids delays arising from the transmission of local information to the central unit for possibly computationally-heavy comparison with other sources of information. For example, if the local humanoid detects a shape that might be part of a predator, it might be useful to prioritize a fast call of information on common predators without having to wait for a call-up decision from orbit.

A third option would allow a local representation in one humanoid A to trigger a download into another humanoid B, either directly from the first humanoid or via the central unit. Humanoid A might message Humanoid B "Look out, B, a bear!" along with a download of recently stored sensory input from A and an instruction to the central unit to dump bear-related information into B's short term store.
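
Continuing the hypothetical sketch above (and reusing its CentralStore and HumanoidMemory classes), here is one way the three calling strategies could sit side by side; all names are invented for illustration.

```python
# Hypothetical sketch of the three calling strategies: central push,
# local pull, and peer-triggered downloads.

class GroupMind:
    def __init__(self, central, humanoids):
        self.central = central
        self.humanoids = humanoids  # e.g. {"A": HumanoidMemory(central), ...}

    def central_push(self, name, topic):
        # Top-down: the central unit decides what a given humanoid should hold.
        info = self.central.get(topic)
        if info is not None:
            self.humanoids[name].remember_locally(topic, info)

    def local_pull(self, name, topic):
        # Bottom-up: the humanoid calls up information itself; the central
        # unit is consulted only if the local store misses.
        return self.humanoids[name].recall(topic)

    def peer_trigger(self, sender, receiver, topic, observation):
        # One humanoid passes a warning and its own observation to another,
        # and also asks the central unit to dump topic-related information
        # into the receiver's local store.
        self.humanoids[receiver].remember_locally("alert:" + topic, observation)
        self.central_push(receiver, topic)

# e.g. mind.peer_trigger("A", "B", "bear", "large shape moving in the treeline")
```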

A well-engineered group mind might, of course, allow all three calling strategies. There will still be decisions about how much weight and priority to give to each strategy, especially in cases of...

Conflict

Suppose the central unit has P stored in its memory, while a local unit has not-P. What to do? Here are some possibilities:

Central dictatorship. Once the conflict is detected, the central unit wins, correcting the humanoid unit. This might make especially good sense if the information in the humanoid unit was originally downloaded from the central unit through a noisy process with room for error or if the central unit has access to a larger or more reliable set of information relevant to P.

Central subordination. Once the conflict is detected, the local might overwrite the central. This might make especially good sense if the central store is mostly a repository of constantly updated local information, for example if humanoid A is uploading a stream of sensory information from its short term store into the central unit's long term store.

Voting. If more than one local humanoid has relevant information about P, there might be a winner-take-all vote, resulting in the rewriting of P or not-P across all the relevant subsystems, depending on which representation wins the vote.

Compromise. In cases of conflict there might be compromise instead of dominance. For example, if the central unit has P and one peripheral unit has not-P, they might both write something like "50% likely that P"; analogously if the peripheral units disagree.

Retain the conflict. Another possibility is to simply retain the conflict, rather than changing either representation. The system would presumably want to be careful to avoid deriving conclusions from the contradiction or pursuing self-defeating or contradictory goals. Perhaps contradictory representations could be somehow flagged.

And of course there might be different strategies on different occasions, and the strategies can be weighted, so that if Humanoid A is in a better position than Humanoid B the compromise result might be 80% in favor of Humanoid A, rather than equally weighted.
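
Continuing in the same hypothetical vein, here is a compact way to parameterize the five conflict strategies for a single proposition P, with credences in [0, 1] standing in for the stored representations and the weights implementing the 80/20 sort of split just mentioned; everything here is invented for illustration.

```python
# Hypothetical sketch of the conflict-resolution strategies described above.

def resolve_conflict(central_belief, local_beliefs, strategy="compromise",
                     weights=None):
    """central_belief: credence in [0, 1] that P is true, held centrally.
    local_beliefs: dict mapping humanoid name -> credence in [0, 1].
    weights: optional dict of reliability weights, keyed like the sources."""
    if strategy == "central_dictatorship":
        return central_belief                       # the central unit wins
    if strategy == "central_subordination":
        # A local unit overwrites the center; an arbitrary one stands in here.
        return next(iter(local_beliefs.values()))
    if strategy == "voting":
        votes = [central_belief] + list(local_beliefs.values())
        yes = sum(1 for v in votes if v >= 0.5)
        return 1.0 if yes > len(votes) / 2 else 0.0  # winner-take-all rewrite
    if strategy == "compromise":
        sources = {"central": central_belief, **local_beliefs}
        weights = weights or {name: 1.0 for name in sources}
        total = sum(weights[name] for name in sources)
        return sum(weights[name] * sources[name] for name in sources) / total
    if strategy == "retain":
        # Keep the flagged disagreement rather than resolving it.
        return {"central": central_belief, **local_beliefs}
    raise ValueError("unknown strategy: " + strategy)

# Weighted compromise, 80% in favor of Humanoid A's representation:
# resolve_conflict(1.0, {"A": 0.0}, weights={"central": 1.0, "A": 4.0})  # -> 0.2
```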

Similar possibilities arise for conflicts in memory calling -- for example if the local processors in Humanoid A represent bear-information download as the highest priority, the local processors in Humanoid B represent language-information download as urgent for Humanoid A, and the central unit represents mine detection as the highest priority.

Reconstructive Memory

So far we've been working with a storage-and-retrieval model of memory. But human memory is, we think, better modeled as partly reconstructive: When we "remember" information (especially complex information like narratives) we are typically partly rebuilding, figuring out what must have been the case in a way that brings together stored traces with other more recent sources of information and also with general knowledge. For example, as Bartlett found, narratives retold over time tend to simplify and move toward incorporating stereotypical elements even if those elements weren't originally present; and as Loftus has emphasized, new information can be incorporated into seemingly old memories without the subject being aware of the change (for example memories of shattered glass when a car accident is later described as having been at high velocity).
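For concreteness, here is a toy sketch of that kind of reconstructive drift. The story elements, the schema, and the drift rate are all invented, and real reconstruction would of course draw on far richer pools of information.

```python
import random

# Each "retelling" keeps most stored details but swaps some toward a stereotypical schema.
schema = {"boat": "rowing boat", "activity": "fishing",       "ending": "he died at sunrise"}
trace  = {"boat": "canoe",       "activity": "hunting seals", "ending": "he died at sunrise"}

def reconstruct(current, schema, drift, rng):
    """Rebuild the memory: retain each detail, or replace it with the schema's version."""
    return {k: (schema[k] if rng.random() < drift else v) for k, v in current.items()}

rng = random.Random(0)
retelling = dict(trace)
for n in range(4):
    retelling = reconstruct(retelling, schema, drift=0.3, rng=rng)
    print(n, retelling)
```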

If the group entity's memory is reconstructive, all of the architectural choices we've described become more complicated, assuming that in reconstructing memories the local units and the central units are doing different sorts of processing, drawing on different pools of information. Conflict between memories might even become the norm rather than the exception. And if we assume that reconstructing a memory often involves calling up other related memories in the process, decisions about calling become mixed in with the reconstruction process itself.

Memory Filling in Perception

Another layer of complexity: An earlier post discussed perception as though memory were irrelevant, but an accurate and efficient perceptual process would presumably involve memory retrieval along the way. As our humanoid bends down to perceive the flower, it might draw exemplars or templates of other flowers of that species from long-term store, and this might (as in the human case) influence what it represents as the flower's structure. For example, in the first few instants of looking, it might tentatively represent the flower as a typical member of its species and only slowly correct its representation as it gathers specific detail over time.
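A minimal sketch of that kind of template-based filling-in, with made-up feature values and a simple weighted blend standing in for whatever the real perceptual machinery would do:

```python
def blended_percept(template, observation, evidence_weight):
    """Weighted blend of template and observed features; the weight falls on the observation."""
    return {k: round((1 - evidence_weight) * template[k] + evidence_weight * observation[k], 1)
            for k in template}

template_rose = {"petal_count": 30.0, "stem_length_cm": 40.0}   # long-term exemplar of the species
this_flower   = {"petal_count": 22.0, "stem_length_cm": 55.0}   # what is actually in front of the humanoid

# The evidence weight rises with viewing time, so the percept starts near the
# template and converges on the particular flower.
for t, w in enumerate([0.1, 0.4, 0.8, 1.0]):
    print(t, blended_percept(template_rose, this_flower, w))
```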

Extended Memory

In the human case, we typically imagine memories as stored in the brain, with a sharp division between what is remembered and what is perceived. Andy Clark and others have pushed back against this view. In AI cases, the issue arises vividly. We can imagine a range of cases from what is clearly outward perception to what is clearly retrieval of internally stored information, with a variety of intermediate, difficult-to-classify cases in between. For example: on one end, the group has Humanoid A walk into a newly discovered library and read a new book. We can then create a slippery slope in which the book is digitized and stored increasingly close to the cognitive center of the humanoid (shelf, pocket, USB port, internal atrium...), with increasing permanence.

Also, procedural memory might be partly stored in the limbs themselves with varying degrees of independence from the central processing systems of the humanoid, which in turn can have varying degrees of independence from the processing systems of the orbiting ship. Limbs themselves might be detachable, blurring the border between body parts and outside objects. There need be no sharp boundary between brain, body, and environment.

[image source]

Monday, June 06, 2016

If You/I/We Live in a Sim, It Might Well Be a Short-Lived One

Last week, the famous Tesla and SpaceX CEO and PayPal cofounder Elon Musk said that he is almost certain that we are living in a sim -- that is, that we are basically just artificial intelligences living in a fictional environment in someone else's computer.

The basic argument, adapted from philosopher Nick Bostrom, is this:

1. Probably the universe contains vastly many more artificially intelligent conscious beings, living in simulated environments inside of computers ("sims"), than flesh-and-blood beings living at the "base level of reality" ("non-sims", i.e., not living inside anyone else's computer).

2. If so, we are much more likely to be sims than non-sims.

One might object in a variety of ways: Can AIs really be conscious? Even if so, how many conscious sims would there likely be? Even if there are lots, maybe we can somehow tell that we're not among them. And so on. Even Bostrom thinks it only 1/3 likely that we're sims. But let's run with the argument. One natural next question is: Why think we are in a large, stable sim?

Advocates of versions of the Sim Argument (e.g., Bostrom, Chalmers, Steinhart) tend to downplay the skeptical consequences: The reader is implicitly or explicitly invited to think or assume that the whole planet Earth (at least) is (probably) all in the same giant sim, and that the sim has (probably) endured for a long time and will endure for a long time to come. But if the Sim Argument relies on some version of Premise 1 above, it's not clear that we can help ourselves to such a non-skeptical view. We need to ask what proportion of the conscious AIs (at least the ones relevantly epistemically indistinguishable from us) live in large, stable sims, and what proportion live in small or unstable sims.

I see no reason here for high levels of optimism. Maybe the best way for the beings at the base level of reality to create a sim is to evolve up billions or quadrillions of conscious entities in giant stable universes. But maybe it's just as easy, just as scientifically useful or fun, to cut and paste, splice and spawn, to run tiny sims of people in little offices reading and writing philosophy for thirty minutes, to run little sims of individual cities for a couple of hours before surprising everyone with Godzilla. It's highly speculative either way, of course! That speculativeness should undermine our confidence about which way it might be.

If we're in a sim, we probably can't know a whole lot about the motivations and computational constraints of the gods at the base level of reality. (Yes, "gods".) Maybe we should guess 50/50 large vs. small? 90/10? 99/1? (One reason to skew toward 99/1 is that if there are very large simulated universes, it will only take a few of them to have the sims inside them vastly outnumber the ones in billions of small universes. On the other hand, they might be very much more expensive to run!)
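To see how much the numbers matter, here's a toy calculation (all figures invented) of the chance that a randomly chosen sim lives in a large sim rather than a small one:

```python
def prob_in_large_sim(n_large_runs, pop_per_large, n_small_runs, pop_per_small):
    """Fraction of all sims who live inside one of the large simulation runs."""
    large_pop = n_large_runs * pop_per_large
    small_pop = n_small_runs * pop_per_small
    return large_pop / (large_pop + small_pop)

# A handful of billion-person sims holds its own against a billion ten-person sims...
print(prob_in_large_sim(5, 10**9, 10**9, 10))        # ~0.33
# ...but if large sims are rare enough (say, too expensive), the small sims dominate.
print(prob_in_large_sim(1, 10**9, 10**8, 100))       # ~0.09
```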

If you/I/we are in a small sim, then some version of radical skepticism seems to be warranted. The world might be only ten minutes old. The world might end in ten minutes. Only you and your city might exist, or only you in your room.

Musk and others who think we might be in a simulated universe should take their reasoning to the natural next step, and assign some non-trivial credence to the radically skeptical possibility that this is a small or unstable sim.

-----------------------------------------

Related:

"Skepticism, Godzilla, and the Artificial Computerized Many-Branching You" (Nov 15, 2013).

"Our Possible Imminent Divinity" (Jan 2, 2014).

"1% Skepticism" (forthcoming, Nous).

[image source]

Tuesday, May 31, 2016

Percentage of Female Faculty at Elite U.S. Philosophy Departments, 1930-1979

Jonathan Strassfeld has generated some data on philosophers at eleven elite U.S. PhD programs from 1930-1979 (Berkeley, Chicago, Columbia, Cornell, Harvard, Michigan, Pennsylvania, Princeton, Stanford, UCLA, and Yale [note 1]). Carolyn Dicey Jennings made some corrections and did a gender analysis, finding substantial correlations between the percentage of women in those departments in 1930-1979 and the percentage of women and non-White doctorate recipients from those same departments from 2004-2014.

Starting with Strassfeld's version as of May 27 (hand-correcting for a few errors reported by Jennings and correcting a few more errors that I independently found), I decided to chart the percentage of women faculty in these departments over the period in question. (Here are my raw data. Corrections welcome. Data of this sort are rarely 100% perfect. The general trends, however, should be robust enough that a few errors make no material difference.)

I looked at the time course by taking a snapshot of the faculty every five years starting in 1930 (ending in 1979 rather than 1980). Here's a chart:

UPDATE 2:04 p.m.: Strassfeld has made some further corrections and created this year-by-year chart:

Highlights:

* The 1930, 1935, 1940, and 1945 snapshots contain exactly zero women faculty (compared to 63-71 men during the period).

* The 1950 and 1955 snapshots contain exactly one woman: Elizabeth Flower at Penn (the universities have 98 and 104 recorded men faculty in those years).

* In 1960, Flower is joined in the dataset by Mary Mothersill at Chicago. The 1965 and 1970 snapshots both show five women (3%) among 156 and 191 total faculty respectively.

* In the late 1970s there's a sudden jump to 16/174 (9%) in 1975 and 18/171 (11%) in 1979.

Thus, despite the presence of some highly influential women philosophers in the early to mid 20th century -- for example, Simone de Beauvoir, G.E.M. Anscombe, and Hannah Arendt -- women held a vanishingly tiny proportion of philosophy faculty positions at elite U.S. universities from the 1930s through the early 1960s, even fewer than one might be inclined to think, in retrospect, upon casual consideration.

Some reference points:

* Using data from the National Center for Education Statistics, I estimated 9% women faculty among full time four-year university faculty in the U.S. in 1988 and 12-20% in the 1990s.

* Jennings and I found about 25% women faculty in PGR-rated U.S. PhD-granting departments in 2014.

I find this a helpful reminder that, for all of the continuing gender disparity in philosophy in the 2010s, things are nonetheless much different from the 1950s. Try to imagine the gender environment that Flower and Mothersill operated in!

------------------------------------------

I am also reminded of this autobiographical reflection from Martha C. Nussbaum, from her 1997 book Cultivating Humanity:

When I arrived at Harvard in 1969, my fellow first-year graduate students and I were taken up to the roof of Widener Library by a well-known philosopher of classics. He told us how many Episcopal Churches could be seen from that vantage point. As a Jew (in fact a convert from Episcopalian Christianity), I knew that my husband and I would have been forbidden to marry in Harvard's Memorial Church, which had just refused to accept a Jewish wedding. As a woman I could not eat in the main dining room of the faculty club, even as a member's guest. Only a few years before, a woman would not have been able to use the undergraduate library. In 1972 I became the first female to hold the Junior Fellowship that relieved certain graduate students from teaching so that they could get on with their research. At that time I received a letter of congratulation from a prestigious classicist saying that it would be difficult to know what to call a female fellow, since "fellowess" was an awkward term. Perhaps the Greek language could solve the problem: since the masculine for "fellow" was hetairos, I could be called a hetaira. Hetaira, however, as I knew, is the ancient Greek word not for "fellowess" but for "courtesan."

------------------------------------------

Note 1 (added 11:47 a.m.): This list is drawn from Strassfeld. He explains his selection thus:

I determined which departments to survey recursively, defining the "leading departments" as those whose graduates comprised the faculties of the leading departments. Focusing on the period of 1945-1969, when universities were growing explosively, I found that there was a group of eleven philosophy departments that essentially only hired graduates from among their own ranks and foreign universities - that it was virtually impossible for graduates of any American philosophy departments outside of this group to gain faculty positions at these "leading departments." Indeed, between 1949-1960, no member of their faculty had received a Ph.D. from an American institution outside of their ranks. There were, of course, border cases. Brown, Rockefeller, MIT, and Pittsburgh in particular might have been included. However, I judged that they did not place enough graduates on the faculties of the other leading universities, particularly during the period 1945-1969, for inclusion. This list also aligns closely with contemporary reputational assessments, with ten of the eleven departments ranking in the top 11 in a 1964 poll (Allan Murray Cartter, An Assessment of Quality in Graduate Education).

Also, Strassfeld notes (personal communication) that the list only includes Assistant, Associate, and full Professors, not instructors or lecturers (such as Marjorie Grene, who was an instructor at Chicago in the 1940s).

Thursday, May 26, 2016

Empty Box Rationalization

A hypothetical from Darrell Rowbottom, in conversation: Suppose you are a perfect moral rationalizer. Suppose you know that for any action you want to do, you are clever enough a moral theorist that you could find some plausible-seeming post-hoc justification for it. Would you actually need to come up with the justification? Maybe it's enough just to know in advance that you could come up with one, and not actually do the work?

Think of the savings of time and cognitive effort! Also, since self-serving rationalizations might tend to lead one away from the moral truth, you might be epistemically better off too. With or without an actual filled-in rationalization, you'll be able to feel fine about doing what you want.

Call this Empty Box Rationalization. Why bother to fill the box with an actual rationalization? Simply postulate that a plausible-seeming justification could be found!

Of course, few of us are clever enough moral theorists to take advantage of Empty Box Rationalization without limitation. As skilled as we may happen to be at justifying our actions to ourselves, there will be some actions beyond the pale, which we are incapable of plausibly rationalizing.

However, we might be able to take advantage of Limited Empty Box Rationalization. Limited Empty Box Rationalization differs from full Empty Box Rationalization by confining itself to a range of rationalizable actions. For any action within a certain range, I know that I am clever enough a rationalizer to devise, if I want, some plausible-seeming justification which I would accept upon reflection; and thus I can postulate that such a justification is out there to be found.

Here's an example. Suppose I'm always fifteen minutes late. Every time I show up late, I always manage to find a satisfactory excuse. Sometimes it's traffic. Sometimes it's that I really needed to finish some important task first. Sometimes it's that I got lost. Sometimes it's that I was detained by someone else. I always find some way to let myself off the hook, so that I never feel guilty. Now imagine that today I find myself arriving fifteen minutes late for a meeting with a graduate student. I could, hypothetically, go through the effort of trying to concoct an excuse. But maybe instead of wasting that time, I can just postulate the existence of some plausible excuse or other, so that we can get straight into the meeting without further delay.

(Sure, maybe an actual filled-in excuse from me would serve some kind of function for the other person. I set that aside for these reflections.)

People will differ in their degree of cleverness and thus differ in their working ranges of Limited Empty Box Rationalization. Some will be clever enough reliably to justify 15 minutes of tardiness; others clever enough reliably to justify 30 minutes. Some will be clever enough to justify reneging on wider ranges of commitments, to justify wider ranges of gray-area misconduct, perhaps even to justify, to their own satisfaction, what the rest of us would judge to be plainly morally odious. For one especially skillful example, consider Heidegger on Nazism.

Of course, this isn't fair. If only we were more clever, we too could rationalize such actions! Perhaps for any action that I've done or that I'd really like to do, a clever enough moral theorist could, with enough work, come up with some plausible-seeming justification of it that would satisfy me. But then -- maybe that's good enough! If I know that a cleverer version of myself would believe A, then maybe that knowledge itself suffices to justify A, since who am I to disagree with a cleverer version of myself, who could of course get the better of me in argument?

Advanced Empty Box Rationalization begins with that thought. Advanced Empty Box Rationalization widens the range of Limited Empty Box Rationalization beyond the boundaries of one's own actual rationalizing capacities. For some range of actions wider than one's usual range of rationalizable actions, one justifiably accepts that either one could come up with a plausible-seeming justification that one would accept upon reflection, given that one is motivated to do so, or a cleverer version of oneself could devise such a justification. Perhaps as a limiting case one could accept that an infinitely clever version of oneself could hypothetically justify anything in this manner.

Application of these thoughts to current and past scandals in the profession is left as an exercise for the reader.

Related:

  • Schwitzgebel & Ellis (forthcoming), Rationalization in Moral and Philosophical Thought.

    [image source]

    Tuesday, May 17, 2016

    Whether to Take Peter Singer to McDonalds

    Greetings from Hong Kong!

    I'm highly allergic to shellfish. I'm allergic enough that cross-contamination is an issue: If I'm served something that has been fried on the same surface as shellfish or touched with an implement that has touched shellfish, I will have a minor allergic reaction. Shellfish is so prevalent in the southern coastal Chinese diet that I have minor shellfish reactions at about half of my lunch or evening meals, even if I try to be careful. I've learned that there are only two types of restaurants that are entirely safe: strict Buddhist vegetarian restaurants and McDonalds.

    I was discussing this with my hosts at a university here in Hong Kong. One of the hosts said, "Well, we could go to that Buddhist restaurant that we took [the famous vegetarian philosopher] Peter Singer to". Sounds like a good idea to me! Another host said, "Yes, but that restaurant is so expensive! Too bad there isn't another good Buddhist restaurant around." I suggested that McDonalds would be fine, really. I didn't want to force them to spend a lot of money hosting me.

    It occurred to me that they should have taken Peter Singer to McDonalds, too. Singer is as famous for his argument against luxurious spending as he is for his argument in favor of vegetarianism, and one of his favorite examples of needless luxury spending is high-priced restaurant meals. The idea is that the money you spend on a luxurious restaurant meal could be donated to charity and perhaps save the life of a child living in poverty somewhere.

    So here's my thought. Suppose that the two options are (a) an expensive Buddhist restaurant, maybe $300 Hong Kong dollars per person for 10 people, $3000 Hong Kong dollars total ($400 US dollars), or (b) McDonalds for $500 HKD total ($65 US dollars). The money saved by choosing option b, if donated to an effective charity, is within the ballpark of what could be expected to save one person's life [update: or maybe about a tenth of a life; estimates vary]. On the other hand, the flesh from a steer can generate about 2000 McDonald's hamburgers, so ten people would be eating only 1/200 of a steer. Clearly one [or one tenth of a] human life is more valuable than 1/200 of a steer. Therefore, the university should have taken Peter Singer to McDonalds and donated the savings to an effective charity.
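    For the arithmetically inclined, here's the back-of-the-envelope version (the exchange rate and the burgers-per-steer figure are rough approximations):

```python
HKD_PER_USD = 7.75                       # rough exchange rate
buddhist_total_hkd = 300 * 10            # ~$300 HKD per person, ten people
mcdonalds_total_hkd = 500

savings_usd = (buddhist_total_hkd - mcdonalds_total_hkd) / HKD_PER_USD
fraction_of_steer = 10 / 2000            # ten hamburgers out of ~2000 per steer

print(round(savings_usd), "USD saved")   # roughly $320, donatable to an effective charity
print(fraction_of_steer)                 # 0.005, i.e., 1/200 of a steer
```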

    Of course, there are other costs to McDonalds (other wasteful practices, environmental damage in meat production, etc.) and possibly other benefits to eating at the Buddhist restaurant (supporting good farming practices, possibly putting the profits to good use) -- but it seems unlikely that these differences would cumulatively outweigh the central tradeoff of the unsaved human life vs. 1/200 of a steer.

    If I ever have the chance to take Singer to dinner, I'd like to try this argument out on him and see what he thinks. (I wouldn't be surprised if he has already thought all of this through.)

    Our own dinner decision resolved in favor of the cheap student vegetarian cafeteria nearby, which I think they had been hesitating about because it didn't seem a fancy enough venue for a visiting speaker. But it was perfect for me -- a rather "utilitarian" place, I might say -- and probably where they really should have taken Singer.

    [image source]

    Wednesday, May 11, 2016

    The Gender Situation Is Different in Philosophy

    As Carolyn Dicey Jennings and I have documented, academic philosophy in the United States is highly gender skewed, with gender ratios more characteristic of engineering and the physical sciences than of the humanities and social sciences. However, unlike engineering and the physical sciences, philosophy appears to have stalled out in its progress toward gender parity.

    Some of the best data on gender in U.S. academia are from the National Science Foundation's Survey of Earned Doctorates (SED). In an earlier post, I analyzed the philosophy data since 1973, creating this graph:

    The quadratic fit (green) is statistically much better than the linear fit (red; AICc weights .996 vs. .004), meaning that it is highly unlikely that the apparent flattening is chance variation from a linear trend.
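    For readers curious about the method, here is an illustrative sketch of an AICc comparison between a linear and a quadratic fit. The data below are made up (not the SED numbers), and the parameter-counting convention is one common choice among several.

```python
import numpy as np

def aicc(y, y_hat, k):
    """Small-sample corrected AIC for a least-squares fit with k fitted parameters."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    aic = n * np.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

years = np.arange(1973, 2015)
x = years - years.min()
rng = np.random.default_rng(0)
# A toy series that rises and then flattens out around 27%, plus noise:
pct_women = 27 * (1 - np.exp(-x / 10)) + rng.normal(0, 1.5, len(x))

linear_fit    = np.polyval(np.polyfit(x, pct_women, 1), x)
quadratic_fit = np.polyval(np.polyfit(x, pct_women, 2), x)

print("AICc, linear:   ", round(aicc(pct_women, linear_fit, k=2), 1))
print("AICc, quadratic:", round(aicc(pct_women, quadratic_fit, k=3), 1))
# The lower AICc wins; AICc differences can also be converted into relative weights that sum to 1.
```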

    Since the 1990s, the gender ratio of U.S. PhDs in philosophy has hovered steadily around 25-30%.

    The SED site contains data on gender by broad field, going back to 1979. It is interesting to juxtapose these data with the philosophy data. (The philosophy data are noisier, as you'd expect, due to smaller numbers relative to the SED's broad fields.)

    The overall trend is clear: Although philosophy's percentages are currently similar to the percentages in engineering and the physical sciences, the trend in philosophy has flattened out in the 21st century, while engineering and the physical sciences continue to make progress toward gender parity. All the broad areas show roughly linear upward trends, except for the humanities, which appear to have flattened at approximately parity.

    These data speak against two reactions that I have sometimes heard to Carolyn's and my work on gender disparity in philosophy. One reaction is "well, that just shows that philosophy is sociologically more like engineering and the physical sciences than we might have previously thought". Another is "although philosophy has recently stalled in its progress toward gender parity, that is true in lots of other disciplines as well". Neither claim appears to be true.

    [I am leaving for Hong Kong later today, so comment approval might be delayed, but please feel free to post your thoughts and I'll approve them and respond when I can!]

    New Op-Eds on Ethnic Diversity in Philosophy

    A couple of very cool op-eds appeared today on ethnic diversity in philosophy:

    Jay L. Garfield and Bryan W. Van Norden in the New York Times:

  • If Philosophy Won't Diversify, Let's Call It What It Really Is

    And John E. Drabinski, on his home page, with a mostly supportive but partly critical reading of Garfield and Van Norden:

  • Diversity, Neutrality, Philosophy

    -------------------------------------------------

    Related posts:

  • Philosophy Is Incredibly White, but This Does Not Make It Unusual Among the Humanities (Sep. 3, 2014)
  • What's Missing in Philosophy Classes? Chinese Philosophers (Los Angeles Times, Sep. 11, 2015)

    Monday, May 09, 2016

    I Also Doubt That It Is Contingently So

    Vocals: Nomy Arpaly. Guitar: David Estlund.
    Lyrics by Nomy Arpaly:

    It ain't necessarily so
    It ain't necessarily so
    What ethicists say
    Can sound good in a way
    But it ain't necessarily so

    Morality trumps other oughts
    Morality trumps other oughts
    No rational action
    Can be an infraction
    Morality trumps other oughts

    For eudaimonia --
    You get the idea --
    Be virtuous by day and night
    Departures from virtue
    Are all gonna hurt you
    Sometimes I wanna say yeah right

    We always give laws to ourselves
    We always give laws to ourselves
    We lose our potential
    For being agential
    When we break them laws from ourselves

    I say it ain't necessarily so
    It ain't necessarily so
    I'll say it though, frankly
    They'll stare at me blankly
    It ain't necessarily so

    Wednesday, May 04, 2016

    Possible Architectures of Group Minds: Perception

    My favorite animal is the human. My favorite planet is Earth. But it's interesting to think, once in a while, about other possible advanced psychologies.

    Over the course of a few related posts, I'll consider various possible architectures for superhuman group minds. Such minds regularly appear in science fiction -- e.g., Star Trek's Borg and the starships in Ann Leckie's Ancillary series -- but rarely do these fictions make the architecture entirely clear.

    One cool thing about group minds is that they have the potential to be spatially distributed. The Borg can send an away team in a ship. A starship can send the ancillaries of which it is partly composed down to different parts of the planet's surface. We normally think of social groups as having separate minds in separate places, which communicate with each other. But if mentality (instead or also) happens at the group level, then we should probably think of it as a case of a mind with spatially distributed sensory receptors.

    (Elsewhere, I've argued that ordinary human social groups might actually be spatially distributed group minds. We'll come back to that in a future post, I hope.)

    So how might perception work, in a group mind?

    Central Versus Distributed Perceptual Architecture:

    For concreteness, suppose that the group mind is constituted by twenty groups of ten humanoids each, distributed across a planet's surface, in contact via relays through an orbiting ship. (This is similar to Leckie's scenario.)

    If the architecture is highly centralized, it might work like this: Each humanoid aims its eyes (or other sensory organs) toward a sensory target, communicating its full bandwidth of data back up to the ship for processing by the central cognitive system (call it the "brain"). This central brain synthesizes these data as if it had two hundred pairs of eyes across the planet, using information from each pair to inform its understanding of the input from other pairs. For example if the ten humanoids in Squad B are flying in a sphere around an airplane, each viewing the airplane from a different angle, the central brain forms a fully three-dimensional percept of that airplane from all ten viewing angles at once. The central brain might then direct humanoid B2 to turn its eyes to the left because of some input from B3 that makes that viewpoint especially relevant -- something like how when you hear a surprising sound to your left, you spontaneously turn your eyes that direction, swiftly and naturally coordinating your senses.

    Two disadvantages of this architecture are the high bandwidth required to stream information from the peripheral humanoids to the central brain and the possible delay in responding to new information, as messages are sent to the center, processed in light of the full range of information from all sources, and then sent back to the periphery.

    A more distributed architecture puts more of the information processing in the humanoid periphery. Each humanoid might process its sensory input as best it can, engaging in further sensory exploration (e.g., eye movements) in light of only its own local inputs, and then communicate summary results to the others. The central brain might do no processing at all but be only a relay point, bouncing all 200 streaming messages from each humanoid to the others with no modification. The ten humanoids around the airplane might then each have a single perspectival percept of the plane, with no integrated all-around percept.

    Obviously, a variety of compromises are possible here. Some processing might be peripheral and some might be central. Peripheral sources might send both summary information and also high-bandwidth raw information for central processing. Local sensory exploration might depend partly on information from others in its group of ten, from the other 19 groups of ten, or from the central brain.

    At the extreme end of central processing, you arguably have just a single large being with lots of sensory organs. At the extreme end of peripheral processing, you might not want to think about the system as a "group mind" at all. The most interesting group-mind-ish cases have both substantial peripheral processing and substantial control of the periphery either by the center or by other nodes in the periphery, with a wide variety of ways in which this might be done.
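    As a toy illustration of that middle ground, one might split the processing like this (hypothetical names and numbers; nothing hangs on the details):

```python
import statistics

def local_percept(raw_view):
    """Peripheral processing: reduce raw input to a compact, perspectival summary."""
    return {"angle_deg": raw_view["angle_deg"],
            "estimated_length_m": raw_view["apparent_size_px"] / 100}

def central_fusion(summaries):
    """Central processing: integrate all the viewpoints into a single group-level percept."""
    return {"viewpoints": [s["angle_deg"] for s in summaries],
            "estimated_length_m": statistics.mean(s["estimated_length_m"] for s in summaries)}

# Ten humanoids around the airplane, each seeing it from a different angle.
raw_views = [{"angle_deg": a, "apparent_size_px": 3000 + 10 * (a % 90)} for a in range(0, 360, 36)]
perspectival = [local_percept(v) for v in raw_views]   # one summary percept per humanoid
integrated = central_fusion(perspectival)              # the "all-around" group percept

print(perspectival[0])
print(integrated["estimated_length_m"])
```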

    Perceptual Integration and Autonomy:

    I've already suggested one high-integration case: having a single spherical percept of an airplane, arising from ten surrounding points of view upon it. The corresponding low-integration case is ten different perspectival percepts, one for each of the viewing humanoids. In the first case, there's a single coherent perceptual map that smoothly integrates all the perceptual inputs; in the second case, each humanoid has its own distinct map (perhaps influenced by knowledge of the others' maps).

    This difference is especially interesting in cases of perceptual conflict. Consider an olfactory case: The ten humanoids in Squad B step into a meadow of uniform-looking flowers. Eight register olfactory input characteristic of roses. Two register olfactory input characteristic of daffodils. What to do?

    Central dictatorship: All ten send their information to the central brain. The central brain, based on all of the input, plus its background knowledge and other sorts of information, makes a decision. Maybe it decides roses. Maybe it decides daffodils. Maybe it decides that there's a mix of roses and daffodils. Maybe it decides it is uncertain, and the field is 80% likely to be roses and 20% likely to be daffodils. Whatever. It then communicates this result to each of the humanoids, who adopt it as their own local action-guiding representation of the state of the field. For example, if the central brain says "roses", the two humanoids registering daffodil-like input nonetheless represent the field as roses, with no more ambivalence about it than any of the other humanoids.

    Winner-take-all vote: There need be no central dictatorship. Eight humanoids might vote roses versus two voting daffodils. Roses wins, and this result becomes equally the representation of all.

    Compromise vote: Eight versus two. The resulting shared representation is either a mix of the two flowers, with roses dominating, or some feeling of uncertainty about whether the field is roses (probably) or instead daffodils (possible but less likely).

    Retention of local differences: Alternatively, each individual humanoid might retain its own locally formed opinion or representation even after receiving input from the group. A daffodil-smeller might then have a representation something like this: To me it smells like daffodils, even though I know that the group representation is roses. How this informs that humanoid's future action might vary. On a more autonomous structure, that humanoid might behave like a daffodil-smeller (maybe saying, "Ah, it's daffodils, you guys! I'm picking this one to take back to the daffodil-loving Queen of Mars") or it might be more deferential to the group (maybe saying, "I know my own input suggests daffodils, but I give that input no more weight than I would give to the input of any other member of the group").

    Finally, no peripheral representation at all: An extremely centralized system might involve no perceptual representations at all in the humanoids, with all behavior issuing directly from the center.
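    Here is a toy sketch of how the vote-versus-retention options might play out in the roses/daffodils case, with a single "deference" parameter standing in for how strongly the group verdict drives each humanoid's action-guiding representation:

```python
from collections import Counter

local_smells = ["roses"] * 8 + ["daffodils"] * 2
group_verdict = Counter(local_smells).most_common(1)[0][0]   # winner-take-all vote: "roses"

def action_guiding(local_percept, group_verdict, deference):
    """Deference near 1: central dictatorship; near 0: retention of local differences."""
    return group_verdict if deference >= 0.5 else local_percept

for i, smell in enumerate(local_smells):
    print(f"humanoid {i}: perceives {smell}, acts on {action_guiding(smell, group_verdict, deference=0.7)}")
```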

    Conceptual Versus Perceptual:

    There's an intuitive distinction between knowing something conceptually or abstractly and having a perceptual experience of that thing. This is especially vivid in cases of known illusion. Looking at the Müller-Lyer illusion, you know (conceptually) that the two lines minus the tails are the same length, but that's not how you (perceptually) see it.

    The conceptual/perceptual distinction can cross-cut most of the architectural possibilities. For example, the minority daffodil smeller might perceptually experience the daffodils but conceptually know that the group judgment is roses. Alternatively, the minority daffodil smeller might conceptually know that her own input is daffodils but perceptually experience roses.

    Counting Streams of Experience:

    If the group is literally phenomenally conscious at the group level, then there might be 201 streams of experience (one for each humanoid, plus one for the group); or there might be only one stream of experience (for the group); or streams of experience might not be cleanly individuated, with 200 semi-independent streams; or something else besides.

    The dictatorship, etc., options can apply to the group-level stream, as well as to the humanoid-level streams, perhaps with different results. For example, the group stream of consciousness might be determined by compromise vote (80% roses), while the humanoid streams of experience retain their local differences (some roses, some daffodils).

    To Come:

    Similar issues arise for group level memory, goal-setting, inferential reasoning, and behavior. I'll work through some of these in future posts.

    I also want to think about the moral status of the group and the individuals, under different architectural setups -- that is, what sorts of rights or respect or consideration we owe to the individuals vs. the group, and how that might vary depending on the set-up.

    ------------------------------------------------

    Related:

  • Possible Psychology of a Matrioshka Brain (Oct. 9, 2014).
  • If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).
    [image source, image source]