Friday, May 30, 2014

Goldfish-Pool Immortality

Must an infinitely continued life inevitably become boring? Bernard Williams famously answers yes; John Fischer no. Fischer's case is perhaps even more easily made than he suggests -- but its very ease opens up new issues.

Consider Neil Gaiman's story "The Goldfish Pool and Other Stories" (yes, that's the name of one story):

He nodded and grinned. "Ornamental carp. Brought here all the way from China."

We watched them swim around the little pool. "I wonder if they get bored."

He shook his head. "My grandson, he's an ichthyologist, you know what that is?"

"Studies fishes."

"Uh-huh. He says they only got a memory that's like thirty seconds long. So they swim around the pool, it's always a surprise to them, going 'I've never been here before.' They meet another fish they known for a hundred years, they say, 'Who are you, stranger?'"

The problem of immortal boredom solved: Just have a bad memory! Then even seemingly un-repeatable pleasures (meeting someone for the first time) become repeatable.

Now you might say, wait, when I was thinking about immortality I wasn't thinking about forgetting everything and doing it again like a stupid goldfish.

To this I answer: Weren't you?

If you were imagining that you were continuing life as a human, you were imagining, presumably, that you had a finite brain capacity. And there's only so much memory you can fit into eighty billion neurons. So of course you're going to forget things, at some point almost everything, and things sufficiently well forgotten could presumably be experienced as fresh again. This is always what is going on with us anyway, to some extent. And this forgetting needn't involve any loss of personal identity, it seems: one's personality and some core memories could always stay the same.
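
For concreteness, here's a toy model of the point (a sketch of my own, in Python, with nothing resting on the details): treat memory as a fixed-size buffer from which the oldest experiences are evicted, and count an experience as fresh whenever it is no longer in the buffer.

    from collections import deque

    # Toy model: memory as a fixed-capacity buffer; the oldest experiences
    # are evicted, and an evicted experience feels novel when met again.
    class FiniteMind:
        def __init__(self, capacity):
            self.memory = deque(maxlen=capacity)

        def experience(self, event):
            feels_fresh = event not in self.memory
            self.memory.append(event)
            return feels_fresh

    events = ["corner of the pool", "that other fish", "a pebble",
              "corner of the pool"]

    roomy = FiniteMind(capacity=3)
    print([roomy.experience(e) for e in events])
    # [True, True, True, False] -- the corner is still remembered

    goldfish = FiniteMind(capacity=2)
    print([goldfish.experience(e) for e in events])
    # [True, True, True, True] -- "I've never been here before."

Shrink the buffer far enough and every lap of the pool is a surprise; the goldfish is just the limiting case.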

Immortality as an angel or transhuman super-intellect raises the same issues, as long as one's memory is finite.

A new question arises perhaps more vividly now: Is repeating and forgetting the same types of experiences over and over again, infinitely, preferable to doing them once, or twenty times, or a googolplex times? The answer to that question isn't, I think, entirely clear (and maybe even faces metaphysical problems concerning the identity of indiscernibles). My guess, though, is that if you stopped one of the goldfish and said, "Do you want to keep going?", the fish would say, "Yes, this is totally cool, I wonder what's around the corner? Oh, hi, glad to meet you!" Maybe that's a consideration in favor.

Alternatively, you might imagine an infinite memory. But how would that work? What would that be like? Would one become overwhelmed like Funes the Memorious? Would there be a workable search algorithm? Would there be some tagging system to distinguish each memory from infinitely many qualitatively identical other memories? Or maybe you were imagining retaining your humanity but somehow existing non-temporally? I find that even harder to conceive. To evaluate such possibilities, we need a better sense of the cognitive architecture of the immortal mind.
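
The tagging worry, at least, can be made a bit more concrete (again, a toy sketch of my own devising): suppose each memory receives a unique index. Then infinitely many qualitatively identical experiences would differ by nothing but a number -- and it's unclear whether that's a difference the rememberer could ever notice from the inside.

    import itertools

    # Toy sketch: tag each memory with a unique index, so that qualitatively
    # identical experiences remain distinct in storage.
    next_tag = itertools.count()
    memory_store = []

    def remember(content):
        memory_store.append((next(next_tag), content))

    for _ in range(3):
        remember("met a delightful stranger by the pool")

    # Retrieval by content cannot tell the copies apart except by tag:
    hits = [m for m in memory_store
            if m[1] == "met a delightful stranger by the pool"]
    print(hits)  # three entries differing only in their arbitrary tag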

Supposing goldfish-pool immortality would be desirable, would it be better to have, as it were, a large pool -- a wide diversity of experiences before forgetting -- or a small, more selective pool, perhaps one peak experience, repeated infinitely? Would it be better to have small, unremembered variations each time, or would detail-by-detail qualitative identity be just as good?

I've started to lose my grip on what might ground such judgments. However, it's possible that technology will someday make this a matter of practical urgency. Suppose it turns out, someday, that people can "upload" into artificial environments in which our longevity vastly outruns our memorial capacity. What should be the size and shape of our pool?

Update July 4, 2016:

See my new short story on this topic: Fish Dance (Clarkesworld #118, July 2016).

Tuesday, May 27, 2014

Ergo

Ergo, a new online philosophy journal, has just released its first issue. Open access, triple anonymous, fast turnaround times (hopefully continuing into the future), transparent process, aiming at a balanced representation of all the subdisciplines. What's not to like?

I hope and expect that this journal will soon count among the most prestigious venues in philosophy.

Friday, May 23, 2014

Metaphilosophical Tides in the Literature on Belief

Why should a philosopher care about the nature of belief? Back in the 1980s and 1990s, when I was a student, there were two main animating reasons in the Anglophone philosophical community. Recently, though, the literature has changed.

One of the old-school reasons was to articulate the materialistic picture of the world. The late 1950s through the early 1990s -- roughly from Smart's "Sensations and Brain Processes" through Dennett's Consciousness Explained -- was (I now think) the golden age of materialism in the philosophy of mind, when the main alternatives and implications were being seriously explored by the philosophical community for the first time. We needed to know how belief fit into the materialist world-picture. How could a purely material being, a mere machine fundamentally constituted of tiny bits of physical stuff bumping against each other, have mental states about the world, with real representational or intentional content? The functionalism and evolutionary representationalism of Putnam, Armstrong, Dennett, Millikan, Fodor, and Dretske seemed to give an answer.

The other, related, motivating reason was the theory of reference in philosophy of language. How is it possible to believe that Superman is strong but that Clark Kent is not strong, if Superman really is Clark Kent (Frege's Puzzle)? And does the reference of a thought or utterance depend only on what is in the head (internalism), or could two molecule-for-molecule identical people have different thought contents simply because they're in different environments (externalism)? Putnam's Twin Earth was amazingly central to the field. (In 2000, Joe Cruz and I sketched out a "map of the analytic philosopher's brain". Evidence seemed to suggest a major lobe dedicated entirely to Twin Earth, but only a small nodule for the meaning of life.)

These inquiries had two things in common: their grand metaphysical character -- defending materialism, discovering the fundamental nature of thought and language -- and their armchair methodology. Some of the contributors such as Fodor and Dennett were very empirically engaged in general, but when it came to their central claims about belief, they seemed to be mainly driven by thought experiments and a metaphysical world vision.

Literature on the nature of belief has been re-energized in the 2010s, I think, by issues less grand but more practical -- especially the issue of implicit bias, but more generally the question of how to think about cases of seeming mismatch between explicit thought or speech and spontaneous behavior. Tamar Gendler's treatment of (implicit) alief vs. (explicit) belief, especially, has spawned a whole subliterature of its own, which is intimately connected with the recent psychological literature on dual process theory or "thinking fast and slow". Does the person who says, in all attempted sincerity, "women are just as smart as men", but who (as anyone else could see) consistently treats women as stupid, believe what he's saying? Delusions present seemingly similar cases, such as the Cotard delusion, which involves the seemingly sincere claim that one is dead. What are we to make of that? There's a suddenly burgeoning philosophical subliterature on delusion, much of it responding to Lisa Bortolotti's recent APA prizewinning book on the topic.

By most standards, the issues are still grand and impractical and the approach armchairish -- this is philosophy after all! -- but I believe their metaphilosophical spirit is very different. What animates Gendler, Bortolotti, and the others, I think, is a hard look at particularly puzzling empirical issues, where it seems that a good philosophical theory of the nature of the phenomena might help clear things up, and then a pragmatic approach to evaluating the results. Given the empirical phenomena, are our interests best served by theorizing belief in this way, or are they better served by theorizing in this other way?

This is music to my ears, both metaphilosophically and regarding the positive theory of belief. Metaphilosophically, because it is exactly my own approach: I entered the literature on belief as a philosopher of science interested in puzzles in developmental psychology that I thought could be dissolved through application of a good theory of belief. And at the level of the positive theory of belief, because my own theory of belief is designed exactly to shine as a means of organizing our thoughts about such splintering (The Splintered Mind!), seemingly messed-up cases.

Friday, May 16, 2014

Group Organisms and the Fermi Paradox

I've been thinking recently about group organisms and group minds. And I've been thinking, too, about the Fermi Paradox -- about why we haven't yet discovered alien civilizations, given the vast number of star systems that could presumably host them. Here's a thought on how these two ideas might meet.

Species that contain relatively few member organisms, in a small habitat, are much more vulnerable to extinction than are species that contain many member organisms distributed widely. A single shock can easily wipe them out. So my thought is this: If technological civilizations tend to merge into a single planetwide superorganism, then they become essentially species constituted by a single organism in one small habitat (small relative to the size of the organism) -- and thus highly vulnerable to extinction.
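
To put rough numbers on the fragility gradient (the figures here are invented purely for illustration): if a catastrophic shock kills any given independent organism with probability 1/2, a species goes extinct in that shock only if every member dies.

    # Illustrative numbers only: the chance that a single shock wipes out a
    # species of n independently exposed members, if each member dies in the
    # shock with probability 0.5.
    def p_extinct_in_one_shock(n_members, p_member_dies=0.5):
        return p_member_dies ** n_members

    for n in [1, 10, 100]:
        print(n, p_extinct_in_one_shock(n))
    # 1   0.5
    # 10  about 0.001
    # 100 about 8e-31

A single planetwide superorganism sits on the n = 1 row: one unlucky shock suffices.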

This is, of course, a version of the self-destruction solution to Fermi's paradox: Technological civilizations might frequently arise in the galaxy, but they always destroy themselves quickly, so none happen to be detectable right now. Self-destruction answers to Fermi's paradox tend to focus on the likelihood of an immensely destructive war (e.g., nuclear or biological), environmental catastrophe, or the accidental release of destructive technology (e.g., nanobots). My hypothesis is compatible with all of those, but it's also, I think, a bit different: A single superorganism might die simply of disease (e.g., a self-replicating flaw) or malnutrition (e.g., a risky bet about next year's harvest) or suicide.

For this "solution" -- or really, at best I think, partial solution -- to work, at least three things would have to be true:

(1.) Technological civilizations would have to (almost) inevitably merge into a single superorganism. I think this is at least somewhat plausible. As technological capacities develop, societies grow more intricately dependent on the functioning of all their parts. Few Californians could make it, now, as subsistence farmers. Our lives are entirely dependent upon a well-functioning system of mass agriculture and food delivery. Maybe this doesn't make California, or the United States, or the world as a whole, a full-on superorganism yet (though the case could be made). But if an organism is a tightly integrated system each of whose parts (a.) contributes in a structured way to the well-being of the system as a whole and (b.) cannot effectively survive or reproduce outside the organismic context, then it's easy to see how increasing technology might lead a civilization ever more in that direction -- as the individual parts (individual human beings or their alien equivalents) gain efficiency through increasing specialization and increased reliance upon the specializations of others. Also, if we imagine competition among nation-level societies, the most-integrated, most-organismic societies might tend to outcompete the others and take over the planet.

(2.) The collapse of the superorganism would have to result in the near-permanent collapse of technological capacity. The individual human beings or aliens would have to go entirely extinct, or at least be so technologically reduced that the overwhelming majority of the planet's history is technologically primitive. One way this might go -- though not the only way -- is for something like a Maynard Smith & Szathmary major transition to occur. Just as individual cells invested their reproductive success into a germline when they merged into multicellular organisms (so that the only way for a human liver cell to continue into the next generation is for it to participate in the reproductive success of the human being as a whole), so also human reproduction might become germline-dependent at the superorganism level. Maybe our descendants will be generated from government-controlled genetic templates rather than in what we now think of as the normal way. If these descendants are individually sterile, either because that's more efficient (and thus either consciously chosen by the society or evolutionarily selected for) or because the powers-that-be want to keep tight control on reproduction, then there will be only a limited number of germlines, and the superorganism will be more susceptible to shocks to the germline.

(3.) The habitat would have to be small relative to the superorganism, with the result that there were only one or a few superorganisms. For example, the superorganism and the habitat might both be planet sized. Or there might be a few nation-sized superorganisms on one planet or across several planets -- but not millions of them distributed across multiple star systems. In other words, space colonization would have to be relatively slow compared to the life expectancy of the merged superorganisms. Again, this seems at least somewhat plausible.

To repeat: I don't think this could serve as a full solution to the Fermi paradox. If high-tech civilizations evolve easily and abundantly and visibly, we probably shouldn't expect all of them to collapse swiftly for these reasons. But perhaps it can combine with some other approaches, toward a multi-pronged solution.

It's also something to worry about, in its own right, if you're concerned about existential risks to humanity.

Monday, May 12, 2014

New Essay in Draft: 1% Skepticism

My latest in crazy, disjunctive metaphysics:

Abstract:

A 1% skeptic is someone who has about a 99% credence in non-skeptical realism and about a 1% credence in the disjunction of all radically skeptical scenarios combined. The first half of this essay defends the epistemic rationality of 1% skepticism, appealing to dream skepticism, simulation skepticism, cosmological skepticism, and wildcard skepticism. The second half of the essay explores the practical behavioral consequences of 1% skepticism, arguing that 1% skepticism need not be behaviorally inert.

Full version here.
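
To convey a rough flavor of the non-inertness point (with payoffs invented for this illustration rather than drawn from the essay): in a simple expected-utility calculation, even a 1% credence can tip a choice, provided the options' payoffs diverge sharply enough across the skeptical scenarios.

    # Invented payoffs, purely for illustration. Even a 1% credence can flip
    # a decision if the options diverge sharply in the skeptical scenarios.
    p_real, p_skeptical = 0.99, 0.01

    def expected_utility(u_real, u_skeptical):
        return p_real * u_real + p_skeptical * u_skeptical

    option_a = expected_utility(u_real=100, u_skeptical=0)    # 99.0
    option_b = expected_utility(u_real=99,  u_skeptical=500)  # 103.01
    print(option_a, option_b)  # the 1% credence tips the choice to option b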

(What I mean by crazy metaphysics.)
(What I mean by disjunctive metaphysics.)

As always, comments/reactions/discussion welcome, either as comments on this post or by direct email to me.