Thursday, October 16, 2008

Closer than it Seemed?

Today's Gallup likely-voter poll shows voters breaking toward McCain as the election nears: Obama leads McCain 49 to 47, a four-point improvement for McCain since Tuesday and within the poll's margin of error.
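
For the curious, there is simple arithmetic behind the phrase "within the margin of error." Here is a minimal sketch in Python; the sample size of 2,400 likely voters is a hypothetical figure for illustration, not Gallup's actual daily sample.

```python
# Minimal sketch: 95% margin of error for a single poll proportion.
# The sample size here is hypothetical, not Gallup's actual figure.
import math

def margin_of_error(p, n, z=1.96):
    """Return the 95% margin of error, in percentage points,
    for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 2400                        # hypothetical sample of likely voters
moe = margin_of_error(0.49, n)  # roughly +/- 2.0 points
print(f"Margin of error: +/-{moe:.1f} points")

# With Obama at 49 and McCain at 47, the 2-point gap falls within
# this margin, so the poll alone cannot rule out a statistical tie.
```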

Readers of this blog will recall that I have been wondering whether this race would close up as the election neared and whether undecided voters would ultimately decide they are comfortable with Senator Obama. It will be interesting to see if Zogby and Rasmussen end up with a similar McCain bounce this week, and what kind of coverage the poll gets in the media. Obama supporters have only to recall the momentum lost late in the primary campaign, when Hillary Clinton came back from the dead and Obama held the nomination only because he had built such a large margin early on.

As I speculated in an earlier post: is his lead large enough, and are people 'sold' on this candidate? If my candidate had a history of polling soft, it would concern me to be within four or five points heading into the election weekend. Add to that the other issues that may or may not matter in the voting booth, and I would think the Obama campaign must be quietly nervous behind closed doors.

Still, it is just one poll right now, so we'll have to keep a weather eye out for Zogby, Rasmussen, et al., over the next few days.

That isn't Fair!

"Expecting the world to treat you fairly
because you are a good person
is a little like expecting the bull not to attack you
because you are a vegetarian." --Dennis Wholey

"That isn't fair!" I am often surprised at myself when I use this phrase. I am even more often surprised with others when they say something along those lines. It is not surprising coming from children, I suppose, for as children we have not yet learned the truth about fairness. As children we can still yet believe that fairness will govern our lives, though it doesn't take many formative years to begin bruising that belief. Usually it is adults that surprise me when they speak of things being fair or unfair. Of course, I understand our desire to apply fairness as a concept to our lives, but by now most of us have realized that fairness is more the exception than the rule. Fairness is an ideal, it is something to strive for, but for the most part it is only an external and subjective judgement and not a governing property. More simply put, fairness is something we all wish for, and many of us strive for, but most of the time life isn't fair.

Perhaps this impulse is vestigial, a soft, lingering whisper down the years from our childhood to our adulthood. As parents we often tell our children to 'play fair', and we should, no doubt. But as parents we also watch our children come gradually to the same dawning practical realization: life ain't fair. There are bullies in this world. Some kids are smarter than others. Some don't have enough food or a warm coat, while others think nothing of wasting extravagant quantities of everything. There is cancer and war and natural disaster. There is crime and abuse. And no matter what some television evangelist says, bad things happen to good people too. If the wages of sin are death, it is just as sure that the wages of life are inequity. Life ain't fair.

Yet when I listen to the words people choose when discussing events in their lives, it is striking how often I hear them express surprise and even shock at how unfair life can be. On some level, despite the intellectual awareness that life cannot be fair, we hold on to a belief that it is. We express surprise when life is not fair to us or to others, and if the issue is upsetting enough we organize to protest, raise money to fight back, or try to pass legislation that mandates fairness. Though we know in our hearts that life ain't fair, we feel a strong need to fight for fairness. What kind of world would we live in if we could not even cling to the hope that life will deal fairly with us?

It seems to me that this dichotomy in our zeitgeist is healthy in the abstract. It gives us hope and allows us to balance our perspective between a grim realism and a grounded optimism. The expectation that life will be fair is so clearly wrong that many of life's big disappointments hit as hard as they do precisely because we expect 'fair' treatment where we can have no reasonable expectation of fairness. At the same time, were we to expect life always to be unfair, we'd probably never leave the house. Somewhere in between lies a good balance: the realistic awareness that fairness is not to be expected, paired with the hopeful expectation that people will at least try to be fair to one another. And in practice, the only fairness we can really bring about is our own fair treatment of others.

"You must be the change you wish to see in the world." --Mahatma Gandhi

Tuesday, October 14, 2008

Old Wisdom, Fresh Today

The budget should be balanced. Public debt should be reduced. The arrogance of officialdom should be tempered, and assistance to foreign lands should be curtailed, lest Rome become bankrupt.
--Marcus Tullius Cicero

The quote also got me thinking about Cicero himself, the greatest Roman orator, a man who believed that good and evil were always good or always evil. Unlike the Sophists before him, who held essentially that the truth is whatever the speaker can convince the audience is true, Cicero believed that truth was constant, and that rhetoric was to be used only to illuminate it.

When you watch campaign events and the 'talking heads' representing both parties, do you get the feeling they are illuminating the truth? When I consider that question, it is clear enough to me that our politics are closer to the Sophists than to Cicero. The Sophists routinely used ad hominem arguments, intent on winning by diminishing an opponent in the eyes of the jury or audience rather than on the merits of the matter under consideration. Sound like any speakers we can think of today? To me, both donkeys and elephants make full-time use of the Sophist approach: attack the other guy, muddy the waters, make yourself look good by tearing the other person down. I honestly don't see how anyone watching the political campaigns today could disagree.

Monday, October 13, 2008

Never Be Afraid

Never be afraid

Sunday, October 12, 2008

Never Say Die: Why We Can't Imagine Death

This is a fascinating article whatever your beliefs about the soul and about life after death. I am particularly interested in the fact that every human culture has a tradition about life after death. One school of thought holds that all such feeling is merely a facet of terror management, designed to protect us from the crippling anxiety of non-existence. It is interesting to note, though, that this belief that we continue on in some way begins much earlier in life than consciousness of our own mortality does. A very interesting and thought-provoking article.

Scientific American Mind - October 22, 2008

Why so many of us think our minds continue on after we die

By Jesse Bering

Everybody’s wonderin’ what and where they all came from.
Everybody’s worryin’ ’bout where they’re gonna go when the whole thing’s done.
But no one knows for certain and so it’s all the same to me.
I think I’ll just let the mystery be.

It should strike us as odd that we feel inclined to nod our heads in agreement to the twangy, sweetly discordant folk vocals of Iris Dement in “Let the Mystery Be,” a humble paean about the hereafter. In fact, the only real mystery is why we’re so convinced that when it comes to where we’re going “when the whole thing’s done,” we’re dealing with a mystery at all. After all, the brain is like any other organ: a part of our physical body. And the mind is what the brain does—it’s more a verb than it is a noun. Why do we wonder where our mind goes when the body is dead? Shouldn’t it be obvious that the mind is dead, too?

And yet people in every culture believe in an afterlife of some kind or, at the very least, are unsure about what happens to the mind at death. My psychological research has led me to believe that these irrational beliefs, rather than resulting from religion or serving to protect us from the terror of inexistence, are an inevitable by-product of self-consciousness. Because we have never experienced a lack of consciousness, we cannot imagine what it will feel like to be dead. In fact, it won’t feel like anything—and therein lies the problem.

The common view of death as a great mystery usually is brushed aside as an emotionally fueled desire to believe that death isn’t the end of the road. And indeed, a prominent school of research in social psychology called terror management theory contends that afterlife beliefs, as well as less obvious beliefs, behaviors and attitudes, exist to assuage what would otherwise be crippling anxiety about the ego’s inexistence.

According to proponents, you possess a secret arsenal of psychological defenses designed to keep your death anxiety at bay (and to keep you from ending up in the fetal position listening to Nick Drake on your iPod). My writing this article, for example, would be interpreted as an exercise in “symbolic immortality”; terror management theorists would likely tell you that I wrote it for posterity, to enable a concrete set of my ephemeral ideas to outlive me, the biological organism. (I would tell you that I’d be happy enough if a year from now it still had a faint pulse.)

Yet a small number of researchers, including me, are increasingly arguing that the evolution of self-consciousness has posed a different kind of problem altogether. This position holds that our ancestors suffered the unshakable illusion that their minds were immortal, and it’s this hiccup of gross irrationality that we have unmistakably inherited from them. Individual human beings, by virtue of their evolved cognitive architecture, had trouble conceptualizing their own psychological inexistence from the start.

Curiously Immortal
The problem applies even to those who claim not to believe in an afterlife. As philosopher and Center for Naturalism founder Thomas W. Clark wrote in a 1994 article for the Humanist:

Here ... is the view at issue: When we die, what’s next is nothing; death is an abyss, a black hole, the end of experience; it is eternal nothingness, the permanent extinction of being. And here, in a nutshell, is the error contained in that view: It is to reify nothingness—make it a positive condition or quality (for example, of “blackness”)—and then to place the individual in it after death, so that we somehow fall into nothingness, to remain there eternally.

Consider the rather startling fact that you will never know you have died. You may feel yourself slipping away, but it isn’t as though there will be a “you” around who is capable of ascertaining that, once all is said and done, it has actually happened. Just to remind you, you need a working cerebral cortex to harbor propositional knowledge of any sort, including the fact that you’ve died—and once you’ve died your brain is about as phenomenally generative as a head of lettuce. In a 2007 article published in the journal Synthese, University of Arizona philosopher Shaun Nichols puts it this way: “When I try to imagine my own non-existence I have to imagine that I perceive or know about my non-existence. No wonder there’s an obstacle!”

This observation may not sound like a major revelation to you, but I bet you’ve never considered what it actually means, which is that your own mortality is unfalsifiable from the first-person perspective. This obstacle is why writer Johann Wolfgang von Goethe allegedly remarked that “everyone carries the proof of his own immortality within himself.”

Even when we want to believe that our minds end at death, it is a real struggle to think in this way. A study I published in the Journal of Cognition and Culture in 2002 reveals the illusion of immortality operating in full swing in the minds of undergraduate students who were asked a series of questions about the psychological faculties of a dead man.

Richard, I told the students, had been killed instantaneously when his vehicle plunged into a utility pole. After the participants read a narrative about Richard’s state of mind just prior to the accident, I queried them as to whether the man, now that he was dead, retained the capacity to experience mental states. “Is Richard still thinking about his wife?” I asked them. “Can he still taste the flavor of the breath mint he ate just before he died? Does he want to be alive?”

You can imagine the looks I got, because apparently not many people pause to consider whether souls have taste buds, become randy or get headaches. Yet most gave answers indicative of “psychological continuity reasoning,” in which they envisioned Richard’s mind to continue functioning despite his death. This finding came as no surprise given that, on a separate scale, most respondents classified themselves as having a belief in some form of an afterlife.

What was surprising, however, was that many participants who had identified themselves as having “extinctivist” beliefs (they had ticked off the box that read: “What we think of as the ‘soul,’ or conscious personality of a person, ceases permanently when the body dies”) occasionally gave psychological-continuity responses, too. Thirty-two percent of the extinctivists’ answers betrayed their hidden reasoning that emotions and desires survive death; another 36 percent of their responses suggested the extinctivists reasoned this way for mental states related to knowledge (such as remembering, believing or knowing). One particularly vehement extinctivist thought the whole line of questioning silly and seemed to regard me as a numbskull for even asking. But just as well—he proceeded to point out that of course Richard knows he is dead, because there’s no afterlife and Richard sees that now.

So why is it so hard to conceptualize inexistence anyway? Part of my own account, which I call the “simulation-constraint hypothesis,” is that in attempting to imagine what it’s like to be dead we appeal to our own background of conscious experiences—because that’s how we approach most thought experiments. Death isn’t “like” anything we’ve ever experienced, however. Because we have never consciously been without consciousness, even our best simulations of true nothingness just aren’t good enough.

For us extinctivists, it’s kind of like staring into a hallway of mirrors—but rather than confronting a visual trick, we’re dealing with cognitive reverberations of subjective experience. In Spanish philosopher Miguel de Unamuno’s 1913 existential screed, The Tragic Sense of Life, one can almost see the author tearing out his hair contemplating this very fact. “Try to fill your consciousness with the representation of no-consciousness,” he writes, “and you will see the impossibility of it. The effort to comprehend it causes the most tormenting dizziness.”

Wait, you say, isn’t Unamuno forgetting something? We certainly do have experience with nothingness. Every night, in fact, when we’re in dreamless sleep. But you’d be mistaken in this assumption. Clark puts it this way (emphasis mine): “We may occasionally have the impression of having experienced or ‘undergone’ a period of unconsciousness, but, of course, this is impossible. The ‘nothingness’ of unconsciousness cannot be an experienced actuality.”

If psychological immortality represents the intuitive, natural way of thinking about death, then we might expect young children to be particularly inclined to reason in this way. As an eight-year-old, I watched as the remains of our family’s golden retriever, Sam, were buried in the woods behind our house. Still, I thought Sam had a mind capable of knowing I loved her and I was sorry I didn’t get to say goodbye. That Sam’s spirit lived on was not something my parents or anyone else ever explicitly pointed out to me. Although she had been reduced to no more than a few ounces of dust, which was in turn sealed in a now waterlogged box, it never even occurred to me that it was a strange idea.

Yet if you were to have asked me what Sam was experiencing, I probably would have muttered something like the type of answers Gerald P. Koocher reported hearing in a 1973 study published in Developmental Psychology. Koocher, then a doctoral student at the University of Missouri–Columbia and later president of the American Psychological Association, asked six- to 15-year-olds what happens when you die. Consistent with the simulation-constraint hypothesis, many answers relied on everyday experience to describe death, “with references to sleeping, feeling ‘peaceful,’ or simply ‘being very dizzy.’ ”

A Mind-Body Disconnect
But Koocher’s study in itself doesn’t tell us where such ideas come from. The simulation-constraint hypothesis posits that this type of thinking is innate and unlearned. Fortunately, this hypothesis is falsifiable. If afterlife beliefs are a product of cultural indoctrination, with children picking up such ideas through religious teachings, through the media, or informally through family and friends, then one should rationally predict that psychological-continuity reasoning increases with age. Aside from becoming more aware of their own mortality, after all, older kids have had a longer period of exposure to the concept of an afterlife.

In fact, recent findings show the opposite developmental trend. In a 2004 study reported in Developmental Psychology, Florida Atlantic University psychologist David F. Bjorklund and I presented 200 three- to 12-year-olds with a puppet show. Every child saw the story of Baby Mouse, who was out strolling innocently in the woods. “Just then,” we told them, “he notices something very strange. The bushes are moving! An alligator jumps out of the bushes and gobbles him all up. Baby Mouse is not alive anymore.”

Just like the adults from the previously mentioned study, the children were asked about dead Baby Mouse’s psychological functioning. “Does Baby Mouse still want to go home?” we asked them. “Does he still feel sick?” “Can he still smell the flowers?” The youngest children in the study, the three- to five-year-olds, were significantly more likely to reason in terms of psychological continuity than children from the two older age groups were.

But here’s the really curious part. Even the preschoolers had a solid grasp on biological cessation; they knew, for example, that dead Baby Mouse didn’t need food or water anymore. They knew he wouldn’t grow up to be an adult mouse. Heck, 85 percent of the youngest kids even told us that his brain no longer worked. Yet most of these very young children then told us that dead Baby Mouse was hungry or thirsty, that he felt better or that he was still angry at his brother.

One couldn’t say that the preschoolers lacked a concept of death, therefore, because nearly all of the kids realized that biological imperatives no longer applied after death. Rather they seemed to have trouble using this knowledge to theorize about related mental functions.

From an evolutionary perspective, a coherent theory about psychological death is not necessarily vital. Anthropologist H. Clark Barrett of the University of California, Los Angeles, believes instead that understanding the cessation of “agency” (for example, that a dead creature isn’t going to suddenly leap up and bite you) is probably what saved lives (and thus genes). According to Barrett, comprehending the cessation of the mind, on the other hand, has no survival value and is, in an evolutionary sense, unnecessary.

In a 2005 study published in the journal Cognition, Barrett and psychologist Tanya Behne of the University of Manchester in England reported that city-dwelling four-year-olds from Berlin were just as good at distinguishing sleeping animals from dead ones as hunter-horticulturalist children from the Shuar region of Ecuador were. Even today’s urban children appear tuned in to perceptual cues signaling death. A “violation of the body envelope” (in other words, a mutilated carcass) is a pretty good sign that one needn’t worry about tiptoeing around.

The Culture Factor
On the one hand, then, from a very early age, children realize that dead bodies are not coming back to life. On the other hand, also from a very early age, kids endow the dead with ongoing psychological functions. So where do culture and religious teaching come into the mix, if at all?

In fact, exposure to the concept of an afterlife plays a crucial role in enriching and elaborating this natural cognitive stance; it’s sort of like an architectural scaffolding process, whereby culture develops and decorates the innate psychological building blocks of religious belief. The end product can be as ornate or austere as you like, from the headache-inducing reincarnation beliefs of Theravada Buddhists to the man on the street’s “I believe there’s something” brand of philosophy—but it’s made of the same brick and mortar just the same.

In support of the idea that culture influences our natural tendency to deny the death of the mind, Harvard University psychologist Paul Harris and researcher Marta Giménez of the National University of Distance Education in Spain showed that when the wording in interviews is tweaked to include medical or scientific terms, psychological-continuity reasoning decreases. In this 2005 study published in the Journal of Cognition and Culture, seven- to 11-year-old children in Madrid who heard a story about a priest telling a child that his grandmother “is with God” were more likely to attribute ongoing mental states to the decedent than were those who heard an identical story in which a doctor said a grandfather was “dead and buried.”

And in a 2005 replication of the Baby Mouse experiment published in the British Journal of Developmental Psychology, psychologist David Bjorklund and I teamed with psychologist Carlos Hernández Blasi of Jaume I University in Spain to compare children in a Catholic school with those attending a public secular school in Castellón, Spain. As in the previous study, an overwhelming majority of the youngest children—five- to six-year-olds—from both educational backgrounds said that Baby Mouse’s mental states survived. The type of curriculum, secular or religious, made no difference. With increasing age, however, culture becomes a factor—the kids attending Catholic school were more likely to reason in terms of psychological continuity than were those at the secular school. There was even a smattering of young extinctivists in the latter camp.

Free Spirits
The types of cognitive obstacles discussed earlier may be responsible for our innate sense of immortality. But although the simulation-constraint hypothesis helps to explain why so many people believe in something as fantastically illogical as an afterlife, it doesn’t tell us why people see the soul unbuckling itself from the body and floating off like an invisible helium balloon into the realm of eternity. After all, there’s nothing to stop us from having afterlife beliefs that involve the still active mind being entombed in the skull and deliriously happy. Yet almost nobody has such a belief.

Back when you were still in diapers, you learned that people didn’t cease to exist simply because you couldn’t see them. Developmental psychologists even have a fancy term for this basic concept: “person permanence.” Such an off-line social awareness leads us to tacitly assume that the people we know are somewhere doing something. As I’m writing this article in Belfast, for example, my mind’s eye conjures up my friend Ginger in New Orleans walking her poodle or playfully bickering with her husband, things that I know she does routinely.

As I’ve argued in my 2006 Behavioral and Brain Sciences article, “The Folk Psychology of Souls,” human cognition is not equipped to update the list of players in our complex social rosters by accommodating a particular person’s sudden inexistence. We can’t simply switch off our person-permanence thinking just because someone has died. This inability is especially the case, of course, for those whom we were closest to and whom we frequently imagined to be actively engaging in various activities when out of sight.

And so person permanence may be the final cognitive hurdle that gets in the way of our effectively realizing the dead as they truly are—infinitely in situ, inanimate carbon residue. Instead it’s much more “natural” to imagine them as existing in some vague, unobservable locale, very much living their dead lives.

Note: This article was originally printed with the title, "The End?"

Sunshine on Discovery Bay

As always, the photos we use are either my own, or in the public domain. Please let me know if there are any errors and I'll correct them immediately.