Thoughts and Opinions

Snapshot of a Child

Readers of WMBW may know that Doctor Paul Czaja, one of our co-founders, has spent his life as a Montessori educator.  Today’s blog is something we believe he wrote in a recent newsletter to his current community at the Island Village Montessori School:

To inspire me every day now to continue on this my life’s journey as a caring servant of every child whose company I share, I have placed on my desk a photograph of me at the peak of my career. It shows my very beginning as a person smiling in the spring sunshine of the Bronx. My twin brother, Peter, and I are sitting with our Mom there on a small bench placed before the small hedged garden right in front of our red brick home. Sunshine and shadows dance on our three happy smiling faces.

We were still kids; looks like we were just two going on three years of age. I say it depicts me clearly at the peak of my career for it is obvious that I was not a “me” yet, and a sacred innocence was plainly visible in my face. This old photograph reveals very obviously that my primary sense then was my seeing – my simple yet profound perceiving the sacredness of ordinary life right there in front of me, and you can see there in my face the overwhelming happiness I was feeling for all the wonderful things right there in that now of my life as a little child.

Much later, when I had grown up and had become an existentialist philosopher and poet — a twenty-something-year-old graduate student, still at Fordham University in the Bronx — I was encouraged by a professor to investigate just what the modern artists were striving to reveal to us. I dutifully went to the Metropolitan Museum of Art in Manhattan and began to study the artistic renderings within their second floor galleries dedicated to Modern Art. I entered one small room in which hung a single very large painting: very simply, a beautiful blue painted canvas with one diagonal swatch of red paint boldly and dramatically there — I almost heard it as a triumphant shout!

I sat on the bench there trying to grasp what this artist wanted to share with me, and suddenly I realized that I too had made just such a painting when I was only a toddler. I remembered crawling with a red crayon in my hand into our living room and there with great delight reaching out and making this upward very personal mark on the clean white wall next to the couch. I thrilled in the seeing of this singular contrast of my red crayon line and its sensuous waxy presence there on the flat pale wall. I do not recall if our mother was as pleased as I was, but this memory revealed in a flash just what this modern artist was wanting to convey: There is great meaning in one’s becoming a child again — there is a richness in recapturing the innocent epiphanies of first experiences — so sacred, for you are still at that time not yet a self-conscious “me” but totally open to what you are creating — to what you are communicating — to what you are actually living, sharing that moment. The existentialist philosopher Buber had observed: “What counts is to know and to believe what one experiences and believes so directly that it can be translated into the life one lives.” That is who I was as a child — and that is what I wanted to be all my life.

I offer a bi-weekly seminar in existential ethics for our middle school students. Through round-table discussions evoked from selected case studies of real-life situations in which a personal decision must be made about the right thing to do, I foster an awareness in them that they each possess an innate potentiality to become a virtuous person by deciding to act kindly, not selfishly; to be wise and not foolish. Having mastered the “Three Rs,” they are now ready, as they enter young adulthood, to discover that becoming an empowered person requires the “Three Cs”: the uniquely human act of honest communication that enables the creation of the almost mystical communion in which two “me’s” wondrously become the kind of “we” our hearts have always yearned for; and from that fruitful bonding of daily shared kindness is born a community that flourishes with the love, with the caring intelligence, and with the creation of new beauty that brings true meaning to our whirling, silent, yet glorious universe.

So I meditate on the little boy that I was then when that snapshot was taken so many years ago, and I can see all these personal potentials that were already being actualized as I drank in life with my innocent seeing – and then as I turn to the children before me in this here and now, I strive all the more to join with our faculty and students in the creation of a true culture of compassion where we each come away from our daily encounters better and happier — which will be so evident by the look of kindness in our faces, by the gleam of joy in our eyes, and by the simple yet profound goodness of our greetings. Worthy of another snapshot!

— Paul Czaja


“I’m Just Saying…”

Yesterday, I watched my grandson Jacob try to get his way.  I’ve forgotten the actual situation, but he kept using the word “just.”  As I listened to him I realized how often I’d heard not only him, but others, use the word “just” in just the same way.

It could have been something like, “Jacob, it’s time to clean up.”  — “I just want to play one more game.”

You’ve heard it: “Sally, please stop playing with your food.”  — “But I was just trying to get the fat part out.”

Or, “Big brother, can I join your game?”  — “Not right now.  I just need to finish what I’m doing first.”

Or, “Stop hitting me!”  — “I was just playing with you.”

Listening to Jacob use the word, I wondered about it as it’s used in sentences like these, and whether it was related to the word “just” as in “justice” – as in “Abraham Lincoln was a good and just man.”

Looking up the etymology on the internet, I found erudite discussion of the word’s origins in the French “juste” and the Latin “ius.”  There was general agreement that all the many ways we use the word “just” are related.  They all, in the end, go back to the idea of justice.  But among all the cited examples – from several centuries of recorded usage in numerous languages – I saw not a mention of human nature as I had witnessed it in my grandson: namely, how we use words like “just” to “justify” ourselves.  We see it in children all the time, when something they’ve done is challenged and they seek to defend themselves by minimizing the seriousness of it.  To assert that they were “just” doing something is to say it was harmless – and, by implication, not deserving of disapproval.

An adult friend of mine often uses the concept generically, intentionally not finishing the sentence.  He uses it when even a subtle, raised eyebrow suggests skepticism about something he’s just said.  His usual response:   “Just sayin’…”

The problem with self-justification is that it knows no natural bounds.  It’s an elixir that never quite quenches the thirst.  Sally’s assertion that she wasn’t playing with her food, she was just trying to separate the good stuff from the fatty part, is not only a minimization of any harm done, it’s the beginning of an argument that she was actually engaged in a worthwhile pursuit.  “I was doing a good thing…”  “I was just” doing what I was doing means, in effect, “I was justly” doing what I was doing.

One of my favorite meditations on word origins involves a different word: “want.”  Its original meaning, coming from Old Norse, was the lack of something.  We rarely hear it used that way these days, but the word still appears occasionally in its former sense: “The old house was in want of repair.”  “His manners were seriously wanting.”  “For want of a better location, we had our picnic in a cemetery.”

From the meaning “lack” came the meaning “need,” since we sometimes see a need for what we lack.  Only from that did the word come to mean “desire” – since we so often desire what we lack (whether we need it or not).  Language doesn’t often change because some writer or linguist makes a conscious decision to change it.  It changes gradually and unconsciously, as everyday people apply a familiar word in new situations – situations that carry context, connotation and stretch.  As I see it, the more often people of the past said that they lacked something, the more often – human nature being what it is – there was a strong sense that they desired it.  Their listeners naturally assumed that the reason they were saying they lacked it was because they desired it.  I believe the close relationship between the concepts, as a matter of human psychology — the fact that people do so often desire what they lack — is what enabled the word’s meaning to slide from one into the other, so that today, when we can hope to obtain just about everything, we barely remember that “want” once meant “lack.”  Every time I hear a grandchild, buried in a pile of accumulated toys, say he or she wants some new one, I’m apt to think how naturally we human beings no longer want what we have, but nearly always want what we lack.

So too with “just.”  Originally meaning anything that was proper, correct, or exactly so (the “just law,” or T.S. Eliot’s “It is impossible to say just what I mean”), the word came to be used to assert that something is just even when it’s not – again because of natural human psychology.  “Just” grows and stretches like an old sock, as we claim that more and more fits the description.  It’s like the way the word “literally” has been widened to include things that are not, in fact, literally true, as when someone says, “He literally killed me with one joke after another…”

The evolution of our words tells us a lot about the kind of people we are.  Our use of “just” tells us how automatically we seek to justify ourselves.  Our use of “want” informs us about our hard-wired cupidity.  Our use of “literally” shows how readily we stretch language to serve our own exaggerations.

Two of my five siblings had severe learning disabilities.  My parents told me how blessed I was in comparison, that God had given me a good brain.  The blessing, I was told, carried a heavy responsibility not to waste that talent; I might someday have to care for my less intelligent siblings.  It was heady stuff for a little boy, and in retrospect, it started me on my way to being the arrogant man I became.  Meanwhile, God had blessed the entire human race, giving us “dominion” over all the birds of the air and fish of the sea.  After all, we were made in God’s image, while whales, dolphins and chimpanzees were not.  As a species, we have souls, and with our souls and our superior intelligence, we have a moral obligation to those less fortunate than ourselves, do we not?

Perhaps obviously so.  But the doctrine of noblesse oblige carries its own dangers, as “superior intelligence” or “superior morality” justifies our making things go the way we think they ought to.  Pope Nicholas V authorized the taking of pagan slaves, explaining that his purpose was to enable the enslaver to “bring the sheep entrusted to him by God into the single divine fold,” and to “acquire for them the reward of eternal felicity, and obtain pardon for their souls.”[1]  Noblesse oblige lay behind the court’s ruling in United States v. La Jeune Eugenie, 26 F. Cas. 832 (C.C.D. Mass. 1822), that slavery benefited slaves by saving them from the savagery of their own religion.  And it lay behind the words of Augustus B. Longstreet, a president of the University of Mississippi, when his moral obligation to take care of unfortunate heathen savages (Africans) caused him to complain to his son-in-law: “The creatures persistently refuse to live together as man and wife, even after I have mated them with all the wisdom I possess, and built them such desirable homes.”

It’s easy to see the blind wrongness in others when they think themselves wise.  How common is that same blindness among the rest of us?  How much do we risk when, with intellectual and moral superiority, we literally just want what we think best?

— Joe

[1] Romanus Pontifex, 1454


How Smart Are We?

Linnaeus caused a stir when he included human beings in the animal kingdom, even though he flattered us with the name Homo sapiens.  Charles Darwin caused a similar stir, though he asserted “there can be no doubt that the difference between the mind of the lowest man and that of the highest animal is immense…”  But calling ourselves wise hasn’t been enough for most of us.  Our Bibles put us above mere animals, on a level just below the angels.  Even our scientists weren’t satisfied with Linnaeus; they further differentiated us from other Homo sapiens because of our superior intelligence – never mind that we mated with Neanderthals.  The scientific world now bestows on us the title “Homo sapiens sapiens” – not just wise, but doubly wise.

When we were children, we were treated to many proofs of Man’s superior intelligence: we alone use tools; we alone have language; we alone care for our young so long; we alone can learn independently; we alone can solve new problems not encountered before; we alone have culture; we alone engage in entertainment, art, and play.  As these distinctions proved false, one by one, people became more cautious.  A recent list of the top ten traits that set us apart from other animals shows how much ground has been conceded.  Charles Q. Choi, contributing to Live Science in 2016, listed the top ten distinctions as our speech, our upright posture, our lack of body hair, the fact we wear clothing, that we have “extraordinary brains,” that we have precise hands, that we make fire, that we blush, that we have long (if dependent) childhoods, and that we live past child-bearing age.[1]  In creating that list, Choi acknowledged that we’re not the only animals that speak, we just speak differently; that our upright posture is responsible for high childbirth mortality rates compared to other primates; that we have as much body hair as other primates, but ours is thinner, shorter, and lighter; and that while we have opposable thumbs, the apes do too, plus they have opposable big toes that do things ours cannot.  Blushing, and living past our reproductive usefulness, may be the only things that really set us apart, but we don’t yet understand what good these things do us.

Distinctions such as Choi’s make us different, but do they make us superior?  For many religious people, the claim that we’re superior rests first on our “souls” – which, despite a lack of proof of their existence, many of us believe in the way we believe in our superior intelligence.  When it comes to that intelligence, Choi acknowledges that our brains are not the largest.  But our brains, he tells us, are “extraordinary” because they can produce the works of Mozart and Einstein.  And as any human being will tell you, Mozart is more beautiful than the screeching and moaning of a whale.  As any human being will tell you, it takes a higher intelligence to develop an atom bomb than it does to fly like a bat.

But do such judgments tell us more about our vanity than our intelligence?  Consider our history of assessing animal intelligence.  In 2013, the Wall Street Journal published a wonderful article by primate researcher Frans de Waal.  For years, de Waal wrote, scientists believed elephants incapable of using tools – one of the classic “proofs” of human intelligence.  In earlier studies, the elephants had been offered a long stick while food was placed outside their reach, to see if they would use the stick to retrieve it, as people (and chimpanzees) were able to do.  When the elephants left the stick alone, the researchers concluded that the elephants didn’t understand the problem.  “It occurred to no one,” wrote de Waal, “that perhaps we, the investigators, didn’t understand the elephants.”  Elephants use their trunks to smell, not just to hold branches.  As soon as an elephant picks up a stick, its nasal passages are blocked and it can’t tell what’s food and what isn’t.  So years passed before researchers decided to vary the test.  When they put a sturdy square box out of sight, a good distance away, the elephant easily retrieved it, nudged it all the way to the tree, and used it to reach the fruit – with a trunk that could now smell, and touch, and approve, its prize.

Even more anthropocentric, in retrospect, is the research on chimpanzees’ abilities to recognize faces.  For years, scientists had been giving chimps pictures of human faces, and when chimps failed to distinguish among them, researchers happily concluded that the “unique” human ability to recognize faces had not been matched by the chimps.  It took decades before someone thought to test chimps on their ability to recognize the faces of other chimps – and when they did, they discovered that chimps were amazingly good not just at recognizing faces, but at using them to extrapolate family relationships!  And with improvements in testing methods, de Waal wrote, a 2007 study showed chimpanzees did significantly better than a group of university students at remembering a random series of numbers.[2]

The accepted idea is that “intelligence” involves the capacity to learn.  But to learn what?  If I can learn calculus easily but am helpless learning to play the piano, does that make me smarter than my counterpart with the opposite aptitudes, or less so?  Am I “smarter” than Dustin Hoffman in Rain Man, or less so?  If people learn different sorts of things at different speeds, then is there any basis to say that one is smarter than another, without “smartness” being tied to a particular skill?  I once thought a fair answer might be that an individual could be considered “smarter” if she easily learned those things that are important, as opposed to the eccentric with an aptitude for odd or useless things.  If I can easily learn to build a house, or a car, but my friend was able to play the piano the first time his hands touched the keys, the question of who’s “smarter” might depend on which skill is more useful to a typical human being.  Indeed, standardized testing still exists in K-12 because it is useful in predicting success in college.  This utilitarian approach to intelligence made at least some sense to me – until I sought to apply it to the rest of the animal kingdom.

What does science tell us about the relative intelligence of animals?  Finding a raft of “top ten” lists on the internet, the first thing I noticed was their lack of consensus.  Several sources rated chimpanzees the smartest animals; others dolphins, whales, elephants, and pigs.  But the variety of nominees was striking: top-ten lists included parrots, dogs, cats, squirrels, rats, crows, pigeons, orangutans, gibbons, baboons, gorillas, otters, ants, bees, ravens, ducks, cows, bonobos, and octopi, each list focusing on different skill sets or aptitudes.  I quickly decided that the lack of IQ testing on Noah’s ark wasn’t the only reason people can’t agree on what makes an animal smart.  There’s no universally-accepted definition of intelligence for species, any more than there is for humans.

Clearly, we do some things better than other animals.  In fact, as we look around our world, we see examples of such things all around us.  But I suspect that from a dog’s perspective, the variety of sounds and smells he’s aware of, that we are not, makes it seem to him that he’s aware of a great deal more in the world than we are.  What he sees all around him is equally full of confirmation of what he can appreciate that we cannot.  When we speak of our intelligence, when we give as examples Einstein and Mozart, what should I make of my assumption that a whale sees nothing special about Einstein?  And how would I know whether the whale appreciates Mozart?  Is it possible whales are simply bored by what they hear as Mozart’s “inferior” sounds?  I’m quite sure, meanwhile, that I’m incapable of appreciating the ways whales communicate with each other.  Is it objective to conclude that we’re “smarter” because we understand the complexities of Mozart?

How is it we put so much stock in our ability to do the things we do well, and so little stock in the “unimportant” (to us) things we don’t do as well as other animals — like turn into a butterfly that can navigate its way back to a birthplace thousands of miles away?  Sure, a dog may not be able to learn Einstein’s theories.  But we’re not able to learn how to listen or smell with a dog’s acuity — even though dogs have been trying to teach us for centuries, by modeling it for us.  Why don’t we conclude we’re slow learners?

The second observation I made during my review of the “smartest animal” lists was that, in commenting on why these species were considered especially smart, list after list referred to the nominee’s similarities to us.

Take, for example, the reasons given by Mercola.com for ranking chimpanzees the most intelligent animals: “Chimps … like humans, live in social communities and can adapt to different environments…  Chimpanzees can walk upright on two legs if they choose…”[3]  (Surely most scientists don’t believe that walking upright has anything to do with intelligence.  Am I wrong here, Stephen Hawking?)

In explaining why it ranks dolphins the second smartest (right after chimps), How Stuff Works tells us, “Schools of dolphins can be observed in the world’s oceans surfing, racing, leaping, spinning, whistling and otherwise enjoying themselves.”  Okay…  And why does it rank elephants fourth smartest?  “Elephants are also extremely caring and empathetic to other members of their group and to other species, which is considered a highly advanced form of intelligence.”[4]  About chimps, CBS says their number one ranking is “Perhaps not entirely surprising given that chimpanzees happen to be the closest living relatives to humans in the animal kingdom.”

The CBS website makes a truly remarkable assertion based on the difference in the brains of dolphins and human beings:  “Turns out that like the other animal species in this gallery, dolphins possess large brains relative to their body size with a neocortex that is more convoluted than a human’s. Experts say that this puts dolphins just behind the human brain when it comes to cognitive capacity.”  If, as I understand it, a convoluted brain surface is an indication of intelligence, how does the greater convolution of the dolphin brain put the dolphin behind us, rather than ahead of us?[5]  Is it because our inability to understand their squeaks renders their speech “gibberish,” much as E=mc² might seem gibberish to them?

Having eyes behind our heads, or a third arm projecting from our backs, could be very useful to us in the right situations.  Yet we’re happy to be without them.  However, if we were to lose the sight of an eye or one of our arms, we might feel some tragedy had befallen us.  Why is it that we don’t regret not having eyes in the backs of our heads?  Why do we not lament the lack of a third arm – or the fact that we lack the olfactory prowess of a dog, or the sonar of a bat?  I’ll bet that if our noses could do what a dog’s can, our ability to distinguish thousands of things based on just a few molecules in the air would rank among the first reasons that humans are the smartest animals.

So my dilemma is this: what happens if we try to remove any anthropocentric bias from our assessment of intelligence?  Is there a species-neutral standard by which to assess such things?  The more I consider the matter, the more I’m drawn to the possibility that the only definition by which one species can be said to be more intelligent than another is to ask how well-suited its unique talents are to ensuring its survival.  Measured that way, Homo sapiens sapiens has done pretty well for itself, at least in the hundred thousand years it’s been around.  Maybe there’ve been a few times we haven’t seemed so bright – but hey, what’s an error like thinking that the entire universe revolves around the earth, when we can figure out how to make chemicals like DDT or fill the planet with gas-powered automobiles?  Have we not been successful, filling the earth with billions of copies of ourselves?  Some say that a measure of human intelligence is our extraordinary ability to adapt to new environments.  Have we not, after all, proven our ability to adapt to different environments?[6]

The four animals most commonly found at the top of the “smartest animals” lists were chimpanzees (and other primates), dolphins (and whales, porpoises, and other aquatic mammals), elephants, and pigs.  But most of the species in these groups are endangered.  If they really are similar to us, and they really are endangered, then what conclusions should we draw?  That like the great apes, we too are near extinction?  Or does the fact that we are responsible for the near extinction of most of these species mean that we are smarter than they are, and very different, after all?

Of course, not all the “similar” species are nearing extinction.  Dolphins are doing well, apparently.  Domesticated pigs are flourishing.  But before concluding that pigs have been “smart” to prosper so well that they end up on our dinner tables in such large numbers, consider: if the true intelligence of a species is best evidenced by long-term growth and survival, why do we find all the “intelligent” animals among the mammals?

It is nearly impossible to calculate the number of cockroaches that exist worldwide due to the fact that so many already exist and are reproducing at such a fast pace. Scientists believe that there are over 4,000 species around the world and there are at least 40 different species that exist in America. One source suggests that 36,000 cockroaches exist per building in some parts of America.[7]

Cockroaches have also been around for 300 million years – three thousand times longer than Homo sapiens – and could easily survive a nuclear winter.[8]

But it simply isn’t acceptable to suggest that cockroaches are smarter than people.  Obviously, all mammals are smarter than insects; all primates are smarter than other mammals; all humans are smarter than other primates; and the smartest people in the world are those whose religious, political, and other beliefs all happen to match my own.  But doggone it, I still can’t figure out what makes us so extraordinarily smart.  Maybe someday we’ll figure it out, the way we finally figured out that the earth isn’t at the center of the Universe.

— Joe


[1] Charles Q. Choi, Top 10 Things That Make Humans Special, Live Science, http://www.livescience.com/15689-evolution-human-special-species.html

[2] Frans de Waal, The Brains of the Animal Kingdom, Wall Street Journal, March 22, 2013, http://online.wsj.com/news/articles/SB10001424127887323869604578370574285382756

[3] Dr. Karen Becker, The Most Surprisingly Smart Animals,  http://healthypets.mercola.com/sites/healthypets/archive/2015/08/22/10-most-intelligent-animals.aspx

[4] Top Ten Smartest Animals,  http://animals.howstuffworks.com/animal-facts/10-smartest-animals.htm

[5] CBS News, Nature’s 5 Smartest Animal Species,  http://www.cbsnews.com/pictures/natures-5-smartest-animal-species/5/

[6] I love that phrase “after all.”  Adaptability to different environments is indeed an oft-cited reason supporting human intelligence, but after only a hundred thousand years, it might be wiser to substitute the more accurate “so far” for “after all.”

[7] Larry Yundelson, Number of Cockroaches, The Physics Factbook, http://hypertextbook.com/facts/2009/LarryYundelson.shtml 

[8] See Zidbits, Can Cockroaches Survive a Nuclear Winter?  http://zidbits.com/2011/09/can-cockroaches-survive-a-nuclear-winter/

 


Love Hurts

Want a good laugh?  WMBW isn’t all somber reality. Sometimes, we just need to laugh.

This six-minute video, called “Love Hurts,” is actually pretty funny.   It reminds me that sometimes, when we’re up to our ears in bad news and worry, it’s funny to realize we’re wrong.

Thank you, Pat Carmody, for sharing it with us.

— Joe


Comparing Apples and Oranges

You know: the very point of saying “it’s like comparing apples and oranges” is that it’s difficult, maybe even impossible, to do so, because — well — because they’re just not the same.  Consider this picture:

[Photo: forty-nine apples and one orange]

Forty-nine apples and one orange.   If I put all this fruit in a bag, mix it up and pull one piece out at random, the odds will be 49 to 1 that I’ll pull out an apple.  That is, 49 to 1 against the orange.

Now a question for you: Assuming a random draw, will I be surprised if I pull out an apple?  Answer: no, I won’t.  I fully expect to pull out an apple, due to the odds.  I assume you wouldn’t be surprised either.  I also assume we’d both be surprised if I pulled out the orange, for the same reason.  Am I right?

Now,  I feel as I do without qualification — by which I mean, for example, that if I pick out the orange, my surprise won’t be greater or less depending on whether the orange weighs nine ounces or ten, and I won’t be surprised if I pull an apple from the bag, regardless of the number of leaves on its stem.   The fact is I expect an apple, and as long as I get an apple, I’ll have no cause for surprise.  Right?

But now another question, and this one’s a little harder. What are the odds of my picking out an apple with two leaflets on its stem?  You can scroll back and look at the picture if you want, but try to answer the question without doing so: what are the odds of my picking an apple with two leaflets on its stem?

Ready?

Alright. Hard, wasn’t it?  If you went back to look at the picture, you found there was only one apple with two leaflets on its stem. Knowing that, you determined that the odds against my picking that particular apple were 49:1, the same odds as existed against my picking the orange.  Yet it’s pretty clear, as already determined, I would have been surprised if I’d picked the orange, but I wouldn’t have been surprised if I’d picked the only apple in the bag with two leaflets on its stem.

My real question, then, is why the difference?  And the only answer that makes sense to me comes not from probability theory, but from psychology.  I’m surprised if I draw the orange because, being mindful of the differences between the orange and the apples, I expected an apple. But not being mindful of the uniqueness of the two-leafed apple, I lumped all the apples together and treated them as if they were all the same.  I focused on the fact that the odds against the orange were 49:1, while never forming a similar expectation about the improbability of choosing the two-leafed apple.

Here, then, is my conclusion:  In pulling fruit from the bag, the actual improbability of every single piece of fruit is the same. Yet the perceived improbability of choosing the orange is far greater than the perceived improbability of drawing the two-leafed apple, because… well… because I hadn’t been paying attention to the differences among the apples.

Also, the division of the 50 pieces of fruit into only two categories – apples and oranges – was a subjective choice.  I could have grouped the fruit into large and small, or into three groups based on relative sweetness.  Or according to the number of leaves on the stem, in which case the orange would have been in a group with twenty apples.

Now, in any group of 50 pieces of fruit, no two are going to be exactly alike – the two-leafedness of one will be matched by the graininess of another, the seed count of a third, the sweetness of a fourth, and so on.  But we elect to ignore (or de-emphasize) a whole slew of possible differences, in order to focus on one or two traits.  Only by ignoring (or at least de-emphasizing) other differences do we construct a homogeneous group, treating all 49 of the red fruits the same for purposes of comparison to the orange one — treating them all as “apples” rather than one or two McIntosh, one or two sweet ones, etc.  That’s why I’m not surprised when I pick out that one, unique apple, despite the 49:1 odds against it.

Now consider a related point: that (subjective) decision about what criteria to base comparisons on, while ignoring other criteria, not only explains why we’re surprised if we select the orange, but how we estimate odds in the first place.  In fact, if we consider all their attributes, every piece of fruit is unique. The odds against picking any one are 49:1.  Yet, if we only focus on the uniqueness of the orange, our impression of odds will be vastly different than if we focus on fruit size, or sweetness, or seed count.
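
The point is easy to check with a quick simulation.  Below is a minimal sketch in Python; the labels, and the choice of which apple is the two-leafed one, are arbitrary stand-ins of mine for the picture:

```python
import random
from collections import Counter

# A hypothetical bag: 49 distinct apples and one orange.
# Call apple number 7 the lone two-leafed apple (an arbitrary label).
bag = [("apple", i) for i in range(49)] + [("orange", 0)]
two_leafed = ("apple", 7)

trials = 100_000
pieces, kinds = Counter(), Counter()
for _ in range(trials):
    fruit = random.choice(bag)   # one uniform draw from the bag
    pieces[fruit] += 1           # count each unique piece...
    kinds[fruit[0]] += 1         # ...and the category we happen to notice

# Each individual piece, orange or two-leafed apple, turns up about 1/50 of the time:
print(pieces[("orange", 0)] / trials)   # ~0.02
print(pieces[two_leafed] / trials)      # ~0.02

# But the categories we track have very different probabilities:
print(kinds["apple"] / trials)    # ~0.98, so an apple is no surprise
print(kinds["orange"] / trials)   # ~0.02, so the orange feels improbable
```

The simulation says just what the fruit taught me: the actual improbability of each piece is identical; only the grouping we happen to be mindful of makes one draw feel surprising.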

It isn’t some sort of unalterable constant of nature that determines how we perceive odds – it’s what we’re mindful of, and our resulting (subjective) expectations.

In an earlier post, Goldilocks and the Case Against Reality, I wrote of the concept that the limited focus which characterizes our brains has been useful to us.  (If I could see every part of the electromagnetic spectrum, I’d be overwhelmed with information, so I’m advantaged by being able to see only visible light.)  My brain is just too small and slow to deal with all the information out there.  Even if I’d happened to notice there was only one two-leafed apple, I could never have taken the time to absorb all the differences among the forty-nine apples.  Compare that, say, to the difficulty of absorbing the different facial features of every person on this tiny, one-among-trillions planet.  I cope with reality by ignoring vast complexities of things I don’t understand, lumping a lot of very special things into groups for the very reason that I can’t get my brain to focus on all their differences.

Now, this lesson about comparing apples and oranges teaches me something about God, and I hope you’ll give me a chance to explain.

The astronomer Fred Hoyle is said to have written, “The probability of life originating on Earth is no greater than the chance that a hurricane, sweeping through a scrapyard, would have the luck to assemble a Boeing 747.”  Hoyle apparently used the improbability of life as an argument for the theory of intelligent design. Hoyle’s statement was then quoted in The God Delusion (Houghton Mifflin, 2006), by the atheist Richard Dawkins, who said that the “improbability” of life is readily explained by Darwinian evolution, and declared, “The argument from improbability, properly deployed, comes close to proving that God does not exist.”

Now, whether either of these beliefs makes sense to me, I’ll leave for another day.  My focus is on trying to understand any argument based on the “improbability” of life – and it’s because of what I’ve learned from the fruit.

I agree that the odds are against a hurricane assembling a 747, and against life’s existence exactly as it is today.  But is my surprised reaction to such improbabilities any different than my surprise at the random drawing of an orange, but not at the two-leafed apple?  Imagine, for a moment, that some other configuration of scrap parts had been left in the hurricane’s wake – one that appeared entirely “random” to me.  Upon careful inspection, I find that a piece of thread from the pilot’s seat lies in the precise middle of what was once the scrap heap.  A broken altimeter lies 2 meters NNE of there.  The knob of the co-pilot’s throttle abuts a palm frond 14.83 inches from that.  The three hinges of the luggage compartment door have formed an acute triangle, which (against all odds) points precisely north; the latch from the first class lavatory door is perched atop the life jacket from Seat 27-B….

I trust you get the picture.  Complex?  Yes.  Unique?  Yes.  So I ask, what are the odds the triangle of hinges would point exactly north?  The odds against that alone seem high, and if we consider the odds against every other location and angle, once all the pieces of scrap have been located, what are the odds that every single one of them would have ended up in precisely the configuration they did?

In retrospect, was it just the assembly of the 747 that was wildly against the odds?  It seems to me that every unique configuration of parts is improbable, and astronomically so.  Among a nearly infinite set of possible outcomes, any specific arrangement ought to surprise me, no?  Yet I’m only surprised at the assembly of the 747.  What I expect to see in the aftermath of the hurricane is a helter-skelter mess, and I’m only surprised when I don’t.

But on what do I base my expectation of seeing “a helter-skelter mess?” Indeed, what IS a “helter-skelter mess”?  Doesn’t that term really mean “all those unique and unlikely arrangements I lump together because, like the apples, I’m unmindful of the differences between them, unmindful of the reasons for those differences, ignorant of how and why they came to be as they are?”

Suppose, instead, that with the help of a new Super-Brain, I could understand not only all the relevant principles of physics but also all the relevant data – the location, size, shape and weight of every piece of scrap in the heap before the storm – and suppose further that when the storm came, I understood the force and direction of every molecule in the air, etc.  With all that data, wouldn’t I be able to predict exactly where the pieces of scrap would end up?  In that case, would any configuration seem improbable to me?  I suggest the answer is no.  There’d be one configuration I’d see as certain, and the others would all be patently impossible.

Compare it to a deck of cards.  We can speak of the odds against dealing a certain hand because the arrangement of cards in the shuffled deck is unknown to us.  Once the cards have been dealt, I can tell you with certainty what the odds were that they’d be dealt as they were: it was a certainty, given the order they had previously taken in the deck.  And if I’d known the precise arrangement of the cards in the deck before it was dealt, I could say, with certainty, how they would be dealt.  Perfect hindsight and foreknowledge are alike in that neither admits of probabilities; in each case — in a state of complete understanding — there are only certainties and impossibilities.  The shuffling of a deck of cards doesn’t mean that any deal of particular cards is possible; it means that we, the subjective observers, are now ignorant of the arrangement that has resulted.  The very concepts of luck, probability and improbability are constructs of our limited brains.  Assessments of probability have developed as helpful ways for human beings to cope, because we live in a world of unknowns.
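
The card example can be sketched the same way; here is a small illustration in Python, where “peeking” at the shuffled deck is my hypothetical stand-in for that state of complete understanding:

```python
import random
from math import comb

# With the deck's order unknown, a specific five-card hand is one of
# C(52, 5) equally likely possibilities.
print(comb(52, 5))  # 2598960 hands, so odds of roughly 2.6 million to 1

# With the order known, probability collapses into certainty.
deck = list(range(52))
random.shuffle(deck)            # shuffling only hides the order from us
known_order = deck.copy()       # suppose we peek: now we know the order exactly
deal = deck[:5]
assert deal == known_order[:5]  # given the known order, this deal was certain
```

Nothing about the cards changes between the two cases; only our knowledge does, which is the sense in which probability is a construct of our limited brains.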

Now, let’s return to the scrap heap, one more time.  But this time, we don’t have an all-knowing Super-Brain.  This time, we’re just a couple of ants, crawling across the site after the hurricane has left.  On the off-chance that the hurricane has left a fully assembled 747, would we be mindful of how incredibly unlikely that outcome had been?  I suspect not. A 747 has no usefulness or meaning for an ant, so we probably wouldn’t notice the structure involved, the causes and purposes of each part being where it is. From our perspective as ants, that assembled 747 might as well be a helter-skelter mess — an array of meaningless unknowns.

Now, after traversing the 747, something else catches our little ant eyes. Immediately, we scramble up the side of the anthill, plunge into the entrance, race down the pathway to the Queen’s deep chamber, and announce with excitement that something truly amazing has happened.

“It’s surely against astronomical odds,” I say. “I wouldn’t believe it myself, had I not seen it with my own two eyes!”

“What is it?” the Queen’s courtiers demand to know.

“A great glass jar of sweet jelly has appeared,” you say, “just outside the entrance to our anthill!  That jelly could have landed anywhere in the jungle.  What are the odds it would land just outside the entrance to our hill?  A thousand to one?  A million to one?  There must be a reason…”

Well, there probably is some reason, it seems to me.  But the difference in approaches taken by people and ants to the perceived “improbabilities” here reminds me of comparing apples to oranges.  It’s not just that apples are different from oranges.  Whether “God” made us or not, we’re all unique, in many, many ways.  Some of us — I’ll call them the oranges — attribute perceived improbability to “plain ol’ luck.” Others, like one-leafed apples, attribute it to intelligent design.  Others, like leafless apples, say that improbability nearly proves the non-existence of God.  I say, what we perceive as improbable depends on whether we’re ants or people.  Our surprise varies widely, depending on the criteria we’re (subjectively) mindful of.  But as unique as we are, we’re all alike in one respect: we all have limited brains, and that’s why we need concepts like probability —to cope with our profound lack of understanding.

So, call me a two-leafed apple if you like, but when I encounter the improbable — the fact that the grains of sand on a beach happen to be arranged exactly as they are, and the snowflakes in a blizzard move exactly as they do — I try to remember that what I experience as “randomness” is just a name I give to what I can’t get my mind around.  “Improbability” tells me nothing about God, one way or the other, except that, if God does exist, she gave me a brain that’s incapable of fully understanding the uniqueness of things, or why any of it exists.

And I’m okay with that.

— Joe

 


Self Reflection

What follows was submitted to the WMBW website as a comment on one of my earlier posts.  I was moved by it; I wanted to share it; so I got the author’s permission to post it as a guest blog in its own right.  My old friend, Ron Beuch, has clearly been doing some honest self-examination. I’m pleased to be able to share what he wrote:

The old man sits at the bench in his favorite sweats, the ones with the hoodie.  (A friend gave them to him for helping to build a stone water feature for his patio.) The overhead garage door is closed because the wind is blowing and the temperature is around freezing. This limits his light to the overhead LED spots that he installed recently.

Surrounded by the stuff of forty years, he hears the rattling of the doors and thinks of the sixty-foot black walnut tree that fell on his stuff last summer. He puts aside his newly acquired paranoia of wind and inspects the silver he has been tasked to rescue: two wine goblets, two dinner forks, two dinner knives and a large serving fork. The wife has bumped his quota because she knows that speed comes with experience.

As he polishes these items he reflects on the memories connected to them, the romantic dinners with his wife, the family dinners on holidays, the parties with friends. The memory of his drinking problem dims the glow for a moment, but fortunately he won that battle. The white noise of the space heater warming his feet competes with the tinnitus that is buzzing like a summer evening in the background. As he dives closer in focus to the depths of the shine to see the blemishes that might mar the surface, his memory does the same with his past. When inspecting his psyche, some of the smallest details of his biggest party fouls surface.

“Whose kids are those?”

“Whose tree is that?”

“I didn’t know sports bras came in that size.”

He can’t help but feel better when he compares these to what is coming out of the TV today.

Thanks, Ron.

 

— Joe


Rip Van Winkle Returns

Sometimes I feel like Rip Van Winkle.  A career in civil rights and employment law kept me in the midst of political issues and controversies for over thirty years, but upon my retirement in 2003, I decided to enjoy a less stressful life:  to do so, I would isolate myself from the news.  So I went into a deep sleep.  For fourteen years now, I’ve been dreaming of beautiful things.  During my slumber, I played with grandchildren, I gardened, I wrote historical fiction, I read some of my daughter’s old college psychology texts – nothing that would raise my blood pressure.  I especially enjoyed reading about the psychology of human error, and confirmation bias.

In Being Wrong (HarperCollins, 2010), Kathryn Schulz quotes the French essayist Montaigne as asserting that people “are swept [into a belief] – either by the custom of their country or by their parental upbringing, or by chance – as by a tempest, without judgment or choice, indeed most often before the age of discretion.”  In keeping with that view, Schulz asserts that the single best predictor of someone’s political ideology is their parents’ political ideology.  That had certainly been true in my case, and as I researched the actual lives of the players in my historical fiction, I had discovered how true it was for them as well.  I was forced to ask myself the difficult question of whether I believed what I did, not because it made objective sense, but because of an inherited, or at least culturally-guided, confirmation bias of my own.

Now, even when asleep, our bodies can sense the presence of heat, cold, or other stimuli, and in a similar way, though I was asleep, I did hear snippets of the outside world from time to time.  When the classic movie I’d recorded (so I could fast-forward through campaign ads) ended, I’d be startled as the TV screen suddenly defaulted to the late news.  In the car, entranced by Smetana’s Moldau or Charles Mingus’s rendition of “I’ll Remember April,” I’d be jarred awake by a piece of headline news before my hand could turn the radio off.  So I wasn’t totally asleep; not totally unaware of what was going on in the modern world.  Just mostly so.

Now, think what you will of him, few will deny that Donald Trump makes for engaging theater.  So no surprise, occasional sound bites of last summer’s slugfest between Donald and Hillary began to intrude on my dream, appealing to my own interest in politics the way a voice whispering “one little drink won’t hurt you” might appeal to an alcoholic, even after fourteen years on the wagon.  And – no one will be surprised to hear this – since awakening from my fourteen-year political slumber, I’ve been feeling like old Rip Van Winkle himself, rubbing my eyes in disbelief at how much changed during my absence, aghast at just how divisive this country had become while I slept.  My conservative friends had become so opinionated and cocksure that I found myself articulating liberal replies, in an effort to moderate their extremism.  My liberal friends had become so arrogant and dismissive of their opponents that it seemed I had to join them, or become their enemy.  Two months ago, I started this blog as the only response I could think of to a world that seemed to have gone out of control as I slept.  And because of this blog, I have started, once again, to be sucked into the vortex of the news.

I still know little of what went down during my reverie.  As I emerge from my slumber, I imagine myself having something like Van Winkle’s naivete.  Perhaps that naivete will be apparent to others, as I dare to comment on the modern political scene.  But let the chips fall where they may, I’m going to comment – because I’ve decided my long slumber may actually be of help to the mission at hand.

My brother James alerted me today to an article I found most interesting; that article is the focus of this post.  But before I get to it, I’m afraid that, for some on the right, it might be an immediate turnoff to mention that it came from Vox.  Vox is a news source I’d never heard of until today, as it was created during the period of my deep slumber.  From what I’ve been able to gather this afternoon, it’s apparently viewed by the right as being very left.  So I feel constrained to offer, first, a word of caution about sources.

In Kathryn Schulz’s catalogue of the types of non-rational, illogical thinking to which we human beings are prone,  she points out that “[i]nstead of trusting a piece of information because we have vetted its source, we trust a source, and therefore accept its information.”  That’s understandable in some cases, but not a good thing for one aspiring to real communication across the political divide.  And in this case, I feel I have an advantage – having never heard of Vox before, I hold no biases for or against the source.  I neither trust it nor distrust it.  I can only consider what I read in it on its own merits.

Anyway, I hear today that Ezra Klein launched Vox in the eleventh year of my slumber with an article titled “How Politics Makes Us Stupid.”  I haven’t read it, but it apparently focused on the scientific work of Dan Kahan, a professor at Yale Law School whose earlier work showed that the ability to reason soundly, particularly about political subjects, is undermined by the need to protect one’s core beliefs.  Hence, “how politics makes us stupid.”  Now, lost as I may have been in the land of Nod, that came as no surprise to me:  it sounded like run-of-the-mill confirmation bias, and I had digested the concept of confirmation bias years ago, before ever going to sleep, along with half a package of Oreo cookies.  But of greater interest to me is what appeared in Vox this week.  Klein has now reported on the work of Professor Kahan again, this time describing a way to escape our human susceptibility to confirmation bias:  CURIOSITY.

Apparently, as described by Klein (http://www.vox.com/science-and-health/2017/2/1/14392290/partisan-bias-dan-kahan-curiosity), Kahan’s new research shows that some of us – on both the right and the left – are more scientifically curious than others, and that those of us who are scientifically curious are less prone to confirmation bias – or, to use Kahan’s phrase, less prone to let our politics make us “stupid.”  The point appears to be that confirmation bias interferes with sound thinking on both the left and the right, but that curiosity – a trait that exists on both the left and the right – is the common predictive factor that makes us less susceptible to the “stupidity” toward which confirmation bias pushes us.

Now, I haven’t vetted the source.  I haven’t even read Kahan’s actual findings.  I know better than to rely on the second-hand report of any intermediary, trusted or not.  But I have to confess, I’m doggone interested.  For the past several weeks, I’ve been asserting that political debate is for people who want to prove that they’re right in the eyes of a judge – not for people who want to convince people with whom they disagree.  In a debating class, there’s some sort of third-party judge.  In a courtroom, there’s a judge or jury.  In a political debate, there’s the undecided viewing public that is the effective judge.  In every case, the efforts of the debaters are designed to win points with the third-party judges by making the other side look erroneous, ignorant, or (best of all) just plain foolish.

How surprised I was, upon waking from my slumber, to discover that modern internet discussion is conducted the same way – as if some third-party judge were present to determine a winner.  After a thirty-year legal career, I can tell you that I never saw a plaintiff convince a defendant she was right, nor a defendant convince a plaintiff that he was.  Rubbing the sleepy dust from my eyes, I had to wonder what these internet debaters thought they were doing in their efforts to “win an argument” (by showing how stupid their adversaries were) in the absence of any third-party judge.  Weren’t they quite obviously driving their opponents deeper into their convictions?  In Being Wrong, Schulz describes exactly that phenomenon – how such efforts to “persuade” actually have the opposite effect.  And I’ve been saying that, surely, it makes more sense to conduct political discourse with a sincere attitude of wanting to learn from one’s adversaries, rather than proving (to ourselves?) how stupid our adversaries are.  I’ve been asking whether, paradoxically, a sincere desire to learn from someone else isn’t more likely to result in his or her learning from us at the same time.  And I’ve been wondering if there isn’t some psychological study that backs up that theory.

So here, today, comes my brother James, providing me exactly the sort of scientific study I’ve been looking for.  A desire to learn — curiosity — could it really make us less susceptible to confirmation bias?  Perhaps this is all just confirmation bias, on my part, fitting as well as it does with what I already suspected.  So I want to check into it further.  I will check into it further.  But in the meantime, doggone it, it seems clear to me that curiosity must be the remedy, just as Kahan and Klein say.  If being curious isn’t close to being open-minded, and if being open-minded isn’t essential to learning, and if learning isn’t something we should all strive to experience, then what is?  And how come there’s all this debating and berating that has been shown to keep us from ever learning anything?

The world has changed a great deal in my years in the land of Nod.  Now that I’m awake, feeling (like old Rip Van Winkle) a good bit naïve and ill-informed, with no real clue about the strange new world I find around me, I am very, very thankful for that slumber.  For after thinking over what Kahan’s research has apparently shown, I believe my deep sleep may have done me a huge favor: being politically asleep for these fourteen years has left me with what strikes some (myself included) as naivete, but that may be just what I need to be curious about what’s been going on in the world; curious about who’s right, and who’s wrong; and ready, and willing, to learn from people who aren’t already my mental clones.

I’ll close by applauding another website I learned of just today: The Lystening Project.  The Lystening Project is an innovative approach to fostering open-mindedness and civility in political discourse, conceived of by a class of San Francisco high school students in what is surely a kindred spirit to that of We May Be Wrong.  Check out their website for yourself, but from what I gather, their idea is to assess participants’ political leanings through a short survey of opinions, and then to pair them with people of opposing views for dialogue across the divide.   I especially like the “oath” that participants must take before undertaking such paired dialogue:

“The Lystening Oath”

I will take a moment to connect with the other person as a human being first.

I will enter this conversation with the goal of understanding, not convincing.

I will not vilify, demean or degrade others or their views.

I will enter this conversation with goodwill and I will assume goodwill on the part of the other person.

I will do my best to express my own views and how I came to believe them.

Reminds me of the rules for the WMBW Forum.  I can’t imagine a better oath to ask people to take, and I thank the students’ advisor, Elijah Colby, for bringing their project to my attention.  Check them out for yourself at https://thelysteningproject.wixsite.com/thelysteningproject. Help if you can.

— Joe


The WMBW Forum

The We May Be Wrong Forum is now up and running.

We’re hoping it will be a different kind of discussion Forum — not one in which people belittle those they think are wrong.  Not one where people are trying to “win a debate” or feel good by surrounding themselves with people who think like they do.  Rather, a Forum in which people can participate in discussions with people who may disagree, but without fear of being mocked or ridiculed, because they aren’t contestants in debate, but partners in a search for understanding.

Impossible?  Maybe.  Altruistic?  Of course.  But we think of it as a worthwhile experiment in this age of rampant incivility, and we’re moving forward.

So you’re invited to check out the new WMBW discussion Forum, and help us make the experiment succeed.

–Joe


MLK and the Dream

I had a dream last night;  I woke up this morning thinking about it. And my train of thought went from there to Martin Luther King’s dream.  Remembering the late civil rights leader led me to contemplate a sort of ironic coincidence: that last Monday – the 16th – the Martin Luther King Holiday – was the very day I made the final revisions to my novel, Alemeth, and began the process of formatting it for the printing company.

Completion of the novel is the fulfillment of a dream.  I could trace its origins back to the early 1960’s, possibly even to the very year of King’s famous 1963 speech.  That was when my grandmother first showed me some of the letters my great uncle Alemeth had written home from the front lines during the Civil War.   Or I could trace its origins to a dinner that Karen and I had with our friends Roger and Lynda ten years ago, when a lively discussion got me thinking about a novel that explored (or even tested) the differences between fiction and non-fiction.  Or I could trace it back seven years, when I chose to write Alemeth’s life story.  No matter how far back I go to date the novel’s origins, it has been many years in the making. Somewhere along the way, a novel based on Alemeth’s life became a dream, and it seemed ironic that the dream had finally been fulfilled on the Martin Luther King Holiday.

But the coincidence seemed ironic for reasons deeper than the fact that my novel had been a sort of dream for me.  It seemed ironic because the themes of King’s famous “I Have a Dream” speech and the themes of Alemeth are so closely related.

For King’s dream, we need scant reminder.  “[O]ne day… even the state of Mississippi… will be transformed into an oasis of freedom and justice.” “[M]y four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character…” My great uncle, Alemeth Byers, the title character of my novel, was the son of a cotton planter in Mississippi.  The family owned sixty slaves when the Civil War began.   In calling Mississippi “a state sweltering with the heat of injustice, sweltering with the heat of oppression,” Martin Luther King had been talking about my family.

Early in my research into Alemeth’s life, I began to confront what, for me, was terribly unsettling.  I knew my grandparents to be among the kindest, most “Christian,” most tolerant people I had ever known.  But as I grew older, my research into their lives, and into their parents’ lives, revealed more and more evidence of racial bigotry.  In old correspondence, these prejudices popped up often – and, most alarming of all, when I looked honestly at the historical record, I saw those prejudices being passed down, from generation to generation.

In one respect, I felt I was confronting a paradox of the highest order.  My mother was kind and loving, and my sense was that her kindness was in large part because her parents had been kind.  My instincts applied the same presumption to their parents, as if “loving and kind” were a trait carried down in the genes, or at least in the serious study of Christian Scripture.  (My grandmother was a Sunday School teacher; my childhood visits to her house always included Bible study.)  Presuming that my great grandparents were as kind and loving as my grandparents, and knowing that they, too, had been devout Christians, I found it paradoxical that all this well-studied and well-practiced Christianity not only tolerated racial bigotry but, in great uncle Alemeth’s day, was used to justify a war to preserve human bondage.  Frankly, it made no sense.  I wondered: How did these people square their Christian beliefs with their ownership of so many slaves?  With their support for a war intended to preserve their “property rights” in these other people?

It was even more unsettling, then, to realize how the “squaring” had occurred.   George Armstrong’s The Christian Doctrine of Slavery (Charles Scribner, New York, 1857) made a fascinating read.  That work expounded, in argument after argument, based on scripture after scripture, how God had created the separate races, given Moses Commandments which made no mention of slavery, instructed the Israelites to make slaves of their heathen enemies (Leviticus 25:44-46), sent a Son to save us who never once condemned slavery though he lived in its midst, and inspired Saint Paul to send the slave Onesimus back to Philemon with instructions to be a good, obedient slave to his master.  Armstrong’s work was perhaps the most impactful, but by no means did it represent an isolated view.  My research uncovered source after source that made plain how the slave owners of the ante-bellum South were able to square their support of slavery with their Christianity: they did so by interpreting Christian Scripture as supporting the institution.  Indeed, in some sermons of the day, the case was made that being a good Christian required a commitment to the defense of slavery, because civilized white people had a Christian duty to care for their “savage” African slaves.  In the end, of course, they were so convinced they were right that they were willing to go to war and fight (and die) for it.  (Their cause being a righteous one, the killing of people in support of it met all the requirements for a “Just War” as traditional Christian doctrine expounded it.)

For me, it was an eye-opener to realize that southern Christians based their support of slavery squarely on Christian scripture.  It was also an eye-opener to see how the beliefs and attitudes of the community were shared, both horizontally and vertically.  By horizontally, I mean how family members, neighbors, newspapers, courts, elected representatives, school teachers and preachers all worked together to homogenize the Southern attitude toward slavery.  (It was rare to find a voice of dissent – the conclusion seems compelling that the few dissenters tended to keep their opinions to themselves, for fear of being run out of town, as those considered “unsound on the slavery question” generally were.)  By vertically, I mean how attitudes and beliefs were passed down from one generation to the next, most strongly within immediate families, but also within whole communities and cultures.  My research extended back in time to the racism of our national heroes, Washington and Jefferson, and forward in time through my grandparents, my parents, and –

Indeed.  What about myself?  Historical research proves again and again how, once accepted in a family or community, “wrong” attitudes and beliefs can be passed down so easily from one generation to the next.  Is it possible I could be exempt from such influences?  Somehow free to form my opinions entirely on reason and logic, safe from any familial or cultural biases? All my historical research has led me to conclude that we are most prone to be blind to the wrongness within that which is most familiar; if that’s true, what are the ramifications for my own attitudes and beliefs?  How much of the racism inherent in my family history manages to cling to my own way of thinking?  I hope none of it, of course, but how likely is it that some of it persists?

I will repeat a quote from The Gulag Archipelago, which I already mentioned in a prior WMBW post and which I managed to squeeze into Alemeth as well.  Alexander Solzhenitsyn expressed a wish for an easier world:

If only there were evil people somewhere insidiously committing evil deeds and it were necessary only to separate them from the rest of us and destroy them. But the line between good and evil cuts through the heart of every human being. And who’s willing to destroy a piece of his own heart?

I’ll have more to say in later posts about what psychologists call the “bias blind spot.”  For now, suffice it to say that much as I share King’s dream for a day when prejudice will be a thing of the past, I fear that as long as we have families, as long as parents teach their children, as long as such a thing as “culture” exists, we will all have our prejudices.  Many of them, I believe, will have been inherited from our parents and grandparents.  Others from school teachers, preachers, news sources, national heroes, or friends.  A rare few, perhaps, we will have created entirely on our own.  But they will be there.  And others who see this the same way I do have suggested an idea that makes a great deal of sense to me: that to begin the path toward a more just world, we’d do well to begin by trying (as best we can) to identify what our own biases are.

In Alemeth, I have tried to take a step in that direction.  Early in the evolution of the novel, I found myself asking whether it was I who was creating Alemeth, or Alemeth who had created me.  It’s a novel about my family – about the culture that began the process of making me what I am – and it’s not an entirely pretty picture.  But the dream that inspired it, and the research and thought given to the project, are also largely responsible for the existence of something else.  I don’t think I’ll be giving too much away if I give you a hint: the last four words of the novel are “we may be wrong.”

— Joe


WYSIATI

Have you ever seen a swan?  If so, how many have you seen?

For four years, my family lived on a pond that we shared with a family of swans.  I saw this one family a lot.  More recently, I’ve seen a few more swans, but given that swans live long, maintain monogamous relationships, and tend to remain in the same habitat, I suspect I’ve been seeing the same swans over and over again. I’d take a wild guess that I’ve seen a total of thirty swans in my life.  You might ask yourself the same question now: how many do you suppose you’ve seen?  (We’ll return to the matter of swans in a moment.)

I’ve been on vacation in Florida, so it’s been a couple of weeks since my last WMBW post.  During the holidays I was able to read a couple of excellent books: one of them, Thinking, Fast and Slow, by the psychologist and Nobel prize winner Daniel Kahneman, asserts that we have two systems in our brains – one designed to produce immediate beliefs, the other to size things up more carefully.  The other, Being Wrong, by journalist Kathryn Schulz, explores both the reasons we err and the emotional costs of recognizing our wrongness.  Both books have done much to clarify my intuitive beliefs about error.  If you suspect this is a case of “confirmation bias” you’re probably right – but at least my confirmation bias gives me a defense to those who say admitting doubt is tantamount to having no beliefs at all.  (I can’t have a bias in favor of a belief unless I have a belief to begin with, right?)

Well, to those who fear that I totter on the brink of nihilism, I assure you I do have beliefs.  And perhaps my strongest belief is that we human beings err – a lot. Since starting this website, I’ve begun to see people committing errors with truly alarming frequency.  The experience helps me understand witch hunts, as I now see error the way Cotton Mather saw witches and Joe McCarthy saw communists – everywhere.  The difference, I’d submit, is that Cotton Mather never suspected himself of being a witch, and Joe McCarthy never suspected himself of being a communist. In contrast, I see myself being wrong every day.  In fact, most of the errors I’ve been discovering lately have been my own.

My willingness to admit to chronic wrongness may be partly due to the fact that Schulz devotes much of her book to rehabilitating the reputation of wrongness – pointing out that, far from being the shameful thing most of us consider it to be, wrongness is endemic to who we are and how we think – specifically, to our most common method of rational thought – reasoning by induction.

Consider this diagram:

[Image: a horizontal white bar, its middle hidden behind a dark rectangle]

Reasoning by induction, says Schulz, is what causes even a four-year-old child to “correctly” answer the question of what lies behind the dark rectangle. By way of contrast, she says, a computer can’t answer such a puzzle. The reason? A computer is “smart” enough to understand that the dark rectangle may hide an infinite number of things, from a stop sign to a bunny rabbit to a naked picture of Lindsay Lohan. Without inductive reasoning, the computer would have to consider (and reject) an infinite number of possibilities before deciding on an answer. We humans, on the other hand, are much more efficient – we’re able to form nearly instantaneous conclusions, not by considering all the possibilities we don’t see, but by coming up with plausible explanations for what we do see. Even to a four-year-old child, it seems highly probable that the image behind the dark rectangle is the unseen middle of the white bar. It’s certainly plausible, so we immediately adopt it as a belief, without having to exhaust an endless list of other explanations. Inductive reasoning makes us the intelligent, quick-thinking creatures we are.

In his book, Daniel Kahneman calls this WYSIATI. His acronym stands for “What you see is all there is.” Like Schulz, he points out that this is how human beings generally think – by forming plausible beliefs on the basis of the things we see, rather than by tediously rejecting an endless list of things we don’t. And, like Schulz, he gives this sort of thinking credit for a good deal of the power of the human brain.

But there’s a downside, a cost to such efficiency, which brings us back to swans. If you’re like me, you probably believe that swans are white, no?

“Which swans?” you might ask.

“Well,” I might well reply, “all of them.”

I first formed the belief that swans are white after seeing just a handful of them. Once I’d seen a dozen, I’d become pretty sure all of them were white. And by the time I’d seen my thirteenth swan, and my fourteenth, confirmation bias had kicked in, leaving me convinced that my belief in the whiteness of swans was valid. It only took one or two more swans before I was convinced that all swans are white. Schulz says it was the philosopher Karl Popper who asked, “How can I be sure that all swans are white if I myself have seen only a tiny fraction of all the swans that have ever existed?”
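
Popper’s worry can even be put into numbers.  Here is a minimal sketch of my own (nothing from Schulz or Popper, and the swan counts are invented) using Laplace’s old “rule of succession,” which estimates the odds that the next swan will be white after an unbroken run of white ones:

    # A minimal sketch, my own illustration (not from Schulz or Popper).
    # Laplace's "rule of succession" gives the probability that the NEXT
    # swan is white after seeing n_white white swans and n_other others,
    # assuming we start out knowing nothing about the true proportion.

    def p_next_white(n_white: int, n_other: int = 0) -> float:
        return (n_white + 1) / (n_white + n_other + 2)

    for n in [1, 5, 12, 30, 1000]:
        print(f"after {n:>4} white swans: {p_next_white(n):.3f}")

    # after    1 white swans: 0.667
    # after    5 white swans: 0.857
    # after   12 white swans: 0.929
    # after   30 white swans: 0.969
    # after 1000 white swans: 0.999
    #
    # The probability creeps toward 1 but never reaches it, which is
    # Popper's point exactly: no finite sample of white swans can ever
    # prove that ALL swans are white.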

Schulz observes that as children, we likely observed someone flipping a light switch only a few times before concluding that flipping switches always turns on lights. After seeing a very small sample – say, a golden retriever, a shih tzu, and Scooby Doo — children have sufficient information to understand the concept of “dog.” We form our beliefs based on very small samples.

Kahneman describes how and why it’s so common for groups to underestimate how long a project will take: the group imagines all the steps it anticipates, adding up the time each step will take; it factors in a few delays it reasonably foresees, and the time such delays will likely take; and it even builds in an extra cushion, to give itself some wiggle room. But almost invariably, it underestimates the time its project ends up taking, because in fact, the number of things that could cause delays is virtually infinite and, well, you can’t know what you don’t know. In a sense, to use Kahneman’s phrase, you can’t help but feel that “what you see is all there is.”
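
To see why the underestimate is almost built in, here is a toy simulation of my own devising (the step lengths and snag probabilities are invented, not Kahneman’s).  The plan covers the steps the group foresees, plus a cushion; reality also draws from a long list of rare snags nobody imagined:

    # A toy simulation, my own invention; the numbers are made up.
    import random

    random.seed(1)
    PLANNED_STEPS = [5, 10, 8]           # days anticipated per step
    CUSHION = 5                          # built-in wiggle room
    plan = sum(PLANNED_STEPS) + CUSHION  # 28 days

    def actual_duration() -> int:
        days = sum(PLANNED_STEPS)
        # Fifty possible snags, each unlikely on its own, but with so
        # many of them a few almost always strike.
        for _ in range(50):
            if random.random() < 0.10:
                days += random.randint(1, 5)  # each snag costs 1-5 days
        return days

    runs = [actual_duration() for _ in range(10_000)]
    print(f"planned: {plan} days, "
          f"average actual: {sum(runs) / len(runs):.1f} days")

    # Typically prints an average near 38 days: the cushion absorbs a
    # couple of snags, but "what you see is all there is" hides the rest.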

Now here’s what I think is a critical point. The way inductive reasoning takes such very small samples and draws global conclusions about them makes sense when worlds are very small. If the child’s world is her own house, it’s probably true that all the wall switches turn on lights – it’s only when she enters factories and astronomical observatories years later that wall switches turn on giant engines and rotate telescopes. Here in Virginia, all swans probably are white; I’ll only see black swans if I go to Australia or South America, which I may never do. There wasn’t much problem thinking of the world as flat until there were ocean voyages and jetliners. Both as individuals, and as a species, we grow up in very small, homogeneous worlds in which our inductive reasoning serves us well.

But the real world is more varied and complex. It’s when we expand our frames of reference – when we encounter peoples, cultures and worlds different from those of our youth – that what we “know to be true” is likeliest to be challenged.  And by that time, we’ve become convinced that we’re right. All experience has proven it. Everyone we know knows the truth of what we know. After all, our very conceptions of self, and of our worth, and our very comfort, depend on our being right about the Truth.

More, later, about the emotions involved when one of these strangers challenges that which we know to be true.

— Joe
