Multiplicity

What do the Kavanaugh hearings, Halloween and Homer’s Odyssey all have in common?

Here’s my take on it.

  1. The Kavanaugh Confirmation Hearings

Someone recently said to me, “Joe, you were a lawyer once.  You understand evidence.  You can see that all the evidence supports my position on this.”  The person who said that to me could have been talking about the Kavanaugh hearings.  Like so much media coverage of the hearings, this fellow thought of a trial as a process in which all the evidence points in one direction or the other.  My answer to him was that if I’d learned anything in thirty years of bar membership, it was that my mother was right: there are always at least two sides to a story, and the truth is generally somewhere in between.  If juries heard only one side’s witnesses and arguments, every verdict would be unanimous.  Is it any wonder that if you tell me what news source you follow, I can pretty well predict how you feel about the world?

In years of practicing law, I saw over and over again how witness testimony polarized over time.  From the plaintiff’s perspective, the size of the wrong and the depth of the injury always grew, while from the defendant’s perspective, the strength of the alibi and the sense of indignation always did likewise.  Add the way politicians and the media frame a case as pitting good against evil, and you have everyone asking which of the witnesses is lying.  In this view, it has to be one or the other.  When I said, about the Kavanaugh hearings, that I thought both witnesses were telling the truth as they saw it, people looked at me like I was some sort of crazed lunatic from outer space.  The hearings, and especially the media coverage of them, left me shaking my head about what made them so typical of polarized American politics today: namely, a complete inability to empathize with the other side.

  2. Halloween

Yesterday, I came across a piece published last year in USA Today titled “5 Halloween Myths and Urban Legends, Debunked.”  Myth Number 3 was titled, “Satan is the Reason for the Season.”  While acknowledging that Halloween can be traced back to ancient Celtic harvest festivals, the article argued that the modern event has nothing to do with Satan, and never could have, as Satan is a Judaeo-Christian character who would have made no sense to the ancient Celtic polytheists who started those harvest festivals.  The article also pointed out that All Hallows’ Eve is the first of three days Christianity devotes to remembering the souls of the Christian faithful.  The religious origins of the modern holiday have to do with honoring the good dead, not the immortal Satan, the embodiment of evil.

But when it comes to Halloween, like the Kavanaugh hearings, people are polarized.  To many, Halloween will always be about pure evil.  For many on both sides, there’s a complete inability to empathize with the other.

  3. The Odyssey

My first exposure to the Odyssey was probably Kirk Douglas’s portrayal of the classical hero in 1954’s Hollywood version, Ulysses.  While I don’t remember much of that movie, I feel sure that Kirk Douglas’s character must have been very heroic, in the modern sense of that word – which is to say, a particularly good and capable guy fighting the good fight against evil.  My sense of the story has always been that the Cyclops, Poseidon and the suitors were monstrously bad while Odysseus wasn’t far shy of sainthood.  I want to take this opportunity to rave about the new translation I just finished reading, by Emily Wilson.  It manages to be an amazingly easy and accessible read while maintaining the strict metrical qualities of the original.  For the first time, I didn’t have to “study” the epic; I could just read it, at the same pace I might read John Grisham or Dan Brown.  As a result, I acquired a sense of the whole as I never have before.  I strongly recommend her translation, whether you’ve read the epic before or not.

Wilson’s excellent and engaging translation gave me several new perspectives on the story.  One is that the very name Odysseus can be translated as “hated” or at least “disliked.”  He’s easy to hate because he’s not just duplicitous, he’s multiplicitous.  There’s something for everyone to hate.  In Wilson’s words, he is “a migrant…, a political and military leader, a strategist, a poet, a loving husband and father, an adulterer, a homeless person, an athlete, a disabled cripple, a soldier with a traumatic past, a pirate, thief and liar, a fugitive, a colonial invader, a home owner, a sailor, a construction worker, a mass murderer, and a war hero.”  Wilson gives much attention to how a person can be so complex and multi-faceted, at once so hated and so loved.  Her Odysseus is anything but the one-dimensional champion of goodness that I grew up admiring.  Perhaps we see ourselves in him.  Perhaps that’s what allows us to empathize.

It has become common to dismiss the pagan gods as amoral and often wicked libertines that no thinking person could believe were real.  Modern criticism of the Greek gods generally amounts to the argument that they are no better than us human beings.  Wilson points out that they’re essentially powerful human beings who happen to live forever; morally and ethically, they’re no better than we are.  This strikes me as a natural criticism of deity if you’re comparing it to a God conceived of as morally perfect and all-knowing.  But have there been unintended consequences to conceiving of God as the embodiment of perfect goodness and omniscience?  What have been the consequences of living with the aim of achieving such righteousness ourselves?  What have I done by measuring my self-worth against a single, homogeneous and absolute goodness who has revealed Himself to me?  Has it worked to make me self-righteous?

One reason I’ve always been attracted to Greek myth is that the gods DO behave like human beings.  I’ve long felt that such portrayals allow us to see the consequences of our foibles in archetypal ways that can help us avoid mistakes as effectively as a lot of sermons I’ve heard.  At its core, the modern worldview suggests that the difference between good and evil is apparent, and that life is simple: if we choose correctly, we’ll live forever in the home of the gods.  In the old pagan worldview, life is a constant struggle to sort out the difference between good and bad; even in the home of the gods, it can be hard to distinguish right from wrong; sometimes, what seems good to one person (or god) seems bad to another.  In this worldview, there isn’t any Grand Commission of Justice to tell us which is which.

There’s little doubt in my mind that most of us would choose to live in a world where good and evil are clearly defined and labelled.  But is the real world more nuanced and dependent on point of view than that?  Wilson points out that Odysseus is offered a perfect and immortal life by Calypso, but turns it down, choosing instead his mortal home in his mortal world.  Is that why we can love him and hate him at the same time?  There are good reasons the Bible has stood the test of time.  I think there are good reasons the Odyssey has too.

So: What similarities do I see between the Kavanaugh hearings, Halloween, and the Odyssey?  For me, all three tell us something about the extent to which Platonic thinking about absolutes has changed the world.  In the pre-Platonic, polytheistic world of Odysseus, people could celebrate diverse and multiple perspectives; in the modern world, there must be a single and absolute truth, distinguishable by its righteousness.  In the Christian Era, we’re used to hearing the gods of Greek myth dismissed as either “immoral” or “amoral.”  But in the Odyssey, Zeus is the god of justice and of hospitality toward strangers.  One of the epic’s most constant themes is that the gods will not approve of mistreating strangers.  It’s not that the Homeric gods don’t care about what’s good and right, but that (just like people) they don’t share a singular and unchanging view of what “goodness” consists of.

Of the many epithets applied to Odysseus (apart from being godlike), most begin with the prefix “poly-,” meaning “many” or “multiple.”  Odysseus is “poly-tropos” (multiply turning), poly-phron (multiply-minded), poly-mechanos (employing multiple devices), poly-tlas (multiply enduring), poly-penthes (multiply-pained), poly-stonos (multiply-sorrowed) and poly-aretos (multiply prayed for).  In a sense, this multiplicity makes him all things to all people.  It’s a big part of why he’s hated.  He is also incredibly adaptable, assuming different guises and characteristics in different situations.  His understanding of right and wrong is neither absent nor irrelevant – it is simply changing.

All our modern religious and political instincts tell us to condemn such inconstancy.  We’re trained to think in terms of Platonic absolutes, of clear and perfect Goodness on one side and clear and perfect Evil on the other.  We’re told we can identify the Truth and that we’re bound to adhere to it.  If Professor Ford was telling the truth as she saw it, then Judge Kavanaugh had to be lying, as he saw it.  If Halloween is not a glorification of the Judaeo-Christian God, it must be the work of Satan.  If Odysseus is inconsistent from one day to the next, he must represent an inferior state of being because perfect people have to be constant, unchanging and right.

But is there a difference between being constant, unchanging and right, and being rigid, intolerant, and set in our ways?

I’m not advocating for a rudderless, amoral view of the world.  Goodness is certainly worth striving for.  But how can I know for certain I’ve found it, when others disagree with me about what’s good?  Once again, I’m reminded of Alexander Solzhenitsyn’s words:

“If only there were evil people somewhere insidiously committing evil deeds and it were necessary only to separate them from the rest of us and destroy them.  But the line between good and evil cuts through the heart of every human being.  And who’s willing to destroy a piece of his own heart?”

I recently read Jonathan Haidt’s The Righteous Mind: Why Good People are Divided by Politics and Religion.  The book is worth a read for many reasons, but the concept I found most thought-provoking was Haidt’s view on the evolutionary origins of human reason.  The traditional view is that reason and logical analysis evolved in human beings as tools for reaching the best conclusions.  In reality, Haidt suggests, human beings wouldn’t have survived unless they could form immediate judgments about things without reasoned analysis.  (You can’t conduct a reasoned analysis of whether to run from a saber-toothed tiger or not.)  But we are also social animals whose early survival depended on the ability to work together in teams.  And to act as a team, we needed coordinated approaches.  Haidt says our social survival depended on leaders able to persuade others to follow their judgments.  According to Haidt, reason and logical analysis arose about the same time as language did, and they evolved for much the same social purposes: that is, not as tools of decision-making to help an individual determine what’s right, but as tools of persuasion to help convince others to go along with our judgments.  (In the process, we convince ourselves that our judgments are right, too, but that’s a result, not a cause.)

In this view, all of human reasoning has its origins in persuading others, in post-hoc justification to support judgments already formed.  If Solzhenitsyn and Haidt are right, then all the arguments between Professor Ford and Justice Kavanaugh, Democrats and Republicans, Christians and atheists, NPR and Fox News, Halloween enthusiasts and its enemies,  and indeed, between you and me, have to do with persuasion, not with what either one of us has always revered as “reason.”

In this sense, maybe Ford’s and Kavanaugh’s truths are similar.  Last year, I blogged about liking Halloween because it invited us to try out the worldview of a character we normally think of as strange, monstrous, or even evil.  Maybe it isn’t bad that we put ourselves in the shoes of terrible others on Halloween.  Maybe it’s okay to change our understanding of right and wrong at times, to try out new perspectives, just like Homer’s Odysseus did.  Maybe multiplicity helps us empathize.

After listing the many (contradictory) traits her Odysseus exhibits, Emily Wilson writes, “immersing ourselves in his story, and considering how these categories can exist in the same imaginative space, may help us reconsider the origins of Western literature, and our infinitely complex contemporary world.”

Maybe she’s on to something there?

– Joe

The Biggest Delusion of All

When I was fourteen, my parents sent me to Texas to work on my grandfather’s ranch.  My first job would be to paint the fence around the field between his house and the road.  When I asked my grandfather what else I’d be doing, he replied, “Let’s see how long it takes to paint the fence.”  Two months later, having painted for eight hours a day, I returned home, the job of painting the fence still not finished.

The word “comprehend” comes from roots meaning to grasp something, to take it in all at once.  Since flat land let me see that fence all at once, it seemed comprehensible.  In fact, if I held my two thumbs in front of my face, I could make the fence seem to fit between them.  So thinking it might take a few days to paint the fence seemed reasonable.  The problem was, my brain does tricks with perspective.  In reality, that field was probably close to ten acres, the fence probably ten football fields long.  “Comprehension” of very large things requires large-scale trickery.  It depends on deception.
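
(If you like checking such guesses, here’s a quick Python sketch of the arithmetic.  The square shape and the ten acres are my assumptions from memory, not measurements.)

```python
import math

ACRE_SQFT = 43_560                 # square feet in an acre
field_sqft = 10 * ACRE_SQFT        # assume a roughly square ten-acre field
side = math.sqrt(field_sqft)       # 660 feet per side
perimeter = 4 * side               # 2,640 feet of fence

print(f"fence length: {perimeter:,.0f} feet")
print(f"that's about {perimeter / 300:.1f} football fields")
# fence length: 2,640 feet
# that's about 8.8 football fields
```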

I should have known better than to trust my brain about the fence.  My teacher Paul Czaja had told our class the story of the Emperor’s chessboard and the grains of rice: if a single grain of rice were placed on the first square, two grains on the second square, four grains on the third, and so on, doubling with each square until the sixty-fourth, then (the Emperor said) the rice would be enough to stretch to the moon and back seven times.

The story of the Emperor and his grains of rice “wowed” me with the power of exponential growth.  I knew the moon was far away, and a grain of rice very small.  Stretching back and forth seven times had to make it a very large number indeed.

Of course I wanted to know what the total number of rice grains was, in numbers I could understand.  Rather than simply give his class the answer, Paul asked us to compute the number ourselves.  Our homework was to multiply by two sixty-three times.

If Paul had told us that the total grains on the chessboard came to 18,446,744,073,709,551,615, I’d have realized that the number was larger than any I’d ever seen – but my brain would have attempted to make sense of the number’s “bigness” in the same way it had made sense of the fence in Texas – namely, by making it appear far smaller than it really was.   The only way to see a huge field was to make it seem small enough to fit between two thumbs.  The only way to make such a large number seem comprehensible is to reduce it to funny little shapes on a page that we call numerals.
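
For the curious, here’s the homework done the modern way – a minimal Python sketch, my addition rather than Paul’s:

```python
# Double the grains sixty-three times, keeping a running total for the board.
grains, total = 1, 1
for square in range(2, 65):
    grains *= 2               # each square holds twice the grains of the last
    total += grains

print(f"grains on square 64: {grains:,}")   # 9,223,372,036,854,775,808
print(f"total on the board:  {total:,}")    # 18,446,744,073,709,551,615
print(f"digits in the total: {len(str(total))}")  # 20
```

(Though as the rest of this story suggests, letting the machine do it spares you exactly the experience that matters.)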

Upon seeing that large number on the page, my brain immediately perceives it’s large, because it is physically longer than most numbers I see.   But how large? If it takes up two inches on the page, my brain suggests it’s twice as long as a one-inch long number, the same way a two-inch worm is twice as large as a one-inch worm.    But my schooling tells me that’s wrong, so I count the digits – all twenty of them.   Since twenty apples is twice as many as ten apples, and since my brain has spent years making such comparisons, my intuition suggests that a twenty digit number is twice as large as a ten digit number.  But I “know” (in the sense of having been told otherwise) that’s not right.

But here’s my real question: if my intuition is wrong, is it even possible for me to “know” how wrong?  Does “calculation” amount to “comprehension”?

Psychologists tell me my brain is useful in this inquiry only because it tricks me into seeing the distorted, shrunken “mini-icon” versions of things – numerals and digits, rather than the actual quantities themselves.  If we’re asked to describe what it feels like to be alive for a million years, we can’t.  We’ll never be able to.  And for the same reasons, it seems to me, we can’t “comprehend” the reality of 2⁶⁴ – only the icons, the mental constructs that only work because they’re fakes.

Consider the Emperor’s assertion that the rice would be enough to go to the moon and back seven times.  That mental image impressed me.  It made the size of 2⁶⁴ seem far more real than it would have if I’d merely seen the twenty digits on a page.  But do I really comprehend the distance between the moon and the earth?

The moon’s craters make it look like a human face.  I can recognize faces nearly a hundred yards away.  So… is my brain telling me that the moon is a hundred yards away?

I’ve seen the models of lunar orbit in science museums – the moon the size of a baseball two or three feet away from an earth the size of a basketball.  My brain is accustomed to dealing with baseballs and basketballs.    I can wrap my brain around (“comprehend”) two or three feet.  So when I try to imagine rice extending between earth and moon seven times, I relate the grains of rice to such “scientific models.”

But is that, too, an exercise designed to trick me into thinking I “understand”?  Is it like making a fenced field seem like it could fit between my two thumbs?

I have little doubt that analogies seem to help.  Consider that a million seconds (six zeros) is about 12 days, while a billion seconds (nine zeros) is about 31 years, and a trillion seconds (twelve zeros) is about 31,688 years.  Wow.  That helps me feel like I understand.  Or consider that a million hours ago, Alexander Graham Bell was founding AT&T, while a billion hours ago, man hadn’t yet walked on earth.  A billion has twice as many digits as ten thousand, but it isn’t twice as big as ten thousand – it’s a hundred thousand times bigger.
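
The arithmetic behind those analogies is easy to check – here’s a little Python sketch, using a 365.25-day year (the exact figures shift slightly with your assumptions):

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25   # average year, leap days included

for name, seconds in [("a million", 10**6), ("a billion", 10**9), ("a trillion", 10**12)]:
    days = seconds / SECONDS_PER_DAY
    years = seconds / SECONDS_PER_YEAR
    print(f"{name} seconds is about {days:,.0f} days, or {years:,.1f} years")

# a million seconds is about 12 days, or 0.0 years
# a billion seconds is about 11,574 days, or 31.7 years
# a trillion seconds is about 11,574,074 days, or 31,688.1 years
```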

Such mental exercises add to our feeling that we understand these big numbers.  They certainly “wow” me.  But is this just more deception?

Distrusting my brain’s suggestions, I decide to do some calculations of my own. A Google search and a little math tell me that seven trips to the moon and back would be about 210 billion inches.  Suppose the grains of rice are each a quarter inch long.   The seven round trips would therefore require 840 billion grains of rice.  The math is simple.  If there’s anything my brain can handle, it’s simple math.  Digits make it easy to do calculations.  But does my ability to do calculations mean I achieve comprehension?

Thinking that it might, I multiply a few more numbers.  My calculations show me that the Emperor’s explanation of his big number was not only wrong, but very wrong.  The number of grains of rice on the chessboard would actually go to the moon and back not seven times, and not even seven thousand times, but more than 150 million times!
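
Distrust my arithmetic, too, if you like.  Here’s the whole check in a few lines of Python, using the same round assumptions as above (an average earth-moon distance of about 238,855 miles, and a quarter-inch grain):

```python
GRAINS = 2**64 - 1            # total rice on the chessboard
GRAIN_LENGTH_IN = 0.25        # assumed length of one grain, in inches
MOON_MILES = 238_855          # average earth-moon distance
INCHES_PER_MILE = 63_360

rice_line = GRAINS * GRAIN_LENGTH_IN              # the rice, laid end to end
round_trip = 2 * MOON_MILES * INCHES_PER_MILE     # one trip to the moon and back

print(f"round trips to the moon: {rice_line / round_trip:,.0f}")
# about 152 million round trips -- not seven, and not even seven thousand
```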

Such a margin of error is immense. I don’t think I could mistake a meal consisting of 7 peas for a meal consisting of 150 million peas. I don’t think I could mistake a musical performance lasting seven minutes for a musical performance lasting 150 million minutes. What conclusion should I draw from the fact that I could, and did, fail to realize the difference between seven trips to the moon and back, and 150 million trips?

The conclusion I draw strikes me as profound.  It is that I have no real  “comprehension”  of the size of such numbers.    I’ve retold Paul’s chessboard story for fifty years without ever once supposing that the Emperor’s “seven times to the moon and back” might be wrong.  Could there be any better evidence that I have no real sense of the numbers, no real sense of the distances involved?  The fact that I can’t really appreciate the difference between  two such large numbers tells me that I’ve exceeded the limits of any real understanding.   My brain can comprehend baseballs and basketballs.  Using simple tools like a decimal number system, I can do calculations.    But when I try to comprehend the difference between 18,446,744,073,709,551,615 and 18,446,744,073,709,551,615,000,000, am I really able to understand what it means to say that the second number is a million times larger than the first?

I got my first glimpse of the difference between “calculating” and “comprehending” about five minutes into Paul’s homework assignment.  That quickly, my brain was already playing tricks on me.  Pencil in hand, I had too many numbers going around in my head.  I was (literally) dizzy from arithmetic overload.  I made errors; I began to slow down, to be more careful.  The numbers were already absurdly large.  Five minutes after starting with a sharp pencil, my numbers were eight digits long; I was multiplying tens of millions, but no matter how slowly I went, the frequency of errors increased.  After ten minutes, I had to sharpen the pencil because I couldn’t read my own writing.  After twenty minutes, my fingers hurt.  Soon, I could feel calluses forming.

Since the first eight squares of the chessboard had taken me about fifteen seconds to calculate, I’d unconsciously supposed that doing all eight rows might take me eight times as long.  The absurdity of that notion quickly became apparent.  Looking up at a clock after half an hour, still not quite halfway across the chessboard, my inclination, even then, was to think that the second half of the chessboard would take about as long as the first.  If so, I’d be done in another half an hour.  So I determined to finish.  A couple of hours past my normal bedtime, I was still assuring my mother I’d be finished soon – notwithstanding finger pain that had long been crying for me to stop.  By the time I finished – about two a.m., the calculations having taken me over five hours to complete despite the fact that I’d long since given up trying to correct errors – my fingers were so painfully cramped it seemed they’d never recover.

In this way, I started to “feel,” to “experience,” the hugeness of the number 18,446,744,073,709,551,615.

Reading this account may give some readers a deeper sense of the enormity of 2 to the sixty-fourth power.  But I’m willing to bet that if you’ve never done it before, and you now actually try to CALCULATE it yourself, by hand, you’ll appreciate the hugeness in ways that symbols on a page – the stuff our brains use – can never convey.

As I look at the digits on the page, my brain is trying to spare me that visit to the land of reality.  It strives to shield me from calloused fingers and mental exhaustion with its easy comparisons, its suggestions to count digits, even its way of hearing words and using them to imagine a story about going to the moon and back seven times.  In the same way, perspective had tricked me into thinking I understood the length of a fence, as if to spare me a sore back, sore feet, sunburn and thirst for two months.  But the experience of painting the fence taught me more about its length than framing it with my thumbs or even counting the posts between the rails.  I don’t know how long it would have taken to finish painting that fence.  I could do the calculations, but only finishing the job would have really made me “comprehend” the time involved, and who knows – I might have died of heat exhaustion before I ever finished.

I should know that the farther away something is, the bigger it must be, if I can still see it.  But through the deceit called perspective, my brain tells me precisely the opposite: the farther away something is, the smaller it appears.  Stars more massive than the sun are reduced to mere pinpricks in the sky.  When I remove a contact lens, my thumb looks like the biggest thing in the world.  What better evidence can there be that our brains are built to deceive us?

My brain (wisely) keeps me focused on things I need, like apples, and on things that can kill me, like woolly mammoths, men with rifles, fast-moving cars, or thumbs in my eye.  But to do this, my brain necessarily distorts the things that are far away, and the things that are many, and the things that are very much larger than me, because they are things I can do nothing about.  In fact, I suspect that if my brain were asked to identify the biggest, most important thing in the world, it would say it was me.

And that might just be the biggest delusion of all.

–Joe

There’s Nothing Like a Really Good Shower

Like many of you, I do some of my best thinking in the shower.  Have you ever wondered why?

Last night, my attention turned to the simplicity around me.  I was standing in a tub, with three walls and a shower curtain bounding my world.  Before me, four items of chrome: the shower head, a control plate, a faucet, and a drain.  At my side, a soap dish, a bar of soap, and a bottle of shampoo.  Once I’d turned the water off, there was nothing more.

When I opened the curtain, there was plenty more to see: a vanity, a mirror, a toilet, a pair of towel racks hung with towels, a bathrobe hanging from a hook on the door.  But as I inventoried this expanded-but-still-small world, I realized there was no end to the counting: three pictures on the walls, light fixtures, a light switch, a door knob, brass hinges, a toilet tissue dispenser, baseboards, two floor mats, a patterned linoleum floor, and no fewer than twenty-six items on the vanity, from deodorant and toothpaste to a tub of raw African shea butter.  Two of the twenty-six items were ceramic jars, filled with scissors, tweezers, nail clippers, cotton swabs, and other modern necessities.  Most of the items were labeled – little sheets of paper glued on, each bearing product names, ingredients, and warnings in tiny fonts and a wide array of colors.

Early in fifth grade, Paul Czaja had our class use a sheet of paper, telescoped into a tube, to survey our surroundings.  The idea seemed too simple – easy to dismiss because we already “knew” the result.  But actually trying it proved us wrong.  Paul insisted that, one eye shut, we keep the other at one end of the scope for five minutes; he wouldn’t let us stop or look away.  Forced to view our classroom from these new perspectives, we were amazed at how different it became.  Desks, windows, blackboards and classmates disappeared, replaced by a tiny spider web  that trapped an even tinier bug in a corner; the pattern in the grain of a piece of wood; a piece of lint trembling in an unseen movement of air like a piece of desert tumbleweed.

As I toweled dry after my shower, the world of things too small to notice most of the time came into sharper focus.  My attention turned to things I go through life ignoring.  From the confines of my bathroom, I took stock of the unseen.

The room, I supposed, and no doubt my own body, were covered with bacteria.  (I might have found that thought abhorrent once, but today, nourished by probiotics and kombucha tea, I find it comforting.)  In the empty space between me and the mirror, I imagined all the even smaller things I couldn’t see, the atoms of nitrogen and oxygen, the muons and the quarks, the billions of things that swirl around me, unseen, though I breathe them in and out, and though they sustain me.

I thought of things in the bedroom and the hall, of things out in the yard that the walls hid from my X-ray-less eyes, and of things in the vast night sky so far away I couldn’t see them at all.

But it wasn’t a matter of distance, size and walls alone that limited my sight.  I thought of all the colors I’d never be able to see, because the cones in my eyes don’t react to all the wavelengths of light that exist.  And moving past the limitations of sight, I thought how oblivious I am to odors: every one of those ingredient labels lists chemicals and molecules easily distinguishable by dogs, probably even through their containers, yet all those stray molecules float into my nose unnoticed.

I hear but little of what there is to hear.  Some sounds are simply too quiet.  Others are loud enough to make dogs and teenagers come to attention, but too high-pitched for my adult ears to discern.  Others are at frequencies too low.  And even dogs and teenagers hear but a tiny fraction of the oscillations that twitter, snap and buzz in the world around us.

Taste?  Surely, the world has more complexity to taste than five types of gustatory cells whose highest achievement lies in their ability – acting as a team – to distinguish between sweet, sour, bitter, salty and savory.

And what about the things we call “forces”?  How often are we conscious of gravity?  If I focus on it, I can imagine that I feel the gravitational pull of the earth, but have I ever felt the pull of the moon?  And have I ever once thought about the gravitational pull of the vanity, the toilet, and the doorknob?  How often do I focus on the domino effect of the electrons hopping and pushing, connecting the light switch to the fixture?  And do I ever think of the magnetic fields surrounding them?  Unless I’m shocked by the sudden release of static electricity, I go through life completely oblivious to its existence.

Perhaps most of all, I’m unconscious of myself – the flow of hormones that affect my mood; the constant traffic in my nervous system that never reaches my brain, much less my conscious thought; the processes of liver, kidney and thalamus that keep me going.

In short, the world I experience, through my senses, is but a tiny fraction of the real world in which I live.

Yet that’s not all.  I haven’t even begun to count the ways my brain deceives me.  In fact, it wouldn’t be doing its job if it didn’t distort reality. The tricks my brain plays on me take the already small portion of reality I’m able to sense, and make it appear to be something other than it is.

My eyes see two different views of the world, but – as if it’s afraid I couldn’t handle multiple points of view – my brain tricks me into thinking I see only one.

When I turned off the shower, my world was completely silent – or so it seemed.  I’d heard the cessation of the water coming down.  It being late at night, there were no voices from downstairs, no television blaring, so my brain told me – convincingly – that the bathroom was silent.  Only when I closed my ear canals, pressing flaps of flesh to cover them, did I notice the hum of ambient background noise – by its sudden absence.  That noise had been so normal, so much a part of the ordinary, that my brain had convinced me it wasn’t there.  (My highly evolved brain still wants to know: what’s the use of listening to background noise?)

Early on, my brain tricked me into thinking that some things are up and others are down, and that up and down are the same everywhere.  (I spent a lot of time as a child worrying about the people in China.)  And it was so intent on perpetuating this deception that when my retinas saw everything in the world “upside down,” my brain flipped the world around to be sure I saw everything “right side up.”

One of the brain’s most convincing tricks is what it does with my sense of touch.  It has convinced me I’m doomed to a life in touch with the ground; I’ve often regretted my inability to fly.  But in fact, I’m told, the stuff of which I’m made has never come in contact with the ground, or with any other stuff at all – if it had, I’d have exploded long ago.  The sense of touch might better be called the pressure of proximity.  All that time I dreamed of flying, I was floating all the while!

How about my sense of who I am?  I wonder if that’s not the biggest trick of all.  When my body changes with every bite of food, every slough of skin, every breath of air I take – when scarcely a cell or atom present today was there on the day I was born – is my very sense of self an illusion, created by my brain “for my own good”?

And so I surveyed the bathroom.  Having first considered the things too small to notice, or too quiet, or too far away, or at not quite the right frequency, and having then considered the ways my brain tricks me, I next encountered a whole new category of deception.  As my eyes fell on various objects, I noticed something else my brain was doing.  For example: on the toilet tank was a vase full of flowers – not real ones, but pieces of plastic, molded and colored to look like the real thing.  Another example: one of the pictures on the wall was of a swan and her cygnets – not a real swan, but a mixture of acrylics applied to a canvas, a two-dimensional image designed to give the illusion of life in a three-dimensional pond.  The painting was designed to make me think of something not there, and it did.  I didn’t think “acrylics,” I thought “swan.”  And as soon as it had done that, my brain had me thinking of the artist – my wife, Karen – and of her skill with a brush, and of her sense of color, and of some of the many ways in which she’s blessed me through the years.  And I realized that all these mental associations, these illusions, these memories, form an extremely important part of the reality in which I live, despite the fact that they don’t reside in the space between me and the mirror (at least not literally).  The flowers in the vase are just molecules of colored plastic, but my brain gives them a fictional existence – a story of smells and bees and fresh air and blue sky, and all the associations that “flowers” evoke in my brain.  The swan and her cygnets remind me not only of a wife who paints, but of our children, and of times we walked together, along water banks, watching swans and cygnets swim by.  My mind, I realize, is a factory, churning out a never-ending assembly line of associations, all of which are things that “aren’t really there.”

And so, I conclude, I’ve spent a lifetime in a shower of a different sort – bombarded by atoms, muons, quarks and dark matter, things so small I call them emptiness, all the while pulling associations, memories, and narratives into my world that aren’t really there.

When I say they aren’t really there, I don’t mean to deny that Karen, and swans, and flowers, are real – but that memory itself is reconstructive.   My memories are hardly exact replicas of things I’ve experienced;  they’re present creations, constructed on the spot in a crude effort to resemble prior experience.  The result is affected by my mood, and by error, and by all sorts of intervening experiences.

And so, I live in a world that isn’t the real world, but one extremely limited by my meager human senses; one corrupted by a brain that’s determined to distort things, for my own good; one filled with the products of my own defective memory and my own boundless tendency to imagine things that aren’t there.  Somehow, I’m able to deal with the shower of inputs so created, over-simplified, distorted and augmented as it may be – in fact, I’m pretty well convinced that I can deal with it a lot more easily than I could deal with the vast complexity of the “real thing.”

I woke up this morning hoping that I never lose sight of the difference between the two.

— Joe

The Last Word

With much sadness, I have just now changed this website’s description of one of We May Be Wrong’s founding members – from the present tense, to the past.

In 1960, a 24-year-old Dr. Paul Clement Czaja (January 9, 1936 – May 8, 2018) had just earned his Ph.D. in philosophy when he persuaded Nancy Rambusch (then head of the Whitby School and founder of the American Montessori Society) to let him teach existential philosophy to children.  She was impressed with his enthusiasm and his willingness to work for practically nothing, but since she thought parents might not understand the importance of teaching philosophy to children, she asked if he wouldn’t mind teaching other things as well.  So Paul “officially” taught creative writing, Latin and various other subjects not often taught to ten-year-olds.  But philosophy was his first love, and it found its way into everything.

Only fourteen years older than me, Paul was more an older brother than a teacher.  He showed me how to love the world around me; introduced me to the joy of learning everything I could about it.  The way a magnifying glass could make fire; the way Latin could turn language on its head yet still come out as modern English; the thrill of catching butterflies in nets; the way the Greek alphabet could be painted with Japanese brushes and jet-black ink; the vital inner parts of dissected foetal pigs; the wonders of the Trachtenberg system of mathematical calculation; the wiggling of microscopic paramecia in pond water; the thrill of catching people and their stories with a 35 millimeter still camera, that of making our own stories with a 16 millimeter movie camera, and then, the even weirder thrill of telling stories with frame-by-frame, stop-motion photography; the writings of Gertrude Stein, William Carlos Williams, and James Baldwin; the power of telling stories of our own with just pen and ink.  We spliced and edited rolls of movie film we’d made and, somehow, we even enjoyed diagramming sentences, rummaging through grammar the way we searched for the Indo-European roots of words.  Though I was not yet a teenager, Paul introduced me to Ingmar Bergman movies, to Van Gogh’s Starry Night, to Rodin’s The Thinker, and to Edward Steichen’s photographic exhibition, The Family of Man.

To say the least, it was not your typical middle-school education.

They say that when a butterfly flaps its wings, it can have profound effects on the other side of the world – a concept I first heard from Paul, I’m sure.  If I hadn’t met him, he wouldn’t have written the recommendation that got me into Phillips Exeter, and I wouldn’t have… well, if a single butterfly flapping its wings can have a profound impact, having Paul as a teacher every day (winter and summer) for four impressionable years was like being borne to Mexico by millions of Monarchs.  We stayed in touch during my later school years, and then persisted in friendship as the difference in our ages seemed to vanish with the passage of time.  And so, I was pleased that he joined We May Be Wrong in 2016 as one of our founding members.

But now, it’s time for a confession.  As we tried to get our new website off the ground, Paul proposed that WMBW publish a poem he had written.  Being a man of great faith, Paul wrote a lot about God – prayers, poems, meditations.  When he proposed that WMBW publish his poem, I disagreed on the ground that I didn’t want the brand new website to come across as “pro” or “anti” anything controversial.  I didn’t want to risk alienating potential followers, be they liberal or conservative, Republican or Democrat, believers or non-believers, by implying some sort of hidden agenda.  (The ONLY agenda was the benefit of listening to others with an open mind.)  Holding the keys to the publishing platform, I declined to publish his poem lest it be misunderstood to evangelize about God, rather than fallibility.  But even then, I told him that once the website had been up for a while, we might be able to publish that sort of thing.

Well, the time has come.  I wish I’d published it before he left the earth he loved for the better one he yearned for.  For Paul,  I can only say a prayer of thanks for all he did for me, and for so many other children, and now, share his wonderful poem.  (It seems only right that he should have the last word.)

Fire in the Soup: A Creation Story

It happened

when

this earth

had just cooled down

from

being molten magma

to being

simmering

and steaming

rock,

and

when

the vaporous skies

had emptied

eons of towering cumulus

clouds of rain

making oceans

which

were so great

that

the whole sphere

became

much more a watery world,

and

the rocky land

was

but one large

continental island

there

in the middle

of a now

beautiful blue planet.

 

And then

when the heavens

were no longer

veiled

by that thick

envelope

of sulphurous cloud cover,

and

the earth’s atmosphere

became

pure and clear

and

allowed

the stars of the universe

to shine

so brightly

that

the night sky

seemed to be

white

with black peppery dots,

it

happened

that a flame

came streaking

through the sky

down

to earth

sizzling

the warm soup

of the sea

somewhere

changing

and

charging

that chemical mineral ooze

into

the very first

protozoa

that ever was

on this

so singular planet.

 

Later

when that protozoa

eventually became

thinking,

questioning,

wondering

man,

the idea

arose

that perhaps

that life causing

flame

which

once upon a time

sizzled

the oceanic soup

could be

the pure energy

that is

love,

and

if

that were

so,

then

all life

that

ever evolved

from

that first protozoa

would be

somehow

spiritual

and

of the eternal God —

for

philosophers

and

theologians

say

that

God is love.

Such a thought

seems to be

a happy,

hope filled,

heuristic

kind of

thinking.

–Paul Clement Czaja

 

Zooming In

Neil Gaiman tells the story of a Chinese emperor who became obsessed by his desire for the perfect map of the land that he ruled. He had all of China recreated on a little island, in miniature, every real mountain represented by a little molehill, every river represented by a miniature trickle of water.  The island world he created was enormously expensive and time-consuming to maintain, but with all the manpower and wealth of the realm at his disposal, he was somehow able to pull it off.  If the wind or birds damaged some part of the miniature island in the night, he’d have a team of men go out the next morning to repair it.  And if an earthquake or volcano in the real world changed the shape of a mountain or the course of a river, he’d have his repair crew go out the next day and make a corresponding change to his replica.

The emperor was so pleased with his miniature realm that he dreamed of having an even more detailed representation, one which included not only every mountain and river, but every house, every tree, every person, every bird, all in miniature, one one-hundredth of its actual size.

When told of the Emperor’s ambition, his advisor cautioned him about the expense of such a plan.  He even suggested it was impossible.  But not to be deterred, the emperor announced that this was only the beginning – that even as construction was underway on this newer, larger replica, he would be planning his real masterpiece – one in which every house would be represented by a full-sized house, every tree by a full-sized tree, every man by an identical full-sized man.  There would be the real China, and there would be his perfect, full-sized replica.

All that would be left to do would be to figure out where to put it…

***

Imagine yourself standing in the middle of a railroad bed, looking down the tracks, seeing the two rails converge in the distance, becoming one.  You know the rails are parallel, you know they never meet, yet your eyes see them converge.  In other words, your eyes refuse to see what you know is real.

If you’re curious why it is that your mind refuses to see what’s real in this case, try to imagine what it would be like if this weren’t so.  Try to imagine having an improved set of eyes, so sharp they could see that the rails never converge.  In fact, imagine having eyes so sharp that just as you’re now able to see every piece of gravel in the five foot span between the rails at your feet, you could also see the individual pieces of gravel between the rails five hundred miles away, just as sharply as those beneath your feet.  In fact, imagine being able to see all the pieces of gravel, and all the ants crawling across them, in your entire field of vision, at a distance of five hundred miles away.  Or ten thousand miles away.  What would it be like to see such an image?

***

How good are you at estimating angles?  As I look down those railroad tracks, the two rails appear straight.  Seeing them converge, I sense that a very acute angle forms – in my brain, at least, if not in reality.  The angle I’m imagining isn’t 90 degrees, or 45 degrees; nor is it 30, or even 20.  I suppose that angle to be about a single degree. But is it really?  Why do I estimate that angle as a single degree?  Why not two degrees, or a half a degree?  Can I even tell the difference between a single degree, and a half of a degree, the way I can tell the difference between a 90 and a 45?  Remember, one angle is twice as large as the other.  I can easily see the difference between a man six feet tall and one who’s half his size, so why not the difference between a single degree and a half a degree?  What if our eyes – or perhaps I should be asking about our brains – were so sharp as to be able to see the difference between an angle of .59 degrees and one of .61 degrees with the same ease and confidence we can distinguish between two men standing next to each other, one who’s five foot nine and the other six foot one?
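
Out of curiosity, I can put numbers on that convergence.  Here’s a minimal Python sketch, assuming the five-foot span between rails used above; each rail sits half that span off my line of sight, so the apparent angle between them is twice the arctangent of half the gauge over the distance:

```python
import math

GAUGE_FT = 5.0   # assumed span between the rails, as above

for distance_ft in (10, 100, 1_000, 10_000, 100_000):
    # each rail sits half the gauge off the center line of my gaze
    angle = 2 * math.degrees(math.atan((GAUGE_FT / 2) / distance_ft))
    print(f"at {distance_ft:>7,} feet, the rails appear {angle:.4f} degrees apart")

# at      10 feet, the rails appear 28.0725 degrees apart
# at     100 feet, the rails appear 2.8642 degrees apart
# at   1,000 feet, the rails appear 0.2865 degrees apart
# at  10,000 feet, the rails appear 0.0286 degrees apart
# at 100,000 feet, the rails appear 0.0029 degrees apart
```

By this arithmetic, the rails subtend a single degree at only about three hundred feet; the half-degree my imagination was weighing against a full degree is the difference between standing three hundred feet away and six hundred.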

***

Yesterday, I was preparing digital scans of my grandfather’s Christmas cards for printing in the form of a book.  His Christmas cards are hand-drawn cartoons, caricatures of famous personalities of his day.  Each is clearly recognizable, from Franklin Roosevelt and Adolf Hitler to Mae West and Mickey Mouse.  Some of the images were scanned at 300 pixels per inch, some at 600, etc.  Reflecting on pixel counts and resolutions so that my printed book would not appear blurry, I was testing the limits of my ability to distinguish different resolutions.  Of course, one neat thing about a computer is how it lets us zoom in.  As long as I zoomed in close enough, I could see huge differences between two versions of the same picture.  Every pixel was a distinct color, every image (of precisely the same part of the caricature) a very different pattern of colors – indeed, a very different image.  Up close, the two scans of the cartoon of Mae West’s left eye looked nothing alike – but from that close up, I really had no idea what I was looking at; it could have been Mae West’s left eye, or Adolf Hitler’s rear end, for all I knew.  In any case, I knew, from my close-up examination, how very different the two scanned images of Mae West actually were.  Yet only when I was far enough away was I able to identify either image as a caricature of Mae West, rather than of Hitler, and at about that distance, the two images of Mae West looked (to my eye) exactly the same.
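
The effect is easy to reproduce without Mae West.  Here’s a toy Python sketch – made-up numbers standing in for two scans of the same picture – in which two grids differ pixel by pixel, yet agree once you zoom far enough out for the pixels to blur together:

```python
import random

def downsample(img, k):
    """Average each k-by-k block of pixels into one pixel (zooming out)."""
    n = len(img)
    return [[sum(img[i * k + a][j * k + b] for a in range(k) for b in range(k)) / k ** 2
             for j in range(n // k)]
            for i in range(n // k)]

def max_diff(a, b):
    """Largest pixel-by-pixel difference between two equal-sized grids."""
    return max(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

random.seed(1)
picture = [[random.random() for _ in range(16)] for _ in range(16)]
# two "scans" of the same picture, each with its own pixel-level noise
scan_a = [[p + random.uniform(-0.05, 0.05) for p in row] for row in picture]
scan_b = [[p + random.uniform(-0.05, 0.05) for p in row] for row in picture]

print(f"zoomed in, pixel by pixel: {max_diff(scan_a, scan_b):.3f}")
print(f"zoomed out, 4x4 blocks:    {max_diff(downsample(scan_a, 4), downsample(scan_b, 4)):.3f}")
# up close, the scans look nothing alike; from far enough away, they converge
```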

***

How long is the coastline of Ireland?

If I took a yardstick and walked the perimeter, I could lay my yardstick end to end the whole way around, count the number of lengths, and conclude that the coastline of Ireland was a certain number of feet long.  But if I used a twelve-inch ruler instead, following the ins and outs of the jagged coast a little more precisely, the result would be a larger number of feet than if I had used the yardstick, because the yardstick assumed straightness every time I laid it down, when in fact the coastline is never perfectly straight.  My twelve-inch ruler could more closely follow the actual irregularity of the coastline, and the result I obtained would be a longer coastline.  Then, if I measured again, using a ruler that was only a centimeter long, I’d get a longer length still.  By the time my ruler was small enough to follow the curves within every molecule, or to measure the curvature around every nucleus of every atom, I’m pretty sure I’d have to conclude that the coastline of Ireland is infinitely long – putting it on a par, say, with the coastline of Asia.
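
Mathematicians call this the coastline paradox, and you can watch it happen in a few lines of Python.  Here’s a sketch that measures a Koch curve – a stand-in for a jagged coast, since I don’t have Ireland’s actual survey data – with shorter and shorter rulers:

```python
import math

def koch(points, depth):
    """Replace each straight segment with the four segments of a Koch curve."""
    if depth == 0:
        return points
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
        a = (x0 + dx, y0 + dy)                        # one third of the way
        c = (x0 + 2 * dx, y0 + 2 * dy)                # two thirds of the way
        b = (a[0] + dx / 2 - dy * math.sqrt(3) / 2,   # the jag's apex
             a[1] + dy / 2 + dx * math.sqrt(3) / 2)
        out += [a, b, c, (x1, y1)]
    return koch(out, depth - 1)

def divider_walk(points, ruler):
    """Walk the curve with a fixed-length ruler, counting how many lengths fit."""
    total, anchor = 0.0, points[0]
    for p in points[1:]:
        if math.dist(anchor, p) >= ruler:
            total += ruler
            anchor = p
    return total

coast = koch([(0.0, 0.0), (1.0, 0.0)], depth=7)
for ruler in (0.3, 0.1, 0.03, 0.01, 0.003):
    print(f"ruler {ruler}: the coast measures {divider_walk(coast, ruler):.2f}")
# every shorter ruler reports a longer coast, with no limit in sight
```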

***

How many rods and cones would my eyes have to contain, for me to be able to distinguish every ant and piece of gravel in every railroad bed, wheatfield and mountainside within my field of vision, at a distance of five hundred miles away?  How much larger would my brain have to be, to make sense of such a high-resolution image?  I suspect it wouldn’t fit inside my skull.
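
For what it’s worth, here’s a back-of-the-envelope Python sketch.  The one-inch gravel, the 120-degree field of vision, and the eye’s roughly hundred million photoreceptors are all rough assumptions of mine, and diffraction is ignored entirely:

```python
import math

GRAVEL_IN = 1.0                     # assume a one-inch piece of gravel
DISTANCE_IN = 500 * 63_360          # five hundred miles, in inches
FIELD_OF_VIEW = math.radians(120)   # a rough field of vision, per side

needed = GRAVEL_IN / DISTANCE_IN    # the angle one piece of gravel subtends
pixels_per_side = FIELD_OF_VIEW / needed
total_pixels = pixels_per_side ** 2

print(f"'pixels' required: {total_pixels:.1e}")            # about 4.4e15
print("photoreceptors in a human eye: about 1e8")
print(f"shortfall: a factor of roughly {total_pixels / 1e8:,.0f}")
```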

***

Why did we, so recently, believe that an atom was indivisible?  Why did it take us so long to identify protons, neutrons, and electrons as the really smallest things?  Why did it take us until 2012 to decide that that, too, was wrong – that not only were there quarks and leptons, but that the smallest thing was the Higgs boson?  And why not until 2014 to decide that particles existed even smaller than that?

***

Given how long it took us to realize that our solar system was just one of billions in our galaxy, and how much longer to realize that our galaxy was just one of billions of galaxies, why are we now so confident of our scientists’ estimates of the size of the Universe – especially when told that the “dark matter” and “dark energy” they say account for most of it are just names given to variables necessary to make their equations come out right?  That, apart from their usefulness in making these equations come out right, the scientists have never seen this stuff and have no idea what it is?  Is it really so hard for us to say, “We simply have no idea”?

***

A human baby can distinguish among hundreds, even thousands, of human faces.  But to a human baby, all chimpanzees look alike.  And to most of us Westerners, all Asians look alike.  Why do babies treat Asians and chimpanzees like the rails of railroad tracks, converging them into “identical” images even when we know they are different?  Did our brains evolve not to maximize perception and understanding, but to make them most efficient?  In other words, are we designed to have limited perception for good, sound reasons, reasons that are important to our very survival?

***

Why do we think, and talk, and act, as if our brains are capable of comprehending reality, in all its vast complexity?  Is it because it’s more efficient to feed and maintain a few rods and cones than it would be to feed and maintain enough of them to see the difference between quarks and Higgs bosons, or the individual pieces of gravel between the railroad tracks on all the planets of Andromeda?

***

Mirror, mirror, on the wall: tell me, can I really achieve, within my brain, a true comprehension of the Universe? Or am I just like the Emperor of China?

– Joe

Hatred

I just watched a TED talk I liked.  The speaker (Sally Kohn) was articulate and funny; her message about hatred powerful.  Fearing that a synopsis of her talk would detract from the way she conveys her point, I’ll simply share the link to her talk, with my strong recommendation.

https://www.ted.com/talks/sally_kohn_what_we_can_do_about_the_culture_of_hate?rss

But I do have one disagreement with her.  At one point, she refers to “study after study after study that says, no, we are neither designed nor destined as human beings to hate, but rather taught to hate by the world around us…”

I’m not so sure.  Last year I saw a science show on TV that presented a series of studies of very young children; its disturbing suggestion was that we are born to hate.  Can anyone enlighten me about these studies, suggesting (one way or the other) whether hatred is learned, or innate?  A product purely of culture, or of biological evolution?

It has always seemed to me that while some of hate is surely learned, a predisposition toward it may be innate. But what would a predisposition toward hate look like?

Sally cites the twentieth-century researcher Gordon Allport as saying that Hatred occupies a continuum: things like genocide lie at one end, while “things like believing that your in-group is inherently superior to some out-group” lie at the other.  That much makes sense to me.  In fact, the very idea of a “hate continuum” with feelings of superiority lying at one end is why I think the answer to the innateness question may be important.

Whenever I hear it said that a positive self-image is important to mental health, I think of Garrison Keillor’s joke that in Lake Wobegon, everyone is above average.   I suspect the great majority of us think we’re at least slightly above average.  And don’t  psychologists say that that’s good?  Don’t we justify keeping our positive self-images by the corollary view that people who “suffer from a negative self-image” are likely unhealthy?  Don’t we think it would be beneficial  if everyone thought of himself or herself as above average?  Wouldn’t that mean an end, for example, to teen suicide?

But even if I’m far below average, there are at least some people in the world who are not as good (or as smart, or as fit, or as valuable) as me.  No?   And if I think my liberalism is superior to your conservatism, or the other way around,  you must lack some quality or insight I possess, no?  Does “feeling good about myself” require that, in some way, I feel superior to others?

Maybe not.  Maybe my positive self-image need not depend on comparing myself to others – maybe I can see value in myself – have a positive self-image – without thinking of myself as superior to anyone else at all.  But the only way I can discern to do that is to see equal value in everyone.  And if we’re talking about wisdom or intelligence or the validity of things in which we believe, that means that my own power of discernment is no better than the next guy’s; that everything I believe in has value, but everyone else’s beliefs have equal value.  And I see great debate about whether that’s desirable.  Does it require me to abandon all my convictions?  To forego all my beliefs?  What does it even mean to say that my belief in God has no more value than your belief in atheism, or vice versa?  Can I really believe in anything, if I think an opposing belief is just as “good”?  I think most of us say no.  I think that, for most of us, feeling good about ourselves and our beliefs is only possible through at least implicit comparison to others, a comparison in which we feel that our beliefs are at least slightly superior to somebody else’s.

Even if it’s both possible and desirable, it strikes me as very, very hard to have a positive self image without feeling such superiority.  I mean, can I really have a positive self-image if I think I’m doomed to be the very worst person on earth, in every respect?  It certainly seems likely that, for many, most or all people in the world, positive self-image depends on feeling superior to at least some others, in at least some respects.  I’d venture the guess that a tendency toward positive self-image (in comparison to others) has evolved in our species because of its evolutionary health benefits.  In any case, I suspect there’s a strong correlation between adults who feel their beliefs are superior and adults who feel disdain for the beliefs (or intellects) of others, and a strong correlation between those who feel disdain for the beliefs and intellects of others and those who hate them.  At the very least, positive self-image and a feeling of superiority seem at least early stepping stones in the direction of Hatred.

However, my suspicion that the seeds of Hatred are themselves innate doesn’t depend entirely on positive self-image and feelings of superiority.  The science show I watched last year dealt not with self-image, but with group identification and preference: the idea that we’re willing to assist and protect those who are most like ourselves, while directing the opposite (competition, aggression, violence) at those who are unlike ourselves.

“My God, my family, and my country.”   The familiar formula implies a great deal, I think, about the subject of identity, as does the advice we give to our children: “Don’t ever talk to strangers.”  Why do we alumni all root for the home team?  Why would most of us save our spouse and children from an inferno first, before saving strangers, if we save the strangers at all?  Why do we lock our doors at night to protect those we know, while excluding those we don’t?  Why do we pledge allegiance to our respective flags?

(That last one’s easy, of course, if we believe that we Americans pledge allegiance to our flag because our country is the greatest on earth.  Perhaps I should really be asking why all the other people in the world – who far outnumber us – pledge their allegiance to their flags, when they live in inferior countries?  Are they uninformed?  Too stupid to recognize our superiority?  Aware of our superiority, but unwilling to admit it, because of selfishness, dishonesty, or even evil design?  In which case, can Hatred be far behind?)

Why do we form Neighborhood Watch groups, erect walls on our borders, finance armies for self-defense, and raise tariffs on trade?  Is it not because we prefer the familiar, and because that preference is in our self-interest?  And isn't self-interest as true of groups as of individuals?  In evolution, groups that look out for each other – that favor those most like themselves, while treating dissimilar "others" with suspicion and distrust – do well.  (We know that those like us aren't dangerous, hostile predators, but fear that unknown strangers might be.)  In contemplating first contact with aliens from other worlds, some of us favor holding out olive branches, others favor some sort of first strike; but disagree as we might on how to greet them, we all tend to think in terms of a common goal: to preserve humanity.  We therefore focus on the need for global unity in facing the alien challenge.  But what is it that causes us to favor "humanity" over alien beings, when we know absolutely nothing about those alien beings?  Isn't it precisely because we know nothing about them?  Isn't it because, innate within us, is a bias in favor of those who are most like ourselves?

Consider the following continuum, as it progresses from the unfamiliar to the familiar:

(1) We spend millions to combat and eradicate bacteria, giving Nobel prizes to those most successful in the effort;

(2) We spend some (but less) to eradicate mosquitoes, which we swat unthinkingly;

(3) By contrast, we feel bad if we run over an armadillo on the road, but what the heck, such accidents are unavoidable;

(4) We try not to think much about slaughtering millions of cows, but we do it on purpose, because we have to eat;

(5) Most of us abhor the idea of ever eating a monkey; and

(6) We condemn human cannibalism, abhorring murder so much that we imprison murderers, even when we oppose the death penalty because human life is sacred.

I think that assigning things to their places on such a continuum, based on how much they seem similar or dissimilar to ourselves, reflects our innate, natural preference for those most like ourselves.  Yet the tendency to feel safety in, and preference for, those who are most like ourselves is precisely what leads to racism, no?

So, is this preference natural and good?  Or is it something to resist?  Should we be proud of our tendency to fight for our God, our country, our state, our species, our family, our planet – and to disdain our enemies – or should we be suspicious of that tendency, aware that such loyalties largely result from the accidents of birth?  And does our tendency to root for the home team – not to mention our loyalty to political ideals – exist only because we're able to see the good in the familiar, while remaining understandably blind to the good in the unfamiliar?

We don’t see what roosters see in hens.  We’re blind to what bulls see in cows.   But just like we can’t feel the love one three-headed Martian feels for another, I submit we won’t be able to  appreciate the goodness that aliens will be striving to preserve when they descend upon us,  maws open, preparing to treat us the way we treat swine.  I want to know WHY we are all in agreement on the importance of preserving our species, even if it means the poor aliens go hungry.   And I doubt its as simple as loyalty to good old mother earth, as I suspect we’d probably be happy to negotiate a peace with the invaders by offering them, say, all the world’s polar bears and squirrels, provided they’ll agree to leave humans alone.  This preference for humanity would prevail in that moment, I believe, never mind the national and regional warring between earthlings that had preceded it.  And it would seem strong enough to survive even if the alien species were acknowledged to be technologically “superior” to us.  But in that case, would our efforts rest on a reasoned belief that, at least morally, if not technologically, we are superior to such alien species?  Or would the instinctive feeling of moral superiority be only a disguise in which the instinct for self-preservation and consequent preference for things most like ourselves had clothed itself?

I don’t claim to have the answers.  Whether we deserve to defeat alien invaders, whether we ought to value human beings more than chickens or mosquitoes, whether we ought to fight for our flag, these are not the issue here.  My point is that I take our allegiance to things most like us to be innate, whether it’s good or (in the case of racism) abhorrent.  I think the preference is a natural, inborn one, a part of who we are, whether we like to admit it or not –and that it’s a tendency terribly hard to get rid of, as our struggle with racism shows.

For the kinds of reasons Sally suggests, I believe that understanding our feelings of superiority and our preference for the things most like ourselves is the key to overcoming Hatred.  But if we think of Hatred as merely cultural, as merely something we've "learned" from society, I fear that, as individuals, we may be tempted to think we've already rid ourselves of it, or that we no longer need to be alert to its presence deep in our hearts.  If we see it only as something others do – if we fail to see at least the seeds of it, innate in ourselves, ready to manifest themselves in our own actions – we may be the Hateful ourselves.

– Joe

What If?

Sometimes, “What if” questions lead to breakthroughs in the way we think and live.  What if we could make our own fire?  What if the sun doesn’t really circle the earth?  What if “up” and “down” aren’t really up and down?  What if I’m wrong?

Most of the time, the "what if" questions don't lead to earth-shattering breakthroughs about the real world.  Most of the time, they posit something that's impossible, or that just doesn't make sense.  When we ask, "What if the South had won the Civil War?" we're not suggesting that the South did win, just hoping to learn something by contemplating what the world might be like if it had.  I believe there can be value in asking such questions.

So when I ask, these days, what if I'm wrong, I'm not thinking of a mere philosophical acknowledgement that I'm likely wrong about something.  Rather, I like to ask, what if I'm wrong about something really important?  It's easy to acknowledge I might be wrong about the best restaurant in town, or the culpability of O.J. Simpson.  No, I'm thinking on the scale of what if "up" isn't up, and "down" isn't down?  And today, I'm wondering, "What if I'm wrong about Jesus?"

I imagine I've just alienated lots of people: most obviously, those Christian faithful for whom belief in Jesus is the most important belief in the world, but maybe also those atheists, Jews, Muslims and others who might take offense at the suggestion that belief in Jesus has ever been important to them.  In fact, if I'm suggesting that non-believers might be wrong, I've just alienated them by revealing myself as a closet Christian proselytizer with a very annoying agenda – right?

Indeed, therein lies the reason for my question.  What if we're all wrong about Jesus?  Not just those who believe in him, but also those who don't?  Anyone whose feathers may be ruffled by the suggestion that belief in him, one way or the other, may not be important after all?

At the mere asking of such a question, a lot of us brace ourselves for the sort of debate we've grown used to – a debate we may have grown tired of – a debate between those who believe in Jesus and those who don't.  Jesus himself is said to have predicted that brother would deliver brother to death, and be hated, on account of him.  (Matt. 10:21-22.)  I've always thought it ironic that a figure so identified with principles of loving – not just one's neighbors but one's enemies – would end up at the center of debates, wars, and genocides fought in (or against) his name.  Yet, from the Crusades to jihads, from the Salem witch trials to modern clashes over sexual identity, this advocate for love has been at the center of controversy and hate.  Probably because I was raised in the midst of argument between Roman Catholics (my father's side) and fundamentalist Presbyterians (my mother's side), I lean toward agnosticism, not only with respect to religion, but politics, psychology, and physics as well.  Agnosticism, after all, is a part of what led me to We May Be Wrong.

But having been raised as a Christian, I have a special interest in the irony of the animosities surrounding Jesus and his followers.  And so I ask, “What if we’re all wrong about Jesus?”

Now, for me, the proposition that we may be wrong has never meant to suggest we're wrong about everything, or even totally wrong about any one thing.  I simply start with the acknowledgement that I'm almost certainly wrong about something, and from there, I move on to the belief that I really have no way of knowing, for sure, which subjects are the ones I'm wrong about.  I may be right about a lot of things; I just wish I could identify which things those were, so that I could jettison all the others.  So I'm not asking anybody to question all their beliefs about Jesus, or to contemplate the possibility that all of them might be wrong.  Today, however, I do have a particular one in mind.

I think the concept of “belief in Jesus” is unique in the modern world, or very nearly so.  Our language itself suggests as much.  If we say we have “faith” in our generals, we likely mean only that we trust them, that we feel secure under their leadership.  But if we say we have faith in Jesus – or even more so, that we “believe in” him – we usually mean a good bit more than that.

I don't say, "I believe in dogs," or "I believe in pepperoni pizzas."  I might say I believe that such things exist, but not that I believe in them.  If I say I believe in Santa Claus, or in the Easter Bunny, I'm saying I believe that such creatures are physically real, not just figments of fairy tale.  When I say "I believe in 'X,'" it's usually an abbreviated way of stating a belief in the truth of some specific proposition about 'X.'  If I say, "I believe in love," or "I believe in democracy," it's the equivalent of saying I believe in the truth of the proposition that love (or democracy) is a good thing.  But if I say, "I believe in Jesus," I'm not generally understood to be saying that I trust his teaching or that I believe in the truth of the proposition that he was a good man; I'm understood to be asserting belief in the truth of a unique proposition about him, and about no one else who's ever lived.  A belief, in fact, that has no parallel among truth propositions about anything else in my vocabulary.

Yet, when it comes to belief in Jesus, discussion often stops right there, at the “I believe” stage.  As soon as we hear “I believe – ”  or “I don’t believe –” it’s as if the “sides” are drawn without ever getting to what it is that one does, or doesn’t, believe about him.  For some reason, Jesus has become a virtual poster child for polarization.  “You’re either with us or against us” often seems the attitude on both sides.

Now, my parents were from different religious backgrounds, and for that reason they disagreed about religion a lot: Transubstantiation.  Limbo.  The Assumption.  The veneration of Mary.  The priesthood.  The authority of the Pope.  The sacraments.  How to pray.  How the world was created.  The list goes on.  Personally, I came to believe their disagreements were symptomatic of the pitfalls inevitably encountered when we start trying to define metaphysical things with words that draw their meaning from the physical.  (Words draw their meaning from their use as applied to shared experiences; when we use them to describe things we claim to be unique, I lose confidence in them.)  But while my parents disagreed about many aspects of their Christian beliefs, they were typical of most Christians in one respect: when they said, "I believe in Jesus," they were agreeing that Jesus was God.

Now, I've never thought I had a very good idea of what it would be like to be a theoretical physicist, or the President of the United States, much less God.  Whatever it means to be God, if such a person or thing exists at all, seems too much to comprehend.  I won't delve into the nuances of whether my parents meant that Jesus was really God, or just the son of God, or a part of the three persons in one God, or any of the other verbal formulations that had church leaders arguing from the get-go.  Years of effort to understand such nuances have only further convinced me that it's like arguing over the number of angels that could fit on the head of a pin.  I, for one, don't really understand what it means to be God – in whole, or even in part.

And for me, at least, the logic is one of mathematical equality: if I can’t say exactly that “God is X,” then I don’t see how I can say that “X is God.”  And if I can’t understand what it means to say that “X is God,” then I don’t follow how important it could be to believe that Jesus was, or is, or wasn’t, or isn’t.  How can it be important to believe in the truth of any proposition I cannot understand?

For my mother and father, the most important thing I could ever do was to profess my belief that Jesus was God, or the son of God, or (fill in whatever qualifiers you deem relevant).  Throughout their lives, I held my ground, refusing to profess a belief in the truth (or falsity) of a proposition I didn't understand.  This frustrated the $#@! out of them.  For my parents, "belief in Jesus" did not mean a belief that he existed, or that he was good, or that he performed miracles, or that he proclaimed the importance of love, or that his advice on human behavior was incredibly wise.  "Belief in Jesus" meant belief that, in some sense or another, he was God.  And – crucially – this belief in the divinity of Jesus made all the difference to them.  Whether I was, or wasn't, a "Christian" depended on that one thing – not to mention whether I'd spend eternity in heaven or hell on its account.

I could believe in dogs, or Santa Claus, if I thought they existed.  I could believe in the American flag if I thought it represented a good country with good ideals.  I could believe in Donald Trump if I thought he was a good president.  But I couldn't believe in Jesus – not really – unless I believed that he was, in some way, God.

This core requirement for what it means to be a Christian in our world has permeated the thinking of Christians and non-Christians alike since Paul began writing epistles.  The “divinity proposition” that has attached itself to Jesus – the principle for which martyrs have died, for which wars have been fought, for which heretics have been burned – seems to have caused a divide between believers and non-believers that, from where I sit, has no parallel in human history.  And the gospels report that Jesus himself predicted it!

So I ask, in this respect, "What if we're all wrong about Jesus?"

Now, some of you may think I'm asking whether we've been wrong, all this time, to suppose that Jesus was divine.  Others may think I'm asking whether we've been wrong to suppose that he wasn't.  The traditional concept that belief in Jesus's divinity (or not) is the be-all and end-all of what it means to be a Christian has shaped our understanding.  If you're a Christian, that belief determines whether you're among the "saved"; if you're not, it determines whether you're a self-righteous, deluded dreamer, not to mention potentially dangerous because of the strength of your unreasonable convictions.

But what if, properly recorded, preserved, translated, and interpreted, Jesus's words neither claimed divinity nor denied it?  And even more: what if he disapproved of such theological inquiries, seeing them as the downfall of the Pharisees?  What if, when asked by his disciples what he would have them do, his answer was not that they should believe him to be divine come hell or high water, or that their eternal salvation would depend on their belief in any such theological proposition, but, simply, that they should do as he did?  That they should care for the sick, and love their neighbors as much as they loved themselves?

What if, on this single aspect of understanding – that belief in the divinity proposition is the sine qua non of Christianity – we've all been wrong, all along?  What would the world be like if the central element of Christianity had turned out to be not belief in the divinity of Jesus, but living the sort of life he's said to have lived?  What if Jesus were celebrated for teaching, essentially, "Look, folks, I don't understand why you're so obsessed with this question of divinity and divine origins.  Haven't you better things to do, and to talk about, than whether, in one sense or another, I am God?  Stop doing that, please!  Leave it for the Pharisees!"

If that concept had been at the center of Christianity for the past two thousand years, what would it mean, today, to "be a Christian"?  If Christians had been taught not to concern themselves with whether Jesus was God, would that mean that all the "rooms of my father's house" would be empty, because no one had "believed"?

I'm not saying it's true, or false.  I'm just wondering what the ramifications would have been, over the past two thousand years, if the divinity proposition had never been considered important, and "Christianity" had been a movement centered on "love thy enemy" and "judge not lest ye be judged" and "care for one another."

What would have happened to the pagan persecutions of the martyrs?  The history of schisms in Christian churches?  The Church’s persecution of heretics? The Christian endorsement of the African slave trade? Conflicts between Christians and Jews, Muslims and atheists?  The household (and the world)  in which I grew up?

I can hear the condemnation.  To posit a Jesus who disapproved of contemplating his divinity – who counselled against the very thought of such an exercise as non-productive, pointless, Pharisaical, and bound to result in division and strife – would be to rip out the core of Christianity itself.

But what if we’re wrong about that?

Thanks to F. Lee Bailey…

Years ago, I heard a presentation by F. Lee Bailey.  His audience was other lawyers.  His topic was impeaching adverse witnesses – that is, convincing a judge or jury not to believe them.  His premise was that people – including judges and jurors – don't want to think other people are lying, if they can help it.  Bailey's advice, therefore, was to avoid the beginner's mistake of trying to convince a jury that an adverse witness is a liar, except as an absolute last resort.  Instead, he recommended, give the jury, if at all possible, other reasons for not believing the opposing witness.  He spent the rest of his talk giving examples of the different ways witnesses can give false testimony other than by lying.

Looking back on it, it was a surprising presentation from the man who, some years later, convinced the O.J. Simpson jury that Detective Mark Fuhrman was a bald-faced liar.  But when I heard Bailey's presentation, O.J. Simpson was still doing Hertz commercials.  Bailey himself was already famous for representing Sam Sheppard, Ernest Medina, Patty Hearst and others.  His talk made a big impression on me because, in it, he offered a list of ten ways to discredit a witness other than by arguing that the witness was lying.  I can't say it improved my legal prowess, but it did get me thinking about all the ways people simply make mistakes.

I lost the notes I took.  Unable to find Bailey’s list on-line, I attempted to reconstruct it myself, in order to do a Bailey-esque “top ten” list of my own in this forum.  But I’ve finally abandoned that effort, for reasons I suspect will become apparent.  Still, I’m interested in the variety of reasons for error, and propose to share some of my thoughts on that subject.

One obvious reason for error is simple unawareness.  An example comes quickly to mind: my lack of awareness of my oldest brother’s existence.  He was born with Down Syndrome, and before I was ever born, our parents had been convinced to send him away, to start a “normal” family as soon as possible, and to forget (if they could) that their first son existed at all.   Three children later, they found themselves unable to do so, and belatedly accepted their first son into the family.  I’ll bypass here the obvious question of whether they were wrong to accept the advice in the first place.  My example has to do with my own error in believing that I was the second child, born with only one other sibling.   My wrongness was simply that I didn’t know, as I’d never been told.  I didn’t know about our oldest sibling until I was five years old, when he first came home.  Until then, every detail of my life had suggested I had only one older brother.  Being wrong about that was simply a matter of not knowing.  As my other older brother recently pointed out, the one thing we cannot know is what we don’t know.

If simply not knowing (i.e., not having information) is one reason we can be wrong, misinterpreting information seems to be another.  Years ago, I'd just sat down after getting home late from work one evening when my dear wife Karen sat down beside me and, looking at my forehead, furrowed her brow in an expression of clear concern about whatever she saw there.  Hearing her say, "Hit yourself over your right eye," I imagined a fly or mosquito about to bite me.  To kill the insect I'd have to be fast, so I instantly swung my hand to my forehead, forgetting I was wearing glasses.  (We can count forgetfulness as another way of being wrong.)  When the open palm of my right hand smacked my forehead over my right eye, it crushed the glasses and sent them flying across the room, but not before they made a very painful impression on my eyebrow.  But the most surprising result of my obedience was Karen's uncontrolled laughter.

Now, I thought it cruel for her to laugh when I was in pain, but when a person you love is right in front of you, laughing uncontrollably, sometimes you can't help yourself, and you simply start laughing too (which is what I did, without quite knowing why).  My laughter just added fuel to Karen's.  (I suppose she thought it funny that I'd be laughing, considering the circumstances.)  Then I began to laugh all the more myself, as I realized she was right – I had no reason to be laughing; the fact that I was laughing struck me as laughable.  Neither of us could stop for what seemed like forever.

Karen, bless her heart, tried several times to explain why she'd started laughing – but each time she tried, the effort set her off again.  And when her laughing started up again, so did mine.  The encores repeated themselves several times before she was finally able to explain that when I'd sat down, she'd noticed a bit of redness above my right eye.  (Perhaps I'd been rubbing it during my drive home?)  She had simply asked, "Did you hit yourself over your right eye?"  Not hearing the first two words, I'd mistaken the question for a command.  Dutifully, and quickly, I had obeyed.

So far, I’ve mentioned simple ignorance, forgetfulness, and misinterpretation.  I might add my mistake in simply assuming the presence of an insect, or my negligence in failing to ask Karen to explain her odd command.  Actually, we begin to see here the difficulty of distinguishing among causes of error, or among ways it is committed.  Was it really that I had misinterpreted Karen’s question?  Or was it, rather, a failure of sense perception, my failure to hear her first two words?  Or was it her failure to sufficiently enunciate them?  Such questions suggest the difficulty of classifying reasons for error.  When it comes to assigning blame, people like F. Lee Bailey and me made our livings out of arguing about such things.

But I do have an example of a different sort to share.  This one also dates from the 1980s.  It represents the sort of error we commit when we have all the necessary information, when we make no mistakes of hearing or interpretation, but we – well – let me first share the story.

One of my cases was set to be heard by the United States Supreme Court.  Now, I'm infamous for my lack of concern about stylish dress, and at that point, I'd been wearing the same pair of shoes daily for at least five years – without ever polishing them.  (Go ahead, call me "slob" if you like; you won't be the first.)  The traditions of appropriate attire when appearing before the United States Supreme Court had been impressed upon me, to the point I'd conceded I really ought to go buy a new pair of shoes for the occasion.  So the night before my departure for Washington, I drove myself to the mall.  Vaguely recalling that there was a Florsheim shoe store at one end – which, if memory served, carried a nice selection of men's shoes – I parked, found the store, and began my search, surveying both the display tables in the center of the store and the tiers of shoes displayed around the perimeter.  My plan was first to get a general sense of the options available, and then to narrow the choices.  As I walked from one table to the next, a salesman asked if he could help.

I replied with my usual “No, thanks, just looking.”  As I made my way around the store, the salesman returned, with the same result, and then a third time.  (My war with over-helpful sales clerks is a story for another day.)  Finally, with no help from the salesman, I found a table with a display of shoes that seemed to suit my tastes.  I picked up several pairs, feeling the leather, inspecting the soles, getting a closer look.  The salesman was standing close by now (as if his life depended on it, in fact) and one final time, he asked if he could help.  I really didn’t want to be rushed into conversation with him.  But I took one final look around that particular display, comparing the alternatives to the pair I was holding in my hand, and finally said to the salesman, “I think I like the looks of these.  Are they comfortable?”

“You ought to know,” came the salesman’s reply.  “They’re the same shoes you’re wearing.”

Looking down at my feet, of course, I realized why I’d remembered that store from five years earlier.  At least I’d been consistent.  But when you don’t much care about the clothes you wear, you just don’t think about such information as the location of a shoe store: it’s just not important.

So one issue raised by the example is focus.  Never focus on your shoes and you’re likely to look stupid for not knowing what you’re wearing.  But right behind focus, I think, the example raises the matter of consistency.  Darn right I’d been consistent!  Because of not paying attention, I’d gone to the same mall, to the same store, to the same case, and to the same pair of shoes, exactly as I had five years earlier.  I’ll generalize this into an opinion about human nature: when not consciously focused, unconscious force of habit takes over.

Lack of conscious focus and unconscious force of habit can certainly lead to error.  But being unmindful of something is a matter of prioritizing among competing interests.  With billions of pieces of data showering us from all corners of our experience every day, we have to limit what we focus on.  In my case, it's often clothing that gets ignored; instead, ever since hearing F. Lee Bailey's talk thirty-some years ago, I've been thinking about the reasons people can be mistaken.  Everybody has things they tend to pay attention to, and other things they tend to ignore.  But among the reasons we err, I think, is the tendency to proceed, unconsciously, through the parts of the world we're not focused on, as if on auto-pilot.  Who hasn't had the experience of driving a car and realizing you've reached your destination while lost in thought, having paid no conscious attention to getting there?  How much of our lives do we conduct this way – and how often does it mean we might ask a question just as stupid as "I think I like the looks of these; are they comfortable?"

In my next post, I plan to explore some further types of error.  In the meantime, I'll close here by pointing out that if you believe what you read on the internet, F. Lee Bailey ended up getting disbarred.  And unless I'm badly mistaken, he did make Mark Fuhrman out to be a liar.  But while I can admit to recalling two or three times in my life when I told bald-faced lies, I have no problem admitting I've been wrong a lot more often than that.

So for now, I'll simply thank F. Lee Bailey for helping me understand that lying is just the tip of the iceberg; and that trying to figure out how much lies beneath the surface is a deep, deep subject – and a problematic one, to say the least.

To be continued.

– Joe

To a New Year

Several people have mentioned it’s been a while since the last WMBW post.

As it happens, I've written a number of things with WMBW in mind, but none have seemed worthy of posting.  You have a zillion things to digest.  I don't want to add spam to your in-basket – especially not when the only point is that, whatever I might say, I may be wrong.

Sure, I do remain in awe of how little I know.  Of how vast is the universe of what I don't.  Of how presumptuous I'd be to expect anyone to read what I've written.  But precisely for that reason, my sentences remain on my hard drive, in unsent files.  And for precisely that reason, all that emerges, like a seedling from a crack in the pavement underfoot, is my wish that in the New Year to come, I learn as much about myself, my world, and the people around me as I can.

That, my friends, is all I think worth saying — and that I wish the same for all of us.

—Joe


The Tag Line

WMBW's tagline is "Fallibility>Humility>Civility."  It's punctuated to suggest that one state of being should lead naturally to the next.  Since the relationship between these three concepts is central to the idea, today I've accepted my brother's suggestion to comment on the meaning of the words.

Etymology books tell us that “fallibility” comes from the Latin fallere, a transitive verb that meant to cause something to stumble.  In the reflexive form, Cicero’s me fallit (“something caused me to stumble”) bestowed upon our concept of fallibility the useful idea that when one makes a mistake, it isn’t one’s own fault.  As Flip Wilson used to say, “the devil made me do it.”

This is something I adore about language – the way we speak is instructive because it mirrors the way we think.  By tracing the way language evolves, we can trace the logic (or illogic) of the way we have historically tended to think, and so learn something about ourselves.  Applying that concept here leads me to conclude that denying personal responsibility for our mistakes goes back at least as far as Cicero, probably as far as the origins of language itself, and perhaps even farther.  "I did not err," our ancient ancestors taught their children to say; "something caused me to stumble."

I also think it's fun to examine the development of language to see how basic ideas multiply into related concepts, the way parents give rise to multiple siblings.  And so, from the Latin fallere come the French faux pas and the English words false, fallacy, fault, and ultimately, failure and fail.  While I've heard people admit that they were at fault when they stumbled, it's far less common to hear anyone admit responsibility for complete failure.  If someone does, her friends tell her not to be so hard on herself, and her psychiatrist is liable to label her abnormal, perhaps pathologically so: depressed, perhaps, or at least lacking in healthy self-esteem.  The accepted wisdom tells us that a healthier state of mind comes from placing blame elsewhere, rather than on oneself.  Most interesting.

Humility, meanwhile, apparently began life in the Indo-European root khem, which spawned similar-sounding words in Hittite, Tocharian, and various other ancient languages.  All such words meant the earth, the land, the soil, the ground – that which is lowly, one might say; the thing upon which all of us have been raised to tread.  In Latin, the Indo-European root meaning the ground underfoot became humus, and led to English words like exhume, meaning to remove from the ground.  Not long thereafter, one imagines, the very ancient idea that human beings came from the ground (dust, clay, or whatever), or at least lived on it, led to the Latin word homo, a derivative of humus, which essentially meant a creature of the ground (as opposed to those of the air or the sea).  From there came the English words human and humanity.  Our humanity, then, might be said to mean, ultimately, our very lowliness.

From the Latin, homo and humus give us two rather contrary sibling words.  These siblings remain in a classic rivalry played out to this day in all manner of ways.  On the one hand, homo and humus give us our word "humility," the quality of being low to the ground.  We express humility when we kneel before a lord or bow low to indicate subservience.  In this light, humility might be said to be the very essence of humanity, since both embody our lowly, soiled, earth-bound natures.  But our human nature tempts us with the idea that it isn't good to be so low to the ground.  To humiliate someone else is to put them in their place (to wit, low to the ground, or at least low compared to us).  And while we share with dogs and many other creatures of the land the habit of getting low to express submissiveness, some of our fellow creatures go so far as to lie down and bare the undersides of their necks to show submission.  Few of us are willing to demonstrate that degree of humility.

And so the concept of being a creature of the ground underfoot gives rise to a sibling rivalry – there arises what might be called the "evil twin" of humility: the scientific name by which we distinguish ourselves from other land-based creatures.  The perception that we are the best and wisest of them gives rise to homo sapiens, the wise land-creature.  As I've pointed out in an earlier blog, even that accolade wasn't enough to satisfy us for long: now our scientists have bestowed upon us the name homo sapiens sapiens, the doubly wise creatures of the earth.  I find much that seems telling in the tension between our humble origins and our self-congratulatory honorific.  As for the current state of the rivalry, I would merely point out that not one of our fellow creatures of the land, as far as I know, has ever called us wise.  It may be only we who think so.

And now, I turn to "civility."  Eric Partridge, my favorite etymologist, traces the word back to an Indo-European root kei, meaning to lie down.  In various early languages, that common root came to mean the place where one lies down, or one's home.  (Partridge asserts that the English word "home" itself ultimately comes from the same root.)  Meanwhile, Partridge tells us, the Indo-European kei morphed into the Sanskrit word siva, meaning friendly.  (It shouldn't be hard to imagine how the concepts of home and friendliness were associated early on, especially given the relationship between friendliness and propagation.)  In Latin, a language which evolved in one of the ancient world's most concentrated population centers, the root kei became the root ciu- seen in such words as ciuis (a citizen, or person in relation to his neighbors) and ciuitas (a city-state, an aggregation of citizens, the quality of being in such an inherently friendly relationship with others).  By the time we get to English, such words as citizen, citadel, city, civics and civilization, and of course civility itself, all owe their basic meaning to the idea of getting along well with those with whom we share a home.

In the olden days, when one's home might have been a tent on the savannah, or a group of villagers occupying one bank of the river, civility was important in producing harmony and cooperation among those who lay down to sleep together.  Such cooperation was important if families were to work together and survive.  But as families became villages, villages became cities, and city-states became larger civilizations, the reach of those who sleep together has kept expanding.  (And I mean literally – my Florida-born son, my Japanese-born daughter-in-law, and my grandson, Ryu, who even as I write is flying back from Japan to Florida, remind me of that fact daily.)  Our family has spread beyond the riverbank to the globe.

Given the meanings of all these words, I would ask: how far do our modern senses of "home" and "family" extend?  What does it mean, these days, to be "civilized"?  What does it mean, oh doubly-wise creatures of the earth, to be "humane"?  And in the final analysis, what will it take to "fail"?

— Joe