News, Thoughts and Opinions

Thanks to F. Lee Bailey – Part Two

Last time out, I was discussing F. Lee Bailey’s effort to identify the various reasons a witness can be mistaken.  Bailey’s thesis was that juries don’t want to believe that witnesses lie, so the wisest and most effective way for a lawyer to discredit a witness is to point out for the jury reasons – other than bald-faced lies – that a witness might not be telling the truth.  Attempting to come up with my own list, I offered examples that involved lack of information, misinterpretation of information, forgetfulness, unwarranted assumptions, lack of focus, and unconscious force of habit.  Today, I continue that survey of reasons for error.

One common reason witnesses can seem to have diametrically opposed versions of reality, when neither is lying, has to do with language.  When my son was four years old, he was being particularly cranky one evening, whining out loud while I was trying to watch television.  I told him to behave himself or I’d put him to bed.  He quieted a bit, but only momentarily.  So I repeated my threat.  “Behave,” I said in a louder and sterner voice than the first time, “or you’re going to bed!”  Once again, the threat worked only briefly; when his whining began again, I repeated my threat a third time, even more sternly than before.  Again it worked, but when the whining returned only seconds later, at the limit of my patience, I cried out “Daniel, behave!!” in the fiercest tone I could muster.  Frightened nearly to death by my obvious anger, he looked up at me, his chin trembling in fear.

“I’m haive,” he assured me.  “I’m haive.”

The point is, words mean different things to different people.  Language can get in the way.  To my four-year-old son, I might as well have been babbling.  Was it his mistake, to misunderstand, or mine, to assume he understood what it meant to “be have”?  It was, in either case, a failure of communication.  And failure of communication consistently ranks high on lists of reasons for mistake.

Sometimes, we have trouble communicating even with ourselves, and when this happens, it suggests different reasons for error.  A couple of years before we left Florida, on a winter day when Karen had invited two guests to the house to paint for the afternoon, I agreed to cook them a meal.  A wall of sliding glass doors that looked out to the swimming pool gave the kitchen the best light for painting, and because of the light, it was the ladies’ chosen spot, as well as my work area for the day.  The meal included a spiced chutney made with coriander, cumin, and a little cayenne pepper.  Soon after preparing the chutney I felt a burning sensation in my right eye.  I rubbed the eye with the back of my hand, and then with a wet cloth, but rubbing seemed only to increase the burning.  My tear ducts went into high gear, but despite this natural defense, the burning did not abate.

I’d only recently started wearing contact lenses, and I feared that a lens was trapping the offending powder against my cornea – which would explain why my tears were having no effect.  The worry grew when I went to the sink and flushed my eye with a glass of water, with no consequent reduction in pain.

The urgency of removing the spices became an urgency to remove the contact lens – but I quickly realized I was having great difficulty even locating the darn thing.  When I tried to squeeze it off and out, my fingers came up empty.  My inability to feel it suggested two possible explanations.  As had happened before, it might have become so closely fitted to the cornea that suction was simply preventing its removal.  Alternatively, all that tearing (or the water from the sink) might have washed the lens down into the lower eyelid, where (having assumed the shape of a folded burrito packed with spicy powder) it was making elimination of the powder impossible.  With the burning sensation getting stronger by the second, I raced from the kitchen to the closest mirror – in our bedroom upstairs – and pulled the lower eyelid down in a search for the offensive lens.  But what with hyperactive tear ducts, pain, and the lack of a functioning lens, my poor eyes couldn’t tell whether the lens was in the eyelid or not.  I couldn’t feel it there, or folded into the upper eyelid, or stuck stubbornly to the cornea itself.  Ever more determined to remove it, I kept pinching at the lens with my fingertips from everywhere in the eye socket it might possibly be.

Unsuccessful, I ran back downstairs, flung open the sliding glass doors and crouched at poolside, dunking my head into the winter-cold water, thrashing it about to generate as much flow as possible, convinced that this, at least, would flush out the offending lens.  But when I lifted my head from the water the pain only increased.  The ladies were laughing now, asking what in the heck I was doing.  Caring only about the pain, I shut my eyes; the pain increased.  Again and again, I fished for the offending lens, sure that it was to blame, wherever it was.

Eventually, the pain stopped – but not until the ladies suggested I thoroughly wash my hands.  As soon as I did, I realized I could use my fingers to pinch around for the missing lens without adding more spice to the mix.  But even then, I couldn’t locate the lens.

Able at last to see well and think straight again, I found my glasses on the kitchen counter.  Only then did I realize the depths of my folly.  Removing the glasses had been the first thing I’d done, even before rubbing my eyes with the kitchen cloth.  I hadn’t been wearing my contact lenses that day at all.

How does one classify such an error?  You could ascribe it to my inexperience in the kitchen and my consequent failure to wash the spices off my hands.  You could ascribe it to my inexperience with contact lenses.  You could ascribe it to my bad decision-making under pressure, or to forgetfulness, or to lack of focus.  You might say that habit was to blame, as the removal of my glasses at the first sign of irritation was one of those unconscious habits so automatic we forget about them.  (In that case, lack of self-awareness about my own habits was also to blame.)  Finally, you might ascribe it to the presence of an idea – a false idea, but an idea nevertheless – that, once in command of my attention, made all the other reasons irrelevant.  The idea that a contact lens had trapped the powder had supplanted the powder itself as the culprit in need of ferreting out.  I’d entirely made up the story of the folded contact lens, but it was so graphic, so real, so painfully compelling, that it became the thing I focused on; it took command of my world.

One conclusion I draw is that it’s hard to classify reasons for error neatly into distinct types, because any one error may result from all sorts of factors.  But being human, I’m prone to think in terms of types and classifications; they help me think I better understand the world.  And when I do, I’m especially fond of this last-mentioned cause for error – the false stories we tell ourselves.   False as my story about the contact lens  was, IT was the story playing out in my head; IT created the entire world with which my conscious self interacted.  For all intents and purposes, it became my reality.

I’ve enjoyed reading what psychologists, philosophers and story-tellers have to say about the stories we tell ourselves.  I’ve especially enjoyed reading opinions about whether it’s possible for the human brain to know whether the world it perceives is “real” in any sense distinguishable from the stories we tell ourselves.  Ultimately, I don’t know whether creating these stories is the most common reason for our errors or not, but I think they’re among the most interesting.

Finally, to F. Lee Bailey, in addition to conveying my thanks for getting me to think about the reasons people may be wrong, I’d like to convey a suggestion: that, possibly, people lie more often than he supposed.  Possibly, they just do it, most often, to themselves.

— Joe


Thanks to F. Lee Bailey…

Years ago, I heard a presentation by F. Lee Bailey.  His audience was other lawyers.  His topic was impeaching adverse witnesses – that is, convincing a judge or jury not to believe them.  His premise was that people – including judges and jurors – don’t want to think other people are lying, if they can help it.  Bailey’s advice, therefore, was to avoid the beginner’s mistake of trying to convince a jury that an adverse witness is a liar, except as an absolute last resort.  Instead, he recommended, give the jury, if at all possible, other reasons for not believing the opposing witness.  He spent the rest of his talk giving examples of the different ways witnesses can give false testimony other than by lying.

Looking back on it, it was a surprising presentation from the man who, some years later, convinced the O.J. Simpson jury that Detective Mark Fuhrman was a bald-faced liar.  But when I heard Bailey’s presentation, O.J. Simpson was still doing Hertz commercials.  Bailey himself was already famous for representing Sam Sheppard, Ernest Medina, Patty Hearst and others.  His talk made a big impression on me because, in it, he offered a list of ten ways to discredit a witness other than by arguing that the witness was lying.  I can’t say it improved my legal prowess, but it did get me thinking about all the ways people simply make mistakes.

I lost the notes I took.  Unable to find Bailey’s list on-line, I attempted to reconstruct it myself, in order to do a Bailey-esque “top ten” list of my own in this forum.  But I’ve finally abandoned that effort, for reasons I suspect will become apparent.  Still, I’m interested in the variety of reasons for error, and propose to share some of my thoughts on that subject.

One obvious reason for error is simple unawareness.  An example comes quickly to mind: my lack of awareness of my oldest brother’s existence.  He was born with Down Syndrome, and before I was ever born, our parents had been convinced to send him away, to start a “normal” family as soon as possible, and to forget (if they could) that their first son existed at all.   Three children later, they found themselves unable to do so, and belatedly accepted their first son into the family.  I’ll bypass here the obvious question of whether they were wrong to accept the advice in the first place.  My example has to do with my own error in believing that I was the second child, born with only one other sibling.   My wrongness was simply that I didn’t know, as I’d never been told.  I didn’t know about our oldest sibling until I was five years old, when he first came home.  Until then, every detail of my life had suggested I had only one older brother.  Being wrong about that was simply a matter of not knowing.  As my other older brother recently pointed out, the one thing we cannot know is what we don’t know.

If simply not knowing (i.e., not having information) is one reason we can be wrong, misinterpreting information seems to be another.  Years ago, I’d just sat down after getting home late from work one evening when my dear wife Karen sat down beside me and, looking at my forehead, furrowed her brow in an expression of clear concern about whatever she saw.  Hearing her say, “Hit yourself over your right eye,” I imagined a fly or mosquito about to bite me.  To kill the insect I’d have to be fast, so I instantly swung my hand to my forehead, forgetting I was wearing glasses.  (We can count forgetfulness as another way of being wrong.)  When the open palm of my right hand smacked my forehead over my right eye, it crushed the glasses and sent them flying across the room, but not before they made a very painful impression on my eyebrow.  But the most surprising result of my obedience was Karen’s uncontrolled laughter.

Now, I thought it cruel for her to laugh when I was in pain, but when a person you love is right in front of you, laughing uncontrollably, sometimes you can’t help yourself, and you simply start laughing too (which is what I did, without quite knowing why).  My laughter just added fuel to Karen’s.  (I suppose she thought it funny that I’d be laughing, considering the circumstances.)  Then I began to laugh all the more myself, as I realized she was right, that I had no reason to be laughing; the fact that I was laughing struck me as laughable.  Neither of us could stop for what seemed like forever.

Karen, bless her heart, tried several times to explain why she’d started laughing – but each time she tried, the effort set her off again.  And when her laughing started up again, so did mine.  The encores repeated themselves several times before she was finally able to explain that when I’d sat down, she’d noticed a bit of redness above my right eye.  (Perhaps I’d been rubbing it during my drive home?)  She had simply asked, “Did you hit yourself over your right eye?”  Not hearing the first two words, I’d mistaken the question for a command.  Dutifully, and quickly, I had obeyed.

So far, I’ve mentioned simple ignorance, forgetfulness, and misinterpretation.  I might add my mistake in simply assuming the presence of an insect, or my negligence in failing to ask Karen to explain her odd command.  Actually, we begin to see here the difficulty of distinguishing among causes of error, or among ways it is committed.  Was it really that I had misinterpreted Karen’s question?  Or was it, rather, a failure of sense perception, my failure to hear her first two words?  Or was it her failure to sufficiently enunciate them?  Such questions suggest the difficulty of classifying reasons for error.  When it comes to assigning blame, people like F. Lee Bailey and me made our livings out of arguing about such things.

But I do have an example of a different sort to share.  This one also dates from the 1980s.  It represents the sort of error we commit when we have all the necessary information, and when we make no mistakes of hearing or interpretation, but we – well – let me first share the story.

One of my cases was set to be heard by the United States Supreme Court.  Now, I’m infamous for my lack of concern about stylish dress, and at that point, I’d been wearing the same pair of shoes daily for at least five years – without ever polishing them.  (Go ahead, call me “slob” if you like; you won’t be the first.)  The traditions of appropriate attire when appearing before the United States Supreme Court had been impressed upon me, to the point I’d conceded I really ought to go buy a new pair of shoes for the occasion.  So the night before my departure for Washington, I drove myself to the mall.  Vaguely recalling that there was  a Florsheim shoe store at one end – which, if memory served, carried a nice selection of men’s shoes – I parked, found the store, and began my search, surveying both the display tables in the center of the store and the tiers of shoes displayed around the perimeter.   My plan was first to get a general sense of the options available, and then to narrow the choices.  As I walked from one table to the next, a salesman asked if he could help.

I replied with my usual “No, thanks, just looking.”  As I made my way around the store, the salesman returned, with the same result, and then a third time.  (My war with over-helpful sales clerks is a story for another day.)  Finally, with no help from the salesman, I found a table with a display of shoes that seemed to suit my tastes.  I picked up several pairs, feeling the leather, inspecting the soles, getting a closer look.  The salesman was standing close by now (as if his life depended on it, in fact) and one final time, he asked if he could help.  I really didn’t want to be rushed into conversation with him.  But I took one final look around that particular display, comparing the alternatives to the pair I was holding in my hand, and finally said to the salesman, “I think I like the looks of these.  Are they comfortable?”

“You ought to know,” came the salesman’s reply.  “They’re the same shoes you’re wearing.”

Looking down at my feet, of course, I realized why I’d remembered that store from five years earlier.  At least I’d been consistent.  But when you don’t much care about the clothes you wear, you simply don’t retain information like the location of a shoe store: it’s just not important.

So one issue raised by the example is focus.  Never focus on your shoes and you’re likely to look stupid for not knowing what you’re wearing.  But right behind focus, I think, the example raises the matter of consistency.  Darn right I’d been consistent!  Because I wasn’t paying attention, I’d gone to the same mall, to the same store, to the same display case, and to the same pair of shoes, exactly as I had five years earlier.  I’ll generalize this into an opinion about human nature: when we’re not consciously focused, unconscious force of habit takes over.

Lack of conscious focus and unconscious force of habit can certainly lead to error.  But being unmindful of something is a matter of prioritizing among competing interests.  With billions of pieces of data showering us from all corners of our experience every day, we have to limit what we focus on.    In my case, it’s often clothing that gets ignored, and instead, ever since hearing F. Lee Bailey’s talk thirty-some years ago, I’ve been thinking about the reasons people can be mistaken.  Everybody has things they tend to pay attention to, other things they tend to ignore.  But among the reasons we err, I think, is the tendency to proceed, unconsciously, through the world we’re not focused on, as if we’re on auto-pilot.  Who hasn’t had the experience of driving a car, realizing you’ve reached your destination while lost in thought, having paid no conscious attention to getting there? How much of our lives do we conduct this way – and how often does it mean we might ask a question just as stupid as “I think I like the looks of these; are they comfortable?”

In my next post, I plan to explore some further types of error.  In the meantime, I’ll close here by pointing out that if you believe what you read on the internet, F. Lee Bailey ended up getting disbarred.  And unless I’m badly mistaken, he did make Mark Fuhrman out to be a liar.  But while I can recall only two or three times in my life when I told bald-faced lies, I have no problem admitting I’ve been wrong a lot more often than that.

So for now, I’ll simply thank F. Lee Bailey for helping me understand that lying is just the tip of the iceberg; and that trying to figure out how much lies beneath the surface is  a deep, deep subject – and a problematic one, to say the least.

To be continued.

– Joe


To a New Year

Several people have mentioned it’s been a while since the last WMBW post.

As it happens, I’ve written a number of things with WMBW in mind, but none have seemed worthy of posting.  You have a zillion things to digest.  I don’t want to add spam to your in-basket – especially not when the only point is that, whatever I might say, I may be wrong.

Sure, I do remain in awe of how little I know.  Of how vast is the universe of what I don’t.  Of how presumptuous I’d be to expect anyone to read what I’ve written.  But precisely for that reason, my sentences remain on my hard drive, in unsent files.  And for precisely that reason, all that emerges, like a seedling from a crack in the pavement underfoot, is my wish that in the New Year to come, I learn as much as I can about myself, my world, and the people around me.

That, my friends, is all I think worth saying — and that I wish the same for all of us.

—Joe


Happy Halloween

Since we planned to be out of town for Halloween this year, we produced our annual Haunted House last night, a bit before official trick-or-treat night.  “We” means me and my volunteer crew, of which, this year, there were thirteen members.  What an appropriate number for a Haunted House!

As usual, it took several weeks back in September for me to get psyched.  First, I had to stop thinking about my other projects.  I had to come up with a theme, decide on characters, scenes, and devices, and develop a story line in my mind, imagining the experience our visitors would have, before I could drive the first nail.  As I created the structure that defined the maze-like path to be followed, as I shot each staple into the black plastic walls intended to keep visitors’ footsteps and focus in the right direction, as I adjusted the angle and dimness of each tea light to reveal only what I wanted to reveal, the construction of the house gradually drew me into the scenes and characters I was imagining.  And as usual, now that “the day after” has arrived, I’ve awoken before the sun rises, my mind crawling with memories of last night’s screams and laughter.  I try to go back to much-needed sleep, but thoughts of next year’s possibilities get in the way.  It’s the same old story.  Once my mind gets psyched for the Haunted House, it starts to wear a groove in a horrific path; now, it will take something powerful to lift it out of that groove.

I wish I’d done more theater in my life.  I suppose some of my elaborate practical jokes might lay claim to theater.  I’ve even tried my hand at a few crude movies of the narrative, “artsy” sort.  But mostly, it’s been novels and haunted houses.  I suppose I’ve wanted to tell stories with pictures and words ever since I was a kid.  It’s how I’ve always imagined who I am.

In my efforts to be a better writer, I’ve read much on the craft of writing, from popular books like Stephen King’s On Writing to academic tomes like Mieke Bal’s Narratology.  But among the ghouls and monsters on my mind this dark morning comes the memory of a book on writing I read a few years back, one by Lisa Cron called Wired for Story.  That book makes the point that human brains have evolved to give us a highly developed capacity – indeed, a need – to think in terms of stories, and that we’re now hard-wired to do so.

The opening words of Ms. Cron’s book set the neurological stage:

“In the second it takes you to read this sentence, your senses are showering you with over 11,000,000 pieces of information.  Your conscious mind is capable of registering about forty of them.  And when it comes to actually paying attention?  On a good day, you can process seven bits of data at a time…”

Cron’s book goes on to describe how the very success of our species depends on our capacity to translate overwhelming experience into simple stories.  I don’t know the source, or even if it’s true – maybe from The Agony and the Ecstasy? — but Michelangelo is said to have observed that when he sculpted, he didn’t create art, he just removed everything that wasn’t art.  In my own writing, I’ve come to realize how true that is.  Research produces so many pieces of data, and because I find it fascinating, my temptation is to share it all with my readers.  But thorough research is a little bit like real life, which is to say, like Cron’s 11,000,000 pieces of information.  That much information simply doesn’t make a story, any more than the slab of marble Michelangelo started with makes art.

Our brains are not wired to deal with such overloads, but to ensure our survival, which they do by “imagining” ourselves in hypothetical situations, scoping out what “might” happen to us if we eat the apple, smell the flower, or step in front of the oncoming bus.  Every memory we have is similarly a story – not a photographic reproduction of reality, but an over-simplified construct designed to make sense of our experience.  Think of what you were doing a minute before you started reading this post.  What do you remember?  Certainly not every smell, every sound, every thought that crossed your mind, every pixel of your peripheral vision.  What you remember of that moment is a microcosm of what you remember about your entire life.  Sure, you can remember what you were doing September 11, 2001, but how many details of your own life on that infamous day could you recall, if you devoted every second of tomorrow to the task?  And that was a very memorable day.  What do you recall of  September 11, 1997?  Chances are you have no idea of the details of your experience that day.  The fact is, we don’t remember 99.99% of our lives.  All we remember are the pieces of the narrative stories we tell ourselves about who we are, which is to say, what our experiences have been.

The same holds true about our thoughts of the future.  As we drive down the road, we don’t forecast whether the next vehicle we pass will be a blue Toyota or a green Chevy.  We do, however, forecast whether our boss will be angry when we ask for a raise, or whatever might happen that’s important to us when we arrive at our destination (which is, usually, a function of why we’re going there).  Whether we’re thinking about the past, the present, or the future, we see ourselves as the protagonist in a narrative story defined by the very narrow story-view we’ve shaped to date, which includes our developing notions of what’s important to us.  Our proficiency at doing this is what has helped us flourish as a species.  This is why photographers tend to see more sources of light in the world, and painters more color, while novelists see more words and doctors see more symptoms of illness.  The more entrenched we are in who we’ve become, the more our perception of reality differs from everyone else’s.

Understanding ourselves as hard-wired for dealing with simple, limited stories rather than the totality of our actual experience – not to mention the totality of universal experience – has important ramifications for self-awareness.  As the psychologist Jean Piaget taught us, from our earliest years, we take our experiences and form conclusions about the patterns they appear to represent.  As long as new experiences are consistent with these constructs, we continue interpreting the world on the basis of them.  When a new experience presents itself that may not fit neatly into the pattern, we either reject it or (often with some angst) we begrudgingly modify our construct of reality to incorporate it.  From that point forward, we continue to interpret new experiences in accordance with our existing constructs, seeing them as consistent with our understanding of “reality” (as previously decided on) whenever we can make them fit.

And so, from earliest childhood, we form notions of reality based on personal experience.  The results are the stories we tell of ourselves and of our worlds, stories which have a past and which continue to unfold before us.  As Cron points out, we are the protagonists in these stories.  And I’d like to make an additional point: that in the stories we tell ourselves, we are sometimes the heroes.  We are sometimes the victims.  But unless we are psychopathic, we are rarely, if ever, the villains.

There are, of course, plenty of villains in these stories, but the villains are always other people.  In your story, maybe the villains are big business, or big government; evil Nazis or evil communists; aggressive religious zealots, cold-blooded, soul-less atheists, or even Satan himself. It could be your heartless neighbor who lets his dog keep you up all night long with its barking, or the unfeeling cop who just gave you that unjust speeding ticket.

As you think of the current chapter of your life story, who are the biggest villains?  And are you one of them?  I doubt it.  But I suggest asking ourselves, what are the stories the villains tell about themselves?  What is it that makes them see themselves as the heroes of their stories, or the victims?  Isn’t it reasonable to assume that their stories make as much sense to them as our stories make to us?

We have formed our ideas about reality based on our own experiences, because they make sense to us.  Indeed, our stories make sense to us because they are the only way we can get our minds around a reality that’s throwing 11,000,000 pieces of information at us every second of our waking lives.  We live in a reality of mountain ranges, full of granite and marble.  Michelangelo finds meaning in it by chipping away everything that isn’t The Pieta, Auguste Rodin by chipping away everything that isn’t The Thinker.  When they find meaning in such small samples of worldwide rock, is it any wonder they see reality differently?

Psychologists tell us that self-esteem is important to mental health, so it’s no wonder that in the stories we tell ourselves, we are the heroes on good days, the victims on bad ones, and the villains only every third leap year or so.  Others are the normal villains.  But if I’m your villain, and you’re mine, then we can’t both be right – or can we?  An objective observer would say that your story makes excellent sense to you, for the same reasons my story makes excellent sense to me.  Both are grounded in experience, and your experience is quite different from mine.  Even more importantly, I think, your “story” represents about 7/11,000,000th of your life experiences while my story represents about 7/11,000,000th of mine.

But confirmation bias means that we fight like heck to conform new experience to our pre-existing stories.  If a new experience doesn’t demand a complete re-write, we’ll find a way to fit it in.  It’s like we’re watching a movie in a theater.  If some prankster projectionist has spliced in a scene from another movie, the whole story we’re watching makes no sense to us and sometimes we want to start over, from the beginning.    If our stories are wrong, our entire understanding of who we are and how we fit in becomes a heap of tangled film on the projection room floor.

One of the things I love about Halloween is how it lets us imagine ourselves as something different.  I mean, Christmas puts our focus on Jesus or Santa Claus, role models to emulate, but their larger-than-life accomplishments and abilities are distinctly other than the selves we know.  Mother’s Day and Valentine’s Day encourage us to focus on other people.  Halloween is the one holiday that encourages us to pretend to be something we’re not – to put aside our existing views of the world “as it really is” and become whatever our wildest imaginations might see us as.  I think that’s why I like it so much.  Obviously, I’m not really a vampire ghoul from Transylvania, but when my current worldview is based on a tiny, 7/11,000,000th slice of my own personal experience, how much less accurate can that new self-image be?

I think of “intelligence” as the ability to see things from multiple points of view.  The most pig-headed dullards I know are those who seem so stuck in their convictions that they can’t even imagine the world as I or others see it.  I tend to think that absent the ability to see things from multiple points of view, we’d have no basis for making comparisons, no basis for preferences, no basis for judgment, and therefore, no basis for wisdom.

Halloween is the one time of year I really get to celebrate my imagination, to change my story from one in which I’m hero or victim to one in which I’m a villain.  As I try to see things from a weird, offbeat, or even seemingly evil point of view, I get practice in trying to see things as others see them.  For me, it seems a very healthy habit to cultivate.

But I must end on a note of caution.  As someone who tries to tell stories capable of captivating an audience, I am keenly aware of a conflict.  As the dramatist, my goal is to channel your experience, your thoughts, your attention, along a path I’ve staked out, to an end I have in mind.  When I’m successful, I create the groove.  My audience follows it.  In this respect, good story-telling, when directed toward others, is a form of mind control.

But what about story-telling to oneself?  It’s probably good news that in real life, there isn’t just one Stephen King or Tom Clancy trying to capture your attention or lead you to some predetermined goal.  Every book, movie, TV commercial, internet pop-up ad, billboard, preacher, politician, news reporter, self-help guru and next door neighbor has a story to tell, and wants you to follow it.  The blessing of being exposed to 11,000,000 pieces of information every second is that we’re not in thrall to a single voice trying to control the way we see the world.  But does this mean we’re free?  The reduction of the world’s complexity into a single world-view is a story that IS told by a single voice – our own.  All of our individual experiences to date have been shaped by our brains into a story, a story in which we are the heroes and victims.  The most powerful things that seek to control our views of the world are those stories.  We’ve been telling ourselves one since the day we began to experience reality.  My own?  Since early childhood, I have seen myself as a story-teller.  Since September, the imminence of Halloween has forced me, almost unwillingly at first, to focus on my annual Haunted House.  At first, it was hard.  But in just a few weeks, the themes, characters, scenes, and devices of this story took such a hold on me that I woke up this morning unable to think of anything else.

Such are the pathways of our minds.  If my thoughts can be so channeled in just a few weeks, how deep are the grooves I’ve been cutting for over sixty years?  Am I really free to change the story of my life, or am I the helpless victim of the story I’ve been telling?

This week, try imagining yourself as something very different.  Something you’d normally find very weird, maybe even distasteful.  But remember – don’t imagine yourself the villain.  Imagine yourself, in this new role, as part hero, part victim. Get outside your prior self, and have a Happy Halloween.

— Joe


Knowing Right from Wrong

Two items I heard on the radio yesterday struck me as worthy of comment.

First was the news of Sunday night’s tragedy in Las Vegas.  Questions of motive apparently loom large. President Trump first called the shooter  “pure evil.” Now he’s saying the shooter was “very, very sick.”

I also heard yesterday that the Supreme Court would soon be deciding a case involving a woman sent back to jail because she tested positive on a drug test, a result that violated the terms of her parole.  Her lawyer is apparently arguing that the action amounts to re-incarceration due to a “disease” (addiction), and is therefore unconstitutional.  My own reaction is that the woman wasn’t incarcerated for having an illness (her addiction) but for something she did (use drugs, and test positive on a drug test).  But the fact that the woman’s conduct arguably sprang from her illness/addiction leads me to compare her to the Vegas shooter.  Ultimately, the question becomes whether an offense that results from “sickness” is excusable, and whether it can be distinguished from an offense that results from something else, something that is not some sort of sickness – “evil,” perhaps.  If so, then all we have to do is figure out the difference between evil and sickness.

While I’m at it, allow me to throw in the killing of Osama Bin Laden, just to round out the analytical field.  By the killing of Osama Bin Laden, I mean both the killing he ordered and the killing that finally brought him down.  Premeditated.  Innocent lives lost in the process.  Evil?  Justifiable?  Sickness?  Other?

There’s nothing particularly new about such questions.  They take us back to the legal requirements for justifiable homicide.  To the religious doctrine of the just war.  To the philosophical question of whether an end ever justifies a means.  To the debate over determinism and free will.  All these issues have defied resolution for centuries.  I have my opinions, but instead of advancing them here, I’d like to use them as the background for raising two other matters that have been on my mind.

The first, I’ll call the question of knowledge.  When I studied Latin in school, I learned the distinction between two Latin verbs, cognoscere and scire.  When I studied French, I encountered the same difference between two French verbs, connaitre and savoir, which evolved from the Latin.  All four verbs are translated into English as “to know.”  But in both Latin and French, a distinction is observed between knowing in the sense of being somewhat familiar with something, and knowing in the sense of being aware of a fact or a field of knowledge, authoritatively, or with certainty.  In Latin and French, if you want to say you “know” your neighbor, you use the word cognoscere or connaitre, because you really only mean to say you’re somewhat familiar with her.  But if you want to say that you know your own name, or where you live, or the words of the Gettysburg Address, you use scire or savoir, to assert that you have essentially complete and authoritative knowledge of the subject.

These two types of knowledge seem rather different from each other.  For many years, I thought it a shame that the English word “to know” gets used to cover both types.  I thought it important to distinguish between those situations in which we really know something and those in which we simply have a passing familiarity, and I found English lacking due to its failure to make that distinction.   But now, I think differently.  Now, I question whether we really know anything with certainty.   If we can’t see all four sides of a barn simultaneously, how can we say we “know” the barn, as opposed to being familiar with just one aspect of it?  Is the most we can ever say about anything  that we are somewhat familiar with it?  If there really is just the one sort of knowledge, then maybe we’re right to have just one word for it.  Maybe the Romans and French were wrong to think both types of knowledge possible.

Meanwhile, what do we mean by right and wrong?  Mostly, I’ve been thinking about politics in this regard, not drug use or homicide.  I’ve been wondering whether terms like right and wrong should be abandoned altogether when it comes to politics.  I mean, every political issue I can think of seems to me to be more easily analyzed in terms of what (if any) group benefits, versus what (if any) group gets hurt.   Is it more accurate to say that a policy or practice is “right” when viewed from one group’s perspective, and “wrong” when viewed from another?

Take, for example, immigration reform.  You might argue that tightening controls favors those who already live in a country, and disfavors those who want to enter it.  Assuming that’s true, would that make the tightening right, or wrong?  Doesn’t it depend on whose perspective you’re adopting?

Arguably, capital punishment hurts convicted murderers while benefiting taxpayers who would otherwise bear the costs associated with long prison terms.  We can argue about deterrence, and whether capital punishment deters future criminals and therefore benefits potential future victims.  But what does it mean to argue that capital punishment is “right” or “wrong”?  The simplistic precept “It is wrong to kill” either condemns all killing, including the killing of Osama Bin Laden, or it provides no answer at all, because the real question is when it is right to kill and when it isn’t.  I have the same question about higher taxes, about the Affordable Care Act, about environmental regulations, and about every other political issue I can think of.  “Right” and “wrong” seem too absolute to be helpful in understanding complex tradeoffs which may well benefit some groups while hurting others.

I can follow a discussion pretty well when it’s phrased as a discussion of what groups will arguably benefit from some policy or proposal, and what groups (if any) will be hurt.  But I have difficulty when the same debate is phrased in terms of what’s “wise” or what’s “sound policy,” because it seems to me always to come back to “wise for whom?”  Immigration reform might be good for the American economy, but is it good for the rest of the world?  Obamacare may benefit those who have preexisting conditions or who are poor and unhealthy, but not those who are healthy or wealthy.  Is a Pennsylvania law “wise” if it helps Pennsylvanians but hurts New Yorkers?  Is an American policy “wise” because it helps Americans, even if it hurts Russians, Filipinos, or Cubans?

It may help us express how disturbed we are by the shooting in Las Vegas if we call it “pure evil,” but I don’t see it that way.  (Frankly, I don’t know what “pure evil” means.)  Rather, it seems to me we all have personal points of view, which is to say, minds that tell us stories.  In those stories, we ourselves are often the unappreciated heroes.  In some other stories, we may be the victims.  But in how many stories are we purveyors of unadulterated wrong?  I believe that the Vegas shooter told himself a story in which he was a hero, or a victim, or both.  And if we do things because they make sense to us, in the context of the people, values, religion or nation with which we identify, and in the context of the stories we see ourselves acting in, then do we have anything more than a subjective point of view, a limited perspective incapable of assessing a more objective or universal wisdom about right and wrong?  I think we all suffer from genuine mental impairments – if not anything as egregious as sociopathic aggression or drug addiction, then more common ailments like self-interest, self-delusion, arrogance, bad habit, confirmation bias or simply poor judgment resulting from our fallibility.  At best, we have a passing familiarity with right and wrong, not authoritative knowledge of it.  At worst, we are all sick, and so occupy ground not entirely unlike that of the Las Vegas shooter or the drug addict.

Maybe it’s time to stop applying the litmus test of good versus evil.  To recognize instead that what benefits one person may hurt another.  That when our government incarcerates an addict, storms a deranged mass shooter’s hotel room, or takes the life of a militant leader, we are not making God-like moral judgments that one person is “good” and another “pure evil,” but simply making practical tradeoffs to protect certain interests at the expense of others.  And maybe, in the next political discussion we have, it’ll prove helpful to stop talking about who and what are wrong, and start talking about who will likely benefit and who will be hurt.

My hunch is that the Vegas shooter saw something as pure evil – and that whatever it was, it wasn’t himself.  His idea of evil was likely different from ours.  Indeed, he may have considered us as examples of pure evil. We’re wired to think we’re somehow different from him;  that, unlike him, we know the difference between right and wrong.  At times like these, in the face of senseless atrocity, it’s easy to feel that way, to see a fundamental difference between him and us:  After all, we say smugly, we would never indiscriminately kill scores of people.

But we killed over six hundred thousand in our civil war.  We killed a hundred thousand at Hiroshima. We’ve killed in Vietnam, Iraq, and Afghanistan.  In a few weeks, when the Vegas shootings are no longer front page news, we’ll be calling each other stupid, or evil, or just plain wrong, as if we have nothing in common with the Vegas shooter.  As if we have the unerring ability to identify what’s right and wrong, and to do so with the full understanding the Romans and French called scire and savoir.

Different as we may be in other respects, I say we all suffer from that disease.

Families of victims in Vegas, you’re in our thoughts and prayers.

– Joe


Top Ten Blunders – Your Nominations

A month ago, I asked for your thoughts about the greatest blunders of all time.  I was thinking of blunders from long ago, especially “a list that considers only past human blunders, removed from the passions of the present day.”  I observed, “My special interest lies in blunders where large numbers of people… have believed that things are one way, where the passage of time has proven otherwise.  I believe such a list might help remind us of our own fallibility, as a species…”

I got only five nominations.  (I imagine the rest of you are simply reluctant to nominate your own blunders.  But hey.  All of us have done things we’d rather our children not hear about.)  As for those of you who did respond, I’m grateful for your nominations, even if they do imply that blame lies elsewhere than ourselves.  The five I received are certainly food for thought.

One was, “Founding Fathers missed huge by not imposing term limits.”  According to a recent Rasmussen opinion poll, 74% of Americans now favor term limits, with only 13% opposed.*  One could argue the jury is in: the verdict being that the Founding Fathers should have imposed term limits.  That said, with the average length of service in the U.S. House being 13.4 years, we routinely return our representatives for seven or more consecutive terms.  And Michigan voters have sent John Dingell back to Congress for over fifty-seven years, even longer than his father’s decades of service before him.  Do they feel differently about term limits in Michigan?  If the Founding Fathers’ failure to impose term limits was a great blunder, don’t the American voters make a far greater blunder every two years when they send these perennial office holders back to Washington, year after year?  I mean, it’s at least arguable that the Founding Fathers were right in failing to impose term limits.  But who can deny the hypocrisy when an electorate that favors term limits (that means us, folks) does what it would itself prohibit?  Millions of Americans today are either wrong in favoring term limits, or wrong in re-electing the same Congressmen over and over again – and surely wrong in doing both simultaneously.  At least if measured by the number of people involved, the blunder we commit today strikes me as greater than that committed by a handful of wigged men in 1789.

A second nomination: “Y2K has to be in the top 20?”  That one sure brings a smile to my face.  You remember the “experts’” predictions of the global catastrophe we’d see when all those computers couldn’t handle years starting with anything but a 1 and a 9.  Then, when the time came, nothing happened.  I don’t know of a single problem caused by Y2K.  If judged by the certainty of the so-called experts, and the size of the gap between the predicted calamity and what actually transpired, Y2K clearly deserves recognition.

But compare Y2K to other predictions of doom.  There can be no predicted calamity greater than the end of existence itself.  Wikipedia’s article “List of Dates Predicted for Apocalyptic Events” includes 152 dates that have been predicted for the end of the world.  And they haven’t been limited to freakish fringes of society.  Standouts include Pope Sylvester II’s prediction that the world would end on January 1, 1000, Pope Innocent III’s that it would end 666 years after the rise of Islam, Martin Luther’s prediction that it would end no later than 1600, and Christopher Columbus’s that it would end in 1501.  (When that year ended successfully, he revised his prediction to 1658, long after he’d be dead; he apparently didn’t want to be embarrassed again.)  Cotton Mather’s prediction of 1697 had to be amended twice.  Jim Jones predicted the end in 1967, and Charles Manson in 1969.  My favorite on Wikipedia’s list dates from May 19, 1780, when “a combination of smoke from forest fires, a thick fog, and cloud cover” was taken by members of the Connecticut General Assembly as a sign that the end times had arrived.  (It’s my favorite because it may help explain why the founding fathers saw no need for term limits.)  But fully half of the Wikipedia list consists of predictions made since 1900.  Over twenty-five have come since the Y2K blunder.  The recent predictions include one from a former Presidential candidate (Pat Robertson), who predicted the world would end in 2007.  And though not yet included by Wikipedia, last month’s solar eclipse brought out yet more predictions of the end of the world – never mind that only a tiny fraction of the earth’s surface was in a position to notice it.  (Would the world only end across a thin strip of North America?)

We can laugh at Christopher Columbus, but what of the fact that the list of doomsday prophecies continues to grow, despite how often the doomsayers have been wrong?  Measured by the enormity of the subject matter and the apparent widespread lack of concern about being “twice bitten,” man’s fondness for predicting when the world will end as a result of some mystical interpretation of ancient texts strikes me as a bigger blunder than Y2K – and unlike Y2K, it shows no sign of going away.

A third nomination: “The earth is flat.”  The archetypal human blunder.  Months ago, while struggling to think of other blunders as egregious, I was led by Google to a Wikipedia article on “the flat earth myth,” which I assumed was exactly what I was looking for.  But to my dismay, I read that the “flat earth myth” is not the old belief that the world was flat; rather, it is the current, widely-held belief that people in the Middle Ages believed the earth to be flat!  I’d spent a lifetime feeling proudly superior to the ignorant medieval masses.  Was it me, after all, who was wrong?

My discovery reminded me of the difficulty of ranking human error.  The article asserted that throughout the Middle Ages, the “best minds of the day” knew the earth was not flat.  The “myth” was created in the 17th century, as part of a Protestant campaign against Catholic Church teachings, and accelerated by the fictional assertion in Washington Irving’s popular biography of Christopher Columbus that members of the Spanish court questioned Columbus’s belief that the earth was round.  Gershwin’s unforgettable “They all laughed at Christopher Columbus…” etched the myth forever in our minds.  The article quotes Stephen Jay Gould: “[A]ll major medieval scholars accepted the Earth’s roundness as an established fact of cosmology.”  The blunder wasn’t a relic of the Middle Ages, but an error of current understanding based on a post-Enlightenment piece of popular fiction!

Meanwhile, the Flat Earth Society lives on to this day.  Their website, at theflatearthsociety.org, “offers a home to those wayward thinkers that march bravely on with REASON and TRUTH in recognizing the TRUE shape of the Earth – Flat.”  Most of them, I think, are dead serious.  But wait.  Which is the greater blunder: that of the medieval masses who saw their world as a patchwork of coastlines, rolling hillsides, mountains, valleys, and flat, grassy plains?  Or that of the experts, the major scholars who “knew” in the Middle Ages that the earth was a sphere?  The earth is not a sphere at all, we now know, but a squat, oblate shape that bulges around the equator because of the force of its spin.  Or is that an error, too?  Need we mention that spheres don’t have mountains and valleys?  Need we mention that the surface of the earth, at a sub-atomic level, is anything but curved?  Aren’t all descriptions of the earth’s shape simply approximations?  And if we can accept approximations on the basis that they serve a practical purpose, then is the observable general flatness of the earth today any more “wrong” than a medieval scholar’s belief in sphericity?  Who really needs to know that the atoms forming the surface of the earth are mostly empty space?  The “wrongness” in our concepts of earth’s shape isn’t static, but evolving.

The oldest of the historical blunders nominated for WMBW’s top ten list have an ancient, scriptural flavor.

The first: “The number one thing that went wrong with humanity [was] when the first man said to another, ‘I think I heard god last night!’ and the other believed him.”**

The second comes from a different perspective: “The greatest blunder had to be Eve eating of the fruit of the tree of knowledge, having been tempted to be like God, deciding for herself what is good and what is evil.  Every person [becomes] his own god. The hell of it is, everyone decides differently, and we’re left to fight it out amongst ourselves.”**

The other three nominators thought that Y2K, belief in a flat earth, and failure to impose term limits should be considered for a place somewhere on the top ten list.  (Actually, Y2K’s sponsor only suggested it belonged somewhere in the top 20.)  But the two “religious” nominations were each called the biggest blunder of all.  (One was “the number one thing,” while the other “had to be” the greatest blunder.)   What is it about belief in God that prompts proponents and opponents alike to consider the right belief so important, and holding the wrong one the single greatest blunder of all time?

If you believe in God, though He doesn’t exist, you’re in error, but I don’t see why that error qualifies as the greatest blunder of all time, even when millions suffer from the same delusion.  I remember seeing an article in Science Magazine a few years ago, surveying the research that has attempted to determine whether believers tend to act more morally than non-believers.  Most of the studies showed little or no difference in conduct between the two groups.  For those who don’t believe in God, isn’t it one’s conduct, not one’s belief-system, that is the best measure of error?  For them, why does belief even matter?

If you don’t believe in God, though He does exist, you face a different problem.  If you believe as my mother did – that believing in God (not just any God, but the right God, in the right way) means you’ll spend eternity in Heaven rather than Hell – it’s easy to see why being wrong would matter to you. If roughly half the people in the world are headed to eternal damnation, that’s at least a problem bigger than term limits.

But there is a third alternative on the religious question.  If you’ve looked at the WMBW Home Page or my Facebook profile, you may have noticed my description of my own religious views – “Other, really.”  One of the main reasons for that description is pertinent to this question about the greatest blunders, so I will identify its relevant aspect here: “If God exists, He may care about what I do, but He’s not so vain as to care whether I believe in Him.”  My point is not to advance my reasons for that belief here, simply to point out that it may shed light on why many rank error on the religious question so high on the list of all-time blunders, while I do not.  Many believers, I think, believe it’s critically important to believe, so they try hard to get others to do so.  Non-believers react, first by pointing to examples of believers who’ve been guilty of wrongdoing, and eventually by characterizing the beliefs themselves as the reason for the wrongdoing.  In any case, the nominations concerned with religious beliefs were offered as clearly deserving the number one spot, while our “secular” nominations were put forward with less conviction about where on the list they belong — and that difference may have meaning, or not.

In my solicitation, I acknowledged the gray line between opinion and fact.  To some believers, the terrible consequences of not heeding the Word of God have been proven again and again, as chronicled throughout the Hebrew Bible.  To some non-believers, the terrible consequences of belief have been proven by the long list of wars and atrocities carried out in the name of Gods.  Whichever side you take, the clash of opinions remains as strong as ever.

So, what do I make of it all?  Only that I’d hoped for past, proven blunders which might remind us of our great capacity for error.  Instead, I discover evidence of massive self-contradiction on the part of the current American electorate; a growing list of contemporaries who, as recently as last month, are willing to predict the imminent end of the world; my own blunder, unfairly dismissive of the past, fooled by a piece of Washington Irving fiction; and a world as divided as ever regarding the existence of God.

To this observer, what it all suggests is that there’s nothing unique about the ignorance of past ages; and that an awfully large chunk of modern humanity is not only wrong, but likely making what some future generation will decide are among the greatest blunders of all time.

Sic transit gloria mundi.

–Joe

*http://www.rasmussenreports.com/public_content/politics/general_politics/october_2016/more_voters_than_ever_want_term_limits_for_congress

** I’ve done some editing of punctuation in both of these nominations: I apologize if I’ve thereby changed the submitter’s meaning.

 


Asking the Ad Hominem Question

I generally wince when someone debating Topic X starts talking about his opponents, giving reasons he thinks his opponents believe as they do, trying to discredit their position by psychoanalyzing the reasons they hold it or expressing his disapproval of “the sort of people” who hold such positions.  It’s typically a variant of a thought analyzed well by Kathryn Schulz in Being Wrong: “I think the way I do because all evidence and logic support me; the only reason you think the way you do is because you suffer from… [here, fill in the blank.]”  As I see it, such ad hominem arguments are often the resort of those unable to make a good argument on Topic X itself.  Moreover, by making the debate personal, the ad hominem debater usually comes across as insulting, and that’s a sure-fire recipe for things to get ugly quickly.

I think it’s quite different to pose an ad hominem question to oneself.  Asking ourselves why we believe what we do, when others don’t agree with us, can be a mind-opening exercise.  (In case it’s not clear, “I believe what I believe because it’s true, and others disagree because they’re stupid” doesn’t count.)

Allow me to offer an example.  Having gotten some flak from readers for my thoughts about Charlottesville, I decided to ask myself the ad hominem question in an effort to understand why I favor removing statues of Confederate generals from public squares, when others don’t.  What is it about my background that causes me to favor such removal?

I’m pretty sure it was my career as an employment lawyer, a capacity in which I was often asked to advise employers on diversity issues and strategies for legally maintaining a dedicated, harmonious, loyal (and therefore productive) workforce.    Many of my clients experienced  variations on a problem I’ll call cultural conflict in the workplace, by which I don’t mean conflict between employer and employees, but among employees themselves.

The conflict involved was often racial, religious or gender-based.  For example, one company piped music from a radio station into its warehouses, only to discover that one group wanted to listen to a country music station, another a Latin station, and a third an R&B, Hip-Hop or Motown station.  Each group claimed it was being discriminated against if it didn’t get its way.  Another variant of the problem arose when assembly line workers came to work wearing T-shirts that other employees found offensive — one T-shirt featured a burning cross; another, the picture of a person wearing a white sheet and hood while aiming a gun at the viewer; another featured the “N” word; still others featured raised fists and the words “Black Power,” or implied a revolution against “white rule.”

A frequent variant on the “culture conflict” problem involved office environments where employees shared cubicles and wanted to decorate them with words or images that their neighbors or cubicle-mates found offensive.  In one case, a Christian employee began to hang skeletons, ghouls, devils and demons all over a shared cubicle, beginning in August, in preparation for Halloween.  Her Christian cubicle-mate believed that celebrating Halloween at all was the work of the devil, and countered by hanging crucifixes, pictures of Jesus, manger scenes and Bible quotations on the shared cubicle wall, saying that devil worshipers would go to Hell.  A third resident of that same cubicle corner — the one who actually complained to management — had religious convictions that prohibited the celebration of any holidays or the use of any religious imagery at all, on the grounds that all of it was idol-worship; she wanted it all removed.

Perhaps the most common variant of the culture conflict arose in workplaces where male employees wanted to hang calendars or other pictures of naked (or scantily clad) women, while women (and some men) objected on the grounds that the pictures made the working environment illegally offensive.

In one case, there was already a racially charged atmosphere: a group of white ‘good ole boys’ always ate at one lunch table while blacks ate at another.  There’d been some mild taunting back and forth, but nothing too serious, when one day, several of the white employees started “practicing their knot tying skills” by making rope nooses in plain view of the blacks at the other table.  The blacks saw an obvious message which the whites of course denied.

In all such cases, the employer was left to decide what to do.  There were difficult legalities to deal with.  Some employers tried to address the problem by declaring that employees could post no personal messages on company property (like cubicle walls), but could post what they wanted on their own property (their lunch boxes, tool boxes, T-shirts, etc.).  But the public/private property distinction didn’t end the problem.  Someone who brings a swastika and a “Death to All Jews” decal to work on his lunchbox is an obvious problem for workplace harmony, regardless of what the law says about it.

Surely, my background in this area shapes my views about cultural conflict regarding statues in public squares.  And I think what decided my position on statues was that such problems arose among my clients scores of times, yet never once was it the employer itself that wanted to post the material, wear the T-shirt, celebrate the holiday, practice tying knots, or whatever.  It was always a question of playing referee in the conflict between opposing groups of employees.

I believe it’s by analogy to that situation that I instinctively consider the problem faced by a government body deciding what or whom to memorialize in the public square.  I don’t claim it’s an easy task.  But if a company or city had ever asked me whether I thought it ought to hang crucifixes in its cubicles, display a picture of the devil in its lunchroom, hang a Confederate flag or a Playboy centerfold in the conference room, or have its managers fashion nooses during an employee meeting, I’d have been flabbergasted.  It’s not that Robert E. Lee is like Satan, or Jesus, or a Playboy centerfold, if we’re talking moral qualities, or what OUGHT to be offensive.  Rather, it’s the fact that, in my experience, all that mattered to the employers was that some of their employees considered the displays offensive.  When a display was controversial, it was viewed as a problem.  And without exception, my clients took pains not to introduce controversial images themselves.

In abstract theory, I can imagine that some symbols or ideas might be so important to the common good that an employer (or city council) should celebrate them, despite their being divisive.  (A statue of the sitting President?)  But in the case of Confederate generals who fought to preserve an institution that has been illegal for 150 years now, my own cultural background — including my work experience — gives me no clue as to what their countervailing importance might be.

Anyway, I really do wince when people make ad hominem arguments against their opponents, but I like asking the ad hominem question of myself.  Whatever you think about Confederate generals, I’d love to hear from you if you’ve given the thought-experiment a try, especially if it has helped you understand differences in points of view between yourself and others.

— Joe


Thoughts About Charlottesville

Last week’s tragedy in Charlottesville has touched close to home here in Richmond, the capital of the old Confederacy.  Lt. H. Jay Cullen was one of two police officers killed in the effort to restore peace.  His viewing is tonight; his funeral is tomorrow.  My optometrist is attending because she serves as a delegate to the state house.  My daughter is attending because she’s a former co-worker and friend of Lt. Cullen’s wife.  Amidst the grief and mourning, the firestorm of what passes for debate regarding the whole affair cries out for a WMBW perspective.

A few months back, when the removal of four Confederate statues in New Orleans was in the news, my own thinking distinguished among the statues.  I thought the removal of some made sense, but not others.  I was struck by the fact that no one else seemed to consider them as separate cases.  Everyone seemed to have adopted an all-or-nothing posture: either you were for, or against, the “removal of the statues,” as if alignment with one side or the other mattered more than considering the merits of each statue on its own.  Was I the only one in my circle who saw a middle ground?  I still worry about a group-think tendency to align entirely with one side or the other.  Such polarizing alignment seems to me precisely what led to the Civil War in the first place.  But in the meantime, Charlottesville has caused me to consider the matter anew – and I’ve decided I was wrong about the statues in New Orleans.

I approved of the removal of most of the New Orleans statues, but felt otherwise about the statue of Robert E. Lee.  My opposition was on the ground that Lee was a good (if imperfect) man and that to remove his statue did an injustice, both to history and to him personally.  Now, I believe that I was wrong about the Lee statue, and I’m moved to explain why.

First, let’s consider what history tells us about the man in relation to slavery.  While historians disagree on certain details, it seems clear that Lee personally ordered the corporal punishment of slaves who resisted his authority.  Today, all but the most extreme white supremacists can agree that this was wrong.  Of course, Lee’s treatment of his slaves was not remarkably better or worse than the racism of thousands of other white men who owned slaves in those days; he apparently believed what most white Southerners (as well as many in the North) believed: that the Bible made it their Christian duty to “look after” African Americans.  And for Lee, as for most slave-owners, this paternalistic attitude included both kindness (especially as a reward for loyalty and good work) and the infliction of severe corporal punishment (as a deterrent to disobedience).  These days, it’s extremely hard to understand how so many people could have been so wrong, but hundreds of thousands of Lee’s white peers thought and acted as he did in their relation to black Americans.  This conduct was widespread; it shames us all.  While being widespread doesn’t justify what Lee did, it makes it a lot easier for me to recognize that Lee was much like the rest of us: i.e., capable of well-intended conduct that future generations may condemn as fundamentally, grievously wrong.

My admiration for Lee – which continues – comes despite his participation in the injustice of slavery.  I also admire slave-owners like George Washington and Thomas Jefferson (despite the fact that Jefferson described African Americans as having “a very strong and disagreeable odor,” a capacity for love limited to “eager desire” more than sentiment, and a capacity for reason he insisted was far inferior to that of the white man).  I admire these white men for the good that they did, despite my recognition that they were so grievously wrong about African Americans and slavery.

Lee, the man, was more than a participant in the repugnant institution of slavery.  He was a great military strategist.  He was a man who sacrificed his personal welfare for what he saw as his duty to his country.  And perhaps most importantly for me, he became a significant force for reconciliation after the war.  When the government of the Confederacy collapsed and its armies surrendered, many Southerners wanted to continue the fight for slavery on an underground, guerrilla-warfare basis.  This stubborn, “never-say-die” sentiment led to the formation of the K.K.K., and to the worst atrocities of the Reconstruction era.  Indirectly, it led to the current existence of hate groups like the Nazi group that marched in Charlottesville.  In the face of such atrocities, Robert E. Lee advocated against continued resistance, calling repeatedly for reconciliation with the North, for fair and decent treatment of the freedmen, for respect for the law, and for the putting aside of past hatreds in order to restore unity, harmony, and civility.  True, he didn’t support giving blacks the right to vote, and I fault him for that, but he lived in an era when that was viewed as an extremist position.  And if one looks at his postwar record as a whole, he was primarily a peacemaker, a man who sought a better life for blacks and whites alike and had to swallow a great deal of personal pride to do so.  Indeed, I think he might have been an early supporter of WMBW, had it been around in his day!  Chiefly for that reason, I count myself as a fan and supporter of the man.

And because I admire the man, I was, until recently, opposed to the removal of statues honoring him.

But what now?  Sadly, it sometimes takes jarring events, close to home, to get us to change our minds.  And in this case, I have changed mine.  Many of those opposed to removal of Lee’s statues say that removal is an affront to history.  That was my own thinking just a couple of months ago.  But now I think I was wrong.  Whatever we may think of a particular person, good or bad, is there not room to distinguish between our sentiments about the person and the reasons to erect (or take down) a statue?  And aren’t the reasons for erection or removal of a statue important?  Let’s consider, carefully and dispassionately, the possible reasons for removing Lee’s statue.

First, if Lee’s participation in the atrocities of slavery obliges us to take down his statue, it seems to me consistency would require us to demolish the Washington Monument and the Jefferson Memorial.  Can a consistent standard for statues limit us to memorializing only those leaders who were entirely free of wrong?  Limiting statues to perfect people may put a lot of sculptors out of work…

What about the assertions of Anderson Cooper, the Los Angeles Times, and others, who claim that Lee’s statue should be taken down because Lee (unlike Washington and Jefferson) was a traitor to his country?  Anderson Cooper asserted that Washington fought for his country, and Lee against his.  The Los Angeles Times’ headline said that Washington’s ownership of slaves was not equivalent to Lee’s “treason.”  But didn’t George Washington lead an army against his mother country (England)?  And didn’t Lee lead an army that defended his homeland (the Commonwealth of Virginia) against an army that had invaded it and was bearing down on its capital?  In labeling Lee a traitor, or Washington a patriot, I believe it important to distinguish between today’s perspectives and those of the times in which these men lived.  Washington and Jefferson were subjects of the British Crown, and readily admitted that they were engaged in rebellion against their government, for which they’d have been found guilty of treason had they lost.  Washington and Jefferson were among the rebels who embedded slavery in the Constitution in the first place.  And by the time Lee threw in his lot with Virginia, the Supreme Court of the United States had upheld slavery as the law of the land.  Today, we think of our primary patriotic allegiance as belonging to the United States, which we regard as a single, unitary country.  But our pledge of allegiance to such a unified country resulted from the Federal victory in the Civil War (which is to say, in large part, from Lee’s willingness to give up the fight to preserve slavery, and to accede to the prevailing egalitarian view).  Prior to that war, we had been a confederated union of sovereign states.  The very words “commonwealth” and “state” reflect the idea of nationhood to which patriotism was generally believed to adhere.  (As but one example of the perceived sovereignty of the individual states, when Meriwether Lewis went west in the early 1800s, meeting with Native American tribes who knew nothing of the new government in Washington, he gave the same prepared speech to all of them, a speech that referred to President Thomas Jefferson as the great chief of “seventeen nations,” the number of sovereign states then comprising the U.S.)  I believe strongly that Lee’s sense of allegiance to his homeland, the Commonwealth of Virginia, was an honorable, patriotic position – more so, by far, than Washington’s taking up of arms against England.  As I understand history, it was Washington who was the traitor to his country; Lee was the dutiful servant of his.

Is the difference, then, that the 1776 war for American independence was a morally “just war,” and the war to preserve the southern Confederacy and slavery an unjust one?  A lot of historians question whether the atrocities allegedly committed by England were really sufficient to warrant rebellion, but assuming you think Lee’s war the relatively unjust one, compared to Washington’s, I see Lee’s status, in this regard, as similar to the status of thousands of soldiers whose names appear on the Vietnam Memorial.  As a people, we have adopted the principle of honoring servicemen who have fought for their sovereign government, even when the war in which they served is judged by history to have been wrong.  If we remove the statue of Lee because he served his homeland in an unjust war, what are we to do with the Vietnam War Memorial?

What about the argument that most attracted me, that to remove the statue of Lee is to rewrite history?  It’s undeniable that Lee played an important part in history, but so did Lord Cornwallis, John Wilkes Booth, and Lee Harvey Oswald.  To have no statues honoring them is not to rewrite history, nor to deny the place of these individuals in it.  It is simply to recognize the difference between preserving history and the reasons to honor praiseworthy individuals by erecting public memorials to them.  A public statue is a symbol, intended to celebrate an idea.  If we ignore what the subjects of our statues symbolize, we risk celebrating the wrong things.  So, the right question, I think, is not whether Lee was a great general, or played an important role in history, or owned or punished his slaves, or was a traitor or a dutiful servant.  The right question to ask, I would suggest, is: what does his memorial symbolize?

In considering that question, I think the key point is that while Washington was a traitor to his country, he did fight for ours.  And while Lee was a loyal, dutiful patriot who fought for his homeland, he did fight against the unified country that arose from that war.  It is not to demean the man’s character, or his service, or history itself, to recognize that he fought to divide what has (since) become the nation to which we now pledge allegiance.  If our public memorials are intended to remind us of our public principles, then it is the principle of unity, as a nation, that seems especially in need of attention these days – not the division for which Lee fought.  I have no idea whether Lord Cornwallis owned any slaves.  And he may have been a fine and honorable man, even a role model.  But we Americans don’t erect public statues to honor him.  In one sense, Lee symbolizes opposition to the current American government every bit as much as Cornwallis does.  I see no loss in failing to memorialize either man.

As for the many arguments in the nature of “If we remove this statue, what next?” I believe there are matters of institutional purpose to consider.  I doubt the NAACP will ever erect a statue of George Washington, nor should it: as a white slave-owner, he is inimical to the interests of that organization.  The racist Louis Agassiz’s name has, thankfully, been removed from schools named to honor him, but I believe his name properly remains on Agassiz Glacier, in Glacier National Park, because Agassiz remains respected for his pioneering scientific work on glaciers.  As abhorrent as I consider the Nazis to be, if they want to erect a statue of Adolf Hitler on their own property, they’d have the right to do so.  As for Washington and Lee, I do not believe that the college bearing their names should feel compelled to change its name or remove the statues I presume exist on its campus to remember them.  Washington saved the school with his financial support; as the college’s president, Lee greatly expanded it; I believe the college should honor these men for that institution-specific history, and if that includes maintaining statues to both men, I support that.  In that context, Washington and Lee would symbolize, and be accorded honor for, their service to that institution.  In 1962, the United States Military Academy named one of its barracks after Lee.  I think that appropriate, because Lee was a brilliant military strategist and because he had served as that school’s commander.  And I think George Washington should (and will) properly remain on our dollar bills, and be honored in our national capital, because despite his racism and ownership of slaves, and despite his being a traitor to his sovereign country, he was still instrumental in the establishment of this country.  By this reasoning, even a statue of John Wilkes Booth might be appropriate at Ford’s Theater.  My point is that there’s a proper role for institutional purpose in the choice of whom an institution recognizes through its memorials.  Even if we get to the point of tearing down his memorial in Washington, a statue of Thomas Jefferson will always be appropriate at Monticello, and a statue of Lee appropriate at Stratford Hall.  To remove some statues of Robert E. Lee does not require the removal of all of them, and certainly doesn’t erase him from history; much depends, I think, on the institution and its purposes.

So where does that leave us?  The City Councils of New Orleans and Charlottesville are institutions, and institutions of a particular type: they have been elected to represent all their citizens.  They should be celebrating the current government (American, not British; the USA, not the CSA).  And they should be choosing memorials that symbolize the current ideals of the people they represent – the ideals of a diverse nation that has come together in peace.  In these divisive times, it is as important as ever that they choose symbols of tolerance and inclusion.  By virtue of his position as opposition commander in an effort to divide the Union, Robert E. Lee necessarily symbolizes opposition to the national government that won the war.  He symbolizes a divided country, one in which the North would have been free to abolish slavery as long as the South was free to continue it.  That’s not an ideal any government in the United States should want to memorialize.  It is past time to stop celebrating it, or anyone who represents ethnic, racial or ideological division.

Right or wrong, those are my views.  But this week, as I watched our president, and our news media, address the issue from opposite sides, I was struck again by the all-or-nothing positioning on both sides.  Trump and the media both talked about “the two sides” – those for, and those against, removal – sometimes as if all those who opposed removal were white supremacists or Neo-Nazis.  Are we no longer capable of a more nuanced analysis?  Must every individual be vilified by association with the worst of the people on the other side?  Must people classify me as a Nazi if I utter a single word of respect for a man like Robert E. Lee, or as a liberal destroyer of history if I support the removal of his statue?

Changing people’s minds will only happen when people start listening to each other.  These days, it seems, no one is listening to anybody; people seem interested in knowing whether you’re for them or against them, and that’s it – not your reasons, not the finer points of what you have to say, not the reasoning behind it.  The scary thing is, it’s remarkably like the situation in 1860, when the polar opposites took their corners and came out fighting, leaving hundreds of thousands of casualties in their wake.  In my view, the only way to avoid a repeat of such violence is to be alert to the possible faults in ourselves, and to be willing to continue looking for the good in people even after we see the bad in them.  We have to be willing to learn from those we think are wrong.  Otherwise, I believe, we will all share responsibility for the violence to come.

So though I join the call for removing his statues from public places, I still think we can learn from Robert E. Lee.  In an 1865 letter to a magazine editor, he wrote, at the end of the war, “It should be the object of all to avoid controversy, to allay passion, give full scope to reason and to every kindly feeling. By doing this and encouraging our citizens to engage in the duties of life with all their heart and mind, with a determination not to be turned aside by thoughts of the past and fears of the future, our country will not only be restored in material prosperity, but will be advanced in science, in virtue and in religion.”

How I wish that Lee himself had been in Charlottesville last week, to make that point to all those present.   I wonder if any of those whose acts led to violence had any idea of that side of Robert E. Lee.  Or did both “sides” simply think of him as a symbol of an era in which white supremacy was the law of the land, and align themselves accordingly?

The next fight close to home will no doubt involve the statues of all the Confederate generals lining Monument Avenue here in Richmond.  The very short video attached, courtesy of our local TV station, offers a message I think Robert E. Lee would have approved of.

https://www.facebook.com/CBS6News/videos/10154980841367426/

–Joe


The Top Ten Blunders of All Time

For several months now, I’ve been plagued by the thought that certain ways of “being wrong” are different from others.  I’ve wondered whether I’ve confused anything by not mentioning types of error and distinguishing between them.  For example, there are errors of fact and errors of opinion.  (It’s one thing to be wrong in thinking that 2 + 2 = 5, or that Idaho is farther south than Texas, while it’s quite another to be “wrong” about whether Gwyneth Paltrow or Meryl Streep is a better actor.)  Meanwhile, different as statements of fact may be from statements of opinion, all such propositions have in common that they are declarations about present reality.  Not so a third type of error – judgmental errors about what “ought” to be done.  Should I accept my friend’s wedding invitation?  Should I apologize to my brother?  Should we build a wall on the Mexican border?  I might be wrong in my answer to all such questions, but how is it possible to know?

Is there a difference between matters of opinion (Paltrow is better than Streep) and ethics (it’s wrong to kill)?  Many would say there’s an absolute moral standard against which ethics ought to be judged, quite apart from questions of personal preference; others would argue that such standards are themselves a matter of personal preference.  I’ve thought a lot about how different types of error might be distinguished.  But every time I think I’m getting somewhere, I wind up deciding I was wrong.

One of the ways I’ve come full circle relates to the distinction between past and future.  It’s one thing to be wrong about something that has, in fact, happened, and another to be wrong about a prediction of things to come.  Right?  Isn’t one a matter of fact, and the other a matter of opinion?  In doing the research for my recent novel, Alemeth, I came across the following tidbit from the Panola Star of December 24, 1856:

The past is disclosed; the future concealed in doubt.  And yet human nature is heedless of the past and fearful of the future, regarding not science and experience that past ages have revealed.

Here I was, writing a historical novel about the divisiveness that led to civil war.  I was driven to spend seven years on the project because of the sentiment expressed in that passage: that we can, and ought to, learn from the past.  (Specifically, we should learn that when half the people in the country feel one way, and half the other, both sides labeling the other stupid or intentionally malicious, an awful lot of people are likely wrong about the matter in question, and the odds seem pretty close to even that any given individual – including each of us – is one of the many in the wrong.  And importantly, the great divide wasn’t because all the smart people lived in some states and all the bad people lived in others: rather, people tended to think as they did because of the prevailing sentiments of the people around them.  Hmnnn…)

Then, an interesting thing happened in the course of writing the book.  Research began teaching me how many pitfalls there are in thinking we can really know the past.  We have newspapers, and old letters, and other records, but how much is left out of such things?  How many mistakes might they contain?  Indeed, how many were distorted, intentionally, by partisan agendas at the time?  The more I came across examples of each of those things, the less sure I became that we can ever really know the past.  I often can’t remember what I myself was doing ten minutes ago; how will I ever be able to reconstruct how things were for tens of thousands of people a hundred years ago?  Indeed, the more I thought about it, the more I began to circle back on myself, wondering whether the opposite of where I’d started was true: because the past has, forever, already passed, we’ll never be able to return to it, to touch it, to look it directly in the eye, right?  Whereas we will have that ability with respect to things yet to come.  If that’s true, the future just might be more “verifiable” than the past.  I get dizzy just thinking about it.

Anyway, an idea I’ve been kicking around is to ask you, WMBW’s readers, to submit nominations for the ten greatest (human) blunders of all time.  I remain extremely interested in the idea, so if any of you are inclined to submit nominations, I’d be delighted.  But the reason I haven’t actually made the request before now stems from my confusion about categories of wrong.  Any list of “the ten greatest blunders of all time” would be focused on the past and perhaps the present, while presumably excluding the future.  But I’m tempted to exclude the present as well.  I mean, I feel confident there are plenty of strong opinions about, say, global warming – and since the destruction of our species, if not all life on earth, may be at stake, sending carbon into the air might well deserve a place on such a list.  Your own top ten blunders of all time list might include abortion, capitalism, Obamacare, the Wall, our presence in Afghanistan, our failure to address world hunger, etc., depending on your politics.  But a top ten list of blunders based on current issues (that is, based on the conviction that “the other side” is currently making a world-class blunder) would surprise few of us.  It seems to me the internet and daily news already make the nominees for such a list clear.  What would be served by our adding to it here?

My interest, rather, has been in a list that considers only past human blunders, removed from the passions of the present day.  I believe such a list might help remind us of our own fallibility, as a species.  I for one am constantly amazed, when I research the past, at our human capacity for error.  Not just individual error, but widespread cultural error, or fundamental mistakes in accepted scientific thinking.  My bookshelves are full of celebrations of the great achievements of mankind, books that fill us with pride in our own wisdom, but where are the books which chronicle our stupendous errors, and so remind us of our fallibility? How could nearly all of Germany have got behind Hitler?  How could the South have gone to war to preserve slavery?  How could so many people have believed that disease was caused by miasma, or that applying leeches to drain blood would cure sickness, or that the earth was flat, or that the sun revolves around the earth?

What really interests me is not just how often we’ve been wrong, but how ready we’ve been to assert, at the time, that we knew we were right.  The English explorer John Davys shared the attitude of many who brought European culture to the New World, before Native Americans were sent off to reservations:

“There is no doubt that we of England are this saved people, by the eternal and infallible presence of the Lord predestined to be sent into these Gentiles in the sea, to those Isles and famous Kingdoms, there to preach the peace of the Lord; for are not we only set on Mount Zion to give light to all the rest of the world? *** By whom then shall the truth be preached, but by them unto whom the truth shall be revealed?”

History is full of such declarations.  In researching the American ante-bellum South, not once did I come across anyone saying, “Now, this slavery thing is a very close question, and we may well be wrong, but we think, on balance, that…”  In the days before we knew that mosquitos carried Yellow Fever, scientific pronouncements asserted as fact that the disease was carried by wingless, crawling animalcula that crept along the ground.  This penchant for treating our beliefs as knowledge is why I so love the quote (attributed to various people) that runs, “It ain’t what people don’t know that’s the problem; it’s what they do know, that ain’t so.”

My special interest lies in blunders where large numbers of people – especially educated people, or those in authority – have believed that things are one way, where the passage of time has proven otherwise.  My interest is especially strong if the people were adamant, or arrogant, about what they believed.  Consider this, then, a request for nominations, if you will, especially of blunders with that sort of feel.

Yet be forewarned.  There’s a reason I haven’t been able to come up with a list of my own.  One is that, while I’m not particularly interested in errors of judgment or opinion, I’m not sure where the dividing line between fact and opinion falls.  Often, as in the debate over global warming, the very passions aroused are over whether the question is a matter of fact or opinion.  Quite likely, what we believe is fact; what our opponents believe is opinion.

The other is the shaky ground I feel beneath my feet when I try to judge historical error as if today’s understanding of truth will be the final word.  Remember when we “learned” that thalidomide would help pregnant women deal with morning sickness?  Or when we “learned” that saccharin causes cancer?  That red wine was good for the heart (or bad?  What are they saying on that subject these days?)  What about when Einstein stood Newton on his head, or the indications, now, that Einstein might not have got it all right?  If our history is replete with examples of wrongness, what reason is there to think that we’ve gotten past such blunders, that today’s understanding of truth is the standard by which we might identify the ten greatest blunders of all time?  Perhaps the greater blunder is to confidently identify, as a top ten false belief of the past, something which our grandchildren will discover has been true all along…

If this makes you as dizzy as it does me, then consider this: The word “wrong” goes back to an old Germanic word for a twisting (Old Norse rangr, “awry, crooked”); it’s kin to Old English wrenc, a twisting, and to Old High German renken, to wrench, which is why the tool we call a wrench is used to twist things.  This is all akin to the twisting we produce when we wring out a wet cloth: when such a cloth has been thoroughly twisted, wrinkled, or wrung out, it is, in the old sense, “wrong.”  Something is wrong, in other words, when it’s gotten so twisted as to be other than straight.

But in an Einsteinian world, what is it to be straight?  The word “correct,” like the word “right” itself, comes from Latin rectus, meaning straight.  The Latin comes, in turn, from the Indo-European root reg- – the same root that gave us the Latin word rex, meaning king.  Eric Partridge tells us that the king was so called because he was the one who kept us straight, which is to say, in line with his command.  The list of related words, not surprisingly, includes not only regular and regimen, not only reign, realm and region, but the German word Reich.  If the history of language tells us much about ourselves and how we think, then consider the regional differences in Civil War America as an instance of rightness.  Consider the history of Germany’s Third Reich as an instance of rightness.  It seems we’ve always defined being “right” as a matter of conformity, in alignment with whatever authority controls our and our neighbors’ ideas.

Being wrong, on the other hand?  Is it destined to be defined only as belief not in conformity with the view accepted by those in charge?  Sometimes I think I’ve got wrongness understood, thinking I know what it is, thinking I’m able to recognize it when I see it.  But I always seem to end up where I began, going around in circles, as if space itself were twisted, curved, or made up of thirteen dimensions.  I therefore think my own nomination for the Ten Greatest Blunders of All Time has to go to Carl Linnaeus, for calling us Homo sapiens.

If you have a nomination of your own, please leave it as a comment on this thread, with any explanation, qualification, or other twist you might want to leave with it.

I’m looking forward to your thoughts.

Joe

 


The Tag Line

WMBW’s tagline is “Fallibility>Humility>Civility.”  It’s punctuated to suggest that one state of being should lead naturally to the next.  Because the relationship among these three concepts is central to the idea, today I’ve accepted my brother’s suggestion to comment on the meaning of the words.

Etymology books tell us that “fallibility” comes from the Latin fallere, a transitive verb that meant to cause something to stumble.  In the reflexive form, Cicero’s me fallit (“something caused me to stumble”) bestowed upon our concept of fallibility the useful idea that when one makes a mistake, it isn’t one’s own fault.  As Flip Wilson used to say, “the devil made me do it.”

This is something I adore about language – the way we speak is instructive because it mirrors the way we think.   Therefore, tracing the way language evolves, we can trace the logic (or illogic) of the way we have historically tended to think, and so we can learn something about ourselves.  Applying that concept here leads me to conclude that denying personal responsibility for our mistakes goes back at least as far as Cicero, probably as far as the origins of language itself, and perhaps even farther.  “I did not err,” our ancient ancestors taught their children to say; “something caused me to stumble.”

I also think it’s fun to examine the development of language to see how basic ideas multiply into related concepts, the way parents give rise to multiple siblings.  And so, from the Latin fallere come the French faux pas and the English words false, fallacy, fault, and ultimately, failure and fail.  While I’ve heard people admit that they were at fault when they stumbled, it’s far less common to hear anyone admit responsibility for complete failure.  If someone does, her friends tell her not to be so hard on herself.  Her psychiatrist is liable to label her abnormal, perhaps pathologically so: depressed, perhaps, or at least lacking in healthy self-esteem.  The accepted wisdom tells us that a healthier state of mind comes from placing blame elsewhere, rather than on oneself.  Most interesting.

Humility, meanwhile, apparently began life in the Indo-European root khem, which spawned similar-sounding words in Hittite, Tokharian, and various other ancient languages.  All such words meant the earth, the land, the soil, the ground – that which is lowly, one might say; the thing upon which all of us have been raised to tread.  In Latin the Indo-European root meaning the ground underfoot became humus, and led to English words like exhume, meaning to remove from the ground.  Not long thereafter, one imagines, the very ancient idea that human beings came from the ground (dust, clay, or whatever) or at least lived on it led to the Latin word homo, a derivative of humus, which essentially meant a creature of the ground (as opposed to those of the air or the sea).  From there came the English words human and humanity.  Our humanity, then, might be said to mean, ultimately, our very lowliness.

From the Latin, homo and humus give us two rather contrary sibling words.  These siblings remain in a classic rivalry played out to this day in all manner of ways.  On the one hand, homo and humus give us our word “humility,” the quality of being low to the ground.  We express humility when we kneel before a lord or bow low to indicate subservience.  In this light, humility might be said to be the very essence of humanity, since both embody our lowly, soiled, earth-bound natures.  But our human nature tempts us with the idea that it isn’t good to be so low to the ground.  To humiliate someone else is to put them in their place (to wit, low to the ground, or at least low compared to us).  And while we share with dogs and many other creatures of the land the habit of getting low to express submissiveness, some of our fellow creatures go so far as to lie down and bare the undersides of their necks to show submission.  Few of us are willing to demonstrate that degree of humility.

And so the concept of being a creature of the ground underfoot gives rise to a sibling rivalry — there arises what might be called the “evil twin” of humility, and it is the scientific name by which we distinguish ourselves from other land-based creatures: the perception that we are the best and wisest of them gives rise to Homo sapiens, the wise land-creature.  As I’ve pointed out in an earlier blog, even that accolade wasn’t enough to satisfy us for long: now our scientists have bestowed upon us the name Homo sapiens sapiens, the doubly wise creatures of the earth.  I find much that seems telling in the tension between our humble origins and our self-congratulatory honorific.  As for the current state of the rivalry, I would merely point out that not one of our fellow creatures of the land, as far as I know, has ever called us wise.  It may be that we alone think so.

And now, I turn to “civility.”  Eric Partridge, my favorite etymologist, traces the word back to an Indo-European root kei, meaning to lie down.  In various early languages, that common root came to mean the place where one lies down, or one’s home.  (Partridge asserts that the English word “home” itself ultimately comes from the same root.)  Meanwhile, Partridge tells us, the Indo-European kei morphed into the Sanskrit word siva, meaning friendly.  (It shouldn’t be hard to imagine how the concepts of home and friendliness were early associated, especially given the relationship between friendliness and propagation.)  In Latin, a language which evolved in one of the ancient world’s most concentrated population centers, the root kei became the root ciu- seen in such words as ciuis (a citizen, or person in relation to his neighbors) and ciuitas (a city-state, an aggregation of citizens, the quality of being in such an inherently friendly relationship to others).  By the time we get to English, such words as citizen, citadel, city, civics and civilization, and of course civility itself, all owe their basic meaning to the idea of getting along well with those with whom we share a home.

In the olden days, when one’s home might have been a tent on the savanna, or a group of villagers occupying one bank of the river, civility was important to producing harmony and cooperation among those who lay down to sleep together.  Such cooperation was important if families were to work together and survive.  But as families became villages, villages became cities, and city-states became larger civilizations, the circle of people who sleep together has kept expanding.  (And I mean literally – my Florida-born son, my Japanese-born daughter-in-law, and my grandson, Ryu, who even as I write is flying back from Japan to Florida, remind me of that fact daily.)  Our family has spread beyond the riverbank to the globe.

Given the meanings of all these words, I would ask: how far do our modern senses of “home” and “family” extend?  What does it mean, these days, to be “civilized”?  What does it mean, oh doubly-wise creatures of the earth, to be “humane”?  And in the final analysis, what will it take to “fail”?

— Joe
