Thoughts and Opinions

Zooming In

Neil Gaiman tells the story of a Chinese emperor who became obsessed by his desire for the perfect map of the land that he ruled. He had all of China recreated on a little island, in miniature, every real mountain represented by a little molehill, every river represented by a miniature trickle of water.  The island world he created was enormously expensive and time-consuming to maintain, but with all the manpower and wealth of the realm at his disposal, he was somehow able to pull it off.  If the wind or birds damaged some part of the miniature island in the night, he’d have a team of men go out the next morning to repair it.  And if an earthquake or volcano in the real world changed the shape of a mountain or the course of a river, he’d have his repair crew go out the next day and make a corresponding change to his replica.

The emperor was so pleased with his miniature realm that he dreamed of having an even more detailed representation, one which included not only every mountain and river, but every house, every tree, every person, every bird, all in miniature, one one-hundredth of its actual size.

When told of the Emperor’s ambition, his advisor cautioned him about the expense of such a plan.  He even suggested it was impossible.  But not to be deterred, the emperor announced that this was only the beginning – that even as construction was underway on this newer, larger replica, he would be planning his real masterpiece – one in which every house would be represented by a full-sized house, every tree by a full-sized tree, every man by an identical full-sized man.  There would be the real China, and there would be his perfect, full-sized replica.

All that would be left to do would be to figure out where to put it…

***

Imagine yourself standing in the middle of a railroad bed, looking down the tracks, seeing the two rails converge in the distance, becoming one.  You know the rails are parallel, you know they never meet, yet your eyes see them converge.  In other words, your eyes refuse to see what you know is real.

If you’re curious why it is that your mind refuses to see what’s real in this case, try to imagine what it would be like if this weren’t so.  Try to imagine having an improved set of eyes, so sharp they could see that the rails never converge.  In fact, imagine having eyes so sharp that just as you’re now able to see every piece of gravel in the five foot span between the rails at your feet, you could also see the individual pieces of gravel between the rails five hundred miles away, just as sharply as those beneath your feet.  In fact, imagine being able to see all the pieces of gravel, and all the ants crawling across them, in your entire field of vision, at a distance of five hundred miles away.  Or ten thousand miles away.  What would it be like to see such an image?

***

How good are you at estimating angles?  As I look down those railroad tracks, the two rails appear straight.  Seeing them converge, I sense that a very acute angle forms – in my brain, at least, if not in reality.  The angle I’m imagining isn’t 90 degrees, or 45 degrees; nor is it 30, or even 20.  I suppose that angle to be about a single degree. But is it really?  Why do I estimate that angle as a single degree?  Why not two degrees, or a half a degree?  Can I even tell the difference between a single degree, and a half of a degree, the way I can tell the difference between a 90 and a 45?  Remember, one angle is twice as large as the other.  I can easily see the difference between a man six feet tall and one who’s half his size, so why not the difference between a single degree and a half a degree?  What if our eyes – or perhaps I should be asking about our brains – were so sharp as to be able to see the difference between an angle of .59 degrees and one of .61 degrees with the same ease and confidence we can distinguish between two men standing next to each other, one who’s five foot nine and the other six foot one?
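The guessing game above can be made concrete.  As a rough sketch (assuming the standard rail gauge of about 1.435 meters, and the common rule of thumb that the human eye resolves roughly one arcminute, about 0.017 degrees — both figures are my assumptions, not anything measured here), we can compute the angle the gap between the rails subtends at various distances:

```python
import math

GAUGE_M = 1.435        # standard rail gauge in meters (assumed figure)
ACUITY_DEG = 1 / 60    # ~1 arcminute, a common estimate of the eye's limit

def angular_width_deg(distance_m):
    # full angle subtended by the gap between the two rails,
    # as seen by an observer that many meters down the track
    return math.degrees(2 * math.atan(GAUGE_M / 2 / distance_m))

for d in [5, 50, 500, 5000]:
    w = angular_width_deg(d)
    note = "resolvable" if w > ACUITY_DEG else "below the eye's limit"
    print(f"{d:>5} m: {w:8.4f} degrees ({note})")
```

On these assumed numbers, the whole gap between the rails shrinks below the eye's resolving limit somewhere around five kilometers out — which is one way of saying why the rails appear to meet.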

***

Yesterday, I was preparing digital scans of my grandfather’s Christmas cards for printing in the form of a book.  His Christmas cards are hand-drawn cartoons, caricatures of famous personalities of his day.  Each is clearly recognizable, from Franklin Roosevelt and Adolf Hitler to Mae West and Mickey Mouse.  Some of the images were scanned at 300 pixels per inch, some at 600, and so on.  Reflecting on pixel counts and resolutions, so that my printed book would not appear blurry, I was testing the limits of my ability to distinguish different resolutions.  Of course, one neat thing about a computer is how it lets us zoom in.  As long as I zoomed in close enough, I could see huge differences between two versions of the same picture.  Every pixel was a distinct color, every image (of precisely the same part of the caricature) a very different pattern of colors – indeed, a very different image.  Up close, the two scans of the cartoon of Mae West’s left eye looked nothing alike – but from that close up, I really had no idea what I was looking at – it could have been Mae West’s left eye, or Adolf Hitler’s rear end, for all I knew.  In any case, I knew, from my close-up examination, how very different the two scanned images of Mae West actually were.  Yet only when I was far enough away was I able to identify either image as a caricature of Mae West, rather than of Hitler, and at about that distance, the two images of Mae West looked (to my eye) exactly the same.
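The zooming-in experience can be sketched in a few lines of code.  This toy example (entirely hypothetical — a smooth curve stands in for the cartoon, and simple block-averaging stands in for stepping back from the screen) scans the same "picture" at two resolutions, pixel-different up close, nearly identical from far away:

```python
def sample(f, n):
    # "scan" the picture f at n evenly spaced points
    return [f(i / n) for i in range(n)]

def downsample(pixels, m):
    # step back: average blocks of fine pixels into m coarse ones
    block = len(pixels) // m
    return [sum(pixels[i * block:(i + 1) * block]) / block for i in range(m)]

picture = lambda x: x * (1 - x)      # a smooth stand-in for the caricature

scan_300 = sample(picture, 300)      # the "300 ppi" scan
scan_600 = sample(picture, 600)      # the "600 ppi" scan

# Up close, the two scans are different objects entirely: different
# lengths, different pixel values.  From "far away" (averaged down to
# ten coarse pixels each) they are nearly indistinguishable.
far_a = downsample(scan_300, 10)
far_b = downsample(scan_600, 10)
```

The two fine scans never agree pixel for pixel, yet every coarse pixel of one agrees with the corresponding coarse pixel of the other to within a fraction of a percent.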

***

How long is the coastline of Ireland?

If I took a yardstick and walked the perimeter, I could lay my yardstick end to end the whole way around, count the number of lengths, and conclude that the coastline of Ireland was a certain number of feet long.  But if I used a twelve-inch ruler instead, following the ins and outs of the jagged coast a little more precisely, the result would be a larger number of feet than if I had used the yardstick, because the yardstick was assuming straightness every time I laid it down, when in fact the coastline is never perfectly straight.  My twelve-inch ruler could more closely follow the actual irregularity of the coastline, and the result I obtained would be a longer coastline.  Then, if I measured again, using a ruler that was only a centimeter long, I’d get a longer length still.  By the time my ruler was small enough to follow the curves within every molecule, or to measure the curvature around every nucleus of every atom, I’m pretty sure I’d have to conclude that the coastline of Ireland is infinitely long – putting it on a par, say, with the coastline of Asia.
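This is the classic coastline paradox, and the ruler-walk described above is easy to simulate.  A sketch (using a Koch curve as a stand-in for a jagged coast — obviously not Ireland’s actual shoreline data):

```python
import math

def koch(p, q, depth):
    # recursively sample points along a Koch curve from p to q
    if depth == 0:
        return [p]
    (x1, y1), (x2, y2) = p, q
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)                      # one-third point
    b = (x1 + 2 * dx, y1 + 2 * dy)              # two-thirds point
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    peak = (mx - dy * math.sqrt(3) / 2, my + dx * math.sqrt(3) / 2)
    pts = []
    for s, e in [(p, a), (a, peak), (peak, b), (b, q)]:
        pts += koch(s, e, depth - 1)
    return pts

def divider_length(points, ruler):
    # walk the coast laying the ruler end to end, as with the yardstick
    total, anchor = 0.0, points[0]
    for pt in points[1:]:
        if math.dist(anchor, pt) >= ruler:
            total += ruler
            anchor = pt
    return total

coast = koch((0, 0), (1, 0), 6) + [(1, 0)]
for ruler in [0.3, 0.1, 0.03, 0.01]:
    print(f"ruler {ruler:5}: measured length {divider_length(coast, ruler):.2f}")
```

Each shorter ruler picks up crinkles the longer ruler skipped, so the measured length keeps growing as the ruler shrinks — and on a truly fractal coast it grows without bound.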

***

How many rods and cones would my eyes have to contain, for me to be able to distinguish every ant and piece of gravel in every railroad bed, wheatfield and mountainside within my field of vision, at a distance of five hundred miles away?  How much larger would my brain have to be, to make sense of such a high-resolution image?  I suspect it wouldn’t fit inside my skull.

***

Why did we, until so recently, believe that an atom was indivisible?  Why did it take us so long to identify protons, neutrons, and electrons as the truly smallest things?  Why did it take us until 2012 to decide that that, too, was wrong, that there were not only quarks and leptons, but that the smallest thing was the Higgs boson?  And not until 2014 to decide that particles existed even smaller than that?

***

Given how long it took us to realize that our solar system was just one of billions in our galaxy, and how much longer to realize that our galaxy was just one of billions of galaxies, why are we now so confident of our scientists’ estimates of the size of the Universe – especially when told that the “dark matter” and “dark energy” they say account for most of it are just names given to variables necessary to make their equations come out right?  That, apart from their usefulness in making those equations come out right, the scientists have never seen this stuff and have no idea what it is?  Is it really so hard for us to say, “We simply have no idea”?

***

A human baby can distinguish among hundreds, even thousands, of human faces.  But to a human baby, all chimpanzees look alike.  And to most of us Westerners, all Asians look alike.  Why do babies treat Asians and chimpanzees like the rails of railroad tracks, converging them into “identical” images even when we know they are different?  Did our brains evolve not to maximize perception and understanding, but to make them most efficient?  In other words, are we designed to have limited perception for good, sound reasons, reasons that are important to our very survival?

***

Why do we think, and talk, and act, as if our brains are capable of comprehending reality, in all its vast complexity?  Is it more efficient to feed and maintain fewer rods and cones than it would be to feed and maintain enough of them to see the difference between quarks and Higgs bosons, or the individual pieces of gravel between the railroad tracks on all the planets of Andromeda?

***

Mirror, mirror, on the wall: tell me, can I really achieve, within my brain, a true comprehension of the Universe? Or am I just like the Emperor of China?

– Joe


Have an Argument Today…

Ever try to change someone else’s mind?

Consider the option of having an argument with them.  Arguments can be very convincing.  In fact, in scientific study after scientific study, it’s been demonstrated that in 99.7% of all arguments, each participant has convinced himself that he’s right.

– Joe

////////////

………

 

🙂   🙂   🙂


The Militia

I promise, this will be as short as a snub-nosed revolver — maybe even shorter.

The Second Amendment reads, “A well regulated Militia being necessary to the security of a free State, the right of the people to keep and bear Arms shall not be infringed.”

Now, I’ve never cared much about gun rights or gun control, one way or the other.  I’ve never owned a gun (unless the B.B. gun I bought to scare rabbits out of the vegetable garden counts).  While I have no problem with my friends who hunt larger game, I cried the last time I shot a rabbit myself.  (He was just a little bunny, and I shot him in the eye by mistake, and he…  Well, you get the picture.)  So my support for gun rights has had nothing to do with deer hunting season, or even with defending my home against a burglar.  Rather, I’m a constitutional geek, a strict constructionist, and I’m intent on honoring the wisdom of the founding fathers.  And back when I was studying constitutional law, I thought a lot about the importance of militias.  Ever since, my support for the Second Amendment has always been based on the idea that having a militia is pretty doggone important to protect us (the people) against a standing army controlled by a tyrannical government.

I was surprised recently to hear a liberal friend of mine agree.  With our current President obviously in mind, she opined that having a militia is a very important safeguard against tyrannical government.  (You never know.  Maybe, if my belief in the importance of the militia catches on, the Trump presidency will convince more liberals that having an effective militia is important.)

Anyway, in District of Columbia v. Heller (2008), the Supreme Court held (5-4) that the Second Amendment protects an individual’s right to bear arms regardless of the individual’s connection to any organized militia.  Ouch.  That decision was a blow to my strict constructionist roots: even as the Court acknowledged that the ability to field a militia was a major part of the purpose of the Second Amendment, it severed the right from militia service.  As a strict constructionist, that purpose, for me, is what the Amendment has always been about.

So here’s my question: to the extent that anyone, left or right, thinks militias are important to protect us from the possibility of a tyrannical government in Washington, what impact has two hundred years of military spending had on the issue?

I mean, back in the 1780’s when the founding fathers were cooking all this stuff up, the average homeowner owned a musket and a fishing knife – essentially the same weapons used by General Washington’s federal army.  True, Washington’s army also had a few small cannon here and there, but clearly, if the federal army had fallen under the spell of a hated tyrant, a crowd of angry citizens, armed with muskets and fishing knives, could have taken on that army.  And they would have stood a pretty decent chance of preserving their rights.  For me, the Second Amendment is all about achieving that same ability today.

But today, our standing federal army has more than rifles and fishing knives.  Now, it has not only automatic weapons, but tanks, aircraft carriers, stealth bombers, ICBM’s and of course nuclear bombs.  So the current debate about gun control has me thinking again about what it will take to have an effective militia.  (Always a dangerous undertaking, especially now that I’ve grown tired of being wrong all the time.) It occurs to me that what’s important, from a constitutional perspective, hasn’t changed since the 1780’s.  I mean, it doesn’t matter whether we have advanced weapons or old-fashioned ones, as long as the people themselves have firepower roughly similar to that of the standing army.  To achieve that, it seems to me, one option would be to give school crossing guards RPG’s.  Every qualified Neighborhood Watch Association could be assigned a tank.  Local yacht clubs could share entitlement to battleships or aircraft carriers, and local flying clubs could be equipped with fully armed B-52’s.  Of course, they’d all be well trained.  I should think that would give the people a fighting chance against a tyrannical government.

You may think I’m kidding, but seriously, I think I may finally be right about something.  Having an effective militia is important, and to have one, we the people need to have as much firepower as the Pentagon.  So: either we can equip ourselves like the Pentagon does, or we can get the Feds to limit their own armaments to what homeowners are allowed to have.  Maybe everybody could be limited to a handgun — teachers, students, homeowners, and the Joint Chiefs themselves — handguns, flintlocks, fishing knives, whatever — as long as the standing army is no better equipped than the average homeowner.  To have an effective Second Amendment, the standing army could be required to get rid of all those M1 Abrams tanks, guided missiles, and other unfair advantages that would put down a popular insurrection in the bat of an eye.  From a constitutional perspective, I’m convinced that rough parity is all that’s essential.

So.  Am I the only one left who champions the Second Amendment on the basis of the need for an effective militia?  I mean, I know there are some who claim to, but those I’ve met also support a stronger federal military.  Given that such a huge imbalance between the parties already exists, I don’t see how such people can really claim to support the Second Amendment and an increase in federal military spending at the same time — not if having an effective militia is really important to them.

Anyway, thinking about this effective militia idea, and pondering the fact that it’s really about keeping parity between the citizens and their army, I started to wonder how much money could be saved if we restored parity with everyone having smaller weapons, rather than bigger ones.  I mean, it would probably be expensive if I had to have my own launch pad in the attic; and an Abrams tank would probably do serious damage to my front yard.  So I was pleased to discover that the always sensible Swiss may have the answer.  They’ve come up with a real handgun that’s only two inches long.

They’re available for only a little over $6,000 each — far less than the cost of an M1 Abrams, for example — and if we made sure that everybody had one, the demand would probably drive the price down to the truly affordable.  See http://www.zdnet.com/article/worlds-smallest-gun-is-highly-concealable-triggers-fears/

Seriously, I’m still not sure where I stand on gun control and the right to bear arms, but as a strict constructionist, I think I’ve finally found a principled basis for addressing the issue.  We clearly need to choose between one of the solutions I’ve mentioned if weapons parity and an effective militia are to be maintained.  Otherwise, I’m thinking that having an effective militia is a battle we’ve already lost, and we’re soon to be chum for the tyrants.

Thoughts?  Help from any quarter would be appreciated.

— Joe


Hatred

I just watched a TED talk I liked.  The speaker (Sally Kohn) was articulate and funny; her message about hatred, powerful.  Fearing that a synopsis of her talk would detract from the way she conveys her point, I’ll simply share the link to her talk, with my strong recommendation.

https://www.ted.com/talks/sally_kohn_what_we_can_do_about_the_culture_of_hate?rss

But I do have one disagreement with her.  At one point, she refers to “study after study after study that says, no, we are neither designed nor destined as human beings to hate, but rather taught to hate by the world around us…”

I’m not so sure.  Last year I saw a science show on TV that presented a series of studies of very young children; its disturbing suggestion was that we are born to hate.  Can anyone enlighten me about these studies, suggesting (one way or the other) whether hatred is learned, or innate?  A product purely of culture, or of biological evolution?

It has always seemed to me that while some of hate is surely learned, a predisposition toward it may be innate. But what would a predisposition toward hate look like?

Sally cites the 20th-century psychologist Gordon Allport as saying that Hatred occupies a continuum, with things like genocide at one end and “things like believing that your in-group is inherently superior to some out-group” lying at the other.  That much makes sense to me.  In fact, the very idea of a “hate continuum” with feelings of superiority lying at one end is why I think the answer to the innateness question may be important.

Whenever I hear it said that a positive self-image is important to mental health, I think of Garrison Keillor’s joke that in Lake Wobegon, everyone is above average.  I suspect the great majority of us think we’re at least slightly above average.  And don’t psychologists say that that’s good?  Don’t we justify keeping our positive self-images by the corollary view that people who “suffer from a negative self-image” are likely unhealthy?  Don’t we think it would be beneficial if everyone thought of himself or herself as above average?  Wouldn’t that mean an end, for example, to teen suicide?

But even if I’m far below average, there are at least some people in the world who are not as good (or as smart, or as fit, or as valuable) as me.  No?  And if I think my liberalism is superior to your conservatism, or the other way around, you must lack some quality or insight I possess, no?  Does “feeling good about myself” require that, in some way, I feel superior to others?

Maybe not.  Maybe my positive self-image need not depend on comparing myself to others – maybe I can see value in myself – have a positive self-image – without thinking of myself as superior to anyone else at all.  But the only way I can discern to do that is to see equal value in everyone.  And if we’re talking about wisdom or intelligence or the validity of things in which we believe, that means that my own power of discernment is no better than the next guy’s; that everything I believe in has value, but everyone else’s beliefs have equal value.  And I see great debate about whether that’s desirable.  Does it require me to abandon all my convictions?  To forego all my beliefs?  What does it even mean to say that my belief in God has no more value than your belief in atheism, or vice versa?  Can I really believe in anything, if I think an opposing belief is just as “good”?  I think most of us say no.  I think that, for most of us, feeling good about ourselves and our beliefs is only possible through at least implicit comparison to others, a comparison in which we feel that our beliefs are at least slightly superior to somebody else’s.

Even if it’s both possible and desirable, it strikes me as very, very hard to have a positive self image without feeling such superiority.  I mean, can I really have a positive self-image if I think I’m doomed to be the very worst person on earth, in every respect?  It certainly seems likely that, for many, most or all people in the world, positive self-image depends on feeling superior to at least some others, in at least some respects.  I’d venture the guess that a tendency toward positive self-image (in comparison to others) has evolved in our species because of its evolutionary health benefits.  In any case, I suspect there’s a strong correlation between adults who feel their beliefs are superior and adults who feel disdain for the beliefs (or intellects) of others, and a strong correlation between those who feel disdain for the beliefs and intellects of others and those who hate them.  At the very least, positive self-image and a feeling of superiority seem at least early stepping stones in the direction of Hatred.

However, my suspicion that the seeds of Hatred are themselves innate doesn’t depend entirely on positive self-image and feelings of superiority.  The science show I watched last year dealt not with self-image, but with group identification and preference: the idea that we’re willing to assist and protect those who are most like ourselves, while directing the opposite (competition, aggression, violence) at those who are unlike ourselves.

“My God, my family, and my country.”   The familiar formula implies a great deal, I think, about the subject of identity, as does the advice we give to our children: “Don’t ever talk to strangers.”  Why do we alumni all root for the home team?  Why would most of us save our spouse and children from an inferno first, before saving strangers, if we save the strangers at all?  Why do we lock our doors at night to protect those we know, while excluding those we don’t?  Why do we pledge allegiance to our respective flags?

(That last one’s easy, of course, if we believe that we Americans pledge allegiance to our flag because our country is the greatest on earth.  Perhaps I should really be asking why all the other people in the world – who far outnumber us – pledge their allegiance to their flags, when they live in inferior countries?  Are they uninformed?  Too stupid to recognize our superiority?  Aware of our superiority, but unwilling to admit it, because of selfishness, dishonesty, or even evil design?  In which case, can Hatred be far behind?)

Why do we form Neighborhood Watch groups, erect walls on our borders, finance armies for self-defense, and erect tariff barriers to trade?  Is it not because we prefer the familiar, and because that preference is in our self-interest?  And isn’t self-interest as true of groups as of individuals?  In evolution, groups do well who look out for each other – who favor those most like themselves – while treating dissimilar “others” with suspicion and distrust.  (We know that those like us aren’t dangerous, hostile predators, but fear that unknown strangers might be.)  In contemplating first contact with aliens from other worlds, some of us favor holding out olive branches, others making some sort of first strike, but disagree as we might on how to first greet them, we all tend to think in terms of a common goal: to preserve humanity.  We therefore focus on the need for global unity in facing the alien challenge.  But what is it that causes us to favor “humanity” over alien beings, when we know absolutely nothing about those alien beings?  Isn’t it because we know absolutely nothing about them?  Isn’t it because, innate within us, is a bias in favor of those who are most like ourselves?

Consider the following continuum, as it progresses from the unfamiliar to the familiar:

(1) We spend millions to combat and eradicate bacteria, giving Nobel prizes to those most successful in the effort;

(2) We spend some (but less) to eradicate mosquitoes, which we swat unthinkingly;

(3) By contrast, we feel bad if we run over an armadillo on the road, but what the heck, such accidents are unavoidable;

(4) We try not to think much about slaughtering millions of cows, but we do it on purpose, because we have to eat;

(5) Most of us abhor the idea of ever eating a monkey; and

(6) We condemn human cannibalism, abhorring murder so much that we imprison murderers, even if we oppose the death penalty because human life is sacred.

I think that assigning things to their place on such a continuum based on how much they seem similar or dissimilar to ourselves reflects our innate, natural preference for those most like ourselves.  Yet the tendency to feel safety in, and preference for, those who are most like ourselves, is precisely what leads to racism, no?

So, is this preference natural and good?  Or is it something to resist?  Should we be proud of our tendency to fight for our God, our country, our state, our species, our family, our planet – and to disdain our enemies – or should we be suspicious of that tendency, aware that it largely results from the accidents of birth?  And does our tendency to root for the home team – not to mention our loyalty to political ideals – exist only because we’re able to see the good in the familiar, while understandably blind to the good in the unfamiliar?

We don’t see what roosters see in hens.  We’re blind to what bulls see in cows.  But just as we can’t feel the love one three-headed Martian feels for another, I submit we won’t be able to appreciate the goodness that aliens will be striving to preserve when they descend upon us, maws open, preparing to treat us the way we treat swine.  I want to know WHY we are all in agreement on the importance of preserving our species, even if it means the poor aliens go hungry.  And I doubt it’s as simple as loyalty to good old mother earth, as I suspect we’d probably be happy to negotiate a peace with the invaders by offering them, say, all the world’s polar bears and squirrels, provided they’ll agree to leave humans alone.  This preference for humanity would prevail in that moment, I believe, never mind the national and regional warring between earthlings that had preceded it.  And it would seem strong enough to survive even if the alien species were acknowledged to be technologically “superior” to us.  But in that case, would our efforts rest on a reasoned belief that, at least morally, if not technologically, we are superior to such alien species?  Or would the instinctive feeling of moral superiority be only a disguise in which the instinct for self-preservation, and the consequent preference for things most like ourselves, had clothed itself?

I don’t claim to have the answers.  Whether we deserve to defeat alien invaders, whether we ought to value human beings more than chickens or mosquitoes, whether we ought to fight for our flag – these are not the issues here.  My point is that I take our allegiance to things most like us to be innate, whether it’s good or (in the case of racism) abhorrent.  I think the preference is a natural, inborn one, a part of who we are, whether we like to admit it or not – and that it’s a tendency terribly hard to get rid of, as our struggle with racism shows.

For the type of reasons Sally suggests, I believe that understanding our feelings of superiority and our preference for the things most like ourselves is the key to overcoming Hatred.  But if we think of Hatred as merely cultural, as merely something we’ve “learned” from society, I fear that, as individuals, we may be tempted to think we’ve already rid ourselves of it, or that we no longer need to be alert to its presence deep in our hearts.  If we see it only as something others do – if we fail to see at least the seeds of it, innate in ourselves, ready to manifest themselves in our own actions – we may be the Hateful ourselves.

– Joe


What If?

Sometimes, “What if” questions lead to breakthroughs in the way we think and live.  What if we could make our own fire?  What if the sun doesn’t really circle the earth?  What if “up” and “down” aren’t really up and down?  What if I’m wrong?

Most of the time, the “what if” questions don’t lead to earth-shattering breakthroughs about the real world.  Most of the time, they posit something that’s impossible, or just doesn’t make sense.  When we ask, “What if the South had won the civil war?” we’re not suggesting that the South did win, just hoping to learn something by contemplating what the world might be like, if it had. I believe there can be value in asking such questions.

So when I ask, these days, what if I’m wrong, I’m not thinking of a mere philosophical acknowledgement that I’m likely wrong about something.  Rather, I like to ask, what if I’m wrong about something really important?  It’s easy to acknowledge I might be wrong about the best restaurant in town, or the culpability of O.J. Simpson.  No, I’m thinking on the scale of what if “up” isn’t up, and “down” isn’t down?  And today, I’m wondering, “What if I’m wrong about Jesus?”

I imagine I’ve just alienated lots of people: most obviously, those Christian faithful for whom belief in Jesus is the most important belief in the world, but maybe also those atheists, Jews, Muslims and others who might take offense at the suggestion that belief in Jesus has ever been important to them.  In fact, for non-believers, if I’m suggesting they might be wrong, I’ve just alienated them by revealing myself as a closet Christian proselytizer who’s just disclosed a very annoying agenda – right?

Indeed, therein lies the reason for my question.  What if we’re all wrong about Jesus?  Not just those who believe in him, but also those who don’t?  Anyone whose feathers may be ruffled by the suggestion that belief in him, one way or the other, may not be important after all?

At the mere asking of such a question, a lot of us brace ourselves for the sort of debate we’ve grown used to – a debate we may have grown tired of – a debate between those who believe in Jesus and those who don’t.  Jesus himself is said to have predicted that brother would deliver brother to death, and be hated, on account of him.  (Matt. 10:21-22.)  I’ve always thought it ironic that a figure so identified with principles of loving – not just one’s neighbors but one’s enemies – would end up at the center of debates, wars, and genocides fought in (or against) his name.  Yet, from the Crusades to jihads, from the Salem witch trials to modern clashes over sexual identity, this advocate for love has been at the center of controversy and hate.  Probably because I was raised in the midst of argument between Roman Catholics (my father’s side) and fundamentalist Presbyterians (my mother’s side), I lean toward agnosticism, not only with respect to religion, but politics, psychology, and physics as well.  Agnosticism, after all, is a part of what led me to We May Be Wrong.

But having been raised as a Christian, I have a special interest in the irony of the animosities surrounding Jesus and his followers.  And so I ask, “What if we’re all wrong about Jesus?”

Now, for me, the proposition that we may be wrong has never meant to suggest we’re wrong about everything, or even totally wrong about any one thing.  I simply start with the acknowledgement that I’m almost certainly wrong about something, and from there, I move on to the belief that I really have no way of knowing, for sure, which subjects are the ones I’m wrong about.  I may be right about a lot of things; I just wish I could identify what those things were, so that I could jettison all the others.  So I’m not asking anybody to question all their beliefs about Jesus, or to contemplate the possibility that all of them might be wrong.  Today, however, I do have a particular one in mind.

I think the concept of “belief in Jesus” is unique in the modern world, or very nearly so.  Our language itself suggests as much.  If we say we have “faith” in our generals, we likely mean only that we trust them, that we feel secure under their leadership.  But if we say we have faith in Jesus – or even more so, that we “believe in” him – we usually mean a good bit more than that.

I don’t say, “I believe in dogs,” or “I believe in pepperoni pizzas.”  I might say I believe that such things exist, but not that I believe in them.  If I say I believe in Santa Claus, or in the Easter Bunny, I’m saying I believe that such creatures are physically real, not just figments of fairy tale.  When I say “I believe in ‘X’” it’s usually an abbreviated way of stating a belief in the truth of some specific proposition about ‘X.’  If I say, “I believe in love,” or “I believe in democracy,” it’s the equivalent of saying I believe in the truth of the proposition that love (or democracy) is a good thing.  But if I say, “I believe in Jesus,” I’m not generally understood to be saying that I trust his teaching or that I believe in the truth of the proposition that he was a good man; I’m understood to be asserting belief in the truth of a unique proposition about him, and no one else who’s ever lived.  A belief, in fact, that has no parallel in truth propositions about anything else in my vocabulary.

Yet, when it comes to belief in Jesus, discussion often stops right there, at the “I believe” stage.  As soon as we hear “I believe – ”  or “I don’t believe –” it’s as if the “sides” are drawn without ever getting to what it is that one does, or doesn’t, believe about him.  For some reason, Jesus has become a virtual poster child for polarization.  “You’re either with us or against us” often seems the attitude on both sides.

Now, my parents were from different religious backgrounds, and for that reason they disagreed about religion a lot: Transubstantiation.  Limbo.  The Assumption.  The veneration of Mary.  The priesthood.  The authority of the Pope.  The sacraments.  How to pray.  How the world was created.  The list goes on.  Personally, I came to believe their disagreements were symptomatic of the pitfalls inevitably encountered when we start trying to define metaphysical things with words that draw their meaning from the physical.  (Words draw their meaning from their use as applied to shared experiences; when we use them to describe things we claim to be unique, I lose confidence in them.)  But while my parents disagreed about many aspects of their Christian beliefs, they were typical of most Christians in one respect: when they said, “I believe in Jesus,” they were agreeing that Jesus was God.

Now, I’ve never thought I had a very good idea of what it would be like to be a theoretical physicist, or the President of the United States, much less God.  Whatever it means to be God, if such a person or thing exists at all, seems too much to comprehend.  I won’t delve into the nuances of whether my parents meant that Jesus was really God, or just the son of God, or a part of the three persons in one God, or any of the other verbal formulations that had church leaders arguing from the get-go.  Years of effort to understand such nuances have only further convinced me that it’s like arguing over the number of angels that could fit on the head of a pin.  I, for one, don’t really understand what it means to be God – in whole, or even in part.

And for me, at least, the logic is one of mathematical equality: if I can’t say exactly that “God is X,” then I don’t see how I can say that “X is God.”  And if I can’t understand what it means to say that “X is God,” then I don’t follow how important it could be to believe that Jesus was, or is, or wasn’t, or isn’t.  How can it be important to believe in the truth of any proposition I cannot understand?

For my mother and father, the most important thing I could ever do was to profess my belief that Jesus was God, or the son of God, or (fill in whatever qualifiers you deem relevant).  Throughout their lives, I held my ground, refusing to profess a belief in the truth (or falsity) of a proposition I didn’t understand.  This frustrated the $#@! out of them.  For my parents, “belief in Jesus” did not mean a belief that he existed, or that he was good, or that he performed miracles, or that he proclaimed the importance of love, or that his advice on human behavior was incredibly wise.  “Belief in Jesus” meant belief that, in some sense or another, he was God.  And – crucially – this belief in the divinity of Jesus made all the difference to them.  Whether I was, or wasn’t, a “Christian” depended on that one thing, not to mention whether I’d spend eternity in heaven or hell on its account.

I could believe in dogs, or Santa Claus, if I thought they existed.  I could believe in the American flag if I thought it represented a good country with good ideals.  I could believe in Donald Trump if I thought he was a good president.  But I couldn’t believe in Jesus – not really – unless I believed that he was, in some way, God.

This core requirement for what it means to be a Christian in our world has permeated the thinking of Christians and non-Christians alike since Paul began writing epistles.  The “divinity proposition” that has attached itself to Jesus – the principle for which martyrs have died, for which wars have been fought, for which heretics have been burned – seems to have caused a divide between believers and non-believers that, from where I sit, has no parallel in human history.  And the gospels report that Jesus himself predicted it!

So I ask, “What if we’re all wrong about Jesus?” in this respect.

Now, some of you may think I’m asking whether we’ve been wrong, all this time, to suppose that Jesus was divine.  Others may think I’m asking whether we’ve been wrong to suppose that he wasn’t.  The traditional concept that belief in Jesus’s divinity (or not) is the be-all and end-all of what it means to be a Christian has shaped our understanding.  If you’re a Christian, it determines whether you’re among the “saved”; if you’re not a Christian, it determines whether you’re a self-righteous, deluded dreamer, not to mention potentially dangerous because of the strength of your unreasonable convictions.

But what if, properly recorded, preserved, translated, and interpreted, Jesus neither claimed to be divine, nor denied it?  And even more: What if he disapproved of such theological inquiries, seeing them as the downfall of the Pharisees?  What if, when asked by his disciples what he would have them do, his answer was not that they should believe him to be divine come hell or high water, or that their eternal salvation would depend on their belief in any such theological proposition, but, simply, that they should do as he did?  That they should care for the sick, and love their neighbors as much as they loved themselves?

What if, on this single aspect of understanding – that belief in the divinity proposition is the sine qua non of Christianity – we’ve all been wrong, all along?  What would the world be like if the central element of Christianity had turned out to be not belief in the divinity of Jesus, but living the sort of life he’s said to have lived?  What if Jesus were celebrated for teaching, essentially, “Look, folks, I don’t understand why you’re so obsessed with this question of divinity and divine origins.  Haven’t you better things to do, and to talk about, than whether, in one sense or another, I am God?  Stop doing that, please!  Leave it for the Pharisees!”

If that concept had been at the center of Christianity for the past two thousand years, what would it mean, today, to “be a Christian”?  If Christians had been taught not to concern themselves with whether Jesus was God, would that mean that all the “rooms of my father’s house” would be empty, because no one had “believed”?

I’m not saying it’s true, or false.  I’m just wondering what the ramifications would have been, for the past two thousand years, if the divinity proposition had never been considered important, and “Christianity” had been a movement centered on “love thy enemy” and “judge not lest ye be judged” and “care for one another.”

What would have happened to the pagan persecutions of the martyrs?  The history of schisms in Christian churches?  The Church’s persecution of heretics?  The Christian endorsement of the African slave trade?  Conflicts between Christians and Jews, Muslims and atheists?  The household (and the world) in which I grew up?

I can hear the condemnation.  To posit a Jesus who disapproved of contemplating his divinity – who counselled against the very thought of such an exercise as non-productive, pointless, Pharisaical, and bound to result in division and strife – would be to rip out the core of Christianity itself.

But what if we’re wrong about that?

https://wemaybewrong.org/wp/2018/02/19/what-if/

Thanks to F. Lee Bailey – Part Two

Last time out, I was discussing F. Lee Bailey’s effort to identify various reasons a witness can be mistaken.  Bailey’s thesis was that juries don’t want to believe that witnesses lie, so the wisest and most effective way for a lawyer to discredit a witness is to point out for the jury other reasons – other than bold-faced lies – that a witness might not be telling the truth.  Attempting to come up with my own list, I offered examples that involved lack of information, interpretation of information, forgetfulness, the making of assumptions, the lack of focus, and unconscious force of habit.  Today, I continue that survey of reasons for error.

One common reason witnesses can seem to have diametrically opposed versions of reality, when neither is lying, has to do with language.  When my son was four years old, he was being particularly cranky one evening, whining out loud while I was trying to watch television.  I told him to behave himself or I’d put him to bed.  He quieted a bit, but only momentarily.  So I repeated my threat.  “Behave,” I said in a louder and sterner voice than the first time, “or you’re going to bed!”  Once again, the threat worked only briefly; his whining began again, and I repeated my threat a third time, even more sternly than before.  Again it worked, but when the whining returned only seconds later, at the limit of my patience, I cried out “Daniel, behave!!” in the fiercest tone I could muster.  Frightened nearly to death by my obvious anger, he fell silent, his chin trembling in fear.

“I’m haive,” he assured me.  “I’m haive.”

The point is, words mean different things to different people.  Language can get in the way.  To my four-year-old son, I might as well have been babbling.  Was it his mistake, to misunderstand, or mine, to assume he understood what it meant to “be have”?  It was, in either case, a failure of communication.  And failure of communication consistently ranks high on lists of reasons for mistake.

Sometimes, we have trouble communicating even with ourselves, and when this happens, it suggests different reasons for error.  A couple of years before we left Florida, on a winter day when Karen had invited two guests to the house to paint for the afternoon, I agreed to cook them a meal.  A wall of sliding glass doors that looked out to the swimming pool gave the kitchen the best light for painting, and because of the light, it was the ladies’ chosen spot, as well as my work area for the day.  The meal included a spiced chutney for which the ingredients included coriander, cumin, and a little cayenne pepper.  Soon after preparing the chutney I felt a burning sensation in my right eye.  I rubbed the eye with the back of my hand, and then with a wet cloth, but rubbing the eye seemed only to increase the burning sensation.  My tear ducts went into high gear, but despite this natural defense, the burning did not abate.

I’d just recently started wearing contact lenses, and fearing that a lens could be trapping the offensive powder against my cornea, I worried it might be the reason my tear ducts were being ineffective.  The worry was heightened when I went to the sink and flushed my eye with a glass of water, with no consequent reduction in pain.

The urgency of removing the spices became an urgency to remove the contact lens – but I realized quickly that I was having great difficulty even locating the darn thing.  When I tried to squeeze it off and out, my fingers came up empty.  My inability to feel it suggested two possible explanations.  As had happened before, it might have become so closely fitted to the cornea that underlying suction was simply preventing its removal.  Alternatively, all that tearing (or the water from the sink) had washed the lens down into the eyelid where (having assumed the shape of a folded burrito packed with spicy powder) it was making elimination of the powder impossible.  With the burning sensation getting stronger by the second, I raced from the kitchen to the closest mirror – in our bedroom upstairs – and pulled the lower eyelid down in a search for the offensive lens.  But what with hyperactive tear ducts, pain, and the lack of a functioning lens, my poor eyes couldn’t tell whether the lens was in the eyelid or not.  I couldn’t feel it there, or folded into the upper eyelid, or stuck stubbornly to the cornea itself.  Ever more determined to remove it, I kept pinching at the lens with my fingertips from everywhere in the eye socket it might possibly be.

Unsuccessful, I ran back downstairs, flung open the sliding glass doors and crouched at poolside, dunking my head into the winter-cold water, thrashing my head to generate as much flow as possible, convinced that this, at least, would flush out the offending lens.  But when I lifted my head from the water the pain only increased.  The ladies were laughing now, asking what in the heck I was doing.  But caring only about the pain, I shut my eyes.  The pain increased.  Again and again, I tried to fish for the offending lens, sure that it was to blame, wherever it was.

In time, the pain stopped – but not until the ladies suggested I thoroughly wash my hands.  As soon as I did, I realized I could use my fingers to pinch around for the missing lens without adding more spice to the mix.  But even then, I couldn’t locate the lens.

Able at last to see well and think straight again, I found my glasses on the kitchen counter.  Only then did I realize the depths of my folly.  Removing the glasses had been the first thing I’d done, even before rubbing my eyes with the kitchen cloth.  I hadn’t been wearing my contact lenses that day at all.

How does one classify such an error?  You could ascribe it to my inexperience in the kitchen and consequent failure to wash the spices off my hands.  You could ascribe it to my inexperience with contact lenses.  You could ascribe it to my bad decision-making when under pressure, or to forgetfulness, or to lack of focus.  You might say that habit was to blame, as the removal of my glasses at the first sign of irritation was one of those unconscious habits that are so automatic we forget about them.  (In that case, lack of self-awareness about my own habits was also to blame.)  Finally, you might ascribe it to the presence of an idea – a false idea, but an idea nevertheless – that, once in command of my attention, made all the other reasons irrelevant.  The idea that a contact lens had trapped the powder had supplanted the powder itself as the culprit in need of ferreting out.  I’d entirely made up the story of the folded contact lens, but it was so graphic, so real, so painfully compelling, that it became the thing I focused on; it took command of my world.

One conclusion I draw is that it’s hard to classify reasons for error neatly into distinct types, because any one error may result from all sorts of factors.  But being human, I’m prone to think in terms of types and classifications; they help me think I better understand the world.  And when I do, I’m especially fond of this last-mentioned cause for error – the false stories we tell ourselves.  False as my story about the contact lens was, IT was the story playing out in my head; IT created the entire world with which my conscious self interacted.  For all intents and purposes, it became my reality.

I’ve enjoyed reading psychologists, philosophers and story-tellers share thoughts about the stories we tell ourselves.  I’ve especially enjoyed reading opinions about whether it’s possible for the human brain to know whether the world it perceives is “real” in any sense distinguishable from the stories we tell ourselves.  Ultimately, I don’t know if creating these stories is the most common reason for our errors or not, but I think they’re among the most interesting.

Finally, to F. Lee Bailey, in addition to conveying my thanks for getting me to think about the reasons people may be wrong, I’d like to convey a suggestion: that, possibly, people lie more often than he supposed.  Possibly, they just do it, most often, to themselves.

— Joe

https://wemaybewrong.org/wp/2018/02/03/thanks-to-f-lee-bailey-part-two/

Thanks to F. Lee Bailey…

Years ago, I heard a presentation by F. Lee Bailey.  His audience was other lawyers.  His topic was impeaching adverse witnesses – that is, convincing a judge or jury not to believe them.  His premise was that people – including judges and jurors – don’t want to think other people are lying, if they can help it.  Bailey’s advice, therefore, was to avoid the beginner’s mistake of trying to convince a jury that an adverse witness is a liar, except as an absolute last resort.  Instead, he recommended, give the jury, if at all possible, other reasons for not believing the opposing witness.  He spent the rest of his talk giving examples of different ways witnesses can give false testimony, other than lying.

Looking back on it, it was a surprising presentation from the man who, some years later, convinced the O.J. Simpson jury that Detective Mark Fuhrman was a bold-faced liar.  But when I heard Bailey’s presentation, O.J. Simpson was still doing Hertz commercials.  Bailey himself was already famous for representing Sam Sheppard, Ernest Medina, Patty Hearst and others.  His talk made a big impression on me because, in it, he offered a list of ten ways to discredit a witness, other than by arguing that the witness was lying.  I can’t say it improved my legal prowess, but it did get me thinking about all the ways people simply make mistakes.

I lost the notes I took.  Unable to find Bailey’s list on-line, I attempted to reconstruct it myself, in order to do a Bailey-esque “top ten” list of my own in this forum.  But I’ve finally abandoned that effort, for reasons I suspect will become apparent.  Still, I’m interested in the variety of reasons for error, and propose to share some of my thoughts on that subject.

One obvious reason for error is simple unawareness.  An example comes quickly to mind: my lack of awareness of my oldest brother’s existence.  He was born with Down Syndrome, and before I was ever born, our parents had been convinced to send him away, to start a “normal” family as soon as possible, and to forget (if they could) that their first son existed at all.  Three children later, they found themselves unable to do so, and belatedly accepted their first son into the family.  I’ll bypass here the obvious question of whether they were wrong to accept the advice in the first place.  My example has to do with my own error in believing that I was the second child, born with only one other sibling.  My wrongness was simply that I didn’t know, as I’d never been told.  I didn’t know about our oldest sibling until I was five years old, when he first came home.  Until then, every detail of my life had suggested I had only one older brother.  Being wrong about that was simply a matter of not knowing.  As my other older brother recently pointed out, the one thing we cannot know is what we don’t know.

If simply not knowing (i.e., not having information) is one reason we can be wrong, misinterpreting information seems to be another.  Years ago, I’d just sat down after getting home late from work one evening when my dear wife Karen sat down beside me and, looking at my forehead, furrowed her brow in an expression of clear concern about whatever she saw.  Hearing her say, “Hit yourself over your right eye,” I imagined a fly or mosquito about to bite me. To kill the insect I’d have to be fast, so instantly I swung my hand to my forehead, forgetting I was wearing glasses.  (We can count forgetfulness as another way of being wrong).  When the open palm of my right hand smacked my forehead over my right eye, it crushed the glasses and sent them flying across the room, but not before they made a very painful impression on my eyebrow.  But the most surprising result of my obedience was Karen’s uncontrolled laughter.

Now, I thought it cruel for her to laugh when I was in pain, but when a person you love is right in front of you, laughing uncontrollably, sometimes you can’t help yourself, and you simply start laughing too (which is what I did, without quite knowing why).  My laughter just added fuel to Karen’s.  (I suppose she thought it funny that I’d be laughing, considering the circumstances.)  Then I began to laugh all the more myself, as I realized she was right, that I had no reason to be laughing; the fact that I was laughing struck me as laughable.  Neither of us could stop for what seemed like forever.

Karen, bless her heart, tried several times to explain why she’d started laughing – but each time she tried, the effort set her off again.  And when her laughing started up again, so did mine.  The encores repeated themselves several times before she was finally able to explain that when I’d sat down, she’d noticed a bit of redness above my right eye.  (Perhaps I’d been rubbing it during my drive home?)  She had simply asked, “Did you hit yourself over your right eye?”  Not hearing the first two words, I’d mistaken the question for a command.  Dutifully, and quickly, I had obeyed.

So far, I’ve mentioned simple ignorance, forgetfulness, and misinterpretation.  I might add my mistake in simply assuming the presence of an insect, or my negligence in failing to ask Karen to explain her odd command.  Actually, we begin to see here the difficulty of distinguishing among causes of error, or among ways it is committed.  Was it really that I had misinterpreted Karen’s question?  Or was it, rather, a failure of sense perception, my failure to hear her first two words?  Or was it her failure to sufficiently enunciate them?  Such questions suggest the difficulty of classifying reasons for error.  When it comes to assigning blame, people like F. Lee Bailey and me made our livings out of arguing about such things.

But I do have an example of a different sort to share.  This one also dates from the 1980s.  It represents the sort of error we commit when we have all the necessary information, when we make no mistakes of hearing or interpretation, but we – well – let me first share the story.

One of my cases was set to be heard by the United States Supreme Court.  Now, I’m infamous for my lack of concern about stylish dress, and at that point, I’d been wearing the same pair of shoes daily for at least five years – without ever polishing them.  (Go ahead, call me “slob” if you like; you won’t be the first.)  The traditions of appropriate attire when appearing before the United States Supreme Court had been impressed upon me, to the point I’d conceded I really ought to go buy a new pair of shoes for the occasion.  So the night before my departure for Washington, I drove myself to the mall.  Vaguely recalling that there was a Florsheim shoe store at one end – which, if memory served, carried a nice selection of men’s shoes – I parked, found the store, and began my search, surveying both the display tables in the center of the store and the tiers of shoes displayed around the perimeter.  My plan was first to get a general sense of the options available, and then to narrow the choices.  As I walked from one table to the next, a salesman asked if he could help.

I replied with my usual “No, thanks, just looking.”  As I made my way around the store, the salesman returned, with the same result, and then a third time.  (My war with over-helpful sales clerks is a story for another day.)  Finally, with no help from the salesman, I found a table with a display of shoes that seemed to suit my tastes.  I picked up several pairs, feeling the leather, inspecting the soles, getting a closer look.  The salesman was standing close by now (as if his life depended on it, in fact) and one final time, he asked if he could help.  I really didn’t want to be rushed into conversation with him.  But I took one final look around that particular display, comparing the alternatives to the pair I was holding in my hand, and finally said to the salesman, “I think I like the looks of these.  Are they comfortable?”

“You ought to know,” came the salesman’s reply.  “They’re the same shoes you’re wearing.”

Looking down at my feet, of course, I realized why I’d remembered that store from five years earlier.  At least I’d been consistent.  But when you don’t much care about the clothes you wear, you just don’t think about such information as the location of a shoe store: it’s just not important.

So one issue raised by the example is focus.  Never focus on your shoes and you’re likely to look stupid for not knowing what you’re wearing.  But right behind focus, I think, the example raises the matter of consistency.  Darn right I’d been consistent!  Because of not paying attention, I’d gone to the same mall, to the same store, to the same case, and to the same pair of shoes, exactly as I had five years earlier.  I’ll generalize this into an opinion about human nature: when not consciously focused, unconscious force of habit takes over.

Lack of conscious focus and unconscious force of habit can certainly lead to error.  But being unmindful of something is a matter of prioritizing among competing interests.  With billions of pieces of data showering us from all corners of our experience every day, we have to limit what we focus on.  In my case, it’s often clothing that gets ignored; instead, ever since hearing F. Lee Bailey’s talk thirty-some years ago, I’ve been thinking about the reasons people can be mistaken.  Everybody has things they tend to pay attention to, other things they tend to ignore.  But among the reasons we err, I think, is the tendency to proceed, unconsciously, through the world we’re not focused on, as if on auto-pilot.  Who hasn’t had the experience of driving a car and realizing you’ve reached your destination while lost in thought, having paid no conscious attention to getting there?  How much of our lives do we conduct this way – and how often does it mean we might ask a question just as stupid as “I think I like the looks of these; are they comfortable?”

In my next post, I plan to explore some further types of error.  In the meantime, I’ll close here by pointing out that if you believe what you read on the internet, F. Lee Bailey ended up getting disbarred.  And unless I’m badly mistaken, he did make Mark Fuhrman out to be a liar.  But while I can admit to recalling two or three times in my life when I told bold-faced lies, I have no problem admitting I’ve been wrong a lot more often than that.

So for now, I’ll simply thank F. Lee Bailey for helping me understand that lying is just the tip of the iceberg; and that trying to figure out how much lies beneath the surface is a deep, deep subject – and a problematic one, to say the least.

To be continued.

– Joe

https://wemaybewrong.org/wp/2018/01/14/thanks-to-f-lee-bailey/

To a New Year

Several people have mentioned it’s been a while since the last WMBW post.

As it happens, I’ve written a number of things with WMBW in mind, but none have seemed worthy of posting.  You have a zillion things to digest.  I don’t want to add spam to your in-basket — especially not when the only point is that, whatever I might say, I may be wrong.

Sure, I do remain in awe of how little I know.  Of how vast is the universe of what I don’t.  Of how presumptuous I’d be to expect anyone to read what I’ve written.  But precisely for that reason, my sentences remain on my hard drive, in unsent files.  And for precisely that reason, all that emerges, like a seedling from a crack in the pavement underfoot, is my wish that in the New Year to come, I learn as much about myself, my world, and the people around me, as I can.

That, my friends, is all I think worth saying — and that I wish the same for all of us.

—Joe
https://wemaybewrong.org/wp/2017/12/31/to-a-new-year/

Happy Halloween

Since we planned to be out of town for Halloween this year, we produced our annual Haunted House last night, a bit before official trick-or-treat night. “We” means myself and my volunteer crew, of which, this year, there were thirteen members.  What an appropriate number for a Haunted House!

As usual, it took several weeks back in September for me to get psyched.  First, I had to stop thinking about my other projects.  I had to come up with a theme, decide on characters, scenes, and devices, and develop a story line in my mind, imagining the experience our visitors would have, before I could drive the first nail.  As I created the structure that defined the maze-like path to be followed, as I shot each staple into the black plastic walls intended to keep visitors’ footsteps and focus in the right direction, as I adjusted the angle and dimness of each tea light to reveal only what I wanted to reveal, eventually the construction of the house drew me into the scenes and characters I was imagining.  And as usual, now that “the day after” has arrived, I’ve awoken before sunrise, my mind crawling with memories of last night’s screams and laughter.  I try to go back to much-needed sleep, but thoughts of next year’s possibilities get in the way.  It’s the same old story.  Once my mind gets psyched for the Haunted House, it starts to wear a groove in a horrific path; now, it will take something powerful to lift it out of that groove.

I wish I’d done more theater in my life.  I suppose some of my elaborate practical jokes might lay claim to theater.  I’ve even tried my hand at a few crude movies of the narrative, “artsy” sort.  But mostly, it’s been novels and haunted houses.  I suppose I’ve wanted to tell stories with pictures and words ever since I was a kid.  It’s how I’ve always imagined who I am.

In my efforts to be a better writer, I’ve read much on the craft of writing, from popular books like Stephen King’s On Writing to academic tomes like Mieke Bal’s Narratology.  But among the ghouls and monsters on my mind this dark morning comes the memory of a book on writing I read a few years back, one by Lisa Cron called Wired for Story.  That book makes the point that human brains have evolved to give us a highly developed capacity – indeed, a need – to think in terms of stories, and that we’re now hard-wired to do so.

The opening words of Ms. Cron’s book set the neurological stage:

“In the second it takes you to read this sentence, your senses are showering you with over 11,000,000 pieces of information.  Your conscious mind is capable of registering about forty of them.  And when it comes to actually paying attention?  On a good day, you can process seven bits of data at a time…”

Cron’s book goes on to describe how the very success of our species depends on our capacity to translate overwhelming experience into simple stories.  I don’t know the source, or even if it’s true – maybe from The Agony and the Ecstasy? — but Michelangelo is said to have observed that when he sculpted, he didn’t create art, he just removed everything that wasn’t art.  In my own writing, I’ve come to realize how true that is.  Research produces so many pieces of data, and because I find it fascinating, my temptation is to share it all with my readers.  But thorough research is a little bit like real life, which is to say, like Cron’s 11,000,000 pieces of information.  That much information simply doesn’t make a story, any more than the slab of marble Michelangelo started with makes art.

Our brains are not wired to deal with such overloads, but to ensure our survival, which they do by “imagining” ourselves in hypothetical situations, scoping out what “might” happen to us if we eat the apple, smell the flower, or step in front of the oncoming bus.  Every memory we have is similarly a story – not a photographic reproduction of reality, but an over-simplified construct designed to make sense of our experience.  Think of what you were doing a minute before you started reading this post.  What do you remember?  Certainly not every smell, every sound, every thought that crossed your mind, every pixel of your peripheral vision.  What you remember of that moment is a microcosm of what you remember about your entire life.  Sure, you can remember what you were doing September 11, 2001, but how many details of your own life on that infamous day could you recall, if you devoted every second of tomorrow to the task?  And that was a very memorable day.  What do you recall of September 11, 1997?  Chances are you have no idea of the details of your experience that day.  The fact is, we don’t remember 99.99% of our lives.  All we remember are the pieces of the narrative stories we tell ourselves about who we are, which is to say, what our experiences have been.

The same holds true about our thoughts of the future.  As we drive down the road, we don’t forecast whether the next vehicle we pass will be a blue Toyota or a green Chevy.  We do, however, forecast whether our boss will be angry when we ask for a raise, or whatever might happen that’s important to us when we arrive at our destination (which is, usually, a function of why we’re going there).  Whether we’re thinking about the past, the present, or the future, we see ourselves as the protagonist in a narrative story defined by the very narrow story-view we’ve shaped to date, which includes our developing notions of what’s important to us.  Our proficiency at doing this is what has helped us flourish as a species.  This is why photographers tend to see more sources of light in the world, and painters more color, while novelists see more words and doctors see more symptoms of illness.  The more entrenched we are in who we’ve become, the more differently we perceive reality.

Understanding ourselves as hard-wired for dealing with simple, limited stories rather than the totality of our actual experience – not to mention the totality of universal experience – has important ramifications for self-awareness.  As the psychologist Jean Piaget taught us, from our earliest years, we take our experiences and form conclusions about the patterns they appear to represent.  As long as new experiences are consistent with these constructs, we continue interpreting the world on the basis of them.  When a new experience presents itself that may not fit neatly into the pattern, we either reject it or (often with some angst) we begrudgingly modify our construct of reality to incorporate it.  From that point forward, we continue to interpret new experiences in accordance with our existing constructs, seeing them as consistent with our understanding of “reality” (as previously decided on) whenever we can make it fit.

And so, from earliest childhood, we form notions of reality based on personal experience.  The results are the stories we tell of ourselves and of our worlds, stories which have a past and which continue to unfold before us.  As Cron points out, we are the protagonists in these stories.  And I’d like to make an additional point: that in the stories we tell ourselves, we are sometimes the heroes.  We are sometimes the victims.  But unless we are psychopathic, we are rarely, if ever, the villains.

There are, of course, plenty of villains in these stories, but the villains are always other people.  In your story, maybe the villains are big business, or big government; evil Nazis or evil communists; aggressive religious zealots, cold-blooded, soul-less atheists, or even Satan himself. It could be your heartless neighbor who lets his dog keep you up all night long with its barking, or the unfeeling cop who just gave you that unjust speeding ticket.

As you think of the current chapter of your life story, who are the biggest villains?  And are you one of them?  I doubt it.  But I suggest asking ourselves, what are the stories the villains tell about themselves?  What is it that makes them see themselves as the heroes of their stories, or the victims?  Isn’t it reasonable to assume that their stories make as much sense to them as our stories make to us?

We have formed our ideas about reality based on our own experiences, because they make sense to us.  Indeed, our stories make sense to us because they are the only way we can get our minds around a reality that’s throwing 11,000,000 pieces of information at us every second of our waking lives.  We live in a reality of mountain ranges, full of granite and marble.  Michelangelo finds meaning in it by chipping away everything that isn’t The Pieta, Auguste Rodin by chipping away everything that isn’t The Thinker.  When they find meaning in such small samples of worldwide rock, is it any wonder they see reality differently?

Psychologists tell us that self-esteem is important to mental health, so it’s no wonder that in the stories we tell ourselves, we are the heroes on good days, the victims on bad ones, and the villains only every third leap year or so.  Others are the normal villains.  But if I’m your villain, and you’re mine, then we can’t both be right – or can we?  An objective observer would say that your story makes excellent sense to you, for the same reasons my story makes excellent sense to me.  Both are grounded in experience, and your experience is quite different from mine.  Even more importantly, I think, your “story” represents about 7/11,000,000th of your life experiences while my story represents about 7/11,000,000th of mine.

But confirmation bias means that we fight like heck to conform new experience to our pre-existing stories.  If a new experience doesn’t demand a complete re-write, we’ll find a way to fit it in.  It’s like we’re watching a movie in a theater.  If some prankster projectionist has spliced in a scene from another movie, the story we’re watching stops making sense, and sometimes we want to start over, from the beginning.  If our stories are wrong, our entire understanding of who we are and how we fit in becomes a heap of tangled film on the projection room floor.

One of the things I love about Halloween is how it lets us imagine ourselves as something different.  I mean, Christmas puts our focus on Jesus or Santa Claus, role models to emulate, but their larger-than-life accomplishments and abilities are distinctly other than the selves we know.  Mother’s Day and Valentine’s Day encourage us to focus on other people.  Halloween is the one holiday that encourages us to pretend to be something we’re not – to put aside our existing views of the world “as it really is” and become whatever our wildest imaginations might see us as.  I think that’s why I like it so much.  Obviously, I’m not really a vampire ghoul from Transylvania, but when my current worldview is based on a tiny 7/11,000,000th slice of my own personal experience, how much less accurate can that new self-image be?

I think of “intelligence” as the ability to see things from multiple points of view.  The most pig-headed dullards I know are those who seem so stuck in their convictions that they can’t even imagine the world as I or others see it.  I tend to think that absent the ability to see things from multiple points of view, we’d have no basis for making comparisons, no basis for preferences, no basis for judgment, and therefore, no basis for wisdom.

Halloween is the one time of year I really get to celebrate my imagination, to change my story from one in which I’m hero or victim to one in which I’m a villain.  As I try to see things from a weird, offbeat, or even seemingly evil point of view, I get practice in trying to see things as others see them.  For me, it seems a very healthy habit to cultivate.

But I must end on a note of caution.  As someone who tries to tell stories capable of captivating an audience, I am keenly aware of a conflict.  As the dramatist, my goal is to channel your experience, your thoughts, your attention, along a path I’ve staked out, to an end I have in mind.  When I’m successful, I create the groove.  My audience follows it.  In this respect, good story-telling, when directed toward others, is a form of mind control.

But what about story-telling to oneself?  It’s probably good news that in real life, there isn’t just one Stephen King or Tom Clancy trying to capture your attention or lead you to some predetermined goal.  Every book, movie, TV commercial, internet pop-up ad, billboard, preacher, politician, news reporter, self-help guru and next door neighbor has a story to tell, and wants you to follow it.  The blessing of being exposed to 11,000,000 pieces of information every second is that we’re not in thrall to a single voice trying to control the way we see the world.  But does this mean we’re free?  The reduction of the world’s complexity into a single world-view is a story that IS told by a single voice — our own.  All of our individual experiences to date have been shaped by our brains into a story, a story in which we are the heroes and victims.  The most powerful things that seek to control our views of the world are those stories.  We’ve been telling ourselves one since the day we began to experience reality.  My own?  Since early childhood, I have seen myself as a story-teller.  Since September, the imminence of Halloween has forced me, almost unwillingly at first, to focus on my annual Haunted House.  At first, it was hard.  But in just a few weeks, the themes, characters, scenes, and devices of this story took such a hold on me that I woke up this morning unable to think of anything else.

Such are the pathways of our minds.  If my thoughts can be so channeled in just a few weeks, how deep are the grooves I’ve been cutting for over sixty years?  Am I really free to change the story of my life, or am I the helpless victim of the story I’ve been telling?

This week, try imagining yourself as something very different.  Something you’d normally find very weird, maybe even distasteful.  But remember – don’t imagine yourself the villain.  Imagine yourself, in this new role, as part hero, part victim. Get outside your prior self, and have a Happy Halloween.

— Joe

https://wemaybewrong.org/wp/2017/10/28/happy-halloween/

Knowing Right from Wrong

Two items I heard on the radio yesterday struck me as worthy of comment.

First was the news of Sunday night’s tragedy in Las Vegas.  Questions of motive apparently loom large.  President Trump first called the shooter “pure evil.”  Now he’s saying the shooter was “very, very sick.”

I also heard yesterday that the Supreme Court would soon be deciding a case involving a woman sent back to jail because she tested positive on a drug test, a result that violated the terms of her parole.  Her lawyer is apparently arguing that the action amounts to re-incarceration due to a “disease” (addiction), and is therefore unconstitutional.  My own reaction is that the woman wasn’t incarcerated for having an illness (her addiction) but for something she did (use drugs, and test positive on a drug test).  But the fact that the woman’s conduct arguably sprang from her illness/addiction leads me to compare her to the Vegas shooter.  Ultimately, the question becomes whether an offense that results from “sickness” is excusable, and whether it can be distinguished from an offense that results from something else, something that is not some sort of sickness – “evil,” perhaps.  If so, then all we have to do is figure out the difference between evil and sickness.

While I’m at it, allow me to throw in the killing of Osama Bin Laden, just to round out the analytical field.  By the killing of Osama Bin Laden, I mean both the killing he ordered and the killing that finally brought him down.  Premeditated.  Innocent lives lost in the process.  Evil?  Justifiable?  Sickness?  Other?

There’s nothing particularly new about such questions.  They take us back to the legal requirements for justifiable homicide.  To the religious doctrine of the just war.  To the philosophical question of whether an end ever justifies a means.  To the debate over determinism and free will.  All these issues have defied resolution for centuries.  I have my opinions, but instead of advancing them here, I’d like to use them as the background for raising two other matters that have been on my mind.

The first, I’ll call the question of knowledge.  When I studied Latin in school, I learned the distinction between two Latin verbs, cognoscere and scire.  When I studied French, I encountered the same difference between two French verbs, connaître and savoir, which evolved from the Latin.  All four verbs are translated into English as “to know.”  But in both Latin and French, a distinction is observed between knowing in the sense of being somewhat familiar with something, and knowing in the sense of being aware of a fact or a field of knowledge, authoritatively, or with certainty.  In Latin and French, if you want to say you “know” your neighbor, you use the word cognoscere or connaître, because you really only mean to say you’re somewhat familiar with her.  But if you want to say that you know your own name, or where you live, or the words of the Gettysburg Address, you use scire or savoir, to assert that you have essentially complete and authoritative knowledge of the subject.

These two types of knowledge seem rather different from each other.  For many years, I thought it a shame that the English word “to know” gets used to cover both types.  I thought it important to distinguish between those situations in which we really know something and those in which we simply have a passing familiarity, and I found English lacking due to its failure to make that distinction.  But now, I think differently.  Now, I question whether we really know anything with certainty.  If we can’t see all four sides of a barn simultaneously, how can we say we “know” the barn, as opposed to being familiar with just one aspect of it?  Is the most we can ever say about anything that we are somewhat familiar with it?  If there really is just the one sort of knowledge, then maybe we’re right to have just one word for it.  Maybe the Romans and French were wrong to think both types of knowledge possible.

Meanwhile, what do we mean by right and wrong?  Mostly, I’ve been thinking about politics in this regard, not drug use or homicide.  I’ve been wondering whether terms like right and wrong should be abandoned altogether when it comes to politics.  I mean, every political issue I can think of seems to me to be more easily analyzed in terms of what (if any) group benefits, versus what (if any) group gets hurt.   Is it more accurate to say that a policy or practice is “right” when viewed from one group’s perspective, and “wrong” when viewed from another?

Take, for example, immigration reform.  You might argue that tightening controls favors those who already live in a country, and disfavors those who want to enter it.  Assuming that’s true, would that make the tightening right, or wrong?  Doesn’t it depend on whose perspective you’re adopting?

Arguably, capital punishment hurts convicted murderers while benefiting taxpayers who would otherwise bear the costs associated with long prison terms.  We can argue about deterrence, and whether capital punishment deters future criminals and therefore benefits potential future victims.  But what does it mean to argue that capital punishment is “right” or “wrong”?  The simplistic precept “It is wrong to kill” either condemns all killing, including the killing of Osama Bin Laden, or it provides no answer at all because the real question is when it is right to kill and when it isn’t.  I have the same question about higher taxes, about the Affordable Care Act, about environmental regulations, and about every other political issue I can think of.  “Right” and “wrong” seem too absolute to be helpful in understanding complex tradeoffs which may well benefit some groups while hurting others.

I can follow a discussion pretty well when it’s phrased as a discussion of what groups will arguably benefit by some policy or proposal, and what groups (if any) will be hurt.  But I have difficulty when the same debate is phrased in terms of what’s “wise” or what’s “sound policy,” because it seems to me always to come back to “wise for whom?”  Immigration reform might be good for the American economy, but is it good for the rest of the world?  Obamacare may benefit those who have preexisting conditions or are poor and unhealthy, but not those who are healthy or wealthy.  Is a Pennsylvania law “wise” if it helps Pennsylvanians but hurts New Yorkers?  Is an American policy “wise” because it helps Americans, even if it hurts Russians, Filipinos, or Cubans?

It may help us express how disturbed we are by the shooting in Las Vegas, if we call it “pure evil,” but I don’t see it that way.  (Frankly, I don’t know what “pure evil” means.)  Rather, it seems to me we all have personal points of view, which is to say, minds that tell us stories.  In those stories, we ourselves are often the unappreciated heroes.  In some other stories, we may be the victims.  But in how many stories are we purveyors of unadulterated wrong?  I believe that the Vegas shooter told himself a story in which he was a hero, or a victim, or both.  And if we do things because they make sense to us, in the context of the people, values, religion or nation with which we identify, and in the context of the stories we see ourselves acting in, then do we have anything more than a subjective point of view, a limited perspective incapable of assessing a more objective or universal wisdom about right and wrong?  I think we all suffer from genuine mental impairments – if not anything as egregious as sociopathic aggression or drug addiction, then more common ailments like self-interest, self-delusion, arrogance, bad habit, confirmation bias or simply poor judgment resulting from our fallibility.  At best, we have a passing familiarity with right and wrong, not authoritative knowledge of it.  At worst, we are all sick, and so occupy ground not entirely unlike that of the Las Vegas shooter or the drug addict.

Maybe it’s time to stop the litmus test of good versus evil.  To recognize instead that what benefits one person may hurt another.  That when our government incarcerates an addict, storms a deranged mass shooter’s hotel room, or takes the life of a militant dictator, we are not making God-like moral judgments that one person is “good” and another “pure evil,” but simply making practical tradeoffs to protect certain interests at the expense of others.  And maybe, in the next political discussion we have, it’ll prove helpful to stop talking about who and what are wrong, and to talk instead about who will likely benefit and who will be hurt.

My hunch is that the Vegas shooter saw something as pure evil – and that whatever it was, it wasn’t himself.  His idea of evil was likely different from ours.  Indeed, he may have considered us examples of pure evil.  We’re wired to think we’re somehow different from him; that, unlike him, we know the difference between right and wrong.  At times like these, in the face of senseless atrocity, it’s easy to feel that way, to see a fundamental difference between him and us:  After all, we say smugly, we would never indiscriminately kill scores of people.

But we killed over six hundred thousand in our Civil War.  We killed a hundred thousand at Hiroshima.  We’ve killed in Vietnam, Iraq, and Afghanistan.  In a few weeks, when the Vegas shootings are no longer front page news, we’ll be calling each other stupid, or evil, or just plain wrong, as if we have nothing in common with the Vegas shooter.  As if we have the unerring ability to identify what’s right and wrong, and to do so with the full understanding the Romans and French called scire and savoir.

Different as we may be in other respects, I say we all suffer from that disease.

Families of victims in Vegas, you’re in our thoughts and prayers.

– Joe

https://wemaybewrong.org/wp/2017/10/03/knowing-right-from-wrong/