The Meaning of Large and Small

I continue to have difficulty comprehending the very large and the very small.

Yesterday, thinking about the word “small” itself, I got to wondering what I mean when I call something small.  I wondered how I would phrase a definition of the word, if I were assigned the task of creating one for a dictionary.  For example, I could say “small” means the same as “little.” But what would that add, say, to the understanding of someone who spoke only Chinese, or Martian?  My dictionary in fact defines “little” as “small in size.” Could I define “small” other than by simply using an English synonym for it? If you’re a word nerd like me, you might try doing this yourself. If you do, I’d be interested in hearing what you come up with. 

I think my big American Heritage Dictionary (Houghton Mifflin) struggled with the same problem. In that dictionary, as noted, “little” is defined as “small in size,” while “small” is defined as “being below the average in size or magnitude.”  Fair enough, I thought, until I considered some other definitions in that same book, where “size” is defined as “the physical dimension, magnitude or extent” of something, but “magnitude” is defined as “greatness in size or extent,” and “extent” is defined as “the range, magnitude or distance over which a thing extends.”

Considering all these definitions together,  I imagined my Martian visitor persuaded that abstractions like “small” and “little” mean the same thing, but having no idea what that is. When the words are only defined in terms of each other, how can anyone tell what they really mean?

Though I felt I was going in circles, I kept trying.

“Great,” I learned, is “Very large in size,” while “large” is “of greater than average size or extent.”  So great means large and large means great.  Great! But if I didn’t already have an idea of big and small, where would that get me?

Of course, linguists have long recognized this circularity of language.  The problem isn’t just defining “small” by using a synonym like “little.”  It’s more general than that, and it ultimately comes from the absurdity of trying to define words using other words.  If we want to define what a word means by saying that word A is equal to words B, C, and D, the problem is that no matter how many words we go through, every set of words becomes equivalent to nothing but other sets of words.  B, C and D are defined by E, F, and G, and those by H, I and J, but H, I and J are defined by A, B and C.   Even in a language of 50,000 words, the vocabulary is a fixed, limited set – a closed loop, explainable only by itself.  Every word, sooner or later, can only be defined by reference to itself or to words that it has helped to define.  And in any such closed system, entropy sets in.

The definitions of “small” and “large” above both make use of the concept of “average,” which might seem helpful, because “average” takes me from the world of words into the world of mathematics.  If small is less than average and large is greater than average, I have something to work with – provided I know what “average” means.  But what do I mean by “average”?

My mathematical concept of “average” requires a finite set of numbers to consider.   I can say that the average of two 12’s, one 17 and one 19 is 15, but only because I know how many of each number I have for my calculation.  I’m dealing with a known, fixed, quantifiable set.  I might even be forgiven if I say that the average (in size) of one golf ball, one tennis ball, and one soccer ball is (more or less) a baseball, because, again, I’m dealing with a known set of data.  But what data set — what objects, and how many of them — should I use to compute an average, on my way toward understanding that small is below average, and large is above average?  The average size of all things? If I take the smallest things I know, like quarks, and the largest, like the whole universe, don’t I still need to know how many quarks there are, and how many stars of various sizes, before I can compute an average size of things, and therefore know what it means to be above or below the average of all things, and therefore to be inherently large or small?

Meanwhile, when I take my dictionary into account in computing the average size of all things, do I count it as a single thing, about 14 inches in length and weighing a few pounds, or as a thousand smaller things, called pages, or several billion even smaller things, called molecules?  Is my car just a single car, or is it an engine, a body, a chassis, and four wheels? Obviously, if I count myself as one person of my size, I have a very different impact on the “average” of all things than if I count myself as a few billion cells of far smaller size. With such questions pervasive about every thing and every size, I submit, it is impossible to formulate a data set capable of yielding any meaningful concept of an “average” in the size of all things — yet Houghton Mifflin has no problem saying that small things are things smaller than “average,” and large things larger.

(By the way, I submit that it makes no difference if we think in terms of medians. Using medians, I suspect our calculation would yield something only slightly larger than a quark, and virtually everything else would then be considered very, very big by comparison. And if we used the halfway point between the size of a quark and the universe, we’d get something half the size of the universe, and everything else would be very, very small. Can our feeling that we understand what’s big and what’s small be so dependent on different mathematical ways of thinking about averages?)
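For readers who like to see the arithmetic, here is a toy sketch, in Python, of the point I’m making.  The sizes (in meters) and the counts are pure guesswork on my part – illustrative placeholders, not real data – but they show how wildly the three kinds of “average” can diverge on the very same data set:

    # A toy census of "things," with guessed sizes (meters) and guessed counts.
    things = [
        ("quark",    1e-19,  1e80),   # particles swamp everything else by count
        ("book",     3e-1,   1e10),
        ("star",     1e9,    1e23),
        ("universe", 8.8e26, 1),
    ]

    total_count = sum(n for _, _, n in things)
    mean = sum(size * n for _, size, n in things) / total_count
    midpoint = (min(s for _, s, _ in things) + max(s for _, s, _ in things)) / 2

    # Weighted median: walk up the size-sorted census until half the count is passed.
    running = 0.0
    for name, size, n in sorted(things, key=lambda t: t[1]):
        running += n
        if running >= total_count / 2:
            median_name, median = name, size
            break

    print(f"mean ~ {mean:.1e} m; median = {median_name} ({median:.0e} m); midpoint ~ {midpoint:.1e} m")
    # Because quarks dominate the count, the mean and the median both land at
    # quark scale, while the midpoint is half the size of the universe.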

Pulling out that big dictionary again, I wonder, what makes it big?  At first glance, it doesn’t seem nearly as big as my car, yet I call it big while I call my car small.  Surely, I mean that my dictionary is big because it has greater magnitude – more pages, and more words – than other things to which I tend to compare it (roughly speaking, those other things I also call books).  I call my Toyota small because it has less trunk space and passenger seating than my daughter’s SUV.  Could size, then, be a concept that is relative?  It seems so – but relative to what?

I find this last question intriguing.  I think a book big when I compare it to other books, and a car large (or small) when I compare it to other cars.  That concept of relative size seems easy. But if I think for a second that a star can only be thought large in comparison to other stars, I quickly retreat from my relativistic comfort zone.  Surely  stars are always, and absolutely, larger than books, and surely  books are always, and absolutely,  larger than quarks.  If so, surely there is something about size that is not relative to its own group of similar objects – something absolute which enables me to feel quite strongly  that one group of things is inherently larger than some other group of things.   And so, once again, I’m back to square one, wondering what makes one thing large and another thing small.

In desperation, I consult the dictionary again.  This time, instead of “large” or “small,” I look up the word “big.”  (After all, what could a big dictionary be better at defining?)

“Big” is defined by the folks at Houghton Mifflin as, “Of considerable size, magnitude or extent; large.”  Size, magnitude, extent, large – nothing new here.  Big is large, and large is big.  For a moment, I’m disappointed.  But wait.  (There’s more!)  I look up “Considerable.”  The first definition of “considerable” is “large in amount, extent, or degree.”  (Arghhh!  Large means considerable, and considerable means large.  I feel like I’ve been here before.)  For a moment, I consider looking up the new words “amount” or “degree,” but I decide that effort won’t likely be useful.  Then my eyes fall on the second definition of “considerable.”

“Worthy of consideration.” 

Ah!  We’ve left the world of physical dimensions for someplace outside the closed loop of size words.  Am I finally on to something?  I look up “worthy.”  I find, “Useful.  The quality that renders something desirable, useful, or valuable.”

I think I’ve found the answer I’ve been looking for.  Something is “considerable” if it is worthy of consideration, and it is worthy of consideration if it is useful.  Size is indeed relative, but relative, primarily, to what I find useful.

I recently watched Season Six of the survival series “Alone.”  (Synopsis: Ten people competing to survive in Ice Age conditions.) In that world, a moose was important, both because, unlike a squirrel,  it could kill you, and because, if you could kill it, it could feed you for a very long time.  The series contestants considered thirty-pound Northern Pike or lake trout more valuable than ten-pound rabbits, which were in turn more valuable as food than even smaller things like mice.  The closer in size something was to the contestants, the more nutrition it brought.  The more “worthy of their consideration” it was. 

The contestants on “Alone” embraced the value of living as our primitive ancestors did, and I find myself reflecting that it was this ancestral way of life that shaped our species’ understanding of words like “big” and “small.”  Pigs and cows and grizzly bears were more important than, say, mosquitoes. As human beings evolved, those who paid most attention to things about their own size — things between, say, the size of spiders and mastodons — survived and reproduced better than those who paid attention to grains of sand or the vastness of the Universe.  I conclude that, as we generally use the terms “small” and “large,” absent a context which suggests a different relative comparison (a car being small compared to other cars), the default meaning is not really relative to some incalculable “average,” but relative to ourselves. That is, smaller or larger than we are.  I myself create my sense of the “average” size of things.  Things smaller than me are small, and things larger than me are large. Things are large or small relative to me. And from an evolutionary perspective, it is the things closest to my own size that are (subjectively) important to me.

But are pigs and grizzly bears really more important than mosquitoes, objectively?  Exploding supernovae and supermassive black holes are not only extremely large.  Astronomers and cosmologists now tell us that if it weren’t for them, we wouldn’t exist, as they create the very elements from which we’re made.  Those who study life’s origins tell me all complex forms of life began when bacteria became essential parts of our cells, so we wouldn’t exist were it not for bacteria.  And the importance of bacteria is not just historical.  If, today, things like plankton and bacteria stopped being available as food for larger things, moving up the food chain, we’d have nothing to feed on ourselves.  And all the time, quantum physicists remind us that without things as small as quarks, we wouldn’t exist either.

So it isn’t really true that lions, tigers and bears are most important to my existence.  Nor were they, in fact, most important to our ancestors’ existence.  From an evolutionary perspective, we succeeded by paying attention to things our own size, not because such things were more important to us, but because we could actually do something about them if we paid attention to them.  Evolution proved that paying attention to them was useful to our survival.

But if the issue is usefulness to me – whether I can put my understanding of something to use, to help me eat it or to keep me safe from it – which should I consider more worthy of consideration, more considerable in size, to my current life in the 21st century – a black hole, or a virus?  If the answer is that I can do more with my understanding of viruses than I can with my understanding of black holes, why do I think a black hole more worthy of my consideration – more considerable in magnitude – bigger – than a virus?

Our notions of smallness and bigness come from a time in our past when we could not deal effectively with things far smaller or larger than ourselves, a time when things our own size – the moose, the cow and grizzly bear – were most worthy of our consideration.  We could not concern ourselves in those days with virology or pandemics, with things as small as molecules of CO2 or as large as ozone layers or the acidity of the oceans. Thinking about viruses rather than grizzly bears would have been fatal in those days. But such things, both the very large and the very small, are beginning to enter our sphere of influence.  As science continues to broaden our understanding of the world, our ability to prosper (or not) in the face of things we previously thought too large, or too small, to matter, changes. Is it time, now, to revise our thinking about the meaning of words like “large” and “small”?

The Meaning of Meaning

Years ago, my brothers and I started debating the existence of absolute truth.  My brothers defended its existence and knowability.  I questioned its knowability, if not its very existence.   After decades retracing the same ground, our dialogue began to seem less like a path to enlightenment than a rut.  My brothers still believed in absolute, objective truth, and that it’s possible to know at least some of it, while I stuck to my subjectivist guns.

My subjectivism included the matter of language.  I see words as arbitrary patterns of sound waves without inherent meaning, which is to say, lacking any meaning until two or more people reach agreement (or at least think they’ve reached agreement) on what idea those sound waves represent.  The word “fruit” is not inherent in apples or oranges.  Not only the sound f-r-u-i-t but the very concept of fruit exists only in the mind.  A “fruit” is not a real thing, but a category, a label for an idea.  And ideas, as we all know, exist only in the mind. 

Having agreed that early ancestors of McIntosh and Granny Smith had enough in common to be called “apples,” and that the ancestors of navels and clementines had enough in common to be called “oranges,” we then went further and called them both “fruit.”  Slicing and dicing with our verbal Ginsu knives, we label some fruit as “citrus.” We group fruit with legumes and call them both plants.  We add ourselves and elephants as mammals, then add plants and viruses and call us all “living things.” All the while, scientists debate the very idea of what living things are, including and excluding actual things from what is, I maintain, just a concept.  Importantly, the things themselves are not affected by what the scientists call them.  A rose remains a rose, by whatever name we call it.

And so language, I say, remains subjective.  We attempt to group and classify real things by using conceptual labels.  We distinguish between a gallop and a trot, but we ignore the difference between the way I “walk” and the way a thoroughbred horse does, or a camel or a duck.  Arbitrarily, subjectively, we call them all the same thing: “walk.”  Why not distinguish between a walk and a shawk and a mawk?  It’s all very arbitrary.  What constitutes a “walk” is obviously an idea – and ideas exist only in the mind.

Comfortable in my subjectivist philosophy of language, I recently came across the work of the late Hilary Putnam, former Harvard professor and president of the American Philosophical Association.  Putnam famously claimed that “meanings just ain’t in the head.”  In his papers “Meaning and Reference” (1973) and “The Meaning of ‘Meaning’” (1975), he used a thought experiment to demonstrate that the meanings of terms are affected by factors outside the mind.

Essentially, Putnam did this by asking us to imagine a world that is a perfect twin of Earth in every way but one: its lakes, rivers, and oceans are filled not with H2O but with XYZ.  Everything else is identical, including people, and their tongues, and their languages – so that both Earth’s Frederick and Twin-Earth’s Froderick use the identical word “water” to refer to the stuff that fills the oceans on their respective planets.  Since Frederick and Froderick are physically indistinguishable, and since their words “water” have different meanings, those meanings cannot be determined solely by what is in their heads.

So said Putnam.

The idea that meanings are rooted in real things, not just in subjective minds, became known as “semantic externalism.” It was credited with bringing about an “anti-subjectivist revolution” in philosophy, a revolution that threw into question the very “truth” of subjective experience.[1]

Yikes!  Was I wrong yet again?  Did I have to rethink my whole philosophy of language?  Did I have to concede to my brothers that there is such a thing as objectivity, at least in the meaning of words?

Not so fast.

Putnam’s Twin Earth thought experiment had me worried.  But at the end of the day, I decided it suffers from the common logical fallacy of assuming its conclusion in its premise.  The real question, I believe, boils down to one that Putnam may have had in mind when he titled one of his papers “The Meaning of ‘Meaning.’”

If language is as subjective as I suppose, and if words can mean different things to different people, as I believe, who’s to say what a word really means?  I don’t believe there’s an objective answer, though perhaps Dr. Putnam did; I think it may come down to what we mean by the word “meaning.” When faced with such questions, I’ve often sought the judgment of etymology, the history of words. I find it instructive to retrace the way words (and their meanings) change over time. And so I set out to unearth the etymological path by which the word “meaning” came to have meaning.

According to my research, the word is related to the ancient Indo-European root men– (“to think”), reflected in Greek and Latin, from which English words like mental and mentor have derived.  It came into Old English as the verb maenan, meaning to state one’s intention, to intend, to have something in mind.  And much later, the verb “to mean” led to the formation of the verbal noun, “meaning.”

From an etymological perspective, I would argue that meaning is therefore subjective, by definition.  If to “mean” something means to “have it in mind,” then there cannot be meaning independent of someone’s mind.  Definitionally, it is the idea formulated in the mind.  The person whose tongue pronounces the word’s sound is trying to convey the meaning in her mind.  And when the listener who hears the sound uses it to form an idea in her mind, “meaning” happens again.  To “mean” something is, always, to have an idea in mind.

I find it interesting to imagine the day, within the past few hundred years, on which two people were watching a meteor shower, or a lightning storm, or a two-headed snake – some occurrence that struck them as an omen of sorts – and one of them first asked the question, “What does it mean?”

It’s a question we’ve all asked at some point – if not about an omen, then about a symbol, a gesture, or some other mindless thing. The question has become an accepted expression in modern English.  But what a momentous event, the first time it was asked!  Here we had a word – to “mean” something – which (at the time) meant that a speaker had some concept “in mind” and “intended” to convey that concept to another.  That is, as then used, the word clearly referred to a subjective concept.  You’d long been able to ask what a person meant, intended, or “had in mind.” But when the question was first asked, “what does it mean?” referring to a lightning bolt, an earthquake, or a flood, the one asking the question was implicitly asking another, broader question – whether, perhaps, the “it” – the burning bush, the flood, the electrical discharge – could have “something in mind.” 

Alternatively, they were asking if the thing or event had been endowed with a meaning by virtue of having been in the “mind” of some superconscious deity that had caused the event.  If the “meaning” was that which had been in the mind of such a deity, it was arguably still subjective, i.e., still dependent on the idea that existed in a particular mind.  But if the meaning had originated in the thing or event itself – in the rock, or the flame, or the electrical discharge – then the conclusion would have to be that “meaning” can exist independent of a mind.

At any rate, it seems to me that whoever first asked the question, “What does it mean,” was expanding the very idea of “meaning.” Until that moment, to “mean” something meant to have it in mind.  To think it.  Until that moment, as I see it, everyone understood that “meaning” is entirely subjective.  To ask what “it” means was a misuse of the word.

And so, on the basis of etymology, I stand my ground.  “Meanings,” by definition, are ideas that form in the mind.  The idea of fruit.  The idea of walking.  Even Mr. Putnam’s theory of semantic externalism – that meaning “ain’t just in the head” – is an idea that, like all ideas, is just in the head.


[1] Davidson, Donald, Subjective, Intersubjective, Objective, Oxford University Press, 2001.

Primates and Praise

Early in the history of the Christian church, bishops and archbishops came to be called “primates.” The word was not intended to evoke images of orangutans or macaques.  (It would be many centuries before Carl Linnaeus classified Homo sapiens as a member of that order.) Rather, even in Latin, the word for “first” had been used to mean a superior, a leader, or most excellent person, and the Christians had no problem designating their spiritual leaders with the term as well.

There are many things I like about my Christian heritage.  If Christians today preached what I believe the historical Jesus preached, I’d readily identify as a Christian.  But as I see it, modern Christianity gets Jesus wrong in a number of respects. 

When I was only eight, I was invited to spend the weekend in the countryside with a friend.  Since I’d have to miss Sunday mass, I made a phone call to ask for permission to do so.  My friend’s family got quite a laugh when, after the call, they discovered I hadn’t been calling home, but the church rectory. The “Father” they’d heard me addressing was not my biological father, but the parish priest.

I had already been taught to call all priests “Father,” and even when I talk to priests today, I use the term of respect I was taught as a child.

But it wasn’t long after the parish priest told me it would be a sin to miss Mass  that I came across Matthew 23:9, where Jesus is said to have told his followers “to call no man Father, for one is your Father, which is in Heaven.”  Given that scripture, I never understood how Christians developed the practice of calling their priests “Father” – especially in an age when fathers demanded so much respect – except, of course, that the priests had taught them to.

It’s easier for me to understand why hierarchies arose as church memberships and treasuries grew – and why words like “bishop” (from Greek epi-skopos, meaning to watch over) came into use.  And it seems almost inevitable that as such growth continued, layers of rank would have to be added, for practical, administrative reasons.  So by the time the Bishops of Canterbury, York, Armagh and St. Andrews had become powerful, it isn’t entirely surprising that such leaders would be called “primates.”  But the primates were always first among “fathers,” and I still had a hard time squaring that with Matthew 23:9.

Nor was it that particular scripture alone.   According to Matthew 12:50, Jesus instructed his followers, “Whosoever shall do the will of my Father, which is in Heaven, the same is my brother, and sister, and mother.”  Jesus preached, “Blessed are the meek; for they shall inherit the earth” (Matt. 5:5) and “Whosoever therefore shall humble himself as this little child, the same is greatest in the kingdom of heaven” (Matt. 18:4). I read of a Jesus who washed the feet of his disciples, of a Jesus who frequently dismissed those who treated him with special reverence, of a Jesus who said to a man who addressed him as Good Master, “Why callest thou me good? There is none good but one, that is, God” (Matt. 19:17). I read of a Jesus who, when asked if he was King, replied only, “You said it” (Matt. 27:11), as if to disavow the title himself.  In fact, Jesus taught, in the Sermon on the Mount, that his followers should pray to the Father (for His was the power and the glory). And, if we believe Matthew 7:22-23, Jesus chastised those who would honor him, warning, “Many will say to me in that day, ‘Lord, Lord, have we not prophesied in thy name? and in thy name have cast out devils? And in thy name done many wonderful works?’ And then will I profess to them, I never knew you: depart from me, ye that work iniquity.”

One reason I haven’t been to church but a few times in the last fifty years is my lack of comfort with heaping praise on this man who fought so hard to avoid it.  Last month, I went to a Catholic mass for the first time in many years.  One of the first hymns sung was To Jesus Christ, Our Sovereign King.

“To Jesus Christ, our sovereign king, who is the world’s salvation, all praise and homage do we bring, and thanks, and adoration. Christ Jesus, victor!  Christ Jesus, Ruler! Christ Jesus, Lord and Redeemer! Your reign extend, O King benign, to every land and nation; for in your kingdom, Lord divine, alone we find salvation.  To you and to your church, great King, we pledge our hearts’ oblation – until, before your throne, we sing in endless jubilation.”

Homage? Kingdom?  Reign? Throne?  I was taught the theology behind this hymn.  But for me, the theology fails to justify adoration of a man who shunned adoration, who deflected all praise to God, his father in heaven.  To my way of thinking, Jesus would not have approved of such a hymn.

Meanwhile, whatever may be said in defense of praising Jesus, I have even greater trouble with adoration of mankind.

Consider this passage from Pope John Paul II’s Gospel of Life, Evangelium Vitae.  I can’t read it without thinking of Jesus’ teaching that the meek shall be blessed.

52. Man, as the living image of God, is willed by his Creator to be ruler and lord. Saint Gregory of Nyssa writes that “God made man capable of carrying out his role as king of the earth … Man was created in the image of the One who governs the universe. Everything demonstrates that from the beginning man’s nature was marked by royalty… Man is a king. Created to exercise dominion over the world, he was given a likeness to the king of the universe; he is the living image who participates by his dignity in the perfection of the divine archetype.”

I hope that my thoughts are not taken as an attack upon those who sing the hymn, or upon Pope John Paul II for his thoughts about mankind.  I mean no disrespect, and God knows, I may be wrong.  But as Christians prepare this month to celebrate Jesus and his birth, I’m moved to point out my inability to buy into these aspects of modern Christianity. As I like to think of it, “I prefer the original.”  Father, Primate, Pope, Homo sapiens sapiens.  Clearly, we are prone to bestow honor on ourselves.  I don’t know whether we inherited this tendency from other primates or not, but the Jesus I believe in warned us against it.

Submission

My recent trip to Morocco got me thinking how much our cultures shape us and make us who we are – that is, how much the ruts in our thinking can masquerade as truth itself.

As I packed for my trip, I decided to bring along a couple of books – Susan Gilson Miller’s A History of Modern Morocco and a copy of the Qur’an I’d bought a couple of years ago, a 1934 translation by Abdullah Yusuf Ali.   I thought they might help get me into the spirit of the trip – my first to a Muslim country.

Reading the first sixty pages of Ali’s translation when I first got it, I’d found it a bit like Leviticus or the Gnostic Gospels – fragments of wisdom scattered among verses otherwise resistant to comprehension.  Miller’s history made more sense to me (once I started distinguishing between the Alawids, the Almohads and the Almoravids).  But I like getting to know about things I know nothing about; the more foreign, the better.  So Morocco turned out to be a great trip, just as I’d hoped.

To begin with, it felt like a different planet, the terrain like the barren brown land of southern Spain where Clint Eastwood filmed spaghetti westerns to pretend he was in the American West.   (It was barren, just sand and clay, devoid of plastic, steel or chlorophyll.)  Yet when we crossed the Atlas mountains into the Sahara, I realized how much green I’d been taking for granted.  Upon our return from the sand dunes to the “green” side of the Atlas, I did indeed notice occasional olive trees, date palms and cacti.  The rare new shoots of chlorophyll in the otherwise dry brown wadis – the result of a downpour on our third day in the country, the first of a rainy season that, having just begun, showed no further hint of itself for the rest of our stay – were cause for celebration. After all, they had made the country comparatively lush.  What had seemed a wasteland at first now showed precious signs of life.

The architecture was equally striking.  Palaces, guest houses and mausoleums were opulent and ornate, sculpted or tiled into tiny squares, rectangles and diamonds, with Arabic scripts worming through the geometry like the tendrils of plants making their way through latticework.  But more than the fancy palaces and riads, I was struck by the simple architecture of the countryside.  Fields of clay separated by countless rock walls, most only one or two feet high, only a tiny fraction of which rose high enough to resemble stone buildings. Most of the structures were made of clay.  Berber villages, many miles apart, often consisted of only a dozen houses or so.  In one, a mountain village of sixty people called Outakhri, Lala Kabira treated us to two wonderful meals of lamb, eggs, vegetables, dates, couscous, green tea and flatbread, which we watched her bake in a blackened clay wood-burning oven.  When I asked our guide, Said, if she was his mother, aunt or other relative, he gave me a most curious stare.  Then he said, “No, she’s not related by blood.  But when you spend your life in a village of only sixty people, there’s really no difference.  Everyone is family.”

Not once in two weeks did I hear a complaint or a curse, not once an unpleasant gesture.  As the days passed, I began to feel majesty in the clay-colored, mountainous land.  The people, the food, even the terrain began to seem familiar.

One of our group, Juan Campo, is a professor of religious studies at the University of California, Santa Barbara.  When I learned that Juan specializes in Islam, I asked for his opinion of Ali’s translation of the Qur’an.  When he said it was a good one, I asked if he’d written any books that a layman like myself might understand.

Yes, he said.  He’d been the chief editor of The Encyclopedia of Islam (Checkmark Books, 2009).

I have now bought and studied that volume.  My thanks to Juan for helping me better understand the basics of the Qur’an.  I’ve also now read a good bit of Ali’s translation.  Based on this elementary introduction, I now understand that the Qur’an teaches as follows (citations are to chapter and verse of the Qur’an unless otherwise noted):

1.   “There is no god but God” (21:25).  (That is, there is only one God, and he is the God of all.)

2. That God is loving (85:14), eternal (2:255), merciful (1:1), omnipotent (3:26), omniscient (6:59, 21:4, 49:16), wise (2:216, 3:18), righteous (2:177), just (41:46), and forgiving of sins (3:31).

3.  That when God says something should be, it is. (2:117, 3:59.)  He created the world, a task which took him six days, creating day and night, the Sun, the Moon, and the stars  (7:54, 10:3, 11:7, 21:33, 25:61-62.)  According to some Muslim teaching, he created the Universe out of love, so that he would be known (hadith qudsi.)

4. God created the first human being, Adam, making him out of dust or wet clay, breathing the spirit of life into him  (3:59, 6:2, 7:12, 15:29, 30:20, 32:9).  He set Adam and Eve down in a blissful garden called Paradise, eating the fruits of the garden until Satan, the enemy (whom God had expelled from heaven for his disobedience) tempted them to eat the fruit of the forbidden tree (2:34-36, 2:168, 7:11-18, 7:189, 20:117-123).

5. Eve gave birth to Cain and Abel, and Cain later murdered his brother out of jealousy because God accepted Abel’s sacrifice rather than his (5:27-32).

6. God chose to save the righteous Noah, man of faith, while causing a great Flood that drowned the people who’d fallen into evil ways (7:64, 17:3, 37:75-77).

7.  Jews, Christians and Muslims are “the People of the Book,” all descended from that great opponent of idolatry, Abraham, the pious husband of Hagar and Sarah, the father of Ishmael and Isaac, whose faith in One God was so strong that he was prepared to sacrifice his son at God’s command (2:133, 19:41, 19:69, 21:51-58, 21:66-72, 37:112, 6:74-84, 37:99-111).

8. God chose Jacob, Moses and Aaron as prophets (19:51-53, 21:48, 21:72).  Moses was cast away on the waters as an infant, by his mother, to save his life (20:37-40).  Moses rose to prominence under the pharaohs of Egypt (7:104-109).  God spoke to him from a burning tree by Mount Sinai (28:29-30).  He spent forty days in the desert and received the commandment tablets from God while there (7:144-145).  In the absence of Moses, the Israelites worshipped the golden calf (7:148-149, 20:85-91).

9.  After slaying Goliath, David received a kingdom and wisdom from God.  Solomon ruled with wisdom and justice.  God listened to Job in his distress, and was merciful to him for his righteousness. (2:251, 21:78-79, 21:83-86, 38:20).  

10. John, the son of Zechariah (known to Christians as “the Baptist”) was a prophet made known to the father of Mary; he was princely, chaste, wise and righteous, and confirmed the word of God (3:39, 19:12-13).

11. Angels appeared to Mary and announced to her that God had chosen her above the women of all nations, saying “O Mary! God giveth thee glad tidings of a Word from Him: his name will be Christ Jesus, the son of Mary, held in honor in this world and the Hereafter and of those nearest to God; he shall speak to the people in childhood and in maturity.  And he shall be of the righteous.” (3:42-46).  Mary questioned the news, since she was a virgin, but God, who “createth what he willeth,” simply said “Be!” and breathed his spirit into her.  Thus was Jesus conceived. (3:47, 19:20-21, 66:12).

12. Jesus is a spirit – Arabic ruh, or breath – proceeding from God; he is thus the word of God (4:171).   God strengthened Jesus with this holy spirit (2:87, 2:253, 5:110), revealing the gospel of Moses and the prophets to him (2:136), teaching him the book of wisdom, and the law, and the gospel, and giving him the power to heal the sick and perform miracles (3:48-50, 57:27).  God said to Jesus, “O Jesus! I will take thee and raise thee to Myself and clear thee (of the falsehoods) of those who blaspheme; I will make those who follow thee superior to those who reject faith, to the Day of Resurrection.” (3:55).  God ordained compassion and mercy in the hearts of those who follow Jesus (57:27).   Jesus is “a statement of truth” and a “sign for all people” (19:34, 21:91).

13. Charity is essential to a good and pure life.  As stated in the Qur’an (2:177):

Goodness is not that you turn your face to the east or the west.  Rather goodness is that a person believe in God, the last [judgment] day, the angels, the Book, and the Prophets; that he gives wealth out of love to relatives, orphans, the needy, travelers, and slaves; that he performs prayer; and that he practices regular charity.

14. The world will end on the Last Day, a day of Judgement and resurrection in which nothing will be hidden, the just will be rewarded by a return to Paradise and the unjust damned to hellish fire (1:4, 3:56-57, 19:37-40, 21:47, 69:18-31, 74:38).  God will reward those who are faithful to him and his word by giving them a land of milk and honey, while punishing those without faith in eternal fire (2:164-167, 13:20-26, 21:39, 47:15).  “Those who believe (in the Qur’an), and those who follow the Jewish (scriptures), and the Christians and the Sabians [converts] – any who believe in God and the Last Day, and work righteousness – shall have their reward with the Lord; on them shall be no fear, nor shall they grieve” (2:62).  “Verily, this Brotherhood of yours is a single Brotherhood.” (21:92).

15. And so the Qur’an asks, “Who can be better in religion than one who submits his whole self to God, does good, and follows the way of Abraham, the true in Faith?” (4:125)

My dear Christian mother believed everything described above, yet her feelings about Muslims ranged somewhere between fear and loathing. 

As I understand it, written Arabic traditionally recorded no short vowels.  The word Islam was essentially the three consonants s-l-m – making Islam a cognate of the Arabic word Muslim, the Arabic word “salam” (peace) and the Hebrew word “shalom” (peace).  The word Islam is often translated “enter into a state of peace.”

As we all know, some people err by confusing substance with translation.  Nowhere is this error more troublesome to me than when it comes to God.  When my mother cringed at the thought of worshipping Allah, I don’t believe she understood that “Allah” is simply the Arabic word for “God,” derived from the same Semitic root as El and Elohim.  I find it notable that, according to Professor Campo, Arabic-speaking Christians and Jews in the Middle East use the word “Allah” in referring to their God.

I so wish my mother could have understood this.  Nothing was more important to her than submission to God.  Yet she seemed not to understand that “Islam” is an Arabic word that, as usually translated, simply means submission to God.  And that “Muslim” is simply an Arabic word for one who so submits.  Had I spoken Arabic when Mom was alive and called her one who submits, I shudder to imagine her reaction.

“I’m no Muslim!” she likely would have said.

My thanks to our guides, Hicham Akbour and Said ibn Mohamed, to our host, Lala Kabira, and to Professor Campo, for helping me take a new look at my family’s western culture.

Salam aleikom.

(Peace be with you.)

Multiplicity

What do the Kavanaugh hearings, Halloween and Homer’s Odyssey all have in common?

Here’s my take on it.

  1. The Kavanaugh Confirmation Hearings

Someone recently said to me, “Joe, you were a lawyer once.  You understand evidence.  You can see that all the evidence supports my position on this.”  The person who said that to me could have been talking about the Kavanaugh hearings.  Like so much media coverage of the hearings, this fellow thought of a trial as a proceeding in which all the evidence points in one direction or the other.  My answer to him was that if I’d learned anything in thirty years of bar membership it was that my mother was right: there are always at least two sides to a story, and the truth is generally somewhere in between.  If juries heard only one side’s witnesses and arguments, every verdict would be unanimous.  Is it any wonder that if you tell me what news source you follow, I can pretty well predict how you feel about the world?

In years of practicing law, I saw over and over again how witness testimony polarized over time.  From the plaintiff’s perspective, the size of the wrong and the depth of the injury always grew, while from the defendant’s perspective, the strength of the alibi and the sense of indignation always did likewise.  Add the way politicians and the media frame a case as pitting good against evil, and you have everyone asking which of the witnesses is lying.  In this view, it has to be one or the other.  When I said, about the Kavanaugh hearings, that I thought both witnesses were telling the truth as they saw it, people looked at me like I was some sort of crazed lunatic from outer space.  The hearings, and especially the media coverage of them, left me shaking my head about what made them so typical of polarized American politics today: namely, a complete inability to empathize with the other side.

  2. Halloween

Yesterday, I came across a piece published last year in USA Today titled “5 Halloween Myths and Urban Legends, Debunked.”  Myth Number 3 was titled, “Satan is the Reason for the Season.”  While acknowledging that Halloween can be traced back to ancient Celtic harvest festivals, the article argued that the modern event has nothing to do with Satan, and never could have, as Satan is a Judaeo-Christian character that would have made no sense to the ancient Celtic polytheists who started those harvest festivals.  The article also points out that All Hallows’ Eve is the first of three days Christianity devotes to remembering the souls of the Christian faithful.  The religious origins of the modern holiday have to do with honoring the good dead, not the immortal Satan, the embodiment of evil.

But when it comes to Halloween, like the Kavanaugh hearings, people are polarized.  To many, Halloween will always be about pure evil.  For many on both sides, there’s a complete inability to empathize with the other.

  3. The Odyssey

My first exposure to the Odyssey was probably Kirk Douglas’s portrayal of the classical hero in the 1954 film version, Ulysses.  While I don’t remember much of that movie, I feel sure that Kirk Douglas’s character must have been very heroic, in the modern sense of that word – which is to say, a particularly good and capable guy fighting the good fight against evil.  My sense of the story has always been that the Cyclops, Poseidon and the suitors were monstrously bad while Odysseus wasn’t far shy of sainthood.  I want to take this opportunity to rave about the new translation I just finished reading by Emily Wilson.  It manages to be an amazingly easy and accessible read while keeping to a strict meter throughout.  For the first time, I didn’t have to “study” the epic, I could just read it, and do so at the same pace I might read John Grisham or Dan Brown.  As a result, I acquired a sense of the whole as I never have before.   I strongly recommend her translation, whether you’ve read the epic before or not.

Wilson’s excellent and engaging translation gave me several new perspectives about the story.  One is that the very name Odysseus can be translated as “hated” or at least “disliked.”  He’s easy to hate because he’s not just duplicitous, he’s multiplicitous.  There’s something for everyone to hate.  In Wilson’s words, he is “a migrant…, a political and military leader, a strategist, a poet, a loving husband and father, an adulterer, a homeless person, an athlete, a disabled cripple, a soldier with a traumatic past, a pirate, thief and liar, a fugitive, a colonial invader, a home owner, a sailor, a construction worker, a mass murderer, and a war hero.” Wilson gives much attention to how a person can be so complex and multi-faceted, at once so hated and so loved.  Her Odysseus is anything but the one dimensional champion of goodness that I grew up admiring. Perhaps we see ourselves in him.  Perhaps that’s what allows us to empathize.

It has become common to dismiss the pagan gods as amoral and often wicked libertines that no thinking person could believe were real.  Modern criticism of the Greek gods generally amounts to the argument that they are no better than us human beings.  Wilson points out that they’re essentially powerful human beings who happen to live forever; morally and ethically, they’re no better than we are.  This strikes me as a natural criticism of deity if you’re comparing it to a God conceived of as morally perfect and all knowing.  But have there been unintended consequences to conceiving of God as the embodiment of perfect goodness and omniscience?  What have been the consequences of living with the aim of achieving such righteousness ourselves?  What have I done by measuring my self-worth by comparison to a single, homogeneous and absolute goodness who has revealed Himself to me?  Has it worked to make me self-righteous?

One reason I’ve always been attracted to Greek myth is that the gods DO behave like human beings.  I’ve long felt that such portrayals allow us to see the consequences of our foibles in archetypal ways that can help us avoid mistakes as effectively as a lot of sermons I’ve heard.  At its core, the modern worldview suggests that the difference between good and evil is apparent, and that life is simple: if we choose correctly, we’ll live forever in the home of the gods.  The old pagan worldview holds that life is a constant struggle to sort out the difference between good and bad; that even in the home of the gods, it can be hard to distinguish right from wrong; and that sometimes, what seems good to one person (or god) seems bad to another.  In that worldview, there isn’t any Grand Commission of Justice to tell us which is which.

There’s little doubt in my mind that most of us would choose to live in a world where good and evil are clearly defined and labelled. But is the real world more nuanced and dependent on point of view than that?  Wilson points out that Odysseus is offered a perfect and immortal life by Calypso, but turns it down, choosing instead his mortal home in his mortal world.  Is that why we can love him and hate him at the same time?  There are good reasons the Bible has stood the test of time.  I think there are good reasons the Odyssey has too.

So: What similarities do I see between the Kavanaugh hearings, Halloween, and the Odyssey? For me, all three tell us something about the extent to which Platonic thinking about absolutes has changed the world.  In the pre-Platonic, polytheistic world of Odysseus, we could celebrate diverse and multiple perspectives; in the modern world, there must be a single and absolute truth distinguishable by its righteousness.  In the Christian Era, we’re used to hearing the gods of Greek myth dismissed as either “immoral” or “amoral.”  But in the Odyssey, Zeus is the god of justice and of hospitality toward strangers.  One of the most constant themes is that the gods will not approve of mistreating strangers.  It’s not that the Homeric gods don’t care about what’s good and right, but that (just like people) they don’t share a singular and unchanging view of what “goodness” consists of.

Of the many epithets applied to Odysseus (apart from being godlike), most begin with the prefix “poly-,” meaning multiple.  Odysseus is “poly-tropos” (multiply turning), poly-phron (multiply-minded), poly-mechanos (employing multiple devices), poly-tlas (multiply enduring), poly-penthes (multiply-pained), poly-stonos (multiply-sorrowed) and poly-aretos (multiply prayed for).  In a sense, this multiplicity makes him all things to all people.  It’s a big part of why he’s hated.  He is also incredibly adaptable, assuming different guises and characteristics in different situations.  His understanding of right and wrong is neither absent nor irrelevant – it is simply changing.

All our modern religious and political instincts tell us to condemn such inconstancy.  We’re trained to think in terms of Platonic absolutes, of clear and perfect Goodness on one side and clear and perfect Evil on the other.  We’re told we can identify the Truth and that we’re bound to adhere to it.  If Professor Ford was telling the truth as she saw it, then Judge Kavanaugh had to be lying, as he saw it.  If Halloween is not a glorification of the Judaeo-Christian God, it must be the work of Satan.  If Odysseus is inconsistent from one day to the next, he must represent an inferior state of being because perfect people have to be constant, unchanging and right.

But is there a difference between being constant, unchanging and right, and being rigid, intolerant, and set in our ways?

I’m not advocating for a rudderless, amoral view of the world.  Goodness is certainly worth striving for.  But how can I know for certain I’ve found it, when others disagree with me about what’s good?  Once again, I’m reminded of Alexander Solzhenitsyn’s words:

“If only there were evil people somewhere insidiously committing evil deeds and it were necessary only to separate them from the rest of us and destroy them.  But the line between good and evil cuts through the heart of every human being.  And who’s willing to destroy a piece of his own heart?”

I recently read Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided by Politics and Religion. The book is worth a read for many reasons, but the concept I found most thought-provoking was Haidt’s view on the evolutionary origins of human reason.  The traditional view is that the capacity for reason and logical analysis evolved in human beings as tools for reaching the best conclusions.  In reality, Haidt suggests, human beings wouldn’t have survived unless they could form immediate judgments about things without reasoned analysis.  (You can’t conduct a reasoned analysis of whether to run from a saber-toothed tiger or not.)  But we are also social animals whose early survival depended on the ability to work together in teams.  And to act as a team, we needed coordinated approaches.  Haidt says our social survival depended on leaders able to persuade others to follow their judgments.  According to Haidt, reason and logical analysis arose about the same time as language did, and they evolved for much the same social purposes: that is, not as tools of decision-making to help an individual determine what’s right, but as tools of persuasion to help convince others to go along with our judgments.  (In the process, we convince ourselves that our judgments are right, too, but that’s a result, not a cause.)

In this view, all of human reasoning has its origins in persuading others, in post-hoc justification to support judgments already formed.  If Solzhenitsyn and Haidt are right, then all the arguments between Professor Ford and Justice Kavanaugh, Democrats and Republicans, Christians and atheists, NPR and Fox News, Halloween enthusiasts and its enemies,  and indeed, between you and me, have to do with persuasion, not with what either one of us has always revered as “reason.”

In this sense, maybe Ford’s and Kavanaugh’s truths are similar.  Last year, I blogged about liking Halloween because it invited us to try out the worldview of a character we normally think of as strange, monstrous, or even evil.  Maybe it isn’t bad that we put ourselves in the shoes of terrible others on Halloween.  Maybe it’s okay to change our understanding of right and wrong at times, to try out new perspectives, just like Homer’s Odysseus did.  Maybe multiplicity helps us empathize.

After listing the many (contradictory) traits her Odysseus exhibits, Emily Wilson  writes, “immersing ourselves in his story, and considering how these categories can exist in the same imaginative space, may help us reconsider the origins of Western literature, and our infinitely complex contemporary world.”

Maybe she’s on to something there?

– Joe

The Biggest Delusion of All

When I was fourteen, my parents sent me to Texas to work on my grandfather’s ranch.  The first job would be to paint the fence around the field between his house and the road.  When I asked what else I’d be doing, my grandfather replied, “Let’s see how long it takes to paint the fence.”  Two months later, having painted for eight hours a day, I returned home, the job of painting the fence still not finished.

The word “comprehend” means taking something in all at once.  Since flat land let me see that fence all at once, it seemed comprehensible.  In fact, if I held my two thumbs in front of my face, I could make the fence  seem to fit between them.  So thinking it might take a few days to paint the fence seemed reasonable.  Problem was, my brain does tricks with perspective.   In reality, that field was probably close to ten acres, the fence probably ten football fields long.  “Comprehension” of very large things requires large scale trickery.  It depends on deception.

I should have known better than to trust my brain about the fence.   My teacher Paul Czaja had told our class the story of the Emperor’s chessboard and the grains of rice: place a single grain of rice on the first square, two grains on the second square, four grains on the third, and so on, doubling through the 64th square, and the rice on the board would be enough to stretch to the moon and back seven times.

The story of the Emperor and his grains of rice “wowed” me with the power of exponential growth.  I knew the moon was far away, and a grain of rice very small.  Stretching back and forth seven times had to make it a very large number indeed.

Of course I wanted to know what the total number of rice grains was, in numbers I could understand.  Rather than simply give us the answer, Paul asked us to compute the number ourselves.  Our homework was to multiply by two sixty-three times, keeping a running total as we went.

If Paul had told us that the total grains on the chessboard came to 18,446,744,073,709,551,615, I’d have realized that the number was larger than any I’d ever seen – but my brain would have attempted to make sense of the number’s “bigness” in the same way it had made sense of the fence in Texas – namely, by making it appear far smaller than it really was.   The only way to see a huge field was to make it seem small enough to fit between two thumbs.  The only way to make such a large number seem comprehensible is to reduce it to funny little shapes on a page that we call numerals.
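(For anyone who’d rather spare their fingers, a couple of lines of Python reproduce the total – one grain on the first square, doubling across the remaining sixty-three:)

    total = sum(2**k for k in range(64))   # 1 + 2 + 4 + ... + 2**63
    print(total)                           # 18446744073709551615
    print(total == 2**64 - 1)              # the doubling sum is one less than 2**64: True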

Upon seeing that large number on the page, my brain immediately perceives it’s large, because it is physically longer than most numbers I see.   But how large? If it takes up two inches on the page, my brain suggests it’s twice as long as a one-inch long number, the same way a two-inch worm is twice as large as a one-inch worm.    But my schooling tells me that’s wrong, so I count the digits – all twenty of them.   Since twenty apples is twice as many as ten apples, and since my brain has spent years making such comparisons, my intuition suggests that a twenty digit number is twice as large as a ten digit number.  But I “know” (in the sense of having been told otherwise) that’s not right.  Each added digit multiplies a number by ten, so a twenty-digit number is not twice a ten-digit number, but some ten billion times larger.

But here’s my real question: if my intuition is wrong, is it even possible for me to “know” how wrong?  Does “calculation” amount to “comprehension”?

Psychologists tell me my brain is useful in this inquiry only because it tricks me into seeing the distorted, shrunken “mini-icon” versions of things – numerals and digits, rather than the actual quantities themselves.  If we’re asked to describe what it feels like to be alive for a million years, we can’t.  We’ll never be able to.  And for the same reasons, it seems to me, we can’t “comprehend” the reality of 2^64 – only the icons, the mental constructs that work only because they’re fakes.

Consider the Emperor’s assertion that the rice would be enough to go to the moon and back seven times.  That mental image impressed me.  It made the size of 2^64 seem far more real than it would have if I’d merely seen the twenty digits on a page.  But do I really comprehend the distance between the moon and the earth?

The moon’s craters make it look like a human face.  I can recognize faces nearly a hundred yards away.  So… is my brain telling me that the moon is a hundred yards away?

I’ve seen the models of lunar orbit in science museums – the moon the size of a baseball two or three feet away from an earth the size of a basketball.  My brain is accustomed to dealing with baseballs and basketballs.    I can wrap my brain around (“comprehend”) two or three feet.  So when I try to imagine rice extending between earth and moon seven times, I relate the grains of rice to such “scientific models.”

But is that, too, an exercise designed to trick me into thinking I “understand”?  Is it like making a fenced field seem like it could fit between my two thumbs?

I have little doubt that analogies seem to help.  Consider that a million seconds (six zeros) is about 12 days, while a billion seconds (nine zeros) is about 31 years and a trillion seconds (twelve zeros) is 31,688 years.  Wow.  That helps me feel like I understand.  Or consider that a million hours ago, Alexander Graham Bell was still alive, while a billion hours ago, humans were living in the Stone Age.  A billion has twice as many digits as ten thousand, but it isn’t twice as big; it’s a hundred thousand times bigger.
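The conversions themselves are easy to check.  A few lines of Python, assuming a 365.25-day year, confirm the figures above:

    SECONDS_PER_DAY = 86_400
    SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

    print(1e6 / SECONDS_PER_DAY)     # ~11.6 days in a million seconds
    print(1e9 / SECONDS_PER_YEAR)    # ~31.7 years in a billion seconds
    print(1e12 / SECONDS_PER_YEAR)   # ~31,688 years in a trillion seconds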

Such mental exercises add to our feeling that we understand these big numbers.  They certainly “wow” me.  But is this just more deception?

Distrusting my brain’s suggestions, I decide to do some calculations of my own. A Google search and a little math tell me that seven trips to the moon and back would be about 210 billion inches.  Suppose the grains of rice are each a quarter inch long.   The seven round trips would therefore require 840 billion grains of rice.  The math is simple.  If there’s anything my brain can handle, it’s simple math.  Digits make it easy to do calculations.  But does my ability to do calculations mean I achieve comprehension?

Thinking that it might, I multiply a few more numbers.  My calculations show me that the Emperor’s explanation of his big number was not only wrong, but very wrong.  The number of grains of rice on the chessboard would actually go to the moon and back not seven times, and not even seven thousand times, but more than 150 million times!
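For anyone who wants to check me, here is that arithmetic – a rough sketch that assumes an average earth–moon distance of about 239,000 miles and quarter-inch grains:

    INCHES_PER_MILE = 63_360
    moon_miles = 239_000                      # rough average earth-moon distance
    round_trip = 2 * moon_miles * INCHES_PER_MILE   # inches there and back

    grain = 0.25                              # inches per grain of rice
    grains_per_trip = round_trip / grain

    total_grains = 2**64 - 1                  # the chessboard total
    print(7 * grains_per_trip)                # ~8.5e11 - the "840 billion" grains
    print(total_grains / grains_per_trip)     # ~1.5e8 - about 150 million round trips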

Such a margin of error is immense. I don’t think I could mistake a meal consisting of 7 peas for a meal consisting of 150 million peas. I don’t think I could mistake a musical performance lasting seven minutes for a musical performance lasting 150 million minutes. What conclusion should I draw from the fact that I could, and did, fail to realize the difference between seven trips to the moon and back, and 150 million trips?

The conclusion I draw strikes me as profound.  It is that I have no real  “comprehension”  of the size of such numbers.    I’ve retold Paul’s chessboard story for fifty years without ever once supposing that the Emperor’s “seven times to the moon and back” might be wrong.  Could there be any better evidence that I have no real sense of the numbers, no real sense of the distances involved?  The fact that I can’t really appreciate the difference between  two such large numbers tells me that I’ve exceeded the limits of any real understanding.   My brain can comprehend baseballs and basketballs.  Using simple tools like a decimal number system, I can do calculations.    But when I try to comprehend the difference between 18,446,744,073,709,551,615 and 18,446,744,073,709,551,615,000,000, am I really able to understand what it means to say that the second number is a million times larger than the first?

I got my first glimpse of the difference between "calculating" and "comprehending" about five minutes into Paul's homework assignment.  Just five minutes in, my brain was already playing tricks on me.  Pencil in hand, I already had too many numbers going around in my head; I was (literally) dizzy from arithmetic overload.  I made errors; I began to slow down, to be more careful.  The numbers were already absurdly large.  Five minutes after starting with a sharp pencil, my numbers were eight digits long; I was multiplying scores of millions, but no matter how slowly I went, the frequency of errors increased.  After ten minutes, I had to sharpen the pencil because I couldn't read my own writing.  After twenty minutes, my fingers hurt.  Soon, I could feel calluses forming.

Since the first eight squares of the chessboard had taken me about fifteen seconds to calculate, I'd unconsciously supposed that doing all eight rows might take me eight times as long.  But the absurdity of that notion quickly became apparent.  Looking up at a clock after half an hour, still not quite halfway across the chessboard, my inclination, even then, was to think that the second half would take about as long as the first.  If so, I'd be done in another half hour.  So I determined to finish.  A couple of hours past my normal bedtime, I assured my mother I'd be finished soon – notwithstanding finger pain that had long been crying for me to stop.  By the time I finished – about two a.m., the calculations having taken me over five hours to complete, despite the fact that I'd long since given up trying to correct errors – my fingers were so painfully cramped it seemed they'd never recover.

In this way, I started to “feel,” to “experience,” the hugeness of the number 18,446,744,073,709,551,615.

By reading this account, some may feel they have a deeper sense of the enormity of 2 to the sixty-fourth power.  But I'm willing to bet that if you've never done it before, and you now actually try to CALCULATE it yourself, by hand, you'll appreciate the hugeness in ways that symbols on a page – the stuff our brains use – can never convey.

As I look at the digits on the page, my brain is trying to spare me that visit to the land of reality.  It strives to shield me from calloused fingers and mental exhaustion with its easy comparisons, its suggestions to count digits, even its way of hearing words and using them to imagine a story about going to the moon and back seven times.  In the same way, perspective had tricked me into thinking I understood the length of a fence, as if to spare me a sore back, sore feet, sunburn and thirst for two months.  But the experience of painting the fence had taught me more about its length than framing it with my thumbs or even counting the posts between the rails.  I don't know how long it would have taken to finish painting that fence.  I could do the calculations, but only finishing the job would have really made me "comprehend" the time involved – and who knows, I might have died of heat exhaustion before I ever finished.

I should know that the farther away something is, the bigger it must be, if I can see it.  But through the deceit called perspective, my brain tells me precisely the opposite:  the farther away something is, the smaller it appears.  Stars more massive than the sun are reduced to mere pinpricks in the sky. When I remove a contact lens, my thumb looks like the biggest thing in the world.  What better evidence can there be that our brains are built to deceive us?

My brain (wisely) keeps me focused on things I need, like apples, and on things that can kill me, like woolly mammoths, men with rifles, fast-moving cars, or thumbs in my eye.  But to do this, my brain necessarily distorts the things that are far away, and the things that are many, and the things that are very much larger than me, because they are things I can do nothing about.  In fact, I suspect that if my brain were asked to identify the biggest, most important thing in the world, it would say it was me.

And that might just be the biggest delusion of all.

–Joe

There’s Nothing Like a Really Good Shower

Like many of you, I do some of my best thinking in the shower.  Have you ever wondered why?

Last night, my attention turned to the simplicity around me.  I was standing in a tub, with three walls and a shower curtain bounding my world.  Before me, four items of chrome: the shower head, a control plate, a faucet, and a drain.  At my side, a soap dish, a bar of soap, and a bottle of shampoo.  Once I’d turned the water off, there was nothing more.

When I opened the curtain, there was plenty more to see: a vanity, a mirror, a toilet, a pair of towel racks hung with towels, a bathrobe hanging from a hook on the door.  But as I inventoried this expanded-but-still-small world, I realized there was no end to the counting: three pictures on the walls, light fixtures, a light switch, a door knob, brass hinges, a toilet tissue dispenser, baseboards,  two floor mats, a patterned linoleum floor, and no fewer than twenty-six items on the vanity, from deodorant and toothpaste to a tub of raw African shea butter.  Two of the twenty-six items were ceramic jars, filled with scissors, tweezers, nail clippers, cotton swabs, and  other modern necessities.  Most of the items were labeled, little sheets of paper glued on them, each little sheet bearing product names, ingredients, and warnings in tiny fonts and a wide array of colors.

Early in fifth grade, Paul Czaja had our class use a sheet of paper, telescoped into a tube, to survey our surroundings.  The idea seemed too simple – easy to dismiss because we already “knew” the result.  But actually trying it proved us wrong.  Paul insisted that, one eye shut, we keep the other at one end of the scope for five minutes; he wouldn’t let us stop or look away.  Forced to view our classroom from these new perspectives, we were amazed at how different it became.  Desks, windows, blackboards and classmates disappeared, replaced by a tiny spider web  that trapped an even tinier bug in a corner; the pattern in the grain of a piece of wood; a piece of lint trembling in an unseen movement of air like a piece of desert tumbleweed.

As I toweled dry after my shower, the world of things too small to notice most of the time came into sharper focus.  My attention turned to things I go through life ignoring.  From the confines of my bathroom, I took stock of the unseen.

The room, I supposed, and no doubt my own body, were covered with bacteria.  (I might have found that thought abhorrent once, but today, nourished by probiotics and kombucha tea, I find it comforting.)   In the empty space between me and the mirror, I imagined all the even smaller things I couldn't see, the atoms of nitrogen and oxygen, the muons and the quarks, the billions of things that swirl around me, unseen, though I breathe them in and out, and though they sustain me.

I thought of things in the bedroom and the hall, and the things out in the yard, and things so far away that I couldn't have seen them even in the vast night sky beyond the bathroom's walls – because I don't have X-ray vision to see through those walls, and because most things more than a few million miles away are simply too faint and too small for my eyes.

But it wasn't a matter of distance, size and walls alone that limited my sight.  I thought of all the colors I'd never be able to see, because the cones in my eyes don't react to all the wavelengths of light that exist.  And moving past the limitations of sight, I thought how oblivious I am to odors: every one of those ingredient labels lists chemicals whose molecules a dog could probably distinguish even through their containers, yet all those stray molecules float into my nose unnoticed.

I hear but little of what there is to hear.  Some sounds are simply too quiet.  Others are loud enough to make dogs and teenagers come to attention, but too high-pitched for my adult ears to discern.  Others are at frequencies too low. And even dogs and teenagers hear but a tiny fraction of the oscillations that twitter, snap and buzz in the world around us.

Taste?  Surely, the world has more complexity to taste than five types of gustatory cells whose highest achievement lies in their ability – acting as a team – to distinguish between sweet, sour, bitter, salty and savory.

And what about the things we call "forces"?  How often are we conscious of gravity?  If I focus on it, I can imagine that I feel the gravitational pull of the earth, but have I ever felt the pull of the moon?  And have I ever once thought about the gravitational pull of the vanity, the toilet, and the doorknob?  How often do I focus on the domino effect of the electrons hopping and pushing, connecting the light switch to the fixture?  And do I ever think of the magnetic fields surrounding them?  Unless I'm shocked by the sudden release of static electricity, I go through life completely oblivious to its existence.

Perhaps most of all, I’m unconscious of myself – the flow of hormones  that affect my mood; the constant traffic in my nervous system that never reaches my brain, much less my conscious thought; the processes of liver,  kidney and thalamus that keep me going.

In short, the world I experience, through my senses, is but a tiny fraction of the real world in which I live.

Yet that’s not all.  I haven’t even begun to count the ways my brain deceives me.  In fact, it wouldn’t be doing its job if it didn’t distort reality. The tricks my brain plays on me take the already small portion of reality I’m able to sense, and make it appear to be something other than it is.

My eyes see two different views of the world, but – as if it’s afraid I couldn’t handle multiple points of view – my brain tricks me into thinking I see only one.

When I turned off the shower, my world was completely silent – or so it seemed.  I’d heard the cessation of the water coming down.  It being late at night, there were no voices from downstairs, no television blaring, so my brain told me – convincingly – that the bathroom was silent.  Only when I closed my ear canals by pressing flaps of flesh to cover them did I realize that the hum of ambient background noise was now gone.  That noise had been so normal, so much a part of the ordinary, that my brain had convinced me it wasn’t there.  (My highly evolved brain still wants to know: What’s the use of  listening to background noise?)

Early on, my brain tricked me into thinking that some things are up and others are down, and that up and down are the same everywhere.  (I spent a lot of time as a child worrying about the people in China.)  And it was so intent on perpetuating this deception that when my retinas saw everything in the world “upside down,” my brain flipped the world around to be sure I saw everything “right side up.”

One of the brain’s most convincing tricks is what it does with my sense of touch.  It has convinced me  I’m doomed to a life in touch with the ground;  I’ve often regretted my inability to fly.  But in fact, I’m told, the stuff of which I’m made has never  come in contact with the ground, or with any other stuff at all  – if it had, I’d have exploded long ago.  The sense of touch might better be called the pressure of proximity.  All that time I dreamed of  flying, I was floating all the while!

How about my sense of who I am?  I wonder if that's not the biggest trick of all.  When my body changes with every bite of food, every slough of skin, every breath of air I take – when hardly a cell or atom present in me today was there on the day I was born – is my very sense of self an illusion, created by my brain "for my own good"?

And so I surveyed the bathroom.  Having first considered the things too small to notice, or too quiet, or too far away, or at not the perfect frequency, and having then considered ways my brain tricks me, I next encountered a whole new category of deception.  As my eyes fell on various objects, I noticed something else my brain was doing.  For example: on the toilet tank was a vase full of flowers, but not real ones – pieces of plastic, molded and colored to look like real ones.  Another example: one of the pictures on the wall was of a swan and her cygnets – not a real swan, but a mixture of acrylics applied to a canvas, a two-dimensional image designed to give the illusion of life in a three-dimensional pond.  The painting was designed to make me think of something not there, and it did. I didn't think "acrylics," I thought "swan."   And as soon as it had done that, my brain had me thinking of the artist – my wife, Karen – and of her skill with a brush, and of her sense of color, and of some of the many ways in which she's blessed me through the years.  And I realized that all these mental associations, these illusions, these memories, form an extremely important part of the reality in which I live, despite the fact that they don't reside in the space between me and the mirror (at least not literally).  The flowers in the vase are just molecules of colored plastic, but my brain gives them a fictional existence – a story of smells and bees and fresh air and blue sky, and all the associations that "flowers" evoke in my brain.  The swan and her cygnets remind me not only of a wife who paints, but of our children, and of times we walked together, along water banks, watching swans and cygnets swim by. My mind, I realize, is a factory, churning out a never-ending assembly line of associations, all of which are things that "aren't really there."

And so, I conclude, I’ve spent a lifetime in a shower of a different sort –bombarded by  atoms, muons, quarks and dark matter, things so small I call them emptiness,  all the while pulling associations, memories, and narratives into my world that aren’t really there.

When I say they aren’t really there, I don’t mean to deny that Karen, and swans, and flowers, are real – but that memory itself is reconstructive.   My memories are hardly exact replicas of things I’ve experienced;  they’re present creations, constructed on the spot in a crude effort to resemble prior experience.  The result is affected by my mood, and by error, and by all sorts of intervening experiences.

And so, I  live in a world that isn’t the real world, but one extremely limited by my meager human  senses; one corrupted by a brain that’s determined to distort things, for my own good; one filled with the products of my own defective memory and my own boundless tendency to imagine things that aren’t there.  Somehow, I’m able to deal with the shower of inputs so created, over-simplified, distorted and augmented as it may be – in fact, I’m pretty well convinced that I can deal with it a lot easier than I could deal with the vast complexity of the “real thing.”

I woke up this morning hoping that I never lose sight of the difference between the two.

— Joe

The Last Word

With much sadness, I have just now changed this website’s description of one of We May Be Wrong’s founding members – from the present tense, to the past.

In 1960, a 24-year-old Paul Clement Czaja (January 9, 1936 – May 8, 2018) had just earned his Ph.D. in philosophy when he persuaded Nancy Rambusch (then headmaster of the Whitby School and founder of the American Montessori Society) to let him teach existential philosophy to children.  She was impressed with his enthusiasm and his willingness to work for practically nothing, but since she thought parents might not understand the importance of teaching philosophy to children, she asked if he wouldn't mind teaching other things as well.  So Paul "officially" taught creative writing, Latin and various other subjects not often taught to ten-year-olds.  But philosophy was his first love, and it found its way into everything.

Only fourteen years older than me, Paul was more an older brother than a teacher.  He showed me how to love the world around me; introduced me to the joy of learning everything I could about it.  The way a magnifying glass could make fire; the way Latin could turn language on its head yet still come out as modern English; the thrill of catching butterflies in nets; the way the Greek Alphabet could be painted with Japanese brushes and jet-black ink; the vital inner parts of dissected foetal pigs; the wonders of the Trachtenberg system of mathematical calculation; the wiggling of microscopic paramecia in pond water; the thrill of catching people and their stories with a 35 millimeter still camera, that of making our own stories with  a 16 millimeter movie camera, and then, the even weirder thrill of telling stories with frame-by-frame, stop-motion photography; the writings of Gertrude Stein, William Carlos Williams, and James Baldwin; the power of telling stories of our own  with just pen and ink.  We spliced and edited rolls of movie film we’d made and, somehow, we even enjoyed diagramming sentences, rummaging through grammar the way we searched for the Indo-European roots of words. Though I was not yet a teenager, Paul introduced me to Ingmar Bergman movies, to Van Gogh’s Starry Night, to Rodin’s The Thinker, and to Edward Steichen’s photographic exhibition,  The Family of Man.

To say the least, it was not your typical middle-school education.

They say that when a butterfly flaps its wings, it can have profound effects on the other side of the world – a concept I first heard from Paul, I’m sure.  If I hadn’t met him, he wouldn’t have written the recommendation that got me into Phillips Exeter, and I wouldn’t have… well, if a single butterfly flapping its wings can have a profound impact, having Paul as a teacher every day (winter and summer) for four impressionable years was like being borne to Mexico by millions of Monarchs.  We stayed in touch during my later school years, and then persisted in friendship as the difference in our ages seemed to vanish with the passage of time.  And so, I was pleased that he joined We May Be Wrong in 2016 as one of our founding members.

But now, it's time for a confession.  As we tried to get our new website off the ground, Paul proposed that WMBW publish a poem he had written.  Being a man of great faith, Paul wrote a lot about God – prayers, poems, meditations.  When he proposed that WMBW publish his poem, I disagreed on the ground that I didn't want the brand-new website to come across as "pro" or "anti" anything controversial.  I didn't want to risk alienating potential followers, be they liberal or conservative, Republican or Democrat, believers or non-believers, by implying some sort of hidden agenda.  (The ONLY agenda was to be the benefit of listening to others with an open mind.)  Holding the keys to the publishing platform, I declined to publish his poem lest it be misunderstood to evangelize about God, rather than fallibility.  But even then, I told him, once the website had been up for a while, we might be able to publish that sort of thing.

Well, the time has come.  I wish I’d published it before he left the earth he loved for the better one he yearned for.  For Paul,  I can only say a prayer of thanks for all he did for me, and for so many other children, and now, share his wonderful poem.  (It seems only right that he should have the last word.)

Fire in the Soup: A Creation Story

It happened

when

this earth

had just cooled down

from

being molten magma

to being

simmering

and steaming

rock,

and

when

the vaporous skies

had emptied

eons of towering cumulus

clouds of rain

making oceans

which

were so great

that

the whole sphere

became

much more a watery world,

and

the rocky land

was

but one large

continental island

there

in the middle

of a now

beautiful blue planet.

 

And then

when the heavens

were no longer

veiled

by that thick

envelope

of sulphurous cloud cover,

and

the earth’s atmosphere

became

pure and clear

and

allowed

the stars of the universe

to shine

so brightly

that

the night sky

seemed to be

white

with black peppery dots,

it

happened

that a flame

came streaking

through the sky

down

to earth

sizzling

the warm soup

of the sea

somewhere

changing

and

charging

that chemical mineral ooze

into

the very first

protozoa

that ever was

on this

so singular planet.

 

Later

when that protozoa

eventually became

thinking,

questioning,

wondering

man,

the idea

arose

that perhaps

that life causing

flame

which

once upon a time

sizzled

the oceanic soup

could be

the pure energy

that is

love,

and

if

that were

so,

then

all life

that

ever evolved

from

that first protozoa

would be

somehow

spiritual

and

of the eternal God —

for

philosophers

and

theologians

say

that

God is love.

Such a thought

seems to be

a happy,

hope filled,

heuristic

kind of

thinking.

–Paul Clement Czaja

 

Zooming In

Neil Gaiman tells the story of a Chinese emperor who became obsessed by his desire for the perfect map of the land that he ruled. He had all of China recreated on a little island, in miniature, every real mountain represented by a little molehill, every river represented by a miniature trickle of water.  The island world he created was enormously expensive and time-consuming to maintain, but with all the manpower and wealth of the realm at his disposal, he was somehow able to pull it off.  If the wind or birds damaged some part of the miniature island in the night, he’d have a team of men go out the next morning to repair it.  And if an earthquake or volcano in the real world changed the shape of a mountain or the course of a river, he’d have his repair crew go out the next day and make a corresponding change to his replica.

The emperor was so pleased with his miniature realm that he dreamed of having an even more detailed representation, one which included not only every mountain and river, but every house, every tree, every person, every bird, all in miniature, one one-hundredth of its actual size.

When told of the Emperor's ambition, his advisor cautioned him about the expense of such a plan.  He even suggested it was impossible.  But not to be deterred, the emperor announced that this was only the beginning – that even as construction was underway on this newer, larger replica, he would be planning his real masterpiece – one in which every house would be represented by a full-sized house, every tree by a full-sized tree, every man by an identical full-sized man.  There would be the real China, and there would be his perfect, full-sized replica.

All that would be left to do would be to figure out where to put it…

***

Imagine yourself standing in the middle of a railroad bed, looking down the tracks, seeing the two rails converge in the distance, becoming one.  You know the rails are parallel, you know they never meet, yet your eyes see them converge.  In other words, your eyes refuse to see what you know is real.

If you're curious why it is that your mind refuses to see what's real in this case, try to imagine what it would be like if this weren't so.  Try to imagine having an improved set of eyes, so sharp they could see that the rails never converge.  In fact, imagine having eyes so sharp that just as you're now able to see every piece of gravel in the five-foot span between the rails at your feet, you could also see the individual pieces of gravel between the rails five hundred miles away, just as sharply as those beneath your feet.  In fact, imagine being able to see all the pieces of gravel, and all the ants crawling across them, in your entire field of vision, at a distance of five hundred miles.  Or ten thousand miles.  What would it be like to see such an image?

***

How good are you at estimating angles?  As I look down those railroad tracks, the two rails appear straight.  Seeing them converge, I sense that a very acute angle forms – in my brain, at least, if not in reality.  The angle I’m imagining isn’t 90 degrees, or 45 degrees; nor is it 30, or even 20.  I suppose that angle to be about a single degree. But is it really?  Why do I estimate that angle as a single degree?  Why not two degrees, or a half a degree?  Can I even tell the difference between a single degree, and a half of a degree, the way I can tell the difference between a 90 and a 45?  Remember, one angle is twice as large as the other.  I can easily see the difference between a man six feet tall and one who’s half his size, so why not the difference between a single degree and a half a degree?  What if our eyes – or perhaps I should be asking about our brains – were so sharp as to be able to see the difference between an angle of .59 degrees and one of .61 degrees with the same ease and confidence we can distinguish between two men standing next to each other, one who’s five foot nine and the other six foot one?
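A little trigonometry offers a check on that intuition – a sketch, assuming the rails are five feet apart, of the angle they subtend at various distances down the track:

    import math

    GAUGE = 5.0                                  # feet between the rails (roughly)
    for distance in [50, 100, 300, 600, 1000]:   # feet down the track
        angle = 2 * math.atan(GAUGE / (2 * distance))
        print(distance, round(math.degrees(angle), 2))
    # 300 feet away, the rails span about one degree; 600 feet away, about half.

By this reckoning, the difference between .59 degrees and .61 degrees is roughly the difference between 470 and 485 feet of track – a distinction I certainly can't make by eye.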

***

Yesterday, I was preparing digital scans of my grandfather's Christmas cards for printing in the form of a book.  His Christmas cards are hand-drawn cartoons, caricatures of famous personalities of his day.  Each is clearly recognizable, from Franklin Roosevelt and Adolf Hitler to Mae West and Mickey Mouse.  Some of the images were scanned at 300 pixels per inch, some at 600, and so on.  Reflecting on pixel counts and resolutions so that my printed book would not appear blurry, I was testing the limits of my ability to distinguish different resolutions.   Of course, one neat thing about a computer is how it lets us zoom in.  As long as I zoomed in close enough, I could see huge differences between two versions of the same picture.  Every pixel was a distinct color, every image (of precisely the same part of the caricature) a very different pattern of colors – indeed, a very different image.  Up close, the two scans of the cartoon of Mae West's left eye looked nothing alike – but from that close up, I really had no idea what I was looking at; it could have been Mae West's left eye, or Adolf Hitler's rear end, for all I knew.  In any case, I knew, from my close-up examination, how very different the two scanned images of Mae West actually were.  Yet only when I was far enough away was I able to identify either image as being a caricature of Mae West, rather than of Hitler, and at about that distance, the two images of Mae West looked (to my eye) exactly the same.
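Anyone with two scans of the same drawing can repeat the experiment.  Here's a sketch using the Pillow imaging library – the filenames are stand-ins, not my grandfather's actual files:

    from PIL import Image, ImageChops

    # Two scans of the same cartoon at different resolutions (hypothetical files).
    a = Image.open("mae_west_300dpi.png").convert("L")
    b = Image.open("mae_west_600dpi.png").convert("L")

    def mean_difference(x, y, size):
        """Average pixel difference after bringing both images to a common size."""
        diff = ImageChops.difference(x.resize(size), y.resize(size))
        hist = diff.histogram()
        return sum(i * n for i, n in enumerate(hist)) / sum(hist)

    print(mean_difference(a, b, (200, 200)))     # "far away": nearly identical
    print(mean_difference(a, b, (2000, 2000)))   # "zoomed in": quite different

From far enough away (the small common size), the two scans converge toward the same picture; up close, the differences the zoom revealed are still all there.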

***

How long is the coastline of Ireland?

If I took a yardstick and walked the perimeter, I could lay my yardstick end to end the whole way around, count the number of lengths, and conclude that the coastline of Ireland was a certain number of feet long.  But if I used a twelve-inch ruler instead, following the ins and outs of the jagged coast a little more precisely, the result would be a larger number of feet than if I had used the yardstick, because the yardstick was assuming straightness every time I laid it down, when in fact the coastline is never perfectly straight.  My twelve-inch ruler could more closely follow the actual irregularity of the coastline, and the result I obtained would be a longer coastline.  Then, if I measured again, using a ruler that was only a centimeter long, I'd get a longer length still.  By the time my ruler was small enough to follow the curves within every molecule, or to measure the curvature around every nucleus of every atom, I'm pretty sure I'd have to conclude that the coastline of Ireland is infinitely long – putting it on a par, say, with the coastline of Asia.
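The claim is easy to test on a mathematical coastline.  Below is a sketch that walks a pair of dividers, set to various openings, along a Koch curve – a jagged mathematical line that, like a real coast, reveals new wiggles at every scale:

    import math

    def koch(p, q, depth):
        """Points of a Koch curve from p to q (q itself excluded)."""
        if depth == 0:
            return [p]
        (x1, y1), (x2, y2) = p, q
        a = (x1 + (x2 - x1) / 3, y1 + (y2 - y1) / 3)
        b = (x1 + 2 * (x2 - x1) / 3, y1 + 2 * (y2 - y1) / 3)
        peak = ((x1 + x2) / 2 - (y2 - y1) * math.sqrt(3) / 6,
                (y1 + y2) / 2 + (x2 - x1) * math.sqrt(3) / 6)
        pts = []
        for s, e in [(p, a), (a, peak), (peak, b), (b, q)]:
            pts += koch(s, e, depth - 1)
        return pts

    coast = koch((0.0, 0.0), (1.0, 0.0), 7) + [(1.0, 0.0)]

    def divider_walk(points, ruler):
        """Step dividers of a fixed opening along the curve; approximate length."""
        anchor, steps = points[0], 0
        for pt in points:
            if math.dist(anchor, pt) >= ruler:
                steps += 1
                anchor = pt
        return steps * ruler

    for ruler in [0.3, 0.1, 0.03, 0.01, 0.003]:
        print(ruler, round(divider_walk(coast, ruler), 3))
    # The smaller the ruler, the longer the "coastline" - with no limit in sight.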

***

How many rods and cones would my eyes have to contain, for me to be able to distinguish every ant and piece of gravel in every railroad bed, wheatfield and mountainside within my field of vision, at a distance of five hundred miles away?  How much larger would my brain have to be, to make sense of such a high-resolution image?  I suspect it wouldn’t fit inside my skull.
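A back-of-the-envelope calculation suggests an answer.  To pick out a five-millimeter ant at five hundred miles, each "pixel" of vision would have to subtend about six billionths of a radian.  A sketch, assuming a 120-degree field of view and a retina of roughly 120 million rods and cones:

    import math

    ant = 0.005                          # meters: a five-millimeter ant
    distance = 500 * 1609.34             # five hundred miles, in meters
    pixel_angle = ant / distance         # radians each receptor must cover

    field_of_view = math.radians(120)
    per_axis = field_of_view / pixel_angle
    receptors_needed = per_axis ** 2     # crudely, a square field of view

    HUMAN_RETINA = 1.2e8                 # ~120 million rods and cones
    print(f"{receptors_needed:.1e}")                  # ~1.1e17 receptors
    print(f"{receptors_needed / HUMAN_RETINA:.1e}")   # ~1e9 retinas' worth

A billion retinas' worth of receptors, for a single eye – before the brain even starts making sense of the image.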

***

Why did we, so recently, believe that an atom was indivisible? Why did it take us so long to identify protons, neutrons, and electrons as the really smallest things?  Why did it take us until 2012 to decide that that, too, was wrong – that not only were there quarks and leptons, but that the smallest thing was the Higgs boson?  And not until 2014 to decide that particles existed even smaller than that?

***

Given how long it took us to realize that our solar system was just one of billions in our galaxy, and how much longer to realize that our galaxy was just one of billions of galaxies, why are we now so confident of our scientists' estimates of the size of the Universe – especially when told that the "dark matter" and "dark energy" they say account for most of it are just names given to variables necessary to make their equations come out right?  That, apart from their usefulness in making those equations come out right, the scientists have never seen this stuff and have no idea what it is?  Is it really so hard for us to say, "We simply have no idea"?

***

A human baby can distinguish among hundreds, even thousands, of human faces.  But to a human baby, all chimpanzees look alike. And to most of us Westerners, all Asians look alike.  Why do babies treat Asians and chimpanzees like the rails of railroad tracks, converging them into "identical" images even when we know they are different?  Did our brains evolve not to maximize perception and understanding, but to make them most efficient?  In other words, are we designed to have limited perception for good, sound reasons – reasons that are important to our very survival?

***

Why do we think, and talk, and act, as if our brains are capable of comprehending reality in all its vast complexity?  Is it more efficient to feed and maintain fewer rods and cones than it would be to feed and maintain enough of them to see the difference between quarks and Higgs bosons, or the individual pieces of gravel between the railroad tracks on all the planets of Andromeda?

***

Mirror, mirror, on the wall: tell me, can I really achieve, within my brain, a true comprehension of the Universe? Or am I just like the Emperor of China?

– Joe

Hatred

I just watched a TED talk I liked.  The speaker (Sally Kohn) was articulate and funny; her message about hatred powerful.    Fearing that a synopsis of her talk would detract from the way she conveys her point, I’ll  simply share the link to her talk, with my strong recommendation.

https://www.ted.com/talks/sally_kohn_what_we_can_do_about_the_culture_of_hate?rss

But I do have one  disagreement with her.  At one point,  she refers to  “study after study after study that says, no, we are neither designed nor destined as human beings to hate, but rather taught to hate by the world around us…”

I’m not so sure.  Last year I saw a science show on TV that presented a series of studies of very young children; its disturbing suggestion was that we are born to hate.  Can anyone enlighten me about these studies, suggesting (one way or the other) whether hatred is learned, or innate?  A product purely of culture, or of biological evolution?

It has always seemed to me that while some of hate is surely learned, a predisposition toward it may be innate. But what would a predisposition toward hate look like?

Sally cites the twentieth-century researcher Gordon Allport as saying that Hatred occupies a continuum, with things like genocide at one end and "things like believing that your in-group is inherently superior to some out-group" at the other.  That much makes sense to me.  In fact, the very idea of a "hate continuum" with feelings of superiority lying at one end is why I think the answer to the innateness question may be important.

Whenever I hear it said that a positive self-image is important to mental health, I think of Garrison Keillor’s joke that in Lake Wobegon, everyone is above average.   I suspect the great majority of us think we’re at least slightly above average.  And don’t  psychologists say that that’s good?  Don’t we justify keeping our positive self-images by the corollary view that people who “suffer from a negative self-image” are likely unhealthy?  Don’t we think it would be beneficial  if everyone thought of himself or herself as above average?  Wouldn’t that mean an end, for example, to teen suicide?

But even if I’m far below average, there are at least some people in the world who are not as good (or as smart, or as fit, or as valuable) as me.  No?   And if I think my liberalism is superior to your conservatism, or the other way around,  you must lack some quality or insight I possess, no?  Does “feeling good about myself” require that, in some way, I feel superior to others?

Maybe not.  Maybe my positive self-image need not depend on comparing myself to others – maybe I can see value in myself – have a positive self-image – without thinking of myself as superior to anyone else at all.  But the only way I can discern to do that is to see equal value in everyone.  And if we're talking about wisdom or intelligence or the validity of things in which we believe, that means that my own power of discernment is no better than the next guy's; that everything I believe in has value, but everyone else's beliefs have equal value.  And I see great debate about whether that's desirable. Does it require me to abandon all my convictions?  To forego all my beliefs?  What does it even mean to say that my belief in God has no more value than your belief in atheism, or vice versa?  Can I really believe in anything, if I think an opposing belief is just as "good"? I think most of us say no.  I think that, for most of us, feeling good about ourselves and our beliefs is only possible through at least implicit comparison to others, a comparison in which we feel that our beliefs are at least slightly superior to somebody else's.

Even if it’s both possible and desirable, it strikes me as very, very hard to have a positive self image without feeling such superiority.  I mean, can I really have a positive self-image if I think I’m doomed to be the very worst person on earth, in every respect?  It certainly seems likely that, for many, most or all people in the world, positive self-image depends on feeling superior to at least some others, in at least some respects.  I’d venture the guess that a tendency toward positive self-image (in comparison to others) has evolved in our species because of its evolutionary health benefits.  In any case, I suspect there’s a strong correlation between adults who feel their beliefs are superior and adults who feel disdain for the beliefs (or intellects) of others, and a strong correlation between those who feel disdain for the beliefs and intellects of others and those who hate them.  At the very least, positive self-image and a feeling of superiority seem at least early stepping stones in the direction of Hatred.

However, my suspicion that the seeds of Hatred are themselves innate doesn't depend entirely on positive self-image and feelings of superiority.  The science show I watched last year dealt not with self-image, but with group identification and preference: the idea that we're willing to assist and protect those who are most like ourselves, while directing the opposite (competition, aggression, violence) at those who are unlike us.

“My God, my family, and my country.”   The familiar formula implies a great deal, I think, about the subject of identity, as does the advice we give to our children: “Don’t ever talk to strangers.”  Why do we alumni all root for the home team?  Why would most of us save our spouse and children from an inferno first, before saving strangers, if we save the strangers at all?  Why do we lock our doors at night to protect those we know, while excluding those we don’t?  Why do we pledge allegiance to our respective flags?

(That last one's easy, of course, if we believe that we Americans pledge allegiance to our flag because our country is the greatest on earth.  Perhaps I should really be asking why all the other people in the world – who far outnumber us – pledge their allegiance to their flags, when they live in inferior countries?  Are they uninformed?  Too stupid to recognize our superiority? Aware of our superiority, but unwilling to admit it, because of selfishness, dishonesty, or even evil design?  In which case, can Hatred be far behind?)

Why do we form Neighborhood Watch groups, erect walls on our borders, finance armies for self-defense, and impose tariffs on trade?   Is it not because we prefer the familiar, and because that preference is in our self-interest?  And isn't self-interest as true of groups as of individuals?  In evolution, groups that look out for each other – that favor those most like themselves – do well, while treating dissimilar "others" with suspicion and distrust.  (We know that those like us aren't dangerous, hostile predators, but fear that unknown strangers might be.)   In contemplating first contact with aliens from other worlds, some of us favor holding out olive branches, others making some sort of first strike; but disagree as we might on how to greet them, we all tend to think in terms of a common goal: to preserve humanity.  We therefore focus on the need for global unity in facing the alien challenge.  But what is it that causes us to favor "humanity" over alien beings, when we know absolutely nothing about those alien beings?  Isn't it because we know absolutely nothing about them?  Isn't it because there is, innate within us, a bias in favor of those who are most like ourselves?

Consider the following continuum, as it progresses from the unfamiliar to the familiar:

(1) We spend millions to combat and eradicate bacteria, giving Nobel prizes to those most successful in the effort;

(2) We spend some (but less) to eradicate mosquitoes, which we swat unthinkingly;

(3) By contrast, we feel bad if we run over an armadillo on the road, but what the heck, such accidents are unavoidable;

(4) We try not to think much about slaughtering millions of cows, but we do it on purpose, because we have to eat;

(5) Most of us abhor the idea of ever eating a monkey; and

(6) We condemn human cannibalism, abhorring murder so much that we imprison murderers, even if we oppose the death penalty because human life is sacred.

I think that assigning things to their place on such a continuum based on how much they seem similar or dissimilar to ourselves reflects our innate, natural preference for those most like ourselves.  Yet the tendency to feel safety in, and preference for, those who are most like ourselves, is precisely what leads to racism, no?

So, is this preference natural and good?  Or is it something to resist?  Should we be proud of our tendency to fight for our God, our country, our state, our species, our family, our planet –  and to disdain our enemies – or should we be suspicious of that tendency, aware that they largely result from the accidents of birth?  And does our tendency to root for the home team – not to mention our loyalty to political ideals –  exist only because we’re able to see the good in the familiar, while understandably blind to the good in the unfamiliar?

We don't see what roosters see in hens.  We're blind to what bulls see in cows.   But just as we can't feel the love one three-headed Martian feels for another, I submit we won't be able to appreciate the goodness that aliens will be striving to preserve when they descend upon us, maws open, preparing to treat us the way we treat swine.  I want to know WHY we are all in agreement on the importance of preserving our species, even if it means the poor aliens go hungry.   And I doubt it's as simple as loyalty to good old mother earth, as I suspect we'd probably be happy to negotiate a peace with the invaders by offering them, say, all the world's polar bears and squirrels, provided they'll agree to leave humans alone.  This preference for humanity would prevail in that moment, I believe, never mind the national and regional warring between earthlings that had preceded it.  And it would seem strong enough to survive even if the alien species were acknowledged to be technologically "superior" to us.  But in that case, would our efforts rest on a reasoned belief that, at least morally, if not technologically, we are superior to such alien species?  Or would the instinctive feeling of moral superiority be only a disguise in which the instinct for self-preservation – and the consequent preference for things most like ourselves – had clothed itself?

I don’t claim to have the answers.  Whether we deserve to defeat alien invaders, whether we ought to value human beings more than chickens or mosquitoes, whether we ought to fight for our flag, these are not the issue here.  My point is that I take our allegiance to things most like us to be innate, whether it’s good or (in the case of racism) abhorrent.  I think the preference is a natural, inborn one, a part of who we are, whether we like to admit it or not –and that it’s a tendency terribly hard to get rid of, as our struggle with racism shows.

For the type of reasons Sally suggests, I believe that understanding our feelings of superiority and our preference for the things most like ourselves is the key to overcoming Hatred.  But if we think of Hatred as merely cultural, as merely something we've "learned" from society, I fear that, as individuals, we may be tempted to think we've already rid ourselves of it, or that we no longer need to be alert to its presence deep in our hearts.  If we see it only as something others do – if we fail to see at least the seeds of it, innate in ourselves, ready to manifest itself in our own actions – we may be the Hateful ourselves.

– Joe