Thanksgiving

After feeling profoundly embarrassed for our country since election day, I am feeling thankful today. First, for the fact that much of the Earth is still green and the sky still blue; second, for my family and friends; and last, for the privilege of living in a country where most elected officials graciously submit to rule by those they disagree with.

Four Billion Years

Most of you have probably recognized my obsession with matters of scale – the idea that our brains are wired to prevent us from being able to grasp orders of magnitude.  Whether it be atomic particles measured in Planck lengths, galaxies measured in light years, or even our difficulty estimating the number of grains of sand on a beach, I’ve been pondering the impossibility of genuine comprehension. Can we only deceive ourselves into thinking we can comprehend such numbers and distances? 

As a child visiting the American Museum of Natural History in New York, I was impressed by the timeline of life on Earth laid out as a wall mural, and by the fact that mankind occupied such a small section of it, compared to the history of all life.  But then I noticed that the scale had changed to give humanity more space than its proportionate share, as if children wouldn’t be able to handle the relative insignificance of humanity if they saw the real thing.  I don’t know if that mural has been redone since the 1960s, but the Museum still offers for sale what they call “The Big History Timeline Wallbook” for $19.95. Their website describes the six-foot-long timeline as “divided into 12 sections covering both natural history as well as the history of human civilizations.”  As I see it, giving human civilization one twelfth of a timeline of all existence is bound to leave kids with an inflated sense of the importance of humanity.

I’ve been trying to make things (spreadsheets, videos, books, diagrams) that don’t collapse their scales to appease our need to see ourselves as important.

For example, I recently tried to get a feel for the height of Mt. Everest compared to the size of the planet as a whole.  I’d seen plenty of diagrams in books and on the internet which show mountains on the surface of the Earth, but whenever the whole Earth itself was represented, the images were labeled with the caveat, “Not Drawn to Scale.”  The reason, of course, is that the diameter of the Earth is so much greater than the height of Everest that for an image of the Earth to appear on one page of a book, Mt. Everest would only be as high as a human hair is wide.  Such problems drive illustrators to resort to deception.  If the people at the American Museum of Natural History didn’t resort to collapsing scales, their six-foot-long diagram of history since the big bang would leave less than two feet for the history of all life on Earth, about 1/1,000th of an inch for the history of homo sapiens, and only about a twentieth of that for what we think of as human “civilization.”
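
For anyone who wants to check that arithmetic, here is a minimal sketch in Python; the round figures (13.8 billion years since the big bang, 200,000 years for homo sapiens, 10,000 years for civilization) are my own assumptions, not the Museum’s numbers:

```python
# A quick scale check on a six-foot (72-inch) timeline of the universe.
# Assumed round figures: 13.8 billion years since the big bang,
# 4 billion years of life, 200,000 years of homo sapiens,
# 10,000 years of human "civilization."
TIMELINE_INCHES = 6 * 12
AGE_OF_UNIVERSE_YEARS = 13.8e9

for label, years in [("all life on Earth", 4e9),
                     ("homo sapiens", 2e5),
                     ("human civilization", 1e4)]:
    inches = TIMELINE_INCHES * years / AGE_OF_UNIVERSE_YEARS
    print(f"{label}: {inches:.6f} inches")

# all life on Earth: 20.869565 inches (under two feet)
# homo sapiens: 0.001043 inches (about 1/1,000th of an inch)
# human civilization: 0.000052 inches (about a twentieth of the above)
```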

Every picture of Earth’s cloud-studded atmosphere I’ve seen has made it seem to extend quite far above the surface of the Earth – far higher, say, than Everest.  And we’ve all wondered why the atmosphere of a planet hurtling through space at sixty-seven thousand miles per hour isn’t torn to shreds.  But when I looked, I couldn’t find an image showing the thickness of the atmosphere compared to the thickness of the Earth.  It occurred to me that an Excel spreadsheet might be a handy tool for grasping what I wanted to grasp, so I made such an image myself.  Starting at the center of the Earth, I let each row of the spreadsheet represent one more kilometer, dragging the increment down the page for the requisite number of kilometers.  The image so drawn enabled me to “see” the picture that resulted – but only sort of.  Holding my finger down on the scroll bar continuously, the kilometers from the Earth’s center to its surface sped by in a blur that lasted about twenty seconds.  Then, suddenly, Everest and the atmosphere flashed by – so fast I couldn’t see them at all.
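
If you’d rather not build the spreadsheet yourself, a few lines of Python capture the same experience. The figures are standard round numbers (a mean radius of about 6,371 km, Everest at about 8.8 km, roughly 100 km up to the Kármán line at the conventional “edge” of the atmosphere), and the twenty-second scroll is just my own observation:

```python
# One spreadsheet row per kilometer, from the Earth's center on up.
# Round figures: mean radius ~6,371 km, Everest ~8.8 km high,
# ~100 km to the Karman line. Scrolling all the radius rows took
# me about 20 seconds.
EARTH_RADIUS_KM = 6371
EVEREST_KM = 8.8
ATMOSPHERE_KM = 100
SCROLL_SECONDS = 20

rows_per_second = EARTH_RADIUS_KM / SCROLL_SECONDS
print(f"scroll speed: ~{rows_per_second:.0f} rows (km) per second")
print(f"Everest passes in ~{EVEREST_KM / rows_per_second:.3f} seconds")
print(f"the atmosphere passes in ~{ATMOSPHERE_KM / rows_per_second:.2f} seconds")

# scroll speed: ~319 rows (km) per second
# Everest passes in ~0.028 seconds
# the atmosphere passes in ~0.31 seconds
```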

Never again will I forget that our atmosphere is a thin film barely coating the surface of the planet.  The Earth’s hurtling through space has about as much chance of disrupting our atmosphere as a pitch toward home plate has of wiping the grass stains off a baseball.

In any case, a couple of years ago, I became determined to figure out a way to represent the history of life on Earth, giving mankind no more than our due space within the whole.  I wanted a timeline that did not vary in scale.  I wanted to express the place held by humanity not only with the “knowing” that our brains are capable of, but through the more real “experiencing” that I believe gets us closer to genuine understanding of very large numbers.  How might I use an experience to enhance my intellectual understanding?

One early idea was to use ping pong balls threaded together in a very long string, with a person having to actually walk the length of the string to experience how long it would be.  I wanted each ping pong ball to represent a length of time that a human being might actually understand.  I thought perhaps a thousand years – the time since William the Conqueror – might be such a number, to be represented by a single ping pong ball.  The math was easy – at that scale, I would only need four million ping pong balls to represent the four billion years of life on Earth.  But the idea of someone walking the length of the string came to an end when I realized that, leaving a 10 mm space between each 40 mm diameter ball, a string of four million such balls would be more than a hundred and twenty-four miles long.   Few people would be willing to walk such a length to “experience” that amount of time, so I felt compelled to change my approach. 
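
The arithmetic, for anyone checking my math (one ball per thousand years, 40 mm balls with 10 mm gaps):

```python
# One ping pong ball per thousand years of life on Earth:
# a 40 mm ball plus a 10 mm gap, or 50 mm per ball.
YEARS_OF_LIFE = 4_000_000_000
YEARS_PER_BALL = 1_000
MM_PER_BALL = 40 + 10

balls = YEARS_OF_LIFE // YEARS_PER_BALL
length_km = balls * MM_PER_BALL / 1_000_000
print(f"{balls:,} balls, {length_km:.0f} km, {length_km / 1.609344:.1f} miles")

# 4,000,000 balls, 200 km, 124.3 miles
```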

Marbles would be too tough to string together.  Lead fishing weights would be smaller and easy to crimp onto a long fishing line.  But the combined weight of four million fishing weights – many tons of lead, at even a few grams apiece – was prohibitive.

So I turned from ping pong balls and fishing weights to the idea of making a sound or video recording.  But it quickly became apparent that an hour-long video couldn’t express the concept meaningfully. What if such a video counted off the history of life on Earth in increments of a thousand years each?  It would be pointless if the numbers on the screen were simply a blur; each would have to be visible for, say, half a second, even to be recognizable.  But at two thousand years per second, watching such a video would take more than three weeks of continuous viewing. The video idea went the way of the ping pong balls.
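
Again, the arithmetic behind that estimate:

```python
# Four billion years, shown a thousand years at a time,
# each number visible for half a second.
YEARS_OF_LIFE = 4_000_000_000
YEARS_PER_NUMBER = 1_000
SECONDS_PER_NUMBER = 0.5

total_seconds = YEARS_OF_LIFE / YEARS_PER_NUMBER * SECONDS_PER_NUMBER
print(f"{total_seconds:,.0f} seconds, or {total_seconds / 86400:.1f} days nonstop")

# 2,000,000 seconds, or 23.1 days nonstop
```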

The more ideas I considered, the more I felt myself inching closer to understanding the length of time involved, but the more it seemed that real understanding of such huge durations was impossible, as a practical matter.  We can say the words, “Life has been on Earth for four billion years,”  and we can multiply and divide numbers, but does any of that mean we can really appreciate how long four billion years is? By writing characters on a blackboard, we can manipulate decimal points and zeros to deceive ourselves into a false sense of “understanding” such large numbers.  We have also developed whole systems of math that make perfect logical sense of the so-called “imaginary” numbers like the square root of negative two.  We make calculations and even practical use of them, but none of us really “understands” them. I think big numbers are the same.  Math helps us deceive ourselves into a feeling of understanding when no real understanding exists. 

Still, we probably get by more efficiently in this real world precisely because our brains can’t comprehend such immensity. 

I remain keen on distinguishing between mere intellectual “understanding” (aka self-deception) and true experiential understanding.  Obviously, we can’t actually experience a duration as long as the four-billion-year history of life on Earth.  But try counting down that history, even a million years at a time, from four billion years ago, and see how far you get.  “Four billion.  Three billion, nine hundred and ninety-nine million.  Three billion, nine hundred and ninety-eight million.  Three billion…” It would be like singing yourself all the way through “100 Bottles of Beer on the Wall” dozens of times in a row. If we can’t bring ourselves even to say the numbers, counting a million years at a time, how can we think we really understand them?
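
To put a number on the tedium – assuming, generously, that each of those mouthfuls takes about three seconds to say aloud:

```python
# Counting down four billion years aloud, at ~3 seconds per number.
for step_years in (1_000_000, 1_000):
    numbers = 4_000_000_000 // step_years
    hours = numbers * 3 / 3600
    print(f"{step_years:>9,}-year steps: {numbers:>9,} numbers, ~{hours:,.0f} hours")

# 1,000,000-year steps:     4,000 numbers, ~3 hours
#     1,000-year steps: 4,000,000 numbers, ~3,333 hours (months, nonstop)
```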

In any case, while my string of ping pong balls will never see the light of day, my effort to put the life span of humanity into perspective has finally resulted in something real – a book titled “It’s Been Four Billion Years: The Story of Life on Earth a Million Years at a Time.”

It’s at the printer’s now, and will be available from Amazon.com and other retailers in September.  The retail price is $19.95 – the same as the History Museum’s deceptively skewed timeline.  (Needless to say, my book maintains a constant scale, all the way through.)

The best part is that, in appreciation for your having subscribed to this blog, I’ll send you a free copy when they come out next month – just send me an e-mail with your snail-mail address.

In the meantime, I’m thinking of doing another book, trying to express extremes of distance and size….

Until then, peace to you all.

The Meaning of Large and Small

I continue to have difficulty comprehending the very large and the very small.

Yesterday, thinking about the word “small” itself, I got to wondering what I mean when I call something small.  I wondered how I would phrase a definition of the word, if I were assigned the task of creating one for a dictionary.  For example, I could say “small” means the same as “little.” But what would that add, say, to the understanding of someone who spoke only Chinese, or Martian?  My dictionary in fact defines “little” as “small in size.” Could I define “small” other than by simply using an English synonym for it? If you’re a word nerd like me, you might try doing this yourself. If you do, I’d be interested in hearing what you come up with. 

I think my big American Heritage Dictionary (Houghton Mifflin) struggled with the same problem. In that dictionary, as noted, “little” is defined as “small in size,” while “small” is defined as “being below the average in size or magnitude.”  Fair enough, I thought, until I considered some other definitions in that same book, where “size” is defined as “the physical dimension, magnitude or extent” of something, but “magnitude” is defined as “greatness in size or extent,” and “extent” is defined as “the range, magnitude or distance over which a thing extends.”

Considering all these definitions together,  I imagined my Martian visitor persuaded that abstractions like “small” and “little” mean the same thing, but having no idea what that is. When the words are only defined in terms of each other, how can anyone tell what they really mean?

Though I felt I was going in circles, I kept trying.

“Great,” I learned, is “Very large in size,” while “large” is “of greater than average size or extent.”  So great means large and large means great.  Great! But if I didn’t already have an idea of big and small, where would that get me?

Of course, linguists have long recognized this circularity of language.  The problem isn’t just defining “small” by using a synonym like “little.”  It’s more general than that, and it ultimately comes from the absurdity of trying to define words using other words.  If we define word A as equal to words B, C, and D, the problem is that no matter how many words we go through, every set of words is equivalent to nothing but other sets of words.  B, C, and D are defined by E, F, and G, and those by H, I, and J, but H, I, and J are defined by A, B, and C.  Even in a language of 50,000 words, that set is finite – a closed loop, explainable only by itself.  Every word, sooner or later, can only be defined by reference to itself or to words that it has helped to define.  And in any such closed system, entropy sets in.

The definitions of “small” and “large” above both make use of the concept of “average,” which might seem helpful, because “average” takes me from the world of words into the world of mathematics.  If small is less than average and large is greater than average, I’ve made progress – provided I know what “average” means.  But what do I mean by “average”?

My mathematical concept of “average” requires a finite set of numbers to consider.   I can say that the average of two 12’s, one 17 and one 19 is 15, but only because I know how many of each number I have for my calculation.  I’m dealing with a known, fixed, quantifiable set.  I might even be forgiven if I say that the average (in size) of one golf ball, one tennis ball, and one soccer ball is (more or less) a baseball, because, again, I’m dealing with a known set of data.  But what data set — what objects, and how many of them — should I use to compute an average, on my way toward understanding that small is below average, and large is above average?  The average size of all things? If I take the smallest things I know, like quarks, and the largest, like the whole universe, don’t I still need to know how many quarks there are, and how many stars of various sizes, before I can compute an average size of things – and therefore know what it means to be above or below the average of all things, and therefore, inherently large or small?

Meanwhile, in order to count, say, my dictionary toward the average size of all things, do I count it as a single thing, about 14 inches in length and weighing a few pounds, or as a thousand smaller things, called pages, or several billion even smaller things, called molecules?  Is my car just a single car, or is it an engine, a body, a chassis, and four wheels? Obviously, if I count myself as one person of my size, I have a very different impact on the “average” of all things than if I count myself as a few billion cells of far smaller size. With such questions pervading every thing and every size, I submit, it is impossible to formulate a data set capable of yielding any meaningful concept of an “average” in the size of all things – yet Houghton Mifflin has no problem saying that small things are things smaller than “average,” and large things larger.

(By the way, I submit that it makes no difference if we think in terms of medians. Using medians, I suspect our calculation would yield something only slightly larger than a quark, and virtually everything else would then be considered very, very big by comparison. And if we used the halfway point between the size of a quark and the universe, we’d get something half the size of the universe, and everything else would be very, very small. Can our feeling that we understand what’s big and what’s small be so dependent on different mathematical ways of thinking about averages?)
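
For what it’s worth, that halfway-point claim is easy to verify. The sizes here are loose assumptions of mine (a quark at most about 1e-19 meters across, the observable universe about 8.8e26 meters):

```python
# Halfway between the smallest and largest sizes we know of.
# Loose assumptions: a quark is at most ~1e-19 m across;
# the observable universe is ~8.8e26 m across.
quark_m = 1e-19
universe_m = 8.8e26

midpoint_m = (quark_m + universe_m) / 2
print(f"midpoint: {midpoint_m:.1e} m -- half the size of the universe")

# midpoint: 4.4e+26 m -- half the size of the universe
# Next to that midpoint, everything from quarks to galaxies is "small."
```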

Pulling out that big dictionary again, I wonder, what makes it big?  At first glance, it doesn’t seem nearly as big as my car, yet I call it big while I call my car small.  Surely, I mean that my dictionary is big because it has greater magnitude – more pages, and more words — than other things to which I tend to compare it (roughly speaking, those other things I also call books).  I call my Toyota  small because it has less trunk space and passenger seating than my daughter’s SUV.  Could size, then, be a concept that is relative?  It seems so – but relative to what?

I find this last question intriguing.  I think a book big when I compare it to other books, and a car large (or small) when I compare it to other cars.  That concept of relative size seems easy. But if I think for a second that a star can only be thought large in comparison to other stars, I quickly retreat from my relativistic comfort zone.  Surely  stars are always, and absolutely, larger than books, and surely  books are always, and absolutely,  larger than quarks.  If so, surely there is something about size that is not relative to its own group of similar objects – something absolute which enables me to feel quite strongly  that one group of things is inherently larger than some other group of things.   And so, once again, I’m back to square one, wondering what makes one thing large and another thing small.

In desperation, I consult the dictionary again.  This time, instead of “large” or “small,” I look up the word “big.”  (After all, what could a big dictionary be better at defining?)

“Big” is defined by the folks at Houghton Mifflin as, “Of considerable size, magnitude or extent; large.”  Size, magnitude, extent, large – nothing new here.  Big is large, and large is big. For a moment, I’m disappointed.  But wait.  (There’s more!)   I look up “considerable.”  The first definition of “considerable” is “large in amount, extent, or degree.” (Arghhh!  Large means considerable, and considerable means large.  I feel like I’ve been here before.)  For a moment, I consider looking up the new words “amount” or “degree,” but I decide that effort won’t likely be useful.  Then my eyes fall on the second definition of “considerable.”

“Worthy of consideration.” 

Ah! We’ve left the world of physical dimensions for someplace outside the closed loop of size words. Am I finally on to something?  I look up “worthy.”  I find, “Useful.  The quality that renders something desirable, useful, or valuable.”

I think I’ve found the answer I’ve been looking for.  Something is “considerable” if it is worthy of consideration, and it is worthy of consideration if it is useful.  Size is indeed relative, but relative, primarily, to what I find useful.

I recently watched Season Six of the survival series “Alone.”  (Synopsis: Ten people competing to survive in Ice Age conditions.) In that world, a moose was important, both because, unlike a squirrel,  it could kill you, and because, if you could kill it, it could feed you for a very long time.  The series contestants considered thirty-pound Northern Pike or lake trout more valuable than ten-pound rabbits, which were in turn more valuable as food than even smaller things like mice.  The closer in size something was to the contestants, the more nutrition it brought.  The more “worthy of their consideration” it was. 

The contestants on “Alone” embraced the value of living as our primitive ancestors did, and I find myself reflecting that it was this ancestral way of life that shaped our species’ understanding of words like “big” and “small.”  Pigs and cows and grizzly bears were more important than, say, mosquitoes. As human beings evolved, those who paid most attention to things about their own size — things between, say, the size of spiders and mastodons — survived and reproduced better than those who paid attention to grains of sand or the vastness of the Universe.  I conclude that, as we generally use the terms “small” and “large,” absent a context which suggests a different relative comparison (a car being small compared to other cars), the default meaning is not really relative to some incalculable “average,” but relative to ourselves.  That is, smaller or larger than we are.  I myself create my sense of the “average” size of things.  Things smaller than me are small, and things larger than me are large. Things are large or small relative to me. And from an evolutionary perspective, it is the things closest to my own size that are (subjectively) important to me.

But are pigs and grizzly bears really more important than mosquitoes, objectively?  Exploding supernovae and supermassive black holes are not only extremely large; astronomers and cosmologists now tell us that if it weren’t for them, we wouldn’t exist, as they create the very elements from which we’re made.  Those who study life’s origins tell me all complex forms of life began when bacteria became essential parts of our cells, so we wouldn’t exist were it not for bacteria.  And the importance of bacteria is not just historical.  If, today, things like plankton and bacteria stopped being available as food for larger things, moving up the food chain, we’d have nothing to feed on ourselves.  And all the time, quantum physicists remind us that without things as small as quarks, we wouldn’t exist either.

So it isn’t really true that lions, tigers and bears are most important to my existence.  Nor were they, in fact, most important to our ancestors’ existence.  From an evolutionary perspective, we succeeded by paying attention to things our own size, not because such things were more important to us, but because we could actually do something about them if we paid attention to them.  Evolution proved that paying attention to them was useful to our survival.

But if the issue is usefulness to me – whether I can put my understanding of something to use, to help me eat it or to keep me safe from it – which should I consider more worthy of consideration, more considerable in size, to my current life in the 21st century: a black hole, or a virus?  If the answer is that I can do more with my understanding of viruses than I can with my understanding of black holes, why do I think a black hole more worthy of my consideration – more considerable in magnitude, bigger – than a virus?

Our notions of smallness and bigness come from a time in our past when we could not deal effectively with things far smaller or larger than ourselves, a time when things our own size – the moose, the cow and grizzly bear – were most worthy of our consideration.  We could not concern ourselves in those days with virology or pandemics, with things as small as molecules of CO2 or as large as ozone layers or the acidity of the oceans. Thinking about viruses rather than grizzly bears would have been fatal in those days. But such things, both the very large and the very small, are beginning to enter our sphere of influence.  As science continues to broaden our understanding of the world, our ability to prosper (or not) in the face of things we previously thought too large, or too small, to matter, changes. Is it time, now, to revise our thinking about the meaning of words like “large” and “small”?

A Season for Everything

Hunkered down now, I think I’m like most of us these days: nervous, on edge, and mindful of worst-case scenarios.  My own playlist seems stuck on the last days of Pompeii, the last days of the dinosaurs, and the last hours of 1999, when we took one last deep breath of life before experiencing Y2K.  Each tells me a lot about the dangers of predicting the future.

I’ve spent much more time trying to understand the past than the future, and that habit has led me here, writing about how we may be wrong.  Whether it’s an effort to understand what life was like in ancient Egypt or what my wife said to me just five minutes ago, I am constantly reminded how hard it is to reconstruct the past.  The past has a way of slipping through our fingers – gone forever, impossible to revisit in order to test it, or measure it, or take any more photographs of it – leaving us with only the scattered few relics which somehow found their way into our attics.  I’ve often thought that in one sense, at least, it’s actually easier to predict the future.  If we say that the world will end tomorrow, that’s something we can actually test.  When tomorrow comes, we can not only agree upon, but know, with relative certainty, whether we were right or wrong.  The past is not so easy.

But whether we’re looking forward or backward, we can’t know, now, if we’re right about the conclusions we reach.  Predictions about upcoming election results, about stock market performance, about the future course of global pandemics, can only be based on comparable situations in the past.  We extrapolate from the known we’ve experienced to the unknown that looms ahead.  But in so doing, we assume a repetitiveness that may be misleading – especially when our ideas are based on the experience of mere lifetimes (like the surprised citizens of Pompeii), but even when they’re based on a broader historical record (like those among the dinosaurs who’d studied the Cambrian explosion – I imagine them sitting around, contemplating how far and well they had come since those days, at the moment the asteroid hit).  Predicting the future always carries with it a bias in favor of the past, and past experience is very poorly suited to predicting the unprecedented.

Y2K teaches us that doomsayers may be wrong.  The eruption of Vesuvius that wiped out Pompeii and the Chicxulub impactor that wiped out the dinosaurs teach us that calamity may strike even when no one’s predicting it.  It’s too late to hope that COVID-19 will be the dud that Y2K turned out to be.  There’s still time to hope it won’t be the end of life as we know it.  It is, of course, a time for diligence, not panic.  But within all the precautions we take to fight this invisible enemy, I like to remember that the poor souls who died at Pompeii would have been dead for nearly two thousand years now anyway, even if Vesuvius hadn’t erupted.  And even more, I like to remember that if an asteroid hadn’t wiped out the dinosaurs, mammals would never have thrived, and humanity would never have existed.  From our limited perspective, the Chicxulub disaster was the best thing ever.  And from the perspective of those who will inherit this planet from us – the ones we often say we care so much about – we just don’t know how they will view the pandemic of 2020. Perhaps they’ll see it as the beginning of great new things.

It is in that spirit that, while I hunker down at home, wiping off door handles with my sanitizer, wondering if it would do any good to start praying again, I remind myself that I will be dead two thousand years from now, one way or another, and that perhaps the demise of us baby-boomers will save the social security system for our grandchildren.  Perhaps the crisis which forces us to stay home will lead to a world of less extended travel, more stay-at-home work, more locally-sourced foods, and ultimately, a just-in-time rescue of the world from global warming.  We just don’t know, and with uncertainty comes not only a bad stock market but room for hope.

And here it is, springtime after all.  As I hunker down, I see birds building nests, I see squirrels and rabbits in the yard, and most comforting of all, I hear people talking about “us” – about coming together for each other, about our responsibility toward each other, about the sacrifices that health care workers and others are making for us.  As Pete Seeger reminded us, there’s a season for everything. By my former calendar, this particular season should be bringing me nightly news of Republicans and Democrats insulting each other, modeling animosity and disrespect for our grandchildren.  I do know that, as a result of COVID-19, I haven’t had to listen to quite so much of that recently.  Perhaps COVID-19 is ushering in a new season, with a new calendar. And that, my friends, strikes me as a very good thing.

Being of Two Minds

Nearly fifty years ago, I read Julian Jaynes’s book, the one with the imposing title, The Origin of Consciousness in the Breakdown of the Bicameral Mind.  Immediately one of my favorites, it remains so to this day.  Drawing on ancient literature, archaeology, neuroscience and other sources, Jaynes focused on the nature of consciousness, theorizing (largely on the basis of evidence of “auditory hallucinations” in early mankind) that consciousness arose when the two hemispheres of the brain first started “talking to each other” across the corpus callosum.

Jaynes’s theories were extremely popular at the time; then they were attacked and called all wrong; then they made somewhat of a comeback, with a society formed in Jaynes’s honor.  I’m not sure I want to know where his reputation stands today.  I loved the idea, and I wouldn’t want to be saddened once again to learn that his theories are all wrong, knowing that in another thirty years, they might be accepted again.  Thanks to Jaynes, I will go to my grave remembering and enjoying the image of the bicameral mind, and of the two halves of it talking to each other, as Jaynes suggested.

                “Hey there, stranger.”

                “What?  Did somebody say something?”

                “Yeah.  It’s me.”

                “What?  Who are you?”

“I’m you, dummy.  The other half of you, anyway… It’s really time we started recognizing each other, and thinking of ourselves as one. Don’t you think?”

                Quite often, I catch myself thinking of Jaynes’s bicameral mind.  How, when a thought passes through my consciousness, it’s as if I’m both a speaker and a listener. 

                “Should I post this thought on my blog this morning?” asks the speaker.

                “Sure, why not,” answers the listener.

                To me, all thoughts seem like conversations between the two halves of my brain.

Now, I know that not all brain phenomena can be explained by this two-brain theory.   Memory, for example, doesn’t seem to reside on one side of the brain, the subject of a search by the other.   You’ve got the name of your fifth grade art teacher on the tip of your tongue.  (Well, of course it’s not really on the tip of your tongue; we all know that memories are stored in the brain – but where in the brain?)  It sure seems that recollections are made up of elements scattered here and there – perhaps the audio track here and the video track there, but more likely, different elements strewn about like the loose pieces of construction paper always scattered around Mrs. What’s-Her-Name’s floor. Still, even if the physical locations of the elements aren’t confined to one side of the brain or the other, the conversation that goes on in the effort to retrieve the name could be a conversation between the two halves.

                R: “She was the one with the dark brown hair, right?”

                L:  “Yeah.  Auburn, maybe.  With a splash of gray above one ear.”

                R: “Did her name start with a B?”

L: “No, I don’t think so.  Seems to me it began with an S.”

R: “S – T maybe?  Stubbs?  Staub?  Straughan?”

From the many times we’ve been frustrated by an inability to recall things, many of us share the sense that the things we’re searching for reside somewhere in our brains – if not on the opposite side of the corpus callosum, then at least in some part invisible to the part that’s on the hunt.

As it happens, I’m content to let the mysteries of memory remain unsolved.  For at least one more day, I can simply accept that what we call memory can be in our brains, somewhere, theoretically retrievable but temporarily unknown to the conscious mind.

                What I can’t accept, even for one more day, is the mystery of the dream state.  And I’m thinking of a particular type of dream, a particular aspect of the dream state.  I’m thinking of this aspect because of the dream I was having less than five minutes before I started this post this morning.  The origins of this morning’s dream go back to Penny, a woman I last worked with over seventeen years ago.  Last month, I happened to return to my former place of employment for a meeting with my former boss.  As I sat in the lobby waiting, Penny walked in.  I immediately recognized her and said, “Hi, Penny, how’ve you been?” There’d been several hundred people who’d worked in that building when I last did, seventeen years earlier, and having never worked with Penny closely, I was rather impressed with myself that I could pull her name right out of the air like that.

                But then, this morning, there was this dream.  In the dream, there was Penny again.  And I recognized her face, and I knew who she was, but my former boss was asking me to remember her name – and I couldn’t.  It took me a long time, and a lot of help from my boss, but in the dream, I finally remembered it.

                Now, remember that I’d remembered Penny’s name so well for seventeen years that I could retrieve it instantaneously when, unexpectedly, I saw her last month.  It didn’t seem to be hidden away in the cobwebs somewhere.  If it had been so quickly retrievable for seventeen years, is it possible that, during the dream, part of my brain was fully aware of the name, and was scripting this dream like a stage play, while another part was playing the part of a brain that couldn’t remember?  Had my brain somehow divided itself, for story-telling purposes, into a part that remembered and a part that didn’t?

Anyone who’s ever had difficulty recalling something for a second or two may be inclined to feel that my dream this morning represented nothing more than the usual process of working to retrieve a memory: beginning with an inability to recall her name, then employing whatever processes the mind usually employs in its efforts to recall, and ending with success in the effort.   If this is what was going on in the dream, the dream could have ended the way waking efforts to remember things often do – with failure.  Nothing unusual here.  The dream state is subject to the same difficulty remembering things as the waking state is, and its efforts to remember things utilize the same or very similar strategies.

But is it possible that my dreaming mind this morning was divided into two parts: a part that did know the name, and another part that didn’t? A story-telling part that wanted to go on a ride through the process of remembering something, and that chose the story of Penny because it wanted a successful outcome – and knew that with Penny the outcome would be successful, because that part, the story-teller, knew the woman’s name was Penny and planned all along to end the dream with that revelation?

  And I actually think this may be closer to what really happens in at least some dreams, and my reasons are rooted in a similar, though slightly more elaborate, dream I had three or four months ago. Unlike my dream about Penny, that dream was longer, consisting of numerous scenes.  And in that dream, too, I was trying to identify something, starting from ignorance and ending up satisfied by understanding.  Early in it, I’d been told by an agent behind the counter of a rental car agency that the car I’d reserved had been taken, earlier that day, by a relative of mine.  When I asked who, he said the name had included the letter O.  I thought of names beginning with O, but there were no Ozzies or O’Briens in the family.  I thought of my cousins Joe and Lorin and Bobby, but no, said the man behind the counter, it wasn’t them.  After a while, another man told me that the name also included a G.  I had no relatives named Ogden, so I told the man it must have been one of my many cousins whose middle or last names were Logan. Once again, however, I was informed that I was wrong. Eventually, other people appeared in the dream supplying the letters N, U and Y, and by the end of the dream, I realized that the man I’d been trying to identify was a second cousin named Wendell, whose last name was Young. 

                In the dream, the revelation took me by surprise.  But what had me puzzled for days, and still has me wondering, is how the dream was even possible.  As the dreamer, I had no idea where the dream was headed when it began. Not until it ended did the clues make any sense.  Yet, as the spinner of the tale, as the “writer of the story,” so to speak, some part of my brain had to know where everything was headed from the outset.  Back when the man behind the counter was telling me it was a relative with an O in his name, the “writer of the story” knew, even if I did not.

The reader of a mystery novel is ignorant at first, puts together clues, and finally connects the dots somewhere along the way – if not, he’s given the answer at the end, by the writer.  But mystery novels aren’t written that way.  The writer has typically known “who done it” since the first clue was inconspicuously mentioned back in Chapter One.  I understand how this works with mystery novels, because you have two different minds at work – the mind of the writer and the mind of the reader.  But is the same true in dreams?  How was it possible, in my dream, for that man behind the counter to know that my relative’s name included an O, at the beginning of the dream, unless he already knew the end of the dream?  And if he knew the end in advance, why didn’t I?

The only explanation I can think of is that the dreaming mind is really two minds, the mind of the writer and the mind of the reader.  That when we dream, we see ourselves walking (or flying?) through a world with less than complete understanding, a world in which a lot more is known by a different mind – one which, though presumably also resident in our brain, knows far more than we do, perhaps about both the “real” world and the one in which the dream takes place. This “writer,” who knows more than we readers know, is intentionally giving us only part of what we see in the dream, the same way a mystery writer does, doling out information at the right time, to enhance the story.

Some may think of this as evidence of God.  Part of me wonders that too. But more often, such phenomena make me think of my love for Jaynes’s theory about the origin of consciousness in the breakdown of the bicameral mind.

                I guess you could say I’m of two minds about it, eh?

                Yeah. I think so.

The Meaning of Meaning

Years ago, my brothers and I started debating the existence of absolute truth.  My brothers defended its existence and knowability.  I questioned its knowability, if not its very existence.   After decades retracing the same ground, our dialogue began to seem less like a path to enlightenment than a rut.  My brothers still believed in absolute, objective truth, and that it’s possible to know at least some of it, while I stuck to my subjectivist guns.

My subjectivism included the matter of language.  I see words as arbitrary patterns of sound waves without inherent meaning, which is to say, lacking any meaning until two or more people reach agreement (or at least think they’ve reached agreement) on what idea those sound waves represent.  The word “fruit” is not inherent in apples or oranges.  Not only the sound f-r-u-i-t but the very concept of fruit exists only in the mind.  A “fruit” is not a real thing, but a category, a label for an idea.  And ideas, as we all know, exist only in the mind. 

Having agreed that early ancestors of McIntosh and Granny Smith had enough in common to be called “apples,” and that the ancestors of navels and clementines had enough in common to be called “oranges,” we then went further and called them both “fruit.”  Slicing and dicing with our verbal ginsu knives, we label some fruit as “citrus.” We group fruit with legumes and call them both plants.  We add ourselves and elephants as mammals, then add plants and viruses and call us all “living things.” All the while, scientists debate the very idea of what living things are, including and excluding actual things from what is, I maintain, just a concept.  Importantly, the things themselves are not affected by what the scientists call them.  A rose remains a rose, by whatever name we call it.

And so language, I say, remains subjective.  We attempt to group and classify real things by using conceptual labels.  We distinguish between a gallop and a trot, but we ignore the difference between the way I “walk” and the way a thoroughbred horse does, or a camel or a duck.  Arbitrarily, subjectively, we call them all the same thing: “walk.”  Why not distinguish between a walk and a shawk and a mawk?  It’s all very arbitrary.  What constitutes a “walk” is obviously an idea – and ideas exist only in the mind.

Comfortable in my subjectivist philosophy of language, I recently came across the work of the late Hilary Putnam, former Harvard professor and president of the American Philosophical Association.  Putnam famously claimed that “meaning ain’t just in the head.”  In his papers “Meaning and Reference” (1973) and “The Meaning of ‘Meaning’” (1975), he used a thought experiment to demonstrate that the meanings of terms are affected by factors outside the mind.

Essentially, Putnam did this by asking us to imagine a world that is a perfect twin of Earth in every way but one: its lakes, rivers, and oceans are filled not with H2O but with XYZ.  Everything else is identical, including people, and their tongues, and their languages – so that both Earth’s Frederick and Twin-Earth’s Froderick use the identical word “water” to refer to the stuff that fills the oceans on their respective planets.  Since Frederick and Froderick are physically indistinguishable, and since their words “water” have different meanings, those meanings cannot be determined solely by what is in their heads.

So said Putnam.

The idea that meanings are rooted in real things, not just in subjective minds, became known as “semantic externalism.” It was credited with bringing about an “anti-subjectivist revolution” in philosophy, a revolution that threw into question the very “truth” of subjective experience.[1]

Yikes!  Was I wrong yet again?  Did I have to rethink my whole philosophy of language?  Did I have to concede to my brothers that there is such a thing as objectivity, at least in the meaning of words?

Not so fast.

Putnam’s Twin Earth thought experiment had me worried.  But at the end of the day, I decided it suffers from a common logical fallacy: its conclusion is contained in its premise.   The real question, I believe, boils down to one that Putnam may have had in mind when he titled one of his papers “The Meaning of ‘Meaning’.”

If language is as subjective as I suppose, and if words can mean different things to different people, who’s to say what a word really means?  I don’t believe there’s an objective answer, though perhaps Dr. Putnam did.  I think it may come down to what we mean by the word “meaning.” When faced with such questions, I’ve often sought the judgment of etymology, the history of words. I find it instructive to retrace the way words (and their meanings) change over time. And so I set out to unearth the etymological path by which the word “meaning” came to have meaning.

According to my research, the word is related to the ancient root men– (“to think”) that underlies Greek and Latin words, and from which English words like mental and mentor have derived.  It came into Old English as the verb mænan, meaning to state one’s intention, to intend, to have something in mind.  And much later, the verb “to mean” led to formation of the verbal noun, “meaning.”

From an etymological perspective, I would argue that meaning is therefore subjective, by definition.  If to “mean” something means to “have it in mind,” then there cannot be meaning independent of someone’s mind.  Definitionally, it is the idea formulated in the mind.  The person whose tongue pronounces the word’s sound is trying to convey the meaning in her mind.  And when the listener who hears the sound uses it to form an idea in her mind, “meaning” happens again.  To “mean” something is, always, to have an idea in mind.

I find it interesting to imagine the day, within the past few hundred years, on which two people were watching a meteor shower, or a lightning storm, or a two-headed snake – some occurrence that struck them as an omen of sorts – and one of them first asked the question, “What does it mean?”

It’s a question we’ve all asked at some point – if not about an omen, then about a symbol, a gesture, or some other mindless thing. The question has become an accepted expression in modern English.  But what a momentous event, the first time it was asked!  Here we had a word – to “mean” something – which (at the time) meant that a speaker had some concept “in mind” and “intended” to convey that concept to another.  That is, as then used, the word clearly referred to a subjective concept.  You’d long been able to ask what a person meant, intended, or “had in mind.” But when the question was first asked, “what does it mean?” referring to a lightning bolt, an earthquake, or a flood, the one asking the question was implicitly asking another, broader question – whether, perhaps, the “it” – the burning bush, the flood, the electrical discharge – could have “something in mind.” 

Alternatively, they were asking if the thing or event had been endowed with a meaning by virtue of having been in the “mind” of some superconscious deity that had caused the event.  If the “meaning” was that which had been in the mind of such a deity, it was arguably still subjective, i.e., still dependent on the idea that existed in a particular mind.  But if the meaning had originated in the thing or event itself – in the rock, or the flame, or the electrical discharge – then the conclusion would have to be that “meaning” can exist independent of a mind.

At any rate, it seems to me that whoever first asked the question, “What does it mean,” was expanding the very idea of “meaning.” Until that moment, to “mean” something meant to have it in mind.  To think it.  Until that moment, as I see it, everyone understood that “meaning” is entirely subjective.  To ask what “it” means was a misuse of the word.

And so, on the basis of etymology, I stand my ground.  “Meanings,” by definition, are ideas that form in the mind.  The idea of fruit.  The idea of walking.  Even Mr. Putnam’s theory of semantic externalism – that meaning “ain’t just in the head” – is an idea that, like all ideas, is just in the head.


[1] Donald Davidson, Subjective, Intersubjective, Objective (Oxford University Press, 2001).

Impeachment Again

     While I may be wrong, I believe there are good grounds for impeaching presidents.  I just don’t think the House has chosen wisely in its effort to define what they are.

     Consider the second proposed article of impeachment.  It essentially charges the president with “obstructing Congress” by refusing to comply with Congressional subpoenas. My problem here is that I don’t think a President is required to do whatever Congress orders him to do. As I see it, refusal to comply with a subpoena is a perfectly valid way of contesting its legality.  As best I recall, it is not uncommon for a party in litigation to refuse to comply with a subpoena, as one of the ways of getting a court to decide whether the subpoena is legitimate.  And it seems to me that in cases involving the separation of powers, it’s similarly legitimate for a president to refuse to comply with a subpoena, anticipating that Congress would then have to go to court to seek to enforce it.

By way of analogy, in order to challenge the validity of Jim Crow laws, Rosa Parks had to “violate the law,” triggering her arrest for refusing to move to the back of the bus.  This was risky, but a legitimate way to get judicial review of the constitutionality of the law in question.   In order to get the courts to consider his status as a conscientious objector, Muhammad Ali had to “violate the law” by refusing to submit to the military draft.  Risky again, but legitimate.  The courts have developed a doctrine of “standing,” a doctrine designed to prevent just anybody from asking the courts to decide purely hypothetical questions.  “Standing” means that to challenge a law, you have to be actually affected by it.   For reasons of “standing,” violating a law is sometimes required in order to get a court to consider its validity.   If you want to challenge a local zoning law in court, you may have to violate the law (as interpreted by the zoning board) or you won’t have standing.  If you think a provision of the Internal Revenue Code is unconstitutional, you’ll probably have to violate the I.R.S.’s interpretation of the law, getting assessed taxes and penalties, before you’ll have standing to challenge that law in court.  There were various examples of this in my own practice of employment law.  Often, it’s risky.  If you lose such challenges, you suffer the consequences.  But if you win, the ultimate prize is a finding that you were actually within your rights all along – in effect a ruling that, like Rosa Parks and Muhammad Ali, you were never really in violation of the law in the first place.

If Congress were King, I’d favor the impeachment of presidents for refusing to comply with its subpoenas.  But Congress is not King.  In our system of law, it is the courts that are the arbiters of what is and isn’t against the law.  It seems to me that impeaching a president for refusing to comply with Congressional subpoenas that haven’t been considered and approved by the judiciary turns the separation of powers on its head. If Congress starts removing presidents just because those presidents don’t submit to its orders, I fear for the balance of power that is the cornerstone of our system of government.

Consider next the first article of impeachment.  In it, the House is charging the president with abuse of power – specifically, by pressuring a foreign government to take an action that would interfere in the U.S. electoral process.  Now, I favor impeaching presidents for anything that would interfere with the U.S. electoral process, but I find an important distinction between things that would interfere in the process and things that could affect the outcome.  Specifically, I find it helpful to distinguish among three types of conduct that might be considered potential interference.

     The first type I’ll call “direct” interference in the electoral process itself.  Impeding access to the polls.  Casting fraudulent ballots.  Bribing election officials.  Falsifying results.  I think pressuring a foreign government to engage in such direct interference surely ought to be grounds for removal from office.  But such direct interference is not what the House is alleging.

Rather, the House is alleging that the president pressured a foreign government to take action that could be expected to influence some U.S. voters, and thus, the election outcome.  In my view, the conduct charged raises serious questions about when and why actions taken on the world stage that could affect election outcomes constitute “interference” with the electoral process.    If the president succeeded in pressuring Iran to cease its nuclear weapons development, there’s little doubt that such action could affect the election outcome in the president’s favor, but I can’t see that as interference in the process.   Would pressuring Saudi Arabia to investigate the murder of Jamal Khashoggi result in “interference” in our elections if it affected the outcome?  Would pressuring North Korea to investigate the treatment of U.S. student Otto Warmbier, if such an investigation benefited the incumbent president?  In my view, we want our presidents to pressure foreign governments, and it makes no difference to me that, if the pressure works, the result would influence voters in favor of the president or his party.

Two of the words I find most troubling in the Article proposed by the House are the little words “that would.”  The President is not even accused of soliciting action “for the purpose of” influencing the election.  He is accused of seeking action “that would” influence the election, i.e., the election outcome.     One might argue that Lincoln saw the Emancipation Proclamation as something “that would” help him win re-election.  One might argue that FDR saw the New Deal as something “that would” help him win re-election.   One might argue that Lyndon Johnson saw the Warren Commission’s investigation into the assassination of JFK as something “that would” help him win re-election. Parties and candidates are always doing things for political purposes, i.e., doing things that will enhance their prospects for re-election.  I just can’t conceive of impeaching presidents because their actions would “interfere with elections” merely by having an impact on election outcomes.

     My view does not change simply because the target of the requested investigation is a political opponent or relative of a political opponent.  Many Presidents, from Abraham Lincoln to Jimmy Carter and Ronald Reagan, have been embarrassed by the conduct of close relatives.    Imagine that, in some future election cycle, evidence surfaces that suggests that Opponent O’s cousin may be conspiring with foreign companies to import drugs into the U.S.   Obviously, announcement of an investigation into such a possibility  might embarrass Opponent O and thereby affect the election outcome.   Do we want to discourage President P from soliciting a foreign country to undertake an investigation of the matter, because such an investigation would amount to interference with the election?  I think not.

I would suggest that there is a third category of arguable election “interference” – and I think many of those who favor the impeachment of Mr. Trump may be motivated by the belief that his conduct falls into this third category.  I’ll call it the Fake News Category.  Impeachable offenses in that category might include, say, doctoring a videotape of one’s political opponent to make it appear she said something she really didn’t.  Photoshopping an opponent’s face onto a picture of someone doing something despicable.  Making up fake news stories for the purpose of influencing votes.  In my view, this sort of conduct – widely acknowledged to be on the rise, widely predicted to become even more common in the future – is not direct interference with the electoral process.  But, to me, it is still problematic, even though it is designed to affect election outcomes rather than election processes.  In my view, creation of such fraudulent news poses a threat to the integrity of our electoral outcomes every bit as serious as direct interference with the process itself, like stuffing ballot boxes. I could favor articles of impeachment that directly accuse an incumbent president of intentionally fabricating such fake news for the purpose of affecting election outcomes.  And I suspect that Trump’s opponents believe that the President’s solicitation of Ukraine was tantamount to fabricating fake news.  But the Article the House is now considering does not accuse the president of fabricating fake news.   Rather, it accuses him of soliciting an investigation that would influence U.S. voters.

Nowhere is free speech more important than in the political and electoral process. Charges of fabricating “fake news” are essentially charges of intentional fraud on the electorate.  An essential element of fraud is a misstatement of fact, known to be false when made, and made for the purpose of inducing someone to rely on the false statement to their detriment.  Intentionally creating fake news for the purpose of misleading the electorate amounts to such fraud, and should not be tolerated. But calling for an investigation into smoke is not the same as asserting the existence of fire when one knows there is in fact no fire.   In my view, if a President thinks she sees smoke, even about a political opponent, calling for an investigation to determine if there is a fire strikes me as a very legitimate use of power – and one we should want to encourage in our presidents, not despite a possible impact on the outcome of elections but because of such impacts: investigations help to bring out facts, and the electorate is able to assess the evidence and decide how it affects their votes. Even now, members of the House are calling for an investigation of the President, anticipating that it will affect the outcome of upcoming elections.  Should that turn their very votes for impeachment into impeachable offenses themselves? Do we want a world in which all our elected representatives risk impeachment any time they call for investigations into their opponents?

Some, I suspect, would say that Trump’s calling for an investigation of Hunter Biden was tantamount to a fraudulent falsification of fact because allegations of impropriety by Biden have already been “discredited.”  But Ukraine is a country with a history of corruption.  The prior investigation I’m aware of found no evidence of a violation of Ukrainian (not U.S.) law. Was the prior investigation thorough? Unbiased? Not itself the result of corruption? Might a new investigation unearth evidence of a violation of U.S. law, or simply information the U.S. electorate might find relevant to its voting in an upcoming election? Investigations are meant to dig deeper into the truth.  In my view, calling for them does not come close to the kind of manufacture of fake news that I would consider good grounds for impeachment.

     For these reasons, I am not a fan of the House’s articles of impeachment, as drafted.  That said, there are other grounds for impeachment I would not mind seeing the House approve.  If Mr. Trump is suspected of fabricating false statements in order to affect election outcomes, I say charge him with fraud on the electorate. If the evidence supports the charge, I say remove him from office for it.  In fact, I’ll go even further.  Just as I believe that impeaching for bribery will tend to discourage bribery and impeaching for cover-ups will tend to discourage cover-ups, I believe that impeaching for eating hamburgers will tend to discourage eating hamburgers.  What constitutes good grounds for impeachment is a political question, not a legal one. And I believe the grounds chosen can be expected to have an in terrorem effect on the behavior of future presidents, discouraging them from engaging in whatever type of conduct is seen as grounds for impeachment – even if it’s eating hamburgers.

     As a result, while I oppose impeaching presidents for refusing to comply with Congressional subpoenas, and I oppose impeaching presidents for pressuring foreign governments to conduct investigations that could affect U.S. election outcomes, I would LOVE to see Congress impeach this president (and several of its own members) for “Fomenting National Divisiveness.” As I see it, particulars of such articles might include such things as “Making public statements and otherwise manifesting such extreme disrespect for others as to exceed the bounds of propriety in a pluralistic society.”  Evidence in support of such charges could certainly include fabrication of false news stories, calling for investigations of opponents in bad faith, and the like – but the gist of such charges would be the disrespect and divisiveness involved.  If presidents (and members of Congress) were to fear impeachment for “fomenting national divisiveness,” I believe they would be influenced to call for greater harmony; that they would tend to manifest greater respect for those who disagree with them; that political rhetoric would soften; and that civility in political debate would increase.  In my view, those would be very good results – not for one party or the other, but for the country as a whole.

Primates and Praise

Early in the history of the Christian church, bishops and archbishops came to be called “primates.” The word was not intended to evoke images of orangutans or macaques.  (It would be another five hundred years before Carl Linnaeus classified homo sapiens as a member of that order.) Rather, even in Latin, the word for “first” had been used to mean a superior, a leader, or a most excellent person, and the Christians had no problem designating their spiritual leaders with the term as well.

There are many things I like about my Christian heritage.  If Christians today preached what I believe the historical Jesus preached, I’d readily identify as a Christian.  But as I see it, modern Christianity gets Jesus wrong in a number of respects. 

When I was only eight, I was invited to spend the weekend in the countryside with a friend.  Since I’d have to miss Sunday mass, I made a phone call to ask for permission to do so.  My friend’s family got quite a laugh when, after the call, they discovered I hadn’t been calling home, but the church rectory. The “Father” they’d heard me addressing was not my biological father, but the parish priest.

I had already been taught to call all priests “Father,” and even when I talk to priests today, I use the term of respect I was taught as a child.

But it wasn’t long after the parish priest told me it would be a sin to miss Mass that I came across Matthew 23:9, where Jesus is said to have told his followers “to call no man Father, for one is your Father, which is in Heaven.”  Given that scripture, I never understood how Christians developed the practice of calling their priests “Father” – especially in an age when fathers demanded so much respect – except, of course, that the priests had taught them to.

It’s easier for me to understand why hierarchies arose as church memberships and treasuries grew – and why words like “bishop” (from the Greek episkopos, an overseer, one who watches over) came into use.  And it seems almost inevitable that as such growth continued, layers of rank would have to be added, for practical, administrative reasons.  So by the time the Bishops of Canterbury, York, Armagh and St. Andrews had become powerful, it isn’t entirely surprising that such leaders came to be called “primates.” But the primates were always first among “fathers,” and I still had a hard time squaring that with Matthew 23:9.

Nor was it that particular scripture alone.  According to Matthew 12:50, Jesus instructed his followers, “Whosoever shall do the will of my Father, which is in Heaven, the same is my brother, and sister, and mother.”  Jesus preached, “Blessed are the meek; for they shall inherit the earth” (Matt. 5:5) and “Whosoever therefore shall humble himself as this little child, the same is greatest in the kingdom of heaven” (Matt. 18:4). I read of a Jesus who washed the feet of his disciples, of a Jesus who frequently dismissed those who treated him with special reverence, of a Jesus who said to a man who addressed him as Good Master, “Why callest thou me good? There is no one good but one, that is God” (Matt. 19:17). I read of a Jesus who, when asked if he was King, replied only, “You said it” (Matt. 27:11), as if to disavow the title himself.  In fact, Jesus taught, in the Sermon on the Mount, that his followers should pray to the Father (for His was the power and the glory). And, if we believe Matthew 7:22-23, Jesus chastised those who would honor him, warning, “Many will say to me in that day, ‘Lord, Lord, have we not prophesied in thy name? and in thy name have cast out devils? And in thy name done many wonderful works?’ And then will I profess to them, I never knew you: depart from me, ye that work iniquity.”

One reason I’ve been to church only a few times in the last fifty years is my lack of comfort with heaping praise on this man who fought so hard to avoid it.  Last month, I went to a Catholic mass for the first time in many years.  One of the first hymns sung was “To Jesus Christ, Our Sovereign King.”

“To Jesus Christ, our sovereign king, who is the world’s salvation, all praise and homage do we bring, and thanks, and adoration. Christ Jesus, victor!  Christ Jesus, Ruler! Christ Jesus, Lord and Redeemer! Your reign extend, O King benign, to every land and nation; for in your kingdom, Lord divine, alone we find salvation.  To you and to your church, great King, we pledge our hearts’ oblation – until, before your throne, we sing in endless jubilation.”

Homage? Kingdom?  Reign? Throne?  I was taught the theology behind this hymn.  But for me, the theology fails to justify adoration of a man who shunned adoration, who deflected all praise to God, his father in heaven.  To my way of thinking, Jesus would not have approved of such a hymn.

Meanwhile, whatever may be said in defense of praising Jesus, I have even greater trouble with adoration of mankind.

Consider this passage from Pope John Paul II’s Gospel of Life, Evangelium Vitae.  I can’t read it without thinking of Jesus’ teaching that the meek shall be blessed.

52. Man, as the living image of God, is willed by his Creator to be ruler and lord. Saint Gregory of Nyssa writes that “God made man capable of carrying out his role as king of the earth … Man was created in the image of the One who governs the universe. Everything demonstrates that from the beginning man’s nature was marked by royalty… Man is a king. Created to exercise dominion over the world, he was given a likeness to the king of the universe; he is the living image who participates by his dignity in the perfection of the divine archetype.”

I hope that my thoughts are not taken as an attack upon those who sing the hymn, or upon Pope John Paul II for his thoughts about mankind.  I mean no disrespect, and God knows, I may be wrong.  But as Christians prepare this month to celebrate Jesus and his birth, I’m moved to point out my inability to buy into these aspects of modern Christianity. As I like to think of it, “I prefer the original.”  Father, Primate, Pope, Homo Sapiens Sapiens.  Clearly, we are prone to bestow honor on ourselves.  I don’t know whether we inherited this tendency from other primates or not, but the Jesus I believe in warned us against it.

From Front Row Seats

Here’s a poem I just ran across. I wrote it twenty-three years ago. I guess the origins of WMBW go back further than I previously thought.

.

From front row seats

behind home plate

we watch the batter swing,

we hear the bat crunch,

we cringe and wince,

able to feel the wood

shatter in our palms.

.

I know that when my friend

puts one hand on the top

of her head, and the other

at the base of her jaw,

and pushes hard

in opposite directions,

the sound I hear is not

my own, but her neck cracking.

Still, I put my hand

to my own neck, and rub

away the hurt I feel.

.

Stopped at a traffic light,

hearing tires screech,

I hold the wheel tight

and step on the brake

to avoid the crash.

And when the screeching

stops in silence, I’m relieved.

.

We cannot help but empathize.

How much we share, unwittingly,

At times like these!

And how sad we are,

if we only laugh

when someone else

does something dumb.

.

3/14/96

Submission

My recent trip to Morocco got me thinking how much our cultures shape us and make us who we are – that is, how much the ruts in our thinking can masquerade as truth itself.

As I packed for my trip, I decided to bring along a couple of books – Susan Miller’s A Modern History of Morocco and a copy of the Qur’an I’d bought a couple of years ago, a 1934 translation by Abdullah Yusuf Ali.   I thought they might help get me into the spirit of the trip – my first to a Muslim country.

When I first got Ali’s translation, I read the first sixty pages and found it a bit like Leviticus or the Gnostic Gospels – fragments of wisdom scattered among verses otherwise resistant to comprehension.  Miller’s history made more sense to me (once I started distinguishing between the Alawids, the Almohads and the Almoravids).  But I like getting to know about things I know nothing about; the more foreign, the better.  So Morocco turned out to be a great trip, just as I’d hoped.

To begin with, it felt like a different planet, the terrain like the barren brown land of southern Spain where Clint Eastwood’s spaghetti westerns were filmed to stand in for the American West.  (It was barren, just sand and clay, devoid of plastic, steel or chlorophyll.)  Yet when we crossed the Atlas mountains into the Sahara, I realized how much green I’d been taking for granted.  Upon our return from the sand dunes to the “green” side of the Atlas, I did indeed notice occasional olive trees, date palms and cacti.  The rare new shoots of chlorophyll in the otherwise dry brown wadis – the result of a downpour on our third day in the country, the first rain of a season that, having just begun, showed no further hint of itself for the rest of our stay – were cause for celebration; they made the country seem comparatively lush.  What had seemed a wasteland at first now showed precious signs of life.

The architecture was equally striking.  Palaces, guest houses and mausoleums were opulent and ornate, sculpted or tiled into tiny squares, rectangles and diamonds, with Arabic scripts worming through the geometry like the tendrils of plants making their way through latticework.  But more than the fancy palaces and riads, I was struck by the simple architecture of the countryside: fields of clay separated by countless rock walls, most only a foot or two high, with only a tiny fraction rising high enough to resemble stone buildings.  Most of the structures were made of clay.  Berber villages, many miles apart, often consisted of only a dozen houses or so.  In one, a mountain village of sixty people called Outakhri, Lala Kabira treated us to two wonderful meals of lamb, eggs, vegetables, dates, couscous, green tea and flatbread, which we watched her bake in a blackened clay wood-burning oven.  When I asked our guide, Said, if she was his mother, aunt or other relative, he gave me a most curious stare.  Then he said, “No, she’s not related by blood.  But when you spend your life in a village of only sixty people, there’s really no difference.  Everyone is family.”

Not once in two weeks did I hear a complaint or a curse, not once an unpleasant gesture.  As the days passed, I began to feel majesty in the clay-colored, mountainous land.  The people, the food, even the terrain began to seem familiar.

One of our group, Juan Campo, is a professor of religious studies at the University of California, Santa Barbara.  When I learned that Juan specializes in Islam, I asked for his opinion of Ali’s translation of the Qur’an.  When he said it was a good one, I asked if he’d written any books that a layman like myself might understand.

Yes, he said.  He’d been the chief editor of The Encyclopedia of Islam (Checkmark Books, 2009).

I have now bought and studied that volume.  My thanks to Juan for helping me better understand the basics of the Qur’an.  I’ve also now read a good bit of Ali’s translation.  Based on this elementary introduction, I now understand that the Qur’an teaches as follows (citations are to chapter and verse of the Qur’an unless otherwise noted):

1.   “There is no god but God” (21:25).  (That is, there is only one God, and he is the God of all.)

2. That God is loving (85:14), eternal (2:255), merciful (1:1), omnipotent (3:26), omniscient (6:59, 21:4, 49:16), wise (2:216, 3:18), righteous (2:177), just (41:46), and forgiving of sins (3:31).

3.  That when God says something should be, it is (2:117, 3:59).  He created the world, a task which took him six days, creating day and night, the Sun, the Moon, and the stars (7:54, 10:3, 11:7, 21:33, 25:61-62).  According to some Muslim teaching, he created the Universe out of love, so that he would be known (hadith qudsi).

4. God created the first human being, Adam, making him out of dust or wet clay and breathing the spirit of life into him (3:59, 6:2, 7:12, 15:29, 30:20, 32:9).  He set Adam and Eve down in a blissful garden called Paradise, where they ate the fruits of the garden until Satan, the enemy (whom God had expelled from heaven for his disobedience), tempted them to eat the fruit of the forbidden tree (2:34-36, 2:168, 7:11-18, 7:189, 20:117-123).

5. Eve gave birth to Cain and Abel, and Cain later murdered his brother out of jealousy because God accepted Abel’s sacrifice rather than his (5:27-32).

6. God chose to save the righteous Noah, man of faith, while causing a great Flood that drowned the people who’d fallen into evil ways (7:64, 17:3, 37:75-77).

7.  Jews, Christians and Muslims are “the People of the Book,” all descended from that great opponent of idolatry, Abraham, the pious husband of Hagar and Sarah, the father of Ishmael and Isaac, whose faith in One God was so strong that he was prepared to sacrifice his son at God’s command (2:133, 19:41, 19:69, 21:51-58, 21:66-72, 37:112, 6:74-84, 37:99-111).

8. God chose Jacob, Moses and Aaron as prophets (19:51-53, 21:48, 21:72).  Moses was cast away on the waters as an infant, by his mother, to save his life (20:37-40).  Moses rose to prominence under the pharaohs of Egypt (7:104-109).  God spoke to him from a burning tree by Mount Sinai (28:29-30).  He spent forty days in the desert and received the commandment tablets from God while there (7:144-145).  In the absence of Moses, the Israelites worshipped the golden calf (7:148-149, 20:85-91).

9.  After slaying Goliath, David received a kingdom and wisdom from God.  Solomon ruled with wisdom and justice.  God listened to Job in his distress, and was merciful to him for his righteousness. (2:251, 21:78-79, 21:83-86, 38:20).  

10. John, the son of Zechariah (known to Christians as “the Baptist”) was a prophet made known to the father of Mary; he was princely, chaste, wise and righteous, and confirmed the word of God (3:39, 19:12-13).

11. Angels appeared to Mary and announced to her that God had chosen her above the women of all nations, saying “O Mary! God giveth thee glad tidings of a Word from Him: his name will be Christ Jesus, the son of Mary, held in honor in this world and the Hereafter and of those nearest to God; he shall speak to the people in childhood and in maturity.  And he shall be of the righteous.” (3: 42-46).  Mary questioned the news, since she was a virgin, but God, who “createth what he willeth,” simply said “Be!” and breathed his spirit into her.  Thus was Jesus conceived. (3:47, 19:20-21, 66: 12).

12. Jesus is a spirit – Arabic ruh, or breath – proceeding from God; he is thus the word of God (4:171).   God strengthened Jesus with this holy spirit (2:87, 2:253, 5:110),  revealing the gospel of Moses and the prophets to him (2:136), teaching him the book of wisdom, and the law, and the gospel, and giving him the power to heal the sick and perform miracles (3:48-50, 57:27).  God said to Jesus, “O Jesus! I will take thee and raise thee to Myself and clear thee (of the falsehoods) of those who blaspheme; I will make those who follow thee superior to those who reject faith, to the Day of Resurrection.” (3: 55).  God ordained compassion and mercy in the hearts of those who follow Jesus (57:27).   Jesus is “a statement of truth” and a “sign for all people” (19:34, 21:91).

13. Charity is essential to a good and pure life.  As stated in the Qur’an (2:177):

Goodness is not that you turn your face to the east or the west.  Rather goodness is that a person believe in God, the last [judgment] day, the angels, the Book, and the Prophets; that he gives wealth out of love to relatives, orphans, the needy, travelers, and slaves; that he performs prayer; and that he practices regular charity.

14. The world will end on the Last Day, a day of Judgement and resurrection in which nothing will be hidden, the just will be rewarded by a return to Paradise and the unjust damned to hellish fire (1:4, 3:56-57, 19:37-40, 21:47, 69:18-31, 74:38).  God will reward those who are faithful to him and his word by giving them a land of milk and honey, while punishing those without faith in eternal fire (2:164-167, 13:20-26, 21:39, 47:15).  “Those who believe (in the Qur’an), and those who follow the Jewish (scriptures), and the Christians and the Sabians [converts] – any who believe in God and the Last Day, and work righteousness – shall have their reward with the Lord; on them shall be no fear, nor shall they grieve” (2:62).  “Verily, this Brotherhood of yours is a single Brotherhood” (21:92).

15. And so the Qur’an asks, “Who can be better in religion than one who submits his whole self to God, does good, and follows the way of Abraham, the true in Faith?” (4:125)

My dear Christian mother believed everything described above, yet her feelings about Muslims ranged somewhere between fear and loathing. 

As I understand it, written Arabic traditionally recorded no short vowels.  The word Islam was essentially the three consonants s-l-m – making Islam a cognate of the Arabic word Muslim, of the Arabic word salam (peace), and of the Hebrew word shalom (peace).  The word Islam is sometimes translated “to enter into a state of peace.”

As we all know, some people err by confusing substance with translation.  Nowhere is this error more troublesome to me than when it comes to God.  When my mother cringed at the thought of worshipping Allah, I don’t believe she understood that “Allah” is simply the Arabic word for “God,” derived from the same Semitic root as El and Elohim.  I find it notable that, according to Professor Campo, Arabic-speaking Christians and Jews in the Middle East use the word “Allah” in referring to their God.

I so wish my mother could have understood this.  Nothing was more important to her than submission to God.  Yet she seemed not to understand that “Islam” is an Arabic word that, as usually translated, simply means submission to God, and that “Muslim” is simply an Arabic word for one who so submits.  Had I spoken Arabic when Mom was alive, I shudder to imagine her reaction if I had called her one who submits.

“I’m no Muslim!” she likely would have said.

My thanks to our guides, Hicham Akbour and Said ibn Mohamed, to our host, Lala Kabira, and to Professor Campo, for helping me take a new look at my family’s western culture.

Salam aleikom.

(Peace be with you.)