The Biggest Delusion of All

When I was fourteen, my parents sent me to Texas to work on my grandfather’s ranch.  The first job would be to paint the fence around the field between his house and the road.    When I asked what else I’d be doing, he replied, “Let’s see how long it takes to paint the fence.”  Two months later, having painted for eight hours a day, I returned home, the job of painting the fence still not finished.

The word “comprehend” means taking something in all at once.  Since flat land let me see that fence all at once, it seemed comprehensible.  In fact, if I held my two thumbs in front of my face, I could make the fence seem to fit between them.  So thinking it might take a few days to paint the fence seemed reasonable.  Problem was, my brain does tricks with perspective.  In reality, that field was probably close to ten acres, the fence probably ten football fields long.  “Comprehension” of very large things requires large-scale trickery.  It depends on deception.

I should have known better than to trust my brain about the fence.   My teacher Paul Czaja had told our class the story of the Emperor’s chessboard and the grains of rice: the Emperor said that if a single grain of rice were placed on the first square, two grains on the second square, four grains on the third, and so on, until the 64th square, the final square would require enough rice to stretch to the moon and back seven times.

The story of the Emperor and his grains of rice “wowed” me with the power of exponential growth.  I knew the moon was far away, and a grain of rice very small.  Stretching back and forth seven times had to make it a very large number indeed.

Of course I wanted to know what the total number of rice grains was in numbers I could understand.  Rather than simply give us the answer, Paul asked us to compute the number ourselves.  Our homework was simply to multiply by two sixty-three times.
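Today a machine can do that homework in an instant.  Here’s a minimal Python sketch of the doubling-and-summing Paul assigned (the variable names are mine):

```python
# Start with one grain on the first square, then double it
# sixty-three times (squares 2 through 64), keeping a running
# total of all the grains on the board.
grains_on_square = 1
total_grains = 1
for _ in range(63):
    grains_on_square *= 2
    total_grains += grains_on_square

print(f"{grains_on_square:,}")  # grains on the 64th square alone
print(f"{total_grains:,}")      # 18,446,744,073,709,551,615 in all
```

Python’s integers don’t overflow, so the full twenty-digit total comes out exactly: the sum across all sixty-four squares is one less than two multiplied by itself sixty-four times.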

If Paul had told us that the total grains on the chessboard came to 18,446,744,073,709,551,615, I’d have realized that the number was larger than any I’d ever seen – but my brain would have attempted to make sense of the number’s “bigness” in the same way it had made sense of the fence in Texas – namely, by making it appear far smaller than it really was.   The only way to see a huge field was to make it seem small enough to fit between two thumbs.  The only way to make such a large number seem comprehensible is to reduce it to funny little shapes on a page that we call numerals.

Upon seeing that large number on the page, my brain immediately perceives it’s large, because it is physically longer than most numbers I see.   But how large? If it takes up two inches on the page, my brain suggests it’s twice as long as a one-inch long number, the same way a two-inch worm is twice as large as a one-inch worm.    But my schooling tells me that’s wrong, so I count the digits – all twenty of them.   Since twenty apples is twice as many as ten apples, and since my brain has spent years making such comparisons, my intuition suggests that a twenty digit number is twice as large as a ten digit number.  But I “know” (in the sense of having been told otherwise) that’s not right.

But here’s my real question: if my intuition is wrong, is it even possible for me to “know” how wrong?  Does “calculation” amount to “comprehension”?

Psychologists tell me my brain is useful in this inquiry only because it tricks me into seeing the distorted, shrunken “mini-icon” versions of things – numerals and digits, rather than the actual quantities themselves.  If we’re asked to describe what it feels like to be alive for a million years, we can’t.  We’ll never be able to.  And for the same reasons, it seems to me, we can’t “comprehend” the reality of 2⁶⁴ – only its icons, the mental constructs that work only because they’re fakes.

Consider the Emperor’s assertion that the rice would be enough to go to the moon and back seven times.  That mental image impressed me.  It made the size of 2⁶⁴ seem far more real than it would have if I’d merely seen the twenty digits on a page.  But do I really comprehend the distance between the moon and the earth?

The moon’s craters make it look like a human face.  I can recognize faces nearly a hundred yards away.  So… is my brain telling me that the moon is a hundred yards away?

I’ve seen the models of lunar orbit in science museums – the moon the size of a baseball two or three feet away from an earth the size of a basketball.  My brain is accustomed to dealing with baseballs and basketballs.    I can wrap my brain around (“comprehend”) two or three feet.  So when I try to imagine rice extending between earth and moon seven times, I relate the grains of rice to such “scientific models.”

But is that, too, an exercise designed to trick me into thinking I “understand”?  Is it like making a fenced field seem like it could fit between my two thumbs?

I have little doubt that analogies seem to help.  Consider that a million seconds (six zeros) is 12 days, while a billion seconds (nine zeros) is 31 years and a trillion seconds (twelve zeros) is 31,688 years.  Wow.  That helps me feel like I understand.  Or consider that a million hours ago, Alexander Graham Bell was founding AT&T, while a billion hours ago, man hadn’t yet walked on earth.  A billion isn’t twice as big as ten thousand, it’s a hundred thousand times bigger.
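The arithmetic behind those time analogies is easy to check – a quick Python sketch, using 365.25-day years:

```python
# How long are a million, a billion, and a trillion seconds?
SECONDS_PER_DAY = 60 * 60 * 24             # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_days = 1_000_000 / SECONDS_PER_DAY
billion_years = 1_000_000_000 / SECONDS_PER_YEAR
trillion_years = 1_000_000_000_000 / SECONDS_PER_YEAR

print(round(million_days, 1))   # 11.6 days -- roughly 12
print(round(billion_years, 1))  # 31.7 years
print(int(trillion_years))      # 31688 full years
```

Each step up from million to billion to trillion adds only three zeros on the page, but turns days into lifetimes and lifetimes into ice ages.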

Such mental exercises add to our feeling that we understand these big numbers.  They certainly “wow” me.  But is this just more deception?

Distrusting my brain’s suggestions, I decide to do some calculations of my own. A Google search and a little math tell me that seven trips to the moon and back would be about 210 billion inches.  Suppose the grains of rice are each a quarter inch long.   The seven round trips would therefore require 840 billion grains of rice.  The math is simple.  If there’s anything my brain can handle, it’s simple math.  Digits make it easy to do calculations.  But does my ability to do calculations mean I achieve comprehension?

Thinking that it might, I multiply a few more numbers.  My calculations show me that the Emperor’s explanation of his big number was not only wrong, but very wrong.  The number of grains of rice on the chessboard would actually go to the moon and back not seven times, and not even seven thousand times, but more than 150 million times!
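My checking went roughly like this – a Python sketch, in which the 238,855-mile average earth–moon distance and the quarter-inch grain are my own assumptions:

```python
MILES_TO_MOON = 238_855        # assumed average earth-moon distance
INCHES_PER_MILE = 63_360
GRAIN_INCHES = 0.25            # assumed length of one grain of rice

round_trip_inches = 2 * MILES_TO_MOON * INCHES_PER_MILE

# Grains needed for the Emperor's seven round trips: ~840 billion.
seven_trip_grains = 7 * round_trip_inches / GRAIN_INCHES
print(f"{seven_trip_grains:,.0f}")

# Round trips the chessboard's rice would actually cover: ~152 million.
total_grains = 2**64 - 1
round_trips = total_grains * GRAIN_INCHES / round_trip_inches
print(f"{round_trips:,.0f}")
```

By this reckoning, the chessboard holds more than twenty million times the rice the Emperor’s seven-trip image suggests.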

Such a margin of error is immense. I don’t think I could mistake a meal consisting of 7 peas for a meal consisting of 150 million peas. I don’t think I could mistake a musical performance lasting seven minutes for a musical performance lasting 150 million minutes. What conclusion should I draw from the fact that I could, and did, fail to realize the difference between seven trips to the moon and back, and 150 million trips?

The conclusion I draw strikes me as profound.  It is that I have no real  “comprehension”  of the size of such numbers.    I’ve retold Paul’s chessboard story for fifty years without ever once supposing that the Emperor’s “seven times to the moon and back” might be wrong.  Could there be any better evidence that I have no real sense of the numbers, no real sense of the distances involved?  The fact that I can’t really appreciate the difference between  two such large numbers tells me that I’ve exceeded the limits of any real understanding.   My brain can comprehend baseballs and basketballs.  Using simple tools like a decimal number system, I can do calculations.    But when I try to comprehend the difference between 18,446,744,073,709,551,615 and 18,446,744,073,709,551,615,000,000, am I really able to understand what it means to say that the second number is a million times larger than the first?

I got my first glimpse of the difference between “calculating” and “comprehending” about five minutes into Paul’s homework assignment.  Already my brain was playing tricks on me.  Pencil in hand, there were too many numbers going around in my head; I was (literally) dizzy from arithmetic overload.  I made errors; I began to slow down, to be more careful.  The numbers were already absurdly large.  Five minutes after starting with a sharp pencil, my numbers were eight digits long – I was multiplying tens of millions – and no matter how slowly I went, the frequency of errors increased.  After ten minutes, I had to sharpen the pencil because I couldn’t read my own writing.  After twenty minutes, my fingers hurt.  Soon, I could feel calluses forming.

Since the first eight squares of the chessboard had taken me about fifteen seconds to calculate, I’d unconsciously supposed that doing all eight rows might take me eight times as long.  But the absurdity of such a notion was quickly becoming apparent.  Looking up at a clock after half an hour, still not quite halfway across the chessboard, my inclination, even then, was to think that the second half of the chessboard would take about as long as the first.  If so, I’d be done in another half an hour.  So I determined to finish.  But a couple of hours past my normal bedtime, I was still assuring my mother I’d be finished soon – notwithstanding finger pain that had been crying for me to stop.  By the time I finished – about two a.m., the calculations having taken me over five hours despite the fact that I’d long since given up trying to correct errors – my fingers were so painfully cramped it seemed they’d never recover.

In this way, I started to “feel,” to “experience,” the hugeness of the number 18,446,744,073,709,551,615.

By reading this account, some may feel they have a deeper sense of the enormity of 2 to the sixty-fourth power.  But I’m willing to bet that if you’ve never done it before, and now actually try to CALCULATE it yourself, you’ll appreciate the hugeness in ways that symbols on a page – the stuff our brains use – can never convey.

As I look at the digits on the page, my brain is trying to spare me that visit to the land of reality.  It strives to shield me from calloused fingers and mental exhaustion with its easy comparisons, its suggestions to count digits, even its way of hearing words and using them to imagine a story about going to the moon and back seven times.  In the same way, perspective had tricked me into thinking I understood the length of a fence, as if to spare me a sore back, sore feet, sunburn and thirst for two months.  But the experience of painting the fence had taught me more about its length than framing it with my thumbs or even counting the posts between the rails.  I don’t know how long it would have taken to finish painting that fence.  I could do the calculations, but only finishing the job would have really made me “comprehend” the time involved, and who knows – I might have died of heat exhaustion before I ever finished.

I should know that the farther away something is, the bigger it must be, if I can see it.  But through the deceit called perspective, my brain tells me precisely the opposite: the farther away something is, the smaller it appears.  Stars more massive than the sun are reduced to mere pinpricks in the sky.  When I remove a contact lens, my thumb looks like the biggest thing in the world.  What better evidence can there be that our brains are built to deceive us?

My brain (wisely) keeps me focused on things I need, like apples, and on things that can kill me, like woolly mammoths, men with rifles, fast moving cars, or thumbs in my eye.  But to do this, my brain necessarily distorts the things that are far away, and the things that are many, and the things that are very much larger than me, because they are things I can do nothing about.  In fact, I suspect that if my brain were asked to identify the biggest, most important thing in the world, it would say it was me.

And that might just be the biggest delusion of all.

–Joe

Digesting Reality

After I gave a short talk on We May Be Wrong, one man who heard me suggested I might want to read “Seeing Like a State,” by James C. Scott.  Along with Scott’s more recent book, “Against the Grain,” it has had a profound effect on my thinking.

Scott is a Professor of Political Science and Anthropology at Yale.  His subtitle, “How Certain Schemes to Improve the Human Condition Have Failed,” gives a clue to his thinking.

Seeing Like a State begins with a description of forestry practices in late eighteenth century Prussia and Saxony.  The forest, Scott reminds us, was a complicated, diverse ecosystem, consisting not just of varieties of trees, but of bushes and smaller plants, of foliage that was useful for fodder and thatch, of twigs and branches from which bedding was made, of bark and roots for the making of medicines, of sap for making resins, of fruits and nuts available for consumption, of grasses, flowers, lichens, mosses, and vines – not to mention being a habitat for fauna from insects and frogs to birds and foxes and deer, and a place human beings used for hunting, gathering, trapping, magic, worship, refuge, poetry and (he didn’t mention it, but I will) love.

But the German state was focused on a single aspect of the forest – the commercial value of its timber.  In a series of steps recounted by Scott, the German state essentially redesigned its forests in order to maximize timber production and increase the wealth of the German state.  The consequences ultimately proved disastrous – for the state, its citizens, and the forest itself.

From this and a variety of other examples, Scott generalizes:  “Certain forms of knowledge and control require a narrowing of vision.”  State action has frequently failed, says Scott, not because the particular state is politically leftist or rightist, wise or inept, forward or backward-thinking, but because its focus is the sort of abstract overview a state must adopt in order to manage a complex system based on whatever fundamental principles it chiefly values.  The connection to WeMayBeWrong is suggested most strongly when Scott writes, “If the utilitarian state could not see the real, existing forest for the (commercial) trees, if its view of the forests was abstract and partial, it was hardly unique in this respect.  Some level of abstraction is necessary for virtually all forms of analysis” (my emphasis).

If Scott’s next book is called “Thinking Like a Human Being,” I suspect I’ll like it, too.  For isn’t some level of abstraction necessary, not just for all forms of state action, and all forms of analysis, but  for all forms of communication?  For all forms of thought?  Isn’t it true that to make sense of things, we have to select certain attributes to focus on, to the exclusion of others?  Aren’t we compelled to categorize?  To deal in types rather than specifics?  To oversimplify?  Surely we can’t possibly think in terms of every dachshund on every street in every town in every country of the world, not to mention all the individual dogs of every other breed – especially if we’re going to start comparing them to cats and birds and lizards and apes.  We can only get our mind around such large numbers of unique animals by lumping all those breeds and individuals together, ignoring all their differences,  and speaking of “dogs.”  How could it be otherwise?

In Thinking Fast and Slow, Daniel Kahneman writes, “The often-used phrase ‘pay attention’ is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail.”  Or, as Daniel Gilbert writes in Stumbling on Happiness, “It is difficult to escape the focus of our own attention – difficult to consider what it is we may not be considering.”

We aggregate.  We categorize.  We stereotype.  We oversimplify.  As I see it, grouping unique things together based on certain similarities – despite other differences – is fundamental to the very way we think.

The lead story on the front page of last Sunday’s Richmond Times-Dispatch was about a twenty-six year old man named Ted.  According to the article, Ted had gotten into hard drugs including opiates, cocaine, and heroin.  He’d been fired from his job and had stolen from his girlfriend.  He’d spent time in jail periodically for assault, grand larceny, and violating probation.  A few days after release from jail, he entered a “sober house” for addicts seeking to beat their addictions.  He signed a contract with the facility that laid out the rules, including curfews, twelve-step meetings, and a specific provision that use of drugs was grounds for immediate expulsion from the house.

As one official was quoted as saying, “These sober homes are not locked down jail cells.  The kids come and go.”  When Ted showed up at his sober house a week later acting suspiciously, a required drug test was positive for cocaine and morphine.  When asked to submit to a drug search, Ted refused.  In accordance with the contract he’d signed, he was told he had to leave the house.  Together with another resident, he did.  That was late on a Friday night.

On Saturday, Ted and the other man did some work for a landscaper.  Saturday night, Ted was exchanging text messages with a girlfriend in Florida and with the landscaper, who was asking about Ted’s plans for Sunday.  But on Sunday, Ted’s body was found on the side of a country road not far from where we live.  He had died of an overdose of fentanyl, cocaine and heroin, presumably consumed later that Saturday night.

Alright – it’s a tragic story, but what does it have to do with Seeing Like a State?  Or with WeMayBeWrong?

Ted’s picture was printed, rather large, on the front page of the paper, along with a headline that read, “The System that Was Trying to Help Him Crumbled.”  The article’s subtitle was “Death in Chesterfield Highlights Gaps in Care for Addicts Living in Sober Homes.”  According to the article, Ted’s grieving mother was “strongly critical” of the sober house’s conduct in telling him he had to leave, rather than releasing him to someone who could give him “proper care.”  What that might have entailed and how it might have worked is far from clear to me.  Apparently, calling a probation officer late on a Friday night is problematic.  Even had he been reached, would Ted’s probation officer have been able to locate Ted, or do anything that would have led to saving Ted from his final overdose?

But what I find interesting is the clamor of “experts” calling for a standardized fix to the system.  Interviewed for the article, the head of an unrelated recovery program said “operators of recovery homes need to have policies for making sure residents get the care they need when they test positive for drugs.”  The grieving mother posted a letter on another website to the effect that recovery facilities “MUST have a protocol, a plan of action” in such cases.  When interviewed for the article, the President of the National Alliance for Recovery Residences said that all fifty states should have laws requiring all sober houses to be certified – by them (state affiliates of the N.A.R.R.), or by organizations like them.  Such certifications, he said, would be based on “clear policies,” “trained staff” and “approved standards.”  The grieving mother’s complaints that the “system” had “crumbled” became the headline the Times-Dispatch gave to its coverage.  That newspaper’s attention had caused the Virginia Association of Recovery Residences (V.A.R.R.) to schedule a vote, this coming month, “to create a uniform policy for what operators of sober homes should do when someone relapses.”

No less so than central governments, private organizations like the N.A.R.R. and V.A.R.R. meet, and analyze, and sometimes vote (depending on how democratic they are) to determine the best method of dealing with categories of problems.  Once these entities identify “best methods,” they seek to encourage or require others to adhere to them.  Hence the call for uniform policies, approved standards and “certifications” by these organizations.  But in Ted’s case, amidst all the calls for uniformity, written policies, standards and certifications, I fail to see the connection between such proposals and the conduct of this particular house, and this particular drug addict.  And I wonder whether all the sober houses of the world should be treating all the drug addicts of the world in a “uniform” manner when they relapse, as if all members of the category ought to be treated the same.

Understandably, the grief-stricken mother believes that releasing her son to “proper care” would have made a difference.  Understandably, she believes that the “system crumbled.”  It’s harder for me to understand why a newspaper headlines its story about Ted with that same diagnosis – that  the lack of – or deficiency in – a “system”  was the cause of the tragic event two days later.  And I wonder why organizations like the Virginia and National A.R.R.’s see written policies, uniform standards and certificates of compliance as the answer to problems like Ted’s – until I remember that those same organizations would be the ones setting the standards and issuing the certificates – in other words, “thinking like states.”

But I don’t think it’s just states.   Gilbert, again: “[M]uch of our behavior from infancy onward is simply an expression of [our] penchant for control.  Before our butts hit the very first diaper, we already have a throbbing desire to suck, sleep, poop and make things happen… Toddlers squeal with delight when they knock over a stack of blocks, push a ball, or squash a cupcake on their forehead.  Why?… The fact is that human beings come into the world with a passion for control…”

The questions raised by Ted’s tragic death and by Professor Scott’s books include whether uniform standards and systems imposed by central authorities – public entities or large private corporations or associations – are capable of fully addressing the complexities and fluidity of the world.  Large organizations, says Scott, can only operate based on uniform standards applied to categories shaped along lines that are capable of centralized, standardized administration.  By their nature, standards are uniform across whole categories.  They are also meant to be relatively permanent in the face of constant change – permanent in the sense of controlling things until some newer, wiser “standard” is discovered and deemed worthy of taking its place.  But if uniform standards are the answer to Ted’s problems and the rest of the world’s problems, what do we make of the German approach to forestry?  Of the widespread use of DDT?  Of the failure of the Soviet Union?  Of the unbridled use of petrochemicals by private industry?  Of the increasing tendency for “superior” (but genetically uniform) corn to be planted all across America?

These days, science has become acutely aware of the dangers of monoculture when it comes to crops, wildflowers, bees, viruses, and all species of living things.  It was standardization that killed the forests of Saxony.  Diversity in the gene pool of flora and fauna is recognized as the best long term protection against an ever larger list of catastrophes – both the few that we’re aware of and the many we’re not.  The Supreme Court has before it a case in which Harvard University stresses the importance of diversity in its admissions practices, and most of the universities in the country support Harvard as to that importance.  More and more, I hear scientists and psychologists speak of the impossibility of predicting the future, so that any scheme designed to protect us from the most visible threats may well subject us to others not yet perceived.  Yet in the face of growing concerns about monoculture and the importance of diversity, cries for standardization and uniform solutions continue from people convinced they know what’s best for us all.

According to Scott, the tendency of authorities who’ve decided they “know what’s best” to impose those ideas uniformly, in a “one-size-fits-all” manner, is a serious problem, and whether those authorities are private or public, totalitarian or democratic, they do so only after over-simplifying the world.  They design their systems like monocultures, giving precedence to a few priorities in an extremely complex and inter-dependent world that is, in the end, a forest (of one sort or another).  “Certain forms of knowledge and control require a narrowing of vision.”  “Some level of abstraction is necessary for virtually all forms of analysis.”

Or, to borrow a thought from Jean-Paul Sartre, quoted by Scott: “Ideas cannot digest reality.”

Perhaps, yet another reason that we may be wrong.

– Joe

There’s Nothing Like a Really Good Shower

Like many of you, I do some of my best thinking in the shower.  Have you ever wondered why?

Last night, my attention turned to the simplicity around me.  I was standing in a tub, with three walls and a shower curtain bounding my world.  Before me, four items of chrome: the shower head, a control plate, a faucet, and a drain.  At my side, a soap dish, a bar of soap, and a bottle of shampoo.  Once I’d turned the water off, there was nothing more.

When I opened the curtain, there was plenty more to see: a vanity, a mirror, a toilet, a pair of towel racks hung with towels, a bathrobe hanging from a hook on the door.  But as I inventoried this expanded-but-still-small world, I realized there was no end to the counting: three pictures on the walls, light fixtures, a light switch, a door knob, brass hinges, a toilet tissue dispenser, baseboards,  two floor mats, a patterned linoleum floor, and no fewer than twenty-six items on the vanity, from deodorant and toothpaste to a tub of raw African shea butter.  Two of the twenty-six items were ceramic jars, filled with scissors, tweezers, nail clippers, cotton swabs, and  other modern necessities.  Most of the items were labeled, little sheets of paper glued on them, each little sheet bearing product names, ingredients, and warnings in tiny fonts and a wide array of colors.

Early in fifth grade, Paul Czaja had our class use a sheet of paper, telescoped into a tube, to survey our surroundings.  The idea seemed too simple – easy to dismiss because we already “knew” the result.  But actually trying it proved us wrong.  Paul insisted that, one eye shut, we keep the other at one end of the scope for five minutes; he wouldn’t let us stop or look away.  Forced to view our classroom from these new perspectives, we were amazed at how different it became.  Desks, windows, blackboards and classmates disappeared, replaced by a tiny spider web  that trapped an even tinier bug in a corner; the pattern in the grain of a piece of wood; a piece of lint trembling in an unseen movement of air like a piece of desert tumbleweed.

As I toweled dry after my shower, the world of things too small to notice most of the time came into sharper focus.  My attention turned to things I go through life ignoring.  From the confines of my bathroom, I took stock of the unseen.

The room, I supposed, and no doubt my own body, were covered with bacteria.  (I might have found that thought abhorrent once, but today, nourished by probiotics and kombucha tea, I find it comforting.)  In the empty space between me and the mirror, I imagined all the even smaller things I couldn’t see, the atoms of nitrogen and oxygen, the muons and the quarks, the billions of things that swirl around me, unseen, though I breathe them in and out, and though they sustain me.

I thought of things in the bedroom and the hall, and the things out in the yard, and things so far away I couldn’t see them at all – hidden behind walls I lack the X-ray vision to penetrate, or lost in the vast night sky, too distant and too faint for my eyes.

But it wasn’t a matter of distance, size and walls alone that limited my sight.  I thought of all the colors I’d never be able to see, because the cones in my eyes don’t react to all the wavelengths of light that exist.  And moving past the limitations of sight, I thought how oblivious I am to odors: every one of those ingredient labels lists chemicals and molecules a dog could easily distinguish, probably right through their containers, yet all those stray molecules float into my nose unnoticed.

I hear but little of what there is to hear.  Some sounds are simply too quiet.  Others are loud enough to make dogs and teenagers come to attention, but too high-pitched for my adult ears to discern.  Others are at frequencies too low.  And even dogs and teenagers hear but a tiny fraction of the oscillations that twitter, snap and buzz in the world around us.

Taste?  Surely, the world holds more complexity of taste than five types of gustatory cells can capture – cells whose highest achievement lies in their ability, acting as a team, to distinguish between sweet, sour, bitter, salty and savory.

And what about the things we call “forces”?  How often are we conscious of gravity?  If I focus on it, I can imagine that I feel the gravitational pull of the earth, but have I ever felt the pull of the moon?  And have I ever once thought about the gravitational pull of the vanity, the toilet, and the doorknob?  How often do I focus on the domino effect of the electrons hopping and pushing, connecting the light switch to the fixture?  And do I ever think of the magnetic fields surrounding them?  Unless I’m shocked by the sudden release of static electricity, I go through life completely oblivious to its existence.

Perhaps most of all, I’m unconscious of myself – the flow of hormones that affect my mood; the constant traffic in my nervous system that never reaches my brain, much less my conscious thought; the processes of liver, kidney and thalamus that keep me going.

In short, the world I experience, through my senses, is but a tiny fraction of the real world in which I live.

Yet that’s not all.  I haven’t even begun to count the ways my brain deceives me.  In fact, it wouldn’t be doing its job if it didn’t distort reality. The tricks my brain plays on me take the already small portion of reality I’m able to sense, and make it appear to be something other than it is.

My eyes see two different views of the world, but – as if it’s afraid I couldn’t handle multiple points of view – my brain tricks me into thinking I see only one.

When I turned off the shower, my world was completely silent – or so it seemed.  I’d heard the cessation of the water coming down.  It being late at night, there were no voices from downstairs, no television blaring, so my brain told me – convincingly – that the bathroom was silent.  Only when I closed my ear canals by pressing flaps of flesh to cover them did I notice the hum of ambient background noise – by its sudden absence.  That noise had been so normal, so much a part of the ordinary, that my brain had convinced me it wasn’t there.  (My highly evolved brain still wants to know: What’s the use of listening to background noise?)

Early on, my brain tricked me into thinking that some things are up and others are down, and that up and down are the same everywhere.  (I spent a lot of time as a child worrying about the people in China.)  And it was so intent on perpetuating this deception that when my retinas saw everything in the world “upside down,” my brain flipped the world around to be sure I saw everything “right side up.”

One of the brain’s most convincing tricks is what it does with my sense of touch.  It has convinced me I’m doomed to a life in touch with the ground; I’ve often regretted my inability to fly.  But in fact, I’m told, the stuff of which I’m made has never come in contact with the ground, or with any other stuff at all – if it had, I’d have exploded long ago.  The sense of touch might better be called the pressure of proximity.  All that time I dreamed of flying, I was floating all the while!

How about my sense of who I am?  I wonder if that’s not the biggest trick of all.  When my body changes with every bite of food, every slough of skin, every breath of air I take – when scarcely a cell or atom present today was there on the day I was born – is my very sense of self an illusion, created by my brain “for my own good”?

And so I surveyed the bathroom.  Having first considered the things too small to notice, or too quiet, or too far away, or not at the right frequency, and having then considered the ways my brain tricks me, I next encountered a whole new category of deception.  As my eyes fell on various objects, I noticed something else my brain was doing.  For example: on the toilet tank was a vase full of flowers – not real ones, but pieces of plastic, molded and colored to look like real ones.  Another example: one of the pictures on the wall was of a swan and her cygnets – not a real swan, but a mixture of acrylics applied to a canvas, a two-dimensional image designed to give the illusion of life in a three-dimensional pond.  The painting was designed to make me think of something not there, and it did.  I didn’t think “acrylics,” I thought “swan.”  And as soon as it had done that, my brain had me thinking of the artist – my wife, Karen – of her skill with a brush, of her sense of color, and of some of the many ways she’s blessed me through the years.  And I realized that all these mental associations, these illusions, these memories, form an extremely important part of the reality in which I live, despite the fact that they don’t reside in the space between me and the mirror (at least not literally).  The flowers in the vase are just molecules of colored plastic, but my brain gives them a fictional existence – a story of smells and bees and fresh air and blue sky, and all the associations that “flowers” evoke in my brain.  The swan and her cygnets remind me not only of a wife who paints, but of our children, and of times we walked together along the water’s edge, watching swans and cygnets swim by.  My mind, I realize, is a factory, churning out a never-ending assembly line of associations, all of them things that “aren’t really there.”

And so, I conclude, I’ve spent a lifetime in a shower of a different sort – bombarded by atoms, muons, quarks and dark matter, things so small I call them emptiness, all the while pulling associations, memories, and narratives into my world that aren’t really there.

When I say they aren’t really there, I don’t mean to deny that Karen, and swans, and flowers, are real – but that memory itself is reconstructive.   My memories are hardly exact replicas of things I’ve experienced;  they’re present creations, constructed on the spot in a crude effort to resemble prior experience.  The result is affected by my mood, and by error, and by all sorts of intervening experiences.

And so, I live in a world that isn’t the real world, but one severely limited by my meager human senses; one corrupted by a brain determined to distort things for my own good; one filled with the products of my own defective memory and my own boundless tendency to imagine things that aren’t there.  Somehow, I’m able to deal with the shower of inputs so created – over-simplified, distorted and augmented as it may be.  In fact, I’m pretty well convinced that I can deal with it far more easily than I could deal with the vast complexity of the “real thing.”

I woke up this morning hoping that I never lose sight of the difference between the two.

— Joe

Zooming In

Neil Gaiman tells the story of a Chinese emperor who became obsessed by his desire for the perfect map of the land that he ruled. He had all of China recreated on a little island, in miniature, every real mountain represented by a little molehill, every river represented by a miniature trickle of water.  The island world he created was enormously expensive and time-consuming to maintain, but with all the manpower and wealth of the realm at his disposal, he was somehow able to pull it off.  If the wind or birds damaged some part of the miniature island in the night, he’d have a team of men go out the next morning to repair it.  And if an earthquake or volcano in the real world changed the shape of a mountain or the course of a river, he’d have his repair crew go out the next day and make a corresponding change to his replica.

The emperor was so pleased with his miniature realm that he dreamed of having an even more detailed representation, one which included not only every mountain and river, but every house, every tree, every person, every bird, all in miniature, one one-hundredth of its actual size.

When told of the Emperor’s ambition, his advisor cautioned him about the expense of such a plan.  He even suggested it was impossible.  But not to be deterred, the emperor announced that this was only the beginning – that even as construction was underway on this newer, larger replica, he would be planning his real masterpiece – one in which every house would be represented by a full-sized house, every tree by a full-sized tree, every man by an identical full-sized man.  There would be the real China, and there would be his perfect, full-sized replica.

All that would be left to do would be to figure out where to put it…

***

Imagine yourself standing in the middle of a railroad bed, looking down the tracks, seeing the two rails converge in the distance, becoming one.  You know the rails are parallel, you know they never meet, yet your eyes see them converge.  In other words, your eyes refuse to see what you know is real.

If you’re curious why it is that your mind refuses to see what’s real in this case, try to imagine what it would be like if this weren’t so.  Try to imagine having an improved set of eyes, so sharp they could see that the rails never converge.  In fact, imagine having eyes so sharp that just as you’re now able to see every piece of gravel in the five foot span between the rails at your feet, you could also see the individual pieces of gravel between the rails five hundred miles away, just as sharply as those beneath your feet.  In fact, imagine being able to see all the pieces of gravel, and all the ants crawling across them, in your entire field of vision, at a distance of five hundred miles away.  Or ten thousand miles away.  What would it be like to see such an image?

***

How good are you at estimating angles?  As I look down those railroad tracks, the two rails appear straight.  Seeing them converge, I sense that a very acute angle forms – in my brain, at least, if not in reality.  The angle I’m imagining isn’t 90 degrees, or 45 degrees; nor is it 30, or even 20.  I suppose that angle to be about a single degree.  But is it really?  Why do I estimate it at a single degree?  Why not two degrees, or half a degree?  Can I even tell the difference between a single degree and half a degree, the way I can tell the difference between a 90 and a 45?  Remember, one angle is twice as large as the other.  I can easily see the difference between a man six feet tall and one who’s half his size, so why not the difference between a degree and half a degree?  What if our eyes – or perhaps I should be asking about our brains – were so sharp as to see the difference between an angle of .59 degrees and one of .61 degrees with the same ease and confidence with which we distinguish between two men standing next to each other, one five foot nine and the other six foot one?
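For the curious, the geometry behind that guess is easy to sketch.  Here is a rough back-of-the-envelope calculation (in Python) of the visual angle the two rails span at various distances.  Everything in it – the standard rail gauge of 1.435 meters, the sample distances, the little helper function – is my own assumption for illustration, not anything measured from a real track:

```python
import math

GAUGE = 1.435  # standard rail gauge in metres (an assumption; gauges vary)

def visual_angle_deg(distance_m):
    """Angle, in degrees, that the rail separation spans at a given distance."""
    return math.degrees(2 * math.atan((GAUGE / 2) / distance_m))

for d in (10, 82, 500, 1000):
    print(f"at {d:>4} m the rails span {visual_angle_deg(d):.3f} degrees")
```

At roughly eighty-two meters, the track spans about one degree of my visual field; to squeeze it down to half a degree, I’d have to back up to about twice that distance.  No wonder my brain can’t tell the two apart.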

***

Yesterday, I was preparing digital scans of my grandfather’s Christmas cards for printing in the form of a book.  His Christmas cards are hand-drawn cartoons, caricatures of famous personalities of his day.  Each is clearly recognizable, from Franklin Roosevelt and Adolf Hitler to Mae West and Mickey Mouse.  Some of the images were scanned at 300 pixels per inch, some at 600, and so on.  Reflecting on pixel counts and resolutions so that my printed book would not appear blurry, I was testing the limits of my ability to distinguish different resolutions.  Of course, one neat thing about a computer is how it lets us zoom in.  As long as I zoomed in close enough, I could see huge differences between two versions of the same picture.  Every pixel was a distinct color, every image (of precisely the same part of the caricature) a very different pattern of colors – indeed, a very different image.  Up close, the two scans of the cartoon of Mae West’s left eye looked nothing alike – but from that close up, I really had no idea what I was looking at – it could have been Mae West’s left eye, or Adolf Hitler’s rear end, for all I knew.  In any case, I knew, from my close-up examination, how very different the two scanned images of Mae West actually were.  Yet only when I was far enough away was I able to identify either image as being a caricature of Mae West, rather than of Hitler, and at about that distance, the two images of Mae West looked (to my eye) exactly the same.
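The effect is easy to mimic.  A minimal sketch in Python, using two hypothetical one-dimensional “scan lines” (invented numbers, not my grandfather’s actual cards): pixel by pixel they look nothing alike, but average them in blocks – a crude stand-in for stepping back from the image – and they become identical:

```python
def downsample(pixels, block):
    """Average consecutive blocks of pixels -- a crude stand-in for
    viewing the image from farther away."""
    return [sum(pixels[i:i + block]) // block
            for i in range(0, len(pixels), block)]

scan_a = [100, 200, 100, 200, 100, 200, 100, 200]  # one hypothetical scan
scan_b = [150, 150, 140, 160, 150, 150, 160, 140]  # a very different scan

print(scan_a == scan_b)                              # False: up close, nothing alike
print(downsample(scan_a, 4) == downsample(scan_b, 4))  # True: from afar, identical
```

Stepping back, in other words, is a kind of averaging – and averaging throws the differences away.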

***

How long is the coastline of Ireland?

If I took a yardstick and walked the perimeter, I could lay my yardstick end to end the whole way around, count the number of lengths, and conclude that the coastline of Ireland was a certain number of feet long.  But if I used a twelve-inch ruler instead, following the ins and outs of the jagged coast a little more precisely, the result would be a larger number of feet than if I had used the yardstick, because the yardstick assumed straightness every time I laid it down, when in fact the coastline is never perfectly straight.  My twelve-inch ruler could more closely follow the actual irregularity of the coastline, and the result would be a longer coastline.  Then, if I measured again, using a ruler only a centimeter long, I’d get a longer length still.  By the time my ruler was small enough to follow the curves within every molecule, or to measure the curvature around every nucleus of every atom, I’m pretty sure I’d have to conclude that the coastline of Ireland is infinitely long – putting it on a par, say, with the coastline of Asia.
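Mathematicians call this the coastline paradox, and the Koch snowflake is their textbook stand-in for a jagged coast: a curve built by endlessly replacing each straight segment with four segments a third as long.  A short sketch of the measurement, assuming a starting edge of length 1:

```python
import math

# Measure one edge of a Koch snowflake with ever-shorter rulers.
# At construction depth n the edge is 4**n segments of length (1/3)**n,
# so a ruler of length (1/3)**n measures a total length of (4/3)**n.
for n in range(6):
    ruler = (1 / 3) ** n
    measured = (4 / 3) ** n
    print(f"ruler {ruler:.5f} -> measured length {measured:.3f}")

# The measured length grows without bound; the growth rate corresponds
# to a fractal dimension of log 4 / log 3:
print(math.log(4) / math.log(3))  # about 1.262
```

Shrink the ruler by a factor of three and the measured length grows by a third – forever.  My centimeter ruler in Ireland would be fighting a losing battle.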

***

How many rods and cones would my eyes have to contain, for me to be able to distinguish every ant and piece of gravel in every railroad bed, wheatfield and mountainside within my field of vision, at a distance of five hundred miles away?  How much larger would my brain have to be, to make sense of such a high-resolution image?  I suspect it wouldn’t fit inside my skull.

***

Why did we, until so recently, believe that the atom was indivisible?  Why did it take us so long to identify protons, neutrons, and electrons – and then to realize that even those weren’t the smallest things, that there were quarks and leptons besides?  Why did it take until 2012 to confirm the existence of the Higgs boson?  And who’s to say we’ve reached bottom even now?

***

Given how long it took us to realize that our solar system was just one of billions in our galaxy, and how much longer to realize that our galaxy was just one of billions of galaxies, why are we now so confident of our scientists’ estimates of the size of the Universe – especially when told that the “dark matter” and “dark energy” they say account for most of it are just names given to variables needed to make their equations come out right?  That, apart from that usefulness, the scientists have never seen this stuff and have no idea what it is?  Is it really so hard for us to say, “We simply have no idea”?

***

A human baby can distinguish among hundreds, even thousands, of human faces.  But to a human baby, all chimpanzees look alike.  And to most of us Westerners, all Asians look alike.  Why do babies treat Asians and chimpanzees like the rails of railroad tracks, converging them into “identical” images even when we know they are different?  Did our brains evolve not to maximize perception and understanding, but to make them most efficient?  In other words, are we designed to have limited perception for good, sound reasons – reasons important to our very survival?

***

Why do we think, and talk, and act, as if our brains are capable of comprehending reality in all its vast complexity?  Is it simply more efficient to feed and maintain fewer rods and cones than it would be to maintain enough of them to see the difference between quarks and Higgs bosons, or the individual pieces of gravel between the railroad tracks on all the planets of Andromeda?

***

Mirror, mirror, on the wall: tell me, can I really achieve, within my brain, a true comprehension of the Universe? Or am I just like the Emperor of China?

– Joe

Hatred

I just watched a TED talk I liked.  The speaker (Sally Kohn) was articulate and funny; her message about hatred powerful.    Fearing that a synopsis of her talk would detract from the way she conveys her point, I’ll  simply share the link to her talk, with my strong recommendation.

https://www.ted.com/talks/sally_kohn_what_we_can_do_about_the_culture_of_hate?rss

But I do have one  disagreement with her.  At one point,  she refers to  “study after study after study that says, no, we are neither designed nor destined as human beings to hate, but rather taught to hate by the world around us…”

I’m not so sure.  Last year I saw a science show on TV that presented a series of studies of very young children; its disturbing suggestion was that we are born to hate.  Can anyone enlighten me about these studies, suggesting (one way or the other) whether hatred is learned, or innate?  A product purely of culture, or of biological evolution?

It has always seemed to me that while some of hate is surely learned, a predisposition toward it may be innate. But what would a predisposition toward hate look like?

Sally cites the early 20th century researcher Gordon Allport as saying that Hatred occupies a continuum, that things like genocide are at one end and “things like believing that your in-group is inherently superior to some out-group” lie at the other.  That much makes sense to me.  In fact, the very idea of a “hate continuum” with feelings of superiority lying at one end is why I think the answer to the innateness question may be important.

Whenever I hear it said that a positive self-image is important to mental health, I think of Garrison Keillor’s joke that in Lake Wobegon, everyone is above average.   I suspect the great majority of us think we’re at least slightly above average.  And don’t  psychologists say that that’s good?  Don’t we justify keeping our positive self-images by the corollary view that people who “suffer from a negative self-image” are likely unhealthy?  Don’t we think it would be beneficial  if everyone thought of himself or herself as above average?  Wouldn’t that mean an end, for example, to teen suicide?

But even if I’m far below average, there are at least some people in the world who are not as good (or as smart, or as fit, or as valuable) as me.  No?   And if I think my liberalism is superior to your conservatism, or the other way around,  you must lack some quality or insight I possess, no?  Does “feeling good about myself” require that, in some way, I feel superior to others?

Maybe not.  Maybe my positive self-image need not depend on comparing myself to others – maybe I can see value in myself, have a positive self-image, without thinking of myself as superior to anyone else at all.  But the only way I can discern to do that is to see equal value in everyone.  And if we’re talking about wisdom or intelligence or the validity of the things we believe, that means that my own power of discernment is no better than the next guy’s; that everything I believe in has value, but everyone else’s beliefs have equal value.  And I see great debate about whether that’s desirable.  Does it require me to abandon all my convictions?  To forgo all my beliefs?  What does it even mean to say that my belief in God has no more value than your belief in atheism, or vice versa?  Can I really believe in anything, if I think an opposing belief is just as “good”?  I think most of us say no.  I think that, for most of us, feeling good about ourselves and our beliefs is only possible through at least implicit comparison to others, a comparison in which we feel that our beliefs are at least slightly superior to somebody else’s.

Even if it’s both possible and desirable, it strikes me as very, very hard to have a positive self image without feeling such superiority.  I mean, can I really have a positive self-image if I think I’m doomed to be the very worst person on earth, in every respect?  It certainly seems likely that, for many, most or all people in the world, positive self-image depends on feeling superior to at least some others, in at least some respects.  I’d venture the guess that a tendency toward positive self-image (in comparison to others) has evolved in our species because of its evolutionary health benefits.  In any case, I suspect there’s a strong correlation between adults who feel their beliefs are superior and adults who feel disdain for the beliefs (or intellects) of others, and a strong correlation between those who feel disdain for the beliefs and intellects of others and those who hate them.  At the very least, positive self-image and a feeling of superiority seem at least early stepping stones in the direction of Hatred.

However, my suspicion that the seeds of Hatred are themselves innate doesn’t depend entirely on positive self-image and feelings of superiority.  The science show I watched last year dealt not with self-image, but with group identification and preference: the idea that we’re willing to assist and protect those who are most like ourselves, while directing the opposite (competition, aggression, violence) at those who are unlike ourselves.

“My God, my family, and my country.”   The familiar formula implies a great deal, I think, about the subject of identity, as does the advice we give to our children: “Don’t ever talk to strangers.”  Why do we alumni all root for the home team?  Why would most of us save our spouse and children from an inferno first, before saving strangers, if we save the strangers at all?  Why do we lock our doors at night to protect those we know, while excluding those we don’t?  Why do we pledge allegiance to our respective flags?

(That last one’s easy, of course, if we believe that we Americans pledge allegiance to our flag because our country is the greatest on earth.  Perhaps I should really be asking why all the other people in the world – who far outnumber us – pledge their allegiance to their flags, when they live in inferior countries?  Are they uninformed?  Too stupid to recognize our superiority?  Aware of our superiority, but unwilling to admit it, because of selfishness, dishonesty, or even evil design?  In which case, can Hatred be far behind?)

Why do we form Neighborhood Watch groups, erect walls on our borders, finance armies for self-defense, and raise tariffs against trade?  Is it not because we prefer the familiar, and because that preference is in our self-interest?  And isn’t self-interest as true of groups as of individuals?  In evolution, groups that look out for each other – that favor those most like themselves, while treating dissimilar “others” with suspicion and distrust – do well.  (We know that those like us aren’t dangerous, hostile predators, but fear that unknown strangers might be.)  In contemplating first contact with aliens from other worlds, some of us favor holding out olive branches, others launching some sort of first strike, but disagree as we might on how to greet them, we all tend to think in terms of a common goal: to preserve humanity.  We therefore focus on the need for global unity in facing the alien challenge.  But what is it that causes us to favor “humanity” over alien beings, when we know absolutely nothing about those alien beings?  Isn’t it because we know absolutely nothing about them?  Isn’t it because innate within us is a bias in favor of those who are most like ourselves?

Consider the following continuum, as it progresses from the unfamiliar to the familiar:

(1) We spend millions to combat and eradicate bacteria, giving Nobel prizes to those most successful in the effort;

(2) We spend some (but less) to eradicate mosquitoes, which we swat unthinkingly;

(3) By contrast, we feel bad if we run over an armadillo on the road, but what the heck, such accidents are unavoidable;

(4) We try not to think much about slaughtering millions of cows, but we do it on purpose, because we have to eat;

(5) Most of us abhor the idea of ever eating a monkey; and

(6) We condemn human cannibalism, abhorring murder so much that we imprison murderers, even if we oppose the death penalty because human life is sacred.

I think that assigning things to their place on such a continuum based on how much they seem similar or dissimilar to ourselves reflects our innate, natural preference for those most like ourselves.  Yet the tendency to feel safety in, and preference for, those who are most like ourselves, is precisely what leads to racism, no?

So, is this preference natural and good?  Or is it something to resist?  Should we be proud of our tendency to fight for our God, our country, our state, our species, our family, our planet – and to disdain our enemies – or should we be suspicious of that tendency, aware that it largely results from the accidents of birth?  And does our tendency to root for the home team – not to mention our loyalty to political ideals – exist only because we’re able to see the good in the familiar, while understandably blind to the good in the unfamiliar?

We don’t see what roosters see in hens.  We’re blind to what bulls see in cows.  But just as we can’t feel the love one three-headed Martian feels for another, I submit we won’t be able to appreciate the goodness that aliens will be striving to preserve when they descend upon us, maws open, preparing to treat us the way we treat swine.  I want to know WHY we are all in agreement on the importance of preserving our species, even if it means the poor aliens go hungry.  And I doubt it’s as simple as loyalty to good old mother earth, as I suspect we’d probably be happy to negotiate a peace with the invaders by offering them, say, all the world’s polar bears and squirrels, provided they’ll agree to leave humans alone.  This preference for humanity would prevail in that moment, I believe, never mind the national and regional warring between earthlings that had preceded it.  And it would seem strong enough to survive even if the alien species were acknowledged to be technologically “superior” to us.  But in that case, would our efforts rest on a reasoned belief that, at least morally, if not technologically, we are superior to such alien species?  Or would the instinctive feeling of moral superiority be only a disguise in which the instinct for self-preservation – and the consequent preference for things most like ourselves – had clothed itself?

I don’t claim to have the answers.  Whether we deserve to defeat alien invaders, whether we ought to value human beings more than chickens or mosquitoes, whether we ought to fight for our flag, these are not the issue here.  My point is that I take our allegiance to things most like us to be innate, whether it’s good or (in the case of racism) abhorrent.  I think the preference is a natural, inborn one, a part of who we are, whether we like to admit it or not –and that it’s a tendency terribly hard to get rid of, as our struggle with racism shows.

For the type of reasons Sally suggests, I believe that understanding our feelings of superiority and our preference for the things most like ourselves is the key to overcoming Hatred.  But if we think of Hatred as merely cultural, as merely something we’ve “learned” from society, I fear that, as individuals, we may be tempted to think we’ve already rid ourselves of it, or that we no longer need to be alert to its presence deep in our hearts.  If we see it only as something others do – if we fail to see at least the seeds of it, innate in ourselves, ready to manifest itself in our own actions– we may be the Hateful ourselves.

– Joe

How Smart Are We?

Linnaeus caused a stir when he included human beings in the animal kingdom, even though he flattered us with the name homo sapiens.  Charles Darwin caused a similar stir, though he asserted that “there can be no doubt that the difference between the mind of the lowest man and that of the highest animal is immense…”  But calling ourselves wise hasn’t been enough for most of us.  Our Bibles put us above mere animals, on a level just below the angels.  Even our scientists weren’t satisfied with Linnaeus; they went on to set us apart from the rest of homo sapiens on the basis of our superior intelligence – never mind that we mated with Neanderthals.  The scientific world now bestows on us the title “homo sapiens sapiens” – not just wise, but doubly wise.

When we were children, we were treated to many proofs of Man’s superior intelligence: we alone use tools; we alone have language; we alone care for our young so long; we alone can learn independently; we alone can solve new problems not encountered before; we alone have culture; we alone engage in entertainment, art, and play.  After these distinctions proved false, one by one, people became more cautious.  A recent list of the top ten traits that set us apart from other animals shows how much ground has been conceded.  Charles Q. Choi, contributing to Live Science in 2016, listed the top ten distinctions as our speech, our upright posture, our lack of body hair, the fact that we wear clothing, that we have “extraordinary brains,” that we have precise hands, that we make fire, that we blush, that we have long (if dependent) childhoods, and that we live past child-bearing age.[1]  In creating that list, Choi acknowledged that we’re not the only animals that speak, we just speak differently; that our upright posture is responsible for high childbirth mortality rates compared to other primates; that we have as much body hair as other primates, but ours is thinner, shorter, and lighter; and that while we have opposable thumbs, the apes do too – plus they have opposable big toes that do things ours cannot.  Blushing, and living past our reproductive usefulness, may be the only things that really set us apart, but we don’t yet understand what good these things do us.

Distinctions such as Choi’s make us different, but do they make us superior?  For many of the religious, the claim that we’re superior rests first on our “souls,” which, despite a lack of proof of their existence, many of us believe in the way we believe in our superior intelligence.  When it comes to that intelligence, Choi acknowledges that our brains are not the largest.  But our brains, he tells us, are “extraordinary” because they can produce the works of Mozart and Einstein.  And as any human being will tell you, Mozart is more beautiful than the screeching and moaning of a whale.  As any human being will tell you, it takes a higher intelligence to develop an atom bomb than it does to fly like a bat.

But do such judgments tell us more about our vanity than our intelligence?  Consider our history of assessing animal intelligence.  In 2013, the Wall Street Journal published a wonderful article by primate researcher Frans de Waal.  For years, de Waal wrote, scientists believed elephants incapable of using tools – one of the classic “proofs” of human intelligence.  In earlier studies, the elephants had been offered a long stick while food was placed outside their reach, to see if they would use the stick to retrieve it, as people (and chimpanzees) were able to do.  When the elephants left the stick alone, the researchers concluded that the elephants didn’t understand the problem.  “It occurred to no one,” wrote de Waal, “that perhaps we, the investigators, didn’t understand the elephants.”  Elephants use their trunks to smell, not just to hold branches.  As soon as an elephant picks up a stick, its nasal passages are blocked and it can’t tell what’s food and what isn’t.  Years passed before researchers thought to vary the test.  But when they put a sturdy square box out of sight, a good distance from the fruit, the elephant easily retrieved it, nudging it all the way to the tree, and used it to reach the fruit with a trunk that could now smell, and touch, and approve, the fruit.

Even more anthropocentric, in retrospect, is the research on chimpanzees’ ability to recognize faces.  For years, scientists had been showing chimps pictures of human faces, and when the chimps failed to distinguish among them, researchers happily concluded that the “unique” human ability to recognize faces had not been matched by the chimps.  It took decades before someone thought to test chimps on their ability to recognize the faces of other chimps – and when they did, they discovered that chimps were amazingly good not just at recognizing faces, but at using them to extrapolate family relationships!  And with improvements in testing methods, de Waal wrote, a 2007 study showed chimpanzees did significantly better than a group of university students at remembering a random series of numbers.[2]

The accepted idea is that “intelligence” involves the capacity to learn.  But to learn what?  If I can learn calculus easily but am helpless learning to play the piano, does that make me smarter than my counterpart with the opposite aptitudes, or less so?  Am I “smarter” than Dustin Hoffman in Rain Man, or less so?  If people learn different sorts of things at different speeds, is there any basis to say that one is smarter than another, without “smartness” being tied to a particular skill?  I once thought a fair answer might be that an individual could be considered “smarter” if she easily learned the things that are important, as opposed to the eccentric with an aptitude for odd or useless things.  If I can learn to build a house, or a car, easily, but my friend could play the piano the first time his hands touched the keys, the question of who’s “smarter” might depend on which skill is more useful to a typical human being.  Indeed, standardized testing still exists in K-12 because it is useful in predicting success in college.  This utilitarian approach to intelligence made at least some sense to me – until I sought to apply it to the rest of the animal kingdom.

What does science tell us about the relative intelligence of animals?  Finding a raft of “top ten” lists on the internet, the first thing I noticed was their lack of consensus.  Several sources rated chimpanzees the smartest animals; others dolphins, whales, elephants, and pigs.  But the variety of nominees was striking: top-ten lists included parrots, dogs, cats, squirrels, rats, crows, pigeons, orangutans, gibbons, baboons, gorillas, otters, ants, bees, ravens, ducks, cows, bonobos, and octopi, each list focusing on different skill sets or aptitudes.  I quickly decided that the lack of IQ testing on Noah’s ark wasn’t the only reason people can’t agree on what makes an animal smart.  There’s no universally-accepted definition of intelligence for species, any more than there is for humans.

Clearly, we do some things better than other animals.  In fact, as we look around our world, we see examples of such things all around us.  But I suspect that, from a dog’s perspective, the variety of sounds and smells he’s aware of – and we are not – makes it seem to him that he’s aware of a great deal more in the world than we are.  His world is equally full of confirmations of what he can appreciate that we cannot.  When we speak of our intelligence, when we give as examples Einstein and Mozart, what should I make of my assumption that a whale sees nothing special about Einstein?  And how would I know whether the whale appreciates Mozart?  Is it possible whales are simply bored by what they take to be the ‘inferior’ sounds of Mozart?  I’m quite sure, meanwhile, that I’m incapable of appreciating the ways whales communicate with each other.  Is it objective to conclude that we’re “smarter” because we understand the complexities of Mozart?

How is it we put so much stock in our ability to do the things we do well, and so little stock in the “unimportant” (to us) things we don’t do as well as other animals — like turn into a butterfly that can navigate its way back to a birthplace thousands of miles away?  Sure, a dog may not be able to learn Einstein’s theories.  But we’re not able to learn how to listen or smell with a dog’s acuity — even though dogs have been trying to teach us how for centuries, by modeling it for us.  Why don’t we conclude we’re slow learners?

The second observation I made during my review of the “smartest animal” lists was that, in commenting on why these species were considered especially smart, list after list referred to the nominee’s similarities to us.

Take, for example, the reasons given by Mercola.com for ranking chimpanzees the most intelligent animals: “Chimps … like humans, live in social communities and can adapt to different environments…  Chimpanzees can walk upright on two legs if they choose…”[3]  (Surely most scientists don’t believe that walking upright has anything to do with intelligence.  Am I wrong here, Stephen Hawking?)

In explaining why it ranks dolphins the second smartest (right after chimps), How Stuff Works tells us, “Schools of dolphins can be observed in the world’s oceans surfing, racing, leaping, spinning, whistling and otherwise enjoying themselves.”  Okay…  And why does it rank elephants fourth smartest?  “Elephants are also extremely caring and empathetic to other members of their group and to other species, which is considered a highly advanced form of intelligence.”[4]  About chimps, CBS says their number one ranking is “Perhaps not entirely surprising given that chimpanzees happen to be the closest living relatives to humans in the animal kingdom.”

The CBS website makes a truly remarkable assertion based on the difference in the brains of dolphins and human beings:  “Turns out that like the other animal species in this gallery, dolphins possess large brains relative to their body size with a neocortex that is more convoluted than a human’s. Experts say that this puts dolphins just behind the human brain when it comes to cognitive capacity.”  If, as I understand, a convoluted brain surface is an indication of intelligence, how does the greater convolution of the dolphin brain put the dolphin behind us, rather than ahead of us?[5]  Is it because our inability to understand their squeaks renders their speech “gibberish,” much as E=mc² might seem gibberish to them?

Having eyes in the backs of our heads, or a third arm projecting from our backs, could be very useful to us in the right situations.  Yet we’re happy to be without them.  However, if we were to lose the sight of an eye or one of our arms, we might feel some tragedy had befallen us.  Why is it that we don’t regret not having eyes in the backs of our heads?  Why do we not lament the lack of a third arm – or the fact that we lack the olfactory prowess of a dog, or the sonar of a bat?  I’ll bet that if our noses could do what a dog’s can, our ability to distinguish thousands of things based on just a few molecules in the air would rank among the first reasons that humans are the smartest animals.

So my dilemma is this: what happens if we try to remove any anthropocentric bias from our assessment of intelligence?  Is there a species-neutral standard by which to assess such things?  The more I consider the matter, the more I’m drawn to the possibility that the only definition by which one species can be said to be more intelligent than another is to ask how well-suited its unique talents are to ensuring its survival.  Measured that way, Homo sapiens sapiens has done pretty well for itself, at least in the hundred thousand years it’s been around.  Maybe there’ve been a few times we haven’t seemed so bright – but hey, what’s an error like thinking that the entire universe revolves around the earth, when we can figure out how to make chemicals like DDT or fill the planet with gas-powered automobiles?  Have we not been successful, filling the earth with billions of copies of ourselves?  Some say that a measure of human intelligence is our extraordinary ability to adapt to new environments.  Have we not, after all, proven our ability to adapt to different environments? [6]

The four animals most often at the top of the “smartest animals” lists I found were chimpanzees (and other primates), dolphins (and whales, porpoises, and other aquatic mammals), elephants, and pigs.  But most of the species in these groups are endangered.  If they really are similar to us, and they really are endangered, then what conclusions should we draw?  That like the great apes, we too are near extinction?  Or does the fact that we are responsible for the near extinction of most of these species mean that we are smarter than they are, and very different, after all?

Of course, not all the “similar” species are nearing extinction.  Dolphins are doing well, apparently.  Domesticated pigs are flourishing.  But before concluding that pigs have been “smart” to prosper so well that they end up on our dinner tables in such large numbers, consider this: if the true intelligence of a species is best evidenced by long-term growth and survival, why do we find all the “intelligent” animals among the mammals?

It is nearly impossible to calculate the number of cockroaches that exist worldwide due to the fact that so many already exist and are reproducing at such a fast pace. Scientists believe that there are over 4,000 species around the world and there are at least 40 different species that exist in America. One source suggests that 36,000 cockroaches exist per building in some parts of America.[7]

Cockroaches have also been around for 300 million years – three thousand times longer than Homo sapiens – and could easily survive a nuclear winter.[8]

But it simply isn’t acceptable to suggest that cockroaches are smarter than people.  Obviously, all mammals are smarter than insects; all primates are smarter than other mammals; all humans are smarter than other primates; and the smartest people in the world are those whose religious, political, and other beliefs all happen to match my own.  But doggone it, I still can’t figure out what makes us so extraordinarily smart.  Maybe someday we’ll figure it out, the way we finally figured out that the earth isn’t at the center of the Universe.

—Joe


[1] Charles Q. Choi, Top 10 Things That Make Humans Special, http://www.livescience.com/15689-evolution-human-special-species.html

[2] Frans de Waal, The Brains of the Animal Kingdom, Wall Street Journal, March 22, 2013, http://online.wsj.com/news/articles/SB10001424127887323869604578370574285382756

[3] Dr. Karen Becker, The Most Surprisingly Smart Animals,  http://healthypets.mercola.com/sites/healthypets/archive/2015/08/22/10-most-intelligent-animals.aspx

[4] Top Ten Smartest Animals,  http://animals.howstuffworks.com/animal-facts/10-smartest-animals.htm

[5] CBS News, Nature’s 5 Smartest Animal Species,  http://www.cbsnews.com/pictures/natures-5-smartest-animal-species/5/

[6] I love that phrase “after all.”  Adaptability to different environments is indeed an oft-cited reason supporting human intelligence, but after only a hundred thousand years, it might be wiser to substitute the more accurate “so far” for “after all.”

[7] Larry Yundelson, Number of Cockroaches, The Physics Factbook, http://hypertextbook.com/facts/2009/LarryYundelson.shtml 

[8] See Zidbits, Can Cockroaches Survive a Nuclear Winter?  http://zidbits.com/2011/09/can-cockroaches-survive-a-nuclear-winter/


Comparing Apples and Oranges

You know: the very point of saying “it’s like comparing apples and oranges” is that it’s difficult, maybe even impossible, to do so, because – well – because they’re just not the same.  Consider this picture:

[image: forty-nine apples and one orange]

Forty-nine apples and one orange.   If I put all this fruit in a bag, mix it up and pull one piece out at random, the odds will be 49 to 1 that I’ll pull out an apple.  That is, 49 to 1 against the orange.

Now a question for you: Assuming a random draw, will I be surprised if I pull out an apple?  Answer: no, I won’t.  I fully expect to pull out an apple, due to the odds.  I assume you wouldn’t be surprised either.  I also assume we’d both be surprised if I pulled out the orange, for the same reason.  Am I right?

Now,  I feel as I do without qualification — by which I mean, for example, that if I pick out the orange, my surprise won’t be greater or less depending on whether the orange weighs nine ounces or ten, and I won’t be surprised if I pull an apple from the bag, regardless of the number of leaves on its stem.   The fact is I expect an apple, and as long as I get an apple, I’ll have no cause for surprise.  Right?

But now another question, and this one’s a little harder. What are the odds of my picking out an apple with two leaflets on its stem?  You can scroll back and look at the picture if you want, but try to answer the question without doing so: what are the odds of my picking an apple with two leaflets on its stem?

Ready?

Alright. Hard, wasn’t it?  If you went back to look at the picture, you found there was only one apple with two leaflets on its stem. Knowing that, you determined that the odds against my picking that particular apple were 49:1, the same odds as existed against my picking the orange.  Yet it’s pretty clear, as already determined, I would have been surprised if I’d picked the orange, but I wouldn’t have been surprised if I’d picked the only apple in the bag with two leaflets on its stem.

My real question, then, is why the difference?  And the only answer that makes sense to me comes not from probability theory, but from psychology.  I’m surprised if I draw the orange because, being mindful of the differences between the orange and the apples, I expected an apple. But not being mindful of the uniqueness of the two-leafed apple, I lumped all the apples together and treated them as if they were all the same.  I focused on the fact that the odds against the orange were 49:1, while never forming a similar expectation about the improbability of choosing the two-leafed apple.

Here, then, is my conclusion:  In pulling fruit from the bag, the actual improbability of every single piece of fruit is the same. Yet the perceived improbability of choosing the orange is far greater than the perceived improbability of drawing the two-leafed apple, because… well… because I hadn’t been paying attention to the differences among the apples.
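The point can be checked with a quick simulation (a minimal sketch in Python; the fruit counts follow the example above, and which apple carries the two leaflets is an arbitrary assumption):

```python
import random

random.seed(0)

# 49 apples and 1 orange; by assumption, apple #0 is the two-leafed one.
fruit = [("apple", i) for i in range(49)] + [("orange", 0)]

draws = 100_000
orange_count = 0
two_leaf_count = 0
for _ in range(draws):
    kind, idx = random.choice(fruit)
    if kind == "orange":
        orange_count += 1
    elif idx == 0:  # the unique two-leafed apple
        two_leaf_count += 1

# Each specific piece of fruit is drawn about 1 time in 50 (odds of 49:1
# against it), whether or not it "feels" improbable to us.
print(orange_count / draws, two_leaf_count / draws)
```

Over many draws, the orange and the two-leafed apple turn up at essentially the same rate; the surprise we feel at one and not the other is in us, not in the odds.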

Also, the division of the 50 pieces of fruit into only two categories – apples and oranges – was a subjective choice.  I could have grouped the fruit into large and small, or into three groups based on relative sweetness.  Or according to the number of leaves on the stem, in which case the orange would have been in a group with twenty apples.

Now, in any group of 50 pieces of fruit, no two are going to be exactly alike – the two-leafedness of one will be matched by the graininess of another, the seed count of a third, the sweetness of a fourth, and so on.  But we elect to ignore (or de-emphasize) a whole slew of possible differences, in order to focus on one or two traits.  Only by ignoring (or at least de-emphasizing) other differences do we construct a homogeneous group, treating all 49 of the red fruits the same for purposes of comparison to the orange one — treating them all as “apples” rather than one or two McIntosh, one or two sweet ones, etc.  That’s why I’m not surprised when I pick out that one, unique apple, despite the 49:1 odds against it.

Now consider a related point: that (subjective) decision about what criteria to base comparisons on, while ignoring other criteria, not only explains why we’re surprised if we select the orange, but how we estimate odds in the first place.  In fact, if we consider all their attributes, every piece of fruit is unique. The odds against picking any one are 49:1.  Yet, if we only focus on the uniqueness of the orange, our impression of odds will be vastly different than if we focus on fruit size, or sweetness, or seed count.

It isn’t some sort of unalterable constant of nature that determines how we perceive odds – it’s what we’re mindful of, and our resulting (subjective) expectations.

In an earlier post, Goldilocks and the Case Against Reality, I wrote of the concept that the limited focus that characterizes our brains has been useful to us.  (If I could see every part of the electromagnetic spectrum, I’d be overwhelmed with information, so I’m advantaged by only being able to see visible light.)  My brain is just too small and slow to deal with all the information out there.  Even if I’d happened to notice there was only one two-leafed apple, I could never have taken the time to absorb all the differences among the forty-nine apples.  Compare that, say, to the difficulty of absorbing the different facial features of every person on this tiny, one-among-trillions planet.  I cope with reality by ignoring vast complexities of things I don’t understand, lumping a lot of very special things into groups for the very reason that I can’t get my brain to focus on all their differences.

Now, this lesson about comparing apples and oranges teaches me something about God, and I hope you’ll give me a chance to explain.

The astronomer Fred Hoyle is said to have written, “The probability of life originating on Earth is no greater than the chance that a hurricane, sweeping through a scrapyard, would have the luck to assemble a Boeing 747.”  Hoyle apparently used the improbability of life as an argument for the theory of intelligent design. Hoyle’s statement was then quoted in The God Delusion (Houghton Mifflin, 2006), by the atheist Richard Dawkins, who said that the “improbability” of life is readily explained by Darwinian evolution, and declared, “The argument from improbability, properly deployed, comes close to proving that God does not exist.”

Now, whether either of these beliefs makes sense to me, I’ll leave for another day.  My focus is on trying to understand any argument based on the “improbability” of life, and it’s because of what I’ve learned from the fruit.

I agree that the odds are against a hurricane assembling a 747, and against life’s existence exactly as it is today.  But is my surprised reaction to such improbabilities any different than my surprise at the random drawing of an orange, but not at the two-leafed apple?  Imagine, for a moment, that some other configuration of scrap parts had been left in the hurricane’s wake – one that appeared entirely “random” to me.  Upon careful inspection, I find that a piece of thread from the pilot’s seat lies in the precise middle of what was once the scrap heap.  A broken altimeter lies 2 meters NNE of there.  The knob of the co-pilot’s throttle abuts a palm frond 14.83 inches from that.  The three hinges of the luggage compartment door have formed an acute triangle, which (against all odds) points precisely north; the latch from the first class lavatory door is perched atop the life jacket from Seat 27-B….

I trust you get the picture.  Complex?  Yes.  Unique?  Yes.  So I ask, what are the odds the triangle of hinges would point exactly north?  The odds against that alone seem high, and if we consider the odds against every other location and angle, once all the pieces of scrap have been located, what are the odds that every single one of them would have ended up in precisely the configuration they did?

In retrospect, was it just the assembly of the 747 that was wildly against the odds?  It seems to me that every unique configuration of parts is improbable, and astronomically so.  Among a nearly infinite set of possible outcomes, any specific arrangement ought to surprise me, no?  Yet I’m only surprised at the assembly of the 747.  What I expect to see in the aftermath of the hurricane is a helter-skelter mess, and I’m only surprised when I don’t.

But on what do I base my expectation of seeing “a helter-skelter mess”?  Indeed, what IS a “helter-skelter mess”?  Doesn’t that term really mean “all those unique and unlikely arrangements I lump together because, like the apples, I’m unmindful of the differences between them, unmindful of the reasons for those differences, ignorant of how and why they came to be as they are”?

Suppose, instead, that with the help of a new Super-Brain, I could understand not only all the relevant principles of physics but all the relevant data – the location, size, shape and weight of every piece of scrap in the heap before the storm – and suppose further that when the storm came, I understood the force and direction of every molecule in the air, etc.  With all that data, wouldn’t I be able to predict exactly where the pieces of scrap would end up?  In that case, would any configuration seem improbable to me?  I suggest the answer is no.  There’d be one configuration I’d see as certain, and the others would all be patently impossible.

Compare it to a deck of cards.  We can speak of the odds against dealing a certain hand because the arrangement of cards in the shuffled deck is unknown to us.  Once the cards have been dealt, I can tell you with certainty what the odds were that they’d be dealt as they were: it was a certainty, given the order they had previously taken in the deck.  And if I’d known the precise arrangement of the cards in the deck before it was dealt, I could say, with certainty, how they would be dealt.  Perfect hindsight and foreknowledge are alike in that neither admits of probabilities; in each case – in a state of complete understanding – there are only certainties and impossibilities.  The shuffling of a deck of cards doesn’t mean that any deal of particular cards is possible; it means that we, the subjective observers, are now ignorant of the arrangement that has resulted.  The very concepts of luck, probability and improbability are constructs of our limited brains.  Assessments of probability have developed as helpful ways for human beings to cope, because we live in a world of unknowns.
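The numbers involved are easy to compute, if hard to comprehend (a small sketch; it assumes nothing beyond a standard 52-card deck and a fair shuffle):

```python
import math

# A fair shuffle makes each of the 52! orderings of the deck equally likely,
# so the probability of any *specific* arrangement is 1 / 52!.
orderings = math.factorial(52)
p_specific = 1 / orderings

print(f"possible orderings: {orderings:.3e}")  # on the order of 10**67
```

Every shuffled deck ever dealt was, in this sense, astronomically improbable; we only call a particular arrangement “improbable” when we happen to be mindful of it.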

Now, let’s return to the scrap heap, one more time.  But this time, we don’t have an all-knowing Super-Brain.  This time, we’re just a couple of ants, crawling across the site after the hurricane has left.  On the off-chance that the hurricane has left a fully assembled 747, would we be mindful of how incredibly unlikely that outcome had been?  I suspect not. A 747 has no usefulness or meaning for an ant, so we probably wouldn’t notice the structure involved, the causes and purposes of each part being where it is. From our perspective as ants, that assembled 747 might as well be a helter-skelter mess — an array of meaningless unknowns.

Now, after we traverse the 747, something else catches our little ant eyes.  Immediately, we scramble up the side of the anthill, plunge into the entrance, race down the pathway to the Queen’s deep chamber, and announce with excitement that something truly amazing has happened.

“It’s surely against astronomical odds,” I say. “I wouldn’t believe it myself, had I not seen it with my own two eyes!”

“What is it?” the Queen’s courtiers demand to know.

“A great glass jar of sweet jelly has appeared,” you say, “just outside the entrance to our anthill!  That jelly could have landed anywhere in the jungle.  What are the odds it would land just outside the entrance to our hill?  A thousand to one?  A million to one?  There must be a reason…”

Well, there probably is some reason, it seems to me.  But the difference in approaches taken by people and ants to the perceived “improbabilities” here reminds me of comparing apples to oranges.  It’s not just that apples are different from oranges.  Whether “God” made us or not, we’re all unique, in many, many ways.  Some of us — I’ll call them the oranges — attribute perceived improbability to “plain ol’ luck.” Others, like one-leafed apples, attribute it to intelligent design.  Others, like leafless apples, say that improbability nearly proves the non-existence of God.  I say, what we perceive as improbable depends on whether we’re ants or people.  Our surprise varies widely, depending on the criteria we’re (subjectively) mindful of.  But as unique as we are, we’re all alike in one respect: we all have limited brains, and that’s why we need concepts like probability —to cope with our profound lack of understanding.

So, call me a two-leafed apple if you like, but when I encounter the improbable — the fact that the grains of sand on a beach happen to be arranged exactly as they are, and the snowflakes in a blizzard move exactly as they do — I try to remember that what I experience as “randomness” is just a name I give to what I can’t get my mind around.  “Improbability” tells me nothing about God, one way or the other, except that, if God does exist, she gave me a brain that’s incapable of fully understanding the uniqueness of things, or why any of it exists.

And I’m okay with that.

— Joe


WYSIATI


Have you ever seen a swan?  If so, how many have you seen?

For four years, my family lived on a pond that we shared with a family of swans.  I saw this one family a lot.  More recently, I’ve seen a few more swans, but given that swans live long, maintain monogamous relationships, and tend to remain in the same habitat, I suspect I’ve been seeing the same swans over and over again. I’d take a wild guess that I’ve seen a total of thirty swans in my life.  You might ask yourself the same question now: how many do you suppose you’ve seen?  (We’ll return to the matter of swans in a moment.)

I’ve been on vacation in Florida, so it’s been a couple of weeks since my last WMBW post.  During the holidays I was able to read a couple of excellent books: one of them, Thinking, Fast and Slow, by the psychologist and Nobel Prize winner Daniel Kahneman, asserts that we have two systems in our brains – one designed to produce immediate beliefs, the other to size things up more carefully.  The other, Being Wrong, by journalist Kathryn Schulz, explores both the reasons we err and the emotional costs of recognizing our wrongness.  Both books have done much to clarify my intuitive beliefs about error.  If you suspect this is a case of “confirmation bias” you’re probably right – but at least my confirmation bias gives me a defense to those who say admitting doubt is tantamount to having no beliefs at all.  (I can’t have a bias in favor of a belief unless I have a belief to begin with, right?)

Well, to those who fear that I totter on the brink of nihilism, I assure you I do have beliefs.  And perhaps my strongest belief is that we human beings err – a lot. Since starting this website, I’ve started to see people committing errors with truly alarming frequency.  The experience helps me understand witch hunts, as I now see error the way Cotton Mather saw witches and Joe McCarthy saw communists – everywhere.  The difference, I’d submit, is that Cotton Mather never suspected himself of being a witch, and Joe McCarthy never suspected himself of being a communist. In contrast, I see myself being wrong every day.  In fact, most of the errors I’ve been discovering lately have been my own.

My willingness to admit to chronic wrongness may be partly due to the fact that Schulz devotes much of her book to rehabilitating the reputation of wrongness – pointing out that, far from being the shameful thing most of us consider it to be, wrongness is endemic to who we are and how we think – specifically, to our most common method of rational thought – reasoning by induction.

Consider this diagram:

[image: block-sign]

Reasoning by induction, says Schulz, is what causes even a four-year-old child to “correctly” answer the question of what lies behind the dark rectangle. By way of contrast, she says, a computer can’t answer such a puzzle. The reason? A computer is “smart” enough to understand that the dark rectangle may hide an infinite number of things, from a stop sign to a bunny rabbit to a naked picture of Lindsay Lohan. Without inductive reasoning, the computer will have to consider (and reject) an infinite number of possibilities before deciding on an answer. We humans, on the other hand, are much more efficient – we’re able to form nearly instantaneous conclusions, not by considering all the possibilities we don’t see, but by coming up with plausible explanations for what we do see. Even to a four-year-old child, it seems highly probable that the image behind the dark rectangle is the unseen middle of the white bar behind it. It’s certainly plausible, so we immediately adopt it as a belief, without having to exhaust an endless list of other explanations. Inductive reasoning makes us the intelligent, quick-thinking creatures we are.

In his book, Daniel Kahneman calls this WYSIATI. His acronym stands for “What you see is all there is.” Like Schulz, he points out that this is how human beings generally think – by forming plausible beliefs on the basis of the things we see, rather than by tediously rejecting an endless list of things we don’t. And, like Schulz, he gives this sort of thinking credit for a good deal of the power of the human brain.

But there’s a downside, a cost to such efficiency, which brings us back to swans. If you’re like me, you probably believe that swans are white, no?

“Which swans?” you might ask.

“Well,” I might well reply, “all of them.”

I first formed the belief that swans are white after seeing just a handful of them. Once I’d seen a dozen, I’d become pretty sure all of them were white. And by the time I’d seen my thirteenth swan, and my fourteenth, confirmation bias had kicked in, leaving me convinced that my belief in the whiteness of swans was valid. It only took one or two more swans before I was convinced that all swans are white. Schulz says it was the philosopher Karl Popper who asked, “How can I be sure that all swans are white if I myself have seen only a tiny fraction of all the swans that have ever existed?”

Schulz observes that as children, we likely observed someone flipping a light switch only a few times before concluding that flipping switches always turns on lights. After seeing a very small sample – say, a golden retriever, a shih tzu, and Scooby Doo – children have sufficient information to understand the concept of “dog.” We form our beliefs based on very small samples.

Kahneman describes how and why it’s so common for groups to underestimate how long a project will take: the group imagines all the steps it anticipates, adding up the time each step will take; it factors in a few delays it reasonably foresees, and the time such delays will likely take; and it even builds in an extra cushion, to give it some wiggle room. But almost invariably, it underestimates the time its project ends up taking, because in fact, the number of things that could cause delays is virtually infinite and, well, you can’t know what you don’t know. In a sense, to use Kahneman’s phrase, you can’t help but feel that “what you see is all there is.”

Now here’s what I think is a critical point. The way inductive reasoning takes such very small samples and draws global conclusions about them makes sense when worlds are very small. If the child’s world is her own house, it’s probably true that all the wall switches turn on lights – it’s only when she enters factories and astronomical observatories years later that wall switches turn on giant engines and rotate telescopes. Here in Virginia, all swans probably are white; I’ll only see black swans if I go to Australia or South America, which I may never do. There wasn’t much problem thinking of the world as flat until there were ocean voyages and jetliners. Both as individuals, and as a species, we grow up in very small, homogeneous worlds in which our inductive reasoning serves us well.

But the real world is more varied and complex. It’s when we expand our frames of reference – when we encounter peoples, cultures and worlds different from those of our youth – that what we “know to be true” is likeliest to be challenged.  And by that time, we’ve become convinced that we’re right. All experience has proven it. Everyone we know knows the truth of what we know. After all, our very conceptions of self, and of our worth, and our very comfort, depend on our being right about the Truth.

More, later, about the emotions involved when one of these strangers challenges that which we know to be true.

— Joe

Baby, It’s Cold Outside

Surely everyone knows the classic Ray Charles and Betty Carter duet in which Ray is intent on getting Betty to stay at his place for just one more drink, while Betty protests, insisting she can’t.  Hammering away with insistence that “It’s cold outside,” Ray eventually prevails on Betty to stay and enjoy the fire.  Snuggling up to him, happy to be together in harmony, Betty joins Ray in singing the final line, “Ah, but it’s cold outside!”

It’s a great study of persuasion in action – the use of words to produce apparent agreement.  I say “apparent” because – well, no, on second thought, I won’t go there.  The time’s not right to take up the subject of the obstacles words pose for minds that wish to share the same thought.  For today, let’s assume that words mean the same thing for everybody. And let’s use them, like Ray Charles so artfully does, for making a case.

If you’ve been following this website, you know that one of our friends made a suggestion that we include one or more “objective truths that everybody could agree on.”  Daunted by the prospect, I sought help from our readers.  The first to answer the call was my longtime friend Anne Beale.  Picking up where Ray and Betty left off, Anne declared that an objective truth to which everyone could agree was, “It’s cold outside.”

Now, I thought this nomination brilliant.  If you don’t know Anne, she lives in South Dakota, where the average low temperature in December is 5 degrees Fahrenheit, the average high only 25.  As it happens, reading Anne’s comment was the first thing I did after getting up at 6:00 a.m., and I was still dressed in the wool sweater I’d worn under the covers during the night – a wool sweater I’d worn over a night shirt, which I’d worn over a tee shirt, which I’d worn over a tank top.  With the help of these four layers, I’d endured a night of record low temperatures here in Virginia, but with the covers off, I was already shivering as I sat down at my desktop to read Anne’s post.  So I had no choice but to agree with her – it was very cold outside.

Then I read the nomination submitted by another longtime friend, Philip McIntyre. Philip nominated an entire slate of candidates.  His description of his nominees – the physical laws of nature – wasn’t quite as pithy as Anne’s, but (always gracious) Philip pointed out that perhaps his post “built on” Anne’s.  You can read Philip’s comment for yourself, but I’d venture the opinion that Philip actually agreed with Anne regarding her nominee: that it was, in fact, cold outside.  One of Philip’s sentences began, “The cold temperature outside right now is…” which strikes me as coming pretty doggone close to agreement. (Philip, I might point out, lives in Buffalo, where the average low in December is 11, and the high, at 31, is still below freezing.)

Now, at that point, I was surprised, but elated.  As best I could tell, (“with three precincts reporting”) there was universal acceptance of an objective truth.  It was, in fact, cold outside.  But then, this morning, as I sat down to record my elation and post “It’s Cold Outside” on the WMBW website’s Home page, I discovered a third nomination.  While the third comment didn’t expressly disagree with Anne – while it wasn’t so contentious, for example, as to say, “Heck no, you fool, it’s hotter ‘n blazes, dammit!” – the writer did write, “Mightn’t the only objective truth be that we do not know what we do not know?”

Definitely food for thought there; I for one was tempted to make a fine breakfast of it, for at least several paragraphs.  But loath to digress, I strove to stay focused on the question at hand – i.e., could everyone agree that “It’s cold outside”?  The new writer’s suggestion that there might be only one objective truth everyone could agree on – and that such uniquely objective truth was neither a physical law of nature nor a statement about the weather – forced me to conclude that the new writer was advancing a position in irreconcilable disagreement with Anne.

I hasten to add that the writer – my brother David – lives in south Georgia, where the average high this time of year is a near-tropical 65.  Well, there you go.  Despite his obvious effort to avoid confrontation with his friends to the north, David, by postulating that he might have put forward the only objective truth, had in a single stroke destroyed our unanimity of belief.  (It was easy to see, in that moment, how the Civil War might have started, and as my longtime friend Ron Beuch has now suggested with his comment – even as I write this post – bias can be very hard to shed.)

We May Be Wrong is a truly nascent phenomenon.*  During our first three weeks of existence, our growth has been phenomenal.  We already have a huge number of readers.  (At least thirty, I’d be willing to bet.)  But even with only four of us weighing in on the question, we appeared unable to agree that “It’s cold outside” was an objective truth which everybody could agree to.

Now, saddened as I was at this setback, I turned to Philip’s nominees – the physical laws of nature.  Searching for the sort of harmony Ray Charles had achieved with Betty Carter, I asked myself, is it possible that we four, at least, could all agree to the objective truth of Philip’s nominees?  I mean, perhaps, in South Dakota, “It’s Cold Outside” is a physical law of nature.   And perhaps “We don’t know what we don’t know” is a physical law of nature in south Georgia.  So maybe Philip’s comment deserved a closer look.  Maybe, if Anne and David already considered their nominations to be physical laws of nature, they already agreed with Philip, implicitly, and in that case, if I could see my way clear to agreement, Philip’s nomination would have agreement from all four of us.  (And maybe the other twenty-six of us, like Betty Carter, would eventually come around?)

First, I was a little concerned that Philip hadn’t nominated any one Law of Nature in particular, or even multiple such laws, but simply a category, “Physical Laws of Nature.”  It’s been a long time since I was in school, and if I ever knew, I’ve forgotten just how many physical laws of nature the experts have determined there are.  In fact, I’m left wondering what, exactly, a Physical Law of Nature is.  But as with the obstacles posed by words, I’ll forgo the temptation to go down that perilous path.  Assume with me, if you will, that we all share a common understanding of what the Laws of Nature are.

I understand that this assumption is not an easy one to make.  In Philip’s comment, he writes, “The problem is, they [the laws of Nature] are so hard to understand.”  Well, I’d sure agree with that.  Relativity?  The space-time continuum?  Quantum mechanics?  They all elude my full understanding, to be sure, and maybe my partial understanding as well.  In fact, even gravity sometimes mystifies me (and not only when I’ve had too much to drink).  But that’s precisely why I wonder about Philip’s statement that, “properly understood,” the physical laws of nature are constant and immutable.  Having agreed that such laws are very hard to understand, I have great difficulty agreeing with anything about what they are when they’re “properly understood,” because I doubt very much that I properly understand them.

But surely I quibble.  And meanwhile, I’m actually more troubled by a different question.  Philip writes that the physical laws of nature are “constant and immutable” in the sense that they “will produce exactly the same result every time in exactly the same set of circumstances.”  I’ve been up all night (well, much of it, anyway) pondering the significance of the italicized words in that sentence.

Now, before I continue, I should acknowledge my own biases.  I personally believe in the value of the scientific method.  As I understand it, scientific “proofs” are all about “reliability,” which I believe is the scientific word for what Philip is talking about.  When the scientist keeps extraneous factors under “control,” and can accurately predict the outcome of an experiment time and time again, always getting the same (predictable, identical) result, the scientist is said to have demonstrated “reliability.”  It’s another word for scientific “proof,” as far as I know.  I think there’s much to be said for the scientific method, as a means of learning new things about the physical world.  So if there’s any confirmation bias at work here, I’m pre-wired to agree with what Philip is saying.

But his qualification, “in exactly the same set of circumstances,” nags at me.  Can something be said to be a “law” at all, much less a “constant and immutable” one, if it all depends on an exact set of circumstances?  Isn’t a “law,” by definition, something that operates across circumstances?  There’s a saying in the (legal) law that you can’t have one rule for Monday and another for Tuesday.  It stands for the proposition that for a law to be a law, it has to apply to varied circumstances.  The trooper who issues a speeding ticket says, “I’m sorry, sir, but that’s the law,” by which he is essentially saying, “it doesn’t matter that you’re late for a meeting; the law is the law.  Circumstances don’t matter.”  Believe me, I know that laws often get riddled with exceptions which are essentially driven by variations in circumstance.  Murder?  >> Guilty!  (Oh, self-defense? >> an exception >> innocent.  But murder!  >> Guilty!  Oh, insanity?  >> an exception >> innocent.)   But in the legal world, I’d venture to say, the exceptions are like little “mini-laws” that live within the more general law, running contrary to it in result, but similar to it in form, in that they apply to all the circumstances they purport to include.  Riddled as they are with exceptions, both the general laws themselves and the little “mini-laws” that deal with exceptions are general principles that cut across variations in circumstance.  So I wonder: if every single variation in circumstance had its own special “law,” would there really be any law at all?   With each thing subject to rules applicable only to it, wouldn’t we have anarchy and lawlessness?

David’s nominee, “We do not know what we do not know,” strikes me as a classic tautology – one of a class of self-evident propositions that also includes “All I can say is what I can say,” “a rose is a rose…” and (importantly) “we do know what we do know.”  As such, rather than being the only objective truth, it seems but one of an infinite number of such truths.  At the point at which each unique thing in the world can claim that it is what it is, that it does what it does, etc., it seems plausible to think we might not have objective truth at all, but the very essence of complete subjectivity.

As Philip appears to acknowledge, Anne’s nominee, “It’s cold outside,” seems to result from a constant and immutable set of laws, in the sense of being scientifically predictable, repeatable, and reliable – as long as you remain “in exactly the same set of circumstances.”  For people in South Dakota, in the month of December, when there are no forest fires raging for miles around, when the sun is at an oblique angle to the hills around Sioux Falls, when none of the moose are wearing overcoats or carrying space heaters, etc., it will always be cold outside.

Last night I finished the book of David Foster Wallace essays, Both Flesh and Not, in which I read Wallace’s delightful essay, “Twenty-Four Word Notes.”  In that essay, Wallace discusses the class of adjectives that he calls “uncomparables,” the first of which is the word “unique.”  Since “unique” means “one of a kind,” he points out that one thing cannot be more “unique” than another; a thing is either unique or it’s not.  Wallace asserts that other uncomparable adjectives include precise, correct, inevitable, and accurate.  “[I]f you really think about them,” he writes, “the core assertions in sentences like, ‘War is becoming increasingly inevitable as Middle East tensions rise,’ [are] nonsense.  If something is inevitable, it is bound to happen; it cannot be bound to happen and then somehow even more bound to happen.”

Philip’s comment uses three key adjectives in describing the physical laws of nature.  He calls them “objective,” “constant” and “immutable.”  I’ll bet that if David Foster Wallace were still alive, he’d agree that “constant” and “immutable” are uncomparables, and perhaps “objective” as well.  If you’re not always constant, then are you really constant at all?  If you’re not always “immutable” – because, on some occasions, you can change – then are you “immutable” at all?  If something is “objective” because it doesn’t depend on one’s individual circumstances, then can it depend on any individual circumstances at all, and still be objective?

It seems to me that the class of tautologies is infinitely large, because everything is what it is, everything does what it does, and none of these subjective “truths” has to apply to anything else.  So it strikes me as pertinent to ask, “Does a truth transcend mere tautology when it applies to anything more than itself?”  And if so, once the gap between two discrete, indivisible units is bridged by a “law” that applies to both, is it now a “law of nature” in any meaningful sense?  Does it become a constant, immutable, objective truth because it applies not just to a single set of circumstances, but to a second set as well?  I wonder whether, to qualify as a constant, immutable “objective truth,” a law would only have to apply to two sets of circumstances, or to ten, or to a hundred.

If the “physical laws of nature” include Einsteinian relativity, then isn’t everything ultimately dependent on point of view (i.e., subjective)?  Well, not the great constant, c, the speed of light, you say?  But as I understand it, the speed of light in a vacuum can never be surpassed provided we’re not talking about dark matter, black holes, or parallel universes, and provided we’ve narrowed our consideration to the post-Big Bang era, which insulates our perspective as surely (it seems to me) as the vast Atlantic Ocean insulated pre-Columbian Europe.  And if scientists admit (as I understand they do) that for time prior to the Big Bang, all bets are off, then how is our understanding of physical laws not dependent on our point of view – i.e., subjective?

So pending a reply from Philip or others, who may yet convince me I’m wrong, I’m not yet prepared to agree that the physical laws of nature can lay claim to being “objective” truth.  The original challenge put to the website was to include not just any old objective truth, but an objective truth everyone could agree to.  Alas, much as I hope for our readership to grow, I fear this website may never appeal to those who live on the other side of the Bang, or in any quadrant of the multiverse, or in the world of dark matter, for that matter.

Oh well.  A day or so ago, when there were only three of us, I was, for however brief a time, able to bask in the comfort of pure harmony, knowing everyone agreed that it’s cold outside.  Today, I’ll close by reporting that it’s a few degrees warmer outside.  And in the game of Hide and Seek in which I fear never coming to know the truth, I think that warmth means I may be getting closer.

_______

* Nascent: “(especially of a process or organization) just coming into existence and beginning to display signs of future potential.” – See https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=nascent

–Joe

Goldilocks and the Case Against Reality

In my last blog, I credited Richard Dawkins with reminding me how human beings are able to see only a narrow band of the electromagnetic spectrum.  As Dawkins put it, “[N]atural selection shaped our brains to survive in a world of large, slow things.”  Would we be better off if we could see not only “visible light” but infrared and ultraviolet as well?  Or, like Goldilocks, do we have no use for chairs that are too large or too small?  Are we better off if we devote our attention to the things that are “just right” for creatures of our own size and needs?

I’m no evolutionary biologist, but I’ve long been fascinated by the anthropocentric idea that evolution first made us the dominant species on earth, and will now ensure we remain at the pinnacle of creation – presumably, because we’re so much more intelligent than any other creature on earth, so that no other species will ever be able to catch up.  Some people seem to believe evolution will ensure that our brains get ever larger and that we’ll ascend the evolutionary ladder ever higher toward omniscience.  The idea that, instead, natural selection has shaped our brains “to survive in a world of large, slow things” – causing us to be blind to smaller and faster things, for our own good – is surely a different idea of evolution altogether.  I’ve been intending to research that question and to blog about what I found.

This morning, my brother David sent me hurtling in that direction faster and farther than I’d imagined possible.  David – who’s been kind enough to join me in starting We May Be Wrong – sent me a link to an article by Amanda Gefter that appeared in Quanta and was reprinted in The Atlantic.  It’s called “The Case Against Reality,” and it’s about the theories of cognitive scientist Donald D. Hoffman.  In the article, Hoffman says:

The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately and thus were more likely to pass on their genes that coded for those more accurate perceptions, so after thousands of generations we can be quite confident that we’re the offspring of those who saw accurately, and so we see accurately. That sounds very plausible. But I think it is utterly false.

The illustration Hoffman proceeds to use, in order to simplify the point, reminded me of the story of Goldilocks.  He asks us to think of a creature that needs water for survival.  Too much of it and the creature will drown; too little and it will die of thirst.  What the creature really needs, for purposes of survival, is simply to know whether something contains a beneficial (medium) amount of water or not.  In Goldilocks terms, that it’s “just right.”

What I’ll call the “Goldilocks factor” strikes me as lying behind our inability to see ultraviolet or infrared light.  We don’t see the extremes of electromagnetic frequencies because we don’t need to, and because having all that extra information would bog down our brains with useless minutiae.  It’s just not efficient for a biological organism to spend its energy dealing with things of no immediate consequence to its survival – and taking the time to do so could even prove fatal.  If Papa Bear’s porridge is so hot as to scald Goldilocks’ tongue, she has no reason to concern herself with whether it’s 300 or 350 degrees.  If she did, she’d succumb to what has been aptly dubbed “paralysis by analysis,” and her tongue would get very burned while she figured it out.

Hoffman compares it to what we see on a desktop interface.  We see icons, not binary code.  “Evolution,” he says, “has shaped us with perceptions that allow us to survive. They guide adaptive behaviors. But part of that involves hiding from us the stuff we don’t need to know. And that’s pretty much all of reality, whatever reality might be.”

I hesitate to further describe Gefter’s article lest it decrease the chance you’ll follow the link and read it for yourselves.  But in a nutshell, Hoffman’s view is that the world presented to us by our perceptions is nothing like reality – or that there is no such thing as objective reality — or that the only realities are our individual perceptions – or – well, doggone it, please read the article for yourself.

http://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/

It’s a beaut.  Thanks for sharing it, Dave.