The Meaning of Large and Small

I continue to have difficulty comprehending the very large and the very small.

Yesterday, thinking about the word “small” itself, I got to wondering what I mean when I call something small.  I wondered how I would phrase a definition of the word, if I were assigned the task of creating one for a dictionary.  For example, I could say “small” means the same as “little.” But what would that add, say, to the understanding of someone who spoke only Chinese, or Martian?  My dictionary in fact defines “little” as “small in size.” Could I define “small” other than by simply using an English synonym for it? If you’re a word nerd like me, you might try doing this yourself. If you do, I’d be interested in hearing what you come up with. 

I think my big American Heritage Dictionary (Houghton-Mifflin) struggled with the same problem. In that dictionary, as noted, “little” is defined as “small in size,” while “small” is defined as “being below the average in size or magnitude.”  Fair enough, I thought, until I considered some other definitions in that same book, where  “size” is defined as “the physical dimension, magnitude or extent” of something, but “magnitude” is defined as “greatness in size or extent,” and “extent” is defined as “the range, magnitude or distance over which a thing extends.” 

Considering all these definitions together,  I imagined my Martian visitor persuaded that abstractions like “small” and “little” mean the same thing, but having no idea what that is. When the words are only defined in terms of each other, how can anyone tell what they really mean?

Though I felt I was going in circles, I kept trying.

“Great,” I learned, is “Very large in size,” while “large” is “of greater than average size or extent.”  So great means large and large means great.  Great! But if I didn’t already have an idea of big and small, where would that get me?

Of course, linguists have long recognized this circularity of language.  The problem isn’t just defining “small” by using a synonym like “little.”  It’s more general than that, and it ultimately comes from the absurdity of trying to define words using other words.  If we want to define what a word means by saying that word A is equal to words B, C, and D, the problem is that no matter how many words we go through, every set of words becomes equivalent to nothing but other sets of words.  B, C and D are defined by E, F, and G, and those by H, I and J, but H, I and J are defined by A, B and C.  Even in a language of 50,000 words, that fixed set of things is limited – a closed loop, explainable only by itself.  Every word, sooner or later, can only be defined by reference to itself or to words that it has helped to define.  And in any such closed system, entropy sets in.

The definitions of “small” and “large” above both  make use of the concept of “average,” which might seem helpful, because “average” is a concept which takes me from the world of words into the world of mathematics.  If small is less than average and large is greater than average, then that should prove helpful – provided I know what “average” means.  But what do I mean by “average”?

My mathematical concept of “average” requires a finite set of numbers to consider.  I can say that the average of two 12’s, one 17 and one 19 is 15, but only because I know how many of each number I have for my calculation.  I’m dealing with a known, fixed, quantifiable set.  I might even be forgiven if I say that the average (in size) of one golf ball, one tennis ball, and one soccer ball is (more or less) a baseball, because, again, I’m dealing with a known set of data.  But what data set — what objects, and how many of them — should I use to compute an average, on my way toward understanding that small is below average, and large is above average?  The average size of all things? If I take the smallest things I know, like quarks, and the largest, like the whole universe, don’t I still need to know how many quarks there are, and how many stars of various sizes, before I can compute an average size of things, and therefore know what it means to be above or below the average of all things, and therefore, inherently large or small?
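
To make that concrete, here is a minimal sketch, in Python, of how completely a computed “average” depends on which things, and how many of each, we decide to count.  Every size and count below is an invented placeholder for illustration, not a measurement.

    # A toy illustration: the "average" is whatever the chosen data set makes it.
    # All sizes (in meters) and counts below are invented placeholders.

    def mean(size_count_pairs):
        """Weighted mean of sizes, given (size_in_meters, how_many) pairs."""
        total = sum(size * count for size, count in size_count_pairs)
        number = sum(count for _, count in size_count_pairs)
        return total / number

    # The known, fixed set from the text: two 12's, one 17, one 19.
    print(mean([(12, 2), (17, 1), (19, 1)]))            # 15.0

    # An "average of all things" is another matter.  Pretend, purely for
    # illustration, that the universe holds some quarks, people, and stars:
    few_quarks  = [(1e-19, 1e3),  (1.7, 7e9), (1.4e9, 1e21)]
    many_quarks = [(1e-19, 1e80), (1.7, 7e9), (1.4e9, 1e21)]
    print(mean(few_quarks))    # roughly 1.4e9 -- the stars dominate
    print(mean(many_quarks))   # roughly 1e-19 -- the sheer count of quarks dominates

The arithmetic is trivial; the choice of data set is everything.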

Meanwhile, in order to count, say, my dictionary toward the average size of all things, do I count it as a single thing, about 14 inches in length and weighing a few pounds, or as a thousand smaller things, called pages, or several billion even smaller things, called molecules?  Is my car just a single car, or is it an engine, a body, a chassis, and four wheels? Obviously, if I count myself as one person of my size, I have a very different impact on the “average” of all things than if I count myself as tens of trillions of cells of far smaller size. With such questions pervasive about every thing and every size, I submit, it is impossible to formulate a data set capable of yielding any meaningful concept of an “average” in the size of all things — yet Houghton Mifflin has no problem saying that small things are things smaller than “average,” and large things larger.

(By the way, I submit that it makes no difference if we think in terms of medians. Using medians, I suspect our calculation would yield something only slightly larger than a quark, and virtually everything else would then be considered very, very big by comparison. And if we used the halfway point between the size of a quark and the universe, we’d get something half the size of the universe, and everything else would be very, very small. Can our feeling that we understand what’s big and what’s small be so dependent on different mathematical ways of thinking about averages?)
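
For what it’s worth, here is the back-of-the-envelope arithmetic behind that aside, in the same sketch style.  The figures are rough assumptions (a quark treated as about 1e-19 meters, the observable universe as about 8.8e26 meters across), not measurements.

    # Rough, assumed figures only.
    quark = 1e-19        # meters -- an illustrative stand-in for the smallest thing
    universe = 8.8e26    # meters -- roughly the observable universe's diameter

    midpoint = (quark + universe) / 2
    print(midpoint)      # about 4.4e26: half the size of the universe, so nearly
                         # everything else is "very, very small" by that yardstick

    # A median is different: it is just the middle item once everything is sorted,
    # so if most of the items counted are quark-sized, the middle item sits near
    # the quark end of the scale, and nearly everything else looks enormous.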

Pulling out that big dictionary again, I wonder, what makes it big?  At first glance, it doesn’t seem nearly as big as my car, yet I call it big while I call my car small.  Surely, I mean that my dictionary is big because it has greater magnitude – more pages, and more words — than other things to which I tend to compare it (roughly speaking, those other things I also call books).  I call my Toyota  small because it has less trunk space and passenger seating than my daughter’s SUV.  Could size, then, be a concept that is relative?  It seems so – but relative to what?

I find this last question intriguing.  I think a book big when I compare it to other books, and a car large (or small) when I compare it to other cars.  That concept of relative size seems easy. But if I think for a second that a star can only be thought large in comparison to other stars, I quickly retreat from my relativistic comfort zone.  Surely  stars are always, and absolutely, larger than books, and surely  books are always, and absolutely,  larger than quarks.  If so, surely there is something about size that is not relative to its own group of similar objects – something absolute which enables me to feel quite strongly  that one group of things is inherently larger than some other group of things.   And so, once again, I’m back to square one, wondering what makes one thing large and another thing small.

In desperation, I consult the dictionary again.  This time, instead of “large” or “small,” I look up the word “big.”  (After all, what could a big dictionary be better at defining?)

“Big” is defined by the folks at Houghton Mifflin as, “Of considerable size, magnitude or extent; large.”  Size, magnitude, extent, large – nothing new here.  Big is large, and large is big.  For a moment, I’m disappointed.  But wait.  (There’s more!)  I look up “Considerable.”  The first definition of “considerable” is “large in amount, extent, or degree.”  (Arghhh!  Large means considerable, and considerable means large.  I feel like I’ve been here before.)  For a moment, I consider looking up the new words “amount” or “degree,” but I decide that effort won’t likely be useful.  Then my eyes fall on the second definition of “considerable.”

“Worthy of consideration.” 

Ah! We’ve left the world of physical dimensions for someplace outside the closed loop of size words. Am I finally on to something?  I look up “worthy.”  I find, “Useful.  The quality that renders something desirable, useful, or valuable.”

I think I’ve found the answer I’ve been looking for.  Something is “considerable” if it is worthy of consideration, and it is worthy of consideration if it is useful.  Size is indeed relative, but relative, primarily, to what I find useful.

I recently watched Season Six of the survival series “Alone.”  (Synopsis: Ten people competing to survive in Ice Age conditions.) In that world, a moose was important, both because, unlike a squirrel,  it could kill you, and because, if you could kill it, it could feed you for a very long time.  The series contestants considered thirty-pound Northern Pike or lake trout more valuable than ten-pound rabbits, which were in turn more valuable as food than even smaller things like mice.  The closer in size something was to the contestants, the more nutrition it brought.  The more “worthy of their consideration” it was. 

The contestants on “Alone” embraced the value of living as our primitive ancestors did, and I find myself reflecting that it was this ancestral way of life that shaped our species’ understanding of words like “big” and “small.”  Pigs and cows and grizzly bears were more important than, say, mosquitoes. As human beings evolved, those who paid most attention to things about their own size — things between, say, the size of spiders and mastodons — survived and reproduced better than those who paid attention to grains of sand or the vastness of the Universe.  I conclude that, as we generally use the terms “small” and “large,” absent a context which suggests a different relative comparison (a car being small compared to other cars), the default meaning is not really relative to some incalculable “average,” but relative to ourselves. That is, smaller or larger than we are.  I myself create my sense of the “average” size of things.  Things smaller than me are small, and things larger than me are large. Things are large or small relative to me. And from an evolutionary perspective, it is the things closest to my own size that are (subjectively) important to me.

But are pigs and grizzly bears really more important than mosquitoes, objectively?  Exploding supernovae and supermassive black holes are not only extremely large.  Astronomers and cosmologists now tell us that if it weren’t for them, we wouldn’t exist, as they create the very elements from which we’re made.  Those who study life’s origins tell me all complex forms of life began when bacteria became essential parts of our cells, so we wouldn’t exist were it not for bacteria.  And the importance of bacteria is not just historical.  If, today, things like plankton and bacteria stopped being available as food for larger things, moving up the food chain, we’d have nothing to feed on ourselves.  And all the time, quantum physicists remind us that without things as small as quarks, we wouldn’t exist either.

So it isn’t really true that lions, tigers and bears are most important to my existence.  Nor were they, in fact, most important to our ancestors’ existence.  From an evolutionary perspective, we succeeded by paying attention to things our own size, not because such things were more important to us, but because we could actually do something about them if we paid attention to them.  Evolution proved that paying attention to them was useful to our survival.

But if the issue is usefulness to me – whether I can put my understanding of something to use, to help me eat it or to keep me safe from it – which should I consider more worthy of consideration, more considerable in size, to my current life in the 21st century – a black hole, or a virus?  If the answer is that I can do more with my understanding of viruses than I can with my understanding of black holes, why do I think a black hole more worthy of my consideration – more considerable in magnitude – bigger – than a virus?

Our notions of smallness and bigness come from a time in our past when we could not deal effectively with things far smaller or larger than ourselves, a time when things our own size – the moose, the cow and grizzly bear – were most worthy of our consideration.  We could not concern ourselves in those days with virology or pandemics, with things as small as molecules of CO2 or as large as ozone layers or the acidity of the oceans. Thinking about viruses rather than grizzly bears would have been fatal in those days. But such things, both the very large and the very small, are beginning to enter our sphere of influence.  As science continues to broaden our understanding of the world, our ability to prosper (or not) in the face of things we previously thought too large, or too small, to matter, changes. Is it time, now, to revise our thinking about the meaning of words like “large” and “small”?

The Meaning of Meaning

Years ago, my brothers and I started debating the existence of absolute truth.  My brothers defended its existence and knowability.  I questioned its knowability, if not its very existence.   After decades retracing the same ground, our dialogue began to seem less like a path to enlightenment than a rut.  My brothers still believed in absolute, objective truth, and that it’s possible to know at least some of it, while I stuck to my subjectivist guns.

My subjectivism included the matter of language.  I see words as arbitrary patterns of sound waves without inherent meaning, which is to say, lacking any meaning until two or more people reach agreement (or at least think they’ve reached agreement) on what idea those sound waves represent.  The word “fruit” is not inherent in apples or oranges.  Not only the sound f-r-u-i-t but the very concept of fruit exists only in the mind.  A “fruit” is not a real thing, but a category, a label for an idea.  And ideas, as we all know, exist only in the mind. 

Having agreed that early ancestors of McIntosh and Granny Smith had enough in common to be called “apples,” and that the ancestors of navels and clementines had enough in common to be called “oranges,” we then went further and called them both “fruit.”  Slicing and dicing with our verbal Ginsu knives, we label some fruit as “citrus.” We group fruit with legumes and call them both plants.  We add ourselves and elephants as mammals, then add plants and viruses and call us all “living things.” All the while, scientists debate the very idea of what living things are, including and excluding actual things from what is, I maintain, just a concept.  Importantly, the things themselves are not affected by what the scientists call them.  A rose remains a rose, by whatever name we call it.

And so language, I say, remains subjective.  We attempt to group and classify real things by using conceptual labels.  We distinguish between a gallop and a trot, but we ignore the difference between the way I “walk” and the way a thoroughbred horse does, or a camel or a duck.  Arbitrarily, subjectively, we call them all the same thing: “walk.”  Why not distinguish between a walk and a shawk and a mawk?  It’s all very arbitrary.  What constitutes a “walk” is obviously an idea – and ideas exist only in the mind.

Comfortable in my subjectivist philosophy of language, I recently came across the late Hilary Putnam, former Harvard professor and president of the American Philosophical Association.  Putnam famously claimed that “meaning ain’t just in the head.”  In his papers Meaning and Reference (1973) and The Meaning of Meaning (1975), he used a thought experiment to demonstrate that the meanings of terms are affected by factors outside the mind.

Essentially, Putnam did this by asking us to imagine a world that is a perfect twin of Earth – that is, in every way but one.  The only exception is that its lakes, rivers, and oceans are filled not with H2O but with XYZ.  But everything else is identical, including people, and their tongues, and their languages – so that both Earth’s Frederick and Twin-Earth’s Froderick use the identical word “water” to refer to the stuff that fills the oceans on their respective planets.  Since Frederick and Froderick are physically indistinguishable, and since their words “water” refer to different substances and so have different meanings, those meanings cannot be determined solely by what is in their heads.

So said Putnam.

The idea that meanings are rooted in real things, not just in subjective minds, became known as “semantic externalism.” It was credited with bringing about an “anti-subjectivist revolution” in philosophy, a revolution that threw into question the very “truth” of subjective experience.[1]

Yikes!  Was I wrong yet again?  Did I have to rethink my whole philosophy of language?  Did I have to concede to my brothers that there is such a thing as objectivity, at least in the meaning of words?

Not so fast.

Putnam’s Twin Earth thought experiment had me worried.  But at the end of the day, I decided it suffers from the common logical fallacy that its conclusion is contained in its premise.  The real question, I believe, boils down to one that Putnam may have had in mind when he titled one of his papers The Meaning of Meaning.

If language is as subjective as I suppose, and if words can mean different things to different people, as I believe, who’s to say what a word really means?  I don’t believe there’s an objective answer (perhaps Dr. Putnam did), but I think it may come down to what we mean by the word “meaning.” When faced with such questions, I’ve often sought the judgment of etymology, the history of words. I find it instructive to retrace the way words (and their meanings) change over time. And so I set out to unearth the etymological path by which the word “meaning” came to have meaning.

According to my research, the word is related to the Greek and Latin root men– (“to think”) from which English words like mental and mentor have derived.  It came into Old English as the verb maenan, meaning to state one’s intention, to intend, to have something in mind.  And much later, the verb “to mean” led to formation of the verbal noun, “meaning.”

From an etymological perspective, I would argue that meaning is therefore subjective, by definition.  If to “mean” something means to “have it in mind,” then there cannot be meaning independent of someone’s mind.  Definitionally, it is the idea formulated in the mind.  The person whose tongue pronounces the word’s sound is trying to convey the meaning in her mind.  And when the listener who hears the sound uses it to form an idea in her mind, “meaning” happens again.  To “mean” something is, always, to have an idea in mind.

I find it interesting to imagine the day, within the past few hundred years, on which two people were watching a meteor shower, or a lightning storm, or a two-headed snake – some occurrence that struck them as an omen of sorts – and one of them first asked the question, “What does it mean?”

It’s a question we’ve all asked at some point – if not about an omen, then about a symbol, a gesture, or some other mindless thing. The question has become an accepted expression in modern English.  But what a momentous event, the first time it was asked!  Here we had a word – to “mean” something – which (at the time) meant that a speaker had some concept “in mind” and “intended” to convey that concept to another.  That is, as then used, the word clearly referred to a subjective concept.  You’d long been able to ask what a person meant, intended, or “had in mind.” But when the question was first asked, “what does it mean?” referring to a lightning bolt, an earthquake, or a flood, the one asking the question was implicitly asking another, broader question – whether, perhaps, the “it” – the burning bush, the flood, the electrical discharge – could have “something in mind.” 

Alternatively, they were asking if the thing or event had been endowed with a meaning by virtue of having been in the “mind” of some superconscious deity that had caused the event.  If the “meaning” was that which had been in the mind of such a deity, it was arguably still subjective, i.e., still dependent on the idea that existed in a particular mind.  But if the meaning had originated in the thing or event itself – in the rock, or the flame, or the electrical discharge – then the conclusion would have to be that “meaning” can exist independent of a mind.

At any rate, it seems to me that whoever first asked the question, “What does it mean,” was expanding the very idea of “meaning.” Until that moment, to “mean” something meant to have it in mind.  To think it.  Until that moment, as I see it, everyone understood that “meaning” is entirely subjective.  To ask what “it” means was a misuse of the word.

And so, on the basis of etymology, I stand my ground.  “Meanings,” by definition, are ideas that form in the mind.  The idea of fruit.  The idea of walking.  Even Mr. Putnam’s theory of semantic externalism – that meaning “ain’t just in the head” – is an idea that, like all ideas, is just in the head.


[1] Davidson, Donald. Subjective, Intersubjective, Objective. Oxford University Press, 2001.

Primates and Praise

Early on, in the Christian churches, bishops and archbishops came to be called “primates.” The word was not intended to evoke images of orangutans or macaques.  (It would be another five hundred years before Carl Linnaeus classified Homo sapiens as a member of that order.) Rather, even in Latin, the word for “first” had been used to mean a superior, a leader, or a most excellent person, and the Christians had no problem designating their spiritual leaders with the term as well.

There are many things I like about my Christian heritage.  If Christians today preached what I believe the historical Jesus preached, I’d readily identify as a Christian.  But as I see it, modern Christianity gets Jesus wrong in a number of respects. 

When I was only eight, I was invited to spend the weekend in the countryside with a friend.  Since I’d have to miss Sunday mass, I made a phone call to ask for permission to do so.  My friend’s family got quite a laugh when, after the call, they discovered I hadn’t been calling home, but the church rectory. The “Father” they’d heard me addressing was not my biological father, but the parish priest.

I had already been taught to call all priests “Father,” and even when I talk to priests today, I use the term of respect I was taught as a child.

But it wasn’t long after the parish priest told me it would be a sin to miss Mass  that I came across Matthew 23:9, where Jesus is said to have told his followers “to call no man Father, for one is your Father, which is in Heaven.”  Given that scripture, I never understood how Christians developed the practice of calling their priests “Father” – especially in an age when fathers demanded so much respect – except, of course, that the priests had taught them to.

It’s easier for me to understand why hierarchies arose as church memberships and treasuries grew – and why words like “bishop” (from the Greek epi-skopos, one who watches over) came into use.  And it seems almost inevitable that as such growth continued, layers of rank would have to be added, for practical, administrative reasons.  So by the time the Bishops of Canterbury, York, Armagh and St. Andrews had become powerful, it isn’t entirely surprising that these leaders came to be called “primates.”  But the primates were always first among “fathers,” and I still had a hard time squaring that with Matthew 23:9.

Nor was it that particular scripture alone.  According to Matthew 12:50, Jesus instructed his followers, “Whosoever shall do the will of my Father, which is in Heaven, the same is my brother, and sister, and mother.”  Jesus preached, “Blessed are the meek; for they shall inherit the earth” (Matt. 5:5) and “Whosoever therefore shall humble himself as this little child, the same is greatest in the kingdom of heaven” (Matt. 18:4). I read of a Jesus who washed the feet of his disciples, of a Jesus who frequently dismissed those who treated him with special reverence, of a Jesus who said to a man who addressed him as Good Master, “Why callest thou me good? There is no one good but one, that is God” (Matt. 19:17). I read of a Jesus who, when asked if he was King, replied only, “You said it” (Matt. 27:11), as if to disavow the title himself.  In fact, Jesus taught, in the Sermon on the Mount, that his followers should pray to the Father (for His was the power and the glory). And, if we believe Matthew 7:22-23, Jesus chastised those who would honor him, warning, “Many will say to me in that day, ‘Lord, Lord, have we not prophesied in thy name? and in thy name have cast out devils? And in thy name done many wonderful works?’ And then will I profess to them, I never knew you: depart from me, ye that work iniquity.”

One reason I’ve been to church only a few times in the last fifty years is my lack of comfort with heaping praise on this man who fought so hard to avoid it.  Last month, I went to a Catholic mass for the first time in many years.  One of the first hymns sung was “To Jesus Christ, Our Sovereign King.”

“To Jesus Christ, our sovereign king, who is the world’s salvation, all praise and homage do we bring, and thanks, and adoration. Christ Jesus, victor!  Christ Jesus, Ruler! Christ Jesus, Lord and Redeemer! Your reign extend, O King benign, to every land and nation; for in your kingdom, Lord divine, alone we find salvation.  To you and to your church, great King, we pledge our hearts’ oblation – until, before your throne, we sing in endless jubilation.”

Homage? Kingdom?  Reign? Throne?  I was taught the theology behind this hymn.  But for me, the theology fails to justify adoration of a man who shunned adoration, who deflected all praise to God, his father in heaven.  To my way of thinking, Jesus would not have approved of such a hymn.

Meanwhile, whatever may be said in defense of praising Jesus, I have even greater trouble with adoration of mankind.

Consider this passage from Pope John Paul II’s Gospel of Life, Evangelium Vitae.  I can’t read it without thinking of Jesus’ teaching that the meek shall be blessed.

52. Man, as the living image of God, is willed by his Creator to be ruler and lord. Saint Gregory of Nyssa writes that “God made man capable of carrying out his role as king of the earth … Man was created in the image of the One who governs the universe. Everything demonstrates that from the beginning man’s nature was marked by royalty… Man is a king. Created to exercise dominion over the world, he was given a likeness to the king of the universe; he is the living image who participates by his dignity in the perfection of the divine archetype.”

I hope that my thoughts are not taken as an attack upon those who sing the hymn, or upon Pope John Paul II for his thoughts about mankind.  I mean no disrespect, and God knows, I may be wrong.  But as Christians prepare this month to celebrate Jesus and his birth, I’m moved to point out my inability to buy into these aspects of modern Christianity. As I like to think of it, “I prefer the original.”  Father, Primate, Pope, Homo sapiens sapiens.  Clearly, we are prone to bestow honor on ourselves.  I don’t know whether we inherited this tendency from other primates or not, but the Jesus I believe in warned us against it.

On What’s Right

I think it’s time for someone to speak out in favor of the right wing.  I’m talking about true conservatism.  I’m talking about grammar.

My mother used to have fits when one of her sons said, “I’m done,” meaning that he’d finished eating.

“You sound as if you have no breeding,” she’d say.  “‘I’m done’ means you’ve been left in the oven long enough to be well cooked.  What you mean to say is, ‘I’m finished.’”

This is false conservatism.  Mom believed it simply because it was what her mother had taught her.  (In fact, she boasted that she believed everything her mother had taught her.)  I say, that’s blind faith in the old way, just because it’s the old way.

True conservatism, I say, is more principled.  And that’s why “I’m finished” is no more correct than “I’m done.” 

“I’m finished” is what Al Capone said when Eliot Ness hauled him off to the federal pen.  It’s what Wile E. Coyote thought every time he was outwitted by the Road Runner. The correctness of the idea doesn’t depend on the main verb – “to do” being essentially equivalent to “to finish” – but on the choice between the two auxiliary verbs, “have” versus “am.”  Specifically, the first is active, the second passive.

There’s reason behind such principle.  If you “have done” your work, you “have finished” it.  (Active. You’re talking about what you have done to the work.)

Whereas, if the work “is” done, then it “is” finished.  (Passive. You’re talking about what has happened to the work.)

I told Mom a thousand times that the correct way to disavow the intention of further eating is to say “I have finished” or “I have done.”  It got me nowhere. It wasn’t what her mother had taught her.

I also pointed out to Mom that language changes over time.  (If it didn’t, we’d still be speaking Anglo-Saxon and Latin.  Even further back, the Tower of Babel would still be standing.)   But doggone it, recognizing that language changes over time doesn’t make me a liberal. I recognize the inevitability of change.  I just insist that conservatism, at its best, is not tradition for the sake of tradition, any more than it’s just rich people being greedy.  When there are good reasons for things to mean what they mean, then conservatism is more than greed, more than blind obedience to tradition.  It’s about being right!

That’s why, despite my liberal education, I’m comfortable  on the grammatical right wing.  That’s why I go into spasms when I hear people give the now prevalent answer to the question, “Do you mind?” 

The question essentially means: “Do you object?”  Yet people these days almost exclusively answer the question the wrong way, not just on the street, but even in otherwise highbrow movies and books:

“Do you mind if I sit here?”

“Sure.  Go ahead,” they say!

“Would you mind if I step on your toes?”

“Sure.  Go ahead.”

“Do you mind if I take all your money?”

“Yes, please do.”

What are these people saying?  Do they want their money to be taken?  People, please!  What they mean to say is:

“Do you mind if I sit here?”

“No. (I don’t mind at all.) Go right ahead.”

“Would you mind if I step on your toes?”

“Yes.  I certainly would.  It would hurt!”

“Do you mind if I take all your money?”

“Heck yes!  Someone call a cop!”

True conservatives believe that some things are right, and others wrong, not because their mothers told them so, but because there are good reasons things are the way they are.

(That’s why they call them “right.”)

My granddaughter refers to having done things “on accident.”  When her mother doesn’t flinch, I’m not surprised, because her mother was the one who first made me flinch upon presenting me with the offensive phrase some thirty years ago.  But after thirty years of arguing unsuccessfully that “on accident” is wrong, must I now watch the abomination get passed on to yet another generation? 

I decided I should consult authority.  (After all, “authority” is the preferred weapon of liberals and conservatives alike, even if choice of authority varies.) And so I went to the indisputable source of all modern authority, the Internet, and googling ‘by accident’ versus ‘on purpose,’ I came across a near unanimity of authority.  With nary an exception, these sites treated the problem as if nothing but the opinions of the masses mattered.

To a website, they agreed that “by accident” is correct in written English, and “on accident” incorrect, because “on accident” is hardly ever seen in ‘serious’ writing.  (‘Serious’ was conveniently not defined.) But when it comes to spoken English, all the authorities were on the infinitely tolerant left wing, agreeing that “on accident” has overtaken “by accident” among younger Americans.  Therefore, they conclude, when it comes to correctness, “it all depends on what sounds right to you.”

Egads! Even the esteemed Chicago Manual of Style seems to treat the question as a matter of popularity!

Hogwash, I say!  Someone please call the Queen! Rightness should remain rightness for reasons other than popularity! 

The authorities agree that “on accident” appears to have arisen by analogy to ‘on purpose.’  But uniformly, these authorities fail to address WHY there is a difference.  They fail to appreciate why things should always happen “on” purpose, but “by” accident.  

As with “I’m done,”  the problem is a failure to account for agency.  A failure to distinguish between the thing that is doing and the thing that’s getting done.

“By” is a preposition that speaks directly to agency.  If a ball was hit “by” you, then you were the one that hit the ball.  In contrast, if the ball hit you in the face while you weren’t looking, it was surely thrown by someone else – which is to say (from your perspective) by accident.  “By accident” means that whatever happened was done by someone, or something – some agent of causation – other than you yourself having willed it to be so.

“On” has many meanings, but one of them is to express alignment with purpose.  We say that the arrow we shot was “on” target if we shot it where we wanted to.  We do something “on” principle when we do it in accordance with our guiding philosophies.  We do something “on” faith when it is in alignment with what we hope to be true.  We do something “on” a hunch if our action is in alignment with our guess.  A rest stop is convenient if it is “on” the route we’re traveling.  This use of “on” is all about staying focused “on” our goal, remaining “on” our intended path.

So when we do something designed to achieve an intended result, and we do it successfully, it only makes sense to say we did it “on” purpose, i.e., in alignment with our purpose.  But when we fail – when, despite our own plans, some alien force intervenes, when some freak happening produces an unintended consequence – it only makes sense to say that it happened “by” the influence of something else – i.e., “by” accident. 

This is not by accident.  At least when it comes to language, being right is all about being on the right. And if anything else makes sense to you, you can be sure it’s a part of whatever’s left.

Right?

Loaded Words

My starting place today is the word “ostensible.”

I came across it recently in a newspaper article here in Richmond – not an Op-Ed piece, but a “straight news” report about current events.  The article was about a public meeting.  In inviting the public to attend, the meeting’s sponsor had stated its purpose.  I count myself among the strong critics of the outcome of the meeting.  But to my way of thinking, while the outcome deserved criticism, the announced purpose of the meeting had been bona fide.  To my knowledge, there was no reason to question the honesty of the announced purpose, and the article itself certainly offered none.  Yet the news report had referred to the “ostensible” purpose of the meeting, as if to suggest the negative outcome had been the sponsor’s intent.

“Ostensible” is one of those words lawyers use when writing legal briefs, which are intended to be the most one-sided (i.e., biased) types of writing known to man.  In their legal briefs, lawyers intentionally use words with multiple shades of meaning, some neutral and some “loaded.”   Dictionary definitions of “ostensible” include words like “apparent,” “surface,” “seeming,” and “pretended” – but there’s a difference between “apparent” and “pretended.”  If a lawyer writes that the weather was apparently pleasant the day an accident occurred, there’s no reason to think the word means anything but “apparent.”   But if she writes that the plaintiff’s injuries were “ostensibly” caused by the accident (though they were only noticed after the visit to her lawyer), well, everyone knows that “ostensible” means “pretended.”  Faked.  Using a word that could simply mean “apparent” becomes a subtle way of calling the plaintiff a good-for-nothing, bald-faced liar and all-around scoundrel.

Closing statements to a jury, like advocacy in legal writing, are full of such loaded words – words the lawyer who uses them can defend as objectively accurate on the basis of the facts proven at trial, but which, tucked into their underbellies, carry belittlement, accusation, or condemnation.  (If the reference is to one’s own client or witness, of course, the words are loaded with suggestions of reliability, honesty and wholesome character.)

When commercial advertisements boast about revolutionary new products that will make you feel young again and are “free” for the first hundred callers, most people recognize the hype for what it is.  But lawyers addressing judges and juries have to persuade their target audiences more subtly, which is to say, while seeming to be neutral.  Words like “ostensible” fit their needs well.   And that, I believe, is where they have a great deal in common with news reporters.

The field in which I spent most of my life was labor and employment law, a field which is practically all about bias.  Decades in that field convinced me that the vast majority of bias in the world – I mean well upwards of 95% – is unconscious.  Hardly anyone thinks they are biased.  A person who acknowledges, say, being anti-Semitic, doesn’t think he’s biased – he thinks Jews deserve his scorn.  Members of the KKK generally think blacks, Jews and Catholics are lesser beings, or dangerous, or whatever – in their own minds, their thinking on the matter is clear-headed and objective – anything but biased.  And obviously, liberals don’t think they’re biased against conservatives, nor do Republicans think they’re biased against Democrats.

I defended hundreds of people during my legal career who were accused of bias of some sort, and every one of them expressed sincere outrage that anyone could accuse them of being biased.  I see precisely the same reaction when members of the news media get attacked for their perceived bias.    Indignation!  Sincere outrage!  Journalists pride themselves on not being biased, period.

So in considering media bias, I don’t think in terms of rooting out the journalistic equivalents of Klaus Barbie or Adolf Eichmann.   Sure, there are a few hack journalists who purposefully express outrageous opinions in order to appeal to only one side of the political spectrum while inflaming the passions of the other.  But there’s far more unconscious bias in the media.  It appears on all sides of the various political spectra.  Indeed, I’d like to know how it could be any other way, bias being a natural product of culture.  (Talk about loaded words  – “culture” is a good thing, “worldview” neutral, and “bias” bad.  But for our purposes, what’s the difference?)

Even in the face of Herculean efforts to escape its influence, I doubt it’s ever possible to be bias-free, to escape the influence of one’s culture, or to have no world-view at all.  I’m waiting for some reporter to answer an accusation of bias with, “You’re right, of course, but I’m trying really hard to change.”  (Now that would earn my respect for objective reporting.)

Where am I headed with all this?  I have a proposal.  In this day of fake news and counter-accusations of same, we now have a plethora of “fact-checking” sites.  Snopes.  PolitiFact.  Etc.  My excitement for them quickly wore off when, time and again, the analyses supplied by the fact checkers struck me as containing the same sort of unconscious bias I see in the media.  A politician claims that “taxes have been rising lately.”  Is it true?  The “fact-checkers” interpret the meaning of “taxes” to mean federal income taxes, interpret “lately” to mean the past five years, decide to look at grosses, or averages, or families or individuals, and based on all those interpretations and assumptions, declare that the politician’s assertion that “taxes have been rising lately” is “true” or “false” as if they’re God handing tablets to Moses.  Personally, I don’t mind when a politician phrases things to support his or her position – as I see it, they’re supposed to advocate for what they believe.  But when self-appointed guardians of objective truth betray their biases, my blood pressure starts to rise.

After avoiding the news for ten years, I decided some months back to pay regular attention again.  I subscribed to the newspaper and I decided to record the evening news on my DVR.  I tried BBC, Fox, the three major networks, and others.  It was no surprise to me that Fox was different from CNN – even with my head in the sand all those years, I’d heard about their reputations – but of greater surprise to me was the difference between the evening news on ABC and CBS.  For as long as I compared them, I never noticed a single instance in which either “choice of story” or “facts reported” caused me to conclude that one was more accurate or objective than the other.  (As far as I know, PolitiFact would have concluded that everything both networks said was true.)  But I noticed a marked difference in the use of “loaded” words, even down to the subtlety of calling a three-day-old story “breaking news.”  I wish I’d kept a notepad at hand to record examples, but night after night, story after story, I found one network using language I’d have been proud to use as a lawyer advocating a particular point of view, trying to arouse emotions through word choices, while the other did not.

So, can anything be done about media bias?   Back when I was practicing law, I aimed for enough ostensible accuracy to come across as objective while intentionally loading my arguments with as much advocacy (bias) as I could muster.  I exploited language to support my cause.  My sense is that news reporters do very much the same thing as lawyers, albeit (in most cases) unintentionally.  And my question is this: Can we not investigate this phenomenon more scientifically?

I’ve always thought that the way we speak is one of the most reliable windows into how we think.  As I understand it, textual criticism includes a sub-discipline of linguistics that analyzes the subtleties of word usage and style in order to do things like identify authors – to show, for example, that the Book of Genesis was written by multiple people with different writing styles.  Hollywood, at least, depicts experts who analyze ransom notes and diaries to generate profiles of serial killers, based on patterns of word usage.  I propose that in some school of journalism, linguistics or political science, there are scholars who might explore the feasibility of doing the same sort of textual criticism of news coverage.  Not to pronounce whether a particular story was accurate or not, but to come up with a way of assessing the frequency of “loaded” words or phrases, or other subtleties of language — patterns or other characteristics of speech which suggest a tendency to “color” stories.

A panel of philologists might create a list of a thousand words like “ostensible” which have a neutral meaning but are loaded with pejorative connotations.  They might create another list of words with both neutral and positive connotations.  A third list might contain words with no “load” at all.  With modern technology, it ought to be easy to scan every news report written in the New York Times for the past year, or to transcribe every report on Fox or BBC World News Tonight, getting a huge sampling of word usage, and a resulting take on how much the reporter, or network, or other news source, injects positive or negative connotation into their stories.

Or, say, scan a thousand articles from Newspaper X dealing with indicted or scandalized politicians.  Group them according to the political affiliation of the accused.  Then count how frequently the political affiliation is mentioned.  If scandalized Democrats are identified as Democrats three times in every five hundred words, while scandalized Republicans are identified as Republicans only once, that might be pretty good evidence the newspaper has a Republican slant.
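
And a sketch of that second tally, with invented one-line “articles” standing in for the real archive:

    # A toy version of the affiliation tally: mentions of a party name per five
    # hundred words, computed separately for each pile of scandal stories.
    import re

    def mentions_per_500_words(articles, party_word):
        total_words = total_mentions = 0
        for text in articles:
            words = re.findall(r"[A-Za-z'-]+", text)
            total_words += len(words)
            total_mentions += sum(1 for w in words if w.lower() == party_word.lower())
        return 500 * total_mentions / max(total_words, 1)

    # Invented examples, grouped by the accused's party:
    democrat_stories = ["The Democrat senator, a Democrat since taking office, resigned."]
    republican_stories = ["The senator resigned after the indictment was announced."]

    print(mentions_per_500_words(democrat_stories, "Democrat"))      # a high rate
    print(mentions_per_500_words(republican_stories, "Republican"))  # zero here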

I’d find it very telling to see that one ostensibly objective news source used “loaded” words three times as often as another.  Or that words like “ostensible” are used to describe politicians in one party more than those in another.  I’m sure my examples suffer from the fact that I’m not a professional linguist, but I feel sure we have the technology and scholarship to engage in a more scientific study of bias in news reporting.  I’d find it a far more objective method of assessing media bias than any I’ve heard about elsewhere.

— Joe

Cretins

In 1595, the early English explorer and colonist John Davys wrote in The Worlde’s Hydrographical Discription (Thomas Dawson, London):

“There is no doubt that we of England are this saved people, by the eternal and infallible presence of the Lord predestined to be sent into these Gentiles in the sea, to those Isles and famous Kingdoms, there to preach the peace of the Lord; for are not we only set on Mount Zion to give light to all the rest of the world? *** By whom then shall the truth be preached, but by them unto whom the truth shall be revealed?”

In the 1850’s, the Reverend Augustus Longstreet – president of a leading American university and minister of the Lord – wrote to his son-in-law regarding the unreasonable behavior of his slaves:

“The creatures persistently refuse to live together as man and wife, even after I have mated them with all the wisdom I possess, and built them such desirable homes.”

About the same time, the famous case of the slave, Dred Scott, wound its way up to the Supreme Court of the United States.  On its way, the Supreme Court of the state of Missouri found that one of the key issues before it was whether African slavery really did exist for the benefit of the slaves.

Of course, we’ve come a long way since then.  In 1997, Robert Hendrickson wrote, in The Facts on File Encyclopedia of Word and Phrase Origins (Checkmark Books):

“cretin.  Our pejorative cretin, for “an idiot,” began as a kindly word.  In the Middle Ages many deformed people with a low mentality lived in the Alpine regions, their condition resulting from a thyroid condition now known as myxedema, which was possibly caused by a deficiency of iodine in their drinking water.  These unfortunates were called Chrétiens, “Christians,” by the Swiss, because the word distinguished human beings like these people from brutes, and they believed these childlike innocents were incapable of actual sin.  But the kindly word went into French as cretin, meaning “idiot,” and retained the same meaning when it passed into English.”

It leaves me wondering: if our best navigators, university presidents, and supreme courts can be such cretins, where does that leave the rest of us?

Honk if you love word origins.

– Joe