A World Without Trust

Imagine two worlds. In one, everyone keeps their promises, honors not only the letter but the spirit of agreements, and is broadly reliable and trustworthy. In the other, promises are just empty words, and people are as opportunistic and as spiteful as they impulsively desire to be in a given moment. Which people, from which world, do you think are more capable of accomplishing anything?

In the very first episode of the popular Netflix series House of Cards, the main character, Frank Underwood, has a promise made to him broken. Underwood marvels at it because he didn’t think the people who made it were capable of breaking it—he admires it, in a way, as though breaking a promise were something hard and keeping it something easy. This kind of cheap imitation of Nietzschean cynicism is all about having a will to power which allows one to overcome conventional morality.

But the real accomplishment is not overcoming your own trustworthiness, but the fact that such a thing has enough weight that even cynics feel they must “overcome” it. There are many parts of the world where trust and trustworthiness are not the default outside of the close circle of family or clan. A society in which relative outsiders and strangers are able to make promises to each other and trust they will be kept is a tremendous accomplishment.

Whose accomplishment is it? Hard to say. But it certainly isn’t the accomplishment of any cynical would-be despot. If anyone deserves the credit, it is the countless millions of ordinary people, across many generations, who have strived to live decently and treat each other fairly.

The utter hell of a trustless world cannot really exist for long in this one. But we should thank those decent people who came before us, for putting as much distance between us and it as we’ve got.

Speaking With Certainty

A while back, after my propertarian piece (which isn’t much at all about property), someone challenged me to write a follow-up piece on ancient religious views on property, making sure to account for slavery. I immediately agreed to the challenge, but I was paralyzed.

The Judeo-Christian writing called Leviticus, which is a part of the traditional text known as the Pentateuch, or the Five Scrolls of Moses, or just “Moses,” speaks at length concerning property distribution, property rights, and compensation for irregularities and violations. It is a writing which depicts a vigorous society in motion, a book which Jesus summarizes with the well-known apothegm, “Love your neighbor as yourself.” The very word “neighbor” evokes property and other notions of personal sovereignty.

Nevertheless, it is practically impossible for me to write a general piece touching on Leviticus or Levitical principles because, with respect to its provenance, I am neither a minimalist nor a maximalist, nor am I some sort of milquetoast via media advocate. I happen to take a scholarly, evidence-based approach to the provenance of this book, which is a standard approach, but one whose conclusions run contrary to what is taught in universities both secular and religious or parochial.

In secular universities, and those religious universities whose worldview is formed by Nineteenth Century Continental philosophy, the minimalist Documentary Hypothesis is still taught as de rigueur, a hypothesis which posits that the books of Moses, especially the Levitical material, were fabricated by a power-mongering priestly caste during the Judahite exile in Babylon in the Sixth Century BCE. I am under the impression that this hypothesis is presented as ironclad secular scholarship, i.e., the truth, when it is essentially the telos of the Sacramentarian movement which came to dominate Enlightenment Era religiosity.

Religious fundamentalism, deeply offended by this radical minimalism, developed a response which became reflexively maximalist, in defiance of all evidence (even internal evidence) to the contrary, namely that Moses wrote every jot and tittle of his five scrolls somewhere between 1550 BCE and 1440 BCE, and never shall a true Christian vary from that view lest he deny the efficacy of the Word of God.

In public discourse, there is no middle ground. One can write classroom papers and discuss privately a more nuanced view, one which is based on the evidence. OK, let’s be fair: I would say that, now wouldn’t I? Here: a nuanced view which assembles the evidence guided by a particular view of history, scholarship, science, and philosophy. So I begin again: there is no middle ground in public discourse.

I presented a paper at a regional meeting of the Society of Biblical Literature once upon a time, a radical deconstructive view of methodology with respect to the academic discipline known as “biblical studies.” In it I noted that the disciplines of the pure sciences, linguistics, philosophy, and history had all evolved drastically over the past two hundred years, while biblical studies still labored under the precepts of the Eighteenth and Nineteenth Centuries. I actually stated that, if this were any other discipline aside from the contentious religious discipline it is, our colleagues in every other department in all universities across the world would remove us with extreme prejudice.

A professor from Harvard was in attendance, so, naturally, I was intimidated. I mean, I still had aspirations to one day maybe hopefully if-miracles-come-to-pass apply for a position at Harvard or another of the Ivy League schools, or even the University of Michigan, so I really wanted to come off as bright and snappy. He asked, “What do you do with history?”

Without thinking, I blurted out, “I just ignore it,” which is true in one sense, because of my deep respect for the science of linguistics, but not quite right in another, out of a deep respect for post-modern philosophical currents. What I was getting at was the primary importance of community in interpretation, but I didn’t say as much, so the entire room burst into laughter. I tried salvaging my point, but you know how these things go.

Then I heard a fellow in the front row mutter, “Why do we have to bring ourselves into resonance with other academic disciplines [a phrase from my paper] when we already know what history is?” [his emphasis].

Well, that was the entire point of my paper, which had failed to convince the Harvard University types: we don’t really know history. I did not radicalize my view so far as to say that we construct history wholesale, but it is certainly true that we arrange data within a certain framework until we are pleased with the outcome.

A healthy skepticism of the self is thereby necessary. What am I up to? Can I identify my biases? What are my external influences? Why is this emotionally significant to me? Moreover, when it comes to historical realities (for lack of a better term), an academic humility is very helpful, namely the recognition that we don’t know very much at all, and we know that we don’t know very much at all because we don’t have much physical evidence, and we are, most assuredly, arranging evidence as we were taught, not as is obvious. We make a convincing case, that is all.

So when we speak of Levitical principles, it is nigh impossible to speak on the same ground. If these principles have their origins in a nomadic group of people who had recently escaped from Thirteenth Century BCE Egypt, drawing heavily on Hittite suzerain-vassal arrangements and the attendant societal characteristics, then our vocabulary will be significantly different than if they have their origins in a cynical repristination drawn from a vaguely Babylonian and/or Syrian religion-society.

That last paragraph should be the lead paragraph when I write my propertarian follow-up.

The Audience to and Author of Your Life

Instead of speaking of nature and nurture, determinism and free will, let’s think about the extent to which you are an audience to and the author of your own life.

We are all undeniably audience to our own life. We don’t choose to be born at all, nor who our parents are (or whether they raise us, or who does) or what nation we grow up in—or what part of history!

Moreover, we cannot rewrite the life we have lived up until now. We are an audience to our own past, disclosed to us through memories and through the stories about ourselves that others tell us.


Incomplete Virtue

In his essential book on virtue ethics, Daniel Russell advanced two arguments that I found highly novel and provocative.

The first is that the virtues are what he calls vague satis concepts, something I explore in depth here. The short version is that they have a threshold beyond which “virtuous enough” just is “virtuous in fact.” And this threshold is vague, in the sense that there are boundary cases that cannot be resolved simply by increasing your level of precision. One example of this is the threshold beyond which one goes from having thin or receding hair to being bald. More significantly, the concept of personhood is a vague satis concept, with boundary cases including long term coma patients, the severely brain damaged, and embryos.

In such cases, Russell argues, we need a model. This model is not simply an averaging of the most representative cases. As he puts it:

When we try to say what personhood really is, we construct a theoretical model of what we take to be the essential features of personhood, in some kind of reflective equilibrium, and realized to the fullest degree, since the model must illuminate the central cases, not just join their ranks. This model, we should note, is an ideal, and therefore not merely a central case: you or I could stand as a central case of personhood, but not as a model of personhood, since particular persons always have shortcomings in some dimension or other of personhood, a shortcoming that the model is to reveal as a shortcoming.

The second argument of interest is that virtue ethicists need a limiting principle on the number of virtues there are. The Stoics and Aquinas resorted to a very limited set of cardinal virtues of which all others were but aspects. Aristotle, however, offered no limitations at all, and most modern virtue ethicists follow him in this. Russell finds this unacceptable. This argument flows from the first one—we need a model of the virtuous person. If the number of virtues approaches infinity, then how could we ever hope to model such a person?

It is this second argument I wish to disagree with. Russell thinks virtues need a limiting principle because the model of the virtuous person that he has in mind is a formally specifiable model. But this is precisely what Aristotle’s notion of phronesis, with its radical particularity, precludes.

What Russell seeks is explanation, rather than understanding, when the latter is more appropriate.

Let us say that virtue is like the infinite, fractal coastline of a finite island. How could we model such a thing?
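The coastline image is not just poetic; it can be made concrete. Here is a minimal sketch (my own illustration in Python, not anything Russell offers) of the standard Koch-snowflake construction: the island’s area stays finite, while the measured length of its coast grows without bound as the ruler shrinks.

```python
# Illustrative sketch only: the Koch-snowflake "island" encloses a finite
# area, yet the length of its coastline, as measured with ever smaller
# rulers, grows without bound.

def koch_measurements(iterations: int, side: float = 1.0):
    """Yield (ruler_length, measured_perimeter) for successive refinements
    of a Koch snowflake built on an equilateral triangle of the given side."""
    segments, ruler = 3, side          # start with a plain triangle: 3 sides
    for _ in range(iterations + 1):
        yield ruler, segments * ruler  # perimeter as seen at this ruler size
        segments *= 4                  # each segment is replaced by 4 segments...
        ruler /= 3                     # ...each one third as long

if __name__ == "__main__":
    for ruler, perimeter in koch_measurements(8):
        print(f"ruler = {ruler:.5f}   measured coastline = {perimeter:.3f}")
    # The measured perimeter grows by a factor of 4/3 at every refinement and
    # so diverges, while the enclosed area converges (to 8/5 of the original
    # triangle): a finite island with an effectively infinite coast.
```

The point of the toy example is only this: a thing can be perfectly finite as a whole while any attempt to pin down its boundary at ever finer precision never terminates.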

Simply demanding the subject matter be finite will not help. Pointing out that there is more context than we can take in does not mean that the quest for more context is a bad thing—Russell himself makes a similar argument about all-things-considered rationality:

But committing to making all-things-considered judgments is not the same as committing to the (rather queer) life-project of becoming the best maker of all-things-considered judgments there can be. That project, like every other, consumes resources and opportunities, and can no more be assumed to be a rational one than any other project can. That is a fact about practical rationality: when it comes to making all-things-considered judgments, at some point it is reasonable to stop considering, choose, and hope that the choice is one we can live with, or perhaps grow into. Indeed, trying to become persons who do consider all things before acting is something that we have all-things-considered reasons not to do.

My argument is that even the constructing of the ideal itself follows a similar rationale.

Consider my recent exposition of the hermeneutics of novels:

After finishing a given chapter of a novel, we no doubt have certain expectations about what the book as a whole will be like, based not only on the chapter itself but on our understanding of the genre conventions the novel is operating within, maybe even of our familiarity with the author herself or what other people have insinuated about the book. Once we have completed the novel, however, our understanding will have changed—not only of the novel as a whole, but even of a given chapter and its significance. Rereading the novel, we may find the chapter discloses things to us that it didn’t the first time—and these new disclosures, in turn, inform our understanding of the whole novel. In this way, even after we have read the whole book, we can learn from parts of it.

Even something as seemingly finite as a novel we can only understand incompletely. Summarizing Derrida, Jonathan Culler adds to this picture of incompleteness by arguing that meaning is determined by context, and context is boundless. We can always revisit the context and find some new aspect which sheds light on a different meaning.

But Gadamer’s take on this incompleteness is much more optimistic than Derrida’s. It is also ultimately more optimistic than Russell’s, for the latter is forced to ask for models and limiting principles we do not have, implying that we haven’t had much of an idea about how to live virtuously until now.

For Gadamer, it is less about models than about stories. One such story would be the story of the good life. The same story, told differently, is the story of the virtuous person. People have been contributing to this story for thousands of years. Contra Russell, most people already understand virtue and the good life; their understanding is simply and necessarily incomplete. This understanding can be improved, and we should strive to be lifelong learners in this matter, rather than finding a particular understanding and then clinging to it out of a desire for a false certainty. A courageous virtue ethics is one that asks us to accept our inability to complete it, and the necessary day-to-day role that faith must play in filling in the gaps.

The Singular of Data

My friend Jeff told me a story in response to a comment I made. I had just mentioned the travails of kid sports, especially since I enrolled my kids in a hockey program which includes one third more ice time than last year’s program. I sighed, “All consuming, you know.”

Jeff leaned back and intoned a story about his brother-in-law, whose boys were raised on the road to become hockey stars but came only so close to making it into the professional ranks; now the anxiety is upon them, as young men in their late teens and early twenties, to acquire a meaningful vocation.

I said, “Can you imagine investing that much money and that much effort (giving away so much of the family life, in effect) toward a goal which has such a small chance of realization?”

Jeff shifted in his chair and recounted the tale of a dear friend of his who was a bona fide rock star, in his own mind. He did nothing but play his guitar and practice with his band of fellow-travelers, living out the hedonistic ideal, touring Europe and Japan every year. “If you buy him a sandwich, he’ll take it home to his mom’s basement, where he lives, and save half of it for dinner the next day.”

Jeff rarely answers any question with a propositional statement; he’s all stories, all the time. His experience is wide and varied, so I guess he can. What makes him especially delightful is that he doesn’t tell stories to fill empty space in a conversation; he’s answering a question. One story gives the answer, and then he’s done, no stringing together endless tangential episodes ad infinitum.

I saw somewhere recently (and, forgive me, I can’t remember the context) someone mention that the Affordable Care Act might be screwing over huge numbers of people, but a) those numbers are still marginal, and b) the fundamentals of ACA are forged in good policy. I take that to mean, in other words, that as long as the proper number of people are served by this public policy, those who are hurt (ground to dust, more like it) by the same public policy are data. I’m under the impression that that number doesn’t even need to rise to a majority; it just needs to meet some data-triggered threshold which satisfies its designers. All others should be able to conform, no? If not, then selection has taken its course, alas.

It’s not that I’m against science, God forbid; it’s that I’m against its magisterial application in all aspects of the human experience. Public policy, public morality, public religiosity (for lack of a better word), public everything falls under the hegemony of science, as though science were some sort of impersonal absolute extracted by innumerable university studies from an easily-accessible material world. Science, in this manifestation, never serves; it is always master.

Okay, I yield the point: “ground to dust” is too much; there are worse things on this earth than ACA. Nevertheless, I will not yield the larger outcry, namely that this sentiment is a resistance to the notion that, in our story-less data-gathering, individuals are being sorted in a grand perversity by science-wielding masters so that they lose their individuality, and thus their ability to serve one another. We don’t learn to serve each other by means of data; we learn by means of experience, which is carried forward through civilization in stories, wherein myriad strands of data are tied together.

Queen Elizabeth was finally convinced that the monopolies, though they appeared to buy consolidation of her throne, were costing her far more than an open market would. The data had always been there, but the stories hadn’t trickled up to the throne.

Get Thee To a Nunnery

On the descent into madness

The contest for the greatest play in the English language comes down to one of two Shakespeare plays: Hamlet and Macbeth. Both of these plays delve deeply into the psyche of ordinary men and women who enter the realm of madness. The plays themselves and the characters therein resonate deeply, crossing boundaries temporal and cultural. In our contemporary culture, the descent into madness is the theme of two of the most popular record albums ever recorded, Pink Floyd’s The Dark Side of the Moon and The Wall. The former used to rival Michael Jackson’s Thriller for worldwide sales, and remains one of the best-selling records of all time. I’m sure that as soon as either David Gilmour or Roger Waters dies (Rick Wright, RIP) many new fans will restore the rivalry at the top of the all-time charts.

Shakespeare draws a picture for us: Hamlet, young Hamlet, possessed by the ghost of his father to avenge his death, has been veritably banished by his uncle to England, whereupon he will be murdered, as everybody knows. By some twist of fate and the adventuring spirit of young Hamlet, he escapes, making his way back to Elsinore. Upon his arrival at the outskirts, he stumbles across an open grave. Holding up the skull of Yorick, his father’s jester, and a favorite person from his childhood, he says, “I knew him.”

Linear perspective insists that receding parallel lines converge at a point upon the horizon. Well, here at the open grave, the horizon has been brought dramatically forward, and Hamlet experiences the confrontation which is a response to his melodramatic soliloquy: what dreams may come after we have shuffled off this mortal coil must give us pause.

Hamlet 1948: Laurence Olivier

Not for long, for the grave is not passive; it is active, yawning, galloping, devouring. In a brilliant interpretation of the subtlest kind, Laurence Olivier’s Hamlet, when he hears the approaching funeral procession, tosses the skull of Yorick back into the grave, just in time for old Yorick to receive the recently deceased and politically important Ophelia.

All the powers of the earth are here converging, with love, politics, royalty, vengeance, and that always-pressing anxiety intersecting over a grave. War is ever on the horizon, hemming everyone within easy reach of the same.

And you run and you run to catch up with the sun

But it’s sinking

Racing around to come up behind you again

Banquo’s ghost won’t rest, either, charging up from the grave to confront, wordlessly, the ambitious Macbeth.

Here’s a curious aside: Richard Burton, an actor of some note, refused to play Macbeth because, as he said, he could not be dominated by a woman in that way. The irony is captivating once you come to the understanding that he drove himself to drink over his treacherous divorce in order to win for himself the great prize, Elizabeth Taylor, who dominated him.

The descent into madness, and its appeal to popular and literary culture, is not limited to obsessive thoughts concerning the grave. “This is the end. There is an end to me. Life has no purpose, no meaning.” No, that’s maudlin stuff. Pap. Child’s play. The descent into madness is the lonely individual coping with the active, ongoing confrontation of the grave, that all our evil deeds and the evil deeds of many others manage to wriggle free from death’s strong bonds in an effort to possess us ahead of time.

Ordinary people have a fascination with the exploration of the descent of ordinary people into madness. A playwright or musician will set the scene in extraordinary circumstances, by my reckoning, to sell tickets on the entertainment value. The literary value, i.e., its meaningfulness to the paying ordinary public, is its deep-seated commonality, the themes which grasp a deep-seated anxiety, an anxiety which many people would declare possesses us all. Some of us, for various reasons, cope better with that anxiety than others.

The meaning of life, in other words, is a question of how to maintain meaningful behavior even while under possession of the grave.

Richard Burton’s Hamlet gestures toward Ophelia’s womb, saying, “Get thee to a nunnery.”

Subhumans, From the Cradle to the Grave

Truth Without Objects

When I wrote about the subject-object distinction and its alternatives, I did not actually intend it to be a response to the argument Rob Kroese and I had on the meaning of the Confederate battle flag and the ontological status of that meaning. Though I did remember attempting to convey to him that this meaning was what McCloskey calls “conjective” and is more commonly called intersubjective. In particular, I remembered that the nonexistence of something beyond subjects and objects seemed simply obvious to him.

I actually forgot what the main content of the argument was about until he wrote his response to my subject-object piece.

In his piece, he not only affirms the subject-object distinction—while conceding that intersubjectivity exists, though it is merely “the relationship between two subjective viewpoints”—he also plants a methodological individualism flag firmly in the ground.

The individual is the seat of both consciousness and perception, and because of that, it is qualitatively different from both its constituent elements and groups of which it is a part. Neither an atom nor a government has ideas, perceptions, a point of view, or consciousness.

I think this presents a perfect opportunity to go after the ontology of hard methodological individualists, something I only did glancingly in the previous post. But first, I’d like to talk a bit about what I was attempting to do in that post and how I got there.

Intersubjectivity Is Not Enough

I’ve believed in this space “between” subjects for quite some time; I read Searle on it ages ago. The deeper I have gone into philosophy over the past couple of years, the more this sort of ontology has jumped out at me as something quite attractive.

Still, I don’t really like the term “intersubjective.” Searle’s “institutional fact,” or the bigger mouthful “ontologically subjective, epistemologically objective” seemed better, given its reference to something larger or at least other than mere subjects.

I prefer McCloskey’s “conjective” most of all, because the etymology of the word emphasizes that it is the sort of knowing that we only know together. Of course, at this point it’s simply a neologism…its own conjective status is quite marginal.


In any case, when Rob and I had our encounter a couple of months ago, this was as far as I had gotten in my understanding of the subject-object distinction. I’ve since moved on, but before I get to that, I want to stress that Rob makes the very mistake that the term “intersubjective” invites us to make.

Here is his first stab at it:

The fact is that on some level intersubjectivity has to exist. It’s absurd to dismiss the idea of the shared meaning of symbols while writing in a language that depends on the shared meaning of symbols. It seems to me, though, that intersubjectivity is not a property of the symbol/meaning pair itself, but of the relationship between two subjective viewpoints. In other words, intersubjectivity isn’t a thing in itself; it’s an emergent property of a collection of things. Think of it this way: there are such things as parallel lines, but there is no such thing as a parallel line. A line cannot be parallel by itself, and a single symbol/meaning pair cannot be intersubjective. The intersubjectiveness arises as a result of isomorphism between two subjective points of view, in the same way that parallelism arises as the result of equidistance between two lines.

So to the question “Is there such a thing as intersubjectivity?” I would answer “Yes, but only as an emergent property that depends on subjectivity.” It isn’t a new class of thing in between (or otherwise in addition to) subjective and objective. It’s just a description of shared subjectivity.

Further down he adds:

So to say that the Confederate flag is a racist symbol, and to base this statement on the idea of intersubjectivity, is, I think, a cheat after all (and maybe that’s not what Adam was saying, but that is my interpretation of his argument). It’s a way of saying “Pretty much everybody thinks X; therefore X is true.” You can’t derive either objective meaning or moral authority from a bunch of people believing something, no matter how many of them there are or how fervently they believe it.

His standpoint on this is reflected in the very first tweet in our discussion a couple of months back.

As I understand his model of intersubjectivity, it’s simply an aggregation of subjective standpoints; no more, no less. Being around a bunch of other people who believe the same thing as you makes you more likely to believe it, and so it has (let’s say) intersubjective weight. But ultimately this is all just the subject-object distinction as usual; intersubjectivity for Rob is nothing more than successful coordination of belief and meaning among subjects.

But the whole point of throwing conjective (or whatever your preferred term is) into the mix along with object and subject is to emphasize that something different is going on here.

Again, you must contend with Searle’s example of the dollar. This is not about adding up percentages of people who believe a dollar is money. A dollar is money. It can certainly cease to be money. But when that happens, it is unlikely to look like a process where fewer and fewer people agree with you that it is money. If you’re just looking at subjects, it happens like a cascade—all of a sudden you can hardly find anyone who thinks a dollar is money. And, I would argue, those people who you might be able to find would simply be wrong—it isn’t that you could gather all of them together and make it intersubjectively the case for that group. They would simply have failed to notice that the conjective reality on the ground had changed when they weren’t looking.
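To make the cascade image a little more concrete, here is a toy sketch (my own illustration in Python, a Granovetter-style threshold model, not anything Searle or Rob proposed): each agent keeps treating the dollar as money only so long as a sufficient share of others visibly does, so acceptance does not erode person by person but collapses all at once past a tipping point.

```python
# Illustrative toy model only: a Granovetter-style threshold cascade.
# Each agent keeps accepting the dollar as money so long as the share of
# other accepters they observe stays above their personal threshold.

import random

def simulate_collapse(n_agents: int = 1000, shock: float = 0.15, seed: int = 0):
    """Return the fraction still accepting the dollar after each round."""
    rng = random.Random(seed)
    # Each agent's threshold: the minimum share of accepters they need to
    # see in order to keep accepting themselves.
    thresholds = [rng.uniform(0.5, 0.95) for _ in range(n_agents)]
    # A small exogenous shock: some agents permanently stop accepting.
    shocked = set(range(int(shock * n_agents)))
    accepting = [i not in shocked for i in range(n_agents)]

    history = []
    while True:
        share = sum(accepting) / n_agents
        history.append(share)
        updated = [(i not in shocked) and share >= thresholds[i]
                   for i in range(n_agents)]
        if updated == accepting:   # fixed point reached
            return history
        accepting = updated

if __name__ == "__main__":
    for round_no, share in enumerate(simulate_collapse()):
        print(f"round {round_no}: {share:.1%} still treat the dollar as money")
```

Note that this only models the subject-level picture of coordinated belief; the conjective point is precisely what such a model leaves out.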

Rob complains that I “elevate intersubjectivity to the level of objectivity,” but that’s simply a symptom of being stuck in subject-object thinking. You begin to think “objective” is a synonym for truth.

And that’s the whole problem.

Beyond, Truly Beyond Subjects and Objects

Since our discussion on Twitter, I have read quite a lot by and on philosophers who have asserted the primacy of holism—that is, of the meaningful relationship between parts and wholes—over the subject-object distinction. I wrote my post primarily as part of my process of thinking this through, because I am incapable of thinking clearly without writing.

When a philosopher like Hans-Georg Gadamer talks about meaning and symbols, for example, he isn’t thinking of intersubjectivity, much less is he thinking of symbols as objects observed by subjects. Instead he is thinking of the hermeneutic circle. As Wikipedia succinctly puts it:

It refers to the idea that one’s understanding of the text as a whole is established by reference to the individual parts and one’s understanding of each individual part by reference to the whole. Neither the whole text nor any individual part can be understood without reference to one another, and hence, it is a circle.

As I put it in another post:

For Gadamer, our horizon is a provisional understanding of the whole, and our prejudices are in turn our provisional understanding of a given part that we encounter.

To make this concrete, consider a novel. After finishing a given chapter of a novel, we no doubt have certain expectations about what the book as a whole will be like, based not only on the chapter itself but on our understanding of the genre conventions the novel is operating within, maybe even of our familiarity with the author herself or what other people have insinuated about the book. Once we have completed the novel, however, our understanding will have changed—not only of the novel as a whole, but even of a given chapter and its significance. Rereading the novel, we may find the chapter discloses things to us that it didn’t the first time—and these new disclosures, in turn, inform our understanding of the whole novel. In this way, even after we have read the whole book, we can learn from parts of it.

The division between reductionist frameworks such as the subject-object distinction or methodological individualism, on the one hand, and holistic frameworks like intentionality or the hermeneutic circle, on the other, has been a key intellectual battleground for decades.

For my part, I think that the holistic framework provides a superior ontology to the subject-object distinction. Though rather than parts and wholes, at the end of my post on the subject I argue for an ontology of the whole and processes within the whole, with taxonomies of parts serving a merely pragmatic role. Or perhaps it would be better to call them, as Daniel Dennett does in a paper that Jon Lawhead has just recently drawn my attention to, “patterns” rather than “parts”.

Our own David Duke is fond of saying that wisdom is about finding meaningful distinctions. Implied in how he says it, I think, is the importance of not trying to turn any one of those distinctions into a theory of everything. The subject-object distinction is meaningful and useful, to be sure. As is intersubjectivity or conjectivity, as are parts and wholes, reductionism and holism, and substance and process. I definitely have a perspective on which of these is superior in most circumstances to the others—a perspective I tried not to be too blatant about in my last post, when I was more interested in exploring each on its own terms and using comparison largely to shed light. But all have proven to be meaningful distinctions.

But the subject-object distinction does not stand alone. Indeed, it has enormous blindspots.

Blindspots that become evident when methodological individualists attempt to talk about groups.

More Than a Subjective Feeling

On this matter I have been greatly influenced by this Christian List and Kai Spiekermann paper on methodological individualism and holism. They draw specifically on debates within philosophy of mind to distinguish between “levels of explanation.” That Dennett paper mentioned above actually does a good job of making this distinction concrete:

Predicting that someone will duck if you throw a brick at him is easy from the folk-psychological stance; it is and will always be intractable if you have to trace the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth.

Part of the problem between holists and reductionists is that the latter often interpret the former as saying that neurotransmitters, nerves, and the like do not have anything to do with beliefs or ideas or intentions, or that macroeconomic phenomena have nothing to do with individuals. List and Spiekermann, and Dennett as well, argue for patterns that have their own explanatory power, which doesn’t preclude the existence of the “lower” levels, nor their explanatory value in certain specific circumstances.

But the explanatory power of a nerve-level perspective is much weaker than that of the perspective of intentionality when it comes to explaining—indeed, predicting—that most people will duck when a brick is hurled at them, and giving reasons why this is so.

The attempts to deny this and reduce macro phenomena to nothing more than microfoundations can be useful, but too often you end up with ridiculous results, as Arnold Kling puts it:

Solow’s problem with Lucas was that Solow thought that reality should take precedence over microfoundations. Solow equated Lucas’ approach to macro with deciding that because one’s theory could not explain how a giraffe could pump adequate blood to its head that one had proven that giraffes do not have long necks.

Rob doesn’t seem to have any problem with intentional concepts from philosophy of mind; indeed he thinks it is precisely because individuals are “the seat of both consciousness and perception” that “the individual viewpoint is irreducible and sacrosanct.” That is, that groups cannot be given any special ontological status.

But in my view, to think of groups as nothing but collections of individuals is just as mistaken as thinking of beliefs as mere illusions created by neurons. And this is precisely where the subject-object distinction, and even or especially intersubjectivity, fail us.

Let’s talk about language for a moment. If you try to think of language as an “object”, or as some kind of mere coordination between “subjects” over which “objects” they are referring to with which symbols, you do not get the full view.

A language is a whole. It is a whole that is constantly created and changing through the participation of the individuals who speak, write, read, and listen with it every day. The gap between someone who is a good student in a second language and a fluent speaker is enormous. The key difference is that the fluent speaker has a greater grasp of the whole, and so even when he is exposed to parts he has never seen before—words or phrases—in the context of his understanding of the language itself, he is in a much better position to understand them almost immediately.

Language is a property of groups rather than individuals, just as thoughts and ideas and beliefs are properties of individuals, not atoms. To claim that groups cannot have such properties because they don’t have ideas or beliefs is tantamount to saying that thinking of termites or ants in terms of the whole colony is incorrect because colonies don’t have legs or antennae. I am not saying that human individuals stand in the same relationship to their groups as eusocial insects do; I am saying that we have plenty of examples from nature where groups make better levels of explanation than individuals.

There are right and wrong ways to understand a sentence. This sometimes gets obscured by polyvalency, which is simply the fact that there is usually more than one right way to understand a sentence, or especially a paragraph or a whole book. Nevertheless, there are wrong ways to understand all of those things.

Rob trips up because he thinks truth just is objective truth. But the subject-object distinction is very young compared to the distinction between truth and falsehood.

A foreigner who consults a menu and then repeats part of it phonetically, thinking that he is ordering beef when in fact he is ordering fish, is wrong. He has misunderstood the text. He is not objectively wrong. He is perhaps conjectively wrong. But he certainly has demonstrated an incorrect grasp of the relationship of this part of language to its whole.

Now, on to that contentious issue, the Confederate flag. I am not going to die on this hill. But I will talk about how one might make an argument on this subject.

Rob says:

Group A says: “This symbol means X, and that is offensive. Take it down.”

Group B says: “No, this symbol means Y, which is not offensive. I won’t take it down.”

Now the charitable, rational thing for Group A to say at this point is “Maybe it means Y to you, but I want you to know that it means X to a lot of people. So out of respect for those people, you should take it down.”

But that is not what I have in mind.

I understand that there are plenty of scenarios where people could have been raised to see the Confederate flag as simply a matter of heritage, something that stands for the heroism in battle of their ancestors. And that isn’t necessarily wrong. But when it is claimed that it does not also stand for the defense of slavery and subordinate race relations, a claim is being made that is contestable as right or wrong.

When people make this claim, I don’t assume that they are racist. I just think that they are wrong. I would point out that members of the Confederacy proclaimed that the cornerstone of their cause was “the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition.” I would also point out that the designer of the flag argued that “we are fighting to maintain the Heaven-ordained supremacy of the white man over the inferior or colored race”. Finally, I would point out that the flag was not even flown by the state of South Carolina after the Civil War until 1961, precisely to signal its position on the Civil Rights Movement, which was mounting at the time.

Do these facts have an objective force which makes anyone and everyone see a revealed truth? No. But they are enough to persuade me. And persuasion is the mode not only of establishing the meaning of symbols but also of adjudicating the status of scientific theories.

Which brings me to my final point: there are no guarantees.

Rob reaches for an argument pretty common in libertarian circles, that we must assert the primacy of the individual or else we risk falling into totalitarianism.

The problem is this: the moment that you elevate intersubjectivity to the level of objectivity, or pretend that groups have some kind of importance above and beyond that of the individuals comprising the group, you are on very dangerous territory. This is the domain of groupthink and collectivism, where the lone dissenter is marginalized and crimes against individuals can be justified on the basis of the good of the group. After all, if people are to the nation-state as cells are to a human being, then executing a few dissidents should bother us no more than excising a suspicious mole. Unless we recognize that the individual is something qualitatively different from the group, and that the individual viewpoint is irreducible and sacrosanct, we risk falling into the trap of believing that human beings are just collections of atoms or that a single human being has value only insofar as he contributes to an arbitrarily defined group.

Two responses, one quick, and one not as quick.

First, as David Hume pointed out long ago, pointing out the consequences of a theory does not make it false. That maxim can be misleading, inasmuch as it is important to point out consequences so we can get an appropriate idea of what is at stake. But there’s something to it, in that if consequences are your only argument, you’re essentially saying that even if a theory isn’t true, we need to perpetuate it as a Noble Lie for people’s own good.

Second, there simply are no guarantees. The idea that believing that groups have no ontological status outside of aggregating individuals will protect us from tyranny seems to overstate things quite a bit. For one thing, there are people from history like Robespierre who believed they were fighting for the primacy of individuals and reason who became prototypes for the tyrannies by terror of the 20th century. For another, a radical individualism has its own risks—namely that there won’t be enough to hold the individuals together to serve any common purpose.

A moral nihilist might argue that believing in right and wrong could give people an excuse to do terrible things in the name of right causes. A moral realist might argue that nihilism leads to anything-goes outcomes where Hobbes’ fictional state of nature is made real, or cynical power politics receives no corrective at all from organized idealists.

Not only does each broad approach to morality or ontology carry its own risks, but there is virtually an infinite range of particular versions of each approach, some of which are more dangerous than others.

In short, I don’t think the potential risk of sinking into tyrannical collectivism that some group ontologies carry with them is enough to claim that a giraffe’s neck is not as long as it is. For one thing, the devil’s in the details. For another, opposing ontologies often have the same risks arrived at by different paths, as well as risks of their own.

Precisely because I think the stakes in this matter are important, I’m not going to say I conclusively have any answers here when it comes to the right ontology that posits the right relationship between individuals and groups, parts and wholes, subjects and objects. But I’ve grown pretty damn sure that our assumptions about the last distinction carry with them tremendous blindspots which are detrimental to understanding most of the important things in life.

Of Subjects and Object

If you’ll forgive me for subjecting you to another lengthy post, I’ve got a subject I’d like to explore a bit: the subject-object distinction. Before you object, let me say that my primary objection is how few people even see it as a distinction, rather than revealed truth. In an argument a few weeks ago, I was accused of magical thinking simply for asserting the existence of what Deirdre McCloskey calls “conjective,” Searle’s “institutional facts,” or Habermas’ “intersubjective.”

The idea of something not purely subject or object seems impossible in our post-Enlightenment world. Even the religious largely argue for the existence of an objective world that is affirmed by God.

So I’d like to subject the subject-object distinction to some much merited scrutiny.


The Spooky School

Glasgow at the turn of the 20th Century was bohemian, and if it weren’t for early 19th Century Parisian bohemians, turn of the century Glaswegian culture would define the term. More to the point, Glaswegian culture, by means of Charles Rennie Mackintosh, defines architecture to this very day. If you live in or around a city in Europe or North America which grew up before World War II, you see Charles Rennie Mackintosh everywhere, both in architectural design and in architectural embellishment, particularly the Mackintosh rose. You’ve seen that stylized rose everywhere, so much so that you probably don’t know you’re seeing it.

Where did it come from, and how did it get such a wide distribution? This design, along with many of his design ideas, exploded into the arts and architecture world, as you might imagine happening if you study this specimen more closely, and perhaps other specimens of the rose which he designed. All of Europe breathed a sigh of relief when that tension was released, and the relationship brought to bear by this explosion continued for twenty years, before Mackintosh succumbed to depression and tongue cancer in 1928.

Architectural design throughout the western world had become sclerotic and quite formalistic, suppressing artistic expression and craftsmanship. There were signs of growth and creativity, with such notable patrons as William Morris, but those who were trying to create new schools of architectural design were confined to certain pockets, quite literally confined, bound by physical walls within which individual creative thinking might be encouraged; aside from certain trade magazines, though, such thinking was all moot. In these terms, the political bound the individual so that he could not express himself. If he expressed as he was compelled to express from within himself, he could expect to lose his ability to eat.

Nevertheless, the Celtic revival movement was percolating, along with the English Arts and Crafts movement, but there was as yet no spearhead to bring the incredible talents into the larger professional world. Plenty of these artisans, however, were quite aware of their predicament, and they reacted quite predictably: they formed schools within schools.

In 1889, Charles Rennie Mackintosh was introduced to one of these schools within a school while he was working as an assistant for an architectural firm. They were a group of young women attending art school who called themselves the Immortals. Frances and Margaret Macdonald were among this group, and they struck up quite a relationship with Charles and his colleague Herbert McNair. The four of them began to collaborate, becoming known to the outside world as The Four, and the outside world began to take notice.

At first, their work was mocked and ridiculed, but, as they say in sports, “They don’t boo nobodies.” Something about their work had struck a chord and a nerve, so they were encouraged by the response. They received positive responses to their work as well, from these other pockets of architectural bohemianism, particularly in Germany and Vienna. Being sensitive artist-types, however, they withdrew into a world of their own making, creating in their expressions a symbolic world whose interpretation was known only to the four of them. Soon, professional journals began to offer professional critique of their work, and they began to win prizes for submissions to open exhibitions. They gained no small notoriety throughout the architectural world as the principal representatives of the Glasgow School.

They relied so heavily on distorted female figures, flowers, and tears that outsiders began to call the Glasgow School the “Spooky School.” The Four had triumphed, but, as a foursome, they had reached their zenith; individual expression was still subordinated to the political, albeit a politics of only four. Moreover, Herbert married Frances and moved away, leaving Charles and Margaret to look at each other, shrug, and marry. Their collaboration was remarkable.

It was the rose, however, Charles’ Glasgow Rose, which he made his own, that revolutionized the architectural world. Mackintosh had found freedom within this little school within a school within a school, developing a language to communicate with them and only them so that only those whom he trusted most could advise, criticize, and encourage him. Within that conclave he gestated, and from that conclave he was born with a brand new rose in hand.

Part Seen, Imagined Part (1896)

For this post I leaned heavily on my repeated readings of John McKean’s Charles Rennie Mackintosh: Architect, Artist, Icon, and also Fanny Blake’s Essential Charles Rennie Mackintosh. Page numbers by request.