The Gordian Knot

Featured image is Alexander Cutting the Gordian Knot, by Antonio Tempesta

One of the key disputes in the continental vs. analytic divide in modern philosophy is a matter of style. German and French philosophers largely follow Hegel’s impenetrable style—or worse, Heidegger’s—while English-speaking philosophers largely follow Bertrand Russell’s approachable prose.

A problem arises immediately because the substance of philosophy is relevant to the question of its style. Consider economics: its conclusions concern human beings, and economists are themselves human beings, so those conclusions are relevant to the practice of economics itself.

Philosophy falls into a similar recursion, even when we are just talking about the style in which philosophy is done. Plato’s decision to write only in the form of dialogues was a conscious choice made on a philosophical basis; his master Socrates believed that written philosophy was a contradiction in terms.

What, then, are the philosophical presuppositions behind the stylistic divide in modern philosophy?

There’s a lot that can just be chalked up to bad or sloppy writing. I’m told that Kant wrote The Critique of Pure Reason in a hurry and did not get it properly edited. Analytic philosophy itself is no stranger to putrid prose. One need not be a good writer to become a professional philosopher, even in the English-speaking world.

But Hegel’s writing style was, I believe, a choice. And Heidegger’s certainly was. Heidegger believed that common language came with philosophical assumptions baked in. That point is at least defensible. His solution, however, seems worse than the problem. The relentless neologisms and wordplay are all but impenetrable. George Steiner claimed that in German, the style has literary merit. Perhaps so, but I am in no position to judge that. All I know is that Gadamer, no great stylist, nevertheless was able to wrestle with the same perceived problem in a perfectly straightforward manner.

If we are going to write, we should strive to be good writers. In this way, I stand much closer to Russell than to Heidegger. However, being a good writer does not always mean making the simplest possible point in the simplest possible way.

I fear that for all the faults of the continental tradition (the USSR used Marx to justify mass murder, while Heidegger was a registered Nazi and delivered a speech touting their virtues mere months after Hitler was made chancellor), the analytic tradition too often thinks that the world’s problems are merely a set of Gordian Knots begging for Alexander’s solution.

I often think, these days, in terms of “low context” or “direct” speech as opposed to “high context” or “indirect” speech, distinctions I learned of in Arthur Melzer’s book Philosophy Between the Lines.

Anthropologist Edward T. Hall, for example, probably the most famous and influential writer in the field, distinguishes between what he calls “low context” societies like the United States and Europe and the “high context” societies found throughout most of the developing world. In the former, when one communicates with others— whether orally or in writing— one is expected to be direct, clear, explicit, concrete, linear, and to the point. But in most of the rest of the world, such behavior is considered a bit rude and shallow: one should approach one’s subject in a thoughtfully indirect, suggestive, and circumlocutious manner.

To forestall an objection from Ryan, Melzer does not rely on evidence from theorists alone. He also draws on practical guides created for people who have to work in other cultures which emphasize the pervasiveness of indirect speech outside of the west. The book assembles a formidable corpus of such practical and theoretical discussions, all pointing in the same direction—towards the existence of cultures favoring “low context” and “direct” styles on the one hand, and “high context” and “indirect” styles on the other.

The chief distinction is not between obscurantism and clarity, but how much of an onus is put on the audience. From one paper Melzer cites:

The burden for understanding falls not on the speaker speaking clearly, but on the listener deciphering the hidden clues. In fact, the better the speaker, the more skillful he may be in manipulating the subtlety of the clues.

To a western and especially an English-speaking audience, this seems the very definition of obscurantism. But Melzer emphasizes the pedagogical value of making students pay close attention to a text in order to be able to understand it. What seems superficially easy to understand too often yields only a superficial understanding.

It is an uncomfortable fact for philosophers that stories, not philosophy, are the chief means through which societies convey wisdom. Philosophy and art have struggled over which was the appropriate source of wisdom since antiquity. Philosophy succeeded in achieving a certain status among intellectuals. But most people, especially children, find more wisdom in cartoons, movies, or comics than in philosophy, whether continental or analytic, ancient or medieval, Eastern or Western.

In my view, there must be great value in conveying ideas indirectly. Of the writers here at Sweet Talk, no one has demonstrated this more thoroughly than David.

The reverse does not hold, however: direct communication is not thereby devalued. Indirect and direct, high- and low-context communication both have their place.

What should not have a place are bad writing, badly organized presentation, and intentionally opaque language.

Kant, Hegel, and Heidegger were not seeking to provide well written parables or even dialogues, for all of their love of dialectic. For all the talk about “dialectical” styles, they ultimately gave lectures and wrote essays and books. And the stylistic choices they made provided cover for later, more mediocre thinkers to shroud their mediocrity in impenetrable writing.

What can be considered good writing depends on the goal as well as the audience of the piece. Good writing for a technical audience will be different from good introductory writing. Conveying wisdom through poetry, parable, or essay will also be judged differently in each case. But having different standards isn’t the same as having no standards, and philosophical writing is too often synonymous with bad writing.

Ignorant, Humble, and Curious

Featured image is a scholar with his books and globe.

Meredith L. Patterson has a wonderful post on the Platonic dialogues, and the different roles particular characters fill in them.

At a high level, there are just two roles: smart guy and dumb guy. Socrates is the only smart guy, so if you are his interlocutor, you’re the dumb guy.

But on the dumb guy side, Meredith makes a further distinction:

Broadly, they fall into two classes: hubristic dumb guys and epistemically humble dumb guys. Hubristic dumb guys think they know all the answers, and by definition don’t, because nobody does. Epistemically humble dumb guys know they don’t know most of the answers, and don’t mind, apart from the whole not-knowing part, and would like to know more.

You’d think that for your self-image it would be better to be Socrates, but I honestly kinda like being the second kind of dumb guy.

I think this is on the mark. To see why, consider two extreme interpretations of Socrates.


Demonstration, Theory, and Practice

Disputation

It would be easy to take my personal intellectual journey from Deirdre McCloskey’s post-modernism to Hans-Georg Gadamer’s hermeneutics to be a slide into relativism, given the reputation of such things. In fact, from the beginning it was a journey out of skepticism and into epistemological optimism, qualified though it may be.

If I have come to believe that knowledge is very political, in the sense of being unable to exist in an individual vacuum without the context provided by groups, I have nevertheless come to believe that there is genuine knowledge.

In his first post here, Ryan made it clear that he wasn’t satisfied with that picture. He is even more optimistic—he believes that genuinely apolitical knowledge exists.

I am no master of epistemology or philosophy of mind. I’m not going to write a lengthy essay on why he’s wrong and I’m right. Instead, I want to pose a few questions. I have some preliminary answers to some of them, but the questions are more valuable than the answers.

Demonstration

A big part of my transition to relative optimism has been the abandonment of Cartesian foundationalism as the criterion for what knowledge is, and an embrace of classical notions of demonstration.

The most straightforward example is the defense of the principle of non-contradiction. We cannot prove that it is true in a foundationalist way, and it is the basis of the very method of proving things that we are attempting to defend. But we can demonstrate how it is impossible to make any argument without it. As I put it recently:

[I]t’s rather hard to make any assertion at all with teeth if you don’t care about consistency. My argument about inconsistency undermined my argument about the logical connection between God and morality—if consistency doesn’t matter, then there could both be no logical connection between God and morality, and be a completely crucial logical relationship between God and morality—simultaneously. Productive analysis would prove impossible.

Alasdair MacIntyre employed a similar sort of demonstration in critiquing Nietzsche and Foucault. If there is no truth, only positions that people take publicly in order to mask the cynical power relations going on in the background, then what is the status of this very claim? Is it true, or merely a mask for cynical power relations? And if the latter, does that not mean that the claim is false, and therefore some positions can be true and not just cynical power relations? Elliot Michael Milco makes a related set of arguments in his thesis.

One demonstration I came upon recently that I liked concerned the infinite regress argument for skepticism. This is the notion that we give reasons for believing anything—we believe X because Y. But we have to justify our reasons with yet more reasons—and this chain goes on forever. We believe X because Y, which we believe because Z, and so forth. Because there is no foundational reason, believed for no further reason, nothing is ever actually justified rationally.

In Joseph Heath’s Following the Rules he points out that this argument would imply we’re incapable of solving crossword puzzles. After all, our justifications for giving a particular answer for a particular part should follow the same logic that skeptics claim leads to an infinite regress. But it’s clear that we do solve crossword puzzles, when we have sufficient familiarity with the subject matter. So unless skeptics are prepared to insist that we somehow solve them irrationally, it would appear that their argument proves too much.

This sort of reasoning was the basis for my piece Science is Persuasion; Cartesian foundationalism has failed, yet still we have modern physics, medicine, and chemistry. Clearly we have knowledge. These sorts of demonstrations are more squishy than many philosophers after Descartes are comfortable with, but I’m convinced they are all we have, and that they are enough.

But What Is It To Know?

To say that a farm boy knows how to milk a cow is to say that we can send him out to the barn with an empty pail and expect him to return with milk. To say that a criminologist understands crime is not to say that we can send him out with a grant or a law and expect him to return with a lower crime rate.

Thomas Sowell, Knowledge and Decisions

The part of Ryan’s post that seemed most similar to the demonstrations above was his discussion of consciousness, which he gives as an example of innate knowledge.

For example, we are all aware of the fact that consciousness exists because consciousness is defined to be every aspect of our sense of awareness. We cannot even deny the existence of consciousness without experiencing it. This knowledge was not acquired through any sort of data collection, analysis, or persuasion. We possessed it as soon as we possessed consciousness itself. Knowledge of our own consciousness is, therefore, innate. At this risk of sounding like Hoppe, to deny this kind of knowledge is to demonstrate it; therefore, it can’t sensibly be denied.

This seems to me to get at the tension between theory and practice, a topic at least as old as philosophy itself.

Ryan’s argument boils down to the idea that it is self-defeating to deny the existence of consciousness. Yet there are people who do deny it—they are called eliminative materialists. The defense of the principle of non-contradiction is similar, as Milco’s thesis shows very capably—we can point out that it’s self-defeating to believe in its falseness, but that doesn’t stop someone from believing it is false.

The obvious argument is that in practice, we live as if we believe in consciousness and reality and causation (to name a few items on the skeptics’ list), and we argue as if we believe in the principle of non-contradiction, even if in theory we have explicitly declared that we do not believe in those things.

So here are the questions I promised: if certain ideas are implicit in our practices but we do not believe in them conceptually, is that knowledge? Does our incorrect explicit belief count as ignorance, falsehood, or error of some kind?

Given that we know of philosophical skeptics throughout history who have professed to disbelieve in just about everything, but clearly did not live as though that were the case, did they really know they were wrong in some meaningful sense?

These were the questions that came to my mind when I read Ryan’s post.

How We Think: a Simple Model

Drilled Mind by Roman Klco

Forgive the clumsy workings of a mind untutored in philosophy of mind. But it has always been my way to read, and think, and argue, and then try to pull the threads together and see what emerges. What I lack in training I will try to make up for with brevity.

Crucial for understanding anything is understanding the context. The context is what is meant when people speak of the whole truth as opposed to partial truths. However, context—the whole truth—is boundless, and so whenever we attempt to grasp it, we’re always projecting and simplifying.

A partial truth might be the observation that a dropped object falls; a projection of the whole truth would be classical physics.

We can make sense of the centuries-long tug-of-war between Enlightenment rationalists and empiricists from this perspective. The rationalist argument boiled down to the idea that we can never make sense of observed truths, which are always partial, without working out the whole truth abstractly beforehand. Empiricists basically believed the opposite: you add up a lot of partial observations until you get the whole truth.

Philosophical hermeneutics for the past century has taken another path, saying, in essence, you need both at once.

As Jonathan Haidt points out, people don’t really have a rational model of morality in mind when they make in-the-moment ethical judgments. His observation could be extended to nearly any judgment; more of the whole gets left out than is brought in explicitly. But there is an important two-way relationship between part and whole here. So how does it work?

Crucially, we externalize most of the tools for our thinking; Andy Clark calls this “extended cognition.” Joseph Heath and Joel Anderson point out that the most important way individual knowledge is externalized is through the social environment; that is, through other people. In large part this is accomplished by trusting the people in our lives and the people we perceive to be authorities on a subject, and having faith that the apparent contradictions or limitations to what is known can either be worked out, or aren’t very important.

But neither the trust nor the faith is blind. When partial truths, or assertions by people we trust, stand in conflict with the whole truth as we understand it, or as others we trust have explained it, they are subject to reevaluation.

Essentially, what the typical person brings to the fore in confronting the partial truths in their daily life is not an actual model of the whole, but prejudices. Hans-Georg Gadamer speaks of prejudices in just this sense as “pre-judgements,” similar to how a judge will make provisional judgments that influence the decisions he makes throughout the trial, before actually rendering a final verdict.

These prejudices aren’t simply mindless assumptions or “givens”, nor is the process by which we come to them. They always point back towards some skeleton projection of a whole truth, which can be fleshed out and scrutinized. Haidt leans hard on the fact that people will cling to their prejudices even when unable to come up with any reasons to justify them. But the fact that I might not be able to remember how to solve a quadratic equation in the moment does not invalidate mathematical reasoning as the correct way to solve such an equation. Nor does it mean I should abandon my belief in mathematical reasoning!

Context matters, and part of specifically human context is the knowledge that we have largely externalized. Haidt and his researchers are presumably complete strangers to their subjects, and the subjects in question knew that they were inside of a psychology experiment. Would the subjects have treated the matter differently if they were in a setting they trusted to be private, discussing the questions with a priest, a teacher, or other trusted moral authority who challenged their prejudices and explained the reasoning for doing so?

The very best projections of the whole truth that humanity is capable of mustering can get incredibly complex, and take years of training to really understand. Physics, chemistry, advanced mathematics, computer science, but also moral systems, offer up huge, interconnected sets of theories. Connecting these systems to people’s prejudices is a matter of persuasion. Non-specialists must be persuaded that specialists have authority on a given subject, and specialists must also persuade one another on any given point of contention.

Persuasion isn’t a matter of cold, logical syllogisms, but of rhetoric. But that doesn’t mean it is irrational. It appeals to thought processes as they actually occur in people. Among specialists, of course, this will often involve a great deal of technical details. But technical details alone cannot tell the whole story of what we ought to believe or why we ought to change our point of view.

In every case, the one attempting to persuade will draw on narratives and metaphors and examples from life that make the subject, as well as what is at stake, concrete for the people they want to persuade. They will employ a situated reason. But reason, nonetheless.

 

The Whole Truth


There is no way to establish fully secured, neat protocol statements as starting points of the sciences. There is no tabula rasa. We are like sailors who have to rebuild their ship on the open sea, without ever being able to dismantle it in dry-dock and reconstruct it from its best components.

Otto Neurath

If Martin Heidegger brought epistemological holism to Continental Philosophy through the central metaphor of the hermeneutic circle, W. V. Quine did the same for Analytic Philosophy. Part of how he did that was by popularizing the idea of Neurath’s Boat.

The idea is this: we can only appraise the truth-value of some specific assertion in light of a background web of beliefs that provide the necessary context. Thus, it is like we are sailing on a boat which we can scrutinize—and even repair—one plank at a time, but not all at once.

A more straightforward way of saying all of this is that you can only really understand the partial truths available to you in the context of the whole truth. But the whole truth is not available to anyone, and so instead we have beliefs, theories, frameworks.

Neurath’s Boat is a neat metaphor, but I prefer one I encountered from Susan Haack (via Joseph Heath). The idea is that knowledge is like a crossword puzzle. What you fill out has implications for what the likely answers are for nearby slots. But when you figure out what goes into those nearby slots, if you have greater confidence about them, you may need to revise what you put in the initial slot. Thus, the crossword puzzle has this property of a web of answers that are interconnected in important ways.

If we stuck with the idea of a puzzle we could see in full and were merely filling out, it would imply an incrementalism just like the one explicit in Neurath’s Boat. But that would be silly—we do not know what the whole truth looks like. If we did, we would have a great deal more certainty about an enormous number of areas that remain hotly contested.

No, instead it seems to me that the crossword puzzle metaphor works, but we only get pieces of it and attempt to guess at the shape of the rest. Our experience of partial truths is fundamentally projective—based on the parts we encounter we attempt to project a provisional outline of the whole truth, though this outline may itself be quite incomplete.

When Hans-Georg Gadamer argues that the hermeneutic experience, as well as dialectic, is fundamentally about the subject rather than authorial intent, I think he means something like this: guessing at authorial intent is like trying to guess at the author’s projection of the whole truth. Focusing on the subject, on the other hand, is simply attempting to work out what we think the correct projection of the whole truth is, using what we learn from what we read or the conversation we participate in.

One last observation: the whole truth is not value-neutral; not for humans. Elizabeth Anderson argues this very convincingly—responding, coincidentally, to Susan Haack. She uses the example of The Secret Relationship, a book which apparently uses some very selectively chosen facts in a misleading way. “Although many characters in mysteries lie, the most interesting characters deceive by telling the truth—but only part of it.”

She continues:

How are we to assess the significance of the facts cited in The Secret Relationship? Taken in isolation, they suggest that Jews played a special or disproportionate role in the Atlantic slave system or that their participation was more intense than that of other ethnic and religious groups. But in the context of additional facts, such as those just cited, they show that Jewish participation in the slave system was minor in absolute terms and was no different in intensity from similarly situated ethnic and religious groups. The larger context exposes a serious bias or distortion in the way The Secret Relationship characterizes the significance of Jewish participation in the Atlantic slave system. The characterization is “partial” in the literal sense that it tells only part of the truth needed to assess the significance of the matters at hand. What matters for assessing significance, then, is not just that an account be true but that it in some sense represent the whole truth, that it be unbiased. Furthermore, the fact that an account is biased or distorted is a good reason to reject it, even if it contains only true statements. Haack’s premise (2) is therefore false: to justify acceptance of a theory one must defend its significance, not just its truth.

Our projection of the whole truth has political implications, and so too do the partial truths accessible to us, because of the whole truth that they imply. There is therefore an importantly ethical dimension in how we assemble those partial truths.

The Hermeneutic of the Dangerous Question


In a world where theory and practice are in ever-greater harmony, can there be such a thing as a dangerous question?

Outside of a Lovecraft story, the very idea seems bewildering. How can questions be dangerous? Dangerous how? Dangerous to whom?

I believe there is such a thing. In what follows I will discuss two types of dangerous question, and the vague boundary that separates them. I will flesh out the characteristics of knowledge that make such dangers not only possible, but unavoidable.

Two Types of Dangerous Question

The last time I discussed this matter, I gave an example I felt was uncontroversially evil—“What should be done about the Jewish problem?”

I could have posed an even more forward question, such as “how are we going to eradicate the Jews?” This is the first type of dangerous question—the sort resting on morally monstrous foundations and seeking to pursue its implications without restraint. In this scenario, using the is-ought distinction as a cover would be especially heinous, but oddly enough it seems that people are rarely fooled when the proposed “is” is so extreme.

Nagging that “it is merely a factual question, we don’t have to suggest we ought to eradicate the Jews” does not really convince anyone, except perhaps for those who already agree with the morally monstrous premises and aren’t yet willing to openly say so.

It may seem that we needn’t concern ourselves with such questions at all. But to dismiss them would be to exhibit a remarkably short memory. On the scale of history, it was not so long ago that America had Jim Crow laws, internment of over 100,000 Japanese-Americans, and, of course, slavery. Nor was it long ago that a country considered to be at the heart of advanced western civilization created factories of death for the precise purpose of exterminating the Jews and other groups considered undesirable. We should never, ever forget that questions that seem unaskable now can always be brought back into play if political circumstances change, and indeed were asked in very recent history.

Which brings us to the second type of dangerous question—those that share many premises with more innocent or harmless questions but point towards potentially dangerous answers. The larger part of the danger comes from their connection to the first type of question—perhaps a line of inquiry carries with it the potential to make certain premises acceptable that were not beforehand. And perhaps those newly acceptable premises will bring us a few steps closer to a situation where the first type of dangerous question can be openly asked.

The first type of question—which let’s simply refer to as the immoral question—should naturally be resisted with all the resources we can bring to bear against it. We should apply social sanctions against those who insist on pursuing them, deny them positions within organizations that could assist their line of inquiry, and socially isolate them to the greatest extent possible.

The second type of question, however—which we can call the volatile question—cannot be avoided entirely. Depending on the particular question, it may well be that we shouldn’t avoid it. Consider the American federal system. Some have argued that it is inherently unstable compared to parliamentary systems. Certainly, this is an important and central concern. So what makes it volatile?

For one thing, the degree of precision available to us for answering the question is, unfortunately, inadequate to the task. In a complex system, very small errors can have enormous consequences. Even if the best evidence we have points to the greater stability of parliamentary systems in general, we cannot demonstrate with sufficient precision that such a system would definitely be stable or take root at all in America. To begin building a coalition around this goal, therefore, is a very risky proposition.

Nevertheless, the stability of our system of government is obviously an important question, and one that needs to be carefully investigated. To claim that no one should ask after it because some might be overhasty in applying the answers is a logic that would bar us from asking far, far too many important questions.

One way to think of these questions is in terms of risk and expected value. The combination of the probability and magnitude of the outcomes an immoral question opens the door to gives it a sharply negative expected value. Volatile questions have many more positive outcomes and a much lower risk of the negative ones—but part of that risk is constituted by the potential they create for asking immoral questions. For instance, one can imagine that certain inquiries about the biology of human beings create the possibility of drawing conclusions which ultimately make ethnic cleansing a more acceptable thought to entertain.

If we can distinguish among the types of questions by recourse to risk, it is uncertainty that creates boundary cases. And uncertainty looms very large, in as much as we are dealing with complex systems and the outcomes of specific choices are basically impossible to see beyond a few meager steps. And so there are many questions that might be either immoral or volatile, but also ones that might be harmless or volatile—even ones that might be immoral or harmless. Such boundary cases are what make these categories vague in the formal sense; and no amount of increased precision can resolve these boundary cases, as they are intrinsic to the concepts themselves.

To summarize, we can say that all questions exist on some spectrum of risk, and that irreducible uncertainty makes it hard to place many questions on that spectrum in any sort of precise way. In terms of ideal types, we can speak of high risk immoral questions, medium risk volatile questions, and low risk harmless questions.

The trouble with this way of thinking, of course, is that we tend to equate high risk with high reward. But that is not the case here. Immoral questions can be thought of as being high risk with zero payoffs at best, genocide-as-loss at worst.
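The risk framing above can be put in a rough formula—a loose sketch only, not a worked-out decision theory, and the symbols are my own illustrative notation rather than anything from the literature discussed here:

```latex
% Rough expected value of pursuing a question Q, where p_i is the
% probability of outcome i and v_i is the moral payoff of that outcome:
E[Q] = \sum_i p_i \, v_i

% Immoral questions: v_i \le 0 for every outcome, so E[Q] \le 0
% no matter how the probabilities fall.
% Volatile questions: most v_i > 0, but a few low-probability outcomes
% carry very large negative v_i, so E[Q] is positive yet fragile.
% Harmless questions: negligible probability mass on negative v_i.
```

The point of the sketch is only that immoral questions differ from volatile ones not in their probabilities, which are uncertain everywhere, but in having no positive payoffs at all.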

The Hermeneutic Question

At the end of the 20th century, Pope John Paul II asked what is certainly a volatile question within the Catholic Church: what is the relationship between the wrongs committed by the Church in the past and the present Church? To what extent is a wrong ever perpetrated by the Church as opposed to simply individuals who form a part of “the community of the baptized”? How should one even approach such a question, fraught with historical as well as theological and ethical presumptions?

The Church’s answer was provided in the document Memory and Reconciliation: the Church and the Faults of the Past. The sins of the past that it takes responsibility for in this document include the treatment of the Jewish people throughout the history of Christianity, even going as far as to ask whether the attitude of Christians towards Jews paved the way for the Nazi atrocity.

For its hermeneutics the document drew entirely on Hans-Georg Gadamer’s Truth and Method. In that work Gadamer asserts, among other things, the primacy of the question over the answer in our pursuit of understanding. He also puts prejudice (that is, prejudgement) at the center of our ability to interpret and understand. He asked:

Thus we can formulate the fundamental epistemological question for a truly historical hermeneutics as follows: what is the ground of the legitimacy of prejudices? What distinguishes legitimate prejudices from the countless others which it is the undeniable task of critical reason to overcome?

The answer, in Memory and Reconciliation, is as follows:

a certain common belonging of interpreter and interpreted must be recognized without which no bond and no communication could exist between past and present. This communicative bond is based on the fact that every human being, whether of yesterday or of today, is situated in a complex of historical relationships, and in order to live these relationships, the mediation of language is necessary, a mediation which itself is always historically determined. Everybody belongs to history! Bringing to light this communality between interpreter and the object of interpretation – which is reached through the multiple forms by which the past leaves evidence of itself (texts, monuments, traditions, etc.) – means judging both the accuracy of possible correspondences and possible difficulties of communication between past and present, as indicated by one’s own understanding of the past words and events. This requires taking into account the questions which motivate the research and their effect on the answers which are found, the living context in which the work is undertaken, and the interpreting community whose language is spoken and to whom one intends to speak. For this purpose, it is necessary that the pre-understanding – which is part of every act of interpretation – be as reflective and conscious as possible, in order to measure and moderate its real effect on the interpretative process.

Emphasis added by me.

Our interpretation of something depends a great deal on what we bring to it—including why we are attempting to interpret it at all. Unlike the German idealists and historicists, Gadamer did not believe that we are merely trapped within our own path-dependency here. Instead, he believed that there are legitimate prejudices, which are necessary for understanding, and illegitimate ones, which distort our understanding.

The Catholic Church’s approach, drawing on Gadamer, was to attempt to draw out the prejudices that are in play in order to enter them directly into the dialectic. As fellow Sweet Talker David put it:

Well, that was the entire point of my paper, which had failed to convince the Harvard University types: we don’t really know history. I did not go so far as to radicalize my view inasmuch as to say that we construct history wholesale, but it is certainly true that we arrange data within a certain framework until we are pleased with the outcome.

A healthy skepticism of the self is thereby necessary. What am I up to? Can I identify my biases? What are my external influences? Why is this emotionally significant to me? Moreover, when it comes to historical realities (for a lack of a better term), an academic humility is very helpful, namely that we don’t know very much at all, and we know that we don’t know very much at all because we don’t have much physical evidence, and we are, most assuredly, arranging evidence as taught, not as is obvious. We make a convincing case, that is all.

This is important, because it is very rarely the question as such that is immoral or volatile. Rather, it is the context in which the question is asked, the background the question brings with it, and the tight inner logic between situation and question, which together pull toward particular answers and toward political responses to those answers.

Even the most seemingly immoral of questions can be virtuous in the right context. If we ask “how would you exterminate the Jews?” not in the context of government policy but in the midst of the war with the Third Reich, while attempting to figure out what the regime is up to in order to sabotage it, the question takes on a rather different character.

All of this gets at why we must not be dogmatic about distinguishing questions of fact from questions of morality. All questions of fact are asked because we believe there is some reason we ought to be asking them. All answers offered to such questions are essentially answers we believe people ought to believe. And all of this is always tied up in larger political and moral questions in a way that can never be separated in practice, even if the two can be treated separately in theory.

The Politics, Ethics, and Rhetoric of Dangerous Questions

The hermeneutic question is always answered by recourse to politics, ethics, and rhetoric. Politics, because we are only capable of knowing together, and because what we think we know always has joint implications beyond the individual. Ethics, because to know together means being able to trust one another, and because becoming authoritative in a field carries with it institutional force that can be abused, among many other reasons. Rhetoric, because ultimately knowledge and understanding boil down to conversation and persuasion. As David put this latter point, “we make a convincing case, that is all.”

David also described what is political in this process:

He asked, “What do you do with history?”

Without thinking, I blurted out, “I just ignore it,” which is true, in one sense because of my deep respect for the science of linguistics, but not quite right, again, out of a deep respect for post-modern philosophical currents. What I was getting after was the primary importance of community in interpretation, but I didn’t say as much, so the entire room burst into laughter. I tried salvaging my point, but, you know how these things go.

Emphasis added by me.

When post-modernists say that all knowledge occurs within a community of rhetoric, most modernists simply smell relativism or anti-realism and run screaming (sometimes to return with a mob and torches). But there’s nothing anti-realist about it at all; it’s simply a characteristic of our knowledge stemming by necessity from specialization and the sheer scope of how much is known collectively compared to what can be known individually.

This is why the ethics of it is tied inseparably to the politics of it. Specialization of knowledge requires trust; in this way faith undergirds modern science as much as pre-modern theology. Trust can be betrayed, as in the vice of infidelity. But trust can also be misplaced, as in the vice of imprudence—a vice in the one doing the trusting.

I have seen several people today pursuing volatile questions in the belief that doing so is an act of courage. It is, but I think they see only part of the picture. So far as I can tell, in their eyes the courage is required because of the risk of social sanctions. From my point of view, the courage is required not only for this narrowly prudent concern, but also because, as I said, there is a larger social risk involved in those questions.

If you believe they are nonetheless important questions, it takes courage to proceed. It is like attempting to cross an unreliable bridge over a deep valley—not alone, but with a group of people you care about following behind you. If the destination is important enough and the alternate paths too few or too far, it may still be worth the risk.

My criticism of impolitic arguments, therefore, is not an argument that volatile questions should never be asked. It is an argument that prudence matters here, and not just in the sense of narrow self-concern about social sanctions: a broader prudence, one in which your connection to other people is acknowledged and the risks of your undertakings are managed responsibly.

Which brings us to rhetoric.

Garett Jones, so often my explicit or implied interlocutor on these matters, indicates I’m supporting some kind of Noble Lie story:

I will say right now that I do not believe in Noble Lies. Garett offers a possible alternative:

There may be something to this.

Consider the example of the post in question: libertarians who simply insist that a single vote can’t sway a big election. What was I getting at by calling out this particular phenomenon?

It seems to me troubling that so many Americans make the idea that a vote “makes a difference” so central to their faith in democracy. I actually happen to think that democracy, and the one we’ve got, is among the best forms of governance we can hope for in this fallen world of ours. As such, I would like to substitute a different story of the legitimacy of these institutions for this belief, something that would obviously take a great deal more space than I could devote to it here. One doesn’t have to agree with my prejudices in this regard to agree that we should seek to improve our politics rather than worsen them—whatever you might take that to mean.

Well, it seems to me that libertarians who hammer on this point about the margin of victory in elections have no interest in improving our politics. Many are quite proud of this fact. It’s not so much that I think this simple mathematical point about voting should never be made. It’s the context, the way they make it, that turns it into something intentionally volatile. They have found a vulnerability in people’s faith and gone for it for no other purpose than dispelling that faith, or perhaps the satisfaction of being right. But if people in America lose faith in democracy entirely, the results are highly unlikely to be good for much of anyone. Libertarians included.

The rhetoric of negation is a poison of our times, wielded by imprudent, spoiled children who do not realize how good they have it, and who haven’t the courage or the patience to work toward actually addressing the very real problems that exist. The latter cannot be done if we reject authority entirely, because without trust in authoritative sources we cannot build the knowledge required to face any sufficiently challenging task.

To Salvage or to Sever

I don’t think Schelling points (Garett’s “Focal Points”) are exactly the right metaphor here, though they undoubtedly play a role. Communication and rhetoric are far too central for that. I have in mind something more like an attractor, similar to but not the same as an equilibrium in comparative statics. A great deal of variation in politics, ethics, and rhetoric may be possible that still keeps us circling that specific attractor.

However, for some variations in politics, ethics, or rhetoric, we may get knocked off course from that attractor entirely, with no way back. If there’s another fairly similar one “nearby” this may be only a moderate change. But of course that’s not always the case. And the transition could be quite violent in any case.

Now, this sounds like a very conservative way of looking at things, I’m sure. But I’m not saying that revolution is categorically impermissible. I’m saying that we need to take responsibility for the implications of what we do and what we ask. It’s clear to me that the libertarians I criticize really believe that America’s political order is illegitimate and needs to be torn down and replaced. But if that is so, their life choices seem, to me, to be quite irresponsible—they continue to pay taxes and enjoy the benefits of that political order. If ethics demands revolutionary politics, perhaps they ought to behave a little less like every other conventionally middle class American and a little more like revolutionaries?

I think their revolutionary politics is misplaced, but I am more troubled by their willingness to erode the faith in what we have without doing much of anything to pave the way for anything better.

Garett’s example—the historical accuracy of the Book of Mormon—was interesting to me, especially when compared to David’s post on Leviticus:

In secular universities, and those religious universities whose worldview is formed by Nineteenth Century Continental philosophy, the minimalist Documentary Hypothesis is still taught as de rigueur, a hypothesis which posits that the books of Moses, especially the Levitical material, were fabricated by a power-mongering priestly caste during the Judahite exile in Babylon during the Fifth Century BCE. I am under the impression that this hypothesis is presented as ironclad secular scholarship, i.e., the truth, when it is essentially the telos of the Sacramentarian movement which came to dominate Enlightenment Era religiosity.

Religious fundamentalism, deeply offended by this radical minimalism, developed a response which became reflexively maximalist, in defiance of all evidence (even internal evidence) to the contrary, namely that Moses wrote every jot and tiddle of his five scrolls somewhere between 1550 BCE and 1440 BCE, and never shall a true Christian vary from that view lest he deny the efficacy of the Word of God.

David speaks of fundamentalists, but in fact there are two kinds. There are those for whom, as Garett said of the Mormons, simply to ask some volatile questions is a sign of apostasy. Perhaps even proof of it!

But then there are those whose dogmatic embrace of a cynical hermeneutics, a hermeneutic of suspicion, allows no other answer than that Leviticus was a cold, calculated Noble Lie.

David, a believer himself, nevertheless argues that we ought to proceed with an evidence-based approach. He defends, among other things, the use of the science of linguistics in order to attempt to “make a convincing case.”

To return to Memory and Reconciliation, it’s clear that the Church officials behind it believed that all the tools of the historian should be brought to bear in determining what wrongs were committed by the Church in the past:

What are the conditions for a correct interpretation of the past from the point of view of historical knowledge? To determine these, we must take account of the complexity of the relationship between the subject who interprets and the object from the past which is interpreted. First, their mutual extraneousness must be emphasized. Events or words of the past are, above all, “past.” As such they are not completely reducible to the framework of the present, but possess an objective density and complexity that prevent them from being ordered in a solely functional way for present interests. It is necessary, therefore, to approach them by means of an historical-critical investigation that aims at using all of the information available, with a view to a reconstruction of the environment, of the ways of thinking, of the conditions and the living dynamic in which those events and those words are placed, in order, in such a way, to ascertain the contents and the challenges that – precisely in their diversity – they propose to our present time.

It seems to me that the hardiest equilibria of politics-ethics-rhetoric are capable of dealing with an enormous number of volatile questions without blowing themselves up. Secular people too often underestimate religion in this way. Garett’s comments on Mormonism (which may very well suffer from the sort of fundamentalism that David encountered on the subject of Leviticus) put me in mind of a counterexample: the career of Lorenzo Valla.

Valla is most famous for proving that the Donation of Constantine was a fraud, something very politically inconvenient for the Vatican’s worldly aspirations. He was able to do so because of political cover from a king, it is true. And the regime of Pope Eugenius IV certainly made trouble for him, especially when he went on to question the provenance of the Apostles’ Creed.

Nevertheless, the Catholic Church did not dissolve in the face of his inquiries, and in fact Eugenius IV’s successor actually appointed Valla as papal secretary. Moreover, the techniques employed by Valla were spread most widely, in the end, by Jesuit schools.

In those days the volatility of questions could be kept under control to some extent because of how easy it was to restrict who entered the conversation. Valla entered at a moment when Renaissance humanism had suddenly expanded the number of participants, but ultimately the Church and other institutions adapted to this new reality.

The printing press resulted in a major break, and not on purpose, either. If Martin Luther’s 95 Theses hadn’t done the early-printing-press equivalent of going viral, it’s highly likely he would simply have lived out his days as a fairly successful member of the Catholic Church. Restricted to a small circle of fellow scholars, his arguments were much less volatile than when they were made available to the public at large.

Restriction is obviously no longer a viable strategy; both modern science and modern standards of living are possible precisely because of the drastic reduction in such restrictions. So not only is restriction unavailable, it’s not something we should pursue even if it were available.

For that reason, the burden on our ethics is heavier than ever. Not only to be trustworthy and to have trust in others, but to have prudence in how we approach our questions and in the rhetoric of our answers, and to have the courage to pursue the inquiries and actions worth pursuing once we have appraised the dangers.

Greater openness in our conversations requires greater responsibility from ever more people. I was once very optimistic about how this would play out; I am less so lately. But I will continue to argue for greater responsibility in this area, and I will do my best to show people that they can trust me to argue in good faith.

I would encourage you to do the same.

Previous Posts in This Thread

Does the Is-Ought Divide Make Atrocities More Palatable?

David Hume, seen here contemplating the benefits of the one-child policy

Those who might object to the incendiary title of this post need to stop being such mood affiliators. I’m merely asking a positive question, quit being so normative about it!

This week, on reports that China will be moving from a one-child policy to a two-child policy, I was horrified to see a slew of pieces on the benefits of the original policy, or how it “worked”. Then, when people had the predictable human reaction to such a framing, they received, without fail, this same rationalism-bot response:

The original source of my own horror was Tyler Cowen’s post, Did China’s one-child policy have benefits?

The post is masterful in its way:

  • He starts out by saying that the policy is an immoral restriction on personal freedom so of course he opposes it.
  • Then he points to a paper on how it nevertheless increased investment per child.
  • Nevertheless, the paper, and he, think that decreased fertility explains the lion’s share, if not all, of this, so the impact of the policy in this regard was probably small, or “it is highly likely the policy has been obsolete for some while.”

When I read this post, I thought to myself: he believes he’s washed his hands by making the categorical assertion about infringement of liberty. And he thinks that this is a mere is-ought matter, or as economists like to say (following Milton Friedman), he’s talking about positive, factual economics as opposed to dealing with anything normative.

And moreover, the moment anyone expresses outrage, he will call them a “mood affiliator,” a term he defines as follows:

It seems to me that people are first choosing a mood or attitude, and then finding the disparate views which match to that mood and, to themselves, justifying those views by the mood. I call this the “fallacy of mood affiliation,” and it is one of the most underreported fallacies in human reasoning. (In the context of economic growth debates, the underlying mood is often “optimism” or “pessimism” per se and then a bunch of ought-to-be-independent views fall out from the chosen mood.)

This is in the fine and growing tradition of saying that one’s opponent is cognitively impaired by some bias, or invalidating an argument by calling some piece of it a fallacy, rather than coming out and saying that you think the other person is arguing in bad faith (or, more simply, is mistaken).

To summarize:

  • A morally monstrous policy that, in a slightly moderated form, continues to terrorize innocent people, is appraised specifically in terms of its benefits.
  • The author washes their hands by saying that of course they’re normatively opposed to the policy, they just want to examine it positively!
  • The costs—in a bloodless utilitarian sense or in the sense of human costs—are not even mentioned.
  • Anyone who reacts with predictable horror at this framing is told that they’re simply being illogical or irrational.

And indeed, in discussing this post a defender of it implied that I was being a mood affiliator.

And consider this glowing comment on the post itself:

This was an excellent post. It clearly separates the normative from the positive, separates means from ends, addresses the most obvious objections (e.g., for twins, “what about birth weight”?) immediately, and kicks the comments beehive.

Excellent post. Clearly separates the normative from the positive, and means from ends! What more can one wish for?

The Unbearable Gulf of Facticity

Hume’s famous (and in some circles, infamous) is-ought divide comes, of course, from his A Treatise of Human Nature. The frequently quoted passage is as follows:

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention wou’d subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceiv’d by reason.

Rather than subverting “all the vulgar systems of morality,” it seems to me that it has instead given birth to an endless string of philosophers who take it as a matter of revealed truth that you cannot get a normative implication from a fact about the world.

But that isn’t what Hume said. As Charles Pigden explains:

He was not denying that the moral can be defined in terms of the non-moral. He was merely denying the existence of logically valid arguments from the non-moral to the moral. This becomes clear once we note that Hume does not think that he has to argue for the apparent inconceivability of is/ought deductions. It is something he thinks he can take for granted. This is what we would expect if he were making the logical point since it would have been obvious to his readers. For it was a commonplace of Eighteenth Century logical theory that in a logically valid argument the matter of the conclusion – that is the non-logical content – is contained within the premises, and thus that you cannot get out what you haven’t put in. Thus if an ‘ought’ appears in the conclusion of an argument but not in the premises, the inference cannot be logically valid. You can’t deduce an ‘ought’ from an ‘is’ by means of logic alone.

Which isn’t quite as Earth-shattering a revelation as it continues to be made out to be, because you can’t do much of anything by means of logic alone. Moreover, the obvious implication is that if you bake a relationship between is and ought into your premise, you can absolutely derive an ought deductively. For whatever that is worth.

Hume himself wanted to build up a foundation for morality from human nature—hence the name of the Treatise. Morality was not found in “the relation of objects” but rather in subjects, in our in-born moral sense. Our sense of what we ought to do, in Hume’s own system, originates from facts of our nature. Which is obviously different from deducing what we ought to do in a given situation. Nevertheless, even in Hume, this gulf does not appear to be so very vast.

Hume Was Contributing to a Project Which Has Failed

The Treatise only makes sense in the context of the Cartesian foundationalist project. Reason as it was understood in the Enlightenment was framed by Descartes’ project of radical doubt—only that which could be proved beyond a measure of doubt counted as justified, rational belief. His doubt took him right down to the level of whether he himself even existed, and the fact that there must be a questioner in order to ask such questions is the essence of the famous cogito ergo sum. From there, he sought to build reality back up through abstract reasoning. No one thinks he succeeded.

But he did succeed in imposing a new vision of rationality and reason, one that modern discussions still are all too often caged inside of. This is the notion that what is rational is only that which has some ultimate foundation that cannot be questioned. No such foundation exists, and in fact the very idea of it is incoherent, and so all contributions to this project were doomed to fail. Including Hume’s.

Descartes and his contemporaries also developed the subject-object distinction, something that has at this point become basically everyone’s background assumption of how reality and knowledge work. You can see it in Hume’s argument that “the distinction of vice and virtue is not founded merely on the relations of objects.” Operating within the subject-object schema, he argues that morality is not objective, “out there,” but rather a characteristic of subjects. This is not to be confused with our modern use of the word “subjective” to mean merely arbitrary. Hume clearly believed that noncognitivism, as it came to be called, provided a kind of foundation for morality. Just a different sort from those who believed in an external, objective moral law.

Unlike foundationalism, the subject-object distinction does have its uses. It has clearly been a helpful tool throughout the centuries. But it is not the only set of joints at which to carve reality, nor, in my judgment, the best. I think we’d all do better to start treating it as a special case rather than assuming it by default.

How to Think Like a Human

Socrates often avails himself of the sophistic arts of argument, but whatever the reason might be, that he does cannot in any way be attributed to some deficiency in the logic of that time. To be sure, Aristotle was the first to clarify the essential theoretical foundations of drawing correct conclusions, and in so doing he also explained the deceptive appearance of false arguments. But no one could seriously contend that the ability to think correctly is acquired only by a detour through logical theory. If we find in Plato’s dialogues and in Socrates’ arguments all manner of violations of logic—false inferences, the omission of necessary steps, equivocations, the interchanging of one concept with another—the reasonable hermeneutic assumption on which to proceed is that we are dealing with a discussion. And we ourselves do not conduct our discussions more geometrico. Instead we move within the live play of risking assertions, of taking back what we have said, of assuming and rejecting, all the while proceeding on our way to reaching an understanding. Thus it does not seem at all reasonable to me to study Plato primarily with an eye toward logical consistency, although that approach can of course be of auxiliary importance in pointing out where conclusions have been drawn too quickly. The real task can only be to activate for ourselves wholes of meaning, contexts within which a discussion moves—even where its logic offends us. Aristotle himself, the creator of the first valid logic, was well aware of how things stand here. In a famous passage in the Metaphysics, he declares that the difference between dialectic and sophism consists only in the choice or commitment in life, i.e., only in that the dialectician takes seriously those things which the sophist uses solely as the material for his game of winning arguments and proving himself right.
-Hans-Georg Gadamer, Dialogue and Dialectic

Nothing can stand up to Cartesian doubt. If that’s the standard, then philosophy will fail every time.

It seems clear to me that scientists have achieved an astonishing amount when it comes to understanding the world we live in, however incomplete that understanding might remain (and perhaps always must be). So when someone trots out an epistemology that calls into doubt our ability to know anything at all, it seems to me that it is that philosopher, rather than scientists, who begins to look rather foolish.

Similarly, it seems to me that every day, people leap from an understanding of the facts to conclusions about what they ought to do. Often they feel they have made mistakes, or think others have made mistakes. But to trot out a bit of formal logic, where you can’t get an ought in the conclusion if you have only an is in the premises, and then to treat any normative conclusion drawn from facts as a fallacy—that seems, to me, ridiculous.

I will grant that people’s inferences about what they should do are not as strong a piece of evidence as the atom bomb, the astonishingly precise and accurate predictions of quantum physics, or the results of germ theory. Nevertheless, the argument against such inferences is even weaker than that evidence. As Gadamer put it, “no one could seriously contend that the ability to think correctly is acquired only by a detour through logical theory.” And invoking this particular piece of logic is, in context, quite weak indeed.

Which brings us back to the piece that set me off: Tyler Cowen’s post on the one-child policy.

Look at the title again. The question is “Did China’s one-child policy have benefits?”

And yet the commenter I quoted above assures us that this cleanly separated “normative from positive.”

What, dear readers, is a “benefit”?

This is the problem with economic analysis in general. Economists talk a big talk about separating normative and positive, and thus honoring the is-ought distinction, but the very notion of a benefit is normative!

The counterargument, of course, is that economics assumes a “benefit” in terms of individuals’ own subjective valuations. But that is not a morally neutral choice, either. It demands, at least while performing analysis within the framework, that we exclude other notions of what a “benefit” is. Pure facticity is to be found nowhere in this.

The Humean turn in this regard is just one more way in which Enlightenment rationalism has made us very bad at thinking carefully about our rhetoric. Tyler Cowen is usually very careful in this regard, which is part of why I was stunned by the rhetoric of his post. But it’s clear, as his commenter observed, that the provocation was the main goal of how it was framed. I find this irresponsible given the horrors of the policy in question, horrors which remain unmentioned in the post.

Moreover, while he did not levy the accusation this time, this bit about “mood affiliation” is part of another trend that is due for a reversal: that of taking anything human about the nature of arguing and categorizing it in a way that makes it easy to dismiss.

Let’s take a relatively old one: the “ad hominem” fallacy that is the bread and butter of all Internet arguing.

When we’re talking about an Internet troll who is simply insulting you, of course there’s nothing productive going on.

But once you step out of foundationalism, it starts to become clear how much what we know rests on who we can trust. Trust puts the “faith” in arguing in “good faith”. If we cannot trust the one making an argument, we cannot trust the argument—simple as that. Taking an argument on its merits is not simply looking at the logic of the thing, it’s also looking at the credibility of both the one advancing it and the sources cited.

It’s one thing, in other words, to call someone ranting in a knee-jerk, thoughtless manner, a “mood affiliator.” I don’t know why you wouldn’t simply call such a person unhinged, but if sounding more clinical floats your boat, who am I to judge?

But when you’re talking about the benefits of a policy that is not only an infringement on freedom, but involves an ongoing brutality and inhumanity of a systematic sort, it is shameful to call the predictable negative response to it “mood affiliation” or a failure to understand the is-ought distinction.

Take Responsibility for Your Rhetoric

The Politics of Truth had several explicit targets and many more implicit ones that I felt best left unnamed. I will continue to leave them unnamed, but here are a couple of passages that might be of interest:

There’s a problem inherent to this fact of our nature. What if merely asking a particular question automatically made you less credible? And what if any group that systematically investigated that question, and other related ones, became similarly less credible as a result?

This is no abstract point. Asking “what should be done about the Jewish problem?” tells us a lot about who you are and the group you belong to. Anyone with an ounce of decency would not pay any attention to the claims made by the kind of person who would seriously ask and investigate that question, or the groups that would focus on it. The only attention that such people would draw would be defensive—it would be the attention we give to something we consider to be a threat.

Further down:

Now a modern libertarian, good child of the Enlightenment that they are, wants to assert the clear mathematical truth that no individual vote makes a difference. Especially at the national level. I have seen this happen many times, as a GMU econ alumnus. I have done this many times myself.

The problem is that it’s never really clear to me what people hope to accomplish by doing this, other than being correct. Certainly the strong emotional backlash shows that there’s a cherished idea there. But other than casting ourselves in the role of unmasker of a manifest truth, what do we do when we insist upon this point?

A single vote does not sway an election, and therefore it follows…what? Other than undermining a cherished idea, which is indeed incorrect, what exactly is the larger value of the specific point?

In my view, “I believe in freedom so I would never implement the one-child policy, but if I didn’t, here are some benefits” is highly irresponsible and impolitic rhetoric.

Here’s one way that the substance of Cowen’s post could have been framed:

Subject: “Did Decreasing Fertility in China Result in Higher Relative Human Capital Investment?”

Body: “A recent paper suggests that this is the case, drawing on the natural experiment created by twins. They ultimately conclude that the one-child policy likely had little impact on this due to pre-existing trends in declining fertility in China and the rest of the region.”

Sounds different, doesn’t it? And the substance really is basically the same.

To frame it in terms of “the benefits” of the one-child policy, especially with substance so reasonable as that, is either poorly thought out or unethical.

But perhaps I am just a mood affiliator who does not properly understand the is-ought distinction.

The Process and Politics of Systemic Change

The Deluge

Scott Alexander is worried about the siren song of systemic change. His concerns combine the limits of knowledge with the nature of politics. This is something I’ve taken a few stabs at myself, and Alexander’s post seems like as good an opportunity as any to revisit the subject.

The Being and Becoming of Groups

Political philosophy, political science, and economics very often deal in models and comparative statics. Change is therefore thought of as getting from point A—the current social system—to point B—some proposed alternative. Sam provided a good take on the ontology of this:

In the case of economics, the core ontological preoccupation is with the nature and existence of market equilibria and their constituent parts: supply and demand, institutions, representative agents, social planners, and so on. Some focus on ontologies of being, like a static equilibrium, while Hayek and Buchanan famously had ontologies of becoming that emphasized the importance of analyzing economic processes.

This is a novel way to think about economic analysis, but it implies a symmetry in the practice of economics that I’m fairly certain does not exist. The ontologies of becoming and the analysis of processes are largely considered a marginal concern among most economists, if they are considered at all. Most economists focus all of their energy on the analysis of being.

Systemic change looks very different when approached from a comparative statics perspective as opposed to a process perspective.

Consider the following example: the American federal system. You have a judiciary, the presidency, and a bicameral legislature. If we’re thinking about systemic change, in a framework of comparative beings, we may talk about wanting to switch to a parliamentary system of some sort. You can debate the details about what sort of parliamentary system you would want—there are plenty to choose from and you can of course come up with one that is somewhat unique—but whatever the specific alternative, most of us would agree that switching to a parliamentary system would be a pretty radical change.

Democracy is democracy, right? If the parliamentary system does it better than ours, we should start pushing to switch to that—right? Isn’t it just that simple?

Scott Alexander does not think so. Implicitly invoking Nassim Taleb, one of the characters in his dialogue worries about very rare but highly consequential risks:

There are many more ways to break systems than to improve them. One Engels more than erases all of the good karma created by hundreds of people modestly plodding along and making incremental improvements to things. Given an awareness of long-tail risks and the difficulty of navigating these waters, I’m not sure our expected value for systemic change activism should even be positive, let alone the most good we can do.

The reference to Engels points to a case where one person’s altruism—in his case, continuing to work for a living in order to altruistically fund the scholarship of Marx—can actually have huge and horrible systemic implications. To the extent that Marx’s ideas are responsible for the communist bloodbaths of the 20th century, Engels’ altruism helped to create a catastrophe of historic proportions.

The example has problems—but let’s put that to the side for now. Instead, let’s consider the person who is probably responsible for Alexander framing this problem in terms of “long-tail risks”: Nassim Taleb.

Taleb’s entire worldview boils down to a few very simple ideas:

  • There are events that are highly consequential but happen so rarely that no one may even be aware of their existence—pick your magnitude of “consequential” and your frequency of “rarely”, as well as whether you think people aren’t aware of them or merely think they won’t happen again.
  • We should strive to bound our downside risk as much as possible and leave as much upside potential as possible.
  • On a long enough timescale, all downside risks will play out.
  • The longer something has been around, the more likely it is to have encountered at least some of these rare, high magnitude events and thus to have proved itself resilient against (or better, antifragile; able to improve as a result of) those events that it has survived.

Taleb talks the talk when it comes to being against consequentialism and for virtue, stoicism, and duty, but at his core he’s just a consequentialist in a Stoic’s toga. I would argue that both his notion of the sacred, and his ontology, have not been very well thought out.

In terms of the sacred:

This is not just how he talks about duty; it’s also how he talks about religion, something that an actual believer would, I think, find very strange. Take for instance our own Drew Summit, a devoted Catholic. When an online neoreactionary posited a consequentialist framework in which the only way to preserve our prosperity and order into the future was to mass-sterilize the poor (no, I’m not making this up), his answer was—even if all of that were true, we should not do it. In his words:

Duty, faith, and the sacred are not about consequences. Indeed, their very importance is to emphasize the things that matter beyond mere consequences. But more on that further down.

For now I want to look at the ontology underlying Taleb’s argument about how traditions and other time-tested things serve to hedge our bets against rare catastrophes.

From the perspective of being this makes a certain logical sense. The American federal system has been in place for almost two and a half centuries. By Taleb’s logic, it has shown itself to be resilient against once-a-decade catastrophes, or ones that come every twenty, thirty, or fifty years. Maybe we can say that it’s resilient against once-a-century events, or even once-every-two-centuries events, but those have happened so few times at this point that it could have just been lucky.

Nevertheless, a brand new system would never have been tested against such centennial events, and in Taleb’s ontology I’m fairly certain that imposing a parliamentary system would count as a new system. This is because, even though parliaments have been around elsewhere for longer than the American federal system has been here, it’s not clear how that system would graft onto the norms, practices, and other existing institutions here. So the change could make us more vulnerable to rare catastrophes.

Here is the problem: in what sense is there a continuity between the system as it existed during the presidency of George Washington, and the system as it exists under the presidency of Barack Obama? We can analyze the being of the system that existed under George Washington and the being of the system that exists now. We can remark upon the differences. But the main story here is one of becoming—the colonies becoming independent and loosely uniting under the Articles of Confederation, then throwing that out in favor of the Constitution and becoming the United States under the federal system. And then becoming the system we have today after having been the system that George Washington was a part of.

But there is more—the system under George Washington was not a static being. It was in the process of becoming from the very first day, and it did not suddenly hit a static equilibrium at some point along the path through his two terms in office. Unexpected structures appeared in this process right away, such as the creation of the cabinet.

Now, we also have the federal agency system. My question to Talebians, and Alexander specifically, is: wasn’t the birth of the agency system a systemic change? Was it so big a break from the past that any “resilience” or “antifragility” the system had throughout the 19th century has been lost? How much continuity is enough? What exactly counts as continuity when tradition is constantly in motion?

Alexander begins his piece with a look at a few people on the left who clearly think effective altruism is just another way to preserve global capitalism, which they wish to abolish.

In this context, let me list a few other systemic changes that come to mind:

  • The “globalization” of trade and commerce in the 19th century.
  • The European conquest of most of the world in the 19th century that they hadn’t already colonized in the 18th.
  • The 1930s: the end of 19th century globalization, the turn to fascism and communism in many countries, the birth of the modern social-democratic welfare state in the rest.
  • The spread of communism after World War II.
  • Decolonization during the same period.
  • Modern globalization: quite young still, long-term effects quite uncertain!

Would turning back the state of globalization to where it was in the 1970s count as systemic change, or would allowing the process of globalization to continue as it is currently proceeding be one? Was the end of the 19th century globalization a systemic change or was its onset? If its onset, was the end of it merely a return to something older, and therefore more resilient?

I’m inclined to think that systemic change is an unavoidable fact of life—indeed, of existence. We are constantly in the process of becoming, and one way or another we hit discontinuities big and small along the way.

As for the limits of our knowledge—well, I’ve written quite a lot about that as well, lately. Suffice it to say that I think we can know a great deal, but I stand with Aristotle in thinking that each subject matter has a level of precision appropriate to it. When it comes to complex systems, I’m with Taleb and Alexander—the level of precision is quite low. Though Taleb seems to alternate between saying that and claiming that his arguments are backed by unquestionable mathematical certainty…so there’s that. Alexander strikes me as being rather more intellectually humble on this score.

But Alexander’s point is not really about systemic change as such. It’s about politics.

The Unity of Politics, Ethics, and Rhetoric

Luisa chuckled. “I hear you, sugar. I’m not gonna say you’re wrong. But I have to warn you that this is the word—‘politics’—that nerds use whenever they feel impatient about the human realities of an organization.”

-Neal Stephenson, Seveneves

I actually think that Alexander’s post focuses much more on the nature of politics than on the specific risks around systemic change.

One thing I like about the post is that it seems clear that he is sympathetic to the “if we got politics out of the way and just let smart people roll up their sleeves and get it done, everything would be fixed” perspective, but also is highly aware of its flaws.

Alice: Now you’re just being silly. There’s no efficient market hypothesis for politics!

Bob: But why shouldn’t there be? A lot of people mock rationalists for thinking they can waltz into some complicated field, say “Okay, but we’re going to do it rationally“, and by that fact alone do better than everybody else who’s been working on it for decades. It takes an impressive level of arrogance to answer “Why are your politics better than other people’s politics?” with “Because we want to do good” or even with “Because we use evidence and try to get the right answer”. You’d have to believe that other people aren’t even trying.

Further down:

Alice: I…think you’re being deliberately annoying? It seems like exactly the same kind of sophisticated devil’s-advocate style argument we could use for anything. Sure, nothing is real and everything is permissible, now stop playing the Steel Man Philosophy Game and tell me what you really think! It really should be beyond debate that some policies—and some voters—are just stupid. Global warming denialism? Mass incarceration? Banning GMOs? Opposing nuclear power? Not everything is a hard problem!

Bob: I really do sympathize with you here, of course. It’s hard not to. But I also look back at history and am deeply troubled by what I see. In the 1920s, nearly all the educated, intelligent, evidence-based, pro-science, future-oriented people agreed: the USSR was amazing. Shaw, Wells, Webb. They all thought Stalin was great and we needed a global communist revolution so we could be more like him. If you and I had been alive back then, we’d be having this same conversation, but it would end with both of us agreeing to donate everything we had to the Bolsheviks.

Alice: Okay, so the smart people were wrong once. That doesn’t mean…

Bob: And eugenics.

Alice: Actually…

Bob: ಠ_ಠ

It’s ironic to me that the Alice character accuses Bob of taking an outsider’s view of politics, when the approach she proposes seems entirely like the sort of rationalism that acts as though politics is an object of study rather than something we live with every day. Such rationalists seek explanation rather than understanding.

What does politics actually look like, from the inside?

I think understanding the whole picture where politics is concerned requires us to speak of the unity of politics, ethics, and rhetoric.

In another post on politics, I discussed what I referred to as the American democratic religion.

From a very young age, I was taught stories about the founders, and the proper moral framework in which the Revolutionary War ought to be understood. The history that I learned in school was steeped in democratic values; it was all too easy to see American history in particular as a straight line of progress of increasing enfranchisement, abolition, women’s suffrage, and civil rights. Martin Luther King, Jr. was elevated into the same pantheon as Jefferson and Lincoln.

Unlike most of my libertarian and economist friends, I believe that groups can have a distinct ontological status from individuals. To put it more pragmatically, groups are often the best level of abstraction for explaining certain causal relationships.

(as an aside—the paper in the previous link is highly recommended for anyone interested in the methodological individualism vs holism debate).

However, there are many sorts of groups. The macro-economy and the nation are obviously related and influence one another, but may be usefully considered separately. In order to exist as a group, a macro-economy simply requires trade and financial markets on a certain scale. Where we draw the lines on one macro-economy vs the other is, in some ways, as arbitrary as committed methodological individualists believe all borders to be. In other ways, not so—particular regulatory and cultural environments certainly have an impact on the overall character of a given macro-economy.

But with nations, or at least with the sense of being “a people,” things are a little different. The existence of a people requires a narrative—or more precisely, a body of narratives—which define who they are, do the difficult work of bridging is and ought, and provide us with our sacred reasons for action. A people also always has important rituals, in which “the boundaries between individual and community become less-defined.”

A healthy club has subsumed your self to it without actually reducing your value as an individual; you are not necessarily subverted to the club, but you cannot assert individuality without pressing against the boundaries created by the ritual. If you do not partake of a haggis, you really aren’t a member; you’re an observer, and you cannot receive the benefits of the club, which are mostly transcendental.

My belief is that the American democratic religion has done a remarkably effective job at taking some 320 million people spread across a large land mass and turning them into a people, rather than just a smattering of different tribes. If it seems to get harder and harder to get some consensus on a growing list of issues, as the popular narrative of political polarization implies, it could be that our national narratives are feeling the strain of just how big our population has become. Or it could be that such polarization is overstated, relative to how it has always been.

In any case, this body of stories and rituals also form the environment in which our understanding of ethics is nurtured. Reports of the is-ought divide have been greatly exaggerated—we construe each situation as part of a whole which has the character of a narrative; the ought flows naturally from the is, when we are looking at the is from the inside.

That said, in a cosmopolitan nation with a rich literary, philosophical, and scientific tradition, we are able to learn a great deal about ethics outside the narrow scope of the narratives that currently sustain the people of which we are a part. As we learn, we may come to feel that there are some things that are unethical about our group—something that we believe needs to be changed. How fundamental (or systemic) these changes are will vary, of course. In America and in the west in general, we are, for better or worse, educated to consider big systemic change to be a good idea, a necessary part of progress. I think Alexander is right to try and counterbalance that.

The ethical problems that we see may be, per MacIntyre, problems that can be identified purely within the perspective of the reigning traditions of thought of your people. Or, per Gadamer, it may be that they are problems that are obvious only when you have expanded your horizons by looking elsewhere, something anyone is capable of doing by reading the many texts from different points in history as well as from different cultures.

Ethics here is taken in an Aristotelian sense—it’s not limited to what we moderns would consider the moral domain. “Good” in this sense is inclusive of the moral but goes beyond it; think of a good teacher or a good carpenter. Phronesis, practical wisdom, unites what we consider the narrowly practical with what we consider the narrowly moral.

I like that Alexander flirts with “an efficient market hypothesis for politics,” and it’s unfortunate to me that in an update to the post he notes that he takes this comment very seriously. That comment boils down to “consequentialism is true, few people realize it, so there are trillions of dollars lying on the proverbial table in politics.” (EDIT: seems I’ve misread it)

Except that the status of consequentialism, against the alternatives which are currently in play as well as those which are not, is just as contestable as the positions this commenter is seeking to subvert to it. Giving the commenter the benefit of the doubt, I assume they are aware that consequentialism has problems, just as all frameworks do, but that they simply believe it the best framework we have.

But the impulse which drove Alexander to grasp for something like an efficient markets hypothesis for politics should also extend to philosophy; all obvious assertions drawn from consequentialism have been made, all obvious responses have also been made, and the conversation has moved on from there. The primacy of consequentialism is not at all obvious, even to extremely smart and heavily researched people who have spent years thinking and debating about the matter. In short (too late), there is definitely not any money on the table here.

The central concern of Alexander’s post is what would happen to the effective altruism movement if it made systemic change a central part of its platform.

And I also think effective altruism has an important moral message. I think that moral message cuts through a lot of issues with signaling and tribal affiliation, that all of these human foibles rise up and ask “But can’t I just spend my money on – ” and effective altruism shouts “NO! BED NETS!” and thus a lot of terrible failure modes get avoided. I think this moral lesson is really important – if everyone gave 10% of their income to effective charity, it would be more than enough to end world poverty, cure several major diseases, and start a cultural and scientific renaissance. If everyone became very interested in systemic change, we would probably have a civil war. Systemic change is sexy and risks taking over effective altruism, but this would eliminate a unique and precious movement in favor of the same thing that everybody else is doing.

Here is where we see the way in which rhetoric enters into the unity with politics and ethics.

Effective altruism is a young movement attempting to assert itself as worthy of our attention and loyalty. Its proponents are attempting to tell us stories (rhetoric) about the world and the duty of its most privileged inhabitants (ethics) in order to create a group of people who consider membership in the movement to be an important part of who they are (politics).

Consider, once again, the American democratic religion. How did it come to be? The answer: through persuasion, accomplished through both word and deed. The art of rhetoric begins by understanding, as best as we can, the perspective in which our audience is situated. It aims to translate ideas which are perfectly comprehensible within our own horizon of understanding into a form that can be understood within the audience’s horizons. It also aims to make it clear why those ideas ought to be evaluated a certain way; the way in which we understand them. As Gadamer emphasized, such translation necessarily entails transformation, both of the ideas and of the audience. In cases like the creation of the American democratic religion, not only was a new group born out of what had arguably been several smaller ones, but an entire nation with institutions of governance was created. A dramatic transformation indeed.

Effective altruists attempt to speak to our charity, but also to our prudence (in the narrow, modern sense of the word). They speak to our fellow feeling, our sympathy for the plight of the less fortunate, but also to our hard-nosed calculating side, which favors efficiency. “Just give up 10%—you get to keep 90% to do whatever you like!—and you can be part of a movement that ends extreme poverty around the world, forever.”

It’s a powerful message. It certainly seems to be on the path to establishing itself as a proper group with a distinct ontological status apart from simply “American” or “rationalist” or what have you. Alexander is worried that, at this stage, trying to be all things to all people—especially where big, divisive questions about systemic change are concerned—will stop this process before it has really begun.

It is a sharp, politic concern, for which I applaud him. These days, all-or-nothing rhetoric has become a corrosive tool, resulting in nothing but negation without creation. It’s good to see someone out there pushing for a healthy, prudent politics.

 

Previous Posts in This Thread:

Tradition, Authority, and Reason

When I started reading up on the virtues and following the trails through philosophy that I found along the way two years ago, I was pretty sure that I was a Burkean traditionalist of some sort. It was Alasdair MacIntyre who began to throw a wrench in this when he pointed out that Burke treated tradition as a sort of black box—something that actual adherents to traditions do not do. Moreover, Burke somehow did this while remaining an economic liberal for his day, something very much not traditional to his nation.

We are apt to be misled here by the ideological uses to which the concept of a tradition has been put by conservative political theorists. Characteristically such theorists have followed Burke in contrasting tradition with reason and the stability of tradition with conflict. Both contrasts obfuscate. For all reasoning takes place within the context of some traditional mode of thought, transcending through criticism and invention the limitations of what had hitherto been reasoned in that tradition; this is as true of modern physics as of medieval logic. Moreover when a tradition is in good order it is always partially constituted by an argument about the goods the pursuit of which gives to that tradition its particular point and purpose.

MacIntyre presents a different sort of traditionalism from Burke, one more like Michael Oakeshott’s. There is reason and reasoning but these are only made coherent by the traditions they are situated within.

Continue reading “Tradition, Authority, and Reason”

Interrogation, Dialectic, and Storytelling

The Boyhood of Raleigh by John Everett Millais

As far as I can tell, “deconstruction” is a word that simply means “academic trolling,” at least when it is performed by the man who coined it—Jacques Derrida.

This can clearly be seen in his deconstruction of speech act theorist J. L. Austin, which Jonathan Culler provides an account of in On Deconstruction.

Austin was arguing, against his predecessors, that language is not simply about making descriptive statements. He pointed out that fitting language into that straitjacket meant treating as exceptional what in fact was characteristic of huge amounts of discourse. As an alternative, he proposed the idea of language as including both constative statements, those which are true or false, and performative statements, those which have some consequence within the social reality in which they are stated. The canonical case of the latter would be the making of a promise.

Continue reading “Interrogation, Dialectic, and Storytelling”