The Gordian Knot

Featured image is Alexander Cutting the Gordian Knot, by Antonio Tempesta

One of the key disputes in the continental-analytic divide in modern philosophy is a matter of style. German and French philosophers largely follow Hegel’s impenetrable style—or worse, Heidegger’s—while English-speaking philosophers largely follow Bertrand Russell’s approachable prose.

A problem arises immediately because the substance of philosophy is relevant to the question of its style. Consider the analogy: the conclusions of economic theory concern human beings, and so are relevant to the practice of economics itself.

Philosophy falls into a similar recursion, even when we are just talking about the style in which philosophy is done. Plato’s decision to write only in the form of dialogues was a conscious choice made on a philosophical basis; his master Socrates believed that written philosophy was a contradiction in terms.

What, then, are the philosophical presuppositions behind the stylistic divide in modern philosophy?

There’s a lot that can just be chalked up to bad or sloppy writing. I’m told that Kant wrote The Critique of Pure Reason in a hurry and did not get it properly edited. Analytic philosophy itself is no stranger to putrid prose. One need not be a good writer to become a professional philosopher, even in the English-speaking world.

But Hegel’s writing style was, I believe, a choice. And Heidegger’s certainly was. Heidegger believed that common language came with philosophical assumptions baked in. That point is at least defensible. His solution, however, seems worse than the problem. The relentless neologisms and wordplay are all but impenetrable. George Steiner claimed that in German, the style has literary merit. Perhaps so, but I am in no position to judge that. All I know is that Gadamer, no great stylist, nevertheless was able to wrestle with the same perceived problem in a perfectly straightforward manner.

If we are going to write, we should strive to be good writers. In this way, I stand much closer to Russell than to Heidegger. However, being a good writer does not always mean making the simplest possible point in the simplest possible way.

I fear that for all the faults of the continental tradition (the USSR used Marx to justify mass murder, while Heidegger was a registered Nazi who delivered a speech touting the party’s virtues mere months after Hitler was made chancellor), the analytic tradition too often thinks that the world’s problems are merely a set of Gordian Knots begging for Alexander’s solution.

I often think, these days, in terms of “low context” or “direct” speech as opposed to “high context” or “indirect” speech, distinctions I learned of in Arthur Melzer’s book Philosophy Between the Lines.

Anthropologist Edward T. Hall, for example, probably the most famous and influential writer in the field, distinguishes between what he calls “low context” societies like the United States and Europe and the “high context” societies found throughout most of the developing world. In the former, when one communicates with others—whether orally or in writing—one is expected to be direct, clear, explicit, concrete, linear, and to the point. But in most of the rest of the world, such behavior is considered a bit rude and shallow: one should approach one’s subject in a thoughtfully indirect, suggestive, and circumlocutious manner.

To forestall an objection from Ryan, Melzer does not rely on evidence from theorists alone. He also draws on practical guides created for people who have to work in other cultures which emphasize the pervasiveness of indirect speech outside of the west. The book assembles a formidable corpus of such practical and theoretical discussions, all pointing in the same direction—towards the existence of cultures favoring “low context” and “direct” styles on the one hand, and “high context” and “indirect” styles on the other.

The chief distinction is not between obscurantism and clarity, but how much of an onus is put on the audience. From one paper Melzer cites:

The burden for understanding falls not on the speaker speaking clearly, but on the listener deciphering the hidden clues. In fact, the better the speaker, the more skillful he may be in manipulating the subtlety of the clues.

To a western and especially an English speaking audience, this seems the very definition of obscurantism. But Melzer emphasizes the pedagogical value of making students pay close attention to a text in order to be able to understand it. What seems superficially easy to understand too often yields only a superficial understanding.

It is an uncomfortable fact for philosophers that stories, not philosophy, are the chief means through which societies convey wisdom. Philosophy and art have struggled over which was the appropriate source of wisdom since antiquity. Philosophy succeeded in achieving a certain status among intellectuals. But most people, especially children, find more wisdom in cartoons, movies, or comics than in philosophy—continental or analytic, ancient or medieval, Eastern or Western.

In my view, there must be great value in conveying ideas indirectly. Of the writers here at Sweet Talk, no one has demonstrated this more thoroughly than David.

This does not mean the opposite is true—that direct communication lacks value. Indirect and direct, high and low context communication, both have their place.

What should not have a place are bad writing, badly organized presentation, and intentionally opaque language.

Kant, Hegel, and Heidegger were not seeking to provide well written parables or even dialogues, for all of their love of dialectic. For all the talk about “dialectical” styles, they ultimately gave lectures and wrote essays and books. And the stylistic choices they made provided cover for later, more mediocre thinkers to shroud their mediocrity in impenetrable writing.

What can be considered good writing depends on the goal as well as the audience of the piece. Good writing for a technical audience will be different than good introductory writing. Conveying wisdom through poetry, parable, or essay will also be judged differently in each case. But having different standards isn’t the same as having no standards, and philosophical writing is too often synonymous with bad writing.

Matt Bruenig is Anti-Social

Everyone has a hobby. For Matt Bruenig it’s writing “takedowns” of libertarianism as lacking any singularly coherent normative theory. In his latest, he deconstructs the “just desert” basis of capitalism, specifically the claim that capitalism rewards risk. In this post my goal isn’t to defend just desert theories, per se. Rather, I’d like to shed some light on Matt’s subversive modus operandi and the fallacies and dangers within it.

For context, I have gone back through Matt’s archive and not found a single positive defense of his own normative framework. In this sense he prefers to define his ideology negatively, as “not x therefore y”. And while he makes regular gestures towards egalitarianism, he has yet to show how his own abstract normative theory is any less arbitrary, or any less vulnerable to the deconstructionist tactics he is fond of employing.

From Kant to Hegel to Hayek

To understand why Matt is so successful at taking apart normative theories and so cautious about defending his own, it’s worth tracing the background assumptions of modern moral philosophy back to Kant. Kant famously claimed our conceptual commitments are inescapably normative (e.g. if I say x is a cat I am “responsible” for a particular judgement about x) and that, in making those commitments, we are required to maintain justificatory, inferential and critical consistency (e.g. we can’t simultaneously say x is not a cat). Kant and followers like Rawls thought you could use this insight to construct a transcendental argument that bridged is and ought. Read philosopher Robert Brandom’s work for more on this, or enjoy this short video.

The key error Kant made was in taking as sacrosanct the “mentalistic” paradigm inherited from Descartes, which gave “subject” and “object” ontological primacy, and “representation” primacy in theories of epistemology and intentionality. If you don’t believe me, read about Fichte’s notion of the “pure I”. It took Hegel to enter the scene and point out how weird the implication of a “noumenal” or objective realm of “things in themselves” was if it meant having knowledge of the inaccessible. So he “naturalized” Kant’s theory of normativity by arguing that it had to be situated socially and recognitively in cultural practices. This in effect rejected the subject-object paradigm by shifting to an intersubjective theory of meaning. See Jurgen Habermas on “de-transcendentalizing mentalism” for more on this.

Normativity is, to paraphrase Kant, a property that leads a concept to self-bind, e.g. a duty as distinct from compulsion. Hegel accepted this but argued that it in no way necessitated Kant’s transcendental approach, in which de-contextualized or “pure” normative principles were derived prior to interaction with concrete problems. Rather, Hegel argued normativity was immanent to the social process of intersubjective norm construction, the most “objective” of which are our stable institutions. We are bound to the normative commitments implicit in our objective institutions because in a very real sense they mirror us. This is a deep concept, but can be easily understood as a precursor to the idea of the “extended will” that follows from embodied cognition in cognitive psychology. As philosopher of mind Andy Clark explains the idea,

advanced cognition depends crucially on our ability to dissipate reasoning: to diffuse achieved knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political and institutional constraints.

The rationality of our social structures is therefore often hidden as a feature, not a bug. Yet as self-conscious beings we ought to be able to extract and make explicit the implicit principles that pre-structure our social practices. For example, perhaps “justice as fairness” isn’t a context-free normative standard which looms over all other practices. Instead, what if discrete norms like “I cut, you choose” or “let’s flip a coin” or “first one to improve and enclose unclaimed land gets it” develop spontaneously through cultural evolution as low cost ways of securing agreeable cooperative social relationships? For more on this idea, read Joseph Heath’s “A Puzzle for Contractualism”.

Rawls would actually be sympathetic to this view, since he characterized the egalitarian norm as based on conflict reduction. But that does not imply that particular norms can be isolated and then imposed from the top down. That makes the basic category error that FA Hayek identified as being behind all forms of “rational constructivism”. The fairness norm only gained its normative authority or self-binding character from the legitimating history of mutual recognition that preceded and maintained it in specific cases. This would seem to better match observed reality, where there is not, for example, one universal standard of “fair ownership”, but a multiplicity of standards rooted in historical practice. Thus instituting Rawls’ difference principle, for example, would not be just in the US context without substantial cultural buy-in — for the same reason imposing American capitalistic property norms in developing countries regularly leads to violent pushback.

Bottom Up Normativity

If that is how actual normativity arises in practice, talking about “just desert” in the abstract is totally wrongheaded. Instead you would need to instantiate a desert norm in a concrete social reality. Then you would have to carefully investigate the genealogy of the norm to discover its implicit rationale. Jurgen Habermas calls this approach “rational reconstruction.” Note that reconstructing the rationality implicit in normative behavior is an interpretive (not descriptive) exercise.

People who follow this otherwise post-Kantian tradition have actually done this for the American context of capital and desert. The Hansmann argument for shareholder primacy, for example, rests on the demonstration that ownership in a firm will tend to flow to the constituency with the lowest governance cost, which for complex companies tends to be shareholders (specifically, it can be demonstrated that shareholder primacy is Kaldor-Hicks efficient). Risk is a non-trivial part of this issue. As residual claimants, capital holders are the most expendable insofar as their payout is what remains after other contractual obligations have been honored.

So does that mean at some point in history someone went out and designed corporate law based on a grand utilitarian moral theory? No, on the contrary. These norms of ownership were discovered in the same bottom up way as norms like “I cut, you choose”. Scholars like Hansmann had to explicitly reconstruct this rationale through interrogation of the alternatives, like stakeholder theory. (By the way, reconstruction of the welfare state also points to a transaction cost basis, not egalitarian principles.)

Still, it must be said that capital owners have a holistic normative relationship with the present state of affairs — that is, the bundle of concepts that tend to accompany norms of ownership, like “entitlement” and “deservedness,” apply perforce. From their own standpoint and from the point of view of the community at large, the reductive claim that “shareholders retain profits ONLY because that’s the best for the social welfare function” is illegitimate because the rational reconstruction only ever identifies one feature of an ethical totality.

To see this, consider that the dividend cheques get delivered not due to an awareness of Ronald Coase’s most cited work, but because of the self-binding intersubjective concept of ownership itself. It isn’t just the legal realist’s vision of command and compulsion. The USPS guy delivers the cheque largely because he has internalized and affirms the prevailing norms of ownership by which he himself implicitly benefits. As H.L.A. Hart put it, his recognition gives the law an “internal point of view.”

In this light, Matt’s struggle for abstract consistency is at root subversive. He has innumerable posts arguing against private ownership that would make no normative distinction between a company merger and civil asset forfeiture, other than perhaps that the latter is typically more regressive in its effects. The main upshot of Matt gaining any following of import would therefore be to further undermine the distinguishing legitimacy of various social norms through raw philosophical sophistry.

Whose Filibuster?

Seriously? If Matt were a luck egalitarian he’d be Anton Chigurh. Of course, Matt can go ahead and keep shoplifting without really causing much harm. This is because he is in essence free riding on the ethical behavior of everyone around him. If everyone behaved like Matt, on the other hand, it would be a catastrophe. This conclusion, too, can be rationally reconstructed by showing how norms play an important role in self-binding us to mutually beneficial cooperative equilibria.

Consider the US Congress, which has reached new heights of dysfunction in recent years largely because congressional norms have collapsed. Writing for the National Journal, Norm Ornstein gives the profligate use of the Senate filibuster as an example:

Rules matter, but in the Senate, norms and the larger fabric of interactions matter as much or more. The fact is that Rule XXII, which governs debate, remained the same from 1975 until this Congress; and for most of the era, it worked fine. Majorities were at times frustrated by the minority’s use of filibusters, but they were relatively rare, and most issues were worked out before legislation or nominations reached the floor. There was a larger understanding that filibusters were not to be used routinely.

The anti-social zeitgeist in Congress probably began with some Republican strategist throwing his or her hands in the air and yelling “the norms are arbitrary. All that matters is that our normative framework is the right one!” The irony is that the collapse of the anti-filibuster norm has ended up hurting both the Democrats’ and Republicans’ political agendas. Think of it in terms of a Prisoner’s Dilemma. The norm against its use acted as a self-binding mechanism against strategic gamesmanship, and helped make legislative cooperation stable.

To make this explicit, consider the model V(x) = U(x) + kN(x), where the value of doing x is given by its private utility plus its norm appropriateness weighted by k. Here k is the weight you assign normative considerations (i.e. how self-bound you are to a norm) and is a reaction function based on other agents’ k (that is, it’s intersubjective). Let’s say x represents the decision to filibuster. If k ever declines, say for the historical factors identified in Ornstein’s article, it risks collapsing as a self-fulfilling prophecy. Normativity goes out the window. Both parties become mired in strategic legislative undercutting. Multiple ethical equilibria and all that.
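The dynamic can be sketched in a few lines of code. This is a minimal toy simulation of the V(x) = U(x) + kN(x) model; the payoff numbers, the update rule for k, and the function names are all illustrative assumptions of mine, not anything from Ornstein or the literature.

```python
# Toy sketch of V(x) = U(x) + k*N(x).
# Payoffs and the k-update rule are invented for illustration.

def value(u, n, k):
    """Value of an action: private utility plus norm appropriateness weighted by k."""
    return u + k * n

def filibusters(k):
    """An agent filibusters when private gain outweighs the normative cost.
    Filibustering: private utility 1, norm appropriateness -1.
    Restraint:     private utility 0, norm appropriateness +1.
    So defection pays exactly when 1 - k > k, i.e. k < 0.5."""
    return value(1, -1, k) > value(0, 1, k)

def update_k(k, other_defected):
    """k is intersubjective: watching the other side defect erodes one's own
    attachment to the norm; watching restraint slowly restores it (capped at 1)."""
    return k * 0.5 if other_defected else min(1.0, k + 0.1)

def simulate(k_a, k_b, rounds=10):
    """Play repeated rounds; each side's k reacts to the other's last move."""
    history = []
    for _ in range(rounds):
        a, b = filibusters(k_a), filibusters(k_b)
        history.append((a, b))
        k_a, k_b = update_k(k_a, b), update_k(k_b, a)
    return history

# With strong norms (k = 1.0 on both sides), no one ever filibusters.
stable = simulate(1.0, 1.0)

# If one side's k has already eroded below the 0.5 threshold, its defections
# drag the other side's k down too, and both end up filibustering.
unraveled = simulate(0.3, 1.0)
```

Running it shows the self-fulfilling collapse the text describes: in `stable` every round is mutual restraint, while in `unraveled` one side's weakened norm eventually pulls both sides into permanent defection, i.e. the bad equilibrium of the Prisoner's Dilemma.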

This is the sense in which I think a world full of Matt Bruenigs would be worse in virtually everyone’s eyes. He seems to think that by pulling the normative rug out from under capitalism he is improving the chances that America will fall into his half-baked ideal of market socialism, whereas it is only liable to cause a cultural concussion. Identifying anti-social behavior and rhetoric is a real enough problem that I think it’s worth coining a new term to help with calling it out when it happens: