The Subject in Play is Not the Subject at Play

Featured image is Children’s Games, by Pieter Bruegel the Elder.

The subject-object schema is not destiny. It is handed down to us from the time of Descartes and Bacon, quite late in the history of philosophy. After Kant, subjectivity became a prison from which we are never free to directly perceive or interact with objects as things-in-themselves.

In the 20th century, Hans-Georg Gadamer and Ludwig Wittgenstein—starting from very different interests, training, and standpoints—looked to play and games as a way of moving beyond the Kantian trap.

How can something as seemingly trivial as play provide an answer to a serious philosophical problem? When we say “do you think this is a game?” are we not implying that the matter at hand is more important than such a thing?


The Gordian Knot

Featured image is Alexander Cutting the Gordian Knot, by Antonio Tempesta.

One of the key disputes in the continental vs analytic divide in modern philosophy is one of style. German and French philosophers largely follow Hegel’s impenetrable style—or worse, Heidegger’s—while English-speaking philosophers largely follow Bertrand Russell’s approachable prose.

A problem arises immediately because the substance of philosophy is relevant to the question of its style. Consider that the conclusions of economic theory, which concern human beings, are thus relevant to the practice of economics itself.

Philosophy falls into a similar recursion, even when we are just talking about the style in which philosophy is done. Plato’s decision to write only in the form of dialogues was a conscious choice made on a philosophical basis; his master Socrates believed that written philosophy was a contradiction in terms.

What, then, are the philosophical presuppositions behind the stylistic divide in modern philosophy?

There’s a lot that can just be chalked up to bad or sloppy writing. I’m told that Kant wrote The Critique of Pure Reason in a hurry and did not get it properly edited. Analytic philosophy itself is no stranger to putrid prose. One need not be a good writer to become a professional philosopher, even in the English-speaking world.

But Hegel’s writing style was, I believe, a choice. And Heidegger’s certainly was. Heidegger believed that common language came with philosophical assumptions baked in. That point is at least defensible. His solution, however, seems worse than the problem. The relentless neologisms and wordplay are all but impenetrable. George Steiner claimed that in German, the style has literary merit. Perhaps so, but I am in no position to judge that. All I know is that Gadamer, no great stylist, nevertheless was able to wrestle with the same perceived problem in a perfectly straightforward manner.

If we are going to write, we should strive to be good writers. In this way, I stand much closer to Russell than to Heidegger. However, being a good writer does not always mean making the simplest possible point in the simplest possible way.

I fear that for all the faults of the continental tradition (the USSR used Marx to justify mass murder, while Heidegger was a registered Nazi and delivered a speech touting their virtues mere months after Hitler was made chancellor), the analytic tradition too often thinks that the world’s problems are merely a set of Gordian Knots begging for Alexander’s solution.

I often think, these days, in terms of “low context” or “direct” speech as opposed to “high context” or “indirect” speech, distinctions I learned of in Arthur Melzer’s book Philosophy Between the Lines.

Anthropologist Edward T. Hall, for example, probably the most famous and influential writer in the field, distinguishes between what he calls “low context” societies like the United States and Europe and the “high context” societies found throughout most of the developing world. In the former, when one communicates with others— whether orally or in writing— one is expected to be direct, clear, explicit, concrete, linear, and to the point. But in most of the rest of the world, such behavior is considered a bit rude and shallow: one should approach one’s subject in a thoughtfully indirect, suggestive, and circumlocutious manner.

To forestall an objection from Ryan, Melzer does not rely on evidence from theorists alone. He also draws on practical guides created for people who have to work in other cultures which emphasize the pervasiveness of indirect speech outside of the west. The book assembles a formidable corpus of such practical and theoretical discussions, all pointing in the same direction—towards the existence of cultures favoring “low context” and “direct” styles on the one hand, and “high context” and “indirect” styles on the other.

The chief distinction is not between obscurantism and clarity, but how much of an onus is put on the audience. From one paper Melzer cites:

The burden for understanding falls not on the speaker speaking clearly, but on the listener deciphering the hidden clues. In fact, the better the speaker, the more skillful he may be in manipulating the subtlety of the clues.

To a western and especially an English speaking audience, this seems the very definition of obscurantism. But Melzer emphasizes the pedagogical value of making students pay close attention to a text in order to be able to understand it. What seems superficially easy to understand too often yields only a superficial understanding.

It is an uncomfortable fact for philosophers that stories, not philosophy, are the chief means through which societies convey wisdom. Philosophy and art have struggled over which is the appropriate source of wisdom since antiquity. Philosophy succeeded in achieving a certain status among intellectuals. But most people, especially children, find more wisdom in cartoons, movies, or comics than in philosophy–continental or analytic, ancient or medieval, Eastern or Western.

In my view, there must be great value in conveying ideas indirectly. Of the writers here at Sweet Talk, no one has demonstrated this more thoroughly than David.

The exact opposite does not hold, however. Indirect and direct, high and low context communication both have their place.

What should not have a place are bad writing, badly organized presentation, and intentionally opaque language.

Kant, Hegel, and Heidegger were not seeking to provide well written parables or even dialogues, for all of their love of dialectic. For all the talk about “dialectical” styles, they ultimately gave lectures and wrote essays and books. And the stylistic choices they made provided cover for later, more mediocre thinkers to shroud their mediocrity in impenetrable writing.

What can be considered good writing depends on the goal as well as the audience of the piece. Good writing for a technical audience will differ from good introductory writing. Conveying wisdom through poetry, parable, or essay will likewise be judged differently in each case. But having different standards isn’t the same as having no standards, and philosophical writing is too often synonymous with bad writing.

Unifying Moral Philosophy With Virtue Ethics

One of the things that is apparent when you begin reading virtue ethicists is that they find arguments that rely on pointing out consequences alone distasteful. You can see why someone like Deirdre McCloskey would take that tone, working as she does in economics, that last stronghold of utilitarianism. But virtue-based arguments often provoke dismissal—“virtue would be nice in an ideal world, but down here on the ground we have to deal with practical concerns.”

The distaste for argument-by-consequences isn’t held by virtue ethicists alone, of course; it is also characteristic of many schools of deontology. The intellectual descendants of Kant believe that you should do your duty just because it is your duty. Virtue ethicists, however, argue that mere rule-following is not all there is to ethics. Aristotle pointed out that the very particularity of circumstances makes the possible combinations of factors too numerous to boil down to general rules. Navigating such particularities requires lived experience and the development of phronesis: practical wisdom, translated into Latin as prudentia, which became the English word prudence.

Virtue ethics as I understand it takes the best of consequentialism and deontology and integrates them into a much more human framework. Prudence, in the older sense of broad practical wisdom, includes within it the more modern sense of prudence, which is concerned solely with consequences and interest. As Albert Hirschman argued, even our notion of “interest” used to be much broader than it is now. Broader even than so-called “enlightened” self-interest which takes the long term into account, as opposed to myopic self-interest which is merely opportunistic. Given the unity of the virtues, a virtue ethicist ought to hold that consequences do in fact matter, they are just not all that matters.

Among the original cardinal virtues, and Aquinas’ famous seven, is justice: the virtue of always giving what is due. This seems to me to be the virtue of recognizing and acting on principles that have deontic authority—that is, the virtue of performing our duties. McCloskey’s recent paper on institutions has a good treatment of such deontic principles, which she argues are conjective in nature. Being able to distinguish the deontic from the merely suggested is, I think, also an important part of (the older sense of) prudence.

But why should anyone care about either consequences or duty? At bottom, our desires and our reasons, no matter how seemingly rational, are grounded in some sort of faith: basic things we take for granted or hold dear, things that cannot themselves be rationally justified because they are the basis of our justifications.

Prudence, justice, faith, courage, temperance, charity, and hope—ingredients for a meaningful life, for being the sort of person you can look in the mirror without shame. While seeing the wisdom in the accomplishments of consequentialist and deontologist thinkers, I believe that virtue ethics provides the best framework for grounding and making the best use of those accomplishments.

Pleasurable, Exalted Terror

Edmund Burke wrote that “whatever is qualified to cause terror is a foundation capable of the sublime.” Yet in contemporary English the word is no longer a category of aesthetics; it is mainly used by the pretentious to flatter one another. It has thus lost much of the nuance that originated in Burke’s treatise On the Sublime and the Beautiful, in favor of yet another superlative for “good”.

Strictly speaking, something is sublime if it uses the infinite or incalculable to create an experience of beauty that incorporates fear or a sense of being overwhelmed. For example, I reserve sublime for my first visit to Niagara Falls, whose dramatic horseshoe of roaring waters transfixed me in a torrent of terror and tranquility.

Yet the sublime does not have to refer to natural wonders or artistry. Indeed, many social phenomena can be sublime. Slavoj Žižek once argued that ideology relates to the sublime, exerting an influence over social reality that defies perception. Specifically, he claims ideologies require a “sublime object” that carries an irreproachable greatness, be it God, the King, or the proletariat.

The general idea comes from Kant, who wrote that the sublime is a “formless object” representing our intrinsic inability to perceive vastness or complexity, thus elevating “nature beyond our reach as equivalent to a presentation of ideas.” In confronting such objects, we at once feel displeasure “arising from the inadequacy of imagination in the aesthetic estimation of magnitude” and a “simultaneous awakened pleasure, arising from this very judgement of the inadequacy of the greatest faculty of sense…”.

In ideological space, this inadequacy of imagination parallels the subject’s inability to articulate the nature of their deepest political commitments, which in turn creates a similar “awakened pleasure” in the knowledge that their cause defies a complete description.

In this sense, there is something strangely sublime surrounding the recent brutal interfaces between state and citizen in New York, Mexico, Hong Kong, and elsewhere. To appreciate the scope and complexity behind these patterns of violence and protest is literally impossible. So out of necessity, our inadequate media elevates that which is beyond our reach to a coherent presentation of ideas. Indeed, it seems as if the news and social media act as a magnifying glass, concentrating public attention onto stochastically occurring tragedies until a spark creates ignition, giving producers the cue for the “national conversation” graphic along the lower third.

There are those who decry the news as guilty of exploiting “sensationalism,” but this is a mistake. What is being constantly exploited is precisely our craving for the sublime. Indeed, the grotesque scenes of protest that play across our screens, straining eyes that alternate from face to crowd to face, are genuine objects of beauty. And this in turn explains why as a society we have never been more at peace, but also never more in terror. Pleasurable, exalted terror.


Matt Bruenig is Anti-Social

Everyone has a hobby. For Matt Bruenig, it’s writing “takedowns” of libertarianism as lacking any singularly coherent normative theory. In his latest, he deconstructs the “just desert” basis of capitalism, specifically the claim that capitalism rewards risk. In this post my goal isn’t to defend just desert theories, per se. Rather, I’d like to shed some light on Matt’s subversive modus operandi and the fallacies and dangers within it.

For context, I have gone back through Matt’s archive and not found a single positive defense of his own normative framework. In this sense he prefers to define his ideology negatively as “not x therefore y”. And while he makes regular gestures towards egalitarianism, he has yet to show how his own abstract normative theory is any less arbitrary or sensitive to the deconstructionist tactics he is fond of employing.

From Kant to Hegel to Hayek

To understand why Matt is so successful at taking apart normative theories and so cautious about defending his own, it’s worth tracing the background assumptions of modern moral philosophy back to Kant. Kant famously claimed our conceptual commitments are inescapably normative (e.g. if I say x is a cat I am “responsible” for a particular judgement about x) and that, in making those commitments, we are required to maintain justificatory, inferential and critical consistency (e.g. we can’t simultaneously say x is not a cat). Kant and followers like Rawls thought you could use this insight to construct a transcendental argument that bridged is and ought. Read philosopher Robert Brandom’s work for more on this, or enjoy this short video.

The key error Kant made was in taking as sacrosanct the “mentalistic” paradigm inherited from Descartes, which gave “subject” and “object” ontological primacy, and “representation” primacy in theories of epistemology and intentionality. If you don’t believe me, read about Fichte’s notion of the “pure I”. It took Hegel to enter the scene and point out how weird the implication of a “noumenal” or objective realm of “things in themselves” was if it meant having knowledge of the inaccessible. So he “naturalized” Kant’s theory of normativity by arguing that it had to be situated socially and recognitively in cultural practices. This in effect rejected the subject-object paradigm by shifting to an intersubjective theory of meaning. See Jurgen Habermas on “detranscendentalizing mentalism” for more on this.

Normativity is, to paraphrase Kant, a property that leads a concept to self-bind, e.g. a duty as distinct from compulsion. Hegel accepted this but argued that it in no way necessitated Kant’s transcendental approach, in which de-contextualized or “pure” normative principles were derived prior to interaction with concrete problems. Rather, Hegel argued normativity was immanent to the social process of intersubjective norm construction, the most “objective” products of which are our stable institutions. We are bound to the normative commitments implicit in our objective institutions because in a very real sense they mirror us. This is a deep concept, but it can be understood as a precursor to the idea of the “extended will” that follows from embodied cognition in cognitive psychology. As philosopher of mind Andy Clark explains the idea,

advanced cognition depends crucially on our ability to dissipate reasoning: to diffuse achieved knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political and institutional constraints.

The rationality of our social structures is therefore often hidden as a feature, not a bug. Yet as self-conscious beings we ought to be able to extract and make explicit the implicit principles that pre-structure our social practices. For example, perhaps “justice as fairness” isn’t a context-free normative standard which looms over all other practices. Instead, what if discrete norms like “I cut, you choose” or “let’s flip a coin” or “first one to improve and enclose unclaimed land gets it” develop spontaneously through cultural evolution as low-cost ways of securing agreeable cooperative social relationships? For more on this idea, read Joseph Heath’s “A Puzzle for Contractualism”.

Rawls would actually be sympathetic to this view, since he characterized the egalitarian norm as based on conflict reduction. But that does not imply that particular norms can be isolated and then imposed from the top down. Doing so commits the basic category error that F.A. Hayek identified behind all forms of “rational constructivism”. The fairness norm only gained its normative authority, or self-binding character, from the legitimating history of mutual recognition that preceded and maintained it in specific cases. This would seem to better match observed reality, where there is not, for example, one universal standard of “fair ownership”, but a multiplicity of standards rooted in historical practice. Thus instituting Rawls’ difference principle, for example, would not be just in the US context without substantial cultural buy-in — for the same reason that imposing American capitalistic property norms in developing countries regularly leads to violent pushback.

Bottom Up Normativity

If that is how actual normativity arises in practice, talking about “just desert” in the abstract is totally wrongheaded. Instead you would need to instantiate a desert norm in a concrete social reality. Then you would have to carefully investigate the genealogy of the norm to discover its implicit rationale. Jurgen Habermas calls this approach “rational reconstruction.” Note that reconstructing the rationality implicit in normative behavior is an interpretive (not descriptive) exercise.

People who follow this otherwise post-Kantian tradition have actually done this for the American context of capital and desert. The Hansmann argument for shareholder primacy, for example, rests on the demonstration that ownership in a firm will tend to flow to the constituency with the lowest governance cost, which for complex companies tends to be shareholders (specifically, it can be demonstrated that shareholder primacy is Kaldor-Hicks efficient). Risk is a non-trivial part of this issue. As residual claimants, capital holders are the most exposed insofar as their returns are what remains after other contractual obligations have been honored.

So does that mean at some point in history someone went out and designed corporate law based on a grand utilitarian moral theory? No, on the contrary. These norms of ownership were discovered in the same bottom-up way as norms like “I cut, you choose”. Scholars like Hansmann had to explicitly reconstruct this rationale through interrogation of the alternatives, like stakeholder theory. (By the way, rational reconstruction of the welfare state also points to a transaction cost basis, not egalitarian principles.)

Still, it must be said that capital owners have a holistic normative relationship with the present state of affairs — that is, the bundle of concepts that tend to accompany norms of ownership, like “entitlement” and “deservedness,” apply perforce. From their own standpoint and from the point of view of the community at large, the reductive claim that “shareholders retain profits ONLY because that’s the best for the social welfare function” is illegitimate because the rational reconstruction only ever identifies one feature of an ethical totality.

To see this, consider that the dividend cheques get delivered not due to an awareness of Ronald Coase’s most cited work, but because of the self-binding intersubjective concept of ownership itself. It isn’t just the legal realist’s vision of command and compulsion. The USPS guy delivers the cheque largely because he has internalized and affirms the prevailing norms of ownership by which he himself implicitly benefits. As H.L.A. Hart put it, his recognition gives the law an “internal point of view.”

In this light, Matt’s struggle for abstract consistency is at root subversive. He has innumerable posts arguing against private ownership that would make no normative distinction between a company merger and civil asset forfeiture, other than perhaps that the latter is typically more regressive in its effects. The main upshot of Matt gaining any following of import would therefore be to further undermine the distinguishing legitimacy of various social norms through raw philosophical sophistry.

Whose Filibuster?

Seriously? If Matt were a luck egalitarian he’d be Anton Chigurh. Of course, Matt can go ahead and keep shoplifting without really causing much harm. This is because he is in essence free riding on the ethical behavior of everyone around him. If everyone behaved like Matt, on the other hand, it would be a catastrophe. This conclusion, too, can be rationally reconstructed by showing how norms play an important role in self-binding us to mutually beneficial cooperative equilibria.

Consider the US Congress, which has reached new heights of dysfunction in recent years largely because congressional norms have collapsed. Writing for the National Journal, Norm Ornstein gives the profligate use of the Senate filibuster as an example:

Rules matter, but in the Senate, norms and the larger fabric of interactions matter as much or more. The fact is that Rule XXII, which governs debate, remained the same from 1975 until this Congress; and for most of the era, it worked fine. Majorities were at times frustrated by the minority’s use of filibusters, but they were relatively rare, and most issues were worked out before legislation or nominations reached the floor. There was a larger understanding that filibusters were not to be used routinely.

The anti-social zeitgeist in Congress probably began with some Republican strategist throwing his or her hands in the air and yelling “the norms are arbitrary. All that matters is that our normative framework is the right one!” The irony is that the collapse of the anti-filibuster norm has ended up hurting both the Democrats’ and the Republicans’ political agendas. Think of it in terms of a Prisoner’s Dilemma. The norm against the filibuster’s use acted as a self-binding mechanism against strategic gamesmanship, and helped make legislative cooperation stable.

To make this explicit, consider the model V(x) = U(x) + kN(x), where the value of doing x is given by its private utility plus its norm appropriateness weighted by k. Here k is the weight you assign normative considerations (i.e. how self-bound you are to a norm), and it is a reaction function of other agents’ k (that is, it’s intersubjective). Let’s say x represents the decision to filibuster. If k ever declines, say for the historical factors identified in Ornstein’s article, it risks collapsing as a self-fulfilling prophecy. Normativity goes out the window. Both parties become mired in strategic legislative undercutting. Multiple ethical equilibria and all that.
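The model can be made concrete with a toy simulation. This is a sketch only: the payoff numbers, the erosion factor, and the particular reaction rule are all illustrative assumptions of mine, not anything drawn from Ornstein’s account.

```python
# Toy simulation of V(x) = U(x) + k*N(x) for the filibuster decision.
# All parameter values below are illustrative assumptions.

U_FILIBUSTER = 1.0    # short-run strategic gain from filibustering
N_FILIBUSTER = -2.0   # the act violates the shared norm (negative appropriateness)

def value(k):
    """V(filibuster) under norm weight k; V(restraint) is normalized to 0."""
    return U_FILIBUSTER + k * N_FILIBUSTER

def step(ks, adjustment=0.5):
    """Each agent's k reacts to the others': it drifts toward the group mean,
    and erodes further whenever anyone's V(filibuster) turned positive
    (i.e. a defection was observed last period)."""
    mean_k = sum(ks) / len(ks)
    defection_observed = any(value(k) > 0 for k in ks)
    new_ks = []
    for k in ks:
        k = k + adjustment * (mean_k - k)  # intersubjective reaction
        if defection_observed:
            k *= 0.8                       # observed defection erodes the norm
        new_ks.append(k)
    return new_ks

# Stable equilibrium: both parties heavily norm-bound (k = 1), so
# V(filibuster) = 1 - 2 < 0 and nobody defects; k holds steady.
ks = [1.0, 1.0]
for _ in range(10):
    ks = step(ks)
assert all(value(k) < 0 for k in ks)

# Shock: one party's k drops below the defection threshold (k < 0.5).
# The reaction function then drags everyone's k down, a self-fulfilling
# collapse into mutual strategic gamesmanship.
ks = [1.0, 0.3]
for _ in range(10):
    ks = step(ks)
assert all(value(k) > 0 for k in ks)
```

The two runs illustrate the multiple equilibria: with mutually high k the norm is self-enforcing, but a one-sided decline in k propagates through the reaction function until filibustering becomes everyone’s best reply.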

This is the sense in which I think a world full of Matt Bruenigs would be worse in virtually everyone’s eyes. He seems to think that by pulling the normative rug out from under capitalism he is improving the chances that America will fall into his half-baked ideal of market socialism, whereas it is only liable to cause a cultural concussion. Identifying anti-social behavior and rhetoric is a real enough problem that I think it’s worth coining a new term to help call it out when it happens.

Our Modern Euthyphro Dilemma

Does God make his commandments based on what is right, or is what is right based on his commandments? This is the Euthyphro dilemma, and it has boggled theologians and moral philosophers alike for literally millennia.

The dilemma is supposed to challenge believers in divine command theory, but it has relevance for modern secular moral theory as well. This is because the original dialogue between Socrates and Euthyphro was not really about the nature of God, but about the nature of normative authority more generally. By being constant through time and space and separate from human particularity, God simply reflects the idealized universality and generality which we seek in our principles of justice.

In lieu of God, secular moral philosophy from Kant on has been trying to somehow leverage sureness back into our moral sense through convoluted transcendental arguments. Such efforts usually involve the metaphysical construction of an “ideal self” in some ideal scenario behaving in ideal ways to which we must all rationally assent. Our secular Euthyphro dilemma thus becomes: are our abstract moral theories based on what is right, or is what is right based on our abstract moral theories? Against any Kantian construction, the dilemma is no less powerful than when levied against divine command.


Take 20th century Kantian philosopher John Rawls, for example. His concept of the “original position” asks us to imagine ourselves standing outside of society bereft of any knowledge of our personal identity, including our conception of a good life. Behind this veil of ignorance, he argued, we’d all rationally agree to an egalitarian society in which there was the greatest benefit for the least advantaged.

With this idealized social contract, Rawls’ goal is to establish a formal derivation of political authority in order to justify a particular macro-distributive end. But what appears to be an innocuous thought experiment is on closer inspection a series of arbitrary and inconceivable stipulations. After all, what is left of a self after its identity has been stripped away? How can a purely instrumental rationality even motivate a choice, much less reveal risk preferences? Why does the nation state set the boundary of social justice? Even taking the exercise at face value, the construct fails to establish a meta-ethical bridge to true normativity because it merely pushes the prescriptive element onto an unfounded imperative to act according to one definition of rationality.

To do Rawls justice, I should add that he was aware of all this, and so wrote hundreds of pages of tedious conceptual scaffolding by way of addendum. This guarantees the incompleteness of my rough sketch; however, the flaw with constructing a Kantian normative architecture lies not in the design specifics or even the level of detail, but in the very idea that normative authority can be grounded via ethical AutoCAD. With sufficient prodding, all Kantian constructions invariably implode under their unnatural abstract formalism. Indeed, examples span the political spectrum to include Kant-inspired libertarians, whose invocations of the non-aggression principle are similarly void of content, and become contradictory fast once any substance is added.

Thus when contemporary Kantians debate it winds up being a symmetric game of mutually assured deconstruction. Distributive justice types are able to accurately reveal the inconsistencies of their opponents, while procedural justice types make a science of egalitarian absurdities. In the end, beneath the twin rubble piles that result, there remains only the meek voices of special pleading.

If what is right is not based on abstract moral theory, then normative authority must be antecedent to our modern moral philosophy. In later posts I will try to explain how normativity arises from the bottom up, from the particular to the general, rather than the other way around. As Nietzsche famously argued, relinquishing god as the locus of normative authority was essential to opening new possibilities of human development. Today, the same should be said of all secular moral frameworks which give normative authority the same god-like unity of voice, contra the polycentrism we actually observe. So say it with me:

Kant is dead. Kant remains dead. And we have killed him.