The Ontology of Economics

Earlier this year I opined on Twitter that economists are essentially philosophers with full employment.

Jokes aside, my glaring omission, of course, was ontology. Ontology is the subset of philosophy concerned with the nature and categories of being and existence. In the case of economics, the core ontological preoccupation is with the nature and existence of market equilibria and their constituent parts: supply and demand, institutions, representative agents, social planners, and so on. Some focus on ontologies of being, like a static equilibrium, while Hayek and Buchanan famously had ontologies of becoming that emphasized the importance of analyzing economic processes. Others debate the gestalt between whole markets and individual exchanges — supply and demand curves versus a game theory model of bargaining, say. Still others question the reality of economic “absences,” like productivity measurements produced as a statistical residual, or the output gap between real and potential GDP.

Economic ontology therefore touches on every aspect of economic thinking and analysis, and as such the biggest rifts in economics often come down to mutually incompatible ontological commitments. For instance, I once read a polemic against Keynesian economics that proclaimed matter-of-factly that the macroeconomy “doesn’t exist,” that it’s nothing more than a metaphor for a complex aggregation of individual interactions. Well — no duh. Individuals are aggregations of complex biochemical interactions, as well, but that doesn’t make them any less real. Much like debating the point at which a collection of individual grains becomes a heap — there simply is no fact of the matter.

As in the example above, it’s important to be able to discern the difference between a category mistake (like attributing motives to GDP or the fallacy of composition) and a difference in construal (like acknowledging aggregates exist in the first place). More often than not, the existential quantifier (or dubbing something “real”) is less about proposing an object as genuinely more or less fundamental and more about raising or lowering that object’s social status. This may be incredibly useful in the context of rhetoric and persuasion, but it is usually safer to embrace a plurality of ontologies as equally valid based on use and context.

That is, economists should be ontological pluralists. And self-consciously so.


Think Tank Theology


The modern Policy Analyst (homō-vāticinius) is a weird, hybrid creature. He or she must be an effective writer, researcher, organizer, and charismatic speaker all at once. Living within the scholarly lacuna between a university professor and a religious evangelist, he or she is perpetually torn by the tensions of academic integrity and ideological purity.

The natural habitat of the policy analyst is known as the Think Tank. While Think Tanks exist to produce novel research, their deeper raison d’être is advocacy. Unlike the lobbyist, however, they do not speak for any particular firm or special interest. Rather, they keep their grip on non-profit status by championing causes or principles of broader appeal. Like a lion stalking its prey, the policy analyst usually knows where a research project will land: more often than not, its outcome is a foregone conclusion of economic correctness. Yet by pooling relevant facts and talking points under one heading, research generates the much vaunted “citation need” for any on-going debate.

Think Tanks tend to be lean operations, appearing more grandiose to outsiders thanks to the equally enigmatic species known and anointed as the “Senior Fellow”. Following Coase’s theory of the firm, Think Tanks do not spend their precious donations on elaborate office buildings full of retired professors typing out op-eds. Instead, the full-time staff is typically sized to the needs of daily operations, while research is essentially outsourced to a network of Senior Fellows who carry on their own day jobs. Some of these Fellows produce nothing — they merely lend their name — while others are prolific, paid either a salary or a commission.

Such organizations provide a useful illustration of the limits of Coasean / transaction cost theories. While clearly shaped by a cost function, Think Tanks are also machines of persuasion to a theological degree. In a world of pervasive moral skepticism, Think Tanks stand as entities of secular normative force, continuously prescribing social and economic reforms couched in prophetic rhetoric.

While each organization has its own positive mission, all are nonetheless drawn in particular to the dissident act of debunking. The centrality of apostolic reform to a Think Tank’s mission thus makes it a deeply Lutheran institution. Like Martin Luther, himself an Augustinian friar, the archetypal policy analyst contemplates — but then must share the fruits of his contemplation, must nail his Ninety-Five Theses to the legislature’s door, now in optional infographic form.

Finally, the youngest friars of all — the next generation of public entrepreneurs — undergo a period of cloistered asceticism, more commonly known as the unpaid internship. Though he is free and belongs to no one, he has made himself a slave to everyone, to win as many as possible. And once endowed with the coveted Letter of Reference, the young policy analyst proceeds forth, fully prepared to preach the gospel.

Related: The Problem of Evil and its Coasian Solution

Separation Anxiety

Friendly Society

If given the choice to live in a world dominated by risk or uncertainty, I would choose risk every time. Risk is manageable. Risk can be hedged. Risk can even bring people together.

Take the Friendly Societies of the 18th and 19th centuries. These predecessors of the modern insurance cooperative helped to distribute financial risks among their members. But in addition to providing access to a doctor or income in tough times, members could literally count on a shoulder to cry on. This decentralized social safety net had the by-product of strengthening the community all around, in the form of civic engagement, social events and close-knit relationships.

Friendly and mutual aid societies flourished for over 300 years in places like England thanks to the challenge of measuring risk accurately, especially on an individual level. For the most part, insurance schemes, both formal and friendly, brush over the immense heterogeneity of risk types to come up with a flat rate or membership premium — an average cost — which in our fundamental ignorance we agree to pay.


In fact, when individual risk is well known (usually by the individual him or herself), our ability to manage risk with insurance or mutual aid tends to break down. For instance, a person seeking health insurance may choose to conceal their heightened risk of cancer by not sharing their family history. In their famous 1976 paper, Rothschild and Stiglitz showed how this kind of information asymmetry makes insurance hard, if not impossible, to sustain. In contrast, consider that a young man cannot easily conceal the salient fact that he is young and male. Since this correlates with worse driving, auto insurance is able to separate members into several pools, or to charge several prices, without worry of members misrepresenting their risk type.

In the jargon of game theory, this is the difference between a pooling and a separating equilibrium, and it’s not limited to insurance. In any scenario where the type of a person or good is not directly observed, you instead observe a signal — a piece of communication — which may or may not be informative. But when different types give off different signals, even if they’re not wholly accurate, types can be discerned, separated and priced accordingly.
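To make the distinction concrete, here is a minimal numerical sketch (the claim probabilities, loss size, and pool composition are all invented for illustration) of how a flat-rate pooling premium compares with type-specific separating premiums once risk types become observable:

```python
# A hypothetical two-type insurance pool: all figures are invented.
LOSS = 10_000                 # cost of a claim if the insured event occurs
P_HIGH, P_LOW = 0.10, 0.02    # annual claim probabilities by risk type
N_HIGH, N_LOW = 30, 70        # number of members of each type

def fair_premium(p, loss=LOSS):
    """Actuarially fair premium: the expected annual claim cost for a type."""
    return p * loss

# Pooling equilibrium: one flat rate equal to the pool's average expected cost.
pooled = (N_HIGH * fair_premium(P_HIGH) + N_LOW * fair_premium(P_LOW)) / (N_HIGH + N_LOW)

# Separating equilibrium: an observable signal (say, "young male driver")
# lets the insurer price each type at its own expected cost.
separated = {"high-risk": fair_premium(P_HIGH), "low-risk": fair_premium(P_LOW)}

print(f"Pooled flat-rate premium: {pooled:.2f}")
print(f"Separated premiums:       {separated}")
```

In this toy pool the low-risk members pay 440 under the flat rate against an expected cost of 200; once their type can be observed or credibly signalled, that cross-subsidy unravels and the pool separates.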

In the case of England, Friendly Societies tended to be grouped around industry, skill level, and other imperfect “types”. As a whole, then, the Friendlies weren’t totally unsophisticated. But relative to modern insurance, the mechanisms available for making members reveal their riskiness were first-order approximations at best.

History’s Card Sharks

This wasn’t necessarily a bad thing. In the limit, if every person had an idiosyncratic and public risk profile, insurance would be like trying to bet a round of poker with the cards face up. Rather than spreading the cost of car accidents, or health care, or unemployment across large groups, we would be much closer to paying our own way in full. While this could be considered efficient in a narrow sense (each consensually pays his or her marginal cost), in practice it could also be disastrous. Rather than having the congenitally lucky occasionally support the unlucky, the unlucky would lose by predestination. There’s no point in bluffing — you’re simply dealt the hand you’re dealt.

Now imagine Friendly Societies as represented by a group of casual poker players that meet regularly. The play is sloppy and heuristic-based, and no one really knows how to calculate pot odds. Sometimes you’re up, sometimes you’re down, but in the long run everyone tends to break even. Then one day a new player is invited, a player who happens to be a poker tournament champion and retired statistician. Sometimes he’s up, sometimes he’s down, but in the long run the rest of the table ends up consistently going bust. A friendly game among a friendly society suddenly isn’t that friendly anymore, and the group disbands. This is more or less the story of how Friendly Societies went from flourishing to sudden decline around the turn of the 20th century.

Innovations in the science of actuarial analysis (the statistical study of risk) had been diffusing through society since at least 1693, when Edmond Halley constructed the first “life table”, allowing him to calculate annuity prices based on age. Not long after, in 1738, Abraham de Moivre published an expanded edition of “The Doctrine of Chances,” credited with introducing the normal distribution that was greatly expounded on by Gauss in the 1800s. Then in 1762, The Equitable Life Assurance Society was founded, with the first modern usage of the term “actuary” (the company exists to this day as Equitable Life, the world’s oldest mutual insurer). However, as a profession, insurance was truly born much later, in 1848, with the founding of the Institute of Actuaries in London, thanks to breakthroughs in measurement and accounting techniques (such as commutation functions) that brought the doctrine of chances from theory to practice.
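As a rough illustration of what Halley’s innovation made possible, here is a short sketch that prices a simple life annuity from a toy life table; the survival column and interest rate are invented for illustration and are not Halley’s actual figures:

```python
# A toy life table and annuity price: the survival figures and interest
# rate below are hypothetical, not Halley's data.
INTEREST = 0.06
survivors = {60: 1000, 61: 970, 62: 935, 63: 895, 64: 850, 65: 800}  # l_x column

def annuity_price(age, payment=1.0, rate=INTEREST, table=survivors):
    """Expected present value of `payment` paid at the end of each year
    the annuitant survives, according to the life table."""
    l_start = table[age]
    price = 0.0
    for t, x in enumerate(range(age + 1, max(table) + 1), start=1):
        survival_prob = table[x] / l_start          # chance of reaching age x
        price += payment * survival_prob / (1 + rate) ** t
    return price

print(f"Price of a 1-per-year annuity purchased at age 60: {annuity_price(60):.3f}")
```

The point is simply that once survival odds by age are tabulated, an annuity stops being a gamble and becomes an arithmetic exercise in discounting.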

Scientific actuaries were history’s card sharks. In order to compete, Friendly Societies were forced to adapt — to learn to better calculate the odds — and ultimately they converged on many of the same administrative, procedural, and “scientific” insurance-like structures. The growing (and widely misused) economic surplus this generated fueled an insurance boom that peaked in the latter part of the 19th century. For efficiency advantages, societies began deepening national networks well beyond the scope of brotherly love, and strove to expand risk classifications and reduce exposure to high-risk types.


As risk came to be better classified, the “flat rate” pooling equilibrium of the 18th century and earlier rapidly became untenable. Across Europe, the market became increasingly separated, with many differentiated premia and some high-risk types pushed out altogether. This fueled a growing industrial unrest that culminated in the consolidation of private social insurance schemes into nationally run systems.

Commercial insurance, by generating a burst of competition and transitory political instability, was in a sense a victim of its own success. But as many economists have noted, while decidedly non-voluntarist, national schemes (like the one instituted in the UK by the National Insurance Act of 1911) were able to discover large efficiencies of scale through less administrative intensity, tax-based collections, and a comprehensive risk pool. This transaction-cost advantage — and the centuries of social capital it crowded out — guarantees that the days of the close-knit mutualists are gone for good, save for some religious congregations. In their stead stands L’État Providence — The Welfare State — via a historical process that (as I’ve described) was most rigorously identified by French legal scholar François Ewald in a book by the same name.

Classification

The point of this essay (if you’ve made it this far) is to suggest that we are in the midst of a measurement and statistical revolution of a scale equal to or greater than the 19th-century diffusion of actuarial science, with potentially many of the same social and political implications.

With $99 genotyping and sub-$1,000 whole-genome sequencing, in the near future the idea of an insurer asking for your family history of cancer will seem quaint. The immense promise of genomics and personalized medicine also portends the inevitable collapse of large, relatively heterogeneous insurance pools, in favour of equally “personalized” healthcare cost schedules.

As I hinted at earlier, this phenomenon of moving from pooling to separating equilibria following advances in measurement technology is by no means limited to risk or health care. Any qualitative distribution can in theory be mapped to a price distribution, but winds up collapsing into a single price given practical measurement constraints. For example, in the past mediocre restaurants were partially supported by the churn of ignorant consumers, since reliable ratings and reviews were hard to come by. Today, rating platforms like Yelp.com mean that restaurants of different quality have more room to raise or reduce prices accordingly, to separate based on credible signals. It’s the end of asymmetric information.

In the corporate setting, pooling equilibria are represented by relatively flat salary structures given a particular seniority, department or education level. Sometimes there is a commission or performance bonus, but day to day productivity is rarely if ever tracked. This opacity is what permits the possibility of zero marginal product (ZMP) workers — workers who literally contribute nothing to a firm’s output.

For any given kitchen, at some point an additional cook does not actually produce more food. While it can be misleading to say that any particular cook is non-productive (maybe there are simply too many cooks in the kitchen), in deciding on which cook to dismiss it matters a great deal that the cooks aren’t all equally productive. On the contrary, the individual contribution of every kind of worker to a firm’s output is often extremely heterogeneous, with the top 20% of workers contributing as much as the lower 80%.
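A tiny sketch of this logic, using a hypothetical concave production function and made-up skill levels, shows how the marginal product of an additional cook can approach zero even while individual contributions remain highly unequal:

```python
import math

# Hypothetical cook skill levels, best to worst (units are arbitrary).
skills = [30, 20, 12, 10, 8, 7, 6, 4, 2, 1]

def kitchen_output(crew):
    """A concave production function: output rises with total skill in the
    kitchen, but with sharply diminishing returns as it gets crowded."""
    return 40 * math.log(1 + sum(crew))

# Marginal product of each additional cook, added in order of skill.
for n in range(1, len(skills) + 1):
    mp = kitchen_output(skills[:n]) - kitchen_output(skills[:n - 1])
    print(f"cook {n:2d} (skill {skills[n - 1]:2d}): marginal product = {mp:6.2f}")

top_fifth = skills[: len(skills) // 5]
print(f"Share of total skill held by the top 20% of cooks: "
      f"{sum(top_fifth) / sum(skills):.0%}")
```

In this toy kitchen the last cook adds almost nothing to output, yet the top two cooks account for half the total skill — which is exactly why it matters, once monitoring improves, who gets dismissed first.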

With automation and artificial intelligence reducing the demand for human inputs, the kitchen, so to speak, is shrinking. It has therefore become paramount for firms to identify the 20 and eject the 80. The contemporary increase in country-level inequality is widely recognized as technology driven, but few have put their finger on the micro-foundations that explain why. Part of the story is surely “human-machine substitutability,” but in addition firms have simply started monitoring and classifying worker productivity better than in the past. This leads to a separating equilibrium that shows up in the data as job market polarization, rising premia on central “signals” like college degrees, and (to the extent that signals are sticky) reduced social mobility. Unsurprisingly, a class-based society is first and foremost a society which classifies. The silver lining in this case is that, rather than classes based on pedigree, nobility or race, the future promises to be highly — if not brutally — meritocratic.


In one future scenario, just as actuaries identified groups that were uninsurable, perhaps large sections of society will discover they are unhirable. Supposing they have too many “one-star” ratings, as it were, on their HR record, their only hope will be to work for fellow one-stars and to build a matching economy around their mediocrity. This is essentially the “Average is Over” scenario imagined by Tyler Cowen, who foresees the return of shanty-towns across the United States. But I wouldn’t bet on it. In many ways, the recent calls for a “universal basic income” exactly parallel the early 20th century’s push towards nationalized social insurance. Only here it is labour income itself that would be nationalized, as part of the inescapable political economy of separation anxiety.

My own anxiety stems from that fundamental uncertainty of the future, as if the social order is dancing along a knife edge dividing two radically different steady states. In either state — from hyper-meritocracy to a New New Deal — the case of the Friendly Societies demonstrates that the only thing for certain will be the loss of our sacred intangibles: the unmeasured qualities that united distinct types under one roof, from the fraternal lodge to the corporate office.

Is Philosophy Information Efficient?

In Adam’s last post he proposed an Efficient Market Hypothesis (EMH) for philosophical debate. As a rule of thumb, it states that if you have heard a good argument for X, someone from camp Y has probably already thought of a good response. In turn, camp X has probably thought of an equally compelling rejoinder to the rejoinder, and so on down the line.

A “Strong EMH” of debate might say that, in a world with a lot of smart people with diverse views, the “spread” between a good reply and “the best possible reply” narrows considerably. Bad arguments get pushed to the side and the debates that remain are, on the margin, genuine stalemates. In contrast, a “Weak EMH” of debate might resemble Tyler Cowen’s “First Law,” which simply asserts that there is a literature on everything. More pithily, there are no million-dollar dissertations left lying on the ground.

Under either reality, we should expect intellectual controversies to be really challenging to budge in any direction, at least in aggregate. Philosophy know-it-alls in their freshman year are thus a lot like hot-headed finance graduates who think their 4-year degree is all it takes to beat the market. The moral of the story is to be humble. If you’re really confident in your philosophy (or portfolio) you probably haven’t read enough.

It’s a compelling analogy, but there is an important and immediately apparent difference between the two: binding feedback. For the most part, finance grads lose their conceit once they get into the real world and see how hard the search for alpha truly is. The Philosophy (or Journalism, Economics, Poli-sci, etc) student doesn’t have the same opportunity (or obligation) to bet their beliefs. Indeed, I can hold completely ignorant — if not downright medieval — theories of modality and not be a penny poorer.

The currency that drives dialectic within professional philosophy is academic prestige, so making an overtly fallacious argument can be quite costly. But not only is academia a highly segmented market (especially if we include English or Sociology as philosophy), it is a hell of a lot easier to get a securities license than a faculty position.

The Efficient Market Hypothesis means different things to different people. But a point that is often lost is the fine distinction between claiming “there are no arbitrage opportunities left” and “the market price converges to fundamental value.”

I am happy to endorse a version of EMH in finance and otherwise that says, “You will go bankrupt betting against the market long before you’re proven right” (a no arbitrage condition), but I would not endorse the corollary that the “efficient market” price is thus necessarily “correct”. Many bubbles can be both impossible to effectively short sell and still far from “fundamental value” in the medium term, in terms of discounted cash flow or any other metric.

Even within the more “liquid” markets for ideas, how you think about the information efficiency of intellectual debate depends on which structural process you think underlies the dialectic. In practice I see two broad possibilities. Philosophical debate could follow either:

1. a Peircean kind of convergence, where lots of back-and-forth experimentation eats up profit opportunities (the pragmatic notion of “success”) and settles on a fundamental value — truth; or,

2. a Hegelian dialectic, where back and forth yields concepts that always portend a greater potential through synthesis, and an unmeasurable possibility of taking random walks far away from “truth” into an ideological bubble.

I tend to think of the scientific literature as closer to 1. Over time literatures do eventually solidify, and the remaining arbitrage opportunities exist only over meagre, specialized debates, or are really, really expensive (CERN). Paradigm shifts occur but they are rare, and at any rate are more akin to a jump between multiple equilibria than a bursting bubble.


Historically, the philosophical literature has been closer to 2, with instances of drifting far off into a bubble that suddenly collapses in on itself (Cartesianism, theology, logical positivism, post-structuralism). Is it any surprise that so many literary theorists endorse sociological theories of knowledge, given that their own discipline is a perfect example of self-referential social construction? In short, there is a greater danger in projecting a type 1 EMH onto philosophy, because the mere lack of apparent arbitrage opportunities is not sufficient to deem the “literature” as a whole healthy.

Ideology, like the atmosphere we breathe, is hard to appreciate from within. Bull markets are no different. Even if there were a Case-Shiller Index for philosophical circle-jerks, by the time the Ponzi scheme becomes obvious one is already [intellectually] bankrupt. So between a scientist and a philosopher, the latter always stands a better chance of being the greater fool.

This is my tentative case for philosophy as a discipline to reallocate its portfolio even more toward experiments. Under any version of an EMH, one is able to “beat the market” given some level of insider information. So if all the low-hanging fruit has been plucked from the aprioristic discourse, experiments have a chance at discovering new frontiers. Some of the most interesting work in recent years has been stimulated by discoveries in neuroscience, decision theory and experimental psychology, for example. But even the margins in these new-ish fields are beginning to look thin.

So what will be the next frontier? If I could tell you that, someone would have already written a paper on it.

Joshua Greene’s “Deep Pragmatism” is Deeply Problematic

I was only 16 at the time, but I had already developed a strong interest in moral psychology thanks to the lingering suspicion that ethics was a fatal weakness for philosophical naturalism. And so when Marc Hauser’s now classic book Moral Minds first came out in paperback, I rushed to buy a copy.

The book was a detailed exploration of human moral cognition through the lens of trolley problem experiments and Hauser’s (now dubious) research with primates. And despite Hauser’s indefensible academic misconduct, it remains a tour de force.  In fact it is still in my possession, now twice as thick and stained by sunlight from multiple re-reads.

My original copy of Moral Minds still sits on my bookshelf

At the time I became convinced of Hauser’s basic approach that updated David Hume in light of Chomsky’s work on innate syntax. This view says that our moral sense is at base noncognitive, that it is a product of our “passions” or sensations built into us like a “moral organ”. While morality may often seem relative to culture and upbringing, it is constrained by a “universal grammar” common to all moral orders. That grammar, I believed, was the key to resolving the moral divergences between tribes. If we could only speak clearly about our shared inheritance there could be no lasting rational disagreements.

Joshua Greene’s “Deep Pragmatism”

Consider this a premonition of what Joshua Greene has since dubbed “deep pragmatism”. Greene is also a Harvard neuroscientist and expert on trolley problems, and his recent book Moral Tribes is also concerned with what he calls the “tragedy of common sense morality,” i.e. when divergent moral orders collide. While I am about to be quite critical of Greene, let me say at the outset that I am actually a massive fan, and that I tend to be most critical of the ones I love.

If it is fair to say Hauser’s theory merged Hume with Chomsky’s linguistics, then Greene’s theory merges Hume with Daniel Kahneman’s Dual Process Theory. He claims our non-cognitive passions are part of our System One, or automatic / intuitive mode. But if we study the evolutionary function of our passions, we can then use our System Two, or rational / conscious mode, to resolve impassioned disputes deliberatively. Specifically, Greene posits that if morality is fundamentally about enforcing cooperation in order to reap collective benefits, two tribes with distinct ethical systems for cooperation simply have to recognize that they are using different means but have common ends.

The only thing truly novel about Greene’s argument is its tantalizing terminology. Indeed, on a recent EconTalk episode Greene admits that “deep pragmatism” is just his word for plain vanilla utilitarianism. Despite formal utilitarianism’s many problems, Greene believes clashing cultures can settle disputes by consciously reformulating their ethics based on the greatest good for the greatest number. When pressed by the host with counter-examples, Greene contended that the problems with his proposal are either merely empirical or due to an insufficient application of utilitarianism (for thinking too short-term, say).

I believe Greene makes three fundamental mistakes and thus has not provided a compelling solution to the tragedy of common sense morality. On top of this, his scientific pretenses distract from the fact that his core moral arguments come straight from the proverbial armchair. Indeed, as meticulously demonstrated in Selim Berker’s “The Normative Insignificance of Neuroscience,” Greene has a tendency to obscure his philosophical presuppositions behind a fascinating and important, but ultimately tangential, deluge of empirical data.

The three deep mistakes Greene makes are: a) to accept the Humean starting point of moral noncognitivism, b) to reify deontological thinking and utilitarian thinking as “System One” and “System Two” respectively, and c) to leap to utilitarianism when, even accepting his premises, better alternatives exist.

Deep Problems:
a) Noncognitivism Is False

Noncognitivism rose in popularity after the Enlightenment in large part due to an incorrect Cartesian view that morality, like belief, required an ultimate foundation. Hume put foundationalism to the test by taking it to its logical conclusion. In lieu of an infinite regress, Hume realized that connecting “ought” to “is” was impossible. Thus noncognitivism — and thus moral skepticism. And while Hume’s argument was valid, the premise that we need foundations in the first place was dead wrong.

Since Quine, philosophers have largely accepted coherentism for beliefs. That is, it makes most sense to think of any particular belief as inhabiting a holistic web of beliefs rather than to link beliefs in a linear chain of justifications down to some “foundational” belief. When we are persuaded to change our beliefs we thus often are required to update a large number of interdependent beliefs to ensure coherence.

It turns out the same Quinean argument works for desires, preferences and other vernaculars for Hume’s passions. It’s tempting to think of desires as following a linear chain down to some base foundational affect, implanted somewhat arbitrarily by evolution. But this is an elementary error.

While it is true that evolution has equipped us with certain somatic states (like hunger pangs), a desire (like “I desire to eat”) contains propositional content. Like beliefs, desires are part of a holistic web that we draw from in the discursive game of giving and asking for reasons. In turn, desires, like beliefs, are capable of being updated based on rational argumentation and the demand for coherence.

For whatever reason ethicists have been much slower to embrace coherentism for morality, preferring to soak in tired debates like deontology vs consequentialism. Greene is no different. And his attempted foundationalist argument for utilitarianism has not closed Hume’s gap one iota.

b) Dual Process Theory is Irrelevant

Using fMRIs to conflate deontology with automatic thinking and consequentialism with deliberative rationality is invalid, and it does not advance the argument. To quote University of Toronto philosopher Joseph Heath in his overview of empirical approaches to ethics:

Greene offered no reason to think that the theory of value underlying the consequentialist calculus was not based on the same sort of emotional reactions. In this respect, what he was really doing was presenting an essentially sceptical challenge to moral reasoning in general, yet optimistically assuming that it undermined only the position of his opponents.

Moreover, there are good reasons for thinking that deontological modes of reasoning are essentially cognitive. As Heath argues in his book Following the Rules, social norms take the form of a web of deontic constraints that we reference just as we reference beliefs or desires when pressed to defend certain behavior. This makes social norms — and deontology in turn — analytically cognitivist. That is, regardless of the fact that deontic violations are more likely to elicit an emotional response, deontic reasoning must still inherently make use of System Two at some point.

Greene even acknowledges the more plausible explanation for why deontological violations cause more emotional fMRI activity than utilitarian ones: namely, that they each require different kinds of construal. Utilitarian reasoning tends to be about system wide outcomes and that level of construal imposes a psychological distance between the agent and the moral dilemma. But even if there is a link between construal level and dual process theory, just because utilitarian thinking is slow does not make slow thinking utilitarian!

c) Utilitarianism is a Non Sequitur

Even accepting all of Greene’s major premises, the conclusion of utilitarianism is still unwarranted. Greene suggests that the social function of moral psychology points to a “common good” through cooperation, but utilitarianism is only one possible interpretation.

In economics there are two basic approaches to social welfare, one top down and the other bottom up. The top down approach is the closest in spirit to the utilitarianism expressed by Greene. It posits a social welfare function and conditions that must hold for its maximization, aka the greatest good for the greatest number. Adherents of this approach have spanned centuries, from Bentham up to Pigou.

The other approach begins with the process of transaction itself. It posits that two people will only exchange if they each perceive a mutual advantage in doing so — that is, if the trade will move them toward a Pareto improvement or win-win outcome. This is at the heart of bargaining theory, which would presumably make it a good candidate for solving the “tragedy of common sense morality” or any scenario where conflicting interests or value systems collide.
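The contrast between the two approaches can be sketched in a few lines; the utilities and allocations below are hypothetical, chosen only to show that a utilitarian optimum can leave one party worse off than the status quo, while the Pareto standard screens such outcomes out:

```python
# Hypothetical utility outcomes for two parties, A and B.
status_quo = (5, 5)                      # what each gets with no agreement
candidates = [(5, 5), (2, 12), (7, 6), (6, 7)]

def utilitarian_choice(options):
    """Top-down: pick the allocation that maximizes the sum of utilities,
    regardless of who is made worse off."""
    return max(options, key=lambda u: u[0] + u[1])

def pareto_improvements(options, baseline):
    """Bottom-up: keep only allocations both parties would consent to,
    i.e. neither is left worse off than the status quo."""
    return [u for u in options if u[0] >= baseline[0] and u[1] >= baseline[1]]

print("Utilitarian pick:     ", utilitarian_choice(candidates))               # (2, 12)
print("Pareto-acceptable set:", pareto_improvements(candidates, status_quo))  # [(5, 5), (7, 6), (6, 7)]
```

In this toy example the greatest total good requires making A worse off, which is precisely the kind of outcome the mutual-advantage standard refuses to impose.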

Batalla – Sebastian Franck (1640)

One of the worst “tragedies of common sense morality” in history occurred in the 1600s, when Protestants and Catholics fought throughout Europe in the Thirty Years’ War. From the ruin rose modern Liberalism and the legal basis for religious toleration and value pluralism. Liberalism’s core value is thus mutual advantage in the Paretian sense, not a crude formula for util maximization.

In fact there is a substantial literature within trolley problem research analyzing the effect of Paretian considerations on moral judgement. Greene is even a contributor. Indeed, in all sorts of artificial moral dilemmas, subjects are consistently more likely to judge harm as permissible if it leads to a Pareto improvement.

For instance, this 2011 paper [pdf warning] co-authored by Marc Hauser suggests that “Paretian considerations should be treated as an abstract principle that is operative in folk-moral judgment across a wide variety of contexts, involving different sources of threat and different degrees of contact.” Note that this fits the criteria for Greene’s “deep pragmatism” surprisingly well, without any of the attendant controversy or highly demanding prescriptions surrounding Peter Singer-style utilitarianism. Indeed, the authors are correct to report that Paretian considerations “provide a reason for action even for the non-consequentialist.”

Conclusions

Despite my skepticism toward Joshua Greene’s “deep pragmatism,” I strongly commend his efforts. In fact his project is mostly in line with my own approach. Yet its current manifestation suffers from philosophical naiveté.

Humean noncognitivism is tempting for any student of psychology, but it turns out to be philosophically untenable. Indeed, by their very nature the deontic statuses we assign taboos and other social norms are part of a cognitive process of giving and asking for reasons. We can even reason and debate over our desires and preferences since (in contrast to pure affect) they carry propositional content.

Furthermore, while utilitarian calculations often require overriding our more intense “gut reactions,” that does not make them any more foundational to morality. This is especially the case when it is always possible to interpret ostensibly utilitarian outcomes as resulting from a bottom-up process that respects the Pareto standard.

And from the point of view of resolving tragedies of common sense morality, liberal principles like value neutrality and free expression, which implicitly endorse the Pareto standard, have never been more influential on a global scale, nor more vital for our continued peaceful coexistence. The inferiority of the utilitarian alternative is shown in the recent attacks on free expression in Paris. Who today could defend Charlie Hebdo’s provocative iconoclasm on purely utilitarian grounds in a country of perhaps 6 million Muslims?

Finally, it is important to remind ourselves that free expression as such is not a “Western Value” unique to the strategy of our hemispheric “tribe”. Rather, the Pareto standard of mutual benefit transcends the tribe and the individual as the only proven basis for peaceful, pluralistic civilization.

Ordoliberalism and The Myth of “Laissez Faire”

As many have noted before, even the great Adam Smith was not a believer in the kind of laissez-faire “self-regulating” ideology of, say, Alan Greenspan.

In Smith’s implicit model, the self-love that drives the butcher and baker to produce for the public good is an extremely special case, premised on the existence of a legal order including property rights and ample competition. One of Smith’s greatest anxieties about markets was that totally free exchange in the absence of these premises would invariably lead businessmen into “conspiracy against the public, or in some contrivance to raise prices.”

Hume held similar views, and as such based his political philosophy on the evolution of property rights as conducive to the public good, not on a “natural right” that existed prior to them. In Hume’s anthropological account, “possession” was a natural sentiment that through socio-cultural evolution became projected onto property, imbuing it with moral connotations above and beyond its legal ones.

For both these classical liberal thinkers, then, property rights are a useful convention for aligning incentives to promote the stability of possession, investment and economic prosperity, and thus are objects of what Hume called artifice, i.e. part of an artificial system of justice.

This reading is what motivates me to call Ordoliberalism the truest modern expression of Scottish classical liberalism. Ordoliberalism is an economic school developed in Germany in the decades surrounding WWII that, like the classicals, posited the interdependence of the social, legal and political orders in creating an economic order. To an ordoliberal, “laissez-faire” has no content. For economies to reach their theoretical potential, they depend on legal, cultural and regulatory orders that align behavior with Smith’s “special case”.

In this view, law is a public good that must be calibrated to encourage co-ordination. Ignoring the societal outcomes of a legal system based on the dogma of natural rights, on the other hand, leads to disaster. For example, in the early 1900s the cartelization of the German economy was upheld by “laissez-faire” courts emphasizing the inalienable right of freedom of contract. Ordoliberals recognized the dramatic expansion of cartels as a key factor behind the rise of national socialism, and thus made competition law a central plank in their post-war project.

To Ordoliberals, the property system, including contract and liability law, was itself as much a type of competition policy as anti-trust measures like explicit acts prohibiting monopoly. This point of view is sometimes referred to as Ordnungspolitik, an organic conception of regulatory policy as integral to the economic order. The Ordo Yearbook of Economic and Social Order was thus one of the earliest academic journals to publish what today is called law and economics, public choice theory and constitutional economics, with a special emphasis on limiting the influence of rent seeking through a strong, independent state apparatus. Note that these are all fields which blur the line between market and state, consistent with the interdependence thesis.

This appreciation of legal and political process made Ordoliberals particularly suspicious of ad hoc interventions, preferring a rules-based system consistent with the qualities of law, particularly stability and generality. As such, Ordoliberals have always been wary of discretionary fiscal policy, and from early on pushed for political independence for the monetary authority. Even today, Germany’s “austerian” approach to the Euro Crisis is based on a solution consisting of supply-side reform and fiscal prudence in the periphery, rather than ECB expediency. Scholars have explained this stance as stemming from the influence of ordoliberal thinking within the CDU/CSU and FDP parties.

Ordoliberalism remains influential within the Christian Democratic and Social Union parties because of its intellectual precursor in Catholic social doctrine. In the 1930s, Catholic academics developed a social teaching based on natural law and subsidiarity, arguing that liberty and prosperity through free markets and property rights were essential to upholding human dignity, but needed to be complemented with provisions for social security.

Ordoliberals transformed this teaching into the principles of the social market economy, the system which underpinned Germany’s post-war economic miracle. The social market is based on the principle that social insurance can actually be complementary to liberty, innovation, and expanding output. Indeed, social market policies were explicitly conceived as part of an effort to avoid a return to either socialism or fascism.

An updated version of this view is called the “compensation hypothesis” and it has a fair amount of empirical support. Liberalization is unpopular because it creates short run winners and losers, and therefore threatens the economic security of incumbents who naturally call for ad hoc government protections. The compensation hypothesis maintains that policies such as free trade and labour market liberalization will therefore be more popular in regimes with larger safety nets.

This focus on path dependency and public choice helps the Ordoliberal dissolve the dogmatic debates between “capitalists” and “socialists”. For example, in a dynamic world it may very well be the case that public pension plans are pro-market. From a public choice perspective, corporate welfare policies like auto-industry bailouts are a clear consequence of special interest lobbying through trade unions. Looking closer, the bailouts are less about the company than about shoring up massive pension liabilities. Thus an ordoliberal could argue that to oppose an actuarially transparent public pension scheme is to tacitly support the maleficent alternative.

Modern libertarianism could learn a lot by studying the ordoliberal example and reading their early writings. The early ordoliberals of the Freiburg Circles were commendable adversaries of totalitarianism and get too little appreciation among libertarians due to their embrace of social security. But it is this very school of thought that resurrected liberalism in Germany and continental Europe!

Even Hayek was deeply affected by ordoliberal thinking, and maintained a close friendship with one of its founders, Walter Eucken, who influenced both The Road to Serfdom and the formation of the Mont Pelerin Society. Hayek’s concept of “spontaneous order,” for instance, contains the imprint of their frequent dialogues on the nature of Ordnung (and couldn’t be farther from the endorsement of “self-regulation” that his naive supporters suppose). Indeed, Hayek’s address to the inaugural Mont Pelerin meeting, “‘Free’ Enterprise and Competitive Order,” includes what I take to be a perfect single-sentence encapsulation of the ordoliberal credo:

While it would be an exaggeration, it would not be altogether untrue to say that the interpretation of the fundamental principle of liberalism as absence of state activity rather than as a policy which deliberately adopts competition, the market, and prices as its ordering principle and uses the legal framework enforced by the state in order to make competition as effective and beneficial as possible — and to supplement it where, and only where, it cannot be made effective — is as much responsible for the decline of competition as the active support which governments have given directly and indirectly to the growth of monopoly.

When one considers how US regulatory capture has only seemed to worsen since the conservative revolution (“starve the beast,” “privatize, privatize, privatize,” and so on), Hayek’s words ring as true as ever.


Pleasurable, Exalted Terror

Edmund Burke wrote that “whatever is qualified to cause terror is a foundation capable of the sublime.” But rather than serving as a category of aesthetics, in contemporary English the word is mainly used by the pretentious to flatter one another. It has thus lost much of the nuance that originated in Burke’s treatise On the Sublime and the Beautiful, in favor of yet another superlative for “good”.

Strictly speaking, something is sublime if it uses the infinite or incalculable to create an experience of beauty that incorporates fear or a sense of being overwhelmed. For example, I reserve sublime to describe my first visit to Niagara Falls, whose dramatic horseshoe of roaring waters transfixed me in a torrent of terror and tranquility.

Yet the sublime does not have to refer to natural wonders or artistry. Indeed, many social phenomena can be sublime. Slavoj Žižek once argued that ideology is related to the sublime, owing to an influence over social reality that defies perception. Specifically, he claims ideologies require a “sublime object” that carries an irreproachable greatness, be it God, the King or the proletariat.

The general idea comes from Kant, who wrote that the sublime is a “formless object” representing our intrinsic inability to perceive vastness or complexity, thus elevating “nature beyond our reach as equivalent to a presentation of ideas.” In confronting such objects, we at once feel displeasure “arising from the inadequacy of imagination in the aesthetic estimation of magnitude” and a “simultaneous awakened pleasure, arising from this very judgement of the inadequacy of the greatest faculty of sense…”.

In ideological space, this inadequacy of imagination parallels the subject’s inability to articulate the nature of their deepest political commitments, which in turn creates a similar “awakened pleasure” in the knowledge that their cause defies a complete description.

In this sense, there is something strangely sublime surrounding the recent brutal interfaces between state and citizen in New York,  Mexico, Hong Kong, and elsewhere. To appreciate the scope and complexity behind these patterns of violence and protest is literally impossible. So out of necessity, our inadequate media elevates that which is beyond our reach to a coherent presentation of ideas. Indeed, it seems as if the news and social media act as a magnifying glass, concentrating public attention onto stochastically occurring tragedies until a spark creates ignition, giving producers the cue for the “national conversation” graphic along the lower third.

There are those who decry the news for exploiting “sensationalism,” but this is a mistake. What is being constantly exploited is precisely our craving for the sublime. Indeed, the grotesque scenes of protest that play across our screens, straining eyes that alternate from face to crowd to face, are genuine objects of beauty. And this in turn explains why as a society we have never been more at peace, but also never more in terror. Pleasurable, exalted terror.
