The American Federal System and the Administrative State


Imagine a group of friends sitting down to play a tabletop RPG.

They’ve picked a Dungeon Master ahead of time to plan out the adventure and generally serve as the arbiter of what occurs and what’s allowed.

The remaining friends put together their characters, choosing types (such as warrior or wizard), stats (such as how intelligent their character is as opposed to how strong or nimble), names, species, and so on.

There are rules to these games, but they are fairly flexible, to allow for creativity on the part of the Dungeon Master as well as the players.

Suppose that after playing a few times, some of the players get tired of it and want to switch to a different RPG. A space adventure, say. Neither the DM nor the rest of the players wants to give up on what they’ve done so far, though. So they strike a compromise: their characters in the current game will play an in-game version of the space RPG, and accrue experience points based on how well they do.

At first this takes up about a fifth of their gameplay. But gradually, they spend more and more time on the subgame. What’s more, they create more subgames, of many different genres. Some are so completely unlike the one they’re playing as to be hardly comparable—focusing on boring domestic scenarios, for instance. Or working together to solve puzzle games.

At what point can they be said to be playing the original game at all? What if 80 percent of their gameplay takes place in subgames? And what if much of the remaining 20 percent is spent deciding which subgame to play or creating new ones? At what point does the original game vanish entirely as an entity?

The original game is formally higher up on the hierarchy than the subgames. The DM could decide to have a dragon attack while their characters’ attention is caught up playing house in a subgame. But to the extent that it’s hard to get a group of friends together who will regularly commit their time to a common game like this, the DM can’t just do whatever he wants. If people think he’s being unfair or aren’t having any fun, they can walk away.

From Penny Arcade

If enough people do this, the game will simply be dead.

In short, the DM is constrained insofar as he wants to avoid killing the game entirely.

I ask again: at what point is it absurd to refer to the original game at all?



McCloskey’s Straw Foe: Retro-Institutional Economics

I’m a big fan of Deirdre McCloskey. Over the past two years she has become one of my favorite intellectuals, living or dead. At the very least, she’s responsible for introducing me to virtue ethics, my obsession for the past year. So I hate that my first piece for Sweet Talk is rather critical of a recent article of hers.

McCloskey has a new paper titled “Max U vs. Humanomics: A Critique of Neo-Institutionalism” (PDF) appearing in the Journal of Institutional Economics, in which she takes the New Institutionalist school to task for failing to account for the influence of words and ideas in economic history, and especially for asserting that the Great Enrichment (the explosion of incomes from roughly 1800 to the present) can be explained without these concepts. The problem is that she doesn’t seem to be talking about the same institutional economics that I’m familiar with (loosely, I admit, as a rank neophyte).

She claims that the neo-institutionalists view institutions as mere rules-of-the-game, and that their attempts to introduce culture and ethics into the mix are tautological.

“Norms” are one thing, “rules” are another. The neo-institutionalists turn their arguments into tautologies by melding the two. They end up saying, “Social change depends on society.” One supposes so. “Informal constraints” are not informal if they are constraints, and if they are informal the theory has been reduced to a tautology, because any human action is now by definition brought under the label “institutions.” The neoinstitutionalists have nothing non-tautological to say about ethics, because they have not read the immense literature on ethics since 2000 BCE, including the literature of the humanities turning back to look at the rhetoric of language. Being economists, raised on the childish philosophy that separates positive and normative when most of our scientific lives are spent in their intersection, they are quite unwilling to bring ethics seriously into their history and their economics. As one of them said genially to me, “ethics, schmethics.”

But this is not what I have seen. Consider this review article by Alberto Alesina on culture, institutions, and how they interact. Alesina, by the way, is one of several scholars in the institutional economics field, familiar even to me, whose absence from McCloskey’s works cited puzzled me.

Alesina discusses a handful of cultural variables employed in the literature that have obvious ethical correlates. One is “generalized trust,” measured by surveys (like the World Values Survey) asking how far people feel they can trust others and which kinds of people can be trusted (just family? people you meet on the street? only white folks?). It’s also measured in behavioral laboratory studies. It’s a component of what various social scientists refer to as social capital, but a Thomist like McCloskey or a McCloskeyan like me might better recognize it as the theological virtue of faith. Studying this character trait’s prevalence in populations and across time seems quite apposite to McCloskey’s own project of giving newfound dignity for the regular, non-aristocratic, non-privileged person a place of prominence in explaining the Great Enrichment. Bracketed comments are mine:

Trust has been shown to be relevant as an explanatory variable of economic development (Knack and Keefer (1997)) and individual performance (Butler et al. (2013)), financial development, participation in the stock market and trade (see Guiso, Sapienza, and Zingales (2004, 2008a, 2009)) [Faith gives one the hope and courage to participate], innovation (Fukuyama (1995)) [not piling brick upon brick but innovation led to the Great Enrichment] and firm productivity (Bloom, Sadun, and Van Reenen (forthcoming) and La Porta et al. (1997)). For a general review of the impact of trust on various economic outcomes, see the work of Algan and Cahuc (2013).

A closely related subject is “generalized versus particular morality,” a measure of how far beyond family or other tight social circles ethical consideration should stretch. Does the stranger deserve ethical consideration? A member of a religious/racial/sexual minority? A foreigner? Ethics, schmethics indeed.

Individualism versus collectivism is another trait that is measured, and one can see how an individualism measure might correspond to a concept of dignity. “In individualist societies, the stress is put on personal achievements and individual rights.” These are societies where the individual is accorded dignity, and not merely subsumed into the prerogatives of the State or the Party or the Church or, as a matter of actual fact, the elites who dominate those entities.

The strength and importance of family bonds are also measured, and their influence on economic outcomes is discussed in the literature. One can see how this might have effects similar to collectivism’s, and also be of interest to a humanist scholar concerned with the individual’s dignity to formulate and execute his own life plans, relatively free from overbearing, illiberal families. This is a trickier trait, admittedly, as the family is important for the individual’s sense of identity (a species of faith, McCloskey might say).

“Attitudes toward work and perception of poverty” are grappled with in this literature. Again, this might be of interest to the author of The Bourgeois Virtues.

There are of course limitations to the research methods employed in this more up-to-date neo-institutional economics. Surveys surely don’t tell the whole story of the human conversation, and laboratory games can be misleading even when they don’t succumb to WEIRD sample problems. These methods can be accompanied and informed by the humanities, even up to the English and Philosophy departments. But this research program and the results it has yielded are better than McCloskey gives them credit for.

I’m decidedly not an expert in this field (I’ve never even taken an economics class!), but it strikes me as not only powerful on its own terms, but also fertile for the kind of rhetorical and ethical investigations McCloskey brings to bear. Alesina’s review discusses the ways culture and institutions influence each other in reciprocally causal ways. Institutions affect the cultural and indeed ethical behavior of individuals. As McCloskey notes, “Once someone is corrupted by life in a communist country, for example, it is hard to reset her ethics. She goes on relying on the “bureau” model of human interaction as against the market.” For sure. But culture influences institutions as well. Cultural traits, the conversation or “conjective” (or the Logos), the virtues as embodied in the people; these all influence the institutions that are formed and how they evolve and eventually fade unto dust.

And there are cultural shocks that can radically change institutions. The Reformation, the Sexual Revolution, Marxism, Christianity, etc. The gay rights revolution is a contemporary example of a rhetoric-ignited cultural shift that is altering institutions right before our eyes. McCloskey’s own thesis embeds nicely within the neo-institutionalist community of rhetoric, if I dare say. The Enlightenment, or certain ideas and conversations therein, acted as a cultural shock whereby the common person began to be seen with a dignity all her own, and this new dignity (and the liberty that accompanied it) encouraged her to try out new ideas, to innovate.

My great fear is that McCloskey has not listened, really listened, to the strongest formulations of the ideas she disparages.

 

Of Course The U.S. Can Do Better

But do we want to? And why would we want to? Why should efficiency and progress be a goal of a federal government?

It was sweet of Sam Hammond to respond to my gigantic troll piece the way he did: normally, trolling Sam is like poking a hornet’s nest after chaining yourself to the tree; I expected to be obliterated (I would have deserved it), but he showed the kind of restraint that makes Sweet Talk Conversation the Azores of the internet.

As I mentioned, I spend half my working life in Canada, and I also travel infrequently overseas and to Central America to do leadership training among charitable organizations, so I hear and experience quite a bit of the States as an outsider looking in, fielding all sorts of questions, responding to myths and misconceptions, and also taking no small amount of grief for our many sins. As an example, on Tuesday a Canadian gentleman I work with was astonished that Obama is not the most popular person in America, given that he gave poor people free health care. It was all I could do to maintain decorum enough to explain that, no, that’s not quite the issue. In short, I’ve had to explain America to non-Americans and even non-Anglophones (a special challenge by itself) an awful lot.

There’s a similar thing going on with Sam, who has a tenacious grip on facts and concepts, along with the many relationships and moving parts, and I hope I can remain one of his students, at least until the time comes when his intellect flies to a place mine can no longer fathom. Nevertheless, he glossed over one part of the Spirit of ’76, treating a feature of our founding as more of a design quirk. He said, “The US federal government was not designed to be good at stuff.”

I will quibble here, with all due respect to Sam, whom I must have irritated to the point of distraction. I think it is better to say that the US federal government was, in fact, designed to not be good at stuff.

Abraham Lincoln, of mixed fame, reminded the world of this feature of our founding at Gettysburg, when he intoned, “Four score and seven years ago…,” which refers to the year 1776. Nowadays we take it for granted that the American Declaration of Independence is the loading of the gun that fired the shot heard ’round the world. But that perspective did not become the default until after Lincoln was enshrined in the American Virtues Hall of Fame. Until that moment at Gettysburg, the American Civil War was being fought over the interpretation of the U.S. Constitution, i.e., whether states had the right to determine this or that, with “this or that” including something as heinous as slavery, and not just human slavery, but the slavery of a race of men.


In other words, pro-Constitutionalists who took the side of states’ rights were arguing in favor of tyranny. They were using the Constitution to exercise tyranny.

In the land of freedom? Shall tyranny be encoded in the Constitution of the land which declared independence from a tyrant?

The American spirit, from that day in Gettysburg forward, has always reached beyond the actual founding of the country according to the ratification of the Constitution in 1787–beyond that to the Declaration of Independence in 1776. What Lincoln did was revolutionary, highlighting the actual intent of the crafters of the Constitution, namely that it shall be so messy that no tyrant shall seize free people.

An efficient and progressive federal government is too tempting a prize for an ambitious person, and, on balance, I think that Thomas Jefferson, Alexander Hamilton, George Washington, James Madison, and Benjamin Franklin would look at the drooling, unwieldy behemoth that is presently the United States Government, and they would rejoice. What tyrant would seize control of this?

It is not without great irony that Abraham Lincoln would hear shouted as he was perishing, “Sic semper tyrannis!” He had just made it virtually impossible for a tyrant to arise. See subsequent American history, current events included.

The US Can Do Better

My favourite country in the world is the United States of America, and it’s not just because of the cheap liquor and whores. It’s because the USofA creates uncompensated benefits for the rest of the world. And on top of that, she is always trying to do better, in spite of her best efforts.

So while reading David’s latest tour de force at my expense, I couldn’t help but feel he was dead right. Right that the US political landscape is a knot of non-cognitivism. Right that education, poverty and basic infrastructure could probably use some attention. Right that corruption and rent-seeking at the highest levels are tolerated as if they were an intrinsic bug in democracy. But David is quick to remind me of the private sector. Yes, the US has an inadequate collective action mechanism, but don’t sweat it. It controls a mere 35% of a $17.5 trillion economy. And there’s a whole other 65%!

The irony is that despite all that failure to collectively act, the US is one of the world’s greatest providers of the most basic public good of all: innovation. Innovation makes the world permanently richer through breakthroughs that all countries are free to copy. And there, Canada cannot compete. We may stand on guard for our affordable and high quality health insurance, but we’re also a small, exporting country with dismal R&D rates. Contrast this with the private capital and survival instinct that torrents through US medical institutions, and you’ll be left with the impression that Canada and the world at large enjoy a free ride.

Yet the resilience of the US private sector is no reason to neglect the sorry shape of its public sector. On the contrary, it’s all the more reason to be dissatisfied. Every time $4 billion gets spent on a handful of competitive senate races, it’s a Human Genome Project that wasn’t. Every time a social program is sabotaged while a free market solution is outlawed, another cohort slips into the limbo of a safety net designed by taking the average of two incompatible philosophies. And as the world’s most productive economy, the US is not only shooting itself in the foot — it’s using the world’s most expensive bullets.

As David alludes to, the US federal government was not designed to be good at stuff. This leads some to conclude that the path of least resistance is a libertopia fait accompli to minimize losses. But times have changed. Wagner’s Law means a wealthy public demands its goods, which in turn means that so long as the US is democratic it endogenizes a role for good governance. The Founders never anticipated the New Deal, much less the Affordable Care Act. We can only thank the grace of God that they checked the Bill of Rights for typos.

I also recognize that the US is not going to hold a constitutional convention anytime soon, much less apologize for sedition and crawl back to the crown. It’s just not in the dominoes. But it doesn’t need to be. The issue isn’t the vital need for a rationally constructed privy council. The issue is that the micromotives and macrobehavior of the US political economy are out of whack.

Social criticism is based on the premise that we can and should do better. And to that end, comparative institutional analysis is indispensable.

How Public Welfare Enhances Social Capital

Lamenting the atomism of modern society, the decline of community, associations and other forms of “social capital,” is such a common refrain on both the left and right that one wonders why they haven’t put aside their differences to form a club! I hear there are some vacancies down at the YMCA, and I bet rates have never been so good. Call it… the Enemies of Anomie & Toastmasters Society.

Yet theorists of social capital spend more time writing about it (in itself a highly autonomous practice) than they do actually forming new covalent social bonds. Perhaps it’s because, for both camps, the decline is seen to have been caused by such deep and hard-to-resist forces that they are equally resigned to pontification.

On the right, the deep source of creeping atomism is the all-encompassing, bureaucratized welfare state. Redistribution in this view is inherently trust-reducing due to its zero-sumness (Mary robbing Peter to pay Paul). For example, it’s argued that universal social programs crowd out private safety nets, like religious organizations or the family, destroying unseen pro-social externalities. In some accounts this merely accelerates a feedback loop of eroding social norms that was initiated the second Western Civilization embraced value pluralism.

Surprisingly, many on the left have come to similar conclusions, if only in a different vocabulary. Habermas, for example, has argued that state welfare systems “colonize” more natural forms of solidarity, contributing to their “reification” — an objectifying process by which implicit social relations are made explicit and impersonal, sapping them of their moral character. Readers of Sweet Talk might know this as a re-balancing from the sacred to the profane, the inherent transcendental and instrumental duality of all social relations.

Heady stuff. But is any of it accurate? Is it an inexorable law of late capitalism that we become individuated narcissists? Is there some theorem in Public Choice that says more welfare = less social capital? The answer to both is a big fat no.

In fact, the inverse relationship between social capital and the modern welfare state has been greatly exaggerated. There are three main reasons for this tendency, which I explore below.

Multiple Ethical Equilibria

In The Bourgeois Virtue of Contingency, I explained my skepticism about McCloskey’s thesis that the industrial revolution took off due to a change in societal values, namely the adoption of a bourgeois ethic that gave dignity to merchants and commerce.

Using historical examples, I tried to show that norms and values arise spontaneously following changes to the boundary conditions of our rules and institutions. Of course, new values must ultimately shape institutions as well, but it will depend on the rate of diffusion and whether, once the ideas are out there, they can coalesce in a politically potent way.

In the conversations that followed, McCloskey criticized the Acemoglu and Robinson view of economic development as “add-institutions-and-stir.” She buttressed her criticism with examples, such as the failure of time clocks to correct nurse absenteeism in India. According to McCloskey, what was lacking was an ethic of professionalism — that is, the mutual expectation of professionalism from nurse to nurse required to instill the disciplining sense of an Impartial Spectator.

My caveat is to say that in this and every other case, the details matter. In a sense, introducing a mere time clock assumed too much in terms of existing social capital. For example, in a disciplined household, a parent’s stern look can alone induce obedience, while dysfunctional households often involve the most yelling and corporal punishment. It’s a phenomenon that, of all things, reminds me of monetary policy and what economist Nick Rowe calls the fallacy of concrete steppes. As Rowe explains it, by nature humans like linear explanations, a set of concrete steps like “add a time clock”, yet market equilibria are by nature non-linear:

Sometimes the future causes the present, because people’s expectations of the future affect what they do in the present…

Next week, millions of Canadians will get up and go to work about one hour later than they did this week, if we measure time by the sun. “What concrete steps will the government take to get Canadians to do this? Can anybody tell me that?”

No, I can’t tell you that. I don’t have a clue what concrete steps, if any, the government will take. But I fully expect it will work. All the government does is announce that it wants us all to do this, and to put all our clocks back one hour. Maybe the government has the power to force government clocks back one hour, and force some government workers to start and leave work one hour later by the sun. But the rest of us just follow along, simply because we expect everyone else to follow along.

When monetary policy focuses on the concrete steps, we get a ballooning monetary base. Credible central banks, on the other hand, can conduct looser monetary policy simply by making a credible threat, keeping base money flat. In the context of parenting, the household with the screaming parent is like Japan during their Lost Decade, taking many concrete steps with no results, while the disciplined household uses credible expectations management, meaning the bed gets made with mom nary lifting a finger.

All social institutions have similar multiple equilibria, and so I am totally on board with rejecting the naivety of the “add-institutions-and-stir” view. Expectations and credibility are just as important as the concrete steps. Yet equating ethics with expectations is misleading, since expectations are largely engendered by facts about the institutions.

Aggregate demand stagnated in Japan because people’s spending was partially contingent on the expectations of other people’s spending, diallelus. The emptier theories explain this as a failing of “animal spirits.” Or as I can imagine McCloskey putting it, perhaps Japan lacked an ethic of monetary velocity…

[Figure 5.1: Multiple Equilibria]
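To make the picture concrete, here is a minimal sketch of a two-action coordination game, with payoff numbers I have invented purely for illustration (none of this is drawn from Acemoglu and Robinson, Rowe, or McCloskey). The same rules and the same payoffs support two equilibria, and a simple unilateral-deviation check, of the kind described in the quotation further down, confirms both.

```python
# A minimal coordination-game sketch. The payoff numbers are invented for
# illustration only; think of the actions as "spend"/"hoard" for Japan, or
# "show up"/"shirk" for the nurses.

# My payoff, given (my action, the action I expect the other side to take).
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):   -1,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    0,
}

def best_response(expected_other):
    """My best action, given what I expect the other side to do."""
    return max(("cooperate", "defect"), key=lambda a: PAYOFF[(a, expected_other)])

def is_nash(profile):
    """True if neither side can gain by deviating unilaterally."""
    mine, theirs = profile
    return best_response(theirs) == mine and best_response(mine) == theirs

for profile in [("cooperate", "cooperate"),
                ("defect", "defect"),
                ("cooperate", "defect")]:
    print(profile, "is a Nash equilibrium:", is_nash(profile))
```

Both all-cooperate and all-defect pass the test. The “concrete steps” (the rules, the payoffs) are identical in the good world and the bad one; expectations alone select between them.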

One final example: in my office, absenteeism picks up on Friday afternoons (surprise!). This is despite many “training workshops” designed to encourage a culture of professionalism. The more effective technique, however, is for the boss to simply do an office walk-around every Friday at 4pm, and then let those absent hear about it Monday morning. Slowly, the boss can make his walk-arounds less frequent but with a surprise element. Absenteeism will drop off, self-reinforced by mutual expectations and the lingering credibility of this “professionalism anchor”.
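The walk-around can be sketched in the same spirit. Below is a toy simulation with wholly invented parameters (the benefit, penalty, peer-pressure, and memory numbers are assumptions for illustration, not estimates of anything): each worker shirks when the private benefit outweighs the perceived audit risk plus the pull of peer expectations, and the perceived risk decays in the weeks when nobody checks.

```python
# Toy model of the Friday "professionalism anchor". All parameters are
# invented for illustration; nothing here is calibrated to real data.

N = 100
# Each worker has a different taste for skipping out on Friday afternoon.
benefits = [0.5 + 2.0 * i / (N - 1) for i in range(N)]
PENALTY, PEER_COST, MEMORY = 6.0, 1.5, 0.7

def run(audit_weeks, n_weeks=40):
    q = 0.0          # perceived probability of a surprise audit
    working = 1.0    # fraction of colleagues seen at their desks last week
    history = []
    for week in range(n_weeks):
        # I shirk if my benefit beats expected punishment plus peer pressure.
        threshold = q * PENALTY + PEER_COST * working
        shirking = sum(1 for b in benefits if b > threshold) / N
        working = 1.0 - shirking
        audited = week in audit_weeks
        q = MEMORY * q + (1 - MEMORY) * (1.0 if audited else 0.0)
        history.append(round(shirking, 2))
    return history

# No audits at all: the norm unravels within a few weeks.
print(run(audit_weeks=set()))
# A ten-week blitz of audits, then only the occasional surprise visit.
print(run(audit_weeks=set(range(10, 20)) | {24, 28, 32, 36}))
```

In the first run shirking climbs to 100 percent within three weeks. In the second, once the blitz has reset expectations, shirking stays at or near zero even though most Fridays nobody checks: the equilibrium is held up mostly by mutual expectations and the lingering memory of the last audit, which is the point about credibility.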

In the case of the Indian nurses, Acemoglu and Robinson report (p. 491) that the program was actually effective at first, but was progressively sabotaged by the administration “in cahoots” with the nurses. The lesson here is not that an ethic was lacking, but that, like monetary policy, credible expectations depend largely on the rule enforcer’s independence. It’s a lesson drawn straight from the work of Thomas Schelling:

A would-be reformer who wants to improve social welfare by changing people’s behavior to a better equilibrium must take care to identify a social plan that is in fact a Nash equilibrium, so that nobody can profit by unilaterally deviating from the plan. If a leader tries to change people’s expectations to some plan that is not a Nash equilibrium, then his exhortations to change behavior would be undermined by rational deviations. The point of this example is that, even when the better equilibrium is well understood, there still remains a nontrivial social problem of how to change everyone’s expectations to the better equilibrium. Such coordinated social change requires some form of socially accepted leadership, and thus it may depend on factors that are essentially political.

In this light, it’s almost self-aggrandizing to blame our First World incorruptibility on the absorption of certain values. Instead we should recognize that in the background of ostensibly voluntary obedience to the norms of property and exchange is a credible threat of incentive-compatible rule enforcement. If that credibility evaporates, not surprisingly, so does most of our high-mindedness.

Generality & the Boundary Conditions of Liberalism

Adam has a new post raising the problem of liberalism and neutrality:

Liberal Neutrality is the idea, embraced by people like Hayek, that the ideal of pluralism, the system it engenders, is itself ethically neutral.

I see two related concepts. The first sense of ‘liberal neutrality’ is the meta-value of pluralism, the value that others should be able to express their values. I’ve discussed the varieties of libertarian value neutrality before here.

The other sense of ‘liberal neutrality’ is what Hayek called the generality norm, or the principle that the rule of law should apply evenly, without targeting individuals or speciously discriminating against groups.

One way to connect the two is to realize generality is, in James Buchanan’s words, “the sine qua non of law itself.” The neutrality of value pluralism relies on the generality of law, in the sense that a non-discriminatory set of rules shouldn’t favor one group over another (equality under the law). Buchanan argues that this is part of a social contract that gives the state and law legitimacy.

Hayek emphasized the generality of the rule of law as well, though his was an evolutionary rather than contractarian account. Here’s a quotation from Eugene Miller describing Hayek’s view as found in The Constitution of Liberty:

A command is an order to someone to take a particular action or refrain from it; and it presupposes someone who has issued the command. A law, by contrast, ‘is directed to unknown people,’ and it speaks in an impersonal voice. It abstracts ‘from all particular circumstances of time and place’ and ‘refers only to such conditions as may occur anywhere and at any time.’  …

The principle of generality does not, however, encompass the requirement that ‘any law should apply equally to all.’ A law might be general and yet make different provisions for different classes of persons and, where classes are defined narrowly, implicitly favour specific individuals …

Hayek’s case for freedom is not built around the idea of individual rights, but, nonetheless, rights are vital to his account of the Rule of Law. These are not to be understood as natural rights, in the Lockean sense, but as rights that have evolved historically and have found expression in various constitutional provisions.

This is closer to my own, historicist view. The extent to which law must be general is a function of its level of abstraction and thus what conceptual level of construal one is forced to assume, an “impersonal voice”. When designing rules or laws for a large population they must be “analytically egalitarian” and more or less utilitarian, or risk making a category mistake. Hayek’s view was that this meant carefully designing institutions to maximize competition. Later he backed away from rationally constructing laws for competition, for epistemological reasons. He still favored a decentralized legal process that ordered society towards its theoretical potential, as it were. Let’s call it “laissez-faire within rules,” or “planning for freedom,” or “the boundary conditions of liberalism”.

What I like about that argument is that it relies solely on conceptual appropriateness (aka the ‘supervenience constraint’: conceptual properties ‘supervene’ on real properties). People can understand why “the house was happy” makes a kind of category error in the form of a metaphor. However, to say “the house was well ordered” at least connects a description (ordered) that can supervene on the object (a house). This is the same kind of care Hayek takes when he argues against social justice. If you’re careful you can make some interesting arguments of this sort without ever becoming a moral realist.
