Does the Is-Ought Divide Make Atrocities More Palatable?

David Hume, seen here contemplating the benefits of the one-child policy

Those who might object to the incendiary title of this post need to stop being such mood affiliators. I’m merely asking a positive question; quit being so normative about it!

This week, on reports that China will be moving from a one-child policy to a two-child policy, I was horrified to see a slew of pieces on the benefits of the original policy, or on how it “worked.” Then, when people had the predictable human reaction to such a framing, they received, without fail, the same rationalism-bot response: an accusation of “mood affiliation.”

The original source of my own horror was Tyler Cowen’s post, Did China’s one-child policy have benefits?

The post is masterful in its way:

  • He starts out by saying that the policy is an immoral restriction on personal freedom so of course he opposes it.
  • Then he points to a paper on how it nevertheless increased investment per child.
  • But the paper, and he, conclude that decreased fertility explains the lion’s share, if not all, of this effect, so the policy’s impact in this regard was probably small, or, as he puts it, “it is highly likely the policy has been obsolete for some while.”

When I read this post, I thought to myself: he believes he’s washed his hands by making the categorical assertion about infringement of liberty. And he thinks that the rest is a simple is-ought matter, or as economists like to say (following Milton Friedman), that he’s doing positive, factual economics as opposed to anything normative.

And moreover, the moment that anyone expresses outrage, he will call them a “mood affiliator,” which he defines as follows:

It seems to me that people are first choosing a mood or attitude, and then finding the disparate views which match to that mood and, to themselves, justifying those views by the mood. I call this the “fallacy of mood affiliation,” and it is one of the most underreported fallacies in human reasoning. (In the context of economic growth debates, the underlying mood is often “optimism” or “pessimism” per se and then a bunch of ought-to-be-independent views fall out from the chosen mood.)

This is in the fine and growing tradition of saying one is cognitively impaired through some bias, or of invalidating an argument by calling some piece of it a fallacy, rather than coming out and saying that you think the other person is arguing in bad faith (or, more simply, is mistaken).

To summarize:

  • A morally monstrous policy that, in a slightly moderated form, continues to terrorize innocent people, is appraised specifically in terms of its benefits.
  • The author washes their hands by saying that of course they’re normatively opposed to the policy, they just want to examine it positively!
  • The costs—in a bloodless utilitarian sense or in the sense of human costs—are not even mentioned.
  • Anyone who reacts with predictable horror at this framing is told that they’re simply being illogical or irrational.

And indeed, in discussing this post a defender of it implied that I was being a mood affiliator.

And consider this glowing comment on the post itself:

This was an excellent post. It clearly separates the normative from the positive, separates means from ends, addresses the most obvious objections (e.g., for twins, “what about birth weight”?) immediately, and kicks the comments beehive.

Excellent post. Clearly separates the normative from the positive, and means from ends! What more can one wish for?

The Unbearable Gulf of Facticity

Hume’s famous (and in some circles, infamous) is-ought divide comes, of course, from his A Treatise of Human Nature. The frequently quoted passage is as follows:

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention wou’d subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceiv’d by reason.

Rather than subverting “all the vulgar systems of morality,” it seems to me that this passage has instead given birth to an endless string of philosophers who take it as a matter of revealed truth that you cannot get a normative implication from a fact about the world.

But that isn’t what Hume said. As Charles Pigden explains:

He was not denying that the moral can be defined in terms of the non-moral. He was merely denying the existence of logically valid arguments from the non-moral to the moral. This becomes clear once we note that Hume does not think that he has to argue for the apparent inconceivability of is/ought deductions. It is something he thinks he can take for granted. This is what we would expect if he were making the logical point since it would have been obvious to his readers. For it was a commonplace of Eighteenth Century logical theory that in a logically valid argument the matter of the conclusion – that is the non-logical content – is contained within the premises, and thus that you cannot get out what you haven’t put in. Thus if an ‘ought’ appears in the conclusion of an argument but not in the premises, the inference cannot be logically valid. You can’t deduce an ‘ought’ from an ‘is’ by means of logic alone.

This isn’t quite as Earth-shattering a revelation as it continues to be made out to be, because you can’t do much of anything by means of logic alone. Moreover, the obvious implication is that if you bake a relationship between is and ought into your premises, you can absolutely derive an ought deductively. For whatever that is worth.
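To make the point concrete, here is a minimal formal sketch of such a deduction. The bridging premise P1 is my own illustrative example, not anything Hume endorsed; the point is only that once an “ought” sits in the premises, the conclusion follows by ordinary logic:

```latex
% Illustrative only: a valid is-to-ought deduction, given a normative bridge premise.
\begin{align*}
\text{P1 (bridge, an ought):} \quad & \text{whatever causes great needless suffering ought to be avoided.} \\
\text{P2 (fact, an is):}      \quad & \text{this policy causes great needless suffering.} \\
\text{C (an ought):}          \quad & \text{therefore, this policy ought to be avoided.}
\end{align*}
```

All the philosophical work, of course, goes into defending P1; the logic itself is trivial.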

Hume himself wanted to build up a foundation for morality from human nature—hence the name of the Treatise. Morality was not found in “the relations of objects” but rather in subjects, in our inborn moral sense. Our sense of what we ought to do, in Hume’s own system, originates in facts about our nature. That is obviously different from deducing what we ought to do in a given situation, but even in Hume, this gulf does not appear to be so very vast.

Hume Was Contributing to a Project Which Has Failed

The Treatise only makes sense in the context of the Cartesian foundationalist project. Reason as it was understood in the Enlightenment was framed by Descartes’ project of radical doubt: only that which could be proved beyond any doubt counted as justified, rational belief. His doubt took him right down to the level of whether he himself even existed, and the fact that there must be a questioner in order to ask such questions is the essence of the famous cogito ergo sum. From there, he sought to build reality back up through abstract reasoning. No one thinks he succeeded.

But he did succeed in imposing a new vision of rationality and reason, one inside which modern discussions are still all too often caged. This is the notion that what is rational is only that which rests on some ultimate foundation that cannot be questioned. No such foundation exists; in fact, the very idea of one is incoherent, and so all contributions to this project were doomed to fail. Including Hume’s.

Descartes and his contemporaries also developed the subject-object distinction, something that has at this point become basically everyone’s background assumption of how reality and knowledge work. You can see it in Hume’s argument that “the distinction of vice and virtue is not founded merely on the relations of objects.” Operating within the subject-object schema, he argues that morality is not objective, “out there,” but rather a characteristic of subjects. This is not to be confused with our modern use of the word “subjective” to mean merely arbitrary. Hume clearly believed that noncognitivism, as it came to be called, provided a kind of foundation for morality. Just a different sort from those who believed in an external, objective moral law.

Unlike foundationalism, the subject-object distinction does have its uses. It has clearly been a helpful tool throughout the centuries. But it is not the only set of joints at which to carve reality, nor, in my judgment, the best. I think we’d all do better to start treating it as a special case rather than assuming it by default.

How to Think Like a Human

Socrates often avails himself of the sophistic arts of argument, but whatever the reason might be, that he does cannot in any way be attributed to some deficiency in the logic of that time. To be sure, Aristotle was the first to clarify the essential theoretical foundations of drawing correct conclusions, and in so doing he also explained the deceptive appearance of false arguments. But no one could seriously contend that the ability to think correctly is acquired only by a detour through logical theory. If we find in Plato’s dialogues and in Socrates’ arguments all manner of violations of logic—false inferences, the omission of necessary steps, equivocations, the interchanging of one concept with another—the reasonable hermeneutic assumption on which to proceed is that we are dealing with a discussion. And we ourselves do not conduct our discussions more geometrico. Instead we move within the live play of risking assertions, of taking back what we have said, of assuming and rejecting, all the while proceeding on our way to reaching an understanding. Thus it does not seem at all reasonable to me to study Plato primarily with an eye toward logical consistency, although that approach can of course be of auxiliary importance in pointing out where conclusions have been drawn too quickly. The real task can only be to activate for ourselves wholes of meaning, contexts within which a discussion moves—even where its logic offends us. Aristotle himself, the creator of the first valid logic, was well aware of how things stand here. In a famous passage in the Metaphysics, he declares that the difference between dialectic and sophism consists only in the choice or commitment in life, i.e., only in that the dialectician takes seriously those things which the sophist uses solely as the material for his game of winning arguments and proving himself right.
-Hans-Georg Gadamer, Dialogue and Dialectic

Nothing can stand up to Cartesian doubt. If that’s the standard, then philosophy will fail every time.

It seems clear to me that scientists have achieved an astonishing amount when it comes to understanding the world we live in, however incomplete that understanding might remain (and perhaps always must). So when someone trots out an epistemology that calls into doubt our ability to know anything at all, it seems to me that it is that philosopher, rather than the scientists, who begins to look rather foolish.

Similarly, it seems to me that every day, people manage to leap from an understanding of the facts to what they ought to do. Often they feel they have made mistakes, or think others have made them. But to trot out a bit of formal logic (you cannot get an ought in the conclusion if you only have an is in the premises) and then treat any normative conclusion drawn from facts as a fallacy seems, to me, ridiculous.

I will grant that people’s inferences about what they should do are not as strong a piece of evidence as the atom bomb, the astonishingly precise and accurate predictions of quantum physics, or the results of germ theory. Nevertheless, the argument against such inferences is even weaker than this evidence. As Gadamer put it, “no one could seriously contend that the ability to think correctly is acquired only by a detour through logical theory.” And invoking this particular piece of logic is, in context, quite weak indeed.

Which brings us back to the piece that set me off: Tyler Cowen’s post on the one-child policy.

Look at the title again. The question is “Did China’s one-child policy have benefits?”

And yet the commenter I quoted above assures us that this cleanly separated “normative from positive.”

What, dear readers, is a “benefit”?

This is the problem with economic analysis in general. Economists talk a big talk about separating the normative from the positive, and thus honoring the is-ought distinction, but the very notion of a benefit is normative!

The counterargument, of course, is that economics defines a “benefit” in terms of individuals’ own subjective valuations. But that is not a morally neutral choice, either. It demands, at least while performing analysis within the framework, that we exclude every other notion of what a “benefit” is. Pure facticity is to be found nowhere in this.

The Humean turn in this regard is just one more way in which Enlightenment rationalism has made us very bad at thinking carefully about our rhetoric. Tyler Cowen is usually very careful in this regard, which is part of why I was stunned by the rhetoric of his post. But it’s clear, as his commenter observed, that provocation was a main goal of the framing. I find this irresponsible given the horrors of the policy in question, horrors which go unmentioned in the post.

Moreover, while he did not levy the accusation this time, this bit about “mood affiliation” is part of another trend that is due for a reversal: that of taking anything human about the nature of arguing and categorizing it in a way that makes it easy to dismiss.

Let’s take a relatively old one: the “ad hominem” fallacy that is the bread and butter of all Internet arguing.

When we’re talking about an Internet troll who is simply insulting you, of course there’s nothing productive going on.

But once you step out of foundationalism, it starts to become clear how much of what we know rests on whom we can trust. Trust puts the “faith” in arguing in “good faith.” If we cannot trust the one making an argument, we cannot trust the argument—simple as that. Taking an argument on its merits is not simply looking at the logic of the thing; it is also looking at the credibility of both the one advancing it and the sources cited.

It’s one thing, in other words, to call someone who is ranting in a knee-jerk, thoughtless manner a “mood affiliator.” I don’t know why you wouldn’t simply call such a person unhinged, but if sounding more clinical floats your boat, who am I to judge?

But when you’re talking about the benefits of a policy that is not only an infringement on freedom, but involves an ongoing brutality and inhumanity of a systematic sort, it is shameful to call the predictable negative response to it “mood affiliation” or a failure to understand the is-ought distinction.

Take Responsibility for Your Rhetoric

The Politics of Truth had several explicit targets and many more implicit ones that I felt best left unnamed. I will continue to leave them unnamed, but here are a couple of passages that might be of interest:

There’s a problem inherent to this fact of our nature. What if merely asking a particular question automatically made you less credible? And what if any group that systematically investigated that question, and other related ones, became similarly less credible as a result?

This is no abstract point. Asking “what should be done about the Jewish problem?” tells us a lot about who you are and the group you belong to. Anyone with an ounce of decency would not pay any attention to the claims made by the kind of person who would seriously ask and investigate that question, or the groups that would focus on it. The only attention that such people would draw would be defensive—it would be the attention we give to something we consider to be a threat.

Further down:

Now a modern libertarian, good child of the Enlightenment that they are, wants to assert the clear mathematical truth that no individual vote makes a difference. Especially at the national level. I have seen this happen many times, as a GMU econ alumnus. I have done this many times myself.

The problem is that it’s never really clear to me what people hope to accomplish by doing this, other than being correct. Certainly the strong emotional backlash shows that there’s a cherished idea there. But other than casting ourselves in the role of unmasker of a manifest truth, what do we do when we insist upon this point?

A single vote does not sway an election, and therefore it follows…what? Other than undermining a cherished idea, which is indeed incorrect, what exactly is the larger value of the specific point?

In my view, “I believe in freedom so I would never implement the one-child policy, but if I didn’t, here are some benefits” is highly irresponsible and impolitic rhetoric.

Here’s one way that the substance of Cowen’s post could have been framed:

Subject: “Did Decreasing Fertility in China Result in Higher Relative Human Capital Investment?”

Body: “A recent paper suggests that this is the case, drawing on the natural experiment created by twins. The authors ultimately conclude that the one-child policy likely had little impact here, due to pre-existing trends of declining fertility in China and the rest of the region.”

Sounds different, doesn’t it? And the substance really is basically the same.

To frame it in terms of “the benefits” of the one-child policy, especially with substance so reasonable as that, is either poorly thought out or unethical.

But perhaps I am just a mood affiliator who does not properly understand the is-ought distinction.

Separation Anxiety

Friendly Society

If given the choice to live in a world dominated by risk or uncertainty, I would choose risk every time. Risk is manageable. Risk can be hedged. Risk can even bring people together.

Take the Friendly Societies of the 18th and 19th centuries. These predecessors of the modern insurance cooperative helped to distribute financial risks among their members. But in addition to providing access to a doctor’s care or income in tough times, members could literally count on a shoulder to cry on. This decentralized social safety net had the by-product of strengthening the community all around, in the form of civic engagement, social events, and close-knit relationships.

Friendly and mutual aid societies flourished for over 300 years in places like England thanks to the challenge of measuring risk accurately, especially at the individual level. For the most part, insurance schemes, both formal and friendly, brush over the immense heterogeneity of risk types to come up with a flat rate or membership premium — an average cost — which in our fundamental ignorance we agree to pay.


In fact, when individual risk is well known (usually by the individual him or herself), our ability to manage risk with insurance or mutual aid tends to break down. For instance, a person seeking health insurance may choose to conceal their heightened risk of cancer by not sharing their family history. In their famous 1976 paper, Rothschild and Stiglitz showed how this kind of information asymmetry makes insurance hard, if not impossible, to sustain. In contrast, consider that a young man cannot easily conceal the salient fact that he is young and male. Since this correlates with worse driving, auto insurers are able to separate customers into several pools, and to charge several prices, without worry of members misrepresenting their risk type.

In the jargon of game theory, this is the difference between a pooling and a separating equilibrium, and it’s not limited to insurance. In any scenario where the type of a person or good is not directly observed, you instead observe a signal — a piece of communication — which may or may not be informative. But when different types give off different signals, even if those signals are not wholly accurate, types can be discerned, separated, and priced accordingly.
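Here is a toy sketch of that unraveling in code. The costs, pool composition, and premiums are made-up numbers for illustration only, not figures from any actual insurance market:

```python
# Toy illustration: how a flat-rate pool unravels once risk types
# become observable. All numbers are invented for the example.

LOW_RISK_COST = 100     # expected annual claims of a low-risk member
HIGH_RISK_COST = 500    # expected annual claims of a high-risk member
N_LOW, N_HIGH = 80, 20  # composition of the membership

# Pooling equilibrium: risk is unobservable, so everyone pays the average.
pooled = (N_LOW * LOW_RISK_COST + N_HIGH * HIGH_RISK_COST) / (N_LOW + N_HIGH)
print(f"Flat-rate (pooling) premium: {pooled:.0f}")   # 180

# Separating equilibrium: a signal (say, age and sex for drivers)
# reveals type, so each pool is priced at its own expected cost.
print(f"Low-risk premium:  {LOW_RISK_COST}")          # 100
print(f"High-risk premium: {HIGH_RISK_COST}")         # 500

# Each low-risk member saves 80 by separating, which is exactly why the
# flat rate cannot survive once the signal exists: a competitor can
# always skim the low-risk types away with a cheaper offer.
```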

In the case of England, Friendly Societies tended to be grouped around industry, skill level, and other imperfect “types.” As a whole, then, the Friendlies weren’t totally unsophisticated. But relative to modern insurance, the mechanisms available for making members reveal their riskiness were first-order approximations at best.

History’s Card Sharks

This wasn’t necessarily a bad thing. In the limit, if every person had an idiosyncratic and public risk profile, insurance would be like trying to play a round of poker with the cards face up. Rather than spreading the cost of car accidents, or health care, or unemployment across large groups, we would be much closer to paying our own way in full. While this could be considered efficient in a narrow sense (each person consensually pays his or her marginal cost), in practice it could also be disastrous. Rather than having the congenitally lucky occasionally support the unlucky, the unlucky would lose by predestination. There’s no point in bluffing — you’re simply dealt the hand you’re dealt.

Now imagine Friendly Societies as represented by a group of casual poker players who meet regularly. The play is sloppy and heuristic-based, and no one really knows how to calculate pot odds. Sometimes you’re up, sometimes you’re down, but in the long run everyone tends to break even. Then one day a new player is invited, a player who happens to be a poker tournament champion and retired statistician. Sometimes he’s up, sometimes he’s down, but in the long run the rest of the table ends up consistently going bust. A friendly game among a friendly society suddenly isn’t that friendly anymore, and the group disbands. This is more or less the story of how Friendly Societies went from flourishing to sudden decline around the turn of the 20th century.

Innovations in the science of actuarial analysis (the statistical study of risk) had been diffusing through society since at least 1693, when Edmond Halley constructed the first “life table,” allowing him to calculate annuities based on age. Not long after, in 1738, Abraham de Moivre published “The Doctrine of Chances,” credited with discovering the normal distribution that Gauss greatly expounded on in the 1800s. Then in 1762, The Equitable Life Assurance Society was founded, with the first modern usage of the term “actuary” (the company exists to this day as Equitable Life, the world’s oldest mutual insurer). As a profession, however, the actuary was truly born much later, in 1848, with the founding of the Institute of Actuaries in London, thanks to breakthroughs in measurement and accounting techniques (such as commutation functions) that brought the doctrine of chances from theory to practice.
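For a feel of what Halley’s table made possible, here is a minimal sketch of the underlying annuity calculation. The survival probabilities and interest rate below are invented for illustration; they are not Halley’s Breslau figures:

```python
# Sketch of a life-annuity valuation: the expected present value of 1
# paid at the end of each year the annuitant is still alive.

def annuity_value(survival_probs, interest_rate):
    """survival_probs[t] is the probability of surviving at least
    t + 1 more years; payments are discounted at interest_rate."""
    v = 1.0 / (1.0 + interest_rate)  # annual discount factor
    return sum(p * v ** (t + 1) for t, p in enumerate(survival_probs))

# Made-up five-year survival curve for some annuitant.
survival = [0.95, 0.89, 0.82, 0.74, 0.65]
print(f"Fair price of the 5-year annuity: {annuity_value(survival, 0.06):.2f}")
```

The key move, and Halley’s insight, is that a mortality table turns an uncertain lifespan into a schedule of probabilities that can be priced.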

Scientific actuaries were history’s card sharks. In order to compete, Friendly Societies were forced to adapt — to learn to better calculate the odds — and ultimately they converged on many of the same administrative, procedural, and “scientific” insurance-like structures. The growing (and widely misused) economic surplus this generated fueled an insurance boom that peaked in the latter part of the 19th century. For efficiency advantages, societies began deepening national networks well beyond the scope of brotherly love, expanding risk classifications, and reducing exposure to high-risk types.


As risk became better classified, the “flat rate” pooling equilibrium of the 18th century and earlier rapidly became untenable. Across Europe, the market became increasingly separated, with many differentiated premia and some high-risk types pushed out altogether. This fueled a growing industrial unrest that culminated in the consolidation of private social insurance schemes into nationally run systems.

Commercial insurance, by generating a burst of competition and transitory political instability, was in a sense a victim of its own success. But as many economists have noted, while decidedly non-voluntarist, national schemes (like the one instituted in the UK by the National Insurance Act of 1911) were able to discover large efficiencies of scale through lower administrative intensity, tax-based collections, and a comprehensive risk pool. This transaction-cost advantage — and the centuries of social capital it crowded out — guarantees that the days of the close-knit mutualists are gone for good, save for some religious congregations. In their stead stands L’Etat Providence — the Welfare State — via a historical process that (as I’ve described) was most rigorously identified by the French legal scholar François Ewald in a book of the same name.

Classification

The point of this essay (if you’ve made it this far) is to suggest that we are in the midst of a measurement and statistical revolution of a scale equal to or greater than the 19th-century diffusion of actuarial science, with potentially many of the same social and political implications.

With the $99 genotype and the sub-$1,000 whole-genome sequence, in the near future the idea of an insurer asking for your family history of cancer will seem quaint. The immense promise of genomics and personalized medicine also portends the inevitable collapse of large, relatively heterogeneous insurance pools, in favour of equally “personalized” healthcare cost schedules.

As I hinted at earlier, this phenomenon of moving from pooling to separating equilibria following advances in measurement technology is by no means limited to risk or health care. Any qualitative distribution can in theory be mapped to a price distribution, but winds up collapsing into a single price given practical measurement constraints. For example, in the past mediocre restaurants were partially supported by the churn of ignorant consumers, since reliable ratings and reviews were hard to come by. Today, rating platforms like Yelp.com mean that restaurants of different quality have more room to raise or reduce prices accordingly, to separate based on credible signals. It’s the end of asymmetric information.

In the corporate setting, pooling equilibria are represented by relatively flat salary structures for a given seniority, department, or education level. Sometimes there is a commission or performance bonus, but day-to-day productivity is rarely if ever tracked. This opacity is what permits the possibility of zero marginal product (ZMP) workers: workers who literally contribute nothing to a firm’s output.

For any given kitchen, at some point an additional cook does not actually produce more food. And while it can be misleading to say that any particular cook is non-productive (maybe there are simply too many cooks in the kitchen), in deciding which cook to dismiss it matters a great deal that the cooks aren’t all equally productive. Indeed, the individual contributions of workers to a firm’s output are often extremely heterogeneous, with the top 20% of workers sometimes contributing as much as the bottom 80%.
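As a toy check of that claim, consider what even a modestly heavy-tailed productivity distribution implies. The lognormal parameters here are chosen purely for illustration, not estimated from any data:

```python
# Toy check: under a lognormal productivity distribution with a
# moderate spread, the top fifth of workers produce roughly half
# of total output. Parameters are illustrative, not estimated.
import random

random.seed(0)
workers = sorted(random.lognormvariate(0, 0.85) for _ in range(10_000))

cutoff = int(len(workers) * 0.8)  # boundary of the top 20%
bottom_80 = sum(workers[:cutoff])
top_20 = sum(workers[cutoff:])
print(f"Top 20% share of output: {top_20 / (top_20 + bottom_80):.0%}")
# prints roughly 50%, i.e. the top 20% match the bottom 80%
```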

With automation and artificial intelligence reducing the demand for human inputs, the kitchen, so to speak, is shrinking. It has therefore become paramount for firms to identify the 20 and eject the 80. The contemporary increase in country-level inequality is widely recognized as technology-driven, but few have put their finger on the micro-foundations that explain why. Part of the story is surely “human-machine substitutability,” but in addition firms have simply started monitoring and classifying worker productivity better than in the past. This leads to a separating equilibrium that shows up in the data as job-market polarization, rising premia on central “signals” like college degrees, and (to the extent that signals are sticky) reduced social mobility. Unsurprisingly, a class-based society is first and foremost a society which classifies. The silver lining in this case is that, rather than classes based on pedigree, nobility, or race, the future promises to be highly — if not brutally — meritocratic.


In one future scenario, just as actuaries identified groups that were uninsurable, perhaps large sections of society will discover that they are unhirable. Supposing they have too many “one-star” ratings, as it were, on their HR record, their only hope will be to work for fellow one-stars, and to build a matching economy around their mediocrity. This is essentially the “Average is Over” scenario imagined by Tyler Cowen, who foresees the return of shanty-towns across the United States. But I wouldn’t bet on it. In many ways, the recent calls for a “universal basic income” exactly parallel the early 20th century’s push toward nationalized social insurance. Only here it is labour income itself that would be nationalized, as part of the inescapable political economy of separation anxiety.

My own anxiety stems from that fundamental uncertainty of the future, as if the social order is dancing along a knife edge dividing two radically different steady states. In either state — from hyper-meritocracy to a New New Deal — the case of the Friendly Societies demonstrates that the only thing for certain will be the loss of our sacred intangibles: the unmeasured qualities that united distinct types under one roof, from the fraternal lodge to the corporate office.

Do What Humans Do Best, Automate the Rest

AB’s post is inspiring because it touches on an intuition I’ve had that I haven’t seen discussed too much in the technological unemployment debate.

The thing that we think humans are good at is actually what they are terrible at. Namely: reason, logic, strict formal rule-following.

As Kahneman or Baumeister or any psychologist will tell you, thinking through a lot of math equations is extremely hard for us. Multiplying 3,464,900 by 4,562 in your head, without recourse to pen and paper, puts a strain on you, and it doesn’t take many such problems before your capabilities are compromised and you start giving inaccurate answers.

Once we have the concepts of numbers and multiplication, automating that process just makes sense. A cheap computer can give you the right answer to the above question in a fraction of a second. Human laborers will never outperform automation once the algorithm-makers have isolated the nature of what needs to be done over and over.
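For instance, the entire strain, handed to a machine, amounts to a single line of Python:

```python
# The multiplication that taxes a human is instantaneous for a machine.
print(3_464_900 * 4_562)  # 15806873800
```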

But that brings us back to AB’s post. Computers aren’t so great at discovery. As AB put it:

But where do these algorithms come from? Who tells the robot how to make a better hamburger? I’ll tell you one thing, it sure isn’t going to be a computer programmer who can’t make anything fancier than ramen noodles himself.

Substitute for “hamburger” the next great X. In AB’s post, it’s the next great line of Toyotas. But X can be just about anything. And it’s hard to believe that finding it will always, or even mostly, take PhD-level skills. The lion’s share of the advancements of the Industrial Revolution came from tinkerers making discoveries through rote trial and error. Perhaps Tyler Cowen is correct that we have used up all the low-hanging fruit in this regard (I’m skeptical), but it seems unlikely that we have exploited many of the possibilities of combining this discovery process with after-the-fact automation.