My fellow Sweet Talker Paul offered some thoughts on moral objectivity this week. It was not so long ago that I was looking for this very sort of answer to this very question. As few as two, perhaps even one and a half years ago, I might have quibbled with the details but agreed with the spirit of the piece. Eight years ago I took a stab at something like it, though much less sophisticated.
Lately it is not the substance of the answer that I struggle with most. It is the question itself: is morality objective or subjective? Even intersubjectivity seems unsatisfactory, since most simply treat it as one or the other: either as pseudo-objectivity, or as just as arbitrary as plain old subjectivity.
In what follows I will offer, if not an answer, then a picture, an attempt to portray how matters appear. I am not yet at a stage where I could tell you the question to which this picture is a provisional answer.
The gap between 25 and 30 is not very big in absolute terms, but it can certainly seem that way.
That is how I felt as I reread a post on morality that I wrote five years ago. Back then, I was pretty devoted to David Hume’s philosophy, and had been for some time. Part of it was due to many conversations with my father, for whom Hume loomed large as well. Part of it was that I’d actually read Hume, and not much else where philosophy was concerned. And in spite of that lack of familiarity with the alternatives (a shortcoming not shared by my father) I had an utter certainty in it.
This in spite of the fact that the very philosophy I was certain of had, as a cornerstone, the idea that reasoning was powerless to discover or demonstrate much of anything. This cornerstone eventually led me to conclude that there was no such thing as reason at all. Looking back, I want to ask that 25-year-old—how can you be so certain, when you are so unfamiliar with the arguments to the contrary and don’t even believe that certainty is ever warranted?
Martin Cothran made basically this argument when he and his son Thomas engaged my father and me in discussion back at that time.
One aspect of my framework of the time was the notion that utter inconsistency didn’t matter. In talking about the relationship between God and morality, I argued that there was no logical connection between the two, and also that logical connections don’t matter.
Responding to my piece, Martin Cothran made a move that has lately become familiar to me:
I’m not sure I understand exactly what he is saying here in regard to the role of reason in moral discourse. If logic is “irrelevant to whether or not I believe in either God or morality,” then why should anyone find his point that there is “no logical inconsistency in the fact that I don’t believe in a divinity but do believe in morality” persuasive? If logic is not operative in the discussion of the relation between God and morality, then why is the absence of logical inconsistency in a position on this relation commendable?
As Martin pointed out, it’s rather hard to make any assertion at all with teeth if you don’t care about consistency. My argument about inconsistency undermined my argument about the logical connection between God and morality—if consistency doesn’t matter, then there could both be no logical connection between God and morality, and be a completely crucial logical relationship between God and morality—simultaneously. Productive analysis would prove impossible.
In discussing this point with my father, he argued that it doesn’t apply to human relations. His example was that it’s possible to both love and hate someone. But this isn’t a true inconsistency—no one would argue that love and hate are mutually exclusive. And if you do argue that, then you are committed to defending the idea that you can’t both love and hate someone at the same time, by definition.
Moreover, I very strongly believed that I had made a case for my Humean framework. I made what Joseph Heath identified as a typical non-cognitivist mistake: I deployed arguments that implied a general skepticism, yet believed they undermined only my opponents’ position.
Aporia, the beginning of so many philosophical investigations, is precisely the discovery of seeming inconsistencies. Not everyone needs to be bothered with such things, to be sure. But the advance of our knowledge requires us to attempt to uncover whether inconsistencies are merely superficial, or whether some deeper revision in our framework is necessary.
Man Cannot Flourish On Moral Sentiments Alone
The most particularly Humean aspect of my framework at the time was what I later learned was called non-cognitivism. It’s the idea that we don’t think something is wrong because of judgment or beliefs, but because of feelings. As my father put it in his contribution to the discussion five years ago:
We don’t reason our way to condemnation of child abuse: we grow angry at the sight of it. Later, we may devise rational arguments to persuade others – sometimes ourselves – of the rightness of our opinions and actions. But I feel the wrongness of child abuse with a delicacy that, say, an ancient Spartan would have lacked.
As the Sparta example was intended to indicate, there is a strong cultural element involved in the shaping of our moral sentiments. In this way, morality is reduced to unthinking feeling combined with unreflective cultural practice (rather than theory): a non-cognitive cocktail.
Among the evidence that he brings to the fore is Jonathan Haidt’s “The Emotional Dog and Its Rational Tail,” in which Haidt points out that people jump to moral conclusions first and rationalize after the fact.
This opposition is founded upon a category mistake. The question of why people believe in certain moral principles belongs to the discipline of moral psychology; the question whether those moral principles are true belongs to moral philosophy. Obviously people can believe in true things for false reasons: just because one’s belief that Caesar existed is predicated upon the belief that the television show “Rome” is a documentary does not make the historical arguments any less valid. Further, people often believe in true things on the basis of beliefs that don’t have much to do with the truth or falsity of the subject: people might believe in relativity theory because Stephen Hawking believes in it, but the fact that Hawking has an opinion on the subject doesn’t bear on the truth or falsity of relativity theory. The process of evaluating the truths of beliefs is distinct from the process of evaluating how different people come to their beliefs.
From our very different projections of the whole truth of morality, Jonathan Haidt’s study has commensurately different implications. For my father and me, it seemed to confirm the Humean processes at work. For Thomas, it seemed like an ad hominem attack on moral realism, because it attempted to discredit it by means of the psychology of particular people.
I have come around to Thomas’s point of view. The idea that Haidt’s study on moral reasoning discredits moral realism now seems to me akin to arguing that Kahneman’s studies showing people are bad at statistical reasoning are evidence against the validity of statistics.
Moreover, I have come to believe that all of non-cognitivism’s core premises are simply false. First, desires are not non-cognitive. As our own Sam Hammond put it:
It’s tempting to think of desires as following a linear chain down to some base foundational affect, implanted somewhat arbitrarily by evolution. But this is an elementary error.
While true that evolution has equipped us with certain somatic states (like hunger pangs), desire (like “I desire to eat”) contains propositional content. Like beliefs, desires are part of a holistic web that we draw from in the discursive game of giving or asking for reasons. In turn, desires like beliefs are capable of being updated based on rational argumentation and the demand for coherence.
Let’s take the child abuse example my father provided. We don’t just see a thing and unthinkingly get angry about it. In order to get angry, we have to have grasped what the situation is. The anger comes from the belief that child abuse is wrong and that it is occurring. This is crucial—in many cases of child abuse (or other wrongs), the thing is allowed to keep going on precisely because we don’t want to acknowledge that a loved one or an important person in the community is capable of doing such a thing. People go out of their way not to notice or acknowledge something going on right in front of them, because to notice would be to acknowledge a duty to act. How we construe a situation is not simply reflexive; we bear a dual responsibility, as audience to the situation and as part-authors of it.
The Spartan example points to how pliable even the belief that child abuse is bad can be. But culture, convention, and tradition are not simply unthinking practice. Like desire, they have a cognitive content, they have intentionality. As Adam Adatto Sandel puts it:
Despite all appearance and without explicit “innovation and planning,” tradition is constantly in motion. What might seem to be blind perpetuation of the old, or mechanical habit, always has a “projective” dimension. Handing down tradition really means adapting it to the current circumstances, maintaining it in the face of other possibilities. And through such preservation, tradition is constantly being redefined: “affirmed, embraced, cultivated.” Although this process operates, for the most part, unconsciously, it is nevertheless critical. The distinction between tradition and reason is ultimately unfounded.
For an in-depth treatment of this very subject, see this post. Suffice it to say that insofar as Martin and Thomas Cothran buy into Alasdair MacIntyre’s vision of tradition, I think they, too, fall into error.
But in an email, Thomas provided a picture of Aristotle’s method that strikes me as the correct one:
First, Aristotle doesn’t have a rigid system, his inquiry is responsive to the conditions of the everyday world. Second, Aristotle’s method does make room for cultural difference, but not in a way that precludes finding some ultimate truth. A culture may have good or bad practices and beliefs, but in any case they can neither be accepted uncritically nor ignored in favor of some abstract rule. And finally, Aristotle’s method requires that we evaluate his arguments for ourselves — he never encourages his readers to accept things on his own say so, and his thinking is designed to be taken up critically.
In terms of “finding some ultimate truth,” I interpret him as saying that we can find moral truths that are not culturally relative. However, we do not have some logical foundation that puts such truths beyond doubt, and what knowledge we obtain—ultimate or otherwise—is fallible knowledge, just like human knowledge in any domain.
The Tragic Nature of the World
My father asserted:
Every morality is erected on ideals. Every ideal is an unattainable model of behavior. Every good person is inching toward an ideal vision of himself: that person-as-he-could-be.
To which Thomas responded:
[T]he assertion that “[e]very ideal is an unattainable model of behavior” is manifestly false: many ideals (such as the ideal that people ought not murder each other) are more attained than not.
I think this is a nit-pick, and that Thomas misses the tragic vision of the world behind my father’s statement. Human arrangements and ideals always have gaps in them that cannot be filled; an ideal that is “more attained than not” is often either undemanding or, as in the case of murder, attained at the expense of some other ideal. The police and prison apparatus that we built up in this country over the past couple of decades is rife with abuses of its own, some quite horrible. But I don’t want to quibble over particular points—the larger image of human beings as inherently imperfect and imperfectible, but also as striving and struggling towards betterment, is the right one.
It is because of this tragic nature, however, that authority is an ineradicable aspect of human life. Martin Cothran asked me how, in my framework at the time, “any moral statement can be considered authoritative over human behavior.” My father answered with a call to embrace contingency:
By embracing contingency, nothing is lost. No moral proposition can ever be “authoritative over human behavior” – Cothran’s phrase – in the absolute way that gravity has authority over bodies in space. An ideal must always be chosen, and always there will be those who refuse to do so: bad persons, weak persons, good persons in a weak moment.
I have come to think that authority—of ideals and of persons—is an ineradicable aspect of human life. But I wouldn’t call it authority “in the absolute way that gravity has authority over bodies in space.” At minimum, it is contingent on the existence of the human race. And a good Aristotelian would say that a great deal is indeed contingent on the circumstances—without ruling out the possibility of ultimate (but human) truths.
Those who might object to the incendiary title of this post need to stop being such mood affiliators. I’m merely asking a positive question, quit being so normative about it!
This week, on reports that China will be moving from a one-child policy to a two-child policy, I was horrified to see a slew of pieces on the benefits of the original policy, or how it “worked”. Then, when people had the predictable human reaction to such a framing, they received, without fail, this same rationalism-bot response:
He starts out by saying that the policy is an immoral restriction on personal freedom so of course he opposes it.
Then he points to a paper on how it nevertheless increased investment per child.
However, both he and the paper think that the pre-existing decline in fertility explains the lion’s share, if not all, of this effect, so the impact of the policy in this regard was probably small, or “it is highly likely the policy has been obsolete for some while.”
When I read this post, I thought to myself: he believes he’s washed his hands by making the categorical assertion about infringement of liberty. And he thinks that this is a mere is-ought matter, or as economists like to say (following Milton Friedman), he’s talking about positive, factual economics as opposed to dealing with anything normative.
And moreover, the moment that anyone expresses outrage, he will call them a “mood affiliator,” which he defines as follows:
It seems to me that people are first choosing a mood or attitude, and then finding the disparate views which match to that mood and, to themselves, justifying those views by the mood. I call this the “fallacy of mood affiliation,” and it is one of the most underreported fallacies in human reasoning. (In the context of economic growth debates, the underlying mood is often “optimism” or “pessimism” per se and then a bunch of ought-to-be-independent views fall out from the chosen mood.)
This is in the fine and growing tradition of saying one is cognitively impaired through some bias, or of invalidating an argument by calling some piece of it a fallacy, rather than coming out and saying that you think the other person is arguing in bad faith (or, more simply, is mistaken!).
A morally monstrous policy that, in a slightly moderated form, continues to terrorize innocent people, is appraised specifically in terms of its benefits.
The author washes their hands by saying that of course they’re normatively opposed to the policy, they just want to examine it positively!
The costs—in a bloodless utilitarian sense or in the sense of human costs—are not even mentioned.
Anyone who reacts with predictable horror at this framing is told that they’re simply being illogical or irrational.
And indeed, in discussing this post a defender of it implied that I was being a mood affiliator.
And consider this glowing comment on the post itself:
This was an excellent post. It clearly separates the normative from the positive, separates means from ends, addresses the most obvious objections (e.g., for twins, “what about birth weight”?) immediately, and kicks the comments beehive.
Excellent post. Clearly separates the normative from the positive, and means from ends! What more can one wish for?
The Unbearable Gulf of Facticity
Hume’s famous (and in some circles, infamous) is-ought divide comes, of course, from his A Treatise of Human Nature. The frequently quoted passage is as follows:
In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention wou’d subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceiv’d by reason.
Rather than subverting “all the vulgar systems of morality,” it seems to me that it has instead given birth to an endless string of philosophers who take it as a matter of revealed truth that you cannot get a normative implication from a fact about the world.
He was not denying that the moral can be defined in terms of the non-moral. He was merely denying the existence of logically valid arguments from the non-moral to the moral. This becomes clear once we note that Hume does not think that he has to argue for the apparent inconceivability of is/ought deductions. It is something he thinks he can take for granted. This is what we would expect if he were making the logical point since it would have been obvious to his readers. For it was a commonplace of Eighteenth Century logical theory that in a logically valid argument the matter of the conclusion – that is the non-logical content – is contained within the premises, and thus that you cannot get out what you haven’t put in. Thus if an ‘ought’ appears in the conclusion of an argument but not in the premises, the inference cannot be logically valid. You can’t deduce an ‘ought’ from an ‘is’ by means of logic alone.
Which isn’t quite as Earth-shattering a revelation as it continues to be made out to be, because you can’t do much of anything by means of logic alone. Moreover, the obvious implication is that if you bake a relationship between is and ought into your premise, you can absolutely derive an ought deductively. For whatever that is worth.
Hume himself wanted to build up a foundation for morality from human nature—hence the name of the Treatise. Morality was not found in “the relation of objects” but rather in subjects, in our in-born moral sense. Our sense of what we ought to do, in Hume’s own system, originates from facts of our nature. Which is obviously different from deducing what we ought to do in a given situation. Nevertheless, even in Hume, this gulf does not appear to be so very vast.
Hume Was Contributing to a Project Which Has Failed
The Treatise only makes sense in the context of the Cartesian foundationalist project. Reason as it was understood in the Enlightenment was framed by Descartes’ project of radical doubt: only that which could be proved beyond any possibility of doubt counted as justified, rational belief. His doubt took him right down to the level of whether he himself even existed, and the fact that there must be a questioner in order to ask such questions is the essence of the famous cogito ergo sum. From there, he sought to build reality back up through abstract reasoning. No one thinks he succeeded.
But he did succeed in imposing a new vision of rationality and reason, one that modern discussions still are all too often caged inside of. This is the notion that what is rational is only that which has some ultimate foundation that cannot be questioned. No such foundation exists, and in fact the very idea of it is incoherent, and so all contributions to this project were doomed to fail. Including Hume’s.
Descartes and his contemporaries also developed the subject-object distinction, something that has at this point become basically everyone’s background assumption of how reality and knowledge work. You can see it in Hume’s argument that “the distinction of vice and virtue is not founded merely on the relations of objects.” Operating within the subject-object schema, he argues that morality is not objective, “out there,” but rather a characteristic of subjects. This is not to be confused with our modern use of the word “subjective” to mean merely arbitrary. Hume clearly believed that noncognitivism, as it came to be called, provided a kind of foundation for morality. Just a different sort from those who believed in an external, objective moral law.
Unlike foundationalism, the subject-object distinction does have its uses. It has clearly been a helpful tool throughout the centuries. But it is not the only set of joints at which to carve reality, nor, in my judgment, the best. I think we’d all do better to start treating it as a special case rather than assuming it by default.
How to Think Like a Human
Socrates often avails himself of the sophistic arts of argument, but whatever the reason might be, that he does cannot in any way be attributed to some deficiency in the logic of that time. To be sure, Aristotle was the first to clarify the essential theoretical foundations of drawing correct conclusions, and in so doing he also explained the deceptive appearance of false arguments. But no one could seriously contend that the ability to think correctly is acquired only by a detour through logical theory. If we find in Plato’s dialogues and in Socrates’ arguments all manner of violations of logic—false inferences, the omission of necessary steps, equivocations, the interchanging of one concept with another—the reasonable hermeneutic assumption on which to proceed is that we are dealing with a discussion. And we ourselves do not conduct our discussions more geometrico. Instead we move within the live play of risking assertions, of taking back what we have said, of assuming and rejecting, all the while proceeding on our way to reaching an understanding. Thus it does not seem at all reasonable to me to study Plato primarily with an eye toward logical consistency, although that approach can of course be of auxiliary importance in pointing out where conclusions have been drawn too quickly. The real task can only be to activate for ourselves wholes of meaning, contexts within which a discussion moves—even where its logic offends us. Aristotle himself, the creator of the first valid logic, was well aware of how things stand here. In a famous passage in the Metaphysics, he declares that the difference between dialectic and sophism consists only in the choice or commitment in life, i.e., only in that the dialectician takes seriously those things which the sophist uses solely as the material for his game of winning arguments and proving himself right.
-Hans-Georg Gadamer, Dialogue and Dialectic
Nothing can stand up to Cartesian doubt. If that’s the standard, then philosophy will fail every time.
It seems clear to me that scientists have achieved an astonishing amount when it comes to understanding the world we live in, however incomplete that understanding might remain (and perhaps always must be). So when someone trots out an epistemology that calls into doubt our ability to know anything at all, it seems to me that it is that philosopher, rather than scientists, who begins to look rather foolish.
Similarly, it seems to me that every day, people are able to leap from an understanding of the facts to what they ought to do. Often they feel they have made mistakes, or think others have also made mistakes. But to trot out a bit of formal logic, according to which you can’t get an ought in the conclusion if you only have an is in the premises, and then treat any normative conclusion drawn from facts as fallacious—seems, to me, ridiculous.
I will grant that people’s inferences about what they should do are not as strong a piece of evidence as the atom bomb, the astonishingly precise and accurate predictions from quantum physics, or the results of germ theory. Nevertheless, the argument against such inferences is even weaker than the evidence. As Gadamer put it, “no one could seriously contend that the ability to think correctly is acquired only by a detour through logical theory.” And invoking this particular piece of logic is, in the context, quite weak indeed.
Look at the title again. The question is “Did China’s one-child policy have benefits?”
And yet the commenter I quoted above assures us that this cleanly separated “normative from positive.”
What, dear readers, is a “benefit”?
This is the problem with economic analysis in general. Economists talk a big talk about separating the normative from the positive, and thus honoring the is-ought distinction, but the very notion of a benefit is normative!
The counterargument, of course, is that economics assumes a “benefit” in terms of individuals’ own subjective valuations. But that is not a morally neutral choice, either. It demands, at least while performing analysis within the framework, that we exclude other notions of what a “benefit” is. Pure facticity is to be found nowhere in this.
The Humean turn in this regard is just one more way in which Enlightenment rationalism has made us very bad at thinking carefully about our rhetoric. Tyler Cowen is usually very careful in this regard, which is part of why I was stunned by the rhetoric of his post. But it’s clear, as his commenter observed, that the provocation was the main goal of how it was framed. I find this irresponsible given the horrors of the policy in question, horrors which remain unmentioned in the post.
Moreover, while he did not levy the accusation this time, this bit about “mood affiliation” is part of another trend that is due for a reversal: that of taking anything human about the nature of arguing and categorizing it in a way that makes it easy to dismiss.
Let’s take a relatively old one: the “ad hominem” fallacy that is the bread and butter of all Internet arguing.
When we’re talking about an Internet troll who is simply insulting you, of course there’s nothing productive going on.
But once you step out of foundationalism, it starts to become clear how much what we know rests on who we can trust. Trust puts the “faith” in arguing in “good faith”. If we cannot trust the one making an argument, we cannot trust the argument—simple as that. Taking an argument on its merits is not simply looking at the logic of the thing, it’s also looking at the credibility of both the one advancing it and the sources cited.
It’s one thing, in other words, to call someone who is ranting in a knee-jerk, thoughtless manner a “mood affiliator.” I don’t know why you wouldn’t simply call such a person unhinged, but if sounding more clinical floats your boat, who am I to judge?
But when you’re talking about the benefits of a policy that is not only an infringement on freedom, but involves an ongoing brutality and inhumanity of a systematic sort, it is shameful to call the predictable negative response to it “mood affiliation” or a failure to understand the is-ought distinction.
Take Responsibility for Your Rhetoric
The Politics of Truth had several explicit targets and many more implicit ones that I felt best left unnamed. I will continue to leave them unnamed, but here are a couple of passages that might be of interest:
There’s a problem inherent to this fact of our nature. What if merely asking a particular question automatically made you less credible? And what if any group that systematically investigated that question, and other related ones, became similarly less credible as a result?
This is no abstract point. Asking “what should be done about the Jewish problem?” tells us a lot about who you are and the group you belong to. Anyone with an ounce of decency would not pay any attention to the claims made by the kind of person who would seriously ask and investigate that question, or the groups that would focus on it. The only attention that such people would draw would be defensive—it would be the attention we give to something we consider to be a threat.
Now a modern libertarian, good child of the Enlightenment that they are, wants to assert the clear mathematical truth that no individual vote makes a difference. Especially at the national level. I have seen this happen many times, as a GMU econ alumnus. I have done this many times myself.
The problem is that it’s never really clear to me what people hope to accomplish by doing this, other than being correct. Certainly the strong emotional backlash shows that there’s a cherished idea there. But other than casting ourselves in the role of unmasker of a manifest truth, what do we do when we insist upon this point?
A single vote does not sway an election, and therefore it follows…what? Other than undermining a cherished idea, which is indeed incorrect, what exactly is the larger value of the specific point?
In my view, “I believe in freedom so I would never implement the one-child policy, but if I didn’t, here are some benefits” is highly irresponsible and impolitic rhetoric.
Here’s one way that the substance of Cowen’s post could have been framed:
Subject: “Did Decreasing Fertility in China Result in Higher Relative Human Capital Investment?”
Body: “A recent paper suggests that this is the case, drawing on the natural experiment created by twins. They ultimately conclude that the one-child policy likely had little impact on this due to pre-existing trends in declining fertility in China and the rest of the region.”
Sounds different, doesn’t it? And the substance really is basically the same.
To frame it in terms of “the benefits” of the one-child policy, especially with substance so reasonable as that, is either poorly thought out or unethical.
But perhaps I am just a mood affiliator who does not properly understand the is-ought distinction.
In a secular age, we are often uncomfortable talking about faith outside of church or possibly among family. Many of us do not even go to a church, or never have. Especially among decisive, hard-headed people of business, faith can be an embarrassing subject. But it is an important subject, for atheists and Christians, businessmen and teachers alike. And this isn’t a high-minded statement pronounced while looking down from above—faith is as crucial to the practicalities of daily life as the very ground under our feet.
For thousands of years there have been philosophers who made a name for themselves by attacking what was accepted on faith. The ancient skeptics believed all knowing and reasoning was impossible. The ancient cynics thought human society was inferior to nature. More recently, David Hume argued that the fact that something has happened repeatedly does not logically demonstrate that it will happen again—so there is no proof that the sun will come up tomorrow. Even more recently, Derrida emphasized that context determines the meaning of what we say and do, but we have an endless amount of context that we could focus on for any one action. So how can we ever be sure we understood it or have been understood?
In some sense, all of these skeptics were right. There are deep limitations to our knowledge and what we can work out with nothing but reasoning.
Faith fills in these gaps and makes it possible to live a full life without constantly being paralyzed by uncertainty. This is not a blind faith—treating faith and reason as opposites is a big mistake. It’s not just that you need faith and prudence together to be fully virtuous, the way you need to be courageous on behalf of justice rather than cruelty. It’s more than that. You need faith before prudence is even possible. Remember our discussion of the novel—how all other books and stories we had read or heard or watched helped form the perspective we bring as readers of a specific book. This is faith—the belief that everything we have experienced up until now in our lives is not for nothing, that it is salient as readers of the situation we are now confronted with and as authors of the rest of our lives.
Again, this is no blind faith! When we are confronted by circumstances that challenge our perspective as it stands, the prudent person will reexamine those aspects of their perspective that have been challenged. The just person knows that they owe it to the people in their lives to be open to the questions such circumstances pose to us, rather than stubbornly ignoring them and missing an opportunity to refine our judgment.
But stubbornly ignoring the questions posed by situations that do not fit your expectations is not what it means to be faithful. That is an unvirtuous faith, an imprudent faith, just as much a vice as an imprudent courage is mere recklessness. Moreover, unexpected circumstances only appear at all if we have expectations in the first place. It is the faith that we bring into the situation that identifies it as special to begin with. It is only because of the perspective we already have that we are capable of viewing our subverted expectations productively in the form of questions that such subversion raises.
Without faith, you are just a “bundle of experiences,” as Hume put it. With faith, you are a person, a character in an ongoing story of which you are both a reader and a co-author. Confidence, self-assurance, and trust—these qualities, which are so vital to our lives, are aspects of the virtue of faith.
I was only 16 at the time, but I had already developed a strong interest in moral psychology thanks to the lingering suspicion that ethics was a fatal weakness for philosophical naturalism. And so when Marc Hauser’s now classic book Moral Minds first came out in paperback, I rushed to buy a copy.
The book was a detailed exploration of human moral cognition through the lens of trolley problem experiments and Hauser’s (now dubious) research with primates. And despite Hauser’s indefensible academic misconduct, it remains a tour de force. In fact it is still in my possession, now twice as thick and stained by sunlight from multiple re-reads.
At the time I became convinced of Hauser’s basic approach that updated David Hume in light of Chomsky’s work on innate syntax. This view says that our moral sense is at base noncognitive, that it is a product of our “passions” or sensations built into us like a “moral organ”. While morality may often seem relative to culture and upbringing, it is constrained by a “universal grammar” common to all moral orders. That grammar, I believed, was the key to resolving the moral divergences between tribes. If we could only speak clearly about our shared inheritance there could be no lasting rational disagreements.
Joshua Greene’s “Deep Pragmatism”
Consider this a premonition of what Joshua Greene has since dubbed “deep pragmatism”. Greene is also a Harvard neuroscientist and expert on trolley problems, and his recent book Moral Tribes is also concerned about what he calls the “failure of common sense morality,” i.e. when divergent moral orders collide. While I am about to be quite critical of Greene, let me say at the outset that I am actually a massive fan, and that I tend to be most critical of the ones I love.
If it is fair to say Hauser’s theory merged Hume with Chomsky’s linguistics, then Greene’s theory merges Hume with Daniel Kahneman’s Dual Process Theory. He claims our non-cognitive passions are part of our System One, or automatic / intuitive mode. But if we study the evolutionary function of our passions, we can then use our System Two, or rational / conscious mode, to resolve impassioned disputes deliberatively. Specifically, Greene posits that if morality is fundamentally about enforcing cooperation in order to reap collective benefits, two tribes with distinct ethical systems for cooperation simply have to recognize that they are using different means but have common ends.
The only thing truly novel about Greene’s argument is its tantalizing terminology. Indeed, on a recent EconTalk episode Greene admits that “deep pragmatism” is just his word for plain vanilla utilitarianism. Despite formal utilitarianism’s many problems, Greene believes clashing cultures can settle disputes by consciously reformulating their ethics based on the greatest good for the greatest number. When pressed by the host with counter-examples, Greene contended that the problems with his proposal are either merely empirical or due to an insufficient application of utilitarianism (for thinking too short-term, say).
I believe Greene makes three fundamental mistakes and thus has not provided a compelling solution to the tragedy of common sense morality. On top of this, his scientific pretenses distract from the fact that his core moral arguments come straight from the proverbial armchair. Indeed, as meticulously demonstrated in Selim Berker’s The Normative Insignificance of Neuroscience, Greene has a tendency to obscure his philosophical presuppositions behind a fascinating and important, but ultimately tangential, deluge of empirical data.
The three deep mistakes Greene makes are: a) to accept the Humean starting point of moral noncognitivism, b) to reify deontological thinking and utilitarian thinking as “System One” and “System Two” respectively, and c) to leap to utilitarianism when, even accepting his premises, better alternatives exist.
Deep Problems: a) Noncognitivism Is False
Noncognitivism rose in popularity after the Enlightenment in large part due to an incorrect Cartesian view that morality, like belief, required an ultimate foundation. Hume put foundationalism to the test by taking it to its logical conclusion. Faced with an infinite regress, Hume realized that connecting ought to is was impossible. Thus noncognitivism — and thus moral skepticism. But while Hume’s argument was valid, the premise that we need foundations in the first place was dead wrong.
Since Quine, philosophers have largely accepted coherentism for beliefs. That is, it makes most sense to think of any particular belief as inhabiting a holistic web of beliefs rather than to link beliefs in a linear chain of justifications down to some “foundational” belief. When we are persuaded to change our beliefs we thus often are required to update a large number of interdependent beliefs to ensure coherence.
It turns out the same Quinean argument works for desires, preferences and other vernaculars for Hume’s passions. It’s tempting to think of desires as following a linear chain down to some base foundational affect, implanted somewhat arbitrarily by evolution. But this is an elementary error.
While it is true that evolution has equipped us with certain somatic states (like hunger pangs), a desire (like “I desire to eat”) contains propositional content. Like beliefs, desires are part of a holistic web that we draw from in the discursive game of giving or asking for reasons. In turn, desires, like beliefs, are capable of being updated based on rational argumentation and the demand for coherence.
For whatever reason, ethicists have been much slower to embrace coherentism for morality, preferring to soak in tired debates like deontology vs. consequentialism. Greene is no different. And his attempted foundationalist argument for utilitarianism has not closed Hume’s gap one iota.
b) Dual Process Theory is Irrelevant
Using fMRIs to conflate deontology with automatic thinking and consequentialism with deliberative rationality is not valid, nor does it advance the argument. To quote University of Toronto philosopher Joseph Heath in his overview of empirical approaches to ethics:
Greene offered no reason to think that the theory of value underlying the consequentialist calculus was not based on the same sort of emotional reactions. In this respect, what he was really doing was presenting an essentially sceptical challenge to moral reasoning in general, yet optimistically assuming that it undermined only the position of his opponents.
Moreover, there are good reasons for thinking that deontological modes of reasoning are essentially cognitive. As Heath argues in his book Following the Rules, social norms take the form of a web of deontic constraints that we reference, just as we reference beliefs or desires, when pressed to defend certain behavior. This makes social norms — and deontology in turn — analytically cognitivist. That is, regardless of the fact that deontic violations are more likely to elicit an emotional response, deontic reasoning must still inherently make use of System Two at some point.
Greene even acknowledges the more plausible explanation for why deontological violations cause more emotional fMRI activity than utilitarian ones: namely, that they each require different kinds of construal. Utilitarian reasoning tends to be about system-wide outcomes, and that level of construal imposes a psychological distance between the agent and the moral dilemma. But even if there is a link between construal level and dual process theory, just because utilitarian thinking is slow does not make slow thinking utilitarian!
c) Utilitarianism is a Non-sequitur
Even accepting all of Greene’s major premises, the conclusion of utilitarianism is still unwarranted. Greene suggests that the social function of moral psychology points to a “common good” through cooperation, but utilitarianism is only one possible interpretation.
In economics there are two basic approaches to social welfare, one top-down and the other bottom-up. The top-down approach is the closest in spirit to the utilitarianism expressed by Greene. It posits a social welfare function and conditions that must hold for its maximization, aka the greatest good for the greatest number. Adherents of this approach have spanned centuries, from Bentham up to Pigou.
The other approach begins with the process of transaction itself. It posits that two people will only exchange if they each perceive a mutual advantage in doing so — that is, if the trade will move them toward a Pareto improvement or win-win outcome. This is at the heart of bargaining theory, which would presumably make it a good candidate for solving the “tragedy of common sense morality” or any scenario where conflicting interests or value systems collide.
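The contrast between the two approaches can be sketched in a toy model. All outcomes and utility numbers below are hypothetical, invented purely for illustration; the point is only the difference between the two decision rules:

```python
# Hypothetical utilities two parties assign to candidate social outcomes.
# All numbers are invented for illustration only.
outcomes = {
    "status quo": {"a": 3, "b": 3},
    "reform 1":   {"a": 9, "b": 1},   # biggest total, but b is made worse off
    "reform 2":   {"a": 4, "b": 5},   # smaller total, but both parties gain
}

def utilitarian_best(outcomes):
    """Top-down: maximize the social welfare function (here, total utility)."""
    return max(outcomes, key=lambda o: sum(outcomes[o].values()))

def is_pareto_improvement(outcomes, baseline, candidate):
    """Bottom-up: a move is admissible only if nobody loses and somebody gains."""
    base, cand = outcomes[baseline], outcomes[candidate]
    return (all(cand[p] >= base[p] for p in base)
            and any(cand[p] > base[p] for p in base))

print(utilitarian_best(outcomes))                                  # reform 1
print(is_pareto_improvement(outcomes, "status quo", "reform 1"))   # False
print(is_pareto_improvement(outcomes, "status quo", "reform 2"))   # True
```

The utilitarian rule happily selects the outcome that leaves one party worse off, so long as the total is largest; the Paretian rule excludes it, which is precisely the bargaining-theoretic intuition behind mutual advantage.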
One of the worst “tragedies of common sense morality” in history occurred in the 1600s, when Protestants and Catholics fought throughout Europe in the Thirty Years’ War. From the ruin rose modern Liberalism and the legal basis for religious toleration and value pluralism. Liberalism’s core value is thus mutual advantage in the Paretian sense, not a crude formula for util maximization.
In fact there is a substantial literature within trolley problem research analyzing the effect of Paretian considerations on moral judgement. Greene is even a contributor. Indeed, in all sorts of artificial moral dilemmas, subjects are consistently more likely to judge harm as permissible if it leads to a Pareto improvement.
For instance, this 2011 paper [pdf warning] co-authored by Marc Hauser suggests that “Paretian considerations should be treated as an abstract principle that is operative in folk-moral judgment across a wide variety of contexts, involving different sources of threat and different degrees of contact.” Note that this fits the criteria for Greene’s “deep pragmatism” surprisingly well, without any of the attending controversy or highly demanding prescriptions surrounding Peter Singer style utilitarianism. Indeed, the authors are correct to report that Paretian considerations “provide a reason for action even for the non-consequentialist.”
Despite my skepticism toward Joshua Greene’s “deep pragmatism”, I strongly commend his efforts. In fact it is mostly in line with my own approach. Yet its current manifestation suffers from philosophical naiveté.
Humean noncognitivism is tempting for any student of psychology, but it turns out to be philosophically untenable. Indeed, by their very nature the deontic statuses we assign taboos and other social norms are part of a cognitive process of giving and asking for reasons. We can even reason and debate over our desires and preferences since (in contrast to pure affect) they carry propositional content.
Furthermore, while utilitarian calculations often require over-riding our more intense “gut reactions,” that does not make them any more foundational to morality. This is especially the case when it is always possible to interpret ostensibly utilitarian outcomes as resulting from a bottom up process that respects the Pareto standard.
And from the point of view of resolving tragedies of common sense morality, liberal principles like value neutrality and free expression that implicitly endorse the Pareto standard have never been more influential on a global scale, nor more vital for our continued peaceful coexistence. The inferiority of the utilitarian alternative is shown in the recent attacks on free expression in Paris. Who today could defend Charlie Hebdo’s provocative iconoclasm on purely utilitarian grounds in a country of perhaps 6 million Muslims?
Finally, it is important to remind ourselves that free expression as such is not a “Western Value” unique to the strategy of our hemispheric “tribe”. Rather, the Pareto standard of mutual benefit transcends the tribe and individual as the only proven basis for peaceful, pluralistic civilization.
The view from nowhere does not exist; it is accessible to no one.
I think that justice—the character trait—is arrived at through a lifelong dialogue among three points of view.
The first is your own, honed through proper notions of prudence, courage, temperance, charity, inheritance and hope.
The second is that of the person or people concerned: what you owe them, or what they owe you. Hume’s faculty of sympathy, or what we call sympathy today plus what we call empathy, is your primary tool here, as well as a lifetime of experience attempting to understand yourself and others.
The final one is Smith’s impartial spectator; impartial not in the sense of objective but in the sense of not being partial, not having a stake in the outcome.
All three are in a way mental constructs, as we must have mental models of ourselves, the people we are dealing with, and an imagined impartial judge observing the matter. And all three require nourishment through use and persistent critical evaluation over time.
You arrive at just decisions through the effective weighing of the claims made by these inner agents; you become truly wise when these inner constructs closely approximate real people who exist outside of your mental world.
As far as I’m concerned Protagoras (as portrayed by Plato) had this all figured out about two and a half millennia ago:
Education and admonition commence in the first years of childhood, and last to the very end of life. Mother and nurse and father and tutor are vying with one another about the improvement of the child as soon as ever he is able to understand what is being said to him: he cannot say or do anything without their setting forth to him that this is just and that is unjust; this is honourable, that is dishonourable; this is holy, that is unholy; do this and abstain from that.
Everyone is a teacher and everyone is a pupil of ethics, from the time we can understand words to our passing from this world. This is the world seen by Burke and by Oakeshott; a world thick with ethical prescriptions and corrections and contested intuitions. Chris puts it like this:
I am agnostic on whether, all things being equal, a more capital-P Philosophically-literate populace would necessarily grease the wheels of American democracy. I don’t think that well-defined Philosophies are necessary for an individual to live a good and virtuous life (for my definitions of those words, anyway), as we already have socialized ‘default scripts’ for how people ought to act, even without a cleanly articulated framework. I do think that social engagement and personal effort can make one a more conscientious, empathetic, aware person- whether that necessarily has intrinsic or instrumental value, I’m unequipped to even guess.
Emphasis added by me.
This all closely parallels the earlier conversation about art. I made the case that art appreciation is a situated, institutional thing, possible only as part of a community. My brother David thought that there must be more to it than this, just as Socrates rejected the soft socialness of Protagoras’ ethics. Sam H played the peacekeeper by bringing underlying, to some extent pre-social emotions into the picture in addition to the Protagorean framework.
Humans have the capacity to learn the rules of a particular art and then bend them, inventing new forms of artistic media and waggle dances all our own. But it is important to bear in mind that the rules would cease to exist without the aesthetics underlying them. Even in Manga, another captivating form for which I have no artistic appreciation (as Indian classical dance was for Best), the unique and extremely idiosyncratic iconography Adam highlights conspicuously exploits a Pleistocene aesthetic, in the same way cheesecake exploits our adaptive sweet tooth. Namely, Manga hits on the sentimental fondness for cutesy, wide-eyed, child-like facial features that one would expect in a species that protects and invests as heavily in its kin as humans do.
A Humean or Smithian moral sentimentalist perspective added to a Protagorean and Oakeshottian thick traditionalist perspective is, in short, a very good first approximation of the reality on the ground.
But to come back around to the original question again, does philosophy have a role to play within that reality?
I think so, and so did Protagoras. He taught ethics and charged for the privilege. Thinking it through, this shouldn’t seem so odd—after all, language’s reality on the ground is basically identical, and yet we still have English teachers. Certainly people can speak English without English teachers, and yet we expect English teachers to instill certain conventions, certain norms.
I think Deirdre McCloskey’s account of the clerisy, her word for the writerly intellectual class, is something like what I have in mind. By her reckoning, they are supposed to help out by clarifying and providing coherent (but not Platonically self-sufficient) frameworks that help people both in making evaluations and simply in making sense of their lives. And by that same reckoning, this class has been asleep at the wheel (or worse, drunk and hostile) since at least 1848.