“Me” and “We,” Where “We” Is “Thee”

CHRIS: I think we ought to do more to help the poor.

PAT: So do I!

CHRIS: How can that be? Just yesterday we were talking about a proposal to tax the 1% more heavily in order to fund poverty relief programs, and you said you opposed that plan.

PAT: I do oppose that plan. What does that have to do with anything?

CHRIS: Well, evidently you oppose at least one thing we can do to help the poor.

PAT: Well, hang on. You just said you thought we ought to do more to help the poor. Now you’re saying that you think someone else ought to do more to help the poor…

CHRIS: What I meant was that I think we as a society ought to do more to help the poor.

PAT: I see. I agree with that, too. Only, now I have a question: Don’t you consider yourself a part of society?

CHRIS: I most certainly do.

PAT: I thought so, but then why, when talking about what we as a society ought to do for the poor, did you choose to single out a group to which you do not belong? Don’t you think you, personally, ought to do more to help the poor?

CHRIS: I give what I can, but the wealthy could afford to give much more than I can give.

PAT: Yes, that is probably true. However, you said before that you thought we as a society ought to do more for the poor. Upon clarification, I now see that what you really meant was that you personally cannot afford to do more for the poor, but someone else can, and so you feel that they ought to. You began by talking about “us,” but what you really meant was “them.” Why did you say “we” when what you really meant was “they?”

CHRIS: We are all part of society, all of us. If we want to enjoy a society in which all of us have the opportunity to flourish, then we must all meet our ethical responsibilities. Because the wealthy are part of our society, I include them whenever I say “we.”

PAT: Chris, are you okay? I thought you were doing pretty well for yourself. You have a nice job and seem to be making a comfortable living. You even have some money left over to donate to charity. Are you not flourishing?

CHRIS: Wait, what? Of course I’m flourishing; I have a great life.

PAT: Well, doggone it. Now I’m really confused.

CHRIS: I didn’t think it was a very complicated concept. What seems to be perplexing you?

PAT: Well, before, you were saying that “we” ought to do more, but what you really meant was a group of “us” to which you don’t belong. But just now, you said you thought we all had to meet our ethical responsibilities in order for all the rest of us to enjoy the opportunity to flourish…

CHRIS: You don’t seem to be confused to me, Pat.

PAT: Well, hang on. When you say that we need to meet our ethical responsibilities, you obviously mean that the wealthy ought to do more to help the poor, right?

CHRIS: Right.

PAT: But “the wealthy” doesn’t mean you.

CHRIS: Right.

PAT: And since you say you have a great life, then that means “the poor” doesn’t mean you, either.

CHRIS: Well… right.

PAT: So then when you say that you want all of “us” to flourish, you mean that you want someone other than you to gain something contributed by someone else, other than you.

CHRIS: Yes, so?

PAT: So, you keep saying “us” and “we,” but in no case do you actually mean to refer to yourself. I know you and I have disagreed on politics in the past, but I never expected us to disagree so profoundly on the meaning of the words “we” and “us.”

CHRIS: Come, now, Pat. Don’t you think you’re being a little obtuse? I’m talking about making our society a better place. We all live here, rich, poor, and average. We should all accept some level of responsibility for the society in which we live, and we should all strive to provide the foundation of a better polity. That naturally means that some of us will be beneficiaries and some of us will be benefactors. Because I’m doing okay, I make a point of donating what I can, and I never make a point of accepting a donation I don’t need. But those who are doing much better than I am should give more, and those who are worse off than I should be given more. But we’re all part of the polity.

PAT: I agree with all of that. All I’m saying is that you’re not really talking about “society” or “the polity,” you’re talking about what should happen to people other than you. Even worse, you’re talking specifically about people who have different characteristics than you have. Some of them give more, some of them receive more, but none of them are you. Let me ask you another question: Would you say you belong to the same society as “the 1%?”

CHRIS: I see where you’re going with this. In one sense, I belong to the same society they do because we are all part of the same polity. But in another sense, we don’t exactly hang out in the same social circles, so I guess I don’t belong to their society per se.

PAT: Neither do you hang out with the poor, Chris.

CHRIS: That’s true, too. But we all belong to the same polity, meaning we are all subject to the same government and the same laws. So when I was talking about that progressive tax increase, I meant that this is a policy everyone within the same polity should support.

PAT: Well, I still disagree with you there, but I think you probably know now that my disagreement has nothing to do with “society.”

CHRIS: Of course it has nothing to do with society, Pat. It’s rational self-interest. You’re rich. You don’t want to pay more taxes.

PAT: Wait a minute. That means that when you first said “we,” what you really meant was… me?

CHRIS: So it would seem.

PAT: Why didn’t you just say so in the first place? You made it sound like you wanted to help. And remember, I initially agreed that you and I ought to do more. You never intended to do more for the poor.

CHRIS: I guess it doesn’t sound very nice when you put it that way. I only wanted to make an agreeable case for our helping the poor… er, I guess for your helping the poor. Look, I’m sorry for putting it to you in an offensive way. I really didn’t see it that way.

PAT: It was an honest mistake. We’re friends, apology accepted.

CHRIS: Well, now I feel a little awkward. Let’s talk about something else.

PAT: Okay, sure.

CHRIS: I think we ought to treat women more fairly…

Theory and Practice, Episode Four


If you spend any time at all thinking about moral philosophy, eventually you face a set of difficult questions. Some of these are:

  • If making ethical decisions comes down to learning and applying the correct moral framework, why do people disagree about morality at all?
  • Couldn’t we just sit down together, discuss The Virtues, or whatever, determine what the most virtuous action is, and proceed accordingly?
  • Why, even after acting in accordance with our moral philosophy, do we still face doubts and even regrets about what we’ve done?
  • And so on.

There are a few possible explanations for all of this. One might be that, while the Virtues (or our preferred moral philosophy) are perfect, human reasoning is not. Another might be that truth is untruth in the moral realm as much as elsewhere. Still another might be that morality is subjective. Or, more radically, perhaps morality is a psychological illusion or a sense of self-justification we instigate ex post facto.

But I gravitate to another explanation: Moral reasoning is a skill that must be practiced and perfected.

So You Live In A Rape Culture: Now What?

Paul’s recently proposed definition of the term “rape culture” is a worthy attempt at explaining to those who require such an explanation that the cards are stacked against rape victims in today’s society.

One challenge with his definition, however, is that it seems to indicate that if you are a human being, you are someone who lives in a rape culture. Any difference between the cult-like sexual mutilations that occurred during Rwanda’s genocide, India’s ability to produce horrific gang rapes on public transportation (in addition to thousands-a-day public gropings), and jokes about prison in the United States is not a difference in kind, according to Paul, only one of degree. Welcome to Planet Earth: Rape Culture.

I think Paul’s hope is that it will dawn on some how bad things are for rape victims, even right here “at home,” so to speak. He hopes that his audience, upon consideration of his definition, will be moved to alter their behavior. However, there are a few reasons why I don’t think this is likely.

The first reason is that there is probably a category difference between “people who would use the phrase ‘rape culture’” and “people whose behavior needs to change.” To the extent that I am right about this, the phrase “rape culture” provides no benefit that the phrase “your behavior needs to change” does not already provide. (It seems unlikely to me that a person who regularly engages in the behaviors that comprise Paul’s definition will change his mind solely because his behaviors have now been identified as “rape culture.”)

A second reason I don’t think Paul’s proposed definition will sway people is that there is a cry-wolf effect involved. What I mean is, despite the fact that there is lots of room for improvement, life in the Western world is pretty good – even, or perhaps especially, by comparative rape culture standards. If someone lives in a pretty-good culture but is told that they live in a rape culture, it’s possible that they will become desensitized to Paul’s claim. A phrase, once used for shock value, cannot be reused for shock value; we can only be shocked by it once. Afterward, it becomes “another one of those aggressive terms social justice warriors use,” like “greed culture,” “victimhood culture,” or essentially Anything-That-Needs-To-Change Culture. It blends in with the other things being shouted at us and we no longer give it specific attention, even though it is indeed a matter that deserves our attention.

A third reason I don’t think Paul’s definition will persuade people to change their minds is that ordinary people – especially those most likely to be unwittingly misogynistic – don’t tend to think in the terminology of academic philosophy or feminism. So, to them, being told that they live in, and might be passively propagating, a “rape culture” feels like being accused of something. If there is anything less effective at changing someone’s mind than immediately putting them on the defensive, it’s seemingly accusing them of one of mankind’s most heinous crimes.

As an important side-note to this third point, there is a growing body of journalism that seeks to call attention to false accusations of rape. Imagine what impact it must have on Average Joe when he sees victims’ advocates calling his culture a “rape culture” in one place and, elsewhere, making allegations of rape that turn out to be genuinely false. Needless to say, Joe would not be convinced.

What Paul has managed to do, however, is provide a list of serious grievances that any sufficiently introspective person will find non-contentious. The content of Paul’s list is basically indisputable, as his many citations ably demonstrate. I believe his list functions as an excellent starting point for identifying problematic aspects of cultural behavior and attempting to correct them. To that end, I feel his invocation of the term “rape culture” works against his objective – an objective with which I most assuredly agree.

Theory and Practice, Episode Three

Maybe it was a bad idea to cite an acerbic guy like Lubos Motl. When a guy says that a lot of questions are just stupid, that’s not exactly “sweet talk.” Motl has an important point, but I won’t defend his tone.

He did take the time to outline exactly what he means when he says “stupid questions,” and not only does that definition not apply to Adam, it is also fully consistent with the Gadamer quote Adam gave us. In fact, I am as surprised that Adam would quote an argument in favor of authentic dialogue as a response to a criticism of inauthentic questions as I was when Samuel quoted a Situationist to critique my endorsement of Situationism.

Clearly there is a gap between what I think I’m saying and the message I actually manage to convey. And clearly this gap is caused by me because it keeps happening, and I am the common denominator. Motl might be wrong for his aggressive tone, but at least he gets his point across. No such luck for me. Even when my fellow Sweet Talkers agree with me, they think they disagree.

Theory and Practice, Episode Two


“No Epistemic Value”

The social value of philosophy was hiding in its role of a subject that used to attract – and, to a lesser extent, still attracts – high-IQ people and makes them think about important questions. Historically, philosophy was therefore the ultimate “protoscience” and became the seed of science as we know it today, too. And that was good for the mankind.

However, its modus operandi is a flawed approach to learning the truth. The old philosophy was studied before the scientific method was understood; and the modern philosophers – by the very definition of philosophers – are still failing to use the scientific method. They don’t understand that Nature is smarter than us which is why they still hope to “guess the important truths” without any accurate empirical input; and, more importantly, they fail to formulate their musings sharply enough and eliminate the falsified ones.

Therefore, we may say that philosophy as a human enterprise has a “social value” but philosophy as a body of knowledge, methods, and results has no “epistemic value”.

That is from a 2013 blog post written by a physicist named Lubos Motl. Even now, years later, the post leaves a lasting impression on me. Always outspoken, Motl doesn’t mince words in this post, either:

In general, there aren’t any big questions posed by philosophers that were solved within science simply because philosophy’s modus operandi is not only a flawed method to find the right answer; it is a flawed method to choose the right questions, too. For this reason, virtually all important enough questions first posed by philosophers were scientifically shown to be meaningless or building on invalid assumptions (and all “specific enough” theories invented by philosophers – whether they have called them “questions” or, which was more typical, “teaching” – were shown scientifically false). The philosophy’s unscientific method not only fails to eliminate the blunders and misconceptions from the answers; it fails to eliminate them from the questions, too.

It’s tempting to summarily dismiss anyone who writes so dismissively about important ideas, but I ask the reader to resist that temptation. Motl doesn’t hate philosophy; he just doesn’t see the point of investing time and intellect into poorly specified questions. (He even wrote a separate post about stupid questions.)

No, his real point isn’t that philosophy is useless, but rather that it has failed to get results:

The main problem with the philosophical method is not that it produces no results for other fields; the main problem is that it doesn’t produce the true answers in its own field.

Ask yourself what philosophy has done for you, personally. What are its benefits? And by “benefits” I mean “positive impacts on your life over and above the mere ability to use philosophy’s internal jargon to describe things that can just as easily and accurately be described without that jargon.”

I won’t say that philosophy can’t produce those benefits. What I will say is that a lack of clear benefits indicates that one has the wrong philosophy. If you don’t have them or can’t list them out – or, even worse, if your life is observably worse as a result of philosophy – then you need to change course. And this is true according to your standards, not just mine.

Spinning Wheels

It’s a bad idea to promote the ideas of Ayn Rand, Eliezer Yudkowsky, and Dr. Phil on this blog… much less in the same blog post… much less in the same sentence… but consider this the exception that proves the rule. The one idea they all seem to have in common is a dedication to the notion that philosophy, done right, ought to be of practical use to ordinary people. Yudkowsky calls it “making beliefs pay rent,” but I prefer Dr. Phil’s folksy way of saying it: “How’s that workin’ for ya?”

Philosophy is only as good as its ability to make us happy and help us solve problems. At its worst, philosophy is infuriating nonsense that misses the point, causing endless debates about whether an X is a “true” X. Fun though it may be in the moment, it’s practically useless. We don’t need a correct definition of “happiness” in order to be happy – we already know what happiness feels like, because we’ve all felt it. The rest is navel-gazing. I like to brood with a snifter of cognac as much as the next guy (okay, more than the next guy), but on my best days I remember that cognac is nice to drink even when I’m not brooding. That’s when the real fun begins. Or should I say the true fun?

I’m arming myself with a bandwagon of sundry other thinkers out there to lend a little extra credence to my claim that moral foundations ought to be psychological, i.e. not philosophical. As I put it in a separate conversation recently, “What good is a philosophy that puts you in therapy?” We debate the philosophy or the moral framework, but nobody debates the results; we all want to be sane, happy, healthy people. Touting this as the central goal of any moral or philosophical system puts the focus where it belongs: the proof of the pudding.

This is why, when we raise philosophical objections to someone’s stated belief, we don’t very often convince them to change their mind. What difference does it make if “capitalism, carried to its natural conclusion” produces anarchy? No debate about economic systems should rest on taking the real, physical world in which we live, and moving it to a hypothetical “natural [philosophical] conclusion.” I’m for economic growth and widespread prosperity. You too? Okay, what policy can be shown to produce those results? If my idea makes us all rich but philosophically inconsistent, I promise to buy you a hamburger. (NB: a hamburger is more satiating than philosophical consistency.)

The problem, as I see it, is that the deeper one gets into philosophy, the further one gets from the solution to one’s problem. I firmly believe that Plato’s Republic could be convincingly re-translated as comedy. One simple question about the definition of the word justice produces an entire treatise on government. You couldn’t make up better satire if you tried. If someone who knew nothing about philosophy (…or a thousand monkeys sitting at a thousand typewriters…) were asked to write a pilot for a sit-com the express purpose of which was to make fun of philosophers, it would look a lot like the Republic.

Suppose that in real life you actually had to solve a trolley problem, and that you could choose to either go with your gut instinct or pause time long enough to perform an exhaustive philosophical analysis of the problem. My thesis, restated: (1) If your analysis produced exactly the same conclusion as your gut instinct, then it was a wasted effort; (2) If your analysis made you less certain of what to do, it made your life worse and it was a wasted effort; (3) If your analysis produced a perverse conclusion, it made your life worse; (4) But, if your analysis produced a better outcome than your gut instinct would have, it was worthwhile.
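Purely as an illustration of the four cases above (and not a serious formalization), the decision rule can be sketched in a few lines of Python; all names here are hypothetical:

```python
# Toy sketch of the four cases above. All names are hypothetical;
# this illustrates the thesis, it does not formalize it.

def evaluate_analysis(gut_outcome: str, analysis_outcome: str,
                      analysis_increased_doubt: bool) -> str:
    """Classify a philosophical analysis relative to the gut instinct."""
    if analysis_increased_doubt:
        return "wasted effort, life made worse"   # case (2)
    if analysis_outcome == gut_outcome:
        return "wasted effort"                    # case (1)
    if analysis_outcome == "perverse":
        return "life made worse"                  # case (3)
    return "worthwhile"                           # case (4)

# Only case (4) pays rent: the analysis changed the outcome for the better.
assert evaluate_analysis("divert the trolley", "divert the trolley", False) == "wasted effort"
assert evaluate_analysis("divert the trolley", "do nothing", False) == "worthwhile"
```

Notice that only one of the four branches justifies the effort of the analysis, which is the thesis in a nutshell.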

Getting Results

I suppose at this point I should establish that I’m not straw-manning anything. Does philosophy actually produce bad outcomes for people? Yes. Here are two examples.

The first one is the curious case of Mitchell Heisman, whom no one remembers anymore. By most accounts, Heisman was a highly intelligent and motivated man who showed no warning signs, and who ultimately harmed no one but himself. Despite his admirable intellect, he shot himself on a Harvard University landmark… as an act of philosophy. He left a “suicide note” in the form of a 1,900-page treatise on nihilism posted to a now-defunct website. The few who profess to have read it said it was “creepy,” but no one says that its claims are untrue. (Not to damn with faint praise, but Lubos Motl read it and enjoyed it.) Heisman was an intelligent man whose core philosophical beliefs were nihilistic. Unlike most nihilists, Heisman actually put his beliefs into practice: if there is no point, then why live? It’s important to note that Heisman didn’t misunderstand nihilism or get it wrong. He understood it perfectly, and ended his life accordingly.

In learning about Heisman, we all sense that something is wrong, but academic philosophy is powerless to tell us what. At best we can disagree with his conclusions, but when it comes to getting results, i.e. suicide prevention, what good is that? We know that suicide is a tragedy; we don’t need to prove it. The suggestion is almost silly. But philosophy can’t do the work needed to save Heisman’s life or anyone else’s. Instead, it can spur a debate about whether Heisman’s suicide is “truly” a tragedy or whether he is “truly” worse-off now compared to when he was alive.

The result of philosophy for Mitchell Heisman, then, is death. That’s a bad outcome.

The second example is that of Katherine Ripley, a young journalist who found herself traumatically victimized, left in a sad situation in which the academic philosophies she had learned in school gave her exactly the wrong advice. What those philosophies did give her, on the other hand, was a powerful set of rationalizations for self-destructive behavior that she only learned to overcome after what she describes as “intensive therapy sessions.”

Ms. Ripley isn’t a crazy person or an idiot. In fact, her career would suggest very much the opposite. She is an intelligent and articulate defender of her ideas. The philosophies that did her wrong in her own life (the real world, live and in the flesh) are defended in the highest terms, in the halls of the most prestigious philosophy departments in the country. The question isn’t “who could believe such a thing?” because the answer to that is “pretty much anyone smart enough to follow a valid chain of logic.” No, the question is: what results did she get out of those ideas? The proof of the pudding.

I have no good arguments against either nihilism or feminism. They are both valid, consistent moral belief systems. They’re both well-reasoned and provide cogent explanations for a person’s actions. But how’s that workin’ for ya?

There is a fair criticism to be made at this point: Some great argument, Ryan. You take two tragedies that happen to have a connection to a couple of mainstream philosophies, and from that you indict the philosophies themselves, rather than the people. What about all the millions of other practitioners of these philosophies who don’t suffer an ill fate?

My response: Those philosophies appear to be working for all the happy people, don’t they? See? Multiple coherent, consistent, valid philosophies are available to us and anyone else. You might choose one and I might choose another. It doesn’t really matter. What matters is how the pudding tastes.

Practice, Not Theory

There’s more than one way to skin a cat, and more than one reason to give an old man some spare change. Ceteris paribus, the philosophy that consistently results in a skinned cat (uh, assuming you’re into that…) or a donation to the needy is a good philosophy. It shouldn’t matter if that philosophy happens to be nihilism or feminism or utilitarianism or the Word of God. (Ceteris paribus.)

By contrast, the philosophy that only seems to work in special cases is not a good philosophy. Nor the philosophy that works 95% of the time, and 5% of the time you blow your brains out in front of the library; nor the philosophy that makes millions of women feel empowered at the expense of thousands of women who end up really hurting on the inside; nor the philosophy that teaches peace and harmony on Sundays and insular biases and discrimination the rest of the week; nor the philosophy that justifies a particular economic policy at the expense of all human altruism. And so on, and so forth.

The results matter. The theories are only as valuable as their ability to deliver those results. Did you like your pudding? Good, then you got the recipe right. Or not? Then change your recipe. It’s just words on paper. You can’t eat words on paper unless you’re desperate, and even then you’re getting mostly fiber. In one end and out the other.

And just in case I haven’t fully tapped out the recipe analogy, here are some old-timey instructions for ammonia cookies, which are exactly what they sound like.


Why would someone want to put ammonia in their cookies? Because it tastes like mint. No, really, I’ve eaten them before. They are delicious. You can scratch your head about putting ammonia in food, but the fact is, it’s been done – successfully. That philosophy has been tested to tasty effect, so at this point there is no use questioning the thinking behind eating ammonia. It got results. (It’s not as if the drinking of cow milk is any more rational – or, for you vegans out there, the consumption of the barely-edible plant stalks attached to fatally poisonous leaves.) It doesn’t have to make sense if the cookies are both tasty and edible, and they are.

The underlying philosophy – the why – doesn’t matter anymore.

But you want “why,” you’re drawn to “why” like you’re drawn to a pretty girl in the rain. Let me guess: she has black hair, big eyes, and is dressed like an ingenue. “Why?” is the most seductive of questions because it is innocent, childlike, infinite in possibilities, and utterly devoted to you.

“Why am I this way? Why do I do what I do?” But what will you do with that information? What good is it? If you were an android, would it change you to know why you were programmed the way you were? “Why” is masturbation, “why” is the enemy; the only question that matters is, now what?

And anyway, the answer to “why” isn’t very interesting. (Spoiler alert: “It is inevitable.”)

We don’t really want a perfect philosophical theory, anyway. That’s just an intermediate step toward more interesting goals, like “happiness” and “sanity,” just as you don’t write C# code in order for it to be syntactically correct, but rather as a step toward a more interesting goal, like buying food at the grocery store and subsequently eating it. A job that pays well and keeps you dipping your ammonia cookies in milk instead of anti-freeze is a good job; a philosophy that keeps you happy, well-adjusted, and sane is a good philosophy.

It’s the proof of the pudding, see, that’s what we want. Results.

Theory and Practice, Episode One


I need to make a point about something, but as it turns out, it’s impossible to make this point in a single blog post. So I’ll have to do this on an installment plan.

Adventures In Comparative Legal Systems

When I lived in Canada, I used to hang out with a lot of law students. During that time, the conversation would inevitably turn to Canadian law. By this, I mean that they were often doing their homework right in front of me, and I was helping them with it. So it was a bit more than just casual conversation.

And in case you’re wondering, the answer is: Yes, my experience tells me that most law school homework is done in a pub over multiple pitchers of beer.

Anyway, one of the things that struck me about the Canadian legal system is the way human rights are organized, legally speaking. Canada has what’s called the Canadian Charter of Rights and Freedoms, which is analogous to the American Bill of Rights. It spells out what rights are guaranteed to the people by the government. The Canadian government, according to Canadian law, is permitted to violate the Charter in certain cases, as long as the details of those cases conform to certain legal guidelines, which are spelled out in writing and in jurisprudence.

As a fiery young philosophical man, this used to incense me. After all, the Bill of Rights is a document that outlines things that the U.S. federal government is not permitted to do. In other words, the presumption here in the United States is that human beings hold certain inalienable rights that supersede any additional legal power. In Canada, subject to legal conventions, it is the government that grants all rights to the people, so government powers supersede the rights of the people.

I say it used to make me incensed. It doesn’t anymore. Why not? Because while studying the law alongside my friends, I eventually learned that in practice the Canadian legal system reaches the same important conclusions regarding human rights as the American legal system.

The only material difference in these matters is the language used to justify the conclusion. In America, our courts tend to use language that refers to what the government cannot do, and what the intended meaning of legislation is. In Canada, their courts tend to use language that refers to what the government is permitted to do and whether the intended meaning of the legislation provides sufficient justification for doing it.

But, as I said, when it comes to everything that matters on human rights issues, the two countries’ legal systems tend to reach the same conclusions, even though their justifications are phrased differently.

What’s the Point, Ryan?

I bring this up because one of the least attractive things about philosophy is that it tends to raise objections that need not be raised.

We see a homeless man shivering outside a coffee shop with an outstretched arm holding a cup. Most people I know who have spare change will drop a few coins in the man’s cup. Of those who do, some of them do so for reasons of faith, some of them do so for reasons of utility maximization, some of them do it for reasons of virtue. And, yes, some of them do it for reasons of guilt, shame, embarrassment, or to help clear their conscience.

I know a few people who would choose not to help the man. They all refuse to do it for various reasons, but no matter what their moral philosophy happens to be, they all justify their decision on moral terms. Maybe they want to give the man incentive to get a job. Maybe they think someone else is more deserving. Maybe they think the man will spend the money contrary to his own best interests, i.e. on drugs or alcohol.

Philosophy tends to raise objections that need not be raised. If you and I both give the man our spare change, there is no point arguing over which one of us had the better moral reasoning: the outcome was the same, ergo our reasoning was equal. You can say this however you like: what matters is results; actions speak louder than words; practice is more relevant than theory.

What matters outside of that coffee shop is not the spotless philosophical reasoning used to justify a particular course of action, but rather what we choose to do. If I give the old man my spare change for totally incomprehensible and inconsistent reasons “which, if taken to their natural conclusion…” would destroy the world, I don’t care. Neither does the old man. Because the outcome of my moral reasoning was the same as if I had used a superior moral framework (or an even more inferior one): the man got his money and the world is still intact.

Now, if a particular philosophy fails to produce the right results, or fails to produce them consistently, then we have a good reason to evaluate the coherence of that philosophy and address its shortcomings. (More on that in a forthcoming post.) But if I’m giving my change to deserving old men, my friends and family are happy with me, and I am generally impacting the world in a positive way, whatever crazy and internally inconsistent moral framework I’m working with is working for me/paying rent.

If we raise objections to “wrong” thinking that consistently yields “right” results, then maybe it’s time we checked our premises.

Situationism: Let’s Clarify Some Things

Samuel’s most recent post is phrased in the language of disagreement, leading me to believe that he does not realize the extent to which it agrees both with my views on the Situationism of war and with Situationism in general.

He quotes Joseph Heath on page 336 of Morality, Competition, and the Firm. One page earlier, however, Heath cites the most famous and profound Situationist social psychology experiment of all time, the Milgram experiments. Heath writes (all emphases mine, throughout):

Arendt’s observations about “the banality of evil” generated considerable outrage when first published. The Milgram experiments, it is worth recalling, were undertaken with the goal of disproving Arendt’s hypothesis, although in the end they wound up providing its most powerful confirmation. All of this suggests that there is actually an element of wishful thinking in the idea that “bad people do bad things.” Since the people who say this typically do not conceive of themselves as bad people, adherence to this theory is a way of putting some distance between themselves and those who perpetrate such acts, and thereby of avoiding the disquieting suggestion that they too are perfectly capable of inflicting great suffering on others.

“Wishful thinking” is sort of like a self-delusion, isn’t it? I couldn’t possibly do the wrong thing, because I’m not a bad person. The problem here is that, again and again, after controlling for many individualized psychological factors, researchers find that perfectly good people are willing to deliver lethal levels of electric shock to innocent strangers if someone in a lab coat (or, in later versions of the experiment, even without a lab coat) simply prompts them to do so.

Heath continues:

Yet while a single, general theory of criminal motivation remains elusive, we have managed, over the course of the twentieth century, to do an enormous amount of debunking. In particular, the idea that criminals “don’t know right from wrong,” or that they were “not raised right,” or that they don’t “share our values” have all been decisively rejected. Furthermore, these claims have themselves become the object of suspicion, because they all have the effect of “othering” the criminal, suggesting that there is some kind of an intrinsic or essential difference between criminals and law-abiding citizens. This seems more likely to be a motivated belief imposed by the social psychology of punishment than an accurate characterization of the underlying structure of criminal motivation.

In fact, Philip Zimbardo defines evil as “knowing better but acting worse.” Zimbardo, Heath, Arendt, Milgram, and many others have reached the same conclusion again and again: what causes evil is not part of a person’s disposition, but rather exposure to a particular kind of situation.

What Situationism Is Not

Situationism is not a way to deflect individual moral responsibility. Situationism is not a way to blame forces outside of our control. Situationism is not a method of saying “the system made me do it.”

Situationism is also not a means to ascribe all behavior to the surrounding environment, rather than the acts of individuals. Samuel leads me to believe that this is what he had in mind when he wrote,

Reading Ryan’s post, I was left with the sense that he sees a situation’s influence over moral decision as inevitable, possibly even deterministic…

The problem with this argument comes back to the eternal question asked by criminologists: Why isn’t there more crime than there actually is? Given the state’s limited enforcement capacity, society depends on most people, most of the time, behaving morally, i.e. following the rules. If self-delusion were truly the rule, rather than the exception, civilization would collapse under a crisis of endemic shirking.

It goes without saying that peer pressure can only affect a person during moments when that person is exposed to it. Absent that exposure, we couldn’t ever blame peer pressure for anything.

And so it is with Situationism. Society generally follows the rules because we seldom find ourselves in situations that severely undermine our ability to tell right from wrong. Those situations arise when we are cast into new environments, as when adolescents enter junior high school (a notoriously difficult time in the lives of most teens). The newness of the situation draws people out of their own identities and forces them to deal with new stimuli in new ways. Is it any wonder so many young people cave to peer pressure?

It is not sitting in an Introduction to Philosophy course in college that turns formerly well-behaved kids into reckless frat-party rascals; it’s the exposure to the fraternity’s calculated initiation procedures and package-deal social dynamics. Caught up in a new and stressful situation, far away from home, and hungry to find a “true” identity, young people “experiment” in ways that aren’t always positive (as the many allegations of “rape culture” etc. aptly attest).

So Situationism is not a claim that any set of circumstances at all is to blame for individuals’ bad behavior, and if this is what Samuel had in mind when he wrote his post, he can rest assured that I agree with him.

What Situationism Is

Instead, Situationism is a set of theories that offer explanations for exactly how and why good people do bad things. They still bear responsibility for their actions, but psychologists aren’t as interested in assigning blame and enforcing justice as they are in explaining human behavior. Thus, if psychologists conclude that identifiable situational factors consistently produce certain human behavior, that is good information to know.

Furthermore, it’s not just “good to know” for the sake of our intellectual curiosity; it’s good to know because it provides us a means of avoiding evil in the future. If you know in advance that you are going to experience a certain kind of situation, you are better prepared to resist Situational influence. Indeed, it is your moral responsibility to do so.

Ignorance of Situationism’s theories isn’t an excuse, of course, but once you have the knowledge that war means rape (to cite just one example), you no longer have a moral justification for failing to apply influence-resisting techniques to the situations you face.

This is part of what Dr. Zimbardo means when he says that Situationism does not excuse evil, it democratizes it. You, the reader, are now aware of Situational influence. You, the reader, are now morally on the hook for recognizing bad Situational influence when you see it, and resisting it.

It Can Happen To You

Samuel closes his most recent post with the following:

The upshot is that we shouldn’t stop holding people accountable for their actions just because the situation they somehow found themselves in made shirking their moral duties the path of least resistance. Indeed, just the opposite. Employing techniques of neutralization, as a self-serving behavior, should itself be an object of social sanction.

Moreover, it means there’s a chance we can preempt our techniques of neutralization by being aware of them, and by training ourselves in strategies that undercut self-delusion. That’s essentially what Joseph Heath argues business ethics courses should look like, rather than tired lessons in the history of moral philosophy. But in general it’s probably the sort of moral education we should all be subject to, starting as children.

Notice how similar his prescription is to mine (and Zimbardo’s).

There is, however, one key difference: Samuel’s perspective is still highly colored by dispositional reasoning. That is, for him, “neutralization” and self-serving behavior are a personal problem that we must be trained to overcome, a flaw in ourselves that can be stamped out with proper moral training.

My nitpick here is that this sort of reasoning leaves open the door for any moral analyst to say, “Well, Bob did a bad thing because he never trained himself to do good things.” This way of looking at things still provides an avenue for blaming bad apples rather than recognizing bad barrels. But a pivotal revelation of Situational analysis is that the same apple may be good or bad, depending on the barrel.

In Chapter 14 of The Lucifer Effect, Zimbardo provides a full background history on Ivan “Chip” Frederick, one of the soldiers found guilty of abuse during the Abu Ghraib scandal. For decades, Frederick was a model of moral behavior. He was an exemplary civilian prison guard, beloved by family and friends alike. Throughout his entire life, even as a national guardsman and a prison guard, there was never a hint that he was the sort of person who would harm another human being. He was a decorated and beloved soldier who was assigned to work at the Abu Ghraib prison precisely because of his record.

Once exposed to the conditions at Abu Ghraib, however, Frederick (like virtually everyone in that hell-hole) quickly fell apart. Absent clear rules and chains of authority, Frederick no longer had a cohesive structure on which to fall back during times of intense stress. He raised this issue, along with many others, with his superiors, to no avail. Absent any outside support, the man who was once precisely the kind of person who excelled as a prison guard quickly deteriorated into the worst kind of guard imaginable.

It was not his weakness that caused his decline, nor was it his propensity to explain away his behavior internally. It’s doubtful that most people fully understand the conditions inside Abu Ghraib prison circa 2003. There was no indoor plumbing, and the outhouses were literally overflowing with raw sewage. The prison was not just overcrowded, it was overflowing and impossible to manage. The prisoners – many of whom were innocent civilians imprisoned for no reason at all, including adolescents – staged frequent, violent riots and escape attempts. The prison suffered mortar attacks on a daily basis. Guards raped prisoners. Prisoners raped prisoners. Nongovernment contractors “interrogated” (read: tortured) prisoners to extract confessions from them. Clothing shortages and instances of self-harm left a large number of prisoners constantly naked.

The point here is that, if we take the time to really try to understand that place and that situation, we will quickly see that, had we been there, we likely would have acted no more like ourselves than Chip Frederick did. This is the horrible power of negative Situational influence.

Resisting Negative Influence

In reading Zimbardo’s book, I have been struck by some of the small, seemingly insignificant ways we can resist negative influence.

One of the interesting observations to come out of the Stanford Prison Experiment was that the prisoners, when not in the presence of any prison guards, only ever spoke to each other about the prison. They never talked about who they were in real life: their families, their interests, their hobbies, or what they planned to do after the experiment ended. They spoke only about prison conditions, the guards, and so on.

So, think about it. How often do you talk about your non-work-related interests and activities while you are at work? Most of us have been in employment situations that started to dominate our lives. We check in from our personal computers at home, even when not required to do so. We get involved in the office politics. We start to see Bob From Accounting as Bob From Accounting, rather than Bob Smith Who Coaches Soccer On The Weekends. You can be certain that, in those situations, people see you as Bob From Accounting, or Jane From Legal, or whatever the case may be.

This is Situational influence that is rather easy to resist, simply by keeping your identity outside of work in the forefront of your mind. Remind yourself, and others, who you really are. Talk about your family or what you did on the weekend. This one small act can make a big difference when the Situation is volatile and negative.

At LuciferEffect.com, Zimbardo offers twenty “hints” for resisting influence. These simple pearls of wisdom can go a long way toward helping a person keep sight of their moral compass in stressful situations. They’re good tips, but to really prepare oneself to resist this influence, it’s even better to understand the basic principles of social influence and how they are used. The website offers a good introduction to that process here.

I Sympathize With Adam

Adam’s story resonated with me, because I have a similar one of my own.

Many years ago, I befriended a number of my work colleagues. On one occasion, a small group of us headed over to my apartment for some drinks, intending to head out to a club a little later. We got to talking, casual conversation turned deep, and eventually one of my colleagues recounted a story of being physically abused – she said “beat down” – on a crowded public beach. The crowd did nothing to stop the abuse. Even when the beating was finished, no one came to help her even clean herself up.

No one did anything.

I learned later that my colleague was describing a very tragic and personal experience with what social psychologists call the bystander effect. When others are present, we are less likely to help the victims of an attack; the more people present, the less likely we are to help. Sometimes we struggle to understand how innocent people can be hacked to death in the streets in front of literally hundreds of onlookers without anyone stepping in to help. The bystander effect explains this phenomenon.

At the time, I said to my colleague, “I can’t believe no one did anything.”

She chortled and said, “Nobody does anything.”

“I would have done something,” I said.

“Yeah, you say that,” she replied, “but when stuff like this happens, nobody ever does anything. You might think you’ll do something, but you won’t.”

I thought about that for a moment. Then I made a promise to her: “Because you said that, I’m going to make sure that I do something if I ever see something like that.” She didn’t believe me, but said she hoped I would.

I have not yet had the chance to fulfill my promise to my friend. However, our conversation made a permanent impact on my life. I am not certain that, when the time comes, I will indeed do what I promised to do. If I don’t, that failure will haunt me to my grave.

It wasn’t until I started reading The Lucifer Effect that I learned that the promise I made is one of Zimbardo’s recommended strategies for actually doing what I intend to do. He writes:

We also want to believe that there is something IN some people that drives them toward evil, while there is something different IN others that drives them toward good. It is an obvious notion but there is no hard evidence to support that dispositional view of evil and good, certainly not the inner determinants of heroism. There may be, but I need to see reliable data before I am convinced. Till then, I am proposing we focus on situational determinants of evil and good, trying to understand what about certain behavioral settings pushes some of us to become perpetrators of evil, others to look the other way in the presence of evil doers, tacitly condoning their actions and thus being guilty of the evil of inaction, while others act heroically on behalf of those in need or righting injustice. Some situations can inflame the “hostile imagination,” propelling good people to do bad deeds, while something in that same setting can inspire the “heroic imagination” propelling ordinary people toward actions that their culture at a given time determines is “heroic.” I argue in Lucifer and recent essays, that follow here, it is vital for every society to have its institutions teach heroism, building into such teachings the importance of mentally rehearsing taking heroic action—thus to be ready to act when called to service for a moral cause or just to help a victim in distress.

The best I can do is try. Until then, I argue that becoming better equipped to recognize and deal with Situational influence is the best and most reliable means to prevent evil from happening, to overcome situations, and – with some hard work – to overturn the systems that keep these situations intact.


The System and the Situation

Maintaining cognitive biases and willful illusions isn’t just problematic for our ability to reason – it’s morally wrong. A simple example can illustrate what I mean:

At the age of 9, Ethan Couch was still sleeping with his mother even though he had his own bedroom.

“Tonya has [Ethan’s] bed in her room and considers Ethan to be her protector. Very unusual,” Flynn said of that arrangement. “Very unusual, and highly questionable.”

The term “adultified” was used to explain that Ethan was treated as an equal, rather than as a child.

“This was a very dysfunctional family,” Flynn said. “Did not prepare Ethan for adulthood. It doesn’t surprise me at all that it has run its course this way.”

No matter what you happen to personally believe about Ethan Couch and his infamous “affluenza” defense, the fact of the matter is that his having been raised by a woman who wanted to maintain severe (possibly drug-induced) delusions rather than raise him properly clearly ruined his life. Ultimately, this ended the lives of four innocent motorists.

I’m not suggesting that their deaths were inevitable in light of Couch’s upbringing, I’m simply highlighting the fact that when we sustain self-delusions at the expense of other people, we’re acting immorally.

Of course, the matter of Ethan Couch’s dysfunctional family is cut-and-dried, and we don’t learn anything from stating the obvious. The real question is: to what extent do we ourselves play the role of Tonya or Ethan Couch in our own lives, and who ultimately pays the price?

*        *        *

By now, readers must surely be aware of the recent allegations of sexual abuse by United Nations “peacekeepers” in the Central African Republic. At least seven women and young girls claim to have been raped; allegations serious enough, and numerous enough, that it is highly unlikely they are untrue.

On some level, no one is really surprised by this. On some level, we all understand that “war is hell” and that the conditions of war almost always (or simply always) enable a rash of sexual abuse. This tragic phenomenon is thousands of years old, maybe as old as human civilization itself. It’s fair to say that the Situation of war is a direct cause of sexual abuse; meaning, if we want to guarantee that sexual abuse occurs, all we have to do is start a war. Period.

It’s important to note that sexual abuse has nothing to do with the reasons people or nations or groups wage war. That is, there’s nothing inherent in the politics of war that causes rape. In modern warfare, rape is not a strategic decision made by generals. It is not a planned and coordinated action by the military.

On the contrary, it is an emergent phenomenon of wartime social psychology. It happens in every war. The sexual abuse is committed not by rogue scoundrels seeking to exploit a volatile background scenario, but by normal, healthy boys and girls you personally know and grew up with. But for your decision not to enlist – or someone else’s decision not to deploy you to that particular war zone – the person committing these atrocities not only could have been you, but probably would have been you.

This is the profoundly important lesson of social psychology. The abusers went into the Situation as psychologically normal, moral human beings. In the Situation, they became monsters. Once removed from the Situation, they reverted to their previous state, plus or minus the impacts of having experienced war firsthand.

*        *        *

Many well-meaning people will react to this information in a natural, but in my opinion, ineffectual way: They’ll call for rules and procedures to be put in place. These rules and procedures will help prevent the people in a bad Situation from falling victim to situational influence, and thus preempt them from committing sexual abuse.

What these people fail to understand is that rules and procedures are already in place (emphasis added):

“I will not rest until these heinous acts are uncovered, perpetrators are punished, and incidents cease,” the U.N envoy for Central African Republic and head of the U.N. mission Parfait Onanga-Anyanga said during a visit to Bambari, as Reuters reports.

He reminded the troops that “sexual abuse and exploitation is a serious breach of the U.N. regulations and a human rights violation; a double crime that affects the vulnerable women and children you were sent here to protect.”


Anthony Banbury, assistant secretary general to the U.N., said that there are around 69 confirmed allegations of sexual abuse or exploitation among the U.N.’s 16 international peacekeeping missions, for the whole of 2015, as reported by The Globe and Mail.

Peacekeeping missions are not anarchy. Rules and procedures designed to prevent chaos exist and have been implemented. These crimes are not a lapse in the execution of a perfectly designed process.

The process itself serves as a self-delusion. Its mere existence convinces us that

  • If we commit a crime against humanity, we were just following orders;
  • If we observe a crime against humanity, procedures were violated and transgressors must be brought to justice;
  • Once they have been punished, justice is served and the world is normal again.

In reality, nothing about this serves to actually prevent sexual abuse. This is just a system we’ve cooked up ex post facto to prevent an existential vacuum from opening up every time rapes occur on UN peacekeeping missions (or US wars, or sectarian uprisings, etc.); which is to say, every single time war is waged and/or peace is “kept.”

Every. Single. Time.

I’m beating a dead horse here because it’s important to understand that (once again, shout it from the rooftops) rape is an inevitable and predictable byproduct of war.

Once we understand that – I mean really understand it – that means we are no longer entitled to be surprised by allegations of sexual abuse. We are no longer entitled to believe that it was the bad behavior of a few bad apples. We are no longer entitled to believe that we had nothing to do with it or that our procedures are well-conceived and capable of addressing the problem.

As Philip Zimbardo has said, Situational psychology does not excuse evil, it democratizes it. It’s easy to believe that a U.N. peacekeeping mission in the Central African Republic, or a torture chamber in Cuba, or an insane-asylum-cum-torture-chamber in Iraq, or the total eradication of life as we know it in Syria, has nothing to do with us.

It’s easy to believe that these situations are just too complex to easily solve, and that the best we can do is vote for the right people, who will implement the right procedures, which will solve the problem.

But, no. That’s just our delusion talking. That’s just the part of our brain that doesn’t want to acknowledge that the crime is a direct result of the Situation, and the Situation is a direct result of the System that enables it. To prevent these and other atrocities, we don’t need better rules written by better philosopher-kings.

Instead, we need to dispense with the delusion and confront the existential vacuum. We need to admit to ourselves that things like this don’t magically stop happening after election season is over. We need to acknowledge that Good Guy Candidate A was still powerless to prevent the Situation, which means our belief that he could is also delusional. We need to admit to ourselves that the System enabling this terror is the delusion we carry with us, the one that tells us that the rules can work, if only they’re properly implemented.

Then, and only then, will we be ready to make a real and productive change.

Psychological Foundations for Morality: Some Problems

In my previous post, I made a rather bold proposition: maybe our moral beliefs can be based on notions of mental health. The sound-bite version of this was:

Actions that serve to augment or support the mental health of moral agents are moral, actions that serve to diminish their mental health are immoral, and actions that have no impact on mental health are morally neutral.

I then solicited feedback in hopes of finding the most glaring holes in my idea. That feedback came in the form of comments under the post, as well as private conversations with my fellow Sweet-Talkers.

In this post, I’d like to summarize that feedback in hopes of highlighting what many perceived to be the major weaknesses of the idea. I’ll defer my own responses, where applicable, to a later post.

Morality Is Not An Individual Phenomenon

One of the most interesting (to me) criticisms I received came once again from Samuel Hammond, who not only agreed with the Mises passage I quoted, but took the idea even further.

My understanding of his view (i.e. my words, not his) is that morality is basically only ever a social question. In other words, we can talk about character development or mental health from an individual perspective, and raise all sorts of interesting points. However, since morality is only an assessment of the extent to which a given action is “pro-social,” such individual considerations are not questions of morality. They might be interesting. They might be worthy of examination. But since morality itself is a social question, individual choices can’t really be called “moral” or “immoral” except in reference to how those actions relate to the society in which they’re made.

Truth be told, if this criticism is true, then it is indeed devastating to my idea. If morality really is a purely social construct, then when one finds oneself at odds with society, I was right to have written, “Better luck next social order.”

Psychology Is Subjective

Several objections were raised that I might broadly classify as complaints that psychology, being highly subjective in a number of ways, cannot objectively solve moral problems.

One version of this objection, quite well articulated by Andrew, highlighted the fact that certain behaviors and mindsets have been considered mental illnesses at one point in time and completely normal and healthy at others. In other words, “what is and is not considered a mental illness changes depending on the culture.” One very obvious example is homosexuality, but perhaps a less obvious and more powerful example is autism. In psychology’s infancy, many autistic people were simply considered invalids, or mentally deranged. Today, however, we know them to be simply atypical, but fully capable of leading happy, mentally healthy lives as valued – oftentimes superior – contributors to human society. If mental illness classifications change so radically over time, isn’t psychology poorly equipped to serve as a foundation for moral choices? Wouldn’t it have been wrong in the year 1850 to call an autistic person immoral for reasons of her mental health classification under the prevailing psychological theories of the time?

To complicate matters further, psychology isn’t just subjective across cultures, it’s also subjective within them. Another point Andrew raises is that the moral conclusions of a true sociopath will differ radically from those of a “normal” person. (See my comment to Andrew for a brief answer to the question of sociopathy.) Suppose we are to analyze the morality of a malignant narcissist. Such a person can feel no remorse and no empathy for other human beings, and consequently derives pleasure from taking advantage of and/or abusing others. If “mentally healthy” is to mean “baseline emotional equilibrium for the individual,” then such a person might feel morally entitled to engage in behaviors that the rest of us would consider obviously abhorrent. How, then, might psychology answer that malignant narcissist’s moral justifications?

It would be great if the complications ended there, but in fact they get worse. The line between “mentally healthy behavior” and “mentally unhealthy behavior” isn’t even all that well-defined for any given individual. As in the Paradox of the Heap, a patient who slips into major depressive disorder from a previous state of “mere” profound grief doesn’t suddenly become majorly depressed as a matter of 1, 2, 3, go! It is a gradual process in which the extremes are easily identified, but for which there is no single defining moment that separates mental health from mental illness. And so it goes for other kinds of patient experiences as well. If the lines can be so fuzzy for an individual in a single experience, how could we ever hope to derive concrete moral precepts from sets of experiences that have no perceptible demarcation?

Psychology Is Corruptible

Another powerful criticism of the notion of basing our morality on psychology is that psychology itself can be, and has been, corrupted. I received a lot of examples of this kind.

Andrew worried that my proposal runs the risk of “pathologizing moral disagreements,” and to buttress his case, he pointed out that Soviet dissidents were often psychologically “determined” to be unfit, and institutionalized. He also pointed to quasi-psychological theories exemplified by Liberalism is a Mental Disorder and The Reactionary Mind, both of which seek to dismiss leftist and rightist political ideologies as psychological problems rather than valid, informed, principled disagreements.

Adam, for his part, pointed to the experiences of the transgendered, namely Deirdre McCloskey and her experience of being forcibly institutionalized by her own sister, a psychologist. In that case, it may genuinely have been true that McCloskey’s sister believed that Deirdre needed to be institutionalized, and yet when we take a step back and assess the matter more stoically (and within the context of a more modern culture), the reaction seems both heinous and extreme. That experience echoes the well-documented cases of homosexuals subjected to shock therapy in order to “fix” their homosexuality, and of all manner of torture our society has unfairly unleashed against innocent people whose only real “crime” is deviation from the norm.

If Psychology Is Founded On Moral Premises (Even Partially), Then My Proposition Assumes The Consequent

Paul offered what I thought was a rather novel criticism. He says, “[P]sychology… rests on certain axioms or assumptions. These axioms touch on moral topics. So it’s tricky to make psychology the singular foundation of morality.”

Restating this position in slightly different terms, if morality is logically prior to the axioms of psychology, then employing psychology as a foundation for morality is probably impossible. At the very least, any morality contained in the foundations of psychology will necessarily end up in our moral foundations, not because the evidence pointed that way, but because the way we have chosen to investigate the evidence can only ever perceive those moral precepts, and never contradict them.

This is a bigger problem than it first appears, and one that I will likely never solve. It is a foundational question about the philosophy of science, not even unique to psychology specifically. How can a human mind seek to understand itself in objective terms? How can three-dimensional physical beings ever hope to understand the physics of an M-dimensional universe?

I must quickly and readily acknowledge my inability to respond to this question. It is much bigger than the present discussion, and indeed one of the biggest problems that exist in philosophy.


While I don’t agree with all of the criticisms above, I nonetheless hope I have done an adequate job in summarizing them in a way that makes them seem not only plausible, but also compelling. My objective here was to poke serious holes in my own idea, and where possible, I tried to elaborate on the ideas expressed by others in order to make their points even stronger than they were in the context of off-the-cuff, casual conversation.

If you, the reader, are now in serious doubt of my original proposal after having read the above, then I can consider my endeavor a success.

I can only hope that any subsequent response I provide to these concerns is equally successful.

Toward A Better Foundation For Morality

In Liberalism: In the Classical Tradition, Ludwig von Mises writes, “Everything that serves to preserve the social order is moral; everything that is detrimental to it is immoral.” For my money, this represents the clearest, most concise way to frame the notion of morality as a social institution.

Recently, fellow Sweet Talker Samuel Hammond elaborated a bit:

For my part, I think ethics lies not in formally consistent logical arguments, but in the public recognition of norms. Where norms vary, so does public reason. To the extent that some norms are more universal than others, it’s because discourse and other cultural evolutionary biases create normative convergence. Those convergent forces trace an outline of a more general logic behind certain norms that you can call transcendental, in the sense of being abstracted from human particularity.

I don’t know the extent to which Samuel agrees with Mises, but his statements seemed consistent with the “liberal-subjectivist” interpretation of morality Mises seemed to describe in his oeuvre. In short, our notions of right and wrong change with the times, and with the circumstances. Something once considered immoral is today anything but (think casual sex); something once considered common practice is now considered utterly heinous (think child brides). Morality seems to flex and change with the society experiencing it, at least according to Mises’ view. And even if Hammond himself looks at ethics differently, it’s not controversial to suggest that what I’ve just described is a view widely held by many people of many different political and philosophical persuasions.

Still, problems arise with this point of view. Here’s a big one, for example: Suppose you were a slave in 19th-century America. Then, for you, your position in society and society’s treatment of you are perfectly ethical – even though you know quite rationally and reasonably that this cannot be the case.

For an ethicist like Mises, slavery is perfectly ethical until the social order evolves; then, we need to change the ethics to serve the new social order. If the new social order happens to enslave you, well, that’s just tough cookies. Better luck next social order; today, your beliefs about human equality are immoral.

Subjective Ethics: Alternatives To

I’ll readily admit my bias here: I don’t think a world in which social orders allow us to enslave each other until such time as a new social order comes along is a particularly ethical world. I have a bias in favor of liberating the enslaved and oppressed. I think human beings are capable of a better system of ethics. Unfortunately, many of the alternatives pose problems of their own.

One alternative, for example, is deontological decree, i.e. the word of god. The appeal here is obvious: god is perfect, so surely his system of ethics is a pretty good one. But deontology poses two main problems for people like me.

The first is that, if I don’t believe in the particular god making the decree, then for me the system is identical to the “liberal-subjectivist” scenario described above. In other words, if your god happens to decree that my daughter can be made a child bride, and I object, that’s just too bad for her and me. To object is to contradict god’s will, and only evil (unethical) people do that.

The second problem with deontology is that, as a purely empirical matter, it tends to be absolutely miserable in practice, leading to ethical catastrophes like the Holocaust, the Inquisition, the Taliban, “honor” killings, caste systems, and so on. Deontology is so atrocious in practice that almost no one thinks it’s a good idea anymore, not even most theists.

Another possibility is utilitarianism, a favorite among rational types and academics. Calculating the choice that results in the most net total (subjective) happiness is an attractive proposition because it gives us a way to apply objective thinking (economic models and the like) to legitimately subjective questions. It’s a democratic approach to ethical problems. Here again, however, I spot two primary problems.

The first I shall illustrate by repeating an example I read in Steven Landsburg’s excellent book, The Big Questions. While he tells it better than I do, the example in my words goes something like this: Suppose everyone in the entire world were experiencing a dull but perpetual headache which could, for some strange reason, be stopped by killing a single innocent man. According to Landsburg and most other utilitarians, unless that man happens to be a Utility Monster, the right ethical choice is to kill one man in order to spare the rest of the world a mild headache. The moral of the story is: Don’t be the guy.

Still not convinced? Then what if I called that one guy “the guy with slightly darker skin and a comparative advantage in picking cotton?”

Aha, there it is. We’re back to the same scenario I laid out at the outset of this blog post. If your utility happens to be in the minority, too bad for you. Under utilitarianism, the enslavement or murder of any person y is theoretically justifiable, so long as we can buttress the case with a utility function U = ∑ᵢ xᵢ − u(y) that comes out positive for a sufficiently large number of beneficiaries i.
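The arithmetic behind this objection can be sketched in a few lines. The utility numbers below are entirely hypothetical; the point is only that any fixed harm to one person, however large, is eventually outweighed by a sum of arbitrarily small benefits to enough other people:

```python
# Hypothetical illustration of the utilitarian sum in the headache example.
# Each of n bystanders gains a tiny utility from curing the headache; one
# victim suffers a large loss. For large enough n, the total flips positive.

def net_utility(n_bystanders: int, gain_per_person: float, victim_loss: float) -> float:
    """Aggregate utility: U = (sum of small gains) - (victim's loss)."""
    return n_bystanders * gain_per_person - victim_loss

gain = 0.001   # a mild headache cured is worth very little to each person...
loss = 1_000.0 # ...while the victim's loss is enormous.

print(net_utility(1_000, gain, loss))       # small n: negative total
print(net_utility(10_000_000, gain, loss))  # large n: positive total, the sum "justifies" the act
```

No matter how we set the victim’s loss, there is always some population size at which the calculation tips, which is exactly the worry raised above.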

Virtue Ethics, a favorite among Sweet Talkers – myself included – seems to work really well on an individual level, because it affords the moral agent a means by which to reason through the ethical ramifications of a particular dilemma and arrive at a strong conclusion that reflects the totality of one’s moral code.

Adam highlighted a couple of substantial problems in a previous post:

Like Aristotle, and Julia Annas and Daniel Russell, I think that you must grasp the reasons in order to become fully virtuous. Unlike them, I think a substantial part of this understanding—the largest part in fact—is tacit, rather than explicit. This does not mean they are completely inexplicable; it’s just that people vary in their ability to articulate their reasons, and it has not been my experience that eloquence and clearness of explicit thought go hand in hand with goodness. Often such people are able to talk themselves into perfectly ridiculous perspectives, or worse. The USSR and Maoist China were creations of highly educated people capable of being very articulate about their reasons, and equally capable of filling mass graves with the bodies of the innocent dead.

It is the rightness of the reasons, and the responsiveness to them, that matters. The ability to explain and defend them is absolutely a valuable quality, and especially crucial in a liberal democracy where talk and persuasion are paramount. But that does not detract from the fact that many truly good people are bad at rhetoric, and many skilled in that art are quite rotten.

So, one weakness of virtue ethics is that it makes our moral worth depend on our ability to rationalize our virtue rhetorically: anything goes as long as we can make a good argument for it, and nothing is moral unless it can be argued for. Another weakness is that, in a liberal democracy, we still end up putting the matter to a majoritarian vote. The result is… yep, the same scenario I described at the outset of this blog post.

A New (?) Proposal For What Makes Something Moral

Is there any better standard upon which to found our systems of ethics, something that performs a little better than the ones I’ve described thus far?

I think I might have one: mental health. Actions that serve to augment or support the mental health of moral agents are moral, actions that serve to diminish their mental health are immoral, and actions that have no impact on mental health are morally neutral. Applying this evaluative criterion to moral decision-making seems to yield consistently good results.

For example, slavery is universally bad for every society in every time period, since no one could argue that being enslaved results in anything other than a mental health tragedy for the victim; furthermore, I could easily make the case that slave-ownership is corrosive to the mental health of the owner, too. The fact that slavery fails my moral test for both the victim and the perpetrator, by their own internal and subjective standards, is a major advantage, to put it lightly.

The test performs equally well for minor ethical dilemmas. For example, lying is shown to be wrong not just because “society deems it so,” nor because “honesty serves the greatest good for the greatest number,” nor even because honesty is virtuous. Rather, lying is immoral because your life will be miserable if people don’t trust you, you won’t be able to live with yourself if you consistently betray the trust of others, and everyone else will be miserable, too. As for “white lies,” the mental health test shows us that uncouthly stating whatever you’re thinking in the name of total honesty is closer to a pathology than a virtue; if you can’t be sensitive to others’ feelings while telling the truth, then you might need to improve your mental health.

This seems intuitively true to just about everyone. On some level, we all know that it is only very weird people or people in a state of dire mental health who give no consideration to the feelings of others, and merely robotically act in a prescriptively “moral” fashion.

At the same time, the mental health test arms us against the prevailing attitudes of an oppressive mob. Society at large might tell you that it’s moral to force you to marry an old man or a first cousin – but if you know that it’s wrong for your mental health, then you have the moral authority to say “No!” even in the face of unanimous peer pressure. Better a mentally healthy social outcast – maybe even better a dead dissident – than a crazed or broken slave. Furthermore, a pervasive sense of conformism is itself mentally unhealthy, and going along with the crowd when one’s sense of morals suggests otherwise is a prime example of why conformism can be a big problem. And yes, contrarianism for the sake of contrarianism is also problematic. So, the key question, the one that provides clear moral guidance both for the contrarian and the pushover is, “Is this good or bad for mental health?”

One conclusion that arises is that a compromised state of mental health, no matter what the reason, compromises our ability to make good moral decisions. This, too, seems to make intuitive sense: soldiers in a war zone have a reduced ability to make the same kinds of moral decisions that they do during peace time; someone experiencing profound grief will often neglect her loved ones; and so on. This underscores the importance of mental health, as it might be integral to our ability to be good people.


Part of my reason for writing this post is that I’m hoping for feedback. This idea is relatively new to me, and I’m still not totally sold on it. It seems robust, but I feel as though I might be missing something important – possibly a few things. The notion does tend to put me at odds with people who would ordinarily agree with me on some significant moral issues; that alone is sufficient to cast doubt on it. At the same time, the more I evaluate it myself, the more strongly the idea appeals to me.

So maybe I need help spotting my errors. If you happen to notice something I’ve missed, please do leave a comment.