Theory and Practice, Episode One


I need to make a point about something, but as it turns out, it’s impossible to make this point in a single blog post. So I’ll have to do this on an installment plan.

Adventures In Comparative Legal Systems

When I lived in Canada, I used to hang out with a lot of law students. During that time, the conversation would inevitably turn to Canadian law. By this, I mean that they were often doing their homework right in front of me, and I was helping them with it. So it was a bit more than just casual conversation.

And in case you’re wondering, the answer is: Yes, my experience tells me that most law school homework is done in a pub over multiple pitchers of beer.

Anyway, one of the things that struck me about the Canadian legal system is the way human rights are organized, legally speaking. Canada has what’s called the Canadian Charter of Rights and Freedoms, which is analogous to the American Bill of Rights. It spells out what rights are guaranteed to the people by the government. The Canadian government, according to Canadian law, is permitted to violate the Charter in certain cases, as long as the details of those cases conform to certain legal guidelines, which are spelled out in writing and in jurisprudence.

As a fiery, philosophical young man, I found this arrangement incensing. After all, the Bill of Rights is a document that outlines things that the U.S. federal government is not permitted to do. In other words, the presumption here in the United States is that human beings hold certain inalienable rights that supersede any additional legal power. In Canada, subject to legal conventions, it is the government that grants all rights to the people, so government powers supersede the rights of the people.

I say it used to incense me. It doesn’t anymore. Why not? Because while studying the law alongside my friends, I eventually learned that in practice the Canadian legal system reaches the same important conclusions regarding human rights as the American legal system.

The only material difference in these matters is the language used to justify the conclusion. In America, our courts tend to use language that refers to what the government cannot do, and what the intended meaning of legislation is. In Canada, their courts tend to use language that refers to what the government is permitted to do and whether the intended meaning of the legislation provides sufficient justification for doing it.

But, as I said, when it comes to everything that matters on human rights issues, the two countries’ legal systems tend to reach the same conclusions, even though their justifications are phrased differently.

What’s the Point, Ryan?

I bring this up because one of the least attractive things about philosophy is that it tends to raise objections that need not be raised.

We see a homeless man shivering outside a coffee shop with an outstretched arm holding a cup. Most people I know who have spare change will drop a few coins in the man’s cup. Of those who do, some of them do so for reasons of faith, some of them do so for reasons of utility maximization, some of them do it for reasons of virtue. And, yes, some of them do it for reasons of guilt, shame, embarrassment, or to help clear their conscience.

I know a few people who would choose not to help the man. They all refuse to do it for various reasons, but no matter what their moral philosophy happens to be, they all justify their decision on moral terms. Maybe they want to give the man incentive to get a job. Maybe they think someone else is more deserving. Maybe they think the man will spend the money contrary to his own best interests, i.e. on drugs or alcohol.

Philosophy tends to raise objections that need not be raised. If you and I both give the man our spare change, there is no point arguing over which one of us had the better moral reasoning: the outcome was the same, ergo our reasoning was equal. You can say this however you like: what matters is results; actions speak louder than words; practice is more relevant than theory.

What matters outside of that coffee shop is not the spotless philosophical reasoning used to justify a particular course of action, but rather what we choose to do. If I give the old man my spare change for totally incomprehensible and inconsistent reasons (reasons which, if taken to their natural conclusion, would destroy the world), I don’t care. Neither does the old man. Because the outcome of my moral reasoning was the same as if I had used a superior moral framework (or an even more inferior one): the man got his money and the world is still intact.

Now, if a particular philosophy fails to produce the right results, or fails to produce them consistently, then we have a good reason to evaluate the coherence of that philosophy and address its shortcomings. (More on that in a forthcoming post.) But if I’m giving my change to deserving old men, my friends and family are happy with me, and I am generally impacting the world in a positive way, then whatever crazy and internally inconsistent moral framework I’m working with is working for me; it’s paying its rent.

If we raise objections to “wrong” thinking that consistently yields “right” results, then maybe it’s time we checked our premises.

Situationism: Let’s Clarify Some Things

Samuel’s most recent post is phrased with the language of disagreement, leading me to believe that he does not fully appreciate the extent to which his post agrees both with my views on the Situationism of war and with Situationism in general.

He quotes Joseph Heath on page 336 of Morality, Competition, and the Firm. One page earlier, however, Heath cites the most famous and profound Situationist social psychology experiment of all time, the Milgram experiments. Heath writes (all emphases mine, throughout):

Arendt’s observations about “the banality of evil” generated considerable outrage when first published. The Milgram experiments, it is worth recalling, were undertaken with the goal of disproving Arendt’s hypothesis, although in the end they wound up providing its most powerful confirmation. All of this suggests that there is actually an element of wishful thinking in the idea that “bad people do bad things.” Since the people who say this typically do not conceive of themselves as bad people, adherence to this theory is a way of putting some distance between themselves and those who perpetrate such acts, and thereby of avoiding the disquieting suggestion that they too are perfectly capable of inflicting great suffering on others.

“Wishful thinking” is sort of like a self-delusion, isn’t it? I couldn’t possibly do the wrong thing, because I’m not a bad person. The problem here is that, again and again, after controlling for many individualized psychological factors, researchers find that perfectly good people are willing to deliver lethal levels of electric shock to innocent strangers if someone in a lab coat (or, in later versions of the experiment, even without a lab coat) simply prompts them to do so.

Heath continues:

Yet while a single, general theory of criminal motivation remains elusive, we have managed, over the course of the twentieth century, to do an enormous amount of debunking. In particular, the idea that criminals “don’t know right from wrong,” or that they were “not raised right,” or that they don’t “share our values” have all been decisively rejected. Furthermore, these claims have themselves become the object of suspicion, because they all have the effect of “othering” the criminal, suggesting that there is some kind of an intrinsic or essential difference between criminals and law-abiding citizens. This seems more likely to be a motivated belief imposed by the social psychology of punishment than an accurate characterization of the underlying structure of criminal motivation.

In fact, Philip Zimbardo defines evil to be “knowing better but acting worse.” Zimbardo, Heath, Arendt, Milgram, and many others have reached the same conclusion again and again, that what causes evil is not part of a person’s disposition, but rather exposure to a particular kind of situation.

What Situationism Is Not

Situationism is not a way to deflect individual moral responsibility. Situationism is not a way to blame forces outside of our control. Situationism is not a method of saying “the system made me do it.”

Situationism is also not a means to ascribe all behavior to the surrounding environment, rather than the acts of individuals. Samuel leads me to believe that this is what he had in mind when he wrote,

Reading Ryan’s post, I was left with the sense that he sees a situation’s influence over moral decision as inevitable, possibly even deterministic…

The problem with this argument comes back to the eternal question asked by criminologists: Why isn’t there more crime than there actually is? Given the state’s limited enforcement capacity, society depends on most people, most of the time, behaving morally, i.e., following the rules. If self-delusion were truly the rule, rather than the exception, civilization would collapse under a crisis of endemic shirking.

It goes without saying that peer pressure can only affect a person during moments when that person is exposed to it. Absent that exposure, we couldn’t ever blame peer pressure for anything.

And so it is with Situationism. Society generally follows the rules because we seldom find ourselves in situations that severely undermine our ability to tell right from wrong. Those situations arise when we are cast into new environments, as when adolescents enter junior high school (a notoriously difficult time in the lives of most teens). The newness of the situation draws people out of their own identities and forces them to deal with new stimuli in new ways. Is it any wonder so many young people cave to peer pressure?

It is not sitting in an Introduction to Philosophy course in college that turns formerly well-behaved kids into reckless frat-party rascals; it’s the exposure to the fraternity’s calculated initiation procedures and package-deal social dynamics. Caught up in a new and stressful situation, far away from home, and hungry to find a “true” identity, young people “experiment” in ways that aren’t always positive (as the many allegations of “rape culture” etc. aptly attest).

So Situationism is not a claim that any set of circumstances at all is to blame for individuals’ bad behavior, and if this is what Samuel had in mind when he wrote his post, he can rest assured that I agree with him.

What Situationism Is

Instead, Situationism is a set of theories that offer explanations for exactly how and why good people do bad things. They still bear responsibility for their actions, but psychologists aren’t as interested in assigning blame and enforcing justice as they are in explaining human behavior. Thus, if psychologists conclude that identifiable situational factors consistently produce certain human behavior, that is good information to know.

Furthermore, it’s not just “good to know” for the sake of our intellectual curiosity; it’s good to know because it provides us a means of avoiding evil in the future. If you know in advance that you are going to experience a certain kind of situation, you are better prepared to resist Situational influence. Indeed, it is your moral responsibility to do so.

Ignorance of Situationism’s theories isn’t an excuse, of course, but once you have the knowledge that war means rape (to cite just one example), you no longer have a moral justification for not applying influence-resisting techniques to situations you face.

This is part of what Dr. Zimbardo means when he says that Situationism does not excuse evil, it democratizes it. You, the reader, are now aware of Situational influence. You, the reader, are now morally on the hook for recognizing bad Situational influence when you see it, and resisting it.

It Can Happen To You

Samuel closes his most recent post with the following:

The upshot is that we shouldn’t stop holding people accountable for their actions just because the situation they somehow found themselves in made shirking their moral duties the path of least resistance. Indeed, just the opposite. Employing techniques of neutralization, as a self-serving behavior, should itself be an object of social sanction.

Moreover, it means there’s a chance we can preempt our techniques of neutralization by being aware of them, and by training ourselves in strategies that undercut self-delusion. That’s essentially what Joseph Heath argues business ethics courses should look like, rather than tired lessons in the history of moral philosophy. But in general it’s probably the sort of moral education we should all be subject to, starting as children.

Notice how similar his prescription is to mine (and Zimbardo’s).

There is, however, one key difference: Samuel’s perspective is still highly colored by dispositional reasoning. That is, for him, “neutralization” and self-serving behavior are a personal problem that we must be trained to overcome: a flaw in ourselves which can be stamped out with proper moral training.

My nitpick here is that this sort of reasoning leaves open the door for any moral analyst to say, “Well, Bob did a bad thing because he never trained himself to do good things.” This way of looking at things still provides an avenue for blaming bad apples rather than recognizing bad barrels. But a pivotal revelation of Situational analysis is that the same apple may be good or bad, depending on the barrel.

In Chapter 14 of The Lucifer Effect, Zimbardo provides a full background history on Ivan “Chip” Frederick, one of the soldiers found guilty of abuse during the Abu Ghraib scandal. For decades, Frederick was the model of moral behavior. He was an exemplary civilian prison guard, beloved by family and friends alike. There was never a hint, throughout his entire life, that he was the sort of person who would harm another human being, despite his position as a national guardsman and a prison guard. He was a decorated and beloved soldier who was assigned to work at the Abu Ghraib prison precisely because of his record.

Once exposed to the conditions at Abu Ghraib, however, Frederick (like virtually everyone in that hell-hole) quickly fell apart. Absent clear rules and chains of authority, Frederick no longer had a cohesive structure to fall back on during times of intense stress. He raised this issue with his superiors, along with many others, to no avail. Without any outside support, the man who was once precisely the kind of person who excelled as a prison guard quickly deteriorated into the worst kind of guard imaginable.

It was not his weakness that caused his decline, nor was it his propensity to explain away his behavior internally. It’s doubtful that most people fully understand the conditions inside Abu Ghraib prison circa 2003. There was no indoor plumbing, and the outhouses were literally overflowing with raw sewage. The prison was not just overcrowded, it was overflowing and impossible to manage. The prisoners – many of whom were innocent civilians imprisoned for no reason at all, including adolescents – staged frequent, violent riots and escape attempts. The prison suffered mortar attacks on a daily basis. Guards raped prisoners. Prisoners raped prisoners. Non-government contractors “interrogated” (read: tortured) prisoners to extract confessions from them. Clothing shortages and instances of self-harm left large numbers of prisoners constantly naked.

The point here is that, if we take time to really try to understand that place and that situation, we will quickly see that if we were there, we would likely act no more like ourselves than Chip Frederick did. This is the horrible power of negative Situational influence.

Resisting Negative Influence

In reading Zimbardo’s book, I have been struck by some of the small, seemingly insignificant ways we can resist negative influence.

One of the interesting things to come out of the Stanford Prison Experiment was the observation that the prisoners, when not in the presence of any prison guards, only ever spoke to each other about the prison. They never talked to each other about who they were in real life: their families, their interests, their hobbies, or what they planned to do after the experiment ended. They only ever spoke to each other about prison conditions, the guards, and so on.

So, think about it. How often do you talk about your non-work-related interests and activities while you are at work? Most of us have been in employment situations that started to dominate our lives. We check in from our personal computers at home, even when not required to do so. We get involved in the office politics. We start to see Bob Smith as Bob From Accounting, rather than Bob Smith Who Coaches Soccer On The Weekends. You can be certain that, in those situations, people see you as Bob From Accounting, or Jane From Legal, or whatever the case may be.

This is Situational influence that is rather easy to resist by simply placing your identity outside of work at the forefront of your mind. Remind yourself, and others, who you really are. Talk about your family or what you did on the weekend. This one small act can make a big difference if the Situation is volatile and negative.

On his website, Zimbardo offers twenty “hints” for resisting influence. These simple pearls of wisdom can go a long way toward helping a person keep sight of their moral compass in stressful situations. They’re good tips, but in order to really prepare oneself for resisting this influence, it’s even better to understand the basic principles of social influence and how they are used. The website offers a good introduction to that process here.

I Sympathize With Adam

Adam’s story resonated with me, because I have a similar one of my own.

Many years ago, I befriended a number of my work colleagues. On one occasion, a small group of us headed over to my apartment for some drinks, intending to head out to a club a little later. We got to talking, casual conversation turned deep, and eventually one of my colleagues recounted a story of being physically abused – she said “beat down” – on a crowded public beach. The crowd did nothing to stop the abuse. Even when the beating was finished, no one came to help her even clean herself up.

No one did anything.

I learned later that my colleague was describing a very tragic and personal experience with what social psychologists call the bystander effect. When others are present, we are unlikely to help the victims of an attack. The more people present, the less likely we are to help. Sometimes we struggle to understand how innocent people can be hacked to death in the streets in front of literally hundreds of onlookers without anyone stepping in to help. The bystander effect explains this phenomenon.

At the time, I said to my colleague, “I can’t believe no one did anything.”

She chortled and said, “Nobody does anything.”

“I would have done something,” I said.

“Yeah, you say that,” she replied, “but when stuff like this happens, nobody ever does anything. You might think you’ll do something, but you won’t.”

I thought about that for a moment. Then I made a promise to her: “Because you said that, I’m going to make sure that I do something if I ever see something like that.” She didn’t believe me, but said she hoped I would.

I have not yet had the chance to fulfill my promise to my friend. However, our conversation made a permanent impact on my life. I am not certain that, when the time comes, I will indeed do what I promised to do. If I don’t, that failure will haunt me to my grave.

It wasn’t until I started reading The Lucifer Effect that I learned that the promise I made is one of Zimbardo’s recommended strategies for actually doing what I intend to do. He writes:

We also want to believe that there is something IN some people that drives them toward evil, while there is something different IN others that drives them toward good. It is an obvious notion but there is no hard evidence to support that dispositional view of evil and good, certainly not the inner determinants of heroism. There may be, but I need to see reliable data before I am convinced. Till then, I am proposing we focus on situational determinants of evil and good, trying to understand what about certain behavioral settings pushes some of us to become perpetrators of evil, others to look the other way in the presence of evil doers, tacitly condoning their actions and thus being guilty of the evil of inaction, while others act heroically on behalf of those in need or righting injustice. Some situations can inflame the “hostile imagination,” propelling good people to do bad deeds, while something in that same setting can inspire the “heroic imagination” propelling ordinary people toward actions that their culture at a given time determines is “heroic.” I argue in Lucifer and recent essays, that follow here, it is vital for every society to have its institutions teach heroism, building into such teachings the importance of mentally rehearsing taking heroic action—thus to be ready to act when called to service for a moral cause or just to help a victim in distress.

The best I can do is try. In the meantime, I maintain that becoming better equipped to recognize and deal with Situational influence is the best and most reliable means to prevent evil from happening, to overcome situations, and – with some hard work – to overturn the systems that keep these situations intact.


Techniques of Neutralization

Ryan and Adam have been discussing the role of situation in morality. Do read both in full.

Ryan’s is a convincing defense of the banality of evil. Rape is an inevitability of war, for example, not because the participants of war are particularly bad human beings, but because the situation of being at war drives otherwise normal human beings to do heinous things. As he writes,

Situational psychology does not excuse evil, it democratizes it. It’s easy to believe that a U.N. peacekeeping mission in the Central African Republic, or a torture chamber in Cuba, or an insane-asylum-cum-torture-chamber in Iraq, or the total eradication of life as we know it in Syria, has nothing to do with us.

Both he and Adam point to self-delusion as the culprit. Writing from the experience of having once rationalized the immoral actions of a close friend, Adam says he

received that wake-up call about my own capacity for self-deception over a decade ago. The bigger shock was not that I was able to be so willfully blind, but that so many of my friends continued to be in light of what the investigation uncovered. In fact, they doubled down, entrenching themselves in a persecution narrative which provided a useful framework for rationalizing away any hint of their own guilt.

I don’t have much to add so far that my senpai hasn’t (as usual) said earlier and much better. In his discussion of the shortcomings of virtue ethics in Morality, Competition, and the Firm, Joseph Heath brings up the criminology literature on violent subcultures:

In the 1950s David Matza and Gresham Sykes suggested that the reason deviant subcultures (such as youth gangs) are criminogenic is not that they encourage primary deviance with respect to the moral norms and values of society, but that they facilitate secondary deviance with respect to cognitive and epistemic norms governing the way situations are construed. … Instead of maintaining that violence itself is good, members of the group may instead convince themselves that they had no choice to act as they did, or that the victim had done something to deserve it … What distinguishes the criminal, according to this view, is not a motivational defect or an improper set of values, but rather a willingness to make self-serving use of excuses, in a way that neutralizes the force of conventional values.

One implication of these “techniques of neutralization,” as they’re known, is that proper behavior, for the most part, is not hidden knowledge that the deviant is ignorant of. In fact, social deviants usually “know” the right thing to do, but explain it away with reference to exceptional circumstances, or by construing the situation differently. Paraphrasing an example Heath often gives, when someone says they have “borrowed” an item they in fact stole, they are in essence replacing one normative violation (“do not steal”) with a different, less serious cognitive violation (a stretch of the generally accepted meaning of the word “borrowed”). He discusses other techniques of neutralization here. They include:

  • Denial of responsibility
  • Denial of injury
  • Denial of the victim
  • Condemnation of the condemner
  • Appeal to higher loyalties
  • “Everyone else is doing it”
  • Entitlement

Reading Ryan’s post, I was left with the sense that he sees a situation’s influence over moral decision as inevitable, possibly even deterministic. He thus suggests abandoning the even greater delusion that we can avoid self-delusion, and instead focus on reforming the broader system that generates the situations that leave us most compromised.

The problem with this argument comes back to the eternal question asked by criminologists: Why isn’t there more crime than there actually is? Given the state’s limited enforcement capacity, society depends on most people, most of the time, behaving morally, i.e., following the rules. If self-delusion were truly the rule, rather than the exception, civilization would collapse under a crisis of endemic shirking.

Ironically, blaming the system is one of the most pernicious techniques of neutralization criminologists have identified. Indeed, saying “it’s systemic” is one of the easiest ways to deny responsibility for one’s action, and in turn make the problematic behavioral pattern all the more common and entrenched.

This is true not just of crime stemming from war or systemic poverty; it applies equally well to white-collar crime. When bankers engage in shady lending or regulatory arbitrage, for example, they often neutralize their bad behavior by blaming the systemic forces of market competition (“everyone else is doing it”), or by appealing to the duty to maximize shareholder value within the letter of the law (appeal to higher loyalties). Over time this leads to juridification, the thickening of law books, as behaviors once enforced by unwritten social norms and voluntary self-restraint come to be replaced by codified laws with explicit sanctions.


The upshot is that we shouldn’t stop holding people accountable for their actions just because the situation they somehow found themselves in made shirking their moral duties the path of least resistance. Indeed, just the opposite. Employing techniques of neutralization, as a self-serving behavior, should itself be an object of social sanction.

Moreover, it means there’s a chance we can preempt our techniques of neutralization by being aware of them, and by training ourselves in strategies that undercut self-delusion. That’s essentially what Joseph Heath argues business ethics courses should look like, rather than tired lessons in the history of moral philosophy. But in general it’s probably the sort of moral education we should all be subject to, starting as children.

The System and the Situation

Maintaining cognitive biases and willful illusions isn’t just problematic for our ability to reason – it’s morally wrong. A simple example can illustrate what I mean:

At the age of 9, Ethan Couch was still sleeping with his mother even though he had his own bedroom.

“Tonya has [Ethan’s] bed in her room and considers Ethan to be her protector. Very unusual,” Flynn said of that arrangement. “Very unusual, and highly questionable.”

The term “adultified” was used to explain that Ethan was treated as an equal, rather than as a child.

“This was a very dysfunctional family,” Flynn said. “Did not prepare Ethan for adulthood. It doesn’t surprise me at all that it has run its course this way.”

No matter what you happen to personally believe about Ethan Couch and his infamous “affluenza” defense, the fact of the matter is that his having been raised by a woman who chose to maintain severe (possibly drug-induced) delusions rather than raise him properly clearly ruined his life. Ultimately, it also ended the lives of four innocent motorists.

I’m not suggesting that their deaths were inevitable in light of Couch’s upbringing; I’m simply highlighting the fact that when we sustain self-delusions at the expense of other people, we’re acting immorally.

Of course, the matter of Ethan Couch’s dysfunctional family is cut-and-dried, and we don’t learn anything from stating the obvious. The real question is this: to what extent do we ourselves play the role of Tonya or Ethan Couch in our own lives, and who ultimately pays the price?

*        *        *

By now, readers must surely be aware of the recent allegations of sexual abuse by United Nations “peacekeepers” in the Central African Republic. At least seven women and little girls claim to have been raped; the allegations are serious enough, and numerous enough, that it is highly unlikely they are untrue.

On some level, no one is really surprised by this. On some level, we all understand that “war is hell” and that the conditions of war almost always (or simply always) enable a rash of sexual abuse. This tragic phenomenon is thousands of years old, maybe as old as human civilization itself. It’s fair to say that the Situation of war is a direct cause of sexual abuse; meaning, if we want to guarantee that sexual abuse occurs, all we have to do is start a war. Period.

It’s important to note that sexual abuse has nothing to do with the reasons people or nations or groups wage war. That is, there’s nothing inherent in the politics of war that causes rape. In modern warfare, rape is not a strategic decision made by generals. It is not a planned and coordinated action by the military.

On the contrary, it is an emergent phenomenon of wartime social psychology. It happens in every war. The sexual abuse is committed not by rogue scoundrels seeking to exploit a volatile background scenario, but by normal, healthy boys and girls you personally know and grew up with. But for your decision not to enlist – or someone else’s decision not to deploy you to that particular war zone – the person committing these atrocities not only could have been you, but probably would have been you.

This is the profoundly important lesson of social psychology. The abusers went into the Situation as psychologically normal, moral human beings. In the Situation, they became monsters. Once removed from the Situation, they reverted back to their previous state, plus or minus the impacts of having experienced war firsthand.

*        *        *

Many well-meaning people will react to this information in a natural, but in my opinion, ineffectual way: They’ll call for rules and procedures to be put in place. These rules and procedures will help prevent the people in a bad Situation from falling victim to situational influence, and thus preempt them from committing sexual abuse.

What these people fail to understand is that rules and procedures are already in place (emphasis added):

“I will not rest until these heinous acts are uncovered, perpetrators are punished, and incidents cease,” the U.N envoy for Central African Republic and head of the U.N. mission Parfait Onanga-Anyanga said during a visit to Bambari, as Reuters reports.

He reminded the troops that “sexual abuse and exploitation is a serious breach of the U.N. regulations and a human rights violation; a double crime that affects the vulnerable women and children you were sent here to protect.”


Anthony Banbury, assistant secretary general to the U.N., said that there are around 69 confirmed allegations of sexual abuse or exploitation among the U.N.’s 16 international peacekeeping missions, for the whole of 2015, as reported by The Globe and Mail.

Peacekeeping missions are not anarchy. Rules and procedures designed to prevent chaos exist and have been implemented. These crimes are not a lapse in the execution of a perfectly designed process.

The process itself serves as a self-delusion. Its mere existence convinces us that

  • If we commit a crime against humanity, we were just following orders;
  • If we observe a crime against humanity, procedures were violated and transgressors must be brought to justice;
  • The transgressors thus having been punished, justice is served and the world is normal again.

In reality, nothing about this serves to actually prevent sexual abuse. This is just a system we’ve cooked up ex post facto to prevent an existential vacuum from opening up every time rapes occur on UN peacekeeping missions (or US wars, or sectarian uprisings, etc.); which is to say, every single time war is waged and/or peace is “kept.”

Every. Single. Time.

I’m beating a dead horse here because it’s important to understand that (once again, shout it from the rooftops) rape is an inevitable and predictable byproduct of war.

Once we understand that – I mean really understand it – that means we are no longer entitled to be surprised by allegations of sexual abuse. We are no longer entitled to believe that it was the bad behavior of a few bad apples. We are no longer entitled to believe that we had nothing to do with it or that our procedures are well-conceived and capable of addressing the problem.

As Philip Zimbardo has said, Situational psychology does not excuse evil, it democratizes it. It’s easy to believe that a U.N. peacekeeping mission in the Central African Republic, or a torture chamber in Cuba, or an insane-asylum-cum-torture-chamber in Iraq, or the total eradication of life as we know it in Syria, has nothing to do with us.

It’s easy to believe that these situations are just too complex to easily solve, and that the best we can do is vote for the right people, who will implement the right procedures, which will solve the problem.

But, no. That’s just our delusion talking. That’s just the part of our brain that doesn’t want to acknowledge that the crime is a direct result of the Situation, and the Situation is a direct result of the System that enables it. To prevent these and other atrocities, we don’t need better rules written by better philosopher-kings.

Instead, we need to dispense with the delusion and confront the existential vacuum. We need to admit to ourselves that things like this don’t magically stop happening after election season is over. We need to acknowledge that Good Guy Candidate A was still powerless to prevent the Situation, which means our belief that he could is also delusional. We need to admit to ourselves that the System enabling this terror is the delusion we carry with us, the one that tells us that the rules can work, if only they’re properly implemented.

Then, and only then, will we be ready to make a real and productive change.

Psychological Foundations for Morality: Some Problems

In my previous post, I made a rather bold proposition: maybe our moral beliefs can be based on notions of mental health. The sound-bite version of this was:

Actions that serve to augment or support the mental health of moral agents are moral, actions that serve to diminish their mental health are immoral, and actions that have no impact on mental health are morally neutral.

I then solicited feedback in hopes of finding the most glaring holes in my idea. That feedback came in the form of comments under the post, as well as private conversations with my fellow Sweet-Talkers.

In this post, I’d like to summarize that feedback in hopes of highlighting what many perceived to be the major weaknesses of the idea. I’ll defer my own responses, where applicable, to a later post.

Morality Is Not An Individual Phenomenon

One of the most interesting (to me) criticisms I received came once again from Samuel Hammond, who not only agreed with the Mises passage I quoted, but took the idea even further.

My understanding of his view (i.e. my words, not his) is that morality is basically only ever a social question. In other words, we can talk about character development or mental health from an individual perspective, and raise all sorts of interesting points. However, since morality is only an assessment of the extent to which a given action is “pro-social,” such individual considerations are not questions of morality. They might be interesting. They might be worthy of examination. But since morality itself is a social question, individual choices can’t really be called “moral” or “immoral” except in reference to how those actions relate to the society in which they’re made.

Truth be told, if this criticism is sound, then it is indeed devastating to my idea. If morality really is a purely social construct, then when one finds oneself at odds with society, I was right to have written, “Better luck next social order.”

Psychology Is Subjective

Several objections were raised that I might broadly classify as complaints that psychology, being highly subjective in a number of ways, cannot objectively solve moral problems.

One version of this objection, quite well articulated by Andrew, highlighted the fact that behaviors and mindsets have been considered mental illnesses at one point in time and completely normal and healthy at other points in time. In other words, “what is and is not considered a mental illness changes depending on the culture.” One very obvious example is homosexuality, but perhaps a less-obvious and more powerful example is autism. In psychology’s infancy, many autistic people were simply considered invalids, or mentally deranged. Today, however, we know them to be simply atypical, but fully capable of leading happy, mentally healthy lives as valued – often times superior – contributors to human society. Under the argument that mental illness classifications change radically over time, isn’t psychology poorly equipped to serve as a foundation for moral choices? Wouldn’t it have been wrong in the year 1850 to call an autistic person immoral for reasons of her mental health classification under the prevailing psychological theories of the times?

To complicate matters further, psychology isn’t just subjective across cultures, it’s also subjective within them. Another point Andrew raises is that the moral conclusions of a true sociopath will differ radically from those of a “normal” person. (See my comment to Andrew for a brief answer to the question of sociopathy.) Suppose we are to analyze the morality of a malignant narcissist. Such a person can feel no remorse and no empathy for other human beings, and consequently derives pleasure from taking advantage of and/or abusing others. If “mentally healthy” is to mean “baseline emotional equilibrium for the individual,” then such a person might feel morally entitled to engage in behaviors that the rest of us would consider obviously abhorrent. How, then, might psychology be able to answer that malignant narcissist’s moral justifications?

It would be great if the complications ended there, but in fact they get worse. The line between “mentally healthy behavior” and “mentally unhealthy behavior” isn’t even all that well-defined for any given individual. As in the Paradox of the Heap, a patient who slips into major depressive disorder from a previous state of “mere” profound grief doesn’t suddenly become majorly depressed as a matter of 1, 2, 3, go! It is a gradual process in which the extremes are easily identified, but for which there is no single defining moment that delineates between mental health and mental illness. And so it goes for other kinds of patient experiences as well. If lines can be so fuzzy for an individual in a single experience, how could we ever hope to delineate concrete moral precepts from sets of experience that have absolutely no perceptible demarcation?

Psychology Is Corruptible

Another powerful criticism against the notion of basing our morality on psychology is that psychology itself can be, and has been, corrupted. I received a lot of examples of this kind.

Andrew worried that my proposal runs the risk of “pathologizing moral disagreements,” and to buttress his case, he pointed out that Soviet dissidents were often psychologically “determined” to be unfit, and institutionalized. He also pointed out quasi-psychological theories exemplified by Liberalism is a Mental Disorder and The Reactionary Mind, both of which seek to dismiss leftist and rightist political ideologies as psychological problems rather than valid, informed, principled disagreements.

Adam, for his part, pointed to the experiences of the transgendered, namely Deirdre McCloskey and her experience being forcibly institutionalized by her own sister, a psychologist. In that case, it may genuinely have been true that McCloskey’s sister believed that Deirdre needed to be institutionalized, and yet when we take a step back and assess the matter more stoically (and within the context of a more modern culture), the reaction seems both heinous and extreme. That experience echoes the well-documented cases of homosexuals subjected to shock therapy in order to “fix” their homosexuality, and of all manner of torture our society has unfairly unleashed against innocent people whose only real “crime” is deviation from the norm.

If Psychology Is Founded On Moral Premises (Even Partially), Then My Proposition Assumes The Consequent

Paul offered what I thought was a rather novel criticism. He says, “[P]sychology… rests on certain axioms or assumptions. These axioms touch on moral topics. So it’s tricky to make psychology the singular foundation of morality.”

Restating this position in slightly different terms, if morality is logically prior to the axioms of psychology, then employing psychology as a foundation for morality is probably impossible. At the very least, any morality contained in the foundations of psychology will necessarily end up in our moral foundations, not because the evidence pointed that way, but because the way we have chosen to investigate the evidence can only ever perceive those moral precepts, and never contradict them.

This is a bigger problem than it first appears, and one that I will likely never solve. This is a foundational question about the philosophy of science, not really even unique to psychology specifically. How can a human mind seek to understand itself in objective terms? How can three-dimensional physical beings ever hope to understand the physics of an M-dimensional universe?

I must quickly and readily acknowledge my inability to respond to this question. It is much bigger than the present discussion, and indeed one of the biggest problems that exist in philosophy.


While I don’t agree with all of the criticisms above, I nonetheless hope I have done an adequate job in summarizing them in a way that makes them seem not only plausible, but also compelling. My objective here was to poke serious holes in my own idea, and where possible, I tried to elaborate on the ideas expressed by others in order to make their points even stronger than they were in the context of off-the-cuff, casual conversation.

If you, the reader, are now in serious doubt of my original proposal after having read the above, then I can consider my endeavor a success.

I can only hope that any subsequent response I provide to these concerns is equally successful.

Toward A Better Foundation For Morality

In Liberalism: In The Classical Tradition, Ludwig von Mises writes, “Everything that serves to preserve the social order is moral; everything that is detrimental to it is immoral.” For my money, this represents the clearest, most concise way to frame the notion of morality as a social institution.

Recently, fellow Sweet Talker Samuel Hammond elaborated a bit:

For my part, I think ethics lies not in formally consistent logical arguments, but the public recognition of norms. Where norms vary so does public reason. To the extent that some norms are more universal than others, it’s because discourse and other cultural evolutionary biases create normative convergence. Those convergent forces trace an outline of a more general logic behind certain norms that you can call transcendental, in the sense of being abstracted from human particularity.

I don’t know the extent to which Samuel agrees with Mises, but his statements seemed consistent with the “liberal-subjectivist” interpretation of morality Mises seemed to describe in his oeuvre. In short, our notions of right and wrong change with the times, and with the circumstances. Something once considered immoral is today anything but (think casual sex); something once considered common practice is now considered utterly heinous (think child brides). Morality seems to flex and change with the society experiencing it, at least according to Mises’ view. And even if Hammond himself looks at ethics differently, it’s not controversial to suggest that what I’ve just described is a view widely held by many people of many different political and philosophical persuasions.

Still, problems arise with this point of view. Here’s a big one, for example: Suppose you were a slave in 19th-Century America. Then, by this standard, your position in society and society’s treatment of you are perfectly ethical – even though you know quite rationally and reasonably that this cannot be the case.

For an ethicist like Mises, slavery is perfectly ethical until the social order evolves; then, we need to change the ethics to serve the new social order. If the new social order happens to enslave you, well, that’s just tough cookies. Better luck next social order; today, your beliefs about human equality are immoral.

Subjective Ethics: Alternatives To

I’ll readily admit my bias here: I don’t think a world in which social orders allow us to enslave each other until such time as a new social order comes along is a particularly ethical world. I have a bias in favor of liberating the enslaved and oppressed. I think human beings are capable of a better system of ethics. Unfortunately, many of the alternatives pose problems of their own.

One alternative, for example, is deontological decree, i.e. the word of god. The appeal here is obvious: god is perfect, surely his system of ethics is a pretty good one. But deontology poses two main problems for people like me.

The first is that, if I don’t believe in the particular god making the decree, then for me the system is identical to the “liberal-subjectivist” scenario described above. In other words, if your god happens to decree that my daughter can be made a child bride, and I object, that’s just too bad for her and me. To object is to contradict god’s will, and only evil (unethical) people do that.

The second problem with deontology is that, as a purely empirical matter, it tends to be absolutely miserable in practice, leading to ethical catastrophes like the Holocaust, the Inquisition, the Taliban, “honor” killings, caste systems, and so on. Deontology is so atrocious in practice that almost no one thinks it’s a good idea anymore, not even most theists.

Another possibility is utilitarianism, a favorite among rational types and academics. Calculating the choice that results in the most net total (subjective) happiness is an attractive proposition because it gives us a way to apply objective thinking (economic models and the like) to legitimately subjective questions. It’s a democratic approach to ethical problems. Here again, however, I spot two primary problems.

The first I shall illustrate by repeating an example I read in Steven Landsburg’s excellent book, The Big Questions. While he tells it better than I do, the example in my words goes something like this: Suppose everyone in the entire world were experiencing a dull but perpetual headache which could, for some strange reason, be stopped by killing a single innocent man. According to Landsburg and most other utilitarians, unless that man happens to be a Utility Monster, the right ethical choice is to kill one man in order to spare the rest of the world a mild headache. The moral of the story is: Don’t be the guy.

Still not convinced? Then what if I called that one guy “the guy with slightly darker skin and a comparative advantage in picking cotton?”

Aha, there it is. We’re back to the same scenario I laid out at the outset of this blog post. If your utility happens to be in the minority, too bad for you. Under utilitarianism, the enslavement or murder of any person y is theoretically justifiable, so long as we can buttress the case with an aggregate utility function U = ∑ᵢ u(xᵢ) + u(y) that comes out positive for a sufficiently large number of beneficiaries i.
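Landsburg’s headache story is, at bottom, an arithmetic claim, so it can be sketched in a few lines of code. This is a toy illustration only; the utility numbers and the `net_utility` function are my own invented assumptions, not anything from Landsburg:

```python
# Toy sketch of the utilitarian aggregation problem: many tiny gains,
# summed over enough people, can outweigh one catastrophic loss.
# All numbers here are hypothetical, chosen purely for illustration.

HEADACHE_RELIEF = 0.5     # utility gained per person cured of the headache
COST_OF_DEATH = -1000.0   # utility lost by the one man who is killed

def net_utility(population: int) -> float:
    """Aggregate utility: everyone's small gain plus the victim's loss."""
    return population * HEADACHE_RELIEF + COST_OF_DEATH

print(net_utility(1_000))      # -500.0  -> the calculus says "don't kill"
print(net_utility(1_000_000))  # 499000.0 -> the calculus says "kill him"
```

The point of the sketch is that the sign of the sum depends only on how many beneficiaries there are, which is exactly why a sufficiently large majority can always “justify” the harm to the one.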

Virtue Ethics, a favorite among Sweet Talkers – myself included – seems to work really well on an individual level, because it affords the moral agent a means by which to reason through the ethical ramifications of a particular dilemma and arrive at a strong conclusion that reflects the totality of one’s moral code.

Adam highlighted a couple of substantial problems in a previous post:

Like Aristotle, and Julia Annas and Daniel Russell, I think that you must grasp the reasons in order to become fully virtuous. Unlike them, I think a substantial part of this understanding—the largest part in fact—is tacit, rather than explicit. This does not mean they are completely inexplicable; it’s just that people vary in their ability to articulate their reasons, and it has not been my experience that eloquence and clearness of explicit thought go hand in hand with goodness. Often such people are able to talk themselves into perfectly ridiculous perspectives, or worse. The USSR and Maoist China were creations of highly educated people capable of being very articulate about their reasons, and equally capable of filling mass graves with the bodies of the innocent dead.

It is the rightness of the reasons, and the responsiveness to them, that matters. The ability to explain and defend them is absolutely a valuable quality, and especially crucial in a liberal democracy where talk and persuasion are paramount. But that does not detract from the fact that many truly good people are bad at rhetoric, and many skilled in that art are quite rotten.

So, one weakness of virtue ethics is that it means our moral worth might depend on our ability to rationalize our virtue rhetorically. Anything goes as long as we can make a good argument for it; nothing is moral unless it can be argued for. Then, another weakness is that, in a liberal democracy, we still end up putting it up to a majoritarian vote. The result is… yep, the same scenario I described at the outset of this blog post.

A New (?) Proposal For What Makes Something Moral

Is there any better standard upon which to found our systems of ethics, something that performs a little better than the ones I’ve described thus far?

I think I might have one: mental health. Actions that serve to augment or support the mental health of moral agents are moral, actions that serve to diminish their mental health are immoral, and actions that have no impact on mental health are morally neutral. Applying this evaluative criterion to moral decision-making seems to yield consistently good results.

For example, slavery is universally bad for every society in every time period, since no one could argue that being enslaved results in anything other than a mental health tragedy for the victim; furthermore, I could easily make the case that slave-ownership is corrosive to the mental health of the owner, too. The fact that slavery fails my moral test for both the victim and the perpetrator, by their own internal and subjective standards, is a major advantage, to put it lightly.

The test performs equally well for minor ethical dilemmas. For example, lying is shown to be wrong not just because “society deems it so,” nor because “honesty serves the greatest good for the greatest number,” nor even because honesty is virtuous. Rather, lying is immoral because your life will be miserable if people don’t trust you, you won’t be able to live with yourself if you consistently betray the trust of others, and everyone else will be miserable, too. As for “white lies,” the mental health test shows us that uncouthly stating whatever you’re thinking in the name of total honesty is closer to a pathology than a virtue; if you can’t be sensitive to others’ feelings while telling the truth, then you might need to improve your mental health.

This seems intuitively true to just about everyone. On some level, we all know that it is only very weird people or people in a state of dire mental health who give no consideration to the feelings of others, and merely robotically act in a prescriptively “moral” fashion.

At the same time, the mental health test arms us against the prevailing attitudes of an oppressive mob. Society at large might tell you that it’s moral to force you to marry an old man or a first cousin – but if you know that it’s wrong for your mental health, then you have the moral authority to say “No!” even in the face of unanimous peer pressure. Better a mentally healthy social outcast – maybe even better a dead dissident – than a crazed or broken slave. Furthermore, a pervasive sense of conformism is itself mentally unhealthy, and going along with the crowd when one’s sense of morals suggests otherwise is a prime example of why conformism can be a big problem. And yes, contrarianism for the sake of contrarianism is also problematic. So, the key question, the one that provides clear moral guidance both for the contrarian and the pushover is, “Is this good or bad for mental health?”

One conclusion that arises is that a compromised state of mental health, no matter what the reason, compromises our ability to make good moral decisions. This, too, seems to make intuitive sense: soldiers in a war zone have a reduced ability to make the same kinds of moral decisions that they do during peace time; someone experiencing profound grief will often neglect her loved ones; and so on. This underscores the importance of mental health, as it might be integral to our ability to be good people.


Part of my reason for writing this post is because I’m hoping for feedback. This idea is relatively new to me, and I’m still not totally sold on it. It seems robust, but I feel as though I might be missing something important – possibly a few things. This notion does tend to put me at odds with people who would ordinarily agree with me on some significant moral issues. That is sufficient to cast doubt on the idea. At the same time, the more I evaluate it myself, the stronger the idea appeals to me.

So maybe I need help spotting my errors. If you happen to notice something I’ve missed, please do leave a comment.


The Futility of Policy

If you are a fan of anything sportsing, you know the anxiety of blown officiating calls, but lately, the agony of video replay has become unbearable, a system which satisfies only a vocal party within the sportsing community affectionately known as the “Get It Right Crowd” (GIRC). These nerds will stop at nothing to establish a policy or series of policies or layers of policies to somehow assure sportsing fans that the sportsing officiating calls are regulated and correct, and corrected if they are not correct on the field.

The GIRC has at least accomplished hours and hours of interesting philoso-talk on Local Sports Talk Radio. Alas, the assumption is that officiating calls are always correctable.

In the sportsing called “The NFL” there is a boiling controversy over the application of the definition of a “football catch.” Now, to the casual observer, this may seem like a no-brainer. To wit: a receiver “catches” the “football.” That is a football catch.

But, no. The policy, or “rulebook,” as it were, has a strict definition of what a “football catch” might actually be. It is a lengthy definition, with a main paragraph and many sub-paragraphs, each of which deals in minutiae, each of which must be memorized by game officials, and which, in the rigors of a contact sportsing event, must be expertly and thoroughly applied in an instant.

Therefore, it has come to be expected that game officials apply the policy incorrectly. After all, they’re only human, and, in the NFL, they’re quasi-professional. Many millions of dollars hang on their competence, both in player salaries and in gambling money.

To address this acknowledged human propensity to err, a video replay system has been devised. It seems easy enough: if there is a question about the application of the policy, one simply projects a video of the play in question onto a large-screen, high-definition television set, and everyone has the luxury of studying the question from many different angles, at different speeds, with as many repetitions as desired, until the officiating crew is satisfied that they have, indeed, the right application of the policy to the play in question.

In practice, however, there has been very little satisfaction within the sportsing community concerning this system to regulate and correct application of policy. The officials are making worse applications in the instant, and they are increasingly unable to correct themselves with video replay. So far, recriminations have flown against incompetence in officiating as a quasi-profession, but the same phenomenon is occurring in sportsing where the officials are full-time professionals. Recriminations have also flown against the video replay system, but the only solution there seems to include outrageous complexity, which everyone, even The GIRC, recognizes as detrimental to the playing of the game.

In my opinion, which I will label “obviously,” it is the policy itself. The policy is regulating that which cannot be regulated. A “football catch” is known only until it is atomized. After it is atomized, in an effort to define its characteristics and elements, no one knows with any confidence what a “football catch” might actually be.

The solution, then, is quite clear: jettison The GIRC altogether. Remove the policy of “football catch” from the rule book. Remove video review systems. Place officials who have been examined for competency into a properly defined role as judges, not as regulators or enforcers, except where necessary. Periodically grade the judges according to their performance in the instant, not according to camera angles or variable speeds of the videos. Assess, revisit, reassess.

Good men will make good judges, and good judges will make good men. When we have reestablished this principle, then sportsing events will become enjoyable again, or, at least in Buffalo, NY, less agonizing.


The Private Life is Dead In New York State

It seems like only yesterday I went to the marketplace to pick a health insurance plan that was right for me and my family. Even then, it was tricky because New York State had some cockamamie law in place that prevented insurance companies from underselling a certain benefit level, so as not to compete against the state-subsidized “Healthy NY” program, which was designed, in effect, for pregnant teenagers in New York City and Albany.

Despite that, I remember the good ole days, when I had, I think, nine different plans to choose from. Even though I am a sole proprietor, I was able to join a group through the chamber of commerce, which spread morbidity risk, you see, lowering costs.

The giant gavel of Spring 2010 struck. Almost immediately, cost increases accelerated until, after a few short years, the plans had doubled in price. Last year, the law forced me out of my group because I am a sole proprietor. I joined the New York State co-op because it had the most affordable plan. The premiums were somewhat less than before, but the deductible was sky-high, and the care was less amenable to my needs.

In other words, instead of a suite of plans to choose from, I was forced into one. I mean, I could conceivably have chosen a different plan, but the price was, shall we say, prohibitive.

I just received a letter from the co-op announcing that the New York State Insurance Commission is forcing them to close at the end of November 2015, and that I should look for a new plan for December of 2015 before I go through the dreaded Exchange to pick a plan for 2016. Something about a $250 million debt and sick people in Upstate New York. I don’t live in Upstate New York. I live in Western New York.

What a disaster.

How did this happen? Why can’t I just go get health care and pay for it myself? Why can’t I negotiate with an insurance company for a plan that suits my family’s needs? I was doing just fine in the cold, cruel, unprotected marketplace before. What’s going on?

Why bother? Just tell me where to sign. Here, take my stuff. What do you want me to do? Just tell me, OK? I’m exhausted. I give up. Which line do I shuffle toward? Should I look you in the eyes, in make-believe fashion, or should I keep my eyes to the ground like a good little citizen?

Just to spell it out so there’s no misunderstanding: I’m mourning the state’s subsuming a large chunk of my manhood unto itself.

Exemptions, ladies and gentlemen, did not apply to the individual market, for obvious political reasons. How much longer are those exemptions from which you currently benefit going to hold back the tides of the marketplace?

No worries, though, my friends. Pretty soon, the metric for health care will be reduced to one thing: not customer satisfaction, not affordability, not availability, not quality, but efficiency.

And it will be efficient.


Scouting With Rafe

Hunting season commences on October 1 this year, for bowhunters, so Rafe and I went out one last time this weekend to check our stands for integrity. One in three hunters will experience a fall in their careers, so we are extra-diligent toward the end of remaining in the 66.6%. Thank God we’re in a Rational Age; otherwise, being in that number would surely dampen the celebration when we go marching in.

You have to understand: when I call Rafe “my friend,” I mean it only in the sense that he has a tremendous deer population on his property, and I’m kind to him in many ways so that he lets me hunt on his property. He has many small mannerisms and annoyances which prevent true friendship from developing.

For example: we were sitting together in our double buddy, which is set over the premier deer-path crossroads, as it were, where the deer are coming in from all points feeding, going toward easy water, then to all points bedding. We were pointing out to each other some of the new features another year has brought to the immediate surroundings, waiting to actually see some deer, when he suddenly asked, “Why do we drive on the right side of the road?”

“I dunno,” I said. “Convention? In Australia they drive on the left side of the road.”

“Do they?” he asked. “Is that why water spins the other way ’round when it goes down the toilet?”

“No,” I said. “It spins the other way ’round when it goes down the toilet because the moon goes around the earth the other way ’round in the Southern Hemisphere. Like a mirror image of the Northern Hemisphere.”

“No, seriously,” he said. “Isn’t there some sort of logic to the convention of driving on one side or t’other?”

“Even if there once had been, the logic is meaningless now,” I replied.

“Ah!” he exclaimed, which ruined any hope of seeing deer before sunset, and possibly changed their patterns forever. “This is what I was talking about last time, how everything is relative.”

“But it has to be relative to something.” I caught myself. I saw where he was going, but it was too late.

“So by convention, by an organic, common agreement, we drive on the right side of the road just because it works for us,” he said.


“No!” he said, again very loudly, annoying me even further. I reminded myself that I was sitting above his property. “When convention was codified, no one argued for the left side of the road, to be in communion with England or Australia? No, someone argued. Some percentage of the population strongly desired to drive on the left side of the road. In fact, I hazard to guess that there will always be a percentage of people who want to drive on the left side of the road, and they are making the same argument as you: the convention for right-side driving is founded upon baseless, logic-free, utilitarian convention.”

Sweden arbitrarily codifies the Continental convention.

I stared into a distant thicket, hoping against vain hope to catch sight of movement. Alas, he continued:

“What percentage of people do you think it is?”

“I dunno, three?”

“Okay, let’s say five percent, for round numbers,” he said. “Five percent of the population is completely opposed to right-side only driving, constantly lobbying the rest of the population to allow left-side driving in addition to right-side driving.”

“Normally, the ninety-five percent would ignore the five.”

“Moreover, when it came right down to it, only two percent of the population said they would actually drive on the left side of the road, were it conventional,” he said.

“Okay, so the ninety-eight percent would ignore the two. Or, in realistic terms, a vast majority would ignore the two percent, going on with their business.”

“Ah,” said Rafe. “But what if the two appeal to justice?”


“It’s all relative,” he said.

“But it’s all relative to something,” I argued.

“If that’s true–and it’s not–who are you to say what that something is?” he riposted. I sighed.

“Why have convention on the roads at all?” I said.

“Indeed,” said Rafe. “Many countries do not. Aren’t you the one who brags about going down to Nicaragua all the time? How’s the driving down there?”

“Terrifying,” I said.

“Says you. One man’s terror is another man’s adventure. After all, big shot, you can step in the river only–”

“Don’t!” I shouted. “Don’t you dare.” He laughed.

“What was that?” he whispered, pointing into the thicket. I stared. He stared. Minutes passed. The sun drooped toward the horizon. Presently, I felt something crawling along my thigh toward my crotch.

I swiped at it. It was Rafe’s hand!

“Hey!” I cried out. “What the–?”

He laughed, and we stared into the distance again while the light waned. After a minute he did it again, this time with his hand nearing the danger zone.

“Listen, Raphael,” I said. “You do that again, and I swear, I will throw you off this double buddy. I promise. To the ground. I’ll tell the DEC you fell.” He laughed.

After another minute, he did it again, right on my man-parts, so I jammed my elbow into his ribs.

“Hey, cut it out,” he said. “I was just funnin’.”

“Just funnin’? With my man-parts? Who funs with another guy’s man-parts?”

“Seriously, Dave,” he said. “Teasin’. Just joshin’, yankin’ yer chain.”

“TWSS,” I said. We laughed and climbed out of the double buddy. I can’t get back out there until Friday, October 2.

The Ontology of Economics

Earlier this year I opined on Twitter that economists are essentially philosophers with full employment.

Jokes aside, my glaring omission, of course, was ontology. Ontology is the branch of philosophy concerned with the nature and categories of being and existence. In the case of economics, the core ontological preoccupation is with the nature and existence of market equilibria and their constituent parts: supply and demand, institutions, representative agents, social planners, and so on. Some focus on ontologies of being, like a static equilibrium, while Hayek and Buchanan famously had ontologies of becoming that emphasized the importance of analyzing economic processes. Others debate the gestalt between whole markets and individual exchanges — supply and demand curves versus a game-theoretic model of bargaining, say. Still others question the reality of economic “absences,” like productivity measured as a statistical residual, or the output gap between real and potential GDP.

Economic ontology therefore touches on every aspect of economic thinking and analysis, and as such the biggest rifts in economics often come down to mutually incompatible ontological commitments. For instance, I once read a polemic against Keynesian economics that proclaimed matter-of-factly that the macroeconomy “doesn’t exist,” that it’s nothing more than a metaphor for a complex aggregation of individual interactions. Well — no duh. Individuals are aggregations of complex biochemical interactions as well, but that doesn’t make them any less real. It’s like debating the point at which a collection of individual grains becomes a heap: there simply is no fact of the matter.

As in the example above, it’s important to be able to discern the difference between a category mistake (like attributing motives to GDP, or committing the fallacy of composition) and a difference in construal (like acknowledging that aggregates exist in the first place). More often than not, the existential quantifier (or dubbing something “real”) is less about claiming that an object is genuinely more or less fundamental and more about raising or lowering that object’s social status. This may be incredibly useful in the context of rhetoric and persuasion, but it is usually safer to embrace a plurality of ontologies as equally valid based on use and context.

That is, economists should be ontological pluralists. And self-consciously so.