Reconciling Pluralism and Liberationism in Education

As part of my writing project over a short holiday from work, Adam asked me to discuss a piece he wrote back in November on this site.
It’s helpful to return to Tamara’s piece where I think she best describes the tension Adam struggles with:

However, if part of a university’s responsibility is to reflect society as it should be, to not just boast diversity, but actively work to promote inclusivity, then it’s reasonable that students would expect their university to take measures to ensure all students feel welcome. This sentiment seems to be at the core of calls for trigger warnings, explicit racial quotas in admissions and employment, and other efforts to fulfill the university’s role as a reflection of societal progress. Following this line of reasoning, college administrations have a duty to create a safe space for students, which either equals or exceeds its duty to cultivate an intellectual space. Looking at it this way, it’s less difficult to see why some college campuses seem to be coming apart at the seams right now.

If I were to shorten Adam’s main points, they would be as follows:

When justified on the basis of discovery, there’s a natural link with the notion of pluralism in ideas as a tool for finding the best ideas. When justified on the basis of freedom, however, we get closer to the goal of greatest diversity in ideas and people as an end in itself. Pluralism in ideas, people, and governments for pluralism’s sake.

First, pluralism can be interpreted as similar to federalism, inasmuch as pluralism values demographic diversity in itself just as federalism values diversity of political systems in itself.

The tension that Tamara identifies in her piece is, I think, between federalists in the domain of ideas on the one hand, and liberationists in the domain of diversity on the other. For the latter, diversity in demography is a tool for liberating minorities from the chains of a white patriarchal normative system— but also for liberating whites of each sex from that very system, by exposing them to groups who have been marginalized by it.

Second, this interpretation of pluralism, which Adam finds particularly affecting, stands in contrast to other popular rationalizations of demographic diversity, namely a politics of liberation: diversity should be enforced as itself a means of liberation for those who are oppressed.

Under this rationalization, the activity of introducing demographic diversity is itself illiberal toward pluralism of ideas, because it privileges a singular set of intellectual beliefs. Indeed, third, education itself, when conceptualized as a liberating activity:

[Precludes pluralism in education b]y imposing a single form of education on all, …forcing any tensions over competing visions into the scale of the nation, rather than the locality.

Ultimately, the problem lies with authority, without which both the federalist/pluralist position and liberationism are doomed. After all:

Pluralism of all stripes is often anti-authority, or at least an attempt to minimize the problem of authority. But the problem of authority is inescapable; even more so for those who take seriously the value of diversity. A serious understanding of such value must be connected to a serious understanding of its limits.

I would argue that diversity in the classroom is first and foremost a necessary condition for learning. In the ideal environment, students are provided with varying interpretations of the information they are being taught. More than that, students are asked to critically analyze the interpretations before them, and, eventually, to determine which is the most accurate or compelling. Fundamentally, this is what we expect learning (particularly in a higher education setting) to be. Each of these steps, providing multiple interpretations of information and critiquing these interpretations, is greatly dependent on diversity of both faculty and students.

What worth is inquiry in a classroom full of like-minded individuals? Without multiple views expressed, questions lose their power and purpose, and there can be no learning. We fail because students arrive with one set of beliefs and leave with the same beliefs: unchallenged, not further developed, and given institutional approval without institutional critique.

Perhaps this argument is unconvincing. Why is diversity in demographics a requirement for diversity in ideas? Consider one of my favorite blog posts ever written, which succinctly describes privilege in the language of mathematics, the same language I had always used to understand the meaning of privilege. In discussing, say, income inequality, a homogeneous classroom filled with individuals who were raised with limitless opportunities could at best theoretically conceptualize alternative views to the argument that “equal opportunity for success exists in America.” Their entire lives were surrounded with confirmatory evidence, in which effort and caring were the only obvious prerequisites to success in high school academics and non-academic barriers to college attendance were non-existent. What occurs in a classroom like this is, at best, a sort of slum tourism without any real meaning.

We don’t need liberationism to justify diversity in education; we need diversity to ensure education.

I’ll get back to liberationism in a moment, but first it is important to recognize that this justification of diversity, that it ensures education, does not require a pluralism or federalism that values diversity in itself. Diversity is not a value based on freedom of ideas, nor is it a value based on a mechanistic argument that diversity is more likely to find the best ideas. Instead, diversity is of value because education as a process requires challenge and critique, whether they serve to change minds or deepen conviction (hopefully through further development). Whereas the pluralism/federalism Adam describes values diversity almost endlessly, the value of diversity in producing education is quite bounded. It is completely appropriate to discard views that are unable to pose a serious intellectual threat. There is little value in considering, say, conspiracy theories when discussing the Apollo missions.

That’s a loaded example. But there is also little value in teaching detailed theories that depend upon the existence of luminiferous aether, regardless of the fact that they represented the cutting edge at one time.

Diversity of demographics, or even of ideas, within institutional education can be justified on a far narrower set of beliefs that does not entail the anti-authoritarian conflicts of a broader pluralism.

So diversity is justified on a weak form of Adam’s pluralism/federalism case: one that does not rise to valuing diversity in itself, but instead values the diversity that produces the meaningful clash of conflicting viewpoints necessary for learning. These views must be both experiential and intellectual for rich learning. In this weaker conceptualization, anti-authoritarianism is far less central. In fact, we have a highly individualized conception of knowledge construction, one that depends upon individual analysis and an individual determining which truth is the most compelling.

Does this absolve the university setting from actively seeking diversity as part of a liberationism framework? Actually, not at all. We can satisfy the pluralist needs of the classroom without liberationism, but we can also satisfy these needs through liberation. Rather than understanding liberation as a central embedded component of pedagogy or the learning process, we should understand liberation as the result of being educated.

Without debating signaling versus human and/or social capital accumulation, it is clear that formalized education provides positional benefits in the labor market and in “social markets” such as forming partnerships and nuclear families. Even when education does not directly empower, it certainly can serve as an obstacle barring a whole class of society from full participation. If we believe there is a moral imperative similar to, say, equality of opportunity, it follows that universities have a strong obligation to admit diverse student bodies and to build a diverse teaching force, since education is a mediator of opportunity.

I don’t think this implies, by the way, that universities are compelled to generate “safe spaces” beyond whatever is needed for learning to happen. The process of education as it exists generates opportunity through complex mechanisms that are not entirely separable. There could be aspects of confronting a culture of power and authority that are critical to education’s “opportunity generation.” Importantly, if we accept the argument for diversity as a process required for education to occur, we have to be vigilant that we do not discard the very conflict that allows learning to happen.

The challenge facing education is not in accepting the notion that not all ideas are created equal or equally worthy. I have conceded as much in this very post, and definitions of academic freedom already largely acknowledge this fact as a prime distinction between academic freedom and freedom of speech. Instead, the challenge is the academic left’s demand that its ideas have already met a burden of authority high enough to justify eliminating certain viewpoints from consideration in the classroom.

The conflict is not between liberationism and federalism. It is not pluralism run amok. It is, in fact, a crisis of authority: not over whether authority exists, but over whether absolute authority belongs to the current strand of the academic left. They certainly seem to think so, but far from everyone is along for the ride.


† This may not satisfy Adam, who I think will read this conceptualization as falling prey to the same problems of authority. I disagree. The very insistence that views be critiqued and alternatives presented is itself a statement of authority about how we build knowledge and learn. We don’t have to remove individual involvement in identifying legitimate authority and authoritative claims in order to assert authority. That would be too strong a form of this challenge, much as his pluralism/federalism is too strong a conception of the value of diversity of thought. Some ideas *are* worth more than others, so much so that we can discard some from even being presented. That does not mean that authority implies a lack of conflict in the education process between people, ideas, experiences, etc. The existence of more than one conflicting authoritative claim practically defines areas of worthwhile study.

Author’s Note: It is rare that I can go back directly to a piece of writing I did for a weekly assignment for a college course from my sophomore year (almost a decade ago!) and find almost everything I want to say. So thank you to UC170 and the “weekly response” I wrote on Diversity on October 16, 2006.

When No Argument Can Save You

This week I had the pleasure of listening to my friend Noah appear on EconTalk to discuss the status of economics as science with my former professor in that discipline, Russ Roberts.

I would characterize neither of them as epistemologists or philosophers of science, but rather as perennial practitioners. The chief difference between them, other than age and Noah’s ability to draw on a knowledge of physics as well as economics, is one of faith.

Noah himself brought this up: all science requires a leap of faith somewhere, as he put it. The example he used was Galileo’s experiment demonstrating that two balls of different mass will fall at the same rate. There’s only so far you can go to prove that this represents a universal law, or even a very general one. What if it only applies in our part of the universe? What if it only applies when there is a human observer?
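To make Noah’s example concrete (my own gloss, using the later Newtonian idealization and neglecting air resistance), the mass simply cancels out of the prediction:

\[
m a = F_{\text{grav}} = m g \;\Rightarrow\; a = g, \qquad h = \tfrac{1}{2} g t^{2} \;\Rightarrow\; t = \sqrt{\frac{2h}{g}},
\]

so the time to fall a height $h$ depends on $g$ but not on the mass of the ball. The leap of faith is not in this algebra; it is in trusting that the same relation holds in times, places, and conditions where no one has ever dropped anything.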

Noah isn’t saying this makes us helpless or that we have to willfully ignore such thought experiments—nor should we.

The arc that Russ Roberts has traveled on this subject, from the time I took his class during the 2008 crash to the present, can be characterized as a loss of faith rather than the embrace of a given intellectual framework.

Russ has become unwilling to make that leap of faith when it comes to economic methods and arguments. But more importantly, he has lost faith in the sense of trust—trust in his fellow economists. Most importantly of all, he has lost faith in his own judgment.

The questions that he seems to come to again and again—why economists can’t agree on the effect of the 2009 stimulus, whether any study has ever completely won over people whose perspective was at odds with its conclusions—are attempts to establish, or prove once and for all the absence of, the credibility of economics as a field.

I’m not sure there’s an answer that could satisfy him. There’s a certain self-fulfillingness to losing trust in this way, much as widespread generosity in granting trust seems to perpetuate itself. How such trust can get established in the first place is a mystery, one that I’m certainly not going to get to the bottom of in a blog post.

How We Think: a Simple Model

[Image: Drilled Mind by Roman Klco]

Forgive the clumsy workings of a mind untutored in philosophy of mind. But it has always been my way to read, and think, and argue, and then try to pull the threads together and see what emerges. What I lack in training I will try to make up for with brevity.

Crucial for understanding anything is understanding the context. The context is what is meant when people speak of the whole truth as opposed to partial truths. However, context—the whole truth—is boundless, and so whenever we attempt to grasp it, we’re always projecting and simplifying.

A partial truth might be the observation of something being dropped and falling; a corresponding projection of the whole truth would be classical physics.

We can make sense of the centuries-long tug-of-war between Enlightenment rationalists and empiricists from this perspective. The rationalist argument boiled down to the idea that we can never make sense of observed truths, which are always partial, without working out the whole truth abstractly beforehand. Empiricists basically believed the opposite: you add up a lot of partial observations until you get the whole truth.

Philosophical hermeneutics for the past century has taken another path, saying, in essence, you need both at once.

As Jonathan Haidt points out, people don’t really have a rational model of morality in mind when they make in-the-moment ethical judgments. His observation could be extended to nearly any judgment; more of the whole gets left out than is brought in explicitly. But there is an important two-way relationship between part and whole here. So how does it work?

Crucially, we externalize most of the tools for our thinking; Andy Clark calls this “extended cognition.” Joseph Heath and Joel Anderson point out that the most important way individual knowledge is externalized is through the social environment; that is, through other people. In large part this is accomplished by trusting the people in our lives and the people we perceive to be authorities on a subject, and having faith that the apparent contradictions or limitations to what is known can either be worked out, or aren’t very important.

But neither the trust nor the faith is blind. When partial truths, or assertions by people we trust, stand in conflict with the whole truth as we understand it, or as other people we trust have explained it, they are subject to reevaluation.

Essentially, what the typical person brings to the fore in confronting the partial truths of daily life is not an actual model of the whole, but prejudices. Hans-Georg Gadamer speaks of prejudices in just this sense, as “pre-judgments,” similar to how a judge will make provisional judgments that influence the decisions he makes throughout the trial, before actually rendering a final verdict.

These prejudices aren’t simply mindless assumptions or “givens”, nor is the process by which we come to them. They always point back towards some skeleton projection of a whole truth, which can be fleshed out and scrutinized. Haidt leans hard on the fact that people will cling to their prejudices even when unable to come up with any reasons to justify them. But the fact that I might not be able to remember how to solve a quadratic equation in the moment does not invalidate mathematical reasoning as the correct way to solve such an equation. Nor does it mean I should abandon my belief in mathematical reasoning!

Context matters, and part of specifically human context is the knowledge that we have largely externalized. Haidt and his researchers were presumably complete strangers to their subjects, and the subjects in question knew that they were inside a psychology experiment. Would the subjects have treated the matter differently if they were in a setting they trusted to be private, discussing the questions with a priest, a teacher, or another trusted moral authority who challenged their prejudices and explained the reasoning for doing so?

The very best projections of the whole truth that humanity is capable of mustering can get incredibly complex, and take years of training to really understand. Physics, chemistry, advanced mathematics, computer science, but also moral systems, offer up huge, interconnected sets of theories. Connecting these systems to people’s prejudices is a matter of persuasion. Non-specialists must be persuaded that specialists have authority on a given subject, and specialists must also persuade one another on any given point of contention.

Persuasion isn’t a matter of cold, logical syllogisms, but of rhetoric. But that doesn’t mean it is irrational. It appeals to thought processes as they actually occur in people. Among specialists, of course, this will often involve a great deal of technical details. But technical details alone cannot tell the whole story of what we ought to believe or why we ought to change our point of view.

In every case, those attempting to persuade will draw on narratives and metaphors and examples from life that make the subject, as well as what is at stake, concrete for the people they want to persuade. They will employ a situated reason. But reason, nonetheless.

 

The Space Peoples

In January of this year now coming to an end, I wrote about Rockets and how low-cost access to space will be a game changer. At that time there were no rockets that could fly to space and return safely to Earth, ready to fly again. Today, twelve months later, there are two.

The Blue Origin New Shepard and the SpaceX Falcon 9 are not really in the same league. The Falcon 9 is twice the size, ten times as powerful, flies in a parabolic arc flight path, and (most importantly) is capable of boosting its second stage to orbital velocities. The New Shepard cannot do any of that, and basically just flies straight up like a bottle rocket until it just barely kisses space, and then falls back down. But let’s just put that aside for a moment and think about the fact that two new entrants to the rocket business have in a decade finally done what half a century of Boeing and Lockheed Martin flying spec missions for NASA (and equivalent arrangements in Europe, Japan, and China) have not: advanced the art of what is possible.

To recap from my previous posts, reusability will reduce the cost to reach orbit on a per-unit-weight basis by a factor of 100. Rocket fuel is cheap, accounting for less than $200,000 of the current $60 million price tag, so amortizing the hardware and R&D over many flights will bring prices down to a small multiple of that. This will in turn expand the market of space customers beyond the current crowd of military, NASA, and cable TV people. Use cases will expand, including mining asteroids for fuel and valuable resources, private space stations, and cheap satellite internet constellations. The size and frequency of our space missions will increase. NASA could end up with twenty times the number of scientific probes in action as it has today, on the same budget. Companies like Tethers Unlimited could self-fund (without waiting for NASA grants) their own in-space experiments on things like SpiderFab, which would in turn lead more quickly to extremely large space structures for science and communications. And of course eventually it could lead to people living on Mars.
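To put rough numbers on that claim, here is a minimal back-of-envelope sketch. The roughly $60 million launch price and sub-$200,000 fuel figure are the ones cited above; the assumption that everything other than propellant amortizes cleanly across reuses (no refurbishment or per-flight operations cost) is my own simplification, not anyone’s actual cost model.

```python
# Back-of-envelope reusability arithmetic (illustrative assumptions only).
# Figures from the text above: ~$60M per launch today, < $200k of that is propellant.
# Simplification: treat all non-fuel cost as amortizable across N flights.

LAUNCH_PRICE = 60e6   # current price per expendable launch, USD
FUEL_COST = 0.2e6     # propellant cost per flight, USD
AMORTIZABLE = LAUNCH_PRICE - FUEL_COST

for flights in (1, 10, 100):
    per_flight = FUEL_COST + AMORTIZABLE / flights
    print(f"{flights:3d} flights per vehicle -> ~${per_flight / 1e6:.1f}M per launch")
```

Under those generous assumptions, one hundred flights per vehicle brings the per-launch cost to well under $1 million, which is where the factor-of-100 intuition comes from; real refurbishment and operations costs will eat into that.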

This may seem like a bit of a stretch at this time, but I feel we are at an important inflection point in human history. I am reminded of the civilizations that flourished around the Eastern Mediterranean during the Bronze Age, and then collapsed; of the Roman Empire that flourished and then collapsed; of the Islamic Golden Age that came and went; and of the rise of the British Empire and its continued global dominance through its successor, NATO.

In each of the above cases, there was a time of bright flourishing that was ushered in by an expansion of trade and resources. The larger the network of trade, the more wealth was produced, even absent significant technological change: basic task specialization and comparative advantage at work. But technological change followed too, from the greater number of people who were having ideas and discussing those ideas with like-minded people. Well, space doesn’t have any people to discuss ideas with (yet), but it does have a great deal of resources. The Moon can be mined, and manufacturing operated there, without concern for Earth’s environment. Solar power can be captured in space with greater efficiency, and in far greater amounts, than on Earth. (It may seem far-fetched at this time, but Google and Facebook moving their server farms to orbit could solve a number of problems in one go.) Further in the future we could start building destinations for colonization, taking the population burden off Earth. And so forth.

When England expanded its resource base ten-fold by colonizing North America, its wealth and influence were permanently increased. What happens when all of Earth taps into the ten-thousand-fold greater resources of the Solar System? It’s of course impossible to know for sure, but I feel confident in predicting that it will be a genuinely good thing for both the entrepreneurs seeking opportunities and the regular folks who trade with them. Let’s hope this golden age doesn’t come to an end too soon.

Fantasy, Myth, Ritual

The Roman world, when the early Christians exploded onto the scene, was a world awash in myth and ritual. When Christians came proclaiming that they had one additional god to which they prayed, it was natural for the Romans at the time to ask exactly which one they had in mind. Jupiter perhaps, or Apollo, or one of the mystery cults like Dionysus, or some other God, like the God of the Persians or of the Egyptians. The Romans were entirely used to new myths coming along, or new sub-cults seeking to elevate one of the deities above the rest. The pagans dealt with this tension, these competing cosmologies, by separating truth from religion, from rituals. The philosophers, lovers of truth, wanted to encounter the divine through reason, and therefore rejected the myths, even as they demanded duty to the God their reason identified.

The paradox of ancient philosophy is that intellectually it destroyed myth, but also tried to legitimize it as religion, as ritual. If you could pay homage to the cult of the emperor it didn’t really matter whether you actually believed that the emperor was divine, and fulfilling your duty to Neptune demanded sacrifices, not anything so prosaic as intellectual assent to his creation myth. This uneasy balance was destroyed by the early Church, who insisted that the God they worshiped was not a God of myth, but Being itself, the God that the philosophers had begun to apprehend. That their God demanded, not sacrifice, but belief. That true religion was not empty ritual, not merely a useful custom that must be performed for the sake of edification, but was actually true. As Tertullian formulated it, “Christ called himself Truth, not Custom.”

This reconciliation between truth and myth was always fraught, as a faith that professed itself as Truth needed to carefully ensure that reason was never entirely unbounded by myth, and once myth could no longer constrain truth, to change the myth to fit the new truth. But there are only so many times that the myth can change, and only so quickly, only so many parts that can be hived off before the whole myth is called into question, and the ritual stands empty once more. A culture that was not prepared to entirely sacrifice the idea that myth and Truth should be at cross purposes created a new myth, which subsumed the old and imbued the traditional rituals with new meaning. The old God of the philosophers was replaced with a new telos: Reason, Liberty, Equality, Progress. But the new gods were just as austere, only slightly more atheistic than the pagans had found the Christian God. One can give one’s assent to mystical Equality, but that mystical union that accompanies the dissolution of self and submersion into a larger whole must come from rituals that involve other personalities, and the God of Equality is not personified. Ritual union, if it is to occur, must be union with a community, not union with an ideal.

The new gods require new myths, but as the new gods are social gods, the new myths no longer need to be true. The new myths are created rather self-consciously to serve as myth, initially by Christians seeking to preserve what wisdom they could by implanting some of the old myths into the new, what JRR Tolkien called Mythopoeia, but the myths that survive are ones that serve the appropriate social function of bringing together a community to engage the ritual and propagate the myth. This can be political, like the myth of class solidarity or natural rights, or the apotheosis of founding fathers into paragons of virtue, like George Washington and the cherry tree, but just as often is not explicitly so. The most popular myths and rituals are cribbed from Germanic pagans: Christmas trees, Easter Eggs, the Easter Bunny, Santa Claus. Liberated from the need to be true, and from the need to appear true, the new myths are instead selected for their role in ritual, until the ritual and myth become entirely severed from any tether to any teleological end, a self-sustaining ritual cosmology, protected from collapse by the complete lack of any expectation of coherence.

One of the functions of ritual is to bridge the gap between the fact of continued inequality in an egalitarian age and the yearning for unity, of the kind which can only be found among equals. It is in this context that the new myths thrive, by creating worlds so alien to our experience that they can be encountered without our baggage. The triumph of the new myths is to make the characters so archetypal, the story so uncomplicated, that a vast swath of people with widely varying backgrounds and experience can immediately identify and lose themselves inside the story unreflectively, providing a common experience that can be shared across lines of gender, class, occupation, generation, or race otherwise unavailable. A world where lawyers, doctors, and engineers cosplay cheek by jowl with installers and plumbers, retail workers and draughtswomen, lost together in a fantasy of power and triumph which transcends them.

Scott Weiland, RIP

I don’t have to accuse you of anything. You are already accusing yourselves all the time.

Scott Weiland (RIP December 3) was one of my personal favorites, coming up there in the heady days of grunge bathos, and I chose him over Blind Melon’s Shannon Hoon (RIP 1995) and Pearl Jam’s Eddie Vedder (still alive). Of course, our generation’s Jim Morrison had already led the way, Nirvana’s Kurt Cobain (RIP 1994).

Weiland’s wife, in her distress (I hope), wrote a note, which reads, in part, “Our hope for Scott has died…”

How can hope die? How can hope for a person die? Is this a possibility? From my perspective, there is always hope for Scott, even if there is very little hope, and even if he is already dead. Who knows what he met when his heart stopped?

His wife continues, “We are angry and sad about this loss, but we are most devastated that he chose to give up.”

That’s his wife.

Oh, wait. Ex-wife. Why is she commenting? I don’t know, but that’s harsh, bro, really harsh, the pinnacle of wifely arrogance and judgmentalism, that a drug addict could choose anything. She then instructs those who loved him, for whatever reason, to take a kid to a ballgame rather than memorialize their love for him.

It is curious to me, and I see it as a similarity, that Kurt Cobain would write a song, essentially declaring “Married = Buried,” then choose to use the muzzle end of a shotgun to eat a shotgun shell, whereupon his wife garnered some notoriety with her band, Hole.

Like I said, I don’t have to accuse you of anything. You already accuse yourselves all the time.

Forgive Me My Humean Trespasses

The gap between 25 and 30 is not very big in absolute terms, but it can certainly seem that way.

That is how I felt as I reread a post on morality that I wrote five years ago. Back then, I was pretty devoted to David Hume’s philosophy, and had been for some time. Part of it was due to many conversations with my father, for whom Hume loomed large as well. Part of it was that I’d actually read Hume, and not much else where philosophy was concerned. And in spite of that lack of familiarity with the alternatives (a shortcoming not shared by my father), I had an utter certainty in it.

This in spite of the fact that the very philosophy I was certain of had, as a cornerstone, the idea that reasoning was powerless to discover or demonstrate much of anything. This cornerstone eventually led me to conclude that there was no such thing as reason at all. Looking back, I want to ask that 25-year-old—how can you be so certain, when you are so unfamiliar with the arguments to the contrary and don’t even believe that certainty is ever warranted?

Martin Cothran made basically this argument when he and his son Thomas engaged my father and me in discussion back at that time.

Against Unreason

One aspect of my framework of the time was the notion that utter inconsistency didn’t matter. In talking about the relationship between God and morality, I argued that there was no logical connection between the two, and also that logical connections don’t matter.

Responding to my piece, Martin Cothran made a move that has lately become familiar to me:

I’m not sure I understand exactly what he is saying here in regard to the role of reason in moral discourse. If logic is “irrelevant to whether or not I believe in either God or morality,” then why should anyone find his point that there is “no logical inconsistency in the fact that I don’t believe in a divinity but do believe in morality” persuasive? If logic is not operative in the discussion of the relation between God and morality, then why is the absence of logical inconsistency in a position on this relation commendable?

Though I had not read a word of Foucault at the time, I think this position was more in his court than Hume’s. I held onto this point about inconsistency for a very long time, but after our own Drew Summit began sending me works in Aristotelian and Thomist metaphysics, I began to see the absurdity of it. The last straw for me was Elliot Michael Milco’s fantastic thesis, “Michel Foucault and Thomas Aquinas in Dialogue on the Basis and Consummation of Intelligibility“.

As Martin pointed out, it’s rather hard to make any assertion at all with teeth if you don’t care about consistency. My argument about inconsistency undermined my argument about the logical connection between God and morality—if consistency doesn’t matter, then there could both be no logical connection between God and morality, and be a completely crucial logical relationship between God and morality—simultaneously. Productive analysis would prove impossible.
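One standard way to make that point vivid is the principle of explosion: once a contradiction is tolerated, any proposition whatsoever follows, so nothing can be ruled in or out. A minimal sketch:

\[
\begin{aligned}
&1.\;\; P \land \neg P && \text{(the tolerated inconsistency)}\\
&2.\;\; P && \text{(from 1)}\\
&3.\;\; P \lor Q && \text{(from 2, disjunction introduction, for any } Q\text{)}\\
&4.\;\; \neg P && \text{(from 1)}\\
&5.\;\; Q && \text{(from 3 and 4, disjunctive syllogism)}
\end{aligned}
\]

If both “there is no logical connection between God and morality” and its denial are allowed to stand, then by this route anything at all can be derived, which is just another way of saying that productive analysis becomes impossible.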

In discussing this point with my father, he argued that it doesn’t apply to human relations. His example was that it’s possible to both love and hate someone. But this isn’t a true inconsistency—no one would argue that love and hate are mutually exclusive. And if you do argue that, then you are committed to defending the idea that you can’t both love and hate someone at the same time, by definition.

Moreover, I very strongly believed that I had made a case for my Humean framework. I made what Joseph Heath identified as a typical non-cognitivist mistake; I used general-skeptic arguments and thought I only undermined my opponents’ position.

Aporia, the beginning of so many philosophical investigations, is precisely the discovery of seeming inconsistencies. Not everyone needs to be bothered with such things, to be sure. But the advance of our knowledge requires us to attempt to uncover whether inconsistencies are merely superficial, or whether some deeper revision in our framework is necessary.

Man Cannot Flourish On Moral Sentiments Alone

The most particularly Humean aspect of my framework at the time was what I later learned was called non-cognitivism. It’s the idea that we don’t think something is wrong because of judgment or beliefs, but because of feelings. As my father put it in his contribution to the discussion five years ago:

We don’t reason our way to condemnation of child abuse: we grow angry at the sight of it. Later, we may devise rational arguments to persuade others – sometimes ourselves – of the rightness of our opinions and actions. But I feel the wrongness of child abuse with a delicacy that, say, an ancient Spartan would have lacked.

As the Sparta example was intended to indicate, there is a strong cultural element involved in the shaping of our moral sentiments. In this way, morality is reduced to unthinking feeling combined with unreflective cultural practice (rather than theory). They combine to form a non-cognitive cocktail.

Among the evidence that he brings to the fore is Jonathan Haidt’s “The Emotional Dog and Its Rational Tail,” in which Haidt points out that people jump to moral conclusions first and rationalize after the fact.

In response, Thomas Cothran argues that we mixed up moral psychology with moral philosophy.

This opposition is founded upon a category mistake. The question of why people believe in certain moral principles belongs to the discipline of moral psychology; the question whether those moral principles are true belongs to moral philosophy. Obviously people can believe in true things for false reasons: just because one’s belief that Caesar existed is predicated upon the belief that the television show “Rome” is a documentary does not make the historical arguments any less valid. Further, people often believe in true things on the basis of beliefs that don’t have much to do with the truth or falsity of the subject: people might believe in relativity theory because Stephen Hawking believes in it, but the fact that Hawking has an opinion on the subject doesn’t bear on the truth or falsity of relativity theory. The process of evaluating the truths of beliefs is distinct from the process of evaluating how different people come to their beliefs.

From our very different projections of the whole truth of morality, Jonathan Haidt’s study has commensurately different implications. For my father and me, it seemed to confirm the Humean processes at work. For Thomas, it seemed like an ad hominem attack on moral realism, because it attempted to discredit it by means of the psychology of particular people.

I have come around to Thomas’s point of view. The idea that Haidt’s study on moral reasoning discredits moral realism now seems to me akin to arguing that Kahneman’s studies showing people are bad at statistical reasoning are evidence against the validity of statistics.

Moreover, I have come to believe that all of non-cognitivism’s core premises are simply false. First, desires are not non-cognitive. As our own Sam Hammond put it:

It’s tempting to think of desires as following a linear chain down to some base foundational affect, implanted somewhat arbitrarily by evolution. But this is an elementary error.

While true that evolution has equipped us with certain somatic states (like hunger pangs), desire (like “I desire to eat”) contains propositional content. Like beliefs, desires are part of a holistic web that we draw from in the discursive game of giving or asking for reasons. In turn, desires like beliefs are capable of being updated based on rational argumentation and the demand for coherence.

This is actually quite close to what Aristotle believed.

Let’s take the child abuse example my father provided. We don’t just see a thing and unthinkingly get angry about it. In order to get angry, we have to have grasped what the situation is. The anger comes from the belief that child abuse is wrong and that it is occurring. This is crucial—in many cases of child abuse (or other wrongs), the thing is allowed to keep going on precisely because we don’t want to acknowledge that a loved one or an important person in the community is capable of doing such a thing. People go out of their way not to notice or acknowledge something going on right in front of them, because to notice would be to acknowledge a duty to act. How we construe a situation is not simply reflexive; we have a dual responsibility, as audience to the situation and as part-authors of it.

The Spartan example points to how pliable even the belief that child abuse is bad can be. But culture, convention, and tradition are not simply unthinking practice. Like desire, they have cognitive content; they have intentionality. As Adam Adatto Sandel puts it:

Despite all appearance and without explicit “innovation and planning,” tradition is constantly in motion. What might seem to be blind perpetuation of the old, or mechanical habit, always has a “projective” dimension. Handing down tradition really means adapting it to the current circumstances, maintaining it in the face of other possibilities. And through such preservation, tradition is constantly being redefined: “affirmed, embraced, cultivated.” Although this process operates, for the most part, unconsciously, it is nevertheless critical. The distinction between tradition and reason is ultimately unfounded.

For an in-depth treatment of this very subject, see this post. Suffice it to say that inasmuch as Martin and Thomas Cothran buy into Alasdair MacIntyre’s vision of tradition, I think they, too, fall into error.

But in an email, Thomas provided a picture of Aristotle’s method that strikes me as the correct one:

First, Aristotle doesn’t have a rigid system; his inquiry is responsive to the conditions of the everyday world. Second, Aristotle’s method does make room for cultural difference, but not in a way that precludes finding some ultimate truth. A culture may have good or bad practices and beliefs, but in any case they can neither be accepted uncritically nor ignored in favor of some abstract rule. And finally, Aristotle’s method requires that we evaluate his arguments for ourselves — he never encourages his readers to accept things on his own say-so, and his thinking is designed to be taken up critically.

In terms of “finding some ultimate truth,” I interpret him as saying that we can find moral truths that are not culturally relative. However, we do not have some logical foundation that puts such truths beyond doubt, and what knowledge we obtain—ultimate or otherwise—is fallible knowledge, just like human knowledge in any domain.

The Tragic Nature of the World

My father asserted:

Every morality is erected on ideals. Every ideal is an unattainable model of behavior. Every good person is inching toward an ideal vision of himself: that person-as-he-could-be.

To which Thomas responded:

[T]he assertion that “[e]very ideal is an unattainable model of behavior” is manifestly false: many ideals (such as the ideal that people ought not murder each other) are more attained than not.

I think this is a nit-pick, and that Thomas misses the tragic vision of the world behind my father’s statement. Human arrangements and ideals always have gaps in them that cannot be filled; an ideal that is “more attained than not” is often either undemanding, or—in the case of murder—is often attained at the expense of some other ideal. The police and prison apparatus that we built up in this country over the past couple of decades is rife with abuses of its own, some quite horrible. But I don’t want to quibble over particular points—the larger image of human beings as inherently imperfect and imperfectible, but also as striving and struggling towards betterment, is the right one.

It is because of this tragic nature, however, that authority is an ineradicable aspect of human life. Martin Cothran asked me how, in my framework at the time, “any moral statement can be considered authoritative over human behavior.” My father answered with a call to embrace contingency:

By embracing contingency, nothing is lost. No moral proposition can ever be “authoritative over human behavior” – Cothran’s phrase – in the absolute way that gravity has authority over bodies in space. An ideal must always be chosen, and always there will be those who refuse to do so: bad persons, weak persons, good persons in a weak moment.

I have come to think that authority—of ideals and of persons—is an ineradicable aspect of human life. But I wouldn’t call it authority “in the absolute way that gravity has authority over bodies in space.” At minimum, it is contingent on the existence of the human race. And a good Aristotelian would say that a great deal is indeed contingent on the circumstances—without ruling out the possibility of ultimate (but human) truths.