Take the sentence “Fido is a dog.” From this, we are entitled to infer various other sentences by substituting for a subsentential component. Thus we are entitled to infer “Fido is a mammal.” We are also entitled to infer “My pet is a dog.” The difference is that the latter inference is reversible, while the former is not. From “Fido is a mammal” we are not entitled to infer that “Fido is a dog,” whereas we can infer “Fido is a dog” from “My pet is a dog.” The particular, in other words, is that segment of the sentence that has symmetric substitution relations, while the universal is the one that does not. It is this symmetry that gives us the notion of different particulars being “coreferential,” and out of that, the very idea that there is an object to which singular terms refer.
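The substitution test above can be sketched mechanically. Here is a minimal toy model in Python (the entailment set and all term names are invented for illustration): substitution licenses form a directed relation, and two terms pick out the same particular exactly when the license runs in both directions.

```python
# Toy model of substitution inference. A license (a, b) means:
# substituting b for a in a sentence preserves entitlement to assert it.
licenses = {
    ("Fido", "my pet"), ("my pet", "Fido"),  # reversible: both are singular terms
    ("dog", "mammal"),                       # one-way: universal to wider universal
}

def symmetric(a, b):
    """Two terms are coreferential iff each may be substituted for the other."""
    return (a, b) in licenses and (b, a) in licenses

print(symmetric("Fido", "my pet"))  # True: same object, two singular terms
print(symmetric("dog", "mammal"))   # False: universals license only one direction
```

The asymmetry of the `("dog", "mammal")` license is exactly what marks "dog" as playing the universal role, while the symmetric pair marks "Fido" and "my pet" as coreferring particulars.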
This simple insight just saved you hours of cannabis-fueled dorm-room philosophy debates about the existence of universals. You’re welcome.
The key all along was to think of the meaning or content of a word as being derived from its use in a sentence—that is, from its contribution to a judgment. And judgments are things we do. They are actions… speech actions, like the act of asserting something. Just as a valid chess move is governed by shared rules over permissible actions rather than the intrinsic properties of a chess piece itself, shared rules of inference and judgment govern valid moves in discourse, rather than the intrinsic properties of words and concepts—much less the arbitrary phonemes that attach to them.
It’s easy to see how this resolves a lot of the problems created by the other school of thought, the one that locates meaning and content in reference, in sign and signified. If meaning comes from reference, then what does the universal “dog” mean absent its particular instantiations? Platonists thought the question demanded that there be an ideal dog, an abstract object just as real as the particular Fido on your lap, perhaps an idea in the mind of God. Nominalists rightly thought that was absurd, and so instead posited that universals are just names that refer to particular things with common properties—preserving the very meaning-as-reference assumption that led Plato down the wrong track in the first place.
It’s edifying to realize that thousands of pages of scholarship, and centuries of debate in medieval Europe, stemmed from a confusion generated by the inferential structure of ordinary language. Indeed, the failure to make meaning-as-reference “work” as a theory, combined with the odd resistance to giving it up, has led generations of “semioticians” to radical conclusions like nihilism, post-structuralism, and moral error theory, when it turns out the starting premise was unmotivated in the first place.
Wittgenstein’s Beetle in a Box analogy provides a great illustration of the basic idea. It shows how we can talk meaningfully about concepts, i.e. signs, even without direct access to the private, subjective perception of the thing “signified.” Instead, the shared, public meaning of any given word (like “beetle”) is given by its use, its pragmatics, particularly in the context of a sentence.
That’s why Wittgenstein argued a language can never be totally private. The rules of discourse, like the rules of chess, only make sense insofar as they are shared. Sure, you could invent a new board game with all-new rules that only you know. But when you make a move in a game that no one else around you can recognize, you might as well be speaking gibberish.
Ryan and Adam have been discussing the role of situation in morality. Do read both in full.
Ryan’s is a convincing defense of the banality of evil. Rape is an inevitability of war, for example, not because the participants of war are particularly bad human beings, but because the situation of being at war drives otherwise normal human beings to do heinous things. As he writes,
Situational psychology does not excuse evil, it democratizes it. It’s easy to believe that a U.N. peacekeeping mission in the Central African Republic, or a torture chamber in Cuba, or an insane-asylum-cum-torture-chamber in Iraq, or the total eradication of life as we know it in Syria, has nothing to do with us.
Both he and Adam point to self-delusion as the culprit. Writing from the experience of having once rationalized the immoral actions of a close friend, Adam says he
received that wake-up call about my own capacity for self-deception over a decade ago. The bigger shock was not that I was able to be so willfully blind, but that so many of my friends continued to be in light of what the investigation uncovered. In fact, they doubled down, entrenching themselves in a persecution narrative which provided a useful framework for rationalizing away any hint of their own guilt.
I don’t have much to add so far that my senpai hasn’t (as usual) said earlier and much better. In his discussion of the shortcomings of virtue ethics in Morality, Competition, and the Firm, Joseph Heath brings up the criminology literature on violent subcultures:
In the 1950s David Matza and Gresham Sykes suggested that the reason deviant subcultures (such as youth gangs) are criminogenic is not that they encourage primary deviance with respect to the moral norms and values of society, but that they facilitate secondary deviance with respect to cognitive and epistemic norms governing the way situations are construed. … Instead of maintaining that violence itself is good, members of the group may instead convince themselves that they had no choice to act as they did, or that the victim had done something to deserve it … What distinguishes the criminal, according to this view, is not a motivational defect or an improper set of values, but rather a willingness to make self-serving use of excuses, in a way that neutralizes the force of conventional values.
One implication of these “techniques of neutralization,” as they’re known, is that proper behavior, for the most part, is not hidden knowledge of which the deviant is ignorant. In fact, social deviants usually “know” the right thing to do, but explain it away with reference to exceptional circumstances, or by construing the situation differently. To paraphrase an example Heath often gives, when someone says they have “borrowed” an item they in fact stole, they are in essence trading one normative violation (“do not steal”) for a different, less serious cognitive violation (against the generally accepted meaning of the word “borrowed”). He discusses other techniques of neutralization here. They include:
Denial of responsibility
Denial of injury
Denial of the victim
Condemnation of the condemner
Appeal to higher loyalties
“Everyone else is doing it”
Reading Ryan’s post, I was left with the sense that he sees a situation’s influence over moral decisions as inevitable, possibly even deterministic. He thus suggests abandoning the even greater delusion that we can avoid self-delusion, and instead focusing on reforming the broader system that generates the situations that leave us most compromised.
The problem with this argument comes back to the eternal question asked by criminologists: Why isn’t there more crime than there actually is? Given the state’s limited enforcement capacity, society depends on most people, most of the time, behaving morally, i.e. following the rules. If self-delusion were truly the rule, rather than the exception, civilization would collapse under a crisis of endemic shirking.
Ironically, blaming the system is one of the most pernicious techniques of neutralization criminologists have identified. Indeed, saying “it’s systemic” is one of the easiest ways to deny responsibility for one’s actions, and in turn make the problematic behavioral pattern all the more common and entrenched.
This is true not just of crime stemming from war or systemic poverty; it applies equally well to white-collar crime. When bankers engage in shady lending or regulatory arbitrage, for example, they often neutralize their bad behavior by blaming the systemic forces of market competition (“Everyone else is doing it”), or the duty to maximize shareholder value within the letter of the law (appeal to higher loyalties). Over time this leads to juridification, the thickening of law books, as behaviors that were once enforced by unwritten social norms and voluntary self-restraint must be replaced by codified laws with explicit sanctions.
The upshot is that we shouldn’t stop holding people accountable for their actions just because the situation they somehow found themselves in made shirking their moral duties the path of least resistance. Indeed, just the opposite. Employing techniques of neutralization, as a self-serving behavior, should itself be an object of social sanction.
Moreover, it means there’s a chance we can preempt our techniques of neutralization by being aware of them, and by training ourselves in strategies that undercut self-delusion. That’s essentially what Joseph Heath argues business ethics courses should look like, rather than tired lessons in the history of moral philosophy. But in general it’s probably the sort of moral education we should all be subject to, starting as children.
In the history of human civilization, no large society has ever come close to achieving consensus, be it on values, lifestyles, or standards of taste. Yet there have been many that have tried. Today, they are known as theocracies.
By theocracy I do not mean a strictly religious society, at least not in the usual sense of religious. Rather, I define theocracy as any society with a strong commitment to moral and political perfectionism. Perfectionism refers to any attempt to prescribe a theory of what constitutes “the good life,” as Aristotle called it. Perfectionism comes in many shapes and sizes, from the suppression of so-called sexual deviants to the soft paternalism of Michael Bloomberg.
Classical liberalism is in essence the repudiation of perfectionism. That’s why advocates of “libertarian paternalism” are still properly understood as illiberal even though they abstain from direct coercion. When policy has the aim of shaping our lives based on a bystander’s substantive theory of how one ought to live, be it who to love or how much soda to drink, it runs the principle of liberal neutrality through the shredder.
Liberal neutrality is essential for ensuring legitimate laws don’t discriminate against adherents with irreconcilable conceptions of the good life. This does not mean liberal neutrality is itself value neutral, in the sense of amoral. Rather, liberal neutrality is better thought of as embodying a Paretian or win-win standard—a norm which transcends the depths of human particularity—and in turn makes classical liberal constitutions minimally controversial. As Joseph Heath puts it:
The normative intuition underlying the Pareto standard is essentially contractual. Pareto improvements are changes that no one has any reason to reject. Making these improvements therefore means making some people better off, under conditions that everyone can accept. Recalling that the purpose of these normative standards is to permit cooperation, efficiency as a value permits social integration while requiring very little in the way of consensus about basic questions of value.
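The Pareto standard Heath describes reduces to a simple two-part test. A minimal sketch in Python (the payoff vectors are invented for illustration): a change is a Pareto improvement iff it makes no one worse off and at least one person better off.

```python
def pareto_improvement(before, after):
    """True iff no one is made worse off and at least one person gains."""
    return (all(a >= b for a, b in zip(after, before)) and
            any(a > b for a, b in zip(after, before)))

# Three citizens with irreconcilable values can still accept the first move:
print(pareto_improvement([5, 3, 8], [6, 3, 9]))  # True: no one has reason to reject
print(pareto_improvement([5, 3, 8], [9, 2, 9]))  # False: the second citizen loses
```

Notice the test never compares one citizen’s payoff to another’s, which is why it requires so little consensus about basic questions of value.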
The alternative is a world with perpetual unanimity around inscrutable disputes, and the imperative that any deviation in the form of dissent be crushed. In that sense, free expression in theocracies is despised not due to the particular content of the speech, but due to the subversiveness embodied in the volitional act itself—what G.L.S. Shackle referred to as the “cause uncaused.”
This is why striving for perfect consensus around the good life leads invariably to moral and cultural stagnation. Without a “cause uncaused” the pursuit of happiness comes to resemble seminary. Theocracies are like a static equilibrium, a Walrasian box from which there’s no escape. That includes Saudi Arabia and Iran, but also the Stalinist regimes of Cuba and North Korea which, without irony, enforce their impoverished status quo by banning unsolicited expression as “counter-revolutionary.”
New ideas are transmitted by equally novel acts of speech. When speech is unbounded and permissionless, new ideas can diffuse, ear by ear, through the rest of society, disrupting a closed system from within. Take the free thinking Athens of ancient Greece, and then contrast it with its monolithic and Spartan neighbor. One fostered innovations in philosophy, mathematics, science, arts and culture. The other is synonymous with militarized asceticism, and a laconic rationing of thought.
Freedom of thought and life-pursuit are therefore engines of creative destruction as well as inescapably heretical. Today, however, we are forgetting how tightly the two roles are entwined. We desire the benefits of a flourishing society without exposure to words and concepts that challenge our eudaemonic preconceptions.
That’s why the Enlightenment concept of toleration did not require one man or woman to endorse the views of another. On the contrary, classical liberals defended free expression as a matter of mutual respect, not mutual acceptance. Toleration contains the seeds of disagreement and argumentation, and doesn’t sacrifice human flourishing for false consensus.
Modern proponents of universal acceptance have a natural affinity with traditional theocrats. Both prove themselves by their piety to an immutable creed, conveyed through zealous displays of righteousness. And both endeavor to inquisition any who depart from the flock.
The culture war demonstrates how much the ink on our Paretian contract has faded. But if traditional theocrats continue in their attempts to regulate virtue they cannot justly complain when proponents of universal acceptance force them to acquiesce in other settings, and vice versa. Defection from liberal neutrality opens a perfectionist Pandora’s Box that cuts in both directions.
There is no way around it. The essential heresy of freedom means we either live with imperfection or all burn at the stake.
(PS: This is apparently Sweet Talk’s 500th post. Here’s to 500 more.)
I think of norms as valid types of reasons that we give to or ask from interlocutors to justify behavior. That makes them inherently rational. When this point is missed, the tendency is to demand for there to be a “why” behind the norm, when the role of norms is to be the “why” behind the action.
Maybe this is what Cameron means when he writes that norms “must be accepted either tout court or on the basis of a mythology.” But you can see why, if norms are rational at their core, this phrasing is misleading.
I have written on this point in a post called Sacred and Profane Reasons. In short, I think the notion that desires, preferences, values, and norms are non-rational or even irrational is not only mistaken, but has perverse consequences. Namely, it makes us instrumentalize imperatives, leading to Pareto-inferior social orders.
I fully agree with all of this. But moreover, I think these points, taken together, imply the rationality of norms—especially once norms are conceptualized as [cognitive] moves in a language game. As Cameron writes below point 3:
Perception is filtered and structured by pre-conscious judgements about the significance of various aspects. This judgement (“theory”) is not essentially different from value judgements which operate on the conscious level.
Thus if Cameron really views value judgments as non-rational, then he’s committed to all judgments being non-rational, which contradicts the intelligibility of the universe.
I have also written that calling an imperative or norm a “myth” (as Cameron does for liberal norms and natural rights) amounts to a category error. Assertions and imperatives stake very different types of validity claims. For example, I can assert the non-existence of God while still holding on to the imperative of ritual. Imperatives don’t carry an intrinsic epistemic burden.
The confusion arises because ethical vocabularies using words like “ought to” and “rights” transform imperatives into assertions. But this doesn’t change the fact that the concept of “rights” is at core about expressing certain imperatives. It simply lets us express imperatives in a more flexible, natural way.
In Theory and Practice Reconciled I went so far as to define progress as any process whereby our theoretical assertions come into alignment with our practical imperatives. In other words, progress equals cooperation without the assistance of pious fictions.
Necessary, yes, but not sufficient. This one goes to the importance of language, and its role in normative / cultural reproduction. As communicative animals our societies are subject to much more directionality than can be explained by purely Darwinian types of selection. I came to this view from reading Joseph Heath, as well: The second and final chapters in Following the Rules; and his synopsis / defense of Habermas’ theory of discourse ethics.
As SpaceX successfully landed its 23-story-tall Falcon 9 rocket in an upright position, Jeff Bezos, the CEO of Blue Origin (a rocket company which had performed a superficially similar, but technically much less impressive, feat days before), tweeted the following:
Congrats @SpaceX on landing Falcon’s suborbital booster stage. Welcome to the club!
Ouch! Within an instant, Bezos became the target of scorn for hundreds of fawning SpaceX and Elon Musk fans, who derided his “welcome to the club” comment as classless and backhanded. Yet as my colleague Andrew noted at the time, “given that space exploration is mostly a billionaire dick measuring contest, petty squabbling is probably the best motivator we could ask for.”
I think this is exactly right, but I will go a big step further. “Dick measuring contests,” more generally known as status competitions, are often called “wasteful,” “zero-sum,” and “inefficient.” Yet even when those labels are technically accurate (and they often aren’t—the private-sector space race, for example, is clearly socially useful), another important truth can hold simultaneously: status competitions are our main, if not only, source of meaning in the universe.
The Anxieties of Affluence
For all the wealth controlled by the three-comma club, its members turn out to be relatively poor when it comes to status goods. The reason relates to the inherent positionality of status. As in a game of King of the Hill, moving up a rank necessarily means someone else must move down one, with the top-most players having the least to grab on to. Climbing from second-from-the-top to “King” is thus exponentially harder than moving from third to second, fourth to third, and so on. And for whoever is King, with no one above to latch on to, the only way to truly secure one’s position against the penultimate scourge would be to invent a (proverbial) sky hook.
If not for this zero-sum (at the psychosocial level) drama, what would drive Musk or Bezos to invest so heavily in their own (quite literal) sky hooks? Bezos’s tweet is at least evidence that Musk’s aeronautical successes have gotten under his skin—ahh, the anxieties of affluence. But all that means is one of the world’s most socially productive people has all the more reason to wake up in the morning.
In contrast, for a middle-class, median-IQ American to broadcast their status relative to their peers, they can always buy a bigger house, drive a faster car, learn a new talent, travel to more exotic places, or give more to charity. That is, the space to broadcast ever greater social distinction is seemingly unbounded from above. This was the nouveau riche mindset of Elon Musk circa 1999, when he bought (and later crashed) a million-dollar McLaren F1. But today, as an ennuyé riche multi-billionaire, simply owning an awesome car is old hat, cheap talk, something any rich CEO can do. So now he builds and designs even better cars from first principles, incidentally spurring innovation as he literally pushes against the physical and technological boundaries of keepin’ up with the Bezos.
As the McLaren incident shows, for all his self-effacing talk about saving humanity from extinction, even Musk is human, and in that humanity ultimately motivated by subterranean vanity. Bezos’s only sin was to let his vanity see the light. At least he punches up.
Critics of the free market point to these sorts of positional arms races as the downfall of the neoclassical economists’ conception of efficiency. On the one (invisible) hand, competition and exchange can guide the butcher and baker to produce meat and bread for the common good. On the other hand, identical competitive forces can lead nations to the brink of nuclear war, marketing and political campaign budgets to balloon, and large SUVs to pollute the roads due to safety in relative size. That is, individual incentives need not be aligned to the collective good. (As I’ve argued before, classical liberals like Adam Smith understood this full well).
Robert Frank influentially explained markets where individual and collective goals diverge in terms of what he calls Darwin’s Wedge (or what writer Jag Bhalla variously calls “dumb competition” and “spontaneous disorder”). The term comes from evolutionary biology, where wasteful arms races are ubiquitous. In the classic example, deer evolved large, cumbersome antlers because whenever a mutation made a buck’s rack marginally larger, he was able to beat out, and reproduce more than, his sexual competitors, passing on the trait. But since what really matters is not the absolute size of the antlers, but their size relative to the local average, competition over the trait led sexual selection to favor ever larger antlers, up to the point where the marginal benefit of a slightly larger antler equaled its marginal cost (i.e. until the trait was evolutionarily stable).
In economics MB=MC is the mark of optimality, but here it’s clear competition in some sense failed. Male deer must now go through life with awkward bone-branches protruding above their eyes, getting caught on trees, and generally consuming caloric resources that might be better spent procreating. Had the ancestors of deer somehow colluded genetically to cap the size of antlers, or else to compete along some other, less handicapping marker of genetic fitness, the entire deer species would in some sense be made “better off” through greater numbers.
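Darwin’s Wedge can be made concrete with a toy simulation (all payoff numbers are invented for illustration): each buck’s mating benefit depends on antler size relative to the herd average, while the metabolic cost depends on absolute size. The herd escalates until marginal benefit equals marginal cost, at which point everyone is strictly worse off than if no one had grown antlers at all.

```python
def best_response(herd_avg, sizes=range(0, 21)):
    """Antler size maximizing relative mating benefit minus absolute cost."""
    def payoff(s):
        return 2.0 * (s - herd_avg) - 0.05 * s**2  # benefit is relative, cost absolute
    return max(sizes, key=payoff)

# Each generation, every buck adopts the best response to the current average.
avg = 0
for generation in range(50):
    avg = best_response(avg)

# Escalation stops where MB (2) equals MC (0.1 * size), i.e. at size 20.
print("equilibrium antler size:", avg)
print("payoff at equilibrium:", 2.0 * (avg - avg) - 0.05 * avg**2)  # negative!
```

At equilibrium the relative benefit washes out (everyone has the same rack), leaving only the cost, so every deer would gain from a collective cap at zero; but any individual who unilaterally shrank its antlers would lose, which is why the wasteful outcome is stable.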
But alas, genes are selfish. As the famed selfish gene raconteur Richard Dawkins himself once wrote:
In a typical mature forest, the canopy can be thought of as an aerial meadow, just like a rolling grassland prairie, but raised on stilts. The canopy is gathering solar energy at much the same rate as a grassland prairie would. But a substantial proportion of the energy is ‘wasted’ by being fed straight into the stilts, which do nothing more useful than loft the ‘meadow’ high in the air, where it picks up exactly the same harvest of photons as it would – at far lower cost – if it were laid flat on the ground.
And this brings us face to face with the difference between a designed economy and an evolutionary economy. In a designed economy there would be no trees, or certainly no very tall trees: no forests, no canopy. Trees are a waste. Trees are extravagant. Tree trunks are standing monuments to futile competition – futile if we think in terms of a planned economy. But the natural economy is not planned. Individual plants compete with other plants, of the same and other species, and the result is that they grow taller and taller, far taller than any planner would recommend.
And how lucky we are that this is the case! I am grateful for hemlock forests, flamboyant peacock tails, and even moose, the silly looking cousin to deer. Were it not for the playing out of these so-called wasteful competitions, instead of a world of immense biodiversity and wonder, life on Earth would consist in a hyper-efficient photosynthesizing slime spread thinly across the globe.
Indeed, the self-defeating hunt for relative fitness, including social (and sexual) distinction, is responsible for bootstrapping literally every one of our perceptual and cognitive faculties, including our ability to appreciate aesthetics. If not for positional arms races around sexual selection, for instance, it is unfathomable that beauty would exist at all. All creativity, when not strictly for survival, is rooted (in the sense of ultimate causation) in status games. Even the fact that I’m writing this right now.
Beyond biology, the same story explains the artistic and cultural diversity created by market societies. While there are no doubt those who think the classical era represented a pinnacle of cultural achievement, a stationary point we should have made every effort to hold in perpetuity, this is nothing more than the golden-age fallacy. Instead, the greatest classical musicians were only great because they superseded their predecessors and contemporaries by chasing the same ephemeral distinction as Elon Musk and the white-tailed deer, and as such were contributing to a self-defeating cultural churn that baked in its own impermanence. This holds true today, as dozens of musical and artistic genres have been invented, grown steadily popular, and then gone “mainstream” and stale as their social cachet dries up.
Ironically, it is often those who are most critical of neoclassical economics who still seem wedded to its narrow and lifeless conception of optimality. Rather than moving beyond the Samuelsonian allocation paradigm to one based in creation, innovation and discovery, they double down on the dangerous illusion that positional status competitions can be easily muted or improved upon by a central planner (the “designed economy” referred to by Dawkins). While there’s obvious merit in blocking literal arms races, tweaking the tax deductibility of marketing expenses, and so on, I always worry whenever I read calls for a general luxury tax, or other excoriations of variability in the type and quality of consumables.
In the extreme, this thinking is what underlay the Marxist-Leninist ideology that transformed Mao’s China into a literal “Nation in Uniform.” A bit earlier in history, it also motivated the Soviet government’s failed attempt to make the luxury goods used by the petite bourgeoisie available to one and all. Rather than trying to “eliminate” bourgeois values, by contrast, a capitalist society is healthy precisely because it enables a nation of rebels, and the inequality that implies.
Resistance is Futile
One thing neoclassical economics did get right is non-satiation. Humans can never be fully satisfied: not with our mates, not with our station in life, nor with this final draft. However, this is not because we have neat, monotone preferences; rather, it’s because relative status has shaped every corner of our psyche.
Buddhism rightly teaches that this dissatisfaction, called dukkha, pervades all of existence. As the Buddha supposedly once said, “I have taught one thing and one thing only, dukkha and the cessation of dukkha.” But why? If resistance is futile, why not embrace it? Satisfaction is overrated anyway. What person has ever achieved any kind of success or excellence without being tortured by anxiety, stress, or self-consciousness?
Of course Buddhists, like Stoics, would presumably question my definition of success. Maybe if we all meditated daily and simply learned to lower our expectations we’d learn to be satisfied with poverty. Yet we ran that experiment and we self-evidently were not.
Rather than be zen about our lack of zen, even Buddhist practices have ironically become (or were they always?) their own dimension for pursuing social distinction. Don’t forget, Veblen’s magnum opus on status goods was called “The Theory of the Leisure Class,” and what could be a greater advertisement of belonging to the leisure class than the ability to sit absolutely idle for hours out of every day?
I don’t deny that meditation can be incredibly useful for reducing and controlling the stresses and anxieties of civilization. But if you’re a fan of meditation, you should neither deny nor feel shame about the bourgeois half of your BoBo paradise. You are not above consumerism or hedonic treadmills. On the contrary, you are a leading light, an early adopter, an innovator in waste.
Otherwise, a monomaniacal focus on achieving nirvana (the state in which all attachments and dukkha have melted away) simply becomes an agent-centric example of the social planner’s protoplasmic conception of optimality. At the same time, I recognize the futility of my own attempt to disillusion you, dear reader. As Mises wrote, human action is predicated on “the expectation that purposeful behavior has the power to remove or at least to alleviate felt uneasiness.” It just turns out that that expectation is as mistaken as it is incorrigible.
So meditate if you have to, but don’t be afraid to daydream a little, too. It may fill you with anxiety, and it definitely won’t make you happy, but later in life you just might find yourself building a spaceship to Mars.
There is a tension between theory and practice, but it is not as insurmountable as the classical philosophers believed.
The tension arises from the original embodiment of right and wrong in social practice. For example, our belief in gravity is not, in the first instance, some kind of mental state containing propositional content. Rather, it’s implicit in our unwillingness to step off the edge of a cliff, or in our daily interactions which take the pull of the earth for granted.
But there’s a twist. As talkative creatures, we have the capacity to reconstruct and articulate our reasons for acting. These speech acts take what was originally tacit or implicit and convert it into something explicit, so it can be shared and transformed. This is the origin of theory, mankind’s most precious yet volatile invention.
The flexibility of language allows theory to take something immanent in practice and simplify, synthesize, depersonalize, express and extrapolate it; to take pragmatic truths embodied in particular acts and discuss them in relation to far away quantities. Think of the legendary apple that dropped on Newton’s head—inspiring, in an instant, his model of universal gravitation.
The classics recognized that this power of theory was not limited to metaphysics. The same discursive filter that caused Newton to extrapolate the forces affecting earth to heavenly bodies, or Democritus to propose the atomic unity of matter, is just as liable to extend and unify the dignity of masters to their slaves, or of the elite to the hoi polloi.
The cognoscenti of every bygone age thus had a personal incentive to treat theory like yellowcake—an ingredient with such transcendental implications that, while indispensable to those with self-control, would be unwise to have proliferate. Hence the double-doctrine—the concealment of radical thought in recapitulated conventional wisdom.
Recall, it was the same Democritus who first declared that “equality is everywhere noble,” yet made (ostensible) exceptions for women and serfs. Evidently, the strength of prevailing social practice made the equal dignity of all people more controversial than his equally egalitarian theory of matter. And so one reads between the lines.
With the Enlightenment, this art of esotericism was mostly lost. Scientific and social revolutions occurred in tandem, pointing to a natural harmony between theory and practice. Marx declared that the purpose of philosophy was not to interpret the world, but to change it. The tragedies of mass “social experiments” notwithstanding, this harmony has stood the test of time.
The fears of classical philosophers and other esoteric writers were thus overwrought. Avoiding the persecution of the state, church or media are contingent factors of history, and do not prove their view that theory and practice are inherently irreconcilable. Indeed, the very business of theory is reconciliation. Theory abhors the contradictions and antagonisms of prevailing practices, and therefore generates an imperative within the self-conscious practitioner to squelch her own dissonance, and communicate to others the need to do the same. The arc of the moral universe is long, but it bends towards coherence.
Of course, one may resist these urges given sufficient training in mental compartmentalization. But these days normative and epistemic incoherence, once called out, rarely survives a generation. Indeed, our norms are evolving at an unprecedented pace, leading to a revival of the classical concerns with a slight but important modification.
Rather than view the conflict as one between poet and philosopher, immanent and transcendent, the modern view seems more concerned with the conflict between two broad types of expressive rationality: epistemic and instrumental. In other words, it questions whether our commitment to both truth and happiness is sustainable in the long run. Perhaps all religions are technically false, for example, and yet essential for organizing a robust civilization.
I view this as a mistake, a conflation of assertion with imperative. It is perfectly reasonable, if not inevitable, to adopt categorical imperatives that step outside instrumental calculus. However, in the past these imperatives have been tied together with assertions or truth claims, be it about God or some other property of nature. By diligently (and correctly) separating facts from values, is from ought, the Enlightenment made the first big push to untangle the imperative for solidarity from the epistemic burden of superstition. As Hume wrote, “this small attention [to the fact-value distinction] would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.”
This process is by no means over, as if all the vulgarities of our moral system have been fully excised. Nonetheless, history does have an arrow, a tendency for progress, insofar as language—as the medium for cultural reproduction—imposes a structural and universalizing coherence on our practices over time.
In the past this process took place over centuries, with paradigm shifts achieved through either dialectical unanimity or violent revolution. Today, however, a plurality can and does sink shibboleths before such consensus is achieved, and with enthusiasm. For those left in the lurch, there is a tendency to sense that the rational has become, as it were, far too real. Yet the reconciliation of theory and practice subsists.
In resisting the cultural undertow, there then arises a temptation to reject, not the direction of the tides (for they cannot be disputed), but instead the pretence of coherence more generally. Thus the reaction to embrace the incoherent, the irrational, and the obscure; to double down on superstition and occult mysticism; to castigate global theory for local practice—a vain last breath before the zeitgeist submerges them, too.
One way to think about the sacred and profane distinction is in terms of types of reasons. “Do not trespass on the Holy of Holies,” says the Elder, “for it is sacred.” As far as practical reasons go, sacredness suffices. It has to suffice. Otherwise practical rationality enters an infinite regress, leading to decision paralysis in lieu of an epistemic foundation that simply does not exist. Put differently, the whole point of a good reason is that it provides a stopping rule in the game of giving and asking for reasons.
In the context of decision making, sacred reasons are categorical. That is, we feel duty-bound to respect valid claims of sanctity, where validity is a function of a claim's coherence within the body of reasons we already take for granted (i.e. presuppose) as implicit in existing social practices. Conversely, violations of sacred objects or spaces are socially deviant, even blasphemous.
The sacred, it seems, is the byproduct of an imperative. When questioning the imperative “do not kill,” an appropriate and argument-ending reply is “because life is sacred.” The question naturally arises: why not simply issue the imperative, full stop?
Well, some do. In the Mayan language Sakapultek, norms of every kind are conveyed with the underlying imperative—their equivalents of “do” and “do not”—or indirectly through irony. As one would expect, this severely constrains the ways a norm can be expressed. For instance, Sakapultek speakers lack the ability to say “you ought to do x because y.” By having terms like “ought,” “right” and “wrong,” English speakers have a much easier time expressing and universalizing imperatives across domains.
This view is an upshot of taking Wittgenstein’s private language argument and meaning-as-use claim seriously. The alternative, meaning as reference, leads down a host of dead ends outside the scope of this post. Suffice it to say, thinking of “rightness” outside the context of use (as in doings, social practices, or deontic constraints over a choice set) leads to the search for a referent somewhere in the universe.
In other words, sacredness is not “out there” like some sort of metaphysical substance, but is rather a stand-in part of speech, a general predicate, that aids in the expression of certain types of imperatives. Rather than having to explicitly declare “do not kill __” in every discrete case, saying “killing is wrong” harnesses our existing competency with verbs and predicates to establish the general case.
While “right” and “wrong” are used to modify actions, “sacred” more usually modifies objects, like sacred places, items or institutions. It’s immediately obvious how this expands our expressive capacity that much more. We have the option of expressing identical imperatives against adultery, for example, either by declaring the act to be wrong, or by declaring its indirect object, marriage, to be sacred.
Sacredness so often attaches to rituals like marriage because rituals require us to overcome our self-interest, such as the temptation of infidelity. Likewise, the categorical nature of a sacred imperative is essential for sustaining collective action, or group rituals, and for legitimating sanctions against defectors. Moreover, sacred imperatives (as opposed to more prosaic ones) tend to be accompanied by feelings of sublimity that group practices are uniquely able to elicit.
In contrast, consider profane reasons. A profane reason to marry someone is the expectation of saving resources through economies of scale. As economists are wont to point out, this is undoubtedly among the fortuitous consequences of marriage, and perhaps in some non-teleological way part of the “ultimate” explanation for the ubiquity of marriage practices. But these profane patterns are distinct from the proximate and sacred reasons proffered by the wedded themselves. If instrumental reasons were all there were, a green card marriage would be celebrated with the same enthusiasm as any other.
This is why it’s nonsense to claim to be (at least without debilitating cognitive dissonance) “religious but not spiritual”—i.e., to adopt ritual and other religious practices, up to and including prayer and congregational attendance, without endorsing the sacred character of one’s own actions. As soon as one takes a purely instrumental stance towards ritual, treating it as an effective means to some end, its power begins to wane.
The notion of “effectiveness” here is precisely what is problematic, as effectiveness is endogenous to the degree of sacredness, while sacredness rests on a cognitive disinterest in being effective!
Thus we may find ourselves subject to the counsels of prudence, climbing a ladder of instrumental reasons that, rung by rung, persuade us that ritual practice is an effective means for reaching our ends, namely well-being, group cohesion, and so on and so forth. Yet one will never reach the highest height until the ladder has been kicked away, and the practice left to stand on its own sacred terms. This is not to embrace the non-rational. On the contrary, it is to recognize instead a higher type of reason, a reason that cannot be circumvented.
That modernity has had a rationalizing tendency is what led Habermas to remark that all traditionalism has become neo-traditionalism. By that he meant that, whereas true traditionalists saw tradition as a valid source of authority as such, neo-traditionalists have revealed themselves to be indirect rationalists, offering up great reasons for obeying tradition that go beyond presupposition: tradition is stress-tested wisdom, or a product of smarter-than-thou spontaneous order, and so on. To simply accept tradition as a “good reason” in and of itself is, to most moderns, absurd—and rightly so.
So it appears instrumental rationality has a great power to crowd out more communicative and categorical types of reasons over time, as our shared presuppositions are torn asunder by critical self-reflection and re-conceived as anachronistic husks around a fundamentally profane kernel. We are still clearly in the throes of this rationalization process, which has progressed sporadically and unevenly, and which has never once reversed. Recall that Prisoner’s Dilemmas by design preclude any communicative action. But once strategic maximizing becomes the norm, it’s cooperation that’s heretical. As the only remaining disposition we may will to become a universal law, cynicism is destined to be our last and most enduring mode of communion.
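For readers unfamiliar with the game-theoretic structure being invoked, here is a minimal sketch of the Prisoner’s Dilemma (the payoff numbers are illustrative, chosen only to satisfy the standard ordering): defection strictly dominates regardless of what the other player does, yet mutual defection leaves both players worse off than mutual cooperation would have.

```python
# Illustrative Prisoner's Dilemma payoffs as (row player, column player).
# Numbers are arbitrary but follow the canonical ordering T > R > P > S.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual reward (R)
    ("cooperate", "defect"):    (0, 5),  # sucker (S) vs. temptation (T)
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual punishment (P)
}

def best_response(opponent_move):
    """Return the move maximizing the row player's payoff
    against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is the best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) is worse for both players
# than mutual cooperation (3, 3).
```

The point of the sketch is simply that the dilemma is built into the payoff structure itself: no amount of within-game strategizing escapes it, which is why the game, by construction, leaves no room for the communicative action discussed above.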