Featured image is “The Fortune Teller,” by Simon Vouet.
The “rainbow ruse” is a cold reading technique in which the reader first assigns the subject a personality trait, and then assigns its opposite. For example, one might say, “You can be a spontaneous person, but in your private life you tend to stick to a routine that works.” Or, “You see yourself as an open-minded person, but you tend to dispense with bad arguments quickly.”
Crude examples of the rainbow ruse are easy to spot as nonsense, but the more skilled a person is in cold reading, the better that person can craft tailor-made rainbow ruse statements to gain the subject’s confidence and leave him or her with the impression of having been deeply and profoundly understood.
As SpaceX successfully landed its 23-story-tall Falcon 9 rocket in an upright position, Jeff Bezos, the CEO of Blue Origin (a rocket company that performed a superficially similar, but technically much less impressive, feat days before), tweeted the following:
Congrats @SpaceX on landing Falcon’s suborbital booster stage. Welcome to the club!
Ouch! Within an instant, Bezos became the target of scorn for hundreds of fawning SpaceX and Elon Musk fans, who derided Bezos’ “welcome to the club” comment as classless and backhanded. Yet as my colleague Andrew noted at the time, “given that space exploration is mostly a billionaire dick measuring contest, petty squabbling is probably the best motivator we could ask for.”
I think this is exactly right, but I will go a big step further. “Dick measuring contests,” more generally known as status competitions, are often called “wasteful,” “zero-sum,” and “inefficient.” Yet even when those labels are technically accurate (and they often aren’t—the private sector space race, for example, is clearly socially useful), another important truth can be simultaneously true: Status competitions are our main, if not only, source of meaning in the universe.
The Anxieties of Affluence
For all the wealth controlled by the three comma club, its members turn out to be relatively poor when it comes to status goods. The reason relates to the inherent positionality of status. As in a game of King of the Hill, moving up a rank necessarily means someone else must move down one, with the top-most players having the least to grab on to. Climbing from second-from-the-top to “King” is thus exponentially harder than moving from third to second, fourth to third, and so on. And for whoever is King, with no one above to latch on to, the only way to truly secure one’s position against the penultimate scourge would be to invent a (proverbial) sky hook.
If not for this zero-sum (at the psychosocial level) drama, what would drive Musk or Bezos to invest so heavily in their own (quite literal) sky hooks? Bezos’ tweet is at least evidence that Musk’s aerospace successes have gotten under his skin—ahh, the anxieties of affluence. But all that means is that one of the world’s most socially productive people has all the more reason to wake up in the morning.
In contrast, a middle-class, median-IQ American who wants to broadcast status relative to their peers can always buy a bigger house, drive a faster car, learn a new talent, travel to more exotic places, or give more to charity. That is, the space to broadcast ever greater social distinction is seemingly unbounded from the top. This was the nouveau riche mindset of Elon Musk circa 1999, when he bought (and later crashed) a million-dollar McLaren F1. But today, as an ennuyé riche multi-billionaire, simply owning an awesome car is old hat, cheap talk, something any rich CEO can do. So now he builds and designs even better cars from first principles, incidentally spurring innovation as he literally pushes against the physical and technological boundaries of keepin’ up with the Bezos.
As the McLaren incident shows, for all his self-effacing talk about saving humanity from extinction, even Musk is human, and in that humanity ultimately motivated by subterranean vanity. Bezos’ only sin was to let his vanity see the light. At least he punches up.
Critics of the free market point to these sorts of positional arms races as the downfall of the neoclassical economists’ conception of efficiency. On the one (invisible) hand, competition and exchange can guide the butcher and baker to produce meat and bread for the common good. On the other hand, identical competitive forces can lead nations to the brink of nuclear war, marketing and political campaign budgets to balloon, and large SUVs to pollute the roads due to safety in relative size. That is, individual incentives need not be aligned to the collective good. (As I’ve argued before, classical liberals like Adam Smith understood this full well).
Robert Frank influentially explained markets where individual and collective goals diverge in terms of what he calls Darwin’s Wedge (or what writer Jag Bhalla variously calls “dumb competition” and “spontaneous disorder”). The term comes from evolutionary biology, where wasteful arms races are ubiquitous. In the classic example, deer evolved large, cumbersome antlers because whenever a mutation made a buck’s rack marginally larger, he was able to beat out his sexual competitors and reproduce more, passing on the trait. But since what really matters is not the absolute size of the antlers but their size relative to the local average, competition over the trait led sexual selection to favor ever larger antlers up to the point where the marginal benefit of a slightly larger antler equaled its marginal cost (i.e., until it was evolutionarily stable).
In economics MB=MC is the mark of optimality, but here it’s clear competition in some sense failed. Male deer must now go through life with awkward bone-branches protruding above their eyes, getting caught on trees, and generally consuming caloric resources that might be better spent procreating. Had the ancestors of deer somehow colluded genetically to cap the size of antlers, or else to compete along some other, less handicapping marker of genetic fitness, the entire deer species would in some sense be made “better off” through greater numbers.
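The MB=MC logic can be sketched numerically. In this toy model (all curves and numbers are hypothetical, chosen purely for illustration), antler size ratchets upward so long as the marginal mating benefit of being a bit bigger than rivals exceeds the marginal survival cost of carrying the extra bone, halting where the two cross:

```python
def marginal_benefit(size):
    # Relative mating advantage: diminishes as everyone's antlers grow.
    return 10.0 / (1.0 + size)

def marginal_cost(size):
    # Metabolic and mobility cost: rises with absolute size.
    return 0.5 * size

size = 0.0
while marginal_benefit(size) > marginal_cost(size):
    size += 0.01  # a slightly larger mutant invades the population

print(round(size, 2))  # settles near 4.0, where MB ≈ MC for these toy curves
```

The stopping point is an evolutionarily stable size: a population stuck carrying antlers whose last increment of growth bought nothing, exactly the “optimal” waste the paragraph above describes.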
But alas, genes are selfish. As the famed selfish gene raconteur Richard Dawkins himself once wrote:
In a typical mature forest, the canopy can be thought of as an aerial meadow, just like a rolling grassland prairie, but raised on stilts. The canopy is gathering solar energy at much the same rate as a grassland prairie would. But a substantial proportion of the energy is ‘wasted’ by being fed straight into the stilts, which do nothing more useful than loft the ‘meadow’ high in the air, where it picks up exactly the same harvest of photons as it would – at far lower cost – if it were laid flat on the ground.
And this brings us face to face with the difference between a designed economy and an evolutionary economy. In a designed economy there would be no trees, or certainly no very tall trees: no forests, no canopy. Trees are a waste. Trees are extravagant. Tree trunks are standing monuments to futile competition – futile if we think in terms of a planned economy. But the natural economy is not planned. Individual plants compete with other plants, of the same and other species, and the result is that they grow taller and taller, far taller than any planner would recommend.
And how lucky we are that this is the case! I am grateful for hemlock forests, flamboyant peacock tails, and even moose, the silly looking cousin to deer. Were it not for the playing out of these so-called wasteful competitions, instead of a world of immense biodiversity and wonder, life on Earth would consist of a hyper-efficient photosynthesizing slime spread thinly across the globe.
Indeed, the self-defeating hunt for relative fitness, including social (and sexual) distinction, is responsible for bootstrapping literally every one of our perceptual and cognitive faculties, including our ability to appreciate aesthetics. If not for positional arms races around sexual selection, for instance, it is unfathomable that beauty would exist at all. All creativity, when not strictly for survival, is rooted (in the sense of ultimate causation) in status games. Even the fact that I’m writing this right now.
Beyond biology, the same story explains the artistic and cultural diversity created by market societies. While there are no doubt those who think the classical era represented a pinnacle of cultural achievement, a stationary point we should have made every effort to hold onto in perpetuity, this is nothing more than the golden age fallacy. Instead, the greatest classical musicians were only great because they superseded their predecessors and contemporaries by chasing the same ephemeral distinction as Elon Musk and the white-tailed deer, and as such were contributing to a self-defeating cultural churn that baked in its own impermanence. This holds true today, as dozens of musical and artistic genres have been invented, grown steadily popular, and then turned “mainstream” and stale as their social cachet dries up.
Ironically, it is often those who are most critical of neoclassical economics who still seem wedded to its narrow and lifeless conception of optimality. Rather than moving beyond the Samuelsonian allocation paradigm to one based in creation, innovation, and discovery, they double down on the dangerous illusion that positional status competitions can easily be muted or improved on by a central planner (the “designed economy” referred to by Dawkins). While there’s obvious merit in blocking literal arms races, tweaking the tax deductibility of marketing expenses, and so on, I always worry whenever I read calls for a general luxury tax, or other excoriations of variability in the type and quality of consumables.
In the extreme, this thinking is what underlay the Marxist-Leninist ideology that transformed Mao’s China into a literal “Nation in Uniform.” A bit earlier in history, it also motivated the Soviet government’s failed attempt to make the luxury goods used by the petite bourgeoisie available to one and all. Rather than try to “eliminate” bourgeois values, in contrast, a capitalist society is healthy precisely because it enables a nation of rebels and the inequality that implies.
Resistance is Futile
One thing neoclassical economics did get right is non-satiation. Humans can never be fully satisfied: not with our mates, not with our station in life, nor with this final draft. However, this is not because we have neat, monotone preferences; rather, it’s because relative status has shaped every corner of our psyche.
Buddhism rightly teaches that this dissatisfaction, called dukkha, pervades all of existence. As Buddha supposedly once said, “I have taught one thing and one thing only, dukkha and the cessation of dukkha.” But why? If resistance is futile, why not embrace it? Satisfaction is overrated anyway. What person has ever achieved any kind of success or excellence without being tortured by anxiety, stress, or self-consciousness?
Of course Buddhists, like Stoics, would presumably question my definition of success. Maybe if we all meditated daily and simply learned to lower our expectations we’d learn to be satisfied with poverty. Yet we ran that experiment and we self-evidently were not.
Rather than be zen about our lack of zen, even Buddhist practices have ironically become (or were they not always?) their own dimension for pursuing social distinction. Don’t forget, Veblen’s magnum opus on status goods was called “The Theory of the Leisure Class,” and what could be a greater advertisement of belonging to the leisure class than the ability to sit absolutely idle for hours out of every day?
I don’t deny that meditation can be incredibly useful for reducing and controlling the stresses and anxieties of civilization. But if you’re a fan of meditation you should also not deny nor feel shame in the bourgeois half of your BoBo paradise. You are not above consumerism or hedonic treadmills. On the contrary, you are a leading light, an early adopter, an innovator in waste.
Otherwise, a monomaniacal focus on achieving nirvana (the state when all attachments and dukkha have melted away) simply becomes an agent-centric example of the social planner’s protoplasmic conception of optimality. At the same time, I recognize the futility in my own attempt to disillusion you, dear reader. As Mises wrote, human action is predicated on “the expectation that purposeful behavior has the power to remove or at least to alleviate felt uneasiness.” It just turns out that that expectation is as mistaken as it is incorrigible.
So meditate if you have to, but don’t be afraid to daydream a little, too. It may fill you with anxiety, and it definitely won’t make you happy, but later in life you just might find yourself building a spaceship to Mars.
I was only 16 at the time, but I had already developed a strong interest in moral psychology thanks to the lingering suspicion that ethics was a fatal weakness for philosophical naturalism. And so when Marc Hauser’s now classic book Moral Minds first came out in paperback, I rushed to buy a copy.
The book was a detailed exploration of human moral cognition through the lens of trolley problem experiments and Hauser’s (now dubious) research with primates. And despite Hauser’s indefensible academic misconduct, it remains a tour de force. In fact it is still in my possession, now twice as thick and stained by sunlight from multiple re-reads.
At the time I became convinced of Hauser’s basic approach that updated David Hume in light of Chomsky’s work on innate syntax. This view says that our moral sense is at base noncognitive, that it is a product of our “passions” or sensations built into us like a “moral organ”. While morality may often seem relative to culture and upbringing, it is constrained by a “universal grammar” common to all moral orders. That grammar, I believed, was the key to resolving the moral divergences between tribes. If we could only speak clearly about our shared inheritance there could be no lasting rational disagreements.
Joshua Greene’s “Deep Pragmatism”
Consider this a premonition of what Joshua Greene has since dubbed “deep pragmatism”. Greene is also a Harvard neuroscientist and expert on trolley problems, and his recent book Moral Tribes is also concerned about what he calls the “failure of common sense morality,” i.e. when divergent moral orders collide. While I am about to be quite critical of Greene, let me say at the outset that I am actually a massive fan, and that I tend to be most critical of the ones I love.
If it is fair to say Hauser’s theory merged Hume with Chomsky’s linguistics, then Greene’s theory merges Hume with Daniel Kahneman’s Dual Process Theory. He claims our non-cognitive passions are part of our System One, or automatic / intuitive mode. But if we study the evolutionary function of our passions, we can then use our System Two, or rational / conscious mode, to resolve impassioned disputes deliberatively. Specifically, Greene posits that if morality is fundamentally about enforcing cooperation in order to reap collective benefits, two tribes with distinct ethical systems for cooperation simply have to recognize that they are using different means but have common ends.
The only thing truly novel about Greene’s argument is its tantalizing terminology. Indeed, on a recent EconTalk episode Greene admits that “deep pragmatism” is just his word for plain vanilla utilitarianism. Despite formal utilitarianism’s many problems, Greene believes clashing cultures can settle disputes by consciously reformulating their ethics based on the greatest good for the greatest number. When pressed by the host with counter-examples, Greene contended that the problems with his proposal are either merely empirical or due to an insufficient application of utilitarianism (for thinking too short-term, say).
I believe Greene makes three fundamental mistakes and thus has not provided a compelling solution to the tragedy of common sense morality. On top of this, his scientific pretenses distract from the fact that his core moral arguments come straight from the proverbial armchair. Indeed, as meticulously demonstrated in Selim Berker’s The Normative Insignificance of Neuroscience, Greene has a tendency to obscure his philosophical presuppositions behind a fascinating and important, but ultimately tangential, deluge of empirical data.
The three deep mistakes Greene makes are: a) to accept the Humean starting point of moral noncognitivism, b) to reify deontological thinking and utilitarian thinking as “System One” and “System Two” respectively, and c) to leap to utilitarianism when, even accepting his premises, better alternatives exist.
Deep Problems: a) Noncognitivism Is False
Noncognitivism rose in popularity after the Enlightenment in large part due to an incorrect Cartesian view that morality, like belief, required an ultimate foundation. Hume put foundationalism to the test by taking it to its logical conclusion. In lieu of an infinite regress, Hume realized that connecting ought to is was impossible. Thus noncognitivism — and thus moral skepticism. And while Hume’s argument was valid, the premise that we need foundations in the first place was dead wrong.
Since Quine, philosophers have largely accepted coherentism for beliefs. That is, it makes most sense to think of any particular belief as inhabiting a holistic web of beliefs rather than to link beliefs in a linear chain of justifications down to some “foundational” belief. When we are persuaded to change our beliefs we thus often are required to update a large number of interdependent beliefs to ensure coherence.
It turns out the same Quinean argument works for desires, preferences and other vernaculars for Hume’s passions. It’s tempting to think of desires as following a linear chain down to some base foundational affect, implanted somewhat arbitrarily by evolution. But this is an elementary error.
While it is true that evolution has equipped us with certain somatic states (like hunger pangs), a desire (like “I desire to eat”) contains propositional content. Like beliefs, desires are part of a holistic web that we draw from in the discursive game of giving and asking for reasons. In turn, desires, like beliefs, are capable of being updated based on rational argumentation and the demand for coherence.
For whatever reason ethicists have been much slower to embrace coherentism for morality, preferring to soak in tired debates like deontology vs consequentialism. Greene is no different. And his attempted foundationalist argument for utilitarianism has not closed Hume’s gap one iota.
b) Dual Process Theory is Irrelevant
Using fMRIs to conflate deontology with automatic thinking and consequentialism with deliberative rationality is neither valid nor advances the argument. To quote University of Toronto philosopher Joseph Heath in his overview of empirical approaches to ethics:
Greene offered no reason to think that the theory of value underlying the consequentialist calculus was not based on the same sort of emotional reactions. In this respect, what he was really doing was presenting an essentially sceptical challenge to moral reasoning in general, yet optimistically assuming that it undermined only the position of his opponents.
Moreover, there are good reasons for thinking deontological modes of reasoning are essentially cognitive. As Heath argues in his book Following the Rules, social norms take the form of a web of deontic constraints that we reference, just as we reference beliefs or desires, when pressed to defend certain behavior. This makes social norms — and deontology in turn — analytically cognitivist. That is, regardless of the fact that deontic violations are more likely to elicit an emotional response, deontic reasoning must still inherently make use of System Two at some point.
Greene even acknowledges the more plausible explanation for why deontological violations cause more emotional fMRI activity than utilitarian ones: namely, that they each require different kinds of construal. Utilitarian reasoning tends to be about system wide outcomes and that level of construal imposes a psychological distance between the agent and the moral dilemma. But even if there is a link between construal level and dual process theory, just because utilitarian thinking is slow does not make slow thinking utilitarian!
c) Utilitarianism is a Non-sequitur
Even accepting all of Greene’s major premises, the conclusion of utilitarianism is still unwarranted. Greene suggests that the social function of moral psychology points to a “common good” through cooperation, but utilitarianism is only one possible interpretation.
In economics there are two basic approaches to social welfare, one top down and the other bottom up. The top down approach is the closest in spirit to the utilitarianism expressed by Greene. It posits a social welfare function and conditions that must hold for its maximization, aka the greatest good for the greatest number. Adherents of this approach have spanned centuries, from Bentham up to Pigou.
The other approach begins with the process of transaction itself. It posits that two people will only exchange if they each perceive a mutual advantage in doing so — that is, if the trade will move them toward a Pareto improvement, or win-win outcome. This is at the heart of bargaining theory, which would presumably make it a good candidate for solving the “tragedy of common sense morality,” or any scenario where conflicting interests or value systems collide.
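The Pareto standard lends itself to a simple formal statement. A minimal sketch (the `pareto_improvement` helper and the toy utility numbers are hypothetical, for illustration only): a reallocation is a Pareto improvement if it leaves no one worse off and makes at least one party better off.

```python
def pareto_improvement(before, after):
    """True if no one's utility falls and at least one person's rises."""
    return (all(b <= a for b, a in zip(before, after))
            and any(b < a for b, a in zip(before, after)))

# A voluntary trade: both parties gain.
print(pareto_improvement([5, 3], [6, 4]))  # True

# A pure transfer: one party gains only at the other's expense.
print(pareto_improvement([5, 3], [7, 2]))  # False
```

The second case is exactly what distinguishes the bottom-up approach from crude util maximization: total utility rises (from 8 to 9), yet the move fails the Pareto test because someone was made worse off.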
One of the worst “tragedies of common sense morality” in history occurred in the 1600s, when Protestants and Catholics fought throughout Europe in the Thirty Years’ War. From the ruin rose modern Liberalism and the legal basis for religious toleration and value pluralism. Liberalism’s core value is thus mutual advantage in the Paretian sense, not a crude formula for util maximization.
In fact there is a substantial literature within trolley problem research analyzing the effect of Paretian considerations on moral judgment. Greene is even a contributor. Indeed, in all sorts of artificial moral dilemmas, subjects are consistently more likely to judge harm as permissible if it leads to a Pareto improvement.
For instance, this 2011 paper [pdf warning] co-authored by Marc Hauser suggests that “Paretian considerations should be treated as an abstract principle that is operative in folk-moral judgment across a wide variety of contexts, involving different sources of threat and different degrees of contact.” Note that this fits the criteria for Greene’s “deep pragmatism” surprisingly well, without any of the attendant controversy or highly demanding prescriptions surrounding Peter Singer-style utilitarianism. Indeed, the authors are correct to report that Paretian considerations “provide a reason for action even for the non-consequentialist.”
Despite my skepticism for Joshua Greene’s “deep pragmatism” I strongly commend his efforts. In fact it is mostly in line with my own approach. Yet its current manifestation suffers from philosophical naiveté.
Humean noncognitivism is tempting for any student of psychology, but it turns out to be philosophically untenable. Indeed, by their very nature the deontic statuses we assign taboos and other social norms are part of a cognitive process of giving and asking for reasons. We can even reason and debate over our desires and preferences since (in contrast to pure affect) they carry propositional content.
Furthermore, while utilitarian calculations often require overriding our more intense “gut reactions,” that does not make them any more foundational to morality. This is especially the case when it is always possible to interpret ostensibly utilitarian outcomes as resulting from a bottom-up process that respects the Pareto standard.
And from the point of view of resolving tragedies of common sense morality, liberal principles like value neutrality and free expression that implicitly endorse Pareto have never been more influential on a global scale, nor more vital for our continued peaceful coexistence. The inferiority of the utilitarian alternative is shown in the recent attacks on free expression in Paris. Who today could defend Charlie Hebdo’s provocative iconoclasm on purely utilitarian grounds in a country of perhaps 6 million Muslims?
Finally, it is important to remind ourselves that free expression as such is not a “Western Value” unique to the strategy of our hemispheric “tribe”. Rather, the Pareto standard of mutual benefit transcends the tribe and the individual as the only proven basis for peaceful, pluralistic civilization.
What types of people are likely to be risk averse, to fear the vagaries of lady luck? Curious, I looked up the personality literature:
By MBTI personality types, risk tolerance is negatively related with Introversion, Sensing, and Judging; and positively associated with Extroversion, Intuitive, and Perceiving types.
By NEO personality types, risk tolerance is negatively associated with “Conscientiousness Factor (Overall) and the Conscientiousness Subscales – C1: Competence, C2: Order, C3: Dutifulness, C4: Achievement.”
Interestingly, it seems like the more “stoic” personality characteristics are related to risk aversion. Yet if one is relatively immune to the vagaries of lady luck, (all else equal) shouldn’t one be more risk tolerant? Is there no moral hazard for emotional insurance? Indeed, who could rightfully claim to be “stoic” about loss and then constantly try to avoid it?
In his latest Umlaut piece, Adam takes exception to the Epicurean and Stoic greats, arguing against the total mastery of luck:
The philosophic quest to banish luck has largely been a failure, though it has given us many useful tools along the way. But taken at their word schools such as stoicism practically ask us to cut off our limbs to avoid the risk of lady luck taking them from us.
From the POV of entrepreneurial capitalism, extroversion, restlessness, and leaps of faith are all virtues. But the stoics weren’t capitalists. As Adam alludes, they generally shunned what we would today call conspicuous consumption. The stoic Cato, for example, would intentionally wear the opposite of what was fashionable, an F.U. to the proverbial “Joneses” he decided weren’t worth keeping up with. How paradoxical — to embrace the unfashionable to dodge the risk of appearing out of fashion?
Perhaps the Stoics’ apparent self-spiting was really an anti-fragile sensitivity to tail risks — cutting off a limb to save the head — that only seems like a reductio ad absurdum because of the narrow time frame. If accurate, it should at the very least change the way one invests.
I admire the stoics, but like Adam, have tried to “unbundle” their wisdom. In spite of my efforts, stoicism and asceticism are a package deal.
Partly it’s the failure of my reason to master my passions given the allure of prestige and rewards. Partly it’s a problem of collective action — few of my own risks are statistically independent. I feel like the world is on a roller coaster increasing in speed until it collapses, but my only choices are to ride it or stand in its shadow: might as well enjoy the ride. And so a tincture of risk tolerance inevitably leads to a gallon of reward tolerance. A taste of the fashionable, and from there, a hedonic treadmill that is pressing our luck.
AB’s post is inspiring because it touches on an intuition I’ve had that I haven’t seen discussed too much in the technological unemployment debate.
The thing that we think humans are good at is actually what they are terrible at. Namely: reason, logic, strict formal rule-following.
As Kahneman or Baumeister or any psychologist will tell you, thinking through a lot of math equations is extremely hard for us. Multiplying 3,464,900 by 4,562 in your head, without recourse to pen and paper, puts a strain on you, and it doesn’t take many such math problems before your capabilities are compromised and you start giving inaccurate answers.
Once we have the concepts of numbers and multiplication, automating that process just makes sense. A cheap computer can give you the right answer to the above question in a fraction of a second. Human laborers will never outperform automation once the algorithm-makers have isolated the nature of what needs to be done over and over.
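To make the contrast concrete, here is the exact arithmetic problem from the paragraph above handed to a machine (a trivial sketch; the timing check is illustrative):

```python
import time

# The mental-arithmetic problem from the text, delegated to silicon.
start = time.perf_counter()
product = 3_464_900 * 4_562
elapsed = time.perf_counter() - start

print(product)          # 15806873800
print(elapsed < 0.001)  # True: far under a millisecond on any modern machine
```

What exhausts a human after one problem, the machine can repeat millions of times per second without error, which is exactly why rote, isolated procedures are the first things to be automated.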
But that brings us back to AB’s post. Computers aren’t so great at discovery. As AB put it:
But where do these algorithms come from? Who tells the robot how to make a better hamburger? I’ll tell you one thing, it sure isn’t going to be a computer programmer who can’t make anything fancier than ramen noodles himself.
Substitute for “hamburger” the next great X. In AB’s post, it’s the next great line of Toyotas. But X can be just about anything. And it’s hard to believe that finding it will always or even mostly take PhD-level skills. The lion’s share of the advancements from the Industrial Revolution came from tinkerers discovering through rote trial and error. Perhaps Tyler Cowen is correct that we have used up all the low-hanging fruit in this regard (I’m skeptical), but it seems unlikely that we have exploited many of the possibilities of combining this discovery process with after-the-fact automation.