The smart people have thought it through: Earth puts all our eggs in one basket. While the world is fairly healthy now and incremental progress marches on, it just takes one major asteroid, one nuclear world war, one mass extinction event, or one unfriendly AI to send us back to the stone age (or worse). And while the probability of most apocalyptic scenarios is tiny, they can be made infinitesimal by diversifying humanity across another astronomical object.
Earth 2 is about diversifying against risk. Colonizing Mars does nothing to diversify against the risk of a massive solar storm, since it shares the same star. But it does diversify against the risk of global contagion, climate catastrophe, and robot wars. If and when we colonize Mars, however, it would defeat the purpose to carry on extensive interplanetary trade. Deep linkages between Earth 1 and 2 might have short-term benefits (the division of Martian labor is limited by the extent of the market), but they would also allow major shocks on one planet to propagate to the other—the very thing the costly endeavor of terraforming Mars is trying to avoid.
It’s worth reiterating how long-term and counter-intuitive this thinking is. Musk and Hawking are not unmoved by the massive improvements that have been made in global health and well-being in their lifetimes. But they also aren’t autoregressive thinkers, expecting the state at period T to always closely resemble the state at period T-1. Their mental model contains the possibility of slow and steady progress that is totally reversed by a single, rare roll of the dice.
Now substitute the linkages with Earth 2 with the linkages globalization has created with the rest of the world. And substitute the far-fetched risk of an unfriendly AI with the very real risks of global financial crisis, collapsing political authority, and reversion to predominantly decadent and dysfunctional political institutions like those of the pre-Enlightenment era.
This is the forecast in the subtext of neoreactionary and paleocon fears about modernity and globalization. While global trade networks have created supply chain redundancies, reduced inter-state conflict, and raised millions out of poverty, a common market also creates the possibility of systemic risks that affect everyone. Think Europe’s deflationary debt spiral. Add to that a dominant liberal-cosmopolitan ideology which denies hard truths about assimilation, and you get an out-of-control refugee crisis, authoritarian backlash, and the terminal decline of Western civilization. Or something like that.
See, I don’t know if I buy any part of this story. But I believe in the principle of charity, which means framing my opponent’s view in the best possible light and avoiding attacks based on their motivation. And often the best way to do just that is to substitute your opponent’s argument with an analogous one you’re already sympathetic to. In this case, my opponents are anti-globalists. They want an Earth 2 called the West, which limits its linkages with the rest of the world, forgoing short-run benefits in favor of protection against radical, systemic changes with uncertain long-run effects.
Given my priors, it’s easy for me to think of anti-globalists as simply xenophobic hatemongers, whose fear of modernity is nothing more than sublimated white identity politics. And many of them are. But I also know many who aren’t—many who are smarter than me, and more kind and humane, too.
So I’m torn. If the rallying cry for anti-globalists is “build that wall,” then the rallying cry for Musk and Hawking is “build that space moat.” I accept the latter but whole-heartedly reject the former, and can’t quite figure out why. Mood affiliation? Status seeking? Because the fantasy of colonizing Mars is far, while the implications of not responding to the refugee crisis are here and now, and will lead to real and tangible harms? Or because, in the case of globalization vs. isolationism, the uncertainties cut both ways in roughly equal proportion?
It’s this last possibility that, Tyler Cowen argues, means we should focus on doing things we know produce good consequences in the short run:
Let us start with a simple example, namely a suicide bomber who seeks to detonate a nuclear device in midtown Manhattan. Obviously we would seek to stop the bomber, or at least try to reduce the probability of a detonation. We can think of this example as standing in more generally for choices, decisions, and policies that affect the long-term prospects of our civilization.
If we stop the bomber, we know that in the short run we will save millions of lives, avoid a massive tragedy, and protect the long-term strength, prosperity, and freedom of the United States. Reasonable moral people, regardless of the details of their meta-ethical stances, should not argue against stopping the bomber.
No matter how hard we try to stop the bomber, we are not, a priori, committed to a very definite view of how effective prevention will turn out in the long run. After all, stopping the bomber will reshuffle future genetic identities, and may imply the birth of a future Hitler. Even trying to stop the bomber, with no guarantee of success, will remix the future in similar fashion. Still, we can see a significant net welfare improvement in the short run, while facing radical generic uncertainty about the future in any case.