I want you guys to help me out with something I’ve been struggling with lately. Is there such a thing as something being good in itself? Is all value in some way derived from either evolution or some version of felt satisfaction and pleasure, or both? Does this question even matter?
Of course, at the end of the day there has to be some final good or goods in order for anything to have instrumental value. By definition, something being instrumental means its value is derived from its ability to serve as a means to some valued end. In other words, the value flows from the end.
But a lot of people feel that pleasure and its healthier cousin, satisfaction, are not good enough (not satisfying?) as ends. And a lot of people recoil at the idea that it’s all just derived from what enables effective reproduction across generations (which is what any evolutionary account boils down to).
This was on my mind again today as I read Rupert Read and Nassim Taleb’s paper on religion as intergenerational risk management. Taleb’s whole worldview, the thing that has made him famous, is entirely consequences-oriented: if you don’t think in terms of bounds rather than attempting to master probability distributions you can’t actually know, a black swan will come and kick your ass and your loved ones’ asses and your civilization’s ass and probably the ass of the whole human race. His arguments are given their bite, in short, by the threat of catastrophic consequences.
Yet Taleb says, left and right, that we should do things only because it is our duty to do them. He is clearly some kind of deontologist with a fondness for virtue ethics, especially the Stoics. But should we do our duty because of the categorical imperative, which is indifferent to consequences, or because it is the right thing to do, in itself?
It seems to me that there is no getting around the fact that human morality emerged through the evolutionary process, and any morality that results in the destruction of the peoples who adopt it is unlikely to last for long. It also seems to me that you should do the right thing even when it might mean very bad consequences, at least in the short term. But I’m not comfortable making the logical leap from there to saying that the right thing inherently means doing what has the best consequences on the longest term and the largest scale.
I have more thoughts on this, but I’d rather leave it as a question now than attempt to answer it any further. All thoughts are welcome.