Maybe it was a bad idea to cite an acerbic guy like Lubos Motl. When a guy says that a lot of questions are just stupid, that’s not exactly “sweet talk.” Motl has an important point, but I won’t defend his tone.
He did take the time to outline exactly what he means when he says “stupid questions,” and not only does that definition not apply to Adam, it is also fully consistent with the Gadamer quote Adam gave us. In fact, I am as surprised that Adam would quote an argument in favor of authentic dialogue as a response to a criticism of inauthentic questions as I was when Samuel quoted a Situationist to critique my endorsement of Situationism.
Clearly there is a gap between what I think I’m saying and the message I actually manage to convey. And clearly that gap is my doing: it keeps happening, and I am the common denominator. Motl may be wrong to take such an aggressive tone, but at least he gets his point across. No such luck for me. Even when my fellow Sweet Talkers agree with me, they think they disagree.
Still, it was a little disheartening to see Adam call out Motl on free will. Adam notes that Motl does not appear to have bothered to consult the SEP’s entry on free will. But I did – and here is what I noticed: the entry does not include any discussion of the Free Will Theorem discussed by Motl in his post, despite the fact that the Theorem is directly relevant to the philosophy of free will. This oversight is a major validation of Motl’s point. Did Adam read about the Theorem before he wrote his last post?
I also find it a somewhat weak argument against my claims to stick me with an -ism or two and then argue against those -isms rather than against any of my claims.
Maybe we can all improve the authenticity of our dialogue.
Well, I’ll answer Adam’s questions as clearly and authentically as I can, and I hope I won’t disappoint the reader too much by not giving you this:
The Rapid-Fire Questions
Is mine a philosophy of “scientism?” No. I never meant to suggest that science is the only valid form of inquiry. I feel bad about having given that impression. However, I don’t think Adam is taking Motl’s criticism seriously enough. Some “problems” are only problems because the language by which they have been formulated is ambiguous. I have mentioned this before. The Free Will Theorem I cited above, along with its philosophical implications, is a good example of what I believe to be a better way forward. Does Adam disagree?
Is “scientism” the reason I think morality ought to be grounded in psychology? No. I think so because morality and philosophy ought to make human beings measurably happier and more mentally healthy than they otherwise would be. Those things just happen to fall under the umbrella of psychology.
Is mine a philosophy of pragmatism? No, at least not in the formal sense. As I mentioned in my comment to Adam (and in Episode One), all I’m really saying is that she who gives a beggar a dollar for utilitarian reasons is no less moral than she who does so for eudaimonic reasons. If Adam feels otherwise, I would love to know his reasons.
What do I suppose I’m doing? What do I believe the scientific method is? Has anyone ever done this other kind of inquiry? I hope my answers to the previous questions clear these questions up, too. I will only point out that these are not authentic questions in the sense that Gadamer would mean.
The Real Meat
Adam’s final three questions should have been his first three, because they are the most important questions he asks. I’d like to spend a little more time on these.
I’ll start with his third question, but I am going to rephrase it in light of my answers above. Adam writes, “What is the difference between the reasoning done to arrive at the scientism-pragmatism position, and the reasoning that is being rejected as unscientific?” My rephrasing is as follows: What is the difference between the reasoning done to arrive at my position and the reasoning done to arrive at, e.g., Adam’s position?
My answer: nothing. In my framework, the reasoning is irrelevant; only the outcome matters. You are what you do; that is all you can ever be.
I’m trying to decide how best to describe this. Let me try it this way: If I am a knave, I hope that Adam will not give me a free pass just for having well-thought-out philosophical justifications for my behavior. In the end, knaves are knaves.
That much is intuitive enough, but there’s a flip-side: If Adam is a moral person (and he is), he doesn’t get “extra credit” for having an airtight intellectual moral framework. To the people in his life, it matters only that he is a good person. His family won’t disown him if they find out that he is a good-but-philosophically-inconsistent person. I can’t speak for them, but I can tell you that my family has a good laugh at all the time I invest in philosophy. In the end, it doesn’t really matter to them how I think; it only matters what I think. Even then, what I think isn’t nearly as important as what I do. This was one of the points made in that other post I linked to, from The Last Psychiatrist, who also happens to write in many of his posts that “you are what you do.”
(The other important point made there is relevant to something Samuel pointed out in his post. A lot of our thoughts are really just defense mechanisms against mental change. When that’s true, it’s often better to just bypass the thoughts entirely and focus on the actions, the results. Sound familiar?)
So, in a funny way, if Adam finds my ideas totally unpersuasive and unappealing, and goes about living his life as a great eudaimonist, and this makes him happy and sane and prosperous and healthy, not only can he claim a victory, but so can I.
Adam’s first good question was “How do we determine what counts as ‘getting results’?” I like the way I addressed this in my comment: When doctors want to treat pain, they show their patients a picture of faces ranked from 0 (no pain) to 10 (worst pain) and ask them to point:
A similar approach can be used for moral and psychological well-being: the closer you are to “0,” the more we can say you’re getting good results; the closer you are to “10,” the worse your philosophy is. And because moral behavior affects other people, we consider their moral and psychological well-being, too. So, really, the closer you and everyone you affect is to 0, the better your philosophy; the closer you all are to 10, the worse your philosophy.
Any more specific determination of “getting results” would do a worse job of getting those results. I reiterate: the rationale does not matter; only the actions, only the results. So it doesn’t matter that we don’t define it any more specifically. The way you feel about things, actions, and deeds might be subjective, but the fact that you feel that way is not. While this seems like a radically subjective point of view, it’s actually as close as we can get to an objective standard of morality. I don’t need to know the ins and outs of your preferences in order to know what kind of moral behavior will help you achieve them. We can measure your success by the extent to which you do achieve them and the extent to which doing so moves you closer to 0 than to 10.
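For readers who like things concrete, the scoring scheme described above can be sketched in a few lines. This is purely illustrative: the function name and the sample ratings are my own assumptions, not anything from the chart or the original comment.

```python
# Toy sketch of the "closer to 0 than 10" idea: average the
# self-reported well-being of everyone affected by a course of action.
# Ratings run from 0 (best) to 10 (worst), as on the pain chart.

def philosophy_score(wellbeing_ratings):
    """Mean self-reported rating across you and everyone you affect.

    Lower is better: near 0 means the philosophy is getting results,
    near 10 means it is not.
    """
    return sum(wellbeing_ratings) / len(wellbeing_ratings)

# Hypothetical example: you plus three people your behavior affects
# each point at the chart.
ratings = [1, 2, 0, 3]
print(philosophy_score(ratings))  # 1.5 — closer to 0 than to 10
```

The design choice mirrors the argument: no weighting, no utility function, just the reported feelings averaged together.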
Now we get to Adam’s second question, which to me is the last question. “What is it about psychology that makes its conclusions more trustworthy than moral philosophy?”
Adam is used to thinking philosophically, and moral philosophy is prescriptive in nature – especially Adam’s eudaimonism. Eudaimonism prescribes virtues by which to live, rules which produce virtue, i.e. moral behavior, if followed correctly, and immorality if followed incorrectly or ignored. Because of this, I can understand why he would be suspicious of a system that uses psychological findings to prescribe actions to us which must be followed. What puts a psychologist in a better position than a philosopher to call behavior moral or immoral?
If I’m correct about Adam’s thinking, then his suspicion is a little off-base. Psychology doesn’t provide us with a prescriptive list of acceptable vs. unacceptable behaviors, so we need not worry that its prescriptions aren’t valid. Psychology only provides descriptions of human behaviors and establishes cause-and-effect relationships between those behaviors and our feelings. (Psychology also provides insight into physical abnormalities of the brain and therapeutic options available to those experiencing them, but that is of course less useful for informing our moral conduct.)
Nor am I suggesting that we match up all our behaviors with “happiness research.” To be sure, if research finds certain things produce happiness reliably, we ought to pay attention, but it would be an odd way to approach happiness to go searching for it in the clinical literature!
To wit, psychology can’t tell you that heading straight for your bottle of Glenlivet every day after work is an immoral thing to do, but it can tell you that doing so is associated with disruptions in family life. It can’t tell you that having a drink now is a sin, but it can tell you that if you happen to notice that your drinking is bothering the other people around you, you might consider a cup of coffee instead. All I’m saying is, look at the chart, point out how your behavior is making you and other people feel, and then decide whether it’s scotch or iced tea tonight.
Take note of how this differs from moral philosophy. There is no discussion of the virtue of whiskey, or the concept of moderation, or the value of principles in theory. There are no principles to abide by. There are merely choices and outcomes. What outcome do you want? Okay, then make the choice that corresponds to that outcome. Maybe having a drink is exactly what you want to do right now, and maybe it has no adverse impact on your mental health or anyone else’s. Then have that drink. But maybe while you’re having that drink your wife is in the other room reading Fifty Shades of Grey and wondering when was the last time you two slept on satin sheets. If so, then you have to choose not to have that drink if you want to experience the outcome associated with that choice.
This sounds like common sense because it is common sense. You could reach the same conclusion by maxing your U subject to vector C, but why do all that when you can just point to the right smiley face and act accordingly?
In theory, complex moral philosophy unlocks a higher truth; in practice, Occam’s Razor trims the fat. And here’s the crux of it: So long as I’m getting the same results you are, who’s to say I’m wrong? If you find this approach more problematic than eudaimonism, for example, then there must be some reasoning that establishes why. I’d love to hear it.
The goal in a morality governed by psychology is not to balance a list of virtues or adhere to a deontology or to maximize global utility. Instead, the goal is to make personal choices that reward you with more happiness. (That’s long-term happiness, champ. You don’t get to spend all your time on hookers and blow “because it makes you happy.” Twenty years from now, your arteries are going to shatter, and that’s also part of the calculus here.) More zeroes and fewer tens, more smileys and fewer frownies.
Does this make me an adherent of “pragmatism,” or “scientism,” or does it mean I am condemning all philosophy as worthless? These suggestions are strange and surprising to me. I’m suggesting that we arrive at moral conclusions based on what actually works and makes us happy – anyone who offers an approach based on any other criteria is, in my view, missing the point of moral philosophy. We’re not here to develop clever reasoning, we’re here to be happy.
5 thoughts on “Theory and Practice, Episode Three”
Doesn’t this discussion fall foul of Motl’s first problem with philosophy: ill-defined terms? What do you mean by “happy”? It’s not a dumb question – people with no obvious reason to be unhappy sometimes are. And they’re not all nihilists or feminists. Expanding the issue to others’ happiness only compounds it.
To explore this further, we end up doing philosophy, talking about existence, meaning, even virtue (which is about how to live life, not rules). Or we muddle along, which is fine but no more calculated to make us happy (whatever that is) than the philosophising method. (Less so, if we have a taste for philosophising.)
Of course Motl falls foul of his own criticisms. What does ‘spacetime regions really “invent the answers”‘ mean? On the face of it he seems to be ascribing intelligence to spacetime regions, but what he means and why this would be earthshaking for free will theorists isn’t clear either.
Absolutely not claiming that science is irrelevant to philosophy, btw. Assuming philosophy is only done by closeted people called ‘philosophers’ is a mistake, though. Philosophy is about trying to get past & clarify ill-defined terms, remove ambiguity etc. As the wiki puts it in your link, “see Bell’s Theorem”. Philosophers can do physics (despite rumours), and physicists can do philosophy.
What I mean by happy is “0 on the chart.” I’m purposely not defining it. First, because as you say it would end up being pointlessly ambiguous. Second – and more importantly – because it doesn’t matter what anyone “means” by happiness. The only thing that matters is whether we feel that way.
I can’t overstate this. If you spent the rest of your days absolutely thrilled to be alive, but cognizant of the fact that your life fell short of a hypothetical and purely intellectual “true happiness,” would you care? If so, you shouldn’t.
Happiness isn’t a definition, it’s something you feel. At any point in time, you should be able to assess your own personal level of happiness and express it with a corresponding smiley face. That’s it, that’s all there is to it. No philosophy required.
That’s fine…until you are trying to decide what to do. What will move you from a 7 to a 6? More money? More free time? Will kids increase or decrease your happiness? That’s where reflection comes in. What’s the alternative? Flailing around trying random changes in the hope one works?
This is not necessarily philosophy – but it’s judgement that has nothing to do with science.
And the question of other people’s happiness is even more complex. You need to judge what will increase the happiness of those people affected by your actions if you’re balancing your happiness with theirs. Hopefully they’ll tell you – but some may have no idea what will make them happy. (Some won’t even be good judges of how happy they are, even if they “should”.)
The complexity of all that is precisely why Bentham came up with utilitarianism in the first place. He suggested it as an aid to rulers in deciding what laws to enact.
“but it’s judgement that has nothing to do with science.”
Yes, that’s true – which is precisely why I have explicitly rejected scientism in the above post.
You’ve doubled down on complexity, but I think you might be giving it too much credit. It’s not necessary for me to perform utilitarian calculus when making decisions with Bob if I can just talk to Bob. That’s easy enough. You could say, “What if there are lots of Bobs and you can’t talk to all of them?” My response: We do the best we can. If you can show me that a complex moral theory yields superior results, I will use it, because results are what matter. But there is no sense engaging in complexity for its own sake.