This is my first post here, so let me begin in the manner in which I hope to continue: with some backhanded praise. Adam Gurri’s concept and critique of “telescopic morality” is a fantastic rhetorical flourish. It evokes the idea of a moral trompe l’oeil, a trick in which we erroneously transmute distant moral concerns into near ones. It’s also a great entry point into some excellent analysis of our relationship to information, and our general irascibility on social media. If you haven’t read the article that lays out the idea, you should. So far, so good.
It is also what a certain kind of academic might describe, archly, as “problematic”. It rests on twin foundations: first, that there are relevant moral distinctions based on proximity (literal or metaphorical); and, second, that telescopic morality tends towards the Sisyphean, since it encourages ineffectual or irrelevant moral actions. I disagree with both of those propositions.
But there’s more to this discussion than those two propositions alone. The riptide is a suspicion about consequentialist ethical systems (or perhaps just utilitarianism). That’s a much larger battle, but I would like to set down what I regard as a limiting principle, whatever your ethical proclivities: numbers cannot be ignored in any ethical construct, particularly as applied to charity.
Near, far, wherever you are…
Though it’s not explicit, Adam’s critique of “telescopic morality” is really a critique of Peter Singer’s “expanding circle”. For the unfamiliar, Singer is a utilitarian in the bullet-biting mold. He is famous for many things, but the most relevant for current purposes is his admonition to give generously to charity, particularly in the developing world. “The Drowning Child and the Expanding Circle” is a quick and easy introduction.
The “expanding circle” refers to the expanding scope of ethical concern – beginning traditionally with the individual, then the family, class, nation, and so on to humanity at large. Interestingly, one of Singer’s first arguments in favor of expanding the circle is made on a pragmatic basis: that it is now possible to effect change on a global basis in a way that was impossible in the past. This is the inverse of Adam’s view—that it’s essentially impossible to make a difference—so I’ll revisit it in a moment.
The second argument is that the interconnection of human interests (think global trade; think global warming) leads ineluctably to the development of a global ethic. There is something to this idea. Though we may never meet the Bangladeshi who sewed our t-shirt the way we could once have known a village tailor, do we owe them any less an obligation of community? This is the kernel of the much-maligned, over-used, but not entirely inaccurate metaphor of the “global village”.
From these precepts one could build a fairly strong set of claims about the ethical value of distant persons. One could also make strong claims about the importance of distant persons based on fundamental rights, or working off some Kantian imperative of universalizability, or perhaps even through the virtue of compassion. Yet I don’t think we need to go all the way to the fundamental equality of man to embrace “telescopic morality.”
No, all we need to do is demonstrate that distant persons have some non-trivial value and then let math do the rest of the work. Getting over this hurdle is fairly easy. Whatever you think about the methodological foundation of ethics, our collective ethical instincts have some pulling power. And you will be hard pressed to find allies if your starting position is that distant persons have zero value. If you accept that distant persons have some ethical value then, as the old joke goes, you’re just haggling over the price.
Sisyphus pushes the rock downhill
And when you’re haggling about the price, you’ll want to think about the exchange rate. There’s a reason I say “prices” and “exchange rates.” The language of finance is a neat little heuristic in the debate about charitable interventions. It’s probably the easiest entry point to effective altruism because it provides a ready-made language in which to talk about the relative merits of two charitable options.
Try this comparison about the relative merits of Make a Wish and VillageReach, for example, stolen shamelessly from a brilliant post by Jason Kuznicki:
“One wish [from Make a Wish] for one relatively privileged (albeit distinctly unlucky) first-world kid. Or fifteen hundred years of life [from VillageReach] — for children who will otherwise die.”
Or, two examples from Giving What We Can:
“[T]he UK’s National Health Service considers it cost-effective to spend up to £20,000 (over $30,000) for a single year of healthy life saved. […] Against Malaria Foundation distributes mosquito nets at a cost of only $2,300 per life saved.”
“For the same amount of money as training one guide dog, we could instead completely cure over 2,000 people of blindness.”
These kinds of results should surprise no one in the developed world. The low-hanging fruit has already been picked in first-world countries, and you can leverage an enormous purchasing-power differential when spending dollars (or euros, or pounds) in the third world.
So, the question to ask yourself is how much more do you care about your neighbors than those in foreign countries? This is what I like to call your “personal ethical exchange rate”. Take a moment, if you’re willing, to set an exchange rate between the interests of hypothetical neighbor A and hypothetical distant person B.
Do you care thousands of times more about A than B? Tens of thousands of times more? Because when it comes to the relative efficiency of local versus foreign interventions, that’s the order of magnitude we’re talking about. Few people, if any, are prepared to discount distant lives quite so heavily, and those who are must defend a far more difficult proposition than merely “one should favor one’s neighbors.”
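To make that order-of-magnitude claim concrete, here is a back-of-the-envelope sketch in Python using the figures quoted above. The assumption of 35 healthy life-years gained per malaria death averted is mine, for illustration only, and not a number from the sources cited:

```python
# Implied "personal ethical exchange rate", using the Giving What We
# Can figures quoted above.

# Local benchmark: the NHS treats up to ~$30,000 per healthy life-year
# as cost-effective.
nhs_cost_per_life_year = 30_000  # USD

# Distant option: Against Malaria Foundation, ~$2,300 per life saved.
amf_cost_per_life = 2_300  # USD

# Illustrative assumption (not from the article): each life saved gains
# roughly 35 healthy life-years.
assumed_life_years_gained = 35
amf_cost_per_life_year = amf_cost_per_life / assumed_life_years_gained

# To prefer spending locally, you must value a local life-year at least
# this many times more than a distant one:
implied_rate = nhs_cost_per_life_year / amf_cost_per_life_year
print(f"Implied exchange rate: {implied_rate:,.0f}x")  # → Implied exchange rate: 457x
```

Even on this deliberately rough sketch the factor is in the hundreds; the guide-dog comparison quoted above implies a factor of at least 2,000, since the same money cures 2,000 people outright.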
We can’t be heroes
When Adam argues that telescopic morality is ineffectual, he is (on my reading at least) arguing that the scale of certain problems is so great that individual interventions are immaterial. A key quote:
“The crusader on behalf of the greater good who fights their hardest on behalf of policies whose outcomes they cannot hope to actually measure is nothing compared to the everyday citizen who does not hesitate to help pick up the pieces after a disaster. Hell, the activist-crusader is nothing compared to the neighbor who helps clear up the snow after a blizzard.”
Adam is tallying efforts like those of VillageReach and Against Malaria against the totality of problems such as child health and malaria infection in the developing world. Those problems seem to be insurmountable or, at least, of a scale much larger than, say, building local community networks.
However, there seems to me to be no clear reason why one would measure each intervention only in terms of progress (with respect to some arbitrarily defined “problem”), rather than on a standalone basis. I regard each choice as a kind of ethical unit—an object of discrete analysis—and one which may have its own merits or demerits. That choice should not be thought of as just another block on a progress bar, its value determined by how many percentage points it has brought you closer to 100. (By analogy, if asked whether you would rather have paid down 1% of your mortgage or 5% of your credit card, looking only at the percentages would be silly.)
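To put the mortgage analogy in numbers (the balances below are hypothetical, chosen purely for illustration):

```python
# Hypothetical balances, for illustration only.
mortgage_balance = 300_000  # USD
card_balance = 4_000        # USD

# "Progress" measured as a percentage of each problem:
mortgage_paydown = 0.01 * mortgage_balance  # 1% of the mortgage
card_paydown = 0.05 * card_balance          # 5% of the card

# By percentage points the card payment looks five times better; on a
# standalone basis the mortgage payment retires fifteen times as much debt.
print(mortgage_paydown, card_paydown)  # → 3000.0 200.0
```

The percentage tells you nothing about the size of the underlying unit, which is precisely why each choice needs a standalone measure.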
But even so, there’s plenty of evidence that seemingly insurmountable problems can be overcome. Smallpox has been eradicated, and nearly 1 billion people have risen out of extreme poverty. Our actions in distant places have a distinct, measurable outcome. Those results are no less distinct or measurable because they are not directly observed by us. None of this is to say that there aren’t important debates around the effectiveness of aid. But those debates are founded on a shared assumption that helping distant persons is a worthwhile goal if it can, in fact, be achieved.
The consequences of virtue
All that I have said above is intensely focused on outcomes. It’s the tallying of results. If that sticks in your craw, as I imagine it might in Adam’s, it may be because your system of ethics is based on who people are rather than what they do. You might be more interested in virtue than in results. And you might think a person’s place in their community is a better measure of their ethics than some arid accounting of global benefits.
I am not a virtue ethicist in any traditional sense, so I can’t honestly say I find that point of view persuasive. But I am not hostile to it. It falls, for me, into the “reasonable minds may disagree” mental category. However, I am not prepared to extend that generosity to a critique of “telescopic morality.” And that is because, at its most basic, I do not think that virtue ethics is a license to be innumerate. As Joseph Raz rather neatly puts it, “numbers matter.” And once you begin to look at the numbers, the case for helping the distant over your local community is, in my view, overwhelming. Indeed, I think the virtues of wisdom and compassion demand it.
In my more whimsical moments, I like to think of Singer as the Ghost of Cognitive Dissonance, haunting undergraduate ethicists in well-to-do universities worldwide. “Don’t you agree you are morally obliged to give to the very poor?” “Then why don’t you?”
I’m going to leave the animal world to one side for a moment, notwithstanding that it is mentioned by Singer and seems to me to be obviously in scope. This is mainly because the case for vegetarianism tends to be so obvious, so right, and so utterly inconsistent with most people’s choices that it produces violent objections. Accordingly, it has a tendency to derail conversations.
Singer also makes an evolutionary argument for the expanding circle, which I fully intend to ignore right now.
At this point I will mention Experimental Philosophy (aka “x-phi”), a line of inquiry in which empirical results from, for example, surveys can be used for various philosophical purposes. It’s particularly interesting when applied to classic ethical problems, like the Trolley Problem. I only mention it, rather than really weigh in, because I live in suppressed fear of being assassinated by trolley in some sleepy university town.
I admit this is a more controversial, and complex, question than I have really indicated here. One set of people who don’t much like thinking in pieces in this way are rule consequentialists (although, in my defense, there are just as many problems with thinking in recursive act-rule loops). There’s also the more prosaic point that thinking about ethics in tiny little pieces is some version of mental transaction-costs hell. That said, I think my point about arbitrary “progress” measures stands.