Adam has a new post raising the problem of liberalism and neutrality:
Liberal neutrality is the idea, embraced by people like Hayek, that the ideal of pluralism, and the system it engenders, is itself ethically neutral.
I see two related concepts. The first sense of ‘liberal neutrality’ is the meta-value of pluralism: the value that others should be able to express their own values. I’ve discussed the varieties of libertarian value neutrality before here.
The other sense of ‘liberal neutrality’ is what Hayek called the generality norm, or the principle that the rule of law should apply evenly, without targeting individuals or speciously discriminating among groups.
One way to connect the two is to realize generality is, in James Buchanan’s words, “the sine qua non of law itself.” The neutrality of value pluralism relies on the generality of law, in the sense that a non-discriminatory set of rules shouldn’t favor one group over another (equality under the law). Buchanan argues that this is part of a social contract that gives the state and law legitimacy.
Hayek emphasized the generality of rule of law as well, though his was an evolutionary rather than contractarian account. Here’s a quotation from Eugene Miller describing Hayek’s view as found in The Constitution of Liberty:
A command is an order to someone to take a particular action or refrain from it; and it presupposes someone who has issued the command. A law, by contrast, ‘is directed to unknown people,’ and it speaks in an impersonal voice. It abstracts ‘from all particular circumstances of time and place’ and ‘refers only to such conditions as may occur anywhere and at any time.’ …
The principle of generality does not, however, encompass the requirement that ‘any law should apply equally to all.’ A law might be general and yet make different provisions for different classes of persons and, where classes are defined narrowly, implicitly favour specific individuals …
Hayek’s case for freedom is not built around the idea of individual rights, but, nonetheless, rights are vital to his account of the Rule of Law. These are not to be understood as natural rights, in the Lockean sense, but as rights that have evolved historically and have found expression in various constitutional provisions.
This is closer to my own, historicist view. The extent to which law must be general is a function of its level of abstraction, and thus of what conceptual level of construal one is forced to assume: an “impersonal voice”. When designing rules or laws for a large population, they must be “analytically egalitarian” and more or less utilitarian, or risk making a category mistake. Hayek’s view was that this meant carefully designing institutions to maximize competition. Later he backed away from rationally constructing laws for competition, for epistemological reasons. He still favored a decentralized legal process that ordered society toward its theoretical potential, as it were. Let’s call it “laissez-faire within rules,” or “planning for freedom,” or “the boundary conditions of liberalism”.
What I like about that argument is that it relies solely on conceptual appropriateness (aka the ‘supervenience constraint’: conceptual properties ‘supervene’ on real properties). People can understand why “the house was happy” makes a kind of category error in the form of a metaphor. However, to say “the house was well ordered” at least connects a description (ordered) that can supervene on the object (a house). This is the same kind of care Hayek takes when he argues against social justice. If you’re careful you can make some interesting arguments of this sort without ever becoming a moral realist.
4 thoughts on “Generality & the Boundary Conditions of Liberalism”
Coming back to this subject again. Why do you connect analytical egalitarianism with utilitarianism? I understand that the latter is a version of the former (in assuming every individual to matter as much as every other individual) but there are other moral traditions which fit the bill. The rights-tradition you mention is one, the Stoics are another.
I was arguing that normative theories are distinct but compatible if you think of them as being mainly about fitting different pragmatic constraints.
For example, in public policy decisions there is an attenuation from a policy cause to an individual effect. Moreover, due to the sheer scale, both perceiving discrete differences and demanding perfect unanimity among all individuals become impossible. This guarantees there will always be winners and losers. It’s in this sense that the “far mode” forces us to be “analytically” egalitarian. From atop a mountain the masses below look homogeneous. Given that we *must* have a far-policy (from monetary policy up to the political system), we are forced by circumstance to be utilitarians, or to take on some other similarly conducive theory.
Violations of rights, on the other hand, depend greatly on how human psychology perceives discrete acts of intention within a “near” network of causal relationships. Consider the classic thought experiment of a doctor who has three patients dying of organ failure when you come in for a checkup. He could murder you to harvest your organs and save the three: a perfect utilitarian outcome. But people surveyed find this outcome repugnant. In the book Moral Minds, Marc Hauser reports on a number of similar thought experiments that map out how people assess intention based on perceptions of act versus omission and degree of attenuation. The contours of this repugnance end up forming a primitive psychology of “rights”.
However, I agree that the modern conception of human rights is egalitarian. But analytically, when rights-based logic is applied, it tends to evoke heterogeneity and matters of consent. Conversely, deciding how to design a healthcare system abstracts away from that particularity for pragmatic reasons, and so you end up making assessments based on maximizing, say, organ transplants, because the causal link from your choice to any particular patient failing to get an organ is extremely attenuated.
The organ system we have right now is sub-optimal precisely because near-mode repugnance has crept into a utilitarian design problem. The meta-problem is designing a system that satisfies both levels of construal. Alvin Roth’s organ matching system is one example. Capitalism is another.
Ah, that all makes perfect sense. Thanks for clarifying!
Pingback: The Specter of Pluralism’s Bloodstained History | The Ümlaut