
A brief dilemma on the nature of law

The nature of law, broadly speaking, has been conceived of in two ways: through positivism and through naturalism. The former holds that law is the set of decrees laid down by legislators, interpreted by judges, and enforced by armed men. The latter contends that law is the set of just decrees laid down by legitimate legislators, interpreted correctly by judges, and enforced by armed men. The interminable struggle between the positivist and the naturalist ends up as a sort of merely verbal dispute over what we are willing to call “the law,” but it contains within it the kernel of a genuine dilemma. Namely, regardless of which position we take, neither seems to allow for the possibility of law as an autonomous field in its own right. Rather, positivism renders law a confused sort of sociology or psychology, and naturalism leaves it a wrongheaded sort of moral philosophy. The idea of law qua law cannot be sustained under either interpretation. Given how positivism and naturalism frame the debate over the nature of law, one may find this quite a problem.

The positivist, cynically or pragmatically, recognizes law as a system of power. Law allows speech acts to compel coercion by the state against a population in roughly the same way the rules of basketball allow a referee to disqualify a player from the game. To study law is to study the actual conditions under which legislators write and pass statutes, judges decide cases, and the state punishes violators. Positivists see questions of morality – of how the law ought to be, rather than how it really is – as flights of fancy best left to the philosophers. The study of law becomes a rather restricted subfield of psychology, sociology, and those related social sciences which study a human practice without succumbing to the “bias” of normativity. The project of legal positivism would be complete with the invention of a supercomputer which, fed a history of a judge’s past rulings, political affiliations, what they ate for breakfast, et cetera, would correctly predict how they would rule on any given case placed in front of them. Under this conception, the study of law could in principle be reduced to the study of physics. The positivist claims to study something called “law,” in other words, but their methodology ultimately undermines the idea that law ought be accorded its own field of study. We might imagine that an alien looking into courthouses or jails through a telescope would end up a legal positivist. That alien, just like the positivist, would be unable to tell us anything we cared about in the study of the law – like whether a case was rightly decided – but could provide myriad statistical generalizations about the propensity of any given judge to pen “Affirmed” or “Denied” after a host of people flap their lips in front of them.

The naturalist, naively or optimistically, recognizes law as a system of norms. Law creates the conditions under which society justifiably binds its subjects to a set of rules. To study law is to study the creation and correct interpretation of social rules. Where the positivist sees a realistic approach to law, the naturalist diagnoses a nihilism regarding the very possibility of law; merely using the word “law” and donning nice robes while one utters it does not a law make. While law ultimately concludes in the actual exercise of power, its proper subject matter is the adequacy of the reasons justifying that exercise.

Identifying the nature of law presents an intractable problem for the positivist, who should nonetheless be happy to pawn the work off to the sociologists and psychologists. The situation seems less dire for the naturalist, who has cordoned off a pasture in the field of norms and set off to graze. But a problem looms wherein law, though not reducible to empirical science, turns out to be political or moral philosophy by another name. If law’s domain is normativity, then lawyers, judges, and legal scholars have thoroughly missed the mark: where they cite case law, they should really be citing Kant. A judge ought decide not by reference to precedent but through rigorous argument for normative legitimacy. The naturalist, in other words, should take law as a subset of moral philosophy and be able to draw a line from any particular legitimate decision back to the fundamental normative principles from which it derives. Anything less in the service of state coercion amounts to barbarism.

We are left with the following dilemma: either law is the empirical study of actually-existing legal practice (positivism), in which case law is a subset of descriptive psychology, sociology, or even physics; or else law is the study of the legitimacy of legal practice (naturalism), in which case law is a subset of moral or political philosophy. Both horns yield the conclusion that law has no fundamental nature qua law and that legal scholars and practitioners are systematically confused. Positivists should be running experiments and handing out surveys; naturalists should be deriving judicial decisions from first principles.

Stripping down to philosophical kernels (part 2)

(part 1 here)

But what exactly offends so greatly when we engage in philosophical stripping? Isn’t it rather the case that, as Aristotle puts it in his Ethics, “our discussion will be adequate if it has as much clearness as the subject-matter admits of, for precision is not to be sought for alike in all discussions, any more than in all the products of the crafts”? The insight here pertains to the contextuality of our inquiries: the method of our investigation and the precision of our measurement must respond to our subject.

Our imagined dispute between Adorno and Aristotle lies in their disagreement over the criterion for determining conceptual inadequacy. For Adorno, full adequacy could only (and inconceivably) arrive at the moment of total identity between thought and world, in which the object of our cognitive judgment is rendered bare to us in its entirety – in which thought becomes just as whole as the world which it reaches out to. Aristotle proceeds more modestly: full adequacy involves the proper sort of fit between thought and world, and what sort of fit that is involves an investigation of the object of thought and our own aims in thinking it. Particle physics admits of more precision in measurement than ecology, but this in no way means either field is more adequate to its phenomena. It is rather that to prove the existence of gravitational waves requires a two-and-a-half-mile-long apparatus sensitive to perturbations “smaller than one-ten-thousandth the diameter of a proton (10⁻¹⁹ meter),” whereas to discover the detrimental effects of mercury on a particular ecosystem requires a different set of tools and standards.

At first glance Aristotle’s sensibly conservative view may seem to have the upper hand over Adorno’s impossibly demanding method. But, it turns out, we may not yet be off the hook. Two concerns give us reason to move from Aristotle back toward Adorno: the circularity of assessing conceptual adequacy by way of “proper fit,” and the reasonable notion that philosophy as a knowledge-domain ought not only give us insight into parts of the world, as particle physics or ecology might, but should rather present us with an account of the whole of the world in terms of what is most fundamental about it. The first concern, we’ll see, makes way for the second.

So let’s say we agree with Aristotle – our concepts are adequate insofar as they properly fit the subject matter they seek to describe. Well, what’s a proper fit? It seems we’ve just punted the question of adequacy down the field. Our goal was to discover the criterion for ascribing adequacy to concepts; our discovered criterion ends up being that a concept is adequate when it properly – or, should I say, adequately – fits the object we are conceptualizing. But this sort of adequacy was precisely what was at stake in our deliberation, and our criterion seems to come up empty.

What would it possibly mean for a conceptualization to be adequate to its subject matter? Suppose we have a model like this: when thinking about some subject matter S, our conceptualization of it C is adequate just in case it meets our criterion of adequacy R. For example, when we think about the existence of gravitational waves, our conceptualization of that existence involves (among other things) a sensitivity to instrumental perturbations of 10⁻¹⁹ meters, and so meets our criterion of adequacy involving standards of precision and accuracy (among other things). But, for any conceptualization, how do we determine the criterion of adequacy R? How do we know, for example, that our concept of gravitational waves must be sensitive to one standard of measurement, but our concept of an omelette need not be? None of us, I hope, measures out the egg content of their omelette to the attometer. It seems, in other words, that we must have some criterion for choosing the correct criterion of adequacy R. But to choose such a criterion we’ll need another, and to choose that one we’ll need another, and to choose… well, it’s a long and unproductive tumble from there.
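The regress can be put schematically, extending the S, C, R labels above (the notation is my own shorthand, not anything Aristotle offers):

$$
\begin{aligned}
\text{Adequate}(C, S) &\iff \text{Meets}(C, R) \\
\text{Correct}(R) &\iff \text{Meets}(R, R') \\
\text{Correct}(R') &\iff \text{Meets}(R', R'') \\
&\;\;\vdots
\end{aligned}
$$

Each criterion is vindicated only by a further criterion of the same kind, so the chain of Rs never bottoms out.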

(part 3 here)

Stripping down to philosophical kernels (part 1)

To cut through the confusions of theorizing our world, messy as it is, philosophers strip – they remove features of the object of inquiry until they reach some essential kernel. These kernels offer enough meat for substantial conclusions and generalizations, without the mess of irrelevant contingency to hamper philosophical progress. Indeed, on some conceptions this just is what philosophy amounts to: stripping down and generalizing based on what’s left. For example, in discussions of ethics, one might treat human beings as moral agents insofar as they possess the capacity for rationality. All those other features of human beings – their bodies, preferences, emotions, and what have you – are set aside by the ethical theorist for the purposes of their analysis.

Typical objections to the stripping method rely on the notion that something is lost when a thing is considered in terms of its particular essential kernel. When we take human moral agency to consist in the capacity for rationality, our account is defective insofar as we have left other things out of the picture. Stripping, in other words, is inherently reductive. Perhaps the most dedicated critique along these lines is Theodor Adorno’s negative dialectics, which argues that thought always fails to do justice to its object. Whenever we conceive of a particular object (or set of particular objects) under a concept, our conception necessarily fails to encompass the whole of that object (or set of objects).

For example, suppose I conceive of my feline companion, Bashir, as a cat. I will (quite accurately) capture the fact that Bashir is a four-legged, furry felis catus whose ancestors were quite revered in Egypt. But I’ll fail to capture his propensity to beg for food by standing on his hind legs, as well as his particular sort of mercurial disposition toward baths. And even the brief description I’ve just provided of those particularities invariably falls short of describing exactly who Bashir is.

For Adorno, the failure of our concepts to make good on their promises of rendering the whole of the world intelligible gives us a duty to engage in ongoing self-critique. Our failed conceptualizations produce suffering: first, of the sort that comes intrinsically from the inability of our conceptualization to make good on its implicit promise to adequately think the object of thought; second, of the sort that arises from the consequences of acting on such a failed theorization of the world. Though Adorno rejects the notion that we’ll ever reach a state of identity between thought and world (in which, as Hegel says, the real is the rational, and the rational the real), he does take it that we can move toward progressively less inadequate understandings. Indeed, such is the progress of philosophy.

(part 2 here)