By Michael E. Berumen 7-1-06
An old friend with an interest in philosophical matters inquires: “Why be moral?” The question, broadly interpreted, asks two things: why ought we to be moral, which primarily interests philosophers; and why do we seek or even desire to be moral, which is also a question for science. In my book, Do No Evil, I address both issues, though my main interest was to answer the former one. Let me take the easier one first: why do we seek or desire to be moral? The answers are found largely in biology, psychology, and anthropology.
Certainly one reason we desire morality is that we want others to behave in a certain way towards us, even if we ourselves choose to act otherwise. Indeed, an argument can be made that, given a certain moral framework, it would be quite irrational to want others to behave immorally, for their doing so would be to our own detriment. For example, no rational person desires to be killed without some justification, some reason, such as saving the lives of others. Thus, to the extent "Do Not Kill" is a moral rule, in most circumstances it would be irrational to want others to ignore it.
All organisms seek at least two things: survival and reproduction. No doubt the quest for survival is itself a function of the need to reproduce in order for the organism to continue its kind. Insofar as an organism seeks to survive, it also strives to avoid being harmed, to avoid what is bad for it, whether that means being damaged, impeded, or killed. Organisms with nervous systems all seek to avoid pain. There is not necessarily an underlying, conscious motive here. Such behaviors are hardwired at the most primitive level of existence, prior to any sentient capacity.
As with any other animal, humans naturally want to avoid being harmed. However, humans have minds and thoughts that lead them to make conscious choices, the power of volition, whether for good or bad, and whether it is free or determined. We seldom choose a course of action that knowingly and purposefully results in our own harm. On the other hand, our choices sometimes involve incurring personal harm for some perceived benefit. Rationality, in its strict psychological sense, means that we operate in our self-interest at some basic, minimal level in our daily lives, and, thereby, we are able to function as humans with all that this minimally entails. Rules of behavior, social rules, often help to guide our conduct with this in mind. Such systems, first and foremost, enable individuals to prevent harm to themselves, because they also agree not to harm others or are able to secure the protection of stronger members of society. While there are other reasons we choose to accept such codes, I think Thomas Hobbes was quite right about this.
We also have strong feelings towards others, feelings of beneficence, and they can be especially positive in relation to members of our family or our proximate group. We have compassion for those who suffer, and we also want to promote the interests of those about whom we care, or at least their interests as we see them. Such feelings are rooted in our ability to empathize with the pain, suffering, and desires of others because we experience these things, too. While for most people this capacity appears to be greatest with kith and kin, improvements in transportation and communication, and the consequent contact with other places and cultures, have resulted in our extending our empathy and compassion to much larger groups than before, indeed, even to more abstract objects, such as a nation or humanity. Such feelings impel us to devise various systems that purport to promote the welfare of those dearest to us, as well as that of individuals we do not know and even of society as a whole.
For these and, no doubt, other reasons, we choose to establish standards of behavior codified in morality, customs, and law, and, in each case, we can easily observe where the rudimentary principles informing these codes overlap and converge. But we have only touched on why we seek to become moral, or why we have a moral sensibility or inclination, and not why we ought to be moral in the most fundamental sense. In other words, we have not shown why we ought to choose this or that moral system without invoking its own assumptions or begging the question. This is a much more difficult thing to do.
Throughout the ages, most philosophers have maintained that there are moral facts and moral truths. They arrive at such facts and assert their truth using various methods, but usually it boils down to defining moral terms with other moral terms that have similar or equivalent meanings or, alternatively, to stating that we intuit moral truths and that they are inexplicable, unverifiable, and, for example, act as atomistic or unanalyzable properties ascribed to other facts (e.g., as asserted by G.E. Moore). In other words, they end up either begging the question or simply asserting the existence of such truths without being able to demonstrate it. And nowhere are they able to point to a single, agreed-upon standard or reference for such moral truths that does not itself assume what we seek to demonstrate.
One can quickly dispense with the idea that moral truths are intuitive by observing that people everywhere often arrive at different moral judgments even when the facts being analyzed are essentially the same. If there truly were intuitive moral truths, one might reasonably expect greater congruity of opinion, especially given the similarity of our neurological construction. The fact is that two otherwise reasonable people can hold opposite moral "intuitions." In contrast, no two reasonable people could hold opposite opinions about, say, the law of identity or the commutative axiom. There are no truly self-evident moral propositions, nor is there evidence that their truth can be determined intuitively.
Plato offered several theories of morality in his writings, and each of them assumed there are moral facts in the world. In his most famous dialogue, the Republic, he sought morality through justice writ large in the state, which he maintained arises through the unity and harmony of the state. Elsewhere, he saw practical morality as participating in and having knowledge of the purest of the universal forms, namely, the good. A facile rhetorician, Plato tells us in several places that no one would intentionally harm himself and, what is more, that to act immorally would clearly bring oneself harm (though not so clearly that he could demonstrate it); therefore, one ought to be moral simply out of self-interest.
Plato was onto something, but not quite what he thought. His line of reasoning has led many to believe morality is a rational requirement. Aristotle, his greatest student, thought everything ought to work in accordance with its purpose, and, of course, he saw man's principal purpose or function as rationality, which includes right conduct. But it is easy to show that one can lead a perfectly rational existence whilst harming others along the way; it is when we knowingly (or when we should know) bring harm upon ourselves for its own sake that we are behaving irrationally. So, if we harm another without knowingly jeopardizing our own interests, we have not acted contrary to our rational requirements or to reason. On the other hand, as I observed before, it is quite rational to want others to behave morally, and even irrational to want them to behave immorally towards oneself, that is, insofar as such behavior entails harming oneself.
David Hume showed that, try as we might, we cannot point to anything in nature that correlates with a moral fact or judgment other than our own psychological dispositions or sentiments. He claimed that there really are no moral facts. He therefore said that morality is a subjective value judgment and that the ends of morality are a product of our passions, not of our reason. Reason can help us attain our ends or desires, but not select them. Matters of fact must be distinguished from matters of value. Hume was partly right, but he overstated the case when he said moral judgments were purely subjective and based on sentiments, as I shall show momentarily.
Some would say we ought to be moral because god wants us to be, but that still leads to the question: why? Is it simply because he wants it, which implies capriciousness on his part and rudimentary utilitarianism or obsequious acceptance, ex cathedra, on our part? Or is it because god himself apprehends the object, morality, as being what we ought to do, in which case the question of why remains?
Others will say that if we do not choose morality we will suffer some adverse result, such as punishment or the opprobrium of others. But that, too, is little more than saying morality is simply acting in our own interest, utilitarianism, or, more broadly speaking, consequentialism of one sort or another, leaving us to wonder whether avoiding punishment or other negative consequences is the only reason we ought to be moral. The more common flip side of avoiding adversity, of course, is promoting happiness or pleasure, the utilitarianism of the likes of Jeremy Bentham and John S. Mill. I contend that such views might explain why we often choose to be moral, an empirical fact, but not really why we ought to be moral.
For example, what if I can act immorally without incurring a negative consequence such as punishment? An absolute ruler of a state or a deity might be in such a position. Why, then, ought he to act morally? What if I can get away with my malefaction without being detected or without an adverse consequence, much as Gyges did with his magical ring in Plato's Republic? Is there any reason I ought to be moral in such a circumstance? Many of us believe that we should, but how can we actually demonstrate that this is so? And what if one can show that we can actually improve our interests by acting immorally (or, if you prefer, by harming others)? Logical consistency would then demand that the underlying principle of self-interest or utility continue to govern our actions, immoral though they may be. If one posits that we should act to benefit society or increase average happiness, what, for example, if one were able to show that slavery would improve things for the majority? One can begin to see some of the problems inherent in such consequentialist systems, which I have treated at some length in Do No Evil.
Immanuel Kant believed moral judgments are the result of what he called practical reason, the product of our will to do good and our rational nature. His simplest formulation of the categorical imperative requires that we will any exception to moral maxims as though it applied as a universal law, and that we include ourselves as potential victims. To the extent we cannot will such a course of action, we cannot morally act on it. Kant was prescient in seeing the role logic plays in assessing moral judgments; however, his required construction was so generalized that one ended up with absurdities, such as the rule that one should never lie, even to Nazis seeking the whereabouts of a Jewish family we have hidden away. His formula, however, is amenable to modification such that more specific facts are taken into account, which enables us to avoid such moral anomalies. Kant wrongly assumed that rationality requires morality, and he put too much emphasis on motivation as opposed to action. This is not to say that motives are unimportant, but to recognize that morality is about what we do, not merely what we believe or intend.
Metaethics, a more modern branch of ethics, deals primarily with understanding moral terms and the structure of moral arguments, and it seeks to treat the assumptions that underlie our normative systems. While some of his ideas are found in his predecessors' works, Hume is probably the first philosopher to focus on purely metaethical issues. While Hume himself would not have drawn this conclusion, many later philosophers have maintained that, from a metaethical point of view, there is no reason to prefer one moral system over another, and that our choice (or belief) simply boils down to securing or expressing our own subjective preferences. In its most extreme form, this leads to a kind of abstract relativism, suggesting that there is no preferred standard for assessing competing normative systems.
I think such a conclusion is only partly correct, and only to the extent that the descriptive properties of moral terms apparently do not have correlates subject to truth conditions, as Hume suggested, and, consequently, we are left without a preferred standard for judging them. However, as hinted by Kant and shown more elaborately by more recent philosophers, such as R.M. Hare, moral propositions are also subject to the rules of logic, which surely are a function of universal standards. What is more, certain moral terms are more similar to formal, modal operators, such as "if," "and," "if, then," "is," and so forth, which also lack descriptive properties that are subject to truth conditions. Thus, for example, the word "ought" says something about the facts, even though the term itself does not have descriptive properties. When I say that one ought not to jump off the cliff, there is something supervenient about the term "ought." It works as a formal operator. And finally, there is an evaluative aspect to certain terms, such as "better," "worse," "good," and "bad," that is not relativistic, much as when I say this wrench is better than that one over there, or this act is better than that act.
Modal operators and logic are objective in much the same way that mathematics is. This does not imply that formal, logical propositions or terms correspond to something in nature, however. That is a confusion people sometimes fall into when they think of "objectivity" as being something "out there" in nature. The descriptive properties of moral terms are most likely predicated upon subjective states of emotion, our sentiments, preferences, and passions. However, Hume was mistaken to hold that everything about moral propositions is a subjective matter or a function only of our personal values, for it can be shown that they have several formal aspects to them, properties that are quite objective.
This is not completely satisfying to those who seek a more robust justification for normative ethics. They would like for us to be able to say there are moral facts that inhere in the world and that these facts are subject to truth conditions. But no one has shown that this is true, and I myself rather doubt that it is. However, at the very least, we can avoid the stranglehold of the metaethical doctrines of extreme relativism or emotivism, both of which lead one to the conclusion that moral propositions and arguments are essentially meaningless and that there are no grounds for preferring one moral assertion over another. I cannot overstate the importance of simply being able to have a meaningful and logical discussion on ethical topics, notwithstanding the difficulty of evaluating any descriptive properties of, say, "the good." It at least allows us to assess normative formulations using some preferred standards of reference. Not only can we discuss opposing moral propositions using meaningful evaluative terms, a process by which we might seek to convince others or work towards a convergence of moral perspectives, but we can also assess their logical coherence. These things are true even though the properties of specific moral terms are not subject to truth conditions and lack cognitive content.
When we observe that moral propositions are subject to the rules of logic; that (some) moral words denote something more than mere preferences; and that moral judgments can be assessed using objective, evaluative criteria, we are thereby able to have moral discourse. If the truth were otherwise, moral assertions would amount to little more than nonsense, a view that some philosophers have actually held. Thus, for example, one cannot logically justify murder in one case and not in another when all of the same facts obtain. One cannot logically hold opposite points of view at the same time. When one says one ought to do such and such, "ought" clearly says something about the facts, much as the formal operator "and" does, even though neither correlates to one fact in the world. And positing that "feeding the poor is better than starving them" has meaning across the cultural and moral divide because of the objective nature of our evaluative terms, which, as Kurt Baier, Hare, and others have shown, allows us to at least have moral arguments.
With this said, permit me now to state why I think we ought to choose to be moral, and, more specifically, why we ought to choose impartial rationality as the underlying moral principle for establishing rules of conduct. To my mind, it really rests on a variation of something the philosopher Henry Sidgwick observed. I cannot think of any reason we should assume one person's worth, significance, or set of preferences is greater than any other person's in relation to the standard or perspective of the universe. I do not think anyone else can, either. The reference to the universe's "perspective" is a useful metaphor only, for, as far as I can tell, there is no real "standard" in the universe, and it seems to be quite neutral, indifferent, and without any perspective at all. But maybe this is the point. What it says is that there is no reason to believe one person is objectively more important or less important than another.
This seems to me as close to a "self-evident" truth in morality as one can get, but I stop short of saying that. However, I do put this principle of equal interests forth as an objective fact. I think I can further say that, since it is empirically true that humans, like other animals and organisms, avoid their own harm, death, and suffering, there is absolutely no reason to assume that my own avoidance of death or suffering is intrinsically any more or less important or meritorious than any other person's avoiding equal harm, and, consequently, that I ought (this is the leap) to extend this principle to them and they ought to extend it to me. Is that an ironclad case of reasoning? No. But it comes close. What is more, it also feels right, and, as the great Hume said, reason ought to be the slave of the passions. We must simply find a way to convince others to feel that way too, and I think one way to begin is with the argument of equal interests, or, more precisely, of an equally disinterested universe, making one person's interests no more worthy than another's by any objective standard.
One of the properties of impartial rationality that strikes me as particularly important is that it enables one to formulate universal rules. Various conceptions of the good and consequentialist theories can easily be shown not to be amenable to universality. Not everyone can act on rules derived from such concepts everywhere and all of the time. Such concepts do not lend themselves to codes that are at once understandable and actionable by everyone. In contrast, one can construct universal maxims that require one not to cause others harm, for it is possible for all rational people both to understand and to act on such principles. Such maxims are surprisingly consistent with our "common sense," which speaks to their being rooted in our rational prohibitions against causing our own needless death or suffering. It also stands to reason that rules that everyone ought to follow all of the time are the most important moral prescriptions. Such a system is simple and allows for divergent views on other moral matters, but only to the extent that those views do not conflict with these universal maxims.
There may not be moral facts, per se. Moreover, rationality alone certainly does not require us to be moral. It is quite possible to act immorally and rationally at the same time. On the other hand, a rational person does want others to behave morally, inasmuch as their not doing so could bring harm to himself, which no rational person would desire. Similarly, impartiality does not require us to behave morally. One can administer the rules of a concentration camp quite impartially. Rational impartiality, on the other hand, does require us to behave in a particular way, for we then extend our rational prohibitions to others. I believe that such a system comes closest to being consistent with the principle of equal interests, and also to what we observe about ourselves in nature. And for this reason I believe we ought to accept it, though I cannot show that it is a requirement of reason. However, as Hume also showed, we must make a leap of faith in order to connect causes and their effects and to sustain our belief in the uniformity of nature, resulting in the so-called problem of induction, which, to this day, remains with us. The rationale for not becoming a practicing radical skeptic in the matter of causality or other epistemic uncertainties strikes me as no more compelling than the one for accepting certain elementary moral concepts.
A system based on rational impartiality, or what I have at various times called "rational objectivism," also simplifies morality, for it focuses our attention on making exceptions to what we might call the sine qua non of common sense, namely, avoiding harming ourselves or others without an overriding reason. However, I do not contend that such a system emerges ineluctably from empirical analysis as a scientific theory does, or deductively from a given as a mathematical proof does. Philosophers are sometimes trapped into trying to make ethics resemble science or mathematics, but this is sheer folly. The best we can hope for is to make it clear that there are certain rules that flow from our humanity and the minimalist principle of an equally disinterested universe, and though such rules are not truly apodictic, neither are they contrary to reason or rationality, and, I submit, they are the most defensible of the possible alternatives for rational, compassionate people.