Why Morals Aren't Real

As we've all noticed, politics has heated up recently. Not just in my particular niche of AI, but all over the place. I have a strong impression that conversations have become not just less friendly, but also less fruitful. I think the hidden pattern behind many of these disagreements, and in many cases the cause of their fruitlessness, is a difference in the fundamental values people hold. In this essay, I will lay out the case for why this is an even bigger problem than it sounds and how it reveals a fundamental truth about the nature of morality itself.

Moral frameworks: Yardsticks for good and bad

What's important? What should I do? Is this right or wrong? These are some of the questions at the foundation of our lives. How we answer them is ultimately how we govern everything, from big decisions in politics to small actions in our personal lives.

To say that their answers are important would be both an understatement and redundant; in fact, we define the very concept of importance by how we answer them. Considering this, it's shocking how rarely we explore how exactly we answer them. What are the instruments we use to make these decisions?

What differentiates our experience from a cold universe of atoms bumping into each other, without importance or utility, is that we care about things. Some things should happen, and some things shouldn't. We think murder is bad. We like, or dislike, that a certain politician gets elected. We hate being kicked in the shin. We dislike seeing the interest rate go up. We love seeing a kid laughing in the park.

When asked, "Why is it bad when the interest rate goes up?", we might say something like: "Because it will make mortgages more expensive." We can then continue to ask, "Why is it bad that mortgages will get more expensive?", and get an answer like: "Because it makes it less likely that non-rich people can buy houses." We can keep asking for the moral element of each answer. If we do so, we will eventually reach some basis. This basis is the moral framework of the person being asked: it's the thing they ultimately refer to when deciding what's good and bad, what's important, and by extension what they should be doing.

Examples of moral frameworks include utilitarianism (I must take the action that optimizes for utility) and deontology (I must follow rules a, b, and c), but a framework can also just be your intuition (I must do what my intuition tells me is right).

For someone who bases their morality on religious principles, this basis might be the principle that it is inherently bad to go against the will of their deity; for a utilitarian, it might be that pain is inherently bad or that utility is inherently good; and for a deontologist, it might be that breaking specific rules is inherently bad.

Warring houses built on sand

This method of getting to the basis of someone's moral framework reveals a worrying truth. The choice of basis is, by definition, provably, irrefutably arbitrary. Here is why:
If one had a reason to choose a specific basis for one's moral system, that reason would be the actual basis. As an example, let's look at someone who reports having pain as the basis of their morality. They do things that minimise pain in the long term for everyone involved. If they are asked, "Why is pain bad?" and they say, "It just is," then pain is indeed the basis of their moral framework. But if they say something like, "Because people don't like pain," it's obvious that pain is not actually the basis of their moral framework; the actual basis is apparently people's preferences. Sometimes people will start reasoning in circles here, saying, "Violating people's preferences is bad because it causes them pain." But circular reasoning does not provide a foundation either, since the circle could just as well be entered at any arbitrary point.

Evidently, the basis of a moral system can't have a reason. The choice of moral criterion must follow from nothing. This is a problem for those of us who find comfort in the idea that disagreements can be resolved in debate. If we share a common basis for our morality, they can be. As an example, we might disagree over the usefulness of electric vehicles because we have different opinions on whether they contribute towards our common goal (e.g. whether their production is sufficiently ecological). This disagreement can be reasoned over; it's about the observable world and subject to the reality out there. Eventually, we can come to a conclusion on what to do about EVs to make sure we do the right thing.

But if we disagree on what it means to do the right thing, there is no solution. Neither of us can justify our conception of "the right thing" because it is, as we established, baseless. Nor can we criticise the other's conception of "the right thing", because it, too, was not chosen to fit any criteria, and thus cannot fail to fit them.

Now what?

I’m not sure what to do with this information. I obviously still have strong moral beliefs. Things feel “objectively” good and bad to me. When I see something like the brutality in Ukraine, I viscerally feel that its wrongness goes beyond my intuition and my subjectivity. But that’s just not true.

In this sense, pursuing what I feel to be right and avoiding what I perceive as wrong feels egotistical, because I'm essentially just projecting my arbitrary choice of morality onto the world. I have no justification to offer the people who would like me to choose a different moral framework, nor do they have any argument to convince me to do so. It feels odd to hold such a strong conviction about something I know to be arbitrary.
But then again, viewing egotistical actions as “wrong” is just as arbitrary.
