Conservative Moral Extensions

Conservative Extensions

Before I get to morality, I need to lay some groundwork.

Let’s start with what “theory” means in formal logic. A theory is a collection of theorems. A theory is often defined by a set of axioms (starting assumptions) together with everything you can prove from them (the theorems).

If I have some theory $T_1$, then I can extend it by adding some axioms to create a new theory: $T_2$.

Example 1

Let’s say $T_1$ has two axioms: (1) every living thing dies, and (2) I am a living thing. From these, we can deduce the theorem (A) I will die. So far so good.

Now let’s say $T_2$ adds the axiom (3) rabbits are living things. From this, I can deduce a second theorem: (B) rabbits die. It’s important to note that $T_2$ refers to all 3 axioms, not just axiom #3.

Note that in this case $T_2$ introduced a new term to the language: “rabbits”. The fact that $T_2$ can’t prove any new theorems expressible in the language of $T_1$ is what makes it a conservative extension of $T_1$.

Example 2

For contrast, let’s construct a non-conservative extension.

Let’s say $T_3$ adds the axiom (4) the only things that exist are rabbits and I. From this, I can deduce a third theorem: (C) everything is living.

This is not a conservative extension, because theorem C is a new theorem stated purely in the language of $T_1$ (it never uses the term “rabbits”), yet it cannot be proven from $T_1$’s axioms alone.

By contrast, we can call $T_2$ a conservative extension of $T_1$, because every statement expressible in the language of $T_1$ that $T_2$ can prove is already provable from the axioms of $T_1$ alone. Extending to $T_2$ doesn’t let us prove any new statements in the language of $T_1$.

Instead, $T_2$ only lets us prove new statements that use its new vocabulary, such as theorem B: rabbits die.
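The two examples above can be sketched in a few lines of Python. This is a toy propositional encoding: the atom names and the brute-force `entails` check are my own illustration, not a standard logic-library API.

```python
from itertools import product

# A "theory" is a list of axioms; each axiom/claim is a function from a
# truth assignment (dict of atom -> bool) to bool.
ATOMS = ["i_am_living", "i_die", "rabbits_living", "rabbits_die"]

def entails(axioms, claim):
    """True iff every truth assignment satisfying all axioms also satisfies
    the claim (brute-force semantic entailment over the finite atom set)."""
    for values in product([False, True], repeat=len(ATOMS)):
        model = dict(zip(ATOMS, values))
        if all(ax(model) for ax in axioms) and not claim(model):
            return False
    return True

# T1's language uses only the "i_*" atoms.
T1 = [
    lambda m: (not m["i_am_living"]) or m["i_die"],  # (1) living things die, for me
    lambda m: m["i_am_living"],                      # (2) I am a living thing
]
# T2 adds the rabbit vocabulary and axioms. It constrains only the new atoms,
# so it proves nothing new about the old "i_*" vocabulary: a conservative extension.
T2 = T1 + [
    lambda m: (not m["rabbits_living"]) or m["rabbits_die"],  # (1) applied to rabbits
    lambda m: m["rabbits_living"],                            # (3) rabbits are living
]

print(entails(T1, lambda m: m["i_die"]))        # theorem A: True
print(entails(T2, lambda m: m["rabbits_die"]))  # theorem B: True
print(entails(T1, lambda m: m["rabbits_die"]))  # B needs T2's new axioms: False
```

The last line is the conservativity point in miniature: theorem B is only provable once the new vocabulary and its axioms arrive, while nothing stated purely in $T_1$’s language changes.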

Moral Extensions

This is all important, because I believe morality should be a conservative extension of reality.

Consider this scenario. Alice has grown up with a strong sense of right and wrong. She believes that people who hurt others deserve to be punished, because they had free will and chose to do evil.

This ethical system might be a tad simplistic, but it has a more serious problem: what happens if scientists disprove free will? If that occurs, Alice has no reason left to support punishing those who harm others. In practice, a person will probably find some other reason to justify punishment, because most people use their emotions to make ethical decisions and justify them after the fact with reason.

But a half-way decent ethical system founded on reason doesn’t have this luxury. If your only justification for punishing those who harm others relies on this notion of free will, and free will is disproved, then you have to admit that punishing harmful people is no longer justified.

When people don’t admit this, it’s usually because they come up with an alternative justification. But what’s the point of requiring justifications in ethics if we’ll just invent them to prove whatever we want proven anyway? We should believe things because of the justification, not justify what we already believe.

Naturally, we could collect a host of justifications so that if any one goes down, we don’t have to abandon what we really want to believe. In this particular case, it’s easy to get around the issue Alice faces: from the start, we should recognize that punishing people who harm others has other moral benefits, such as discouraging them and others from harming people in the future.

However, that’s the wrong lesson to draw. By adding more axioms to your moral system, you increase the probability of a contradiction, and you create a greater and greater risk that you’re not believing what you’ve justified, but simply justifying what you believe.

Fortunately, we can avoid these issues with a more general and elegant principle: the axioms of an ethical system should never depend on the truth-value of statements about the real world, and our beliefs about the physical world should never depend on our ethical beliefs.

In other words, our moral beliefs should be a conservative extension of our factual beliefs.

Example 3

Let’s say I have a really simple ethical system: maximize Bob’s happiness. This shouldn’t change any of my beliefs about the real world. However, if I later learn that “Bob loves baseballs”, I can deduce that “I should buy Bob baseballs.”

Note the fundamental difference here. Alice’s problem was that her moral axiom relied on a factual claim. I have no identity-crisis problem, because only a moral theorem relies on a factual claim, not an axiom. If I later learn that Bob actually doesn’t love baseballs, my moral system is still intact (I should maximize Bob’s happiness); it’s just my belief that I should buy Bob baseballs that has to change.
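Example 3’s structure can be made concrete with a tiny sketch. The function and fact names here are hypothetical, purely for illustration: the axiom is hard-coded, while the facts are a revisable input.

```python
# Fixed moral axiom: maximize Bob's happiness.
# Revisable factual beliefs: which things Bob loves.
def derived_obligations(facts):
    """Moral 'theorems' follow from the fixed axiom plus the current facts."""
    return sorted(item for item, bob_loves_it in facts.items() if bob_loves_it)

facts = {"baseballs": True, "olives": False}
print(derived_obligations(facts))   # ['baseballs'] -- "I should buy Bob baseballs"

facts["baseballs"] = False          # a factual belief is revised...
print(derived_obligations(facts))   # [] -- only the theorem changed, not the axiom
```

Revising a fact silently retracts the theorems that depended on it; the axiom itself never appears on the revisable side.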

Real World Example: Race

There is a second reason to prefer conservative moral extensions. Consider the historic conflict between people who used the lower average IQ scores of certain minority groups (see “The Black-White Test Score Gap”) to suggest racist conclusions (e.g. “Heritage study co-author opposed letting in immigrants with low IQs”). Apparently the issue is bad enough that some people want to ban research on race and IQ (“Should Research on Race and IQ Be Banned?”).

However, if you believe the equal value of all human beings is a moral axiom, and that morality is a conservative extension, then there is no problem. Even if different races have different average IQs, we shouldn’t treat them differently because of it.


I generally dislike using many popular terms, including “racist”, outside discussions of personal anecdotes, emotion, etc. I think that in the vast majority of cases where the word “racist” is used, it could be replaced with a much less evocative and much more precise alternative.

I have been told several times that my perspective is less than ideal, and I’ll probably discuss this more in another post. In this case, I use “racist” because more specific wording is irrelevant to the topic at hand.

However, this entire debate stems from a faulty moral premise: that if some group has a lower IQ, then that group’s wellbeing matters less. No one states this explicitly, but why else do people get so worked up about this issue?

Again, the answer is to have a conservative moral extension. You should treat people equally. Full stop. As soon as you start basing moral statements on factually uncertain assertions, you have already created a vulnerable moral system.

This isn’t too bad from a liberal standpoint. If it is ever proven beyond all doubt that the average IQ score among African Americans is lower than among whites for genetic reasons, most liberals will simply shrug and adopt the stance that IQ doesn’t determine moral worth anyhow.

The main problem is that, in the meantime, good people who want to defend the principle that everyone’s wellbeing is equally important have a very difficult time keeping a clear mind. The emotional strength of their moral arguments tends to make them less able to evaluate factual claims separately from moral ones. But how you feel about an argument emotionally is not evidence against it. The answer is simple: don’t let facts mix with morality, and don’t let morality mix with facts.

Real World Example: Determinism

One of the arguments I heard in my introductory philosophy class against determinism is that it would imply we don’t have free will and that everything is mechanical and meaningless. However, if scientists actually proved the universe was perfectly deterministic, the people who hold this view shouldn’t actually feel bad.

They didn’t have free will the day before, and they were happy and nice. Now the only difference is that they know they don’t have free will. There is no reason for this to make you unhappy if you view morality as a conservative extension. As with God, there is no good reason to put all your eggs into one proverbial basket. There is no law of the universe that says your happiness should depend on determinism.

Real World Example: Religion

A common question atheists are asked is how they determine what is moral. Eliezer Yudkowsky has an interesting analysis (“Leave a Line of Retreat”) of how this plays into the emotional rollercoaster of deconversion.

When you grow up viewing God as the “source” of morality, then it seems that if God doesn’t exist, then neither does morality, so everything is meaningless.

The answer is conservative moral extensions. God’s existence is a factual claim, and moral arguments, along with the subconscious emotional biases they evoke, should play no role in evaluating it. Even believers benefit: if, by some chance, God doesn’t exist, you don’t also lose your source of ethics.


These examples highlight the benefits of having a conservative moral extension. It lets you evaluate factual claims without getting worked up about their moral consequences. If you ever find yourself worrying about the moral consequences of a fact, that means your moral axioms need to be different.

For instance, I might think one of my moral axioms is “the value of all beings stems from their intelligence”, but if I start feeling uneasy about the possible racial implications, it should be clear that this should never have been an axiom in the first place.

Conversely, the second benefit is that no matter what factual claims are proven, you won’t ever have to say, “I guess I can go kick babies now, because you proved I don’t have free will.”

The simple truth of the matter is that there are many things about the physical universe about which we are uncertain, and occasionally the very foundations of our knowledge get shaken (quantum mechanics, for instance). This shouldn’t cause the world to end (metaphorically).
