Basics


I thought the basics of morality were settled when I was in college decades ago. The right theory was called utilitarianism, and the alternatives were fundamentally flawed. Now all I see are wrong versions of the theory, so I will state and summarize utilitarianism here.

At present, only pleasure and pain are known to have fundamental relevance or significance. The relevance of everything else derives from its effects on pleasure and/or pain. From experience we know not only that pleasure and pain are relevant, but also that they can be quantified to some extent, combined, and compared across different situations. We do the best we can (to the point of diminishing returns) with different types, severities, and amounts of pleasure and pain; with estimating these in other beings; with predicting future events; and ultimately with making comparisons that are imperfect, but not wildly so. We clearly have the ability to do these things, and our ability improves with more experiences of pleasure and pain and with more knowledge and intelligence. Morality is about optimizing pleasure/pain. Debate about what optimizes it (including the left/right political debate) shouldn't prevent people from accepting the previous sentence.
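To make the idea of quantifying, combining, and comparing concrete, here is a toy sketch. The numbers, names, and simple summation are illustrative assumptions of mine, not anything the essay commits to; real comparisons would be far rougher and richer.

```python
# A toy sketch (hypothetical numbers and names) of combining and comparing
# pleasure/pain estimates across beings. Pleasure is positive, pain negative.

def net_utility(estimates):
    """Sum signed pleasure/pain estimates for one course of action."""
    return sum(estimates)

# Estimated effects of two possible actions on three beings (made-up values).
action_a = [+5.0, -1.0, +0.5]   # strong pleasure for one, mild pain for another
action_b = [+2.0, +2.0, +2.0]   # moderate pleasure for all three

# An imperfect but usable comparison: 4.5 vs 6.0, so action_b looks better.
print(net_utility(action_a), net_utility(action_b))
```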

Act utilitarianism and rule utilitarianism shouldn't exist. There should only be utilitarianism. Act utilitarianism is wrong because the sum of incremental analyses is different from, and inferior to, a single greater analysis when a suitable greater analysis is possible. Rule utilitarianism is wrong because it isn't fundamental enough. Utilitarianism sometimes says to create rules, and it says what the rules should be, how they should be enforced, and what the punishments for violations should be. Utilitarianism usually says to follow the rules (whether one agrees with them or not), but not always. Similarly, average utilitarianism and total utilitarianism shouldn't exist; there should only be utilitarianism. We must appeal to our judgment to determine what would be optimal.

Utilitarianism and overconfidence are a bad combination, but utilitarianism is not to blame. In determining what is right, it is important to keep in mind our great capacity to be wrong.

When dealing with the uncertainty of future events, it is an oversimplification to just compare calculated expected values of pleasure/pain for different courses of action. These probability-weighted averages don't contain as much information as the data used to calculate them. Basic utilitarianism doesn't say to seek the maximum probability-weighted mean of pleasure/pain, which would be arbitrary and susceptible to the St. Petersburg paradox (in which a gamble's expected payoff is infinite even though almost every actual play pays little). Basic utilitarianism says to seek optimal pleasure/pain.
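To make the St. Petersburg point concrete, here is a minimal simulation sketch. The game is the standard one from the paradox; the code is my illustration, not from the essay. A payoff of 2^k occurs with probability 2^-k, so the probability-weighted mean is infinite, yet the typical play pays very little, which shows how much information the mean alone discards.

```python
import random

def st_petersburg_play():
    """One play: the pot starts at 2 and doubles on each consecutive heads."""
    payoff = 2
    while random.random() < 0.5:  # heads, with probability 1/2
        payoff *= 2
    return payoff

plays = sorted(st_petersburg_play() for _ in range(100_000))
print("sample mean payoff:", sum(plays) / len(plays))  # unstable; keeps growing with more plays
print("median payoff:", plays[len(plays) // 2])        # typically 2 or 4
```

Running this repeatedly, the median stays tiny while the sample mean drifts upward without settling, which is exactly why the mean alone is a poor summary of the distribution.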

Utilitarianism is so often presented alongside potential flaws and flawed versions that it can look like it isn't the most sensible moral theory, or even a sensible theory at all. Basic utilitarianism, as described above, is the most sensible moral theory.

With people's moralities deriving less from world religions, it is surprising there aren't more utilitarians. The new moralities many people have are worse than those from the world religions and are leading to mistakes involving a lack of objectivity. One characteristic of most world religions was an independent God. When making decisions, people would try to determine what God would do, which helped them be objective. Now, instead of trying to do what God would do, many are pursuing self-interest or group interest, and pain and suffering from conflict and competition are increasing. A survival-of-the-fittest dynamic is developing.

While competition and conflict are sometimes right, survival of the fittest should be seen as a principle for non-intelligent life only. Where there is intelligent life, morality should replace (or radically alter) survival of the fittest. Survival of the fittest is about survival; it is indifferent to pleasure and pain. However, we know pleasure and pain are relevant, and in opposite ways, so morality exists. For life on Earth, the results of survival of the fittest are very different from optimal pleasure/pain. Evil (that which does not optimize pleasure/pain) is often fit for survival, so intelligent life has a purpose of reducing evil. In the words of utilitarianism, intelligent life's purpose is the optimization of pleasure/pain in the universe.