A lot of writing defending, criticising, and applying utilitarianism assumes that utility is one single unified thing; that pleasure and pain can be added together and subtracted on a shared scale. This is not the case, or at least is significantly less true than many ethicists would like to think. That fact has a lot of knock-on consequences for how we should think about utilitarian ethics.
I will first demonstrate that pleasure and pain are quite different things, and not simply the same thing with + and – signs in front of it; this is hopefully not too controversial. After that I’ll argue that there is no universally correct way to balance them against each other, and that we cannot say for any beings other than ourselves that X amount of pleasure is worth Y amount of pain.
I will show that it is possible to accept all of this and still be a utilitarian, although it probably affects what kind of utilitarian one ought to be. The biggest consequences of the view are in population ethics, and in how we ought to think about the suffering of animals, and I will sketch out some of the changes to our views which result.
Pleasure and Pain are asymmetrical, and not objectively comparable
There are a number of reasons to think that pleasure is not simply “pain times minus one”:
They have different and asymmetrical effects on our behaviour: people are generally more concerned to avoid pain than they are to pursue pleasure.
There are cases where the two seem to go together, and pleasure depends upon the presence of pain. The clearest example would be people who engage in masochistic behaviour; other possible examples could include spicy foods, and people voluntarily reading Tess of the D’Urbervilles or A Little Life.
They are processed by different, though overlapping, areas of the brain. (NB. I do not understand neuroscience and if I tried to make this point rigorous I would never get this essay written.)
I would hope that this is not a particularly contentious claim. The more important claim I want to make is that there is furthermore no objectively correct way to compare one against the other, to say that X amount of pain is worth Y amount of pleasure.
Part of me thinks I shouldn’t have to claim that, that the burden of proof should be on those trying to claim that if I give you two apples and take away three oranges, there is a fact of the matter about whether I have increased your amount of fruit. But arguments over the burden of proof are usually a low-quality substitute for discussing the actual issue at hand.
The most obvious reason to think we can compare pleasure and pain on a shared scale is that we do it all the time. If I endure an unpleasant journey in order to attend a social occasion, clearly I’m managing to compare pleasure and pain. This is all very well when the pleasure and pain are experienced by the same individual – but I don’t think we’re all operating on the same scale. For example, Brian Tomasik discusses in various places his intuition that millions of minutes of pleasure could never make up for a single minute of extreme torture. I simply don’t share that intuition, and I don’t think either of us is objectively right.
As a more pedestrian example, suppose that Smith and Jones are flatmates who have been invited to a party; they agree on how fun the party will be, they agree on how unpleasant the bus to get there will be; but Smith thinks the party will be worth the journey while Jones thinks it will not. This strikes me as an entirely possible scenario, and it doesn’t strike me that either of them has to be wrong: they simply have a difference of values[1].
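The Smith and Jones case can be put in toy numerical form. In the sketch below, both agents agree exactly on the magnitudes of the pleasure and the pain; they differ only in the personal “exchange rate” at which they weigh one against the other. All of the specific numbers and rates are my own illustrative assumptions, not anything the argument depends on.

```python
# Two agents agree on the facts but apply different personal exchange
# rates between pleasure and pain, and so reach opposite decisions.
# The magnitudes and rates below are purely illustrative.

PARTY_PLEASURE = 5.0   # how fun both agree the party will be
BUS_PAIN = 4.0         # how unpleasant both agree the journey will be

def worth_going(pleasure, pain, rate):
    """True iff the pleasure outweighs the pain, where the agent
    values one unit of pain as `rate` units of pleasure."""
    return pleasure > rate * pain

smith = worth_going(PARTY_PLEASURE, BUS_PAIN, rate=1.0)  # 5 > 4: goes
jones = worth_going(PARTY_PLEASURE, BUS_PAIN, rate=2.0)  # 5 < 8: stays home

print(smith, jones)  # same inputs, opposite verdicts
```

Nothing in the inputs distinguishes the two agents; the disagreement lives entirely in the rate parameter, which is the sense in which it is a difference of values rather than of facts.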
Non-Unified Utilitarianism
Does this mean we shouldn’t be utilitarians? I do think that it creates rather a problem for Act Utilitarianism, since it turns out that in many cases there simply is no fact of the matter about whether a given action is good: if the action increases the total of pleasure but also the total of pain, or if it reduces both, then unless the gains and losses are experienced by one single agent we have no way of judging.
I do think, however, that one can be a rule utilitarian. Instead of having one single command to maximise utility, one has two commands: to maximise pleasure, and to minimise pain. In cases where we deal in trade-offs, we can require that you apply a consistent “exchange rate” between the two, in a manner similar to Dutch book arguments in epistemology[2]. There isn’t an objectively correct exchange rate to apply between pleasure and pain, but if you don’t personally apply a consistent exchange rate then you’re liable to end up in a position which is Pareto inferior to an achievable alternative.
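The Pareto point above can be made concrete with a toy “money pump”: an agent who buys pain relief at one exchange rate but sells it back at another can be walked, trade by trade, into a state that is strictly worse on one dimension and no better on the other. The agent, rates, and trade sizes below are my own illustrative assumptions.

```python
# A toy money pump: an agent applying inconsistent pleasure/pain
# exchange rates accepts a cycle of individually acceptable trades
# and ends up Pareto-worse off. All numbers are illustrative.

def accepts(d_pleasure, d_pain, rate):
    """Agent accepts a trade iff its net value is non-negative,
    valuing one unit of pain as `rate` units of pleasure."""
    return d_pleasure - rate * d_pain >= 0

pleasure, pain = 10.0, 10.0

for _ in range(3):
    # When buying pain relief the agent uses rate 3 (pain feels weighty):
    # paying 2 pleasure to remove 1 pain looks like a bargain.
    if accepts(-2, -1, rate=3):
        pleasure -= 2
        pain -= 1
    # When selling it back the agent uses rate 1 (pain feels cheap):
    # taking 1 pain back in exchange for 1 pleasure also looks acceptable.
    if accepts(+1, +1, rate=1):
        pleasure += 1
        pain += 1

print(pleasure, pain)  # pleasure has fallen; pain is unchanged
```

Each round of the loop costs the agent one unit of pleasure for no net change in pain, so the final position (7, 10) is Pareto inferior to the starting position (10, 10) — which is the sense in which internal consistency, though not any particular rate, is rationally required.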
Population Ethics
This implies a few changes to applied consequentialist morality. Firstly, at the more abstract end, the dream of achieving universally correct answers in population ethics looks a bit more remote.
I don’t see why one couldn’t continue doing thought experiments about people who are all stipulated to only ever experience pleasure. But there’s at least one additional step before you get to anything with useful results about the world. And frankly, if we’re willing to accept that there’s no fact of the matter about whether increasing both total pleasure and total pain makes the world a better place, maybe we should also be willing to concede that there’s probably no objective truth about whether the world is improved by increasing total pleasure but dividing it between more people such that average happiness is lower.
Animal Welfare
Secondly, this affects how we think about issues of animal welfare. It doesn’t mean we have to stop caring about the suffering of wild or farmed creatures, even when they experience both happiness and suffering. It does, however, undermine our ability to say that such-and-such a life is or is not objectively worth living compared to non-existence.
The practical consequences of this need not be all that revolutionary. Battery-farmed chickens appear to experience very little, if any, pleasure, and quite a lot of pain: bringing them into existence may indeed be Pareto-worsening. We can seek to reduce the pain experienced by other creatures without impacting pleasure – for example, by stunning shrimps immediately prior to slaughter. And we can attempt to assess the pleasure-to-pain ratios experienced by different farmed animals, and shift our meat consumption towards creatures with higher ratios of pleasure: beef and lamb rather than pork and chicken.
We can also compare the lives of farmed and wild animals. I don’t think it’s obvious whether wild or farmed bees and fish would come out ahead: farmed creatures face lower risk of violent predation, and their owners have an incentive to keep them alive year-to-year which is likely to result in at least some positive interventions; on the other hand, being stocked at higher densities leads to a higher prevalence of parasites. I wouldn’t be surprised if it comes down to the fact that farmed animals are, thanks to technological developments, increasingly able to have relatively painless deaths.
Can we give moral judgements on vegetarianism and veganism? If you want a universally, objectively true assessment of whether vegetarianism improves the world, I think my argument is unfortunate for you. However, I think you can quite legitimately apply your own personal moral sense in such cases – if you don’t think you would accept the life of a battery-farmed chicken over non-existence, then you might consider whether it’s really right to impose that life on another being. Humans will naturally disagree on this, as we disagree on many aspects of life, and fundamentally we will have to resolve those disagreements through the ordinary processes of the market and of politics. (In this context I’d note that Brian Tomasik, who has done a lot of the foundational EA work on the welfare of wild animals, is an anti-realist about both consciousness and morality, and this doesn’t seem to undermine his caring about animal welfare.)
Paternalism
Third, non-unified utilitarianism gives somewhat greater weight to personal judgement and preferences, and suggests we should be a bit less sympathetic to paternalism. If I declare that the pleasure to me of a big greasy burger is worth the pain of the heart attack it may contribute to, then standard utilitarianism says that I am in a privileged position to know whether this is true but it is nevertheless possible that I am mistaken, and there are multiple ways in which I might be mistaken. Non-unified utilitarianism doesn’t really have the resources to say that I might be mistaken about the balance of pleasure vs pain, although I might well still be mistaken about the empirical facts of how much I will enjoy the burger or how likely it is to cause a heart attack.
I don’t know how big of a difference this makes in practice: thoroughgoing utilitarians will usually favour less-coercive forms of paternalism in any case. It does however suggest that policymakers can’t simply come up with back-of-a-fag-packet estimates for the benefits and harms of a behaviour before proposing to ban, mandate, tax, or subsidise it; they ought to demonstrate that people are genuinely unaware of its benefits or costs to them personally.
[1] You might suggest that it’s something of a nonsense to argue that they “agree” on how good the party will be and how bad the bus will be, but if you think that then you probably won’t believe in any kind of utilitarianism.
[2] This is deeply hand-wavy, and it is quite possible that my argument would not really stand up if made fully rigorous.