Do social animals suffer more?

[Epistemic status: Very speculative!! Not science!! Armchair evo-psych is bad for you etc., but there are some important questions we currently don’t have a better way to try to answer, so.]

[TL;DR: The intensity of suffering, an evolved motivational state, is likely to vary even between species with generally similar levels of sentience. I describe four principles which could suggest that as a result of their evolutionary history, social animals typically suffer more than asocial ones.]

Introduction

There’s a lot of research going on about sentience and moral patienthood: which creatures are phenomenally conscious and to what extent is one of the first things to consider when figuring out what exactly deserves our moral consideration. However, consciousness itself is arguably a neutral property, much like existing as a material object is a neutral property. Even if a creature has subjective experiences, if these experiences solely consist of being aware of stuff – with no desires, aversions, or other subjectively felt motivations towards anything – it’s not really good or bad that such creatures exist or that things happen to them. Only the capacity to experience states with emotional valence makes something a moral patient (unless you insist on consciousness itself as a terminal value, which some people do of course – I think it’s aesthetically interesting and okay I guess, but distinct from morally important properties, which need to be tied to hedonic tone or motivations or preferences to make sense).

If our aim is to minimize suffering in some conventionally defined sense, it is obviously not enough to know if an animal is conscious. If we accept that life or even consciousness doesn’t necessarily imply a capacity to suffer, we need to estimate the extent to which the animal reacts to stress with subjective distress. Most mobile creatures produced by the brutal evolutionary processes we’re familiar with show clear behavioural signs of nociception when physically hurt, such as avoidance and attempts to disrupt the sensory pain signal if possible; and the closer an animal is to our own physiological, behavioural, and taxonomical type, the greater the probability that these signs really do imply subjective suffering as well, instead of just reflexive or mechanical reactions (this blatantly anthropocentric line of evidence is far from conclusive, of course; it’s just that it’s almost all the evidence we currently have).

However, assuming that suffering is a product of evolutionary processes, there are good reasons to believe that the intensity of subjective suffering varies between species just like other evolved properties do: according to their historical usefulness during the unique evolution of a given population. Even if every tetrapod has four limbs, different environments and niches have formed different uses and adaptations for these limbs. The capacity to suffer is more fundamental than that and its uses are probably more unified, but slightly different adaptations are to be expected, depending on what sort of things an animal is motivated to do and what kind of an environment it has been shaped by.

This seems likely because, contrary to the standard biology textbook view, suffering is more than just a signal of a harmful situation. Intense suffering especially is primarily a motivational state that facilitates not only direct avoidance of harmful acts and environments but also complex decisions under threat or risk, long-term learning, social investment and bonding, competition and communication, all depending on the other aspects of an animal’s evolutionary history, cognition, and lifestyle.

Behaviourally and, uh, anecdotally, it seems that humans have the capacity to suffer a lot. A defining feature of our species is the immensely complicated social behaviour we develop when surrounded by other people, and it has probably shaped our subjective experience more than any other aspect of our cognition has. So, in this post, I try to pin down some principles and hunches that suggest that a social evolutionary history in particular could produce species that suffer intensely – though significant suffering is still probably present in all conscious animals – and then take a brief look at the implications of this possibility.

The extended homeostasis of social animals

Suffering as a motivational state is typically the mental component of an animal’s homeostatic regulation, i.e. the processes that keep all the relevant physiological variables within healthy parameters. Most things that threaten your homeostasis in a way that humans have historically been able to survive when motivated to do so will cause some kind of suffering: thirst when your blood volume starts to drop, pain when a wound opens and leaves you vulnerable to pathogens and blood loss, sickness when you have ingested toxins and need to expel them. When the threat isn’t currently actual but can pretty reliably be predicted to come true unless you take physiological or behavioural precautions, your species will evolve predictive homeostatic processes. Many of these predictive processes are cognitive or emotional in nature, e.g. people often feel distress in darkness and high places – things that cause absolutely no damage in themselves, but correlate with future homeostatic disturbances.

Among social animals that habitually rely on others to survive and thrive, predictive homeostasis is extended to social relations as well, so that an individual without sufficient relationships suffers from loneliness and other emotional disturbances. Not all social relationships are homeostatically maintained: the drive to acquire social status probably doesn’t really settle around a set point or anything, as it has more to do with mating opportunities than with survival. Social belonging, on the other hand, can somewhat accurately be defined as the part of social relationships that is indeed homeostatic – maintained by feedback loops within a certain dynamic range, where a lack of it leads to negative emotions, and an excess is quite naturally dropped due to time constraints and/or social stress.
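
To make the feedback-loop framing above a bit more concrete, here’s a minimal toy sketch in Python. It is not a model of any real animal: the variable names, set point, tolerance, and all the numbers are made-up assumptions, chosen only to illustrate the idea that a regulated variable drifts, distress appears once it leaves its tolerated range, and that distress motivates corrective behaviour back toward the set point.

```python
# Toy model of one homeostatically regulated variable (say, "amount of social
# contact"); every number here is an arbitrary illustrative assumption.

def distress(level, set_point=1.0, tolerance=0.3):
    """Negative affect appears only once the variable leaves its tolerated range."""
    deviation = abs(level - set_point)
    return max(0.0, deviation - tolerance)

def step(level, set_point=1.0, drift=-0.05, gain=0.5):
    """The variable drifts (e.g. gradual isolation); distress motivates
    corrective behaviour pushing it back toward the set point."""
    d = distress(level, set_point)
    correction = gain * d if level < set_point else -gain * d
    return level + drift + correction

level = 1.0
for t in range(30):
    level = step(level)
    print(f"t={t:2d}  level={level:.2f}  distress={distress(level):.2f}")
# Inside the tolerated range nothing feels bad; once the drift pushes the
# variable outside it, distress grows until the corrective motivation
# balances the drift.
```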

As the number of things you need to consciously attend to when maintaining your homeostasis increases, so does the probability that something is missing, which plausibly leads to more suffering. In a community, your wellbeing becomes directly tied to the wellbeing of others, which again increases the number of things that can go wrong: not only do you care about how others treat you to ensure your direct wellbeing, but their interests are now inherently important to you too, so that you feel some of their pain even when it’s directly irrelevant to you. Empathy, especially its affective aspects, is a major mechanism by which this extension of homeostatic suffering becomes possible, since it motivates you to make sure your companions survive and thrive as well, and in a silly metaphorical sense moves you closer to becoming a single organism (only with multiple simultaneous consciousnesses, and so increased maximum suffering levels).
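
As a quick back-of-the-envelope illustration of the “more things to keep tabs on” point: if each monitored need independently has some small chance of being out of range at a given moment, the chance that at least something feels wrong grows quickly with the number of needs. The per-variable probability below is an arbitrary assumption, and independence is obviously a simplification.

```python
# With n independently monitored needs, each out of range with probability p
# at a given moment, P(at least one thing feels wrong) = 1 - (1 - p)**n.
# p = 0.05 is an arbitrary illustrative number.
p = 0.05
for n in (3, 10, 30, 100):
    print(f"n = {n:3d}   P(something feels wrong) = {1 - (1 - p)**n:.2f}")
# -> 0.14, 0.40, 0.79, 0.99: the more there is to keep track of, the more
#    likely it is that something is at least a little off at any given time.
```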

Suffering and contingent social commitment

A lot of human suffering comes in the form of worry or grief over lost social bonds. Evo-psych hypotheses about the origins of social grief are based on the utility found in maintaining close relationships and seeking reunion on pain of distress (e.g. Archer 1998). When an important bond is permanently broken and little to no chance of reunion remains, the normally useful reaction becomes temporarily maladaptive. Prolonged, intense, and public displays of grief probably serve a signalling purpose as well, providing evidence that you’ll emotionally commit to maintaining a social bond: this can only apply to animals whose social attachments are contingent and based on reciprocity, individual recognition, and familiarity, whereas eusocial animals (primarily social insects) may not need to experience such loyalty towards specific individuals. The exact evolutionary processes at play are poorly understood, but it remains likely that most other cognitively advanced, conditionally social animals also experience emotional separation distress, and that the accompanying behaviour aids an individual’s commitment to maintaining social bonds.

All of this should work in synchrony with the social homeostasis model sketched above. Indeed, Hofer (1984) found two distinct behavioural patterns in nonhuman animals separated from their companions. An immediate, acute reaction to a specific loss appears as distress, searching, preoccupation, and even aggression. This reaction quite naturally helps an animal to reunite with its lost companion should it still be possible. Another reaction develops afterwards or simultaneously but over a longer time period, and involves passivity, inactivity, and disturbances in biological rhythms, presumably in the absence of familiar sensory regulators provided by the lost companion or group. This is probably not directly adaptive in itself, but a byproduct of the otherwise useful state of being able to consistently rely on cues from others (possibly persisting again as an exaptation due to signalling or other indirectly adaptive reasons).

Some human grief can also be modelled as a combination of these two processes, but might there be a difference between the typical separation distress that many social animals feel, and the cognitively heavy, temporally complex pain that human social suffering involves? Some intuitions suggest that animal suffering, even when subjectively experienced, is qualitatively different from human suffering, since most animals lack the psychological layers of future-directed worry, advanced processing and rumination, and the resulting elements of subtle despair and hopelessness that intense human suffering typically involves. I’m not sure how likely this is regarding suffering in general, but I do think long-term social suffering is at least greater in humans, who rely on personal social commitments more than most other animals do. There are tons of unexplored nuances both in human grief and animal separation distress, but the strongest function may simply be that by making social relations part of the necessary conditions we feel miserable without, we successfully blackmail ourselves into seeking company, and also prove ourselves loyal to others in the same predicament.

Violence: probably literally the worst thing

An obvious source of distress to social animals is intraspecific violence, which to the victim is likely to differ dramatically from other kinds of tissue damage. For an asocial animal, violence is not really an applicable concept: literal violence requires social intentionality of some sort. Much like a shark attacking a human isn’t really violent (just uh, various other kinds of suboptimal), for asocial animals conspecifics and other animals alike are basically forces of nature that may or may not harm you according to their non-negotiable whims. It’s typically useful to fear these things and of course suffer when damaged by them – but embedded in a social lifestyle where the risk of game-theoretically regulated intentional harm from others is possible but not inevitable, and often dependent on your communication and the community around you, suffering serves more functions than that. So new layers of intense suffering have developed to organize and guide individuals in violent populations – in addition to tissue damage, violence causes purely psychological harms like terror, long-term anxiety, disgust and distrust, hate, extreme despair, and of course vengefulness and the perpetuation of conflicts, again depending on the species in question.

Now, we can’t directly compare suffering levels even between humans attacked by other humans and humans attacked by lions or harmed by tornadoes. We do seem to fear and avoid intentional violence significantly more than other sources of harm, though: guns, murderers, and terrorists cause widespread panic and behavioural changes, whereas similar non-intentional harms are easier to bear, more quickly forgotten, and rarely get people to instantly rally around political causes or radically change their habits or anything. A stronger argument for violence feeling worse than non-social harms is that post-traumatic stress disorder – presumably the long-term consequence of going through something maximally upsetting and horrifying while equipped with a predisposing genetic makeup – is disproportionately often seen in humans after interpersonal harm, as opposed to accidents, natural disasters, and especially diseases (Kessler 1998). There are other possible explanations for this depending on the actual etiology of PTSD, but the simplest explanation seems to be that violence is indeed worse than other forms of damage, suffering-wise.

For social animals that have a grasp of humans as intentional agents, it is possible that humans hurting them is also experienced as violence of some kind. This grasp doesn’t necessarily mean that they have a solid theory of mind or anything, just that they model humans as agents a bit like their conspecifics – something you can somewhat personally trust or distrust based on external cues, and possibly communicate with. Animals that are known to exhibit PTSD-like behaviour after human mistreatment include dogs, elephants, chimps, and possibly cetaceans, all animals with complex and relatively personal, communicative social structures. Most of the research on animal PTSD is based on captive animals and human mistreatment, so we currently don’t know what conditions typically lead to similar pathologies in the wild, if any – but being social and somewhat cognitively advanced again seems like a prerequisite for this type of suffering. Even if not all of them process violence as intensely as humans do, it seems plausible that for these vulnerable animals, it’s also more traumatizing than other kinds of tissue damage. Since it is such a powerful way to build hierarchies and organize group behaviour, violence and its threat play a part in the life of most other social creatures as well, and probably add a few extra layers of stress and suffering to every unstable social situation even among less cognitively sophisticated animals.

Also, the best way to reduce the lethality of intraspecific violence is probably having a clear signal of submission, i.e. a credible display of sufficiently intense pain; among social organisms, showing suffering is a straightforward way to signal many other things as well, such as a need for help from allies when challenged. Asocial animals receive no benefits from displaying their suffering and typically have no purposeful external signals for communicating injuries or pain – on the contrary, being able to conceal your injuries as best as you can is crucial when calling for help is not even a comprehensible option for you and showing weakness typically leaves you vulnerable to predators. Social animals, on the other hand, usually do have signals for suffering – and since suffering more intensely makes your signals stronger and more credible, suffering more in these situations has also been adaptive to an extent.

Having friends: an exciting opportunity to suffer more than you otherwise could have afforded to

What else does suffering give you in a social environment? If your species is mostly prosocial, potentially a lot. When ill or injured, an animal feels long-term pain and distress, which discourages it from using and stressing damaged body parts and makes it keep still and use the available energy to recover – all of which also effectively prevents it from seeking food, shelter, or other necessities. Therefore, a member of an asocial species faces a straightforward survival tradeoff: prolonged and intense suffering, while protective, is also severely limited by the animal’s need to actively gather resources and defend itself. When you’re a social animal surrounded by basically sympathetic and reciprocal companions and relatives, however, this tradeoff could become slanted towards a greater intensity of suffering. If others can temporarily take care of your resource needs and protect you from threats, it suddenly becomes possible to spend a lot more time resting and recovering – as long as you’re in the right motivational state to do so, e.g. preoccupied with how unbearable your current existence is.

This is a more fundamental mechanism than any of the others above. Just having more potential homeostatic disturbances doesn’t necessarily mean they are experienced as more frequent or more intense suffering: maybe internal motivation levels are roughly calibrated between species so that where a social animal feels extreme agony over separation from its companions, an asocial animal can afford to be more sensitive to hunger or thirst since it naturally lacks the things social animals are debilitated without, and so feels similarly intense suffering under even a milder starvation threat. Maybe violence is the worst thing that can happen to a social animal, but asocial animals again experience similar fear and pain from natural causes, which social animals just have to rank as lower pains on a basically similar gradient in order to stay functional. But having prosocial companions could shift the absolute cap of your species’ suffering just by allowing individuals to wallow in all-consuming pain and misery without simply dying of hunger in a couple of days. (Friendship is magic.)

The usefulness of this hypothesized system varies a lot between different species. Clever and reasonably adaptive animals, such as humans, have some ways to protect an individual from harm and many to bring them suitable food and water when necessary. Elephants – while smart, prosocial, and exceptionally good at weighing several tons and so protecting weak herd members from predators – are grazers and browsers with a nutritionally unimpressive diet. This makes it immensely difficult for others to bring an injured individual all the food it needs, so at least some level of activity needs to be maintained even when ill (one should hope this means that an elephant’s maximum amount of physical suffering can’t be as intense and devastating as it sounds to us). A good but heartbreaking rule of thumb might be that whenever we hear an uplifting story about an animal taking care of its weak or injured companion, we’re also looking at a species capable of experiencing the worst feelings of suffering in the biosphere. Maybe.

Implications and conclusions

Should we conclusively find that an animal’s natural degree of social behaviour is a good predictor of how much it suffers in various situations – both social and nonsocial – we would obviously have better tools for building policies and other solutions to effectively reduce suffering. Future research confirming similar conclusions could direct our attempts to improve animal welfare: for example, seafood is currently estimated to be one of the most suffering-dense protein sources to consume due to the small size of fish (which leads to a low meat/consciousness ratio) compared to cattle or pigs – but since the large herbivorous mammals typically grown as livestock are very social, their capacity to suffer may be greater quite independently of their other cognitive capabilities, which might eventually turn out to outweigh their large size. Chickens, unlike fish, have a very social lifestyle, which combined with their small size would make them one of the absolute worst animal-based foods to eat. Still, I’m wary of this approach to animal welfare now that veganism is heavily trending anyway (I hope? At least in Finland?) and our knowledge base is so severely lacking. It’s probably best to just ride the wave and focus on advocating better plant-based protein sources as well as in vitro meat as soon as it becomes a real option.
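
To spell out the implicit arithmetic behind the suffering-density comparison above: something like (individuals consumed per kg of food) times (per-individual capacity to suffer), where sociality might act as a multiplier on the latter. Here’s a toy sketch with completely made-up parameters – none of them are empirical estimates – only to show how such a multiplier could offset the size advantage of large livestock.

```python
# Purely hypothetical, illustrative parameters; nothing here is an empirical estimate.
# suffering per kg ~ (individuals per kg) * (baseline capacity) * (sociality multiplier)
animals = {
    #             kg per animal, baseline capacity, sociality multiplier
    "small fish": (0.1,          0.2,               1.0),
    "chicken":    (1.5,          0.3,               1.5),
    "pig":        (80.0,         0.8,               2.0),
    "cow":        (250.0,        0.8,               2.0),
}

for name, (kg_per_animal, capacity, sociality) in animals.items():
    individuals_per_kg = 1.0 / kg_per_animal
    density = individuals_per_kg * capacity * sociality
    print(f"{name:10s} suffering per kg = {density:.3f}")
# With these toy numbers the small-bodied species still dominate, but the
# sociality multiplier narrows the gap; different (equally speculative)
# parameters could reorder the list entirely.
```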

What about wild-animal suffering? Ecosystems whose fauna primarily consist of solitary herbivores may be more desirable than systems with lots of social animals even in the absence of predators, as social animals may react to other inevitable disturbances with greater suffering. When designing interventions to aid animals in the wild (emotionally compelling small-scale example here), social animals should possibly be prioritized, and long-term ecoengineering solutions developed for these species in particular. Other people have written at length about possible utopian interventions to manage suffering in wild ecosystems, and while it is currently unknown how feasible these goals are and what the relevant timescales could realistically look like, more research on the nature of suffering and the differences between species is probably useful before choosing any interventions becomes relevant, to make sure we actually prioritize reducing suffering.

A practical limitation to making use of these principles, should they turn out to be true, is that even minimally social animals must usually have ways to communicate and get along with conspecifics in order to mate, and many otherwise asocial animals still care for and invest in their offspring for a while. So, while some animals are clearly exceptional in their social bonding and commitment, purely asocial animals can’t really be found to use as points of comparison, and some of the principles above may apply to a varying extent to most sexually reproducing animals. Another complicating factor is that in many species the sexes are behaviourally dimorphic, with females typically more social than males, who may even live entirely alone. Is a significant sex difference in suffering plausible? There are a lot of confounders here, but human data says yeah maybe – gender differences in sociability are comparatively small in humans, though, and so is the difference in experienced pain intensity between women and men, so the signal isn’t exactly clear.

Anyway, to reiterate, there are four main mechanisms that could cause a social evolutionary history to produce species that suffer more than otherwise similar asocial ones: 1) the extended homeostasis principle based on the fact that more things can go wrong (hence feel bad) for a naturally social animal simply due to the increased number of things to keep tabs on, 2) social commitment, which is purposefully fueled by psychological pain such as grief, worry, and empathetic pain, 3) purposeful violence, which only happens among social animals and plausibly feels subjectively worse than other kinds of tissue damage due to complicated signalling and group organizing things, and 4) the fact that fully utilizing the rest-and-recovery functions of suffering when physically injured or ill only becomes possible when your resource gathering needs can temporarily be covered by friends and you can afford to stay preoccupied with the pain. Due to the hypothetical nature of these principles, they are probably not super relevant to practical ethics or policy decisions or anything really until we know more, but maybe consider forever being extra nice to dogs, the blessed animal we purposely bred for maximum personal sociability, cooperation, dependency, and companionship. Thank you.

_____________________________

(All photos mine – feel free to use them as long as you mention the possibly absolutely brutal suffering levels involved e.g. in the life of a dwarf mongoose. Also if you know anything more empirically informed about the subject, let me know!)

A brief history of humans trying to pretend that suffering is actually OK: Analogies between religious theodicy and secular justifications

[Epistemic status: I have no deep background in theology or philosophy of religion, so this isn’t meant to be a very comprehensive or detailed picture, just scratching the surface based on a few papers and lectures. Expect some major oversimplifications and a couple of misunderstandings.]

[TL;DR: Theodicy: do not do the thing.]

Introduction

Theodicy was originally the religious project to justify, explain, or at least find ways to accept the intuitively unacceptable suffering we paradoxically see in a world supposedly ruled by a benevolent, omnipotent deity. Recently the concept has metaphorically been expanded to also encompass a more general, secular version of itself: the age-old human tradition of seeking meaning in or justifications for suffering in general, not just because these explanations are required by some theistic ontology. There are a lot of similarities in how people try to justify suffering within these two frameworks (though the projects seem to fail for different reasons) and the religious search for a viable theodicy has certainly influenced the justifications we now see even in reasonably secular cultures, but I suppose it’s fair to assume that most of the motivation is rooted in a deeper, more universal need for a coping mechanism, not so much in some lingering influence of specific religious memes.

Theodicy is distinct from defending theism against a fundamental logical incompatibility between God and evil, and much more interesting, especially from a secular point of view. We, too, are beings who to a great extent seem to tolerate evils we could at least potentially eradicate, so I guess in a sense we have almost as much to explain as a hypothetical benevolent, omnipotent deity has. The purpose of this post is to examine typical secular theodicies by comparing them to existing theophilosophical attempts and their critiques (obviously in the light of a secular ontology), because the large body of work surrounding religious theodicy could shed some light on the secular approaches as well.

Importantly, the consensus currently seems to be that no satisfying religious theodicy has actually been found, and that anti-theodicies – various explicit flat-out refusals to explain, justify, or even forgive God, especially prevalent among Jewish theophilosophers post-WWII – are the closest a theist can get to a solution. The project of theodicy itself is often seen as rotten and immoral; many go as far as to assert there can be no morally sufficient reasons for God to permit a world as evil as ours. The Finnish philosopher Sami Pihlström, for instance, argues that morality is more fundamental than metaphysics – no matter how mysterious the ways in which deity so-and-so works, or how feeble our rational capacities, we should have enough confidence in our moral sense to abandon a project this bizarre and instead take suffering and its victims seriously even if we subscribe to theism. And if anti-theodicy is the primary way theists have to deal with suffering, if even a fundamentally incomprehensible, all-powerful entity can’t really save the idea that suffering in itself is ultimately meaningful somehow, what hope can a secular morality have for preserving it?

Secular theodicies: some requirements

Anyone who has ever earnestly advocated the abolition or dramatic reduction of global suffering in almost any social setting has probably met some major resistance and a colourful bunch of common-or-garden theodicies. Some of them are rooted in low-level misunderstandings, such as the notion that pain as a physiological process is a necessary warning signal (so our current levels of overall suffering are somehow optimal), or that abolishing suffering is necessarily basically equivalent to wireheading, or that prolonged boredom or existential dread isn’t really suffering, or will for some other reason be preserved and intensified, when the robotic abolitionists get their inhuman project off the ground and nothing will feel meaningful to anyone ever again. But even when people are roughly on the same page regarding these issues, the idea of reducing the biosphere’s overall suffering sounds extremely alarming to many people – probably due to its unintuitiveness and the immensely important role that suffering has historically played in our emotional meaning-making machinery. Dissecting this discomfort is useful both instrumentally and theoretically: in order to effectively advocate reducing suffering we obviously need to understand the counterpoints, and even more importantly, these counterpoints could eventually indicate something we’re currently missing about the functions of suffering.

All in all, though, it seems that comparing all the apparently futile religious theodicies with secular justifications for suffering mostly just reveals how weak the enterprise in general is. If a natural framework could reasonably justify the suffering we see in the world, centuries upon centuries of theodical philosophy would not have been needed in the first place, or they probably would at least have resulted in stronger conclusions than the ones we’re currently stuck with – basically yielded some acceptable general justifications disguised as religious ones. Even more damning than the lack of viable options is the conclusion accepted by many modern theophilosophers that it is immoral and possibly downright bizarre to even try, because the evil in our world is so evidently so bad that no benevolent God could ever be able to justify its existence.

So what would a viable secular theodicy need to explain? Among other criteria, religious theodicies can be classified according to the range of evils they tackle (Trakakis 2008). Why must there be any suffering at all? Why must there be purposeful evil, or naturally occurring accidental suffering? Is the current amount of suffering also necessary or justified? Is there a justification for every single instance of harm? All of these questions can be applied when searching for a secular theodicy as well: any sufficient justification for not reducing suffering will need to respond to these points (except perhaps to the last one, since micromanaging individual instances of suffering isn’t currently feasible for humans, so some collateral damage may be necessary).

Another perspective that usually has to be addressed (again according to Trakakis) concerns the nature of the benefits suffering is supposed to result in. In a theistic ontology, the potential benefits are different than in a secular one, of course, but some relevant principles remain. Suffering should at least be causally or logically connected to the resulting goods: if we want to argue that horrifying pain builds character, we should be fairly confident that it really does so, that similar character-building properties can’t easily be found elsewhere (with less of the, you know, horrifying pain), or better yet, that the suffering is absolutely necessary as a foundation for an ideal character. If this condition is satisfied, we now need to assess whether the benefits gained are somehow greater than the suffering endured: this is a tall order, for imagine the greatness of character that is needed to compensate even for the fairly typical everyday atrocities in history or in the present. Even if you could make this case for some humans, which I don’t think you could tbh, consider the pain felt by animals with no capacity for anything like character-building (if I find fifty righteous fruit flies tho).

The greater good approach

This brings us to the most common approach to theodicy, which probably covers the vast majority of both religious and secular justifications for suffering. The main point is simple: something really is worth all the suffering we endure, and suffering is likely to be the only way for us to achieve it. Candidates for this good include virtue or character, personal growth, close social relations, artistic inspiration, a sense of meaning, and even positive emotions in general – in a secular ontology, people will probably glare at you unless you can give an explanation of what exactly this benefit is and how it’s supposed to be related to suffering and also worth it; in a theistic one, you have the bonus option of just trying to convince us that there surely is such a benefit, it’s just mysterious like that, and also adding something about how souls need to be forged in the crucible of magic suffering in order to become worthy of the heavenly afterlife or something. Neither of these has so far been a satisfactory response to anything but fairly mundane or trivial pains on a scale from stubbed toes to genocide. Pihlström protests against any attempt, religious or otherwise, to justify intense suffering from the outside in this manner: if suffering does indeed result in something sufficiently valuable to make it worthwhile, it should only be up to the victim to decide whether or not it really does – other approaches trivialize the evil and the victim. This makes epistemic sense, as we don’t really have the subjective knowledge to assess the intensity of anyone else’s suffering. If we did, though, and if the benefits were gained by someone else, a utilitarian case could be made for justified suffering even when the victim doesn’t super agree.

Some suffering obviously does lead to good things, even to stuff that’s quite clearly worth it all. Maybe some kind of a contrast between, say, sadness and happiness really does enhance the overall experience. And maybe a genuine chance of failure and disappointment really makes it feel more meaningful to strive for nice things in life. And close and committed social relationships probably do require that you feel some distress when you lose a loved one. However, this is entirely consistent with accepting that too much of a bad thing is in fact a very bad thing, and that there are forms of suffering that are entirely unacceptable in relation to the benefits they result in. Many kinds of distress actually make you a worse person: being in pain and stressed out makes it harder to focus on anything except your own personal survival and well-being, often even after the situation improves. Surviving a hardship makes you less empathetic to other people going through it later on, and so on.

There are many ways to assess this approach empirically, which is what any secular morality needs to do of course. Whatever the benefits are, they probably don’t scale ad infinitum with the suffering we experience; otherwise we would find people just advocating MAXIMUM SUFFERING, which maybe we do, I don’t know. This and common decency suggest that the current, horrifying amount of global suffering has not satisfactorily been proven optimal and hence justified, and that even if there are some hardships we need to go through in order to grow as human beings or something, people being brutally murdered or billions of sentient animals dying of thirst and infections everywhere all the time are not necessary properties of a world even if we also want it to have grown human beings. Also, the fact is that any benefit brought by distress can only be determined afterwards. People avoid intense pain, and wholeheartedly approve of others avoiding intense pain, even when the post hoc narrative sometimes turns out to be that it was all worth it in the end. This looks a lot like the benefits of non-trivial suffering are mostly accidental, the overall tradeoff a net negative, and suffering isn’t a reliable way to gain anything valuable at all (with some specific, typically low-intensity exceptions – in which the suffering usually is more of a byproduct than the actual cause of the benefit). The dramatic ways in which the victims differ throughout the biosphere further reduce the odds that current suffering levels are fine: it seems extremely implausible that there is some suffering-benefit tradeoff that applies to every animal taxon, or otherwise renders all of the suffering we know about somehow acceptable.

The agency/free will approach

Another common approach is based on agency or free will: religious theodicies of this type either tend to claim that it’s logically impossible to be sentient or good without the possibility of evil (i.e. wanting to harm others), which doesn’t fly for multiple reasons this margin is too narrow to contain, or that good can only be meaningful if it’s a genuine choice, or that free will is otherwise more important than other beings not suffering (again for soul-forging purposes probably or because people need to make an active choice to remain close to God or something). Typically this also vaguely implies that God isn’t the direct source of the evil we see, and hence not really responsible for it: suffering only exists because humans misuse their agency.

From a secular point of view, I’m not sure what to make of it – I don’t think people place a lot of value on folks in general being able to kill each other and just not choosing to do so. I guess people do see value in freely choosing to be good when it’s just about them, but as evidenced by the self-centered nature of this judgment, this has more to do with virtue signalling and moral competition than with freedom-to-cause-or-not-cause-harm as a value. I also don’t think this applies to many major forms of harm; I, for one, have never congratulated myself for not severely beating people up in the subway, or for not having any desire to do so.

The secular version of this theodicy is sort of a subtype of the greater good approach above. So what goods would we lose if, starting tomorrow morning, people were unable to significantly harm each other for no good reason? I’m not even sure this would reduce our overall autonomy. In a sense, a great deal of violence is already rooted in impaired agency – people rarely choose to lead a life of, say, gang violence or war, as long as there are reasonable and realistic alternatives (building a life of order out of such chaos is extremely difficult but people still tend to prefer to attempt this when given a chance, whereas choosing a life of absolute chaos when living comfortably is extremely easy, yet few people choose to do so). Of course, there are disagreements about what kinds of suffering you are justified to cause as a necessity to preserve e.g. your social autonomy, but again, the evil or suffering itself isn’t needed for you to be autonomous. (The concept of autonomy and genuine agency in a social environment running on human brains is, in any case, probably too muddled to provide anything useful here.)

Another shortcoming of this approach is that a lot of suffering is still caused by diseases and natural disasters; so maybe you inexplicably want people to be able to maim each other at will (though they should still be stopped, and also they belong in prison afterwards, let’s not be unreasonable here), but there’s tons of suffering besides human evil. This is also a counterargument to Alvin Plantinga’s free will defense.

But autonomy is often invoked as a justification for suffering in the other direction as well: since people tend to place some value on their past suffering, and a lot of it has very genuinely been valuable to them, someone wanting to reduce or abolish suffering threatens many of the things they currently find meaningful, or the struggles and choices they made to be able to get through it. I don’t see why this isn’t a reasonable justification for some hardships and pains: again, if there are painful things people generally are glad to go through, or if there is an apparent relationship between these things and positive outcomes later on, maybe these forms of suffering shouldn’t be eradicated; but maybe an alternative should still be offered for people who would rather choose not to go through them, you know, because of the autonomy stuff and all. Also, this is again not a plausible argument for intense suffering, or credible in the presence of burning children, as Rabbi Greenberg more eloquently put it. Also also, animal suffering is not properly justified by this theodicy any better than by the more general greater good approach above: even Darwin lamented the suffering of wild animals and found it irreconcilable with the concept of a benevolent God, and didn’t seem to glorify the freedom of wild creatures in the midst of it all.

The “Best of all possible worlds” approach

This theodicy is also pretty well-known, presented by Leibniz in the 1700s, and it’s pretty much exactly what it says on the tin – out of all possible universes, God chose the one with the best conditions and actualized it, and since he is obviously good and reasonable, everything’s basically fine by definition. Moving on without comment, a common secular analogy is rooted in the powerlessness of mankind: if there is no God, there’s also no way for anyone to directly make things better without the possibility of everything backfiring horribly. There may be terrible things going on in the world, but there’s no way we can help it – this is the best we can do.

The solution here, it seems to me, is to tirelessly gather more information and power, not shrug and turn your back on a world full of unimaginable distress you could at least help alleviate. I know, I know, there are massive coordination problems we haven’t really solved, and fixing even most of the ways in which the world is bad currently looks like an intractable project, but at the same time everything is making some sort of progress, people are doing a lot of good with whatever they’ve got, and though the change is slow, we keep making the intractable tractable in surprising ways all the time. This entire theodicy is a lazy excuse mostly and y’all know it.

Minor theodicies, other directions, and conclusions

There are a lot of approaches in religious theodicy that aren’t really transferable to a secular framework, such as all the Original Sin stuff and the related karmic explanations, all of which mainly try to shift the responsibility onto us mortals – uninteresting now that we already accept it. There are also some justifications that are mostly just seen in secular contexts, such as wanting our experiences to be authentic or real in some usually poorly defined but intuitively natural sense, and thus wanting to retain distress almost as a terminal value because it’s part of the authentic human or animal #lifestyle. This is horrible and fundamentally incoherent with everything, but I get it: there’s a chance that while carelessly getting rid of some traditional human stuff you throw away something valuable as well. None of these seem to fare better than the ones described above when asked to respond to all of the reasonable requirements.

What I’m interested in right now is suffering as a social motivator, though. As mentioned above, it’s plausible that the implicit fear of intense social distress is such a major part of human social dynamics that abolishing it or allowing it to become voluntary would change the way we have to approach human relationships and require us to strengthen other sources of emotional commitment. There are close social bonds without super notable suffering even when the bond eventually breaks, but at the same time, the most distressing events of a typical first world life are social losses of different kinds, and this might be something people will generally want to retain for complicated sentimental and social reasons. Again, this is not going to lead to a satisfying theodicy even if we only wanted a narrow, anthropocentric one, but I think the relationship between suffering and social bonds is worth investigating before the hypothetical future where abolitionism or dramatic reduction of suffering becomes feasible.

Anyway, I realize that most of the rejections above are based on pretty intuitive moral judgments about what an acceptable justification should look like, and some people will obviously find them more persuasive than I do. I would kind of like to do more research on the subject and write up a more rigorous analysis of it, though probably focusing on the secular justifications to an even greater extent, since a deeper understanding of the religious approaches doesn’t seem very useful after this point. But it seems like the reasons people so strongly oppose reducing suffering aren’t very well understood right now: many of the individual arguments are trivially kind of weak, but the discomfort remains. Clarifying this issue and some related concepts could be really useful in understanding human values.

On mind-reading

I feel that explicit communication of preferences and emotions is frequently a bit overrated as an ideal habit. Obviously, clear and open communication is invaluable in most intentional social situations, but it’s also a common (and less frequently addressed) failure mode to undervalue how little you need to explicate when you’re being understood effortlessly on an intuitive level.

The subcultures I vaguely identify with and interact with tend to be especially fond of explicit communication over mind-reading. This could be because many people roughly in this category (nerdy, analytic, thing-oriented) would seem to be somewhat below average at intuitively reading other people, which could make it more difficult to see how well mind-reading works when it works, and in some cases because empathy and related concepts are disvalued as a result of this (and even seen as fundamentally opposed to systemizing and rationality). Dichotomies such as the empathizing/systemizing divide in Baron-Cohen’s work on autism contribute to these attitudes, and I’m guessing it’s not implausible that there’s something to this divide in how the human brain works, but these thinking styles being inherently neurofunctionally antithetical to each other to the extent that empathizing should deserve its irrational reputation isn’t something I would bet a lot of money on (except possibly on the level of individual situations).

However, in many social environments I hang out in both online and in person, the culture has developed a firm appreciation of explicit communication while half-ignoring that explicit communication sometimes is actually genuinely worse than the nonverbal, gut-level understanding it enhances and replaces, that it certainly takes more effort from one or both of the parties in many situations, and that many people would probably benefit from cultivating and trusting their skills in intuitive empathy more than from being told that communicating every preference explicitly is the only good way to build and maintain healthy relationships (and expecting anything else is ridiculous and just causes silly problems to irrational people who expect some sort of magical mind-reading from others).

This doesn’t mean that all functional relationships require high levels of empathy, of course, and ideally the more empathetic people should of course accommodate those who require more verbal information about other people’s internal states. But in close relationships especially, you may run into a major compatibility issue where one person expects their intuitive signals to be understood because empathizing is a fundamental and important aspect of how they think, and the other person kind of scoffs at this and genuinely believes that the more empathetic party is demanding impossible, supernatural levels of mind-reading – again, because this is how their thinking generally, kind of fundamentally works. And this may not always be solved just by increasing explicit communication, because it in turn will quickly exhaust the person who possibly has spent most of their life not needing to describe their basic emotions and preferences to other people, and this is a form of labor that really really drains their energy. (I have on a few occasions been super exhausted by people who have wanted to have this great and healthy explicit communication thing with me, and I haven’t seen what the root of the problem was until years later, because of course explicit communication in every situation is the most important mark of a healthy relationship, and it would be silly to expect anyone to read my mind, right?)

In conclusion, the way discussing every issue explicitly is valued over everything else prevents many people from seeing that a close relationship they are trying to build with someone might just never work as well as it would with someone else, because of this difference. Lots of explicit communication is not always a sign that your relationship is great or even functional; it isn’t what’s valuable in itself, being able and willing to respect each other’s preferences is. Lacking this, looking at the relationship and going “yup, gotta increase verbal communication” is sometimes a patch to fix something that wouldn’t have to be broken in the first place. Similarly, trying to improve your empathy levels to fix this may also not work out, depending on the extent to which empathy is part of your congenital personality (and I’m sure many (most?) subcultures also demand exhausting accommodations from the people who would prefer very explicit emotional sharing – it’s just not something I run into as often as I see the anti-empathy sentiment described here). I’m not sure I have a good solution at hand, but respecting other thinking styles and even trying them out to the extent that you can will probably not hurt, as unsatisfying and insufficient as it sounds.

Re: “We just don’t know enough about ecology to reliably prevent wild animal suffering without causing more damage in the process”

Say there’s a horrible moral disaster going on at the moment that you (1) know about, (2) know something could at least plausibly be done about, (3) know we don’t currently have enough information to safely do much about, but (4) know how to collect at least some relevant and potentially useful information about. This is not a very difficult problem: if you know what kind of information could be helpful to alleviate the issue, the first step in alleviating the issue is in fact collecting more information, not just deciding whether or not you should help based only on your existing knowledge.

A simple miniature version of the same situation is knowing someone vulnerable is lost in the freezing cold forest area nearby, knowing you could (with your better navigation equipment and warm clothing) probably track and save them if you were there, knowing you wouldn’t be able to find your way to the forest from where you’re currently standing, but also knowing you have a smartphone you could probably use to find your way there. Ignoring other things, obviously you should consult the smartphone or otherwise seek the necessary information to help out the person in trouble: you may not immediately know the best map application or where your winter boots currently are located, but there are many ways to increase your relevant knowledge base here, and thinking about it for a while instead of dismissing the issue is probably going to help. If we expand the analogy to include the rest of civilization as well, there’s also the helpful official people with dogs and helicopters: maybe you could encourage them to do the job in case you think it’s not where your comparative advantage lies. What you probably wouldn’t do is shrug and accept that someone will definitely die out there just because there are multiple steps and some uncertainty in the process, and you don’t immediately know how to do the object-level helpful things.

When talking about wild animal suffering, all but the most radical utilitarians and altruists are understandably super cautious about doing anything substantial. Some people simply have strong intuitions against meddling with the natural order (no, not the natural order that directly hurts human societies, just the more natural natural order, the nature, you know) which I think is a weak position for reasons I won’t go into right now because other people have written about it at length before. Some people don’t think things are so bad for wild animals anyway, probably because they feel suffering and satisfaction are somehow hedonistically commensurable and animal lives have some good moments as well (whereas I reject this view of pain and pleasure as opposite, positive and negative aspects of the same stuff – they can be indirectly compared using preferences for different tradeoffs, sure, but experience-wise they’re not simply opposites in valence, but fundamentally different (and the bad is more relevant than the good)).

But the majority of people I interact with seem to basically accept that the lives of wild animals are often really bad, nature isn’t inherently sacred to the extent that we couldn’t help sentient beings out there – it’s just that we don’t have enough information, so our hands are basically tied. Some of these people accept that there might be a point in the distant future where we could maybe do something about the issue, while some people don’t really think about this possibility either, because the task sounds so thoroughly daunting. Both responses ignore the possibility of actually immediately working to increase our understanding of ecosystems so as to build sufficiently informed, actionable plans to alleviate ecological suffering, which is exactly what we should urgently be doing, instead of just accepting our temporary helplessness.

Beware of sneaky malicious agents in moral thought experiments

In moral philosophy, thought experiments are supposed to use and clarify common intuitions to help us distinguish between acceptable and unacceptable states and actions in morally problematic situations. Moral frameworks are currently built mostly by trying to make these intuitions as coherent as possible, as people have various (interpersonally different but, all things considered, fairly convergent) intuitive responses to ethical dilemmas. In addition to these normative intuitions, the end results of a thought experiment also rely on intuitions about what exactly is *going on* in the situation in the first place, and subtle misunderstandings on the descriptive level could easily damage the reliability of our responses to the experiments.

Now, I’m pretty sure that suffering caused by violence or other malicious acts subjectively feels at least somewhat worse than suffering originating from impersonal causes, even when the tissue damage is equivalent: it certainly feels vastly more terrifying, disgusting, and unacceptable from a third-person perspective (to me at least). So, to evoke the sense of absolutely maximal suffering in moral thought experiments, it’s useful to describe the suffering as torture or violence of some kind, as this maxes out the intensity of the suffering current human brains are capable of (and capable of empathizing with, for the purposes of the thought experiment). But there’s a tradeoff here: yeah, we probably make the experiment more emotionally effective, which all things equal gives us a better understanding of the relevant moral intuitions – but we also lose some clarity regarding the full implicit consequences in these scenarios in a way that I’m pretty sure will bias our judgment.

This is because torture (or other personal, malicious causes of suffering) doesn’t actually happen in a consequentialist void where the torture is the only consequence and correlate of a choice, and no otherwise alarming or threatening events follow. Most importantly, this has basically never happened in the history of human evolution that has shaped all of our intuitions, which means they may not be well equipped to pretend there is such a void even when it’s required by the experiment. In our descriptive-level intuitive interpretation of the situation, the presence of torture or violence probably implies there is an obviously callous, probably unpredictable agent around who thinks it’s OK to hurt others, an unknown unsafe environment where such agents apparently are born, no one around with enough empathy and power to stop the violence, possibly something that poses a threat to us too, etc. (Though note that this is exactly the kind of idle evo-psych speculation your mother warned you about; I don’t know if anyone knows how intuitions really work.) Yes, often other causes of intense suffering, such as starvation or disease, also imply there’s something wrong on a community-affecting level – but probably not quite as saliently and alarmingly as personal violence does, because violence requires direct, immediate action. Whatever the exact cause is, I think there’s an intuitive emotional reaction against violence that isn’t purely reducible to its direct consequences (the suffering it causes), because it has always been such a different kind of a problem to solve, I guess?

Links & misc #5


• I took the Giving What We Can Pledge a few months ago! I’ll probably write about the reasoning behind my decision to do it now some more in the future, but basically people are bad at consistently caring about things for an entire lifetime, so if you have anything at all you care about (that you, with solid justification, hope to care about in the future as well) you should probably just sell your soul to GOOD before EVIL HEDONISTIC INDIFFERENCE has a chance to make an offer, because it will, and at that point refusing it could feel like giving up something cool you really really want. Human brains: not even once.

• Foundational Research Institute lists some basic intuitions that support suffering-based ethics.

• The inevitable evolution of bad science (original article here)

• Scott Aaronson v. Roger Penrose on conscious computers. Fave paragraph: “Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience.  I replied that, for me, the relevant issues here are ‘well below neuroscience’ in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: ‘give me a long enough lever and a place to stand, and I’ll move the earth,’ so too I can declare, ‘give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.’ The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.”

• I gave a presentation on Aaronson’s fantastic essay Why Philosophers Should Care About Computational Complexity for a philosophy class and will keep recommending this article to everyone forever.

• From the Weird Sun blog, a list of men. Also on a more serious note, Descriptive Before Normative.

• Cool project by Pippin Barr (known among other things as the Ancient Greek Punishment guy): It is as if you were playing a videogame (later applied to chess)

• “Existence values are an unusual and somewhat controversial class of economic value, reflecting the benefit people receive from knowing that a particular environmental resource, such as Antarctica, the Grand Canyon, endangered species, or any other organism or thing exists.”

• Based on this interview, Peter Unger sounds like a vaguely frustrating guy, but I’m also tempted to agree with many of his pessimistic views regarding academic philosophy (with a great many exceptions though, such as many ideas in ethics and philosophy of science maybe?). Ugh, now everything is going to feel slightly meaningless for a few days, yeah thanks a lot, vaguely frustrating guy.

• WikiHow seems like a rabbit hole you never really get around to exploring because the concept sounds kinda boring and possibly not rabbit-hole-y enough on the surface, but occasionally you’ll bump into an article on how to pretend you have magical ice powers that leaves you in a weird state of low-key wondering what else is out there, though probably still not being sufficiently motivated to find out.

Your actual terminal values are your actual terminal values

One of the coolest things I’ve internalized during the past year is this: you don’t need to have a particular emotional response to the things you deem terminally valuable, and you don’t need to assign any moral relevance to the things you happen to feel, unless you endorse these feelings as part of your moral framework. You get to decide what your terminal, inherently important values are, with no particular obligation to infer them from simple unprocessed gut feelings, emotions, or other stuff outside of your direct conscious analysis and control. Sure, all sorts of affects and intuitions are going to influence your decisions and form the basis of your motivation, but still, to what extent you endorse a given potentially morally relevant feeling and how much weight it gets when embedded into your explicit values is something you get to decide for yourself. And once you have a somewhat satisfactory set of terminal values, you don’t need to care about anything other than that (as well as the identifiable instrumental goals that connect to it, which of course is a lot to care about, but like).

This dissociation between a genuinely meaningful life and a life full of conventional, individualistic, sometimes entirely superficial but emotionally salient markers of success is to me extremely liberating because of the relative fickleness of personal happiness. The causal pathways that lead to personal wellbeing are only somewhat reliable: unfortunately, it’s easy to overestimate the value of external life changes, or to have a brain that just generally is never really satisfied with the stimuli it encounters. This is not to say that the actions you take to affect the rest of the world have guaranteed effects either, of course – but you probably have more empirical information on how they transform the world, and these effects are usually large enough to make this a safer bet despite the uncertainties involved. Compared to altruism, hedonistic psychological egoism, especially when tied to external factors, is a treadmill that leads to distress and powerlessness: you can’t decide to be loved, wealthy, or high status in the same way you can decide to show love to someone else, help out people around the world with vastly lower incomes, or act respectfully towards others. Effort-wise, it’s cheaper to do your best to make sure other people have their basic needs fulfilled than to strive to improve your own state, which is probably already subject to diminishing returns.

Not having to see myself as a morally special, intensely relevant entity just because that is how the subjective point of view I inhabit automatically feels (and because any subjective entity produced by evolutionary processes has a natural egoistic bias) lets me accept even severe personal emotional setbacks as the inevitable, but absolutely not catastrophic, infinitesimally small moral negatives that they are. After screwing something personal up I may be in pain I’m unable to alleviate, but there are other people in pain as well, some of whom I may be able to do something about. This grants me safety, but also control and power over making the world as a whole better: maybe I can take a moderately laborious action that improves my long-term wellbeing by a puny hopeless expected 0.05%; but inherently it is exactly equally important that any other person is spared from unhappiness, so a similar amount of work will usually lead to expected effects an order of magnitude larger or more if I simply reject my default intuitive human egoism (to the surprisingly large but of course not complete extent that such a rejection is possible). Again, this doesn’t mean you have direct control over how the actions you take will be transduced into qualia experienced by the entities whose lives you seek to improve, but e.g. making statistically sure fewer people suffer for weeks and maybe die of painful preventable diseases is very likely to be a good idea in this respect, and a benefit of a magnitude by which you will probably not be able to easily improve your own life.

At a glance, this seems somewhat like an inversion of Stoicism: instead of accepting that you have no control over external happenings and focusing on cultivating appropriate attitudes towards the world in order to reach an internal sense of peace, you accept that you sometimes have very little power over your internal emotional states, and turn your attention to the outside world to at least reduce the distress of other sentient beings. This apparent incompatibility is illusory, though, and I’ve found these attitudes reinforce each other quite neatly, at least if your preferred flavour of Stoicism is modern in that it’s based on empirical, statistical observations about the extent of the effects of your actions. To me, it seems the virtues of Stoic philosophy were originally designed to counter the same futile hedonistic egoism I oppose, not explicitly to advocate resignation to the world as a place full of disease, abuse, and suffering you can’t do anything to alleviate (impartial empirical altruism just sort of hadn’t been invented yet). Accepting personal emotional setbacks as something not super morally relevant reinforces a Stoic acceptance of things outside of your control, and lets you focus on achieving vastly more important things without disproportionate aversion to or fear of personal discomfort.

Personal wellbeing is probably going to be a part of the stuff you value, and it might not be very universalizable to assign no intrinsic value at all to one’s own happiness, but in the end, you still get to decide whether to restrict your subjective experience to a solely instrumental role, a parameter you only need to improve insofar as it helps you advance other features of the world. From this, it follows that even if you’ll never feel especially “happy”, even if you’ll never be successful or particularly high status or whatever it is that you feel you may so far have failed at, you may still be doing your best towards advancing the things that you truly value. Even persistent anhedonia doesn’t have to mean anything at all: even without the associated emotional reward, you can still succeed at the things that actually matter.