There are no free lunches, but organic lunches are super expensive: Why the tradeoffs constraining human cognition do not limit artificial superintelligences

In this post, I argue against the brand of AI risk skepticism that is based on what we know about organic, biologically evolved intelligence and its constraints, recently promoted by Kevin Kelly in Wired and expanded by Erik Hoel on his blog. I’m not sure I agree with the worst estimates of a near-inevitable AI doom lying ahead of us (gonna sit on this increasingly uncomfortable fence for just a little longer), but this particular family of counterarguments seems in part to be based on confusion about which principles and findings concerning organic cognition are actually relevant to intelligence in general, or to a would-be superintelligent AI in particular, and which apply only to artifacts rooted in our own evolutionary history.

This post assumes familiarity with the basic concepts surrounding AI risk, such as the orthogonality thesis and other issues with value alignment (no, we can’t just tell an AI what to do) as well as convergent instrumental goals (whatever your goals are, things like acquiring resources indefinitely, becoming more competent, ensuring your own continued existence, and resisting goal modifications are going to be necessary for reaching them). The basic idea is that once we build a useful agent with reasonably general cognitive competence and allow it to modify itself in order to become more intelligent (and so, recursively, even better at making itself more intelligent), controlling its advances and ensuring its compatibility with human existence will eventually prove difficult: a nonhuman intelligence will not share all the obvious human values we find so intuitive unless they are conveyed to it in a foolproof manner, which is tricky until we have something like a formal, complete, and consistent solution to ethics, which we super don’t.

So once more, with feeling, let’s outline the concept we’re dealing with here. Kelly argues that there is no meaningful way to define intelligence at all, and hence no framework within which we could call a human smarter than a squirrel. I don’t find this position all that reassuring, for whether we want to call them higher intelligence or just different thinking styles or something, there are still very meaningful cognitive skillsets that allow agents to manipulate the actual environment around us and fulfill their potentially alien values more effectively than humans can when pitted against our skillsets and values. Hoel suggests some good formal approaches to defining intelligence, such as Legg and Hutter’s definition based on the simplicity-weighted sum of the agent’s performance across all possible problems. In practice, though, we may not need such an abstract definition with lots of irrelevant dimensions, and can count only the performance on problems relevant to manipulating the world, whatever those might be. So below, “cognition” usually just refers to the skillsets related to predicting and influencing our actual world more powerfully than humans as a collective are able to. We should keep in mind, though, that we don’t know very well which skillsets can be used for this in the world we currently find ourselves in – human-style thinking is definitely not the only and probably not the best cognitive structure for the job.
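For reference, Legg and Hutter’s measure (their 2007 “universal intelligence” definition, quoted here from memory, so treat the notation as approximate) scores an agent π by summing its expected performance over all computable environments, weighted by their simplicity:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ (so simpler environments count for more), and V_μ^π is the expected cumulative reward the agent earns in μ. The move I’m making above just amounts to swapping this universal weighting for one concentrated on the environments that actually matter for manipulating our world.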

The other main component of getting stuff done is of course the ability to physically execute whatever has been concluded is the optimal thing to physically execute. Material issues could be the main limiting factor a young, would-be recursively improving intelligence runs into: efficiently acquiring, refining, and utilizing raw materials sounds like a trivial chore, but the macroscopic physical world is slow enough that expecting anything like explosive growth requires some pretty complicated postulations. But the takeoff doesn’t need to be that fast and there are viable ways around this for a benevolent-seeming and promising AI, so let’s drop this issue for now, assume an AI with access to the necessary material resources via some unspecified general villainy, and focus on the cognitive aspect the original articles also tackle.

***

Next, I’ll briefly concede the points that can immediately be conceded, and explain why I still don’t think they work well enough as arguments against AI risk.

1) Like Kelly says, it’s true that an agent’s potential intelligence can’t be absolute or infinite (solving every conceivable problem is indeed impossible as far as our current understanding of elementary logic, let alone physics, can tell). This is not required for an agent to pose a major threat to conflicting value systems with human-level defenses, however. If value alignment fails, we don’t know how competent an inhuman AI needs to be to reach existentially threatening powers we can’t comprehend well enough to route around (like the God of Go so eerily does within its narrow domain), but the list of relevant problem types that are trivial to an AI but insurmountable to us doesn’t need to grow all that long before we’re already looking at something really worrying.

2) The typical intelligence explosion scenario often features an exponential improvement curve; Kelly is probably correct that there is little evidence this is going to be the case, especially since hardware growth and rearrangement are presumably required for indefinite effective improvement. However, the growth rate doesn’t need to be literally exponential to pose an existential risk – with or without intentional treachery, we will still not be able to comprehend what’s going on after a while of recursive improvement, and roughly linear or irregular growth could still get faster than what we can keep track of. And since any agent that is even somewhat misaligned with our values (or uncertain about whether it is!) will try to conceal its actual competence levels until it has a decisive advantage, as soon as it has a grasp of how its interactions with humans tend to play out, the eventual results could look rather explosive if not exponential to us even if the actual takeoff takes years and years instead of weeks.

3) Kelly argues that an AI would not be able to do human-style thinking as well as humans. A superintelligence would indeed not necessarily look anything like our intelligence does, and it might be that humans do human reasoning, defined in some fairly concrete and detailed sense, more efficiently than a silicon computer ever could. Kelly also suggests that singularitarians interpret Turing completeness erroneously: they are correct that given infinite resources and time, human reasoning could be emulated on a different substrate, but mistaken in believing that this can be done effectively (e.g. with polynomially scaling resources) by anything other than a biological brain. Inefficiencies are indeed likely if you seek to emulate a literal human brain including all of its noise and redundancy, since emulations are always less efficient than hardware copies when you aim for bottom-level perfection. I don’t think we can confidently assume the complexity will prove insurmountable, though, as bottom-level perfection is not what we’re after.

More importantly, a superintelligence doesn’t need to do human-style thinking to be dangerous, much less start from emulating a human brain. It needs to get stuff done, and there are no theoretical or practical reasons for the relevant computations – which essentially consist of something like probabilistically and deductively extending and manipulating actionable information about the physical world, as well as recognizing something like goals and the complicated practical syllogisms related to them – to be out of reach of, or only inefficiently computable by, a silicon intelligence we intentionally build to solve real-world problems. Taking implementational details such as embodied cognition into account or otherwise strictly emulating human reasoning isn’t necessary in any way.

4) Kelly argues that humans are far from general problem-solvers, and that an AI’s thinking could not be absolutely general either, which is of course true. He then says:

“We can certainly imagine, and even invent, a Swiss-army knife type of thinking. It kind of does a bunch of things okay, but none of them very well. AIs will follow the same engineering maxim that all things made or born must follow: You cannot optimize every dimension. You can only have tradeoffs. — A big ‘do everything’ mind can’t do everything as well as those things done by specialized agents.”

But perfectly generally optimized or otherwise literally godlike competence is not needed to get all the relevant major things done, and there are no laws or principles that require an AI to remain less or only reasonably more competent in the relevant domains than humans are. So I agree with the maxim dictating that everything can’t be optimized, but not with the further claim that an AGI could not optimize the relevant and dangerous dimensions of problem-solving vastly and incomprehensibly better than humans can optimize their defenses: it’s just not written anywhere in the rules. Most of this post is centered on this question, since it seems to lie at the core of our disagreement.

The No Free Lunch argument against artificial general intelligence

Kelly hints at a principle which Hoel makes more explicit in his post: the idea that optimizing for one skill will necessarily impair one’s performance in something else – a general No Free Lunch principle (NFLP), which implies that cross-domain competence is always going to lose to specialization. If I interpret the fundamental premises correctly, both Kelly and Hoel believe that humans are actually doing very well in maxing out and balancing all the relevant dimensions of cognitive competence (relative to the unknown limits imposed by the NFLP) – well enough that no realistic AI could compete with us should some value misalignments arise; or that even if humans aren’t competent enough, we can always build narrow, specialized AIs to replace or beat the generalist.
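For comparison, the formal No Free Lunch theorems this principle borrows its name from (Wolpert & Macready 1997) concern search and optimization, and say that averaged uniformly over all possible objective functions, every algorithm performs identically – for any two algorithms a₁ and a₂,

```latex
\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),
```

where the sum runs over all functions f on a finite search space and d_m^y is the sequence of m objective values the algorithm has sampled so far. (I’m quoting the statement from memory, so take the notation as approximate.) The crucial assumption is the uniform average over all conceivable problems; the theorem says nothing about performance on the non-uniform, highly structured subset of problems our physical world actually poses, which is essentially the loophole the rest of this post leans on.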

Kelly suggests that we shouldn’t assume humans are not at or near the global maximum of relevant reasoning skills:

“It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way ‘above’ us, as we are ‘above’ an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us?”

He doesn’t explicitly provide positive evidence for this assertion, though, only the apparent lack of evidence for opposing beliefs, but I think he implies that the tradeoffs become too expensive quickly after we reach human-level cognition. Hoel suggests that the NFLP supports this view, pointing as an example to empirical findings about human intelligence, where we occasionally find savants excelling in some cognitive pursuits but dysfunctional in others. I think the principle is a valuable addition to the AGI debate and the limits of its applicability should definitely be explored, but the evidence presented so far doesn’t look strong enough to let us lay the concern about AI safety to rest. What’s more, there is plenty of evidence against this belief, and a lot of it can be framed in terms of the NFLP itself: organic brains must do so much that is irrelevant to general world-manipulation that there is plenty of useless, bio-specific competency for an artificial system to trade away.

Humans with a history of civilization are extremely competent against ants and most other agents we are currently up against, and it’s tempting to think that we are pretty close to optimal world-manipulators. But due to the history of organic evolution, our cognition runs on overly tangled, redundant badcode on a very local hilltop that isn’t optimized and can’t be optimized for efficient cognition. There are eventual constraints for intelligences implemented in silicon too, but it seems to me that these are unlikely to apply before they’re way ahead of us, because the materials and especially the algorithms and directions of a developing superintelligence are intentionally chosen and optimized for useful cognition, not for replicating in the primordial soup and proliferating in the organic world with weird restrictions such as metabolism and pathogens and communities of similar brains you need to cooperate with to get anything done. The next section outlines some of this evidence.

Why are there limits to human intelligence?

Most of the discussion about the evolution of human intelligence focuses on our anatomical and physicochemical limitations: on the implementational level, biological intelligence is constrained by the fragility and limited search strategies of its stochastically evolving physiology. Organic computation is a noisy, hackish electrochemical mess of lipid-constrained compartments interacting with varying effectiveness, constantly on the verge of flat-out dying because the slightest change in pH or temperature or oxygen or nutrient levels makes some relevant enzyme denature or leaves the cell without the few high-energy molecules it needs to fuel its work against various gradients of entropy. Surely silicon-based computation can also be made to sound sort of silly if we go down to the very lowest levels of explanation, but it does look like most of our dead ends are rooted in the substrate we run on.

Our neuronal patterns have immense amounts of chemical noise and compensating redundancy, and the energy costs of high-level information processing are significant to an animal like us. For many of the features associated with higher intelligence, there are clear biological reasons why they are difficult to increase further. We could arguably be smarter if we as a species just had larger brain volume in the right areas, but better problem-solving skills may have been traded away for conserving energy, dissipating heat, avoiding connectivity problems, or something like fitting through birth canals that can’t practically be larger since we’re bipedal and mobile and everything. Or, potentially, if our neural branching worked differently – in ways that unfortunately seem to cause debilitating neurological diseases when expressed excessively. Smaller, more densely packed neurons seem to make you better at processing complex information, presumably due to the decreased distance between communicating areas, but our cortical neurons are already close to the size limits where random misfirings due to spontaneously opening ion channels start messing everything up. Some findings suggest that the connections related to higher general intelligence in humans are particularly costly for simple anatomical reasons, such as the long distances between higher-level association areas, so diminishing returns dictate that a larger neocortex might not have been useful enough to compensate for the time and energy costs it incurs for a biological animal. And in a sufficiently complex brain, our axons are eventually too slow to support a processing speed compatible with functioning in the wild.

The efficiency of biological versus in silico computation is obviously an old question with plenty of literature behind it, and even in many fairly low-level tasks we still have strong advantages over supercomputers, mostly due to our massive parallelism – but we should keep in mind that the debate typically concerns timelines for artificial structures reaching our levels of efficiency, not the possibility of it. Effectively implementing similarly parallel or otherwise unconventionally organized processing on vastly better hardware may take more than a few decades – or it may not – but the resulting improvements in processing speed alone will probably be a game-changer. This is not to say that dumping tons of processing power into a system will make it intelligent, just that once a reasonably general intelligence is built, there are good reasons to assume processing power might make it superintelligent.

Bostrom calls this subtype a speed superintelligence: a mind that isn’t necessarily a lot more competent than the smartest humans on the algorithmic level, but faster by several orders of magnitude, and therefore roughly as baffling and unstoppable to us as a more effective thinking style, whatever that means, would be. This agent seems to avoid Hoel’s objections related to humans being close to the optimal balance of different areas of intelligence. Even in the very unlikely case that a superintelligence has to emulate human-style thinking, and even start out from a rather low level, in order to accomplish stuff, better hardware could well compensate for these losses in efficiency while still surpassing us by a wide margin.

***

From what I can tell, though, we can expect to get orders of magnitude more leverage from algorithmic improvements. So what can be said of our algorithmic efficiency, and the tradeoffs it is subject to?

Hoel suggests that different aspects of cognition are like sliders you can adjust, coupled to each other positively or negatively, though mostly negatively, so that getting more attentive might for example impair your memory. But among most humans these abilities seem to correlate, and only at the extreme ends do you sometimes see the savant-type imbalances Hoel mentions. Even savantry, whether acquired or congenital, does not always carry notable tradeoffs, but it probably does require something developmentally or structurally surprising to happen in the brain. It looks a lot like blasting the brain with lightning, or removing biologically well-preserved and typically useful parts from it, just sometimes shoves it onto a higher hilltop further away that evolution in its search for local optima would probably never have found – but overwhelmingly often, such interventions cause severe impairments in many other areas, because there are always more ways in which things can go wrong than there are crude tricks for improvement. If the imbalances resulted from algorithmic tradeoff necessities, as opposed to evolved implementational limitations, it would be more difficult to explain why generally very functional savants exist at all.

In the cases where our cognitive algorithms do clash, though, we use metacognitive skills to adapt to the task at hand. Many researchers liken our cognitive abilities to a toolbox from which we strategically choose the right algorithms; but these metareasoning skills are very limited and inflexible in humans, and can’t very well be applied to involuntary processes. For example, if better memory interferes with creativity, humans who want to strategically increase their divergent thinking are pretty much out of luck. An artificial system – whose metareasoning skills could also be designed or trained to get better results than ours do – can be more flexible in turning its various modules or styles on and off, or fine-tune their interactions more imaginatively to match different situations. Such metacognitive skills are complex and definitely not easy to implement, but there is no reason to think they are implausible, and they could make many of the potential tradeoffs temporary in a way our cognitive tradeoffs could never be – and thus allow many of the relevant thinking styles and their interactions to be dynamically optimized, very effectively increasing the system’s adaptability to changing situations.
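To make the difference concrete, here is a deliberately toy sketch of the kind of per-task module selection meant above. Everything in it – the module names, the cost field, the scoring rule – is invented for illustration; it is not a proposal for how an actual system would be built, only a picture of tradeoffs that hold per task rather than permanently.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Module:
    """A hypothetical cognitive subsystem with some overhead for using it."""
    name: str
    solve: Callable[[str], str]
    cost: float  # stand-in for time/energy spent engaging this module

def meta_controller(task: str,
                    modules: Dict[str, Module],
                    predicted_gain: Callable[[str, Module], float]) -> str:
    """Engage whichever module is predicted to pay off most for this task.

    Unlike human metacognition, nothing here is involuntary: a module that
    hurts performance on this task is simply left switched off, so any
    'memory vs. creativity'-style tradeoff only lasts as long as it helps.
    """
    best = max(modules.values(),
               key=lambda m: predicted_gain(task, m) - m.cost)
    return best.solve(task)

# Minimal usage example with made-up modules and a made-up gain estimate.
modules = {
    "rote_memory": Module("rote_memory", lambda t: f"recalled answer to {t!r}", cost=0.1),
    "divergent_search": Module("divergent_search", lambda t: f"novel idea for {t!r}", cost=0.5),
}
gain = lambda task, m: 1.0 if ("novel" in task) == (m.name == "divergent_search") else 0.2
print(meta_controller("produce a novel design", modules, gain))
```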

Anyway, we don’t currently know a whole lot about human cognition on the level of specific algorithms, but the general positive correlation between different cognitive capabilities, as well as the rough ideas we have about how they work, seems to contradict Hoel’s concept of balanced, mutually opposed forms of intelligence. There is nothing conceptually contradictory between most areas of cognition, and functionally it looks like they in fact often lean on and facilitate each other. Also, awkwardly, the strong suits of human intelligence, such as pattern-recognition and abstraction, rely on heuristics many of which we have grown out of well enough to call biases by now. Our quick and effective judgments rely on algorithms we know are coarse-grained and frankly kind of weird in a lot of ways, and which we can only surpass in accuracy by expending a lot of energy on formalizing our approach and augmenting our reasoning with artificial computers and large bodies of prepackaged information. There are immensely more accurate algorithms that we sometimes see, understand, and can even laboriously adopt and combine to grasp large bodies of knowledge, but they are not part of our intuitive toolbox, which instead is filled with bizarre distractions and crude approximations. Could they be part of the immediate toolbox of an artificial intelligence? Seeing as our most accurate reasoning about large, complex wholes requires us to emulate increasingly formal approaches, it seems likely that a system whose computation adheres to formal principles from a lower level upwards could execute these better strategies faster and more efficiently. But this is pretty abstract, and it’s not clear how rigid an optimal world-manipulator would be in this sense.

***

Higher levels of analysis get increasingly damning, though. What purpose does our cognition serve? Which tasks is it optimized for? Have human smarts primarily been selected for features that aid in the relevant types of intelligence?

Well, it’s complicated, but no. The skillsets associated with reproductive fitness during human evolution are… not exactly identical to the skillsets you need for large-scale technological world manipulation. The prime directive of all organically evolved species is replication: this statement sounds uninteresting, but its corollaries are massive. Humans are intensely social animals whose survival and reproduction opportunities are primarily determined by group dynamics. This is not to say that the abilities that help you get by in social situations aren’t useful for other dimensions of problem-solving as well – general intelligence correlates with social skills, and many theories about the primary drivers of the evolution of our intelligence place a lot of emphasis on the social games we play in order to prove to others that we are also good at solving many correlating problem types. But the social environment humans evolved in also means that there are things we can or need to optimize at the cost of general reasoning – as evidenced by the richness of our social cognitive biases – and that we may sometimes be better off freeloading off the intelligence of others (e.g. by being likeable) than doing the work ourselves. In a community, there may be smarter ways to be smart than actually being smart, and sometimes these ways are directly antithetical to the skills you need to predict and influence the world on a large scale.

In a sense, the useful unit of survival and thriving for humans is the group (whereas the unit of selection for intelligence is the individual). This means that human intelligence is very fundamentally a collaborative effort, in that none of our actually impressive cognitive feats could have been accomplished by an individual starting from scratch. According to both Kelly and Hoel, integrating different subsystems of cognition into a general, actionable whole is the most expensive part of intelligence, which is the primary reason intelligence incurs greater and greater costs as it generalizes. But interacting with other minds the way humans do – trying to coordinate what you know and plan to do using a deeply vague symbolic language and other external, super expensive cues – is about the least efficient form of this integration, and yet exactly what we have to do all the time in order to reach any of our goals. (See e.g. the distributed cognition model (Johnson 2001) for an interesting description of communicative interactions as cognitive events, and cognition as a co-created process.)

Unfortunately, human cognitive communities are also immensely redundant. The same processes manifest in individual human minds again and again with only comparatively small modifications, facilitated by resource-intensive learning within narrower domains – even though we still pay the hefty price of inefficiently integrating these processes. An artificial structure could integrate its modules or subroutines through routes and representations vastly more effective than those of a human community relying on shoddy human communication, and the processes it combines would also add substantially more to the system because there is less redundancy between them. Generalization being so costly doesn’t mean that there can’t be better generalists than we are; it means that there is some immensely effective low-hanging fruit for an agent with actually good integration skills to pick.

Hoel also compares general intelligence to a superorganism optimized to thrive in any environment: just like no such ultimate organism exists, no agent could be universally intelligent in all the domains it encounters. I could well be missing something here, but it seems to me that considering this idea actually strengthens the concept of sufficiently powerful general intelligence. Humans, while not literally superorganisms and again individually pretty useless, are a reasonable approximation of such an organism when considered as a civilization. The collaboration of humans has so far enabled us to conquer almost any interesting location on Earth, extract resources from sources no other animal finds use for, and severely punch most other organisms in their literal or figurative noses whenever we feel like it. Tardigrades may survive extinction events we never would due to their also rather universal hardiness, but if we want a square kilometer without tardigrades or incidentally unsuitable for tardigrades, we get a square kilometer without tardigrades or incidentally unsuitable for tardigrades. The converse is hardly true. This is because we as a civilizational intelligence distributed across time and space in silly human-sized vessels really are sufficiently general to outsmart most competitors we currently know, if we actually want to – though, due to our many demonstrable inefficiencies, in ways that also leave plenty of room for improvement.

If we’re going to rely on competition, we’ve probably already lost

As mentioned above, another possible source of hope is that even if humans are way below the limits of a silicon-based intelligence, this agent would still be under our control because no matter what it seeks to do, we can counter and outsmart it with a narrower, hence more powerful competitor. Hoel, for example, mentions competition in passing:

“Even if there were a broad general intelligence that did okay across a very broad domain of problems, it would be outcompeted by specialists willing to sacrifice their abilities in some domains to maximize abilities in others. In fact, this is precisely what’s happening with artificial neural networks and human beings right now. It’s the generalists who are being replaced.”

But we aren’t going to remain better than a semi-general superintelligence at creating narrow intelligences either. We won’t even know what sorts of specialist AIs we might need to counter whatever an AGI is planning to do, as its cognition might be utterly alien to us even when not otherwise powerful. Who are the competitors, and when is the competition going to happen? The situation does not resemble biological evolution, where the need to replicate and pry scarce resources from an uncaring abiotic world drives the separation of populations into extremely specialized species in constant competition with each other. An AI in development is freer from material scarcity than any organic being has ever been, and the competitive landscape it faces is entirely different terrain from the one we evolved in.

During initial design and selection by humans, specialist AIs will certainly be useful, their outputs effectively comprehensible to humans and combinable by us into coherent actionable wholes. But there are large-scale problems we really really need to solve, can’t tackle with our own cognitive skills due to the massive complexity involved in deeply processing the outputs of our specialist systems, and want a more powerful agent to make sense of: so such an agent will be made by someone as soon as it is technologically feasible. Specialist AIs are not effective competitors after we’re able to build a generalist that makes better use of the specialists’ outputs than our rigid, slow brains are able to.

Concluding remarks

I hope to have given a reasonably convincing account of why I think human cognition is primarily limited by its biological origin, and probably weak enough to be dramatically surpassed by intentionally designed, less redundant, and materially abundant systems with an actual focus on effectively predicting and influencing the world. Even if there are eventual necessary tradeoffs for artificial systems as well, our knowledge about organic intelligence doesn’t tell us where they lie, and AIs could well deal with these tradeoffs more dynamically, and in possibly surprising ways, than we are able to. With all the evidence we can see on multiple levels of analysis, I think there is enough potential for improvement in intentionally designed intelligences to build a mind to whom humans really do look a lot like mice or ants. Discussion about the limits of cognition and the potentially necessary tradeoffs between its components is very valuable, though, so while I would personally be surprised to discover that humans are anywhere close to maximally competent at manipulating the world, this point of view is likely a relevant addition to the AI discussion.

Anyway, another thing to keep in mind when comparing human and artificial cognition is that humans, well, don’t really super have terminal goals. We have the capacity to think somewhat strategically and often figure out the optimal course for whatever we claim to work towards, but frequently just… don’t, because strong and stable terminal goals aren’t how human motivation works. We neglect by default even the basic goals we unanimously deem instrumental for any agent with actually important values, and instead spend a lot of time just going with the flow, trying not to let all our incompatible goals clash with each other badly enough for us to notice. Due to our own constraints, it is difficult for us to understand how an agent that actually has invariant and consistent terminal goals is going to behave, so we intuitively assume that similar inefficiencies will arise even in AIs that supposedly have values. This is probably not going to be the case, which again adds to the costs we pay compared to intentionally designed systems.

Whether or not optimal reasoning in itself will be enough to threaten our existence is a good question, but beyond the mostly evolutionary scope of this post. Kelly deems this assumption fallacious: he says that an AI will not be able to beat us or even indefinitely improve itself just by thinking about it really hard. This is true to a certain extent of course, and it would be interesting to get to see what the limits are. But again, what we want is not merely a solipsistic thinker: we want a useful agent to help us with the complex problems we ourselves battle with, and will equip our creations with interfaces through which they can influence the actual world. The inevitability of a superintelligence, if such an agent is possible, lies in the fact that we desperately need this type of competence, and will gladly build it up as long as it looks like its values are also identical to or compatible with ours. So, if thinking and communicating just lets it convince us of that, we are likely happy to solve the rest of the initial problems, feed it all the data it needs, and probably essentially give up control soon enough whether or not we realize that’s what we’re doing.

Maybe it is implausible that by observing a single pebble, a realistic optimal thinker could infer the entire universe and quickly have all it needs to fully control its future light cone. But with an amount of agency and base knowledge that lets an AGI be useful to us, it can certainly get a lot further than we can predict or necessarily control – that’s how good inference ultimately works. While it’s absolutely true that the risk is currently hypothetical and there are plenty of potential pitfalls that could lock down a realistic recursively improving AGI, we don’t have a strong idea about where or what they are. Real thinking, by agents with real terminal goals, has never been tried.

Do social animals suffer more?

[Epistemic status: Very speculative!! Not science!! Armchair evo-psych is bad for you etc., but there are some important questions we currently don’t have a better way to try to answer, so.]

[TL;DR: The intensity of suffering, an evolved motivational state, is likely to vary even between species with generally similar levels of sentience. I describe four principles which could suggest that as a result of their evolutionary history, social animals typically suffer more than asocial ones.]

Introduction

There’s a lot of research going on about sentience and moral patienthood: which creatures are phenomenally conscious, and to what extent, is one of the first things to consider when figuring out what exactly deserves our moral consideration. However, consciousness itself is arguably a neutral property, much like existing as a material object is a neutral property. Even if a creature has subjective experiences, if these experiences solely consist of being aware of stuff – with no desires, aversions, or other subjectively felt motivations towards anything – it’s not really good or bad that such creatures exist or that things happen to them. Only the capacity to experience states with emotional valence makes something a moral patient (unless you insist on consciousness itself as a terminal value, which some people do of course – I think it’s aesthetically interesting and okay I guess, but distinct from morally important properties, which need to be tied to hedonic tone or motivations or preferences to make sense).

If our aim is to minimize suffering in some conventionally defined sense, it is obviously not enough to know whether an animal is conscious. If we accept that neither life nor even consciousness necessarily implies a capacity to suffer, we need to estimate the extent to which the animal reacts to stress with subjective distress. Most mobile creatures produced by the brutal evolutionary processes we’re familiar with show clear behavioural signs of nociception when physically hurt, such as avoidance and attempts to disrupt the sensory pain signal if possible; and the closer an animal is to our own physiological, behavioural, and taxonomical type, the greater the probability that these signs really do imply subjective suffering as well, instead of just reflexive or mechanical reactions (this blatantly anthropocentric line of evidence is far from conclusive, of course – it’s just that it’s almost all the evidence we currently have).

However, assuming that suffering is a product of evolutionary processes, there are good reasons to believe that the intensity of subjective suffering varies between species just like other evolved properties do: according to their historical usefulness during the unique evolution of a given population. Even if every tetrapod has four limbs, different environments and niches have formed different uses and adaptations for these limbs. The capacity to suffer is more fundamental than that and its uses are probably more unified, but slightly different adaptations are to be expected, depending on what sort of things an animal is motivated to do and what kind of an environment it has been shaped by.

This seems likely because, contrary to the standard biology textbook view, suffering is more than just a signal of a harmful situation. Intense suffering especially is primarily a motivational state that facilitates not only direct avoidance of harmful acts and environments but also complex decisions under threat or risk, long-term learning, social investment and bonding, competition and communication – all depending on the other aspects of an animal’s evolutionary history, cognition, and lifestyle.

Behaviourally and, uh, anecdotally, it seems that humans have the capacity to suffer a lot. A defining feature of our species is the immensely complicated social behaviour we develop when surrounded by other people, and it has probably shaped our subjective experience more than any other aspect of our cognition has. So, in this post, I try to pin down some principles and hunches that suggest that a social evolutionary history in particular could produce species that suffer intensely – though significant suffering is still probably present in all conscious animals – and then take a brief look at the implications of this possibility.

The extended homeostasis of social animals

Suffering as a motivational state is typically the mental component of an animal’s homeostatic regulation, i.e. the processes that keep all the relevant physiological variables within healthy parameters. Most things that threaten your homeostasis in a way that humans have historically been able to survive when motivated to do so will cause some kind of suffering: thirst when your blood volume starts to drop, pain when a wound opens and leaves you vulnerable to pathogens and blood loss, sickness when you have ingested toxins and need to expel them. When the threat isn’t currently actual but can pretty reliably be predicted to come true unless you take physiological or behavioural precautions, your species will evolve predictive homeostatic processes. Many of these predictive processes are cognitive or emotional in nature: e.g. people often feel distress in darkness and high places – things that cause absolutely no damage in themselves, but correlate with future homeostatic disturbances.

Among social animals that habitually rely on others to survive and thrive, predictive homeostasis is extended to social relations as well, so that an individual without sufficient relationships suffers from loneliness and other emotional disturbances. Not all social relationships are homeostatically maintained: the drive to acquire social status probably doesn’t really settle around a set point or anything, as it has more to do with mating opportunities than with survival. Social belonging, on the other hand, can somewhat accurately be defined as the part of social relationships that is indeed homeostatic – maintained by feedback loops within a certain dynamic range, where a lack of it leads to negative emotions, and an excess is quite naturally dropped due to time constraints and/or social stress.

As the number of things you need to consciously attend to when maintaining your homeostasis increases, so does the probability that something is missing, which plausibly leads to more suffering. In a community, your wellbeing becomes directly tied to the wellbeing of others, which again increases the number of things that can go wrong: not only do you care about how others treat you to ensure your direct wellbeing, their interests are now inherently important to you too, so that you feel some of their pain even when it’s directly irrelevant to you. Empathy, especially its affective aspects, is a major mechanism by which this extension of homeostatic suffering becomes possible, since it motivates you to make sure your companions survive and thrive as well, and in a silly metaphorical sense moves you closer to becoming a single organism (only with multiple simultaneous consciousnesses, and so increased maximum suffering levels).

Suffering and contingent social commitment

A lot of human suffering comes in the form of worry or grief over lost social bonds. Evo-psych hypotheses about the origins of social grief are based on the utility found in maintaining close relationships and seeking reunion on pain of distress (e.g. Archer 1998). When an important bond is permanently broken and little to no chance of reunion remains, the normally useful reaction becomes temporarily maladaptive. Prolonged, intense, and public displays of grief probably serve a signalling purpose as well, providing evidence that you’ll emotionally commit to maintaining a social bond: this can only apply to animals whose social attachments are contingent and based on reciprocity, individual recognition, and familiarity, whereas eusocial animals (primarily social insects) may not need to experience such loyalty towards specific individuals. The exact evolutionary processes at play are poorly understood, but it remains likely that most other cognitively advanced, conditionally social animals also experience emotional separation distress, and that the accompanying behaviour aids an individual’s commitment to maintaining social bonds.

All of this should work in synchrony with the social homeostasis model sketched above. Indeed, Hofer (1984) found two distinct behavioural patterns in nonhuman animals separated from their companions. An immediate, acute reaction to a specific loss appears as distress, searching, preoccupation, and even aggression. This reaction quite naturally helps an animal to reunite with its lost companion should it still be possible. Another reaction develops afterwards or simultaneously but over a longer time period, and involves passivity, inactivity, and disturbances in biological rhythms, presumably in the absence of familiar sensory regulators provided by the lost companion or group. This is probably not directly adaptive in itself, but a byproduct of the otherwise useful state of being able to consistently rely on cues from others (possibly persisting again as an exaptation due to signalling or other indirectly adaptive reasons).

Some of human grief can also be modelled as a combination of these two processes, but might there be a difference between the typical separation distress that many social animals feel, and the cognitively heavy, temporally complex pain that human social suffering involves? Some intuitions suggest that animal suffering, even when subjectively experienced, is qualitatively different from human suffering, since most animals lack the psychological layers of future-directed worry, advanced processing and rumination, and the resulting elements of subtle despair and hopelessness that intense human suffering typically involves. I’m not sure how likely this is regarding suffering in general, but I do think long-term social suffering is at least greater in humans, who rely on personal social commitments more than most other animals do. There are tons of unexplored nuances both in human grief and in animal separation distress, but the strongest function may simply be that by making social relations part of the necessary conditions we feel miserable without, we successfully blackmail ourselves into seeking company, and also prove ourselves loyal to others in the same predicament.

Violence: probably literally the worst thing

An obvious source of distress to social animals is intraspecific violence, which to the victim is likely to differ dramatically from other kinds of tissue damage. For an asocial animal, violence is not really an applicable concept: literal violence requires social intentionality of some sort. Much like a shark attacking a human isn’t really violent (just uh, various other kinds of suboptimal), for asocial animals conspecifics and other animals alike are basically forces of nature that may or may not harm you according to their non-negotiable whims. It’s typically useful to fear these things and of course suffer when damaged by them – but embedded in a social lifestyle where the risk of game-theoretically regulated intentional harm from others is possible but not inevitable, and often dependent on your communication and the community around you, suffering serves more functions than that. So new layers of intense suffering have developed to organize and guide individuals in violent populations – in addition to tissue damage, violence causes purely psychological harms like terror, long-term anxiety, disgust and distrust, hate, extreme despair, and of course vengefulness and the perpetuation of conflicts, again depending on the species in question.

Now, we can’t directly compare suffering levels even between humans attacked by other humans and humans attacked by lions or harmed by tornadoes. We do seem to fear and avoid intentional violence significantly more than other sources of harm, though: guns, murderers, and terrorists cause widespread panic and behavioural changes, whereas similar non-intentional harms are easier to bear, more quickly forgotten, and rarely get people to instantly rally around political causes or radically change their habits or anything. A stronger argument for violence feeling worse than non-social harms is that post-traumatic stress disorder – presumably the long-term consequence of going through something maximally upsetting and horrifying while equipped with a predisposing genetic makeup – is disproportionately often seen in humans after interpersonal harm, as opposed to accidents, natural disasters, and especially diseases (Kessler 1998). There are other possible explanations for this depending on the actual etiology of PTSD, but the simplest explanation seems to be that violence is indeed worse than other forms of damage, suffering-wise.

For social animals that have a grasp of humans as intentional agents, it is possible that humans hurting them is also experienced as violence of some kind. This grasp doesn’t necessarily mean that they have a solid theory of mind or anything, just that they model humans as agents a bit like their conspecifics – something you can somewhat personally trust or distrust based on external cues, and possibly communicate with. Animals that are known to exhibit PTSD-like behaviour after human mistreatment include dogs, elephants, chimps, and possibly cetaceans – all animals with complex and relatively personal, communicative social structures. Most of the research on animal PTSD is based on captive animals and human mistreatment, so we currently don’t know what conditions typically lead to similar pathologies in the wild, if any – but being social and somewhat cognitively advanced again seems like a prerequisite for this type of suffering. Even if not all of them process violence as intensely as humans do, it seems plausible that for these vulnerable animals it is also more traumatizing than other kinds of tissue damage. And since it is such a powerful way to build hierarchies and organize group behaviour, violence and its threat play a part in the lives of most other social creatures as well, probably adding a few extra layers of stress and suffering to every unstable social situation even among less cognitively advanced animals.

Also, the best way to reduce the lethality of intraspecific violence is probably having a clear signal of submission, i.e. a credible display of sufficiently intense pain; among social organisms, showing suffering is a straightforward way to signal many other things as well, such as a need for help from allies when challenged. Asocial animals receive no benefits from displaying their suffering and typically have no purposeful external signals for communicating injuries or pain – on the contrary, being able to conceal your injuries as best you can is crucial when calling for help is not even a comprehensible option and showing weakness typically leaves you vulnerable to predators. Social animals, on the other hand, usually do have signals for suffering – and since suffering more intensely makes your signals stronger and more credible, suffering more in these situations has also been adaptive to an extent.

Having friends: an exciting opportunity to suffer more than you otherwise could have afforded to

What else does suffering give you in a social environment? If your species is mostly prosocial, potentially a lot. When ill or injured, an animal feels long-term pain and distress, which discourages it from using and stressing damaged body parts and makes it keep still and use the available energy to recover – all of which also effectively prevents it from seeking food, shelter, or other necessities. Therefore, a member of an asocial species faces a straightforward survival tradeoff: prolonged and intense suffering, while protective, is also severely limited by the animal’s need to actively gather resources and defend itself. When you’re a social animal surrounded by basically sympathetic and reciprocal companions and relatives, however, this tradeoff could become slanted towards a greater intensity of suffering. If others can temporarily take care of your resource needs and protect you from threats, it suddenly becomes possible to spend a lot more time resting and recovering – as long as you’re in the right motivational state to do so, e.g. preoccupied with how unbearable your current existence is.

This is a more fundamental mechanism than any of the others above. Just having more potential homeostatic disturbances doesn’t necessarily mean they are experienced as more frequent or more intense suffering: maybe internal motivation levels are roughly calibrated between species so that where a social animal feels extreme agony over separation from its companions, an asocial animal can afford to be more sensitive to hunger or thirst since it naturally lacks the things social animals are debilitated without, and so feels similarly intense suffering under even a milder starvation threat. Maybe violence is the worst thing that can happen to a social animal, but asocial animals again experience similar fear and pain from natural causes, which social animals just have to rank as lower pains on a basically similar gradient in order to stay functional. But having prosocial companions could shift the absolute cap of your species’ suffering just by allowing individuals to wallow in all-consuming pain and misery without simply dying of hunger in a couple of days. (Friendship is magic.)

The usefulness of this hypothesized system varies a lot between different species. Clever and reasonably adaptive animals, such as humans, have some ways to protect an individual from harm and many to bring them suitable food and water when necessary. Elephants – while smart, prosocial, and exceptionally good at weighing several tons and so protecting weak herd members from predators – are grazers and browsers with a nutritionally unimpressive diet. This makes it immensely difficult for others to bring an injured individual all the food it needs, so at least some level of activity needs to be maintained even when ill (one should hope this means that an elephant’s maximum amount of physical suffering can’t be as intense and devastating as it sounds like to us). A good but heartbreaking rule of thumb might be that whenever we hear an uplifting story about an animal taking care of its weak or injured companion, we’re also looking at a species capable of experiencing the worst feelings of suffering in the biosphere. Maybe.

Implications and conclusions

Should we conclusively find that an animal’s natural degree of social behaviour is a good predictor of how much it suffers in various situations – both social and nonsocial – we would obviously have better tools for building policies and other solutions to effectively reduce suffering. Future research confirming similar conclusions could direct our attempts to improve animal welfare: for example, seafood is currently estimated to be one of the most suffering-dense protein sources to consume due to the small size of fish (which leads to a low meat/consciousness ratio) compared to cattle or pigs – but since the large herbivorous mammals typically grown as livestock are very social, their capacity to suffer may be greater quite independently of their other cognitive capabilities, which might eventually turn out to outweigh their large size. Chickens, unlike fish, have a very social lifestyle, which combined with their small size would make them one of the absolute worst animal-based foods to eat. Still, I’m wary of this approach to animal welfare now that veganism is heavily trending anyway (I hope? At least in Finland?) and our knowledge base is so severely lacking. It’s probably best to just ride the wave and focus on advocating better plant-based protein sources as well as in vitro meat as soon as it becomes a real option.
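As a purely illustrative sketch of the arithmetic being gestured at here – every number below is an invented placeholder, not a welfare estimate – the idea is that suffering per kilogram of food scales with how many individuals a kilogram represents, multiplied by some species-specific weight for their capacity to suffer, which a sociality-based account would push upwards for social species:

```python
# Toy illustration only: all figures are made-up placeholders, not estimates.
# suffering_density ~ (individuals consumed per kg) * (capacity-to-suffer weight)

foods = {
    # name: (individuals per kg of meat, hypothetical sociality-adjusted weight)
    "fish":    (2.0,   1.0),
    "chicken": (0.7,   3.0),
    "beef":    (0.004, 3.0),
}

def suffering_density(individuals_per_kg: float, weight: float) -> float:
    return individuals_per_kg * weight

for name, (per_kg, weight) in sorted(
        foods.items(), key=lambda kv: -suffering_density(*kv[1])):
    print(f"{name}: {suffering_density(per_kg, weight):.2f}")
```

With these placeholder weights, the sociality multiplier is already enough to push chicken past fish despite the difference in body size, which is the kind of reordering the paragraph above worries about.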

What about wild-animal suffering? Ecosystems whose fauna primarily consist of solitary herbivores may be more desirable than systems with lots of social animals even in the absence of predators, as social animals may react to other inevitable disturbances with greater suffering. When designing interventions to aid animals in the wild (emotionally compelling small-scale example here), social animals should possibly be prioritized, and long-term ecoengineering solutions developed for these species in particular. Other people have written at length about possible utopian interventions to manage suffering in wild ecosystems, and while it is currently unknown how feasible these goals are and what the relevant timescales could realistically look like, more research on the nature of suffering and the differences between species is probably useful before choosing any interventions becomes relevant, to make sure we actually prioritize reducing suffering.

A practical limitation to making use of these principles, should they turn out to be true, is that even minimally social animals must usually have ways to communicate and get along with conspecifics in order to mate, and many otherwise asocial animals still care for and invest in their offspring for a while. So, while some animals are clearly exceptional in their social bonding and commitment, purely asocial animals can’t really be found to use as points of comparison, and some of the principles above may apply to a varying extent to most sexually reproducing animals. Another complicating factor is that in the case of many animals, sexes are dimorphic so that females are typically more social than males, who may even live entirely alone. Is a significant sex difference in suffering plausible? There are a lot of confounders here, but human data says yeah maybe – gender differences in sociability are comparatively small in humans, though, and so is the difference between experienced intensity of pain in women and men, so the signal isn’t exactly clear.

Anyway, to reiterate, there are four main mechanisms that could cause a social evolutionary history to produce species that suffer more than otherwise similar asocial ones:

1) The extended homeostasis principle: more things can go wrong (hence feel bad) for a naturally social animal, simply due to the increased number of things to keep tabs on.

2) Social commitment, which is purposefully fueled by psychological pain such as grief, worry, and empathetic pain.

3) Purposeful violence, which only happens among social animals and plausibly feels subjectively worse than other kinds of tissue damage due to complicated signalling and group-organizing things.

4) The fact that fully utilizing the rest-and-recovery functions of suffering when physically injured or ill only becomes possible when your resource-gathering needs can temporarily be covered by friends and you can afford to stay preoccupied with the pain.

Due to the hypothetical nature of these principles, they are probably not super relevant to practical ethics or policy decisions or anything really until we know more, but maybe consider forever being extra nice to dogs, the blessed animal we purposely bred for maximum personal sociability, cooperation, dependency, and companionship. Thank you.

A brief history of humans trying to pretend that suffering is actually OK: Analogies between religious theodicy and secular justifications

[Epistemic status: I have no deep background in theology or philosophy of religion, so this isn’t meant to be a very comprehensive or detailed picture, just scratching the surface based on a few papers and lectures. Expect some major oversimplifications and a couple of misunderstandings.]

[TL;DR: Theodicy: do not do the thing.]

Introduction

Theodicy was originally the religious project to justify, explain, or at least find ways to accept the intuitively unacceptable suffering we paradoxically see in a world supposedly ruled by a benevolent, omnipotent deity. Recently the concept has metaphorically been expanded to also encompass a more general, secular version of itself: the age-old human tradition of seeking meaning in or justifications for suffering in general, not just because these explanations are required by some theistic ontology. There are a lot of similarities in how people try to justify suffering within these two frameworks (though the projects seem to fail for different reasons) and the religious search for a viable theodicy has certainly influenced the justifications we now see even in reasonably secular cultures, but I suppose it’s fair to assume that most of the motivation is rooted in a deeper, more universal need for a coping mechanism, not so much in some lingering influence of specific religious memes.

Theodicy is distinct from defending theism against a fundamental logical incompatibility between God and evil, and much more interesting, especially from a secular point of view. We, too, are beings who to a great extent seem to tolerate evils we could at least potentially eradicate, so I guess in a sense we have almost as much to explain as a hypothetical benevolent, omnipotent deity has. The purpose of this post is to examine typical secular theodicies by comparing them to existing theophilosophical attempts and their critiques (obviously in the light of a secular ontology), because the large body of work surrounding religious theodicy could shed some light on the secular approaches as well.

Importantly, the consensus currently seems to be that no satisfying religious theodicy has actually been found, and that anti-theodicies – various explicit flat-out refusals to explain, justify, or even forgive God, especially prevalent among Jewish theophilosophers post-WWII – are the closest a theist can get to a solution. The project of theodicy itself is often seen as rotten and immoral; many go as far as to assert there can be no morally sufficient reasons for God to permit a world as evil as ours. The Finnish philosopher Sami Pihlström, for instance, argues that morality is more fundamental than metaphysics – no matter how mysterious the ways in which deity so-and-so works, or how feeble our rational capacities, we should have enough confidence in our moral sense to abandon a project this bizarre and instead take suffering and its victims seriously even if we subscribe to theism. And if anti-theodicy is the primary way theists have to deal with suffering, if even a fundamentally incomprehensible, all-powerful entity can’t really save the idea that suffering in itself is ultimately meaningful somehow, what hope can a secular morality have for preserving it?

Secular theodicies: some requirements

Anyone who has ever earnestly advocated the abolition or dramatic reduction of global suffering in almost any social setting has probably met some major resistance and a colourful bunch of common-or-garden theodicies. Some of them are rooted in low-level misunderstandings, such as the notion that pain as a physiological process is a necessary warning signal (so our current levels of overall suffering are somehow optimal), or that abolishing suffering is necessarily basically equivalent to wireheading, or that prolonged boredom or existential dread isn't really suffering, or will for some other reason be preserved and intensified when the robotic abolitionists get their inhuman project off the ground and nothing will feel meaningful to anyone ever again. But even when people are roughly on the same page regarding these issues, the idea of reducing the biosphere's overall suffering sounds extremely alarming to many people – probably due to its unintuitiveness and the immensely important role that suffering has historically played in our emotional meaning-making machinery. Dissecting this discomfort is useful both instrumentally and theoretically: in order to effectively advocate reducing suffering we obviously need to understand the counterpoints, and even more importantly, these counterpoints could eventually indicate something we're currently missing about the functions of suffering.

All in all, though, it seems that comparing all the apparently futile religious theodicies with secular justifications for suffering mostly just reveals how weak the enterprise in general is. If a natural framework could reasonably justify the suffering we see in the world, centuries upon centuries of theodical philosophy would not have been needed in the first place – or that effort would at least have resulted in stronger conclusions than the ones we're currently stuck with, basically some acceptable general justifications disguised as religious ones. Even more damning than the lack of viable options is the conclusion accepted by many modern theophilosophers that it is immoral and possibly downright bizarre to even try, because the evil in our world is so evidently so bad that no benevolent God could ever be able to justify its existence.

So what would a viable secular theodicy need to explain? Among other criteria, religious theodicies can be classified according to the range of evils they tackle (Trakakis 2008). Why must there be any suffering at all? Why must there be purposeful evil, or naturally occurring accidental suffering? Is the current amount of suffering also necessary or justified? Is there a justification for every single instance of harm? All of these questions can be applied when searching for a secular theodicy as well: any sufficient justification for not reducing suffering will need to respond to these points (except perhaps to the last one, since micromanaging individual instances of suffering isn’t currently feasible for humans, so some collateral damage may be necessary).

Another perspective that usually has to be addressed (again according to Trakakis) concerns the nature of the benefits suffering is supposed to result in. In a theistic ontology, the potential benefits are different than in a secular one, of course, but some relevant principles remain. Suffering should at least be causally or logically connected to the resulting goods: if we want to argue that horrifying pain builds character, we should be fairly confident that it really does so, that similar character-building properties can’t easily be found elsewhere (with less of the, you know, horrifying pain), or better yet, that the suffering is absolutely necessary as a foundation for an ideal character. If this condition is satisfied, we now need to assess whether the benefits gained are somehow greater than the suffering endured: this is a tall order, for imagine the greatness of character that is needed to compensate even for the fairly typical everyday atrocities in history or in the present. Even if you could make this case for some humans, which I don’t think you could tbh, consider the pain felt by animals with no capacity for anything like character-building (if I find fifty righteous fruit flies tho).

The greater good approach

This brings us to the most common approach to theodicy, which probably covers the vast majority of both religious and secular justifications for suffering. The main point is simple: something really is worth all the suffering we endure, and suffering is likely to be the only way for us to achieve it. Candidates for this good include virtue or character, personal growth, close social relations, artistic inspiration, a sense of meaning, and even positive emotions in general – in a secular ontology, people will probably glare at you unless you can explain what exactly this benefit is, how it's supposed to be related to suffering, and why it's worth it; in a theistic one, you have the bonus option of just trying to convince us that there surely is such a benefit, it's just mysterious like that, and also adding something about how souls need to be forged in the crucible of magic suffering in order to become worthy of the heavenly afterlife or something. On the scale that runs from stubbed toes to genocide, neither of these has so far been a satisfactory response to anything but the fairly mundane or trivial pains at the stubbed-toe end. Pihlström protests against any attempt, religious or otherwise, to justify intense suffering from the outside in this manner: if suffering does indeed result in something sufficiently valuable to make it worthwhile, it should only be up to the victim to decide whether or not it really does – other approaches trivialize the evil and the victim. This makes epistemic sense, as we don't really have the subjective knowledge to assess the intensity of anyone else's suffering. If we did, though, and if the benefits were gained by someone else, a utilitarian case could be made for justified suffering even when the victim doesn't super agree.

Some suffering obviously does lead to good things, even to stuff that's quite clearly worth it all. Maybe some kind of a contrast between, say, sadness and happiness really does enhance the overall experience. And maybe a genuine chance of failure and disappointment really makes it feel more meaningful to strive for nice things in life. And close and committed social relationships probably do require that you feel some distress when you lose a loved one. However, this is entirely consistent with accepting that too much of a bad thing is in fact a very bad thing, and that there are forms of suffering that are entirely unacceptable in relation to the benefits they result in. Many kinds of distress actually make you a worse person: being in pain and stressed out makes it harder to focus on anything except your own personal survival and well-being, often even after the situation improves; surviving a hardship can make you less empathetic toward other people going through it later on, and so on.

There are many ways to assess this approach empirically, which is what any secular morality needs to do, of course. Whatever the benefits are, they probably don't scale ad infinitum with the suffering we experience; otherwise we would find people just advocating MAXIMUM SUFFERING, which maybe we do, I don't know. This and common decency suggest that the current, horrifying amount of global suffering has not satisfactorily been proven optimal and hence justified, and that even if there are some hardships we need to go through in order to grow as human beings or something, people being brutally murdered or billions of sentient animals dying of thirst and infections everywhere all the time are not necessary properties of a world even if we also want it to have grown human beings. Also, any benefit brought by distress can only be determined afterwards: people avoid intense pain, and wholeheartedly approve of others avoiding intense pain, even though the post hoc narrative sometimes turns out to be that it was all worth it in the end. This makes it look like the benefits of non-trivial suffering are mostly accidental and a net negative overall, and that suffering isn't a reliable way to gain anything valuable at all (with some specific, typically low-intensity exceptions – in which the suffering usually is more of a byproduct than the actual cause of the benefit). The dramatic ways in which the victims differ throughout the biosphere further reduce the odds that current suffering levels are fine: it seems extremely implausible that there is some suffering-benefit tradeoff that applies to every animal taxon, or otherwise renders all of the suffering we know about somehow acceptable.
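To make the "doesn't scale ad infinitum" point concrete, here is a purely illustrative toy model (the functional forms and numbers are my own assumptions, not anything from the literature) in which the benefits of suffering keep growing but with diminishing returns, while the disvalue of the suffering itself grows linearly. Even under these charitable assumptions, only small hardships can come out net positive, and anything atrocity-sized is deep in the red:

```python
import math

def benefit(s):
    # Hypothetical goods gained from suffering s (character, meaning, ...),
    # generously assumed to keep growing, but with diminishing returns.
    return math.sqrt(s)

def harm(s):
    # The disvalue of the suffering itself, assumed to grow (at least) linearly.
    return s

for s in [0.25, 1.0, 4.0, 100.0]:
    print(f"suffering={s:6.2f}  net value={benefit(s) - harm(s):8.2f}")

# suffering=  0.25  net value=    0.25   <- small hardships can come out net positive
# suffering=  1.00  net value=    0.00
# suffering=  4.00  net value=   -2.00
# suffering=100.00  net value=  -90.00   <- atrocity-scale suffering never does
```

The specific numbers obviously don't matter; the point is just that "some suffering produces some goods" is entirely compatible with "the current amount is wildly past any plausible optimum".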

The agency/free will approach

Another common approach is based on agency or free will: religious theodicies of this type either tend to claim that it’s logically impossible to be sentient or good without the possibility of evil (i.e. wanting to harm others), which doesn’t fly for multiple reasons this margin is too narrow to contain, or that good can only be meaningful if it’s a genuine choice, or that free will is otherwise more important than other beings not suffering (again for soul-forging purposes probably or because people need to make an active choice to remain close to God or something). Typically this also vaguely implies that God isn’t the direct source of the evil we see, and hence not really responsible for it: suffering only exists because humans misuse their agency.

From a secular point of view, I’m not sure what to make of it – I don’t think people place a lot of value on folks in general being able to kill each other and just not choosing to do so. I guess people do see value in freely choosing to be good when it’s just about them, but as evidenced by the self-centered nature of this judgment, this has more to do with virtue signalling and moral competition than with freedom-to-cause-or-not-cause-harm as a value. I also don’t think this applies to many major forms of harm; I, for one, have never congratulated myself for not severely beating people up in the subway, or for not having any desire to do so.

The secular version of this theodicy is sort of a subtype of the greater good approach above. So what goods would we lose if, starting tomorrow morning, people were unable to significantly harm each other for no good reason? I'm not even sure this would reduce our overall autonomy. In a sense, a great deal of violence is already rooted in impaired agency – people rarely choose to lead a life of, say, gang violence or war as long as there are reasonable and realistic alternatives (building a life of order out of such chaos is extremely difficult, but people still tend to prefer to attempt this when given a chance, whereas choosing a life of absolute chaos when living comfortably is extremely easy, yet few people choose to do so). Of course, there are disagreements about what kinds of suffering you are justified in causing as a necessity to preserve e.g. your social autonomy, but again, the evil or suffering itself isn't needed for you to be autonomous. (The concept of autonomy and genuine agency in a social environment running on human brains is, in any case, probably too muddled to provide anything useful here.)

Another shortcoming of this approach is that a lot of suffering is still caused by diseases and natural disasters; so maybe you inexplicably want people to be able to maim each other at will (though they should still be stopped, and also they belong in prison afterwards, let’s not be unreasonable here), but there’s tons of suffering besides human evil. This is also a counterargument to Alvin Plantinga’s free will defense.

But autonomy is often invoked as a justification for suffering in the other direction as well: since people tend to place some value on their past suffering, and a lot of it has very genuinely been valuable to them, someone wanting to reduce or abolish suffering threatens many of the things they currently find meaningful, or the struggles and choices they made to be able to get through it. I don't see why this isn't a reasonable justification for some hardships and pains: again, if there are painful things people generally are glad to go through, or if there is an apparent relationship between these things and positive outcomes later on, maybe these forms of suffering shouldn't be eradicated; but maybe an alternative should still be offered for people who would rather choose not to go through them, you know, because of the autonomy stuff and all. Also, this is again not a plausible argument for intense suffering, or credible in the presence of burning children, as Rabbi Greenberg more eloquently put it. Also also, animal suffering is not properly justified by this theodicy any better than by the more general greater good approach above: even Darwin lamented the suffering of wild animals and found it irreconcilable with the concept of a benevolent God, and didn't seem to glorify the freedom of wild creatures in the midst of it all.

The “Best of all possible worlds” approach

This theodicy is also pretty well-known, presented by Leibniz in the early 1700s, and it's pretty much exactly what it says on the tin – out of all possible universes, God chose the one with the best conditions and actualized it, and since he is obviously good and reasonable, everything's basically fine by definition. Moving on without comment, a common secular analogy is rooted in the powerlessness of mankind: if there is no God, there's also no way for anyone to directly make things better without the possibility of everything backfiring horribly. There may be terrible things going on in the world, but there's no way we can help it – this is the best we can do.

The solution here, it seems to me, is to tirelessly gather more information and power, not to shrug and turn your back on a world full of unimaginable distress you could at least help alleviate. I know, I know, there are massive coordination problems we haven't really solved, and fixing even most of the ways in which the world is bad currently looks like an intractable project – but at the same time everything is making some sort of progress, there are people doing a lot of good with whatever they've got, and though the change is slow, we're making the intractable very tractable in surprising ways all the time. This entire theodicy is a lazy excuse mostly and y'all know it.

Minor theodicies, other directions, and conclusions

There are a lot of approaches in religious theodicy that aren't really transferable to a secular framework, such as all the Original Sin stuff and the related karmic explanations, all of which mainly try to shift the responsibility onto us mortals – uninteresting now that we already accept it. There are also some justifications that are mostly just seen in secular contexts, such as wanting our experiences to be authentic or real in some usually poorly defined but intuitively natural sense, and thus wanting to retain distress almost as a terminal value because it's part of the authentic human or animal #lifestyle. This is horrible and fundamentally incoherent with everything, but I get it: there's a chance that while carelessly getting rid of some traditional human stuff you throw away something valuable as well. None of these seem to fare better than the ones described above when asked to respond to all of the reasonable requirements.

What I’m interested in right now is suffering as a social motivator, though. As mentioned above, it’s plausible that the implicit fear of intense social distress is such a major part of human social dynamics that abolishing it or allowing it to become voluntary would change the way we have to approach human relationships and require us to strengthen other sources of emotional commitment. There are close social bonds without super notable suffering even when the bond eventually breaks, but at the same time, the most distressing events of a typical first world life are social losses of different kinds, and this might be something people will generally want to retain for complicated sentimental and social reasons. Again, this is not going to lead to a satisfying theodicy even if we only wanted a narrow, anthropocentric one, but I think the relationship between suffering and social bonds is worth investigating before the hypothetical future where abolitionism or dramatic reduction of suffering becomes feasible.

Anyway, I realize that most of the rejections above are based on pretty intuitive moral judgments about what an acceptable justification should look like, and some people will obviously find them more persuasive than I do. I would kind of like to do more research on the subject and write up a more rigorous analysis of it, though probably focusing on the secular justifications to an even greater extent, since a deeper understanding of the religious approaches doesn’t seem very useful after this point. But it seems like the reasons people so strongly oppose reducing suffering aren’t very well understood right now: many of the individual arguments are trivially kind of weak, but the discomfort remains. Clarifying this issue and some related concepts could be really useful in understanding human values.

On mind-reading

I feel that explicit communication of preferences and emotions is frequently a bit overrated as an ideal habit. Obviously, clear and open communication is invaluable in most intentional social situations, but there's also a common (and less frequently addressed) failure mode of not placing enough value on situations where you need to explicate very little because you're already being understood effortlessly on an intuitive level.

The subcultures I vaguely identify with and interact with tend to be especially fond of explicit communication over mind-reading. This could be because many people roughly in this category (nerdy, analytic, thing-oriented) would seem to be somewhat below average at intuitively reading other people, which could make it more difficult to see how well mind-reading works when it works, and in some cases because empathy and related concepts are disvalued as a result (and even seen as fundamentally opposed to systemizing and rationality). Dichotomies such as the empathizing/systemizing divide in Baron-Cohen's work on autism contribute to these attitudes, and I'm guessing it's not implausible that there's something to this divide in how the human brain works, but I wouldn't bet a lot of money on these thinking styles being so inherently, neurofunctionally antithetical to each other that empathizing deserves its irrational reputation (except possibly on the level of individual situations).

However, in many social environments I hang out in both online and in person, the culture has developed a firm appreciation of explicit communication while half-ignoring that explicit communication is sometimes genuinely worse than the nonverbal, gut-level understanding it supplements or replaces, that it certainly takes more effort from one or both of the parties in many situations, and that many people would probably benefit from cultivating and trusting their skills in intuitive empathy more than from being told that communicating every preference explicitly is the only good way to build and maintain healthy relationships (and that expecting anything else is ridiculous and just causes silly problems for irrational people who expect some sort of magical mind-reading from others).

This doesn’t mean that all functional relationships require high levels of empathy, of course, and ideally the more empathetic people should of course accommodate those who require more verbal information about other people’s internal states. But in close relationships especially, you may run into a major compatibility issue where one person expects their intuitive signals to be understood because empathizing is a fundamental and important aspect of how they think, and the other person kind of scoffs at this and genuinely believes that the more empathetic party is demanding impossible, supernatural levels of mind-reading – again, because this is how their thinking generally, kind of fundamentally works. And this may not always be solved just by increasing explicit communication, because it in turn will quickly exhaust the person who possibly has spent most of their life not needing to describe their basic emotions and preferences to other people, and this is a form of labor that really really drains their energy. (I have on a few occasions been super exhausted by people who have wanted to have this great and healthy explicit communication thing with me, and I haven’t seen what the root of the problem was until years later, because of course explicit communication in every situation is the most important mark of a healthy relationship, and it would be silly to expect anyone to read my mind, right?)

In conclusion, the way discussing every issue explicitly is valued over everything else prevents many people from seeing that a close relationship they are trying to build with someone might just never work as well as it would with someone else because of this difference. Lots of explicit communication is not always a sign that your relationship is great or even functional; the communication isn't what's valuable in itself – being able and willing to respect each other's preferences is. Lacking this, looking at the relationship and going "yup, gotta increase verbal communication" is sometimes a patch for something that wouldn't have to be broken in the first place. Similarly, trying to improve your empathy levels to fix this may also not work out, depending on the extent to which empathy is part of your congenital personality (and I'm sure many (most?) subcultures also demand exhausting accommodations from the people who would prefer very explicit emotional sharing – it's just not something I run into as often as I see the anti-empathy sentiment described here). I'm not sure I have a good solution at hand, but respecting other thinking styles and even trying them out to the extent that you can will probably not hurt, as unsatisfying and insufficient as it sounds.

Re: “We just don’t know enough about ecology to reliably prevent wild animal suffering without causing more damage in the process”

Say there’s a horrible moral disaster going on at the moment that you (1) know about, (2) know something could at least plausibly be done about, (3) know we don’t currently have enough information to safely do much about, but (4) know how to collect at least some relevant and potentially useful information about. This is not a very difficult problem: if you know what kind of information could be helpful to alleviate the issue, the first step in alleviating the issue is in fact collecting more information, not just deciding whether or not you should help based only on your existing knowledge.

A simple miniature version of the same situation is knowing someone vulnerable is lost in the freezing cold forest area nearby, knowing you could (with your better navigation equipment and warm clothing) probably track and save them if you were there, knowing you wouldn't be able to find your way to the forest from where you're currently standing, but also knowing you have a smartphone you could probably use to find your way there. Ignoring other things, obviously you should consult the smartphone or otherwise seek the necessary information to help out the person in trouble: you may not immediately know the best map application or where your winter boots are currently located, but there are many ways to increase your relevant knowledge base here, and thinking about it for a while instead of dismissing the issue is probably going to help. If we expand the analogy to include the rest of civilization as well, there are also the helpful official people with dogs and helicopters: maybe you could encourage them to do the job if you think it's not where your comparative advantage lies. What you probably wouldn't do is shrug and accept that someone will definitely die out there just because there are multiple steps and some uncertainty in the process, and you don't immediately know how to do the object-level helpful things.
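The same structure can be written out as a minimal expected-value sketch (all the probabilities and payoffs below are numbers I made up purely for illustration, not estimates of anything), just to make explicit why "gather the information you know how to gather, then act" tends to beat both "act blindly" and "do nothing" whenever information is cheap relative to the stakes and actually improves your odds:

```python
# Toy decision sketch with invented numbers: three options for a rescue-type problem.
p_success_blind    = 0.2   # chance of helping if you act on current knowledge alone
p_success_informed = 0.8   # chance of helping after consulting the map / the experts
value_of_helping   = 100   # arbitrary units of good done
cost_of_backfire   = -20   # harm done if an attempt goes wrong
cost_of_research   = -5    # time and effort spent gathering information first

ev_do_nothing = 0
ev_act_blind = p_success_blind * value_of_helping + (1 - p_success_blind) * cost_of_backfire
ev_gather_info_then_act = (cost_of_research
                           + p_success_informed * value_of_helping
                           + (1 - p_success_informed) * cost_of_backfire)

print(ev_do_nothing, ev_act_blind, ev_gather_info_then_act)  # -> 0 4.0 71.0
```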

When talking about wild animal suffering, all but the most radical utilitarians and altruists are understandably super cautious about doing anything substantial. Some people simply have strong intuitions against meddling with the natural order (no, not the natural order that directly hurts human societies, just the more natural natural order, the nature, you know), which I think is a weak position for reasons I won't go into right now because other people have written about it at length before. Some people don't think things are so bad for wild animals anyway, probably because they feel suffering and satisfaction are somehow hedonistically commensurable and animal lives have some good moments as well (whereas I reject this view of pain and pleasure as opposite, positive and negative aspects of the same stuff – they can be indirectly compared using preferences for different tradeoffs, sure, but experience-wise they're not simply opposites in valence, but fundamentally different (and the bad is more relevant than the good)).

But the majority of people I interact with seem to basically accept that the lives of wild animals are often really bad, and that nature isn't inherently sacred to the extent that we couldn't help sentient beings out there – it's just that we don't have enough information, so our hands are basically tied. Some of these people accept that there might be a point in the distant future where we could maybe do something about the issue, while others don't really think about this possibility either, because the task sounds so thoroughly daunting. Both responses ignore the possibility of starting to work right now on increasing our understanding of ecosystems so as to build sufficiently informed, actionable plans to alleviate ecological suffering – which is exactly what we should urgently be doing, instead of just accepting our temporary helplessness.

Beware of sneaky malicious agents in moral thought experiments

In moral philosophy, thought experiments are supposed to use and clarify common intuitions to help us distinguish between acceptable and unacceptable states and actions in morally problematic situations. Moral frameworks are currently built mostly by trying to make these intuitions as coherent as possible, as people have various (interpersonally different but, all things considered, fairly convergent) intuitive responses to ethical dilemmas. In addition to these normative intuitions, the end results of a thought experiment also rely on intuitions about what exactly is *going on* in the situation in the first place, and subtle misunderstandings on the descriptive level could easily damage the reliability of our responses to the experiments.

So I’m pretty sure that suffering caused by violence or other malicious acts subjectively feels at least somewhat worse than suffering originating from impersonal causes, even when the tissue damage is equivalent: it certainly feels vastly more terrifying, disgusting, and unacceptable from a third-person perspective (to me at least). So, to evoke the sense of absolutely maximal suffering in moral thought experiments, it’s useful to describe the suffering as torture or violence of some kind, as this maxes out the intensity of the suffering current human brains are capable of (and capable of empathizing with, for the purposes of the thought experiment). But there’s a tradeoff here: yeah, we probably make the experiment more emotionally effective, which all things equal gives us a better understanding of the relevant moral intuitions – but we also lose some clarity regarding the full implicit consequences in these scenarios in a way that I’m pretty sure will bias our judgment.

This is because torture (or other personal, malicious causes of suffering) doesn’t actually happen in a consequentialist void where the torture is the only consequence and correlate of a choice, and no otherwise alarming or threatening events follow. Most importantly, this has basically never happened in the history of human evolution that has shaped all of our intuitions, which means they may not be well equipped to pretend there is such a void even when it’s required by the experiment. In our descriptive-level intuitive interpretation of the situation, the presence of torture or violence probably implies there is an obviously callous, probably unpredictable agent around who thinks it’s OK to hurt others, an unknown unsafe environment where such agents apparently are born, no one around with enough empathy and power to stop the violence, possibly something that poses a threat to us too, etc. (Though note that this is exactly the kind of idle evo-psych speculation your mother warned you about; I don’t know if anyone knows how intuitions really work.) Yes, often other causes of intense suffering, such as starvation or disease, also imply there’s something wrong on a community-affecting level – but probably not quite as saliently and alarmingly as personal violence does, because violence requires direct, immediate action. Whatever the exact cause is, I think there’s an intuitive emotional reaction against violence that isn’t purely reducible to its direct consequences (the suffering it causes), because it has always been such a different kind of a problem to solve, I guess?

Links & misc #5

petunia

• I took the Giving What We Can Pledge a few months ago! I'll probably write more in the future about the reasoning behind my decision to do it now, but basically people are bad at consistently caring about things for an entire lifetime, so if you have anything at all you care about (that you, with solid justification, hope to care about in the future as well) you should probably just sell your soul to GOOD before EVIL HEDONISTIC INDIFFERENCE has a chance to make an offer, because it will, and at that point refusing it could feel like giving up something cool you really really want. Human brains: not even once.

• Foundational Research Institute lists some basic intuitions that support suffering-based ethics.

• The inevitable evolution of bad science (original article here)

• Scott Aaronson v. Roger Penrose on conscious computers. Fave paragraph: “Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience.  I replied that, for me, the relevant issues here are ‘well below neuroscience’ in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: ‘give me a long enough lever and a place to stand, and I’ll move the earth,’ so too I can declare, ‘give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.’ The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.”

• I gave a presentation on Aaronson’s fantastic essay Why Philosophers Should Care About Computational Complexity for a philosophy class and will keep recommending this article to everyone forever.

• From the Weird Sun blog, a list of men. Also on a more serious note, Descriptive Before Normative.

• Cool project by Pippin Barr (known among other things as the Ancient Greek Punishment guy): It is as if you were playing a videogame (later applied to chess)

• “Existence values are an unusual and somewhat controversial class of economic value, reflecting the benefit people receive from knowing that a particular environmental resource, such as Antarctica, the Grand Canyon, endangered species, or any other organism or thing exists.”

• Based on this interview, Peter Unger sounds like a vaguely frustrating guy, but I'm also tempted to agree with many of his pessimistic views regarding academic philosophy (with a great many exceptions though, such as many ideas in ethics and philosophy of science maybe?). Ugh, now everything is going to feel slightly meaningless for a few days, yeah thanks a lot vaguely frustrating guy

• WikiHow seems like a rabbit hole you never really get around to exploring because the concept sounds kinda boring and possibly not rabbit-hole-y enough on the surface, but occasionally you'll bump into an article on how to pretend you have magical ice powers that leaves you in a weird state of low-key wondering what else is out there, though probably still not being sufficiently motivated to find out.