Links & misc #3

• Such a good book: Careless Thought Costs Lives, an exploration by Janet Radcliffe Richards of the practical ethics surrounding organ transplants. The topic isn’t something I had previously spent a lot of time thinking about, but I was waiting for someone at the library the other day and just grabbed a book to pass the time (I’m glad this edition happens to have such a bright red cover). The legislation involving transplants affects a huge number of human lives and works in a context of interesting incentives, so the task of carefully examining its justification is quite important in itself – however, the object-level discussion & the conclusions the author arrives at aren’t the main reason I found this book so great: it’s the clarity of thought that’s really impressive about it. I feel like I should recommend it to everyone just because of the exemplary reasoning it presents, but I’m not sure how to convincingly gush about a book on organ transplant policy.

• Should effective altruism be seen as an obligation or an opportunity? Initially, I was going to respond with the latter: I’m guessing most people aren’t going to embrace novel ethical obligations they didn’t figure out for themselves, and for me EA has always primarily felt like a positive-affect thing that allows me to do obviously valuable and good things with my life, something I’d rather not be without. But I guess I also think it’s an obligation to some extent (reminds me of a tweet).

• The ethical problems of erasing traumatic memories. Andrea Lavazza raises some points against full personal autonomy regarding memory modification, and while I’m not really convinced by his specific arguments, the issues discussed here are interesting.

• Lab-grown meat has dropped dramatically in price lately, which is excellent news. However, the researcher being interviewed still predicts that it will take a couple more decades before the stuff is widely available – which I guess matters more, and isn’t soon enough to sound quite as astonishing. Then again, everything is for some absolutely mysterious reason always estimated to be a couple of decades away, even when it’s actually just around the corner, so make of that what you will – maybe the huge price drop is a more reliable indicator after all. (Milk might be a bit simpler to produce using modified yeast, and could easily end up cheaper than raising dairy cows, so at least we’ll soon have cheese.)

• Sarah Perry on some core features of ordinary human experience and behaviour. A lot of the assumptions made here are quite bold & not necessarily based on uncontroversially accepted theories, but most of it is certainly plausible and genuinely insightful. (I really like all of her writing at Ribbonfarm. Most of the stuff I read is very anti-traditionalist by default, so it feels refreshing and healthy to occasionally switch to such a completely different mental framework, especially when the writer is so good.)

• Predicting is applied believing, and the only way to figure out how accurate your models of the world actually are. Julia Galef lists 16 useful types of predictions at Less Wrong, so now we all have something to actually use PredictionBook for.

• A meta-analysis of the studies published so far on ketamine, everyone’s favourite potential future antidepressant. The studies pretty unambiguously show large effect sizes in reducing depressive symptoms starting almost immediately after infusion, which is remarkable given how much longer antidepressive agents normally take to work – but patients also seem to relapse within a couple of weeks at most. The addictive potential and the apparent side effects of long-term use also sound pretty discouraging, so it will take a bit more for ketamine to sound like a feasible treatment, but I’m glad it’s being investigated.

• Reward and punishment seem to have clearly asymmetrical effects on our choices: choice repetition after a reward scales with the magnitude of the reward, while choice avoidance after punishment is apparently unaffected by the magnitude of the punishment. The tendency to put greater subjective weight on negative outcomes than on positive ones of similar magnitude is a well-known feature of human decision making, but this idea suggests a more fundamental distinction between the two factors and could help explain a lot about human behaviour, especially once more research is done on what constitutes a reward or a punishment for different people (e.g. if nobody laughs at your joke, do you predominantly experience it on the reward scale, as no reward, or on the punishment scale, as mild social rejection?). Maybe utilitarian philosophy would also benefit from distinguishing between rewards and punishments more clearly instead of always lumping everything we assign value to into the single factor of utility/negative utility. I get that it’s often reasonable to think in those terms, especially within preference utilitarianism, but I don’t think it’s recognised clearly enough that suffering isn’t just negative happiness. (For the rough shape of the reported asymmetry, see the toy sketch below.)
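
A toy sketch of the asymmetry, assuming a simple preference-update rule – the rule and its constants are my own illustrative inventions, not the study’s actual model:

```python
# Toy model of the reported asymmetry: reward-driven updates scale
# with reward magnitude, punishment-driven updates do not.
# (Illustrative only: the rule and constants are invented for this
# sketch, not taken from the study.)

LEARNING_RATE = 0.1      # how strongly rewards shift a preference
AVOIDANCE_PENALTY = 0.5  # fixed, magnitude-insensitive punishment effect

def update_preference(preference: float, outcome: float) -> float:
    """Update the preference for a choice after observing an outcome."""
    if outcome >= 0:
        # Reward: the shift grows in proportion to the reward's size.
        return preference + LEARNING_RATE * outcome
    # Punishment: a constant penalty, regardless of how bad it was.
    return preference - AVOIDANCE_PENALTY

# Doubling a reward doubles its effect on the preference...
print(update_preference(0.0, 1.0))   # 0.1
print(update_preference(0.0, 2.0))   # 0.2
# ...but doubling a punishment changes nothing.
print(update_preference(0.0, -1.0))  # -0.5
print(update_preference(0.0, -2.0))  # -0.5
```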

On the urgency of ethics

Compared to all other areas of philosophy, as well as science, ethics seems to ask a particularly urgent kind of question. Why should we prefer one action over another? How should we take other people, actual or potential, into account? What should we do to give our lives meaning, and why does it matter? If there indeed is a basis for normatively evaluating our actions in this manner – a meaningful sense in which we should do some things instead of others, as most people seem to believe there is – then our understanding of these things ideally needs to precede everything else. By definition, it’s important to figure out the potential shoulds as soon as possible, lest we just carry on doing things essentially at random, almost certainly including a bunch of stuff we shouldn’t have done and wouldn’t have if only we’d thought about it in time.

Still, surely only a tiny fraction of the people best equipped to work on problems of this kind – those with the curiosity, motivation, imagination, and intelligence to tackle all types of difficult questions currently conceivable to humans – work on ethics research or are otherwise explicitly very interested in ethics. Similarly, ethics as a field isn’t especially well funded, to say the least: judging by how few resources people are willing to allocate to other people figuring out ethics, it isn’t particularly important to the general public either. And so far, I haven’t seen research on ethics listed as a potential high-impact cause in effective altruism, even though the movement is about “doing good well” (a question ethics should absolutely answer) – and most of the EAs I know certainly acknowledge that ethics is still largely a mess and that this phrase has no obvious rigorous definition.

Below is a list of possible reasons for this apparent prioritizing paradox. Some of them are mutually exclusive, while many are compatible and could even work somewhat synergistically if true; as of now, I’m not confident how likely or significant any of them are as explanations.

A great many people actually are working on, or motivated by, figuring out questions related to ethics – just not in the context of academic philosophy. There are, and have historically been, a lot of interesting empirical questions that greatly influence how we should make ethical judgments: is there a God, what is life and does it matter, how does the human brain make decisions or create consciousness, why do things suffer or die, what kind of future scenarios are physically possible for humanity, and so on. What ought to be is fundamentally based on what is, and it’s useless, or at least probably rather inefficient, to worry about what we should do before we have any idea of what we can do (and of what’s going on in the first place).

Our approximations of what one should do seem pretty strong already, and they suggest we have more urgent things to put our minds to than solving the rest of ethics. It seems overwhelmingly likely that we’ll find things such as curing diseases, decreasing global poverty, and efficient energy production to be good ideas no matter how we end up defining ethics – so they’re precisely what we should be doing until we hit significantly diminishing returns. This especially explains why effective altruism isn’t currently very interested in ethics research: we are pretty confident about being at least on the right track, and many quite uncontroversially urgent shoulds are right in front of us, so it’s probably best to address them first.

You can’t pick your own interests, and ethics just isn’t very interesting. Quite literally the boring answer: compared to sexy, prestigious, and rewarding occupations in high-status science and entrepreneurship, ethics, like all philosophy, seems to offer a lot of frustration and uncertainty and little in terms of concrete advancement and impact. Things are thoroughly fuzzy because a great deal of the arguments rely on vague-sounding stuff like intuitions instead of anything empirical or quantifiable, and as a result everyone mostly argues about definitions a lot while staying dirt poor, because the results aren’t interesting or useful enough for anyone to be willing to pay much for them.

There is no meaningful sense in which we “should” do anything, and most of the people who really have a clue have simply picked up on this. This is probably my least favourite answer, not only because I don’t personally quite accept moral nihilism, but also because the reasoning sounds suspicious as an explanation for this particular problem. Many brilliant non-ethicists certainly act as though some things were morally better than others, often taking explicit ethical stances in difficult situations – it’s just that they’re not spending lots of their time actually advancing ethics. (Anyway, if moral nihilism is trivially true to them, they could at least bother to inform the rest of us about it convincingly enough that we’d move on to doing something productive instead of, you know, blogging about ethics.)

Our brightest minds aren’t bright enough. Ethics, though (arguably) a human construct and therefore probably accessible to at least some human minds, could just turn out to be too complex in its emergent properties for most or even all people to think about with sufficient clarity. The cognitive abilities it requires may simply be uncommon: if, say, the empathising/systemising divide hypothesised to define major aspects of human neurodivergence turns out to be a thing, it seems plausible that doing ethics well requires a person to excel at both modes, which I suppose is pretty rare even among our brightest minds.

Politics actually is what happens when people remember the urgency of ethics. Politics is a complicated system in which diseased reasoning, dishonesty, and biases abound, but at its core it’s also a bunch of people, most of whom are trying to figure out ethics and then put it into practice. So apparently, when we attach to ethics (1) a ton of power and status to gain, determined by a pretty straightforward hierarchy, (2) wages actually worth working for, and (3) incredibly strong identities and group loyalties to signal, the resulting incentives transform it into the thousand-faced abomination of almost total ineffectiveness that we usually see in politics.

Pretty much no one really wants ethics after all. There might be a way to evaluate and coordinate interactions that corresponds to the concept of morality and generally contributes to the flourishing of each of us and whatever entities we think should flourish – making moral nihilism incorrect – but this might be something people would just rather not think about or commit to, due to akrasia and conflicting, egotistical desires. So when people worry about “doing the right thing” or wonder what “a good life” is, it’s usually either posturing or an unproductive symptom of cognitive dissonance within personal intuitions, not anything like a sincere expression of a need for moral guidelines.

How selfless should altruism be?

When discussing effective altruism or charity in general, most of us have encountered (and/or been) the cynic who points out that no true altruism exists and that those who help others do it just to feel good about themselves and gain social approval. The implicit message, of course, is that because of this, donating to charity doesn’t make you morally better than anyone else – except maybe if you can somehow prove you do it for the right reasons (which you can’t, so you should stop doing a falsely virtuous thing, or at least stop talking about it, because you’re sending dishonest signals).

Anyway, people who tend to think in consequentialist terms rarely give much thought to accusations of this kind. Unless the motives behind donated money create unwanted incentive structures or otherwise affect other people in the future, who cares about them? Probably not the people whose lives are saved because of it. The habit of donating is good and perfectly moral as long as its consequences are good, even if it’s done to benefit oneself.

However, most people by default act on a colourful mix of implicit consequentialist, deontological, and virtue-ethical intuitions and principles, and to those whose consequentialist tendencies aren’t overwhelmingly dominant, the matter is more muddled in cases like this. Why, and how much, should we care about why people do charity?

Essentially, there seem to be two opposing intuitions competing here: the intuition to praise a person whose actions demonstrably and unquestionably save or better human lives, and the intuition to scold a person who seeks social status in ways that scream hypocrisy and dishonesty – who is “faking” a valuable signal of selflessness and virtue. (Scare quotes because the intuition can be triggered even when the person outright admits they act out of self-interest, and thus isn’t actually being dishonest about anything at all. In fact, in some hilariously paradoxical way, this sometimes seems to evoke an even stronger dislike towards our fishy altruist: someone chooses to help others because it makes them feel better, and then has the nerve to admit it instead of pretending to hold more virtuous motives, like normal people do. How outrageous can you get?)

If we value saving human lives, the praise reaction is obviously important – we should place positive value on things that have the right consequences. But how should we treat the latter intuition? After all, the reason for our deep-seated ideas about good people and the right motives is that in many situations and social environments, they work as predictive tools – helpful pointers to where we should allocate our time and kindness. Someone who donates out of a genuine will to do good will pretty likely keep doing good in the future, whereas someone who mostly optimizes for, say, social status might quit as soon as they find a better strategy for reaching their actual terminal goals.
In this situation, however, it isn’t useful. The cynical approach is mostly correct in that only a small part of human motivation consists of anything other than increasing social status, and even that remainder is related to feeling personally good in one way or another: even the most self-sacrificing martyr feels good more than they suffer because of their sacrifice. That’s how motivation works. Our intuition regarding selfless people isn’t supposed to find people who are selfless in some absolute sense; it’s supposed to find people who are motivated to be virtuous, who feel good about being virtuous, and who are reliably going to stay virtuous exactly because it’s personally, emotionally important to them. The cynic is fooling our intuitive system into believing these people are selfish in a stronger sense – that they don’t really care, even though they do.

So whenever your friendly neighbourhood cynic wants to end all charity again (or you’re feelin’ unhelpfully cynical yourself), below are some ways to look at the issue. I assume a convergence of virtue ethics and consequentialism: that we hold things virtuous because they tend to lead to good consequences more often than not – perhaps that virtues are basically abstractions of the rules of rule utilitarianism, rather than some new metaphysical category. (There are probably some esoteric forms of virtue ethics in which traits are seen as virtuous independent of the consequences, but since this piece is intended not as a refutation of all virtue ethics forever but as a suggestion for how non-virtue-ethicists might handle a single virtue-ethical-ish intuition, I don’t caaaaaare.)

• Feeling good about doing good is as virtuous as it gets. If someone gets a rush out of donating money and knowing they have alleviated the suffering of strangers, how does that make them less awesome instead of more? I wish more people had motivational systems like that.

• Similarly, seeking social status by donating to charity is better than approximately 100% of the other possible ways to seek social status, most of which no one ever complains about. Social approval is a fundamental part of a healthy human life anyway – it’s not like you can do without it. What kind of people seek status by giving some of their money and energy to help out strangers, when they could have chosen to spend a bit more of it on switching to better cell phones or dyeing their hair more often? Pretty good people with healthy priorities, that’s who.

• People donating mostly for warm fuzzies and status (instead of some pure desire to do good) does risk shifting the focus of donations towards more charismatic, often less effective charities. This is a valid concern, but I don’t think the only solution to it is “don’t be motivated by fuzzies or status”. In fact, I think that’s the worst solution I’ve heard anyone seriously advocate. The oft-repeated old advice that people should purchase fuzzies and utilons separately is better, but it also makes it sound like it’s unusual to get warm fuzzies precisely from knowing one has donated to effective charities – or to be able to easily self-modify to get them that way. Likewise, most people who hang around effective altruists, even just in an online community, have a social environment that awards lots of status for smart giving and less for less smart giving. All in all, this point might in some situations cause problems, but it’s useless to complain about it before it actually does.

• Given how human identity building works, even a person who donates almost purely for status isn’t likely to just stop donating on a whim when they think of a more effective way to gain approval. Once you invest a lot of time and effort into something such as effective charity, you also tend to grow in its direction, as actions become habits and habits become identity. This effect is especially strong if you participate in a community with lots of people holding EA-ish values: when a great deal of your existing status depends on altruism, it’s less tempting to start doing something entirely different instead. Sometimes this might actually be a stronger motivation to keep going than an unemotional conviction to do good on a more abstract, morality-based basis, which is vulnerable to value erosion.

Moreover, I’m personally something of a moral naturalist in that I would define morality as a game-theoretical system whose function is perhaps similar to Hofstadter’s superrationality (a long, rambling post elaborating on this can be found here). Altruism, like everything in the realm of morality, is certainly rooted in egoism, but it transcends individual-level selfishness to promote healthy and cooperative communities instead of Nash equilibria where everyone mostly loses all the time by following simple personal incentives that seem self-serving but are actually bad for everyone involved (a minimal sketch below illustrates the point).
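
As a minimal sketch of that last point, here’s the textbook one-shot Prisoner’s Dilemma in Python – the payoff numbers are the usual illustrative values, not anything from the post linked above:

```python
# Standard one-shot Prisoner's Dilemma payoffs (row player's payoff
# listed first); the numbers are the usual textbook values, chosen
# purely for illustration.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """The move that maximises one's own payoff, holding the opponent fixed."""
    return max(("cooperate", "defect"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Following simple personal incentives: defecting is the best response
# to either opponent move, so (defect, defect) is the unique Nash
# equilibrium...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
print("Nash equilibrium payoffs:", PAYOFFS[("defect", "defect")])  # (1, 1)

# ...even though both players would do better under mutual cooperation.
# A superrational player expects a like-minded opponent to reason
# identically, so they pick the best *symmetric* outcome instead.
superrational_move = max(("cooperate", "defect"),
                         key=lambda move: PAYOFFS[(move, move)][0])
print("Superrational payoffs:",
      PAYOFFS[(superrational_move, superrational_move)])           # (3, 3)
```
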
Morality works in part through heuristics and hard-wired intuitive constructs such as empathy and other genuine prosocial emotions, which is why it’s a bit unreasonable to accuse people of being selfish when they help others and feel good about it – as I said above, this is just how our motivational systems work and how our species mostly coordinates its cooperation.
We are biological creatures with an evolutionary history who arguably don’t even have access to ontogenetically pure, thoroughly selfless motives, but are still capable of remarkable levels of good. We need to work with the incentives that help us with that.