On good people

There may be a lot of overlap between good people and the people you like, but they are not strictly the same thing.

Humans have a fairly accurate sense of how well their acquaintances can be expected to treat them personally: how kind, generous, useful, and loyal they are going to be to their friends and family. I’m not an expert on the history of morality, but I’m guessing that for most of our cultural evolution, these have been among the most important criteria for determining how “good” someone is as a person, while utilitarian or otherwise less personal virtues, which primarily affect people far away, have certainly not been universally included in the definition.
However, the past few centuries of philosophical progress have given us a complicated, somewhat more consistent body of analysis regarding our moral intuitions and behaviour, and it would seem to require of us at least moderate levels of far-mode altruism. We also see a myriad of novel ways to improve the world around us and help other people currently stuck in bad situations, from organized charity to blood or kidney donations to participating in abstract economic and political research, all of which means that simply being friendly and kind to those near and dear to us is arguably no longer sufficient to capture most of the meaning behind “being a good person”. (Perhaps not even necessary; but this depends on just how committed one is to the utilitarian definition of goodness.)

Now, the cognitive tools that are often quite accurate in judging how valuable someone is as a friend are heavily biased, and in general just not very useful, when assessing how good someone is in this impartial ethical sense: how much good they seek to do, how much their success improves the lives of others in the grand scheme of things, how much we should support them on consequentialist grounds. There’s certainly a correlation – for example, all else being equal, compassionate and empathetic people tend to be nice to hang out with, and also more likely to find value in charity and other morally admirable, altruistic habits. But then there are other factors, such as a generally positive mood, conversational skills, eagerness to please, command of etiquette, confidence, and general charisma, which are more or less orthogonal to the strength of a person’s moral fibre but obviously make other people like them a lot. And these form such a big part of being likeable that if you lack them, it doesn’t matter how much work you do to save the world or how morally considerate you are in far mode: most people just aren’t going to like you very much.

There’s nothing inherently wrong with this, of course. People will naturally spend time with those whose company they find valuable: being likeable and being moral are simply separate trait sets, valuable in different ways, and no one should expect to become super popular and loved while being insufferable to be around, even if they donate lots to effective charities. But it’s important to be aware of this distinction, because under the influence of the affect heuristic we easily end up with distorted evaluations of other people and their values and ideas if we just take our intuitions at face value.
Intuitively, we often conflate how much we like a person with how good they are. Especially if the feeling of liking or disliking someone is very strong, it’s quite difficult to genuinely think of the person as “really fun to hang out with and always treats friends well, but doesn’t seem to care about the feelings of strangers at all, isn’t bothered by suffering far away, and is ultimately very selfish” or “boring, arrogant, and kinda rude, but has important goals and is worth helping and encouraging whenever possible”, instead of just adopting a simplified black-and-white approach where likeable people are liked & good, and unlikeable people disliked & bad. Even fairly serious ethical transgressions may go largely unnoticed as long as they are committed by a person we like, while perfectly reasonable ideas and suggestions are ignored or criticized when they come from a less likeable person. This is a bug in the system.

This is obviously something effective altruists should keep in mind, too. Social and PR skills matter enormously, in that practically everything worth doing requires some amount of collaboration, and if you’re not likeable in the least, you’re generally not going to achieve as much as you potentially could have in terms of saving the world (whether you’re aiming for a high income or some other worthwhile form of impact). Not to mention how crucial social belonging is to one’s personal wellbeing, of course – your best shot at building a life that feels happy, meaningful, and motivating is probably striving for habits that are satisfying both morally and socially, so completely neglecting near-mode goodness will typically lead to miserable outcomes.

But everything comes down to resources. For some otherwise good and compassionate people, being likeable is difficult: social situations tend to be stressful, non-human things are simply more interesting than socializing, relating to other people is hard, and so on. There may be good reasons not to enjoy their company a whole lot, but they can still be wonderful people in the impartial ethical sense: worth cheering on, worth thinking of as people whose success in life is a very positive thing, and maybe worth getting to know better too, in case they turn out to be surprisingly ok after all. Or maybe they still don’t call you back, are never going to be any fun at parties, and forget your birthday anyway – but if, somewhere on a continent far enough for their awkward telescopic philanthropy to reach, there is an increasing number of people who statistically are alive and healthy just because of them, I think they may be prioritizing the right things, no matter how counterintuitive it feels.

A quick honesty check

When seriously arguing for (or against) a simple policy or practice, you probably aren’t being intellectually honest unless you’re willing to explicitly list all the reasonably predictable consequences you think said policy or practice will plausibly have – benefits for some parties, harms to others, and more or less neutral consequences for the rest – just clinically jotted down on a piece of paper or something. Ideally you should also ask someone holding a differing opinion whether they would agree on the list, because in principle it only includes straightforwardly descriptive statements, not value judgments. (If you’re playing on hard mode, also try to come to an agreement on the approximate magnitude of each effect you’ve written down.)

This should help ensure that differing opinions about policy are not primarily rooted in epistemic disagreements about empirical matters (which are often easier to begin resolving, and, uh, should not warrant a deep hatred of the people on the other side with their profoundly disturbing alien values), as well as reduce certain biases. We typically put disproportionately little weight on the fact that not all the consequences of whatever we’re currently promoting are going to be 100% positive for everyone forever: someone is almost certainly going to suffer as well, or there probably wouldn’t be much to argue about in the first place. That suffering always seems like a trivial inconvenience to us, and sometimes it is, but sometimes it isn’t – quite independently of how insignificant we’re personally inclined to believe it is. This is why we need to address it explicitly instead of just accepting our immediate, likely flawed, intuitive judgment. The information we use in the process is often pretty obvious, but we still don’t really take it into account until we explicate it (perhaps a bit like how being aware of the hidden zero when making personal decisions often lessens temporal discounting).

After considering the list, you’re free to point out how small or easily treatable the resulting suffering is relative to the benefits the policy will provide, of course. But you have to be able to make a list – without omitting real, relevant consequences just because they can be used to support an opinion opposing yours. If the policy is any good in the first place, it should be able to face all of reality.

Since I currently seem to be arguing for the practice of listing both the negative and the positive consequences of whatever one is arguing for, and have so far mainly focused on why this would be a good idea, I suppose I’m required to add some caveats too. Cognitively speaking, it does get quite taxing to consciously assess decisions in this way, especially when you’re discussing something very complicated or novel and unpredictable. But that suggests it’s probably a good idea to gather more evidence before applying the policy, to make it less complicated or unpredictable – so maybe argue for gathering the needed evidence instead. Still, it’s true that when we’re not talking about a critically important decision affecting large numbers of people, our heuristics and intuitions are sometimes the more cost-effective decision-making tool, and will do just fine.
Another discouraging point is that since practically nobody else is immediately willing to play by these rules, and the majority of people are in any case going to be biased like this, you won’t accomplish much apart from making your own opinions look worse if you admit they may have bad effects too. This kind of thinking – reasoning that you have to keep promoting irrationality if you want to participate effectively in a largely irrational society – is absolutely terrifying, but unfortunately that doesn’t mean the conclusion isn’t true in some cases. Still, the list principle is good to keep in mind during debates in genuinely curious environments, where participants are actually aiming for a rational consensus: it will highlight important things about intellectual honesty, and help to locate the actual disagreements.