Links & misc #5


• I took the Giving What We Can Pledge a few months ago! I’ll probably write more about the reasoning behind my decision in the future, but basically people are bad at consistently caring about things for an entire lifetime, so if you have anything at all you care about (that you, with solid justification, hope to care about in the future as well), you should probably just sell your soul to GOOD before EVIL HEDONISTIC INDIFFERENCE has a chance to make an offer, because it will, and at that point refusing it could feel like giving up something cool you really really want. Human brains: not even once.

• Foundational Research Institute lists some basic intuitions that support suffering-based ethics.

• The inevitable evolution of bad science (original article here)

• Scott Aaronson v. Roger Penrose on conscious computers. Fave paragraph: “Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience.  I replied that, for me, the relevant issues here are ‘well below neuroscience’ in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: ‘give me a long enough lever and a place to stand, and I’ll move the earth,’ so too I can declare, ‘give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.’ The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.”

• I gave a presentation on Aaronson’s fantastic essay Why Philosophers Should Care About Computational Complexity for a philosophy class and will keep recommending this article to everyone forever.

• From the Weird Sun blog, a list of men. Also on a more serious note, Descriptive Before Normative.

• Cool project by Pippin Barr (known among other things as the Ancient Greek Punishment guy): It is as if you were playing a videogame (later applied to chess)

• “Existence values are an unusual and somewhat controversial class of economic value, reflecting the benefit people receive from knowing that a particular environmental resource, such as Antarctica, the Grand Canyon, endangered species, or any other organism or thing exists.”

• Based on this interview, Peter Unger sounds like a vaguely frustrating guy, but I’m also tempted to agree with many of his pessimistic views regarding academic philosophy (with a great many exceptions though, such as many ideas in ethics and philosophy of science, maybe?). Ugh, now everything is going to feel slightly meaningless for a few days, yeah, thanks a lot, vaguely frustrating guy

• WikiHow seems like a rabbit hole you never really get around to exploring because the concept sounds kinda boring and possibly not rabbit-hole-y enough on the surface, but occasionally you’ll bump into an article on how to pretend you have magical ice powers that leaves you in a weird state of low-key wondering what else is out there, though probably still not being sufficiently motivated to find out.


Your actual terminal values are your actual terminal values

One of the coolest things I’ve internalized during the past year is this: you don’t need to have a particular emotional response to the things you deem terminally valuable, and you don’t need to assign any moral relevance to the things you happen to feel, unless you endorse these feelings as part of your moral framework. You get to decide what your terminal, inherently important values are, with no particular obligation to infer them from simple unprocessed gut feelings, emotions, or other stuff outside of your direct conscious analysis and control. Sure, all sorts of affects and intuitions are going to influence your decisions and form the basis of your motivation, but still, to what extent you endorse a given potentially morally relevant feeling and how much weight it gets if embedded into your explicit values is something you get to decide for yourself. And once you have a somewhat satisfactory set of terminal values, you don’t need to care about anything other than that (as well as the identifiable instrumental goals that connect to it, which of course is a lot to care about, but like).

This dissociation between a genuinely meaningful life and a life full of conventional, individualistic, sometimes entirely superficial but emotionally salient markers of success is to me extremely liberating, because of the relative fickleness of personal happiness. The causal pathways that lead to personal wellbeing are only somewhat reliable: unfortunately, it’s easy to overestimate the value of external life changes, or to have a brain that just generally is never really satisfied with the stimuli it encounters. This is not to say that the actions you take to affect the rest of the world have guaranteed effects either, of course – but you probably have more empirical information on how they transform the world, and these effects are usually large enough to make them a safer bet despite the uncertainties involved. Compared to altruism, hedonistic psychological egoism, especially when tied to external factors, is a treadmill that leads to distress and powerlessness: you can’t decide to be loved, wealthy, or high status in the same way you can decide to show love to someone else, help out people around the world with vastly lower incomes, or act respectfully towards others. Effort-wise, it’s cheaper to do your best to make sure other people have their basic needs fulfilled than to strive to improve your own state, which is probably already subject to diminishing returns.

Not having to see myself as a morally special, intensely relevant entity just because this is how the subjective point of view I inhabit automatically feels (and because any subjective entity that results from evolutionary processes has a natural egoistic bias) lets me accept even severe personal emotional setbacks as the inevitable, but absolutely not catastrophic, infinitesimally small moral negatives that they are. After screwing something personal up I may be in pain I’m unable to alleviate, but there are other people in pain as well, some of whom I may be able to do something about. This grants me safety, but also control and power over making the world as a whole better: maybe I can take a moderately laborious action that improves my long-term wellbeing by a puny, hopeless expected 0.05%; but it is inherently exactly as important that any other person is spared from unhappiness, so a similar amount of work will usually lead to expected effects larger by an order of magnitude or more if I simply reject my default intuitive human egoism (to the surprisingly large, but of course not complete, extent that such a rejection is possible). Again, this doesn’t mean you have direct control over how the actions you take will be transduced into qualia experienced by the entities whose lives you seek to improve, but e.g. making statistically sure fewer people suffer for weeks and maybe die of painful preventable diseases is very likely to be a good idea in this respect, and a benefit of a magnitude by which you will probably not be able to easily improve your own life.
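The comparison above can be sketched as a toy expected-value calculation. Every number here is made up purely for illustration (the 0.05% figure from the text, plus assumed multipliers for marginal returns and people reached); the point is only the shape of the argument:

```python
# Toy expected-value comparison with entirely made-up numbers:
# only the structure of the argument matters, not the figures.

effort_units = 1.0  # some fixed amount of work

# Self-directed: a moderately laborious action that improves
# my own long-term wellbeing by a small expected amount.
self_gain = 0.0005  # the text's +0.05% of one person's wellbeing

# Other-directed: the same effort spent where marginal returns
# are assumed higher (people whose basic needs aren't met).
other_gain_per_person = 0.005  # assumed 10x marginal return
people_reached = 3             # assumed

expected_self = effort_units * self_gain
expected_others = effort_units * other_gain_per_person * people_reached

print(expected_others / expected_self)  # roughly 30: an order of magnitude or more
```

Whatever the exact figures, as long as the marginal returns of helping others exceed the marginal returns of further polishing an already-decent personal situation, the ratio comes out in favour of the outward-directed effort.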

At a glance, this seems somewhat like an inversion of Stoicism: instead of accepting that you have no control over external happenings and focusing on cultivating appropriate attitudes towards the world in order to reach an internal sense of peace, you accept that you sometimes have very little power over your internal emotional states, and turn your attention to the outside world to at least reduce the distress of other sentient beings. This incompatibility is illusory, though, and I’ve found these attitudes reinforce each other quite neatly, at least if your preferred flavour of Stoicism is modern in that it’s based on empirical, statistical observations about the extent of the effects of your actions. To me, it seems the virtues of Stoic philosophy were originally designed to counter the same futile hedonistic egoism I oppose, not explicitly to advocate resignation to the world as a place full of disease, abuse, and suffering you can’t do anything to alleviate (impartial empirical altruism just sort of hadn’t been invented yet). Accepting personal emotional setbacks as something not super morally relevant reinforces a Stoic acceptance of things outside of your control, and lets you focus on achieving vastly more important things without disproportionate aversion or fear of personal discomfort.

Personal wellbeing is probably going to be part of the stuff you value, and it might not be very universalizable to assign no intrinsic value at all to one’s own happiness, but in the end, you still get to decide whether your subjective experience is restricted to a solely instrumental role, a parameter you only need to improve insofar as it helps you advance other features of the world. From this, it follows that even if you never feel especially “happy”, even if you’re never successful or particularly high status or whatever it is that you feel you may so far have failed at, you may still be doing your best towards advancing the things that you truly value. Even persistent anhedonia doesn’t have to mean anything at all: even without the associated emotional reward, you can still succeed at the things that actually matter.

More on status-based motivation

People frequently speak of social status like it’s something virtuous people aren’t supposed to be super concerned about: caring a lot about how others see you is for shallow people, while respectable people do things “for their own sake”, driven by “genuine passion”. Still, when investigated further – according to the most cynical interpretations at least – most of these supposedly noble urges, such as caring about the suffering of distant people or devoting one’s time to scientific research, are in fact also best explained by the motivation to increase and signal one’s social status: many of the states that the human brain recognizes as rewarding are probably triggered by social cues, and the rest by things that traditionally are closely associated with them, which according to the cynical view means that what other people think about us is in fact effectively the only motivation we have (though it sometimes is coded in values that inherently feel important regardless of social benefits, precisely due to their long history of being tightly coupled with status).

I basically agree with the gist of this view, though I guess its concrete empirical basis is currently a bit too thin to justify extremely high confidence – anyway, most of my quibbles are about the frequently implied moral implications of status-based motivation. Even after people understand how much of human motivation is actually based on pursuing status and how crucial one’s social standing can be to one’s happiness and mental health, the vague disapproval surrounding status-seeking behaviour seems to persist. Anecdotally at least, it’s common even for reasonable people to simultaneously (1) acknowledge that everyone is motivated to a great extent by social status, (2) admit that alternative, inherently prosocial incentive structures don’t seem to be feasible at all, and (3) view status as a morally reprehensible source of motivation, something good people should basically be ashamed to be motivated by. This introduces a feeling of dissonance that I think is needless.


Social status is a versatile good with many different sources, effects, uses and forms. When judging whether it is in some sense virtuous to strive for it, we tend to lump all of these forms together just because there’s a convenient word we can use when we mean any of them. So when good is done in order to gain status, the deed carries a negative connotational load, because seeking status is more often talked of in contexts where people act in harmful, pretentious, and frivolous ways (such as buying bigger cars, or whatever the universal status-seeking caricature is at the moment); but this kind of otherwise reasonable conceptual connection doesn’t mean these things are morally equivalent or comparable, it just means they are similar in ways crucial enough for our languages to have developed a broad term for all of them.

Compare seeking status with seeking material wealth (which of course can ultimately be seen as another way to gain status, but let’s roll with it for now). It doesn’t matter much where your money comes from or how you acquire it: it’s always the same, more or less, and owning it has roughly the same effects on you no matter its source. Contrary to this, there are countless different kinds of social status, only some of which are usually experienced as rewarding by any given person with certain goals and values; only some of them should be seen as shallow or useless, and some we should hope more people would seek – the positive attention received by giving to charity, the admiration gained by doing or teaching valuable research, the reputation of an honest and trustworthy person. Most importantly, people seek status selectively and very often only feel good about the aspects of social status aligned with their values: a physicist might expend considerable effort to gain respect in their research group while getting little out of other, superficial and morally irrelevant status sources. Being consequentially virtuous, then, doesn’t need to mean being motivated by some pure morals completely decoupled from status, the virtues themselves: it can also mean being reliably motivated by the right kind of status while ignoring the more frivolous pursuits. It feels absurd to suggest that a person who has high status in an altruistic community due to their consistent charitable donations and effective outreach work could just as well have invested more in expensive clothes and hairstyles; the status gained by the latter actions would probably not have felt meaningful or worth it to the person in question, even if they could plausibly have resulted in more absolute admiration in some other social groups.


Humans are a cooperative species, and the fact that our survival and wellbeing is determined by whether our peers want us around or not is the force behind this cooperativeness. Even if we are just a bunch of genes trying to replicate, the dynamics of social status make us adopt being useful to each other as a part of our value systems, resulting in things like wishing to avoid hurting others, wanting to make people happy, and baking vegan cheesecakes. The mechanisms with which we judge a person’s status are (from a very roughly sketched evo-psych perspective that should be taken as non-literally as possible) concerned with how powerful and benevolent an ally they would be, and to what extent we can expect them to be good to us; likewise, when we want to increase our own status, we strive to prove ourselves more valuable and more beneficial to others than we would otherwise be. Status-seeking isn’t something that undermines human morality, it’s part of the foundation on which it is originally built.

The system isn’t perfect, of course. When we optimize for status (instead of the good things we claim, often humanly impossibly, to optimize for directly), we easily get stuck in counterproductive signalling competitions, and in many environments with weird social structures there are ways to increase status effectively by actively harming others (fear-based dominance, “evil is sexy”, super misguided charity work that mostly gets in the way of everyone trying to improve things) or yourself. When people complain about other people being motivated by status-seeking, this sort of thing is usually what they are picking up on, and quite rightly too – but I wish the complaints were directed at seeking the wrong kinds of status, instead of pretending that it’s virtuous and good, or indeed possible at all, to be a human who doesn’t care about status. When people are reluctant to admit that they seek status rewards instead of something pure, or condemn other people who do so because it doesn’t fit their idea of good motives, they approach the issue from the wrong angle, and end up impairing their perfectly functional motivational systems. Understanding and accepting the principle of status-motivated behaviour, on the other hand, can be tremendously useful. Accepting status games as an okay thing for humans to be involved in lets us assess them more honestly and tie them more closely to positive externalities, so that status games become more about generating lasting value for morally relevant entities, and less about senseless negative-to-zero-sum races.

Links & misc #4

All right, time to return from the unofficial summer break I almost accidentally took from blogging! I have actually written quite a lot this summer, but most of it is currently in a half-finished state, so here’s another exciting list of links and miscellany.


• A list of terms that psych papers frequently use but probably should generally avoid due to their ambiguity or inaccuracy (e.g. “a gene for”, “hard-wired”, “neural signature”). I’m so glad to see this recent trend of people calling out researchers using superficially convincing, but substantially vague and misleading terminology. (Related classic: Still Not Significant.)

• A group at Karolinska Institutet has built a functional artificial neuron capable of communicating with human cells. At this stage it looks pretty clumsy, but the authors believe that miniaturized devices following the same principles are probably feasible in the future and could be utilised e.g. for remote-controlled release of neurotransmitters, all of which makes this technique pretty interesting.

• A neat summary of the wide variety of problems that governments could face when trying to regulate the development of powerful AIs. Novel areas of technological progress tend to be difficult to control by default, because changing laws is practically always a tremendously slow process compared to the speed of profitable technological advances, but there are many good reasons to assume that AI will be an exceptionally tricky research area to attempt to regulate.

• Similar things have been done in E. coli before, but this looks potentially very useful: a common gut bacterium can now be equipped with synthetic sensory receptors and other basic computing components which allow its gene expression to be programmed and controlled e.g. by the food the host eats.

• In a recent experiment, researchers induced in the subjects a sense of body ownership (similar to the rubber hand illusion) for “a virtual balloon changing in size, and a virtual square changing in size or color, in synchrony with movements of their real hand”. The authors suggest that previous experiments have overemphasised the resemblance between the subject’s own hand and the other object, and that sufficient synchrony does indeed let us identify with objects that don’t even look like our limbs. Woohoooooooooo etc.

• A strange art project trying to visualise what Windows93 could have looked like (utterly ridiculous, very colourful, a bit disturbing).

• According to one study, people on physically unstable surfaces are less likely to believe their current relationship is going to last (subjects also reported less affection towards their partner as well as generally lower relationship quality, so at least they’re, uh, being internally consistent in their assessments?).

Other stuff:

• I don’t even play video games myself, but I watched a friend’s gamedev stream the other day and it looked like a lot of fun, so I downloaded Unity, started learning C#, and am now in the process of making my first tiny little game (which is one of the reasons I have sort of neglected this blog lately). I’m not sure if the end result is going to be anything interesting – basically I just wanted to establish some sort of a coding habit, and messing around with games turned out to be a motivating way to do this, because I get to alternate between scripting, which I enjoy on a challenging problem-solving level, and making graphics, which I enjoy because it’s relaxing, but I’m sure the process will remain more rewarding if I end up actually finishing something cool, so we’ll see. (Ha ha just kidding – when someone says they’ll “see” whether they’re going to Accomplish a Thing, they’re very reliably not going to Accomplish the Thing. But I’m learning new stuff anyway, so!)

• I realised chalkboard paint exists and have now painted most of the suitable surfaces in my apartment with it. This is so worth it. I can now have text all around me and effortlessly keep all sorts of proto-insights in sight, so I spend more time processing ideas just because they happen to grab my attention more often – I also used to have a problem with forgetting many potentially interesting thoughts because I couldn’t be arsed to keep a bunch of mostly low-quality jottings organised on paper, but now I can just write everything down with a low threshold and erase it later without cluttering my desk or text files.

• My moods have been super positive for a few months now, which is v cool after a long time of mild-to-moderate anhedonia following a burnout/depression thing I had a couple of years ago. Things that may be going on here:
– Probiotics, which I started taking after noticing there’s a surge of recent articles about their potential mood benefits and noticing that supplements with a few lactobacillus strains are dirt cheap. Temporally this coincides extremely well with my moods starting to improve and in any case, even if the research on psychobiotics is still at an early stage, this is a great solution as far as placebos go: sufficiently convincing, cheap, completely safe AFAIK.
– Summer and sunlight – even when my moods are fine, I seem to be slightly less effective during the darker months of the year, so it’s pretty likely that this is a factor too. I have considered eventually moving someplace else, but most of the countries I have always wanted to move to (Iceland! Northern Norway!) are very Finland-like in this respect, so idk. To do: start wanting to move to places closer to the equator.
– Recovery/regression to the mean & this being something like my normal hedonic baseline. Mayyyybe? When I was depressed, I recall taking it as a given that my baseline mood was & had always been somewhat lower than average, but now I’m not so sure about it – this whole “feeling generally cheerful, getting stuff done as long as it’s interesting, finding new things to be fascinated about all the time” thing actually feels very very familiar and is also consistent with lots of my old writings, so yeah, this could just be how it used to be. It’s not like mood-related memories are going to be totally accurate when you’re depressed.

Definitely also related: reading lots of Douglas Hofstadter. I’m being at least 38% serious here. I mean I’m pretty certain I wouldn’t be able to enjoy his writing as much as I do now if I were still intensely anhedonic, but then again I pretty consistently feel slightly better on the days I start out reading one of his books, so what do I know. The super compassionate, curious and sincere mindset he has seems to be very effective at reminding me of All The Good In The World – I’m almost sure this guy is actually from some secret cotton-candy coated parallel universe where science and philosophy actually are full of romance and beauty instead of excessive cynicism and sketchy frustrating signalling competitions, debates don’t have people resenting and wanting to hurt each other as a default component, and everything just generally is at least 30% more motivating, wonderful, pure and good, and yes I am having a huge crush on this person.

Notes from Toward a Science of Consciousness 2015

TSC, the legendary interdisciplinary conference on consciousness studies, took place in Helsinki this year. Probably because of how it overlaps with Every Area Of Science And Philosophy Ever, this field has been one of my main interests for a while now (of course, no one seems to have a confident opinion on whether the philosophical work has accomplished anything other than a hopeless mess of conceptual confusion so far, but at least everyone is having fun).

Anyway [fangirling intensifies] some of the people present this year were the actual people who have actually written all the actual books which originally inspired me to start studying neurostuff instead of marine biology a couple of years ago, so a chance to meet them before they retire or die or something served as a convenient excuse to participate, even though I had absolutely nothing to contribute yet. But I learned so much. And I got a ballpoint pen. Who doesn’t like ballpoint pens.

Day I

• The opening plenary featuring three speakers focused on social cognition. Riitta Hari, an influential Finnish neuroscientist, started out by urging researchers to consider a more interactive approach to brain imaging. Human action is interaction almost by default, and most of it includes complex cognitive loops where we constantly model, predict, and react to the thoughts and actions of the people around us – this means that some of the stuff that happens in the brain, especially in social interaction, is easy to miss if you only look at one person doing the tasks, and it makes sense in a lot of contexts to image human brains using dyads rather than individuals as the basic unit. This will allow researchers to see how the activity in various brain regions becomes synchronised between subjects, as often happens, and offer insights that the traditional one-person approach might not be able to provide. I have no objections to this, apart from the fact that a dual fMRI looks like the most awkward setting ever. Maybe you can use it to find an awkwardness center.
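As a sketch of the kind of thing the dyadic approach might quantify: the simplest conceivable measure of inter-subject synchrony is just the correlation between two subjects’ signal time courses. The signals below are synthetic sine waves, not real imaging data, and actual dyadic analyses are of course far more involved:

```python
# Minimal "inter-brain synchrony" sketch: Pearson correlation
# between two subjects' (synthetic) regional activity signals.

import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# Subject A's made-up activity over 100 timepoints...
signal_a = [math.sin(t / 5.0) for t in range(100)]
# ...and subject B tracking A with a small lag, as might happen
# when B is continuously reacting to A's behaviour.
signal_b = [math.sin((t - 2) / 5.0) for t in range(100)]

print(pearson(signal_a, signal_b))  # high, but below the 1.0 of perfect sync
```

A lagged copy of the same signal still correlates strongly, which is roughly what “synchronised between subjects” means in this toy version; real studies would also handle noise, lags, and multiple regions properly.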

• The next talk by Lara Maister was an overview of the research surrounding the rubber hand illusion (in which the subject experiences a rubber hand in front of them as their own hand when the actual hand is hidden from view and both hands are stroked synchronously with a brush) and enfacement (which is basically the same thing, but with faces instead of hands).
Apparently, this type of identity manipulation can have an influence not only on the bodily but also on the conceptual sense of self: even after the experiment has ended, people tend to judge themselves to be more similar to a stranger whose face they have “borrowed” in this way. White subjects also show less negative implicit racial stereotypes after they have experienced a dark rubber hand as their own for a while – IIRC implicit bias as a measure isn’t very reliably connected to the behaviour people engage in, so I’m not sure if the direct implications are very significant, but the finding is still pretty cool and probably says, uh, something about something.

I talked about the illusion with a friend at the conference later on, and he wondered whether it works with objects that are only vaguely hand-shaped. I’m still incredibly intrigued by this. I mean if the experiment is done skillfully enough, and the subjects are people susceptible enough to illusions like this, would they feel less willing to eat a cheese burrito afterwards because they’re identifying with a cheese burrito and don’t want to eat this thing that to their System 1 feels almost like their own hand, wrapped in plastic, with a tag that says Cheese Burrito? (There is no deep philosophical point here or anything. I just. I don’t know. Humans are amazing. Burritos are amazing. We could combine them, we could become both. #transcendence)

• The final talk of the opening plenary was by Dan Zahavi, about the properties of empathy & shared affective states. A lot of translations of German nouns describing very specific subjective states! I’m a fan. Anyway, this is something I’ve been thinking about a lot. In common usage at least, empathy as a term seems to refer to a number of things that aren’t necessarily all that connected to each other on the information processing level, and I’d like to have a better understanding of how they are related both conceptually and cognitively. Zahavi challenged the view that empathy is about simulating others, as it quite clearly has a strong basis in perception; nor is it necessarily about similarity, or sharing a “same” mental state in any relevant sense. This talk gave me a lot to think about and probably changed my mind on some things, but I think I just need to read more empirical literature on the subject.

• Skimming through the abstract book, I felt a bit overwhelmed by the huge amount of potentially interesting talks and symposiums overlapping with each other, but noticed that there was going to be a lecture on time travel the following day, so I figured I don’t have to worry about missing out on anything. To stick with the theme of social cognition, I went to a symposium on modelling the self through others, which was THE BEST THING.

By memetic osmosis, I have recently been influenced a lot by various post-rationalist ideas about the importance of this pretty traditional type of social closeness and belonging, something I have generally neglected (like any self-respecting nerd, of course) but would like to know more about because I’m very curious about things (like any self-respecting nerd, of course). One of the recurring themes there is that synchronised movement of the kind often seen in rituals effectively facilitates pro-social affective states. The talks in this symposium focused on exactly this: synchrony, imitation, coordination, and their social causes and effects.

Lately, I’ve been noticing that I immensely appreciate looking at and in some contexts also participating in complex coordination. It could just be some superficial aesthetic thing, but until now I never made the connection to what it means for people socially to be able to coordinate in this way: apparently groups of humans are judged as being socially closer when they’re engaged in behaviour that requires skillful coordination than when they’re merely doing skillful synchronised movement. This social-closeness aspect, if it indeed holds up to scrutiny, might be what I’ve been picking up on. These lectures made a lot of things click, I guess, but I need to think about it. (Maybe everything just feels more significant than it actually is because I’m learning about it in an Exciting Conference Environment. Oh man, I’m a sucker for affect biases.)

Also, I wonder how this relates to how humans are used to living in the context of social status, or less hierarchically just different social roles. In synchrony, everyone does the same thing at the same time, which I guess is as close to role-and-status egalitarianism as you can get, and will probably temporarily reduce the perception of status and role differences. Usually, some vague leader/follower roles do emerge (even in synced finger tapping, some people tend to more readily adjust their own tapping to that of the other person, while some people focus mostly on their own performance and just expect the other participant to follow them), but there isn’t a lot of room for variation there.
In coordination, on the other hand, totally new levels of complexity arise from the way each individual is differently equipped to take care of different parts of the choreography at different times. This is more useful in exactly the type of situations that people usually have needed social groups to handle, and it requires and could plausibly strengthen deeply held pro-social attitudes such as trust, as you’re relying on the competence of the other participants even in areas you are not competent in yourself. Because of the usefulness of being able to coordinate versus being able to synchronise, and because of how it’s plausibly more familiar & comfortable for humans to exist in a social space where everyone has something of a defined role or status, it’s suddenly obvious how beautiful being able to coordinate is and why, and in conclusion I should probably start a jazz band?
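The finger-tapping asymmetry mentioned above can be captured in a toy model: a “follower” nudges each tap toward the partner’s timing, while a “leader” just keeps their own pace. The adjustment parameters here are purely illustrative, not taken from any actual study:

```python
# Toy model of leader/follower dynamics in synchronised tapping.
# Each tapper produces taps one period apart; a follower shifts
# some fraction of the current timing gap toward the other person.

def simulate(follower_adjust, leader_adjust, steps=200):
    leader_t, follower_t = 0.0, 0.25  # initial tap-timing offset
    period = 1.0
    for _ in range(steps):
        gap = follower_t - leader_t
        # each tapper closes some fraction of the gap toward the other
        leader_next = leader_t + period + leader_adjust * gap
        follower_next = follower_t + period - follower_adjust * gap
        leader_t, follower_t = leader_next, follower_next
    return abs(follower_t - leader_t)  # residual asynchrony

# One person adapting is enough to reach tight synchrony...
print(simulate(follower_adjust=0.3, leader_adjust=0.0))
# ...while two pure "leaders" never close the initial gap.
print(simulate(follower_adjust=0.0, leader_adjust=0.0))
```

Even one-sided adaptation closes the gap geometrically (each step multiplies it by 0.7 here), which fits the observation that synchrony doesn’t require both participants to be adjusters, just at least one.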

• Tuesday's keynote speaker, Jaakko Hintikka, spoke of the levels of explanation in neuroscience. He based his talk on Marr's influential framework of three levels of analysis needed to understand computational processes: the computational (what the system does, i.e. what information transformation it seeks to carry out), the algorithmic (how the system does what it does in terms of algorithms, or the manipulation of representations), and the implementational (how the system is physically realized).
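Marr's levels are easier to see with a toy example outside neuroscience. Here's a quick sketch of my own (not anything Hintikka presented): sorting as the computational-level task, two interchangeable algorithms at the algorithmic level, and the substrate invisible to both.

```python
# 1. Computational level: WHAT is being computed - a specification only.
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# 2. Algorithmic level: HOW, in terms of representations and operations.
#    Two different algorithms satisfy the same computational-level spec.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# 3. Implementational level: the physical substrate (here, the Python VM on
#    silicon; in the brain, neurons and synapses) - not visible in the code.

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
assert is_sorted(merge_sort(data))
```

The point of the toy: knowing the spec doesn't tell you which algorithm is used, and knowing the algorithm doesn't tell you what it runs on, which is roughly why the reducibility of the levels to each other is a live question.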
I think Marr’s model is rather insightful, and I was looking forward to hearing Hintikka elaborate on how it can or can’t be utilized in neuroscience and whether or in which sense the three levels could be reduced to each other – unfortunately, he has this very slow but hard-to-follow, mumbling-rambling style of speaking you probably acquire after spending a significant portion of your life as the top philosopher in your country, and I was pretty exhausted at that point, so I wasn’t getting much out of it TBH. I soon decided to just relax listening to his talk, with cozy ingroup-y concepts and ideas dropping here and there, enjoy the social atmosphere with lots of similarly half-excited-half-bored-to-death academics around me, and reflect on everything I had learned, which was a good, calming conclusion to a great day. (I promise I’m going to read some of his writing on the subject later on, because I am sure his thinking is immensely valuable, it’s just the verbal output that really doesn’t work for me.)

Day II

• Wednesday’s opening plenary was about dreams & virtual reality. I missed the first speaker, but arrived just in time to hear Antti Revonsuo describe his ideas about the possible evolutionary advantages of dreaming. A lot of dreams are threatening in nature, and could basically serve the purpose of preparing us to react to various threats, both social and physical, from shameful situations to aggressive encounters. To account for the dreams that instead are about harmless events, Revonsuo has recently revised his hypo to also include the rehearsal of social behaviour and reinforcement of social bonds, since most dreams also feature a simulated social reality with other people you interact with.
Supporting this simulation theory, we know that dreams form transient but perceptually coherent realities, inside which dream events have causal powers similar to the sensorimotor activation we exhibit in response to input when awake. Dreams are also quite effectively isolated from stimuli from the outside world, so they’re probably not just a half-assed attempt to stay aware of what happens around the dreamer’s body. (Under this bit in my notebook, I have written “Eyes taped open :D”, which I take to mean that taping the test subjects’ eyes open before they went to sleep is how the isolation question was originally studied, and that Past Me From Wednesday Morning has for some reason found this hilarious instead of horrifying. All right. Okay.)

After Revonsuo, Jennifer Windt provided some slightly more abstract ideas about what dreams are made of. In addition to world-simulation, dreams are also self-simulation in an important sense – a phenomenal self is always present, but its attributes may change radically. We may perceive ourselves to exist as non-human subjects or even completely lack all bodily experience: however, the sense of location is something that seems to persist. Likewise, reports never seem to describe anything like multiple selves. A single self in a spatial location and also oriented in a certain direction could be thought of as a minimal experience of a self.
Windt went on to how this minimal selfhood might show up in virtual realities. We may eventually be able to volitionally control most of the perceptual input we receive, changing everything from the social world around us to the psychophysiological properties of our bodily self, and it’s intriguing to think about what the lowest common denominator in the experience of a self will turn out to be like.

• After some additional presentations related to dreaming, I had planned to go listen to a few talks on Tononi’s integrated information theory, but the room was totally crowded and I found it pretty hard to focus on anything, so after the first one I ended up in a nearby room with talks about phenomenal concepts instead. This was pretty great and the name sounds sort of vague and generic and would hence have been easy to miss, so I’m glad I chanced upon it like this.

So a basic counterargument to physicalist theories of consciousness, known as the knowledge argument (originally by Peter Jackson Jackson Pollock Frank Jackson), goes as follows: Mary, a brilliant neuroscientist, spends all her life in a black-and-white room without ever experiencing colour qualia. She teaches herself absolutely everything there is to know about colours and colour vision on the physical level; still, when she finally exits the room for the first time in her life and sees, say, a red apple, we intuitively know that she will be astonished or delighted or whatever because she learns something new – which shouldn’t happen if physicalism is correct and there is nothing more to conscious experience than physics (because Mary already knows all about the physics related to colour experience: if she learns something by experience, it has to be extraphysical). I’ve never been very worried about this argument, because it just seems ridiculously overconfident to rely on your intuitions about anything that’s happening inside Mary’s head when she is, well, practically a superintelligence capable of processing all of physics in the blink of an eye (oops), but the phenomenal concept strategy (PCS) is an attempt to approach the issue from a different angle by suggesting that the confusion around the hard problem is rooted in us trying to use dualistic concepts even though reality is physicalistically monistic.

A particularly interesting thing I learned related to PCS & the knowledge argument is the concept of epistemic emotions, presented by Benedicte Veillet. These are the emotions people feel towards knowledge and knowing, e.g. curiosity, surprise, understanding, and astonishment; however, just like other emotions, they are not necessarily connected to anything especially relevant in the external world. Mary could in principle feel astonished when she experiences the redness of an apple even when she actually learns nothing new. Moreover, when we imagine how astonished she must feel in the situation, or how genuinely she feels that she has learned something new or significant, what we are imagining and taking as conclusive evidence might actually just be a projection of an emotion we feel we would have in Mary’s shoes.
Veillet also quoted a physics student who had proven a theorem but didn’t feel it was true even though it couldn’t possibly be false (that’s pretty much what it means to prove a theorem). He was bothered by this for a long time, because he just never reached an intuitive feeling of the theorem being true, until he or someone else proved it in a different way, after which it suddenly felt just fine.
Veillet wasn’t sure yet what to make of it, and I think there’s not a lot of research on epistemic emotions in the first place, but she thought – and I agree – that these kinds of misleading feelings of certainty or uncertainty are something to keep in mind when considering e.g. the possibility of p-zombies. Hmm. Anyway, the idea of epistemic emotions is a valuable addition to my concept space and I’m going to start paying attention to them and how they relate to what I know on a System 2 level. (For example, I noticed that I really need a word for feeling that something is obvious while recognising that it might not be true. Just saying that something “feels obvious” strongly implicates that you also think it is obviously true, which maybe should be rarer.)

• The day ended in a poster session, which as it turns out is not the optimal environment for me to learn anything. So many people with food and noise everywhere, I wonder if this person will try to talk to me if I check out their poster, maybe I can stand here and just discreetly look at it from the corner of my eye oh no now they noticed me well maybe I can escape if I just pretend this other poster next to it is super interesting oh look the other poster appears to be about a crystal lightspirit brain-in-a-vat frog this is very embarrassing etc.

Day III

• Alyssa Ney had a great talk about what it means for the metaphysics of consciousness that causation on a microscopic level may not exist – this would obviously put a dent in physicalism, which assumes the causal completeness of physics. Ney presented this idea of nomic sufficiency, which simply states that for every physical fact there is some other physical fact that, within certain laws, logically entails it. This isn’t a causal principle; it doesn’t require e.g. that there’s a cause that’s temporally located before the effect (the time invariance we see on the microscopic level is a major problem for causality). Anyway, I found her recent article on the same subject, so I’m just going to link to it instead of summarising further. Thank you Internet PDF God for this good Internet PDF I am forever in debt.

• David Papineau continued on the question of whether we can have a physicalist theory without causal closure. He offered some interesting alternatives for tackling the issue, including the possibility that macroscopic causation suffices to explain the causal phenomena that physicalism seems to require, a bit like how the increase in entropy is an emergent result of thermodynamics and seems to constitute an arrow of time via probability distributions, quite independently of whether there is anything “temporal” on the level of fundamental dynamics at all. He also raised the question of whether the seemingly problematic overdetermination in mental causation (in which things systematically have both a physical and a mental cause) should be such a problem in the first place, which is an intriguing question. Ehh causation is so weird, why do we have to have and/or not have causation.

• Finally: the superstar of consciousness research, David Chalmers, on the topic of… ok, wavefunction collapse and its possible relationship to consciousness. The nature of consciousness and the nature of the reality behind quantum mechanics are both pretty weird, so why not do what everyone else seems to be doing and, uh, parsimoniously assume that they’re both weird because they’re ontologically connected?
Chalmers didn’t go into the mathematics behind his hypo at all, which is often a reliable signal that a person talking about QM has no idea what they’re talking about (case in point: me), but he worked with a physicist when developing this thing, so I’m assuming it’s basically math-compatible and his talk was just a popularised account of it. I should stress that this isn’t something he absolutely thinks is true or anything, and he ended up finding a lot of flaws in it himself while developing it – he just wanted to look into it and figured it should maybe be taken more seriously than it currently is, so even though this sounds pretty wild, he’s a sane guy with lots of valuable insights and I’m totally not dissing him.

So, Chalmers is going by a face-value interpretation of QM, in which the wavefunction actually collapses (every time I notice someone agreeing with a non-Everettian view, I feel this huge relief for a second, because I remember it’s entirely possible that MWI isn’t the correct interpretation even though all the people in my followsphere seem to take it as a given. Ughh I hate the possibility of MWI being correct, and I don’t hate a lot of things. Anyway.) A measurement is what causes a system to collapse into its determined state. Re the definition of “measurement”, Chalmers postulates the existence of operators that can never be superposed: interacting with systems in superposition, these measuring operators (or medusa operators, or m-operators) will naturally force them to the corresponding state, collapsing the wavefunction. The m-properties of these systems could in principle be anything, but it would be pretty rad if it were all about consciousness. This could lead us to a sort of interactionist view which could be compatible with both physicalism and dualism (probably not panpsychism though, omg).

That’s a simplification of a simplification, and I should check whether he has published something on this already, because I guess all of this has the potential to make more sense than what I extracted from a single short lecture on the subject and transmitted onwards here. Even if I’m not convinced by this at all, you’ve got to admire how this guy has been working on consciousness for a couple of decades, developing a lot of incredibly influential stuff, bringing forth insights no one else has really been able to put into words, and gathering so much in terms of academic prestige & followers, and then he just gets up one morning and decides he’s going to troll everyone so hard with this new quantum bonkers approach, just to see how many people will play along. That’s my guess about what happened here, anyway. He seems like the kind of guy to do that, sort of super laid-back. (He complimented a hat my friend was wearing, twice. It’s a pretty cool hat.)

Day IV

• The cognitive exhaustion combined with the lack of sleep and proper nutrition finally catches up, and as a result I’m sort of losing track of time and just generally getting very dizzy. I should try to eat more before tomorrow maybe. Toward a Science of Healthy Levels of Metabolism.

Anyway, the structuralism advocated by James Ladyman & Don Ross in Every Thing Must Go: Metaphysics Naturalized has long been on my neverending list of fascinating things I should get around to learning about, so I was really curious to hear Ladyman speak this morning. Basically, his formulation of structuralism opposes strong physical realism by stating that physics might not be connected to a fundamental reality at all, because there may not be such a thing as a fundamental reality in the first place. (I’m probably not going to be able to articulate his metaphysics in a 100% convincing way here, seeing as I’m going to use a couple of paragraphs while these dudes wrote a whole book arguing for their ideas, but hopefully the summary isn’t going to be totally inaccurate.)

The existence and identity of everything, even the most fundamental-seeming entities such as mathematical concepts, seems to make sense only in terms of relationships, not of any self-subsisting entities themselves: there’s no such thing as the number three, except as the apparent “thing” that is a successor to the apparent “thing” that is the number two, and so on. In the special sciences, there’s a lot of relational stuff that can’t be understood in terms of (fundamental) physics, such as the concept of evolutionary fitness: this means that the structure of many things seen in the special sciences can’t be reduced to physics. Generally I’m pretty averse to views that speculate on whether or not there is a fuuuuuundamental reaaaaaaaaality at aaaaaaallllll, because it seems very, I don’t know, poorly defined as a thing to be worried about in the first place? but if I’m getting what Ladyman says here, I think I like it, and it looks like I will have to buy some books.
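The “relations all the way down” point about numbers can be sketched as a toy program (my own illustration, nothing from Ladyman & Ross’s actual formalism): the objects carry no content of their own; only their position in a successor structure matters.

```python
# A structuralist toy: "numbers" with no intrinsic content.
# Each number is nothing but the thing standing in the successor
# relation to the previous one; the anchor object is arbitrary.
zero = ()

def succ(n):
    """The successor relation: wrap the predecessor."""
    return (n,)

one = succ(zero)
two = succ(one)
three = succ(two)

def position(n):
    """Recover a number purely from its place in the chain of relations."""
    steps = 0
    while n != zero:
        n = n[0]
        steps += 1
    return steps

# "Three" is identified only by being the successor of the successor
# of the successor of the anchor:
assert position(three) == 3
assert three == succ(succ(succ(zero)))
```

Nothing about the empty tuple makes it “zero-ish”; any object would do, because everything number-like here lives in the structure, not in the objects.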

• Today’s keynote: Pat Churchland, who is hands down the most impressive speaker I have heard at this conference. It’s weird how she looks like a tiny person, but has an impossibly strong voice and generally just a huge presence compared to pretty much everyone else in the entire building.
After twenty years or so, Churchland is still a patient if a little bit cranky voice of reason in the community, trying to remind everyone else about how the apparent explanatory gap between the first person perspective and the third just might still be illusory: it’s entirely possible that the hard problem isn’t anything special – we used to think life was super special and something we could never explain in terms of purely physical things and interactions such as cells and mitochondria and ion channels, until one day, surpriiiiiise, we had it all figured out by accident just because we understood all the so-called lesser problems of life sufficiently well, and in the end they turned out to be all there was to understand.

In particular, this talk questioned the reliability of modal intuitions concerning the identities science discovers. In the area of logic concerned with whether claims are true necessarily instead of just contingently, it’s not really well defined what this necessity means, and especially whether it leads to useful conclusions for understanding things like consciousness. Is it necessarily true that light is electromagnetic radiation, or that the way organisms transfer information is through DNA, and why should consciousness being neural processes be any different? We can imagine p-zombies and claim that they’re logically possible, but this might not mean they’re “possible” in any relevant sense of the word – we could also imagine a world where the metaphysical essence of light is maple syrup or whatever, but this doesn’t mean light isn’t electromagnetic radiation.

Most of her talk was more pragmatic in nature, however, and focused on the paths neuroscience could investigate to lead us to a better understanding of conscious processes. The bottom-level claim she was making is actually very minimal and easy to get behind: what neuroscience is doing is the best bet right now for figuring out consciousness, and also extremely useful even if it doesn’t end up solving the hard problem. Even though I’m still not sure if the metaphysical questions at play here are as trivial as Churchland suggests, I think it’s immensely valuable as a sanity check that there’s someone in the community sticking to this boring-but-rational attitude instead of trying to come up with all kinds of increasingly esoteric *cough cough* quantum voodoo *cough* explanations for consciousness, but then again as a neuro person I’m probably biased.

Day V

• Oh no oh no oh no. I almost got through the entire conference without stumbling upon a single speaker who was, well, mildly bonkers, but today was the day I finally did. Censored by the scientific community until now, a whole new doctrine of physics which shows how Einstein and everyone since him has been wrong about p much everything, all presented with annoying COLOURFUL FONTS and lots of CAPS LETTERS. Whyyyy is it always annoying COLOURFUL FONTS and lots of CAPS LETTERS? We will never understand. Probably something to do with quantum mechanics.


• After the exciting presentation above came a couple of short talks about psychiatry: one on an experience sampling app for psychiatrists, which prompts patients to answer short questions about their mood during the course of the day, tracks their movements through GPS, and apparently reads their e-mail, analysing the syntax of everything they write; and another on the definition of mental illnesses and the way they vary a lot depending on the culture you live in and the attitudes the people around you hold about them. Then it was time for the semi-final plenary lecture with Stuart Hameroff & other folks from the University of Arizona researching transcranial ultrasound to treat depression. This was all pretty straightforward and it looks like an interesting area of research, though probably not the Holy Grail of Curing Depression. It’s 2015, I think we should have found a Holy Grail of Curing Depression already, and it’s such bullshit that we have to settle for researching things that are not the Holy Grail of Curing Depression, but it’s not the fault of these guys and they’re still doing some great stuff.

The group had a device with them, and they had offered conference participants a chance to get a 30-second stimulation and describe their experiences afterwards. Now that the results were in, something like 40% of the 80 or so participants had reported feeling a positive effect on their mood, while none of the 15 people who were without warning given a sham stimulation reported feeling anything. (The group presented actual published studies too, I just thought it was clever of them to demonstrate it like this.) They also asked volunteers to receive and describe the treatment live during the discussion, which was interesting but slightly unfortunate, because Sue Blackmore, who happened to be one of them, ended up feeling really awful, gave a vivid description of this awfulness, and frightened everyone a little bit. But the treatment is safe in terms of long-term effects, and she was fine soon enough to participate in the closing plenary. (She is also not the guy in the picture above. The guy in the picture is Chalmers.)

• Time for the final panel discussion with six participants: Hameroff, Churchland, Blackmore, Chalmers, Revonsuo, and Ladyman (who is missing from the picture above, and apparently not amused by Churchland’s metaphysical views in the one below).

After more than twenty years of Towards a Science of Consciousness, next year’s conference in Arizona will be adopting a more ambitious tone by starting to call itself simply the Science of Consciousness. This was the topic of the discussion: are we actually ready for a science of consciousness yet?
All of the speakers quickly described their views about where the field is currently heading, and whether the questions we are currently asking are defined well enough to lead us anywhere. It was a nice discussion to follow, though not very conclusive – more like a closing ritual for the conference than anything actually substantial. Though I could have been so sad about the conference ending at that point that I maybe just forgot to listen so who knows.

This was by far the best week I’ve had in a long time. I can’t remember the last time I’ve felt so genuinely motivated to learn everything in the whole entire world, and I met some cool new people and got to hang out with acquaintances I don’t usually see very often. I actually had to leave tons of interesting talks without mention in this post because there were so many of them, and I’m going to be full of weird ideas for the next three months probably.  5/5.

How defensible is pro-social lying?

Dishonesty is one of the most complex, most interesting problems in practical ethics. Having an accurate model of reality is the basis for sane decisions, and deliberately causing other people to believe falsehoods makes their models of reality less accurate, impairing their ability to act rationally – often in unpredictable ways, because in a causal universe, falsehoods have to be connected to other falsehoods in order to make any sense. Lying is a breach of the listener’s autonomy: it implies that you don’t see them as entitled to the same information you have, which practically means that their ability to make rational judgments is not very relevant to you, at least in comparison to whatever it is you’re trying to achieve by lying. As a component of social relations, being able to rely on other people sincerely cooperating with you is immensely important, and notable violations of trust tend to damage relationships in ways that are very difficult to heal.
Still, most of our conversations tend to be peppered with all kinds of almost meaningless, apparently useful offhand lies. While most of them don’t seem to result in significant negative consequences, they do lessen the extent to which the things people say can be expected to correspond to the things they think are true. (Up until a few years ago, I used to think of this as pretty much acceptable, and would frequently resort to butler lies out of laziness or exaggerate events to make my stories sound more interesting. In retrospect, going by the values I hold now (at the ripe age of 22!), almost all lies of this kind seem like a moral net negative, and I wish I had understood the value of integrity as a virtue a lot earlier.)

In most situations, it’s pretty easy to choose to just not be a dick who lies about actually meaningful information, but so-called white lies, supposedly pro-social, are an aspect of communication I often find very stressful. In a culture where various degrees of dishonesty are pervasive enough to be considered part of basic politeness, not participating in lying of any kind is extremely difficult. People are expected to resort to white lies – such as telling a friend they’re busy when they simply don’t feel like hanging out right now – pretty much automatically, so that beginning to explain one’s actual reasons for anything is bound to sound vaguely suspicious. If a person says they’re unfortunately just too busy to meet today when they actually mean they just don’t feel like hanging out right now, what could the person who says they just don’t feel like hanging out right now actually mean? The possibilities are endless and exciting! (Secretly disliking you, trying to avoid all interaction and hoping you won’t ever ask again can certainly not be ruled out.)
Of course, saying you’re busy even if you aren’t isn’t usually even meant to distribute false information with certainty – it’s common knowledge that the phrase is sometimes just a polite way of saying you don’t feel like hanging out, and most people will in most situations effortlessly understand this possibility. This dynamic makes it hard to approach discussions with explicit honesty precisely because people are so used to the process of taking hints: when all messages are softened just a little bit around the edges to avoid the potential maximum amount of mildly hurt feelings, and you show up trying to communicate without embracing these almost universally accepted rules – with explicit honesty – the result is usually a specific kind of uneasiness best described as a weird superposition of maybe-hurt feelings everywhere forever.

Still, this aversion to distributing literally-false information is a very reasonable heuristic. The reason many more or less nerdy people deviate from the social rules described above – the rules which dictate you need to soften everything you say with courteous dishonesty, even if you end up complicating all honest interactions of everyone else in the community – is that there is an alternative set of rules to these interactions, which intuitively makes a lot more sense: saying approximately the things you actually mean. In theory, when things are expressed adhering to this second set of rules, everyone should be able to understand them almost in the same way, which to me sounds like a pretty great basis for information sharing.

I should stress that I’m not opposed to most forms of ambiguousness – I like some social guessing games a lot and I think they contribute a lot to the interestingness of human interactions, and I personally don’t have a lot of trouble reading nonverbal signals from people I know and interpreting them accurately – it’s just that I don’t want to say things if their more literal interpretation (which people also often need to use) is false, because it’s also very reasonable to interpret things literally, and I know that many people will probably do so and would often prefer to do so myself. Ambiguousness should be done in a way that leaves room for the use of the explicitly honest rule set too, and doesn’t disrupt it as described above, because there are times when honesty is truly needed and it isn’t always obvious whether this is in fact the case.
Even in a world where the rules of social information sharing have developed into the current complicated abomination which takes ages to learn in its entirety, and will backfire every once in a while even after you do, speaking the truth still seems like a valuable Schelling point: if I just explain things explicitly, and the listener is charitable, everything will probably go better than if I resort to saying something basically unrelated when I don’t have a clear idea of what would be the best thing to say to convey what I mean. But this method almost always requires a lengthy uninteresting explanation and more effort than just knowing what you’re supposed to say.

This is related to the reason we see this abundance of non-malicious lying. Effectively conveying an approximation of the truth and its relevant aspects is probably the most important function of white lies: most truths have to be understood in relation to a body of other truths to make sense, and sometimes the situation is simply too complex to be explained in sufficient detail – the result of a short explanation in this case is only a half-truth, which could actually cause more damage than a considerate lie does.
For example: A likes to spend time with other humans a lot less than B, because A places a lot more value on the time they spend alone. B easily gets bored in isolation, is always up to social interaction unless it’s with a person they don’t like very much or is currently mad at, and very reasonably assumes on an intuitive level that everyone else is pretty much like them in this respect, because the people B usually interacts with really are as extroverted as B is. (B may still be failing at rationality in this case, but certainly not outrageously, so this is something everyone does all the time.)
It’s clearly not enough for A to tell B the truth I don’t feel like hanging out with you right now, even if it’s completely honest and doesn’t in itself mean A dislikes B: to tell the entire relevant truth and leave B with an accurate picture of their relationship, A would also have to figure out how their fairly fundamental, complicated social preferences differ from each other, and explain all of this to B. In a situation where the inferential distance between the participants is this large, A saying I’m busy (even if they aren’t, and it’s a lie on the speaker’s end) is actually closer to the truth in the relevant respects, as interpreted by B, than I don’t feel like hanging out: B will accurately feel that A values spending time together, and ask again some other time, and all is basically well (until A is on their deathbed and has to explain why they never accomplished anything in their life other than 350k Reddit karma points, despite supposedly being very busy with important things all the time. But that’s only going to be awkward for a little while).

Anyway, this seems like the most important point where pro-social/white lies – the not-literal or only barely dishonest social niceties which emerged to soften interactions and make them cognitively less demanding – differ from malicious lying, where you purposely deceive the listener & have them believe more false things about the world than would be useful for them (e.g. to get something you want from them). One of the reasons I’m still pretty averse to white lies is that there is no easily detectable distinction in kind between them and these more serious deceptions, as all dishonesty will necessarily deprive other people of information they need to be able to interact rationally with the rest of reality, which again is a dick move. More importantly, accepting one kind of outright lying as an OK thing to do will make the rationalization of actually harmful, but temptingly self-serving lies a gazillion times easier. Approximately no one will ever assess a lie they strongly feel like telling, deem it malicious or wrong, and proceed to just not tell it. It’s easy to convince yourself that you’re only lying to be nice and doing the right thing, as avoiding the effort of explaining complex things or of facing a conflict rooted in something you did will always, mysteriously, feel a lot more appealing than being honest.

So, while I understand that some white lies seem, on the surface level, to make everyone’s life mostly easier, I’m reluctant to accept that this is actually the case once you look at all the externalities. On the other hand, I’ve noticed that I often get exhausted around people who are very committed to this hey-let’s-simplify-interactions-by-just-telling-everyone-everything-explicitly thing. It could just be that choosing to adopt this mindset tends to correlate with not being intuitively good at reading others in social situations, which means that many of the people doing it are going to be bad at assessing which bits of information are interesting to the other person in the interaction, and which ones will just needlessly add to the cognitive load. I don’t actually have a solution to any of this, but would sort of like to find one.

Framing purpose as a mood

Small doses of existential dread can be valuable: occasionally questioning the value of your deeds and goals is essential for coming up with things that are endorsable and good, and wondering about your place in the universe is apparently an endless source of mediocre newbie poetry and other precious aesthetic endeavours of humanity.
Excessive amounts of this type of self-doubt and focus on one’s global & cosmic insignificance etc etc etc seem, however, to be both common among and absolutely destructive to a certain set of people. This is frequently a symptom of depression; from the inside, though, it looks like the genuine cause of said depression, which easily makes one worry about the wrong issue (incidentally also one that’s impossible to solve): here I am, having a reasonable and inevitable emotional response to metaphysical truths I should but can’t ever change, and this response is destroying my ability to find motivation or joy in anything, and there is nothing I can do about it.

But the emotional response is neither reasonable nor inevitable. Most non-depressed people probably hold the same set of beliefs about the world and their own purposelessness. In a world where no deity has, apparently, devised a great cosmic plan in which everyone is crucially important and no one is replaceable, it’s common sense and common knowledge that there exists a large, cosmic scale on which none of us means a thing.
This means that unlike the feeling of fear upon encountering a snake, the feeling that everything you do is pretty insignificant actually holds no profound revelations for your executive functions to act upon. You already know it on all of the useful levels: it’s possible and sufficient to just know it, accept it, and move on without viscerally sensing any of it, like most non-depressed people automatically do. The only way the meaninglessness of everything can hurt anyone or do any damage at all is by making people feel bad about it.
I’m absolutely not saying it’s trivial to stop feeling the existential blues (“have you considered cheering up and not being depressed?”) – I’m trying to say that when paralyzed and profoundly unhappified by a sense of all-consuming lack of purpose, it’s wrong (but, unfortunately, extremely tempting) to conclude that the only solution to the problem would be, indeed impossibly, fixing the lack of purpose itself by becoming practically God. Fixing your mood may not be easy or simple either, but unlike the alternative, it’s worth focusing on because it’s possible – and also totally enough.

At some points in my life, I have felt pretty bad about this stuff: whether I can achieve anything actually important, whether I’m wasting potential or just lacking it in the first place, why strive to find things that matter when everything I end up doing could probably be done equally well by someone else, why even bother with anything at all when I’m like half the age of Elon Musk already and apparently not about to launch a single Mars-conquering startup in the foreseeable future.

At other points in my life, like now, I have felt totally OK with all that or generally just not given it a lot of thought because all of it is both trivial and pretty irrelevant, and simply been satisfied with and motivated to do my best to improve myself and subsequently the world in whatever ways I can.

As far as I know, this shift in my mood has never been caused by external factors, such as me occasionally becoming an all-powerful entity capable of creating and ruling heaps of Guaranteed Actually Important Meaningtronium all around the universe. My beliefs about factual matters – how meaningless everything I can hope to accomplish will be on the timescale of human history, for example – have never dramatically changed. It’s just that sometimes I feel really really bad about it, and sometimes it doesn’t bother me at all, on the emotional level.
The feeling of meaninglessness doesn’t really correspond to or inform me about anything relevant in the world: it is a response arbitrarily created by my occasionally depression-prone human brain, which in turn is also influenced by the arbitrary things that tend to produce a sense of meaningfulness in human brains, such as social belonging & status and various fundamentally silly accomplishments that lead to positive feedback. You could have entire quasars full of Guaranteed Actually Important Meaningtronium to personally protect and push around the universe with a cosmic bulldozer and still feel like crap as long as you didn’t have a rewarding social environment, sufficient amounts of sunlight and nutrients, maybe a crush who likes your super cute cosmic bulldozer selfie on Facebook, or just generally the right combination of genetics and magic to support functional neurophysiological conditions. Meaning in the sense of one’s objective impact on the surrounding world can be measured and investigated in far mode, but the feeling of meaningfulness and purpose is definitely a near-mode thing.

It’s important to note that seeking to abandon the feeling of meaninglessness, even though it basically matches an important aspect of reality, isn’t an error in rational reasoning or in forming appropriate mental states. It’s true that your beliefs need to match reality as closely as possible if you want to make a lot of sense of anything. But even though beliefs (or aliefs, their intuitive & implicit counterpart) certainly form a significant component of any given mood or emotion, moods and emotions are not just compressed beliefs with unambiguous truth values: they are a weird mix of beliefs and motivations compressed into subjective experience chunks to be acted upon by your wildly inaccurate, overall handwavey cognitive system 1 – which is rarely a reliable source of information about what you should fundamentally care about, seeing as it mostly wants to invest everything in avoiding threats that no longer exist while impressing a lot of cute people, and apparently gets frustrated and dysfunctional when it can’t.
This means that the intensity of your moods and emotions can, without sacrificing intellectual honesty, be tuned to match only those aspects of your beliefs that can actually help with the things you need to do (even if that’s currently just surviving as a functional human being). The appropriate amount of sadness about believing that everything you do is meaningless on a large enough scale, and that there are always going to be people more accomplished than you, is usually pretty close to 0.

In conclusion, the correct response to existential depression is not helplessly trying to find a way around fundamental features of existence: the correct response is alleviating the underlying depression in ways that probably have nothing to do with finding a deep and well-grounded purpose on the cosmic or even global scale. It may feel like a different, overwhelming and hopeless kind of depression because it seems to have such convincing reasons, and the reasons are sort of based on a true story even when assessed rationally, but it will almost certainly respond to the same things all other forms of depression respond to. No part of solving this problem requires anyone to do anything impossible, such as changing the underlying philosophical realities of everything. Changing one’s mood is a complete solution, because the sense of meaning will follow.