Re: “We just don’t know enough about ecology to reliably prevent wild animal suffering without causing more damage in the process”

Say there’s a horrible moral disaster going on at the moment that you (1) know about, (2) know something could at least plausibly be done about, (3) know we don’t currently have enough information to safely do much about, but (4) know how to collect at least some relevant and potentially useful information about. This is not a very difficult problem: if you know what kind of information would help alleviate the issue, the first step is to go and collect that information, not to decide whether or not to help based only on your existing knowledge.

A simple miniature version of the same situation is knowing someone vulnerable is lost in the freezing cold forest area nearby, knowing you could (with your better navigation equipment and warm clothing) probably track and save them if you were there, knowing you wouldn’t be able to find your way to the forest from where you’re currently standing, but also knowing you have a smartphone you could probably use to find your way there. Other considerations aside, obviously you should consult the smartphone or otherwise seek the necessary information to help out the person in trouble: you may not immediately know the best map application or where your winter boots currently are, but there are many ways to increase your relevant knowledge base here, and thinking about it for a while instead of dismissing the issue is probably going to help. If we expand the analogy to include the rest of civilization as well, there are also the helpful official people with dogs and helicopters: maybe you could encourage them to do the job if you think it’s not where your comparative advantage lies. What you probably wouldn’t do is shrug and accept that someone will definitely die out there just because there are multiple steps and some uncertainty in the process, and you don’t immediately know how to do the object-level helpful things.

When talking about wild animal suffering, all but the most radical utilitarians and altruists are understandably super cautious about doing anything substantial. Some people simply have strong intuitions against meddling with the natural order (no, not the natural order that directly hurts human societies, just the more natural natural order, the nature, you know), which I think is a weak position for reasons I won’t go into right now because other people have written about it at length before. Some people don’t think things are so bad for wild animals anyway, probably because they feel suffering and satisfaction are somehow hedonistically commensurable and animal lives have some good moments as well (whereas I reject this view of pain and pleasure as opposite, positive and negative aspects of the same stuff – they can be indirectly compared using preferences for different tradeoffs, sure, but experience-wise they’re not simply opposites in valence, but fundamentally different (and the bad is more relevant than the good)).

But the majority of people I interact with seem to basically accept that the lives of wild animals are often really bad, nature isn’t inherently sacred to the extent that we couldn’t help sentient beings out there – it’s just that we don’t have enough information, so our hands are basically tied. Some of these people accept that there might be a point in the distant future where we could maybe do something about the issue, while some people don’t really think about this possibility either, because the task sounds so thoroughly daunting. Both responses ignore the possibility of actually immediately working to increase our understanding of ecosystems so as to build sufficiently informed, actionable plans to alleviate ecological suffering, which is exactly what we should urgently be doing, instead of just accepting our temporary helplessness.


Beware of sneaky malicious agents in moral thought experiments

In moral philosophy, thought experiments are supposed to use and clarify common intuitions to help us distinguish between acceptable and unacceptable states and actions in morally problematic situations. Moral frameworks are currently built mostly by trying to make these intuitions as coherent as possible, as people have various (interpersonally different but, all things considered, fairly convergent) intuitive responses to ethical dilemmas. In addition to these normative intuitions, the end results of a thought experiment also rely on intuitions about what exactly is *going on* in the situation in the first place, and subtle misunderstandings on the descriptive level could easily damage the reliability of our responses to the experiments.

I’m pretty sure that suffering caused by violence or other malicious acts subjectively feels at least somewhat worse than suffering originating from impersonal causes, even when the tissue damage is equivalent: it certainly feels vastly more terrifying, disgusting, and unacceptable from a third-person perspective (to me at least). So, to evoke the sense of absolutely maximal suffering in moral thought experiments, it’s useful to describe the suffering as torture or violence of some kind, as this maxes out the intensity of the suffering current human brains are capable of (and capable of empathizing with, for the purposes of the thought experiment). But there’s a tradeoff here: yeah, we probably make the experiment more emotionally effective, which, all else equal, gives us a better understanding of the relevant moral intuitions – but we also lose some clarity regarding the full implicit consequences in these scenarios in a way that I’m pretty sure will bias our judgment.

This is because torture (or other personal, malicious causes of suffering) doesn’t actually happen in a consequentialist void where the torture is the only consequence and correlate of a choice, and no otherwise alarming or threatening events follow. Most importantly, this has basically never happened in the history of human evolution that has shaped all of our intuitions, which means they may not be well equipped to pretend there is such a void even when it’s required by the experiment. In our descriptive-level intuitive interpretation of the situation, the presence of torture or violence probably implies there is an obviously callous, probably unpredictable agent around who thinks it’s OK to hurt others, an unknown unsafe environment where such agents apparently are born, no one around with enough empathy and power to stop the violence, possibly something that poses a threat to us too, etc. (Though note that this is exactly the kind of idle evo-psych speculation your mother warned you about; I don’t know if anyone knows how intuitions really work.) Yes, often other causes of intense suffering, such as starvation or disease, also imply there’s something wrong on a community-affecting level – but probably not quite as saliently and alarmingly as personal violence does, because violence requires direct, immediate action. Whatever the exact cause is, I think there’s an intuitive emotional reaction against violence that isn’t purely reducible to its direct consequences (the suffering it causes), because it has always been such a different kind of a problem to solve, I guess?

Links & misc #5

[photo: petunia]

• I took the Giving What We Can Pledge a few months ago! I’ll probably write more in the future about the reasoning behind my decision to do it now, but basically people are bad at consistently caring about things for an entire lifetime, so if you have anything at all you care about (that you, with solid justification, hope to care about in the future as well) you should probably just sell your soul to GOOD before EVIL HEDONISTIC INDIFFERENCE has a chance to make an offer, because it will, and at that point refusing it could feel like giving up something cool you really really want. Human brains: not even once.

• Foundational Research Institute lists some basic intuitions that support suffering-based ethics.

• The inevitable evolution of bad science (original article here)

• Scott Aaronson v. Roger Penrose on conscious computers. Fave paragraph: “Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience.  I replied that, for me, the relevant issues here are ‘well below neuroscience’ in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: ‘give me a long enough lever and a place to stand, and I’ll move the earth,’ so too I can declare, ‘give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.’ The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.”

• I gave a presentation on Aaronson’s fantastic essay Why Philosophers Should Care About Computational Complexity for a philosophy class and will keep recommending this article to everyone forever.

• From the Weird Sun blog, a list of men. Also on a more serious note, Descriptive Before Normative.

• Cool project by Pippin Barr (known among other things as the Ancient Greek Punishment guy): It is as if you were playing a videogame (later applied to chess)

• “Existence values are an unusual and somewhat controversial class of economic value, reflecting the benefit people receive from knowing that a particular environmental resource, such as Antarctica, the Grand Canyon, endangered species, or any other organism or thing exists.”

• Based on this interview, Peter Unger sounds like a vaguely frustrating guy, but I’m also tempted to agree with many of his pessimistic views regarding academic philosophy (with plenty of exceptions though, such as many ideas in ethics and philosophy of science maybe?). Ugh, now everything is going to feel slightly meaningless for a few days, yeah thanks a lot vaguely frustrating guy

• WikiHow seems like a rabbit hole you never really get around to exploring because the concept sounds kinda boring and possibly not rabbit-hole-y enough on the surface, but occasionally you’ll bump into an article on how to pretend you have magical ice powers that leaves you in a weird state of low-key wondering what else is out there, though probably still not being sufficiently motivated to find out.

Your actual terminal values are your actual terminal values

One of the coolest things I’ve internalized during the past year is this: you don’t need to have a particular emotional response to the things you deem terminally valuable, and you don’t need to assign any moral relevance to the things you happen to feel, unless you endorse these feelings as part of your moral framework. You get to decide what your terminal, inherently important values are, with no particular obligation to infer them from simple unprocessed gut feelings, emotions, or other stuff outside of your direct conscious analysis and control. Sure, all sorts of affects and intuitions are going to influence your decisions and form the basis of your motivation, but still, to what extent you endorse a given potentially morally relevant feeling and how much it’s weighed if embedded into your explicit values is something you get to decide for yourself. And once you have a somewhat satisfactory set of terminal values, you don’t need to care about anything other than that (as well as the identifiable instrumental goals that connect to it, which of course is a lot to care about, but like).

This dissociation between a genuinely meaningful life and a life full of conventional, individualistic, sometimes entirely superficial but emotionally salient markers of success is to me extremely liberating because of the relative fickleness of personal happiness. The causal pathways that lead to personal wellbeing are only somewhat reliable: unfortunately, it’s easy to overestimate the value of external life changes, or have a brain that just generally is never really satisfied with the stimuli it encounters. This is not to say that the actions you take to affect the rest of the world have guaranteed effects either, of course – but you probably have more empirical information on how they transform the world, and these effects are usually large enough to make it a safer bet despite the uncertainties involved. Compared to altruism, hedonistic psychological egoism, especially when tied to external factors, is a treadmill that leads to distress and powerlessness: you can’t decide to be loved, wealthy, or high status in the same way you can decide to show love to someone else, help out people around the world with vastly lower incomes, or act respectfully towards others. Effort-wise, it’s cheaper to do your best to make sure other people have their basic needs fulfilled than to strive to improve your own state, which probably already is subject to the principle of diminishing returns.

To not have to see myself as a morally special, intensely relevant entity just because this is how the subjective point of view I inhabit automatically feels (and because any subjective entity that results from evolutionary processes has a natural egoistic bias) lets me accept even severe personal emotional setbacks as the inevitable, but absolutely not catastrophic, infinitesimally small moral negatives that they are. After screwing something personal up I may be in pain I’m unable to alleviate, but there are other people in pain as well, some of whose pain I may be able to do something about. This grants me safety, but also control and power over making the world as a whole better: maybe I can take a moderately laborious action that improves my long-term wellbeing by a puny hopeless expected 0.05%; but inherently it is exactly equally important that any other person is spared from unhappiness, so a similar amount of work will usually lead to larger expected effects by an order of magnitude or more if I simply reject my default intuitive human egoism (to the surprisingly large but of course not complete extent that such a rejection is possible). Again, this doesn’t mean you have direct control over how the actions you take will be transduced into qualia experienced by the entities whose lives you seek to improve, but e.g. making statistically sure fewer people suffer for weeks and maybe die of painful preventable diseases is very likely to be a good idea in this respect, and a benefit of a magnitude by which you will probably not be able to easily improve your own life.

At a glance, this seems somewhat like an inversion of Stoicism: instead of accepting that you have no control over external happenings and focusing on cultivating appropriate attitudes towards the world in order to reach an internal sense of peace, you accept you sometimes have very little power over your internal emotional states, and turn your attention to the outside world to at least reduce the distress of other sentient beings. But the incompatibility is only apparent, and I’ve found these attitudes reinforce each other quite neatly, at least if your preferred flavour of Stoicism is modern in that it’s based on empirical, statistical observations about the extent of the effects of your actions. To me, it seems the virtues of Stoic philosophy were originally designed to counter the same futile hedonistic egoism I oppose, not explicitly to advocate resignation to the world as a place full of disease, abuse, and suffering you can’t do anything to alleviate (impartial empirical altruism just sort of hadn’t been invented yet). Accepting personal emotional setbacks as something not super morally relevant reinforces a Stoic acceptance of things outside of your control, and lets you focus on achieving vastly more important things without disproportionate aversion or fear of personal discomfort.

Personal wellbeing is probably going to be a part of the stuff you value, and it might not be very universalizable to assign no intrinsic value at all to one’s own happiness, but in the end, you still get to decide whether your subjective experience is restricted to a solely instrumental role, a parameter you only need to improve insofar as it helps you advance other features of the world. From this, it follows that even if you’ll never feel especially “happy”, even if you’ll never be successful or particularly high status or whatever it is that you feel you may so far have failed at, you may still be doing your best towards progressing the things that you truly value. Even persistent anhedonia doesn’t have to mean anything at all: even without the associated emotional reward, you can still succeed at the things that actually matter.

More on status-based motivation

People frequently speak of social status like it’s something virtuous people aren’t supposed to be super concerned about: caring a lot about how others see you is for shallow people, while respectable people do things “for their own sake”, driven by “genuine passion”. Still, when investigated further – according to the most cynical interpretations at least – most of these supposedly noble urges, such as caring about the suffering of distant people or devoting one’s time to scientific research, are in fact also best explained by the motivation to increase and signal one’s social status: many of the states that the human brain recognizes as rewarding are probably triggered by social cues, and the rest by things that traditionally are closely associated with them, which according to the cynical view means that what other people think about us is in fact effectively the only motivation we have (though it sometimes is coded in values that inherently feel important regardless of social benefits, precisely due to their long history of being tightly coupled with status).

I basically agree with the gist of this view, though I guess its concrete empirical basis is currently a bit too thin to justify extremely high confidence – anyway, most of my quibbles are about the frequently implied moral implications of status-based motivation. Even after people understand how much of human motivation is actually based on pursuing status and how crucial a person’s social standing can be to their happiness and mental health, the vague disapproval surrounding status-seeking behaviour seems to persist. Anecdotally at least, it’s common even for reasonable people to simultaneously (1) acknowledge that everyone is motivated to a great extent by social status, (2) admit that alternative, inherently prosocial incentive structures don’t seem to be feasible at all, and (3) view status as a morally reprehensible source of motivation, something good people should basically be ashamed to be motivated by. This introduces a feeling of dissonance that I think is needless.

***

Social status is a versatile good with many different sources, effects, uses and forms. When judging whether it is in some sense virtuous to strive for it, we tend to lump all of these forms together just because there’s a convenient word we can use when we mean any of them. So when good is done in order to gain status, the deed has a negative connotational load, because seeking status is more often talked of in contexts where people act in harmful, pretentious, and frivolous ways (such as buying bigger cars, or whatever the universal status-seeking caricature happens to be at the moment); but this kind of otherwise reasonable conceptual connection doesn’t mean these things are morally equivalent or comparable, it just means they are similar in some ways crucial enough for our languages to have developed a broad term for all of them.

Compare seeking status with seeking material wealth (which of course can ultimately be seen as another way to gain status, but let’s roll with it for now). It doesn’t matter much where your money comes from or how you acquire it: it’s always the same, more or less, and owning it has roughly the same effects on you no matter its source. Contrary to this, there are countless different kinds of social status, only some of which usually are experienced as rewarding by any given person with certain goals and values, only some of which should be seen as shallow or useless, and some of which we should hope more people would seek – the positive attention received by giving to charity, the admiration gained by doing or teaching valuable research, the reputation of an honest and trustworthy person. Most importantly, people seek status selectively and very often only feel good about some aspects of social status aligned with their values: a physicist might expend considerable effort to gain respect in their research group while getting little out of other superficial and morally irrelevant status sources. To be consequentially virtuous, then, doesn’t need to include being motivated by some pure morals completely decoupled from status, the virtues themselves: it can also mean being reliably motivated by the right kind of status while ignoring the more frivolous pursuits. It feels absurd to suggest that a person who has high status in an altruistic community due to their consistent charitable donations and effective outreach work could just as well have invested more in expensive clothes and hairstyles; the status gained by the latter actions would probably not have felt meaningful or worth it to the person in question, even if they could plausibly have resulted in more absolute admiration in some other social groups.

***

Humans are a cooperative species, and the fact that our survival and wellbeing are determined by whether our peers want us around or not is the force behind this cooperativeness. Even if we are just a bunch of genes trying to replicate, the dynamics of social status make us adopt being useful to each other as a part of our value systems, resulting in things like wishing to avoid hurting others, wanting to make people happy, and baking vegan cheesecakes. The mechanisms with which we judge a person’s status are (from a very roughly sketched evo-psych perspective that should be taken as non-literally as possible) concerned with how powerful and benevolent an ally they would be, and to what extent we can expect them to be good to us; likewise, when we want to increase our own status, we strive to prove ourselves more valuable and more beneficial to others than we would otherwise be. Status-seeking isn’t something that undermines human morality, it’s part of the foundation on which it is originally built.

The system isn’t perfect, of course. When we optimize for status (instead of the good things we pretend to directly, often human-impossibly optimize for), we easily get stuck in counterproductive signalling competitions, and in many environments with weird social structures there are ways to increase status effectively by actively harming others (fear-based dominance, “evil is sexy”, super misguided charity work that mostly gets in the way of everyone trying to improve things) or yourself. When people complain about other people being motivated by status-seeking, this sort of thing is usually what they’re picking up on, and quite rightly too – but I wish the complaints were directed at seeking the wrong kinds of status, instead of pretending that it’s virtuous and good or indeed possible at all to be a human who doesn’t care about status. When people are reluctant to admit that they seek status rewards instead of something pure, or condemn other people who do so because it doesn’t fit their idea of good motives, they approach the issue from the wrong angle, and end up impairing their perfectly functional motivational systems. Understanding and accepting the principle of status-motivated behaviour, on the other hand, can be tremendously useful. Accepting status games as an okay thing for humans to be involved in lets us assess them more honestly and tie them more closely to positive externalities, so that status games become more about generating lasting value for morally relevant entities and less about senseless negative-to-zero-sum races.

Links & misc #4

All right, time to return from the unofficial summer break I almost accidentally took from blogging! I have actually written quite a lot this summer, but most of it is currently in a half-finished state, so here’s another exciting list of links and miscellany.

Links:

• A list of terms that psych papers frequently use but probably should generally avoid due to their ambiguity or inaccuracy (e.g. “a gene for”, “hard-wired”, “neural signature”). I’m so glad to see this recent trend of people calling out researchers using superficially convincing but substantively vague and misleading terminology. (Related classic: Still Not Significant.)

• A group at Karolinska Institutet has built a functional artificial neuron capable of communicating with human cells. At this stage it looks pretty clumsy, but the authors believe that miniaturized devices following the same principles are probably feasible in the future and could be utilised e.g. for remote-controlled release of neurotransmitters, all of which makes this technique pretty interesting.

• A neat summary of the wide variety of problems that governments could face when trying to regulate the development of powerful AIs. Novel areas of technological progress tend to be difficult to control by default, because changing laws is practically always a tremendously slow process compared to the speed of profitable technological advances, but there are many good reasons to assume that AI will be an exceptionally tricky research area to attempt to regulate.

• Similar things have been done in E. coli before, but this looks potentially very useful: a common gut bacterium can now be equipped with synthetic sensory receptors and other basic computing components which allow its gene expression to be programmed and controlled e.g. by the food the host eats.

• In a recent experiment, researchers induced in the subjects a sense of body ownership (similar to the rubber hand illusion) for “a virtual balloon changing in size, and a virtual square changing in size or color, in synchrony with movements of their real hand”. The authors suggest that previous experiments have overemphasised the resemblance between the subject’s own hand and the other object, and that sufficient synchrony does indeed let us identify with objects that don’t even look like our limbs. Woohoooooooooo etc.

• A strange art project trying to visualise what Windows93 could have looked like (utterly ridiculous, very colourful, a bit disturbing).

• According to one study, people on physically unstable surfaces are less likely to believe their current relationship is going to last (subjects also reported less affection towards their partner as well as generally lower relationship quality, so at least they’re, uh, being internally consistent in their assessments?).

Other stuff:

• I don’t even play video games myself, but I watched a friend’s gamedev stream the other day and it looked like a lot of fun, so I downloaded Unity, started learning C#, and am now in the process of making my first tiny little game (which is one of the reasons I have sort of neglected this blog lately). I’m not sure if the end result is going to be anything interesting – basically I just wanted to establish some sort of a coding habit, and messing around with games turned out to be a motivating way to do this, because I get to alternate between scripting which I enjoy on a challenging problem-solving level and making graphics which I enjoy because it’s relaxing, but I’m sure the process will remain more rewarding if I end up actually finishing something cool, so we’ll see. (Ha ha just kidding – when someone says they’ll “see” whether they’re going to Accomplish a Thing, they’re very reliably not going to Accomplish the Thing. But I’m learning new stuff anyway, so!)

• I realised chalkboard paint exists and have now painted most of the suitable surfaces in my apartment with it. This is so worth it. I can now have text all around me and effortlessly keep all sorts of proto-insights in sight, so I spend more time processing ideas just because they happen to grab my attention more often – I also used to have a problem with forgetting many potentially interesting thoughts because I couldn’t be arsed to keep a bunch of mostly low-quality jottings organised on paper, but now I can just write everything down with a low threshold and erase it later on without cluttering my desk or text files.

• My moods have been super positive for a few months now, which is v cool after a long time of mild-to-moderate anhedonia following a burnout/depression thing I had a couple of years ago. Things that may be going on here:
– Probiotics, which I started taking after noticing there’s a surge of recent articles about their potential mood benefits and noticing that supplements with a few lactobacillus strains are dirt cheap. Temporally this coincides extremely well with my moods starting to improve, and in any case, even if the research on psychobiotics is still at an early stage, this is a great solution as far as placebos go: sufficiently convincing, cheap, completely safe AFAIK.
– Summer and sunlight – even when my moods are fine, I seem to be slightly less effective during the darker months of the year, so it’s pretty likely that this is a factor too. I have considered eventually moving someplace else, but most of the countries I have always wanted to move to (Iceland! Northern Norway!) are very Finland-like in this respect, so idk. To do: start wanting to move to places closer to the equator.
– Recovery/regression to the mean & this being something like my normal hedonic baseline. Mayyyybe? When I was depressed, I recall taking it as a given that my baseline mood was & had always been somewhat lower than average, but now I’m not so sure about it – this whole “feeling generally cheerful, getting stuff done as long as it’s interesting, finding new things to be fascinated about all the time” thing actually feels very very familiar and is also consistent with lots of my old writings, so yeah, this could just be how it used to be. It’s not like mood-related memories are going to be totally accurate when you’re depressed.

Definitely also related: reading lots of Douglas Hofstadter. I’m being at least 38% serious here. I mean I’m pretty certain I wouldn’t be able to enjoy his writing as much as I do now if I were still intensely anhedonic, but then again I pretty consistently feel slightly better on the days I start out reading one of his books, so what do I know. The super compassionate, curious and sincere mindset he has seems to be very effective at reminding me of All The Good In The World – I’m almost sure this guy is actually from some secret cotton-candy coated parallel universe where science and philosophy actually are full of romance and beauty instead of excessive cynicism and sketchy frustrating signalling competitions, debates don’t have people resenting and wanting to hurt each other as a default component, and everything just generally is at least 30% more motivating, wonderful, pure and good, and yes I am having a huge crush on this person.

Notes from Toward a Science of Consciousness 2015

TSC, the legendary interdisciplinary conference on consciousness studies, took place in Helsinki this year. Probably because of how it overlaps with Every Area Of Science And Philosophy Ever, this field has been one of my main interests for a while now (of course, no one seems to have a confident opinion on whether the philosophical work has accomplished anything other than a hopeless mess of conceptual confusion so far, but at least everyone is having fun).

Anyway [fangirling intensifies] some of the people present this year were the actual people who have actually written all the actual books which originally inspired me to start studying neurostuff instead of marine biology a couple of years ago, so a chance to meet them before they retire or die or something served as a convenient excuse to participate, even though I had absolutely nothing to contribute yet. But I learned so much. And I got a ballpoint pen. Who doesn’t like ballpoint pens.

Day I

• The opening plenary featuring three speakers focused on social cognition. Riitta Hari, an influential Finnish neuroscientist, started out by urging researchers to consider a more interactive approach to brain imaging. Human action is interaction almost by default, and most of it includes complex cognitive loops where we constantly model, predict, and react to the thoughts and actions of the people around us – this means that some of the stuff that happens in the brain, especially in social interaction, is easy to miss if you only look at one person doing the tasks, and it makes sense in a lot of contexts to image human brains using dyads rather than individuals as the basic unit. This will allow researchers to see how the activity in various brain regions becomes synchronised between subjects, as often happens, and offer insights that the traditional one-person approach might not be able to provide. I have no objections to this, apart from the fact that a dual fMRI looks like the most awkward setting ever. Maybe you can use it to find an awkwardness center.

• The next talk by Lara Maister was an overview of the research surrounding the rubber hand illusion (in which the subject experiences a rubber hand in front of them as their own hand when the actual hand is hidden from view and both hands are stroked synchronously with a brush) and enfacement (which is basically the same thing, but with faces instead of hands).
Apparently, this type of identity manipulation can have an influence not only on the bodily but also on the conceptual sense of self: even after the experiment has ended, people tend to judge themselves to be more similar to a stranger whose face they have “borrowed” in this way. White subjects also show less negative implicit racial stereotypes after they have experienced a dark-skinned rubber hand as their own for a while – IIRC implicit bias as a measure isn’t very reliably connected to the behaviour people engage in, so I’m not sure if the direct implications are very significant, but the finding is still pretty cool and probably says, uh, something about something.

I talked about the illusion with a friend at the conference later on, and he wondered whether it works with objects that are only vaguely hand-shaped. I’m still incredibly intrigued by this. I mean if the experiment is done skillfully enough, and the subjects are people susceptible enough to illusions like this, would they feel less willing to eat a cheese burrito afterwards because they’re identifying with a cheese burrito and  don’t want to eat this thing that to their System 1 feels almost like their own hand, wrapped in plastic, with a tag that says Cheese Burrito?  (There is no deep philosophical point here or anything. I just. I don’t know. Humans are amazing. Burritos are amazing. We could combine them, we could become both. #transcendence)

• The final talk on the opening plenary was by Dan Zahavi about the properties of empathy & shared affective states. A lot of translations of German nouns describing very specific subjective states! I’m a fan. Anyway, this is something I’ve been thinking about a lot. In common usage at least, empathy as a term seems to refer to a number of things that aren’t necessarily all that connected to each other on the information processing level, and I’d like to have a better understanding about how they are related both conceptually and cognitively. Zahavi challenged the view that empathy is about simulating others, as it quite clearly has a strong basis in perception; nor is it necessarily about similarity, or sharing a “same” mental state in any relevant sense. This talk gave me a lot to think about and probably changed my mind on some things, but I think I just need to read more empirical literature on the subject.

• Skimming through the abstract book, I felt a bit overwhelmed by the huge amount of potentially interesting talks and symposiums overlapping with each other, but noticed that there was going to be a lecture on time travel the following day, so I figured I didn’t have to worry about missing out on anything. To stick with the theme of social cognition, I went to a symposium on modelling the self through others, which was THE BEST THING.

By memetic osmosis, I have recently been influenced a lot by various post-rationalist ideas about the importance of this pretty traditional type of social closeness and belonging, something I have generally neglected (like any self-respecting nerd, of course) but would like to know more about because I’m very curious about things (like any self-respecting nerd, of course). One of the recurring themes there is that synchronised movement of the kind often seen in rituals effectively facilitates pro-social affective states. The talks in this symposium focused on exactly this: synchrony, imitation, coordination, and their social causes and effects.

Lately, I’ve been noticing that I immensely appreciate looking at and in some contexts also participating in complex coordination. It could just be some superficial aesthetic thing, but until now I never made the connection to what it means for people socially to be able to coordinate in this way: apparently groups of humans are judged as being socially closer when they’re engaged in behaviour that requires skillful coordination than when they’re merely doing skillful synchronised movement. This social-closeness aspect, if it indeed holds up to scrutiny, might be what I’ve been picking up on. These lectures made a lot of things click, I guess, but I need to think about it. (Maybe everything just feels more significant than it actually is because I’m learning about it in an Exciting Conference Environment. Oh man, I’m a sucker for affect biases.)

Also, I wonder how this relates to how humans are used to living in the context of social status, or less hierarchically just different social roles. In synchrony, everyone does the same thing at the same time, which I guess is as close to role-and-status egalitarianism as you can get, and will probably temporarily reduce the perception of status and role differences. Usually, some vague leader/follower roles do emerge (even in synced finger tapping, some people tend to more readily adjust their own tapping to that of the other person, while some people focus mostly on their own performance and just expect the other participant to follow them), but there isn’t a lot of room for variation there.
In coordination, on the other hand, totally new levels of complexity arise from the way each individual is differently equipped to take care of different parts of the choreography at different times. This is more useful in exactly the type of situations that people usually have needed social groups to handle, and it requires and could plausibly strengthen deeply held pro-social attitudes such as trust, as you’re relying on the competence of the other participants even in areas you are not competent in yourself. Because of the usefulness of being able to coordinate versus being able to synchronise, and because of how it’s plausibly more familiar & comfortable for humans to exist in a social space where everyone has something of a defined role or status, it’s suddenly obvious how beautiful being able to coordinate is and why, and in conclusion I should probably start a jazz band?

• Tuesday’s keynote speaker, Jaakko Hintikka, spoke of the levels of explanation in neuroscience. He based his talk on Marr’s influential framework of three levels of analysis needed to understand computational processes: the computational (what the system does, as in what information transformation it seeks to accomplish), the algorithmic (how the system does what it does in terms of algorithms, or manipulating representations), and the implementational (how the system is realized physically).
I think Marr’s model is rather insightful, and I was looking forward to hearing Hintikka elaborate on how it can or can’t be utilized in neuroscience and whether or in which sense the three levels could be reduced to each other – unfortunately, he has this very slow but hard-to-follow, mumbling-rambling style of speaking you probably acquire after spending a significant portion of your life as the top philosopher in your country, and I was pretty exhausted at that point, so I wasn’t getting much out of it TBH. I soon decided to just relax and listen to his talk, with cozy ingroup-y concepts and ideas dropping here and there, enjoy the social atmosphere with lots of similarly half-excited-half-bored-to-death academics around me, and reflect on everything I had learned, which was a good, calming conclusion to a great day. (I promise I’m going to read some of his writing on the subject later on, because I am sure his thinking is immensely valuable, it’s just the verbal output that really doesn’t work for me.)
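
To keep the three levels apart in my own head, here’s a toy sketch – mine, not Hintikka’s or Marr’s, and about sorting rather than anything brain-related, in Python purely for concreteness – of how a single computational-level spec can be realized by different algorithms, while the implementational level is whatever physical thing happens to run them:

```python
# Toy illustration of Marr's three levels, using sorting as the "computation".

def satisfies_spec(xs, ys):
    """Computational level: WHAT is computed -- ys is xs rearranged into
    ascending order. No commitment to how the mapping is achieved."""
    return ys == sorted(xs)

def insertion_sort(xs):
    """Algorithmic level, realization 1: build the output by inserting
    each element into its place."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """Algorithmic level, realization 2: split, sort the halves, merge.
    Different representations and manipulations, same computational spec."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Implementational level: the physical substrate that runs the algorithm --
# here a Python interpreter on silicon; in a brain, neurons and synapses.
data = [3, 1, 4, 1, 5, 9, 2, 6]
assert satisfies_spec(data, insertion_sort(data))
assert satisfies_spec(data, merge_sort(data))
```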

Day II

• Wednesday’s opening plenary was about dreams & virtual reality. I missed the first speaker, but arrived just in time to hear Antti Revonsuo describe his ideas about the possible evolutionary advantages of dreaming. A lot of dreams are threatening in nature, and could basically serve the purpose of preparing us to react to various threats, both social and physical, from shameful situations to aggressive encounters. To account for the dreams that instead are about harmless events, Revonsuo has recently revised his hypo to also include the rehearsal of social behaviour and reinforcement of social bonds, since most dreams also feature a simulated social reality with other people you interact with.
Supporting this simulation theory, we know that dreams form transient but perceptually coherent realities, inside which events have causal powers similar to the sensorimotor activation we exhibit in response to input when awake. Dreams are also quite effectively isolated from stimuli from the outside world, so they’re probably not just a half-assed attempt to stay aware of what happens around the dreamer’s body. (Under this bit in my notebook, I have written “Eyes taped open :D” which I take to mean that taping the test subjects’ eyes open before they went to sleep is how the isolation question was originally studied, and that Past Me From Wednesday Morning has for some reason found this hilarious instead of horrifying. All right. Okay.)

After Revonsuo, Jennifer Windt provided some slightly more abstract ideas about what dreams are made of. In addition to world-simulation, dreams are also self-simulation in an important sense – a phenomenal self is always present, but its attributes may change radically. We may perceive ourselves to exist as non-human subjects or even completely lack all bodily experience: however, the sense of location is something that seems to persist. Likewise, reports never seem to describe anything like multiple selves. A single self in a spatial location and also oriented in a certain direction could be thought of as a minimal experience of a self.
Windt then moved on to how this minimal selfhood might show up in virtual realities. We may eventually be able to volitionally control most of the perceptual input we receive, changing everything from the social world around us to the psychophysiological properties of our bodily self, and it’s intriguing to think about what the lowest common denominator in the experience of a self will turn out to be like.

• After some additional presentations related to dreaming, I had planned to go listen to a few talks on Tononi’s integrated information theory, but the room was totally crowded and I found it pretty hard to focus on anything, so after the first one I ended up in a nearby room with talks about phenomenal concepts instead. This was pretty great, and the session name sounded sort of vague and generic and would hence have been easy to miss, so I’m glad I chanced upon it like this.

So a basic counterargument to physicalist theories of consciousness, known as the knowledge argument (originally by Peter Jackson Jackson Pollock Frank Jackson), goes as follows: Mary, a brilliant neuroscientist, spends all her life in a black-and-white room without ever experiencing colour qualia. She teaches herself absolutely everything there is to know about colours and colour vision on the physical level; still, when she finally exits the room for the first time in her life, and sees, say, a red apple, we intuitively know that she will be astonished or delighted or whatever because she learns something new – which shouldn’t happen if physicalism is correct and there is nothing more to conscious experience than physics (because Mary already knows all about the physics related to colour experience: if she learns something by experience, it has to be extraphysical). I’ve never been very worried about this argument, because it just seems ridiculously overconfident to rely on your intuitions about anything that’s happening inside Mary’s head when she is, well, practically a superintelligence capable of processing all of physics in the blink of an eye (oops), but the phenomenal concept strategy (PCS) is an attempt to approach the issue from a different angle by suggesting that the confusion around the hard problem is rooted in us trying to use dualistic concepts even though reality is physicalistically monistic.

A particularly interesting thing I learned related to PCS & the knowledge argument is the concept of epistemic emotions, presented by Benedicte Veillet. These are the emotions people feel towards knowledge and knowing, e.g. curiosity, surprise, understanding, and astonishment; however, just like other emotions, they are not necessarily connected to anything especially relevant in the external world. Mary could in principle feel astonished when she experiences the redness of an apple even when she actually learns nothing new. Moreover, when we imagine how astonished she must feel in the situation, or how genuinely she feels that she has learned something new or significant, what we are imagining and taking as conclusive evidence might actually be just the projection of an emotion we feel we would have in Mary’s shoes.
Veillet also quoted a physics student who had proven a theorem but didn’t feel it was true even though it couldn’t possibly be false (that’s pretty much what it means to prove a theorem). He was bothered by this for a long time, because he just never reached an intuitive feeling of the theorem being true, until he or someone else proved it in a different way, after which it suddenly felt just fine.
Veillet wasn’t sure yet what to make of it, and I think there’s not a lot of research on epistemic emotions in the first place, but she thought – and I agree – that this kind of misleading feeling of certainty or uncertainty is something to keep in mind when considering e.g. the possibility of p-zombies. Hmm. Anyway, the idea of epistemic emotions is a valuable addition to my concept space and I’m going to start paying attention to them and how they relate to what I know on a System 2 level. (For example, I noticed that I really need a word for when I’m feeling that something is obvious even if I recognise it might not be true. Just saying that something “feels obvious” strongly implicates that you also think it is obviously true, which maybe should be rarer.)

• The day ended in a poster session, which as it turns out is not the optimal environment for me to learn anything. So many people with food and noise everywhere, I wonder if this person will try to talk to me if I check out their poster, maybe I can stand here and just discreetly look at it from the corner of my eye oh no now they noticed me well maybe I can escape if I just pretend this other poster next to it is super interesting oh look the other poster appears to be about a crystal lightspirit brain-in-a-vat frog this is very embarrassing etc.

Day III

• Alyssa Ney had a great talk about what it means for the metaphysics of consciousness that causation on a microscopic level may not exist – this would obviously put a dent in physicalism, which assumes the causal completeness of physics. Ney presented this idea of nomic sufficiency, which simply states that for every physical fact there is some other physical fact that, within certain laws, logically entails it. This isn’t a causal principle; it doesn’t require e.g. that there’s a cause that’s temporally located before the effect (the time invariance we see on the microscopic level is a major problem for causality). Anyway, I found her recent article on the same subject, so I’m just going to link to it instead of summarising further. Thank you Internet PDF God for this good Internet PDF I am forever in debt.
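
For my own bookkeeping, here’s one way to write the nomic sufficiency principle down – my loose paraphrase, not necessarily Ney’s exact formulation – with P the set of physical facts and L the conjunction of the physical laws:

$$\forall p \in P \;\; \exists q \in P \setminus \{p\} : \quad (q \wedge L) \models p$$

Nothing here requires that q precede p in time, which is what keeps it weaker than a causal principle.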

• David Papineau continued on the question of whether we can have a physicalist theory without causal closure. He offered some interesting alternatives for tackling the issue, including the possibility that macroscopic causation suffices to explain the causal phenomena that physicalism seems to require, a bit like how the increase in entropy is an emergent result of thermodynamics and seems to constitute an arrow of time via probability distributions, quite independently of whether there is anything “temporal” on the level of fundamental dynamics at all. He also raised the question of whether the seemingly problematic overdetermination in mental causality (in which things systematically have both a physical and a mental cause) should be such a problem in the first place, which is an intriguing question. Ehh causation is so weird, why do we have to have and/or not have causation.

• Finally: the superstar of consciousness research, David Chalmers, on the topic of… ok, wavefunction collapse and its possible relationship to consciousness. The nature of consciousness and the nature of the reality behind quantum mechanics are both pretty weird, so why not do what everyone else seems to be doing and, uh, parsimoniously assume that they’re both weird because they’re ontologically connected?
Chalmers didn’t go into the mathematics behind his hypo at all, which often is a reliable signal that a person talking about QM has no idea what they’re talking about (case in point: me), but he worked with a physicist when developing this thing, so I’m assuming it’s basically math-compatible and his talk was just a popularised account of it. I should stress that this isn’t something he absolutely thinks is true or anything, and he ended up finding a lot of flaws in it himself while developing it – he just wanted to look into it and figured it should maybe be taken more seriously than it currently is, so even though this sounds pretty wild, he’s a sane guy with lots of valuable insights and I’m totally not dissing him.

So, Chalmers is going by a face-value interpretation of QM, in which the wavefunction actually collapses (every time I notice someone agreeing with a non-Everettian view, I feel this huge relief for a second, because I remember it’s entirely possible that MWI isn’t the correct interpretation even though all the people in my followsphere seem to take it as a given. Ughh I hate the possibility of MWI being correct, and I don’t hate a lot of things. Anyway.) A measurement is what causes a system to collapse into a determinate state. Re the definition of “measurement”, Chalmers postulates the existence of operators that can never be superposed: interacting with systems in superposition, these measuring operators (or medusa operators, or m-operators) will naturally force them into the corresponding state, collapsing the wavefunction. The m-properties of these systems could in principle be anything, but it would be pretty rad if it were all about consciousness. This could lead us to a sort of interactionist view which could be compatible with both physicalism and dualism (probably not panpsychism though, omg).
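
Very schematically – and this is my cartoon of the idea, definitely not Chalmers’ actual formalism – the claim seems to be that a superposition written in the eigenbasis of an m-operator can’t survive an interaction with the m-system, so it gets projected onto one of the eigenstates with the usual Born weights:

$$|\psi\rangle = \sum_i c_i \, |m_i\rangle \;\;\longrightarrow\;\; |m_k\rangle \quad \text{with probability } |c_k|^2$$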

That’s a simplification of a simplification, and I should check whether he has published something on this already, because I guess all of this has the potential to make more sense than what I extracted from a single short lecture on the subject and transmitted onwards here. Even if I’m not convinced about this at all, you’ve got to admire how this guy has been working on consciousness for a couple of decades, developing a lot of incredibly influential stuff and bringing forth insights no one else has really been able to put into words and gathering so much in terms of academic prestige & followers, and then he just gets up one morning and decides he’s going to troll everyone so hard with this new quantum bonkers approach, just to see how many people will play along. That’s what I’m guessing happened here. He seems like the kind of guy to do that, sort of super laid-back. (He complimented a hat my friend was wearing, twice. It’s a pretty cool hat.)

Day IV

• The cognitive exhaustion combined with the lack of sleep and proper nutrition finally catches up, and as a result I’m sort of losing track of time and just generally getting very dizzy. I should try to eat more before tomorrow maybe. Toward a Science of Healthy Levels of Metabolism.

Anyway, the structuralism advocated by James Ladyman & Don Ross in Every Thing Must Go: Metaphysics Naturalized has long been on the neverending list of fascinating things I should get around to learning about, so I was really curious to hear Ladyman speak this morning. Basically, his formulation of structuralism opposes strong physical realism by stating that physics might not be connected to a fundamental reality at all, because there may not be such a thing as a fundamental reality in the first place. (I’m probably not going to be able to articulate his metaphysics in a 100% convincing way here, seeing as I’m going to use a couple of paragraphs and these dudes wrote a book arguing for their ideas, but hopefully the summary isn’t going to be totally inaccurate.)

The existence and identity of everything, even the most fundamental-seeming entities such as mathematical concepts, seems to make sense only in terms of relationships, not of any self-subsisting entities themselves: there’s no such thing as the number three, except as the apparent “thing” that is a successor to the apparent “thing” that is the number two, and so on. In the special sciences, there’s a lot of relational stuff that can’t be understood in terms of (fundamental) physics, such as the concept of evolutionary fitness: this means that the structure of many things seen in the special sciences can’t be reduced to physics. Generally I’m pretty averse to views that speculate on whether or not there is a fuuuuuundamental reaaaaaaaaality at aaaaaaallllll, because it seems very, I don’t know, poorly defined as a thing to be worried about in the first place? but if I’m getting what Ladyman says here, I think I’m liking it, and it looks like I will have to buy some books.

• Today’s keynote: Pat Churchland, who is hands down the most impressive speaker I have heard at this conference. It’s weird how she looks like a tiny person, but has an impossibly strong voice and generally just a huge presence compared to pretty much everyone else in the entire building.
After twenty years or so, Churchland is still a patient, if a little bit cranky, voice of reason in the community, trying to remind everyone else about how the apparent explanatory gap between the first-person perspective and the third just might still be illusory: it’s entirely possible that the hard problem isn’t anything special – we used to think life was super special and something we could never explain in terms of purely physical things and interactions such as cells and mitochondria and ion channels, until one day, surpriiiiiise, we had it all figured out by accident just because we understood all the so-called lesser problems of life sufficiently well, and in the end they turned out to be all there was to understand.

In particular, this talk questioned the reliability of modal intuitions concerning the identity claims of scientific findings. In the area of logic that is concerned with whether claims are true necessarily instead of just contingently, it’s not really well defined what this necessity means, and especially whether it leads to useful conclusions when trying to understand things like consciousness. Is it necessarily true that light is electromagnetic radiation or that the way organisms transfer information is through DNA, and why should consciousness being neural processes be any different from this? We can imagine p-zombies and claim that they’re logically possible, but this might not mean they’re “possible” in any relevant sense of the word – we could also imagine a world where the metaphysical essence of light is maple syrup or whatever, but this doesn’t mean light isn’t electromagnetic radiation.

Most of her talk was more pragmatic in nature, however, and focused on the paths neuroscience could investigate to lead us to a better understanding of conscious processes. The bottom-level claim she was making is actually very minimal and easy to get behind: what neuroscience is doing is the best bet right now for figuring out consciousness, and also extremely useful even if it doesn’t end up solving the HP. Even though I’m still not sure if the metaphysical questions at play here are as trivial as Churchland suggests, I think it’s immensely valuable as a sanity check that there’s someone in the community sticking to this boring-but-rational attitude instead of trying to come up with all kinds of increasingly esoteric *cough cough* quantum voodoo *cough* explanations for consciousness, but then again as a neuro person I’m probably biased.

Day V

• Oh no oh no oh no. I almost got through the entire conference without stumbling upon a single speaker who was, well, mildly bonkers, but today was the day I finally did. Censored by the scientific community until now, a whole new doctrine of physics which shows how Einstein and everyone since him has been wrong about p much everything, all presented with annoying COLOURFUL FONTS and lots of CAPS LETTERS. Whyyyy is it always annoying COLOURFUL FONTS and lots of CAPS LETTERS? We will never understand. Probably something to do with quantum mechanics.

[photo: David Chalmers]

• After the exciting presentation above and a couple of short talks about psychiatry – one on an experience sampling app for psychiatrists which prompts patients to answer short questions about their mood during the course of the day and also tracks their movements through GPS and apparently reads their e-mail, analysing the syntax of everything they write, and another on the definition of mental illnesses and the way they vary a lot depending on the culture you live in and the attitudes the people around you hold about them – it was time for the semi-final plenary lecture with Stuart Hameroff & other folks from the University of Arizona researching transcranial ultrasound to treat depression. This was all pretty straightforward and it looks like an interesting area of research, though probably not the Holy Grail of Curing Depression. It’s 2015, I think we should have found a Holy Grail of Curing Depression already, and it’s such bullshit that we have to settle for researching things that are not the Holy Grail of Curing Depression, but it’s not the fault of these guys and they’re still doing some great stuff.

The group had a device with them, and they had offered conference participants a chance to get a 30-second stimulation and describe their experiences afterwards. Now that they’ve published the results: something like 40% of the 80 or so participants reported feeling a positive effect on their mood, while none of the 15 people who were, without warning, given a sham stimulation reported feeling anything. (The group presented actual published studies too, I just thought it was clever of them to demonstrate it like this.) They also asked volunteers to receive and describe the treatment live during the discussion, which was interesting but slightly unfortunate, because Sue Blackmore, who happened to be one of them, ended up feeling really awful, gave a vivid description of this awfulness, and frightened everyone a little bit. But the treatment is safe in terms of long-term effects, and she was fine soon enough to participate in the closing plenary. (She is also not the guy in the picture above. The guy in the picture is Chalmers.)

[photo]

• Time for the final panel discussion with six participants: Hameroff, Churchland, Blackmore, Chalmers, Revonsuo, and Ladyman (who is missing from the picture above, and apparently not amused by Churchland’s metaphysical views in the one below).

[photo]

After more than twenty years of Toward a Science of Consciousness, next year’s conference in Arizona will be adopting a more ambitious tone by starting to call itself simply the Science of Consciousness. This was the topic of the discussion – are we actually ready for a science of consciousness yet?
All of the speakers quickly described their views about where the field is heading and whether the questions we are currently asking are defined well enough to lead us anywhere. It was a nice discussion to follow, though not very conclusive – more like a closing ritual for the conference than anything actually substantial. Though I could have been so sad about the conference ending at that point that I maybe just forgot to listen, so who knows.

This was by far the best week I’ve had in a long time. I can’t remember the last time I’ve felt so genuinely motivated to learn everything in the whole entire world, and I met some cool new people and got to hang out with acquaintances I don’t usually see very often. I actually had to leave tons of interesting talks without mention in this post because there were so many of them, and I’m going to be full of weird ideas for the next three months probably.  5/5.