
Thanks for all the questions, all - I’m going to wrap up here! Maybe I'll do this again in the future, hopefully others will too!


Hi,

I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I’ll lead by example. (If it goes well, hopefully others will try it out too.)

Below I’ve written out what I’m currently working on. Please ask any questions you like, about anything: I’ll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I’m hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.

If you don't want to post your question publicly or non-anonymously (e.g. you're asking a “Why are you such a jerk?” sort of question), or if you don’t have a Forum account, you can use this Google form.


What I’m up to

Book

My main project is a general-audience book on longtermism. It’s coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I’m currently using is What We Owe The Future.

It’ll hopefully complement Toby Ord’s forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view, but without relying heavily on them.

In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare.

Roughly, I’m dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I’ve given the publishers a deadline of March 2021 for submission; if I meet that deadline, the book would come out in late 2021 or early 2022. I’m planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book.

My academic book, Moral Uncertainty (co-authored with Toby Ord and Krister Bykvist), should come out early next year: it’s been submitted, but OUP have been exceptionally slow in processing it. It’s not radically different from my dissertation.

Global Priorities Institute

I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:

  • The case for longtermism, with Hilary Greaves. It’s making the core case for strong longtermism, arguing that it’s entailed by a wide variety of moral and decision-theoretic views. 
  • The Evidentialist’s Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
  • A paper, with Tyler John, exploring the political philosophy of age-weighted voting.

I have various other draft papers, but have put them on the back burner for the time being while I work on the book.

Forethought Foundation

Forethought is a sister organisation to GPI, which I take responsibility for: it’s legally part of CEA and independent from the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.

Utilitarianism.net 

Darius Meissner and I (with help from others, including Aron Vallinder, Pablo Stafforini and James Aung) are creating an introduction to classical utilitarianism at utilitarianism.net. Even though ‘utilitarianism’ gets several times the search traffic of terms like ‘effective altruism,’ ‘givewell,’ or ‘peter singer’, there’s currently no good online introduction to utilitarianism. This seems like a missed opportunity. We aim to put the website online in early October. 

Centre for Effective Altruism

We’re down to two very promising candidates in our CEO search; this continues to take up a significant chunk of my time. 

80,000 Hours

I meet regularly with Ben and others at 80,000 Hours, but I’m currently considerably less involved with 80k strategy and decision-making than I am with CEA. 

Other

I still take on select media, especially podcasts, and select speaking engagements, such as for the Giving Pledge a few months ago. 

I’ve been taking more vacation time than I used to (planning six weeks in total this year), and I’ve been dealing on and off with chronic migraines. I’m not sure if the additional vacation time has decreased or increased my overall productivity, but the migraines have decreased it by quite a bit. 

I am continuing to try (and often fail) to become more focused in what work projects I take on. My long-run career aim is to straddle the gap between research communities and the wider world, representing the ideas of effective altruism and longtermism. This pushes me in the direction of prioritising research, writing, and select media, and I’ve made progress in that direction, but my time is still more split than I'd like.


Are you happy with where EA as a movement has ended up? If you could go back and nudge its course, what would you change?

Relative to the base rate of how wannabe social movements go, I’m very happy with how EA is going. In particular: it doesn’t spend much of its time on internal fighting; the different groups in EA feel pretty well-coordinated; it hasn’t had any massive PR crises; it’s done a huge amount in a comparatively small amount of time, especially with respect to moving money to great organisations; it’s in a state of what seems like steady, sustainable growth. There’s a lot still to work on, but things are going pretty well. 

What I could change historically: I wish we’d been a lot more thoughtful and proactive about EA’s culture in the early days. In a sense the ‘product’ of EA (as a community) is a particular culture and way of life. Then the culture and way of life we want is whatever will have the best long-run consequences. Ideally I’d want a culture where (i) 10% or so of people who interact with the EA community are like ‘oh wow these are my people, sign me up’; (ii) 90% of people are like ‘these are nice, pretty nerdy people; it’s just not for me’; and (iii) almost no-one is like, ‘wow, these people are jerks’. (On (ii) and (iii): I feel like the Quakers are the sort of thing I’m thinking of…)

Henry Stanley 🔸
Thanks for the thoughtful answer, Will! :)
Nathan Young
Had a chat with some people; it was noted that Quakers are not evangelical whilst New Atheists are. People make snarky jokes about vegans for their evangelicalism. To what extent is it hard to be likeable when you think other people should believe what you do? To what extent is it better to spread ideologies by actions, not words? Are there any ways EA culture can improve? Big things to start, big things to stop?
Julia_Wise🔸
Minor factual point that the more left-y type of Quakers you've probably encountered are not evangelical, but there are evangelical branches of Quakerism, and for this reason the country with the most Quakers is Kenya.

Do you think EA has the problem of "hero worship"? (I.e. where opinions of certain people, you included, automatically get much more support instead of people thinking for themselves) If yes, what can the "worshipped" people do about it?

Yeah, I do think there’s an issue of too much deference, and of subsequent information cascades. It’s tough, because intellectual division of labour and deference is often great, as it means not everyone has to reinvent the wheel for themselves. But I do think in the current state of play there’s too much deference, especially on matters that involve a lot of big-picture worldview judgments, or rely on priors a lot. I feel that was true in my own case - about a year ago I switched from deferring to others on a number of important issues to assessing them myself, and changed my views on a number of things (see my answer to ‘what have you changed your mind about recently’).

I wish more researchers wrote up their views, even if in brief form, so that others could see how much diversity there is, and where, and so we avoid a bias where the more meme-y views get more representation than more boring views simply by being more likely to be passed along communication channels. (Maybe more AMAs could help with this!) I also feel we could do more to champion less well-known people with good arguments, especially if their views are in some ways counter to the EA mainstream. (Two people I’d highlight here are Phil Trammell and Ben Garfinkel.)

Thank you, I'm flattered! But remember, all: Will MacAskill saying we have good arguments doesn't necessarily mean we have good arguments :)

I enjoy reading Phil's blog here: https://philiptrammell.com/blog/

[anonymous]
Also, what can normal EAs do about it?

Anon asks: "Do you think climate change is neglected within EA?"

I think there’s a weird vibe where EA can feel ‘anti’ climate change work, and I think that’s an issue. I think the etiology of that sentiment is (i) some people raising climate change work as a proposal to benefit the global poor, and I think it’s very fair to argue that bednets do better than the best climate change actions with respect to that specific goal; (ii) climate change gets a lot of media time, including some claims that aren’t scientifically grounded (e.g. that climate change will literally directly kill everyone on the planet), and some people (fairly) respond negatively to those claims. 

But climate change is a huge problem, and working on clean tech, nuclear power, carbon policy etc are great things to do. And I think the upsurge of concern about the rights of future generations that we’ve seen from the wider public over the last couple of decades is really awesome, and I think that longtermists could do more to harness that concern and show how concern for future generations generalises to other issues too. So I want to be like, ‘Yes! And….’ with respect to climate change.

Then is climate change…

What do you think the typical EA Forum reader is most likely wrong about?

I don’t know about ‘most likely’, but here’s one thing that I feel gets neglected: The value of concrete, short-run wins and symbolic actions. I think a lot about Henry Spira, the animal rights activist that Peter Singer wrote about in Ethics into Action. He led the first successful campaign to limit the use of animals in medical testing, and he was able to have that first win by focusing on science experiments at New York’s American Museum of Natural History, which involved mutilating cats in order to test their sexual performance after the amputation. From a narrow EA perspective, the campaign didn’t make any sense: the benefit was something like a dozen cats. But, at least as Singer describes it, it was the first real win in the animal liberation movement, and thereby created a massive amount of momentum for the movement.

I worry that in current EA culture people feel like every activity has to be justified on the basis of marginal cost-effectiveness, and that the fact that an action would constitute some definite and symbolic, even if very small, step towards progress — and be the sort of thing that could provide fuel for a further movement — isn’t ‘allowable’ as a reason for…

Garrison
I participated in a civil disobedience direct action protesting an ICE-affiliated private detention center in Elizabeth, NJ. I was one of 36 people arrested for blocking traffic in and out of the facility (and nothing else). We spent hours traveling there, prepping for the action, blocking the road, being arrested and detained. All in, it was a full day of work for everyone involved, plus over 100 others who showed up. We raised money for a lawyer and travel expenses for people traveling for court.

From an EA standpoint, this is really hard to justify. We shut down vehicle traffic from one facility for a few hours and got some press. But, that was the first action of now nearly 40 across the country in the past 7 weeks. People have shut down the ICE HQ for hours, disrupted companies working with ICE, and got a bunch of press coverage on the horrible treatment of immigrants. It still remains to be seen what the final result will be, but it does seem like the Trump admin has responded to popular protests in the past (the airport protests in particular).

Even if this ultimately fails, a ton of young people are getting trained in activism and organizing. One of the organizers cut her teeth organizing the Women's March. The downstream effects of getting young people involved in effective political organizing are hard to measure, but can change the course of history. Barry Goldwater lost the 1964 presidential campaign, but the young people who worked on his campaign went on to take over the Republican Party (see Rick Perlstein's book Before the Storm if you're interested in the story).

While the org is definitely not EA, I found the organizing to be very well thought through and effective, especially compared to other actions I've participated in. For anyone curious, the group that organized this is called Never Again Action (https://www.neveragainaction.com/).

Yes, some symbolic activities will turn out to be high-impact, but we have to beware survivorship bias (ie, think of all the symbolic activities that went nowhere).

CarlShulman
The annual total of all spending on electoral campaigns in the US is only a few billion dollars. So aggregating across all of that activity the per $ (and per staffer) impact is still going to be quite large.

I think we need to figure out how to better collectively manage the fact that political affiliation is a shortcut to power (and hence impact), yet politicisation is a great recipe for blowing up the movement. It would be a shame if avoiding politics altogether is the best we can do.

What have you changed your mind on recently?

Lots! Treat all of the following as ‘things Will casually said in conversation’ rather than ‘Will is dying on this hill’ (I'm worried about how messages travel and transmogrify, and I wouldn't be surprised if I changed lots of these views again in the near future!). But some things include:

  • I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%. 
  • I find ‘takeoff’ scenarios from AI over the next century much less likely than I used to. (Fast takeoff in particular, but even the idea of any sort of ‘takeoff’, understood in terms of moving to a higher growth mode, rather than progress in AI just continuing existing two-century-long trends in automation.) I’m not sure what numbers I’d have put on this previously, but I’d now put medium and fast takeoff (e.g. that in the next century we have a doubling of global GDP in a 6 month period because of progress in AI) at less than 10%. 
  • In general, I think it’s much less likely that we’re at a super-influential time in history; my next blog post will be about this idea.
  • I’m much more worried about a great power war in my lifetime…

This is just a first impression, but I'm curious about what seems a crucial point - that your beliefs seem to imply extremely high confidence of either general AI not happening this century, or that AGI will go 'well' by default. I'm very curious to see what guides your intuition there, or if there's some other way that first-pass impression is wrong.

I'm curious about similar arguments that apply to bio & other plausible x-risks too, given what's implied by low x-risk credence

The general background worldview that motivates this credence is that predicting the future is very hard, and we have almost no evidence that we can do it well. (Caveat: I don’t think we have great evidence that we can’t do it either, though.) When it comes to short-term forecasting, the best strategy is to use reference-class forecasting (‘outside view’ reasoning; often continuing whatever trend has occurred in the past), and make relatively small adjustments based on inside-view reasoning. In the absence of anything better, I think we should do the same for long-term forecasts too. (Zach Groff is working on a paper making this case in more depth).
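The forecasting recipe described above — lean on the reference class, adjust a little from the inside view — can be sketched as a simple weighted blend. The function name, weighting scheme, and all numbers below are hypothetical illustrations, not anything from the post:

```python
# A minimal sketch of reference-class forecasting with a small inside-view
# adjustment. The linear blend and the default weight are illustrative
# assumptions, not a method endorsed in the post.

def blended_forecast(outside_view: float, inside_view: float,
                     inside_weight: float = 0.2) -> float:
    """Blend an outside-view (base-rate) probability with an inside-view
    estimate, keeping the weight on the inside view relatively small."""
    return (1 - inside_weight) * outside_view + inside_weight * inside_view

# e.g. a base rate of 0.5 from the reference class, adjusted toward an
# inside-view estimate of 0.9:
print(round(blended_forecast(0.5, 0.9), 2))  # 0.58
```

The small `inside_weight` encodes the claim that inside-view adjustments to long-term forecasts should be modest in the absence of evidence that we can do better.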

So when I look to predict the next hundred years, say, I think about how the past 100 years has gone (as well as giving consideration to how the last 1000 years and 10,000 years (etc) have gone). When you ask me about how AI will go, as a best guess I continue the centuries-long trend of automation of both physical and intellectual labour; in the particular context of AI I continue the trend where within a task, or task-category, the jump from significantly sub-human to vastly-greater-than-human level performance is rapid (on the order of…)

Finally, I expect the development of any new technology to be safe by default.

The argument you give in this paragraph only makes sense if "safe" is defined as "not killing everyone" or "avoids risks that most people care about". But what about "safe" as in "not causing differential intellectual progress in a wrong direction, which can lead to increased x-risks in the long run" or "protecting against or at least not causing value drift so that civilization will optimize for the 'right' values in the long run, whatever the appropriate meaning of that is"?

If short-term extinction risk (and in general risks that most people care about) is small compared to other kinds of existential risks, it would seem to make sense for longtermists to focus their efforts more on the latter.

William_MacAskill
I agree re value-drift and societal trajectory worries, and do think that work on AI is plausibly a good lever to positively affect them.
Ofer
I'd be happy to read more about this point. If we end up with powerful deep learning models that optimize a given objective extremely well, the main arguments in Superintelligence seem to go through. (If we end up with powerful deep learning models that do NOT optimize a given objective, it seems to me plausible that x-risks from AI are more severe, rather than less.) [EDIT: replaced "a specified objective function" with "a given objective"]
SiebeRozendal
Why do his beliefs imply extremely high confidence? Why do the higher estimates from other people not imply that? I'm curious what's going on here epistemologically.

If you believe "<1% X", that implies ">99% ¬X", so you should believe that too. But if you think >99% ¬X seems too confident, then you should modus tollens and moderate your <1% X belief. When other people give e.g. 30% X, that only implies 70% ¬X, which seems more justifiable to me.

I use AGI as an example just because if it happens, it seems more obviously transformative & existential than biorisk, where it's harder to reason about whether people survive. And because Will's views seem to diverge quite strongly from average or median predictions in the ML community, not that I'd read all too much into that. Perhaps further, many people in the EA community believe there's good reason to think those predictions are too conservative if anything, and have arguments for significant probability of AGI in the next couple decades, let alone century.

Since Will's implied belief is >99% no xrisk this century, this either means AGI won't happen, or that it has a very high probability of going well (getting or preserving most of the possible value in the future, which seems the most useful definition of existential for EA purposes). That's at first glance of course, so not wanting the whole book, just want an intuition for how you seem to get such high confidence ¬X, especially when it seems to me there's some plausible evidence for X.

I disagree with your implicit claim that Will's views (which I mostly agree with) constitute an extreme degree of confidence. I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution".

That base rate seems pretty low. And that's not actually what we're talking about - we're talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on "AGI takeoff this century" seems not unreasonable to me. (You could, of course, believe that there is concrete evidence on AGI to justify different credences.)

On a different note, I sometimes find the terminology of "no x-risk", "going well" etc. unhelpful. It seems more useful to me to talk about concrete outcomes and separate this from normative judgments. For instance, I believe that extinction through AI misalignment is very unlikely. However, I'm quite uncertain about whether people in 2019, if you handed them a crystal ball that shows what will happen (regarding AI), would generally think that things are…

Maybe one source of confusion here is that the word "extreme" can be used either to say that someone's credence is above (or below) a certain level/number (without any value judgement concerning whether that's sensible) or to say that it's implausibly high/low.

One possible conclusion would be to just taboo the word "extreme" in this context.

nonn
Agree, tried to add more clarification below. I'll try to avoid this going forward, maybe unsuccessfully. Tbh, I mean a bit of both definitions (Will's views are quite surprising to me, which is why I want to know more), but mostly the former (i.e. stating it's close to 0% or 100%).
I sometimes find the terminology of "no x-risk", "going well" etc.

Agree on "going well" being under-defined. I was mostly using that for brevity, but probably more confusion than it's worth. A definition I might use is "preserves the probability of getting to the best possible futures", or even better if it increases that probability. Mainly because from an EA perspective (even if people are around) if we've locked in a substantially suboptimal moral situation, we've effectively lost most possible value - which I'd call x-risk.

The main point was fairly object-level - Will's beliefs imply it's near 1% likelihood of AGI in 100 years, or near 99% likelihood of it "not reducing the probability of the best possible futures", or some combination like <10% likelihood of AGI in 100 years AND even if we get it, >90% likelihood of it not negatively influencing the probability of the best possible futures. Any of these sound somewhat implausible to me, so I'm curious for the intuition behind whichever one Will believes.


I think it's a mistake to approach these questions with a 50-50 prior. Instead…

Very interesting points! I largely agree with your (new) views. Some thoughts:

  • If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
    • a) that it's unlikely that transformative AI will be developed at all this century,
    • b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
  • Which of the two do you believe to what extent? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than "median EA beliefs" – then you’d have to believe that the conditional probability of extinction is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)
  • What do you think about the possibility of a growth mode change (i.e. much faster pace of economic growth and probably also social change, comparable to the industrial revolution) for reasons other than AI? I feel that this is somewhat neglected in EA…
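The consistency constraint in the first bullet is just a product of two credences, and can be checked with a quick calculation. All numbers below are purely illustrative, not figures from the post:

```python
# Decompose unconditional extinction risk from transformative AI (TAI):
#   P(extinction via TAI) = P(TAI this century) * P(extinction | TAI)
# The numbers here are hypothetical, used only to check a set of stated
# credences for consistency.

def extinction_risk(p_tai: float, p_ext_given_tai: float) -> float:
    """Unconditional extinction risk implied by the two credences."""
    return p_tai * p_ext_given_tai

# If total extinction risk this century is capped below 1% and you put 10%
# on TAI being developed, the conditional risk must come in below
# 0.01 / 0.10 = 10%:
cap, p_tai = 0.01, 0.10
print(round(cap / p_tai, 2))               # 0.1
print(extinction_risk(p_tai, 0.09) < cap)  # True: these credences are consistent
```

The point of the sketch is the comment's argument in miniature: a sub-1% total risk forces at least one of the two factors to be small.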

Thanks! I’ve read and enjoyed a number of your blog posts, and often found myself in agreement. 

If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
a) that it's unlikely that transformative AI will be developed at all this century,
b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
Which of the two do you believe to what extent? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than "median EA beliefs" – then you’d have to believe that the conditional probability of extinction is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)

See my comment to nonn. I want to avoid putting numbers on those beliefs to avoid anchoring myself; but I find them both very likely - it’s not that one is much more likely than the other. (Where ‘transformative AI not developed this…

Tobias_Baumann
Strongly agree. I think it's helpful to think about it in terms of the degree to which social and economic structures optimise for growth and innovation. Our modern systems (capitalism, liberal democracy) do reward innovation - and maybe that's what caused the growth mode change - but we're far away from strongly optimising for it. We care about lots of other things, and whenever there are constraints, we don't sacrifice everything on the altar of productivity / growth / innovation. And, while you can make money by innovating, the incentive is more about innovations that are marketable in the near term, rather than maximising long-term technological progress. (Compare e.g. an app that lets you book taxis in a more convenient way vs. foundational neuroscience research.) So, a growth mode could be triggered by any social change (culture, governance, or something else) resulting in significantly stronger optimisation pressures for long-term innovation. That said, I don't really see concrete ways in which this could happen and current trends do not seem to point in this direction. (I'm also not saying this would necessarily be a good thing.)
William_MacAskill
One thing that moves me towards placing a lot of importance on culture and institutions: We've actually had the technology and knowledge to produce greater-than-human intelligence for thousands of years, via selective breeding programs. But it's never happened, because of taboos and incentives not working out.
CarlShulman
People didn't quite have the relevant knowledge, since they didn't have sound plant and animal breeding programs or predictions of inheritance.
[anonymous]

I'd be super interested in hearing you elaborate more on most of the points! Especially the first two.

SiebeRozendal
Me too! I'm quite surprised by many of them! (Not necessarily disagreeing, just surprised)

I’d like to vote for more detail on:

I find (non-extinction) trajectory change more compelling as a way of influencing the long-run future than I used to.

Unless the change in importance is fully explained by the relative reprioritization after updating downward on existential risks.

Do I understand you correctly that you’re relatively less worried about existential risks because you think they are less likely to be existential (that civilization will rebound) and not because you think that the typical global catastrophes that we imagine are less likely?

Leon_Lang
Thanks for these interesting points! About the first 3 statements on existential risks, takeoff scenarios and how influential our time is: How much is your view the general wisdom of experts in the corresponding research fields (I'm not sure what this field would be for assessing our influence on the future) and how much is it something like your own internal view?

It depends on who we point to as the experts, which I think there could be disagreement about. If we’re talking about, say, FHI folks, then I’m very clearly in the optimistic tail - others would put much higher x-risk, takeoff scenario, and chance of being superinfluential. But note I think there’s a strong selection effect with respect to who becomes an FHI person, so I don’t simply peer-update to their views. I’d expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view. If I were wrong about that I’d change my view. One relevant piece of evidence is that the Metaculus (a community prediction site) algorithm puts the chance of 95%+ of people dead by 2100 at 0.5%, which is in the same ballpark as me.

I think there's some evidence that Metaculus, while a group of fairly smart and well-informed people, are nowhere near as knowledgeable as a fairly informed EA (perhaps including a typical user of this forum?) for the specific questions around existential and global catastrophic risks.

One example I can point to is that for this question on climate change and GCR before 2100 (that has been around since October 2018), a single not-very-informative comment from me was enough to change the community median from 24% to 10%. This suggests to me that Metaculus users did not previously have strong evidence or careful reasoning on this question, or perhaps GCR-related thinking in general.

Now you might think that actual superforecasters are better, but based on the comments given so far for COVID-19, I'm unimpressed. In particular the selected comments point to use of reference classes that EAs and avid Metaculus users have known to be flawed for over a week before the report came out (eg, using China's low deaths as evidence that this can be easily replicated in other countries as the default scenario).

Now COVID-19 is not an existential risk or GCR, but it is an "out of distribution" problem showing clear and fast exponential growth that seems unusual for most questions superforecasters are known to excel at.

Vaughn Papenhausen
I'd be very interested in hearing more about the views you list under the "more philosophical end" (esp. moral uncertainty) -- either here or on the 80k podcast.

Hey, thanks so much for all the responses! I’m impressed by how much take-up this has had! My migraines have been worse over the past week, so I’m sorry if my responses are slow and erratic (and the 80k podcast has been bumped back to early October), but they will come! And if I don't respond to you yet, it might be just because the question is good and deserves thought!

What's one piece of research / writing that you think is missing from the public internet, but you think a Forum writer could create?

If I could pick just one, it would be an assessment of existential risk conditional on some really major global catastrophe (e.g. something that kills 90% / 99% / 99.9% of the world’s population). I think this is really crucial because: (i) for many of the proposed extinction risks (nuclear, asteroids, supervolcanoes, even bio), I find it really hard to see how they could directly kill literally everyone, but I find it much easier to see how they could kill some very large proportion of the population; (ii) there’s been very little work done on evaluating how likely (or not) civilisation would be to rebound from a really major global catastrophe. (This is the main thing I know of.)

Ideally, I’d want the piece of research to be directed at a sceptic. Someone who said: “Even if 99.9% of the world’s population were killed, there would still be 7 million people left, approximately the number of hunter-gatherers prior to the Neolithic revolution. It didn’t take very long — given Earth-level timescales — for hunter-gatherers to develop agriculture and then industry. And the catastrophe survivors would have huge benefits compared to them: inherited knowledge, leftover technology, low-lyin... (read more)

I’m also just really pro Forum users trying to independently verify arguments made by others in EA (or endorsed by others in EA), or check data that’s being widely used. E.g. I thought Jeff Kaufman’s series on AI risk was excellent. And recently Ben Garfinkel has been trying to locate the sources of the global population numbers that underlie the ‘hyperbolic growth’ idea and I’ve found that work important and helpful. 

(In general, I think we can sometimes have a double standard where we will happily tear apart careful, widely-cited research done by people outside the community, but then place a lot of weight on ideas or arguments that have come from within the community, even if they haven’t gone through the equivalent of rigorous peer-review.)

3
SiebeRozendal
If anyone decides to work on this, please feel free to contact me! There is a small but non-negligible probability I'll work on this question, and if I don't, I'd be happy to help out with some contacts I made.

Do you have any thoughts on why there is not much engagement/participation in technical AI safety/alignment research by professional philosophers or people with philosophy PhDs? (I don't know anyone except one philosophy PhD student who is directly active in this field, and Nick Bostrom who occasionally publishes something relevant.) Is it just that the few philosophers who are concerned about AI risk have more valuable things to do, like working on macrostrategy, AI policy, or trying to get more people to take ideas like existential risk and longtermism seriously? Have you ever thought about at what point it would start to make sense for the marginal philosopher (or the marginal philosopher-hour) to go into technical AI safety? Do you have a sense of why "philosophers concerned about AI risk" as a class hasn't grown as quickly as one might have expected?

On a related note, I feel like encouraging EA people with a philosophy background to go into journalism or tech policy (as you did in the recent 80,000 Hours career review) is a big waste, since an advanced education in philosophy does not seem to create an obvious advantage in those fields, whereas there are important philosophical questions in AI alignment for which such a background would be more obviously helpful. Curious what your thinking is here.

It occurs to me that another reason for the lack of engagement by people with philosophy backgrounds may be that philosophers aren't aware of the many philosophical problems in AI alignment that they could potentially contribute to. So here's a list of philosophical problems that have come up just in my own thinking about AI alignment.

EDIT: Since the actual list is perhaps only of tangential interest here (and is taking up a lot of screen space that people have to scroll through), I've moved it to the AI Alignment Forum.

7
richard_ngo
Wei's list focused on ethics and decision theory, but I think that it would be most valuable to have more good conceptual analysis of the arguments for why AI safety matters, and particularly the role of concepts like "agency", "intelligence", and "goal-directed behaviour". While it'd be easier to tackle these given some knowledge of machine learning, I don't think that background is necessary - clarity of thought is probably the most important thing.

Hey Wei_Dai, thanks for this feedback! I agree that philosophers can be useful in alignment research by way of working on some of the philosophical questions you list in the linked post. Insofar as you're talking about working on questions like those within academia, I think of that as covered by the suggestion to work on global priorities research. For instance, I know that working on some of those questions would be welcome at the Global Priorities Institute, and I think FHI would probably also welcome philosophers working on AI questions. But I agree that that isn’t clear from the article, and I’ve added a bit to clarify it.

But maybe the suggestion is to work on those questions outside academia. We mention DeepMind and OpenAI as having ethics divisions, but likely only some of the philosophical questions relevant to AI safety are pursued in those kinds of centers, and it could be worth listing more non-academic settings in which philosophers might be able to pursue alignment-relevant questions. There are, for instance, lots of AI ethics organizations, though most are focused only on short-term issues, and are more concerned with 'implications' than with the philosophical questions that arise.

... (read more)

Thanks for making the changes. I think they address most of my concerns. However, I think splitting the AI safety organizations mentioned between academic and non-academic is suboptimal, because what seems most important is that someone who can contribute to AI safety goes to an organization that can use them, whether or not that organization belongs to a university. On a pragmatic level, I'm worried that someone will see a list of organizations where they can contribute to AI safety and not realize that there's another list in a distant part of the article.

Do you know of any other projects or organizations that might be useful to mention?

Individual grants from various EA sources seem worth mentioning. I would also suggest mentioning FHI for AI safety research, not just global priorities research.

As for the comparison with journalism and AI policy, in line with what Will wrote below I was thinking of those as suggestions for people who are trying to get out of philosophy or who will be deciding not to go into it in the first place, i.e., for people who would be good at philosophy but who choose to do something else that takes advantage of their general strengths.

Ok, tha

... (read more)
1
Arden Koehler
Re: these being alternatives to philosophy, I see what you mean. But I think it's ok to group together non-academic philosophy and non-philosophy alternatives since it's a career review of philosophy academia. However, I take the point that I can better connect the two 'alternatives' sections in the article, and have added a link. As for individual grants, I'm hesitant to add that suggestion because I worry that it would encourage some people who aren't able to get philosophy roles in academia or in other organizations to go the 'independent' route, and I think that will rarely be the right choice.
2
Wei Dai
I'm interested to hear why you think that. My own thinking is that a typical AI safety research organization may not currently be very willing to hire someone with mainly philosophy background, so they may have to first prove their value by doing some AI safety related independent research. After that they can either join a research org or continue down the 'independent' route if it seems suitable to them. Does this not seem like a good plan?
9
William_MacAskill
I don’t feel I have a great answer here. I think in part there’s just not that many philosophers in the world, and most of them are already wrapped up in existing research projects. Of those that are EA-aligned, I think the field of global priorities research probably tends to seem to them like a better fit with their skills than AI alignment. It also might be (this is a guess, based on maybe one or two very brief impressions) that philosophers in general aren’t that convinced of the value of the ‘agent foundations’ approach to AI safety, and feel that they’d need to spend a year getting to grips with machine learning before they could contribute to AI technical safety research.

Of your problems list, quite a number are more-or-less mainstream philosophical topics: standard debates in decision theory; infinite ethics; fair distribution of benefits; paternalism; metaphilosophy; the nature of normativity. So philosophers are already working on those at least. I really like your ‘metaethical policing’ bullet points, and wish there were more work from philosophers there.

Arden Koehler wrote that part of the post (and is the main author of that post), so I'll leave that to her. But quite a number of people who leave philosophy do so because they no longer want to keep doing philosophy research, so it seems good to list other options outside of that.

Of your problems list, quite a number are more-or-less mainstream philosophical topics

Sure. To clarify, I think it would be helpful for philosophers to think about those problems specifically in the context of AI alignment. For example many mainstream decision theorists seem to think mostly in terms of what kind of decision theory best fit with our intuitions about how humans should make decisions, whereas for AI alignment it's likely more productive to think about what would actually happen if an AI were to follow a certain decision theory and whether we would prefer that to what would happen if it were to follow a different decision theory. Another thing that would be really helpful is to act as a bridge from mainstream philosophy research to AI alignment research, e.g., pointing out relevant results from mainstream philosophy when appropriate.

Arden Koehler wrote that part of the post (and is the main author of that post), so I’ll leave that to her. But quite a number of people who leave philosophy do so because they no longer want to keep doing philosophy research, so it seems good to list other options outside of that.

Ah ok. Any chance you could discuss this issue with h

... (read more)
8
William_MacAskill
That makes sense; agree there's lots of work to do there. Have sent an email! :)

What are your top 3 "existential risks" to EA? (i.e. risks that would permanently destroy or curtail the potential of Effective Altruism - both to the community and the ideas)

  1. The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism.)
  2. A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate).
  3. Fizzle - it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion.

What has been the biggest benefit to your well-being since getting into EA? What advice would you give to the many EAs who struggle with being happy / not burning out? (Our community seems to have a higher than average rate of mental illness.)

Honestly, the biggest benefit to my wellbeing was taking action about depression, including seeing a doctor, going on antidepressants, and generally treating it like a problem that needed to be solved. I really think I might not have done that, or might have done it much later, were it not for EA - EA made me think about things in an outcome-oriented way, and gave me an extra reason to ensure I was healthy and able to work well.

For others: I think that Scott Alexander's posts on anxiety and depression are really excellent and hard to beat in terms of advice. Other things I'd add: I'd generally recommend that your top goal should be ensuring that you're in a healthy state before worrying too much about how to go about helping others; if you're seriously unhappy or burnt out, fixing that first is almost certainly the best altruistic thing you can do. I also recommend maintaining and cultivating a non-EA life: having a multi-faceted identity means that if one aspect of your life isn't going so well, you can take solace in other aspects.


A significant amount of your effort and the focus of the EA movement as a whole is on longtermism. Can you steelman arguments for why this might be a bad idea?

No need to steelman - there are good arguments against this and it’s highly nonobvious what % of EA effort should be on longtermism, even from the perspective of longtermism.  Some arguments:

  • If longtermism is wrong (see another answer for more on this)
  • If getting a lot of short-run wins is important to have long-run influence
  • If longtermism is just too many inferential steps away from existing common sense, such that more people would get into longtermism if there were more focus on short-term wins
  • If now isn’t the right time for longtermism (because there isn’t enough to do) and instead it would be better if there were a push around longtermism at some time in the future 

I think all these considerations are significant, and are part of why I’m in favour of EA having a diversity of causes and worldviews. (Though not necessarily on the ‘three cause area’ breakdown which we currently have, which I think is a bit narrow).

1
WilliamKiely
Have you thought about whether there's a way you could write your book on longtermism to make it robustly beneficial even if it turns out that it's not yet a good time for a push around longtermism?

What mistake do you most commonly see EAs making?

Pretty hard to say, but the ‘hero worship’ comment (in the sense of ‘where opinions of certain people automatically get much more support instead of people thinking for themselves’) seems pretty accurate.

Insofar as this is a thing, it has a few bad effects: (i) means that more meme-y ideas get overrepresented relative to boring ideas; (ii) EA ideas don’t get stress-tested enough, or properly ‘voted’ on by crowds; (iii) there’s a problem of over-updating (“80k thinks everyone should earn to give!”; “80k thinks no-one should earn to give!” etc), especially on messages (like career advice) that are by their nature very person- and context-relative.

Very few of these questions are about you as a person. That seems worth noting. On the one hand I'd be interested in what your favourite novel is. On the other hand that seems an inappropriate question to ask - "Will isn't here to answer questions about his personality, he's here to maximise wellbeing". Should we want to humanise key figures within the EA ideological space (like you)?

If yes, what made you laugh recently?

I think asking more personal questions in AMAs is a good idea! 

Favourite novel: I normally say Crime and Punishment by Dostoevsky, but it’s been a long time since I’ve read it so I’m not sure I can still claim that. I just finished The Dark Forest by Liu Cixin and thought it was excellent. 

Laugh: my partner is a very funny person. Last thing that made me laugh was our attempt at making cookies, but it’s hard to convey by text.

This reminds me of the most important AMA question of all:

MacAskill, would you rather fight 1 horse-sized chicken, or 100 chicken-sized horses?

I'm pretty terrified of chickens, so I'd go for the horses.

1
Nathan Young
For what it's worth, my thoughts are that I'd love to humanise such people, but I'd probably like to do that by consuming content they choose to produce or having dinner with them, etc. It does seem like a waste of the valuable time that Will chooses to spend here to ask him for personal info when he could more valuably spend that time imparting information. We aren't friends; we're working on a project together. Obviously that project might work better if we humanise each other. On the other hand, in-jokes cause cliques, which perhaps inhibit the growth of ideologies - the lack of, say, atheist culture within EA allows Christians within EA to exist, which might not be possible if this forum contained loads of in-jokes/personal non-EA views. Interested to hear thoughts.

I remember going to a 'fireside chat' at EAGxOxford a few years ago - the first such conference I'd been to. The topic was general wellbeing amongst EAs. Hearing Will and the other participants talk candidly about difficulties they'd faced was very humbling and humanising.

I don't think we should necessarily shy away from such questions.

4
Nathan Young
Thank you. Though in this set of questions, I think we have, right?

What piece of advice would you give to your 20-year-old self?

Because my life has been a string of lucky breaks, ex post I wouldn’t change anything. (If I’d gotten good advice at age 20, my life would have gone worse than it in fact has gone.) But assuming I don’t know how my life would turn out: 

  • Actually think about stuff and look stuff up, including on big-picture questions, like 'what is the most important problem in the world?'
  • Take your career decision really seriously. Think of it as a research project, dedicate serious time to it. Have a timeline for your life-plans that’s much longer than your degree. Reach out to people you want to be like and try to talk with them for advice. 
  • It doesn’t matter whether the label ‘depressed’ applies to you or not, what matters is whether e.g. taking this pill, or seeing a counsellor, would be beneficial. (And it would.) 
  • You don’t need to be so scared - everyone else is just making it up as they go, too.

Then more concretely (again, this is assuming I don’t know how things actually turn out):

  • Switch degree from philosophy to maths with the aim afterwards of doing a PhD in economics. (At the time I had no idea what economics was about; I thought it was just for bankers.) But keep reading moral philosophy.  Accept that this will put you two years behind, but this isn’t a big deal. 

(I’m assuming that “Buy Apple stock” is not in the spirit of the question!)

What do you think the best argument is against strong longtermism?

9
William_MacAskill
I think cluelessness-ish worries. From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad. The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work, and there’s some principled grounds for saying “I’m confident that this short-run good thing I do is good, and (given my not-completely-precise credences) I shouldn’t think that the expected value of the more speculative stuff is either positive or negative.”

> From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad.

I worry that this type of problem is often exaggerated, e.g. with the suggestion that 'proposed x-risk A has some arguments going for it, but one could make arguments for thousands of other things' when the thousands of other candidates are never produced and could not be produced and appear to be in the same ballpark. When one makes a serious effort to catalog serious candidates at reasonable granularity the scope of considerations is vastly more manageable than initially suggested, but cluelessness is invoked in lieu of actually doing the search, or a representative subset of the search.

9
William_MacAskill
I think you might be misunderstanding what I was referring to. An example of what I mean: Suppose Jane is deciding whether to work for DeepMind on the AI safety team. She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad. Because there’s some precisification of her credences on which taking the job is good, and some on which taking the job is bad, then if she uses a Liberal decision rule (= it is permissible for you to perform any action that is permissible according to at least one of the credence functions in your set), it’s permissible for her to take the job or not take the job.

The issue is that, if you have imprecise credences and a Liberal decision rule, and are a longtermist, then almost all serious contenders for actions are permissible. So the neartermist would need to have some way of saying (i) we can carve out the definitely-good part of the action, which is better than not-doing the action on all precisifications of the credence; (ii) we can ignore the other parts of the action (e.g. the flow-through effects) that are good on some precisifications and bad on some precisifications. It seems hard to make that theoretically justified, but I think it matches how people actually think, so at least has some common-sense motivation.

But you could do it if you could argue for a pseudodominance principle that says: "If there's some interval of time t_i over which action x does more expected good than action y on all precisifications of one's credence function, and there's no interval of time t_j at which action y does more expected good than action x on all precisifications of one's credence function, then you should choose x over y".

(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on
She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad.

That's an awfully (in)convenient interval to have! That is the unique position for an interval of that length with no distinguishing views about any parts of the interval, such that integrating over it gives you a probability of 0.5 and expected impact of 0.

The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work,

If the argument from cluelessness depends on giving that kind of special status to imprecise credences, then I just reject them for the general reason that coarsening credences leads to worse decisions and predictions (particularly if one has done basic calibration training and has some numeracy and skill at prediction). There is signal to be lost in coarsening on individual questions. And for compound questions with various premises or contributing factors making use of the signal on each of those means y... (read more)
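To make the imprecise-credence worry concrete, here's a minimal sketch of the Jane example under a Liberal decision rule. All numbers, including the payoff values and the particular precisifications, are illustrative assumptions, not from the discussion above:

```python
# Jane's credence that taking the job SPEEDS UP AI development is
# imprecise: the interval [0.4, 0.6], represented here by a sample
# of precisifications (precise credence functions in her set).
precisifications = [0.40, 0.45, 0.50, 0.55, 0.60]

# Illustrative payoffs: speeding up AI is assumed bad, slowing it
# down good, and declining the job has no effect either way.
SPEEDUP_VALUE = -10.0
SLOWDOWN_VALUE = +10.0
DECLINE_VALUE = 0.0

def ev_take_job(p):
    """Expected value of taking the job under one precise credence p."""
    return p * SPEEDUP_VALUE + (1 - p) * SLOWDOWN_VALUE

# Liberal rule: an act is permissible if at least one precisification
# makes it at least as good as the alternative.
take_ok = any(ev_take_job(p) >= DECLINE_VALUE for p in precisifications)
decline_ok = any(DECLINE_VALUE >= ev_take_job(p) for p in precisifications)

assert take_ok and decline_ok  # both acts come out permissible
```

At p = 0.4 taking the job has positive expected value, and at p = 0.6 it has negative expected value, so the Liberal rule licenses both options; this is the sense in which "almost all serious contenders for actions are permissible" once credences are imprecise.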

3
Milan_Griffes
I give some examples here; the "stratospheric aerosol injection to blunt impacts of climate change" example is an x-risk reduction one. It's pretty straightforward to tell a story about how any well-intentioned action could have unintended, negative consequences in the long run. Lots of sci-fi uses this premise. This doesn't mean the stories are always plausible (though note that "plausibility" here is usually assessed by intuition), and it's not the same as generating a comprehensive catalog of stories about how an action could go (the state space here is too large to generate such a catalog).
2
Milan_Griffes
Shameless plug for my essay on cluelessness: 1, 2, 3, 4

What has been your biggest success? What has been your biggest mistake?

I guess simply getting the ball rolling on GWWC should probably win, but the thing I feel proudest of is probably DGB — I don’t think it’s perfect, but I think it came together well, and it’s something where I followed my gut even though others weren’t as convinced that writing a book was a good idea, and I’m glad I did. 

On mistakes: a huge number in the early days, of which poor communication with GiveWell was huge and really could have led to EA as a genuine unified community never forming; the controversial early 80k campaign around earning to give was myopic, too. More recently, I think I really messed up in 2016 with respect to coming on as CEA CEO. I think for being CEO you should be either in or out, where being ‘in’ means 100% committed for 5+ years. Whereas for me it was always planned as a transitional thing (and this was understood internally, but I think not communicated properly externally), and when I started I had just begun a tutorial fellowship at Oxford (which other tutorial fellows normally describe as ‘their busiest ever year’), and was also still dealing with the follow-on PR from DGB, so it was like I already had one and a half other full-time jobs. And there wa... (read more)

To what extent, if any, have online sources (such as Less Wrong) influenced your thinking, as compared to "traditional" philosophy?

[anonymous]23

If you had the option of making a small change to EA by pressing a button, would you do it? If so, what would it be? What about a big change?

[anonymous]22

Is there a question you want to answer that hasn't been asked yet? What's your answer to it?

What topics do you wish were more discussed within EA?

[anonymous]21

What do you think are the things or ideas that most casual EAs don't know much about or appreciate enough, but are (deservedly or undeservedly) very influential in EA hubs or organizations like CEA, 80K, GPI, etc? Some candidates I have in mind for this are things like cluelessness, longtermism, the possibility of short AI timelines, etc.

6
SiebeRozendal
I find this also interesting to answer myself, although I'm curious to see Will's answer. I think in general casual EAs have less nuanced views than those who spend full-time thinking about the issues (obviously..). For example, our certainty about the relative importance of AI compared to other x-risks is probably being overplayed in the community. In general, I find 'casual EAs' to have an overly simplistic view of how the world works, while engaging more with these topics brings to the surface the complexity of the issues. In a complex world, precise, quantitative models are more likely to be wrong, and it's worth pursuing a broader set of actions. I have seen multiple smart, motivated 'casual EAs' basically give up on EA because "they couldn't see themselves being an AI safety researcher". (I'd love to see a list like "20 things to do for the long-term future without being an AI safety researcher".) I think simplification is definitely useful for getting a basic grasp of issues and making headway. In fact, this "ignorance of complexity" may actually be a big strength of EA, because people don't get overwhelmed and demotivated by the daunting amount of complexity, and actually try to tackle issues that most of the world ignores because they're too big. However, EAs should expect things to become more complex, more nuanced, and less clear as they learn more about a topic.

Even though ‘utilitarianism’ gets several times the search traffic of terms like ‘effective altruism,’ ‘givewell,’ or ‘peter singer’, there’s currently no good online introduction to utilitarianism. This seems like a missed opportunity.

What similar gaps in easily-accessible EA topics do you think exist?

(I think Rob Wiblin's now-archived effective altruism FAQ was the best intro to EA around - much better than anything similar offered 'officially'. I've also toyed with writing up some of David Pearce's work in a more accessible format.)

I'm surprised by how much low-hanging fruit is still left in editing Wikipedia to make more people aware of (and give them a more sophisticated understanding of) important ideas that are relevant to EA. I've been adding and improving Wikipedia content on the side for two years now, with a clear focus on articles that are related to altruism.

In my experience, editing Wikipedia i) is really easy, ii) is fun, iii) offers many content gaps left to fill, and iv) exposes the content you write to a much larger audience (sometimes several orders of magnitude larger) than if you wrote for a private blog or the EA Forum instead. Against this background, I'm surprised that more knowledgeable EAs don't contribute to Wikipedia (feel free to reach out to me if you would potentially like to do just that).

A word of caution: the quality control on Wikipedia is fairly strong and it is generally disliked if people make edits that come across as ideologically-motivated marketing rather than as useful information. For this reason, I aspire to genuinely improve the quality of the article with all the edits I make, though my choice of articles to edit is informed by ... (read more)

9
Ramiro
Maybe one could argue in favor of an article in the Stanford Encyclopedia of Philosophy or in the IEP, too.

Population ethics; moral uncertainty.

I wonder if someone could go through Conceptually and make sure that all the wikipedia entries on those topics are really good?

Rob's FAQ is also my favorite introduction to EA, and I'll be spending some time over the next month thinking about whether there's a good way to blend the style of that introduction with the current EA.org introduction (which is due for an update).

Anon asks: "1. Population ethics: what view do you put most credence in? What are the best objections to it?"

Total view: just add up total wellbeing. 

Best objection: the very repugnant conclusion. Take any population Pi with N people in unadulterated bliss, for any N. Then there is some number M such that a population Pj, consisting of 10^100·N people living in utter hell plus M people with lives barely worth living, is better than Pi. 
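The totalist arithmetic behind this can be sketched with made-up welfare levels (the specific numbers are illustrative assumptions; integer values keep Python's arithmetic exact at these magnitudes):

```python
# Illustrative per-person welfare levels (assumptions, not canonical):
BLISS, HELL, BARELY = 100, -100, 1

def total_welfare(groups):
    """Total view: value of a population = sum of everyone's welfare.

    groups is a list of (number_of_people, welfare_per_person) pairs.
    """
    return sum(n * w for n, w in groups)

N = 10**6                          # Pi: N people in unadulterated bliss
P_i = total_welfare([(N, BLISS)])

HELL_POP = 10**100 * N             # Pj's population in utter hell
# Smallest M making Pj strictly better than Pi under the total view:
M = (BLISS * N - HELL * HELL_POP) // BARELY + 1

P_j = total_welfare([(HELL_POP, HELL), (M, BARELY)])
assert P_j > P_i   # totalism ranks Pj above Pi once M is large enough
```

However astronomical M has to be (here on the order of 10^108 barely-worth-living lives), the total view guarantees such an M exists, which is exactly the bullet the objection asks the totalist to bite.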

"2. Population ethics: do you think questions about better/worse worlds are sensibly addressed from a "fully impartial" perspective? (I'm unsure what that would even mean... maybe... the perspective of all possible minds?). Or do you prefer to anchor reflection on population ethics in the values of currently existing minds (e.g. human values)?"

Yeah, I think we should try to answer this ‘from the point of view of the universe’. 

"3. Given your work on moral uncertainty, how do you think about claims associated with conservative world views? In particular, things like (a) the idea that revolutionary individual reasoning is rather error prone, and requires the refining discipline of tradition as a guide... (read more)

I'm interested in hearing more about your thoughts on the Long Reflection. How likely is it to happen by default? How likely is it to produce a good outcome by default? What kind of things do you see as useful for making it more likely to happen and more likely to produce a good outcome? Anything else you want to say about it? Will you be writing it up somewhere in the near future (in which case I could just wait for that)?

The GPI Research Agenda references "Greg Lewis, The not-so-Long Reflection?, 2018" but I'm unable to find it anywhere.

ETA: I've been told that Greg's article is currently in draft form and not publicly available, and both Toby Ord and Will MacAskill's upcoming books will have some discussions of the Long Reflection.

If you could persuade people of any professional background to dedicate their careers to working for the current core EA orgs, what kinds of backgrounds/skill sets/career histories would be represented which aren't currently?

Have you considered doing the mainstream intellectual podcasts as a means of repping 80k? Eg David Pakman, Dave Rubin, whatever you might get onto? If you don't think that's a good idea, why not?

7
William_MacAskill
I’ve been on David Pakman; haven’t been invited onto Dave Rubin but I tend to do podcasts like those when I get the chance, unless I need to be in person.
0
Garrison
I wouldn't consider Dave Rubin's show intellectual, but he does have reach.

Do you worry that your involvement in utilitarianism.net could exacerbate the existing confusion and lead people to think that EA and utilitarianism are the same thing?

I am reminded of the story where Victor Hugo, who was away from Paris when Les Misérables was first published, wrote his editor a letter inquiring about the sales of his much-anticipated novel. The letter contained only one character: ?

A few days later, the reply arrived. It was equally brief: !

Les misérables was an immediate best-seller.

(Unfortunately, the story is likely apocryphal.)

What do you think is the biggest professional mistake you've made (of the ones you can share)? What is the biggest single professional 'right choice' you made? [Side-note: it's interesting that we don't have a word for the opposite of a mistake, just as we don't have one for the opposite of a catastrophe.]

Denkenberger🔸
Well, there is "eucatastrophe" in existential hope.

Do you think economic growth is key to popular acceptance of longtermism, as increased wealth leads people to adopt post-materialist values?

[anonymous]

What do you see as the best longterm path for EA? Should we try to stay small and weird, or try to get buy-in from the masses? How important is academic influence for the long term success of EA?

Will there be anything in the book new for people already on board with longtermism?

What is your opinion on Extinction Rebellion? (asking this question because they seem concerned about future generations, able to draw attention, and (somewhat) open to changing their mind.)

How would effective altruism be different if we're living in a simulation?

[anonymous]

How do you decide your own cause prioritization? Relatedly, how do you decide where to donate to?

Do you have a coach? Why, or why not? (I feel they really help with stuff like "stay focused on a few topics" and keeping one accountable to those goals)

I note your main project is writing a book on longtermism. Would you like to see the EA movement going in a direction where it focuses exclusively, or almost exclusively, on longtermist issues? If not, why not?

To explain the second question, it would seem answering 'no' to the first question would be in tension with advocating (strong) longtermism.

I'm pro there being a diversity of worldviews and causes in EA - I'm not certain in longtermism, and think such diversity is a good thing even on longtermist grounds. I mention reasons in the 'steel manning arguments against EA's focus on longtermism' question. And I talked a little bit about this in my recent EAG London talk. Other considerations are helping to avoid groupthink (which I think is very important), positive externalities (a success in one area transfers to others) and the mundane benefit of economies of scale.

I do think that the traditional poverty/animals/x-risk breakdown feels a bit path-dependent though, and we could have more people pursuing cause areas outside of that. I think that your work fleshing out your worldview and figuring out what follows from it is the sort of thing I'd like to see more of.

Do you think that the empirical finding that pain and suffering are distributed along a lognormal distribution (cf. Logarithmic Scales of Pleasure and Pain) has implications for how to prioritize causes? In particular, what do you say about these tentative implications:

Of particular note as promising Effective Altruist careers, we would highlight working directly to develop remedies for specific, extremely painful experiences. Finding scalable treatments for migraines, kidney stones, childbirth, cluster headaches, CRPS, and fibromyalgia may be extremely hi
... (read more)

[Meta note: this post doesn't appear on the front page and it probably should! I only found it through the RSS feed.]

JP Addison🔸
There's a delay between when something gets posted and when a moderator categorizes it. That said, this seems like a classic community post to my eyes, but I'm not a moderator.
Aaron Gertler 🔸
Given the broad range of topics covered, it's difficult to place this post into a particular category -- some questions are more "Frontpage", others more "Community". Will's answers are likely to include a mix of content that fits both categories, but the post is probably a better fit for "Community" overall, because the kinds of questions people wound up asking were mostly related to topics for which we use that category.

Hi William! Great idea.

Hope it's still possible to submit these!

  • I love the EA movement - the community, values, and work that goes on are just very aligned with me personally. One thing that stands out, though, is that every organization either recommends, was born out of, or has sent staff members to work at, seemingly every other organization within the movement - the OpenPhil/GiveWell/Good Ventures group, the 80k/CEA group, some others like CFAR and so on. Do you see this as a risk, or a positive in terms of maintaining some unity around the overall m
... (read more)

Who do you think it would be most valuable if you could be put in touch with?

Can you speak to the expected value/impact (either marginal or total) of writing a book?

I've been trying to evaluate career decisions about studying psychology and neuroscience. Do you think that studying motivation from a neuroscientific perspective is an effective way to contribute to AI alignment work? And do you think that, considering the scale of mental illnesses such as anxiety and depression, working to better understand anxiety and depression is also highly effective?

Personally, I would be leery of doing an AMA currently because I don't feel I have that much that the whole community ought to spend time reading.

Nathan Young
In response to "If it goes well, hopefully others will try it out too" - in this community I think there is a very high entry barrier to feeling like you have something to say in an AMA format.

Hmm, that's a shame. I hereby promise to ask some questions to whoever does the next AMA!

Ben Millwood🔸
Why would the whole community read it? You'd set out in the initial post, as Will has done, why people might or might not be interested in what you have to say, and only people who passed that bar would spend any real time on it. I don't think the bar should be that high.

PlayPumps: overrated or underrated?

[anonymous]
I don't understand why this question is downvoted with so many votes. It seems like a reasonable, if underspecified, question to me. Edit: When I commented, this comment was at -2 with perhaps 15 votes.

If it goes well, hopefully others will try it out too.

LessWrong has a kind of AMA open thread where a bunch of people, including some EAs, have been doing AMAs. I'm not sure if others are still monitoring it and answering questions, but I am at least.

Anon asks: “When you gave evidence to the UK government about the impacts of artificial intelligence, why didn't you talk about AI safety (beyond surveillance)?”

https://www.parliament.uk/ai-committee

I think you’re mistaking me for someone else!

I recently finished the last season of Vox's Future Perfect podcast. One of the focus areas was questions about democracy and charitable giving, like how Bill Gates has helped lots of folks, but his projects are determined ultimately by his personal decision matrix. There are many more examples: trust funds based on poorly conceived goals, private donations to public schools, social engineering by large companies. Do questions like this trouble you? Do you feel that democracy and the effective altruism movement are at odds?

If there were an election, how do you think you would decide who to vote for? Would you produce any content on that decision making process?

What are your thoughts on the rise of left-wing politics in the US (e.g. the Sanders campaign, the election of AOC and the rest of the squad, the victories and near-victories at the local levels)? Related: how do you think EAs should think about the 2020 US presidential race?

Hello Will,

I have a question about longtermism and its use within the EA movement. While I find your (strong) longtermism hypothesis quite plausible and convincing, I do consider some "short-termist" cause areas to be quite important, even in the long term. (I always go back to hearing that "you have to be a short-termist to care about wild animal suffering", which struck me as odd.)

Because of that, I liked that the classic longterm cause area was called x/s-risk prevention, because that was one way to create value in the longterm. I t... (read more)
