This is a special post for quick takes by MichaelA🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
This RA says they definitely learned more from this RA role than they would’ve if doing a PhD
Mainly due to tight feedback loops
And strong incentives for the senior researcher to give good feedback
The RA is producing "intermediate products" for the senior researcher. So the senior researcher needs and uses what the RA produces. So the feedback is better and different.
In contrast, if the RA was working on their own, separate projects, it would be more like the senior researcher just looks at it and grades it.
The RA has mostly just had to do literature reviews of all sorts of stuff related to the broad topic the senior researcher focuses on
So the RA was incentivised more than pretty much anyone else to just get familiar with all the stuff under this umbrella
They wouldn’t be able or encouraged to do that in a PhD
The thing the RA hasn’t liked is that he hasn’t been producing his own research
For the last few years, I’ve been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you’re saying makes sense, it’s important to note that the quality of one’s experience as an RA will always depend to a very significant extent on one’s supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it’s like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it’s harder than you’d think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.
On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don’t really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, thi... (read more)
Thanks, I think this provides a useful counterpoint/nuance that should help people make informed decisions about whether to try to get RA roles, how to choose which roles to aim for/accept, and whether and how to facilitate/encourage other people to offer or seek RA roles.
Your second paragraph is also interesting. I hadn't previously thought about how there may be overlap between the skills/mindsets that are useful for RAs and those useful for research management, and that seems like a useful point to raise.
Minor point: That point was from the RA I spoke to, not from me. (But I do endorse the idea that such specialisation might be a good thing.)
More substantive point: It's worth noting that, while a lot of the research and research training I particularly care about happens in traditional academia, a lot also happens in EA parts of academia (e.g., FHI, GPI), in EA orgs, in think tanks, among independent researchers, and maybe elsewhere. So even if this specialisation wouldn't yield much in the way of productivity gains compared to the current state of affairs in one of those "sectors", it could perhaps do so in others. (I don't know if it actually would, though - I haven't looked into it enough, and am just making the relatively weak claim that it might.)
3
HStencil
Yeah, I think it’s very plausible that career RAs could yield meaningful productivity gains in organizations that differ structurally from “traditional” academic research groups, including, importantly, many EA research institutions. I think this depends a lot on the kinds of research that these organizations are conducting (in particular, the methods being employed and the intended audiences of published work), how the senior researchers’ jobs are designed, what the talent pipeline looks like, etc., but it’s certainly at least plausible that this could be the case.
On the parallels/overlap between what makes for a good RA and what makes for a good research manager, my view is actually probably weaker than I may have suggested in my initial comment. The reason why RAs are sometimes promoted into research management positions, as I understand it, is that effective research management is believed to require an understanding of what the research process, workflow, etc. look like in the relevant discipline and academic setting, and RAs are typically the only people without PhDs who have that context-specific understanding. Plus, they’ll also have relevant domain knowledge about the substance of the research, which is quite useful in a research manager, too. I think these are pretty much all of the reasons why RAs may make for good research managers. I don’t really think it’s a matter of skills or of mindset anywhere near as much as it’s about knowledge (both tacit and not). In fact, I think one difficulty with promoting RAs to research management roles is that often, being a successful RA seems to select for traits associated with not having good management skills (e.g., being happy spending one’s days reading academic papers alone with very limited opportunities for interpersonal contact). This is why I limited my original comment on this to RAs who can effectively manage people, who, as I suggested, I think are probably a small minority. Because good research manager
2
MichaelA🔸
Ah, thanks for that clarification! Your comments here continue to be interesting food for thought :)
One idea that comes to mind is to set up an organization that hires RAs-as-a-service. Say, a nonprofit that works with multiple EA orgs and employs several RAs, some full-time and others part-time (think, a student job). This org can then handle recruiting, basic training, employment and some of the management. RAs could work on multiple projects with perhaps multiple different people, and tasks could be delegated to the organization as a whole, which would find the right RA for each one.
A financial model could be something like: EA orgs pay 25-50% of the relevant salaries for projects they recruit RAs for, and the rest is covered by donations to the non-profit itself.
Yeah, I definitely think this is worth someone spending at least a couple hours seriously thinking about doing, including maybe sending out a survey to or conducting interviews with non-junior researchers[1] to gauge interest in having an RA if it was arranged via this service.
I previously suggested a somewhat similar idea as a project to improve the long-term future:
And Daniel Eth replied there:
I'm going to now flag this idea to someone who I think might be able to actually make it happen.
5
MichaelA🔸
Someone pointed out to me that BERI already do some amount of this. E.g., they recently hired, or are hiring, RAs for Anders Sandberg and Nick Bostrom at FHI.
It seems plausible that they're doing all the stuff that's worth doing, but it also seems plausible (probable?) that there's room for more, or for trying out different models. I think anyone interested in potentially actually starting an initiative like this should probably touch base with BERI before investing lots of time into it.
5
EdoArad
Ah, right! There still might be a need outside of longtermist research, but I definitely agree that it'd be very useful to reach out to them to learn more.
For further context for people who might potentially go ahead with this, BERI is a nonprofit that supports researchers working on existential risk. I guess that Sawyer is the person to reach out to.
3
MichaelA🔸
Btw, the other person I suggested this idea to today is apparently already considering doing this. So if someone else is interested, maybe contact both Sawyer and me, and I can put you in touch with this person.
And this person would do it for longtermist researchers, so yeah, it seems plausible/likely to me that there's more room for this for researchers focused on other cause areas.
5
Jamie_Harris
These feel like they should be obvious points and yet I hadn't thought about them before. So this was also an update for me! I've been considering PhDs, and your stated downsides don't seem like big downsides for me personally, so it could be relevant to me too.
Ok, so imagine you/we (the EA community) successfully make the case and encourage demand for RA positions. Is there supply?
* I don't recall ever seeing an RA position formally advertised (though I haven't been looking out for them per se, don't check the 80k job board very regularly, etc)
* If I imagine myself or my colleagues at Sentience Institute with an RA, I can imagine that we'd periodically find an RA helpful, but not enough for a full-time role.
* Might be different at other EA/longtermist nonprofits but we're primarily funding-constrained. Apart from the sense that they might accept a slightly lower salary, why would we hire an RA when we could hire a full-blown researcher (who might sometimes have to do the lit reviews and grunt-work themselves)?
6
HStencil
I actually think full-time RA roles are very commonly (probably more often than not?) publicly advertised. Some fields even have centralized job boards that aggregate RA roles across the discipline, and on top of that, there are a growing number of formalized predoctoral RA programs at major research universities in the U.S. I am actually currently working as an RA in an academic research group that has had roles posted on the 80,000 Hours job board. While I think it is common for students to approach professors in their academic program and request RA work, my sense is that non-students seeking full-time RA positions very rarely have success cold-emailing professors and asking if they need any help. Most professors do not have both ongoing need for an (additional) RA and the funding to hire one (whereas in the case of their own students, universities often have special funding set aside for students’ research training, and professors face an expectation that they help interested students to develop as researchers).
Separately, regarding the second bullet point, I think it is extremely common for even full-time RAs to only periodically be meaningfully useful and to spend the rest of their time working on relatively low-priority “back burner” projects. In general, my sense is that work for academic RAs often comes in waves; some weeks, your PI will hand you loads of things to do, and you’ll be working late, but some weeks, there will be very little for you to do at all. In many cases, I think RAs are hired at least to some extent for the value of having them effectively on call.
6
EdoArad
Regarding the third bullet point, there might be a nontrivial boost to the senior researchers' productivity and well-being.
Doing grunt-work can be disproportionately (relative to the time it takes) tiring and demotivating, and most people have some type of work that they dislike or just aren't good at, which could perhaps be delegated. Additionally, having a (strong and motivated) RA might just be more fun and help with making personal research projects more social and meaningful.
Regarding the salary, I've quickly checked GiveWell's salaries at Glassdoor
So from that I'd guess that an RA could cost about 60% as much as a senior researcher. (I'm sure that there is better and more relevant information out there)
2
MichaelA🔸
I think you're asking "...encourage that people seek RA positions. Would there be enough demand for those aspiring RAs?"? Is that right? (I ask because I think I'm more used to thinking of demand for a type of worker, and supply of candidates for those positions.)
I don't have confident answers to those questions, but here are some quick, tentative thoughts:
* I've seen some RA positions formally advertised (e.g., on the 80k job board)
* I remember one for Nick Bostrom and I think one for an economics professor, and I think I've seen others
* I also know of at least two cases where an RA position was opened but not widely advertised, including one case where the researcher was only a couple years into their research career
* I have a vague memory of someone saying that proactively reaching out to researchers to ask if they'd want you to be an RA might work surprisingly often
* I also have a vague impression that this is common with university students and professors
* But I think this person was saying it in relation to EA researchers
* (Of course, a vague memory of someone saying this is not very strong evidence that it's true)
* I do think there are a decent number of EA/longtermist orgs which have or could get more funding than they are currently able or willing to spend on their research efforts, e.g. due to how much time from senior people would be consumed for hiring rounds or managing and training new employees
* Some of these constraints would also constrain the org from taking on RAs
* But maybe there are cases where the constraint is smaller for RAs than for more independent researchers?
* One could think of this in terms of the org having already identified a full researcher whose judgement, choices, output, etc. the org is happy with, and they've then done further work to get that researcher on the same page with the org, more trained up, etc. The RA can slot in under that researcher and help them do their work better. So
3
MichaelA🔸
See also 80k on the career idea "Be research manager or a PA for someone doing really valuable work".
This began as a Google Doc of notes to self. It's still pretty close to that status - i.e., I don't explain why each thing is relevant, haven't spent a long time thinking about the ideal way to organise this, and expect this shortform omits many great readings and tips. But seve... (read more)
Thanks for posting this! This is a gold mine of resources. This will save the Nonlinear team so much time.
2
Ramiro
Did you consider if this could get more views if it was a normal "longform" post? Maybe it's not up to your usual standards, but I think it's pretty good.
2
MichaelA🔸
Nice to hear you think so!
I did consider that, but felt like maybe it's too much of just a rough, random grab-bag of things for a top-level post. But if the shortform or your comment gets unexpectedly many upvotes, or other people express similar views in comments, I may "promote" it.
2
MichaelA🔸
More concretely, regarding generating and prioritising research questions, one place to start is these lists of question ideas:
* Research questions that could have a big social impact, organised by discipline
* A central directory for open research questions
* Crucial questions for longtermists
* Some history topics it might be very valuable to investigate
* This is somewhat less noteworthy than the other links
And for concrete tips on things like how to get started, see Notes on EA-related research, writing, testing fit, learning, and the Forum.
One to add to the list: More Than Just Good Causes. A Framework For Understanding How Social Movements Contribute To Change by Eugenia Lafforgue and Brett Mills (Future Matters Project)
7
Vaidehi Agarwalla 🔸
I have a list here that has some overlap but also some new things: https://docs.google.com/document/d/1KyVgBuq_X95Hn6LrgCVj2DTiNHQXrPUJse-tlo8-CEM/edit#
2
MichaelA🔸
That looks very helpful - thanks for sharing it here!
4
rosehadshar
Some more recent things:
* Mauricio, What Helped the Voiceless? Historical Case Studies (and a shorter version here)
* James Ozden, A case for the effectiveness of protest
Also fwiw, I have read the ACE case studies, and I think that the one on environmentalism is pretty high quality, more so than some of the other things listed here. I'd recommend that people interested in working on this stuff read the environmentalism one.
3
rosehadshar
Another one: Alex Hill and Jaime Sevilla, Attempt at understanding the role of moral philosophy in moral progress (on women’s suffrage and animal rights)
3
Shri_Samson
This is probably too broad but here's Open Philanthropy's list of case studies on the History of Philanthropy, which includes ones they have commissioned, though most are not done by EAs, with the exception of Some Case Studies in Early Field Growth by Luke Muehlhauser.
Edit: fixed links
2
MichaelA🔸
Yeah, I think those are relevant, thanks for mentioning them!
It looks like the links lead back to your comment for some reason (I think I've done similar in the past). So, for other readers, here are the links I think you mean: 1, 2.
(Also, FWIW, I think if an analysis is by a non-EA but commissioned by an EA, I'd say that essentially counts as an "EA analysis" for my purposes. This is because I expect that such work's "precise focuses or methodologies may be more relevant to other EAs than would be the case with [most] non-EA analyses".)
Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Armed with this concept, I try to stick to the following epistemic/discussion norms, and think it's good for other people to do so as well:
Trying to keep track of my own independent impressions separately from my all-things-considered beliefs (which also takes into account peer disagreement)
Trying to be clear about whether I'm reporting my independent impression or my all-things-considered belief
Feeling comfortable reporting my own independent impression, even when I know it differs from the impressions of people with more expertise in a topic
Thanks, I appreciate having something to link to! My independent impression is that it would be even easier to link to and easier to find as a top-level post.
2
MichaelA🔸
Thanks for the suggestion - I've now gone ahead and made that top-level post :)
2
MichaelA🔸
I just re-read this comment by Claire Zabel, which is also good and is probably where I originally encountered the "impressions" vs "beliefs" distinction.
(Though I still think that this shortform serves a somewhat distinct purpose, in that it jumps right to discussing that distinction, uses terms I think are a bit clearer - albeit clunkier - than just "impressions" vs "beliefs", and explicitly proposes some discussion norms that Claire doesn't quite explicitly propose.)
The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)
Guns, Germs, and Steel - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.
2
MichaelA🔸
Great, thanks for adding that to the collection!
3
MichaelA🔸
Suggested by a member of the History and Effective Altruism Facebook group:
* https://scholars-stage.blogspot.com/2019/07/a-study-guide-for-human-society-part-i.html
* Disputers of the Tao, by A. C. Graham
Note: This shortform is now superseded by a top-level post I adapted it into. There is no longer any reason to read the shortform version.
Book sort-of-recommendations
Here I list all the EA-relevant books I've read or listened to as audiobooks since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.
I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because:
some of the factors making these books more/less useful to me won't generalise to most other people
I'm including all relevant books I've read (not just the top picks)
Let me know if you want more info on why I found something useful or not so useful.
I recommend making this a top-level post. I think it should be one of the most-upvoted posts on the "EA Books" tag, but I can't tag it as a Shortform post.
2
MichaelA🔸
I had actually been thinking I should probably do that sometime, so your message inspired me to pull the trigger and do it now. Thanks!
(I also made a few small improvements/additions while I was at it.)
I've now turned this into a top-level post, and anyone who wants to read this should now read that version rather than this shortform.
Adding important nuances to "preserve option value" arguments
Summary
I fairly commonly hear (and make) arguments like "This action would be irreversible. And if we don't take the action now, we can still do so later. So, to preserve option value, we shouldn't take that action, even if it would be good to do the action now if now was our only chance."[1]
doing field-building to a new target audience for some important cause area
publicly discussing some important issue in cases where that discussion could involve infohazards, cause polarization, or make our community seem wacky
I think this sort of argument is often getting at something important, but in my experience such arguments are usually oversimplified in some important ways. This shortform is a quickly written[2] attempt to provide a more nuanced picture of that kind of argument. My key points are:
"(Ir)reversibility" is a matter of degree (not a binary), and a matter of the expected extent to which the counterfactual effects we're cons
There may be some posts I missed with the European Union tag, and there are also posts with that tag that aren’t about AI governance but which address a similar ques... (read more)
EDIT: This is now superseded by a top-level post so you should read that instead.
tl;dr: Value large impacts rather than large inputs, but be excited about megaprojects anyway because they're a new & useful tool we've unlocked
A lot of people are excited about megaprojects, and I agree that they should be. But we should remember that megaprojects are basically defined by the size of their inputs (e.g., "productively" using >$100 million per year), and that we don't intrinsically value the capacity to absorb those inputs. What we really care about is huge positive impact, and megaprojects are just one means to that end, and actually (ceteris paribus) we should be even more excited about achieving the same impacts using fewer inputs & smaller projects. How can we reconcile these thoughts, and why should we still be excited about megaprojects?
I suggest we think about this as follows:
Imagine a Venn diagram with a circle for megaprojects and another circle for projects with great expected value (EV)
Projects with great EV are really the focus and always have been
Projects like 80,000 Hours, FHI, and Superintelligence were each far smaller than megaprojects, but in my view pr
I think the general thrust of your argument is clearly right, and it's weird/frustrating that this is not the default assumption when people talk about megaprojects (though maybe I'm not reading the existing discussions of megaprojects sufficiently charitably).
2 moderately-sized caveats:
Re 2) "Projects with great EV are really the focus and always have been", I think in the early days of EA, and to a lesser degree still today, a lot of focus of EA isn't on great EV so much as high cost-effectiveness. To some degree the megaprojects discourse was set to push back against this.
Re: 5, "It's probably also partly because a lot of people aren't naturally sufficiently ambitious or lack sufficient self-confidence" I think this is definitely true, but maybe I'd like to push back a bit on the individual framing of this lack of ambition, as I think it's partially cultural/institutional. That is, until very recently, we (EA broadly, or the largest funders etc), haven't made it as clear that EA supports and encourages extreme ambition in outputs in a way that means we (collectively) are potentially willing to pay large per-project costs in inputs.
Thanks - I think those are both really good points! I've now made a top-level post version of this shortform, with the main modifications being adjustments in light of your points (plus, unrelatedly, adding a colourful diagram because colourful diagrams are fun).
The article was far better than I expect most reporting on climate change as a potential existential risk to be
This is in line with Kelsey Piper generally seeming to do great work
I particularly appreciated that it (a) emphasised how the concepts of catastrophes in general and extinction in particular are distinct and why that matters, but (b) did this in a way that I suspect has a relatively low risk of seeming callous, nit-picky, or otherwise annoying to people who care about climate change
But I also had some substantive issues with the article, which I'll discuss below
The article conflated “existential threat”/“existential risk” with “extinction risk”, thereby ignoring two other types of existential catastrophe: unrecoverable collapse and unrecoverable dystopia
Some quotes from the article to demonstrate the conflation I'm referring to:
“But there’s a standard meaning of that phrase [existential threat]: that it’s going to wipe out humanity — or even, as Warren implied Wednesday night, all life
(Perhaps some older Slate Star Codex posts? I can't remember for sure.)
Notes
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Also, I'm aware that there has been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only analyses by EAs/EA-adjacent people here are that:
their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
links to non-EA work can be found in most of the things I list here
I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)
The x-risk policy pipeline & interventions for improving it: A quick mapping
I just had a call with someone who's thinking about how to improve the existential risk research community's ability to cause useful policies to be implemented well. This made me realise I'd be keen to see a diagram of the "pipeline" from research to implementation of good policies, showing various intervention options and which steps of the pipeline they help with. I decided to quickly whip such a diagram up after the call, forcing myself to spend no more than 30 mins on it. Here's the result.
(This is of course imperfect in oodles of ways, probably overlaps with and ignores a bunch of existing work on policymaking*, presents things as more one-way and simplistic than they really are, etc. But maybe it'll be somewhat interesting/useful to some people.)
(If the images are too small for you, you can open each in a new tab.)
Feel free to ask me to explain anything that see... (read more)
Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1]
I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you’d like me to add them in a comment.
(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)
The data
Q1-Q4: [charts of responses, not reproduced here]
Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected ... (read more)
"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"
I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad summary to be biased against them).
That said, as you know, I think your summaries/collections are useful and underprovided.
Good point.
Though I guess I suspect that, if the reason a person finds my original research not so useful is just because they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them. (Rather than making a generic comment about it not seeming useful.)
But I guess this seems less likely in cases where:
* the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or
* the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful")
In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.
To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?
Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?
One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are presumably the things there'd be data on.
But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
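To make that concrete, here's a minimal sketch of the calculation I have in mind, assuming a hypothetical export of (user_id, year_active) records (the real data would presumably have to come from CEA or the Forum team, and the field names here are just illustrative):

```python
from collections import defaultdict

# Hypothetical (user_id, year_active) records; placeholder data only.
activity = [
    ("user_a", 2015), ("user_a", 2016),
    ("user_b", 2015),
    ("user_c", 2016), ("user_c", 2017),
]

users_by_year = defaultdict(set)
for user_id, year in activity:
    users_by_year[year].add(user_id)

years = sorted(users_by_year)
for year in years:
    later_years = [y for y in years if y > year]
    if not later_years:
        continue  # can't yet tell who has "stopped" as of the most recent year
    cohort = users_by_year[year]
    returned = {u for u in cohort if any(u in users_by_year[y] for y in later_years)}
    stopped = len(cohort) - len(returned)
    print(f"{year}: {stopped}/{len(cohort)} users never active again in a later year")
```

(The same loop could be run separately for posting, commenting, and logged-in reading, since those presumably indicate quite different levels of engagement.)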
The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]
risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial
There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:
And they write:
That is much closer to a definition of an existential risk (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that fact and the clash between the term the initiative uses in its name and the term it uses when describing what they'll focus on, it appears this initiative is conflating these two terms/concepts.
This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse, or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom).)
For further discussion, see Clarifying existential risks and existential catastrophes.
(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)
4
MichaelA🔸
Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks
Gregory Lewis, in that profile itself:
Open Philanthropy Project:
Schoch-Spana et al. (2017), on GCBRs, rather than GCRs as a whole:
2
MichaelA🔸
Metaculus features a series of questions on global catastrophic risks. The author of these questions operationalises a global catastrophe as an event in which "the human population decrease[s] by at least 10% during any period of 5 years or less".
2
MichaelA🔸
Baum and Barrett (2018) gesture at some additional definitions/conceptualisations of global catastrophic risk that have apparently been used by other authors:
1
MichaelA🔸
From an FLI podcast interview with two researchers from CSER:
"Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there’s any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change."
Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you’ve got your head around that, different groups have slightly different understandings of what the differences between these three terms are.
So, for some groups, it’s all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; A catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: Maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.
Most of the systems — be this physiological systems, the world’s ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a w
1
MichaelA🔸
Sears writes:
(Personally, I don't think I like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I'm not sure I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd be meaning something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So this sentence sounds to me like it's sort-of conflating GCRs with existential risks.)
Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.
In The Precipice, Ord writes:
[If a nuclear winter occurs,] Existential catastrophe via a global unrecoverable collapse of civilisation also seems unlikely, especially if we consider somewhere like New Zealand (or the south-east of Australia) which is unlikely to be directly targeted and will avoid the worst effects of nuclear winter by being coastal. It is hard to see why they wouldn’t make it through with most of their technology (and institutions) intact.
I share the view that it’s unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that clos... (read more)
Reasons why EU laws/policies might be important for AI outcomes
Based on some reading and conversations, I think there are two main categories of reasons why EU laws/policies (including regulations)[1] might be important for AI risk outcomes, with each category containing several more specific reasons.[2] This post attempts to summarise those reasons.
But note that:
I wrote this quite quickly, and this isn't my area of expertise.
My aim is not to make people focus more on the EU, just to make it clearer what some possible reasons for that focus are. (Overall I actually think the EU is probably already getting enough attention from AI governance people.)
Please comment if you know of relevant prior work, if you have disagreements or think something should be added, and/or if you think I should make this a top-level post.
Note: I drafted this quickly, then wanted to improve it based on feedback & on things I read/remembered since writing it. But I then realised I'll never make the time to do that, so I'm just posting this~as-is anyway since maybe it'll be a bit useful to some people. See also Collection of work on whether/how much people should focus on the EU if they’r... (read more)
Scout Mindset was engaging, easy to read, and had interesting stories and examples
Galef covered a lot of important points in a clear way
She provided good, concrete advice on how to put things into practice
So I'm very likely to recommend this book to people who aren't in the EA community, are relatively new to it, or aren't super engaged with it
I also liked how she mentioned effective altruism itself several times and highlighted its genuinely good features in an accurate way, but without making this the central focus or seeming preachy
(At least, I'm guessing people wouldn't find it preachy - it's hard to say given that I'm already a convert...)
Conversely, I think I was already aware of and had internalised almost all the basic ideas and actions suggested in the book, and mostly act on these things
So I've put this 45th on my rough list of the 53 books I've read since learning about EA, in descending order of their perceived usefulness to me specifically
And I wouldn't necessarily recommend this to long-time, highly engaged members of the EA c
tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think that we should actually be more uncertain about that; I think it’s plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.
(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)
---
In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could “directly” cause existential catastrophe. He writes:
An easy way to find existential risk factors is to consider stressors for humanity or for our ability to make good decisions. These include global economic stagnation… (emphasis added)
This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.
He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:
Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help prot
The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)
The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)
A shift in arguments for AI risk - Tom Sittler (this has a brief but valuable section on robust totalitarianism) (discussion of the overall piece here)
Existential Risk Prevention as Global Priority - Nick Bostrom (this discusses the concepts of "permanent stagnation" and "flawed realisation", and very briefly touches on their relevance to e.g. lasting totalitarianism)
The Future of Human Evolution - Bostrom, 2009 (I think some scenarios covered there might count as dystopias, depe... (read more)
Interesting example: Leo Szilard and cobalt bombs
In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says:
That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's subtypes of information hazards:
It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised - or at least not acted on - the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.
I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example with regards to Japanese bioweapons in WWII elsewhere in the book.
I was also surprised by the fact that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (the earliest?) examples of a scientist bucking standard openness norms due to, basically, concerns of information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace states:
I've made a database of AI safety/governance surveys & survey ideas. I'll copy the "READ ME" page below. Let me know if you'd like access to the database, if you'd suggest I make a more public version, or if you'd like to suggest things be added.
"This spreadsheet lists surveys & ideas for surveys that are very relevant to AI safety/governance, including surveys which are in progress, ideas for surveys, and published surveys. The intention is to make it easier for people to:
1. Find out about outputs or works-in-progress they might want to read... (read more)
Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.
I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?
They go on to say:
Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing
I recently finished reading Henrich's 2020 book The WEIRDest People in the World. I would highly recommend it, along with Henrich's 2015 book The Secret of Our Success; I've roughly ranked them the 8th and 9th most useful-to-me of the 47 EA-related books I've read since learning about EA.
In this shortform, I'll:
Summarise my "four main updates" from this book
Share the Anki cards I made for myself when reading the book[1]
I intend this as a lower-effort alternative to writing notes specifically for public consumption or writing a proper book review
If you want to download the cards themselves to import them into your own deck, follow this link.
My hope is that this will be a low-effort way for me to help some EAs to quickly:
Gain some key insights from the book
Work out whether reading/listening to the book is worth their time
oh, please, do post this type of stuff, especially in shortform... but, unfortunately, you can't expect a lot of karma - attention is a scarce resource, right?
I'd totally like to see you blog or send a newsletter with this.
2
MichaelA🔸
Meta: I recently made two similar posts as top-level posts rather than as shortforms. Both got relatively little karma, especially the second. So I feel unsure whether posts/shortforms like this are worth putting in the time to make, and are worth posting as top-level posts vs as shortforms. If any readers have thoughts on that, let me know.
(Though it's worth noting that making these posts takes me far less time than making regular posts does - e.g., this shortform took me 45 minutes total. So even just being mildly useful to a few people might be sufficient to justify that time cost.)
[Edited to add: I added the "My four main updates" section to this shortform 4 days after I originally posted it and made this comment.]
5
Habryka
I really like these types of posts. I have some vague sense that these both would get more engagement and excitement on LW than the EA Forum, so maybe worth also posting them to there.
4
MichaelA🔸
Thanks for that info and that suggestion. Given that, I've tried cross-posting my Schelling notes, as an initial experiment.
Collection of EA-associated historical case study research
This collection is in reverse chronological order of publication date. I think I'm forgetting lots of relevant things, and I intend to add more things in future - please let me know if you know of something I'm missing.
Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?
I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how much to focus on working at EA vs non-EA orgs, as well as which specific types of roles and orgs to focus on.
Having write-ups on that could be more efficient than people answering similar questions multiple times. And it could make it easier for people to learn about a wider range of "typical workdays", rather than having to extrapolate from whoever they happened to talk to and whatever happened to come to mind for that person at that time.
I think such write-ups are made and shared in some other "sectors". E.g. when I was applying for a job in the UK civil service, I think I recall there being a "typical day" writeup for a range of different types of roles in and branches of the civil service.
Animal Advocacy Careers skills profiles are a bit like this for various effective animal advocacy nonprofit roles. You can also just read my notes on the interviews I did (linked within each profile) -- they usually just start with the question "what's a typical day?" https://www.animaladvocacycareers.org/skills-profiles
(See the linked doc for the most up-to-date version of this.)
The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.
Some resources I think might be useful to the kinds of people who apply for research roles at Rethink Priorities
This shortform expresses my personal opinions only.
These resources are taken from an email I sent to AI Governance & Strategy researcher/fellowship candidates who Rethink Priorities didn't make offers to but who got pretty far through our application process. These resourc... (read more)
I'd now also suggest most people who are interested in AI governance and/or technical AI safety roles participate in the relevant track of the AGI Safety Fundamentals course (or read through the curriculum content if you see this at a time when you wouldn't be able to join the course for a while).
The only other very directly related resource I can think of is my own presentation on moral circle expansion, and various other short content on Sentience Institute's website, e.g. our FAQ and some of the talks or videos. But I think that the academic psychology literature you refer to is very relevant here. Good starting-point articles are the "moral expansiveness" article you link to above and "Toward a psychology of moral expansiveness."
Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet etc.
There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:
Arguments for and against moral advocacy - Tobias Baumann, 2017
Values Spreading is Often More Important than Extinction Risk - Brian Tomasik, 2013
Against moral advocacy - Paul Christiano, 2013
Also relevant: "Should Longtermists Mostly Think About Animals?"
1
MichaelA🔸
Thanks for adding those links, Jamie!
I've now added the first few into my lists above.
3
Aaron Gertler 🔸
I continue to appreciate all the collections you've been posting! I expect to find reasons to link to many of these in the years to come.
2
MichaelA🔸
Good to hear!
Yeah, I hope they'll be mildly useful to random people at random times over a long period :D
Although I also expect that most people they'd be mildly useful for would probably never be aware they exist, so there may be a better way to do this.
Also, if and when EA coordinates on one central wiki, these could hopefully be folded into or drawn on for that, in some way.
Collection of everything I know of that explicitly uses the terms differential progress / intellectual progress / technological development (except Forum posts)
This originally collected Forum posts as well, but now that is collected by the Differential progress tag.
Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?
I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of:
disseminating important ideas to key decision-makers and thereby improving their decisions
either through the Bulletin articles themselves or through them allowing one to
Thanks for those links!
(I also realise now that I'd already seen and found useful Gregory Lewis's piece for the Bulletin, and had just forgotten that that's the publication it was in.)
4
MichaelA🔸
Here's the Bulletin's page on writing for them. Some key excerpts:
And here's the page on the Voices of Tomorrow feature:
If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?
tl;dr I think it's "another million years", or slightly longer, but I'm not sure.
How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million.[38] If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.
(There are various extra details and caveats about these estimates in the footnotes.)
Ord also makes similar statements on the FLI Podcast, including the following:
If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we pla
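A minimal sketch of the reasoning behind my answer, assuming (my assumption, not something Ord states) that species lifetimes are roughly exponentially distributed, i.e. driven by a roughly constant per-year extinction hazard:

```latex
% Memorylessness of the exponential distribution:
% having already survived t years doesn't reduce expected remaining lifetime.
P(T > t + s \mid T > t) = \frac{e^{-\lambda (t+s)}}{e^{-\lambda t}} = e^{-\lambda s} = P(T > s),
\quad\text{so}\quad
E[T - t \mid T > t] = \frac{1}{\lambda} \approx 1 \text{ million years, for any } t.
```

And the "or slightly longer" comes from the fact that having already survived 200,000 years is (weak) evidence that our own hazard rate is lower than the cross-species average.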
Project ideas / active grantmaking ideas I collected
Context: What follows is a copy of a doc I made quickly in June/July 2021. Someone suggested I make it into a Forum post. But I think there are other better project idea lists, and more coming soon. And these ideas aren't especially creative, ambitious, or valuable, and I don't want people to think that they should set their sights as low as I accidentally did here. And this is now somewhat outdated in some ways. So I'm making it just a shortform rather than a top-level post, and I'm not sure whether you ... (read more)
My review of Tom Chivers' review of Toby Ord's The Precipice
I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)
But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.
I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.
This was an excellent meta-review! Thanks for sharing it.
I agree that these little slips of language are important; they can easily compound into very stubborn memes. (I don't know whether the first person to propose a paperclip AI regrets it, but picking a different example seems like it could have had a meaningful impact on the field's progress.)
1
MichaelA🔸
Agreed.
These seem to often be examples of hedge drift, and their potential consequences seem like examples of memetic downside risks.
Types of downside risks of longtermism-relevant policy, field-building, and comms work [quick notes]
I wrote this quickly, as part of a set of quickly written things I wanted to share with a few Cambridge Existential Risk Initiative fellows. This is mostly aggregating ideas that are already floating around. The doc version of this shortform is here, and I'll probably occasionally update that but not this.
"Here’s my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-sc... (read more)
Some ideas for projects to improve the long-term future
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck... (read more)
"Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI"
As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of details, it very well may be low cost. I support efforts to try to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.
The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:
focus only on how the debate over patient philanthropy applies to longtermists
generalise the debate to also include questions about work (e.g., should I do a directly useful job now, or build career capital and do directly useful work later?)
They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations a
I don't think "patient" and "urgent" are opposites, in the way Phil Trammell originally defined patience. He used "patient" to mean a zero pure time preference, and "impatient" to mean a nonzero pure time preference. You can believe it is urgent that we spend resources now while still having a pure time preference. Trammell's paper argued that patient actors should give later, irrespective of how much urgency you believe there is. (Although he carved out some exceptions to this.)
MichaelA🔸
Yes, Trammell writes:
And I agree that a person with a low or zero pure time preference may still want to use a large portion of their resources now, for example due to thinking now is a much "hingier"/"higher leverage" time than average, or thinking value drift will be high.
You highlighting this makes me doubt whether 80,000 Hours should've used "patient longtermism" as they did, whether they should've used "patient philanthropy" as they arguably did*, and whether I should've proposed the term "patient altruism" for the position that we should give/work later rather than now (roughly speaking).
On the other hand, if we ignore Trammell's definition of the term, I think "patient X" does seem like a natural fit for the position that we should do X later, rather than now.
Do you have other ideas for terms to use in place of "patient"? Maybe "delayed"? (I'm definitely open to renaming the tag. Other people can as well.)
*80k write:
This suggests to me that 80k is, at least in that post, taking "patient philanthropy" to refer not just to a low or zero pure time preference, but instead to a low or zero rate of discounting overall, or to a favouring of giving/working later rather than now.
Often proceed gradually toward soliciting forecasts and/or doing expert surveys
tl;dr: I think it's often good to have a pipeline from untargeted thinking/discussion that stumbles upon important topics, to targeted thinking/discussion of a given important topic, to expert interviews on that topic, to soliciting quantitative forecasts / doing large expert surveys.
I wrote this quickly. I think the core ideas are useful but I imagine they're already familiar to e.g. many people with experience making surveys.[1] I'm not personally aware of an existing write... (read more)
(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)
I think the ultimate answer to that question is really so
Turning the EA Wiki into a (huge) Anki deck is on my list of "Someday/Maybe" tasks. I think it might be worth waiting a bit until the Wiki is in a more settled state, but otherwise I'm very much in favor of this idea.
There is an Anki deck for the old LW wiki. It's poorly formatted and too coarse-grained (one note per article), and some of the content is outdated, but I still find it useful, which suggests to me that a better deck of the EA Wiki would provide considerable value.
MichaelA🔸
Why this might be worthwhile:
* The EA community has collected and developed a very large set of ideas that aren't widely known outside of EA, such that "getting up to speed" can take a similar amount of effort to a decent fraction of a bachelor's degree
* But the community is relatively small and new (compared to e.g. most academic fields), so we have relatively little in the way of textbooks, courses, summaries, etc.
* This means it can take a lot of effort and time to get up to speed, lots of EAs have substantial "gaps" in their "EA knowledge", lots of concepts are misinterpreted or conflated or misapplied, etc.
* The EA Wiki is a good step towards having good resources to help people get up to speed
* A bunch of research indicates retrieval practice, especially when spaced and interleaved, can improve long-term retention and can also help with things like application of concepts (not just memory)
* And Anki provides such spaced, interleaved retrieval practice
* I'm being lazy in not explaining the jargon or citing my sources, but you can find some explanation and sources here: Augmenting Long-term Memory
* If one person makes an Anki deck based on the EA Wiki entries, it can then be used and/or built on by other people, can be shared with participants in EA Fellowships, etc. (a rough programmatic sketch of generating such a deck appears below, after the "Possible reasons not to do this" list)
Possible reasons not to do this:
* "There's a lot of stuff it'd be useful for people to know that isn't on EA Wiki entries. Why not make Anki cards on those things instead? Isn't this a bit insular?"
* I think we can and should do both, rather than one or the other
* Same goes for having Anki cards based on EA sources vs Anki cards based on non-EA sources
* Personally, I'd guess ~25% of my Anki cards are based on EA sources, ~70% are based on non-EA sources but are about topics I see as important for EA reasons, and 5% are random personal life stuff
* See also Suggestion: Make Anki cards, share them as posts, and share key updates
* "This seems pretty t
Why I think The Precipice might understate the significance of population ethics
tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated.
This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics... (read more)
Bottom line up front: I think it'd be best for longtermists to default to using the more inclusive term “authoritarianism” rather than "totalitarianism", except when a person really has a specific reason to focus on totalitarianism specifically.
I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.)
But my understanding is that political scientists typically consider to... (read more)
Update in April 2021: This shortform is now superseded by the EA Wiki entry on Accidental harm. There is no longer any reason to read this shortform instead of that.
Collection of sources I've found that seem very relevant to the topic of downside risks/accidental harm
Someone shared a project idea with me and, after I indicated I didn't feel very enthusiastic about it at first glance, asked me what reservations I have. Their project idea was focused on reducing political polarization and was framed as motivated by longtermism. I wrote the following and thought maybe it'd be useful for other people too, since I have similar thoughts in reaction to a large fraction of project ideas.
"My main 'reservations' at first glance aren't so much specific concerns or downside risks as just 'I tentatively think that this doesn'
The Health Impact Fund (cited above by MichaelA) is an implementation of a broader idea outlined by Dr. Aidan Hollis here: An Efficient Reward System for Pharmaceutical Innovation. Hollis' paper, as I understand it, proposes reforming the patent system such that innovations would be rewarded by government payouts (based on impact metrics, e.g. QALYs) rather than monopoly profit/rent. The Health Impact Fund, an NGO, is meant to work alongside patents (for now) and is intended to prove that the broader concept outlined in the paper can work.
A friend and I are working on further broadening this proposal outlined by Dr. Hollis. Essentially, I believe this type of innovation incentive could be applied to other areas with easily measurable impact (e.g. energy, clean protein and agricultural innovations via a "carbon emissions saved" metric).
We'd love to collaborate with anyone else interested (feel free to message me).
EdoArad
Hey schethik, did you make progress with this?
schethik
@EdoArad
Summary: The broad concept that Hollis' paper proposes ("outcome-based financing") has already been applied to several other areas such as reducing homelessness, improving specific health outcomes, etc. Recently, McKinsey, Meta, and a few others agreed to spend $925m to fund a similar mechanism to incentivize carbon capture technology innovation. Seems like there's lots of interest in expanding this type of financing model from big funders. Maybe something for the EA community to become more engaged with since there seems to be an appetite.
More details: As I understand it, Hollis' paper's proposal fits into a broader concept known as "outcome-based financing". The space is much more developed than I had thought when I wrote this previous comment. Two primary outcome-based financing models exist -- pay-for-success ("PFS") contracts (also known as social impact bonds) and advanced market commitments ("AMCs"). Hollis' paper (from 2004) describes an application of PFS contracts. Both PFS contracts and AMCs are already applied to several industries including health and clean energy.
Definitions:
1. PFS rewards innovators based on some per unit metric (e.g., QALYs per drug sold in Hollis' example).
2. AMCs reward innovators in a pre-specified lump-sum fashion (e.g., the WHO, World Bank, a few countries, and the Bill and Melinda Gates Foundation funded a $1.5 billion AMC for entities that could create a vaccine for pneumococcal diseases).
Real-world Examples:
1. PFS
1. Here's a link to Oxford's PFS database (~200 projects / $500m since the concept was formalized in 2010). PFS contracts are used most commonly for reducing prison rates, improving health outcomes (in developed and developing countries), reducing homelessness, and upskilling labor. Check out the database for more details.
2. Hollis' org is trying to set up a clean energy PFS fund -- seems promising, but I think doing this in cleantech is extra tricky.
3. I've been en
EdoArad
Thank you!! It'd be great if you want to write it as a top-level post, to get more visibility and to be more easily indexable, or maybe add something to this wiki page.
Crowd Funded Cures seems like an amazing initiative, wish you all the best!
Potential downsides of EA's epistemic norms (which overall seem great to me)
This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017... (read more)
If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative.
And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.
(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do similar, I give that ... (read more)
I've made a small "Collection of collections of AI policy ideas" doc. Please let me know if you know of a collection of relatively concrete policy ideas relevant to improving long-term/extreme outcomes from AI. Please also let me know if you think I should share the doc / more info with you.
Preferences for the long-term future [an abandoned research idea]
Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program.[1] I'm unsure how useful this idea is. But twice this week I felt it'd be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it.
Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with be... (read more)
What are the implications of the offence-defence balance for trajectories of violence?
Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?
Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please commen... (read more)
This is a doc I made, and I suggest reading the doc rather than the shortform version (assuming you want to read this at all). But here it is copied out anyway:
What is this doc, and why did I make it?
AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I... (read more)
Thoughts on Toby Ord’s policy & research recommendations
In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn’t post at the time).
Overall, I very much like Ord’s list, and I don’t think any of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.
On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:
a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.
I think we could flesh out this idea as the following argument:
Premise 1. We know of fewer incidents that could've escalated to nuclear war from the 70s onwards than from the 40s-60s.
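(The rest of the argument is truncated above. As a hedged aside on Premise 1: one quick way to probe whether the drop in known incidents is larger than a constant incident rate would produce is a simple rate comparison. The counts below are hypothetical placeholders rather than actual incident data, and the sketch ignores reporting/declassification lag, which plausibly undercounts recent decades.)

```python
# Rough sketch: is an apparent decline in known near-miss incidents more than
# we'd expect by chance under a constant incident rate? Counts are hypothetical
# placeholders, NOT real data.
from scipy.stats import binomtest

early_count, early_years = 12, 30   # hypothetical: known incidents, ~1940s-1960s
late_count, late_years = 5, 50      # hypothetical: known incidents, ~1970s onwards

# Under a constant rate, each known incident independently falls in the early
# period with probability early_years / total_years.
total_incidents = early_count + late_count
p_early = early_years / (early_years + late_years)

result = binomtest(early_count, total_incidents, p_early, alternative="greater")
print(f"One-sided p-value for 'the early-period rate was higher': {result.pvalue:.3f}")
```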
Ah great, thanks!
Do you happen to recall if you encountered the term "moral weight" outside of EA/rationality circles? The term isn't in the titles in the bibliography (though it may be in the full papers), and I see one that says "Moral status as a matter of degree?", which would seem to refer to a similar idea. So this seems like it might be additional weak evidence that "moral weight" might be an idiosyncratic term in the EA/rationality community (whereas when I first saw Muehlhauser use it, I assumed he took it from the philosophical literature).
The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:
Capacity for welfare, which is how well or poorly a given animal's life can go
Average realized welfare, which is how well or poorly the life of a typical member of a given species actually goes
Moral status, which is how much the welfare of a given animal matters morally
Differences in any of those three things might generate differences in how we prioritize interventions that target different species.
Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!
Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that.
Looking forward to reading that!
Notes from a call with someone who's a research assistant to a great researcher
(See also Matthew van der Merwe's thoughts. I'm sharing this because I think it might be useful to some people by itself, and so I can link to it from parts of my sequence on Improving the EA-Aligned Research Pipeline.)
- This RA says they definitely learned more from this RA role than they would’ve if doing a PhD
- Mainly due to tight feedback loops
- And strong incentives for the senior researcher to give good feedback
- The RA is producing "intermediate products" for the senior researcher. So the senior researcher needs and uses what the RA produces. So the feedback is better and different.
- In contrast, if the RA was working on their own, separate projects, it would be more like the senior researcher just looks at it and grades it.
- The RA has mostly just had to do literature reviews of all sorts of stuff related to the broad topic the senior researcher focuses on
- So the RA person was incentivised more than pretty much anyone else to just get familiar with all the stuff under this umbrella
- They wouldn’t be able or encouraged to do that in a PhD
- The thing the RA hasn’t liked is that he hasn’t been producing his ow
... (read more)
For the last few years, I’ve been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you’re saying makes sense, it’s important to note that the quality of one’s experience as an RA will always depend to a very significant extent on one’s supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it’s like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it’s harder than you’d think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.
On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don’t really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, thi... (read more)
One idea that comes to mind is to set up an organization that provides RAs-as-a-service. Say, a nonprofit that works with multiple EA orgs and employs several RAs, some full-time and others part-time (think: a student job). This org can then handle recruiting, basic training, employment, and some of the management. RAs could work on multiple projects with perhaps multiple different people, and tasks could be delegated to the organization as a whole, which would find the right RA for each task.
A financial model could be something like EA orgs pay 25-50% of the relevant salaries for projects they recruit RAs for, and the rest is complemented by donations to the non-profit itself.
Readings and notes on how to do high-impact research
This shortform contains some links and notes related to various aspects of how to do high-impact research, including how to:
I've also delivered a workshop on the same topics, the slides from which can be found here.
The document has less of an emphasis on object-level things to do with just doing research well (as opposed to doing impactful research), though that’s of course important too. On that, see also Effective Thesis's collection of Resources, Advice for New Researchers - A collaborative EA doc, Resources to learn how to do research, and various non-EA resources (some are linked to from those links).
Epistemic status
This began as a Google Doc of notes to self. It's still pretty close to that status - i.e., I don't explain why each thing is relevant, haven't spent a long time thinking about the ideal way to organise this, and expect this shortform omits many great readings and tips. But seve... (read more)
Collection of EA analyses of how social movements rise, fall, can be influential, etc.
Movement collapse scenarios - Rebecca Baron
Why do social movements fail: Two concrete examples. - NunoSempere
What the EA community can learn from the rise of the neoliberals - Kerry Vaughan
How valuable is movement growth? - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)
Long-Term Influence and Movement Growth: Two Historical Case Studies - Aron Vallinder, 2018
Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?
A Framework for Assessing the Potential of EA Development in Emerging Locations* - jahying
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Hard-to-reverse decisions destroy option value - Schubert & Garfinkel, 2017
These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:
It appears Animal C... (read more)
Independent impressions
Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Armed with this concept, I try to stick to the following epistemic/discussion norms, and think it's good for other people to do so as well:
One rationale for that bundle of norms is to avoid information cascades.
In contrast, when I actually make decisions, I try t... (read more)
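As a toy illustration of the information-cascade rationale above (a hedged sketch of my own, using a deliberately crude deference rule rather than a full Bayesian model): in the simulation below, each agent gets a noisy private signal about a binary question. If agents report their independent impressions, a majority vote over the reports is usually right; if each agent instead defers to the running majority of earlier public reports, the whole group ends up echoing the first agent and is wrong far more often.

```python
# Toy simulation of information cascades (an illustrative sketch, not a claim
# about how real groups behave). The truth is +1; each private signal matches
# the truth with probability SIGNAL_ACCURACY.
import random

SIGNAL_ACCURACY = 0.6
N_AGENTS = 50
N_TRIALS = 10_000

def run_trial(defer: bool) -> bool:
    truth = 1
    reports = []
    for _ in range(N_AGENTS):
        signal = truth if random.random() < SIGNAL_ACCURACY else -truth
        if defer and reports:
            balance = sum(reports)
            # Defer to the majority of earlier public reports; fall back on the
            # private signal only when earlier reports are exactly tied.
            report = signal if balance == 0 else (1 if balance > 0 else -1)
        else:
            # Report the independent impression (the private signal itself).
            report = signal
        reports.append(report)
    # Group verdict = majority of public reports.
    return sum(reports) > 0

for defer in (False, True):
    correct = sum(run_trial(defer) for _ in range(N_TRIALS)) / N_TRIALS
    label = "deferring to earlier reports" if defer else "independent impressions"
    print(f"{label}: group verdict correct in {correct:.1%} of trials")
```

With these assumed parameters, the independent-impressions condition is right roughly 90% of the time, while the deferring condition is right only about as often as a single noisy signal (~60%), which is the cascade failure mode in miniature.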
Collection of sources that seem very relevant to the topic of civilizational collapse and/or recovery
Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk)
Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019
Update on civilizational collapse research - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)
Modelling the odds of recovery from civilizational collapse - Michael Aird (i.e., me), 2020
The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)
How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post)
Various EA Forum posts by Dave Denkenberger (see also ALLFED's site)
Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)
A (Very) Short History of the Collapse of Civilizations, and Why it Matters - David Manheim, 2020
A grant applic... (read more)
Note: This shortform is now superseded by a top-level post I adapted it into. There is no longer any reason to read the shortform version.
Book sort-of-recommendations
Here I list all the EA-relevant books I've read or listened to as audiobooks since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.
I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because:
Let me know if you want more info on why I found something useful or not so useful.
(See also this list of EA-related podcasts and this list of sources of EA-related videos.)
- The Precipice, by Ord, 2020
- See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice.
- I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good con
... (read more)
I've now turned this into a top-level post, and anyone who wants to read this should now read that version rather than this shortform.
Adding important nuances to "preserve option value" arguments
Summary
I fairly commonly hear (and make) arguments like "This action would be irreversible. And if we don't take the action now, we can still do so later. So, to preserve option value, we shouldn't take that action, even if it would be good to do the action now if now was our only chance."[1]
This is relevant to actions such as:
I think this sort of argument is often getting at something important, but in my experience such arguments are usually oversimplified in some important ways. This shortform is a quickly written[2] attempt to provide a more nuanced picture of that kind of argument. My key points are:
- "(Ir)reversibility" is a matter of degree (not a binary), and a matter of the expected extent to which the counterfactual effects we're cons
... (read more)
I've now turned this into a top-level post.
Collection of work on whether/how much people should focus on the EU if they’re interested in AI governance for longtermist/x-risk reasons
I made this quickly. Please let me know if you know of things I missed. I list things in reverse chronological order.
There may be some posts I missed with the European Union tag, and there are also posts with that tag that aren’t about AI governance but which address a similar ques... (read more)
EDIT: This is now superseded by a top-level post so you should read that instead.
tl;dr: Value large impacts rather than large inputs, but be excited about megaprojects anyway because they're a new & useful tool we've unlocked
A lot of people are excited about megaprojects, and I agree that they should be. But we should remember that megaprojects are basically defined by the size of their inputs (e.g., "productively" using >$100 million per year), and that we don't intrinsically value the capacity to absorb those inputs. What we really care about is huge positive impact, and megaprojects are just one means to that end, and actually (ceteris paribus) we should be even more excited about achieving the same impacts using fewer inputs & smaller projects. How can we reconcile these thoughts, and why should we still be excited about megaprojects?
I suggest we think about this as follows:
- Imagine a Venn diagram with a circle for megaprojects and another circle for projects with great expected value (EV)
- Projects with great EV are really the focus and always have been
- Projects like 80,000 Hours, FHI, and Superintelligence were each far smaller than megaprojects, but in my view pr
... (read more)
I think the general thrust of your argument is clearly right, and it's weird/frustrating that this is not the default assumption when people talk about megaprojects (though maybe I'm not reading the existing discussions of megaprojects sufficiently charitably).
2 moderately-sized caveats:
Quick thoughts on Kelsey Piper's article Is climate change an “existential threat” — or just a catastrophic one?
- The article was far better than I expect most reporting on climate change as a potential existential risk to be
- This is in line with Kelsey Piper generally seeming to do great work
- I particularly appreciated that it (a) emphasised how the concepts of catastrophes in general and extinction in particular are distinct and why that matters, but (b) did this in a way that I suspect has a relatively low risk of seeming callous, nit-picky, or otherwise annoying to people who care about climate change
- But I also had some substantive issues with the article, which I'll discuss below
- The article conflated “existential threat”/“existential risk” with “extinction risk”, thereby ignoring two other types of existential catastrophe: unrecoverable collapse and unrecoverable dystopia
- See also Venn diagrams of existential, global, and suffering catastrophes
- Some quotes from the article to demonstrate the conflation I'm referring to:
- “But there’s a standard meaning of that phrase [existential threat]: that it’s going to wipe out humanity — or even, as Warren implied Wednesday night, all life
... (read more)
Collection of EA analyses of political polarisation
Book Review: Why We're Polarized - Astral Codex Ten, 2021
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Adapting the ITN framework for political interventions & analysis of political polarisation - OlafvdVeen, 2020
Thoughts on electoral reform - Tobias Baumann, 2020
Risk factors for s-risks - Tobias Baumann, 2019
Other EA Forum posts tagged Political Polarization
(Perhaps some older Slate Star Codex posts? I can't remember for sure.)
Notes
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Also, I'm aware that there has also been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only analyses by EAs/EA-adjacent people here are that:
I've written some posts on related themes.
https://www.lesswrong.com/posts/k54agm83CLt3Sb85t/clearerthinking-s-fact-checking-2-0
https://forum.effectivealtruism.org/posts/pYaYtCT3Fc5H4rfWS/opinion-piece-on-the-swedish-network-for-evidence-based
https://forum.effectivealtruism.org/posts/CYyaQ3N4ipLFR4fzX/effective-altruism-s-fact-value-separation-as-a-weapon
https://forum.effectivealtruism.org/posts/yPkiBNW49NZvGvJ3q/political-debiasing-and-the-political-bias-test
The x-risk policy pipeline & interventions for improving it: A quick mapping
I just had a call with someone who's thinking about how to improve the existential risk research community's ability to cause useful policies to be implemented well. This made me realise I'd be keen to see a diagram of the "pipeline" from research to implementation of good policies, showing various intervention options and which steps of the pipeline they help with. I decided to quickly whip such a diagram up after the call, forcing myself to spend no more than 30 mins on it. Here's the result.
(This is of course imperfect in oodles of ways, probably overlaps with and ignores a bunch of existing work on policymaking*, presents things as more one-way and simplistic than they really are, etc. But maybe it'll be somewhat interesting/useful to some people.)
(If the images are too small for you, you can open each in a new tab.)
Feel free to ask me to explain anything that see... (read more)
Reflections on data from a survey about things I’ve written
I recently requested people take a survey on the quality/impact of things I’ve written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)
Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1]
I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you’d like me to add them in a comment.
For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common?
(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)
The data
Q1–Q4: [response summaries were displayed here in the original shortform]
Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected ... (read more)
"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"
I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad survey to be biased against them).
That said, as you know, I think your summaries/collections are useful and underprovided.
To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?
Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?
One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are presumably the things there'd be data on.
But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
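To sketch the calculation itself: assuming one could obtain (e.g. from CEA) a table with one row per user per year in which they were active, something like the following would do it. The `activity` frame below is a hypothetical stand-in for that data, not real Forum data.

```python
# Sketch: year-over-year "drop-off" among EA Forum users, given a table of
# which users were active (logged in / commented / posted) in which years.
import pandas as pd

# Hypothetical placeholder rows; real data would come from CEA / the Forum team.
activity = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 4, 4],
    "year":    [2015, 2016, 2017, 2015, 2016, 2015, 2016, 2017],
})

# For each user, the last year in which they were ever active.
last_active = activity.groupby("user_id")["year"].max()

rows = []
for year in sorted(activity["year"].unique())[:-1]:  # skip the final year
    active = activity.loc[activity["year"] == year, "user_id"].unique()
    # "Dropped off" = active this year but never active in any later year,
    # matching "how many users in 2015 haven't used it since?".
    dropped = [u for u in active if last_active[u] == year]
    rows.append({
        "year": year,
        "active_users": len(active),
        "never_active_again": len(dropped),
        "share": round(len(dropped) / len(active), 2),
    })

print(pd.DataFrame(rows))
```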
Collection of some definitions of global catastrophic risks (GCRs)
See also Venn diagrams of existential, global, and suffering catastrophes
Bostrom & Ćirković (pages 1 and 2):
Open Philanthropy Project/GiveWell:
... (read more)
I've recently collected readings and notes on the following topics:
Just sharing here in case people would find them useful. Further info on purposes, epistemic status, etc. can be found at those links.
Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.
In The Precipice, Ord writes:
(See also the relevant section of Ord's 80,000 Hours interview.)
I share the view that it’s unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that clos... (read more)
Reasons why EU laws/policies might be important for AI outcomes
Based on some reading and conversations, I think there are two main categories of reasons why EU laws/policies (including regulations)[1] might be important for AI risk outcomes, with each category containing several more specific reasons.[2] This post attempts to summarise those reasons.
But note that:
Please comment if you know of relevant prior work, if you have disagreements or think something should be added, and/or if you think I should make this a top-level post.
Note: I drafted this quickly, then wanted to improve it based on feedback & on things I read/remembered since writing it. But I then realised I'll never make the time to do that, so I'm just posting this ~as-is anyway since maybe it'll be a bit useful to some people. See also Collection of work on whether/how much people should focus on the EU if they’r... (read more)
Notes on Galef's "Scout Mindset" (2021)
Overall thoughts
- Scout Mindset was engaging, easy to read, and had interesting stories and examples
- Galef covered a lot of important points in a clear way
- She provided good, concrete advice on how to put things into practice
- So I'm very likely to recommend this book to people who aren't in the EA community, are relatively new to it, or aren't super engaged with it
- I also liked how she mentioned effective altruism itself several times and highlighted its genuinely good features in an accurate way, but without making this the central focus or seeming preachy
- (At least, I'm guessing people wouldn't find it preachy - it's hard to say given that I'm already a convert...)
- Conversely, I think I was already aware of and had internalised almost all the basic ideas and actions suggested in the book, and mostly act on these things
- This is mostly due to the various things I've read or listened to since learning about EA
- So I've put this 45th on my rough list of the 53 books I've read since learning about EA, in descending order of their perceived usefulness to me specifically
- And I wouldn't necessarily recommend this to long-time, highly engaged members of the EA c
... (read more)
tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think that we should actually be more uncertain about that; I think it’s plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.
(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)
---
In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could “directly” cause existential catastrophe. He writes:
This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.
He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:
... (read more)
Collection of sources related to dystopias and "robust totalitarianism"
(See also Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?)
The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)
The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)
Reducing long-term risks from malevolent actors - David Althaus and Tobias Baumann, 2020
The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "robust totalitarianism", and related matters)
A shift in arguments for AI risk - Tom Sittler (this has a brief but valuable section on robust totalitarianism) (discussion of the overall piece here)
Existential Risk Prevention as Global Priority - Nick Bostrom (this discusses the concepts of "permanent stagnation" and "flawed realisation", and very briefly touches on their relevance to e.g. lasting totalitarianism)
The Future of Human Evolution - Bostrom, 2009 (I think some scenarios covered there might count as dystopias, depe... (read more)
Collection of all prior work I found that seemed substantially relevant to information hazards
Information hazards: a very simple typology - Will Bradshaw, 2020
Information hazards and downside risks - Michael Aird (me), 2020
Information hazards - EA concepts
Information Hazards in Biotechnology - Lewis et al., 2019
Bioinfohazards - Crawford, Adamson, Ladish, 2019
Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)
Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018
Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018
Mitigating catastrophic biorisks - Esvelt, 2020
The Precipice (particularly pages 135-137) - Ord, 2020
Information hazard - LW Wiki
Thoughts on The Weapon of Openness - Will Bradshaw, 2020
Exploring the Streisand Effect - Will Bradshaw, 2020
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018
A point of clarification on infohazard terminology - eukaryote, 2020
Somewhat less directly relevant
The Offense-Defense Balance of Scientific Knowledge: ... (read more)
I've made a database of AI safety/governance surveys & survey ideas. I'll copy the "READ ME" page below. Let me know if you'd like access to the database, if you'd suggest I make a more public version, or if you'd like to suggest things be added.
"This spreadsheet lists surveys & ideas for surveys that are very relevant to AI safety/governance, including surveys which are in progress, ideas for surveys, and published surveys. The intention is to make it easier for people to:
1. Find out about outputs or works-in-progress they might want to read... (read more)
Epistemic status: Unimportant hot take on a paper I've only skimmed.
Watson and Watson write:
I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?
They go on to say:
... (read more)
Notes on The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous (2020)
Cross-posted to LessWrong as a top-level post.
I recently finished reading Henrich's 2020 book The WEIRDest People in the World. I would highly recommend it, along with Henrich's 2015 book The Secret of Our Success; I've roughly ranked them the 8th and 9th most useful-to-me of the 47 EA-related books I've read since learning about EA.
In this shortform, I'll:
My hope is that this will be a low-effort way for me to help some EAs to quickly:
(See also Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?)
You may find it also/more useful to read
- This review of the book on Less
... (read more)
Collection of EA-associated historical case study research
This collection is in reverse chronological order of publication date. I think I'm forgetting lots of relevant things, and I intend to add more things in future - please let me know if you know of something I'm missing.
Possibly relevant things:
- Some book reviews by Scott Alexander, such as:
- https://slatestarcodex.com/2019/03/18/book-review-inventing-the-future/
- https://slatestarcodex.com/2018/04/30/book-review-history-of-the-fabian-society/
- It appears Animal Charity Evaluators did relevant re
... (read more)
Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?
I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how much to focus on working at EA vs non-EA orgs, as well as which specific types of roles and orgs to focus on.
Having write-ups on that could be more efficient than people answering similar questions multiple times. And it could make it easier for people to learn about a wider range of "typical workdays", rather than having to extrapolate from whoever they happened to talk to and whatever happened to come to mind for that person at that time.
I think such write-ups are made and shared in some other "sectors". E.g. when I was applying for a job in the UK civil service, I think I recall there being a "typical day" writeup for a range of different types of roles in and branches of the civil service.
So do such write-ups exist for EA orgs? (Maybe some posts in the Working at EA organizations series ser... (read more)
Collection of collections of resources relevant to (research) management, mentorship, training, etc.
(See the linked doc for the most up-to-date version of this.)
The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.
- Management & mentoring - EA Forum
- Management-related books [shared] - me
- Meeting templates for mentors/managers [shared] - me
- Goal-setting templates or processes [shared] - me
... (read more)
UPDATE: This is now fully superseded by my 2022 Interested in EA/longtermist research careers? Here are my top recommended resources, and there's no reason to read this one.
Some resources I think might be useful to the kinds of people who apply for research roles at Rethink Priorities
This shortform expresses my personal opinions only.
These resources are taken from an email I sent to AI Governance & Strategy researcher/fellowship candidates who Rethink Priorities didn't make offers to but who got pretty far through our application process. These resourc... (read more)
Collection of sources relevant to moral circles, moral boundaries, or their expansion
Works by the EA community or related communities
Moral circles: Degrees, dimensions, visuals - Michael Aird (i.e., me), 2020
Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018
The Moral Circle is not a Circle - Grue_Slinky, 2019
The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)
Radical Empathy - Holden Karnofsky, 2017
Various works from the Sentience Institute, including:
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence - Aird, work in progress
-Less relevant, or with only a small section that’s directly relevant-
Why do effective altruists support the causes we do? - Michelle Hutchinson, 2015
Finding more effective causes - Michelle Hutchinson, 2015
Cosmopolitanism - Topher Hallquist, 2014
Three Heuristics for Finding Cause X - Kerry Vaughan, 2016
The Drowning Child and the Expanding Circle - Peter Singer, 1... (read more)
Collection of everything I know of that explicitly uses the terms differential progress / intellectual progress / technological development (except Forum posts)
This originally collected Forum posts as well, but now that is collected by the Differential progress tag.
Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?" - Michael Aird (i.e., me), 2021
Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015
Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016
Differential progress - EA Concepts
Differential technological development - Wikipedia
Existential Risk and Economic Growth - Aschenbrenner, 2019 (summary by Alex HT here)
On Progress and Prosperity - Christiano, 2014
How useful is “progress”? - Christiano, ~2013
Differential intellectual progress - LW Wiki
Existential Risks: Analyzing Human Extinction Scenarios - Bostrom, 2002 (section 9.4) (introduced the term differential technological development, I think)
Intelligence Explosion: Evidence and Import - Muehlhauser & Salamon (for MIRI) (section 4.2) (introduced the term... (read more)
Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?
I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of:
- disseminating important ideas to key decision-makers and thereby improving their decisions
- either through the Bulletin articles themselves or through them allowing one to
... (read more)
Collection of evidence about views on longtermism, time discounting, population ethics, significance of suffering vs happiness, etc. among non-EAs
Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)
The Long-Term Future: An Attitude Survey - Vallinder, 2019
Older people may place less moral value on the far future - Sanjay, 2019
Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017
The Psychology of Existential Risk: Moral Judgments about Human Extinction - Schubert, Caviola & Faber, 2019
Psychology of Existential Risk and Long-Termism - Schubert, 2018 (space for discussion here)
Descriptive Ethics – Methodology and Literature Review - Althaus, ~2018 (this is something like an unpolished appendix to Descriptive Population Ethics and Its Relevance for Cause Prioritization, and it would make sense to read the latter post first)
A Small Mechanical Turk Survey on Ethics and Animal Welfare - Brian Tomasik, 2015
Work on "future self continuity" might be relevant (I haven't looked into it)
Some evidence about the views of EA-aligned/EA-adjacent groups
Survey re... (read more)
If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?
tl;dr I think it's "another million years", or slightly longer, but I'm not sure.
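Here's a sketch of why, assuming species extinction is well modelled by a constant per-year hazard, i.e. an exponentially distributed survival time with mean ~1 million years (which is roughly what "a typical mammalian species survives ~1 million years" suggests if survival doesn't depend on the species' age):

$$P(T > a + t \mid T > a) \;=\; \frac{e^{-\lambda(a+t)}}{e^{-\lambda a}} \;=\; e^{-\lambda t},$$

so the expected remaining lifetime, $E[T - a \mid T > a] = 1/\lambda \approx 1{,}000{,}000$ years, is the same whether the species is $a = 200{,}000$ years old or brand new. And if we're also uncertain about $\lambda$ itself, having already survived 200,000 years is weak evidence for a lower $\lambda$, which is one way the "or slightly longer" could arise.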
In The Precipice, Toby Ord writes:
(There are various extra details and caveats about these estimates in the footnotes.)
Ord also makes similar statements on the FLI Podcast, including the following:
... (read more)
Project ideas / active grantmaking ideas I collected
Context: What follows is a copy of a doc I made quickly in June/July 2021. Someone suggested I make it into a Forum post. But I think there are other better project idea lists, and more coming soon. And these ideas aren't especially creative, ambitious, or valuable, and I don't want people to think that they should set their sights as low as I accidentally did here. And this is now somewhat outdated in some ways. So I'm making it just a shortform rather than a top-level post, and I'm not sure whether you ... (read more)
My review of Tom Chivers' review of Toby Ord's The Precipice
I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)
But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.
I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.
I'll now quote and... (read more)
Types of downside risks of longtermism-relevant policy, field-building, and comms work [quick notes]
I wrote this quickly, as part of a set of quickly written things I wanted to share with a few Cambridge Existential Risk Initiative fellows. This is mostly aggregating ideas that are already floating around. The doc version of this shortform is here, and I'll probably occasionally update that but not this.
"Here’s my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-sc... (read more)
Some ideas for projects to improve the long-term future
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck... (read more)
The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:
They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations a
... (read more)
Often proceed gradually toward soliciting forecasts and/or doing expert surveys
tl;dr: I think it's often good to have a pipeline from untargeted thinking/discussion that stumbles upon important topics, to targeted thinking/discussion of a given important topic, to expert interviews on that topic, to soliciting quantitative forecasts / doing large expert surveys.
I wrote this quickly. I think the core ideas are useful but I imagine they're already familiar to e.g. many people with experience making surveys.[1] I'm not personally aware of an existing write... (read more)
Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?"
(This is related to the general topic of differential progress.)
(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)
- I think the ultimate answer to that question is really so
... (read more)
Maybe someone should make ~1 Anki card each for lots of EA Wiki entries, then share that Anki deck on the Forum so others can use it?
Specifically, I suggest that someone:
- Read/skim many/most/all of the EA Wiki entries in the "Cause Areas" and "Other Concepts" sections
- Anki cards based on entries in the other sections (e.g., Organisations) would probably be less useful
- Make 1 or more Anki card for many/most of those entries
- In many cases, these cards might take forms like "The long reflection refers to... [answer]"
- In many other cases, the cards could cover othe
... (read more)
Why I think The Precipice might understate the significance of population ethics
tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated.
This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics... (read more)
Bottom line up front: I think it'd be best for longtermists to default to using the more inclusive term “authoritarianism” rather than "totalitarianism", except when a person really has a specific reason to focus on totalitarianism specifically.
I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.)
But my understanding is that political scientists typically consider to... (read more)
List of things I've written or may write that are relevant to The Precipice
Things I’ve written
- Some thoughts on Toby Ord’s existential risk estimates
- Database of existential risk estimates
- Clarifying existential risks and existential catastrophes
- Existential risks are not just about humanity
- Failures in technology forecasting? A reply to Ord and Yudkowsky
- What is existential security?
- Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
- Thoughts on Toby Ord’s policy &am
... (read more)
Update in April 2021: This shortform is now superseded by the EA Wiki entry on Accidental harm. There is no longer any reason to read this shortform instead of that.
Collection of sources I've found that seem very relevant to the topic of downside risks/accidental harm
Information hazards and downside risks - Michael Aird (me), 2020
Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018
How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and J... (read more)
Someone shared a project idea with me and, after I indicated I didn't feel very enthusiastic about it at first glance, asked me what reservations I have. Their project idea is focused on reducing political polarization and is framed as motivated by longtermism. I wrote the following and thought maybe it'd be useful for other people too, since I have similar thoughts in reaction to a large fraction of project ideas.
- "My main 'reservations' at first glance aren't so much specific concerns or downside risks as just 'I tentatively think that this doesn'
... (read more)
Collection of sources relevant to impact certificates/impact purchases/similar
Certificates of impact - Paul Christiano, 2014
The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)
The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020
Making Impact Purchases Viable - casebash, 2020
Plan for Impact Certificate MVP - lifelonglearner, 2020
Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019
Altruistic equity allocation - Paul Christiano, 2019
Social impact bond - Wikipe... (read more)
Potential downsides of EA's epistemic norms (which overall seem great to me)
This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017... (read more)
If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative.
And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.
(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do similar, I give that ... (read more)
I've made a small "Collection of collections of AI policy ideas" doc. Please let me know if you know of a collection of relatively concrete policy ideas relevant to improving long-term/extreme outcomes from AI. Please also let me know if you think I should share the doc / more info with you.
Preferences for the long-term future [an abandoned research idea]
Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program.[1] I'm unsure how useful this idea is. But twice this week I felt it'd be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it.
Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with be... (read more)
Collection of ways of classifying existential risk pathways/mechanisms
Each of the following works shows or can be read as showing a different model/classification scheme/taxonomy:
- Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter - Cotton-Barratt, Daniel, and Sandberg, 2020
- The same model is also discussed in Toby Ord's The Precipice.
- Cotton-Barratt also discusses this model, and rationales for building such models, on the 80,000 Hours podcast.
- Classifying global catastrophic risks - Avin et al., 2018
- Causa
... (read more)
What are the implications of the offence-defence balance for trajectories of violence?
Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?
Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please commen... (read more)
Update in April 2021: This shortform is now superseded by the EA Wiki entry on the Unilateralist's curse. There is no longer any reason to read this shortform instead of that.
Collection of all prior work I've found that seemed substantially relevant to the unilateralist’s curse
Unilateralist's curse [EA Concepts]
Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)
The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original pap... (read more)
Collection of AI governance reading lists, syllabi, etc.
This is a doc I made, and I suggest reading the doc rather than the shortform version (assuming you want to read this at all). But here it is copied out anyway:
What is this doc, and why did I make it?
AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I... (read more)
Collection of work on value drift that isn't on the EA Forum
Value Drift & How to Not Be Evil Part I & Part II - Daniel Gambacorta, 2019
Value drift in effective altruism - Effective Thesis, no date
Will Future Civilization Eventually Achieve Goal Preservation? - Brian Tomasik, 2017/2020
Let Values Drift - G Gordon Worley III, 2019 (note: I haven't read this)
On Value Drift - Robin Hanson, 2018 (note: I haven't read this)
Somewhat relevant, but less so
Value uncertainty - Michael Aird (me), 2020
An idea for getting evidence on value drift in... (read more)
Thoughts on Toby Ord’s policy & research recommendations
In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn’t post at the time).
Overall, I very much like Ord’s list, and none of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.
Regarding “other anthropogenic
... (read more)
Some concepts/posts/papers I find myself often wanting to direct people to
https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence
https://www.lesswrong.com/posts/oMYeJrQmCeoY5sEzg/hedge-drift-and-advanced-motte-and-bailey
https://forum.effectivealtruism.org/posts/voDm6e6y4KHAPJeJX/act-utilitarianism-criterion-of-rightness-vs-decision
http://gcrinstitute.org/papers/trajectories.pdf
(Will likely be expanded as I find and remember more)
Notes on Victor's Understanding the US Government (2020)
Why I read this
- I’m interested in learning more about a wide variety of topics relevant to "longtermism-motivated AI governance/strategy/policy research, practice, advocacy, and talent-building"
- I decided that one strategy I should try for that purpose is listening to relevant Great Courses lecture series via Audible
- This decision was loosely informed by advice at the end of the post The Neglected Virtue of Scholarship
- See also
- I felt that the Understanding the US Government lecture series would be useful
... (read more)
On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:
I think we could flesh out this idea as the following argument:
- Premise 1. We know of fewer incidents that could've escalated to nuclear war from the 70s onwards than from the 40s-60s.
- Premise
... (read more)
Collection of sources relevant to the idea of “moral weight”
Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020
Preliminary thoughts on moral weight - Luke Muehlhauser, 2018
Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020
2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)
Notes
As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time... (read more)
A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.
The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading of 'moral weight':
Differences in any of those three things might generate differences in how we prioritize interventions that target different species.
Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!