
I'm posting this in preparation for Draft Amnesty Week (Feb 24 – March 2), but it's also (hopefully) valuable outside of that context. The last time I posted this question, there were some great responses.

When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing and ideas can be voted on separately. 

If you see an answer here describing a post you think has already been written, please lend a hand and link it here. 

A few suggestions for possible answers:

  • A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
  • A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
  • A gap in an argument that you'd like someone to fill.

If you have loads of ideas, consider writing an entire "posts I would like someone to write" post.

Why put this up before Draft Amnesty Week?

If you see a post idea here that you think you might be positioned to answer, Draft Amnesty Week (Feb 24 – March 2) might be a great time to post it. During Draft Amnesty Week, your posts don't have to be thoroughly thought through, or even fully drafted. Bullet points and missing sections are allowed so that you can have a lower bar for posting. More details.

I would like someone to write a post about almost every topic asked about in the Meta Coordination Forum Survey, e.g.

  • What should the growth rate of EA be?
  • How quickly should we spend EA resources?
  • How valuable is recruiting a highly engaged EA to the community?
  • How much do we value highly engaged EAs relative to a larger number of less engaged people hearing about EA?
  • How should we (decide how to) allocate resources across cause areas?
  • How valuable is a junior/senior staff hire at an EA org (relative to the counterfactual second best hire)?
  • What skills / audiences should we prioritise targeting?

I'm primarily thinking about core EA decision-makers writing up their reasoning, but I think it would also be valuable for general community members to do this.

Prima facie, it's surprising that more isn't written publicly about core EA strategic questions.

Similar to Ollie's answer, I don't think EA is prepared for the world in which AI progress goes well. I expect that if that happens, there will be tons of new opportunities for us to spend money/start organizations that improve the world in a very short timeframe. I'd love to see someone carefully think through what those opportunities might be.

Obvious point, but I assume that [having a bunch of resources, mainly money] is a pretty safe bet for these worlds. 

AI progress could/should bring much better ideas of what to do with said resources/money as it happens. 

Karthik Tadepalli
Yeah, I was referring more to whether it can bring new ways of spending money to improve the world. There will be new market failures to solve, new sorts of technology that society could gain from accelerating, and new ways to get traction on old problems.

Pertinent to this, an idea for a post I'm stuck on:

What follows from conditionalizing the various big anthropic arguments on one another? Like, assuming you think the basic logic behind the simulation hypothesis, grabby aliens, Boltzmann brains, and many worlds all works, how do these interact with one another? Does one of them “win”? Do some of them hold conditional on one another but fail conditional on others? Do ones more compatible with one another have some probabilistic dominance (like, this is true if we start by assuming it, but also might be true if these others are true)? Essentially, I think this confusion is pertinent enough to my opinions on these styles of arguments in general that I'm satisfied just writing about this confusion for my post idea. But I feel unprepared to actually do the difficult, dirty work of pulling expected conclusions about the world from this consideration, and I would love it if someone much cleverer than me tried to actually take the challenge on.

I would be really interested in a post that outlined 1-3 different scenarios for post-AGI x-risk based on increasingly strict assumptions. So the first one would assume that misaligned superintelligent AI would almost instantly emerge from AGI, and describe the x-risks associated with that. Then the assumptions become stricter and stricter: AGI would only be able to improve itself slowly, we would be able to align it to our goals, etc.

I think this could be a valuable post to link people to, as a lot of debates around whether AI poses an x-risk seem to hinge on accepting or rejecting potential scenarios, but they're usually unproductive because everyone has different assumptions about what AI will be capable of.

So with this post, to say that AI x-risk is not tangible, you would have to, for each AI development scenario (with increasingly strict assumptions), either:

  1. reject at least one of the listed assumptions (e.g. argue that computer chips are a limit on exponential intelligence increases)
  2. or argue that all proposed existential risks in that scenario are so implausible that even an AI wouldn't be able to make any of them work.

If you can't do either of those, you accept AI is an x-risk. If you can, you move on to the next scenario with stricter assumptions. Eventually you find the assumptions you agree with, and have to reject all proposed x-risks in that scenario to say that AI x-risk isn't real. 

The post might also help with planning for different scenarios if it's more detailed than I'm anticipating. 

Maybe too much for a Draft Amnesty Week, but I'd be excited for someone / some people to think about how we'd prioritise R&D efforts if/when R&D is ~automated by very powerful narrow or general AI. "EA for the post-AGI world" or something.

I wonder if the ITN framework can offer an additional perspective to the one outlined by Dario in Machines of Loving Grace. He uses Alzheimer’s as an example of a problem he thinks could be solved soon, but is that one of the most pressing problems that becomes very tractable post-AGI? How does that trade-off against e.g. increasing life expectancy by a few years for everyone? (Dario doesn't claim Alzheimer’s is the most pressing problem, and I would also be very happy if we could win the fight against Alzheimer’s).

I'd love to read a deep-dive into a non-PEPFAR USAID program. This Future Perfect article mentioned a few. But it doesn't even have to be an especially great program; there are probably plenty of examples which don't come near the 100-fold improvement over the average charity (or the marginal government expenditure), but are still very respectable nonetheless.

There's in general a bit of a knowledge gap in EA on the subject of more typical good-doing endeavors. Everyone knows about PlayPumps and malaria nets, but what about all the stuff in between? This likely biases our understanding of non-fungible good-doing.

The Groups team did a 3-minute brainstorm about this during our weekly meeting! In no particular order:

Community Building

  • What (mass) (media) campaigns can encourage EA growth?
  • A uni with both AIS and EA groups coexisting writes up how that works
  • Experience of staying with others in your university group at an EAG(x)
  • What is EA CB strategy in light of AI progress
  • Mistakes you made as a university group organiser
  • When to stop investing in community building
    • A post that explores the lag time required for CB investments to pay off, applied both to cause-neutral EA and cause specific AIS

Other
 

  • Reasons for longer timelines
  • A good post on scope insensitivity that explains what it is (such a post doesn't exist yet)
  • Overview of corporate campaigns - strengths and weaknesses
  • Updated post on ITN we can use in fellowships
  • Exploration on whether AI progress should cause us to value protests more (or in general what tactics should be considered)
  • Aggregation of Will MacAskill comments on EA in the age of AI
  • AIS early career recommendations for non-STEM people
  • On being ineffective to be effective

I’m not sure if this hits what you mean by ‘being ineffective to be effective’, but you may be interested in Paul Graham’s ‘Bus ticket theory of genius’.

I'd like to see

  1. an overview of simple AI safety concepts and their easily explainable real-life demonstrations
    1. For instance, to explain sycophancy, I tend to mention the one random finding from this paper that hallucinations are more frequent if a model deems the user uneducated.
  2. more empirical posts on near-term destabilization (concentration of power, super-persuasion bots, epistemic collapse)

Maybe an inherently drafty idea, but I would love it if someone wrote a post on the feasibility of homemade bivalvegan cat food. I remember there was a cause area profile post a while ago talking about making cheaper vegan cat food, but I'm also hoping to see if there's something practical and cheap right now. Bivalves seem like the obvious candidate for this: less morally risky than other animal products, probably enjoyable for cats or able to be made into something enjoyable, and containing the necessary nutrients. I don't know any of that for sure, or whether there are other things you could add to the food or supplement on the side that would make a cat diet like this feasible, and I would love it if someone wrote up a practical report on this for current or prospective cat owners.

I had this idea a while ago and meant to see if I could collaborate with someone on the research, but at this point, barring major changes, I would rather just see someone else do it well and efficiently. Fentanyl test strips are a useful way to avoid overdoses in theory, and for some drugs they can be helpful for this, but in practice the market for opioids is so flooded with adulterated products that they aren't that useful, because opioid addicts will still use drugs with fentanyl in them if it's all that's available. Changes in policy and technology might help with this, and obviously the best solution is for opioid addicts to detox on something like suboxone and then abstain, but a sort of speculative harm-reduction idea occurred to me at some point that seems actionable now with no change in the technological or political situation.

Presumably these test strips have a concentration threshold below which they can't detect fentanyl, so it might be possible to dilute some of the drug enough that, if the concentration of fentanyl is above a given level, it will set off the test, and if it's below that level, it won't. There are some complications with this that friends have mentioned to me (fentanyl has a bit of a clumping tendency, for instance), but I think it would be great if someone figured out a practical guide for how to use test strips to determine the over/under concentration of a given batch of opioids, so that active users can adjust their dosage to try to avoid overdoses. Maybe someone could even make and promote an app based on the idea.
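To make the over/under idea concrete, here is a minimal sketch of the dilution arithmetic. The detection limit, sample mass, and water volume below are placeholder numbers I made up, not properties of any real strip, and the sketch ignores the clumping issue mentioned above; the point is just that choosing how much water a known mass of drug is dissolved into sets the concentration cutoff the strip effectively tests for.

```python
def cutoff_fraction(detection_limit_ng_per_ml, sample_mass_mg, water_volume_ml):
    """Fentanyl mass fraction at which a strip would flip from negative to
    positive, assuming (hypothetically) that the strip reads positive whenever
    (fraction * sample mass) / water volume >= its detection limit."""
    sample_mass_ng = sample_mass_mg * 1e6  # 1 mg = 1,000,000 ng
    return detection_limit_ng_per_ml * water_volume_ml / sample_mass_ng


# Placeholder numbers: with a 200 ng/mL limit, 10 mg of sample in 50 mL of
# water flips the strip at a fentanyl fraction of 0.001, i.e. 0.1% by weight.
print(cutoff_fraction(200, 10, 50))  # -> 0.001
```

A practical guide would presumably invert this: pick the fentanyl concentration you want to screen for, then work out the dilution that puts the strip's threshold at exactly that level.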

I would like to see a strong argument for the risk of "replaceability" as a significant factor in potentially curtailing someone's counterfactual impact in what might otherwise be a high-impact job. The central idea is that the "second choice" applicant, after the person who was chosen, might have done just as well, or nearly as well as, the "first choice" applicant, making the counterfactual impact of the first small. I would want an analysis of the cascading impact argument: that you "free up" the second-choice applicant to do other impactful work, who then "frees up" someone else, etc., and that this stream of "freed up" value mostly addresses the replaceability concern.
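For what it's worth, here is a minimal toy sketch (my own, with entirely made-up numbers) of the kind of accounting such an analysis would need: write down who does what in each world and compare totals, rather than only subtracting the runner-up's value in the role from yours. With different assumptions about everyone's alternatives, the cascading correction can be large or small, which is presumably exactly what the requested post would examine.

```python
# Toy accounting for the replaceability question. All values are illustrative
# units, not estimates of any real role.

def total(world):
    """Sum the value produced across every (person, activity, value) triple."""
    return sum(value for _, _, value in world)

# World A: you take the role; the runner-up does their plan B; a third person
# does their own plan.
world_a = [
    ("you",          "the role",             100),
    ("runner-up",    "runner-up's plan B",    90),
    ("third person", "third person's plan",   60),
]

# World B: you decline. The runner-up takes the role, their plan B passes down
# the chain to the third person, and you do your own plan B instead.
world_b = [
    ("you",          "your plan B",           50),
    ("runner-up",    "the role",              95),
    ("third person", "runner-up's plan B",    80),
]

naive_estimate = 100 - 95                       # ignores everything downstream
full_accounting = total(world_a) - total(world_b)
print(naive_estimate, full_accounting)          # -> 5 25 with these numbers
```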

I second this. Mostly because I have doubts about the 80,000 Hours cause area. I love their podcast, but I suspect they get a bit shielded from criticism in a way other cause areas aren't by virtue of being such a core EA organization. A more extensive and critical inquiry into "replaceability" would be welcome, whatever the conclusion.

More stuff about systems change! (complexity theory, phase shifts, etc.)

Being metacrisis aware and criticizing the whole "single cause area specialization" approach, because many of the big problems are interwoven.
