
arvomm

1172 karma · Joined · arvomm.com

Bio

I am a researcher at Rethink Priorities' Worldview Investigations Team. I also do work for Oxford's Global Priorities Institute. Previously I was a research analyst at the Forethought Foundation for Global Priorities Research. I took the role after completing the MPhil in Economics at Oxford University. Before that, I studied Mathematics and Philosophy at the University of St Andrews.

Find out more about me here.

Posts (14)


Sequences (1)

Worldview Investigations Team Research Agendas

Comments (25)

Topic contributions (1)

I agree with you, Oscar, and we've highlighted this in the summary table, where I borrowed your 'contrasting project preferences' terminology. Still, I think it could be worth drawing the conceptual distinctions, because doing so might help identify places where bargains can occur.

I liked your example too! We tried to add a few (a GCR-focused agent believes AI advances are imminent while a GHD agent is skeptical; an AI safety view borrows resources from a Global Health view to fund urgent AI research; the meat-eater problem; an agent supporting gun rights and another supporting gun control both fund a neutral charity like Oxfam...), but we could have done better at highlighting them. I've also added these to the table.

I found your last mathematical note a bit confusing because I originally read A, B, C as projects they might each support. But if they're outcomes (i.e. pairs of projects they would each support), then I think I'm with you!

Just to flag that Derek posted on this very recently. It's directly connected to both the present post and Michael's.

That's fair. The main thought that came to mind, which might not be useful, is developing patience (eagerness to reach conclusions is often incompatible with the work required) and choosing your battles early. As you say, it can be hard and time-consuming, so people in the community asking narrower questions and focusing on just one or two of them is probably the way to go.

Thanks for looking through our work and for your comment, Deborah. We recognise that different parts of our models are often interrelated in practice. In particular, we’re concerned about the problem of correlations between interventions too, as we flag here. This is an important area for further work. That being said, it isn’t clear that the cases you have in mind are problems for our tools. If you think, for instance, that environmental interventions are particularly good because they have additional (quantifiable or non-quantifiable) benefits, you can update the tool inputs (including the cause or project name) to reflect that and increase the estimated impact of that particular cause area. We certainly don't mean to imply that climate change is an unimportant issue.

I think another common pitfall is not working through things from first principles. I appreciate that it’s challenging and that any model is unrealistic. Still, BOTECs, pre-established boundaries between cause areas/worldviews, and our first instincts more broadly are likely to (and often do) lead us astray. Separately, I’m glad EA is so self-aware and concerned with healthy epistemics, but I think we could do more to guard against echo-chamber thinking.

I was personally struck by how sensitive portfolios are to even modest levels of risk aversion. I don’t know what the “correct” level of risk aversion is, or what the optimal decision procedure is in practice (even though most of my theoretical sympathies lie with expected value maximisation). Even so, seeing how introducing bits of risk aversion, even when using parameters relatively generous towards x-risk, still points towards spending most resources on animals (and sometimes global health) has led me to believe that that type of work is robustly better than I used to think. There are many uncertainties, and I don't think EA should be reduced to any one of its cause areas, but, especially given this update, I would be sad to see the animal space shrink in relative size any more than it has.
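To give a sense of that mechanism, here is a toy sketch (made-up numbers and a simple probability-weighting form of risk aversion; not our actual model or parameters) showing how even a modest weighting can flip a ranking that plain expected value gives to a low-probability, high-payoff option:

```python
# Toy illustration only: hypothetical prospects, not WIT's model or data.
# Each prospect is "v units of good with probability p, else nothing".

def expected_value(p, v):
    """Plain expected value of the prospect."""
    return p * v

def risk_weighted_value(p, v, a=1.5):
    """Value with the probability weighted as p**a; a > 1 encodes risk aversion."""
    return (p ** a) * v

reliable = (0.95, 120)        # e.g. a well-evidenced intervention
long_shot = (0.001, 200_000)  # e.g. a speculative, enormous-payoff intervention

for name, (p, v) in [("reliable", reliable), ("long shot", long_shot)]:
    print(f"{name:>9}: EV = {expected_value(p, v):7.1f}, "
          f"risk-weighted = {risk_weighted_value(p, v):7.1f}")

# Plain EV favours the long shot (200.0 vs 114.0); with a = 1.5 the
# risk-weighted ranking reverses (about 111.1 vs 6.3).
```

The numbers are arbitrary; the point is only directional, namely that mild risk aversion can reorder options that plain expected value ranks the other way.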

Thanks for the question, Carter! Would you mind saying a bit more about the kind of empirical work you have in mind? Are you thinking about empirical research into the inputs to the tools? Or are you thinking about using the tools to conduct research on people’s views about cause prioritization? Do you have any concrete empirical projects you’d like to see WIT do?

Thanks for your question, Chris. We hear you about the importance of making the content accessible. We’ve aimed to include the main takeaways in intro and conclusion posts that can be easily skimmed. We also provide an executive summary at the beginning of each post. We hope that these help, but we take the point that it may not be obvious that we’ve taken these steps, and we’ll revisit this suggestion in future sequences to make sure the purposes of those posts and introductory materials are clear. It may also be useful for us to consider more visual summaries of some of our results, as we provided for our discussion of human extinction. Do you have any concrete suggestions given the approach we’ve adopted so far?

Thank you for your kind words, Ben. A substantial amount of in-house software work went into both tools. We used React and Vite to create these, and Python for the server running the maths behind the scenes. If the interest in this type of work and the value added are high enough, we'd likely want to do more of it.

On your last point, we haven't done open-source development from scratch for the projects we've completed so far, but it might be a good strategy for future ones. That said, for transparency, we've made all our code accessible here.

Thank you for your comment, Lukas. We agree that this tool, and more generally this approach, could be useful even in that case, when all considerations are known. The ideas we built on and the language we used came from the literature on moral parliaments as an approach to understanding and tackling moral uncertainty, which is why we borrowed that framing.
