Erin Braid

509 karma

Bio

Research Assistant on the Global Health and Development team at Rethink Priorities. My opinions are my own.

Previously: Cause Prioritization Intern at Open Philanthropy; grad student at the Institute for Logic, Language and Computation (University of Amsterdam); Volunteer Analyst at SoGive; Summer Research Analyst at GiveWell.

Comments (15)

I just got my physical copy of the inaugural issue and I'm very happy with it! I love the cool-yet-professional design, color schemes, and infographics. I've ordered a copy for my parents and a copy for my parents-in-law, with offers to get them subscriptions if they like the first issue, and I'm feeling enthusiastic about this as a way to introduce people to EA-adjacent ideas and worldviews. I personally was nerdsniped into EA, drawn in by the fascinating problems and the efforts to think deeply and clearly about them. That's the experience I've wanted for other people too, but if they weren't drawn in by GiveWell spreadsheets or some pointers to a sprawling blogosphere, I wasn't sure what to offer them. I think Asterisk will be a great fit for people like my parents: smart and curious, subscribers to multiple paper newspapers, not super technical or super online. Thanks Asterisk team :)

Something I personally would like to see from this contest is rigorous and thoughtful versions of leftist critiques of EA, ideally translated as much as possible into EA-speak. For example, I find "bednets are colonialism" infuriating and hard to engage with, but things like "the reference class for rich people in western countries trying to help poor people in Africa is quite bad, so we should start with a skeptical prior here" or "isolationism may not be the good-maximizing approach, but it could be the harm-minimizing approach that we should retreat to when facing cluelessness" make more sense to me and are easier to engage with.

That's an imaginary example -- I myself am not a rigorous and thoughtful leftist critic and I've exaggerated the EA-speak for fun. But I hope it points at what I'd like to see!

I for one would listen to a podcast about shelters and their precedents! That's not to say you should definitely make it, since I'm not sure an audience of mes would be super impactful (I don't see myself personally working on shelters), but if you're just trying to judge audience enthusiasm, count me in!

Podcasts I've enjoyed on this topic (though much less impact-focused and more highly produced than I imagine you'd aim for): "The Habitat" from Gimlet Media; the Biosphere 2 episode of "Nice Try!"

I see [EA] as a key question of "how can we do the most good with any given unit of resource we devote to doing good" and then taking action upon what we find when we ask that.

I also consider this question to be the core of EA, and I have said things like the above to defend EA against the criticism that it's too demanding. However, I have since come to think that this characterization is importantly incomplete, for at least two reasons:

  1. It's probably inevitable, and certainly seems to be the case in practice, that people who are serious about answering this question overlap a lot with people who are serious about devoting maximal resources to doing good. Both in the sense that they're often the same people, and in the sense that even when they're different people, they'll share a lot of interests and it might make sense to share a movement.
  2. Finding serious answers to this question can cause you to devote more resources to doing good. I feel very confident that this happened to me, for one! I don't just donate to more effective charities than the version of me in a world with no EA analysis; I also donate a lot more money than that version does. I feel great about this, and I would usually frame it positively - I feel more confident and enthusiastic about the good my donations can do, which inspires me to donate more - but negative framings are available too.

So I think it can be a bit misleading to imply that EA is only about this key question of per-unit maximization, and contains no upwards pressures on the amount of resources we devote to doing good. But I do agree that this question is a great organizing principle.

Love this question! I too would identify as a hopelessly pure mathematician (I'm currently working on a master's thesis in category theory), and I too spent some time trying to relate my academic interests to AI safety. I didn't have much success; in particular, nothing ML-related ever appealed. I hope it works out better for you!

Thanks for this post Julia! I really related to some parts of it, while other parts were very different from my experience. I'll take this opportunity to share a draft I wrote sometime last year, since I think it's in a similar spirit:

I used to be pretty uncomfortable with, and even mad about, the prominence of AI safety in EA. I always saw the logic – upon reading the sequences circa 2012, I quickly agreed that creating superintelligent entities not perfectly aligned with human values could go really, really badly, so of course AI safety was important in that sense – but did it really have to be such a central part of the EA movement, which (I felt) could otherwise have much wider acceptance and thus save more children from malaria? Of course, it would be worth allowing some deaths now to prevent a misaligned AI from killing everyone, so even then I didn’t object exactly, but I was internally upset about the perception of my movement and about the dead kids. 

I don’t feel this way anymore. What changed?

  1. [people aren’t gonna like EA anyways – I’ve gotten more cynical and no longer think that AI was necessarily their true objection]
  2. [AI safety more concrete now – the sequences were extremely insistent but without much in the way of actual asks, which is an unsettling combo all by itself. Move to Berkeley? Devote your life to blogging about ethics? Spend $100k on cryo? On some level those all seemed like the best available ways to prove yourself a True Believer! I was willing to lowercase-b believe, but wary of being a capital-B Believer, which in the absence of actual work to do is the only way to signal that you understand the Most Important Thing In The World]
  3. [practice thinking about the general case, longtermism]

Unfortunately I no longer remember exactly what I was thinking with #3, though I could guess. #1 and #2 still make sense to me and I could try to expand on them if they're not clear to others. 

Thinking about it now, I might add something like:

  4. [better internalization of the fact that EA isn't the only way to do good lol – people who care about global health and wouldn't care about AI are doing good work in global health as we speak]

To support people in following this post's advice, employers (including Open Phil?) need to make it even quicker for applicants to submit the initial application materials.

From my perspective as an applicant, fwiw, I would urge employers to reduce the scope of questions in the initial application materials, more so than the time commitment. EA orgs have a tendency to ask insanely big questions of their early-stage job applicants, like "How would you reason about the moral value of humans vs. animals?" or "What are the three most important ways our research could be improved?" Obviously these are important questions, but to my mind they have the perverse effect that the more an applicant has previously thought about EA ideas, the more daunting it seems to answer a question like that in 45 minutes. Case in point, I'm probably not going to get around to applying for some positions at this post's main author's organization, because I'm not sure how best to spend $10M to improve the long-term future and I have other stuff to do this week. 

Open Phil scores great on this metric by the way - in my recent experience, the initial screening was mostly an elaborate word problem and a prompt to explain your reasoning. I'd happily do as many of those as anyone wants me to.

Maybe the process of choosing a community service project could be a good exercise in EA principles (as long as you don't spend too long on it)? 

I like this idea and would even go further -- spend as much time on it as people are interested in spending; the decision-making process might prove educational!

I can't honestly say I'm excited about the idea of EA groups worldwide marching out to pick up litter. But it seems like a worthwhile experiment for some groups, to get buy-in on the idea of volunteering together, brainstorm volunteering possibilities, decide between them based on impact, and actually go and do it. 
