Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

22 · 2d · 2
I've been thinking a bunch about a fundamental difference between the EA community and the LessWrong community. LessWrong is optimized for the enjoyment of its members. At any LessWrong event I go to, in any city, the focus is on "what will we find fun to do?" This is great. Notice how the community isn't optimized for "making the world more rational." It is a community that selects for people interested in rationality, and once you get those kinds of people in the same room, the community tries to optimize for FUN for them.

EA as a community is NOT optimized for the enjoyment of its members. It is optimized for making the world a better place. This is a feature, not a bug, and surely it should be net positive, since its goal is by definition net positive. When planning an EAG or EA event, you measure it on impact: say, professional connections made, or how many new high-quality AI alignment researchers you might have created on the margin. You don't measure it on how much people enjoyed themselves (or you do, but for instrumental reasons, to get more people to come so that you can continue to have impact).

As a community organizer in both spaces, I notice that I more easily leave EA events I organized feeling burnt out and unfulfilled compared to similar LW/ACX events. I think the fundamental difference mentioned above explains why. I don't know if I'm pointing at anything that resonates with anyone, but I don't see this discussed much among community organizers, and it seems important to highlight. Basically, in LW/ACX spaces, specifically as an organizer, I more easily feel like a fellow traveller up for a good time. In EA spaces, specifically as an organizer, I more easily feel like an unpaid recruiter.
17 · 6d · 1
I'm starting to put together plans for this year's Giving Season events (roughly, start of November to end of December). If you remember last year's events, it'd be cool to know:

1. What was memorable to you from that period?
2. Was anything in particular valuable for you or for your donation decisions?
3. Is there anything you would expect to see this year?
4. What would you hope to see this year?

Thanks!
34 · 1mo · 2
I'm concerned about the new terms of service for Giving What We Can, which will go into effect after August 31, 2024. This is a significant departure from Effective Ventures' TOS (GWWC is spinning out of EV), which has users grant EV an unlimited but non-exclusive license to use feedback or suggestions they send, while retaining the right to do anything with those ideas themselves. I've previously talked to GWWC staff about my ideas to help people give effectively, like a donation decision worksheet I made. If this provision goes into effect, it would deter me from sharing my suggestions with GWWC in the future, because I would risk losing the right to disseminate or continue developing those ideas or materials myself.
69 · 2mo · 4
David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he'd been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn't put much thought into what to do with his fortune.

Are there concerted efforts in the EA community to get these people on board? Like, is there a Google Doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability that he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this.

Am I missing some obvious reason this isn't worth pursuing or is likely to fail? Have people tried? I'm a bit of an outsider here, so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take! https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
35 · 1mo · 1
I had written up what I learned as a Manifund micrograntor a few months ago, but never got around to polishing it for publication. Still, I think those reactions could be useful for people in the EA Community Choice program now. You've got the same basic pattern: a bunch of inexperienced grantmakers with a few hundred bucks each to spend and ~40-50 projects to look at quickly. I'm going to post those notes without much editing, since the program is fairly short. A few points are specific to the types of proposals that were in the microgranting experiment (which came from the ACX program).

General Feedback for Grant Applicants [from ACX Microgrants Experience]

Caution: This feedback is based on a single micrograntor's experience. It may be much less applicable to other contexts -- e.g., those involving larger grantors, grantors who do not need to evaluate a large number of proposals in a limited amount of time, or grantors who are in a position to fund a significant percentage of the grants they review.

I had pre-committed to myself that I would look at every single proposal unless the title convinced me that it was far too technical for me to understand. This probably affected my experience, and was done more for educational/information-value reasons than anything else.

* If you have a longer proposal, please start with an executive summary limited to ~300 words. You may get only 2-3 minutes on an initial screen, maybe even less.
* After getting a sense of the basic contours of a proposal, I usually had a decent sense of where the weaker points were and wanted to check efficiently whether they were clear dealbreakers. Please be sure to red-team your proposal and address the weak points!
* Use shorter paragraphs with titles or other clear, skimmable signals. As per the above, I need to be able to quickly find your discussion of specific points.
* One recurrent weakness was an unclear theory of impact that I had to infer from the proposal.
25 · 1mo · 9
An idea that's been percolating in my head recently, probably thanks to the EA Community Choice, is more experiments in democratic altruism. One of the stronger leftist critiques of charity revolves around the massive concentration of power in a handful of donors. In particular, we leave it up to donors to determine whether they're actually doing good with their money, but people are horribly bad at self-perception, and very few would be good at admitting that their past donations were harmful (or merely morally suboptimal).

It seems clear to me that Dustin & Cari are particularly worried about this, and Open Philanthropy was designed as an institution to protect them from themselves. However, (1) Dustin & Cari still have a lot of control over which cause areas to pick, and sort of informally defer to community consensus on this (please correct me if I have the wrong read on that), and (2) although it was intended to, I doubt it can scale beyond Dustin & Cari in practice. If Open Phil were funding harmful projects, it is relying only on the diversity of its internal opinions to defuse that; and those opinions are subject to a self-selection effect in who applies to OP, as well as an unwillingness to criticise one's employer.

If some form of EA were practiced on a national scale, I wonder if it could take the form of an institution that selects cause areas democratically and has a department of accountable fund managers to determine the most effective way to pursue them. I think this differs from the Community Choice and other charity elections because it doesn't require donors to think through implementation (except through accountability measures on the fund managers, which would come up much more rarely), and I think members of the public (and many EAs!) are much more confident in their desired outcomes than in their desired implementations; in this way, it reflects how political elections work in practice. In the near term, EA could bootstrap such a fund.
62 · 3mo · 12
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing.

It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.

I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on which choices repel which kinds of people, and whether that's worth it.

EDIT: This is not a solemn vow forswearing EA forever. If things change, I would be more than happy to join again.

EDIT 2: For those wondering what this quick take is reacting to, here's a good summary by David Thorstad.
9 · 18d
The original website for Students for High Impact Charities (SHIC) at https://shicschools.org is down (you can find it in the Wayback Machine), but the program scripts and slides they used in high schools are still available via their Google Drive link at https://drive.google.com/drive/folders/0B_2KLuBlcCg4QWtrYW43UGcwajQ. This could potentially be a valuable EA community-building resource.