
Habryka


Bio

Project lead of LessWrong 2.0, often helping the EA Forum with various issues with the forum. If something is broken on the site, there's a good chance it's my fault (Sorry!).


The point of Lightspeed Grants was explicitly to create a product that would allow additional funders (beyond Jaan) to distribute funding to important causes. 

It also had the more immediate positive effect of increasing the diversity and impact of Jaan's funding, though that's not where I expected the big wins to come from and not my primary motivation for working on it. I still feel quite excited about this, but stopped working on it in substantial part because Open Phil cut funding for all non-LW programs at Lightcone.

The ask here was for development cost and operations, not for any regranting money.

Was the idea that, with more funding, these programs could be more successful and attract more mega donors from outside the community?

Basically.

To be clear, I think it's not a crazy decision for Open Phil to think that Jaan is in a better position to fund our SFF and Lightspeed work (though not funding us for Lightspeed did have a pretty substantial effect on our ability to work on it). The bigger effect came from both the implicit and explicit discouragement of working on SFF and Lightspeed over the years, mostly picked up from random conversations with Open Phil staff and people very close to Open Phil staff. 

I generally don't have a ton of bandwidth with my grantmakers at Open Phil, but during our last funding request, around 14 months ago, I got the strong sense that they thought working on SFF and Lightspeed was a waste of time and money (and indeed, they gave us essentially no funding for Lightspeed when we asked for it then). Also, to be clear, they never got to the point of asking us how much money we wanted or how much it would cost. After six months of delays, they simply told us out of the blue that they weren't interested in funding any non-LW projects, while I was still expecting to communicate more of our plans and needs to them. So my best guess is that they never actually considered it, or dismissed it at a pretty early stage.

Yes, I think funding an endowment would have been a great thing to do (and something I advocated for many times over the years). My sense is that it's too late now.

My best guess at the situation is that the "limited capacity" is a euphemism for "Dustin/Cari don't think these things are valuable and Open Phil doesn't have time to convince them of their value".

Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari's public reputation in an adverse way, or are generally "weird" in a way that might impose more costs on Dustin and Cari.


I find myself particularly disappointed in this as I was working for many years on projects that were intended to diversify the funding landscape, but Open Phil declined to fund those projects, and indeed discouraged me from working on them multiple times (most notably SFF and most recently Lightspeed Grants).

I think Open Phil could have done a much better job at using the freedom it had to create a diverse funding landscape, and I think Open Phil is largely responsible for the degree to which the current funding landscape is as centralized as it currently is.

Do you mean financial costs, or all net costs together, including potentially through time, motivation, energy, cognition?

I meant net costs all together, though I agree that once you take motivation into account, "net costs" becomes a tricky concept. Many people can find the diet motivating, and that is important to think about, but it also doesn't fit nicely into a harm-reduction framework.

Financial/donations: It's not clear to me that my diet is more expensive than if I were omnivorous. Some things I've substituted for animal products are cheaper and others are more expensive.

I mean, being an omnivore would allow you to choose between more options, and generally, having more options very rarely hurts you.

Overall I like your comment.

I have had >30 conversations with EA vegetarians and vegans about their reasoning here. The people who thought about it the most seem to usually settle on it for signaling reasons. Maybe this has changed over the last few years in EA, but it seemed to be where most people I talked to were at when I had lots of these conversations in 2018.

I agree that many people say (1), but when you dig into it, it seems clear that people incur costs that would be better spent on donations, so I don't think it's good reasoning. As far as I can tell, most people who think about it carefully stop thinking it's a good reason to be vegan/vegetarian.

I do think the self-signaling and signaling effects are potentially substantial. 

I also think (4) is probably the most common reason, and I think it probably captures something important, but it seems like a bad inference that "someone is in it to prevent harm" if (4) is their reason for being vegetarian or vegan.

I am skeptical that exhibiting one of the least cost-effective behaviors for the purpose of reducing suffering would be much correlated with being "in it to prevent harm". Donating to animal charities, being open to trades with other people about reducing animal suffering, or other things that are actually effective seem like much better indicators that people are actually in it to prevent harm.

My best guess is that most people who are vegetarians, vegans or reducetarians, and are actually interested in scope-sensitivity, are explicitly doing so for signaling and social coordination reasons (which to be clear, I think has a bunch going for it, but seems like the kind of contingent fact that makes it not that useful as a signal of "being in it to prevent harm"). 

Answer by Habryka

Copying over my thoughts from a recent comment thread (mostly because of the links to existing resources): 

I think matching fundraisers tend to generally be dishonest by overstating the counterfactualness of a matching fund. See this old post for a lot of the standard arguments for that: https://forum.effectivealtruism.org/posts/a2gYyTnAP36TxqdQp/matching-donation-fundraisers-can-be-harmfully-dishonest 

My response made some general points that I wish were more widely understood:

  • Pitching matching donations as leverage (e.g. "double your impact") misrepresents the situation by overassigning credit for funds raised.
  • This sort of dishonesty isn't just bad for your soul, but can actually harm the larger world - not just by eroding trust, but by causing people to misallocate their charity budgets.
  • "Best practices" for a charity tend to promote this kind of dishonesty, because they're precisely those practices that work no matter what your charity is doing.
  • If your charity is impact-oriented - if you care about outcomes rather than institutional success - then you should be able to do substantially better than "best practices".

See also this Jeff Kaufman post: https://forum.effectivealtruism.org/posts/hQtayqi3r6bo3EPoh/the-counterfactual-validity-of-donation-matching 

Or this old GiveWell post: https://blog.givewell.org/2011/12/15/why-you-shouldnt-let-donation-matching-affect-your-giving/ 

Agree with this. I think doing weird signaling stuff with bets worsens the signal that bets provide about people's actual epistemic states.
