Ozzie Gooen

10085 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
919

Topic contributions
4

I feel like that's pretty unfair. 

You asked for a "rough fermi estimate of the trade-offs", I gave you a list of potential trade-offs. 

If we're willing to make decisions with logic like, "while genetically modifying unnaturally fast-growing chickens in factory farms would increase the pain of each one, perhaps the math works out so that there's less pain overall", I feel like adding considerations like, "this intervention will also make meat more expensive, which will reduce use" is a pretty vanilla consideration.
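To make the shape of that trade-off concrete, here's a toy Fermi sketch in Python. Every number in it is invented purely for illustration; the point is just the structure: total suffering is roughly (birds raised) × (days alive per bird) × (pain per day), and higher prices reduce the birds raised.

```python
# Toy Fermi sketch of the trade-off above. Every number here is invented
# purely to show the structure of the estimate, not to be a real estimate.

KG_PER_BIRD = 2.0  # assumed meat yield per bird (kg)

def total_pain(days_per_bird, pain_per_day, kg_demanded):
    """Total pain ~ (birds raised) x (days alive per bird) x (pain per day)."""
    birds = kg_demanded / KG_PER_BIRD
    return birds * days_per_bird * pain_per_day

kg_demanded = 1_000_000  # assumed annual demand (kg)

# Fast-growing birds: fewer days alive, but (assumed) more pain per day.
fast = total_pain(days_per_bird=42, pain_per_day=1.2, kg_demanded=kg_demanded)

# Slower-growing birds: more days alive, less pain per day.
slow = total_pain(days_per_bird=56, pain_per_day=1.0, kg_demanded=kg_demanded)

# The "vanilla consideration": banning fast growth raises costs and prices,
# which reduces demand via some assumed price elasticity.
price_increase = 0.20  # assumed ~20% price increase
elasticity = -0.6      # assumed price elasticity of demand
slow_less_demand = total_pain(
    days_per_bird=56,
    pain_per_day=1.0,
    kg_demanded=kg_demanded * (1 + elasticity * price_increase),
)

print(f"fast-growing:        {fast:,.0f} pain-units")              # 25,200,000
print(f"slow-growing:        {slow:,.0f} pain-units")              # 28,000,000
print(f"slow + demand drop:  {slow_less_demand:,.0f} pain-units")  # 24,640,000
# With these made-up numbers, the demand effect alone flips the comparison.
```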

(Obvious flag that I know very little about this specific industry)

Agreed that this seems like an important issue. Some quick takes:

Less immediately-obvious pluses/minuses to this sort of campaign:
- Plus #1: I assume that anything the animal industry dislikes would increase the costs of raising chickens. I'd correspondingly assume that we should want those costs to be high (though it would be much better if the government captured these funds, rather than them being lost to decreased efficiency).
- Plus #2: It seems possible that companies have been selecting for growth instead of for well-being. Maybe, if they just can't select for growth, then selecting more for not-feeling-pain would be cheaper.
- Minus #1: Focusing on the term "Frankenchicken" could discourage other selective breeding or similar, which could be otherwise useful for very globally beneficial attributes, like pain mitigation.
- Ambiguous #1: This could help stop further development here. I assume it would still be possible to later use selective breeding and similar methods to continue making larger, faster-growing chickens.

I think I naively feel like the pluses outweigh the negatives. Maybe I'd give that an 80% chance, without doing much investigation. That said, I'd also imagine there might well be more effective measures with much clearer trade-offs. The question of "is this a net-positive thing?" is arguably not nearly as important as "are there fairly-clearly better things to do?"

Lastly, for all of that, I do want to thank those helping animals like this. It's easy for me to argue things one way or the other, but I generally have serious respect for those working to change things, even if I'm not sure their methods are optimal. I think it's easy to seem combative on this, but we're all on a similar team here.

In terms of a "rough fermi analysis": as I work in the field, I think the numeric part is less important at this stage than just laying out a bunch of the key considerations and statistics. What I first want is a careful list of costs and benefits, one that's mature, fairly creative, and unbiased.

I am in favor of people considering unconventional approaches to charity.

At the same time, I find it pretty easy to argue against this. Some immediate things that come to mind:
1. My impression is that gambling is typically net-negative to participants, often highly so. I generally don't like seeing work go towards projects that are net-negative to their main customers (among others).
2. Out of all the "do business X, but the profits go to charity" options, why not pick a business that's itself beneficial? There are many business areas to choose from. Insurance can be pretty great, for instance - I think Lemonade Insurance did something clever with charity.
3. I think it's easy to start out altruistic with something like this, then become a worse person as you respond to incentives. In the casino business, the corporation is highly incentivized to use increasingly sleazy tactics to find, bait, and often bankrupt whales. If you don't do this, your competitors will, and they'll have more money to advertise.
4. I don't like making this the main thing, but I'd expect the PR to be really bad for anything this touches: "EAs don't really care about helping people, they just use that as an excuse to open sleazy casinos." There are few worse things to be associated with. A lot of charities are highly protective of their brands (and often with good reason).
5. It's very easy for me to imagine something like this creating worse epistemics. In order to grow revenue, it would be very "convenient" to downplay the harms caused by the casino. If such a thing does catch on in a certain charitable cluster, very soon that cluster will be encouraged to lie and self-deceive. We saw some of this with the FTX incident.
6. The casino industry attracts and feeds off clients with poor epistemics. I'd imagine they (as in, the people the casino actually makes money from) wouldn't be the type to care much about reasonable effective charities.

When I personally imagine a world where "a significant part of the effective giving community is tied to high-rolling casinos," it's hard for me to imagine this not being highly dystopian.

For all this, I hope the author doesn't treat this as an attack on them specifically. But I would consider it an attack on specific future project proposals that suggest advancing manipulative and harmful industries and tying such work to effective giving or effective philanthropy. I very much do not want to see more work done here. I'm spending some time on this comment mainly to use it as an opportunity to hopefully dissuade others considering this sort of thing in the future.

On this note, I'd flag that I think a lot of the crypto industry has been full of scams and other manipulative and harmful behavior. Some of this got very close to EA (e.g., FTX), and I'm sure there's a long tail of much smaller projects. I consider much of this (the bad parts) a black mark on all connected and responsible participants, and very much do not want to see more of it.

Happy to see this! I continue to think that smart EA funding expansion is an important area and wish it got more attention.

Minor notes:

  • If I'm counting right, this comes to a total of approximately $362,000. The Funding Circle website states that "Our members give above $100,000 to this cause area each year, and this is the expected minimum annual giving to join." So it seems like the funding circle is basically 2-3 people, I presume? (See the quick arithmetic sketch after this list.) Or is there money I'm missing?
  • Links to the nonprofits would be useful in the post. As a simple example, I tried searching for "Bedrock" and got many miscellaneous results.
  • I really hope this work can help us identify great founders in this area, and then we can scale up the work from those individuals.  
  • I'm surprised to see the fundraising charities focused on countries outside the biggest donor nations. Looking now, it seems like the vast majority of charitable funding is given by the top few countries. (Maybe that is where Ark and Bedrock are focused; this wasn't clear.)
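A minimal sketch of that member-count arithmetic, using only the two figures above (the upper bound assumes every member gives at least the stated minimum):

```python
# Back-of-the-envelope member count, using only the two figures above.
total_annual_giving = 362_000  # approximate total implied by the post (USD/year)
min_per_member = 100_000       # stated minimum annual giving to join (USD/year)

# If each member gives at least the minimum, n * min_per_member <= total,
# so the member count is bounded above by:
max_members = total_annual_giving // min_per_member
print(f"At most {max_members} members, i.e. roughly 2-3 people.")  # -> 3
```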
     

I'm also a broad fan of this sort of direction, but have come to prefer some alternatives. Some points:
1. I believe some of this is being done at OP. Some grantmakers make specific predictions, and some of those might later be evaluated. I think these are mostly private. My impression is that people at OP believe they have critical information that can't be made public, and I also assume it might be awkward to make any of this public.
2. Personally, I'd flag that making and resolving custom questions for each specific grant can be a lot of work. In comparison, it can be great to have general-purpose questions, like "How much will this organization grow over time?" or "Based on a public ranking of the value of each org, where will this org be?"
3. While OP doesn't seem to make public prediction market questions on specific grants, they do sponsor Metaculus questions and similar on key strategic questions. For example, there are tournaments on AI risk, bio, etc. I'm overall a fan of this.
4. In the future, AI forecasters could do interesting things. OP could take the best ones and have them make private forecasts on many elements of any program.

Kudos for bringing this up, I think it's an important area!

Do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?

There's a lot to this question.

I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting and working with people to improve X.

You'll see some discussions of "growing the tent" - this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles". 

One question here is something like, "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" Arguably, it would take more dedicated effort to really highlight this. I think we just don't have all that much work in this area now, compared to more object-level work.

Another factor seems to have been that FTX stained the reputation of EA and hurt CEA, after which there was a period with less attention on EA broadly and more on specific causes like AI safety.

In terms of "What should the EA community do", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has, in ways that aren't very aligned with these groups. 

All that said, I think it's easy for us to generally be positive towards people who take the principles in ways that don't match the specific current conclusions.

I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.

Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."

I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint: "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue."

Happy to see more work here.

Minor question - but are you familiar with any experiments that might show which voices are the most understandable, especially at high speeds? It seems to me like some voices are much better than others at 2x+ speeds, and I assume it should be possible to optimize for this. This is probably the main thing I personally care about.

I imagine a longer analysis would include factors like:
1. If intense AI happens in 10 to 50 years, it could do the inventing afterwards.
2. I expect that a very narrow slice of the population will be responsible for scientific innovations here, if humans do it. Maybe instead of considering the blanket policies [increase the population everywhere] or [decrease the population everywhere], we could consider more nuanced ones. Relatedly, if one wanted to help with animal welfare, I'd expect [pro-natalism] to be an incredibly ineffective way of doing so, for the sake of eventual scientific progress on animals.

I can't seem to find much EA discussion about [genetic modification of chickens to lessen suffering]. This naively seems like a promising area to me. I imagine others have investigated and decided against further work; I'm curious why.
