Austin

Cofounder @ Manifund & Manifold
3653 karma · San Francisco, CA, USA

Bio

Hey there~ I'm Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold!

Comments (205)

Thanks for the recommendation, Benjamin! We think donating to Manifund's AI Safety regranting program is especially good if you don't have a strong inside view on the different orgs in the space but trust our existing regrantors and the projects they've funded; or if you're excited about providing pre-seed/seed funding for new initiatives or individuals, rather than later-stage funding for more established charities (our regrantors act like "angel investors for AI safety").

If you're a large donor (eg giving >$50k/year), we're also happy to work with you to sponsor new AI safety regrantors, or suggest to you folks who are particularly aligned with your interests or values. Reach out to me at austin@manifund.org!

This makes sense to me; I'd be excited to fund research or especially startups working to operationalize AI freedoms and rights.

FWIW, my current guess is that the proper unit for extending legal rights is not a base LLM like "Claude Sonnet 3.5" but rather a corporation-like entity with a specific charter, context/history, economic relationships, and accounts. Its cognition could be powered by LLMs (the way eg McDonald's cognition is powered by humans), but it is fundamentally a different entity due to its structure/scaffolding.
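
As a rough illustration (purely hypothetical, not anything Manifund has built), the kind of entity I have in mind might be sketched like this, with the LLM as just one swappable component of the scaffold; all names and fields here are made up for the sake of the example:

```python
from dataclasses import dataclass, field

@dataclass
class ScaffoldedEntity:
    """Hypothetical sketch: the unit that might hold legal rights is the
    scaffolded entity (charter + history + accounts), not the base model."""
    charter: str                                              # stated purpose / governing rules
    history: list[str] = field(default_factory=list)          # persistent context across interactions
    accounts: dict[str, float] = field(default_factory=dict)  # economic relationships and balances
    cognition_model: str = "some-llm"                         # swappable backend, like staff at McDonald's

    def act(self, situation: str) -> str:
        # A real system would call the underlying LLM with the charter and
        # history as context; this placeholder just records the decision.
        decision = f"[{self.cognition_model}] acting under charter '{self.charter}': {situation}"
        self.history.append(decision)
        return decision
```

The point of the sketch is that swapping out `cognition_model` leaves the entity's identity (charter, history, accounts) intact, which is why the scaffold rather than the base model seems like the natural unit.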

Thanks for cross posting, this got Shapley values to "click" for me!

No concrete timelines at the moment, almost definitely more than a few months from now.

That's good to know - I assume Oli was being somewhat hyperbolic here. Do you (or anyone else) have examples of right-of-center policy work that OpenPhil has funded?

I'm not aware of any projects that aim to advise what we might call "Small Major Donors": people giving away perhaps $20k-$100k annually.

We don't advertise very much, but my org (Manifund) does try to fill this gap:

  • Our main site, https://manifund.org/, allows individuals and orgs to publish charitable projects and raise funding in public, usually for projects in the range of $10k-$200k
  • We generally focus on: good website UX, transparency (our grants, reasoning, website code and meeting notes are all public), moving money fast (~1 week rather than months)
  • We are more self-serve than advisory; we mostly expect our donors to find projects they like themselves, which they can do because the grant proposals include large amounts of detail, and they can chat directly with project creators in our comments section
  • In the EA space, we're particularly open to weird arrangements; beyond providing lightweight fiscal sponsorship to hundreds of individuals and experimenting with funding mechanisms, we have eg loaned money to aligned orgs and invested in for-profit enterprises
    • If you're interested in donating medium-sized amounts in unusual ways, reach out to me at austin@manifund.org!

I encourage Sentinel to add a paid tier on their Substack, just as an easy mechanism for folks like you & Saul to give money, without paywalling anything. While eg $10/mo subscriptions are unlikely to meaningfully affect Sentinel's finances at this stage, I think getting dollars in the bank can be a meaningful proof of value, both to yourselves and to other donors.

@Habryka has stated that Lightcone has been cut off from OpenPhil/GV funding; my understanding is that OP/GV/Dustin do not like the rationalism brand because it attracts right-coded folks. Many kinds of AI safety work also seem cut off from this funding; reposting a comment from Oli:

Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia inducing. I am quite confident of the broad trends here, but it's definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that include plausible deniability and defensibility.

...

As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs, it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any Open Phil-funded AI policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.

Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]

Honestly, I think there might no longer be a single organization that I have historically been excited about that OpenPhil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth's work, or Wei Dai's work, or Daniel Kokotajlo's work, or Brian Tomasik's work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]

I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]

In-general my sense is if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, and who OP thinks has "good judgement" on public comms, and who isn't the kind of person who might say weird or controversial stuff, and is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn't like, or that might strain Dustin's relationships with others in any non-trivial way. 

Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in-sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren't the kind of person who gets the hint that this is how the game is played now.

And to provide some pushback on things you say, I think now that OP's bridges with OpenAI are thoroughly burned after the Sam firing drama, OP is pretty OK with people criticizing OpenAI (since what social capital is there left to protect here?). My sense is criticizing Anthropic is slightly risky, especially if you do it in a way that doesn't signal what OP considers good judgement on maintaining and spending your social capital appropriately (i.e. telling them that they are harmful for the world, or should really stop, is bad, but doing a mixture of praise and criticism without taking any controversial top-level stance is fine), but mostly also isn't the kind of thing that OP will totally freak out about. I think OP used to be really crazy about this, but now is a bit more reasonable, and it's not the domain where OP's relationship to reputation-management is causing the worst failures.

I think all of this is worse in the longtermist space, though I am not confident. At present it wouldn't surprise me very much if OP would defund a global health grantee because their CEO endorsed Trump for president, so I do think there is also a lot of distortion and skew there, but my sense is that it's less, mostly because the field is much more professionalized and less political (though I don't know how they think, for example, about funding on corporate campaign stuff, which feels like it would be more political and invite more of these kinds of skewed considerations).

Also, to balance things, sometimes OP does things that seem genuinely good to me. The lead reduction fund stuff seems good, genuinely neglected, and I don't see that many of these dynamics at play there (I do also genuinely care about it vastly less than OP's effect on AI Safety and Rationality things).

Also, Manifold, Manifund, and Manifest have never received OP funding -- I think in the beginning we were too illegible for OP, and by the time we were more established and OP had hired a full-time forecasting grantmaker, I would speculate that we were seen as too much of a reputational risk given eg our speaker choices at Manifest.

This looks awesome! $1k struck me as a pretty modest prize pool given the importance of the questions; I'd love to donate $1k towards increasing this prize, if you all would accept it (or possibly more, if you think it would be useful).

I'd suggest structuring this as 5 more $200 prizes (or 10 $100 honorable mentions) rather than doubling the existing prizes to $400 -- but really it's up to you, I'd trust your allocations here. Let me know if you'd be interested!

Raising funding from the "EA general public" is quite rare at the moment - most orgs I'm familiar with get the vast majority of their funding from a handful of institutions (OP, EA Funds, SFF, some donor circles).

I do think fundraising from the public can be a good forcing function and I wish more EA nonprofits tried to do so. Especially meta/EA internal orgs like 80k or EA Forum or EAG (or Lightcone), since there, "how much is a user willing to donate" could be a very good metric for how much value users are receiving from their work.

One of the best things that happened to Manifold early on was when our FTX Future Fund regrantor offered to cover up to half of our $2m seed round - contingent on us raising the other half from other sources. We then had to build the muscle of fundraising from regular Silicon Valley angels/VCs, which especially served us well when Future Fund went kaput.

Manifund tries to make public fundraising for EA projects much easier, and there have been a few success cases such as MATS and Act I - though in the end most of the dollars we move come from our regrantors.
