Jason

16750 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts (2)

Comments (1964)

Topic contributions (2)

One's second, third, etc. choices would only come into play when/if their first choice is eliminated by the IRV system. Although there could be some circumstances in which voting solely for one's #1 choice would be tactically wise, I believe they are rather narrow and would only be knowable in the last day or two.
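
For readers less familiar with the mechanic, here's a minimal sketch of instant-runoff counting (illustrative only -- this is not the Forum's actual vote-counting code, and the ballots and candidate names are made up):

```python
# Toy model of instant-runoff voting (IRV): lower preferences only come into
# play once the ballot's higher preferences have been eliminated.

def irv_winner(ballots):
    remaining = {c for ballot in ballots for c in ballot}
    while len(remaining) > 1:
        # Each ballot counts toward its highest-ranked surviving candidate.
        tallies = {c: 0 for c in remaining}
        for ballot in ballots:
            for c in ballot:
                if c in remaining:
                    tallies[c] += 1
                    break
        top = max(tallies, key=tallies.get)
        if 2 * tallies[top] > sum(tallies.values()):
            return top  # majority of active ballots
        # No majority yet: eliminate the candidate with the fewest votes;
        # their ballots transfer to each ballot's next surviving choice.
        remaining.discard(min(tallies, key=tallies.get))
    return remaining.pop()

# Example: C is eliminated first, C's ballot transfers to B, and B wins 3-2.
print(irv_winner([["A"], ["A"], ["B"], ["B"], ["C", "B"]]))
```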

I think it's reasonable for a donor to decide where to donate based on publicly available data and to share their conclusions with others. Michael disclosed the scope and limitations of his analysis, and referred to other funders having made different decisions. The implied reader of the post is pretty sophisticated and would be expected to know that these funders may have access to information on initiatives that haven’t been/can't be publicly discussed.

While I appreciate why orgs may not want to release public information on all initiatives, the unavoidable consequence of that decision is that small/medium donors are not in a position to consider those initiatives when deciding whether to donate. Moreover, I think Open Phil et al. are capable of adjusting their own donation patterns in consideration of the fact that some orgs' ability to fundraise from the broader EA & AIS communities is impaired by their need for unusually-low-for-EA levels of public transparency.

"Run posts by orgs" is ordinarily a good practice, at least where you are conducting a deep dive into some issue on which one might expect significant information to be disclosed. Here, it seems reasonable to assume that orgs will have made a conscious decision about what general information they want to share with would-be small/medium donors. So there isn't much reason to expect that an inquiry (along with notice that the author is planning to publish on-Forum) would yield material additional information.[1] Against that, the costs of reaching out to ~28 orgs is not insignificant and would be a significant barrier to people authoring this kind of post. The post doesn't seem to rely on significant non-public information, accuse anyone of misconduct, or have other characteristics that would make advance notice and comment particularly valuable. 

Balancing all of that, I think the opportunity for orgs to respond to the post in comments was and is adequate here.

  1. ^

    In contrast, when one is writing a deep dive on a narrower issue, the odds seem considerably higher that the organization has material information that isn't published because of opportunity costs, lack of any reason to think there would be public interest, etc. But I'd expect most orgs' basic fundraising ask to have been at least moderately deliberate.

You may want to add something like [AI Policy] to the title to clue readers in to the main subject matter and whether they'd like to invest the time to click on and read it. There's the AI tag, but that doesn't show up on the frontpage, at least on my mobile.

(Your ranking isn't displayed on the comment thread, so if you were intending to communicate to readers which organizations you were referring to, you may want to edit your comment here.)

I don't have a lot of confidence in this vote, and it's quite possible my ranking will change in important ways. Because only the top three organizations place in the money, we will all have the ability to narrow down which placements are likely to be outcome-relevant as the running counts start displaying. I'm quite sure I have not given all 36 organizations a fair shake in the 5-10 minutes I devoted to actually voting.

Has there been any consideration of creating sub-funds for some or all of the critical ecosystem gaps? Conditional on areas A, B, and C being both critical and ~not being addressed elsewhere, it would feel a bit unexpected if donors had no way to give money to A, B, or C exclusively.

If a donor values A, B, and C differently -- and yet the donor's only option is to defer to LTFF's allocation of their marginal donation between A, B, and C -- they may "score" LTFF less well than they would score an opportunity to donate to whichever area they rated most highly by their own lights.

The best reason to think this might not make a difference: If enough donors wanted to defer to LTFF's allocation among the three areas, then donor choice of a specific cause would have no practical effect due to funging.

Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:

  1. One of the overarching questions to consider when reading any lawsuit is that of remedy. For instance, the classic remedy for breach of contract is money damages . . . and the potential money damages here don't look that extensive relative to OpenAI's money burn.
  2. Broader "equitable" remedies are sometimes available, but they are more discretionary and there may be some significant barriers to them here. Specifically, a court would need to consider the effects of any equitable relief on third parties who haven't done anything wrongful (like the bulk of OpenAI employees, or investors who weren't part of an alleged conspiracy, etc.), and consider whether Musk unreasonably delayed bringing this lawsuit (especially in light of those third-party interests). As a hot take, I am inclined to think these factors would weigh powerfully against certain types of equitable remedies.
    1. Stated more colloquially, the adverse effects on third parties and the delay ("laches") would favor a conclusion that Musk will have to be content with money damages, even if they fall short of giving him full relief.
    2. Third-party interests and delay may be less of a barrier to equitable relief against Altman himself.
  3. Musk is an extremely sophisticated party capable of bargaining for what he wanted out of his grants (e.g., a board seat), and he's unlikely to get the same sort of solicitude on an implied contract theory that an ordinary individual might. For example, I think it was likely foreseeable in 2015 to January 2017 -- when he gave the bulk of the funds in question -- that pursuing AGI could be crazy expensive and might require more commercial relationships than your average non-profit would ever consider. So I'd be hesitant to infer much in the way of implied-contractual constraints on OpenAI's conduct beyond what section 501(c)(3) of the Internal Revenue Code and California non-profit law require.
  4. The fraud theories are tricky because the temporal correspondence between accepting the bulk of the funds and the alleged deceit feels shaky here. By way of rough analogy, running up a bunch of credit card bills you never intended to pay back is fraud. Running up bills and then later deciding that you aren't going to pay them back is generally only a contractual violation. I'm not deep into OpenAI drama, but a version of the story in which the heel turn happened later in the game than most/all of Musk's donations and assistance seems plausible to me.


The surprise for me was that QURI has only been able to fundraise for ~24% of its lower-estimate CY25 funding needs. Admittedly, I don't follow funding trends in this space, so maybe that news isn't surprising to others. The budget seems sensible to me, by the way. Having less runway also makes sense in light of events over the past two years.

I think the confusion for me involves a perceived tension between numbers that might suggest a critical budget shortfall at present and text that seemed more optimistic in tone (e.g., talking about eagerness to expand). Knowing that there's a possible second major funder helps me understand why that tension might be there -- depending on the possible major funder's decision, it sounds like the effect of Forum-reader funding on the margin might range from ~"keeping the lights on" to "funding some expansion"?

We're looking to raise another ~$200k for 2025, to cover our current two-person team plus expenses. We'd also be enthusiastic about expanding our efforts if there is donor interest.

What is QURI's total budget for 2025? If I'm reading this correctly -- that there's currently a $200K funding gap for the fast-approaching calendar year -- that is surprising information to me in light of what I assumed the total budget for a two-person org would be.

Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.

Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this well-written post didn't say that.

Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It's generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and it seems that would be the case here. However, where a funder is as critical to an ecosystem as GV is here, I think fairly high transparency about an unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.

  1. ^

    I place this in quotes because the term is ambiguous.
