Building bespoke quantitative models to support decision-makers in AI and bio. Right now that means: forecasting capability gains from post-training enhancements on top of frontier foundation models, and estimating the annual burden of airborne disease in the US.
I think 333 QARYs/$1m via the CAIS framework is significantly too optimistic, for two reasons:
I think my estimate isn't going to be very informative -- I intentionally spent more time than I might otherwise endorse working on Manifund stuff, because it was fun and seemed like good skills-building. My best guess as to how much time I would have spent on an otherwise similar process in the absence of this factor is (EDIT: there was a mistake in my BOTEC) 59 (42 to 85) hours.
Like @MarcusAbramovitch, I'd feel pretty comfortable allocating ~$1m part-time. Just on my existing grants, I would've been happy to donate another ~$150k without thinking more about it! Concrete >$50k grants I had to pass up but would otherwise have wanted to fund total >$200k (extremely rough). So I'm already at >$400k (EDIT: per 5 months!) without even considering how my behavior, or prospective grantee behavior, might have changed if I had a larger pot.
That said, I think there's a sense in which I hit strongly diminishing returns at ~$10k, albeit still above-bar. The Robert Long grant was by far my best, and I knew from day 0 that I wanted to make it. After that, a bet on me became a bet on my taste, not a bet on my private information, which seems less exciting. (Again I'm optimistic that the past 5 months was an unusually low-private-information period for me, but you see my point.)
And I'm somewhat skeptical that others had $200k-$500k/year of productive grants to make. To me it's a bad sign that >30% of Manifund funding went to 3 projects (MATS, Apollo, and LTFF) that I wouldn't think especially benefit from the regranting model.
I took

> If what you describe is actually what she told you, how dare you use it for your own gain here?

to imply something like "if the alleged victim shared private and very personal information, you should not publish it." This still makes the most sense to me as a literal reading.
(I would agree that "don't publish a plausibly false allegation [that you don't see reason to litigate]" feels like a stronger position.)
Contradicting myself to write comments that it wouldn't be helpful for me to sit with...
At that hourly rate, he spent perhaps ~$130,000 of Lightcone donors’ money on this. But it’s more than that. When you factor in our time, plus the hundreds or thousands of comments across all the posts, it’s plausible Ben’s negligence cost EA millions of dollars of lost productivity. If his accusations were true, that could potentially have been a worthwhile use of time; it's just that they aren't, and so that productivity is actually destroyed. And crucially, it was very easy for him not to have wasted everybody’s time: he just had to be willing to look at our evidence.
Kat, thank you for this post. I appreciate the very helpful and understanding manner in which it is written. I'm really sorry that you needed to invest so much in this, although I think you made the right decision in doing so.
I'll read more fully, probably sit with this for some time, and respond properly after that. (Keeping in mind my conditional pre-commitments to signal boost and seriously engage.)
Strong agree -- I enjoyed Brad DeLong on this point.