I expect it's still worth MIRI being aware that almost as many people distrust MIRI as trust it to be sufficiently honest in its public communications.
FWIW, I found this last bit confusing. In my experience chatting with folks, regardless of how much they agree with or like MIRI, they usually think MIRI is quite candid and honest in its communications.
(TBC, I do think the “Death with Dignity” post was needlessly confusing, but that’s not the same thing as dishonest.)
What's included in the "cost of doing business" category? $0.8M strikes me as high, but I don't have a granular understanding here.
It includes things like rent, utilities, general office expenses, furnishings/equipment, bank/processing fees, software/services, insurance, bookkeeping/accounting, and visas/legal. The largest expense in the estimated ~$0.8M is rent, which accounts for just over half.
Is it right that you're estimating 2020's compensation expenditure at ~$182,000 per employee? (($3.56M + $1.4M + $0.51M) / 30 employees)
No, that will be an overestimate for a few reasons:
Were most of the 12 new staff onboarded early enough in 2019 such that it makes sense to include them in a 2019 per capita expenditure estimate?
We added 8 new staff in 2019. When I make our spending estimates, I assume new staff are added evenly throughout the year, i.e., I assume the spending on all new staff in a given year will be ~50% of their total annual cost. In practice, given that we aren't talking about very large numbers here, the accuracy of that estimate varies quite a bit. The distribution of when new staff were added in 2019 was pretty centered on the middle of the year, though the salary levels of those staff will likely complicate things here (I haven't run those numbers).
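To make the estimation method above concrete, here's a minimal sketch of the "~50% of annual cost for first-year hires" assumption. All dollar figures and the function name are hypothetical, chosen purely for illustration; they are not MIRI's actual numbers.

```python
# Sketch of the spending-estimate heuristic described above:
# hires are assumed to be spread evenly through the year, so each
# new hire contributes ~50% of their full annual cost in year one.
# All numbers here are made up for illustration.

def estimate_annual_spending(existing_staff_cost, new_hire_annual_costs):
    """Existing staff counted at full annual cost; new hires at ~50%."""
    return existing_staff_cost + 0.5 * sum(new_hire_annual_costs)

# e.g., $3.0M for existing staff plus 8 new hires at $150k/year each:
total = estimate_annual_spending(3_000_000, [150_000] * 8)
print(total)  # 3,600,000
```

The 50% factor is exact only if hire dates are uniformly distributed; a cohort hired mostly in January or mostly in December would push the true figure up or down, which is the complication noted above.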
(I'm COO at MIRI.)
Just wanted to provide some info that might be helpful:
Thanks :)
All grants we know we will receive (or are very likely to receive) have already been factored into our reserves estimates, which, together with our budget estimate for next year, are the basis for the $1M fundraising goal. We haven't factored in any future grants where we're uncertain whether we'll get the grant, uncertain of the size or structure of the grant, etc.
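The relationship described here can be sketched as a simple calculation: the goal is roughly next year's budget minus what's already covered by reserves and committed grants. The figures and function name below are invented for illustration only; they are not MIRI's actual budget numbers.

```python
# Hypothetical sketch of the fundraising-goal logic described above.
# Only grants we're confident of receiving count as "committed".
# All dollar figures are made up.

def fundraising_goal(next_year_budget, reserves, committed_grants):
    # Shortfall after reserves and committed grants; never negative.
    return max(0, next_year_budget - (reserves + committed_grants))

print(fundraising_goal(7_000_000, 4_500_000, 1_500_000))  # 1,000,000
```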
Update: Added an announcement of our newest hire, Edward Kmett, as well as a list of links to relatively recent work we've been doing in Agent Foundations, and updated the post to reflect the fact that Giving Tuesday is over (though our matching opportunity continues)!
First, note that we’re not looking for “proven” solutions; that seems unrealistic. (See comments from Tsvi and Nate elsewhere.) That aside, I’ll interpret this question as asking: “if your research programs succeed, how do you ensure that the results are used in practice?” This question has no simple answer, because the right strategy would likely vary significantly depending on exactly what the results looked like, our relationships with leading AGI teams at the time, and many other factors.
For example:
While the strategy would depend quite a bit on the specifics, I can say the following things in general:
In short, my answer here is "AI scientists tend to be reasonable people, and it currently seems reasonable to expect that if we develop alignment tools that clearly work, then they'll use them."
[1] MIRI’s current focus is mainly on improving the odds that the kinds of advanced AI systems researchers develop down the road are alignable, i.e., that they’re the kinds of systems we can understand at a deep and detailed enough level to safely use them for various “general-AI-ish” objectives.
[2] On the other hand, sharing sufficiently early-stage alignment ideas may be useful for redirecting research energies toward safety research, or toward capabilities research on relatively alignable systems. What we would do depends not only on the results themselves, but on the state of the rest of the field.
(Writing this while on a flight with bad Wi-Fi, so I’ll keep it brief.)
Just wanted to quickly drop a note to say that we also do work targeted at policymakers, e.g.,
(In general our work more directly targeted at policymakers has been less visible to date. That will definitely continue to be the case for some of it, but I’m hopeful that we’ll have more publicly observable outputs in the future.)