mjkerrison🔸️

Manager / Senior Data Scientist @ Nous Group
21 karma · Joined · Working (6-15 years) · Melbourne VIC, Australia
mjkerrison.com

Posts: 1

Comments: 9

If someone isn't already doing so, someone should estimate what % of (self-identified?) EAs donate according to our own principles. This would be useful (1) as a heuristic for the extent to which the movement/community/whatever is living up to its own standards, and (2), assuming the answer is 'decently', as evidence for PR/publicity/responding to marginal-faith tweets during bouts of criticism.

Looking at the Rethink survey from 2020, they have some info about which causes EAs are giving to, but they note that not many people responded to that question? And it's not quite the same question. To do: check whether GWWC publishes anything like this.

Edit to add: maybe an imperfect but simple and quick instrument for this could be something like "For what fraction of your giving did you attempt a cost-effectiveness assessment (CEA), read a CEA, or rely on someone else who said they did a CEA?". I don't think it actually has to be about whether the respondent got the "right" result per se; the point is the principles. Deferring to GiveWell seems like living up to the principles because of how they make their recommendations, etc.

Can you add / are you comfortable adding anything on who "us" is and which orgs or what kinds of orgs are hesitant? Is your sense this is universal, or more localised (geographically, politically, cause area...)?

Good point and good fact. 

My sense, though, is that if you scratch most "expand the moral circle" statements you find a bit of implicit moral realism. I think generally there's an unspoken "...to be closer to its truly appropriate extent", and that there's an unspoken assumption that there'll be a sensible basis for that extent. Maybe some people are making the statement prima facie though. Could make for an interesting survey.

Love to see these reports!

I have two suggestions/requests for 'crosstabs' on this info (which is naturally organised by evaluator, because that's what the project is!):

  1. As-of-today, which evaluators/charities sit where on the recommendation scale. The info for that is mostly on GWWC's website but not quite organised as such. I'm thinking of rows for cause areas, columns for buckets, e.g. 'Recommended' at one end and 'Maybe not cost-effective' at the other (though maybe you'd drop things off altogether). Just something to help visualise what's moved, by how much, and broadly why things are sitting where they are (e.g. THL corporate campaigns sliding off the recommended list for 'procedural' reasons, so not in the Recommended column but now in a 'Nearly' column or something).
  2. I'd love a clear checklist of what you think needs improvement per evaluated program to help with making the list a little more evergreen. I think all that info is in your reporting, but if you called it out I think it would
    1. help evaluated programs and
    2. help donors to
      1. get a sense for how up-to-date that recommendation is (given the rotating/rolling nature of the evaluation program)
      2. and possibly do their own assessment for whether the charity 'should' be recommended 'now'.

Is anyone keeping tabs on where AI's actually being deployed in the wild? I feel like I mostly see (and so this could be a me problem) big-picture stuff, but there seems to be a proliferation of small actors doing weird stuff. Twitter / X seems to have a lot more AI content, and apparently YouTube comments do now as well (per conversation I stumbled on while watching some YouTube recreationally - language & content warnings: https://youtu.be/p068t9uc2pk?si=orES1UIoq5qTV5TH&t=2240)

I think this is a really compelling addition to EA portfolio theory. Two half-formed thoughts:

  • Does portfolio theory apply better at the individual level than the community level? I think something like treating your own contributions (giving + career) as a portfolio makes a lot of sense, if you're explicitly trying to hedge personal epistemic risk. I think this is a slightly different angle on one of Jeff's points: is this "k-level 2" aggregate portfolio a 'better' aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios? You could probably look at this analytically... might put that on the to-do list.

  • At some point what matters is specific projects...? Like when I think about 'underfunded', I'm normally thinking there are good projects with high expected ROI that aren't being done, relative to some other cause area where the marginal project has a lower ROI. Maybe my point is something like - underfunding and accounting for it should be done at a different stage of the donation process, rather than in looking at overall what the % breakdown of the portfolio is. Maybe we're more private equity than index fund.
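The 'private equity vs index fund' contrast can be sketched as a toy allocation model: fund the best marginal project anywhere vs. split the budget by pre-set cause-area percentages. All cause names, ROIs, and budgets below are hypothetical illustrations, not real figures.

```python
# Toy model: two ways to allocate a donation budget across cause areas.
# Each cause has a list of project ROIs in descending order (diminishing returns).

def greedy_allocation(projects, budget):
    """'Private equity' style: fund projects in descending ROI order,
    one unit of budget per project, regardless of cause area."""
    ranked = sorted(
        ((roi, cause) for cause, rois in projects.items() for roi in rois),
        reverse=True,
    )
    alloc = {cause: 0 for cause in projects}
    for _, cause in ranked[:budget]:
        alloc[cause] += 1
    return alloc

def fixed_split(projects, budget, weights):
    """'Index fund' style: give each cause a pre-set % of the budget."""
    return {cause: round(budget * w) for cause, w in weights.items()}

# Hypothetical numbers: cause_a's projects dominate on marginal ROI.
projects = {"cause_a": [10.0, 9.0, 8.0], "cause_b": [3.0, 2.0, 1.0]}

print(greedy_allocation(projects, 4))                           # {'cause_a': 3, 'cause_b': 1}
print(fixed_split(projects, 4, {"cause_a": 0.5, "cause_b": 0.5}))  # {'cause_a': 2, 'cause_b': 2}
```

The two methods diverge exactly when marginal ROIs differ across causes, which is the sense in which a fixed %-breakdown can leave high-ROI projects 'underfunded'.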

Excited to see this!

It seems useful, and on priors more cost-effective, to centralise & outsource some of these things - avoiding reinventing the wheel, and producing the scale that (a) lets you build expertise and (b) makes it worthwhile investing in improvements.

I wonder if there might be particularly strong regional effects to this - maybe Goa had quite a large dog population, quite a lot of rabies, or quite dense dog/human populations (affecting rabies, bite, and transmission incidences).

I think there could be room for further research to identify whether there would be better-looking (sub-country) regions - though like Helene_K found, data would be difficult.

Hey Alexander - thanks for the write-up! I found it useful as a local, and it seems valuable to be sharing/coordinating on this globally.

One thing that occurred to me would be to zoom in on the sectors of the economy that are exposed to AI. I think that in Australia, it might be relatively more concentrated than elsewhere - specifically in education, which is one of our biggest exports (though it gets accounted for domestically I think).

That could mean:

  1. If there are distinct challenges in education vs other knowledge work, some calculus may change (not sure what exactly)
  2. There might be stakeholders/coalitions we haven't tapped yet to support less narrowly economic concerns