MichaelDickens

4562 karma

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (695)

Can you explain how to use the discount rate parameter? From context, it looks like it's meant as the "social discount rate", not a "pure time preference" rate (which I would think should be 0). But for EAs, the social discount rate is positive mostly due to x-risk, and your model already accounts for AI x-risk separately. So should the social discount rate exclude discounting due to AI x-risk? In that case, I would expect it to be pretty low.

Edit: Never mind, I missed that you explained this in the post. Seems like you agree with my interpretation that the discount rate is based on non-AI x-risk.

(In which case 2% seems way too high to me, but that's just details.)
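To make the decomposition concrete, here's a toy sketch of what I mean, with made-up numbers and no relation to the post's actual model: if AI x-risk is applied as its own survival term, the residual discount rate should only cover non-AI x-risk (plus pure time preference, arguably 0), so 2% vs. 0.2% matters a lot over a century.

```python
# Toy sketch (not the post's model): discount a constant value stream,
# treating annual AI x-risk separately from the residual social
# discount rate. All parameter values below are made up.

def discounted_value(years, annual_value, residual_rate, ai_xrisk_rate):
    """Sum annual value, discounted by the residual (non-AI) social
    discount rate and by the cumulative probability of AI catastrophe."""
    total = 0.0
    survival = 1.0  # probability no AI catastrophe has happened yet
    for t in range(years):
        total += annual_value * survival / (1 + residual_rate) ** t
        survival *= 1 - ai_xrisk_rate
    return total

# With AI x-risk modeled separately, compare a 2% residual rate to a
# rate reflecting only non-AI x-risk (say 0.2%):
print(discounted_value(100, 1.0, residual_rate=0.02, ai_xrisk_rate=0.01))
print(discounted_value(100, 1.0, residual_rate=0.002, ai_xrisk_rate=0.01))
```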

I'm not sure if this is the same phenomenon, or a different phenomenon that uses the same word, but I see calls for "open-mindedness" in the woo community. When expressing my disbelief in ghosts/astrology/ESP/etc., I'm told I "need to be more open-minded".

I interpreted disagreement to mean "EA is not reinventing the wheel with this concept".

I get the sense that GiveWell would not recommend any animal welfare intervention (nor any x-risk or policy intervention). But I don't think that's because they believe every intervention that doesn't meet their standards isn't worth funding; they fund a lot of more speculative interventions through Open Philanthropy. I think GiveWell wants to be viewed as a reliable source of high-quality charities, so they don't want to recommend more speculative charities even if the all-things-considered EV is good.

(I'm just speculating here.)

As with ~all criticisms of EA, this open letter doesn't give any concrete description of what would be better than EA. Just once, I would like to see a criticism say, "You shouldn't donate to GiveWell top charities; instead you should donate to X, and here is my cost-effectiveness analysis."

The only proposal I saw was (paraphrased) "EA should be about getting teenagers excited to be effectively altruistic." Ok, the movement-building arm of EA already does that. What is your proposal for what those teenagers should then actually do?

I only read the title and bolded summary of this post, but I upvoted it because:

  1. The title and summary were sufficient for me to understand the argument being made
  2. It was readily apparent to me that the argument makes sense
  3. I had never considered the argument before

Juncture 1 – Animal Friendly Researchers

There is some reason to believe that virtually everyone is too animal-unfriendly, including animal welfare advocates:

  1. Everyone is a human. Humans will naturally be biased toward humans over other species.
  2. An uninformed prior says all individuals have equal moral weight. Virtually all people—including animal advocates—give humans higher weight than any other species, which is definitely a bias in a technical sense, if not in the sense people usually use the word.

Animal welfare has much higher EV even under conservative assumptions. IMO the only plausible argument against is that the evidence base for animal welfare interventions is much worse, so if you are very skeptical of unproven interventions, you might vote the other way. But you'd have to be very skeptical.
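To put a rough number on "very skeptical": here's a minimal sketch of a normal-normal Bayesian adjustment, where every estimate and variance is an illustrative assumption of mine, showing how much you'd have to distrust the animal welfare evidence base before the ranking flips.

```python
# Minimal sketch of a skeptical Bayesian adjustment (normal-normal
# model). All estimates and variances are illustrative assumptions.

def posterior_mean(estimate, estimate_var, prior_mean=1.0, prior_var=1.0):
    """Precision-weighted average of a skeptical prior and the estimate."""
    w = prior_var / (prior_var + estimate_var)  # weight on the estimate
    return w * estimate + (1 - w) * prior_mean

# Global health: modest estimate, strong evidence (low variance).
gh = posterior_mean(estimate=5.0, estimate_var=0.5)            # ~3.7

# Animal welfare: 10x the estimate, much weaker evidence.
aw = posterior_mean(estimate=50.0, estimate_var=10.0)          # ~5.5, still ahead
aw_extreme = posterior_mean(estimate=50.0, estimate_var=40.0)  # ~2.2, flips

print(gh, aw, aw_extreme)
```

With these (made-up) numbers, the ranking only flips once you treat the animal welfare estimate as ~80x noisier than the global health one.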

“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders”

After reading your post, I wasn't sure you were right about this. But after thinking about it for a few minutes, I can't come up with any serious mistakes I've made that were "point-in-time blunders".

The closest thing I can think of is when I accidentally donated $20,000 to the GiveWell Community Foundation instead of The Clear Fund (aka GiveWell), but fortunately they returned the money so it all worked out.

Some interesting implications of respondents' median beliefs:

  • It takes 93 typical policy people, or 4 and 1/3 extraordinary policy people, to improve US policy by 5 percentage points
  • Mass protests improve US policy by 0.06%
  • US policy matters 3.25x as much as individual BigCos' internal policies
  • An academic researcher is worth 5x as much as a typical technical researcher
  • An extraordinary technical researcher is worth 1 and 2/3 times as much as an entire new org

(A couple of these sound super wrong to me, but I won't say which ones.)
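For anyone checking the first bullet's arithmetic, here's the back-of-envelope derivation, using only the figures listed above:

```python
# Back-of-envelope check of the first bullet, using only the figures
# listed above (no survey data assumed).
improvement_pp = 5          # percentage points of US policy improvement
typical_needed = 93
extraordinary_needed = 4 + 1/3

per_typical = improvement_pp / typical_needed              # ~0.054 pp each
per_extraordinary = improvement_pp / extraordinary_needed  # ~1.15 pp each
print(per_typical, per_extraordinary)
print(per_extraordinary / per_typical)  # ~21.5x: one extraordinary policy
                                        # person = ~21.5 typical ones
```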
