MichaelDickens

5407 karma · Joined

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/ Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences
1

Quantitative Models for Cause Selection

Comments
777

Increasing the amount of animal-friendly content that is likely to feature in AI training data

My understanding is that current AIs' (professed) values are largely determined by RLHF, not by training data. Therefore it would be more effective to persuade the people in charge of RLHF policies to make them more animal-friendly.

But I have no idea whether RLHF will continue to be relevant as AI gets more powerful, or if RLHF affects AI's actual values rather than merely its professed values.

This feels like a really big deal to me.

It is a big deal! It's sad that we live in a world where people in the developing world have serious health issues and even die from preventable causes, but it's wonderful that you're doing something about it (and I could say the same about most of the people on this forum).

I can't understand ~anything this post is trying to say.

  • It uses many terms that I've never heard before, and doesn't define them.
  • It makes references to concepts and seems to be trying to imply something with them, but I don't know what. For example, it references two historical case studies, but I don't get what I'm supposed to be learning from those case studies.

Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities probably do too.

Over the years I've written some posts that are relevant to this week's debate topic. I collected and summarized some of them below:

"Disappointing Futures" Might Be As Important As Existential Risks

The best possible future is much better than a "normal" future. Even if we avert extinction, we might still miss out on >99% of the potential of the future.

Is Preventing Human Extinction Good?

A list of reasons why a human-controlled future might be net positive or negative. Overall I expect it to be net positive.

On Values Spreading

Hard to summarize, but this post discusses spreading good values as a way of positively influencing the far future, some reasons why it might be a top intervention, and some problems with it.

I just had a brief look at Endaoment; later I'll take a more careful look and update my post, but here's what I noticed:

  • Endaoment appears to do some fancy stuff that other DAF providers don't do and it's very crypto-focused.
  • It looks like there are no AUM fees, but it charges 0.5% on deposits and 1% on donations, which is more expensive than a normal DAF if your DAF has high turnover, and cheaper than a normal DAF if you only donate once every few years or so.
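The turnover comparison above can be sketched with some quick arithmetic. This is illustrative only: the 0.5% deposit and 1% donation fees come from the comment, while the 0.6% annual AUM rate for a "normal" DAF is an assumption (roughly typical of large providers; check current fee schedules before relying on it).

```python
# Rough fee comparison: Endaoment-style transaction fees vs. an assumed
# AUM-fee DAF. All numbers are illustrative, not a fee quote.

def endaoment_fees(deposit):
    """Total fees if the full deposit is eventually granted out:
    0.5% on the way in, 1% of what's left on the way out."""
    deposit_fee = deposit * 0.005
    donation_fee = (deposit - deposit_fee) * 0.01
    return deposit_fee + donation_fee

def aum_daf_fees(deposit, years_held, annual_rate=0.006):
    """Approximate AUM fees on a balance held for `years_held` years
    (ignores investment growth for simplicity)."""
    return deposit * annual_rate * years_held

deposit = 10_000
print(f"Endaoment, any holding period: ${endaoment_fees(deposit):,.2f}")
for years in (1, 3, 5):
    print(f"AUM DAF, held {years} year(s):  ${aum_daf_fees(deposit, years):,.2f}")
```

Under these assumptions the transaction-fee model costs about 1.5% once, so it loses to a 0.6% AUM fee if you grant the money out within a couple of years (high turnover) and wins if the money sits for several years.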

I was not aware of Endaoment; I will look into it!

I don't know if any DAF providers support direct giving. But any provider should let you give stock to your DAF and then donate it to charity a few days later.

In terms of fees, if you only use your DAF as a convenient way to donate stock and you mostly maintain a $0 balance, then you'll just have to pay the minimum fee. I listed the minimum fees here—I think your best bet is Schwab because it has no minimum fee. Charityvest has no minimum for cash-only accounts, and I think you can still contribute stock to a cash-only account (they'll just liquidate the stock once they get it), but I'm not sure about that so you might want to ask them.

Another important consideration in favor of giving now—if you earn a steady income—is that your donations this year only represent a small % of your lifetime giving.

In fact, if you think the giving-now arguments strongly outweigh giving-later but you expect to earn most of your income in the future, then it might make sense to borrow money to donate and repay the loans out of future income. But that's difficult in practice.
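The point about this year's donations being a small share of lifetime giving is easy to see with a toy calculation. The 40-year career length and 3% income growth rate here are assumptions chosen for illustration, not anything from the comment itself:

```python
# Toy model: what fraction of (nominal) lifetime giving happens in year one,
# assuming a fixed donation rate and income that grows at `growth` per year?
# Career length and growth rate are illustrative assumptions.

def first_year_share(years=40, growth=0.0):
    """Year-one donations as a fraction of total lifetime donations."""
    lifetime = sum((1 + growth) ** t for t in range(years))
    return 1 / lifetime

print(f"Flat income over 40 years: {first_year_share():.1%}")
print(f"3% annual income growth:   {first_year_share(growth=0.03):.1%}")
```

With flat income, year one is 1/40 = 2.5% of lifetime giving; with growing income it's smaller still, which is what makes the borrow-to-donate idea tempting in theory even though it's difficult in practice.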

I think the tendency to write unconstructive criticisms (at least for me) comes from the combination of:

  1. I have a strong urge to comment on anything that looks incorrect
  2. Writing substantive criticisms of a post (often) requires grokking the whole post and thinking deeply about it, which is hard. Criticizing some specific sentence is easy because my brain instantly surfaces the criticism when I read the sentence

I would like to publicly set a goal not to comment on other people's posts with criticisms of minor side points that don't matter. I have a habit of doing that, but I think it's usually more annoying than helpful, so I would like to stop. If you see me doing it, feel free to call me out.

(I reserve the right to make substantive criticisms of a post's central arguments)
