
MichaelDickens

5224 karma

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (760)

I don't know of any reasonable justification for caring about expected log-welfare rather than expected welfare. For a welfare range estimate, the thing that matters is the expected value.

If you're clueless about an intervention and you use a fat-tailed prior, then the expected value might be very large but the median value will be very small, and most of the probability mass will be close to 0. For the RP welfare estimates, the median values make animal welfare interventions look highly effective.
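As a minimal sketch of that mean-vs-median gap, here is what a generic fat-tailed prior looks like; the lognormal and its parameters below are my own illustrative choices, not the RP estimates.

```python
# Illustrative only: a lognormal as a stand-in for a fat-tailed prior.
# The distribution and parameters are arbitrary choices, not the RP welfare estimates.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 3.0  # log-space parameters; a large sigma makes the tail fat
samples = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

print(f"mean   ~ {samples.mean():.0f}")      # roughly exp(mu + sigma^2 / 2), about 90
print(f"median ~ {np.median(samples):.2f}")  # exp(mu) = 1
print(f"P(x < mean/10) ~ {(samples < samples.mean() / 10).mean():.2f}")  # most of the mass sits far below the mean
```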

I think it's very relevant that animal welfare interventions look better than global health interventions almost everywhere within the RP intervals.

Updated to include the new fee schedule. I decided not to update the flowchart because, although Charityvest is no longer cheaper than Schwab, I still like it better because it has lower fees on the default investment funds.

Aw man, this means I'm gonna have to remake my flowchart.

Vasco's corpus of cost-effectiveness estimates

Are you talking about this post? It looks like those cost-effectiveness estimates were written by Ambitious Impact, so I don't know whether there are other estimates written by Vasco.

I do think there's a concern that a popular movement will move in a direction you didn't want, but empirically this has already happened with "behind closed doors" lobbying, so I don't think a popular movement can do worse.

There's also an argument that a popular movement would be too anti-AI and end up excessively delaying a post-AGI utopia, but I discussed in my post why I don't think that's a sufficiently big concern.

(I agree with you; I'm just anticipating some likely counter-arguments.)

Quick thoughts on investing for transformative AI (TAI)

Some EAs/AI safety folks invest in securities that they expect to go up if TAI happens. I rarely see discussion of the future scenarios where it makes sense to invest for TAI, so I want to do that.

My thoughts aren't very good, but I've been sitting on a draft for three years hoping I'd develop some better thoughts, and that hasn't happened, so I'm just going to publish what I have. (If I wait another three years, we might already have AGI!)

When does investing for TAI work?

Scenarios where investing doesn't work:

  1. Takeoff happens faster than markets can react, or takeoff happens slowly but is never correctly priced in.
  2. Investment returns can't be spent fast enough to prevent extinction.
  3. TAI creates a post-scarcity utopia where money is irrelevant.
  4. It turns out TAI was already correctly priced in.

Scenarios where investing works:

  5. Slow takeoff: the market correctly anticipates TAI after we do but before it actually happens, and there's a long enough time gap that we can productively spend the earnings on AI safety.
  6. TAI is generally good, but money still has value and there are still a lot of problems in the world that can be fixed with money.

(Money seems much more valuable in scenario #5 than #6.)

What is the probability that we end up in a world where investing for TAI turns out to work? I don't think it's all that high (maybe 25%, although I haven't thought seriously about this).

You also need to be correct about your investing thesis, which is hard. Markets are famously hard to beat.
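To make concrete how these conditions compound, here's a rough back-of-the-envelope multiplication; only the 25% figure comes from the paragraph above, and the other two numbers are placeholder assumptions for illustration.

```python
# Back-of-the-envelope: how the conditions multiply. Only the 25% comes from the text
# above; the other two figures are placeholder assumptions, not estimates I'm defending.
p_world_where_investing_helps = 0.25  # from above: a world where investing for TAI pays off and is spendable
p_thesis_beats_market = 0.30          # placeholder: your specific AI thesis actually beats the market
excess_return_multiple = 3.0          # placeholder: capital multiple vs. the index if the thesis is right

expected_excess = p_world_where_investing_helps * p_thesis_beats_market * (excess_return_multiple - 1)
print(f"expected excess multiple on invested capital ~ {expected_excess:.2f}")  # 0.15 with these numbers
```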

Possible investment strategies

  1. Hardware makers (e.g. NVIDIA)? Anecdotally this seems to be the most popular thesis. It's the most straightforward idea, but I am suspicious that a lot of EA support for investing in AI looks basically indistinguishable from typical hype-chasing retail investor behavior. NVIDIA already has a P/E of 56. There is a 3x levered long NVIDIA ETP. That is not the sort of thing you see when an industry is overlooked. That's not to say NVIDIA is definitely a bad investment; it could be even more valuable than the market already thinks. I'm just wary.
  2. AI companies? This doesn't seem to be a popular strategy; the argument against it is that it's a crowded space with a lot of competition, which will drive margins down. (Whereas NVIDIA has a ~monopoly on AI chips.) Plus I am concerned that giving more money to AI companies will accelerate AI development.
  3. Energy companies? It's looking like AI will consume quite a lot of energy. But it's not clear that AI will make a noticeable dent in global energy consumption. This is probably the sort of thing you could make reasonable projections for.
  4. Out-of-the-money call options on a broad index (e.g. S&P 500 or NASDAQ)? This strategy avoids betting on which particular companies will do well; it only bets that something will do much better than the market anticipates (see the payoff sketch after this list). But I'd also expect that unusually high market returns won't start showing up until TAI is close (even in a slow-takeoff world), so you have less time to use the extra returns to prevent AI-driven extinction.
  5. Commodities? The idea is that anything complicated will become much easier to produce thanks to AI, but commodities won't be much easier to get, so their prices will go up a lot. This is an interesting idea that I heard recently; I have no idea if it's correct.
  6. Momentum funds (e.g. VFMO or QMOM)? The general theory of momentum investing is that the market under-reacts to slow news. The pro of this strategy is that it should work no matter which stocks/industries benefit from AI. The con is that it's slower—you don't buy into a stock until it's already started going up. (I own both VFMO and QMOM (mostly QMOM), a bit because of AI but mainly because I think momentum is a good idea in general.)
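To make the option idea in strategy 4 concrete, here is a minimal payoff sketch; the strike, premium, and index levels are hypothetical numbers chosen for illustration, not market data.

```python
# Payoff sketch for strategy 4 (out-of-the-money index calls). All numbers are
# hypothetical, chosen only to show the shape of the bet.
def call_profit(index_at_expiry: float, strike: float, premium: float) -> float:
    """Profit from buying one call, ignoring fees and taxes."""
    return max(index_at_expiry - strike, 0.0) - premium

strike, premium = 6000.0, 50.0  # e.g. a call ~20% out of the money on an index at 5000
for index_at_expiry in (5000.0, 6000.0, 7000.0, 9000.0):
    print(f"index {index_at_expiry:.0f} -> profit {call_profit(index_at_expiry, strike, premium):+.0f}")

# You only lose the (small) premium unless the index ends well above the strike, but if
# TAI pushes the index far past the strike, profit grows roughly linearly with the surprise.
```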

The way they're usually done, awards counteract the negative-to-positive feedback ratio for a tiny group of people. I think it would be better to give positive feedback to a much larger group of people, but I don't have any good ideas about how to do that. Maybe just give out a lot of awards?
