
I am asking this question to understand how an ideal EA would apply EA ideas, tools, and other frameworks in practice - for instance, how would an ideal EA use the Scale, Neglectedness, and Tractability framework here? What about longtermism? Given the frameworks used, would this EA prioritize this cause over other causes?

I think thought experiments like this could help us properly explain the intricacies of the EA thought process to newcomers. If there are other thought experiments that serve the same purpose, please post those in your answer.

Here are some assumptions you are allowed to make:

  1. Assume that although you are living before 07-08, EA concepts and ideas are where they are today. For instance, by 07-08, EAs already think AI & GCRs are high priority, GiveWell has already made all of the mistakes they list here, 80,000 Hours has already realized that leading with earning to give was a bad idea, and so on.

  2. Assume the EA in this scenario is well aware of the major ideas of EA: say they took the Stanford Introductory Fellowship, or they might even be running a local EA group.

  3. Assume the EA has good personal fit for helping with this problem: say they are in the final year of their Econ PhD. But remember that they can direct their career towards other causes too.

  4. Also assume that this is not about hunting for and finding Cause X: one fine night before 07-08, God descended into the EA's bedroom and whispered in their ear, "You need to take a look at the US housing market". But God left without saying that there would be a crisis in 07-08 because of it, or what exactly to look at in that market - God is too active on FB and just got distracted. You can also imagine that the EA is one of the protagonists at the beginning of the movie "The Big Short" (the movie starts in '05).

If you are making additional assumptions then mention them in your answer.


This is an interesting thought experiment and I like the specific framing of the question.

My initial thoughts are that this clearly would have been a good thing to try to work on, mainly because the 2008 financial crisis cost trillions of dollars and arguably also led to lots of bad political developments in the Western world (e.g. setting the stage for Trumpism). If you buy the Tyler Cowen arguments that economic growth is very important for the long term, then that bolsters this case. However, a caveat would be that, given our uncertainty about long-run effects, it's hard to actually know the long-term consequences of such a large event.

Here are some other ways to think about this:

Neglectedness

As Ramiro mentions below, very few people were alert to the potential risk before the crisis, so one additional person thinking about and advocating for this would have increased the proportion of people working on it by a lot.

Tractability 

Even if you had predicted how the crisis would unfold and what would cause it, could you have actually done anything about it? Would you just ring up the Federal Reserve and tell them to change their policy? Would anyone have listened to you as a random EA person? This is potentially the biggest problem in my view.
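To make the framework part of the question concrete, here is a minimal sketch of how one might fold rough guesses about scale, tractability, and neglectedness into a single score, loosely following the log-scale scoring 80,000 Hours has used. Every number and name below is an invented placeholder, not an estimate from this answer.

```python
# Illustrative only: a back-of-the-envelope ITN score in the style of the
# 80,000 Hours framework, where each factor is a base-10 log score and the
# overall score is their sum. All inputs are made-up placeholders.

import math

def itn_score(scale_usd, frac_solved_if_doubled, annual_spend_usd):
    """Sum of log10 factors: scale, solvability (tractability), neglectedness."""
    scale = math.log10(scale_usd)                      # value at stake if fully solved
    solvability = math.log10(frac_solved_if_doubled)   # fraction solved by doubling resources
    neglectedness = math.log10(1 / annual_spend_usd)   # fewer resources => higher score
    return scale + solvability + neglectedness

# Placeholder guesses for "mitigate the coming US housing crash":
# ~$10T at stake, 0.1% solved by doubling effort, ~$100M/yr of serious
# work on systemic financial risk in 2005-07 (all assumptions).
print(itn_score(1e13, 0.001, 1e8))
```

On these made-up inputs, it is the tractability term that drags the score down, which matches the qualitative worry above.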

Other benefits - better economic modelling

I am a strong believer in alternative approaches to economic modelling that can take into account things like financial contagion (such as agent-based models). So a potential benefit of working on this type of thing before the crisis is that you might have developed and promoted these techniques further, and these tools could help with other economic problems. In my view this is still a valuable and neglected thing to work on.
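For readers who haven't seen one, here is a deliberately tiny sketch of the contagion dynamic these models capture, in the spirit of Gai and Kapadia-style interbank network models. The network, balance sheets, and shock are all invented for illustration.

```python
# A toy sketch of interbank contagion: banks fail when write-downs from
# failed debtors exhaust their capital buffer. All parameters are invented.

import random

random.seed(0)
N_BANKS = 50
CAPITAL = 2.0   # loss-absorbing buffer per bank (assumed)
EXPOSURE = 1.0  # loss a creditor takes when one of its debtors fails (assumed)

# Random lending network: creditors[b] = the banks that lent to bank b.
creditors = {b: random.sample([x for x in range(N_BANKS) if x != b], 5)
             for b in range(N_BANKS)}

losses = [0.0] * N_BANKS
failed = set()

def fail(bank):
    """Mark a bank as failed and propagate write-downs to its creditors."""
    failed.add(bank)
    for c in creditors[bank]:
        if c in failed:
            continue
        losses[c] += EXPOSURE
        if losses[c] >= CAPITAL:  # buffer exhausted, so the creditor fails too
            fail(c)

# An initial idiosyncratic shock, e.g. mortgage write-downs at two banks.
for b in (0, 1):
    if b not in failed:
        fail(b)

print(f"{len(failed)} of {N_BANKS} banks failed in the cascade")
```

Sweeping CAPITAL or the network density in a model like this shows a phase transition between shocks that die out and shocks that take down the whole system, which is precisely the kind of behaviour the equilibrium models dominant before 2008 were not built to show.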

Other benefits - reputation

An additional benefit of raising the alarm on the financial crisis before it happened is reputational. Even if no one listened to you at the time, you would be recognised as one of the few people who "foresaw the crisis", and therefore your opinions might be given more weight - for example, the people from The Big Short are always wheeled out on TV to provide opinions. You could say "hey, I predicted the financial crisis, so maybe you should also listen to me about this other <<insert EA cause area here>> stuff".

As someone interested in Complexity Science, I find the ABM point very appealing. For those of you with a further interest in this, I would highly recommend this paper by Richard Bookstaber as a place to start. He also wrote a book on this topic and was one of the people who foresaw the crisis.

Also, if you are interested in Complexity Science but have never had a chance to interact with people from the field or learn more about it, I would recommend signing up for this event.

Rory Greig
Hey Venkatesh, I am also really interested in Complexity Science; in fact, I am going to publish a blog post on here soon about how Complexity Science relates to EA. I've also read Bookstaber's book, and Doyne Farmer has a similar book coming out soon which looks great - you can read the intro here. I hadn't heard of the Complexity Weekend event, but it looks great; I will check it out!

Well, if your EA were particularly well placed to tackle this problem, then the answer is likely yes: they would probably realize it's scalable and (partially) neglected. Plus, if God is reliable, then the Holy Advice would likely dominate other matters - AGI and x-risks are uncertain futures, and efforts to reduce present suffering would be greatly affected by the financial crisis. In addition, maybe this is not quite the answer you're looking for, but I believe personal features (like fit and comparative advantage) would likely trump other considerations when it comes to choosing a cause area to work on (but not to donate to).

"... I believe personal features (like fit and comparative advantages) would likely trump other considerations..." That is a very interesting point. Sometimes I do have a very similar feeling - the other 3 criteria are there mostly just so one doesn't base one's decision fully on personal fit but consider other things too. At the end of the day, I guess the personal fit consideration ends up weighing a lot more for a lot of people. Would love to hear from someone in 80k hours if this is wrong...

Editing to add this: I wonder if there is a survey somewhere out there that asked people how much they weigh each of the four factors. That might validate this speculation...

My (naive) understanding is that the risk of a recession today is not much lower than in 2007-08.

So the question of whether EAs would have been working on this back then roughly reduces to whether EAs are looking into macroeconomic risk today.

And the answer to that is mixed: there is actually a tag on the Forum for this very problem, which includes a reference to OpenPhil's program on macroeconomic stabilization policy.

But there are no articles under that tag, and I haven't heard much discussion on the topic outside of OpenPhil.

Note: that tag is currently a wiki-only tag/wiki page, but could be turned into a proper tag if desired.

Thanks for linking to that OpenPhil page! It is really interesting. In fact, one of the pages it links to talks about the ABMs that rory_greig mentioned in his comment.

What would it have taken to do something about this crisis in the first place? Back in 2008, central bankers were under the assumption that the theory of central banking was completely worked out. Academics were mostly talking about details (basically tweaking the Taylor rule).
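For reference, since the name came up: the rule being tweaked is Taylor's (1993) formula for the policy interest rate, which in its standard form is

$$i_t = r^* + \pi_t + 0.5(\pi_t - \pi^*) + 0.5(y_t - \bar{y}_t)$$

where $\pi_t$ is inflation, $\pi^*$ the inflation target, $r^*$ the equilibrium real rate, and $y_t - \bar{y}_t$ the output gap. "Tweaking" it meant adjusting coefficients and inputs, not questioning the framework itself.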

The theory of central banking is already centuries old. What would it have taken for a random individual to overturn that establishment, including the culture and all the institutional interests of banks, etc.? Are we sure that no one was trying to do exactly that anyway?

It seems to me that it would have taken a major crisis to change anything, and that's exactly what happened. And now there are all kinds of regulations being implemented for posting collateral around swaps and stuff. It seems that regulators are fixing the issues as they come up (making the system antifragile), and I don't see how a marginal young naive EA would have the domain knowledge to make a meaningful difference here.

And that goes for most fields. Unless we basically invent the field (like AI Safety) or the strategy (like comparing charities), if a field is sufficiently saturated with smart and motivated people, I don't think EAs have enough domain knowledge to do anything. In most cases it takes decades of work to get anywhere.

Consider this - say the EA figured out the number of people the problem could affect negatively, i.e. the scale. Then even if there were only a small probability that the EA could make a difference, shouldn't they have taken that bet? Also, even if the EA couldn't avert the crisis despite their best attempts, they would still gain career capital, right?
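To make the first point concrete, here is a toy expected-value calculation; every number is an invented placeholder, not an estimate from the thread.

```python
# Toy expected-value check for the "small probability, huge scale" argument.
# All numbers are invented placeholders.

people_affected = 100_000_000   # rough scale of a global financial crisis
harm_averted_if_success = 0.01  # suppose success softens 1% of the harm
p_success = 0.001               # one EA's chance of making that difference

expected_impact = people_affected * harm_averted_if_success * p_success
print(expected_impact)          # 1000.0 person-equivalents of harm averted
```

On numbers anything like these, the expected impact is large enough that low tractability alone doesn't obviously kill the case.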

Another point to consider - IMHO, EA ideas have a certain dynamic of going against the grain. They challenged established practices of charitable giving that had existed for a long time. So an EA might be inspired by this and i... (read more)
