I'm planning to spend time on the afternoon (UK time) of Wednesday 2nd September answering questions here (though I may get to some sooner). Ask me anything!
A little about me:
- I work at the Future of Humanity Institute, where I run the Research Scholars Programme, which is a 2-year programme to give space for junior researchers (or possible researchers) to explore or get deep into something
- (Applications currently open! Last full day we're accepting them is 13th September)
- I've been thinking about EA/longtermist strategy for the better part of a decade
- A lot of my research has approached the question of how we can make good decisions under deep uncertainty; this ranges from the individual to the collective, and the theoretical to the pragmatic
- e.g. A bargaining-theoretic approach to moral uncertainty; Underprotection of unpredictable statistical lives compared to predictable ones; or Defence in depth against human extinction
- Recently I've been thinking around the themes of how we try to avoid catastrophic behaviour from humans (and how that might relate to efforts with AI); how informational updates propagate through systems; and the roles of things like 'aesthetics' and 'agency' in social systems
- I think my intellectual contributions have often involved clarifying or helping build more coherent versions of ideas/plans/questions
- I predict that I'll typically have more to say to relatively precise questions (where broad questions are more likely to get a view like "it depends")
Does FHI or the RSP have a relatively explicit, shared theory of change? Do different people have different theories of change, but these are still relatively explicit and communicated between people? Is it less explicit than that?
Whichever is the case, could you say a bit about why you think that's the case?
For RSP, I think that:
Some general thoughts:
- Advantages of having an explicit theory of change:
- Makes it easier to sync up about direction/priorities/reasons for d…
I've heard many people express the view that in EA, and perhaps especially in longtermism:
1. Do all of those claims seem true to you?
2. If so, do you expect this to remain true for a long time, or do you think we're already moving rapidly towards fixing it? (E.g., maybe there are a lot of people already "in the pipeline", reducing the need for new people to enter it.)
3. Do you think there are other ways to potentially address this problem (if it exists) that deserve more attention or that I didn't mention above?
4. Do you think RSP, or things like it, are especially good ways to address this problem (if it exists)?
Yes, with some important comments:
- I don't think this is centrally about "researchers", but about "people-who-are-decent-at-working-out-what-to-do-amongst-the-innumerable-possibilities"
- This is a class we need more of in EA (and particularly longtermist EA); research is one of the (major) applications of such people, but far from the only one
- Mentorship/management is more like a thousand small things than two big things
- Often people will be better off learning from multiple strong mentors than one, because they'll be good at different subcomponents
- There are very substantial reasons beyond this to spend part of one's (research) career outside of explicitly EA orgs, particularly if you get an opportunity to work with outstanding people
- Such as:
- You can better learn the specialist knowledge belonging to the relevant domain by spending time working with top experts
- Or idiosyncratic-but-excellent pieces of mentorship
- To the extent that EA has important insights that are relevant in many domains, working closely with smart people is a good opportunity to share those insights
- It's a powerful way to develop a network
- I gave the reasons…
I think my second question was broad and vague.
I could operationalise part of it as: "Do you expect there's still high expected value in more people now starting to try to get good at 'research mentorship/management'? Do you expect the same would be true if they started on that in, e.g., 2 years? Or do you think that, by the time people got good at this if they start now, the 'gap' will have been largely filled?"
It sounds like you think the answer is essentially "Yes, there's still high expected value in this"?
I'd agree that there are other strong arguments for many people working outside of explicitly EA orgs. And I think many EAs - myself included - are biased towards and often overemphasise working at explicitly EA orgs.
But "jobs/projects that are unusually good for getting better at 'research mentorship/management'" includes various jobs both within and outside of EA, as well as excluding various jobs both within and outside of EA. So I think the questions in this comment are distinct from - though somewhat related to - the question "Should more people work outside of EA orgs?"
Ahh, I think I was interpreting your general line of questioning as being:
A) Absent ability to get sufficient mentorship within EA circles, should people go outside to get mentorship?
... whereas this comment makes me think you were more asking:
B) Since research mentorship/management is such a bottleneck, should we get people trying to skill up a lot in that?
I think that some of the most important skills for research mentorship from an EA perspective include transferring intuitions about what is important to work on, and that this will be hard to learn properly outside an EA context (although there are probably some complementary skills one can effectively learn).
I do think that if the questions were in the vein of B), I'm more wary in my agreement: I kind of think that research mentorship is a valuable skill to look for opportunities to practise, but a little hard to be >50% of what someone focuses on? So I'm closer to encouraging people doing research that seems valuable to look for opportunities to do this as well. I guess I am positive on people practising mentorship generally, or e.g. reading a lot of different pieces of research and forming inside views on what makes some pieces seem more valuable. I think the demand for these skills will become slightly less acute but remain fairly high for at least a decade.
Suppose, in 10 years, that the Research Scholars Programme has succeeded way beyond what you expected now. What happened?
Interesting question!
Related: Suppose that, in 10 years, the RSP seems to have had no impact.* What signs would reveal this? And what seem the most likely explanations for the lack of impact?
*There already seem to be some indicators of impact, so feel free to interpret this as "seems to have had no impact after 2020", or as "seems to have had no impact after 2020, plus the apparent impacts by 2020 all ended up washing out over time".
Is there any impact measurement of RSP currently? I appreciate it is unusually hard, but have you had any thoughts on good ways to go about this?
What common belief in EA do you most strongly disagree with?
That personal dietary choices are important on consequentialist effectiveness grounds.
I actually think there are lots of legitimate and powerful reasons for EAs to consider veg*nism, such as:
... but it feels to me almost intellectually dishonest to have it be part of an answer to someone saying "OK, I'm bought into the idea that I should really go after what's important, what do I do now?"
(I'm not vegetarian, although I do try to only consume animals I think have had reasonable welfare levels, for reasons in the vicinity of the first two listed above. I still have some visceral unease about the idea of becoming vegetarian that is like "but this might be mistaken for being taken in by intellectually dishonest arguments".)
I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!
I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA. If something is a good thing, and provided it doesn't really have an opportunity cost, then it seems to me that a consequentialist EA should do it regardless of how good it is.
To illustrate my point: one might say it's a good thing to donate to a seeing-eye-dog charity. In a way it is, but an EA would say it isn't, because there is an opportunity cost: you could instead donate to, for example, the Against Malaria Foundation, which is more effective. So donating to a seeing-eye-dog charity isn't really a good thing to do.
Choosing to follow a veg*n diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different. It doesn't stop you doing something else. Therefore, even if it realises only a small benefit, it seems worth it (and for the record I don't think the benefit is small).
Or perhaps you just think the…
That's fine! :)
In turn, an apology: my controversial view has baited you into response, and I'm now going to take your response as kind-of-volunteering for me to be critical. So I'm going to try and exhibit how it seems mistaken to me, and I'm going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to try and exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; I think it's helpful for the exhibition to be able to draw attention to features of a specific instance, and you're providing what-seems-like-implicit-permission for me to do that. Sorry!
To be clear: I strongly agree with this, and this was a big part of what I was trying to say above.
This is…
I'm not 100% sure, but we may be defining opportunity cost differently. I'm drawing a distinction between opportunity cost and personal cost. Opportunity cost relates to the fact that doing something may inhibit you from doing something else that is more effective. Even if going vegan didn't have any opportunity cost (which is what I'm arguing is true in most cases), people may still not want to do it due to a high perceived personal cost (e.g. thinking vegan food isn't tasty). I'm not claiming there is no personal cost; that is indeed why people don't go / stay vegan, although I do think personal costs are unfortunately overblown.
Without addressing all of your points in detail, I think a useful thought experiment might be to imagine a world where we are eating humans, not animals. E.g. say there are mentally-challenged humans with a comparable intelligence/capacity to suffer to non-human animals, and we farm them in poor conditions and eat them, causing their suffering. I'd imagine most people would judge this as morally unacceptable and go vegan on consequentialist grounds (although perhaps not, and it would actually be on deontological grounds?). If you…
What do you think is the most valuable research you've produced so far? Did you think it would be so valuable at the time?
Estimating the value of research seems really hard to me (and this is significantly true even in retrospect).
That said, some candidates are:
- Work making the point that we should give outsized attention to mitigating risks that might manifest unexpectedly soon, since we're the only ones who can
- At the time it didn't seem unusually valuable, but I think it was relatively soon after (a few months) that I saw some people changing behaviour in light of the point, which increased my sense of its importance
- Work on cost-effectiveness of research of unknown difficulty, particularly the principle of using log returns when you don't know where to start (a toy sketch of this idea appears below)
- Felt sort-of important at the time, although I think the kind of value I anticipated hasn't really manifested
- I have felt like it's been useful for my thinking in a variety of domains, thinking about pragmatic prioritisation (and I've seen some others get some value from that); however logarithm is an obvious-enough functional form that maybe it didn't really add much
- Maybe something where it was more about dissemination of ideas than finding deep novel insights (I think it's very hard to draw a line between what counts as "research" and what doesn't…
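Since the log-returns principle might be unfamiliar, here is a minimal toy sketch of one way to read it (my own illustration with made-up numbers and function names, assuming a log-uniform prior on how hard the problem is; it is not taken from the paper being referred to):

```python
from math import log

# Toy model (my assumption, not the paper's): the resources D needed to crack
# a problem are log-uniformly distributed between d_min and d_max, so the
# chance that an investment of R resources succeeds, P(D <= R), grows
# logarithmically in R.

def success_probability(resources, d_min=1.0, d_max=1e6):
    """P(success) for a given investment under a log-uniform prior on difficulty."""
    if resources <= d_min:
        return 0.0
    if resources >= d_max:
        return 1.0
    return log(resources / d_min) / log(d_max / d_min)

for r in [10, 100, 1_000, 10_000, 100_000]:
    # Each tenfold increase in resources adds the same ~0.17 to P(success);
    # that constant increment per multiplicative step is the "log returns".
    print(f"resources = {r:>7,}: P(success) ~ {success_probability(r):.2f}")
```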
You have a pure maths research background. What areas/problems do you think this background and way of thinking give you the strongest comparative advantage at?
Can you give any examples of times your background has felt like it helped you come to valuable insights?
Would you currently prefer a marginal resource to be used by an impatient longtermist (i.e. to reduce existential risk) or by a patient longtermist (i.e. to invest for the future)? Assume both would spend their resource as effectively as possible.
Where do you think the impatient longtermist would spend their resource and where do you think the patient longtermist would spend their resource?
Finally, how do you best think we should proceed to answer these questions with more certainty?
P.S. there may well have been a much simpler way to formulate these questions, feel free to reformulate if you want to!
I'm not sure I really believe that "patient vs impatient longtermists" cleaves the world at its joints. I'll use the terms to mean something like resources aimed at reducing existential risk over the next fifty years or so, versus aiming to be helpful on a timescale of over a century?
In either case I think it depends a lot on the resource in question. Many resources (e.g. people's labour) are not fully fungible with one another, so it can depend quite a bit on comparative advantage.
If we're talking about financial resources, these are fairly fungible. There I tend to think (still applies to both "patient" and "impatient" flavours of longtermism):
- It doesn't make so much sense to analyse at the level of the individual donor
- Instead we should think about the portfolio we want longtermist capital as a whole to be spread across, and what are good ways to contribute to that portfolio at the margin
- Sometimes particular donors will have comparative advantage in giving to certain places (e.g. they have high visibility on a giving opportunity so it's less overhead for them to assess it, and it makes sense for them to fill it)
- Sometimes it's more about coordinating to have roughly the right amount…
What do you believe* that seems important and that you think most EAs/longtermists/people at FHI would disagree with you about?
*Perhaps in terms of your independent impression, before updating on others' views.
That in thinking about community/movement building, it's more important to consider something like how people should be -- e.g. what virtues should be cultivated/celebrated -- rather than just what people should do (although of course both matter).
(That's in impression space. I have various drafts related to this, and I hope to get something public up in the next few months, so I'll leave it brief for now.)
Do you think Ellsberg preferences and/or uncertainty/ambiguity aversion are irrational?
Do you think it's a requirement of rationality to commit to a single joint probability distribution, rather than use multiple distributions or ranges of probabilities?
Related papers:
I think the debate about ambiguity aversion mostly comes down to a bucket error about the meaning of "rational":
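For readers who haven't seen the setup the question refers to, here is a small self-contained sketch of the classic Ellsberg choices (my own illustrative addition, not part of the answer above; the 30/60 urn is the standard textbook example):

```python
# An urn holds 90 balls: 30 red, and 60 that are black or yellow in an unknown
# mix. Many people prefer
#   A: "win if red"              over  B: "win if black", and
#   D: "win if black or yellow"  over  C: "win if red or yellow".
# No single probability for "black" makes both preferences maximise expected
# value, which is why these choices get read as ambiguity aversion.

def expected_win(p, bet):
    """Expected payoff (win = 1, lose = 0) of a bet on a set of colours."""
    return sum(p[colour] for colour in bet)

bets = {"A": {"red"}, "B": {"black"}, "C": {"red", "yellow"}, "D": {"black", "yellow"}}

for n_black in range(0, 61, 15):  # sweep over possible compositions of the urn
    p = {"red": 30 / 90, "black": n_black / 90, "yellow": (60 - n_black) / 90}
    e = {name: expected_win(p, bet) for name, bet in bets.items()}
    # E[A] - E[B] always equals E[C] - E[D], so whenever A beats B, C beats D
    # too: the common pattern "A over B and D over C" has no single
    # distribution rationalising it.
    print(f"black = {n_black:2d}:", {k: round(v, 2) for k, v in e.items()})
```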
Hey Owen, you have a background in mathematics. What is your favorite theorem/proof/object/definition/algorithm/conjecture/...?
One that comes to mind:
Theorem: Every finitely presented group is the fundamental group of some compact 4-manifold.
I like it because:
[With apologies for the fact that this likely makes no sense to most readers.]
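In case the statement in symbols (plus the standard construction behind it) is useful to anyone, here is a brief sketch; this is my own addition for context, not the truncated reasons from the answer above:

```latex
% Statement and standard proof sketch (added for readers' convenience).
\textbf{Theorem.} For every finitely presented group
$G = \langle g_1, \dots, g_n \mid r_1, \dots, r_m \rangle$
there is a closed smooth $4$-manifold $M$ with $\pi_1(M) \cong G$.

\textbf{Sketch.} Start from the connected sum $X = \#_n \, (S^1 \times S^3)$,
whose fundamental group is free on $g_1, \dots, g_n$. Represent each relator
$r_j$ by an embedded loop in $X$ and perform surgery on a tubular
neighbourhood of that loop; by the Seifert--van Kampen theorem this kills the
normal closure of $r_j$ in $\pi_1$, which is exactly what imposing the relator
requires. After the $m$ surgeries one obtains a closed $4$-manifold with
fundamental group $G$.
```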
My impression is that, of FHI's focus areas, biotechnology is substantially more credentialist than the others. I've been hesitant to recommend RSP to life scientists who are considering a PhD because I'm worried that not having a "traditional" degree is harmful to their job prospects.
Do you think that's an accurate concern? (I mostly speak with US-based people, if that's relevant.)
How do you think the EA community can improve its interactions and cooperation with the broader global community, especially those who might not be completely comfortable with the underlying philosophy? Do you think it's more of a priority to spread those underlying arguments, or to simply grow the network of people sympathetic to EA causes, even if they disagree with the principles of EA?
You've done research that seems to me very valuable, and now (I imagine) spend a lot of your time on something more like "facilitating and mentoring other researchers", in your role running the RSP.
1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision? What would've made you not make that decision, or what would lead you to switch back to a larger focus on doing your own research?
2. What do you think makes running RSP your comparative advantage (assuming you think that)? More generally, what do you think makes that sort of "research facilitation/mentorship" someone's comparative advantage?
3. Any thoughts on how to test or build one's skills for that sort of role/pathway? (I guess I currently consider things like research management, project management at a research org, and coordinating fellowships to be in the same broad category. This may not be the best way of grouping things.)
(Feel free to just pick one question, or just say related things!)
There was something of an active decision here. It was partly based on a sense that the returns had been good when I'd previously invested attention in mentoring junior researchers, and partly on a sense that there was a significant bottleneck here for the research community.
Overall I'm not sure what my comparative advantage is! (At least in the long term.)
I think:
- Some things which make me good at research mentoring are:
- being able to get up to speed on different projects quickly
- holding onto a sense of why we're doing things, and connecting to larger purposes
- finding that I'm often effective in 'reactive' mode rather than 'proactive' mode
- (e.g. I suspect this AMA has the highest ratio of public-written-words / time-invested of anything substantive I've ever done)
- being able to also connect to where the researcher in front of me is, and what their challenges are
- There are definitely parts of running RSP which seem not my comparative advantage…
Thanks for doing this AMA!
Do you think "malevolence" (essentially, high levels of traits like Machiavellianism, narcissism, psychopathy, and/or sadism) may play an important role here? Or do other psychological traits, biases, and limitations seem far more important? Or values? Or things like game-theoretic dynamics, how groups interact, institutional structures, etc.?
(Feel free to just talk about this area in the terms that make sense to you, rather than answering that particular framing of the question.)
Which approaches and directions for decision-making under deep uncertainty seem most promising? Are there any that seem likely to be rational but not (apparently?) too permissive like Mogensen's maximality rule?
Which approaches do you see people using or endorsing that you think are bad (e.g. irrational)?
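(Not an answer to the question, just an illustrative aside: here is a toy sketch, under my own simplified reading, of how a maximality-style rule behaves. It is not a statement of Mogensen's exact formulation; the options, states, and numbers are made up.)

```python
# With a set of candidate probability distributions (a "representor"), treat an
# option as permissible unless some rival option has higher expected utility
# under *every* distribution in the set.

representor = [                      # two rival credence functions over states
    {"good": 0.2, "bad": 0.8},
    {"good": 0.6, "bad": 0.4},
]
utilities = {                        # hypothetical options and their payoffs
    "act_A": {"good": 10, "bad": 0},
    "act_B": {"good": 4, "bad": 4},
    "act_C": {"good": 3, "bad": 1},
}

def expected_utility(act, dist):
    return sum(dist[state] * utilities[act][state] for state in dist)

def permissible(act):
    """Permissible iff no rival beats it under every distribution in the representor."""
    return not any(
        all(expected_utility(other, d) > expected_utility(act, d) for d in representor)
        for other in utilities if other != act
    )

for act in utilities:
    # act_A and act_B each win under one distribution, so both are permissible
    # (this is the "permissive" worry); act_C is beaten by act_B everywhere.
    print(act, "->", "permissible" if permissible(act) else "impermissible")
```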
What intellectual progress did you make in the 2010s? (See SSC and Gwern's essays on the question.)
What percentage of "EA intellectual work" is done as part of the standard academic process? From your perspective, how far away is it from the optimal distribution?
What's the difference between deep uncertainty and (complex) cluelessness?
I could probably figure this out online, so don't answer if you don't have a quick answer cached, but is it difficult for RSP scholars (who are not admitted through other channels) to take other classes / do other studies at Oxford, either at the philosophy department or elsewhere? For example, if someone's interested in classes in philosophy, public health, ML, or statistical methods.