I’m excited to see more projects that are focused on improving EAs’ epistemics.
I work at the EA Infrastructure Fund, and I want to get a better sense of whether it’s worth us doing more grantmaking in this area, and what valuable projects in this area would look like. (These are my own opinions, and this post isn’t intended to reflect the opinions of EAIF as a whole.)
I’m pretty early on in thinking about this. With this post I’m aiming to spell out a bit of why I’m currently excited about this area, and what kinds of projects I’m excited about in particular.
Why EA epistemics?
Basically, it seems to me that having good epistemics is pretty important for effectively improving the world. And I think EA could do better on this front.
I think EA already does well on this front. Relative to other communities I’m aware of, I think EA has pretty good epistemics. And I think we could do better, and if we did, that would be pretty valuable.
I don’t have a great sense of how tractable 'improving EA's epistemics' is, though I suspect there’s some valuable work to be done here. In part this is because I think that EA and the world in general already contain a lot of good epistemic practices, tools and culture, and that there are significant gains to be had from more widespread adoption of these within EA. Though I also think there’s a lot of room for innovation here as well - for pushing the forefront of good epistemics. I think a bunch of this has happened within EA already, and I’d be surprised if there wasn’t more progress we could make.
My impression is that this area is somewhat neglected. I don’t know of many strongly epistemics-focused projects, though it’s possible that a bunch of things are happening here that I’m unaware of. I’d find it particularly useful to hear any examples of things related to the following areas.
The kinds of projects I currently feel excited about
Projects which support the epistemics of EA related organisations
- Most EA work (eg. people working towards effectively improving the world) happens within organisations. And so being able to improve the epistemics of such organisations seems particularly valuable.
- I think having good epistemic culture and practices within an organisation can be hard. Often most of the focus of an organisation is on their object level work, and it can be easy for the project of improving the organisation’s epistemic culture and practices to fall by the wayside.
- The bar for providing value here is probably going to be pretty high. The opportunity cost of organisations’ time will be pretty high, and I could see it being difficult to provide value externally. That being said, if there were a project that ended up being significantly useful to EA-related organisations and significantly improved their epistemics, that seems pretty good to me.
- An example of a potential project here: A consultancy which provides organisations support in improving their epistemics.
Projects which help people think better about real problems
- By real problems, I mean problems with stakes or consequences for the world, as opposed to toy problems.
- For example, epistemics coaching seems like a plausibly valuable service within the community. I’m a lot more excited about the version of this which is eg. you hire someone who is a great thinker to give you feedback on a problem you are working on, and how to think well about it - and less excited about the version which is eg. you hire someone to generally teach you how to have good epistemics, or things which treat epistemics as a ‘school subject’ (though maybe this could be good too). Main reasons here:
- Thinking well on a toy problem can look pretty different from thinking well on a real-world problem. Real-world problems often involve stakeholders, deadlines, people with different opinions, limited information, etc. And I think a lot of the value of having good epistemics is being able to think well in these kinds of situations.
- Also, I expect this to mean that any such epistemics service has much better feedback loops, and is less likely to end up promoting some kind of epistemic approach which isn’t useful (or is even harmful).
Projects which treat good epistemics as an area of active ongoing development for EAs
- Here’s an unfair caricature of EA’s relationship with epistemics: epistemic development is a project for people who are getting involved in EA. Once they’ve learnt the things (like BOTECs and what scout mindset is and so on), then job done - they are ready for the EA job market. I don’t think this is a fair characterisation, though I think EA’s relationship with epistemics resembles this more than I’d like it to.
- I’m more excited about a version of the EA community which more strongly treats having good epistemics as a value that people are continually aspiring towards (even once they are ‘working in EA’) and continues to support them in this aim.
Projects which are focused on building communities or social environments where having good epistemics is a core aspect of the community, if not the core aspect of the community.
- I think EA groups have ‘good epistemics’ as one focus of what they do, though I think they are also more focused on eg. people learning the ‘EA canon’ than I would like, and more focused on recruitment than I would like. (I’m not trying to make a claim here about what EA groups should be doing - plausibly having these focuses makes sense for other reasons - though I’d additionally/separately be excited about groups with a stronger focus on thinking well.)
Final thoughts
In general I’m excited about EA being a great epistemic environment. Or maybe more specifically, I’m excited for EAs to be in a great epistemic environment (that environment might not necessarily be the EA community). Here’s a question:
How much do you agree with the following statement: ‘I’m in a social environment that helps me think better about the problems I’m trying to solve’?
I’m excited about a world where EAs can answer ‘yes, 10/10’ to this question, in particular EAs that are working on particularly pressing problems.
Thanks to Alejandro Ortega, Caleb Parikh and Jamie Harris for previous comments on this. They don’t necessarily (and in fact probably don’t) agree with everything I’ve written here.
For a while, I've been thinking about the following problem: as you get better models of the world/ability to get better models of the world, you start noticing things that are inconvenient for others. Some of those inconvenient truths can break coordination games people are playing, and leave them with worse alternatives.
Some examples:
Poetically, if you stare into the abyss, the abyss then later stares at others through your eyes, and people don't like that.
I don't really have many conclusions here. So far when I notice a situation like the above I tend to just leave, but this doesn't seem like a great solution, or like a solution at all sometimes. I'm wondering whether you've thought about this, about whether and how some parts of what EA does are premised on things that are false.
Perhaps relatedly or perhaps as a non-sequitur, I'm also curious about what changed since your post a year ago talking about how EA doesn't bring out the best in you.
This seems related to me, and I don't have a full answer here, but some things that come to mind:
I haven't thought about this particular framing before, and it's interesting to think about - I don't quite have an opinion on it at the moment. Here are some of the things on my mind at the moment which feel related to this.
Three Potential Project Ideas:
Alternate Perspectives Fellowship:
A bunch of EAs explore different non-EA perspectives for X weeks, then each participant writes up a blog post based on something they learned or one place where they updated. There could also be a cohort post containing a paragraph or two from each participant about their biggest updates.
It probably wouldn't make sense for this to scale massively. Instead, you'd want to try to recruit skilled communicators or decision-makers such that this course could be impactful even with only a small number of participants.
Rethinking EA Fellowship
Get a bunch of smart young EAs from different backgrounds into the same room for two weeks. In the first week, ask them to rethink EA/the EA community from the ground up, taking into account how the situation has shifted since EA was founded.
During the second week, bring in a bunch of experienced EAs who can share their thoughts on why things are the way they are. The young EAs can then revise their suggestions, taking this feedback into account.
Alternate Positive Vision Competition
We had a criticism competition before. Unfortunately, only a small proportion of the criticism seemed valuable (no offense to anyone!) and too much criticism creates a negative social environment. So what I'd love to propose as an alternative would be a competition to craft an alternate positive vision.
I am happy to see this. Have you messaged people on the EA and epistemics slack?
Here are some epistemics projects I am excited about:
And a quick note:
I think the obvious question here should be "how would you know such a consultancy has good epistemics".
As a personal note, I've been building epistemic tools for years, eg. estimaker.app, or casting around for forecasting questions to write on. The FTXFF was pretty supportive of this stuff, but since its fall I've not felt like big EA finds my work particularly interesting or worthy of support. Many of the people I see doing interesting tinkering work like this end up moving to AI Safety.
Not that powerful and positively impactful are the same thing, but here, people who said Biden was too old should be glad he is gone, by their own lights.
Though maybe we let Adam finish his honeymoon first. Congratulations to the happy couple!
... there is an EA and epistemics slack?? (cool!) if it's free for anyone to join, would you be able to send me an access link or somesuch?
Invited! Feel free for others who are somewhat active in the space to ping me for invites.
curious where you're getting this from?
I made it up[1].
But, as I say in the following sentences it seems plausible to me that without betting markets to keep the numbers accessible and Silver to keep pushing on them, it would have taken longer for the initial crash to become visible, it could have faded from the news and it could have been hard to see that others were gaining momentum.
All of these changes seem to increase the chance of Biden staying in, which was pretty knife-edge for a long time.
https://nathanpmyoung.substack.com/p/forecasting-is-mostly-vibes-so-is
thanks for the response!
looks like the link in the footnotes is private. maybe there’s a public version you could share?
re: the rest — makes sense. 1%-5% doesn’t seem crazy to me, i think i would’ve “made up” 0.5%-2%, and these aren’t way off.
How about now https://nathanpmyoung.substack.com/p/forecasting-is-mostly-vibes-so-is
works! thx
I think that "awareness of important simple facts" is a surprisingly big problem.
Over the years, I've had many experiences of "wow, I would have expected person X to know about important fact Y, but they didn't".
The issue came to mind again last week:
My sense is that many people, including very influential folks, could systematically—and efficiently—improve their awareness of "simple important facts".
There may be quick wins here. For example, there are existing tools that aren't widely used (e.g. Twitter lists; Tweetdeck). There are good email newsletters that aren't reliably read. Just encouraging people to make this an explicit priority and treat it seriously (e.g. have a plan) could go a long way.
I may explore this challenge further sometime soon.
I'd like to get a better sense of things like:
a. What particular things would particular influential figures in AI safety ideally do?
b. How can I make those things happen?
As a very small step, I encouraged Peter Wildeford to re-share his AI tech and AI policy Twitter lists yesterday. Recommended.
Happy to hear from anyone with thoughts on this stuff (p@pjh.is). I'm especially interested to speak with people working on AI safety who'd like to improve their own awareness of "important simple facts".
Readit.bot turns any newsletter into a personal podcast feed.
TYPE III AUDIO works with authors and orgs to make podcast feeds of their newsletters—currently Zvi, CAIS, ChinAI and FLI EU AI Act, but we may do a bunch more soon.
I think this is a really important area, and it is great someone is thinking about whether EAIF could expand more into this sort of area.
To provide some thoughts of my own, having explored working with a few EA entities through our consultancy (https://www.daymark-di.com/), which helps organisations improve their decision-making processes, and drawing on discussions I've had with others on similar endeavours:
There does appear to be a fairly frequent assumption that EA organisations suffer less from poor epistemics and decision-making practices, which my experience suggests is somewhat true but unfortunately not entirely. I want to repeat what Jona from cFactual commented below: there are lots of actions EA organisations take that are very positive, such as BOTECs, likely failure models, and using decision matrices. This should be praised, as many organisations don't even do these. However, the mere existence of these is too often assumed to mean good analysis and judgment will naturally occur, and the systems/processes needed to make them useful are often lacking. To be more concrete, as an example, few BOTECs incorporate second-order probability/confidence correctly (or it is conflated with first-order probability), and they subsequently fail to properly account for the uncertainty of the calculation and for the comparisons that can accurately be made between options.
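To illustrate the second-order-uncertainty point, here is a minimal sketch (all numbers and the `botec_samples` helper are made up for illustration) of running a BOTEC as a Monte Carlo simulation rather than multiplying point estimates. With skewed input distributions, the mean, median and interval can tell quite different stories:

```python
import random

random.seed(0)

def botec_samples(n=100_000):
    # Hypothetical BOTEC: impact = people_reached * effect_per_person.
    # Point estimates might be 10,000 people * 0.002 QALYs/person = 20 QALYs.
    # Sampling each input instead propagates the uncertainty through.
    results = []
    for _ in range(n):
        people = random.lognormvariate(9.2, 0.5)   # median roughly 10,000
        effect = random.lognormvariate(-6.2, 1.0)  # median roughly 0.002
        results.append(people * effect)
    return results

samples = sorted(botec_samples())
median = samples[len(samples) // 2]
p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
mean = sum(samples) / len(samples)
print(f"median={median:.1f}, mean={mean:.1f}, 90% interval=({p5:.1f}, {p95:.1f})")
```

Because the output distribution is right-skewed, the mean sits well above the median, and two interventions with identical point estimates can have very different intervals - which is exactly the information a point-estimate-only BOTEC throws away when comparing options.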
It has been surprising to observe the difference between some EA organisations and non-EA institutions when it comes to interest in improving their epistemics/decision making, with large institutions (including governmental ones) being more receptive and proactive in trying to improve - often those institutions are mostly constrained by their slow procurement processes as opposed to appetite.
When it comes to future projects, my recommendations of those with highest value add would be: