
Does EA do any work to change our inadequate society into an adequate society? Is there any way to get involved with that? Any ongoing projects aiming at it? Any planning happening?

Note: If you’re not familiar with inadequate societies, see Inadequate Equilibria and Hero Licensing by Eliezer Yudkowsky.

4 Answers

My comment here lists a number of EA efforts that are aimed at general institutional reforms of various sorts: https://forum.effectivealtruism.org/posts/shdBgsL3ajcJ7XZbS/?commentId=FNe8oKmJ48dwnvpPy

Another notable recent project is Balsa Research: https://thezvi.substack.com/p/announcing-balsa-research

But despite the above, I still think that EA should be thinking much bigger in this direction; civilizational adequacy (sometimes known as "improving institutional decisionmaking" in EA circles) should IMO be elevated to a top-tier cause area alongside global health, biosecurity, and animal welfare (but not displacing AI as #1).

See my team's winning entry in the Future of Life Institute's "AI worldbuilding competition" for a more detailed vision of how I think charter cities, prediction markets, and other big ideas for improving civilizational adequacy might help create a better world: https://forum.effectivealtruism.org/posts/LLfaikCmysmdxussN/fiction-improved-governance-on-the-critical-path-to-ai

I feel like quite a few people are working on things related to this, with approaches I have differing independent impressions of, but I'm very happy there's a portfolio.

Manifold Markets, Impact Markets, Assurance Contracts, Trust Networks, and probably some very obvious things I'm forgetting right now; I just thought I'd quickly throw these in here. I'm also sort of working on this, but it's in parallel with other things, and it mostly consists of a long path of learning and trying to build up an understanding of things.

The Effective Institutions Project might count as this. There may be more relevant projects, depending on what counts, such as the Simon Institute for Longterm Governance and the Center for Election Science.

The kinds of things filed under "Broad Longtermism", perhaps.

Maybe work on impact markets and prediction markets.
(For some reason I didn't fully read acylhalide's answer and I see that I listed some of the same things.)

Roote views itself as part of a meta-movement that includes EA, and is interested in societal systems change (see Marriage Counseling with Capitalism as an example). We've been working on a few projects and have recently been exploring grants to external projects. There are also a lot of communities tangential to EA, like seasteading and charter cities, with their own projects.

2 Comments

I've been thinking that the default existential risk framing might bias EAs to think that the world would eventually end up okay if it weren't for specific future risks (AI, nuclear war, pandemics). This framing downplays the possibility that things are on a bad trajectory by default because sane and compassionate forces ("pockets of sanity") aren't sufficiently in control of history (and perhaps have never been – though some of the successes around early nuclear risk strike me as impressive). 

We can view AI risk as an opportunity to attain control over history, because AI, if it's aligned and things go well, could exercise that control better than we do. But how do you get from "not being in control of history" to solving a massive coordination problem (as well as the technical problems around alignment)? It seems like a top priority to grow and expand pockets of sanity.

(Separate point: My intuition is that "pockets of sanity" are pretty black and white, so if something isn't a pocket of sanity, marginal improvements to it will have little effect; it's better to focus on supporting (or building anew) something where the team, organization, government branch, etc., already has the kind of leadership and culture you want to see more of.)

Your impression of the default framing aligns with what I've heard from folks! Beyond the benefits of changing humanity's trajectory, there's also an argument that we should pursue systems change aimed at the factors driving existential risk in the first place, in addition to addressing it from a research-focused angle. That's the argument of this article on meta existential risk!
