I'm currently researching how to govern AI-driven explosive technological growth via a summer research fellowship with Pivotal Research.
Previously, I worked on executive support and events at the Centre for Effective Altruism (CEA). I also scaled up the EA Opportunity Board, interned at Global Challenges Project, and founded the EA student group at University of Wisconsin–Madison, where I studied Neurobiology and Psychology.
If you think we share an interest (we probably do!), don't hesitate to reach out :)
It's been a few years since I worked on the opportunity board so I don't really remember the decision to add this filter.
It's based on this database of EA orgs, which I haven't kept up to date. (I'm not sure whether more recent opportunity board contributors have.) I borrowed the language from a similar database Michael A made. Worth noting that both databases were made before or during summer 2022, when EA was much less politicized.
Looking at it now, I also find this filter a bit weird and would probably advise removing it or making sure it's up to date and uses language most organizations would endorse.
Thank you for taking this on!
This project was started by motivated university students, sustained by grant funding (thanks, Effective Altruism Infrastructure Fund! cc @calebp ), and now ultimately absorbed by a proper organization.
I like that arc. I think 'find an organizational home that doesn't depend on one-off grant funding and unusually dedicated/risk-tolerant organizers' is a worthy goal for early-stage projects to orient towards. (This can be done either by starting a more mature org on the basis of a good pilot project or by joining an existing org, as the board did here.)
National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritisation suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty.
I found this distinction really helpful.
It reminds me of Holden Karnofsky's piece on How to make the best of the most important century (2021), in which he presents two contrasting frames:
- The "Caution" frame. In this frame, many of the worst outcomes come from developing something like PASTA in a way that is too fast, rushed, or reckless. We may need to achieve (possibly global) coordination in order to mitigate pressures to race, and take appropriate care. (Caution)
- The "Competition" frame. This frame focuses not on how and when PASTA is developed, but who (which governments, which companies, etc.) is first in line to benefit from the resulting productivity explosion. (Competition)
- People who take the "caution" frame and people who take the "competition" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
- I worry that the "competition" frame will be overrated by default, and discuss why below.
I think this is really fair pushback, thanks! Skeptical coverage of AI development is legitimate. The way I wrote this over-implied that these articles are a failing of journalism; the marketing-hype claim is not baseless.
But I'm torn. I still think there's something off about current AI coverage, and this could be a valid reason to want more journalism on AI. Most articles seem to default either to full embrace of AI companies' claims or to blanket skepticism, with relatively few spotlighting the strongest versions of the arguments on both sides of a debate.
Also, I think my core point stands without conditioning on object-level views: we need more journalists who can dig deep into AI development. More investigation and scrutiny from all angles would serve us better than our current situation of relatively thin coverage.