We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5-minute survey - all questions are optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone who has responded already!
Back to our regularly scheduled intro...
This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcome.
If you'd like to receive these summaries via email, you can subscribe here.
Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!
Object Level Interventions / Reviews
AI
Proposals for the AI Regulatory Sandbox in Spain
by Guillem Bas, Jaime Sevilla, Mónica Ulloa
Author’s summary: “The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database.”
Power laws in Speedrunning and Machine Learning
by Jaime Sevilla
Paper by Epoch. World record progressions in video game speedrunning fit a power-law pattern very well. Due to a lack of longitudinal data, the authors can't provide definitive evidence of power-law decay in machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if they assume this model, it suggests that a) machine learning benchmarks aren't close to saturation and b) sudden large improvements are infrequent but aren't ruled out.
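To make the fitting procedure concrete, here's a minimal sketch of how a power law of the form record ≈ a·t^(−b) can be fit to a record progression via linear regression in log-log space. This is not Epoch's actual code, and the data and values below are invented purely for illustration:

```python
# A minimal sketch (not Epoch's code) of fitting a power law,
# record ≈ a * t^(-b), to a world-record progression by linear
# regression in log-log space. All numbers are invented.
import numpy as np

t = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)  # attempt index (or time)
record = np.array([600, 520, 430, 380, 340, 300, 270], dtype=float)  # record (seconds)

# Taking logs gives log(record) = log(a) - b*log(t), a straight line,
# so ordinary least squares on the logged data recovers a and b.
slope, intercept = np.polyfit(np.log(t), np.log(record), 1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: record ≈ {a:.1f} * t^(-{b:.3f})")
```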
No, the EMH does not imply that markets have long AGI timelines
by Jakob
Argues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, for two reasons:
- Savvy investors have no incentive to bet on short timelines, because it would tie up their capital until a point where it no longer matters to them (ie. they are dead, or so rich it doesn't matter).
- They do have an incentive to increase personal consumption, as savings are less useful in a TAI future. However, they aren't a large enough group to influence interest rates this way.
- This makes interest rates more of a poll of upper-middle-class consumers than of investors, reflecting whether they believe that a) timelines are short and b) savings won't be useful post-TAI (vs. eg. believing savings are more useful, due to worries about losing their job to AI).
My Assessment of the Chinese AI Safety Community
by Lao Mein
On April 11th, the Cyberspace Administration of China released a draft of “Management Measures for Generative Artificial Intelligence Services” for public comment. Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, that no one in China is talking about it, and that the focus, if any, is on near-term stability.
They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English, and there is little significant AI Safety work in China. They suggest there is a lack of people at the interface of Western EA and Chinese technical work, and that you can’t just copy Western EA ideas over to China due to different mindsets.
AI doom from an LLM-plateau-ist perspective
by Steven Byrnes
Transformative AI (TAI) might come about via a large language model (LLM), something similar to / involving LLMs, or a quite different algorithm. An ‘LLM-plateau-ist’ believes LLMs specifically will plateau in capabilities before reaching TAI levels. The author makes several points:
- LLM-plateau-ists are likely to believe TAI isn't imminent (eg. <2 years away), but might still have short timelines overall given how fast the field changes.
- Some people will say they have credence in LLM-plateau-ism, but not act in line with it (possibly because the risk from LLMs is more urgent and tractable).
- A ‘pause’ might be bad if you believe LLMs will plateau, because pausing giant AI experiments gives more space to focus on algorithm development - which, on this view, is the more likely route to TAI.
- They suggest keeping TAI-relevant algorithmic insights and tooling out of the public domain.
Transcript and Brief Response to Twitter Conversation between Yann LeCunn and Eliezer Yudkowsky
by Zvi
Transcript of a Twitter conversation between Yann LeCun (Chief AI Scientist at Meta) and Eliezer Yudkowsky. Yann shares their proposal for making AIs more steerable by optimizing objectives at run time, rejects the idea that matching objectives with human values is particularly difficult, and argues Eliezer needs to stop scaremongering. Eliezer argues that inner alignment is difficult, and that Yann is ignoring a real risk.
My views on “doom”
by paulfchristiano
The author puts the chance of humanity irreversibly messing up our future within 10 years of building powerful AI at 46% in total, split into:
- 22% from AI takeover
- 9% from additional extinction probability as a direct result of AI or the rapid change associated with it (eg. via more destructive war or terrorism)
- 15% from messing up in other ways during a period of accelerated technological change (eg. creating a permanent dystopia or making unwise commitments)
Other Existential Risks (eg. Bio, Nuclear)
Genetic Sequencing of Wastewater: Prevalence to Relative Abundance
by Jeff Kaufman
Identifying future pandemics via sequencing wastewater is difficult because sequencing reads are several steps removed from infection rates. The author and several others at the Nucleic Acid Observatory are working through a plan to understand how relative abundance (the fraction of sequencing reads matching an organism) varies with prevalence (the fraction of people currently infected) and organism (eg. when sampling wastewater you'd expect disproportionately more gastrointestinal than blood pathogens). They've gathered some initial data from papers that published it in the Sequence Read Archive, and begun cleaning it - they welcome others to let them know if anything looks off in this data.
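As a rough illustration of the kind of relationship they're mapping out, here's a minimal sketch assuming the simplest possible model - relative abundance proportional to prevalence, with the coefficient varying by organism. This is not the NAO's actual analysis, and all numbers below are invented:

```python
# A minimal sketch, assuming relative_abundance ≈ k * prevalence,
# with k varying by organism. Not the NAO's actual analysis;
# all numbers are invented for illustration.
import numpy as np

prevalence = np.array([0.001, 0.005, 0.01, 0.02])       # fraction of people infected
rel_abundance = np.array([2e-7, 9e-7, 2.1e-6, 3.8e-6])  # fraction of sequencing reads

# A least-squares fit through the origin estimates k, the reads per unit
# prevalence. Comparing k across organisms would show, for example,
# gastrointestinal pathogens being over-represented in wastewater.
k = np.sum(prevalence * rel_abundance) / np.sum(prevalence ** 2)
print(f"estimated relative abundance per unit prevalence: k ≈ {k:.2e}")
```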
Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)
by JorgeTorresC, Jaime Sevilla, Mónica Ulloa, Daniela Tiznado, Roberto Tinoco, JuanGarcia, Morgan_Rivers, Denkenberger, Guillem Bas
Linkpost for this report. According to Xia et al. (2022), ~75% of the world's population could starve to death in a severe nuclear winter. Argentina has better conditions for surviving this scenario than most countries, and is one of the world's leading producers and exporters of food. Because of this, the authors have put together a strategic proposal recommending initiatives and priority actions for the Argentinian government to consider, including:
- Developing response plans to ensure food and water supply in the face of this risk.
- Formulating strategies and legal frameworks for internal food rationing and waste reduction.
- Maintaining open trade policies to enhance food production and facilitate access to critical inputs and materials.
- Communicating clearly and centrally, via dissemination of the emergency management plan.
- Redirecting animal feed and biofuel production resources towards human food consumption.
- Adapting agricultural systems to increase food production.
- Adapting aquaculture to increase food production.
- Pursuing high-tech adaptations to increase food production.
Animal Welfare
by Jack_S, jahying
Asia holds >40% of farmed land animals and >85% of farmed fish, the majority in China. However, Asian advocates receive only an estimated ~7% of global animal advocacy funding. Good Growth describes two stakeholder-engaged studies they conducted to better understand animal advocates and consumers in China.
Key findings about the animal welfare community:
- The public aren’t generally aware of farmed animal welfare issues.
- It’s difficult to operate as an animal non-profit.
- Vegetarianism can have religious connotations that are off-putting to some consumers.
- The animal welfare community is small, has few professional opportunities, and limited resources for capacity building.
- Opportunities exist in health, education and food-related messaging and events, and in integrating welfare concerns into sustainability and environment movements.
Key findings about attitudes of the public toward animal welfare:
- Language choice is key, eg. ‘welfare’ (fúlì 福利) often makes people think of luxuries like massages for cattle. See a guide to appropriate language here.
- Crustacean welfare is a major turn-off for Chinese participants and received a lot of push-back across all demographics.
- Mothers were keen on safer, higher-welfare products for their children. Grandparents were also willing to try higher welfare products after seeing videos of animal suffering.
- Participants rarely saw animal welfare as a “foreign concept / conspiracy”, contrary to what some advocates had expected.
Key findings about attitudes of the public toward alternative proteins:
- Consumers are worried about food safety, and alt-protein companies need to carefully avoid the negative connotations associated with ‘fake meat’ (eg. old or ‘zombie’ meat, and meat from non-food animals).
- Because veg*nism in China is strongly associated with Buddhist and Daoist ideas of purity and good health, most veg*ns in China weren’t interested in alt proteins.
- Many people experiment with new food during communal meals, where 5-6 dishes might be shared per meal. Developing dishes for these types of restaurants represents an opportunity for alt proteins to spread.
- No one thought they were the target market for plant-based meats.
These findings got a positive reception from both Chinese and international advocacy organisations. The authors suggest similar stakeholder-engaged and qualitative methods (see the post for details on the methodologies used) are under-utilized in EA. They're happy to chat with anyone interested in exploring this at info@goodgrowth.io.
Global Health and Development
by Rethink Priorities, Aisling Leow, jenny_kudymowa, bruce, Tom Hird, JamesHu
Shallow investigation into whether improving weather forecasting could benefit agriculture in low- and lower-middle-income countries. Global numerical weather predictions are often used in these countries, and aren't of great quality. The authors estimate that additional observation stations would not cross Open Philanthropy's cost-effectiveness bar (16x-162x vs. a bar of 1000x). However, they suggest other interventions could be worthwhile, like identifying where global numerical weather predictions are already performing well (they work better in some areas than others) or extending access to S2S (subseasonal-to-seasonal) databases.
World Malaria Day: Reflecting on Past Victories and Envisioning a Malaria-Free Future
by 2ndRichter, GraceAdams, Giving What We Can
Giving What We Can is running a fundraiser for World Malaria Day, and this post overviews efforts to date to prevent the disease.
In 2021, over 600,000 people died of malaria. It costs ~$5000 USD to save one of these lives via bednets or seasonal medicine. Using data from openbook.fyi, the authors estimate that donations from EAs have saved >25,000 lives from malaria. Some EAs have also actively developed new interventions / implementations (eg. ZZapp Malaria).
They also note that almost half of the world's countries have eradicated malaria via public health efforts since 1945; it was eradicated from Europe in 1970. 95% of malaria cases now occur in Africa. Recent advances in vaccines and gene drives provide hope for eradicating malaria in the countries still affected.
Rationality, Productivity & Life Advice
What are work practices that you’ve adopted that you now think are underrated?
by Lizka
Top comments include:
- The concept of “who owns the ball”, ie. ensuring clear ownership of every task.
- Using a ‘watch team backup’ to create a culture of double-checking without implying the other person is doing a bad job.
- Working from an office or coworking space.
- Time-capping, ie. setting a limited amount of time to accomplish a specific goal.
No, *You* Need to Write Clearer
by NicholasKross
Suggests the AI alignment and safety community needs to write exceptionally clearly and specifically, spelling out full reasoning and linking pages that explain baseline assumptions as needed. This is because the field is pre-paradigmatic, so little can be assumed and there are no ‘field basics’ to fall back on.
Community & Media
Current plans as the incoming director of the Global Priorities Institute
by Eva
Eva Vivalt is Assistant Professor in the Department of Economics at the University of Toronto, and the new Executive Director of the Global Priorities Institute (GPI). Their current views on what GPI should do more of are:
- Research on decision-making under uncertainty.
- Increasing empirical research.
- Expanding GPI’s network in economics.
- Exploring expanding to other fields and topics (eg. psychology, and whether the existing economics and philosophy teams can contribute to conversations on AI).
- Mentoring students and early career researchers.
Suggest candidates for CEA's next Executive Director
by MaxDalton, Michelle_Hutchinson, ClaireZabel
The Centre for Effective Altruism (CEA) is searching for a new Executive Director. You can suggest candidates by May 3rd and/or provide feedback on CEA’s vision and hiring process via this form.
They are open to and enthusiastic about candidates who want to make significant changes (eg. shutting down or spinning off programs, or focusing on specific cause areas vs. promoting general principles) - though this isn't a requirement. It's also not a requirement that candidates have experience working in EA, are unalloyed fans of EA, or live in Oxford. The post also lays out the hiring process, which includes input from an advisor outside of EA.
Seeking expertise to improve EA organizations
by Julia_Wise, Ozzie Gooen
A task force - including the authors and others in the EA ecosystem who are TBD - is being created to sort through reforms that EA organizations might enact and recommend the most promising ideas. As part of the process, the authors are keen to gather ideas and best practices from people who know a lot about areas outside EA (eg. whistleblowing, nonprofit boards, COI policies, or the organization and management of sizeable communities). You can recommend yourself or others here.
Life in a Day: The film that opened my heart to effective altruism
by Aaron Gertler
Life in a Day is a 90-minute film which shows what different people around the world are doing throughout a single day. It shows that in many ways we are all the same, and it creates empathy. The author thinks that without watching this, they may not have had the “yes, this is obviously right” experience when hearing about a philosophy dedicated to helping people as much as possible.
Two things that I think could make the community better
by Kaleem
1. CEA’s name should change because it leads to misunderstanding of what they do / are responsible for. Eg. see these two quotes by executive directors of CEA, which contrast with some community members' perceptions:
- “We do not think of ourselves as having or wanting control over the EA community” (Max Dalton, 2022)
- “I view CEA as one organization helping to grow and support the EA community, not the sole organization which determines the community’s future” (Joan Gass, 2021)
In the comments, Ben West (CEA Interim Managing Director) mentions that renaming CEA would be a decision for a permanent Executive Director, so it won't happen in the short term.
2. The ‘community health team’ is part of CEA, which might reduce the community's trust in it. Separating the team out would allow it to build an impartial reputation, and reduce worries about:
- Conflicts of interest (COIs).
- Community members who don't like CEA, or who have had negative interactions with one team or aspect of it, feeling hesitant to reach out to the community health team.
- Sensitive personal or confidential information being transferred between the community health team and other members of CEA.
In the comments, Chana Messinger (interim head of Community Health) mentions they’ve been independently thinking about whether to spin out or be more independent, and gives considerations for and against.
David Edmonds's biography of Parfit is out
by Pablo
A biography of philosopher Derek Parfit has now been published; it includes coverage of his involvement with effective altruism.
Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)
by Chris Scammell, DivineMango
A collection of resources on how to be okay in the face of approaching transformative AI. Includes:
- Posts from community members on how to maintain wellbeing and / or determination.
- A list of tools, with links, such as therapy, meditation, and productivity sprints.
- Specific therapists and coaches you can reach out to.
- EA organisations that provide support in this area (eg. EA Mental Health Navigator, Rethink Wellbeing).
Story of a career/mental health failure
by zekesherman
The author shares their personal career story, which involved attempting to switch paths from finance (earning to give) into computer science in order to maximize impact, despite poor personal fit for the latter. This resulted in years of unemployment and poor mental health, and is something they regret. They also suggest some actions the community could take to reduce these risks, eg. being more proactive about watching and checking in on other members of the EA community.