One GHW example: the impact of AI tutoring on the case for educational interventions (via Arjun Panickssery on LessWrong).
There have been at least two studies/impact evaluations of AI tutoring in African countries finding extraordinarily large effects:
Summer 2024 — 15–16-year-olds in Nigeria
The study included 800 students in total. The treatment group studied English with the GPT-based Microsoft Copilot twice weekly for six weeks. Students were given only an initial prompt to start chatting, with teachers playing a minimal “orchestra conductor” role, yet they achieved “the equivalent of two years of typical learning in just six weeks.”
February–August 2023 — 8–14-year-olds in Ghana
An educational network called Rising Academies tested Rori, their WhatsApp-based AI math tutor, with 637 students in Ghana. Students in the treatment group used the AI tutor during study hall. After eight months, 25% of the subjects had attrited due to inconsistent school attendance. Among the remainder, the treatment group improved their scores on a 35-question assessment by 5.13 points versus 2.12 points for the control group, a difference “approximately equivalent to an extra year of learning” for the treatment group.
Should this significantly change how excited EAs are about educational interventions? I don't know, but I've also not seen this discussed on the Forum (aside from this post about MOOCs & AI tutors, which received ~zero engagement).
Exciting to hear about your growing team! I noticed the team page hasn’t been updated recently – are you able to share more about the new team members and their roles?
We've extended the application deadline for the 2025 Q1 Pivotal Research Fellowship!
We think this could be a great opportunity for many on the Forum!
Deadline: Tuesday, 26 November.
You can also recommend others who may be a good fit: we'll give you $100 for each accepted candidate we reach through your recommendation.
Since Longtermism as a concept doesn't seem widely appealing, I wonder how other time-focused ethical frameworks fare, such as Shorttermism (focusing on immediate consequences), Mediumtermism (focusing on the foreseeable future), or Atemporalism (ignoring time horizons in ethical considerations altogether).
I'd guess these concepts would also be unpopular, perhaps because ethical considerations centered on timeframes feel confusing, too abstract, or even uncomfortable to many people.
If true, it could mean that any theory framed in opposition, such as a critique of Shorttermism or Longtermism, might be more appealing than the time-focused theory itself. Criticizing short-term thinking is an applause light in many circles.
Thanks for the thoughtful comment!
Re point 1: I agree that the likelihood and expected impact of transformative AI exist on a spectrum. I didn’t mean to imply certainty about timelines, but I chose not to focus on arguing for specific timelines in this post.
Regarding the specific points: they seem plausible but are mostly based on base rates and social dynamics. I think the views of many people, especially those working on AI, have shifted from being shaped primarily by abstract arguments to being informed by observable trends in AI capabilities and investment.