Happy May the 4th from Convergence Analysis! Cross-posted on LessWrong.
As part of Convergence Analysis’s scenario research, we’ve been looking into how AI organisations, experts, and forecasters make predictions about the future of AI. In February 2023, the AI research institute Epoch published a report in which its authors use neural scaling laws to make quantitative predictions about when AI will reach human-level performance and become transformative. The report has a corresponding blog post, an interactive model, and a Python notebook.
We found this approach really interesting, but also hard to understand intuitively. While trying to follow how the authors derive a forecast from their assumptions, we wrote a breakdown that may be useful to others thinking about AI timelines and forecasting.
In what follows, we set out our interpretation of Epoch’s ‘Direct Approach’...
Spring has sprung, the days are getting longer and it's just about getting warm enough to sit outside 🌤️🧺🌸
Come hang out with other Giving What We Can pledgers and Effective Givers in a lovely Bloomsbury park.
Bring your own picnic blankets, snacks and games (we'll have...
For my org, I can imagine using this if it was 2x the size or more, but I can't really think of events I'd run that would be worth the effort to organise for 15 people.
(Maybe like a 30% chance I'd use it within 2 years if it had 30+ bedrooms, and less than a 10% chance at the actual size.)
Cool idea though!
I'm confused. Don't you already have a second building? Is that dedicated towards events or towards more guests?
^I'm going to be lazy and tag a few people: @Joey @KarolinaSarek @Ryan Kidd @Leilani Bellamy @Habryka @IrenaK Not expecting a response, but if you are interested, feel free to comment or DM.
I'm posting it now because it seemed a pity that this video, which gave me a lot of motivation for effective altruism, was never uploaded.
Edit: so grateful and positively overwhelmed with all the responses!
I am dealing with repetitive strain injury and don’t foresee being able to really respond to many comments extensively (I’m surprised with myself that I wrote all of this without twitching forearms lol!...
I think your current outlook should be the default for people who engage on the forum or agree with the homepage of effectivealtruism.com. I’m glad you got there and that you feel (relatively) comfortable about it. I’m sorry that the process of getting there was so trying. It shouldn't be.
It sounds like the tryingness came from a social expectation to identify as capital ‘E’ capital ‘A’ upon finding resonance with the basic ideas, and from a sense that identifying that way implied an obligation to support and defend every other EA person and project.
I wish EA weren’t a ...
I worked at OpenAI for three years, from 2021-2024 on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, part of the team developing critiques as a technique for using language models to spot mistakes in other language models...
This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.
Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...
What role do you think journalism can play in advancing the cause of farmed animals? Can you think of any promising topics journalists may want to prioritize in the European context in particular, i.e. topics that have the potential to unlock important gains for farmed animals if seriously investigated and publicized?
About a week ago, Spencer Greenberg and I were debating what proportion of Effective Altruists believe enlightenment is real. Since he has a large audience on X, we thought a poll would be a good way to increase our confidence in our predictions.
Before I share my commentary...
GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it’s hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict...
I absolutely sympathize, and I agree that, with the worldview and information you have, advocating for a pause makes sense. I would get behind 'regulate AI' or 'regulate AGI', certainly. I think, though, that pausing is an incorrect strategy which would do more harm than good, so despite being aligned with you in being concerned about AGI dangers, I don't endorse that strategy.
Some part of me thinks this oughtn't matter, since there's approximately a 0% chance of the movement achieving that literal goal. The point is to build an...
Hey, we're on the blue tartan picnic blanket and I'm wearing a green skirt