When people write "more dakka," do they simply mean that we need to try harder and/or try more things? I've seen this in two or three pieces of writing on the EA Forum, but I've never seen a clear explanation. Apparently "dakka" is slang from a sci-fi video game/tabletop RPG? Is this useless in-group terminology, or does it actually have value?
As best I can tell, "more dakka" is a reference to this quote. Can anyone point me to a clearer or more authoritative explanation?
...We know the solution. Our bullets work. We just need more. We need More (and better
If you're at EAG BA this weekend, @Sarah Cheng and I are doing:
I've also got some 1:1 slots left, and it'd be great to talk to anyone with feedback on the Forum experience, Forum events, or the EA Newsletter.
"The bottom line is startling: Hansen’s team argues that mainstream climate science, as reflected in the IPCC’s reports, underestimates climate sensitivity to CO2 by about 50 percent. Their research suggests that the “short-term”—century time scale—equilibrium warming from a doubling of CO2e should be 4.5 degrees C, not the standard estimate of 3 degrees.
The reason for this error, in the view of Hansen and his team, is that conventional climate science...
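For a quick sanity check of that "about 50 percent" figure, using the two numbers in the quote:

$$\frac{4.5\,^{\circ}\mathrm{C} - 3\,^{\circ}\mathrm{C}}{3\,^{\circ}\mathrm{C}} = 0.5$$

i.e. Hansen's proposed short-term equilibrium warming is 50% above the standard estimate.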
I want to note that the political staffer talent gap previously observed in posts such as "Go Republican, Young EA" has swung in the opposite direction now.
America PAC ran a get-out-the-vote campaign that Dems are still struggling to understand, and most staffers in the party still don't grasp what it did because "hurr hurr Elon dumb bad man". To the smartest among us it was obvious at the time that he was running circles around the Harris campaign, but the Harris campaign's leadership seemed to sincerely believe they had the stronger field...
The CEA Online Team (which runs this Forum) has finalized our OKRs for the first half-quarter of 2025 — here's a link to the new doc for a new year.
Long-awaited Swapcard feature releases!
The moment we have all been waiting for (and that I've been pushing on for coming up to two years) is finally here!
You can now:
Note: Calendar syncing needs to be enabled on the Web version of Swapcard.
It is becoming increasingly clear to many people that the term "AGI" is vague and should often be replaced with more precise terminology. My hope is that people will soon recognize that other commonly used terms, such as "superintelligence," "aligned AI," "power-seeking AI," and "schemer," suffer from similar issues of ambiguity and imprecision, and should also be approached with greater care or replaced with clearer alternatives.
To start with, the term "superintelligence" is vague because it encompasses an extremely broad range of capabilities above human...
If anyone is based in the Toronto area and wants to support challenge studies, there's a chance you could be quite helpful to 1Day Sooner's hepatitis C work. Please email me (josh@1daysooner.org) if you want to learn more.
Epistemic status: I've been to 2 EAGs, both were pretty life-changing, and I think my preparation was a big factor in this.
My tips:
Take ~5 minutes to try to imagine positive (maybe impossible) outcomes. Consider brainstorming with someone.
For example "org X hires me" or "I find a co-founder" or "I get funded" (these sound ambitious to me, but pick whatever's ambitious to you).
Bad visions in my opinion: "meet people", "make friends", "talk to interesting people". Be more specific, for example, if you'd meet your new ...
Updates from Berkeley 2025:
I can't believe they finally added this feature!
See here: https://app.swapcard.com/settings
Don't forget you can manage your availability
I've substantially revised my views on QURI's research priorities over the past year, primarily driven by the rapid advancement in LLM capabilities.
Previously, our strategy centered on developing highly-structured numeric models with stable APIs, enabling:
However, the progress in LLM capabilities has updated my view. I now believe we should focus on developing and encouraging superior AI reasoning and forec...
A bit more on this part:
Generate high-quality forecasts on-demand, rather than relying on pre-computed forecasts for scoring
Leverage repositories of key insights, though likely not in the form of formal probabilistic mathematical models
To be clear, I think there's a lot of batch intellectual work we can do before users ask for specific predictions. So "Generating high-quality forecasts on-demand" doesn't mean "doing all the intellectual work on-demand."
However, I think there's a broad set of information that this batch intellectual work could look like. I ...
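To make the batch vs. on-demand split concrete, here's a minimal sketch of the division of labor I have in mind. Everything in it is hypothetical (the function names, the data shapes, and the stubbed-out model call); it's just meant to illustrate cached "insight" repositories feeding on-demand forecast generation, not any actual QURI system.

```python
from dataclasses import dataclass


@dataclass
class Insight:
    topic: str
    summary: str  # distilled background research, not a formal probabilistic model


def build_insight_repository() -> list[Insight]:
    """Batch step: intellectual work done ahead of time, before any user asks a question."""
    return [
        Insight("compute-trends", "Summary of recent training-compute growth..."),
        Insight("policy-landscape", "Summary of pending AI regulation..."),
    ]


def call_reasoning_model(prompt: str) -> float:
    """Stand-in for a call to an AI reasoning/forecasting model; returns a probability."""
    return 0.5  # placeholder value


def forecast_on_demand(question: str, repo: list[Insight]) -> float:
    """On-demand step: combine the cached insights with the user's specific question."""
    context = "\n".join(f"- {i.topic}: {i.summary}" for i in repo)
    prompt = f"Background notes:\n{context}\n\nQuestion: {question}\nProbability:"
    return call_reasoning_model(prompt)


if __name__ == "__main__":
    repo = build_insight_repository()  # precomputed once (the batch work)
    print(forecast_on_demand("Will X happen by 2030?", repo))  # generated when asked
```

The point of the sketch is just that the repository is built in batch, while the final probability is only produced when a specific question arrives.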
A quickly-written potential future, focused on the epistemic considerations:
It's 2028.
MAGA types typically use DeepReasoning-MAGA. The far left typically uses DeepReasoning-JUSTICE. People in the middle often use DeepReasoning-INTELLECT, which has the biases of a somewhat middle-of-the-road voter.
Some niche technical academics (the same ones who currently favor Bayesian statistics) and hedge funds use DeepReasoning-UNBIASED, or DRU for short. DRU is known to have higher accuracy than the other models, but gets a lot of public hate for having controversial ...
How should AI alignment and autonomy preservation intersect in practice?
We know that AI alignment research has made significant progress in embedding internal constraints that prevent models from manipulating, deceiving, or coercing users (to the extent that models in fact refrain from doing so). However, internal alignment mechanisms alone don't necessarily give users meaningful control over AI's influence on their decision-making. That is a mechanistic problem in its own right, but…
This raises a question: Should future AI systems be designed to not only align with human values but als...
Quick list of some ideas I'm excited about, broadly around epistemics/strategy/AI.
1. I think AI auditors / overseers of critical organizations (AI efforts, policy groups, company management) are really great and perhaps crucial to get right, but would be difficult to do well.
2. AI strategists/tools telling/helping us broadly what to do about AI safety seems pretty safe.
3. In terms of commercial products, there’s been some neat/scary military companies in the last few years (Palantir, Anduril). I’d be really interested if there could be some companies to au...
Well done to the Shrimp Welfare Project for contributing to Waitrose's pledge to stun 100% of their warm water shrimps by the end of 2026, and for getting media coverage in a prominent newspaper (this article is currently on the front page of the website): Waitrose to stop selling suffocated farmed prawns, as campaigners say they feel pain
This is so great, congratulations team!! <3
One thing this makes me curious about: how good is the existing evidence base that electric stunning is better for the welfare of the shrimp, and how much better is stunning? I didn't realize SWP was thinking of using the corporate campaign playbook to scale up stunning, so it makes me curious how robustly good this intervention is, and I couldn't quickly figure this out from the Forum / website. @Aaron Boddy🔸 is there a public thing I can read by any chance? No pressure!
FWIW, "how good is stunning for welfare" ...
FYI rolling applications are back on for the Biosecurity Forecasting Group! We have started the pilot and are very excited about our first cohort! Don't want to apply but have ideas for questions? Submit them here (anyone can submit!).
A reflection on the posts I have written in the last few months, elaborating on my views
In a series of recent posts, I have sought to challenge the conventional view among longtermists that prioritizes the empowerment or preservation of the human species as the chief goal of AI policy. It is my opinion that this view is likely rooted in a bias that automatically favors human beings over artificial entities—thereby sidelining the idea that future AIs might create equal or greater moral value than humans—and treating this alternative perspective with unwarra...
I wrote a quick take on lesswrong about evals. Funders seem enchanted with them, and I'm curious about why that is.
https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=HzDD3Lvh6C9zdqpMh
Adrian Tchaikovsky, the science fiction writer, is a master at crafting bleak, hellish future worlds. In Service Model, he has truly outdone himself, conjuring an absurd realm where human societies have crumbled, and humanity teeters on the brink of extinction.
Now, that scenario isn't entirely novel. But what renders the book both tear-inducing and hilarious is the presence in this world of numerous sophisticated robots, designed to eliminate the slightest discomfort from human existence. Yet they adhere so strictly to their programmed rules that it onl...
How might EA-aligned orgs in global health and wellness need to adapt calculations of cost-effective interventions given the slash-and-burn campaign currently underway against US foreign aid? Has anyone tried gaming out what different scenarios of funding loss look like (e.g., one where most of the destruction is reversed by the courts, or where that reversal is partial, or where nothing happens and the days are numbered for things like PEPFAR)? Since US foreign aid is so varied, I imagine that's a tall order, but I've been thinking about this quite a bit lately!
Although it's an interesting question, I'm not sure that gaming out scenarios is that useful in many cases. I think putting energy into responding to funding changes as they appear may be more important. There are just so many possible scenarios in the next few months.
PEPFAR might be the exception to that, as if it gets permanently cut then there just has to be a prompt and thought-through response. Other programs might be able to be responded to on the fly, but if the US does pull out of HIV funding there needs to be a contingency plan in ...