Quick takes

When people write "more dakka," do they simply mean that we need to try harder and/or try more things? I've seen this in two or three pieces of writing on the EA Forum, but I've never seen a clear explanation. Apparently "dakka" is slang from a sci-fi video game/tabletop RPG? Is this useless in-group terminology, or does it actually have value?

As best I can tell, "more dakka" is a reference to this quote. Can anyone point me to a clearer or more authoritative explanation?

We know the solution. Our bullets work. We just need more. We need More (and better

... (read more)
5
Arthur Malone🔸
I think the examples you give are actually contrary to the useful message of "more dakka." Yours suggest "if something doesn't work, try more of it," which in general is poor advice. Sometimes it's true that you need more of something before you hit a threshold that generates results. But most of the time, negative results are informative and should guide you to change your approach.

More dakka is about when something does work, but doesn't solve the problem entirely, or is easy to drop off rather than continue. It's a useful concept trying to correct for an observed tendency to ignore only-somewhat-positive results.

Example: "bright lights seemed to help a bit, but my seasonal depression is still lingering." More dakka: "have you tried even brighter lights?"

Example: "we brainstormed ten ideas and got some that seemed workable, but they still have issues." More dakka: "try listing 100 ideas before committing to a so-so one from the first ten."

@Joseph "dakka" is just an onomatopoeic term for the sound of a machine gun ("dakka dakka dakka"), and the phrase comes from the TV Tropes entry. The fanciful names there are useful for fun, reference-based humor (and I use them a lot in my personal life!), but I do think porting them over to EA jargon is probably net negative for clarity/professionalism.

Not that it's super important, but TVTropes didn't invent the phrase (nor do they claim they did); it's from Warhammer 40,000.

3
Joseph
Thank you. Both the links to the tag on LessWrong and the dozen examples are helpful. I appreciate it.

If you're at EAG BA this weekend, @Sarah Cheng and I are doing:

I've also got some 1:1 slots left, and it'd be great to talk to anyone with feedback on the forum experience, forum events, or the EA Newsletter.
 

Is climate sensitivity super-wrong?

Thomas Homer-Dixon on why James Hansen's latest climate findings matter

"The bottom line is startling: Hansen’s team argues that mainstream climate science, as reflected in the IPCC’s reports, underestimates climate sensitivity to CO2 by about 50 percent. Their research suggests that the “short-term”—century time scale—equilibrium warming from a doubling of CO2e should be 4.5 degrees C, not the standard estimate of 3 degrees.

The reason for this error, in the view of Hansen and his team, is that conventional climate science... (read more)
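
For readers who want to see what the roughly 50 percent revision means numerically, here is a minimal worked relation using the standard logarithmic approximation for CO2 forcing. This is illustrative background only, not taken from Hansen's paper:

```latex
% Equilibrium warming from a change in CO2e concentration, under the standard
% logarithmic forcing approximation, with S the climate sensitivity per doubling:
\[
  \Delta T \;=\; S \, \log_2\!\frac{C}{C_0}
\]
% For a doubling (C = 2 C_0), \Delta T = S. The dispute is therefore
% S = 3 °C (IPCC central estimate) versus S = 4.5 °C (Hansen et al.),
% and any warming projection from this relation scales up by the same ~50 percent.
```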

I want to note that the political staffer talent gap previously observed in posts such as "Go Republican, Young EA" has swung in the opposite direction now. 

 

America PAC ran a get-out-the-vote campaign that Dems are still struggling to understand, and most staffers in the party still don't grasp what it did because their analysis stops at "hurr hurr Elon dumb bad man". To the smartest among us it was obvious at the time that he was running circles around the Harris campaign, but the Harris campaign's leadership seemed to sincerely believe they had the stronger field... (read more)

The CEA Online Team (which runs this Forum) has finalized our OKRs for the first half-quarter of 2025 — here's a link to the new doc for a new year.

I've just updated the doc with a summary of the CEA Online Team's Q1.2 OKRs.

1
MvK🔸
I'm seeing lots of O's and no KR's. Is this intentional? (My understanding is that one of the main benefits of having KPIs or OKRs is the accountability that comes from having specific and measurable (i.e. quantifiable) outcomes that you can track and evaluate, and I can't see any of those in the document.)
3
Sarah Cheng
Ah yeah sorry, the doc just has a summary of our OKRs, which is meant to broadly communicate what our team is working on. It's not the actual text from our OKRs. I've added a sentence to clarify that.

Long-awaited Swapcard feature releases!

The moment we have all been waiting for (and that I've been pushing on for coming up to two years) is finally here!

You can now:

  1. Sync your event agenda with Google Calendar
  2. Reschedule meetings from your mobile

Note: Calendar syncing needs to be enabled on the Web version of Swapcard.

10
Ivan Burduk
Ah interesting, good to know! What kind of bugs have you encountered? I did some basic tests and it seemed to work smoothly for me.

I heard reports of it getting out of sync or being out of date in some way. For example, a room change on Swapcard not being reflected in the Google calendar. I haven't tried it myself, and I haven't heard anything less vague, sorry. 

2
Yonatan Cale
Oh, looking now: my calendar sync is on, but none of the Swapcard events appear in my Google Calendar (not meetups, not 1-on-1s). (I synced to Google Calendar before scheduling anything.) Do you have a way to debug it? Otherwise I'll disconnect and reconnect.

It is becoming increasingly clear to many people that the term "AGI" is vague and should often be replaced with more precise terminology. My hope is that people will soon recognize that other commonly used terms, such as "superintelligence," "aligned AI," "power-seeking AI," and "schemer," suffer from similar issues of ambiguity and imprecision, and should also be approached with greater care or replaced with clearer alternatives.

To start with, the term "superintelligence" is vague because it encompasses an extremely broad range of capabilities above human... (read more)


How do you feel about this framework?

7
Habryka
I don't think this is true, or at least I think you are misrepresenting the tradeoffs and diversity here. There is some publication bias here because people are more precise in papers, but honestly, scientists are also not more precise than many top LW posts in the discussion sections of their papers, especially when covering wider-ranging topics.

Predictive coding papers use language incredibly imprecisely, analytic philosophy often uses words in really confusing and inconsistent ways, and economists (especially macroeconomists) throw out various terms in quite imprecise ways.

But also, as soon as you leave the context of official publications and instead look at lectures, or books, or private letters, you will see people use language much less precisely, and those contexts are where a lot of the relevant intellectual work happens. Especially when scientists start talking about the kind of stuff that LW likes to talk about, like intelligence and philosophy of science, there is much less rigor.

(Also, I recommend people read A Human's Guide to Words as a general set of arguments for why "precise definitions" are really not viable as a constraint on language.)

EA Forum feature request: Can we get a bluesky profile link button for profile pages?

3
Sarah Cheng
Thanks for the suggestion! This should be relatively quick to add so I'll see if we can do it soon. :) I was also thinking of setting up a bluesky bot account similar to our twitter account. Do you know how active the EA-ish bluesky community is?

Just noticed the feature is deployed already. Thanks!

3
Milan Weibel🔹
High-variance. Most people seem to have created an account and then gone back to being mostly on (eX)twitter. However, there are some quite active accounts. I'm not the best person to ask, since I'm not that active either. Still, having the bluesky account post as a mirror of the twitter account maybe isn't hard to set up?
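
For what it's worth, the Bluesky side of a mirror is a small script. Here's a minimal sketch using the `atproto` Python SDK, where `fetch_new_tweets()` is a hypothetical placeholder for however the team already pulls the twitter account's posts (in practice, that side is the harder part):

```python
# Minimal sketch of mirroring existing tweets to a Bluesky account.
# Assumes the `atproto` package (pip install atproto) and a Bluesky app password.
# fetch_new_tweets() is a hypothetical placeholder for the existing twitter-side
# pipeline; replace it with whatever source of post text is already available.
from atproto import Client

def fetch_new_tweets() -> list[str]:
    """Hypothetical: return the text of tweets not yet mirrored."""
    return []

def mirror_to_bluesky(handle: str, app_password: str) -> None:
    client = Client()
    client.login(handle, app_password)
    for text in fetch_new_tweets():
        # Bluesky posts are capped at 300 characters, so truncate defensively.
        client.send_post(text=text[:300])

if __name__ == "__main__":
    mirror_to_bluesky("ea-forum.example", "app-password-here")  # hypothetical handle
```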

If anyone is based in the Toronto area and wants to support challenge studies, there's a chance you could be quite helpful to 1Day Sooner's hepatitis C work. Please email me (josh@1daysooner.org) if you want to learn more.

EAG(x) tips

Epistemic status: I've been to 2 EAGs, both were pretty life-changing, and I think my preparation was a big factor in this.

My tips:

Have some kind of vision

Take ~5 minutes to try to imagine positive (maybe impossible) outcomes. Consider brainstorming with someone. 

For example "org X hires me" or "I find a co-founder" or "I get funded" (these sound ambitious to me, but pick whatever's ambitious to you).

Bad visions in my opinion: "meet people", "make friends", "talk to interesting people". Be more specific; for example, if you'd meet your new ... (read more)


Updates from Berkeley 2025:

Google Calendar sync

I can't believe they finally added this feature!

See here: https://app.swapcard.com/settings

Don't forget you can manage your availability

4
Yonatan Cale
Just like you wouldn't schedule a meeting to ask someone the names of the U.S. states (because you can check Wikipedia), I'm against scheduling meetings to ask something you can check on the EA Forum (or LessWrong).

For example, if you're curious what's new in global health and wellbeing, check the "global health and wellbeing" tag, and sort by "new". (Maybe after checking the tag you'll still want a meeting for some reason, but I'd at least check the tag first.)

Examples:
* I wouldn't do: Ask an org if they're hiring without checking the 80k job board (and/or the org's website, and/or the org's tag).
* Seems ok: Reaching out to someone from an org, saying "I see you're hiring, I'm considering studying for 3 months and then applying, do you think it would be better for me to apply now and if I don't pass then study and apply again in 3 months?"
* Seems great: Asking this in Swapcard. Maybe they can just reply in 5 seconds with "yeah sure apply, no problem to repeat after 3 months"? Or maybe they'll say it's better to meet.
2
Yonatan Cale
Before/after EAG(x): Usually there are small events 1 (2?) days before, and bigger events up to ~3 days after. But that's just my vague rule of thumb; it's better to try to find the groups where these events are organized.

I've substantially revised my views on QURI's research priorities over the past year, primarily driven by the rapid advancement in LLM capabilities.

Previously, our strategy centered on developing highly-structured numeric models with stable APIs, enabling:

  1. Formal forecasting scoring mechanisms
  2. Effective collaboration between human forecasting teams
  3. Reusable parameterized world-models for downstream estimates
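
As a purely illustrative sketch of what the first item (formal forecasting scoring mechanisms) can look like in code (none of this is QURI's actual code or API, and the model and question names are made up):

```python
# Illustrative only: a formal scoring mechanism for probabilistic forecasts,
# in the spirit of "highly-structured numeric models with stable APIs".
# The toy model, question names, and outcomes below are hypothetical.
import math

def log_score(p_assigned_to_outcome: float) -> float:
    """Log score of a forecast: higher is better, 0 is the maximum."""
    return math.log(p_assigned_to_outcome)

def brier_score(p_yes: float, outcome_yes: bool) -> float:
    """Brier score: squared error of the stated probability (lower is better)."""
    return (p_yes - (1.0 if outcome_yes else 0.0)) ** 2

# A toy "parameterized world-model": forecasts derived from one shared parameter,
# so downstream estimates stay consistent and reusable.
def forecasts(base_rate: float) -> dict[str, float]:
    return {
        "event_A_by_2026": base_rate,
        "event_A_and_B_by_2026": base_rate * 0.4,  # hypothetical conditional
    }

resolved = {"event_A_by_2026": True, "event_A_and_B_by_2026": False}
for question, p in forecasts(base_rate=0.3).items():
    outcome = resolved[question]
    p_outcome = p if outcome else 1.0 - p
    print(question, round(log_score(p_outcome), 3), round(brier_score(p, outcome), 3))
```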

However, the progress in LLM capabilities has updated my view. I now believe we should focus on developing and encouraging superior AI reasoning and forec... (read more)

A bit more on this part:

Generate high-quality forecasts on-demand, rather than relying on pre-computed forecasts for scoring

Leverage repositories of key insights, though likely not in the form of formal probabilistic mathematical models

To be clear, I think there's a lot of batch intellectual work we can do before users ask for specific predictions. So "Generating high-quality forecasts on-demand" doesn't mean "doing all the intellectual work on-demand."

However, I think there's a broad range of forms this batch intellectual work could take. I ... (read more)
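
One way to picture the batch-versus-on-demand split described above, as a rough sketch; the structure and names here are my own assumptions, not QURI's design:

```python
# Sketch: precompute reusable "insights" in batch, then answer specific
# forecasting questions on demand by combining them. All names are hypothetical.
from functools import lru_cache

@lru_cache(maxsize=None)
def batch_insights(topic: str) -> dict[str, float]:
    """Expensive background work done ahead of time (research, base rates, etc.)."""
    return {"base_rate": 0.2, "trend_multiplier": 1.3}  # placeholder numbers

def forecast_on_demand(topic: str, horizon_years: int) -> float:
    """Cheap per-question step: combine cached insights into a probability."""
    insights = batch_insights(topic)
    p = insights["base_rate"] * (insights["trend_multiplier"] ** horizon_years)
    return min(p, 0.99)

print(forecast_on_demand("ai-compute-governance", horizon_years=2))
```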

A quickly-written potential future, focused on the epistemic considerations:

It's 2028.

MAGA types typically use DeepReasoning-MAGA. The far left typically uses DeepReasoning-JUSTICE. People in the middle often use DeepReasoning-INTELLECT, which has the biases of a somewhat middle-of-the-road voter.

Some niche technical academics (the same ones who currently favor Bayesian statistics) and hedge funds use DeepReasoning-UNBIASED, or DRU for short. DRU is known to have higher accuracy than the other models, but gets a lot of public hate for having controversial ... (read more)

How should AI alignment and autonomy preservation intersect in practice?

We know that AI alignment research has made significant progress in embedding internal constraints that prevent models from manipulating, deceiving, or coercing users (to the extent that they don't). However, internal alignment mechanisms alone don't necessarily give users meaningful control over AI's influence on their decision-making. That is a mechanistic problem in its own right, but…

This raises a question: Should future AI systems be designed to not only align with human values but als... (read more)

Quick list of some ideas I'm excited about, broadly around epistemics/strategy/AI.

1. I think AI auditors / overseers of critical organizations (AI efforts, policy groups, company management) are really great and perhaps crucial to get right, but would be difficult to do well.

2. AI strategists/tools telling/helping us broadly what to do about AI safety seems pretty safe.

3. In terms of commercial products, there’s been some neat/scary military companies in the last few years (Palantir, Anduril). I’d be really interested if there could be some companies to au... (read more)

Well done to the Shrimp Welfare Project for contributing to Waitrose's pledge to stun 100% of their warm water shrimps by the end of 2026, and for getting media coverage in a prominent newspaper (this article is currently on the front page of the website): Waitrose to stop selling suffocated farmed prawns, as campaigners say they feel pain

This is so great, congratulations team!! <3

One thing this makes me curious about: how good is the existing evidence base that electric stunning is better for the welfare of the shrimp, and how much better is it? I didn't realize SWP was thinking of using the corporate campaign playbook to scale up stunning, so it makes me curious how robustly good this intervention is, and I couldn't quickly figure this out from the Forum / website. @Aaron Boddy🔸 is there a public thing I can read by any chance? No pressure!

FWIW, "how good is stunning for welfare" ... (read more)

FYI rolling applications are back on for the Biosecurity Forecasting Group! We have started the pilot and are very excited about our first cohort! Don't want to apply but have ideas for questions? Submit them here (anyone can submit!).

A reflection on the posts I have written in the last few months, elaborating on my views

In a series of recent posts, I have sought to challenge the conventional view among longtermists that prioritizes the empowerment or preservation of the human species as the chief goal of AI policy. It is my opinion that this view is likely rooted in a bias that automatically favors human beings over artificial entities—thereby sidelining the idea that future AIs might create equal or greater moral value than humans—and treating this alternative perspective with unwarra... (read more)


Thanks, that is very helpful to me in clarifying your position. 

7
quinn
I distinguish believing that good successor criteria are brittle from speciesism. I think antispeciesism does not oblige me to accept literally any successor. I do feel icky coalitioning with outright speciesists (who reject the possibility of a good successor in principle), but I think my goals and all of generalized flourishing benefit a lot from those coalitions, so I grin and bear it.
5
Matthew_Barnett
In your comment, you raise a broad but important question about whether, even if we reject the idea that human survival must take absolute priority over other concerns, we might still want to pause AI development in order to “set up” future AIs more thoughtfully. You list a range of traits—things like pro-social instincts, better coordination infrastructures, or other design features that might improve cooperation—that, in principle, we could try to incorporate if we took more time. I understand and agree with the motivation behind this: you are asking whether there is a prudential reason, from a more inclusive moral standpoint, to pause in order to ensure that whichever civilization emerges—whether dominated by humans, AIs, or both at once—turns out as well as possible in ways that matter impartially, rather than focusing narrowly on preserving human dominance.

Having summarized your perspective, I want to clarify exactly where I differ from your view, and why. First, let me restate the perspective I defended in my previous post on delaying AI. In that post, I was critiquing what I see as the “standard case” for pausing AI, as I perceive it being made in many EA circles. This standard case for pausing AI often treats preventing human extinction as so paramount that any delay of AI progress, no matter how costly to currently living people, becomes justified if it incrementally lowers the probability of humans losing control.

Under this argument, the reason we want to pause is that time spent on “alignment research” can be used to ensure that future AIs share human goals, or at least do not threaten the human species. My critique had two components: first, I argued that pausing AI is very costly to people who currently exist, since it delays medical and technological breakthroughs that could be made by advanced AIs, thereby forcing a lot of people to die who could have otherwise been saved. Second, and more fundamentally, I argued that this "standard case" seems to r

I wrote a quick take on lesswrong about evals. Funders seem enchanted with them, and I'm curious about why that is. 

https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=HzDD3Lvh6C9zdqpMh 

Adrian Tchaikovsky, the science fiction writer, is a master at crafting bleak, hellish future worlds. In Service Model, he has truly outdone himself, conjuring an absurd realm where human societies have crumbled, and humanity teeters on the brink of extinction.

Now, that scenario isn't entirely novel. But what renders the book both tear-inducing and hilarious is the presence in this world of numerous sophisticated robots, designed to eliminate the slightest discomfort from human existence. Yet they adhere so strictly to their programmed rules that it onl... (read more)

How might EA-aligned orgs in global health and wellness need to adapt calculations of cost-effective interventions given the slash-and-burn campaign currently underway against US foreign aid? Has anyone tried gaming out what different scenarios of funding loss look like (e.g., one where most of the destruction is reversed by the courts, or where that reversal is partial, or where nothing happens and the days are numbered for things like PEPFAR)? Since US foreign aid is so varied, I imagine that's a tall order, but I've been thinking about this quite a bit lately!
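
To make "gaming out scenarios" concrete, here is a minimal sketch of the kind of calculation the question implies: scenario probabilities times funding retained, giving an expected funding level to feed back into cost-effectiveness estimates. The scenarios, probabilities, and numbers are placeholders, not estimates:

```python
# Sketch of scenario-weighted expected funding for a US-aid-dependent program.
# Probabilities and funding fractions are illustrative placeholders, not estimates.
scenarios = {
    "courts_reverse_most_cuts": {"probability": 0.3, "funding_retained": 0.9},
    "partial_reversal":         {"probability": 0.4, "funding_retained": 0.5},
    "cuts_stand":               {"probability": 0.3, "funding_retained": 0.1},
}

baseline_budget_usd = 100_000_000  # placeholder annual budget

expected_fraction = sum(
    s["probability"] * s["funding_retained"] for s in scenarios.values()
)
expected_budget = baseline_budget_usd * expected_fraction
print(f"Expected funding retained: {expected_fraction:.0%} (~${expected_budget:,.0f})")
```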

Although it's an interesting question, I'm not sure that gaming out scenarios is that useful in many cases. I think putting energy into responding to changes in the funding reality as they appear may be more important. There are just so many scenarios possible in the next few months.

PEPFAR might be the exception to that, as if it gets permanently cut then there just has to be a prompt and thought-through response. Other programs might be able to be responded to on the fly, but if the US does pull out of HIV funding there needs to be a contingency plan in ... (read more)
