Fair! Sorry for the slow reply; I missed the comment notification earlier.

I could have been clearer in what I was trying to point at with my comment. I didn't mean to fault you for not meeting an (unmade) challenge to list all your assumptions--I agree that would be unreasonable.

Instead, I meant to suggest an object-level point: that the argument you mentioned seems pretty reliant on a controversial discontinuity assumption--enough that, without it, the argument (even combined with other, largely uncontroversial assumptions) doesn't make it "quite easy to reach extremely dire forecasts about AGI." (Though I was thinking more about 90%+ forecasts.)

(That assumption--i.e. the main claims in the 3rd paragraph of your response--seems much more controversial/non-obvious among people in AI safety than the other assumptions you mention, as evidenced by researchers criticizing it and researchers doing prosaic AI safety work.)

Thanks for doing this! I think the most striking part of what you found is the donations to representatives who sit on the subcommittee that oversees the CFTC (i.e. the House Agriculture Subcommittee on Commodity Exchanges, Energy, and Credit), so I wanted to look into this more. From a bit of Googling:

  • It looks like you're right that Rep. Delgado sits on (and is even the Chair of) this subcommittee.
  • On the other hand, it looks like Rep. Spanberger doesn't actually sit on this subcommittee, and hasn't done so since 2021. In other words, she hasn't been on this subcommittee since the Protect our Future PAC was founded (which was early 2022).

I didn't spend much time on this, so I very possibly missed or misinterpreted things.

Nitpick: doesn't the argument you made also assume that there'll be a big discontinuity right before AGI? That seems necessary for the premise about "extremely novel software" (rather than "incrementally novel software") to hold.

why they would want to suggest to these bunch of concerned EAs how to go about trying to push for the ideas that Buck disagrees with better

My guess was that Buck was hopeful that, if the post authors focused their criticisms on the cruxes of disagreement, that would help reveal flaws in his and others' thinking ("inasmuch as I'm wrong it would be great if you proved me wrong"). In other words, I'd guess he was like, "I think you're probably mistaken, but in case you're right, it'd be in both of our interests for you to convince me of that, and you'll only be able to do that if you take a different approach."

[Edit: This is less clear to me now - see Gideon's reply pointing out a more recent comment.]

I interpreted Buck's comment differently. It reads to me less like "playing the man" and more like "telling the man that he might be better off playing a different game." If someone doesn't have the time to write an in-depth response to a post that takes 84 minutes to read, but does take the time to (I'd guess largely correctly) suggest to the authors how they might better accomplish their own goals, that seems to me like a helpful form of engagement.

This seems helpful, though I'd guess another team that's in more frequent contact with AI safety orgs could do this at significantly lower cost, since they'd be starting off with more of the needed info and contacts.

Thanks for sharing! The speakers on the podcast might not have had the time to make detailed arguments, but I find their arguments here pretty uncompelling. For example:

  • They claim that "many belief systems they have a way of segregating and limiting the impact of the most hardcore believers." But (at least from skimming) their evidence for this seems to be just the example of monastic traditions.
  • A speaker claims that "the leaders who take ideas seriously don't necessarily have a great track record." But they just provide a few cherry-picked (and dubious) examples, which is a pretty unreliable way of assessing a track record.
    • Counting Putin as a "man of ideas" because he made a speech with lots of historical references--while ignoring the many better leaders who've also given history-laden speeches--looks like especially egregious cherry-picking.

So although their conclusions are plausible, I think these arguments don't pass a strong enough initial sanity check to be worth much of our attention.

Thanks for writing this! I want to push back a bit. There's a big middle ground between (i) naive, unconstrained welfare maximization and (ii) putting little to no emphasis on how much good one does. I think "do good, using reasoning" is somewhat too quick to jump to (ii) while passing over intermediate options, like:

  • "Do lots of good, using reasoning" (roughly as in this post)
  • "be a good citizen, while ambitiously working towards a better world" (as in this post)
  • "maximize good under constraints or with constraints incorporated into the notion of goodness"

There are lots of people out there (e.g. many researchers, policy professionals, and entrepreneurs) who do good using reasoning; this community's concern for scope seems rare, important, and totally compatible with integrity. Given the large amounts of good you've done, I'd guess you're sympathetic to considering scope. Still, concern for scope seems important enough to include in the tagline.

Also, a nitpick:

now it's obvious that the idea of maximizing goodness doesn't work in practice--we have a really clear example of where trying to do that fails (SBF if you attribute pure motives to him); as well as a lot of recent quotes from EA luminaries saying that you shouldn't do that

This feels a bit fast: the fact that the example needs a (dubious) "if" clause means it isn't a really clear example, and maximizing goodness is compatible with constraints if we incorporate constraints into our notion of goodness (indeed, any behavior can be thought of as maximizing some notion of goodness).

(Made minor edits.)

Readers might be interested in the comments over here, especially Daniel K.'s comment:

The only viable counterargument I've heard to this is that the government can be competent at X while being incompetent at Y, even if X is objectively harder than Y. The government is weird like that. It's big and diverse and crazy. Thus, the conclusion goes, we should still have some hope (10%?) that we can get the government to behave sanely on the topic of AGI risk, especially with warning shots, despite the evidence of it behaving incompetently on the topic of bio risk despite warning shots.

Or, to put it more succinctly: The COVID situation is just one example; it's not overwhelmingly strong evidence.
