Greg_Colbourn

5348 karma
Interests:
Slowing down AI

Bio


Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org).

Comments (1017)

(This was 1 Bitcoin, btw. Austin helped me with the process of routing it to Manifund, allowing me to donate ~32% more by avoiding capital gains tax in the UK.)
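For the curious, the ~32% figure falls out of simple arithmetic. Here's a minimal sketch, assuming that donating the asset in-kind avoids capital gains tax entirely, a 24% UK CGT rate, and a negligible cost basis (the rate and the basis are assumptions for illustration, not stated in the comment above):

```python
# Sketch: how much more you can donate by giving an appreciated asset
# in-kind, rather than selling it and donating the after-tax proceeds.
# Assumptions (not from the comment above): 24% UK CGT rate, ~zero cost basis.

def extra_donated_fraction(cgt_rate: float, basis_fraction: float = 0.0) -> float:
    """Fraction more donated by giving in-kind vs. selling first."""
    # Selling leaves 1 minus the tax on the gain; donating in-kind passes on 1.
    after_tax = 1.0 - cgt_rate * (1.0 - basis_fraction)
    return 1.0 / after_tax - 1.0

print(f"{extra_donated_fraction(0.24):.1%}")  # 31.6% -- i.e. ~32% more
```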

I've been impressed with both Holly and PauseAI US, and Joep and PauseAI Global, and intend to donate a similar amount to PauseAI Global.

[crossposted from Manifund] 
donated $90,000

It's more important than ever that PauseAI is funded. Pretty much the only way we survive the next 5-10 years is for efforts like this to succeed in getting a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with 7 figures or more of net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control, when it seems pretty clear that those approaches aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.

There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.

It's no secret that AI Safety / EA is heavily invested in AI. It is kind of crazy that this is the case, though. As Scott Alexander said:

Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety” - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.

This is how AI safety works now.

Going to flag that a big chunk of the major funders and influencers in the EA/Longtermist community have personal investments in AGI companies, so this could be a factor in the lack of funding for work aimed at slowing down AGI development. I think that, as a community, we should be divesting (and investing in PauseAI instead!)

  1. Species aren't lazy (those who are - or would be - are outcompeted by those who aren't).
  2. The pets scenario is basically an existential catastrophe by other means (who wants to be a pet that is to a human what a pug is to a wolf?). And obviously so is the torture/dystopia one (i.e. not an "OK outcome"). What mechanism would allow us to get alignment right on the first try?
  3. This seems like a very unstable equilibrium. All that is needed is for one of the experts to be as good as Ilya Sutskever at AI engineering, to get past that bottleneck in short order (via speed and millions of instances run at once) and foom to ASI.
  4. It would also need to stop all other AGIs that are less cautious, and be ahead of them when self-improvement becomes possible. That seems unlikely given current race dynamics. And even if it does happen, unless the AGI is very well aligned to humanity it still spells doom for us, given its speed advantage and its different substrate needs (i.e. its ideal operating environment isn't survivable for us).

o1 is further evidence that we are living in a short-timelines world and that p(doom) is high: a global stop to frontier AI development, until there is consensus on x-safety, is our only reasonable hope.

One high-leverage thing people could do right now is encourage letter-writing to California Governor Newsom requesting that he sign SB 1047. This would set a much-needed precedent, enabling US federal legislation and then global regulation.
