I think this kind of discussion is important, and I don't want to discourage it, but I do think these discussions are more productive when they're had in a calm manner. I appreciate this can be difficult with emotive topics, but it can be hard to change somebody's mind if they could interpret your tone as attacking them.
In summary: I think it would be more productive if the discussion could be less hostile going forwards.
We've banned the user denyeverywhere for a month for promoting violence on the forum.
As a reminder, bans affect the user, not the account(s).
If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.
Speaking as a moderator, this comment seems to break a number of our norms. It isn't on-topic, and it's an unnecessary personal attack. I'd like to see better on the forum.
I think this comment is flamebait, and broke Forum norms. Examples below:
"I bet somewhere there's a small group of rich elites who actually bet on the gang fights in Haiti, have their own private app for it, and I bet some of them are also in an EA circle"
"forget I mentioned the word 'neocolonialism' because you'll be just like every other woke white person here and take offense that I said something true, you can go spend more time debating gender."
I’d like it if the discussion could be more civil going forwards.
Epistemic status: just a 5-minute collation of some useful sources, with a little explanatory text off the top of my head.
Stampy's answers to "Why is AI dangerous?" and "Why might we expect a superintelligence to be hostile by default?" seem pretty good to me.
Alignment seems hard. Humans value very complex things, which seem both A) difficult to specify to an AI and B) unlikely for an AI to preserve by default.
A number of things seem to follow pretty directly from the idea of 'creating an agent which is much more intelligent than humans':
When you combine these things, you get an expectation that the default outcome of unaligned AGI is very bad for humans -- and an idea of why AI alignment may be difficult.
Humans have a pretty bad track record of refraining from using massively destructive technology. It seems at least plausible that COVID-19 was a lab leak (and its plausibility alone is enough for this argument). The other key example to me is the nuclear bomb.
What's important is that both of these technologies are relatively difficult to get access to. At least right now, it's relatively easy to get access to state-of-the-art AI.
Why is this important? It relates to the unilateralist's curse. If we think that AI has the potential to be very harmful (a claim that deserves its own debate), then the more people who have access to it, the more likely that harm becomes. Given our track record with harder-to-access technologies, it seems likely from this frame that accelerationism will lead to non-general artificial intelligence being used by humans to do massive harm.
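To make the unilateralist's curse point a bit more concrete, here's a minimal sketch (mine, not from the sources above) assuming each actor with access independently misuses the capability with some small fixed probability; the per-actor probability and actor counts are purely illustrative.

```python
# Illustrative sketch, not from the original post: if each independent actor
# misuses a dangerous capability with a small probability p, the chance that
# at least one actor does so grows quickly with the number of actors n.
def p_any_misuse(p_per_actor: float, n_actors: int) -> float:
    """Probability that at least one of n independent actors misuses the technology."""
    return 1 - (1 - p_per_actor) ** n_actors

for n in (10, 100, 10_000):
    print(n, round(p_any_misuse(0.01, n), 3))
# 10 -> 0.096, 100 -> 0.634, 10_000 -> ~1.0
```

The exact numbers don't matter; the point is that widening access pushes the probability of at least one harmful use towards certainty, which is why the comparison with hard-to-access technologies like nuclear weapons is worrying.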
TLDR: Diverse EA contractor looking for a part-time operations or comms role, remote.
Skills & background: I currently run a local EA group, funded by a grant from OpenPhil, as its only paid member. The role includes tasks like organising coworking space, checking our compliance with local charity regulations, running events, and gathering feedback from our members.
In the past, I've done short-term projects and contractor work for EA organisations. One project I'm particularly proud of is the Tools For Collaborative Truth-Seeking sequence, for which we received feedback that a number of the tools are now part of the regular workflow at some large EA organisations.
Finally, I also currently work as a moderator for the forum, which involves communicating carefully about sensitive topics on a regular basis.
Location/remote: I'm based in Edinburgh, Scotland, and a remote role would be ideal.
Availability & type of work: I'm in the GMT timezone, but I'm happy to work outside those hours (though regularly working e.g. PDT hours would be a challenge). My ideal role would be around 20h/week, but I'm open to roles with more or fewer hours.
Resume/CV/LinkedIn: LinkedIn
Email/contact: DM me on the forum.
Other notes: I'm open to any operations or comms/outreach role, but have a preference for AI safety or pandemic prevention organisations.