Holly_Elmore

5773 karma · Joined

Sequences (2)

Improving rodent welfare by reducing rodenticide use
The Rodenticide Reduction Sequence

Comments (286)

All the near-term or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, i.e. people just don't want their lives radically transformed, even if the people doing the transforming think that's irrational. Fear and distrust of AI corporations.

These would all be considered wrong reasons in EA, but PauseAI welcomes all.

Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.

It's not EA because it's for anyone who wants to pause AI for any reason, whether or not they share all the EA principles. It's just about pausing AI, and it's a coalition.

I personally still identify with EA principles, and I came to my work at PauseAI through them, but I increasingly dislike the community and find it a drag on my work. That, combined with PauseAI being open to all comers, makes me want distance from the community and a healthy separation between PauseAI and EA. More and more I think the cost of remaining engaged with EA is too high, given how demanding EAs are and how little they contribute to what I'm doing.

I strongly relate to the philosophy here and I’m thrilled CEA is going to continue to be devoted to EA principles. EA’s principles will always be dear to me and a big part of my morality, but I’ve felt increasingly alienated from the community as it seemed to become only about technical AI Safety. I ended up going in my own direction (PauseAI is not an EA org) largely because the community was so reluctant to consider new approaches to AI Safety and Open Phil refused to fund it, a development that shocked and saddened me. I hope CEA will show strong leadership to keep the spirit of constant reevaluation of how we can do good alive. Imo having a preference as a community for only knowledge work and only associating with elite circles, as happened with technical AI Safety, is antithetical to EA scouty impact-focused thinking.

Huh, it shows me that it's available to anyone with the link. Here it is again in case that helps: https://docs.google.com/document/d/1HiYMG2oeZO8krcCMEHlfAtHGTWuVDjUZQaPU9HMqb_w/edit?usp=sharing

Haven't always loved the SummaryBot summaries, but this one is great.

Agree, and my experience was also free of racism, although I only went to one session (my debate with Brian Chau) and otherwise had free-mingling conversations. It's possible the racist people just didn't gravitate to me.

I would never have debated Brian Chau for a podcast or video, because I don't think it's worthwhile / I don't want to platform his org and its views more broadly, but Manifest was a great space where people who are sympathetic to his views are actually open to hearing PauseAI's case in response. I think conferences like that, with a strong emphasis on free speech and free exchange, are valuable.

Thank you :) (I feel I should clarify I'm lacto-vegetarian now, at first as the result of a moral trade, but now that that's fallen apart I'm not sure it's worth it to go back to full vegan.)

I agree! The focus on alignment is contingent on (now obsolete) historical thinking about the issue, and it's time to update. The alignment problem is harder than we thought, AGI is closer at hand than we thought, no one was taking seriously how undemocratic pivotal-act thinking was even if it had been possible for MIRI to solve the alignment problem by themselves, etc. Now that the problem is nearer, it's clearer to us and clearer to everyone else, so it's more possible to get government solutions implemented that both prevent AI danger and give us more time to work on alignment (if that is possible), rather than pursuing alignment as the only way to head off AI danger.
