If you think the expected value of humanity is negative regardless of anything you could do or change, then of course you should become the existential risk[1].
But actually estimating whether humanity will be net-negative requires knowing what you value, and you're probably fuzzy about that. We so far lack the technology to extract terminal goals from people, which is something you'd want before taking any irrevocable action.
Future-you might also resent past-you for publicly doubting the merits of humanity, since I reckon you'd want to be a secret existential risk.
Is there a central or topic-specific bounty board?
(I'm personally looking for AI interpretability tasks.) I know opinions on what's important are scattered, but I'd like a central place where they're collected and prioritized in some way, e.g. by:
- bounty size
- (expert) authority
- popularity (votes or prediction market)
I know there's a job board, but I'd like a board focused on concrete problems and content instead.
Let's put our mana where our text is with regard to AISafetyMemes' factual accuracy.
I'm about to put some effort into fact-checking a randomly sampled tweet from the account, and I'd also like to see whether our community can predict the outcome:
https://manifold.markets/Jono3h/are-aisafetymemes-tweets-factually
This won't capture every aspect of their communication, but it covers the most important one, and the one that, to me, is central to the debate over whether they should continue or stop.