As for existential risk from AI takeover, I don't think having a self-sustaining civilization on Mars would help much.
If an AI has completed a takeover of Earth and killed all humans on Earth, taking over Mars too does not sound that hard, especially since the Martian civilization is likely to be quite fragile. (There might be some edge cases where you solve the AI control problem well enough to guarantee that all advanced AIs leave Mars alone, but not well enough to guarantee that they leave Australia alone; I think scenarios like these are extremely unlikely.)
For other existential risks, a Mars colony might be useful in principle, but it is very difficult in practice: building a self-sustaining city on Mars would take a lot of time and resources. On the scale of centuries, though, it seems like a viable option.
> At the same time though I don't think you mean to endorse 1).
I have read or skimmed some of his posts, and my sense is that he does endorse 1). But at the same time, he says:
> critics seem to frequently conflate my arguments with other, simpler positions that can be more easily dismissed.
so maybe this is one of those cases and I should be more careful.
A recent comment says that the restriction has been lifted and that the website will be updated next week: https://forum.effectivealtruism.org/posts/aBkALPSXBRjnjWLnP/announcing-the-q1-2025-long-term-future-fund-grant-round?commentId=FFFMBth8v7WBqYFzP
> the AI won't ever have more [...] capabilities to hack and destroy infrastructure than Russia, China or the US itself.
Having better hacking capabilities than China seems like a low bar for a superhuman AGI. The AGI would only need to be better at writing and understanding code than a small group of talented humans, and to have access to some servers. That sounds easy if you accept the premise of smarter-than-human AGI.
> it's the arguments you least agree with that you should extend the most charity to
I strongly disagree with flat earthers, but I don't think that I should extend a lot of charity to arguments for a flat earth.
Also, on a quick skim, I could not find where this is argued for in the linked "I Can Tolerate Anything Except The Outgroup".
Caveat: I consider these minor issues; I hope I don't come across as too accusatory.
Interesting, why's that? :)
It seems that the reason for cross-posting was that you personally found it interesting. If you use the EA Forum Team account, it sounds a bit like an "official" endorsement and makes the Forum Team less neutral.
Even if you use another account name (e.g. "selected linkposts") that is run by the Forum Team, I think there should be some explanation of how those linkposts are selected; otherwise it seems like some posts are arbitrarily privileged over others.
A "LinkpostBot" account would be good if the cross-posting is automated (e.g. every ACX article who mentions Effective Altruism).
> I also personally feel kinda weird getting karma for just linking to someone else's work
I think it's fine to gain karma by virtue of linkposting and being an active forum member. I will not be bothered by it, and I think you should not worry about it (although I can understand that it might feel uncomfortable to you). Other people are also allowed to linkpost.
> Personally when I see a linkpost, I generally assume that the author here is also the original author
I think starting the title with [linkpost] fixes that issue.
I think the comment already addresses that here: