[Epistemic status: unsure how much I believe each response; I'm mostly pushing back against the claim that "no well informed person trying to allocate a marginal dollar most ethically would conclude that GiveWell is the best option."]
I know Tarsney is a utilitarian, but I'm just throwing him out there as a name that can change.
I think this is confused. WWOTF is clearly both aiming to persuade and grounded in academic analytic-philosophical rigour. Many philosophers write books that are both, e.g. Kate Manne's Down Girl or Amia Srinivasan's The Right to Sex. I don't think a purely persuasive book would have so many citations.
[edit: reworded the last sentence to make my point more explicit]
I think this worry is better aimed at the EA community writ large for being overly deferential than at OP for holding a contest to elicit critiques of its views and then following through on it with its own admittedly subjective criteria. OP themselves note in the post that people shouldn't take the results to reflect OP's institutional tastes.
[edit: fixed link for Stuart Russell's Human Compatible. It initially pointed to Brian Christian's The Alignment Problem.]
I think these polls would benefit from a clause along the lines of "On balance, EAs should X", because a lot of the discourse collapses into examples and corner cases about when the behaviour is acceptable (e.g. the discussion over illegal actions ending up being about melatonin). I think it's important to have a conversation centred on where the probability mass of these phenomena actually is.
I think this is imprecise. In my mind there are two categories:
I think measuring only along the axis of how tractable it is to gain allies asks the wrong question; the real question is what the fruits of collaboration would be.
To be clear, I didn't downvote it, because I didn't read it. I skimmed it, looking for the objectionable parts in order to steelman what I imagine the downvoter downvoted it for. The most egregious part is the failure to see that zero-fraud methods have costs (taken literally, zero fraud means war-torn areas get zero aid; the bar is set too high), yet Vee just staunchly reiterates the claim that we need zero fraud.
Vee's posts read to me like ChatGPT spambot output, and I have downvoted them in the past for that reason. A key problem with the GiveDirectly post, one that would make me downvote it if I read it, is that it doesn't explain anything the linked post doesn't already say: it takes the premise/title that GiveDirectly lost $900,000 and then does nothing to analyse the trade-offs of any of its "fixes". Moreover, both the linked post and the commenters reason through and weigh up the trade-offs, but Vee just doubles down. I don't think I would add anything to their criticisms, so I would just downvote and move on.
I think this is already done. The application asks whether you are receiving OpenPhil funding for said project or have done so in the past. It also asks if you've applied. I think people also generally disclose, because the payoff of not disclosing is pretty low compared to the costs. EA is a pretty small community; I don't think non-disclosure ever helps.
https://www.alexirpan.com/2024/08/06/switching-to-ai-safety.html
This reaffirms my belief that it's more important to look at the cruxes of existing ML researchers than at the internal cruxes among EAs on AI safety.