If you don't want someone to do something, it makes sense not to offer a large amount of money. For the second case, I'm a bit confused by this statement:
"the uncertainty of what the people would do was the key cause in giving a relatively small amount of money"
What do you mean here? That you were uncertain about which path was best?
Very interesting, valuable, and thorough overview!
I notice you mentioned providing grants of 30k and 16k that were or are likely to be turned down. Do you think this might have been due to the size of the grants? Might funding an order of magnitude higher have changed the recipients' preferences?
Given the amount of funding in longtermist EA, if a project is valuable, I wonder whether grants at that higher level might be warranted. Obviously this project only had 300k to allocate, so grants that large might not have been practical here. However, from the perspective of longtermist EA funding as a whole, routinely making grants of that size would be practical.
I work in Democratic data analytics in the US, and I agree that there's potentially a lot of value in EAs getting involved on the partisan side, rather than just the civil service side, to advance EA causes. If anyone is interested in becoming more involved in US politics, I'd love to talk to them. You can shoot me a message.
Independent of the desirability of spending resources on Andrew Yang's campaign, it's worth mentioning that this overstates the gains to Steyer. Steyer is running ads with little competition (which makes ad effects stronger), but those gains are likely temporary because decay effects are large: voters will forget the ads and see competing messaging over time. Additionally, Morning Consult shows higher support for Steyer than all other pollsters do; the polling average for him in the early states is considerably less favorable.
"My view is that - for the most part - people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it."
This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because it encourages caring about collective success rather than individual success. A positive, trusting community helps achieve those collective outcomes. Under a universalist morality, it's harder for defection to make sense.
Broadly, I think worries that utilitarianism/consequentialism will lead to negative outcomes are often self-defeating, because utilitarians/consequentialists can see those negative outcomes themselves. If you went around killing people for their organs, the consequences would obviously be negative; the same goes for going around lying or being an asshole to people all the time.