I think there are more efficient paths to working on AI Safety than a PhD. This 80,000 Hours podcast episode has the story of someone who, if I remember correctly, decided to skip a PhD and start working directly: https://80000hours.org/podcast/episodes/olsson-and-ziegler-ml-engineering-and-safety/
Is there some value in dividing your donations among multiple organizations to lessen the risk that any one organization turns out to be less impactful than you thought? To me, it seems analogous to how investors diversify their money across many companies to limit the loss they'd suffer if any one company ended up losing money.
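To make the analogy concrete, here is a minimal simulation sketch (all numbers are made up for illustration): two charities each have an uncertain impact per dollar, and we compare giving everything to one charity versus splitting 50/50. Under these toy assumptions the expected impact is the same either way, but the split has lower variance, which is exactly the diversification effect investors rely on.

```python
import random

random.seed(0)

def simulate(split, n_trials=100_000, budget=1000):
    """Donate `budget`, with fraction `split` to charity A and the
    rest to charity B. Each charity's impact per dollar is uncertain:
    independently, a 50% chance of 2 units/dollar and a 50% chance of 0
    (hypothetical numbers chosen only to illustrate the point)."""
    totals = []
    for _ in range(n_trials):
        a = random.choice([0, 2])  # charity A's realized impact per dollar
        b = random.choice([0, 2])  # charity B's realized impact per dollar
        totals.append(split * budget * a + (1 - split) * budget * b)
    mean = sum(totals) / n_trials
    var = sum((t - mean) ** 2 for t in totals) / n_trials
    return mean, var

mean_all, var_all = simulate(split=1.0)    # everything to one charity
mean_half, var_half = simulate(split=0.5)  # 50/50 split
# Both strategies have the same expected impact, but the 50/50 split
# has roughly half the variance.
```

Of course, a risk-neutral donor who only cares about expected impact might still prefer to concentrate on whichever charity looks best, so the case for splitting rests on caring about the variance, not just the mean.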
A related thought: if you have some level of moral uncertainty (e.g. about the value of human lives vs. animal lives, or lives saved vs. lives improved), is it better to donate to multiple charities that together carry out morally diverse interventions (e.g. animal welfare + global health; increasing consumption of the poor + saving lives)? Spreading out donations in this way would reduce the risk of putting all your money toward an intervention whose underlying moral worldview you later come to regard as less accurate than the alternatives.