Hi, I'm an 18-year-old starting college in a week, studying computer engineering and mathematics. Since I have a technical interest, and AGI has a much higher probability of ending humanity this century (1/10, I think) than other causes I would rather work on (e.g., biorisk at 1/10,000), would the utility-positive thing to do be to force myself to get an ML-alignment-focused PhD and become a researcher?
I am at a mid-tier university. I think I could force myself to do AI alignment, since I have a little interest in it, but not as much as the average EA, so I wouldn't find it as engaging. I also have an interest in starting a for-profit company, which most likely couldn't happen in AGI alignment. I would rather work on a hardware/software combination for virus detection (biorisk), climate change, products for the developing world, other current problems, or problems that will be discovered in the future.
Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at and enjoy in order to pursue it?
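For concreteness, here is a rough back-of-envelope sketch of the expected-value comparison I'm gesturing at. It assumes a toy model where a career's expected impact is roughly P(catastrophe) × the fraction of the risk one career could plausibly reduce × a personal-fit multiplier; every number below is a made-up placeholder, not a claim.

```python
# Toy expected-value comparison. All numbers are hypothetical placeholders;
# the point is the structure of the comparison, not the outputs.

def expected_impact(p_catastrophe: float, tractability: float, personal_fit: float) -> float:
    """Expected impact of one career on an existential risk.

    p_catastrophe: chance the risk ends humanity this century
    tractability:  fraction of the risk one strong career might shave off
    personal_fit:  multiplier for how productive/motivated I'd be (0 to 1+)
    """
    return p_catastrophe * tractability * personal_fit

# Hypothetical inputs matching the probabilities in the question,
# with a guessed fit penalty for forcing myself into alignment:
alignment = expected_impact(p_catastrophe=1 / 10, tractability=1e-6, personal_fit=0.4)
biorisk = expected_impact(p_catastrophe=1 / 10_000, tractability=1e-6, personal_fit=1.0)

print(f"alignment: {alignment:.1e}, biorisk: {biorisk:.1e}")
print(f"ratio: {alignment / biorisk:.0f}x in favor of alignment")
```

With these made-up numbers, the 1,000x probability gap swamps even a large fit penalty (the ratio comes out around 400x), but the conclusion is very sensitive to the tractability and personal-fit guesses, which is exactly what I'm unsure about.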
Edit: a comment I made confused some people into thinking I was posing a false dichotomy between "pursuing my passion" and doing EA-style alignment work. I've removed that comment.
I wasn't proposing "follow your passion" as the alternative; I do think some combination of personal interest and external importance will probably yield the highest utility for a given personality. I just wanted to make sure AGI alignment wasn't so overwhelmingly important that I would have to practically throw away my own feelings for the sake of humanity's existence. I have also read a recent post questioning the basis of the "recommended careers" in EA and 80,000 Hours (post). Thanks for the post!