
I feel like none of my career path choices are as good as I want them to be: software engineer, AI researcher, AI alignment researcher, or going all-in with PauseAI.

I am 19 and a second-year university student studying computer science. I am very bought into AI x-risk/s-risk and want to help prevent these risks somehow.

Path 1: Software Engineer: 

I will have a bachelor's degree in 2026 and can start earning to give to x-risk orgs.

  • Pro: If the singularity is near (<10 years, which I believe), I would be able to have an impact in time.
  • Pro: Financially prudent if there won't be jobs in 10 years.
  • Con: Programming will be automated before my other career path choices. Even by 2026, the job market might be partially or completely automated.
  • Con: Not as exciting as doing something in AI or AI safety

Path 2: AI researcher: work at a top AI lab, likely doing capabilities research

  • Pro: Last job to be automated
  • Pro: Chance of extremely high pay
  • Con: Against my values since I would likely accelerate doom scenarios.

Path 3: AI alignment researcher at an AI lab or non-profit

  • Pro: Fits my values
  • Pro: Last job to be automated
  • Con: I am skeptical about the impact of alignment research.
  • Con: Is there money in alignment research?
  • Con: Working at a top AI lab would still likely be against my values.

Path 4: Go all-in with PauseAI and do grunt-work outreach

  • Pro: Given <10 year timelines, this could have the most impact
  • Con: My parents think I'm crazy, and I wouldn't make money.

Path 5: Other

  • Any other ideas?

Comments

Another option: try to get a SWE internship now. Then, depending on how it goes, you might want to consider dropping out.

Some of my best SWE colleagues dropped out because they had full-time jobs. It probably accelerated their careers by one or two years.

I'm currently sitting at a desk at a SWE unpaid internship LOL.

> Some of my best SWE colleagues dropped out because they had full-time jobs. It probably accelerated their careers by one or two years.

I don't think I currently have the skills to start getting paid for SWE work sadly.

> I'm currently sitting at a desk at a SWE unpaid internship LOL.

Nice!

> I don't think I currently have the skills to start getting paid for SWE work sadly.

Gotcha. Probably combining your studies with internships is the best option for now.

> Con: Programming will be automated before my other career path choices.

Are you confident about this claim?

Thanks for your responses, they are very insightful.

As AI operations scale up, it feels like AI/ML engineers will become more valuable and mid-sized SWE jobs will be swallowed by LLMs and those building them.

I'm very curious about your opinion on this. 

An LLM capable of automating "mid-sized SWE jobs" would probably also be able to accelerate AI research and carry out cyberattacks. My guess is that AI labs would not release such a powerful model; they would just use it internally to reach ASI.

> Con: Not as exciting as doing something in AI or AI safety

There's a lot of software engineering work around AI. https://x.com/gdb/status/1729893902814192096

This is something I have not considered, thank you.

I assume ML skills are in shorter supply, however?

> I assume ML skills are in shorter supply, however?

I think there's enough demand for both.

You wrote that governance is more important than technical research. Have you considered technical work that supports governance? The AI Safety Fundamentals course has a week on this.

In any case, working in AI or AI safety would increase your credibility for any activism that you decide to engage in.
