Yonatan Cale

@ Effective Developers
4766 karma · Working (6-15 years) · Seeking work · Tel Aviv-Yafo, Israel

Bio

Anonymous Feedback Form


I'm happy to help

  • People running EA-aligned software projects (about all the normal problems)
  • EA software engineers (about... all the normal problems)

Link to my coaching post.

I'd be happy for help from

  • People who think about global EA priorities:
    • Rewriting arxiv.org: Is this a high impact job?
    • Does EA need a really good hiring agency?
  • Funding my work would be nice

My opinions about hiring

A better job board

  • draft 1: 75% of 80k's engineering jobs are unrelated to software development. This board is the other 25%.

Tech community building & outreach

(apparently I'm doing some of this?)

  • Some ideas I'm working on or strongly considering working on
  • Are you talking to someone about working on strange neglected problems? Here's how I'd frame it

My opinions about EA software careers

  • An alternative career guide
  • Improving CVs (beyond what I've seen any professional CV editor do)
  • Getting your first paid software job
  • [more coming]

My personal fit for jobs

  • Owning the tech of a pre-production project (and helping with things around it, like some product work)
  • I really enjoy coaching, user research, and explaining tech concepts and tradeoffs simply to non-tech people; unclear if this will fit into some future job

Fun

  • I'm currently reading Project Lawful and Worth the Candle [26-7-2022]
  • Big HPMOR fan
  • I like VR
  • My shirts have cats on them

Contact details

How others can help me

  • Connections to EA-aligned orgs that have software problems

How I can help others

  • Running software projects, specifically hiring
  • EA careers

Comments (910)

My frank opinion is that the solution to not advancing capabilities is keeping the results private, and especially not sharing them with frontier labs.

 

((

Making sure I'm not missing our crux completely: do you agree that:

  1. AI has a non-negligible chance of being an existential problem
  2. Labs advancing capabilities are the main thing causing that

))

I also think that a lot of work branded as safety (for example, work done in a team called the "safety team" or "alignment team") could reasonably be considered to be advancing "capabilities" (as the field is often divided).

My main point is that I recommend checking the specific project you'd work on, and not only what it's branded as, if you think advancing AI capabilities could be dangerous (which I do think).

Zvi on the 80k podcast:

Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable thing to be doing.

I think that “I am going to take a job at specifically OpenAI or DeepMind for the purposes of building career capital or having a positive influence on their safety outlook, while directly building the exact thing that we very much do not want to be built, or we want to be built as slowly as possible because it is the thing causing the existential risk” is very clearly the thing to not do. There are all of the things in the world you could be doing. There is a very, very narrow — hundreds of people, maybe low thousands of people — who are directly working to advance the frontiers of AI capabilities in the ways that are actively dangerous. Do not be one of those people. Those people are doing a bad thing. I do not like that they are doing this thing.

And it doesn’t mean they’re bad people. They have different models of the world, presumably, and they have a reason to think this is a good thing. But if you share anything like my model of the importance of existential risk and the dangers that AI poses as an existential risk, and how bad it would be if this was developed relatively quickly, I think this position is just indefensible and insane, and that it reflects a systematic error that we need to snap out of. If you need to get experience working with AI, there are indeed plenty of places where you can work with AI in ways that are not pushing this frontier forward.

The transcript is from the 80k website; the episode is also linked in the post. The conversation continues with Rob replying that the 80k view is "it's complicated", and Zvi responding to that.

Something like "noticing we are surprised". Also, I think it would be nice to have prediction markets for studies in general, and EA seems like a natural early adopter (?)

I don't know why this was so downvoted/Xed 

:/

Hey :)

 

Looking at some of the engineering projects (which are the closest to my field):

  • API Development: Create a RESTful API using Flask or FastAPI to serve the summarization models.
  • Caching: Implement a caching mechanism to store and quickly retrieve summaries for previously seen papers.
  • Asynchronous Processing: Use message queues (e.g., Celery) for handling long-running summarization tasks.
  • Containerization: Dockerize the application for easy deployment and scaling.
  • Monitoring and Logging: Implement proper logging and monitoring to track system performance and errors.

I'm guessing Claude 3.5 Sonnet could do these things, probably using 1 prompt for each (or perhaps even all at once).

Consider trying it, if you haven't yet; you might not need any humans for this. Or if you already did, then oops, never mind!

 

Thanks for saving the world!

If you ever run another of these, I recommend opening a prediction market first for what your results are going to be :) 

cc @Nathan Young 

Any idea if these capabilities were made public or, for example, only used for private METR evals?

I'm not sure how to answer this, so I'll give it a shot; tell me if I'm off:

 

Because they usually take more time, and are usually less effective at getting someone hired, than:

  1. Do an online course
  2. Write 2-3 good side projects

 

For example, in Israel pre-COVID, having a CS degree (one that wasn't outstanding) was mostly not enough to get interviews, but 2-3 good side projects were, and the standard advice for people who finished degrees was to go do 2-3 good side projects. (This is based on an org that did a lot of this; hopefully I'm representing them correctly.)


There is more that I can say about this, but I'm not sure I'm even answering the question.

 

Also note that the main point of this post is to recommend people do side projects, as opposed to recommending they don't get a CS degree. Maybe another point is "don't try to learn all the topics you heard about before you apply to any job", which is also important.

  1. If Conor thinks these roles are impactful, then I'm happy we agree on listing impactful roles. (The discussion of whether alignment roles are impactful is separate from what I was trying to say in my comment.)
  2. If the career development tag is used (and is clear to typical people using the job board) then, again, that seems good to me.

My own intuition on what to do in this situation is to stop trying to change your reputation using disclaimers.

There's a lot of value in having a job board with high-impact job recommendations. One of the challenging parts is getting a critical mass of people looking at your job board, and you already have that.
