
MvK🔸

Research Fellow @ Forethought Foundation
569 karma · Joined · Working (0-5 years)

Comments (73)

Yep, numbers ranged from 60% to 80% support for approving SB 1047, and it was impressively bipartisan, too.

(Just to correct the record for people who might have been surprised to see this comment: All of these people work for Open Philanthropy, not for OpenAI.)

Is there a reason it's impossible to find out who is involved with this project? Maybe it's on purpose, but through the website I couldn't find out who's on the team, who supports it, or what kind of organisation (nonprofit, for-profit, etc.) you are legally. If this was a deliberate and strategic choice against transparency, made because of the nature of the work you expect to be doing, I'd love to hear why you made it!

[My 2 cents: For an org focused on advocacy and campaigns, it might be especially important to be transparent in order to build trust. Projects like yours are exactly the ones where I'm most interested in who is behind them, so I can evaluate trustworthiness, conflicting incentives, etc. For all I know (from the website), you could be a competitor of the company you are targeting! I'm not saying you need Fish-Welfare-Project-level transparency with open budgets etc., and maybe I'm just an overly suspicious website visitor, but I felt it was worth flagging.]

This is a great idea. It's such a good idea that someone else (https://forum.effectivealtruism.org/users/aaronb50) has had it before and has already solved this problem for us:

https://podcasts.apple.com/us/podcast/eag-talks/id1689845820

Have you done some research on the expected demand (e.g. surveying the organisers of the mentioned programs, community builders, maybe Wytham Abbey event organisers)? I can imagine the location and how long it takes to get there (unless you are already based in London, though even then it's quite the trip) could be a deterrent, especially for events shorter than 3 days. (Another factor may be "fanciness" - I've worked with orgs and attended/organised events where fancy venues were eschewed, and others where they were deemed indispensable. If that building is anything like the EA Hotel - or the average Blackpool building - my expectation is that it would rank low on this. It kind of depends on your target audience/users.)

"It's not common" wouldn't by itself suffice as a reason though - conducting CEAs "isn't common" in GHD, donating 10% "isn't common" in the general population, etc. (cf. Hume, is-and-ought something something).

Obviously, a practice may be common (or uncommon) for good underlying reasons - because it reliably protects you from legal exposure, or because the alternative is too much work for too little benefit - but then I'm much more interested in those underlying reasons.

Hey Charlotte! Welcome to the EA Forum. :) Your skillset and interest in consulting work in GHD seem a near-perfect fit for working with one of the charities incubated by Ambitious Impact. As I understand it, they are often looking for people like you! Some even focus on the same problems you mention (STDs, nutritional deficiencies, etc.).

You can find them here: https://www.charityentrepreneurship.com/our-charities

Thanks for the detailed reply. I completely understand the felt need to seize on windows of opportunity to contribute to AI Safety - I myself have changed my focus somewhat radically over the past 12 months.

I remain skeptical about a few of the points you mention, in descending order of importance to your argument (correct me if I'm wrong):

"ERA's ex-AI-Fellows have a stronger track record" I believe we are dealing with confounding factors here. Most importantly, AI Fellows were (if I recall correctly) significantly more senior on average than other fellows. Some had multiple years of work experience. Naturally, I would expect them to score higher on your metric of "engaging in AI Safety projects" (which we could also debate how good of a metric it is). [The problem here I suspect is the uneven recruitment across cause areas, which limits comparability.] There were also simply a lot more of them (since you mention absolute numbers). I would also think that there have been a lot more AI opportunities opening up compared to e.g. nuclear or climate in the last year, so it shouldn't surprise us if more Fellows found work and/or funding more easily. (This is somewhat balanced out by the high influx of talent into the space.) Don't get me wrong: I am incredibly proud of what the Fellows I managed have gone on to do, and helping some of them find roles after the Fellowship may have easily been the most impactful thing I've done during my time at ERA. I just don't think it's a solid argument in the context in which you bring it up.

"The infrastructure is here" This strikes me as a weird argument at least. First of all, the infrastructure (Leverhulme etc.) has long been there (and AFAIK, the Meridian Office has always been the home of CERI/ERA), so is this a realisation you only came to now? Also: If "the infrastructure is here" is an argument, would the conclusion "you should focus on a broader set of risks because CSER is a good partner nearby" seem right to you?

"It doesn't diminish the importance of other x-risks or GCR research areas" It may not be what you intended, but there is something interesting about an organisation that used to be called the "Existential Risk Alliance" pivot like this. Would I be right in assuming we can expect a new ToC alongside the change in scope? (https://forum.effectivealtruism.org/posts/9tG7daTLzyxArfQev/era-s-theory-of-change)

AI was - in your words - already "an increasingly capable emerging technology" in 2023. Can you share more information on what made you prioritize it to the exclusion of all other existential risk cause areas (bio, nuclear, etc.) this year?

[Disclaimer: I previously worked for ERA as the Research Manager for AI Governance and - briefly - as Associate Director.]
