We, on behalf of the EV US and EV UK boards, are very glad to share that Zach Robinson has been selected as the new CEO of the Centre for Effective Altruism (CEA).

We can personally attest to his exceptional leadership, judgement, and dedication from having worked with him at Effective Ventures US. These experiences are part of why we unanimously agreed with the hiring committee’s recommendation to offer him the position.[1] We think Zach has the skills and the drive to lead CEA’s very important work.

We are grateful to the search committee (Max Dalton, Claire Zabel, and Michelle Hutchinson) for their thorough process in making the recommendation. They considered hundreds of potential internal and external candidates, including through dozens of blinded work tests. For further details on the search process, please see this Forum post.

As we look forward, we are excited about CEA's future with Zach at the helm, and the future of the EA community.

Zach adds: “I’m thrilled to be joining CEA! I think CEA has an impressive track record of success when it comes to helping others address the world’s most important problems, and I’m excited to build on the foundations created by Max, Ben, and the rest of CEA’s team. I’m looking forward to diving in in 2024 and to sharing more updates with the EA community.”


  1. ^

    Technically, the selection is made by the US board, but the UK board unanimously encouraged the US board to extend this offer. Zach recused himself throughout the process, including from the final selection.

Comments

I'm excited that Zach is stepping into this role. Zach seems substantially better than my expectations for the new CEA CEO, and I expect the CEO hiring committee + Ben + the EV board had a lot to do with that (and probably lots of other people at CEA that I don't know about)!

Most CEA users and EA community members probably don't know Zach, so I thought it would be helpful to share some of my thoughts on him and this position (though I don't know Zach especially well, and these are just quick subjective takes). Thanks to @Ben_West for the nudge to do this.

Quick takes on my impression of Zach and his fit for this role
Zach seems very strong on typical management consultant skills, e.g. communication, professionalism, creative problem-solving in professional environments, and having difficult conversations.

One aptitude I would bet Zach is strong on, and one I think is very neglected in EA organisations, is developing and mentoring mid-level staff. Many EA orgs have bright, agentic, competent young people in fairly high-responsibility roles. While you can learn a lot from these roles that you might not learn in a traditional organisation, I worry (particularly for myself) that there might be a lot of gains from being in a more typical management/organisational structure. I'm pretty excited for people like Jessica McCurdy, who runs the EA groups team, to have a solid manager like Zach (Edit: I didn't mean to imply here that leadership at the time of writing weren't providing good management; just that, relative to people I thought might end up in this role, I expect ~Zach to be quite strong on managing mid-level people). I'd guess that CEA will become a substantially more attractive place to work (for senior people) because of Zach.

While I don't have much insight into Zach's vision for CEA, I remember thinking that Zach seemed sharp, thoughtful, and reasonable in conversations about EA and CEA. I also got the sense that he has thought about some of EA's more intellectual/philosophical parts - it's imo fairly rare to find people who can do both the philosophy part of EA and the execution part, but both parts seem important for this role.

I do have some reservations about Zach entering this role related to the professional relationships/responsibilities that Zach holds.[1] 

Zach previously worked at Open Phil; this relationship seems particularly important for the future of CEA, as that is where CEA gets most of its funding. I think it's reasonable for people to be increasingly concerned about the epistemic influence of Open Phil on EA, and having a former senior Open Phil employee, who is probably still good friends with Open Phil leadership, meaningfully reduces the epistemic independence of CEA. It also could make it hard for CEA or the EV board to push Zach out if he turns out to be a bad fit for CEA - and given CEA's history, I think this is worth bearing in mind (though I'd guess overall that Zach is net positive for CEA governance).

Zach is on Anthropic's Long-Term Benefit Trust. It's not super clear what this means, particularly in light of recent events with the OpenAI board, but I am a bit concerned about the way that EA views Anthropic, and that the CEO of CEA being affiliated with Anthropic could make it more difficult for people within EA to speak out against Anthropic. Managing social/professional relationships within EA is challenging,[2] and I'd guess that overall this cost is worth paying to have a CEO of Zach's calibre - but I think it's a meaningful cost that people should be tracking.

In an ideal world, I would prefer that Zach[3] were less strongly connected to Open Phil (weak confidence) and also less strongly connected to Anthropic (medium/high confidence).

CEA's future strategy
I don't have many thoughts at this time on what changes to CEA's strategy I’d like to see. To me it seems like CEA is at a bit of a crossroads in both a strategic and organisational sense:

  1. Organisational - various teams seem to want to spin out and become their own things, where the project leads get more of a say over the direction of their project and have less baggage from being part of CEA. I'd be excited about Zach working out whether this is the right move for individual projects, and increasing the value that projects get from being part of CEA.
     
  2. Strategic - many CEA staff seem to have become most concerned about risk from AI and naturally want their work to focus on this topic. At the same time, it seems like relatively little money is available to the non-AI parts of EA for meta work. 

I am not sure what (if anything) should change on the funding side, but on the CEA side I'd be excited about:

  1. Zach/CEA figuring out a coherent vision for CEA that is transparent about its motivations and outcomes, that CEA staff are excited about, and that doesn't leave various parts of the EA community feeling isolated.
  2. Zach figuring out how to increase the value that CEA's projects get from being part of CEA, or helping them spin out if that's what they want to do.
  3. Zach/CEA figuring out how to leverage and improve the CEA brand so that it doesn't restrict the actions of various projects (and ideally is an asset) and doesn't create negative externalities for organisations outside of CEA.

  1. ^

    FYI, I didn't run this by Zach, but as this is not really a criticism that could affect his reputation and mostly just points at publicly available information, doing so didn't seem warranted to me.

  2. ^

    For example, I live with two people who work at Anthropic, and in general, living with people probably has substantive epistemic effects.

  3. ^

    Edit 2024-Apr-16: I meant the CEO of EV, as opposed to Zach specifically.

Zach is on Anthropic's Long-Term Benefit Trust. It's not super clear what this means, particularly in light of recent events with the OpenAI board, but I am a bit concerned about the way that EA views Anthropic, and that the CEO of CEA being affiliated with Anthropic could make it more difficult for people within EA to speak out against Anthropic.

This is a very interesting point given that it seems that Helen's milquetoast criticism of OpenAI was going to be used as leverage to kick her off the OpenAI board, and that historically EV has aggressively censored its staff on important topics.

What are some instances of this: “historically EV has aggressively censored its staff on important topics”?

I'm not sure that I would use the word censoring, but there were strict policies around what kinds of communications various EV orgs could do around FTX for quite a long time (though I don't think they were particularly unusual for an organisation of EV's size in a similar legal situation).

EV was fine with me publishing this. My experience was that it was kind of annoying to publish FTX stuff because you had to get review first, but I can't recall an instance of being prevented from saying something.

"Aggressively censored its staff" doesn't reflect my experience, but maybe reflects others', not sure.

In fairness, I was prevented from posting a bunch of stuff and spent a long time (like tens of hours) workshopping text until legal counsel were happy with it. In at least one case I didn’t end up posting the thing because it didn’t feel useful after the various edits, and by then it had been a long time since the event the post was about.

I think in hindsight the response (with the information I think the board had) was probably reasonable - but if EV were to take similar actions over a post about Anthropic, I’d be pretty upset about that. I wouldn’t use the word censoring in the real FTX case - but idk, in the fictional Anthropic case I might?

I think in hindsight the response (with the information I think the board had) was probably reasonable

Reasonable because you were all the same org, or reasonable even if EA Funds was its own org?

I think reasonable even if EA Funds was its own org.

I think it's worth not entangling the word 'censorship' with whether it is justified. During the Second World War the UK engaged in a lot of censorship, to maintain domestic morale and to prevent the enemy from getting access to information, but this seems to me to have been quite justified, because the moral imperative for defeating Germany was so great.

Similarly, it seems quite possible to me that in the future CEA might be quite justified in instituting AI-related censorship, preventing people from publishing writing that disagrees with the house line. It seems possible to me that the FTX- and EV-related censorship was justified, though it is hard to tell, given that EV have never really explained their reasons, and I think the policy certainly had very significant costs. In the wake of FTX's collapse there was a lot of soul-searching and thinking about how to continue in the EA community, and we were deprived of input from many of the best-informed and most thoughtful people. My guess is this censorship was especially onerous on more junior employees, for whom it was harder to justify the attorney review time, leading to a default answer of 'no'.

So the reason I mentioned it wasn't that censorship is always a bad choice, or that, conditional on censorship being imposed, it is likely to be a mistake given the situation. The argument is that who your leader is changes the nature of the situation, changing whether or not censorship is required, and the nature of that censorship. As an analogy, if Helen had known what was going to come, I imagine she might have written that report quite differently - with good reason. A hypothetical alternative CSET with a different leader would not have faced such pressures.

It seems possible to me that the FTX- and EV-related censorship was justified, though it is hard to tell, given that EV have never really explained their reasons, and I think the policy certainly had very significant costs.

I think it is highly likely that imposing a preclearance requirement on employees was justified. It would be extremely difficult for an attorney to envision everything that an employee might conceivably write and determine, without even seeing it, whether it would cause problems. Even if the attorney could, they would have to update their view of the universe of possible writings every time the situation materially changed. I just don't think a system without a preclearance requirement would have been workable.

It's more likely that some of the responses to proposed writings were more censorious than they should have been. That is really hard to determine, as we'll likely never know the attorney's reasoning (which is protected by privilege).

The wording of what Larks said makes it seem like, over a number of years, staff were prevented from expressing their true opinions on central EA topics.

Caleb - thanks for this helpful introduction to Zach's talents, qualifications, and background -- very useful for those of us who don't know him!

I agree that EA organizations should try very hard to avoid entanglements with AI companies such as Anthropic - however well-intentioned they seem. We need to be able to raise genuine concerns about AI risks without feeling beholden to AI corporate interests.

What would be the best thing(s) to read for those of us who know ~nothing about Zach and his views/philosophy?

I'm planning to publish some forum posts as I get up to speed in the role, and I think those will be the best pieces to read to get a sense of my views. If it's helpful for getting a rough sense of timing, I'm still working full-time on EV at the moment, but will transition into my CEA role in mid-February.

Just wanted to chip in that I am quite positive about this choice and the direction that CEA could go in under Zach's leadership. I have found Zach to be thoughtful about a range of EA topics and to hold to core principles in a way that is both rare and highly valuable for the leader of a meta organization.

Do you know anything about the strategic vision that Zach has for CEA? Or is this just meant to be a positive endorsement of Zach's character/judgment? 

(Both are useful; just want to make sure that the distinction between them is clear). 

I find some of the comments posted here a bit unhelpful from a communications point of view. Frankly, they read like prepared PR statements (lots of “excited” people who also happen to be employed by or connected to CEA). It would be helpful if people could clarify whether they are posting in a professional or personal capacity going forward.

Congrats to Zach! I feel like this is mostly supposed to be a "quick update/celebratory post", but there's a missing mood that I want to convey in this comment. Note that my thoughts mostly come from an AI Safety perspective, so they may be less relevant for folks who focus on other cause areas.

My impression is that EA is currently facing an unprecedented amount of PR backlash, as well as some solid internal criticisms among core EAs who are now distancing themselves from EA. I suspect this will likely continue into 2024. Some examples:

  • EA has acquired several external enemies as a result of the OpenAI coup. I suspect that investors/accelerationists will be looking for ways to (further) damage EA's reputation.
  • EA is acquiring external enemies as a result of its political engagements. There have been a few news articles recently criticizing EA-affiliated or EA-influenced fellowship programs and think-tanks.
  • EA is acquiring an increasing number of internal critics. Informally, I feel like many people I know (myself included) have become increasingly dissatisfied with the "modern EA movement" and "mainstream EA institutions". Examples of common criticisms include "low integrity/low openness", "low willingness to critique powerful EA institutions", "low willingness to take actions in the world that advocate directly/openly for beliefs", "coziness with AI labs", "general slowness/inaction bias", and "lack of willingness to support groups pushing for concrete policies to curb the AI race." (I'll acknowledge that some of these are more controversial than others and could reflect genuine worldview differences, though even so, my impression is that they're meaningfully contributing to a schism in ways that go beyond typical worldview differences).

I'd be curious to know how CEA is reacting to this. The answer might be "well, we don't really focus much on AI safety, so we don't really see this as our thing to respond to." The answer might be "we think these criticisms are unfair/low-quality, so we're going to ignore them." Or the answer might be "we take X criticism super seriously and are planning to do Y about it."

Regardless, I suspect that this is an especially important and challenging time to be the CEO of CEA. I hope Zach (and others at CEA) are able to navigate the increasing public scrutiny & internal scrutiny of EA that I suspect will continue into 2024.

I strongly agree that being associated with EA in AI policy is increasingly difficult (as many articles and individuals' posts on social media can attest), in particular in Europe, DC, and the Bay Area. 

I appreciate Akash's comment, and at the same time, I understand the purpose of this post is not to ask for people's opinions about what the priorities of CEA should be, so I won't go into too much detail. I want to highlight that I'm really excited for Zach Robinson to lead CEA!

With my current knowledge of the situation in three different jurisdictions, I'll simply comment that there is a huge problem related to EA connections and AI policy at the moment. I would support CEA getting strong PR support so that there is a voice defending EA rather than mostly receiving punches. I truly appreciate CEA's communication efforts over the last year, and it's very plausible that CEA needs more than one person working on this. One alternative is for most people working on AI policy to cut their former connections to EA, which I think is a shame given the usually good epistemics and motivation the community brings. (In any case, the AI safety movement should become more and more independent and "big tent" as soon as possible, and I'm looking forward to more energy being put into PR there.)

These are all good points, but I suspect it could be a mistake for EA to focus too much on PR. Very important to listen carefully to people's concerns, but I also think we need the confidence to forge our own path.

Could you explain a bit more what you mean by "confidence to forge our own path"? I think if the validity of the claims made about AI safety is systematically attacked due to EA connections, there is a strong reason to worry about this. I find that it makes it more difficult for a bunch of people to have an impact on AI policy.

The costs of chasing good PR are larger than they first appear: at the start you're just talking about things differently, but soon enough it distorts your epistemics.

At the same time, these actions make less of a difference than you might expect. Some people are just looking for a reason to criticize you and will find a different one. People will still attack you based on what happened in the past.

I’m delighted that Zach has agreed to join CEA, and I’m excited for CEA’s future under his leadership.

I think that Zach is an extremely strong leader and manager, who thrives under pressure and cares deeply about building a better world. We dug deep into his strengths and weaknesses, through strategy discussions, work-history interviews, and reference calls. He has many outstanding references from people who have worked closely with him at Open Philanthropy and Effective Ventures US.

Thank you to everyone on the search committee, to the advisors to the search committee, to the staff involved in the process (especially Caitlin Elizondo and Oscar Howie), and of course to our candidates for engaging with a broad and intensive search process. The depth and detail of the search make me confident that Zach is the right person to lead CEA going forward.

I’m also incredibly grateful to Ben West, CEA’s leadership team, and all the staff for leading CEA to one of its best years ever in 2023. It’s been an honour and a joy to work with them.

I will be available to support and advise Zach and the CEA leadership team as needed, but after 7 years at CEA I’ll now be taking a break (with a new baby!) before exploring my next career steps. 

I look forward to seeing Zach, CEA, and EA flourish in the coming years!

I'm excited to be turning over the reins to Zach. I had the opportunity to see Zach in action at the Meta Coordination Forum earlier this year, and afterwards I sent Zach the following (edited) message:

(Low urgency) Hey, I have been going through my notes from MCF and just wanted to say that the brief moments I had of seeing you in action made me even more excited for you to lead CEA…

It was also cool seeing your leadership techniques to get people excited around specific projects: e.g. telling [person] that something has historically been his strength when motivating him to do more of it.

Lastly, it does feel like we are maybe entering a third wave of effective altruism, and the comments you made in our discussion about what principles-first EA should look like seemed true, more insightful than what most people (including myself) had to say, and like the kind of thing I would want the CEO of CEA to say.

Welcome, Zach!

Where can I read more about the idea of ‘principles-first EA’?

Our (CEA's) website has a page about core EA principles.

And note that Zach has said elsewhere he intends to write more about his views in due course.

I'm surprised these are not identical to the ones on this page https://www.effectivealtruism.org/articles/introduction-to-effective-altruism#what-principles-unite-effective-altruism (although they are very similar).

I thought that page was also from CEA. Are they written for different audiences?

In particular, I like the "Collaborative spirit: It’s often possible to achieve more by working together" principle that seems to be missing from the CEA page.

Thanks for reading closely, and for flagging this! While CEA is the owner of EA.org, the intro essay was drafted by a collaborative process including non-CEA staff, and the final version was written by 80k's Ben Todd (more in the essay's announcement here).

The discrepancy is tracking the reality that there is no consensus about how best to define EA, although I think the omission of collaborative spirit from the CEA page is an oversight and I expect we will edit it accordingly soon.

Quick followup to note that collaborative spirit is included among CEA's Guiding Principles listed on the CEA site. Clearly it's confusing - including to a member of CEA's own staff like me! - that we refer to different things as 'principles' in different places, and that might be something we look to clarify if and when we revisit these pages as Zach Era CEA.

Thanks! It looks like the "Guiding Principles" page is older and seems to focus more on "altruism first" EA. There's also “EA as a university” [...] “a place for intellectual exploration, incredible research, and real-world impact and innovation”, which I think is the most recent and also feels different, although it doesn't talk explicitly about "principles".

Thanks for the CEA link. I had read and reacted to that comment from Zach, but I was looking to understand the broader concept, which sounded like it might be a pre-existing term in the discourse.

This is a good question. It is a pre-existing term (for example the EAIF uses it here) but I'm having difficulty finding a canonical definition.

The definition they use in that post is "focusing on this odd community of people who are willing to impartially improve the world as much as possible, without presupposing specific empirical beliefs about the world (like AGI timelines or shrimp sentience)", which seems close to, but not exactly the same as, my definition. Maybe @Zachary Robinson can include a definition of the term in his forthcoming post.

Just wanted to quickly clarify that the entire wording of the EAIF "definition" was written by me, where I put in the level of care that I typically would for a phrase in an organizational blogpost: significant, but far from the level of precision that an important movement-wide phrase ought to have. I also meant more to gesture at the set of ideas that "Principles-Based EA" roughly points at, rather than to define them. 

All this to say that I'd be glad if Zachary or others came up with their own versions, and I'd be mortified if something like my "definition" became canonical.

Thanks to the search committee, the CEA team, the boards, and the kind commenters -- I'm looking forward to joining the team!

Thanks for sharing this and good luck to Zach.

Could you explain how this works structurally? My understanding is he is currently CEO of EV US. Will he be continuing in this role also, or will EV US have been wound down by February? Is CEA going to be one single international organization, or will there be separate CEA US and CEA UK, with him as CEO of both?

edit: upon further review of previous EV statements it appears there is another hiring round for an EV US CEO replacement, which answers part of the above.

Also, a tougher question: how over-determined was this hiring? As an outsider it seems like he (as currently one of the direct supervisors of CEA's CEO, and someone whose current role will soon be going away) would probably be one of the first names you'd consider. The reason I ask is because I feel like I spent too much time interviewing candidates for a hiring round when I could have narrowed the field much more quickly.

Also, a tougher question: how over-determined was this hiring?

[Just speaking for myself based on being a member of the hiring committee, without running this take past anyone else.]

I do think that Zach was in our top 5-10 most promising people at the start of the process. So I think that directionally the update is that we spent too much time/energy on this process, since the outcome wasn't that surprising.

However, I'm not sure if we should have spent that much less time/energy:

  • In general I think that this is a really crucial hire, and finding someone marginally better or avoiding making a hiring mistake is really valuable, and worth significant delays.
  • Some of our other top candidates were unknown to the hiring committee at the point where we started the process. So I think that there's a nearby-ish world where the broader/more-in-depth search led to a different outcome.
  • I think that the more in-depth process can help the board, staff, and community to be more confident in the decision, and that's useful for the new CEO's future in the role. If we had appointed Zach with no process at all then in some sense that would be the same outcome, but I think it would leave Zach in a weaker position to lead CEA.
  • Even if we'd appointed Zach sooner, I think that he might have only been able to start in mid-February anyway, because of his commitment to EV. Making the appointment sooner would still have been valuable in that it would have resolved some uncertainty sooner, but not as valuable as if Zach could have started several months sooner.
  • I think that some aspects of the process could have gone more quickly, as I noted in my last post on this topic. But there were some important aspects that the hiring committee couldn't have altered much, and some things that mean that the actual hiring process was shorter than it seems (e.g. it took us 3 weeks or so after Zach said yes to get this post together, partly because CEA staff were on team retreat).
  • I don't want to overupdate on one datapoint.
  • Outside view, I think that this is a fairly standard length of time for an exec search process.

So yeah, overall I think you're right that we spent too much time on this, and I'm still confused about how much we should have compressed the process.

Is the plan for Effective Ventures to cease to exist? 

Good question! That was my interpretation of this, since if all the projects are offboarded I do not see what is left:

... we are planning to take significant steps to decentralize the effective altruism ecosystem by offboarding the projects which currently sit under the Effective Ventures umbrella. This means CEA, 80,000 Hours, Giving What We Can and other EV-sponsored projects will transition to being independent legal entities, with their own leadership, operational staff, and governance structures. We anticipate the details of the offboarding process will vary by project, and we expect the overall process to take some time – likely 1-2 years until all projects have finished. [emphasis added]

but I agree it is unclear, and EV did not clarify it when asked despite this being the most popular question.

I’m really excited that Zach will be coming on as CEO of CEA. After so many nominations and evaluations, it’s extremely gratifying to have found someone so qualified for the role. I’m grateful for the hard work everyone put into this, particularly to Max for coordinating and project managing incredibly smoothly, and to Oscar and Caitlin for helping a tonne behind the scenes.

Running CEA is an enormous responsibility, and one I’m glad to be able to trust Zach with. I very much look forward to watching him take CEA into the future.

Thanks so much for all of your work on the search committee!

Exciting news! I worked closely with Zach at Open Phil before he left to be interim CEO of EV US, and was sad to lose him, but I was happy for EV at the time, and I'm excited now for what Zach will be able to do at the helm of CEA.

I'm really excited about Zach coming on board as CEA's new CEO! 

Though I haven't worked with him a ton, the interactions I have had with him have been systematically positive: he's been consistently professional, mission-focused and inspiring. He helped lead EV US well through what was a difficult time, and I'm really looking forward to seeing what CEA achieves under his leadership!

I'm very excited to welcome you to the team, Zach! I like your vision for CEA, think you did a good job managing a very challenging situation with EV, and I'm personally very enthusiastic about the opportunity to work together!

Very exciting news! Welcome Zach — this is making me feel optimistic about EA in 2024.

Congratulations! I'm looking forward to understanding the strategic direction you and CEA will pursue going forward. Do you have a sense of when you'd be able to share a version of this?

Congratulations both to Zach for taking on this important role and to CEA for finding such a capable candidate! Based on my personal interactions with Zach, I'm excited to see where he'll lead CEA and optimistic about him contributing to a strong, principles-based EA community. He seems to me a person of both high integrity and professionalism, who deeply cares about making the world a better place, and who is able to set and execute on a vision. From a GWWC perspective, I'm also looking forward to collaborating with him in his new capacity on making effective giving, and effective altruism principles more broadly, more of a global norm!

While it's great that strong voices in this community such as Ord, MacAskill, and others have vouched for Zach's great qualities, I would like to read about Zach's concrete work, failures, and achievements. This community lives to defer, and this seems like a representative example of that; unfortunately, I'm not sure how relevant that deference is here.

All I know about him is that he worked on global health and has close ties to Anthropic, and thus to AI safety. That's great, for sure; I love the apparent ability to care about many causes at the same time. But what did he do? What is his path? How did he overcome challenges? What does he fail at? Which projects best embody his work? And, more importantly: for what impact?

My apologies if this comment is somewhat naive, but wouldn't it have been a more objective move to appoint an external headhunting company to supply at least an initial group of candidates? This may also have brought in a more diverse group of candidates to select from, e.g. moral philosophers with management experience in a university or community setting who might not be part of the EA community (yet) but are sympathetic to it, and who could be brought up to speed quickly on its issues.

Informally, I've heard from people at various EA orgs that using headhunters or recruiting firms generally hasn't worked out in the past. I wasn't told detailed information about why these experiences didn't work well, but my vague impression is something like headhunters/recruiters didn't understand important aspects of effective altruism, and thus lacked the ability to identify relevant criteria in potential candidates. While I do think there might be some value in using such services, my naïve assumption is that in the context of EA there are also many costs/challenges to doing so.

I think there's some implicit assumption here that external recruiters are the default option that you need some reason to move away from, but I think standard advice is the opposite: you should not use external recruiters, unless you have some unusual circumstance.

E.g. even this Forbes article, which is essentially an ad for a recruiting firm, says "Hiring internally should be your first choice whenever you're looking at your hiring plans."

Late to the party, but this appointment really was an absolutely stunning example of how dysfunctional CEA's internal processes are. You invite hundreds of applications, do screening interviews with over 50, get 20 serious applicants who all do work trials, do in-depth reference checks, and at the end of it you hire the most insidery of insidery insider candidates who any reasonably well-informed person would have fingered as the obvious candidate from the outset. I can only imagine that the people running this process either valued the time of the other applicants at approximately zero, or felt that they had to conduct this bureaucratic charade to appease some set of external stakeholders: neither option is especially edifying. Somehow you get neither the speed and efficiency advantages of trust-based nepotistic hiring, nor the respectability and cognitive diversity benefits of going through the painful process of hiring at least "EA-adjacent" external professional management. Zach is no doubt a perfectly reasonable choice for the role, and of course I wish him well, but this process is a dream case study in how not to do hiring.
