This is a special post for quick takes by Jaime Sevilla. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

My overall impression is that the CEA community health team (CHT from now on) are well intentioned but sometimes understaffed and other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or hold other positions with conflicts of interest, 2) hiring HR and mental health specialists with credentials, 3) publicly clarifying their role and mandate. 

My impression is that the most valuable function the CHT provides is supporting community building teams across the world, from advising community builders to preventing problematic community builders from receiving support. If this is the case, I think it would be best to rebrand the CHT as a CEA HR department, and for CEA to properly hire the community builders who are now supported as grantees, an arrangement one could argue constitutes employee misclassification.

I would not be comfortable discussing these issues openly out of concern for the people affected, but here are some horror stories:

  1. A CHT staff member pressured a community builder to put up with and include a community member with whom they weren't comfortable interacting.
  2. A CHT staff member pressured a community builder not to press charges against a community member by whom they felt harassed.
  3. After the police put a restraining order in place in this last case, the CHT refused to liaise with the EA Global team to deny access to the restrained person, even knowing that the affected community builder would be attending the event.
  4. My overall sense is that CHT is not very mindful of the needs of community builders in other contexts. Two very promising professionals I've mentored have dissociated from EA, and rejected a grant, in large part because of how they were treated by the CHT.
  5. My impression is that the CHT staff undermines the legitimacy of local communities to make their own decisions. CEA is often perceived as a source of authority, and the CHT has a lot of sway in funding decisions. This makes it really hard for local groups to go against the wishes of the CHT, which is the main intermediary with groups. I wish this relation were more transparent, so they could be held accountable for it.

To be clear, I think that these stories have a lot of nuance to them and are in each case the result of the CHT making what they thought were the best decisions they could make with the tools they had, but in each of them I noticed that I ended up disagreeing with the decisions made and feeling very uncomfortable with how the whole community structure was set up.

Catherine from CEA’s Community Health and Special Projects Team here.  I have a different perspective on the situation than Jaime does and appreciate that he noted that “these stories have a lot of nuance to them and are in each case the result of the CHT making what they thought were the best decisions they could make with the tools they had.” 

I believe Jaime’s points 1, 2 and 3 refer to the same conflict between two people. In that situation, I have deep empathy for the several people that have suffered during the conflict. It was (and still is) a complex and very upsetting situation.

Typically CEA’s Groups team is the team at CEA that interfaces most closely with EA groups. The conflict mentioned here was an unusual situation which led the Community Health team to have more contact with that group than usual. From the information we gathered after talking to several individuals affected, this was an interpersonal conflict. We made a judgement call about what was best given the information, which Jaime disagrees with. To be clear, based on the information we had, there were no threats of violence, sexual harassment, or other forms of seriously harmful behavior that would warrant us to take the steps that Jaime suggests.

Ultimately, I think both Jaime and I had the same goals of increasing the chances that the group thrives and continues to do its important work over the long term, but we had a different perspective on how to move towards that goal in this situation.

I don’t recognise the situation in 4. I’m not sure if that is because I’m unaware, or if I have a different understanding of the situation. If anyone reading knows and wants to share information or give us feedback I’d be very grateful. There are ways you can contact our community liaisons or managers Chana and Nicole anonymously. 

In service of clear epistemics, I want to flag that the "horror stories" you are sharing are very open to interpretation. If someone pressured someone else, what does that really mean? It could be a very professional and calm piece of advice, or it could be a repulsive piece of manipulation. Is feeling harassed a state that allows someone to press charges, rather than an actual occurrence of harassment? (Of course, I also understand that due to privacy and mob mentality, you probably don't want to share all the details; totally understandable.)

So maybe these really are scenarios in which the community health team dropped the ball. But maybe they aren't. And the snippets shared here aren't enough for me to have confidence in either of those interpretations. I guess I mainly want to remind readers to not pass judgement based on tiny snippets of narratives.

In terms of 2) and 3), a restraining order being granted is decent evidence that someone didn't just mistakenly feel harassed.

I don't know anything about the case above, but I don't actually think it is that strong evidence? About a decade ago, our landlord got a harassment prevention restraining order issued against one of our housemates. The problem was, our landlord was schizophrenic (and unmedicated) and everything they wrote to the judge was hallucinated. My impression is that, at least in Massachusetts, the justice system has a relatively low bar for issuing these?

(In a follow-up, we were able to all get reciprocal orders put in place)

(Disclosure: married to a CH team member)

Seconding this: in my city, a TRO (temporary restraining order) is very easy to get:

“If the judge is convinced that a temporary restraining order is necessary*, he or she may issue the order immediately, without informing the other parties and without holding a hearing.”

*IMO, local judges are very lenient with TROs, issuing them “just in case” the complaint is valid, and reserving more conservative judgements for the actual hearing, 14+ days later.

That surprises me. Maybe I was just flat-out wrong!

I don't think EAs have any particular edge in managing harassment over the police, and I find it troublesome that they have far higher standards for creating safe spaces, especially in situations where the cost is relatively low, such as inviting the affected members to separate EAG events while things cool down.

On another point, I don't think this was Jeff's intention, but I really dislike the unintentional parallel between an untreated schizophrenic and the CB who asked for the restraining order. I can assure you that this was not the case here, and I think the CB was sound of mind and fully justified in requesting it.

Whether it is easy to get a restraining order, IDK man. It is "easy" in the sense that there is a same-day procedure that only requires interrogating the requester. At least in the jurisdiction where this happened, it is also a gruelling process which requires you to consistently repeat your story to many police officials who keep asking you for details and are looking for contradictions, and takes a full work day to complete.

I was responding specifically to the claim that hearing that a restraining order has been granted is very informative. I didn't claim that getting one is easy or hard, or that the community health team should have higher or lower thresholds for action.

I'm also not trying to say anything either way about the community builder in question, and don't know any more about that situation than I've read in this thread. And specifically, I'm not saying that they are mentally ill or made a report based on hallucinations. Instead, what I'm saying is that because the decision to grant a restraining order is not the product of an investigative process and the amount of evidence necessary is relatively low, learning that one has been granted doesn't provide much evidence.

Agreed.

EDIT: based on a comment from Jeff Kaufman, I am now somewhat less confident that a restraining order is strong evidence of harassment actually having occurred. I know practically nothing of the legalities of harassment and restraining orders, so I advise any readers not to weigh my opinions on this topic very heavily.

I don't know about the sway the com health team has over decisions at other funders, but at EA Funds my impression is that they rarely give input on our grants, but, when they do, it's almost always helpful. I don't think you'd be concerned by any of the ways in which they've given input - in general, it's more like "a few people have reported this person making them feel uncomfortable so we'd advise against them doing in-person community building" than "we think that this local group's strategy is weak".

I think that whilst there are valid criticisms of the com health team, I generally think they are a very positive influence on the community.

Responding to some specific points raised

  1. removing people from the CHT that serve as advisors to any EA funds or have other conflict of interest positions

I don't see a COI here, but if there is one it would be great if you could share it with me. I don't plan to remove any com health team members from their advisory positions at this time (but I am of course open to this if I see sufficient evidence that this change would be better for the world).

  2. hiring HR and mental health specialists with credentials

I don't see their role as being particularly similar to HR (in most organisations). Julia Wise worked as a mental health clinician for four years and has some experience in social care (but that of course doesn't mean that they shouldn't have more experience in this area on the margin).

  3. publicly clarifying their role and mandate

I think that they have done a decent job of this on the forum and by setting up this page featured pretty prominently on the CEA website.

"at EA Funds my impression is that they rarely give input on our grants, but, when they do, it's almost always helpful"

This comment concerns me. I would have thought the community health team would never give input on a grant, only interact with funding bodies if there were red flags about an organisation that was either doing funding or getting funded. What kind of "helpful" advice have they given you?

"I don't see a COI here, but if there is one I it would be great if you could share this with me."

I might be missing something, but the potential for COI here seems obvious and quite high. Imagine you are applying for funds from an org on whose board a community health team member sits, or which they advise, while at the same time wanting to make a complaint or raise an issue about someone from that funding body. This would make it harder both:

  1. For someone to make the complaint, thinking it might compromise their chances of funding with the org
  2. For the community health person to respond to the complaint, given the conflict of interest.

From the CHT website's list of example projects:

"Someone feels they were treated unfairly by an EA organization they worked at but isn’t sure they want to pass on feedback to the organization. They’re not sure what to do and want to talk over the situation with someone neutral."

If they are an adviser for an EA org, then they aren't neutral any more.

For this reason I'm surprised CHT members have advisory roles with funding bodies

In general, I would have thought the community health team would only interact with funding bodies around community health issues. I think there's a pretty strong argument for ring-fencing the community health team, as their position as an impartial mediator or support person could be compromised by being part of any other org that could potentially be complained about.

At the very least, I would have thought the head, or top couple of people, from the CHT would have no other roles within funding organisations, and perhaps no other official roles at all in wider EA due to potential COI. 

The exception IMO would be that I would personally like the CHT to be part of higher-level boards like perhaps CEA's, and also be involved in "EA direction" meetings, where I think the value they could add would be higher than the potential conflicts of interest (which would still be present).

Again though I might be missing something obvious here, as I'm not involved in any of this. 

All else equal, I have an extremely strong preference to avoid recommending funding to people who CH suspects with high probability to have committed sexual assault, or are violent, or abuse EA resources/power for personal gain, or are terrible to their employees, etc. 

I think funding is one of the very few levers EAs in practice have to enforce basic decency norms (others include public declarations of norms, stern talking-tos, Forum bans, bans from CEA events, spreading rumors along whisper networks, public callouts, police reports, etc). 

And on a personal level, I'd feel terrible if I knew I repeatedly recommended funding to people who end up being pretty harmful, and I'd feel betrayed if other EAs allowed me to repeatedly make unforced errors in that direction out of a confused sense of propriety.

I'm in general pretty confused about what the community want out of the Community Health team. It seems like people want to hold CH accountable for not taking sufficiently strong actions against problematic actors, and then also hobble what little power they do have. Seems strange to me. 

I agree that this is very valuable. I would want them to be explicit about this role, and be clear to community builders talking to them that they should treat them as if talking to a funder.

To be clear, in the cases where I have felt uncomfortable it was not "X is engaging in sketchy behaviour, and we recommend not giving them funding" (my understanding is that this happens fairly often, and I am glad for it. CHT is providing a very valuable function here, which otherwise would be hard to coordinate. If anything, I would want them to be more brazen and ready to recommend against people based on less evidence than they do now).

It is more cases like "CHT staff thinks that this subcommunity would work better without central coordination, and this staff member is going to recommend against funding any coordinators going forward" or "CHT is pressuring me to make a certain choice, such as not banning a community member I consider problematic, and I am afraid that if I don't comply I won't get renewed" (I've learned of situations like these happening at least thrice).

It is difficult to orient yourself towards someone who you are not sure whether you should treat as your boss or as a neutral third-party mediator. This is stressful for community builders.

**Just quickly responding to a few points**

This comment concerns me. I would have thought the community health team would never give input on a grant, only interact with funding bodies if there were red flags about an organisation that was either doing funding or getting funded. What kind of "helpful" advice have they given you?

The main way they give input on our grants is by reporting concerns about things we are interested in funding (e.g. red flags "a few people have reported this person making them feel uncomfortable so we'd advise against them doing in-person community building" as I said in my original comment). It sounds to me like we're aligned on the kind of input com health might be well-placed to give.

Re your example COIs

Hmm, I am a bit confused about which way this goes. Re (1), I could imagine someone having that concern. I don't think com health staff give us this kind of input, so I'm not worried about it in practice, but I could imagine a grantee being worried about this.

Re (2), we don't pay any of the advisors from com health but I could see a COI if com health team members believed that their reputation was tied to our organisation in some way.

I think that both of these potential COIs are pretty weak (but thank you for flagging them), and I'll have a think about ways in which we might be able to further mitigate them.

Thanks for the reply

I was interpreting your comment as saying that they had separate advisory roles for orgs like yours outside of the community health sphere, which would be much more problematic.

If their advisory role is around community health issues, that makes more sense. It still is a potentially problematic COI, as there is potential to breach confidentiality in that role. For example, I hope they have permission to share info like "we would advise against them doing in-person community building" from the people who gave them that info. By default, everything shared with community health should (I imagine) be confidential unless the person who shares it explicitly gives permission to pass the info on.

But I agree with you it's not as much of a concern, although it requires some care.

While I don't have an objection to the idea of rebranding the community health team, I want to push back a bit against labelling it as human resources.

HR already has a meaning, and there is relatively little overlap between the function of community health within the EA community and the function of an HR team. I predict it would cause a lot of confusion to have a group labelled as HR which doesn't do the normal things of an HR team (recruitment, talent management, training, legal compliance, compensation, sometimes payroll, etc.) but does do things that are frequently not part of a normal HR team (handle interpersonal disputes).

I don't have any proposals for a good label, but I predict that using HR as a label would cause a lot of confusion.

Could you explain what you perceive as the correct remedy in instance #1?

The implication seems like the solution you prefer is having the community member isolated from official community events. But I'm not sure what work "uncomfortable" is doing here. Was the member harassing the community builder? Because that would seem like justification for banning that member. But if the builder is just uncomfortable due to something like a personal conflict, it doesn't seem right to ban the member.

But maybe I'm not understanding what your corrective action would be here?

Could you add more detail regarding: "removing people from the CHT that serve as advisors to any EA funds or have other conflict of interest positions"?

I think people would be more likely to agree if you gave your reasons.

While true, I think this discussion is really hard to have. I don't think EA tends to be good at discussing its internal workings. What scandal discussions have gone well?

A pattern I've recently encountered twice in different contexts is that I would recommend a more junior staff member (but with relevant expertise) for a talk, and I've been asked if I could not recommend a more "senior" profile instead.

I thought it was done with the best of intentions, but it still rubbed me the wrong way. First, this diminishes the expertise of my staff, who have demonstrated mastery of their subject matter. Second, this systematically denies young people valuable experience and exposure. You cannot become a "senior" profile without access to these speaking opportunities!

Please give speaking opportunities to less experienced professionals too!

A clarifying question: when you recommend someone for a talk, is this in the context of recommending that person A speak with person B (as in building a network), or more in the context of giving a presentation (like a TEDx conference)?

When people do this, do you think they mostly want someone with more skills or knowledge or someone with better, more prestigious credentials?

What's your rate of success after pushback? Do organisations usually take the more junior person as a speaker?

Within Epoch there is a months-long debate about how we should report growth rates for certain key quantities such as the amount of compute used for training runs.

I have been an advocate of an unusual choice: orders-of-magnitude per year (abbreviated OOMs/year). Why is that? Let's look at other popular choices.

Doubling times. This has become the standard in AI forecasting, and it's a terrible metric. On the positive side, it is an intuitive metric that both policy makers and researchers are familiar with. But it is absolutely horrid for making calculations. For example, if I know that the cost of AI training runs is doubling every 0.6 years, and the FLOP/$ is doubling every 2.5 years, then the FLOP per training run is doubling every (1/0.6 + 1/2.5)^-1 ≈ 0.48 years, which is very difficult to solve in your head! [1] [2]

Percent growth. This is a choice often favoured in economics, where e.g. the growth rate of GDP is reported as 3%, etc. Unlike doubling times, percent growth composes nicely - you just have to add the rates up! [3] However, I also find percent changes somewhat prone to confusion. For instance, when I tell people that model size has increased 200% since X years ago, I sometimes have had people misunderstand this as saying that it has increased by a factor of 2.

Ultimately, a very common operation that I find myself doing in my head is: "if the effective FLOP used in AI training runs grows at a certain rate, how quickly will we traverse from the scale of current training runs (1e25 FLOP) to a certain threshold (e.g. 1e30)?". OOMs/year makes this computation easy, even if I need to account for multiple factors such as hardware increases, algorithmic improvements and investment. E.g. if these grow respectively at 0.4 OOM/year, 0.1 OOM/year and 0.5 OOM/year, then I know the total effective growth is 1.0 OOM/year, and it will take 5 years to cross that 5 OOM scale gap. And if investment suddenly stopped growing, then I would be able to quickly understand that the pace would be halved, and the gap would then take 10 years to cross.
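The arithmetic here is just addition and one division. A minimal Python sketch (illustrative figures only; I take investment to be the 0.5 OOM/year component so that the pace-halving arithmetic works out):

```python
import math

# Illustrative growth rates, in OOMs/year (orders of magnitude per year)
hardware = 0.4      # hardware price-performance improvements
algorithms = 0.1    # algorithmic improvements
investment = 0.5    # growth in spending

# Rates expressed in OOMs/year compose by simple addition
total = hardware + algorithms + investment  # 1.0 OOM/year

# How long until training runs scale from 1e25 FLOP to 1e30 FLOP?
gap = math.log10(1e30) - math.log10(1e25)  # 5 OOMs

print(gap / total)  # 5.0 years

# If investment suddenly stopped growing, the pace halves:
print(gap / (hardware + algorithms))  # 10.0 years
```

The same question framed in doubling times would require converting every component rate to a doubling time, combining them harmonically, and converting back.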

Sadly OOMs/year is uncommon, and both researchers and policy makers struggle to understand it. I think this is a missed opportunity, and that AI forecasting would be easier to reason about if we moved to it, or at the very least abandoned the very badly behaved doubling time framing.

What do you think? Do you agree we should move past doubling times to a better choice? Which choice would you favour?

  1. ^

     I won't enter into the technical details, but it also has some very unintuitive results when combined with uncertainty. Once we had a discussion because some doubling times looked like they had to be wrong. They spanned from days to years! But it turns out that doubling times are very sensitive to noise, which led to our intuitions being wrong.

  2. ^

    I'd also argue that Christiano's operationalization of slow takeoff is a terrible definition, and that a big part of that terribleness stems from doubling times being very unintuitive.

  3. ^

    This is because percent growth rates have the useful property that log(1 + x) ≈ x for percent changes x close to zero, which links percent growth to a straightforward model of exponential growth such as e^(xt). But this approximation breaks down for percent changes over 1 (which are often seen in AI forecasting).

Agree that it's easier to talk about (change)/(time) rather than (time)/(change). As you say, (change)/(time) adds better. And agree that % growth rates are terrible for a bunch of reasons once you are talking about rates >50%.

I'd weakly advocate for "doublings per year:" (i) 1 doubling / year is more like a natural unit, that's already a pretty high rate of growth, and it's easier to talk about multiple doublings per year than a fraction of an OOM per year, (ii) there is a word for "doubling" and no word for "increased by an OOM," (iii) I think the arithmetic is easier.

But people might find factors of 10 so much more intuitive than factors of 2 that OOMs/year is better. I suspect this is increasingly true as you are talking more to policy makers and less to people in ML, but might even be true in ML since people are so used to quoting big numbers in scientific notation.

(I'd probably defend my definitional choice for slow takeoff, but that seems like a different topic.)

What about factor increase per year, reported alongside a second number to show how the increases compose (e.g. the factor increase per decade)? So "compute has been increasing by 1.4x per year, or 28x per decade" or sth.

The main problem with OOMs is fractional OOMs, like your recent headline of "0.1 OOMs". Very few people are going to interpret this right, where they'd do much better with "2 OOMs".

Factor increase per year is the way we are reporting growth rates by default now in the dashboard.

And I agree it will be better interpreted by the public. On the other hand, multiplying numbers is hard, so it's not as nice for mental arithmetic. And thinking logarithmically puts you in the right frame of mind.

Saying that GPT-4 was trained on 100x more compute than GPT-3 invokes GPT-4 being 100 times better, whereas I think saying it was trained on 2 OOM more compute gives you a better picture of the expected improvement.

I might be wrong here.

In any case, it is still a better choice than doubling times.
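To make the comparison between framings concrete, here is a small conversion sketch (function names are mine, purely illustrative): rates in OOMs/year add, while doubling times combine harmonically.

```python
import math

def ooms_per_year_from_doubling_time(t_double):
    """Convert a doubling time in years into OOMs/year."""
    return math.log10(2) / t_double

def ooms_per_year_from_factor(factor_per_year):
    """Convert a yearly multiplicative factor (e.g. 1.4x) into OOMs/year."""
    return math.log10(factor_per_year)

def combine_doubling_times(*times):
    """Combined doubling time for a product of quantities: the underlying
    rates add, so doubling times combine harmonically."""
    return 1.0 / sum(1.0 / t for t in times)

# Cost doubling every 0.6 years times FLOP/$ doubling every 2.5 years:
print(combine_doubling_times(0.6, 2.5))  # ~0.48 years per doubling of FLOP

# A 1.4x/year trend in OOMs/year, and the implied factor per decade:
rate = ooms_per_year_from_factor(1.4)
print(rate, 10 ** (10 * rate))  # ~0.15 OOMs/year, ~29x per decade
```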

Our team at Epoch recently updated the org's website.
I'd be curious to receive feedback if anyone has any!
What do you like about the design? What do you dislike?
How can we make it more useful for you?

I think it’s a solid improvement! I only occasionally browsed the previous version, but I remember it being a bit tricky to find the headline figures I was interested in after hearing them cited on podcasts, whereas now, going to https://epochai.org/trends, they all seem quite easy to find (plus dig into the details of) due to the intuitive/elegant layout.

I think it looks great! 👏👏

The only thing I'm uneasy about is the testimonial of an investor who's accelerating AI capabilities.

 

I think this is fine: Epoch's work appeals to a broad audience, and Nat Friedman is a well-respected technologist.

I agree that this testimonial adds credibility to Epoch, but it raises concerns about whether their work has negative impacts.

Just a quick comment for the devs - I saw the "More posts like this" lateral bar and it felt quite jarring. I liked it way better when it was at the end. Having it randomly in the middle of a post felt distracting and puzzling. 

ETA: the Give feedback button does not seem to work either. Also its purpose is unclear (give feedback on the post selection? on the feature?)

I've recorded the feedback, thank you! The anticipation that some might find this distracting was the motivation for the feedback button, which makes me concerned to hear that it's not working for you. Could I ask you to check whether you've enabled functional cookies? (See the link in the second paragraph.)

As far as I can tell they are enabled - I see there is a cookie in storage for the intercom for example

Quick guide on the agree/disagree voting system:
 

  • When you upvote a post/comment, you are recommending that more people ought to read and engage with it.
  • When you agree-vote a post/comment, you are communicating that you endorse its conclusions/recommendations.
  • Symmetrically, if you downvote a post/comment you are recommending against engaging with it.
  • And similarly, if you disagree-vote a post/comment you are communicating that you don't endorse its conclusions/recommendations.

Upvotes determine the order of posts and comments and determine which comments are automatically hidden, so they have a measurable effect on how many people read them.

Agree votes AFAIK do not affect content recommendations, but are helpful to understand whether there is community support for a conclusion, and if so in which direction.

On getting research collaborators

(adapted from a private conversation)

The 80/20 advice I would give is: be proactive in reaching out to other people and suggesting to them to work for an evening on a small project, like writing a post. Afterwards you both can decide if you are excited enough to work together on something bigger, like a paper.

For more in depth advice, here are some ways I've started collaborations in the past:

  • Deconfusion sessions
    I often invite other researchers for short sessions of 1-2 hours to focus on a topic, with the goal of coding together a barebones prototype or a sketch of a paper.

    For example, I engaged in conversation with Pablo Moreno about Quantum Computing and AI Alignment. We found we disagreed, so I invited him to spend one hour discussing the topic more in depth. During the conversation we wrote down the key points of disagreement, and we resolved to expand them into an article.
     
  • Advertise through intermediate outputs
    I found it useful for many reasons to split big research projects into post-size bits. One of those reasons is to let other people know what I am working on, and that I am interested in collaborating.

    For example, for the project on studying macroscopic trends in Machine Learning, we resolved to first write a short article about parameter counts. I then advertised the post asking for potential collaborators to reach out.
     
  • Interview people on their interests
    Asking people what motivates them and what they want to work on can segue into an opportunity to say "actually, I am also interested in X, do you want to work together on it?". I think this requires some finesse, but it is a skill that can be practiced.

    For example, I had an in depth conversation with Laura González about her interests and what kinds of things she wanted to work on. It came up that she was interested in game design, so I prodded her on whether she would be interested in helping me refine a board game prototype I had previously shown her. This started our collaboration.
     
  • Join communities of practice.
    I found it quite useful to participate in small communities of people working towards similar goals. 

    For example, my supervisor helped me join a Slack group for people working on AI Explainability. I reached out to the people for one-on-one conversations, and suggested working together to a few. Miruna Clinciu accepted - and now we are building a small research project.