AGB 🔸

Thanks for taking the time to respond.

I think we’re pretty close to agreement, so I’ll leave it here except to clarify that when I’ve talked about engaging/engagement I mean something close to ‘public engagement’; responses that the person who raised the issue sees or could reasonably be expected to see. So what you’re doing here, Zach elsewhere in the comments, etc.

CEA discussing internally is also valuable of course, and is a type of engagement, but is not what I was trying to point at. Sorry for any confusion, and thanks for differentiating.

Thanks for sharing your experience of working on the Forum, Sarah. It's good to hear that your internal experience of the Forum team is that it sees feedback as vital.

I hope the below helps illustrate the kind of thing that can contribute to the opposite external impression. Perhaps some types of feedback get more response than others?

> If you take one thing away from my comment, please remember that we love feedback - there are multiple ways to contact us listed here, including an anonymous option.

AFAICT I have done this twice: once asking a yes/no question about an unclear forum policy, and once about a Forum team post I considered mildly misleading. The first got no response; the second got a response that was unfortunately inaccurate, though I certainly assume unintentionally so.

I want to be clear that I do not think I am entitled to get a response. I think the Forum team is entitled to decide it should focus on analytics not individuals, for example. I basically thought it had, and so mentally wrote off those pathways. But your comment paints a surprisingly different picture and repeatedly pushes these options, so it didn't feel right to say that I disagree without disclosing a big part of why I disagree.

Looking to public, and frankly far more important, examples of this, the top comment on CEA's last fundraising attempt is highly critical of the Forum / Online team's direction and spend. At the time of writing, the comment has 23/2 agree/disagree votes and more karma than the top-level post it's under. This seems like the kind of thing one prioritises responding to if trying to engage, and 10 months ago Ben West responded "I mostly want to delay a discussion about this until the post fully dedicated to the Forum". That post never came out[1]. So again my takeaway was that the Forum team didn't value such engagement.

> Given this, I personally disagree that we “relegate the EA community to an afterthought” and that we “largely ignore the views of people strongly involved with EA”, and I disagree that we implied that we plan to do these things in the future.

As someone who directionally agrees with the quoted sentiments, this was helpful in clarifying part of what's going on here. I personally think that CEA has been opaque for the last few years, for better or for worse[2]. Others I have heard from think the same[3]. So I naturally interpret a post which is essentially a statement of continuity as a plan to continue down this road. Arepo makes a similar point in the 2nd paragraph of their first comment. But if you think CEA, or at least your team, has been responsive in the past, the same statement of continuity is not naturally interpreted that way.

  1. ^

    To the best of my knowledge. If it did, please link to it as a response to the comment! This type of thing is hard to search for, but I did spend ~5 minutes trying.

  2. ^

    Since I've pushed CEA to be more responsive here and elsewhere, I want to note that distance is helpful in some contexts. I am unsurprised to hear that the Forum redesign in 2023 got negative feedback from entrenched users but positive feedback from new users, for example; seems a common pattern with design changes.

  3. ^

    Long comment, so pulling out the relevant quote:

    > I think that OP / CEA board members haven't particularly focused on / cared about being open and transparent with the EA community....Remember that OP staff members are mainly accountable to their managers, not the EA community or others. CEA is mostly funded by OP, so is basically similarly accountable to high-level OP people.

That's fair; I didn't really explain that footnote. Note the original point was in the context of cause prioritisation, and I should probably have linked to this previous comment from Jason, which captured my feeling as well:

> A name change would be a good start.
>
> By analogy, suppose there were a Center for Medical Studies that was funded ~80% by a group interested in just cardiology. Influenced by the resultant incentives, the CMS hires a bunch of cardiologists, pushes medical students toward cardiology residencies, and devotes an entire instance of its flagship Medical Research Global conference to the exclusive study of topics in cardiology. All those things are fine, but this org shouldn't use a name that implies that it takes a more general and balanced perspective on the field of medical studies, and should make very very clear that it doesn't speak for the medical community as a whole.

It seems possible, though far from obvious, that CEA's funding base is so narrow that it is forced to focus on that funder's target in order to ensure the organisation's survival. This was something I thought Zach covered nicely:

> The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies. While I don’t think it’s necessary for us to share the exact same priorities as our funders, I do feel there are some constraints based on donor intent, e.g. I would likely feel it is wrong for us to use the GCRCB team’s resources to focus on a conference that is purely about animal welfare. There are also practical constraints insofar as we need to demonstrate progress on the metrics our funders care about if we want to be able to successfully secure more funding in the future.

Note: I had drafted a longer comment before Arepo's comment appeared; given the overlap, I cut the parts they already covered and posted the rest here rather than in a new thread.

> ...it also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members

I agree with Arepo that both halves of this claim seem wrong. Four of CEA's five programs, namely Groups, Events, Online, and Community Health, have theories of change that directly route through serving the community. This is often done by quite literally providing community members with services that are free, discounted, or just hard to acquire elsewhere. Sure, they are serving the community in order to have a positive impact on the wider world, but that's like saying a business provides a service in order to make a profit; true but irrelevant to the question of whether the directly-served party is a customer.

I speculate that what's going on here is:

  1. CEA doesn't want to coordinate the community the way any leader or manager would be expected to coordinate their team. That (a) seems like a quick path to groupthink and (b) would be hard given many members do not recognise CEA's authority.
  2. CEA also doesn't want to feel responsible for making community members happy, because it feels the eternal critics that make up the community (hi!) will be unhappy regardless of what it does. 

I'm sympathetic to both impulses, but if taken too far they leave the CEA <-> EA community relationship at an impasse and make the name 'CEA' a real misnomer. Regardless of preferred language, I hope that CEA will rediscover its purpose of nurturing and supporting the EA community by providing valuable services to its members[1] - a lower bar than 'make these eternal critics happy' - and I believe the short descriptions of those four teams quoted below already clearly point in that direction. 

For me, this makes the served members customers, in the same sense that a parishioner is a customer of their church. Most businesses can't make all prospective customers happy either! But if that fact makes them forget that their continued existence is contingent upon their ability to serve customers, then they are truly lost.

> Events: We run conferences like EA Global and support community-organized EAGx conferences...
>
> Groups: We fund and advise hundreds of local effective altruism groups...
>
> Online: We build and moderate the EA Forum...We also produce the Effective Altruism Newsletter.
>
> Community Health: We aim to prevent and address interpersonal and community problems that can prevent community members and projects from doing their best work.

  1. ^

    As I hope comes across, I do not think this is at all radical. But if CEA cannot or will not do this, I think it should change its name.

I'm sorry you hear it that way, but that's not what it says; I'm making an empirical claim about how norms work / don't work. If you think the situation I describe is tenable, feel free to disagree.

But if we agree it is not tenable, then we need a (much?) narrower community norm than 'no donation matching', such as 'no donation matching without communication around counterfactuals', or Open Phil / EAF needs to take significantly more flak than I think they did. 

I hoped pointing that out might help focus minds, since the discussion so far had focused on the weak players, not the powerful ones.

A question I genuinely don’t know the answer to, for the anti-donation-match people: why wasn’t any of this criticism directed at Open Phil or EA funds when they did a large donation match?

I have mixed feelings on donation matching. But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I’m not seeing that pushback.

Relatedly, I didn’t like the assertion that the increased number of matches comes from the ‘fundraising’ people not the ‘community-building and epistemics’ people. I really don’t know who the latter refers to if not Open Phil / EAF.

https://forum.effectivealtruism.org/posts/zt6MsCCDStm74HFwo/ea-funds-organisational-update-open-philanthropy-matching

Thanks for clarifying. I agree that Gift Aid eligibility is the key question; HMRC does not expect me to have insight into the administration of every charity I donate to, and it’s not like they care if charities don’t take the ‘free’ money they are entitled to! In other words, whether CEA claims does not matter, but whether it could claim does.

However, in order for the charity to be entitled to claim, a Gift Aid declaration must be completed:

https://www.gov.uk/government/publications/charities-detailed-guidance-notes/chapter-3-gift-aid#chapter-36-gift-aid-declarations

“Without this declaration, a donation from an individual will not qualify as a Gift Aid donation.”

I do not recall filling one in when I last paid for EAG - which was multiple years ago, to be clear - and without that declaration it is not in fact a Gift Aid donation. That's my non-professional opinion, of course, but it's based on the guidance linked above. So I did not feel comfortable claiming. Others’ mileage may vary.

I’m glad to hear you are reconsidering the website language.

How sure are you about this? The boxes on the UK Self Assessment Tax Return (link below; it’s on page 6) where I declare my donations ask for things like “Gift Aid Payments made in the year…”. So I wouldn’t include non-Gift-Aid payments there, and I’m not sure where else they would go.

In general, the core tax concept for various reliefs in the UK is Adjusted Net Income. The page defining it (linked below) explicitly calls out Gift Aid donations as reducing it but not anything else.
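
To make the gross-up concrete, here is a small worked example (my own arithmetic, assuming the standard 20% basic rate; the figures are illustrative, not taken from HMRC's guidance):

$$\text{gross donation} = \text{net donation} \times \frac{100}{100 - 20} = £80 \times \frac{100}{80} = £100$$

So on my reading, an £80 donation made under Gift Aid reduces Adjusted Net Income by £100, while the same £80 given without a declaration reduces it by nothing.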

I’d appreciate a link if I’m wrong about this.

https://assets.publishing.service.gov.uk/media/6613fc8a213873b991031b88/SA100_2024.pdf

https://www.gov.uk/guidance/adjusted-net-income

Thanks, Arden. I suspect you don't disagree with the people interviewed for this report all that much, then, though ultimately I can only speak for myself.

One possible disagreement that you and other commenters brought up, and which I meant to respond to in my first comment but forgot: I would not describe 80,000 hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement; quoting from the second link:

> We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we could learn more that would make us change our priorities.

I don't think that's how it would go. If an individual 80,000 hours staff member learned things that caused them to downshift their x-risk or AI safety priority, I would expect them to leave the org, not the org to change. Similar observations apply to hiring. So while all the individuals involved may be cause neutral and open to change in the sense you describe, 80,000 hours itself is not, practically speaking. It's very common for orgs to be more 'sticky' than their constituent employees in this way.

I appreciate it's a weekend, and you should feel free to take your time to respond to this if indeed you respond at all. Sorry for missing it in the first round. 

Meta note: I wouldn’t normally write a comment like this. I don’t seriously consider 99.99% of charities when making my donations; why single out one? I’m writing anyway because comments so far are not engaging with my perspective, and I hope more detail can help 80,000 hours themselves and others engage better if they wish to do so. As I note at the end, they may quite reasonably not wish to do so.

For background, I was one of the people interviewed for this report, and in 2014-2018 my wife and I were one of 80,000 hours’ largest donors. In recent years it has not made my shortlist of donation options. The report’s characterisation of them - spending a huge amount while not clearly being >0 on the margin - is fairly close to my own view, though clearly I was not the only person to express it. All views expressed below are my own.

I think it is very clear that 80,000 hours have had a tremendous influence on the EA community. I cannot recall anyone stating otherwise, so references to things like the EA survey are not very relevant. But influence is not impact. I commonly hear two views for why this influence may not translate into positive impact:

- 80,000 hours prioritises AI well above other cause areas. As a result they commonly push people off paths which are high-impact per other worldviews. So if you disagree with them about AI, you’re going to read things like their case studies and be pretty nonplussed. You’re also likely to have friends who have left very promising career paths because they were told they would do even more good in AI safety. This is my own position.

- 80,000 hours is likely more responsible than any other single org for the many EA-influenced people working on AI capabilities. Many of the people who consider AI top priority are negative on this and thus on the org as a whole. This is not my own position, but I mention it because I think it helps explain why (some) people who are very pro-AI may decline to fund.

I suspect this unusual convergence may be why they got singled out; pretty much every meta org has funders skeptical of them for cause prioritisation reasons, but here there are many skeptics in the crowd broadly aligned on prioritisation.

Looping back to my own position, I would offer two ‘fake’ illustrative anecdotes:

Alice read Doing Good Better and was convinced of the merits of donating a moderate fraction of her income to effective charities. Later, she came across 80,000 hours and was convinced by their argument that her career was far more important. However, she found herself unable to take any of the recommended positions. As a result she neither donates nor works in what they would consider a high-impact role; it’s as if neither interaction had ever occurred, except perhaps she feels a bit down about her apparent uselessness.

Bob was having impact in a cause many EAs consider a top priority. But he is epistemically modest, and inclined to defer to the apparent EA consensus - communicated via 80,000 hours - that AI was more important. He switched careers and did find a role with solid - but worse - personal fit. The role is well-paid and engaging day-to-day; Bob sees little reason to reconsider the trade-off, especially since ChatGPT seems to have vindicated 80,000 hours’ prior belief that AI was going to be a big deal. But if pressed he would readily acknowledge that it’s not clear how his work actually improves things. In line with his broad policy on epistemics, he points out the EA leadership is very positive on his approach; who is he to disagree?

Alice and Bob have always been possible problems from my perspective. But in recent years I’ve met far more of them than I did when I was funding 80,000 hours. My circles could certainly be skewed here, but when there’s a lack of good data my approach to such situations is to base my own decisions on my own observations. If my circles are skewed, other people who are seeing very little of Alice and Bob can always choose to fund.

On that last note, I want to reiterate that I cannot think of a single org, meta or otherwise, that does not have its detractors. I suspect there may be some latent belief that an org as central as 80,000 hours has solid support across most EA funders. To the best of my knowledge this is not and has never been the case, for them or for anyone else. I do not think they should aim for that outcome, and I would encourage readers to update ~0 on learning such.
