This is a special post for quick takes by JWS 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>

This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so and will reflect and respond in greater detail then.

  • At a gut-level, this feels like an influential member of the EA community deciding to 'defect' and leave when the going gets tough. It's like deciding to 'walk away from Omelas' when you had a role in the leadership of the city and benefitted from that position. In contrast, I think the right call is to stay and fight for EA ideas in the 'Third Wave' of EA.
  • Furthermore, if you do think that EA is about ideas, then I don't think disassociating from the name of EA without changing your other actions is going to convince anyone about what you're doing by 'getting
... (read more)

Hey JWS, 

These comments were off-hand and unconstructive, have been interpreted in ways I didn't intend, and twitter isn't the best venue for them, so I apologise for posting, and I'm going to delete them. My more considered takes are here. Hopefully I can write more in the future.

Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that and happy to make any edits/retractions that you think aren't fair.

I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.

3
Benjamin_Todd
Thank you, I appreciate that.
8
Benevolent_Rain
Side note: Is there a single EA culture? My experience is that GH, AI, and Animal Welfare people are in general super different culturally, and even within these branches there is lots of heterogeneity. I think EA is and should be very loosely tied together, with only the minimal amount of overlap required (such as wanting to help others and prioritising amongst ways of doing so). The looser EAs are tied together, the more meaningless it becomes to leave. Like "I'm leaving liberal democracies" just seems strange, and people rarely say it even if they move countries or change political affiliations.
5
anormative
I’m sure you mean this in good faith, but I think we should probably try to consider and respond meaningfully to criticism, as opposed to making ad hominem style rebuttals that accuse betrayal. It seems to me to be serious epistemic error to target those who wish to leave a community or those who make criticism of it, especially by saying something akin to “you’re not allowed to criticize us if you’ve gained something from us.” This doesn’t mean at all that we shouldn’t analyze, understand, and respond to this phenomenon of “EA distancing”—just that we should do it with a less caustic approach that centers on trends and patterns, not criticism of individuals. 
4
JWS 🔸
I appreciate the pushback anormative, but I kinda stand by what I said and don't think your criticisms land for me. I fundamentally reject your assessment of what I wrote/believe as 'targeting those who wish to leave', or saying people 'aren't allowed to criticise us' in any way.

  • Maybe your perception of 'accusation of betrayal' came from the use of 'defect', which was maybe unfortunate on my part. I'm trying to use it in a game-theory 'co-operate/defect' framing. See Matthew Reardon from 80k here.[1]
  • I'm not against Ben leaving/disassociating (he can do whatever he wants), but I am upset/concerned that formerly influential people disassociating from EA leaves the rest of the EA community, who are by and large individuals with a lot less power and influence, to become bycatch.[2]
  • I think a load-bearing point for me is Ben's position and history in the EA Community.
    • If an 'ordinary EA' were to post something similar, I'd feel sad but feel no need to criticise them individually (I might gather arguments that present a broader trend and respond to them, as you suggest).
    • There is some common-sense/virtue-ethics intuition I feel fairly strongly: being a good leader means being a leader when things are tough, and not just when times are good.
  • I think it is fair to characterise Ben as an EA Leader: Ben was a founder of 80,000 Hours, one of the leading sources of Community growth and recruitment. He was likely a part of the shift from the GH&D/E2G version of 80k to the longtermist/x-risk focused version, a move that was followed by the rest of EA. He was probably invited to attend (though I can't confirm if he did or not) the EA Leadership/Meta Co-ordination Forum for multiple years.
    • If the above is true, then Ben had a much more significant role shaping the EA Community than almost all other members of it.
    • To the extent Ben thinks that Community is bad/harmful/dangerous, the fact that he contributed to it implies som
2
NickLaing
Yes, I agree with all those points, and in general I don't think these kinds of tweets are the best approach to discussing tough and sensitive issues like this. Keep it in person or bring it to the Forum. @Nathan Young, on a really basic object level as well, surely the careers of 80% of people who identify with EA don't depend on 10 people in San Francisco?

Not literally, but on a broader level I think that EA's reputation is too centralised in a handful of influential people, many of whom live in San Francisco and the Bay (this was also true in the past, considering the various scandals that have affected EA's current reputation).

Edit: Confused about the downvoting here - is it a 'the Forum doesn't need more of this community drama' feeling? I don't really include that much of a personal opinion to disagree with, and I also encourage people to check out Lincoln's whole response 🤷


For visibility: on the LW version of this post, Lincoln Quirk (a member of the EV UK board) made some interesting comments (tagging @lincolnq to avoid sub-posting). I thought it'd be useful to have visibility of them on the Forum. A sentence which jumped out at me was this:

Personally, I'm still struggling with my own relationship to EA. I've been on the EV board for a year+ - an influential role at the most influential meta org - and I don't understand how to use this role to impact EA.

If one of the EV board members is feeling this way and doesn't know what to do, what hope is there for rank-and-file EAs? Is anyone driving the bus? It feels like a negative sign for the broader 'EA project'[1] if this feeling goes right to the top of the institutional EA structure.

That sentence comes near the end of a longer, reflective comment, so I recommend reading the full exchange to take in Lincoln's whole perspective. (I'll probably post my thoughts on... (read more)

The answer for a long time has been that it's very hard to drive any change without buy-in from Open Philanthropy. Most organizations in the space are directly dependent on their funding, and even beyond that, they have staff on the boards of CEA and other EA leadership organizations, giving them hard power beyond just funding. Lincoln might be on the EV board, but ultimately what EV and CEA do is directly contingent on OP approval.

OP however has been very uninterested in any kind of reform or structural changes, does not currently have any staff participate in discussion with stakeholders in the EA community beyond a very small group of people, and is majorly limited in what it can say publicly due to managing tricky PR and reputation issues with their primary funder Dustin and their involvement in AI policy.

It is not surprising to me that Lincoln would also feel unclear on how to drive leadership, given this really quite deep gridlock that things have ended up in, with OP having practically filled the complete power vacuum of leadership in EA, but without any interest in actually leading.

4
yanni kyriacos
Hello Habryka! I occasionally see you post something OP-critical and am now wondering: is there a single post where Habryka shares all of his OP-related critiques in one spot? If that doesn't exist, I think creating it could be very valuable.
4
Habryka
I think this is the closest that I currently have (in general, "sharing all of my OP-related critiques" would easily be a 2-3 book sized project, so I don't think it's feasible, but I try to share what I think whenever it seems particularly pertinent): https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st I also have some old memos I wrote for the 2023 Coordination Forum, which I've referenced a few times in past discussions and would still be happy to share with people if they DM me.
4
yanni kyriacos
Yeah, I've seen that. I think costly signalling is very real, and the effort to create something formal, polished, and thoughtful would go a long way. But obviously I have no idea what else you've got on your plate, so YMMV.
8
Sarah Cheng
I agree that it's useful to make this more visible here, thank you!

[Flagging that I only know my own very limited perspective, and I don't expect that Lincoln has done anything wrong; it's just that I don't see much from where I'm sitting.]

I found that statement a bit confusing, because as a person who works at CEA, I barely knew that Lincoln was on the EV board, and I have never heard about him interacting with anyone at CEA (though it seems likely that he has done this and I just don't know about it). I feel like this is complicated because I don't know what the best practices are wrt how involved a board member should be in the details of the organization; I could imagine there might be good reason for board members not to be reaching out to individual employees and pushing their pet causes. But he specifically said he was "worried about the EA Forum" in his comment, and yet I do not know what any of his views on the Forum are, and I have never heard of him sharing any of his views on the Forum with anyone on the CEA Online Team. So I am left feeling pretty confused.

I'll just say that I have directly reached out and offered to talk with him, because as you can imagine, I am very interested in understanding what his concerns are.

Reflections 🤔 on EA & EAG following EAG London (2024):

  • I really liked the location this year. The venue itself was easy to get to on public transport, seemed sleek and clean, and having lots of natural light on the various floors made for a nice environment. We even got some decent sun (for London) on Saturday and Sunday. Thanks to all the organisers and volunteers involved; I know it's a lot of work setting up an event like this and making it run smoothly.
  • It was good to meet people in person whom I had previously only met or recognised from online interactions. I won't single out individual 1-on-1s I had, but it was great to be able to put faces to names, and hearing people's stories and visions in person was hugely inspiring. I talked to people involved in all sorts of cause areas and projects, and that combination of diversity, compassion, and moral seriousness is one of the best things about EA.
  • Listening to the two speakers from the Hibakusha Project at the closing talk was very moving, and a clear case of how knowing something intellectually is not the same thing as hearing personal (and in-person) testimony. I think it would've been one of my conference highlights in the feedba
... (read more)

In any case, I think it's clear that AI Safety is no longer 'neglected' within EA, and possibly outside of it.

I think this can't be clear based only on observing lots of people at EAG are into it. You have to include some kind of independent evaluation of how much attention the area "should" have. For example, if you believed that AI alignment should receive as much attention as climate change, then EAG being fully 100% about AI would still not be enough to make it no longer neglected.

(Maybe you implicitly do have a model of this, but then I'd like to hear more about it.)

FWIW I'm not sure what my model is, but it involves the fact that despite many people being interested in the field, the number actually working on it full time still seems kind of small, and in particular still dramatically smaller than the number of people working on advancing AI tech.

8
Habryka
It was really 90% coincidence in that Manifest and MATS basically fully determined when LessOnline would happen. I do think in a world where I considered myself more interested in investing in EA, or being involved in EA community building, I would have felt more sadness and hesitation about scheduling it at the same time, though I think it's unlikely that would have shifted the overall decision (~15% for this weird counterfactual).  As Phib also says, it is the case that at least historically very few people travel for EAG. I was surprised by this when I did the surveys and analytics for this when I ran EAG in 2015 and 2016. 

at least historically very few people travel for EAG. I was surprised by this when I did the surveys and analytics for this when I ran EAG in 2015 and 2016. 

 

Here are some numbers from Swapcard for EAG London 2024:

| Country | Count |
|---|---|
| United Kingdom | 608 |
| United States | 196 |
| Germany | 85 |
| Netherlands | 48 |
| France | 44 |
| Switzerland | 34 |
| India | 23 |
| Sweden | 21 |
| Canada | 21 |
| Australia | 21 |
| Norway | 17 |
| Brazil | 15 |
| Belgium | 13 |
| Philippines | 12 |
| Austria | 12 |
| Spain | 11 |
| Poland | 11 |
| Czech Republic | 11 |
| Singapore | 10 |
| Nigeria | 10 |
| Italy | 10 |
| Denmark | 10 |
| South Africa | 9 |
| Kenya | 9 |
| Finland | 8 |
| Israel | 7 |
| Hungary | 7 |
| Mexico | 5 |
| Ireland | 5 |
| Hong Kong | 5 |
| Malaysia | 4 |
| Estonia | 4 |
| China | 4 |
| Turkey | 3 |
| Taiwan | 3 |
| Romania | 3 |
| Portugal | 3 |
| New Zealand | 3 |
| Chile | 3 |
| United Arab Emirates | 2 |
| Peru | 2 |
| Luxembourg | 2 |
| Latvia | 2 |
| Indonesia | 2 |
| Ghana | 2 |
| Colombia | 2 |
| Zambia | 1 |
| Uganda | 1 |
| Thailand | 1 |
| Slovakia | 1 |
| Russia | 1 |
| Morocco | 1 |
| Japan | 1 |
| Iceland | 1 |
| Georgia | 1 |
| Egypt | 1 |
| Ecuador | 1 |
| Cambodia | 1 |
| Bulgaria | 1 |
| Botswana | 1 |
| Argentina | 1 |

55% of attendees were not from the UK, and 14% of attendees were from the US, at least based on Swapcard data.
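As a quick sanity check on those percentages, here's a sketch that recomputes them from the Swapcard table above (the six largest countries are listed individually; the remaining 55 rows are aggregated into one bucket, whose sum of 342 I took from the table):

```python
# Attendee counts from the Swapcard table above: the six largest
# countries individually, the remaining rows summed as one bucket.
counts = {
    "United Kingdom": 608,
    "United States": 196,
    "Germany": 85,
    "Netherlands": 48,
    "France": 44,
    "Switzerland": 34,
}
rest = 342  # sum of the remaining rows in the table

total = sum(counts.values()) + rest  # 1357 attendees overall
non_uk_pct = 100 * (total - counts["United Kingdom"]) / total
us_pct = 100 * counts["United States"] / total

print(f"{non_uk_pct:.0f}% non-UK, {us_pct:.0f}% US")  # → 55% non-UK, 14% US
```

So the headline figures check out against the raw table.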

6
Habryka
London is a particularly easy city to travel to from the rest of Europe, but that's still like 50% more than the baseline we had in 2015/2016/2017. The most relevant numbers here would be the people who would travel all the way from the U.S. and who would overlap with people who would want to attend LessOnline. My best guess is there are around 30-40 attendees for which there was a real conflict between the two events, though it wouldn't surprise me if that's off by a factor of 2-3 in either direction.
9
Jeff Kaufman 🔸
Raising my hand for an even more niche category: people who likely would have attended LessOnline had their partner not been attending EAG.
4
Stefan_Schubert
Detail, but afaict there were at least five Irish participants.
4
Lorenzo Buonanno🔸
Thanks! I was using old data; I updated the table. I'm surprised there were only five.
4
NickLaing
"I do think in a world where I considered myself more interested in investing in EA, or being involved in EA community building, I would have felt more sadness and hesitation about scheduling it at the same time, though I think it's unlikely that would have shifted the overall decision (~15% for this weird counterfactual)"

I find it quite discouraging that you didn't feel sadness and hesitation about scheduling it at the same time. I would have hoped that leaders like you, who organised important events like LessOnline, Manifest, and MATS that have EA heritage and connection, would have at least a little interest in doing what was best for EA and community building (even without having to "invest" in it yourself), and would therefore at least try to co-ordinate with the CEA events crew.

I also think your comment partially refutes your assessment that it was "90% coincidence" that Manifest and MATS, rather than EAG, determined when LessOnline would be. If you care about the other two conferences but not much about clashes with EAG, then it's hardly complete coincidence that you clashed with EAG.

I find it quite discouraging that you didn't feel sadness and hesitation about scheduling it at the same time.

I didn't say that I didn't feel sadness or hesitation about scheduling it at the same time. Indeed, I think my comment directly implied that I did feel some sadness or hesitation, because I used the word "more", implying there was indeed a baseline level of sadness or hesitation that's non-zero.

Ignoring that detail, a bit of broader commentary on why I don't feel that sad:

I at the moment think that most EA community building is net-negative for the world. I am still here as someone trying to hold people accountable and because I have contributed to a bunch of the harm this community has caused. I am in some important sense an "EA Leader" but I don't seem to be on good terms with most of what you would call EA leadership, and honestly, I wish the EA community would disband and disappear and expect it to cause enormous harm in the future (or more ideally I wish it would undergo substantial reform, though my guess is the ship for that has sailed, which makes me deeply sad).

I have a lot of complicated opinions about what this implies about how I should relate to stuff... (read more)

I wish the EA community would disband and disappear and expect it to cause enormous harm in the future

Feels like you should resign from EA Funds grantmaking then

I've considered it! My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things (and IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm that the first thing we do is to downvote their comment expressing such sadness and ask them to resign, that really seems like a great recipe for evaporative cooling).

Also separately, I am importantly on the Long Term Future Fund, not the EA Infrastructure Fund. I would have likely left or called for very substantial reform of the EA Infrastructure Fund, but the LTFF seems like it's probably still overall doing good things (though I am definitely not confident).

Precommitting to not posting more in this whole thread, but I thought Habryka's thoughts deserved a response

IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm that the first thing we do is to downvote their comment

I think this is a fair cop.[1] I appreciate the context you've added to your comment and have removed the downvote. Reforming EA is certainly high on my list of things to write about/work on, so I would appreciate your thoughts and takes here, even if I suspect I'll end up disagreeing with your diagnosis/solutions.[2]

My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things

I guess that depends on the theory of change for improving things. If it's using your influence and standing to suggest reforms and hold people accountable, sure. If it's asking for the community to "disband and disappear", I don't know. Like, in how many other movements would that be tolerated from someone with significant influence and funding power?[3] If one of the Lightcone Infrastructure team said "I think lightcone infrastructure in its entirety... (read more)

FWIW Habryka, I appreciate all that I know you’ve done and expect there’s a lot more I don’t know about that I should be appreciative of too.

I would also appreciate if you’d write up these concerns? I guess I want to know if I should feel similarly even as I rather trust your judgment. Sorry to ask, and thanks again

Editing to note I've now seen some of the comments elsewhere.

I wish the EA community would disband and disappear and expect it to cause enormous harm in the future.

 

I would be curious to hear you expand more on this:

  • What is your confidence level? (e.g. is it similar to the confidence you had in "very few people travel for EAG", or is it something like 90%?)
  • What scenarios are you worried about? E.g. is it more about EA hastening the singularity by continuing to help research labs, or about EA making a government-caused slowdown less likely and less effective?
  • What is your main theory of change at the moment with rationalist community building, and how is it different from EA community building? Is it mostly focused on "slowing down AI progress, pivotal acts, intelligence enhancement"?

What is your confidence level? (e.g. is it similar to the confidence you had in "very few people travel for EAG", or is it something like 90%?)

Extremely unconfident, both in overall probability and in robustness. It's the kind of belief where I can easily imagine someone swaying me one way or another in a short period of time, and the kind of belief I've gone back and forth on a lot over the years. 

On the question of confidence, I feel confused about how to talk about probabilities of expected value. My guess is EA is mostly irrelevant for the things that I care about in ~50% of worlds, is bad in like 30% of worlds and good in like 20% of worlds, but the exact operationalization here is quite messy. Also in the median world in which EA is bad, it seems likely to me that EA causes more harm than it makes up for in the median world where it is good.

What scenarios are you worried about? Hastening the singularity by continuing to help research labs, or by making government intervention less likely and less effective?

Those are two relatively concrete things I am worried about. More broadly, I am worried about EA generally having a deceptive and sanity-reducing relationship to the worl... (read more)

OK, your initial message makes more sense given your response here, although I can't quite connect why MATS and Manifest would be net-positive things under this framework while EA community building would be net negative.

My slight pushback would be that EAG London is the most near-term focused of the EAGs, so some of the long-termist potential net negatives you list might not apply so much with that conference.

2
Linch
Yeah this is probably my biggest disagreement with Oli on this issue. 
2
Rebecca
I presume the person doesn’t realise those events are hosted at your venue
6
Stefan_Schubert
Fwiw I think there was such a tendency.
6
Sarah Cheng
+1, I always leave these conferences filled with inspiration and gratitude 😊

Yeah, I was confused by this at first, but now I'm pretty sure this is a coincidence and there's a good chance the organizers just didn't think to check the dates of EAG.

I'm not sure I understand what "this" is referring to, but in general I think discussing things on the Forum is a reasonable way to provide feedback and push for change within EA. Stuff on the Forum does get around.
5
Chris Leong
Maybe a better question is how neglected is this within society? And AI technical research is a lot less neglected than before, but governance work is still looking extremely neglected AND we appear to be in a critical policy window.
5
Phib
Hi, I went to Lessonline after registering for EAG London. My impression of both events being held on the same weekend is something like:

1. Events around the weekend (Manifest being held the weekend after Lessonline) informed Lessonline's dates (but why not the weekend after Manifest then?)
2. People don't travel internationally as much for EAGs (someone cited to me ~10% of attendees, but my opinion on reflection is that this seems an underestimate).
3. I imagine EAG Bay Area: Global Catastrophic Risks in early Feb also somewhat covered the motivation for an "AI Safety/EA conference".

I think you're right that it's not *entirely* a coincidence that Lessonline conflicted with EAG London, but I'm thinking this was done somewhat more casually and probably reasonably.

I think it's odd, and others have noted too, that the most significant AI safety conference shares space with things unrelated on an object level. I think it's further odd to consider (I've heard people say): why bother going to a conference like this when I live in the same city as the people I'd most want to talk with (Berkeley/SF)?

Finally, I feel weird about AI, since I think insiders are only becoming more convinced/confirmed of extreme event likelihoods (AI capabilities). I think it has only become more important by virtue of most people updating timelines earlier, not later, and this includes Open Phil's version of this (Ajeya and Joe Carlsmith's AI timelines). In fact, I've heard arguments that it's actually less important by virtue of "the cat's out of the bag and not even Open Phil can influence trajectories here." Maybe AI safety feels less neglected because it's being advocated from large labs, but that may be both a result of EA/EA-adjacent efforts and not really enough to solve a unilateralizing problem.
5
Habryka
MATS is happening one week after Manifest. 
5
David Mathers🔸
In fairness, you don't need a high p(doom) to think AI safety should be the no. 1 priority if you: a) think that AI is a non-negligible extinction risk (say >0.1%); b) think no other extinction risk has an equal degree of combined neglectedness and size; c) think the expected value of the future, conditional on us not going extinct in the next 100 years, is astronomically high; and d) think AI safety work makes a significant difference to how likely doom is to occur. None of these are innocent or obvious assumptions, but I think a lot of people in the community hold all four.

I consider myself a critic of doomers in one sense, because I suspect p(doom) is under 0.1%, and I think once you get down below that level, you should be nervous about taking expected value calculations that include your p(doom) literally, because you probably don't really know whether you should be at 0.09% or several orders of magnitude lower. But even I am not *sure* that this is not swamped by c). (The Bostrom Pascal's Mugging case involves probabilities way below 1 in a million, never mind 1 in 1,000.)

Sometimes I get the impression, though, that some people think of themselves as anti-doomers when their p(doom) is officially more like 1%. I think that's a major error, if they really believe that figure. 1% is not low for human extinction. In fact, it's not low even if you only care about currently existing people being murdered: in expectation that is 0.01 × 8 billion = 80 million deaths(!). Insofar as what is going on is really just that people in their heart of hearts are much lower than 1%, but don't want to say that because it feels extreme, maybe this is ok. But if people actually mean figures like 1% or 5%, they ought to basically be on the doomers' side, even if they think the very high p(doom) estimates given by some doomers are extremely implausible.
3
JWS 🔸
Going to merge replies into this one comment, rather than sending lots and flooding the forum. If I've @'d you specifically and you don't want to respond in the chain, feel free to DM:

On neglectedness - Yep, fair point that our relevant metric here is neglectedness in the world, not in EA. I think there is a point to make here, but it was probably the wrong phrasing to use; I should have made it more about 'AI Safety being too large a part of EA' than 'Lack of neglectedness in EA implies lower ITN returns overall'.

On selection bias/other takes - These were only ever meant to be my takes and reflections, so I definitely think they're only a very small part of the story. @Stefan_Schubert, I'd be interested to hear about your impression of 'lack of leadership' and any potential reasons why.

On the Bay/insiders - It does seem like the Bay is convinced AI is the only game in town? (Aschenbrenner's recent blog seems to validate this.) @Phib, I'd be interested to hear you say more on your last paragraph; I don't think I entirely grok it, but it sounds very interesting.

On the object level - I think this one's for an upcoming sequence. Suffice to say that one can infer from my top-level post that I have very different beliefs on this issue than many 'insider EAs', and I do work on AI/ML for my day job![1] But I think that while David sketches out a case for the overall points, those points have been highly underargued and underscrutinised given their application in shaping the EA movement and its funding. So look out for a more specific sequence on the object level,[2] maybe-soon-depending-on-writing-speed.

1. ^ Which I have recently left to do some AI research and see if it's the right fit for me.
2. ^ Currently tentatively titled "Against the overwhelming importance of AI x-risk reduction"
3
Phib
Yeah, thank you. I guess I was trying to say that the evidence only seems stronger over time that the Bay Area's 'AI is the only game in town' is accurate. Insofar as: timelines for various AI capabilities have outperformed both superforecasters' and AI insiders' predictions; transformative AI timelines (at Open Phil, prediction markets, and among AI experts, I think) have decreased significantly over the past few years; the performance of LLMs has increased at an extraordinary rate across benchmarks; and we expect the next decade to extrapolate this scaling to some extent (with essentially hundreds of billions if not tens of trillions to be invested).

Although, yeah, I think to some extent we can't know if this continues to scale as prettily as we'd expect, and it's especially hard to predict categorically new futures like exponential growth (10%, 50%, etc., growth/year). Given the forecasting efforts and trends thus far, it feels like there's a decent chance of these wild futures, and people are kinda updating all the way? Maybe not Open Phil entirely (to the point that EA isn't just AIS), since they are hedging their altruistic bets, in the face of some possibility this decade could be 'the precipice' or one of the most important ever.

Misuse and AI risk seem like the negative valence of AI's transformational potential. I personally buy the arguments around transformational technologies needing more reasoned steering and safety, and I also buy that EA has probably been a positive influence and that alignment research has been at least somewhat tractable. Finally, I think that there's more that could be done to safely navigate this transition.

Also, re David (Thorstad?): yeah, I haven't engaged with his stuff as I probably should, and I really don't know how to reason for or against arguments around the singularity, exponential growth, and the potential of AI without deferring to people more knowledgeable/smarter than me. I do feel like I have seen the start and middle
3
Joseph Miller
I think this is basically entirely selection effects. Almost all the people I spoke to were "doomers" to some extent.
1
harfe
What do you mean by that? Presumably you do not mean it in any religious sense. Do you want to say that exclusively longtermist EA is much less popular among EAs than it used to be (i.e. the "heaven" is the opinion of the average EA)?
7
JWS 🔸
Ah sorry, it's a bit of a linguistic shortcut; I'll try my best to explain more clearly.

As David says, it's an idea from Chinese history. Rulers used the concept as a way of legitimising their hold on power, where Tian (Heaven) would bestow a 'right to rule' on the virtuous ruler. Conversely, rebellions/usurpations often used the same concept to justify their rebellion, often by claiming the current rulers had lost heaven's mandate.

Roughly, I'm using this to analogise to the state of EA, where AI Safety and AI x-risk have become an increasingly large/well-funded/high-status[1] part of the movement, especially (at least apparently) amongst EA leadership and the organisations that control most of the funding/community decisions. My impression is that there was a consensus and ideological movement amongst EA leadership (as opposed to an already-held belief where they pulled a bait-and-switch), but many 'rank-and-file' EAs simply deferred to these people, rather than considering the arguments deeply.

I think various scandals/bad outcomes/bad decisions/bad vibes around EA in recent years and at the moment can be linked to this turn towards the overwhelming importance of AI Safety, and as EffectiveAdvocate says below, I would like that part of EA to reduce its relative influence and power on the rest of it, and for rank-and-file EAs to stop deferring on this issue especially, but also in general.

1. ^ I don't like this term, but again, I think people know what I mean when I say this
4
David Mathers🔸
It's a reference to an idea from old Chinese political thinking: https://en.wikipedia.org/wiki/Mandate_of_Heaven
1
harfe
That does not help me understand what is meant there. I fail to see relevant analogies to AI Safety.
5
EffectiveAdvocate🔸
I am fairly sure that JWS means to say that these subgroups are about to / should lose some of their dominance in the EA movement. 
2
David Mathers🔸
Agree

Quick[1] thoughts on the Silicon Valley 'Vibe-Shift'

I wanted to get this idea out of my head and into a quick take. I think there's something here, but there's a lot more to say, and I really haven't done the in-depth research for it. There was a longer post idea I had for this, but honestly, diving into it any deeper than I have here isn't a good use of my life, I think.

The political outlook in Silicon Valley has changed.

Since the attempted assassination of President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc has claimed political victory, and lots of people have noticed the 'vibe shift'.[2] I think that, rather than this being a change in opinions, it's more an event allowing for the beginning of a preference cascade, but at least in Silicon Valley (if not yet reflected in national polling) it has happened. 

So it seems that a large section of Silicon Valley is now openly and confidently supporting Trump, and is to a greater or lesser extent aligned with the a16z/e-acc worldview.[3] We know it's already reached the ears of VP candidate JD Vance.

How did we get here?

You could probably write a book on this, so this is a highly ... (read more)

6
anormative
I've often found it hard to tell whether an ideology/movement/view has just found a few advocates among a group, or whether it has totally permeated that group.  For example, I'm not sure that Srinivasan's politics have really changed recently or that it would be fair to generalize from his beliefs to all of the valley. How much of this is actually Silicon Valley's political center shifting to e/acc and the right, as opposed to people just having the usual distribution of political beliefs (in addition to a valley-unspecific decline of the EA brand)? 
2
David Mathers🔸
A NYT article I read a couple of days ago claimed Silicon Valley remains liberal overall.
3
JWS 🔸
Folding in responses here @thoth hermes (or https://x.com/thoth_iv; if someone can get it to them, or if you're Twitter friends, then pls go ahead).[1] I'm responding to this thread here - I am not saying "that EA is losing the memetic war because of its high epistemic standards"; in fact quite the opposite r.e. AI Safety, and maybe because of a misunderstanding of how politics works/not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think, and also hear your thoughts more. But I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point of view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking 'why is that? what are we getting wrong?' rather than doubling down into lowest-common-denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though. @Iyngkarran Kumar - Thanks for sharing your thoughts, but I must say that I disagree with them. I don't think that the epistemic standards are working against us by being too polite, quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc get increasingly frustrated by the AI Safety field). This is on the field of AI Safety imo for not being more persuasive. If the AI Safety field were right, the arguments would have been more convincing. I think, while it's good for Eliezer to say what he thinks accurately, the 'bomb the datacenters'[3] piece has probably been h

Once again, if you disagree, I'd love to actually hear why.

I think you're reading into twitter way too much.

6
richard_ngo
I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape american politics, and that twitter is a leading indicator.
4
Ryan Greenblatt
I don't disagree with this statement, but also think the original comment is reading into twitter way too much.
7
Ryan Greenblatt
There are many (edit: 2) comments responding and offering to talk. 1a3orn doesn't appear to have replied to any of these comments. (To be clear, I'm not saying they're under any obligation here, just that there isn't an absence of attempted engagement, and thus you shouldn't update in the direction you seem to be updating here.)
2
JWS 🔸
a) r.e. Twitter, almost tautologically true I'm sure. I think it is a bit of signal though, just very noisy. And one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though obviously very prone to error, and perhaps not worth doing at all. b) I haven't seen those comments,[1] could you point me to them or where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article - which I haven't seen good pushback to. Again, I'm happy to be wrong on this.  1. ^ Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.
1
Ryan Greenblatt
Scroll down to see comments.
1
Joseph Miller
Nit: Beff Jezos was doxxed, and repeating his name seems uncool, even if you don't like him.
4
JWS 🔸
I think in this case it's ok (but happy to change my mind) - afaict he owns the connection now and the two names are a bit like separate personas. He's gone on podcasts under his true name, for instance.
0
Joseph Miller
Ok thanks, I didn't know that.
1
Iyngkarran Kumar
Strongly agree. I think the TESCREAL/e-acc movements badly mischaracterise the EA community with extremely poor, unsubstantiated arguments, but there doesn't seem to be much response to this from the EA side.  What does this refer to? I'm not familiar.  Other thoughts on this: Publicly, the quietness from the EA side in response to TESCREAL/e-acc/etc. allegations is harming the community's image and what it stands for. But 'winning' the memetic war is important. If not, then the world outside EA - which has many smart, influential people - ends up seeing the community as a doomer cult (in the case of AI safety) or assigns some equally damaging label that lets them quickly dismiss many of the arguments being made.  I think this is a case where the epistemic standards of the EA community work against it. Rigorous analysis, expressing second/third-order considerations, etc. are seen as the norm for most writing on the forum. However, in places such as Twitter, these sorts of analyses aren't 'memetically fit'.[1]  So, I think we're in need of more pieces like the Time essay on Pausing AI - a no-punches-pulled sort of piece that gets across the seriousness of what we're claiming. I'd like to see more Twitter threads and op-eds that dismantle claims like "advancements in AI have solved its black-box nature", ones that don't let clearly false claims like this see the light of day in serious public discourse.  1. ^ Don't get me wrong - epistemically rigorous work is great. But when responding to TESCREAL/e-acc 'critiques' that continuously hit below the belt, other tactics may be better. 

Many people find the Forum anxiety inducing because of the high amount of criticism. So, in the spirit of Giving Season, I'm going to give some positive feedback and shout-outs for the Forum in 2023 (from my PoV). So, without further ado, I present the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-2023-Forum-Awards: 🏆✨🎄[1]
 

Best Forum Post I read this year:

10 years of Earning to Give by @AGB: A clear, grounded, and moving look at what it actually means to 'Earn to Give'. In particular, the 'Why engage?' section really resonated with me.

Honourable Mentions:

Best ... (read more)

6
David Mathers🔸
Thanks for saying nice things about me! For the record, I also think David Thorstad's contributions are very valuable (whether or not his views are ultimately correct). 
2
Joseph Lemien
This is a lovely idea. Bravo!

A thought about AI x-risk discourse and the debate on how "Pascal's Mugging"-like AIXR concerns are, and where this causes confusion between those concerned and sceptical.

I recognise a pattern where a sceptic will say "AI x-risk concerns are like Pascal's wager/are Pascalian and not valid" and then an x-risk advocate will say "But the probabilities aren't Pascalian. They're actually fairly large"[1], which usually devolves into a "These percentages come from nowhere!" "But Hinton/Bengio/Russell..." "Just useful idiots for regulatory capture..." discourse doom spiral.

I think a fundamental miscommunication here is that, while the sceptic is using/implying the term "Pascalian", they aren't concerned[2] with the percentage of risk being incredibly small but high impact; they're instead concerned about trying to take actions in the world - especially ones involving politics and power - on the basis of subjective beliefs alone. 

In the original wager, we don't need to know anything about the evidence record for a certain God existing or not; if we simply accept Pascal's framing and premises, then we end up with the belief that we ought to believe in God. Similarly, when this term comes... (read more)

4
MichaelStJules
I also think almost any individual person working on AI safety is very unlikely to avert existential catastrophe, i.e. they only reduce x-risk probability by say ~50 in a million (here's a model; LW post) or less. I wouldn't devote my life to religion or converting others for infinite bliss or to avoid infinite hell for such a low probability that the religion is correct and my actions make the infinite difference, and the stakes here are infinitely larger than AI risk's.[1] That seems pretty Pascalian to me. So spending my career on AI safety also seems pretty Pascalian to me. 1. ^ Maybe not actually infinitely larger. They could both be infinite.
9
Ryan Greenblatt
I think people wouldn't normally consider it Pascalian to enter a postive total returns lottery with a 1 / 20,000 (50 / million) chance of winning? And people don't consider it to be Pascalian to vote, to fight in a war, or to advocate for difficult to pass policy that might reduce the chance of nuclear war? Maybe you have a different-than-typical perspective on what it means for something to be Pascalian?
8
MichaelStJules
I might have higher probability thresholds for what I consider Pascalian, but it's also a matter of how much of my time and resources I have to give. This feels directly intuitive to me, and it can be cashed out in terms of normative uncertainty about decision theory/my risk appetite. I limit my budget for views that are more risk-neutral. Voting is low commitment, so not Pascalian this way. Devoting your career to nuclear policy seems Pascalian to me. Working on nuclear policy among many other things you work on doesn't seem Pascalian. Fighting in a war solely in order to decide the main outcome of who wins/loses seems Pascalian. Some of these things may have other benefits regardless of whether you change the main binary-ish outcome you might have in mind. That can make them not Pascalian. Also, people do these things without thinking much or at all about the probability that they'd affect the main outcome. Sometimes they're "doing their part", or it's a matter of identity or signaling. Those aren't necessarily bad reasons. But they're not even bothering to check whether it would be Pascalian. EDIT: I'd also guess the people self-selecting into doing this work, especially without thinking about the probabilities, would have high implied probabilities of affecting the main binary-ish outcome, if we interpreted them as primarily concerned with that.
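The intuition being debated in this thread (that "Pascalian" tracks both the long odds and the size of the commitment, not just the odds) can be sketched as a toy rule. Everything below is invented for illustration: the thresholds and the function itself are placeholders, not anything proposed in the thread.

```python
# A toy formalisation of the intuition above: whether a bet "feels Pascalian"
# depends on both the probability of your action mattering AND how much of
# your resources you stake on it. The thresholds are invented placeholders.

def seems_pascalian(p_success: float, commitment_fraction: float,
                    p_min: float = 1e-3, big_commitment: float = 0.1) -> bool:
    """Pascalian = staking a large share of your resources on a very long shot."""
    return commitment_fraction > big_commitment and p_success < p_min

# Voting: tiny chance of being decisive, but near-zero commitment.
print(seems_pascalian(p_success=1e-7, commitment_fraction=1e-5))  # False

# A whole career on a ~50-in-a-million chance of averting catastrophe.
print(seems_pascalian(p_success=50e-6, commitment_fraction=1.0))  # True
```

Under this sketch, voting comes out non-Pascalian despite its far lower odds, while a full career on a 50-in-a-million chance comes out Pascalian, which matches the distinction drawn in the comment.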

I want to register that my perspective on medium-term[1] AI existential risk (shortened to AIXR from now on) has changed quite a lot this year. Currently, I'd describe it as moving from 'Deep Uncertainty' to 'risk is low in absolute terms, but high enough to be concerned about'. I guess atm I'd say that my estimates are moving closer toward the Superforecasters in the recent XPT report (though I'd say I'm still Deeply Uncertain on this issue, to the extent that I don't think the probability calculus is that meaningful to apply).

Some points around this change:

  • I'm not sure it's meaningful to cleanly distinguish AIXR from other anthropogenic x-risks, especially since negative consequences of AI may plausibly increase other x-risks (e.g. from Nuclear War, biosecurity, Climate Change etc.)
  • I think in practice, the most likely risks from AI would come from deployment of powerful systems that have catastrophic consequences and are then rolled back. I'm thinking of Bing 'Sydney' here as the canonical empirical case.[2] I just don't believe we're going to get no warning shots.
  • Similarly, most negative projections of AI don't take into account negative social reaction and systema
... (read more)
8
Guy Raveh
This seems like a very sensible and down-to-earth analysis to me, and I'm a bit sad I can't seem to bookmark it.
6
JWS 🔸
Thanks :) I might do an actual post at the end of the year? In the meantime I just wanted to get my ideas out there as I find it incredibly difficult to actually finish any of the many Forum drafts I have 😭
6
David Mathers🔸
Do the post :) 
4
NickLaing
I agree this feels plenty enough to be a post for me, but we all have different thresholds I guess!
4
Chris Leong
“AI may plausibly increase other x-risks (e.g. from Nuclear War, biosecurity, Climate Change etc.)” I’m extremely surprised to see climate change listed here. Could you explain?
2
JWS 🔸
Honestly, I just wrote a list of potential x-risks to make a similar reference class. It wasn't meant to be a specific claim, just examples for the quick take! I guess climate change might be less of an existential risk in and of itself (per Halstead), but there might be interplays between them that increase their combined risk (I think Ord talks about this in The Precipice). I'm also sympathetic to Luke Kemp's view that we should really just care about overall x-risk, regardless of cause area, as extinction by any means would be as bad for humanity's potential.[1] I think it's plausible to consider x-risk from AI higher than from Climate Change over the rest of this century, but my position at the moment is that this would be more like 5% v 1% or 1% v 0.01% than 90% v 0.001%, though as I said, I'm not sure trying to put precise probability estimates is that useful. Definitely accept the general point that it'd be good to be more specific with this language in a front-page post though. 1. ^ Though not necessarily: some extinctions may well be a lot worse than others there
4
Chris Leong
My point is that even though AI emits some amount of carbon gases, I'm struggling to find a scenario where it's a major issue for global warming as AI can help provide solutions here as well. (Oh, my point wasn't that climate change couldn't be an x-risk, though it has been disputed, more that I don't see the pathway for AI to exacerbate climate change).
1
David Johnston
I would take the proposal to be AI->growth->climate change or other negative growth side effects
1
Mo Putera
I was wondering why he said that, since I've read his report before and that didn't come up at all. I suppose a few scattered recollections I have are * Tom would probably suggest you play around with the takeoffspeeds playground to gain a better intuition (I couldn't find anything 1,000x-in-a-year-related at all though) * Capabilities takeoff speed ≠ impact takeoff speed (Tom: "overall I expect impact takeoff speed to be slower than capabilities takeoff, with the important exception that AI’s impact might mostly happen pretty suddenly after we have superhuman AI")

This is an off-the-cuff quick take that captures my current mood. It may not have a long half-life, and I hope I am wrong

Right now I am scared

Reading the tea-leaves, Altman and Brockman may be back at OpenAI, the company charter changed, and the board - including Toner and McCauley - removed from the company

The mood in the Valley, and in general intellectual circles, seems to have snapped against EA[1]

This could be as bad for EA's reputation as FTX

At a time when important political decisions about the future of AI are being made, and potential coalitions are being formed

And this time it'd be second-impact syndrome

I am scared EA in its current form may not handle the backlash that may come

I am scared that we have not done enough reform in the last year from the first disaster to prepare ourselves

I am scared because I think EA is a force for making the world better. It has allowed me to do a small bit to improve the world. Through it, I've met amazing and inspiring people who work tirelessly and honestly to actually make the world a better place. Through them, I've heard of countless more actually doing what they think is right and giving what they can to make the world we find ourse... (read more)

As with Nonlinear and FTX, I think that for the vast majority of people, there's little upside to following this in real-time.

It's very distracting, we have very little information, things are changing fast, and it's not very action-relevant for most of us.

I'm also very optimistic that the people "who work tirelessly and honestly to actually make the world a better place" will keep working on it after this, whatever happens to "EA", and there will still be ways to meet them and collaborate.

Sending a hug

2
JWS 🔸
Thanks for this Lorenzo, I appreciate it <3

It's hard to see how the backlash could actually destroy GiveWell or stop Moskowitz and Tuna giving away their money through Open Phil/something that resembles Open Phil. That's a lot of EA right there.

It's hard yes, but I think the risk vectors are (note - these are different scenarios, not things that follow in chronological order, though they could):

  • Open Philanthropy gets under increasing scrutiny due to its political influence
  • OP gets viewed as a fully politicised propaganda operation from EA, and people stop associating with it or accepting its money, or call for legal or political investigations into it, etc.
  • Givewell etc disassociate themselves from EA due to EA provoking a strong negative social reaction from potential collaborators or donors
  • OP/Givewell dissociate from and stop funding the EA community for similar reasons as the above, and the EA community does not survive

Basically I think that ideas are more important than funding. And if society/those in positions of power put the ideas of EA in the bin, money isn't going to fix that

This is all speculative, but I can't help the feeling that regardless of how the OpenAI crisis resolves a lot of people now consider EA to be their enemy :(

3
Sharmake
My general thoughts on this can be stated as: I'm mostly of the opinion that EA will survive this, bar something massively wrong like the board members willfully lying or massive fraud from EAs, primarily because most of the criticism is directed to the AI safety wing, and EA is more than AI safety, after all. Nevertheless, I do think that this could be true for the AI safety wing, and they may have just hit a key limit to their power. In particular, depending on how this goes, I could foresee a reduction in AI safety power and influence, and IMO this was completely avoidable.
4
JWS 🔸
I think a lot will depend on the board's justification. If Ilya can say "we're pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn't be trusted to manage this safely" with proof, that might work - but then why not say that?[1] If it's just "we decided to go in a different direction", then firing him and demoting Brockman with little to no notice, and without informing their largest business partner and funder, it's bizarre that they took such a drastic step in the way they did. I was actually writing up my AI-risk-sceptical thoughts and what EA might want to take from that, but I think I might leave that to one side for now until I can approach it with a more even mindset 1. ^ Putting aside that I feel both you and I are sceptical that a new capability jump has emerged, or that scaling LLMs is actually a route to existential doom
7
Sharmake
I suspect this is due to the fact that quite frankly, the concerns they had about Sam Altman being unsafe on AI basically had no evidence except speculation from the EA/LW forum, which is not enough evidence at all in the corporate world/legal world, and to be quite frank, the EA/LW standard of evidence on AI risk being a big deal enough to investigate is very low, sometimes non-existent, and that simply does not work once you have to deal with companies/the legal system. More generally, EA/LW is shockingly loose, sometimes non-existent in its standards of evidence for AI risk, which doesn't play well with the corporate/legal system. This is admittedly a less charitable take than say, Lukas Gloor's take.
8
Lukas_Gloor
Haha, I was just going to say that I'd be very surprised if the people on the OpenAI board didn't have access to a lot more info than the people on the EA forum or Lesswrong, who are speculating about the culture and leadership at AI labs from the sidelines. TBH, if you put a randomly selected EA from a movement of 1,000s of people in charge of the OpenAI board, I would be very concerned that a non-trivial fraction of them probably would make decisions the way you describe. That's something that EA opinion leaders could maybe think about and address. But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually good at – or maybe I should say "some/many parts of EA are unusually good at" – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn't any groupthink among EAs. Also, "unusually good" isn't necessarily that high of a bar.])  I don't know for sure what they did or didn't consider, so this is just me going off of my general sense for people similar to Helen or Tasha. (I don't know much about Tasha. I've briefly met Helen but either didn't speak to her or only did small talk. I read some texts by her and probably listened to a talk or two.)

While I generally agree that they almost certainly have more information on what happened, which is why I'm not really certain on this theory, my main reason here is that for the most part, AI safety as a cause basically managed to get away with incredibly weak standards of evidence for a long time, until the deep learning era in 2019-, especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it's slowly improving right now). This probably influenced a lot of EA safetyists like Ilya, who almost certainly imbibed the norms of the AI safety field, and one of them is that there is a very low standard of evidence needed to claim big things, and that's going to conflict with corporate/legal standards of evidence.

But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually g

... (read more)
2
David Mathers🔸
Why did you unendorse?
5
Sharmake
I unendorsed primarily because apparently, the board didn't fire because of safety concerns, though I'm not sure this is accurate.
2
akash 🔸
I am unsure how I feel about takes like this. On one hand, I want EAs and the EA community to be a supportive bunch. So, expressing how you are feeling and receiving productive/helpful/etc. comments is great. The SBF fiasco was mentally strenuous for many, so it is understandable why anything seemingly negative for EA elicits some of the same emotions, especially if you deeply care about this band of people genuinely aiming to do the most good they can.  On the other hand, I think such takes could also contribute to something I would call a "negative memetic spiral." In this particular case, several speculative projections are expressed together, and despite the qualifying statement at the beginning, I can't help but feel that several or all of these things will manifest IRL. And when you kind of start believing in such forecasts, you might start saying similar things or expressing similar sentiments. In the worst case, the negative sentiment chain grows rapidly. It is possible that nothing consequential happens. People's moods during moments of panic are highly volatile, so five years in, maybe no one even cares about this episode. But in the present, it becomes a thing against the movement/community. (I think a particular individual may have picked up one such comment from the Forum and posted it online to appeal to their audience and elevate negative sentiments around EA?). Taking a step back, gathering more information, and thinking independently, I was able to reason myself out of many of your projections. We are two days in and there is still an acute lack of clarity about what happened. Emmett Shear, the interim CEO of OpenAI, stated that the board's decision wasn't over some safety vs. product disagreement. Several safety-aligned people at OpenAI signed the letter demanding that the board should resign, and they seem to be equally disappointed over recent events; this is more evidence that the safety vs. product disagreement likely didn't lead to Altman's

Thanks for your response Akash. I appreciate your thoughts, and I don't mind that they're off-the-cuff :)

I agree with some of what you say, and with part of what I think is your underlying point, but some other parts are less clear to me. I've tried to think through two points where I'm not clear, but please do point it out if I've got something egregiously wrong!

1) You seem to be saying that sharing negative thoughts and projections can lead others to do so, and this can then impact other people's actions in a negative way. It could also be used by anti-EA people against us.[1]

I guess I can kind of see some of this, but I'd view the cure as being worse than the disease sometimes. I think sharing how we're thinking and feeling is overall a good thing that could help us understand each other more, and I don't think self-censorship is the right call here. Writing this out, I think maybe I disagree with you about whether negative memetic spirals are actually a thing causally instead of descriptively. I think people may be just as likely a priori to have 'positive memetic spirals' or 'regressions to the vibe mean' or whatever

2) I'm not sure what 'I was able to reason myself out of many of your ... (read more)

3
akash 🔸
'Hold fire on making projections' is the correct read, and I agree with everything else you mention in point 2.  About point 1 — I think sharing negative thoughts is absolutely a-ok and important. I take issue with airing bold projections when basic facts of the matter aren't even clear. I thought you were stating something akin to "xyz are going to happen," but re-reading your initial post, I believe I misjudged. 

The HLI discussion on the Forum recently felt off to me, bad vibes all around. It seems very heated, not a lot of scout mindset, and reading the various back-and-forth chains I felt like I was 'getting Eulered' as Scott once described. 

I'm not an expert on evaluating charities, but I followed a lot of links to previous discussions and found this discussion involving one of the people running an RCT on Strongminds (which a lot of people are waiting for the final results of), who was highly sceptical of SM efficacy. But the counterarguments offered in the thread seem just as valid to me? My current position, for what it's worth,[1] is:

  • the initial Strongminds results of 10x cash transfer should raise a sceptical response. most things aren't that effective
  • it's worth there being exploration of what the SWB approach would recommend as the top charities (think of this as trying other bandits in a multi-armed bandit charity evaluation problem)
  • it's very difficult to do good social science, and the RCT won't give us dispositive evidence about the effectiveness of Strongminds (especially at scale), but it may help us update. In general we should be mindful of how
... (read more)

Some people with high-karma accounts seem to be making some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who are in either direction).

I think this is a significant datum in favor of being able to see the strong up/up/down/strong down spread for each post/comment. If it appeared that much of the karma activity was the result of a handful of people strongvoting each comment in a directional activity, that would influence how I read the karma count as evidence in trying to discern the community's viewpoint. More importantly, it would probably inform HLI's takeaways -- in its shoes, I would treat evidence of a broad consensus of support for certain negative statements much, much more seriously than evidence of carpet-bomb voting by a small group on those statements.

4
JP Addison🔸
Indeed our new reacts system separates them. But our new reacts system also doesn't have strong votes. A problem with displaying the number of types of votes when strong votes are involved is that it much more easily allows for deanonymization if there are only a few people in the thread.
2
Jason
That makes sense. On the karma side, I think some of my discomfort comes from the underlying operationalization of post/comment karma as merely additive of individual karma weights.  True opinion of the value of the bulk of posts/comments probably lies on a bell curve, so I would expect most posts/comments to have significantly more upvotes than strong upvotes if voters are "honestly" conveying preferences and those preferences are fairly representative of the user base. Where the karma is coming predominantly from strongvotes, the odds that the displayed total reflects the opinion of a smallish minority that feels passionately is much higher. That can be problematic if it gives the impression of community consensus where no such consensus exists. If it were up to me, I would probably favor a rule along the lines of: a post/comment can't get more than X% of its net positive karma from strongvotes, to ensure that a high karma count reflects some degree of breadth of community support rather than voting by a small handful of people with powerful strongvotes. Downvotes are a bit trickier, because the strong downvote hammer is an effective way of quickly pushing down norm-breaking and otherwise problematic content, and I think putting posts into deep negative territory is generally used for that purpose.
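The rule floated in this comment can be sketched concretely. This is a minimal sketch under assumed mechanics: the 50% cap, the function, and the idea of summing vote weights directly are all illustrative placeholders, not how the Forum actually computes karma.

```python
# A sketch of the proposed rule: a post's displayed karma can take at most a
# fixed share (here 50%) of its net positive total from strong votes. The cap
# and the vote weights are placeholder assumptions, not actual Forum mechanics.

def capped_karma(normal: float, strong: float, max_strong_share: float = 0.5) -> float:
    """Cap the strongvote contribution to a net-positive karma total."""
    total = normal + strong
    if total <= 0 or strong <= 0:
        return total  # rule only targets net-positive strongvote pile-ons
    # Largest strongvote total consistent with:
    #   strong_capped <= max_strong_share * (normal + strong_capped)
    strong_cap = max_strong_share / (1 - max_strong_share) * max(normal, 0.0)
    return normal + min(strong, strong_cap)

# Two weak upvotes (+2) plus two strong upvotes (+18) display as 4, not 20:
print(capped_karma(normal=2, strong=18))   # 4.0
# Broad support with a few strongvotes is unaffected:
print(capped_karma(normal=10, strong=3))   # 13.0
```

The effect is that a handful of powerful strongvotes can amplify, but never dominate, a total that lacks breadth of ordinary support, which is the consensus-signal property the comment is after.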
2
David M
Looks like this feature is being rolled out on new posts. Or at least one post: https://forum.effectivealtruism.org/posts/gEmkxFuMck8SHC55w/introducing-the-effective-altruism-addiction-recovery-group
9
Sol3:2
EA is just a few months out from a massive scandal caused in part by socially enforced artificial consensus (FTX), but judging by this post nothing has been learned and the "shut up and just be nice to everyone else on the team" culture is back again, even when truth gets sacrificed in the process. No one thinks HLI is stealing billions of dollars, of course, but the charge that they keep quasi-deliberately stacking the deck in StrongMinds' favour is far from outrageous and should be discussed honestly and straightforwardly.

JWS' quick take has often been in negative agreevote territory and is +3 at this writing. Meanwhile, the comments of the lead HLI critic suggesting potential bad faith have seen consistent patterns of high upvote / agreevote. I don't see much evidence of "shut up and just be nice to everyone else on the team" culture here.

5
JWS 🔸
Hey Sol, some thoughts on this comment:

* I don't think the Forum's reaction to the HLI post has been "shut up and just be nice to everyone else on the team", as Jason's response suggested.
* I don't think mine suggests that either! In fact, my first bullet point has a similar sceptical prior to what you express in this comment.[1] I also literally say "holding charity evaluators to account is important to both the EA mission and EAs' identity", and point out that I don't want to sacrifice epistemic rigour. In fact, one of my main points is that people - even those disagreeing with HLI - are shutting up too much! I think disagreement without explanation is bad, and I salute the thorough critics on that post who have made their reasoning for putting HLI in 'epistemic probation' clear.
* I don't suggest 'sacrificing the truth'. My position is that the truth about StrongMinds' efficacy is hard to get a strong signal on, and therefore HLI should have been more modest early in their history, instead of framing it as the most effective way to donate.
* As for the question of whether HLI were "quasi-deliberately stacking the deck", well, I was quite open that I am confused about where the truth is, and find it difficult to adjudicate what the correct takeaway should be.

I don't think we really disagree that much, and I definitely agree that the HLI discussion should proceed transparently and that EA has a lot to learn from the last year, including FTX. If you re-read my Quick Take, I think you'll find I'm not taking the position you think I am.

1. ^ That's my interpretation of course, please correct me if I've misunderstood

[edit: a day after posting, I think this perhaps reads more combative than I intended? It was meant to be more 'crisis of faith, looking for reassurance if it exists' than 'dunk on those crazy longtermists'. I'll leave the quick take as-is, but maybe clarifying my intentions might be useful to others]

Warning! Hot Take! 🔥🔥🔥 (Also v rambly and not rigorous)

A creeping thought has entered my head recently that I haven't been able to get rid of... 

Is most/all longtermist spending unjustified?

The EA move toward AI Safety and Longtermism is often based on EV calculations showing that the long-term future is overwhelmingly valuable, and thus that safeguarding it is the most cost-effective intervention.

However, more in-depth looks at the EV of x-risk prevention (1, 2) cast significant doubt on those EV calculations, which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.

But my doubts get worse...

GiveWell estimates around $5k to save a life. So I went looking for some longtermist calculations, and I really couldn't find any robust ones![1] Can anyone point me to some robust calculations for longtermist funds/organisations where they ... (read more)

which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.

Why do you think this?

For some very rough maths (apologies in advance for any errors), even Thorstad's paper (with a 2-century-long time of perils, a 0.1% post-peril risk rate, no economic/population growth, no moral progress, people live for 100 years) suggests that reducing p(doom) by 2% is worth as much as saving 16 x 8 billion lives - i.e. each microdoom is worth 6.4 million lives. I think we can buy microdooms more cheaply than $5,000 x 6.4 million = $32 billion each.
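The arithmetic above can be sanity-checked in a few lines (the 2% reduction, the 16 x 8 billion lives figure, and the $5,000-per-life cost are the commenter's assumptions drawn from Thorstad's model, not established numbers):

```python
# Back-of-the-envelope check of the microdoom arithmetic in the comment above.
# All inputs are the commenter's assumptions, not authoritative figures:
#   - a 2% absolute reduction in extinction risk ("p(doom)")
#   - that reduction valued at 16 * 8 billion lives under Thorstad's model
#   - a GiveWell-style cost of ~$5,000 per life saved

abs_risk_reduction = 0.02        # 2% absolute reduction in p(doom)
lives_equivalent = 16 * 8e9      # value of that reduction, in lives
cost_per_life = 5_000            # USD

microdooms = abs_risk_reduction / 1e-6           # 1 microdoom = 1e-6 p(doom)
lives_per_microdoom = lives_equivalent / microdooms
breakeven_cost = lives_per_microdoom * cost_per_life

print(f"{microdooms:,.0f} microdooms")            # 20,000
print(f"{lives_per_microdoom:,.0f} lives each")   # 6,400,000
print(f"${breakeven_cost:,.0f} per microdoom")    # $32,000,000,000
```

So under these assumptions, buying a microdoom for anything under ~$32bn beats the $5k-per-life benchmark, which matches the figure in the comment.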

(I can't actually find those calculations in Thorstad's paper - could you point them out to me? afaik he mostly looks at the value of a fractional reduction in x-risk, while microdooms are an absolute reduction, if I understand correctly? Happy to be corrected or pointed in the right direction!)

My concerns here are twofold:

1 - Epistemological: let's say those numbers from the Thorstad paper are correct, and that a microdoom has to cost <= $32bn to be GiveWell cost-effective. The question is, how would we know this? In his recent post, Paul Christiano thinks that RSPs could lead to a '10x reduction' in AI risk. How does he know this? Is this just a risk reduction this century? This decade? Is it a permanent reduction?

It's one thing to argue that under a set of conditions X, work on x-risk reduction is cost-effective, as you've done here. But I'm more interested in the question of whether conditions X hold, because that's where the rubber hits the road. If those conditions don't hold, then that's why longtermism might not ground x-risk work.[1]

There's also the question of persistence. I think the Thorstad model either assumes the persistence of x-risk reduction, or the persistence of a low-risk p... (read more)

6
Larks
He assumes 20% risk and a 10% relative risk reduction, which I translate into a 2% absolute reduction in the risk of doom, and then see the table on p12.
3
nevakanezzar
Isn't the move here something like, "If doom soon, then all pre-doom value nets to zero"? Which tbh I'm not sure is wrong. If I expect doom tomorrow, all efforts today should be to reduce it; one night's sleep not being bitten by mosquitoes doesn't matter. Stretching this outward in time doesn't change the calculus much for a while, maybe about a lifetime or a few lifetimes or so. And a huge chunk of x-risk is concentrated in this century.
7
JWS 🔸
The x-risk models actually support the opposite conclusion though. They are generally focused on the balance of two values, v and r - where v is the value of a time period and r is the risk of extinction in that period. If r is sufficiently high, then it operates as a de facto discount rate on the future, which means that the most effective way to increase good is to increase the present v rather than reduce r.

For an analogy, if a patient has an incredibly high risk of succumbing to terminal cancer, the way to increase their wellbeing may be to give them morphine and palliative care rather than prescribe risky treatments that may or may not work (and might only be temporary).

Now one could argue against this by saying 'do not go gentle into that good night; in the face of destruction we should still do our best'. I have sympathy with that view, but it's not grounded in the general 'follow the EV' framework of EA, and it would have consequences beyond supporting longtermism.
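A minimal sketch of that de facto discounting (the per-period values and risk rates are illustrative assumptions, not taken from any particular model in the literature):

```python
# Toy model of the v-and-r framework: each period yields value v, and there
# is an extinction probability r per period. Survival to period t has
# probability (1 - r)**t, so r behaves like a discount rate on future value.
# Illustrative numbers only - not anyone's actual risk estimates.

def ev_share_in_first(n: int, r: float, horizon: int = 100_000) -> float:
    """Fraction of total expected value that accrues in the first n periods."""
    weights = [(1 - r) ** t for t in range(horizon)]
    return sum(weights[:n]) / sum(weights)

for r in (0.001, 0.01, 0.2):
    # With high per-period risk, nearly all expected value sits in the
    # near term; with low risk, most of it lies in the far future.
    print(r, round(ev_share_in_first(50, r), 3))
```

With r = 0.2 per period, essentially all expected value falls in the first 50 periods, while with r = 0.001 only a few percent does - which is the sense in which high r pushes the optimiser toward raising present v rather than shaving r.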

Some personal reflections on EAG London:[1]

  • Congrats to the CEA Events Team for their hard work and for organising such a good event! 👏
  • The vibe was really positive! Anecdotally I had heard that the last EAG SF was gloom central, but this event felt much more cheery. I'm not entirely sure why, but it might have had something to do with the open venue, the good weather, or there being more places to touch grass in London compared to the Bay. 
  • I left the conference intellectually energised (though physically exhausted). I'm ready to start drafting some more Forum Post ideas that I will vastly overestimate my ability to finish and publish 😌
  • AI was (unsurprisingly) the talk of the town. But I found that quite a few people,[2] myself included, were actually more optimistic on AI because of the speed of the social response to AI progress and how pro-safety it seems to be, along with low polarisation along partisan lines.
  • Related to the above, I came away with the impression that AI Governance may be as important as, if not more important than, Technical Alignment in the next 6-12 months. The window for significant political opportunity is open now but may not stay open forever, so the AI Governa
... (read more)
7
Robi Rahman
I assume any event in SF gets a higher proportion of AI doomers than one in London.

I think (at least) somebody at Open Philanthropy needs to start thinking about how to react to an increasing move towards portraying it, either sincerely or strategically, as a shadowy cabal-like entity influencing the world in an 'evil/sinister' way, similar to how many right-wingers across the world believe that George Soros is contributing to the decline of Western Civilization through his political philanthropy.

Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles. In many think pieces responding to WWOTF or FTX or SBF, they get extensively cited as a primary EA-critic, for example.

I think the 'ignore it' strategy was a mistake and I'm afraid the same mistake might happen again, with potentially worse consequences.

  1. ^

    Do people realise that they're going to release a documentary sometime soon?

Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles.

You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think "ignore it" is a really really solid heuristic in an era where conflict drives clicks.

I think responding in a way that is calm, boring, and factual will help. It's not going to get Émile to publicly recant anything. The goal is just for people who find Émile's stuff to see that there's another side to the story. They aren't going to publicly say "yo Émile I think there might be another side to the story". But fewer of them will signal boost their writings on the theory that "EAs have nothing to say in their own defense, therefore they are guilty". Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.

4
Ebenezer Dukakis
Maybe it would be useful to discuss concrete examples of engagement and think about what's been helpful/harmful. Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive. I think this post was probably positive (I read it around a year ago, my recollections are a bit vague). But I guess that post itself could be an argument that even attempting to engage with Émile in good faith is potentially dangerous. Perhaps the right strategy is something like: assume good faith, except with specific critics who have a known history of bad faith. And consider that your comparative advantage may lie elsewhere, unless others would describe you as unusually good at being charitable.

Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive.

What makes you think this? I would guess it was pretty negative, by legitimizing Torres, and most of the donations funging heavily against other EA causes.

5
Ebenezer Dukakis
I would guess any legitimization of Émile by Nathan was symmetrical with a legitimization of Nathan by Émile. However I didn't get the sense that either was legitimizing the other, so much as both were legitimizing GiveDirectly. It seems valuable to legitimize GiveDirectly, especially among the "left-ish academic circles" reading Émile who might otherwise believe that Émile is against all EA causes/organizations. (And among "left-ish academics" who might otherwise believe that Nathan scorns "near-termist" causes.) There's a lot of cause prioritization disagreement within EA, but it doesn't usually get vicious, in part because EAs have "skin in the game" with regard to using their time & money in order to make the world a better place. One hypothesis is that if we can get Émile's audience to feel some genuine curiosity about how to make their holiday giving effective, they'll wonder why some people are longtermists. I think it's absolutely fine to disagree with longtermism, but I also think that longtermists are generally thoughtful and well-intentioned, and it's worth understanding why they give to the causes they do. Do you have specific reasons to believe this? It's a possibility, but I could just as easily see most donations coming from non-EAs, or EAs who consider GiveDirectly a top pick anyways. Even if EA donors didn't consider GiveDirectly a top pick on its own, they might have considered "GiveDirectly plus better relations with Émile with no extra cost" to be a top pick, and I feel hesitant to judge this more harshly than I would judge any other EA cause prioritization. BTW, a mental model here is: https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty If Émile is motivated to attack EA because they feel rejected by it, it's conceivable to me that their motivation for aggression would decrease if a super kind and understanding therapist-type person listened to them really well privately and helped them feel heard & understood. The fun

FYI, Émile’s pronouns are they/them.

[Edit: I really don't like that this comment got downvoted and disagree voted...]

7
JWS 🔸
I agree it's a solid heuristic, but heuristics aren't foolproof and it's important to be able to realise where they're not working. I remembered your tweet about choosing intellectual opponents wisely because I think it'd be useful to show where we disagree on this:

1 - Choosing opponents is sometimes not up to you. As an analogy, being in a physical fight only takes one party to throw punches. When debates start to have significant consequences socially and politically, it's worth considering that letting hostile ideas spread unchallenged may work out badly in the future.

2 - I'm not sure it's clear that "the silent majority can often already see their mistakes" in this case. I don't think this is a minor view on EA. I think a lot of people are sympathetic to Torres' point of view, and a significant part of that is (in my opinion) because there wasn't a lot of pushback when they started making these claims in major outlets.

On my first comment, I agree that I don't think much could have been done to stop Émile turning against EA,[1] but I absolutely don't think it was inevitable that they would have had such a wide impact. They made the Bulletin of the Atomic Scientists! They're partnered with Timnit, who has large influence and sympathy in the AI space! People who could have been potential allies in a coalition basically think our movement is evil.[2] They get sympathetically cited in academic criticisms of EA.

Was some pushback going to happen? Yes, but I don't think inevitably at this scale. I do think more could have been done to actually push back on their claims that went over the line in terms of hostility and accuracy, and I think that could have led to a better climate at this critical juncture for AI discussions and policy, where we need to build coalitions with communities who don't fully agree with us. My concern is that this new wave of criticism and attack on OpenPhil might not simply fade away but could instead cement an anti-EA narrative that could
8
titotal
I don't think the hostility between the near-term harm and AI x-risk camps would have been prevented by more attacks rebutting Émile Torres. The real problem is that the near-term AI harm people perceive AI x-riskers as ignoring their concerns and actively making the near-term harms worse. Unfortunately, I think this sentiment is at least partly accurate. When Timnit got pushed out of Google for pointing out near-term harms of AI, there was almost no support from the x-risk crowd (I can't find any big-name EAs on this list, for example). This probably contributed to her current anti-EA stance. As for real-world harms, well, we can just say that OpenAI was started by an x-risker, and has kickstarted an AI race, causing a myriad of real-world harms such as scams, art plagiarism, data theft, etc. The way to actually fix this would be actual solidarity and bridge-building on dealing with the short-term harms of AI.
8
JWS 🔸
I don't want to fully re-litigate this history, as I'm more concerned about the future risk of Open Philanthropy being blindsided by a political attack (it might be low probability, but you'd think OpenPhil would be open to concerns about low-chance, high-impact threats to it).

Agreed. It predated Émile's public anti-EA turn for sure. But it was never inevitable. Indeed, supporting Timnit during her firing from Google may have been a super low-cost way to show solidarity. It might have meant that Émile and Timnit wouldn't have become allies who have strong ideological influence over a large part of the AI research space.

I'd like to think so too, but this is a bridge that needs to be built from both ends imo, as I wouldn't recommend a unilateral action unless I really trusted the other parties involved. There seems to have been some momentum towards more collaboration after the AI Safety Summit though. I hope the Bletchley Declaration can be an inflection point for more of this.
2[anonymous]
What do you see as the risk of building a bridge if it's not reciprocated? 
8
David Mathers🔸
'The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.' What would this look like? I feel like, if all you do is say nice things,  that is a good idea usually, but it won't move the dial that much (and also is potentially lying, depending on context and your own opinions; we can't just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you're advocating spending actual EA money and labour on this, surely you'd first need to make a case that stuff "dealing with the short term harms of AI" is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don't believe in AI X-risk*, so you think it's an improvement if AI-related money is spent on short term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI X-risk is comparable/better as a cause area than standard near-term EA stuff or biorisk, can't take that line.  *I am also fairly skeptical it is a good use of EA money and effort for what it's worth, though I've ended up working on it anyway. 
5
titotal
This seems a little zero-sum, which is not how successful social movements tend to operate. I'll freely confess that I am on the "near-term risk" team, but that doesn't mean the two groups can't work together.

A simplified example: say 30% of a council are concerned about near-term harms, and 30% are concerned about x-risk, and each wants policies passed that address their own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither of them will get what they want. But if they work together, they have a majority, and can pass a combined bill that addresses both near-term harm and AI x-risk, benefiting both.

Unfortunately, the best time to do this bridge-building and alliance-making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
5
harfe
You haven't actually addressed the main question of the previous comment: what would this bridge-building look like? Your council example does not match the current reality very well. It feels like you also sidestep other stuff in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on "short-term harms"? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed-net and the marginal AI x-risk research? Or would the main reason be that we buy sympathy with the group of people concerned about "short-term harms", so we can later pass regulations together with them to reduce both "short-term harm" and AI x-risk?
5
MvK🔸
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It's been a while since I read this so I'm not sure it is what you are looking for, but Gideon Futerman had some ideas for what "bridge building" might look like.)
3
harfe
I just read most of the article. It was not that satisfying in this context. Most of it argues that we should work together (which I don't disagree with). And I imagine it will be quite hard to convince most AI x-risk people that "whether AI is closer to a stupid 'stochastic parrot' or on the 'verge-of-superintelligence' doesn't really matter". If we were to adopt Gideon's desired framing, it looks like we would need to make sacrifices in epistemics. Related: some of Gideon's suggestions, such as protest or compute governance, are already being pursued. Not sure if that counts as bridge-building though, because these might be good ideas anyway.
6
AnonymousTurtle
.
2
JWS 🔸
For the record, I'm very willing to be corrected/amend my Quick Take (and my beliefs on this is in general) if "ignore it" isn't an accurate summary of what was done. Perhaps there was internal action taken with academic spaces/EA organisations that I'm not aware of? I still think the net effect of EA actions in any case was closer to "ignore it", but the literal strong claim may be incorrect.
9
JWS 🔸
Edit: I actually think these considerations should apply to many of the comments in this sub-thread, not just my own. There's a lot to disagree about, but I don't think any comment in this chain is worthy of downvotes (especially strong ones).

A meta-thought on this take given the discussion it's generated. Currently this is at net 10 upvotes from 20 total votes at time of writing - but is ahead 8 to 6 on agree/disagree votes. Based on Forum voting norms, I don't think this is particularly deserving of downvotes given the suggested criteria, especially strong ones. Disagreevotes - go ahead, be my guest! Comments pointing out where I've gone wrong - I actively encourage you to do so!

I put this in a Quick Take, not a top-level post, so it's not as visible on the Forum front page (and the point of a QT is for 'exploratory, draft-stage, rough thoughts' like this). I led off with "I think" - I'm just voicing my concerns about the atmosphere surrounding OpenPhil and its perception. It's written in good faith, albeit with a concerned tone. I don't think it violates what the EA Forum should be about.[1]

I know these kinds of comments are annoying, but I still wanted to point out that this vote distribution feels a bit unfair, or at least unexplained, to me. Sure, silent downvoting is a signal, but it's a crude and noisy one and I don't really have much to update off here. If you downvoted but don't want to get involved in a public discussion about it, feel free to send me a DM with feedback instead. We don't have to get into a discussion about the merits (if you don't want to!), I'm just confused at the vote distribution.

1. ^ Again, especially in Quick Takes
5
Radical Empath Ismam
The harsh criticism of EA has only been a good thing, forcing us to have higher standards and rigour. We don't want an echo chamber.

I would see it as a thoroughly good thing if Open Philanthropy were to combat the portrayal of itself as a shadowy cabal (like in the recent Politico piece), for example by:
* Having more democratic buy-in with the public, e.g. having a bigger public presence in the media and relying on a more diverse pool of funding (i.e. less billionaire funding)
* Engaging in less political lobbying
* Being more transparent about the network of organisations around them, e.g. from the Politico article: "... said Open Philanthropy's use of Horizon ... suggests an attempt to mask the program's ties to Open Philanthropy, the effective altruism movement or leading AI firms"
8
MvK🔸
1. I am not convinced that "having a bigger public presence in media" is a reliable way to get democratic buy-in. (There is also some "damned if you do, damned if you don't" dynamic going on here - if OP was constantly engaging in media interactions, they'd probably be accused of "unduly influencing the discourse/the media landscape".) Could you describe what a more democratic OP would look like?
2. You mention "less billionaire funding" - OP was built on the idea of giving away Dustin's and Cari's money in the most effective way. OP is not fundraising, it is grantmaking! So how could it, as you put it, "rely on a more diverse pool of funding"? (also: https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money) I also suspect we would see the same dynamic as above: if OP did actively try to secure additional money in the form of government grants, they'd be maligned for absorbing public resources in spite of their own wealth.
3. I think a blanket condemnation of political lobbying, or the suggestion to "do less" of it, is not helpful. Advocating for better policies (in animal welfare, GHD, pandemic preparedness etc.) is in my view one of the most impactful things you can do. I fear we are throwing the baby out with the bathwater here.

Suing people nearly always makes you look like the assholes I think. 

As for Torres, it is fine for people to push back against specific false things they say. But fundamentally, even once you get past the misrepresentations, there is a bunch of stuff that they highlight that various prominent EAs really do believe and say that genuinely does seem outrageous or scary to most people, and no amount of pushback is likely to persuade most of those people otherwise. 

In some cases, I think that outrage fairly clearly isn't really justified once you think things through very carefully: for example, the quote from Nick Beckstead about saving lives being, all things equal, higher value in rich countries because of flow-through effects, which Torres always says makes Beckstead a white supremacist. But in other cases, well, it's hardly news that utilitarianism has a bunch of implications that strongly contradict moral commonsense, or that EAs are sympathetic to utilitarianism. And 'oh, but I don't endorse [outrageous-sounding view], I merely think there is like a 60% chance it is true, and you should be careful about moral uncertainty' does not sound very reassuring to a norma... (read more)

First, I want to thank you for engaging David. I get the sense we've disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing them, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad-faith - it's not my intention, but I do admit I've somewhat lost my cool on this topic of late. But in my defence, sometimes that's the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.

As for your comment/reply though, I'm not entirely sure what to make of it. To try to clarify, I was trying to understand why the Twitter discourse between people focused on AI x-risk and the FAccT community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former. Instead, I feel like you've steered the conversation away to a discussion about the implications of naïve utilitarianism. I also fee... (read more)

9
quinn
I mean, in a sense, a venue that hosts Torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty - except insofar as they haven't seen or don't believe this Fuentes person.
4
David Mathers🔸
I guess I thought my points about total utilitarianism were relevant, because 'we can make people like us more by pushing back more against misrepresentation' is only true insofar as the real views we have will not offend people. I'm also just generically anxious about people in EA believing things that feel scary to me. (As I say, I'm not actually against people correcting misrepresentations, obviously.) I don't really have much sense of how reasonable critics are or aren't being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it's a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply that we should be prepared to take big chances of AI catastrophe rather than pass up small chances of lots of very happy digital people.

On Torres specifically: I don't really follow them in detail (these topics make me anxious), but I didn't intend to claim that they are a fair or measured critic, just that they have a decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that's just false. I think motivated reasoning is a more likely explanation for why they say false things than conscious lying, but that's just because that's my prior about most people. (Edit: Actually, I'm a little less sure of that, after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)

Regarding Gebru calling Will a eugenicist: well, I really doubt you could "sue" over that, or demonstrate to the people most concerned about this that he doesn't count as one by any reasonable definition. Some people use "eugenicist" for any preference that a non-disabled person comes into existence rather than a different disabled person. And Will does have that preference. In What We Owe the Future,

Does anyone work at, or know somebody who works at Cohere?

Last November their CEO Aidan Gomez published an anti-effective-altruism internal letter/memo (Bloomberg reporting here, apparently confirmed as true though no further comment)

I got the vibe from Twitter/X that Aidan didn't like EA, but making an internal statement about it to your company seems really odd to me? Like why do your engineers and project managers need to know about your anti-EA opinions to build their products? Maybe it came after the AI Safety Summit?

Does anyone in the AI Safety Space... (read more)

9
gwern
I don't think it's odd at all. As the Bloomberg article notes, this was in response to the martyrdom of Saint Altman, when everyone thought the evil Effective Altruists were stealing/smashing OA for no reason and destroying humanity's future (as well as the fortune of many of the individuals involved, to a degree few of them bothered to disclose) and/or turning it over to the Chinese commies. An internal memo decrying 'Effective Altruism' was far from the most extreme response at the time; but I doubt Gomez would write it today, if only because so much news has come out since then and it no longer looks like such a simple morality play. (For example, a lot of SV people were shocked to learn he had been fired from YC by Paul Graham et al for the same reason. That was a very closely held secret.)
2
JWS 🔸
Ah good point that it was in the aftermath of the OpenAI board weekend, but it still seems like a very extreme/odd reaction to me (though I have to note the benefit of hindsight as well as my own personal biases). I still think it'd be interesting to see what Aidan actually said, and/or why he's formed such a negative opinion of EA, but I think your right than the simplest explanation here is:
2
Linch
I'd be a bit surprised if you could find people on this forum who (still) work at Cohere. Hard to see a stronger signal to interview elsewhere than your CEO explaining in a public memo why they hate you. I agree it's odd in the sense that most companies don't do it. I see it as a attempt to enforce a certain kind of culture (promoting conformity, discouragement of dissent, "just build now" at the expense of ethics, etc) that I don't much care for. But the CEO also made it abundantly clear he doesn't like people who think like me either, so ¯\_(ツ)_/¯. 

Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community, and while I'll still be reading and engaging I do have a limited amount of time I want to spend on the Forum and I think it'd be better for me to move that focus to posts rather than comments for a bit.[1]

If you do want to get in touch about anything, please reach out and I'll try my very best... (read more)

status: very rambly. This is an idea I want to explore in an upcoming post about longtermism, would be grateful to anyone's thoughts. For more detailed context, see https://plato.stanford.edu/entries/time/ for debate on the nature of time in philosophy

Does rejecting longtermism require rejecting the B-Theory of Time (i.e. eternalism, the view that the past, present, and future have the same ontological status)? Saying that future people don't exist (and therefore can't be harmed, can't lose out by not existing, don't have the same moral rights as present '... (read more)

5
MichaelStJules
FWIW, someone could reject longtermism for reasons other than specific person-affecting views or even pure time preferences. Even without a universal present, there's still your present (and past), and you can do ethics relative to that. Maybe this doesn't seem impartial enough, and could lead to agents with the same otherwise impartial ethical views and same descriptive views disagreeing about what to do, and those are undesirable? OTOH, causality is still directed and we can still partially order events that way or via agreement across reference frames. The descendants of humanity, say, 1000 years from your present (well, humanity here in our part of the multiverse, say) are still all after your actions now, probably no matter what (physically valid) reference frame you consider, maybe barring time travel. This is because humans' reference frames are all very similar to one another, as differences in velocity, acceleration, force and gravity are generally very small. So, one approach could be to rank or estimate the value of your available options on each reference frame and weigh across them, or look for agreement or look for Pareto improvements. Right now for you, the different reference frames should agree, but they could come apart for you or other agents in the future if/when we or our descendants start colonizing space, traveling at substantial fractions of the speed of light. Also, people who won't come to exist don't exist under the B-theory, so they can’t experience harm. Maybe they're harmed by not existing, but they won't be around to experience that. Future people could have interests, but if we only recognize interests for people that actually exist under the B-theory, then extinction wouldn't be bad for those who never come to exist as a result, because they don't exist under the B-theory.
2
JWS 🔸
Great response Michael! Made me realise I'm conflating a lot of things (so bear with me) By longtermism, I really mean MacAskill's: * future people matter * there could be a lot of them * our actions now could affect their future (in some morally significant way) And in that sense, I really think only bullet 1 is the moral claim. The other two are empirical, about what our forecasts are, and what our morally relevant actions are. I get the sense that those who reject longtermism want to reject it on moral grounds, not empirical ones, so they must reject bullet-point 1. The main ways of doing so are, as you mention, person-affecting views or a pure-rate of time preference, both of which I am sceptical are without their difficult bullets to bite.[1] The argument I want to propose is this: 1. The moral theories we regard as correct ought to cohere with our current best understandings of physics[2] 2. The Special Theory of Relativity (STR) is part of our current best understanding of physics 3. STR implies that there is no universal present moment (i.e. without an observer's explicit frame of reference) 4. Some set of person-affecting views [P] assume that there is a universal present moment (i.e. that we can clearly separate some people as not in the present and therefore not worthy of moral concern, and others which do have this property) 5. From 3 & 4, STR and P are contradictory 6. From 1 & 5, we ought to reject all person-affecting views that are in P And I would argue that a common naïve negative reactions to longtermism (along the liens of) "potential future people don't matter, and it is evil to do anything for them at the expense of present people since they don't exist" is in P, and therefore ought to be rejected. In fact, the only ways to really get out of this seem to be either that someone's chosen person-affecting views are not in P, or that 1 is false. The former is open to question of course, and the latter seems highly suspect. The point

I've generally been quite optimistic that the increased awareness AI xRisk has got recently can lead to some actual progress in reducing the risks and harms from AI. However, I've become increasingly sad at the ongoing rivalry between the AI 'Safety' and 'Ethics' camps[1] 😔 Since the CAIS Letter was released, there seems to have been an increasing level of hostility on Twitter between the two camps, though my impression is that the holistility is mainly one-directional.[2]

I dearly hope that a coalition of some form can be built here, even if it is an... (read more)

5
David Mathers🔸
What does not "remaining passive" involve? 
2
JWS 🔸
I can't say I have a strategy David. I've just been quite upset and riled up by the discourse over the last week just as I had gained some optimism :( I'm afraid that by trying to turn the other cheek to hostility, those working to mitigate AI xRisk end up ceding the court of public opinion to those hostile to it. I think some suggestions would be: * Standing up to, and callying out, bullying in these discussions can cause a preference cascade of pushback to it - see here - but someone needs to stand up for people to realise that dominant voices are not representative of a field, and silence may obscure areas for collaboration and mutual coalitions to form. * Being aware of what critiques of EA/AI xRisk get traction in adjacent communities. Some of it might be malicious, but a lot of it seems to be a default attitude of scepticism merged with misunderstandings. While not everyone would change their mind, I think people reaching 'across the aisle' might correct the record in many people's minds. Even if not for the person making the claims, perhaps for those watching and reading online.  * Publicly pushing back on Torres. I don't know what went down when they were more involved in the EA movement that caused their opinion to flip 180 degrees, but I think the main 'strategy' has been to ignore their work and not respond to their criticism. The result: their ideas gaining prominence in the AI Ethics field, publications in notable outlets, despite acting consistently in bad faith. To their credit, they are voraciously productive in their output and I don't expect to it slow down. Continuing with a failed strategy doesn't sound like the right call here. * In cases of the most severe hostility, potential considering legal or institutional action? In this example, can you really just get away with calling someone a eugenicist when it's so obviously false? But there have been cases where people have successfully sued for defamation for statements made on Twitter. That
2
quinn
I would recommend trying to figure out how much loud people matter. Like it's unclear if anyone is both susceptible to sneer/dunk culture and potentially useful someday. Kindness and rigor come with pretty massive selection effects, i.e., people who want the world to be better and are willing to apply scrutiny to their goals will pretty naturally discredit hostile pundits and just as naturally get funneled toward more sophisticated framings or literatures.  I don't claim this attitude would work for all the scicomm and public opinion strategy sectors of the movement or classes of levers, but it works well to help me stay busy and focused and epistemically virtuous.  I wrote some notes about a way forward last february, I just CC'd them to shortform so I could share with you https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=nskr6XbPghTfTQoag  related comment I made: https://forum.effectivealtruism.org/posts/nsLTKCd3Bvdwzj9x8/ingroup-deference?commentId=zZNNTk5YNYZRykbTu 

Oh hi. Just rubber-ducking a failure mode some of my Forum takes[1] seem to fall into, but please add your takes if you think that would help :)

----------------------------------------------------------------------------

Some of my posts/comments can be quite long - I like responding with as much context as possible on the Forum, but as some of the original content itself is quite long, that means my responses can be quite long! I don't think that's necessarily a problem in itself, but the problem then comes with receiving disagree votes without commen... (read more)

Has anyone else listened to the latest episode of Clearer Thinking ? Spencer interviews Richard Lang about Douglas Harding's "Headless Way", and if you squint enough it's related to the classic philosophical problems of consciousness, but it did remind me a bit of Scott A's classic story "Universal Love, Said The Cactus Person" which made me laugh. (N.B. Spencer is a lot more gracious and inquisitive than the protagonist!)

But yeah if you find the conversation interesting and/or like practising mindfulness meditation, Richard has a series of guided meditati... (read more)

In this comment I was going to quote the following from R. M. Hare:

"Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things - there is no difference in the 'subjective' concern which people have for things, only in their 'objective' value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except 'None whatever'?"

I remember... (read more)

I go on holiday for a few days and like everything community-wise explodes. Current mood.

Curated and popular this week
Relevant opportunities