This is a special post for quick takes by Ozzie Gooen. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Personally, I feel fairly strongly convinced to favor interventions that could help the future beyond 20 years from now. (A much lighter version of "Longtermism").
If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on-the-margin.
I feel incredibly unsatisfied with the public dialogue around AI safety strategy now. From what I can tell, there's some intelligent conversation happening among a handful of people at the Constellation coworking space, but little of it is visible publicly. I think many people outside of Constellation are working with simplified models, like "AI is generally dangerous, so we should slow it all down," as opposed to something like, "Really, there are three scary narrow scenarios we need to worry about."
I recently spent a week in DC and found it interesting. But my impression is that a lot of people there are focused on fairly low-level details, without a great sense of the big-picture strategy. For example, there's a lot of work i... (read more)
Thanks for the comment, I think this is very astute.
~
I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.
I don't think that all AI safety orgs are actually fully funded since there are orgs that OP cannot fund for reasons (see Trevor's post and also OP's individual recommendations in AI) other than cost-effectiveness and also OP cannot and should not fund 100% of every org (it's not sustainable for orgs to have just one mega-funder; see also what Abraham mentioned here). Also there is room for contrarian donation takes like Michael Dickens's.
2
Ozzie Gooen
That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP.
For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post
While a bunch of these salaries are on the high side, not all of them are.
OP has provided very mixed messages around AI safety. They've provided surprisingly little funding / support for technical AI safety in the last few years (perhaps 1 full-time grantmaker?), but they seem to have provided more support for AI safety community building / recruiting.
Yeah, I find myself very confused by this state of affairs. Hundreds of people are being funneled through the AI safety community-building pipeline, but there’s little funding for them to work on things once they come out the other side.[1]
As well as being suboptimal from the viewpoint of preventing existential catastrophe, this also just seems kind of common-sense unethical. Like, all these people (most of whom are bright-eyed youngsters) are being told that they can contribute, if only they skill up, and then they later find out that that's not the case.
These community-building graduates can, of course, try going the non-philanthropic route—i.e., apply to AGI companies or government institutes. But there are major gaps in what those organizations are working on, in my view, and they also can’t absorb so many people.
Yea, I think this setup has been incredibly frustrating downstream. I'd hope that people from OP with knowledge could publicly reflect on this, but my quick impression is that some of the following factors happened:
1. OP has had major difficulties/limitations around hiring in the last 5+ years. Some of this is lack of attention, some is that there aren't great candidates, some is a lack of ability. This affected some cause areas more than others. For whatever reason, they seemed to have more success hiring (and retaining talent) for community than for technical AI safety.
2. I think there have been some uncertainties / disagreements about how important / valuable current technical AI safety organizations are to fund. For example, I imagine that if this had been a major priority for those in charge of OP, more could have been done.
3. OP management seems to be a bit in flux now: they lost Holden recently, are hiring a new head of GCR, etc.
4. I think OP isn't very transparent and public with explaining their limitations/challenges publicly.
5. I would flag that there are spots at Anthropic and DeepMind that we don't need to fund but that are still good fits for talent.
6. I think some of the Paul Christiano-connected orgs were considered a conflict of interest, given that Ajeya Cotra was the main grantmaker.
7. Given all of this, I think it would be really nice if people could at least provide warnings about this - e.g., making sure that people entering the field are strongly warned that the job market is very limited. But I'm not sure who feels responsible / well placed to do this.
7
Ozzie Gooen
On AI safety, I think it's fairly likely (40%?) that, upon a lot of reflection, the level of x-risk in the next 20 years is less than 20%, and that the entirety of the EA scene might be reducing it to, say, 15%.
This means that the entirety of the EA AI safety scene would be improving the EV of the world by ~5%.
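To spell out the arithmetic (a rough sketch, treating the world's EV as proportional to the probability of avoiding existential catastrophe, with the maximum attainable value normalized to 1; the 20% and 15% figures are the guesses above, not established estimates):

```latex
% Rough sketch; figures are the illustrative guesses above.
\mathrm{EV}_{\text{without EA AI safety}} \approx 1 - 0.20 = 0.80, \qquad
\mathrm{EV}_{\text{with EA AI safety}} \approx 1 - 0.15 = 0.85, \qquad
\Delta\mathrm{EV} \approx 0.05 \;(\sim 5\%)
```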
On one hand, this is a whole lot. But on the other, I'm nervous that it's not ambitious enough, for what could be one of the most [combination of well-resourced, well-meaning, and analytical/empirical] groups of our generation.
One thing I like about epistemic interventions is that the upper-bounds could be higher.
(There are some AI interventions that are more ambitious, but many do seem to be mainly about reducing x-risk by less than an order of magnitude, not increasing the steady-state potential outcome)
I'd also note here that an EV gain of 5% might not be particularly ambitious. It could well be the case that many different groups can do this - so it's easier than it might seem if you think goodness is additive instead of multiplicative.
One potential weakness: I'm curious whether the voting system ends up promoting the more well-known charities. I'd assume that these are somewhat inversely correlated with the most neglected charities.
Relatedly, I'm curious whether future versions could feature specific subprojects/teams within charities. "Rethink Priorities" is a rather large project compared to "PauseAI US"; I assume it would be interesting if different parts of it were put here instead.
(That said, in terms of the donation, I'd hope that we could donate to RP as a whole and trust RP to allocate it accordingly, instead of formally restricting the money, which can be quite a hassle in terms of accounting)
I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that: a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact and that b) there is often an inverse relationship between tractability/neglectedness and importance/impact of a cause area and charity. Not saying you're wrong, but it's not necessarily a problem.
Furthermore, my anecdotal take from the voting patterns as well as the comments on the discussion thread seem to indicate that neglectedness is often high on the mind of voters - though I admit that commenters on that thread are a biased sample of all those voting in the election.
Is it underwhelming? I guess if you want the donation election to be about spurring lots of donations to small, spunky EA startups working in weirder cause areas, it might be, but I don't think that's what I understand the intention of the experiment to be (though I could be wrong).
My take is that the election is an experiment with EA democratisation, where we get to see what the community values when we do a roughly 1-person-1-ballot system instead of the those-with-the-money-decide system that is how things work right now. The takeaways seem to be:
* The broad EA community values Animal Welfare a lot more than the current major funders
* The broad EA community sees value in all 3 of the 'big cause areas' with high-scoring charities in Animal Welfare, AI Safety, and Global Health & Development.
6
Ozzie Gooen
I (with limited information) think the EA Animal Welfare Fund is promising, but wonder how much of that matches the intention of this experiment. It can be a bit underwhelming if an experiment to get the crowd's takes on charities winds up concluding, "just let the current few experts figure it out." Though I guess that does represent a good state of the world (the public thinks that the current experts are basically right).
I occasionally hear implications that cyber + AI + rogue human hackers will cause mass devastation, in ways that roughly match "lots of cyberattacks happening all over." I'm skeptical of this causing over $1T/year in damages (for over 5 years, pre-TAI), and definitely of it causing an existential disaster.
There are some much more narrow situations that might be more X-risk-relevant, like [A rogue AI exfiltrates itself] or [China uses cyber weapons to dominate the US and create a singleton], but I think these are so narrow they should really be identified i... (read more)
I've heard multiple reports of people being denied jobs around AI policy because of their history in EA. I've also seen a lot of animosity against EA from top organizations I think are important - like A16Z, Founders Fund (Thiel), OpenAI, etc. I'd expect that it would be uncomfortable for EAs to apply to or work at these latter places at this point.
This is very frustrating to me.
First, it makes it much more difficult for EAs to collaborate with many organizations where these perspectives could be the most useful. I want to see more collaborations and cooperation - EAs not being allowed in many orgs makes this very difficult.
Second, it creates a massive incentive for people not to work in EA or on EA topics. If you know it will hurt your career, then you’re much less likely to do work here.
And a lighter third - it’s just really not fun to have a significant stigma associated with you. This means that many of the people I respect the most, and think are doing some of the most valuable work out there, will just have a much tougher time in life.
Who’s at fault here? I think the first big issue is that resistances get created against all interesting and powerful groups. There are sim... (read more)
On the positive front, I know it's early days, but GWWC have really impressed me with their well-produced, friendly yet honest public-facing stuff this year - maybe we can pick up on that momentum?
Also, EA for Christians is holding a British conference this year where Rory Stewart and the Archbishop of Canterbury (the biggest shot in the Anglican church) are headlining, which is a great collaboration with high-profile and well-respected mainstream Christian / Christian-adjacent figures.
I think in general their public-facing presentation and marketing seem a cut above any other EA org's - happy to be proven wrong by other orgs which are doing a great job too. What I love is how they present their messages with such positivity, while still packing a real punch and not watering down their message. Check out their web page and blog to see their work.
A few concrete examples:
- This great video "How rich are you really?"
- Nice rebranding of the "Giving What We Can Pledge" to the snappier and clearer "10% Pledge"
- The diamond symbol as a simple yet strong sign of people taking the pledge, both on the Forum here and on LinkedIn
- An amazing LinkedIn push with lots of people putting up the diamond and explaining why they took the pledge. Many posts have been received really positively on my wall.
That's just what I've noticed.
(Jumping in for our busy comms/exec team) Understanding the status of the EA brand and working to improve it is a top priority for CEA :) We hope to share more work on this in future.
I wrote a downvoted post recently about how we should be warning AI safety talent about the personal-branding costs of going into labs (I think there are other reasons not to join labs, but this one is worth considering).
I think people are still underweighting how much the public are going to hate labs in 1-3 years.
Several months ago, I was telling organizers with PauseAI, like Holly Elmore, that they should be emphasizing this more.
2
yanni kyriacos
I think from an advocacy standpoint it is worth testing that message, but based on how it is being received on the EAF, it might just bounce off people.
My instinct as to why people don't find it a compelling argument:
1. They don't have short timelines like me, and therefore chuck it out completely
2. Are struggling to imagine a hostile public response to 15% unemployment rates
3. Copium
3
Evan_Gaensbauer
At least at the time, Holly Elmore seemed to consider it at least somewhat compelling. I mentioned this was an argument I provided, framed in the context of movements like PauseAI - a more politicized, less politically averse coalition movement, distinct from EA, that includes at least one arm of AI safety among its constituent communities/movements.
>They don't have short timelines like me, and therefore chuck it out completely
Among the most involved participants in PauseAI, there are presumably estimates of short timelines at rates comparable to such estimates among effective altruists.
>Are struggling to imagine a hostile public response to 15% unemployment rates
Those in PauseAI and similar movements don't.
>Copium
While I sympathize with and appreciate why there would be high rates of huffing copium among effective altruists (and adjacent communities, such as rationalists), others, who have been picking up the slack effective altruists have dropped in the last couple of years, are reacting differently. At least in terms of safeguarding humanity from both the near-term and long-term vicissitudes of advancing AI, humanity has deserved better than EA has been able to deliver. Many have given up hope that EA will ever rebound to the point that it'll be able to muster living up to the promise of at least trying to safeguard humanity. That includes both many former effective altruists, and those who still are effective altruists. I consider there to still be that kind of 'hope' on a technical level, though on a gut level I don't have faith in EA. I definitely don't blame those who have any faith left in EA, let alone those who see hope in it.
Much of the difference here is the mindset towards 'people', and how they're modeled, between those still firmly planted in EA but somehow with a fatalistic mindset, and those who still care about AI safety but have decided to move on from EA. (I might be somewhere in between, though my perspective as a single individual a
Very sorry to hear these reports, and was nodding along as I read the post.
If I can ask, how do they know EA affiliation was the reason for the decision? Is this an informal 'everyone knows' thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?
Of course, please don't share any personal information, but I think it's important for those in the community to be as aware as possible of where and why this happens if it is happening because of EA affiliation/history of people here.
I think that certain EA actions in AI policy are getting a lot of flak.
On Twitter, a lot of VCs and techies have ranted heavily about how much they dislike EAs.
See this segment from Marc Andreessen, where he talks about the dangers of Eliezer and EA. Marc seems incredibly paranoid about the EA crowd now.
(Go to 1 hour, 11min in, for the key part. I tried linking to the timestamp, but couldn't get it to work in this editor after a few minutes of attempts)
Organized, yes. And so this starts with a mailing list. In the nineties is a transhumanist mailing list called the extropions. And these extropions, they might have got them wrong, extropia or something like that, but they believe in the singularity. So the singularity is a moment of time where AI is progressing so fast, or technology in general progressing so fast that you can't predict what happens. It's self evolving and it just. All bets are off. We're entering
To be fair to the CEO of Replit here, much of that transcript is essentially true, if mildly embellished. Many of the events or outcomes associated with EA or adjacent communities during their histories that should be most concerning to anyone (other than the FTX-related events, and for reasons beyond just PR concerns) can be, and have been, well-substantiated.
My guess is this is obvious, but the "debugging" stuff seems as far as I can tell completely made up.
I don't know of any story in which "debugging" was used in any kind of collective way. There was some Leverage Research-adjacent stuff that kind of had some attributes like this, "CT-charting", which maybe is what it refers to, but that sure would be the wrong word, and I also don't think I've ever heard of any psychoses or anything related to that.
The only in-person thing I've ever associated with "debugging" is when at CFAR workshops people were encouraged to create a "bugs list", which was just a random list of problems in your life; then throughout the workshop people paired up with others, picked any problem from their list, and worked with their pairing partner on fixing it. No "auditing" or anything like that.
I haven't read the whole transcript in-detail, but this section makes me skeptical of describing much of that transcript as "essentially true".
I have personally heard several CFAR employees and contractors use the word "debugging" to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.
In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.
I think it might describe how some people experienced internal double cruxing. I wouldn't be that surprised if some people also found the 'debugging' frame in general to give too much agency to others relative to themselves; I feel like I've heard that discussed.
Based on the things titotal said, seems like it very likely refers to some Leverage stuff, which I feel a bit bad about seeing equivocated with the rest of the ecosystem, but also seems kind of fair. And the Zoe Curzi post sure uses the term "debugging" for those sessions (while also clarifying that the rest of the rationality community doesn't use the term that way, but they sure seemed to)
For what it’s worth, I was reminded of Jessica Taylor’s account of collective debugging and psychoses as I read that part of the transcript. (Rather than trying to quote pieces of Jessica’s account, I think it’s probably best that I just link to the whole thing as well as Scott Alexander’s response.)
I presume this account is their source for the debugging stuff, wherein an ex-member of the rationalist Leverage institute described their experiences. They described the institute as having "debugging culture", described as follows:
In the larger rationalist and adjacent community, I think it’s just a catch-all term for mental or cognitive practices aimed at deliberate self-improvement.
At Leverage, it was both more specific and more broad. In a debugging session, you’d be led through a series of questions or attentional instructions with goals like working through introspective blocks, processing traumatic memories, discovering the roots of internal conflict, “back-chaining” through your impulses to the deeper motivations at play, figuring out the roots of particular powerlessness-inducing beliefs, mapping out the structure of your beliefs, or explicating irrationalities.
and:
1. 2–6hr long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a “demon” which had infiltrated our psyches from one of the rival groups, its nature and effects, and get it out of our systems using debugging tools.
The podcast statements seem to be an embel... (read more)
Leverage was an EA-aligned organization that was also part of the rationality community (or at least 'rationalist-adjacent'), about a decade ago or more. For Leverage to be affiliated with the mantles of either EA or the rationality community was always contentious. From the side of EA, the CEA, and the side of the rationality community, largely CFAR, Leverage faced efforts to be shoved out of both within a short order of a couple of years. Both EA and CFAR thus couldn't have then, and couldn't now, say or do more to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever. They have. To be clear, so has Leverage in its own way.
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with--to put it bluntly--the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. In time, Leverage came to take that in stride, as the break-up between Leverage, and the rest of the institutional polycule that is EA/rationality, was extremely mutual.
In short, the course of events, and the practices at Leverage that led to them, as presented by Zoe Curzi and others a few years ago (covering the period circa 2018 to 2022), can scarcely be attributed to either the rationality or EA communities. That's a consensus between EA, Leverage, and the rationality community--one of the few things they still agree on at all.
From the side of EA, the CEA, and the side of the rationality community, largely CFAR, Leverage faced efforts to be shoved out of both within a short order of a couple of years. Both EA and CFAR thus couldn't have then, and couldn't now, say or do more to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever…
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with--to put it bluntly--the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Quick point - I think the relationship between CEA and Leverage was pretty complicated during a lot of this period.
There was typically a large segment of EAs who were suspicious of Leverage, ever since its founding. But Leverage did collaborate with EAs on some specific things early on (like the first EA Summit). It felt like an uncomfortable-alliance type of situation. If you go back through the forum / LessWrong, you can find artifacts of this.
I think the period of 2018 or so was unusual. This was a period where a few powerful people at CEA (Kerry, Larissa) were unusually pro-Leverage and got to power fairly quickly (Tara left, somewhat suddenly). I think there was a lot of tension around this decision, and when they left (I think this period lasted around 1 year), I think CEA became much less collaborative with Leverage.
One way to square this a bit is that CEA was just not very powerful for a long time (arguably, its periods of "having real ability/agency to do new things" have been very limited). There were periods where Leverage had more employees than CEA (I'm pretty sure). The fact that CEA went through so many different leaders, each with different stances and strategies, makes it more confusing to look back on.
I would really love for a decent journalist to do a long story on this history, I think it's pretty interesting.
2
Habryka
Huh, yeah, that sure refers to those as "debugging". I've never really heard Leverage people use those words, but Leverage 1.0 was a quite insular and weird place towards the end of its existence, so I must have missed it.
I think it's kind of reasonable to use Leverage as evidence that people in the EA and Rationality community are kind of crazy and have indeed updated on the quotes being more grounded (though I also feel frustration with people equivocating between EA, Rationality and Leverage).
(Relatedly, I don't particularly love you calling Leverage "rationalist" especially in a context where I kind of get the sense you are trying to contrast it with "EA". Leverage has historically been much more connected to the EA community, and indeed had almost successfully taken over CEA leadership in ~2019, though IDK, I also don't want to be too policing with language here)
2
Evan_Gaensbauer
I wouldn't and didn't describe that section of the transcript, as a whole, as essentially true. I said much of it is. As the CEO might've learned from Tucker Carlson, who in turn learned from FOX News, we should seek to be 'fair and balanced.'
As to the debugging part, that's an exaggeration that must have come out the other side of a game of broken telephone on the internet. It seems that on the other side of that telephone line would've been some criticisms or callouts I've read years ago of some activities happening in or around CFAR. I don't recollect them in super-duper precise detail right now, nor do I have the time today to spend an hour or more digging them up on the internet
As for the perhaps wrongheaded practices that were introduced into CFAR workshops for a period of time, other than the ones from Leverage Research, I believe the others were introduced by Valentine (e.g., 'againstness,' etc.). As far as I'm aware, at least as it was applied at one time, some past iterations of Connection Theory bore at least a superficial resemblance to some aspects of 'auditing' as practiced by Scientologists.
As to perhaps even riskier practices, I mean they happened not "in" but "around" CFAR in the sense of not officially happening under the auspices of CFAR, or being formally condoned by them, though they occurred within the CFAR alumni community and the Bay Area rationality community. It's murky, though there was conduct in the lives of private individuals that CFAR informally enabled or emboldened, and could've/should've done more to prevent. For the record, I'm aware CFAR has effectively admitted those past mistakes, so I don't want to belabor any point of moral culpability beyond what has been drawn out to death on LessWrong years ago.
Anyway, activities that occurred among rationalists in the social network in CFAR's orbit, and that arguably rose to the level of triggering behaviour comparable in extremity to psychosis, include 'dark arts' rationalit
3
Gil
I think it's worth noting that the two examples you point to are right-wing, which the vast majority of Silicon Valley is not. Right-wing tech people likely have higher influence in DC, so that's not to say they're irrelevant, but I don't think they are representative of Silicon Valley as a whole.
2
Ozzie Gooen
I think Garry Tan is more left-wing, but I'm not sure. A lot of the e/acc community fights with EA, and my impression is that many of them are leftists.
I think that the right-wing techies are often the loudest, but there are also lefties in this camp too.
(Honestly though, the right-wing techies and left-wing techies often share many of the same policy ideas. But they seem to disagree on Trump and a few other narrow things. Many of the recent Trump-aligned techies used to be more left-coded.)
2
Ozzie Gooen
Random Tweet from today: https://x.com/garrytan/status/1820997176136495167
Garry Tan is the head of Y Combinator, which is basically the most important/influential tech incubator out there. Around 8 years back, relations were much better, and 80k and CEA actually went through Y Combinator.
I'd flag that Garry specifically is kind of wacky on Twitter, compared to previous heads of YC. So I definitely am not saying it's "EA's fault" - I'm just flagging that there is a stigma here.
I personally would be much more hesitant to apply to YC knowing this, and I'd expect YC would be less inclined to bring in AI safety folk and likely EAs.
5
Rebecca
I find it very difficult psychologically to take someone seriously if they use the word ‘decels’.
3
JWS 🔸
Want to say that I called this ~9 months ago.[1]
I will re-iterate that clashes of ideas/worldviews[2] are not settled by sitting them out and doing nothing, since they can be waged unilaterally.
1. ^
Especially if you look at the various other QTs about this video across that side of Twitter
2. ^
Or 'memetic wars', YMMV
4
Ozzie Gooen
My impression is that the current EA AI policy arm isn't having much active dialogue with the VC community and the like. I see Twitter spats that look pretty ugly, I suspect that this relationship could be improved on with more work.
At a higher level, I suspect that there could be a fair bit of policy work that both EAs and many of these VCs and others would be more okay with than what is currently being pushed. My impression is that we should be focused on narrow subsets of risks that matter a lot to EAs, but don't matter much to others, so we can essentially trade and come out better than we are now.
3
Chris Leong
That seems like the wrong play to me. We need to be focused on achieving good outcomes and not being popular.
6
Ozzie Gooen
My personal take is that there are a bunch of better trade-offs between the two that we could be making. I think that the narrow subset of risks is where most of the value is, so from that standpoint, that could be a good trade-off.
I'm nervous that the EA Forum might be playing only a small role in x-risk and high-level prioritization work.
- Very little biorisk content here, perhaps because of info-hazards.
- Little technical AI safety work here, in part because that's more for LessWrong / the Alignment Forum.
- Little AI governance work here, for whatever reason.
- Not many innovative, big-picture longtermist prioritization projects happening at the moment, from what I understand.
- The cause of "EA community building" seems to be fairly stable, without much bold/controversial experimentation, from what I can tell.
- Fairly few updates / discussions from grantmakers. OP is really the dominant one, and doesn't publish much, particularly about their grantmaking strategies and findings.
It's been feeling pretty quiet here recently, for my interests. I think some important threads are now happening in private slack / in-person conversations or just not happening.
I don't comment or post much on the EA forum because the quality of discourse on the EA Forum typically seems mediocre at best. This is especially true for x-risk.
The whole Manifund debacle has left me quite demotivated. It really sucks that people are more interested in debating contentious community drama than seemingly anything else this forum has to offer.
I think there are signal vs. noise tradeoffs, so I'm naively tempted to retreat toward more exclusivity.
This poses costs of its own, so maybe I'd be in favor of differentiation (some more and some less exclusive version).
Low confidence in this being good overall.
4
Vasco Grilo🔸
Hi Ryan,
Could you share a few examples of what you consider good quality EA Forum posts? Do you think the content linked on the EA Forum Digest also "typically seems mediocre at best"?
Very little biorisk content here, perhaps because of info-hazards.
When I write biorisk-related things publicly I'm usually pretty unsure of whether the Forum is a good place for them. Not because of info-hazards, since that would gate things at an earlier stage, but because they feel like they're of interest to too small a fraction of people. For example, I could plausibly have posted Quick Thoughts on Our First Sampling Run or some of my other posts from https://data.securebio.org/jefftk-notebook/ here, but that felt a bit noisy?
It also doesn't help that detailed technical content gets much less attention than meta or community content. For example, three days ago I wrote a comment on @Conrad K.'s thoughtful Three Reasons Early Detection Interventions Are Not Obviously Cost-Effective, and while I feel like it's a solid contribution only four people have voted on it. On the other hand, if you look over my recent post history at my comments on Manifest, far less objectively important comments have ~10x the karma. Similarly the top level post was sitting at +41 until Mike bumped it last week, which wasn't even high enough that (before I changed my personal settings to boo... (read more)
I'd be excited to have discussions of those posts here!
A lot of my more technical posts also get very little attention - I also find that pretty unmotivating. It can be quite frustrating when clearly lower-quality content on controversial stuff gets a lot more attention.
But this seems like a doom loop to me. I care much more about strong technical content, even if I don't always read it, than I do most of the community drama. I'm sure most leaders and funders feel similarly.
Extended far enough, the EA Forum will be a place only for controversial community drama. This seems nightmarish to me. I imagine most forum members would agree.
I imagine that there are things the Forum or community can do to bring more attention or highlighting to the more technical posts.
Here you go: Detecting Genetically Engineered Viruses With Metagenomic Sequencing
But this was already something I was going to put on the Forum ;)
9
Vaidehi Agarwalla 🔸
I wonder if the forum is even a good place for a lot of these discussions? Feels like they need some combination of safety / shared context, expertise, gatekeeping etc?
7
Ozzie Gooen
If it's not, there is a question of what the EA Forum's comparative advantage will be in the future, and what is a good place for these discussions.
Personally, I think this forum could be good for at least some of this, but I'm not sure.
9
Seth Ariel Green
Three use cases come to mind for the forum:
* establishing a reputation in writing as a person who can follow good argumentative norms (perhaps as a kind of extended courtship of EA jobs/orgs)
* disseminating findings that are mainly meant for other forums, e.g. research reports
* keeping track of what the community at large is thinking about/working on, which is mostly facilitated by organizations like RP & GiveWell using the forum to share their work.
I don’t think I would use the forum for hashing out anything I was really thinking hard about; I’d probably have in-person conversations or email particular persons.
7
JP Addison🔸
I don't know about you, but I just learned about one of the biggest updates to OP's grantmaking in a year on the Forum.
That said, the data does show some agreement with your and commenters' vibe of lowering quantity.
I agree that the Forum could be a good place for a lot of these discussions. Some of them aren't happening at all to my knowledge.[1] Some of those should be, and should be discussed on the Forum. Others are happening in private and that's rational, although you may be able to guess that my biased view is that a lot more should be public, and if they were, should be posted on the Forum.
Broadly: I'm quite bullish on the EA community as a vehicle for working on the world's most pressing problems, and of open online discussion as a piece of our collective progress. And I don't know of a better open place on the internet for EAs to gather.
1. ^
Part of that might be because as EA gets older the temperature (in the annealing sense) rationally lowers.
6
Ozzie Gooen
Yep - I liked the discussion in that post a lot, but the actual post seemed fairly minimal, and written primarily outside of the EA Forum (it was a link post, and the actual post was 320 words total.)
For those working on the forum, I'd suggest working on bringing more of these threads into the forum. Maybe reach out to some of the leaders in each group and see how to change things.
I think that AI policy in particular is most ripe for better infrastructure (there's a lot of work happening, but no common public forums, from what I know), though it probably makes sense for that to be separate from the EA Forum (maybe something like the Alignment Forum), because a lot of the people in that field don't want to be associated too much with EA, for policy reasons.
I know less about Bio governance, but would strongly assume that a whole lot of it isn't infohazardous. That's definitely a field that's active and growing.
For foundational EA work / grant discussions / community strategy, I think we might just need more content in the first place, or something.
I assume that AI alignment is well-handled by LessWrong / the Alignment Forum, and that it would be difficult and less important to push for it to happen here.
4
Nathan Young
So I did use to do more sort-of back-of-the-envelope stuff, but it didn't get much traction and people seemed to think it was unfinished (it was), so I guess I had less enthusiasm.
4
NickLaing
Yeah even on the global health front the last 3 months or so have felt especially quiet
2
Vaidehi Agarwalla 🔸
Curious if you think there was good discussion before that and could point me to any particularly good posts or conversations?
5
NickLaing
There are still a bunch of good discussions (see mostly posts with 10+ comments) in the last 6 months or so; it's just that we can sometimes go a week or two without more than one or two ongoing serious GHD chats. Maybe I'm wrong, and looking at this, there hasn't actually been much (or any) meaningful change in activity this year:
https://forum.effectivealtruism.org/?tab=global-health-and-development
3
Tristan Williams
As a random datapoint, I'm only just getting into the AI Governance space, but I've found little engagement with (some) (of[1]) (the) (resources) I've shared and have just sort of updated to think this is either not the space for it or I'm just not yet knowledgeable enough about what would be valuable to others.
1. ^
I was especially disappointed with this one, because this was a project I worked on with a team for some time, and I still think it's quite promising, but it didn't receive the proportional engagement I would have hoped for. Given I optimized some of the project for putting out this bit of research specifically, I wouldn't do the same now and would have instead focused on other parts of the project.
2
Ozzie Gooen
It seems from the comments that there's a chance that much of this is just timing - i.e. right now is unusually quiet. It is roughly mid-year, maybe people are on vacation or something, it's hard to tell.
I think that this is partially true. I'm not interested in bringing up this point to upset people, but rather to flag that maybe there could be good ways of improving this (which I think is possible!)
When EA was starting, there was a small amount of talent, and a smaller amount of funding. As one might expect, things went slowly for the first few years.
Then once OP decided to focus on X-risks, there was ~$8B potential funding, but still fairly little talent/capacity. I think the conventional wisdom then was that we were unlikely to be bottlenecked by money anytime soon, and lots of people were encouraged to do direct work.
Then FTX Future Fund came in, and the situation got even more out-of-control. ~Twice the funding. Projects got more ambitious, but it was clear there were significant capacity (funder and organization) constraints.
Then (1) FTX crashed, and (2) lots of smart people came into the system. Project capacity grew, AI advances freaked out a lot of people, and successful community projects helped train a lot of smart young people to work on X-risks.
But funding has not kept up. OP has been slow to hire for many x-risk roles (AI safety, movement building, outreach / fundraising). Other large funders have been slow to join in.
So now there's a crunch for funding. There are a bunch of smart-seeming AI people now who I bet could have gotten fun... (read more)
Any thoughts on where e.g. 50K could be well spent?
4
Ozzie Gooen
(For longtermism)
If you have limited time to investigate / work with, I'd probably recommend either the LTFF or choosing a regranter you like at Manifund.
If you have a fair bit more time, and ideally the expectation of more money in the future, then I think a lot of small-to-medium (1-10 employee) organizations could use some long-term, high-touch donors. Honestly, this may come down more to fit / relationships than to identifying the absolute best org - as long as it's funded by one of the groups listed above or OP, as money itself is a bit fungible between orgs.
I think a lot of nonprofits have surprisingly few independent donors, or even strong people that can provide decent independent takes. I might write more about this later.
(That said, there are definitely ways to be annoying / a hindrance, as an active donor, so try to be really humble here if you are new to this)
(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-chatGPT, it seems far more boring, and someone recommended I post it somewhere.)
Thoughts on the OpenAI Strategy
OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.
First, they say flat out that they're going for AGI.
Then, when they raised money in 2019, they included a clause saying that investors' returns will be capped at 100x.
"Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]
On Hacker News, one of their employees says,
"We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]
You can read more about this mission on the charter:
"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of a
My sense of self-worth often comes from guessing what people I respect think of me and my work.
In EA... this is precarious. The most obvious people to listen to are the senior/powerful EAs.
In my experience, many senior/powerful EAs I know:
1. Are very focused on specific domains.
2. Are extremely busy.
3. Have substantial privileges (exceptionally intelligent, stable health, esteemed education, affluent/intellectual backgrounds).
4. Display limited social empathy (ability to read and respond to the emotions of others).
5. Sometimes might actively try not to sympathize/empathize with many people, because they are judging them for grants and don't want to be biased. (I suspect this is the case for grantmakers.)
6. Are not that interested in acting as a coach/mentor/evaluator to people outside their key areas/organizations.
7. Don't intend or want others to care too much about what they think outside of cause-specific promotion and a few pet ideas they want to advance.
A parallel can be drawn with the world of sports. Top athletes can make poor coaches. Their innate talent and advantages often leave them detached from the experiences ... (read more)
Who, if anyone, should I trust to inform my self-worth?
My initial thought is that it is pretty risky/tricky/dangerous to depend on external things for a sense of self-worth? I know that I certainly am very far away from an Epictetus-like extreme, but I try to not depend on the perspectives of other people for my self-worth. (This is aspirational, of course. A breakup or a job loss or a person I like telling me they don't like me will hurt and I'll feel bad for a while.)
A simplistic little thought experiment I've fiddled with: if I went to a new place where I didn't know anyone and just started over, then what? Nobody knows you, and your social circle starts from scratch. That doesn't mean that you don't have worth as a human being (although it might mean that you don't have any worth in the 'economic' sense of other people wanting you, which is very different).
There might also be an intrinsic/extrinsic angle to this. If you evaluate yourself based on accomplishments, outputs, achievements, and so on, that has a very different feeling than the deep contentment of being okay as you are.
In another comment Austin mentions revenue and funding, but that seems to be a measure of things V... (read more)
I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.
I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.)
IMO you should be thinking about things like, how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but IMO here also the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".
It's funny; I think you'd definitely be on the list of people whose opinion of me I respect and care about. I think it's just imposter syndrome all the way up.
Personally, one thing that has seemed to work a bit for me is to find peers whom I highly appreciate and respect, and to schedule weekly calls with them to help me prioritize and focus, and to give me feedback.
6
Austin
A few possibilities from startup land:
* derive worth from how helpful your users find your product
* chase numbers! usage, revenue, funding, impact, etc. Sam Altman has a line like "focus on adding another 0 to your success metric"
* the intrinsic sense of having built something cool
9
Patrick Gruban 🔸
After transitioning from for-profit entrepreneurship to co-leading a non-profit in the effective altruism space, I struggle to identify clear metrics to optimize for. Funding is a potential metric, but it is unreliable due to fluctuations in donors' interests. The success of individual programs, such as user engagement with free products or services, may not accurately reflect their impact compared to other potential initiatives. Furthermore, creating something impressive doesn't necessarily mean it's useful.
Lacking a solid impact evaluation model, I find myself defaulting to measuring success by hours worked, despite recognizing the diminishing returns and increased burnout risk this approach entails.
5
[anonymous]
This is brave of you to share. It sounds like there are a few related issues going on. I have a few thoughts that may or may not be helpful:
1. Firstly, you want to do well and improve in your work, and you want some feedback on that from people who are informed and have good judgment. The obvious candidates in the EA ecosystem are people who actually aren't well suited to give this feedback to you. This is tough. I don't have any advice to give you here.
2. However it also sounds like there are some therapeutic issues at play. You mention therapists as a favored option but one you're not satisfied with and I'm wondering why? Personally I suspect that making progress on any therapeutic issues that may be at play may also end up helping with the professional feedback problem.
3. I think you've unfairly dismissed the best option as to who you can trust: yourself. That you have biases and flaws is not an argument against trusting yourself because everyone and everything has biases and flaws! Which person or AI are you going to find that doesn't have some inherent bias or ability to Goodhart?
4
Sam_Coggins
Five reasons why I think it's unhelpful connecting our intrinsic worth to our instrumental worth (or anything aside from being conscious beings):
1. Undermines care for others (and ourselves): chickens have limited instrumental worth and often do morally questionable things. I still reckon chickens and their suffering are worthy of care. (And same argument for human babies, disabled people and myself)
2. Constrains effective work: continually assessing our self-worth can be exhausting (leaving less time/attention/energy for actually doing helpful work). For example, it can be difficult to calmly take on constructive feedback (on our work, or our instrumental strengths or weaknesses) when our self-worth is on the line.
3. Constrains our personal wellbeing and relationships: I've personally found it hard to enjoy life when continuously questioning my self-worth and feeling guilty/shameful when the answer seems negative
4. Very hard to answer: including because the assessment may need to be continuously updated based on the new evidence from each new second of our lives
5. Seems pointless to answer (to me): how would accurately measuring our self-worth (against a questionable benchmark) make things better? We could live in a world where all beings are ranked so that more 'worthy' beings can appropriately feel superior, and less 'worthy' beings can appropriately feel 'not enough'. This world doesn't seem great from my perspective.
Despite thinking these things, I often unintentionally get caught up muddling my self-worth with my instrumental worth (can relate to the post and comments on here!) I've found 'mindful self-compassion' super helpful for doing less of this
4
Ben_West🔸
This is an interesting post and seems basically right to me, thanks for sharing.
4
Patrick Gruban 🔸
Thank you, this very much resonates with me
4
Ozzie Gooen
The most obvious moves, to me, eventually, are to either be intensely neutral (as in, trying not to tie my emotional state to my track record), or to iterate on using AI to help here (futuristic and potentially dangerous, but with other nice properties).
2
EdoArad
How would you use AI here?
2
Ozzie Gooen
A very simple example is, "Feed a log of your activity into an LLM with a good prompt, and have it respond with assessments of how well you're doing vs. your potential at the time, and where/how you can improve." You'd be free to argue points or whatever.
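For concreteness, here's a minimal sketch of what that could look like, assuming the OpenAI Python client; the model choice, prompt wording, and log format are all placeholder assumptions rather than a recommendation:

```python
# Minimal sketch: feed an activity log to an LLM and ask for a self-assessment.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical log; in practice this might come from a time tracker or journal.
activity_log = """
Mon: drafted grant writeup (3h), answered forum comments (1h)
Tue: paper reading (2h), meetings (4h)
"""

prompt = (
    "Here is a log of my work activity this week. "
    "Assess how well I did relative to what was realistically achievable for me, "
    "and suggest one or two concrete improvements:\n" + activity_log
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

You could then argue with the assessment in follow-up messages, as described above.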
7
Joseph Lemien
Reading this comment makes me think that you are basing your self-worth on your work output. I don't have anything concrete to point to, but I suspect that this might have negative effects on happiness, and that being less outcome dependent will tend to result in a better emotional state.
4
EdoArad
That's cool. I had the thought of developing a "personal manager" for myself of some form for roughly similar purposes
Around discussions of AI & Forecasting, there seems to be some assumption like:
1. Right now, humans are better than AIs at judgemental forecasting.
2. When humans are better than AIs at forecasting, AIs are useless.
3. At some point, AIs will be better than humans at forecasting.
4. At that point, when it comes to forecasting, humans will be useless.
This comes from a lot of discussion and some research comparing "humans" to "AIs" in forecasting tournaments.
As you might expect, I think this model is incredibly naive. To me, it's asking questions like, "Are ... (read more)
I am not so aware of the assumption you make up front, and would agree with you that anyone making such an assumption is being naive. Not least because humans on average (and even supers under many conditions) are objectively inaccurate at forecasting - even if relatively good given we don’t have anything better yet.
I think the more interesting and important thing, when it comes to AI forecasting and claiming AIs are "good", is to look at the reasoning process they undertake to do it. How are they forming reference classes, how are they integrating specific information, and how are they updating their posterior to form an accurate inference and likelihood of the event occurring? Right now, they can sort of do the first (forming reference classes), but from my experience they don't do well at all at integration, updating, and making a probabilistic judgment. In fairness, humans often don't either. But we do it more consistently than current AI.
For your post, this suggests to me that AI could be used to help base rate/reference class creation, and maybe loosely support integration.
When I hear of entrepreneurs excited about prediction infrastructure making businesses, I feel like they gravitate towards new prediction markets or making new hedge funds.
I really wish it were easier to make new insurance businesses (or similar products). I think innovative insurance products could be a huge boon to global welfare. The very unfortunate downside is that there's just a ton of regulation and lots of marketing to do, even in cases where it's a clear win for consumers.
Ideally, it should be very easy and common to get insurance for all of the key insecurities of your life.
- Having children with severe disabilities / issues
- Having business or romantic partners defect on you
- Having your dispreferred candidate get elected
- Increases in political / environmental instability
- Some unexpected catastrophe hitting a business
- Nonprofits losing their top donor due to some unexpected issue with said donor (i.e. FTX)
I think a lot of people have certain issues that both:
- They worry about a lot
- They over-weight the risks of these issues
In these cases, insurance could be a big win!
In a better world, almost all global risks would be held primarily by asset managers / insurance agencies. Individuals could have highly predictable lifestyles.
(Of course, some prediction markets and other markets can occasionally be used for this purpose as well!)
Some of these things are fundamentally hard to insure against, because of information asymmetries / moral hazard.
e.g. insurance against donor issues would disproportionately be taken by people who had some suspicions about their donors, which would drive up prices, which would get more people without suspicions to decline taking insurance, until the market was pretty tiny with very high prices and a high claim rate. (It would also increase the incentives to commit fraud to give, which seems bad.)
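To illustrate that unraveling dynamic, here's a toy simulation (entirely hypothetical numbers; it assumes people buy only when their expected loss is at least the premium, and the insurer re-prices to the average expected loss of whoever actually bought):

```python
# Toy adverse-selection spiral: the insured pool gets riskier each round,
# premiums rise, and lower-risk people drop out.
import numpy as np

rng = np.random.default_rng(0)
risk = rng.uniform(0.0, 1.0, 10_000)   # each person's probability of a claim
payout = 100.0                          # fixed payout if the bad event happens

premium = risk.mean() * payout          # insurer initially prices for the whole population
for step in range(10):
    buyers = risk * payout >= premium   # only people whose expected loss covers the premium buy
    if not buyers.any():
        print(f"step {step}: no buyers left, market unravels")
        break
    premium = risk[buyers].mean() * payout  # re-price to the (riskier) pool that actually bought
    print(f"step {step}: {buyers.sum()} buyers, premium {premium:.1f}")
```

Each iteration the pool shrinks toward the highest-risk people and the premium climbs toward the full payout, which is the dynamic described above.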
2
Jason
Some of these harms seem of a sort that does not really feel compensable with money. While a romantic partner's defection might create some out-of-pocket costs, I don't think the knowledge that I'd get some money out of my wife defecting would make me feel any better about the possibility!
Also, I'd note that some of the harms are already covered by social insurance schemes to a large extent. For instance, although parents certainly face a lot of costs associated with "[h]aving children with severe disabilities / issues," a high percentage of costs in the highest-cost scenarios are already borne by the public (e.g., Medicaid, Social Security/SSI, the special education system, etc.) or by existing insurers (e.g., employer-provided health insurance). So I'd want to think more about the relative merits of novel private-sector insurance schemes versus strengthening the socialized schemes.
4
Ozzie Gooen
Consider these examples of where it might be important:
1. You are financially dependent on your spouse. If they cheated on you, you would likely want to leave them, but you wouldn't want to be trapped due to finances.
2. You're nervous about the potential expenses of a divorce.
I think that this situation is probably a poor fit for insurance at this point, just because of moral risks that would happen, but perhaps one day it might be viable to some extent.
> So I'd want to think more about the relative merits of novel private-sector insurance schemes versus strengthening the socialized schemes.
I'm all for improvements to socialized schemes too. There's no reason not to test and use both strategies. In theory, insurance could be much easier and faster to implement; it can take ages for nation-wide reform to happen.
I really don't like the trend of posts saying that "EA/EAs need to | should do X or Y".
EA is about cost-benefit analysis. The phrases "need" and "should" imply binaries/absolutes and very high confidence.
I'm sure there are thousands of interventions/measures that would be positive-EV for EA to engage with. I don't want to see thousands of posts loudly declaring "EA MUST ENACT MEASURE X" and "EAs SHOULD ALL DO THING Y," in cases where these mostly seem like un-vetted interesting ideas.
In almost all cases I see the phrase, I think it would be much better replaced with things like:
"Doing X would be high-EV"
"X could be very good for EA"
"Y: Costs and Benefits" (With information in the post arguing the benefits are worth it)
"Benefits|Upsides of X" (If you think the upsides are particularly underrepresented)
I think it's probably fine to use the word "need" either when it's paired with an outcome (EA needs to do more outreach to become more popular) or when the issue is fairly clearly existential (the US needs to ensure that nuclear risk is low). It's also fine to use should in the right context, but it's not a word to over-use.
Strong disagree. If the proponent of an intervention/cause area believes that advancing it is extremely high-EV, such that it would be very imprudent for EA resources not to advance it, they should use strong language.
I think EAs are too eager to hedge their language and use weak language regarding promising ideas.
For example, I have no compunction saying that Profit for Good (companies with charities in the vast-majority shareholder position) needs to be advanced by EA, in that I believe not doing so results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.
What about social norms, like "EA should encourage people to take care of their mental health even if it means they have less short-term impact"?
2
Ozzie Gooen
Good question.
First, I have a different issue with that phrase, as it's not clear what "EA" is. To me, EA doesn't seem like an agent. You can say, "....CEA should" or "...OP should".
Normally, I prefer that one says "I think X should". There are some contexts, specifically small ones (talking to a few people, where it's clearly conversational), in which saying "X should do Y" clearly means "I feel like X should do Y, but I'm not sure". And there are some contexts where it means "I'm extremely confident X should do Y".
For example, there's a big difference between saying "X should do Y" to a small group of friends, when discussing uncertain claims, and writing a mass-market book titled "X should do Y".
1
NickLaing
I haven't noticed this trend, could you list a couple of articles like this? Or even DM me if you're not comfortable listing them here.
9
Ozzie Gooen
I recently noticed it here:
https://forum.effectivealtruism.org/posts/WJGsb3yyNprAsDNBd/ea-orgs-need-to-tabletop-more
Looking back, it seems like there weren't many more very recently. Historically, there have been some.
EA needs consultancies
EA needs to understand its “failures” better
EA needs more humor
EA needs Life-Veterans and "Less Smart" people
EA needs outsiders with a greater diversity of skills
EA needs a hiring agency and Nonlinear will fund you to start one
EA needs a cause prioritization journal
Why EA needs to be more conservative
Looking above, many of those seem like "nice to haves". The word "need" seems over-the-top to me.
3
VictorW
There are a couple of strong "shoulds" in the EA Handbook (I went through it over the last two months as part of an EA Virtual program) and they stood out to me as the most disagreeable part of EA philosophy that was presented.
Some AI questions/takes I’ve been thinking about:
1. I hear people confidently predicting that we’re likely to get catastrophic alignment failures, even if things go well up to ~GPT-7 or so. But if we get to GPT-7, I assume we could sort of ask it, “Would taking this next step have a large chance of failing?”. Basically, I’m not sure if it’s possible for an incredibly smart organization to “sleepwalk into oblivion”. Likewise, I’d expect trade and arms races to get a lot nicer/safer if we could make it a few levels deeper without catastrophe. (Note: This is... (read more)
How do you know it tells the truth or its best knowledge of the truth without solving the "eliciting latent knowledge" problem?
2
Ozzie Gooen
Depends on what assurance you need. If GPT-7 reliably provides true results in most/all settings you can find, that's good evidence.
If GPT-7 is really Machiavellian, and is conspiring against you to make GPT-8, then it's already too late for you, but it's also a weird situation. If GPT-7 were seriously conspiring against you, I assume it wouldn't need to wait until GPT-8 to take action.
EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals.
If this continues, it could be worth noting that it may have significant repercussions for the areas outside of EA that we divert this talent from. We may be diverting a significant fraction of the future "best and brightest" away from non-EA fields.
If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice.
A few junior/summer effective altruism related research fellowships are ending, and I’m getting to see some of the research pitches.
Lots of confident-looking pictures of people with fancy and impressive sounding projects.
I want to flag that many of the most senior people I know around longtermism are really confused about stuff. And I’m personally often pretty skeptical of those who don’t seem confused.
So I think a good proposal isn’t something like, “What should the EU do about X-risks?” It’s much more like, “A light summary of what a few people so far think about this, and a few considerations that they haven’t yet flagged, but note that I’m really unsure about all of this.”
Many of these problems seem way harder than we'd like them to be, and much harder than many seem to assume at first. (Perhaps this is due to unreasonable demands for rigor, but finding an alternative would itself be a research effort.)
I imagine a lot of researchers assume they won’t stand out unless they seem to make bold claims. I think this isn’t true for many EA key orgs, though it might be the case that it’s good for some other programs (University roles, perhaps?).
Not sure how to finish this post here. I... (read more)
Relevant post by Nuño: https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers
There seem to be several longtermist academics who plan to spend the next few years (at least) investigating the psychology of getting the public to care about existential risks.
This is nice, but I feel like what we really could use are marketers, not academics. Those are the people companies use for this sort of work. It's somewhat unusual that marketing isn't much of a respected academic field, but it's definitely a highly respected organizational one.
There are at least a few people in the community with marketing experience and an expressed desire to help out. The most recent example that comes to mind is this post.
If anyone reading this comment knows people who are interested in the intersection of longtermism and marketing, consider telling them about EA Funds! I can imagine the LTFF or EAIF being very interested in projects like this.
(That said, maybe one of the longtermist foundations should consider hiring a marketing consultant?)
2
Ozzie Gooen
Yep, agreed. Right now I think there are very few people doing active work on this in longtermism (outside of a few orgs that have their own people for it), but this seems very valuable to improve upon.
4
Jamie_Harris
If you're happy to share, who are the longtermist academics you are thinking of? (Their work could be somewhat related to my work)
2
Ozzie Gooen
No prominent ones come to mind. There are some very junior folks I've recently seen discussing this, but I feel uncomfortable calling them out.
I made a quick Manifold Market for estimating my counterfactual impact from 2023-2030.
On one hand, this seems kind of uncomfortable - on the other, I'd really like to feel more comfortable with precise and public estimates of this sort of thing.
Feel free to bet!
Still need to make progress on the best resolution criteria.
If someone thinks LTFF is net negative, but your work is net positive, should they answer in the negative ranges?
2
Ozzie Gooen
Yes. That said, this of course complicates things.
2
Linch
Note that while we'll have some clarity in 2030, we'd presumably have less clarity than at the end of history (and even then things could be murky, I dunno)
2
Ozzie Gooen
For sure. This would just be the mean estimate, I assume.
You could use prediction setups to resolve specific cruxes on why prediction setups outputted certain values.
My guess is that this could be neat, but also pretty tricky. There are lots of "debate/argument" platforms out there, and they seem to have worked out a lot worse than people were hoping. But I'd love to be proven wrong.
P.S. I'd be keen on working on this, how do I get involved?
If "this" means the specific thing you're referring to, I don't think there's really a project for that yet, you'd have to do it yourself. If you're referring more to for... (read more)
I think I broadly like the idea of Donation Week.
One potential weakness is that I'm curious whether the voting system promotes the more well-known charities. I'd assume that being well-known is somewhat inversely correlated with being neglected.
Relatedly, I'm curious whether future versions could feature specific subprojects/teams within charities. "Rethink Priorities" is a rather large project compared to "PauseAI US"; I assume it would be interesting if different parts of it were listed here instead.
(That said, in terms of the donation, I'd hope that we could donate to RP as a whole and trust RP to allocate it accordingly, instead of formally restricting the money, which can be quite a hassle in terms of accounting)
I occasionally hear implications that cyber + AI + rogue human hackers will cause mass devastation, in ways that roughly match "lots of cyberattacks happening all over." I'm skeptical of this causing over $1T/year in damages (for over 5 years, pre-TAI), and definitely skeptical of it causing an existential disaster.
There are some much more narrow situations that might be more X-risk-relevant, like [A rogue AI exfiltrates itself] or [China uses cyber weapons to dominate the US and create a singleton], but I think these are so narrow they should really be identified i... (read more)
I’ve heard multiple reports of people being denied jobs around AI policy because of their history in EA. I’ve also seen a lot of animosity against EA from top organizations I think are important - like A16Z, Founders Fund (Thiel), OpenAI, etc. I’d expect that it would be uncomfortable for EAs to apply to or work at these latter places at this point.
This is very frustrating to me.
First, it makes it much more difficult for EAs to collaborate with many organizations where these perspectives could be the most useful. I want to see more collaborations and cooperation - EAs not being welcome in many orgs makes this very difficult.
Second, it creates a massive incentive for people not to work in EA or on EA topics. If you know it will hurt your career, then you’re much less likely to do work here.
And a lighter third - it’s just really not fun to have a significant stigma associated with you. This means that many of the people I respect the most, and think are doing some of the most valuable work out there, will just have a much tougher time in life.
Who’s at fault here? I think the first big issue is that resistances get created against all interesting and powerful groups. There are sim... (read more)
On the positive front, I know it's early days, but GWWC have really impressed me with their well-produced, friendly yet honest public-facing stuff this year - maybe we can pick up on that momentum?
Also EA for Christians is holding a British conference this year where Rory Stewart and the Archbishop of Canterbury (biggest shot in the Anglican church) are headlining which is a great collaboration with high profile and well respected mainstream Christian / Christian-adjacent figures.
(Jumping in for our busy comms/exec team) Understanding the status of the EA brand and working to improve it is a top priority for CEA :) We hope to share more work on this in future.
I wrote a downvoted post recently about how we should be warning AI safety talent about the personal branding risks of going into labs (I think there are other reasons not to join labs, but this one is worth considering).
I think people are still underweighting how much the public are going to hate labs in 1-3 years.
Very sorry to hear these reports, and was nodding along as I read the post.
If I can ask, how do they know EA affiliation was the deciding factor? Is this an informal 'everyone knows' thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?
Of course, please don't share any personal information, but I think it's important for those in the community to be as aware as possible of where and why this happens if it is happening because of EA affiliation/history of people here.
(Feel free to DM me Ozzie if that's easier)
I'm thinking of around 5 cases. I think in around 2-3 of them they were told directly; in the others it was strongly inferred.
On Twitter, a lot of VCs and techies have ranted heavily about how much they dislike EAs.
See this segment from Marc Andreessen, where he talks about the dangers of Eliezer and EA. Marc seems incredibly paranoid about the EA crowd now.
(Go to 1 hour, 11min in, for the key part. I tried linking to the timestamp, but couldn't get it to work in this editor after a few minutes of attempts)
... (read more)
I also recently came across this transcript, from Amjad Masad, CEO of Replit, on Tucker Carlson:
https://www.happyscribe.com/public/the-tucker-carlson-show/amjad-masad-the-cults-of-silicon-valley-woke-ai-and-tech-billionaires-turning-to-trump
To be fair to the CEO of Replit here, much of that transcript is essentially true, if mildly embellished. Many of the events or outcomes associated with EA or adjacent communities over their histories that should be most concerning to anyone (other than FTX-related events, and for reasons beyond just PR concerns) can be and have been well-substantiated.
I have personally heard several CFAR employees and contractors use the word "debugging" to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.
In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.
I think it might describe how some people experienced internal double cruxing. I wouldn't be that surprised if some people also found the "debugging" frame in general to give too much agency to others relative to themselves; I feel like I've heard that discussed.
For what it’s worth, I was reminded of Jessica Taylor’s account of collective debugging and psychoses as I read that part of the transcript. (Rather than trying to quote pieces of Jessica’s account, I think it’s probably best that I just link to the whole thing as well as Scott Alexander’s response.)
I presume this account is their source for the debugging stuff: an ex-member of the rationalist Leverage institute describing their experiences. They described the institute as having a "debugging culture", which they characterized as follows:
and:
The podcast statements seem to be an embel... (read more)
Leverage was an EA-aligned organization that was also part of the rationality community (or at least 'rationalist-adjacent') about a decade ago or more. For Leverage to be affiliated with the mantles of either EA or the rationality community was always contentious. From the side of EA, largely the CEA, and from the side of the rationality community, largely CFAR, there were efforts to shove Leverage out of both within the space of a couple of years. Both EA and CFAR thus couldn't have then, and couldn't now, say or do more to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever. They have. To be clear, so has Leverage in its own way.
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with--to put it bluntly--the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. In time, Leverage came to take that in stride, as the break-up between Leverage and the rest of the institutional polycule that is EA/rationality was extremely mutual.
In short, the course of events, and the practices at Leverage that led to them, as presented by Zoe Curzi and others a few years ago, from that period circa 2018 to 2022, can scarcely be attributed to either the rationality or EA communities. That's a consensus that EA, Leverage, and the rationality community all agree on--one of the few things left that they still agree on at all.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2... (read more)
I'm nervous that the EA Forum might be playing only a small role in x-risk and high-level prioritization work.
- Very little biorisk content here, perhaps because of info-hazards.
- Little technical AI safety work here, in part because that's more for LessWrong / Alignment Forum.
- Little AI governance work here, for whatever reason.
- Not many innovative, big-picture longtermist prioritization projects happening at the moment, from what I understand.
- The cause of "EA community building" seems to be fairly stable, not much bold/controversial experimentation, from what I can tell.
- Fairly few updates / discussion from grantmakers. OP is really the dominant one, and doesn't publish too much, particularly about their grantmaking strategies and findings.
It's been feeling pretty quiet here recently, for my interests. I think some important threads are now happening in private slack / in-person conversations or just not happening.
I don't comment or post much on the EA forum because the quality of discourse on the EA Forum typically seems mediocre at best. This is especially true for x-risk.
I think this has been true for a while.
Any ideas for what we can do to improve it?
The whole Manifund debacle has left me quite demotivated. It really sucks that people are more interested in debating contentious community drama than in seemingly anything else this forum has to offer.
When I write biorisk-related things publicly I'm usually pretty unsure of whether the Forum is a good place for them. Not because of info-hazards, since that would gate things at an earlier stage, but because they feel like they're of interest to too small a fraction of people. For example, I could plausibly have posted Quick Thoughts on Our First Sampling Run or some of my other posts from https://data.securebio.org/jefftk-notebook/ here, but that felt a bit noisy?
It also doesn't help that detailed technical content gets much less attention than meta or community content. For example, three days ago I wrote a comment on @Conrad K.'s thoughtful Three Reasons Early Detection Interventions Are Not Obviously Cost-Effective, and while I feel like it's a solid contribution only four people have voted on it. On the other hand, if you look over my recent post history at my comments on Manifest, far less objectively important comments have ~10x the karma. Similarly the top level post was sitting at +41 until Mike bumped it last week, which wasn't even high enough that (before I changed my personal settings to boo... (read more)
I'm with Ozzie here. I think EA Forum would do better with more technical content even if it's hard for most people to engage with.
I'd be excited to have discussions of those posts here!
A lot of my more technical posts also get very little attention - I also find that pretty unmotivating. It can be quite frustrating when clearly lower-quality content on controversial stuff gets a lot more attention.
But this seems like a doom loop to me. I care much more about strong technical content, even if I don't always read it, than I do most of the community drama. I'm sure most leaders and funders feel similarly.
Extended far enough, the EA Forum will be a place only for controversial community drama. This seems nightmarish to me. I imagine most forum members would agree.
I imagine that there are things the Forum or community could do to bring more attention to the more technical posts.
On the funding-talent balance:
When EA was starting, there was a small amount of talent, and a smaller amount of funding. As one might expect, things went slowly for the first few years.
Then once OP decided to focus on X-risks, there was ~$8B potential funding, but still fairly little talent/capacity. I think the conventional wisdom then was that we were unlikely to be bottlenecked by money anytime soon, and lots of people were encouraged to do direct work.
Then FTX Future Fund came in, and the situation got even more out-of-control. ~Twice the funding. Projects got more ambitious, but it was clear there were significant capacity (funder and organization) constraints.
Then (1) FTX crashed, and (2) lots of smart people came into the system. Project capacity grew, AI advances freaked out a lot of people, and successful community projects helped train a lot of smart young people to work on X-risks.
But funding has not kept up. OP has been slow to hire for many x-risk roles (AI safety, movement building, outreach / fundraising). Other large funders have been slow to join in.
So now there's a crunch for funding. There are a bunch of smart-seeming AI people now who I bet could have gotten fun... (read more)
(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-chatGPT, it seems far more boring, and someone recommended I post it somewhere.)
Thoughts on the OpenAI Strategy
OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.
First, they say flat out that they're going for AGI.
Then, when they raised money in 2019, they had a clause saying that investors' returns would be capped at 100x their investment.
On Hacker News, one of their employees says,
You can read more about this mission on the charter:
... (read more)
Personal reflections on self-worth and EA
My sense of self-worth often comes from guessing what people I respect think of me and my work.
In EA... this is precarious. The most obvious people to listen to are the senior/powerful EAs.
In my experience, many senior/powerful EAs I know:
1. Are very focused on specific domains.
2. Are extremely busy.
3. Have substantial privileges (exceptionally intelligent, stable health, esteemed education, affluent/intellectual backgrounds.)
4. Display limited social empathy (ability to read and respond to the emotions of others)
5. Sometimes might actively try not to sympathize/empathize with many people, because they are judging them for grants and don't want to be biased. (I suspect this is the case for grantmakers.)
6. Are not that interested in acting as a coach/mentor/evaluator to people outside their key areas/organizations.
7. Don't intend or want others to care too much about what they think outside of cause-specific promotion and a few pet ideas they want to advance.
A parallel can be drawn with the world of sports. Top athletes can make poor coaches. Their innate talent and advantages often leave them detached from the experiences ... (read more)
My initial thought is that it is pretty risky/tricky/dangerous to depend on external things for a sense of self-worth? I know that I certainly am very far away from an Epictetus-like extreme, but I try to not depend on the perspectives of other people for my self-worth. (This is aspirational, of course. A breakup or a job loss or a person I like telling me they don't like me will hurt and I'll feel bad for a while.)
A simplistic little thought experiment I've fiddled with: if I went to a new place where I didn't know anyone and just started over, then what? Nobody knows you, and your social circle starts from scratch. That doesn't mean that you don't have worth as a human being (although it might mean that you don't have any worth in the 'economic' sense of other people wanting you, which is very different).
There might also be an intrinsic/extrinsic angle to this. If you evaluate yourself based on accomplishments, outputs, achievements, and so on, that has a very different feeling than the deep contentment of being okay as you are.
In another comment Austin mentions revenue and funding, but that seems to be a measure of things V... (read more)
I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.
I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.)
IMO you should be thinking about things like, how to do better work, but in the frame of "this is something I enjoy / consider important" rather than in the frame of "because otherwise I'm not worthy". It's also legitimate to want other people to appreciate and respect you for your work (I definitely have a strong desire for that), but IMO here also the right frame is "this is something I want" rather than "this is something that's necessary for me to be worth something".
Around discussions of AI & Forecasting, there seems to be some assumption like:
1. Right now, humans are better than AIs at judgemental forecasting.
2. When humans are better than AIs at forecasting, AIs are useless.
3. At some point, AIs will be better than humans at forecasting.
4. At that point, when it comes to forecasting, humans will be useless.
This comes from a lot of discussion and some research comparing "humans" to "AIs" in forecasting tournaments.
As you might expect, I think this model is incredibly naive. To me, it's asking questions like,
"Are ... (read more)
I really don't like the trend of posts saying that "EA/EAs need to | should do X or Y".
EA is about cost-benefit analysis. The phrases "need" and "should" imply binaries/absolutes and very high confidence.
I'm sure there are thousands of interventions/measures that would be positive-EV for EA to engage with. I don't want to see thousands of posts loudly declaring "EA MUST ENACT MEASURE X" and "EAs SHOULD ALL DO THING Y," in cases where these mostly seem like un-vetted interesting ideas.
In almost all cases I see the phrase, I think it would be much better replaced with things like;
"Doing X would be high-EV"
"X could be very good for EA"
"Y: Cost and Benefits" (With information in the post arguing the benefits are worth it)
"Benefits|Upsides of X" (If you think the upsides are particularly underrepresented)"
I think it's probably fine to use the word "need" either when it's paired with an outcome (EA needs to do more outreach to become more popular) or when the issue is fairly clearly existential (the US needs to ensure that nuclear risk is low). It's also fine to use should in the right context, but it's not a word to over-use.
See also EA should taboo "EA should"
Related (and classic) post in case others aren't aware: EA should taboo "EA should".
Lizka makes a slightly different argument, but a similar conclusion
Strong disagree. If the proponent of an intervention/cause area believes that advancing it is extremely high-EV, such that it would be very imprudent for EA resources not to advance it, they should use strong language.
I think EAs are too eager to hedge their language and use weak language regarding promising ideas.
For example, I have no compunction saying that Profit for Good (companies with charities in the vast-majority shareholder position) needs to be advanced by EA, in that I believe not doing so results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.
https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the
Around prediction infrastructure and information, I find that a lot of smart people make some weird (to me) claims. Like:
There are definitely ways to steelman these, but I think on their face they represent oversimplified models of how information leads to changes.
I'll introduce a different model, which I think is much more sensible:
- Whenever some party advocates for belief
... (read more)
Some musicians have multiple alter-egos that they use to communicate information from different perspectives. MF Doom released albums under several alter-egos; he even used these aliases to criticize his previous aliases.
Some musicians, like Madonna, just continued to "re-invent" themselves every few years.
Youtube personalities often feature themselves dressed as different personalities to represent different viewpoints.
It's really difficult to keep a single understood identity, while also conveying different kinds of information.
Narrow identities are important for a lot of reasons. I think the main one is predictability, similar to a company brand. If your identity seems to dramatically change hour to hour, people wouldn't be able to predict your behavior, so fewer could interact or engage with you in ways they'd feel comfortable with.
However, narrow identities can also be suffocating. They restrict what you can say and how people will interpret that. You can simply say more things in more ways if you can change identities. So having multiple identities can be a really useful tool.
Sadly, most academics and intellectuals can only really have one public identity.
---
EA research... (read more)
I like the idea of AI Engineer Unions.
Some recent tech unions, like the one in Google, have been pushing more for moral reforms than for payment changes.
Likewise, a bunch of AI engineers could use collective bargaining to help ensure that safety measures get more attention, in AI labs.
There are definitely net-negative unions out there too, so it would need to be done delicately.
In theory there could be some unions that span multiple organizations. That way one org couldn't easily "fire all of their union staff" and hope that recruiting others would b... (read more)
Could/should altruistic activist investors buy lots of Twitter stock, then pressure them to do altruistic things?
---
So, Jack Dorsey just resigned from Twitter.
Some people on Hacker News are pointing out that Twitter has had recent issues with activist investors, and that this move might make those investors happy.
https://pxlnv.com/linklog/twitter-fleets-elliott-management/
From a quick look... Twitter stock really hasn't been doing very well. It's almost back at its price in 2014.
Square, Jack Dorsey's other company (he was CEO of two), has done much better. Market cap of over 2x Twitter ($100B), huge gains in the last 4 years.
I'm imagining that if I were Jack... leaving would have been really tempting. On one hand, I'd have Twitter, which isn't really improving, is facing activist investor attacks, and worse, apparently is responsible for global chaos (which I'd barely know how to stop). And on the other hand, there's this really tame payments company with little controversy.
Being CEO of Twitter seems like one of the most thankless big-tech CEO positions around.
That sucks, because it would be really valuable if some great CEO could improve Twitter, for the sake of humanity.
One smal... (read more)
One futarchy/prediction market/coordination idea I have is to find some local governments and see if we could help them out by incorporating some of the relevant techniques.
This could be neat if it could be done as a side project. Right now effective altruists/rationalists don't actually have many great examples of side projects, and historically, "the spare time of particularly enthusiastic members of a jurisdiction" has been a major factor in improving governments.
Berkeley and London seem like natural choices given the communities there. I imagine it could even be better if there were some government somewhere in the world that was just unusually amenable to both innovative techniques, and to external help with them.
Given that EAs/rationalists care so much about global coordination, getting concrete experience improving government systems could be interesting practice.
There's so much theoretical discussion of coordination and government mistakes on LessWrong, but very little discussion of practical experience putting these ideas into action.
(This clearly falls into the Institutional Decision Making camp)
Facebook Thread
On AGI (Artificial General Intelligence):
I have a bunch of friends/colleagues who are either trying to slow AGI down (by stopping arms races) or align it before it's made (and would much prefer it be slowed down).
Then I have several friends who are actively working to *speed up* AGI development. (Normally just regular AI, but often specifically AGI)[1]
Then there are several people who are apparently trying to align AGI but who are also effectively speeding it up; they claim that the trade-off is probably worth it (to highly varying degrees of plausibility, in my rough opinion).
In general, people seem surprisingly chill about this mixture? My impression is that people are highly incentivized to not upset people, and this has led to this strange situation where people are clearly pushing in opposite directions on arguably the most crucial problem today, but it's all really nonchalant.
[1] To be clear, I don't think I have any EA friends in this bucket. But some are clearly EA-adjacent.
More discussion here: https://www.facebook.com/ozzie.gooen/posts/10165732991305363
When discussing forecasting systems, sometimes I get asked,
The obvious answer is,
Or,
For example,
- We make a list of 10,000 potential government forecasting projects.
- For each, we will have a later evaluation for “how valuable/successful was this project?”.
- We then open forecasting ques... (read more)
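(A minimal sketch of the kind of pipeline I mean, with hypothetical project names, forecasters, and scores; the point is just that forecasts get resolved against the later evaluations:)

```python
# Sketch: forecasters predict each project's eventual evaluation score; once
# the later evaluation arrives, forecasts are scored against it.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    forecasts: dict[str, float] = field(default_factory=dict)  # forecaster -> predicted score (0-10)
    evaluation: float | None = None                            # filled in later by evaluators

projects = [Project("Agency X forecasting pilot"),
            Project("Agency Y early-warning dashboard")]

# Forecasting stage: forecasters predict the eventual evaluation score.
projects[0].forecasts = {"alice": 6.5, "bob": 4.0}
projects[1].forecasts = {"alice": 3.0, "bob": 7.5}

# Evaluation stage, months or years later: evaluators assign the actual score.
projects[0].evaluation = 5.5
projects[1].evaluation = 2.5

# Resolution: score each forecaster by mean absolute error against the evaluations.
errors: dict[str, list[float]] = {}
for proj in projects:
    for who, prediction in proj.forecasts.items():
        errors.setdefault(who, []).append(abs(prediction - proj.evaluation))

for who, errs in errors.items():
    print(f"{who}: mean absolute error {sum(errs) / len(errs):.2f}")
```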
I’m sort of hoping that 15 years from now, a whole lot of common debates quickly get reduced to debates about prediction setups.
“So, I think that this plan will create a boom for the United States manufacturing sector.”
“But the prediction markets say it will actually lead to a net decrease. How do you square that?”
“Oh, well, I think that those specific questions don’t have enough predictions to be considered highly accurate.”
“Really? They have a robustness score of 2.5. Do you think there’s a mistake in the general robustness algorithm?”
---
Perhaps 10 years ... (read more)
Epistemic status: I feel positive about this, but note I'm kinda biased (I know a few of the people involved, and work directly with Nuño, who was funded).
ACX Grants were just announced: ~$1.5 million, from a few donors that included Vitalik.
https://astralcodexten.substack.com/p/acx-grants-results
Quick thoughts:
- In comparison to the LTFF, I think the average grant is more generically exciting, but less effective altruist focused. (As expected)
- Lots of tiny grants (<$10k), $150k is the largest one.
- These rapid grant programs really seem great and I look forward to the... (read more)
The following things could both be true:
1) Humanity has a >80% chance of completely perishing in the next ~300 years.
2) The expected value of the future is incredibly, ridiculously, high!
The trick is that the expected value of a positive outcome could be just insanely great. Like, dramatically, incredibly, totally, better than basically anyone discusses or talks about.
Expanding to a great deal of the universe, dramatically improving our abilities to convert matter+energy to net well-being, researching strategies to expand out of the universe.
A 20%, or e... (read more)
Opinions on charging for professional time?
(Particularly in the nonprofit/EA sector)
I've been getting more requests recently to have calls/conversations to give advice, review documents, or be part of extended sessions on things. Most of these have been from EAs.
I find a lot of this work fairly draining. There can be surprisingly high fixed costs to having a meeting. It often takes some preparation, some arrangement (and occasional re-arrangement), and a fair bit of mix-up and change throughout the day.
My main work requires a lot of focus, so the context s... (read more)