lilly
No shade to the mods, but I'm bearish on mods' ability to fairly determine which issues are "difficult to discuss rationally," because this is genuinely hard and inevitably subject to bias. (The lack of moderation around the Nonlinear posts, the Manifest posts, the Time article on sexual harassment, and so on makes me think this standard is hard to enforce consistently.) Accordingly, I would favor relying on community voting to determine which posts/comments are valuable and constructive, except in rare cases. (Obviously, this isn't a perfect solution either, but it at least moves away from the arbitrariness of the "difficult to discuss rationally" standard.)

Yeah, just to be clear, I am not arguing that the "topics that are difficult to discuss rationally" standard should be applied to posts about community events, but instead that there shouldn't be a carveout for political issues specifically. I don't think political issues are harder to discuss rationally or less important.

This is weird to me. There are so many instances of posts on this forum having a “strong polarizing effect… [consuming] a lot of the community’s attention, and [leading] to emotionally charged arguments.” The several posts about Nonlinear last year strike me as a glaring example of this.

US presidential candidates’ positions on EA issues are more important to EA—and to our ability to make progress on these issues—than niche interpersonal disputes affecting a handful of people. In short, posts about politics seem to be held to a higher standard than other posts. I do not think this double standard is conducive to healthy discourse, nor does it better position the EA community to achieve its goals.

Two separate points:

  1. I am one of those people who, having seen the Twitter post with the letter, scanned the Forum home page for the letter and didn't see it! And regardless of what you think of the letter, I think the discussion in the comments here is useful; I am glad I did not miss it. So I agree with what others have said—there are real downsides to downvoting things just because you disagree with them; I would encourage people not to do this. (And if you downvoted this because you don't think a Stanford professor making a sincere effort to engage with EA ideas is valuable/warrants engagement then... yeah, I just disagree. But I would be eager to hear downvoters' best defense of doing this.)
  2. Regarding the letter itself: one thing I am struck by is the number of claims in this letter that go without citations. This is frustrating to me, especially given that the letter repeatedly appeals to academic authority. As just one example, claims like "It has lots of premises that GiveWell says depend on guesswork, and it runs against some of the literature in fields like development economics" warrant a citation—what literature in development economics?

I think there’s a lot of truth to this; the part about sanctifying criticism and critical gadflies especially resonated with me. I think it is rational to ~ignore a fair bit of criticism, especially online criticism, though this is easier said than done.

Two pieces of advice I encountered recently that I’m trying to implement more in my life (both a bit trite, but perhaps helpful as heuristics):

  1. don’t take criticism from someone you wouldn’t take advice from
  2. when you write/post/say something, have a panel of people in mind whose opinions you most care about/who you are speaking to; do not try to appease/appeal to/convince everyone

Despite working in global health myself, I tend to moderately favor devoting additional funding to animal welfare vs. global health. There are two main reasons for this:

  1. Neglectedness: global health receives vastly more funding than animal welfare.
  2. Importance: the level of suffering and cruelty that we inflict on non-human animals is simply unfathomable.

I think the countervailing reason to instead fund global health is:

  3. Tractability: my sense is that, due in part to the far fewer resources that have gone into investigating animal welfare interventions and policy initiatives, it could be difficult to spend $100m in highly impactful ways. (Whereas in global health, there would be obviously good ways to use this funding.) That said, this perhaps just suggests that a substantial portion of additional funding should go towards research (e.g., creating fellowships to incentivize graduate students to work on animal welfare).

It’s super cool to see USAID and OP partnering very publicly on such an important project. In addition to the obvious good this will do via the project’s direct impact on lead exposure, I’m glad to see such a powerful and reputable government agency implicitly endorsing OP as an organization. I hope this will help legitimize some of OP’s other important work, and pave the way for similar partnerships in other arenas.

Looking forward to this! I hope there will also be some "lessons learned"—it seems like Leverage included many EA-oriented people who prided themselves on their altruistic tendencies, rational thinking, willingness to question/subvert certain social norms, and so on. I'd be curious to hear involved parties' reflections on how similarly well-motivated people can avoid inadvertently veering off the rails in their pursuit of ambitious/weird projects.

Thanks; this is helpful, and I appreciate your candor. I’m not questioning whether 80k’s advising overall is valuable, and am thus willing to grant stuff like “most of the shifts people make as a result of 80k advising are +EV”. My reservations mainly pertain to the following:

  1. does this grant effectively incentivize referrals?
  2. are those referrals of high quality?
  3. contingent on 80k agreeing to meet with a referred party, is that party liable to make career shifts based on the advising they receive?
  4. (to a lesser extent) will the recipients of the career grants use the money well?

I get that it’s easy to be critical of (1) post hoc, but I think we should subject the general model of “give EAs a lot of money to do things that are easy and that have very uncertain, difficult-to-quantify value” to a high degree of scrutiny, because (as best I can tell based on a small n) this: (a) hasn’t tended to work that well, (b) is self-serving, and (c) often seems to be held to a lower evidentiary standard than other kinds of interventions EAs fund. (A countervailing piece of evidence is that OP does this for hiring referrals, and they presumably do have good evidence re: efficacy, although the benefits there also seem much clearer for the reasons you mention.)

Regarding (2), my worry is that the people who get referred as a result of this program will be importantly different from the general population of people who receive 80k career advising. This is because I suspect highly engaged EAs will have already applied for or received 80k advising. Conversely, people who are not familiar enough with EA to have previously heard of 80k advising—which I think is a low bar, given many people learn about EA via 80k—probably won’t have successful applications. Thus, my model of the median successful referral is “someone who has heard of 80k but not previously opted to pursue 80k advising.” Which brings me to (3): by virtue of these people having not previously opted into a free service, I suspect that they’re less likely to benefit from it. In other words, I suspect that people referred as a result of this program will be less likely (or less able) to make changes as a result of their advising meetings. (Or at least this was the conclusion I came to in deciding who to send my referral links to.)

Regarding (4), I haven’t seen evidence to support the claim that “very engaged and agentic EAs… will use $5,000 very well to advance their careers and create good down the line,” and while this seems prima facie plausible, I don’t think that is the standard of evidence we should apply to this—or any—intervention. (This is a less important point, because if this program generated tons of great referrals, it wouldn’t really matter how the $50k was spent.)

I am a big fan of 80k, and have found talking to 80k advisors helpful. But this program feels reminiscent of the excesses of pre-FTX-implosion EA, in that this is a lot of money to be giving people to do something that is not very hard and (in my view) of questionable value, though maybe I’m underestimating the efficacy of 80k’s filtering process, how much these conversations will shift the career paths of the referred parties, how well people will use the career grants, or something else. I’m sure a lot of thought went into doing this, so I’d be curious to see the BOTEC that led to these career grants.
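
To make that concrete, here is the rough shape of the BOTEC I'd want to see. Only the $5,000 grant size and the ~$50k total come from the program itself; every other number below is a placeholder assumption of mine, not 80k's:

```python
# Illustrative BOTEC for the referral program.
# Placeholder assumptions throughout; not 80k's actual figures.

referrals_funded = 10        # implied by $50k total / $5k per grant
cost_per_referral = 5_000    # career grant per successful referral (from the program)
p_career_shift = 0.15        # assumed: chance a referred advisee meaningfully shifts careers
value_per_shift = 300_000    # assumed: expected impact (in $) of one such shift

total_cost = referrals_funded * cost_per_referral
expected_value = referrals_funded * p_career_shift * value_per_shift

print(f"Total cost: ${total_cost:,}")
print(f"Expected value: ${expected_value:,.0f}")
print(f"Benefit/cost ratio: {expected_value / total_cost:.1f}x")
```

Even in a sketch like this, the assumed probability of a career shift and the assumed value per shift do nearly all the work—which is exactly why I'd want to see the actual inputs.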
