I think this is unacceptable, and unless serious evidence appears that Ben behaved dishonestly in a way nobody seems to currently be claiming (e.g. if he had personally doctored the texts from Kat to add incriminating phrases), I think filing this kind of lawsuit would be cause for the EA community to permanently cut all ties with Nonlinear and with Emerson in particular. I believe this even if it turns out Nonlinear has evidence that the main claims in the post are false.
[Edit 12/15/23 -- Nonlinear's update makes stronger claims about Ben's actions than I'd seen anyone make when I wrote this, so I'm crossing out "in a way nobody seems to currently be claiming" because it's no longer accurate. So the applicability of this argument hinges a lot more on the "unless" clause now. ]
Reasoning: I think the question of whether Ben should have waited a week is difficult, and I have felt differently about it at different times over the past few days. But the question of whether the choice he made was justifiable is easy: the people he spoke to seem to be terrified of retaliation, and he has at least two strong pieces of direct evidence (Kat's text, Emerson's lawsuit threat) and several pieces of indirect evidence (Emerson's stories about behavior that, while legal, strikes me as highly unethical; Kat offering a very vulnerable Alice housing only under the condition that she not say mean things about Nonlinear; some of the Glassdoor comments) that these fears of retaliation are well-founded. The fear that Emerson or Nonlinear might retaliate in some way in the intervening week to stop the post from being posted seems very reasonable to me, and acting on this fear is justifiable even if it overall turned out to be the wrong choice.
Even if you think Ben made the wrong decision (I currently think maybe he did?), the question is not whether he was correct but whether his choice was so unacceptable that it's appropriate to respond in a way that has a high risk of directly financially ruining him (defamation lawsuits are notoriously a tool used by abusers to silence their critics because the costs of defense are so high, and given Emerson's business experience I am unwilling to believe he doesn't know this). It clearly wasn't, and I think it's imperative we make clear that using expensive lawsuits to win arguments is utterly unacceptable in a community like this one.
I want to second this! Not a mental health expert, but I have depression and so have spent a fair amount of time looking into treatments / talking to doctors / talking to other depressed people / etc.
I would consider a treatment extremely good if it decreased the amount of depression a typical person experienced by (say) 20%. If a third of people moved from the "depression" to "depression-free" category I would be very, very impressed. Ninety-five percent of people moving from "depressed" to "depression free" sets off a lot of red flags for me, and makes me think the program has not successfully measured mental illness.
(To put this in perspective: 95% of people walking away depression-free would make this far more effective than any mental health intervention I'm aware of at any price point in any country. Why isn't anyone using this to make a lot of money among rich American patients?)
honestly re-reading my comment, that is a very fair question. That part was very poorly phrased.
I think what I had in mind is that the issue with continuous DID goes away if you assume constant effect sizes that are linear in the treatment dose. When this doesn't hold, you start to estimate some weird parameter, which Goodman-Bacon, Sant'Anna, and Callaway describe in detail in the link you provided.
I like this paper because it tells us what happens under misspecification, which is exciting because in practice everything is misspecified all the time! But a concern I have with interpreting it is that I think the problem is inherent to linear regression, not the DID case specifically, which means we should really have this kind of problem in mind any time anybody linearly controls for anything.
(So maybe a better way of phrasing this would have been "we should be this nervous all the time, except in cases where misspecification doesn't matter" rather than "it isn't a huge issue here.")
I don't think the recent diff-in-diff literature is a huge issue here -- you're computing a linear approximation, which might be bad if the actual effect size isn't linear, but this is just the usual issue with linear regression. The main problem the recent diff-in-diff literature addresses is that terrible things can happen if a) effects are heterogeneous (probable here!) and b) treatment timing is staggered (I'm not super concerned here since the analysis is so coarse and assumes roughly similar timing for all units getting potatoes.)
They try to establish something like a pretrends analysis in Table II, but I agree that it would be helpful to have a lot more -- an event-study-type plot would be nice, for example. In general diff-in-diff is a nice way to get information about really hard-to-answer questions, but I wouldn't take the effect size estimates too literally.
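To make the staggered-timing concern above concrete, here's a minimal toy example (all numbers invented for illustration) of how a two-way fixed effects regression can badly underestimate the average effect when treatment is staggered and effects grow over time -- the already-treated "early" unit acts as a bad control for the "late" one:

```python
import numpy as np

# Two units: "early" treated at t=2, "late" treated at t=3; periods t=1,2,3.
# Dynamic treatment effect: +1 in the first treated period, +2 in the second.
# True average effect across the three treated (unit, period) cells: (1+2+1)/3 = 4/3.
units = np.array([0, 0, 0, 1, 1, 1])           # 0 = early adopter, 1 = late adopter
periods = np.array([1, 2, 3, 1, 2, 3])
D = np.array([0, 1, 1, 0, 0, 1], dtype=float)  # treatment indicator
y = np.array([0, 1, 2, 0, 0, 1], dtype=float)  # outcomes (unit/time effects set to zero)

# Two-way fixed effects OLS: y ~ intercept + unit dummy + period dummies + tau * D
X = np.column_stack([
    np.ones(6),
    (units == 1).astype(float),
    (periods == 2).astype(float),
    (periods == 3).astype(float),
    D,
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
tau_twfe = coef[-1]

true_att = y[D == 1].mean()  # baseline outcome is 0, so treated effects are 1, 2, 1
print(f"TWFE estimate: {tau_twfe:.3f} vs. true average effect: {true_att:.3f}")
# TWFE recovers 0.5, well below the true 4/3.
```

With homogeneous, non-dynamic effects the TWFE coefficient would match the true effect exactly; it's the combination of staggered timing and growing effects that produces the gap.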
My initial thoughts for the first question, quoted from the linked post (so not all examples are EA-specific):
So if I want people to grow in the ways I’ve grown, I think I need to do a lot less arguing and a lot more applying. Less “here’s why I’m right” and more “here’s a question that’s important to me we figure out together.” Less “agree with my worldview” and more “walk this walk with me.” A few ideas have come to mind so far:
Framing: instead of a “do you believe in X?” or “do you think X is important?” conversation, starting a “what do you think we can do about X?” one.
Sincerity: actually listening to what other people say and learning from it, rather than “listening to respond.” You’ll understand your friend (and how to love them) better, and might be convicted of something you could change.
Patience: not every conversation needs to be “the big one,” and you might not see the fruits immediately. One of the biggest moments in my “conversion” towards Effective Altruism came after a summer missions trip, when a friend asked me whether I thought paying for a group of college students to travel was the best way to do good with the money we’d raised. She listened attentively to my answer and didn’t press the point, but I thought about it for a long time afterwards.
Creativity: the question doesn’t always have to be “here’s why we should care about the global poor.” Maybe it can be “I read a really cool paper this week about the effects of this nonprofit” or “I’m trying to spend less money on food. Do you have any advice?” or “Can we pray about the violence in Ethiopia this week?”. Give people a chance to care about things that are important to you!
Specificity: instead of “our current immigration policy is evil”, it might be easier to discuss something more concrete — e.g. “I’m really upset that so many of the refugees we’re requiring to Remain in Mexico are being assaulted and killed. Is there anything we as a church can do to help them?”
These thoughts are super helpful for understanding where you're coming from, so thank you!! I really appreciate you taking the time to write them all out -- my thoughts will be much shorter because I don't have much to add, not because they weren't thought-provoking and interesting!
I think we have somewhat different beliefs about what makes the speaker's actions wrong -- for me it lands very far to one side of the "clearly evil" to "clearly good" trolley-problem spectrum, and its wrongness is a) very clear to me, and b) very hard for me to pin down a reason for: I don't really find any of the answers I can think of satisfying (including the ones in the works you mentioned -- the fact that plans can fail and change the planner in unforeseen ways is a beautiful and important observation, but in this case the plan more or less succeeds and it still feels evil to me.) I find this combination fascinating, but I can see how this comes across rather differently if you don't share this dissatisfaction, which it sounds like is less universal than I believed.
Unsong sounds like a very interesting piece of writing, I will have to check it out!
You both raise very good points, and I think you've convinced me there are ways to do this that don't come across as propaganda.
At the same time, I would still stand by my stance that having more EA villains in fiction would overall be a good thing for EA. Good villains are thought-provoking even though their actions are evil -- Killmonger in Black Panther and Karli in the Falcon/Winter Soldier series come to mind as pop culture characters who've made me think much more than the heroes in their respective films/shows.
I think that the rationalist/EA fiction I've seen always falls into very propaganda-adjacent territory rather than the versions you've described -- things like HPMOR, which I've never heard good feedback on from anybody who wasn't essentially already in the rationalist community. (I'm sure such feedback exists, but the response in my friend groups has been overwhelmingly negative.) It feels to me like the goal of attracting people outside the community by portraying EA/rationality as positively as possible is self-defeating, because it results in stories that just aren't very interesting to people who are here for a good story rather than an exposition of the author's philosophy.
I would much prefer a story that works as a story, even if it's from the perspective of a villain and doesn't give you a clear authorial point of view on any of the relevant questions. (Whether or not this works as a story is of course a separate question I'm too biased to judge.) My general sense from my test readers has been that the questions (was what Mr. Green was doing in fact wrong? What's wrong with the speaker's super-harsh utilitarianism?) are capable of starting interesting EA-type conversations, and that we can trust readers to have interesting and ethical thoughts on their own.
To be completely honest, I think that "making people reading it think more kindly of effective altruism" is a good goal for creative nonfiction, but not a very helpful goal for fiction. My experience with writing fiction (mostly plays) is that fiction is a really poor platform for convincing people of ideas (I almost always zone out if I feel like a playwright is trying to convince me to believe something), but it's a really good platform for raising difficult questions that readers have to think through themselves. I suppose my hope with this villain is to confront people somewhat graphically with questions that are important and answers to those questions that are terrible, in the hopes of sparking further thought rather than coming to a specific answer.
[Edited to add that I am the author of the above piece, not sure if that is clear from the rest of the comment]
I fully agree with your first statement and disagree with the second. I think maybe some of this is a disagreement on the goal of stories: I really don't like morality plays where I feel like the author is trying to tell me what to believe. I much prefer stories of flawed people ending up in terrible places or doing terrible things that force me to figure out for myself where the protagonist went wrong. This is, of course, a personal preference and not something that's "true" or "false".
But I guess more to the point I don't think that the typical person will find themselves convinced to join EA just because somebody in a story did good EA things. I think the path to changing one's worldview is long and complicated and comes more from tricky thought-provoking discussions than directly absorbing the worldviews of fictional characters.
I find it very unlikely that this story would lead anybody to think that buying mosquito nets will lead you to commit this protagonist's actions. But, at least anecdotally, I've found that this story starts conversations about why the protagonist is wrong and what we as ordinary individuals might or might not owe to people dying of malaria.
I don't think this is right -- whether it's okay to sue Ben surely depends on the information Ben had at the time of making his decision, not information he didn't have access to?