I've been writing a few posts critical of EA over at my blog. They might be of interest to people here:
- Unflattering aspects of Effective Altruism
- Alternative Visions of Effective Altruism
- Auftragstaktik
- Hurdles of using forecasting as a tool for making sense of AI progress
- Brief thoughts on CEA’s stewardship of the EA Forum
- Why are we not harder, better, faster, stronger?
- ...and there are a few smaller pieces on my blog as well.
I appreciate comments and perspectives anywhere, but I prefer to receive them on the individual posts at my blog, since I have a large number of disagreements with the EA Forum's approach to moderation, curation, aesthetics, and cost-effectiveness.
Important Update: I've made some changes to this comment given the feedback from Nuño & Arepo.[1] I was originally using strikethroughs, but this made the comment very hard to read, so I've instead edited it inline. The comment is therefore now fairly different from the original one (though I think that's for the better).
On reflection, I think that Nuño and I are very different people, with different backgrounds, experiences with EA, and approaches to communication. This leads to a large 'inferential distance' between us. For example:
or
While some of my interpretations were obviously not what Nuño intended to communicate, I think this is partly due to Nuño's bellicose framings (his word; see footnote 3 of Unflattering aspects of Effective Altruism), which were unhelpful for productive communication on a charged issue. I still maintain that EA is primarily a set of ideas,[3] not institutions, and that it's important to make this distinction when criticising EA organisations (or 'The EA Machine'). In retrospect, I wonder if the post should have been titled something like "Unflattering aspects of how EA is structured", which I'd agree with in many respects.
I wasn't sure what to make of this, personally. I appreciate a valued member of the community offering criticism of the establishment/orthodoxy, but some of this just seemed... off to me. I've weakly down-voted, and I'll try to explain some of the reasons why below:
Nuño's criticism of the EA Forum seems to be:
But the examples Nuño gives in the Brief thoughts on CEA’s stewardship of the EA Forum post, especially Sabs, seem to me to be people being incredibly rude and not contributing to the Forum in helpful or especially truth-seeking ways. Nuño does provide some explanation (see the links Arepo provides), but not in the 'Unflattering Aspects' post, and I think that causes confusion. Even in Nuño's comment on another chain, I don't understand summarising their disagreement as "I disagree with the EA Forum's approach to life". Nuño has since changed that phrasing, and I think the new wording is better.
Still, it seemed like a very odd turn of phrase to use initially, and one that was unproductive to getting their point across, which is one of my other main concerns about the post. Some of the language in Unflattering aspects of Effective Altruism appeared to me as hostile, and it doesn't provide much context for readers. For example:[4] I don't think the Forum is "now more of a vehicle for pushing ideas CEA wants you to know about", I don't think OpenPhil uses "worries about the dangers of maximization to constrain the rank and file in a hypocritical way", and I don't think that one needs to "pledge allegiance to the EA machine" in order to be considered an EA. It's just not the EA I've been involved with, and I'm definitely not part of the 'inner circle' and have no special access to OpenPhil's attention or money. I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant's What is Effective Altruism? How could it be improved? post.
There are some parts of the essay where Nuño and I very much agree. I think the points about the leadership not making itself accountable to the community are very valid, and a key part of what Third Wave Effective Altruism should be. I think depicting it as a "leadership without consent" is pointing at something real, and in the comments on Nuño's blog Austin Chen says a lot that makes sense. I agree with Nuño that the 'OpenPhil switcheroo' phenomenon is concerning and bad when it happens. Maybe this is just a semantic difference in what Nuño and I mean by 'EA', but to me EA is more than OpenPhil. If tomorrow Dustin decided to wind down OpenPhil in its entirety, I don't think the arguments in Famine, Affluence, and Morality would lose their force, or that factory farming would become any less of a moral catastrophe, or that we would no longer have a duty to act prudently toward future generations.
Furthermore, while criticising OpenPhil and EA leadership, Nuño appears to claim that these organisations need to do more 'unconstrained' consequentialist reasoning,[5] whereas my intuition is that many in the community see the failure of SBF/FTX as a case where that form of unconstrained consequentialism went disastrously wrong. While many of you may be very critical of EA leadership and OpenPhil, I suspect many of you will be critiquing that orthodoxy from exactly the opposite direction. This is probably the weakest concern I have with the piece though, especially on reflection.
The edits are still under construction - I'd appreciate everyone's patience while I finish them up.
I'm actually not sure what the right interpretation is
And perhaps the actions they lead to if you buy moral internalism
I think it's likely that Nuño means something very different by this phrasing than I do, but I think the mix of ambiguity/hostility can lead these extracts to be read in this way
Not to say that Nuño condones SBF or his actions in any way. I think this is just another case of where someone's choice to get off the 'train to crazy town' can be viewed as another's 'cop-out'.
Hey, thanks for the comment. Indeed, something I was worried about with the later post was whether I was being a bit unhinged (but the converse is: am I afraid to point out dynamics that I think are correct?). I dealt with this by first asking friends for feedback, then posting it but not distributing it very widely; then, once I got some comments (some of them private) saying that this also corresponded to other people's impressions, I decided to share it more widely.
You are picking on the weakest example. The strongest one might be Sapphire. A more recent one might have been John Halstead, who had a bad day, and despite his longstanding contributions to the community was treated with very little empathy and left the forum.
I think this paragraph misrepresents me:
So if leadership has priorities different from the rest of the movement, the rest of the movement should be more reluctant to follow. But this is for people to decide individually, I think.
You can see some examples in section 5.
I think the strongest version of my current beliefs is that quantification is underdeployed on the margin and that it can unearth Pareto improvements. This is joined with an impression that we should generally be much more ambitious. This doesn't require me to believe that more maximization will always be good, rather that, at the current margin, more ambition is.
Really appreciate your reply Nuño, and apologies if I've misrepresented you, or if I'm coming across as overly hostile. I'll edit my original comment given your & Arepo's responses. Part of why I posted my comment (even though I was nervous to) is that you're a highly valued member of the community[1], and your criticisms are listened to and carry weight. I am/was just trying to do my part to kick the tires, and distinguish criticisms I think are valid/supported from those which are less so.
On the object level claims, I'm going to come over to your home turf (blog) and discuss it there, given you expressed a preference for it! Though if you don't think it'll be valuable for you, then by all means feel free to not engage. I think there are actually lots of points where we agree (at least directionally), so I hope it may be productive, or at least useful for you if I can provide good/constructive criticism.
I very much value you and your work, even if I disagree
I think much of this criticism is off. There are things I would disagree with Nuno on, but most of what you're highlighting doesn't seem to fairly represent his actual concerns.
He does. Also, I suspect his main concern is with people being banned rather than having their posts moderated.
I don't know what Nuno actually believes, but he carefully couches both of these as hypotheticals, so I don't think you should cite them as things he believes. (In the same section, he hypothetically imagines 'What if EA goes (¿continues to go?) in the direction of being a belief that is like an attire, without further consequences. People sip their wine and participate in the petty drama, but they don’t act differently.' - which I don't think he advocates.)
Also, you're equivocating between the claim that EA is too naive (which he certainly seems to believe), that it's too consequentialist (which I suspect but don't know that he believes), that it ignores common sense (which I imagine he believes), what he's actually said he believes - that it should optimise more vigorously - and the hypothetical you quote.
I'm not sure what you want here - his blog is full of criticisms of EA organisations, including those linked in the OP.
He literally links to why he thinks their priorities are bad in the same sentence.
I don't think it's reasonable to assert that he conflates them in a post that estimates the degree to which OP money dominates the EA sphere, that includes the header 'That the structural organization of the movement is distinct from the philosophy', and that states 'I think it makes sense for the rank and file EAs to more often do something different from EA™'. I read his criticism as being precisely that EA, the non-OP part of the movement, has a lot of potential value, which is being curtailed by relying too much on OP.
I think you're misrepresenting the exact sentence you quote, which contains the modifier 'to constrain the rank and file in a hypocritical way'. I don't know how in favour of maximisation Nuno is, but what he actually writes about in that section is the ways OP has pursued maximising strategies of their own that don't seem to respect the concerns they profess.
You don't have to agree with him on any of these points, but in general I don't think he's saying what you think he's saying.
Hey Arepo, thanks for the comment. I wasn't trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I'm going to make some edits to clear up confusion I might have introduced. Some quick points of note:
Perhaps part of this is that, while I did read some of Nuño's other blogs/posts/comments, there's a lot of context which is (at least from my background) missing in this post. For some people the post really seems to have captured their experience, so they can supply that context themselves, but I don't share that experience, so I've had trouble doing that here.
I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it's like: oh well, I guess I'm now optimizing for getting funding from Open Phil/getting hired at this limited set of institutions/etc. I think the switcheroo is bad. And I think that conceptualizing EA as a set of beliefs is just unhelpful for noticing this dynamic.
But I'm repeating myself, because this is one of the main threads in the post. I have the weird feeling that I'm not being your interlocutor here.
Hey Nuño,
I've updated my original comment, hopefully to make it more fair and reflective of the feedback you and Arepo gave.
I think we actually agree in lots of ways. I think that the 'switcheroo' you mention is problematic, and a lot of the 'EA machinery' should get better at improving its feedback loops both internally and with the community.
I think at some level we just disagree with what we mean by EA. I agree that thinking of it as a set of ideas might not be helpful for this dynamic you're pointing to, but to me that dynamic isn't EA.[1]
As for not being an interlocutor here, I was originally going to respond on your blog, but on reflection I think I need to read (or re-read) the blog posts you've linked in this post to understand your position better and look at the examples/evidence you provide in more detail. Your post didn't connect with me, but it did for a lot of people, so I think it's on me to go away and try harder to see things from your perspective and do my bit to close that 'inferential distance'.
I wasn't intentionally trying to misrepresent you or be hostile, and to the extent I did, I apologise. I very much value your perspectives and hope to keep reading them in the future, and for the EA community to reflect on them and improve.
To me, EA is not the EA machine, and the EA machine is not (though it may be 90% funded by) OpenPhil. They're obviously connected, but not the same thing. In my edited comment, the 'what if Dustin shut down OpenPhil' scenario illustrates this.
I see saying that I disagree with the EA Forum's "approach to life" rubbed you the wrong way. It seemed low cost, so I've changed it to something more wordy.
I disagree with this. I think that by reducing the ideas in my post to those of that previous one, you are missing something important in the reduction.
(Comment is mostly cross-posted comment from Nuño's blog.)
In "Unflattering aspects of Effective Altruism", you write:
The claim that Open Philanthropy is hypocritical re: the unilateralist's curse doesn't quite make sense to me. To explain why, consider the following two scenarios.
Scenario 1: you and 999 other smart, thoughtful people have a button. You know there are 1000 people with such a button. If anyone presses the button, all mosquitoes will disappear.
Scenario 2: you and you alone have a button. You know that you're the only person with such a button. If you press the button, all mosquitoes will disappear.
The unilateralist's curse applies to Scenario 1 but *not* Scenario 2. That's because, in Scenario 1, your estimate of the counterfactual impact of pressing the button should be your estimate of the expected utility of all mosquitoes disappearing, *conditioned on no one else pressing the button*. In Scenario 2, where no one else has the button, your estimate of the counterfactual impact of pressing the button should be your estimate of the (unconditional) expected utility of all mosquitoes disappearing.
So, at least the way I understand the term, the unilateralist's curse refers to the fact that taking a unilateral action is worse than it naively appears, *if other people also have the option of taking the unilateral action*.
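To make that concrete, here's a minimal Monte Carlo sketch (my own toy numbers, nothing from the post): the action's true value is drawn from a standard normal, each button-holder sees an independent noisy estimate of it, and presses iff their estimate is positive.

```python
import random

# Toy sketch of the unilateralist's curse; all parameters are illustrative
# assumptions, not taken from the post.
random.seed(0)
TRIALS = 100_000
NOISE = 1.0  # stdev of each agent's estimation error (assumed)

def avg_value_when_acted(n_agents: int) -> float:
    """Average true value of the action, conditional on it happening,
    when each of n_agents presses iff their own noisy estimate is > 0."""
    total, acted = 0.0, 0
    for _ in range(TRIALS):
        true_value = random.gauss(0.0, 1.0)  # action is neutral on average
        if any(random.gauss(true_value, NOISE) > 0 for _ in range(n_agents)):
            total += true_value
            acted += 1
    return total / acted

print(avg_value_when_acted(1))     # ~ +0.56: a lone actor's decision tracks real value
print(avg_value_when_acted(1000))  # ~  0.00: someone almost always presses, so the
                                   # fact that the action happened carries ~no signal
```

With one button-holder, "my estimate is positive" is strong evidence the action is good; with a thousand, the action gets taken almost regardless of its true value, which is why the naive unconditional estimate misleads in Scenario 1 but not in Scenario 2.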
This relates to Open Philanthropy because, at the time of buying the OpenAI board seat, Dustin was one of the only billionaires approaching philanthropy with an EA mindset (maybe the only?). So he was sort of the only one with the "button" of having this option, in the sense of having considered the option and having the money to pay for it. So for him it just made sense to evaluate whether or not this action was net positive in expectation.
Now consider the case of an EA who is considering launching an organization with a potentially large negative downside, where the EA doesn't have some truly special resource or ability. (E.g., AI advocacy with inflammatory tactics -- think DxE for AI.) Many people could have started this organization, but no one did. And so, when deciding whether this org would be net positive, you have to condition on this observation.
[Answered over on my blog]
Could you elaborate on what you mean by this?
Thanks for referring to these blog posts!
Over the last few years, the EA Forum has taken a few turns that have annoyed me:
Initially I dealt with this by writing my own frontend, but I ended up just switching to my blog instead.
Just as a piece of context, the EA Forum now has ~8x more active users than it had at the beginning of those few years. I think it's uncertain how good growth of this type is, but it's clear that the forum development had a large effect in (probably) the intended direction of the people who run the forum, and it seems weird to do an analysis of the costs and benefits of the EA Forum without acknowledging this very central fact.
(Data: https://data.centreforeffectivealtruism.org/)
I don't have data readily available for the pre-CEA EA Forum days, but my guess is it had a very different growth curve (due to reaching the natural limit of the previous forum platform and not getting very much attention), similar to what LessWrong 1.0 was at before I started working on it.
$2m / 4000 users = $500/user. They have every reason to inflate their figures, and there's no oversight, so it's not even clear these numbers can be trusted.
Many subreddits cost nothing, despite having 1000x more engagement. They could literally have just forked LessWrong and used volunteer mods.
No self-interested person is ever going to point this out because it pisses off the mods and CEA, who ultimately decide whose voices can be heard - collectively, they can quietly ban anyone from the forum / EAG without any evidence, oversight, or due process.
I've heard the claim that the EA Forum is too expensive, repeatedly, on the EA Forum, from diverse users including yourself. If CEA is trying to suppress this claim, they're doing a very bad job of it, and I think it's just silly to claim that making that first claim is liable to get you banned.
$500/monthly user is actually pretty reasonable. As an example, Facebook revenue in the US is around $200/user/year, which is roughly in the same ballpark (and my guess is the value produced by the EA Forum for a user is higher than for the average Facebook user, though it's messy since Facebook has such strong network effects).
Also, 4000 users is an underestimate since the majority of people benefit from the EA Forum while logged out (on LW about 10-20% of our traffic comes from logged-in users, my guess is the EA Forum is similar, but not confident), and even daily users are usually not logged in. So it's more like $50-$100/user, which honestly seems quite reasonable to me.
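For concreteness, here's the arithmetic behind those figures (the 15% logged-in share is just an assumed midpoint of the 10-20% guess above):

```python
# Back-of-the-envelope version of the figures above (inputs assumed).
annual_cost = 2_000_000     # claimed yearly Forum spend, in dollars
logged_in_users = 4_000     # monthly active logged-in users
logged_in_share = 0.15      # assumed midpoint of the 10-20% logged-in guess

total_users = logged_in_users / logged_in_share  # ~26,700 people reached
print(annual_cost / logged_in_users)  # 500.0 -> the $500/user headline figure
print(annual_cost / total_users)      # 75.0  -> within the $50-$100 range
```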
No subreddit is free. If there is a great subreddit somewhere, it is probably the primary responsibility of at least one person. You can run things on volunteer labor but that doesn't make them free. I would recommend against running a crucial piece of infrastructure for a professional community of 10,000+ people on volunteer labor.
You posted this graph
If I understand it correctly, it shows that about 50% of EA forum traffic comes from logged-in users, not 10%-20%.
Also, it seems that EA forum gets about 14,000 views per day. So you spend about $2,000,000/(365*14,000) = $0.4 per view. That's higher than I would expect.
Note that many of these views might not be productive. For me personally, most of the views are like "I open the frontpage automatically when I want to procrastinate, see that nothing is new & interesting or that I didn't even want to use the forum, and then close it". I also sometimes incessantly check if anyone commented or voted on my post or comment, and that sort of behaviour can drive up the view count.
That is definitely relevant data! Looking at the recent dates (and hovering over the exact data at the link where the graphs are from), it looks like it's around 60% logged-out, 40% logged-in.
I do notice I am surprised by this and kind of want confirmation from the EA Forum team that they are not doing some kind of filtering on traffic here. When I compare these numbers naively to the Google Analytics data I have access to for those dates, they seem about 20%-30% too low, which makes me think there is some filtering going on (though even so, my earlier guess of 80%-90% logged-out traffic definitely does not seem representative here).
I think you're comparing costs for EAF to revenue on FB.
Yeah, that seems like the right comparison? Revenue is a proxy for value produced, so if you are arguing about whether something is worth funding philanthropically, revenue seems like the better comparison than costs. Though you can also look at costs, which I expect to not be more than a factor of 2 off.
Isn't a lot of FB's revenue generated by owning a cookie, and cooperating with other websites to track you across pages? I don't think it's fair to count that revenue as generated by the social platform, for these purposes.
Your argument also feels slippery to me in general. Registering that now in case you have a good answer to my specific criticism and the general motte-and-bailey feeling sticks around.
I am not sure what you mean by the first. Facebook makes almost all of its revenue with ads. It also does some stuff to do better ad-targeting, for which it uses cookies and does some cross-site tracking, which I do think drives up profit, though my guess is that isn't responsible for a large fraction of the revenue (though I might be wrong here).
But that doesn't feel super relevant here. The primary reason why I brought up FB is to establish a rough order-of-magnitude reference class for what normal costs and revenue numbers are associated with internet platforms for a primarily western educated audience.
My best guess is the EA Forum could probably also finance itself with subscriptions, ads and other monetization strategies at its current burn rate, based on these numbers, though I would be very surprised if that's a good idea.
I think the more relevant order of magnitude reference class would be the amount per user Facebook spent on core platform maintenance and moderation (and Facebook has a lot more scaling challenges to solve as well as users to spread costs over, so a better comparator would be the running expenses of a small professional forum)
I don't think FB revenues are remotely relevant to how much value the forum creates, which may be significantly more per user than Facebook's if it positively influences decisions people make about employment, founding charities, and allocating large chunks of money to effective causes. But the effectiveness of the forum budget isn't decided by whether the total value created exceeds the total cost of running it; it's decided at the margin, by whether going the extra mile with the software and curation actually adds more value.
Or put another way, would people engage differently if the forum was run on stock software by a single sysadmin and some regular posters granted volunteer mod privileges?
Well, I mean it isn't a perfect comparison, but we know roughly what that world looks like because we have both the LessWrong and OG EA Forum datapoints, and both point towards "the Forum gets on the order of 1/5th the usage" and in the case of LessWrong to "the Forum dies completely".
I do think it goes better if you have at least one well-paid sysadmin, though I definitely wouldn't remotely be able to do the job on my own.
As to costs, I'd have to dig further but looking at the net profit margin for Meta as a whole suggests a fairly significant adjustment. Looking at the ratio between cost of revenue and revenue suggests an even larger adjustment, but is probably too aggressive of an adjustment.
If Meta actually spent $200 per user to achieve the revenue associated with Facebook, that would be a poor return on investment indeed (i.e., 0%). So I think comparing its revenue-per-user figure to the Forum's cost-per-user creates too easy a test for the Forum in assessing the value proposition of its expenditures.
Facebook ARPU (average revenue per user) in North America is indeed crazy high, but I think misleading, as for some reason they include revenues from WhatsApp and Instagram but only count Facebook users as MAU in the denominator (edit: I think this doesn't matter that much). Also, they seem to be really good at selling ads. In any case:
I'm not sure if $6000 is in the same ballpark as $200 (edit: oops, see comment below, the number is $500, not $6000). But I strongly agree with your other points, and mostly I think this is a discussion for donors to the EA Forum, not for users. If someone wants to donate $2M to the EA Forum, I wouldn't criticize the forum for it or stop using it. It's not my money.
Users might worry about why someone would donate that much to the Forum, and what kind of influence comes with it, but I think that's a separate discussion, and I'm personally not that worried about it. (But of course, I'm biased as a volunteer moderator)
I think criticizing CEA for the Forum expenditures is fair game. If an expenditure is low-value, orgs should not be seeking funding for it. Donors always have imperfect information, and the act of seeking funding for an activity conveys the organization's tacit affirmation that the activity is indeed worth funding. I suppose things would be different if a donor gave an unsolicited $2MM/year gift that could only be used for Forum stuff, but that's not my understanding of EVF's finances.
I also think criticizing donors is fair game, despite agreeing that their funds are not our money. First, charitable donations are tax advantaged, so as a practical matter those of us who live in the relevant jurisdiction are affected by the choice to donate to some initiative rather than pay taxes on the associated income. I also think criticizing non-EA charitable donors for their grants is fair game for this reason as well.
Second, certain donations can make other EAs' work more difficult. Suppose a donor really wants to pay all employees at major org X Google-level wages. It's not our money, and yet such a policy would have real consequences for other orgs and initiatives. Here, I think a pattern of excessive spending on insider-oriented activities, if established, could reasonably be seen as harmful to community values and public perception.
(FWIW, my own view is that spending should be higher than ~$0 but significantly lower than $2MM.)
I think the $500 figure is derived from ($2MM annual spend / 4000 monthly active users). The only work 'monthly' is doing there is helping to define who counts as a user. So I don't think multiplying the figure by 12 is necessary to provide a comparison to Facebook.
That being said, I think there's an additional reason the $200 Facebook figure is inflated. If we're trying to compare apples to apples (except for using revenue as an overstated proxy for expenditure), I suggest that we should only consider the fraction of implied expenses associated with the core Facebook experience that is analogous to the Forum. Thus, we shouldn't consider, e.g., the implied expenditures associated with Facebook's paid ads function, because the Forum has no real analogous function.
Where does the "$200/user/year" figure come from? They report $68.44 average revenue per user for the US and Canada in their 2023 Q4 report.
ARPU is per quarter. $68.44/quarter or $200/year is really high but:
1. it includes revenues from Instagram and Whatsapp, but only counts Facebook users
2. Facebook is crazy good at selling ads, compared to e.g. Reddit (or afaik anything else)
Thanks, many websites seem to report this without the qualifier "per quarter", which confused me.
Yeah I had the exact same reaction, I couldn't believe it was so high but it is
Wild
My understanding is that moderation costs comprise only a small portion of Forum expenditures, so you don't even need to stipulate volunteer mods to make something close to this argument.
(Also: re Reddit mods, you generally get what you pay for... although there are some exceptions)
This is really useful context!
I've (so far) read the first post and love it! But when I was working full-time on trying to improve grantmaking in EA (with GiveWiki, aggregating the wisdom of donors), you mostly advised me against it. (And I am actually mostly back to ETG now.) Was that because you weren't convinced by GiveWiki's approach to decentralizing grantmaking or because you saw little hope of it succeeding? Or something else? (I mean, please answer from your current perspective; no need to try to remember last summer.)
Iirc I was skeptical but uncertain about GiveWiki/your approach specifically, and so my recommendation was to set some threshold such that you would fail fast if you didn't meet it. This still seems correct in hindsight.
Yep, failing fast is nice! So you were just skeptical on priors because any one new thing is unlikely to succeed?
Yes, and also I was extra-skeptical beyond that because you were getting too little early traction.
Yep, makes a lot of sense!
It makes sense for the dynamics of EA to naturally go this way (not that I endorse it). It is just applying the intentional stance plus the free energy principle to the community as a whole. I find myself generally agreeing with the first post at least, and I notice the large regularization pressure being applied to individuals in the space.
I often feel the bad vibes that are associated with trying hard to get into an EA organisation. As a consequence, I'm doing for-profit entrepreneurship for AI safety adjacent to EA, and it is very enjoyable (and more impactful, in my view).
I will however say that the community in general is very supportive and that it is easy to get help with things if one has a good case and asks for it, so maybe we should make our structures more focused around that? I echo some of the things about making it more community focused, however that might look. Good stuff OP, peace.
FWIW, the "deals and fairness agreement" section of this blogpost by Karnofsky seems to agree about (or at least discuss) trade between different worldviews:
Different worldviews are discussed as being incommensurable here (under which maximizing expected choice-worthiness doesn't work). My understanding, though, is that the (somewhat implicit but more reasonable) assumption being made is that, under any given worldview, philanthropy in that worldview's preferred cause area will always win out in utility calculations, which makes the sort of deals proposed in "A flaw in a simple version of worldview diversification" not possible/useful.
In practice I don't think these trades happen, making my point relevant again.
I'm not sure exactly what you are proposing. Say you have three incommensurable views of the world (say, global health, animals, xrisk), and each of them beats the others according to its own idiosyncratic expected-value methodology. You then assign 1/3rd of your wealth to each. But then:
Then you either add the epicycles or you're doing something really dumb.
I think looking at the relative value of marginal grants in each worldview is going to be a good intuition pump for worldview-diversification-type stuff. Then even if, every year, every worldview prefers its marginal grants over those of other worldviews, you can/will still have cases where the worldviews can shift money between years and each end up with more of what they want.
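A minimal toy example of such a cross-year trade, with made-up marginal values (mine, not Nuño's):

```python
# Toy numbers (hypothetical): two worldviews, two years.
# Each year, each worldview prefers funding its own cause,
# yet a cross-year swap leaves both better off by their own lights.

# marginal value per dollar, in each worldview's own (incommensurable) units
mv = {
    ("A", 1): 1, ("A", 2): 5,   # A: flush in year 1, starved in year 2
    ("B", 1): 5, ("B", 2): 1,   # B: the mirror image
}

swap = 10  # dollars: A cedes $10 of year-1 budget to B; B repays $10 in year 2
gain_A = swap * mv[("A", 2)] - swap * mv[("A", 1)]  # A: +50 - 10 = +40 A-units
gain_B = swap * mv[("B", 1)] - swap * mv[("B", 2)]  # B: +50 - 10 = +40 B-units
print(gain_A, gain_B)  # both strictly positive: a Pareto improvement
```

Within each year both worldviews still prefer their own marginal grant, since by their own lights dollars to the other cause are worth nothing; the gains come entirely from shifting money across years, where each worldview's marginal value per dollar differs.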
My thinking about EAforum over the years has typically been "Jesus, why on earth would these people deliberately set things up like that?", and then maybe a couple of months later, maybe a couple of years later, I start to notice a possible explanation, and I'm like "oooooooooooohhhhhh, actually, that might make a lot of sense, I wish I had noticed that immediately".
Large multi-human systems tend to be pretty complicated and counterintuitive, but they become way, way more so when most of the people are extremely thoughtful. Plus, the system changes in complicated and unprecedented ways as the world changes around it, or as someone here or there discovers a game-changing detail about the world, meaning that EAforum is entering uncharted territory and tearing down Schelling fences rather frequently.
I'm interested in examples of this if you have them.
Yeah, a lot of them are not openly advertised for good reasons. One example that's probably fine to talk about is NunoSempere's claim that EAforum is shifting towards catering to new or marginal users.
The direct consequence is reducing the net quality of content on EAforum, but it also allows it to steer people towards events as they get more interested in various EA topics, where they can talk more freely without worrying about saying controversial things, or get involved directly with people working on those areas via face-to-face interaction. And it doesn't stop EAforum from remaining a great bulletin board for orgs to publish papers and updates and get feedback.
But at first glance, catering towards marginal users normally makes you think that they're just trying to do classic user retention. That's not what's happening; this is not a normal forum and that's the wrong way to think about it.