
Jason

15266 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can still be helpful.

Posts (2)


Comments (1726)

Topic contributions (2)

I don't think "the black people who attended manifest and are part of forecasting more generally" is a valid sample to survey. People who attended Manifest presumably knew who the special guests were, so people with a strong desire not to attend a conference with those people already selected themselves out. Moreover, it's difficult to exclude the hypothesis that other people might counterfactually be in the forecasting community but for a more general feeling that it tolerates racism.

No, I think that extends beyond what I'm saying. I am not proposing a categorical rule here.

However, the usual considerations of neglectedness and counterfactual analysis certainly apply. If someone outside of EA is likely to do the work at some future time, then the cost of an "error" is the utility loss caused by the delay between when we would have done it and when the non-EA actually does it. If developments outside EA convince us to change our minds, the utility loss is measured between now and the time we change our minds. I've seen at least one comment suggesting "HBD" is in the same ballpark as AI safety . . . but we likely only get one shot at the AGI revolution for the rest of human history. Even if one assumes p(doom) = 0, the effects of messing up AGI are much more likely to be permanent or extremely costly to reverse/mitigate.

From a longtermist perspective, [1] I would assume that "we are delayed by 20-50 years in unlocking whatever benefit accepting scientific racism would bring" is a flash in the pan over a timespan of millions of years. In fact, those costs may be minimal, as I don't think there would be a whole lot for EA to do even if it came to accept this conclusion. (I should emphasize that I am definitely not implying that scientific racism is true or that accepting it as true would unlock benefits.)

  1. ^

    I do not identify as a longtermist, but I think it's even harder to come up with a theory of impact for scientific racism on neartermist grounds.

Thanks, that is a helpful data point. I speculate, though, that EAs may be less likely to fall neatly onto a left-right continuum, so (e.g.) the "center-left" respondents could have quite a bit more libertarianism mixed in than the general US/UK center-left population, despite identifying as center-left rather than libertarian or other.

I know EA Survey space is limited, but a single question on Forum usage (which could be, e.g., no / lurker / <100 karma / 100-999 / >1000, or could be frequency/intensity of use) would be useful in obtaining hard data on the extent to which the active Forum userbase has different characteristics than the EA population as a whole. That might be useful context when something goes haywire on the Forum in a way we think is unrepresentative of the larger population. [Tagging @David_Moss with the question request]

> It feels like in the past, more considerateness might have led to less hard discussions about AI or even animal welfare.

Could you say more about why you feel that way?

Certainly lots of people would have concluded that wild animal welfare (WAW) and AI as subjects of inquiry and action were weird, pointless, stupid, etc. But that's quite different from the reactions to scientific racism.


Most truths have ~0 effect magnitude for any action plausibly within EA's purview. This could be because knowing that X is true and Y is not true (as opposed to uncertainty or even error regarding X or Y) just doesn't change any important decision. It can also be because the important action that a truth would influence/enable is outside of EA's competency for some reason. E.g., if no one with enough money is willing to throw it at a campaign for Joe Smith, finding out that he is the presidential candidate who would usher in the Age of Aquarius isn't actually valuable.

As applied to the scientific racism discussion, I don't see the existence or non-existence of the alleged genetic differences in IQ distributions by racial group as bearing on any action that EA might plausibly take. If some being told us the answers to these disputes tomorrow (in a way that no one could plausibly controvert), I don't think the course of EA would be different in any meaningful way.

More broadly, I'd note that we can (ordinarily) find a truth later if we don't expend the resources (time, money, reputation, etc.) to find it today. The benefit of EA devoting resources to finding truth X will generally be that truth X is discovered sooner, and that we get to start using it to improve our decisions sooner. That's not small potatoes, but it generally isn't appropriate to weigh the entire value of the candidate truth for all time when deciding how many resources (if any) to throw at it. Moreover, it's probably cheaper to produce scientific truth Z twenty years in the future than it is now. In contrast, global-health work is probably most cost-effective in the here and now, because in a wealthier world the low-hanging fruit will be plucked by other actors anyway.
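To put that point in rough symbols (my own illustrative framing, not part of the original comment): if $v(t)$ is the per-year value of acting on truth $X$, EA would uncover it at time $t_{EA}$, and outside actors would uncover it at $t_{other}$ anyway, then the benefit of EA doing the work is roughly the value accrued during the delay window, not the truth's entire value for all time:

$$\text{Benefit of EA finding } X \;\approx\; \int_{t_{EA}}^{t_{other}} v(t)\,dt \;\ll\; \int_{t_{EA}}^{\infty} v(t)\,dt$$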

I read your comment as proposing "have people pay for finding out information via subsidies for markets" as your "alternative" model, rather than the "take a cut of the trading profits/volume/revenue" model. Anyway, I mentioned earlier why I don't think being "controversial" (~ too toxic for the reputational needs of many businesses with serious money and information needs) fits well with that business model. Few would want to be named in this sentence in the Guardian in 2028: "The always-controversial Manifest conference was put on by Manifold, a prediction market with similarly loose moderation norms whose major customers include . . . ."

In the "take a rake of trading volume" model without any significant exogenous money coming in, there have to be enough losses to (1) fund Manifold, and (2) make the platform sufficiently positive in EV to attract good forecasters and motivate them to deploy time and resources. Otherwise, either the business model won't work, or the claimed social good is seriously compromised. In other words, there need to be enough people who are fairly bad at forecasting, yet pump enough money into the ecosystem for their losses to fund (1) and (2). Loosely: whales.

If that's right, the business rises or falls predominantly on the amount of unskilled-forecaster money pumped into the system. Good forecasters shouldn't be the limiting factor on profit: if the unskilled users subsidize the ecosystem enough, the skilled users should come. The model should actually work without good forecasters at all; it's just that the aroma of positive EV will attract them.

This would make whales the primary customers, and would motivate Manifold to design the system to attract as much unskilled-forecaster money as possible, which doesn't seem to jibe well with its prosocial objectives. Cf. the conflict in "free-to-play" video game design between designs that extract maximum funds from whales and designs that create a quality game and experience generally.
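To make the arithmetic behind this concrete, here is a minimal back-of-the-envelope sketch (all numbers and the fee structure are hypothetical illustrations of my own, not Manifold's actual economics): under a pure rake model, whale losses have to cover both the platform's cut and the surplus that gives skilled forecasters their positive EV.

```python
# Back-of-the-envelope sketch of a "rake of trading volume" model.
# All figures are hypothetical; nothing here reflects Manifold's actual economics.

def rake_model(whale_losses: float, rake_rate: float, platform_costs: float) -> dict:
    """Split unskilled-forecaster ("whale") losses between the platform's rake
    and the surplus left over as positive EV for skilled forecasters."""
    platform_take = whale_losses * rake_rate        # (1) funds the platform
    skilled_surplus = whale_losses - platform_take  # (2) positive-EV pool for skilled users
    return {
        "platform_take": platform_take,
        "platform_breaks_even": platform_take >= platform_costs,
        "skilled_forecaster_surplus": skilled_surplus,
    }

# Example: $1M/year of whale losses, a 20% rake, $500k/year of platform costs.
print(rake_model(whale_losses=1_000_000, rake_rate=0.20, platform_costs=500_000))
# -> platform takes $200k (short of $500k costs); skilled users share $800k of EV.
# The only levers that close the gap are more whale money or a bigger rake,
# which is why whales end up as the primary customers in this model.
```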

While it certainly can be appropriate to criticize religious beliefs, the last sentence feels quite gratuitous and out of left field. [I assume/hope that "Quackerism" is either a typo or a group I've never heard of.]

> For me, "zakat being compatible with EA" means "its possible to increase the impact of zakat and allocate it in the most cost-effective way" [ . . . .]

Indeed, effective giving being subject to donor-imposed constraints is the norm, arguably even in EA. Many donors are open only to certain cause areas, or to certain levels of risk tolerance, or to projects with decent optics, etc. Zakat compliance does not seem fundamentally different from donor-imposed constraints that we're used to working within.

Although I have mixed feelings on the proposal, I'm voting insightful because I appreciate that you are looking toward an actual solution that at least most "sides" might be willing to live with. That seems more insightful than what the Forum's standard response soon devolves into: rehashing fairly well-worn talking points every time an issue like this comes up.
