Rubi J. Hudson

768 karma

Comments (36)

> I also think of Manifest as something like a scaled-up house party, rather than an arbiter of who is good or notable in forecasting/EA

I've made a similar point in other comments, but this framing makes things worse. Then it's not that Richard Hanania has relevant expertise in spite of his controversial statements; it's that you think he's a fun guy you'd like to hang out with. While people may be willing to stomach unsavory co-attendees at a conference in their field, they're more than happy to skip a scaled-up house party with them.

I think my peers would be factually right, at least directionally, in that attending with someone who has controversial views is evidence of favoring those views.

As for whether society is a better place if we enforce the norm, I think there are a couple of relevant considerations. The first is the degree of abhorrence. To invoke Godwin's Law, I think that knowingly attending a conference with literal neo-Nazis should be seen as support for their views. The second is the degree of focus. While it should be fine to attend an academic conference where subject matter experts talk about their work, choosing to attend a "fun forecasting-adjacent festival" where attendees are encouraged to pal around with each other is more deserving of judgment.

I would also like to clarify that when I talk about peer judgments, I'm not making a point about how Manifest looks to those people; I'm making a point about how it would feel for them to attend. While I understand there are tradeoffs involved and you can't make the event welcoming to every potential attendee, I would say that by the time you've lost Peter Wildeford, you've gone too far. I would also throw my hat in the ring as someone who works on prediction science and would be hesitant to attend the next Manifest if it had a similar invitee list.

> My plan was then to invite & highlight folks who could balance this out  

I think this is basically a misconception of how the social dynamics at play work. People aren't worried about the relative number of "racists"; they're worried about the absolute number. The primary concern is not that they will be exposed to racism at the conference itself, but rather that attending a conference together will be taken as a signal of support for the racists, saying that they are welcome in the community.

To pick Hanania as an example, since he has the most clearly documented history of racist statements: I have peers who would absolutely see me choosing to attend the same conference as him as a sign that I don't think he's too bad. And if I knew of that expectation and chose to go anyway, there would be additional merit to that reading.

To an extent, the more that Manifest is focused on discussions of prediction, the more leeway there is to invite controversial speakers; you can make a case for ignoring views that are not relevant to the topic at hand. But as Saul says in his other post, "although manifest is nominally about prediction markets, it's also about all the ideas that folks who like prediction markets are also into — betting, philosophy, mechanism design, writing, etc". In other words, it's about forming a broader intellectual community. And people are obviously going to be uncomfortable identifying with an intellectual community that includes people whom they, and the broader world, consider to be racist.
 

Does the LTFF ever counter-offer with an amount that would move the grant past the funding bar for cost-effectiveness? I would guess that some of these hypothetical applicants would accept a salary at 80% of what they applied for, and if the grants are already marginal then a 25% increase in cost-effectiveness could push them over the bar.
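
To spell out the arithmetic behind that 25% figure (a toy calculation, with $I$ standing in for a grant's expected impact and $s$ for the requested salary, both hypothetical labels):

$$\text{cost-effectiveness at } 80\% = \frac{I}{0.8s} = 1.25 \cdot \frac{I}{s}$$

So funding the same work at 80% of the requested amount mechanically raises cost-effectiveness by 25%.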

Pronatalist.org is not an EA group. It's great that EA considerations have started entering the public consciousness, and I would love it if every charity were expected to answer "why is this the most effective thing you could be doing?", but that doesn't mean that any group claiming its mission is really important is part of EA. It's very difficult to argue a rigorous case that promoting pronatalist sentiment is an effective use of money or time, and so far they haven't.

Rather than ask how we can build more (and better) groups, ask whether we should.

Was Ben Pace shown these screenshots before he published his post?

With regard to #2, I shared your concern, and I thought Habryka's response didn't justify refusing a brief delay if there was a realistic chance of evidence being provided that would contradict the main point of this post.

However, upon reflection, I am skeptical that such evidence will be provided. Why did Nonlinear not provide at least some of the proof they claim to have, in order to justify time for a more comprehensive rebuttal? Or at least describe the form that proof will take? That should be possible if they have specific evidence in mind. Also, a week seems longer than should be needed to provide such proof, which increases my suspicion that they're playing for time. What does delaying for a week do that a 48-hour delay would not?

Edit: Nonlinear has begun posting some evidence. I remain skeptical that the bulk of the evidence supports their side of the narrative, but I no longer see the lack of posted evidence as a reason for additional suspicion.

Thanks for writing this up. I find quadratic funding falls into a class of mechanisms that are too clever by half, in a way that makes them very fragile to their modelling assumptions.
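
To make that fragility concrete, here's a minimal Python sketch of the textbook quadratic funding rule (the function name and figures are mine, purely for illustration): the match is $(\sum_i \sqrt{c_i})^2 - \sum_i c_i$, and its optimality argument assumes contributors are independent. Splitting one donation across sybil accounts breaks that assumption and inflates the match:

```python
from math import sqrt

# Toy illustration of the basic quadratic funding matching rule:
# match = (sum of sqrt(contributions))^2 - total raised.
def qf_match(contributions):
    subsidized_total = sum(sqrt(c) for c in contributions) ** 2
    return subsidized_total - sum(contributions)

honest = [100.0]       # one donor gives $100 directly
sybil = [1.0] * 100    # the same $100 split across 100 fake accounts

print(qf_match(honest))  # 0.0: a lone donor attracts no match
print(qf_match(sybil))   # 9900.0: identical money, a huge subsidy
```

Identical money in, wildly different subsidy out; that's the kind of modelling-assumption fragility I have in mind.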

Also, I love how it's often posts with innocuous titles like "Some thoughts on quadratic funding" that completely demolish an idea.

"People saying things that are mildly offensive but not worth risking an argument by calling out, and get tiring after repeated exposure" is just obviously a type of comment that exists, and is what most people mean when they say microaggression. Your paper debunking it alternates between much stricter definitions and claiming an absence of evidence for something that very clearly is going to be extremely hard to measure rigorously.

I'll edit the comment to note that you dispute it, but I stand by it. The AI system trained is only as safe as the mentor, so the system is only safe if the mentor knows what is safe. By "restrict", I meant for performance reasons, so that it's feasible to train and deploy in new environments.

Again, I like your work and would like to see more similar work from you and others. I am just disputing the way you summarized it in this post, because I think that portrayal makes its lack of splash in the alignment community a much stronger point against the community's epistemics than it deserves.
