I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all.
My claim is that the org values your time at a rate that is significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary and because the employer needs to value your work above its cost for them to want to hire you. I don't see how this is unfalsifiable. Mostly, you could falsify these claims by asking orgs how they think about the cost of staff time, though I guess some wouldn't model it as explicitly as this.
They do mean that we're forced to estimate the relevant threshold instead of having a precise number, but a precise wrong number isn't better than an imprecise number that's closer to correct.
Notice that we are discussing a concrete empirical data point that represents a 600% difference, while you've given a theoretical upper bound of 100%. That leaves a 500% delta.
No: if you're comparing the cost of doing 10 minutes of work at salary X against 60 minutes of work compensated at rate Y, and I argue that salary X underestimates the cost of your work by a factor of 2, then your salary only needs to be more than 3 times larger than the work trial compensation, not 5 times.
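To spell out the arithmetic (a minimal sketch using the 10-minute and 60-minute figures above; the symbols are mine: s = salary per minute, p = test-task pay per minute, c = true per-minute cost of staff time):

$$
10c > 60p \;\Rightarrow\; c > 6p, \qquad \text{and with } c = 2s: \quad 10 \cdot 2s > 60p \;\Rightarrow\; s > 3p.
$$

In other words, the factor-of-2 correction halves the required multiple from 6 to 3; it doesn't knock one off it (which is where a "5 times" figure would come from).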
When it comes to concretising "how much does employee value exceed employee costs", it probably varies a lot from organisation to organisation. I think there are several employers in EA who believe that, after a point, paying more doesn't really get you better people. This allows their estimates of the value of staff time to exceed employee costs by enormous margins, because there's no mechanism to couple the two together. I think when these differences are very extreme we should be suspicious about whether they're really true, but as someone who has had to compare earning to give with direct work multiple times, I've frequently asked an org "how much in donations would you need to prefer the money over hiring me?", and for difficult-to-hire roles they frequently name numbers dramatically larger than the salary they are offering.
This means that your argument isn't going to apply uniformly across organisations, but I don't know why you'd expect it to: surely you weren't saying that no organisation should ever pay for a test task, only that organisations shouldn't pay for test tasks when doing so increases their costs of assessment to the point where they choose to assess fewer people.
My expectation is that if you asked orgs about this, they would say that they already don't choose to assess fewer people based on the cost of paying candidates. This seems testable, and if true, it seems to me that it makes pretty much all of the other discussion irrelevant.
So my salary would need to be 6× higher (per unit time) than the test task payment for this to be true.
Strictly speaking, your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I've seen estimates of the other costs at 50–100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise... why would they acquire it at that rate?), so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn't that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
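To make the "wrong number" point concrete (a sketch; the only input is the 50–100% overhead range mentioned above):

$$
\text{loaded cost} = \text{salary} \times (1 + o), \qquad o \in [0.5,\, 1] \;\Rightarrow\; \text{loaded cost} \in [1.5,\, 2] \times \text{salary},
$$

which is where a "factor of 2" correction like the one above comes from, and that's before adding whatever margin the org's valuation of your work carries over what it actually pays for it.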
For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it might go unnoticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realise how much they disagree and how crucial that disagreement is.
Scope insensitivity has some empirical backing – e.g. the helping-birds study – and some theorised mechanisms of action, e.g. people lacking an intuitive understanding of large numbers.
Scope oversensitivity seems possible in theory, but I can't think of any similar empirical or theoretical reasons to think it's actually happening.
To the extent that you disagree, it's not clear to me whether it's because you and I disagree on how EAs weight things like animal suffering, or whether we disagree on how it ought to be weighted. Are you intending to cast doubt on the idea that a problem that is 100x as large is (all else equal) 100x more important, or are you intending to suggest that EAs treat it as more than 100x as important?
While "My experience at the controversial Manifest 2024" (and several related posts) wasn't explicitly about policies or politicians, I think it's largely the underlying political themes that made the discussion so heated.
I have a broad sense that AI safety thinking has evolved a bunch over the years, and I think it would be cool to have a retrospective of "here are some concrete things that used to be pretty central but that we now think are either incorrect or at least incorrectly focused".
Of course, it's hard enough to get a broad overview of what everyone thinks now, let alone what they used to think but have since discarded.
(this is probably also useful outside of AI safety, but I think it would be most useful there)
I feel like my experience with notifications has been pretty bad recently – something like, I'll get a few notifications, go follow the link on one, and then all the others will disappear and there's no longer any way to find out what they were. Hard to confidently replicate because I can't generate notifications on demand, but that's my impression.
[edit: it's been changed I think?]
FWIW when I saw the title of this post I assumed you were going to be asking for advice rather than offering it. Something like "My advice on whether it's worth [...]" would be less ambiguous, though a bit clumsier – obv this is partly a stylistic thing and I won't tell you what style is right for you :)
(this point is different enough that I decided to make a separate comment for it)
I feel like when people talk about criticism on the Forum, they often point to how it can be very emotionally difficult for the person being criticised, and then they stop and say "this means there's something wrong with how we do criticism, and we should change it until it's not like this".
I think this is overly optimistic. I find it highly implausible that there's some way we could be, some tone we could set, that would make criticism not hurt. It hurts to be wrong, and it hurts more to hear this from other people, and it hurts more if you're hearing it unexpectedly. These pains are dramatically worsened by hostility or insensitivity or other markers of bad criticism, but even if you do everything you're supposed to in tone and delivery, the truth is going to hurt, and sometimes it's going to hurt a lot.
So, even perfect criticism hurts. Moreover, it's highly implausible that we can achieve perfect criticism, or even a particularly good approximation of it. Anywhere on the Forum, people get misread, people fail to make their point clearly, people have tricky and complex ideas that take a lot of digesting before they make sense. In criticism, all of that happens in an emotionally volatile environment. It takes a lot of discipline to stay friendly in that context, and I don't think the fact that it sometimes doesn't happen is a uniquely EA failure. No-one anywhere has criticism that stays clean and charitable all the time. If you're thinking "how are we going to ensure that bad ideas don't absorb attention and funding and other resources that could have gone to good ideas", I really struggle to imagine a system that always avoids arguments and hostility, and I think the EA Forum honestly does better than the peers I can think of.
We're all here to do things we think are important and high-stakes, involving the suffering of those we care about. It's going to be emotionally fraught. People who write critical comments should try hard to do so in a way that minimises the harm they cause. IMO there should also be more said on the Forum about how people can receive criticism in a way that minimises harm (primarily to them, but perhaps also to others). I do think that "sometimes just ignore the criticism" is good advice, actually. But I don't think we should aspire to "people aren't upset by what is said on the Forum", or "posting about your project on the Forum doesn't make you anxious". Reduce these things as much as possible, but be realistic about how much is possible.
I have no desire to shut down ideas for how to make things better. Please do continue to think and talk about whether criticism on the Forum could be done better. But I want better indicators that something is unhealthy than "people are hurt when other people tell them they don't like their work".
Does this mean everything that we used to call "10x cash" we're now calling "3x cash"?
very inconsiderate of GiveDirectly to disrupt our benchmarks like this 😛