Zero-bounded vs negative-tail risks

(adapted from a comment on LessWrong)
In light of the FTX collapse, maybe a particularly important heuristic is to notice cases where the worst case is not lower-bounded at zero. Examples:
- Buying put options (value bounded at zero) vs shorting stock (unbounded downside) -- see the payoff sketch after this list.
- Running an ambitious startup that fails is usually just zero, but what if it's committed funding & tied its reputation to lots of important things that will now struggle?
- More twistily -- what if you're committing to a course of action such that you'll likely feel immense pressure to take negative-EV actions later on, like committing fraud to save your company or pushing for more AI progress so you can stay in the lead?
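To make the options example concrete, here's a minimal sketch of the two payoff profiles at expiry. This is my illustration, not from the original comment: the strike, premium, and entry price are arbitrary numbers, and note that for a long put the true floor is the premium paid rather than literally zero.

```python
# Toy payoff profiles (illustrative numbers only): a long put's loss is
# floored at the premium paid, while a short stock position's loss grows
# without bound as the price rises.

def long_put_payoff(price: float, strike: float = 100.0, premium: float = 5.0) -> float:
    """P&L at expiry for buying one put: bounded below at -premium."""
    return max(strike - price, 0.0) - premium

def short_stock_payoff(price: float, entry: float = 100.0) -> float:
    """P&L for shorting one share at `entry`: no lower bound."""
    return entry - price

for price in [50, 100, 150, 300, 1000]:
    print(f"price {price:>4}: long put {long_put_payoff(price):>7.1f}, "
          f"short stock {short_stock_payoff(price):>7.1f}")
```

At a price of 1000 the put holder is still only down the premium, while the short seller is down 900 and counting.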
This isn't to say you should never do things that have potentially large negative downsides, but you can be a lot more willing to experiment when the downside is capped at zero.
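As a toy illustration of why the cap matters, here's a small simulation (mine, not from the post; all probabilities and payoffs are invented) of how a rare negative tail can quietly flip the expected value of an otherwise identical-looking bet:

```python
# Hypothetical toy model: many experiments whose downside is capped at zero
# vs. experiments with the same upside plus a rare, large negative tail.
import random

random.seed(0)

def capped_bet() -> float:
    # 90% of the time you get nothing; 10% of the time a big win.
    return 10.0 if random.random() < 0.10 else 0.0

def tail_risk_bet() -> float:
    # Same upside, but 1% of the time a catastrophic loss.
    r = random.random()
    if r < 0.10:
        return 10.0
    if r > 0.99:
        return -500.0
    return 0.0

n = 100_000
print("capped:    mean =", sum(capped_bet() for _ in range(n)) / n)     # ~ +1.0
print("tail risk: mean =", sum(tail_risk_bet() for _ in range(n)) / n)  # ~ -4.0
```

Both bets look the same 99% of the time, but the capped one has positive expected value and the tail-risk one doesn't.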
Indeed, a good norm in many circumstances is to do lots of exploration and iteration. This is how science, software development, and most research happen. Things get a lot trickier when even this stage has potential deep harms -- as in research with advanced AI. (Or, more boundedly & fixably, infohazard risks from x- and s-risk reduction research.)
In practice, people will argue about what counts as effectively zero harm vs nonzero. Human psychology, culture, and institutions are sticky, so exploration that naively looks zero-bounded can still cause harm by locking in bad ideas or norms. I think that harm is often fairly small, but it may be both important and nontrivial to notice when it's large -- e.g., which new drugs are safe for a particular person to explore? Caffeine vs SSRIs vs weed vs alcohol vs opioids...
(Note that the "zero point" I'm talking about here is an outcome where you've added zero value to the world. I'm thinking of the opportunity cost of the time or money you invested as a separate term.)