Simon_M

Comments
I have written a bit about this (and related topics) in the past:

 

Our society has voted to get the policy which is most likely to achieve a value of .5 [...]

I think you make a fairly good argument (in iv) about maximising the probability of achieving outcome x, where x could be set to quite a small number, but I expect futarchy proponents would argue that you can fix this by having the market return E[outcome] rather than P(outcome > x). So society would vote for the policy that maximises the expected outcome rather than the probability of clearing some threshold. (Or you could look at P(outcome > x) for a range of x.)
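Here's a toy sketch of why the choice of settlement statistic matters, with made-up outcome distributions for two hypothetical policies:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Two hypothetical policies (the distributions are made up purely for illustration).
# Policy A: almost always clears a low bar x, but never does much better than it.
# Policy B: sometimes falls short of x, but has a much higher expected outcome.
x = 0.5
policy_a = rng.uniform(0.51, 0.60, N)                 # P(A > x) = 1,   E[A] ~ 0.55
policy_b = rng.choice([0.0, 2.0], N, p=[0.3, 0.7])    # P(B > x) = 0.7, E[B] = 1.4

for name, samples in [("A", policy_a), ("B", policy_b)]:
    print(name, "P(outcome > x) =", round((samples > x).mean(), 3),
          "E[outcome] =", round(samples.mean(), 3))

# A market settling on P(outcome > x) picks A; one settling on E[outcome] picks B.
```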

 

You wrote on reddit:

I have written a blog post exploring why the prices in a prediction market may not reflect the true probability of an event when the things we want to hedge against are correlated

But I think none of your explanation here actually relies on this correlation (and I think this is extremely important). I think arguments about risk-neutrality (or the lack of it) are not the right framing. For example, a coin flip is a risky bet, but that doesn't mean the price will be less than 1/2, because there's a symmetry in whether you are bidding on heads or tails. It's just more likely you don't bet at all: if you are risk-averse, you value H at 0.45 and T at 0.45.

The key difference is that if the coin flip is correlated with the real economy, such that the dollar-weighted average person would rather live in a world where heads comes up than tails, they will pay more for tails than heads.
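Here's a minimal sketch of that mechanism (the CRRA utility and the state probabilities are made-up assumptions): the price of a $1 contract is the marginal-utility-weighted probability, so an independent coin flip prices at its true probability, while an event concentrated in bad economic states prices above it.

```python
import numpy as np

def contract_price(event_prob_by_state, state_probs, wealth, gamma=3.0):
    """Price of a contract paying $1 if the event happens, for a representative
    agent with CRRA marginal utility u'(w) = w**-gamma (an assumed functional form)."""
    mu = wealth ** -gamma                                  # marginal utility in each state
    weights = state_probs * mu / np.sum(state_probs * mu)  # state prices, normalised
    return float(np.sum(weights * event_prob_by_state))

# Two aggregate states, boom and bust, equally likely (made-up numbers).
state_probs = np.array([0.5, 0.5])
wealth = np.array([1.2, 0.8])   # aggregate wealth in boom / bust

# A fair coin independent of the economy prices at its true probability, 0.5.
print(contract_price(np.array([0.5, 0.5]), state_probs, wealth))

# A "recession" event (happens only in the bust state, true probability also 0.5)
# prices above 0.5, because a dollar is worth more in the bust state.
print(contract_price(np.array([0.0, 1.0]), state_probs, wealth))
```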

Claude's Summary:

Here are a few key points summarizing Will MacAskill's thoughts on the FTX collapse and its impact on effective altruism (EA):

  • He believes Sam Bankman-Fried did not engage in a calculated, rational fraud motivated by EA principles or long-termist considerations. Rather, it seems to have stemmed from hubris, incompetence and failure to have proper risk controls as FTX rapidly grew.
  • The fraud and collapse have been hugely damaging to EA's public perception and to morale within the community. However, the core ideas of using reason and evidence to do the most good remain valid.
  • Leadership at major EA organizations has essentially turned over in the aftermath. Will has stepped back from governance roles to allow more decentralization.
  • He does not think the emphasis on long-termism within EA was a major driver of the FTX issues. If anything, near-term considerations like global health and poverty reduction could provide similar motivation for misguided risk-taking.
  • His views on long-termism have evolved to focus more on near-term AI risk than on cosmic timescales, given the potential for advanced AI systems to pose existential risks to the current generation within decades.
  • Overall, while hugely damaging, he sees the FTX scandal as distinct from the valid principles of effective altruism rather than undermining them entirely. But it has prompted substantial re-evaluation and restructuring within the movement.

If you click preview episode on that link you get the full episode. I also get the whole thing on my podcast feed (PocketCasts, not Spotify). Perhaps it's a Spotify issue?

Sorry, I edited while you posted. I see the US at 1.44% * $27tn ≈ $400bn, which is the vast majority of global charitable giving once I add in the rest of the countries Wikipedia lists and interpolate based on location for other biggish economies.

Our friends estimate the cost at about $258 billion dollars to end extreme poverty for a year, and point out that this is a small portion of yearly philanthropic spending or [...]

Is that true?

Just ballparking this based on the fraction of GDP given to charitable organisations (a big overestimate, imo), I get global giving at ~$500bn/year. So I don't believe this is true.
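Rough arithmetic behind that ballpark (the US figures are from my comment above; the rest-of-world number is a placeholder):

```python
# Ballpark check (US figures from the comment above; rest-of-world is a rough placeholder).
us_gdp_tn = 27.0          # US GDP in $tn
us_giving_share = 0.0144  # charitable giving as a fraction of GDP
us_giving_bn = us_gdp_tn * 1000 * us_giving_share   # ~389bn

rest_of_world_bn = 110    # placeholder for other countries, interpolated from the Wikipedia list
global_giving_bn = us_giving_bn + rest_of_world_bn  # ~500bn

poverty_cost_bn = 258     # the claimed yearly cost of ending extreme poverty
print(round(global_giving_bn), round(poverty_cost_bn / global_giving_bn, 2))  # ~499, ~0.52
```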

Now this is not… great, and certainly quite different from the data by tenthkrige. I'm pretty sure this isn't a bug in my implementation or due to the switch from odds to log-odds, but a deeper problem with the method of rounding for perturbation.

It's not particularly my place to discuss this, but when I replicated his plots I also got very different results, and since then he has shared his code with me and I discovered a bug in it.

Basically it simulates the possible outcomes of all the other bets you have open.

How can I do that without knowing my probabilities for all the other bets? (Or have I missed something on how it works?)
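For reference, here's a minimal sketch (with made-up bets) of what I understand that simulation to involve - note that it needs a probability for every open bet, which is exactly what I'm asking about:

```python
import numpy as np

rng = np.random.default_rng(0)

# My other open bets: for each, a subjective probability and the payouts if it
# resolves YES or NO. All numbers here are made up for illustration.
other_bets = [
    {"p": 0.7, "payout_yes": 120.0, "payout_no": 0.0},
    {"p": 0.2, "payout_yes": 400.0, "payout_no": 0.0},
]

def simulate_bankrolls(bets, n_sims=10_000, base_bankroll=1_000.0):
    """Monte Carlo over the outcomes of the other open bets. Note this requires a
    probability `p` for every bet - without those there is nothing to sample from."""
    bankrolls = np.full(n_sims, base_bankroll)
    for bet in bets:
        wins = rng.random(n_sims) < bet["p"]
        bankrolls += np.where(wins, bet["payout_yes"], bet["payout_no"])
    return bankrolls

print(simulate_bankrolls(other_bets).mean())   # average bankroll across simulations
```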

Less concave = more risk tolerant, no?

Argh, yes. I meant more concave.
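For concreteness, a toy CRRA example (the gamble and the curvature values are made up) of how curvature maps to risk tolerance:

```python
# Certainty equivalent of a 50/50 gamble between 50 and 150 under CRRA utility
# u(w) = w**(1 - gamma) / (1 - gamma); higher gamma = more concave = more risk averse.
def certainty_equivalent(gamma, outcomes=(50.0, 150.0)):
    eu = sum(0.5 * w ** (1 - gamma) / (1 - gamma) for w in outcomes)
    return (eu * (1 - gamma)) ** (1 / (1 - gamma))

for gamma in (0.5, 2.0, 4.0):
    print(gamma, round(certainty_equivalent(gamma), 1))   # ~93.3, 75.0, ~62.2

# The less concave utility (gamma = 0.5) gives a certainty equivalent close to the
# mean of 100 (more risk tolerant); the more concave one (gamma = 4) gives a much lower one.
```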

The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don't know if it's more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?

No, it doesn't make sense. "We don't know the curvature, ergo it could be anything" is not convincing. What you seem to think is "concrete" seems entirely arbitrary to me.

As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.

I appreciate that you think that, and I agree that Michael has said he agrees, but I don't understand why either of you thinks that. I went point-by-point through your conclusion and it seems clear to me that the balance is in favour of more risk-taking. I don't see any way to convince me other than putting the arguments you put forward into buckets, weighting them and adding them up. Then we can see whether the point of disagreement is in the weights or the arguments.

Beyond an intuition-based re-weighting of the considerations,

If you think my weightings and comments about your conclusions relied a little too much on intuition, I'll happily spell those arguments out in more detail. Let me know which ones you disagree with and I'll expand on them.

But to my mind, the way this flattening could work is explained in the “Arguments from uncertainty” section:

I think we might be talking at cross purposes here. By flattening here, I meant "less concave" - hence more risk averse. I think we agree on this point?

Could you point me to what you're referring to, when you say you note this above?

Ah - this is the problem with editing your posts. It's actually the very last point I make. (I also made that point at much greater length in an earlier draft.) Essentially the utility function for any philanthropy is less concave than for an individual, because you can always give to a marginal individual (there's a toy sketch of this below). I agree that you can do more funky things in other EA areas, but I don't find any of the arguments convincing. For example:

To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.

I just thought this was a totally unrealistic model in multiple dimensions, and I don't really think it's relevant to anything? I didn't see it as being any different from just saying "Imagine a philanthropist with an arbitrary utility function which is more curved than an individual's".
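Here's a toy sketch of the "marginal individual" point mentioned above (log utility and the numbers are arbitrary assumptions): spreading a budget across many recipients makes the philanthropist's value-of-budget nearly linear, i.e. far less concave than any one recipient's utility.

```python
import numpy as np

def individual_u(c):
    # One individual's (concave) utility over their own consumption - log is an assumption.
    return np.log(1 + c)

def philanthropic_value(budget, n_recipients=1_000_000):
    # Spread the budget evenly over many marginal recipients (an idealised assumption).
    return n_recipients * individual_u(budget / n_recipients)

# For an individual, the second unit of consumption is worth much less than the first:
print(individual_u(1.0), individual_u(2.0) - individual_u(1.0))            # 0.69 vs 0.41
# For the philanthropist, the second thousand is worth almost as much as the first,
# i.e. the value-of-budget curve is nearly linear (far less concave):
print(philanthropic_value(1_000), philanthropic_value(2_000) - philanthropic_value(1_000))
```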

but these arguments are not as strong as people claim, so we shouldn't say EAs should have high risk tolerance

I don't get the same impression from reading the post, especially in light of the conclusions, which even without my adjustments seem to be in favour of taking more risk.
