
What is this?

  • I thought it would be useful to assemble an FAQ on the FTX situation.
  • This is not an official FAQ. I'm not writing this in any professional capacity.
  • This is definitely not legal or financial advice or anything like that.
  • Please let me know if anything is wrong/unclear/misleading. 
  • Please suggest questions and/or answers in the comments. 

What is FTX?

  • FTX was a major cryptocurrency exchange founded and run by Sam Bankman-Fried. It filed for bankruptcy in November 2022.

Who is Sam Bankman-Fried (SBF)?

  • SBF is the founder and former CEO of FTX, and until its collapse was one of effective altruism's largest donors (via the FTX Future Fund).

How is FTX connected to effective altruism?

  1. In the last couple of years, effective altruism received hundreds of millions of dollars of funding from SBF and FTX via the Future Fund.
  2. SBF was following a strategy of "make tons of money to give it to charity." This is called "earning to give", an idea spread by EA in the early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA.
  3. SBF was something of a "golden boy" to EA. For example, this.

How did FTX go bankrupt?

  • FTX gambled with user deposits rather than keeping them in reserve. 
  • Binance, a competitor, triggered the equivalent of a bank run, in which depositors attempted to withdraw their money.
  • It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence.
  • Source

How bad is this?

  • "It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here" (Source)
  • Also:

Does EA still have funding?

  • Yes. 
  • Before FTX there was Open Philanthropy (OP), which has billions in funding from Dustin Moskovitz and Cari Tuna. None of this is connected to FTX.

Is Open Philanthropy funding affected?

  • Global health and wellbeing funding will continue as normal.
  • Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making.
  • Thus, Open Philanthropy is "pausing most new longtermist funding commitments" (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate.
  • Source

How much of EA's money came from FTX Future Fund?

  • Per the post Historical EA funding data (August 2022), the estimate for 2022 was total EA funding of roughly $741M, of which the FTX Future Fund contributed roughly $262M (35%).

If you got money from FTX, do you have to give it back?

  • "If you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back."
  • "you will receive formal notice and have an opportunity to make your own case"
  • "this process is likely to take months to unfold" and is "going to be a multi-year legal process"
  • If this affects you, please read this post from Open Philanthropy. They also made an explainer document on clawbacks.

What if you've already spent money from FTX?

  • It's still possible that you may have to give it back.

If you got money from FTX, should you give it back?

  • You probably shouldn't, at least for the moment. If you give the money back outside the proper legal channels, you may end up having to pay it back a second time through the bankruptcy process.

If you got money from FTX, should you spend it?

  • Probably not, at least for the next few days, since you may have to give it back.

I feel bad about having FTX money.

  • Reading this may help.
  • "It's fine to be somebody who sells utilons for money, just like utilities sell electricity for money."
  • "You are not obligated to return funding that got to you ultimately by way of FTX; especially if it's been given for a service you already rendered, any more than the electrical utility ought to return FTX's money that's already been spent on electricity"

What if I'm still expecting FTX money?

  • The entire FTX Future Fund team has resigned, but "grantees may email grantee-reachout@googlegroups.com."
  • "To the extent you have a binding agreement entitling you to receive funds, you may qualify as an unsecured creditor in the bankruptcy action. This means that you could have a claim for payment against the debtor’s estate" (source)

I needed my FTX money to pay the rent!

  • Nonlinear has announced an emergency fund for FTX grantees facing urgent financial need: https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding

I have money and would like to help rescue EA projects that have lost funding.

  • You can contribute to the Nonlinear emergency fund mentioned above.
  • "If you’re a funder and would like to help, please reach out: katwoods@nonlinear.org"

How can I get support/help (for mental health, advice, etc)?

  • "Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon." (source)
  • Some mental health advice here.
  • Someone set up a support network for the FTX situation.
    • This table lists people you can contact for free help. It includes experienced mental health supporters and EA-informed coaches and therapists.
    • This Slack channel is for discussing your issues, and getting support from the trained helpers as well as peers.

How are people reacting?

  • Will MacAskill: "If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception."
  • Rob Wiblin: "I am ******* appalled. [...] FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help."
  • Holden Karnofsky: "I dislike “end justify the means”-type reasoning. The version of effective altruism I subscribe to is about being a good citizen, while ambitiously working toward a better world. As I wrote previously, I think effective altruism works best with a strong dose of pluralism and moderation."
  • Evan Hubinger: "We must be very clear: fraud in the service of effective altruism is unacceptable"

Was this avoidable?

  • Nathan Young notes a few red flags in retrospect.
  • But on the other hand:

Did leaders in EA know about this?

Will there be an investigation into whether EA leadership knew about this?

  • Tyrone-Jay Barugh has suggested this, and Max Dalton (leader of CEA) says "this is something we’re already exploring, but we are not in a position to say anything just yet."

Why didn't the EA criticism competition reveal this? 

  • (This question was posed by Matthew Yglesias)
  • Bruce points out that there were a number of relevant criticisms which questioned the role of FTX in EA (eg). However, there was no good system in place to turn this into meaningful change.

Does the end justify the means?

  • Many people are reiterating that EA values go against doing bad stuff for the greater good.
  • Will MacAskill compiled a list of times that prominent EAs have emphasised the importance of integrity over the last few years.
  • Several people have pointed to this post by Eliezer Yudkowsky, in which he makes a compelling case for the rule "Do not cheat to seize power even when it would provide a net benefit."

Did Qualy the lightbulb break character?

Comments

Nitpick: the sources for "SBF is bankrupt" and for "SBF will likely be convicted of a felony" lead to Nathan's post, which I do not believe says either, and which has hundreds of comments, so if the source is one of those comments, maybe link to it directly?

For the first of those, while I know FTX filed for bankruptcy (and had their assets frozen by the Bahamas), I haven't heard about SBF personally filing for bankruptcy (which doesn't rule it out, of course).

For the second, it is probably premature to assume he'll be convicted? This depends not only on what he did, but also on the laws and the authorities in several countries. On the other hand, I'm not actually knowledgeable about this, and here is a source claiming that he will probably go to jail.

1
Hamish McDoodles
Thanks! You are right that SBF hasn't personally filed for bankruptcy; I was confused. The felony conviction comes from a Manifold market embedded within Nathan's post. I have added a link directly to Manifold to make this clearer.

Thanks for this; it's a nicely compact summary of a really messy situation that I can quickly share if necessary.

FYI: this post has been linked to by this article on Semafor, with the description:

The Centre for Effective Altruism has had to deal with a lot of questions about Bankman-Fried since FTX’s collapse. Here’s an FAQ it put together.

I don't think you're from CEA... Right?

4
Hamish McDoodles
Semafor has corrected the article.

The "correction" is:

The Centre for Effective Altruism has had to deal with a lot of questions about Bankman-Fried since FTX’s collapse. Here’s an FAQ put together by the Effective Altruism Forum.

Which is of course patently false. What does it mean for a forum to put together an FAQ? Have Semafor ever used a forum before?

2
Hamish McDoodles
If you interpret "the Effective Altruism Forum" as a metonym for "the people who use the forum", then it is true (like how you can say "Twitter is going nuts over this"). It's weird, but I don't see any reason to make a fuss about it.
6
Linch
If someone says "Twitter is going nuts over this" and I learned the source was one Tweet, I'd consider what they said to be pretty inaccurate. (There is a bit of nuance here since your post is highly upvoted and Twitter has more users than EAF, but I would also consider "EA Twitter is going nuts" over one highly liked Tweet by an EA to be a severe exaggeration.) Similarly, this FAQ was never a) put together by the EAF team, or b) crowdsourced from a bunch of users. I expect most people reading this to think of this FAQ as substantially more official than your own caveats at the top of the page suggest.
5
Linch
The Centre for Effective Altruism has had to deal with a lot of questions about Bankman-Fried since FTX’s collapse. Here’s an FAQ Hamish Doodles, a user on the Effective Altruism Forum, put together in a personal capacity.
2
Hamish McDoodles
I gather that you think it's an issue worth correcting? Feel free to suggest a more correct phrasing for Semafor and I'll pass it on.
3
Hamish McDoodles
Ah, thanks. You are right.  I have tweeted them a correction.

I got funding approved from FTX (but they didn't transfer any money to me). Will anyone else look at my funding application and consider funding it instead?

5
Neel Nanda
My guess is that there are a lot of people in that situation (including people who already made important and irreversible decisions based on that funding existing) and these opportunities are being desperately triaged. We don't have any official word on this, though. Do you have an urgent need for word on the funding? If not, I'd recommend waiting a few weeks and enquiring again once things have calmed down.
4
Yonatan Cale
Waiting a few weeks would be a big problem for me (I estimate my co-founder might leave in the meantime). (Still, thank you for your reply.)
4
Guy Raveh
For you personally, and for some other people, the Long Term Future Fund might be a good idea if you haven't tried them yet.
4
Hamish McDoodles
Have you tried grantee-reachout@googlegroups.com?
2
Yonatan Cale
Yes, but I haven't gotten a reply yet (I emailed them yesterday). Thanks for the suggestion though.
3
aj
There are now people who are trying to help those in your situation. https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding 
7
Yonatan Cale
Thanks, I feel this is less of an "emergency funding" situation and more of "trying to quickly understand if we can get normal funding", since the worst case here is going back to "normal" jobs, and not, as the Nonlinear post says, "facing an existential financial crisis". Still, thank you for the recommendation.

nitpick: It's Cari Tuna, not Kari Tuna

9
Hamish McDoodles
Thanks! Now we need to hide the evidence to avert EA having no billionaire funders.
0
Guy Raveh
And certainly not Carry Tuna, because carrying tuna goes against our animal welfare ideals.

Nitpick: It is actually appropriate to carry tuna, but only if you are carrying distressed tuna to safety

Other possible question for this FAQ:

How much of EA's money came from FTX Future Fund?
As per the post Historical EA funding data from August 2022, the estimate for 2022 was:
* Total EA funds: $741M
* FTX Future Fund contribution: $262M (35%)

If anyone has more up to date analysis, or better data, please report it.

And many thanks for the very clear and useful summary. Well done.

3
Hamish McDoodles
Thanks! I have added your contribution here.

Expanding on what Nathan Young said about the dangers of wealthy celebrities mentioning Effective Altruism, I am wondering if it's in EA's best interest to certify donors and spokespeople before they mention EA. The term "effective altruism" itself is ambiguous, and having figures such as Musk or FTX use their own definitions without going through the rigor of studying the established definition only makes the problem worse. With certification (one that needs to be renewed annually, I must add), it ensures there's agreement between well-known figures and the EA community that they are in alignment with what EA really means. It also adds accountability to their pledges and donations.

8
Jay Bailey
It seems like this only works for people who want to be aligned with EA but are unsure if they're understanding the ideas correctly. This does not seem to apply to Elon Musk (I doubt he identifies as EA, and he would almost certainly simply ignore this certification and tweet whatever he likes) or SBF (I am quite confident he could have easily passed such a certification if he wanted to). Can you identify any high-profile individuals right now who think they understand EA but don't, who would willingly go through a certification like this and thus make more accurate claims about EA in the future?

Thank you for providing this FAQ. Maybe you want to add this:

A support network has been set up for people going through a rough patch due to the FTX situation.

In this table, you can find the experienced mental health supporters to talk to.

  • They want to help (for free), and you can just contact them. The community health team, as well as some EA-informed coaches and therapists, are listed already.

You can join the new Support Slack here.
People can share and discuss their issues, and get support from the trained helpers as well as peers.

2
Hamish McDoodles
Thanks! I have added this to the "Where can I get help" section.
[anonymous]2
16
17

Re: do the ends justify the means? 

It is wholly unsurprising that public-facing EAs are currently denying that ends justify means, because they are in damage control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it, because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word, because whatever they truly believe, we should expect their answers to be the same.

Let's thin... (read more)

Because a double-or-nothing coin-flip scales; it doesn't stop having high EV when we start dealing with big bucks.

Risky bets aren't themselves objectionable in the way that fraud is, but to address this point narrowly: realistic estimates put risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA's impact scales with the logarithm of its wealth. If you're gambling a small amount of money, that means you should be ~indifferent to a 50/50 double-or-nothing bet (note that even in this case it doesn't have positive EV). But if you're gambling with the majority of the wealth that's predictably committed to EA causes, you should be much more scared of risky bets.

(Also in this case the downside isn't "nothing" — it's much worse.)
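To make the log-of-wealth point above concrete, here is a minimal sketch in Python. All figures are hypothetical placeholders for illustration, not anyone's actual numbers:

```python
# Minimal illustrative sketch (hypothetical numbers only): a fair 50/50
# double-or-nothing bet is EV-neutral in dollars, but under log utility it
# gets much worse as the stake approaches the whole pool of committed funds.
import math

def expected_log_utility(wealth, stake):
    """Expected log-wealth after betting `stake` on a fair double-or-nothing flip."""
    win = wealth + stake
    lose = wealth - stake
    return 0.5 * math.log(win) + 0.5 * math.log(lose)

wealth = 10_000_000_000  # hypothetical pool of altruistic funds

for stake in (1_000, 1_000_000_000, 9_000_000_000):
    delta = expected_log_utility(wealth, stake) - math.log(wealth)
    print(f"stake ${stake:>13,}: change in expected log-utility = {delta:+.6f}")

# Approximate output: a tiny stake is ~neutral (very slightly negative), but
# staking most of the pool sharply reduces expected log-utility, even though
# the expected dollar value of the bet is unchanged.
```

This is only a toy version of the "impact scales with the log of wealth" approximation used in the comment above; as noted, the real downside here was much worse than "nothing".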

I think marginal returns probably don't diminish nearly as quickly as the logarithm for neartermist cause areas, but maybe that's true for longtermist ones (where FTX/Alameda and associates were disproportionately donating), although my impression is that there's no consensus on this, e.g. 80,000 Hours has been arguing for donations still being very valuable.

(I agree that the downside (damage to the EA community and trust in EAs) is worse than nothing relative to the funds being gambled, but that doesn't really affect the spirit of the argument. It's very easy to underappreciate the downside in practice, though.)

2
Neel Nanda
I'd actually guess that longtermism diminishes faster than logarithmic, given how much funders have historically struggled to find good funding opportunities.
0
Lukas Finnveden
Global poverty probably has slower diminishing marginal returns, yeah. Unsure about animal welfare. I was mostly thinking about longtermist causes. Re 80,000 Hours: I don't know exactly what they've argued, but I think "very valuable" is compatible with logarithmic returns. There are also diminishing marginal returns to direct workers in any given cause, so logarithmic returns on money don't mean that money becomes unimportant compared to people, or anything like that.
2
MichaelStJules
(I didn't vote on your comment.) Here's Ben Todd's post on the topic from last November: Despite billions of extra funding, small donors can still have a significant impact. I'd especially recommend this part from section 1:

So he thought the marginal cost-effectiveness hadn't changed much while funding had dramatically increased within longtermism over these years. I suppose it's possible marginal returns diminish quickly within each year, even if funding is growing quickly over time, though, as long as the capacity to absorb funds at similar cost-effectiveness grows with it.

Personally, I'd guess funding students' university programs is much less cost-effective on the margin, because of the distribution of research talent, students should already be fully funded if they have a decent shot of contributing, the best researchers will already be fully funded without many non-research duties (like being a teaching assistant), and other promising researchers can get internships at AI labs both for valuable experience (80,000 Hours recommends this as a career path!) and to cover their expenses. I also got the impression that the Future Fund's bar was much lower, but I think this was after Ben Todd's post.

Caroline Ellison literally says this in a blog post: 

"If you abstract away the financial details there’s also a question of like, what your utility function is. Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don’t do this, because their utility is more like a function of their log wealth or something and they really don’t want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)"

 

So no, I don't think anyone can deny this. 

5[anonymous]
Link?
6
Lauren Maria
https://at.tumblr.com/worldoptimization/slatestarscratchpad-all-right-more-really-stupid/8ob0z57u66zr   EDIT: The tumblr has been taken down.  EDIT #2: Someone archived it: https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/
1
Lin BL
That link doesn't work for me. Do you have another one, or has it been taken down?
2
Lauren Maria
It looks like the tumblr was actually deleted, unfortunately. I spent quite a bit of time going through it last night because I saw screenshots of it going around. 
1
Lauren Maria
Hey @Lin BL, someone archived it! I just found this link:  https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/

Because utility and integrity are wholly independent variables, so there is no reason for us to assume a priori that they will always correlate perfectly. So if we wish to believe that integrity and expected value correlated for SBF, then we must show it. We must actually do the math.

This feels a bit unfair when people (i) have argued that utility and integrity will correlate strongly in practical cases (why use "perfectly" as your bar?), and (ii) that they will do so in ways that will be easy to underestimate if you just "do the math".

You might think they're mistaken, but some of the arguments do specifically talk about why the "assume 0 correlation and do the math"-approach works poorly, so if you disagree it'd be nice if you addressed that directly.

3
MichaelStJules
Utility and integrity coming apart, and in particular deception for gain, is one of the central concerns of AI safety. Shouldn't we similarly be worried at the extremes even in human consequentialists? It is somewhat disanalogous, though, because:
1. We don't expect one small group of humans to have so much power without the need to cooperate with others, like might be the case for an AGI taking over. Furthermore, the FTX/Alameda leaders had goals that were fairly aligned with a much larger community (the EA community), whose work they've just made harder.
2. Humans tend to inherently value integrity, including consequentialists. However, this could actually be a bias among consequentialists that consequentialists should seek to abandon, if we think integrity and utility should come apart at the extremes and we should go for the extremes.
3. (EDIT) Humans are more limited cognitively than AGIs, and are less likely to identify net positive deceptive acts and more likely to identify net negative ones than AGIs.

EDIT: On the other hand, maybe we shouldn't trust utilitarians with AGIs aligned with their own values, either.
2[anonymous]
Assuming zero correlation between two variables is standard practice, because for any given pair of variables, it is very likely that they do not correlate. Anyone who wants to disagree must crunch the numbers and disprove it. That's just how math works. And if we want to treat ethics like math, then we need to actually do some math. We can't have our cake and eat it too.
4
Lukas Finnveden
I'm not sure how literally you mean "disprove", but on its face, "assume nothing is related to anything until you have proven otherwise" is a reasoning procedure that will never recommend any action in the real world, because we never get that kind of certainty. When humans try to achieve results in the real world, heuristics, informal arguments, and looking at what seems to have worked OK in the past are unavoidable.
2[anonymous]
I am talking about math. In math, we can at least demonstrate things for certain (and prove things for certain, too, though that is admittedly not what I am talking about). But the point is that we should at least be able to bust out our calculators and crunch the numbers. We might not know whether these numbers apply to the real world. That's fine. But at least we have the numbers, and that counts for something.

For example, we can know roughly how much wealth SBF was gambling. We can give that a range. We can also estimate how much risk he was taking on. We can give that a range too. Then we can calculate whether the risk he took on had net positive expected value. It's possible that it only has positive expected value above a certain level of risk, or whatever. Perhaps we do not know whether he faced this risk. That is fine. But we can still see under what circumstances SBF would have been rational, acting on utilitarian grounds, to do what he did. If these circumstances do or could describe the circumstances SBF was in earlier this week, then that should give us reason to pause.
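As a rough illustration of this "actually do the math" idea, here is a sketch under the same log-of-wealth approximation discussed earlier in the thread, with purely hypothetical parameters (not SBF's actual figures): what success probability would a gamble need in order to break even?

```python
# Rough sketch, hypothetical parameters only: under log utility, the minimum
# success probability p needed for a gamble that multiplies a fraction f of
# total wealth by `upside` on success and loses that fraction on failure.
import math

def break_even_probability(f, upside):
    """Smallest p such that p*log(1+(upside-1)*f) + (1-p)*log(1-f) >= 0."""
    gain = math.log(1 + (upside - 1) * f)   # log-utility gain if the bet pays off
    loss = math.log(1 - f)                  # log-utility loss if the bet fails
    return -loss / (gain - loss)

for f in (0.1, 0.5, 0.9):
    for upside in (2, 10):
        p = break_even_probability(f, upside)
        print(f"gamble {f:.0%} of wealth at {upside}x upside: "
              f"needs p >= {p:.2f} to break even in log-utility")
```

None of these parameters are claimed to describe FTX; the point is only that the circumstances under which such a gamble would look rational can be made explicit and checked against plausible ranges.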

Fair. 

TBH, this has put me off of utilitarianism somewhat. Those silly textbook counter-examples to utilitarianism don't look quite so silly now.

Except the textbook literally warns about this sort of thing:

This is a generalizable defense of utilitarianism against a wide range of alleged counterexamples. Such “counterexamples” invite us to imagine that a typically-disastrous class of action (such as killing an innocent person) just so happens, in this special case, to produce the best outcome. But the agent in the imagined case generally has no good basis for discounting the typical risk of disaster. So it would be unacceptably risky for them to perform the typically-disastrous act. We maximize expected value by avoiding such risks. For all practical purposes, utilitarianism recommends that we should refrain from rights-violating behaviors.

Again, warnings against naive utilitarianism have been central to utilitarian philosophy right from the start.  If I could sear just one sentence into the brains of everyone thinking about utilitarianism right now, it would be this: If your conception of utilitarianism renders it *predictably* harmful, then you're thinking about it wrong.

9
publius
There's the case that such distinctions are too complex for a not insignificant proportion of the public, and that utilitarianism therefore should not be promoted to a larger audience at all, since all the textbooks filled with nuanced discussion will collapse to a simple heuristic in the minds of some, such as 'ends justifying the means' (which is obviously false).
-4
Richard Y Chappell🔸
I don't think we should be dishonest.  Given the strong case for utilitarianism in theory, I think it's important to be clear that it doesn't justify criminal or other crazy reckless behaviour in practice.  Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point. If you just mean that we shouldn't promote context-free, easily-misunderstood utilitarian slogans in superbowl ads or the like, then sure, I think that goes without saying.
9
publius
It's quite evident people do follow discussions on utilitarianism but fail to understand the importance of integrity in a utilitarian framework, especially if one is unfamiliar with Kant. If the public finds SBF's system of moral beliefs to blame for his actions, it will most likely be for being too utilitarian rather than not being utilitarian enough – a misunderstanding which will be difficult to correct.
0
Richard Y Chappell🔸
Are you disagreeing with something I've said? I'm not seeing the connection.  (I obviously agree that many people currently misunderstand utilitarianism, or I wouldn't spend my time trying to correct those misunderstandings.)
-41[anonymous]
4
David Mathers🔸
People believing utilitarianism could be predictably harmful, even if the theory actually says not to do the relevant harmful things. (Not endorsing this view: I think if you've actually spent time socially in academic philosophy, it is hard to believe that people who profess to be utilitarians are systematically more or less trustworthy than anyone else.)
5
Erich_Grunewald 🔸
As someone who has doubts about track record arguments for utilitarianism, I want to go on the record as saying I think that cuts both ways – that I don't think SBF's actions are a reason to think utilitarianism is false or bad (nor true or good). Like, in order to evaluate a person's actions morally we already need a moral theory in place. So the moral theory needs to be grounded in something else (like for example intuitions, human nature and reasoned argument).
2
Richard Y Chappell🔸
Sure, it's possible that misunderstandings of the theory could prove harmful.  I think that's a good reason to push back against those misunderstandings! I'm not a fan of the "esoteric" reasoning that says we should hide the truth because people are too apt to misuse it.  I grant it's a conceptual possibility.  But, in line with my general wariness of naive utilitarian reasoning, my priors strongly favour norms of openness and truth-seeking as the best way to ward off these problems.

Also note Sam's own blog

[anonymous]13
7
7

Interesting, thanks. This quote from SBF's blog is particularly revealing:

The argument, roughly goes: when computing expected impact of causes, mine is 10^30 times higher than any other, so nothing else matters.  For instance, there are 10^58 future humans, so increasing the odds that they exist by even .0001% is still worth 10^44 times more important that anything that impacts current humans. 

Here SBF seems to be going full throttle on his utilitarianism and EV reasoning. It's worth noting that many prominent leaders in EA also argue for this sort of thing in their academic papers (their public facing work is usually more tame).

For example, here's a quote from Nick Bostrom (head honcho at the Future of Humanity Institute). He writes:

Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.

That sentence is in the third paragraph.

Then you have Will MacAskill and Hilary Greaves saying stuff like:

On these estimates, $1 billion of spending would provide at least a 0.001% absolute reduction in existential risk

... (read more)

I think the quotes from Sam's blog are very interesting, and they're pretty strong evidence for the view that Sam's thinking and actions were directly influenced by some EA ideas.

I think the thinking around EA leadership is way too premature and presumptive. There are many years (like a decade?) of EA leadership generally being actually good people and not liars. There are also explicit calls in "official" EA sources that specifically say that the ends do not justify the means in practice, honesty and integrity are important EA values, and pluralism and moral humility are important (which leads to not doing things that would transgress other reasonable moral views). 

Most of the relevant documentation is linked in Will's post

Edit: After reading the full blog post, the quote is actually Sam presenting the argument that one can calculate which cause is highest priority, the rest be damned. 

He goes on to say in the very next paragraph:

This line of thinking is implicitly assuming that the impacts of causes add together rather than multiply, and I think that's probably not a very good model.

He concludes the post by stating that the multiplicative model, which he thinks ... (read more)

7
MichaelStJules
Ya, they aren't really talking about the numbers, even though a utilitarian should probably accept instrumental harm to innocents for a large enough benefit, at least in theory. Maybe they distrust this logic so much in practice, possibly based on historical precedent like communism, that they endorse a general rule against it. But it would still be good to see some numbers.

I read that the Future Fund has granted something like $200 million already, and FTX/Alameda leadership invested probably something like half a billion dollars in Anthropic. And they were probably expecting to donate more. Presumably they didn't expect to get caught or have a bank run, at least not this soon. Maybe they even expected that they could eventually make sure they had enough cash to cover all customer investments, so no customer would actually ever be harmed even in the case of a bank run (although they'd still be exposed to risks they were lied to about until then). Plausibly they underestimated the risk of getting caught, but maybe by their own lights, it'll already have been worth it even with getting caught, as long as the EA community doesn't pay it all back.

If our integrity, public trust/perception, lost potential EAs and ability to cooperate with others are worth this much, should we* just pay everything we got from FTX and associates back to FTX customers? And maybe more, for deterrence and for the cases that don't get caught?

*possibly our major funders, not individual grantees.
6
Jeff Kaufman 🔸
I think that's part of why Will etc are giving lots of examples of things they said publicly before FTX exploded where they argued against this kind of reasoning.
2
Jason
I think there may be two separate actions to analyze here: the decision to take extreme risks with FTX/Alameda's own assets to start with, and the decision to convert customer funds in an attempt to prevent Alameda, FTT, FTX, SBF, and the Future Fund from collapsing, in that order. If that is true, it isn't an answer to say SBF shouldn't have been taking extreme risks with a huge fraction of EA-aligned money. At the time the fraud / no fraud decision was to be made, that may no longer have been an option. So EA needs to be clear on whether SBF should have allowed his wealth / much of the EA treasury to collapse rather than risk/convert customer funds, because that may have been the choice he was faced with a week ago.
1
carboniferous_umbraculum
One reaction when reading this is that you might be kind of eliding the difference between utilitarianism per se and expected value decision analysis.
3[anonymous]
Fair enough. I tried to explain that they were different in the comment section of another post, but was met with downvotes and whole walls of text trying to argue with me. So I've largely given up trying to make those distinctions clear on this forum. It's too tiresome.
1
Ben Auer
I believe the ‘walls of text’ that Adrian is referring to are mine. I'd just like to clarify that I was not trying to collapse the distinction between a decision procedure and the rightness criterion of utilitarianism. I was merely arguing that the concept of expected value can be used both to decide what action should be taken (at least in certain circumstances)[1] and to evaluate whether an action is / was morally right (arguably in all circumstances) - indeed, this is a popular formulation of utilitarianism. I was also trying to point out that whether an action is good, ex ante, is not necessarily identical to whether the consequences of that action are good, ex post. If anyone wants more detail you can view my comments here.

[1] Although usually other decision procedures, like following general rules, are more advisable, even if one maintains the same rightness criterion.

Hi @Hamish Doodle. My post, cited here with regard to "If you got money from FTX, do you have to give it back?" and "If you got money from FTX, should you spend it?", was intended to advise people not to spend FTX money (if possible) until Molly's announced EA Forum post was published. She has now posted it here.

Could you please add/link that source? I believe the takeaway is similar, but it's a much more informative post. The key section is:

Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately

... (read more)
1
Hamish McDoodles
Thanks!  I have incorporated information from Molly's post. 

How much taxpayer money was funnelled through Ukraine to FTX and back to American politicians?
