I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence[1] and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact,[2] and I wanted to explore some conjectures about what that plausibility would entail.
I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community building efforts (most saliently from efforts connected to the Center for Effective Altruism) and I sincerely appreciate and am indebted to that work.
Some claims that are related and perhaps vaguely isomorphic to the above which I think are probably true but may feel less strongly about are:
- To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruist (or otherwise increase their identification with the EA movement) are default worth funding. That presumption should be weakened.
- Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.
- Leaders within social movements are likely to (consciously or unconsciously) overvalue measures that increase their own control and influence and undervalue measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.
- Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.
- FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.
- EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.
Here are some conjectures I’d make for potential implications of believing my plausibility claim:
- Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.
- Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately. I admit this idea is, at the moment, under-specified, but one basic example would be: EAs/EA grant makers should report their "front-line" and "meta" (or "community building") donation amounts as separate numbers (e.g. "I gave X to charity this year in total, of which Y was to EA front-line work, Z to EA community work, and W was non-EA"). I think there may be intelligent principles to develop about how the amounts of EA front-line funding and meta-level funding should relate to one another, but I have less of a sense of what those principles might be than a belief that starting to account for them as separate types of activities in separate categories will be productive.
- Integrate Future Community Building More Closely with Front-Line Work: Insofar as it makes sense to have less of a default presumption towards the value of community building, a way of de-risking community-building activities is to link them more closely to activities where the case for direct impact is stronger. For example, I personally hope for some of my kidney donation, challenge trial recruitment, and Rikers Debate Project work to have significant EA community-building upshots, even though that meta level is not those projects’ main goal or the metric I use to evaluate them. For what it’s worth, I think pursuing “double effect” strategies (e.g. projects that simultaneously have near-termist and longtermist targets, or animal welfare and forecasting-capacity targets) is underrated in current EA thinking. I also think connecting EA recruitment to direct work may mitigate certain risks of community building (e.g. the risks of creating an EA apparatchik class, recruiting “EAs” not sufficiently invested in having an actual impact, or competing with direct work for talent).
- Implement Carla Zoe Cremer’s Recommendations: Maybe I’m biased because we’re quoted together in some of the same articles but I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing (e.g. whistleblowing protections). Some (such as democratizing funding decisions) are more complicated to implement, and I acknowledge the concern that these procedural measures create friction that could reduce the efficacy of EA organizations, but I think (a) minimizing unnecessary burden is a design challenge likely to yield fairly successful solutions and (b) FTX clearly strengthens the arguments in favor of bearing the cost of that friction. Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.
- Consideration of a “Pulse” Approach to Funding EA Community Building: It may be the case that large EA funders should do time-limited pulses of funding towards EA community-building goals or projects, with the intention of building institutions that can sustain themselves off of separate funds in the future. The logic of this is: (a) insofar as EAs may be bad judges of the value of our own community building, requiring something appealing to external funders helps check that bias; (b) creating EA community institutions that must be attractive to outsiders to survive may avoid certain epistemic and political risks inherent to being too insular.
- EA as a Method and not a Result: The concept of effective altruism (rationally attempting to do good) has broad consensus but particular conceptions may be parochial or clash with one another.[3] A “thinner” effective altruism that emphasizes EA as an idea akin to the scientific method rather than a totalizing identity or community may be less vulnerable to FTX-like mistakes.
- Develop Better Logic for Weighing Harms Caused by EA against EA Benefits: An EA logic that assumes resources available to EAs will be spent at (say) GiveWell benefit levels (which I take to be roughly $100/DALY or equivalent) but that resources available to others are spent at (say) US government valuations of a statistical life (I think roughly $100,000/DALY) seems to justify significant risks of incurring very sizable harms to the public if they are expected to yield additional resources for EA. Clearly, EA's obligations to avoid direct harms (or certain types of direct harms) are at least somewhat asymmetric to obligations/permissions to generate benefits. But at the same time, essentially any causal act will have some possibility of generating harm (which in the case of systemic change efforts can be quite significant), so a precautionary principle designed in an overly simplistic way would kneecap the ability of EAs to make the world better. I don't know the right answer to this challenge, but clearly "defer to common sense morality" has proven insufficient, and I think more intellectual work should be done.
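The asymmetry this passage criticizes can be made concrete with a toy calculation. Everything below is an invented illustration of the naive logic, not a claim about real cost-effectiveness figures; the variable names and numbers are assumptions:

```python
# Toy model of the asymmetric-valuation logic described above.
# All numbers are illustrative assumptions, not claims about real programs.

EA_COST_PER_DALY = 100          # assumed $ per DALY for EA-directed spending
PUBLIC_COST_PER_DALY = 100_000  # assumed $ per DALY for general public spending

def dalys_from_dollars(dollars, cost_per_daly):
    """DALYs averted (or lost) when this many dollars is gained (or destroyed)."""
    return dollars / cost_per_daly

# Under the naive logic, each $1 raised for EA appears to "offset"
# PUBLIC_COST_PER_DALY / EA_COST_PER_DALY dollars of harm to the public:
break_even_harm_per_dollar = PUBLIC_COST_PER_DALY / EA_COST_PER_DALY
print(break_even_harm_per_dollar)  # 1000.0
```

On these assumed numbers, the logic would treat $1 of EA fundraising as justifying up to $1,000 of economic harm to the public, which is exactly the kind of conclusion the passage argues needs a better-developed counterweight.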
I'm not at all certain about the conjectures/claims above, but I think it's important that EA deals with the intellectual implications of the FTX crisis, so I hope they can provoke a useful discussion.
- ^
Am basing this on reporting in Semafor and the New Yorker. To be clear, I'm not saying that once you assume Alameda/FTX's existence, the ideology of effective altruism necessarily made it more likely that those entities would commit fraud. But I do think it is unlikely they would have existed in the first place without the support of institutional EA.
- ^
To be clear, my claim is not "the impact of the FTX fraud incident plausibly outweighs benefits of EA community building efforts to date" (though that may be true and would be useful to publicly disprove if possible) but that the FTX fraud should demonstrate there are a range of harms we may have missed (which collectively could plausibly outweigh benefits) and that "investing in EA community building is self-evidently good" is a claim that needs to be reexamined.
- ^
I find the distinction between concept and conception to be helpful here. Effective altruism as a concept is broadly unobjectionable, but particular conceptions of what effective altruism means or ought to entail involve thicker descriptions that can be subject to error or clash with one another. For example, is extending present-day human lifespans default good because human existence is generally valuable, or bad because doing so tends to create greater animal suffering that outweighs the human satisfaction in the aggregate? I think people who consider the principles of effective altruism important to their thinking can reasonably come down on both sides of that question (though I, and I imagine the vast majority of EAs, believe the former). Moreover, efforts to build a singular EA community around specific conceptions of effective altruism will almost certainly exclude other conceptions, and the friction of doing so may create political dynamics (and power-seeking behavior) that can lead to recklessness or other problems.
ok, an incomplete and quick response to the comments below (sry for typos). thanks to the kind person who alerted me to this discussion going on (still don't spend my time on your forum, so please do just pm me if you think I should respond to something)
1.
- regarding blaming Will or benefitting from the media attention
- i don't think Will is at fault alone, that would be ridiculous, I do think it would have been easy for him to make sure something is done, if only because he can delegate more easily than others (see below)
- my tweets are reaction to his tweets where he says he believes he was wrong to deprioritise measures
- given that he only says this after FTX collapsed, I'm saying, it's annoying that this had to happen before people think that institutional incentive setting needs to be further prioritised
- journalists keep wanting me to say this and I have had several interviews in which I argue against this simplifying position
2.
- i'm rather sick of hearing from EAs that i'm arguing in bad faith
- if I wanted to play nasty it wouldn't be hard (for anyone) to find attack lines, e.g. i have not spoken about my experience of sexual misconduct in EA and i continue to refuse t...
For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Eric Hoel or basically any other external critic with a forum account. I’m not saying this to deny that you have been treated unfairly by EAs, I remember one memorable event when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This just to say, I hope you don’t get too discouraged by this, overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.
Thank you for taking the time to write this up, it is encouraging - I also had never thought to check my karma ...
It would be a bit rude to focus on a minor part of your comment after you posted such a comprehensive reply, so I first want to say that I agreed with some of the points.
With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:
The problem I have with this framing is that it "punishes" EA (by applying isolated demands of "justify yourselves") ...
Indeed Lukas, I guess what I'm saying is: given what I know about EA, I would not entrust it with the ring .
I think the global argument is that power in EA should be deconcentrated/diffused across the board, and subjected to more oversight across the board, to reduce risk from its potential misuse. I dont think Zoe is suggesting that any actor should get a choice on how much power to lose or oversight to have. Could you say more about how adverse selection interacts with that approach?
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
I think ever since EA has become more of an “expected value maximisation” movement rather than a “doing good based on high quality evidence” movement, it has been quite plausible for EA activity overall, or community building specifically, to turn out to be net-negative in retrospect, but I think the expected value of community building remains extremely high.
I support more emphasis on thin EA and the development of a sort of rule of thumb for what a good ratio of meta spending vs object level impact spending would be.
Strongly agree that it is surprising that some of Carla Zoe Cremer’s reforms haven’t been implemented. Frankly, I would guess the reason is that too many leadership EAs are overconfident in their decision making and are much too focused on “rowing” instead of “steering” in Holden Karnofsky’s terms.
“Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.” Why do you think this? Is it mostly intuition?
My view of other social movements is that they undervalue efforts to increase power which is why most are unsuccessful. I credit a lot of EA’s success in terms of object level impact to a healthy degree of focus on increasing power as a means to increasing impact.
Just wanted to flag that I personally believe
- most of Cremer's proposed institutional reforms are either bad or zero-impact; this was the case when they were proposed, and it is still true after updates from FTX
- it seems clear the proposed reforms would not have prevented or influenced the FTX fiasco
- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you
are willing to cooperate in being presented as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this. (edit: "present yourself as" replaced with "are willing to cooperate in being presented")
I don't think this is a fair comment, and aspects of it read more as a personal attack than an attack on ideas. This feels especially the case given the above post has significantly more substance and recommendations to it, but this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.
For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think the amount of flack she's taken for this has been disproportionate and sends the wrong signal to others about dissenting.
I think this aspect of the comment is particularly harsh, which is in and of itself likely counterproductive. But on top of that, it's not the type that should be made lightly or without a lot of evidence that that is the person's agenda (bold for emphasis):
This discussion here made me curious, so I went to Zoe's twitter to check out what she's posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people's retweets saying Zoe "called it") is this tweet from last August:
That seems legitimate to me. We can debate whether institutional safeguards would have been the best action against FTX in particular, but the more general point of "EAs have a blind spot around tail risks due to an elated self-image of the movement" seems to have gotten a "+1" score with the FTX collapse (and EAs not having seen it coming despite some concerning signs).
There's also a tweet by a journalist that she retweeted:
I think the FTX regranting program was the single biggest push to decentralize funding EA has ever seen, and it's crazy to me that anyone could look at what FTX Foundation was doing and say that the key problem is that the funding decisions were getting more, rather than less, centralized. (I would be interested in hearing from those who had some insight into the program whether this seems incorrect or overstated.)
That said, first, I was a regrantor, so I am biased, and even aside from the tremendous damage caused by the foundation needing to back out and the possibility of clawbacks, the fact that at least some of the money which was being regranted was stolen makes the whole thing completely unacceptable. However, it was unacceptable in ways that have nothing to do with being overly centralized.
This seems right within longtermism, but, AFAIK, the vast majority of FTX's grantmaking was longtermist. This decision to focus on longtermism seemed very centralized and might otherwise have shaped the direction and composition of EA disproportionately towards longtermism.
To be fair, this seems like a reasonable statement on Zoe's part:
- If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely" which is obviously not particularly specific, but this would fit my definition of likely.
- If EA had diversified our portfolio to be less reliant on a few central donors, this would have also (quite obviously) meant the...
Why do you think so? Whistleblowers inside of FTX would have been protected under US law, and US institutions like the SEC offer them multi-million-dollar bounties. Why would an EA scheme create a stronger incentive?
Also: even if the possible whistleblowers inside of FTX were EAs, directing whistleblowing about fraud at FTX toward some EA org scheme rather than toward authorities like the SEC would have been a particularly bad idea. The EA scheme would not be equipped to deal with this and would need to basically immediately forward it to the authorities, leading to immediate FTX collapse. The main difference would be putting EAs at the centre of the happenings?
I think the 'diversified our portfolio' frame is ...
And even then, decisions about accepting funding are made by individuals and individual organizations. Would there be someone to kick you out of EA if you accept "unapproved" funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they want to take the money or not. I don't see how Cremer's proposal could be effective without a blacklist to enforce community will against anyone who chose to take the money anyway, and that gives whoever maintains the blacklist great power (which is contrary to Cremer's stated aims).
The reality, perhaps unfortunate, is that charities need donors more than donors need specific charities or movements.
Why do you believe this? To me, FTX fits more in the reference class of financial firms than EA orgs, and I don't see how EA whistleblower protections would have helped FTX employees whistleblow (I believe that most FTX employees were not EAs, for example). And it seems much more likely to me that an FTX employee would be able to whistle-blow than an EA at a non-FTX org.
Also, my current best guess is that only the top 4 at FTX/Alameda knew about the fraud, and I have not come across anyone who seems like they might have been a whistleblower (I'd love to be corrected on this though!)
I was reacting mostly to this part of the post
I think it's fine for a comment to engage with just a part of the original post. Also, if a posts advocates for giving someone some substantial power, it seems fair to comment on media presence of the person.
Overall, to me, it seems you advocate for a double standard / selective demand for rigour.
Post-FTX discussion of Zoe's proposals seems mostly on the level 'Implement Carla Zoe Cremer’s Recommendations' or 'very annoyed this all had to happen before a rethink, given that 10 months earlier, I sat in his office proposing whistleblower protections, transparency over funding sources, bottom-up control over risky donations' or similar high level supportive comm...
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
I can't speak to orgs, but the scope of legal protection for whistleblowing protection for US private employees is quite narrow -- I think people are calling for something much more robust. Also, I believe those protections often only cover an organization's actions against current employees -- not non-employer actions like blacklisting the whistleblower against receiving grants or trashing them to potential future employers.
The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation through for example threats or punishments. Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities.
The recent EU whistleblowing directive for example is a rather complex piece of legislation that aims to protect whistleblowers from e.g. being fired by their employers in some situations.
The US SEC whistleblowing program on the other hand incentivizes whistleblowing by providing financial awards, some 10-30% of sanctions collected, for information that leads to significant findings. This policy, for the US, has a quickly estimated return of 5-10x through first order effects, and possibly many times that in second order effects through stopping fraud and reducing the expected value of fraud in general. The SEC gives several awards each month. A report about the program is available here for those interested.
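The award arithmetic described above can be sketched briefly. The 10-30% band comes from the text; the sanctions figure below is a made-up example, and the function name is mine:

```python
# Sketch of the SEC whistleblower award band described above: a qualifying
# tip earns 10-30% of the sanctions collected. Integer math keeps it exact.

def sec_award_range(sanctions_collected):
    """Return the (minimum, maximum) possible award for a qualifying tip."""
    return (sanctions_collected * 10 // 100, sanctions_collected * 30 // 100)

# Hypothetical case: $50M in sanctions collected from the reported firm.
low, high = sec_award_range(50_000_000)
print(low, high)  # 5000000 15000000
```

So a single large case can plausibly make an individual whistleblower several million dollars, which is the incentive mechanism the comment contrasts with pure protection regimes.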
Whistleblower protections tend to be more bureaucra...
"It seems clear proposed reforms would not have prevented or influenced the FTX fiasco" doesn't really engage with the original poster's argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer's proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible.
Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that's not really relevant to the discussion of the original post.
Thank you for this post. The framing of your points as conditional is especially helpful.
I strongly agree with lots here. As someone who has worked on community building-ish projects that are very far from or very close to frontline/object-level work, this part rang especially true:
People interested in the claim might be interested in this related post and discussion.
A milder statement of this is almost certainly already accepted by EA leadership and we should see the impact when the EA brownout ends.
A year ago, generating more SBFs was the brief argument for the high EV of community building. A common refrain: "SBF is contributing so much to EA causes, if what we're spending on community building generates even just one more SBF it will be worth it."
Now turn SBF to a negative value in that equation, or even merely a zero. The end result may be non-negative, but the EV of community building is greatly reduced.
Many in EA positions who have funded community-building orgs are probably now smarting at having mis-invested based on a false perception of SBF's value.
If there is a hard part, it will be this: although SBF turned out not to be high value, it will be hard to resist including hypothetical non-fraudulent SBFs in our EV calculations, as we have habituated to that way of thinking.
I don’t see how this statement can be justified:
8 billion in value was not destroyed. The net effect is mainly distributional. Financial markets are largely zero sum. Some investors lost a lot, others gained. If it hurt the price of crypto assets this means that overall, those who have assets other than crypto are marginally better off.
Of course the chaos causes some value to be lost, but not 8 billion.
If someone steals my car, is there no "economic damage" because the thief is now better off to the extent of my loss? I would say I suffered economic damage and someone else got a benefit; the existence of that benefit does not negate the damage I incurred.
There is economic damage, but not necessarily equal to the headline number. It is reduced by netting against the gains to the thief, but increased by things like stress, required investments in security, disruption to plans, degraded incentives, and so on. In this case I would guess the economic damage is very large but still less than $8bn. In the case of a personal mugging I would guess the economic damage far exceeds the value of the contents of your wallet.
You might also reasonably object that the gains to the thief shouldn't count because they are illegitimate. However, in the FTX case many of the gains seem to have gone to other traders who profited without being guilty.
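The netting logic in this exchange can be written out as a toy calculation. Every figure below is invented purely for illustration of the accounting, not an estimate of the actual FTX numbers:

```python
# Toy accounting of "economic damage" versus the headline loss, following the
# netting logic above. All figures are invented for illustration only.

headline_loss   = 8_000_000_000  # nominal customer losses
transfers       = 6_000_000_000  # gains to counterparties (redistribution, not destruction)
deadweight_cost = 1_500_000_000  # stress, disruption, security, degraded incentives

# Net damage: subtract what was merely transferred, add real deadweight costs.
net_damage = headline_loss - transfers + deadweight_cost
print(net_damage)  # 3500000000
```

Depending on how much of the headline figure one treats as a transfer versus destruction (and whether illegitimate gains should count at all), the net damage can land well below or even above the headline number, which is the crux of the disagreement here.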
I find it implausible that EA movement building is net-negative (<10%). However, I do appreciate the importance of not being unconditionally enthusiastic about movement-building as some specific forms may very well be net-negative. Some things I'd like to be aware of going forward:
1. Attempt to do things that reasonable non-EA entities will find valuable (e.g., by not being dependent on EA funders and collaborating more with non-EA actors).
2. Be very aware of who we put on a pedestal as promoters and social role models. E.g., I appreciate MacAskill in many ways and have been inspired by him, but I think he's too emphasized as the EA leader/role model and would like to hear other voices better represented.
I think I'm not following the first stage of your argument. Why would the FTX fiasco imply that community building specifically (rather than EA generally) might be net-negative?
I think the idea is that EA institutions look much worse after FTX but EA causes do not. SBF being a fraud may cause you to update about whether (e.g.) CEA is a good organization but should not cause you to update on bednets/AI.
Reading the first paragraph of the OP, here's me trying to excavate the argument:
I think the argument is incomplete. Other things to think about:
After reading also the other parts of the post, I think the OP makes further claims about how the best way to counteract the risks of unintended negative impact is via "institutional reforms" and "democratization."
I'm not convinced that this is the best response. I think overdoing it with institutional reforms would add a bunch of governance overhead that unnecessarily* slows down good actors and can easily be exploited/weaponized (or even just sidestepped) by bad actors. Also, "democratization" sounds virtuous in theory, but large groups of people collectively tend to have messed up epistemics, since the discourse amplifies applause lights or even quickly becomes toxic because of dynamics where we mostly hear from the most vocal skeptics (who often have a personal grudge or some other problem) and all the armchair quarterbacks who don't have a clue of what they're missing. There comes a point where you'll get scrutinized a lot more for bad actions than for bad omissions (or for other things that somewhat randomly and unjustifiably evoke moral outrage in specific people – see this comment by Jonas Vollmer).
Maybe I'm strawmanning the calls for reform and people who want govern...
The problem is that most calls for reform lack specifics, and it is very difficult to meaningfully assess most reform proposals without them.
However, that is not necessarily the reformers' fault. In my view, it's not generally appropriate to deduct points for not offering more specific proposals if the would-be reformer has good reason to believe that reasonable proposals would be summarily sent to the refuse bin.
If Cremer's proposals in particular are getting a lot of glowing media attention, it seems like it would be worthwhile to do a clearer job as a community explaining why her specific proposals lack enough promise in their summary form to warrant further investigation, and to make an attempt to operationalize ideas that might be warranted and feasible. Even if the ideas were ultimately rejected, "the community discussed the ideas, fleshed some of them out, and decided that the benefits did not exceed the costs" is a much more convincing response from an optics perspective than blanket dismissals.
My own tentative view is that her specific ideas range from the fanciful (and thus unworthy of further investigation/elaboration) to the definitely-plausible-if-fleshed-out, so I think it's important to take each on its own merits. That is, of course, not a suggestion that any individual poster here has an obligation to do that, only that it would be a good thing if done in some manner. On the average, the ideas I've seen described on the forum are better because they are less grand / more targeted + specific.
[Giving myself 5 minutes to reply with a quick point - and failing!] Thank you for writing this. Here are some quick low confidence thoughts on the main argument you made.
I don't think I understand why you attribute any issues from FTX to community building specifically. The FTX outcome was a convergence of many factors, and movement building doesn't obviously seem to be the most important. So many other EA-adjacent practices, like philosophising, overconfidence, prioritisation, or promoting earning to give, could be similarly implicated.
I agre...
Could you say more about the possibility of "external" funders for EA community building? It's probably not realistic to get major funding from a Big-Name Generalist Foundation, given that many of EA's core ideas inevitably constitute a severe criticism of how Big Philanthropy works. And it would be otherwise hard to decide who an "external" funder was -- in my book, "gives lots of money to EA community building" is pretty diagnostic for being an EA and thus not external.
One possibility might be that major funders would only pick up (say) 50% of the tab fo...