(I know I'm late again replying to this thread.)
What surprises me about this whole situation is that people seem surprised that the executive leadership at a corporation worth an estimated $61.5B would engage in big-corporation PR-speak. The base rate for big-corporation execs engaging in such conduct in their official capacities seems awfully close to 100%.
Hm, good point. This gives me pause, but I'm not sure what direction to update in. Like, maybe I should update "corporate speak is just what these large orgs do and it's more like a fashion thing than a signal of their (lack of) integrity on things that matter most." Or maybe I should update in the direction you suggest, namely "if an org grows too much, it's unlikely to stay aligned with its founding character principles."
I'm getting the sense that a decent number of people assume that being "EA aligned" is somehow a strong inoculant against the temptations of money and power.
I would have certainly thought so. If anything can be an inoculant against those temptations, surely a strong adherence to a cause greater than oneself, packaged in lots of warnings against biases and other ways humans can go wrong (as is the common message in EA and rationalist circles), seems like the best hope for it? If you don't think it can be a strong inoculant, that makes you pretty cynical, no? (I think cynicism is often right, so this isn't automatically a rejection of your position. I just want to flag that yours is a claim with quite strong implications on its own.)
Arguably the FTX scandal -- which after all involved multiple EAs, not just SBF -- should have already caused people to update on how effective said inoculant is, at least when billions of dollars were floating around.
If you were just talking about SBF, then I'd say your point is weak because he probably wasn't low on dark triad traits to start out with. But your emphasis on how other EAs around him were also involved (the direct co-conspirators at Alameda and FTX) is a strong point.
Still, in my mind this would probably have gone very differently with the same group of people minus SBF and with a leader with a stronger commitment and psychological disposition towards honesty. (I should flag that parts of Caroline Ellison's blog also gave me vibes of "seems to like having power too much" -- but at least it's more common for young people to later change/grow.) That's why I don't consider it a huge update for "power corrupts". To me, it's a reinforcement of "it matters to have good leadership."
My worldview(?) is that "power corrupts" doesn't apply equally to every leader and that we'd be admitting defeat straight away if we stopped trying to do ambitious things. There doesn't seem to be a great way to do targeted ambitious things without some individual acquiring high amounts of power in the process.(?) We urgently need to do a better job of ensuring that those who end up with a lot of power aren't almost always people with somewhat shady character. The fact that we're so bad at this suggests that such people are advantaged at some aspects of ambitious leadership, which makes the whole thing a lot harder. But that doesn't mean it's impossible.
I concede that there's a sense in which this worldview of mine is not grounded in empiricism -- I haven't even looked into the matter from that perspective. Instead, it's more like a commitment to a wager: "If this doesn't work, what else are we supposed to do?"
I'm not interested in concluding that the best we can do is criticise the powers that be from the sidelines.
Of course, if leaders exhibit signs of low integrity, like in this example of Anthropic's communications, it's important not to let this slide. The thing I want to push back against is an attitude of "person x or org y has acquired so much power, surely that means they're now corrupted," which then leads to no longer giving them the benefit of the doubt/not trying to see the complexities of their situation when they do something that looks surprising/disappointing/suboptimal. With great power comes great responsibility, including a responsibility not to mess up your potential for doing even more good later on. Naturally, this comes with lots of tradeoffs, and it's not always easy to infer from publicly visible actions and statements whether an org is still culturally on track. (That said, I concede that you can often tell quite a lot about someone's character/an org's culture based on how/whether they communicate nuances, which is sadly why I've had some repeated negative updates about Anthropic lately.)
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I never interpreted that to be the crux/problem here. (I know I'm late replying to this.)
People can change what they identify as. For me, what looks shady in their responses is the clumsy attempt at downplaying their past association with EA.
It's not that I care because I still identify with EA; rather, I care because it falls under "not being consistently candid." (I quite like that expression despite its unfortunate history.) I'd be equally annoyed if they downplayed some significant other thing unrelated to EA.
Sure, you might say it's fine not being consistently candid with journalists. They may quote you out of context. Pretty common advice for talking to journalists is to keep your statements as short and general as possible, esp. when they ask you things that aren't "on message." Probably they were just trying to avoid actually-unfair bad press here? Still, it's clumsy and ineffective. It backfired. Being candid would probably have been better here even from the perspective of preventing journalists from spinning this against them. Also, they could just decide not to talk to untrusted journalists?
More generally, I feel like we really need leaders who can build trust and talk openly about difficult tradeoffs and realities.
I agree that these statements are not defensible. I'm sad to see it. There's maybe some hope that the person making these statements was just caught off guard and it's not a common pattern at Anthropic to obfuscate things with that sort of misdirection. (Edit: Or maybe the journalist was fishing for quotes and made it seem like they were being more evasive than they actually were.)
I don't get why they can't just admit that Anthropic's history is pretty intertwined with EA history. They could still distance themselves from "EA as the general public perceives it" or even "EA-as-it-is-now."
For instance, they could flag that EA maybe has a bit of a problem with "purism" -- like, some vocal EAs in this comment section and elsewhere seem to think it is super obvious that Anthropic has been selling out/became too much of a typical for-profit corporation. I didn't myself think that this was necessarily the case, because I see a lot of valid tradeoffs that Anthropic leadership is having to navigate, which the armchair-quarterbacking EAs seem to be failing to take into account? However, the communications highlighted in the OP made me update that Anthropic leadership probably does lack the integrity needed to do complicated power-seeking stuff that has the potential to corrupt. (If someone can handle the temptations from power, they should at the very least be able to handle the comparatively easy task of not willingly distorting the truth as they know it.)
As you say, you can block the obligation to gamble and risk Common-sense Eutopia for something better in different ways/for different reasons.
For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Considering many people don't have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high on a perspective where it matters what existing people want for the future of themselves and their loved ones.
Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C.
If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we'd want to prevent changes from D to C.)
So, if either of the two worlds already exists, D>C.
Where the way you're setting up this argument turns controversial, though, is when you suggest that "D>C" is valid in some absolute sense, as opposed to just being valid (in virtue of how it better fulfills the preferences of existing people) under the stipulation of starting out in one of the worlds (that already contains all the relevant people).
Let's think about the case where no one exists so far, where we're the population planners for a new planet that can be shaped into either C or D. (In that scenario, there's no relevant difference between B and C, btw.) I'd argue that both options are now equally defensible, because the interests of possible people are under-defined* and there are defensible personal stances on population ethics for justifying either.**
*The interests of possible people are under-defined not just because it's open how many people we might create. In addition, it's also open who we might create: some human psychological profiles are such that when someone's born into a happy/privileged life, they adopt a Buddhist stance towards existence and think of themselves as not having benefitted from being born. Other psychological profiles are such that people do think of themselves as grateful and lucky for having been born. (In fact, others yet even claim that they'd consider themselves lucky/grateful even if their lives consisted of nothing but torture.) These varying intuitions towards existence can inspire people's population-ethical leanings. But there's no fact of the matter about "which intuitions are more true." These are just different interpretations of the same set of facts. There's no uniquely correct way to approach population ethics.
**Namely, C is better on anti-natalist harm reduction grounds (at least depending on how we interpret the scale/negative numbers on the scale), whereas D is better on totalist grounds.
All of that was assuming that C and D are the only options. If we add a third alternative, say, "create no one," the ranking between C and D (previously they were equally defensible) can change.
At this point, the moral realist proponents of an objective "theory of the good" might shriek in agony and think I have gone mad. But hear me out. It's not crazy at all to think that choices depend on the alternatives we have available. If we also get the option "create no one," then I'd say C becomes worse than the two other options, because there's no approach to population ethics according to which C is optimal among the three options. My person-affecting stance on population ethics says that we're free to do a bunch of things, but the one thing we cannot do is act with negligent disregard for the interests of potential people/beings.
Why? Essentially for similar reasons why common-sense morality says that struggling lower-class families are permitted to have children that they raise under hardship with little means (assuming their lives are still worth living in expectation), but if a millionaire were to do the same to their child, they'd be an asshole. The fact that the millionaire has the option "give my child enough resources to have a high chance at happiness" makes it worse if they then proceed to give their child hardly any resources at all. Bringing people into existence makes you responsible for them. If you have the option to make your child really well off, but you decide not to do that, you're not taking into consideration the interests of your child, which is bad. (Of course, if the millionaire donates all their money to effective causes and then raises a child in relative poverty, that's acceptable again.)
I think where the proponents of an objective theory of the good go wrong is the idea that you keep score on the same objective scoreboard regardless of whether it concerns existing people or potential people. But those are not commensurable perspectives. This whole idea of an "objective axiology/theory of the good" is dubious to me. Trying to squeeze these perspectives under one umbrella also has pretty counterintuitive implications. As I wrote elsewhere:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn’t. By contrast, when two people disagree on population ethics, “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree that people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals;” giving up on “there’s an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for, what it’s trying to accomplish.
Here's a framework for doing population ethics without an objective axiology. In this framework, person-affecting views seem quite intuitive because we can motivate them as follows:
When I said earlier that some people form non-hedonistic life goals, I didn't mean that they commit to the claim that there are things that everyone else should value. I meant that there are non-hedonistic things that the person in question values personally/subjectively.
You might say that subjective (dis)value is trumped by objective (dis)value -- then we'd get into the discussion of whether objective (dis)value is a meaningful concept. I argue against that in my above-linked post on hedonist axiology. Here's a shorter attempt at making some of the key points from that post:
Earlier, I agreed with you that we can, in a sense, view "suffering is bad" as a moral fact. Still, I'd maintain that this way of speaking makes sense only as a shorthand pointing towards the universality and uncontroversialness of "suffering is bad," rather than pointing to some kind of objectivity-that-through-its-nature-trumps-everything-else that suffering is supposed to have (again, I don't believe in that sort of objectivity). By definition, when there's suffering, there's a felt sense (by the sufferer) of wanting the experience to end or change, so there's dissatisfaction and a will towards change. The definition of suffering means it's a motivational force. But whether it is the only impetus/motivational force that matters to someone, or whether there are other pulls and pushes that they deem equally worthy (or even more worthy, in many cases), depends on the person. So, that's where your question about the non-hedonistic life goals comes in.
But why do they say so? Because they have a feeling that something or other has value?
People choosing life goals is a personal thing, more existentialism than morality. I wouldn't even use the word "value" here. People adopt life goals that motivate them to get up in the morning and go beyond the path of least resistance (avoiding short-term suffering). If I had to sum it up in one word, I'd say it's about meaning rather than value. See my post on life goals, which also discusses my theory of why/how people adopt them.
If you feel that we're talking past each other, it's likely because we're thinking in different conceptual frameworks.
Let's take a step back. I see morality as having two separate parts:
Separately, there are non-moral life goals (and it's possible for people to have no life goals, if there's nothing that makes them go beyond the path of least resistance). Personally, I have a non-moral life goal (being a good husband to my wife) and a moral one (reducing suffering subject to low-effort cooperation with other people's life goals).
That's pretty much it. As I say in my post on life goals, I subscribe to the Wittgensteinian view of philosophy (summarized in the Stanford Encyclopedia of Philosophy):
[...] that philosophers do not—or should not—supply a theory, neither do they provide explanations. “Philosophy just puts everything before us, and neither explains nor deduces anything. Since everything lies open to view there is nothing to explain (PI 126).”
Per this perspective, I see the aim of moral philosophy as accurately and usefully describing our option space – the different questions worth asking and how we can reason about them.
I feel like my framework lays out the option space and lets us reason about (the different parts of) morality in a satisfying way, so that we don't also need the elusive concept of "objective value". I wouldn't understand how that concept works, and I don't see where it would fit in. On the contrary, I think that thinking in terms of that concept costs us clarity.
Some people might claim that they can't imagine doing without it or would consider everything meaningless if they had to do without it (see "Why realists and anti-realists disagree"). I argued against that here, here and here. (In those posts, I directly discuss the concept of "irreducible normativity" instead of "objective value," but the two are very closely linked, such that objections against one mostly also apply against the other.)
Depends what you mean by "moral realism."
I consider myself a moral anti-realist, but I would flag that my anti-realism is not the same as saying "anything goes." Maybe the best way to describe my anti-realism to a person who thinks about morality in a realist way is something like this:
"Okay, if you want to talk that way, we can say there is a moral reality, in a sense. But it's not a very far-reaching one, at least as far as the widely-compelling features of the reality are concerned. Aside from a small number of uncontroversial moral statements like 'all else equal, more suffering is worse than less suffering,' much of morality is under-defined. That means that several positions on morality are equally defensible. That's why I personally call it anti-realism: because there's not one single correct answer."
See section 2 of my post here for more thoughts on that way of defining moral realism. And here's Luke Muehlhauser saying a similar thing.
I agree that hedonically "neutral" experiences often seem perfectly fine.
I suspect that there's a sleight of hand going on where moral realist proponents of hedonist axiology try to imply that "pleasure has intrinsic value" is the same claim as "pleasure is good." But the only sense in which "pleasure is good" is obviously uncontroversial is merely the sense of "pleasure is unobjectionable." Admittedly, pleasure is also often something we desire, or something we come to desire if we keep experiencing it -- but this clearly isn't always the case for all people, as any personal hedonist would notice if they stopped falling into the typical mind fallacy and took seriously that many other people sincerely and philosophically-unconfusedly adopt non-hedonistic life goals.
See also this short form or this longer post.
With Chollet acknowledging that o1/o3 (and ARC 1 getting beaten) was a significant breakthrough, how much of this talk is now outdated vs. still relevant?