
c.trout

96 karma · Joined · www.ctrout.art/

Bio

Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.

Unvarnished critical (but constructive) feedback is welcome.

 

[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]

Thinks longtermism rests on a false premise – some sort of total impartiality

Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet "luddite" so long as this is understood to describe someone who:

  • suspects that on net, technological progress yields diminishing returns in human flourishing.
  • OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing). Fighting to uphold higher working standards is to be on the front lines fighting against Moloch (see e.g. Fleming's vanishing economy dilemma and how decreased working hours offer a simple solution).
  • OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread permanent technological unemployment, and that might not be a good thing.
  • OR, considering the common-sensey thought that societies have a maximum rate of adaptation, suspects excessive rates of technological change can lead to harms, independent of how the technology is used. (This thought is more speculative/less researched – would love to hear evidence for or against.)

Comments (19)

I agree that's a reason to believe people would be in favor of such a radical change (and Shulman makes the same point). I don't think it's nearly as strong a reason as you and Shulman seem to think it is, because of the broader changes that would come with this dramatic increase in income. We're talking about a dramatic restructuring of the economic and social order. We're probably talking about, among other things, the end of work and, with that, probably the end of earning your place in your community. We're talking about frictionless, effectively free substitutes for everything we might have received from the informal economy, the economy of gifts and reciprocity. What does that do to friendship and family? I don't want to know.

It appears to me there are plenty of examples of people sacrificing large potential increases in their income in order to preserve the social order they are accustomed to. (I would imagine conservatives in the Rust Belt not moving to a coastal city with clearly better income prospects being a good example, but I admit I haven't studied the issue in depth.)

Basically, I think this focus on income is myopic.

That was the intention – I'm not sure how to remove the community tag...

Updated research on the Easterlin Paradox here. Free working draft here. Nice audio/visual overview from one of the authors here. Good discussion on the EA forum here.

Thank you very much for your perspective! I recently wrote about something closely related to this "emotions problem" but hadn't considered how the EA community offered a home for neurodivergent folks. I have now added a disclaimer making sure we 'normies' remember to keep you in mind!

No worries about the strong response – I misjudged how my words would be interpreted. I'm glad we sorted that out.

Regarding overthinking ethical stuff and SBF: 
Unfortunately I fear you've missed my point. First of all, I wasn't really talking about any fraud/negligence that he may have committed. As I said in the 2nd paragraph:

Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it's saying "all the right shibboleths" that he spoke of as mere means to an end, not the doing of sketchy things. 

My subject was his attitude/comments towards ethics. Second, my diagnosis was not that:

SBF's problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations.

My point was that getting too comfortable approaching ethics like a careful calculation is what can be dangerous in the first place – no matter how accurate the calculation is. It's not about missing some decimal points. Please reread this section if you're interested. I updated the end of it with a reference to a clear falsifiable claim.

Fair enough!

But also: if the EA community will only correct the flaws in itself that it can measure, then... good luck. Seems short-sighted to me.

I may not have the data to back up my hypothesis, but it's also not as if I pulled this out of thin air. And I'm not the first to find this hypothesis plausible.

I claim that there is a healthy amount of moral calculation one should do, but doing too much of it has harmful side effects. I claim, for these reasons, that Consequentialism (and the culture surrounding it) tends to result in abuse of moral calculation more so than VE does. I don't expect abuse to arise in the majority of people who engage with/follow Consequentialism or something – just more than among those who engage with/follow VE. I also claim, for reasons at the end of this section, that abuse will be more prevalent among those who engage with rationalism than those who don't.

If I'm right about this flaw in the community culture around here, and this flaw in any way contributed to SBF talking the way he did, shouldn't the community consider taking some steps to curb that problematic tendency?

It seems like your model is that you assume most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

Moral calculation (and faking it 'til you make it) can be helpful in becoming more virtuous, but only to a limited extent – you can push it too far. And anyway, it's not the only way to become a better person. I think more helpful is what I mentioned at the end of my post:

Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness...

If you want to see how the psych literature intersects with a related topic (romantic relationships instead of ethics in general), see Eva Illouz's Why Love Hurts: A Sociological Explanation (2012), Chapter 3. Search for the heading "The New Architecture of Romantic Choice or the Disorganization of the Will" (p. 90 in my edition) if you want to skip right to it. You might be able to read the entire section through the Google Books preview. I recommend the book, though, if you're interested.

Yikes! Thank you for letting me know! Clearly a very poor choice of words: that was not at all my intent!

To be clear, I agree with EAs on many, many issues. I just fear they suffer from "overthinking ethical stuff too often," if you will.
