
MalcolmOcean

26 karma · Joined

Comments (12)

There are methods of sleep training that involve little to no crying. I don't know much about them, but I'm 3 weeks into parenthood and have been binge-reading babysleepsite.com, and I loved the vibe of the Fading Method, where you gradually remove scaffolds and retrain sleep associations without ever making a change so overwhelming that the baby cries a bunch.

Update! Since I posted the above comment, I've actually written up my story of finding more resolution around this! The post is called "confronting & forgiving the people who instilled fear in my heart".

Ah, I realized I actually wanted to quote this paragraph (though the one I quoted above is also spot-on):

 It made me angry. I felt like I’d drunk the kool-aid of some pervasive cult, one that had twisted a beautiful human desire into an internal coercion, like one for a child you’re trying to get to do chores while you’re away from home.

I felt similarly angry when I realized that my well-meaning friends had installed a shard of panic in my body that made "I'm safe" feel like it would always be false until we had a positive singularity. I had to reclaim that, in several phases. And on reflection I had to digest some sense of social obligation there, like fear of people judging me, whether EAs or other obligation-driven activists. And maybe they do or will! But I'm not compromising on catching my breath.

Appreciating this. It's helping me see that part of why I didn't fall deeper into EA is that I already had a worldview that treated obligations as confused in much the way you describe... and so I saw EA as sort of "to the extent that you're going to care about the world, do so effectively", and also "have you noticed these particular low-hanging fruit?", and also just "here's a bunch of people who care about doing good things and are interesting". These obligations are indeed a kind of confusion; I love how you put it here:

The thing underlying my moral “obligation” came from me, my own mind. This underlying thing was actually a type of desire. It turned out that I wanted to help suffering people. I wanted to be in service of a beautiful world. Hm.

I did get infected with a bit of panic about x-risk stuff though, and this caused me to flail around a bunch and try to force reality to move faster than it could. I think Val describes the structure of my panic quite aptly in "Here's the exit". It wasn't a sense of obligation, but it was a sense of "there is a danger; feeling safe is a lie", and this was keeping me from feeling safe in each moment even in the ways in which I WAS safe in those moments (even if a nuke were to drop moments later, or I were to have an aneurysm or whatever). It was an IS, not an OUGHT, but it nonetheless generated an urgent sense of "the world's on fire and it's up to me to fix that". But no degree of shortness in AI timelines benefits from adrenaline: even if you needed to pull an all-nighter to stop something happening tomorrow, calm steady focus will beat neurotic energy.

It seems to me that the obligation structure and the panic structure form two pieces of this totalizing memeplex that causes people to have trouble creatively finding good win-wins between all the things that they want. Both of them have an avoidant quality, and awayness motivation is WAY worse at steering than towardsness motivation.

Are there other elements? That seems worth mapping out!

There's also the EA Workspace, a virtual pomodoro coworking room on Complice. It hasn't been that active lately, but maybe this new influx of people will reinvigorate it.

(I'm the creator of Complice, and also an effective altruist! I found this EA Forum post after seeing a bunch of new people sign up for Complice citing it as the source.)

Additions:

  • space travel could include more details, like lowering launch costs, and stuff like what Deep Space Industries is doing with asteroid mining (in some ways making money from mining asteroids is kind of an instrumental goal for them, with the terminal goal being to get humans living in space full-time as opposed to just being on the ISS briefly)
  • preventing large-scale violence could include some component about shifting cultural zeitgeists to be more open and collaborative. This is hella hard, but would be very valuable to the extent that it can be done
  • I would add something like "collecting warning signs" under disaster prediction. For instance, what AI Impacts is doing with trying to come up with a bunch of concrete tasks that AIs currently can't beat humans at, which we could place on a timeline and use to assess the rate of AI progress. There might be a better name than "collecting warning signs" though.

Props for doing this. I was recently reflecting that it would be great to have a bunch of the LW Sequences or other works describing AI value-alignment problems translated into Chinese. If anyone who knows Chinese sees this and it seems like their kind of thing, I'd say go for it!

Hmm... I'll gesture back at the "Effective Giving vs Effective Altruism" thing, and say that while EAs qua "people who identify as part of the EA movement, comment on the EA Forum, and hang out with other EAs" might mostly be under 35, we might be able to find lots of candidate Effective Givers in a totally different demographic.

I like the ideas here. I see a lot of potential value in having a core group of EAs who are focused on the movement itself and on cause prioritization, crucial considerations, etc... and then also trying to shift the mindset of the wider population of people-who-donate-to-things so that they tend to look at GiveWell's recommendations and so on... without trying to get those people to join the movement as a movement or whatever.

I will second the sentiment that this post seems super overmentiony of Intentional Insights. For a non-profit, it feels awfully self-interested. I'm not sure what I'd recommend instead exactly, but maybe if you're following the "do things + tell people" approach, shift your focus a bit more towards doing things.
