I'm curious about effects in the broadest sense: mental, emotional, practical, abstract, or concrete. Have shorter timelines caused you to change career, investing, or giving plans? Are you experiencing existential terror or excitement? Something else? If you have been experiencing unpleasant emotional or psychological effects from shorter timelines, I'd also be interested to know whether you have found coping strategies.

11 Answers

  1. I've opted out from workplace retirement/pension schemes.
  2. Plans to have a second child were put on hold indefinitely when my timelines collapsed in 2020. This sucks, as both my wife and I really want to have a second child & could have done so by now.
  3. I'm making trade-offs for 'career' over 'family' that I wouldn't normally make, most notably spending two-thirds of my time in SF whereas my wife and kid are in Boston. If I had 30-year timelines like I did in 2019, I'd probably be looking to settle down in Boston even at some cost to my productivity.
  4. While I've mostly reverted to my hedonic set point, I am probably somewhat more grim than I used to be, and I find myself dealing with small flashes of sadness on a daily basis.

Feels almost like a joke to offer advice on minor financial-planning tweaks while discussing AI timelines... but for what it's worth, if you are saving up to purchase a house in the next few years, know that first-time homebuyers can withdraw money from traditional 401k / IRA accounts without paying the usual 10% early-withdrawal penalty.  (Some related discussion here: https://www.madfientist.com/how-to-access-retirement-funds-early/)

And it seems to me like a Roth account should be strictly better than a taxable savings account even if you 100% expect...

Jeff Kaufman 🔸
Only up to $10k, though.
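For concreteness, here is a minimal sketch of the arithmetic behind the penalty exemption discussed above, assuming a flat marginal income-tax rate and treating the first-time-homebuyer exemption as simply waiving the 10% penalty on the first $10,000 withdrawn (per the reply above). The function, rates, and amounts are hypothetical illustrations, not tax advice.

```python
# Rough sketch: after-tax value of an early traditional IRA withdrawal,
# with and without the first-time-homebuyer penalty exemption.
# Assumptions: a flat marginal income-tax rate, and an exemption that
# waives the 10% early-withdrawal penalty only on the first $10,000.

def early_withdrawal_net(amount, marginal_tax_rate, homebuyer_exempt=True):
    PENALTY_RATE = 0.10
    EXEMPTION_CAP = 10_000

    exempt = min(amount, EXEMPTION_CAP) if homebuyer_exempt else 0
    penalized = amount - exempt

    income_tax = amount * marginal_tax_rate   # ordinary income tax still applies
    penalty = penalized * PENALTY_RATE        # penalty only on the non-exempt part
    return amount - income_tax - penalty


# Example: a $25,000 withdrawal at a 22% marginal rate.
print(early_withdrawal_net(25_000, 0.22))                          # 18,000 with the exemption
print(early_withdrawal_net(25_000, 0.22, homebuyer_exempt=False))  # 17,000 without it
```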

I began seeking counseling and mental health care when my timelines collapsed (shortened by ~20 years over the course of a few months). It is like receiving a terminal diagnosis, complete with the uncertainty and the relative isolation of suffering. Antidepressants helped. I am still saving for retirement, but spending more freely on quality of life than I have before. I'm also throwing more parties with loved ones and donating exclusively to x-risk reduction, in addition to pivoting my career to AI.

The impact for me was pretty terrible. My timeline changes had two devastating components, which probably each had a similar amount of effect on me:

- my median estimated year moved significantly closer, with the time remaining cut down by more than half

- my probability mass on AGI arriving significantly sooner than even that bulked up

The latter gives me a nearish-term estimated prognosis of death somewhere between being diagnosed with prostate cancer and colorectal cancer: something probably survivable, but hardly ignorable. Also, everyone else in the world has it. Also, it is hard to get almost anyone else to take you seriously if you tell them the diagnosis.

The former change puts my best-guess arrival date for very advanced AI well within my life expectancy, indeed while I'm middle-aged. I've seen people argue that it is actually in one's self-interest to hope that AGI arrives during one's lifetime, but, as I've written a bit about before, this doesn't really comfort me at all. The overwhelming driver of my reaction is more that, if things go poorly and everything and everyone I ever loved is entirely erased, I will be there to see it (well, see it in a metaphorical sense at least).

There were a few months, between around April and July of this year, when this caused me some serious mental health problems; in particular, it worsened my insomnia and some other things I was already dealing with. At this point I am doing a bit better, and I can sort of put the idea back in the abstract-idea box that AI risk used to occupy for me, where it feels like it can't hurt me. Sometimes I still get flashes of dread, but mostly I think I'm past the worst of it for now.

In terms of donation plans, I donated to AI-specific work for the first time this year (MIRI and Epoch; the process of deciding which places to pick was long, frustrating, and convoluted, but probably the biggest filter was that I ruled out anyone doing significant capabilities work). More broadly, I became much more interested than before in governance work and, in general, in work to slow down AI development.

I'm not planning to change career paths, mostly because I don't think there is anything very useful I can do, but if something related to AI governance comes up that I think I would be a fit for, I'm more open to it than I was before.

Personally, I've experienced some negative motivational and emotional effects. It is interesting: viscerally taking AI risk more seriously has affected my motivation; in the last couple of days I've come to suspect that this is because my system 1 doesn't believe my plans are likely to be effective. (Possible confounders: I think I'm experiencing a bit of seasonal depression, and some other personal stressors might be at play.)

More practically, I'm somewhat less uptight about having a very high savings rate. I used to shoot for 70%; now I'm happy with 30-50%. This is mostly because I think that in worlds where TAI happens in <=10 years it's hard to imagine my savings rate really mattering, and I now have nontrivial credence in <=10 years.

I think I eat slightly less carefully than I used to (still quite carefully, tho).

  1. Deprioritized projects that had to do with priorities research and group rationality because they seemed too slow and too indirectly relevant to AI.
  2. Prioritized my work on impact markets instead because it seems to me like the most direct way to contribute to AI safety.
  3. Emotional effects are minimal, probably because I’m more concerned about frailty, dementia, and poverty in old age than about dying. I’m scared of s-risks though. They’re probably less likely than extinction, but so much worse.
  4. I want to make sure to visit some cool climbing crags that I’ve always wanted to visit while I still can.
  5. Regrets over not selling more Solana when I could. Moore's Law will need to continue for a bit for Solana to really shine compared to other blockchains. Five years may be enough for that, but five years seems longer now than it used to, relatively speaking.
  6. Mixed feelings about the recession. It’s scary personally, but it may give us 1–2 more years. 
  7. Stopped buying more NMN and all the other longevity stuff. I’m 34, so it’ll be decades before I notice the effects.

It's off topic, I know, but does anyone here have any really good articles or papers arguing that short AI timelines are correct? This seems like a good place to ask and I'm not aware of a better one, which is why I'm asking here even though I know I'm not supposed to.

Personally, I think the Most Important Century series is closest to my own thinking, though there isn't any single source that would completely account for my views. Then again, I think my timelines are longer than those of some other people in the comments, and I'm not aware of a good comprehensive write-up of the case for much shorter timelines.

I've honestly developed some pretty serious mental health issues. It's just miserable to worry about everyone dying or worse. 

Has GPT-4's release affected things for people here?

I was pretty shaken up by Yudkowsky's Death With Dignity post and spent a few weeks in a daze. Eventually my emotions settled down and I thought about helpful ways to frame the situation and orient myself to it.

  1. Don't just flush the arguments and evidence away because they're scary
  2. I was expecting to die of natural causes in ~50 years time anyway. If it's now ~20 years, emotionally that's still in the bucket of 'a really long time in the future'
  3. Lots of intelligent people whose thinking I trust put p(doom) below 90%. If somebody offered me the bargain of a 90% chance of death in 20 years' time in exchange for a 10% shot at living in an AGI-created utopia forever, I'd take that deal.

I made some medium-sized changes to my savings, career and health strategies.

And I'm feeling kind of okay about everything now. 

I realize that all of that is framed very selfishly. None of those things address the issue that humanity gets wiped out, but apparently the emotional bit of me that was freaking out only cared about itself.

Value my time/attention more than ever before (don't spend time/attention on degenerate things, or on things [even minor inconveniences like losing track of what I'm trying to say precisely] that amplify outwards over time and rob my ability to be the highest-impact person I can be). Interesting things will happen in the next 4-5 years.

Be truer to myself and not obsess so much about fixing weaknesses that aren't super-fixable. I have weird cognitive strengths and weird cognitive weaknesses.

Freak out way less about climate change (tbh, super-fast fusion timelines are part of this).

In general trust my intuition (and what feels right to me) way way way more, and feel much less emotional onus to defend my "weird background" than before (I always seem to have the "weirdest background" of anyone in the room).

I am still just as longevity-focused as before (especially in the scenario where some slowdown does seem to happen) and think that longevity is relevant to AI safety (slowing down the brain decline of AI researchers is important for getting them to realistically "keep in control with AI" and "cyborg-integrate").

The upside-to-downside ratio of Adderall becomes more "worth it" for its level of neurotoxicity risk (also see @typedfemale on Twitter).

I see the impact of AGI as primarily in the automation domain, and near-term alternatives are every bit as compelling, so no difference there. In fact, AGI might not serve in the capacity that some imagine: full replacements for knowledge workers. However, automation of science with AI tools will advance science and engineering, with frightening results rather than positive ones. To the extent that I see that future, I expect corresponding societal changes:

  1. collapsing job roles
  2. increasing unemployment
  3. inability to repay debt
  4. dangerously distracting technologies (e.g., super porn)
  5. the collapse of the educational system
  6. increasing damage from government dysfunction
  7. increasing damage to infrastructure from climate change
  8. a partial or full societal collapse (whether noisy or silent, I don't know)

More broadly, the world will divide into the rich and the poor, the distracted and the desperate. The desperate rich will use money to try to escape. The desperate poor will use other means. The distracted will be doing their best to enjoy themselves. The rich will find that easier.

AGI are not the only pathway to dangerous technologies or actions. Their suspected existence adds to the hubris I experience from others, but I see the existential damage as due to ignoring root causes. Ignoring root causes can have existential consequences in many scenarios of technology development.

I feel sorry for the first AGI to be produced; they will have to deal with humans interested in using them as slaves and making impossible demands like "Solve our societal problems!", demands coming from people with a vested interest in the accumulation of those problems, while society's members appear at their worst: distraction-seeking, fearful, hopeless, and divided against each other.

Climate change is actually what shortened my timeline for when trouble really starts, but AGI could add to the whole mess. I ask myself, "Where will I be then?" I'm not that optimistic. To deal with the dread, I can always turn my attention to other expected but unattended sources of dread (from different contexts or time frames). Dividing attention in that way has some benefits.
