Arepo


Thanks Henri, that's useful context. Obviously we had to make pretty quick decisions on low info, and I guess the same was true for many other participants. If anything like this happens in future, it might be worth including a note on the broad counterfactuality of the decision in (or at least linked from) the proposal.

Yeah, I was somewhat lazily referring to planets and similar as 'units'. I wrote a lot more about this here.

I don't think precariousness would be that much of an issue by the time we have the technology to travel between stars. Humans can be bioformed, made digital, replaced by AGI shards, or just master their environments enough to do brute force terraforming. 

Even if you do think they're more precarious, over a long enough expansion period the difference is going to be eclipsed by the difference in colony-count.

This is a cool piece of work! I have one criticism, which is much the same as my criticism of Thorstad's argument:

However, endorsing this view likely requires fairly speculative claims about how existing risks will nearly disappear after the time of perils has ended.

I think *not* believing this requires fairly speculative claims, if the 'end of the time of perils' we envisage is just human descendants spreading out across planets and then stars. Keeping current nonspeculative risks (e.g. nukes, pandemics, natural disasters) approximately constant per unit volume, the risk to all of human descendants would rapidly approach 0 as the volume we inhabited increased.
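To make the shrinking-risk point concrete, here is a toy model of my own (not from the original comment): if each settled 'unit' independently faces the same per-period probability p of a locally fatal catastrophe, the chance of losing all n units at once is p^n, which falls off exponentially as settlement spreads. The function name and the independence assumption are both hypothetical simplifications.

```python
def p_all_destroyed(p_per_unit: float, n_units: int) -> float:
    """Probability that one period's catastrophes wipe out every settled
    unit, assuming each unit's fate is independent of the others."""
    return p_per_unit ** n_units

# Even a high per-unit risk becomes negligible at the civilisation level
# once there are many independently settled units.
for n in (1, 2, 5, 10):
    print(n, p_all_destroyed(0.1, n))
```

Correlated risks (misaligned AGI, vacuum decay) break the independence assumption, which is exactly why the next paragraph singles them out.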

So for it to stay anywhere near constant, you need to posit some risk that is as capable of killing an interstellar civilisation as a single-planet one. This could be misaligned AGI, but AGI development isn't constant - if there's something that stops us from creating it in the next 1000 years, that something might be evidence that we'll never create it. And if we have created it by then, and it hasn't killed us, then it seems likely that it never will.

So you need something else, like the possibility of triggering false vacuum decay, to imagine a 'baseline risk' scenario.

That might be part of the effect, but I would think it would apply more strongly to EA community building than to AI (which has been around for several decades with vastly more money flowing into it) - and the community projects were maybe slightly better overall? At least not substantially worse.

I don't really buy that concrete steps are hard to come up with for good AI or even general longtermism projects - one could, e.g., aim to show or disprove some proposition in a research program, aim to reach some number of views, aim to produce x media every y days (which IIRC one project did), write x-thousand words, interview x industry experts, use some tool for some effect, or any of countless other ways of just breaking down what your physical interactions with the world will be between now and your envisioned success.

Fwiw it felt like a more concrete difference than that. My overall sense is that the animal welfare projects tended to be backed by multiple people with years of experience doing something relevant, have a concrete near term goal or set of milestones, and a set of well-described steps for moving forwards, while the longtermist/AI stuff tended to lack some or all of that.

I agree with all this. I was just commenting on the issue of debris specifically.

Right, sorry. Yeah, I have no view on whether it's a consideration at all, just that it seems unlikely to be a primary one from my rudimentary understanding of the issue.

It seems like it might not be, for the reasons I suggested or others?

Sanjay's link about Kessler Syndrome above describes the problem. If you mean about the ISS specifically, I think it's just an instance of the general concern.

I'm deeply concerned about space debris, but I don't think it alone could justify this project. A 'controlled' descent sounds like it's about targeting a specific landing spot - an 'uncontrolled' descent could still lower the ISS fast enough to minimise its chance of hitting orbiting debris (it probably lowers it faster!).

Also, the ISS is already well within Earth's atmosphere, and the lower it gets, the shorter the life of any debris from a collision would be, due to atmospheric resistance. It would presumably be relatively easy to keep it clear of anything near the start of its descent, when you can choose when to begin the process, with serious risk only arising once it started to lose control in the lower, thicker atmosphere.
