
Habryka [Deactivated]

CEO @ Lightcone Infrastructure
22114 karma · Joined · Working (6-15 years)

Bio

Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I've historically been one of the most active and highest-karma commenters on the forum. I no longer post or comment here, and recommend the same to others.

My best guess is that EA at large is causing large harm to the world, and there is no leadership or accountability in place to fix it. Many of the principles are important, but I don't think this specific community embodies those virtues very much, and it often actively sabotages them. You can find me on LessWrong or Twitter.

Posts: 39
Comments: 1441
Topic contributions: 1

To be clear, many of my links were to archive.is, archive.org, and the like, and they still broke. I do agree I could have taken full offline copies, and the basic problem here seems surmountable (though it requires at least a small amount of web-development expertise and understanding).

(I think this level of brazenness is an exception; the broader pattern has, I think, occurred many dozens of times. My best guess, though I know of no specific example, is that, probably as a result of the FTX collapse, many EA organizations changed their websites and requested that references be deleted from archives, in order to reduce their association with FTX.)

Yes, many of my links over the years broke, and I haven't been able to get any working copy.
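
For what it's worth, the "full offline copies" approach mentioned above is a small scripting exercise rather than a real engineering project. Below is a minimal sketch in Python, not anything I've actually used: it assumes the third-party requests library is installed, the URL in the list is a placeholder for whatever links you cite, and the Wayback Machine's "save page now" endpoint (https://web.archive.org/save/<url>) accepts a plain GET to trigger a snapshot.

```python
# Sketch: keep local offline copies of cited links and request fresh
# Wayback Machine snapshots. Assumes `requests` is installed; the URL
# below is a placeholder, and the web.archive.org save endpoint is
# assumed to accept an anonymous GET request.

import hashlib
import pathlib

import requests

CITED_URLS = [
    "https://example.org/some-cited-page",  # placeholder; substitute real citations
]

ARCHIVE_DIR = pathlib.Path("link_archive")
ARCHIVE_DIR.mkdir(exist_ok=True)

for url in CITED_URLS:
    # 1. Keep a local offline copy, named by a hash of the URL.
    response = requests.get(url, timeout=30)
    filename = hashlib.sha256(url.encode()).hexdigest()[:16] + ".html"
    (ARCHIVE_DIR / filename).write_bytes(response.content)

    # 2. Also ask the Wayback Machine to take its own snapshot.
    requests.get("https://web.archive.org/save/" + url, timeout=60)
```

(This only grabs the raw HTML; images and stylesheets would need a crawler or a tool like wget's mirror mode, which is where the small amount of web-development understanding comes in.)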

> Risk 1: Charities could alter, conceal, fabricate and/or destroy evidence to cover their tracks.

> I do not recall this having happened with organisations aligned with effective altruism.

(FWIW, it happened with Leverage Research at multiple points in time, with active efforts to remove various pieces of evidence from all available web archives. My best guess is it also happened with early CEA while I worked there, because many Leverage members worked at CEA at the time and considered this a relatively common practice. My best guess is you can find many other instances.)

> Now, consider this in the context of AI. Would the extinction of humanity by AIs be much worse than the natural generational cycle of human replacement?

I think the answer to this is "yes", because your shared genetics and culture create much more robust pointers to your values than we are likely to get with AI. 

Additionally, even if that weren't true, humans alive at present have obligations inherited from the past and, relatedly, obligations to the future. We have contracts, inheritance principles, and various other things that extend our moral circle of concern beyond just the current generation. It is not sufficient to coordinate with just the present humans; we are engaging in at least some moral trade with future generations, and trading away their influence to AI systems is also not something we have the right to do.

(Importantly, I think we have many fewer such obligations to very distant generations, since I don't think we are generally borrowing from or coordinating with humans living in the far future very much.)

> From a more impartial standpoint, the mere fact that AI might not care about the exact same things humans do doesn’t necessarily entail a decrease in total impartial moral value—unless we’ve already decided in advance that human values are inherently more important.

Look, this sentence just really doesn't make any sense to me. From the perspective of humanity, which is composed of many humans, of course the fact that AI does not care about the same things as humans creates a strong presumption that a world optimized for the AI's values will be worse than a world optimized for human values. Yes, current humans are also limited in the degree to which we can successfully delegate the fulfillment of our values to future generations, but we also share, on average, a huge fraction of our values with future generations. That is a struggle every generation faces, and you are just advocating for... total defeat being fine for some reason? Yes, it would be terrible if the next generation of humans suddenly did not care about almost anything I cared about, but that is very unlikely to happen, whereas it is quite likely to happen with AI systems.

Yeah, this. 

From my perspective, "caring about anything but human values" doesn't make any sense. Of course, even more specifically, "caring about anything but my own values" also doesn't make sense, but inasmuch as you are talking to humans, and making arguments about what other humans should do, you have to ground that in their values, and so it makes sense to talk about "human values".

The AIs will not share the pointer to these values in the way that every individual does to their own values, and so we should assume a priori that the AI will do worse things after we transfer all the power from the humans to the AIs.

> In the absence of meaningful evidence about the nature of AI civilization, what justification is there for assuming that it will have less moral value than human civilization—other than a speciesist bias?

You know these arguments! You have heard them hundreds of times. Humans care about many things. Sometimes we collapse that into caring about experience for simplicity. 

AIs will probably not care about the same things; as such, the universe will be worse by our lights if controlled by AI civilizations. We don't know exactly what those things are, but the only pointer to our values that we have is ourselves, and AIs will not share those pointers.

Your opening line seems to be trying to mimic the tone of mocking someone obnoxiously. Then you follow up with an exaggerated telling of events. Then another exaggerated comparison.

Weird bug. But it only happens when someone votes and unvotes multiple times, and when you vote again the count resets. So this is unlikely to skew anything by much.
