Yup, I'd say that from the perspective of someone who wants a good AI safety (/EA/X-risk) student community, Harvard is the best place to be right now (I say this as an organizer, so grain of salt). Not many professional researchers in the area though which is sad :(
As for the actual college side of Harvard, here's my experience (as a sophomore planning to do alignment):
If community building potential is part of your decision process, then I would consider not going to Harvard, as there are already a bunch of people there doing great things. MIT/Stanford/other top unis in general seem much more neglected in that regard, so if you could see yourself doing community building I'd keep that in mind.
Check out this post. My views from then have slightly shifted (the numbers stay roughly the same), towards:
Building on the space theme, I like Earthrise, as it has very hopeful vibes, but also points to the famous picture that highlights the fragility and preciousness of earth-based life.
Thank you for writing this. I've been repeating this point to many people and now I can point them to this post.
For context, for college-aged people in the US, the two most likely causes of death in a given year are suicide and vehicle accidents, both at around 1 in 6000. Estimates of global nuclear war in a given year are comparable to both of these. Given an AGI timeline of 50% by 2045, it's quite hard to distribute that 50% over the next ~20 years and still assign much less than 1 in 6000 to the next 365 days (assuming P(death|AGI) is >0.1 in the years shortly after AGI). Meaning that even right now, in 2022, existential risks are high up on the list of the most probable causes of death for college-aged people.
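To make the arithmetic concrete, here's a rough back-of-the-envelope sketch. The 50%-by-2045 timeline, the 1-in-6000 baseline, and P(death|AGI) > 0.1 come from the paragraph above; the constant-annual-hazard assumption is mine, purely for illustration:

```python
# Rough sketch: annualized AGI-related risk implied by a 50%-by-2045 timeline,
# compared to the ~1-in-6000 annual baseline for suicide / vehicle accidents.
# Assumes, for illustration only, a constant annual hazard rate over the period.

p_agi_by_2045 = 0.50        # cumulative probability of AGI by 2045
years = 2045 - 2022         # ~23 years from the time of writing
baseline = 1 / 6000         # annual risk from suicide or vehicle accidents

# Constant hazard h such that 1 - (1 - h)**years = p_agi_by_2045
annual_hazard = 1 - (1 - p_agi_by_2045) ** (1 / years)

p_death_given_agi = 0.1     # the parenthetical assumption above
annual_risk = annual_hazard * p_death_given_agi

print(f"Implied annual AGI probability:  {annual_hazard:.2%}")            # ~3.0%
print(f"Implied annual risk of death:    {annual_risk:.3%}")              # ~0.30%
print(f"Ratio to the 1-in-6000 baseline: {annual_risk / baseline:.0f}x")  # ~18x
```

Even with the probability spread evenly rather than frontloaded, the implied annual risk comes out roughly an order of magnitude above the 1-in-6000 baseline, so the conclusion doesn't hinge on pessimistic near-term timelines.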
One project I've been thinking about is making (or having someone else make) a medical infographic that takes existential risks seriously, and ranks them accurately as some of the highest probability causes of death (per year) for college-aged people. I'm worried about this seeming too preachy/weird to people who don't buy the estimates though.
Strongly agree; fostering a culture of open-mindedness (love the example from Robi) and the expectation of updating from more experienced EAs seems good. In the updating case, I think making sure that everyone knows what "updating" means is a priority (it sounds pretty weird otherwise). Maybe we should talk about introductory Bayesian probability in fellowships and retreats.
[inspired by a conversation with Robi Rahman]
Imagine that it’s possible to skip certain periods of time in your life. All this means is you don’t experience them, but you come out of them having the same memories as if you did experience them.
Now imagine that, after you live whatever life you would have lived, there's a further guaranteed 5,000 years of very good life that you'll live, which is undoubtedly net positive. My claim is that any moments in your life you'd prefer to "skip" are moments in which your life is net negative.
I wonder how many moments you'd skip?
In the case of EV calculations where the future is part of the equation, I think using microdooms as a measure of impact is pretty practical and can resolve some of the problems inherent in dealing with enormous numbers, because many people have cruxes which are downstream of microdooms. Some think there'll be 10^40 people; some think there'll be 10^20. Usually, if two people disagree on how valuable the long-term future is, they don't have a common unit of measurement for what to do today. But if they both use microdooms, they can compare things 1:1 in terms of their effect on the future, without having to flesh out all of their post-AGI cruxes.
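As a toy illustration of that point (the intervention names and all the numbers below are made up; a microdoom here means a 10^-6 reduction in the probability of existential catastrophe):

```python
# Toy illustration: two people disagree wildly about the value of the long-term
# future, but can still compare interventions 1:1 in microdooms averted.
# All intervention names and numbers are hypothetical.

interventions = {
    "field-building program":      3.0,  # microdooms averted
    "technical research project":  1.2,  # microdooms averted
}

future_value_estimates = {
    "optimist":  1e40,  # expected future people
    "pessimist": 1e20,
}

for person, n_people in future_value_estimates.items():
    for name, microdooms in interventions.items():
        expected_lives_saved = microdooms * 1e-6 * n_people
        print(f"{person:9s} | {name:27s} | {expected_lives_saved:.2e} expected lives")

# The absolute EVs differ by 20 orders of magnitude, but the ranking and the
# 2.5x ratio between the interventions are identical for both people, because
# both are just scaled versions of the same microdoom numbers.
```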