I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
This seems to me like an attempt to run away from the premise of the thought experiment. I'm seeing lots of "maybes" and "mights" here, but we can just explain them away with more stipulations: you've only seen the outside of their ship, you're both wearing spacesuits that you can't see into, you've done studies and found that neuron count and moral reasoning skills are mostly uncorrelated, and that spaceflight can be done with more or fewer neurons, etc.
None of these avert the main problem: the reasoning really is symmetrical, so both perspectives should be valid. The EV of saving the alien is 2N, where N is the human's number of neurons, and the EV of saving the human, from the alien's perspective, is 2P, where P is the alien's number of neurons. There is no way to declare one perspective the winner over the other without knowing both N and P. Remember that in the original two envelopes problem, you knew both the units and the numerical value in your own envelope: this was not enough to avert the paradox.
See, the thing that's confusing me here is that there are many solutions to the two envelope problem, but none of them say "switching actually is good". They are all about explaining why the EV reasoning is wrong and switching is actually bad. So in any EV problem that can be reduced to the two envelope problem, you shouldn't switch. I don't think this is confined to alien-vs-human cases either: perhaps any situation where you are unsure about a conversion ratio might run into two-envelope-style problems, but I'll have to think about it.
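To make this concrete, here's a quick simulation sketch (my own illustration; the specific amounts are arbitrary) showing that the naive "the other envelope is worth 0.5*(2a) + 0.5*(a/2) = 1.25a, so switch" reasoning doesn't survive contact with an actual two-envelope setup:

```python
import random

# One envelope holds x, the other 2x. Naive EV reasoning says always switch;
# simulation shows switching and staying pay out the same on average.
def trial(switch, rng):
    x = rng.uniform(1, 100)      # the smaller amount
    envelopes = [x, 2 * x]
    rng.shuffle(envelopes)       # you pick one at random
    chosen, other = envelopes
    return other if switch else chosen

rng = random.Random(0)
n = 100_000
stay = sum(trial(False, rng) for _ in range(n)) / n
swap = sum(trial(True, rng) for _ in range(n)) / n
# stay and swap agree to within sampling noise (both ~75.75 here)
```

Both strategies have the same average payout; the apparent 1.25x gain from switching is an artifact of treating "a" as a fixed amount when it's actually a different amount in the two branches.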
I think switching has to be wrong, for symmetry based reasons.
Let's imagine you and a friend fly out on a spaceship and run into an alien spaceship from another civilisation that seems roughly as advanced as yours. You and your buddy have just met the alien and their buddy, but haven't learnt each other's languages, when an accident occurs: your buddy and their buddy go flying off in different directions, and you can collectively only save one of them. The human is slightly closer, and a rescue attempt is slightly more likely to succeed as a result: based solely on hedonic utilitarianism, do you save the alien instead?
We'll make it even easier and say that our moral worth is strictly proportional to number of neurons in the brain, which is an actual, physical quantity.
I can imagine being an EA-style reasoner and reasoning as follows: obviously I should anchor on the alien and humans having equal neuron counts, at level N. But obviously there's a lot of uncertainty here. Let's approximate a lognormal-style distribution and say there's a 50% chance the alien is also at level N, a 25% chance they have N/10 neurons, and a 25% chance they have 10N neurons. So the expected number of neurons in the alien is 0.25*(N/10) + 0.5*N + 0.25*(10N) = 3.025N. Therefore the alien is worth 3 times as much as a human in expectation, so we should obviously save it over the human.
Meanwhile, by pure happenstance, the alien is also a hedonic EA-style reasoner with the same assumptions, with neuron count P. They do the same calculation, reason that the human is worth 3.025P, and conclude that the human should be saved.
Clearly, this reasoning is wrong. The cases of the alien and the human are entirely symmetric: both should realise this and rate each other equally, and just save whoever's closer.
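To spell out the symmetry, the calculation above can be written identically from both sides (a sketch, with each party's neuron count normalised to 1 in their own units):

```python
# Prior over the other party's neuron count relative to your own:
# 25% chance of a tenth as many, 50% chance the same, 25% chance of ten times.
def expected_other_neurons(own):
    return 0.25 * (own / 10) + 0.5 * own + 0.25 * (10 * own)

N = 1.0  # human neuron count, in human units
P = 1.0  # alien neuron count, in alien units
human_view_of_alien = expected_other_neurons(N)  # 3.025 * N
alien_view_of_human = expected_other_neurons(P)  # 3.025 * P
```

Each side computes that the other is worth ~3x more, which obviously can't both be true: the inflation comes from doing the EV in your own units and nothing else.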
If your reasoning gives the wrong answer when you scale it up to aliens, it's probably also giving the wrong answer for chickens and elephants.
If we make reasoning about chickens that is correct, it should also be able to scale up to aliens without causing problems. If your framework doesn't work for aliens, that's an indication that something is wrong with it.
Chickens don't hold a human-favouring position because they are not hedonic utilitarians, and aren't intelligent enough to grasp the concept. But your framework explicitly does not weight the worth of beings by their intelligence, only their capacity to feel pain.
I think it's simply wrong to switch in the case of the human vs alien tradeoff, because of the inherent symmetry of the situation. And if it's wrong in that case, what is it about the elephant case that has changed?
So in the two elephants problem, by pinning to humans, are you affirming that switching from 1 human-EV to 1 elephant-EV, when you are unsure about the HEV-to-EEV conversion ratio, actually is the correct thing to do?
Like, option 1 is 0.25 HEV better than option 2, but option 2 is 0.25 EEV better than option 1, but you should pick option 1?
What if instead of an elephant, we were talking about a sentient alien? Wouldn't they respond to this with an objection like "hey, why are you picking HEV as the basis, you human-centric chauvinist?"
Maybe it's worth pointing out that Bostrom, Sandberg, and Yudkowsky were all in the same extropian listserv together (the one from the infamous racist email), and have been collaborating with each other for decades. So maybe it's not precisely a geographic distinction, but there is a very tiny cultural one.
A couple of astronauts hanging out in a dome on Mars is not the same thing as an interplanetary civilisation. I expect Mars landings to follow the same trajectory as the Moon landings: put a few people there for the sake of showing off, then not bother about it for half a century, then half-assedly discuss putting people there long-term, again for the sake of showing off.
I recommend the book A City on Mars for an explanation of the massive social and economic barriers to space colonisation.
I hope you don't take this the wrong way, but this press release is badly written, and it will hurt your cause.
I know you say you're talking about more than extinction risks, but when you put: "The probability of AGI causing human extinction is greater than 99%" in bold and red highlight, that's all anyone will see. And then they can go on to check what experts think, and notice that only a fringe minority, even among those concerned with AI risk, believe that figure.
By declaring your own opinion as the truth, over that of experts, you come off like an easily dismissible crank. One of the advantages of the climate protest movements is that they have a wealth of scientific work to point to for credibility. I'm glad you are pointing out current day harms later on in the article, but by then it's too late and everyone will have written you off.
In general, there are too many exclamation points! It comes off as weird and offputting! And RANDOMLY BREAKING INTO ALLCAPS makes you look like you're arguing on an internet forum. The paragraphs are also far too long, and full of confusing phrases that a layperson won't understand.
I suggest you find some people who have absolutely zero exposure to AI safety or EA at all, and run these and future documents by them for ideas on improvements.
No worries, and I have finally managed to replicate Laura's results and find the true source of disagreement. The key missing factor was the laying period: I put in ~1 year for both caged and cage-free, as is assumed on the site that provided the hours-of-pain figures. This 1-year laying period assumption seems to match other sources. Whereas in the causal model, the caged laying period is given as 1.62 years, and the cage-free laying period as 1.19 years. The causal model appears to have tried to calculate this, but it makes more sense to me to use the figure from the site that measured the pain: they made the measurements, they are unlikely to be 150% off, and we should be comparing like with like here.
When I took this into account, I was able to replicate Laura's results, which I have summarised in this Google Doc, which also contains my own estimate and another analysis for broilers, as well as the sources for all the figures.
My DALY weights were using geometric means (I wasn't sure how to deal with the lognormals), but switching to regular averages like you suggest makes things match better.
Under Laura's laying period, my final estimate is 3742 chicken-DALYs per thousand dollars, matching well with the causal model's number of 3.5k (given I'm not using distributions). Discounting this by the 0.332 figure from the moral weights project (this includes sentience estimates, right?) gives a final figure of 1242 DALYs per thousand dollars (or 1162 if we use the 3.5k figure directly).
Under my laying period figures, the final estimate is 6352 chicken-DALYs per thousand dollars, which discounted by the RP moral weights comes to 2108 DALYs per thousand dollars. A similar analysis for broilers gives 1500 chicken-DALYs per thousand dollars and 506 DALYs per thousand dollars.
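For transparency, the discounting step for the laying-hen numbers is just a multiplication; a minimal sketch (the 0.332 moral weight and the chicken-DALY totals are the figures quoted above; variable names are mine):

```python
RP_MORAL_WEIGHT = 0.332  # Rethink Priorities chicken moral weight (incl. sentience)

def to_human_dalys(chicken_dalys_per_1000usd):
    # Discount chicken-DALYs per $1000 by the RP moral weight.
    return chicken_dalys_per_1000usd * RP_MORAL_WEIGHT

laura = to_human_dalys(3742)   # -> 1242.3, the 1242 figure above
causal = to_human_dalys(3500)  # -> 1162.0, using the causal model's 3.5k directly
mine = to_human_dalys(6352)    # -> 2108.9, the 2108 figure above
```

(The broiler numbers above imply a slightly different discount, so I've left them out of this sketch.)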
The default values from the cross-cause website should match either Laura's or my estimates.
I agree that when it comes to decision making, Leif's objection doesn't work very well.
However, when it comes to communication, I think there is a point here (although I'm not sure it was the one Leif was making). If GiveWell communicates about the donation and how many lives you saved, and doesn't mention the aid workers and mothers who put up the nets, aren't they selling those people short, and dismissing their importance?
In Parfit's thought experiment, obviously you should go on the four-person mission and help save the hundred lives. But if you then went on a book tour touting what a hero you are for saving the hundred lives, and didn't mention the other three people, you'd be being a jerk.
I could imagine an aid worker in Uganda being kind of annoyed that they spent weeks working full-time in sweltering heat handing out malaria nets for low pay, only to watch some tech guy in America take all the credit for the lifesaving work. It could hurt EA's ability to connect with the third world.
If we're listing factors in EA leading to mental health problems, I feel it's worth pointing out that a portion of EA thinks there's a high chance of an imminent AI apocalypse that will kill everybody.
I myself don't believe this at all, but to the people that do believe this, there's no way it doesn't affect your mental health.