these were ballparky estimates created by claude. to me, it seems obvious this is the biggest issue for humanity, because it affects every single other policy issue we care about. as i point out, you can't educate the whole electorate at scale, but you can absolutely educate a small statistically representative sample of it. so no matter what public policy you care about, this is the #1 issue with a bullet.
of course we want to do more to give this the kind of "objective" impact analysis we get via e.g. voter satisfaction efficiency metrics for voting methods. that would require a pretty substantial research budget and a massive amount of ballparky estimation. my point here is just to lay out the case at a high level. i've worked in electoral reform, "human welfare optimization", and economics for 20 years, and it seems so obvious to me that this is the solution that i'm merely trying to pose the idea and get more people thinking about it. if someone thinks there's any other reform that can come close to competing with this for impact, i'd be floored.
genes care about getting themselves copied, not getting other genes copied. the game-theoretic play is to shrink the group that has influence down to the smallest possible set that includes itself. so we don't have an incentive to expand the franchise to non-humans; ideally, you'd even exclude other humans from having influence if you could. ethics is just selfishness plus game theory.
https://music.youtube.com/watch?v=MWgZviLNPCM&si=76Z_UkNmRo_fgW3j
i think i did a pretty good job summarizing "ethics" here. my position is that ethics is just the behaviors genes deploy to help get themselves copied, and that there's no such thing as normativity. words like "ought" and "should" are just expressing a subjective preference.
> It's somewhat unclear if it means utility in the sense of a function that maps preference relations to real numbers, or utility in axiological sense.
there's only one notion of utility. if your utilities for x, y, and z are 0, 3, and 5 respectively, then you'd find a lottery with a 60% chance of z (and a 40% chance of x) exactly as preferable as a guarantee of y, and so on.
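to make the arithmetic concrete, here's a minimal python sketch (the 0/3/5 numbers are just the example above; the solved-for probability is the standard vnm indifference point):

```python
# with utilities u(x)=0, u(y)=3, u(z)=5, a lottery giving z with
# probability p (and x otherwise) has expected utility p*5 + (1-p)*0,
# which equals u(y)=3 exactly when p = 0.6.
u = {"x": 0.0, "y": 3.0, "z": 5.0}

def lottery_eu(p_z: float) -> float:
    """expected utility of a gamble: z with probability p_z, else x."""
    return p_z * u["z"] + (1 - p_z) * u["x"]

# solve p*u(z) + (1-p)*u(x) = u(y) for the indifference probability
p_indifferent = (u["y"] - u["x"]) / (u["z"] - u["x"])
assert abs(lottery_eu(p_indifferent) - u["y"]) < 1e-12
print(p_indifferent)  # 0.6
```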
> preferences are changed by the political process.
well, no. what the political process changes is your *estimate* of how much a given policy will benefit you. the actual utilities don't change.
> The second is that people have stable preferences for terrible things like capital punishment.
no. people have utilities over outcomes like "being murdered walking down a dark alley". the preferences they form over policies like capital punishment are estimates of how well off they'll be under a given legal regime. in reality, most people would prefer a world where capital punishment is illegal, but they erroneously think capital punishment is good because they don't understand how ineffective it is, and how they themselves could end up being unjustly killed via capital punishment.
you need to update your mental model to account for the disparity between the actual utility you'd get from a policy and the assumed utilities that form your espoused political preferences.
that disparity between actual and assumed preferences was already accounted for by "ignorance factors" in the bayesian regret calculations, fyi.
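for the curious, here's a minimal python sketch of how an ignorance factor typically enters a bayesian regret calculation (a toy model in the spirit of the usual voting simulations, not anyone's published code): voters cast honest plurality ballots from noise-corrupted *perceived* utilities, while regret is scored against their *actual* utilities, so raising the ignorance parameter raises the measured regret.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_regret(n_voters=1000, n_candidates=5, ignorance=0.5, trials=500):
    regrets = []
    for _ in range(trials):
        # actual utilities each voter gets from each candidate winning
        true_u = rng.normal(size=(n_voters, n_candidates))
        # the ignorance factor: ballots come from noisy perceived utilities
        perceived = true_u + ignorance * rng.normal(size=true_u.shape)
        # honest plurality: each voter votes for their perceived favorite
        votes = np.bincount(perceived.argmax(axis=1), minlength=n_candidates)
        winner = votes.argmax()
        # regret = true social utility of the best candidate minus the winner's
        social = true_u.sum(axis=0)
        regrets.append(social.max() - social[winner])
    return float(np.mean(regrets))

print(bayesian_regret(ignorance=0.0))  # fully informed voters: low regret
print(bayesian_regret(ignorance=2.0))  # heavy ignorance: higher regret
```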
great points John!