Introduction
The idea
The idea is to get the UN to facilitate agreement amongst world leaders on what moral philosophy they should base their decisions on, and this article expands on that idea.
Preface
I'm mostly asking for feedback[1], assistance [contacting the UN system about this], help actually starting this project (if your job is relevant), and any information (such as whether any world leaders seem unwilling to change their moral values, relevant psychology, how the UN implements projects, how slow of a process this might be, etc.) that can aid in deciding how (or, I suppose, if) this project should be implemented.
Why this might be good
As I am sure you know, everyone makes decisions based on three things:
- The information they have,
- Their decision-making process (which is often assumed to be their values, such as in game theory; when it comes to geopolitics, there's much less human error than in day-to-day life, so it should roughly be their values, with divergence from that being human error.)
- Their options.
This is because when a person decides something, they use the information they have (1) and their decision-making process (2), and ONLY based on those things do they choose one of their options (3).
It's abundantly clear why it would be good for world leaders to agree on what they value[2]: said world leaders would always want the same thing, assuming they have the same or similar enough information and human error doesn't get in the way.
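The claim above can be made concrete with a toy model (my own illustrative sketch, not part of the original argument; the options, outcome features, and numbers are all hypothetical): if each leader picks the option that maximizes their values applied to shared information, then identical values plus identical information force identical choices, while different values can diverge even given the same information.

```python
# Toy model (illustrative only): an agent is (values, information) -> choice.
# "Values" are utility weights over outcome features; "information" maps
# each option to its predicted outcome features.

def choose(values, information, options):
    """Pick the option whose predicted outcome scores highest under `values`."""
    def utility(option):
        outcome = information[option]  # predicted features of this option
        return sum(values[f] * outcome[f] for f in values)
    return max(options, key=utility)

options = ["negotiate", "sanction", "escalate"]
# Shared information: predicted (stability, prosperity) of each option.
info = {
    "negotiate": {"stability": 0.9, "prosperity": 0.6},
    "sanction":  {"stability": 0.5, "prosperity": 0.4},
    "escalate":  {"stability": 0.1, "prosperity": 0.2},
}

shared_values = {"stability": 1.0, "prosperity": 1.0}
# Two leaders with the SAME values and SAME information always agree:
assert choose(shared_values, info, options) == choose(shared_values, info, options)

# A leader with different values can disagree even on the same information:
contrarian = {"stability": -1.0, "prosperity": 0.0}
print(choose(shared_values, info, options))  # -> negotiate
print(choose(contrarian, info, options))     # -> escalate
```

Human error, in this framing, would be any gap between the option an agent picks and the argmax of their own utility function.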
Why this might work
Most people are aligned not with their goals at the moment, but rather with their goals overall. That's phrased a bit weirdly, so I'll expand on it:
Someone might be a member of the Democratic party at one time, but they probably wouldn't take a pill that made them always hold the opinions of a member of the Democratic party of that time.
1. If this isn't caused by human error, it must be caused by their values.
2. If their values only cared about their opinions and beliefs at that time, then they would take the pill.
3. Since they likely wouldn't, their values must also care, in part, about their future opinions or beliefs.
In addition, most people wouldn't take a pill that made them highly addicted to ice cubes.
4. If this isn't caused by human error, it must be caused by their values.
5. If their values only care about satisfying whatever opinions and beliefs they have at the time, then they would take the pill, since they could easily satisfy [the belief that they would hold, in the future] that eating ice cubes is extremely valuable by going to the fridge and grabbing a handful of ice cubes.
6. Since they likely wouldn't take the pill, they must not care only about whatever opinions and beliefs they hold at the time.
IF a person wouldn't take the pill in either case, and IF that isn't caused by human error, then statements 3 and 6 MUST be satisfied.
In order to satisfy statements 3 and 6, said person's true values could be many things. Three very reasonable possibilities are:
- They value doing good, and their beliefs and opinions change as they think about them more. (e.g., someone might switch from acting like a nihilist to acting like a utilitarian since that better aligns with their values. They allow THIS change because they know that some reasonable process caused said change. If they thought they would start going crazy soon, they might take steps to prevent a change in their values, since they wouldn't trust the decision of a crazy person, even when the crazy person is them.)
- They care about making decisions based on logical reasoning and "reasonable" values. Valuing ice cubes so highly is not "reasonable".
- More generally, they might have one overarching goal (for example, âdoing goodâ), and they change their opinions and beliefs to better align with that goal.
This may be the case for very impactful people, so any change to their values, when based on logical reasoning and made with THEIR consent (so they know that it meets THEIR goals), would be welcome! (I don't think any of them are actually crazy, but I don't know. Even if they are crazy, though, or if they often succumb to human error, they would still likely only change their values with THEIR consent.)
- In addition, not only would world leaders end up with moral values that are more logical, but they would end up with moral values that many more important people agree with!
- In addition, world leaders might hold off on major decisions, since they know that, on average, they would make a more educated decision after their moral values improve and align with others. (🚨🚨🚨 IMPORTANT NOTE HERE: if a person in power knew that something such as this had a chance of occurring, they might hold off on major decisions[3]. So: if you have the ability to contact someone who might be considering a major decision, PLEASE consider telling me[4] so I can tell them about it (or otherwise get them to know), or tell them enough about this for them to hold off on said decision. But make sure they wouldn't use that information for bad, e.g., if they are morally aligned with their current values or otherwise don't trust the UN, they might use their knowledge of this project to try and stop it from occurring, or at least stop it from applying to them.)
Why this might not work/factors that might cause this to not be implemented/factors that might make this a bad idea
It might be too slow
One major issue is that the process might be too slow. Maybe it won't be! I honestly don't know. Maybe there's some study on how long it takes to change the mind of someone who sees their opinion as important, and that might be useful in determining how long this would take.
I will note that this program can help plenty of world leaders decide on a moral value simultaneously (under some methods of the project being implemented), which could make the process much faster.
Certain assumptions might not be met
Another potential reason this wouldn't work or be implemented is that many world leaders may not match the reasonable assumptions behind [the reasoning as to why it might work].
It might be too hard
Another one is that it might be very hard to convince everybody that it is important, especially if we define "important" more loosely, allowing more people to fit the description, and ESPECIALLY if it needs cooperation from a large group, such as the citizens of a nation, particularly if the moral values go against that group's culture. Imagine you're a devout Christian hearing that the UN decided that coveting one's neighbor's wife is really not that bad (and they mentioned that explicitly in a summary of the program report). Notably, many of these bigger groups are heavily influenced by smaller groups: most unions have union leaders or union leader bodies, most armies have generals of differing ranks, most political groups have figureheads, most religions have priests or the equivalent of a priest, etc.
Adverse incentives
The issue
- Due to all the dynamics of politics, people in power are disproportionately not morally aligned: someone who values staying in office the longest would, on average, be in office longer than someone who wants to do good. Those who are willing to become more morally aligned would disproportionately be put at a disadvantage, since this program would be more likely to make them more moral, and thus in power for less time compared to those who were less willing to budge: a change from mostly moral to moral might be the straw that breaks the camel's back, causing them to be in power for much less time.[5]
- This decrease might be especially extreme if their keys to power think that them being more moral is so bad that they'd need to replace them. For example, a country's president might strongly disagree with the morals of what was settled on and replace an ambassador who attended the program. (This provides further reason why the keys to power of those in power should also go through such a program.)
Both of these provide reason as to why a potential participant or person affected by the program might actively try to stop it.
Some counteracting forces
- This can be counteracted by having some of these dynamics of politics push towards being moral: moral world leaders might do better in a world filled with other moral world leaders than immoral world leaders would.
- Furthermore, you don't need to value staying in power in order to stay in power or to try to. Suppose you're a world leader with some moral values; in that case, you'd want to stay in power when the alternative is less moral than you.
- In addition to THIS, one of the main reasons moral world leaders do seemingly immoral things is to fend off less moral world leaders from taking their power. This force would be drastically counteracted by [the UN and most world leaders agreeing on a moral philosophy that they act upon].
- IN ADDITION, a person in a position of power could act the same way as an immoral version of themselves, except for when being moral doesn't have a noticeable effect on how much power they have. (This is practically the bare minimum, namely since it would mean that a program like this would only have an effect in those edge-cases.)
- IN ADDITION, if world peace (or something similar) was achieved under said person's leadership/[being in power], that would be a major boost to any campaigns that they endorse (for campaigns in democracies), it could make them seem like a better fit for most roles, it would boost their image, and more!
We've been trying part of this for a while
One glaring flaw is that it would be extremely difficult to land on the correct moral philosophy. Philosophers have been trying for years, and there still isn't a consensus!
One counterargument to this is that it doesnât need to be the RIGHT moral philosophy; it just has to be GOOD ENOUGH to be better than the current status quo, which is much less difficult.
Miscellaneous
- One general note is that the UN sort of already has this: the UN Charter. However, it isn't enforced in this way, and major member states don't abide by it, or otherwise didn't in the past, such as Russia. They might want to apply the techniques proposed here, but not all of them would work if the final moral goal is set in stone: no one can change it such that they agree with it, so if the Charter doesn't agree with their values, no amount of convincing will change their mind, unless you change their fundamental values (something pretty hard to do; imagine how much convincing I'd have to do to get you to stop doing good!)
- Presumably, since important people become more and less important over time, due to being elected, hired, resigning, dying, etc., this program would continue throughout time (to get all the new world leaders and non-new world leaders to agree on a moral philosophy); or perhaps the UN would fully agree on one moral philosophy; or perhaps every few years world leaders and experts would convene to decide if it should change, and if so, what it should change to. (This is one of the main ways this idea can be improved: "How should this be implemented long-term?")
If you have any questions, feel free to ask![6]
I likely forgot about some variables that can be changed to make this idea better, as well as variables that could affect whether or not this is a good idea, so please let me know if you spot any, or if you know what those variables are "equal to" (that is, what should be adjusted, and what are the real-world features that affect whether this is a good or bad idea?).
- ^
So far,
The following people have given me feedback:
- 4 non-experts.
- 1 international relations expert
- Arturo Marcias
- Christopher Canal
- At least 4 people who work at the UN[7]
- Roughly 6 people quickly talked about it on the call, and none of them said it was crazy, either. They also gave a few notes before we switched topics.
- ^
I will note that there are a bunch of different extents to how successful this could be. Here are some (somewhat rushed) examples:
- Scale & moral success: this project results in all people (outside of the very occasional bad apple) agreeing exactly on what moral system to use, such that no one would disagree on any decision unless they had different information or human error got in the way, AND this moral system is the fundamentally correct one.
- The general impact of this would basically be that all of humanity would soon live in a utopia, and the world would basically be optimal, putting aside human error.
- This seems to be one of if not THE hardest option, but I don't know the specifics.
- Moral success: Most world leaders agree on the fundamentally correct moral system.
- The impact of this could be that most people live as normal, content in the knowledge that the world is much safer, but there are some cases where non-world leaders might be empowered to cooperate against a better world order, perhaps through strikes, protests, or worse (which is potentially much less likely) (e.g., governments losing a monopoly on power), or perhaps smaller groups of people or individuals might try to interfere negatively through the common methods used throughout history. On the other hand, it might sort itself out, and such a positive scenario might empower greater cooperation between good-doers, which seems like a more reasonable scenario. In addition, in many of these scenarios, such better leaders might encourage people to be more moral and logical.
- This one also seems very difficult, since landing on the exact correct moral philosophy is something that philosophers have been working on for at least two millennia by now. Of course, progress and the rate of good ideas and whatnot has dramatically increased in very recent history, so it's a possibility that shouldn't be ruled out.
- Varied moral success: Most world leaders agree on most things, but there are some edge cases where their slightly differing values make them have opposing interests on select issues.
- This probably will have less of a unionizing force on less powerful bad actors, since the situation is less compromising to some goals bad actors might have; it might have a comparable unionizing force amongst good-doers. However, world peace might not be achieved here, though I imagine that, in many of these scenarios, it is easy to improve the scenario to one of the better ones. (In this case, 5 might be pretty achievable.)
- This seems like one of the most likely scenarios, besides the scenario where this can't or wouldn't be implemented.
- Less moral success: Like scenario 2, but with a "sub-par" moral system.
- This probably will have less of a unionizing force on less powerful bad actors, since the situation is less compromising to some goals bad actors might have; it might also have less of a unionizing force amongst good-doers. While it might cause a world peace equivalent, the world might head toward a slightly or largely more sub-par or misaligned long-term future for humanity.
- This also seems like one of the most likely scenarios, besides the scenario where this can't or wouldn't be implemented.
- Moral stepping-stone success: World leaders agree that there is some correct moral system for which decisions should be made, but they might still disagree on some decisions.
- This might cause much less disagreement amongst world leaders, and world leaders would mostly always agree that it is important to find out what the right decision is, and thus cooperate to find it.
- This seems like one of the easiest ways to get the ball rolling. In an emergency, such as a repeat of the Cuban missile crisis, this seems like the best strategy for an emergency implementation, but any of them could result in world leaders holding off on decisions such as sending nukes, even if they were only announced.
- A version of 3 which works on all people.
- This might cause greater cooperation on many sides, which could be good or bad, depending on specifics.
- This seems possible, though I doubt the UN would play much of a direct role here. If this were to happen, I would imagine it would mostly arise from a general cultural shift that encourages people to think about their values. There certainly is precedent of major news having major cultural impacts, such as the events of 2001 that are so ubiquitous that the year is often associated with the tragedy.
- A version of 4 which works on all people.
- This might cause greater cooperation on many sides, which could be good or bad, depending on specifics.
- This seems much less possible given the sheer number of people and possibility for large groups of people who would reject a sub-par moral system, though I doubt the UN would play much of a direct role here. If this were to happen, I would imagine it would mostly arise from a very, very major cultural shift that encourages people to think about their values. I am unaware of any precedent of such a large cultural shift.
- A version of 5 which works on all people.
- This might cause greater cooperation on many sides, which could be good or bad, depending on specifics. It also might make arguments resolve much better, and it could easily result in making up-and-coming world leaders already agree with certain moral values before rising to power. (This one applies to all of the ones that work on the general population.)
- This actually does seem like a reasonable possibility. Many moral values are already pretty widespread, even if the logic behind them isn't. One of the most notable ones is "the golden rule". Something like this could certainly come as a result of the UN, especially since it agrees with most moral philosophies (namely because, for most moral philosophies, one could think "one moral philosophy is fundamentally the right one, and that one is MY moral philosophy"), and thus its widespread acceptance doesn't have to provide reasoning as to why other moral philosophies are wrong.
Generally, there are many easy-ish (specifics-dependent) ways of improving cooperation amongst good-doers and decreasing cooperation amongst bad actors, so it might be worth weighting those factors less as a result of the potential ease of controlling these factors through other means.
I will re-emphasize that this footnote doesn't account for every scenario, and is really not that comprehensive. It mainly provides a jumping-off point from which to develop specific parts of the project so as to lead to specific results, and to figure out what parts of the program might result in what outcomes, and how good said outcomes are/may be.
- ^
This is because something like this is so major that it might impact their decision, so they might hold off on a decision until they can make a more educated one.
- ^
I'd appreciate it if you tell me so I can keep track of who knows.
If you think someone else should know instead of me, AND that they should know who knows, then please let me know, so I can send them the info on who (to my knowledge) knows about it.
If you want certain restrictions on [who I send any info on this to], please let me know. I'd appreciate reasoning as to why.
- ^
Note that this is often not the fault of the people in power.
- ^
To be clear: if you have ANY questions, please ask. Interpret this article the same way you would if there were a version of it that included the answers to any given question, and you could request access to see what the answer is. If there is a typo, for example, don't interpret this article as though said typo was intentional; interpret it as though there were a footnote right above the typo that said "this is a typo. I meant to say ___". In terms of game theory and soft power and deciding on a Nash equilibrium and whatnot, note that if anything is unclear, everyone else will have asked "hey, can you clarify this?", and they would do so both for clarification and because they would want to know what clarification everyone else got.
- ^
They didn't give comments on how to improve, but they didn't say the idea was crazy, and they definitely read it.
Simple feedback: read this book:
https://www.amazon.com/Dictators-Handbook-Behavior-Almost-Politics/dp/1610391845
Think about politics in Darwinian terms: who survives the process?
I'm pretty sure that's on my book list, but thanks anyways! I'd say I watched the equivalent of the "movie version" (which is missing some things; namely, it doesn't have much on how easy it is to replace keys): https://youtu.be/rStL7niR7gs (Sorry if this comes off as passive-aggressive; it isn't. It's passive.)
I'll edit the idea accordingly, though.
Do you think the video is missing any other important points that the book doesn't?
Of course! The detailed historical examples. No amount of abstract knowledge can substitute historical discussion.
In fact, the academic version (The Logic of Political Survival) is for me less interesting, because it is based too much on data analysis instead of cases.
Thanks! I'll give it a read (or, more realistically, a listen if there's an audiobook version).
It seems boggling at first glance that this would work, but in summary, it works like this: sometimes, in an argument, one or more sides doesn't care about reaching the RIGHT conclusion; they just care about reaching a conclusion they approve of. This is often the difficulty with arguments.
However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and, as a bonus, the debate is much more respectful!
This project would basically bring world leaders to the table, where they would look for the RIGHT conclusion to major problems, which should lead to the correct/RIGHT conclusion (seemingly) being arrived at much more often and much faster, and, as a bonus, the debates being much more respectful!
There is sort of precedent for this: science used to be much more argumentative, and now, most of science is done in very intelligent ways, aimed at getting to the RIGHT answer, and not "their answer". This led to many, if not most or all, scientific problems being solved*.
In addition, if you aim to be a powerful scientist, fighting for "your answer" makes that much harder than fighting for the RIGHT answer would. Similarly, if this project worked well, it would be much harder to gain power by fighting for "your values" than by fighting for the RIGHT values!
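This convergence dynamic can be sketched with a toy simulation (my own illustration with made-up numbers, not a claim about real negotiations): agents who update toward shared evidence converge on one answer, while agents who defend their starting position remain far apart after the same number of rounds.

```python
# Toy sketch (hypothetical numbers): agents debating an estimate of some
# quantity. "Truth-seekers" update strongly toward the shared evidence each
# round; "position-defenders" barely move from their starting answer.

def debate(beliefs, evidence, stubbornness, rounds=20):
    """Each round, every agent moves toward the evidence, damped by stubbornness."""
    for _ in range(rounds):
        beliefs = [b + (1 - s) * (evidence - b) for b, s in zip(beliefs, stubbornness)]
    return beliefs

evidence = 10.0
start = [0.0, 5.0, 20.0]  # three agents with very different starting positions

truth_seekers = debate(start, evidence, stubbornness=[0.2, 0.2, 0.2])
defenders = debate(start, evidence, stubbornness=[0.99, 0.99, 0.99])

spread = lambda xs: max(xs) - min(xs)
print(spread(truth_seekers) < 1e-6)  # True: they converge on the evidence
print(spread(defenders) > 10)        # True: still far apart after the same rounds
```

The "stubbornness" knob here is just a stand-in for how much a side cares about "their answer" versus the RIGHT answer; the point is only that the same evidence and the same number of rounds yield agreement in one regime and persistent disagreement in the other.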
Maybe this would start its rollout on the most major world leaders first? And then, over time, more and more people get added to the program once we're ready for them.
One way to advertise this idea is that it reminds people of what the UN/UN charter was for, and that it is an improvement upon it.
If this worked, it would probably result in a major culture shift throughout most major institutions, which would help keep the program from falling apart, and would help incorporate new members.
Exact information on this is dependent on data on psychology and whatnot. If you know about that stuff, please let me know or add it here.
And a good culture (say, in the UN) can also help with this project's success. A bad one can result in this project being harder.
Also, if world leaders spend a lot of time surrounded by a particular culture (e.g., a month at some event), they might carry some of that culture over when they get back home, though they would also re-assimilate into their home culture.
I will also note that more morally misaligned actors might use the information that world leaders now agree on X moral values to their advantage, in order to do bad things. Perhaps this force is counteracted by more morally aligned people using such information to do good things!
I will note that most change of this scale doesn't arise from methods like this. This could aid in giving a rough sense of how likely this is to work. Here are some examples of things like this working:
And here are some examples of efforts that have required broader support:
(Note: this was all off the top of my head.)
Message to any world leaders who aren't willing to change their values: If you could successfully stop this from happening if you tried, then it wouldn't work anyway, so there's no point in trying to stop me. It would be comparable to voting in an election determined by people's opinions, not by how they voted (the equivalent of writing on a random piece of paper, "I vote like so: __").
I say this because, in any scenario where such an effort (even assuming every world leader who has completely unwavering moral values tried really hard to stop our program AND cooperated with one another) would potentially be successful, our program would fail anyway.
To expand on that: If your efforts make the difference between our program succeeding and failing, or otherwise affect its success, we would have a huge incentive to ensure that this program isn't bad for you. This is because, if [you think it would be better for [your values] to try and prevent any given facet/part of our program], you would logically do so, and we don't want that, so we will make sure [you are happy with each of those facets of the program].
Basically, you donât need to stop our program. The threat that you might try to stop our program has the same effect.
If we can help you in a way that doesn't come at a cost to us (e.g., rescheduling meetings so that the meeting times work better for you), we will!
As an analogy, if you had the option to get rid of a country, then you donât have to worry about them being bad for you, because they have a massive incentive to be good for you: not getting destroyed.
Hereâs another analogy: Someone is making you food. You don't have to spend thousands of dollars to ensure that the person makes good food since you can simply throw the food away if the food does not taste good, and the person making the food already has a massive incentive to make food that tastes good to you: not getting the food thrown out.
All of this goes without saying, but saying it makes it clear.
A common strategy used to limit the effects of human error is to better account for it in models and whatnot, often by coming up with a value system that would make sense for any given set of decisions where some of them are due to human error. For example, in economics, one might say that a person ascribes inherent additional value to things that are on sale.
Another way is to try to make human error guide someone in a similar direction to logical decisions. For example, there is a major taboo against drug use in many areas, which supposedly decreases unnecessary drug use.
More generally, a common strategy is to limit how much human error changes someone's decisions, on average.
A world leader's goals are probably adjustable one way or another. In the case where a world leader is committed to some values that depend on something (e.g., whatever is seen as "patriotic", or whatever their religion says (this only applies to some religions)), changing those things changes their values. That might be very difficult for some value systems, but luckily there are plenty of good logical arguments against [a commitment to the values of something that can easily change] (https://youtu.be/wRHBwxC8b8I), which could be a better strategy for changing someone's mind if they hold such a commitment that is difficult to change directly.
If you know about psychology or world leaders, please let me know how true this might be. If it isn't true, we'd have to work out how we might handle a world where only some people have their morals aligned. My first thought on this is:
Maybe replacing the keys to power?
Supposedly, a more morally aligned global order might try to make itself more morally aligned. We only need this to work enough for it to sort itself out.
I imagine this would be implemented in a similar fashion to other UN programs when they started, but before that, we should work out key things that would change how, or whether, the program should happen.
If anyone here knows any info that can help with this (e.g., Does any world leader have a commitment to their current values instead of their overall values?), please let me know in a comment, email, etc.
Quick note (taken while I am tired, so medium "parse-ability"): this program should be able to adjust to new ideas, such that [an idea on how this program can be improved] can be implemented as soon as possible, perhaps without having to do an event. This is tricky for some ideas (e.g., how the event could be more fun). This would cause ideas to be implemented sooner, and there'd also be less of a cost to doing the program sooner, since you wouldn't be "missing" most important ideas. One idea that MIGHT satisfy this: part of the UN's normal chat space (Slack, Discord, or whatever they use, if anything) could be a philosophy section on what philosophy to go by and why, so the discussion can continue 24/7, and ideas for improvement can get implemented by the next day (or sooner).