The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My Manifold question puts an attempt to seize power if they lose legitimate elections at 30% (n=37). I put it much higher.[1]
Not only is this concerning by itself, this also incentivizes them to achieve a strategic decisive advantage via superintelligence over pro-democracy factions. As a consequence, they may be willing to rush and cut corners on safety.
Crucially, this relies on them believing superintelligence can be achieved before a transfer of power.
I don't know how much the belief in superintelligence has spread into the administration. I don't think Trump is 'AGI-pilled' yet, but maybe JD Vance is? He made an accelerationist speech. Making them more AGI-pilled and advocating for nationalization (as Aschenbrenner did last year) could be very dangerous.
The forum likes to catastrophize Trump but I need to point out a few things for the sake of accuracy since this is very misleading and highly upvoted.
The current administration has done many things that I find horrible, but I don't see any evidence of an authoritarian takeover. Being hyperbolic isn't helpful.
Your Manifold question is horribly biased. First, there is your bias in how you will resolve the question. Second, the wording of the question itself is incredibly biased, for example counting Bush v Gore as a coup, or "Anything that makes the person they try to put in power illegitimate in my judgment". Your judgment is doing a lot of heavy lifting there.
I think it's important to quantify this supposed incentive. Needless to say, I think it's very low.
I don't think it matters much, but I was Manifold's #1 trader until I stopped trading, and I'm fairly well regarded as a forecaster.
The forum likes to catastrophize Trump but I need to point out a few things for the sake of accuracy since this is very misleading and highly upvoted.
The current administration has done many things that I find horrible, but I don't see any evidence of an authoritarian takeover. Being hyperbolic isn't helpful.
I think this statement is highly misleading. First, I think compared to most other fora and groups, this Forum is decidedly not catastrophizing Trump.
Second, if you don't see "any evidence of an authoritarian takeover" then you are clearly not paying very much attention.
I think there is a fair debate to be had about how seriously to take various signs of authoritarianism on the part of the Administration, but "seeing no evidence of it" is not really consistent with the signals one does readily find when paying attention, such as:
- an attack on the independence of the judiciary and law firms, complaining about the fact that courts exercise their legitimate powers
- flirting with the idea of being in defiance of court orders
- talking about a third term
- praising Putin, Orban, and other authoritarians
- undermining due process
On a relative basis to other left-wing places, the forum is not catastrophizing Trump. I should have said that this post is catastrophizing Trump and is only getting the upvotes (at the time I posted, it was all upvotes and "agree" reacts), because of the forum's political bias.
Again, I should be more precise, but I think this is a misinterpretation. There is always some evidence of authoritarian takeover by any President. Every President does things that are supposed to be done through Congress (for example, most military action). I agree that Trump has more authoritarian impulses than most, but this comes nowhere near clearing the bar for the author's claim that "The current US administration is attempting an authoritarian takeover." That's a very strong statement, and the evidence doesn't back it up. It's hyperbolic.
For the record, by authoritarian takeover I mean a gradual process aiming for a situation like Hungary (which they've frequently cited as their inspiration and something to aspire to). Given that Trump has tried to orchestrate a coup the last time he was in office, I don't think it's a hyperbolic claim to say he's trying again this time. I'm also not making any claims about the likelihood of success.
is only getting the upvotes (at the time I posted, it was all upvotes and "agree" reacts), because of the forum's political bias.
I think this is very uncharitable to other Forum users. (Unless you meant "is getting only upvotes [..]")
I was mostly objecting to your statement of "seeing no sign of authoritarian takeover". I do agree, and mentioned in my comment, that Siebe's statement was possibly too definite.
But I don't think it is hyperbolic to say that there are many signs of Trump's authoritarianism, signs consistent with an attempted authoritarian takeover, and that this is qualitatively different from what we have seen from any other President in recent history. One has to go back at least to Nixon to find things in the same ballpark (and Nixon was arguably far more constrained by his own party than Trump is right now).
The examples you cite ("Presidents doing things that should be done through Congress") are not examples of authoritarian behavior, and pretending that what Trump is doing is part of the regular testing of executive authority is also quite misleading.
Which other recent Administration was headed by someone denying a legitimate election result? Which other recent Administration had a VP flirting with the idea of not honoring Supreme Court rulings? Which other recent Administration was systematically invested in fighting against civil society institutions and law firms? Which other Administration has had so many people warning about authoritarian tendencies, both from their own party and from key senior staff from their own first administration?
Take the third-term talk: do you believe it is
* A lie (it cannot be hyperbole, as the claim he made was very specifically framed)
* Legal under the constitution, because he would do it by running for Vice President and having the elected President resign, and anything technically legal is not an 'authoritarian takeover'
* Illegal under the constitution, but he would legally amend the constitution to remove term limits
* Something else?
And then, for whichever you believe, could you explain how it isn’t an authoritarian takeover?
(I choose this example because it's relatively clear-cut, but we could point to Trump vs. United States, the refusal to follow court orders related to deportations, instructing the AG not to prosecute companies for unbanning TikTok, the attempts from his surrogates to buy votes, freezing funding for agencies established by acts of Congress, bombing Yemen without seeking approval from Congress, kidnapping and holding legal residents without due process, etc. etc. etc., I just think those have greyer areas)
Trump and crew spout millions of lies. It's very common at this point. If you get worked up about every one of them, you're going to lose your mind.
Look, I'm not happy about this Trump stuff either. It's incredibly destabilizing for many reasons. But you are going to lose focus on important things if you get swept up into the daily Trump news. If you are focused on AI safety or animal welfare or poverty or whatever it may be, your most effective thing will almost certainly be focusing on something else.
I don't think discussing authoritarian takeover is against Forum rules, though EA is not the ideal place for political resistance, given the broad range of causes for which it needs political tractability. However, it's tricky, because US political dynamics are currently extremely influential for EA cause areas, and I think we need to do better at thinking through how various areas will be affected, and how policies might interact with the fact that the US administration is proto-authoritarian. We should not simply pretend the US administration is a normal one.
That said, in these discussions we should be careful not to descend into 'mere partisanship', though I don't know where that line is. I wish the Forum team would give more guidance.
Toby Tremlett🔹
This is something we should think about more as a mod team - I'll discuss it with them.
Our current politics policy is still this. But it arguably wasn't designed with our current situation in mind. In my view, it'd be a bad thing if discussions on the Forum became too tied to the news cycle (it generally seems true that once something is in the news, you are at least several years too late to change it), and our impact has historically not come from working in the most politically salient areas (neglectedness isn't a perfect proxy, but it still matters). However, it'd also be wrong if the Forum couldn't discuss politically salient issues while they are going on and there is something readers could do to stop them.
FWIW in this particular situation (and I haven't conferred with the mod team) I don't see this thread as being against Forum rules, because the participants could reasonably believe (or for that matter, not believe) that preventing authoritarian takeover in the US is a relevant cause area to EA.
SiebeRozendal
I don't think it's that misleading because
* Will Manifold think Trump made a serious attempt to remain in charge? (35%, n=26; would be ~26% without my bets). Resolves via Manifold poll
* Trump tried something arguably coup-like but it fails (25%, n=44) and the linked "Trump remains in office" is 15% (n=43), putting the total attempt probability at 40%. Other markets put success at lower rates though, which seems more realistic.
The President has also already tried a coup once (fake elector scheme, J6). There's a much bigger case I could make but I don't want to do that here
Larks
I realize 'Manifold questions with poor resolution criteria' is something of a repeated subject for me, but I think it's worth noting how perverse this criterion is. If traders are behaving rationally, this contract trading at 30% implies 70% confidence that... the 2028 election will be more democratically legitimate than the 2000 election? As far as I can see, this market pricing is perfectly compatible with:
* a 70% chance that the 2028 election is more democratically legitimate than a perfectly fine presidential election
* a 30% chance that it is equally legitimate to a perfectly fine presidential election
To the extent that you use the word 'coup' in a very expansive way that is not shared by most people, you should probably explicitly signpost this. The rest of your comment doesn't really follow as a result... why should SCOTUS deciding that you can't do cherry-picked county recounts create an incentive to rush to a strategic decisive advantage? The absence of AGI was not an obstacle to that ruling back in 2000.
SiebeRozendal
I appreciate you looking into the resolution criteria, because they matter. And yes, partisan SCOTUS rulings being included muddles the evidence somewhat. That said, I don't think it's that misleading because
* Will Manifold think Trump made a serious attempt to remain in charge? (35%, n=26; would be ~26% without my bets). Resolves via Manifold poll
* Trump tried something arguably coup-like but it fails (25%, n=44) and the linked "Trump remains in office" is 15% (n=43), putting the total attempt probability at 40%. Other markets put success at lower rates though, which seems more realistic.
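The arithmetic in the second bullet can be sketched directly, under the simplifying assumption that the two markets describe mutually exclusive outcomes of the same underlying event (attempt fails vs. attempt succeeds):

```python
# Combining two prediction-market probabilities, assuming they cover
# mutually exclusive outcomes of the same underlying event.
p_attempt_fails = 0.25     # "tried something coup-like but it fails"
p_attempt_succeeds = 0.15  # "Trump remains in office"

# For disjoint outcomes, P(attempt) is just the sum of the two.
p_attempt = p_attempt_fails + p_attempt_succeeds
print(p_attempt)
```

If the markets overlap or are priced inconsistently with each other, this simple sum over- or under-counts.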
In hindsight I would've referenced the Manifold poll resolution.
I recommend everyone in this thread look at the US Democracy topic on Manifold, to which I have added all the relevant questions I could find (and also look elsewhere, e.g. Metaculus, which has far fewer questions but arguably better incentives for long-term questions)
P.S.
I also have a separate question specifically about a controversial SCOTUS ruling in favor of Republicans, but it doesn't have enough traders.
Sator
Whilst I agree there is a disconcertingly high chance (i.e. anything above 10% is very concerning) of coup or coup-adjacent actions by the Trump admin, it's worth remembering that 'coup attempt' ≠ 'coup success.' It's also worth remembering that a 30% chance of 'thing happens' means a 70% chance of 'thing doesn't happen.'
A larger market (n=158) on Metaculus has 2% on "Will Trump win the 2024 presidential election and retain supreme executive power past 2028?" Given that he already won the election, the question is effectively "Will Trump retain supreme executive power past 2028?". Granted, this excludes coup attempts by Vance, but I don't see why he would have a substantially higher chance of pulling off a successful coup (admittedly without having given it much thought).
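One way to see why the 2% joint market reads differently once the first leg has resolved: conditioning. A sketch, using an illustrative pre-election win probability (the 50% figure is an assumption, not from the thread):

```python
# Conditioning a joint market on one leg resolving YES.
p_win_and_retain = 0.02  # "win 2024 AND retain power past 2028"
p_win = 0.50             # illustrative pre-election win probability (assumption)

# P(retain | win) = P(win and retain) / P(win)
p_retain_given_win = p_win_and_retain / p_win
print(p_retain_given_win)
```

After the win leg resolved, the market price should drift toward this conditional probability rather than staying at the old joint price.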
[1]
Admittedly, I felt motivated by a gut level "I don't like politics/Trump posting on the EA forum" to write a response. Or maybe it was also partly the subtle alarmism (which isn't a claim that you intended to write an alarmist post, just that I read it as such).
Ozzie Gooen
Going meta, I think this thread demonstrates how the Agree/Disagree system can oversimplify complex discussions.
Here, several distinct claims are being made simultaneously. For example:
1. The US administration is attempting some form of authoritarian takeover
2. The Manifold question accurately represents the situation
3. "This also incentivizes them to achieve a strategic decisive advantage via superintelligence over pro-democracy factions"
I think Marcus raises a valid criticism regarding point #2. Point #1 remains quite vague—different people likely have different definitions of what constitutes an "authoritarian takeover."
Personally, I initially used the agree/disagree buttons but later removed those reactions. For discussions like this, it might be more effective for readers to write short posts specifying which aspects they agree or disagree with.
To clarify my own position: I'm somewhat sympathetic to point #1, skeptical of point #2 given the current resolution criteria, and skeptical of point #3.
Ebenezer Dukakis
Speaking as an American -- I think a silver lining on recent tariff moves is that they may foster anti-American sentiment in e.g. Europe, which then makes Europeans more instinctively resistant to America's recklessness when it comes to AI. I think it could be really high-impact for EAs in e.g. the Netherlands to try and kickstart a conversation about how ASML may enable an American AI omnicide.
Never let a good crisis go to waste!
Probably worth red-teaming this suggestion, though. It would be bad if the MAGA crowd were to polarize in opposition, and embrace AI boosterism in order to stick it to Europe. Perhaps this effect could be mitigated if the discussion mostly happened in the Dutch language?
I'm starting a discussion group on Signal to explore and understand the democratic backsliding of the US at ‘gears-level’. We will avoid simply discussing the latest outrageous thing in the news, unless that news is relevant to democratic backsliding.
Example questions:
“how far will SCOTUS support Trump's executive overreach?”
“what happens if Trump commands the military to support electoral fraud?”
"how does this interact with potentially short AGI timelines?"
"what would an authoritarian successor to Trump look like?"
Here's an argument I made in 2018 during my philosophy studies:
A lot of animal welfare work is technically "long-termist" in the sense that it's not about helping already existing beings. Farmed chickens, shrimp, and pigs only live for a couple of months, and farmed fish for a few years; advocacy work typically takes longer than that to impact animal welfare, so the animals it helps mostly don't exist yet.
For most people, this is no reason to not work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings.
But once you accept the moral goodness of that, there's little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering that an artificial superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.)
The distinction comes in at the empirical uncertainty/speculativeness of reducing s-risk. But I'm not sure if that uncertainty is treated the same as uncertainty about shrimp or insect welfare.
That doesn't match the standard definition of longtermism ("positively influencing the long-term future is a key moral priority of our time"); it seems to me that it's more about rejecting some narrow person-affecting views.
I think it's very tempting to assume that people who work on things that we don't consider the most important things to work on are doing so because of emotional/irrational/social reasons.
I imagine that some animal welfare people (and sometimes I myself) see people working on extremely fun and interesting problems in AI, while making millions of dollars, with extremely vague theories for why this might be making things better and not worse for people millions of years from now, and imagine that they're doing so for non-philosophically-robust reasons. I currently believe that the social and economic incentives to work in AI are much greater than the incentives to work in animal welfare. But I don't think this is a useful framing (it's too tempting and could explain anything), and we should instead weigh the arguments that people give for prioritizing one cause over another.
I think the tractability aspect of AI/s-risk work, and the fact that all previous attempts backfired (Singularity Institute, early MIRI, early DeepMind, early OpenAI, and we'll see with Anthropic) is the single main reason why some people are not prioritizing work in AI/s-risk at the moment, and it's not about extremely narrow person-affecting views (which I think are very rare).
I think those are different kinds of uncertainties, and it seems to me that they are both treated very seriously by people working in those fields.
SiebeRozendal
You make a lot of good points - thank you for the elaborate response.
I do think you're being a little unfair and picking only the worst examples. Most people don't make millions working on AI safety, and not everything has backfired. AI x-risk is a common topic at AI companies, they've signed the CAIS statement that it should be a global priority, and technical AI safety has a talent pipeline and is a small but increasingly credible field, to name a few. I don't think "this is a tricky field in which to make a robustly positive impact, so as a careful person I shouldn't work on it" is a solid strategy at the individual level, let alone at the community level.
That said, I appreciate your pushback, and there are probably plenty of people working on either cause area for whom personal incentives matter more than philosophical ones.
A lot of post-AGI predictions are more like 1920s predictions of flying cars (technically feasible, maximally desirable absent other constraints, the current system but better) than like predicting EasyJet: crammed low-cost airlines (physical constraints imposing economic constraints, shaped by iterative regulation, different from the current system)
I just learned that Lawrence Lessig, the lawyer who is/was representing Daniel Kokotajlo and other OpenAI employees, supported and encouraged electors to be faithless and vote against Trump in 2016.
He wrote an opinion piece in the Washington Post (archived) and offered free legal support. The faithless elector story was covered by Politico, and was also supported by Mark Ruffalo (the actor who recently supported SB-1047).
I think this was clearly an attempt to steal an election and would discourage anyone from working with him.
I expect someone to eventually sue AGI companies for endangering humanity, and I hope that Lessig won't be involved.
I don't understand why so many are disagreeing with this quick take, and would be curious to know whether it's on normative or empirical grounds, and if so where exactly the disagreement lies. (I personally neither agree nor disagree as I don't know enough about it.)
From some quick searching, Lessig's best defence against accusations that he tried to steal an election seems to be that he wanted to resolve a constitutional uncertainty. E.g.,: "In a statement released after the opinion was announced, Lessig said that 'regardless of the outcome, it was critical to resolve this question before it created a constitutional crisis'. He continued: 'Obviously, we don’t believe the court has interpreted the Constitution correctly. But we are happy that we have achieved our primary objective -- this uncertainty has been removed. That is progress.'"
But it sure seems like the timing and nature of that effort (post-election, specifically targeting Trump electors) suggest some political motivation rather than purely constitutional concerns. As best as I can tell, it's in the same general category of efforts as Giuliani's effort to overturn the 2020 election, though importantly different in that Giuliani (a) had the support and close collaboration of the incumbent, (b) seemed to actually commit crimes doing so, and (c) did not respect court decisions the way Lessig did.
Just did it, still works. You can donate to what looks like any registered US charity, so plenty of highly effective options whether you care about poverty or animal welfare.
MHR🔸
Worked for me just now, gave $50 to The Humane League :)
John Salter
Worked 20 minutes ago. Process took me ~5 minutes total.
Common prevalence estimates are often wrong.
Examples: snakebite prevalence estimates, and my experience reading the Long Covid literature.
Both institutions like the WHO and the academic literature appear incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but I have not looked into it.
I advise everyone using prevalence estimates to treat them with some skepticism and look up the source.
The Global Burden of Disease (GBD) is okay; it depends a lot on what disease and metric you're looking at, and how aware you are of the caveats around it. Some of these:
* A lot of the data is estimated, rather than real measurements of prevalence
* I think most people understand this, but it's always worth a reminder
* The GBD provides credible intervals for all major statistics and these should definitely be used!
* This paper on the Major Depressive Disorder estimates is a good overview for a specific disease
* The moral weights for estimating the years lived with disability for a given disease are compiled from a wide survey of the general public
* This means they're based on people's general belief of what it would be like to have that condition, even if they haven't experienced it or know anyone who has
* Older GBDs included expert opinion in their moral weights, but to remove biases they don't do this anymore (IMHO, the right call)
* The estimates for prevalence are compiled differently per condition by experts in that condition
* There is some overall standardisation, but equally, there's some wiggle room for a motivated researcher to inflate their prevalence estimates. I assume the thinking is that these biases cancel out in some overall sense.
Overall, I think the GBD is very robust and an extremely useful tool, especially for (a) making direct comparisons between countries or diseases and (b) where no direct, trustworthy, country-specific data is available. But you should be able to improve on its accuracy if you have an inside view on a particular situation. I don't think it's subject to the incentives you mention above in quite the same way.
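For readers unfamiliar with how prevalence and the moral (disability) weights above combine into burden, here is a minimal sketch of the YLD (years lived with disability) calculation GBD uses; the figures are invented for illustration and are not real GBD estimates:

```python
# Minimal GBD-style YLD calculation: YLD = prevalent cases * disability weight.
# All numbers are illustrative, not real GBD figures.
conditions = {
    # condition: (prevalent cases, disability weight in [0, 1])
    "major depressive disorder": (1_000_000, 0.145),
    "low back pain": (2_500_000, 0.020),
}

yld = {name: cases * weight for name, (cases, weight) in conditions.items()}
total_yld = sum(yld.values())
print(yld, total_yld)
```

Inflating either the prevalence or the weight of a condition inflates its YLD directly, which is why the credible intervals matter.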
jeeebz
Chiming in to note a tangentially related experience that somewhat lowered my opinion of IHME/GBD, though I'm not a health economist or anything. I interacted with several analysts after requesting information related to IHME's estimates for global hepatitis C burden (which differed substantially from the WHO's). After a meeting and some emails promising to follow up, we were ghosted. I have heard from one other organization that they've had a really hard time getting similar information out of IHME as well. This may be more of an organizational/operational problem rather than a methodological one, but it wasn't very confidence-inspiring.
NickLaing
Whenever I do a sanity check of GBD, it usually makes sense for Uganda, where I live, with the possible exception of diarrhoea, which I think is overrated (with moderate confidence).
I'm not sure exactly how GBD could "exaggerate" overall, because the contribution of every condition to the disease burden has to add up to the actual burden - if you were to exaggerate the effect of one condition, you would have to intentionally downplay another to compensate, which seems unlikely. I would imagine mistakes in GBD are usually good-faith mistakes rather than motivated exaggerations.
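The adding-up point can be made concrete: if cause-specific estimates are rescaled to fit a fixed all-cause envelope (roughly what GBD's correction step does for deaths), inflating one cause mechanically shrinks the others' fitted shares. A toy sketch with made-up numbers:

```python
# Toy illustration of an all-cause envelope constraint.
total_burden = 100.0  # fixed all-cause total
raw = {"malaria": 40.0, "diarrhoea": 30.0, "other": 30.0}

def fit(estimates, envelope):
    """Rescale cause-specific estimates so they sum to the envelope."""
    scale = envelope / sum(estimates.values())
    return {cause: value * scale for cause, value in estimates.items()}

fitted = fit(raw, total_burden)

# Exaggerate diarrhoea's raw estimate and re-fit: malaria's fitted
# share drops even though its raw estimate never changed.
exaggerated = dict(raw, diarrhoea=60.0)
refitted = fit(exaggerated, total_burden)
print(fitted["malaria"], refitted["malaria"])
```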
timunderwood
Nitpicky reply, but reflecting an attitude that I think has some value to emphasize:
Based on what you wrote, I think it would be far more accurate to describe GBD as 'robust enough to be a useful tool for specific purposes' rather than 'very robust'.
Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.
Some initial insight into what this might look like practically: Trump has promised to repeal Biden's executive order on AI (YMMV on how seriously you take Trump's promises)
Really interesting initiative to develop ethanol analogs. If successful, replacing ethanol with a less harmful substance could really have a big effect on global health. The CSO of the company (GABA Labs) is Prof. David Nutt, a prominent figure in drug science.
I like that the regulatory pathway might be different from most recreational drugs, which would be very hard to get de-scheduled.
I'm pretty skeptical that GABAergic substances are really going to cut it, because I expect them to have pretty different effects from alcohol. We already have those (L-theanine, saffron, kava, kratom) and they aren't used widely. But who knows, maybe that's just because ethanol-containing drinks have received a lot of optimization in terms of taste, marketing, and production efficiency.
It also seems like finding a good compound by modifying ethanol would be hard, because it's not a great lead compound in terms of toxicity (I expect).
People massively underestimate the damage alcohol causes per use because of how normalised it is.
Hayven Frienby
Agreed. Alcohol is ubiquitous because it’s normalized, and its damaging health effects are glossed over for the same reason (as well as corporate profits).
GABA Labs is a good initiative, I think. I do know kava (a popular drink in parts of Polynesia) acts on GABA receptors and can have similar effects to alcohol in high doses, but I’m not sure what the long-term health effects of kava use are.
SiebeRozendal
Heavy use of kava is associated with liver damage, but it seems much less toxic than alcohol. (I use it in my insomnia stack)
Elina Christian
Hi, I agree EtOH is extremely harmful. However, there are existing medications which act on GABA, many of which are highly addictive and therefore highly regulated themselves. Barbiturates are a (now outdated) drug class which acts on GABA; others include benzodiazepines and more modern sleep drugs like zolpidem. All have significant side effects.
This website strikes me as very selective in how scientific it is - for example, "At higher levels (blood ethanol >400mg%, as would occur after drinking a litre of vodka) then these two effects of ethanol – the increase in GABA inhibition and the blockade of glutamate excitation – can combine to produce a lethal level of sedation and respiratory depression. In terms of health impacts, alcohol (strictly speaking, ethanol) is in a class of its own, and very different from GABA." EtOH is not that different from GABA, as you can also overdose and cause respiratory depression and death from GABA inhibition. I would like to see some more peer-reviewed studies around this new drink, and a comparison to placebo (if you're giving people this drink and saying it will enhance "conviviality and relaxation", then it probably will).
As with pretty much anything health related, there's no quick fix. Things which depress the CNS are addictive, and not that dissimilar from one another. I can see the marketing opportunity for this in the "health food" arena, which makes me more skeptical of this site. I imagine, if released, it may have a similar fate to cannabinoid molecules being included in all sorts of products - allowed because they are ineffective, or vapes - with a different risk profile to the original substance.
This idea had been floating in my head for a bit. Maybe someone else has made it (Bostrom? Schulman?), but if so I don't recall.
Humans have stronger incentives to cooperate with humans than AIs have with other AIs. Or at least, here are some incentives working against AI-AI cooperation.
When humans dominate other humans, there is only a limited ability to control them or otherwise extract value, in the modern world. Occupying a country is costly. The dominating party cannot take the brains of the dominated party and run i... (read more)
I guess this somewhat depends on how good you expect AI-augmented persuasion/propaganda to be. Some have speculated it could be extremely effective. Others are skeptical. Totalitarian regimes provide an existence proof of the feasibility of controlling populations in the medium term using a combination of pervasive propaganda and violence.
SiebeRozendal
That seems relevant for AI vs. Humans, but not for AI vs AI.
Most totalitarian regimes are pretty bad at creating value, with China & Singapore as exceptions. (But in many regimes, creating that value isn't necessary to remain in power if there's e.g. income from oil.)
Milan Weibel🔹
Humans could use AI propaganda tools against other humans. Autonomous AI actors may have access to better or worse AI propaganda capabilities than those used by human actors, depending on the concrete scenario.
There is a natural alliance I haven't seen happen between two groups that are both in my network: pandemic preparedness and covid-caution. Both want clean indoor air.
The latter group of citizens is a very mixed group, with both very reasonable people and unreasonable 'doomers'. Some people have good reason to remain cautious around COVID: immunocompromised people & their household, or people with a chronic illness, especially my network of people with Long Covid, who frequently (~20%) worsen from a new COVID case.
But these concerned citizens want clean air and are willing to take action to make that happen. Given that the riskiest pathogens tend to also be airborne, like SARS-COV-2, this would be a big win for pandemic preparedness.
Specifically, I believe both communities are aware of the policy objectives below and are already motivated to achieve them:
1) Air quality standards (CO2, PM2.5) in public spaces.
Schools are especially promising from both perspectives, given that parents are motivated to protect their children & children are the biggest spreaders of airborne diseases. Belgium has already adopted regulations (although very weak, it's a good start), showing that this i... (read more)
Another group that naturally could be in a coalition with those two: parents who just want clean air for their children to breathe from a pollution perspective, unrelated to covid. (In principle, I think many ordinary adults should also want clean air for themselves to breathe, due to the health benefits, but in practice I expect a much stronger reaction from parents who want to protect their children's lungs.)
I am very concerned about the future of US democracy and rule of law and its intersection with US dominance in AI. On my Manifold question, forecasters (n=100) estimate a 37% chance that the US will no longer be a liberal democracy by the start of 2029 [edit: as defined by V-DEM political scientists].
Project 2025 is an authoritarian playbook, including steps like 50,000 political appointees (there are ~4,000 appointable positions, of which ~1,000 change in a normal presidency). Trump's chances of winning are significantly above 50%, and even if he loses, Republic... (read more)
On my Manifold question, forecasters (n=100) estimate a 37% chance that the US will no longer be a liberal democracy by the start of 2029.
Your question is about V-Dem's ratings for the US, but these have a lot of problems, so I don't think they shine nearly as much light on the underlying reality as you suggest. Your Manifold question isn't really about the US being a liberal democracy; it's about V-Dem's evaluation. The methodology is not particularly scientific - it's basically just what some random academics say. In particular, this leaves it quite vulnerable to the biases of academic political scientists.
- The election of Trump is a huge outlier, giving a major reduction to 'Liberal Democracy'.
- This impact occurs immediately despite the fact that he didn't actually pass that many laws or make that many changes in 2017; I think this is more about perception than reality.
- The impact appears to be larger than the abolition of slavery, the passage of the Civil Rights Act, the Second World War, conscription, or female suffrage. This seems very implausible to me.
- Freedom of domestic movement increased in 2020, despite the introdu
V-Dem indicators seem to take into account statements made by powerful politicians, not only their policies or other deeds. For example, I found this in one of their annual reports:
My guess is that statements made by Trump were extreme outliers in how little respect they showed for democratic institutions, compared to statements made by earlier US presidents, and that affected their model.
I think that's reasonable. It might not be fully reflective of lived reality for US citizens at the moment the statements are made, but it sure captures the beliefs and motives of powerful people, which is predictive of their future actions.
Indeed, one way to see the drop in 2017 is that it was able to predict a major blow to American democracy (Trump refusing to concede an election) 4 years in advance.
Larks
I'm not really sure this contradicts what I said very much. I agree the V-Dem evaluators were reacting to Trump's comments, and this made them reduce their rating for America. I think they will react to Trump's comments again in the future, and this will likely make them reduce their rating for America again. This will happen regardless of whether policy changes, and be poorly calibrated for actual importance - contra V-Dem, Trump getting elected was less important than the abolition of slavery. Since I think Siebe was interested in policy changes rather than commentary, this means V-Dem is a bad metric for him to look at.
SiebeRozendal
I would be very interested to hear whether you have a preferred metric!
Larks
Great question, and it is something I thought a little bit about.
My process was to ask "what are people really worried about from a Trump Presidency" and try to explicitly put that into questions.
One option is to think about the Presidency instrumentally. We can look at forecasts of object-level things that people care about, like unemployment, GDP per capita, the murder rate, life expectancy and so on, and create markets for the 2028 value of these things conditional on different winners in the Presidential election.
We could also try to identify specific freedoms people care about - e.g. a market in whether anyone will be imprisoned for tweeting criticism of the government with no aggravating factors or the number of opposite-party state governors placed under house arrest, again conditional on the winner of the election.
What a lot of people seem to be most concerned about is the end of democracy. I think some of the most obvious metrics here - e.g. will elections be held in 2028, will the size of the electorate shrink dramatically - would be regarded as straw men by Trump-sceptics, though it could still be good to have markets for them just to prove this. We want something that captures whether there will be a 'real' (fair) election, without trusting partisan evaluations of said fairness, given both factions' repeated willingness to accuse elections they lost of being stolen/rigged etc. When I think about the difference between elections in the US or other democracies, and those in dictatorships, one of the key differences is uncertainty: I really don't know who will win the next UK election (though I favour Labour as more likely), but despite knowing little about its internal politics I'm pretty sure a CCP candidate will win in China.
This suggests a metric: whether the prediction markets in 2025 will be very confident that one party or other will win in 2028. Given the usual contestability of US elections, and the lack of specific information about the eco
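In code, the metric I have in mind is just the binary entropy of the market's implied win probability - maximal when the contest is genuinely 50/50, and near zero when one side is a foregone conclusion. This is my own illustrative sketch, not an existing Manifold feature:

```python
import math

def market_uncertainty(p_win: float) -> float:
    """Binary entropy (bits) of a two-outcome market price.
    1.0 = maximal uncertainty (fair contest); ~0 = foregone conclusion."""
    if p_win in (0.0, 1.0):
        return 0.0
    q = 1.0 - p_win
    return -(p_win * math.log2(p_win) + q * math.log2(q))

# A contested US-style election vs. a CCP-style certainty:
contested = market_uncertainty(0.55)   # ~0.99 bits
foregone = market_uncertainty(0.99)    # ~0.08 bits
```

One could then ask: does the 2028 market's entropy, measured in 2025, stay in the historically normal range for US elections, or collapse toward zero?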
SiebeRozendal
Interestingly, someone came up with a similar operationalisation just now! (Or maybe this is you?): https://manifold.markets/Siebe/if-trump-is-elected-will-the-us-sti#zdkmuetvo8c
I like the 1-year-before version more, because it takes time to accumulate power and overcome checks & balances.
I do think this has shortcomings, in that it's hard to predict what would be attempted, and whether that would be successful. But I'm very much in favor of having multiple imperfect operationalisations and triangulating from those.
Larks
Yeah, before starting this you'd want to look at historical landslide-but-fair elections and see how far in advance they were known. Things like UK Labour in 1997 or Nixon in 1972 / Obama in 2008. I don't have a strong sense for the balance.
David Mathers🔸
This is a good comment, but I think I'd always seen Singapore classed as a soft authoritarian state where elections aren't really free and fair, because of things like state harassment of government critics, even though the votes are counted honestly and multiple parties can run. Though I don't know enough about Singapore to be sure, I have a vague sense Botswana might be a purer example of an actual liberal democracy where one party keeps winning because it has a good record in power. It's also usually a safe bet the LDP will be in power in Japan, though they have occasionally lost.
SiebeRozendal
Thank you, these are some good points. When I made the question, I believed V-DEM had a more rigorous methodology, and I can't change it now.
I don't think the specific probability is necessary for my argument (and it depends on how one defines 'liberal democracy'): a Trump presidency with an enabling Supreme Court would be very harmful to US liberal democracy and the rule of law, and a nationalized AGI project under such a government would be very risky.
SiebeRozendal
I don't really understand why so many people are downvoting this. If anyone would like to explain, that'd be nice!
SiebeRozendal
P.P.S. I am also concerned about silencing/chilling effects: if you want to get anything political done in the next few years, it's probably strategic to refrain from criticizing Trump & his allies anywhere publicly, including the EA Forum.
SiebeRozendal
P.S. I don't think the Forum norm of non-partisanship should apply in its strong form in the case of the US. The Republican party has clearly become an anti-democratic, anti-rule of law, and anti-facts party. This has been concluded by many political scientists and legal scholars.
Chevron deference is a legal doctrine that limits the ability of courts to overrule federal agencies. It's increasingly being challenged, and may be narrowed or even overturned this year.
https://www.eenews.net/articles/chevron-doctrine-not-dead-yet/
This would greatly limit the ability of, for example, a new regulatory agency on AI Governance to function effectively.
I'm very skeptical of this. Chevron deference didn't even exist until 1984, and the US had some pretty effective regulatory agencies before then. Similarly, many states have rejected the idea of Chevron deference (e.g. Delaware) and I am not aware of any strong evidence that they have suffered 'chaos'.
In some ways it might be an improvement from the perspective of safety regulation: getting rid of Chevron would reduce the ability of future, less safety-cautious administrations to relax the rules without the approval of Congress. To the extent you are worried about regulatory capture, you should think that Chevron is a risk. I think the main crux is whether you expect Congress or the Regulators to have a better security mindset, which seems like it could go either way.
In general the ProPublica link seems more like a hatchet job than a serious attempt to understand the issue.
I am concerned about the H5N1 situation in dairy cows and have written an overview document to which I occasionally add new learnings (new to me or new to the world). I also set up a WhatsApp community that anyone is welcome to join for discussion & sharing news.
In brief:
I believe there are quite a few (~50-250) humans infected recently, but no sustained human-to-human transmission
I estimate the Infection Fatality Rate to be substantially lower than the ALERT team does (they put 63% on the CFR being >= 10%): something like an 80% CI of 0.1-5.0%
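As an illustration of how one could compute such an interval from sparse data, here's a toy beta-binomial sketch. The counts below are hypothetical placeholders, not the actual H5N1 case data:

```python
import random

def ifr_credible_interval(deaths: int, infections: int,
                          ci: float = 0.8, n_draws: int = 100_000,
                          seed: int = 0) -> tuple[float, float]:
    """Monte Carlo credible interval for an IFR under a uniform Beta(1,1)
    prior: posterior is Beta(deaths + 1, survivors + 1)."""
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(deaths + 1, infections - deaths + 1)
                   for _ in range(n_draws))
    lo = draws[int((1 - ci) / 2 * n_draws)]
    hi = draws[int((1 + ci) / 2 * n_draws)]
    return lo, hi

# Hypothetical: 1 death among 150 suspected infections.
lo, hi = ifr_credible_interval(1, 150)
```

The main point such a sketch illustrates: with only a handful of confirmed outcomes, the interval spans an order of magnitude, so single-number IFR claims should be treated with suspicion.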
Given how bird flu is progressing (spread in many cows, virologists believing rumors that humans are getting infected but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.
What are you referring to here?
We already have confirmation of hundreds of cases of people getting infected with H5N1 from contact with animals (only 2 cases in the US so far, but one of them very recent). We can guess that there might be some percentage of unreported extra cases, but I'd expect that to be small because of the virus's high mortality rate in its current form (and how much vigilance there is now).
So, I'm confused whether you're referring to confirmed information with the word "rumors," or whether there are rumors of some new development that's meaningfully more concerning than what we already have confirmations of. (If so, I haven't come across it – though "virus particles in milk" and things like that do seem concerning.)
SiebeRozendal
In This Week in Virology, Vincent Racaniello says that he had visited Ohio farmers, and that farm workers were getting specifically conjunctivitis rather than respiratory infections. He mentioned this really casually.
This Week in Virology TWiV 1108: Clinical update with Dr. Daniel Griffin
Also this, from an opinion piece by Zeynep Tüfekçi in the NY Times: "It's not like there's any at-scale human testing"
However, I don't think these cases are likely to lead to sustained human-to-human transmission, if it's true that most have only conjunctivitis.
It's in line with the one confirmed case, which only had conjunctivitis and no other symptoms: https://www.cdc.gov/media/releases/2024/p0401-avian-flu.html
It's also in line with Fouchier et al., 2004
It spreading to pig farms seems the biggest risk at the moment, and is not unlikely.
SiebeRozendal
More links:
April 22, Science:
April 29, Daily Mail:
https://www.dailymail.co.uk/health/article-13363325/bird-flu-outbreak-humans-texas-farm-worker-sick.html
SiebeRozendal
Btw, I don't think the virus has a high mortality rate in its current form, based on these reported rumors.
Monoclonal antibodies can be as effective as vaccines. If they can be given intramuscularly and have a long half-life (like Evusheld, ~2 months), they can act as a prophylactic that needs a booster once or twice a year.
They are probably neglected as a method to combat pandemics.
Their efficacy is easier to evaluate in the lab, because they generally don't rely on people's immune system.
This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!
I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and it's an especially valuable career path for people with an engineering background.
Why I think risk analysis is useful:
EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods enough to judge when they are useful or not (my past self included). Risk analysis expands this toolkit tremendously, and teaches stuff like the proper use of priors, underlying assumptions of different models, and common mistakes in risk models.
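As a tiny illustration of the difference between a point-estimate spreadsheet and the kind of uncertainty propagation risk analysis teaches (all distributions below are made-up assumptions, not a real model):

```python
import random

def cost_effectiveness_draw(rng: random.Random) -> float:
    """One Monte Carlo draw of a toy cost-effectiveness model.
    Both parameter distributions are illustrative assumptions."""
    cost_per_unit = rng.lognormvariate(2.0, 0.5)    # e.g. dollars per unit
    effect_per_unit = rng.lognormvariate(0.0, 0.8)  # e.g. QALYs per unit
    return effect_per_unit / cost_per_unit

rng = random.Random(42)
draws = sorted(cost_effectiveness_draw(rng) for _ in range(50_000))
median = draws[len(draws) // 2]
p10, p90 = draws[5_000], draws[45_000]
# The 10th-90th percentile range is several-fold wide here; multiplying
# two point estimates would hide that entirely.
```

Guesstimate does something like this under the hood; the value of formal training is knowing when such a model's independence and distributional assumptions actually hold.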
Aww yes, people writing about their life and career experiences! Posts of this type seem to have some of the best ratio of "how useful people find this" to "how hard it is to write" -- you share things you know better than anyone else, and other people can frequently draw lessons from them.
UPDATE NOV 2022: turns out the forecast was wrong and incidence (new cases) is decreasing, severity of new cases is decreasing, and significant numbers of people are recovering in the <1 year category. I now expect prevalence to be stagnating/decreasing for a while, and then slowly growing over the next few years.
I still believe the other sections to be roughly correct, i... (read more)
I have a concept of paradigm error that I find helpful.
A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention when there is deep uncertainty.
Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.
I think I call this "the wrong frame".
"I think you are framing that incorrectly etc"
e.g. in the UK there is often discussion of whether LGBT lifestyles should be taught in school and at what age. This makes them seem weird and makes it seem risky. But this is the wrong frame - LGBT lifestyles are typical behaviour (for instance, there are more LGBT people than adherents of many major world religions). Instead the question is: at what age should you discuss, say, relationships in school? There is already an answer here - I guess children learn about "mummies and daddies" almost immediately. Hence, at the same time you talk about mummies and daddies, you talk about mummies and mummies, and single dads and everything else.
By framing the question differently the answer becomes much clearer. In many cases I think the issue with bad frames (or models) is a category error.
Alexxxxxxx
I like this; I think I use the wrong models when trying to solve challenges in my life.
I'm predicting a 10-25% probability that Russia will use a weapon of mass destruction (likely nuclear) before 2024. This is based on only a few hours of thinking about it with little background knowledge.
Russian pro-war propagandists are hinting at use of nuclear weapons, according to the latest BBC podcast Ukrainecast episode. [Ukrainecast] What will Putin do next? #ukrainecast
https://podcastaddict.com/episode/145068892 via @PodcastAddict
There's a general sense that, in light of recent losses, something needs to change. My limited understanding sees 4 op... (read more)
The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My manifold question puts an attempt to seize power if they lose legitimate elections at 30% (n=37). I put it much higher.[1]
Not only is this concerning by itself, it also incentivizes them to achieve a decisive strategic advantage via superintelligence over pro-democracy factions. As a consequence, they may be willing to rush and cut corners on safety.
Crucially, this relies on them believing superintelligence can be achieved before a transfer of power.
I don't know how much the belief in superintelligence has spread into the administration. I don't think Trump is 'AGI-pilled' yet, but maybe JD Vance is? He made an accelerationist speech. Making them more AGI-pilled and advocating for nationalization (like Aschenbrenner did last year) could be very dangerous.
So far, my pessimism about US Democracy has put me in #2 on the Manifold topic, with a big lead over other traders. I'm not a Superforecaster though.
The forum likes to catastrophize Trump but I need to point out a few things for the sake of accuracy since this is very misleading and highly upvoted.
I don't think it matters much, but I was Manifold's #1 trader until I stopped, and I'm fairly well regarded as a forecaster.
I think this statement is highly misleading. First, I think compared to most other fora and groups, this Forum is decidedly not catastrophizing Trump.
Second, if you don't see "any evidence of an authoritarian takeover" then you are clearly not paying very much attention.
I think there is a fair debate to be had about how seriously to take various signs of authoritarianism on the part of the Administration, but "seeing no evidence of it" is not really consistent with the signals one does readily find when paying attention, such as:
- an attack on the independence of the judiciary and law firms, complaining about the fact that courts exercise their legitimate powers
- flirting with the idea of being in defiance of court orders
- talking about a third term
- praising Putin, Orban, and other authoritarians
- undermining due process
For the record, by authoritarian takeover I mean a gradual process aiming for a situation like Hungary (which they've frequently cited as their inspiration and something to aspire to). Given that Trump has tried to orchestrate a coup the last time he was in office, I don't think it's a hyperbolic claim to say he's trying again this time. I'm also not making any claims about the likelihood of success.
I think this is very uncharitable to other Forum users. (Unless you meant "is getting only upvotes [..]")
We’re probably already violating Forum rules by discussing partisan politics, but I’m curious to hear how you view Trump’s claim that he is “not joking” about a third term. Is this:
And then, for whichever you believe, could you explain how it isn’t an authoritarian takeover?
(I choose this example because it's relatively clear-cut, but we could point to Trump v. United States, the refusal to follow court orders related to deportations, instructing the AG not to prosecute companies for unbanning TikTok, the attempts from his surrogates to buy votes, freezing funding for agencies established by acts of Congress, bombing Yemen without seeking approval from Congress, kidnapping and holding legal residents without due process, etc. etc. etc., I just think those have greyer areas)
I think 1, 3, and 4 are all possible.
Trump and crew spout millions of lies. It's very common at this point. If you get worked up about every one of these, you're going to lose your mind.
Look, I'm not happy about this Trump stuff either. It's incredibly destabilizing for many reasons. But you are going to lose focus on important things if you get swept up into the daily Trump news. If you are focused on AI safety or animal welfare or poverty or whatever it may be, your most effective thing will almost certainly be focusing on something else.
What evidence would you need to see to conclude that an Orbanisation of the US government is beginning, but still early enough to prevent it?
I'm starting a discussion group on Signal to explore and understand the democratic backsliding of the US at ‘gears-level’. We will avoid simply discussing the latest outrageous thing in the news, unless that news is relevant to democratic backsliding.
Example questions:
- “how far will SCOTUS support Trump's executive overreach?”
- “what happens if Trump commands the military to support electoral fraud?”
- "how does this interact with potentially short AGI timelines?”
- "what would an authoritarian successor to Trump look like?"
- "are there any neglected, tractable
... (read more)
Here's an argument I made in 2018 during my philosophy studies:
A lot of animal welfare work is technically "long-termist" in the sense that it's not about helping already existing beings. Farmed chickens, shrimp, and pigs only live for a couple of months, farmed fish for a few years. People's work typically takes longer than that to impact animal welfare.
For most people, this is no reason to not work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings.
But once you accept the moral goodness of that, there's little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering that Artificial Superintelligence could cause to humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely)
The distinction comes in at the empirical uncertainty/speculativeness of reducing s-risk. But I'm not sure if that uncertainty is treated the same as uncertainty about shrimp or insect welfare.
I suspect many peop... (read more)
A lot of post-AGI predictions look more like 1920s predictions of flying cars (technically feasible, maximally desirable if no other constraints, the same thing as the current system but better) than like predicting EasyJet: crammed low-cost airlines (physical constraints imposing economic constraints, shaped by iterative regulation, different from the current system)
I just learned that Lawrence Lessig, the lawyer who is/was representing Daniel Kokotajlo and other OpenAI employees, supported and encouraged electors to be faithless and vote against Trump in 2016.
He wrote an opinion piece in the Washington Post (archived) and offered free legal support. The faithless elector story was covered by Politico, and was also supported by Mark Ruffalo (the actor who recently supported SB-1047).
I think this was clearly an attempt to steal an election and would discourage anyone from working with him.
I expect someone to eventually sue AGI companies for endangering humanity, and I hope that Lessig won't be involved.
I don't understand why so many are disagreeing with this quick take, and would be curious to know whether it's on normative or empirical grounds, and if so where exactly the disagreement lies. (I personally neither agree nor disagree as I don't know enough about it.)
From some quick searching, Lessig's best defence against accusations that he tried to steal an election seems to be that he wanted to resolve a constitutional uncertainty. E.g.,: "In a statement released after the opinion was announced, Lessig said that 'regardless of the outcome, it was critical to resolve this question before it created a constitutional crisis'. He continued: 'Obviously, we don’t believe the court has interpreted the Constitution correctly. But we are happy that we have achieved our primary objective -- this uncertainty has been removed. That is progress.'"
But it sure seems like the timing and nature of that effort (post-election, specifically targeting Trump electors) suggest some political motivation rather than purely constitutional concerns. As best as I can tell, it's in the same general category of efforts as Giuliani's effort to overturn the 2020 election, though importantly different in that Giuliani (a) had the support and close collaboration of the incumbent, (b) seemed to actually commit crimes doing so, and (c) did not respect court decisions the way Lessig did.
Ray Dalio is giving out free $50 donation vouchers: tisbest.org/rg/ray-dalio/
Still worked just a few minutes ago
GiveWell is available (search Clear Fund)!
Common prevalence estimates are often wrong. Example: snakebites and my experience reading Long Covid literature.
Institutions like the WHO and the academic literature both appear to be incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but I have not looked into it.
I advise everyone using prevalence estimates to treat them with some skepticism and look up the source.
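One concrete way to apply that skepticism (numbers below are hypothetical, not real figures for snakebites or Long Covid): check whether a claimed prevalence is even consistent with a plausible incidence, using the steady-state relation prevalence ≈ incidence × mean duration.

```python
def implied_annual_incidence(prevalence: float,
                             mean_duration_years: float) -> float:
    """Steady-state epidemiology: prevalence ~= incidence * mean duration.
    Returns the annual incidence a claimed prevalence would imply."""
    return prevalence / mean_duration_years

# Hypothetical claim: 10% of a population currently has a condition that
# lasts 2 years on average. That implies 5% of the population is newly
# affected every year - a figure one can sanity-check against
# surveillance data before trusting the headline prevalence.
rate = implied_annual_incidence(0.10, 2.0)  # 0.05
```

If the implied incidence is wildly out of line with what surveillance systems report, either the prevalence estimate or the duration assumption is wrong.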
Not that we can do much about it, but I find the idea of Trump being president in a time that we're getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.
Really interesting initiative to develop ethanol analogs. If successful, replacing ethanol with a less harmful substance could really have a big effect on global health. The CSO of the company (GABA Labs) is prof. David Nutt, a prominent figure in drug science.
I like that the regulatory pathway might be different from most recreational drugs, which would be very hard to get de-scheduled.
I'm pretty skeptical that GABAergic substances are really going to cut it, because I expect them to have pretty different effects than alcohol. We already have those (L-theanine, saffron, kava, kratom) and they aren't used widely. But who knows, maybe that's just because ethanol-containing drinks have received a lot of optimization in terms of taste, marketing, and production efficiency.
It also seems like finding a good compound by modifying ethanol would be hard, because it's not a great lead compound in terms of toxicity (I expect).
AI vs. AI non-cooperation incentives
This idea had been floating in my head for a bit. Maybe someone else has made it (Bostrom? Schulman?), but if so I don't recall.
Humans have stronger incentives to cooperate with humans than AIs have with other AIs. Or at least, here are some incentives working against AI-AI cooperation.
When humans dominate other humans, there is only a limited ability to control them or otherwise extract value, in the modern world. Occupying a country is costly. The dominating party cannot take the brains of the dominated party and run i... (read more)
There is a natural alliance that I haven't seen happen, but both are in my network: pandemic preparedness and covid-caution. Both want clean indoor air.
The latter group of citizens is a very mixed group, with both very reasonable people and unreasonable 'doomers'. Some people have good reason to remain cautious around COVID: immunocompromised people & their household, or people with a chronic illness, especially my network of people with Long Covid, who frequently (~20%) worsen from a new COVID case.
But these concerned citizens want clean air, and are willing to take action to make that happen. Given that the riskiest pathogens trend to also be airborne like SARS-COV-2, this would be a big win for pandemic preparedness.
Specifically, I believe both communities are aware of the policy objectives below and are already motivated to achieve it:
1) Air quality standards (CO2, PM2.5) in public spaces.
Schools are especially promising from both perspectives, given that parents are motivated to protect their children & children are the biggest spreaders of airborne diseases. Belgium has already adopted regulations (although very weak, it's a good start), showing that this i... (read more)
I am very concerned about the future of US democracy and rule of law and its intersection with US dominance in AI. On my Manifold question, forecasters (n=100) estimate a 37% that the US will no longer be a liberal democracy by the start of 2029 [edit: as defined by V-DEM political scientists].
Project 2025 is an authoritarian playbook, including steps like 50,000 political appointees (there are ~4,000 appointable positions, of which ~1,000 change in a normal presidency). Trump's chances of winning are significantly above 50%, and even if he loses, Republic... (read more)
Your question is about V-Dem's ratings for the US, but these have a lot of problems, so I think don't shine nearly as much light on the underlying reality as you suggest. Your Manifold question isn't really about the US being a liberal democracy, it's about V-Dem's evaluation. The methodology is not particularly scientific - it's basically just what some random academics say. In particular, this leaves it quite vulnerable to the biases of academic political scientists.
For example, if we look at the US:
A couple of things jump out at me here:
- The election of Trump is a huge outlier, giving a major reduction to 'Liberal Democracy'.
- This impact occurs immediately despite the fact that he didn't actually pass that many laws or make that many changes in 2017; I think this is more about perception than reality.
- The impact appears to be larger than the abolition of slavery, the passage of the civil rights act, the second world war, conscription or female suffrage. This seems very implausible to me.
- Freedom of domestic movement increased in 2020, despite the introdu
... (read more)Chevron deference is a legal doctrine that limits the ability of courts to overrule federal agencies. It's increasingly being challenged, and may be narrowed or even overturned this year. https://www.eenews.net/articles/chevron-doctrine-not-dead-yet/
This would greatly limit the ability of, for example, a new regulatory agency on AI Governance to function effectively.
More:
I'm very skeptical of this. Chevron deference didn't even exist until 1984, and the US had some pretty effective regulatory agencies before then. Similarly, many states have rejected the idea of Chevron deference (e.g. Delaware) and I am not aware of any strong evidence that they have suffered 'chaos'.
In some ways it might be an improvement from the perspective of safety regulation: getting rid of Chevron would reduce the ability of future, less safety-cautious administrations to relax the rules without the approval of Congress. To the extent you are worried about regulatory capture, you should think that Chevron is a risk. I think the main crux is whether you expect Congress or the Regulators to have a better security mindset, which seems like it could go either way.
In general the ProPublica link seems more like a hatchet job than a serious attempt the understand the issue.
I am concerned about the H5N1 situation in dairy cows and have written and overview document to which I occasionally add new learnings (new to me or new to world). I also set up a WhatsApp community that anyone is welcome to join for discussion & sharing news.
In brief:
Given how bird flu is progressing (spread in many cows, virologists believing rumors that humans are getting infected but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.
Monoclonal antibodies can be as effective as vaccines. If they can be given intramuscularly and have a long half life (like Evusheld, ~2 months), they can act as prophylactic that needs a booster once or twice a year.
They are probably neglected as a method to combat pandemics.
Their efficacy is easier to evaluate in the lab, because they generally don't rely on people's immune system.
This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!
I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to direectly work on important topics. I worked as a Research Associate on the qualitative ide of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and its an especially valuable career path for people with an engineering background.
Why I think risk analysis is useful:
EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods well enough to judge when they are useful (my past self included). Risk analysis expands this toolkit tremendously, and teaches things like the proper use of priors, the underlying assumptions of different models, and common mistakes in risk models.
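The point about priors can be illustrated with a toy Beta-Binomial model, the kind of exercise a risk analysis course works through explicitly. With sparse data, the prior dominates the estimate, and the numbers below (0 failures in 10 trials, a 1% prior) are purely hypothetical:

```python
def posterior_mean(prior_a: float, prior_b: float, failures: int, trials: int) -> float:
    """Posterior mean failure probability under a Beta(prior_a, prior_b)
    prior after observing `failures` in `trials` Bernoulli trials."""
    return (prior_a + failures) / (prior_a + prior_b + trials)

# Sparse data: 0 failures observed in 10 trials.
uniform = posterior_mean(1, 1, failures=0, trials=10)     # uniform Beta(1, 1) prior
skeptical = posterior_mean(1, 99, failures=0, trials=10)  # Beta(1, 99), prior mean 1%
print(f"uniform prior:   {uniform:.3f}")    # ~0.083
print(f"skeptical prior: {skeptical:.3f}")  # ~0.009
```

The two priors disagree by an order of magnitude on the same data, which is exactly the kind of modelling choice that tends to be invisible in a quick Guesstimate sheet.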
The field of Risk Analysis
Risk analysis is... (read more)
Update to my Long Covid report: https://forum.effectivealtruism.org/posts/njgRDx5cKtSM8JubL/long-covid-mass-disability-and-broad-societal-consequences#We_should_expect_many_more_cases_
UPDATE NOV 2022: turns out the forecast was wrong and incidence (new cases) is decreasing, severity of new cases is decreasing, and significant amounts of people are recovering in the <1 year category. I now expect prevalence to be stagnating/decreasing for a while, and then slowly growing over the next few years.
I still believe the other sections to be roughly correct, i... (read more)
I have a concept of paradigm error that I find helpful.
A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention when there is deep uncertainty.
Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.
It is... (read more)
I estimate a 10-25% probability that Russia will use a weapon of mass destruction (most likely nuclear) before 2024. This is based on only a few hours of thinking, with little background knowledge.
Russian pro-war propagandists are hinting at the use of nuclear weapons, according to the latest episode of the BBC podcast Ukrainecast ("What will Putin do next?"): https://podcastaddict.com/episode/145068892
There's a general sense that, in light of recent losses, something needs to change. My limited understanding sees 4 op... (read more)
Large study: every reinfection with COVID increases the risk of death, of acquiring other diseases, and of long COVID.
https://twitter.com/dgurdasani1/status/1539237795226689539?s=20&t=eM_x9l1_lFKqQNFexS6FEA
We are still going to see many more problems from COVID, including massive amounts of long COVID.
This will affect economies worldwide, as well as EAs personally.
Ah sorry I'm not going to do that, mix of reasons. Thanks for offering it though :)