You can send me a message anonymously here: https://www.admonymous.co/will
So I think the suggestion that there was some way to get to 65-90% certainty which apparently nobody was willing to either make with substantial evidence or cash in on to any significant extent is a pretty extraordinary claim...
I'm skeptical that nobody was rationally (i.e. not overconfidently) at >65% belief that Trump would win before election day. Presumably a lot of people holding Yes and buying Yes when Polymarket was at ~60% Trump believed Trump was >65% likely to win, right? And presumably a lot of them cashed in for a lot of money. What makes you think nobody was at >65% without being overconfident?
I'll grant that the French whale may well have been overconfident, since that seems very plausible, though I don't know it for sure; but that doesn't mean everyone at >65% was overconfident.
I'll also note that just because the market was at ~60% (or whatever precisely) does not mean that there could not have been people participating in the market who were significantly more confident that Trump would win, and rationally so.
The information that we gained between then and 1 week before the election was that the election remained close
I'm curious if by "remained close" you meant "remained close to 50/50"?
(The two are distinct, and I was guilty of pattern-matching "~50/50" to "close" even though ~50/50 could have meant that either Trump or Harris was likely to win by a lot (e.g. swing all 7 swing states) and we just had no idea which was more likely.)
Could you say more about "practically possible"?
Yeah. I said a bit about that in the ACX thread in an exchange with Jeffrey Soreff here. Initially I was talking about a "maximally informed" forecaster/trader, but when Jeffrey pointed out that that term was ill-defined, I realized that I had a lower-bar level of informedness in mind, one that was more practically possible than some notions of "maximally informed."
What steps do you think one could have taken to have reached, say, a 70% credence?
Basically just steps to become more informed and steps to have better judgment. (Saying specifically what knowledge would be sufficient to be able to form a forecast of 70% seems borderline impossible or at least extremely difficult.)
Before the election I was skeptical that people like Nate Silver and his team and The Economist's election modeling team were actually doing as good a job as they could have been[1] at forecasting who'd win the election, and post-election I remain skeptical that their forecasts were close to the best they could have been.
[1] "doing as good a job as they could have been" meaning I think they would have made substantially better forecasts in expectation (lower Brier scores in expectation) if figuring out who was going to win was really important to them (significantly more important than it actually was), and if they didn't care about the blowback for being "wrong" if they made a confident wrong-side-of-maybe forecast, and if they were given a big budget to use to do research and acquire information (e.g. $10M), and if they were highly skilled forecasters with great judgment (like the best in the world but not superhuman (maybe Nate Silver is close to this--IDK; I read his book The Signal and the Noise, but it seems plausible that there could still be substantial room for him to improve his forecasting skill)).
Note that I also made five Manifold Markets questions to help evaluate my PA election model (Harris and Trump means and SDs) and the claim that PA is ~35% likely to be decisive.
(Note: I accidentally resolved my Harris questions (#4 & #5) to the range of 3,300,000-3,399,999 rather than 3,400,000-3,499,999. Hopefully the mods will unresolve and correct this for me per my comments on the questions.)
This exercise wasn't too useful as there weren't enough other people participating in the markets to significantly move the prices from my initial beliefs. But I suppose that's evidence that they didn't think I was significantly wrong.
Before the election I made a poll asking "How much would you pay (of your money, in USD) to increase the probability that Kamala Harris wins the 2024 Presidential Election by 0.0001% (i.e. 1/1,000,000 or 1-in-a-million)?"
You can see 12 answers from rationalist/EA people after submitting your answer to the poll or jumping straight to the results.
I think elections tend to have low aleatoric uncertainty, and that our uncertain forecasts are usually almost entirely due to high epistemic uncertainty. (The 2000 Presidential election may be an exception where aleatoric uncertainty is significant. Very close elections can have high aleatoric uncertainty.)
I think Trump was actually very likely to win the 2024 election as of a few days before the election, and we just didn't know that.
Contra Scott Alexander, I think betting markets were priced too low, rather than too high. (See my (unfortunately verbose) comments on Scott's post Congrats To Polymarket, But I Still Think They Were Mispriced.)
I think some people may have reduced their epistemic uncertainty significantly and had justified beliefs (not overconfident beliefs) that Trump was ~65-90% likely to win.
I'm totally willing to believe that the French whale was not one of those people and actually just got lucky.
But I do think that becoming informed enough to rationally obtain a credence of >65% Trump was practically possible.
Thanks for writing up this post, @Eric Neyman . I'm just finding it now, but want to share some of my thoughts while they're still fresh in my mind before next election season.
This means that one extra vote for Harris in Pennsylvania is worth 0.3 μH. Or put otherwise, the probability that she wins the election increases by 1 in 3.4 million
My independent estimate from the week before the election was that Harris getting one extra vote in PA would increase her chance of winning the presidential election by about 1 in 874,000.
My methodology was to forecast the number of votes that Harris and Trump would each receive in PA, calculate the probability of a tie in PA given my probability distributions for the number of votes they would each get, then multiply the probability of the PA tie by the probability that PA is decisive (conditional on a tie).
I used normal distributions to model the expected number of votes Harris and Trump would each get for simplicity so I could easily model the outcomes in Google Sheets (even though my credence/PDF did not perfectly match a normal distribution). These were my parameters:
| Harris Mean | Harris SD | Trump Mean | Trump SD |
|---|---|---|---|
| 3,450,000 | 80,000 | 3,480,000 | 90,000 |
Simulating 10,000 elections in Google Sheets with these normal distributions found that about 654 elections per 10,000 had margins within a 20,000-vote window; spreading that 6.54% evenly over the 20,000 possible margins in the window gives a 1 in ~306,000 chance of PA being exactly tied. I then multiplied this by a ~35%[1] chance that PA would be decisive (conditional on it being tied), to get a 1 in ~874,000 chance of an extra vote for Harris in PA changing who won overall.
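Here's a minimal Python sketch of what I believe the spreadsheet was doing. The random seed and the reading of "within 20,000 votes" as |margin| ≤ 10,000 (a 20,000-vote-wide window) are my assumptions; with them it reproduces the ~654/10,000, 1 in ~306,000, and 1 in ~874,000 figures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed vote-total distributions from the table above (means and SDs in votes).
harris = rng.normal(3_450_000, 80_000, n)
trump = rng.normal(3_480_000, 90_000, n)

margin = harris - trump

# Share of simulated elections with the margin inside a 20,000-vote window
# (|margin| <= 10,000); comes out around 0.065, i.e. ~654 per 10,000.
share_close = np.mean(np.abs(margin) <= 10_000)

# Approximate the exact-tie probability by spreading that share uniformly
# over the 20,000 possible margins in the window: ~1 in 306,000.
p_tie = share_close / 20_000

# Multiply by the assumed ~35% chance PA is decisive conditional on a tie:
# ~1 in 874,000 that one extra PA vote changes who wins overall.
p_decisive_vote = p_tie * 0.35

print(f"P(margin in window): {share_close:.4f}")
print(f"P(PA tied): 1 in {1 / p_tie:,.0f}")
print(f"P(extra PA vote decides the election): 1 in {1 / p_decisive_vote:,.0f}")
```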
99% of the votes in PA are in right now, with the totals currently at: 3,400,854 for Harris and 3,530,234 for Trump.
This means that the vote totals for Harris and Trump both came in within one standard deviation of my mean expectations: Harris was about 0.6 SD low and Trump about 0.6 SD high.
From this, it's not clear that my SDs of 80,000 votes for Harris and 90,000 votes for Trump were too narrow, as your (or Nate's) model expected.
So I think my 1 in ~874,000 of an extra vote for Harris determining the president might have been more reasonable than your 1 in ~3.4 million.
[1] Note: I privately suspected that models and prediction markets might be wrong about the chance of PA being decisive, and that it was maybe closer to ~50% rather than ~25-35%. But my reason for thinking this was bad, and I didn't realize it until after the election: I made a simple pattern-matching mistake by assuming "the election is a toss-up" meant "it will be a close election". I failed to consider other possibilities, like "the election will not be close, but we just don't know which side will win by a lot." (In retrospect, this was a very silly mistake for me to make, especially since I had seen that, as of some late-October date, The Economist said the two most likely of the 128 swing-state combinations were a 20% chance that Trump swings all seven and a 7% chance that Harris swings all seven.)
(copying my comment from Jeff's Facebook post of this)
I agree with this and didn't add it (the orange diamond or 10%) anywhere when I first saw the suggestions/asks by GWWC to do so for largely the same reasons as you.
I then added it to my Manifold Markets *profile* (not my name field) after seeing another user had done so. I didn't know who the user was or that they had any affiliation with effective giving or EA, and I appreciated learning that, which is why I decided to do the same. I'm not committed to this at all and may remove it in the future. https://manifold.markets/WilliamKiely
I have not added it to my EA Forum name or profile. Everyone on the EA Forum knows that a lot of people there engage in effective giving, with a substantial percentage giving 10% or more. And unlike the Manifold Markets case, where it was a pleasant surprise to learn that a person gives 10+% (since presumably a much lower percentage of people there do than on the EA Forum), I don't particularly care to know whether a person on the EA Forum does effective giving or not. I also kind of want to avoid actively proselytizing for it on the EA Forum, since for a lot of people, trying to save money and focus on finding direct work may be a more valuable path than giving a mere 10% of a typical income at the expense of saving faster.
I have not added it anywhere else besides my Manifold Markets profile as far as I recall.
These new systems are not (inherently) agents. So the classical threat scenario of Yudkowsky & Bostrom (the one I focused on in The Precipice) doesn’t directly apply. That’s a big deal.
It does look like people will be able to make powerful agents out of language models. But they don't have to be in agent form, so it may be possible for the first labs to make aligned non-agent AGI to help with safety measures for AI agents, or for national or international governance to outlaw advanced AI agents, while still benefiting from advanced non-agent systems.
Unfortunately, the industry is already aiming to develop agent systems. Anthropic's Claude now has "computer use" capabilities, and I recall that Demis Hassabis has also stated that agents are the next area to make big capability gains. I expect more bad news in this direction in the next five years.
I wasn't a particularly informed forecaster, so my not telling you what information would have been sufficient to justify a rational 65+% confidence in Trump winning shouldn't be much evidence to you about the practicality of a very informed person rationally reaching a 65+% credence. Identifying what information would have been sufficient is a very time-intensive, costly project, and since I hadn't already done it, I wasn't about to spend months researching the data people in principle had access to that might have led to a >65% forecast just to answer your question.
Prior to the election, I had an inside view credence of 65% that Trump would win, but considered myself relatively uninformed and so I meta-updated on election models and betting market prices to be more uncertain, making my all-things-considered view closer to 50/50. As I wrote on November 4th:
So I held this suspicion before the election, and I hold it still. I think it's likely that such forecasters with rational credences of 65+% Trump victory did exist, and even if they didn't, I think it's possible that they could have existed if more people cared more about finding out the truth of who would win.