Current takeaways from the 2024 US election for the forecasting community.
This is the first section of the Forecasting newsletter's US elections issue; I'm posting it here because it has some overlap with EA.
1. Polymarket beat legacy institutions at processing information, in real time and in general. It was simply much faster at calling states, and it was more confident in the correct outcome earlier on.
2. The OG prediction markets community, the one that has been betting on politics and growing its bankroll since PredictIt, was on the wrong side of 50%: 1, 2, 3, 4, 5. It was Polymarket's democratic, open-to-all nature, embodied by the Frenchman who was convinced that mainstream polls were tortured and bet ~$45M, that moved the market to the right side of 50/50.
3. Polls seem like a garbage-in, garbage-out situation these days. How do you get a representative sample? The answer may be that you don't.
4. Polymarket will live. They were useful to the Trump campaign, which has a much warmer perspective on crypto. The federal government isn't going to prosecute them, nor bettors. Regulatory agencies, like the CFTC and the SEC, which have taken such a prominent role in recent editions of this newsletter, don't really matter now, as they will be aligned with financial innovation rather than opposed to it.
5. NYT/Siena really fucked up with their last poll and the coverage of it. So did Ann Selzer. Some prediction market bettors might have thought that you could do the bounded-distrust thing, but in hindsight it turns out that you can't. Looking back, to the extent you trust these institutions, they can ratchet up their deceptiveness (misleading headlines, incomplete stories, quotes taken out of context, not reporting on important stories, etc.) for clicks and hopium, to shape the information landscape for a managerial class that... will no longer be in power in America.
6. Elon Musk and Peter Thiel look like geniuses. In contrast Dustin Moskovitz couldn't get SB 1047 passed despite being the s
How should AI alignment and autonomy preservation intersect in practice?
We know that AI alignment research has made significant progress in embedding internal constraints that prevent models from manipulating, deceiving, or coercing users (to the extent that current models refrain from doing so). However, internal alignment mechanisms alone don't necessarily give users meaningful control over an AI's influence on their decision-making. That is a mechanistic problem in its own right, but it also points to something broader.
This raises a question: Should future AI systems be designed to not only align with human values but also expose their influence in ways that allow users to actively contest and reshape AI-driven inferences?
For example:
* If an AI model generates an inference about a user (e.g., “this person prefers risk-averse financial decisions”), should users be able to see, override, or refine that inference?
* If an AI assistant subtly nudges users toward certain decisions, should it disclose those nudges in a way that preserves user autonomy?
* Could mechanisms like adaptive user interfaces (allowing users to adjust how AI explains itself) or AI-generated critiques of its own outputs serve as tools for reinforcing autonomy rather than eroding it?
I’m exploring a concept I call Autonomy by Design, a control-layer approach that builds on alignment research but adds external, user-facing mechanisms to make AI’s reasoning and influence more contestable.
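As a concrete illustration of what a contestable, user-facing inference layer might look like, here is a minimal Python sketch. It is purely illustrative: `InferenceLedger`, `Inference`, and their methods are hypothetical names I'm using for this post, not an existing library, and a real system would need provenance, auditing, and consent handling far beyond this.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Inference:
    """One model-generated inference about a user, kept contestable."""
    claim: str                      # e.g. "prefers risk-averse financial decisions"
    evidence: list[str]             # pointers to the interactions that produced it
    confidence: float               # model's own confidence, shown to the user
    user_override: Optional[str] = None  # user-supplied correction, if any

    @property
    def effective_claim(self) -> str:
        # The user's correction always wins over the model's inference.
        return self.user_override if self.user_override is not None else self.claim


@dataclass
class InferenceLedger:
    """User-facing registry of everything the assistant believes about them."""
    inferences: dict[str, Inference] = field(default_factory=dict)

    def record(self, key: str, inference: Inference) -> None:
        # Exposed to the user, rather than hidden in a latent profile.
        self.inferences[key] = inference

    def contest(self, key: str, correction: str) -> None:
        self.inferences[key].user_override = correction

    def explain(self, key: str) -> str:
        inf = self.inferences[key]
        return f"{inf.effective_claim} (confidence {inf.confidence:.2f}, evidence: {inf.evidence})"


# Example: the assistant infers a preference, the user contests it.
ledger = InferenceLedger()
ledger.record("risk_profile", Inference(
    claim="prefers risk-averse financial decisions",
    evidence=["2025-01-03 chat about index funds"],
    confidence=0.7,
))
ledger.contest("risk_profile", "comfortable with moderate risk for long-term goals")
print(ledger.explain("risk_profile"))
```

The design choice this sketch is meant to highlight: every inference carries its evidence so it can be inspected and contested, and a user override always takes precedence over the model's own claim, rather than the inference silently shaping downstream behaviour.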
Would love to hear from interpretability experts and UX designers: Where do you see the biggest challenges in implementing user-facing autonomy safeguards? Are there existing methodologies that could be adapted for this purpose?
Thank you in advance.
Feel free to shatter this if you must XD.
FYI rolling applications are back on for the Biosecurity Forecasting Group! We have started the pilot and are very excited about our first cohort! Don't want to apply but have ideas for questions? Submit them here (anyone can submit!).
I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision-making in their teams or area, potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!
Predict your year in 2025: a website for tracking your forecasts
2024 is over. Did your life this year align with your expectations? What came out of nowhere and threw off your predictions? Did your actions align with your intentions? What fresh goals are you planning?
We've built predict your year in 2025, a space for you to write down your predictions for the year. At the end of your year, you can return, resolve your predictions as YES, NO or AMBIGUOUS, and reflect.
We've written some starter questions to make it super easy to get started predicting your year. You can tweak these and write your own - those will likely be the most important questions for you.
You can use this tool to predict your personal life in 2025 - your goals, relationships, work, health, and adventures. If you like, you can share your predictions with friends - for fun, for better predictions, and for motivation to achieve your goals this year!
You can also use this tool to predict questions relevant to your team or organisation in the coming year - your team strategy, performance, big financial questions, and potentially disruptive black swans. You can share your predictions with your team and let everyone contribute, to build common knowledge about expectations and pool your insights.
If you use Slack, you can also share your page of predictions in a Slack channel (e.g. #2025-predictions or #strategy), so everyone can easily discuss in threads and return to it throughout the year.
I hope you have a good time thinking about your coming year, and that it sparks some great conversations with friends and teammates.
Happy new year!
Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it will have significant effects on the global trajectory of AI.
‘Five Years After AGI’ Focus Week happening over at Metaculus.
Inspired in part by the EA Forum’s recent debate week, Metaculus is running a “focus week” this week, aimed at trying to make intellectual progress on the issue of “What will the world look like five years after AGI (assuming that humans are not extinct)[1]?”
Leaders of AGI companies, while vocal about some things they anticipate in a post-AGI world (for example, bullishness about AGI making scientific advances), seem deliberately vague about other aspects. For example, power (will AGI companies have a lot of it? all of it?), whether some of the scientific advances might backfire (e.g., a vulnerable world scenario or a race-to-the-bottom digital minds takeoff), and how exactly AGI will be used for “the benefit of all.”
Forecasting questions for the week range from “Percentage living in poverty?” to “Nuclear deterrence undermined?” to “‘Long reflection’ underway?”
Those interested: head over here. You can participate by:
* Forecasting
* Commenting
* Comments are especially valuable on long-term questions, because the forecasting community has less of a track record at these time scales.[2][3]
* Writing questions
* There may well be some gaps in the admin-created question set.[4] We welcome question contributions from users.
The focus week will likely be followed by an essay contest, since a large part of the value in this initiative, we believe, lies in generating concrete stories for how the future might play out (and for what the inflection points might be). More details to come.[5]
1. ^
This is not to say that we firmly believe extinction won’t happen. I personally put p(doom) at around 60%. At the same time, however, as I have previously written, I believe that more important trajectory changes lie ahead if humanity does manage to avoid extinction, and that it is worth planning for these things now.
2. ^
Moreover, I personally take Nuño Sempere’s “Hurdles of using f
For a long time I found working out the fair odds for a bet between two people who disagree surprisingly nonintuitive, so I made a spreadsheet that did it, which then expanded into some other things.
* Spreadsheet here, which has four tabs based on different views on how best to pick the fair odds at which to bet when you and someone else disagree. (The fourth tab I didn't make at all; it was added by Luke Sabor, who was passionate about the standard deviation method!)
* People have different beliefs / intuitions about what's fair!
* An alternative to the mean probability would be to use the product of the odds ratios (see the sketch after this list).
Then if one person thinks .9 and the other .99, the "fair bet" will have an implied probability of more than .945.
* The problem with using the geometric mean can be highlighted if player 1 estimates 0.99 and player 2 estimates 0.01: the geometric mean is sqrt(0.99 × 0.01) ≈ 0.0995.
Betting at those odds would lead player 2 to contribute ~90% of the pot for an EV of 0.09, while player 1 contributes ~10% for an EV of 0.89 (each per unit of pot). I don't like that bet. In this case, mean prob and Z-score mean both agree at 50% contributions and equal EVs.
* "The tradeoff here is that using Mean Prob gives equal expected values (see underlined bit), but I don't feel it accurately reflects "put your money where your mouth is". If you're 100 times more confident than the other player, you should be willing to put up 100 times more money. In the Mean prob case, me being 100 times more confident only leads me to put up 20 times the amount of money, even though expected values are more equal."
* Then I ended up making an explainer video because I was excited about it.
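To make the numbers above concrete, here's a minimal Python sketch (mine, not taken from the spreadsheet; the function names are made up for this post) that strikes the bet at an agreed implied probability and reports each side's stake and expected value under their own beliefs. It covers the arithmetic mean of probabilities, the geometric mean of probabilities, and the geometric mean of odds, which is one common reading of the "product of the odds ratios" idea above.

```python
import math


def stakes_and_evs(p1: float, p2: float, p_bet: float):
    """Given two players' probabilities and the implied probability the bet is
    struck at, return (stake, EV) for the YES side and the NO side.

    The player with the higher probability takes YES and stakes p_bet of a
    1-unit pot; the other takes NO and stakes 1 - p_bet. EVs are computed
    under each player's own beliefs.
    """
    yes_p, no_p = (p1, p2) if p1 >= p2 else (p2, p1)
    yes_stake, no_stake = p_bet, 1 - p_bet
    yes_ev = yes_p * 1 - yes_stake          # wins the pot with prob yes_p
    no_ev = (1 - no_p) * 1 - no_stake       # wins the pot with prob 1 - no_p
    return (yes_stake, yes_ev), (no_stake, no_ev)


def mean_prob(p1, p2):
    return (p1 + p2) / 2


def geo_mean_prob(p1, p2):
    return math.sqrt(p1 * p2)


def geo_mean_odds(p1, p2):
    odds = math.sqrt((p1 / (1 - p1)) * (p2 / (1 - p2)))
    return odds / (1 + odds)


# The 0.99 vs 0.01 example from the bullet above.
p1, p2 = 0.99, 0.01
for name, agg in [("mean prob", mean_prob),
                  ("geometric mean of probs", geo_mean_prob),
                  ("geometric mean of odds", geo_mean_odds)]:
    p_bet = agg(p1, p2)
    (yes_stake, yes_ev), (no_stake, no_ev) = stakes_and_evs(p1, p2, p_bet)
    print(f"{name}: bet struck at {p_bet:.3f} | "
          f"YES bettor stakes {yes_stake:.2f} (EV {yes_ev:+.2f}), "
          f"NO bettor stakes {no_stake:.2f} (EV {no_ev:+.2f})")
```

Running it on the 0.99-vs-0.01 example reproduces the ~10%/~90% stakes and ~0.89/~0.09 EVs for the geometric mean of probabilities; swapping in 0.9 and 0.99 gives an implied probability of ~0.968 for the geometric mean of odds, above the arithmetic mean's 0.945.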
Other spreadsheets I've seen in the space:
* Brier score betting (a fifth way to figure out the correct bet ratio!)
* Posterior Forecast Calculator
* Inferring Probabilities from PredictIt Prices
All three are by William Kiely.
Does anyone else know of any? Or want to argue for one method over another?