Bio

You can send me a message anonymously here: https://www.admonymous.co/will

Comments

I endorse this for non-EA vegans who aren't willing to donate the money to wherever it will do the most good in general. But as my other comments have pointed out, if a person (vegan or non-vegan) is willing to donate the money to wherever it will do the most good, then they should just do that rather than donate it for the purpose of offsetting.

Per my top-level comment citing Claire Zabel's post Ethical offsetting is antithetical to EA, offsetting past consumption seems worse than just donating that money to wherever it will do the most good in general.

I see you've taken the 10% Pledge, so I gather you're willing to donate effectively.

While you might feel better if you both donate X% to wherever you believe it will do the most good and $Y to the best animal charities to offset your past animal consumption, I think you instead ought to donate the combined amount, X% + $Y, to wherever it will do the most good.

NB: Maybe you happen to think the best giving opportunity to help animals is the best giving opportunity in general, but if not then my claim is that your offsetting behavior is a mistake.
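
To make the arithmetic behind this explicit, here's a toy sketch. All numbers are hypothetical, and "units of good" is a deliberately crude stand-in for whatever you value:

```python
# Toy illustration of pooling donations vs. earmarking an offset.
# All numbers are hypothetical.
best_per_dollar = 10     # units of good per dollar at the best opportunity overall
animal_per_dollar = 4    # units of good per dollar at the best animal charity

donation_x = 5000        # the X% donation, in dollars
offset_y = 500           # the $Y earmarked for offsetting

split = donation_x * best_per_dollar + offset_y * animal_per_dollar
pooled = (donation_x + offset_y) * best_per_dollar

print(split)   # 52000
print(pooled)  # 55000 -- pooling wins whenever best_per_dollar > animal_per_dollar
```

The two strategies come apart exactly when the best animal charity is not the best giving opportunity overall, which is the case the NB above flags.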

This seems like a useful fundraising tool to target people who are unwilling to give their money to wherever it will do the most good, but I think it should be flagged that if a person is willing to donate their money to wherever it will do the most good then they should do that rather than donate to the best animal giving opportunities for the purpose of ethical offsetting. See Ethical offsetting is antithetical to EA.

I'm now over 20 minutes in and haven't quite figured out what you're looking for. Just to dump my thoughts -- not necessarily looking for a response:

On the one hand it says "Our goal is to discover creative ways to use AI for Fermi estimation" but on the other hand it says "AI tools to generate said estimates aren’t required, but we expect them to help."

From the Evaluation Rubric, "model quality" is only 20%, so it seems like the primary goal is neither to create a good "model" (which I understand to mean a particular method for making a Fermi estimate on a particular question) nor to see if AI tools can be used to create such models.

The largest score (40%) is whether the *result* of the model that is created (i.e. the actual estimate that the model spits out with the numbers put into it) is surprising or not, with more surprising being better. But it's unclear to me whether the estimate actually needs to be believable for its surprisingness to count. Extreme numbers could just mean that the output is bad or wrong, not that the output should be evidence of anything.

> I don't see any particular reason to believe the means to obtain that knowledge existed and was used when you can't tell me what that might look like, never mind how a small number of apparently resource-poor people obtained it...

I wasn't a particularly informed forecaster, so my not telling you what information would have been sufficient to justify a rational 65+% confidence in Trump winning shouldn't be much evidence to you about the practicality of a very informed person reaching 65+% credence rationally. Identifying what information would have been sufficient is a very time-intensive, costly project, and given that I hadn't already done it, I wasn't about to spend months researching the data people in principle had access to that might have led to a >65% forecast just to answer your question.

Prior to the election, I had an inside view credence of 65% that Trump would win, but considered myself relatively uninformed and so I meta-updated on election models and betting market prices to be more uncertain, making my all-things-considered view closer to 50/50. As I wrote on November 4th:

> My 2/10 low information inside view judgment is that Trump is about 65% likely to win PA and the election. My all-things-considered view is basically 50%.
>
> However, notably, after about 10 hours of thinking about who will win in the last week, I don't know if I actually trust Nate and prediction markets to be doing a good job. I suspect that there may be well-informed people in the world who *know* that the markets are wrong and have justified "true" beliefs that one candidate is >65% likely to win. Such people presumably have a lot of money on the line, but not enough to more [sic] the market prices far from 50%.

So I held this suspicion before the election, and I hold it still. I think it's likely that such forecasters with rational credences of 65+% Trump victory did exist, and even if they didn't, I think it's possible that they could have existed if more people cared more about finding out the truth of who would win.
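
For concreteness, here's a minimal sketch of the kind of meta-update described above, using simple linear pooling; the weight and the outside-view number are hypothetical, not my actual procedure:

```python
# Linear pooling of an inside view with an outside view.
# The weight and outside-view number are hypothetical.
inside_view = 0.65    # low-information inside-view P(Trump wins)
outside_view = 0.50   # rough aggregate of election models and market prices
w_inside = 0.2        # low weight, per the "2/10 low information" self-assessment

all_things_considered = w_inside * inside_view + (1 - w_inside) * outside_view
print(round(all_things_considered, 2))  # 0.53 -- "basically 50%"
```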

> So I think the suggestion that there was some way to get to 65-90% certainty which apparently nobody was willing to either make with substantial evidence or cash in on to any significant extent is a pretty extraordinary claim...

I'm skeptical that nobody was rationally (i.e. not overconfidently) at >65% belief that Trump would win before election day. Presumably a lot of people holding Yes and buying Yes when Polymarket was at ~60% Trump believed Trump was >65% likely to win, right? And presumably a lot of them cashed in for a lot of money. What makes you think nobody was at >65% without being overconfident?

I'll grant that the French whale was overconfident, since that seems very plausible (though I don't know it for sure), but that doesn't mean everyone at >65% was overconfident.

I'll also note that just because the market was at ~60% (or whatever precisely) does not mean that there could not have been people participating in the market who were significantly more confident that Trump would win, and rationally so.
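
To make the incentive concrete, here's a toy sketch (numbers illustrative; it ignores fees and the opportunity cost of locked-up capital) of why a trader with a 70% credence keeps buying Yes at a 60-cent price:

```python
price = 0.60      # cost of one Yes share, which pays $1.00 if Trump wins
credence = 0.70   # the trader's probability that Trump wins

ev_per_share = credence * 1.00 - price   # expected profit per share
roi = ev_per_share / price               # expected return on capital
print(round(ev_per_share, 2))  # 0.1 -> ten cents per share in expectation
print(round(roi, 2))           # 0.17 -> ~17% expected return
```

Such a trader profits in expectation at any price below their credence, and a handful of them need not have enough capital to push a deep market all the way up to 70%.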

> The information that we gained between then and 1 week before the election was that the election remained close

I'm curious if by "remained close" you meant "remained close to 50/50"?

(The two are distinct, and I was guilty of pattern-matching "~50/50" to "close" even though ~50/50 could have meant that either Trump or Harris was likely to win by a lot (e.g. sweep all 7 swing states) and we just had no idea which was more likely.)

> Could you say more about "practically possible"?

Yeah. I said a bit about that in the ACX thread, in an exchange with Jeffrey Soreff here. Initially I was talking about a "maximally informed" forecaster/trader, but when Jeffrey pointed out that that term was ill-defined, I realized that I had a lower-bar notion of "informed" in mind that was more practically possible than some notions of "maximally informed."

> What steps do you think one could have taken to have reached, say, a 70% credence?

Basically just steps to become more informed and steps to have better judgment. (Saying specifically what knowledge would be sufficient to be able to form a forecast of 70% seems borderline impossible or at least extremely difficult.)

Before the election I was skeptical that people like Nate Silver and his team and The Economist's election modeling team were actually doing as good a job as they could have been[1] at forecasting who'd win the election, and post-election I remain skeptical that their forecasts were close to being the best they could have been.

[1] "doing as good a job as they could have been" meaning I think they would have made substantially better forecasts in expectation (lower Brier scores in expectation) if figuring out who was going to win was really important to them (significantly more important than it actually was), and if they didn't care about the blowback for being "wrong" if they made a confident wrong-side-of-maybe forecast, and if they were given a big budget to use to do research and acquire information (e.g. $10M), and if they were highly skilled forecasters with great judgment (like the best in the world but not superhuman (maybe Nate Silver is close to this--IDK; I read his book The Signal and the Noise, but it seems plausible that there could still be substantial room for him to improve his forecasting skill)).

Note that I also made five Manifold Markets questions to help evaluate my PA election model (Harris and Trump means and SDs) and the claim that PA is ~35% likely to be decisive.

  1. Will Pennsylvania be decisive in the 2024 Presidential Election?
  2. How many votes will Donald Trump receive in Pennsylvania? (Set)
  3. How many votes will Donald Trump receive in Pennsylvania? (Multiple Choice)
  4. How many votes will Kamala Harris receive in Pennsylvania? (Set)
  5. How many votes will Kamala Harris receive in Pennsylvania? (Multiple Choice)

(Note: I accidentally resolved my Harris questions (#4 & #5) to the range of 3,300,000-3,399,999 rather than 3,400,000-3,499,999. Hopefully the mods will unresolve and correct this for me per my comments on the questions.)
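
As a sketch of how a model like the one above (a mean and SD for each candidate's PA vote count) maps to a win probability, here's a Monte Carlo toy; the means and SDs below are made up for illustration and are not my model's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical normal distributions over PA vote totals (illustrative only).
trump = rng.normal(3_450_000, 120_000, n)
harris = rng.normal(3_400_000, 120_000, n)

p_trump_wins_pa = (trump > harris).mean()
print(round(p_trump_wins_pa, 3))  # ~0.62 with these made-up numbers
```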

This exercise wasn't too useful, as there weren't enough other people participating in the markets to move the prices significantly away from my initial beliefs. But I suppose that's some evidence that they didn't think I was significantly wrong.
