
dschwarz

CEO @ FutureSearch
futuresearch.ai

Bio

Interested in forecasting, epistemology, and AI. Long-time LW lurker, https://www.lesswrong.com/users/dschwarz

CEO of FutureSearch. Previously CTO of Metaculus, and creator of Google's current internal prediction market.

How others can help me

Please reach out at dan at futuresearch dot ai if you're interested in getting involved in AI forecasting.

Comments (13)

Definitely. I think all contribute to their thinking - their current finances, the growth rates, and the expected value of their future plans that don't generate any revenue today.

TechCongress, an org that places technologists in the offices of members of US Congress to write policy, has opened an AI Safety Fellowship with applications due in just 10 days, on June 12.

https://www.techcongress.io/ai-safety-fellowship

It pays well too. It is distinct from the U.S. Artificial Intelligence Safety Institute (with Paul Christiano and others), which I believe is in the executive branch, not the legislative branch like this one.

Exciting stuff Ozzie! We really need people to specify models in their forecasts, and the fact that you can score those models directly, not just the numbers derived from them, is a great step forward.

In your "Challenges for Forecasting Platforms", you write "Writing functions is more complicated than submitting point probability or single distribution forecasts". I'd go further and say that formulating forecasts as functions is pretty hard even for savvy programmers. Most forecasters vibe, and inside views (idiosyncratic) dominate base rates (things that look like functions).

Take your example "For any future year T (2025 to 2100) and country, predict the life expectancy..." Let's say my view is a standard time series regression on the last 20 years, plus a discontinuity around 2030 when cancer is cured, at which point we'll get a 3 year jump in the first world, and then a 1.5 year jump in the third world 2 years later.

Yes, I could express that as a fairly involved function, but isn't the sentence I wrote above a better description of my view? How do "inside view" forecasts get translated to functions?
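For concreteness, here is a minimal sketch of what that "fairly involved function" might look like in Python. Everything here is a placeholder for the view I described above, not anything the platform requires: the `history` data structure, the country classification, the 2030 cure date, and the jump sizes are all illustrative assumptions.

```python
import numpy as np

def life_expectancy(year: int, country: str,
                    history: dict[str, list[tuple[int, float]]]) -> float:
    """Point forecast of life expectancy for (year, country).

    Illustrative only: `history` maps country -> [(year, life_expectancy), ...],
    and all parameters below are assumptions from the comment above.
    """
    # Linear trend fit to roughly the last 20 years of observed data.
    recent = [(y, le) for y, le in history[country] if y >= 2004]
    years, values = zip(*recent)
    slope, intercept = np.polyfit(years, values, 1)
    baseline = slope * year + intercept

    # Discontinuity: cancer cured around 2030; 3-year jump in the first world,
    # then a 1.5-year jump in the third world 2 years later.
    first_world = country in {"USA", "UK", "Japan"}  # placeholder classification
    jump_year = 2030 if first_world else 2032
    jump_size = 3.0 if first_world else 1.5

    return baseline + (jump_size if year >= jump_year else 0.0)
```

Even this simple version buries the actual view (cure date, jump sizes, lag) inside code, which is exactly my worry about functions-as-forecasts.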

FutureSearch is hiring! We're seeking Full-Stack Software Engineers, Research Engineers, and Research Scientists to help us build an AI system that can answer hard questions, including the forecasting and research questions critical for effective allocation of resources. You can read more about our motivations and how it works.

Salary and benefits: $70k - $120k, depending on location and seniority. We aim to offer higher equity than startups of our size (6 people) typically do.

Location: Fully remote. We pay for travel every few months to work together around the US and Europe. We are primarily based in SF and London.

Apply on our careers page.

Hiring Process: 

  1. Answer some questions about your understanding of our domain (1 hour)
  2. Offline technical screen (2 hours)
  3. Online non-technical founder interview (45 minutes)
  4. Offer

This seems plausible, though perhaps it was more plausible 3 years ago. AGI is so mainstream now that I imagine there are many people who are motivated to advance the conversation but have no horse in the race.

If only the top cadre of AI experts are capable of producing the models, then yes, we might have a problem of making such knowledge a public good.

Perhaps philanthropists can provide incentives to share that outweigh the incentives not to.

Yeah, I do like your four examples of "just the numbers" forecasts that are valuable: weather, elections, what people believe, and "where is there lots of disagreement?" I'm more skeptical that these are useful, rather than curiosity-satisfying.

Election forecasts are a case in point. People will usually prepare for all outcomes regardless of the odds. And if you work in politics, deciding who to choose for VP or where to spend your marginal ad dollar, you need models of voter behavior. 

The best case for just-the-numbers is probably your point (b), shift-detection. I echo your point that many people seem struck by the shift in AGI risk on the Metaculus question.

I’m worried that in the context of getting high-stakes decision makers to use forecasts, some of the demand for rationales is due to lack of trust in the forecasts.

Undoubtedly some of it is. Anecdotally, though, high-level folks frequently take one (or zero) glances at the calibration chart, nod, and then say "but how am I supposed to use this?", even on questions I pick to be highly relevant to them, just like the paper I cited finding "decision-makers lacking interest in probability estimates."

Even if you're (rightly) skeptical about AI-generated rationales, I think the point holds for human rationales. One example: Why did DeepMind hire Swift Centre forecasters when they already had Metaculus forecasts on the same topics, as well as access to a large internal prediction market?

I suppose I left it intentionally vague :-). We're early, and are interested in talking to research partners, critics, customers, job applicants, funders, forecaster copilots, writers.

We'll list specific opportunities soon; consider this our big hello.

Agreed Eli, I'm still working to understand where the forecasting ends and the research begins. You're right, the distinction is not whether you put a number at the end of your research project.

In AGI (or other hard sciences) the work may be very different, and done by different people. But in other fields, like geopolitics, I see Tetlock-style forecasting as central, even necessary, for research.

At the margin, I think forecasting should be more research-y in every domain, including AGI. Otherwise I expect AGI forecasts will continue to be used, while not being very useful.

Nice post. I also have been exploring reasoning by analogy. I like some of the ones in your sheet, like "An international AI regulatory agency" --> "The International Atomic Energy Agency".

I think this effort could be a lot more concrete. The "AGI" section could have a lot more about specific AI capabilities (math, coding, writing) and compare them to recent technological capabilities (e.g. Google Maps, Microsoft Office) or human professions (accountant, analyst).

The more concrete it is, the more inferential power. I think the super abstract ones like "AGI" --> "Harnessing fire" don't give much more than a poetic flair to the nature of AGI.

Nicely done! The college campus forecasting clubs and competition model feels extremely promising to me. Really great to see a dedicated effort start to take off.

I'm especially happy to see an ACX Manifund mini-grant get realized so quickly. I admit I was skeptical of these grants.

Excited to see the next iteration of this, and hopefully many more to come on college campuses all over!
