NunoSempere

Director of Foresight @ Sentinel
12,559 karma · Joined
nunosempere.com/blog

Bio

I run Sentinel, a team that seeks to anticipate and respond to large-scale risks. You can read our weekly minutes here. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.


My career has been as follows:

  • Before Sentinel, I set up my own niche consultancy, Shapley Maximizers. This was very profitable, and I used the profits to bootstrap Sentinel. I am winding this down, but if you have need of estimation services for big decisions, you can still reach out. 
  • I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms—a more up-to-date alternative might be adj.news. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when FTX evaporated.
  • I write a Forecasting Newsletter which has gathered a few thousand subscribers; I abandoned it for a while but have recently restarted it. I used to really enjoy winning bets against people too confident in their beliefs, but I now try to do this in structured prediction markets, because betting against normal people started to feel like taking candy from a baby.
  • Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." 
  • Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>

Sequences (3)

Vantage Points
Estimating value
Forecasting Newsletter

Comments (1193)

Topic contributions (14)

Here are some I made for Benjamin Todd (previously mentioned here)... right before FTX went down. Not sure how well they've aged.

The previous version of this post had a comment from Julia Wise outlining some of her past mistakes, as well as a reply from Alexey Guzey (now deleted, but you can see some of the same contents below the table of contents here). You can also see comments from Julia here and here reflecting on her handling of complaints against Owen Cotton-Barratt. I think these are all informative for predicting that the people pointed at in this post can sometimes fail as well.

It does feel like a just-so story either way

Yeah, possible. It's just been on my mind since FTX.

I thought it would be interesting to add uncertainty. If you have

20K 40K       # Mean annual salary 2025 pledgers
* 0.1         # 10% given 
* beta 1 4    # counterfactual adjustment. Differs from post
* beta 5 5    # effectiveness adjustment
* 5 20        # discounted living lifespan
* 1.1 2       # reporting adjustment
* 800 2K      # expected number of pledgers
* 1.2 1.5     # adjustment for largest donors
* beta 2 8    # more adjustments (the product of rows 27:37 is 0.18)
/ 209K        # cost of GWWC

The result is a giving multiplier of 0.2 to 30.
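If you want to replicate this outside of that syntax, here is a rough Python sketch under my own assumptions: I read each pair of numbers as the 90% confidence interval of a lognormal, and "beta a b" as a Beta distribution; the variable names are just illustrative.

import numpy as np

rng = np.random.default_rng(0)  # seeded so the numbers are reproducible
N = 100_000

def lognormal_90ci(low, high):
    # Lognormal whose 5th and 95th percentiles are (low, high)
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, N)

salary         = lognormal_90ci(20e3, 40e3)  # mean annual salary, 2025 pledgers
given          = 0.1                         # 10% given
counterfactual = rng.beta(1, 4, N)           # counterfactual adjustment
effectiveness  = rng.beta(5, 5, N)           # effectiveness adjustment
lifespan       = lognormal_90ci(5, 20)       # discounted living lifespan
reporting      = lognormal_90ci(1.1, 2)      # reporting adjustment
pledgers       = lognormal_90ci(800, 2e3)    # expected number of pledgers
largest_donors = lognormal_90ci(1.2, 1.5)    # adjustment for largest donors
other          = rng.beta(2, 8, N)           # more adjustments
cost           = 209e3                       # cost of GWWC

multiplier = (salary * given * counterfactual * effectiveness * lifespan
              * reporting * pledgers * largest_donors * other) / cost
print(np.percentile(multiplier, [5, 50, 95]))  # 5th, 50th and 95th percentiles of the multiplier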

To me the key parameter is the counterfactuality of these donations. Your current number is 50%, but I'm not super sure whether you are accounting for people being less able to do ambitious things because they have fewer savings.

To some extent you may also want a general adjustment for factors you haven't thought of.

Seems like a cry for help. In particular, instead of "isolating [yourself] from all sources of misaligned social motivation" you might be "isolating yourself from all ways of realizing that you are falsifying your own preferences".

It also seems dumb because it's not a particularly corrigible action.

Do you have people you can reach out to, though? Reading through your forum posts, some of the projects you have are cool. Any collaborators you could reach out to? Or are you already pretty isolated?

For a while, I've been thinking about the following problem: as you get better models of the world, or a better ability to build them, you start noticing things that are inconvenient for others. Some of those inconvenient truths can break coordination games people are playing, and leave them with worse alternatives.

Some examples:

  • The reason why organization X is doing weird things is because their director is weirdly incompetent
  • XYZ is explained by people jockeying for influence inside some organization
  • Y is the case but would be super inconvenient to the ideology du jour
  • Z is the case, but our major funder is committed to believing that this is not the case
    • E.g., AI is important, but a big funder thinks that it will be important in a different way and there is no bandwidth to communicate.
  • Jobs in some area are very well paid, which creates incentives for people to justify that area
  • Someone builds their identity on a linchpin which is ultimately false ("my pet area is important")
  • "If I stopped believing in [my religion] my wife would leave me"—true story.
  • Such and such a cluster of people systematically overestimate how altruistic they are, which has a bunch of bad effects for themselves and others when they interact with organizations focused on effectiveness

Poetically, if you stare into the abyss, the abyss then later stares at others through your eyes, and people don't like that.

I don't really have many conclusions here. So far when I notice a situation like the above I tend to just leave, but this doesn't seem like a great solution, or like a solution at all sometimes. I'm wondering whether you've thought about this, about whether and how some parts of what EA does are premised on things that are false.

Perhaps relatedly or perhaps as a non-sequitur, I'm also curious about what changed since your post a year ago talking about how EA doesn't bring out the best in you.

I'm subscribed to the "Organizations update" tab, so I get notifications when a new post in that category appears, but I can't unsubscribe. This has been a mild annoyance for a few years. Clicking subscribe and unsubscribe on the page doesn't do anything. Could someone fix it?

Hey, I thought this was thought-provoking.

I think with fictional characters, they could be suffering while they are being instantiated. E.g., I found the film Oldboy pretty painful, because I felt some of the suffering of the character while watching the film. Similarly, if a convincing novel makes its readers feel the pain of the characters, that could be something to care about.

Similarly, if LLM computations implement some of what makes suffering bad—for instance, if they simulate some sort of distress internally while stating the words "I am suffering", because this is useful in order to make better predictions—then this could lead to them having moral patienthood.

That doesn't seem super likely to me, but as you have LLMs that are more and more capable of mimicking humans, I can see the possibility that implementing suffering is useful in order to predict what a suffering agent would output.

Here is a chaser: How can the EA community be useful to you in helping you do more good? Are there any bottlenecks you have in doing more of this stuff that could be solved with a 10k-strong but weakly coordinated community? In the hypothetical extreme where you, Darren, or Mr Beast, were made king of EA for a week, or for a year, what would you do with that?
