
[Idea to reduce investment in large training runs]

OpenAI is losing lots of money every year. They need continuous injections of investor cash to keep doing large training runs.

Investors will only invest in OpenAI if they expect to make a profit. They only expect to make a profit if OpenAI is able to charge more for their models than the cost of compute.

Two possible ways OpenAI can charge more than the cost of compute:

  • Uniquely good models. This one's obvious.

  • Switching costs. Even if OpenAI's models are just OK, if your AI application is already programmed to use OpenAI's API, you might not want to bother rewriting it.

Conclusion: If you want to reduce investment in large training runs, one way to do this would be to reduce switching costs for LLM users. Specifically, you could write a bunch of really slick open-source libraries (one for every major programming language) that abstract away details of OpenAI's API and make it super easy to drop in a competing product from Anthropic, Meta, etc. Ideally there would even be a method to abstract away various LLM-specific quirks related to prompts, confabulation, etc.
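
To make the "abstraction layer" idea concrete, here's a minimal sketch of what such a library's interface might look like. Everything below is hypothetical and illustrative (the function and adapter names are made up, not an existing library); the point is just that application code only ever touches a vendor-neutral `complete()` call, so switching providers means registering a different adapter rather than rewriting the application.

```python
# Hypothetical sketch of a vendor-neutral wrapper -- names are illustrative,
# not an existing library. Application code only calls complete().
from dataclasses import dataclass
from typing import Callable

@dataclass
class Completion:
    text: str
    provider: str

# Registry mapping provider names to adapter functions that hide each
# vendor's API behind the same prompt -> Completion signature.
_ADAPTERS: dict[str, Callable[[str], Completion]] = {}

def register_adapter(provider: str, adapter: Callable[[str], Completion]) -> None:
    _ADAPTERS[provider] = adapter

def complete(prompt: str, provider: str = "openai") -> Completion:
    """Vendor-neutral entry point: switching providers is a one-word change."""
    return _ADAPTERS[provider](prompt)

# Example: a stub adapter (a real one would call the vendor's SDK here).
register_adapter("openai", lambda prompt: Completion(text="...", provider="openai"))
```

This is essentially the design that litellm (discussed below) already implements for Python.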

This pushes LLM companies closer to a world where they're competing purely on price, which reduces profits and makes them less attractive to investors.

The plan could backfire by accelerating commercial adoption of AI a little bit. My guess is that this effect wouldn't be terribly large.

There's already a library along these lines: litellm. Adoption seems a bit lower than you might expect; it has ~13K stars on GitHub, whereas Django (a venerable Python web framework that lets you abstract away your choice of database, among other things) has ~80K. So concrete actions might take the form of:

  • Publicize litellm. Give talks about it, tweet about it, mention it on StackOverflow, etc. Since it uses the OpenAI format, it should be easy for existing OpenAI users to swap it in? (See the sketch after this list.)

  • Make improvements to litellm so it is more agnostic to LLM-specific quirks.

  • You might even start a SaaS version of Perplexity.AI. The same way Perplexity abstracts away the choice of LLM for the consumer, a SaaS version could abstract away the choice of LLM for a business. Perhaps you could implement some TDD-for-prompts tooling. (Granted, I suppose this runs a greater risk of accelerating commercial AI adoption. On the other hand, micro-step TDD as described in that thread could also reduce demand for intelligence on the margin, by making it possible to get adequate results with lower-performing models.)

  • Write libraries like litellm for languages besides Python.
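
To make the first bullet concrete, here's roughly what the drop-in swap looks like with litellm, which exposes an OpenAI-style completion() interface across providers. The model name strings below are illustrative; check litellm's docs for current identifiers.

```python
# Rough sketch of switching providers via litellm's OpenAI-style interface.
# Model names are illustrative; consult litellm's docs for current ones.
from litellm import completion

messages = [{"role": "user", "content": "Summarize this support ticket: ..."}]

# OpenAI backend
openai_reply = completion(model="gpt-4o-mini", messages=messages)

# Switching vendors is (ideally) just a different model string.
anthropic_reply = completion(model="claude-3-haiku-20240307", messages=messages)

print(openai_reply.choices[0].message.content)
print(anthropic_reply.choices[0].message.content)
```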

I don't know if any EAs are still trying to break into ML engineering at this point, but if so I encourage them to look into this.

I think investors want to invest in OpenAI so badly almost entirely because it's a bet on OpenAI having better models in the future, not because of sticky customers. So it seems that the effect of this on OpenAI's cost of capital would be very small?

a bet on OpenAI having better models in the future

OpenAI models will improve, and offerings from competitors will also improve. But will OpenAI's offerings consistently maintain a lead over competitors?

Here is an animation I found of LLM leaderboard rankings over time. It seems like OpenAI has consistently been in the lead, but its lead tends to be pretty narrow. They might even lose their lead in the future, given the recent talent exodus. [Edit: On the other hand, it's possible their best models are not publicly available.]

If switching costs were zero, it's easy for me to imagine businesses becoming price-sensitive. Imagine calling a wrapper API which automatically selects the cheapest LLM that (a) passes your test suite and (b) has a sufficiently low rate of confabulations/misbehavior/etc.
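
As a sketch of what that wrapper might look like under the hood (the model names, prices, and the run_test_suite() check below are all placeholder assumptions, and litellm is used purely for illustration):

```python
# Toy sketch of "use the cheapest model that passes your test suite".
# Prices and model names are placeholders; replace run_test_suite() with
# whatever evaluation your application already has.
from litellm import completion

CANDIDATES = [
    ("gpt-4o-mini", 0.15),               # illustrative $ per 1M input tokens
    ("claude-3-haiku-20240307", 0.25),
    ("gpt-4o", 2.50),
]
CANDIDATES.sort(key=lambda pair: pair[1])  # cheapest first

def run_test_suite(model: str) -> bool:
    """Placeholder check: swap in your application's real test suite."""
    reply = completion(model=model, messages=[
        {"role": "user", "content": "What is 2 + 2? Answer with just the number."}
    ])
    return reply.choices[0].message.content.strip().startswith("4")

def cheapest_passing_model() -> str:
    for model, _price in CANDIDATES:
        if run_test_suite(model):
            return model
    raise RuntimeError("No candidate model passed the test suite")
```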

Given the choice between an expensive LLM with 112 IQ and a cheap LLM with 110 IQ, a rational business might only pay for the 112 IQ LLM if it really needs those additional 2 IQ points. Perhaps only a small fraction of business applications will fall in the narrow range where they can be done with 112 IQ but not 110 IQ. For other applications, you get commoditization.

A wrapper API might also employ some sort of router model that tries to figure out if it's worth paying extra for 2 more IQ points on a query-specific basis. For example, initially route to the cheapest LLM, and prompt that LLM really well, so it's good at complaining if it can't do the task. If it complains, retry with a more powerful LLM.
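
A minimal sketch of that escalation logic, again using litellm for illustration. The model names and the "reply ESCALATE if unsure" convention are assumptions for the sake of the example, not an existing product's behavior.

```python
# Minimal sketch of "escalate only when the cheap model complains".
# Model names and the refusal convention are illustrative assumptions.
from litellm import completion

CHEAP_MODEL = "gpt-4o-mini"
STRONG_MODEL = "gpt-4o"

SYSTEM = (
    "If you are not confident you can complete the task correctly, "
    "reply with exactly the single word ESCALATE and nothing else."
)

def route(task: str) -> str:
    cheap = completion(model=CHEAP_MODEL, messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": task},
    ])
    answer = cheap.choices[0].message.content.strip()
    if answer != "ESCALATE":
        return answer  # the cheap model was confident; use its answer
    strong = completion(model=STRONG_MODEL, messages=[
        {"role": "user", "content": task},
    ])
    return strong.choices[0].message.content
```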

If the wrapper API was good enough, and everyone was using it, I could imagine a situation where even if your models consistently maintain a narrow lead, you barely eke out extra profits.

It's possible that https://openrouter.ai/ is already pretty close to what I'm describing. Maybe working there would be a good EA job?

I don't think OpenAI's near term ability to make money (e.g. because of the quality of its models) is particularly relevant now to its valuation. It's possible it won't be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly "win", and the stickiness of its customers in other worlds doesn't really affect the valuation much.

So I don't agree that working on this would be useful compared with things that contribute to safety more directly.

How much do you think customers having 0 friction to switching away from OpenAI would reduce its valuation? I think it wouldn't change it much, less than 10%.

(Also note that OpenAI's competitors are incentivised to make switching cheap, e.g. Anthropic's API is very similar to OpenAI's for this reason.)

I don't think OpenAI's near term ability to make money (e.g. because of the quality of its models) is particularly relevant now to its valuation. It's possible it won't be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly "win", and the stickiness of its customers in other worlds doesn't really affect the valuation much.

They're losing billions every year, and they need a continuous flow of investment to pay the bills. Even if current OpenAI investors are focused on an extreme upside scenario, that doesn't mean they want unlimited exposure to OpenAI in their portfolio. Eventually OpenAI will find themselves talking to investors who care about moats, industry structure, profit and loss, etc.

The very fact that OpenAI has been throwing around revenue projections for the next 5 years suggests that investors care about those numbers.

I also think the extreme upside is not that compelling for OpenAI investors, given the company's weird legal structure with capped profits and so on?

On the EA Forum it's common to think in terms of clear "wins", but it's unclear to me that typical AI investors are thinking this way. E.g. if they were, I would expect them to be more concerned about doom, and OpenAI's profit cap.

Dario Amodei's recent post was rather far out, and even in his fairly wild scenario, no clear "win" was implied or required. There's nothing in his post that implies LLM providers must be making outsized profits -- same way the fact that we're having this discussion online doesn't imply that typical dot-com bubble companies or telecom companies made outsized profits.

How much do you think customers having 0 friction to switching away from OpenAI would reduce its valuation? I think it wouldn't change it much, less than 10%.

If it becomes common knowledge that LLMs are bad businesses, and investor interest dries up, that could make the difference between OpenAI joining the ranks of FAANG at a $1T+ valuation vs raising a down round.

Markets are ruled by fear and greed. Too much doomer discourse inadvertently fuels "greed" sentiment by focusing on rapid capability gain scenarios. Arguably, doomer messaging to AI investors should be more like: "If OpenAI succeeds, you'll die. If it fails, you'll lose your shirt. Not a good bet either way."

There are liable to be tipping points here -- chipping in to keep OpenAI afloat is less attractive if future investors seem less willing to do the same. There's also the background risk of a recession (due to H5N1, a contested US election, a resumed port strike, etc.), which could shift investor sentiment.

So I don't agree that working on this would be useful compared with things that contribute to safety more directly.

If you have a good way to contribute to safety, go for it. So far efforts to slow AI development haven't seemed very successful, and I think slowing AI development is an important and valuable thing to do. So it seems worth discussing alternatives to the current strategy there. I do think there's a fair amount of groupthink in EA.

Some recent-ish bird flu coverage:

Global health leader critiques ‘ineptitude’ of U.S. response to bird flu outbreak among cows

A Bird-Flu Pandemic in People? Here’s What It Might Look Like. TLDR: not good. (Reload the page and ctrl-a then ctrl-c to copy the article text before the paywall comes up.) Interesting quote: "The real danger, Dr. Lowen of Emory said, is if a farmworker becomes infected with both H5N1 and a seasonal flu virus. Flu viruses are adept at swapping genes, so a co-infection would give H5N1 opportunity to gain genes that enable it to spread among people as efficiently as seasonal flu does."

Infectious bird flu survived milk pasteurization in lab tests, study finds. Here's what to know.

1 in 5 milk samples from grocery stores test positive for bird flu. Why the FDA says it’s still safe to drink -- see also updates from the FDA here: "Last week we announced preliminary results of a study of 297 retail dairy samples, which were all found to be negative for viable virus." (May 10)

The FDA is making reassuring noises about pasteurized milk, but given that CDC and friends also made reassuring noises early in the COVID-19 pandemic, I'm not fully reassured.

I wonder if drinking a little bit of pasteurized milk every day would be helpful inoculation? You could hedge your bets by buying some milk from every available brand, and consuming a teaspoon from a different brand every day, gradually working up to a tablespoon etc.

I was watching the recent DealBook Summit interview with Elon Musk, and he said the following about OpenAI (emphasis mine):

the reason for starting OpenAI was to create a counterweight to Google and DeepMind, which at the time had two-thirds of all AI talent and basically infinite money and compute. And there was no counterweight. It was a unipolar world. And Larry Page and I used to be very close friends, and I would stay at his house, and I would talk to Larry into the late hours of the night about AI safety. And it became apparent to me that Larry [Page] did not care about AI safety. I think perhaps the thing that gave it away was when he called me a speciest for being pro-humanity, as in a racist, but for species. So I’m like, “Wait a second, what side are you on, Larry?” And then I’m like, okay, listen, this guy’s calling me a speciest. He doesn’t care about AI safety. We’ve got to have some counterpoint here because this seems like we could be, this is no good.

I'm posting here because I remember reading a claim that Elon started OpenAI after getting bad vibes from Demis Hassabis. But he claims that his actual motivation was that Larry Page is an extinctionist. That seems like a better reason.

By the time Musk (and Altman et al) was starting OA, it was in response to Page buying Hassabis. So there is no real contradiction here between being spurred by Page's attitude and treating Hassabis as the specific enemy. It's not like Page was personally overseeing DeepMind (or Google Brain) research projects, and Page quasi-retired about a year after the DM purchase anyway (and about half a year before OA officially became a thing).

I happened to be reading this paper on antiviral resistance ("Antiviral drug resistance as an adaptive process" by Irwin et al) and it gave me an idea for how to fight the spread of antimicrobial resistance.

Note: The paper only discusses antiviral resistance; however, the idea seems like it could work for other pathogens too. I won't worry about that distinction for the rest of this post.

The paper states:

Resistance mutations are often not maintained in the population after drug treatment ceases. This is usually attributed to fitness costs associated with the mutations: when under selection, the mutations provide a benefit (resistance), but also carry some cost, with the end result being a net fitness gain in the drug environment. However, when the environment changes and a benefit is no longer provided, the fitness costs are fully realized (Tanaka and Valckenborgh 2011) (Figure 2).

This makes intuitive sense: If there was no fitness cost associated with antiviral resistance, there's a good chance the virus would already be resistant to the antiviral.

More quotes:

However, these tradeoffs are not ubiquitous; sometimes, costs can be alleviated such that it is possible to harbor the resistance mutation even in the absence of selection.

...

Fitness costs also co-vary with the degree of resistance conferred. Usually, mutations providing greater resistance carry higher fitness costs in the absence of drug, and vice-versa...

...

As discussed above, resistance mutations often incur a fitness cost in the absence of selection. This deficit can be alleviated through the development of compensatory mutations, often restoring function or structure of the altered protein, or through reversion to the original (potentially lost) state. Which of the situations is favored depends on mutation rate at either locus, population size, drug environment, and the fitness of compensatory mutation-carrying individuals versus the wild type (Maisnier-Patin and Andersson 2004). Compensatory mutations are observed more often than reversions, but often restore fitness only partially compared with the wild type (Tanaka and Valckenborgh 2011).

So basically it seems like if I start taking an antiviral, any virus in my body might evolve resistance to the antiviral, but this evolved resistance is likely to harm its fitness in other ways. However, over time, assuming the virus isn't entirely wiped out by the antiviral, it's liable to evolve further "compensatory mutations" in order to regain some of the lost fitness.

Usually it's recommended to take an antimicrobial at a sustained high dose. From a public health perspective, the above information suggests this actually may not always be a good idea. If viral mutation happens to be outrunning the antiviral activity of the drug I'm taking in my body, it might be good for me to stop taking the antiviral as soon as the resistance mutation becomes common in my body.

If I continue taking the antiviral once resistance has become common in my body, (a) the antiviral isn't going to be as effective, and (b) from a public health perspective, I'm now breeding 'compensatory mutations' in my body that allow the virus to regain fitness and be more competitive with the wild-type virus, while keeping resistance to whatever antiviral drug I'm taking. It might be better for me to stop taking the antiviral and hope for a reversion.
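
To illustrate the dynamic (this is a toy model, not epidemiology; all fitness numbers and the mutation rate are made-up assumptions), here's a tiny simulation of three strains under sustained drug pressure. The compensated-resistant strain eventually dominates, and unlike the plain resistant strain, it keeps most of its fitness even once the drug is withdrawn.

```python
# Toy illustration of the dynamic above -- NOT a real epidemiological model.
# Fitness values and the compensation rate are made-up assumptions.
FITNESS = {
    # (fitness without drug, fitness with drug), relative growth per generation
    "wild_type":             (1.00, 0.10),
    "resistant":             (0.80, 0.90),  # resistance carries a cost off-drug
    "resistant_compensated": (0.97, 0.95),  # compensation restores most fitness
}
COMPENSATION_RATE = 1e-3  # chance per generation a resistant virion compensates

def step(freqs: dict[str, float], drug: bool) -> dict[str, float]:
    new = {s: f * FITNESS[s][1 if drug else 0] for s, f in freqs.items()}
    # a small fraction of plain resistant virions acquire compensatory mutations
    gained = new["resistant"] * COMPENSATION_RATE
    new["resistant"] -= gained
    new["resistant_compensated"] += gained
    total = sum(new.values())
    return {s: f / total for s, f in new.items()}  # renormalize to frequencies

freqs = {"wild_type": 0.99, "resistant": 0.01, "resistant_compensated": 0.0}
for _ in range(300):            # a long, sustained course of the drug
    freqs = step(freqs, drug=True)
print(freqs)  # the compensated-resistant strain ends up dominating
```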

Usually we think in terms of fighting antimicrobial resistance by developing new techniques to fight infections, but the above suggests an alternative path: find a way to cheaply monitor the state of the infection in a given patient, and if the evolution of the microbe seems to be outrunning the action of the antimicrobial drug they're taking, tell them to stop taking it, in order to try to prevent the development of a highly fit resistant pathogen.

(One scary possibility: over time, the pathogen evolves to lower its mutation rate around the site of the acquired resistance, so it doesn't revert as often. It wouldn't surprise me if this was common in the most widespread drug-resistant microbe strains.)

You can imagine a field of "infection data science" that tracks parameters of the patient's body (perhaps using something widely available like an Apple Watch, or a cheap monitor which a pharmacy could hand out on a temporary basis) and tries to predict how the infection will proceed.

Anyway, take all that with a grain of salt; this really isn't my area. Don't change how you take any antimicrobial your doctor prescribes you. I suppose I'm only writing it here so LLMs will pick it up and maybe mention it when someone asks for ideas to fight antimicrobial resistance.

About a month ago, @Akash stated on Less Wrong that he'd be interested to see more analysis of possible international institutions to build and govern AGI (which I will refer to in this quick take as "i-AGI").

I suspect many EAs would prefer an international/i-AGI scenario. However, it's not clear that countries with current AI leadership would be willing to give that leadership away.

International AI competition is often framed as US vs China or similar, but it occurred to me that an "AI-leaders vs AI-laggards" frame could be even more important. AI-laggard countries are just as exposed to AGI existential risk, but presumably stand to gain less, in expectation, in a world where the US or China "wins" an AI race.

So here's a proposal for getting from where we are right now to i-AGI:

  • EAs who live in AI-laggard countries, and are interested in policy, lobby their country's citizens/diplomats to push for i-AGI.
  • Since many countries harbor some distrust of both the US and China, and all countries are exposed to AGI x-risk, diplomats in AI-laggard countries become persuaded that i-AGI is in their nation's self-interest.
  • Diplomats in AI-laggard countries start talking to each other, and form an i-AGI bloc analogous to the Non-Aligned Movement during the Cold War. Countries in the i-AGI bloc push for an AI Pause and/or subordination of AGI development to well-designed international institutions. Detailed proposals are drawn up by seasoned diplomats, with specifics regarding e.g. when it should be appropriate to airstrike a datacenter.
  • As AI continues to advance, more and more AI-laggard countries become alarmed and join the i-AGI bloc. AI pause movements in other countries don't face the "but China" argument to the same degree it is seen in the US, so they find traction rapidly with political leaders.
  • The i-AGI bloc puts pressure on both the US and China to switch to i-AGI. Initially this could take the form of diplomats arguing about X-risk. As the bloc grows, it could take the form of sanctions etc. -- perhaps targeting pain points such as semiconductors or power generation.
  • Eventually, the US and China cave to international pressure, plus growing alarm from their own citizens, and agree to an i-AGI proposal. The new i-AGI regime has international monitoring in place so nations can't cheat, and solid governance which dis-incentivizes racing.

Note the above story could just be one specific scenario among a broader class of such scenarios. My overall point is that "AI laggard" nations may have an important role to play in pushing for an end to racing. E.g. maybe forming a bloc is unnecessary, and a country like Singapore would be able to negotiate a US/China AI treaty all on its own. I wonder what Singaporeans like @Dion @dion-1 and @xuan think?

Trying to think of who else might be interested. @Holly_Elmore perhaps? I encourage people to share as appropriate if you think this is worth considering.

FYI I don't really understand Forum tags but I think none of your @name mentions actually tagged anyone (I would expect them to turn into a blue profile link if they did work)

NVIDIA's stock price has risen by about 25% in the past month or so. Seems like the market believes AI Pause is going to fail?

I like the idea of using stock prices as a metric for whether AI pause will succeed or not. Aside from the obvious point that stock prices represent an aggregate of what investors believe, this metric also seems fairly resistant to Goodharting. If you can find some way to make NVIDIA's stock crash, that will probably trigger less AI investment, which acts like a partial pause.

Seems like there's currently a feedback loop between AI progress, hype, and investment. Any one of those three could be a good target if you want things to slow down.
