Kei

97 karma · Boston, MA, USA

Posts: 1
Comments: 12 (sorted by new)
I didn't check whether you addressed this, but an article from The Information claims that OpenAI's API ARR reached $1B as of March: https://www.theinformation.com/articles/a-peek-behind-openais-financials-dont-underestimate-china?rc=qcqkcj

A separate article from The Information claims that OpenAI receives $200MM in ARR as its cut of Microsoft's OpenAI-model-based cloud revenue, which I'm not sure is included in your breakdown: https://www.theinformation.com/articles/openais-annualized-revenue-doubles-to-3-4-billion-since-late-2023?rc=qcqkcj

These articles are not public though - they are behind a paywall.

Do we have any gauge on how accurate the FTX numbers ended up being? More specifically, how much of the donated FTX money ended up either not being distributed or ultimately being clawed back?

How do you decide what data/research to prioritize?

An AI that could perfectly predict human text would have a lot of capabilities that humans don't have. (Note that it is impossible for any AI to perfectly predict human text, but an imperfect text-predictor may have weaker versions of many of the capabilities a perfect predictor would have.) Some examples include:

  • Ability to predict future events: Lots of text on the internet describes something that happened in the real world. Examples might include the outcome of some sports game, whether a company's stock goes up or down and by how much, or the result of some study or scientific research. Being able to predict such text would require the AI to have the ability to make strong predictions about complicated things.
  • Reversibility: There are many tasks that are easy to do in one direction but much harder to do in the reverse direction. Examples include factoring a number (it's easier to multiply two primes p and q to get a number N=pq than to figure out p and q when given N), and hash functions (it's easy to calculate the hash of a number, but almost impossible to calculate the original number from the hash). An AI trained to do the reverse, more difficult direction of such a task would be incentivized to develop capabilities beyond what humans can do (see the short sketch after this list).
  • Speed: Lots of text on the internet comes from very long and painstaking effort. If an AI can output the same thing a human can, but 100x faster, that is still a significant capability increase over humans.
  • Volume of knowledge: Available human text spans a wider breadth of subject areas than any single person has expertise in. An AI trained on this text could have a broader set of knowledge than any human - and in fact by some definition this may already be the case with GPT-4. To the extent that making good decisions is helped by having internalized the right information, advanced models may be able to make good decisions that humans are not able to make themselves.
  • Extrapolation: Modern LLMs can extrapolate to some degree from information provided in their training set. In some domains, this can result in LLMs performing tasks more complicated than any they had previously seen in the training data. It's possible that, with the appropriate prompt, these models would be able to extrapolate to generate text like what slightly smarter humans would produce.

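To make the reversibility point concrete, here is a minimal Python sketch - the specific primes, the trial-division helper, and the brute-force preimage search are toy choices for illustration only:

```python
# Toy illustration of the "easy forward, hard reverse" asymmetry.
import hashlib
from itertools import count

# Forward direction: multiplying two primes is trivial.
p, q = 1000003, 1000033
N = p * q  # computed instantly

# Reverse direction: recovering p and q from N by trial division takes about
# a million divisions even for these small primes.
def factor_by_trial_division(n):
    """Brute-force search for the smallest factor of n."""
    for candidate in count(2):
        if n % candidate == 0:
            return candidate, n // candidate

print(factor_by_trial_division(N))  # (1000003, 1000033)

# The same asymmetry holds for hashing: computing a hash is easy,
# recovering the input from the hash is not.
secret = b"42"
digest = hashlib.sha256(secret).hexdigest()  # easy

def brute_force_preimage(target, max_value=1_000_000):
    """Guess-and-check search for a numeric preimage; hopeless for real secrets."""
    for i in range(max_value):
        if hashlib.sha256(str(i).encode()).hexdigest() == target:
            return i
    return None

print(brute_force_preimage(digest))  # 42, only findable because the secret is tiny
```

A model trained to predict p and q given N, or the input given its hash, is being trained on exactly the harder direction of these tasks.
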
In addition to this, modern LLM training typically consists of two steps: a standard next-token-prediction pre-training step, followed by a reinforcement learning step. Models trained with reinforcement learning can in principle become even better than models trained only with next-token prediction.
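
As a very rough sketch of that two-step structure - a counting-based character bigram model stands in for pre-training, and a crude reward-weighted update stands in for the RL step; real systems use neural networks and far more sophisticated methods like RLHF:

```python
# Toy sketch of the two-stage pipeline: next-token prediction, then an RL-style step.
import random
from collections import defaultdict

corpus = "the cat sat on the mat. the cat ate the rat."

# Stage 1: "pre-training" by counting character bigrams (next-character prediction).
weights = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1.0

def sample_next(prev):
    """Sample the next character in proportion to its learned weight."""
    options = weights[prev]
    r = random.uniform(0, sum(options.values()))
    for ch, w in options.items():
        r -= w
        if r <= 0:
            return ch
    return ch

# Stage 2: a crude RL-style step - upweight the transitions used in samples that
# score well under an (arbitrary) reward, here "contains many 't' characters".
def reward(text):
    return text.count("t")

for _ in range(200):
    prev, generated, path = " ", "", []
    for _ in range(10):
        nxt = sample_next(prev)
        path.append((prev, nxt))
        generated += nxt
        prev = nxt
    for a, b in path:
        weights[a][b] *= 1 + 0.01 * reward(generated)

# Generate from the "fine-tuned" toy model.
prev, out = " ", ""
for _ in range(30):
    prev = sample_next(prev)
    out += prev
print(out)  # gibberish, but now biased toward 't'-heavy continuations
```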

For what it's worth, this is not a prediction - Sundar Pichai said it in an NYT interview: https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html

My best guess is it will be announced once the switch happens in order to get some good press for Google Bard. 

Apparently Bard currently uses an older and smaller language model called LaMDA as its base (you may remember it as the model a Google employee thought was sentient). They're planning on switching over to a more capable model, PaLM, sometime soon, so Bard should get much closer to GPT at that point.

Thanks for making this! It was a lot of fun to play and I imagine it will be good practice.

I think the implicit claim here is that because SBF (or Dustin/Cari for that matter) was a major EA donor, everything he donates counts as an EA donation. But I don't think that's the right way to look at it. It's not logic we'd apply to other people - I donate a chunk of my money to various EA-affiliated causes, but if I one day decided to donate to the Met, most people would consider that separate from my EA giving.

I would classify donations as EA donations if they fall into one of the below two buckets:

  1. Donations given out by a major EA org: Examples include the Open Philanthropy Project, GiveWell, and The FTX Future Fund.
  2. Donations given out by EAs or EA-affiliated people to causes that have been discussed and argued for a lot in the EA community. Bonus points if it's explicitly listed as a cause area on major EA org websites. Examples include anti-malaria nets, animal welfare charities, pandemic preparedness, and AI safety research. I also think donations to Carrick Flynn's campaign would fall into this bucket given the amount of discussion there was about it here.

Can someone who is not a student participate?
