This is a cold take that’s probably been said before, but I think it bears repeating occasionally, if only as a reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that leads people to alternatives like mutual...
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively...
We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.
From a “safety from catastrophic risk” perspective, I suspect an “AI-focused company” (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for getting us towards AGI. I have two distinct but related reasons:
From an incentives perspective, consider realistic alternative organizational structures to “AI-focused company” that nonetheless have enou...
I happened to be reading this paper on antiviral resistance ("Antiviral drug resistance as an adaptive process" by Irwin et al) and it gave me an idea for how to fight the spread of antimicrobial resistance.
Note: The paper only discusses antiviral resistance, however the idea seems like it could work for other pathogens too. I won't worry about that distinction for the rest of this post.
The paper states:
...Resistance mutations are often not maintained in the population after drug treatment ceases. This is usually attributed to fitness costs associated with
Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised how much overlap we had in concerns and potential solutions:
1. Transparency and explainability of AI model data use (concern)
2. Importance of interpretability (solution)
3. Mis/dis information from deepfakes (concern)
4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)
5. Unemployment without safety nets for Australians (concern)
6....
This could be a long slog, but I think it could be valuable to identify the top ~100 open-source libraries and assess their level of resourcing, to help prevent future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.
Relevant XKCD comic.
To further comment, this seems like it might be an intractable task, as the term "dependency hell" kind of implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd identify their level of resourcing. Number of contributors? Frequency of commits?
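The frequency-counting step at least is mechanically simple once you have the manifests. A minimal sketch, assuming you've already collected dependency lists (e.g. parsed `requirements.txt` files) from a crawl; the manifests below are made-up placeholders, not real data:

```python
from collections import Counter

# Hypothetical input: dependency lists scraped from many repositories.
# In practice these would come from a GitHub-scale crawl or an existing
# dependency-graph dataset, not hard-coded examples like these.
manifests = [
    ["requests", "numpy", "xz-wrapper"],
    ["numpy", "pandas"],
    ["requests", "numpy"],
]

# Tally how many projects depend on each library.
counts = Counter(dep for manifest in manifests for dep in manifest)

# The most common entries would be candidates for the "top ~100" list.
for library, n in counts.most_common(3):
    print(library, n)
```

The harder, unsolved part is exactly what you raise: turning "number of contributors" or "commit frequency" into a defensible measure of resourcing.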
Also, with your example of the XZ attack, it's not even clear who made the attack. If you suspect it was, say, the NSA, would you want to thwart them if their purpose ...
Status: Fresh argument I just came up with. I welcome any feedback!
Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.
Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (als...
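To make the return gap concrete, here's a toy compound-growth comparison. The 2% and 7% real annual returns and the $100B starting reserve are illustrative assumptions on my part, not figures from any official projection:

```python
# Toy illustration (assumed rates, not actuals): compound a $100B reserve
# for 30 years at a Treasury-like 2% real return vs. an equity-like 7%.
principal = 100  # billions of dollars
years = 30

treasury = principal * 1.02 ** years
equities = principal * 1.07 ** years

print(f"Treasuries: ${treasury:.0f}B")
print(f"Equities:   ${equities:.0f}B")
```

Even at these rough rates, the equity portfolio ends up several times larger, which is the mechanism by which the public would capture some of the upside from AGI-driven growth.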
We’re very excited to announce the following speakers for EA Global: London 2024:
Applications close 19 May. Apply here and find ...
The following is a collection of long quotes from Ozy Brennan's post On John Woolman (which I stumbled upon via Aaron Gertler) that spoke to me. Woolman was clearly what David Chapman would call mission-oriented with respect to meaning of and purpose in life; Chapman argues instead for what he calls "enjoyable usefulness", which is I think healthier in ~every way ... it just doesn't resonate. All bolded text is my own emphasis, not Ozy's.
...As a child, Woolman experienced a moment of moral awakening: ... [anecdote]
This anecdote epitomizes the two driving forc
Introducing Ulysses*, a new app for grantseekers.
We (Austin Chen, Caleb Parikh, and I) built an app! You can test the app out if you’re writing a grant application! You can put in sections of your grant application** and the app will try to give constructive feedback about your application. Right now we're focused on the "Track Record" and "Project Goals" sections of the application. (The main hope is to save back-and-forth time between applicants and grantmakers by asking you questions that grantmakers might want to ask.)
Austin, Caleb, ...
For fun, I put one of my (approved) Lightspeed applications through the app. This isn't a great test because Lightspeed told people to do crude applications and that they'd reach out with questions if they had any. Additionally, the grantmakers already knew me and had expressed verbal interest in the project. But maybe it's still a useful data point.
My Track Record section
...
Unquantified review of MDMA risks
Semi-quantified review of binge drinking risks
[2 projects omitted for client privacy, but were included with permission in the or
This WHO press release was a good reminder of the power of immunization – a new study forthcoming in The Lancet reports that (liberally quoting / paraphrasing the release)
Help clear something up for me: I am extremely confused (theoretically) how we can simultaneously have:
1. An Artificial Superintelligence
2. It be controlled by humans (thereby creating misuse and concentration-of-power issues)
My intuition is that once it reaches a particular level of power it will be uncontrollable. Unless people are saying that we can have models 100x more powerful than GPT-4 without their having any agency??
From the article:
...Effective Ventures has since come to a settlement with the FTX estate and paid back the $26.8 million given to it by FTX Foundation. [...] It’s amid such turmoil that Wytham Abbey is being listed on the open market for £15 million [...]
Adjusted for inflation, the purchase price of the house two years ago now equals £16.2 million. [...] The listing comes as homes on the UK’s once-hot country market are taking longer to sell, forcing some owners to offer disc
Common prevalence estimates are often wrong. Example: snakebites and my experience reading Long Covid literature.
Both institutions like the WHO and academic literature appear to be incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but have not looked into it.
I advise everyone using prevalence estimates to treat them with some skepticism and look up the source.
The banner will only be visible on desktop. If you can't see it, try expanding your window. It'll be up for a week.
If you’d like to delete your entry, click the cross that appears when you hover over it. It will be deleted for everyone.
Anything...
The banner looks lovely. Great work.
FYI, there is at least one bit of false/inaccurate information on the banner. The bit about the universal right to vote references a database that considers Chinese people as having had the right to vote since the late 1940s. While some voting does occur in China at the local level, with candidates that have to be pre-approved by the ruling party, it strikes me as pretty inaccurate to claim full adult suffrage for China. It appears to reference a dataset from this research paper, and I'm not sure why that dataset has ...
Swapcard tips:
You can use Firefox/Safari/Chrome etc. on your phone: go to swapcard.com and use that instead of downloading the Swapcard app from your app store. As far as I know, the only thing the app has that the mobile site does not is the QR code that you need when you first sign in at the venue and pick up your badge.
The other fields, like 'How can I help othe... (read more)