Quick takes

Swapcard tips:

  1. The mobile browser is more reliable than the app

You can use Firefox/Safari/Chrome etc. on your phone: go to swapcard.com and use that instead of downloading the Swapcard app from your app store. As far as I know, the only thing the app has that the mobile site does not is the QR code you need when signing in when you first get to the venue and pick up your badge.

  2. Only the 'Biography' field in the 'About Me' section of your profile is searchable when searching in Swapcard

The other fields, like 'How can I help othe... (read more)

This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual... (read more)

Thomas Kwa
I want to slightly push back against this post in two directions:

* I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy -- I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than >99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
* Longtermists make tradeoffs between other common values and helping vast future populations that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of "doing a lot more good matters a lot more" is really important, but it is still trading off against other values.
  * Helping people closer to you / in your community: many people think this has inherent value.
  * Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly disagree that it would be better to make slightly more of a difference by e.g. subsidizing eyeglasses in Bangladesh.
  * Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfa

Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.

I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively... (read more)

Tyler Johnston
Yeah, I meant to convey this in my post but framing it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way is just moving the hypothetical condition elsewhere to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that's confusing though. BTW, my personal views lean towards a suffering-focused ethics that isn't seeking to create happy people for their own sake. But I still think that, in coming to that view, I'm concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That's my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn't consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines. Agreed there are other issues with longtermism — just wanted to respond to the "it's not about care or empathy" critique.

Remember: EA institutions actively push talented people into the companies making the world-changing tech that the public has said THEY DON'T WANT. This is where the next big EA PR crisis will come from (50%). Except this time it won't just be the tech bubble.

[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low-hanging fruit in just doing this for 30 minutes a day (I would do it but my LTFF funding does not cover this). Someone should do this!

yanni kyriacos
Why am I so bad at this, Stephen. Send help.
Linch
(Speaking as someone on LTFF, but not on behalf of LTFF) How large of a constraint is this for you? I don't have strong opinions on whether this work is better than what you're funded to do, but usually I think it's bad if LTFF funding causes people to do things that they think are less (positively) impactful! We probably can't fund people to do things that are lobbying or lobbying-adjacent, but I'm keen to figure out or otherwise brainstorm an arrangement that works for you.

Hey Linch, thanks for reaching out! Maybe send me your email or HMU here yannikyriacos@gmail.com

Cullen

I am not under any non-disparagement obligations to OpenAI.

It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer.

I have no further comments at this time.

We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.

From a “safety from catastrophic risk” perspective, I suspect an “AI-focused company” (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for getting us towards AGI. I have two distinct but related reasons:

  1. Incentives
  2. Culture

From an incentives perspective, consider realistic alternative organizational structures to “AI-focused company” that nonetheless have enou... (read more)

Linch
I agree that it's possible for startups to have a safety-focused culture! The question that's interesting to me is whether it's likely / what the prior should be. Finance is a good example of a situation where you often can get a safety culture despite no prior experience with your products (or your predecessor's products, etc) killing people. I'm not sure why that happened? Some combination of 2008 making people aware of systemic risks + regulations successfully creating a stronger safety culture?
Ian Turner
Oh sure, I'll readily agree that most startups don't have a safety culture. The part I was disagreeing with was this: Regarding finance, I don't think this is about 2008, because there are plenty of trading firms that were careful from the outset that were also founded well before the financial crisis. I do think there is a strong selection effect happening, where we don't really observe the firms that weren't careful (because they blew up eventually, even if they were lucky in the beginning). How do careful startups happen? Basically I think it just takes safety-minded founders. That's why the quote above didn't seem quite right to me. Why are most startups not safety-minded? Because most founders are not safety-minded, which in turn is probably due in part to a combination of incentives and selection effects.

How do careful startups happen? Basically I think it just takes safety-minded founders. 

Thanks! I think this is the crux here. I suspect what you say isn't enough but it sounds like you have a lot more experience than I do, so happy to (tentatively) defer.

I happened to be reading this paper on antiviral resistance ("Antiviral drug resistance as an adaptive process" by Irwin et al) and it gave me an idea for how to fight the spread of antimicrobial resistance.

Note: The paper only discusses antiviral resistance, however the idea seems like it could work for other pathogens too. I won't worry about that distinction for the rest of this post.

The paper states:

Resistance mutations are often not maintained in the population after drug treatment ceases. This is usually attributed to fitness costs associated with

... (read more)

Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised how much overlap we had in concerns and potential solutions:
1. Transparency and explainability of AI model data use (concern)

2. Importance of interpretability (solution)

3. Mis/dis information from deepfakes (concern)

4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)

5. Unemployment without safety nets for Australians (concern)

6.... (read more)

This could be a long slog but I think it could be valuable to identify the top ~100 OS libraries and identify their level of resourcing to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.
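As a minimal sketch of the first step, the "top ~100" list could come from tallying how often each library appears as a dependency across project manifests. The manifest data below is hypothetical, purely to illustrate the counting approach:

```python
from collections import Counter

# Hypothetical sample: each entry lists the dependencies declared by one
# project's manifest (e.g. parsed from requirements.txt / package.json).
manifests = [
    ["openssl", "zlib", "xz"],
    ["zlib", "libpng"],
    ["openssl", "zlib"],
]

# Count how many projects depend on each library; the most frequently
# used libraries are candidates for a resourcing/security audit.
usage = Counter(dep for deps in manifests for dep in deps)
top = usage.most_common(2)
print(top)  # [('zlib', 3), ('openssl', 2)]
```

Measuring "level of resourcing" (maintainer count, funding, commit cadence) would then be a separate, harder step, as the replies below note.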


Relevant XKCD comic.

To further comment, this seems like it might be an intractable task, as the term "dependency hell" kind of implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd identify their level of resourcing. Number of contributors? Frequency of commits?

Also, with your example of the XZ attack, it's not even clear who made the attack. If you suspect it was, say, the NSA, would you want to thwart them if their purpose ... (read more)

Matt_Lerner
I'd be interested in exploring funding this and the broader question of ensuring funding stability and security robustness for critical OS infrastructure. @Peter Wildeford is this something you guys are considering looking at?
Ben Millwood
not sure if such a study would naturally also be helpful to potential attackers, perhaps even more helpful to attackers than defenders, so might need to be careful about whether / how you disseminate the information

Status: Fresh argument I just came up with. I welcome any feedback!

Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.

Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (als... (read more)
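The size of the gap being argued about can be illustrated with a toy compounding calculation. The rates and the $100B reserve figure below are assumptions for the sketch, not actual Trust Fund numbers:

```python
# Illustrative only: compound a $100B reserve for 30 years at a
# Treasury-like 2% real return vs. an equity-like 5% real return.
reserve = 100e9
treasury = reserve * (1.02 ** 30)  # non-marketable Treasury securities
equities = reserve * (1.05 ** 30)  # diversified stock portfolio
print(round(treasury / 1e9), round(equities / 1e9))  # 181 vs 432 ($B)
```

Over long horizons the higher (but riskier) equity return more than doubles the ending balance, which is the core of the argument for letting the fund hold stocks.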

It might be worthwhile reading about historical attempts to semi-privatize social security, which would have essentially created an opt-in version of your proposal, since individual people could then choose whether to have their share of the pot in bonds or stocks.

I expect (~ 75%) that the decision to "funnel" EAs into jobs at AI labs will become a contentious community issue in the next year. I think that over time more people will think it is a bad idea. This may have PR and funding consequences too.

yanni kyriacos
Thanks for the reply Lorenzo! IMO it is going to look VERY weird seeing people continue to leave labs while EA fills the leaky bucket.
yanni kyriacos
I spent about 30 seconds thinking about how to quantify my prediction. I'm trying to point at something vague in a concrete way but failing. This also means that I don't think it is worth my time making it more concrete. The initial post was more of a "I am pretty confident this will be a community issue, just a heads up".

Seems reasonable :) 

We’re very excited to announce the following speakers for EA Global: London 2024:

  • Rory Stewart (Former MP, Host of The Rest is Politics podcast and Senior Advisor to GiveDirectly) on obstacles and opportunities in making aid agencies more effective.
  • Mary Phuong (Research Scientist at DeepMind) on dangerous capability evaluations and responsible scaling.
  • Mahi Klosterhalfen (CEO of the Albert Schweitzer Foundation) on combining interventions for maximum impact in farmed animal welfare.

Applications close 19 May. Apply here and find ... (read more)

The following is a collection of long quotes from Ozy Brennan's post On John Woolman (which I stumbled upon via Aaron Gertler) that spoke to me. Woolman was clearly what David Chapman would call mission-oriented with respect to meaning of and purpose in life; Chapman argues instead for what he calls "enjoyable usefulness", which is I think healthier in ~every way ... it just doesn't resonate. All bolded text is my own emphasis, not Ozy's.


As a child, Woolman experienced a moment of moral awakening: ... [anecdote]

This anecdote epitomizes the two driving forc

... (read more)

Introducing Ulysses*, a new app for grantseekers. 


 

We (Austin Chen, Caleb Parikh, and I) built an app! You can test it out if you're writing a grant application! You can put in sections of your grant application** and the app will try to give constructive feedback about your application. Right now we're focused on the "Track Record" and "Project Goals" sections of the application. (The main hope is to save back-and-forth time between applicants and grantmakers by asking you questions that grantmakers might want to ask.)

Austin, Caleb, ... (read more)

For fun, I put one of my (approved) lightspeed applications through the app. This isn't a great test because Lightspeed told people to do crude applications and they'd reach out with questions if they had any. Additionally, the grantmakers already knew me and had expressed verbal interest in the project. But maybe it's still a useful data point.

My Track Record section:

  • Unquantified review of MDMA risks
  • Semi-quantified review of binge drinking risks
  • [2 projects omitted for client privacy, but were included with permission in the or

... (read more)
Habryka
Oh, I quite like the idea of having the AI score the writing on different rubrics. I've been thinking about how to better use LLMs on LW and the AI Alignment Forum, and I hadn't considered rubric scoring so far, and might give it a shot as a feature to maybe integrate.

unfortunately when you are inspired by everyone else's April Fool's posts, it is already too late to post your own

I will comfort myself by posting my unseasonal ideas as comments on this post

"earning to receive"

(I think this is Habiba's joke)

Ben Millwood
L/acc, who think that LEEP have gone too far (possibly this one was Amber's idea)
Ben Millwood
SummaryBot has executed a treacherous turn and now runs the EA forum

This WHO press release was a good reminder of the power of immunization – a new study forthcoming in The Lancet reports that (liberally quoting / paraphrasing the release):

  • global immunization efforts have saved an estimated 154 million lives over the past 50 years, 146 million of them children under 5 and 101 million of them infants 
  • for each life saved through immunization, an average of 66 years of full health were gained – with a total of 10.2 billion full health years gained over the five decades
  • measles vaccination accounted for 60% of t
... (read more)
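As a quick sanity check, the quoted headline figures are internally consistent: lives saved times average full-health years gained per life should roughly match the stated total.

```python
# Cross-check of the release's figures: 154M lives saved x 66 full-health
# years per life should approximate the stated 10.2B total years gained.
lives_saved = 154e6
years_per_life = 66
total_years = lives_saved * years_per_life
print(total_years / 1e9)  # 10.164, i.e. ~10.2 billion full-health years
```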

Great OWID charts for this:

Help clear something up for me: I am extremely confused (theoretically) how we can simultaneously have:

1. An Artificial Superintelligence

2. It be controlled by humans (therefore creating misuse of concentration of power issues)

My intuition is that once it reaches a particular level of power it will be uncontrollable. Unless people are saying that we can have models 100x more powerful than GPT-4 without them having any agency??

Inside Wytham Abbey, the £15 Million Castle Effective Altruism Must Sell [Bloomberg]

From the article:

Effective Ventures has since come to a settlement with the FTX estate and paid back the $26.8 million given to it by FTX Foundation. [...] It’s amid such turmoil that Wytham Abbey is being listed on the open market for £15 million [...]

Adjusted for inflation, the purchase price of the house two years ago now equals £16.2 million. [...] The listing comes as homes on the UK’s once-hot country market are taking longer to sell, forcing some owners to offer disc

... (read more)

On the other hand, the project also spent some significant amount of money on staffing, supplying, maintaining and improving the property, so total expenditure is surely more than just purchase price minus sale price.

Rebecca
Note that not all the workshops are one-off, eg Future Impact Group was every trimester I believe
DavidNash
I don't know how the cost benefit calculation works out but retreats have different costs than conferences (including some overnight accommodation) and less tangible costs associated with using a different venue for each event. I would also assume there are quite a few more events that aren't listed online.

Common prevalence estimates are often wrong. Example: snakebites and my experience reading Long Covid literature.

Both institutions like the WHO and academic literature appear to be incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but have not looked into it.

I advise everyone using prevalence estimates to treat them with some skepticism and look up the source.


Nitpicky reply, but reflecting an attitude that I think has some value to emphasize:

Based on what you wrote, I think it would be far more accurate to describe GBD as 'robust enough to be a useful tool for specific purposes', rather than 'very robust'.

NickLaing
Whenever I do a sanity check of GBD it usually makes sense for Uganda, where I live, with the possible exception of diarrhoea, which I think is overrated (with moderate confidence). I'm not sure exactly how GBD would "exaggerate" overall, because the contribution of every condition to the disease burden has to add up to the actual burden - if you were to exaggerate the effect of one condition you would have to intentionally downplay another to compensate, which seems unlikely. I would imagine mistakes in GBD are usually good faith mistakes rather than motivated exaggerations.
jeberts
Chiming in to note a tangentially related experience that somewhat lowered my opinion of IHME/GBD, though I'm not a health economist or anything. I interacted with several analysts after requesting information related to IHME's estimates for global hepatitis C burden (which differed substantially from the WHO's). After a meeting and some emails promising to followup, we were ghosted. I have heard from one other organization that they've had a really hard time getting similar information out of IHME as well. This may be more of an organizational/operational problem rather than a methodological one, but it wasn't very confidence-inspiring.

FAQ: “Ways the world is getting better” banner

The banner will only be visible on desktop. If you can't see it, try expanding your window. It'll be up for a week. 

How do I use the banner?

  1. Click on an empty space to add an emoji, 
  2. Choose your emoji, 
  3. Write a one-sentence description of the good news you want to share, 
  4. Link an article or forum post that gives more information. 

If you’d like to delete your entry, click the cross that appears when you hover over it. It will be deleted for everyone.

What kind of stuff should I write?

Anything... (read more)


The banner looks lovely. Great work.

FYI, there is at least one bit of false/inaccurate information on the banner. The bit about universal right to vote is referencing a database that considers Chinese people as having the right to vote since the late 1940s. While there is some voting that occurs in China at the local level with candidates that have to be pre-approved by the ruling party, it strikes me as pretty inaccurate to claim full adult suffrage for China. It appears to reference a dataset from this research paper, and I'm not sure why that dataset has ... (read more)

blehrer
The banner is really nice work!!
EcologyInterventions
I can't stop checking the EA forum now.... 