New & upvoted



Quick takes

Cullen
17h
I am not under any non-disparagement obligations to OpenAI. It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer. I have no further comments at this time.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like “well, you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about x-risk, and that probably makes sense for pragmatic reasons: it’s a very good rebuttal to the “cold and heartless utilitarianism / Pascal's mugging” critique.

But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or that Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless.
Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice. (I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low-hanging fruit in just doing this for 30 minutes a day (I would do it, but my LTFF funding does not cover this). Someone should do this!
Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised how much overlap we had in concerns and potential solutions:

  1. Transparency and explainability of AI model data use (concern)
  2. Importance of interpretability (solution)
  3. Mis/disinformation from deepfakes (concern)
  4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)
  5. Unemployment without safety nets for Australians (concern)
  6. Rate of capabilities development (concern)

They may even support the creation of an AI Safety Institute in Australia. Don't underestimate who could be allies moving forward!
Remember: EA institutions actively push talented people into the companies making the world-changing tech the public has said THEY DON'T WANT. This is where the next big EA PR crisis will come from (50%). Except this time it won’t just be the tech bubble.


Recent discussion

mhendric commented on Why not socialism?

I. Introduction and a prima facie case

It seems to me that most (perhaps all) effective altruists believe that:

  1. The global economy’s current mode of allocating resources is suboptimal. (Otherwise, why would effective altruism be necessary?)
  2. Individuals and institutions can
...

I am similarly unenthused about the weird geneticism. 

Insofar as somewhat more altruism in the economy is the aim, sure, why not! I'm not opposed to that, and you may think that e.g. the Giving Pledge or Founders Pledge are already steps in that direction. But that seems different from what most people think of when you say socialism, which they associate with ownership of the means of production, or very heavy state interventionism and a planned economy! It feels a tiny bit motte-and-bailey-ish.

To give a bit of a hooray for the survey numbers - at the German ... (read more)

Ebenezer Dukakis
Thanks for the response, upvoted.

OP framed socialism in terms of resource reallocation. ("The global economy’s current mode of allocating resources is suboptimal" was a key point, which, yes, sounded like advocacy for a command economy.) I'm trying to push back on millenarian thinking that 'socialism' is a magic wand which will improve resource allocation. If your notion of 'socialism' is favorable tax treatment for worker-owned cooperatives or something, that could be a good thing if there's solid evidence that worker-owned cooperatives achieve better outcomes, but I doubt it would qualify as a top EA cause.

Here in EA, GiveDirectly (cash transfers for the poor) is considered a top EA cause. It seems fairly plausible to me that if the government cut a bunch of non-evidence-backed school and work programs and did targeted, temporary direct cash transfers instead, that would be an improvement.

I'm skimming the post you linked and it doesn't look especially persuasive. Inferring causation from correlation is notoriously difficult, and these relationships don't look particularly robust. (Interesting that r^2=0.29 appears to be the only correlation coefficient specified in the article -- that's not a strong association!)

As an American, I don't particularly want America to move in the direction of a Nordic-style social democracy, because Americans are already very well off. In 2023, the US had the world's second highest median income adjusted for cost of living, right after Luxembourg. From a poverty-reduction perspective, the US government should be focused on effective foreign aid and facilitating immigration. Similarly, from a global poverty reduction perspective, we should be focused on helping poor countries. If "socialism" tends to be good for rich countries but bad for poor countries, that suggests it is the wrong tool to reduce global poverty.
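For intuition on why an r² of 0.29 is a weak association: r² is the fraction of variance in the outcome that the predictor accounts for, so 0.29 leaves roughly 71% unexplained. A minimal sketch (only the 0.29 figure comes from the thread; the rest is arithmetic):

```python
# What r^2 = 0.29 implies, numerically.
r_squared = 0.29
r = r_squared ** 0.5           # implied correlation coefficient
unexplained = 1.0 - r_squared  # variance the predictor does NOT account for
print(f"r = {r:.2f}, unexplained variance = {unexplained:.0%}")
# -> r = 0.54, unexplained variance = 71%
```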
huw
Here is a very long list of large, organised groups failing to engineer transitions to socialism within individual countries, because the United States was larger, more organised, and better-funded.

A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them.


Some quotes perhaps worth highlighting...

Larks
Kelsey suggests that OpenAI may be admitting defeat here: https://twitter.com/KelseyTuoc/status/1791691267941990764

What about for people who’ve already resigned?

jimrandomh
It's a trick! The language shown in this tweet says that departing OpenAI employees are then offered a general release which meets the requirements of this section and also contains additional terms. What a departing OpenAI employee needs to do is have their own lawyer draft, execute, and deliver a general release which meets the requirements set forth. Signing the separation agreement is a mistake, and rejecting the separation agreement without providing your own general release is also a mistake.

I could be misunderstanding this; I'm not a lawyer, just a person reading carefully. And there's a lot more agreement text that I don't have screenshots of. Still, I think the practical upshot is that departing OpenAI employees may be being tricked, and this particular trick seems defeatable to me. Anyone leaving OpenAI really needs a good lawyer.

Introduction

When trying to persuade people that misaligned AGI is an X-risk, it’s important to actually explain how such an AGI could plausibly take control. There are generally two types of scenario laid out, depending on how powerful you think an early AGI would be. ...


For reference, here is a seemingly nice summary of Fearon's "Rationalist explanations for war" by David Patel.

Rasool posted a Quick Take

Swapcard tips:

  1. The mobile browser is more reliable than the app

You can use Firefox/Safari/Chrome etc. on your phone: go to swapcard.com and use that instead of downloading the Swapcard app from your app store. As far as I know, the only thing the app has that the mobile site does not is the QR code that you need when signing in when you first get to the venue and pick up your badge.

  2. Only what you put in the 'Biography' section in the 'About Me' section of your profile is searchable when searching in Swapcard

The other fields, like 'How can I help others' and 'How can others help me', appear when you view someone's profile, but will not be used by Swapcard search. This is another reason to use the Swapcard Attendee Google sheet that is linked to in Swapcard.

  3. You can use a (local!) LLM to find people to connect with

People might not want their data uploaded to a commercial large language model, but if you can run an open-source LLM locally, you can upload the Attendee Google sheet and use it to help you find useful contacts
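The third tip could look something like this hypothetical Python sketch: export the attendee sheet as CSV, build a prompt locally, and hand it to whatever local model runtime you use. The column names and the `ask_local_llm` helper are assumptions for illustration, not Swapcard or Forum APIs:

```python
# Hypothetical sketch: query a locally-run LLM about conference attendees
# without uploading anyone's data to a commercial service.
import csv
import io

def build_prompt(attendee_csv: str, query: str, max_rows: int = 50) -> str:
    """Turn the exported attendee sheet into a prompt for a local model."""
    rows = list(csv.DictReader(io.StringIO(attendee_csv)))[:max_rows]
    profiles = "\n".join(
        f"- {r.get('Name', '?')}: {r.get('Biography', '')}" for r in rows
    )
    return (
        "Here are conference attendees:\n"
        f"{profiles}\n\n"
        f"Question: {query}\nList the most relevant people and why."
    )

# Toy two-row sheet standing in for the real export.
sheet = "Name,Biography\nAda,Works on AI policy in Australia\nBen,Fish welfare researcher"
prompt = build_prompt(sheet, "Who should I talk to about AI regulation?")
# Feed `prompt` to your local model of choice, e.g. via llama.cpp or Ollama:
# response = ask_local_llm(prompt)   # hypothetical helper
```

Capping `max_rows` keeps the prompt within a small local model's context window; for a full attendee list you would chunk the sheet and query in batches.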


Summary: This post argues that brain preservation via fluid preservation could potentially be a cost-effective method to save lives, meriting more consideration as an EA cause. I review the current technology, estimate its cost-effectiveness under various assumptions, and...


The difference is that if you are biologically dead, there is nothing you can do to prevent a malevolent actor from uploading your mind. If you are terminally ill and pessimistic about the future, you can at least choose cremation.

I am not saying that there should be no funding for brain preservation, but personally I am not very enthusiastic since there is the danger that we will not solve the alignment problem.

I am a lawyer. I am not licensed in California, or Delaware, or any of the states that likely govern OpenAI's employment contracts. So take what I am about to say with a grain of salt, as commentary rather than legal advice. But I am begging any California-licensed attorneys reading this to look into it in more detail. California may have idiosyncratic laws that completely destroy my analysis, which is based solely on general principles of contract law and not any research or analysis of state-specific statutes or cases. I also have not seen the actual contracts and am relying on media reports. But.

 

I think the OpenAI anti-whistleblower agreement is completely unenforceable, with two caveats. Common law contract principles generally don't permit surprises, and don't allow new terms unless some mutual promise is made in return for those new terms. A valid contract requires a "meeting...


(Apologies for errors or sloppiness in this post, it was written quickly and emotionally.)

Marisa committed suicide earlier this month. She suffered for years from a cruel mental illness, but that will not be her legacy; her legacy will be the enormous amount of suffering she...


I am shocked and saddened. I did not know Marisa well but we were in the same EA Anywhere discussion group for several months. As you said she was quite funny and I enjoyed talking with her and hearing her ideas. 

Ozzie Gooen
I've known Marisa for a few years and had the privilege of briefly working with her. I was really impressed by her drive and excitement. She seemed deeply driven and was incredibly friendly to be around.  This will take me some time to process. I'm so sorry it ended like this.  She will be remembered.
Julia_Wise
I was so sorry to learn this. Some other resources:

  • 5 steps to help someone who may be suicidal
  • Crisis resources around the world

Years ago Marisa was the first person to put in an application for several EA Globals, to the point where I was curious whether she had some kind of notification set up. I asked her about it once, and she was surprised to hear that she’d been first; she was just very keen.
JackM
Hmm, I don't see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, then their incentive to cut corners should actually decrease, because it becomes more likely that they can win the race without cutting corners. Right now their only hope of winning the race is to cut corners. Ultimately what matters most is what the leadership's views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.

Ultimately what matters most is what the leadership's views are.

I'm skeptical this is true, particularly as AI companies grow massively and require vast amounts of investment.

It does seem important, but it's unclear that it matters most.

Habryka
Yeah, I don't think this is a crazy take. I disagree with it based on having thought about it for many years, but I agree that it could make things better (though I don't expect it would; I'd expect it to instead make things worse).

Summary

  • Fish stocking[1] is the practice of raising fish in hatcheries and releasing them into rivers, lakes, or the ocean.
  • 35-150 billion finfish are stocked every year.
  • Fish are stocked to:
    • increase the catch in commercial fisheries (probably tens of billions of stocked fish
...

I'm writing a quick piece on the scale, in case you (or anyone else) is interested in giving feedback before I post it (probably next week).

MichaelStJules
Well fuck, I guess this probably explains it. Yao & Li, 2018: This also makes substitutes for fish fry not very promising; they'd probably also have to be other animals. But maybe we could find some that matter much less per kg. Otherwise, we'd probably just want to reduce mandarin fish production, which could be hard to target specifically, especially since it's in China. Some different fry numbers appear in Hu et al., 2021: