det

Comments

Yes, based on the last month, "the number of safety-focused researchers is dropping rapidly" certainly seems true.

I'd guess "most" is still an overstatement; I doubt the number of people has actually dropped by >50%. But the departures, plus learning that Superalignment never got their promised compute, have caused me to revise my fuzzy sense of "how much core safety work OpenAI is doing" down by a lot, probably over 50%.

det

Relevant: Émile Torres posted a "TESCREAL FAQ" today (unrelated to this article I assume; they'd mentioned this was in the works for a while).

I've only skimmed it so far, but here's one point that directly addresses a claim from the article.

Ozy:

However, Torres is rarely careful enough to make the distinction between people’s beliefs and the premises behind the conversations they’re having. They act like everyone who believes one of these ideas believes in all the rest. In reality, it’s not uncommon for, say, an effective altruist to be convinced of the arguments that we should worry about advanced artificial intelligence without accepting transhumanism or extropianism. All too often, Torres depicts TESCREALism as a monolithic ideology — one they characterize as “profoundly dangerous.”

TESCREAL FAQ:

5. I am an Effective Altruist, but I don't identify with the TESCREAL movement. Are you saying that all EAs are TESCREALists?

[...] I wouldn’t say—nor have I ever claimed—that everyone who identifies with one or more letters in the TESCREAL acronym should be classified as “TESCREALists.” ... There are some members of the EA community who do not care about AGI or longtermism; their focus is entirely on alleviating global poverty or improving animal welfare. In my view, such individuals would not count as TESCREALists.

Having followed Torres's work for a while, I felt like Ozy's characterization was accurate -- I've shared the impression that many uses of TESCREAL have blurred the boundaries between the different movements / treated them like a single entity. (I don't have time to go looking for quotes to substantiate this, however, so it's possible my memory isn't accurate -- others are welcome to check this if they want.) Either way, it seems like Torres is now making an effort to avoid this (mis)use of the label.

the number of safety focussed researchers employed by OpenAI is dropping rapidly

Is this true? The links only establish that two safety-focused researchers have recently left, in very different circumstances.

It seemed like OpenAI made a big push for more safety-focused researchers with the launch of Superalignment last July; I have no idea what the trajectory looks like more recently.

Do you have other information that shows that the number of safety-focused researchers at OpenAI is decreasing?

det

As an outsider to the field, here are some impressions I have:

  • The NY declaration is very short and uses simple language, which makes it a useful tool for communicating with the public. Compare to this sentence from the Cambridge declaration:

The neural substrates of emotions do not appear to be confined to cortical structures. In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals.

  • The Cambridge declaration is over a decade old. Releasing a similar statement is another chance for media attention and indicates that the consensus hasn't shifted in the opposite direction.
  • The NY declaration places a clearer emphasis on "cephalopod mollusks, decapod crustaceans, and insects." 

    A footnote to the Cambridge declaration mentions "decapod crustaceans, cephalopod mollusks, and insects" in a somewhat confusing way: it says there is "very strong evidence" that these animals also "possess the neurological substrates of consciousness," but they aren't mentioned in the main declaration because no presentations covered them at the conference where it was signed.

    Insofar as the NY declaration is meant to support shrimp or insect welfare, this seems like a plus.

det

Feedback on third episode: Also really liked it! Felt different from the first two. Less free-wheeling, more clearly useful. (Still much more on the relaxed, informal side than main-feed 80k podcasts.)

Felt very useful to get an inside perspective on what 80k thinks it's doing with career advising. I really appreciated Dwarkesh kicking the tires on the theory of change ("why not focus 100% on the tails?"), as well as the responses.

It wasn't entirely an easy listen. I identify with some common EA tropes: trying to push myself to be more ambitious, even though this doesn't come naturally, so I often end up feeling bad about how non-agentic I am. Ex ante trying some things to see if I'm in the right tail of the distribution, figuring I'm probably not, and being kind of upset and adrift about it.

I personally appreciate that 80k thinks a lot about doing right by people like me. It was somewhat hard to hear Dwarkesh focus so intently on people at the tails, as if the other 99% of us are a rounding error, but I see the case for it and I'm not sure it's completely wrong. (I'm not supposed to be the primary beneficiary of 80k advising / other EA resources. If I voluntarily sign up to try being an ambitious altruist, and later feel bad about not (yet) succeeding, I'm not sure I get to blame anyone except myself.)

det

Feedback on first two episodes: I really enjoyed them, and was instantly sold on this series. I felt like I was sitting in on fun people having great conversations. Wasn't really sure what the impact case was for these, but they gave me a feeling I have at the best EA meetups: oh my gosh, these are my people. [1]

(Feedback on third episode in another comment)

  1. ^

    I have some reservations about this: the cultural characteristics that set off the "my people" sense don't seem too strongly connected to doing the most good? So while I love finding "my people," it's strange that they are such a big fraction of EA, both at local meetups and apparently at 80k.

det

Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."

Doesn't seem like they have a website yet.

det

This seems relevant to any intervention premised on "it's good to reduce the number of net-negative lives lived."

If factory-farmed chickens have lives that aren't worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn't improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don't know empirically how true that is.)

I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.

det

I wholeheartedly agree, and think we need to look elsewhere to apply this model.

Donor lotteries unhealthily exhibit winner-take-all dynamics, centralizing rather than distributing power. If the winner makes a bad decision, the impact of that money evaporates -- it's a very risky proposition.

A more robust solution would be to proportionally distribute the funds to everyone who joins, based on the amount they put in. This would democratize funding ability across EA and lead to a much healthier funding ecosystem.
