Garrison

Pasted from LW:

Hey Seth, appreciate the detailed engagement. I don't think the 2017 report is the best way to understand China's intentions with respect to AI, but there was nothing in the report to support Helberg's claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was unable to find evidence for the "China is racing to AGI" claim is actually pretty convincing evidence in itself. I'm very interested in better understanding China's intentions here and plan to do a deep dive into the topic over the next few months, but I didn't want to wait until I could exhaustively search for the evidence the report should have offered while an extremely dangerous and unsupported narrative takes off.

I also really don't get the pushback on the errors. These were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I'm not one to gatekeep AI expertise, but I don't think it's too much to expect a congressional commission whose top recommendation is to commence a militaristic AI arms race to have SOMEONE read a draft who knows that "ChatGPT-3" isn't a thing.

Yeah, I got some pushback on Twitter on this point. I now agree that it's not a great analogy. My thinking was that we technically know how to build a quantum computer, but not one that is economically viable (which requires technical problems to be solved and for the thing to be scalable/not too expensive). Feels like an "all squares are rectangles, but not all rectangles are squares" thing. Like, quantum computing ISN'T economically viable, but that's not the main problem with it right now.

"With Islamic terrorism, these involved mass surveillance and detention without trial."

I think Islamist terrorism would be more accurate and less inflammatory. 

I think building AI systems with some level of autonomy/agency would make them much more useful, provided they are still aligned with the interests of their users/creators. There's already evidence that companies are moving in this direction based on the business case: https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=Further%2C%20academics%20and,is%20pretty%20good.%E2%80%9D

This isn't exactly the same as self-interest, though. I think a better analogy might be human domestication of animals for agriculture. It's not in the self-interest of a factory-farmed chicken to be on a factory farm, but humans have power over which animals exist, so we'll make sure there are lots of animals who serve our interests. AI systems will be selected for to the extent that they serve the interests of the people making and buying them.

RE international development: competition between states undercuts arguments for domestic safety regulations/practices. These pressures are exacerbated by beliefs that international rivals will behave less safely/responsibly, but you don't actually need to believe that to justify cutting corners domestically. If China or Russia built an AGI that was totally safe in the sense that it was aligned with its creators' interests, that would still be seen as a big threat by the US govt.

If you think that building AGI is extremely dangerous no matter who does it, then having more well-resourced players in the space increases the overall risk. 

People can and should read whoever and whatever they want! But who a conference chooses to platform/invite reflects on the values of the conference organizers, and any funders and communities adjacent to that conference. 

Ultimately, I think that almost all of us would agree that it would be bad for a group we're associated with to platform/invite open Nazis. I.e., almost no one is an absolutist on this issue. If you agree, then you're not in principle opposed to excluding people based on the content of their beliefs, so the question just becomes: where do you draw the line? (This is not a claim that anyone at Manifest actually qualifies as an open Nazi; it's more of a reductio to illustrate the point.)

Answering this question requires looking at the actual specifics: What views do people hold? Were those views legible to the event organizers? I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to "truth-seeking," when the debate is actually about what kinds of people and worldviews we give status to, and what effects that has on related communities.

If you think that EA-adjacent orgs/venues should platform open Nazis as long as they use similar jargon, then I simply disagree with you, but at least you're being consistent.

My mistake on the Guardian US distinction, but to call it a "small newspaper" is wildly off base, and for anyone interacting with the piece on social media, the distinction is not legible.

Candidly, I think you're taking this topic too personally to reason clearly. I think any reasonable person evaluating the online discussion surrounding Manifest would see it as "controversial." Even if you completely excluded the Guardian article, this post, Austin's, and the deluge of comments would be enough to show that.

It's also no longer feeling like a productive conversation, and it distracts from the object-level questions.
