
🔸Zachary Brown

243 karma

Comments (25)

Thanks for the comment. I was clearly too quick with that opening statement. Perhaps in part I let my epistemic guard down there out of general frustration at the neglectedness of the topic, and a desire to attract some attention with a bold opener. So much harm could accrue to nonhuman animals relative to humans, and I really want more discussion on this. PLF is -- I've argued, anyway -- a highly visible threat to the welfare of zillions, but rarely mentioned. I hope you'll forgive an immodest but emotional claim.

I've edited the opener and the footnote to be more defensible, in response to this comment.

I actually don't believe, in the median scenario, that AIs are likely to both outnumber sentient animals and have a high likelihood of suffering, but I don't really want that to be the focus of this piece. And either way, I don't believe that with high certainty: in that respect, the statement was not reflective of my views.

Some of this discussion reminds me of Mill's discussion in his super underrated essay "Utility of Religion". There he proposes a kind of yangy humanistic religion, against a backdrop of atheism and concern about the evils of nature. Worth a read.

Thanks for the comment!

I agree that the case for political tractability is mixed. I'm curious, though, why you don't find compelling the argument that the particular people who have influence on AI policy are more amenable to animal-related concerns. (To put it bluntly: EAs care about animals and are influential in AI, and animal ag industry lobbying hasn't really touched this issue yet.)

I like the analogy to cage-free campaigns, although I think I would draw different lessons from it. I don't really think that support for cage-free campaigns reflects a preference for restrictions that help individual animals over restrictions that reduce the total number of farmed animals. Instead, I think it comes from support for traditional and "natural" ways of farming (where the chickens are imagined to roam free) over industrialised, modern, and intensive farming methods. On this view, cage-free campaigns succeed because they target only the farming methods the public disapproves of. This theory can also explain why people express disapproval of factory farming but strong approval of farming and farmers.

I think PLF is a politically tractable target for regulation because, like cage-free campaigns, it targets only the type of farming people already dislike. When I say "End AI-run factory farms!", the slogan makes inherently salient the technological, non-natural, industrial nature of the farming method. Restrictions here might not be perceived as restrictions on farming; they'll be perceived only as restrictions on a certain sinister form of unnatural, industrialised farming. (The general public mostly doesn't realise that most farming is industrialised.) To put this another way: I think the most politically tractable pro-animal movements are the ones that explicitly restrict their focus to Big Evil Factory Farms and leave Friendly Farmer Joe alone. I think PLF restrictions share this character with cage-free campaigns.

And we know from cage-free campaigns that people are sometimes willing to tolerate restrictions of this sort even if they are personally costly.

I basically fail to imagine a scenario where publishing the Trust Agreement is very costly to Anthropic—especially just sharing certain details (like giving percentages rather than saying "a supermajority")—other than one where the details are weak and would make Anthropic look bad.

Anthropic might be worried that the details are strong, and would make Anthropic look vulnerable to governance chaos similar to what happened at OpenAI during the board turnover saga. A large public conversation on this could be bad for Anthropic's reputation among its investors, team, or other stakeholders, who have concerns other than long-term safety, or who might think that Anthropic's non-profit-motivated governance is opaque or bad for whatever other reason. To put this another way: Anthropic is probably reputation-managing, but it might not be their safety reputation that they are trying to manage. It might be their reputation -- to potential investors, say -- as a reliable actor with predictable decision-making that won't be upturned at the whims of the trust.

I would expect, though, that Anthropic's major investors know the details of the governance structure and mechanics.

 

I'm in the early stages of corporate campaign work similar to what's discussed in this post. I'm trying to mobilise investor pressure to advocate for safety practices at AI labs and chipmakers. I'd love to meet with others working on similar projects (or anyone interested in funding this work!). I'd be eager for feedback.

You can see a write-up of the project here.

  • Frankenstein (Mary Shelley): moral circle expansion to a human-created AI, kinda.
  • Elizabeth Costello (J. M. Coetzee): novel about a professor who gives animal rights lectures. The chapter that's most profoundly about animal ethics was published separately as "The Lives of Animals", printed with commentary from Peter Singer (in narrative form!).
  • Darkness at Noon (Arthur Koestler): novel about an imprisoned old Bolshevik reflecting on his past revolutionary activity. Interesting reflections on ends-vs.-means reasoning, and on weighing moral scale / the numbers affected against personal emotional connection in moral tradeoff scenarios.

I really appreciated this post and its sequel (and await the third in the sequence)! The "second mistake" was totally new to me, and I hadn't grasped the significance of the "first mistake". The post did persuade me that the case for existential risk reduction is less robust than I had previously thought.

One tiny thing. I think this should read "from 20% to 10% risk":

More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 70% risk, from 20% to 18% risk, or from 10% to 0% risk. (Formally, relative risk reduction by f takes us from risk r to risk r – f).
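To spell out the arithmetic behind my quibble (my own restatement of the two notions, not a quote from the post):

```latex
% Absolute risk reduction by f: subtract f from the current risk r
r \;\mapsto\; r - f
% Relative risk reduction by f: remove a fraction f of the current risk r
r \;\mapsto\; r\,(1 - f)
```

On the absolute reading, a 10% reduction takes 80% to 70%, 20% to 10%, and 10% to 0%; "from 20% to 18%" is what the relative reading would give.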

Thanks for writing this! Hoping to respond more fully later. 
In the meantime: I really like the example of what a "near-term AI-Governance factor collection" could look like.

So the question is 'what governance hurdles decrease risk but don't constitute a total barrier to entry?'

I agree. There are probably some kinds of democratic checks that honest UHNW individuals wouldn't mind, but that would bring relatively big improvements for epistemics and community risk. Perhaps there are ways to add incentives for agreeing to audits or democratic checks? It seems like SBF's reputation as a businessman benefited somewhat from his association with EA (I am not too confident in this claim). Perhaps offering some kind of "Super Effective Philanthropist" title/prize/trophy to UHNW donors who agree to subject their donations to democratic checks or financial audits could be an incentive? (I'm pretty skeptical, but unsure.) I'd like to do some more creative thinking here.

I wonder if submitting capital to your proposal seems a bit too much like the latter.

Probably.
