henryj
151 karma

Posts: 1
Comments: 10 (sorted by new)
JamesÖz's post explaining that the default trajectory for animal welfare is far worse than the default trajectory for global health.

I think this paragraph from the linked article captures the gist:

Near the end of most episodes, Tyler asks some version of this question to his guests: "What is your production function?". For those without an economics background, a "production function" is a mathematical equation that explains how to get outputs from inputs. For example, the relationship between the weather in Florida and the number of oranges produced could be explained by a production function. In this case, Tyler is tongue-in-cheek asking his guests what factors drive their success.
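To make that concrete (this is my own illustrative example, not Tyler's or the article's), the textbook Cobb-Douglas production function relates output to capital and labor inputs:

```latex
% Cobb-Douglas production function (illustrative example, not from the linked article):
% Y = output, K = capital input, L = labor input,
% A = total factor productivity; \alpha, \beta are output elasticities.
Y = A \, K^{\alpha} L^{\beta}
```

In Tyler's metaphorical version, the "inputs" are habits like reading a lot or saying yes to new experiences, and the "output" is the guest's success.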

Not to anchor Singer too much, but other people seem to say things like "saying yes to new experiences," "reading a lot," and "being disciplined."

As I write this comment, the post has negative karma, but nobody seems to have explained why they disagree with it. I haven't made up my mind on this yet, and I'd love to hear from the people pushing back (e.g. those downvoting it).

Yeah, I'm really bullish on data privacy being an effective hook for realistic AI regulation, especially in CA. I think that, if done right, it could be the best option for producing a CA effect for AI. That'll be a section of my report :)

Funnily enough, I'm talking to state legislators from NY and IL next week (each for a different reason, both for reasons completely unrelated to my project). I'll bring this up.

Just as a caveat, this is me speculating and isn't really what I've been looking into (my past few months have been more "would it produce regulatory diffusion if CA did this?"). With that said, the location where a product is produced doesn't really affect whether regulating that product produces regulatory diffusion -- Anu Bradford's criteria are market size, regulatory capacity, stringent standards, inelastic targets, and non-divisibility of production. I haven't seriously looked into it, but I think that, even if all US AI research magically switched to, say, New York, none of those five factors would change for CA (though I do think any CA regulation merely targeting "systems being produced in CA" would be ineffective for a similar reason -- with remote work increasingly acceptable, and with all of these companies, perhaps aside from OpenAI, having myriad offices outside CA, AI production would be too elastic). In this hypothetical, though, CA still has a huge consumer market (both individuals and corporations -- more than 10% of 2021's Fortune 500 companies are based in CA), it still has more regulatory capacity and stricter standards than any other US state, and I think certain components of AI production (e.g. massive datasets, the models themselves) are inelastic and non-divisible enough that CA regulation could still produce regulatory diffusion.

As for why the presence of AI innovation in California makes potential California AI regulation more important, I imagine it's similar to your second suggestion -- that "CA regulation is particularly likely to affect the norms of frontier AI companies" -- though I don't necessarily think awareness is the right vehicle for that change. After all, my intuition is that any company within an order of magnitude or two of Google or Meta would have somebody on staff whose job is to stay abreast of regulation that affects them. I'm far from certain, but if I had to put it in words, I'd say that CA regulation could affect the norms of the field more broadly because of California's unique position at the center of technology and innovation.

To use American stereotypes as analogies, CA enacting AI regulations would feel to me like West Virginia suddenly enacting landmark coal regulation, or Iowa suddenly doing the same with corn. It seems much bigger than New Jersey regulating coal or Maine regulating corn, and it seems to me that WV wouldn't regulate coal unless it were especially important to do so. (This is a flawed analogy, though, since coal/corn is a bigger share of WV/IA's economy than AI is of CA's.)
Either way, if California -- the state that most likely stands to reap the greatest share of AI profits, home to Berkeley, Stanford, and the most AI innovation in the US (maybe in the world? don't quote me on that) -- were to regulate AI, it would send an unmistakable signal about just how important it thinks that regulation is.

Do you think that makes sense?

Great work! I think this is a really important report -- with so many regulatory entities only recently starting to put AI regulations into writing (I'm not at my computer right now, but a few that come to mind are the US's NIST and the UK's Department for Digital, Culture, Media and Sport), we need to get these regulations right.

Also, I'm currently working on a paper/forum post looking into which legislative pathways could produce a California Effect for AI, with a first draft (hopefully) finished in a week or so. Without giving too much away, it feels to me as though California can have a disproportionately large effect on AI, not only because of a state-to-state or state-to-federal CA effect (which would still be huge), but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc.) is happening in California.

I've found '12ft.io' works similarly, fwiw. Per its FAQ, it shows the cached version of the page that Google uses to index content for search results.
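For the curious, here's a minimal sketch of that mechanism -- my own illustration of fetching Google's cached copy of a page, not 12ft.io's actual code (the function name and headers are mine):

```python
# Minimal sketch (my illustration, not 12ft.io's implementation): fetch the
# snapshot of a page from Google's web cache -- the same cached copy that,
# per its FAQ, 12ft.io serves.
import urllib.parse
import urllib.request

def fetch_google_cache(url: str) -> str:
    """Return the HTML of Google's cached snapshot of `url`, if one exists."""
    cache_url = (
        "https://webcache.googleusercontent.com/search?q=cache:"
        + urllib.parse.quote(url, safe="")
    )
    # A browser-like User-Agent; Google may refuse obviously scripted requests.
    req = urllib.request.Request(cache_url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Whether this works for any given page depends on whether Google actually has a cached snapshot of it.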

Same here -- Will MacAskill's publicists are doing a great job getting EA in the public eye right as What We Owe the Future looms. (Speaking of which, the front page of this Sunday's New York Times opinion section is "The Case for Longtermism"!)

On a slight tangent, as a university organizer, I've noticed that few college students have heard of EA at all (based on informal polling outside a dining hall, under 10%). It'll be interesting to see if/how all this contemporary coverage changes that.

I'm also a bit surprised at how many of the comments are concerned about overpopulation. The most-recommended comment is essentially a restatement of the tragedy of the commons. That comment's tone -- and the tone of many like it, as well as a bunch of anti-GOP ones -- feels really fatalistic, which worries me. So many of the comments feel like variations on "we're screwed", which cuts against the belief in a net-positive future upon which longtermism is predicated.

On that note, I'll shout out Jacy's post from about a month ago, which echoes those fears in a more EA-flavored way.
