David Mathers🔸

Well, at a technical level, the first is a conditional probability and the second is the unconditional probability of a conjunction. So the first is to be read as "the probability that alignment is achieved, conditional on humanity creating a spacefaring civilization", whilst the second is "the probability that the following happens: alignment is solved and humanity creates a spacefaring civilization". If you think of probability as a space, where the likelihood of an outcome = the proportion of the space it takes up, then:

-the first is the proportion of the region of probability space taken up by humanity creating a space-faring civilization in which alignment occurs.

-the second is the proportion of the whole of probability space in which both alignment occurs and humanity creates a space-faring civilization.
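The distinction can be made concrete with a toy numerical sketch (all probabilities here are made up purely for illustration, not estimates of the actual quantities):

```python
# Toy illustration of P(Alignment | SFC) vs P(Alignment AND SFC).
# All numbers are invented for the example.

p_sfc = 0.4            # P(humanity creates a spacefaring civilization)
p_align_and_sfc = 0.1  # P(alignment is solved AND an SFC is created)

# Conditional probability: the share of the SFC region of probability
# space in which alignment also occurs.
p_align_given_sfc = p_align_and_sfc / p_sfc

print(p_align_given_sfc)  # 0.25
print(p_align_and_sfc)    # 0.1
```

Because the joint probability is a share of the whole space while the conditional is a share of only the SFC slice, an intervention can raise the first while lowering the second: for example, one that makes SFC more likely mainly in scenarios where alignment fails.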

But yes, knowing that does not automatically bring real understanding of what's going on. Or at least for me it doesn't. Probably the whole idea being expressed would be better written up much more informally, focusing on a concrete story of how particular actions taken by people concerned with alignment might surprisingly be bad or suboptimal. 

Fair enough. I actually think it is very hard to discover causal relationships in any social scientific domain. I still strongly suspect that dictatorial governments are bad, however. (It's almost impossible to get data on the effects of countries that are highly developed by modern standards ceasing to be democracies, because this has almost never happened.) 

It's unclear what (economic) libertarianism implies about the Trump administration. They will cut taxes, but they might also put up tariffs. 

"Some existing AI Safety agendas may increase P(Alignment AND Humanity creates an SFC) while at the same time not increasing as much or even, if unlucky, reducing P(Alignment | Humanity creates an SFC). For example, such agendas may significantly prevent early AIs and AI usages from destroying, at the same time, the potential of Humanity and AIs. "

This is compressing a complicated line of thought into such a small number of words that I find it impossible to understand. 

"It is unclear to me whether less democracy would increase or decrease economic growth, which has been very connected to human welfare. So I do not know whether less democracy would increase or decrease human welfare."

I usually think your posts are very good because you are prepared to honestly and clearly state unpopular beliefs. But this seems a bit glib: economic growth is not the only thing that affects well-being, by any means, and so simply being unsure about how democracy affects it is not, on its own, a strong case for being unsure whether democracy increases or decreases human well-being. Growth might be the most important thing of course, but if you really are neutral on the effect of democracy on growth, other factors will still determine whether you should think democracy is net beneficial for humans in expectation. 

Also, in the particular case of the US, to evaluate whether democracy continuing is a good thing for human well-being, what primarily matters is how democracy shapes up versus the realistic alternatives in the US, not whether democracy is the best possible system in principle, or even the best feasible system in most times and places. It's not like we are comparing democracy in the US to the Chinese communist system, market anarchism, sortition or the knowledge-based restrictions on the franchise suggested by Jason Brennan in his book Against Democracy. We are comparing it to "on the surface democracy, but really Musk and Trump use the justice department to make it impossible for credible opponents to run against the Republican party for many national offices or against their favoured candidates in crucial Republican primaries, and also Musk can in practice stop any government payment to anyone so long as Trump himself doesn't prevent him doing so." Maybe you think the risk of that is low, but that's what people are worried about. Maybe you also think that might be good, because Republican policies might be better for growth and that dominates all other factors, but even then, it's worth being clear about what you are advocating agnosticism about, and it's not the merits of democracy in the abstract, but the current situation in the US. 

"Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate."

I have never agreed with anything as strongly in my life. Both these things are bad, and we don't need to choose a side between them. And note that the issue here isn't about these things being "extreme". An article that actually tries to make a case for foom by 2027, or for "this is all nonsense, it's just fancy autocomplete and overfitting on meaningless benchmarks", could easily be excellent. The problem is people not giving reasons for their stances, and either re-writing PR, or just expressing social distaste for Silicon Valley, as a substitute. 

It's not surprising that they are getting rid of the safety people, but getting rid of the CHIPS Act people seems to me to be evidence in favour of the "genuinely idiotic, rather than Machiavellian geniuses" theory of Trump and Musk. Presumably Trump still wants to be more powerful than China even if he moves away from hawkishness towards making friends. And Musk presumably wants Grok to be better than the best Chinese models. (In Musk's case, of course, it's possible he doesn't actually favour getting rid of the CHIPS staff.) 

Fair point. I certainly don't think it is established (or even more than 50% likely) that SBF was purely motivated by narrow personal gain to the exclusion of any real utilitarian convictions at all. But I do think he misrepresented his political convictions. 

"how much AGI companies have embedded with the national security state is a crux for the future of the lightcone"

What's the line of thought here? 

I don't think cutting ties with Palantir would move the date of AGI much, and I doubt it is the key point of leverage for whether the US becomes a soft dictatorship under Trump. As for the other stuff, people could certainly try, but I think it is unlikely to succeed, since it basically requires getting the people who run Anthropic to act against the very clear interests of Anthropic and of the people who run it. (And I doubt that Amodei, in particular, sees himself as accountable to the EA community in any way whatsoever.) 

For what it's worth, I also think this is complicated territory, that there is genuinely a risk of very bad outcomes from China winning an AI race too, and that the US might recover relatively quickly from its current disaster. I expect the US to remain somewhat less dictatorial than China even in the worst outcomes, though it is also true that even the democratic US has generally been a lot more keen to intervene, often but not always to bad effect, in other countries' business. 
