Larks

15738 karma

Comments: 1491 · Topic contributions: 4

It's not clearly bad. Its badness depends on what the training is like, and on your views about a complicated set of background topics involving gender and feminism, none of which have clear and obvious answers.

The topic here is whether the administration is good at using AI to identify things it dislikes. Whether or not you personally approve of using scientific grants to fund ideological propaganda is, as the OP notes, beside the point. Their use of AI thus far is, according to Scott's data, a success by their lights, and I don't see much evidence to support huw's claim that they are being 'unthoughtful' or overconfident. They may disagree with huw about goals, but given those goals, they seem to be doing a reasonable job of promoting them.

It seems pretty appropriate and analogous to me - the administration wants to ensure that 100% of science grants go to science, not 98%, and similarly they want to ensure that 0% of foreign students support Hamas, not 2%. Scott's data suggests they have done a reasonably good job with the former, identifying even 2%-woke grants, and likewise if they identified someone who spends 2% of their time supporting Hamas they would consider this a win.

Thanks for sharing this - very informative and helpful for highlighting a potential leverage point. Strong upvoted.

One minor point of disagreement: I think you are being a bit too pessimistic here:

And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.

There are few examples of US courts blowing up large US corporations, but that is not exactly the situation here. OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they have no proof. There is a long history of businesses exaggerating the harm from new regulations, claiming they will be ruinous when in fact human ingenuity and entrepreneurship render them merely disadvantageous. The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of unproven claims that this cannot continue.

I think a closer example might be when the DC District Court sided with the FTC and blocked the Staples-Office Depot merger on somewhat dubious grounds. The court didn't directly implode a massive retailer... but Staples did enter administration shortly afterwards, and my impression at the time was that the causal link was pretty clear.


at best had a 40% hit rate on ‘woke science’ 

This is... not what the attached source says? Scott estimates 40% woke, 20% borderline, and 40% non-woke. 'At best' implies an upper bound, which on this methodology would be 60% (counting borderline cases as hits), not 40%.

But even beyond that, I think Scott's grading is very harsh. He says most grants that he considered to be false positives contained stuff like this (in the context of a grant about Energy Harvesting Systems):

The project also aims to integrate research findings into undergraduate teaching and promote equitable outcomes for women in computer science through K-12 outreach program.

But... this is clearly bad! The grant basically says it is mainly for engineering research, but they are also going to siphon off some of the money to do sex-discriminatory ideological propaganda in kindergartens. This is absolutely woke,[1] and it makes total sense why the administration would want to stop this grant. If the scientists want to do just the legitimate scientific research, which seems to be most of the application, they should resubmit with only that and take out the last bit.

Some people defend the scientists here by saying that this sort of language was strongly encouraged by previous administrations, which is true and relevant to the degree of culpability you assign to the scientists, but not to whether or not you think the grants have been correctly flagged.

His borderline categorisation seems similarly harsh. In my view, this is a clear example of woke racism:

enhance ongoing education and outreach activities focused on attracting underrepresented minority groups into these areas of research

Scott says these sorts of cases make up 90% of his false positives. So I think we should adjust his numbers to produce a better estimate:

  • 40% woke according to Scott
  • +20% borderline woke
  • +90% × 40% = 36% from grants incorrectly labeled as false positives

= 96% hit rate.
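Spelling out the arithmetic (this just combines Scott's published 40%/20%/40% split with my adjustment above, which reclassifies 90% of his false positives as hits):

$$
\underbrace{0.40}_{\text{woke}} + \underbrace{0.20}_{\text{borderline}} + \underbrace{0.90 \times 0.40}_{\text{reclassified}} = 0.40 + 0.20 + 0.36 = 0.96
$$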

  1. If you doubt this, imagine how a typical leftist grant reviewer would evaluate a grant that said some of the money was going to support computer science for white men.

The vast majority of amicus curiae briefs are filed by allies of litigants and duplicate the arguments made in the litigants’ briefs, in effect merely extending the length of the litigant’s brief. Such amicus briefs should not be allowed. They are an abuse. The term 'amicus curiae' means friend of the court, not friend of a party.

I agree that most such briefs are from close ideological allies, but I'm curious about your suggestion that the court would reject them on this ground. Surely the organizations that file somewhat duplicative amicus curiae briefs all the time do so because they think it is helpful?

And EA is aimed in many ways at maintaining exclusivity, even while incredible people like Julia make great strides in making it more inclusive. For example, some people in EA think my EA-oriented after-school program is a waste of time because it's not directed at the highest achievers. 

This anecdote seems like very weak evidence for your claim. Claiming EA is 'aimed in many ways' at something implies a concerted effort to achieve it, even at the cost of other goals. In contrast, some people saying a program is a waste of time means just that - it's not producing much value. The whole point of EA is to prioritize - disfavoring donkey sanctuaries doesn't mean EAs hate donkeys, it just means there are other, better things to focus on.

Even 80,000 hours career advice applies not at all to the average person, but is oriented only to those who are already going to spend 6+ years shelling out money for undergrad and grad school, etc (at least last I checked). [emphasis added]

This seems clearly false to me. To test it, I looked at the very first article in their career guide, one of their flagship products. It is about doing engaging work that helps others, avoiding major downsides, etc. As far as I can see, almost every part of it applies to average people. The income-satisfaction charts they include have an x-axis running from $10k to $210k, a range that covers the median income. It is not in any way dependent on your having a postgraduate qualification. And I have no idea where you get '6+ years shelling out money' from - surely most of their advice also applies to autodidacts, people who finish more quickly, people in countries with state-funded universities, people who get scholarships, etc.?

Globally, the general public would, I suspect, be much more sympathetic to a case brought by an AG than by Musk.

Is this very relevant? The case will be decided by a specialist judge, not the public, and Musk bringing a suit doesn't preclude the AGs from bringing their own cases.

Thanks for sharing, interesting article.

It seems like we basically have a situation where supply is low but the marginal units are very valuable (as they are replacing small gensets). If so, is the underlying issue just that prices are too low? If electricity prices were allowed to rise to the market-clearing rate (ideally via the sort of market mechanisms used in PJM or ERCOT, but alternatively just through PPAs), supply would increase, and there might be some demand-side efficiency improvements. The government shouldn't need to build generation capacity - this is firmly within the expertise of the private sector.

A $1 estimate seems like a big underestimate of the cost to me. Aside from the discomfort, being able to see people's lips is useful in a loud environment.

I think nasal sprays are a more cost-effective solution.
