I'm not sure why you chose to frame your comment in such an unnecessarily aggressive way, so I'm just going to ignore that and focus on the substance.

Yes, the Studio Ghibli example is representative of AI decentralizing power:

  • Previously, only a small group of people had an ability (to make good art, or diagnose illnesses, or translate a document, or do contract review, or sell a car, or be a taxi driver, etc.)
  • Now, due to a large tech company (e.g. Google, Uber, AirBnB, OpenAI), everyone who could before still can, and ordinary people now can too. This is a decentralization of power.
  • The fact that this was not due to an ideological choice made by AI companies is irrelevant. Centralization and decentralization often occur for non-ideological reasons.
  • The fact that things might change in the future is also not relevant. Yes, maybe one day Uber will raise prices to twice the level taxis used to charge, with four times the wait time and ten times the odor. But for now, they have helped decentralize power.
  • The producers who are now subject to increased competition are, unsurprisingly, upset. For fairly nakedly self-interested reasons, they demand regulation.
  • Ideological leftists provide rhetorical ammunition to the rent-seekers, in classic baptists and bootleggers style.
  • These demands for regulation affect four different levels of the power hierarchy:
    • The government (most powerful): increases power
    • Tech platform: reduces power
    • Incumbent producers: increases power
    • Ordinary people (least powerful): reduces power
  • Because leftists focus on the middle two levels - tech platforms versus incumbent producers - characterizing it as a battle between small artisans and big business, they falsely claim to be pushing for power to be decentralized.
  • But actually they are pushing for power to be more centralized: from tech companies towards the leviathan, and from ordinary people towards incumbent producers.

AI art seems like a case of power becoming decentralized: before this week, few people could make Studio Ghibli-style art. Now everyone can.


Intuitively, it seems like the answer should typically be no, unless you do some sort of absurd trick to try to exploit this (e.g. you and your spouse both work for the same company, and you offer to take a pay cut if your spouse gets an equal pay increase).

This seems like one of many Manifold markets with terrible resolution criteria. Wikipedia is not an oracle; it is a website run by Trump's political opponents, who are willing to use skullduggery to promote their political agendas. Even a quick look at this page reveals a bizarre collection of events. It includes things like this:

In 2017, the eligibility of a number of Australian parliamentarians to sit in the Parliament of Australia was called into question because of their actual or possible dual citizenship. The issue arises from section 44 of the Constitution of Australia, which prohibits members of either house of the Parliament from having allegiance to a foreign power. Several MPs resigned in anticipation of being ruled ineligible, and five more were forced to resign after being ruled ineligible by the High Court of Australia, including National Party leader and Deputy Prime Minister Barnaby Joyce. This became an ongoing political event referred to variously as a "constitutional crisis"[34][35] or the "citizenship crisis".

Inclusion of this sort of event suggests a very low bar for what constitutes a crisis. Yet many objectively more significant events are simply omitted!

I can see why the market is trading above 50% - you can just look at the talk page to see people are leaning this way. Arguably this market should have already closed, because the Wikipedia page did list it for a while (there was weasel language, but it clearly was 'listed', which was the resolution criterion), prior to the market's rules being [clarified/changed] to include a vague appeal to 'broader consensus'. But I think this mainly tells us about Wikipedia, rather than about reality.

After all, we don't want to do the most good in cause area X but the most good, period.

Yes, and 80k think that AI safety is the cause area that leads to the most good. 80k never covered all cause areas - they didn't cover the opera or beach cleanup or college scholarships or 99% of all possible cause areas. They have always focused on what they thought were the most important cause areas, and they continue to do so. Cause neutrality doesn't mean 'supporting all possible causes' (which would be absurd); it means 'being willing to support any cause area, if the evidence suggests it is the best'.

Makes sense, seems like a good application of the principle of cause neutrality: being willing to update on information and focus on the most cost-effective cause areas.

Take malaria prevention. It’s a top EA cause because it’s highly cost-effective: $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources?

I think there are basically two ways of looking at this question.

One is the typical EA/'consequentialist' approach. Here you accept that some amount of the money will be wasted (fraud/corruption/incompetence), build this explicitly into your cost-effectiveness model, and then see what the bottom line is. If I recall correctly, GiveWell explicitly assumes something like 50% of insecticide-treated bednets are not used properly; without that adjustment, the intervention would look roughly twice as cost-effective. $1.6m of mismanagement seems relatively small compared to the total size of anti-malaria programs, so presumably doesn't move the needle much on the overall QALY/$ figure. This sort of approach is also common in areas like for-profit business (e.g. half of all advertising spending is wasted, we just don't know which half...) and welfare states (e.g. tolerated disability benefit fraud in the UK). To literally answer your question: that $1.6m is presumably not the best use of resources, but we tolerate that loss because the rest of the money is used for very good purposes, so overall malaria aid is (plausibly) the best use of resources.
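
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 50% usage adjustment and the $5,000 headline figure come from the discussion above (recalled, not verified); the $100m program budget is a purely illustrative assumption, not GiveWell's actual model:

```python
# Sketch: folding expected waste into a cost-effectiveness estimate.
# The usage rate and headline cost are from the comment above;
# the program budget is a made-up illustrative number.

headline_cost_per_life = 5_000   # $ per life saved, after adjustments
proper_usage_rate = 0.50         # assumed fraction of bednets used properly

# Without the usage adjustment, the program would look twice as cost-effective.
unadjusted = headline_cost_per_life * proper_usage_rate
print(f"Without usage adjustment: ${unadjusted:,.0f} per life")  # $2,500

# Layering in the mismanaged funds barely moves the bottom line.
program_budget = 100_000_000     # illustrative total spending, $
mismanaged = 1_600_000           # Uganda figure from the quoted post, $
usable_fraction = (program_budget - mismanaged) / program_budget  # 0.984

cost_with_mismanagement = headline_cost_per_life / usable_fraction
print(f"With mismanagement: ${cost_with_mismanagement:,.0f} per life")  # ~$5,081
```

On these (made-up) totals, the mismanagement worsens the cost per life saved by under 2%, which is why the consequentialist approach shrugs it off.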

The alternative is a more deontological approach, where basically any fraud or malfeasance is grounds for a radical response. This is especially common in cases where adversarial selection is a big risk, where any tolerated bad actors will rapidly grow to take a large fraction of the total, or where people have particularly strong moral views about the misconduct. Examples include zero-tolerance schemes for harassment in the workplace, DOGE hunting down woke in USAID/NSF, or the Foreign Corrupt Practices Act. In cases like this, people are willing to cull the entire flock just to stop a single infected bird: a drastic measure can be warranted to eliminate a hidden threat.

In the malaria example, if the cost is merely that $1.6m is set on fire, the first approach seems pretty appropriate. The second approach seems more applicable if you thought the $1.6m was having actively negative effects (e.g. supporting organised crime) or was liable to grow dramatically if not checked.

It's not clearly bad. Its badness depends on what the training is like, and on your views about a complicated background set of topics involving gender and feminism, none of which have clear and obvious answers.

The topic here is whether the administration is good at using AI to identify things it dislikes. Whether or not you personally approve of using scientific grants to fund ideological propaganda is, as the OP notes, beside the point. Their use of AI thus far is, according to Scott's data, a success by their lights, and I don't see much evidence to support huw's claim that they are being 'unthoughtful' or overconfident. They may disagree with huw on goals, but given those goals, they seem to be doing a reasonable job of promoting them.

It seems pretty appropriate and analogous to me - the administration wants to ensure 100% of science grants go to science, not 98%, and similarly they want to ensure that 0% of foreign students support Hamas, not 2%. Scott's data suggests they have done a reasonably good job with the former, identifying even 2%-woke grants, and likewise if they identified someone who spends 2% of their time supporting Hamas they would consider this a win.

Thanks for sharing this - very informative and helpful for highlighting a potential leverage point. Strong upvoted.

One minor point of disagreement: I think you are being a bit too pessimistic here:

And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.

There are few examples of US courts blowing up large US corporations, but that is not exactly the situation here. OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. There is a long history of businesses exaggerating the harm from new regulations, claiming they will be ruinous when actually human ingenuity and entrepreneurship render them merely disadvantageous. The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of unproven claims that this cannot continue.

I think a closer example might be when the DC District Court sided with the FTC and blocked the Staples-Office Depot merger on somewhat dubious grounds. The court didn't directly implode a massive retailer... but Staples did enter administration shortly afterwards, and my impression at the time was that the causal link was pretty clear.
