
Update (happened after writing but before publishing this post): Chevron deference, which I speculate about in the post, has now been overturned. The cases that the court used are called Loper Bright and Relentless if you want to look them up.

 

Executive Summary

Background and Context

  • Epistemic status: Usually, my take is based on reading/talking to two to four people, but sometimes it’s only a single person. Usually, I would want to amend my take if I knew of only one person of similar expertise objecting confidently.
  • Federal agencies in the US must conduct cost-benefit analyses (CBAs) for large regulations (also called rules). I investigate whether these regulatory CBAs could cause frontier AI regulations to fail if they make the regulations look costly, e.g. due to slowing down AI innovation.
  • For context, this is a rough breakdown of the rulemaking (regulation-making) process for large rules:
  1. The agency sees a need to regulate and develops a proposed regulation.
  2. The proposed regulation is given to the Office of Information and Regulatory Affairs (OIRA) for review once or twice while being finalised.
  3. The agency publishes the final rule which takes effect at some specified date.
  4. Roughly speaking, any interest group, company, state, or other entity that is harmed by the final rule, and can find some legal basis, can challenge the rule in court. The process of courts deciding whether the rule is unlawful is called ‘judicial review’.
  • CBAs vary widely in quality, from a few pages qualitatively discussing pros and cons to hundreds of pages of studies, models, and analyses.

 

Could CBAs Stop Frontier AI Regulations?

  • Pathways unexplored in this piece: Theoretically, a CBA could already stop a regulation while it’s still in development in the federal agency. If the CBA looks bad, this could put a natural stop to the rule. Or the rule could fail due to outside influence by another agency, lobbyists, or political members of the administration at essentially any stage of rulemaking. I find these possibilities hard to investigate, and therefore I focus only on OIRA and courts potentially stopping rules. So a sizable part of the top-level question goes unanswered in this doc.
  • OIRA: During review, OIRA works with the agency to make the regulatory CBA meet OIRA’s standards. In theory, this could lead to a frontier AI regulation being changed or even withdrawn if OIRA’s standards aren’t met.
    • OIRA is the centre of CBA expertise within the government and they do care about CBA quality.
    • Currently, regulations are rarely withdrawn in OIRA review for CBA reasons, since the current administration is pro-regulation. Across administrations, around 6% are withdrawn.
    • Sometimes, though, the stringency of a regulation may be reduced in OIRA review for CBA reasons. This could remove the regulation’s teeth, so it is worth paying attention to. It can only really happen when the CBA is of adequate quality, for reasons I explain later. Most CBAs are pretty superficial and low-quality: essentially a few pages qualitatively discussing pros and cons. My guess is that <15% of all regulations are above the relevant quality threshold, but the number might be higher for AI regulations. (CBA quality depends on the specific agency and how ‘broad and fuzzy’ the object of regulation is.) I am unsure how many of these regulations are then indeed reduced in stringency due to CBA concerns.
    • Future administrations may change OIRA review. As an extreme example from the past, in Bush Jr’s administration, OIRA sometimes flat-out rejected regulations from agencies. I think Biden wouldn’t change how much OIRA review of CBAs bottlenecks regulations, but Trump might, though it is hard to predict in what direction.
  • Court: In judicial review, the court checks if a regulation is lawful. Only in a small number of cases does this involve scrutinising CBAs substantively. If they do scrutinise a CBA, it’s usually just a basic quality and sanity check. So this won’t stop frontier AI regulations.
    • However, I’m not sure if these trends have been changing in recent years. There might be more cases of judges scrutinising CBA substantively, such as what assumptions were made or what studies were relied on. My guess is this would still be a minority of cases though.
    • Additionally, there is an adjacent legal doctrine (Chevron deference) that may be overturned soon. If this happens, the future of CBA scrutiny in court is pretty unclear to me.
    • Lastly, there are some judges who don’t stick to precedent and their judgments are a bit unpredictable. I’m not sure how big a deal this is and if it affects CBA scrutiny.

 

Acknowledgments

I’m thankful to John Halstead for mentorship and guidance throughout this research project. I’m thankful to Markus Anderljung, Ben Garfinkel, Andrew Stawasz, Marie Buhl, and Oscar Delaney for helpful guidance and comments on earlier versions of this piece. All mistakes are my own.

 

Problem Statement

Federal regulations (also called rules) in the US are made by federal agencies. Federal agencies must conduct cost-benefit analyses (CBAs) for large regulations.[1] I investigate whether these regulatory CBAs could cause frontier AI regulations to fail at some stage of the rulemaking process. A central motivating concern is that the CBA methodology used by federal agencies may emphasise the costs of regulating, such as slowing down AI innovation. This would be a natural push against frontier AI regulations. (Though I tried to answer the question in this piece without further investigating whether the CBA methodology is skewed. Indeed, I am now skeptical that it is.)

(Note that in regulatory CBA terminology, “benefits of AI” translate to “costs” of regulating, and “risks of AI” translate to “benefits” of regulating.)

 

Epistemic Status

I’ve read about three times the number of sources I cite in this piece. Different opinions exist in the literature on almost everything I say in this doc qualified with a “maybe”/“likely”/etc. Usually, my take is based on reading/talking to two to four people, but sometimes it’s only a single person. Usually, I would want to amend my take if I knew of only one person of similar expertise objecting confidently. This just goes to say how rickety my understanding of the subject is.

 

Fundamentals of Rulemaking

In this piece, I’m going to focus on economically significant rules: approximately those that have an effect of >$200 million on the economy in any one year.[2] The rulemaking process for these rules is roughly as follows[3]:

  • The development of some regulations is triggered by Congress giving an agency a new statutory mandate outlining what the agency is supposed to regulate. But most regulations are proactively initiated by agencies seeing a need to regulate in their domain. (They then ground this in a statute that already exists.)
  • When the agency thinks they are done developing their rule, they give it to the Office of Information and Regulatory Affairs (OIRA) for review.
  • In the review, OIRA works with the agency[4] to change the rule to meet OIRA’s standards. Mostly[5]:

  1. Ensuring the regulation is consistent with the current administration’s agenda.
  2. Ensuring the quality of the CBA.
  • CBAs vary widely in quality, from a few pages qualitatively discussing prominent costs and benefits[6] to hundreds of pages of studies, models, and analyses[7].

  • Around 6% of regulations are withdrawn at this stage[8] but they may be adjusted and resubmitted to OIRA.

  • After OIRA review, the agency publishes the proposed rule for public comment. Agencies often change some details in their rules based on public comments.[9]

  • The agency finalises the rule and passes the rule to OIRA again. The review process repeats.
  • After the second OIRA review, the agency publishes the final rule which takes effect at some specified date.
  • Roughly speaking, any interest group, company, state, or other entity that is harmed by the final rule, and can find some legal basis, can challenge the rule in court.[10] This happens to most large rules.[11] The process of a court deciding whether the rule is unlawful is called judicial review. Common courts involved in judicial review are District Courts, Courts of Appeals, and sometimes the Supreme Court.

(The process laid out above might be slightly different for so-called ‘independent agencies’.[12] But few AI-relevant agencies are independent. And these differences aren’t so problematic for frontier AI regulation since they push against CBA hurdles, not towards them.)

 

Could CBA Stop Frontier AI Regulations? A Breakdown by Rulemaking Actor

There are different actors that could stop a frontier AI regulation due to its CBA, starting with the issuing federal agency itself. I will go actor by actor.

 

Actor 1: The Issuing Federal Agency Itself

If the federal agency itself makes a CBA and it doesn’t look good for some reason, e.g. their methodology emphasises costs of regulating and deemphasises benefits, this would be a natural push against regulating.[13] My tentative impression is that CBA is more of an afterthought for agency staff and doesn’t actually influence regulations much.[14] But this might differ entirely depending on the agency and topic.

This question depends on somewhat fuzzy agency dynamics and culture, which I deprioritised investigating. Therefore, I can’t currently judge whether the issuing federal agency might stop or tone down a frontier AI regulation because of its CBA.

 

Actor 2: Other Agencies, Lobbyists, or Political Members of the Administration

Other agencies, lobbyists, political members of the administration, etc. might be opposed to a frontier AI regulation. They might try to convince the issuing agency, or OIRA[15], or even the political administration to stop the regulation, essentially at any point of the rulemaking process.

In theory, these actors could target the rule’s CBA to convince the agency or administration to stop the rule. I’d say non-agency actors are a bit less likely to do this since they are less likely to have CBA expertise.[16] Overall, I find this possibility hard to assess, for similar reasons as those stated in the previous section, and I don’t have a probability estimate.

 

Actor 3: OIRA

In its review of a regulation, OIRA works with the agency to amend the rule to meet OIRA’s standards.[17] If OIRA is not convinced by the CBA for a frontier AI regulation, in theory, the regulation may be withdrawn or reduced in stringency.[18] Quality-checking CBAs is one of the main two purposes of OIRA review.[19] Further, OIRA has issued hundreds of pages of guidance for agencies on how to conduct CBA.[20] They are the centre of expertise on CBA within the government.[21]

Around 6% of regulations are withdrawn during OIRA review overall.[22] My impression, though, is that regulations are currently rarely withdrawn due to OIRA’s CBA scrutiny, since the current administration is pro-regulatory. (Regulations may still be withdrawn for political and other reasons.)

This still leaves open the possibility of OIRA reducing the stringency of frontier AI regulations due to its CBA scrutiny, which is discussed in the next three subsections. And it leaves open the possibility of OIRA starting to cause withdrawals again under a potential Republican administration, which is discussed in the last subsection.

 

OIRA Reducing the Stringency of Politically Salient Regulations

Where there is political will to pass a stringent regulation, this will generally override OIRA’s CBA considerations.[23] So if the administration wanted to pass stringent frontier AI regulations, these would likely not be toned down by OIRA. This requires that the administration care about the specific level of stringency though, and doesn’t merely want to pass “something on frontier AI”. I’m not sure how common it is for the administration to insist on a specific level of stringency. And I’m not sure if frontier AI regulations will get much political attention. So overall, I’m not sure how much political will can shelter frontier AI regulations from being reduced in stringency.

 

OIRA Reducing the Stringency of Regulations With Low-Quality CBAs

Sometimes, making a regulation less stringent removes its teeth and renders it effectively useless. Therefore, it is relevant to also look at how OIRA might reduce the stringency of frontier AI regulations.

A relevant piece of context is that most regulatory CBAs are pretty superficial and low-quality.[24] Often, instead of rigorously quantifying costs and benefits and building a regulation based on this, the CBA is done haphazardly after writing the regulation.[25] Such CBAs are often a few pages of qualitative discussion of costs and benefits.[26] In theory, OIRA would like to bring all CBAs up to some quality standard that allows them to inform their regulations. In practice, however, OIRA works with whatever level of quality it receives from agencies and just tries to improve a bit upon that. OIRA can’t hold up regulations for too long since the administration wants its regulations finished, especially politically important ones. If OIRA stalls rules anyway, the head of OIRA can even be fired by the president. This means some CBAs are improved from good to even better, while others are merely salvaged from bad to passable.

A superficial CBA doesn’t inform its regulation, both because it is often made after the regulation and because it is not detailed enough to offer much information. Coming back to toning down regulations: OIRA might tone down a regulation if the CBA implies this decision. But superficial CBAs hardly offer implications for the substance of a regulation. So, in most cases, OIRA wouldn’t be able to stop or tone down a frontier AI regulation on the basis of CBA.

 

OIRA Reducing the Stringency of Regulations With High-Quality CBAs

However, there are exceptions where a CBA is of high quality when coming into OIRA review so it’s actually possible to inform the regulation with it. My guess is that this happens in <15% of regulations overall, but the number might be higher for AI regulations.[27] I am unsure how many of these regulations OIRA then indeed reduces in stringency due to CBA concerns.[28]

I don’t have a good overview of why and when CBAs are of high quality but it depends on at least two factors. Firstly, the practice and culture at the issuing agency. Some agencies have a lot of in-house CBA capacity. Some agencies have a lot of economists who naturally think in terms of CBA due to their training. Some agencies talk more to OIRA than others.[29] I am unsure how agencies relevant to frontier AI fare on these factors.

Secondly, the quality of CBA depends on the object of regulation. I can characterise two extremes[30]:

  1. Highly specific and technical regulations. E.g., setting specific numerical energy-efficiency standards for fridges and freezers. Such a regulation is crucially informed by the accompanying CBA, which calculates what the specific numbers in the regulation should be. These CBAs can be hundreds of pages long.
  2. Broad regulations dealing with large and complex systems and fuzzy objectives. E.g., there’s a regulation directing federal agencies themselves to consider their climate impacts.[31] Climate impacts could of course be made quite specific but this regulation took a broad and fuzzy view of them. Further, the federal agency space is a large and complex system. It would be hard to unambiguously quantify the costs and benefits of this regulation, so the CBA only loosely informed the regulation, if at all.

For highly specific and technical regulations, it is commonplace for OIRA to reduce (or increase) their stringency because their CBAs imply this. I am not sure where frontier AI regulations would fall on the specific-to-fuzzy spectrum. This depends on how specific versus fuzzy the statutes will be that direct agencies to regulate frontier AI.[32] These statutes come from Congress and I don’t have much insight into how they are formed. Still, most regulations overall are kind of fuzzy and so aren’t influenced much by OIRA. So the same is probably true for frontier AI regulations.
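To make concrete how a high-quality CBA can pin down the stringency of a specific, technical regulation, here is a minimal sketch. All numbers are invented for illustration and don’t come from any real rulemaking: the idea is just that, once costs and benefits are quantified as functions of stringency, the regulation’s “number” falls out of maximising net benefits.

```python
# Toy illustration (all numbers invented): how a quantified CBA can
# pick the stringency level of a technical regulation, e.g. an
# energy-efficiency standard, by maximising estimated net benefits.

def net_benefits(stringency: float) -> float:
    """Estimated annual net benefits (in $m) at a stringency in [0, 1].

    Benefits (e.g. energy savings) grow with diminishing returns;
    compliance costs grow faster than linearly.
    """
    benefits = 300 * (1 - (1 - stringency) ** 2)  # concave in stringency
    costs = 250 * stringency ** 2                 # convex in stringency
    return benefits - costs

# Search over candidate stringency levels from 0 (no rule) to 1 (maximal).
candidates = [i / 100 for i in range(101)]
best = max(candidates, key=net_benefits)
print(f"net-benefit-maximising stringency: {best:.2f}")
print(f"net benefits at that level: {net_benefits(best):.1f} $m/year")
```

With these made-up curves, the CBA would recommend an intermediate stringency: both a toothless rule and a maximally strict one leave net benefits on the table. This is the sense in which OIRA can point at such a CBA and argue a rule should be more or less stringent.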

 

OIRA Reducing the Stringency of Regulations Independently of CBA

OIRA isn’t purely driven by what the CBA says. Even without a quality CBA, OIRA may have opinions on how much cost is justified for which kinds of regulations. If a frontier AI regulation appears inordinately costly to them, even without a detailed CBA, they may push to reduce the regulation’s stringency. It seems likely to me that this happens sometimes.[33] This is not quite on the topic of CBA being the reason frontier AI regulations fail but it is similar. I’m not sure how to think about it.

 

Changes in the Administration

An additional uncertainty comes from changes in the administration. How OIRA has reviewed regulations has changed from administration to administration, including how much emphasis was put on CBA.[34] E.g., OIRA review seems to have been more stringent in the Reagan administration than in the Bush Sr administration according to one of Bush’s chief regulatory advisors who observed a “relaxed commitment to oversight in the Executive Office of the President”.[35] As another example of change, in the Bush Jr administration, OIRA started flat-out rejecting regulations for CBA reasons, which didn’t really happen before or after.[36] Finally, in the Trump administration, many regulations with dire analytical mistakes passed OIRA review.[37] The current OIRA administrator has called regulatory CBA in the Trump era “twisted and deformed beyond recognition”.[38]

So far, the Biden administration has been pretty pro-CBA and pretty consistent with everything I’ve said.[39] A Trump administration on the other hand might lessen OIRA’s focus on CBA. It is questionable whether OIRA managed to do much quality assurance of CBAs at all during the last Trump administration. Still, I am unsure both if this would continue in a next Trump term and if this would affect, or leave out, frontier AI regulations. Additionally, Trump may again instate some kind of cost caps on regulations. It’s questionable whether his cost caps had much of an effect in his last presidency.[40] But in theory, they could be a strong constraint that makes OIRA review of costs more of a failure point for regulations.[41]

Beyond the next couple of years, I can’t predict how OIRA’s role might evolve.

 

Actor 4: Courts

About 30% of regulations challenged in court fail judicial review, although, according to one measure, the Trump administration had about a 90% failure rate.[42] A regulation can only be challenged in court if there is a question of whether it violates some law. Judges can scrutinise regulatory CBAs to the extent that they might violate laws governing regulatory CBAs.[43] There are various laws allowing judges to scrutinise CBA in a very shallow way.[44] The only law that allows judges to scrutinise CBA substantively says regulations can’t be “arbitrary and capricious”.[45] Only a small number of cases involve looking at CBAs substantively in this way; most challenges focus on more shallow or procedural issues with the regulation. This is the firm impression of two administrative lawyers I have asked, although I don’t have precise numbers, unfortunately.[46] So it is less likely that frontier AI regulations’ CBAs would be scrutinised substantively either.

 

Case Law

The “arbitrary and capricious” standard is exactly as vague as it sounds.[47] Since it is vague, judges look to case law (also known as common law), meaning they look at how other judges have decided similar cases in the past.[48] Case law has held that every choice an agency makes in the substance of a CBA needs to have a reason.[49] E.g., the agency has to give a reason for why they applied this model and not that, or used one discount rate and not another. It’s fine if the agency’s choice is controversial, as long as they give such a reason. It’s usually even fine if it’s controversial whether the reason justifies the CBA choice. As long as reasonable people disagree, the court usually defers to the agency.[50] You can see how this “arbitrary and capricious” standard is pretty lax and should not be able to stop frontier AI regulations.

To illustrate further the level of substantive CBA scrutiny to expect, here are some real and hypothetical example cases:

Real case: A regulation to prevent financial crisis that entailed running some data collections.[51]

  • Challenged CBA choice: Not quantifying the benefit of the data collections.
  • Agency reason: Explained that it would be hard to quantify these benefits and only discussed them qualitatively.[52]

  • Real verdict: Passed judicial review.

Hypothetical case: Cass Sunstein, a former OIRA administrator and law professor, speculates that even a regulation imposing a cost of $600 million to prevent financial crisis would pass without quantifying its benefits.[53]

  • The agency would still have to give adequate reason for deciding benefits justify costs. E.g., my (the author’s) guess is that only discussing the benefits qualitatively in four paragraphs would be inadequate for a $600 million cost. But if the agency wrote several pages, even without quantification, that seems likely adequate to me. It also depends on the topic. A $600 million cost for avoiding a financial crisis is easier to justify than a $600 million cost for, say, protecting some species of grass in Florida.

Real case: Rolling back regulations that decrease greenhouse gas emissions from cars (in the Trump era).[54]

  • Challenged CBA choice: The CBA here involved many questionable choices; leading economists think it violated basic economic principles. One particularly strange calculation concluded that reducing the price of cars would reduce demand for cars.[55]

  • Hypothetical verdict: Revesz, current OIRA administrator and also a law professor, speculates that the CBA was “so riddled with implausible assumptions that it would likely fail in court”.[56]

Hypothetical CBA choice: Don’t apply a discount rate to benefits at all.

  • Agency reason: Two paragraphs on why they think future generations matter equally and thus no discount rate needs to be applied.
  • Hypothetical verdict: I interpret Sunstein’s writing to imply this would fail.[57] Not applying any discount rate breaks so much with established practice that two paragraphs would hardly count as an adequate reason. However, if the agency had written several pages, my guess would change to the regulation passing.

I am pretty uncertain about these precise edge cases but I think they show how far judicial review leans towards laxness about letting regulations pass. Frontier AI regulations should have no problem doing the minimum of best practices, like applying standard discount rates. And for controversial choices, they should be able to give extensive justification; otherwise, the regulation would not be justified anyway.
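To illustrate why the discount rate choice in the last hypothetical is so consequential, here is a small sketch with invented numbers. The present value of a benefit realised decades out shrinks dramatically once any standard discount rate is applied (the rates below are loosely in the range that OMB guidance has used at various times, but the specific figures are just for illustration).

```python
# Toy illustration (invented numbers): why the discount rate choice
# matters so much in a CBA. A $1bn benefit realised 50 years from now
# is worth far less today under standard discounting than under none.

def present_value(benefit: float, rate: float, years: int) -> float:
    """Discount a future benefit back to today: B / (1 + r)^t."""
    return benefit / (1 + rate) ** years

future_benefit = 1_000  # $m, realised in 50 years
for rate in (0.0, 0.02, 0.03, 0.07):
    pv = present_value(future_benefit, rate, 50)
    print(f"discount rate {rate:.0%}: present value {pv:7.1f} $m")
```

Under a 0% rate the full $1bn counts today, while standard rates cut it to a fraction of that. A choice this decisive for whether benefits justify costs is exactly why a court would expect more than two paragraphs of justification for abandoning discounting altogether.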

 

Recent/Future Changes in Case Law

Case law changes over time as precedents accumulate of judges diverging from existing case law. Currently, the case law is changing to be less deferential to agencies.[58] I’m not familiar with the recent years’ case law. So there might be some more cases of judges scrutinising CBA substantively, such as what assumptions were made or what studies were relied on. My guess is this would still be a minority of cases though.

The future is pretty unclear to me. At the time of writing this, the Supreme Court is considering overturning a major doctrine of deference to agencies called Chevron deference. Chevron deference is theoretically separate from the “arbitrary and capricious” standard, but they are both about deference to agencies. Opinions among experts I talked to vary, but overturning Chevron could influence judges to be less deferential about the “arbitrary and capricious” standard as well.[59] I think it would be hard to predict how this will develop over the coming years, even for a lawyer. So to some extent, we will have to wait and see.

 

Unpredictable Judges

However, during the Trump administration, many new judges were appointed who do not necessarily stick to case law. (These judges remain on the bench after the Trump administration.) Some of these judges have written court decisions that are seen as quite low-quality, or even unreasonable, by people across the political spectrum. Often this is politically motivated, but sometimes it may be a judge following a whim of their own.[60] Such a judge may well scrutinise a CBA substantively and not defer to the agency. In this way, CBA could cause a frontier AI regulation to fail judicial review.

I don’t know how big a deal exactly these unpredictable judges are and how often they ignore case law. This could affect only a handful of cases or almost all, for all I know. Further, it is unclear to me whether CBA would really be the root cause of such a failure or whether such a judge would simply have found some other reason to reject the regulation if not its CBA. (A lawyer familiar with the track record of this subset of Trump-appointed judges could probably answer these questions.)


  1. ^

     Congressional Research Service, Cost-Benefit Analysis in Federal Agency Rulemaking (2022), p.

  2. ^
  3. ^

     ICF, The Reg Map Informal Rulemaking (2020); Reginfo.gov, FAQ.

  4. ^

     “collaborating with the regulatory agency” (my emphasis), Congressional Research Service, Counting Regulations: An Overview of Rulemaking, Types of Federal Regulations, and Pages in the Federal Register (2019), p. 14.; “Most issues that arise during OIRA review are resolved at the staff level”, “collaborative nature”, Dudley (2022), p. 254.

  5. ^

     Congressional Research Service, An Overview of Federal Regulations and the Rulemaking Process (2021), p. 2.

  6. ^

     “Because independent agencies are not required by the Executive Order to conduct CBAs, many agencies do not conduct CBAs, and when they do, those CBAs are often qualitative”, Cecot & Hahn (2020), p. 178; “Agencies also vary in terms of how they discount these effects, the extent to which they describe costs and benefits qualitatively as opposed to quantitatively, and the number of alternatives they explicitly consider, among numerous other factors”, Nou (2013), pp. 1791-1792.

  7. ^

     Conversation with Andrew Stawasz, June 11 2024. (Andrew Stawasz is an attorney who graduated from Harvard Law School in 2021. During law school, he served in the inaugural cohort of Brooks Institute Emerging Scholars Fellows. He most recently served as a Legal Fellow at the Institute for Policy Integrity at NYU School of Law and an Advisor at the White House Office of Information and Regulatory Affairs.)

  8. ^

     “OIRA codes few regulations as having been “withdrawn” by the submitting agency (6 percent), and even fewer are “returned” (less than 1 percent)”, Dudley (2022), p. 254.

  9. ^

     Conversation with an administrative lawyer in a federal agency, May 3 2024; “In addition, we found that agencies made changes to the text of 31 of the 51 rules, most often in response to public comments”, Government Accountability Office, Agencies Could Take Additional Steps to Respond to Public Comments (2012), p. 26.

  10. ^

     Conversation with Andrew Stawasz, June 11 2024.

  11. ^

     “major rules are almost always challenged in court”, DeMuth (2020), p. 7.

  12. ^

     Independent agencies are slightly more removed from political influence than regular federal agencies. They also sometimes have fewer/shallower CBA requirements.

  13. ^

     Or if their CBA methodology already shapes the way they think without even explicitly making a CBA.

  14. ^

     “Surveying the evidence, Hahn and Tetlock (2008, pp. 82–83) concluded that economic analysis has not had much impact [...] Regulatory analysis rarely, if ever, dictates the agency’s decision, occasionally affects important aspects of decisions, and more often has smaller effects on some aspects of decisions”, Ellig et al. (2012), p. 154. An administrative lawyer I talked to found CBA was essentially always an afterthought in the two departments they had worked in (May 3 2024). Further, Cecot & Hahn (2020) discuss how some CBAs don’t influence their regulation at all and some indeed are only made after the regulation is already finalised.

  15. ^

     Indeed, it is one of OIRA’s functions to convene any opposers from within the government during review in order to get their input. Conversation with Andy Stawasz, June 11 2024; ”Sunstein states that OIRA’s primary role in the regulatory process is as an 'information-aggregator' – compiling information from many actors in the executive branch and using that information to help get at the right regulatory result. Observing that the White House is a 'they,' not an 'it,' Sunstein emphasizes the role of other White House offices and officials, beyond OIRA, in shaping regulatory policy. Sunstein lists almost a dozen White House offices that, he says, play a significant role. Beyond the White House, Sunstein asserts that agencies other than the agency proposing a particular regulatory action also have a large influence on regulatory policy. Sometimes it is another Cabinet secretary who might have such influence; often, Sunstein says, it is career staff at another agency. Sometimes it is the Chief of Staff of the White House who plays the major role; sometimes it is a member of Congress”, Heinzerling (2014), pp. 342-343.

  16. ^

     Conversation with an administrative lawyer in a federal agency, May 3 2024.

  17. ^

     “collaborating with the regulatory agency” (my emphasis), Congressional Research Service, Counting Regulations: An Overview of Rulemaking, Types of Federal Regulations, and Pages in the Federal Register (2019), p. 14.; “Most issues that arise during OIRA review are resolved at the staff level”, “collaborative nature”, Dudley (2022), p. 254.

  18. ^

     Presumably agency staff already expect this will happen and they might tone down/give up on a regulation before it even comes to OIRA review.

  19. ^

     Congressional Research Service, An Overview of Federal Regulations and the Rulemaking Process (2021), p. 2.

  20. ^

     “OIRA has revised Circular A-4”, Office of Management and Budget, Modernizing Regulatory Review. Circular A-4 and other CBA guidance documents: Office of Management and Budget, Circular No. A-4 (2023); Office of Management and Budget, OMB Circular No. A-4: Explanation and Response to Public Input (2023); Office of Management and Budget, Circular No. A-94 (2023); Office of Management and Budget, Guidance on Accounting for Competition Effects When Developing and Analyzing Regulatory Actions (2023).

  21. ^

     Maybe next to the Environmental Protection Agency, which has a lot of in-house CBA expertise too. “EPA has built substantial in-house economics capacity, which far dwarfs that of OIRA and has made significant methodological contributions, fostering the elaboration of concepts such as nonuse value and discounting that are fundamental to how cost-benefit analysis is carried out. These contributions have affected EPA's rule makings, how OIRA carries out its review, and the analytic practices of other agencies”, Livermore (2014), p. 613.

  22. ^

     “OIRA codes few regulations as having been “withdrawn” by the submitting agency (6 percent), and even fewer are “returned” (less than 1 percent)”, Dudley (2022), p. 254.

  23. ^

     “What troubles me is the suggestion that OIRA, an institution of roughly 50 people, the vast majority of whom are civil servants, could or should stand alone to thwart the political will of the president and his senior advisors when it comes to discretionary choices about regulations. This is not aligned with my experience. We have a whole phrase for it: getting rolled. This is what happens when political or other value judgments overcome technocratic or operational concerns”, Dooling, OIRA the Angel; OIRA the Devil (2021); “Examples in the literature (particularly in Box C – Box B examples are hard to discern because one would need to either find rules not promulgated or rules promulgated despite presidential opposition or the opposition of his top staff) tend to show that when analysis and politics conflict, politics wins”, Shapiro (2019).

  24. ^

     In a sample of 167 economically significant rules from 2015 to 2018, a paper found that only 22% (37 rules) quantified at least some costs and some benefits. Cecot & Hahn (2020), pp. 177-178. Data for more years can be found in OIRA’s reports to Congress here.

  25. ^

     An administrative lawyer I talked to found CBA was essentially always an afterthought in the two departments they had worked in (May 3 2024). Further, Cecot & Hahn (2020) discuss how some CBAs don’t influence their regulation at all and some indeed are only made after the regulation is already finalised. “Surveying the evidence, Hahn and Tetlock (2008, pp. 82–83) concluded that economic analysis has not had much impact [...] Regulatory analysis rarely, if ever, dictates the agency’s decision, occasionally affects important aspects of decisions, and more often has smaller effects on some aspects of decisions”, Ellig et al. (2012), p. 154.

  26. ^

     “Because independent agencies are not required by the Executive Order to conduct CBAs, many agencies do not conduct CBAs, and when they do, those CBAs are often qualitative”, Cecot & Hahn (2020), p. 178; “Agencies also vary in terms of how they discount these effects, the extent to which they describe costs and benefits qualitatively as opposed to quantitatively, and the number of alternatives they explicitly consider, among numerous other factors”, Nou (2013), pp. 1791-1792.

  27. ^

     In a sample of 167 economically significant rules from 2015 to 2018, a paper found that only 22% (37 rules) quantified at least some costs and some benefits. Cecot & Hahn (2020), pp. 177-178. Data for more years can be found in OIRA’s reports to Congress here.

  28. ^

     Speculation: How much OIRA tones down regulations with quality CBAs depends, firstly, on how much OIRA disagrees with these CBAs. This in turn depends on:

    (A) How hard is it to make a CBA for the regulation that suits OIRA’s preferences about CBA?

    (B) How competent/informed/incentivised etc. would agency staff be to make a CBA that looks best to OIRA? (Even if a CBA isn’t much at odds with OIRA’s preferences, agency staff might still fail to make it look good to OIRA for various reasons.)

    Speculation on (A): CBA is a very flexible tool. You can estimate costs and benefits in a variety of ways, with a variety of data, models, and parameters. My sense is that CBA is usually done as an afterthought, after coming up with a desired regulation, and it is generally possible to write a supporting CBA post hoc. Therefore, for most AI regulations, my guess is you wouldn’t need to do any odd CBA acrobatics that OIRA might object to. This might be different if you actually need to make a somewhat non-standard argument, e.g. involving the long-term future, to support your regulation. It also depends on how detailed OIRA’s preferences about CBA are. If they are very detailed about what kinds of models to use in which situations, what studies to rely on, etc., that would make OIRA review a more likely point of failure.

  29. ^

     Conversation with Andrew Stawasz, June 11 2024. (Andrew Stawasz is an attorney who graduated from Harvard Law School in 2021. During law school, he served in the inaugural cohort of Brooks Institute Emerging Scholars Fellows. He most recently served as a Legal Fellow at the Institute for Policy Integrity at NYU School of Law and an Advisor at the White House Office of Information and Regulatory Affairs.)

  30. ^

     Conversation with Andrew Stawasz, June 11 2024. (Andrew Stawasz is an attorney who graduated from Harvard Law School in 2021. During law school, he served in the inaugural cohort of Brooks Institute Emerging Scholars Fellows. He most recently served as a Legal Fellow at the Institute for Policy Integrity at NYU School of Law and an Advisor at the White House Office of Information and Regulatory Affairs.)

  31. ^

     Council on Environmental Quality, National Environmental Policy Act Implementing Regulations Revisions Phase 2 (2024), 89 FR 35442.

  32. ^

     Conversation with Andrew Stawasz, June 11 2024. (Andrew Stawasz is an attorney who graduated from Harvard Law School in 2021. During law school, he served in the inaugural cohort of Brooks Institute Emerging Scholars Fellows. He most recently served as a Legal Fellow at the Institute for Policy Integrity at NYU School of Law and an Advisor at the White House Office of Information and Regulatory Affairs.)

  33. ^

     One of OIRA’s major effects in practice is reining in the agencies’ tendency to want to regulate a lot.

    (I gained this impression from lots of sources but I will only cite one here: “OIRA has regularly disputed agency cost-benefit analyses and moderated agencies’ tendencies to ‘overregulation.’” DeMuth (2020), p. 7.) Further, OIRA has possibly the best overview of regulation costs and benefits across agencies within the government. OIRA is also tasked with enforcing a deregulatory trajectory and cutting costs in some Republican administrations. My personal extrapolation of this is that OIRA would therefore have opinions about how much cost is justified for which regulations, even in the absence of an adequate CBA.

  34. ^

     “Look no further than the yawning gap between Trump’s two-for-1 & regulatory budget E.O. and Biden’s memo on modernizing regulatory review for an illustration of how differently presidents can approach OIRA and regulatory review”, Dooling, OIRA the Angel; OIRA the Devil (2021); “So when Bill Clinton came into office in 1993, there was some pressure on that administration to essentially undo the prior order that Reagan had put in place and take regulatory review away from OIRA or radically restructure it, de-emphasize the use of cost-benefit analysis”, Livermore, Webinar “Cost-Benefit Analysis After Trump, What Does the Future Hold?” (2021), p. 2.

  35. ^

     “The most important problem was the relaxed commitment to oversight in the Executive Office of the President”, Boskin et al. (1993), p. 32.

  36. ^

     “President George W. Bush (Bush 43), who took office in January 2001, retained E.O. 12866 but, at least in some respects, his OIRA administrator implemented it more aggressively. John Graham began to return draft regulations to agencies for reconsideration pursuant to Sec. 6(b)(3) of E.O. 12866, something that had not occurred in the Clinton administration 2019. In 2001 alone, OIRA sent 14 regulations back to agencies with public letters detailing the ways in which the rule or supporting analysis failed to comply with the principles of E.O. 12866 (OMB & GSA). After this initial flurry, the rate of return letters declined, with four in 2002 and two or fewer in subsequent years of the Bush 43 administration”, Dudley (2022), p. 251.

  37. ^

     “Under Trump, OIRA has largely lost either the desire or ability to effectively perform its oversight role, and cost-benefit analysis has been subject to flagrant manipulation”, “given the poor quality of analysis in the proposed and final rules that have been published, OIRA has not ensured a minimum quality standard”, Revesz & Livermore (2020), p. 28.

  38. ^

     “And cost-benefit analysis, emissions trading, and other ideas that were hatched in economics departments and then embraced by Republican politicians have become verboten for many in the party, or have been twisted and deformed beyond recognition”, Revesz & Livermore (2020), p. 3.

  39. ^

     E.g., see the Executive Order on Modernizing Regulatory Review (2023) and the establishment of a new Subcommittee on Frontiers of Benefit-Cost Analysis.

  40. ^

     In an investigation by the Government Accountability Office (GAO), only 1 out of 5 agencies that were doing the most deregulation said cost caps were a factor in their regulatory agenda. Further, 3 of 5 agencies didn’t have to delay any regulations. 8 of 9 significant deregulatory actions that GAO picked randomly would’ve happened even without the cost caps. “According to selected agency officials, eight of the nine selected deregulatory actions that we reviewed, and which were implemented when the EOs were in effect, would likely have been finalized regardless of the EOs’ directives. Three of the five selected agencies told us that pursuing the goals of the EOs did not require them to postpone planned regulatory actions that were underway prior to the publication of the deregulatory EOs”, “Officials from two of the five selected agencies we spoke with, Commerce and DHS, told us that complying with the requirement to perform cost-savings calculations posed some resource challenges. For example, officials at both agencies told us that calculating the cost-savings included additional spreadsheets, paperwork, and economic analyses. Officials at DHS’s Coast Guard told us that while the deregulatory EO did not affect their overall regulatory operations, it did affect their regulatory agenda. For example, according to the officials, in response to the deregulatory EOs, the Coast Guard’s Marine Safety and Security Council de-prioritized 13 projects that would have otherwise gone forward. These projects included developing regulations on outer continental shelf activities and commercial fishing vessels. However, officials at the other three agencies said that complying with this EO requirement generally did not affect their overall regulatory operations or agenda” (my emphasis), Government Accountability Office, Deregulatory Executive Orders Did Not Substantially Change Selected Agencies’ Processes or Procedures (2021), pp. 10-11 and p. 12 respectively.

  41. ^

     Cost caps have been discussed widely and often as a means to rein in regulation. “Modeled on ideas that have percolated in the academic literature for more than a generation”, Shapiro, The limits of thinking of a regulatory budget like a fiscal budget (2019).

  42. ^

     “According to one measure, as of July 2019, the Trump administration has lost nearly 90 percent of regulatory cases. This win rate of around 10 percent is a stunningly poor track record, particularly when compared to a typical administration, which wins 70 percent of challenges to its regulatory actions”, Revesz & Livermore (2020), p. 31.

  43. ^

     “unlike the OMB review procedure, in which OMB approval of the regulation is required before a regulation can be issued, there is no comparable requirement for judicial review and approval of regulations. Rather there must be some legal challenge involving the regulation and its BCA to trigger a judicial review”, Viscusi & Cecot (2015), p. 576.

  44. ^

     “The challenges that require a reviewing court to evaluate an agency's use of BCA come in three general varieties. First, the reviewing court may be asked to determine whether the agency was authorized to rely on a BCA given its statutory mandate. Or, instead, the court may have to determine whether the agency was statutorily obligated to employ BCA to justify its rulemaking. In this context, the reviewing court examines the agency's statutory mandate and determines the role that BCA is allowed or required to play in the agency's decision making, employing guidance from the U.S. Supreme Court. Second, a reviewing court may have to evaluate the adequacy of an agency's BCA in light of the agency's statutory mandate. These challenges to the quality of the agency's BCA often revolve around whether the agency sufficiently considered all reasonable—or statutorily mandated—factors in its BCA. [...] The court may also analyze whether the agency provided sufficient explanation of the BCA's scope or methodology to provide adequate opportunity for notice and comment and substantive judicial review”, Viscusi & Cecot (2015), pp. 576-577.

  45. ^

     “Other decisions involve what is called arbitrariness review under the Administrative Procedure Act. That act makes it unlawful for agencies to make decisions that are ‘arbitrary’ or ‘capricious’”, Sunstein (2018), p. 161.

  46. ^

     Conversation with an administrative lawyer in a federal agency, May 3 2024; Conversation with an administrative lawyer, June 6 2024.

  47. ^

     “The arbitrariness of an action is a decidedly ‘elusive’ concept in administrative law”, “Without additional standards to reference, the APA theory [the Act that uses the words ‘arbitrary and capricious’] does not offer judges anything specific against which to check an agency’s work”, Mannix & Dooling (2019), p. 16 and p. 18 respectively.

  48. ^

     There are many papers detailing past court rulings (case law) on agency CBA. E.g., Noe & Graham (2020); Cecot & Viscusi (2015); Masur & Posner (2018).

  49. ^

     “The prohibition on arbitrariness imposes some constraints on its choices. If an agency discounted costs but not benefits, it would have to explain itself. If an agency used $1 million or $30 million as the value of a statistical life, it would face a heavy burden of justification. To offer a real-world example: In 2017, the Environmental Protection Agency under President Donald Trump proposed to depart from the previous decision to use the “global” figure for the social cost of carbon (approximately $40, consisting of the damage done to the world from a ton of carbon emissions in the United States) in favor of the domestic figure (between $1 and $6, representing the damage done only in the United States). That decision may or may not be justifiable. But it was not justified. No explanation was given. That is the height of arbitrariness, and it should be invalidated in court”, “The agency should be required to do more than announce its conclusions. It has to explain them. It is not implausible to think that it must support its claims with numbers—ranges, if not point estimates—unless it can explain why it has failed to do so”, Sunstein (2018), p. 159 and pp. 160-161 respectively.
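     The discounting asymmetry Sunstein flags (“discounted costs but not benefits”) can be made concrete with a small illustrative calculation. All figures below are hypothetical and mine, chosen only to show the mechanic, not drawn from any cited rule:

```python
# Hypothetical stream: a rule costs $100M/year and yields $120M/year
# in benefits over 10 years. Discounting both sides at the same rate
# keeps the comparison consistent; discounting only the costs inflates
# the apparent net benefits, which is the kind of asymmetry an agency
# would have to explain under arbitrariness review.

def present_value(annual_amount, rate, years):
    """Sum of annual_amount received at the end of each year, discounted at rate."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

costs_pv = present_value(100e6, 0.03, 10)     # discounted costs
benefits_pv = present_value(120e6, 0.03, 10)  # discounted benefits
benefits_undiscounted = 120e6 * 10            # benefits left undiscounted

symmetric_net = benefits_pv - costs_pv               # consistent treatment
asymmetric_net = benefits_undiscounted - costs_pv    # the flagged inconsistency

# asymmetric_net exceeds symmetric_net by exactly the discount that was
# (inconsistently) applied to costs but not to benefits.
```

     With a zero discount rate the two treatments coincide; at any positive rate the asymmetric figure overstates net benefits.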

  50. ^

     The best way to see this is to look at examples of when courts accepted a contentious reason an agency gave for its CBA choices versus when they did not. Many such examples can be found in Sunstein (2018), pp. 158-169; “Under arbitrariness review, the initial question is simple: Is the agency’s explanation unreasonable? Because agencies have technical expertise, a challenger would face a heavy burden here. If the agency has adequately explained [in this case] its failure to quantify, the issue would seem to be at an end”, Sunstein (2018), p. 161.

     

  51. ^

     Inv. Co. Inst. v. Commodity Futures Trading Comm’n, 720 F.3d 370, 372–375 (D.C. Cir. 2013).

  52. ^

     “The appellants further complain that CFTC failed to put a precise number on the benefit of data collection in preventing future financial crises. But the law does not require agencies to measure the immeasurable. CFTC’s discussion of unquantifiable benefits fulfills its statutory obligation to consider and evaluate potential costs and benefits.” Technically, this decision didn’t use the ‘arbitrary and capricious’ standard but a standard derived from a statute. But Sunstein thinks it would be the same under the ‘arbitrary and capricious’ standard: “If this principle holds for a statute that requires consideration of costs and benefits, it certainly holds under arbitrariness review more generally”, Sunstein (2018), p. 161.

  53. ^

     “Suppose, for example, that an agency has imposed a cost of $600 million with a regulation that will reduce the risk of a financial crisis by some unquantifiable amount. To survive a claim of arbitrariness, it would be best for the agency to engage in some kind of breakeven analysis, which is eminently doable. But in light of the sheer magnitude of a financial crisis, a court should not require breakeven analysis as a precondition for validation”, Sunstein (2018), pp. 166-167. (A breakeven analysis is weaker than CBA. It doesn’t require quantifying costs and benefits fully. Instead it requires quantifying one of them and asking “how high would we need/allow the other one to be to break even?”. Sunstein speculating that not even breakeven analysis would be required indicates a very low standard of quantification indeed.)
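     To illustrate the breakeven logic in a minimal way (the $600M cost is Sunstein’s hypothetical; the crisis-damage figure is an assumption I supply for the example):

```python
# Breakeven analysis: quantify one side (the regulation's cost) and ask
# how large the unquantified side must be to break even, rather than
# fully estimating both costs and benefits.

def breakeven_risk_reduction(regulation_cost, crisis_damage):
    """Minimum reduction in crisis probability at which the expected
    avoided damage equals the regulation's cost."""
    return regulation_cost / crisis_damage

# Sunstein's hypothetical $600M rule; assume (hypothetically) a financial
# crisis would cause $1.5 trillion in damage.
required = breakeven_risk_reduction(600e6, 1.5e12)
# The rule breaks even if it cuts crisis probability by at least
# `required`, i.e. 0.0004, or 0.04 percentage points.
```

     The point of the technique is that a judgment like “surely the rule reduces crisis risk by more than 0.04 percentage points” is far easier to defend than a full quantification of the benefits.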

  54. ^

     Competitive Enterprise Institute v. National Highway Traffic Safety Administration, No. 20-1145, D.C. Cir. (filed June 2022).

  55. ^

     “There are many analytic shortcomings in the legal and economic analysis of the proposed SAFE rule—too many to detail here. But there is one modeling forecast that is particularly galling in how it flouts basic economic principles. The Trump administration predicted that by reducing the price of cars, it would decrease the number of cars on the road, and therefore reduce the number of automobile fatalities. This prediction—which contravenes basic principles of supply and demand—is the rough equivalent in economics of the Flat Earth Hypothesis”, Revesz & Livermore (2020), pp. 99-100. The real case was never decided. It was shelved indefinitely when Biden made a new SAFE rule and that rule was in turn challenged in court. Climatecasechart.com, Competitive Enterprise Institute v. National Highway Traffic Safety Administration.

  56. ^

     “Once it became clear that the analysis of the proposed rollback was so riddled with implausible assumptions that it would likely fail in court [...]”, Revesz & Livermore (2020), p. 100.

  57. ^

     “Within the broad bounds of reason, an agency might make a large number of discretionary choices. But agencies consist of human beings, who may exceed those bounds. The prohibition on arbitrariness imposes some constraints on its choices. If an agency discounted costs but not benefits, it would have to explain itself. If an agency used $1 million or $30 million as the value of a statistical life, it would face a heavy burden of justification”, Sunstein (2018), p. 159.

  58. ^

     Conversation with an administrative lawyer in a federal agency, May 3 2024; Conversation with an administrative lawyer, June 6 2024.

  59. ^

     Conversation with Andrew Stawasz, June 11 2024. (Andrew Stawasz is an attorney who graduated from Harvard Law School in 2021. During law school, he served in the inaugural cohort of Brooks Institute Emerging Scholars Fellows. He most recently served as a Legal Fellow at the Institute for Policy Integrity at NYU School of Law and an Advisor at the White House Office of Information and Regulatory Affairs.)

  60. ^

     Conversation with Andrew Stawasz, June 11 2024. (Andrew Stawasz is an attorney who graduated from Harvard Law School in 2021. During law school, he served in the inaugural cohort of Brooks Institute Emerging Scholars Fellows. He most recently served as a Legal Fellow at the Institute for Policy Integrity at NYU School of Law and an Advisor at the White House Office of Information and Regulatory Affairs.)

Comments

Executive summary: Regulatory cost-benefit analysis (CBA) requirements are unlikely to stop frontier AI regulations in the US, though there are some uncertainties around judicial review and potential changes in administration.

Key points:

  1. Federal agencies must conduct CBAs for large regulations, but these are often superficial and don't substantially influence the regulation.
  2. The Office of Information and Regulatory Affairs (OIRA) reviews CBAs but rarely causes regulations to be withdrawn, especially under pro-regulatory administrations.
  3. Courts can scrutinize CBAs but generally defer to agencies unless choices are clearly unreasonable.
  4. Unpredictable judges appointed during the Trump administration create some uncertainty around judicial review of CBAs.
  5. Changes in administration could alter how OIRA and courts approach CBA review, potentially making it a more significant hurdle.
  6. Overall, CBA requirements are unlikely to stop frontier AI regulations, but there are some areas of uncertainty.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
