A report released yesterday by the Government Accountability Office will likely get spun to imply that federal agencies are doing a pretty good job of assessing the benefits and costs of their proposed regulations. The subtitle of the report reads in part, “Agencies Included Key Elements of Cost-Benefit Analysis…” Unfortunately, agency analyses of regulations are less complete than this subtitle suggests.
The GAO report defined four major elements of regulatory analysis: discussion of the need for the regulatory action, analysis of alternatives, assessment of benefits, and assessment of costs. These crucial features have been required in executive orders on regulatory analysis and OMB guidance for decades. For the largest regulations with economic effects exceeding $100 million annually (“economically significant” regulations), GAO found that agencies always included a statement of the regulation’s purpose, discussed alternatives 81 percent of the time, always discussed benefits and costs, provided a monetized estimate of costs 97 percent of the time, and provided a monetized estimate of benefits 76 percent of the time.
A deeper dive into the report, however, reveals that GAO did not evaluate the quality of any of these aspects of agencies’ analysis. Page 4 of the report notes, “[O]ur analysis was not designed to evaluate the quality of the cost-benefit analysis in the rules. The presence of all key elements does not provide information regarding the quality of the analysis, nor does the absence of a key element necessarily imply a deficiency in a cost-benefit analysis.”
For example, GAO checked to see whether the agency included a statement of the purpose of the regulation, but it apparently accepted a statement that the regulation is required by law as a sufficient statement of purpose (p. 22). Citing a statute is not the same thing as articulating a goal or identifying the root cause of the problem an agency seeks to solve.
Similarly, an agency can provide a monetary estimate of some benefits or costs without necessarily addressing all major benefits or costs the regulation is likely to create. GAO notes that it did not ascertain whether agencies addressed all relevant benefits or costs (p. 23).
For an assessment of the quality of agencies’ regulatory analysis, check out the Mercatus Center’s Regulatory Report Card. The Report Card evaluation method explicitly assesses the quality of the agency’s analysis, rather than just checking to see if the agency discussed the topics. For example, to assess how well the agency analyzed the problem it is trying to solve, the evaluators ask five questions:
1. Does the analysis identify a market failure or other systemic problem?
2. Does the analysis outline a coherent and testable theory that explains why the problem is systemic rather than anecdotal?
3. Does the analysis present credible empirical support for the theory?
4. Does the analysis adequately address the baseline — that is, what the state of the world is likely to be in the absence of federal intervention not just now but in the future?
5. Does the analysis adequately assess uncertainty about the existence or size of the problem?
These questions are intended to ascertain whether the agency identified a real, significant problem and identified its likely cause. On a scoring scale ranging from 0 points (no relevant content) to 5 points (substantial analysis), economically significant regulations proposed between 2008 and 2012 scored an average of just 2.2 points for their analysis of the systemic problem. This score indicates that many regulations are accompanied by very little evidence-based analysis of the underlying problem the regulation is supposed to solve. Scores for assessment of alternatives, benefits, and costs are only slightly better, which suggests that these aspects of the analysis are often seriously incomplete.
These results are consistent with the findings of other scholars who have evaluated the quality of agency Regulatory Impact Analyses during the past several decades. (Check pp. 7-10 of this paper for citations.)
The Report Card results are also consistent with the findings in the GAO report. GAO assessed whether agencies are turning in their assigned homework; the Report Card assesses how well they did the work.
The GAO report contains a lot of useful information, and the authors are forthright about its limitations. GAO combed through 203 final regulations to figure out what parts of the analysis the agencies did and did not do — an impressive accomplishment by any measure!
My concern is that some participants in the political debate over regulatory reform will claim the report shows regulatory agencies are doing a great job of analysis, and that no reforms to improve the quality of analysis are needed. The Regulatory Report Card results clearly demonstrate otherwise.