Does Anyone Know the Net Benefits of Regulation?

In early August, I was invited to testify before the Senate Judiciary subcommittee on Oversight, Federal Rights and Agency Action, which is chaired by Sen. Richard Blumenthal (D-Conn.).  The topic of the panel was the amount of time it takes to finalize a regulation.  Specifically, some were concerned that new regulations were being deliberately or needlessly held up in the regulatory process, and as a result, the realization of the benefits of those regulations was delayed (hence the dramatic title of the panel: “Justice Delayed: The Human Cost of Regulatory Paralysis”).

In my testimony, I took the position that economic and scientific analysis of regulations is important.  Careful consideration of regulatory options can help minimize the costs and unintended consequences that regulations necessarily incur. If additional time can improve regulations, both by raising the quality of individual regulations and by moving toward the optimal quantity of regulation, then that additional time should be taken.  I based this position on three main points:

  1. The accumulation of regulations stifles innovation and entrepreneurship and reduces efficiency. This slows economic growth, and over time, the decreased economic growth attributable to regulatory accumulation has significantly reduced real household income.
  2. The unintended consequences of regulations are particularly detrimental to low-income households, imposing costs on precisely the group that has the fewest resources to deal with them.
  3. The quality of regulations matters. The incentive structure of regulatory agencies, coupled with occasional pressure from external forces such as Congress, can cause agencies to favor particular stakeholder groups or to create regulations whose costs exceed their benefits. In some cases, because of statutory deadlines and other pressures, agencies may rush regulations through the crafting process. That can lead to poor execution: rushed regulations are, on average, more poorly considered, which can lead to greater costs and unintended consequences. Even worse, a rushed regulation’s intended benefits may never materialize even though it imposes very real human costs.

At the same time, I told the members of the subcommittee that if “political shenanigans” are the reason some rules take a long time to finalize, then they should use their bully pulpits to draw attention to such actions.  The influence of politics on regulation and the rulemaking process is an unfortunate reality, but not one that should be accepted.

I actually left that panel with some small amount of hope that, going forward, there might be room for an honest discussion about regulatory reform.  It seemed to me that no one in the room was happy with the current regulatory process – a good starting point if you want real change.  Chairman Blumenthal seemed to feel the same way, stating in his closing remarks that he saw plenty of common ground.  I said as much in a follow-up letter I sent to Chairman Blumenthal in August:

I share your guarded optimism that there may exist substantial agreement that the regulatory process needs to be improved. My research indicates that any changes to regulatory process should include provisions for improved analysis because better analysis can lead to better outcomes. Similarly, poor analysis can lead to rules that cost more human lives than they needed to in order to accomplish their goals.

A recent op-ed penned by Sen. Blumenthal in The Hill shows me that at least one person is still thinking about the topic of that hearing.  The final sentence of his op-ed said that “we should work together to make rule-making better, more responsive and even more effective at protecting Americans.” I agree. But I disagree with the idea that we know that, as the Senator wrote, “by any metric, these rules are worth [their cost].”  The op-ed goes on to say:

The latest report from the Office of Information and Regulatory Affairs shows federal regulations promulgated between 2002 and 2012 produced up to $800 billion in benefits, with just $84 billion in costs.

Sen. Blumenthal’s op-ed would make sense if his facts were correct.  However, the report to Congress from OIRA that his op-ed referred to actually estimates the costs and benefits of only a handful of regulations.  It’s simple enough to open that report and quote the very first bullet point in the executive summary, which reads:

The estimated annual benefits of major Federal regulations reviewed by OMB from October 1, 2002, to September 30, 2012, for which agencies estimated and monetized both benefits and costs, are in the aggregate between $193 billion and $800 billion, while the estimated annual costs are in the aggregate between $57 billion and $84 billion. These ranges are reported in 2001 dollars and reflect uncertainty in the benefits and costs of each rule at the time that it was evaluated.

But you have to dig a little further into the report to realize that this characterization of the costs and benefits of regulations reflects only the view of agency economists (consider their incentives for a moment – they work for the regulatory agencies) and covers only 115 of the 37,786 regulations created from October 1, 2002, to September 30, 2012.  As the report that Sen. Blumenthal refers to actually says:

The estimates are therefore not a complete accounting of all the benefits and costs of all regulations issued by the Federal Government during this period.

Furthermore, as an economist who used to work in a regulatory agency and produce these economic analyses of regulations, I find it heartening that the OMB report emphasizes that the estimates it relies on are “neither precise nor complete.”  Here’s another point of emphasis from the OMB report:

Individual regulatory impact analyses vary in rigor and may rely on different assumptions, including baseline scenarios, methods, and data. To take just one example, all agencies draw on the existing economic literature for valuation of reductions in mortality and morbidity, but the technical literature has not converged on uniform figures, and consistent with the lack of uniformity in that literature, such valuations vary somewhat (though not dramatically) across agencies. Summing across estimates involves the aggregation of analytical results that are not strictly comparable.

I don’t doubt Sen. Blumenthal’s sincerity in believing that the net benefits of regulation are reflected in the first bullet point of the OMB Report to Congress.  But this illustrates one of the problems facing regulatory reform today: people on both sides of the debate continue to believe that they know the facts, when in reality we know a lot less about the net effects of regulation than we often pretend to know.  Only recently have economists even begun to understand the drag that regulatory accumulation has on economic growth, and that says nothing about what benefits regulations create in exchange.

All members of Congress need to understand the limitations of our knowledge of the total effects of regulation.  We tend to rely on prospective analyses – analyses that estimate the costs and benefits of a regulation before it takes effect.  What we need are more retrospective analyses, which can tell us what has actually worked and what hasn’t, and more comparative studies – studies that compare control and treatment groups to see whether regulations affect those groups differently.  In the meantime, the best we can do is try to ensure that the people engaged in creating new regulations follow a path of basic problem-solving: First, identify whether there is a problem that actually needs to be solved.  Second, examine several alternative ways of addressing that problem.  Then consider the costs and benefits of the various alternatives before choosing one.

The Economics of Regulation Part 3: How to Estimate the Effect of Regulatory Accumulation on the Economy? Exploring Endogenous Growth and Other Models

This post is the third part in a three part series spurred by a recent study by economists John Dawson and John Seater that estimates that the accumulation of federal regulation has slowed economic growth in the US by about 2% annually.  The first part discussed generally how Dawson and Seater’s study and other investigations into the consequences of regulation are important because they highlight the cumulative drag of our regulatory system. The second part went into detail on some of the ways that economists measure regulation, highlighting the strengths and weaknesses of each.  This post – the final one in the series – looks at how those measures of regulation are used to estimate the consequences of regulatory policy.  As always, economists do it with models.  In the case of Dawson and Seater, they appeal to a well-established family of endogenous growth models built upon the foundational principle of creative destruction, in the tradition of Joseph Schumpeter.

So, what is an endogenous growth model?

First, a brief discussion of models:  In a social or hard science, the ideal model is one that is useful (applicable to the real world using observable inputs to predict outcomes of interest), testable (predictions can be tested with observed outcomes), flexible (able to adapt to a wide variety of input data), and tractable (not too cumbersome to work with).  Suppose a map predicts that following a certain route will lead to a certain location.  When you follow that route in the real world, if you do not actually end up at the predicted location, you will probably stop using that map.  Same thing with models: if a model does a good job at predicting real world outcomes, then it sticks around until someone invents one that does an even better job.  If it doesn’t predict things well, then it usually gets abandoned quickly.

Economists have been obsessed with modeling the growth of national economies at least since Nobel prize winner Simon Kuznets began exploring how to measure GDP in the 1930s.  Growth models are models that try to represent how the scale of an economy, measured by metrics such as GDP, grows over time.  For a long time, economists relied on neoclassical growth models, which primarily use capital accumulation, population growth, technology, and productivity as the main explanatory factors in predicting the economic growth of a country. One of the first and most famous of these is the Solow model, which uses a simple mapping from increasing levels of the accumulated stock of capital to increasing levels of GDP.  In the Solow model, GDP does not increase at the same rate as capital accumulation because of the diminishing marginal returns to capital.  Even though the Solow model was a breakthrough in describing the growth of GDP from capital stock accumulation, most factors in this growth process (and, generally speaking, in the growth processes of other models in the neoclassical family) are determined by economic decisions made outside of the model. These factors are therefore dubbed exogenous, as opposed to endogenous factors, which are generated inside the model by the economic decisions of the actors being modeled.
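
To make the diminishing-returns point concrete, here is a minimal numerical sketch of a Solow-style setup.  The Cobb-Douglas functional form is the textbook choice, but the parameter values and variable names below are illustrative assumptions rather than figures from Solow’s work or from Dawson and Seater.

```python
# A minimal Solow-style sketch (assumed, illustrative parameter values).
alpha = 0.3    # capital's share of output (assumed)
s = 0.2        # savings/investment rate (assumed)
delta = 0.05   # depreciation rate (assumed)
A = 1.0        # productivity level, held fixed (exogenous in the basic model)
L = 1.0        # labor, held fixed for simplicity

def output(K):
    """Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha)."""
    return A * K**alpha * L**(1 - alpha)

# Diminishing marginal returns: doubling capital less than doubles output.
print(output(2.0) / output(1.0))   # about 1.23, not 2.0

# Capital accumulation: K_{t+1} = (1 - delta) * K_t + s * Y_t.
K = 1.0
for t in range(200):
    K = (1 - delta) * K + s * output(K)
print(K)   # approaches the steady state (s * A / delta)**(1 / (1 - alpha)), roughly 7.2
```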

Much of the research into growth modeling in the decades following Solow’s breakthrough has been dedicated to “endogenizing” those exogenous forces (i.e., moving them inside the model). For instance, a major accomplishment was endogenizing the savings rate – how much of household income is saved and invested in expanding firms’ capital stocks. Even with an endogenous savings rate, and with exogenous growth in the population providing labor for production, the accumulating capital stocks in these neoclassical growth models could not explain all of the growth in GDP. The difference, called the Solow residual, was interpreted as the growth in productivity due to technological development. It was like manna from heaven for the actors in the economy – growing exogenously over time regardless of any decisions made inside the model.
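
To see how the Solow residual is backed out in practice, here is a back-of-the-envelope growth-accounting sketch.  Every number in it is hypothetical and chosen only to show the arithmetic; none comes from the studies discussed in this post.

```python
# Growth-accounting sketch: the Solow residual as "leftover" growth.
# All numbers below are hypothetical, chosen purely for illustration.
alpha = 0.3    # assumed capital share of output
g_Y = 0.03     # observed GDP growth rate (hypothetical)
g_K = 0.04     # growth rate of the capital stock (hypothetical)
g_L = 0.01     # growth rate of the labor force (hypothetical)

# With Y = A * K^alpha * L^(1 - alpha), growth rates approximately satisfy
#   g_Y = g_A + alpha * g_K + (1 - alpha) * g_L,
# so the residual attributed to productivity growth is:
g_A = g_Y - alpha * g_K - (1 - alpha) * g_L
print(round(g_A, 3))   # 0.011: about 1.1 percentage points unexplained by capital and labor
```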

But it should be fairly obvious that decisions we make today can affect our future productivity through technological development, and not just through the accumulation of capital stocks or population growth. Technological development is not free. It is the result of someone’s decision to invest in developing technologies. Because technological development is the endogenous result of an economic decision, it can be affected by any factors that distort the incentives involved in such investment decisions (e.g., taxes and regulations). 

This is the primary improvement of endogenous growth theory over neoclassical growth models.  Endogenous growth models account for the fact that innovative firms invest in both capital and technology, which has the aggregate effect of pushing out the entire production possibilities frontier.  Further, policies such as adding regulatory restrictions or changing tax rates affect the incentives and abilities of people in the economy to innovate and produce.  The Dawson and Seater study relies on a model originally developed by Pietro Peretto to examine the effects of taxes on economic growth.  Dawson and Seater adapt the model to include regulation as an additional variable, although they do not formally model the exact mechanism by which regulation affects investment choices in the way the original model does for taxes.  Nonetheless, it is entirely plausible that regulation affects investment, and how much it does so is ultimately an empirical question.
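
The sketch below illustrates the general mechanism this family of models captures: a policy wedge (a tax or a regulatory burden) lowers the payoff to investing in technology, firms invest less, and the economy’s growth rate itself falls.  It is emphatically not Peretto’s model or Dawson and Seater’s specification; the functional forms and parameters are invented for illustration.

```python
# Stylized endogenous-growth mechanism (NOT the Peretto or Dawson-Seater model).
# A policy "wedge" (a tax or regulatory burden) lowers the payoff to R&D,
# firms devote less to R&D, and the growth rate of productivity itself falls.
def simulate(wedge, years=50):
    A = 1.0                          # productivity index, normalized to 1
    rd_share = 0.10 * (1 - wedge)    # assumed: R&D effort shrinks with the policy wedge
    for _ in range(years):
        A *= 1 + 0.5 * rd_share      # assumed: productivity growth proportional to R&D effort
    return A                         # with other inputs fixed, output scales with A

print(simulate(wedge=0.0))   # undistorted economy: about 11.5x its starting size
print(simulate(wedge=0.3))   # distorted economy: about 5.6x, a permanently lower path
```

In the richer models Dawson and Seater actually use, the R&D decision emerges from firms’ profit maximization rather than being assumed, but the qualitative effect of a distortion on the growth rate is the same.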

So, now that you at least know that Dawson and Seater selected an accepted and feasible model – one that, like a good map, makes reliable predictions about real-world outcomes – you’re surely asking how that model provided empirical evidence of regulation’s effect on economic growth.  The answer depends on what one means by “empirical.”  Consider a much better established model: gravity.  A simple model of gravity states that an object in a vacuum near the Earth’s surface will accelerate toward the Earth at 9.81 meters per second squared.  On other planets, that number may be higher or lower, depending on the planet’s mass and the object’s distance from the planet’s center.  In this analogy, consider taxes the equivalent of mass – we know from previous endogenous growth models that taxes have a well-established effect on the economy, just as we know that mass has a known effect on the rate of acceleration due to gravity.  Dawson and Seater have effectively said that regulations must affect the economy in a way similar to taxes.  Maybe the coefficient isn’t 9.81, but the generalized model allows them to estimate what that coefficient is – so long as they can measure the “mass” equivalent of regulation and control for “distance.”

They had to rely on the model, in fact, to produce the counterfactual – or, to borrow a term from experimental science, a control group.  If you know the constant by which mass affects acceleration, then you can figure out what acceleration would be for a different level of mass without actually observing it.  Similarly, if you know that regulations affect economic growth in some established pattern, then you can deduce what economic growth would be without regulations.  Dawson and Seater appealed to an endogenous growth model (courtesy of Peretto) to simulate a counterfactual economy in which regulation remained at the level seen in 1949.  By 2005, that counterfactual economy had become considerably larger than the actual economy – the one in which we’ve seen regulation grow to include over 1,000,000 restrictions.
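
To get a feel for why that counterfactual gap is so large, consider how a roughly two-percentage-point difference in annual growth compounds over the 1949–2005 period.  The baseline growth rate in the sketch below is an assumption; only the size of the gap echoes Dawson and Seater’s headline estimate.

```python
# How a roughly two-percentage-point annual growth gap compounds, 1949 to 2005.
# The baseline growth rate is an assumption; only the size of the gap echoes
# Dawson and Seater's headline estimate.
years = 2005 - 1949                              # 56 years
actual_growth = 0.032                            # assumed actual average annual growth
counterfactual_growth = actual_growth + 0.02     # ~2 points higher absent the added regulation

ratio = ((1 + counterfactual_growth) / (1 + actual_growth)) ** years
print(round(ratio, 2))   # roughly 3: the counterfactual economy ends up about 3x larger
```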