Author Archives: Patrick McLaughlin

About Patrick McLaughlin

Patrick A. McLaughlin is a Senior Research Fellow at the Mercatus Center at George Mason University, where he co-created RegData. His research focuses on regulations and the regulatory process, and he has published peer-reviewed articles on several topics related to regulation, including administrative law, regulatory economics, law and economics, public choice, environmental economics, and international trade. Prior to joining Mercatus, Patrick served as a Senior Economist at the Federal Railroad Administration in the United States Department of Transportation. He holds a Ph.D. in economics from Clemson University in South Carolina. Patrick currently lives in Arlington, VA, with his wife, two cats, and a dog that weighs more than his wife and two cats combined.

It’s Time to Change the Incentives of Regulators

One of the primary reasons that regulation slows down economic growth is that regulation inhibits innovation.  An example of that dynamic is playing out in real time.  Julian Hattem at The Hill recently blogged about online educators pushing back against US Department of Education regulations that, they argue, are preventing the expansion of educational opportunities.  From Hattem’s post:

Funders and educators trying to spur innovations in online education are complaining that federal regulators are making their jobs more difficult.

John Ebersole, president of the online Excelsior College, said on Monday that Congress and President Obama both were making a point of exploring how the Internet can expand educational opportunities, but that regulators at the Department of Education were making it harder.

“I’m afraid that those folks over at the Department of Education see their role as being that of police officers,” he said. “They’re all about creating more and more regulations. No matter how few institutions are involved in particular inappropriate behavior, and there have been some, the solution is to impose regulations on everybody.”

Ebersole has it right – the incentive for people at the Department of Education, and at regulatory agencies in general, is to create more regulations.  Economists sometimes model the government as if it were a machine that benevolently chooses to intervene in markets only when it makes sense.  But those models ignore that there are real people inside the machine of government, and people respond to incentives.  Regulations are the product that regulatory agencies create, and employees of those agencies are rewarded with things like plaques (I’ve got three sitting on a shelf in my office, from my days as a regulatory economist at the Department of Transportation), bonuses, and promotions for being on teams that successfully create more regulations.  This is unfortunate, because it inevitably creates pressure to regulate regardless of the consequences for things like innovation and economic growth.

A system that rewards people for producing large quantities of some product, regardless of that product’s real value or potential long-term consequences, is a recipe for disaster.  In fact, it sounds reminiscent of the situation of home loan originators in the years leading up to the financial crisis of 2008.  Mortgage origination is the act of making a loan to someone for the purposes of buying a home.  Fannie Mae and Freddie Mac, as well as large commercial and investment banks, would buy mortgages (and the interest that they promised) from home loan originators, the most notorious of which was probably Countrywide Financial (now part of Bank of America).  The originators knew they had a ready buyer for mortgages, including subprime mortgages – that is, mortgages that were relatively riskier and potentially worthless if interest rates rose.  The knowledge that they could quickly turn a profit by originating more loans and selling them to Fannie, Freddie, and some Wall Street firms led many mortgage originators to turn a blind eye to the possibility that many of the loans they made would not be paid back.  That is, the incentives of individuals working in mortgage origination companies led them to produce large quantities of their product, regardless of the product’s real value or potential long-term consequences.  Sound familiar?

The Use of Science in Public Policy

For the budding social scientists out there who hope that their research will someday positively affect public policy, my colleague Jerry Ellig recently pointed out a 2012 publication from the National Research Council called “Using Science as Evidence in Public Policy.” (It takes a few clicks to download, but you can get it for free).

From the intro, the council’s goal was:

[T]o review the knowledge utilization and other relevant literature to assess what is known about how social science knowledge is used in policy making . . . [and] to develop a framework for further research that can improve the use of social science knowledge in policy making.

The authors conclude that, while “knowledge from all the sciences is relevant to policy choices,” it is difficult to explain exactly how that knowledge is used in the public policy sphere.  They go on to develop a framework for research on how science is used.  The entire report is interesting, especially if you care about using science as evidence in public policy, and doubly so if you are a Ph.D. student or recently minted Ph.D.  I particularly liked the stark recognition of the fact that political actors will consider their own agendas (i.e., re-election) and values (i.e., the values most likely to help in a re-election bid) regardless of scientific evidence.  That’s not a hopeless statement, though – there’s still room for science to influence policy, but, as public choice scholars have pointed out for decades, the government is run by people who will, on average, rationally act in their own self-interest.  Here are a couple more lines to that point:

Holding to a sharp, a priori distinction between science and politics is nonsense if the goal is to develop an understanding of the use of science in public policy. Policy making, far from being a sphere in which science can be neatly separated from politics, is a sphere in which they necessarily come together… Our position is that the use of [scientific] evidence or adoption of that [evidence-based] policy cannot be studied without also considering politics and values.

One thing in particular stands out to anyone who has worked on the economic analysis of regulations.  The introduction to this report includes this summary of science’s role in policy:

Science has five tasks related to policy:

(1) identify problems, such as endangered species, obesity, unemployment, and vulnerability to natural disasters or terrorist acts;

(2) measure their magnitude and seriousness;

(3) review alternative policy interventions;

(4) systematically assess the likely consequences of particular policy actions—intended and unintended, desired and unwanted; and

(5) evaluate what, in fact, results from policy.

This sounds almost exactly like the process of performing an economic analysis of a regulation, at least when it’s done well (if you want to know how well agencies actually perform regulatory analysis, read this, and for how well they actually use the analysis in decision-making, read this).  Executive Order 12866, issued by President Bill Clinton in 1993, instructs federal executive agencies on the role of analysis in creating regulations, including each of the following instructions.  Below I’ve slightly rearranged some excerpts and slightly paraphrased other parts from Executive Order 12866, and I have added the bold numbers to map these instructions back to the summary of science’s role quoted above.  (For the admin law wonks, I’ve noted the exact section and paragraph of the Executive Order in which each element is contained.)

(1) Each agency shall identify the problem that it intends to address (including, where applicable, the failures of private markets or public institutions that warrant new agency action). [Section 1(b)(1)]

(2) Each agency shall assess the significance of that problem. [Section 1(b)(1)]

(3) Each agency shall identify and assess available alternatives to direct regulation, including providing economic incentives to encourage the desired behavior, such as user fees or marketable permits, or providing information upon which choices can be made by the public. Each agency shall identify and assess alternative forms of regulation. [Section 1(b)(3) and Section 1(b)(8)]

(4) When an agency determines that a regulation is the best available method of achieving the regulatory objective, it shall design its regulations in the most cost-effective manner to achieve the regulatory objective. In doing so, each agency shall consider incentives for innovation, consistency, predictability, the costs of enforcement and compliance (to the government, regulated entities, and the public), flexibility, distributive impacts, and equity. [Section 1(b)(5)]

(5) Each agency shall periodically review its existing significant regulations to determine whether any such regulations should be modified or eliminated so as to make the agency’s regulatory program more effective in achieving the regulatory objectives, less burdensome, or in greater alignment with the President’s priorities and the principles set forth in this Executive order. [Section 5(a)]

OMB’s Circular A-4—the instruction guide for government economists tasked with analyzing regulatory impacts—similarly directs economists to include three basic elements in their regulatory analyses (again, the bold numbers are mine to help map these elements back to the summary of science’s role):

(1 & 2) a statement of the need for the proposed action,

(3) an examination of alternative approaches, and

(4) an evaluation of the benefits and costs—quantitative and qualitative—of the proposed action and the main alternatives identified by the analysis.

The statement of the need for the proposed action is equivalent to the first (identifying problems) and second (measuring their magnitude and seriousness) tasks from the NRC report.  The examination of alternative approaches and the evaluation of the benefits and costs of the possible alternatives are equivalent to tasks 3 (review alternative policy interventions) and 4 (assess the likely consequences).

It’s also noteworthy that the NRC points out the importance of measuring the magnitude and seriousness of problems.  A lot of public time and money gets spent trying to fix problems that are not widespread or systemic.  There may be better ways to use those resources.  Evaluating the seriousness of problems allows a prioritization of limited resources.

Finally, I want to point out how this parallels a project here at Mercatus.  Not coincidentally, the statement of science’s role in policy reads like the grading criteria of the Mercatus Regulatory Report Card, which are:

1. Systemic Problem: How well does the analysis identify and demonstrate the existence of a market failure or other systemic problem the regulation is supposed to solve?
2. Alternatives: How well does the analysis assess the effectiveness of alternative approaches?
3. Benefits (or other Outcomes): How well does the analysis identify the benefits or other desired outcomes and demonstrate that the regulation will achieve them?
4. Costs: How well does the analysis assess costs?
5. Use of Analysis: Does the proposed rule or the RIA present evidence that the agency used the Regulatory Impact Analysis in any decisions?
6. Cognizance of Net Benefits: Did the agency maximize net benefits or explain why it chose another alternative?

The big difference is that the Report Card contains elements that emphasize measuring whether the analysis is actually used – bringing us back to the original goal of the research council – to determine “how social science knowledge is used in policy making.”

Does Anyone Know the Net Benefits of Regulation?

In early August, I was invited to testify before the Senate Judiciary subcommittee on Oversight, Federal Rights and Agency Action, which is chaired by Sen. Richard Blumenthal (D-Conn.).  The topic of the panel was the amount of time it takes to finalize a regulation.  Specifically, some were concerned that new regulations were being deliberately or needlessly held up in the regulatory process, and as a result, the realization of the benefits of those regulations was delayed (hence the dramatic title of the panel: “Justice Delayed: The Human Cost of Regulatory Paralysis.”)

In my testimony, I took the position that economic and scientific analysis of regulations is important.  Careful consideration of regulatory options can help minimize the costs and unintended consequences that regulations necessarily incur.  If additional time can improve regulations—meaning both improving individual regulations’ quality and achieving the optimal quantity—then additional time should be taken.  My position was buttressed by three main points:

  1. The accumulation of regulations stifles innovation and entrepreneurship and reduces efficiency. This slows economic growth, and over time, the decreased economic growth attributable to regulatory accumulation has significantly reduced real household income.
  2. The unintended consequences of regulations are particularly detrimental to low-income households— resulting in costs to precisely the same group that has the fewest resources to deal with them.
  3. The quality of regulations matters. The incentive structure of regulatory agencies, coupled with occasional pressure from external forces such as Congress, can cause regulations to favor particular stakeholder groups or to create regulations for which the costs exceed the benefits. In some cases, because of statutory deadlines and other pressures, agencies may rush regulations through the crafting process. That can lead to poor execution: rushed regulations are, on average, more poorly considered, which can lead to greater costs and unintended consequences. Even worse, the regulation’s intended benefits may not be achieved despite incurring very real human costs.

At the same time, I told the members of the subcommittee that if “political shenanigans” are the reason some rules take a long time to finalize, then they should use their bully pulpits to draw attention to such actions.  The influence of politics on regulation and the rulemaking process is an unfortunate reality, but not one that should be accepted.

I actually left that panel with some small amount of hope that, going forward, there might be room for an honest discussion about regulatory reform.  It seemed to me that no one in the room was happy with the current regulatory process – a good starting point if you want real change.  Chairman Blumenthal seemed to feel the same way, stating in his closing remarks that he saw plenty of common ground.  I sent a follow-up letter to Chairman Blumenthal stating as much. I wrote to the Chairman in August:

I share your guarded optimism that there may exist substantial agreement that the regulatory process needs to be improved. My research indicates that any changes to regulatory process should include provisions for improved analysis because better analysis can lead to better outcomes. Similarly, poor analysis can lead to rules that cost more human lives than they needed to in order to accomplish their goals.

A recent op-ed penned by Sen. Blumenthal in The Hill shows me that at least one person is still thinking about the topic of that hearing.  The final sentence of his op-ed said that “we should work together to make rule-making better, more responsive and even more effective at protecting Americans.” I agree. But I disagree with the idea that we know that, as the Senator wrote, “by any metric, these rules are worth [their cost].”  The op-ed goes on to say:

The latest report from the Office of Information and Regulatory Affairs shows federal regulations promulgated between 2002 and 2012 produced up to $800 billion in benefits, with just $84 billion in costs.

Sen. Blumenthal’s op-ed would make sense if his facts were correct.  However, the report to Congress from OIRA that his op-ed referred to actually estimates the costs and benefits of only a handful of regulations.  It’s simple enough to open that report and quote the very first bullet point in the executive summary, which reads:

The estimated annual benefits of major Federal regulations reviewed by OMB from October 1, 2002, to September 30, 2012, for which agencies estimated and monetized both benefits and costs, are in the aggregate between $193 billion and $800 billion, while the estimated annual costs are in the aggregate between $57 billion and $84 billion. These ranges are reported in 2001 dollars and reflect uncertainty in the benefits and costs of each rule at the time that it was evaluated.

But you have to actually dig a little farther into the report to realize that this characterization of the costs and benefits of regulations represents only the view of agency economists (think about their incentive for a moment – they work for the regulatory agencies) and for only 115 regulations out of 37,786 created from October 1, 2002, to September 30, 2012.  As the report that Sen. Blumenthal refers to actually says:

The estimates are therefore not a complete accounting of all the benefits and costs of all regulations issued by the Federal Government during this period.
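To put that coverage in perspective, here is a quick check of the numbers above; this is simple arithmetic on the figures from the OMB report, nothing more.

```python
# Share of the period's rules for which OMB's totals include monetized
# benefit and cost estimates.
rules_with_estimates = 115
rules_total = 37_786
print(f"{rules_with_estimates / rules_total:.2%}")  # prints 0.30%
```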

Furthermore, as an economist who used to work in a regulatory agency and produce these economic analyses of regulations, I find it heartening that the OMB report emphasizes that the estimates it relies on to produce the report are “neither precise nor complete.”  Here’s another point of emphasis from the OMB report:

Individual regulatory impact analyses vary in rigor and may rely on different assumptions, including baseline scenarios, methods, and data. To take just one example, all agencies draw on the existing economic literature for valuation of reductions in mortality and morbidity, but the technical literature has not converged on uniform figures, and consistent with the lack of uniformity in that literature, such valuations vary somewhat (though not dramatically) across agencies. Summing across estimates involves the aggregation of analytical results that are not strictly comparable.

I don’t doubt Sen. Blumenthal’s sincerity in believing that the net benefits of regulation are reflected in the first bullet point of the OMB Report to Congress.  But this shows one of the problems facing regulatory reform today: people on both sides of the debate continue to believe that they know the facts, but in reality we know a lot less about the net effects of regulation than we often pretend to know.  Only recently have economists even begun to understand the drag that regulatory accumulation has on economic growth, and that says nothing about what benefits regulation creates in exchange.

All members of Congress need to understand the limitations of our knowledge of the total effects of regulation.  We tend to rely on prospective analyses – analyses that state the costs and benefits of a regulation before they come to fruition.  What we need are more retrospective analyses, with which we can learn what has really worked and what hasn’t, and more comparative studies – studies with control and experimental groups that test whether regulations affect those groups differently.  In the meantime, the best we can do is try to ensure that the people engaged in creating new regulations follow a path of basic problem-solving: First, identify whether there is a problem that actually needs to be solved.  Second, examine several alternative ways of addressing that problem.  Then consider the costs and benefits of the various alternatives before choosing one.

The Myth of Deregulation and the Financial Crisis

In an opinion piece on American Banker, Rep. Jeb Hensarling wrote that:

The great tragedy of the financial crisis, however, was not that Washington regulations failed to prevent it, but instead that Washington regulations helped lead us into it.

Even putting aside the issue of causality, my colleague Robert Greene and I recently examined the data on regulatory growth as we sought to answer the question, “Did Deregulation Cause the Financial Crisis?” Our conclusion was that there was no measurable, net deregulation leading up to the financial crisis.

The data on regulatory growth came from RegData, which uses text analysis to measure the quantity of restrictions published in regulatory text each year.  The graph below shows the number of regulatory restrictions published each year in Title 12 of the Code of Federal Regulations, which covers the subject area of banks and banking, and Title 17, which covers commodity futures and securities trading.  Deregulation would show a general downward trend.  Instead, we see that both titles grew over that time period. The only downward ticks we see occurred because of some consolidation of duplicative regulations from 1997 to 1999 (see our article for more details on that).

As we wrote at the time:

[W]e find that between 1997 and 2008 the number of financial regulatory restrictions in the Code of Federal Regulations (CFR) rose from approximately 40,286 restrictions to 47,494—an increase of 17.9 percent. Regulatory restrictions in Title 12 of the CFR—which regulates banking—increased 18.2 percent while the number of restrictions in Title 17—which regulates commodity futures and securities markets—increased 17.4 percent.
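For readers curious how a count like that is produced, here is a minimal sketch of the kind of text analysis involved.  The keyword list follows the restrictive terms that RegData’s methodology is built around, but treat the exact list and the matching rules as simplifying assumptions of this sketch, not a reproduction of RegData itself.

```python
import re

# Restrictive terms in the spirit of RegData's methodology; the exact term
# list and matching rules here are simplifying assumptions.
RESTRICTION_TERMS = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text: str) -> int:
    """Count occurrences of restrictive terms in regulatory text."""
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text))
               for term in RESTRICTION_TERMS)

sample = ("Each institution shall file an annual report.  Applicants must "
          "disclose all fees and may not advertise programs that are "
          "prohibited under this part.")
print(count_restrictions(sample))  # prints 4
```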

The Economics of Regulation Part 3: How to Estimate the Effect of Regulatory Accumulation on the Economy? Exploring Endogenous Growth and Other Models

This post is the third part in a three-part series spurred by a recent study by economists John Dawson and John Seater that estimates that the accumulation of federal regulation has slowed economic growth in the US by about 2% annually.  The first part discussed generally how Dawson and Seater’s study and other investigations into the consequences of regulation are important because they highlight the cumulative drag of our regulatory system.  The second part went into detail on some of the ways that economists measure regulation, highlighting the strengths and weaknesses of each.  This post – the final one in the series – looks at how those measures of regulation are used to estimate the consequences of regulatory policy.  As always, economists do it with models.  In the case of Dawson and Seater, they appeal to a well-established family of endogenous growth models built upon the foundational principle of creative destruction, in the tradition of Joseph Schumpeter.

So, what is an endogenous growth model?

First, a brief discussion of models:  In a social or hard science, the ideal model is one that is useful (applicable to the real world using observable inputs to predict outcomes of interest), testable (predictions can be tested with observed outcomes), flexible (able to adapt to a wide variety of input data), and tractable (not too cumbersome to work with).  Suppose a map predicts that following a certain route will lead to a certain location.  When you follow that route in the real world, if you do not actually end up at the predicted location, you will probably stop using that map.  Same thing with models: if a model does a good job at predicting real world outcomes, then it sticks around until someone invents one that does an even better job.  If it doesn’t predict things well, then it usually gets abandoned quickly.

Economists have been obsessed with modeling the growth of national economies at least since Nobel prize winner Simon Kuznets began exploring how to measure GDP in the 1930s.  Growth models generally refer to models that try to represent how the scale of an economy, using metrics such as GDP, grows over time.  For a long time, economists relied on neoclassical growth models, which primarily use capital accumulation, population growth, technology, and productivity as the main explanatory factors in predicting the economic growth of a country. One of the first and most famous of such economic growth models is the Solow model, which has a one-to-one (simple) mapping from increasing levels of the accumulated stock of capital to increasing levels of GDP.  In the Solow model, GDP does not increase at the same rate as capital accumulation due to the diminishing marginal returns to capital.  Even though the Solow model was a breakthrough in describing the growth of GDP from capital stock accumulation, most factors in this growth process (and, generally speaking, in the growth processes of other models in the neoclassical family of growth models) are generated by economic decisions that are outside of the model. As a result, these factors are dubbed exogenous, as opposed to endogenous factors which are generated inside of the model as a result of the economic decisions made by the actors being modeled.
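To make the diminishing-returns mechanism concrete, here is a minimal numerical sketch of a Solow-style economy.  The functional form is the standard one, but the parameter values (capital share, savings rate, depreciation) are illustrative assumptions rather than calibrations from any study.

```python
# Minimal Solow-style sketch: output y = A * k**alpha, and capital
# accumulates via k' = k + s*y - delta*k.  Parameter values are
# illustrative assumptions only.
A, alpha = 1.0, 0.3   # productivity level and capital share
s, delta = 0.2, 0.05  # savings rate and depreciation rate

k = 1.0  # initial capital stock
for t in range(5):
    y = A * k ** alpha
    print(f"t={t}: capital={k:.2f}, output={y:.2f}")
    k = k + s * y - delta * k  # capital accumulation
```

Because alpha is less than one, each additional unit of capital raises output by less than the previous unit did, which is exactly the diminishing marginal returns described above.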

Much of the research into growth modeling in the decades following Solow’s breakthrough has been dedicated to trying to “endogenize” those exogenous forces (i.e., move them inside the model).  For instance, a major accomplishment was endogenizing the savings rate – how much of household income was saved and invested in expanding firms’ capital stocks.  Even with this endogenous savings rate, as well as exogenous growth in the population providing labor for production, the accumulating capital stocks in these neoclassical growth models could not explain all of the growth in GDP.  The difference, called the Solow Residual, was interpreted as the growth in productivity due to technological development and was like manna from heaven for the actors in the economy – exogenously growing over time regardless of the decisions made by the actors in the model.

But it should be fairly obvious that decisions we make today can affect our future productivity through technological development, and not just through the accumulation of capital stocks or population growth. Technological development is not free. It is the result of someone’s decision to invest in developing technologies. Because technological development is the endogenous result of an economic decision, it can be affected by any factors that distort the incentives involved in such investment decisions (e.g., taxes and regulations). 

This is the primary improvement of endogenous growth theory over neoclassical growth models.  Endogenous growth models take into account the idea that innovative firms invest in both capital and technology, which has the aggregate effect of moving out the entire production possibilities curve.  Further, policies such as increasing regulatory restrictions or changing tax rates will affect the incentives and abilities of people in the economy to innovate and produce.  The Dawson and Seater study relies on a model originally developed by Pietro Peretto to examine the effects of taxes on economic growth.  Dawson and Seater adapt the model to include regulation as another endogenous variable, although they do not formally model the exact mechanism by which regulation affects investment choices in the same way as taxes.  Nonetheless, it’s perfectly feasible that regulation does affect investment, and, to a degree, it is simply an empirical question of how much.

So, now that you at least know that Dawson and Seater selected an accepted and feasible model—a model that, like a good map, makes reliable predictions about real world outcomes—you’re surely asking how that model provided empirical evidence of regulation’s effect on economic growth.  The answer depends on what empirical means.  Consider a much better established model: gravity.  A simple model of gravity states that an object in a vacuum near the Earth’s surface will accelerate towards the Earth at 9.81 meters per second squared.  On other planets, that number may be higher or lower, depending on the planet’s mass and the object’s distance from the planet’s center.  In this analogy, consider taxes the equivalent of mass – we know from previous endogenous growth models that taxes have a fairly well-known effect on the economy, just like we know that mass has a known effect on the rate of acceleration from gravitational forces.  Dawson and Seater have effectively said that regulations must have a similar effect on the economy as taxes.  Maybe the coefficient isn’t 9.81, but the generalized model will allow them to estimate what that coefficient is – so long as they can measure the “mass” equivalent of regulation and control for “distance.”  They had to rely on the model, in fact, to produce the counterfactual, or, to use a term from science experiments, a control group.  If you know that mass affects acceleration at some given constant, then you can figure out what acceleration is for a different level of mass without actually observing it.  Similarly, if you know that regulations affect economic growth in some established pattern, then you can deduce what economic growth would be without regulations.  Dawson and Seater appealed to an endogenous growth model (courtesy of Peretto) to simulate a counterfactual economy that maintained regulation levels seen in the year 1949.  By the year 2005, that counterfactual economy had become considerably larger than the actual economy – the one in which we’ve seen regulation increase to include over 1,000,000 restrictions.
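Mechanically, the counterfactual exercise amounts to compounding two growth paths that differ by the estimated drag.  Here is a stylized sketch, not Dawson and Seater’s actual model; the baseline growth rate is an assumption chosen only for illustration.

```python
# Stylized counterfactual: compound growth with and without an assumed
# 2-percentage-point regulatory drag.  The baseline rate is illustrative.
actual_growth = 0.032   # assumed average annual growth of the actual economy
drag = 0.02             # annual growth lost to regulation, per Dawson and Seater
years = 2005 - 1949

gdp_actual = gdp_counterfactual = 1.0  # index both economies to 1.0 in 1949
for _ in range(years):
    gdp_actual *= 1 + actual_growth
    gdp_counterfactual *= 1 + actual_growth + drag

print(f"Counterfactual economy is {gdp_counterfactual / gdp_actual:.1f}x "
      "the size of the actual economy by 2005")
```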

The Economics of Regulation Part 2: Quantifying Regulation

I recently wrote about a new study from economists John Dawson and John Seater that shows that federal regulations have slowed economic growth in the US by an average of 2% per year.  The study was novel and important enough from my perspective that it deserved some detailed coverage.  In this post, which is part two of a three-part series (part one here), I go into some detail on the various ways that economists measure regulation.  This will help put into context the measure that Dawson and Seater used, which is the main innovation of their study.  The third part of the series will discuss the endogenous growth model in which they used their new measure of regulation to estimate its effect on economic growth.

From the macroeconomic perspective, the main policy interventions—that is, instruments wielded in a way to change individual or firm behavior—used by governments are taxes and regulations.  Others might include spending/deficit spending and monetary policy in that list, but a large percentage of economics studies on interventions intended to change behavior have focused on taxes, for one simple reason: taxes are relatively easy to quantify.  As a result, we know a lot more about taxes than we do about regulations, even if much of that knowledge is not well implemented.  Economists can calculate changes to marginal tax rates caused by specific policies, and by simultaneously tracking outcomes such as changes in tax revenue and the behavior of taxed and untaxed groups, deduce specific numbers with which to characterize the consequences of those taxation policies.  In short, with taxes, you have specific dollar values or percentages to work with. With regulations, not so much.

In fact, the actual burden of regulation is notoriously hidden, especially when directly compared to taxes that attempt to achieve the same policy objective.  For example, since fuel economy regulations (called Corporate Average Fuel Economy, or CAFE, standards) were first implemented in the 1970s, it has been broadly recognized that the goal of reducing gasoline consumption could be more efficiently achieved through a gasoline tax rather than vehicle design or performance standards.  However, it is much easier for a politician to tell her constituents that she will make auto manufacturers build more fuel-efficient cars than to tell constituents that they now face higher gasoline prices because of a fuel tax.  In econospeak, taxes are salient to voters—remembered as important and costly—whereas regulations are not. Even when comparing taxes to taxes, some, such as property taxes, are apparently more salient than others, such as payroll taxes, as this recent study shows.  If some taxes that workers pay on a regular basis are relatively unnoticed, how much easier is it to hide a tax in the form of a regulation?  Indeed, it is arguably because regulations are uniquely opaque as policy instruments that all presidents since Jimmy Carter have required some form of benefit-cost analysis on new regulations prior to their enactment (note, however, that the average quality of those analyses is astonishingly low).  Of course, it is for these same obfuscatory qualities that politicians seem to prefer regulations to taxes.

Despite the inherent difficulty, scholars have been analyzing the consequences of regulation for decades, leading to a fairly large literature. Studies typically examine the causal effect of a unique regulation or a small collection of related regulations, such as air quality standards stemming from the Clean Air Act.  Compared to the thousands of actual regulations that are in effect, the regulation typically studied is relatively limited in scope, even if its effects can be far-reaching.  Because most studies on regulation focus only on one or perhaps a few specific regulations, there is a lot of room for more research to be done.  Specifically, improved metrics of regulation, especially metrics that can be used either in multi-industry microeconomic studies or in macroeconomic contexts, could help advance our understanding of the overall effect of all regulations.

With that goal in mind, some attempts have been made to measure regulation more comprehensively through the use of surveys and legal studies.  The most famous example is probably the Doing Business index from the World Bank, while perhaps the most widely used in academic studies is the Indicators of Product Market Regulation from the OECD.  Since 2003, the World Bank has produced the Doing Business index, which combines survey data with observational data into a single number designed to tell how much it would cost to “do business,” e.g., set up a company, get construction permits, get electricity, register property, etc., in a set of 185 countries.  The Doing Business index is perhaps most useful for identifying good practices to follow in the early to middle stages of economic development, when property rights and other beneficial institutions can be created and strengthened.

The OECD’s Indicators of Product Market Regulation database focuses more narrowly on types of regulation that are more relevant to developed economies.  Specifically, the original OECD data considered only product market and employment protection regulations, both of which are measured at the “economy-wide” level—meaning the OECD measured whether those types of regulations existed in a given country, regardless of whether they were applicable to only certain individuals or particular industries.  The OECD later extended the data by adding barriers to entry, public ownership, vertical integration, market structure, and price controls for a small subset of broadly defined industries (gas, electricity, post, telecommunications, passenger air transport, railways, and road freight).  The OECD develops its database by surveying government officials in several countries and aggregating their responses, with weightings, into several indexes.

By design, the OECD and Doing Business approaches do a good job of relating obscure macroeconomic data to actual people and businesses.  Consider the chart below, taken from the OECD description of how the Product Market Regulation database is created.  As I wrote last week and as the chart shows, the rather sanitized term “product market regulation” actually consists of several components that are directly relevant to a would-be entrepreneur (such as the opacity of a country’s licenses and permits system and administrative burdens for sole proprietorships) and to a consumer (such as price controls and barriers to foreign direct investment).  You can click on the chart below to see some of the other components that are considered in OECD’s product market regulation indicator.

[Figure: tree structure of the OECD product market regulation indicator]

Still, there are two major shortcomings of the OECD data (shortcomings that are equally applicable to similar indexes produced by the World Bank and others).  First, they cover relatively short time spans.  Changes in regulatory policy often require several years, if not decades, to implement, so the results of these changes may not be reflected in short time frames (to a degree, this can be overcome by measuring regulation for several different countries or different industries, so that results of different policies can be compared across countries or industries).

Second, and in my mind more importantly, these indexes are not comprehensive.  Instead, they focus on a few areas of regulation, and then only on whether regulations exist—not how complex or burdensome they are.  As Dawson and Seater explain:

[M]easures of regulation [such as the Doing Business Index and the OECD Indicators] generally proceed by constructing indices based on binary indicators of whether or not various kinds of regulation exist, assigning a value of 1 to each type of regulation that exists and a 0 to those that do not exist.  The index then is constructed as a weighted sum of all the binary indicators.  Such measures capture the existence of given types of regulation but cannot capture their extent or complexity.
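A toy example makes the limitation plain.  The indicator names and weights below are made up for illustration; the point is only that a weighted sum of 0/1 indicators cannot register extent or complexity.

```python
# Toy weighted-sum index of binary indicators, as described above.
# Indicator names and weights are made up for illustration.
indicators = {"price_controls": 1, "entry_barriers": 1, "state_ownership": 0}
weights = {"price_controls": 0.5, "entry_barriers": 0.3, "state_ownership": 0.2}

index = sum(weights[name] * exists for name, exists in indicators.items())
print(index)  # 0.8, whether entry barriers fill one page or a thousand
```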

Dawson and Seater go out of their way to mention at least twice that the OECD dataset ignores environmental and occupational health and safety regulations.  It’s a good point – in the US, at least, environmental regulations from the EPA alone accounted for about 15% of all restrictions published in federal regulations in 2010, and that percentage has grown consistently for the past decade, as can be seen in the graph below (created using data from RegData).  Occupational health and safety regulations take up a significant portion of the regulatory code as well.

[Figure: EPA regulations as a percentage of all federal regulatory restrictions, created using RegData]

In contrast, one could measure all federal regulations, not just a few select types.  But then the process requires some use of the actual legal texts containing regulations.  There have been a few attempts to create all-inclusive time series measures of regulation based on the voluminous legal documents detailing regulatory activity at the federal level.  For the most part, studies have relied on the Federal Register, the government’s daily journal of newly proposed and final regulations.  For example, many scholars have counted pages in the Federal Register to test for the existence of the midnight regulations phenomenon—the observation that the administrations of outgoing presidents seem to produce abnormally large numbers of regulations during the lame-duck period.

There are problems with using the Federal Register to measure regulation (I say this despite having used it in some of my own papers).  First and foremost, the Federal Register includes deregulatory activity.  When a regulatory agency eliminates words, paragraphs, or even entire chapters from the CFR, the agency has to notify the public of the changes.  The agency does this by printing a notice of proposed rulemaking in the Federal Register that explains the agency’s intentions.  Then, once the public has had adequate time to comment on the agency’s proposed actions, the agency has to publish a final rule in the Federal Register—another set of pages that detail the final actions the agency is taking.  Obviously, if one is counting pages published in the Federal Register and using that as a proxy for the growth of regulation, deregulatory activity that produces positive page counts will lead to incorrect measurements.

Furthermore, pages published in the Federal Register may be a biased measure because the number of pages associated with individual rulemakings has increased over time as acts of Congress or executive orders have required more analyses.  In his Ten-Thousand Commandments series, Wayne Crews mitigates this drawback to some degree by focusing only on pages devoted to final rules.  The series keeps track of both the annual number of final regulations published in the Federal Register and the annual number of Federal Register pages devoted to final regulations.

Dawson and Seater instead rely on the Code of Federal Regulations, another set of legal documents related to federal regulations.  Actually, the CFR would be better described as the books that contain the actual text of regulations in effect each year.  When a regulatory agency creates new regulations, or alters existing regulations, those changes are reflected in the next publication of the CFR.  Dawson and Seater collected data on the total number of pages in the CFR in each year from 1949 to 2005.  I’ve graphed their data below.

[Figure: Dawson and Seater’s annual CFR page counts, 1949–2005]

*Dawson and Seater exclude Titles 1 – 3 and 32 from their total page counts because they argue that those Titles do not contain regulation, so comparing this graph with page count graphs produced elsewhere will show some discrepancies.

Perhaps the most significant advantage of the CFR over counting pages in the Federal Register is that it allows for decreases in regulations.  Using the CFR also arguably has several advantages over indexes like the OECD product market regulation index and the World Bank Doing Business index.  First, using the CFR captures all federal regulation, not just a select few types.  Dawson and Seater point out:

Incomplete coverage leads to two problems: (1) omitted variables bias, and, in any time series study, (2) divergence between the time series behavior of subsets of regulation on the one hand and of total regulation on the other.

In other words, ignoring potentially important variables (such as environmental regulations) can cause estimates of the effect of regulation to be wrong.

Second, the number of pages in the CFR may reflect the complexity of regulations to some degree.  In contrast, the index metrics of regulation typically only consider whether a regulation exists—a binary variable equal to 1 or 0, with nothing in between.  Third, the CFR offers a long time series – almost three times as long as the OECD index, although it is shorter than the Federal Register time series.

Of course, there are downsides to using the CFR.  For one, it is possible that legal drafting standards and language norms have changed over the 57 years, which could introduce bias to their measure (Dawson and Seater brush this concern aside, but not convincingly in my opinion).  Second, the CFR is limited to only one country—the United States—whereas the OECD and World Bank products cover many countries.  Data on multiple countries (or multiple industries within a country, like RegData offers) allow comparisons of real-world outcomes and how they respond to different regulatory treatments.  In contrast, Dawson and Seater are limited to constructing a “counterfactual” economy – one that their model predicts would exist had regulations stayed at the level they were in 1949.  In my next post, I’ll go into more detail on the model they use to do this.

The Economics of Regulation Part 1: A New Study Shows That Regulatory Accumulation Hurts the Economy

In June, John Dawson and John Seater, economists at Appalachian State University and North Carolina State University, respectively, published a potentially important study (ungated version here) in the Journal of Economic Growth that shows the effects of regulatory accumulation on the US economy.  Several others have already summarized the study’s results (two examples here and here) with respect to how the accumulation of federal regulation caused substantial reductions in the growth rate of GDP.  So, while the results are important, I won’t dwell on them here.  The short summary is this: using a new measure of federal regulation in an endogenous growth model, Dawson and Seater find that, on average, federal regulation reduced economic growth in the US by about 2% annually in the period from 1949 to 2005.  Considering that economic growth is an exponential process, an average reduction of 2% over 57 years makes a big difference.  A relevant excerpt tells just how big of a difference:

 We can convert the reduction in output caused by regulation to more tangible terms by computing the dollar value of the loss involved.  […] In 2011, nominal GDP was $15.1 trillion.  Had regulation remained at its 1949 level, current GDP would have been about $53.9 trillion, an increase of $38.8 trillion.  With about 140 million households and 300 million people, an annual loss of $38.8 trillion converts to about $277,100 per household and $129,300 per person.
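The per-household and per-person figures follow directly from the aggregate numbers; here is a back-of-the-envelope check of the quoted arithmetic, using only the figures in the excerpt.

```python
# Back-of-the-envelope check of the figures quoted above.
gdp_2011 = 15.1e12            # actual nominal GDP in 2011
gdp_counterfactual = 53.9e12  # estimated GDP with regulation at its 1949 level
loss = gdp_counterfactual - gdp_2011

households, people = 140e6, 300e6
print(f"Annual loss: ${loss / 1e12:.1f} trillion")  # $38.8 trillion
print(f"Per household: ${loss / households:,.0f}")  # about $277,100
print(f"Per person: ${loss / people:,.0f}")         # about $129,300
```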

These are large numbers, but in fact they aren’t much different from what a bevy of previous studies have found about the effects of regulation.  The key differences between this study and most previous studies are the method of measuring regulation and the model used to estimate regulation’s effect on economic growth and total factor productivity.

In a multi-part series, I will focus on the tools that allowed Dawson and Seater to produce this study: 1. A new time series measure of total federal regulation, and 2. Models of endogenous growth.  My next post will go into detail on Dawson and Seater’s new time series measure of regulation and compare it to other metrics that have been used.  Then I’ll follow up with a post discussing endogenous growth models, which consider that policy decisions can affect the accumulation of knowledge and the rates of innovation and entrepreneurship in an economy, and through these mechanisms affect economic growth.

Why should you care about something as obscure as a “time series measure of regulation” and “endogenous growth theory?”  Regulations—a form of law that lawyers call administrative law—create a hidden tax.  When the Department of Transportation creates new regulations that mandate that cars become more fuel efficient, all cars become more expensive, in the same way that a tax on cars would make them more expensive.  Even worse, the accumulation of regulations over time stifles innovation, hinders entrepreneurship, and creates unintended consequences by altering the prices of everyday purchases and activities.  For an example of hindered entrepreneurship, occupational licensing requirements in 17 states make it illegal for someone to braid hair for a living without first being licensed, a process which, in Pennsylvania at least, requires 300 hours of training, at least a 10th grade education, and passing both a practical and a theory exam.  Oh, and after you’ve paid for all that training, you still have to pay for a license.

And for an example of unintended consequences: Transportation Security Administration procedures in airports obviously slow down travel.  So now you have to leave work or home 30 minutes or even an hour earlier than you would have otherwise, and you lose the chance to spend another hour with your family or finishing some important project.  Furthermore, because of increased travel times when flying, some people choose to drive instead of fly.  Because driving involves a higher risk of accident and death than does flying, this shift, caused by regulation, of travelers from plane to car actually causes people to die (statistically speaking), as this paper showed.

Economists have realized the accumulation of regulation must be causing serious problems in the economy.  As a result, they have been trying to measure regulation in different ways, in order to include regulation in their models and better study its impact.  One famous measure of regulation, which I’ll discuss in more detail in my next post, is the OECD’s index of Product Market Regulation.  That rather sanitized term, “product market regulation,” actually consists of several components that are directly relevant to a would-be entrepreneur (such as the opacity of a country’s licenses and permits system and administrative burdens for sole proprietorships) and to a consumer (such as price controls, which can lead to shortages like we often see after hurricanes where anti-price gouging laws exist, and barriers to foreign direct investment, which could prevent multinational firms like Toyota from building a new facility and creating new jobs in a country).  But as you’ll see in the next post, that OECD measure (and many other measures) of regulation miss a lot of regulations that also directly affect every individual and business.  In any science, correct measurement is a necessary first step to empirical hypothesis testing.

Dawson and Seater have contributed a new measure of regulation that improves upon previously existing ones in many ways, although it also has its drawbacks.  And because their new measure of regulation offers many more years of observations than most other measures, it can be used in an endogenous growth model to estimate how regulation has affected the growth of the US economy.  Again, in endogenous growth models, policy decisions (such as how much regulation to create) affect economic growth if they affect the rates of accumulation of knowledge, innovation, and entrepreneurship. It’s by using their measure in an endogenous growth model that Dawson and Seater were able to estimate that individuals in the US would have been $129,300 richer if regulations had stayed at their 1949 level.  I’ll explain a bit more about endogenous growth theory in a second follow-up post.  But first things first—my next post will go into detail on measures of regulation and Dawson and Seater’s innovation.

A Hidden Opportunity Cost of Regulatory Compliance: Management Time

At the federal level, regulators in many agencies attempt to estimate the impacts that new regulations would have on businesses, even though the quality of these analyses is typically poor.  But these impact analyses rarely consider a conceivably major cost: the opportunity cost of the business owners and managers who have to spend their time dealing with regulations.

One of the simplest costs that regulators consider, for example, is paperwork: how much more paperwork will be imposed on businesses as a result of a new regulation?  Indeed, the paperwork burden is sometimes the primary cost considered in these analyses, as was the case in this rule proposed by the Department of Labor towards the end of 2011.  This proposal addresses requirements for affirmative action and non-discrimination that apply to federal contractors, proposing, among other things, to “strengthen the affirmative action provisions, detailing specific actions a contractor must take to satisfy its obligations. [The proposal] would also increase the contractor’s data collection obligations, and establish a utilization goal for individuals with disabilities to assist in measuring the effectiveness of the contractor’s affirmative action efforts.”

Just consider one part of the summary of that proposed rule: “increase the contractor’s data collection obligations.” If you read on in the Federal Register notice (search for the term “12866” to get to the analysis section), you’ll find that the Dept. of Labor assumed that contractors have people in place to perform the increased data collection obligations. So for the analysis, the Dept. of Labor simply added some paperwork time to each contractor, and calculated how much the extra employee time would cost each contractor.

But here’s the catch.  What if the contractor has to hire a new employee to handle this?  The costs of searching for a new employee can be substantial.  A recent post in the St. Louis Business Journal featured Steve Baden, president of Royal Banks of Missouri, discussing the difficulties in finding and hiring a compliance officer – an employee whose job it is to oversee regulatory compliance, which certainly includes vast amounts of paperwork.  Baden said that the process of hiring a compliance officer took him “a year of interviews to find someone qualified and cost [him] six figures.”

Management time is expensive.  Business owners are the entrepreneurs who help create economic growth through innovation.  When they have to spend their time searching for compliance officers or filling out paperwork, they are not spending it finding new ways to improve their businesses or starting new ones.  This is a real cost of regulation, and one of the reasons that the accumulation of regulations can stifle an economy.

Furthermore, any employee’s time—whether it’s a new employee or one who already worked for the contractor—is also valuable time.  When Steve Baden has to hire a full-time compliance officer in order to navigate the paperwork maze created by regulations, that individual hired to ensure compliance will not do some other productive activity with her time.  How valuable is it to society to have highly skilled individuals spending their time collecting data or filling out paperwork to show compliance with regulations?  Time used on regulatory compliance is necessarily not time used elsewhere. Without the million-plus restrictions created by federal regulations, countless compliance officers would be gainfully employed in roles that create better value in the economy.

One of my mentors once stated that he could create jobs by hiring people to trim his lawn with toenail clippers (warning: links to a Penn & Teller episode, and they do not refrain from using vulgar language).  But that’s probably not the most productive use of their time.  The fact that an action creates jobs does not mean the skills and efforts of individuals are used in the best possible way, nor does it mean that there is necessarily a net gain in jobs.  The creation of a regulatory compliance job may be offset by elimination of one or more jobs elsewhere because of increased operating costs.