Tag Archives: research

How are your state’s finances?

Just how well do your state’s finances compare to those of other states?

I sat down with our state policy group last week to discuss recent Mercatus research that ranks states’ fiscal condition based on their 2012 Comprehensive Annual Financial Report (CAFR). The findings contained in State Fiscal Condition: Ranking the 50 States by Dr. Sarah Arnett are aimed at helping states apply basic financial ratios to get a general picture of fiscal health. Dr. Arnett’s paper uses four solvency criteria developed in the public finance literature – cash solvency, budgetary solvency, long-run solvency, and service-level solvency. In this podcast, I discuss how legislators and policy analysts might use the study and the limitations of ratios and rankings in understanding a state’s deeper financial picture.
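For readers curious what “basic financial ratios” look like in practice, here is a minimal sketch in Python of two common cash-solvency ratios computed from hypothetical balance-sheet figures; the numbers and function names are my own illustration, not Dr. Arnett’s calculations.

    # Illustrative only: hypothetical balance-sheet figures, not data from the study.
    # Cash solvency is often proxied with simple ratios like these.

    def cash_ratio(cash, investments, current_liabilities):
        """(Cash + short-term investments) / current liabilities."""
        return (cash + investments) / current_liabilities

    def quick_ratio(cash, investments, receivables, current_liabilities):
        """(Cash + short-term investments + receivables) / current liabilities."""
        return (cash + investments + receivables) / current_liabilities

    # Hypothetical state figures, in millions of dollars
    print(cash_ratio(cash=1200, investments=800, current_liabilities=1000))    # 2.0
    print(quick_ratio(cash=1200, investments=800, receivables=500,
                      current_liabilities=1000))                               # 2.5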

Come Study at George Mason University

It is hard to believe but it’s been about 15 years since I attended my first Institute for Humane Studies weekend seminar at Claremont McKenna College. I can still remember the challenging conversations and stimulating lectures, especially those by Jeffrey Rogers Hummel and Lydia Ortega, both of San Jose State.

The most exciting idea I walked away from that weekend with was this: it’s possible to make a career out of advancing liberty.

From I.H.S. I learned about George Mason University. After doing quite a bit of research and attending a Public Choice Outreach Conference at GMU, I became convinced that the best thing I could do to set myself on the path of a career exploring the ideas of liberty was to get a graduate degree in economics from GMU. I eventually got my doctorate at GMU and now I have the best job in the world at the Mercatus Center.

If you, too, have ever thought about such a career, now is the time to act on it. Here are a few opportunities:

The PhD Fellowship is a three-year, competitive, full-time fellowship program for students who are pursuing a doctoral degree in economics at George Mason University. It includes full tuition support, a stipend, and experience as a research assistant working closely with Mercatus-affiliated Mason faculty. It is a total award of up to $120,000 over three years. The application deadline is February 1, 2014.

The MA Fellowship is a two-year, competitive, full-time fellowship program for students pursuing a master’s degree in economics at George Mason University and interested in gaining advanced training in applied economics in preparation for a career in public policy. It includes full tuition support, a stipend, and practical experience as a research assistant working with Mercatus scholars. It is a total award of up to $80,000 over two years. The application deadline is March 1, 2014.

The Adam Smith Fellowship is a one-year, competitive fellowship for graduate students attending PhD programs at any university, in a variety of fields, including economics, philosophy, political science, and sociology. Smith Fellows receive a stipend and attend workshops and seminars on the Austrian, Virginia, and Bloomington schools of political economy. It is a total award of up to $10,000 for the year. The application deadline is March 15, 2014.

Does the minimum wage increase unemployment? Ask Willie Lyons.

President Obama recently claimed:

[T]here’s no solid evidence that a higher minimum wage costs jobs, and research shows it raises incomes for low-wage workers and boosts short-term economic growth.

Students of economics may find this a curious claim. Many of them will have been assigned Steven Landsburg’s Price Theory and Applications where, on page 380, they will have read:

Overwhelming empirical evidence has convinced most economists that the minimum wage is a significant cause of unemployment, particularly among the unskilled.

Or perhaps they will have been assigned Hirshleifer, Glazer, and Hirshleifer’s widely read text. In this case, they will have seen on page 21 that 78.9 percent of surveyed economists either “agree generally” or “agree with provisions” with the statement that “A minimum wage increases unemployment among young and unskilled workers.”

More advanced students may have encountered this January 2013 paper by David Neumark, J.M. Ian Salas, and William Wascher, which assesses the latest research and concludes:

[T]he evidence still shows that minimum wages pose a tradeoff of higher wages for some against job losses for others, and that policymakers need to bear this tradeoff in mind when making decisions about increasing the minimum wage.

Some students may have even studied Jonathan Meer and Jeremy West’s hot-off-the-presses study, which focuses on the effect of a minimum wage on job growth. They conclude:

[T]he minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments. These effects are most pronounced for younger workers and in industries with a higher proportion of low-wage workers.

Students of history, however, will be aware of another testimonial. It comes not from an economist but from an elevator operator. Her name was Willie Lyons and in 1918, at the age of 21, she had a job working for the Congress Hall Hotel in Washington, D.C. She made $35 per month, plus two meals a day. According to the court, she reported that “the work was light and healthful, the hours short, with surroundings clean and moral, and that she was anxious to continue it for the compensation she was receiving.”

Then, on September 19, 1918, Congress passed a law establishing a District of Columbia Minimum Wage Board and setting a minimum wage for any woman or child working in the District. Though it would have been happy to retain Ms. Lyons at her agreed-upon wage, the Hotel decided that her services were not worth the higher wage and let her go.

Ms. Lyons sued the Board, claiming that the minimum wage violated her “liberty of contract” under the Due Process clauses of the 5th and 14th Amendments.* As the Supreme Court would describe it:

The wages received by this appellee were the best she was able to obtain for any work she was capable of performing, and the enforcement of the order, she alleges, deprived her of such employment and wages. She further averred that she could not secure any other position at which she could make a living, with as good physical and moral surroundings, and earn as good wages, and that she was desirous of continuing and would continue the employment, but for the order of the board.

For a time, the Supreme Court agreed with Ms. Lyons, finding that the minimum wage did, indeed, violate her right to contract.

The minimum wage was eliminated and she got her job back.

——————-

*Legal theorists might well claim that the Immunities and/or Privileges clauses of these amendments would have been more reasonable grounds, but those had long been gutted by the Supreme Court.

Competition in health care saves lives without raising costs

The effect of competition on the quality of health care remains a contested issue. Most empirical estimates rely on inference from nonexperimental data. In contrast, this paper exploits a procompetitive policy reform to provide estimates of the impact of competition on hospital outcomes. The English government introduced a policy in 2006 to promote competition between hospitals. Using this policy to implement a difference-in-differences research design, we estimate the impact of the introduction of competition on not only clinical outcomes but also productivity and expenditure. We find that the effect of competition is to save lives without raising costs.

That’s Martin Gaynor, Rodrigo Moreno-Serra, and Carol Propper writing in the latest issue of the American Economic Journal: Economic Policy.
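For readers unfamiliar with the method, here is a minimal sketch in Python of how a difference-in-differences estimate works on made-up data; the variable names, numbers, and specification are illustrative assumptions, not the authors’ actual data or model.

    # Sketch of a difference-in-differences estimate on made-up data; this is not
    # the specification or data used by Gaynor, Moreno-Serra, and Propper.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical hospital-level panel: outcomes before/after the 2006 reform for
    # hospitals facing more competition (treated = 1) versus less (treated = 0).
    df = pd.DataFrame({
        "mortality": [0.050, 0.048, 0.051, 0.046, 0.052, 0.049, 0.053, 0.044],
        "treated":   [0,     0,     1,     1,     0,     0,     1,     1],
        "post":      [0,     1,     0,     1,     0,     1,     0,     1],
    })

    # The coefficient on treated:post is the difference-in-differences estimate:
    # the change in outcomes for treated hospitals minus the change for controls.
    model = smf.ols("mortality ~ treated + post + treated:post", data=df).fit()
    print(model.params["treated:post"])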

“Regulatory Certainty” as a Justification for Regulating

A key principle of good policy making is that regulatory agencies should define the problem they are seeking to solve before finalizing a regulation. Thus, it is odd that in the economic analysis for a recent proposed rule related to greenhouse gas emissions from new power plants, the Environmental Protection Agency (EPA) cites “regulatory certainty” as a justification for regulating. It seems almost any regulation could be justified on these grounds.

The obvious justification for regulating carbon dioxide emissions would be to limit harmful effects of climate change. However, as the EPA’s own analysis states:

the EPA anticipates that the proposed Electric Generating Unit New Source Greenhouse Gas Standards will result in negligible CO2 emission changes, energy impacts, quantified benefits, costs, and economic impacts by 2022.

The reason the rule will result in no benefits or costs, according to the EPA, is that the agency anticipates:

even in the absence of this rule, existing and anticipated economic conditions will lead electricity generators to choose new generation technologies that meet the proposed standard without the need for additional controls.

So why issue a new regulation? If the EPA’s baseline assessment is correct (i.e., it is making an accurate prediction about what the world would look like in the absence of the regulation), then the regulation provides no benefits, since it causes no deviations from that baseline. If the EPA’s baseline turns out to be wrong, a “wait and see” approach likely makes more sense, especially given the inherent uncertainty in predicting future energy prices and the unintended consequences that often result from regulating.

Instead, the EPA cites “regulatory certainty” as a justification for regulating, presumably because businesses will now be able to anticipate what emission standards will be going forward, and they can now invest with confidence. But announcing there will be no new regulation for a period of time also provides certainty. Of course, any policy can always change, whether the agency decides to issue a regulation or not. That’s why having clearly stated goals and clearly understood factors that guide regulatory decisions is so important.

Additionally, there are still costs to regulating, even if the EPA has decided not to count these costs in its analysis. Just doing an economic analysis is a cost. So is using agency employees’ time to enforce a new regulation. News outlets suggest “industry-backed lawsuits are inevitable” in response to this regulation. This too is a cost. If costs exceed benefits, the rule is difficult to justify.

One might argue that, because of the 2007 Supreme Court ruling that CO2 is covered under the Clean Air Act and the EPA’s subsequent endangerment finding for greenhouse gases, regulatory uncertainty is holding back investment in new power plants. However, if this is true, then this policy uncertainty should be accounted for in the agency’s baseline. If the proposed regulation alleviates some of this uncertainty and leads to additional power plant construction and electricity generation, that change is a benefit of the regulation and should be identified in the agency’s analysis.

The EPA also states it “intends this rule to send a clear signal about the current and future status of carbon capture and storage technology” because the agency wants to create the “incentive for supporting research, development, and investment into technology to capture and store CO2.”

However, by identifying the EPA’s preferred method of reducing CO2 emissions from new power plants, the agency may discourage businesses from investing in other promising new technologies. Additionally, by setting different standards for new and existing power plants, the EPA is clearly favoring one set of companies at the expense of another. This is a form of cronyism.

The EPA needs to get back to policymaking 101. That means identifying a problem before regulating, and tailoring regulations to address the specific problem at hand.

The Use of Science in Public Policy

For the budding social scientists out there who hope that their research will someday positively affect public policy, my colleague Jerry Ellig recently pointed out a 2012 publication from the National Research Council called “Using Science as Evidence in Public Policy.” (It takes a few clicks to download, but you can get it for free).

From the intro, the council’s goal was:

[T]o review the knowledge utilization and other relevant literature to assess what is known about how social science knowledge is used in policy making . . . [and] to develop a framework for further research that can improve the use of social science knowledge in policy making.

The authors conclude that, while “knowledge from all the sciences is relevant to policy choices,” it is difficult to explain exactly how that knowledge is used in the public policy sphere. They go on to develop a framework for research on how science is used. The entire report is interesting, especially if you care about using science as evidence in public policy, and doubly so if you are a Ph.D. student or recently minted Ph.D. I particularly liked the stark recognition that political actors will consider their own agendas (i.e., re-election) and values (i.e., the values most likely to help in a re-election bid) regardless of scientific evidence. That’s not a hopeless statement, though – there’s still room for science to influence policy, but, as public choice scholars have pointed out for decades, the government is run by people who will, on average, rationally act in their own self-interest. Here are a couple more lines to that point:

Holding to a sharp, a priori distinction between science and politics is nonsense if the goal is to develop an understanding of the use of science in public policy. Policy making, far from being a sphere in which science can be neatly separated from politics, is a sphere in which they necessarily come together… Our position is that the use of [scientific] evidence or adoption of that [evidence-based] policy cannot be studied without also considering politics and values.

One thing in particular stands out to anyone who has worked on the economic analysis of regulations.  The introduction to this report includes this summary of science’s role in policy:

Science has five tasks related to policy:

(1) identify problems, such as endangered species, obesity, unemployment, and vulnerability to natural disasters or terrorist acts;

(2) measure their magnitude and seriousness;

(3) review alternative policy interventions;

(4) systematically assess the likely consequences of particular policy actions—intended and unintended, desired and unwanted; and

(5) evaluate what, in fact, results from policy.

This sounds almost exactly like the process of performing an economic analysis of a regulation, at least when it’s done well (if you want to know how well agencies actually perform regulatory analysis, read this, and for how well they actually use the analysis in decision-making, read this). Executive Order 12866, issued by President Bill Clinton in 1993, instructs federal executive agencies on the role of analysis in creating regulations, including each of the following instructions. Below I’ve slightly rearranged some excerpts and slightly paraphrased other parts from Executive Order 12866, and I have added in the bold numbers to map these instructions back to the summary of science’s role quoted above. (For the admin law wonks, I’ve noted the exact section and paragraph of the Executive Order that each element is contained in.):

(1) Each agency shall identify the problem that it intends to address (including, where applicable, the failures of private markets or public institutions that warrant new agency action). [Section 1(b)(1)]

(2) Each agency shall assess the significance of that problem. [Section 1(b)(1)]

(3) Each agency shall identify and assess available alternatives to direct regulation, including providing economic incentives to encourage the desired behavior, such as user fees or marketable permits, or providing information upon which choices can be made by the public. Each agency shall identify and assess alternative forms of regulation. [Section 1(b)(3) and Section 1(b)(8)]

(4) When an agency determines that a regulation is the best available method of achieving the regulatory objective, it shall design its regulations in the most cost-effective manner to achieve the regulatory objective. In doing so, each agency shall consider incentives for innovation, consistency, predictability, the costs of enforcement and compliance (to the government, regulated entities, and the public), flexibility, distributive impacts, and equity. [Section 1(b)(5)]

(5) Each agency shall periodically review its existing significant regulations to determine whether any such regulations should be modified or eliminated so as to make the agency’s regulatory program more effective in achieving the regulatory objectives, less burdensome, or in greater alignment with the President’s priorities and the principles set forth in this Executive order. [Section 5(a)]

OMB’s Circular A-4—the instruction guide for government economists tasked with analyzing regulatory impacts—similarly directs economists to include three basic elements in their regulatory analyses (again, the bold numbers are mine to help map these elements back to the summary of science’s role):

(1 & 2) a statement of the need for the proposed action,

(3) an examination of alternative approaches, and

(4) an evaluation of the benefits and costs—quantitative and qualitative—of the proposed action and the main alternatives identified by the analysis.

The statement of the need for the proposed action is equivalent to the first (identifying problems) and second tasks (measuring their magnitude and seriousness) from the NRC report. The examination of alternative approaches and evaluation of the benefits and costs of the possible alternatives are equivalent to tasks 3 (review alternative policy interventions) and 4 (assess the likely consequences).

It’s also noteworthy that the NRC points out the importance of measuring the magnitude and seriousness of problems.  A lot of public time and money gets spent trying to fix problems that are not widespread or systemic.  There may be better ways to use those resources.  Evaluating the seriousness of problems allows a prioritization of limited resources.

Finally, I want to point out how this parallels a project here at Mercatus.  Not coincidentally, the statement of science’s role in policy reads like the grading criteria of the Mercatus Regulatory Report Card, which are:

1. Systemic Problem: How well does the analysis identify and demonstrate the existence of a market failure or other systemic problem the regulation is supposed to solve?
2. Alternatives: How well does the analysis assess the effectiveness of alternative approaches?
3. Benefits (or other Outcomes): How well does the analysis identify the benefits or other desired outcomes and demonstrate that the regulation will achieve them?
4. Costs: How well does the analysis assess costs?
5. Use of Analysis: Does the proposed rule or the RIA present evidence that the agency used the Regulatory Impact Analysis in any decisions?
6. Cognizance of Net Benefits: Did the agency maximize net benefits or explain why it chose another alternative?

The big difference is that the Report Card contains elements that emphasize measuring whether the analysis is actually used – bringing us back to the original goal of the research council – to determine “how social science knowledge is used in policy making.”

Does Anyone Know the Net Benefits of Regulation?

In early August, I was invited to testify before the Senate Judiciary subcommittee on Oversight, Federal Rights and Agency Action, which is chaired by Sen. Richard Blumenthal (D-Conn.).  The topic of the panel was the amount of time it takes to finalize a regulation.  Specifically, some were concerned that new regulations were being deliberately or needlessly held up in the regulatory process, and as a result, the realization of the benefits of those regulations was delayed (hence the dramatic title of the panel: “Justice Delayed: The Human Cost of Regulatory Paralysis.”)

In my testimony, I took the position that economic and scientific analysis of regulations is important. Careful consideration of regulatory options can help minimize the costs and unintended consequences that regulations necessarily entail. If additional time can improve regulations—meaning both improving the quality of individual regulations and moving toward the optimal quantity of regulation—then additional time should be taken. My position was buttressed by three main points:

  1. The accumulation of regulations stifles innovation and entrepreneurship and reduces efficiency. This slows economic growth, and over time, the decreased economic growth attributable to regulatory accumulation has significantly reduced real household income.
  2. The unintended consequences of regulations are particularly detrimental to low-income households— resulting in costs to precisely the same group that has the fewest resources to deal with them.
  3. The quality of regulations matters. The incentive structure of regulatory agencies, coupled with occasional pressure from external forces such as Congress, can cause regulations to favor particular stakeholder groups or to create regulations for which the costs exceed the benefits. In some cases, because of statutory deadlines and other pressures, agencies may rush regulations through the crafting process. That can lead to poor execution: rushed regulations are, on average, more poorly considered, which can lead to greater costs and unintended consequences. Even worse, the regulation’s intended benefits may not be achieved despite incurring very real human costs.

At the same time, I told the members of the subcommittee that if “political shenanigans” are the reason some rules take a long time to finalize, then they should use their bully pulpits to draw attention to such actions.  The influence of politics on regulation and the rulemaking process is an unfortunate reality, but not one that should be accepted.

I actually left that panel with some small amount of hope that, going forward, there might be room for an honest discussion about regulatory reform.  It seemed to me that no one in the room was happy with the current regulatory process – a good starting point if you want real change.  Chairman Blumenthal seemed to feel the same way, stating in his closing remarks that he saw plenty of common ground.  I sent a follow-up letter to Chairman Blumenthal stating as much. I wrote to the Chairman in August:

I share your guarded optimism that there may exist substantial agreement that the regulatory process needs to be improved. My research indicates that any changes to regulatory process should include provisions for improved analysis because better analysis can lead to better outcomes. Similarly, poor analysis can lead to rules that cost more human lives than they needed to in order to accomplish their goals.

A recent op-ed penned by Sen. Blumenthal in The Hill shows me that at least one person is still thinking about the topic of that hearing.  The final sentence of his op-ed said that “we should work together to make rule-making better, more responsive and even more effective at protecting Americans.” I agree. But I disagree with the idea that we know that, as the Senator wrote, “by any metric, these rules are worth [their cost].”  The op-ed goes on to say:

The latest report from the Office of Information and Regulatory Affairs shows federal regulations promulgated between 2002 and 2012 produced up to $800 billion in benefits, with just $84 billion in costs.

Sen. Blumenthal’s op-ed would make sense if his facts were correct.  However, the report to Congress from OIRA that his op-ed referred to actually estimates the costs and benefits of only a handful of regulations.  It’s simple enough to open that report and quote the very first bullet point in the executive summary, which reads:

The estimated annual benefits of major Federal regulations reviewed by OMB from October 1, 2002, to September 30, 2012, for which agencies estimated and monetized both benefits and costs, are in the aggregate between $193 billion and $800 billion, while the estimated annual costs are in the aggregate between $57 billion and $84 billion. These ranges are reported in 2001 dollars and reflect uncertainty in the benefits and costs of each rule at the time that it was evaluated.

But you have to actually dig a little farther into the report to realize that this characterization of the costs and benefits of regulations represents only the view of agency economists (think about their incentive for a moment – they work for the regulatory agencies) and for only 115 regulations out of 37,786 created from October 1, 2002, to September 30, 2012.  As the report that Sen. Blumenthal refers to actually says:

The estimates are therefore not a complete accounting of all the benefits and costs of all regulations issued by the Federal Government during this period.

Furthermore, as an economist who used to work in a regulatory agency and produce these economic analyses of regulations, I find it heartening that the OMB report emphasizes that the estimates it relies on to produce the report are “neither precise nor complete.”  Here’s another point of emphasis from the OMB report:

Individual regulatory impact analyses vary in rigor and may rely on different assumptions, including baseline scenarios, methods, and data. To take just one example, all agencies draw on the existing economic literature for valuation of reductions in mortality and morbidity, but the technical literature has not converged on uniform figures, and consistent with the lack of uniformity in that literature, such valuations vary somewhat (though not dramatically) across agencies. Summing across estimates involves the aggregation of analytical results that are not strictly comparable.

I don’t doubt Sen. Blumenthal’s sincerity in believing that the net benefits of regulation are reflected in the first bullet point of the OMB Report to Congress. But this shows one of the problems facing regulatory reform today: people on both sides of the debate continue to believe that they know the facts, but in reality we know a lot less about the net effects of regulation than we often pretend to know. Only recently have economists even begun to understand the drag that regulatory accumulation has on economic growth, and that says nothing about what benefits regulation creates in exchange.

All members of Congress need to understand the limitations of our knowledge of the total effects of regulation. We tend to rely on prospective analyses – analyses that estimate the costs and benefits of a regulation before it takes effect. What we need are more retrospective analyses, with which we can learn what has really worked and what hasn’t, and more comparative studies – studies that have control and treatment groups and test whether regulations affect those groups differently. In the meantime, the best we can do is try to ensure that the people engaged in creating new regulations follow a path of basic problem-solving: first, identify whether there is a problem that actually needs to be solved; second, examine several alternative ways of addressing that problem; then consider the costs and benefits of the various alternatives before choosing one.

Politics makes us dumb

A new paper by Dan Kahan, Ellen Peters, Erica Cantrell Dawson and Paul Slovic offers an ingenious test of an interesting hypothesis. The authors set out to test two questions: a) Are people’s abilities to interpret data impaired when the data concerns a politically polarizing issue? And b) Are more numerate people more or less susceptible to this problem?

Chris Mooney offers an excellent description of the study here. His entire post is worth reading but here is the gist:

At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their “numeracy,” that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a “new cream for treating skin rashes.” But in other cases, the study was described as involving the effectiveness of “a law banning private citizens from carrying concealed handguns in public.”

The result? Survey respondents performed wildly differently on what was in essence the same basic problem, simply depending upon whether they had been told that it involved guns or whether they had been told that it involved a new skin cream. What’s more, it turns out that highly numerate liberals and conservatives were even more – not less — susceptible to letting politics skew their reasoning than were those with less mathematical ability.

Over at Salon, Marty Kaplan offers his interpretation of the results:

I hate what this implies – not only about gun control, but also about other contentious issues, like climate change.  I’m not completely ready to give up on the idea that disputes over facts can be resolved by evidence, but you have to admit that things aren’t looking so good for a reason.  I keep hoping that one more photo of an iceberg the size of Manhattan calving off of Greenland, one more stretch of record-breaking heat and drought and fires, one more graph of how atmospheric carbon dioxide has risen in the past century, will do the trick.  But what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.

Maybe climate change denial isn’t the right term; it implies a psychological disorder.  Denial is business-as-usual for our brains.  More and better facts don’t turn low-information voters into well-equipped citizens.  It just makes them more committed to their misperceptions.  In the entire history of the universe, no Fox News viewers ever changed their minds because some new data upended their thinking.  When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win.  The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.

I suspect that if Mr. Kaplan followed his train of thinking a little bit further he’d come to really hate what this implies. Mr. Kaplan’s biggest concern seems to be that the study shows just how hard it is to convince stupid Republicans that climate change is real. The deeper and more important conclusion to draw, however, is that the study shows just how hard it is for humans to solve problems through collective political action.

To understand why, it’s helpful to turn to another Caplan—Bryan Caplan of George Mason’s economics department. In The Myth of the Rational Voter, that Caplan offers a convincing and fascinating explanation for why otherwise rational people might make less than reasonable decisions when they step into a voting booth or answer a political opinion survey. Building on insights from earlier public choice thinkers such as Anthony Downs, Geoffrey Brennan, and Loren Lomasky, Caplan makes the case that people are systematically disposed to cling to irrational beliefs when—as is the case in voting—they pay almost no price for these beliefs.

Contrast this with the way people behave in a marketplace, where they tend to pay for irrational beliefs. For example, as Brennan and Lomasky put it (p. 48), “The bigot who refuses to serve blacks in his shop foregoes the profit he might have made from their custom; the anti-Semite who will not work with Jews is constrained in his choice of jobs and may well have to knock back one she would otherwise have accepted.” In contrast, “To express such antipathy at the ballot box involves neither threat of retaliation nor any significant personal cost.”

This helps explain why baby-faced candidates often lose to mature-looking (but not necessarily acting!) candidates, or why voters consistently favor trade protectionism in spite of centuries of scientific data demonstrating its inefficiency.

Given that humans are less likely to exhibit such irrationality in their private affairs, this entire body of research constitutes a powerful case for limiting the number of human activities that are organized by the political process, and maximizing the number of activities organized through private, voluntary interaction.

——

Update: Somehow, I missed Bryan’s excellent take on the study (and what the Enlightenment was really about) here.

 

New resource: Mercatus Center’s 2013 State and Local Policy Guide

Are you interested in the practical policy applications of the kinds of research the State and Local Policy Project is producing?

For an accessible and very useful review, have a look at the inaugural edition of the Mercatus Center’s 2013 State and Local Policy Guide, produced by our Outreach Team.

The guide is divided into six sections outlining how to control spending, fix broken pension systems, control healthcare costs, streamline government, evaluate regulations, and develop competitive tax policies. Each section gives an overview of our research and makes brief, specific, and practical policy proposals.

If you have any questions, please contact Michael Leland, Associate Director of State Outreach, at mleland@mercatus.gmu.edu.

The Economics of Regulation Part 3: How to Estimate the Effect of Regulatory Accumulation on the Economy? Exploring Endogenous Growth and Other Models

This post is the third part in a three part series spurred by a recent study by economists John Dawson and John Seater that estimates that the accumulation of federal regulation has slowed economic growth in the US by about 2% annually.  The first part discussed generally how Dawson and Seater’s study and other investigations into the consequences of regulation are important because they highlight the cumulative drag of our regulatory system. The second part went into detail on some of the ways that economists measure regulation, highlighting the strengths and weaknesses of each.  This post – the final one in the series – looks at how those measures of regulation are used to estimate the consequences of regulatory policy.  As always, economists do it with models.  In the case of Dawson and Seater, they appeal to a well-established family of endogenous growth models built upon the foundational principle of creative destruction, in the tradition of Joseph Schumpeter.

So, what is an endogenous growth model?

First, a brief discussion of models:  In a social or hard science, the ideal model is one that is useful (applicable to the real world using observable inputs to predict outcomes of interest), testable (predictions can be tested with observed outcomes), flexible (able to adapt to a wide variety of input data), and tractable (not too cumbersome to work with).  Suppose a map predicts that following a certain route will lead to a certain location.  When you follow that route in the real world, if you do not actually end up at the predicted location, you will probably stop using that map.  Same thing with models: if a model does a good job at predicting real world outcomes, then it sticks around until someone invents one that does an even better job.  If it doesn’t predict things well, then it usually gets abandoned quickly.

Economists have been obsessed with modeling the growth of national economies at least since Nobel prize winner Simon Kuznets began exploring how to measure GDP in the 1930s.  Growth models generally refer to models that try to represent how the scale of an economy, using metrics such as GDP, grows over time.  For a long time, economists relied on neoclassical growth models, which primarily use capital accumulation, population growth, technology, and productivity as the main explanatory factors in predicting the economic growth of a country. One of the first and most famous of such economic growth models is the Solow model, which has a one-to-one (simple) mapping from increasing levels of the accumulated stock of capital to increasing levels of GDP.  In the Solow model, GDP does not increase at the same rate as capital accumulation due to the diminishing marginal returns to capital.  Even though the Solow model was a breakthrough in describing the growth of GDP from capital stock accumulation, most factors in this growth process (and, generally speaking, in the growth processes of other models in the neoclassical family of growth models) are generated by economic decisions that are outside of the model. As a result, these factors are dubbed exogenous, as opposed to endogenous factors which are generated inside of the model as a result of the economic decisions made by the actors being modeled.

Much of the research into growth modeling in the decades following Solow’s breakthrough has been dedicated to trying to “endogenize” those exogenous forces (i.e., move them inside the model). For instance, a major accomplishment was endogenizing the savings rate – how much of household income was saved and invested in expanding firms’ capital stocks. Even with this endogenous savings rate, as well as exogenous growth in the population providing labor for production, the accumulating capital stocks in these neoclassical growth models could not explain all of the growth in GDP. The difference, called the Solow Residual, was interpreted as the growth in productivity due to technological development and was like manna from heaven for the actors in the economy – exogenously growing over time regardless of the decisions made by the actors in the model.
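To make the mechanics concrete, here is a minimal sketch in Python of the textbook Cobb-Douglas version of this idea; the functional form and the 0.3 capital share are standard classroom illustrations, not the specific model or calibration used in the research discussed here.

    # A textbook Cobb-Douglas illustration, not the Dawson-Seater model itself.
    # Output Y = A * K^alpha * L^(1 - alpha); any output growth not explained by
    # capital (K) and labor (L) is attributed to productivity (A), the Solow residual.
    ALPHA = 0.3  # capital's share of income, a standard classroom calibration

    def output(A, K, L, alpha=ALPHA):
        return A * K**alpha * L**(1 - alpha)

    def solow_residual(Y, K, L, alpha=ALPHA):
        """Back out productivity A from observed output, capital, and labor."""
        return Y / (K**alpha * L**(1 - alpha))

    # Diminishing marginal returns to capital: doubling K far less than doubles Y.
    print(output(A=1.0, K=100, L=100))   # 100.0
    print(output(A=1.0, K=200, L=100))   # about 123, not 200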

But it should be fairly obvious that decisions we make today can affect our future productivity through technological development, and not just through the accumulation of capital stocks or population growth. Technological development is not free. It is the result of someone’s decision to invest in developing technologies. Because technological development is the endogenous result of an economic decision, it can be affected by any factors that distort the incentives involved in such investment decisions (e.g., taxes and regulations). 

This is the primary improvement of endogenous growth theory over neoclassical growth models.  Endogenous growth models take into account the idea that innovative firms invest in both capital and technology, which has the aggregate effect of moving out the entire production possibilities curve.  Further, policies such as increasing regulatory restrictions or changing tax rates will affect the incentives and abilities of people in the economy to innovate and produce.  The Dawson and Seater study relies on a model originally developed by Pietro Peretto to examine the effects of taxes on economic growth.  Dawson and Seater adapt the model to include regulation as another endogenous variable, although they do not formally model the exact mechanism by which regulation affects investment choices in the same way as taxes.  Nonetheless, it’s perfectly feasible that regulation does affect investment, and, to a degree, it is simply an empirical question of how much.

So, now that you at least know that Dawson and Seater selected an accepted and feasible model—a model that, like a good map, makes reliable predictions about real-world outcomes—you’re surely asking how that model provided empirical evidence of regulation’s effect on economic growth. The answer depends on what empirical means. Consider a much better-established model: gravity. A simple model of gravity states that an object in a vacuum near the Earth’s surface will accelerate towards the Earth at 9.81 meters per second squared. On other planets, that number may be higher or lower, depending on the planet’s mass and the object’s distance from the center of the planet. In this analogy, consider taxes the equivalent of mass – we know from previous endogenous growth models that taxes have a fairly well-known effect on the economy, just as we know that mass has a known effect on the rate of acceleration from gravitational forces. Dawson and Seater have effectively said that regulations must have a similar effect on the economy as taxes. Maybe the coefficient isn’t 9.81, but the generalized model will allow them to estimate what that coefficient is – so long as they can measure the “mass” equivalent of regulation and control for “distance.” They had to rely on the model, in fact, to produce the counterfactual, or, to use a term from science experiments, a control group. If you know that mass affects acceleration at some given constant, then you can figure out what acceleration is for a different level of mass without actually observing it. Similarly, if you know that regulations affect economic growth in some established pattern, then you can deduce what economic growth would be without regulations. Dawson and Seater appealed to an endogenous growth model (courtesy of Peretto) to simulate a counterfactual economy that maintained regulation levels seen in the year 1949. By the year 2005, that counterfactual economy had become considerably larger than the actual economy – the one in which we’ve seen regulation increase to include over 1,000,000 restrictions.
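To get a feel for why a roughly two-percentage-point difference in annual growth compounds into such a large gap between 1949 and 2005, here is a back-of-the-envelope calculation in Python; the growth rates are illustrative placeholders, not figures taken from Dawson and Seater’s simulation.

    # Back-of-the-envelope compounding, not the paper's simulation: if regulation
    # shaves roughly two percentage points off annual growth, how much larger is
    # the counterfactual economy after the 56 years from 1949 to 2005?
    years = 2005 - 1949
    growth_with_drag = 0.02     # illustrative annual growth rate with regulatory drag
    growth_without_drag = 0.04  # illustrative rate without the roughly 2-point drag

    actual = (1 + growth_with_drag) ** years
    counterfactual = (1 + growth_without_drag) ** years
    print(counterfactual / actual)   # roughly 3x larger, purely from compounding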