Tag Archives: Mercatus Center

Energy Efficiency as Foreign Aid?

A recent suite of energy efficiency regulations issued by the Department of Energy (DOE) has been criticized because of the DOE's claim that consumers and businesses behave irrationally when purchasing appliances and other energy-using devices. The Department believes it is bestowing benefits on society by "correcting" these faulty decisions. Mercatus Center scholars have written about this extensively here, here, and here.

However, even if we set aside the Department's claims of consumer and business "irrationality," a separate rationale for these regulations is also very problematic. The vast majority of the environmental benefits of these rules stem from reductions in CO2 emissions at power plants. Yet in a 2010 report, the US government estimated that only 7 to 23 percent of these benefits will be captured by Americans. The rest will go to people in other countries.

Here's a recent example. In August, the DOE proposed a rule setting energy efficiency standards for metal halide lamp fixtures. In the agency's analysis, it estimated total benefits from CO2 emission reductions at $1,532 million. Using the more optimistic estimate of the share of CO2-related benefits going to US citizens (23 percent), Americans should capture about $450 million in environmental benefits from the rule (once we include benefits from reductions in NOx emissions as well). At the same time, the DOE estimates the rule will cost $1,294 million, much of which will be paid by American consumers and businesses. How can the DOE, which is tasked with serving the American public, support such a policy?
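For a rough sense of the arithmetic, here is a back-of-the-envelope sketch using the figures cited above. The 23 percent share is the optimistic end of the government's own range, and treating the gap between the domestic CO2 figure and the roughly $450 million total as a NOx adjustment is my assumption, not a DOE number.

```python
# Back-of-the-envelope sketch using the figures cited above (millions of dollars).
co2_benefits_global = 1532      # DOE estimate of total CO2-related benefits
domestic_share = 0.23           # optimistic end of the 7-23% captured by Americans

domestic_co2_benefits = co2_benefits_global * domestic_share   # roughly 352

# The ~$450 million figure cited above also counts domestic NOx benefits;
# the difference is treated here as that NOx adjustment, not a DOE number.
domestic_benefits_cited = 450
estimated_costs = 1294          # DOE estimate of the rule's total costs

print(f"Domestic CO2 benefits:            ~${domestic_co2_benefits:,.0f} million")
print(f"Domestic benefits including NOx:  ~${domestic_benefits_cited:,} million")
print(f"Estimated costs:                   ${estimated_costs:,} million")
print(f"Implied net benefit to Americans: ~${domestic_benefits_cited - estimated_costs:,} million")
```

Under these assumptions, the rule's domestic benefits fall well short of its costs, which is the point of the comparison above.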

One might argue America is imposing costs on the rest of the world with its carbon emissions, and therefore should pay a type of tax to internalize this external cost we impose on others. However, the rest of the world is also imposing costs on us. In fact, US emissions are actually in decline, while global emissions are on the rise.

Even if we assume it is a sensible policy for Americans to compensate other countries for our carbon emissions, is paying more for products like household appliances the best way to accomplish this goal? Given that no amount of carbon dioxide emission reductions in the US will do much of anything to reduce anticipated global warming, wouldn't the rest of the world be better off with resources to adapt to climate change, instead of (at best) the warm feeling they might get from knowing Americans are buying more expensive microwave ovens? A more efficient policy would be a cash transfer to other countries, or a US fund dedicated to helping other countries adapt to climate change.

Energy efficiency regulations from the DOE are already difficult enough to justify. Knowing they are really just a roundabout form of foreign aid makes these rules look even less sensible.

Does Anyone Know the Net Benefits of Regulation?

In early August, I was invited to testify before the Senate Judiciary subcommittee on Oversight, Federal Rights and Agency Action, which is chaired by Sen. Richard Blumenthal (D-Conn.).  The topic of the panel was the amount of time it takes to finalize a regulation.  Specifically, some were concerned that new regulations were being deliberately or needlessly held up in the regulatory process, and as a result, the realization of the benefits of those regulations was delayed (hence the dramatic title of the panel: “Justice Delayed: The Human Cost of Regulatory Paralysis.”)

In my testimony, I took the position that economic and scientific analysis of regulations is important. Careful consideration of regulatory options can help minimize the costs and unintended consequences that regulations necessarily incur. If additional time can improve regulations—meaning both higher-quality individual regulations and a quantity of regulation closer to the optimum—then additional time should be taken. Three main points support this position:

  1. The accumulation of regulations stifles innovation and entrepreneurship and reduces efficiency. This slows economic growth, and over time, the decreased economic growth attributable to regulatory accumulation has significantly reduced real household income.
  2. The unintended consequences of regulations are particularly detrimental to low-income households—resulting in costs to precisely the same group that has the fewest resources to deal with them.
  3. The quality of regulations matters. The incentive structure of regulatory agencies, coupled with occasional pressure from external forces such as Congress, can lead to regulations that favor particular stakeholder groups or whose costs exceed their benefits. In some cases, because of statutory deadlines and other pressures, agencies may rush regulations through the crafting process. That can lead to poor execution: rushed regulations are, on average, more poorly considered, which invites greater costs and unintended consequences. Even worse, the regulation's intended benefits may not be achieved despite incurring very real human costs.

At the same time, I told the members of the subcommittee that if “political shenanigans” are the reason some rules take a long time to finalize, then they should use their bully pulpits to draw attention to such actions.  The influence of politics on regulation and the rulemaking process is an unfortunate reality, but not one that should be accepted.

I actually left that panel with some small amount of hope that, going forward, there might be room for an honest discussion about regulatory reform.  It seemed to me that no one in the room was happy with the current regulatory process – a good starting point if you want real change.  Chairman Blumenthal seemed to feel the same way, stating in his closing remarks that he saw plenty of common ground.  I sent a follow-up letter to Chairman Blumenthal stating as much. I wrote to the Chairman in August:

I share your guarded optimism that there may exist substantial agreement that the regulatory process needs to be improved. My research indicates that any changes to regulatory process should include provisions for improved analysis because better analysis can lead to better outcomes. Similarly, poor analysis can lead to rules that cost more human lives than they needed to in order to accomplish their goals.

A recent op-ed penned by Sen. Blumenthal in The Hill shows me that at least one person is still thinking about the topic of that hearing.  The final sentence of his op-ed said that “we should work together to make rule-making better, more responsive and even more effective at protecting Americans.” I agree. But I disagree with the idea that we know that, as the Senator wrote, “by any metric, these rules are worth [their cost].”  The op-ed goes on to say:

The latest report from the Office of Information and Regulatory Affairs shows federal regulations promulgated between 2002 and 2012 produced up to $800 billion in benefits, with just $84 billion in costs.

Sen. Blumenthal’s op-ed would make sense if his facts were correct.  However, the report to Congress from OIRA that his op-ed referred to actually estimates the costs and benefits of only a handful of regulations.  It’s simple enough to open that report and quote the very first bullet point in the executive summary, which reads:

The estimated annual benefits of major Federal regulations reviewed by OMB from October 1, 2002, to September 30, 2012, for which agencies estimated and monetized both benefits and costs, are in the aggregate between $193 billion and $800 billion, while the estimated annual costs are in the aggregate between $57 billion and $84 billion. These ranges are reported in 2001 dollars and reflect uncertainty in the benefits and costs of each rule at the time that it was evaluated.

But you have to dig a little further into the report to realize that this characterization of the costs and benefits of regulations represents only the view of agency economists (think about their incentives for a moment – they work for the regulatory agencies), and it covers only 115 regulations out of the 37,786 created from October 1, 2002, to September 30, 2012. As the report that Sen. Blumenthal refers to actually says:

The estimates are therefore not a complete accounting of all the benefits and costs of all regulations issued by the Federal Government during this period.
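To put that caveat in perspective, here is a quick back-of-the-envelope calculation using the counts cited above; this is my arithmetic, not a figure from the report.

```python
# Share of rules behind the headline benefit-cost totals, using the counts above.
rules_with_monetized_estimates = 115
rules_issued = 37_786           # regulations created Oct. 1, 2002 - Sep. 30, 2012

share = rules_with_monetized_estimates / rules_issued
print(f"Rules covered by the headline totals: {share:.2%} of all rules issued")
# roughly 0.3 percent
```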

Furthermore, as an economist who used to work in a regulatory agency and produce these economic analyses of regulations, I find it heartening that the OMB report emphasizes that the estimates it relies on to produce the report are “neither precise nor complete.”  Here’s another point of emphasis from the OMB report:

Individual regulatory impact analyses vary in rigor and may rely on different assumptions, including baseline scenarios, methods, and data. To take just one example, all agencies draw on the existing economic literature for valuation of reductions in mortality and morbidity, but the technical literature has not converged on uniform figures, and consistent with the lack of uniformity in that literature, such valuations vary somewhat (though not dramatically) across agencies. Summing across estimates involves the aggregation of analytical results that are not strictly comparable.

I don't doubt Sen. Blumenthal's sincerity in believing that the net benefits of regulation are reflected in the first bullet point of the OMB Report to Congress. But this shows one of the problems facing regulatory reform today: people on both sides of the debate continue to believe that they know the facts, but in reality we know a lot less about the net effects of regulation than we often pretend to know. Only recently have economists even begun to understand the drag that regulatory accumulation has on economic growth, and that says nothing about the benefits regulations create in exchange.

All members of Congress need to understand the limitations of our knowledge of the total effects of regulation. We tend to rely on prospective analyses – analyses that state the costs and benefits of a regulation before it takes effect. What we need are more retrospective analyses, which can tell us what has really worked and what hasn't, and more comparative studies – studies with control and treatment groups that test whether regulations affect those groups differently. In the meantime, the best we can do is try to ensure that the people engaged in creating new regulations follow a path of basic problem solving: First, identify whether there is a problem that actually needs to be solved. Second, examine several alternative ways of addressing that problem. Then consider the costs and benefits of the various alternatives before choosing one.

New resource: Mercatus Center’s 2013 State and Local Policy Guide

Are you interested in the practical policy applications of the kinds of research the State and Local Policy Project is producing?

For an accessible and very useful overview, have a look at the inaugural edition of the Mercatus Center's 2013 State and Local Policy Guide, produced by our Outreach Team.

The guide is divided into six sections outlining how to control spending, fix broken pension systems, control healthcare costs, streamline government, evaluate regulations, and develop competitive tax policies. Each section gives an overview of our research and makes brief, specific, and practical policy proposals.

If you have any questions, please contact Michael Leland, Associate Director of State Outreach, at mleland@mercatus.gmu.edu.

Has the Sequester Hurt the Economy?

Several weeks ago, Steve Forbes argued that the federal government spending cuts known as the “sequester” are actually having beneficial effects on the US economy, and not slowing growth as many economists and pundits in the media have claimed. Forbes’s statement attracted critics, and many economists have expressed skepticism about the sequester too. One economist even went so far as to say, “The disjunction between textbook economics and the choices being made in Washington is larger than any I’ve seen in my lifetime.”

So have the sequester cuts hurt the economy? One possible answer comes from a new paper by Scott Sumner of Bentley University. Sumner argues that cuts to government spending don’t have serious deleterious macroeconomic effects when the Federal Reserve is targeting inflation. This is because the Fed ensures that prices stay stable under an inflation targeting regime, which keeps demand stable even in the face of government spending cuts. Similarly, when the Fed stabilizes the price level it also offsets any beneficial effects that fiscal stimulus might have, which helps explain the lackluster results from the 2009 American Recovery and Reinvestment Act (aka the “stimulus”).

Implicit in Sumner’s theory is that expansionary austerity, or the idea that the economy can grow even in the face of large government spending cuts, is indeed possible. Some of my colleagues at the Mercatus Center have described other ways in which expansionary austerity is possible.

Luckily, there are still things Congress can do to improve the economic outlook, even as spending cuts take hold. Lawmakers can enact policies that boost the performance of the real economy. By this I mean policies that increase the amount of real goods and services the economy produces, as opposed to policies that affect demand (i.e. spending).

One example is reforming the regulatory system, which discourages production of all sorts. With over 174,000 pages of federal regulations in place, there must be a few obsolete or duplicative rules that can be eliminated to relieve the burden on businesses and entrepreneurs. Congress could also reform the tax code, with its perverse incentives and countless carve outs for special interests.

Starting new government programs isn’t likely to do much to benefit growth. New projects take too long to implement, politicians waste too much money on silly boondoggles, and monetary policy will likely offset any beneficial effects anyway. If Congress wants to do something to improve growth, it should focus on creating a regulatory and tax environment that encourages investment and entrepreneurial risk taking.

Why Regulations Fail

Last week, David Fahrenthold wrote a great article in the Washington Post describing the sheer absurdity of a USDA regulation requiring a small-town magician to develop a disaster evacuation plan for his rabbit (the rabbit was an indispensable part of a trick that also involved a hat). The article provides a good example of the flaws in the federal regulatory process that can derail even the best-intentioned regulations. I list a few of these flaws below.

  1. Bad regulations often start with bad congressional statutes. The Animal Welfare Act of 1966, the statute authorizing the regulation, was meant to prevent medical labs from using lost pets for experiments. Over time, the statute expanded to include all warm-blooded animals (pet lizards apparently did not merit congressional protection) and to apply to zoos and circuses in addition to labs (pet stores, dog and cat shows, and several other venues for exhibiting animals were exempt). The statute's spotty coverage resulted from political bargaining rather than the general public interest in animal welfare. The USDA rule makes the statute's arbitrariness immediately apparent. Why would a disaster plan benefit circus animals but not the animals in pet stores or on farms? (A colleague of mine jokingly suggested eating the rabbit as part of an evacuation plan, since rabbits raised for meat are exempt from the regulation's requirements.)
  2. Regulations face little oversight. When the media reported on the regulation's absurdity, USDA Secretary Tom Vilsack ordered the regulation to be reviewed. It seems that even the agency's head was caught off guard by the actions of his own regulators. Beyond internal supervision, only a fraction of regulations face external oversight. Of the more than 2,600 regulations issued in 2012, fewer than 200 were subject to OMB review (data from GAO and OMB). Interestingly, the OMB did review the USDA rule but offered only minor revisions.
  3. Agencies often fail to examine the need for regulation. In typical Washington fashion, the agency decided to regulate in response to a crisis – Hurricane Katrina in this case. In fact, the USDA offered little more than Katrina’s example to justify the regulation. It offered little evidence that the lack of disaster evacuation plans was a widespread problem that required the federal government to step in. In this, the USDA is not alone. According to the Mercatus Center’s Regulatory Report Card, which evaluates agencies’ economic analysis, few agencies offer substantial evidence justifying the need for promulgated regulations.
  4. Agencies often fail to examine the regulation’s effectiveness. The USDA’s plan to save animals in case of a disaster was to require owners to draw up an evacuation plan. It offered little evidence that having a plan would in fact save the animals. For example, the magician’s evacuation plan called for shoving the rabbit into a plastic bag and getting out. In the USDA’s view, the magician would not have thought of doing the same had he not drawn up the evacuation plan beforehand.
  5. The public has little influence in the process. By law, agencies are required to ask the public for input on proposed regulations. Yet small businesses and individual consumers rarely have the time or resources to keep an eye on federal agencies. In general, organized interests dominate the commenting process. The article describes the magician's surprise at learning that he was required to have a license and a disaster evacuation plan for his rabbit, even though the regulation was in the works for a few years and was open for public comment for several months. Most small businesses, much like this magician, learn about regulations only after they have passed.
  6. Public comments are generally ignored. Most public comments that the USDA received argued against the rule. They pointed out that it would impose substantial costs on smaller businesses. The agency dismissed the comments with little justification. This case is not unique. Research indicates that agencies rarely make substantial changes to regulations in response to public comments.

Do “Indirect Effects” of Regulation Matter to Real People?

Congressional regulatory reformers recently caught criticism from advocacy groups for introducing legislation that would require federal regulatory agencies to analyze the “indirect effects” of proposed regulations. The only thing I’d criticize the reformers for is poor word choice.

The very term “indirect effects” suggests that they’re talking about something theoretical, inconsequential, and unimportant to the average citizen. But to economists, the indirect effects of a regulation are often the effects that touch the average citizen most directly.

Consider airport security, for example. The Department of Homeland Security (DHS) recently sought public comment on its decision to deploy Advanced Imaging Technology scanners instead of metal detectors at airports. The direct costs of this decision are the extra cost of the new machines, the electricity to run them, and the personnel to staff them – which airline passengers pay for via the taxes and fees on airline tickets. Those are pretty obvious costs, and DHS dutifully toted up these costs in its analysis of its proposed rule.

Less obvious but potentially more important are the other, indirect costs associated with airport security. Passengers who decline to walk through the new machines will receive additional pat-downs. This involves a cost in terms of time (which DHS acknowledges) and potentially diminished privacy and human dignity (which DHS does not discuss). The now-classic phrase “Don’t touch my junk” aptly summarizes one passenger’s reaction to an indirect effect of security regulation that touches passengers quite directly.

But that does not exhaust the list of significant, predictable, indirect effects associated with airport security regulation. The increased delays associated with enhanced, post-9/11 security measures prompted some travelers to substitute driving for flying on short trips. An article by Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon published in the November 2007 Journal of Law & Economics estimates that post-9/11 security measures cost the airline industry $1.1 billion in lost revenue in the fourth quarter of 2002. Driving is also riskier than flying. Blalock et al. estimated that the security measures were associated with 129 additional highway deaths in the fourth quarter of 2002.

I’m all for making air travel as safe as possible, but I’d like to see it done smartly, with a minimum of hassle and a maximum of respect for the flying public who pays the bills. A full accounting of the indirect effects of airport security might just prompt policymakers to consider whether they are pursuing regulatory goals in the most sensible way possible.

Unfortunately, airport security is not an isolated example. Data from the Mercatus Center’s Regulatory Report Card reveal that for about 40 percent of the major regulations proposed by executive branch agencies between 2008 and 2012, the agencies failed to conduct any substantial analysis of costs that stem from the proposed regulation’s effects on prices or on human behavior – two classic types of indirect effects.

This won’t do. Telling federal agencies they do not need to understand the indirect effects of regulation is telling them they should proceed in willful ignorance of the effects of their decisions on real people. The reformers have a good idea here – even if it has a misleadingly boring name.

Should Illinois be Downgraded? Credit Ratings and Mal-Investment

No one disputes that Illinois's pension systems are in seriously bad condition, with large unfunded obligations. But should this worry Illinois bondholders? New Mercatus research by Marc Joffe of Public Sector Credit Solutions finds that recent downgrades of Illinois's bonds by credit rating agencies aren't merited. He models the default risk of Illinois and Indiana based on a projection of these states' financial position. These findings are put in the context of the history of state defaults and the role credit rating agencies play in debt markets. The influence of credit rating agencies in this market is the subject of a guest blog post by Marc today at Neighborhood Effects.

Credit Ratings and Mal-Investment

by Marc Joffe

Prices play a crucial role in a market economy because they provide signals to buyers and sellers about the availability and desirability of goods. Because prices coordinate supply and demand, they enabled the market system to triumph over Communism – which lacked a price mechanism.

Interest rates are also prices. They reflect investor willingness to delay consumption and take on risk. If interest rates are manipulated, serious dislocations can occur. As both Horwitz and O’Driscoll have discussed, the Fed’s suppression of interest rates in the early 2000s contributed to the housing bubble, which eventually gave way to a crash and a serious financial crisis.

Even in the absence of Fed policy errors, interest rate mispricing is possible. For example, ahead of the financial crisis, investors assumed that subprime residential mortgage backed securities (RMBS) were less risky than they really were. As a result, subprime mortgage rates did not reflect their underlying risk and thus too many dicey borrowers received home loans. The ill effects included a wave of foreclosures and huge, unexpected losses by pension funds and other institutional investors.

The mis-pricing of subprime credit risk was not the direct result of Federal Reserve or government intervention; instead, it stemmed from investor ignorance. Since humans lack perfect foresight, some degree of investor ignorance is inevitable, but it can be minimized through reliance on expert opinion.

In many markets, buyers rely on expert opinions when making purchase decisions. For example, when choosing a car we might look at Consumer Reports. When choosing stocks, we might read investment newsletters or review reports published by securities firms – hopefully taking into account potential biases in the latter case. When choosing fixed income, most large investors rely on credit rating agencies.

The rating agencies assigned what ultimately turned out to be unjustifiably high ratings to subprime RMBS. This error and the fact that investors relied so heavily on credit rating agencies resulted in the overproduction and overconsumption of these toxic securities. Subsequent investigations revealed that the incorrect rating of these instruments resulted from some combination of suboptimal analytical techniques and conflicts of interest.

While this error occurred in a market context, the institutional structure of the relevant market was the unintended consequence of government interventions over a long period of time. Rating agencies first found their way into federal rulemaking in the wake of the Depression. With the inception of the FDIC, regulators decided that expert third-party evaluations were needed to ensure that banks were investing depositor funds wisely.

The third parties the regulators chose were the credit rating agencies. Prior to receiving this federal mandate, and for a few decades thereafter, rating agencies made their money by selling manuals to libraries and institutional investors. The manuals included not only ratings but also large volumes of facts and figures about bond issuers.

After mid-century, the business became tougher with the advent of photocopiers. Eventually, rating agencies realized (perhaps implicitly) that they could monetize their federally granted power by selling ratings to bond issuers.

Rather than revoking their regulatory mandate in the wake of this new business model, federal regulators extended the power of incumbent rating agencies – codifying their opinions into the assessments of the portfolios of non-bank financial institutions.

With the growth in fixed income markets and the inception of structured finance over the last 25 years, rating agencies became much larger and more profitable. Due to their size and due to the fact that their ratings are disseminated for free, rating agencies have been able to limit the role of alternative credit opinion providers. For example, although a few analytical firms market their insights directly to institutional investors, it is hard for these players to get much traction given the widespread availability of credit ratings at no cost.

Even with rating agencies being written out of regulations under Dodd-Frank, market structure is not likely to change quickly. Many parts of the fixed income business display substantial inertia and the sheer size of the incumbent firms will continue to make the environment challenging for new entrants.

Regulatory involvement in the market for fixed income credit analysis has undoubtedly had many unintended consequences, some of which may be hard to ascertain in the absence of unregulated markets abroad. One fairly obvious negative consequence has been the stunting of innovation in the institutional credit analysis field.

Despite the proliferation of computer technology and statistical research methods, credit rating analysis remains firmly rooted in its early 20th-century origins. Rather than estimating the probability of default or the expected loss on a credit instrument, rating agencies still provide their assessments in the form of letter grades that have imprecise definitions and can easily be misinterpreted by market participants.

Starting with the pioneering work of Beaver and Altman in the 1960s, academic models of corporate bankruptcy risk have become common, but these modeling techniques have had limited impact on rating methodology.

Worse yet, in the area of government bonds, very little academic or applied work has taken place. This is especially unfortunate because government bond ratings frame the fiscal policy debate. In the absence of credible government bond ratings, we have no reliable way of estimating the probability that any government’s revenue and expenditure policies will lead to a socially disruptive default in the future. Further, in the absence of credible research, there is great likelihood that markets inefficiently price government bond risk – sending confusing signals to policymakers and the general public.

Given these concerns, I am pleased that the Mercatus Center has provided me the opportunity to build a model for Illinois state bond credit risk (as well as a reference model for Indiana). This is an effort to apply empirical research and Monte Carlo simulation techniques to the question of how much risk Illinois bondholders actually face.
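To give a flavor of the general technique, the sketch below simulates many possible revenue paths and counts the share of runs in which debt service becomes unaffordable. It is a minimal illustration of Monte Carlo simulation, not the model built for the study, and every parameter is invented.

```python
import random

# Minimal Monte Carlo sketch of state bond default risk.
# This is NOT the model used in the study; all parameters are invented.
def simulated_default_probability(n_sims=50_000, years=30, seed=42):
    random.seed(seed)
    defaults = 0
    for _ in range(n_sims):
        revenue = 100.0          # index of annual state revenue
        debt_service = 8.0       # fixed annual debt service, same units
        for _ in range(years):
            # Assume revenue grows 3% a year on average with 5% volatility.
            revenue *= 1 + random.gauss(0.03, 0.05)
            # Assume a default is triggered if debt service exceeds 25% of revenue.
            if debt_service > 0.25 * revenue:
                defaults += 1
                break
    return defaults / n_sims

print(f"Simulated default probability: {simulated_default_probability():.2%}")
```

Under these made-up assumptions the simulated risk comes out close to zero; the real model replaces the invented growth, volatility, and default-trigger parameters with empirical estimates of the state's revenues and obligations.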

While readers may not like my conclusion – that Illinois bonds carry very little credit risk – I hope they recognize the benefits of constructing, evaluating and improving credit models for systemically important public sector entities like our largest states. Hopefully, this research will contribute to a discussion about how we can improve credit rating assessments.


Third Edition of Freedom in the 50 States

Today the Mercatus Center released the third edition of Freedom in the 50 States by Will Ruger and Jason Sorens. In this new edition, the authors score states on over 200 policy variables. Additionally, they have collected data going back to 2001 to measure how states' freedom rankings have changed over the past decade. While several organizations publish state freedom rankings, Freedom in the 50 States is the only one that measures both economic and personal freedoms.

Ruger and Sorens have implemented a new methodology for measuring freedom. Previously, the authors used a subjective weighting system in which they sought to determine how significantly policies limited the freedom of how many people. In this edition, they use a victim-cost method, assigning to each freedom-restricting variable a dollar value that measures the cost of the restriction to its potential victims. The authors' cost calculations are designed to measure the value of the states' freedom for the average resident. Since individuals measure the cost of policies differently, readers can put their own price on each freedom variable on the website to find the states that best match their policy preferences.
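In spirit, that personalization feature amounts to a weighted sum over policy variables. Here is a minimal sketch of the idea; the variables, dollar costs, and state values are invented for illustration and are not the authors' data.

```python
# Sketch of a victim-cost-style index: each restriction gets a dollar cost per
# affected resident, and a state's score is the negative sum of those costs.
# All variables, costs, and state values below are invented for illustration.
policy_costs = {                 # reader-assigned cost of each restriction, in dollars
    "occupational_licensing": 120.0,
    "rent_control": 300.0,
    "helmet_mandate": 15.0,
}

state_policies = {               # 1.0 = restriction fully in place, 0.0 = absent
    "State A": {"occupational_licensing": 1.0, "rent_control": 0.0, "helmet_mandate": 1.0},
    "State B": {"occupational_licensing": 0.5, "rent_control": 1.0, "helmet_mandate": 0.0},
}

def freedom_score(policies, costs):
    """Higher (less negative) means freer under the chosen weights."""
    return -sum(costs[name] * level for name, level in policies.items())

for state, policies in sorted(state_policies.items(),
                              key=lambda kv: freedom_score(kv[1], policy_costs),
                              reverse=True):
    print(f"{state}: {freedom_score(policies, policy_costs):.2f}")
```

Changing the dollar costs re-ranks the states, which is exactly what the website's personalization tool lets readers do.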

In addition to an overall freedom ranking, Freedom in the 50 States includes a breakdown of states' Fiscal Policy Ranking, Regulatory Ranking, and Personal Freedom Ranking. On the overall freedom ranking, North Dakota comes in first, followed by South Dakota, Tennessee, New Hampshire, and Oklahoma. At the bottom of the ranking, New York ranks worst by a significant margin, with rent control and burdensome insurance regulations dragging down its regulatory freedom score. Just ahead of New York are California at 49th, then New Jersey, Hawaii, and Rhode Island.

The authors note that residents respond to the costs of freedom-reducing policies by voting with their feet. Between 2000 and 2011, New York lost 9% of its population to out-migration. In addition to all types of freedom being associated with domestic migration, the authors find that regulatory freedom in particular is associated with states’ growth in personal income. They conclude:

Freedom is not the only determinant of personal satisfaction and fulfillment, but as our analysis of migration patterns shows, it makes a tangible difference for people’s decisions about where to live. Moreover, we fully expect people in the freer states to develop and benefit from the kinds of institutions (such as symphonies and museums) and amenities (such as better restaurants and cultural attractions) seen in some of the older cities on the coasts.

[…]

These things take time, but the same kind of dynamic freedom enjoyed in Chicago or New York in the 19th century — that led to their rise — might propel places in the middle of the country to be a bit more hip to those with urbane tastes.

Shortfalls in non-profit disaster rebuilding

This post originally appeared at Market Urbanism, a blog about free-market urban development.

After receiving years of praise for its work in post-Katrina recovery, Brad Pitt's home building organization, Make It Right, is receiving some media criticism. At the New Republic, Lydia Depillis points out that the Make It Right homes built in the Lower Ninth Ward have drawn scarce city dollars to the neighborhood with questionable results. While some residents have been able to return to the Lower Ninth Ward through non-profit and private investment, the population hasn't reached the level necessary to bring the neighborhood the commercial services it needs to be a comfortable place to live.

After Hurricane Katrina, the Mercatus Center conducted extensive field research in the Gulf Coast, interviewing people who decided to return and rebuild in the city as well as those who decided to permanently relocate. They discussed the events that unfolded immediately after the storm as well as the rebuilding process. Many of the interviews took place in the New Orleans neighborhood surrounding the Mary Queen of Vietnam Church (MQVN). This neighborhood rebounded exceptionally well after Hurricane Katrina, despite being a low-income neighborhood and experiencing some of the city's worst flooding, 5 to 12 feet deep. As Emily Chamlee-Wright and Virgil Storr found [pdf]:

Within a year of the storm, more than 3,000 residents had returned [of the neighborhood’s 4,000 residents when the storm hit]. By the summer of 2007, approximately 90% of the MQVN residents were back while the rate of return in New Orleans overall remained at only 45%. Further, within a year of the storm, 70 of the 75 Vietnamese-owned businesses in the MQVN neighborhood were up and running.

Virgil and Emily attribute some of MQVN’s rebuilding success to the club goods that neighborhood residents shared. Club goods share some characteristics with public goods in that they are non-rivalrous — one person using the pool at a swim club doesn’t impede others from doing so — but club goods are excludable, so that non-members can be banned from using them. Adam has written about club goods previously, using the example of mass transit. The turnstile acts as a method of exclusion, and one person riding the subway doesn’t prevent other passengers from doing so as well. In the diagram below, a subway would fall into the “Low-congestion Goods” category:

[Diagram: classification of goods, with the subway example falling in the "Low-congestion Goods" category]

In the case of MQVN, the neighborhood’s sense of community and shared culture provided a club good that encouraged residents to return after the storm. The church provided food and supplies to the first neighborhood residents to return after the storm. Church leadership worked with Entergy, the city’s power company, to demonstrate that the neighborhood had 500 residents ready to pay their bills with the restoration of power, making them one of the city’s first outer neighborhoods to get power back after the storm.

While resources have poured into the Lower Ninth Ward from outside groups, in the form of $400,000 homes from Make It Right and $65 million in city money for a school, police station, and recreation center, the neighborhood has not seen the success that MQVN achieved from the bottom up. This isn't to say that large non-profits don't have an important role to play in disaster recovery. Social entrepreneurs face strong incentives to work well toward their objectives because their donors hold them accountable and they are typically involved in a cause because of their passion for it. Large organizations from Wal-Mart to the American Red Cross provided key resources to New Orleans residents in the days and months after Hurricane Katrina.

The post-Katrina success of MQVN relative to many other neighborhoods in the city does demonstrate the effectiveness of voluntary cooperation at the community level and the importance of bottom-up participation for long-term neighborhood stability. While people throughout the city expressed their love for New Orleans and their desire to return in conversations with Mercatus interviewers, many faced coordination problems in their efforts to rebuild. In the case of MQVN, club goods and voluntary cooperation permitted the quick and near-complete return of residents.

The Home Mortgage Interest Deduction: A Bad Deal for Taxpayers

A new policy brief released by the Mercatus Center and co-authored by Jeremy Horpedahl and Harrison Searles analyzes one of the most popular—and therefore one of the most difficult to reform—subsidies in the tax code: the home mortgage interest deduction. This study touches on many of the points that Emily made in her op-ed on the subject last month; namely, the policy's failure to achieve its intended effects and the fact that the lion's share of the benefits go to high-income homeowners. Despite widespread enthusiasm for the home mortgage interest deduction, the authors argue that its benefits are overstated and its consequences understated.

The home mortgage interest deduction is one of the largest tax expenditures in the U.S. tax code, second only to the non-taxation of employer-provided health insurance and pension contributions. Proponents of the home mortgage interest deduction argue that this policy provides needed tax relief to the middle class and encourages the oft-invoked American dream of homeownership. These folks may be surprised to learn, as Horpedahl and Searles point out, that a mere 21.7% of taxpayers even claim this benefit. What's more, most of these benefits don't go to the middle class, but rather to households with incomes of over $200,000; the brief includes a full breakdown of the tax savings by income group.


The claim that this policy is necessary to encourage home ownership is dubious as well. The authors explain:

Empirical evidence supports the claim that the mortgage interest deduction has little effect on homeownership rates in the United States. Between 1960 and 1997, homeownership rates stayed within a narrow range of 62 to 66 percent, despite the fact that the implicit tax subsidy fluctuated dramatically. During the recent housing bubble, the homeownership rate rose to 69 percent, but it has since returned to the historical range. This rise appears to have been unrelated to the mortgage interest deduction, though it was almost certainly related to other housing policies that encouraged the bubble. More sophisticated analysis suggests that the homeownership rate would be modestly lower without the deduction, by around 0.4 percent.

Ironically, the home mortgage interest deduction likely has the perverse effect of discouraging homeownership by artificially raising home values. Economic intuition suggests, and empirical studies support, that the deduction does not provide much in the way of savings at all, since its value is simply capitalized into home prices. The artificially higher prices prevent would-be homeowners on the margins of affordability from purchasing a home within their price range. This effect, combined with the low rate of claims and the concentration of benefits among high-income earners, helps explain why the deduction has failed to boost homeownership to the degree its proponents envisioned.
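A stylized calculation shows how that capitalization logic works. The loan size, rates, and the assumption that the full stream of tax savings gets bid into the price are all illustrative assumptions, not estimates from the brief.

```python
# Stylized capitalization sketch: all numbers are illustrative assumptions.
loan = 300_000             # mortgage principal
mortgage_rate = 0.05       # annual interest rate
marginal_tax_rate = 0.25   # buyer's marginal income tax rate
discount_rate = 0.05       # rate used to discount future tax savings

# Approximate annual tax saving, treating the loan as roughly interest-only.
annual_tax_saving = loan * mortgage_rate * marginal_tax_rate     # $3,750

# If buyers bid the present value of that (perpetual) saving into the price:
price_premium = annual_tax_saving / discount_rate                # $75,000

print(f"Annual tax saving:       ${annual_tax_saving:,.0f}")
print(f"Premium bid into price:  ${price_premium:,.0f}")
```

To the extent competition among buyers pushes prices up by something like that premium, the deduction's "savings" are largely offset by a higher purchase price.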

Additionally, countries like Canada and Australia have managed to produce comparable rates of home ownership as the US without the crutch of a mortgage interest deduction.

While the home mortgage interest deduction doesn't do much to increase the number of houses, it has a knack for increasing the size of houses, as a study by Lori Taylor of the Federal Reserve Bank of Dallas pointed out. The deduction has had the unintended consequence of directing capital and labor toward high-income residential housing projects that might not have been undertaken without government intervention—and the benefits overwhelmingly go to the wealthy.

This is all before considering the regressive effects of the policy by design: low- and middle-income renters are made to subsidize the increasingly opulent residences (and sometimes the extra vacation homes!) of their more well-off peers while they struggle to make ends meet in a sometimes-inhospitable economy. This injustice, combined with the inefficacy of the tax deduction to increase homeownership in any meaningful way, causes the justifications for the mortgage interest deduction to grow scarce.

In fact, it is becoming increasingly clear that this policy, which evaded the fate of its similar counterpart—the credit card interest deduction—during the tax fight of 1986, continues as law not because of good economics but because of bad political incentives.

Horpedahl and Searles offer three proposals for scaling back the home mortgage interest deduction: policymakers could 1) eliminate the deduction entirely, 2) eliminate the deduction while simultaneously lowering marginal income tax rates to compensate for the effective tax increase, or 3) end the deduction and replace it with a tax credit that taxpayers could claim upon purchase of their first house. Horpedahl and Searles demonstrate that while this deduction is popular with the public and the real estate industry, it is simply a bad deal for most taxpayers.