Tag Archives: uncertainty

Decreasing congestion with driverless cars

Traffic is aggravating. Especially for San Francisco residents. According to the Texas A&M Transportation Institute, traffic congestion in the San Francisco-Oakland, CA area costs the average auto commuter 78 hours per year in extra travel time, $1,675 in travel delay costs, and an extra 33 gallons of gas compared to free-flow traffic conditions. That means the average commuter spends more than three full days stuck in traffic each year. Unfortunately for these commuters, a potential solution to their problems just left town.

Last month, after California officials told Uber to stop its pilot self-driving car program because it lacked the necessary state permits for autonomous driving, Uber decided to relocate the program from San Francisco to Phoenix, Arizona. To address safety concerns, these self-driving cars are not yet fully driverless, but they still have the potential to reduce the number of cars on the road. Other companies like Google, Tesla, and Ford have announced plans to develop similar technologies, and some experts predict that completely driverless cars will be on the road by 2021.

Until then, however, cities like San Francisco will continue to suffer from the most severe congestion in the country. Commuters in these cities experience serious delays, higher gasoline usage, and lost time behind the wheel. If you live in any of these areas, you are probably very familiar with the mind-numbing effect of sitting through sluggish traffic.

It shouldn’t be surprising, then, that these costs can add up to a larger problem for economic growth. New Mercatus research finds that traffic congestion can significantly harm economic growth and concludes with optimistic predictions for how autonomous vehicles could help.

Brookings Senior Fellow Clifford Winston and Yale JD candidate Quentin Karpilow find significant negative effects of traffic congestion on the growth rates of California counties’ gross domestic product (GDP), employment, wages, and commodity freight flows. They find that a 10% reduction in congestion in a California urban area increases both job and GDP growth by roughly 0.25% and wage growth by approximately 0.18%.
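To make those magnitudes concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 0.25% and 0.18% figures come from the study as summarized above; treating them as percentage-point additions to annual growth, the linear scaling, and the baseline county numbers are my own simplifying assumptions.

```python
# Hypothetical illustration of the estimates described above: a 10% reduction
# in congestion is associated with roughly +0.25 percentage points of job and
# GDP growth and +0.18 percentage points of wage growth.
# The baseline figures below are invented placeholders, not data from the study.

JOB_AND_GDP_EFFECT_PER_10PCT = 0.0025  # +0.25 pct. points per 10% congestion cut
WAGE_EFFECT_PER_10PCT = 0.0018         # +0.18 pct. points per 10% congestion cut

def growth_boost(congestion_reduction_pct):
    """Linearly scale the per-10% effects to a given congestion reduction."""
    scale = congestion_reduction_pct / 10.0
    return {
        "jobs": JOB_AND_GDP_EFFECT_PER_10PCT * scale,
        "gdp": JOB_AND_GDP_EFFECT_PER_10PCT * scale,
        "wages": WAGE_EFFECT_PER_10PCT * scale,
    }

# Hypothetical county: 1,000,000 jobs and $100 billion of annual GDP.
boost = growth_boost(10)
print(f"Extra jobs in one year: {1_000_000 * boost['jobs']:,.0f}")
print(f"Extra GDP in one year:  ${100e9 * boost['gdp'] / 1e6:,.0f} million")
```

Under these placeholder inputs, a 10% drop in congestion would mean roughly 2,500 extra jobs and about $250 million in additional output in a single year for such a county.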

This is the first comprehensive model built to understand how traffic harms the economy, and it builds on past research that has found that highway congestion leads to slower job growth. Similarly, congestion in West Coast ports, which occurs while dockworkers and marine terminal employers negotiate contracts, has caused perishable commodities to go bad, resulting in a 0.2 percentage point reduction in GDP during the first quarter of 2015.

There are two main ways to solve the congestion problem: reduce the number of cars on the road or increase road capacity. Economists have found that the “build more roads” approach has in practice been quite wasteful, since it usually just induces additional highway traffic that quickly fills the new road capacity.

A common proposal for reducing the number of cars on the road is congestion pricing: highway tolls that change based on the number of drivers using the road. Increasing the cost of travel during peak times gives drivers an incentive to plan their trips more strategically, usually by shifting less essential trips to a different time or by carpooling. Another Mercatus study finds that different forms of congestion pricing have been effective at reducing traffic congestion internationally, in London and Stockholm, as well as in cities in Southern California.

The main drawback of this proposal, however, is the political difficulty of implementation, especially for interstate highways, where more than one jurisdiction must approve the tolls. Even though surveys show that drivers generally come to support congestion pricing after they experience the lower congestion that results from tolling, getting them on board in the first place can be difficult.

Those skeptical of congestion pricing, or merely looking for a less politically challenging policy, should look to the emerging technology of driverless cars. The authors of the recent Mercatus study, Winston and Karpilow, find that the adoption of autonomous vehicles could have large macroeconomic stimulative effects.

For California specifically, even if just half of vehicles became driverless, this would create nearly 350,000 additional jobs, increase the state’s GDP by $35 billion, and raise workers’ earnings by nearly $15 billion. Extrapolated to the whole country, this could add at least 3 million jobs, raise the nation’s annual growth rate by 1.8 percentage points, and raise annual labor earnings by more than $100 billion.

What would this mean for the most congested cities? Using Winston and Karpilow’s estimates, I calculated how reduced congestion from increased autonomous car usage could affect Metropolitan Statistical Areas (MSAs) that include New York City, Los Angeles, Boston, San Francisco, and the DC area. The first chart shows the number of jobs that would have been added in 2011 if 50% of motor vehicles had been driverless. The second chart shows how this would affect real GDP per capita, revealing that the San Francisco MSA would have the most to gain, with the others following close behind.

[Charts: jobs added and effect on real GDP per capita under 50% autonomous vehicle adoption, by MSA]

As with any new technology, there is uncertainty about exactly how autonomous cars will be developed and integrated into cities. But with pilot programs already being run by Uber in Pittsburgh and nuTonomy in Singapore, it is becoming clear that the technology is maturing.

With approximately $1,332 in GDP per capita and 45,318 potential jobs on the table for the San Francisco Metropolitan Statistical Area, it is a shame that San Francisco just missed a chance to realize some of these gains and to be at the forefront of driving progress in autonomous vehicles.
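For readers curious about the mechanics of that kind of extrapolation, here is a rough sketch of one way to apportion a statewide estimate to a single MSA. This is not necessarily the method used for the charts above; the statewide figures are the ones quoted earlier in the post, while the 13% employment share is an invented placeholder.

```python
# Rough sketch: apportion California's estimated gains from 50% autonomous
# vehicle adoption (figures quoted above) to one MSA by its share of state
# employment. The 13% share used below is a made-up placeholder.

STATE_JOBS_ADDED = 350_000   # statewide estimate at 50% adoption
STATE_GDP_ADDED = 35e9       # $35 billion statewide estimate

def msa_share_of_gains(employment_share):
    """Scale statewide gains by an MSA's share of state employment."""
    return (STATE_JOBS_ADDED * employment_share,
            STATE_GDP_ADDED * employment_share)

jobs, gdp = msa_share_of_gains(0.13)
print(f"Jobs: {jobs:,.0f}, GDP: ${gdp / 1e9:.1f} billion")
```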

Does the New Markets Tax Credit Program work?

Location-based programs that provide tax credits to firms and investors that locate in particular areas are popular among politicians of both parties. Democrats tend to support them because they are meant to revitalize poorer or rural areas. In a recent speech about the economy, presumed Democratic nominee Hillary Clinton spoke favorably about two of them: the New Markets Tax Credit Program and Empowerment Zones.

Some Republicans also support such programs, which they view as being a pro-business way to help low-income communities. However, House Speaker Paul Ryan’s recent tax reform blueprint generally disapproves of tax credit programs.

Due to the volume of location-based programs and their relatively narrow objectives, many taxpayers are unfamiliar with their differences or unaware that they even exist. This is to be expected since most people are never directly affected by one. In this post I explain one that Hillary Clinton recently spoke about, the New Markets Tax Credit (NMTC) program.

The NMTC program was created in 2000 as part of the Community Renewal Tax Relief Act. It is managed by the Community Development Financial Institutions Fund, which is a division of the U.S. Treasury Department.

The NMTC program provides both new and established businesses with a tax credit that can be used to offset the costs of new capital investment or hiring new workers. The goal is to increase investment in low income communities (LIC) in order to improve the economic outcomes of residents.

Even though the program was started in 2000, no funds were issued to investors until 2003 (although some funds were allocated to the program in 2001 and 2002). Since 2001 over $43 billion has been allocated to the program. The figure below shows the allocations by year, amount issued to investors, and the total amount allocated from 2001 – 2014 (orange bar, uses right axis).

[Figure 1: NMTC allocations by year, amounts issued to investors, and total allocated, 2001 – 2014]

Practically all of the allocated funds from 2001 to 2012 have been issued to investors. A little over $250 million remains from 2013 and $1.3 billion from 2014. As the figure makes clear, this program controls a non-trivial amount of money.

The types of projects funded by the NMTC program can be seen in the figure below. The data for this figure comes from a 2013 Urban Institute report.

[Figure 2: Types of projects funded by the NMTC program]

So what have taxpayers gotten for their money? The program’s ‘fact sheet’ asserts that since 2003 the program has

“…created or retained an estimated 197,585 jobs. It has also supported the construction of 32.4 million square feet of manufacturing space, 74.8 million square feet of office space, and 57.5 million square feet of retail space.”

Like many government program administrators, those running the NMTC program seem to confuse outputs with outcomes. Presumably the goal of the NMTC program is not to build office space, which is a trivial achievement, but to improve the lives of the people living in low income communities. In fact, the program’s fact sheet also states that

“Investments made through the NMTC Program are used to finance businesses, breathing new life into neglected, underserved low-income communities.”

What really matters is whether the program has succeeded at “breathing new life” into LICs. To answer this more complicated question one needs to examine the actual economic outcomes in areas receiving the credits in order to determine whether they have improved relative to areas that haven’t received the credits. Such an exercise is not the same thing as simply reporting the amount of new office space.

That being said, even the simpler task of measuring new office space or counting new jobs is harder than it first appears. It’s important for program evaluators and the taxpayers who fund the program to be aware of the reasons that either result could be mistakenly attributed to the tax credit.

First, the office space or jobs might have been added regardless of the tax credit. Firms choose locations for a variety of reasons, and it’s possible that a particular firm would locate in a particular low income community regardless of the availability of a tax credit. This could happen for economic reasons—the firm is attracted by the low price of space or the location is near an important supplier—or for sentimental ones, e.g., the firm owner is from the neighborhood.

A second reason is that the firms that locate or expand in the community might do so at the expense of other firms that would have located there even without the tax credit. For example, suppose the credit attracts a hotel owner who, because of the credit, finds it worthwhile to build a hotel in the neighborhood, and that this prevents a retail store owner from locating on the same plot of land even though she would have done so without any credit.

The tax credit may also mistakenly appear to be beneficial if all it does is reallocate investment from one community to another. Not all communities are eligible for these tax credits. If a firm was going to locate in a neighboring, ineligible community but switched to the eligible community upon learning about the tax credit, then no new investment was created in the city; it was simply shifted around. In this scenario one community benefits at the expense of another because of the tax credit.

A new study examines the NMTC program in order to determine whether it has resulted in new employment or new businesses in eligible communities. It uses census tract data from 2002 – 2006. In order to qualify for NMTCs, a census tract’s median family income must be 80% or less of its state’s median family income or the poverty rate of the tract must be over 20%. (There are two other population criteria that were added in 2004, but according to the study 98% qualify due to the income or poverty criterion.)

The authors use the median income ratio of 0.8 to separate census tracts into qualifying and non-qualifying groups, and then compare tracts that are close to and on either side of the 0.8 cutoff. The economic outcomes they examine are employment at new firms, the number of new firms, and new employment at existing firms.
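To make the eligibility rule and the comparison strategy concrete, here is a minimal Python sketch. The 80% income threshold and 20% poverty threshold come from the description above; the tract records and the bandwidth used to define “close to the cutoff” are hypothetical.

```python
# Sketch of the NMTC eligibility rule (tract median family income at or below
# 80% of the state median, OR tract poverty rate above 20%) and of keeping
# only tracts near the 0.8 income cutoff, in the spirit of the study's
# comparison. All tract records and the bandwidth below are invented.

from dataclasses import dataclass

@dataclass
class Tract:
    income_ratio: float  # tract median family income / state median
    poverty_rate: float  # share of residents below the poverty line

def nmtc_eligible(t: Tract) -> bool:
    return t.income_ratio <= 0.80 or t.poverty_rate > 0.20

def near_cutoff(t: Tract, bandwidth: float = 0.05) -> bool:
    """Keep only tracts within +/- bandwidth of the 0.8 income cutoff."""
    return abs(t.income_ratio - 0.80) <= bandwidth

tracts = [Tract(0.78, 0.15), Tract(0.83, 0.10), Tract(0.95, 0.25)]
sample = [t for t in tracts if near_cutoff(t)]
eligible = [t for t in sample if nmtc_eligible(t)]        # just below the cutoff
comparison = [t for t in sample if not nmtc_eligible(t)]  # just above it
print(len(eligible), len(comparison))  # 1 1 with these invented tracts
```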

They find that there was less new employment at new firms in NMTC-eligible tracts in the transportation and wholesale industries but more new employment at new firms in the retail industry. Figure 2 shows that retail received a relatively large portion of the tax credits. This result suggests that the tax credits helped new retail firms add workers relative to new firms in transportation and wholesale in eligible census tracts.

The authors note that the magnitudes of the effects are small—a 0.2% increase in new retail employment and a 0.12% and 0.41% decrease in new transportation and wholesale employment, respectively. Thus the program had a limited impact during the 2002 – 2006 period according to this measure, despite the fact that nearly $8 billion was granted to investors from 2002 – 2005.

The authors find a similar result when examining new firms: Retail firms located in the NMTC eligible tracts while services and wholesale firms did not. Together these two results are evidence that the NMTC does not benefit firms in all industries equally since it causes firms in different industries to locate in different tracts. The latter result also supports the idea that firms that benefit most from the tax credit crowd out other types of firms, similar to the earlier hotel and retail store example.

Finally, the authors examined new employment at existing firms. This result is more favorable to the program—an 8.8% increase in new employment at existing manufacturing firms and a 10.4% increase at retail firms. Thus NMTCs appear to have been primarily used to expand existing operations.

But while there is evidence that the tax credit slightly increased employment, the authors note that due to the limitations of their data they are unable to conclude whether the gains in new employment or new firms were due to a reallocation of economic activity from non-eligible to eligible census tracts or to genuinely new economic activity that occurred only because of the program. Thus even the small effects identified by the authors cannot be conclusively considered net new economic activity generated by the NMTC program. Instead, the NMTC program may have just moved economic activity from one community to another.

The mixed results of this recent study, combined with the inability to conclusively attribute them to the NMTC program, cast doubt on the program’s overall effectiveness. Additionally, the effects are quite small. Thus even if the effects are positive once crowding out and reallocation are taken into account, the benefits still may fall short of the $43.5 billion cost of the program (which doesn’t include administrative costs).

An alternative to location-based tax credit programs is to lower tax rates on businesses and investment across the board. This would remove the distortions that are inherent in location-based programs that favor some areas and businesses over others. It would also reduce the uncertainty that surrounds the renewal and management of the programs. Attempts to help specific places are often unsuccessful and give residents of such places false hope that community revitalization is right around the corner.

Tax credits, despite their good intentions, often fail to deliver the promised benefits. The alternative—low, stable tax rates that apply to all firms—helps create a business climate that is conducive to long-term planning and investment, which leads to better economic outcomes.

Grants to Puerto Rico haven’t helped much

Greece’s monetary and fiscal issues have overshadowed a similar situation right in America’s own backyard: Puerto Rico. Puerto Rico’s governor recently called the commonwealth’s $72 billion in debt “unpayable,” and this has made Puerto Rico’s bondholders more nervous than they already were. Puerto Rico’s bonds were previously downgraded to junk by the credit rating agencies, and there is a lot of uncertainty surrounding Puerto Rico’s ability to honor its obligations to both bondholders and its own workers, as the commonwealth’s pension system is drastically underfunded. A major default would likely affect residents of the mainland U.S., since according to Morningstar most of the debt is owned by U.S. mutual funds, hedge funds, and mainland Americans.

So how did Puerto Rico get into this situation? Like many other places, including Greece and several U.S. cities, the government of Puerto Rico routinely spent more than it collected in revenue and then borrowed to fill the gap, as shown in the graph below from Puerto Rico’s Office of Management and Budget. Over a recent 13-year period (2000 – 2012) Puerto Rico ran a deficit every year and accrued $23 billion in debt.

[Figure: Puerto Rico government spending and revenue, 2000 – 2012]

Puerto Rico has a lot in common with many struggling U.S. cities that followed a similar fiscal path: a high unemployment rate of 12.4%, a shrinking labor force, stagnant or declining median household income, population flight, and falling house prices. Only 46.1% of the population 16 and over was in the labor force in 2012 (compared to an average of nearly 64% in the U.S. that year), and the population declined by 4.8% from 2010 to 2014. It is difficult to raise enough revenue to fund basic government services when less than half the adult population is employed and the most able-bodied workers are leaving the island.

Like other U.S. cities and states, Puerto Rico receives intergovernmental grants from the federal government. As I have explained before, these grants reduce the incentives for a local government to get its fiscal house in order and misallocate resources from relatively responsible, growing areas to less responsible, shrinking areas. As an example, since 1975 Puerto Rico has received nearly $2.7 billion in Community Development Block Grants (CDBG). San Juan, the capital of Puerto Rico, has received over $900 million. The graph below shows the total amount of CDBGs awarded to the major cities of Puerto Rico from 1975 – 2014.

[Figure: Total CDBG dollars awarded to major Puerto Rican cities, 1975 – 2014]

As shown in the graph, San Juan has received the bulk of the grant dollars. The next graph shows the amounts by year, for various years between 1980 and 2014, for San Juan and for Puerto Rico as a whole, plotted on the left vertical axis (bars). On the right vertical axis is the amount of CDBG dollars per capita (lines). San Juan is in orange and Puerto Rico is in blue.

[Figure: Annual CDBG dollars (bars, left axis) and CDBG dollars per capita (lines, right axis) for San Juan and Puerto Rico]

San Juan has consistently received more dollars per capita than the other areas of Puerto Rico. Both total dollars and dollars per capita have been declining since 1980, when the CDBG program was near its peak funding level. As part of the 2009 Recovery Act, San Juan received an additional $2.8 million and Puerto Rico as a whole received another $5.9 million on top of the $32 million already provided by the program (not shown on the graph).

It’s hard to look at all of this redistribution and not ask whether it did any good. After all, $2.7 billion later, Puerto Rico’s economy is struggling and its fiscal situation looks grim. Grant dollars from programs like the CDBG program consistently fail to make a lasting impact on the recipient’s economy. There are structural problems holding Puerto Rico’s economy back, such as the Jones Act, which increases the cost of goods on the island by restricting intra-U.S. shipping to U.S. ships, and the enforcement of the U.S. minimum wage, which is a significant cost to employers in a place where the median wage is much lower than on the mainland. Intergovernmental grants and transfers do nothing to solve these underlying structural problems. Yet despite this reality, millions of dollars are spent every year with no lasting benefit.

No, bailouts are not something to celebrate

Robert Samuelson at the Washington Post is celebrating the auto bailout.

Last December I had a piece in the Post in which I argued that “pro-business” policies like bailouts are actually bad for business. I offered five reasons:

  1. Pro-business policies undermine competition.
  2. They retard innovation.
  3. They sucker workers into unsustainable careers.
  4. They encourage wasteful privilege seeking.
  5. They undermine the legitimacy of government and business.

Read my piece for the full argument.

But aren’t things different in the midst of a major economic and financial crisis? Shouldn’t we have more leeway for bailouts in exigent circumstances?

No. Here is why:

First, we should always remember that the concentrated beneficiaries of a bailout have every incentive to overstate its necessity while the diffuse interests that pay for it (other borrowers, taxpayers, un-favored competitors, and the future inheritors of a less dynamic and less competitive economy) have almost no incentive or ability to get organized and lobby against it.

Bailout proponents talk as if they know bailouts avert certain calamity. But the truth is that we can never know exactly what would have happened without a bailout. We can, however, draw on both economic theory and past experience. And both suggest that the macroeconomy of a world without bailouts is actually more stable than one with bailouts. This is because bailouts incentivize excessive risk (and, importantly, correlated risk taking). Moreover, because the bailout vs. no bailout call is inherently arbitrary, bailouts generate uncertainty.

Todd Zywicki of GMU Law argues convincingly that normal bankruptcy proceedings would have worked just fine in the case of the automakers.

Moreover, as Garett Jones and Katelyn Christ explain, alternative options like “speed bankruptcy” (aka debt-to-equity swaps) offer better ways to improve the health of institutions without completely letting creditors off the hook. This isn’t just blind speculation. The EU used this approach in its “bail in” of Cyprus and it seems to have worked pretty well.

Ironically, one can make a reasonable case that many (most?) bailouts are themselves the result of previous bailouts. The 1979 bailout of Chrysler taught a valuable lesson to the Big Three automakers and their creditors: it showed them that Washington would do whatever it took to save them. That, and decades of other privileges, allowed the automakers to ignore both customers and market realities.

Indeed, at least some of the blame for the entire 2008 debacle falls on the ‘too big to fail’ expectation that systematically encouraged most large financial firms to leverage up. While it was hardly the only factor, the successive bailouts of Continental Illinois (1984), the S&Ls (1990s), the implicit guarantee of the GSEs, etc., likely exacerbated the severity of the 2008 financial crisis. So a good cost-benefit analysis of any bailout should include some probability that it will encourage future excessive risk taking, and future calls for more bailouts. Once these additional costs are accounted for, bailouts look like significantly worse deals.

Adherence to the “rule of law” is more important in a crisis than it is in normal times. Constitutional prohibitions, statutory limits, and even political taboos are typically not needed in “easy cases.” It is the hard cases that make for bad precedent.

How Complete Are Federal Agencies’ Regulatory Analyses?

A report released yesterday by the Government Accountability Office will likely get spun to imply that federal agencies are doing a pretty good job of assessing the benefits and costs of their proposed regulations. The subtitle of the report reads in part, “Agencies Included Key Elements of Cost-Benefit Analysis…” Unfortunately, agency analyses of regulations are less complete than this subtitle suggests.

The GAO report defined four major elements of regulatory analysis: a discussion of the need for the regulatory action, an analysis of alternatives, and assessments of the benefits and of the costs of the regulation. These crucial features have been required by executive orders on regulatory analysis and OMB guidance for decades. For the largest regulations with economic effects exceeding $100 million annually (“economically significant” regulations), GAO found that agencies always included a statement of the regulation’s purpose, discussed alternatives 81 percent of the time, always discussed benefits and costs, provided a monetized estimate of costs 97 percent of the time, and provided a monetized estimate of benefits 76 percent of the time.

A deeper dive into the report, however, reveals that GAO did not evaluate the quality of any of these aspects of agencies’ analysis. Page 4 of the report notes, “[O]ur analysis was not designed to evaluate the quality of the cost-benefit analysis in the rules. The presence of all key elements does not provide information regarding the quality of the analysis, nor does the absence of a key element necessarily imply a deficiency in a cost-benefit analysis.”

For example, GAO checked to see whether the agency included a statement of the purpose of the regulation, but it apparently accepted a statement that the regulation is required by law as a sufficient statement of purpose (p. 22). Citing a statute is not the same thing as articulating a goal or identifying the root cause of the problem an agency seeks to solve.

Similarly, an agency can provide a monetary estimate of some benefits or costs without necessarily addressing all major benefits or costs the regulation is likely to create. GAO notes that it did not ascertain whether agencies addressed all relevant benefits or costs (p. 23).

For an assessment of the quality of agencies’ regulatory analysis, check out the Mercatus Center’s Regulatory Report Card. The Report Card evaluation method explicitly assesses the quality of the agency’s analysis, rather than just checking to see if the agency discussed the topics. For example, to assess how well the agency analyzed the problem it is trying to solve, the evaluators ask five questions:

1. Does the analysis identify a market failure or other systemic problem?

2. Does the analysis outline a coherent and testable theory that explains why the problem is systemic rather than anecdotal?

3. Does the analysis present credible empirical support for the theory?

4. Does the analysis adequately address the baseline — that is, what the state of the world is likely to be in the absence of federal intervention not just now but in the future?

5. Does the analysis adequately assess uncertainty about the existence or size of the problem?

These questions are intended to ascertain whether the agency identified a real, significant problem and identified its likely cause. On a scoring scale ranging from 0 points (no relevant content) to 5 points (substantial analysis), economically significant regulations proposed between 2008 and 2012 scored an average of just 2.2 points for their analysis of the systemic problem. This score indicates that many regulations are accompanied by very little evidence-based analysis of the underlying problem the regulation is supposed to solve. Scores for assessment of alternatives, benefits, and costs are only slightly better, which suggests that these aspects of the analysis are often seriously incomplete.

These results are consistent with the findings of other scholars who have evaluated the quality of agency Regulatory Impact Analyses during the past several decades. (Check pp. 7-10 of this paper for citations.)

The Report Card results are also consistent with the findings in the GAO report. GAO assessed whether agencies are turning in their assigned homework; the Report Card assesses how well they did the work.

The GAO report contains a lot of useful information, and the authors are forthright about its limitations. GAO combed through 203 final regulations to figure out what parts of the analysis the agencies did and did not do — an impressive accomplishment by any measure!

I’m more concerned that some participants in the political debate over regulatory reform will claim that the report shows regulatory agencies are doing a great job of analysis, and no reforms to improve the quality of analysis are needed. The Regulatory Report Card results clearly demonstrate otherwise.

“Regulatory Certainty” as a Justification for Regulating

A key principle of good policy making is that regulatory agencies should define the problem they are seeking to solve before finalizing a regulation. Thus, it is odd that in the economic analysis for a recent proposed rule related to greenhouse gas emissions from new power plants, the Environmental Protection Agency (EPA) cites “regulatory certainty” as a justification for regulating. It seems almost any regulation could be justified on these grounds.

The obvious justification for regulating carbon dioxide emissions would be to limit harmful effects of climate change. However, as the EPA’s own analysis states:

the EPA anticipates that the proposed Electric Generating Unit New Source Greenhouse Gas Standards will result in negligible CO2 emission changes, energy impacts, quantified benefits, costs, and economic impacts by 2022.

The reason the rule will result in no benefits or costs, according to the EPA, is because the agency anticipates:

even in the absence of this rule, existing and anticipated economic conditions will lead electricity generators to choose new generation technologies that meet the proposed standard without the need for additional controls.

So why issue a new regulation? If the EPA’s baseline assessment is correct (i.e., it is making an accurate prediction about what the world would look like in the absence of the regulation), then the regulation provides no benefits, since it causes no deviations from that baseline. If the EPA’s baseline turns out to be wrong, a “wait and see” approach likely makes more sense, especially given the inherent uncertainty in predicting future energy prices and the unintended consequences that often result from regulating.

Instead, the EPA cites “regulatory certainty” as a justification for regulating, presumably because businesses will now be able to anticipate what emission standards will be going forward, and they can now invest with confidence. But announcing there will be no new regulation for a period of time also provides certainty. Of course, any policy can always change, whether the agency decides to issue a regulation or not. That’s why having clearly-stated goals and clearly-understood factors that guide regulatory decisions is so important.

Additionally, there are still costs to regulating, even if the EPA has decided not to count these costs in its analysis. Just doing an economic analysis is a cost. So is using agency employees’ time to enforce a new regulation. News outlets suggest “industry-backed lawsuits are inevitable” in response to this regulation. This too is a cost. If costs exceed benefits, the rule is difficult to justify.

One might argue that because of the 2007 Supreme Court ruling finding that CO2 is covered under the Clean Air Act, and the EPA’s subsequent endangerment finding related to greenhouse gases, there is some basis for the argument that uncertainty is holding back investment in new power plants. However, if this is true then this policy uncertainty should be accounted for in the agency’s baseline. If the proposed regulation alleviates some of this uncertainty, and leads to additional power plant construction and energy creation, that change is a benefit of the regulation and should be identified in the agency’s analysis.

The EPA also states it “intends this rule to send a clear signal about the current and future status of carbon capture and storage technology” because the agency wants to create the “incentive for supporting research, development, and investment into technology to capture and store CO2.”

However, by identifying the EPA’s preferred method of reducing CO2 emissions from new power plants, the agency may discourage businesses from investing in other promising new technologies. Additionally, by setting different standards for new and existing power plants, the EPA is clearly favoring one set of companies at the expense of another. This is a form of cronyism.

The EPA needs to get back to policymaking 101. That means identifying a problem before regulating, and tailoring regulations to address the specific problem at hand.

America’s best pension system? The case of Milwaukee

NPR reports that while many municipal and state governments’ pension systems are suffering from deep underfunding, there are some outliers. One such city is Milwaukee, Wisconsin. With a funding ratio of 90 percent, Milwaukee’s public employees’ plan would seem to have beaten the odds with a very simple (and laudable) strategy: fully fund the pension plan every year.

It is common sense. Make the full annual contribution and the plan can ensure that the benefits promised are available when retirement day arrives.

Except, thanks to government accounting guidance, it’s a little more complicated than that.

The problem is that the annual contribution the city is (prudently) making each year is calculated incorrectly. This flawed approach is why Detroit could claim a few short years ago that its plans were 100 percent funded. It is why New Jersey thought its plans were overfunded in the late 1990s.

Public plans calculate their liabilities – and thus the annual amount needed to contribute to the fund – based on how much they expect the assets to return. Milwaukee’s discount rate is 8.25%, recently lowered from 8.5%.

Unfortunately, if these liabilities are considered safe and guaranteed by the government, then they should be valued as such. A better rate to use is the yield on US Treasury bonds. In economist-speak: the value of the liabilities is independent of the assets set aside to pay them. By way of analogy: your monthly mortgage payment doesn’t change based on how much you think you may earn in your 401(k).

On a default-free, market-valuation basis, Milwaukee’s pension plan is 40% funded and has a funding gap of $6.5 billion.
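To see why the discount rate matters so much, here is a minimal sketch, with invented numbers, of how the same promised benefit payments produce very different funded ratios depending on the rate used to discount them. Only the 8.25% assumption and the direction of the result come from the discussion above; the cash flows, asset value, and the 3% Treasury-like yield are placeholders.

```python
# Illustration of how a pension funded ratio depends on the discount rate.
# The promised payments and asset value below are invented placeholders.

def present_value(cash_flows, rate):
    """Discount a list of (year, payment) pairs back to today."""
    return sum(payment / (1 + rate) ** year for year, payment in cash_flows)

promised = [(year, 100e6) for year in range(1, 31)]  # $100M a year for 30 years
assets = 1.0e9                                       # hypothetical plan assets

for label, rate in [("expected-return assumption (8.25%)", 0.0825),
                    ("Treasury-like yield (3%, hypothetical)", 0.03)]:
    liability = present_value(promised, rate)
    print(f"{label}: funded ratio = {assets / liability:.0%}")

# With these placeholders the plan looks roughly 91% funded at 8.25%
# but only about 51% funded at 3%.
```

The particular numbers are not the point; the mechanism is. A high assumed return shrinks the reported liability and flatters the funded ratio, which is exactly the gap between the official 90 percent figure and the market-valuation estimate above.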

The good news – Milwaukee’s elected officials have funding discipline. They aren’t skipping, skimming, or torturing their contributions out of a desire to avoid paying their bills. And the same can be said of many other cities and states. Funding a pension shouldn’t be magic or entail lots of uncertainty for the sponsor or employee.

But that leads to the bad news. Even when governments are responsible managers, they’re being sunk by bad accounting. Public sector accounting assumptions (GASB 25) lead governments to miscalculate the bill for public sector pension contributions. Even when governments pay 100 percent of the recommended amount – as it is presently calculated – this amount is too little to fully fund pension promises.

Last week I posted the Tax Foundation’s map of what pension funding levels look like under market valuation. Almost all state plans are under the 50 percent funded level. That is, they are in far worse funding shape than their current accounts recognize.

Until plans de-link the value of the liability from the expected performance of plan assets, even the best-managed plans are going to be in danger of not having put aside enough to pay these promises. Even the best intentions cannot undo the effects of bad accounting assumptions.


Does Anyone Know the Net Benefits of Regulation?

In early August, I was invited to testify before the Senate Judiciary subcommittee on Oversight, Federal Rights and Agency Action, which is chaired by Sen. Richard Blumenthal (D-Conn.).  The topic of the panel was the amount of time it takes to finalize a regulation.  Specifically, some were concerned that new regulations were being deliberately or needlessly held up in the regulatory process, and as a result, the realization of the benefits of those regulations was delayed (hence the dramatic title of the panel: “Justice Delayed: The Human Cost of Regulatory Paralysis.”)

In my testimony, I took the position that economic and scientific analysis of regulations is important.  Careful consideration of regulatory options can help minimize the costs and unintended consequences that regulations necessarily impose. If additional time can improve regulations—meaning both improving individual regulations’ quality and getting closer to the optimal quantity—then additional time should be taken.  My position was buttressed by three main points:

  1. The accumulation of regulations stifles innovation and entrepreneurship and reduces efficiency. This slows economic growth, and over time, the decreased economic growth attributable to regulatory accumulation has significantly reduced real household income.
  2. The unintended consequences of regulations are particularly detrimental to low-income households, resulting in costs to precisely the group that has the fewest resources to deal with them.
  3. The quality of regulations matters. The incentive structure of regulatory agencies, coupled with occasional pressure from external forces such as Congress, can cause regulations to favor particular stakeholder groups or to create regulations for which the costs exceed the benefits. In some cases, because of statutory deadlines and other pressures, agencies may rush regulations through the crafting process. That can lead to poor execution: rushed regulations are, on average, more poorly considered, which can lead to greater costs and unintended consequences. Even worse, the regulation’s intended benefits may not be achieved despite incurring very real human costs.

At the same time, I told the members of the subcommittee that if “political shenanigans” are the reason some rules take a long time to finalize, then they should use their bully pulpits to draw attention to such actions.  The influence of politics on regulation and the rulemaking process is an unfortunate reality, but not one that should be accepted.

I actually left that panel with some small amount of hope that, going forward, there might be room for an honest discussion about regulatory reform.  It seemed to me that no one in the room was happy with the current regulatory process – a good starting point if you want real change.  Chairman Blumenthal seemed to feel the same way, stating in his closing remarks that he saw plenty of common ground.  I sent a follow-up letter to Chairman Blumenthal stating as much. I wrote to the Chairman in August:

I share your guarded optimism that there may exist substantial agreement that the regulatory process needs to be improved. My research indicates that any changes to regulatory process should include provisions for improved analysis because better analysis can lead to better outcomes. Similarly, poor analysis can lead to rules that cost more human lives than they needed to in order to accomplish their goals.

A recent op-ed penned by Sen. Blumenthal in The Hill shows me that at least one person is still thinking about the topic of that hearing.  The final sentence of his op-ed said that “we should work together to make rule-making better, more responsive and even more effective at protecting Americans.” I agree. But I disagree with the idea that we know that, as the Senator wrote, “by any metric, these rules are worth [their cost].”  The op-ed goes on to say:

The latest report from the Office of Information and Regulatory Affairs shows federal regulations promulgated between 2002 and 2012 produced up to $800 billion in benefits, with just $84 billion in costs.

Sen. Blumenthal’s op-ed would make sense if his facts were correct.  However, the report to Congress from OIRA that his op-ed referred to actually estimates the costs and benefits of only a handful of regulations.  It’s simple enough to open that report and quote the very first bullet point in the executive summary, which reads:

The estimated annual benefits of major Federal regulations reviewed by OMB from October 1, 2002, to September 30, 2012, for which agencies estimated and monetized both benefits and costs, are in the aggregate between $193 billion and $800 billion, while the estimated annual costs are in the aggregate between $57 billion and $84 billion. These ranges are reported in 2001 dollars and reflect uncertainty in the benefits and costs of each rule at the time that it was evaluated.

But you have to actually dig a little farther into the report to realize that this characterization of the costs and benefits of regulations represents only the view of agency economists (think about their incentive for a moment – they work for the regulatory agencies) and for only 115 regulations out of 37,786 created from October 1, 2002, to September 30, 2012.  As the report that Sen. Blumenthal refers to actually says:

The estimates are therefore not a complete accounting of all the benefits and costs of all regulations issued by the Federal Government during this period.

Furthermore, as an economist who used to work in a regulatory agency and produce these economic analyses of regulations, I find it heartening that the OMB report emphasizes that the estimates it relies on to produce the report are “neither precise nor complete.”  Here’s another point of emphasis from the OMB report:

Individual regulatory impact analyses vary in rigor and may rely on different assumptions, including baseline scenarios, methods, and data. To take just one example, all agencies draw on the existing economic literature for valuation of reductions in mortality and morbidity, but the technical literature has not converged on uniform figures, and consistent with the lack of uniformity in that literature, such valuations vary somewhat (though not dramatically) across agencies. Summing across estimates involves the aggregation of analytical results that are not strictly comparable.

I don’t doubt Sen. Blumenthal’s sincerity in believing that the net benefits of regulation are reflected in the first bullet point of the OMB Report to Congress.  But this shows one of the problems facing regulatory reform today: People on both sides of the debate continue to believe that they know the facts, but in reality we know a lot less about the net effects of regulation than we often pretend to know.  Only recently have economists even begun to understand the drag that regulatory accumulation has on economic growth, and that says nothing about what benefits regulation creates in exchange.

All members of Congress need to understand the limitations of our knowledge of the total effects of regulation.  We tend to rely on prospective analyses – analyses that state the costs and benefits of a regulation before they come to fruition.  What we need are more retrospective analyses, with which we can learn what has really worked and what hasn’t, and more comparative studies – studies that have control and experiment groups and see if regulations affect those groups differently.  In the meantime, the best we can do is try to ensure that the people engaged in creating new regulations follow a path of basic problem-solving: First, identify whether there is a problem that actually needs to be solved.  Second, examine several alternative ways of addressing that problem.  Then consider what the costs and benefits of the various alternatives are before choosing one. 

Detroit’s Art is Not the Key to its Revival

This post originally appeared at Market Urbanism, a blog about free-market urban development.

Detroit’s art assets have made news as Emergency Manager Kevyn Orr evaluates the city’s assets for a potential bankruptcy filing. Belle Isle, where Rod Lockwood recently proposed a free city-state, may be on the chopping block, but according to a Detroit Free Press poll, residents are most concerned about the city auctioning pieces from the Detroit Institute of Arts’ collection.

I’ve written previously about the downsides of publicly funding art from the perspective of free speech, but the Detroit case presents a new reason why cities are not the best keepers of artistic treasures. The Pittsburgh Post-Gazette contrasts the Detroit Institute of Arts’ situation with the benefits of a museum funded with an endowment:

As usual, Andrew Carnegie knew what he was doing.

The steel baron turned philanthropist put the City of Pittsburgh in charge of operating the library he gave it in 1895, but when he added an art museum to the Oakland facility just one year later, he kept it out of city hands.

“The city is not to maintain [the art gallery and museum],” Carnegie said in his dedication address. “These are to be regarded as wise extravagances, for which public revenues should not be given, not as necessaries. These are such gifts as a citizen may fitly bestow upon a community and endow, so that it will cost the city nothing.”

Museums and other cultural amenities are a sign of a city’s success, not drivers of that success. The correlation between culturally interesting cities and cities with strong economic opportunities is often mistakenly interpreted to mean that if cities do more to build their cultural appeal from the top down, they will encourage job growth in the process. Rather, a productive and well-educated population both demands and supplies these amenities. While an art museum may increase tourism on the margin, it is unlikely to draw additional firms or individuals away from other locations. Detroit is sitting on an estimated $2.5 billion in art, enough to put a dent in its $15 billion in long-term obligations.

On a recent episode of EconTalk, Ed Glaeser explains that overinvesting in public amenities relative to demand is a sign of continued challenges for municipalities:

It is so natural and so attractive to plunk down a new skyscraper and declare Cleveland has ‘come back.’ Or to build a monorail and pretend you are going to be just as successful as Disney World, for some reason. You get short-term headlines even when this infrastructure is just totally ill-suited for the actual needs of the city. The hallmark of declining cities is to have over-funded infrastructure relative to the level of demand in that city.

Similarly, cities throwing resources at museums and other amenities designed to attract the “creative class” are highly likely to fail because bureaucrats are poorly-positioned to learn about and respond to their municipalities’ cultural demands. When cities do successfully provide cultural amenities, they are catering primarily to well-educated, high-income residents — not the groups that should be the targets of government programs.

I think it’s highly unlikely that Detroit will sell off any taxpayer-owned art to pay down its debts, given the media and political blowback the possibility has already generated. However, seeing the city in a position where it owns enough art to cover a substantial portion of its unsustainable long-term debts demonstrates why municipalities should not be curators. Tying up municipal resources in art is irresponsible. The uncertainty that the city’s debt creates for future taxes and service provision is clearly detrimental to economic growth. While assets like museums are nice for residents, they do not attract or keep residents or jobs.

Detroit does have one important asset: cheap rent, and new ideas need cheap rent. Detroit’s affordable real estate is attracting start-ups, with five of the metro area’s young companies making Brand Innovator’s list of American brands to watch. While these budding businesses could be key players in the city’s economic recovery, top-down plans to preserve and increase cultural amenities for these firms’ employees will not.

States Aim to Eliminate Corporate and Individual Income Taxes

Although the prospects of fundamental tax reform at the federal level continue to look bleak, beneficial tax proposals are beginning to sprout and gain political support in states across the US. Perhaps motivated by the twin problems of tough budget choices and mounting liability obligations in this stubborn economy, the governors of several states have recommended a variety of tax reform proposals. Many of these aim to lower or completely eliminate corporate and individual income taxes, which would increase state economic growth and, hopefully, improve the revenues that flow into state coffers along the way.

Here is a sampling of the proposals:

  • Nebraska: During his State of the State address last week, Gov. Dave Heineman outlined his vision of a reformed tax system that would be “modernized and transformed” to reflect the realities of his state’s current economic environment. His bold plan would completely eliminate the individual income tax and corporate income tax in Nebraska and shift to a sales tax as the state’s main revenue source. To do this, the governor proposes to eliminate approximately $2.8 billion in sales tax exemptions for purchases as diverse as school lunches and visits to the laundromat. If the entire plan proves politically unpalatable, Heineman is prepared to settle for at least reducing these rates as a way to improve his state’s competitiveness.
  • North Carolina: Legislative leaders in the Tar Heel State have likewise been eying their individual and corporate income taxes as cumbersome impediments to economic growth and competitiveness that they’d like to jettison. State Senate leader Phil Berger made waves last week by announcing his coalition’s intentions to ax these taxes. In their place would be a higher sales tax, up from 6.75% to 8%, which would be free from the myriad exemptions that have clogged the revenue-generating abilities of the sales tax over the years.
  • Louisiana: In a similar vein, Gov. Bobby Jindal of Louisiana has called for the elimination of the individual and corporate income taxes in his state. In a prepared statement given to the Times-Picayune, Jindal emphasized the need to simplify Louisiana’s currently complex tax system in order to “foster an environment where businesses want to invest and create good-paying jobs.” To ensure that the proposal is revenue neutral, Jindal proposes to raise sales taxes while keeping those rates as “low and flat” as possible.
  • Kansas: Emboldened by the previous legislative year’s successful income tax rate reduction and an overwhelmingly supportive legislature, Kansas Gov. Sam Brownback laid out his plans to further lower the top Kansas state income tax rate from the current 4.9% to 3.5%. Eventually, Brownback dreams of completely abolishing the income tax. “Look out Texas,” he chided during last week’s State of the State address, “here comes Kansas!” Like the other states that are aiming to lower or remove state income taxes, Kansas would make up for the loss in revenue through an increased sales tax. Bonus points for Kansas: Brownback is also eying the Kansas mortgage interest tax deduction as the next to go, the benefits of which I discussed in my last post.

These plans for reform are as bold as they are novel; no state has legislatively eliminated its income tax since resource-rich Alaska did so in 1980. It is interesting that the aforementioned reform leaders all referenced the uncertainty and complexity of their current state tax systems as the primary motivation for eliminating state income taxes. Seth Giertz and Jacob Feldman tackled this issue in their Mercatus Research paper, “The Economic Costs of Tax Policy Uncertainty,” last fall. The authors argued that complex tax systems laden with targeted deductions tend to concentrate benefits on the politically connected and therefore produce an inefficient tax system to the detriment of everyone within it.

Additionally, moving to a sales tax model of revenue-generation may provide state governments with a more stable revenue source when compared to the previous regime based on personal and corporate income taxes. As Matt argued before, the progressive taxation of personal and corporate income is a particularly volatile source of revenue and tends to suddenly dry up in times of economic hardship. What’s more, a state’s reliance on corporate and personal income taxes as a primary source of revenue is associated with large state budget gaps, a constant concern for squeezed state finances.

If these governors are successful and they are able to move their states to a straightforward tax system based on a sales tax, they will likely see the economic growth and increased investment that they seek.

Keep an eye on these states in the coming year: depending on the success of their reforms and tax policies, more states could soon follow.