Transit and Transportation

High-speed rail: is this year different?

Many U.S. cities are racing to develop high-speed rail systems that shorten commute times and boost local economies. These trains can reach speeds over 124 mph, sometimes even as high as 374 mph in the case of Japan’s record-breaking trains. Despite this potential, American cities haven’t had the success of other countries. In 2009, the Obama administration awarded almost a billion dollars of stimulus money to Wisconsin to build a high-speed rail line connecting Milwaukee and Madison, and possibly the Twin Cities, but that project was derailed. Now the Trump administration has plans to support a high-speed rail project in Texas. Given so many failed attempts in the U.S., it’s fair to ask whether this time is different. And if it is, will high-speed rail deliver the benefits that proponents claim?

The argument for building high-speed rail lines usually entails promises of faster trips, better connections between major cities, and the economic growth that follows. It almost seems like a no-brainer: why would any city not want to pursue something like this? The answer, as with most public policy questions, depends on the costs, and on whether the benefits actually materialize.

In a forthcoming paper for the Mercatus Center, transportation scholar Kenneth Button explores these questions by studying the high-speed rail experiences of Spain, Japan, and China, the countries with the three largest systems as measured by network length. Although these rail systems have produced benefits, Button cautions against focusing too narrowly on them as models, primarily because what works in one area can’t necessarily be replicated in another.

Most major systems in other countries are the result of large public investment and were built with each area’s unique geography and political environment in mind. Applying their approaches to American cities ignores not only how those factors differ, but also how much costs can differ. For example, the average infrastructure unit price of high-speed rail in Europe is between $17 million and $24 million per mile, while proposals in California are conservatively estimated at $35 million per mile.

The cost side of the equation is often overlooked, with most of the attention going to the benefit side. Button explains that the main potential benefit, generating economic growth, doesn’t always live up to expectations. The realized growth effects are usually minimal, and sometimes even negative. Despite this, proponents of high-speed rail oversell these benefits, and the process of thinking through whether high-speed rail is a sound public investment is often cut short.

The goal is to generate new economic activity, not merely replace or divert it from elsewhere. In Japan, for example, only six percent of the traffic on the Sanyo Shinkansen line was newly generated, while 55 percent came from other rail lines, 23 percent from air, and 16 percent from inter-city bus. In China, after the Nanguang and Guiguang lines began operating in 2014, a World Bank survey found that many passengers would have made the same trips by some other mode of transportation if the high-speed rail option hadn’t existed. The passengers who chose this new mode surely benefited from shorter travel times, but this should not be confused with net growth across the economy.

Even if much of it was diverted from other transport modes, the amount of high-speed rail traffic Japan and China have generated is commendable. Spain’s system, however, has not been as successful; its network has generated only about 5 percent of Japan’s passenger volume. A line between Perpignan, France and Figueres, Spain that began service in 2009 fell severely short of projected traffic: it was expected to run 19,000 trains per year, but had reached only 800 trains by 2015.

There is also evidence that high-speed rail systems do a poor job of redistributing activity geographically. This is especially concerning given that projects are often sold on the promise of promoting regional equity and reducing congestion in overheated areas. You can plan a track between well-developed and less-developed regions, but this does not guarantee that growth will follow for both. The Shinkansen system delivers much of Japan’s workforce to Tokyo, for example, but does not spread much employment away from the capital. In fact, faster growth happened where it was already expected, even before the high-speed rail was planned or built. The Tokyo-Osaka Shinkansen line in particular has strengthened the relative economic positions of Tokyo and Osaka while weakening those of cities not served.

Passenger volume and line access are not, and should not be, the only metrics of success. Academics have expressed a fair amount of skepticism about high-speed rail’s ability to meet other objectives. When it comes to investment value, many projects have delivered much lower returns than expected. A recent, extreme example is California’s bullet train, which is 50 percent over its planned budget and seven years behind its construction schedule.

The project in California has been deemed a lost cause by many, but other projects have gained momentum in the past year. The North American High Speed Rail Group has proposed a rail line between Rochester and the Twin Cities, and if it gets approval from city officials, it plans to finance the line entirely with private money. The main drawback is that the project would require the use of eminent domain to take the property of existing businesses that sit in the path of the planned line. Private companies trying to use eminent domain to get past a roadblock like this often claim that the taking is for the “public benefit.” Given that many residents have resisted the North American High Speed Rail Group’s plans, forcing the use of eminent domain would likely only destroy value, reallocating property from a higher-value to a lower-value use.

Past Mercatus research has found that using eminent domain powers for redevelopment purposes, i.e. taking property from one private party and giving it to another, can cause the tax base to shrink as a result of decreases in private investment. In other words, when entrepreneurs see that the projects they invest in could easily be taken if another business owner makes the case to city officials, future investors are discouraged from moving into the area. This ironically discourages development, and the government’s revenues suffer as a result.

Florida’s Brightline might have found a way around this. Instead of trying to take the property of businesses and homes in its way, the company has raised money to re-purpose existing tracks that already run between Miami and West Palm Beach. If implemented successfully, this will be the first privately run and operated rail service launched in the U.S. in over 100 years. And it requires neither eminent domain nor taxpayer dollars to jump-start a venture that, like any investment, carries a risk of failure, factors that reduce the cost side of the equation from the public’s perspective.

Which brings us back to the Houston-to-Dallas line that Trump appears to be getting behind. How does that plan stack up against these other projects? For one, it would require eminent domain to take land from rural landowners in order to build a line that would primarily benefit city residents. Federal intervention would mean picking a winner and a loser at the outset. Additionally, there is no guarantee that building the line would bring about the economic development that many proponents promise. Button’s new paper suggests that it’s fair to be skeptical.

I’m not arguing that high-speed rail in America should be abandoned altogether. Progress in Florida demonstrates that, under the right conditions and with the right timing, it could be cost-effective. The authors of a 2013 study echo this by writing:

“In the end, HSR’s effect on economic and urban development can be characterized as analogous to a fertilizer’s effect on crop growth: it is one ingredient that could stimulate economic growth, but other ingredients must be present.”

Cities that can’t seem to mix the right ingredients can look to other options for reaching the same goals. In fact, a review of the economic literature finds that investing in road infrastructure yields better returns than investing in other transportation infrastructure like airports, railways, or ports. Or, as I’ve discussed previously, being more welcoming to new technologies like driverless cars has the potential to both reduce congestion and generate significant economic gains.

Decreasing congestion with driverless cars

Traffic is aggravating, especially for San Francisco residents. According to the Texas A&M Transportation Institute, traffic congestion in the San Francisco-Oakland, CA area costs the average auto commuter 78 hours per year in extra travel time, $1,675 in travel time delays, and an extra 33 gallons of gas compared to free-flow traffic conditions. That means the average commuter spends more than three full days stuck in traffic each year. Unfortunately for these commuters, a potential solution to their problems just left town.

Last month, after California officials told Uber to stop its pilot self-driving car program because it lacked the necessary state permits for autonomous driving, Uber relocated the program from San Francisco to Phoenix, Arizona. To alleviate safety concerns, these self-driving cars are not yet fully driverless, but they still have the potential to reduce the number of cars on the road. Other companies like Google, Tesla, and Ford have announced plans to develop similar technologies, and some experts predict that completely driverless cars will be on the road by 2021.

Until then, however, cities like San Francisco will continue to suffer from the most severe congestion in the country. Commuters in these cities experience serious delays, higher gasoline usage, and lost time behind the wheel. If you live in any of these areas, you are probably very familiar with the mind-numbing effect of sitting through sluggish traffic.

It shouldn’t be surprising, then, that these costs add up to a larger problem for economic growth. New Mercatus research finds that traffic congestion can significantly harm economic growth and concludes with optimistic predictions for how autonomous vehicles could help.

Brookings Senior Fellow Clifford Winston and Yale JD candidate Quentin Karpilow find significant negative effects of traffic congestion on the growth rates of California counties’ gross domestic product (GDP), employment, wages, and commodity freight flows. They find that a 10% reduction in congestion in a California urban area increases both job and GDP growth by roughly 0.25% and wage growth by approximately 0.18%.

This is the first comprehensive model built to understand how traffic harms the economy, and it builds on past research that has found that highway congestion leads to slower job growth. Similarly, congestion in West Coast ports, which occurs while dockworkers and marine terminal employers negotiate contracts, has caused perishable commodities to go bad, resulting in a 0.2 percentage point reduction in GDP during the first quarter of 2015.

There are two main ways to solve the congestion problem: reduce the number of cars on the road or increase road capacity. Economists have found that the “build more roads” approach has in practice been quite wasteful, since the new capacity usually just induces additional highway traffic that quickly fills it.

A common proposal for reducing the number of cars on the road is congestion pricing: highway tolls that vary with the number of drivers using the road. Raising the cost of travel during peak periods gives drivers an incentive to plan their trips more strategically, usually by shifting less essential trips to a different time or by carpooling. Another Mercatus study finds that different forms of congestion pricing have been effective at reducing traffic congestion internationally, in London and Stockholm, as well as in cities in Southern California.
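
To make the mechanism concrete, here is a minimal sketch of how a volume-based toll might be computed. The capacity and toll levels are made-up numbers for illustration, not figures from any of the studies cited here.

```python
# Illustrative sketch: a congestion toll that scales with observed traffic volume.
# All numbers (road capacity, toll levels) are hypothetical, not from the Mercatus study.

def congestion_toll(vehicles_per_hour: float, capacity: float,
                    base_toll: float = 0.0, peak_toll: float = 8.0) -> float:
    """Return a toll that rises linearly once traffic passes 60% of road capacity."""
    utilization = vehicles_per_hour / capacity
    if utilization <= 0.6:
        return base_toll
    # Scale the toll from base_toll up to peak_toll as utilization rises from 60% to 100%.
    share_of_peak = min((utilization - 0.6) / 0.4, 1.0)
    return base_toll + share_of_peak * (peak_toll - base_toll)

# A rush-hour driver (5,500 vehicles/hour on a 6,000-vehicle/hour road) pays roughly $6.30,
# while a mid-day driver (2,000 vehicles/hour) pays nothing, nudging flexible trips off-peak.
print(congestion_toll(5500, 6000))
print(congestion_toll(2000, 6000))
```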

The main drawback of this proposal, however, is the political difficulty of implementation, especially on interstate highways, where more than one jurisdiction has to approve the tolls. Even though surveys show that drivers generally come around to supporting congestion pricing once they experience the lower congestion that results from tolling, getting them on board in the first place can be difficult.

Those skeptical of congestion pricing, or merely looking for a less politically challenging policy, can look to the emerging technology of driverless cars. The authors of the recent Mercatus study, Winston and Karpilow, find that the adoption of autonomous vehicles could have large macroeconomic stimulative effects.

For California specifically, if just half of vehicles became driverless, this would create nearly 350,000 additional jobs, increase the state’s GDP by $35 billion, and raise workers’ earnings by nearly $15 billion. Extrapolating this to the whole country, the change could add at least 3 million jobs, raise the nation’s annual growth rate by 1.8 percentage points, and raise annual labor earnings by more than $100 billion.

What would this mean for the most congested cities? Using Winston and Karpilow’s estimates, I calculated how reduced congestion from increased autonomous car usage could affect the Metropolitan Statistical Areas (MSAs) that include New York City, Los Angeles, Boston, San Francisco, and the DC area. The first chart shows the number of jobs that would have been added in 2011 if 50% of motor vehicles had been driverless. The second chart shows how this would affect real GDP per capita, revealing that the San Francisco MSA would have the most to gain, with the others following close behind.

[Charts: jobs added and real GDP per capita gains from autonomous vehicles, by MSA]
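
For a sense of the arithmetic behind charts like these, the sketch below applies the elasticities quoted above to a hypothetical MSA. The assumed congestion reduction from 50% driverless adoption and the baseline employment, GDP, and wage figures are placeholders rather than the actual 2011 data, and the growth-rate effect is treated as a simple one-year gain.

```python
# Back-of-the-envelope sketch of the MSA calculation described above. The elasticities are
# the Winston-Karpilow figures quoted earlier (a 10% congestion cut raises job and GDP growth
# by ~0.25% and wage growth by ~0.18%). The assumed congestion reduction and the baseline
# figures for the example MSA are placeholders, not the data behind the charts.

JOBS_GDP_BOOST_PER_10PCT_CUT = 0.0025   # +0.25% job and GDP growth per 10% congestion reduction
WAGE_BOOST_PER_10PCT_CUT = 0.0018       # +0.18% wage growth per 10% congestion reduction

def growth_gains(congestion_cut_pct, baseline_jobs, baseline_gdp, baseline_wages):
    """Scale the reported elasticities linearly with the size of the congestion cut."""
    scale = congestion_cut_pct / 10.0
    extra_jobs = baseline_jobs * JOBS_GDP_BOOST_PER_10PCT_CUT * scale
    extra_gdp = baseline_gdp * JOBS_GDP_BOOST_PER_10PCT_CUT * scale
    extra_wages = baseline_wages * WAGE_BOOST_PER_10PCT_CUT * scale
    return extra_jobs, extra_gdp, extra_wages

# Hypothetical MSA: 2.2 million jobs, $380 billion GDP, $150 billion in wages, and an
# assumed 30% congestion reduction from half the fleet becoming driverless.
jobs, gdp, wages = growth_gains(30.0, 2_200_000, 380e9, 150e9)
print(f"additional jobs:  {jobs:,.0f}")         # ~16,500
print(f"additional GDP:   ${gdp / 1e9:.2f}B")   # ~$2.85 billion
print(f"additional wages: ${wages / 1e9:.2f}B") # ~$0.81 billion
```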

As with any new technology, there is uncertainty about exactly how autonomous cars will be developed and integrated into cities. But with pilot programs already underway from Uber in Pittsburgh and nuTonomy in Singapore, the technology is clearly maturing.

With approximately $1,332 in GDP per capita and 45,318 potential jobs on the table for the San Francisco Metropolitan Statistical Area, it is a shame that San Francisco just missed a chance to realize some of these gains and to be at the forefront of autonomous vehicle implementation.

Congestion taxes can make society worse off

A new paper by Jeffrey Brinkman in the Journal of Urban Economics (working version here) analyzes two phenomena that are pervasive in urban economics—congestion costs and agglomeration economies. What’s interesting about this paper is that it formalizes the tradeoff that exists between the two. As stated in the abstract:

“Congestion costs in urban areas are significant and clearly represent a negative externality. Nonetheless, economists also recognize the production advantages of urban density in the form of positive agglomeration externalities.”

Agglomeration economies is a term used to describe the benefits that occur when firms and workers are in proximity to one another. This behavior results in firm clusters and cities. In regard to the existence of agglomeration economies, economist Ed Glaeser writes:

“The concentration of people and industries has long been seen by economists as evidence for the existence of agglomeration economies. After all, why would so many people suffer the inconvenience of crowding into the island of Manhattan if there weren’t also advantages from being close to so much economic activity?”

Since congestion results from the same high population density that produces agglomeration economies, there is a tradeoff between the two. Decreasing congestion costs ultimately means spreading out people and firms so that both are more evenly distributed across space. Other modes of transportation such as buses, bikes, and subways may alleviate some congestion without changing the location of firms, but the examples of London and New York City, which have robust public transportation systems and still a large amount of congestion, show that such a strategy has its limits.

The typical congestion analysis correctly states that commuters not only face a private cost from driving into the city, but also impose a cost on others in the form of more traffic that slows everyone down. Since they do not consider this external cost when deciding whether or not to commute, the result is too much traffic.

In economic jargon, the cost to society due to an additional commuter—the marginal social cost (MSC)—is greater than the private cost to the individual—the marginal private cost (MPC). The result is that too many people commute, traffic is too high and society experiences a deadweight loss (DWL). We can depict this analysis using the basic marginal benefit/cost framework.

[Figure: congestion diagram 1]

In this diagram the MSC line is higher than the MPC line, so the traffic level that results from equating the driver’s marginal benefit (MB) with her MPC, labeled CH, is too high. The result is the red deadweight loss triangle, which reduces society’s welfare. The correct amount is C*, the level at which the MB intersects the MSC.

The economist’s solution to this problem is to levy a tax equal to the difference between the MSC and the MPC. This difference is sometimes referred to as the marginal damage cost (MDC), and it equals the external cost imposed on society by an additional commuter. The tax aligns the MPC with the MSC and induces the correct amount of traffic, C*. London is one of the few cities with a congestion charge intended to alleviate inner-city congestion.
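
Here is a minimal numerical version of that textbook analysis, using made-up linear curves; nothing below comes from Brinkman’s paper, it simply reproduces the logic of the diagram.

```python
# Minimal sketch of the Pigouvian analysis above, with assumed linear curves.
# MB(c): marginal benefit of the c-th trip; MPC(c): the private cost the driver bears;
# MSC(c): the cost society bears once the slowdown imposed on others is included.

def MB(c):  return 10.0 - 0.010 * c   # marginal benefit of the c-th commuter
def MPC(c): return 2.0 + 0.004 * c    # marginal private cost
def MSC(c): return 2.0 + 0.008 * c    # marginal social cost (MPC plus external damage)

def crossing(f, g, lo=0.0, hi=2000.0, tol=1e-6):
    """Bisection: find the traffic level where curve f (starting above g) crosses curve g."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > g(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c_high = crossing(MB, MPC)             # unregulated equilibrium CH: drivers set MB = MPC
c_star = crossing(MB, MSC)             # efficient level C*: MB = MSC
tax    = MSC(c_star) - MPC(c_star)     # Pigouvian tax = marginal damage cost at C*

print(round(c_high), round(c_star), round(tax, 2))   # ~571, ~444, ~1.78
```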

But this analysis gets more complicated if an activity has external benefits along with external costs. In that case the diagram would look like this:

[Figure: congestion diagram 2]

Now there is a marginal social benefit associated with traffic, reflecting agglomeration economies, that causes the benefit to society to diverge from the driver’s own marginal benefit. In this case the efficient amount of traffic is C**, which is where the MSC line intersects the MSB line. Imposing a congestion tax equal to the MDC still eliminates the red DWL, but it creates the smaller blue DWL because it reduces traffic too much. This occurs because the congestion tax does not take into account the positive effects of agglomeration economies.

One solution would be to impose a congestion tax equal to the MDC and then pay a subsidy equal to the distance between the MSB and the MB lines. This would align the private benefits and costs with the social benefits and costs and lead to C**. Alternatively, since in this example the cost gap is greater than the benefit gap, the government could levy a smaller tax. This is shown below.

[Figure: congestion diagram 3]

In this case the tax is reduced to the gap between the dotted red line and the MPC curve, and it leads to the correct amount of traffic: it raises the private cost just enough to bring the traffic level down from CH to C**, the efficient amount (the point where the MSB intersects the MSC).
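
Extending the earlier sketch with an assumed agglomeration benefit shows how the corrected tax shrinks. The external benefit of 1.0 per trip is invented for illustration; with linear curves the crossings can be solved in closed form.

```python
# Same assumed linear curves as the previous sketch, now with a constant external
# agglomeration benefit added on the benefit side. All numbers remain illustrative.

a_mb, b_mb  = 10.0, 0.010   # MB(c)  = a_mb - b_mb * c
a_pc, b_pc  =  2.0, 0.004   # MPC(c) = a_pc + b_pc * c
a_sc, b_sc  =  2.0, 0.008   # MSC(c) = a_sc + b_sc * c
ext_benefit =  1.0          # MSB(c) = MB(c) + ext_benefit

c_star  = (a_mb - a_sc) / (b_mb + b_sc)                 # MB = MSC (ignores agglomeration)
c_2star = (a_mb + ext_benefit - a_sc) / (b_mb + b_sc)   # MSB = MSC (the true optimum C**)

naive_tax   = (a_sc + b_sc * c_star)  - (a_pc + b_pc * c_star)    # full MDC, evaluated at C*
correct_tax = (a_mb - b_mb * c_2star) - (a_pc + b_pc * c_2star)   # gap that lands drivers at C**

# The naive MDC tax stops traffic at C* (~444), below the true optimum C** (500);
# the smaller corrected tax (~1.0 instead of ~1.78) gets drivers exactly to C**.
print(round(c_star), round(c_2star), round(naive_tax, 2), round(correct_tax, 2))
```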

If city officials ignore the positive effect of agglomeration economies on productivity when calculating their congestion taxes, they may set the tax too high. Overall welfare may still improve even if the tax is too high (it depends on the size of the DWL when no tax is implemented), but society will not be as well off as it would be if the positive agglomeration effects were taken into account. Alternatively, if the gap between the MSB and the MB is greater than the cost gap, any positive tax would reduce welfare, since the correct policy would be a subsidy.

This paper reminds me that the world is complicated. While taxing activities that generate negative externalities and subsidizing activities that generate positive externalities is economically sound, calculating the appropriate tax or subsidy is often difficult in practice. And, as the preceding analysis demonstrated, sometimes both need to be calculated in order to implement the appropriate policy.

City population dynamics since 1850

The reason why some cities grow and some cities shrink is a heavily debated topic in economics, sociology, urban planning, and public administration. In truth, there is no single reason why a city declines. Often exogenous factors – new modes of transportation, increased globalization, institutional changes, and federal policies – initiate the decline while subsequent poor political management can exacerbate it. This post focuses on the population trends of America’s largest cities since 1850 and how changes in these factors affected the distribution of people within the US.

When water transportation, water power, and proximity to natural resources such as coal were the most important factors driving industrial productivity, businesses and people congregated in locations near major waterways for power and shipping purposes. The graph below shows the top 10 cities* by population in 1850 and follows them until 1900. The rank of the city is on the left axis.

[Figure: top 10 cities by population, 1850-1900]


* The 9th, 11th, and 12th ranked cities in 1850 were all incorporated into Philadelphia by 1860. Pittsburgh was the next-highest-ranked city (13th) that was not incorporated, so I used it in the graph instead.

All of the largest cities were located on heavily traveled rivers (New Orleans, Cincinnati, Pittsburgh, and St. Louis) or on the coast and had busy ports (New York, Boston, Philadelphia, Brooklyn, and Baltimore). Albany, NY may seem like an outlier but it was the starting point of the Erie Canal.

As economist Ed Glaeser (2005) notes “…almost every large northern city in the US as of 1860 became an industrial powerhouse over the next 60 years as factories started in central locations where they could save transport costs and make use of large urban labor forces.”

Along with waterways, railroads were an important mode of transportation from 1850 to 1900, and many of these cities had important railroads running through them, such as the B&O through Baltimore and the Erie Railroad in New York. The increasing importance of railroads shaped the list of top 10 cities in 1900, as shown below.

[Figure: top 10 cities by population, 1900-1950]

A similar but not identical set of cities dominated the urban landscape over the next 50 years. By 1900, New Orleans, Brooklyn (which merged with New York), Albany, and Pittsburgh had been replaced by Chicago, Cleveland, Buffalo, and San Francisco. Chicago, Cleveland, and Buffalo are all located on the Great Lakes and thus had water access, but it was the increasing importance of railroad shipping and travel that helped their populations grow. Buffalo was on the B&O railroad and was also the terminal point of the Erie Canal. San Francisco became much more accessible after the completion of the Pacific Railroad in 1869, but it was the California Gold Rush of the late 1840s that got its population growth started.

As rail and eventually automobile and truck transportation became more important during the early 1900s, cities that relied on strategic river locations began to decline. New Orleans was already out of the top 10 by 1900 (falling from 5th to 12th), and Cincinnati went from 10th in 1900 to 18th by 1950. Buffalo also fell out of the top 10 during this period, declining from 8th to 15th. Despite these changes in the rankings, there was only one warm-weather city in the top 10 as late as 1950 (Los Angeles). However, as the next graph shows, a surge in the populations of warm-weather cities between 1950 and 2010 pushed many of the older Midwestern cities out of the rankings.

[Figure: top 10 cities by population, 1950-2010]

The largest shakeup in the population rankings occurred during this period. Of the top 10 cities in 1950, only four (Philadelphia, Los Angeles, Chicago, and New York) were still in the top 10 in 2010. All four were in the top 5, and Houston (4th in 2010) was the only other top-5 city, having ranked 14th in 1950. The cities ranked 6 through 10 in 1950 fell out of the top 20, while Detroit declined from 5th to 18th. The large change in the rankings during this period is striking compared with the relative stability of the earlier periods.

Economic changes due to globalization and the prevalence of right-to-work laws in the southern states, combined with preferences for warm weather and other factors, have resulted in both population and economic decline in many major Midwestern and Northeastern cities. All of the new cities in the 2010 top ten have relatively warm weather: Phoenix, San Antonio, San Diego, Dallas, and San Jose. Some large cities missing from the 2010 list, particularly San Francisco and perhaps Washington D.C. and Boston as well, would probably rank higher if not for restrictive land-use regulations that artificially increase housing prices and limit population growth. In those cities and in other, smaller cities, primarily located in Southern California, low population growth is a goal rather than a result of outside forces.

The only cold-weather cities that were in the top 15 in 2014 that were not in the top 5 in 1950 were Indianapolis, IN (14th) and Columbus, OH (15th). These two cities not only avoided the fate of nearby Detroit and Cleveland, they thrived. From 1950 to 2014 Columbus’ population grew by 122% and Indianapolis’ grew by 99%. This is striking compared to the 57% decline in Cleveland and the 63% decline in Detroit during the same time period.

So why have Columbus and Indianapolis grown since 1950 while every other large city in the Midwest has declined? There isn’t an obvious answer. One thing among many that both Columbus and Indianapolis have in common is that they are both state capitals. State spending as a percentage of Gross State Product (GSP) has been increasing since 1970 across the country as shown in the graph below.

[Figure: Ohio and Indiana state spending as a percentage of GSP]

In Ohio state spending growth as a percentage of GSP has outpaced the nation since 1970. It is possible that increased state spending in Ohio and Indiana is crowding out private investment in other parts of those states. And since much of the money collected by the state ends up being spent in the capital via government wages, both Columbus and Indianapolis grow relative to other cities in their respective states.

There has also been an increase in state-level regulation over time. As state governments become larger players in the economy, business leaders find it more and more beneficial to be near state legislators and governors in order to lobby for regulations that help their company or for exemptions from rules that harm it. Company executives who fail to get a seat at the table when regulations are being drafted may find that their competitors have helped draft rules that put them at a competitive disadvantage. The decline of manufacturing in the Midwest may have created an urban reset that gave firms and workers an opportunity to migrate to areas with a relative abundance of an increasingly important factor of production: government.

We don’t need more federal infrastructure spending

Many of the presidential candidates on both sides of the aisle have expressed interest in fixing America’s infrastructure, including Donald Trump, Hillary Clinton, and Bernie Sanders. All of them claim that America’s roads and bridges are crumbling and that more money, often in the form of tax increases, is needed before they fall into further disrepair.

The provision of basic infrastructure is one of the most economically sound purposes of government. Good roads, bridges, and ports facilitate economic transactions and the exchange of ideas, which helps foster innovation and economic growth. There is certainly room to debate which level of government (federal, state, or local) should provide which type of infrastructure, but I want to start by examining US infrastructure spending over time. To hear the candidates talk, one would think that infrastructure spending has fallen off a cliff. What else could explain the current derelict state?

A quick look at the data shows that this simply isn’t true. A 2015 CBO report on public spending on transportation and water infrastructure provides the following figure.

[Figure: CBO data on public spending on transportation and water infrastructure]

In inflation-adjusted dollars (the top panel), infrastructure spending has exhibited a positive trend and was higher on average after 1992, following the completion of the interstate highway system. (By the way, the original estimate for the interstate system was $25 billion over 12 years; it ended up costing $114 billion over 35 years.)

The bottom panel shows that spending as a % of GDP has declined since the early 80s, but it has never been very high, topping out at approximately 6% in 1965. Since the top panel shows an increase in the level of spending, the decline relative to GDP is due to the government increasing spending in other areas over this time period, not cutting spending on infrastructure.

The increase in the level of spending over time is further revealed when looking at per capita spending. Using the data from the CBO report and US population data I created the following figure (dollars are adjusted for inflation and are in 2014 dollars).

[Figure: US infrastructure spending per capita, in 2014 dollars]
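
The transformation behind this figure is straightforward: convert each year’s nominal spending into 2014 dollars and divide by that year’s population. The sketch below shows the calculation; the three sample rows are placeholders, not the actual CBO and Census series used here.

```python
# Sketch of the per capita calculation described above: deflate nominal spending to 2014
# dollars and divide by population. The sample rows are illustrative placeholders, not the
# actual CBO or Census data behind the figure.

# year: (nominal spending in $billions, price deflator with 2014 = 1.00, population in millions)
sample = {
    1980: ( 60.0, 0.35, 227.2),
    1995: (120.0, 0.65, 266.3),
    2014: (220.0, 1.00, 318.9),
}

for year, (nominal_billions, deflator_2014, pop_millions) in sorted(sample.items()):
    real_billions = nominal_billions / deflator_2014           # express in 2014 dollars
    per_capita = real_billions * 1e9 / (pop_millions * 1e6)    # 2014 dollars per person
    print(f"{year}: ${per_capita:,.0f} per capita (2014 dollars)")
```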

The top green line is total spending per capita, the middle red line is state and local spending with federal grants and loan subsidies subtracted out, and the bottom blue line is federal spending. Federal spending per capita has remained relatively flat, while state and local spending experienced a big jump in the late 1980s, which increased the total as well. This graph shows that infrastructure spending has largely increased when adjusted for inflation and population. It’s true that spending is down since the early 2000s, but it’s still higher than at any point prior to the early 1990s and higher than it was during the 35-year construction of the interstate highway system.

Another interesting thing that jumps out is that state and local governments provide the bulk of infrastructure spending. The graph below depicts the percentage of total infrastructure spending that is done by state and local governments.

[Figure: state and local spending as a percentage of total infrastructure spending]

As shown in the graph state and local spending on infrastructure has accounted for roughly 75% of total infrastructure spending since the late 80s. Prior to that it averaged about 70% except for a dip to around 65% in the late 70s.

All of this data shows that the federal government, at least in terms of spending, has not ignored the country’s infrastructure over the last 50-plus years, despite the rhetoric one hears from the campaign trail. In fact, on a per capita basis total infrastructure spending has increased since the early 1980s, driven primarily by state and local governments.

And this brings up a second important point: state and local governments are, and have always been, the primary source of infrastructure spending. The federal government has historically played a small role in building and maintaining roads, bridges, and water infrastructure, and for good reason. As my colleague Veronique de Rugy has pointed out:

“…infrastructure spending by the federal government tends to suffer from massive cost overruns, waste, fraud, and abuse. As a result, many projects that look good on paper turn out to have much lower return on investments than planned.”

As evidence she notes that:

“According to the Danish researchers, American cost overruns reached on average $55 billion per year. This figure includes famous disasters like the Central Artery/Tunnel Project (CA/T), better known as the Boston Big Dig. By the time the Beantown highway project—the most expensive in American history—was completed in 2008 its price tag was a staggering $22 billion. The estimated cost in 1985 was $2.8 billion. The Big Dig also wrapped up 7 years behind schedule.”

Since state and local governments are doing the bulk of the financing anyway, and most infrastructure is local in nature, it is best to keep the federal government out as much as possible. States are also more likely to experiment with private methods of infrastructure funding. As de Rugy points out:

“…a number of states have started to finance and operate highways privately. In 1995, Virginia opened the Dulles Greenway, a 14-mile highway, paid for by private bond and equity issues. Similar private highway projects have been completed, or are being pursued, in California, Maryland, Minnesota, North Carolina, South Carolina, and Texas. In Indiana, Governor Mitch Daniels leased the highways and made a $4 billion profit for the state’s taxpayers. Consumers in Indiana were better off: the deal not only saved money, but the quality of the roads improved as they were run more efficiently.”

It remains an open question exactly how much more money should be devoted to America’s infrastructure. But even if the amount is substantial, it’s not clear that the federal government needs to be any more involved than it already is. Infrastructure is largely a state and local issue, and that is where the taxing and spending should take place, not in Washington D.C.


Paving over pension liabilities, again

Public sector pensions are subject to a variety of accounting and actuarial manipulations. Much of this lack of funding discipline, I’ve argued, is due to the mal-incentives public sector sponsors face when it comes to fully funding employee pensions. Discount rate assumptions, asset smoothing, and altered amortization schedules are three of the most common maneuvers used to make pension payments easier on the sponsor. Short-sighted politicians don’t always want to pay the full bill when they can use revenues for other things. The problem with these tactics is that they can also lead to underfunding, basically kicking the can down the road.

Private sector plans are not immune to government-sanctioned accounting subterfuges. Last week’s Wall Street Journal reported on just one such technique.

President Obama recently signed a $10.8 billion transportation bill that also included a provision allowing companies to continue “pension smoothing” for 10 more months. The result is to lower these companies’ contributions to employee pension plans. It’s also a federal revenue device: since pension contributions are tax-deductible, smaller contributions mean smaller deductions, so these companies will have slightly higher tax bills this year. Those taxes go to help fund federal transportation spending under the recently signed legislation.

A little bit less is put into private-sector pension plans and a little bit more is put into the government’s coffers.

The WSJ notes that the top 100 private pension plans could see their $44 billion in required pension contributions reduced by 30 percent, adding an estimated $2.3 billion to the funding deficit of private pension plans. That is poor discipline considering the uneven condition of a lot of private plans, which are backed by the Pension Benefit Guaranty Corporation (PBGC).

My colleague Jason Fichtner and I drew attention to these subtle accounting dodges triggered by last year’s transportation bill. In “Paving over Pension Liabilities,” we call out discount rate manipulation, used by corporations and encouraged by Congress, that has basically the same effect: redirecting a portion of companies’ reduced pension payments to the federal government in order to finance transportation spending. The small change in corporate plans’ discount rates translates into an extra $8.8 billion for the federal government over 10 years.
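
To see why a discount rate tweak moves money toward the Treasury, consider a toy example. A higher assumed discount rate shrinks the present value of a promised benefit, which shrinks the required (tax-deductible) contribution and leaves more corporate income exposed to tax. Every number below (the benefit, the rates, the horizon, the tax rate) is an assumption for illustration, not a figure from the bill or our paper.

```python
# Toy illustration of the discount rate mechanism described above. All inputs are assumptions.

def present_value(benefit: float, rate: float, years: int) -> float:
    """Present value today of a single benefit payment due `years` from now."""
    return benefit / (1 + rate) ** years

benefit_due = 100_000.0     # pension benefit owed 20 years from now
corporate_tax_rate = 0.35   # assumed marginal tax rate on the sponsor's income

pv_low_rate  = present_value(benefit_due, 0.05, 20)   # stricter (lower) discount rate
pv_high_rate = present_value(benefit_due, 0.07, 20)   # "smoothed" (higher) discount rate

contribution_cut = pv_low_rate - pv_high_rate         # contribution deferred by smoothing
extra_tax = contribution_cut * corporate_tax_rate     # deduction lost, so the tax bill rises

print(f"required contribution at 5%: ${pv_low_rate:,.0f}")       # ~$37,689
print(f"required contribution at 7%: ${pv_high_rate:,.0f}")      # ~$25,842
print(f"contribution deferred:       ${contribution_cut:,.0f}")  # ~$11,847
print(f"extra federal tax collected: ${extra_tax:,.0f}")         # ~$4,146
```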

The AFL-CIO isn’t worried about these gimmicks. It argues that pension smoothing makes life easier for the sponsor and thus makes offering a defined benefit plan “less daunting.” But such “politically-opportunistic accounting” (a term defined by economist Odd Stalebrink) is basically a means of covering up reality, like paying only a portion of your credit card bill or mortgage. Do it long enough and you’ll eventually forget how much those shopping sprees and your house actually cost.

Do We Need Speed Limits to Drive Safely?

A recent RegBlog post discussed a paper by van Benthem, which suggests that the social costs of higher speed limits outweigh their benefits. The paper examines data from a natural experiment, the 1995 repeal of the National Maximum Speed Law, which led many states to increase their speed limits. Yet both the RegBlog post and the paper miss a larger question: lower driving speeds may be safer, but do we need government-imposed speed limits to drive at safe speeds?

In the study, van Benthem finds that “a 10 mph speed limit increase on highways leads to a 3-4 mph increase in travel speed, 9-15% more accidents, 34-60% more fatal accidents.” Thus, he concludes that the gap between the private benefits and social costs of faster driving provides a strong rationale for speed limits. (While the paper looks at both traffic fatalities and increased pollution levels, I focus on traffic safety; as the RegBlog post points out, there are alternatives to speed limits for dealing with pollution, e.g. emission standards.)

However, there is a natural experiment that van Benthem does not discuss: only a third of highways in Germany (mostly around urban areas) have permanent speed limits. On the remaining portion of highways, drivers choose their own speed. The data indicate that there is little difference between traffic fatality rates on highways with and without speed limits. In fact, over the last 20 years, the number of highway traffic fatalities in Germany decreased by 71% despite a 17% increase in the number of vehicles on the road and a 25% increase in traffic flow. At 5.6 deaths per billion vehicle-kilometers driven, Germany’s traffic fatality rate is lower than the US rate (6.83) or even France’s (7.01). Apparently, German drivers are able to choose safe driving speeds even without government prodding.

Entrusting drivers with responsibility for their own safety and the safety of those around them is also behind another natural experiment adopted in several European cities: the concept of shared spaces. These cities are doing away with the thicket of street signs, streetlights, and in some cases even sidewalks at some busy intersections. Instead, cars and pedestrians share the road, negotiating their way as they go. While this may sound like a disaster waiting to happen, the cities report fewer accidents and increased foot traffic for businesses along the roads. The key to the concept’s success lies in drivers’ psychology: drivers compensate for the lack of predictable traffic rules by paying attention to their surroundings and being more considerate of others. As Hans Monderman, a proponent of shared spaces, points out, “The many rules strip us of the most important thing: the ability to be considerate… The greater the number of prescriptions, the more people’s sense of personal responsibility dwindles.”

For proponents of social regulation, stringent rules are the go-to response to all social ills. Yet, as European experience with traffic demonstrates, regulation is not the only alternative, and it may not even be the best one. Crazy as it may sound to some, treating people as responsible adults and trusting them to make the right choices may in fact lead to better social outcomes for all.

Delaying the Rearview Camera Rule is Good for the Poor

A few weeks ago, the Department of Transportation (DOT) announced it would delay implementation of a regulation requiring that rearview cameras be installed in new automobiles. The rule was designed to prevent backover accidents by increasing drivers’ fields of vision to include the area behind and underneath vehicles. The DOT said more research was needed before finalizing the regulation, but there is another, perhaps more important reason for delaying the rule. The costs of this rule, and many others like it, weigh most heavily on those with low incomes, while the benefits cater to the preferences of those who are better-off financially.

The rearview camera regulation was expected to increase the cost of an automobile by approximately $200. This may not seem like much money, but it means a person buying a new car has less money on hand to spend on other items that improve quality of life, such as healthcare or healthier food. Those who already have access to quality healthcare services, or who shop regularly at high-end supermarkets like Whole Foods, may prefer having the risk of a backup accident reduced over keeping the additional $200 spent on a new car. Alternatively, those who don’t have easy access to healthcare or healthy food may well prefer the $200.

A lot of regulation is really about reducing risks. Some risks pose large dangers, like the risk of radiation exposure (or death) if you are within range of a nuclear blast. Some risks pose small dangers, like a mosquito bite. Some risks are very likely, like the risk of stubbing your toe at some point in your lifetime, while other risks are very remote, like the chance that the Earth will be hit by a gigantic asteroid next week.

Risks are everywhere and can never be eliminated entirely from life. If we tried to eliminate every risk we face, we’d all live like John Travolta in the movie The Boy in the Plastic Bubble (and of course, he could still be hit by an asteroid!). The question we need to ask ourselves is: how do we manage risks in a way that makes the most sense given society’s limited resources? We may also want to ask ourselves to what degree distributional effects matter as we consider which risks to mitigate.

There are two main ways that society can manage risks. First, we can manage risks we face privately, say by choosing to eat vegetables often or to go to the gym. In this way, a person can reduce the risk of cardiovascular disease, a leading cause of death in the United States, as well as other health problems. We can also choose to manage risks publicly, say through regulation or other government action. For example, the government passes laws requiring everyone to get vaccinated against certain illnesses, and this reduces the risk of getting sick from those around us.

Not surprisingly, low income families spend less on private risk mitigation than high income families do. Similarly, those who live in lower income areas tend to face higher mortality risks from a whole host of factors (e.g. accidents, homicide, cancer), when compared to those who live in wealthier neighborhoods. People with higher incomes tend to demand more risk reduction, just as they demand more of other goods or services. Therefore, spending money to reduce very low probability risks, like the risk of being backed over by a car in reverse, is more in line with preferences of the wealthy, since the wealthy will demand more risk reduction of this sort than the poor will.

Such a rule may also result in unintended consequences. Just as seat belts have been shown to lead people to drive faster, relying on a rearview camera when driving in reverse may lead people to be less careful about backing up. For example, someone could be running outside of the camera’s view and only come into view just as he or she is hit by the car. Relying entirely on cameras may therefore increase the risk of some people getting hit.

When the government intervenes and reduces risks for us, it is choosing for us which risks are most important and forcing everyone in society to pay to address them. But not all risks are the same. In the case of the rearview camera rule, everyone must pay the extra money for the new device in the car (unless they forgo buying a new car, which carries risks of its own), yet the risk of a backup crash is small relative to other risks. Simply moving out of a low-income neighborhood can reduce a whole host of risks that low-income families face. By forcing the poor to pay to reduce tiny-probability events, DOT is essentially saying poor people shouldn’t have the option of reducing the larger risks they face. Instead, the poor must share the burden of reducing risks that are more in line with the preferences of the wealthy, who have likely already paid to reduce the types of risks that low-income families still face.

Politicians and regulators like to claim that they are saving lives with regulation and just leave it at that. But the reality is often much more complicated with unintended consequences and regressive effects. Regulations have costs and those costs often fall disproportionately on those with the least ability to pay. Regulations also involve tradeoffs that leave some groups better off, while making other groups worse off. When one of the groups made worse off is the poor, we should think very carefully before proceeding with a policy, no matter how well intentioned policymakers may be.

The DOT is delaying the rearview camera rule so it can conduct more research on the issue. This is a sensible decision. Everyone wants to reduce the prevalence of backover accidents, but we should be looking for ways to achieve this goal that don’t disadvantage the least well off in society.

WMATA’s failures are institutional, not personal

Chris Barnes, who writes the DC blog FixWMATA, is supporting a petition to replace the Board of Directors of the Washington Metropolitan Area Transit Authority. Frustration with the transit agency is growing among Washington-area residents as ongoing system repairs have made weekend service increasingly unusable. The situation has led to the birth of multiple blogs documenting WMATA’s failures. As an intern in DC from the Czech Republic recently summed up the situation, “Metro is both terrible and expensive.”

While the need for reforms at WMATA is clear, replacing the Board of Directors is unlikely to lead to significant improvements in the system. Rather, WMATA’s problems are institutional, and new actors facing the same incentives as the current Board are unlikely to produce better results. Some of the institutions preventing Metro from delivering reasonable quality at a reasonable cost include:

1) Union work rules. Stephen Smith, my co-blogger at Market Urbanism, has done an excellent job of explaining how union work rules make transit needlessly expensive. One of the biggest culprits is the requirement that shifts be at least eight hours long and the restriction on hiring part-time workers. WMATA rationally runs trains and buses more often during morning and evening rush hours, but the eight-hour shift requirement prevents it from staffing those periods at levels above mid-day staffing. Combined with the above-market wages and benefits that WMATA employees receive, these bloated labor costs prevent WMATA from achieving a higher farebox recovery rate and from having more resources to dedicate to needed capital improvements.

2) Intergovernmental transfers. Over half of WMATA’s current capital improvement budget comes from the federal government, meaning that while the benefits of the system are narrowly bestowed on riders, a large share of the capital improvement costs is spread across U.S. taxpayers. This wide dispersal of costs permits much more expensive transit than would be tolerated if all funding came from the localities that benefit from the system. Furthermore, with funds coming from the District, Maryland, Virginia, and the federal government, the flypaper effect comes into play: a $100 million infusion from the federal government will not reduce the cost borne by local taxpayers by $100 million; rather, total spending on the project will increase with grants from higher levels of government. Absent incentives to spend this money well, WMATA demonstrates that high levels of federal funding will not necessarily result in efficiently executed capital improvements.

At Pedestrian Observations, Alon Levy compares transit construction costs across countries and finds that U.S. construction costs are exorbitant. The reasons for these cost disparities are many and not well understood. One reason for high costs in the U.S., though, may be that the prevalence of federal funding comes with the strings of costly federal regulations.

3) Accountability. While all U.S. transit systems suffer inefficiencies from intergovernmental transfers and union work rules, DC’s Metro has a unique governance structure that seems to produce particularly bad and costly service. WMATA has the blessing and the curse of being multijurisdictional. On the one hand, the Washington region is not plagued by the agency turf wars that New York City transit sees. Several of the system’s rail lines run through Virginia, DC, and Maryland, providing many infrastructure efficiencies and service improvements over a system that would require transfers between jurisdictions.

Despite these opportunities to provide improved service at a lower cost, WMATA’s lack of a single controlling jurisdiction seems to do more harm than good. No politician can take full credit for running WMATA efficiently, so none prioritizes the agency’s performance. It’s a tragedy of the political commons.

Josh Barro has recommended directly electing WMATA’s Board of Directors in order to create elected officials with an incentive to improve service. This institutional change would be more likely to improve outcomes than replacing the current board with new members who would face the same incentives. Clearly, WMATA’s Board of Directors is failing in its job to oversee quality, cost-effective transit for the region; however, replacing the board members without changing the institutions they work under is unlikely to improve outcomes. Intergovernmental transfers and union work rules limit transit efficiency across the country, but WMATA’s interjurisdictional status exacerbates the inefficiency and waste.

Local control over transportation: good in principle but not being practiced

State and local governments know their transportation needs better than Washington D.C. does. But that doesn’t mean state and local governments are necessarily more efficient or less prone to public choice problems when it comes to funding projects, and some of that is due to the intertwined funding streams that make up a transportation budget.

Emily Goff at The Heritage Foundation finds two such examples in the recent transportation bills passed in Virginia and Maryland.

Both Virginia Governor Bob McDonnell and Maryland Governor Martin O’Malley propose raising taxes to fund new transit projects. Virginia will eliminate its gas tax and replace it with an increase in the sales tax. This is a move away from a user-based tax to a more general source of taxation, severing the connection between those who use the roads and those who pay for them. The gas tax is related to road use; the sales tax is barely related at all. That creates a much greater chance that sales tax revenues will be politically diverted to subsidized transit projects (trolleys, trains, and bike paths) rather than used for road improvements.

Maryland reduces its gas tax by five cents, to 18.5 cents per gallon, and imposes a new wholesale tax on motor fuels.

How is the money being spent? In Virginia, 42 percent of the new sales tax revenues will go to mass transit, with the rest going to highway maintenance. As Goff notes, this means lower-income southwestern Virginians will subsidize transit for affluent northern Virginians every time they make a nonfood purchase.

As an example, consider Arlington’s $1 million bus stop. Arlingtonians chipped in $200,000 and the rest came from the Virginia Department of Transportation (VDOT). It’s likely that with a move to the sales tax, we’ll see more of this. And indeed, according to Arlington Now, there’s a plan for 24 more bus stops to complement the proposed Columbia Pike streetcar, a light rail project that is the subject of a lively local debate.

Revenue diversions to big-ticket transit projects are also incentivized by states trying to come up with enough money to secure federal grants for Metrorail extensions (Virginia’s Silver Line to Dulles Airport and Maryland’s Purple Line to New Carrollton).

Truly modernizing and improving roads and mass transit could be better achieved by following a few principles.

  • First, phase out federal transit grants, which encourage states to pursue politically influenced and costly projects that don’t always address commuters’ needs (see the rapid bus versus light rail debate).
  • Second, Virginia and Maryland should move their revenue systems back toward user fees for road improvements. This is increasingly possible with technology and a Vehicle Miles Tax (VMT), which the GAO finds is “more equitable and efficient” than the gas tax (a back-of-the-envelope comparison follows this list).
  • And lastly, improve transit funding. One way to do this is to increase farebox recovery rates; the idea is to bring transit fares in line with the rest of the world.
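
To give a sense of how a VMT could substitute for the gas tax, here is a rough revenue-neutral calculation. The tax rate and fuel-economy figures are assumptions for illustration, not Virginia’s or Maryland’s actual numbers.

```python
# Rough illustration of swapping a per-gallon gas tax for a revenue-neutral per-mile charge.
# The gas tax rate and fleet fuel economy below are assumptions, not actual state figures.

state_gas_tax_per_gallon = 0.20   # hypothetical state gas tax, dollars per gallon
average_mpg = 25.0                # assumed average fleet fuel economy

# The average driver currently pays (tax per gallon) / (miles per gallon) for each mile,
# so that per-mile figure is the revenue-neutral VMT rate for the average vehicle.
vmt_rate_per_mile = state_gas_tax_per_gallon / average_mpg

annual_miles = 12_000
print(f"VMT rate: {vmt_rate_per_mile * 100:.2f} cents per mile")                            # 0.80 cents/mile
print(f"annual charge at {annual_miles:,} miles: ${vmt_rate_per_mile * annual_miles:.0f}")  # $96
```

Unlike the gas tax, the per-mile charge no longer depends on a vehicle’s fuel economy, so it tracks road use, and road wear, more directly.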

Interestingly, Paris, Madrid, and Tokyo have built rail systems at a fraction of the cost of heavily subsidized projects in New York, Boston, and San Francisco. Stephen Smith, writing at Bloomberg, highlights that a big part of the problem in the U.S. is antiquated procurement laws that limit bidders on transit projects and push up costs. These legal restrictions amount to real money. French rail operator SNCF estimated it could cut $30 billion off the proposed $68 billion California high-speed rail project. California rejected the offer and is sticking with the pricier lead contractor.