
Decreasing congestion with driverless cars

Traffic is aggravating, especially for San Francisco residents. According to the Texas A&M Transportation Institute, traffic congestion in the San Francisco-Oakland, CA area costs the average auto commuter 78 hours per year in extra travel time, $1,675 in travel delay costs, and an extra 33 gallons of gas compared to free-flow traffic conditions. That means the average commuter spends more than three full days stuck in traffic each year. Unfortunately for these commuters, a potential solution to their problems just left town.

Last month, after California officials told Uber to stop its pilot self-driving car program because it lacked the necessary state permits for autonomous driving, Uber decided to relocate the program from San Francisco to Phoenix, Arizona. To alleviate safety concerns, these self-driving cars still carry human safety drivers, but the technology has the potential to reduce the number of cars on the road. Other companies like Google, Tesla, and Ford have announced plans to develop similar technologies, and some experts predict that completely driverless cars will be on the road by 2021.

Until then, however, cities like San Francisco will continue to suffer from the most severe congestion in the country. Commuters in these cities experience serious delays, higher gasoline usage, and lost time behind the wheel. If you live in any of these areas, you are probably very familiar with the mind-numbing effect of sitting through sluggish traffic.

It shouldn’t be surprising, then, that these costs can add up to a larger problem for economic growth. New Mercatus research finds that traffic congestion significantly harms economic growth and concludes with optimistic predictions for how autonomous vehicles could help.

Brookings Senior Fellow Clifford Winston and Yale JD candidate Quentin Karpilow find significant negative effects of traffic congestion on the growth rates of California counties’ gross domestic product (GDP), employment, wages, and commodity freight flows. They find that a 10% reduction in congestion in a California urban area increases both job growth and GDP growth by roughly 0.25% and wage growth by approximately 0.18%.
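As a rough illustration, these headline elasticities can be expressed as a simple back-of-envelope function. The linear scaling here is an assumption made for illustration, not a claim from the study:

```python
def growth_boost(congestion_reduction_pct):
    """Percentage-point growth boosts implied by the estimates above:
    a 10% reduction in congestion -> ~0.25% faster job and GDP growth
    and ~0.18% faster wage growth. Assumes (for illustration only)
    that effects scale linearly with the congestion reduction."""
    scale = congestion_reduction_pct / 10.0
    return {
        "job_growth_pct": scale * 0.25,
        "gdp_growth_pct": scale * 0.25,
        "wage_growth_pct": scale * 0.18,
    }

# A 20% reduction in congestion would imply roughly a 0.5 percentage
# point boost to job and GDP growth and 0.36 points to wage growth.
print(growth_boost(20))
```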

This is the first comprehensive model built to understand how traffic harms the economy, and it builds on past research that has found that highway congestion leads to slower job growth. Similarly, congestion in West Coast ports, which occurs while dockworkers and marine terminal employers negotiate contracts, has caused perishable commodities to go bad, resulting in a 0.2 percentage point reduction in GDP during the first quarter of 2015.

There are two main ways to solve the congestion problem: reduce the number of cars on the road or increase road capacity. Economists have found that the “build more roads” approach has been quite wasteful in practice and usually just induces additional highway traffic that quickly fills the new capacity.

A common proposal for reducing the number of cars on the road is congestion pricing: highway tolls that change based on the number of drivers using the road. Increasing the cost of travel during peak times incentivizes drivers to think more strategically about when they plan their trips, usually by shifting less essential trips to a different time or by carpooling. Another Mercatus study finds that different forms of congestion pricing have been effective at reducing traffic congestion internationally in London and Stockholm as well as for cities in Southern California.
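To make the mechanism concrete, here is a toy sketch of a congestion-pricing schedule in which the toll rises nonlinearly as traffic approaches and exceeds the road’s free-flow capacity. The base toll and exponent are hypothetical policy parameters, not values from any of the studies cited:

```python
def toll(volume, capacity, base_toll=1.00, exponent=2):
    """Dollar toll that grows with the ratio of current traffic volume
    to free-flow capacity (hypothetical pricing schedule)."""
    utilization = volume / capacity
    return round(base_toll * utilization ** exponent, 2)

# Off-peak traffic pays little; rush-hour traffic pays a multiple,
# nudging flexible trips toward other times of day.
print(toll(2000, 4000))  # 0.25 -- light traffic
print(toll(6000, 4000))  # 2.25 -- over capacity at rush hour
```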

The main drawback of this proposal, however, is the political difficulty of implementation, especially for interstate highways, which require approval from more than one jurisdiction. Even though surveys show that drivers generally come around to supporting congestion pricing after they experience the lower congestion that tolling produces, getting them on board in the first place can be difficult.

Those skeptical of congestion pricing, or merely looking for a less politically challenging policy, should look to the emerging technology of driverless cars. The authors of the recent Mercatus study, Winston and Karpilow, find that the adoption of autonomous vehicles could have large macroeconomic stimulative effects.

For California specifically, even if just half of vehicles became driverless, this would create nearly 350,000 additional jobs, increase the state’s GDP by $35 billion, and raise workers’ earnings by nearly $15 billion. Extrapolated to the whole country, this could add at least 3 million jobs, raise the nation’s annual growth rate by 1.8 percentage points, and raise annual labor earnings by more than $100 billion.

What would this mean for the most congested cities? Using Winston and Karpilow’s estimates, I calculated how reduced congestion from increased autonomous car usage could affect Metropolitan Statistical Areas (MSAs) that include New York City, Los Angeles, Boston, San Francisco, and the DC area. The first chart shows the number of jobs that would have been added in 2011 if 50% of motor vehicles had been driverless. The second chart shows how this would affect real GDP per capita, revealing that the San Francisco MSA would have the most to gain, but with the others following close behind.

[Charts: jobs added and change in real GDP per capita by MSA under 50% autonomous vehicle adoption]

As with any new technology, there is uncertainty about exactly how autonomous cars will be developed and integrated into cities. But with pilot programs already underway from Uber in Pittsburgh and nuTonomy in Singapore, the technology is clearly maturing.

With approximately $1,332 in additional GDP per capita and 45,318 potential jobs on the table for the San Francisco Metropolitan Statistical Area, it is a shame that San Francisco just missed a chance to realize some of these gains and to be at the forefront of autonomous vehicle implementation.

Should Illinois be Downgraded? Credit Ratings and Mal-Investment

No one disputes that Illinois’s pension systems are in seriously bad condition, with large unfunded obligations. But should this worry Illinois bondholders? New Mercatus research by Marc Joffe of Public Sector Credit Solutions finds that recent downgrades of Illinois’s bonds by credit rating agencies aren’t merited. He models the default risk of Illinois and Indiana based on projections of these states’ financial positions. These findings are put in the context of the history of state defaults and the role credit rating agencies play in debt markets. The influence of credit rating agencies in this market is the subject of a guest blog post by Marc today at Neighborhood Effects.

Credit Ratings and Mal-Investment

by Marc Joffe

Prices play a crucial role in a market economy because they provide signals to buyers and sellers about the availability and desirability of goods. Because prices coordinate supply and demand, they enabled the market system to triumph over Communism – which lacked a price mechanism.

Interest rates are also prices. They reflect investor willingness to delay consumption and take on risk. If interest rates are manipulated, serious dislocations can occur. As both Horwitz and O’Driscoll have discussed, the Fed’s suppression of interest rates in the early 2000s contributed to the housing bubble, which eventually gave way to a crash and a serious financial crisis.

Even in the absence of Fed policy errors, interest rate mispricing is possible. For example, ahead of the financial crisis, investors assumed that subprime residential mortgage-backed securities (RMBS) were less risky than they really were. As a result, subprime mortgage rates did not reflect their underlying risk, and thus too many dicey borrowers received home loans. The ill effects included a wave of foreclosures and huge, unexpected losses by pension funds and other institutional investors.

The mis-pricing of subprime credit risk was not the direct result of Federal Reserve or government intervention; instead, it stemmed from investor ignorance. Since humans lack perfect foresight, some degree of investor ignorance is inevitable, but it can be minimized through reliance on expert opinion.

In many markets, buyers rely on expert opinions when making purchase decisions. For example, when choosing a car we might look at Consumer Reports. When choosing stocks, we might read investment newsletters or review reports published by securities firms – hopefully taking into account potential biases in the latter case. When choosing fixed income securities, most large investors rely on credit rating agencies.

The rating agencies assigned what ultimately turned out to be unjustifiably high ratings to subprime RMBS. This error and the fact that investors relied so heavily on credit rating agencies resulted in the overproduction and overconsumption of these toxic securities. Subsequent investigations revealed that the incorrect rating of these instruments resulted from some combination of suboptimal analytical techniques and conflicts of interest.

While this error occurred in a market context, the institutional structure of the relevant market was the unintended consequence of government interventions over a long period of time. Rating agencies first found their way into federal rulemaking in the wake of the Depression. With the inception of the FDIC, regulators decided that expert third-party evaluations were needed to ensure that banks were investing depositor funds wisely.

The third party regulators chose were the credit rating agencies. Prior to receiving this federal mandate, and for a few decades thereafter, rating agencies made their money by selling manuals to libraries and institutional investors. The manuals included not only ratings but also large volumes of facts and figures about bond issuers.

After mid-century, the business became tougher with the advent of photocopiers. Eventually, rating agencies realized (perhaps implicitly) that they could monetize their federally granted power by selling ratings to bond issuers.

Rather than revoking their regulatory mandate in the wake of this new business model, federal regulators extended the power of incumbent rating agencies – codifying their opinions into the assessments of the portfolios of non-bank financial institutions.

With the growth in fixed income markets and the inception of structured finance over the last 25 years, rating agencies became much larger and more profitable. Due to their size and due to the fact that their ratings are disseminated for free, rating agencies have been able to limit the role of alternative credit opinion providers. For example, although a few analytical firms market their insights directly to institutional investors, it is hard for these players to get much traction given the widespread availability of credit ratings at no cost.

Even with rating agencies being written out of regulations under Dodd-Frank, market structure is not likely to change quickly. Many parts of the fixed income business display substantial inertia and the sheer size of the incumbent firms will continue to make the environment challenging for new entrants.

Regulatory involvement in the market for fixed income credit analysis has undoubtedly had many unintended consequences, some of which may be hard to ascertain in the absence of unregulated markets abroad. One fairly obvious negative consequence has been the stunting of innovation in the institutional credit analysis field.

Despite the proliferation of computer technology and statistical research methods, credit rating analysis remains firmly rooted in its early 20th century origins. Rather than estimating the probability of default or the expected loss on a credit instrument, rating agencies still provide their assessments in the form of letter grades that have imprecise definitions and can easily be misinterpreted by market participants.

Starting with the pioneering work of Beaver and Altman in the 1960s, academic models of corporate bankruptcy risk have become common, but these modeling techniques have had limited impact on rating methodology.

Worse yet, in the area of government bonds, very little academic or applied work has taken place. This is especially unfortunate because government bond ratings frame the fiscal policy debate. In the absence of credible government bond ratings, we have no reliable way of estimating the probability that any government’s revenue and expenditure policies will lead to a socially disruptive default in the future. Further, in the absence of credible research, markets are likely to price government bond risk inefficiently – sending confusing signals to policymakers and the general public.

Given these concerns, I am pleased that the Mercatus Center has provided me the opportunity to build a model for Illinois state bond credit risk (as well as a reference model for Indiana). This is an effort to apply empirical research and Monte Carlo simulation techniques to the question of how much risk Illinois bondholders actually face.
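For readers unfamiliar with the technique, a Monte Carlo default-risk model repeatedly simulates a government’s fiscal trajectory and counts how often debt service becomes unsustainable. The toy sketch below captures the flavor of that approach; every parameter (baseline revenue, growth rate, volatility, debt service, and the default threshold) is hypothetical and is not taken from the Illinois model:

```python
import random

def default_probability(trials=10_000, years=10,
                        revenue=40.0,           # $billions, hypothetical
                        growth=0.03, vol=0.05,  # mean revenue growth, volatility
                        debt_service=2.0,       # fixed annual cost, $billions
                        max_burden=0.25):       # "default" if service > 25% of revenue
    """Estimate the chance that debt service ever exceeds a politically
    sustainable share of revenue over the simulation horizon."""
    random.seed(0)  # reproducible illustration
    defaults = 0
    for _ in range(trials):
        r = revenue
        for _ in range(years):
            r *= 1 + random.gauss(growth, vol)  # one random revenue path
            if debt_service / r > max_burden:
                defaults += 1
                break
    return defaults / trials

print(default_probability())
```

With these (made-up) inputs, debt service starts at only 5% of revenue, so the simulated default probability is close to zero; the value of the exercise is that the output is an explicit probability rather than an opaque letter grade.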

While readers may not like my conclusion – that Illinois bonds carry very little credit risk – I hope they recognize the benefits of constructing, evaluating and improving credit models for systemically important public sector entities like our largest states. Hopefully, this research will contribute to a discussion about how we can improve credit rating assessments.