
Economic growth is vital for improving our lives and the primary long-run determinant of economic growth is innovation. More innovation means better products, more choices for consumers and a higher standard of living. Worldwide, hundreds of millions of people have been lifted out of poverty due to the economic growth that has occurred in many countries since the 1970s.

The effect of innovation on economic growth has been heavily analyzed using data from the post-WWII period, but there is considerably less work that examines the relationship between innovation and economic growth during earlier time periods. An interesting new working paper by Ufuk Akcigit, John Grigsby and Tom Nicholas that examines innovation across America during the late 19th and early 20th century helps fill in this gap.

The authors examine innovation and inventors in the U.S. during this period using U.S. patent data and census data from 1880 to 1940. The figure below shows the geographic distribution of inventiveness in 1940. Darker colors mean higher rates of inventive activity.

[Figure: Geography of inventiveness, 1940]

Most of the inventive activity in 1940 was in the industrial Midwest and Northeast, with California being the most notable western exception.

The next figure depicts the relationship between the log of the total number of patents granted to inventors in each state from 1900 to 2000 (x-axis) and annualized GDP growth (y-axis) over the same period for the 48 contiguous states.

[Figure: Innovation and long-run growth, U.S. states]

As shown, there is a strong positive relationship between this measure of innovation and economic growth. The authors also conduct multi-variable regression analyses, including an instrumental variable analysis, and find the same positive relationship.
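The bivariate relationship in the figure can be sketched with a simple OLS fit. The data below are purely synthetic stand-ins (the paper’s actual state-level patent counts and growth rates are not reproduced here); the point is only to show the mechanics of regressing annualized growth on log patents.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 48 states: log of total patents granted
# 1900-2000 (x-axis) and annualized GDP growth over the same period
# (y-axis). Synthetic, constructed so the two are positively related,
# as in the paper's scatter plot.
log_patents = rng.uniform(6, 12, size=48)
growth = 0.3 * log_patents + rng.normal(0, 0.5, size=48)

# Simple bivariate OLS: growth = a + b * log(patents)
b, a = np.polyfit(log_patents, growth, 1)
r = np.corrcoef(log_patents, growth)[0, 1]

print(f"slope: {b:.2f}, correlation: {r:.2f}")
```

A positive slope and correlation here mirror the figure’s upward-sloping scatter; the paper’s multi-variable and instrumental variable analyses go well beyond this bivariate sketch.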

To better understand why certain states had more inventive activity than others in the early 20th century, the authors analyze several factors: 1) urbanization, 2) access to capital, 3) geographic connectedness and 4) openness to new ideas.

The figures below show that more urbanization was associated with more innovation from 1940 to 1960. The left figure plots the percent of people in each state living in an urban area in 1940 on the x-axis, while the right has the percent living on a farm on the x-axis. Both figures tell the same story—rural states were less innovative.

[Figure: Population density and innovation, 1940-1960]

Next, the authors look at the financial health of each state using deposits per capita as their measure. A stable, well-funded banking system makes it easier for inventors to get the capital they need to innovate. The figure below shows the positive relationship between deposits per capita in 1920 and patent production from 1920 to 1930.

[Figure: Innovation and bank deposits, 1920-1940]

The size of the market should also matter to inventors, since greater access to consumers means more sales and profits from successful inventions. The figures below show the relationship between a state’s transport cost advantage (x-axis) and innovation. The left figure depicts all of the states while the right omits the less populated, more geographically isolated Western states.

[Figure: Innovation and transport costs, 1920-1940]

States with a greater transport cost advantage in 1920—i.e. less economically isolated—were more innovative from 1920 to 1940, and this relationship is stronger when states in the far West are removed.

The last relationship the authors examine is that between innovation and openness to new, potentially disruptive ideas. One of their proxies for openness is the percent of families who owned slaves in a state, with more slave ownership being a sign of less openness to change and innovation.

[Figure: Innovation and slavery, 1880-1940]

The figures show that more slave ownership in 1860 was associated with less innovation at the state-level from 1880 to 1940. This negative relationship holds when all states are included (left figure) and when states with no slave ownership in 1860—which includes many Northern states—are omitted (right figure).

The authors also analyze individual-level data and find that inventors of the early 20th century were more likely to migrate across state lines than the rest of the population. Additionally, they find that conditional on moving, inventors tended to migrate to states that were more urbanized, had higher bank deposits per capita and had lower rates of historical slave ownership.

Next, the relationship between innovation and inequality is examined. Inequality has been a hot topic over the last several years, with many people citing research by economists Thomas Piketty and Emmanuel Saez that argues that inequality has increased in the U.S. since the 1970s. The methods and data used to construct some of the most notable evidence of increasing inequality have been criticized, but this has not made the topic any less popular.

In theory, innovation has an ambiguous effect on inequality. If there is a lot of regulation and high barriers to entry, the profits from innovation may primarily accrue to large established companies, which would tend to increase inequality.

On the other hand, new firms that create innovative new products can erode the market share and profits of larger, richer firms, and this would tend to decrease inequality. This idea of innovation aligns with economist Joseph Schumpeter’s “creative destruction”.

So what was going on in the early 20th century? The figure below shows the relationship between innovation and two measures of state-level inequality: the ratio of the 90th percentile wage to the 10th percentile wage in 1940, and the wage income Gini coefficient in 1940. For each measure, a smaller value means less inequality.

[Figure: Innovation and income inequality, 1920-1940]

As shown in the figures above, a higher patent rate is correlated with less inequality. However, only the result using the 90-10 ratio remains statistically significant when each state’s occupation mix is controlled for in a multi-variable regression.

The authors also find that when the share of income controlled by the top 1% of earners is used as the measure of inequality, the relationship between innovation and inequality makes a U shape. That is, innovation decreases inequality up to a point, but after that point it’s associated with more inequality.

Thus, when using the broader measures of inequality (the 90-10 ratio and the Gini coefficient), innovation is negatively correlated with inequality, but when using a measure of top-end inequality (the income share of the top 1%), the relationship is less clear. This shows that inequality results are sensitive to the measure of inequality used.
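As a concrete illustration of how these two inequality measures are computed, here is a minimal sketch; the wage vector is hypothetical, not data from the paper.

```python
import numpy as np

def p90_p10_ratio(wages):
    """90th-percentile wage divided by the 10th-percentile wage."""
    return np.percentile(wages, 90) / np.percentile(wages, 10)

def gini(wages):
    """Gini coefficient via the sorted-rank formula:
    G = sum_i (2i - n - 1) * w_i / (n * sum(w)), with w sorted, i = 1..n."""
    w = np.sort(np.asarray(wages, dtype=float))
    n = len(w)
    index = np.arange(1, n + 1)
    return np.sum((2 * index - n - 1) * w) / (n * np.sum(w))

# Hypothetical hourly wages for ten workers
wages = [10, 12, 15, 20, 25, 40, 60, 90, 150, 300]
print(round(p90_p10_ratio(wages), 2))
print(round(gini(wages), 3))
```

Both measures capture the full wage distribution, which is why the text calls them “broader” than the top-1% income share: a Gini of 0 means perfect equality and values near 1 mean extreme concentration, while the 90-10 ratio ignores the extreme tails entirely.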

Social mobility is an important measure of economic opportunity within a society and the figure below shows that innovation is positively correlated with greater social mobility.

[Figure: Innovation and social mobility, 1940]

The measure of social mobility used is the percentage of people who have a high-skill occupation in 1940 given that they had a low-skill father (y-axis). States with more innovation from 1920 to 1940 had more social mobility according to this measure.

In the early 20th century it appears that innovation improved social mobility and decreased inequality, though the latter result is sensitive to the measure of inequality. However, the two concepts are not equally important: Economic and social mobility are worthy societal ideals that require opportunity to be available to all, while static income or wealth inequality is largely a red herring that distracts us from more important issues. And once you take into account the consumer benefits of innovation during this period—electricity, the automobile, refrigeration, etc.—it is clear that innovation does far more good than harm.

This paper is interesting and useful for several reasons. First, it shows that innovation is important for economic growth over a long time period for one country. It also shows that more innovation occurred in denser, urbanized states that provided better access to capital, were more interconnected and were more open to new, disruptive ideas. These results are consistent with what economists have found using more recent data, but this research provides evidence that these relationships have existed over a much longer time period.

The positive relationships between innovation and income equality/social mobility in the early 20th century should also help alleviate the fears some people have about the negative effects of creative destruction. Innovation inevitably creates adjustment costs that harm some people, but during this period it doesn’t appear that it caused widespread harm to workers.

If we reduce regulation today in order to encourage more innovation and competition we will likely experience similar results, along with more economic growth and all of the consumer benefits.

Over the past 60 or 70 years, levels of regulation in the United States have been on the rise by almost any measure. As evidence, in 1950 there were only 9,745 pages in the US Code of Federal Regulations. Today that number is over 178,000 pages. There is less information about regulation at the state level, but anecdotal evidence suggests regulation is on the rise there too. For example, the Commonwealth of Kentucky publishes its regulatory code each year in a series of volumes known as the Kentucky Administrative Regulations Service (KARS). These volumes consist of books, each roughly 400 to 500 pages in length. In 1975, there were 4 books in the KARS. By 2015, that number had risen to 14 books. There are many different theories as to why so much regulation gets produced, so it makes sense to review some of those theories in order to explain the phenomenon of regulatory accumulation.

Perhaps the most popular theory of regulation is that it exists to advance the public interest. According to this view, well-intended regulators intervene in the marketplace due to “market failures”, which are situations where the market fails to allocate resources optimally. Some common examples of market failures include externalities (cases where third parties are impacted by the transactions involving others), asymmetric information (cases where buyers and sellers possess different levels of information about products being sold), public goods problems (whereby certain items are under-provided or not provided at all by the market), and concentration of industry in the form of monopoly power. When market failure occurs, the idea is that regulators intervene in order to make imperfect markets behave more like theoretically perfect markets.

Other theories of regulation are less optimistic about the motivations of the different participants in the rulemaking process. One popular theory suggests regulators work primarily to help powerful special interest groups, a phenomenon known as regulatory capture. Under this view—commonly associated with the writings of University of Chicago economist George Stigler—regulators fix prices and limit entry into an industry because it benefits the industry being regulated. An example would be how regulators, up until the late 1970s, fixed airline prices above what they would have been in a competitive market.

The interest groups that “capture” regulatory agencies are most often thought to be businesses, but it’s important to remember that agencies can also be captured by other groups. The revolving door between the government and the private sector doesn’t end with large banks. It also extends to nonprofit groups, labor unions, and activist groups of various kinds that also wield significant resources and power.

The “public choice theory” of regulation posits that public officials are primarily self-interested, rather than being focused on advancing the public interest. Under this view, regulators may be most concerned with increasing their own salaries or budgets. Or, they may be focused primarily on concentrating their own power.

It’s also possible that regulators are not nearly so calculating and rational as this. The behavioral public choice theory of regulation suggests regulators behave irrationally in many cases, due to the cognitive limitations inherent in all human beings. A case in point is how regulatory agencies routinely overestimate risks, or try to regulate already very low risks down to zero. There is significant evidence that people, including regulators, tend to overestimate small probability risks, leading to responses that are disproportionate to the expected harm. For example, the Environmental Protection Agency’s evaluations of sites related to the Superfund clean-up project routinely overestimated risks by orders of magnitude. Such overreactions might also be a response to public perceptions, for example in response to high-profile media events, such as following acts of terrorism. If the public’s reactions carry over into the voting booth, then legislation and regulation may be enacted soon after.

One of the more interesting and novel theories as to why we see regulation relates to public trust in institutions. A 2010 paper in the Quarterly Journal of Economics noted that there is a strong correlation between trust in various social institutions and some measures of regulation. The figure below is an example of this relationship, found in the paper.

[Figure: Trust and regulation, from the QJE paper]

Trust can relate to public institutions, such as the government, but it also extends to trust in corporations and in our fellow citizens. Interestingly, the authors of the QJE article argue that an environment of low trust and high regulation can be a self-fulfilling prophecy. Low levels of trust, ironically, can lead to more demand for regulation, even when there is little trust in the government. One reason for this might be that people think that giving an untrustworthy government control over private affairs is still superior to allowing unscrupulous businesses to have free rein.

The flip side of this situation is that in high-trust countries, such as Sweden, the public demands lower levels of regulation, and this can breed more trust. So an environment of free-market policies combined with trustworthy businesses can produce good market outcomes and more trust; this too can be self-fulfilling, allowing some countries to maintain a “good” equilibrium.

This is concerning for the United States because trust has been on the decline in a whole host of areas. A Gallup survey has been asking questions related to trust in public institutions for several decades. There is a long-term secular decline in Gallup’s broad measure of trust, as evidenced by the figure below, although periodically there are upswings in the measure.

[Figure: Gallup measure of trust in public institutions]

Pew has a similar survey that looks at public trust in the government. Here the decline is even more evident.

[Figure: Pew measure of public trust in government]

Given that regulation has been on the rise for decades, a decline in trust in the government, in corporations, and in each other may be a key reason this is occurring. Of course, it’s possible that these groups are simply dishonest and do not merit public trust. Nonetheless, the US might find itself stuck in a self-fulfilling situation, whereby distrust breeds more government intervention in the economy, worse market outcomes, and even more distrust in the future. Getting out of that kind of situation is not easy. One way might be through education about the institutions that lead to free and prosperous societies, as well as through creating a culture in which corruption and unscrupulous behavior are discouraged.

There are a number of theories that seek to explain why regulation comes about. No theory is perfect, and some theories explain certain situations better than others. Nonetheless, the theories presented here go a long way towards laying out the forces that lead to regulation, even if no one theory can explain all regulation at all times.

Many U.S. cities are racing to develop high-speed rail systems that shorten commute times and develop the economy for residents. These trains can reach speeds over 124 mph, sometimes even as high as 374 mph, as in the case of Japan’s record-breaking trains. Despite this potential, American cities haven’t quite had the success of other countries. In 2009, the Obama administration awarded almost a billion dollars of stimulus money to Wisconsin to build a high-speed rail connection between Milwaukee and Madison, and possibly on to the Twin Cities, but that project was derailed. Now, the Trump administration has plans to support a high-speed rail project in Texas. Given so many failed attempts in the U.S., it’s fair to ask if this time is different. And if it is, will high-speed rail bring the benefits that proponents claim?

The argument for building high-speed rail lines usually entails promises of faster trips, better connections between major cities, and economic growth as a result. It almost seems like a no-brainer – why would any city not want to pursue something like this? The answer, as with most public policy questions, depends on the costs, and on whether the benefits actually materialize.

In a forthcoming paper for the Mercatus Center, transportation scholar Kenneth Button explores these questions by studying the high-speed rail experiences of Spain, Japan, and China, the countries with the three largest systems (measured by network length). Although there are benefits to these rail systems, Button cautions against focusing too narrowly on them as models, primarily because what works in one area can’t necessarily be easily replicated in another.

Most major systems in other countries have been the result of large public investment and were built with each area’s unique geography and political environment in mind. Taking their approaches and trying to apply them to American cities ignores not only how these factors can differ, but also how much costs can differ. For example, the average infrastructure unit price of high-speed rail in Europe is between $17 and $24 million per mile, while the cost of proposals in California is conservatively estimated at $35 million per mile.

The cost side of the equation is often overlooked, and more attention is given to the benefit side. Button explains that the main potential benefit – generating economic growth – doesn’t always live up to expectations. The realized growth effects are usually minimal, and sometimes even negative. Despite this, proponents of high-speed rail oversell these effects, and the process of thinking through high-speed rail as a sound public investment is often short-lived.

The goal is to generate new economic activity, not merely replace or divert it from elsewhere. In Japan, for example, only six percent of the traffic on the Sanyo Shinkansen line was newly generated, while 55 percent came from other rail lines, 23 percent from air, and 16 percent from inter-city bus. In China, after the Nanguang and Guiguang lines began operating in 2014, a World Bank survey found that many of the passengers would have made the journey along these commutes through some other form of transportation if the high-speed rail option wasn’t there. The passengers who chose this new transport method surely benefited from shorter travel times, but this should not be confused with net growth across the economy.

Even if much of it is diverted from other transport modes, the amount of high-speed rail traffic Japan and China have generated is commendable. Spain’s system, however, has not been as successful. Its network has generated only about 5 percent of Japan’s passenger volume. A line between Perpignan, France and Figueres, Spain that began service in 2009 fell severely short of projected traffic: it was originally expected to run 19,000 trains per year, but was running only 800 trains per year by 2015.

There is also evidence that high-speed rail systems do a poor job of redistributing activity geographically. This is especially concerning given that projects are often sold on a promise of promoting regional equity and reducing congestion in overheating areas. You can plan a track between well-developed and less-developed regions, but this does not guarantee that growth will follow for both. The Shinkansen system delivers much of Japan’s workforce to Tokyo, for example, but does not spread much employment away from the capital. In fact, faster growth happened where it was already expected, even before the high-speed rail was planned or built. Additionally, the Tokyo-Osaka Shinkansen line in particular has strengthened the relative economic position of Tokyo and Osaka while weakening those of cities not served.

Passenger volume and line access are not – and should not be – the only metrics of success. Academics have exhibited a fair amount of skepticism regarding high-speed rail’s ability to meet other objectives. When it comes to investment value, many cases have produced much lower returns than expected. A recent, extreme example is California’s bullet train, which is 50 percent over its planned budget and seven years behind schedule.

The project in California has been deemed a lost cause by many, but other projects have gained momentum in the past year. North American High Speed Rail Group has proposed a rail line between Rochester and the Twin Cities and, if it gets approval from city officials, plans to finance it entirely with private money. The main drawback of the project is that it would require the use of eminent domain to take the property of existing businesses in the way of the planned line path. Private companies trying to use eminent domain to get past a roadblock like this often claim that it is for the “public benefit.” Given that many residents have resisted the North American High Speed Rail Group’s plans, forcing the use of eminent domain would likely only destroy value by reallocating property from a higher-value to a lower-value use.

Past Mercatus research has found that using eminent domain powers for redevelopment purposes – i.e. taking from one private owner and giving to another – can shrink the tax base as a result of decreases in private investment. In other words, when entrepreneurs see that the projects they invest in could easily be taken if another business owner makes a case to city officials, future investors are discouraged from moving into the area. This ironically discourages development, and the government’s revenues suffer as a result.

Florida’s Brightline might have found a way around this. Instead of trying to take the property of other businesses and homes in its way, the company has raised money to repurpose existing tracks that already run between Miami and West Palm Beach. If implemented successfully, this will be the first privately run and operated rail service launched in the U.S. in over 100 years. And it requires neither eminent domain nor taxpayer dollars to jump-start a project that, like any investment, carries a risk of failure; both factors reduce the cost side of the equation from the public’s perspective.

Which brings us back to the Houston-to-Dallas line that Trump appears to be getting behind. How does that plan stack up against these other projects? For one, it would require eminent domain to take land from rural landowners in order to build a line that would primarily benefit city residents. Federal intervention would mean picking a winner and a loser at the outset. Additionally, there is no guarantee that building the line would bring about the economic development that many proponents promise. Button’s new paper suggests that it’s fair to be skeptical.

I’m not making the argument that high-speed rail in America should be abandoned altogether. Progress in Florida demonstrates that maybe in the right conditions and with the right timing, it could be cost-effective. The authors of a 2013 study echo this by writing:

“In the end, HSR’s effect on economic and urban development can be characterized as analogous to a fertilizer’s effect on crop growth: it is one ingredient that could stimulate economic growth, but other ingredients must be present.”

Cities that can’t seem to mix the right ingredients can look to other options for reaching the same goals. In fact, a review of the economic literature finds that investing in road infrastructure is a much better investment than other transportation methods like airports, railways, or ports. Or, as I’ve discussed previously, being more welcoming to new technologies like driverless cars has the potential to both reduce congestion and generate significant economic gains.

Economists often talk about the important role institutions and policies play in generating economic growth. A new paper that examines the role of urban governance and city-level productivity provides some additional, indirect evidence that institutions and policies impact economic productivity at the local level. (The focus of the paper is how administrative fragmentation affects city-level productivity, not what I present here, but I thought the following was interesting nonetheless.)

The authors graph the correlation between city population and city productivity for five different countries. There is a positive relationship between population and productivity in all of the countries, which is consistent with other studies that find a similar relationship. This relationship is largely due to agglomeration economies and the greater degree of specialization within large cities.

One of the figures from the study—for the U.S.—is shown below. City productivity is measured on the y-axis and the natural log of city population is on the x-axis. (Technical note for those interested: city productivity is measured as the coefficient on a city dummy variable in an individual-level log hourly wage/earnings regression that also controls for gender, age, age squared, education and occupation. This strips away observable characteristics of the population that may affect city productivity.)
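The technical note above describes estimating a city’s productivity premium as the coefficient on a city dummy in an individual-level log wage regression with controls. A stylized sketch of that estimation, using synthetic worker data for two hypothetical cities and only age controls (the paper also controls for gender, education and occupation), might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic worker-level data: a city indicator (0 = city A, 1 = city B),
# age, and a log hourly wage generated with a built-in 0.15 log-point
# premium for city B. Purely illustrative, not the paper's data.
n = 200
city = rng.integers(0, 2, n)
age = rng.uniform(20, 60, n)
log_wage = (2.0 + 0.15 * city + 0.02 * age - 0.0002 * age**2
            + rng.normal(0, 0.1, n))

# Design matrix: constant, city dummy, age, age squared.
X = np.column_stack([np.ones(n), city, age, age**2])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

# beta[1] is city B's estimated productivity premium relative to
# city A, net of the observable controls.
print(f"estimated city premium: {beta[1]:.3f}")
```

The estimated coefficient recovers (approximately) the 0.15 premium built into the data. This is the sense in which the figure’s y-axis “strips away” observable worker characteristics: whatever wage gap remains after the controls is attributed to the city itself.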

[Figure: U.S. city size and productivity] Source: Ahrend, Rudiger, et al., “What makes cities more productive? Evidence from five OECD countries on the role of urban governance,” Journal of Regional Science, 2017.

 

As shown in the graph, there is a relatively tight, positive relationship between size and productivity. The two noticeable outliers are El Paso and McAllen, TX, both of which are on the border with Mexico.

The next figure depicts the same information but for cities in Germany.

[Figure: German city size and productivity]

What’s interesting about this figure is that there is a cluster of outliers in the bottom left, which weakens the overall relationship. The cities in this cluster are less productive than one would expect based on their population. These cities also have another thing in common: They are located in or near what was East Germany. The authors comment on this:

“In Germany, the most noteworthy feature is probably the strong east-west divide, with city productivity premiums in eastern German cities being, on the whole, significantly below the levels found in western German cities of comparable size. In line with this finding, the city productivity premium in Berlin lies in between the trends in eastern and western Germany.”

The data used to construct these figures are from 2007, 17 years after the unification of Germany. After WWII and until 1990, East Germany was under communist control and had a centrally planned economy, complete with price controls and production quotas, while West Germany had a democratic government and market economy.

Since 1990, both areas have operated under the same country-level rules and institutions, but as shown above the productivity difference between the two regions persisted. This is evidence that it can take a considerable amount of time for an area to overcome damaging economic policies.

Why the lack of labor mobility in the U.S. is a problem and how we can fix it

January 26, 2017

Many researchers have found evidence that mobility in the U.S. is declining. More specifically, it doesn’t appear that people move from places with weaker economies to places with stronger economies as consistently as they did in the past. Two sets of figures from a paper by Peter Ganong and Daniel Shoag succinctly show this decline […]

Read the full post →

Decreasing congestion with driverless cars

January 17, 2017

Traffic is aggravating. Especially for San Francisco residents. According to Texas A&M Transportation Institute, traffic congestion in the San Francisco-Oakland CA area costs the average auto commuter 78 hours per year in extra travel time, $1,675 for their travel time delays, and an extra 33 gallons of gas compared to free-flow traffic conditions. That means […]

Read the full post →

Today’s public policies exacerbate our differences

January 3, 2017

The evidence that land-use regulations harm potential migrants keeps piling up. A recent paper in the Journal of Urban Economics finds that young workers (age 22 – 26) of average ability who enter the labor force in a large city (metropolitan areas with a population > 1.5 million) earn a wage premium equal to 22.9% after […]

Read the full post →

Solving the Public Pension Crisis

December 13, 2016

Last week I had the pleasure of attending a public policy conference that brought together many scholars who study public pensions to share what they have learned from their research. The crisis – growing unfunded pension liabilities and resulting fiscal distress for states and municipalities – served as the foundation of the day. Hosted by […]

Read the full post →

More labor market freedom means more labor force participation

December 8, 2016

The U.S. labor force participation (LFP) rate has yet to bounce back to its pre-recession level. Some of the decline is due to retiring baby-boomers but even the prime-age LFP rate, which only counts people age 25 – 54 and thus less affected by retirement, has not recovered. Economists and government officials are concerned about […]

Read the full post →

Eight years after the financial crisis: lessons from the most fiscally distressed cities

November 17, 2016

You’d think that eight years after the financial crisis, cities would have recovered. Instead, declining tax revenues following the economic downturn paired with growing liabilities have slowed recovery. Some cities exacerbated their situations with poor policy choices. Much could be learned by studying how city officials manage their finances in response to fiscal crises. Detroit […]

Read the full post →