Tag Archives: regulation

Innovation and economic growth in the early 20th century and lessons for today

Economic growth is vital for improving our lives and the primary long-run determinant of economic growth is innovation. More innovation means better products, more choices for consumers and a higher standard of living. Worldwide, hundreds of millions of people have been lifted out of poverty due to the economic growth that has occurred in many countries since the 1970s.

The effect of innovation on economic growth has been heavily analyzed using data from the post-WWII period, but there is considerably less work that examines the relationship between innovation and economic growth during earlier time periods. An interesting new working paper by Ufuk Akcigit, John Grigsby and Tom Nicholas that examines innovation across America during the late 19th and early 20th century helps fill in this gap.

The authors examine innovation and inventors in the U.S. during this period using U.S. patent data and census data from 1880 to 1940. The figure below shows the geographic distribution of inventiveness in 1940. Darker colors mean higher rates of inventive activity.

geography of inventiveness 1940

Most of the inventive activity in 1940 was in the industrial Midwest and Northeast, with California being the most notable western exception.

The next figure depicts the relationship between the log of the total number of patents granted to inventors in each state from 1900 to 2000 (x-axis) and annualized GDP growth (y-axis) over the same period for the 48 contiguous states.

innovation, long run growth US states

As shown there is a strong positive relationship between this measure of innovation and economic growth. The authors also conduct multi-variable regression analyses, including an instrumental variable analysis, and find the same positive relationship.
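The state-level relationship described above can be illustrated with a simple fit of annualized growth on log patent counts. This is only a sketch with made-up numbers, not the authors' data or method; the paper's actual regressions are multi-variable and use instrumental variables.

```python
# Illustrative only: hypothetical state-level data, not from the paper.
import numpy as np

log_patents = np.array([8.2, 9.1, 10.4, 11.0, 12.3, 13.1])  # log total patents (hypothetical)
gdp_growth = np.array([1.9, 2.1, 2.6, 2.8, 3.3, 3.5])       # annualized GDP growth, % (hypothetical)

# Simple OLS fit of growth on log patents.
slope, intercept = np.polyfit(log_patents, gdp_growth, 1)
corr = np.corrcoef(log_patents, gdp_growth)[0, 1]
print(f"slope = {slope:.3f}, correlation = {corr:.3f}")
```

A positive slope and correlation near one would reproduce the pattern in the scatter plot, though of course correlation alone does not establish that innovation causes growth.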

To better understand why certain states had more inventive activity than others in the early 20th century, the authors analyze several factors: 1) urbanization, 2) access to capital, 3) geographic connectedness and 4) openness to new ideas.

The figures below show that more urbanization was associated with more innovation from 1940 to 1960. The left figure plots the percent of people in each state living in an urban area in 1940 on the x-axis, while the right has the percent living on a farm on the x-axis. Both figures tell the same story—rural states were less innovative.

pop density, innovation 1940-1960

Next, the authors look at the financial health of each state using deposits per capita as their measure. A stable, well-funded banking system makes it easier for inventors to get the capital they need to innovate. The figure below shows the positive relationship between deposits per capita in 1920 and patent production from 1920 to 1930.

innovation, bank deposits 1920-1940

The size of the market should also matter to inventors, since greater access to consumers means more sales and profits from successful inventions. The figures below show the relationship between a state’s transport cost advantage (x-axis) and innovation. The left figure depicts all of the states while the right omits the less populated, more geographically isolated Western states.

innovation, transport costs 1920-1940

States with a greater transport cost advantage in 1920—i.e., less economically isolated—were more innovative from 1920 to 1940, and this relationship is stronger when states in the far West are removed.

The last relationship the authors examine is that between innovation and openness to new, potentially disruptive ideas. One of their proxies for openness is the percent of families who owned slaves in a state, with more slave ownership being a sign of less openness to change and innovation.

innovation, slavery 1880-1940

The figures show that more slave ownership in 1860 was associated with less innovation at the state-level from 1880 to 1940. This negative relationship holds when all states are included (left figure) and when states with no slave ownership in 1860—which includes many Northern states—are omitted (right figure).

The authors also analyze individual-level data and find that inventors of the early 20th century were more likely to migrate across state lines than the rest of the population. Additionally, they find that conditional on moving, inventors tended to migrate to states that were more urbanized, had higher bank deposits per capita and had lower rates of historical slave ownership.

Next, the relationship between innovation and inequality is examined. Inequality has been a hot topic the last several years, with many people citing research by economists Thomas Piketty and Emmanuel Saez that argues that inequality has increased in the U.S. since the 1970s. The methods and data used to construct some of the most notable evidence of increasing inequality have been criticized, but this has not made the topic any less popular.

In theory, innovation has an ambiguous effect on inequality. If there is a lot of regulation and high barriers to entry, the profits from innovation may primarily accrue to large established companies, which would tend to increase inequality.

On the other hand, new firms that create innovative new products can erode the market share and profits of larger, richer firms, and this would tend to decrease inequality. This idea of innovation aligns with economist Joseph Schumpeter’s “creative destruction”.

So what was going on in the early 20th century? The figure below shows the relationship between innovation and two measures of state-level inequality: the ratio of the 90th percentile wage over the 10th percentile wage in 1940 and the wage income Gini coefficient in 1940. For each measure, a smaller value means less inequality.
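For readers unfamiliar with the two measures, both can be computed directly from a wage distribution. Here is a minimal sketch using a small hypothetical wage sample (not the census data the authors use):

```python
# Hypothetical wage sample for illustration only.
import numpy as np

wages = np.array([8, 12, 15, 20, 25, 30, 40, 55, 80, 120], dtype=float)

# 90-10 ratio: the wage at the 90th percentile over the wage at the 10th.
p90, p10 = np.percentile(wages, [90, 10])
ratio_90_10 = p90 / p10

# Gini coefficient via the mean-absolute-difference formula:
# Gini = (mean of |w_i - w_j| over all pairs) / (2 * mean wage).
diffs = np.abs(wages[:, None] - wages[None, :])
gini = diffs.mean() / (2 * wages.mean())

print(f"90-10 ratio = {ratio_90_10:.2f}, Gini = {gini:.3f}")
```

Both measures equal their minimum (ratio of 1, Gini of 0) when every wage is identical, so smaller values mean less inequality, as the text notes.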

innovation, inc inequality 1920-1940

As shown in the figures above, a higher patent rate is correlated with less inequality. However, only the result using the 90-10 ratio remains statistically significant when each state’s occupation mix is controlled for in a multi-variable regression.

The authors also find that when the share of income controlled by the top 1% of earners is used as the measure of inequality, the relationship between innovation and inequality makes a U shape. That is, innovation decreases inequality up to a point, but after that point it’s associated with more inequality.

Thus when using the broader measures of inequality (the 90-10 ratio and the Gini coefficient), innovation is negatively correlated with inequality, but when using a measure of top-end inequality (income controlled by the top 1%) the relationship is less clear. This shows that inequality results are sensitive to the measure of inequality used.

Social mobility is an important measure of economic opportunity within a society and the figure below shows that innovation is positively correlated with greater social mobility.

innovation, social mobility 1940

The measure of social mobility used is the percentage of people who have a high-skill occupation in 1940 given that they had a low-skill father (y-axis). States with more innovation from 1920 to 1940 had more social mobility according to this measure.

In the early 20th century it appears that innovation improved social mobility and decreased inequality, though the latter result is sensitive to the measure of inequality. However, the two concepts are not equally important: Economic and social mobility are worthy societal ideals that require opportunity to be available to all, while static income or wealth inequality is largely a red herring that distracts us from more important issues. And once you take into account the consumer benefits of innovation during this period—electricity, the automobile, refrigeration, etc.—it is clear that innovation does far more good than harm.

This paper is interesting and useful for several reasons. First, it shows that innovation is important for economic growth over a long time period for one country. It also shows that more innovation occurred in denser, urbanized states that provided better access to capital, were more interconnected and were more open to new, disruptive ideas. These results are consistent with what economists have found using more recent data, but this research provides evidence that these relationships have existed over a much longer time period.

The positive relationships between innovation and income equality/social mobility in the early 20th century should also help alleviate the fears some people have about the negative effects of creative destruction. Innovation inevitably creates adjustment costs that harm some people, but during this period it doesn’t appear that it caused widespread harm to workers.

If we reduce regulation today in order to encourage more innovation and competition we will likely experience similar results, along with more economic growth and all of the consumer benefits.

Why Do We Get So Much Regulation?

Over the past 60 or 70 years, levels of regulation in the United States have been on the rise by almost any measure. In 1950, for example, the US Code of Federal Regulations contained only 9,745 pages. Today that number is over 178,000 pages. There is less information about regulation at the state level, but anecdotal evidence suggests regulation is on the rise there too. For example, the Commonwealth of Kentucky publishes its regulatory code each year in a series of volumes known as the Kentucky Administrative Regulations Service (KARS). These volumes consist of books of roughly 400 to 500 pages each. In 1975, there were 4 books in the KARS. By 2015, that number had risen to 14 books. There are many different theories as to why so much regulation gets produced, so it makes sense to review some of those theories in order to explain the phenomenon of regulatory accumulation.

Perhaps the most popular theory of regulation is that it exists to advance the public interest. According to this view, well-intended regulators intervene in the marketplace due to “market failures”, which are situations where the market fails to allocate resources optimally. Some common examples of market failures include externalities (cases where third parties are impacted by the transactions involving others), asymmetric information (cases where buyers and sellers possess different levels of information about products being sold), public goods problems (whereby certain items are under-provided or not provided at all by the market), and concentration of industry in the form of monopoly power. When market failure occurs, the idea is that regulators intervene in order to make imperfect markets behave more like theoretically perfect markets.

Other theories of regulation are less optimistic about the motivations of the different participants in the rulemaking process. One popular theory suggests regulators work primarily to help powerful special interest groups, a phenomenon known as regulatory capture. Under this view—commonly associated with the writings of University of Chicago economist George Stigler—regulators fix prices and limit entry into an industry because it benefits the industry being regulated. An example would be how regulators, up until the late 1970s, fixed airline prices above what they would have been in a competitive market.

The interest groups that “capture” regulatory agencies are most often thought to be businesses, but it’s important to remember that agencies can also be captured by other groups. The revolving door between the government and the private sector doesn’t end with large banks. It also extends to nonprofit groups, labor unions, and activist groups of various kinds that also wield significant resources and power.

The “public choice theory” of regulation posits that public officials are primarily self-interested, rather than being focused on advancing the public interest. Under this view, regulators may be most concerned with increasing their own salaries or budgets. Or, they may be focused primarily on concentrating their own power.

It’s also possible that regulators are not nearly so calculating and rational as this. The behavioral public choice theory of regulation suggests regulators behave irrationally in many cases, due to the cognitive limitations inherent in all human beings. A case in point is how regulatory agencies routinely overestimate risks, or try to regulate already very low risks down to zero. There is significant evidence that people, including regulators, tend to overestimate small probability risks, leading to responses that are disproportionate to the expected harm. For example, the Environmental Protection Agency’s evaluations of sites related to the Superfund clean-up project routinely overestimated risks by orders of magnitude. Such overreactions might also be a response to public perceptions, for example in response to high-profile media events, such as following acts of terrorism. If the public’s reactions carry over into the voting booth, then legislation and regulation may be enacted soon after.

One of the more interesting and novel theories as to why we see regulation relates to public trust in institutions. A 2010 paper in the Quarterly Journal of Economics noted that there is a strong correlation between trust in various social institutions and some measures of regulation. The figure below is an example of this relationship, found in the paper.

QJE trust

Trust can relate to public institutions, such as the government, but it also extends to trust in corporations and in our fellow citizens. Interestingly, the authors of the QJE article argue that an environment of low trust and high regulation can be a self-fulfilling prophecy. Low levels of trust, ironically, can lead to more demand for regulation, even when there is little trust in the government. One reason for this might be that people think that giving an untrustworthy government control over private affairs is still superior to allowing unscrupulous businesses to have free rein.

The flip-side of this situation is that in high-trust countries, such as Sweden, the public demands lower levels of regulation, and this can breed more trust. So an environment of free-market policies combined with trustworthy businesses can produce good market outcomes and more trust, and this too can be self-fulfilling, allowing some countries to maintain a “good” equilibrium.

This is concerning for the United States because trust has been on the decline in a whole host of areas. A Gallup survey has been asking questions related to trust in public institutions for several decades. There is a long-term secular decline in Gallup’s broad measure of trust, as evidenced by the figure below, although periodically there are upswings in the measure.

gallup trust

Pew has a similar survey that looks at public trust in the government. Here the decline is even more evident.

pew trust

Given that regulation has been on the rise for decades, a decline in trust in the government, in corporations, and in each other may be a key reason this is occurring. Of course, it’s possible that these groups are simply dishonest and do not merit public trust. Nonetheless, the US might find itself stuck in a self-fulfilling situation, whereby distrust breeds more government intervention in the economy, worse market outcomes, and even more distrust in the future. Getting out of that kind of situation is not easy. One way might be through education about the institutions that lead to free and prosperous societies, as well as through fostering a culture in which corruption and unscrupulous behavior are discouraged.

There are a number of theories that seek to explain why regulation comes about. No theory is perfect, and some theories explain certain situations better than others. Nonetheless, the theories presented here go a long way towards laying out the forces that lead to regulation, even if no one theory can explain all regulation at all times.

Today’s public policies exacerbate our differences

The evidence that land-use regulations harm potential migrants keeps piling up. A recent paper in the Journal of Urban Economics finds that young workers (age 22–26) of average ability who enter the labor force in a large city (metropolitan areas with a population > 1.5 million) earn a wage premium equal to 22.9% after 5 years.

The author also finds that high-ability workers experience additional wage growth in large cities but not in small cities or rural areas. This leads to high-ability workers sorting themselves into large cities and contributes an additional 3.2% to the urban wage-growth premium.

These findings are consistent with several other papers that have analyzed the urban wage premium. Potential causes of the wage premium are faster human capital accumulation in denser, more populated places due to knowledge spillovers and more efficient labor markets that better match employers and employees.

The high cost of housing in San Francisco, D.C., New York and dozens of other cities is preventing many young people from earning more money and improving their lives. City officials and residents need to strike a better balance between maintaining the “charm” of their neighborhoods and affordability. This means less regulation and more building.

City vs. rural is only one of the many dichotomies pundits have been discussing since the 2016 election. Some of the other versions of “two Americas” are educated vs. non-educated, white collar vs. blue collar, and rich vs. poor. We can debate how much these differences matter, but to the extent that they are an issue for the country our public policies have reinforced the barriers that allow them to persist.

Occupational licensing makes it more difficult for blue-collar manufacturing workers to transition to middle-class service sector jobs. Federal loan subsidies have made four-year colleges artificially cheap to the detriment of people with only a high school education. Restrictive zoning has made it too expensive for many people to move to the places with the best labor markets. And once you’re in a city, unless you’re in one of the best neighborhoods, your fellow citizens often keep out employers and providers of much-needed consumer staples like Wal-Mart, while using eminent domain to build their next playground.

Over time people have sorted themselves into different groups and then erected barriers to keep others out. Communities do it with land-use regulations, occupations do it with licensing and established firms do it with regulatory capture. If we want a more prosperous America that de-emphasizes our differences and provides people of all backgrounds with opportunity we need more “live and let live” and less “my way or the highway”.

More labor market freedom means more labor force participation

The U.S. labor force participation (LFP) rate has yet to bounce back to its pre-recession level. Some of the decline is due to retiring baby-boomers, but even the prime-age LFP rate, which only counts people age 25–54 and thus is less affected by retirement, has not recovered.

Economists and government officials are concerned about the weak recovery in labor force participation. A high LFP rate is usually a sign of a strong economy—people are either working or optimistic about their chances of finding a job. A low LFP rate is often a sign of little economic opportunity or disappointment with the employment options available.
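As a quick refresher on the definition: the LFP rate is the share of the civilian noninstitutional population age 16 and up that is either employed or unemployed but actively looking for work. A minimal sketch, with hypothetical figures in thousands:

```python
# Hypothetical state-level figures (thousands); not from any actual survey.
def lfp_rate(employed: float, unemployed: float, population_16_plus: float) -> float:
    """Return the labor force participation rate as a percentage.

    'unemployed' here means jobless AND actively searching; people who
    have stopped looking are out of the labor force entirely.
    """
    labor_force = employed + unemployed
    return 100 * labor_force / population_16_plus

# 1,200k employed + 80k unemployed out of 2,050k adults age 16+:
print(f"{lfp_rate(1200, 80, 2050):.1f}%")  # 62.4%
```

Note that discouraged workers who quit searching lower the LFP rate without showing up in the unemployment rate, which is why a weak LFP recovery can coexist with a falling unemployment rate.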

The U.S. is a large, diverse country so the national LFP rate obscures substantial state variation in LFP rates. The figure below shows the age 16 and up LFP rates for the 50 states and the U.S. as a whole (black bar) in 2014. (data)


The rates range from a high of 72.6% in North Dakota to a low of 53.1% in West Virginia. The U.S. rate was 62.9%. Several of the states with relatively low rates are in the south, including Mississippi, Alabama and Arkansas. Florida and Arizona also had relatively low labor force participation, which is not surprising considering their reputations as retirement destinations.

There are several reasons why some states have more labor force participation than others. Demographics is one: states with a higher percentage of people over age 65 and between 16 and 22 will have lower rates on average since people in these age groups are often retired or in school full time. States also have different economies made up of different industries and at any given time some industries are thriving while others are struggling.

Federal and state regulation also play a role. Federal regulation disparately impacts different states because of the different industrial compositions of state economies. For example, states with large energy industries tend to be more affected by federal regulation than other states.

States also tax and regulate their labor markets differently. States have different occupational licensing standards, different minimum wages and different levels of payroll and income taxes among other things. Each of these things alters the incentive for businesses to hire or for people to join the labor market and thus affects states’ LFP rates.

We can see the relationship between labor market freedom and labor force participation in the figure below. The figure shows the relationship between the Economic Freedom of North America’s 2013 labor market freedom score (x-axis) and the 2014 labor force participation rate for each state (y-axis).


As shown in the figure there is a positive relationship—more labor market freedom is associated with a higher LFP rate on average. States with lower freedom scores such as Mississippi, Kentucky and Alabama also had low LFP rates while states with higher freedom scores such as North Dakota, South Dakota and Virginia had higher LFP rates.

This is not an all-else-equal analysis and other variables—such as demographics and industry composition which I mentioned earlier—also play a role. That being said, state officials concerned about their state’s labor market should think about what they can do to increase labor market freedom—and economic freedom more broadly—in their state.

What else can the government do for America’s poor?

This year marks the 20th anniversary of the 1996 welfare reforms, which has generated some discussion about poverty in the U.S. I recently spoke to a group of high school students on this topic and about what reforms, if any, should be made to our means-tested welfare programs.

After reading several papers (e.g. here, here and here), the book Hillbilly Elegy, and reflecting on my own experiences I am not convinced the government is capable of doing much more.


President Lyndon Johnson declared “War on Poverty” in his 1964 State of the Union address. Over the last 50 years there has been some progress, but there are still approximately 43 million Americans living in poverty as defined by the U.S. Census Bureau.

Early on it looked as if poverty would be eradicated fairly quickly. In 1964, prior to the “War on Poverty”, the official poverty rate was 20%. It declined rapidly from 1965 to 1972, especially for the most impoverished groups, as shown in the figure below (data from Table 1 in Haveman et al., 2015).


Since 1972 the poverty rate has remained fairly constant. It reached its lowest point in 1973—11.1%—but has since fluctuated between roughly 11% and 15%, largely in accordance with the business cycle. The number of people in poverty has increased, but that is unsurprising considering the relatively flat poverty rate coupled with a growing population.


Meanwhile, an alternative measure called the supplemental poverty measure (SPM) has declined, but it was still over 15% as of 2013, as shown below.


The official poverty measure (OPM) only includes cash and cash benefits in its measure of a person’s resources, while the SPM includes tax credits and non-cash transfers (e.g. food stamps) as part of someone’s resources when determining their poverty status. The SPM also makes adjustments for local cost of living.

For example, the official poverty threshold for a single person under the age of 65 was $12,331 in 2015. But $12,331 can buy more in rural South Carolina than it can in Manhattan, primarily because of housing costs. The SPM takes these differences into account, although I am not sure it should for reasons I won’t get into here.

Regardless of the measure we look at, poverty is still higher than most people would probably expect considering the time and resources that have been expended trying to reduce it. This is especially true in high-poverty areas where poverty rates still exceed 33%.

A county-level map from the Census that uses the official poverty measure shows the distribution of poverty across the 48 contiguous states in 2014. White represents the least amount of poverty (3.2% to 11.4%) and dark pink the most (32.7% to 52.2%).


The most impoverished counties are in the south, Appalachia and rural west, though there are pockets of high-poverty counties in the plains states, central Michigan and northern Maine.

Why haven’t we made more progress on poverty? And is there more that government can do? I think these questions are intertwined. My answer to the first is it’s complicated and to the second I don’t think so.

Past efforts

The inability to reduce the official poverty rate below 10% doesn’t appear to be due to a lack of money. The figure below shows real per capita expenditures—the sum of federal, state and local spending—on the top 84 (top line) and the top 10 (bottom line) means-tested welfare programs since 1970. It is from Haveman et al. (2015).


There has been substantial growth in both since the largest drop in poverty occurred in the late 1960s. If money was the primary issue one would expect better results over time.

So if the amount of money is not the issue what is? It could be that even though we are spending money, we aren’t spending it on the right things. The chart below shows real per capita spending on several different programs and is also from Haveman et al. (2015).


Spending on direct cash-assistance programs—Aid to Families with Dependent Children (AFDC) and Temporary Assistance for Needy Families (TANF)—has fallen over time, while spending on programs designed to encourage work—the Earned Income Tax Credit (EITC)—and on non-cash benefits like food stamps and housing aid has increased.

In the mid-1970s welfare programs began shifting from primarily cash aid (AFDC, TANF) to work-based aid (EITC). Today the EITC and food stamps are the core programs of the anti-poverty effort.

It’s impossible to know whether this shift has resulted in more or less poverty than what would have occurred without it. We cannot reconstruct the counterfactual without going back in time. But many people think that more direct cash aid, in the spirit of AFDC, is what’s needed.

The difference today is that instead of means-tested direct cash aid, many are calling for a universal basic income or UBI. A UBI would provide each citizen, from Bill Gates to the poorest single mother, with a monthly cash payment, no strings attached. Prominent supporters of a UBI include libertarian-leaning Charles Murray and people on the left such as Matt Bruenig and Elizabeth Stoker.

Universal Basic Income?

The details of each UBI plan vary, but the basic appeal is the same: It would reduce the welfare bureaucracy, simplify the process for receiving aid, increase the incentive to work at the margin since it doesn’t phase out, treat low-income people like adults capable of making their own decisions and mechanically decrease poverty by giving people extra cash.

A similar proposal is a negative income tax (NIT), first popularized by Milton Friedman. The current EITC is a negative income tax conditional on work, since it is refundable, i.e., eligible people receive the difference between their EITC and the taxes they owe. The NIT has its own problems, discussed in the link above, but it still has its supporters.
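The mechanics of an NIT are simple to illustrate. In Friedman's scheme, a household earning less than some income threshold receives a payment equal to a subsidy rate times the unused portion of that threshold. The $12,000 threshold and 50% rate below are hypothetical numbers chosen for illustration, not drawn from any actual plan:

```python
# Stylized negative income tax in the spirit of Friedman's proposal.
# Threshold and subsidy rate are hypothetical.
def nit_payment(earned_income: float, threshold: float = 12_000,
                subsidy_rate: float = 0.5) -> float:
    """Payment = subsidy_rate * (threshold - income), floored at zero."""
    return max(0.0, subsidy_rate * (threshold - earned_income))

print(nit_payment(4_000))   # 4000.0 -> total income $8,000
print(nit_payment(0))       # 6000.0 -> income floor for non-earners
print(nit_payment(20_000))  # 0.0    -> no payment above the threshold
```

Because each extra dollar earned reduces the payment by only 50 cents, the recipient always comes out ahead by working more, which is the design's main appeal over programs that phase out dollar-for-dollar.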

In theory I like a UBI. Economists in general tend to favor cash benefits over in-kind programs like vouchers and food stamps due to their simplicity and larger effects on recipient satisfaction or utility. In reality, however, a UBI of even $5,000 is very expensive and there are public choice considerations that many UBI supporters ignore, or at least downplay, that are real problems.

The political process can quickly turn an affordable UBI into an unaffordable one. It seems reasonable to expect that politicians trying to win elections will make UBI increases part of their platform, with each trying to outdo the other. There is little that can be done, short of a constitutional amendment (and even those can be changed), to ensure that political forces don’t alter the amount, recipient criteria or add additional programs on top of the UBI.

I think the history of the income tax demonstrates that a relatively low, simple UBI would quickly morph into a monstrosity. In 1913 there were 7 income tax brackets that applied to all taxpayers, and a worker needed to make more than $20K (equivalent to $487,733 in 2016) before he reached the second bracket of 2% (!). By 1927 there were 23 brackets and the second one, at 3%, kicked in at $4K ($55,500 in 2016) instead of $20K. And of course we are all aware of the current tax code’s problems. To chart a different course for the UBI is, in my opinion, a work of fantasy.
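The dollar conversion in the paragraph above is a standard CPI adjustment. A back-of-the-envelope check, using approximate annual-average CPI-U values (1913 ≈ 9.9, 2016 ≈ 240.0) rather than whatever exact index vintage the cited figure used:

```python
# Rough CPI adjustment; index values are approximate annual averages.
def adjust_for_inflation(amount: float, cpi_from: float, cpi_to: float) -> float:
    return amount * cpi_to / cpi_from

bracket_1913 = adjust_for_inflation(20_000, 9.9, 240.0)
print(f"${bracket_1913:,.0f}")  # roughly $485,000, in line with the $487,733 cited
```

Small differences from the text's $487,733 reflect which month and index series are used; the order of magnitude is the point.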

Final thoughts

Because of politics, I think an increase in the EITC (and reducing its error rate), for both working parents and single adults, coupled with criminal justice reform that reduces the number of non-violent felons—who have a hard time finding employment upon release—are preferable to a UBI.

I also support the abolition of the minimum wage, which harms the job prospects of low-skilled workers. If we are going to tie anti-poverty programs to work in order to encourage movement towards self-sufficiency, then we should make it as easy as possible to obtain paid employment. Eliminating the minimum wage and subsidizing income through the EITC is a fairer, more efficient way to reduce poverty.

Additionally, if a minimum standard of living is something that is supported by society, then all of society should share the burden via tax-funded welfare programs. It is not philanthropic to force business owners to help the poor on behalf of the rest of us.

More economic growth would also help. Capitalism is responsible for lifting billions of people out of dire poverty in developing countries and the poverty rate in the U.S. falls during economic expansions (see previous poverty rate figures). Unfortunately, growth has been slow over the last 8 years and neither presidential candidate’s policies inspire much hope.

In fact, a good way for the government to help the poor is to reduce regulation and lower the corporate tax rate, which would help economic growth and increase wages.

Despite the relatively high official poverty rate in the U.S., poor people here live better than just about anywhere else in the world. Extreme poverty—think Haiti—doesn’t exist in the U.S. On a consumption rather than income basis, there’s evidence that the absolute poverty rate has fallen to about 4%.

Given the way government functions, I don’t think there is much left for it to do. Its lack of local knowledge and resulting blunt, one-size-fits-all solutions, coupled with its general inefficiency, make it incapable of helping the unique cases that fall through the current social safety net.

Any additional progress will need to come from the bottom up and I will discuss this more in a future post.

More competition can lead to less inequality

Wealth inequality in the United States and many European countries, especially between the richest and the rest, has been a popular topic since Thomas Piketty’s Capital in the 21st Century was published. Piketty and others argue that tax data show that wealth inequality has increased in the U.S. since the late 1970s, as seen in the figure below from a paper by Emmanuel Saez—Piketty’s frequent co-author—and Gabriel Zucman.


The figure shows the percentage of all U.S. household wealth that is owned by the top 0.1% of households, which as the note explains consists of about 160,000 families. The percentage fell from 25% in the late 1920s to about 7% in the late 1970s and then began to rise. Many people have used this and similar data to argue for higher marginal taxes on the rich and more income redistribution in order to close the wealth gap between the richest and the rest.

While politicians and pundits continue debating what should be done, if anything, about taxes and redistribution, many economists are trying to understand what factors can affect wealth and thus the wealth distribution over time. An important one that is not talked about enough is competition, specifically Joseph Schumpeter’s idea of creative destruction.

Charles Jones, a professor at Stanford, has discussed the connection between profits and creative destruction and their link with inequality. To help illustrate the connection, Mr. Jones uses the example of an entrepreneur who creates a new phone app. The app’s creator will earn profits over time as the app’s popularity and sales increase. However, her profits will eventually decline due to the process of creative destruction: a newer, better app will hit the market that pulls her customers away from her product, erodes her sales and forces her to adapt or fail. The longer she is able to differentiate her product from others, the longer she will be in business and the more money she will earn. This process is stylized in the figure below.


If the app maintains its popularity for the duration of firm life 1, the entrepreneur will earn profits P1. After that the firm is replaced by a new firm that also exists for firm life 1 and earns profit P1. The longer a firm is able to maintain its product’s uniqueness, the more profit it will earn, as shown by firm life 2: In this case the firm earns profit P2. A lack of competition stretches out a firm’s life cycle since the paucity of substitutes makes it costlier for consumers to switch products if the value of the firm’s product declines.

Higher profits can translate into greater inequality as well, especially if we broaden the discussion to include wages and sole-proprietor income. Maintaining market power for a long period of time by restricting entry not only increases corporate profits, it also allows doctors, lawyers, opticians, and a host of other workers who operate under a licensing regime that restricts entry to earn higher wages than they otherwise would. The higher wages obtained due to state restrictions on healthcare provision, restrictions on providing legal services and state-level occupational licensing can exacerbate inequality at the lower levels of the income distribution as well as the higher levels.

Workers and sole proprietors in the U.S. have been using government to restrict entry into occupations since the country was founded. In the past such restrictions were often drawn on racial or ethnic lines. In their Pulitzer Prize-winning history of New York City, Gotham, historians Edwin G. Burrows and Mike Wallace write about New York City cartmen in the 1820s:

American-born carters complained to the city fathers that Irish immigrants, who had been licensed during the war [of 1812] while Anglo-Dutchmen were off soldiering, were undercutting established rates and stealing customers. Mayor Colden limited future alien licensing to dirt carting, a field the Irish quickly dominated. When they continued to challenge the Anglo-Americans in other areas, the Society of Cartmen petitioned the Common Council to reaffirm their “ancient privileges”. The municipal government agreed, rejecting calls for the decontrol of carting, as the business and trade of the city depended on it, and in 1826 the council banned aliens from carting, pawnbroking, and hackney-coach driving; soon all licensed trades were closed to them.

Modern occupational licensing is the legacy of these earlier, successful efforts to protect profits by limiting entry, often of “undesirables”. Today’s occupational licensing is no longer a response to racial or ethnic prejudices, but it has similar results: It protects the earning power of established providers.

Throughout America’s history the economy has been relatively dynamic, and this dynamism has made it difficult for businesses to earn profits for long periods of time; only 12% of the companies on the Fortune 500 in 1955 were still on the list in 2015. In a properly functioning capitalist economy, newer, poorer firms will regularly supplant older, richer firms and this economic churn tempers inequality.

The same churn occurs among the highest echelon of individuals as well. An increasing number of the Forbes 400 are self-made, often from humble beginnings. In 1984, 99 people on the list inherited their fortune and were not actively growing it. By 2014 only 28 people were in the same position. Meanwhile, the percentage of the Forbes 400 who are largely self-made increased from 43% to 69% over the same period.

But this dynamism may be abating and excessive regulation is likely a factor. For example, the rate of new-bank formation from 1990 – 2010 was about 100 banks per year. Since 2010, the rate has fallen to about three per year. Researchers have attributed some of the decline of small banks to the Dodd-Frank Wall Street Reform Act, which increased compliance costs that disproportionately harm small banks. Fewer banks means less competition and higher prices.

Another recent example of how a lack of competition can increase profits and inequality is EpiPen. The price of EpiPen—a medicine used to treat severe allergic reactions to things like peanuts—has increased dramatically since 2011. This price increase was possible because there are almost no good substitutes for EpiPen, and the lack of substitutes can be attributed to the FDA and other government policies that have insulated EpiPen’s maker, Mylan, from market competition. Meanwhile, the compensation of Mylan’s CEO Heather Bresch increased by 671% from 2007 to 2015. I doubt that Bresch’s compensation would have increased by such a large amount without the profits of EpiPen.

Letting firms and workers compete in the marketplace fosters economic growth and can help dampen inequality. To the extent that wealth inequality is an issue we don’t need more regulation and redistribution to fix it: We need more competition.

Washington’s Legitimacy Crisis Presents an Opportunity for the States

You’ve heard it before. Americans are deeply unhappy with Washington, DC. Sixty-five percent say the country is on the wrong track. Confidence in institutions is near all-time lows. Congress’s approval rating is terrible, and the two major presidential candidates are viewed more negatively than any other mainstream presidential candidates in recent memory. Only nineteen percent of the public trust the government to do the right thing all or most of the time.

Washington’s dysfunction—what is probably driving these perceptions—extends to all three branches of the federal government. Congress is in a near-permanent state of gridlock. The president uses his executive authority wherever possible, but often with little practical impact. Even regulatory agencies are facing what Brookings Institution scholar Philip Wallach has dubbed a legitimacy crisis of the administrative state, as the public grows more skeptical of leaving the most important policymaking decisions to insulated and unelected regulators.

The courts are in little better shape. Since the death of Justice Antonin Scalia, the Supreme Court has been hobbled without its ninth member. Even before this development, there was a perception building that politics too often enters the Court’s decisions, no doubt contributing to the gradual increase in the Supreme Court’s disapproval rating over time.

On a brighter note, in contrast to this crisis of legitimacy at the federal level, polling data suggests that Americans still generally trust their state and local governments. The cop on the beat, the garbage man, and the postal worker are still trusted symbols of everyday American life. Furthermore, the social divisions that make dramatic change at the federal level difficult (i.e. red state versus blue state stuff) actually make it easier to get things done in the states.

Where governorships and state legislatures are dominated by a single party, there are opportunities to advance creative policy solutions, allowing the states to fulfill their roles as laboratories of democracy. Policy reforms in the states, where successful, can lay the groundwork for future changes at the federal level, perhaps restoring badly-needed trust in our ailing institutions.

There are many reasons to be cynical about where the country is headed, and to doubt whether our leaders are capable of addressing our looming challenges. However, the states should not be complacent about this state of affairs. They should view Washington’s dysfunction as an opportunity, not a reason for despair. Now is an opportune moment to step up and demonstrate what it means to govern. Perhaps…just perhaps…our friends in Washington might pay attention and learn something.

City population dynamics since 1850

The reason why some cities grow and some cities shrink is a heavily debated topic in economics, sociology, urban planning, and public administration. In truth, there is no single reason why a city declines. Often exogenous factors – new modes of transportation, increased globalization, institutional changes, and federal policies – initiate the decline while subsequent poor political management can exacerbate it. This post focuses on the population trends of America’s largest cities since 1850 and how changes in these factors affected the distribution of people within the US.

When water transportation, water power, and proximity to natural resources such as coal were the most important factors driving industrial productivity, businesses and people congregated in locations near major waterways for power and shipping purposes. The graph below shows the top 10 cities* by population in 1850 and follows them until 1900. The rank of the city is on the left axis.

top cities 1850-1900


* The 9th, 11th, and 12th ranked cities in 1850 were all incorporated into Philadelphia by 1860. Pittsburgh was the next highest ranked city (13th) that was not incorporated so I used it in the graph instead.

All of the largest cities were located on heavily traveled rivers (New Orleans, Cincinnati, Pittsburgh, and St. Louis) or on the coast and had busy ports (New York, Boston, Philadelphia, Brooklyn, and Baltimore). Albany, NY may seem like an outlier but it was the starting point of the Erie Canal.

As economist Ed Glaeser (2005) notes “…almost every large northern city in the US as of 1860 became an industrial powerhouse over the next 60 years as factories started in central locations where they could save transport costs and make use of large urban labor forces.”

Along with waterways, railroads were an important mode of transportation from 1850 – 1900, and many of these cities had important railroads running through them, such as the B&O through Baltimore and the Erie Railroad in New York. The increasing importance of railroads impacted the list of top 10 cities in 1900 as shown below.

top cities 1900-1950

A similar but not identical set of cities dominated the urban landscape over the next 50 years. By 1900, New Orleans, Brooklyn (merged with New York), Albany, and Pittsburgh were replaced by Chicago, Cleveland, Buffalo, and San Francisco. Chicago, Cleveland, and Buffalo are all located on the Great Lakes and thus had water access, but it was the increasing importance of railroad shipping and travel that helped their populations grow. Buffalo was served by several major railroads and was also the terminal point of the Erie Canal. San Francisco became much more accessible after the completion of the Pacific Railroad in 1869, but the California Gold Rush in the late 1840s got its population growth started.

As rail and eventually automobile/truck transportation became more important during the early 1900s, cities that relied on strategic river locations began to decline. New Orleans was already out of the top 10 by 1900 (falling from 5th to 12th) and Cincinnati went from 10th in 1900 to 18th by 1950. Buffalo also fell out of the top 10 during this time period, declining from 8th to 15th. But despite some changes in the rankings, there was only one warm-weather city in the top 10 as late as 1950 (Los Angeles). However, as the next graph shows, there was a surge in the populations of warm-weather cities during the period from 1950 to 2010 that caused many of the older Midwestern cities to fall out of the rankings.

top cities 1950-2010

The largest shakeup in the population rankings occurred during this period. Out of the top 10 cities in 1950, only 4 (Philadelphia, Los Angeles, Chicago, and New York) were still in the top 10 in 2010. All four were in the 2010 top 5, with Houston (4th in 2010, but only 14th in 1950) the lone top-5 city that had not already been in the 1950 top 10. The cities ranked 6 – 10 in 1950 fell out of the top 20, while Detroit declined from 5th to 18th. The large change in the rankings during this time period is striking when compared to the relative stability of the earlier time periods.

Economic changes due to globalization and the prevalence of right-to-work laws in the southern states, combined with preferences for warm weather and other factors have resulted in both population and economic decline in many major Midwestern and Northeastern cities. All of the new cities in the top ten in 2010 have relatively warm weather: Phoenix, San Antonio, San Diego, Dallas, and San Jose. Some large cities missing from the 2010 list – particularly San Francisco and perhaps Washington D.C. and Boston as well – would probably be ranked higher if not for restrictive land-use regulations that artificially increase housing prices and limit population growth. In those cities and other smaller cities – primarily located in Southern California – low population growth is a goal rather than a result of outside forces.

The only cold-weather cities in the top 15 in 2014 that were not also in the top 5 in 1950 were Indianapolis, IN (14th) and Columbus, OH (15th). These two cities not only avoided the fate of nearby Detroit and Cleveland, they thrived. From 1950 to 2014 Columbus’ population grew by 122% and Indianapolis’ grew by 99%. This is striking compared to the 57% decline in Cleveland and the 63% decline in Detroit during the same time period.
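For readers who want to check figures like these themselves, the growth numbers are simple percentage changes between two census counts. Here is a minimal sketch of that arithmetic; the populations below are approximate round numbers in thousands, not exact census counts.

```python
# Sketch of the percentage-change arithmetic behind the city growth figures.
# Populations are approximate round numbers (in thousands), not exact census counts.
def pct_change(start, end):
    """Percent change from start to end."""
    return (end - start) / start * 100

# approximate 1950 and 2014 populations, in thousands (illustrative)
cities = {
    "Columbus": (376, 836),
    "Cleveland": (915, 390),
}

for city, (pop_1950, pop_2014) in cities.items():
    print(f"{city}: {pct_change(pop_1950, pop_2014):.0f}%")
```

With these rounded inputs the sketch reproduces growth of roughly 122% for Columbus and a decline of roughly 57% for Cleveland, in line with the figures cited above.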

So why have Columbus and Indianapolis grown since 1950 while every other large city in the Midwest has declined? There isn’t an obvious answer. One thing, among many, that Columbus and Indianapolis have in common is that they are both state capitals. State spending as a percentage of Gross State Product (GSP) has been increasing since 1970 across the country, as shown in the graph below.

OH, IN state spending as per GSP

In Ohio state spending growth as a percentage of GSP has outpaced the nation since 1970. It is possible that increased state spending in Ohio and Indiana is crowding out private investment in other parts of those states. And since much of the money collected by the state ends up being spent in the capital via government wages, both Columbus and Indianapolis grow relative to other cities in their respective states.

There has also been an increase in state level regulation over time. As state governments become larger players in the economy business leaders will find it more and more beneficial to be near state legislators and governors in order to lobby for regulations that help their company or for exemptions from rules that harm it. Company executives who fail to get a seat at the table when regulations are being drafted may find that their competitors have helped draft rules that put them at a competitive disadvantage. The decline of manufacturing in the Midwest may have created an urban reset that presented firms and workers with an opportunity to migrate to areas that have a relative abundance of an increasingly important factor of production – government.

Education, Innovation, and Urban Growth

One of the strongest predictors of urban growth since the start of the 20th century is the skill level of a city’s population. Cities that have a highly skilled population, usually measured as the share of the population with a bachelor’s degree or more, tend to grow faster than similar cities with less educated populations. This is true at both the metropolitan level and the city level. The figure below plots the population growth of 30 large U.S. cities from 1970 – 2013 on the vertical axis and the share of the city’s 25 and over population that had at least a bachelor’s degree in 1967 on the horizontal axis. (The education data for the cities are here. I am using the political city’s population growth and the share of the central city population with a bachelor’s degree or more from the census data linked to above.)

BA, city growth 1

As shown in the figure there is a strong, positive relationship between the two variables: The correlation coefficient is 0.61. It is well known that over the last 50 years cities in warmer areas have been growing while cities in colder areas have been shrinking, but in this sample the cities in warmer areas also tended to have a better educated population in 1967. Many of the cities known today for their highly educated populations, such as Seattle, San Francisco, and Washington D.C., also had highly educated populations in 1967. Colder manufacturing cities such as Detroit, Buffalo, and Newark had less educated workforces in 1967 and subsequently less population growth.

The above figure uses data on both warm and cold cities, but the relationship holds for cold cities alone as well. Below is the same graph, but it depicts only cities with a January mean temperature below 40°F. Twenty of the 30 cities meet this criterion.

BA, city growth 2

Again, there is a strong, positive relationship. In fact it is even stronger; the correlation coefficient is 0.68. Most of the cities in the graph lost population from 1970 – 2013, but the cities that did grow, such as Columbus, Seattle, and Denver, all had relatively educated populations in 1967.

There are several reasons why an educated population and urban population growth are correlated. One is that a faster accumulation of skills and human capital spillovers in cities increase wages which attracts workers. Also, the large number of specialized employers located in cities makes it easier for workers, especially high-skill workers, to find employment. Cities are also home to a range of consumption amenities that attract educated people, such as a wide variety of shops, restaurants, museums, and sporting events.

Another reason why an educated workforce may actually cause city growth has to do with its ability to adjust and innovate. On average, educated workers tend to be more innovative and better able to learn new skills. When there is a negative, exogenous shock to an industry, such as the decline of the automobile industry or the steel industry, educated workers can learn new skills and create new industries to replace the old ones. Many of the mid-20th century workers in Detroit and other Midwestern cities decided to forgo higher education because good-paying factory jobs were plentiful. When manufacturing declined those workers had a difficult time learning new skills. Also, the large firms that dominated the economic landscape, such as Ford, did not support entrepreneurial thinking. This meant that even the educated workers were not prepared to create new businesses.

Local politicians often want to protect local firms in certain industries through favorable treatment and regulation. But often this protection harms newer, innovative firms since they are forced to compete with the older firms on an uneven playing field. Political favoritism fosters a stagnant economy since in the short-run established firms thrive at the expense of newer, more innovative startups. Famous political statements such as “What’s good for General Motors is good for the country” helped mislead workers into thinking that government was willing and able to protect their employers. But governments at all levels were unable to stop the economic forces that battered U.S. manufacturing.

To thrive in the 21st century local politicians need to foster economic environments that encourage innovation and ingenuity. The successful cities of the future will be those that are best able to innovate and to adapt in an increasingly complex world. History has shown us that an educated and entrepreneurial workforce is capable of overcoming economic challenges, but to do this people need to be free to innovate and create. Stringent land-use regulations, overly-burdensome occupational licensing, certificate-of-need laws, and other unnecessary regulations create barriers to innovation and make it more difficult for entrepreneurs to create the firms and industries of the future.

Why regulations that require cabs to be painted the same color are counterproductive

A few weeks ago, my colleagues Chris Koopman, Adam Thierer and I filed a comment with the FTC on the sharing economy. The comment coincided with a workshop that the FTC held at which Adam was invited to speak. Our comment, our earlier paper (forthcoming in the Pepperdine Journal of Business Entrepreneurship and the Law), and a superb piece that Adam and Chris wrote with MA fellows Anne Hobson and Chris Kuiper, have been getting a fair amount of press attention, most of it positive.

I want to highlight one piece that seems to have misunderstood us. I highlight it not because I blame the author, but because I assume we must not have described our point well. Paul Goddin of MobilityLab writes:

Their argument seems valid, but an example they use is New York City’s rule that taxicabs be painted the same color. They argue this regulation is a barrier to entry, yet neglect to mention that Uber also requires its drivers to adhere with automobile standards (although these standards have been loosened recently). As of this article, Uber’s drivers must possess a late-model 2005 sedan (2000 in some cities, 2007-08 in others), with specific color and make restrictions for those who operate the company’s Black car service.

A rule that requires everyone in an industry to use the exact same equipment, branding and paint color is, I suppose, a barrier to entry. But that isn’t why we raised the issue. We raise it because—more importantly—it is a barrier to signaling quality.

It is a good thing that Uber and Lyft require their drivers to adhere to standards, just as it is a good thing that TGI Fridays and Coca-Cola set their own standards. Walk into a TGI Fridays anywhere in the world and you will encounter a familiar experience. That is because the company sets standards for its recipes, its decorations, its employees’ behavior, its uniforms, and much else. Similarly strict standards govern the way Coca-Cola is packaged and marketed. Retailers that operate soda fountains are all supposed to combine the syrup and the carbonated water in the same way. If they don’t, they may find that Coca-Cola no longer wants to work with them.

These practices ensure quality. And they help overcome what would otherwise be a significant information asymmetry between the buyer and the seller. But notice that these signals only work because they are tied to the brands. Imagine what would happen if Chili’s, Outback Steakhouse, and Macaroni Grill were all required by law to adopt the same logos, the same decor, the same recipes, and the same uniforms as TGI Fridays. Customers would have no way of distinguishing between the brands, and therefore the companies would have little incentive to provide quality service in order to protect their reputations. Who cares about cooking a T-bone properly if the other guys are likely to get blamed for it?

So herein lies the problem with taxi regulations that require all cabs to offer the same sort of service, right down to the color of their cars: If every cab looks the same, no one cab company has an incentive to carefully guard its reputation.