Category Archives: Regulation

States with lower minimum wages will feel the impact of California’s experiment

California governor Jerry Brown recently signed a law raising California’s minimum wage to $15/hour by 2022. This ill-advised increase in the minimum wage will banish the least productive workers of California – teens, the less educated, the elderly – from the labor market. It will be especially destructive in the poorer areas of California that are already struggling.

And if punishing California’s low-skill workers by preventing them from negotiating their own wage with employers isn’t bad enough, there is reason to believe that a higher minimum wage in a large state like California will eventually affect the employment opportunities of low-skill workers in other areas of the country.

Profit-maximizing firms are always on the lookout for ways to reduce costs holding quality constant (or in the best case scenario to reduce costs and increase quality). Since there are many different ways to produce the same good, if the price of one factor of production, e.g. labor, increases, firms will have an incentive to use less of that factor and more of something else in their production process. For example, if the price of low-skill workers increases relative to the cost of a machine that can do the same job firms will have an incentive to switch to the machine.

To set the stage for this post, let's consider a real-life example: touch-screen ordering. Some McDonald's locations have touch screens for ordering food and coffee, and the San Francisco restaurant eatsa is almost entirely automated (coincidence?). The choice facing a restaurant owner is whether to use a touch screen or a cashier. If a restaurant is currently using a cashier and paying them a wage, it will only switch to the touch screen if the cost of switching plus the future discounted costs of operating and maintaining the touch-screen device are less than the future discounted costs of employing workers and paying them a wage plus any benefits. We can write this as

D + K + I + R < W

Where D represents the development costs of creating and perfecting the device, K represents the costs of working out the kinks (trial-run and adjustment costs), I represents the installation costs, and R represents the net present value of the operating and maintenance costs. On the other side of the inequality, W represents the net present value of the labor costs. (In math terms, R = ∑_{n=0}^{N} rk/(1+i)^n, where r is the rental rate of a unit of capital, k is the number of units of capital, and i is the interest rate; and W = ∑_{n=0}^{N} wl/(1+i)^n, where w is the wage and l is the number of labor hours. If this looks messy and confusing, don't worry; it's not crucial for the example.)

The owner of a restaurant will only switch to a touch screen device rather than a cashier if the left side of the above inequality is less than the right side, since in that case the owner’s costs will be lower and they will earn a larger profit (holding sales constant).

If the cashier is earning the minimum wage or something close to it and the minimum wage is increased, say from $9 to $15, the right side of the above inequality will increase while the left side stays the same (the w part of W is going up). If the increase in the wage is large enough to make the right side larger than the left side, the firm will switch from a cashier to a touch screen. Suppose that an increase from $9 to $15 does induce a switch to touch-screen devices in California McDonald's restaurants. Can this affect McDonald's restaurants in areas where the minimum wage doesn't increase? In theory, yes.
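To make the comparison concrete, here is a minimal sketch in Python of the switching decision. Every dollar figure, the discount rate, and the horizon are hypothetical numbers chosen purely for illustration; none of them come from the post.

```python
# Hypothetical illustration of the switching decision D + K + I + R < W.
# All dollar figures are invented for the example.

def npv(annual_amount, rate, years):
    """Net present value of a constant annual cash flow from year 0 to `years`."""
    return sum(annual_amount / (1 + rate) ** n for n in range(years + 1))

i = 0.05            # interest (discount) rate
N = 10              # planning horizon in years (the N in the formulas above)

# One-time and recurring costs of adopting a touch screen (all hypothetical)
D = 120_000                 # development costs
K = 30_000                  # working-out-the-kinks / adjustment costs
I = 15_000                  # installation costs
R = npv(3_000, i, N)        # discounted operating and maintenance costs

def labor_npv(hourly_wage, hours_per_year=2_000):
    """Discounted cost of one full-time cashier (the W in the formulas above)."""
    return npv(hourly_wage * hours_per_year, i, N)

for wage in (9, 15):
    W = labor_npv(wage)
    machine_cost = D + K + I + R
    print(f"wage ${wage}/hr: machine ${machine_cost:,.0f} vs labor ${W:,.0f}"
          f" -> switch to touch screen: {machine_cost < W}")
```

With these made-up numbers the machine loses at a $9 wage but wins at $15, which is exactly the flip the wage increase can produce.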

Once some McDonald's restaurants make the switch, the costs for other McDonald's restaurants to switch will be lower. The reason is that the restaurants that switch later will not have to pay the D or K costs: the development or kinks/trial-run/adjustment costs. Once the technology is developed and perfected, the late-adopting McDonald's restaurants can just copy what has already been done. So after the McDonald's restaurants in high-wage areas install and perfect touch-screen devices for ordering, the other restaurants face a simpler comparison:

I + R < W

This means that it may make sense to adopt the technology once it has been developed and perfected even if the wage in the lower wage areas does not change. In this scenario the left side decreases as D and K go to 0 while the right side stays the same. In fact, one could argue that the R will decline for late-adopting restaurants as well since the maintenance costs will decline over time as more technicians are trained and the reliability and performance of the software and hardware increase.
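Continuing the same hypothetical numbers from the sketch above, the late adopter's comparison drops D and K, and the machine can win even at the original $9 wage:

```python
# Late-adopter version of the same hypothetical example: D and K have already
# been paid by the early adopters, so the comparison is I + R < W, evaluated
# here at the unchanged $9/hr wage.

def npv(annual_amount, rate, years):
    return sum(annual_amount / (1 + rate) ** n for n in range(years + 1))

i, N = 0.05, 10
I = 15_000                     # installation costs (hypothetical)
R = npv(3_000, i, N)           # discounted operating/maintenance costs
W = npv(9 * 2_000, i, N)       # one full-time cashier at $9/hr

print(f"I + R = ${I + R:,.0f} vs W = ${W:,.0f} -> switch: {I + R < W}")
# The machine now wins even though the local minimum wage never increased.
```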

What this means is that a higher minimum wage in a state like California can lead to a decline in low-skill employment opportunities in places like Greenville, SC and Dayton, OH as the technology employed to offset the higher labor costs in the high-minimum-wage area spreads to lower-wage areas.

Also, firm owners and operators live in the real world. They see other state and local governments raising their minimum wage and they start to think that it could happen in their area too. This also gives them an incentive to switch since in expectation labor costs are going up. If additional states make the same bad policy choice as California, firm owners around the country may start to think that resistance is futile and that it’s best to adapt in advance by preemptively switching to more capital.

And if you think that touch screen ordering machines aren’t a good example, here is a link to an article about an automated burger-making machine. The company that created it plans on starting a chain of restaurants that use the machine. Once all of the bugs are worked out how high does the minimum wage need to be before other companies license the technology or create their own by copying what has already been done?

This is one more way that a higher minimum wage negatively impacts low-skill workers; even if workers don’t live in an area that has a relatively high minimum wage, the spread of technology may eliminate their jobs as well.

A $15 minimum wage will excessively harm California’s poorest counties

Lawmakers in California are thinking about increasing the state minimum wage to $15 per hour by 2022. If it occurs it will be the latest in a series of increases in the minimum wage across the country, both at the city and state level.

Increases in the minimum wage make it difficult for low-skill workers to find employment since the mandated wage is often higher than the value many of these workers can provide to their employers. Companies won’t stay in business long if they are forced to pay a worker $15 per hour who only produces $12 worth of goods and services per hour. Statewide increases may harm the job prospects of low-skill workers more than citywide increases since they aren’t adjusted to local labor market conditions.

California is a huge state, covering nearly 164,000 square miles, and contains 58 counties and 482 municipalities. Each of these counties and cities has its own local labor market that is based on local conditions. A statewide minimum wage ignores these local conditions and imposes the same mandated price floor on employers and workers across the state. In areas with low wages in general, a $15 minimum wage may affect nearly every worker, while in areas with high wages the adverse effects of a $15 minimum wage will be moderated. As explained in the NY Times:

“San Francisco and San Jose, both high-wage cities that have benefited from the tech boom, are likely to weather the increase without so much as a ripple. The negative consequences of the minimum wage increase in Los Angeles and San Diego — large cities where wages are lower — are likely to be more pronounced, though they could remain modest on balance.

But in lower-wage, inland cities like Bakersfield and Fresno, the effects could play out in much less predictable ways. That’s because the rise of the minimum wage to $15 over the next six years would push the wage floor much closer to the expected pay for a worker in the middle of the wage scale, affecting a much higher proportion of employees and employers there than in high-wage cities.”

To put some numbers to this idea, I used BLS weekly wage data from Dec. of 2014 to create a ratio for each of California’s counties that consists of the weekly wage of a $15 per hour job (40 x $15 = $600) divided by the average weekly wage of each county. The three counties with the lowest ratio and the three counties with the highest ratio are in the table below, with the ratio depicted as a percentage in the 4th column.

[Table: $600 weekly minimum wage as a percentage of average weekly wages, selected California counties]

The counties with the lowest ratios are San Mateo, Santa Clara, and San Francisco County. These are all high-wage counties located on the coast and contain the cities of San Jose and San Francisco. As an example, a $600 weekly wage is equal to 27.7% of the average weekly wage in San Mateo County.

The three counties with the highest ratios are Trinity, Lake, and Mariposa County. These are more rural counties that are located inland. Trinity and Lake are north of San Francisco while Mariposa County is located to the east of San Francisco. In Mariposa County, a $600 weekly wage would be equal to 92.6% of the average weekly wage in that county as of December 2014. The data shown in the table reveal the vastly different local labor market conditions that exist in California.
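The ratio itself is simple arithmetic: $600 divided by the county's average weekly wage. In the sketch below, Mariposa's $648 comes from the post, while the San Mateo figure is only the approximate average implied by the reported 27.7% ratio.

```python
# A $15/hr, 40-hour week ($600) divided by the county's average weekly wage.
min_wage_weekly = 15 * 40  # $600

county_avg_weekly_wage = {
    "Mariposa": 648,       # from the post (Dec. 2014 BLS data)
    "San Mateo": 2_166,    # approximate, implied by the reported 27.7% ratio
}

for county, avg_wage in county_avg_weekly_wage.items():
    ratio = min_wage_weekly / avg_wage
    print(f"{county} County: $600 is {ratio:.1%} of the average weekly wage")
```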

The prices of non-tradeable goods like restaurant meals, haircuts, and automotive repair are largely based on local land and labor costs and the willingness to pay of the local population. For example, a nice restaurant in San Francisco can charge $95 for a steak because the residents of San Francisco have a high willingness to pay for such meals as a result of their high incomes.

Selling a luxury product like a high-quality steak also makes it relatively easier to absorb a cost increase that comes from a higher minimum wage; restaurant workers are already making relatively more in wealthier areas and passing along the cost increase in the form of higher prices will have a small effect on sales if consumers of steak aren’t very sensitive to price.

But in Mariposa County, where the avg. weekly wage is only $648, a restaurant would have a hard time attracting customers if they charged similar prices. A diner in Mariposa County that sells hamburgers is probably not paying its workers much more than the minimum wage, so an increase to $15 per hour is going to drastically affect the owner’s costs. Additionally, consumers of hamburgers may be more price-sensitive than consumers of steak, making it more difficult to pass along cost increases.

Yet despite these differences, both the 5-star steakhouse in San Francisco and the mom-and-pop diner in Mariposa County are going to be bound by the same minimum wage if California passes this law.

In the table below I calculate what the minimum wage would have to be in San Mateo, Santa Clara, and San Francisco County to be on par with a $15 minimum in Mariposa County.

[Table: minimum wage in San Mateo, Santa Clara, and San Francisco counties comparable to a $15 minimum wage in Mariposa County]

If the minimum wage were 92.6% of the average wage in San Mateo County it would be equal to $50.14. Using the ratio from a more developed but still lower-wage area – Kern County, where Bakersfield is located – the minimum wage would need to be $37.20 in San Mateo. Does anyone really believe that a $50 or $37 minimum wage in San Mateo wouldn't cause a drastic decline in employment or a large increase in prices in that county?
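The comparable-wage figures come from running the ratio in reverse. The sketch below assumes a San Mateo average weekly wage of roughly $2,166 (implied by the 27.7% ratio) and a Kern County ratio of roughly 68.7% (implied by the $37.20 figure); both are approximations, not values taken from the underlying data.

```python
# What hourly minimum wage in San Mateo County would sit at the same share of
# the local average wage as a $15 minimum does elsewhere?
san_mateo_avg_weekly = 2_166   # approximate, implied by the 27.7% ratio above

target_ratios = {
    "Mariposa (92.6%)": 0.926,
    "Kern (~68.7%, implied by the $37.20 figure)": 0.687,
}

for label, ratio in target_ratios.items():
    hourly = ratio * san_mateo_avg_weekly / 40
    print(f"San Mateo minimum wage at the {label} ratio: ${hourly:.2f}/hr")
```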

If California's lawmakers insist on implementing a minimum wage increase they should adjust it so that it doesn't disproportionately affect workers in poorer, rural areas. But of course this is unlikely to happen; I doubt that the voters of San Mateo, Santa Clara, and San Francisco County would be as accepting of a $37+ minimum wage as they are of a $15 minimum wage that won't directly affect many of them.

A minimum wage of any amount is going to harm some workers by preventing them from getting a job. But a minimum wage that ignores local labor market conditions will cause relatively more damage in poorer areas that are already struggling, and policy makers who ignore this reality are excessively harming the workers in these areas.

City population dynamics since 1850

The reason why some cities grow and some cities shrink is a heavily debated topic in economics, sociology, urban planning, and public administration. In truth, there is no single reason why a city declines. Often exogenous factors – new modes of transportation, increased globalization, institutional changes, and federal policies – initiate the decline while subsequent poor political management can exacerbate it. This post focuses on the population trends of America’s largest cities since 1850 and how changes in these factors affected the distribution of people within the US.

When water transportation, water power, and proximity to natural resources such as coal were the most important factors driving industrial productivity, businesses and people congregated in locations near major waterways for power and shipping purposes. The graph below shows the top 10 cities* by population in 1850 and follows them until 1900. The rank of the city is on the left axis.

[Figure: population rank of the top 10 U.S. cities, 1850-1900]

 

* The 9th, 11th, and 12th ranked cities in 1850 were all incorporated into Philadelphia by 1860. Pittsburgh was the next-highest-ranked city (13th) that was not incorporated, so I used it in the graph instead.

All of the largest cities were located on heavily traveled rivers (New Orleans, Cincinnati, Pittsburgh, and St. Louis) or on the coast and had busy ports (New York, Boston, Philadelphia, Brooklyn, and Baltimore). Albany, NY may seem like an outlier but it was the starting point of the Erie Canal.

As economist Ed Glaeser (2005) notes “…almost every large northern city in the US as of 1860 became an industrial powerhouse over the next 60 years as factories started in central locations where they could save transport costs and make use of large urban labor forces.”

Along with waterways, railroads were an important mode of transportation from 1850 – 1900 and many of these cities had important railroads running through them, such as the B&O through Baltimore and the Erie Railroad in New York. The increasing importance of railroads impacted the list of top 10 cities in 1900 as shown below.

[Figure: population rank of the top 10 U.S. cities, 1900-1950]

A similar but not identical set of cities dominated the urban landscape over the next 50 years. By 1900, New Orleans, Brooklyn (merged with New York), Albany, and Pittsburgh had been replaced by Chicago, Cleveland, Buffalo, and San Francisco. Chicago, Cleveland, and Buffalo are all located on the Great Lakes and thus had water access, but it was the increasing importance of railroad shipping and travel that helped their populations grow. Buffalo was on the B&O railroad and was also the terminal point of the Erie Canal. San Francisco became much more accessible after the completion of the Pacific Railroad in 1869, but the California Gold Rush in the late 1840s got its population growth started.

As rail and eventually automobile/truck transportation became more important during the early 1900s, cities that relied on strategic river locations began to decline. New Orleans was already out of the top 10 by 1900 (falling from 5th to 12th) and Cincinnati went from 10th in 1900 to 18th by 1950. Buffalo also fell out of the top 10 during this time period, declining from 8th to 15th. But despite some changes in the rankings, there was only one warm-weather city in the top 10 as late as 1950 (Los Angeles). However, as the next graph shows, there was a surge in the populations of warm-weather cities during the period from 1950 to 2010 that caused many of the older Midwestern cities to fall out of the rankings.

[Figure: population rank of the top 10 U.S. cities, 1950-2010]

The largest shakeup in the population rankings occurred during this period. Of the top 10 cities in 1950, only 4 (Philadelphia, Los Angeles, Chicago, and New York) were still in the top 10 in 2010. All four were in the 2010 top 5; the only other top-5 city in 2010, Houston (4th), was ranked 14th in 1950. The cities ranked 6 – 10 in 1950 fell out of the top 20, while Detroit declined from 5th to 18th. The large change in the rankings during this time period is striking when compared to the relative stability of the earlier time periods.

Economic changes due to globalization and the prevalence of right-to-work laws in the southern states, combined with preferences for warm weather and other factors have resulted in both population and economic decline in many major Midwestern and Northeastern cities. All of the new cities in the top ten in 2010 have relatively warm weather: Phoenix, San Antonio, San Diego, Dallas, and San Jose. Some large cities missing from the 2010 list – particularly San Francisco and perhaps Washington D.C. and Boston as well – would probably be ranked higher if not for restrictive land-use regulations that artificially increase housing prices and limit population growth. In those cities and other smaller cities – primarily located in Southern California – low population growth is a goal rather than a result of outside forces.

The only cold-weather cities in the top 15 in 2014 that were not in the top 5 in 1950 were Indianapolis, IN (14th) and Columbus, OH (15th). These two cities not only avoided the fate of nearby Detroit and Cleveland, they thrived. From 1950 to 2014 Columbus' population grew by 122% and Indianapolis' grew by 99%. This is striking compared to the 57% decline in Cleveland and the 63% decline in Detroit during the same time period.

So why have Columbus and Indianapolis grown since 1950 while every other large city in the Midwest has declined? There isn’t an obvious answer. One thing among many that both Columbus and Indianapolis have in common is that they are both state capitals. State spending as a percentage of Gross State Product (GSP) has been increasing since 1970 across the country as shown in the graph below.

[Figure: Ohio and Indiana state spending as a percentage of GSP]

In Ohio, state spending as a percentage of GSP has grown faster than in the nation as a whole since 1970. It is possible that increased state spending in Ohio and Indiana is crowding out private investment in other parts of those states. And since much of the money collected by the state ends up being spent in the capital via government wages, both Columbus and Indianapolis grow relative to other cities in their respective states.

There has also been an increase in state level regulation over time. As state governments become larger players in the economy business leaders will find it more and more beneficial to be near state legislators and governors in order to lobby for regulations that help their company or for exemptions from rules that harm it. Company executives who fail to get a seat at the table when regulations are being drafted may find that their competitors have helped draft rules that put them at a competitive disadvantage. The decline of manufacturing in the Midwest may have created an urban reset that presented firms and workers with an opportunity to migrate to areas that have a relative abundance of an increasingly important factor of production – government.

Can historic districts dampen urban renewal?

Struggling cities in the Northeast and Midwest have been trying to revitalize their downtown neighborhoods for years. City officials have used taxpayer money to build stadiums, construct river walks, and lure employers with the hope that such actions will attract affluent, tax-paying residents back to the urban core. Often these strategies fail to deliver, but that hasn't deterred other cities from duplicating or even doubling down on the efforts. But if these policies don't work, what can cities do?

Part of the answer is to allow more building, especially newer housing. One factor that may be hampering the gentrification efforts of many cities is the age of their housing stock. The theory is straightforward and is explained and tested in this 2009 study. From the abstract:

“This paper identifies a new factor, the age of the housing stock, that affects where high- and low-income neighborhoods are located in U.S. cities. High-income households, driven by a high demand for housing services, will tend to locate in areas of the city where the housing stock is relatively young. Because cities develop and redevelop from the center outward over time, the location of these neighborhoods varies over the city’s history. The model predicts a suburban location for the rich in an initial period, when young dwellings are found only in the suburbs, while predicting eventual gentrification once central redevelopment creates a young downtown housing stock.”

In the empirical section of the paper the authors find that:

“… a tract's economic status tends to fall rather than rise as distance increases holding age fixed, suggesting that high-income households would tend to live near city centers were it not for old central housing stocks.” (My bold)

This makes sense. High-income people like relatively nicer, newer housing and will purchase housing in neighborhoods where the housing is relatively nicer and newer. In the latter half of the 20th century this meant buying new suburban homes, but as that housing ages and new housing is built to replace the even older housing in the central city, high-income people will be drawn back to central-city neighborhoods. This has the power to reduce the income disparity between the central city and suburbs seen in many metropolitan areas. As the authors note:

Our results show that, if the influence of spatial variation in dwelling ages were eliminated, central city/suburban disparities in neighborhood economic status would be reduced by up to 50 percent within American cities. In other words, if the housing age distribution were made uniform across space, reducing average dwelling ages in the central city and raising them in the suburbs, then neighborhood economic status would shift in response, rising in the center and falling in the suburbs. (My bold)

To get a sense of the age of the housing stock in northern cities, the figure below depicts the proportion of housing in eight different age categories in Ohio’s six major cities as of 2013 (most recent data available, see table B25034 here).

[Figure: age of the housing stock in Ohio's six major cities, 2013]

The age categories are: built after 2000, from 1990 to 1999, from 1980-89, from 1970-79, from 1960-69, from 1950-59, from 1940-49, and built prior to 1939. As the figure shows, most of the housing stock in Ohio's major cities is quite old. In every city except Columbus, over 30% of the housing stock was built prior to 1939. In Cleveland, over 50% of the housing stock is over 75 years old! In Columbus, which is the largest and fastest-growing city in Ohio, the housing stock is fairly evenly distributed across the age categories. Columbus really stands out in the three youngest categories.

In a free market for housing old housing would be torn down and replaced by new housing once the net benefits of demolition and rebuilding exceed the net benefits of renovation. But anyone who studies the housing market knows that it is hardly free, as city ordinances regulate everything from lot sizes to height requirements. While these regulations restrict new housing, they are a larger problem in cities where demand for housing is already high since they artificially restrict supply and drive up prices.

A potentially bigger problem for declining cities that has to do with the age of the housing stock is historic districts. In historic districts the housing is protected by local rules that limit the types of renovations that can be undertaken. Property owners are required to maintain their home’s historical look and it can be difficult to demolish old houses.

For example, in Dayton, OH there are 20 historic districts in a city of only 142,000 people. Dayton’s Landmark Commission is charged with reviewing and approving major modifications to the buildings in historic districts including their demolition.  Many of the districts are located near the center of the city and contain homes built in the late 1800s and early 1900s. Some are also quite large; St. Anne’s Hill contains 315 structures and the South Park historic district covers 24 blocks and contains more than 700 structures. The table below provides a list of Dayton’s historic districts as well as the year they were classified, number of structures, acreage, and whether the district is a locally protected district. Seventy percent of the districts are protected by a local historic designation while 30 percent are only protected by the national designation.

[Table: Dayton's historic districts]

I personally like old houses, but I also recognize that holding on to the past can interfere with revitalization and growth. Older homes, especially those built prior to 1940, are expensive to restore and maintain. They often have old or outdated plumbing systems, electrical systems, and inefficient windows that need to be replaced. They may also contain lead paint or other hazardous materials that were commonly used at the time they were built which may have to be removed. Many people can’t afford these upfront costs and those that can often don’t want to deal with the hassle of a restoration project.

Also, people have different tastes, and historic districts make it difficult for some people to live in the house they want in the area they want. As this map shows, many of Dayton's historic districts are located near the center of the city in the most walkable, urban neighborhoods. The Oregon District and St. Anne's Hill are both quite walkable and contain several restaurants, bars, and shops. If a person wants to live in one of these neighborhoods they have to be content with living in an older house. The design restrictions that come standard with historic districts prevent people with certain tastes from locating in these areas.

A 2013 study that examined the Cleveland housing market determined that it is economical to demolish many of the older, vacant homes in declining cities rather than renovate them. This is just as true of older homes that happen to be in historic districts.

Ultimately homeowners should be free to do what they want with their home and the land that it sits on. If a person wants to buy a historic house and renovate it they should be free to do so, but they should also be allowed to build a new structure on the property if they wish. When a city protects large swathes of houses via historic districts they slow down the cycle of housing construction that could draw people back to urban neighborhoods. This is especially true if the historic districts encompass the best areas of the city, such as those closest to downtown amenities and employment opportunities. Living in the city is appealing to many people, but being forced to purchase and live in outdated housing dampens the appeal for some and may be contributing to the inability of cities like Dayton to turn the corner.

Rent control, housing supply, and home values in Seattle and Houston

In my recent op-ed about rent control I point out that Houston, TX permitted more home and apartment building than Seattle, WA from 2005 to 2014. The graph below shows the magnitude of this difference. The bars are the number of permits each year (left axis) and the line is the ratio of Zillow's home value index (numerator) to the average single-family home construction cost for each city (denominator). The right axis reports the ratio. (Seattle's data are here, Houston's are here, and permit data are here.)

[Figure: building permits and home value-to-construction cost ratios, Houston and Seattle, 2005-2014]

As seen in the graph, the orange bars (Houston) are much taller than the blue bars (Seattle). Also, Houston's home-value-to-average-cost ratio was relatively flat over the period shown even though Houston grew by 163,000 people during that time. This is because Houston's high level of building kept pace with demand. Over this 10-year period Houston's home values were roughly 1.6 times average construction cost.

In Seattle, where less building occurred, home values reached nearly 2.5 times average construction costs in 2007 before falling to approximately 1.8 in 2009 due to the housing bust. Home values decreased even further from there, reaching their low point in 2012. Since 2012, however, they have been increasing while in Houston it appears the ratio has leveled off. The difference between the two ratios is not driven by relative cost changes either. The graph below shows the cost per unit in each city over this time period. They are fairly similar in dollar amounts and the ratio between them was relatively constant during this time period.

[Figure: construction cost per unit, Houston and Seattle]

Seattle’s building restrictions are contributing to the high price of housing in that city. And because prices in Seattle are primarily driven by demand, home values are much more volatile: When demand increases they rise and when demand falls, like from 2007 – 09, they decline quickly.

For more information about the negative consequences of rent control, see here and here.

Education, Innovation, and Urban Growth

One of the strongest predictors of urban growth since the start of the 20th century is the skill level of a city’s population. Cities that have a highly skilled population, usually measured as the share of the population with a bachelor’s degree or more, tend to grow faster than similar cities with less educated populations. This is true at both the metropolitan level and the city level. The figure below plots the population growth of 30 large U.S. cities from 1970 – 2013 on the vertical axis and the share of the city’s 25 and over population that had at least a bachelor’s degree in 1967 on the horizontal axis. (The education data for the cities are here. I am using the political city’s population growth and the share of the central city population with a bachelor’s degree or more from the census data linked to above.)

[Figure: 1967 bachelor's degree share vs. city population growth, 1970-2013]

As shown in the figure there is a strong, positive relationship between the two variables: The correlation coefficient is 0.61. It is well known that over the last 50 years cities in warmer areas have been growing while cities in colder areas have been shrinking, but in this sample the cities in warmer areas also tended to have a better educated population in 1967. Many of the cities known today for their highly educated populations, such as Seattle, San Francisco, and Washington D.C., also had highly educated populations in 1967. Colder manufacturing cities such as Detroit, Buffalo, and Newark had less educated workforces in 1967 and subsequently less population growth.

The above figure uses data on both warm and cold cities, but the relationship holds for cold cities alone as well. Below is the same graph, but it only depicts cities that have a January mean temperature below 40°F. Twenty of the 30 cities meet this criterion.

[Figure: 1967 bachelor's degree share vs. population growth, cold-weather cities only]

Again, there is a strong, positive relationship. In fact it is even stronger; the correlation coefficient is 0.68. Most of the cities in the graph lost population from 1970 – 2013, but the cities that did grow, such as Columbus, Seattle, and Denver, all had relatively educated populations in 1967.
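For readers who want to reproduce the scatter plots, the reported numbers are ordinary Pearson correlation coefficients. Below is a minimal sketch with placeholder data points; the real values come from the census and education sources linked above, not from these numbers.

```python
# Pearson correlation between the 1967 BA-or-more share and 1970-2013 population
# growth. The five data points below are placeholders, not actual city values.
import numpy as np

ba_share_1967 = np.array([0.07, 0.09, 0.12, 0.15, 0.18])          # share with a BA or more
pop_growth_1970_2013 = np.array([-0.45, -0.20, 0.05, 0.30, 0.65])  # growth rates

corr = np.corrcoef(ba_share_1967, pop_growth_1970_2013)[0, 1]
print(f"correlation coefficient: {corr:.2f}")
```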

There are several reasons why an educated population and urban population growth are correlated. One is that a faster accumulation of skills and human capital spillovers in cities increase wages which attracts workers. Also, the large number of specialized employers located in cities makes it easier for workers, especially high-skill workers, to find employment. Cities are also home to a range of consumption amenities that attract educated people, such as a wide variety of shops, restaurants, museums, and sporting events.

Another reason why an educated workforce may actually cause city growth has to do with its ability to adjust and innovate. On average, educated workers tend to be more innovative and better able to learn new skills. When there is a negative, exogenous shock to an industry, such as the decline of the automobile industry or the steel industry, educated workers can learn new skills and create new industries to replace the old ones. Many of the mid-20th century workers in Detroit and other Midwestern cities decided to forego higher education because good-paying factory jobs were plentiful. When manufacturing declined those workers had a difficult time learning new skills. Also, the large firms that dominated the economic landscape, such as Ford, did not support entrepreneurial thinking. This meant that even the educated workers were not prepared to create new businesses.

Local politicians often want to protect local firms in certain industries through favorable treatment and regulation. But often this protection harms newer, innovative firms since they are forced to compete with the older firms on an uneven playing field. Political favoritism fosters a stagnant economy since in the short-run established firms thrive at the expense of newer, more innovative startups. Famous political statements such as “What’s good for General Motors is good for the country” helped mislead workers into thinking that government was willing and able to protect their employers. But governments at all levels were unable to stop the economic forces that battered U.S. manufacturing.

To thrive in the 21st century local politicians need to foster economic environments that encourage innovation and ingenuity. The successful cities of the future will be those that are best able to innovate and to adapt in an increasingly complex world. History has shown us that an educated and entrepreneurial workforce is capable of overcoming economic challenges, but to do this people need to be free to innovate and create. Stringent land-use regulations, overly-burdensome occupational licensing, certificate-of-need laws, and other unnecessary regulations create barriers to innovation and make it more difficult for entrepreneurs to create the firms and industries of the future.

Rent control: A bad policy that just won’t die

The city council of Richmond, CA is thinking about implementing rent control in their city. Richmond is located north of Berkeley and Oakland on the San Francisco Bay in an area that has some of the highest housing prices in the country. From the article:

“Richmond is growing and becoming a more desirable place where people want to live, but that increased demand is putting pressure on the existing housing stock.”

It is true that an increase in the demand for housing will increase prices and rents. Unfortunately, rent control will not solve the problem of too little housing, which is the ultimate cause of high prices.

[Figure: supply and demand diagram of rent control after a demand increase]

The diagram above depicts a market for housing like the one in Richmond. Without rent control, when demand increases (D1 to D2) the price rises to R2 and the equilibrium quantity increases from Q1 to Q*. However, with rent control, the price is unable to rise. For example, if the Richmond city council wanted prices to be at the pre-demand-increase level they would set the rent control price equal to R1. But with the increase in demand the quantity demanded at that price is Qd, while the quantity supplied is only Q1. Thus there is a shortage. This is the outcome of a price ceiling.

What this means is that some people will find a place to rent at the old, lower rental price (Q1 people).  But more people will want to rent at that price than there are units available, and since the price cannot rise due to the price control, the available apartments will have to be allocated some other way. This means longer wait times for vacant apartments and higher search costs. It also means lower quality apartments. Since the owners know there are more people who want an apartment than available apartments, they don’t have an incentive to maintain the apartment at the same level as they would if they had to attract customers.

With rent control, only Q1 people get an apartment. Without rent control, as the price rises more units are supplied over time and the new equilibrium has Q* (> Q1) people who get an apartment. Yes, they have to pay a higher price, but the relevant alternative is not an apartment at the lower price: The alternative is that some people who would have been willing to pay the higher price do not get an apartment.
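A small numerical sketch of this diagram, using hypothetical linear supply and demand curves (the parameters are purely illustrative and do not describe Richmond's actual market):

```python
# Hypothetical linear housing market illustrating the first diagram.
# Demand: Qd = a - b*R ; Supply: Qs = c + d*R (R is monthly rent, Q is units).

b, d = 40, 60                  # demand and supply slopes (units per dollar of rent)
c = -20_000                    # supply intercept
a1, a2 = 80_000, 110_000       # demand intercept before / after the demand increase

def equilibrium(a):
    """Rent and quantity where quantity demanded equals quantity supplied."""
    rent = (a - c) / (b + d)
    return rent, a - b * rent

R1, Q1 = equilibrium(a1)        # original equilibrium
R2, Q_star = equilibrium(a2)    # new equilibrium with no rent control

# Rent control caps the rent at the old equilibrium level R1
Qd_capped = a2 - b * R1         # quantity demanded at the capped rent
Qs_capped = c + d * R1          # quantity supplied at the capped rent (= Q1)

print(f"before: R1 = ${R1:,.0f}, Q1 = {Q1:,.0f} units")
print(f"after, no control: R2 = ${R2:,.0f}, Q* = {Q_star:,.0f} units")
print(f"after, control at R1: demanded {Qd_capped:,.0f}, supplied {Qs_capped:,.0f},"
      f" shortage {Qd_capped - Qs_capped:,.0f}")
```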

Since Richmond has strict land-use rules like many communities in the San Francisco metro area (you can read all about their minimum lot size and parking space requirements here), rent control would only add to the housing woes of Richmond's renters and of any person who would like to move there.

[Figure: supply and demand diagram of rent control with restricted supply]

Land-use restrictions decrease the amount of buildable land which subsequently increases the cost of housing. This is depicted in the diagram above as a shift from S1 to S2. The decrease in supply leads to a new equilibrium rent of R2 > R1 and a reduction in the equilibrium quantity to Q2 (< Q1). So land-use restrictions have already decreased the amount of available housing and increased the price.

If rent control is implemented, depicted in the diagram as the solid red line at the old price (R1), then the quantity supplied decreases even more to Qs. Again, with rent control there is a shortage as the quantity of housing demanded at R1 is Q1 (> Qs). So all of the same problems that occurred in the first example occur here, only here the quantity of housing is decreased not once, but TWICE by the government: Once due to the land use restrictions (Q1 to Q2) and then AGAIN when the rent control is implemented (Q2 to Qs). Restricting the amount of housing available does not help more people find housing, and restricting it again exacerbates the problem.
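The same kind of sketch for this second diagram, again with purely illustrative parameters: land-use rules shift supply in, and a rent ceiling at the old rent R1 cuts quantity a second time.

```python
# Hypothetical linear market for the second diagram: land-use restrictions shift
# supply in (S1 to S2), then rent control at the old rent R1 cuts quantity again.

a, b = 80_000, 40              # demand: Qd = a - b*R
d = 60                         # supply slope
c1, c2 = -20_000, -35_000      # supply intercept before / after land-use restrictions

def equilibrium(c):
    rent = (a - c) / (b + d)
    return rent, a - b * rent

R1, Q1 = equilibrium(c1)       # unrestricted market
R2, Q2 = equilibrium(c2)       # after land-use restrictions: R2 > R1 and Q2 < Q1

Qs_capped = c2 + d * R1        # quantity supplied if rent is then capped at R1

print(f"unrestricted:         R1 = ${R1:,.0f}, Q1 = {Q1:,.0f}")
print(f"land-use restricted:  R2 = ${R2:,.0f}, Q2 = {Q2:,.0f}")
print(f"adding rent control:  quantity supplied falls again, to {Qs_capped:,.0f}")
```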

Trying to find an economist who doesn’t think that rent control is a bad idea is like trying to find a cheap apartment in a city with rent control; it can be done, but you have to spend a lot of time looking. In a Booth IGM poll question about rent control, 95% of the economists surveyed disagreed with the statement that rent control had a positive impact on the amount and quality of affordable rental housing. Yet despite basic economic theory, the agreement among experts, and the empirical evidence (see here, here, and here) rent control remains in some places and is often brought up as a viable policy for increasing the amount of affordable housing. This is truly a shame since what places like Richmond need is more housing, not less housing with artificially low prices.

Local land-use restrictions harm everyone

In a recent NBER working paper, authors Enrico Moretti and Chang-Tai Hsieh analyze how the growth of cities determines the growth of nations. They use data on 220 MSAs from 1964 – 2009 to estimate the contribution of each city to US national GDP growth. They compare what they call the accounting estimate to the model-driven estimate. The accounting estimate simply attributes each city's nominal GDP growth to national GDP growth; it doesn't account for whether the increase in city GDP is due to higher nominal wages or to increased output caused by an increase in local employment. The model-driven estimate that they compare it to distinguishes between these two factors.

Before I go any further it is important to explain the theory behind the authors' empirical findings. Suppose there is a productivity shock to City A such that workers in City A are more productive than they were previously. This productivity shock could be the result of a new method of production or a newly invented piece of equipment (capital) that helps workers make more stuff with a given amount of labor. This productivity shock will increase the local demand for labor, which will increase the wage.

Now one of two things can happen, and the diagram below depicts the two scenarios. The supply and demand lines are those for workers, with the wage on the Y-axis and the number of workers on the X-axis. Since more workers lead to more output, I also labeled labor as L = αY, where α is some fraction less than 1 to signify that each additional unit of labor doesn't lead to a one-unit increase in output, but rather some fraction of one unit (capital is needed too).

[Figure: labor market responses to a demand shock under elastic and inelastic labor supply]

City A can have a highly elastic supply of housing, meaning that it is easy to expand the number of housing units in that city and thus it is relatively easy for people to move there. This would mean that the supply of labor is like S-elastic in the diagram. Thus the number of workers that are able to migrate to City A after labor demand increases (D1 to D2) is large, local employment increases (Le > L*), and total output (GDP) increases. Wages only increase a little bit (We > W*). In this situation the productivity shock would have a relatively large effect on national GDP since it resulted in a large increase in local output as workers moved from relatively low-productivity cities to the relatively high-productivity City A.

Alternatively, the supply of housing in City A could be very inelastic; this would be like S-inelastic. If that is the case, then the productivity shock would still increase the wage in City A (Wi > W*), but it will be more difficult for new workers to move in since new housing cannot be built to shelter them. In this case wages increase, but since total local employment stays fairly constant due to the restriction on available housing, the increase in output is not as large (Li > L*, but Li < Le). If City A's output stays relatively constant and the productivity shock is instead expressed in higher nominal wages, then the resulting growth in City A's nominal GDP will not have as large an effect on national output growth.
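Here is a minimal numerical sketch of the two cases, using hypothetical linear labor supply and demand curves rather than the authors' model; both supply curves pass through the same initial equilibrium so the comparison is clean, and every number is made up for illustration.

```python
# Hypothetical local labor market: the same demand shock under elastic vs
# inelastic labor supply. Both supply curves start at wage 40, employment 120,000.

def equilibrium(a_d, b_d, a_s, b_s):
    """Wage and employment where demand a_d - b_d*w meets supply a_s + b_s*w."""
    w = (a_d - a_s) / (b_d + b_s)
    return w, a_d - b_d * w

b_d = 2_000                        # labor demand slope
a_d1, a_d2 = 200_000, 260_000      # demand intercept before / after the shock

scenarios = [
    ("S-elastic (housing easy to build)", 0, 3_000),
    ("S-inelastic (housing restricted)", 112_000, 200),
]

for label, a_s, b_s in scenarios:
    w0, L0 = equilibrium(a_d1, b_d, a_s, b_s)
    w1, L1 = equilibrium(a_d2, b_d, a_s, b_s)
    print(f"{label}: wage {w0:,.0f} -> {w1:,.1f}, employment {L0:,.0f} -> {L1:,.0f}")
```

In the elastic case the shock shows up mostly as more employment; in the inelastic case it shows up mostly as a higher wage, which is the distinction that drives the accounting versus model-driven estimates below.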

As an example, Moretti and Hsieh calculate that the growth of New York City’s GDP was 12% of national GDP growth from 1964-2009. But when accounting for the change in wages, New York’s contribution to national output growth was only 5%: Most of New York’s GDP growth was manifested in higher nominal wages. This is not surprising as it is well known that New York has strict housing regulations that make it difficult to build new housing units (the recent extension of NYC rent-control laws won’t help). This makes it difficult for people to relocate from relatively low-productivity places to a high-productivity New York.

In three of the most intensely land-regulated cities – New York, San Francisco, and San Jose – the accounting contribution to national GDP growth was 19.3%. But these cities' actual contribution to national output, as estimated by the authors, was only 6.1%. Contrast that with the Rust Belt cities (e.g. Detroit, Pittsburgh, and Cleveland), which contributed -28.5% according to the accounting method but +6.1% according to the authors' model.

The authors conclude that less onerous land-use restrictions in high-productivity cities such as New York, Washington D.C., Boston, San Francisco, San Jose, and the rest of Silicon Valley could increase the nation's output growth rate by making it easier for workers to migrate from low- to high-productivity areas. In an extreme migration scenario in which 52% of American workers in 2009 lived in a different city than they actually did, the authors calculate that GDP per worker would have been $8,775 higher in 2009, or $6,345 per person. In a more realistic scenario (only 20% of workers living in a different city) it would have been $3,055 more per person: That is a substantial increase.

While I agree with the authors' conclusion that fewer land-use restrictions would result in a more productive allocation of labor and thus more stuff for all of us, the authors' policy prescriptions at the end of the paper leave much to be desired. They propose that the federal government constrain the ability of municipalities to set land-use restrictions, since these restrictions impose negative externalities on the rest of the country in the form of lower national output growth. They also support the use of government-funded high-speed rail to link low-productivity labor markets to high-productivity labor markets; e.g. the current high-speed rail construction project taking place in California could help workers get from low-productivity areas like Stockton, Fresno, and Modesto to high-productivity areas in Silicon Valley.

Land-use restrictions are a problem in many areas, but not a problem that warrants arbitrary federal involvement. If federal involvement simply meant the Supreme Court ruling that land-use regulations (or at least most of them) are unconstitutional then I think that would be beneficial; a broad removal of land-use restrictions would go a long way towards reinstituting the institution of private property. Unfortunately, I don’t think that is what Moretti and Hsieh had in mind.

Arbitrary federal involvement in striking down local land-use regulations would further infringe on federalism and create opportunities for political cronyism. Whatever federal bureaucracy was put in charge of monitoring land-use restrictions would have little local knowledge of the situation. The Environmental Protection Agency (EPA) already monitors some local land use and faulty information along with an expensive appeals process creates problems for residents simply trying to use their own property. Creating a whole federal bureaucracy tasked with picking and choosing which land-use restrictions are acceptable and which aren’t would no doubt lead to more of these types of situations as well as increase the opportunities for regulatory activism. Also, federal land-use regulators may target certain areas that have governors or mayors who don’t agree with them on other issues.

As for more public transportation spending, I think the record speaks for itself – see here, here, and here.

An interesting development in state regulation of wine shipment

Can one state enforce another state’s laws that prohibit direct-to-consumer wine shipment from out-of-state retailers while allowing it by in-state retailers?  That’s the question posed in a recent New York case.

The New York State Liquor Authority has a rule that prohibits licensees from engaging in “improper conduct.” The liquor regulator argues that direct shipments by retailers that violate other states' laws constitute improper conduct. It has fined New York retailers, revoked their licenses, and filed charges against those it believes have shipped wine illegally to customers in other states. One retailer, Empire Wine, refused to settle and has sued the liquor authority in state court, claiming that the “improper conduct” rule is unconstitutionally vague and that the liquor authority cannot enforce other states' laws that discriminate against interstate commerce.

Many states continue to prohibit direct shipment from out-of-state retailers. For example, 40 states do not allow New York retailers to ship directly to consumers. This harms consumers, because it is usually out-of-state retailers, rather than wineries, that offer significant savings compared to in-state retailers. In a 2013 article published in the Journal of Empirical Legal Studies, Alan Wiseman and I identified two different anti-consumer effects of laws that allow out-of-state wineries to ship direct to consumers but prohibit out-of-state retailers from doing so. First, these laws deprive consumers of price savings from buying many bottles online: “Online retailers consistently offered price savings on much higher percentages of the bottles in each year—between 57 and 81 percent of the bottles when shipped via ground and between 32 and 48 percent when shipped via air. Excluding retailers from direct shipment thus substantially reduces—but does not completely eliminate—the price savings available from purchasing wine online.” Second, these laws reduce competitive pressure on bricks-and-mortar wine stores, since they exclude lower-priced out-of-state retailers from the local market. Thus, the laws likely harm consumers who buy from their local wine shops, not just consumers who want to buy online. (The published version of the paper is behind a paywall, but you can read the working paper version at SSRN.)


 

How Complete Are Federal Agencies’ Regulatory Analyses?

A report released yesterday by the Government Accountability Office will likely get spun to imply that federal agencies are doing a pretty good job of assessing the benefits and costs of their proposed regulations. The subtitle of the report reads in part, “Agencies Included Key Elements of Cost-Benefit Analysis…” Unfortunately, agency analyses of regulations are less complete than this subtitle suggests.

The GAO report defined four major elements of regulatory analysis: a discussion of the need for the regulatory action, an analysis of alternatives, an assessment of benefits, and an assessment of costs. These crucial features have been required in executive orders on regulatory analysis and OMB guidance for decades. For the largest regulations with economic effects exceeding $100 million annually (“economically significant” regulations), GAO found that agencies always included a statement of the regulation's purpose, discussed alternatives 81 percent of the time, always discussed benefits and costs, provided a monetized estimate of costs 97 percent of the time, and provided a monetized estimate of benefits 76 percent of the time.

A deeper dive into the report, however, reveals that GAO did not evaluate the quality of any of these aspects of agencies’ analysis. Page 4 of the report notes, “[O]ur analysis was not designed to evaluate the quality of the cost-benefit analysis in the rules. The presence of all key elements does not provide information regarding the quality of the analysis, nor does the absence of a key element necessarily imply a deficiency in a cost-benefit analysis.”

For example, GAO checked to see if the agency included a statement of the purpose of the regulation, but it apparently accepted a statement that the regulation is required by law as a sufficient statement of purpose (p. 22). Citing a statute is not the same thing as articulating a goal or identifying the root cause of the problem an agency seeks to solve.

Similarly, an agency can provide a monetary estimate of some benefits or costs without necessarily addressing all major benefits or costs the regulation is likely to create. GAO notes that it did not ascertain whether agencies addressed all relevant benefits or costs (p. 23).

For an assessment of the quality of agencies’ regulatory analysis, check out the Mercatus Center’s Regulatory Report Card. The Report Card evaluation method explicitly assesses the quality of the agency’s analysis, rather than just checking to see if the agency discussed the topics. For example, to assess how well the agency analyzed the problem it is trying to solve, the evaluators ask five questions:

1. Does the analysis identify a market failure or other systemic problem?

2. Does the analysis outline a coherent and testable theory that explains why the problem is systemic rather than anecdotal?

3. Does the analysis present credible empirical support for the theory?

4. Does the analysis adequately address the baseline — that is, what the state of the world is likely to be in the absence of federal intervention not just now but in the future?

5. Does the analysis adequately assess uncertainty about the existence or size of the problem?

These questions are intended to ascertain whether the agency identified a real, significant problem and identified its likely cause. On a scoring scale ranging from 0 points (no relevant content) to 5 points (substantial analysis), economically significant regulations proposed between 2008 and 2012 scored an average of just 2.2 points for their analysis of the systemic problem. This score indicates that many regulations are accompanied by very little evidence-based analysis of the underlying problem the regulation is supposed to solve. Scores for assessment of alternatives, benefits, and costs are only slightly better, which suggests that these aspects of the analysis are often seriously incomplete.

These results are consistent with the findings of other scholars who have evaluated the quality of agency Regulatory Impact Analyses during the past several decades. (Check pp. 7-10 of this paper for citations.)

The Report Card results are also consistent with the findings in the GAO report. GAO assessed whether agencies are turning in their assigned homework; the Report Card assesses how well they did the work.

The GAO report contains a lot of useful information, and the authors are forthright about its limitations. GAO combed through 203 final regulations to figure out what parts of the analysis the agencies did and did not do — an impressive accomplishment by any measure!

I’m more concerned that some participants in the political debate over regulatory reform will claim that the report shows regulatory agencies are doing a great job of analysis, and no reforms to improve the quality of analysis are needed. The Regulatory Report Card results clearly demonstrate otherwise.