Tag Archives: rules

Smart rule-breakers make the best entrepreneurs

A new paper in the Quarterly Journal of Economics (working version here) finds that the combination of intelligence and a willingness to break the rules as a youth is associated with a greater tendency to operate a high-earning incorporated business as an adult, i.e., to be an entrepreneur.

Previous work examining entrepreneurship that categorizes all self-employed persons as entrepreneurs has often found that entrepreneurs earn less than similar salaried workers. But this contradicts the important role entrepreneurs are presumed to play in generating economic growth. As the authors of the new QJE paper remark:

“If the self-employed are a good proxy for risk-taking, growth-creating entrepreneurs, it is puzzling that their human capital traits are similar to those of salaried workers and that they earn less.”

So instead of looking at the self-employed as one group, the authors separate them into two groups: those who operate unincorporated businesses and those who operate incorporated businesses. They argue that incorporation is important for risk-taking entrepreneurs because of the limited liability and separate legal identity it provides, and they find that those who choose incorporation are more likely to engage in tasks that require creativity, analytical flexibility and complex interpersonal communication, all tasks closely identified with the concept of entrepreneurship.

People who operate unincorporated businesses, on the other hand, are more likely to engage in activities that require high levels of hand, eye and foot coordination, such as landscaping or truck driving.

Once the self-employed are separated into incorporated and unincorporated, the puzzling finding of entrepreneurs earning less than similar salaried workers disappears. The statistics in the table below taken from the paper show that on average incorporated business owners (last column) earn more, work more hours, have more years of schooling and are more likely to be a college graduate than both unincorporated business owners and salaried workers based on two different data sets (Current Population Survey (CPS) and National Longitudinal Survey of Youth (NLSY)).


The authors then examine the individual characteristics of incorporated and unincorporated business owners. They find that people with high self-esteem, a strong sense of controlling one’s future, high scores on the Armed Forces Qualification Test (AFQT)—a measure of intelligence and trainability—and a greater propensity for engaging in illicit activity as a youth are more likely to be among the incorporated self-employed.

Moreover, it’s the combination of intelligence and risk-taking that turns a young person into a high-earning owner of an incorporated business. As the authors state, “The mixture of high learning aptitude and disruptive, ‘break-the-rules’ behavior is tightly linked with entrepreneurship.”

These findings fit nicely with some notable recent examples of entrepreneurship—Uber and Airbnb. Both companies are regularly sued for violating state and local ordinances, but this hasn’t stopped them from becoming popular providers of transportation and short-term housing.

If the founders of Uber and Airbnb had always obtained approval before operating, the companies would have been hindered by all sorts of special interests, including taxi commissions, hotel industry groups and nosy neighbors. Seeking everyone’s approval—including the government’s—before operating likely would have meant never getting off the ground, and the companies know this. It’s interesting to see evidence that many other, less well-known entrepreneurs share a similar willingness to violate the rules if necessary in order to provide their goods and services to customers.

Government Spending and Economic Growth in Nebraska since 1997

Mercatus recently released a study that examines Nebraska’s budget, budgetary rules and economy. As the study points out, Nebraska, like many other states, consistently faces budgeting problems. State officials are confronted by a variety of competing interests looking for more state funding—schools, health services and public pensions to name a few—and attempts to placate each of them often leave officials scrambling to avoid budget shortfalls in the short term.

Money spent by state and local governments is collected from taxpayers who earn money in the labor market and through investments. The money earned by taxpayers is the result of producing goods and services that people want, and the total is essentially captured in a state’s Gross State Product (GSP).

State GSP is a good measure of the amount of money available for a state to tax, and if state and local government spending is growing faster than GSP, state and local governments will be controlling a larger and larger portion of their state’s output over time. This is unsustainable in the long run, and in the short run more state and local government spending can reduce the dynamism of a state’s economy as resources are taken from risk-taking entrepreneurs in the private sector and given to government bureaucrats.

The charts below use data from the BEA to depict the growth of state and local government spending and private industry GSP in Nebraska. The first shows the annual growth rates in private industry GSP and state and local government GSP from 1997 to 2014. The data is adjusted for inflation (2009 dollars) and the year depicted is the ending year (e.g. 1998 is growth from 1997 – 1998).

NE GSP annual growth rates 1997-14

In Nebraska, real private industry GSP growth has been positive every year except for 2012. There is some volatility consistent with business cycles, but Nebraska’s economy has grown steadily over this period.

On the other hand, state and local GSP growth was negative 10 of the 17 years depicted. It grew rapidly during recession periods (2000 – 2002 and 2009 – 2010), but it appears that state and local officials were somewhat successful in reducing spending once economic conditions improved.

The next chart shows how much private industry and state and local GSP grew over the entire period for both Nebraska and the U.S. as a whole. The 1997 value of each category is used as the base year and the yearly ratio is plotted in the figure. The data is adjusted for inflation (2009 dollars).
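The indexing used for this chart is straightforward to reproduce. The sketch below shows the calculation on a short hypothetical series (the values are illustrative, not the actual BEA data):

```python
# Index each real GSP series to its 1997 value so every line starts at 1.0.
# The series below is hypothetical; the actual chart uses BEA data in
# 2009 dollars.
def index_to_base(series):
    """Divide every value by the first (base-year) value."""
    base = series[0]
    return [round(v / base, 3) for v in series]

hypothetical_gsp = [100.0, 103.0, 107.5, 112.0]  # 1997 is the base year
print(index_to_base(hypothetical_gsp))  # first entry is always 1.0
```

A value of 1.6 in 2014 therefore means the series grew 60% over the whole 1997 – 2014 period.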

NE, US GSP growth since 1997

In 2014, Nebraska’s private industry GSP (red line) was nearly 1.6 times its 1997 value. State and local spending (light red line), on the other hand, was only about 1.1 times its 1997 value. Nebraska’s private industry GSP grew more than the country’s as a whole over this period (57% vs. 46%) while its state and local government spending grew less (11% vs. 15%).

State and local government spending in Nebraska spiked from 2009 to 2010 but has come down slightly since then. Meanwhile, the state’s private sector has experienced relatively strong growth since 2009 compared to the country as a whole, though it was lagging the country prior to the recession.

Compared to the country overall, Nebraska’s private sector economy has been doing well since 2008 and state and local spending, while growing, appears to be largely under control. If you would like to learn more about Nebraska’s economy and the policies responsible for the information presented here, I encourage you to read Governing Nebraska’s Fiscal Commons: Addressing the Budgetary Squeeze, by Creighton University Professor Michael Thomas.

Why the lack of labor mobility in the U.S. is a problem and how we can fix it

Many researchers have found evidence that mobility in the U.S. is declining. More specifically, it doesn’t appear that people move from places with weaker economies to places with stronger economies as consistently as they did in the past. Two sets of figures from a paper by Peter Ganong and Daniel Shoag succinctly show this decline over time.

The first, shown below, has log income per capita by state on the x-axis for two different years, 1940 (left) and 1990 (right). On the vertical axis of each graph is the annual population growth rate by state for two periods, 1940 – 1960 (left) and 1990 – 2010 (right).

directed migration ganong, shoag

In the 1940 – 1960 period, the graph depicts a strong positive relationship: States with higher per capita incomes in 1940 experienced more population growth over the next 20 years than states with lower per capita incomes. This relationship disappears and actually reverses in the 1990 – 2010 period: States with higher per capita incomes grew more slowly on average. So in general, people became less likely to move to states with higher incomes between the middle and end of the 20th century. Other researchers have also found that people are not moving to areas with better economies.

This had an effect on income convergence, as shown in the next set of figures. In the 1940 – 1960 period (left), states with higher per capita incomes experienced less income growth than states with lower per capita incomes, as shown by the negative relationship. This negative relationship existed in the 1990 – 2010 period as well, but it was much weaker.

income convergence ganong, shoag

We would expect income convergence when workers leave low-income states for high-income states, since that increases the labor supply in high-income states and pushes down wages. Meanwhile, the labor supply decreases in low-income states, which increases wages. Overall, this leads to per capita incomes converging across states.

Why labor mobility matters

As law professor David Schleicher points out in a recent paper, the current lack of labor mobility can reduce the ability of the federal government to manage the U.S. economy. In the U.S. we have a common currency—every state uses the U.S. dollar. This means that if a state is hit by an economic shock, e.g. low energy prices harm Texas, Alaska and North Dakota but help other states, that state’s currency cannot adjust to cushion the blow.

For example, if the UK goes into a recession, the Bank of England can print more money so that the pound will depreciate relative to other currencies, making goods produced in the UK relatively cheap. This will decrease the UK’s imports and increase economic activity and exports, which will help it emerge from the recession. If the U.S. as a whole suffered a negative economic shock, a similar process would take place.

However, within a country this adjustment mechanism is unavailable: Texas can’t devalue its dollar relative to Ohio’s dollar. There is no within-country monetary policy that can help particular states or regions. Instead, the movement of capital and labor from weak areas to strong areas is the primary mechanism available for restoring full employment within the U.S. If capital and labor mobility are low it will take longer for the U.S. to recover from area-specific negative economic shocks.

State or area-specific economic shocks are more likely in large countries like the U.S. that have very diverse local economies. This makes labor and capital mobility more important in the U.S. than in smaller, less economically diverse countries such as Denmark or Switzerland, since those countries are less susceptible to area-specific economic shocks.

Why labor mobility is low

There is some consensus about policies that can increase labor mobility. Many people, including former President Barack Obama, my colleagues at the Mercatus Center and others, have pointed out that state occupational licensing makes it harder for workers in licensed professions to move across state borders. There is similar agreement that land-use regulations increase housing prices, which makes it harder for people to move to areas with the strongest economies.

Reducing occupational licensing and land-use regulations would increase labor mobility, but actually doing these things is not easy. Occupational licensing and land-use regulations are controlled at the state and local level, so currently there is little that the federal government can do.

Moreover, as Mr. Schleicher points out in his paper, state and local governments created these regulations for a reason and it’s not clear that they have any incentive to change them. Like all politicians, state and local ones care about being re-elected and that means, at least to some extent, listening to their constituents. These residents usually value stability, so politicians who advocate too strongly for growth may find themselves out of office. Mr. Schleicher also notes that incumbent politicians often prefer a stable, immobile electorate because it means that the voters who elected them in the first place will be there next election cycle.

Occupational licensing and land-use regulations make it harder for people to enter thriving local economies, but other policies make it harder to leave areas with poor economies. Nearly 13% of Americans work for state and local governments and 92% of them have a defined-benefit pension plan. Defined-benefit plans have long vesting periods and benefits can be significantly smaller if employees split their career between multiple employers rather than remain at one employer. Thus over 10% of the workforce has a strong retirement-based incentive to stay where they are.

Eligibility standards for public benefits and their amounts also vary by state, and this discourages people who receive benefits such as Temporary Assistance for Needy Families (TANF) from moving to states that may have a stronger economy but smaller benefits. Even when eligibility standards and benefits are similar, the paperwork and time burden of enrolling in a new state can discourage mobility.

The federal government subsidizes home ownership as well, and homeownership is correlated with less labor mobility over time. Place-based subsidies to declining cities also artificially support areas that should have fewer people. As long as state and federal governments subsidize government services in cities like Atlantic City and Detroit, people will be less inclined to leave them. People-based subsidies that incentivize people to move to thriving areas are an alternative that is likely better for the taxpayer, the recipient and the country in the long run.

How to increase labor mobility

Since state and local governments are unlikely to directly address the impediments to labor mobility that they have created, Mr. Schleicher argues for more federal involvement. Some of his suggestions don’t interfere with local control, such as a federal clearinghouse for coordinated occupational-licensing rules across states. This is not a bad idea but I am not sure how effective it would be.

Other suggestions are more intrusive and range from complete federal preemption of state and local rules to federal grants that encourage more housing construction or suspension of the mortgage-interest deduction in places that restrict housing construction.

Local control is important due to the presence of local knowledge and the beneficial effects that arise from interjurisdictional competition, so I don’t support complete federal preemption of local rules. Economist William Fischel also thinks the mortgage-interest deduction is largely responsible for excessive local land-use regulation, so eliminating it altogether or suspending it in places that don’t allow enough new housing seems like a good idea.

I also support more people-based subsidies that incentivize moving to areas with better economies and less place-based subsidies. These subsidies could target people living in specific places and the amounts could be based on the economic characteristics of the destination, with larger amounts given to people who are willing to move to areas with the most employment opportunities and/or highest wages.

Making it easier for people to retain any state-based government benefits across state lines would also help improve labor mobility. I support reforms that reduce the paperwork and time requirements for transferring benefits or for simply understanding what steps need to be taken to do so.

Several policy changes will need to occur before we can expect to see significant changes in labor mobility. There is broad agreement around some of them, such as occupational licensing and land-use regulation reform, but bringing them to fruition will take time. As for the less popular ideas, it will be interesting to see which, if any, are tried.

Northern Cities Need To Be Bold If They Want To Grow

Geography and climate have played a significant role in U.S. population growth since 1970 (see here, here, here, and here). The figure below shows the correlation between county-level natural amenities and county population growth from 1970 – 2013, controlling for other factors: the county’s population in 1970, its average wage in 1970 (a measure of labor productivity), the proportion of adults with a bachelor’s degree or higher in 1970, and region of the country. The county-level natural amenities index is from the U.S. Department of Agriculture and scores the counties in the continental U.S. according to their climate and geographic features. The county with the worst score is Red Lake, MN and the county with the best score is Ventura, CA.

1970-13 pop growth, amenities

As shown in the figure, the slope of the best-fit line is positive. The coefficient from the regression is given at the bottom of the figure and is equal to 0.16, meaning a one-point increase in the amenities score increased population growth by 16 percentage points on average.

The effect of natural amenities on population growth is much larger than the effect of the proportion of adults with a bachelor’s degree or higher, which is another strong predictor of population growth at the metropolitan (MSA) and city level (see here, here, here, and here). The relationship between county population growth from 1970 – 2013 and human capital is depicted below.

1970-13 pop growth, bachelors or more

Again, the relationship is positive but the effect is smaller. The coefficient is 0.026, which means a 1 percentage point increase in the proportion of adults with a bachelor’s degree or higher in 1970 increased population growth by 2.6 percentage points on average.

An example using some specific counties can help us see the difference between the climate and education effects. In the table below the county where I grew up, Greene County, OH, is the baseline county. I also include five other urban counties from around the country: Charleston County, SC; Dallas County, TX; Eau Claire County, WI; San Diego County, CA; and Sedgwick County, KS.

1970-13 pop chg, amenities table

The first column lists the amenities score for each county. The highest score belongs to San Diego. The second column lists the difference between each county’s score and Greene County’s, e.g. 9.78 – (-1.97) = 11.75 is the difference between San Diego’s score and Greene County’s. The third column is the difference column multiplied by the 0.16 coefficient from the natural-amenity figure, e.g. 11.75 x 0.16 = 188% in the San Diego row. What this means is that according to this model, if Greene County had San Diego’s climate and geography it would have grown by an additional 188 percentage points from 1970 – 2013, all else equal.
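The table’s arithmetic can be reproduced directly. The sketch below uses the two scores and the 0.16 coefficient quoted above (the values are taken from the post, not recomputed from the underlying data):

```python
# Predicted extra 1970-2013 population growth (in percentage points) if a
# county had another county's natural amenities score, all else equal.
# The 0.16 coefficient and the scores for Greene County, OH (-1.97) and
# San Diego County, CA (9.78) are the values quoted in the post.
AMENITY_COEF = 0.16

def predicted_extra_growth(own_score, other_score, coef=AMENITY_COEF):
    """(difference in scores) x coefficient, expressed in percentage points."""
    return (other_score - own_score) * coef * 100

print(predicted_extra_growth(-1.97, 9.78))  # roughly 188, matching the table
```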

Finally, the last column is the actual population growth of the county from 1970 – 2013. As shown, San Diego County grew by 135% while Greene County only grew by 30% over this 43-year period. Improving Greene County’s climate to that of any of the other counties except for Eau Claire would have increased its population growth by a substantial yet realistic amount.

Table 2 below is similar to the natural amenities table above only it shows the different effects on Greene County’s population growth due to a change in the proportion of adults with a bachelor’s degree or higher.

1970-13 pop chg, bachelor's table

As shown in the first column, Greene County actually had the largest proportion of adults with a bachelor’s degree or higher in 1970 (14.7%) of the counties listed.

The third column shows how Greene County’s population growth would have changed if it had the same proportion of adults with a bachelor’s degree or higher as the other counties did in 1970. If Greene County had Charleston’s proportion (11.2%) instead of 14.7% in 1970, its population growth is predicted to have been 9 percentage points lower from 1970 – 2013, all else equal. All of the effects in the table are negative since all of the counties had a lower proportion than Greene and education has a positive effect on population growth.
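The same arithmetic applies to the education effect, using the 0.026 coefficient and the 1970 shares quoted above (Greene County 14.7%, Charleston County 11.2%):

```python
# Predicted change in 1970-2013 population growth (percentage points) if a
# county had another county's 1970 share of adults with a bachelor's degree
# or higher. The coefficient and shares are the values quoted in the post.
EDU_COEF = 0.026

def education_effect(own_share, other_share, coef=EDU_COEF):
    # Shares are already in percent, so their difference is in percentage
    # points; multiplying by 100*coef converts 0.026 into the 2.6 pp effect.
    return (other_share - own_share) * coef * 100

print(education_effect(14.7, 11.2))  # about -9.1, matching the table
```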

Several studies have demonstrated the positive impact of an educated population on overall city population growth – often through its impact on entrepreneurial activity – but as shown here the education effect tends to be swamped by geographic and climate features. What this means is that city officials in less desirable areas need to be bold in order to compensate for the poor geography and climate that are out of their control.

A highly educated population combined with a business environment that fosters innovation can create the conditions for city growth. Burdensome land-use regulations, lengthy, confusing permitting processes, and unpredictable rules coupled with inconsistent enforcement increase the costs of doing business and stifle entrepreneurship. When these harmful business-climate factors are coupled with a generally bad climate the result is something like Cleveland, OH.

The reality is that the tax and regulatory environments of declining manufacturing cities remain too similar to those of cities in the Sunbelt while their weather and geography differ dramatically, and not in a good way. Since only relative differences cause people and firms to relocate, the similarity across tax and regulatory environments ensures that weather and climate remain the primary drivers of population change.

To overcome the persistent disadvantage of geography and climate officials in cold-weather cities need to be aggressive in implementing reforms. Fiddling around the edges of tax and regulatory policy in a half-hearted attempt to attract educated people, entrepreneurs and large, high-skill employers is a waste of time and residents’ resources – Florida’s cities have nicer weather and they’re in a state with no income tax. Northern cities like Flint, Cleveland, and Milwaukee that simply match the tax and regulatory environment of Houston, San Diego, or Tampa have done nothing to differentiate themselves along those dimensions and still have far worse weather.

Location choices reveal that people are willing to put up with a lot of negatives to live in places with good weather. California has one of the worst tax and regulatory environments of any state in the country and terrible congestion problems yet its large cities continue to grow. A marginally better business environment is not going to overcome the allure of the sun and beaches.

While a better business environment that is attractive to high-skilled workers and encourages entrepreneurship is unlikely to completely close the gap between a place like San Diego and Dayton when it comes to being a nice place to live and work, it’s a start. And more importantly it’s the only option cities like Dayton, Buffalo, Cleveland, St. Louis and Detroit have.

City population dynamics since 1850

The reason why some cities grow and some cities shrink is a heavily debated topic in economics, sociology, urban planning, and public administration. In truth, there is no single reason why a city declines. Often exogenous factors – new modes of transportation, increased globalization, institutional changes, and federal policies – initiate the decline while subsequent poor political management can exacerbate it. This post focuses on the population trends of America’s largest cities since 1850 and how changes in these factors affected the distribution of people within the US.

When water transportation, water power, and proximity to natural resources such as coal were the most important factors driving industrial productivity, businesses and people congregated in locations near major waterways for power and shipping purposes. The graph below shows the top 10 cities* by population in 1850 and follows them until 1900. The rank of the city is on the left axis.

top cities 1850-1900

 

* The 9th, 11th, and 12th ranked cities in 1850 were all incorporated into Philadelphia by 1860. Pittsburgh was the next highest-ranked city (13th) that was not incorporated, so I used it in the graph instead.

All of the largest cities were located on heavily traveled rivers (New Orleans, Cincinnati, Pittsburgh, and St. Louis) or on the coast and had busy ports (New York, Boston, Philadelphia, Brooklyn, and Baltimore). Albany, NY may seem like an outlier but it was the starting point of the Erie Canal.

As economist Ed Glaeser (2005) notes “…almost every large northern city in the US as of 1860 became an industrial powerhouse over the next 60 years as factories started in central locations where they could save transport costs and make use of large urban labor forces.”

Along with waterways, railroads were an important mode of transportation from 1850 – 1900, and many of these cities had important railroads running through them, such as the B&O through Baltimore and the Erie Railroad in New York. The increasing importance of railroads impacted the list of top 10 cities in 1900 as shown below.

top cities 1900-1950

A similar but not identical set of cities dominated the urban landscape over the next 50 years. By 1900, New Orleans, Brooklyn (merged with New York), Albany, and Pittsburgh were replaced by Chicago, Cleveland, Buffalo, and San Francisco. Chicago, Cleveland, and Buffalo are all located on the Great Lakes and thus had water access, but it was the increasing importance of railroad shipping and travel that helped their populations grow. Buffalo was on the B&O railroad and was also the terminal point of the Erie Canal. San Francisco became much more accessible after the completion of the Pacific Railroad in 1869, but the California Gold Rush in the late 1840s got its population growth started.

As rail and eventually automobile/truck transportation became more important during the early 1900s, cities that relied on strategic river locations began to decline. New Orleans was already out of the top 10 by 1900 (falling from 5th to 12th) and Cincinnati went from 10th in 1900 to 18th by 1950. Buffalo also fell out of the top 10 during this time period, declining from 8th to 15th. But despite some changes in the rankings, there was only one warm-weather city in the top 10 as late as 1950 (Los Angeles). However, as the next graph shows, there was a surge in the populations of warm-weather cities from 1950 to 2010 that caused many of the older Midwestern cities to fall out of the rankings.

top cities 1950-2010

The largest shakeup in the population rankings occurred during this period. Out of the top 10 cities in 1950, only 4 (Philadelphia, Los Angeles, Chicago, and New York) were still in the top 10 in 2010 (All were in the top 5, with Houston – 4th in 2010 – being the only city not already ranked in the top 10 in 1950, when it was 14th). The cities ranked 6 – 10 fell out of the top 20 while Detroit declined from 5th to 18th. The large change in the rankings during this time period is striking when compared to the relative stability of the earlier time periods.

Economic changes due to globalization and the prevalence of right-to-work laws in the southern states, combined with preferences for warm weather and other factors have resulted in both population and economic decline in many major Midwestern and Northeastern cities. All of the new cities in the top ten in 2010 have relatively warm weather: Phoenix, San Antonio, San Diego, Dallas, and San Jose. Some large cities missing from the 2010 list – particularly San Francisco and perhaps Washington D.C. and Boston as well – would probably be ranked higher if not for restrictive land-use regulations that artificially increase housing prices and limit population growth. In those cities and other smaller cities – primarily located in Southern California – low population growth is a goal rather than a result of outside forces.

The only cold-weather cities that were in the top 15 in 2014 that were not in the top 5 in 1950 were Indianapolis, IN (14th) and Columbus, OH (15th). These two cities not only avoided the fate of nearby Detroit and Cleveland, they thrived. From 1950 to 2014 Columbus’ population grew by 122% and Indianapolis’ grew by 99%. This is striking compared to the 57% decline in Cleveland and the 63% decline in Detroit during the same time period.

So why have Columbus and Indianapolis grown since 1950 while every other large city in the Midwest has declined? There isn’t an obvious answer. One thing among many that both Columbus and Indianapolis have in common is that they are both state capitals. State spending as a percentage of Gross State Product (GSP) has been increasing since 1970 across the country as shown in the graph below.

OH, IN state spending as per GSP

In Ohio state spending growth as a percentage of GSP has outpaced the nation since 1970. It is possible that increased state spending in Ohio and Indiana is crowding out private investment in other parts of those states. And since much of the money collected by the state ends up being spent in the capital via government wages, both Columbus and Indianapolis grow relative to other cities in their respective states.

There has also been an increase in state level regulation over time. As state governments become larger players in the economy business leaders will find it more and more beneficial to be near state legislators and governors in order to lobby for regulations that help their company or for exemptions from rules that harm it. Company executives who fail to get a seat at the table when regulations are being drafted may find that their competitors have helped draft rules that put them at a competitive disadvantage. The decline of manufacturing in the Midwest may have created an urban reset that presented firms and workers with an opportunity to migrate to areas that have a relative abundance of an increasingly important factor of production – government.

Can historic districts dampen urban renewal?

Struggling cities in the Northeast and Midwest have been trying to revitalize their downtown neighborhoods for years. City officials have used taxpayer money to build stadiums, construct river walks, and lure employers with the hope that such actions will attract affluent, tax-paying residents back to the urban core. Often these strategies fail to deliver, but that hasn’t deterred other cities from duplicating or even doubling down on the efforts. But if these policies don’t work, what can cities do?

Part of the answer is to allow more building, especially newer housing. One factor that may be hampering the gentrification efforts of many cities is the age of their housing stock. The theory is straightforward and is explained and tested in this 2009 study. From the abstract:

“This paper identifies a new factor, the age of the housing stock, that affects where high- and low-income neighborhoods are located in U.S. cities. High-income households, driven by a high demand for housing services, will tend to locate in areas of the city where the housing stock is relatively young. Because cities develop and redevelop from the center outward over time, the location of these neighborhoods varies over the city’s history. The model predicts a suburban location for the rich in an initial period, when young dwellings are found only in the suburbs, while predicting eventual gentrification once central redevelopment creates a young downtown housing stock.”

In the empirical section of the paper the authors find that:

“… a tract’s economic status tends to fall rather than rise as distance increases holding age fixed, suggesting that high-income households would tend to live near city centers were it not for old central housing stocks.” (My bold)

This makes sense. High-income people like relatively nicer, newer housing and will purchase homes in neighborhoods where the housing is relatively nicer and newer. In the latter half of the 20th century this meant buying new suburban homes, but as that suburban housing ages and new housing is built to replace the even older housing in the central city, high-income people will be drawn back to central-city neighborhoods. This has the power to reduce the income disparity between the central city and suburbs seen in many metropolitan areas. As the authors note:

“Our results show that, if the influence of spatial variation in dwelling ages were eliminated, central city/suburban disparities in neighborhood economic status would be reduced by up to 50 percent within American cities. In other words, if the housing age distribution were made uniform across space, reducing average dwelling ages in the central city and raising them in the suburbs, then neighborhood economic status would shift in response, rising in the center and falling in the suburbs.” (My bold)

To get a sense of the age of the housing stock in northern cities, the figure below depicts the proportion of housing in eight different age categories in Ohio’s six major cities as of 2013 (most recent data available, see table B25034 here).

age of ohio's housing stock

The age categories are: built after 2000, from 1990 to 1999, from 1980 to 1989, from 1970 to 1979, from 1960 to 1969, from 1950 to 1959, from 1940 to 1949, and built prior to 1939. As the figure shows, most of the housing stock in Ohio’s major cities is quite old. In every city except Columbus, over 30% of the housing stock was built prior to 1939. In Cleveland, over 50% of the housing stock is more than 75 years old! In Columbus, which is the largest and fastest-growing city in Ohio, the housing stock is fairly evenly distributed across the age categories, and Columbus really stands out in the three youngest categories.

In a free market for housing, old housing would be torn down and replaced by new housing once the net benefits of demolition and rebuilding exceed the net benefits of renovation. But anyone who studies the housing market knows that it is hardly free: city ordinances regulate everything from lot sizes to height requirements. While these regulations restrict new housing, they are a larger problem in cities where demand for housing is already high, since they artificially restrict supply and drive up prices.

A potentially bigger problem for declining cities that has to do with the age of the housing stock is historic districts. In historic districts the housing is protected by local rules that limit the types of renovations that can be undertaken. Property owners are required to maintain their home’s historical look and it can be difficult to demolish old houses.

For example, in Dayton, OH there are 20 historic districts in a city of only 142,000 people. Dayton’s Landmark Commission is charged with reviewing and approving major modifications to the buildings in historic districts, including their demolition. Many of the districts are located near the center of the city and contain homes built in the late 1800s and early 1900s. Some are also quite large; St. Anne’s Hill contains 315 structures, and the South Park historic district covers 24 blocks and contains more than 700 structures. The table below provides a list of Dayton’s historic districts as well as the year each was classified, its number of structures, its acreage, and whether it is a locally protected district. Seventy percent of the districts are protected by a local historic designation, while 30 percent are protected only by the national designation.

dayton historic districts table

I personally like old houses, but I also recognize that holding on to the past can interfere with revitalization and growth. Older homes, especially those built prior to 1940, are expensive to restore and maintain. They often have outdated plumbing, outdated electrical systems, and inefficient windows that need to be replaced. They may also contain lead paint or other hazardous materials that were commonly used when they were built and that may have to be removed. Many people can’t afford these upfront costs, and those who can often don’t want to deal with the hassle of a restoration project.

Also, people have different tastes, and historic districts make it difficult for some people to live in the house they want in the area they want. As this map shows, many of Dayton’s historic districts are located near the center of the city in the most walkable, urban neighborhoods. The Oregon District and St. Anne’s Hill are both quite walkable and contain several restaurants, bars, and shops. If a person wants to live in one of these neighborhoods, they have to be content with living in an older house. The design restrictions that come standard with historic districts prevent people with certain tastes from locating in these areas.

A 2013 study that examined the Cleveland housing market determined that it is economical to demolish many of the older, vacant homes in declining cities rather than renovate them. This is just as true of older homes that happen to be in historic districts.

Ultimately, homeowners should be free to do what they want with their home and the land it sits on. If a person wants to buy a historic house and renovate it, they should be free to do so, but they should also be allowed to build a new structure on the property if they wish. When a city protects large swathes of houses via historic districts, it slows down the cycle of housing construction that could draw people back to urban neighborhoods. This is especially true if the historic districts encompass the best areas of the city, such as those closest to downtown amenities and employment opportunities. Living in the city appeals to many people, but being forced to purchase and live in outdated housing dampens that appeal for some and may be contributing to the inability of cities like Dayton to turn the corner.

Rent control: A bad policy that just won’t die

The city council of Richmond, CA is considering implementing rent control in the city. Richmond is located north of Berkeley and Oakland on the San Francisco Bay, in an area with some of the highest housing prices in the country. From the article:

“Richmond is growing and becoming a more desirable place where people want to live, but that increased demand is putting pressure on the existing housing stock.”

It is true that an increase in the demand for housing will increase prices and rents. Unfortunately, rent control will not solve the problem of too little housing, which is the ultimate cause of high prices.

rent control 1

The diagram above depicts a market for housing like the one in Richmond. Without rent control, when demand increases (D1 to D2) the price rises to R2 and the equilibrium quantity increases from Q1 to Q*. However, with rent control, the price is unable to rise. For example, if the Richmond city council wanted prices to be at the pre-demand-increase level they would set the rent control price equal to R1. But with the increase in demand the quantity demanded at that price is Qd, while the quantity supplied is only Q1. Thus there is a shortage. This is the outcome of a price ceiling.

What this means is that some people (Q1 of them) will find a place to rent at the old, lower price. But more people will want to rent at that price than there are units available, and since the price cannot rise due to the price control, the available apartments must be allocated some other way. That means longer wait times for vacant apartments and higher search costs. It also means lower-quality apartments: since owners know there are more people who want an apartment than there are apartments available, they have little incentive to maintain their units at the level they would if they had to compete for tenants.

With rent control, only Q1 people get an apartment. Without rent control, as the price rises more units are supplied over time and the new equilibrium has Q* (> Q1) people who get an apartment. Yes, they have to pay a higher price, but the relevant alternative is not an apartment at the lower price: The alternative is that some people who would have been willing to pay the higher price do not get an apartment.
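The shortage arithmetic above can be sketched numerically with hypothetical linear demand and supply curves. The functional forms and every number below are illustrative assumptions, not estimates for Richmond or any real housing market.

```python
# A minimal sketch of the price-ceiling logic above, using hypothetical
# linear demand and supply curves. All numbers are illustrative.

def demand(price, intercept):
    """Quantity demanded falls as rent rises."""
    return intercept - 2.0 * price

def supply(price):
    """Quantity supplied rises as rent rises."""
    return 1.0 * price

def equilibrium(intercept):
    """Solve demand = supply: intercept - 2p = p  ->  p = intercept / 3."""
    p = intercept / 3.0
    return p, supply(p)

# Initial demand D1: equilibrium rent R1 and quantity Q1.
r1, q1 = equilibrium(300.0)      # R1 = 100, Q1 = 100

# Demand rises to D2: without rent control, rent rises to R2 and the
# equilibrium quantity rises to Q*.
r2, q_star = equilibrium(450.0)  # R2 = 150, Q* = 150

# With rent pinned at R1, quantity supplied stays at Q1 but quantity
# demanded jumps to Qd, leaving a shortage of Qd - Q1.
qd = demand(r1, 450.0)           # Qd = 250
shortage = qd - supply(r1)       # shortage = 150
```

Under these made-up curves the ceiling leaves 100 units on the market while 250 households want one at that rent; the gap of 150 is the shortage the diagram depicts.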

Since Richmond has strict land-use rules like many communities in the San Francisco metro area (you can read all about its minimum lot size and parking space requirements here), rent control would add to the housing woes of Richmond’s renters and of anyone who would like to move there.

rent control 2

Land-use restrictions decrease the amount of buildable land which subsequently increases the cost of housing. This is depicted in the diagram above as a shift from S1 to S2. The decrease in supply leads to a new equilibrium rent of R2 > R1 and a reduction in the equilibrium quantity to Q2 (< Q1). So land-use restrictions have already decreased the amount of available housing and increased the price.

If rent control is implemented, depicted in the diagram as the solid red line at the old price (R1), then the quantity supplied decreases even more to Qs. Again, with rent control there is a shortage as the quantity of housing demanded at R1 is Q1 (> Qs). So all of the same problems that occurred in the first example occur here, only here the quantity of housing is decreased not once, but TWICE by the government: Once due to the land use restrictions (Q1 to Q2) and then AGAIN when the rent control is implemented (Q2 to Qs). Restricting the amount of housing available does not help more people find housing, and restricting it again exacerbates the problem.
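This double reduction can also be illustrated numerically with hypothetical linear demand and supply curves; every functional form and number below is an illustrative assumption, not data from any real market.

```python
# A numerical sketch of the double reduction described above: land-use
# restrictions shift supply left, then rent control cuts quantity again.
# All curves and numbers are hypothetical.

def demand(price):
    """Quantity demanded falls as rent rises."""
    return 300.0 - 2.0 * price

def supply_s1(price):
    """Original supply curve S1."""
    return price

def supply_s2(price):
    """Supply after land-use restrictions shift the curve left (S1 -> S2)."""
    return price - 60.0

# Equilibrium on S1: 300 - 2p = p  ->  R1 = 100, Q1 = 100.
r1, q1 = 100.0, supply_s1(100.0)

# Equilibrium on S2: 300 - 2p = p - 60  ->  R2 = 120, Q2 = 60.
# The restriction alone raises rent and cuts quantity (first reduction).
r2, q2 = 120.0, supply_s2(120.0)

# Rent control then pins rent back at R1 = 100 (second reduction):
qs = supply_s2(r1)    # quantity supplied falls again, to Qs = 40
qd = demand(r1)       # quantity demanded at R1 is Q1 = 100
shortage = qd - qs    # shortage = 60

# Quantity is cut twice: Q1 (100) -> Q2 (60) by the land-use rules,
# then Q2 (60) -> Qs (40) by the rent ceiling.
```

Nothing hinges on the particular slopes chosen; any downward-sloping demand curve and upward-sloping supply curve produce the same qualitative result.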

Trying to find an economist who doesn’t think that rent control is a bad idea is like trying to find a cheap apartment in a city with rent control: it can be done, but you have to spend a lot of time looking. In a Chicago Booth IGM Forum poll question about rent control, 95% of the economists surveyed disagreed with the statement that rent control has had a positive impact on the amount and quality of affordable rental housing. Yet despite basic economic theory, the agreement among experts, and the empirical evidence (see here, here, and here), rent control remains in place in some cities and is often brought up as a viable policy for increasing the amount of affordable housing. This is truly a shame, since what places like Richmond need is more housing, not less housing at artificially low prices.

Institutions matter, state legislative committee edition

Last week, Mercatus published a new working paper that I coauthored with Pavel Yakovlev of Duquesne University. It addresses an understudied institutional difference between states. Some state legislative chambers allow one committee to write both spending and taxing bills while others separate these functions into two separate committees.

This institutional difference first caught my eye a few years ago when Nick Tuszynski and I reviewed the literature on institutions and state spending. Among 16 different institutions that we looked at—from strict balanced budget requirements to term limits to “item reduction vetoes”—one stood out. Previous research by Mark Crain and Timothy Muris had found that states in which separate committees craft taxing and spending bills spend significantly less per capita than states in which a single committee was responsible for both kinds of bills. As you can see from the figure below (click to enlarge), the effect was estimated to be many times larger than that found for almost any other institution:

Institutions

But as large as this effect seems to be, the phenomenon has largely been ignored. To our knowledge, Crain and Muris are the only ones to have studied it, and their paper is now two decades old and based on a relatively small sample of years from the 1980s.

As I wrote in yesterday’s Economics Intelligence column for US News:

To get a fresh look at the phenomenon, my colleagues and I consulted state statutes, legislative rules, committee websites and members’ offices. We created a unique data set that for some states spans 40 years. We took a cautious approach, coding taxing and spending functions as not separate in any chambers in which it was possible for a tax bill to come out of a spending committee and vice versa. We found that in 25 states, these functions are separate in both chambers, in 7 states they are separate in one chamber, and in the rest, these functions are separate in neither chamber.

To control for other confounding factors, we also gathered data on economic, demographic, and institutional differences between the states. Controlling for these factors, we found that separate taxing and spending committees are, indeed, associated with less spending. To be precise:

Other factors being equal, we find that those states with separate taxing and spending committees spend between $300 and $450 less per capita (between $790 and $1,200 less per household) than other states.
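As a back-of-envelope check, the per-capita and per-household figures quoted above are mutually consistent given an average household size of roughly 2.6 to 2.7 people. The household size below is my assumption (approximately the U.S. average), not a number taken from the paper.

```python
# Back-of-envelope consistency check on the quoted figures. The household
# size is an assumption (roughly the U.S. average), not from the paper.

avg_household_size = 2.63

low_per_household = 300 * avg_household_size   # ~ $789, matching the "$790"
high_per_household = 450 * avg_household_size  # ~ $1,184, near the "$1,200"
```

The upper bound lines up exactly only with a slightly larger household ($1,200 / $450 ≈ 2.67 people), so the paper presumably rounded or used a somewhat different average.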

Our full paper is here, a summary is here, and my post at US News is here. Comments welcome.

Corporate welfare spending is not transparent

Over a century ago, the Italian political economist Amilcare Puviani suggested that policy makers have a strong incentive to obscure the cost of government. Known as “fiscal illusion,” the idea is that voters will be willing to spend more money on government if they think its cost is lower than it actually is. Fiscal illusion explains a great deal about public choices, including the popularity of deficit spending.

It also explains why the public knows the least about some of the most controversial items in the public budget, such as corporate welfare. But some would like to change this. Here are Jess Fields and Tom “Smitty” Smith, writing in the (subscription required) Austin American-Statesman:

Texans believe in government transparency and accountability. For this reason, we have some of the most advanced open-government initiatives in the nation. Yet one policy area remains outside the view of the general public: economic development.

When local governments cut deals that result in millions in incentives, they can do it behind closed doors in “executive session” — legally — thanks to exceptions to the Open Meetings and Public Information Acts for “economic development negotiations.”

Fields is a senior policy analyst at the free-enterprise Texas Public Policy Foundation, while Smith is the director of the Texas office of Public Citizen, a progressive consumer advocacy group started by Ralph Nader in the ‘70s.

Texans aren’t the only ones interested in making corporate welfare more transparent. The Governmental Accounting Standards Board (GASB) is considering rules that would require governments to report the tax privileges that they hand out to businesses. Here is Liz Farmer, writing in Governing Magazine:

Specifically, GASB is proposing that state and local governments disclose information about property and other tax abatement agreements in their annual financial statements. If approved, the new disclosures could shed light on an area of government finance and provide hard data on information that is assembled sporadically, if at all. Scores of public and private groups support the proposal and it has proven to be one of GASB’s most debated topic yet, as nearly 300 groups or individuals submitted comment letters to the board. But many still say the requirements don’t go far enough.

She notes that the proposal misses a number of tax privileges including:

  • Tax increment financing (TIF),
  • Agreements to discount personal income taxes,
  • “[P]rograms that reduce the tax liabilities of businesses or similar classes of taxpayers.”

Because of these omissions, the new GASB rules may capture only about one-third of all tax expenditures.

Puviani would have predicted that.

How Complete Are Federal Agencies’ Regulatory Analyses?

A report released yesterday by the Government Accountability Office will likely get spun to imply that federal agencies are doing a pretty good job of assessing the benefits and costs of their proposed regulations. The subtitle of the report reads in part, “Agencies Included Key Elements of Cost-Benefit Analysis…” Unfortunately, agency analyses of regulations are less complete than this subtitle suggests.

The GAO report defined four major elements of regulatory analysis: discussion of the need for the regulatory action, analysis of alternatives, assessment of benefits, and assessment of costs. These crucial features have been required in executive orders on regulatory analysis and OMB guidance for decades. For the largest regulations, those with economic effects exceeding $100 million annually (“economically significant” regulations), GAO found that agencies always included a statement of the regulation’s purpose, discussed alternatives 81 percent of the time, always discussed benefits and costs, provided a monetized estimate of costs 97 percent of the time, and provided a monetized estimate of benefits 76 percent of the time.

A deeper dive into the report, however, reveals that GAO did not evaluate the quality of any of these aspects of agencies’ analysis. Page 4 of the report notes, “[O]ur analysis was not designed to evaluate the quality of the cost-benefit analysis in the rules. The presence of all key elements does not provide information regarding the quality of the analysis, nor does the absence of a key element necessarily imply a deficiency in a cost-benefit analysis.”

For example, GAO checked to see whether the agency included a statement of the purpose of the regulation, but it apparently accepted a statement that the regulation is required by law as a sufficient statement of purpose (p. 22). Citing a statute is not the same thing as articulating a goal or identifying the root cause of the problem an agency seeks to solve.

Similarly, an agency can provide a monetary estimate of some benefits or costs without necessarily addressing all major benefits or costs the regulation is likely to create. GAO notes that it did not ascertain whether agencies addressed all relevant benefits or costs (p. 23).

For an assessment of the quality of agencies’ regulatory analysis, check out the Mercatus Center’s Regulatory Report Card. The Report Card evaluation method explicitly assesses the quality of the agency’s analysis, rather than just checking to see if the agency discussed the topics. For example, to assess how well the agency analyzed the problem it is trying to solve, the evaluators ask five questions:

1. Does the analysis identify a market failure or other systemic problem?

2. Does the analysis outline a coherent and testable theory that explains why the problem is systemic rather than anecdotal?

3. Does the analysis present credible empirical support for the theory?

4. Does the analysis adequately address the baseline — that is, what the state of the world is likely to be in the absence of federal intervention not just now but in the future?

5. Does the analysis adequately assess uncertainty about the existence or size of the problem?

These questions are intended to ascertain whether the agency identified a real, significant problem and identified its likely cause. On a scoring scale ranging from 0 points (no relevant content) to 5 points (substantial analysis), economically significant regulations proposed between 2008 and 2012 scored an average of just 2.2 points for their analysis of the systemic problem. This score indicates that many regulations are accompanied by very little evidence-based analysis of the underlying problem the regulation is supposed to solve. Scores for assessment of alternatives, benefits, and costs are only slightly better, which suggests that these aspects of the analysis are often seriously incomplete.

These results are consistent with the findings of other scholars who have evaluated the quality of agency Regulatory Impact Analyses during the past several decades. (Check pp. 7-10 of this paper for citations.)

The Report Card results are also consistent with the findings in the GAO report. GAO assessed whether agencies are turning in their assigned homework; the Report Card assesses how well they did the work.

The GAO report contains a lot of useful information, and the authors are forthright about its limitations. GAO combed through 203 final regulations to figure out what parts of the analysis the agencies did and did not do — an impressive accomplishment by any measure!

I’m more concerned that some participants in the political debate over regulatory reform will claim that the report shows regulatory agencies are doing a great job of analysis, and no reforms to improve the quality of analysis are needed. The Regulatory Report Card results clearly demonstrate otherwise.