Tag Archives: consumption

What else can the government do for America’s poor?

This year marks the 20th anniversary of the 1996 welfare reforms, an occasion that has generated some discussion about poverty in the U.S. I recently spoke to a group of high school students on this topic and about what reforms, if any, should be made to our means-tested welfare programs.

After reading several papers (e.g. here, here and here) and the book Hillbilly Elegy, and after reflecting on my own experiences, I am not convinced the government is capable of doing much more.

History

President Lyndon Johnson declared “War on Poverty” in his 1964 State of the Union address. Over the last 50 years there has been some progress, but there are still approximately 43 million Americans living in poverty as defined by the U.S. Census Bureau.

Early on it looked as if poverty would be eradicated fairly quickly. In 1964, prior to the “War on Poverty,” the official poverty rate was 20%. It declined rapidly from 1965 to 1972, especially for the most impoverished groups, as shown in the figure below (data from Table 1 in Haveman et al., 2015).

[Figure: Poverty rates, 1965–1972]

Since 1972 the poverty rate has remained fairly constant. It reached its lowest point in 1973—11.1%—but has since fluctuated between roughly 11% and 15%, largely in accordance with the business cycle. The number of people in poverty has increased, but that is unsurprising considering the relatively flat poverty rate coupled with a growing population.

[Figure: Official poverty rate time series (Census, through 2015)]

Meanwhile, an alternative measure called the supplemental poverty measure (SPM) has declined, but it was still over 15% as of 2013, as shown below.

[Figure: Poverty rate time series, including the supplemental poverty measure]

The official poverty measure (OPM) counts only cash income and cash benefits as a person’s resources, while the SPM also counts tax credits and non-cash transfers (e.g. food stamps) when determining poverty status. The SPM also adjusts for local cost of living.

For example, the official poverty threshold for a single person under the age of 65 was $12,331 in 2015. But $12,331 can buy more in rural South Carolina than it can in Manhattan, primarily because of housing costs. The SPM takes these differences into account, although I am not sure it should for reasons I won’t get into here.

Regardless of the measure we look at, poverty is still higher than most people would probably expect considering the time and resources that have been expended trying to reduce it. This is especially true in high-poverty areas where poverty rates still exceed 33%.

A county-level map from the Census that uses the official poverty measure shows the distribution of poverty across the 48 contiguous states in 2014. White represents the least amount of poverty (3.2% to 11.4%) and dark pink the most (32.7% to 52.2%).

[Figure: County-level poverty map of the contiguous U.S., 2014]

The most impoverished counties are in the South, Appalachia, and the rural West, though there are pockets of high-poverty counties in the Plains states, central Michigan, and northern Maine.

Why haven’t we made more progress on poverty? And is there more that government can do? I think these questions are intertwined. My answer to the first is that it’s complicated; my answer to the second is that I don’t think so.

Past efforts

The inability to reduce the official poverty rate below 10% doesn’t appear to be due to a lack of money. The figure below, from Haveman et al. (2015), shows real per capita expenditures—the sum of federal, state and local spending—on the top 84 (top line) and the top 10 (bottom line) means-tested welfare programs since 1970.

[Figure: Real per capita expenditures on means-tested welfare programs since 1970]

There has been substantial growth in both since the largest drop in poverty occurred in the late 1960s. If money were the primary issue, one would expect better results over time.

So if the amount of money is not the issue, what is? It could be that even though we are spending money, we aren’t spending it on the right things. The chart below, also from Haveman et al. (2015), shows real per capita spending on several different programs.

[Figure: Real per capita spending on non-Medicaid poverty programs]

Spending on direct cash-assistance programs—Aid to Families with Dependent Children (AFDC) and Temporary Assistance for Needy Families (TANF)—has fallen over time, while spending on programs designed to encourage work—the Earned Income Tax Credit (EITC)—and on non-cash benefits like food stamps and housing aid has increased.

In the mid-1970s welfare programs began shifting from primarily cash aid (AFDC, TANF) to work-based aid (EITC). Today the EITC and food stamps are the core programs of the anti-poverty effort.

It’s impossible to know whether this shift has resulted in more or less poverty than what would have occurred without it. We cannot reconstruct the counterfactual without going back in time. But many people think that more direct cash aid, in the spirit of AFDC, is what’s needed.

The difference today is that instead of means-tested direct cash aid, many are calling for a universal basic income or UBI. A UBI would provide each citizen, from Bill Gates to the poorest single mother, with a monthly cash payment, no strings attached. Prominent supporters of a UBI include libertarian-leaning Charles Murray and people on the left such as Matt Bruenig and Elizabeth Stoker.

Universal Basic Income?

The details of each UBI plan vary, but the basic appeal is the same: It would reduce the welfare bureaucracy, simplify the process for receiving aid, increase the incentive to work at the margin since it doesn’t phase out, treat low-income people like adults capable of making their own decisions and mechanically decrease poverty by giving people extra cash.

A similar proposal is a negative income tax (NIT), first popularized by Milton Friedman. The current EITC is a negative income tax conditional on work, since it is refundable: eligible people whose credit exceeds the taxes they owe receive the difference. The NIT has its own problems, discussed in the link above, but it still has its supporters.

In theory I like a UBI. Economists generally favor cash benefits over in-kind programs like vouchers and food stamps because of their simplicity and larger effects on recipient satisfaction, or utility. In reality, however, a UBI of even $5,000 is very expensive ($5,000 for each of the roughly 320 million Americans would run about $1.6 trillion per year), and there are public choice considerations that many UBI supporters ignore, or at least downplay, that are real problems.

The political process can quickly turn an affordable UBI into an unaffordable one. It seems reasonable to expect that politicians trying to win elections will make UBI increases part of their platform, with each trying to outdo the other. There is little that can be done, short of a constitutional amendment (and even those can be changed), to ensure that political forces don’t alter the amount, recipient criteria or add additional programs on top of the UBI.

I think the history of the income tax demonstrates that a relatively low, simple UBI would quickly morph into a monstrosity. In 1913 there were 7 income tax brackets that applied to all taxpayers, and a worker needed to make more than $20K (equivalent to $487,733 in 2016) before he reached the second bracket of 2% (!). By 1927 there were 23 brackets and the second one, at 3%, kicked in at $4K ($55,500 in 2016) instead of $20K. And of course we are all aware of the current tax code’s problems. To chart a different course for the UBI is, in my opinion, a work of fantasy.

Final thoughts

Because of politics, I think an increase in the EITC (along with a reduction in its error rate), for both working parents and single adults, coupled with criminal justice reform that reduces the number of non-violent felons—who have a hard time finding employment upon release—is preferable to a UBI.

I also support the abolition of the minimum wage, which harms the job prospects of low-skilled workers. If we are going to tie anti-poverty programs to work in order to encourage movement towards self-sufficiency, then we should make it as easy as possible to obtain paid employment. Eliminating the minimum wage and subsidizing income through the EITC is a fairer, more efficient way to reduce poverty.

Additionally, if a minimum standard of living is something that is supported by society, then all of society should share the burden via tax-funded welfare programs. It is not philanthropic to force business owners to help the poor on behalf of the rest of us.

More economic growth would also help. Capitalism is responsible for lifting billions of people out of dire poverty in developing countries and the poverty rate in the U.S. falls during economic expansions (see previous poverty rate figures). Unfortunately, growth has been slow over the last 8 years and neither presidential candidate’s policies inspire much hope.

In fact, a good way for the government to help the poor is to reduce regulation and lower the corporate tax rate, which would help economic growth and increase wages.

Despite the relatively high official poverty rate in the U.S., poor people here live better than poor people just about anywhere else in the world. Extreme poverty—think Haiti—doesn’t exist in the U.S. On a consumption rather than income basis, there’s evidence that the absolute poverty rate has fallen to about 4%.

Given the way government functions, I don’t think there is much left for it to do. Its lack of local knowledge and the resulting blunt, one-size-fits-all solutions, coupled with its general inefficiency, make it incapable of helping the unique cases that fall through the current social safety net.

Any additional progress will need to come from the bottom up, and I will discuss this more in a future post.

Education, Innovation, and Urban Growth

One of the strongest predictors of urban growth since the start of the 20th century is the skill level of a city’s population. Cities that have a highly skilled population, usually measured as the share of the population with a bachelor’s degree or more, tend to grow faster than similar cities with less educated populations. This is true at both the metropolitan level and the city level. The figure below plots the population growth of 30 large U.S. cities from 1970 – 2013 on the vertical axis and the share of the city’s 25 and over population that had at least a bachelor’s degree in 1967 on the horizontal axis. (The education data for the cities are here. I am using the political city’s population growth and the share of the central city population with a bachelor’s degree or more from the census data linked to above.)

[Figure: 1967 BA share vs. city population growth, 1970–2013]

As shown in the figure there is a strong, positive relationship between the two variables: The correlation coefficient is 0.61. It is well known that over the last 50 years cities in warmer areas have been growing while cities in colder areas have been shrinking, but in this sample the cities in warmer areas also tended to have a better educated population in 1967. Many of the cities known today for their highly educated populations, such as Seattle, San Francisco, and Washington D.C., also had highly educated populations in 1967. Colder manufacturing cities such as Detroit, Buffalo, and Newark had less educated workforces in 1967 and subsequently less population growth.

The above figure uses data on both warm and cold cities, but the relationship also holds for cold cities alone. Below is the same graph, but it depicts only cities with a January mean temperature below 40°F. Twenty of the 30 cities meet this criterion.

[Figure: 1967 BA share vs. city population growth, cold cities only]

Again, there is a strong, positive relationship. In fact it is even stronger; the correlation coefficient is 0.68. Most of the cities in the graph lost population from 1970 – 2013, but the cities that did grow, such as Columbus, Seattle, and Denver, all had relatively educated populations in 1967.
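As a quick illustration of what these correlation numbers mean mechanically, here is a minimal sketch of computing a Pearson correlation coefficient between an education share and a growth rate. The city values are placeholders invented for the example, not the actual data plotted above.

```python
# Minimal sketch: Pearson correlation between 1967 BA share and 1970-2013
# population growth.  The values below are invented placeholders, not the
# actual city data used in the figures above.
import statistics

ba_share_1967 = [0.12, 0.08, 0.15, 0.10, 0.18]   # share of adults 25+ with a BA or more
pop_growth = [0.25, -0.30, 0.40, -0.10, 0.55]    # proportional population change, 1970-2013

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sd_x = sum((a - mx) ** 2 for a in x) ** 0.5
    sd_y = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

print(round(pearson_r(ba_share_1967, pop_growth), 2))
```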

There are several reasons why an educated population and urban population growth are correlated. One is that a faster accumulation of skills and human capital spillovers in cities increase wages which attracts workers. Also, the large number of specialized employers located in cities makes it easier for workers, especially high-skill workers, to find employment. Cities are also home to a range of consumption amenities that attract educated people, such as a wide variety of shops, restaurants, museums, and sporting events.

Another reason why an educated workforce may actually cause city growth has to do with its ability to adjust and innovate. On average, educated workers tend to be more innovative and better able to learn new skills. When there is a negative, exogenous shock to an industry, such as the decline of the automobile industry or the steel industry, educated workers can learn new skills and create new industries to replace the old ones. Many of the mid-20th century workers in Detroit and other Midwestern cities decided to forego higher education because good-paying factory jobs were plentiful. When manufacturing declined, those workers had a difficult time learning new skills. Also, the large firms that dominated the economic landscape, such as Ford, did not support entrepreneurial thinking. This meant that even the educated workers were not prepared to create new businesses.

Local politicians often want to protect local firms in certain industries through favorable treatment and regulation. But often this protection harms newer, innovative firms since they are forced to compete with the older firms on an uneven playing field. Political favoritism fosters a stagnant economy since, in the short run, established firms thrive at the expense of newer, more innovative startups. Famous political statements such as “What’s good for General Motors is good for the country” helped mislead workers into thinking that government was willing and able to protect their employers. But governments at all levels were unable to stop the economic forces that battered U.S. manufacturing.

To thrive in the 21st century local politicians need to foster economic environments that encourage innovation and ingenuity. The successful cities of the future will be those that are best able to innovate and to adapt in an increasingly complex world. History has shown us that an educated and entrepreneurial workforce is capable of overcoming economic challenges, but to do this people need to be free to innovate and create. Stringent land-use regulations, overly-burdensome occupational licensing, certificate-of-need laws, and other unnecessary regulations create barriers to innovation and make it more difficult for entrepreneurs to create the firms and industries of the future.

State and local spending growth vs. GDP growth.

A few years ago, I produced a figure which showed inflation-adjusted state and local expenditures alongside inflation-adjusted private GDP.

It’s been some time since I made that chart and so I thought I might revisit the question. This time around, I compared state and local expenditures with overall GDP, not just private GDP.

The results are below.

[Figure: State and local expenditures vs. GDP]

After adjusting for inflation, the economy is about 5.79 times its 1950 size. This is a good thing. It means more is being produced and more is available for consumption. And since the population has only doubled over this period, it means that per capita production is way up.

Over the same time period, however, state and local government expenditures have not just gone up 5 or 6 or even 8 times. Instead, after adjusting for inflation, state and local governments are spending about 12.79 times as much as they spent in 1950.

State and local governments, of course, depend entirely on the economy for their resources. As I put it when I produced the original chart, this is like a household whose income has grown about 6-fold but whose spending habits have grown nearly 13-fold.
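For readers who want to see the arithmetic behind “times its 1950 size,” here is a minimal sketch of the calculation under stated assumptions: deflate each nominal series with a price index, then divide the latest year by 1950. The dollar figures and index values are placeholders, not the actual series used in the chart.

```python
# Sketch of the "times its 1950 size" calculation: convert nominal dollars to
# constant (1950) dollars with a price index, then divide by the 1950 value.
# All numbers below are placeholders, not the actual GDP or expenditure data.

def to_1950_dollars(nominal, index_now, index_1950):
    """Deflate a nominal dollar figure to 1950 dollars."""
    return nominal * index_1950 / index_now

# Placeholder inputs
gdp_1950, gdp_now = 300e9, 16_700e9           # nominal GDP, 1950 and latest year
spend_1950, spend_now = 28e9, 2_900e9         # nominal state & local expenditures
deflator_1950, deflator_now = 13.0, 106.0     # price index, 1950 and latest year

gdp_multiple = to_1950_dollars(gdp_now, deflator_now, deflator_1950) / gdp_1950
spend_multiple = to_1950_dollars(spend_now, deflator_now, deflator_1950) / spend_1950

print(f"Real GDP multiple since 1950:      {gdp_multiple:.2f}x")
print(f"Real spending multiple since 1950: {spend_multiple:.2f}x")
```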

Does an income tax make people work less?

Harry Truman famously asked for a one-handed economist since all of his seemed reluctant to decisively answer anything: “on the one hand,” they’d tell him, but “on the other…”

When asked whether an income tax makes people work more or less, the typical economist gives the sort of answer that would have grated on Truman like a bad music critic.

If, however, we change the question slightly and make it more realistic, it’s possible to give a decisive answer to the question. Income taxes do reduce overall labor supply. This is something that economists James Gwartney and Richard Stroup explained in the pages of the American Economic Review some 30 years ago. And last week, the CBO’s much-discussed report on the ACA and labor-force participation illustrated their point nicely.


To solve a problem, first understand its cause

A key principle of smart regulation is that regulators should first understand the nature, extent, and cause of the problem they are trying to solve before they write a regulation. (It is even the first principle of regulation listed in Executive Order 12866, which governs regulatory analysis and review in the executive branch.)

On the federal level, this principle is often honored more in the breach than in the observance. For a good example of what can happen on the state and local level when this principle is ignored, one need look no further than a recent study on the costs of excessive alcohol consumption funded by the Centers for Disease Control.

[Image: “1655 barrel for drunk.” Credit: Christy K. Robinson]

The study estimates that binge drinking is responsible for about 76 percent of the social costs of excessive drinking, and underage drinking is responsible for another 11 percent. (“Binge drinking” was defined as 5 or more drinks on the same occasion for a man, and 4 or more on the same occasion for a woman. All underage drinking was classified as excessive since it’s illegal.)

Taking these findings at face value, the logical conclusion is that the most sensible policies to reduce the costs of excessive alcohol consumption would target binge drinkers and underage drinkers. Unfortunately, the authors recommend a grab-bag of policies that would penalize anyone who consumes alcohol — not just binge drinkers and underage drinkers.

They refer the reader to the Centers for Disease Control’s “Guide to Community Preventive Services,” which endorses policies like increased alcohol taxes, limitations on days alcohol can be sold, limiting sale hours, limiting the density of retail outlets, and government ownership of retail outlets. The only policies recommended that specifically target binge drinkers or underage drinkers are electronic screening and intervention, and enhanced enforcement of laws prohibiting sales to minors.

Two other initiatives mentioned in the Community Guide that sound like they might help — enhanced enforcement of “overservice” laws and responsible beverage service training — are not recommended because an insufficient number of studies have been done to test their effectiveness. If the CDC took the principles of sound regulatory analysis seriously, it would focus more resources on researching such targeted interventions and less on advocating broad-brush alcohol control policies that penalize citizens who have done no wrong.

Most readers can probably recall a bad experience with “group punishment” in grade school, when an entire classroom or grade got blamed for the misbehavior of a few miscreants. Many of the CDC’s preferred alcohol policies constitute group punishment on a massive scale, applied to adults. A careful focus on the root causes of the problem would help government avoid punishing everyone for the misdeeds of a few.

The Economics of Regulation Part 2: Quantifying Regulation

I recently wrote about a new study from economists John Dawson and John Seater that shows that federal regulations have slowed economic growth in the US by an average of 2% per year.  The study was novel and important enough from my perspective that it deserved some detailed coverage.  In this post, which is part two of a three part series (part one here), I go into some detail on the various ways that economists measure regulation.  This will help put into context the measure that Dawson and Seater used, which is the main innovation of their study.  The third part of the series will discuss the endogenous growth model in which they used their new measure of regulation to estimate its effect on economic growth.

From the macroeconomic perspective, the main policy interventions—that is, instruments wielded in a way to change individual or firm behavior—used by governments are taxes and regulations.  Others might include spending/deficit spending and monetary policy in that list, but a large percentage of economics studies on interventions intended to change behavior have focused on taxes, for one simple reason: taxes are relatively easy to quantify.  As a result, we know a lot more about taxes than we do about regulations, even if much of that knowledge is not well implemented.  Economists can calculate changes to marginal tax rates caused by specific policies, and by simultaneously tracking outcomes such as changes in tax revenue and the behavior of taxed and untaxed groups, deduce specific numbers with which to characterize the consequences of those taxation policies.  In short, with taxes, you have specific dollar values or percentages to work with. With regulations, not so much.

In fact, the actual burden of regulation is notoriously hidden, especially when directly compared to taxes that attempt to achieve the same policy objective.  For example, since fuel economy regulations (called Corporate Average Fuel Economy, or CAFE, standards) were first implemented in the 1970s, it has been broadly recognized that the goal of reducing gasoline consumption could be more efficiently achieved through a gasoline tax rather than vehicle design or performance standards.  However, it is much easier for a politician to tell her constituents that she will make auto manufacturers build more fuel-efficient cars than to tell constituents that they now face higher gasoline prices because of a fuel tax.  In econospeak, taxes are salient to voters—remembered as important and costly—whereas regulations are not. Even when comparing taxes to taxes, some, such as property taxes, are apparently more salient than others, such as payroll taxes, as this recent study shows.  If some taxes that workers pay on a regular basis are relatively unnoticed, how much easier is it to hide a tax in the form of a regulation?  Indeed, it is arguably because regulations are uniquely opaque as policy instruments that all presidents since Jimmy Carter have required some form of benefit-cost analysis on new regulations prior to their enactment (note, however, that the average quality of those analyses is astonishingly low).  Of course, it is for these same obfuscatory qualities that politicians seem to prefer regulations to taxes.

Despite the inherent difficulty, scholars have been analyzing the consequences of regulation for decades, leading to a fairly large literature. Studies typically examine the causal effect of a unique regulation or a small collection of related regulations, such as air quality standards stemming from the Clean Air Act.  Compared to the thousands of actual regulations that are in effect, the regulation typically studied is relatively limited in scope, even if its effects can be far-reaching.  Because most studies on regulation focus only on one or perhaps a few specific regulations, there is a lot of room for more research to be done.  Specifically, improved metrics of regulation, especially metrics that can be used either in multi-industry microeconomic studies or in macroeconomic contexts, could help advance our understanding of the overall effect of all regulations.

With that goal in mind, some attempts have been made to more comprehensively measure regulation through the use of surveys and legal studies.  The most famous example is probably the Doing Business index from the World Bank, while perhaps the most widely used in academic studies is the Indicators of Product Market Regulation from the OECD.  Since 2003, the World Bank has produced the Doing Business index, which combines survey data with observational data into a single number designed to tell how much it would cost to “do business,” e.g. set up a company, get construction permits, get electricity, register property, etc., in a set of 185 countries.  The Doing Business index is perhaps most useful for identifying good practices to follow in early to middle stages of economic development, when property rights and other beneficial institutions can be created and strengthened.

The OECD’s Indicators of Product Market Regulation database focuses more narrowly on types of regulation that are more relevant to developed economies.  Specifically, the original OECD data considered only product market and employment protection regulations, both of which are measured at “economy-wide” level—meaning the OECD measured whether those types of regulations existed in a given country, regardless of whether they were applicable to only certain individuals or particular industries.  The OECD later extended the data by adding barriers to entry, public ownership, vertical integration, market structure, and price controls for a small subset of broadly defined industries (gas, electricity, post, telecommunications, passenger air transport, railways, and road freight).  The OECD develops its database by surveying government officials in several countries and aggregating their responses, with weightings, into several indexes.

By design, the OECD and Doing Business approaches do a good job of relating obscure macroeconomic data to actual people and businesses.  Consider the chart below, taken from the OECD description of how the Product Market Regulation database is created.  As I wrote last week and as the chart shows, the rather sanitized term “product market regulation” actually consists of several components that are directly relevant to a would-be entrepreneur (such as the opacity of a country’s licenses and permits system and administrative burdens for sole proprietorships) and to a consumer (such as price controls and barriers to foreign direct investment).  You can click on the chart below to see some of the other components that are considered in OECD’s product market regulation indicator.

[Figure: OECD product market regulation indicator tree structure]

Still, there are two major shortcomings of the OECD data (shortcomings that are equally applicable to similar indexes produced by the World Bank and others).  First, they cover relatively short time spans.  Changes in regulatory policy often require several years, if not decades, to implement, so the results of these changes may not be reflected in short time frames (to a degree, this can be overcome by measuring regulation for several different countries or different industries, so that results of different policies can be compared across countries or industries).

Second, and in my mind, more importantly, the Doing Business Index is not comprehensive.  Instead, it is focused on a few areas of regulation, and then only on whether regulations exist—not how complex or burdensome they are.  As Dawson and Seater explain:

[M]easures of regulation [such as the Doing Business Index and the OECD Indicators] generally proceed by constructing indices based on binary indicators of whether or not various kinds of regulation exist, assigning a value of 1 to each type of regulation that exists and a 0 to those that do not exist.  The index then is constructed as a weighted sum of all the binary indicators.  Such measures capture the existence of given types of regulation but cannot capture their extent or complexity.
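To make that description concrete, here is a minimal sketch of an index built as a weighted sum of binary indicators. The regulation categories and weights are invented for illustration; they are not the OECD’s or the World Bank’s actual components.

```python
# Sketch of a regulation index built as a weighted sum of binary indicators,
# as described in the quoted passage.  Categories and weights are invented.

indicators = {                 # 1 = this type of regulation exists, 0 = it does not
    "price_controls": 1,
    "entry_barriers": 1,
    "state_ownership": 0,
    "licensing_requirements": 1,
}

weights = {                    # hypothetical weights that sum to 1
    "price_controls": 0.3,
    "entry_barriers": 0.3,
    "state_ownership": 0.2,
    "licensing_requirements": 0.2,
}

index = sum(weights[k] * indicators[k] for k in indicators)
print(f"Regulation index: {index:.2f}")  # records existence, but not extent or complexity
```

Note that such an index moves only when a type of regulation appears or disappears entirely; making an existing rule twice as long or twice as burdensome leaves it unchanged, which is exactly the limitation Dawson and Seater describe.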

Dawson and Seater go out of their way to mention at least twice that the OECD dataset ignores environmental and occupational health and safety regulations.  Theirs is a good point – in the US, at least, environmental regulations from the EPA alone accounted for about 15% of all restrictions published in federal regulations in 2010, and that percentage has consistently grown for the past decade, as can be seen in the graph below (created using data from RegData).  Occupational health and safety regulations take up a significant portion of the regulatory code as well.

[Figure: Environmental regulations as a percentage of total federal restrictions (RegData)]
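For contrast with page counts and binary indexes, a RegData-style measure counts restrictive terms in the regulatory text itself. The sketch below illustrates the idea on an invented snippet; it is a toy, not RegData’s actual pipeline.

```python
# Toy sketch of counting "restrictions" in regulatory text, in the spirit of
# RegData: tally restrictive terms like "shall", "must", "may not",
# "prohibited", and "required".  The snippet is invented, not actual CFR text.
import re

sample_text = """
The facility operator shall maintain records for five years.  Discharges into
navigable waters are prohibited unless a permit has been obtained.  Operators
must submit annual reports and may not exceed the emission limits herein.
A written safety plan is required for each site.
"""

restrictive_terms = ["shall", "must", "may not", "prohibited", "required"]
counts = {term: len(re.findall(r"\b" + re.escape(term) + r"\b", sample_text, re.IGNORECASE))
          for term in restrictive_terms}

print(counts)                                  # per-term tallies
print("Total restrictions:", sum(counts.values()))
```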

In contrast, one could measure all federal regulations, not just a few select types.  But then the process requires some usage of the actual legal texts containing regulations.  There have been a few attempts to create all-inclusive time series measures of regulation based on the voluminous legal documents detailing regulatory activity at the federal level.  For the most part, studies have relied on the Federal Register, the government’s daily journal of newly proposed and final regulations.  For example, many scholars have counted pages in the Federal Register to test for the existence of the midnight regulations phenomenon—the observation that the administrations of outgoing presidents seem to produce abnormally large numbers of regulations during the lame-duck period.

There are problems with using the Federal Register to measure regulation (I say this despite having used it in some of my own papers).  First and foremost, the Federal Register includes deregulatory activity.  When a regulatory agency eliminates words, paragraphs, or even entire chapters from the CFR, the agency has to notify the public of the changes.  The agency does this by printing a notice of proposed rulemaking in the Federal Register that explains the agency’s intentions.  Then, once the public has had adequate time to comment on the agency’s proposed actions, the agency has to publish a final rule in the Federal Register—another set of pages that detail the final actions the agency is taking.  Obviously, if one is counting pages published in the Federal Register and using that as a proxy for the growth of regulation, deregulatory activity that produces positive page counts would lead to incorrect measurements.

Furthermore, pages published in the Federal Register may be a biased measure because the number of pages associated with individual rulemakings has increased over time as acts of Congress or executive orders have required more analyses. In his Ten-Thousand Commandments series, Wayne Crews mitigates this drawback to some degree by focusing only on pages devoted to final rules.  The Ten-Thousand Commandments series keeps track of both the annual number of final regulations published in the Federal Register and the annual number of Federal Register pages devoted to final regulations.

Dawson and Seater instead rely on the Code of Federal Regulations (CFR), another set of legal documents related to federal regulations. Actually, the CFR would be better described as the books that contain the actual text of regulations in effect each year.  When a regulatory agency creates new regulations, or alters existing regulations, those changes are reflected in the next publication of the CFR.  Dawson and Seater collected data on the total number of pages in the CFR in each year from 1949 to 2005. I’ve graphed their data below.

[Figure: Total pages in the Code of Federal Regulations, 1949–2005 (Dawson and Seater data)]

*Dawson and Seater exclude Titles 1 – 3 and 32 from their total page counts because they argue that those Titles do not contain regulation, so comparing this graph with page count graphs produced elsewhere will show some discrepancies.

Perhaps the most significant advantage of the CFR over counting pages in the Federal Register is that it allows for decreases in regulations. In addition, using the CFR arguably has several advantages over indexes like the OECD product market regulation index and the World Bank Doing Business index.  First, using the CFR captures all federal regulation, not just a select few types.  Dawson and Seater point out:

Incomplete coverage leads to two problems: (1) omitted variables bias, and, in any time series study, (2) divergence between the time series behavior of subsets of regulation on the one hand and of total regulation on the other.

In other words, ignoring potentially important variables (such as environmental regulations) can cause estimates of the effect of regulation to be wrong.

Second, the number of pages in the CFR may reflect the complexity of regulations to some degree.  In contrast, the index metrics of regulation typically only consider whether a regulation exists—a binary variable equal to 1 or 0, with nothing in between.  Third, the CFR offers a long time series – almost three times as long as the OECD index, although it is shorter than the Federal Register time series.

Of course, there are downsides to using the CFR.  For one, it is possible that legal drafting standards and language norms have changed over the 57 years, which could introduce bias to their measure (Dawson and Seater brush this concern aside, but not convincingly in my opinion).  Second, the CFR is limited to only one country—the United States—whereas the OECD and World Bank products cover many countries.  Data on multiple countries (or multiple industries within a country, like RegData offers) allow comparisons of real-world outcomes and how they respond to different regulatory treatments.  In contrast, Dawson and Seater are limited to constructing a “counterfactual” economy – one that their model predicts would exist had regulations stayed at the level they were in 1949.  In my next post, I’ll go into more detail on the model they use to do this.

Should Illinois be Downgraded? Credit Ratings and Mal-Investment

No one disputes that Illinois’s pension systems are in seriously bad condition with large unfunded obligations. But should this worry Illinois bondholders? New Mercatus research by Marc Joffe of Public Sector Credit Solutions finds that recent downgrades of Illinois’s bonds by credit ratings agencies aren’t merited. He models the default risk of Illinois and Indiana based on a projection of these states’ financial position. These findings are put in the context of the history of state default and the role the credit ratings agencies play in debt markets. The influence of credit ratings agencies in this market is the subject of a guest blog post by Marc today at Neighborhood Effects.

Credit Ratings and Mal-Investment

by Marc Joffe

Prices play a crucial role in a market economy because they provide signals to buyers and sellers about the availability and desirability of goods. Because prices coordinate supply and demand, they enabled the market system to triumph over Communism – which lacked a price mechanism.

Interest rates are also prices. They reflect investor willingness to delay consumption and take on risk. If interest rates are manipulated, serious dislocations can occur. As both Horwitz and O’Driscoll have discussed, the Fed’s suppression of interest rates in the early 2000s contributed to the housing bubble, which eventually gave way to a crash and a serious financial crisis.

Even in the absence of Fed policy errors, interest rate mispricing is possible. For example, ahead of the financial crisis, investors assumed that subprime residential mortgage backed securities (RMBS) were less risky than they really were. As a result, subprime mortgage rates did not reflect their underlying risk and thus too many dicey borrowers received home loans. The ill effects included a wave of foreclosures and huge, unexpected losses by pension funds and other institutional investors.

The mis-pricing of subprime credit risk was not the direct result of Federal Reserve or government intervention; instead, it stemmed from investor ignorance. Since humans lack perfect foresight, some degree of investor ignorance is inevitable, but it can be minimized through reliance on expert opinion.

In many markets, buyers rely on expert opinions when making purchase decisions. For example, when choosing a car we might look at Consumer Reports. When choosing stocks, we might read investment newsletters or review reports published by securities firms – hopefully taking into account potential biases in the latter case. When choosing fixed income, most large investors rely on credit rating agencies.

The rating agencies assigned what ultimately turned out to be unjustifiably high ratings to subprime RMBS. This error and the fact that investors relied so heavily on credit rating agencies resulted in the overproduction and overconsumption of these toxic securities. Subsequent investigations revealed that the incorrect rating of these instruments resulted from some combination of suboptimal analytical techniques and conflicts of interest.

While this error occurred in a market context, the institutional structure of the relevant market was the unintentional consequence of government interventions over a long period of time. Rating agencies first found their way into federal rulemaking in the wake of the Depression. With the inception of the FDIC, regulators decided that expert third-party evaluations were needed to ensure that banks were investing depositor funds wisely.

The third parties regulators chose were the credit rating agencies. Prior to receiving this federal mandate, and for a few decades thereafter, rating agencies made their money by selling manuals to libraries and institutional investors. The manuals included not only ratings but also large volumes of facts and figures about bond issuers.

After mid-century, the business became tougher with the advent of photocopiers. Eventually, rating agencies realized (perhaps implicitly) that they could monetize their federally granted power by selling ratings to bond issuers.

Rather than revoking their regulatory mandate in the wake of this new business model, federal regulators extended the power of incumbent rating agencies – codifying their opinions into the assessments of the portfolios of non-bank financial institutions.

With the growth in fixed income markets and the inception of structured finance over the last 25 years, rating agencies became much larger and more profitable. Due to their size and due to the fact that their ratings are disseminated for free, rating agencies have been able to limit the role of alternative credit opinion providers. For example, although a few analytical firms market their insights directly to institutional investors, it is hard for these players to get much traction given the widespread availability of credit ratings at no cost.

Even with rating agencies being written out of regulations under Dodd-Frank, market structure is not likely to change quickly. Many parts of the fixed income business display substantial inertia and the sheer size of the incumbent firms will continue to make the environment challenging for new entrants.

Regulatory involvement in the market for fixed income credit analysis has undoubtedly had many unintended consequences, some of which may be hard to ascertain in the absence of unregulated markets abroad. One fairly obvious negative consequence has been the stunting of innovation in the institutional credit analysis field.

Despite the proliferation of computer technology and statistical research methods, credit rating analysis remains firmly rooted in its early 20th century origins. Rather than estimate the probability of default or the expected loss on a credit instrument, rating agencies still provide their assessments in the form of letter grades that have imprecise definitions and can easily be misinterpreted by market participants.

Starting with the pioneering work of Beaver and Altman in the 1960s, academic models of corporate bankruptcy risk have become common, but these modeling techniques have had limited impact on rating methodology.

Worse yet, in the area of government bonds, very little academic or applied work has taken place. This is especially unfortunate because government bond ratings frame the fiscal policy debate. In the absence of credible government bond ratings, we have no reliable way of estimating the probability that any government’s revenue and expenditure policies will lead to a socially disruptive default in the future. Further, in the absence of credible research, there is great likelihood that markets inefficiently price government bond risk – sending confusing signals to policymakers and the general public.

Given these concerns, I am pleased that the Mercatus Center has provided me the opportunity to build a model for Illinois state bond credit risk (as well as a reference model for Indiana). This is an effort to apply empirical research and Monte Carlo simulation techniques to the question of how much risk Illinois bondholders actually face.
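Joffe’s actual model is described in the linked research; purely to illustrate the Monte Carlo idea, the sketch below simulates many random revenue paths and records how often a stylized solvency threshold is breached. Every parameter is invented and the mechanics are deliberately simplified.

```python
# Illustrative Monte Carlo sketch of default risk (NOT the actual Mercatus model).
# Simulate random revenue-growth paths and flag a "default" whenever fixed debt
# service exceeds a chosen share of revenue.  All parameters are invented.
import random

def simulated_default_probability(trials=100_000, years=30,
                                  revenue=35.0,          # starting revenue, $ billions
                                  debt_service=1.8,      # annual debt service, $ billions
                                  mean_growth=0.03, growth_volatility=0.04,
                                  danger_ratio=0.25):    # "default" if service/revenue > 25%
    defaults = 0
    for _ in range(trials):
        r = revenue
        for _ in range(years):
            r *= 1 + random.gauss(mean_growth, growth_volatility)
            if debt_service / r > danger_ratio:
                defaults += 1
                break
    return defaults / trials

print(f"Simulated 30-year default probability: {simulated_default_probability():.4%}")
```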

While readers may not like my conclusion – that Illinois bonds carry very little credit risk – I hope they recognize the benefits of constructing, evaluating and improving credit models for systemically important public sector entities like our largest states. Hopefully, this research will contribute to a discussion about how we can improve credit rating assessments.


Happy Tax Freedom Day

Today, the Tax Foundation notes that Americans have worked enough to pay off their 2013 taxes, leaving the rest of the year’s earnings available for private consumption and investment:

Tax Freedom Day is the day when the nation as a whole has earned enough money to pay its total tax bill for the year. A vivid, calendar based illustration of the cost of government, Tax Freedom Day divides all federal, state, and local taxes by the nation’s income. In 2013, Americans will pay $2.76 trillion in federal taxes and $1.45 trillion in state taxes, for a total tax bill of $4.22 trillion, or 29.4 percent of income. April 18 is 29.4 percent, or 108 days, into the year.

Because of the increase in payroll taxes and in income taxes on high-income earners as part of the fiscal cliff deal, Tax Freedom Day falls three days later this year than it did last year. While many limited government advocates will view this tax burden as too large, the Tax Foundation website points out that the $4.22 trillion we will pay in taxes this year will not cover the full cost of government spending. Including this year’s deficit spending, which is a tax on future earnings, would push Tax Freedom Day out to May 9th.
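The date itself is simple arithmetic on the figures in the quoted passage; a short sketch of the calculation is below, using only the numbers quoted above.

```python
# Tax Freedom Day arithmetic using the figures quoted above.
import math
from datetime import date, timedelta

total_taxes = 4.22e12          # federal + state/local taxes, 2013 (from the quote)
tax_share_of_income = 0.294    # taxes as a share of the nation's income (from the quote)

implied_national_income = total_taxes / tax_share_of_income   # roughly $14.4 trillion
days_to_pay = math.ceil(tax_share_of_income * 365)            # 107.3 -> the 108th day

tax_freedom_day = date(2013, 1, 1) + timedelta(days=days_to_pay - 1)
print(f"Implied national income: ${implied_national_income/1e12:.2f} trillion")
print(f"Day {days_to_pay} of the year falls on {tax_freedom_day:%B %d}")  # April 18
```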

What is a loophole?

In the Pathology of Privilege, I had this to say about “accelerated depreciation,” an artifact of the corporate tax code that many consider to be a loophole:

If income is the base of taxation, it makes sense to allow firms to “write off” expenses necessary to earn that income. For capital items that wear out over time, expenses should be written off as the items wear out. Some provisions of the tax code, however, permit firms to write off big capital expenses in one year rather than gradually as the items depreciate. These provisions privilege those firms that happen to make large capital purchases.

The idea that “accelerated” depreciation is a loophole can be traced back to Stanley Surrey, the Harvard law professor whose work in the 1950s, 60s, and 70s influenced many tax reformers, including Senator Bill Bradley and officials in the Reagan Treasury Department. When the Congressional Joint Committee on Taxation began cataloguing loopholes in its annual “tax expenditure” list in 1972, it too called accelerated depreciation a loophole. Here is how Leonard Burman and Joel Slemrod describe the issue in Taxes in America:

Why not let businesses write off their investments right away? It would make the process of determining taxable income easier, as businesses would no longer have to keep track of depreciation schedules for long-lived capital goods. The problem is that it would mean abandoning the attempt to tax business income, or at least part of it. Only a small fraction of the cost of a factory that will last twenty years is really a cost of earning income this year. (p. 72, emphasis original).

This thinking persuaded me to list accelerated depreciation alongside other tax loopholes as a privilege. In conversations with friends and colleagues over the last few weeks, however, I’ve come to change my mind on this one. Why?

To begin with, it is not obvious that generality requires corporate taxation at all. Though much maligned, Mitt Romney’s famous statement that “corporations are people” is—in some form or another—taught in just about every economics 101 course. When a government levies a tax on a corporation, some combination of the following three groups pays it: customers, investors, or employees. All three, it goes without saying, are humans. Moreover, as every student of economics knows, the statutory incidence of a tax is not the same as its economic incidence. Even if legislators earnestly want investors (or managers) to bear 100 percent of the tax, it is supply and demand, and not legislator intent, which ultimately determines who pays. Each of these groups is already taxed in some other way, through sales, payroll, income, or capital gains taxes. So when a government levies a corporate income tax, it is imposing an additional levy on someone, and this, by itself, is a violation of generality.

Second, if we are going to tax businesses, it isn’t clear that the tax base should be corporate income. Note that Burman and Slemrod say that immediate expensing would mean abandoning “the attempt to tax business income.” That’s because it would essentially turn the corporate income tax into a corporate consumption tax. And that may be a good thing. Capital taxation is notoriously inefficient. This is one reason why Robert Hall and Alvin Rabushka permitted immediate 100 percent expensing in their famous flat consumption tax (which, by the way, would apply to all businesses, not just corporations).

Setting these concerns aside, doesn’t accelerated depreciation privilege capital-intensive firms over labor-intensive firms? This is an argument often made against accelerated depreciation. But if you think this through, it’s not a particularly strong argument. Labor-intensive firms (appropriately) get to write off the salaries that they pay their employees as they make payroll. That makes sense. So long as we are taxing income, we shouldn’t penalize those that have to incur expenses in order to earn that income. Why shouldn’t capital-intensive firms also get to expense equipment when they cut the check for the equipment? The rate at which equipment breaks down bears no relation to the expense of buying it. In fact, one could go a step further and argue that any measure requiring capital-intensive firms to write off their expenses over a long period of time amounts to a privilege to labor-intensive firms.
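One way to see what is at stake in the timing of write-offs is to compare the present value of the tax deductions under immediate expensing with their present value under a multi-year depreciation schedule. The sketch below uses an invented machine cost, tax rate, discount rate, and 20-year straight-line schedule; the specific numbers are assumptions, not anything from the sources cited above.

```python
# Why write-off timing matters: present value of the tax deductions from a
# capital purchase under immediate expensing vs. 20-year straight-line
# depreciation.  Cost, tax rate, and discount rate are invented assumptions.

cost = 1_000_000        # machine purchased today
tax_rate = 0.35
discount_rate = 0.05
years = 20

# Immediate expensing: the entire deduction arrives in year 0.
pv_expensing = tax_rate * cost

# Straight-line depreciation: equal deductions spread over 20 years.
annual_deduction = cost / years
pv_depreciation = sum(tax_rate * annual_deduction / (1 + discount_rate) ** t
                      for t in range(1, years + 1))

print(f"PV of deductions, immediate expensing:   ${pv_expensing:,.0f}")
print(f"PV of deductions, 20-year straight line: ${pv_depreciation:,.0f}")
```

On these assumed numbers the stretched-out schedule is worth roughly 38 percent less in present value, which is the sense in which long depreciation schedules penalize capital-intensive firms relative to firms that can deduct their costs as they are incurred.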

I am always open to new arguments and in light of my changed view, I’ve decided to update the paper and remove these lines.

The battle of the taxes

In my last post, I discussed several exciting tax reforms that are gaining support in a handful of states. In an effort to improve the competitiveness and economic growth of these states, the plans would lower or eliminate individual and corporate income taxes and replace these revenues with funds raised by streamlined sales taxes. Since I covered this topic, legislators in two more states, Missouri and New Mexico, have demonstrated interest in adopting this type of overhaul of their state tax systems.

At the same time, policymakers in other states across the country are likewise taking advantage of their majority status by pushing their preferred tax plans through state legislatures and state referendums. These plans provide a sharp contrast with those proposed in the states that I discussed in my last post; rather than prioritizing lower income tax burdens, leaders in these states hope to improve their fiscal outlooks by increasing income taxes.

Here’s what some of these states have in the works:

  • Massachusetts: Gov. Deval L. Patrick surprised his constituents last month during his State of the State address by calling for a 1 percentage point increase in state income tax rates while simultaneously slashing state sales taxes from 6.25% to 4.5%. Patrick defended these tax changes on the grounds of increasing investments in transportation, infrastructure, and education while improving state competitiveness. Additionally, the governor called for a doubling of personal exemptions to soften the blow of the income tax increases on low-income residents.
  • Minnesota: Gov. Mark Dayton presented a grab bag of tax reform proposals when he revealed his two-year budget plan for the state of Minnesota two weeks ago. In an effort to move his state away from a reliance on property taxes to generate revenue, Dayton has proposed to raise income taxes on the top 2% of earners within the state. At the same time, he hopes to reduce property tax burdens, lower the state sales tax from 6.875% to 5.5%, and cut the corporate tax rate by 14%.
  • Maryland: Last May, Maryland Gov. Martin O’Malley called a special legislative session to balance the state budget and avoid scheduled cuts of $500 million in state spending on education and state personnel. Rather than accepting a “cuts-only” approach to balancing state finances, O’Malley pushed hard for income tax hikes on Marylanders who earned more than $100,000 a year and for a new top rate of 5.75% on income over $250,000 a year. These tax hikes were signed into law after the session convened last year and took effect that June.
  • California: At the urging of Gov. Jerry Brown, California voters decided to raise income taxes on their wealthiest residents and increase their state sales tax from 7.25% to 7.5% by voting in favor of Proposition 30 last November. In a bid to put an end to years of deficit spending and finally balance the state budget, Brown went to bat for the creation of four new income tax brackets for high-income earners in California. There is some doubt that these measures will actually generate the revenues that the governor is anticipating due to an exodus of taxpayers fleeing the new 13.3% income tax and uncertain prospects for economic growth within the state. 

It is interesting that these governors have defended their proposals using some of the same rhetoric that governors and legislators in other states used to defend their plans to lower income tax rates. All of these policymakers believe that their proposals will increase competitiveness, improve economic growth, and create jobs for their states. Can both sides be right at the same time?

Economic intuition suggests that policymakers should create a tax system that imposes the lowest burdens on the engines of economic growth. It makes sense, then, for states to avoid taxing individual and corporate income so that these groups have more money to save and invest. Additionally, increasing marginal tax rates on income and investment limits the returns to these activities and causes people to work and invest less. Saving and investment, not consumption, are the drivers of economic growth. Empirical studies have demonstrated that raising marginal income tax rates has damaging effects on economic growth. Policymakers in Massachusetts, Minnesota, Maryland, and California may have erred in their decisions to shift taxation towards income and away from consumption. The economies of these states may see lower rates of growth as a result.

In my last post, I mused that the successes of states that have lowered or eliminated their state income taxes may prompt other states to adopt similar reforms. If the states that have taken the opposite approach by raising income taxes see slowed economic growth as a result, they will hopefully serve as a cautionary tale to other states that might be considering these proposals.