Category Archives: Public Choice

Does statehood trigger Leviathan? A case study of New Mexico and Arizona

I was recently asked to review “The Fiscal Case Against Statehood: Accounting for Statehood in New Mexico and Arizona,” by Dr. Stephanie Moussalli, for the Economic History Association.

I highly recommend the book for scholars of public choice, economic history and accounting/public finance.

As one who spends lots of time reading state and local financial reports in the context of public choice, I was very impressed with Moussalli’s insights and tenacity. In her research she dives into the historical accounts of territorial New Mexico and Arizona to answer two questions. First, did statehood (which arrived in 1912) lead to a “Leviathan effect,” causing government spending to grow? And second, did accounting improve as a result of statehood?

The answer to both questions is yes. Statehood did trigger a Leviathan effect for these Southwestern states – findings that have implications for current policy, in particular the sovereignty debates surrounding Puerto Rico and Quebec. And the accounts did improve as a result of statehood, an outcome that holds even after controlling for the fact that statehood arrived at the height of the Progressive era and its drive for public accountability.

A provocative implication of her findings cuts against the received wisdom: are the improved accounting techniques that come with statehood a necessary tool for more ambitious spending programs? Does accounting transparency come with a price?

What makes this an engaging study is Moussalli’s persistence and creativity in bringing light to a literature void. She stakes out new research territory, and brings a public choice-infused approach to what might otherwise be bland accounting records. She rightly sees in the historical ledgers the traces of the political and social choices of individuals; and the inescapable record of their decisions. In her words, “people say one thing and do another.” The accounts speak in a way that historical narrative does not.

For more, read the review.


The farm bill: a lesson in government failure

As a consumer and a taxpayer, I find the farm bill a monstrosity. But as someone who teaches public finance and public choice economics, I find it a great teaching tool.

Want to explain the concept of deadweight loss? The farm bill’s insurance subsidies are a perfect illustration of the concept. They transfer resources from taxpayers to farm producers, but taxpayers lose more than producers gain.

Want to illustrate the folly of price controls? Sugar supports, which force Americans to pay twice what global consumers pay, are a fine illustration.

Want to explain Gordon Tullock’s transitional gains trap? Walk your students through the connection between subsidies and land prices: much of the value of the subsidy is “capitalized” into the price of farmland, meaning that new farmers have to pay exorbitant prices to buy an asset that entitles them to subsidies. This means new farmers are no better off as a result of the subsidies. As David Friedman puts it, “the government can’t even give anything away.” The only ones to gain are those who owned the land when the laws were created. But those who paid for the land with the expectation that it would entitle them to subsidies would howl if politicians tried to do right by consumers and taxpayers and get rid of the privileges.
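The capitalization logic can be sketched with a toy calculation (all numbers are hypothetical, chosen only to make the mechanics concrete): a perpetual subsidy attached to land is worth its present value as a perpetuity, and a competitive land market bids the price up by exactly that amount, leaving nothing on the table for later buyers.

```python
# Sketch of Tullock's transitional gains trap with hypothetical numbers:
# a perpetual subsidy tied to land gets capitalized into the land's price.

subsidy_per_year = 100.0   # hypothetical annual subsidy per acre ($)
discount_rate = 0.05       # hypothetical discount rate

# Present value of the subsidy stream (a perpetuity): S / r
capitalized_value = subsidy_per_year / discount_rate  # $2,000 per acre

# The owner at the time the law passes pockets the capitalized value
# as a one-time windfall in the land price.
windfall_to_original_owner = capitalized_value

# A new farmer pays the inflated land price and then collects the subsidy:
# the premium paid exactly offsets the value of the subsidy stream.
net_gain_to_new_farmer = capitalized_value - capitalized_value  # 0.0

print(f"price premium per acre: ${capitalized_value:,.0f}")
print(f"net gain to a buyer after the law passes: ${net_gain_to_new_farmer:,.0f}")
```

This is why repeal is so hard: the new farmer gained nothing from the subsidy, yet stands to lose the full capitalized value if it disappears.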

Want to illustrate Mancur Olson’s theory of interest group formation? Look no further than sugar loans. Taxpayers loan about $1.1 billion to producers every year. Spread among 313 million of us, that is a cost of about $3.50 per taxpayer. And who benefits? Last year just three (!) firms received the bulk of these subsidies, each benefiting to the tune of $200 million. As Olson taught us long ago, the numerous and diffused losers face a significant obstacle in organizing in opposition to this while the small and concentrated winners have every incentive to get organized in support.
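The asymmetry of stakes in the paragraph above is worth working through explicitly. A quick sketch using the post’s own figures:

```python
# Diffuse costs vs. concentrated benefits, using the figures cited above.

total_loans = 1.1e9     # ~$1.1 billion in sugar loans per year
taxpayers = 313e6       # ~313 million Americans
benefit_per_firm = 200e6  # ~$200 million to each of the three big winners

cost_per_taxpayer = total_loans / taxpayers  # about $3.50 each

# The ratio of individual stakes explains who bothers to organize.
print(f"cost per taxpayer: ${cost_per_taxpayer:,.2f}")
print(f"benefit per firm:  ${benefit_per_firm:,.0f}")
print(f"stakes ratio: {benefit_per_firm / cost_per_taxpayer:,.0f} to 1")
```

A firm with $200 million on the line will hire lobbyists; a taxpayer out $3.50 won’t write a letter.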

Want to show how a “legislative logroll” works? Explain to your students that members representing dairy interests are statistically significantly more likely to vote in the interests of peanut farmers, and vice versa.

Want to explain Bruce Yandle’s bootlegger and Baptist theory of regulation? Note that catfish farmers want inspection of “foreign” catfish in the name of safety (the Baptist rationale) when the real reason for supporting additional inspections is self-interested protectionism (the bootlegger motivation).

This week’s lesson is on the power of agenda setters to block even modest reforms. Buried in the dross of privileges to wealthy farmers, both the Senate and the House versions of the bill contained a small glimmer of reform. Both included language capping the amount of subsidies that farmers and their spouses receive at “only” $250,000 per year. Right now, House and Senate conferees are working to reconcile the two versions of the Farm Bill passed this summer. And according to the latest reports, they plan to strip these modest reforms that were agreed to by both chambers.

Unfortunately, kids, this is how modern democracy works.

All votes are thrown away, so vote sincerely

Virginians go to the polls tomorrow to select a new governor. To be more precise: a modest minority of eligible voters—maybe about 35 percent—will go to the polls to select a new governor. The rest will stay at home, work late, or spend time with loved ones.

Seventh grade civics teachers and mothers everywhere wonder why more people don’t exercise their precious right to vote. Public choice economists wonder why anyone does.

Here is how I typically talk about the vote decision in my public choice classes. Perhaps it will help you think through how you’d like to spend your day tomorrow.

Let’s start with a simple model and add complexity as we go.

We are going to be “modeling” an individual’s decision to vote based on the idea that voting brings some satisfaction. We call this satisfaction “utility” and say that people will vote so long as the utility from voting is positive.

Utility from voting = a function of stuff

But what should we put on the right hand side? We know people vote so we know utility from voting is positive. What gives them this satisfaction from voting?

Let’s begin with the assumption that people vote because they want to affect the outcome, to make a difference. They derive some joy from the outcome of the election. Let’s call this joy B for benefit:

Utility from voting = B + other things

B is equal to the difference in benefits the voter obtains when one outcome beats another. B could be the benefit of a government job that the voter expects to have once his brother-in-law becomes mayor. Or it could be the benefit he expects to enjoy once the entire economy improves as a result of a candidate’s policies. It need not be personal benefits. It could also include the joy one might obtain from seeing more redistribution from one group to another. And, of course, it could be all of these. The point is that B captures the expected gain in utility from one outcome prevailing over the alternatives. Note that if you think that there is essentially no difference between the candidates, B will be zero since it represents the difference in benefits obtained from one outcome prevailing over the others.

We can say more. Voting is costly. When you vote, you have to give up time you could have spent working, reading public choice books, or playing with your children. Voting is also risky. You risk being selected for a boring jury pool, you risk getting your finger jammed in the voting machine, and you risk sustaining a life-threatening accident on the way to the polls. To account for these costs, we subtract a term called C:

Utility from voting = B – C + other things

But there is still more. Remember that the B term represents the difference in utility you obtain from seeing your preferred outcome prevail. But what if you expect to see your outcome prevail whether you vote or not (think: those who voted for Reagan in ’84)? Or what if you expect to see your outcome lose whether you vote or not (think: those who voted for Gary Johnson in 2012)? The point is that if you vote in order to make a difference, then your chance of making a difference is important in your decision to vote. So we should include that as part of the gross utility term. If P is the probability that your vote will make a difference then we can write:

Utility from voting = P*B – C

In words: the utility from voting is equal to the chance that one’s vote will make a difference, multiplied by the difference in benefits one expects to obtain from one outcome beating the others, minus whatever costs are incurred in the act of voting. So long as P*B > C, people will vote.

We call this the “instrumental theory of voting” because it describes a voter who uses her vote as an “instrument” to affect the outcome. Unfortunately, there is a problem with it.

It turns out that P is small, vanishingly small. By one estimate, the chance of casting a decisive vote in the 2008 presidential election was 1 in 60 million. Why so low? Your vote only makes a difference when the rest of the electorate is evenly split. In the case of a presidential election, you’d need your state to be the decisive state in the Electoral College and you’d need all the other citizens of your state to be exactly evenly divided. That’s not terribly likely. Of course, you have a greater chance of casting a decisive vote in a smaller election such as a governorship. But even in these cases, the probabilities are extraordinarily small. I estimate that there have been over 2,000 gubernatorial elections in the U.S. Not one has come down to a single vote (the closest was Washington state’s 2004 election, which came down to 133 votes, but even in this case no single vote could be said to have “made a difference”).
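To see why P is so tiny, consider a toy model (my own illustration, not the source of the 1-in-60-million estimate): if every other voter flipped a fair coin, your vote would be decisive only when they split exactly evenly, a binomial probability of roughly the square root of 2/(πn) for n voters.

```python
import math

def tie_probability_exact(n):
    """Exact chance that n fair-coin voters split exactly evenly."""
    if n % 2:
        return 0.0
    return math.comb(n, n // 2) / 2 ** n

def tie_probability_approx(n):
    """Stirling approximation: P(tie) ~ sqrt(2 / (pi * n))."""
    return math.sqrt(2 / (math.pi * n))

# Even in this best case for a decisive vote, the odds fall fast with n:
print(tie_probability_exact(1_000))       # ≈ 0.025
print(tie_probability_approx(1_000_000))  # ≈ 0.0008
```

And this coin-flip world is the most favorable case possible. Any systematic lean away from a 50/50 electorate collapses these probabilities by orders of magnitude, toward figures like the 1 in 60 million estimated for 2008.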

It turns out that the chance of sustaining a life-threatening accident on the way to the polls (an element of C) is actually greater than 1 in 60 million. So this leaves us with two conclusions:

  1. Perhaps B is so great that even when multiplied by a very tiny P, it is still enough to overcome C. In other words, perhaps people are willing to risk life and limb to obtain their preferred outcome in the election. This seems less than plausible.
  2. Perhaps people obtain some benefit from voting that has nothing to do with changing the outcome. In this case, we need to add something to our model, another gross benefits term that is not affected by P. Let’s call this “D”:

Utility from voting = P*B – C + D

Here, D represents some benefit from the act of voting that has nothing to do with changing the outcome. Different authors have suggested different ideas of what D might be. It might be the sense of pride one obtains in fulfilling one’s civic duty. Or it might be the joy one gets from “cheering” on one’s side even if it doesn’t make a difference. Think of fans at a football game. They “vote” by cheering even though they know that their own cheer won’t produce a victory. This is known as the “expressive theory of voting” since it captures the notion that people vote to express opinions, not necessarily to change the outcome.
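The full calculus can be sketched with made-up numbers. Only the 1-in-60-million P comes from the estimate cited above; the values of B, C, and D below are hypothetical, chosen merely to show how the terms interact.

```python
# A sketch of the voting calculus: utility = P*B - C + D.
# All dollar figures are hypothetical illustrations.

P = 1 / 60_000_000  # chance a single vote is decisive (2008 estimate)
B = 10_000.0        # hypothetical value of your preferred outcome winning
C = 25.0            # hypothetical cost of voting (time, risk, hassle)
D = 30.0            # hypothetical expressive benefit (civic pride, cheering)

instrumental = P * B - C      # the instrumental theory: P*B - C
expressive = P * B - C + D    # the expressive theory adds D

print(f"P*B = {P * B:.6f}")                          # ≈ 0.000167: negligible
print(f"instrumental utility = {instrumental:.2f}")  # negative: stay home
print(f"expressive utility  = {expressive:.2f}")     # positive: go vote
```

Notice that the outcome term P*B is a rounding error; whether you vote is decided almost entirely by the tug-of-war between C and D.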

Expressive Voter


So what is the implication of all of this? Some say that the implication is that it is irrational to vote. It is costly and has almost no chance of making a difference, which gives voting about the same ROI as a sacrifice to the rain gods.

I disagree.

There’s nothing dumb about someone feeling that they have a civic duty. There’s nothing irrational about cheering on a cause even if you know it won’t make a difference. It is no more irrational to vote than it is to cheer for the Redskins (okay, so maybe it’s a little irrational).

It is irrational, however, for someone to believe that their vote makes a difference. Despite what MTV says, not every vote matters. In fact, the only time any one vote “matters” is when the electorate is perfectly split. And in that case, the only vote that really matters is Anthony Kennedy’s.

Some of you may find this depressing. It means you don’t matter. Worse, it means that you and your fellow voters have little incentive to gather or to process information about the issues, which means we are all destined to be uninformed and irrational when we step into that voting booth.

But there is some good news here: freed from any concern that your minuscule vote will make a difference, you should feel free to vote your conscience. So if your conscience compels you to vote for a third (or fourth or fifth) party candidate, don’t listen to the nonsense that you are “throwing your vote away.” ALL votes (except for Anthony Kennedy’s) are thrown away. So, if you’d like to express your opinion, to cheer for a cause, then vote sincerely for the candidate you think is best.

Then go home and spend time with your loved ones.

Pension reform from California to Tennessee

Earlier this month Bay Area Rapid Transit (BART) workers went on their second strike of the year. With public transport dysfunctional for four days, area residents were not necessarily sympathetic to the workers’ complaints, according to The Economist. The incident only drew attention to the fact that BART’s workers weren’t contributing to their pensions.

Under the new collective bargaining agreement, employees will contribute to their pensions and increase the amount they pay for health care benefits to $129 a month. The growing cost of public pensions, wages, and benefits is a real problem for mayors, who must struggle to contain rapidly rising retiree costs. San Jose’s mayor, Chuck Reed, has led the effort in California to institute pension reforms via a ballot measure, known as the Pension Reform Act of 2014, that would give city workers a choice between reduced benefits or bigger contributions. Reed is actively seeking the support of California’s public sector unions for the measure, which would give local authorities some flexibility to contain costs. Pension costs are presenting new threats for many California governments. Moody’s is scrutinizing 30 cities for possible downgrades based on its more complete measurement of the economic liability presented by pension plans. In spite of this dire warning, CalPERS has sent struggling and bankrupt cities a strong message: pay your contributions, or else.

Other states and cities are also looking to overhaul how benefits are provided to employees. Among them is Memphis, Tennessee, which faces a reported unfunded liability of $642 million and a funding ratio of 74.4 percent – figures based on a discount rate of 7.5 percent. I calculate that on a risk-free basis Memphis’ unfunded liability is approximately $3.4 billion, leaving the plan only 35 percent funded.
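The gap between those two sets of figures is driven almost entirely by the discount rate. A stylized sketch (the cash flows and the 4 percent risk-free rate below are hypothetical, not Memphis’s actual numbers) shows how the present value of the same promised benefits balloons when a risk-free rate replaces an assumed 7.5 percent return:

```python
# Stylized illustration of discount-rate sensitivity in pension accounting.
# Cash flows and rates are hypothetical, chosen only to show the mechanism.

def present_value(payment, rate, years):
    """Present value of a level annual payment stream."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

annual_benefits = 100.0  # hypothetical benefit payments per year ($ millions)
horizon = 30             # hypothetical payout horizon (years)

pv_assumed = present_value(annual_benefits, 0.075, horizon)  # assumed 7.5% return
pv_riskfree = present_value(annual_benefits, 0.04, horizon)  # hypothetical risk-free rate

print(f"PV at 7.5%: ${pv_assumed:,.0f}M")
print(f"PV at 4.0%: ${pv_riskfree:,.0f}M")
print(f"reported liability grows by {pv_riskfree / pv_assumed - 1:.0%}")
```

The promised payments never change; only the rate used to discount them does. That is how the same plan can look 74 percent funded under one convention and 35 percent funded under another.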

The options being discussed by the Memphis government include moving new hires to a hybrid plan, a cash balance plan, or a defined contribution plan. Which of these presents the best option for employees, governments and Memphis residents?

I would suggest the following principles to guide pension reform: a) use economic accounting; b) shift the funding risk away from government; and c) offer workers – both current workers and future hires – the option to determine their own retirement course, choosing from a menu that includes a DC plan, an annuity managed by an outside firm, or some combination.

The idea should be to eliminate the ever-present incentive to turn employee retirement savings into a budgetary shell-game for governments. Public sector pensions in US state and local governments have been made uncertain under flawed accounting and high-risk investing. As long as pensions are regarded as malleable for accounting purposes – either through discount rate assumptions, re-amortization games, asset smoothing, dual-purpose asset investments, or short-sighted thinking – employee benefits are at risk for underfunding. A defined contribution plan, or a privately managed annuity avoids this temptation by putting the employer on the hook annually to make the full contribution to an employee’s retirement savings.

It’s Time to Change the Incentives of Regulators

One of the primary reasons that regulation slows down economic growth is that regulation inhibits innovation.  Another example of that is playing out in real-time.  Julian Hattem at The Hill recently blogged about online educators trying to stop the US Department of Education from preventing the expansion of educational opportunities with regulations.  From Hattem’s post:

Funders and educators trying to spur innovations in online education are complaining that federal regulators are making their jobs more difficult.

John Ebersole, president of the online Excelsior College, said on Monday that Congress and President Obama both were making a point of exploring how the Internet can expand educational opportunities, but that regulators at the Department of Education were making it harder.

“I’m afraid that those folks over at the Department of Education see their role as being that of police officers,” he said. “They’re all about creating more and more regulations. No matter how few institutions are involved in particular inappropriate behavior, and there have been some, the solution is to impose regulations on everybody.”

Ebersole has it right – the incentive for people at the Department of Education, and at regulatory agencies in general, is to create more regulations.  Economists sometimes model the government as if it were a machine that benevolently chooses to intervene in markets only when it makes sense. But those models ignore that there are real people inside the machine of government, and people respond to incentives.  Regulations are the product that regulatory agencies create, and employees of those agencies are rewarded with things like plaques (I’ve got three sitting on a shelf in my office, from my days as a regulatory economist at the Department of Transportation), bonuses, and promotions for being on teams that successfully create more regulations.  This is unfortunate, because it inevitably creates pressure to regulate regardless of consequences on things like innovation and economic growth.

A system that rewards people for producing large quantities of some product, regardless of that product’s real value or potential long-term consequences, is a recipe for disaster.  In fact, it sounds reminiscent of the situation of home loan originators in the years leading up to the financial crisis of 2008.  Mortgage origination is the act of making a loan to someone for the purposes of buying a home.  Fannie Mae and Freddie Mac, as well as large commercial and investment banks, would buy mortgages (and the interest that they promised) from home loan originators, the most notorious of which was probably Countrywide Financial (now part of Bank of America).  The originators knew they had a ready buyer for mortgages, including subprime mortgages – that is, loans made to riskier borrowers that could become worthless if, say, interest rates rose and borrowers defaulted.  The knowledge that they could quickly turn a profit by originating more loans and selling them to Fannie, Freddie, and some Wall Street firms led many mortgage originators to turn a blind eye to the possibility that many of the loans they made would not be paid back.  That is, the incentives of individuals working in mortgage origination companies led them to produce large quantities of their product, regardless of the product’s real value or potential long-term consequences.  Sound familiar?

The Use of Science in Public Policy

For the budding social scientists out there who hope that their research will someday positively affect public policy, my colleague Jerry Ellig recently pointed out a 2012 publication from the National Research Council called “Using Science as Evidence in Public Policy.” (It takes a few clicks to download, but you can get it for free).

From the intro, the council’s goal was:

[T]o review the knowledge utilization and other relevant literature to assess what is known about how social science knowledge is used in policy making . . . [and] to develop a framework for further research that can improve the use of social science knowledge in policy making.

The authors conclude that, while “knowledge from all the sciences is relevant to policy choices,” it is difficult to explain exactly how that knowledge is used in the public policy sphere.  They go on to develop a framework for research on how science is used.  The entire report is interesting, especially if you care about using science as evidence in public policy, and doubly so if you are a Ph.D. student or recently minted Ph.D. I particularly liked the stark recognition of the fact that political actors will consider their own agendas (i.e., re-election) and values (i.e., the values most likely to help in a re-election bid) regardless of scientific evidence.  That’s not a hopeless statement, though – there’s still room for science to influence policy, but, as public choice scholars have pointed out for decades, the government is run by people who will, on average, rationally act in their own self-interest.  Here are a couple more lines to that point:

Holding to a sharp, a priori distinction between science and politics is nonsense if the goal is to develop an understanding of the use of science in public policy. Policy making, far from being a sphere in which science can be neatly separated from politics, is a sphere in which they necessarily come together… Our position is that the use of [scientific] evidence or adoption of that [evidence-based] policy cannot be studied without also considering politics and values.

One thing in particular stands out to anyone who has worked on the economic analysis of regulations.  The introduction to this report includes this summary of science’s role in policy:

Science has five tasks related to policy:

(1) identify problems, such as endangered species, obesity, unemployment, and vulnerability to natural disasters or terrorist acts;

(2) measure their magnitude and seriousness;

(3) review alternative policy interventions;

(4) systematically assess the likely consequences of particular policy actions—intended and unintended, desired and unwanted; and

(5) evaluate what, in fact, results from policy.

This sounds almost exactly like the process of performing an economic analysis of a regulation, at least when it’s done well (if you want to know how well agencies actually perform regulatory analysis, read this, and for how well they actually use the analysis in decision-making, read this).  Executive Order 12866, issued by President Bill Clinton in 1993, instructs federal executive agencies on the role of analysis in creating regulations, including each of the following instructions.  Below I’ve slightly rearranged some excerpts and slightly paraphrased other parts from Executive Order 12866, and I have added in the bold numbers to map these instructions back to the summary of science’s role quoted above. (For the admin law wonks, I’ve noted the exact section and paragraph of the Executive Order that each element is contained in.):

(1) Each agency shall identify the problem that it intends to address (including, where applicable, the failures of private markets or public institutions that warrant new agency action). [Section 1(b)(1)]

(2) Each agency shall assess the significance of that problem. [Section 1(b)(1)]

(3) Each agency shall identify and assess available alternatives to direct regulation, including providing economic incentives to encourage the desired behavior, such as user fees or marketable permits, or providing information upon which choices can be made by the public. Each agency shall identify and assess alternative forms of regulation. [Section 1(b)(3) and Section 1(b)(8)]

(4) When an agency determines that a regulation is the best available method of achieving the regulatory objective, it shall design its regulations in the most cost-effective manner to achieve the regulatory objective. In doing so, each agency shall consider incentives for innovation, consistency, predictability, the costs of enforcement and compliance (to the government, regulated entities, and the public), flexibility, distributive impacts, and equity. [Section 1(b)(5)]

(5) Each agency shall periodically review its existing significant regulations to determine whether any such regulations should be modified or eliminated so as to make the agency’s regulatory program more effective in achieving the regulatory objectives, less burdensome, or in greater alignment with the President’s priorities and the principles set forth in this Executive order. [Section 5(a)]

OMB’s Circular A-4—the instruction guide for government economists tasked with analyzing regulatory impacts—similarly directs economists to include three basic elements in their regulatory analyses (again, the bold numbers are mine to help map these elements back to the summary of science’s role):

(1 & 2) a statement of the need for the proposed action,

(3) an examination of alternative approaches, and

(4) an evaluation of the benefits and costs—quantitative and qualitative—of the proposed action and the main alternatives identified by the analysis.

The statement of the need for proposed action is equivalent to the first (identifying problems) and second tasks (measuring their magnitude and seriousness) from the NRC report.  The examination of alternative approaches and evaluation of the benefits and costs of the possible alternatives are equivalent to tasks 3 (review alternative policy interventions) and 4 (assess the likely consequences).

It’s also noteworthy that the NRC points out the importance of measuring the magnitude and seriousness of problems.  A lot of public time and money gets spent trying to fix problems that are not widespread or systemic.  There may be better ways to use those resources.  Evaluating the seriousness of problems allows a prioritization of limited resources.

Finally, I want to point out how this parallels a project here at Mercatus.  Not coincidentally, the statement of science’s role in policy reads like the grading criteria of the Mercatus Regulatory Report Card, which are:

1. Systemic Problem: How well does the analysis identify and demonstrate the existence of a market failure or other systemic problem the regulation is supposed to solve?
2. Alternatives: How well does the analysis assess the effectiveness of alternative approaches?
3. Benefits (or other Outcomes): How well does the analysis identify the benefits or other desired outcomes and demonstrate that the regulation will achieve them?
4. Costs: How well does the analysis assess costs?
5. Use of Analysis: Does the proposed rule or the RIA present evidence that the agency used the Regulatory Impact Analysis in any decisions?
6. Cognizance of Net Benefits: Did the agency maximize net benefits or explain why it chose another alternative?

The big difference is that the Report Card contains elements that emphasize measuring whether the analysis is actually used – bringing us back to the original goal of the research council – to determine “how social science knowledge is used in policy making.”

Politics makes us dumb

A new paper by Dan Kahan, Ellen Peters, Erica Cantrell Dawson and Paul Slovic offers an ingenious test of an interesting hypothesis. The authors set out to test two questions: a) Are people’s abilities to interpret data impaired when the data concerns a politically polarizing issue? And b) Are more numerate people more or less susceptible to this problem?

Chris Mooney offers an excellent description of the study here. His entire post is worth reading but here is the gist:

At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their “numeracy,” that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a “new cream for treating skin rashes.” But in other cases, the study was described as involving the effectiveness of “a law banning private citizens from carrying concealed handguns in public.”

The result? Survey respondents performed wildly differently on what was in essence the same basic problem, simply depending upon whether they had been told that it involved guns or whether they had been told that it involved a new skin cream. What’s more, it turns out that highly numerate liberals and conservatives were even more – not less — susceptible to letting politics skew their reasoning than were those with less mathematical ability.

Over at Salon, Marty Kaplan offers his interpretation of the results:

I hate what this implies – not only about gun control, but also about other contentious issues, like climate change.  I’m not completely ready to give up on the idea that disputes over facts can be resolved by evidence, but you have to admit that things aren’t looking so good for a reason.  I keep hoping that one more photo of an iceberg the size of Manhattan calving off of Greenland, one more stretch of record-breaking heat and drought and fires, one more graph of how atmospheric carbon dioxide has risen in the past century, will do the trick.  But what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.

Maybe climate change denial isn’t the right term; it implies a psychological disorder.  Denial is business-as-usual for our brains.  More and better facts don’t turn low-information voters into well-equipped citizens.  It just makes them more committed to their misperceptions.  In the entire history of the universe, no Fox News viewers ever changed their minds because some new data upended their thinking.  When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win.  The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.

I suspect that if Mr. Kaplan followed his train of thinking a little bit further he’d come to really hate what this implies. Mr. Kaplan’s biggest concern seems to be that the study shows just how hard it is to convince stupid Republicans that climate change is real. The deeper and more important conclusion to draw, however, is that the study shows just how hard it is for humans to solve problems through collective political action.

To understand why, it’s helpful to turn to another Caplan – Bryan Caplan of George Mason’s economics department. In The Myth of the Rational Voter, that Caplan offers a convincing and fascinating explanation for why otherwise rational people might make less than reasonable decisions when they step into a voting booth or answer a political opinion survey. Building on insights from previous public choice thinkers such as Anthony Downs, Geoffrey Brennan, and Loren Lomasky, Caplan makes the case that people are systematically disposed to cling to irrational beliefs when – as is the case in voting – they pay almost no price for those beliefs.

Contrast this with the way people behave in a marketplace, where they tend to pay for irrational beliefs. For example, as Brennan and Lomasky put it (p. 48), “The bigot who refuses to serve blacks in his shop foregoes the profit he might have made from their custom; the anti-Semite who will not work with Jews is constrained in his choice of jobs and may well have to knock back one she would otherwise have accepted.” In contrast, “To express such antipathy at the ballot box involves neither threat of retaliation nor any significant personal cost.”

This helps explain why baby-faced candidates often lose to mature-looking (but not necessarily acting!) candidates, or why voters consistently favor trade protectionism in spite of centuries of scientific data demonstrating its inefficiency.

Given that humans are less likely to exhibit such irrationality in their private affairs, this entire body of research constitutes a powerful case for limiting the number of human activities that are organized by the political process, and maximizing the number of activities organized through private, voluntary interaction.


Update: Somehow, I missed Bryan’s excellent take on the study (and what the Enlightenment was really about) here.


The Public Choice of Sustainable Tax Reform

Comprehensive tax reform has gotten a jump-start from Senators Max Baucus (D-MT) and Orrin Hatch (R-UT), the chairman and ranking Republican on the Senate Finance Committee.  The Senate’s two top tax writers announced a new “blank slate” approach to tax reform in a “Dear Colleague” letter issued last week.

The Senators describe their new, blank slate approach as follows:

In order to make sure that we end up with a simpler, more efficient and fairer tax code, we believe it is important to start with a “blank slate”—that is, a tax code without all of the special provisions in the form of exclusions, deductions and credits and other preferences…

However, under their framework, every current tax privilege has a chance to survive.  The Senators explain:

We plan to operate from an assumption that all special provisions are out unless there is clear evidence that they: (1) help grow the economy, (2) make the tax code fairer, or (3) effectively promote other important policy objectives.

This plan has drawn both praise and criticism, and rightly so. Yes, it is a step in the right direction; even so, it is unlikely to lead to any sustainable reform, for two reasons.

First, forcing Congress to defend tax privileges won't accomplish much, because defending them won't be hard. To become law, each privilege had a sponsor, and each sponsor had a rationale to defend it. Each tax privilege was passed by Congress, and each was then signed into law. It is difficult to see how privileges that have already survived this process won't once again find a congressman willing to defend them. So long as Congress has the power to create and protect tax privileges, it will be nearly impossible to simply wipe such privileges away.

Second, even if a blank slate were achieved, it is unlikely that a privilege-free tax code would last long under the current institutional framework.  This is best demonstrated by what happened in the aftermath of the Tax Reform Act of 1986 (TRA86).

James Buchanan, writing after the passage of TRA86, predicted that very little of its reforms would remain intact.  Buchanan noted that “[t]o the extent that [political] agents do possess discretionary authority, the tax structure established in 1986 will not be left substantially in place for decades or even years.”

Buchanan was spot on.  From 1986 through 2005, the tax reform of 1986 suffered a death of 15,000 tweaks.  As reported by the President’s Advisory Panel on Federal Tax Reform in 2005, in the two decades after the 1986 tax reform bill was passed, nearly 15,000 changes were made to the tax code – equal to more than two changes per day for 19 years straight.

What insight did Buchanan have that allowed him to so aptly predict the demise of the Tax Reform Act of 1986?  Buchanan understood that institutions matter.  That is, he understood that no matter how many times the tax code was reformed, so long as the same institutions remained unchanged, political actors would continue to respond in predictable ways, and the result would be tax privileges creeping their way back into the code.  Buchanan explained:

The 1986 broadening of the tax base by closing several established loopholes and shelters offers potential rents to those agents who can promise to renegotiate the package, piecemeal, in subsequent rounds of the tax game. The special interest lobbyists, whose clients suffered capital value losses in the 1986 exercise, may find their personal opportunities widened after 1986, as legislators seek out personal and private rents by offering to narrow the tax base again. In one fell swoop, the political agents may have created for themselves the potential for substantially increased rents. This rent-seeking hypothesis will clearly be tested by the fiscal politics of the post-1986 years.

Going forward, if any sort of reforms are achieved in the tax code, this rent-seeking hypothesis will be tested again.

Senators Baucus and Hatch admit that a blank slate “is not, of course, the end product, nor the end of the discussion.”  If Buchanan’s predictions still hold today, as they most certainly do, then the Senators are quite right in admitting that a blank slate is not, and will never be, an end product.  That is, of course, unless any reform of the tax code is paired with institutional reforms to ensure that special tax privileges do not creep back into the code.

The math really matters in pension plans

Writing in The Wall Street Journal, Andy Kessler, a former hedge fund manager, gets to the heart of the matter on why state and local pension plans are running out of assets (and time): the math is a mess. Economists, financial professionals and some actuaries have been making the case for a while that the way public sector pension plans value their liabilities is a dangerous fiction.

Today, U.S. governments calculate the present value of plan liabilities based on the returns they expect to earn on plan assets (typically between 7 and 8 percent annually). That’s all wrong. How the assets perform is immaterial to the present value of plan benefits. Instead a public sector worker’s pension should be valued as a risk-free guaranteed payout much like a bond. Unfortunately, when pensions are valued on a “guaranteed payout” basis, unfunded liabilities skyrocket. Some major plans are not just a bit underfunded, they are deeply in the hole.
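The size of the gap between the two valuation approaches is easy to see with a back-of-the-envelope calculation. The sketch below is purely illustrative (the benefit stream, horizon, and rates are hypothetical, not drawn from any actual plan): it discounts the same stream of promised benefits once at an assumed 7.5% expected asset return and once at a bond-like 4% rate.

```python
# Illustrative sketch: present value of a fixed stream of promised pension
# benefits under two discount rates. All figures are hypothetical.

def present_value(annual_benefit, years, rate):
    """Discount a level stream of end-of-year payments back to today."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

benefit = 100_000_000   # promised benefits per year (hypothetical)
years = 30              # payout horizon (hypothetical)

pv_expected_return = present_value(benefit, years, 0.075)  # expected asset return
pv_risk_free = present_value(benefit, years, 0.04)         # bond-like rate

print(f"Liability at 7.5% discount rate: ${pv_expected_return / 1e9:.2f}B")
print(f"Liability at 4.0% discount rate: ${pv_risk_free / 1e9:.2f}B")
```

The same promised benefits come out roughly 50 percent larger when discounted at the lower, bond-like rate, which is why reported unfunded liabilities skyrocket when plans are valued on a guaranteed-payout basis.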

Many plan managers disregard the discount rate critique of the actuarial assumptions and persist in underestimating the funding shortfalls by an order of magnitude. In conflating expected asset returns with the value of plan benefits, another troubling behavior has ensued: shifting assets into higher-return/higher-risk vehicles to catch up after market downturns, a problem I note in a recent analysis of Delaware (and Delaware is by no means alone in this approach).

He likens what is happening in Stockton, and what is certain to visit other California cities, to his experience watching GM’s pension plan bottom out. The company’s pension shortfall spiked from $14 billion to $22.4 billion between 1992 and 1993. GM got some advice from Morgan Stanley: invest the money in alternatives and watch expected returns double from 8 percent to 16 percent. Make this assumption and the hole will be filled.

But as Kessler notes, “you can’t wish this stuff away.” Instead:

Things didn’t go as planned. The fund put up $170 million in equity and borrowed another $505 million and invested in—I’m not kidding—a northern Missouri farm raising genetically engineered pigs. Meatier pork chops for all! Everything went wrong. In May 1996, the pigs defaulted on $412 million in junk debt. In a perhaps related event, General Motors entered 2012 with its global pension plans underfunded by $25.4 billion.

The debate between economists and government accountants continues.


Civil Disobedience and Detroit’s financial manager

Michigan’s Governor Rick Snyder may be greeted by protestors when he arrives for a meeting today on Detroit’s financial condition. His recent appointment of Kevyn Orr as the city’s emergency financial manager has angered many of Detroit’s residents, who fear that Orr’s powers are far too sweeping and will destroy local control. The emergency manager law is meant to help the city stave off bankruptcy; it gives the manager the ability to renegotiate labor contracts and potentially sell city assets. The last recession has worsened the already-struggling city’s financial outlook. Detroit has a $327 million budget deficit and $14 billion in long-term debt, and has shown very little willingness to make the kind of structural changes it needs in order to stay solvent.

Detroit’s problems are acute. The city’s population has fallen from 1.8 million to 700,000, giving the city “a look and feel that rivals post-World War II Europe.” But as Public Sector Inc’s Steve Eide writes, the real problem is that local leaders have proven unable to deal with fiscal realities for far too long. His chart shows the consequences: the gap between estimated revenues and expenditures over time is striking. In sum, when Detroit plans its budget, it overestimates its revenues and underestimates its spending, by a lot. That is a governance and administration crisis, and one that the state has decided needs outside intervention to set straight.

Standard & Poor’s likes the appointment and has upgraded Detroit’s credit rating outlook to “stable.”