Tag Archives: George Mason

Three ways states can improve their health care markets

I have a new essay, coauthored with two of my former students, Anna Mills and Dana Williams. We just published a piece in Real Clear Policy summarizing it. Here is a selection of the op-ed:

Liberals, conservatives, and libertarians agree on the goals: Patients should have access to innovative, low-cost, and high-quality care. And though another round of federal reform may be years off, a number of state-level changes can move us closer to a competitive and patient-centered health-care market, making it possible to realize these shared aspirations.

In a new paper published by the Mercatus Center at George Mason University, we identify three areas for reform: States can eliminate certificate-of-need laws, liberalize scope-of-practice regulations, and end the regulatory barriers to telemedicine.

And here is our longer essay.

Ex-Im’s Deadweight Loss

To hear defenders of Ex-Im talk, you’d think that export subsidies are ALL upside and no downside. Economic theory suggests otherwise.

Clearly, some benefit from export subsidies. The most-obvious beneficiaries are the 10 or so U.S. manufacturers whose products capture the bulk of Ex-Im’s privileges (if they didn’t benefit, their “all hands on deck” public relations campaign to save the bank wouldn’t make a lot of sense).

Foreign purchasers who receive loans and loan guarantees from the bank in exchange for buying these products also clearly benefit.

The least-conspicuous beneficiaries are the private banks that finance these deals and get to offload up to 85 percent of the risk onto U.S. taxpayers. But they, too, clearly benefit.

Those are the upsides. But as economists are wont to say, “there is no such thing as a free lunch.”

Behind each of these beneficiaries is someone left holding the bag: there are taxpayers who bear risks that private lenders are unable or unwilling to bear. There are consumers who must pay higher prices for products that are made artificially expensive by Ex-Im subsidies. And there are other borrowers who lose out on capital because they aren’t lucky enough to have the full faith and credit of the U.S. taxpayer standing behind them.

One might be tempted to think that the gains of the winners roughly offset the losses of the losers. But basic economic analysis suggests that the losses exceed the gains.

A few simple diagrams illustrate this point.

First, consider any subsidy of a private (that is, excludable and rivalrous) good. Perhaps the most relevant example is a subsidy to private lenders. This is illustrated in the familiar supply and demand diagram below. The quantity of loanable funds is displayed along the horizontal axis, and the price of a loan (the interest rate) is shown on the vertical axis.

People want loans to invest in their projects. We call this the “Demand for Investment.” It is shown as the blue, downward-sloping line. It is downward sloping because there are diminishing marginal returns to investment and because if you have to pay a higher interest rate, you will borrow less.

Other people have money to lend. We call this the “Supply of Savings.” It is depicted below as the solid red, upward-sloping line. It is upward sloping because there are increasing opportunity costs to lending out money and lenders must be enticed with higher and higher interest rates to lend more and more money.

The key to understanding this diagram—and this is a point that non-economists tend to find unintuitive—is that there is an optimal quantity of loans and it is not infinity. There is some point beyond which the marginal opportunity cost of further lending exceeds the marginal expected benefit from these investments.

Now consider what happens when the government guarantees the loans. Knowing that taxpayers will cover up to 85 percent of their losses, rational lenders will be willing to supply any given quantity of loans at a lower interest rate. Thus, the supply of savings shifts to the lower, dashed red line. But just because loan guarantees shield lenders from the true opportunity cost of these funds, it does not mean that the true opportunity cost goes away. In this case, taxpayers bear the risk. (For a dated but lucid explanation of the true opportunity cost associated with Ex-Im, see this Minneapolis Fed paper.)

Society as a whole is made poorer because scarce resources are redirected from higher-valued uses toward lower-valued uses. In other words, those who lose end up losing more than the winners win. Economists call this "deadweight loss" (DWL). It is represented by the red triangle in the diagram below.
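For readers who like to see the arithmetic behind the triangle, here is a minimal numerical sketch. The linear demand and supply curves, the size of the subsidy, and the function name are all illustrative assumptions of mine, not estimates of Ex-Im's actual effects; the point is simply that the loss equals half the per-unit subsidy times the extra lending it induces.

```python
# Illustrative numbers only -- not estimates of Ex-Im's actual effects.
# Demand for investment:  r = 10 - 0.5 * Q   (what borrowers are willing to pay)
# Supply of savings:      r = 2  + 0.5 * Q   (lenders' true opportunity cost)
# A loan guarantee works like a per-unit subsidy s, shifting supply down to r = 2 + 0.5*Q - s.

def equilibrium(subsidy=0.0):
    """Solve 10 - 0.5*Q = 2 + 0.5*Q - subsidy for Q, then read the borrower rate off demand."""
    q = 8.0 + subsidy
    r = 10.0 - 0.5 * q
    return q, r

q0, r0 = equilibrium()       # free-market quantity of loans and interest rate
s = 2.0                      # assumed per-unit value of the guarantee
q1, r1 = equilibrium(s)      # subsidized quantity and (lower) borrower rate

# Every loan beyond q0 costs society more (the true opportunity cost of savings)
# than borrowers value it. Summing those gaps gives the deadweight-loss triangle:
dwl = 0.5 * s * (q1 - q0)

print(f"Loans rise from {q0:.0f} to {q1:.0f}; the borrower rate falls from {r0:.0f}% to {r1:.0f}%")
print(f"Deadweight loss (area of the red triangle) = {dwl:.0f}")
```

The particular numbers do not matter; any upward-sloping supply curve and downward-sloping demand curve produce a triangle like this, and with linear curves the loss grows with the square of the subsidy.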

DWL of a Subsidy

So far, this is the basic economic theory of a subsidy. But economists have developed more-specific models to understand subsidies in the context of international trade.

To get a handle on this, check out some videos by Professor Michael Moore of George Washington University. If international trade diagrams are new to you, I’d recommend looking at these diagrams before watching his videos. Then watch Professor Moore’s excellent illustration of an export subsidy in a small country, followed by the slightly more-complicated—but more relevant—case of export subsidies in a large country.

Small country case:

Large country case:

This is the basic case for free trade, and it is widely accepted by economists. Astute readers may know that there are some interesting theoretical exceptions to this rule. These exceptions derive from what are known as "strategic trade" models. They posit that in some situations—such as oligopolistic industries—governments can theoretically manage to use subsidies to make domestic firms win more than domestic consumers lose. The world is still poorer, but domestic winnings outweigh domestic losses.
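To see the mechanics, here is a minimal sketch of the stylized two-firm entry game on which these models are usually illustrated (the Boeing-versus-Airbus example in the textbooks). The payoff numbers and function names are my own illustrative assumptions, not data about any real firms.

```python
# A stylized "strategic trade" entry game: two firms decide whether to produce a new
# product in a market that is only profitable for one of them.
from itertools import product

def payoffs(home_produces, foreign_produces, subsidy=0):
    """Return (home firm profit, foreign firm profit) for a pair of entry decisions."""
    if home_produces and foreign_produces:
        home, foreign = -5, -5        # both enter and both lose money
    elif home_produces:
        home, foreign = 100, 0        # home firm has the market to itself
    elif foreign_produces:
        home, foreign = 0, 100        # foreign firm has the market to itself
    else:
        home, foreign = 0, 0
    if home_produces:
        home += subsidy               # home government subsidizes its firm's entry
    return home, foreign

def nash_equilibria(subsidy):
    """List outcomes where neither firm gains by unilaterally reversing its decision."""
    eqs = []
    for h, f in product([True, False], repeat=2):
        home, foreign = payoffs(h, f, subsidy)
        if payoffs(not h, f, subsidy)[0] <= home and payoffs(h, not f, subsidy)[1] <= foreign:
            eqs.append({"home enters": h, "foreign enters": f, "payoffs": (home, foreign)})
    return eqs

print("No subsidy: ", nash_equilibria(0))   # two equilibria: whoever commits first wins
print("Subsidy=10:", nash_equilibria(10))   # entry becomes a dominant strategy for the home firm
```

In the sketch, a subsidy of 10 tips the game so that only the home firm enters and earns 110; the home country appears to gain far more than the subsidy costs. Whether that logic survives contact with the real world is precisely what Krugman's critiques below call into question.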

These models are worth understanding. But the truth is that they have not undermined, and should not undermine, the basic economic case for free trade. The best exposition of this point is a classic piece by Paul Krugman called "Is Free Trade Passé?" In it, Krugman carefully walks the reader through the logic of these models. He then notes, quite rightly, that:

The normative conclusion that this justifies a greater degree of government intervention in trade, however, has met with sharp criticism and opposition—not least from some of the creators of the new theory themselves.

Krugman then ticks through the reasons why free trade should still be the reasonable rule of thumb. For one thing, since the strategic trade models seem to only work in oligopolistic industries, policy makers would need to know exactly how oligopolists will respond to these subsidies and the fact is “economists do not have reliable models of how oligopolists behave.” Then there is the problem of entry. Even if a government does solve the empirical problem of anticipating and accurately responding to oligopolists, it “may still not be able to raise national income if the benefits of its intervention are dissipated by entry of additional firms.”

Krugman’s final two critiques are fascinating because they are precisely the sorts of concerns a George Mason economist might raise. First, there is what Hayek might call the information problem:

[T]o pursue a strategic trade policy successfully, a government must not only understand the effects of its policy on the targeted industry, which is difficult enough, but must also understand all the industries in the economy well enough that it can judge that an advantage gained here is worth advantage lost elsewhere. Therefore, the information burden is increased even further.

And finally, there is the public choice problem. At the international level, “In many (though not all) cases, a trade war between two interventionist governments will leave both countries worse off than if a hands-off approach were adopted by both.” And at the domestic level:

Governments do not necessarily act in the national interest, especially when making detailed microeconomic interventions. Instead, they are influenced by interest group pressures. The kinds of interventions that new trade theory suggests can raise national income will typically raise the welfare of small, fortunate groups by large amounts, while imposing costs on larger, more diffuse groups. The result, as with any microeconomic policy, can easily be that excessive or misguided intervention takes place because the beneficiaries have more knowledge and influence than the losers.

To this, one could add a host of problems that arise when governments privilege particular firms or industries.

Which (finally) brings me to the bottom line: the economic case remains strong that export subsidies to domestic firms like Boeing and GE end up costing American consumers, borrowers, and taxpayers more than they end up benefiting the privileged firms.

Is American Federalism conducive to liberty?

In new Mercatus research, Dr. Richard E. Wagner, Harris Professor of Economics at George Mason University, tackles a fascinating question: Is the American form of federalism supportive of liberty?

His answer is a qualified "yes." Under certain conditions, American federalism does support liberty, but that very same system can also be modified in ways that expand political power at the expense of the liberty of citizens. The question of what results from the gradual constitutional transformation of the American federalist system is salient not only for students of government but also for policymakers.

The important conditions that determine which form of federalism prevails (liberty-supporting or liberty-eroding) are rooted in competition among governments. Today we are experiencing a very different kind of federalism than the one instituted by the Founders. For the better part of a century, the U.S. Constitution has often been amended in ways that encourage collusion among the states, thus undermining a key feature of liberty-supporting federalism.

Restoring a liberty-supporting federalism first requires a deeper diagnosis of the American federalist system. Dr. Wagner develops that diagnosis through a very engaging synthesis of public choice theory, Austrian economics, and new institutional economics. Students of Dr. Wagner may be familiar with many of these concepts, which he developed in his public finance books, including Deficits, Debt and Democracy (2012, Elgar). Rather than summarize the paper here, I encourage you to read the piece in full.

With Government Shekels Come Government Shackles

Though privileged firms may not focus on it when they obtain their favors, privilege almost always comes with strings attached. And these strings can sometimes be quite debilitating. Call it one of the pathologies of government-granted privilege.

Perhaps the best statement of this comes from the man whose job it was to pull the strings on TARP recipients. In 2009, Kai Ryssdal of Marketplace interviewed Kenneth Feinberg. The Washington compensation guru had just been appointed to oversee compensation practices among the biggest TARP recipients. Here is how he described his powers:

Ryssdal: How much power do you have in your new job?

FEINBERG: Well, the law grants to the secretary who delegates to me the authority to determine compensation packages for 175 senior executives of the seven largest corporate top recipients. The law also permits me, or requires me, to design compensation programs for these recipients, governing overall compensation of every senior official. And finally, the law gives me great discretion in deciding whether I should seek to recoup funds that have already been distributed to executives by top recipients. So it’s a substantial delegation of power to one person.

Another example of shackles following shekels comes from Maryland. That state has doled out over $20 million in tax privileges to a film production company called MRC. MRC films House of Cards, a show about a remarkably corrupt politician named Frank Underwood. The goal of these privileges was to “induce” (others might call it bribe) MRC to film House of Cards in Maryland. One problem (among many) with targeted privileges like this is that there is no guarantee that the induced firm will stay induced; there’s nothing to keep it from coming back for more.

In this case, MRC executives recently sent a letter to Governor Martin O’Malley threatening to “break down our stage, sets and offices and set up in another state” if “sufficient incentives do not become available.” Chagrined, state Delegate William Frick came up with a plan to seize the company’s assets through eminent domain. It is clear that Delegate Frick’s intention was to shackle the company. He told the Washington Post:

I literally thought: What is an appropriate Frank Underwood response to a threat like this?…Eminent domain really struck me as the most dramatic response.

As George Mason University’s Ilya Somin aptly puts it:

But even if the courts would uphold this taking, it is extremely foolish policy. State governments rarely condemn mobile property, for the very good reason that if they try to do so, the owners can simply take it out of the jurisdiction – a lesson Maryland should have learned when it tried to condemn the Baltimore Colts to keep them from leaving back in 1984. Moreover, other businesses are likely to avoid bringing similar property into the state in the first place.

My colleague Chris Koopman notes that there are also a number of practical problems with this proposal. The only real property the state could seize from MRC would be its filming equipment: its cameras, its lights, maybe a set piece or two. And under the U.S. Constitution, it would have to offer MRC "just compensation" for these takings. The company's real assets—the minds of its writers and the talents of its actors—would, of course, remain intact and free to move elsewhere. So essentially, Mr. Frick is offering to buy MRC a bunch of new cameras, leaving the state with a bunch of old cameras, which it will use for…well, that hasn't been determined yet.

In this case, it would seem that the shackles are more like bangles.

The Maryland House of Delegates adopted Frick's measure without debate. It now goes to the Senate.

Politics makes us dumb

A new paper by Dan Kahan, Ellen Peters, Erica Cantrell Dawson, and Paul Slovic offers an ingenious test of an interesting hypothesis. The authors set out to answer two questions: a) Are people's abilities to interpret data impaired when the data concern a politically polarizing issue? And b) Are more numerate people more or less susceptible to this problem?

Chris Mooney offers an excellent description of the study here. His entire post is worth reading but here is the gist:

At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their “numeracy,” that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a “new cream for treating skin rashes.” But in other cases, the study was described as involving the effectiveness of “a law banning private citizens from carrying concealed handguns in public.”

The result? Survey respondents performed wildly differently on what was in essence the same basic problem, simply depending upon whether they had been told that it involved guns or whether they had been told that it involved a new skin cream. What’s more, it turns out that highly numerate liberals and conservatives were even more – not less — susceptible to letting politics skew their reasoning than were those with less mathematical ability.

Over at Salon, Marty Kaplan offers his interpretation of the results:

I hate what this implies – not only about gun control, but also about other contentious issues, like climate change.  I’m not completely ready to give up on the idea that disputes over facts can be resolved by evidence, but you have to admit that things aren’t looking so good for a reason.  I keep hoping that one more photo of an iceberg the size of Manhattan calving off of Greenland, one more stretch of record-breaking heat and drought and fires, one more graph of how atmospheric carbon dioxide has risen in the past century, will do the trick.  But what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.

Maybe climate change denial isn’t the right term; it implies a psychological disorder.  Denial is business-as-usual for our brains.  More and better facts don’t turn low-information voters into well-equipped citizens.  It just makes them more committed to their misperceptions.  In the entire history of the universe, no Fox News viewers ever changed their minds because some new data upended their thinking.  When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win.  The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.

I suspect that if Mr. Kaplan followed his train of thinking a little bit further he’d come to really hate what this implies. Mr. Kaplan’s biggest concern seems to be that the study shows just how hard it is to convince stupid Republicans that climate change is real. The deeper and more important conclusion to draw, however, is that the study shows just how hard it is for humans to solve problems through collective political action.

To understand why, it's helpful to turn to another Caplan: Bryan Caplan of George Mason's economics department. In The Myth of the Rational Voter, Caplan offers a convincing and fascinating explanation for why otherwise rational people might make less than reasonable decisions when they step into a voting booth or answer a political opinion survey. Building on insights from previous public choice thinkers such as Anthony Downs, Geoffrey Brennan, and Loren Lomasky, Caplan makes the case that people are systematically disposed to cling to irrational beliefs when, as is the case in voting, they pay almost no price for those beliefs.

Contrast this with the way people behave in a marketplace, where they (tend to) pay for irrational beliefs. For example, as Brennan and Lomasky put it (p. 48), "The bigot who refuses to serve blacks in his shop foregoes the profit he might have made from their custom; the anti-Semite who will not work with Jews is constrained in his choice of jobs and may well have to knock back one she would otherwise have accepted." In contrast, "To express such antipathy at the ballot box involves neither threat of retaliation nor any significant personal cost."

This helps explain why baby-faced candidates often lose to mature-looking (but not necessarily acting!) candidates, or why voters consistently favor trade protectionism in spite of centuries of scientific data demonstrating its inefficiency.

Given that humans are less likely to exhibit such irrationality in their private affairs, this entire body of research constitutes a powerful case for limiting the number of human activities that are organized by the political process, and maximizing the number of activities organized through private, voluntary interaction.

——

Update: Somehow, I missed Bryan’s excellent take on the study (and what the Enlightenment was really about) here.

 

In DC Job Market, It Is Who You Know, Not What You Can Produce

When making hiring decisions, how does one determine the productivity of a prospective worker? Assessing productivity is difficult even after you hire someone, but it is even trickier when all you have to go on is a resume and some references.

Pete Thompson of the local DC NPR station (WAMU) had an interesting story yesterday on the DC job market. He interviewed George Mason economist James Bennett:

Bennett says it’s common sense that networking is the key to many jobs in DC including those in government, nonprofits, and especially politics.

But he says the end result is that many people — qualified or not — wind up wasting valuable time applying for open positions that have already been filled.

“All these people have job searches but half the time that’s show and tell, the real candidate has been picked out long ago,” he says, “but you have to go through the motions of government regulations, all this stuff’s been going for a hundred years.”

This, I think, is one of the ugly side effects of government employment crowding out private employment. As hard as it is to determine the productivity of a prospective worker, it is even harder when that worker is responsible for producing a public good instead of a marketable private good. And thus, compared with private employment, public employment is more likely to be subject to arbitrary standards.

In private industry, one can attempt to measure the marginal productivity of labor with the measuring rod of profit and loss: if I add one more man-hour, how does that affect my net profit? From this assessment, an employer can offer a wage based on some knowledge of the return he or she will get from hiring the worker. Of course, the prospective employee can reject that offer if he or she can find another firm that will pay more.
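As a concrete (and entirely made-up) illustration of that calculation, here is a small sketch; the numbers, the function name, and the wage are assumptions of mine, not anything from the story.

```python
# Toy illustration with assumed numbers: the employer weighs the extra revenue an
# additional worker would generate (marginal product x output price) against the wage.

def marginal_revenue_product(extra_units_per_week, price_per_unit):
    """Extra weekly revenue from hiring one more worker."""
    return extra_units_per_week * price_per_unit

wage_offer = 900.0   # weekly wage the firm is considering offering
mrp = marginal_revenue_product(extra_units_per_week=120, price_per_unit=10.0)

if mrp >= wage_offer:
    print(f"Hire: the worker adds ${mrp:.0f} per week against a ${wage_offer:.0f} wage.")
else:
    print(f"Pass at this wage: ${mrp:.0f} added per week is less than ${wage_offer:.0f}.")
```

Competing offers and the firm's own bottom line then discipline that estimate, as the next paragraph notes.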

There are many ways to get tripped up—especially in complicated production procedures that require the collaboration of multiple workers and their machines. But competitive pressures help hone the estimation procedure: workers can leave the firm if another firm offers a greater wage (perhaps because, in using the new firm’s machines, the workers are more productive there). And as employees come and go, the employer can better-assess marginal productivity.

In public employment, however, there is no measuring rod of profit and loss and so employers must rely on more-arbitrary procedures. How does one determine how much a prospective congressional aide will add to the productivity of government? As Professor Bennett notes, a common shortcut is to look at who the job-seeker knows.