To merge or not to merge?

Consolidating municipalities is a common policy prescription from across the political spectrum. In New Jersey in particular, many Democratic and Republican elected officials have thrown their support behind merging municipalities. In part, this support is based on the experience of Princeton. In 2011, Princeton Borough and Princeton Township voted to merge, the first New Jersey municipalities to do so:

New Jersey GOP Gov. Chris Christie as well as governors in Ohio and Pennsylvania have been urging local governments to seek savings by eliminating unneeded costs. Christie endorsed the Princeton plan and offered to pay 20% of the $1.7-million unification cost, Bloomberg News reported.

The forecast is that Princeton taxpayers will save $3.1 million annually by consolidating services, including those for police and fire protection.

“We have redundancy in government,” borough resident Cole Crittenden told NJ.com in explaining why she supported the merger.

Framing municipal mergers as a way to get more bang for the taxpayer buck makes the proposal difficult to oppose for anyone except the municipal employees made redundant by a merger. However, the cost savings of consolidation are not well understood. In an article in Governing Magazine earlier this week, Justin Marlowe writes:

It turns out that consolidations rarely save money. In fact, for the majority of citizens directly affected in these cases, consolidation has meant higher taxes and spending. Some cities consolidated because a larger government could improve local infrastructure. This has usually meant new debt and new taxes to repay that debt. Others offered generous pensions and health-care benefits to employees pushed out in the consolidation, thus saddling the new government with expensive legacy costs. In the consolidated town of Oak Island, N.C., per capita spending is two or three times higher than before consolidation, and that’s by design. Consolidation allowed this coastal community to offer new services needed to build a vibrant tourist economy.

Superficially, municipal consolidation looks like an opportunity to reduce taxes or to provide increased services for a given level of revenue. However, as Marlowe indicates, larger jurisdictions do not always produce the anticipated efficiencies. As policymakers gain control of larger jurisdictions, and with them the ability to access more state and federal funds, they may spend more, rather than less, per capita.

Why Regulations Fail

Last week, David Fahrenthold wrote a great article in the Washington Post, in which he described the sheer absurdity of a USDA regulation requiring a small-town magician to develop a disaster evacuation plan for his rabbit (the rabbit was an indispensable part of a trick that also involved a hat). The article provides a good example of the flaws in the federal regulatory process that can derail even the best-intentioned regulations. I list a few of these flaws below.

  1. Bad regulations often start with bad congressional statutes. The Animal Welfare Act of 1966, the statute authorizing the regulation, was meant to prevent medical labs from using lost pets for experiments. Over time, the statute expanded to include all warm-blooded animals (pet lizards apparently did not merit congressional protection) and to apply to zoos and circuses in addition to labs (pet stores, dog and cat shows, and several other venues for exhibiting animals were exempt). The statute’s spotty coverage resulted from political bargaining rather than the general public interest in animal welfare. The USDA rule makes the statute’s arbitrariness immediately apparent. Why would a disaster plan benefit circus animals but not the animals in pet stores or on farms? (A colleague of mine jokingly suggested eating the rabbit as part of an evacuation plan, since rabbits raised for meat are exempt from the regulation’s requirements.)
  2. Regulations face little oversight. When the media reported on the regulation’s absurdity, USDA Secretary Tom Vilsack ordered the regulation to be reviewed. It seems that even the agency’s head was caught off guard by the actions of his agency’s regulators. Beyond internal supervision, only a fraction of regulations face external oversight. Of the more than 2,600 regulations issued in 2012, fewer than 200 were subject to OMB review (data from GAO and OMB). Interestingly, the OMB did review the USDA rule but offered only minor revisions.
  3. Agencies often fail to examine the need for regulation. In typical Washington fashion, the agency decided to regulate in response to a crisis – Hurricane Katrina in this case. In fact, the USDA offered little more than Katrina’s example to justify the regulation. It offered little evidence that the lack of disaster evacuation plans was a widespread problem that required the federal government to step in. In this, the USDA is not alone. According to the Mercatus Center’s Regulatory Report Card, which evaluates agencies’ economic analysis, few agencies offer substantial evidence justifying the need for promulgated regulations.
  4. Agencies often fail to examine the regulation’s effectiveness. The USDA’s plan to save animals in case of a disaster was to require owners to draw up an evacuation plan. It offered little evidence that having a plan would in fact save the animals. For example, the magician’s evacuation plan called for shoving the rabbit into a plastic bag and getting out. In the USDA’s view, the magician would not have thought of doing the same had he not drawn up the evacuation plan beforehand.
  5. The public has little influence in the process. By law, agencies are required to ask the public for input on proposed regulations. Yet small businesses and individual consumers rarely have the time or resources to keep an eye on federal agencies. In general, organized interests dominate the commenting process. The article describes the magician’s surprise at learning that he was required to have a license and a disaster evacuation plan for his rabbit, even though the regulation was in the works for a few years and was open for public comment for several months. Most small businesses, much like this magician, learn about regulations only after they have been finalized.
  6. Public comments are generally ignored. Most public comments that the USDA received argued against the rule. They pointed out that it would impose substantial costs on smaller businesses. The agency dismissed the comments with little justification. This case is not unique. Research indicates that agencies rarely make substantial changes to regulations in response to public comments.

Do “Indirect Effects” of Regulation Matter to Real People?

Congressional regulatory reformers recently caught criticism from advocacy groups for introducing legislation that would require federal regulatory agencies to analyze the “indirect effects” of proposed regulations. The only thing I’d criticize the reformers for is poor word choice.

The very term “indirect effects” suggests that they’re talking about something theoretical, inconsequential, and unimportant to the average citizen. But to economists, the indirect effects of a regulation are often the effects that touch the average citizen most directly.

Consider airport security, for example. The Department of Homeland Security (DHS) recently sought public comment on its decision to deploy Advanced Imaging Technology scanners instead of metal detectors at airports. The direct costs of this decision are the extra cost of the new machines, the electricity to run them, and the personnel to staff them – which airline passengers pay for via the taxes and fees on airline tickets. Those are pretty obvious costs, and DHS dutifully toted up these costs in its analysis of its proposed rule.

Less obvious but potentially more important are the other, indirect costs associated with airport security. Passengers who decline to walk through the new machines will receive additional pat-downs. This involves a cost in terms of time (which DHS acknowledges) and potentially diminished privacy and human dignity (which DHS does not discuss). The now-classic phrase “Don’t touch my junk” aptly summarizes one passenger’s reaction to an indirect effect of security regulation that touches passengers quite directly.

But that does not exhaust the list of significant, predictable, indirect effects associated with airport security regulation. The increased delays associated with enhanced, post-9/11 security measures prompted some travelers to substitute driving for flying on short trips. An article by Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon published in the November 2007 Journal of Law & Economics estimates that post-9/11 security measures cost the airline industry $1.1 billion in lost revenue in the fourth quarter of 2002. Driving is also riskier than flying. Blalock et al. estimate that the security measures were associated with 129 additional highway deaths in the fourth quarter of 2002.

I’m all for making air travel as safe as possible, but I’d like to see it done smartly, with a minimum of hassle and a maximum of respect for the flying public who pays the bills. A full accounting of the indirect effects of airport security might just prompt policymakers to consider whether they are pursuing regulatory goals in the most sensible way possible.

Unfortunately, airport security is not an isolated example. Data from the Mercatus Center’s Regulatory Report Card reveal that for about 40 percent of the major regulations proposed by executive branch agencies between 2008 and 2012, the agencies failed to conduct any substantial analysis of costs that stem from the proposed regulation’s effects on prices or on human behavior – two classic types of indirect effects.

This won’t do. Telling federal agencies they do not need to understand the indirect effects of regulation is telling them they should proceed in willful ignorance of the effects of their decisions on real people. The reformers have a good idea here – even if it has a misleadingly boring name.

Burden of DC’s Wal-Mart Minimum Wage would be Borne by City’s Poor

Plans to bring six Wal-Marts to the District of Columbia may fall through over city requirements for the big-box retailer to pay an hourly wage of $12.50, more than a 50-percent increase over the District’s $8.25 minimum wage. Yesterday, the DC City Council voted 8-5 to approve this higher minimum wage, creating a higher wage requirement for stores with over 75,000 square feet and retailers that make over $1 billion annually.

The council passed the Large Retailer Accountability Act with the rhetoric that raising the minimum wage would benefit the District’s workers and that Wal-Mart can afford to pay higher wages:

Vincent Orange was one of the most vocal supporters of the bill. “We don’t need Wal-Mart, Wal-Mart needs us,” he said. “The citizens of the District of Columbia demand that we stand up for them.”

While supporters of higher minimum wages say that they are helping their least well-off constituents, raising the minimum wage for Wal-Mart will in fact hurt the very members of the city’s labor force that council members say they are trying to help. That raising the minimum wage raises unemployment is uncontroversial among most economists. When employment falls under a higher minimum wage, those left without a job will be the lowest-skilled workers with the fewest job choices. While a higher minimum wage will benefit the group of employees who keep their jobs and would otherwise have earned the lower minimum wage, policymakers must acknowledge the tradeoffs involved in a minimum wage law: by supporting a minimum wage, they are hurting society’s least well-off members.

Furthermore, by discouraging Wal-Mart from opening stores, DC’s council is doing residents another disservice by reducing the availability of low-cost goods. Again, the burden of this policy decision falls hardest on the city’s lowest-income residents. Because those with lower incomes tend to spend a higher percentage of their income on food and other basic goods sold at Wal-Mart, discouraging the company from opening DC locations is a regressive policy. Even for those who choose not to shop at Wal-Mart, the retailer’s low prices pressure other city stores to reduce their own prices to compete, benefiting an even wider group of consumers.

Mayor Vincent Gray has the option to veto the bill, which would require a ninth vote from the Council to overturn. If the DC City Council actually wants to benefit the city’s low-income residents, allowing Wal-Mart to provide jobs and affordable goods would create broader, lasting benefits to the community than a restrictive minimum wage. Requiring large stores to pay a higher minimum wage than other retailers would limit consumer choice, especially for consumers who have few choices, and it would eliminate job opportunities for the least-skilled workers.

Delaying the Rearview Camera Rule is Good for the Poor

A few weeks ago, the Department of Transportation (DOT) announced it would delay implementation of a regulation requiring that rearview cameras be installed in new automobiles. The rule was designed to prevent backover accidents by expanding drivers’ fields of vision to include the area behind and underneath their vehicles. The DOT said more research was needed before finalizing the regulation, but there is another, perhaps more important reason for delaying the rule: the costs of this rule, and many others like it, weigh most heavily on those with low incomes, while the benefits cater to the preferences of those who are better off financially.

The rearview camera regulation was expected to increase the cost of an automobile by approximately $200. This may not seem like much money, but it means a person buying a new car will have less money on hand to spend on other items that improve quality of life, such as healthcare or healthier food. Those who already have access to quality healthcare services, or who shop regularly at high-end supermarkets like Whole Foods, may prefer reducing the risk of a backup accident to keeping the additional $200 spent on a new car. Alternatively, those who don’t have easy access to healthcare or healthy food may well prefer the $200.

A lot of regulation is really about reducing risks. Some risks pose large dangers, like the risk of radiation exposure (or death) if you are within range of a nuclear blast. Some risks pose small dangers, like a mosquito bite. Some risks are very likely, like the risk of stubbing your toe at some point in your lifetime, while other risks are very remote, like the chance that the Earth will be hit by a gigantic asteroid next week.

Risks are everywhere and can never be eliminated entirely from life. If we tried to eliminate every risk we face, we’d all live like John Travolta in the movie The Boy in the Plastic Bubble (and of course, he could still be hit by an asteroid!). The question we need to ask ourselves is: how do we manage risks in a way that makes the most sense given society’s limited resources? We may also want to ask to what degree distributional effects matter as we consider which risks to mitigate.

There are two main ways that society can manage risks. First, we can manage risks we face privately, say by choosing to eat vegetables often or to go to the gym. In this way, a person can reduce the risk of cardiovascular disease, a leading cause of death in the United States, as well as other health problems. We can also choose to manage risks publicly, say through regulation or other government action. For example, the government passes laws requiring everyone to get vaccinated against certain illnesses, and this reduces the risk of getting sick from those around us.

Not surprisingly, low-income families spend less on private risk mitigation than high-income families do. Similarly, those who live in lower-income areas tend to face higher mortality risks from a whole host of factors (e.g., accidents, homicide, cancer) compared with those who live in wealthier neighborhoods. People with higher incomes tend to demand more risk reduction, just as they demand more of other goods and services. Therefore, spending money to reduce very-low-probability risks, like the risk of being backed over by a car in reverse, is more in line with the preferences of the wealthy, who will demand more risk reduction of this sort than the poor will.

Such a rule may also result in unintended consequences. Just as using seat belts has been shown to lead to people driving faster, relying on a rearview camera when driving in reverse may lead to people being less careful about backing up. For example, someone could be running outside of the camera’s view, and only come into view just as he or she is hit by the car. Relying on cameras entirely may increase the risk of some people getting hit.

When the government intervenes and reduces risks for us, it is making a choice about which risks are most important and forcing everyone in society to pay to address them. But not all risks are the same. In the case of the rearview camera rule, everyone must pay the extra money for the new device (unless they forgo buying a new car, which also carries risks), yet the risk of a backup crash is small relative to other risks. Simply moving out of a low-income neighborhood can reduce a whole host of risks that low-income families face. By forcing the poor to pay to reduce the likelihood of tiny-probability events, DOT is essentially saying poor people shouldn’t have the option of reducing the larger risks they face. Instead, the poor must share the burden of reducing risks that are more in line with the preferences of the wealthy, who have likely already paid to reduce the types of risks that low-income families still face.

Politicians and regulators like to claim that they are saving lives with regulation and just leave it at that. But the reality is often much more complicated with unintended consequences and regressive effects. Regulations have costs and those costs often fall disproportionately on those with the least ability to pay. Regulations also involve tradeoffs that leave some groups better off, while making other groups worse off. When one of the groups made worse off is the poor, we should think very carefully before proceeding with a policy, no matter how well intentioned policymakers may be.

The DOT is delaying the rearview camera rule so it can conduct more research on the issue. This is a sensible decision. Everyone wants to reduce the prevalence of backover accidents, but we should be looking for ways to achieve this goal that don’t disadvantage the least well off in society.

Should Illinois be Downgraded? Credit Ratings and Mal-Investment

No one disputes that Illinois’s pension systems are in seriously bad condition, with large unfunded obligations. But should this worry Illinois bondholders? New Mercatus research by Marc Joffe of Public Sector Credit Solutions finds that recent downgrades of Illinois’s bonds by credit rating agencies aren’t merited. He models the default risk of Illinois and Indiana based on a projection of these states’ financial positions. These findings are put in the context of the history of state defaults and the role credit rating agencies play in debt markets. The influence of credit rating agencies in this market is the subject of a guest blog post by Marc today at Neighborhood Effects.

Credit Ratings and Mal-Investment

by Marc Joffe

Prices play a crucial role in a market economy because they provide signals to buyers and sellers about the availability and desirability of goods. Because prices coordinate supply and demand, they enabled the market system to triumph over Communism, which lacked a price mechanism.

Interest rates are also prices. They reflect investor willingness to delay consumption and take on risk. If interest rates are manipulated, serious dislocations can occur. As both Horwitz and O’Driscoll have discussed, the Fed’s suppression of interest rates in the early 2000s contributed to the housing bubble, which eventually gave way to a crash and a serious financial crisis.

Even in the absence of Fed policy errors, interest rate mispricing is possible. For example, ahead of the financial crisis, investors assumed that subprime residential mortgage-backed securities (RMBS) were less risky than they really were. As a result, subprime mortgage rates did not reflect their underlying risk, and too many dicey borrowers received home loans. The ill effects included a wave of foreclosures and huge, unexpected losses by pension funds and other institutional investors.

The mis-pricing of subprime credit risk was not the direct result of Federal Reserve or government intervention; instead, it stemmed from investor ignorance. Since humans lack perfect foresight, some degree of investor ignorance is inevitable, but it can be minimized through reliance on expert opinion.

In many markets, buyers rely on expert opinions when making purchase decisions. For example, when choosing a car we might look at Consumer Reports. When choosing stocks, we might read investment newsletters or review reports published by securities firms – hopefully taking into account potential biases in the latter case. When choosing fixed income, most large investors rely on credit rating agencies.

The rating agencies assigned what ultimately turned out to be unjustifiably high ratings to subprime RMBS. This error and the fact that investors relied so heavily on credit rating agencies resulted in the overproduction and overconsumption of these toxic securities. Subsequent investigations revealed that the incorrect rating of these instruments resulted from some combination of suboptimal analytical techniques and conflicts of interest.

While this error occurred in a market context, the institutional structure of the relevant market was the unintended consequence of government interventions over a long period of time. Rating agencies first found their way into federal rulemaking in the wake of the Depression. With the inception of the FDIC, regulators decided that expert third-party evaluations were needed to ensure that banks were investing depositor funds wisely.

The third party regulators chose were the credit rating agencies. Prior to receiving this federal mandate, and for a few decades thereafter, rating agencies made their money by selling manuals to libraries and institutional investors. The manuals included not only ratings but also large volumes of facts and figures about bond issuers.

After mid-century, the business became tougher with the advent of photocopiers. Eventually, rating agencies realized (perhaps implicitly) that they could monetize their federally granted power by selling ratings to bond issuers.

Rather than revoking their regulatory mandate in the wake of this new business model, federal regulators extended the power of incumbent rating agencies – codifying their opinions into the assessments of the portfolios of non-bank financial institutions.

With the growth in fixed income markets and the inception of structured finance over the last 25 years, rating agencies became much larger and more profitable. Due to their size and due to the fact that their ratings are disseminated for free, rating agencies have been able to limit the role of alternative credit opinion providers. For example, although a few analytical firms market their insights directly to institutional investors, it is hard for these players to get much traction given the widespread availability of credit ratings at no cost.

Even with rating agencies being written out of regulations under Dodd-Frank, market structure is not likely to change quickly. Many parts of the fixed income business display substantial inertia and the sheer size of the incumbent firms will continue to make the environment challenging for new entrants.

Regulatory involvement in the market for fixed income credit analysis has undoubtedly had many unintended consequences, some of which may be hard to ascertain in the absence of unregulated markets abroad. One fairly obvious negative consequence has been the stunting of innovation in the institutional credit analysis field.

Despite the proliferation of computer technology and statistical research methods, credit rating analysis remains firmly rooted in its early 20th-century origins. Rather than estimate the probability of default or the expected loss on a credit instrument, rating agencies still provide their assessments in the form of letter grades that have imprecise definitions and can easily be misinterpreted by market participants.

Starting with the pioneering work of Beaver and Altman in the 1960s, academic models of corporate bankruptcy risk have become common, but these modeling techniques have had limited impact on rating methodology.

Worse yet, in the area of government bonds, very little academic or applied work has taken place. This is especially unfortunate because government bond ratings frame the fiscal policy debate. In the absence of credible government bond ratings, we have no reliable way of estimating the probability that any government’s revenue and expenditure policies will lead to a socially disruptive default in the future. Further, in the absence of credible research, there is a great likelihood that markets inefficiently price government bond risk – sending confusing signals to policymakers and the general public.

Given these concerns, I am pleased that the Mercatus Center has provided me the opportunity to build a model for Illinois state bond credit risk (as well as a reference model for Indiana). This is an effort to apply empirical research and Monte Carlo simulation techniques to the question of how much risk Illinois bondholders actually face.
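The flavor of such an exercise can be conveyed with a toy Monte Carlo sketch. Every number below – starting revenue, spending, debt service, growth parameters – is an invented placeholder for illustration, not a figure from the actual Illinois or Indiana models:

```python
import random

def simulate_default_prob(revenue=100.0, spending=95.0, debt_service=4.0,
                          growth_mean=0.03, growth_sd=0.04,
                          years=30, trials=10_000, seed=1):
    """Share of simulated fiscal paths in which revenue ever falls short of
    spending plus debt service -- a crude proxy for default risk."""
    rng = random.Random(seed)
    defaults = 0
    for _ in range(trials):
        r = revenue
        for _ in range(years):
            r *= 1 + rng.gauss(growth_mean, growth_sd)  # random annual revenue growth
            if r < spending + debt_service:             # obligations can't be covered
                defaults += 1
                break
    return defaults / trials

print(f"default probability: {simulate_default_prob():.3f}")
print(f"with heavier debt service: {simulate_default_prob(debt_service=10.0):.3f}")
```

A production model would of course use empirically estimated revenue processes and a richer definition of default, but the structure – simulating many random fiscal paths and counting the share that breach a solvency threshold – is the same.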

While readers may not like my conclusion – that Illinois bonds carry very little credit risk – I hope they recognize the benefits of constructing, evaluating and improving credit models for systemically important public sector entities like our largest states. Hopefully, this research will contribute to a discussion about how we can improve credit rating assessments.


Implications of an emergency fiscal manager for Detroit

Reuters reports that an emergency financial manager might provide Detroit with a path toward bankruptcy. This week I’m at US News writing on how an emergency financial manager might help the city renegotiate the obligations that it cannot afford to pay:

An emergency financial manager will have a greater incentive than elected city officials to improve Detroit’s financial standing. For any Michigan politician, Detroit’s municipal employees make up an important group of voters. However, their political influence is more concentrated at the city level, and as an interest group they have diminished power at the state level. Because the emergency financial manager will be responsible to the governor and state legislature, he or she will not face the pressures to appease city employees that local policymakers confront.

A price tag on congestion

The research organization TRIP finds that traffic congestion comes at a steep price for drivers in the Washington, DC area. It estimates that congestion and poor road conditions cost drivers $2,195 annually in lost time and added vehicle operating costs.

TRIP supports increased infrastructure spending, and I haven’t looked into their methodology, but undeniably DC-area drivers waste copious time sitting in traffic. Despite this, a Washington Post poll finds that Maryland drivers do not support higher taxes to pay for road expansion or maintenance. Perhaps increased taxes are unpopular because state residents believe that transportation projects involve wasteful spending that won’t improve conditions for drivers. Additionally, they may realize that traffic congestion is very difficult to overcome in a world of zero-price roads. Because additional roads lower the time cost of driving, additional lanes induce more people to drive farther. Building enough roads to eliminate congestion for everyone who would like to use them at zero price during DC’s rush hour might not be possible; reducing the region’s congestion problems would itself lead more people to move to the area.

An alternative to raising taxes to fund new road construction would be to implement congestion pricing on area roads. Roads could be electronically tolled and priced at the rate that will eliminate congestion, varying with driver demand. So far municipalities have tended to implement congestion pricing on new highways. Here in the DC area, the 495 Express Lanes opened in November with congestion pricing. The new lanes were funded primarily by a private company, and the tolls are not yet meeting revenue projections; many drivers are choosing to continue driving on more congested, zero-price roads. However, congestion pricing doesn’t necessarily need to be implemented on a new road. Alternatively, policymakers could implement congestion pricing on existing roads or on specific lanes to reduce congestion for those willing to pay.
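A demand-responsive toll can be sketched as a simple feedback rule: raise the price when traffic exceeds the road’s free-flow capacity, lower it when the lane is underused. The demand curve, elasticity, and dollar figures below are hypothetical placeholders, not data from the 495 Express Lanes or any other actual road:

```python
def update_toll(toll, observed_flow, target_flow, step=0.25,
                min_toll=0.50, max_toll=20.00):
    """Nudge the toll toward the rate that keeps traffic at free-flow levels."""
    if observed_flow > target_flow:          # congested: price some drivers off
        toll += step
    elif observed_flow < 0.9 * target_flow:  # underused: attract more drivers
        toll -= step
    return max(min_toll, min(max_toll, toll))

def demand(toll, base_flow=2000, elasticity=80):
    """Hypothetical linear demand: each extra dollar deters 80 vehicles/hour."""
    return max(0, base_flow - elasticity * toll)

# Iterate toward the congestion-eliminating toll for a 1,500-vehicle/hour road.
toll = 1.00
for _ in range(60):
    toll = update_toll(toll, demand(toll), target_flow=1500)
print(f"equilibrium toll: ${toll:.2f}")
```

Real dynamic-tolling systems adjust prices every few minutes from sensor readings rather than from a known demand curve, but the underlying logic is the same feedback between observed traffic and price.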

Tolls are often politically unpopular because, as Donald Shoup points out in The High Cost of Free Parking, people are often very opposed to paying user fees for something that has previously been funded by taxpayers broadly. However, the gains from congestion pricing may outweigh the political costs. Allocating road use through prices puts roads to higher-value uses. Even if TRIP’s estimate of the cost of congestion is correct for the average driver, this cost will vary widely among drivers who value their time differently, and drivers will value their own time differently depending on the day and the importance of being on time to their destination. Thus pricing roads according to demand allows those with flexible schedules to drive when roads are otherwise uncrowded, while those who place a high value on their time will be willing to pay a high toll for the convenience of reaching their destination promptly.

Government shouldn’t pick winners either

Last week, Steven Mufson of the Washington Post reported:

The Energy Department gave $150 million in economic Recovery Act funds to a battery company, LG Chem Michigan, which has yet to manufacture cells used in any vehicles sold to the public and whose workers passed time watching movies, playing board, card and video games, or volunteering for animal shelters and community groups.

This week, Mufson’s colleague Thomas Heath reports on another firm that has received government aid:

District-based daily-deal company LivingSocial has received a much-needed $110 million cash infusion from its investors, according to a memo the company sent to employees Wednesday.

“This investment is a tremendous vote of confidence in our business from the people who know us best, our current board members and investors,” LivingSocial chief executive Tim O’Shaughnessy said in the memo, which was obtained by The Washington Post.

Mr. O’Shaughnessy is putting a nice gloss on it. A LivingSocial “senior company insider” tells PrivCo:

We scrambled for cash quickly….we did receive one other funding offer, but the current investors’ terms were the least bad of two terrible proposals….which we had no choice but to take it or file for Chapter 11.

According to PrivCo, the company ended the year with just $76 million in cash and assets while it faces some $338 million in liabilities.

Readers will no doubt remember that just eight months ago, the D.C. Council unanimously voted to give LivingSocial a $32,500,000 get-out-of-taxes-free card.

These stories (and the many, many more that could be told) suggest that President Obama’s former economic adviser Larry Summers was right to warn that government is a crappy venture capitalist. Milton and Rose Friedman’s simple taxonomy of the four ways money can be spent explains why:

[Figure: “How to Spend Money” – the Friedmans’ matrix of whose money is spent and on whom it is spent]

A private venture capitalist spends her own money to buy equity in a firm. And if that firm does well, she does well. Since she is spending her own money on herself, she has an incentive to both economize and seek the highest value.

But when government policymakers play venture capitalist, they are spending other people’s money on other people. They therefore have little incentive either to economize or to seek high value. It is no wonder that they often make the wrong bets.

But the scandal has to do with much more than a bad bet. Even if the bet pays off—which it sometimes does—there are problems associated with taxpayer support of private industry. There are more details in my paper, but to name just a few, government-supported industries will tend to:

  • Be cartelized, which means consumers are stuck with higher prices;
  • Use less-efficient productive techniques;
  • Offer lower-quality goods;
  • Waste resources in an effort to expand or maintain their government-granted privileges;
  • Innovate along the wrong margins by coming up with new ways to obtain favors rather than new ways to please customers.

Together, these costs can undermine long term growth and even short-term macroeconomic stability. And since the winners tend to be the wealthy and well-connected and the losers tend to be the relatively poor and unknown, privileges such as these undermine people’s faith in both government and markets.

We should be upset when governments sink money into firms that then go bankrupt. But it is no less scandalous when government sinks funds into firms that survive.

Governments should stay out of the business of picking winners or losers.

Pre-K for All?

I’m at US News’s Economic Intelligence blog this week, writing about President Obama’s proposal for universal Pre-K. One problem with his proposal is that we don’t have data demonstrating that state- or federally-funded preschool will improve outcomes for children:

Accurately measuring the outcomes of education programs is critical for providing policymakers, educators, and the public with the necessary data to know what works. Without this data we cannot know whether or not putting scarce taxpayer resources toward preschool will provide lasting benefits to participants, let alone provide societal benefits in excess of costs.