Category Archives: Environment

The Economics of Regulation Part 2: Quantifying Regulation

I recently wrote about a new study from economists John Dawson and John Seater that shows that federal regulations have slowed economic growth in the US by an average of 2% per year.  The study was novel and important enough from my perspective that it deserved some detailed coverage.  In this post, which is part two of a three-part series (part one here), I go into some detail on the various ways that economists measure regulation.  This will help put into context the measure that Dawson and Seater used, which is the main innovation of their study.  The third part of the series will discuss the endogenous growth model in which they used their new measure of regulation to estimate its effect on economic growth.

From the macroeconomic perspective, the main policy interventions—that is, instruments wielded in a way to change individual or firm behavior—used by governments are taxes and regulations.  Others might include spending/deficit spending and monetary policy in that list, but a large percentage of economics studies on interventions intended to change behavior have focused on taxes, for one simple reason: taxes are relatively easy to quantify.  As a result, we know a lot more about taxes than we do about regulations, even if much of that knowledge is not well implemented.  Economists can calculate changes to marginal tax rates caused by specific policies, and by simultaneously tracking outcomes such as changes in tax revenue and the behavior of taxed and untaxed groups, deduce specific numbers with which to characterize the consequences of those taxation policies.  In short, with taxes, you have specific dollar values or percentages to work with. With regulations, not so much.

In fact, the actual burden of regulation is notoriously hidden, especially when directly compared to taxes that attempt to achieve the same policy objective.  For example, since fuel economy regulations (called Corporate Average Fuel Economy, or CAFE, standards) were first implemented in the 1970s, it has been broadly recognized that the goal of reducing gasoline consumption could be more efficiently achieved through a gasoline tax rather than vehicle design or performance standards.  However, it is much easier for a politician to tell her constituents that she will make auto manufacturers build more fuel-efficient cars than to tell constituents that they now face higher gasoline prices because of a fuel tax.  In econospeak, taxes are salient to voters—remembered as important and costly—whereas regulations are not. Even when comparing taxes to taxes, some, such as property taxes, are apparently more salient than others, such as payroll taxes, as this recent study shows.  If some taxes that workers pay on a regular basis are relatively unnoticed, how much easier is it to hide a tax in the form of a regulation?  Indeed, it is arguably because regulations are uniquely opaque as policy instruments that all presidents since Jimmy Carter have required some form of benefit-cost analysis on new regulations prior to their enactment (note, however, that the average quality of those analyses is astonishingly low).  Of course, it is for these same obfuscatory qualities that politicians seem to prefer regulations to taxes.

Despite the inherent difficulty, scholars have been analyzing the consequences of regulation for decades, leading to a fairly large literature. Studies typically examine the causal effect of a unique regulation or a small collection of related regulations, such as air quality standards stemming from the Clean Air Act.  Compared to the thousands of actual regulations that are in effect, the regulation typically studied is relatively limited in scope, even if its effects can be far-reaching.  Because most studies on regulation focus only on one or perhaps a few specific regulations, there is a lot of room for more research to be done.  Specifically, improved metrics of regulation, especially metrics that can be used either in multi-industry microeconomic studies or in macroeconomic contexts, could help advance our understanding of the overall effect of all regulations.

With that goal in mind, some attempts have been made to measure regulation more comprehensively through the use of surveys and legal studies.  The most famous example is probably the Doing Business Index from the World Bank, while perhaps the most widely used in academic studies is the Indicators of Product Market Regulation from the OECD.  Since 2003, the World Bank has produced the Doing Business Index, which combines survey data with observational data into a single number designed to tell how much it would cost to “do business,” e.g. set up a company, get construction permits, get electricity, register property, etc., in a set of 185 countries.  The Doing Business Index is perhaps most useful for identifying good practices to follow in the early to middle stages of economic development, when property rights and other beneficial institutions can be created and strengthened.

The OECD’s Indicators of Product Market Regulation database focuses more narrowly on types of regulation that are more relevant to developed economies.  Specifically, the original OECD data considered only product market and employment protection regulations, both measured at the “economy-wide” level—meaning the OECD measured whether those types of regulations existed in a given country, regardless of whether they applied only to certain individuals or particular industries.  The OECD later extended the data by adding barriers to entry, public ownership, vertical integration, market structure, and price controls for a small subset of broadly defined industries (gas, electricity, post, telecommunications, passenger air transport, railways, and road freight).  The OECD develops its database by surveying government officials in several countries and aggregating their responses, with weightings, into several indexes.

By design, the OECD and Doing Business approaches do a good job of relating obscure macroeconomic data to actual people and businesses.  Consider the chart below, taken from the OECD description of how the Product Market Regulation database is created.  As I wrote last week and as the chart shows, the rather sanitized term “product market regulation” actually consists of several components that are directly relevant to a would-be entrepreneur (such as the opacity of a country’s licenses and permits system and administrative burdens for sole proprietorships) and to a consumer (such as price controls and barriers to foreign direct investment).  You can click on the chart below to see some of the other components that are considered in OECD’s product market regulation indicator.

oecd product regulation tree structure

Still, there are two major shortcomings of the OECD data (shortcomings that are equally applicable to similar indexes produced by the World Bank and others).  First, they cover relatively short time spans.  Changes in regulatory policy often require several years, if not decades, to implement, so the results of these changes may not be reflected in short time frames (to a degree, this can be overcome by measuring regulation for several different countries or different industries, so that results of different policies can be compared across countries or industries).

Second, and more importantly in my mind, these indexes are not comprehensive.  They focus on a few areas of regulation, and then only on whether regulations exist—not on how complex or burdensome they are.  As Dawson and Seater explain:

[M]easures of regulation [such as the Doing Business Index and the OECD Indicators] generally proceed by constructing indices based on binary indicators of whether or not various kinds of regulation exist, assigning a value of 1 to each type of regulation that exists and a 0 to those that do not exist.  The index then is constructed as a weighted sum of all the binary indicators.  Such measures capture the existence of given types of regulation but cannot capture their extent or complexity.
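A minimal sketch makes the quote concrete.  The indicator names and weights below are hypothetical, purely for illustration; the point is that the index comes out the same whether a regulation is one page long or one thousand:

```python
# Sketch of a binary-indicator regulation index, as described in the quote.
# Indicator names and weights are hypothetical, for illustration only.

indicators = {
    "price_controls": 1,        # 1 = this type of regulation exists
    "entry_barriers": 1,
    "public_ownership": 0,      # 0 = it does not
    "vertical_integration": 1,
}

weights = {
    "price_controls": 0.30,
    "entry_barriers": 0.30,
    "public_ownership": 0.20,
    "vertical_integration": 0.20,
}

# The index is a weighted sum of the 0/1 indicators.
index = sum(weights[k] * v for k, v in indicators.items())
print(index)  # 0.8 -- identical whether each rule is trivial or sprawling
```

Extent and complexity simply have nowhere to enter the calculation.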

Dawson and Seater go out of their way to mention at least twice that the OECD dataset ignores environmental and occupational health and safety regulations.  It is a good point: in the US, at least, environmental regulations from the EPA alone accounted for about 15% of all restrictions published in federal regulations in 2010, and that percentage has grown consistently for the past decade, as can be seen in the graph below (created using data from RegData).  Occupational health and safety regulations take up a significant portion of the regulatory code as well.

env regs as percentage of total
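For a sense of how a text-based measure like RegData works, here is a rough sketch of restriction counting: tallying occurrences of binding terms in regulatory text.  The term list and sample sentence are illustrative; RegData's actual methodology is more involved.

```python
import re

# Rough sketch of restriction counting in the spirit of RegData: count
# occurrences of binding legal terms in regulatory text.
RESTRICTION_TERMS = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text: str) -> int:
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text))
               for term in RESTRICTION_TERMS)

sample = ("The operator shall maintain records and must submit them "
          "annually. Discharges are prohibited unless a permit is required.")
print(count_restrictions(sample))  # 4
```

Applied title by title to the CFR, a count like this can be attributed to individual agencies (such as the EPA share graphed above), which a page count alone cannot do.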

In contrast, one could measure all federal regulations, not just a few select types.  But then the process requires some use of the actual legal texts containing regulations.  There have been a few attempts to create all-inclusive time series measures of regulation based on the voluminous legal documents detailing regulatory activity at the federal level.  For the most part, studies have relied on the Federal Register, the government’s daily journal of newly proposed and final regulations.  For example, many scholars have counted pages in the Federal Register to test for the existence of the midnight regulations phenomenon—the observation that the administrations of outgoing presidents seem to produce abnormally large numbers of regulations during the lame-duck period.

There are problems with using the Federal Register to measure regulation (I say this despite having used it in some of my own papers).  First and foremost, the Federal Register includes deregulatory activity.  When a regulatory agency eliminates words, paragraphs, or even entire chapters from the CFR, the agency has to notify the public of the changes.  The agency does this by printing a notice of proposed rulemaking in the Federal Register that explains the agency’s intentions.  Then, once the public has had adequate time to comment on the agency’s proposed actions, the agency has to publish a final rule in the Federal Register—another set of pages that detail the final actions the agency is taking.  Obviously, if one is counting pages published in the Federal Register and using that as a proxy for the growth of regulation, deregulatory activity that produces positive page counts would lead to incorrect measurements.

Furthermore, pages published in the Federal Register may be a biased measure because the number of pages associated with individual rulemakings has increased over time as acts of Congress or executive orders have required more analyses. In his Ten-Thousand Commandments series, Wayne Crews mitigates this drawback to some degree by focusing only on pages devoted to final rules.  The Ten-Thousand Commandments series keeps track of both the annual number of final regulations published in the Federal Register and the annual number of Federal Register pages devoted to final regulations.

Dawson and Seater instead rely on the Code of Federal Regulations, another set of legal documents related to federal regulation.  Actually, the CFR is better described as the books that contain the actual text of regulations in effect each year.  When a regulatory agency creates new regulations, or alters existing regulations, those changes are reflected in the next publication of the CFR.  Dawson and Seater collected data on the total number of pages in the CFR in each year from 1949 to 2005.  I’ve graphed their data below.

dawson and seater cfr pages

*Dawson and Seater exclude Titles 1 – 3 and 32 from their total page counts because they argue that those Titles do not contain regulation, so comparing this graph with page count graphs produced elsewhere will show some discrepancies.

Perhaps the most significant advantage of the CFR over counting pages in the Federal Register is that it allows for decreases in regulation.  The CFR also arguably has several advantages over indexes like the OECD product market regulation index and the World Bank Doing Business Index.  First, the CFR captures all federal regulation, not just a select few types.  Dawson and Seater point out:

Incomplete coverage leads to two problems: (1) omitted variables bias, and, in any time series study, (2) divergence between the time series behavior of subsets of regulation on the one hand and of total regulation on the other.

In other words, ignoring potentially important variables (such as environmental regulations) can cause estimates of the effect of regulation to be wrong.

Second, the number of pages in the CFR may reflect the complexity of regulations to some degree.  In contrast, the index metrics of regulation typically only consider whether a regulation exists—a binary variable equal to 1 or 0, with nothing in between.  Third, the CFR offers a long time series – almost three times as long as the OECD index, although it is shorter than the Federal Register time series.

Of course, there are downsides to using the CFR.  For one, it is possible that legal drafting standards and language norms have changed over the 57 years, which could introduce bias to their measure (Dawson and Seater brush this concern aside, but not convincingly in my opinion).  Second, the CFR is limited to only one country—the United States—whereas the OECD and World Bank products cover many countries.  Data on multiple countries (or multiple industries within a country, like RegData offers) allow comparisons of real-world outcomes and how they respond to different regulatory treatments.  In contrast, Dawson and Seater are limited to constructing a “counterfactual” economy – one that their model predicts would exist had regulations stayed at the level they were in 1949.  In my next post, I’ll go into more detail on the model they use to do this.

Where Are The Benefits From Recent Energy Efficiency Regulations?

On Tuesday, President Obama gave a speech announcing his new agenda to combat climate change. As part of his efforts to curb greenhouse gas emissions, the President and his administration plan on releasing a series of energy efficiency regulations, supposedly with the intention of reducing carbon dioxide emissions. The problem is, the vast majority of the benefits from many energy efficiency rules have nothing to do with reducing carbon dioxide emissions, and this is according to the government’s own estimates. Instead, agencies like the Department of Energy (DOE) are eliminating options for consumers, and then counting the loss to consumers as a benefit of regulating.

How do they do this? It all has to do with a relatively new field of social science known as behavioral economics. You can think of behavioral economics as the intersection of psychology and economics. Behavioral economists believe that people exhibit many biases that cause them to systematically act in ways that are out of line with their true preferences. In a lab situation, there are many examples of such biases that have been demonstrated. For example, a person buying a home may bid one price, but if she is selling the same house, she may require a higher price, implying she values the same object differently depending on whether the object belongs to her or not. Or, people may value objects differently depending on time. For instance, a person might choose to receive $100 today over $110 tomorrow, yet at the same time pass on $100 a year from now in exchange for $110 in a year and one day, implying the person is more impatient today than he sees himself being in the future.
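The second example is what behavioral economists call present bias, often modeled with quasi-hyperbolic ("beta-delta") discounting.  A small sketch, with purely illustrative parameter values, reproduces the preference reversal described above:

```python
# Quasi-hyperbolic (beta-delta) discounting: the value today of $x received
# t days from now is x if t == 0, else beta * delta**t * x.
# Parameter values are illustrative only.
BETA, DELTA = 0.8, 0.999

def value(amount: float, days: int) -> float:
    return amount if days == 0 else BETA * DELTA**days * amount

# Today vs tomorrow: the immediacy premium makes $100 now beat $110 tomorrow.
print(value(100, 0) > value(110, 1))      # True

# One year vs a year and a day: with the immediacy premium gone,
# the same person happily waits the extra day for the extra $10.
print(value(110, 366) > value(100, 365))  # True
```

The same person makes opposite choices over an identical one-day delay, depending only on whether the earlier payoff is immediate.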

As the chart below demonstrates, the Department of Energy recently finalized a regulation related to microwave ovens, and nearly 80% of the benefits of the rule stemmed not from protecting the environment or public health, but from saving consumers money by preventing them from buying the products they would otherwise choose.  DOE does not seem to understand why a consumer might choose to pay a relatively low price today for a product that is not very energy efficient when she could buy a more expensive, energy-efficient product that saves money over its lifetime through lower electricity bills.  From an economics perspective, DOE does not believe this behavior is rational; it treats it like one of the behavioral biases described above, and in many cases DOE has decided to ban the products it doesn’t like in order to protect consumers from themselves.

Energy Efficiency Benefits from DOE Microwave Ovens Regulation*

Capture4

Federal agencies are ignoring the fact that consumers may value other attributes of products aside from energy and fuel efficiency.  With automobiles, consumers may prefer larger, safer cars to smaller, more fuel-efficient vehicles.  Restaurants may prefer light bulbs that raise electric bills slightly every month, but whose warm glow creates an ambiance that customers enjoy.  And in the case of microwaves and laundry machines, it may be that machines that use more energy simply work better at their stated purpose.  It’s not just microwave ovens, either.  DOE and other agencies, like the Department of Transportation and the Environmental Protection Agency, make the same type of assumption in other regulations, such as rules affecting commercial clothes washers, light bulbs, and fuel efficiency standards for vehicles.  Agencies even assume businesses behave this way.  Does anyone honestly believe that trucking companies aren’t taking fuel efficiency into account when buying new fleets?  Or that laundromat owners don’t consider electricity costs when purchasing new equipment?  It seems highly implausible, but agencies are assuming just that.

For decades, agencies have been required to identify a market failure or other systemic problem that exists before intervening in the marketplace with a regulation. Market failures include things like a lack of competition, a lack of consumer information, or costs that spill over onto the public as the result of a private transaction. Now, agencies like DOE have begun to expand the definition of market failure to include what they deem to be personal failures on the part of consumers.

So why are agencies doing this?  One reason may be that the environmental benefits alone aren’t enough to justify the costs of some regulations.  Claiming additional benefits helps agencies justify an inefficient policy and keeps regulators employed.  Agencies have other ways to make the benefits of rules appear greater, too.  In the case of the microwave rule, of the small portion of benefits related to carbon dioxide reductions, most will be captured by citizens of foreign countries, with only a small fraction going to US citizens.  Counting benefits to foreigners makes the benefits of rules appear greater, even though agencies are generally instructed to consider only benefits to the United States.

Another reason we may be getting these types of rules is the rules may really be intended to benefit special interest groups more than consumers. A manufacturer that is already producing an energy efficient product may capture market share by getting the products of its competitors banned. Or manufacturers may simply want to force consumers to buy a more expensive product, or replace old products with new ones, while eliminating the possibility of a competitor undercutting them by selling a cheaper product in the marketplace.

Reducing Carbon Dioxide emissions in order to combat climate change may be a noble goal, but recent energy efficiency regulations are unlikely to get us there. Rather than overriding consumer choice, and counting this loss to consumers as a benefit, DOE and other agencies should give the American people a more honest assessment of the benefits of their rules.

* Source: Department of Energy, “Technical Support Document: Energy Efficiency Program for Consumer Products and Commercial and Industrial Equipment: Residential Microwave Ovens – Stand-By-Power,” (Table 1.2.1.), May 2013. Calculated using a 3 percent discount rate. Assumes 15 percent of reductions in CO2 emissions are attributed to the United States. This is the midpoint between 7 percent and 23 percent, the range estimated by the Interagency Working Group on Social Cost of Carbon, “Technical Support Document, Social Cost of Carbon for Regulatory Impact Analysis under Executive Order 12866,” February 2010.

Local control over transportation: good in principle but not being practiced

State and local governments know their transportation needs better than Washington D.C. But that doesn’t mean that state and local governments are necessarily more efficient or less prone to public choice problems when it comes to funding projects, and some of that is due to the intertwined funding streams that make up a transportation budget.

Emily Goff at The Heritage Foundation finds two such examples in the recent transportation bills passed in Virginia and Maryland.

Both Virginia Governor Bob McDonnell and Maryland Governor Martin O’Malley proposed raising taxes to fund new transit projects.  In Virginia, the state will eliminate the gas tax and replace it with an increase in the sales tax.  This is a move away from a user-based tax to a more general source of taxation, severing the connection between those who use the roads and those who pay.  The gas tax is related to road use; the sales tax is barely related.  There is a much greater chance of political diversion of sales tax revenues to subsidized transit projects: trolleys, trains, and bike paths, rather than road improvements.

Maryland reduces the gas tax by five cents to 18.5 cents per gallon and imposes a new wholesale tax on motor fuels.

How’s the money being spent?  In Virginia, 42 percent of the new sales tax revenues will go to mass transit, with the rest going to highway maintenance.  As Goff notes, this means lower-income southwestern Virginians will subsidize transit for affluent northern Virginians every time they make a nonfood purchase.

As an example, consider Arlington’s $1 million bus stop.  Arlingtonians chipped in $200,000 and the rest came from the Virginia Department of Transportation (VDOT).  With a move to the sales tax, we’ll likely see more of this.  And indeed, according to Arlington Now, there’s a plan for 24 more bus stops to complement the proposed Columbia Pike streetcar, a light rail project that is the subject of a lively local debate.

Revenue diversions to big-ticket transit projects are also incentivized by states trying to come up with enough money to secure federal grants for Metrorail extensions (Virginia’s Silver Line to Dulles Airport and Maryland’s Purple Line to New Carrollton).

Truly modernizing and improving roads and mass transit could be better achieved by following a few principles.

  • First, phase out federal transit grants, which encourage states to pursue politically influenced and costly projects that don’t always address commuters’ needs.  (See the rapid bus versus light rail debate.)
  • Secondly, Virginia and Maryland should move their revenue systems back toward user fees for road improvements.  This is increasingly possible with technology such as a vehicle miles traveled (VMT) tax, which the GAO finds is “more equitable and efficient” than the gas tax.
  • And lastly, improve transit funding. One way this can be done is through increasing farebox recovery rates. The idea is to get transit fares in line with the rest of the world.
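On the second point, a back-of-the-envelope comparison (all figures hypothetical) shows why gas tax revenue per driver erodes as fleet fuel economy improves, while a VMT tracks road use directly:

```python
# Hypothetical numbers: annual revenue per driver under a gas tax vs a VMT
# as fleet fuel economy improves. Road wear scales with miles, not gallons.
GAS_TAX_PER_GALLON = 0.175   # dollars per gallon
VMT_RATE_PER_MILE = 0.01     # dollars per mile
ANNUAL_MILES = 12000

for mpg in (20, 30, 40):
    gas_revenue = ANNUAL_MILES / mpg * GAS_TAX_PER_GALLON
    vmt_revenue = ANNUAL_MILES * VMT_RATE_PER_MILE
    print(f"{mpg} mpg: gas tax ${gas_revenue:.2f}, VMT ${vmt_revenue:.2f}")
```

Doubling fuel economy halves gas tax receipts even though the driver imposes the same wear on the roads; the VMT figure is unchanged.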

Interestingly, Paris, Madrid, and Tokyo have built rail systems at a fraction of the cost of heavily subsidized projects in New York, Boston, and San Francisco.  Stephen Smith, writing at Bloomberg, highlights that a big part of the problem in the U.S. is antiquated procurement laws that limit bidders on transit projects and push up costs.  These legal restrictions amount to real money.  French rail operator SNCF estimated it could cut $30 billion off of the proposed $68 billion California high-speed rail project.  California rejected the offer and is sticking with the pricier lead contractor.


Virginia’s transportation plan under the microscope

Last week Virginia Governor Bob McDonnell shared his plan to address the state’s transportation needs.  The big news is that the Governor wants to eliminate Virginia’s gas tax of 17.5 cents per gallon.  This revenue would be replaced with an increase in the state’s sales tax from 5 percent to 5.8 percent.  That, along with a transfer of $812 million from the general fund, a $15 increase in the car registration fee, a $100 fee on alternative-fuel vehicles, and the promise of federal revenues should Congress pass legislation to tax online sales, brings the total projected revenue for Virginia’s transportation to $3.1 billion.

As the Tax Foundation points out, more than half of this relies on a transfer from the state’s general fund, and on Congressional legislation that has not yet passed.

Virginia plans to spend $4.9 billion on transportation.  As currently structured, the gas tax brings in only $961 million.  There are a few reasons why.  First, Virginia hasn’t indexed the gas tax to inflation since it was set in 1986, so it has lost more than half its purchasing power: today’s 17.5 cents buys what roughly 8 cents bought then.  Secondly, while there are more drivers in Virginia, cars are also more fuel efficient, and more of those cars (91,000) run on alternative fuels.  In 2013, the gas tax simply isn’t bringing in the revenue it once did.
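The inflation erosion is easy to check with a quick CPI adjustment.  The CPI values below are approximate, for illustration only:

```python
# Approximate CPI-U index values (illustrative): ~109.6 in 1986, ~230 in
# early 2013. The tax has been fixed in nominal terms since 1986.
CPI_1986, CPI_2013 = 109.6, 230.0
tax_cents = 17.5  # cents per gallon, unchanged since 1986

# Purchasing power of today's 17.5 cents, expressed in 1986 cents:
real_value = tax_cents * CPI_1986 / CPI_2013
print(f"{real_value:.1f} cents")  # prints "8.3 cents"
```

Roughly 8 cents in 1986 terms, consistent with the figure above.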

But that doesn’t mean that switching from a user-based tax to a general tax isn’t problematic.  Two concerns are transparency and fairness.  Switching from (an imperfect) user-based fee to a broader tax breaks the link between those who use the roads and those who pay, short-circuiting an important feedback mechanism.  And moving from a gas tax to a sales tax leads to cross-subsidization: those who don’t drive pay for others’ road usage.

The proposal has received a fair amount of criticism, and other approaches have been suggested.  Randal O’Toole at Cato likes the idea of a vehicle miles traveled (VMT) tax, which would track the number of miles driven via an EZ-Pass-type technology and bill the user directly for road usage.  It would probably take at least a decade to fully implement, and some have strong libertarian objections.  Joseph Henchman at the Tax Foundation proposes a mix of indexing the gas tax to inflation, increased tolls, and a local transportation sales tax on NOVA drivers.

The plan opens up Virginia’s 2013 legislative session and is sure to receive a fair amount of discussion among legislators.

Don’t make us drive these cattle over the cliff

First a brief note: I am now blogging at the American Spectator on economic issues. I invite you to visit the inaugural posts. Last week, I covered the fiscal cliff. Like many others, I also marvel at the audacity of the pork contained therein.

Lately the headlines have given me a flashback to 1990 and those first undergrad economics classes. And not just econ but also U.S. history and the American experience with price floors and ceilings. In this post I’ll discuss the floors.

As I note at The Spectacle, one of the matters settled by the American Taxpayer Relief Act is the extension of dairy price supports from the 2008 farm bill.  Now, Congress won’t be “forced to charge $8 a gallon for milk.”  To me, nothing screams government price-fixing more than this threat, aimed to scare small children and the parents who buy their food.

Chris Edwards explains how America’s dairy subsidy programs work in Milk Madness.  Since the 1930s, the federal government has set a minimum price for dairy products.  A misguided idea from the start, the point of the program was to ensure that dairy farmers weren’t hurt by falling prices during the Great Depression.  When market prices fall below the government-set price, the government agrees to buy up any excess butter, dry milk, or cheese that is produced.  Thus, dairy prices are kept artificially high, which stimulates overproduction.
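The mechanics of a price floor can be sketched with textbook linear supply and demand curves (all numbers hypothetical): at a support price above the market-clearing price, quantity supplied exceeds quantity demanded, and the government buys the difference.

```python
# Textbook linear curves with hypothetical numbers (price in $ per unit).
def demand(p):  # quantity demanded falls as price rises
    return 1000 - 40 * p

def supply(p):  # quantity supplied rises as price rises
    return 200 + 40 * p

market_price = 10.0   # where demand(p) == supply(p): 600 units clear
support_price = 12.0  # government-set minimum, above the market price

# At the floor, producers bring more to market than buyers will take;
# the government purchases the surplus to hold the price up.
surplus = supply(support_price) - demand(support_price)
print(surplus)  # 160.0 units bought by the government
```

Those purchased surplus units are the butter, dry milk, and cheese stockpiles the program generates.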

According to Edwards’ study, the OECD found that U.S. dairy policies create a 26 percent “implicit tax” on milk, a regressive tax that hits low-income families in particular.  Taxpayers pay to keep food prices artificially high, generate waste, and prevent local farmers from entering a cartelized market.

Now for the cows. The recession revealed that the nation has an oversupply of them. The New York Times reports that rapid expansion in the U.S. dairy market driven by increased global demand for milk products came to a sudden halt in 2008. Farmers were left with cows that needed to be milked regardless of the slump in world prices. The excess dry milk was then sold to the government but only at a price that was set above what the market demanded.

In other words, in a world without price supports, farmers could have sold the milk for less at market, and consumers would have enjoyed cheaper butter, cheese, and baby formula.  Instead, the government stepped in and bought $91 million in milk powder so that farmers could get an above-market price and keep supporting an excess of milk cows.  Rather than downsizing the dairy herd based on market signals (and selling part of it to other dairy farmers, or to the butcher), farmers take the subsidy and keep one too many cows pumping out more milk than is demanded.

It turns out auctioning off a herd is not something all farmers are eager to do.  Some may look for additional government assistance to keep their cattle fed in spite of dropping prices, increased feed costs, and bad weather.  To be sure, eliminating farm subsidies would produce a temporary shock (a painful adjustment for farmers and, for some products, sticker shock for consumers), but in the long run, as markets adjust, everyone benefits.

New Zealand did it.  Thirty years later, costs are lower for consumers, farmers are thriving, environmental practices have improved, and organic farming is growing.  While politicians and the farm lobby may continue pushing for inefficient agricultural policy in spite of the nation’s fiscal path, as Robert Samuelson at Real Clear Politics writes, “If we can’t kill farm subsidies, what can we kill?”

 

Using incentives to save the prairie dogs

Growing up in Western Colorado, I was never aware that prairie dog populations were threatened. Frankly, I always considered them to be about one step up from rats. In fact though, the Utah prairie dog is an endangered species, causing challenges for developers in Iron County.

Until recently, landowners in Utah had to obtain permission to build on land that is considered prairie dog habitat in accordance with the Endangered Species Act. They would have to relocate the animals to a suitable new habitat, after which they would typically be allotted only a 60-day window in which to begin building, resulting in uncertain property rights and incentives to rush development. Now, developers can instead purchase Habitat Credits, or the right to build on current prairie dog habitats, from farmers and ranchers who own land suitable for prairie dogs.

The Associated Press reports:

The program works like a bank, allowing private landowners to sell “credits” if they own prairie dog habitat they’re willing to protect. Buyers who purchase those credits gain permission to develop other habitat areas on their own timeframes.

The number of credits up for purchase and the cost of the credits will vary depending on the population of prairie dogs on the land.

The arrangement would fulfill the Endangered Species Act requirement that bars destruction of a listed species’ habitat without developing new habitat.

Environmentalists are hopeful that this program will boost prairie dog populations enough to get them off of the endangered species list, and the policy change has made life easier for developers. Furthermore, this change is good for residents of Iron County, as reducing obstacles to development will result in an improved built environment.

This seemingly simple policy change illustrates the power of property rights. Assigning them in a way to better align incentives benefits everyone by allowing for improvements in land allocation.
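To make the mechanics of the credit bank concrete, here is a simplified, hypothetical sketch in Python of how such a market might clear. The class and method names, the one-credit-per-prairie-dog rule, and all prices are illustrative assumptions, not the actual Iron County program rules, which the AP notes vary with the population of prairie dogs on the land.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    """A landowner's offer of habitat credits for sale (hypothetical)."""
    landowner: str
    credits: int            # assume one credit per resident prairie dog
    price_per_credit: float

class HabitatBank:
    """Toy model of a habitat-credit bank matching sellers with developers."""

    def __init__(self):
        self.listings = []

    def list_credits(self, landowner, prairie_dog_count, price_per_credit):
        # Sellers who protect habitat list credits scaled by population.
        self.listings.append(Listing(landowner, prairie_dog_count, price_per_credit))

    def buy_credits(self, needed):
        """Buy the cheapest credits first; return total cost, or None
        if the bank cannot cover the request."""
        cost = 0.0
        remaining = needed
        for lot in sorted(self.listings, key=lambda l: l.price_per_credit):
            take = min(lot.credits, remaining)
            lot.credits -= take
            cost += take * lot.price_per_credit
            remaining -= take
            if remaining == 0:
                return cost
        return None  # not enough protected habitat on offer

bank = HabitatBank()
bank.list_credits("Ranch A", prairie_dog_count=30, price_per_credit=100.0)
bank.list_credits("Farm B", prairie_dog_count=50, price_per_credit=80.0)
print(bank.buy_credits(60))  # 50 credits @ $80 + 10 @ $100 = 5000.0
```

The point of the sketch is the incentive structure: landowners with thriving prairie dog populations have more credits to sell, so protecting habitat becomes an income stream rather than a regulatory burden.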

BP Now Breaking Windows?

The Wall Street Journal reminds us of the musical dénouement from The Life of Brian.

[In the] 1979 comedy from the Monty Python team, the hero ends up whistling the song “Always Look on the Bright Side of Life” despite having his hands nailed on each side of a cross.

It seems British Petroleum took that lesson to heart. The company found the bright side of the oil spill, and it sounds a lot like Bastiat’s broken window fallacy. Planet BP, an online internal publication, sent “reporters” to the Gulf to interview locals who still love the big sunflower. From the WSJ:

“Much of the region’s [nonfishing boat] businesses — particularly the hotels — have been prospering because so many people have come here from BP and other oil emergency response teams,” another report says. Indeed, one tourist official in a local town makes it clear that “BP has always been a very great partner of ours here…We have always valued the business that BP sent us.”

Milton Friedman called this the most persistent economic fallacy in history. Moving money around isn’t the same as growing the economy. The broken window fallacy celebrates the spending that follows senseless destruction instead of focusing on growing total productivity. It’s like running in place but believing you’ll win the race.

Building from the Top Down

Senator Chris Dodd is sponsoring a bill to promote development of livable cities.  The Livable Communities Act is designed to coordinate federal policies on housing, transportation, energy, and the environment.  It would provide grants to cities to build in alignment with federal urban policy.

As Reuters explains:

Dodd described the bill as combining housing development, public transit, and infrastructure and land-use planning into one comprehensive approach to city development. Currently, many of those decisions are made separately from one another, and Dodd and others said the partitions have led to urban sprawl.

However, Dodd’s explanation of the causes of urban sprawl ignores the density restrictions and the federal subsidies for highways and mortgages that have pushed and pulled many cities into their current states of sprawl.  His policy prescription does not address the types of challenges that cities pose.

Urban development is at its essence an economic problem rather than an engineering problem.  Even a “coordinated” federal policy will not necessarily help urban development, which must be a ground-up process.  As Jane Jacobs explains, top-down funding for urban development is often “cataclysmic” because vital development must come from entrepreneurs rather than politicians and must be supported by local residents.  Without an understanding of the hyper-local issues that affect block-by-block development, federal funding for urban development is likely to destroy blossoming vitality by diverting resources from their most valued uses.

Previous federal urban policies, such as Community Development Block Grants and Federal Housing Administration loans, have led to systemic problems in American cities, such as concentrated poverty and urban sprawl.  The Livable Communities Act is likely to have similarly unforeseen consequences.

Bob Nelson on Utah’s Land Management

Neighborhood Effects blogger Bob Nelson had an op-ed in Friday’s Salt Lake Tribune arguing that Utah should offer to take control of federal lands in the state:

The largest area of Utah public land, 22.8 million acres, is managed by the Bureau of Land Management in the Interior Department. Another 8.1 million acres is in the national forest system managed by the U.S. Forest Service in the Agriculture Department. On these lands, the most important decisions concern matters such as the number of cows that will be allowed to graze, the levels of timber harvesting, the leasing of land for oil and gas drilling, the prevention and fighting of forest fires and the areas available to off-road recreational vehicles.

Except in Utah and other parts of the American West, where the federal government still holds about half the total land area, such matters are the responsibility of private land owners and of state and local governments. It is time to end this antiquated system which has failed the test of time. Despite the possession of hundreds of millions of acres of land, and vast oil and gas, coal and other valuable mineral resources, the federal lands proved to be a money-losing proposition.

Read the whole thing here.

In 2008, Bob wrote about how local control of federal lands in California can lead to more effective fire management. And, of course, Bob is the author of one of Neighborhood Effects’ all-time most-read posts, wherein he argued that the US Senate is obsolete.

Want to Help the Earth? Move Back to Metropolis

Ed Glaeser writes in City Journal on his latest study, which suggests that cities emit less carbon than suburbs. (The full NBER paper with Matthew Kahn can be found here.) The five lowest-emission cities are in California.

This sounds counterintuitive at first blush. But, Glaeser suggests, people who live in the suburbs drive more and consume more housing. The policy implication is to make cities more affordable by loosening building restrictions:

If climate change is the major environmental challenge that we face, the state should actively encourage new construction, rather than push it toward other areas. True, increasing development in California might increase per-household carbon emissions within the state if the new development, following the current model, took place on the extreme edges of urban areas. A better path would be to ease restrictions in the urban cores of San Francisco, San Jose, Los Angeles, and San Diego. More building there would reduce average commute lengths and improve per-capita emissions. Higher densities could also justify more investment in new, low-emissions energy plants.

Similarly, limiting the height or growth of New York City skyscrapers incurs environmental costs. Building more apartments in Gotham will not only make the city more affordable; it will also reduce global warming.

Here’s Glaeser’s write-up at the New York Times Economix blog. Here’s Tyler Cowen on a previous, related study.