What Happens When The Regulators Are Biased?

In their influential book, Nudge: Improving Decisions about Health, Wealth, and Happiness, Richard Thaler and Cass Sunstein catalogue the various “biases and blunders” to which we all fall prey. These are common mistakes that behavioral economists and other social scientists have documented over the years.

To take one example, most people suffer from what’s known as “availability bias.” As Thaler and Sunstein explain it (pp. 24-5):

How much should you worry about hurricanes, nuclear power, terrorism, mad cow disease, alligator attacks, or avian flu? And how much care should you take in avoiding risks associated with each? … In answering questions of this kind, most people use what is called the availability heuristic. They assess the likelihood of risks by asking how readily examples come to mind.

The problem is that vivid but rare risks sometimes come to mind more readily than mundane but common ones. As they write:

[V]ivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here a factor of twenty).

To help people avoid these kinds of mistakes, Thaler and Sunstein recommend government policies that rearrange the “choice architecture” to “try to influence people’s behavior in order to make their lives longer, healthier, and better.” (p. 5)

These are interesting ideas. But in my view, the big blind spot in their reasoning is the possibility that regulators themselves suffer from all sorts of biases. In fact, because regulators rarely bear the costs of their own mistakes, their biases may be even more pronounced than the ones from which we laymen suffer.

That’s why I was glad to see this new article in the Journal of Regulatory Economics by James Cooper and William Kovacic. They write:

Behavioral economics (BE) examines the implications for decision-making when actors suffer from biases documented in the psychological literature. This article considers how such biases affect regulatory decisions. The article posits a simple model of a regulator who serves as an agent to a political overseer. The regulator chooses a policy that accounts for the rewards she receives from the political overseer—whose optimal policy is assumed to maximize short-run outputs that garner political support, rather than long-term welfare outcomes—and the weight the regulator puts on the optimal long run policy. Flawed heuristics and myopia are likely to lead regulators to adopt policies closer to the preferences of political overseers than they would otherwise. The incentive structure for regulators is likely to reward those who adopt politically expedient policies, either intentionally (due to a desire to please the political overseer) or accidentally (due to bounded rationality). The article urges that careful thought be given to calls for greater state intervention, especially when those calls seek to correct firm biases. The article proposes measures that focus rewards to regulators on outcomes rather than outputs as a way to help ameliorate regulatory biases.
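
To make the logic of that model concrete, here is a minimal sketch in my own notation; the paper's actual formalization may differ. Let $p_L$ denote the welfare-maximizing long-run policy and $p_O$ the political overseer's preferred, output-maximizing policy. If the regulator puts weight $w \in [0,1]$ on the long-run optimum, her chosen policy is roughly

$$
p^{*} = w\,p_L + (1 - w)\,p_O.
$$

On this reading, flawed heuristics and myopia shrink $w$, pulling $p^{*}$ toward $p_O$, which is precisely the drift toward politically expedient policy that Cooper and Kovacic describe.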