The Department of Housing and Urban Development, or HUD, has been in the news lately due to its policy proposal to ban smoking in public housing. HUD usually flies under the radar as far as federal agencies are concerned, so many people are probably hearing about it for the first time and are unsure about what it does.
According to the HUD website, the agency’s mission “…is to create strong, sustainable, inclusive communities and quality affordable homes for all.”
HUD carries out its mission through numerous programs; over 100 programs and sub-programs are listed on the HUD website. Running that many programs is not cheap. The graph below depicts outlays from 1965 – 2014, in inflation-adjusted dollars, for HUD and, for reference, three other federal agencies: the Dept. of Energy (DOE), the Dept. of Justice (DOJ), and the Environmental Protection Agency (EPA).
Of these four agencies, HUD had the second-largest outlays in 2014 and for many years was consistently about as large as the Department of Energy. It was larger than both the DOJ and the EPA over this period, yet those two agencies are much better known than HUD: Google searches for the DOJ, EPA, and HUD return 566 million, 58.3 million, and 25.7 million results respectively.
For such a large agency, HUD has managed to stay relatively anonymous outside of policy circles. This lack of public scrutiny has allowed HUD to distribute billions of dollars through its numerous programs with little examination of their effectiveness. To be fair, HUD does make many reports about its programs available, but these reports are often just accounts of how much money was spent and what it was spent on rather than evaluations of a program’s effectiveness.
As an example, in 2009 the Partnership for Sustainable Communities program was started. It is an interagency program run by HUD, the EPA, and the Dept. of Transportation. The website for the program provides a collection of case studies about the various projects the program has supported. The case studies for the Euclid corridor project in Cleveland, the South Lake Union neighborhood in Seattle, and the Central Corridor Light Rail project in Minneapolis are basically descriptions of the projects themselves and of all the federal and state money that was spent; the others contain similar content. Other than a few anecdotal data points, the evidence for the success of the projects consists of quotes and assertions. In the summary of the Seattle project, for example, the last line is: “Indeed, reflecting on early skepticism about the city’s initial investments in SLU, in 2011 a prominent local journalist concluded, ‘It’s hard not to revisit those debates…and acknowledge that the investment has paid off.’” Yet the report contains no benchmarking that could be used to compare the area before and after redevelopment along any metric of interest, such as employment, median wage, resident satisfaction, or tax revenue.
The lack of rigorous program analysis is not unique to the Sustainable Communities program. The Community Development Block Grant Program (CDBG) is probably the best-known HUD program. It distributes grants to municipalities and states that can be used on a variety of projects that benefit low- and moderate-income households. The program was started in 1975, yet relatively few studies have been done to measure its efficacy. The lack of informative evaluation of CDBG projects has even been recognized by HUD officials. Raphael Bostic, the Assistant Secretary of the Office of Policy Development and Research for HUD from 2009 – 2012, has stated, “For a program with the longevity of the CDBG, remarkably few evaluations have been conducted, so relatively little is known about what works” (Bostic, 2014). Other government entities have also taken notice. During the Bush administration (2001 – 08), the Office of Management and Budget created the Program Assessment Rating Tool (PART). Several HUD programs – including CDBG – were rated “ineffective” or “moderately effective”. The assessments noted that “CDBG is unable to demonstrate its effectiveness” in developing viable urban communities and that the program’s performance measures “have a weak connection to the program purpose and do not focus on outcomes”.
Two related reasons for the limited evaluation of CDBG and other HUD programs are the lack of data and the high cost of obtaining what data are available. For example, Brooks and Sinitsyn (2014) had to submit a Freedom of Information Act request to obtain the data necessary for their study. Furthermore, after obtaining the data, significant time and effort were needed to manipulate the data into a usable format since they “…received data in multiple different tables that required linking with little documentation” (p. 154).
HUD has significant effects on state and local policy even though it largely works behind the scenes. Regional economic and transportation plans are frequently funded by HUD grants, and municipal planning agencies allocate scarce resources to pursuing additional grants that can be used for a variety of purposes. For the agencies that win a grant, the award likely exceeds the cost of obtaining it. For the others, however, the resources spent pursuing the grant are largely wasted, since they could have been used to advance the agency’s core mission. The larger the grant, the more applicants there will be, and the more resources will be diverted from core activities to grant-seeking. Pursuing government grants in this way is a form of rent-seeking, and it wastes resources.
Like other federal agencies, HUD needs to do a better job of evaluating its abundant programs. Better yet, it should make more data available to the public so that independent researchers can conduct and replicate studies that measure the net benefits of its programs. Currently, much of the available data is only weakly related to the relevant outcomes and is often outdated or missing.
HUD also needs to specify what results it expects from the various grants it awards. Effective program evaluation starts with specifying measurable goals for each program; without this first step, there is no way to tell whether a program is succeeding. Many of the goals of HUD programs are broad qualitative statements, such as “enhance economic competitiveness”, that are difficult to measure. This allows grant recipients and HUD to declare every program a success, since ex post they can use whatever measure best matches their desired result. Implementing measurable goals for all of its programs would help HUD identify ineffective programs and allow it to reallocate its scarce resources to the programs that are working.