You’ve Come a Long Way, Baby: Evaluations at USAID

It has been just five years since USAID established its landmark evaluation policy, but it’s a whole new day for accountability and learning at the agency.

At a March 30 event jointly sponsored by MFAN and the Brookings Institution, USAID Administrator Gayle Smith rattled off some of the key statistics from a brand-new report on evaluation practice at USAID:

  • The average annual number of USAID evaluations rose from about 130 in the five years prior to the policy to about 230 in the five years since.
  • 59% of Country Development Cooperation Strategies (CDCS) reference evaluations or cite them as evidence.
  • 71% of evaluations helped USAID to design or modify a project or activity.

As Smith explained, the reintroduction of evaluations at USAID has produced a culture shift that is a “game-changer.” While there are many retired Foreign Service Officers who remember the days when evaluations were conducted routinely and considered seriously at USAID (with over 400 evaluations per year in the 1980s and early 1990s), that ethos declined when USAID came under attack in the mid-1990s and all but disappeared with the elimination of the policy bureau in 2006. Evaluations became something of a “lost science” in U.S. foreign assistance until the Presidential Policy Directive on Global Development opened the door to, and then-Administrator Raj Shah ushered in, an era of data and evidence for decision-making.

Key to USAID’s transformation is not just the quantity of evaluations conducted, but their quality and use. While both are clearly improving, the picture here is murkier: a recent study of evaluation utilization found that only 8 of the 609 discrete evaluations conducted from 2011 to 2014 were impact evaluations, and that for the most part evaluations were not planned in coordination with, or even shared with, local partners.

To its credit, USAID has already begun identifying some of the obstacles to greater evaluation quality and use, and creating new procedures and capacities to overcome them.

  • Because evaluations were often being commissioned late in the program cycle, experts were unable to measure baselines, identify control groups, or use the findings to make mid-course corrections. In response, USAID has issued guidance on planning evaluations from the very beginning of a project or activity. Missions will now include learning plans as part of the CDCS process, and bureaus have developed evaluation action plans.
  • Because there was limited internal capacity to commission and oversee high-quality evaluations, USAID has trained more than 1,600 staff in evaluation since 2011. The agency has designated points of contact for monitoring and evaluation (M&E) in each bureau and independent office, established communities of practice, begun recruiting M&E fellows, and initiated partnerships with academic experts and institutions.
  • Because of time constraints and information overload, USAID staff were not always aware of relevant evaluation findings, especially those stemming from other programs, sectors, countries or regions. To ensure that knowledge is widely shared, USAID is beginning to synthesize evidence from evaluations to provide overviews of lessons learned and draw “gap maps” showing where more research is required. The agency is also developing a set of tools and suggested processes to help disseminate findings, improve stakeholder consultations, and track implementation of evaluation recommendations.
  • Because most evaluations have been conducted at the activity level, from which it may be difficult to extract broader lessons and generalizations, USAID will encourage project-level evaluations to examine how and whether individual activities are leading to higher-level results.

Taken together, these advances firmly establish USAID, alongside the Millennium Challenge Corporation, as a leader in evidence-based program design.  It is now incumbent upon the State Department to take a hard look at the quality and use of its own evaluations for accountability and learning.  Further behind is the Defense Department, which spends $8-10 billion per year on direct assistance to more than 180 countries, but has no evaluation policy at all.

Evaluation may not be a silver bullet for achieving development outcomes, as USAID recognizes, but it certainly is an important tool for enhancing the effectiveness of foreign assistance and ensuring responsible use of taxpayer resources.

***

This is a guest post from Diana Ohlbaum, Co-Chair of MFAN’s Accountability Working Group. This post is part of our ACCOUNTdown Dialogue Series and is the first piece on the topic of evaluation.
