Are We Really Learning from Evaluations?

Last week our Foreign Aid Effectiveness team at The Lugar Center (TLC), in partnership with the Modernizing Foreign Assistance Network (MFAN), released a new report on the role of evaluation and learning in U.S. foreign assistance programs. Why did we take on this project, why now, and what’s next?

First, why this project? At TLC we are strong supporters of effective U.S. foreign assistance programs. These programs include humanitarian assistance for people who have lost access to basic necessities like food, shelter, and medicine as a result of natural disasters or man-made conflict and crises. The U.S. has always been a generous nation during crises, and our citizens have viewed a U.S. response as a moral imperative. We also believe that other types of foreign aid, such as education, governance, health care (AIDS, TB, and malaria treatment, for example), global food security, clean water and sanitation, and economic development programs, help people work their way out of hunger and poverty and build stability and sustainable economic growth. Over time, this changes our relationship with these countries and their people from one of aid recipient to one of economic and national security partner. As Senator Lugar notes in much of his writing and speaking on this topic, U.S. foreign assistance programs, done well, help to build a safer, more stable, and prosperous world.

Foreign assistance is challenging, though, with work carried out under difficult and sometimes dangerous conditions, and its results aren't always perfect. That's why at TLC we endorse a number of reforms and policies that would make U.S. taxpayer dollars more effective. Among these reforms are 1) increasing accountability through transparency, monitoring, evaluation, and learning, and 2) supporting country ownership to improve outcomes and make our investments sustainable. Coincidentally, these are also MFAN's two reform pillars. We believe that by fully implementing these reforms, we can fix programs that are struggling to produce their intended outcomes, improve those that are working, and better design new programs.

A lot of this kind of thinking was put into practice under President George W. Bush when he worked with Congress to establish the President's Emergency Plan for AIDS Relief (PEPFAR) and the Millennium Challenge Corporation (MCC), a new agency focused on poverty reduction through economic growth. Both programs use data and results as the framework for their work. President Obama then built on this concept in the first-ever Presidential Policy Directive on Global Development, which included two important points on investing in evaluation policy and applying it to inform budget and policy decisions. He called on agencies to develop "rigorous procedures to evaluate the impact of policies and programs, report on results and reallocate resources accordingly, incorporate relevant evidence and analysis from other institutions, and inform the policy and budget process… and to undertake a more substantial investment of resources in monitoring and evaluation…"

With this background, and with enough time having passed for these programs and policies to be put in place and ramp up, we thought we should check in on how the new policies were going and whether they were improving the effectiveness of the programs themselves. So that's the why.

As for why now: with the conclusion of the Obama Administration, the beginning of the Trump Administration, and a new Congress, we thought the timing was right to conduct our study in 2017. It's a good time to assess how things are going with our foreign aid funds, whether programming is improving based on sound data that is actually being reviewed, and whether the information gleaned is being put into action.

Over the past several months, we examined the status of evaluation and learning for foreign assistance programs at the State Department, the U.S. Agency for International Development (USAID), the MCC, and in the PEPFAR program. To ensure the greatest level of confidentiality, we worked with an outside consultant who gathered input from more than 70 people through surveys and individual interviews. They included current and former agency personnel, NGOs, independent evaluators, and policy practitioners. In addition to these individual responses, we also drew on recent reports conducted by the agencies themselves, by the Government Accountability Office (GAO), and by others to get the most accurate picture of the situation.

Here's what we found. All three agencies now have evaluation policies in place, and, in fact, they have revised them to comply with a new law, the Foreign Aid Transparency and Accountability Act (FATAA), passed by Congress last year, which requires them to do so. Because the PEPFAR program spans multiple agencies, it must follow the evaluation policies of each. We also found that the agencies are taking the time to train their employees on how to design and manage evaluations.

We also found shortcomings in both the quality and the utilization of evaluations. First, although there is significant evidence that the quality of evaluations continues to improve, scientific data remains of fairly low quality at most agencies. This is concerning, since the most basic and most important ingredient of a quality evaluation is data, and poor data quality does not bode well for an evaluation being taken seriously or used to improve programs. Next, when evaluations are complete, they are sometimes extremely difficult to find. Although the agencies' policies all call for making evaluations public, and they are doing so, these documents are often deeply buried on an agency website, and the agencies are doing little, if anything, to call attention to the reports and discuss their successes or failures. Too often, local partners remain excluded from the evaluation process, which continues to call into question the sustainability of our investments.

Regarding utilization (that is, learning), the practice of consulting prior evaluations within the same sector or program before developing a new one is still largely absent in the aid agencies. While the MCC demonstrated leadership in this area during its early days, it no longer appears to treat it as a priority. USAID is hit or miss, depending on the individual leading the effort, and sadly, the problem is most pronounced at the State Department, with several respondents noting that "there is not a culture of evaluation" at State. Without this culture, there is certainly no companion interest in applying lessons learned to improve future outcomes.

So what's next? Our report includes several recommendations for the agencies themselves to improve data quality and the utilization of evaluations. Key to this improvement within the agencies is their own leadership. All still have several vacancies, and I am hopeful that improving the effectiveness of aid programs through monitoring, evaluation, and learning will be a priority as new leaders are appointed. Importantly, our study also calls on Congress and the Office of Management and Budget (OMB) to hold the agencies to high standards as they put guidelines into place and to oversee implementation of the new FATAA law. We expect OMB to release its guidelines next month and are hopeful that they will set a high bar for evaluation and learning.

As we head into 2018, we will continue to advocate for greater accountability for U.S. taxpayer dollars and for local ownership of U.S. foreign aid programs, and we believe this new report will serve as an excellent roadmap toward building a safer, more stable, and prosperous world.

**

This is a guest post from Lori Rowley, Director, Global Food Security and Aid Effectiveness at The Lugar Center. This piece was originally featured on The Lugar Center’s blog.
