A good evaluation will tell you whether a program achieved its intended results.
A better evaluation will tell you why.
But the best evaluation will tell you whether those were the right results — for the people they were intended to help.
As the global development community has begun to get serious about conducting evaluations, it has tended to focus so heavily on accountability to funders that it has often overlooked accountability to beneficiaries. All too often, this means findings and results that may be technically accurate, but functionally invalid.
For instance, as Carlisle Levine and Laia Griñó explain in a thoughtful new paper for InterAction, “when practitioners and evaluators listen, they might hear that, yes, shelters were provided, but the materials used were inappropriate for the climate, or the placement of the shelters reinforced community divisions, rather than helping to bridge them.” If you don’t ask the right questions, you won’t get the right answers.
That’s why locally owned programs demand locally owned evaluations. Program participants must be involved from the very start in defining what counts as success, as well as in monitoring program implementation and evaluating results. Yet until now, calls for greater country ownership have focused almost entirely on program design and implementation, while overlooking monitoring and evaluation. At USAID, for example, there has been little integration of the agency’s robust evaluation policy with its Local Systems Framework, which embraces a vision of development that is “locally owned, locally led, and locally sustained.”
To the extent that there has been local participation in evaluations, it is generally limited to using local communities as sources of data once a project is complete, or including local staff on evaluation teams. Only rarely are the evaluations led and managed by local evaluation professionals. But most importantly, partner country government institutions, civil society organizations, and beneficiary communities are given little role in designing the questions that will be asked or the indicators by which success will be measured. As a result, unintended impacts, both good and bad, are often missed.
One exception is an approach being piloted by USAID’s Office of Transition Initiatives (OTI), whose quick-response, “just do it” culture was once quite resistant to any kind of evaluation. Now, rather than simply hiring an outside evaluator at the end of a project – one who may spend a couple of weeks in the country and know little about the program or its context – OTI is working with local evaluators from the very beginning to monitor results throughout the program cycle.
Local participation in evaluations not only ensures that outcomes are measured properly; it can also improve those outcomes. The InterAction paper cites a study comparing the use of standard “expert-developed” scorecards to community-developed scorecards to monitor schools in Uganda. The researchers found that introducing a participatory monitoring process, in and of itself, helped improve pupil test scores and reduce pupil and teacher absenteeism because community members felt a greater stake in holding schools accountable.
It’s time for the international development community to move beyond the concept of local “buy-in” and embrace the principle of local authorship. By giving participants a meaningful role in evaluation – and, as the InterAction report reminds us, by ensuring that those evaluation findings are used to inform decision-making – we can make our assistance more effective, and make development more sustainable. Doing so will require a few adjustments to the program cycle, and InterAction helpfully provides guidelines on how to put these principles into practice.
This is a guest post from MFAN Accountability Working Group Co-Chair Diana Ohlbaum.