Want to Know What We Got for Our Money? MCC’s Telling Us.

In this space back in February, I suggested a list of ways for the Millennium Challenge Corporation (MCC) to advance its thought leadership and boost its development effectiveness.  One of those ideas was to conduct “after-action reviews” to provide an honest look at what did (or didn’t) happen during a compact, why (or why not), and how to improve next time.

True to form, less than a month later the MCC responded by publishing its first “compact closed” page, in this case for Mozambique.  The page has a lot to recommend it: a concise summary of the amounts promised and spent; the dates of compact signature, entry into force, and completion; a breakdown by project area; the total number of beneficiaries; the changes made during the term of the compact; the key compact indicators, targets, and results for each project; the policy conditions; and links to all the key documents, including the constraints analysis, evaluations, and scorecards.

This is an extremely useful way to look at the big picture of MCC’s results, particularly for Congressional staff who would otherwise have to wade through years of notifications and justifications to understand how the program changed over time and what they got for their money.  It answers the important question of “What did this compact achieve?”, which is not ordinarily addressed by evaluations, Inspector General investigations, or Government Accountability Office reports.  My only quibble with the page is that it reports results against the adjusted targets rather than the initial goals, which gives an unduly rosy picture of how the compact was implemented.  The Center for Global Development’s Sarah Rose also helpfully suggests that the page include information on policy impact.

I hope, as well, that this effort was not merely an exercise in packaging information for the public, but also included a detailed, substantive debrief from those who worked on the program, with an eye to identifying best practices and lessons learned.  Although this kind of reflection should happen continuously throughout a compact’s duration, compact closure is an important opportunity to ask questions like:  What do you know now that you wish you had known at the start?  What were the most burdensome processes and requirements, and how did you manage them?  If you were doing it all over again, what would you do differently?

Such a review would differ from an independent evaluation in that it would draw directly on the perceptions and experiences of the staff and local partners who were most closely involved, and would not necessarily be designed for public consumption.  This type of learning is essential for an organization whose personnel are hired for their specific country knowledge, subject-matter expertise, and language skills, and who often leave when the compact is complete rather than assuming a new post within the MCC.

Some of these lessons may be too sensitive to be trotted out in public, or too context-specific to be of broader value.  But some of the feedback, from MCC’s local staff and partners as well as its direct hires, could be summarized in a way that is helpful to other organizations, inside and outside government, working in the same countries or on similar projects.

Although it may not be obvious to the user, the “compact closed” page required an enormous amount of effort from the MCC, with dozens of people across multiple departments involved in developing content, writing code, creating charts, designing new layouts and styles, and extracting data.  It’s the template for forthcoming pages on other completed compacts, which will be a useful resource for the entire development community.  From my perspective, this is a noteworthy step forward on transparency as well as a valuable tool for assessing overall results.

***

This is a guest post from MFAN Accountability Working Group Co-Chair Diana Ohlbaum.