Monitoring, Evaluation, and Learning: Grounding Development Programs in Evidence-Based Design
By Susan M. Puska and Carol J. Yee
This is the first in a series of KANAVA blog posts exploring the life cycle of development programs and how each phase in that cycle can be improved through better monitoring, evaluation, and learning.
On June 13, USAID Administrator Ambassador Mark Green announced a new initiative to employ a set of third-party indicators gauging how well the agency supports partner countries and focuses assistance where it is most likely to have impact. The Self-Reliance Metrics include 17 indicators that fall into two broad categories -- commitment and capacity -- and cover areas such as open government, gender equity, health, and education.
The new initiative, which will be rolled out to all USAID Missions, comes as international development practitioners face increasing scrutiny of how they manage taxpayers’ money overseas. At issue is whether the often high cost of implementing projects in developing countries is yielding the kind of impact envisioned by aid supporters. Although those supporters hail from across the political spectrum and include diplomats as well as current and former military leaders, all must rely on the sometimes obscure language of USAID “results frameworks” to make the case for investing in foreign assistance.
The good news is that, although sectors like education or health can vary widely from region to region, universal benchmarks for progress have been established by donor agencies and their partners in host countries. The missing link, however, is ensuring that the data measuring that progress -- or lack thereof -- is also sound, meeting accepted quality standards and enabling evidence-based decision making.
That’s because, at the country level, basic data collection systems sometimes make it hard to get at the true impact of development efforts. Ask a government official how many children in rural areas have access to basic education, for example, and the answer could depend on the accuracy of the country’s last census. In some cases, these population counts are so outdated as to be essentially irrelevant.
Technology can help overcome these obstacles. In country-wide programs dealing with broad swaths of the population, GIS technology, for example, is helping track where and when beneficiaries are reached and feeding that data into open-source databases that can be transitioned to host governments over the course of a development project. Ultimately, however, these systems are only as good as the monitoring and evaluation (M&E) professionals designing and operating them.
That’s where USAID implementing partners like KANAVA can help. Charged with maintaining robust M&E for each of their projects, implementing partners are responsible for ensuring that the host-country staff they hire are fully versed in the language of USAID indicators and other development metrics. That means building the capacity of M&E staff, but it also means encouraging evidence-based decision making from program managers.
Too often, these managers focus on the pro forma contractual requirement to monitor their outputs (e.g., number of people trained) without taking a hard look at whether those outputs are actually improving people's lives. To be effective, program managers must ensure that M&E is always front-and-center, informing their decisions and prompting changes when necessary to keep a project on track.
With our deep expertise in management capacity building, demonstrated through projects on three continents, KANAVA remains committed to preparing the next generation of M&E professionals, elevating the role of this critical function in development programs. As a member of the Advisory Committee on Voluntary Foreign Aid, we are proud to be a part of the conversation being led by Administrator Green.