Results Indicators: Costs vs. Benefits?
We had a fascinating conversation a few weeks ago with a medical doctor who runs an HIV clinic in Tanzania. He offered an anecdote that powerfully illustrates an important issue we’ve begun to uncover through our Results Data Initiative: Does the development community understand the total cost of collecting data for performance indicators? And does the value of those indicators justify that cost?
Here’s the story:
Our friend, as we mentioned, founded and operates an HIV clinic that serves hundreds of patients. Like many similar clinics, his receives funding from multiple donors, and as a result he has to fulfill a number of indicator reporting requirements. This often means his staff must fill out multiple (yet similar) reporting templates – nothing unusual there, of course, but it bears mentioning.
To give an example of these donor requirements, the doctor spoke at some length about the HIV prevalence indicators he must report, and about how what he actually needs is HIV incidence data to gauge his clinic’s effectiveness.
Again, not unusual (and no, we are not wading into the HIV prevalence versus incidence debate) – but the doctor’s point is clear: in addition to collecting and reporting the data donors require (which keeps the clinic funded), his staff must also collect and record separate data to assess the clinic’s effectiveness.
What this all amounts to, of course, is a lot of time spent on data collection and reporting against different metrics for different stakeholders. But what’s the opportunity cost – who spends the time collecting and reporting these data? At this facility, the answer is antenatal care nurses. In the doctor’s mind, the opportunity cost was stark: every minute spent on a reporting template was a minute an expectant mother didn’t receive adequate care.
Now, there’s a lot we could debate here about data collection processes, or tools, or when and how data clerks should be employed – but that’s not the point. Instead, we suggest that a growing body of qualitative evidence indicates that the costs of collecting and reporting the data behind high-level performance indicators (for various agencies) can be quite high – perhaps higher than the M&E community typically realizes. These opportunity costs were echoed across countries and sectors; discussions with agricultural staff in Ghana, for example, suggest that many extension workers spend a quarter or more of their time collecting and reporting data.
So the questions arise: What are these results indicators used for? Do the benefits of these uses outweigh the significant costs of data collection? Or are we sometimes spending the time of front-line service delivery workers without a well-formed plan for using the data they provide?
Moreover, who is using the data? Over and over, respondents at the local level told us they are responsible for “reporting data up” but receive no feedback on what the data mean, or how they could or should influence their work. Do we see front-line workers as collectors of information for use in agency headquarters and capital cities, or as primary data users who make critical decisions every day? Can we find ways to make the information front-line workers collect more useful to their own work?
Our full country reports, to be shared in the next few months, will elaborate on these ideas. But the question should resonate with anyone who creates or uses a results framework, M&E plan, or set of indicators: does the intended use of this indicator justify the cost of collecting the data? Every time one of our agencies or organizations says that we “need more M&E” because “we need to learn what works,” we should ask ourselves who “we” are, and to whom we are accountable. Are we helping front-line workers deliver crucial services, or are we feeding national and international demand for data?
Image Credit: ICAP, CC BY-NC-ND 2.0