The Results Data Scorecard: A glimpse at the importance and accessibility of results data

August 16, 2016
Katherine Wikrent, Danny Walker
Innovation, Results Data

A key component of the Results Data Initiative (RDI) was the development of the Results Data Scorecard, a mechanism for assessing how development organizations gather and publish results information. Improving how development partners (DPs) track and report results has the potential to reduce duplication of effort and improve programming effectiveness – critical if we are to meet the SDGs.

So how accessible and useful are development partners’ results data? Not very: the most common source of results information is project documentation, often in PDF format. Based on our analysis, however, neither the PDF format nor differences across organizational templates is the major barrier to crosswalking results data – we were able to extract the data using scraping and parsing algorithms. Rather, the key barrier to useful results data is the varying level of data openness and completeness. By improving existing internal reporting templates, development partners can make their information more accessible and comparable.
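To give a flavor of the kind of extraction this involves – this is an illustrative sketch, not RDI's actual pipeline, and the row layout shown is a made-up example of a results-framework line – a few lines of Python can pull indicator, target, and actual values out of text already extracted from a PDF:

```python
import re

# Hypothetical text extracted from a PDF results framework;
# real DP templates vary widely, which is the crosswalking challenge.
page_text = """
Indicator: Children immunized (under 5) | Target: 50000 | Actual: 47250
Indicator: Health workers trained | Target: 1200 | Actual: 1315
"""

# Capture indicator name, target, and actual value from each row.
row_pattern = re.compile(
    r"Indicator:\s*(?P<indicator>.+?)\s*\|\s*"
    r"Target:\s*(?P<target>\d+)\s*\|\s*"
    r"Actual:\s*(?P<actual>\d+)"
)

results = [
    {
        "indicator": m["indicator"],
        "target": int(m["target"]),
        "actual": int(m["actual"]),
    }
    for m in row_pattern.finditer(page_text)
]

for row in results:
    print(f'{row["indicator"]}: {row["actual"]} of {row["target"]}')
```

The hard part in practice is not the parsing itself but that each organization's template calls for a different pattern – which is why template improvements, rather than format changes, are the recommendation.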

And that’s where the scorecard comes in. Our Scorecard – accessible at the bottom of the RDI data visualization page – assesses the quality of DP monitoring systems across five categories, and generates a composite score per organization.


Figure 1: World Bank’s Scorecard

The scores are based on a subset (not a complete sample) of results data sources from each DP. All sub-scores and the composite score are rated on a scale of 0–4, with 4 being the highest possible score; we’ll delve into the specifics of what each category means and the criteria (submetrics) that figure into the calculation in our next post. The goal of the Scorecard is not to name and shame DPs. Rather, we aim to illuminate ways in which existing M&E data, systems, and tools can be improved, to help DPs reach the goals they set for themselves.
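As a rough illustration of how five 0–4 sub-scores could roll up into one composite per organization: the category names and the unweighted-mean aggregation below are assumptions for the sake of example (the actual categories and submetric weighting are covered in the next post), but the shape of the calculation is simple:

```python
# Illustrative only: these category names and the unweighted mean
# are assumptions, not the Scorecard's actual methodology.
SUB_SCORES = {
    "openness": 3.0,
    "completeness": 2.0,
    "comparability": 2.5,
    "accessibility": 3.5,
    "timeliness": 1.0,
}

def composite_score(sub_scores: dict) -> float:
    """Average the 0-4 sub-scores into a single 0-4 composite."""
    for name, score in sub_scores.items():
        if not 0 <= score <= 4:
            raise ValueError(f"{name} score {score} is outside the 0-4 scale")
    return round(sum(sub_scores.values()) / len(sub_scores), 1)

print(composite_score(SUB_SCORES))  # one composite score per organization
```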

To begin these conversations, we are using the results of our analysis to craft tailored recommendations on a DP-by-DP basis. Below is just a brief preview of a few recommendations we have included in our donor-specific Scorecard reports. We will present and discuss these in more depth in later posts.

Table 1: Sample Scorecard Recommendations


While we continue our conversations with donors about our findings from the Scorecard exercise, readers can look forward to four upcoming posts that will provide deeper insight into our methodology, metrics, findings, and recommendations. We hope this series of posts will encourage development stakeholders to consider how revamping M&E and data systems can lead to cost savings, improved outcomes, and heightened accountability and transparency – stay tuned!

Image: Ashwani Kumar Ojha CC BY-NC-ND 2.0
