Catalyzing Use of Gender Data

March 16, 2020 Global Data Policy
Annie Kilroy
Data Use, Explainer, Results Data
The learnings surfaced through this exercise reduced the time and effort for the second round of FIAP's annual reporting cycle by 66%.

March is Women’s History Month. DG is publishing a series of blogs that highlight and honor the work that we and others are doing to support the vital role of women.

Kicking off the series last week in Part 1 of our Gender Data series, we discussed the importance of gender-disaggregated data (GDD) for development and humanitarian programming, and how each kind of gender data can be leveraged to improve programming outcomes for women and girls. But we know from our experience studying data use that the primary obstacle to measuring feminist outcomes, and to organizational learning from them, is that development actors do not always capture gender data systematically.

For example, Global Affairs Canada’s International Assistance program (GAC-IA) and many others have adopted policies that target gender equality and the empowerment of women and girls, also known as “feminist development.” However, GAC and others have only recently required implementing partners and project officers to report activities and beneficiaries by gender. Gender policy markers and sector codes are recent additions to M&E strategies, so their uptake and standardization have been slow. Last year, DG worked closely with these indicators as we helped GAC-IA develop its Feminist International Assistance Policy (FIAP) M&E work plan using existing data. We aimed to establish a clear feminist M&E plan within FIAP’s first year of existence.

Take, for example, GAC-IA’s FIAP indicator on health and nutrition:

Number of people benefitting from gender-sensitive health and nutrition services through GAC projects (linked to SDG 2)

Though the indicator is simple in concept, methodological challenges have prevented organization-wide aggregation of project data. Questions arose about whether and how projects were collecting GDD on health and nutrition programming, but also around:

  • To what extent do gender-sensitive services need to be a core component of the project? For example, can “education” data coming from a secondary school that provides nutrition information to boys and girls be counted, even if its primary outcome is not health/nutrition based?
  • What specific activities constitute “gender-sensitive health or nutrition” services? Do they exclusively include sexual, reproductive, maternal, and newborn and child health or nutrition services, or do other activities qualify?
  • To what extent must nutritional/health services be a project focus for that data to be included? For example, if a 3-year education project highlights nutrition for one day, does that project contribute to this indicator? 
  • What information is documented and reported that would allow identification of these qualifying activities?

Because of these reporting challenges, GAC-IA anticipated that pinning down exactly how many women and girls its programs were reaching would be difficult. Less anticipated, however, were the challenges of extracting the data itself, which prevented GAC-IA from aggregating gender insights across the organization to quantify feminist outcomes.

There are no universally-accepted indicators or codebooks for feminist development. Further, project cycles (design, contracting, and implementation) are often years long. If donors and agencies wait for new programming cycles to begin to start collecting and analyzing gender data, years could pass before solid insights can be drawn.

The question becomes: how do we bridge the gap between theoretical best practice and more immediate, practical solutions that meet agencies’ current programming and abilities?

Question 1 for Feminist M&E: What Are You Measuring?

Indicator design involves achieving a balance between which measurements are “ideal,” and which measurements are practical and actionable. To achieve this balance, GAC-IA developed what is known as its “placemat,” a comprehensive results framework with 26 global indicators (derived mostly from SDG indicators) and 24 GAC-specific key performance indicators (derived from project-level M&E data). These key performance indicators were reviewed alongside global metrics, to arrive at an accurate, actionable measure of GAC-IA’s contribution to women’s empowerment.

In developing a framework that is both forward-thinking and responsive to current needs, we worked closely with GAC-IA to identify realistic tradeoffs in design – with the idea that down the road, these tradeoffs would become less and less necessary.

Figure 1: Designing the Results Framework: Tradeoffs

Question 2 for Feminist M&E: How Do You Measure Results?

Once you have identified goals and standard definitions for how and where to capture relevant gender data, there are two methods to develop the new indicators:

          1. Top-Down: Commission new data collection based exclusively on your new indicators.

          2. Bottom-Up: Use existing data in M&E channels and try to fit it into your new indicators. 

The costs and benefits of each approach depend on the organization’s data and reporting management structures. For example, the Top-Down approach may effectively surface the intended data, but it requires a significant investment of time and effort. The Bottom-Up approach requires less immediate effort, but the “right” data may be incomplete or may not exist.

Figure 2: Two Approaches to Aggregating New Organization-Wide Indicators

Case Study: Working with GAC-IA on its Bottom-Up Approach

In line with GAC-IA’s priorities, DG used existing data and the Bottom-Up approach to calculate the baseline. In one example, we fit existing M&E data into one of FIAP’s education indicators:

EDU1: # of teachers trained (m/f), according to national standards, supported by GAC programming

To make the existing M&E data compatible with the new indicators, DG took the following steps:

1. Use information already on-hand to isolate projects that are sector-relevant (in this case, OECD education projects).

This will more easily identify where corresponding gender data might exist.

2. Be specific with indicator definitions, and give methodological guidance for aggregating data to FIAP indicators.

For example, does the term “teachers” also include technical and vocational education trainers, or strictly refer to primary and secondary school educators?

3. Organize your data and document your assumptions.

This one speaks for itself – organization and documentation are both essential to the sustainability of your data. 

4. Tag each project to any applicable gender key performance indicators (KPI).

If it is unclear whether project components fit KPI definitions, classifications that allow more options on a “sliding scale” can help flag projects for further review with sector experts. For GAC-IA, we classified each project as one of the following:

Figure 3: Sliding-Scale Classifications for calculating FIAP baseline indicators

5. Refine the dataset.

Review assumptions, definitions, and methodology with gender experts and project teams.

6. As a final step, (re-)calculate the KPIs.

After DG reviewed all potentially relevant project documentation, the following assumptions and results were presented to GAC-IA for further review by their gender and project experts:

Figure 4: GAC-IA FIAP sample results table for KPI EDU1, showing all possible sliding-scale project classifications and assumptions as they pertain to (potential) EDU1 results. NOTE: these assumptions were made by non-sector experts at DG, and not necessarily the final definitions used by GAC-IA to aggregate data related to this indicator.
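The six steps above can be sketched in code. The sketch below is a minimal, hypothetical illustration: the project fields, sector labels, and sliding-scale classification names are our own assumptions for the example, not GAC-IA’s actual schema or definitions.

```python
# Hypothetical sketch of the bottom-up tagging-and-aggregation steps.
# All field names and classification labels are illustrative assumptions.

from dataclasses import dataclass

# Sliding-scale classifications for tagging projects (Step 4, illustrative labels)
INCLUDED = "included"          # clearly fits the KPI definition
NEEDS_REVIEW = "needs_review"  # ambiguous; flag for sector-expert review (Step 5)
EXCLUDED = "excluded"          # clearly does not fit

@dataclass
class Project:
    name: str
    sector_code: str                  # e.g. a sector label from existing project data
    teachers_trained_f: int           # gender-disaggregated counts from project M&E
    teachers_trained_m: int
    trains_to_national_standard: bool # documented assumption per the EDU1 definition

def classify(project: Project) -> str:
    """Tag a project against the EDU1 KPI on a sliding scale."""
    if project.sector_code != "education":
        return EXCLUDED              # Step 1: sector-relevance filter
    if not project.trains_to_national_standard:
        return NEEDS_REVIEW          # Step 2: definitional ambiguity -> expert review
    return INCLUDED

def aggregate_edu1(projects: list[Project]) -> dict:
    """Step 6: (re-)calculate the KPI from projects tagged as included."""
    included = [p for p in projects if classify(p) == INCLUDED]
    flagged = [p.name for p in projects if classify(p) == NEEDS_REVIEW]
    return {
        "teachers_trained_f": sum(p.teachers_trained_f for p in included),
        "teachers_trained_m": sum(p.teachers_trained_m for p in included),
        "flagged_for_review": flagged,  # carried forward for refinement (Step 5)
    }
```

The key design point is that ambiguous projects are never silently dropped: they are flagged and carried through to the expert-review step, mirroring the sliding-scale classifications in Figure 3.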

Case Study Conclusions

This approach to aggregating, flagging, and categorizing project data on a sliding scale was important to implementing the FIAP M&E plan. GAC-IA took DG’s assumptions, converting each into methodological guidance for their program teams. This saved GAC-IA months of expert analysis and debate over indicator definitions, because the definitions were rooted in examples of what GAC-IA projects were already doing. Further, the exercise provided tangible insight into barriers to data use for GAC’s feminist decision-making – an important note in the first year of FIAP implementation.

Working with GAC-IA to establish a clear pathway to feminist, results-driven monitoring and evaluation was a new undertaking on both sides. It required close collaboration between GAC-IA and DG, and we gathered lessons along the way that are applicable to others looking to bridge best practice theory with pragmatism. Even in the absence of perfect data, this work has established that with a bit of flexibility and initiative, we can arrive at a custom-made, actionable, and fit-for-purpose M&E plan.

If you missed Part 1 of our Gender Data series, check it out here. Stay tuned for Pt. 3, where we’ll lay out our biggest takeaways from this work, and how lessons uncovered through the GAC-IA experience resonate with other government agencies who have sought to measure gender-sensitive or feminist policies.

Photo credit: Juan Arredondo/Getty Images/Images of Empowerment
