Miss the earlier post in this series? Check out the introductory post, Understanding National Data Ecosystems, before reading more!
What does “fit-for-purpose” data actually mean?
It depends on who you ask, and what decision is at stake. For governments and development partners – particularly those who rely on data from country systems for program planning and management – much frustration stemmed from perceived redundancies in statistical and administrative data systems.
Strengths and Limitations of Statistics
Overall, interviewees expressed confidence in statistical data, which are produced or overseen by National Statistical Offices (NSOs). However, serious barriers to the usefulness of statistical data for programmatic decisions included:
- poor timeliness;
- limited disaggregation – by sex, age, location, ethnicity, and disability status; and
- incomplete access to machine-readable microdata.
As city and municipal interviewees pointed out, survey data disaggregated only by region – often two or more administrative levels above the local government – are not useful for local planning purposes.
Much of this frustration is inherent in how statistical data are collected. Statistics are often based on a representative sample, which is used to make population-wide generalizations. Collecting and publishing hyper-local statistical information – disaggregable across 1,500+ Philippine cities and municipalities, for example – would require significantly more financial, time, and human resource investment in data collection and processing. In many countries, the decennial population census consumes 10-15% of the NSO's operating budget for the decade – underscoring the financial infeasibility of frequent, disaggregated statistics.
In some contexts, a secondary concern lies with the politicization of statistics, which can result in direct interference or under-resourcing. Increased investment in NSOs – to ensure they remain independent government entities, are provided with appropriate human, technical, and financial resources, and are empowered with the legal authority to coordinate National Statistical Systems – is vital.
The Missing Middle in Administrative Data
Yet human development inequities, and subnational governance responsibilities, often require more timely and disaggregated data than official statistics can provide. Administrative data systems are a potential resource that can complement official statistics, addressing some of the timeliness and disaggregation gaps. For example, when used with a statistical denominator, administrative data from ministries of health can provide a more accurate picture of disease rates. However, many interviewees expressed serious concerns with administrative data quality.
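The numerator/denominator pairing described above can be sketched as a simple calculation – reported cases from a health management information system divided by a census-based population estimate (all figures below are hypothetical, for illustration only):

```python
# Minimal sketch: combining an administrative numerator (cases reported
# through an HMIS) with a statistical denominator (census population
# estimate) to express a disease rate per 100,000 people.

def rate_per_100k(reported_cases: int, census_population: int) -> float:
    """Disease rate per 100,000 people."""
    return reported_cases / census_population * 100_000

# Hypothetical district: 240 reported cases among 150,000 residents.
print(rate_per_100k(240, 150_000))  # 160.0 per 100,000
```

The quality of the result depends on both sources: under-reporting in the administrative system biases the numerator, while an outdated census biases the denominator – which is why the two systems complement rather than replace each other.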
In contrast with statistical information – for which interviewees could typically pinpoint specific methodological or political issues impacting data quality – interviewees generally lacked a clear understanding of how to systematically assess the quality of available administrative data.
In most cases, service delivery workers are mandated to report administrative data at the facility or local levels: teachers are expected to report student attendance in education management information systems (EMIS), doctors and nurses in health management information systems (HMIS), etc. Common challenges to this type of data collection include infrastructure (connectivity, technology), and – more importantly – local staff incentives and trade-offs of dedicating time to such reporting.
Yet we frequently found that custodial data agencies did not perceive their own data quality as a concern. This mismatch between administrative data users' trust and custodians' perceptions contributed to a pessimism regarding the usefulness of data in decision-making.
There was some consensus that NSOs or National Planning Commissions should serve as administrative data quality overseers; however, in many contexts such a role is beyond either agency's mandate. Even with a fully elaborated framework, limitations in NSO legal mandates – and the financial resources and staff skill sets needed to conduct data quality assurance – make achieving such centralized, top-down data quality a challenge.
So – are there systemic opportunities to increase the quality and trust in administrative data? Stay tuned for our next post, where we explore opportunities for greater quality and trust – provided we get the incentives right.
Over the past two years, Development Gateway (DG) worked across seven countries and two regions to support the roll-out of UNICEF's Data for Children Strategic Framework. This work included developing country ecosystem diagnostics and strategic action plans for UNICEF Country Offices in Myanmar, Papua New Guinea, Philippines, Thailand, Viet Nam, Lesotho, and Ethiopia; and for the UNICEF East Asia and Pacific Regional Office. Development Gateway holds a long-term agreement with UNICEF to continue this work around the world.