Artificial Intelligence: A Silver Bullet or Scrap Metal for Global Development?

February 6, 2019 Global Data Policy
Beverley Hatcher-Mbu
Explainer, Innovation

When someone mentions artificial intelligence (AI), it’s easy to conjure up two conflicting images: the first, killer robots whizzing past, replacing human jobs, daily tasks, and social interactions in a post-apocalyptic world; the second, a C-3PO-esque personality revolutionizing our health and food systems. Pondering this, we are also inclined to ask: where does global development fit within this futuristic, Star Wars-inspired universe?

Our recent work collaborating with AidData to design a machine learning model predicting humanitarian trends, and our participation in an Open Data & Artificial Intelligence Roundtable at the Open Gov Hub, have reinforced for us that AI isn’t a “silver bullet” for global development. However, when used ethically, it can be (another) tool in the toolbox for tackling persistent global challenges.


So What’s the Skinny on AI?

AI is the training of computers to automate decisions that simulate human-like decision making, while machine learning (ML) is a subset of AI that teaches computers to recognize patterns in data and use those patterns to make future predictions. Already, both are rife with possibilities for better development outcomes. AI is currently being used as a tool to improve health service delivery, implemented through mobile phones to make health screening more widely available for patients in rural, often underserved areas.
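To make the ML piece concrete, here is a minimal sketch (in Python, using scikit-learn) of what “learning patterns in data to make future predictions” looks like in practice. The screening records and field names are invented for illustration and are not drawn from any DG project.

    # A minimal sketch of "learning patterns to make predictions" with scikit-learn.
    # The data is invented: rows are past health screenings with hypothetical fields
    # [age, days_since_last_visit, symptom_score]; the label marks whether the
    # patient ended up needing follow-up care.
    from sklearn.linear_model import LogisticRegression

    past_screenings = [
        [34, 400, 2],
        [61, 30, 7],
        [45, 200, 5],
        [29, 720, 1],
    ]
    needed_follow_up = [0, 1, 1, 0]  # what actually happened: the "pattern" to learn

    model = LogisticRegression()
    model.fit(past_screenings, needed_follow_up)

    # Use the learned pattern to make a prediction about a new, unseen patient.
    new_patient = [[52, 365, 6]]
    print(model.predict(new_patient))        # predicted label
    print(model.predict_proba(new_patient))  # and the model's confidence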

However, there is a downside: AI has come under fire for perpetuating racial discrimination in facial recognition technology. The same technology has also appeared in some countries as a tool to identify (and jail) political dissidents. While these outcomes are not unique to AI (plenty of repressive regimes have targeted other digital tools to perpetuate discrimination), they do mean that international actors hold significant ethical responsibility as they increasingly turn to AI to enhance programs on everything from agricultural value chains to health and wellness.

 

Factors to Consider

Pick your problems.

Not all problems can, or should, be solved by AI. Why? Because the use of AI is heavily shaped by the context surrounding the problem: the culture and processes already in existence, and the cost-benefit analysis of redesigning the culture so that AI can be used effectively. For example, for a health program operating in a rural area where community relationships are highly valued, AI could speed up the time needed to read x-rays, but it cannot effectively replace the program’s human elements – for example, the relationship between local health extension workers and community members that drives preventive care, such as motivation to get x-rays.

The ideal mix is a blend of humans and machines working together, resulting in the slow, time-adjusted reorganization of work that reflects which tasks can be completed by a machine, and which should be completed by a human.

AI requires investment.

AI is not like a microwave – you can’t just “set it and forget it.” AI models are affected by “concept drift,” meaning they have to be routinely re-calibrated with new data so that they continue to produce “correct” decisions. At DG, we learned this firsthand in developing the Autogeocoder tool, which uses machine learning to read through project documents and pinpoint specific activity locations. We continually updated our model to address questions such as: what happens when a project’s activities occur in multiple locations at once? Or when a document mentions other, unrelated locations, such as the implementing partner’s mailing address? The Autogeocoder model needed to be trained consistently, and by many users over time, to become effective at making these important distinctions.
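As a rough illustration of what routine re-calibration can look like in practice, the Python sketch below keeps a pool of human-reviewed text snippets and retrains a simple classifier whenever accuracy on the newest reviewed batch slips. It is not the Autogeocoder’s actual code; the function names, labels, and threshold are assumptions made for the example.

    # Illustrative sketch only -- not the actual Autogeocoder pipeline. One simple
    # response to concept drift: keep a growing pool of human-reviewed examples,
    # check the model against the newest ones, and retrain when performance slips.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviewed_snippets, reviewed_labels = [], []  # grows as users correct the model

    def retrain():
        # Label 1 = "project activity location", 0 = "irrelevant (e.g. a mailing address)"
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(reviewed_snippets, reviewed_labels)
        return model

    def recalibrate_if_drifting(model, new_snippets, new_labels, threshold=0.85):
        reviewed_snippets.extend(new_snippets)
        reviewed_labels.extend(new_labels)
        # If accuracy on the newest reviewed batch has slipped, retrain on everything.
        if model.score(new_snippets, new_labels) < threshold:
            return retrain()
        return model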

Mind the accountability/privacy gap.

AI can help increase the quality of public service delivery, but it can also exacerbate a lack of transparency around decision making. As in the open data world, basic principles and guidelines exist that outline how algorithms can be held accountable. Additionally, methodologies are emerging that break down how algorithms reach their decisions, which can help identify bias. In the push for transparency in AI practices, France is leading the way by making public all algorithms its government uses.
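One common way to break down how a model reaches its decisions is to measure how much each input drives its predictions. The sketch below uses scikit-learn’s permutation importance on invented data; the feature names are hypothetical, and a sensitive attribute showing up as a dominant driver would be one signal of potential bias worth investigating.

    # A hedged illustration of decision "break-down" for accountability:
    # permutation importance estimates how much each input drives predictions.
    # Feature names and data are invented for the example.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    feature_names = ["income", "distance_to_clinic_km", "gender_code"]  # hypothetical
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(int)  # driven mostly by "income"

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, importance in zip(feature_names, result.importances_mean):
        print(f"{name}: {importance:.3f}")  # larger = bigger influence on decisions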

Closely related to transparency concerns is the need to protect individual data. Vulnerable groups remain at risk in digital development spaces, highlighting just how essential it is that safety-oriented, post-colonial, and gender-focused concerns frame how, when, and why individual data is used to feed AI models.

 

Some Best Practices for Using AI

AI is already showing incredible ability to answer complex development questions, so it’s not all doom and gloom! With that in mind, we’ve identified three approaches that can help anchor AI models as solutions-oriented tools for tackling global challenges:

Talk to people.

Conduct an assessment before you build the AI model to gain a basic idea of the availability, quality, and timeliness of the data that will feed your model. We developed CALM as a method for assessing data ecosystems and then building tools and processes that address the most pressing decisions.
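As a loose illustration (not the CALM methodology itself), a pre-build assessment can start with a few simple checks on the data that would feed the model. The file and column names below are hypothetical.

    # A minimal, hypothetical pre-build assessment: before training anything, check
    # how much data exists, how complete it is, and how fresh it is.
    import pandas as pd

    df = pd.read_csv("health_reports.csv", parse_dates=["report_date"])  # hypothetical file

    availability = len(df)                                  # do we have enough records at all?
    completeness = df.notna().mean().round(2)               # share of non-missing values per column
    latest = df["report_date"].max()
    timeliness_days = (pd.Timestamp.today() - latest).days  # how stale is the newest record?

    print(f"records: {availability}")
    print(f"days since latest report: {timeliness_days}")
    print("completeness by column:")
    print(completeness)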

Remember partnerships.

Consensus building through human partnerships remains essential, from accessing the data needed to teach the model, to ensuring that the model ultimately solves the problem raised by citizens/end users.

Draw parallels.

The medical industry has frequently balanced open data, informed consent, and restricted access, offering examples of how the data for development community could weigh innovation against individual rights.

 

Where do we go next?

There is no “one size fits all” answer to solving global development challenges. Ultimately, the power of AI rests with the humans behind it, and hinges on their ability to connect AI with other digital and analog tools that, collectively, can address complex problems. Effective use of AI will require proponents and skeptics alike to proceed both cautiously and with an open mind, to ensure that our analog weaknesses don’t become our digital ones.

 

Image credit: Mike MacKenzie, (CC-BY-2.0)
