
Beyond the Algorithm: The Case for Human Judgment in AI

January 21, 2026
Fernando Ferreyra, Beverley Hatcher-Mbu

We are constantly hearing that artificial intelligence (AI) will change everything we do, from how we teach in classrooms to how we access health care, harvest crops, and distribute fertilizer. Some are asking a simple question: how do we make these tools genuinely useful, ethical, and sustainable? As we try to keep pace with the advances and separate “smoke and mirrors” from genuine applications, we believe there is no responsible approach that removes the human in the loop (HITL). At least for now.

Human-in-the-loop generally refers to “the need for human interaction, intervention, and judgment to control or change the outcome of a process, and it is a practice that is being increasingly emphasized in machine learning, generative AI, and the like.” In practical terms, it means that human intelligence is deliberately built into supervision, training, and decision-making rather than leaving them entirely to an automated algorithm.
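To make that concrete, here is a minimal sketch of one common HITL pattern: the model decides routine cases on its own, and anything it is unsure about is routed to a person who makes the final call. Everything in it (the threshold, `model_predict`, `ask_human`) is illustrative, not a reference to any particular library or to a specific system of ours.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Illustrative threshold; real deployments tune this per use case.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def hitl_decide(
    case: dict,
    model_predict: Callable[[dict], Tuple[str, float]],
    ask_human: Callable[..., str],
) -> Decision:
    """Route a case to a human reviewer when the model is unsure.

    `model_predict` and `ask_human` are hypothetical callables: the
    former returns (label, confidence); the latter shows the case and
    the model's suggestion to a person and returns their final label.
    """
    label, confidence = model_predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Routine case: the model decides on its own.
        return Decision(label, confidence, decided_by="model")
    # Uncertain case: a person makes (and owns) the final call.
    human_label = ask_human(case, suggested=label)
    return Decision(human_label, confidence, decided_by="human")
```

The design choice worth noticing is that the human is not reviewing everything; the system decides *which* cases deserve human judgment, which is how HITL stays practical at scale.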

Why include humans?

There are many arguments for including humans in the loop; here are the most salient:

  • Decision quality: humans can apply domain expertise, contextual understanding, and common sense to AI recommendations and catch the edge cases the AI misses (e.g., nuanced scenarios where human empathy is required to override standard protocol).
  • Ethical oversight and accountability: decision-making systems do not have the capacity to exercise moral judgment; there has to be some form of safeguard against discriminatory or harmful automated decisions.
  • Transparency: humans can explain decisions, adjust workflows, and take feedback, things that black boxes can’t do.
  • Learning from experience: machines learn patterns and may adjust to feedback, but genuine human learning means internalizing lessons and reliably changing behavior over time.

As the AI space grows rapidly, the need for human-in-the-loop interaction remains crucial, particularly as AI becomes more autonomous and the impact of its decisions grows. Human oversight is needed to ensure that AI agents adhere to ethical boundaries, to correct system failures, and to maintain alignment with human objectives. The risk of a runaway or misaligned agent makes the safeguard of a human in the loop a necessity.

What are the downsides?

As with everything, any given approach has its proponents and detractors. The main argument against having humans perform too many verifications is, obviously, speed: human interaction introduces latency and throughput limits into processes where the extra check may not be critical, along with the cost of training people to verify the work properly. A deeper critique of HITL is the illusion of accountability: reviewers can overtrust the AI, do a poor job of verification, and settle into the comfortable assumption that the AI is mostly right, even in processes that should not tolerate negligent mistakes. They are also prone to errors of omission, failing to notice mistakes the system doesn’t flag.

These dynamics mean that simply adding a human does not guarantee better outcomes, since even highly skilled people make mistakes. Additionally, some processes are inherently complex, and the human may simply not understand them well enough to intervene meaningfully.

Where do we go from here?

Our 25 years of experience in public sector digital development have us thinking about how public sector agencies will navigate this new age, especially how they can develop, adopt, and scale AI that includes HITL. Additionally, as outlined in our thinking here, we are building an assessment technology in Jordan to frame AI responsibly, with human-in-the-loop in mind, and to demystify how AI can be practically integrated into national systems to drive development outcomes.

Based on our prior experience, we’re looking at the age of AI and at what needs to be cleared before adoption:

  • Thinking through use cases and evaluating technology against them
  • Clarifying licensing and how data is going to be used
  • For Low- and Middle-Income Countries (LMICs), identifying the downstream delivery bottlenecks that need to be addressed

Internally, we’re also thinking through AI and its impact on the humans in the software cycle: our software developers. We think AI’s impact on the software engineering field is a bellwether for how we should collectively think about its impact on labor, development, and growth. Watch this space for how we talk about this in the coming months.

AI and human-in-the-loop together improve quality, ethics, and trust only when humans have the time, context, and authority to disagree with the machine, and when processes are built to capture and learn from their feedback.
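What might “capture and learn from their feedback” look like in practice? One minimal sketch: log every human decision alongside the model’s suggestion, so that overrides become audit and retraining data. The file name and fields below are illustrative assumptions, not a prescribed schema.

```python
import csv
from datetime import datetime, timezone

# Hypothetical log file; in practice this might be a database or queue.
FEEDBACK_LOG = "hitl_feedback.csv"

def record_feedback(case_id: str, model_label: str,
                    human_label: str, reviewer: str) -> None:
    """Append one reviewed decision so that disagreements can feed
    back into evaluation, audits, and eventual retraining."""
    with open(FEEDBACK_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when reviewed
            case_id,
            model_label,    # what the AI suggested
            human_label,    # what the human decided
            reviewer,
            model_label != human_label,  # True marks a human override
        ])
```

Tracking the override rate over time is one simple way to tell whether reviewers are genuinely engaging with the system or merely rubber-stamping it, which speaks directly to the illusion-of-accountability critique above.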

AI, as it stands today, holds the promise of addressing several of our long-standing problems in building sustainable software, but at least for the time being, and unless a new breakthrough is achieved, humans will have to stay in the loop.