What makes a good human in the loop?

Pros share important skills for being an AI output reviewer.

Once upon a time, the term “humans in the loop” meant people stuck on a roller-coaster.

In the age of automation, however, the idea of humans and loops appears frequently in conversations about guardrails for AI, a technology with lots of potential to cause havoc if unsupervised.

AI-powered decisions call for an accuracy-checking presence, a human reviewer who can prevent potential disaster stemming from a wrong number or poorly generated word choice. The desired candidate isn’t just someone who can give a thumbs-up to whatever appears on a dashboard.

We spoke with AI pros about how tech practitioners can set themselves up with the necessary skills to become a good human in the loop (HITL).

What is a human in the loop? The Harvard Data Science Review defines the term as “the need for human interaction, intervention, and judgment to control or change the outcome of a process”—including generative AI applications.

Aron Lindberg, associate professor of information systems at the Stevens Institute of Technology School of Business, sees two kinds of HITL scenarios:

  • Placing a human throughout an automated process. Semiconductor design, for example, has increasingly incorporated algorithms that take human-defined design requests and generate alternatives, Lindberg told us. A human then reviews the options for details like heat requirements and time-to-market priorities. “Those are the types of decisions that are really hard to delegate to a machine,” Lindberg said.
  • Placing a human at the end of a very automated process. This placement can feel a bit “pro forma,” according to Lindberg, “where the intention is really to automate the process as much as possible, without really thinking carefully about what humans are good at and what machines are good at.”
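The two placements Lindberg describes can be sketched as a pipeline that either invokes a human checkpoint after every automated stage or only once at the end. This is a minimal illustration, not anything from the article; all names here are hypothetical.

```python
from typing import Callable, List

Stage = Callable[[str], str]   # an automated step that transforms an artifact
Review = Callable[[str], str]  # a human checkpoint that may amend the artifact


def run_pipeline(stages: List[Stage], review: Review,
                 review_each_stage: bool) -> str:
    """Run automated stages, placing the human throughout or only at the end."""
    artifact = "initial spec"
    for stage in stages:
        artifact = stage(artifact)
        if review_each_stage:
            # Lindberg's first scenario: a human throughout the process
            artifact = review(artifact)
    if not review_each_stage:
        # Lindberg's second scenario: one review at the very end,
        # which risks becoming "pro forma"
        artifact = review(artifact)
    return artifact
```

The design choice the two branches encode is the one Lindberg flags: reviewing after every stage costs more human attention but catches problems where they occur, while a single end-of-pipeline review maximizes automation at the expense of visibility.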

Critical thinking. Travis Rehl, CTO at Innovative Solutions, a cloud consultancy and AWS premier-tier services partner, recently built an AI tool to help his company’s sales team determine customer expectations, or “scope,” for provided services; the resulting scope document pulls from and learns from previous project data. But scopes change over time, Rehl noted, so the output is treated as a draft that must be reviewed and approved by a sales team member with their own knowledge of the job.

Shawn Spooner, global CTO at out-of-home ad company Billups, helped implement “Audrai,” a tool that turns human-initiated specs into billboard mockups, which can then be placed into digital replicas of desired outdoor environments. The results still require review by a domain expert.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

“If I’m Nike, and it just dropped me [into] a non-sporting environment,” Spooner said, “it’s wrong by default. The AI won’t know that, but I might and my teams certainly will.”

Rehl sees HITL as requiring critical thinking skills. Even the best models don’t always get it right, he said, and AI users and reviewers should not take outputs at face value. An important human skill, he added, is determining where AI tools belong and where critical human thinking is preferred.

Rehl also emphasized the importance of establishing “healthy friction points” (like making sure scope document drafts require approval) in automated processes.
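One way to picture such a friction point is an approval gate: the AI-generated draft simply cannot ship until a named human signs off. The sketch below is a hypothetical illustration of that pattern, not Rehl's actual implementation; the `ScopeDraft`, `approve`, and `release` names are invented for this example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ScopeDraft:
    """An AI-generated scope document that cannot ship until a human signs off."""
    content: str
    approved: bool = False
    reviewer: Optional[str] = None


def approve(draft: ScopeDraft, reviewer: str) -> ScopeDraft:
    """Record the human sign-off: who reviewed, and that approval happened."""
    draft.approved = True
    draft.reviewer = reviewer
    return draft


def release(draft: ScopeDraft) -> str:
    """The friction point: releasing an unapproved draft is an error, not a default."""
    if not draft.approved:
        raise PermissionError("Draft requires human approval before release")
    return draft.content
```

The key design choice is that the gate fails closed: skipping the human step raises an error rather than quietly passing the AI's output downstream.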

System failure. Humans in the loop may also need to serve as an AI’s “conscience.” More than four in 10 (44%) technology leaders responding to a recent IEEE study ranked “ethical practices” as a top skill desired in candidates applying for AI roles.

Aman Mahapatra, chief of staff at engineering and digital-transformation company Tribeca Softech, sees the HITL as an air traffic controller. “They don’t build or fly the planes, but they hold a systems-level view of everything in motion and make high-consequence calls under time pressure with incomplete information,” Mahapatra wrote to us in an email.

He also called out other skills potentially valuable for HITL, like risk stratification (“What’s the blast radius if something goes wrong?”); knowing how to “push back” with prompts, if needed (“What’s the second most likely cause?” or “What evidence are you not weighing?”); and business literacy (to know what “the AI is optimizing for”).

Lindberg, like Mahapatra, sees a human in the loop requiring system-level understanding: knowing the data being processed and how a tool (say, Microsoft Copilot) works and integrates with that data, and understanding where it’s likely to misfire.

“When you’re brought in to validate some output, you need to have information from the system. You need to have tools to be able to inspect the work process that you’re asked to intervene in, so that you can actually build yourself an understanding of how a particular result was generated, so that you, as a human, can create an opinion of it,” he said.

About the author

Billy Hurley

Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.
