DARPA wants to build AI that can be a true teammate

A new research project called Artificial Social Intelligence for Successful Teams (ASIST) holds the immodest goal of figuring out how to imbue machines with social intelligence.

Today, artificial intelligence makes a pretty good tool.

It can help humans surface relevant information from overloaded databases, review government-granted security clearances, read address labels faster and more accurately, and much more. But AI isn’t a good teammate. It can’t fundamentally understand humans — their beliefs, intentions and restrictions.

The Defense Advanced Research Projects Agency (DARPA) wants to change this.

The agency recently kicked off a new research project called Artificial Social Intelligence for Successful Teams (ASIST), which holds the immodest goal of figuring out how to imbue machines with social intelligence.


Human and machine teaming is an emerging field of study with big potential, project lead Dr. Joshua Elliott told FedScoop. But machine social intelligence is, for the moment, a huge missing piece in the puzzle. Humans have a skill called theory of mind that lets them infer the cognitive states of the people around them. This, understandably, is a crucial skill for working in teams. But computers just don’t have it.

Can this change?

DARPA and its contractors on this project are going to try to figure that out. The project began in December 2019 and will run for the next four years.

Project contractor Aptima, which comes to the table with a background in the social science of crafting high-performing teams, will design an experiment using an urban search and rescue challenge in the video game Minecraft. In the experiment, the AI will “observe” a human teammate using sensors, wearables and cameras and use this data to predict what the human will do, Dr. Jared Freeman, Aptima’s chief scientist and principal investigator on this team, told FedScoop.

Researchers will begin by exploring whether the AI can use the gathered data to understand a single human partner; in later stages, the machine intelligence will work with teams of multiple humans.


Elliott admits that the project is “very ambitious.” It’s what he calls a “DARPA-hard problem,” meaning it might take years to solve, if it proves to be solvable at all. If successful, though, he says this research could translate to a substantial leap in the functionality of AI assistants and remotely operated vehicles.

Freeman also sees huge potential. “We are at the front edge of a new generation of teams,” he said.
