DARPA announces $2 billion in funding for ‘AI Next’ campaign

The goal is to get the technology into its “third wave,” where it begins to adapt to changing situations more in the way human intelligence does.

The Defense Advanced Research Projects Agency says it plans to spend more than $2 billion on research into so-called “third wave” artificial intelligence capabilities over the next few years.

The initiative — announced Friday at the D60 event — is called “AI Next,” and it aims to move AI beyond systems that need large volumes of high-quality training data covering myriad situations in order to develop an algorithm. The goal is to get the technology to a place where machines adapt to changing situations the way human intelligence does.

“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving,” DARPA Director Steven Walker said in a statement. “Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

As part of AI Next, DARPA intends to issue “multiple” broad agency announcements over the next year. Priority areas to explore, the agency says, include automating DoD business tasks like security clearance vetting, increasing the security of machine learning tech, and more. The initiative also includes further investment in existing programs.
AI Next will also leverage the Artificial Intelligence Exploration (AIE) program that DARPA launched in July — a fast-acting funding mechanism that gives researchers 18 months to establish the viability of an AI concept.

AI is not a new research area for DARPA. The defense agency has around 20 ongoing AI research projects, including efforts to detect so-called “deepfakes” and initiatives around “explainable AI” — the idea that as artificial intelligence takes over more roles, systems will need to be able to explain how they reached their conclusions.

Despite all that interest and progress, however, the development of AI in a military context has been controversial lately. In June, after internal employee petitions and resignations, Google announced that it would end its partnership with the central Pentagon AI initiative known as Project Maven when its current contract expires in 2019.

Perhaps owing to this fallout, the Pentagon’s recently established Joint AI Center (JAIC) has expressed interest in establishing some kind of ethics guide — a set of “AI principles for defense,” as DIU’s head of machine learning Brendan McCord put it.
