US Military Taps 'Thunderforge' AI for Wargaming and Planning Operations


As geopolitical tensions rise, the U.S. Department of Defense is expanding its use of artificial intelligence, turning to AI agents to simulate confrontations with foreign adversaries.

On Wednesday, the Defense Innovation Unit, a Department of Defense organization, awarded a prototype contract to San Francisco-based Scale AI to build Thunderforge, an AI platform designed to enhance battlefield decision-making.

“[Thunderforge] will be the flagship program within the DoD for AI-based military planning and operations,” Scale AI CEO Alexandr Wang said Wednesday on X.

Founded in 2016 by Wang and Lucy Guo, Scale AI speeds up AI development by providing labeled training data and the infrastructure needed to train AI models.

To develop Thunderforge, Scale AI will work with Microsoft, Google, and American defense contractor Anduril Industries, Wang said.

Thunderforge will initially be deployed to the U.S. Indo-Pacific Command, which operates in the Pacific Ocean, Indian Ocean, and parts of Asia, and the U.S. European Command, which oversees Europe, the Middle East, the Arctic, and the Atlantic Ocean.

Thunderforge will support campaign strategy, resource allocation, and strategic assessments, according to a statement on Wednesday.

"Thunderforge brings AI-powered analysis and automation to operational and strategic planning, allowing decision-makers to operate at the pace required for emerging conflicts," DIU Thunderforge Program Lead Bryce Goodman said in the statement.

This AI-focused approach, dubbed “Agentic Warfare,” marks a shift from traditional military planning, in which experts manually coordinate scenarios and make decisions over days, to an AI-driven model in which decisions can be made in minutes.

Ensuring AI performs reliably in real-world defense applications is particularly challenging, especially when faced with unpredictable scenarios and ethical considerations.

“These AIs are trained on collected historical data and simulated data, which may not cover all the possible situations in the real world,” Sean Ren, a professor of computer science at USC, told Decrypt. “Additionally, defense operations are high-stakes use cases, so we need the AI to understand human values and make ethical decisions, which is still under active research.”

Challenges and safeguards

Ren, who is also the founder of Los Angeles-based decentralized AI developer Sahara AI, said building realistic AI-driven wargaming simulations comes with significant challenges in accuracy and adaptability.

"I think two key aspects make this possible: collecting a large amount of real-world data for reference when building wargaming simulations and incorporating various constraints from both physical and human aspects," he said.

To create adaptive and strategic AI for wargaming simulations, Ren said it’s crucial to use training methods that allow the system to learn from experience and refine its decision-making over time.

“Reinforcement learning is a model training technique that can learn from the ‘outcome/feedback’ of a series of actions,” he said.

“In wargaming simulations, the AI can take exploratory actions and look for positive or negative outcomes from the simulated environment,” he added. “Depending on how comprehensive the simulated environment is, this is helpful for the AI to explore various situations exhaustively.”
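The loop Ren describes is the standard reinforcement-learning cycle: an agent tries an action in a simulated environment, observes the outcome as a reward, and updates its estimates so better decisions emerge over many episodes. The sketch below is a minimal, purely illustrative tabular Q-learning example on an invented toy environment; it has no connection to Thunderforge, and every state, action, and reward in it is an assumption made up for demonstration.

```python
import random

# Toy "advance vs. hold" environment: the agent moves along a line of positions
# and earns a reward only on reaching the final position. Purely illustrative;
# not related to any real wargaming simulator.

N_STATES = 6          # positions 0..5, where 5 is the objective
ACTIONS = [0, 1]      # 0 = hold position, 1 = advance
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

# Q-table: estimated long-term reward for each (state, action) pair
q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Simulated environment: advancing moves forward; reaching the goal pays off."""
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else -0.01  # small cost per step
    done = next_state == N_STATES - 1
    return next_state, reward, done

for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration: occasionally try a random action,
        # otherwise exploit the best-known action so far.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])

        next_state, reward, done = step(state, action)

        # Update the estimate from the outcome/feedback of the action taken.
        best_next = max(q[next_state])
        q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])
        state = next_state

print("Learned preference to advance from each position:",
      [round(q[s][1] - q[s][0], 2) for s in range(N_STATES)])
```

How exhaustively such an agent can explore depends, as Ren notes, on how comprehensive the simulated environment is; real planning systems would involve far richer state spaces and feedback signals than this toy loop.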

With the expanding role of AI in military strategy, the Pentagon is forming more deals with private AI companies such as Scale AI to strengthen its capabilities.

While the idea of AI used by militaries may conjure images of “The Terminator,” military AI developers like San Diego-based Kratos Defense say that fear is unfounded.

"In the military context, we’re mostly seeing highly advanced autonomy and elements of classical machine learning, where machines aid in decision-making, but this does not typically involve decisions to release weapons," Kratos Defense President of Unmanned Systems Division, Steve Finley, previously told Decrypt. “AI substantially accelerates data collection and analysis to form decisions and conclusions."

One of the biggest concerns when discussing the integration of AI into military operations is ensuring that human oversight remains a fundamental part of decision-making, especially in high-stakes scenarios.

“If a weapon is involved or a maneuver risks human life, a human decision-maker is always in the loop,” Finley said. “There's always a safeguard—a 'stop' or 'hold'—for any weapon release or critical maneuver."

Edited by Sebastian Sinclair
