AI in Military Applications: A Strategic Exploration

At a classified US military base about 50 miles from the Mexican border, the defense contractor Anduril is experimenting with a large language model for a novel purpose. I attended one of the first demonstrations last year. From a sun-faded landing strip, I watched four jet aircraft, code-named "Mustang," appear on the western horizon and sweep over a desolate expanse of boulders and brush. For the demonstration, the prototypes were miniaturized. They fell neatly into formation, and as they approached, the hum of their engines became audible.

The intense sunlight was straining my eyes, so I turned to a nearby computer monitor shaded under a dusty tarp. With a few keystrokes, a fifth aircraft materialized at the edge of the screen, its outline strikingly similar to that of the Chinese J-20 stealth fighter. A young man named Colby, wearing a black baseball cap and sunglasses, issued the command to engage the computer-simulated target: "Mustang intercept." This was the moment artificial intelligence entered the scene. A model akin to the one powering ChatGPT parsed the command, communicated with the drones, and then responded in an emotionless female voice: "Mustang collapsing." Within about a minute, the drones had converged on the target and, with relative ease, neutralized it with virtual missiles.

Anduril's demonstration attests to the eagerness with which the defense industry is exploring new forms of AI. Through a project named Fury, the startup is developing a larger autonomous fighter for the US Air Force, intended to operate alongside crewed jets. Many existing systems already possess a degree of autonomy thanks to older AI technologies. The novel idea, however, is to weave large language models (LLMs) into the command structure, relaying orders seamlessly and surfacing relevant information to pilots. One could envision a "Sergeant Chatbot" at the pilots' disposal.

This scenario may seem peculiar. But defense technology has always carried an element of the unexpected: substantial funding and effort are invested, with varying degrees of success. Here, the allure lies in the promise of efficiency. Kill chains are inherently complex, and in theory AI can streamline them, which is a diplomatic way of saying it can make them more lethal. According to four-star American strategists, the nation that controls this technology will dominate the global stage. That conviction drives both the United States' determination to restrict China's access to cutting-edge AI and the Pentagon's plan to escalate its AI spending in the coming years. The plan, while bold, is not entirely unexpected: the war in Ukraine, with its widespread use of low-cost, computer-vision-equipped drones, has vividly illustrated the significance of autonomy on the battlefield.

At the same time, the generative AI boom has amplified interest in this domain. A 2024 Brookings report found that funding for AI-related federal contracts grew a staggering 1,200 percent from August 2022 to August 2023, with the vast majority of that funding coming from the Department of Defense. That was before President Trump's return to office. His administration is now pushing for even more strategic deployment of AI: the trillion-dollar 2026 defense budget (or rather, "war" budget) includes, for the first time, a dedicated allocation of $13.4 billion for AI and autonomy.

This means AI companies stand to gain substantially by making ambitious claims about their capabilities in a war-fighting context. This year, Anthropic, Google, OpenAI, and xAI were each awarded AI-related military contracts worth up to $200 million. That marks a significant departure from 2018, when Google notably withdrew from Project Maven, an initiative to use AI to analyze aerial imagery. Emelia Probasco, a researcher at Georgetown University who studies military applications of AI, notes that Project Maven, now managed by Palantir, has evolved into the Maven Smart System and become one of the military's most widely used AI tools. She argues that large language models are well suited to intelligence-gathering tasks because of their facility with processing vast amounts of information, and that their ability to write and analyze code makes them candidates for cyber-offensive operations. "The aspiration, which is somewhat disconcerting, is that AI could be so intelligent as to prevent war or simply engage in combat and emerge victorious," Probasco remarks. "It's almost like a form of magical panacea." For now, contemporary models remain too unreliable, error-prone, and opaque to make battlefield decisions or be trusted with direct control of any hardware.

A pivotal challenge for these stakeholders, therefore, is to find ways of deploying AI that capitalize on its strengths while minimizing the risks. In September, Anduril and Meta jointly bid on a US Army contract, potentially worth up to $159 million, to develop yet another AI-integrated application: a rugged augmented-reality helmet display for soldiers. Anduril says the system, designed to give warfighters mission-critical information while sensing their surroundings, will leverage a new generation of more capable AI models that are better equipped to interpret the physical world in real time.

What of the prospect of fully roboticized soldiers? I reached out to Michael Stewart, a former fighter pilot who previously headed the US Navy's disruptive capabilities office and helped drive AI experimentation within the Fifth Fleet in 2022. Stewart now runs a consulting firm and works with military planners worldwide. He expects the future of warfare to be highly automated. "In 10, 15, or 20 years, we can expect to see robots with a high degree of autonomy," he predicts. "That is the inevitable trajectory." If these systems are powered by LLMs, they will not merely be passive observers of the atrocities of war. They will be capable of articulating, in their own "words," the actions they undertake and the rationale behind them.
