A Refreshed Autonomous Weapons Policy Will Be Critical for U.S. Global Leadership Moving Forward
from Renewing America

The updated policy should reflect developments in the field and incorporate recent DoD initiatives, pointing the way toward what future governance of emerging capabilities should look like.
A sign attached to a robot is pictured as activists from the Campaign to Stop Killer Robots, a coalition of non-governmental organisations opposing lethal autonomous weapons or so-called 'killer robots', stage a protest at Brandenburg Gate in Berlin. Annegret Hilse/Reuters

The U.S. Department of Defense (DoD) has announced its intention to update its keystone directive on autonomous weapons systems (AWS). The directive “establishes DoD policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems” with the aim of reducing the possibility of accidents from the use of these weapons, including accidents that might lead to unintended conflict or inadvertent escalation.

First published in 2012, the directive remains “one of the only publicly available national policies” on weapon systems with higher degrees of autonomy. As the directive approaches its tenth anniversary, the Department’s issuance policy requires that it be updated or canceled, and the timing could hardly be better. Given advances in artificial intelligence and autonomy technologies, as well as changes within the Department, DoD has an opportunity to update the policy and sustain responsible U.S. global leadership.

A Great Time to Modernize

When the autonomous weapons directive was written a decade ago, researchers at Stanford and Google had only just worked out how to build an artificial intelligence (AI) algorithm that could identify pictures of cats without being trained on labeled data. Just a few weeks ago, DeepMind unveiled Gato, a single “generalist” AI agent that can complete a variety of tasks spanning some of the hallmark AI milestones of the past decade, including playing Atari games, captioning images, chatting, and even stacking physical blocks with a robot arm.

In just ten years, the landscape of the field has changed dramatically, with expectations shifting every few months. While AI and autonomy are not one and the same, AI technology facilitates autonomy and, as a result, has become a driving force in reducing the need for direct human intervention in systems ranging from computers to vehicles to weapons.

Since the directive was introduced, no autonomous system falling under its purview has been developed or proposed to the Department. The directive applies to autonomous weapon systems, defined as systems that, “once activated, can select and engage targets without further intervention by a human operator.” The key differentiator is independent target selection and pursuit: the directive does not apply to weapon systems like “fire-and-forget” smart bombs, uncrewed platforms such as drones, or even loitering munitions that can “wait” for the opportune moment before engaging a pre-selected target designated by a person.

Given recent developments in both the technology and the national security landscape, including the rise of China and its focus on increasingly AI-based and autonomous military systems, and the performance of weapons with higher degrees of autonomy on the battlefield in Ukraine, the United States will need an updated directive that is sharper and better primed to guide these technologies as they mature.

The updated policy will need to reflect not only advances in the technology but also developments within DoD itself, including its established norms and values around AI and autonomy. The directive predates many of the initiatives the Department has instituted to promote defense innovation and safe, ethical uses of emerging capabilities like AI, quantum, and cyber. For example, it was published before the creation of the Joint Artificial Intelligence Center (JAIC), before the DoD had adopted ethical principles for AI, and before the Defense Innovation Unit (DIU) began its efforts to operationalize those principles. It also predates the recent overhaul of the Department’s entire approach to emerging technologies and data through the creation of the Emerging Capabilities Policy Office and the Chief Digital and Artificial Intelligence Officer (CDAO).

To ensure that DoD can adopt and implement these types of systems with responsible speed if senior leaders decide a particular system is necessary, an updated directive should directly reference these ethical principles, especially in its guidelines for testing and evaluation and for training operators, and in its requirements for system transparency and positive human control. It should also spell out the role these new offices play in the review process.

Ensuring Future U.S. Leadership

The DoD Directive on Autonomy in Weapon Systems is not simply a statement of principles; it establishes policy, delegates authority, and assigns responsibilities. It is also an example of the preemptive regulation necessary to ensure that innovation can proceed both swiftly and safely. A significant challenge the U.S. government faces in adopting emerging technologies is that innovation, led largely by the private sector, drastically outpaces government regulation and bureaucracy. That this policy is coming of age, and being updated just as these capabilities inch toward reality, reflects how the United States is leading on realistic and responsible autonomous weapons policy. An updated U.S. AWS policy will serve as a concrete model for broader governance of artificial intelligence and related technologies, in both defense and civilian contexts: governance that is more binding than high-level ethics principles and more nuanced than outright bans.

Creative Commons: Some rights reserved.
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.