
The Pentagon recently updated its directive on autonomous weapons systems for the first time in over a decade. The updated policy, DoD Directive 3000.09, titled “Autonomy in Weapon Systems,” reflects the growing role of AI in warfare and outlines a framework for the military’s study and development of autonomous weapon systems going forward.
According to Michael Horowitz, the Pentagon’s Director of Emerging Capabilities Policy, the new version contains “relatively minor clarifications and refinements,” such as naming the oversight and advisory bodies responsible for ensuring ethical research and development. The original directive was issued in 2012, and the field has grown significantly since then, with autonomous and semi-autonomous weapons systems becoming crucial to modern warfare.
The updated directive writes newer Pentagon offices into policy, such as the Chief Digital and Artificial Intelligence Office, which is tasked with implementing the Pentagon’s AI ethical principles. It lays out guidelines for minimizing the risk that failures in autonomous and semi-autonomous systems lead to unintended engagements, and it establishes the Autonomous Weapon Systems Working Group, overseen by the Under Secretary of Defense for Policy, to advise Pentagon leadership on autonomous technologies.
The framework advances the study of autonomous and semi-autonomous systems while ensuring that human judgment still plays a role in the use of force. The military is still figuring out how to integrate AI into its units, with efforts ranging from developing robotic wingmen for Air Force pilots to testing how troops can evade computer detection.
Recently, the Pentagon announced a $12 million partnership with Howard University to support research for the Air Force’s tactical autonomy program, which aims to develop systems that require minimal human involvement. While the Pentagon is enthusiastic about autonomous systems, it wants to avoid anything resembling Skynet, the fictional AI of the Terminator films.