While all of this talk of military AI applications may sound like a wild, wild west scenario — and indeed, things are moving extremely fast — some folks have thankfully raised red flags to try to establish ground rules for how military AI should operate. In 2023, the United States joined a 47-nation international agreement dubbed the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” The goal, per the Department of Defense, was to ensure that military AI use “advances international norms on responsible military use of AI and autonomy, provides a basis for building common understanding, and creates a community for all states to exchange best practices.” Such norms include well-defined uses for AI, appropriate safeguards and oversight, and reliance on well-trained personnel. As the U.S. Department of State lists, abiding countries include practically all European nations, the U.K., Japan, Singapore, Morocco, the Dominican Republic, and more.
And yet, as always, the problem is: What about the nations that don’t agree to abide by such ethics? The Department of Defense might have an answer to that question, because its stated goal is to “give warfighters the edge in deterring and, as necessary, defeating adversaries anywhere around the globe.” “The edge” is the key term there, provided fundamental ethical principles don’t get sidelined. Those principles were outlined back in 2021 in a memorandum from the office of the Deputy Secretary of Defense: responsible, equitable, traceable, reliable, and governable. Only time will tell whether such ideals hold up against real-world pressures.