What Can We Learn About AI Regulation from the Military?

In a bustling restaurant in downtown Anytown, USA, an overwhelmed manager turns to artificial intelligence to cope with staffing shortages and customer-service demands. Across town, a weary newspaper editor uses AI to generate news content. Both belong to a growing number of people who rely on AI to handle everyday business tasks. But what happens when the technology fails or, worse, creates risks we haven't fully considered? The current political discourse focuses largely on the roughly eight influential companies that produce AI, and a sweeping new executive order on artificial intelligence is likewise geared toward developers and government users. It's time to also focus on how to regulate (and, frankly, help) the millions of smaller players and individuals who will increasingly use this technology. As we navigate this uncharted territory, guidance can come from an unexpected source: the U.S. military.

Every day, the American military entrusts the world's most powerful weaponry to hundreds of thousands of servicemen and women deployed worldwide, the vast majority of them under 30 years old. It mitigates the risks of placing such powerful technology in the hands of young and often novice users through a three-pronged approach: it regulates the technology, the users, and their units. The government has the opportunity to do the same with AI.

Depending on the task, military personnel must successfully complete courses, training, and oral exams before earning the right to operate a ship, fire a weapon, or even, in some cases, perform technical maintenance. Each qualification reflects the system's technical complexity, its lethality, and the decision-making authority the user will be granted. Moreover, knowing that even qualified individuals get tired, bored, or stressed, the military maintains a backup system of standard operating procedures (SOPs) and checklists that enforce sequential, safe behavior, a practice that surgeons, for example, have emulated.

Risk mitigation in the armed forces goes beyond individual qualifications to encompass entire units. A "carrier qualification," for example, is not earned by individual pilots alone; it must be demonstrated jointly by the aircraft carrier and its associated air wing (its group of pilots). Unit qualifications emphasize teamwork, collective responsibility, and the integrated functioning of multiple roles within a specific context, ensuring that every team member not only excels at individual tasks but fully understands their duties in the broader context.

Finally, to complement qualifications and checklists, the military divides authority among individuals according to the task and the person's level of responsibility or seniority. A surface warfare officer, for example, even with the authority to release weapons, must still seek the ship captain's approval to launch certain types of ordnance. This check ensures that individuals with the appropriate authority and knowledge can mitigate particular categories of risk, such as those that could lead to conflict escalation or the depletion of critical weaponry.

These military risk-mitigation strategies should inform conversations about how to regulate AI, since similar approaches have already proven effective outside the military: qualifications, SOPs, and designated authorities complement technical and engineering standards in sectors such as healthcare, finance, and law enforcement. While the military has a unique ability to enforce such qualification regimes, these frameworks can also be applied effectively in the civilian sector, where their adoption can be driven by demonstrating their value to businesses, through government regulation, or through economic incentives.