Safe Growth of Autonomous Systems Through International Regulation


2017 Arthur L. Williston Medal Winning Paper Abstract by Austin Patrick Kraus

 

Autonomous systems must be regulated to ensure that they change the world for the better, not destroy it. When Isaac Asimov proposed laws governing robots, he likely did not foresee his fiction becoming a reality so soon. As technology creeps toward ever more advanced machines, the need for global regulation and oversight must be considered. The ethical dilemma posed by even semi-autonomous systems becomes more acute with technology capable of making independent lethal decisions. A proposed new organization under the auspices of the United Nations, the Autonomous Systems Authority, would develop regulations for lethal autonomous technology that could be approved by and, if necessary, overridden by the General Assembly.

The weaponization of semi-autonomous systems, such as reconnaissance drones, lends credibility to the prospect of armed autonomous systems capable of independently carrying out lethal strikes. After missed opportunities to take out Osama bin Laden, and in the aftermath of the September 11, 2001 attacks, the conversation at the Pentagon and Langley was no longer whether to use armed Predator drones, but how to use them, even as proponents and critics debated the ethics of such use.

The debate becomes even fiercer when considering truly autonomous systems, capable of making life-or-death decisions without human intervention. Robotic technology can be split into two control standards: human in the loop (HITL) and human on the loop (HOTL). HITL systems require a human to make or approve some, if not all, of the decisions and actions taken by the vehicle. HOTL systems act autonomously while still allowing for human intervention, as illustrated in the sketch below.
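To make the distinction concrete, the following minimal Python sketch models the two control standards. The names (Decision, human_approves, and so on) are hypothetical and chosen purely for illustration; no real control architecture is implied.

```python
# Illustrative sketch only: a minimal model of the HITL/HOTL distinction
# described above. All names are hypothetical assumptions for this example.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str    # e.g., "hold", "track", "engage"
    lethal: bool   # whether the action could cause harm


def human_approves(decision: Decision) -> bool:
    """Stand-in for a human operator's judgment."""
    answer = input(f"Approve action '{decision.action}'? [y/N] ")
    return answer.strip().lower() == "y"


def hitl_execute(decision: Decision) -> bool:
    """Human in the loop: the system may not act until a human approves."""
    return human_approves(decision)   # no approval, no action


def hotl_execute(decision: Decision, human_veto: bool = False) -> bool:
    """Human on the loop: the system acts on its own unless a human intervenes."""
    return not human_veto             # default is autonomous execution
```

The asymmetry is the key point: under HITL the default is inaction until a human consents, while under HOTL the default is action unless a human objects in time.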

While autonomous weapons have ethical and strategic benefits, scientists, commanders, and researchers have warned of the potential pitfalls of autonomous technology should it not be properly regulated. In an open letter from the IJCAI 2015 conference, artificial intelligence and robotics researchers warned of the dangers of an AI arms race, arguing that such a race should be stopped before it starts (Autonomous Weapons: An Open Letter From AI & Robotics Researchers, 2015).

One danger in granting armed systems increasing autonomy is their inability to distinguish between combatants and civilians. Should an autonomous vehicle be given the capability to make life-or-death decisions, it cannot be programmed to sense when something is wrong or when it has made a faulty decision (Lin, Abney, & Bekey, 2012).

While it is almost impossible to fully ensure the safety of noncombatants in modern conflict, using machines that cannot distinguish combatants from noncombatants could have catastrophic consequences for which no one is accountable. When humans make mistakes, they can be held responsible; it is impossible to punish an inanimate object.

Before action can be taken to ensure safety and cooperation in the field of autonomous systems, appropriate governing bodies must determine what should be regulated. Under this proposed governing model, autonomous systems would be regulated based on three criteria: their application in society, their economic and trade impact, and engineering and technology limitations. For humanitarian reasons, HOTL robots would never be deployed without human support, and HITL systems would require constant human observers. Domestic deployment of autonomous weapons would be restricted to non-lethal applications unless the aggressor is defined as an unlawful violent militia or terrorist organization.

The economic and trade regulations imposed would also directly affect the quantity of autonomous systems distributed and who is authorized to receive them. Prohibiting sales to countries with grievous human rights violations, limiting the quantity of autonomous systems produced, and creating and enforcing sanctions against offending nations would help ensure international safety from the exploitation of autonomous systems.

The engineering innovations behind autonomous systems need equally strong regulation. Before any system is deployed, extensive testing must be performed and verified across all of its capabilities. Identification techniques, such as voice and facial recognition, and preventive measures, both physical and electronic, would be standard on all autonomous systems. Only conventional weapon systems allowed under current treaties would be deployed with artificial intelligence. Together, these regulations form a reasonable basis for society to safely manage the emergence of this technology, as sketched below.
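As a rough illustration of how such pre-deployment requirements could be expressed, the sketch below encodes them as a simple compliance checklist. Every field name and threshold is an assumption made for this example, not part of any existing or proposed standard.

```python
# Hypothetical sketch of a pre-deployment compliance check reflecting the
# requirements above; the fields and threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SystemAudit:
    identification_verified: bool    # voice and facial recognition tested
    physical_safeguards: bool        # e.g., a hardware disable mechanism
    electronic_safeguards: bool      # e.g., a tested remote-override channel
    treaty_compliant_weapons: bool   # only conventional, treaty-allowed arms
    capability_test_coverage: float  # fraction of capabilities verified, 0 to 1


def cleared_for_deployment(audit: SystemAudit, min_coverage: float = 1.0) -> bool:
    """Return True only if every mandated check passes."""
    return (
        audit.identification_verified
        and audit.physical_safeguards
        and audit.electronic_safeguards
        and audit.treaty_compliant_weapons
        and audit.capability_test_coverage >= min_coverage
    )
```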

With an uncertain and ever-expanding field of technology, regulation and oversight are necessary to prevent disaster. Because of the technology's simultaneous potential for progress and for destruction, the authority and standards should be set by the largest intergovernmental organization in the world: the United Nations.

Within the United Nations, the General Assembly (UNGA), the Security Council (UNSC), the International Court of Justice (ICJ), and the International Criminal Court (ICC) have primary jurisdiction over making, enforcing, and studying standards and practices. Currently, however, the United Nations cannot effectively set and enforce such regulations, largely because of an ineffective system of checks and balances.

For regulations to be created and enforced fairly, a new organization within the United Nations, the Autonomous Systems Authority (ASA), free from the veto power of the permanent five, could be established. This organization would include representatives from member states as well as experts from industry and academia, so that the resolutions it passes would not be solely political in nature. The ASA would develop regulations and submit them to the UNGA for approval. A request for an override could be initiated by a member state and presented to the UNGA, requiring a two-thirds vote to overturn. These resolutions could not be vetoed by the UNSC and would be jointly enforced by the UNSC and UNGA. Representatives from the ASA would monitor all member nations developing autonomous technology and oversee the design, testing, and production of these systems.

Unfortunately, regulations that would benefit society as a whole have both intended and unforeseen consequences. Restrictions on developing advanced weapons technology could inhibit progress on civilian applications, as oversight through the ASA would slow the design and production of autonomous systems. Regulations allowing only HITL systems to be deployed around civilian populations could lead countries to push the limits of what is considered a “human in the loop.” Restricting trade opens opportunities for back-door and black-market deals. Legalizing autonomous systems as front-line soldiers could itself spark an arms race: countries could begin to develop offensive systems either for fear of falling behind or to gain a strategic advantage over their adversaries. Transparency through assessment and verification must therefore become a priority among nations in order to mitigate these unintended effects.

Autonomous systems represent another turning point in technology and ethics. While this technology can advance military operations and benefit society in general, proper regulations must be put in place to ensure the safety of society as a whole. For non-lethal systems, individual countries should regulate development themselves. For lethal systems, however, humankind has demonstrated its desire to expand and advance with sometimes unknown consequences, and international oversight is needed. With regulation provided by the Autonomous Systems Authority, steps can be taken to ensure that this emerging technology is governed ethically for the benefit of all humankind.

Austin Patrick Kraus, University of Evansville (Evansville, Indiana) and University of Kansas (Lawrence, Kansas)

References:

Autonomous Weapons: An Open Letter From AI & Robotics Researchers. (2015, July 28). Retrieved from Future of Life Institute: https://futureoflife.org/open-letter-autonomous-weapons/

Henckaerts, J. (2011, April 30). The Laws of War. Retrieved from Human Rights Investigations: https://humanrightsinvestigations.org/the-laws-of-war/

Lin, P., Abney, K., & Bekey, G. A. (2012). Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: The MIT Press.

Committee on Autonomous Vehicles in Support of Naval Operations, National Research Council. (2005). Autonomous Vehicles in Support of Naval Operations. Washington, D.C.: The National Academies Press.

Rosenberg, M., & Markoff, J. (2016, October 25). The Pentagon's 'Terminator Conundrum': Robots That Could Kill on Their Own. Retrieved from The New York Times: https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html

Savage, C. (2011, October 10). Al Qaeda Group Confirms Deaths of Two American Citizens. Retrieved from The New York Times: https://atwar.blogs.nytimes.com/2011/10/10/al-qaeda-group-confirms-deaths-of-two-american-citizens

Shaw, I. G. (2014). The Rise of the Predator Empire: Tracing the History of U.S. Drones. Retrieved from Understanding Empire: https://understandingempire.wordpress.com/2-0-a-brief-history-of-u-s-drones/

Tice, C. B. (1991). Unmanned Aerial Vehicles. Retrieved from Internet Archive: Wayback Machine: https://web.archive.org/web/20090724015052/http://www.airpower.maxwell.af.mil/airchronicles/apj/apj91/spr91/4spr91.htm

Windrem, R. (2013, June 5). How the Predator went from eye in the sky to war on terror's weapon of choice. Retrieved from NBC News: http://investigations.nbcnews.com/_news/2013/06/05/18780716-how-the-predator-went-from-eye-in-the-sky-to-war-on-terrors-weapon-of-choice
