Weaponized Artificial Intelligence (AI) and the Laws of Armed Conflict (LOAC): The RAILE Project

Morgan M. Broman, Pamela Finckenberg-Broman

    Research output: Chapter in Book/Report/Conference proceeding › Conference Paper published in Proceedings › peer-review


    Much has already been written about Artificial Intelligence (AI), robotics and autonomous systems, in particular the increasingly prevalent autonomous vehicles: cars, trucks, trains and, to a lesser extent, aeroplanes. This article looks at an emerging technology with a fundamental impact on our society, namely the use of AI in lethal autonomous weapon systems (LAWS) – weaponized AI – as used by the armed forces. It specifically approaches the question of how law and policy for this particular form of emerging technology, the military application of autonomous weapon systems (AWS), could be developed. The article focuses on how potential solutions may be found rather than on the well-established issues. Currently, there are three main streams in the debate around how to deal with LAWS: the ‘total ban’, the ‘wait and see’ and the ‘pre-emptive legislation’ paths. The recent increase in the development of LAWS has led Human Rights Watch (HRW) to take a strong stance against ‘killer robots’, promoting a total ban. This raises legal issues already at the first stage, the definition of autonomous weapons, which is inconsistent but often refers to the HRW three-step listing: human-in/on/out-of-the-loop. The fact remains, however, that LAWS already exist and continue to be developed, which raises the question of how to deal with them. From a civilian perspective, the initial legal focus has been on liability in relation to accidents. On the military side, international legislation has been, and still is, striving through a series of treaties between states to regulate the behaviour of troops in the field of armed conflict. These treaties, at times referred to as the Laws of Armed Conflict (LOAC) and at times as International Humanitarian Law (IHL), share four fundamental core principles: distinction, proportionality, humanity and military necessity.
With LAWS an unavoidable fact in today’s field of armed conflict, and rules governing troop behaviour existing in the form of international treaties, what is the next step? This article presents a short description of each debate stream, drawing on relevant literature, including a selection of arguments raised by prominent authors in the field of AWS and international law. The question for this article is: how do we achieve AWS/AI programming which adheres to the LOAC/IHL’s intentions in the core principles of distinction, proportionality, humanity and military necessity?
    Original language: English
    Title of host publication: Human Interaction & Emerging Technologies (IHIET-AI 2022)
    Subtitle of host publication: Artificial Intelligence & Future Applications
    Editors: Tareq Ahram, Redha Taiar
    Place of publication: Lausanne, Switzerland
    Publisher: AHFE International
    Number of pages: 10
    ISBN (Electronic): 978-1-7923-8989-4
    Publication status: Published – 26 Apr 2022
    Event: 7th International Conference on Human Interaction & Emerging Technologies: Artificial Intelligence & Future Applications – Virtual, Lausanne, Switzerland
    Duration: 21 Apr 2022 – 23 Apr 2022
    Conference number: 7

    Publication series

    Name: Intelligent Systems and Computing


    Conference: 7th International Conference on Human Interaction & Emerging Technologies
    Abbreviated title: IHIET-AI 2022


