{"keyword":["Algorithmic Bias","AI","Decision Support Systems","Autonomous Weapons Systems"],"publisher":"ICRC Humanitarian Law & Policy Blog","date_created":"2024-09-30T11:44:28Z","citation":{"short":"I. Bhila, I. Bode, The Problem of Algorithmic Bias in AI-Based Military Decision Support Systems, ICRC Humanitarian Law & Policy Blog, 2024.","apa":"Bhila, I., & Bode, I. (2024). The problem of algorithmic bias in AI-based military decision support systems. ICRC Humanitarian Law & Policy Blog.","ieee":"I. Bhila and I. Bode, The problem of algorithmic bias in AI-based military decision support systems. ICRC Humanitarian Law & Policy Blog, 2024.","ama":"Bhila I, Bode I. The Problem of Algorithmic Bias in AI-Based Military Decision Support Systems. ICRC Humanitarian Law & Policy Blog; 2024.","chicago":"Bhila, Ishmael, and Ingvild Bode. The Problem of Algorithmic Bias in AI-Based Military Decision Support Systems. ICRC Humanitarian Law & Policy Blog, 2024.","bibtex":"@misc{Bhila_Bode_2024, title={The problem of algorithmic bias in AI-based military decision support systems}, publisher={ICRC Humanitarian Law & Policy Blog}, author={Bhila, Ishmael and Bode, Ingvild}, year={2024} }","mla":"Bhila, Ishmael, and Ingvild Bode. The Problem of Algorithmic Bias in AI-Based Military Decision Support Systems. ICRC Humanitarian Law & Policy Blog, 2024."},"title":"The problem of algorithmic bias in AI-based military decision support systems","oa":"1","user_id":"105772","language":[{"iso":"eng"}],"date_updated":"2024-09-30T11:44:40Z","publication_status":"published","author":[{"full_name":"Bhila, Ishmael","last_name":"Bhila","first_name":"Ishmael","id":"105772"},{"first_name":"Ingvild","last_name":"Bode","full_name":"Bode, Ingvild"}],"has_accepted_license":"1","main_file_link":[{"open_access":"1","url":"https://blogs.icrc.org/law-and-policy/2024/09/03/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems/"}],"_id":"56282","type":"misc","abstract":[{"lang":"eng","text":"Algorithmic bias has long been recognized as a key problem affecting decision-making processes that integrate artificial intelligence (AI) technologies. The increased use of AI in making military decisions relevant to the use of force has sustained such questions about biases in these technologies and in how human users programme with and rely on data based on hierarchized socio-cultural norms, knowledges, and modes of attention.\r\n\r\nIn this post, Dr Ingvild Bode, Professor at the Center for War Studies, University of Southern Denmark, and Ishmael Bhila, PhD researcher at the “Meaningful Human Control: Between Regulation and Reflexion” project, Paderborn University, unpack the problem of algorithmic bias with reference to AI-based decision support systems (AI DSS). They examine three categories of algorithmic bias – preexisting bias, technical bias, and emergent bias – across four lifecycle stages of an AI DSS, concluding that stakeholders in the ongoing discussion about AI in the military domain should consider the impact of algorithmic bias on AI DSS more seriously."}],"status":"public","year":"2024"}