{"publication_status":"accepted","status":"public","user_id":"93275","department":[{"_id":"660"}],"date_created":"2024-10-09T15:02:42Z","date_updated":"2024-10-09T15:06:27Z","citation":{"short":"F. Liedeker, O. Sanchez-Graillet, M. Seidler, C. Brandt, J. Wellmer, P. Cimiano, in: n.d.","ama":"Liedeker F, Sanchez-Graillet O, Seidler M, Brandt C, Wellmer J, Cimiano P. A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support.","bibtex":"@inproceedings{Liedeker_Sanchez-Graillet_Seidler_Brandt_Wellmer_Cimiano, title={A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support}, author={Liedeker, Felix and Sanchez-Graillet, Olivia and Seidler, Moana and Brandt, Christian and Wellmer, Jörg and Cimiano, Philipp}, booktitle={First Workshop on Natural Language Argument-Based Explanations}, year={2024} }","ieee":"F. Liedeker, O. Sanchez-Graillet, M. Seidler, C. Brandt, J. Wellmer, and P. Cimiano, “A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support,” presented at the First Workshop on Natural Language Argument-Based Explanations, Santiago de Compostela, Spain.","mla":"Liedeker, Felix, et al. A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support.","chicago":"Liedeker, Felix, Olivia Sanchez-Graillet, Moana Seidler, Christian Brandt, Jörg Wellmer, and Philipp Cimiano. “A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support,” n.d.","apa":"Liedeker, F., Sanchez-Graillet, O., Seidler, M., Brandt, C., Wellmer, J., & Cimiano, P. (n.d.). A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support. First Workshop on Natural Language Argument-Based Explanations, Santiago de Compostela, Spain."},"conference":{"location":"Santiago de Compostela, Spain","start_date":"2024-10-19","name":"First Workshop on Natural Language Argument-Based Explanations","end_date":"2024-10-24"},"project":[{"name":"TRR 318 - C5: TRR 318 - Subproject C5","_id":"128"}],"abstract":[{"text":"As the field of healthcare increasingly adopts artificial intelligence, it becomes important to understand which types of explanations increase transparency and empower users to develop confidence and trust in the predictions made by machine learning (ML) systems. \r\nIn shared decision-making scenarios where doctors cooperate with ML systems to reach an appropriate decision, establishing mutual trust is crucial. In this paper, we explore different approaches to generating explanations in eXplainable AI (XAI) and make their underlying arguments explicit so that they can be evaluated by medical experts.\r\nIn particular, we present the findings of a user study conducted with physicians to investigate their perceptions of various types of AI-generated explanations in the context of diagnostic decision support. The study aims to identify the most effective and useful explanations that enhance the diagnostic process. \r\nIn the study, medical doctors filled out a survey to assess different types of explanations. Further, a post-survey interview was carried out to gain qualitative insights into the requirements for explanations incorporated into diagnostic decision support. \r\nOverall, the insights gained from this study contribute to understanding the types of explanations that are most effective.","lang":"eng"}],"type":"conference","author":[{"id":"93275","first_name":"Felix","last_name":"Liedeker","full_name":"Liedeker, Felix"},{"full_name":"Sanchez-Graillet, Olivia","last_name":"Sanchez-Graillet","first_name":"Olivia"},{"first_name":"Moana","last_name":"Seidler","full_name":"Seidler, Moana"},{"full_name":"Brandt, Christian","last_name":"Brandt","first_name":"Christian"},{"full_name":"Wellmer, Jörg","last_name":"Wellmer","first_name":"Jörg"},{"last_name":"Cimiano","full_name":"Cimiano, Philipp","first_name":"Philipp"}],"language":[{"iso":"eng"}],"title":"A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support","_id":"56480","year":"2024"}