{"project":[{"name":"TRR 318 - B3: TRR 318 - Subproject B3","_id":"122"},{"_id":"124","name":"TRR 318 - C1: TRR 318 - Subproject C1 - Healthy Distrust in Explanations"},{"grant_number":"438445824","name":"TRR 318 - B06: TRR 318 - Subproject B6 - Ethics and Normativity of Explainable AI","_id":"370"}],"language":[{"iso":"eng"}],"_id":"59917","title":"Healthy Distrust in AI systems","date_updated":"2025-05-16T19:51:26Z","publication":"arXiv","type":"preprint","citation":{"ama":"Paaßen B, Alpsancar S, Matzner T, Scharlau I. Healthy Distrust in AI systems. arXiv. Published online 2025.","mla":"Paaßen, Benjamin, et al. “Healthy Distrust in AI Systems.” ArXiv, 2025.","bibtex":"@article{Paaßen_Alpsancar_Matzner_Scharlau_2025, title={Healthy Distrust in AI systems}, journal={arXiv}, author={Paaßen, Benjamin and Alpsancar, Suzana and Matzner, Tobias and Scharlau, Ingrid}, year={2025} }","apa":"Paaßen, B., Alpsancar, S., Matzner, T., & Scharlau, I. (2025). Healthy Distrust in AI systems. In arXiv.","chicago":"Paaßen, Benjamin, Suzana Alpsancar, Tobias Matzner, and Ingrid Scharlau. “Healthy Distrust in AI Systems.” ArXiv, 2025.","short":"B. Paaßen, S. Alpsancar, T. Matzner, I. Scharlau, ArXiv (2025).","ieee":"B. Paaßen, S. Alpsancar, T. Matzner, and I. Scharlau, “Healthy Distrust in AI systems,” arXiv, 2025."},"user_id":"93637","year":"2025","main_file_link":[{"url":"https://arxiv.org/abs/2505.09747","open_access":"1"}],"author":[{"last_name":"Paaßen","full_name":"Paaßen, Benjamin","first_name":"Benjamin"},{"full_name":"Alpsancar, Suzana","last_name":"Alpsancar","first_name":"Suzana","id":"93637"},{"id":"65695","first_name":"Tobias","full_name":"Matzner, Tobias","last_name":"Matzner"},{"first_name":"Ingrid","id":"451","orcid":"0000-0003-2364-9489","full_name":"Scharlau, Ingrid","last_name":"Scharlau"}],"department":[{"_id":"424"},{"_id":"26"},{"_id":"14"}],"abstract":[{"lang":"eng","text":"Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and, thus, enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone---nor should they be, if the AI system is embedded in a social context that gives good reason to believe that it is used in tension with a person’s interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term \\emph{healthy distrust} to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize healthy distrust as a crucial part of AI usage that respects human autonomy."}],"date_created":"2025-05-16T09:39:13Z","status":"public","oa":"1"}