{"date_updated":"2025-05-02T09:32:06Z","department":[{"_id":"424"},{"_id":"660"}],"status":"public","abstract":[{"lang":"eng","text":"A current concern in the field of Artificial Intelligence (AI) is to ensure the trustworthiness of AI systems. The development of explainability methods is one prominent way to address this, which has often resulted in the assumption that the use of explainability will lead to an increase in the trust of users and wider society. However, the dynamics between explainability and trust are not well established and empirical investigations of their relation remain mixed or inconclusive.\r\nIn this paper, we provide a detailed description of the concepts of user trust and distrust in AI and their relation to appropriate reliance. To do so, we draw on the fields of machine learning, human–computer interaction, and the social sciences. Based on these insights, we have conducted a focused survey of existing empirical studies that investigate the effects of AI systems and XAI methods on user (dis)trust, in order to substantiate our conceptualization of trust, distrust, and reliance. With respect to our conceptual understanding, we identify gaps in existing empirical work.
By clarifying the concepts and summarizing the empirical studies, we aim to provide researchers who examine user trust in AI with an improved starting point for developing user studies to measure and evaluate the user’s attitude towards and reliance on AI systems."}],"project":[{"name":"TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen","_id":"124"}],"language":[{"iso":"eng"}],"doi":"10.1016/j.cogsys.2025.101357","_id":"59756","publication_identifier":{"issn":["1389-0417"]},"author":[{"first_name":"Roel","full_name":"Visser, Roel","last_name":"Visser"},{"last_name":"Peters","full_name":"Peters, Tobias Martin","id":"92810","first_name":"Tobias Martin","orcid":"0009-0008-5193-6243"},{"first_name":"Ingrid","orcid":"0000-0003-2364-9489","id":"451","last_name":"Scharlau","full_name":"Scharlau, Ingrid"},{"first_name":"Barbara","full_name":"Hammer, Barbara","last_name":"Hammer"}],"user_id":"92810","publication":"Cognitive Systems Research","publisher":"Elsevier BV","title":"Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation","publication_status":"inpress","date_created":"2025-05-02T09:26:15Z","citation":{"ieee":"R. Visser, T. M. Peters, I. Scharlau, and B. Hammer, “Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation,” Cognitive Systems Research, Art. no. 101357, doi: 10.1016/j.cogsys.2025.101357.","ama":"Visser R, Peters TM, Scharlau I, Hammer B. Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation. Cognitive Systems Research.
doi:10.1016/j.cogsys.2025.101357","bibtex":"@article{Visser_Peters_Scharlau_Hammer, title={Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation}, DOI={10.1016/j.cogsys.2025.101357}, number={101357}, journal={Cognitive Systems Research}, publisher={Elsevier BV}, author={Visser, Roel and Peters, Tobias Martin and Scharlau, Ingrid and Hammer, Barbara} }","chicago":"Visser, Roel, Tobias Martin Peters, Ingrid Scharlau, and Barbara Hammer. “Trust, Distrust, and Appropriate Reliance in (X)AI: A Conceptual Clarification of User Trust and Survey of Its Empirical Evaluation.” Cognitive Systems Research, n.d. https://doi.org/10.1016/j.cogsys.2025.101357.","mla":"Visser, Roel, et al. “Trust, Distrust, and Appropriate Reliance in (X)AI: A Conceptual Clarification of User Trust and Survey of Its Empirical Evaluation.” Cognitive Systems Research, 101357, Elsevier BV, doi:10.1016/j.cogsys.2025.101357.","short":"R. Visser, T.M. Peters, I. Scharlau, B. Hammer, Cognitive Systems Research (n.d.).","apa":"Visser, R., Peters, T. M., Scharlau, I., & Hammer, B. (n.d.). Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation. Cognitive Systems Research, Article 101357. https://doi.org/10.1016/j.cogsys.2025.101357"},"article_number":"101357","keyword":["XAI","Appropriate trust","Distrust","Reliance","Human-centric evaluation","Trustworthy AI"],"year":"2025","type":"journal_article"}