Agency in metaphors of explaining: An analysis of scientific texts
https://ris.uni-paderborn.de/record/59839
Scharlau, Ingrid; Rohlfing, Katharina J. (2025)

In many scientific approaches, especially those that try to foster the explainability of Artificial Intelligences, a narrow conception of explaining prevails. This narrow conception implies that explaining is a one-directional action in which knowledge is transferred from the explainer to an addressee. By studying the degree of agency in metaphors for explaining in scientific texts, we want to find out, or at least contribute a partial answer to the question, why this narrow conception is so dominant. For our analysis, we use a linguistic conception of agency, transitivity. This concept makes it possible to specify the degree of agency or effectiveness of the action in a verbalised event. It is defined by several component parts. We detail and discuss both the individual parameters of transitivity and global transitivity. Overall, the transitivity of explaining metaphors follows a rather common pattern: agency is not high and is reduced in characteristic aspects. The metaphors imply that the object of explaining is static, i.e., is not changed within the explanation, and that explaining is the activity of one person only. This pattern may account for the narrow conception of explaining. It contrasts strongly with current co-constructive or sociotechnical approaches to explainability.

Publisher: Center for Open Science
Language: English
Access: closed access
Type: preprint
Citation: Scharlau I, Rohlfing KJ. Agency in metaphors of explaining: An analysis of scientific texts. Published online 2025.

Healthy Distrust in AI systems
https://ris.uni-paderborn.de/record/59917
Paaßen, Benjamin; Alpsancar, Suzana; Matzner, Tobias; Scharlau, Ingrid (2025)

Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and thus enhance the adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone; nor should they be, if the AI system is embedded in a social context that gives good reason to believe that it is used in tension with that person's interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term "healthy distrust" to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize healthy distrust as a crucial part of AI usage that respects human autonomy.

Language: English
Funding: grant agreement EC/438445824
Access: open access
Type: preprint
Citation: Paaßen B, Alpsancar S, Matzner T, Scharlau I. Healthy Distrust in AI systems. arXiv. Published online 2025.