---
_id: '63611'
abstract:
- lang: eng
  text: When humans interact with artificial intelligence (AI), one desideratum is
    appropriate trust. Typically, appropriate trust means that humans trust
    AI except in instances in which they either explicitly notice AI errors or
    suspect that errors could be present. So far, appropriate trust or related
    notions have mainly been investigated by assessing trust and reliance. In this
    contribution, we argue that these assessments are insufficient to measure the
    complex aim of appropriate trust and the related notion of healthy distrust. We
    introduce and test the perspective of covert visual attention as an additional
    indicator for appropriate trust and draw conceptual connections to the notion
    of healthy distrust. To test the validity of our conceptualization, we formalize
    visual attention using the Theory of Visual Attention and measure its properties
    that are potentially relevant to appropriate trust and healthy distrust in an
    image classification task. Based on temporal-order judgment performance, we estimate
    participants' attentional capacity and attentional weight toward correct and incorrect
    mock-up AI classifications. We observe that misclassifications reduce attentional
    capacity compared to correct classifications. However, our results do not indicate
    that this reduction is beneficial for a subsequent judgment of the classifications.
    The attentional weighting is not affected by the classifications' correctness
    but by the difficulty of categorizing the stimuli themselves. We discuss these
    results, their implications, and the limited potential for using visual attention
    as an indicator of appropriate trust and healthy distrust.
article_number: '1694367'
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Kai
  full_name: Biermeier, Kai
  id: '55908'
  last_name: Biermeier
  orcid: 0000-0002-2879-2359
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Biermeier K, Scharlau I. Assessing healthy distrust in human-AI
    interaction: interpreting changes in visual attention. <i>Frontiers in Psychology</i>.
    2026;16. doi:<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>'
  apa: 'Peters, T. M., Biermeier, K., &#38; Scharlau, I. (2026). Assessing healthy
    distrust in human-AI interaction: interpreting changes in visual attention. <i>Frontiers
    in Psychology</i>, <i>16</i>, Article 1694367. <a href="https://doi.org/10.3389/fpsyg.2025.1694367">https://doi.org/10.3389/fpsyg.2025.1694367</a>'
  bibtex: '@article{Peters_Biermeier_Scharlau_2026, title={Assessing healthy distrust
    in human-AI interaction: interpreting changes in visual attention}, volume={16},
    DOI={<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>},
    number={1694367}, journal={Frontiers in Psychology}, publisher={Frontiers Media
    SA}, author={Peters, Tobias Martin and Biermeier, Kai and Scharlau, Ingrid}, year={2026}
    }'
  chicago: 'Peters, Tobias Martin, Kai Biermeier, and Ingrid Scharlau. “Assessing
    Healthy Distrust in Human-AI Interaction: Interpreting Changes in Visual Attention.”
    <i>Frontiers in Psychology</i> 16 (2026). <a href="https://doi.org/10.3389/fpsyg.2025.1694367">https://doi.org/10.3389/fpsyg.2025.1694367</a>.'
  ieee: 'T. M. Peters, K. Biermeier, and I. Scharlau, “Assessing healthy distrust
    in human-AI interaction: interpreting changes in visual attention,” <i>Frontiers
    in Psychology</i>, vol. 16, Art. no. 1694367, 2026, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>.'
  mla: 'Peters, Tobias Martin, et al. “Assessing Healthy Distrust in Human-AI Interaction:
    Interpreting Changes in Visual Attention.” <i>Frontiers in Psychology</i>, vol.
    16, 1694367, Frontiers Media SA, 2026, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>.'
  short: T.M. Peters, K. Biermeier, I. Scharlau, Frontiers in Psychology 16 (2026).
date_created: 2026-01-14T14:21:59Z
date_updated: 2026-01-14T14:29:03Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1694367
intvolume: '16'
keyword:
- appropriate trust
- healthy distrust
- visual attention
- Theory of Visual Attention
- human-AI interaction
- Bayesian cognitive model
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - TP C01: Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_identifier:
  issn:
  - 1664-1078
publication_status: published
publisher: Frontiers Media SA
status: public
title: 'Assessing healthy distrust in human-AI interaction: interpreting changes in
  visual attention'
type: journal_article
user_id: '92810'
volume: 16
year: '2026'
...
---
_id: '59756'
abstract:
- lang: eng
  text: "A current concern in the field of Artificial Intelligence (AI) is to ensure
    the trustworthiness of AI systems. The development of explainability methods is
    one prominent way to address this, and it has often led to the assumption that
    the use of explainability will increase the trust of users and wider society.
    However, the dynamics between explainability and trust are not well established,
    and empirical investigations of their relation remain mixed or inconclusive.\r\nIn
    this paper, we provide a detailed description of the concepts of user trust and
    distrust in AI and their relation to appropriate reliance. For that, we draw from
    the fields of machine learning, human–computer interaction, and the social sciences.
    Based on these insights, we have conducted a focused survey of existing empirical
    studies that investigate the effects of AI systems and XAI methods on user (dis)trust,
    in order to substantiate our conceptualization of trust, distrust, and reliance.
    With respect to our conceptual understanding, we identify gaps in existing empirical
    work. By clarifying the concepts and summarizing the empirical studies, we aim
    to provide researchers who examine user trust in AI with an improved starting
    point for developing user studies to measure and evaluate the user’s attitude
    towards and reliance on AI systems."
article_number: '101357'
author:
- first_name: Roel
  full_name: Visser, Roel
  last_name: Visser
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Barbara
  full_name: Hammer, Barbara
  last_name: Hammer
citation:
  ama: 'Visser R, Peters TM, Scharlau I, Hammer B. Trust, distrust, and appropriate
    reliance in (X)AI: A conceptual clarification of user trust and survey of its
    empirical evaluation. <i>Cognitive Systems Research</i>. Published online 2025.
    doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>'
  apa: 'Visser, R., Peters, T. M., Scharlau, I., &#38; Hammer, B. (2025). Trust, distrust,
    and appropriate reliance in (X)AI: A conceptual clarification of user trust and
    survey of its empirical evaluation. <i>Cognitive Systems Research</i>, Article
    101357. <a href="https://doi.org/10.1016/j.cogsys.2025.101357">https://doi.org/10.1016/j.cogsys.2025.101357</a>'
  bibtex: '@article{Visser_Peters_Scharlau_Hammer_2025, title={Trust, distrust, and
    appropriate reliance in (X)AI: A conceptual clarification of user trust and survey
    of its empirical evaluation}, DOI={<a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>},
    number={101357}, journal={Cognitive Systems Research}, publisher={Elsevier BV},
    author={Visser, Roel and Peters, Tobias Martin and Scharlau, Ingrid and Hammer,
    Barbara}, year={2025} }'
  chicago: 'Visser, Roel, Tobias Martin Peters, Ingrid Scharlau, and Barbara Hammer.
    “Trust, Distrust, and Appropriate Reliance in (X)AI: A Conceptual Clarification
    of User Trust and Survey of Its Empirical Evaluation.” <i>Cognitive Systems Research</i>,
    2025. <a href="https://doi.org/10.1016/j.cogsys.2025.101357">https://doi.org/10.1016/j.cogsys.2025.101357</a>.'
  ieee: 'R. Visser, T. M. Peters, I. Scharlau, and B. Hammer, “Trust, distrust, and
    appropriate reliance in (X)AI: A conceptual clarification of user trust and survey
    of its empirical evaluation,” <i>Cognitive Systems Research</i>, Art. no. 101357,
    2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>.'
  mla: 'Visser, Roel, et al. “Trust, Distrust, and Appropriate Reliance in (X)AI:
    A Conceptual Clarification of User Trust and Survey of Its Empirical Evaluation.”
    <i>Cognitive Systems Research</i>, 101357, Elsevier BV, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>.'
  short: R. Visser, T.M. Peters, I. Scharlau, B. Hammer, Cognitive Systems Research
    (2025).
date_created: 2025-05-02T09:26:15Z
date_updated: 2025-05-15T11:16:27Z
department:
- _id: '424'
- _id: '660'
doi: 10.1016/j.cogsys.2025.101357
keyword:
- XAI
- Appropriate trust
- Distrust
- Reliance
- Human-centric evaluation
- Trustworthy AI
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - Subproject C1: Gesundes Misstrauen in Erklärungen'
publication: Cognitive Systems Research
publication_identifier:
  issn:
  - 1389-0417
publication_status: published
publisher: Elsevier BV
status: public
title: 'Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification
  of user trust and survey of its empirical evaluation'
type: journal_article
user_id: '92810'
year: '2025'
...
---
_id: '59755'
abstract:
- lang: eng
  text: "Due to the application of Artificial Intelligence (AI) in high-risk domains
    like law or medicine, trustworthy AI and trust in AI are of increasing scientific
    and public relevance. A typical conception, for example in the context of medical
    diagnosis, is that a knowledgeable user receives an AI-generated classification
    as advice. Research to improve such interactions often aims to foster the user’s
    trust, which in turn should improve the combined human-AI performance. Given that
    AI models can err, we argue that the possibility of critically reviewing, and
    thus distrusting, an AI decision is an equally interesting target of research.\r\nWe
    created two image classification scenarios in which the participants received
    mock-up AI advice. The quality of the advice decreases during one phase of the
    experiment. We studied the task performance, trust, and distrust of the participants,
    and tested whether an instruction to remain skeptical and review each piece of
    advice led to a better performance compared to a neutral condition. Our results
    indicate that this instruction does not improve but rather worsens the participants’
    performance. Repeated single-item self-report of trust and distrust shows an
    increase in trust and a decrease in distrust after the drop in the AI’s classification
    quality, with no difference between the two instructions. Furthermore, via a
    Bayesian Signal Detection Theory analysis, we provide a procedure to assess appropriate
    reliance in detail by quantifying whether the problems of under- and over-reliance
    have been mitigated. We discuss implications of our results for the usage of
    disclaimers before interacting with AI, as prominently used in current LLM-based
    chatbots, and for trust and distrust research."
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Scharlau I. Interacting with fallible AI: Is distrust helpful when
    receiving AI misclassifications? <i>Frontiers in Psychology</i>. 2025;16. doi:<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>'
  apa: 'Peters, T. M., &#38; Scharlau, I. (2025). Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications? <i>Frontiers in Psychology</i>,
    <i>16</i>. <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>'
  bibtex: '@article{Peters_Scharlau_2025, title={Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications?}, volume={16}, DOI={<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>},
    journal={Frontiers in Psychology}, author={Peters, Tobias Martin and Scharlau,
    Ingrid}, year={2025} }'
  chicago: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible
    AI: Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in
    Psychology</i> 16 (2025). <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>.'
  ieee: 'T. M. Peters and I. Scharlau, “Interacting with fallible AI: Is distrust
    helpful when receiving AI misclassifications?,” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  mla: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible AI:
    Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  short: T.M. Peters, I. Scharlau, Frontiers in Psychology 16 (2025).
date_created: 2025-05-02T09:22:39Z
date_updated: 2025-05-27T09:10:09Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1574809
intvolume: '16'
keyword:
- trust in AI
- trust
- distrust
- human-AI interaction
- Signal Detection Theory
- Bayesian parameter estimation
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - Subproject C1: Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_status: published
status: public
title: 'Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?'
type: journal_article
user_id: '92810'
volume: 16
year: '2025'
...
