---
_id: '63611'
abstract:
- lang: eng
  text: When humans interact with artificial intelligence (AI), one desideratum is
    appropriate trust. Typically, appropriate trust means that humans trust AI
    except in instances in which they either explicitly notice AI errors or are
    suspicious that errors could be present. So far, appropriate trust and related
    notions have mainly been investigated by assessing trust and reliance. In this
    contribution, we argue that these assessments are insufficient to measure the
    complex aim of appropriate trust and the related notion of healthy distrust. We
    introduce and test the perspective of covert visual attention as an additional
    indicator for appropriate trust and draw conceptual connections to the notion
    of healthy distrust. To test the validity of our conceptualization, we formalize
    visual attention using the Theory of Visual Attention and measure its properties
    that are potentially relevant to appropriate trust and healthy distrust in an
    image classification task. Based on temporal-order judgment performance, we estimate
    participants' attentional capacity and attentional weight toward correct and incorrect
    mock-up AI classifications. We observe that misclassifications reduce attentional
    capacity compared to correct classifications. However, our results do not indicate
    that this reduction is beneficial for a subsequent judgment of the classifications.
    The attentional weighting is not affected by the classifications' correctness
    but by the difficulty of categorizing the stimuli themselves. We discuss these
    results, their implications, and the limited potential for using visual attention
    as an indicator of appropriate trust and healthy distrust.
article_number: '1694367'
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Kai
  full_name: Biermeier, Kai
  id: '55908'
  last_name: Biermeier
  orcid: 0000-0002-2879-2359
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Biermeier K, Scharlau I. Assessing healthy distrust in human-AI
    interaction: interpreting changes in visual attention. <i>Frontiers in Psychology</i>.
    2026;16. doi:<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>'
  apa: 'Peters, T. M., Biermeier, K., &#38; Scharlau, I. (2026). Assessing healthy
    distrust in human-AI interaction: interpreting changes in visual attention. <i>Frontiers
    in Psychology</i>, <i>16</i>, Article 1694367. <a href="https://doi.org/10.3389/fpsyg.2025.1694367">https://doi.org/10.3389/fpsyg.2025.1694367</a>'
  bibtex: '@article{Peters_Biermeier_Scharlau_2026, title={Assessing healthy distrust
    in human-AI interaction: interpreting changes in visual attention}, volume={16},
    DOI={<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>},
    number={1694367}, journal={Frontiers in Psychology}, publisher={Frontiers Media
    SA}, author={Peters, Tobias Martin and Biermeier, Kai and Scharlau, Ingrid}, year={2026}
    }'
  chicago: 'Peters, Tobias Martin, Kai Biermeier, and Ingrid Scharlau. “Assessing
    Healthy Distrust in Human-AI Interaction: Interpreting Changes in Visual Attention.”
    <i>Frontiers in Psychology</i> 16 (2026). <a href="https://doi.org/10.3389/fpsyg.2025.1694367">https://doi.org/10.3389/fpsyg.2025.1694367</a>.'
  ieee: 'T. M. Peters, K. Biermeier, and I. Scharlau, “Assessing healthy distrust
    in human-AI interaction: interpreting changes in visual attention,” <i>Frontiers
    in Psychology</i>, vol. 16, Art. no. 1694367, 2026, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>.'
  mla: 'Peters, Tobias Martin, et al. “Assessing Healthy Distrust in Human-AI Interaction:
    Interpreting Changes in Visual Attention.” <i>Frontiers in Psychology</i>, vol.
    16, 1694367, Frontiers Media SA, 2026, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>.'
  short: T.M. Peters, K. Biermeier, I. Scharlau, Frontiers in Psychology 16 (2026).
date_created: 2026-01-14T14:21:59Z
date_updated: 2026-01-14T14:29:03Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1694367
intvolume: '16'
keyword:
- appropriate trust
- healthy distrust
- visual attention
- Theory of Visual Attention
- human-AI interaction
- Bayesian cognitive model
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 ; TP C01: Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_identifier:
  issn:
  - 1664-1078
publication_status: published
publisher: Frontiers Media SA
status: public
title: 'Assessing healthy distrust in human-AI interaction: interpreting changes in
  visual attention'
type: journal_article
user_id: '92810'
volume: 16
year: '2026'
...
---
_id: '59755'
abstract:
- lang: eng
  text: "Due to the application of Artificial Intelligence (AI) in high-risk domains
    like law or medicine, trustworthy AI and trust in AI are of increasing scientific
    and public relevance. A typical conception, for example in the context of medical
    diagnosis, is that a knowledgeable user receives an AI-generated classification
    as advice. Research to improve such interactions often aims to foster the user’s
    trust, which in turn should improve the combined human-AI performance. Given that
    AI models can err, we argue that the possibility to critically review, and thus
    to distrust, an AI decision is an equally interesting target of research. We created
    two image classification scenarios in which the participants received mock-up
    AI advice. The quality of the advice decreases for a phase of the experiment.
    We studied the task performance, trust, and distrust of the participants, and
    tested whether an instruction to remain skeptical and review each piece of advice
    led to better performance compared to a neutral condition. Our results indicate
    that this instruction does not improve but rather worsens the participants’ performance.
    Repeated single-item self-report of trust and distrust shows an increase in trust
    and a decrease in distrust after the drop in the AI’s classification quality,
    with no difference between the two instructions. Furthermore, via a Bayesian Signal
    Detection Theory analysis, we provide a procedure to assess appropriate reliance
    in detail by quantifying whether the problems of under- and over-reliance have
    been mitigated. We discuss implications of our results for the usage of disclaimers
    before interacting with AI, as prominently used in current LLM-based chatbots,
    and for trust and distrust research."
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Scharlau I. Interacting with fallible AI: Is distrust helpful when
    receiving AI misclassifications? <i>Frontiers in Psychology</i>. 2025;16. doi:<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>'
  apa: 'Peters, T. M., &#38; Scharlau, I. (2025). Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications? <i>Frontiers in Psychology</i>,
    <i>16</i>. <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>'
  bibtex: '@article{Peters_Scharlau_2025, title={Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications?}, volume={16}, DOI={<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>},
    journal={Frontiers in Psychology}, author={Peters, Tobias Martin and Scharlau,
    Ingrid}, year={2025} }'
  chicago: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible
    AI: Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in
    Psychology</i> 16 (2025). <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>.'
  ieee: 'T. M. Peters and I. Scharlau, “Interacting with fallible AI: Is distrust
    helpful when receiving AI misclassifications?,” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  mla: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible AI:
    Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  short: T.M. Peters, I. Scharlau, Frontiers in Psychology 16 (2025).
date_created: 2025-05-02T09:22:39Z
date_updated: 2025-05-27T09:10:09Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1574809
intvolume: '16'
keyword:
- trust in AI
- trust
- distrust
- human-AI interaction
- Signal Detection Theory
- Bayesian parameter estimation
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - Subproject C1: Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_status: published
status: public
title: 'Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?'
type: journal_article
user_id: '92810'
volume: 16
year: '2025'
...
