---
_id: '59755'
abstract:
- lang: eng
  text: "Due to the application of Artificial Intelligence (AI) in high-risk domains
    like law or medicine, trustworthy AI and trust in AI are of increasing scientific
    and public relevance. A typical conception, for example in the context of medical
    diagnosis, is that a knowledgeable user receives an AI-generated classification
    as advice. Research to improve such interactions often aims to foster the user’s
    trust, which in turn should improve the combined human-AI performance. Given
    that AI models can err, we argue that the possibility to critically review, and
    thus to distrust, an AI decision is an equally interesting target of research.
    We created two image classification scenarios in which the participants received
    mock-up AI advice. The quality of the advice decreased during one phase of the
    experiment. We studied the task performance, trust, and distrust of the participants,
    and tested whether an instruction to remain skeptical and review each piece of
    advice led to better performance compared to a neutral condition. Our results
    indicate that this instruction does not improve but rather worsens the participants’
    performance. Repeated single-item self-reports of trust and distrust show an
    increase in trust and a decrease in distrust after the drop in the AI’s classification
    quality, with no difference between the two instructions. Furthermore, via a
    Bayesian Signal Detection Theory analysis, we provide a procedure to assess appropriate
    reliance in detail by quantifying whether the problems of under- and over-reliance
    have been mitigated. We discuss implications of our results for the usage of
    disclaimers before interacting with AI, as prominently used in current LLM-based
    chatbots, and for trust and distrust research."
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Scharlau I. Interacting with fallible AI: Is distrust helpful when
    receiving AI misclassifications? <i>Frontiers in Psychology</i>. 2025;16. doi:<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>'
  apa: 'Peters, T. M., &#38; Scharlau, I. (2025). Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications? <i>Frontiers in Psychology</i>,
    <i>16</i>. <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>'
  bibtex: '@article{Peters_Scharlau_2025, title={Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications?}, volume={16}, DOI={<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>},
    journal={Frontiers in Psychology}, author={Peters, Tobias Martin and Scharlau,
    Ingrid}, year={2025} }'
  chicago: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible
    AI: Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in
    Psychology</i> 16 (2025). <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>.'
  ieee: 'T. M. Peters and I. Scharlau, “Interacting with fallible AI: Is distrust
    helpful when receiving AI misclassifications?,” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  mla: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible AI:
    Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  short: T.M. Peters, I. Scharlau, Frontiers in Psychology 16 (2025).
date_created: 2025-05-02T09:22:39Z
date_updated: 2025-05-27T09:10:09Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1574809
intvolume: '16'
keyword:
- trust in AI
- trust
- distrust
- human-AI interaction
- Signal Detection Theory
- Bayesian parameter estimation
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_status: published
status: public
title: 'Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?'
type: journal_article
user_id: '92810'
volume: 16
year: '2025'
...
