---
_id: '63611'
abstract:
- lang: eng
  text: When humans interact with artificial intelligence (AI), one desideratum is
    appropriate trust. Typically, appropriate trust means that humans trust
    AI except in instances in which they either explicitly notice AI errors or
    suspect that errors could be present. So far, appropriate trust or related
    notions have mainly been investigated by assessing trust and reliance. In this
    contribution, we argue that these assessments are insufficient to measure the
    complex aim of appropriate trust and the related notion of healthy distrust. We
    introduce and test the perspective of covert visual attention as an additional
    indicator for appropriate trust and draw conceptual connections to the notion
    of healthy distrust. To test the validity of our conceptualization, we formalize
    visual attention using the Theory of Visual Attention and measure its properties
    that are potentially relevant to appropriate trust and healthy distrust in an
    image classification task. Based on temporal-order judgment performance, we estimate
    participants' attentional capacity and attentional weight toward correct and incorrect
    mock-up AI classifications. We observe that misclassifications reduce attentional
    capacity compared to correct classifications. However, our results do not indicate
    that this reduction is beneficial for a subsequent judgment of the classifications.
    The attentional weighting is not affected by the classifications' correctness
    but by the difficulty of categorizing the stimuli themselves. We discuss these
    results, their implications, and the limited potential for using visual attention
    as an indicator of appropriate trust and healthy distrust.
article_number: '1694367'
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Kai
  full_name: Biermeier, Kai
  id: '55908'
  last_name: Biermeier
  orcid: 0000-0002-2879-2359
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Biermeier K, Scharlau I. Assessing healthy distrust in human-AI
    interaction: interpreting changes in visual attention. <i>Frontiers in Psychology</i>.
    2026;16. doi:<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>'
  apa: 'Peters, T. M., Biermeier, K., &#38; Scharlau, I. (2026). Assessing healthy
    distrust in human-AI interaction: interpreting changes in visual attention. <i>Frontiers
    in Psychology</i>, <i>16</i>, Article 1694367. <a href="https://doi.org/10.3389/fpsyg.2025.1694367">https://doi.org/10.3389/fpsyg.2025.1694367</a>'
  bibtex: '@article{Peters_Biermeier_Scharlau_2026, title={Assessing healthy distrust
    in human-AI interaction: interpreting changes in visual attention}, volume={16},
    DOI={<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>},
    number={1694367}, journal={Frontiers in Psychology}, publisher={Frontiers Media
    SA}, author={Peters, Tobias Martin and Biermeier, Kai and Scharlau, Ingrid}, year={2026}
    }'
  chicago: 'Peters, Tobias Martin, Kai Biermeier, and Ingrid Scharlau. “Assessing
    Healthy Distrust in Human-AI Interaction: Interpreting Changes in Visual Attention.”
    <i>Frontiers in Psychology</i> 16 (2026). <a href="https://doi.org/10.3389/fpsyg.2025.1694367">https://doi.org/10.3389/fpsyg.2025.1694367</a>.'
  ieee: 'T. M. Peters, K. Biermeier, and I. Scharlau, “Assessing healthy distrust
    in human-AI interaction: interpreting changes in visual attention,” <i>Frontiers
    in Psychology</i>, vol. 16, Art. no. 1694367, 2026, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>.'
  mla: 'Peters, Tobias Martin, et al. “Assessing Healthy Distrust in Human-AI Interaction:
    Interpreting Changes in Visual Attention.” <i>Frontiers in Psychology</i>, vol.
    16, 1694367, Frontiers Media SA, 2026, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1694367">10.3389/fpsyg.2025.1694367</a>.'
  short: T.M. Peters, K. Biermeier, I. Scharlau, Frontiers in Psychology 16 (2026).
date_created: 2026-01-14T14:21:59Z
date_updated: 2026-01-14T14:29:03Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1694367
intvolume: '        16'
keyword:
- appropriate trust
- healthy distrust
- visual attention
- Theory of Visual Attention
- human-AI interaction
- Bayesian cognitive model
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318; TP C01: Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_identifier:
  issn:
  - 1664-1078
publication_status: published
publisher: Frontiers Media SA
status: public
title: 'Assessing healthy distrust in human-AI interaction: interpreting changes in
  visual attention'
type: journal_article
user_id: '92810'
volume: 16
year: '2026'
...
---
_id: '59756'
abstract:
- lang: eng
  text: "A current concern in the field of Artificial Intelligence (AI) is to ensure
    the trustworthiness of AI systems. The development of explainability methods is
    one prominent way to address this, which has often resulted in the assumption
    that the use of explainability will lead to an increase in the trust of users
    and wider society. However, the dynamics between explainability and trust are
    not well established, and empirical investigations of their relation remain mixed
    or inconclusive.\nIn this paper we provide a detailed description of the concepts
    of user trust and distrust in AI and their relation to appropriate reliance, drawing
    from the fields of machine learning, human–computer interaction, and the social
    sciences. Based on these insights, we conducted a focused survey of existing empirical
    studies that investigate the effects of AI systems and XAI methods on user (dis)trust,
    in order to substantiate our conceptualization of trust, distrust, and reliance.
    With respect to our conceptual understanding, we identify gaps in existing empirical
    work. By clarifying the concepts and summarizing the empirical studies, we aim
    to provide researchers who examine user trust in AI with an improved starting
    point for developing user studies to measure and evaluate the user’s attitude
    towards and reliance on AI systems."
article_number: '101357'
author:
- first_name: Roel
  full_name: Visser, Roel
  last_name: Visser
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Barbara
  full_name: Hammer, Barbara
  last_name: Hammer
citation:
  ama: 'Visser R, Peters TM, Scharlau I, Hammer B. Trust, distrust, and appropriate
    reliance in (X)AI: A conceptual clarification of user trust and survey of its
    empirical evaluation. <i>Cognitive Systems Research</i>. Published online 2025.
    doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>'
  apa: 'Visser, R., Peters, T. M., Scharlau, I., &#38; Hammer, B. (2025). Trust, distrust,
    and appropriate reliance in (X)AI: A conceptual clarification of user trust and
    survey of its empirical evaluation. <i>Cognitive Systems Research</i>, Article
    101357. <a href="https://doi.org/10.1016/j.cogsys.2025.101357">https://doi.org/10.1016/j.cogsys.2025.101357</a>'
  bibtex: '@article{Visser_Peters_Scharlau_Hammer_2025, title={Trust, distrust, and
    appropriate reliance in (X)AI: A conceptual clarification of user trust and survey
    of its empirical evaluation}, DOI={<a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>},
    number={101357}, journal={Cognitive Systems Research}, publisher={Elsevier BV},
    author={Visser, Roel and Peters, Tobias Martin and Scharlau, Ingrid and Hammer,
    Barbara}, year={2025} }'
  chicago: 'Visser, Roel, Tobias Martin Peters, Ingrid Scharlau, and Barbara Hammer.
    “Trust, Distrust, and Appropriate Reliance in (X)AI: A Conceptual Clarification
    of User Trust and Survey of Its Empirical Evaluation.” <i>Cognitive Systems Research</i>,
    2025. <a href="https://doi.org/10.1016/j.cogsys.2025.101357">https://doi.org/10.1016/j.cogsys.2025.101357</a>.'
  ieee: 'R. Visser, T. M. Peters, I. Scharlau, and B. Hammer, “Trust, distrust, and
    appropriate reliance in (X)AI: A conceptual clarification of user trust and survey
    of its empirical evaluation,” <i>Cognitive Systems Research</i>, Art. no. 101357,
    2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>.'
  mla: 'Visser, Roel, et al. “Trust, Distrust, and Appropriate Reliance in (X)AI:
    A Conceptual Clarification of User Trust and Survey of Its Empirical Evaluation.”
    <i>Cognitive Systems Research</i>, 101357, Elsevier BV, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101357">10.1016/j.cogsys.2025.101357</a>.'
  short: R. Visser, T.M. Peters, I. Scharlau, B. Hammer, Cognitive Systems Research
    (2025).
date_created: 2025-05-02T09:26:15Z
date_updated: 2025-05-15T11:16:27Z
department:
- _id: '424'
- _id: '660'
doi: 10.1016/j.cogsys.2025.101357
keyword:
- XAI
- Appropriate trust
- Distrust
- Reliance
- Human-centric evaluation
- Trustworthy AI
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen'
publication: Cognitive Systems Research
publication_identifier:
  issn:
  - 1389-0417
publication_status: published
publisher: Elsevier BV
status: public
title: 'Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification
  of user trust and survey of its empirical evaluation'
type: journal_article
user_id: '92810'
year: '2025'
...
---
_id: '59755'
abstract:
- lang: eng
  text: "Due to the application of Artificial Intelligence (AI) in high-risk domains
    like law or medicine, trustworthy AI and trust in AI are of increasing scientific
    and public relevance. A typical conception, for example in the context of medical
    diagnosis, is that a knowledgeable user receives an AI-generated classification
    as advice. Research to improve such interactions often aims to foster the user’s
    trust, which in turn should improve the combined human-AI performance. Given that
    AI models can err, we argue that the possibility to critically review, and thus
    to distrust, an AI decision is an equally interesting target of research.\nWe
    created two image classification scenarios in which the participants received
    mock-up AI advice. The quality of the advice decreases for a phase of the experiment.
    We studied the task performance, trust, and distrust of the participants, and
    tested whether an instruction to remain skeptical and review each piece of advice
    led to a better performance compared to a neutral condition. Our results indicate
    that this instruction does not improve but rather worsens the participants’ performance.
    Repeated single-item self-report of trust and distrust shows an increase in trust
    and a decrease in distrust after the drop in the AI’s classification quality,
    with no difference between the two instructions. Furthermore, via a Bayesian Signal
    Detection Theory analysis, we provide a procedure to assess appropriate reliance
    in detail, by quantifying whether the problems of under- and over-reliance have
    been mitigated. We discuss implications of our results for the usage of disclaimers
    before interacting with AI, as prominently used in current LLM-based chatbots,
    and for trust and distrust research."
article_type: original
author:
- first_name: Tobias Martin
  full_name: Peters, Tobias Martin
  id: '92810'
  last_name: Peters
  orcid: 0009-0008-5193-6243
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Peters TM, Scharlau I. Interacting with fallible AI: Is distrust helpful when
    receiving AI misclassifications? <i>Frontiers in Psychology</i>. 2025;16. doi:<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>'
  apa: 'Peters, T. M., &#38; Scharlau, I. (2025). Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications? <i>Frontiers in Psychology</i>,
    <i>16</i>. <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>'
  bibtex: '@article{Peters_Scharlau_2025, title={Interacting with fallible AI: Is
    distrust helpful when receiving AI misclassifications?}, volume={16}, DOI={<a
    href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>},
    journal={Frontiers in Psychology}, author={Peters, Tobias Martin and Scharlau,
    Ingrid}, year={2025} }'
  chicago: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible
    AI: Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in
    Psychology</i> 16 (2025). <a href="https://doi.org/10.3389/fpsyg.2025.1574809">https://doi.org/10.3389/fpsyg.2025.1574809</a>.'
  ieee: 'T. M. Peters and I. Scharlau, “Interacting with fallible AI: Is distrust
    helpful when receiving AI misclassifications?,” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi: <a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  mla: 'Peters, Tobias Martin, and Ingrid Scharlau. “Interacting with Fallible AI:
    Is Distrust Helpful When Receiving AI Misclassifications?” <i>Frontiers in Psychology</i>,
    vol. 16, 2025, doi:<a href="https://doi.org/10.3389/fpsyg.2025.1574809">10.3389/fpsyg.2025.1574809</a>.'
  short: T.M. Peters, I. Scharlau, Frontiers in Psychology 16 (2025).
date_created: 2025-05-02T09:22:39Z
date_updated: 2025-05-27T09:10:09Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2025.1574809
intvolume: '        16'
keyword:
- trust in AI
- trust
- distrust
- human-AI interaction
- Signal Detection Theory
- Bayesian parameter estimation
- image classification
language:
- iso: eng
project:
- _id: '124'
  name: 'TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen'
publication: Frontiers in Psychology
publication_status: published
status: public
title: 'Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?'
type: journal_article
user_id: '92810'
volume: 16
year: '2025'
...
---
_id: '58650'
abstract:
- lang: eng
  text: 'Technical systems are characterized by increasing interdisciplinarity, complexity
    and networking. A product and its corresponding production systems require interdisciplinary
    multi-objective optimization. Sustainability and recyclability demands increase
    said complexity. The efficiency of previously established engineering methods
    is reaching its limits, which can only be overcome by systematic integration of
    extreme data. The aim of "hybrid decision support" is as follows: Data science
    and artificial intelligence should be used to supplement human capabilities in
    conjunction with existing heuristics, methods, modeling and simulation to increase
    the efficiency of product creation.'
alternative_title:
- Hybride Entscheidungsunterstützung in der Produktentstehung - Mit Data Science und
  Künstlicher Intelligenz die Leistungsfähigkeit erhöhen
article_type: original
author:
- first_name: Iris
  full_name: Gräßler, Iris
  id: '47565'
  last_name: Gräßler
  orcid: 0000-0001-5765-971X
- first_name: Jens
  full_name: Pottebaum, Jens
  id: '405'
  last_name: Pottebaum
  orcid: 0000-0001-8778-2989
- first_name: Peter
  full_name: Nyhuis, Peter
  last_name: Nyhuis
- first_name: Rainer
  full_name: Stark, Rainer
  last_name: Stark
- first_name: Klaus-Dieter
  full_name: Thoben, Klaus-Dieter
  last_name: Thoben
- first_name: Petra
  full_name: Wiederkehr, Petra
  last_name: Wiederkehr
citation:
  ama: Gräßler I, Pottebaum J, Nyhuis P, Stark R, Thoben K-D, Wiederkehr P. Hybrid
    Decision Support in Product Creation - Improving performance with data science
    and artificial intelligence. <i>Industry 4.0 Science</i>. 2025;2025(1). doi:<a
    href="https://doi.org/10.30844/i4sd.25.1.18">10.30844/i4sd.25.1.18</a>
  apa: Gräßler, I., Pottebaum, J., Nyhuis, P., Stark, R., Thoben, K.-D., &#38; Wiederkehr,
    P. (2025). Hybrid Decision Support in Product Creation - Improving performance
    with data science and artificial intelligence. <i>Industry 4.0 Science</i>, <i>2025</i>(1).
    <a href="https://doi.org/10.30844/i4sd.25.1.18">https://doi.org/10.30844/i4sd.25.1.18</a>
  bibtex: '@article{Gräßler_Pottebaum_Nyhuis_Stark_Thoben_Wiederkehr_2025, title={Hybrid
    Decision Support in Product Creation - Improving performance with data science
    and artificial intelligence}, volume={2025}, DOI={<a href="https://doi.org/10.30844/i4sd.25.1.18">10.30844/i4sd.25.1.18</a>},
    number={1}, journal={Industry 4.0 Science}, publisher={GITO mbH Verlag}, author={Gräßler,
    Iris and Pottebaum, Jens and Nyhuis, Peter and Stark, Rainer and Thoben, Klaus-Dieter
    and Wiederkehr, Petra}, year={2025} }'
  chicago: Gräßler, Iris, Jens Pottebaum, Peter Nyhuis, Rainer Stark, Klaus-Dieter
    Thoben, and Petra Wiederkehr. “Hybrid Decision Support in Product Creation - Improving
    Performance with Data Science and Artificial Intelligence.” <i>Industry 4.0 Science</i>
    2025, no. 1 (2025). <a href="https://doi.org/10.30844/i4sd.25.1.18">https://doi.org/10.30844/i4sd.25.1.18</a>.
  ieee: 'I. Gräßler, J. Pottebaum, P. Nyhuis, R. Stark, K.-D. Thoben, and P. Wiederkehr,
    “Hybrid Decision Support in Product Creation - Improving performance with data
    science and artificial intelligence,” <i>Industry 4.0 Science</i>, vol. 2025,
    no. 1, 2025, doi: <a href="https://doi.org/10.30844/i4sd.25.1.18">10.30844/i4sd.25.1.18</a>.'
  mla: Gräßler, Iris, et al. “Hybrid Decision Support in Product Creation - Improving
    Performance with Data Science and Artificial Intelligence.” <i>Industry 4.0 Science</i>,
    vol. 2025, no. 1, GITO mbH Verlag, 2025, doi:<a href="https://doi.org/10.30844/i4sd.25.1.18">10.30844/i4sd.25.1.18</a>.
  short: I. Gräßler, J. Pottebaum, P. Nyhuis, R. Stark, K.-D. Thoben, P. Wiederkehr,
    Industry 4.0 Science 2025 (2025).
date_created: 2025-02-15T09:31:30Z
date_updated: 2025-02-15T09:40:52Z
department:
- _id: '152'
doi: 10.30844/i4sd.25.1.18
intvolume: '      2025'
issue: '1'
keyword:
- AI
- artificial intelligence
- Data Science
- decision support
- extreme data
- Künstliche Intelligenz
- product creation
- product development
language:
- iso: eng
main_file_link:
- open_access: '1'
oa: '1'
publication: Industry 4.0 Science
publication_identifier:
  issn:
  - 2942-6170
publication_status: published
publisher: GITO mbH Verlag
quality_controlled: '1'
status: public
title: Hybrid Decision Support in Product Creation - Improving performance with data
  science and artificial intelligence
type: journal_article
user_id: '405'
volume: 2025
year: '2025'
...
---
_id: '61410'
abstract:
- lang: eng
  text: "Purpose: The purpose of this study is to identify, analyze, and explain the
    implications that could arise for service settings if AI systems develop, or are
    perceived to develop, consciousness – the ability to acknowledge their own existence
    and the capacity for positive or negative experiences.\n\nDesign/methodology/approach:
    This study proposes and explores four hypothetical scenarios in which conscious
    AI in service could manifest. We contextualize our resulting typology in the health
    service context and integrate extant literature on technology-enabled service,
    AI consciousness, and AI ethics into the narrative.\n\nFindings: This study provides
    a unique theoretical contribution to service research in the form of a Type IV
    theory. It enables future service researchers to apprehend, explain, and predict
    how functionally conscious AI in service might unfold.\n\nOriginality: An increasingly
    prolific public discourse acknowledges that conscious AI systems may emerge. Against
    this backdrop, this study aims to systematically explore a question that is perhaps
    the most critical and timely, but also inherently speculative, in relation to
    AI in service research by introducing much-needed theory and terminology.\n\nPractical
    implications: The ethical use of conscious AI in service could emerge as a distinct
    competitive advantage in the future. Achieving this outcome involves speculative
    yet actionable recommendations that include training, guiding, and controlling
    how humans engage with such systems, developing appropriate wellbeing protocols
    for functionally conscious AI systems, and establishing AI rights and governance
    frameworks."
article_type: original
author:
- first_name: Christoph
  full_name: Breidbach, Christoph
  last_name: Breidbach
- first_name: Casper Ferm
  full_name: Lars-Erik, Casper Ferm
  last_name: Lars-Erik
- first_name: Paul
  full_name: Maglio, Paul
  last_name: Maglio
- first_name: Daniel
  full_name: Beverungen, Daniel
  id: '59677'
  last_name: Beverungen
- first_name: Jochen
  full_name: Wirtz, Jochen
  last_name: Wirtz
- first_name: Alex
  full_name: Twigg, Alex
  last_name: Twigg
citation:
  ama: Breidbach C, Lars-Erik CF, Maglio P, Beverungen D, Wirtz J, Twigg A. Conscious
    Artificial Intelligence in Service. <i>Journal of Service Management</i>.
  apa: Breidbach, C., Lars-Erik, C. F., Maglio, P., Beverungen, D., Wirtz, J., &#38;
    Twigg, A. (n.d.). Conscious Artificial Intelligence in Service. <i>Journal of
    Service Management</i>.
  bibtex: '@article{Breidbach_Lars-Erik_Maglio_Beverungen_Wirtz_Twigg, title={Conscious
    Artificial Intelligence in Service}, journal={Journal of Service Management},
    publisher={Emerald}, author={Breidbach, Christoph and Lars-Erik, Casper Ferm and
    Maglio, Paul and Beverungen, Daniel and Wirtz, Jochen and Twigg, Alex} }'
  chicago: Breidbach, Christoph, Casper Ferm Lars-Erik, Paul Maglio, Daniel Beverungen,
    Jochen Wirtz, and Alex Twigg. “Conscious Artificial Intelligence in Service.”
    <i>Journal of Service Management</i>, n.d.
  ieee: C. Breidbach, C. F. Lars-Erik, P. Maglio, D. Beverungen, J. Wirtz, and A.
    Twigg, “Conscious Artificial Intelligence in Service,” <i>Journal of Service Management</i>.
  mla: Breidbach, Christoph, et al. “Conscious Artificial Intelligence in Service.”
    <i>Journal of Service Management</i>, Emerald.
  short: C. Breidbach, C.F. Lars-Erik, P. Maglio, D. Beverungen, J. Wirtz, A. Twigg,
    Journal of Service Management (n.d.).
date_created: 2025-09-23T11:47:47Z
date_updated: 2025-11-10T10:22:59Z
ddc:
- '380'
department:
- _id: '195'
file:
- access_level: closed
  content_type: application/pdf
  creator: dabe
  date_created: 2025-11-10T10:20:48Z
  date_updated: 2025-11-10T10:20:48Z
  file_id: '62150'
  file_name: Breidbach et al, 2025_Conscious AI in Service_w link.pdf
  file_size: 743479
  relation: main_file
  success: 1
file_date_updated: 2025-11-10T10:20:48Z
has_accepted_license: '1'
keyword:
- AI
- AI consciousness
- AI ethics
- service systems
language:
- iso: eng
publication: Journal of Service Management
publication_status: inpress
publisher: Emerald
quality_controlled: '1'
status: public
title: Conscious Artificial Intelligence in Service
type: journal_article
user_id: '59677'
year: '2025'
...
---
_id: '61156'
abstract:
- lang: eng
  text: Explainability has become an important topic in computer science and artificial
    intelligence, leading to a subfield called Explainable Artificial Intelligence
    (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’
    on the part of the explainee. However, what it means to ‘understand’ is still
    not clearly defined, and the concept itself is rarely the subject of scientific
    investigation. This conceptual article aims to present a model of forms of understanding
    for XAI-explanations and beyond. From an interdisciplinary perspective bringing
    together computer science, linguistics, sociology, philosophy, and psychology,
    we explore a definition of understanding as well as its forms, assessment, and
    dynamics during the process of giving everyday explanations. Two types of understanding
    are considered as possible outcomes of explanations, namely enabledness, ‘knowing
    how’ to do or decide something, and comprehension, ‘knowing that’ – both in different
    degrees (from shallow to deep). Explanations regularly start with shallow understanding
    in a specific domain and can lead to deep comprehension and enabledness of the
    explanandum, which we see as a prerequisite for human users to gain agency. In
    this process, the increase of comprehension and enabledness are highly interdependent.
    Against the background of this systematization, special challenges of understanding
    in XAI are discussed.
article_number: '101419'
article_type: original
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  id: '50995'
  last_name: Beierling
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>. 2025;94. doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer,
    V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang,
    Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a
    href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>},
    number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik
    and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling,
    Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait,
    Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding
    for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>.
  ieee: 'H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,”
    <i>Cognitive Systems Research</i>, vol. 94, Art. no. 101419, 2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.'
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.”
    <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher,
    A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    Cognitive Systems Research 94 (2025).
date_created: 2025-09-08T14:24:32Z
date_updated: 2025-12-05T15:32:25Z
ddc:
- '006'
department:
- _id: '660'
doi: 10.1016/j.cogsys.2025.101419
file:
- access_level: closed
  content_type: application/pdf
  creator: hbuschme
  date_created: 2025-12-01T21:02:20Z
  date_updated: 2025-12-01T21:02:20Z
  file_id: '62730'
  file_name: Buschmeier-etal-2025-COGSYS.pdf
  file_size: 10114981
  relation: main_file
  success: 1
file_date_updated: 2025-12-01T21:02:20Z
has_accepted_license: '1'
intvolume: '        94'
keyword:
- understanding
- explaining
- explanations
- explainable
- AI
- interdisciplinarity
- comprehension
- enabledness
- agency
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub
oa: '1'
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '123'
  name: TRR 318 - Subproject B5
- _id: '119'
  name: TRR 318 - Project Area Ö
publication: Cognitive Systems Research
publication_status: published
quality_controlled: '1'
status: public
title: Forms of Understanding for XAI-Explanations
type: journal_article
user_id: '57578'
volume: 94
year: '2025'
...
---
_id: '63019'
author:
- first_name: Johannes Aurelius Tamino
  full_name: Donner, Johannes Aurelius Tamino
  id: '72054'
  last_name: Donner
  orcid: 0009-0007-4757-4393
- first_name: Alexander
  full_name: Schlüter, Alexander
  id: '103302'
  last_name: Schlüter
  orcid: 0000-0002-2569-1624
citation:
  ama: 'Donner JAT, Schlüter A. Development of an AI-driven decentralized control
    for fifth generation district heating and cooling networks. In: <i>SDEWES Conference
    2025</i>. ; 2025.'
  apa: Donner, J. A. T., &#38; Schlüter, A. (2025). Development of an AI-driven decentralized
    control for fifth generation district heating and cooling networks. <i>SDEWES
    Conference 2025</i>. 20th SDEWES Conference, Dubrovnik.
  bibtex: '@inproceedings{Donner_Schlüter_2025, title={Development of an AI-driven
    decentralized control for fifth generation district heating and cooling networks},
    booktitle={SDEWES Conference 2025}, author={Donner, Johannes Aurelius Tamino and
    Schlüter, Alexander}, year={2025} }'
  chicago: Donner, Johannes Aurelius Tamino, and Alexander Schlüter. “Development
    of an AI-Driven Decentralized Control for Fifth Generation District Heating and
    Cooling Networks.” In <i>SDEWES Conference 2025</i>, 2025.
  ieee: J. A. T. Donner and A. Schlüter, “Development of an AI-driven decentralized
    control for fifth generation district heating and cooling networks,” presented
    at the 20th SDEWES Conference, Dubrovnik, 2025.
  mla: Donner, Johannes Aurelius Tamino, and Alexander Schlüter. “Development of an
    AI-Driven Decentralized Control for Fifth Generation District Heating and Cooling
    Networks.” <i>SDEWES Conference 2025</i>, 2025.
  short: 'J.A.T. Donner, A. Schlüter, in: SDEWES Conference 2025, 2025.'
conference:
  end_date: 2025-10-10
  location: Dubrovnik
  name: 20th SDEWES Conference
  start_date: 2025-10-05
date_created: 2025-12-10T12:30:59Z
date_updated: 2026-01-06T07:53:40Z
department:
- _id: '876'
- _id: '321'
- _id: '9'
- _id: '393'
keyword:
- 5GDHC
- district heating
- DHC
- waste heat
- AI-Driven
language:
- iso: eng
publication: SDEWES Conference 2025
status: public
title: Development of an AI-driven decentralized control for fifth generation district
  heating and cooling networks
type: conference_abstract
user_id: '103302'
year: '2025'
...
---
_id: '53793'
abstract:
- lang: eng
  text: We utilize extreme learning machines for the prediction of partial differential
    equations (PDEs). Our method splits the state space into multiple windows that
    are predicted individually using a single model. Despite requiring only few data
    points (in some cases, our method can learn from a single full-state snapshot),
    it still achieves high accuracy and can predict the flow of PDEs over long time
    horizons. Moreover, we show how additional symmetries can be exploited to increase
    sample efficiency and to enforce equivariance.
author:
- first_name: Hans
  full_name: Harder, Hans
  id: '98879'
  last_name: Harder
- first_name: Sebastian
  full_name: Peitz, Sebastian
  id: '47427'
  last_name: Peitz
  orcid: 0000-0002-3389-793X
citation:
  ama: Harder H, Peitz S. Predicting PDEs Fast and Efficiently with Equivariant Extreme
    Learning Machines.
  apa: Harder, H., &#38; Peitz, S. (n.d.). <i>Predicting PDEs Fast and Efficiently
    with Equivariant Extreme Learning Machines</i>.
  bibtex: '@article{Harder_Peitz, title={Predicting PDEs Fast and Efficiently with
    Equivariant Extreme Learning Machines}, author={Harder, Hans and Peitz, Sebastian}
    }'
  chicago: Harder, Hans, and Sebastian Peitz. “Predicting PDEs Fast and Efficiently
    with Equivariant Extreme Learning Machines,” n.d.
  ieee: H. Harder and S. Peitz, “Predicting PDEs Fast and Efficiently with Equivariant
    Extreme Learning Machines.” .
  mla: Harder, Hans, and Sebastian Peitz. <i>Predicting PDEs Fast and Efficiently
    with Equivariant Extreme Learning Machines</i>.
  short: H. Harder, S. Peitz, (n.d.).
date_created: 2024-04-30T08:43:14Z
date_updated: 2024-04-30T08:45:24Z
keyword:
- extreme learning machines
- partial differential equations
- data-driven prediction
- high-dimensional systems
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2404.18530
oa: '1'
publication_status: unpublished
status: public
title: Predicting PDEs Fast and Efficiently with Equivariant Extreme Learning Machines
type: preprint
user_id: '98879'
year: '2024'
...
---
_id: '56166'
abstract:
- lang: eng
  text: Developing Intelligent Technical Systems (ITS) involves a complex process
    encompassing planning, analysis, design, production, and maintenance. Model-Based
    Systems Engineering (MBSE) is a key methodology for systematic systems engineering.
    Designing models for ITS requires harmonious interaction of various elements,
    posing a challenge in MBSE. Leveraging Generative Artificial Intelligence, we
    generated a dataset for modeling, using prompt engineering on large language models.
    The generated artifacts can aid engineers in MBSE design or serve as synthetic
    training data for AI assistants.
author:
- first_name: Pranav Jayant
  full_name: Kulkarni, Pranav Jayant
  id: '86782'
  last_name: Kulkarni
- first_name: Denis
  full_name: Tissen, Denis
  id: '44458'
  last_name: Tissen
- first_name: Ruslan
  full_name: Bernijazov, Ruslan
  id: '36312'
  last_name: Bernijazov
- first_name: Roman
  full_name: Dumitrescu, Roman
  id: '16190'
  last_name: Dumitrescu
citation:
  ama: 'Kulkarni PJ, Tissen D, Bernijazov R, Dumitrescu R. Towards Automated Design:
    Automatically Generating Modeling Elements with Prompt Engineering and Generative
    Artificial Intelligence. In: Malmqvist J, Candi M, Saemundsson R, Bystrom F, Isaksson
    O, eds. <i>DS 130: Proceedings of NordDesign 2024</i>. ; 2024:617-625. doi:<a
    href="https://doi.org/10.35199/NORDDESIGN2024.66">10.35199/NORDDESIGN2024.66</a>'
  apa: 'Kulkarni, P. J., Tissen, D., Bernijazov, R., &#38; Dumitrescu, R. (2024).
    Towards Automated Design: Automatically Generating Modeling Elements with Prompt
    Engineering and Generative Artificial Intelligence. In J. Malmqvist, M. Candi,
    R. Saemundsson, F. Bystrom, &#38; O. Isaksson (Eds.), <i>DS 130: Proceedings of
    NordDesign 2024</i> (pp. 617–625). <a href="https://doi.org/10.35199/NORDDESIGN2024.66">https://doi.org/10.35199/NORDDESIGN2024.66</a>'
  bibtex: '@inproceedings{Kulkarni_Tissen_Bernijazov_Dumitrescu_2024, title={Towards
    Automated Design: Automatically Generating Modeling Elements with Prompt Engineering
    and Generative Artificial Intelligence}, DOI={<a href="https://doi.org/10.35199/NORDDESIGN2024.66">10.35199/NORDDESIGN2024.66</a>},
    booktitle={DS 130: Proceedings of NordDesign 2024}, author={Kulkarni, Pranav Jayant
    and Tissen, Denis and Bernijazov, Ruslan and Dumitrescu, Roman}, editor={Malmqvist,
    J. and Candi, M. and Saemundsson, R. and Bystrom, F. and Isaksson, O.}, year={2024},
    pages={617–625} }'
  chicago: 'Kulkarni, Pranav Jayant, Denis Tissen, Ruslan Bernijazov, and Roman Dumitrescu.
    “Towards Automated Design: Automatically Generating Modeling Elements with Prompt
    Engineering and Generative Artificial Intelligence.” In <i>DS 130: Proceedings
    of NordDesign 2024</i>, edited by J. Malmqvist, M. Candi, R. Saemundsson, F. Bystrom,
    and O. Isaksson, 617–25, 2024. <a href="https://doi.org/10.35199/NORDDESIGN2024.66">https://doi.org/10.35199/NORDDESIGN2024.66</a>.'
  ieee: 'P. J. Kulkarni, D. Tissen, R. Bernijazov, and R. Dumitrescu, “Towards Automated
    Design: Automatically Generating Modeling Elements with Prompt Engineering and
    Generative Artificial Intelligence,” in <i>DS 130: Proceedings of NordDesign 2024</i>,
    Reykjavik, 2024, pp. 617–625, doi: <a href="https://doi.org/10.35199/NORDDESIGN2024.66">10.35199/NORDDESIGN2024.66</a>.'
  mla: 'Kulkarni, Pranav Jayant, et al. “Towards Automated Design: Automatically Generating
    Modeling Elements with Prompt Engineering and Generative Artificial Intelligence.”
    <i>DS 130: Proceedings of NordDesign 2024</i>, edited by J. Malmqvist et al.,
    2024, pp. 617–25, doi:<a href="https://doi.org/10.35199/NORDDESIGN2024.66">10.35199/NORDDESIGN2024.66</a>.'
  short: 'P.J. Kulkarni, D. Tissen, R. Bernijazov, R. Dumitrescu, in: J. Malmqvist,
    M. Candi, R. Saemundsson, F. Bystrom, O. Isaksson (Eds.), DS 130: Proceedings
    of NordDesign 2024, 2024, pp. 617–625.'
conference:
  end_date: 2024-08-14
  location: Reykjavik
  name: NordDesign Conference 2024
  start_date: 2024-08-12
date_created: 2024-09-17T09:56:43Z
date_updated: 2024-09-17T09:57:07Z
doi: 10.35199/NORDDESIGN2024.66
editor:
- first_name: J.
  full_name: Malmqvist, J.
  last_name: Malmqvist
- first_name: M.
  full_name: Candi, M.
  last_name: Candi
- first_name: R.
  full_name: Saemundsson, R.
  last_name: Saemundsson
- first_name: F.
  full_name: Bystrom, F.
  last_name: Bystrom
- first_name: O.
  full_name: Isaksson, O.
  last_name: Isaksson
keyword:
- Data Driven Design
- Design Automation
- Systems Engineering (SE)
- Artificial Intelligence (AI)
language:
- iso: eng
page: 617-625
publication: 'DS 130: Proceedings of NordDesign 2024'
publication_identifier:
  isbn:
  - 978-1-912254-21-7
publication_status: epub_ahead
related_material:
  link:
  - relation: confirmation
    url: https://www.designsociety.org/publication/47658/Towards+Automated+Design%3A+Automatically+Generating+Modeling+Elements+with+Prompt+Engineering+and+Generative+Artificial+Intelligence
status: public
title: 'Towards Automated Design: Automatically Generating Modeling Elements with
  Prompt Engineering and Generative Artificial Intelligence'
type: conference
user_id: '86782'
year: '2024'
...
---
_id: '56277'
abstract:
- lang: eng
  text: What is learner-sensitive feedback to argumentative learner texts when it
    is to be issued computer-based? Learning stages are difficult to quantify. The
    paper provides insight into the history of research since the 1980s and a preview
    of what this automated feedback might look like. These questions are embedded
    in a research project at the Universities of Paderborn and Hannover, Germany,
    from which a software (project name ArgSchool) emerges that will provide such
    feedback.
author:
- first_name: Sebastian
  full_name: Kilsbach, Sebastian
  id: '93839'
  last_name: Kilsbach
- first_name: Nadine
  full_name: Michel, Nadine
  id: '47857'
  last_name: Michel
citation:
  ama: 'Kilsbach S, Michel N. Computer-Based Generation of Learner-Sensitive Feedback
    to Argumentative Learner Texts. In: <i>Proceedings of the Tenth Conference of
    the International Society for the Study of Argumentation</i>. ; 2024.'
  apa: Kilsbach, S., &#38; Michel, N. (2024). Computer-Based Generation of Learner-Sensitive
    Feedback to Argumentative Learner Texts. <i>Proceedings of the Tenth Conference
    of the International Society for the Study of Argumentation</i>. Tenth Conference
    of the International Society for the Study of Argumentation, Leiden.
  bibtex: '@inproceedings{Kilsbach_Michel_2024, title={Computer-Based Generation of
    Learner-Sensitive Feedback to Argumentative Learner Texts}, booktitle={Proceedings
    of the Tenth Conference of the International Society for the Study of Argumentation},
    author={Kilsbach, Sebastian and Michel, Nadine}, year={2024} }'
  chicago: Kilsbach, Sebastian, and Nadine Michel. “Computer-Based Generation of Learner-Sensitive
    Feedback to Argumentative Learner Texts.” In <i>Proceedings of the Tenth Conference
    of the International Society for the Study of Argumentation</i>, 2024.
  ieee: S. Kilsbach and N. Michel, “Computer-Based Generation of Learner-Sensitive
    Feedback to Argumentative Learner Texts,” presented at the Tenth Conference of
    the International Society for the Study of Argumentation, Leiden, 2024.
  mla: Kilsbach, Sebastian, and Nadine Michel. “Computer-Based Generation of Learner-Sensitive
    Feedback to Argumentative Learner Texts.” <i>Proceedings of the Tenth Conference
    of the International Society for the Study of Argumentation</i>, 2024.
  short: 'S. Kilsbach, N. Michel, in: Proceedings of the Tenth Conference of the International
    Society for the Study of Argumentation, 2024.'
conference:
  end_date: 2023-07-07
  location: Leiden
  name: Tenth Conference of the International Society for the Study of Argumentation
  start_date: 2023-07-04
date_created: 2024-09-30T09:24:12Z
date_updated: 2024-09-30T09:25:14Z
keyword:
- AI
- argumentation mining
- discourse history
- (automated, learner-sensitive) feedback
language:
- iso: eng
publication: Proceedings of the Tenth Conference of the International Society for
  the Study of Argumentation
status: public
title: Computer-Based Generation of Learner-Sensitive Feedback to Argumentative Learner
  Texts
type: conference
user_id: '47857'
year: '2024'
...
---
_id: '56282'
abstract:
- lang: eng
  text: "Algorithmic bias has long been recognized as a key problem affecting decision-making
    processes that integrate artificial intelligence (AI) technologies. The increased
    use of AI in making military decisions relevant to the use of force has sustained
    such questions about biases in these technologies and in how human users programme
    with and rely on data based on hierarchized socio-cultural norms, knowledges,
    and modes of attention.\r\n\r\nIn this post, Dr Ingvild Bode, Professor at the
    Center for War Studies, University of Southern Denmark, and Ishmael Bhila, PhD
    researcher at the “Meaningful Human Control: Between Regulation and Reflexion”
    project, Paderborn University, unpack the problem of algorithmic bias with reference
    to AI-based decision support systems (AI DSS). They examine three categories of
    algorithmic bias – preexisting bias, technical bias, and emergent bias – across
    four lifecycle stages of an AI DSS, concluding that stakeholders in the ongoing
    discussion about AI in the military domain should consider the impact of algorithmic
    bias on AI DSS more seriously."
author:
- first_name: Ishmael
  full_name: Bhila, Ishmael
  id: '105772'
  last_name: Bhila
- first_name: Ingvild
  full_name: Bode, Ingvild
  last_name: Bode
citation:
  ama: Bhila I, Bode I. <i>The Problem of Algorithmic Bias in AI-Based Military Decision
    Support Systems</i>. ICRC Humanitarian Law &#38; Policy Blog; 2024.
  apa: Bhila, I., &#38; Bode, I. (2024). <i>The problem of algorithmic bias in AI-based
    military decision support systems</i>. ICRC Humanitarian Law &#38; Policy Blog.
  bibtex: '@book{Bhila_Bode_2024, title={The problem of algorithmic bias in AI-based
    military decision support systems}, publisher={ICRC Humanitarian Law &#38; Policy
    Blog}, author={Bhila, Ishmael and Bode, Ingvild}, year={2024} }'
  chicago: Bhila, Ishmael, and Ingvild Bode. <i>The Problem of Algorithmic Bias in
    AI-Based Military Decision Support Systems</i>. ICRC Humanitarian Law &#38; Policy
    Blog, 2024.
  ieee: I. Bhila and I. Bode, <i>The problem of algorithmic bias in AI-based military
    decision support systems</i>. ICRC Humanitarian Law &#38; Policy Blog, 2024.
  mla: Bhila, Ishmael, and Ingvild Bode. <i>The Problem of Algorithmic Bias in AI-Based
    Military Decision Support Systems</i>. ICRC Humanitarian Law &#38; Policy Blog,
    2024.
  short: I. Bhila, I. Bode, The Problem of Algorithmic Bias in AI-Based Military Decision
    Support Systems, ICRC Humanitarian Law &#38; Policy Blog, 2024.
date_created: 2024-09-30T11:44:28Z
date_updated: 2024-11-26T09:49:48Z
has_accepted_license: '1'
keyword:
- Algorithmic Bias
- AI
- Decision Support Systems
- Autonomous Weapons Systems
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://blogs.icrc.org/law-and-policy/2024/09/03/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems/
oa: '1'
publication_status: published
publisher: ICRC Humanitarian Law & Policy Blog
related_material:
  link:
  - relation: confirmation
    url: https://blogs.icrc.org/law-and-policy/2024/09/03/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems/
status: public
title: The problem of algorithmic bias in AI-based military decision support systems
type: misc
user_id: '105772'
year: '2024'
...
---
_id: '51368'
abstract:
- lang: eng
  text: Dealing with opaque algorithms, the frequent overlap between transparency
    and explainability produces seemingly unsolvable dilemmas, such as the much-discussed
    trade-off between model performance and model transparency. Referring to Niklas
    Luhmann's notion of communication, the paper argues that explainability does not
    necessarily require transparency and proposes an alternative approach. Explanations
    as communicative processes do not imply any disclosure of thoughts or neural processes,
    but only reformulations that provide the partners with additional elements and
    enable them to understand (from their perspective) what has been done and why.
    Recent computational approaches aiming at post-hoc explainability reproduce what
    happens in communication, producing explanations of the working of algorithms
    that can be different from the processes of the algorithms.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: Esposito E. Does Explainability Require Transparency? <i>Sociologica</i>. 2023;16(3):17-27.
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>
  apa: Esposito, E. (2023). Does Explainability Require Transparency? <i>Sociologica</i>,
    <i>16</i>(3), 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>
  bibtex: '@article{Esposito_2023, title={Does Explainability Require Transparency?},
    volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={17–27}
    }'
  chicago: 'Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>
    16, no. 3 (2023): 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>.'
  ieee: 'E. Esposito, “Does Explainability Require Transparency?,” <i>Sociologica</i>,
    vol. 16, no. 3, pp. 17–27, 2023, doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.'
  mla: Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>,
    vol. 16, no. 3, 2023, pp. 17–27, doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.
  short: E. Esposito, Sociologica 16 (2023) 17–27.
date_created: 2024-02-18T10:16:43Z
date_updated: 2024-02-26T08:46:26Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/15804
intvolume: '16'
issue: '3'
keyword:
- Explainable AI
- Transparency
- Explanation
- Communication
- Sociological systems theory
language:
- iso: eng
page: 17-27
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
publication: Sociologica
status: public
title: Does Explainability Require Transparency?
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '51369'
abstract:
- lang: eng
  text: This short introduction presents the symposium ‘Explaining Machines’. It locates
    the debate about Explainable AI in the history of the reflection about AI and
    outlines the issues discussed in the contributions.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: 'Esposito E. Explaining Machines: Social Management of Incomprehensible Algorithms.
    Introduction. <i>Sociologica</i>. 2023;16(3):1-4. doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>'
  apa: 'Esposito, E. (2023). Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction. <i>Sociologica</i>, <i>16</i>(3), 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>'
  bibtex: '@article{Esposito_2023, title={Explaining Machines: Social Management of
    Incomprehensible Algorithms. Introduction}, volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={1–4}
    }'
  chicago: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i> 16, no. 3 (2023): 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>.'
  ieee: 'E. Esposito, “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction,” <i>Sociologica</i>, vol. 16, no. 3, pp. 1–4, 2023,
    doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  mla: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i>, vol. 16, no. 3, 2023, pp. 1–4,
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  short: E. Esposito, Sociologica 16 (2023) 1–4.
date_created: 2024-02-18T10:23:23Z
date_updated: 2024-02-26T08:45:56Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/16265
intvolume: '16'
issue: '3'
keyword:
- Explainable AI
- Inexplicability
- Transparency
- Explanation
- Opacity
- Contestability
language:
- iso: eng
page: 1-4
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
publication: Sociologica
status: public
title: 'Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction'
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '52369'
abstract:
- lang: eng
  text: Megatrends, such as digitization or sustainability, are confronting the product
    management of manufacturing companies with a variety of challenges regarding the
    design of future products, but also the management of the actual products. To
    successfully position their products in the market, product managers need to gather
    and analyze comprehensive information about customers, developments in the products’
    environment, product usage, and more. The digitization of all aspects of life
    is making data on these topics increasingly available – via social media, documents,
    or the internet of things from the products themselves. The systematic collection
    and analysis of these data enable the exploitation of new potentials for the adaptation
    of existing products and the creation of the products of tomorrow. However, there
    are still no insights into the main concepts and cause-effect relationships in
    exploiting data-driven approaches for product management. Therefore, this paper
    aims to identify the main concepts and advantages of data-driven product management.
    To answer the corresponding research questions a comprehensive systematic literature
    review is conducted. From its results, a detailed description of the main concepts
    of data-driven product management is derived. Furthermore, a taxonomy for the
    advantages of data-driven product management is presented. The main concepts and
    the taxonomy allow for a deeper understanding of the topic while highlighting
    necessary future actions and research needs.
author:
- first_name: Timm
  full_name: Fichtler, Timm
  id: '66731'
  last_name: Fichtler
  orcid: 0000-0001-6034-4399
- first_name: Khoren
  full_name: Grigoryan, Khoren
  last_name: Grigoryan
- first_name: Christian
  full_name: Koldewey, Christian
  id: '43136'
  last_name: Koldewey
  orcid: 0000-0001-7992-6399
- first_name: Roman
  full_name: Dumitrescu, Roman
  id: '16190'
  last_name: Dumitrescu
citation:
  ama: 'Fichtler T, Grigoryan K, Koldewey C, Dumitrescu R. Towards a Data-Driven Product
    Management – Concepts, Advantages, and Future Research. In: <i>2023 IEEE International
    Conference on Technology Management, Operations and Decisions (ICTMOD)</i>. IEEE;
    2023. doi:<a href="https://doi.org/10.1109/ictmod59086.2023.10438135">10.1109/ictmod59086.2023.10438135</a>'
  apa: Fichtler, T., Grigoryan, K., Koldewey, C., &#38; Dumitrescu, R. (2023). Towards
    a Data-Driven Product Management – Concepts, Advantages, and Future Research.
    <i>2023 IEEE International Conference on Technology Management, Operations and
    Decisions (ICTMOD)</i>. IEEE International Conference on Technology Management,
    Operations and Decisions (ICTMOD), Rabat, Morocco. <a href="https://doi.org/10.1109/ictmod59086.2023.10438135">https://doi.org/10.1109/ictmod59086.2023.10438135</a>
  bibtex: '@inproceedings{Fichtler_Grigoryan_Koldewey_Dumitrescu_2023, title={Towards
    a Data-Driven Product Management – Concepts, Advantages, and Future Research},
    DOI={<a href="https://doi.org/10.1109/ictmod59086.2023.10438135">10.1109/ictmod59086.2023.10438135</a>},
    booktitle={2023 IEEE International Conference on Technology Management, Operations
    and Decisions (ICTMOD)}, publisher={IEEE}, author={Fichtler, Timm and Grigoryan,
    Khoren and Koldewey, Christian and Dumitrescu, Roman}, year={2023} }'
  chicago: Fichtler, Timm, Khoren Grigoryan, Christian Koldewey, and Roman Dumitrescu.
    “Towards a Data-Driven Product Management – Concepts, Advantages, and Future Research.”
    In <i>2023 IEEE International Conference on Technology Management, Operations
    and Decisions (ICTMOD)</i>. IEEE, 2023. <a href="https://doi.org/10.1109/ictmod59086.2023.10438135">https://doi.org/10.1109/ictmod59086.2023.10438135</a>.
  ieee: 'T. Fichtler, K. Grigoryan, C. Koldewey, and R. Dumitrescu, “Towards a Data-Driven
    Product Management – Concepts, Advantages, and Future Research,” presented at
    the IEEE International Conference on Technology Management, Operations and Decisions
    (ICTMOD), Rabat, Morocco, 2023, doi: <a href="https://doi.org/10.1109/ictmod59086.2023.10438135">10.1109/ictmod59086.2023.10438135</a>.'
  mla: Fichtler, Timm, et al. “Towards a Data-Driven Product Management – Concepts,
    Advantages, and Future Research.” <i>2023 IEEE International Conference on Technology
    Management, Operations and Decisions (ICTMOD)</i>, IEEE, 2023, doi:<a href="https://doi.org/10.1109/ictmod59086.2023.10438135">10.1109/ictmod59086.2023.10438135</a>.
  short: 'T. Fichtler, K. Grigoryan, C. Koldewey, R. Dumitrescu, in: 2023 IEEE International
    Conference on Technology Management, Operations and Decisions (ICTMOD), IEEE,
    2023.'
conference:
  end_date: 2023-11-24
  location: Rabat, Morocco
  name: IEEE International Conference on Technology Management, Operations and Decisions
    (ICTMOD)
  start_date: 2023-11-22
date_created: 2024-03-07T18:13:47Z
date_updated: 2024-03-07T18:17:34Z
department:
- _id: '563'
doi: 10.1109/ictmod59086.2023.10438135
keyword:
- Product Lifecycle Management (PLM)
- Data Analytics
- Data-driven Design
- Engineering Management
- Lifecycle Data
language:
- iso: eng
publication: 2023 IEEE International Conference on Technology Management, Operations
  and Decisions (ICTMOD)
publication_status: published
publisher: IEEE
status: public
title: Towards a Data-Driven Product Management – Concepts, Advantages, and Future
  Research
type: conference
user_id: '66731'
year: '2023'
...
---
_id: '33490'
abstract:
- lang: eng
  text: Algorithmic fairness in Information Systems (IS) is a concept that aims to
    mitigate systematic discrimination and bias in automated decision-making. However,
    previous research argued that different fairness criteria are often incompatible.
    In hiring, AI is used to assess and rank applicants according to their fit for
    vacant positions. However, various types of bias also exist for AI-based algorithms
    (e.g., using biased historical data). To reduce AI’s bias and thereby unfair treatment,
    we conducted a systematic literature review to identify suitable strategies for
    the context of hiring. We identified nine fundamental articles in this context
    and extracted four types of approaches to address unfairness in AI, namely pre-process,
    in-process, post-process, and feature selection. Based on our findings, we (a)
    derived a research agenda for future studies and (b) proposed strategies for practitioners
    who design and develop AIs for hiring purposes.
author:
- first_name: Jonas
  full_name: Rieskamp, Jonas
  id: '77643'
  last_name: Rieskamp
- first_name: Lennart
  full_name: Hofeditz, Lennart
  last_name: Hofeditz
- first_name: Milad
  full_name: Mirbabaie, Milad
  id: '88691'
  last_name: Mirbabaie
- first_name: Stefan
  full_name: Stieglitz, Stefan
  last_name: Stieglitz
citation:
  ama: 'Rieskamp J, Hofeditz L, Mirbabaie M, Stieglitz S. Approaches to Improve Fairness
    when Deploying AI-based Algorithms in Hiring – Using a Systematic Literature Review
    to Guide Future Research. In: <i>Proceedings of the Annual Hawaii International
    Conference on System Sciences (HICSS)</i>. ; 2023.'
  apa: Rieskamp, J., Hofeditz, L., Mirbabaie, M., &#38; Stieglitz, S. (2023). Approaches
    to Improve Fairness when Deploying AI-based Algorithms in Hiring – Using a Systematic
    Literature Review to Guide Future Research. <i>Proceedings of the Annual Hawaii
    International Conference on System Sciences (HICSS)</i>. Proceedings of the Annual
    Hawaii International Conference on System Sciences (HICSS).
  bibtex: '@inproceedings{Rieskamp_Hofeditz_Mirbabaie_Stieglitz_2023, title={Approaches
    to Improve Fairness when Deploying AI-based Algorithms in Hiring – Using a Systematic
    Literature Review to Guide Future Research}, booktitle={Proceedings of the Annual
    Hawaii International Conference on System Sciences (HICSS)}, author={Rieskamp,
    Jonas and Hofeditz, Lennart and Mirbabaie, Milad and Stieglitz, Stefan}, year={2023}
    }'
  chicago: Rieskamp, Jonas, Lennart Hofeditz, Milad Mirbabaie, and Stefan Stieglitz.
    “Approaches to Improve Fairness When Deploying AI-Based Algorithms in Hiring –
    Using a Systematic Literature Review to Guide Future Research.” In <i>Proceedings
    of the Annual Hawaii International Conference on System Sciences (HICSS)</i>,
    2023.
  ieee: J. Rieskamp, L. Hofeditz, M. Mirbabaie, and S. Stieglitz, “Approaches to Improve
    Fairness when Deploying AI-based Algorithms in Hiring – Using a Systematic Literature
    Review to Guide Future Research,” presented at the Proceedings of the Annual Hawaii
    International Conference on System Sciences (HICSS), 2023.
  mla: Rieskamp, Jonas, et al. “Approaches to Improve Fairness When Deploying AI-Based
    Algorithms in Hiring – Using a Systematic Literature Review to Guide Future Research.”
    <i>Proceedings of the Annual Hawaii International Conference on System Sciences
    (HICSS)</i>, 2023.
  short: 'J. Rieskamp, L. Hofeditz, M. Mirbabaie, S. Stieglitz, in: Proceedings of
    the Annual Hawaii International Conference on System Sciences (HICSS), 2023.'
conference:
  end_date: 2023-01-06
  name: Proceedings of the Annual Hawaii International Conference on System Sciences
    (HICSS)
  start_date: 2023-01-03
date_created: 2022-09-27T12:39:12Z
date_updated: 2023-02-06T14:39:51Z
keyword:
- fairness in AI
- SLR
- hiring
- AI implementation
- AI-based algorithms
language:
- iso: eng
main_file_link:
- url: https://hdl.handle.net/10125/102654
publication: Proceedings of the Annual Hawaii International Conference on System Sciences
  (HICSS)
status: public
title: Approaches to Improve Fairness when Deploying AI-based Algorithms in Hiring
  – Using a Systematic Literature Review to Guide Future Research
type: conference
user_id: '77643'
year: '2023'
...
---
_id: '45299'
abstract:
- lang: eng
  text: Many applications are driven by Machine Learning (ML) today. While complex
    ML models lead to an accurate prediction, their inner decision-making is obfuscated.
    However, especially for high-stakes decisions, interpretability and explainability
    of the model are necessary. Therefore, we develop a holistic interpretability
    and explainability framework (HIEF) to objectively describe and evaluate an intelligent
    system’s explainable AI (XAI) capacities. This guides data scientists to create
    more transparent models. To evaluate our framework, we analyse 50 real estate
    appraisal papers to ensure the robustness of HIEF. Additionally, we identify six
    typical types of intelligent systems, so-called archetypes, which range from explanatory
    to predictive, and demonstrate how researchers can use the framework to identify
    blind-spot topics in their domain. Finally, regarding comprehensiveness, we used
    a random sample of six intelligent systems and conducted an applicability check
    to provide external validity.
author:
- first_name: Jan-Peter
  full_name: Kucklick, Jan-Peter
  id: '77066'
  last_name: Kucklick
citation:
  ama: 'Kucklick J-P. HIEF: a holistic interpretability and explainability framework.
    <i>Journal of Decision Systems</i>. Published online 2023:1-41. doi:<a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>'
  apa: 'Kucklick, J.-P. (2023). HIEF: a holistic interpretability and explainability
    framework. <i>Journal of Decision Systems</i>, 1–41. <a href="https://doi.org/10.1080/12460125.2023.2207268">https://doi.org/10.1080/12460125.2023.2207268</a>'
  bibtex: '@article{Kucklick_2023, title={HIEF: a holistic interpretability and explainability
    framework}, DOI={<a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>},
    journal={Journal of Decision Systems}, publisher={Taylor &#38; Francis}, author={Kucklick,
    Jan-Peter}, year={2023}, pages={1–41} }'
  chicago: 'Kucklick, Jan-Peter. “HIEF: A Holistic Interpretability and Explainability
    Framework.” <i>Journal of Decision Systems</i>, 2023, 1–41. <a href="https://doi.org/10.1080/12460125.2023.2207268">https://doi.org/10.1080/12460125.2023.2207268</a>.'
  ieee: 'J.-P. Kucklick, “HIEF: a holistic interpretability and explainability framework,”
    <i>Journal of Decision Systems</i>, pp. 1–41, 2023, doi: <a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>.'
  mla: 'Kucklick, Jan-Peter. “HIEF: A Holistic Interpretability and Explainability
    Framework.” <i>Journal of Decision Systems</i>, Taylor &#38; Francis, 2023, pp.
    1–41, doi:<a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>.'
  short: J.-P. Kucklick, Journal of Decision Systems (2023) 1–41.
date_created: 2023-05-26T05:04:45Z
date_updated: 2023-05-26T05:08:36Z
department:
- _id: '195'
- _id: '196'
doi: 10.1080/12460125.2023.2207268
keyword:
- Explainable AI (XAI)
- machine learning
- interpretability
- real estate appraisal
- framework
- taxonomy
language:
- iso: eng
main_file_link:
- url: https://www.tandfonline.com/doi/full/10.1080/12460125.2023.2207268
page: 1-41
publication: Journal of Decision Systems
publication_identifier:
  issn:
  - 1246-0125
  - 2116-7052
publication_status: published
publisher: Taylor & Francis
status: public
title: 'HIEF: a holistic interpretability and explainability framework'
type: journal_article
user_id: '77066'
year: '2023'
...
---
_id: '45793'
abstract:
- lang: eng
  text: The global megatrends of digitization and sustainability lead to new challenges
    for the design and management of technical products in industrial companies. Product
    management - as the bridge between market and company - has the task to absorb
    and combine the manifold requirements and make the right product-related decisions.
    In the process, product management is confronted with heterogeneous information,
    rapidly changing portfolio components, as well as increasing product, and organizational
    complexity. Combining and utilizing data from different sources, e.g., product
    usage data and social media data leads to promising potentials to improve the
    quality of product-related decisions. In this paper, we reinforce the need for
    data-driven product management as an interdisciplinary field of action. The state
    of data-driven product management in practice was analyzed by conducting workshops
    with six manufacturing companies and hosting a focus group meeting with experts
    from different industries. We investigate the expectations and derive requirements
    leading us to open research questions, a vision for data-driven product management,
    and a research agenda to shape future research efforts.
author:
- first_name: Khoren
  full_name: Grigoryan, Khoren
  last_name: Grigoryan
- first_name: Timm
  full_name: Fichtler, Timm
  id: '66731'
  last_name: Fichtler
  orcid: https://orcid.org/0000-0001-6034-4399
- first_name: Nick
  full_name: Schreiner, Nick
  last_name: Schreiner
- first_name: Martin
  full_name: Rabe, Martin
  last_name: Rabe
- first_name: Melina
  full_name: Panzner, Melina
  id: '72658'
  last_name: Panzner
- first_name: Arno
  full_name: Kühn, Arno
  last_name: Kühn
- first_name: Roman
  full_name: Dumitrescu, Roman
  id: '16190'
  last_name: Dumitrescu
- first_name: Christian
  full_name: Koldewey, Christian
  id: '43136'
  last_name: Koldewey
  orcid: https://orcid.org/0000-0001-7992-6399
citation:
  ama: 'Grigoryan K, Fichtler T, Schreiner N, et al. Data-Driven Product Management:
    A Practitioner-Driven Research Agenda. In: <i>Procedia CIRP 33</i>. ; 2023.'
  apa: 'Grigoryan, K., Fichtler, T., Schreiner, N., Rabe, M., Panzner, M., Kühn, A.,
    Dumitrescu, R., &#38; Koldewey, C. (2023). Data-Driven Product Management: A Practitioner-Driven
    Research Agenda. <i>Procedia CIRP 33</i>. 33rd CIRP Design Conference, Sydney.'
  bibtex: '@inproceedings{Grigoryan_Fichtler_Schreiner_Rabe_Panzner_Kühn_Dumitrescu_Koldewey_2023,
    title={Data-Driven Product Management: A Practitioner-Driven Research Agenda},
    booktitle={Procedia CIRP 33}, author={Grigoryan, Khoren and Fichtler, Timm and
    Schreiner, Nick and Rabe, Martin and Panzner, Melina and Kühn, Arno and Dumitrescu,
    Roman and Koldewey, Christian}, year={2023} }'
  chicago: 'Grigoryan, Khoren, Timm Fichtler, Nick Schreiner, Martin Rabe, Melina
    Panzner, Arno Kühn, Roman Dumitrescu, and Christian Koldewey. “Data-Driven Product
    Management: A Practitioner-Driven Research Agenda.” In <i>Procedia CIRP 33</i>,
    2023.'
  ieee: 'K. Grigoryan <i>et al.</i>, “Data-Driven Product Management: A Practitioner-Driven
    Research Agenda,” presented at the 33rd CIRP Design Conference, Sydney, 2023.'
  mla: 'Grigoryan, Khoren, et al. “Data-Driven Product Management: A Practitioner-Driven
    Research Agenda.” <i>Procedia CIRP 33</i>, 2023.'
  short: 'K. Grigoryan, T. Fichtler, N. Schreiner, M. Rabe, M. Panzner, A. Kühn, R.
    Dumitrescu, C. Koldewey, in: Procedia CIRP 33, 2023.'
conference:
  location: Sydney
  name: 33rd CIRP Design Conference
date_created: 2023-06-27T13:46:45Z
date_updated: 2023-06-27T13:57:42Z
department:
- _id: '563'
- _id: '241'
keyword:
- Product Management
- Data Analytics
- Data-Driven Design
- Product-related data
- Lifecycle Data
- Tool-support
language:
- iso: eng
publication: Procedia CIRP 33
status: public
title: 'Data-Driven Product Management: A Practitioner-Driven Research Agenda'
type: conference
user_id: '66731'
year: '2023'
...
---
_id: '56477'
abstract:
- lang: eng
  text: We describe a prototype of a Clinical Decision Support System (CDSS) that
    provides (counterfactual) explanations to support accurate medical diagnosis.
    The prototype is based on an inherently interpretable Bayesian network (BN). Our
    research aims to investigate which explanations are most useful for medical experts
    and whether co-constructing explanations can foster trust and acceptance of CDSS.
author:
- first_name: Felix
  full_name: Liedeker, Felix
  id: '93275'
  last_name: Liedeker
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
citation:
  ama: 'Liedeker F, Cimiano P. A Prototype of an Interactive Clinical Decision Support
    System with Counterfactual Explanations. In: ; 2023.'
  apa: Liedeker, F., &#38; Cimiano, P. (2023). <i>A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations</i>. xAI-2023 Late-breaking
    Work, Demos and Doctoral Consortium co-located with the 1st World Conference on
    eXplainable Artificial Intelligence (xAI-2023), Lisbon.
  bibtex: '@inproceedings{Liedeker_Cimiano_2023, title={A Prototype of an Interactive
    Clinical Decision Support System with Counterfactual Explanations}, author={Liedeker,
    Felix and Cimiano, Philipp}, year={2023} }'
  chicago: Liedeker, Felix, and Philipp Cimiano. “A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations,” 2023.
  ieee: F. Liedeker and P. Cimiano, “A Prototype of an Interactive Clinical Decision
    Support System with Counterfactual Explanations,” presented at the xAI-2023 Late-breaking
    Work, Demos and Doctoral Consortium co-located with the 1st World Conference on
    eXplainable Artificial Intelligence (xAI-2023), Lisbon, 2023.
  mla: Liedeker, Felix, and Philipp Cimiano. <i>A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations</i>. 2023.
  short: 'F. Liedeker, P. Cimiano, in: 2023.'
conference:
  end_date: 2023-07-28
  location: Lisbon
  name: xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with
    the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)
  start_date: 2023-07-26
date_created: 2024-10-09T14:50:09Z
date_updated: 2024-10-09T15:04:53Z
department:
- _id: '660'
keyword:
- Explainable AI
- Clinical decision support
- Bayesian network
- Counterfactual explanations
language:
- iso: eng
project:
- _id: '128'
  name: 'TRR 318 - C5: TRR 318 - Subproject C5'
status: public
title: A Prototype of an Interactive Clinical Decision Support System with Counterfactual
  Explanations
type: conference
user_id: '93275'
year: '2023'
...
---
_id: '34171'
abstract:
- lang: eng
  text: State estimation when only a partial model of a considered system is available
    remains a major challenge in many engineering fields. This work proposes a joint,
    square-root unscented Kalman filter to estimate states and model uncertainties
    simultaneously by linear combinations of physics-motivated library functions.
    Using a sparsity promoting approach, a selection of those linear combinations
    is chosen and thus an interpretable model can be extracted. Results indicate a
    small estimation error compared to a traditional square-root unscented Kalman
    filter and exhibit the enhancement of physically meaningful models.
author:
- first_name: Ricarda-Samantha
  full_name: Götte, Ricarda-Samantha
  id: '43992'
  last_name: Götte
- first_name: Julia
  full_name: Timmermann, Julia
  id: '15402'
  last_name: Timmermann
citation:
  ama: 'Götte R-S, Timmermann J. Estimating States and Model Uncertainties Jointly
    by a Sparsity Promoting UKF. In: <i>12th IFAC Symposium on Nonlinear Control Systems
    (NOLCOS 2022)</i>. Vol 56. ; 2023:85-90. doi:<a href="https://doi.org/10.1016/j.ifacol.2023.02.015">10.1016/j.ifacol.2023.02.015</a>'
  apa: Götte, R.-S., &#38; Timmermann, J. (2023). Estimating States and Model Uncertainties
    Jointly by a Sparsity Promoting UKF. <i>12th IFAC Symposium on Nonlinear Control
    Systems (NOLCOS 2022)</i>, <i>56</i>(1), 85–90. <a href="https://doi.org/10.1016/j.ifacol.2023.02.015">https://doi.org/10.1016/j.ifacol.2023.02.015</a>
  bibtex: '@inproceedings{Götte_Timmermann_2023, title={Estimating States and Model
    Uncertainties Jointly by a Sparsity Promoting UKF}, volume={56}, DOI={<a href="https://doi.org/10.1016/j.ifacol.2023.02.015">10.1016/j.ifacol.2023.02.015</a>},
    number={1}, booktitle={12th IFAC Symposium on Nonlinear Control Systems (NOLCOS
    2022)}, author={Götte, Ricarda-Samantha and Timmermann, Julia}, year={2023}, pages={85–90}
    }'
  chicago: Götte, Ricarda-Samantha, and Julia Timmermann. “Estimating States and Model
    Uncertainties Jointly by a Sparsity Promoting UKF.” In <i>12th IFAC Symposium
    on Nonlinear Control Systems (NOLCOS 2022)</i>, 56:85–90, 2023. <a href="https://doi.org/10.1016/j.ifacol.2023.02.015">https://doi.org/10.1016/j.ifacol.2023.02.015</a>.
  ieee: 'R.-S. Götte and J. Timmermann, “Estimating States and Model Uncertainties
    Jointly by a Sparsity Promoting UKF,” in <i>12th IFAC Symposium on Nonlinear Control
    Systems (NOLCOS 2022)</i>, Canberra, Australia, 2023, vol. 56, no. 1, pp. 85–90,
    doi: <a href="https://doi.org/10.1016/j.ifacol.2023.02.015">10.1016/j.ifacol.2023.02.015</a>.'
  mla: Götte, Ricarda-Samantha, and Julia Timmermann. “Estimating States and Model
    Uncertainties Jointly by a Sparsity Promoting UKF.” <i>12th IFAC Symposium on
    Nonlinear Control Systems (NOLCOS 2022)</i>, vol. 56, no. 1, 2023, pp. 85–90,
    doi:<a href="https://doi.org/10.1016/j.ifacol.2023.02.015">10.1016/j.ifacol.2023.02.015</a>.
  short: 'R.-S. Götte, J. Timmermann, in: 12th IFAC Symposium on Nonlinear Control
    Systems (NOLCOS 2022), 2023, pp. 85–90.'
conference:
  end_date: 2023-01-06
  location: Canberra, Australia
  name: 12th IFAC Symposium on Nonlinear Control Systems NOLCOS 2022
  start_date: 2023-01-04
date_created: 2022-12-01T07:17:00Z
date_updated: 2024-11-13T08:43:05Z
department:
- _id: '153'
- _id: '880'
doi: 10.1016/j.ifacol.2023.02.015
intvolume: '56'
issue: '1'
keyword:
- joint estimation
- unscented transform
- Kalman filter
- sparsity
- data-driven
- compressed sensing
language:
- iso: eng
page: 85-90
publication: 12th IFAC Symposium on Nonlinear Control Systems (NOLCOS 2022)
quality_controlled: '1'
status: public
title: Estimating States and Model Uncertainties Jointly by a Sparsity Promoting UKF
type: conference
user_id: '43992'
volume: 56
year: '2023'
...
---
_id: '29842'
abstract:
- lang: eng
  text: To build successful software products, developers continuously have to discover
    what features the users really need. This discovery can be achieved with continuous
    experimentation, testing different software variants with distinct user groups,
    and deploying the superior variant for all users. However, existing approaches
    do not focus on explicit modeling of variants and experiments, which offers advantages
    such as traceability of decisions and combinability of experiments. Therefore,
    our vision is the provision of model-driven continuous experimentation, which
    provides the developer with a framework for structuring the experimentation process.
    For that, we introduce the overall concept, apply it to the experimentation on
    component-based software architectures and point out future research questions.
    In particular, we show the applicability by combining feature models for modeling
    the software variants, users, and experiments (i.e., model-driven) with MAPE-K
    for the adaptation (i.e., continuous experimentation) and implementing the concept
    based on the component-based Angular framework.
author:
- first_name: Sebastian
  full_name: Gottschalk, Sebastian
  id: '47208'
  last_name: Gottschalk
- first_name: Enes
  full_name: Yigitbas, Enes
  id: '8447'
  last_name: Yigitbas
  orcid: 0000-0002-5967-833X
- first_name: Gregor
  full_name: Engels, Gregor
  id: '107'
  last_name: Engels
citation:
  ama: 'Gottschalk S, Yigitbas E, Engels G. Model-driven Continuous Experimentation
    on Component-based Software Architectures. In: <i>Proceedings of the 18th International
    Conference on Software Architecture Companion</i>. IEEE; 2022. doi:<a href="https://doi.org/10.1109/ICSA-C54293.2022.00011">10.1109/ICSA-C54293.2022.00011</a>'
  apa: Gottschalk, S., Yigitbas, E., &#38; Engels, G. (2022). Model-driven Continuous
    Experimentation on Component-based Software Architectures. <i>Proceedings of
    the 18th International Conference on Software Architecture Companion</i>. 18th
    International Conference on Software Architecture, Hawaii. <a href="https://doi.org/10.1109/ICSA-C54293.2022.00011">https://doi.org/10.1109/ICSA-C54293.2022.00011</a>
  bibtex: '@inproceedings{Gottschalk_Yigitbas_Engels_2022, title={Model-driven Continuous
    Experimentation on Component-based Software Architectures}, DOI={<a href="https://doi.org/10.1109/ICSA-C54293.2022.00011">10.1109/ICSA-C54293.2022.00011</a>},
    booktitle={Proceedings of the 18th International Conference on Software Architecture
    Companion}, publisher={IEEE}, author={Gottschalk, Sebastian and Yigitbas, Enes
    and Engels, Gregor}, year={2022} }'
  chicago: Gottschalk, Sebastian, Enes Yigitbas, and Gregor Engels. “Model-Driven
    Continuous Experimentation on Component-Based Software Architectures.” In <i>Proceedings
    of the 18th International Conference on Software Architecture Companion</i>.
    IEEE, 2022. <a href="https://doi.org/10.1109/ICSA-C54293.2022.00011">https://doi.org/10.1109/ICSA-C54293.2022.00011</a>.
  ieee: 'S. Gottschalk, E. Yigitbas, and G. Engels, “Model-driven Continuous Experimentation
    on Component-based Software Architectures,” presented at the 18th International
    Conference on Software Architecture, Hawaii, 2022, doi: <a href="https://doi.org/10.1109/ICSA-C54293.2022.00011">10.1109/ICSA-C54293.2022.00011</a>.'
  mla: Gottschalk, Sebastian, et al. “Model-Driven Continuous Experimentation on Component-Based
    Software Architectures.” <i>Proceedings of the 18th International Conference
    on Software Architecture Companion</i>, IEEE, 2022, doi:<a href="https://doi.org/10.1109/ICSA-C54293.2022.00011">10.1109/ICSA-C54293.2022.00011</a>.
  short: 'S. Gottschalk, E. Yigitbas, G. Engels, in: Proceedings of the 18th International
    Conference on Software Architecture Companion, IEEE, 2022.'
conference:
  end_date: 2022-03-15
  location: Hawaii
  name: 18th International Conference on Software Architecture
  start_date: 2022-03-12
date_created: 2022-02-15T07:32:10Z
date_updated: 2022-07-04T12:34:53Z
ddc:
- '000'
department:
- _id: '66'
- _id: '534'
doi: 10.1109/ICSA-C54293.2022.00011
file:
- access_level: open_access
  content_type: application/pdf
  creator: sego
  date_created: 2022-07-04T12:33:18Z
  date_updated: 2022-07-04T12:34:52Z
  file_id: '32322'
  file_name: ICSA_CR.pdf
  file_size: 183185
  relation: main_file
file_date_updated: 2022-07-04T12:34:52Z
has_accepted_license: '1'
keyword:
- continuous experimentation
- model-driven
- component-based software architectures
- self-adaptation
language:
- iso: eng
oa: '1'
project:
- _id: '1'
  name: 'SFB 901: SFB 901'
- _id: '4'
  name: 'SFB 901 - C: SFB 901 - Project Area C'
- _id: '17'
  name: 'SFB 901 - C5: SFB 901 - Subproject C5'
publication: Proceedings of the 18th International Conference on Software Architecture
  Companion
publisher: IEEE
status: public
title: Model-driven Continuous Experimentation on Component-based Software Architectures
type: conference
user_id: '47208'
year: '2022'
...
