---
_id: '65061'
abstract:
- lang: eng
  text: "<jats:title>Abstract</jats:title>\r\n                  <jats:p>\r\n                    One
    of the purposes for which XAI is often brought into play is to enable a user to
    act responsibly. However, responsibility is a complex normative and social phenomenon
    that we unfold in this chapter. We consider that the classical concepts of agency
    and responsibility do not fully capture what is needed for meaningful collaboration
    between human users and XAI. Advocating the perspective of sXAI, we argue that
    the growing adaptivity of AI systems will result in sXAI being considered as partners.
    Both partners adopt particular (dialogical) roles within a collaborative process
    and take responsibility for them. We expect that these roles lead to reactive
    attitudes toward the sXAI on the side of the human partners that make these roles
    relational. They resemble those reactive attitudes that we hold toward other human
    agents. For agents to exercise their responsibility, they need to possess agential
    capacities to fulfill their role with respect to the structure of a social interaction.
    Hence, sXAI can be expected to act responsibly. But because of XAI’s limited normative
    capacities, it might rather act as a marginal agent. We refer to marginal agents
    and show they can be scaffolded with regard to their agential capacities and their
    knowledge about the structure of a social interaction. The structure links the
    actions of the partners to each other in terms of a set of stimuli and responses
    to it in pursuit of a particular goal. Hence, it is important to differentiate
    between the different goals that a structure can impose for exercising responsibility.
    Therefore, we follow Shoemaker (Responsibility from the Margins. Oxford University Press;
    2015.\r\n                    <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\"
    xlink:href=\"https://doi.org/10.1093/acprof:oso/9780198715672.24001.0001\" ext-link-type=\"uri\">https://doi.org/10.1093/acprof:oso/9780198715672.24001.0001</jats:ext-link>\r\n
    \                   ) and offer three structures that can help to organize responsibility
    for\r\n                    <jats:italic>decisions made</jats:italic>\r\n                    with
    the assistance of AI systems. These structures are attributability, answerability,
    and accountability. Our insights will inform the development and design process
    of XAI to meet the guiding principles of responsible research and innovation as
    well as trustworthy AI.\r\n                  </jats:p>"
author:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Carsten
  full_name: Schulte, Carsten
  id: '60311'
  last_name: Schulte
citation:
  ama: 'Rohlfing KJ, Alpsancar S, Schulte C. Responsibilities in sXAI. In: <i>Social
    Explainable AI</i>. Springer Nature Singapore; 2026:157-177. doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_9">10.1007/978-981-96-5290-7_9</a>'
  apa: Rohlfing, K. J., Alpsancar, S., &#38; Schulte, C. (2026). Responsibilities
    in sXAI. In <i>Social Explainable AI</i> (pp. 157–177). Springer Nature Singapore.
    <a href="https://doi.org/10.1007/978-981-96-5290-7_9">https://doi.org/10.1007/978-981-96-5290-7_9</a>
  bibtex: '@inbook{Rohlfing_Alpsancar_Schulte_2026, place={Singapore}, title={Responsibilities
    in sXAI}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_9">10.1007/978-981-96-5290-7_9</a>},
    booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Rohlfing,
    Katharina J. and Alpsancar, Suzana and Schulte, Carsten}, year={2026}, pages={157–177}
    }'
  chicago: 'Rohlfing, Katharina J., Suzana Alpsancar, and Carsten Schulte. “Responsibilities
    in SXAI.” In <i>Social Explainable AI</i>, 157–77. Singapore: Springer Nature
    Singapore, 2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_9">https://doi.org/10.1007/978-981-96-5290-7_9</a>.'
  ieee: 'K. J. Rohlfing, S. Alpsancar, and C. Schulte, “Responsibilities in sXAI,”
    in <i>Social Explainable AI</i>, Singapore: Springer Nature Singapore, 2026, pp.
    157–177.'
  mla: Rohlfing, Katharina J., et al. “Responsibilities in SXAI.” <i>Social Explainable
    AI</i>, Springer Nature Singapore, 2026, pp. 157–77, doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_9">10.1007/978-981-96-5290-7_9</a>.
  short: 'K.J. Rohlfing, S. Alpsancar, C. Schulte, in: Social Explainable AI, Springer
    Nature Singapore, Singapore, 2026, pp. 157–177.'
date_created: 2026-03-19T10:59:18Z
date_updated: 2026-03-19T11:53:01Z
department:
- _id: '26'
- _id: '756'
doi: 10.1007/978-981-96-5290-7_9
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1007/978-981-96-5290-7_9
oa: '1'
page: 157-177
place: Singapore
project:
- _id: '109'
  name: 'TRR 318: Erklärbarkeit konstruieren'
- _id: '370'
  name: 'TRR 318; TP B06: Ethik und Normativität der erklärbaren KI'
publication: Social Explainable AI
publication_identifier:
  isbn:
  - '9789819652891'
  - '9789819652907'
publication_status: published
publisher: Springer Nature Singapore
status: public
title: Responsibilities in sXAI
type: book_chapter
user_id: '93637'
year: '2026'
...
---
_id: '65063'
abstract:
- lang: eng
  text: "<jats:title>Abstract</jats:title>\r\n                  <jats:p>\r\n                    This
    chapter critically examines how social explainable AI (sXAI) can better support
    AI practitioners in ensuring fairness in AI-based decision-making. We argue for
    a fundamental shift: Fairness should be understood not as a technical property
    or an information problem, but as a matter of vulnerability—focusing on the real-world
    impacts of AI on individuals and groups, especially those most at risk. Hereby,
    we call for a shift in perspective: from fair AI to\r\n                    <jats:italic>tasking
    AI fairly</jats:italic>\r\n                    . To motivate our vulnerability
    approach, we review the “Dutch welfare fraud scandal” (system risk indication—SyRI)
    and current challenges in the field of fair AI/machine learning (ML). Vulnerability
    of a person or members of a definable group of persons is a complex relational
    notion, and not a technical property of a technical system. Accordingly, we suggest
    several nontechnical strategies that hold the promise to compensate for the insufficiency
    of purely technical approaches to fairness and other ethical issues in the practical
    use of AI-based systems. To discuss how sXAI, due to its interactive and adaptive
    social character, might better fulfill this role than current XAI techniques,
    we provide a toy scenario for how sXAI might support the virtuous AI practitioner
    in an ethical inquiry. Finally, we also address challenges and limits of our approach.\r\n
    \                 </jats:p>"
author:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Eugenia
  full_name: Stamboliev, Eugenia
  last_name: Stamboliev
citation:
  ama: 'Alpsancar S, Stamboliev E. Tasking AI Fairly. How to Empower AI Practitioners
    With sXAI? In: <i>Social Explainable AI</i>. Springer Nature Singapore; 2026:557-581.
    doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_29">10.1007/978-981-96-5290-7_29</a>'
  apa: Alpsancar, S., &#38; Stamboliev, E. (2026). Tasking AI Fairly. How to Empower
    AI Practitioners With sXAI? In <i>Social Explainable AI</i> (pp. 557–581). Springer
    Nature Singapore. <a href="https://doi.org/10.1007/978-981-96-5290-7_29">https://doi.org/10.1007/978-981-96-5290-7_29</a>
  bibtex: '@inbook{Alpsancar_Stamboliev_2026, place={Singapore}, title={Tasking AI
    Fairly. How to Empower AI Practitioners With sXAI?}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_29">10.1007/978-981-96-5290-7_29</a>},
    booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Alpsancar,
    Suzana and Stamboliev, Eugenia}, year={2026}, pages={557–581} }'
  chicago: 'Alpsancar, Suzana, and Eugenia Stamboliev. “Tasking AI Fairly. How to
    Empower AI Practitioners With SXAI?” In <i>Social Explainable AI</i>, 557–81.
    Singapore: Springer Nature Singapore, 2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_29">https://doi.org/10.1007/978-981-96-5290-7_29</a>.'
  ieee: 'S. Alpsancar and E. Stamboliev, “Tasking AI Fairly. How to Empower AI Practitioners
    With sXAI?,” in <i>Social Explainable AI</i>, Singapore: Springer Nature Singapore,
    2026, pp. 557–581.'
  mla: Alpsancar, Suzana, and Eugenia Stamboliev. “Tasking AI Fairly. How to Empower
    AI Practitioners With SXAI?” <i>Social Explainable AI</i>, Springer Nature Singapore,
    2026, pp. 557–81, doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_29">10.1007/978-981-96-5290-7_29</a>.
  short: 'S. Alpsancar, E. Stamboliev, in: Social Explainable AI, Springer Nature
    Singapore, Singapore, 2026, pp. 557–581.'
date_created: 2026-03-19T11:03:30Z
date_updated: 2026-03-19T11:53:42Z
department:
- _id: '26'
- _id: '756'
doi: 10.1007/978-981-96-5290-7_29
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1007/978-981-96-5290-7_29
oa: '1'
page: 557-581
place: Singapore
project:
- _id: '370'
  name: 'TRR 318; TP B06: Ethik und Normativität der erklärbaren KI'
publication: Social Explainable AI
publication_identifier:
  isbn:
  - '9789819652891'
  - '9789819652907'
publication_status: published
publisher: Springer Nature Singapore
status: public
title: Tasking AI Fairly. How to Empower AI Practitioners With sXAI?
type: book_chapter
user_id: '93637'
year: '2026'
...
---
_id: '65064'
abstract:
- lang: eng
  text: "<jats:title>Abstract</jats:title>\r\n                  <jats:p>XAI can minimize
    the risks of being manipulated and deceived by AI but in turn entails other specific
    risks. This also applies to sXAI, and the specifically social character of sXAI
    harbors particular risks that designers and developers should be aware of. In
    this chapter, we shall discuss the potential opportunities and risks of sXAI.
    We see a particularly positive potential in the social character of sXAI, which
    lies in the fact that skillful users, including those with “healthy distrust,”
    can use the adaptivity of sXAI to produce an explanation that is actually relevant
    and adequate for them. However, this requires a high level of skills on the part
    of the user and is thus in contrast to the general promise of efficiency in the
    use of AI. A potential risk of XAI is that it can be (even more) persuasive, as
    the interactive involvement and the anthropomorphism strengthen a trustworthy
    appearance/performance (independent of the adequacy of the sXAI performance).</jats:p>"
author:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Michael
  full_name: Klenk, Michael
  last_name: Klenk
citation:
  ama: 'Alpsancar S, Klenk M. The Risk of Manipulation and Deception in sXAI. In:
    <i>Social Explainable AI</i>. Springer Nature Singapore; 2026:583-616. doi:<a
    href="https://doi.org/10.1007/978-981-96-5290-7_30">10.1007/978-981-96-5290-7_30</a>'
  apa: Alpsancar, S., &#38; Klenk, M. (2026). The Risk of Manipulation and Deception
    in sXAI. In <i>Social Explainable AI</i> (pp. 583–616). Springer Nature Singapore.
    <a href="https://doi.org/10.1007/978-981-96-5290-7_30">https://doi.org/10.1007/978-981-96-5290-7_30</a>
  bibtex: '@inbook{Alpsancar_Klenk_2026, place={Singapore}, title={The Risk of Manipulation
    and Deception in sXAI}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_30">10.1007/978-981-96-5290-7_30</a>},
    booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Alpsancar,
    Suzana and Klenk, Michael}, year={2026}, pages={583–616} }'
  chicago: 'Alpsancar, Suzana, and Michael Klenk. “The Risk of Manipulation and Deception
    in SXAI.” In <i>Social Explainable AI</i>, 583–616. Singapore: Springer Nature
    Singapore, 2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_30">https://doi.org/10.1007/978-981-96-5290-7_30</a>.'
  ieee: 'S. Alpsancar and M. Klenk, “The Risk of Manipulation and Deception in sXAI,”
    in <i>Social Explainable AI</i>, Singapore: Springer Nature Singapore, 2026, pp.
    583–616.'
  mla: Alpsancar, Suzana, and Michael Klenk. “The Risk of Manipulation and Deception
    in SXAI.” <i>Social Explainable AI</i>, Springer Nature Singapore, 2026, pp. 583–616,
    doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_30">10.1007/978-981-96-5290-7_30</a>.
  short: 'S. Alpsancar, M. Klenk, in: Social Explainable AI, Springer Nature Singapore,
    Singapore, 2026, pp. 583–616.'
date_created: 2026-03-19T11:05:30Z
date_updated: 2026-03-19T11:52:00Z
department:
- _id: '26'
- _id: '756'
doi: 10.1007/978-981-96-5290-7_30
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1007/978-981-96-5290-7_30
oa: '1'
page: 583-616
place: Singapore
project:
- _id: '109'
  name: 'TRR 318: Erklärbarkeit konstruieren'
- _id: '370'
  name: 'TRR 318; TP B06: Ethik und Normativität der erklärbaren KI'
publication: Social Explainable AI
publication_identifier:
  isbn:
  - '9789819652891'
  - '9789819652907'
publication_status: published
publisher: Springer Nature Singapore
status: public
title: The Risk of Manipulation and Deception in sXAI
type: book_chapter
user_id: '93637'
year: '2026'
...
---
_id: '62709'
author:
- first_name: Wessel
  full_name: Reijers, Wessel
  id: '102524'
  last_name: Reijers
  orcid: 0000-0003-2505-1587
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
citation:
  ama: 'Reijers W, Alpsancar S. Values and Norms in sXAI. In: Rohlfing K, Främling
    K, Lim B, Alpsancar S, Thommes K, eds. <i>Social Explainable AI. Communications
    of NII Shonan Meetings</i>. Springer; 2026:179-195.'
  apa: Reijers, W., &#38; Alpsancar, S. (2026). Values and Norms in sXAI. In K. Rohlfing,
    K. Främling, B. Lim, S. Alpsancar, &#38; K. Thommes (Eds.), <i>Social explainable
    AI. Communications of NII Shonan Meetings</i> (pp. 179–195). Springer.
  bibtex: '@inbook{Reijers_Alpsancar_2026, place={Singapore}, title={Values and Norms
    in sXAI}, booktitle={Social explainable AI. Communications of NII Shonan Meetings},
    publisher={Springer}, author={Reijers, Wessel and Alpsancar, Suzana}, editor={Rohlfing,
    Katharina and Främling, Kary and Lim, Brian and Alpsancar, Suzana and Thommes,
    Kirsten}, year={2026}, pages={179–195} }'
  chicago: 'Reijers, Wessel, and Suzana Alpsancar. “Values and Norms in SXAI.” In
    <i>Social Explainable AI. Communications of NII Shonan Meetings</i>, edited by
    Katharina Rohlfing, Kary Främling, Brian Lim, Suzana Alpsancar, and Kirsten Thommes,
    179–95. Singapore: Springer, 2026.'
  ieee: 'W. Reijers and S. Alpsancar, “Values and Norms in sXAI,” in <i>Social explainable
    AI. Communications of NII Shonan Meetings</i>, K. Rohlfing, K. Främling, B. Lim,
    S. Alpsancar, and K. Thommes, Eds. Singapore: Springer, 2026, pp. 179–195.'
  mla: Reijers, Wessel, and Suzana Alpsancar. “Values and Norms in SXAI.” <i>Social
    Explainable AI. Communications of NII Shonan Meetings</i>, edited by Katharina
    Rohlfing et al., Springer, 2026, pp. 179–95.
  short: 'W. Reijers, S. Alpsancar, in: K. Rohlfing, K. Främling, B. Lim, S. Alpsancar,
    K. Thommes (Eds.), Social Explainable AI. Communications of NII Shonan Meetings,
    Springer, Singapore, 2026, pp. 179–195.'
date_created: 2025-11-30T07:54:44Z
date_updated: 2026-03-19T10:58:47Z
department:
- _id: '26'
- _id: '756'
editor:
- first_name: Katharina
  full_name: Rohlfing, Katharina
  last_name: Rohlfing
- first_name: Kary
  full_name: Främling, Kary
  last_name: Främling
- first_name: Brian
  full_name: Lim, Brian
  last_name: Lim
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Kirsten
  full_name: Thommes, Kirsten
  last_name: Thommes
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1007/978-981-96-5290-7_10
oa: '1'
page: 179-195
place: Singapore
publication: Social explainable AI. Communications of NII Shonan Meetings
publication_status: published
publisher: Springer
quality_controlled: '1'
related_material:
  link:
  - relation: confirmation
    url: https://link.springer.com/book/9789819652891
status: public
title: Values and Norms in sXAI
type: book_chapter
user_id: '93637'
year: '2026'
...
---
_id: '65065'
abstract:
- lang: eng
  text: "<jats:title>Abstract</jats:title>\r\n                  <jats:p>This introduction
    sets the stage for the present book. Whereas research in eXplainable AI (XAI)
    is motivated by societal changes and values, technology development largely ignores
    social aspects. This book aims to address this research gap with a systematic
    and comprehensive social view on explainable AI. Besides introducing many relevant
    concepts, the book offers first access to their possible implementation, thus
    advancing the development of more social XAI. The introduction starts by connecting
    the topic to the general research field of XAI. The second part defines the novel
    approach of social eXplainable AI (sXAI) along the three characteristics of social
    interaction such as patternedness, incrementality, and multimodality. Finally,
    the third part explains the structure followed by each chapter. The book offers
    insights not only for readers who work on technology development but also for
    those working in sociotechnical fields. Addressing an interdisciplinary readership,
    the book is an invitation for more exchange and further development of the sXAI
    field.</jats:p>"
citation:
  ama: Rohlfing KJ, Främling K, Lim B, Alpsancar S, Thommes K, eds. <i>Social Explainable
    AI</i>. Springer Nature Singapore; 2026. doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_1">10.1007/978-981-96-5290-7_1</a>
  apa: Rohlfing, K. J., Främling, K., Lim, B., Alpsancar, S., &#38; Thommes, K. (Eds.).
    (2026). <i>Social Explainable AI</i>. Springer Nature Singapore. <a href="https://doi.org/10.1007/978-981-96-5290-7_1">https://doi.org/10.1007/978-981-96-5290-7_1</a>
  bibtex: '@book{Rohlfing_Främling_Lim_Alpsancar_Thommes_2026, place={Singapore},
    title={Social Explainable AI}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_1">10.1007/978-981-96-5290-7_1</a>},
    publisher={Springer Nature Singapore}, year={2026} }'
  chicago: 'Rohlfing, Katharina J., Kary Främling, Brian Lim, Suzana Alpsancar, and
    Kirsten Thommes, eds. <i>Social Explainable AI</i>. Singapore: Springer Nature
    Singapore, 2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_1">https://doi.org/10.1007/978-981-96-5290-7_1</a>.'
  ieee: 'K. J. Rohlfing, K. Främling, B. Lim, S. Alpsancar, and K. Thommes, Eds.,
    <i>Social Explainable AI</i>. Singapore: Springer Nature Singapore, 2026.'
  mla: Rohlfing, Katharina J., et al., editors. <i>Social Explainable AI</i>. Springer
    Nature Singapore, 2026, doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_1">10.1007/978-981-96-5290-7_1</a>.
  short: K.J. Rohlfing, K. Främling, B. Lim, S. Alpsancar, K. Thommes, eds., Social
    Explainable AI, Springer Nature Singapore, Singapore, 2026.
date_created: 2026-03-19T11:55:17Z
date_updated: 2026-03-19T11:59:42Z
department:
- _id: '26'
- _id: '756'
doi: 10.1007/978-981-96-5290-7_1
editor:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Kary
  full_name: Främling, Kary
  last_name: Främling
- first_name: Brian
  full_name: Lim, Brian
  last_name: Lim
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Kirsten
  full_name: Thommes, Kirsten
  id: '72497'
  last_name: Thommes
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://link.springer.com/book/10.1007/978-981-96-5290-7
oa: '1'
place: Singapore
project:
- _id: '109'
  name: 'TRR 318: Erklärbarkeit konstruieren'
publication_identifier:
  isbn:
  - '9789819652891'
  - '9789819652907'
publication_status: published
publisher: Springer Nature Singapore
status: public
title: Social Explainable AI
type: book_editor
user_id: '93637'
year: '2026'
...
---
_id: '59167'
author:
- first_name: Sven
  full_name: Thomas, Sven
  id: '94561'
  last_name: Thomas
citation:
  ama: 'Thomas S. Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung
    von Theorie und Praxis. <i>HannahArendt.Net</i>. 2025;14(1):240–242. doi:<a href="https://doi.org/10.57773/HANET.V14I1.607">10.57773/HANET.V14I1.607</a>
  apa: 'Thomas, S. (2025). Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild
    der Verstrickung von Theorie und Praxis. <i>HannahArendt.Net</i>, <i>14</i>(1),
    240–242. <a href="https://doi.org/10.57773/HANET.V14I1.607">https://doi.org/10.57773/HANET.V14I1.607</a>'
  bibtex: '@article{Thomas_2025, title={Rezension: Thomas Meyers neue Arendt Biographie.
    Sinnbild der Verstrickung von Theorie und Praxis}, volume={14}, DOI={<a href="https://doi.org/10.57773/HANET.V14I1.607">10.57773/HANET.V14I1.607</a>},
    number={1}, journal={HannahArendt.Net}, author={Thomas, Sven}, year={2025}, pages={240–242}
    }'
  chicago: 'Thomas, Sven. “Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild
    der Verstrickung von Theorie und Praxis.” <i>HannahArendt.Net</i> 14, no. 1 (2025):
    240–242. <a href="https://doi.org/10.57773/HANET.V14I1.607">https://doi.org/10.57773/HANET.V14I1.607</a>.'
  ieee: 'S. Thomas, “Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der
    Verstrickung von Theorie und Praxis,” <i>HannahArendt.Net</i>, vol. 14, no. 1,
    pp. 240–242, 2025, doi: <a href="https://doi.org/10.57773/HANET.V14I1.607">10.57773/HANET.V14I1.607</a>.'
  mla: 'Thomas, Sven. “Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der
    Verstrickung von Theorie und Praxis.” <i>HannahArendt.Net</i>, vol. 14, no. 1,
    2025, pp. 240–242, doi:<a href="https://doi.org/10.57773/HANET.V14I1.607">10.57773/HANET.V14I1.607</a>.'
  short: S. Thomas, HannahArendt.Net 14 (2025) 240–242.
date_created: 2025-03-27T06:49:51Z
date_updated: 2026-01-22T06:22:42Z
department:
- _id: '26'
- _id: '756'
doi: 10.57773/HANET.V14I1.607
intvolume: '14'
issue: '1'
language:
- iso: ger
main_file_link:
- open_access: '1'
  url: https://www.hannaharendt.net/index.php/han/article/view/607/1022
oa: '1'
page: 240–242
publication: HannahArendt.Net
publication_status: published
status: public
title: 'Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung
  von Theorie und Praxis'
type: journal_article
user_id: '94561'
volume: 14
year: '2025'
...
---
_id: '59166'
author:
- first_name: Sven
  full_name: Thomas, Sven
  id: '94561'
  last_name: Thomas
citation:
  ama: 'Thomas S. Rezension: Hanna Meretoja: Die Nacht der alten Feuer. <i>HannahArendt.Net</i>.
    2025;14(1):237–239. doi:<a href="https://doi.org/10.57773/HANET.V14I1.606">10.57773/HANET.V14I1.606</a>'
  apa: 'Thomas, S. (2025). Rezension: Hanna Meretoja: Die Nacht der alten Feuer. <i>HannahArendt.Net</i>,
    <i>14</i>(1), 237–239. <a href="https://doi.org/10.57773/HANET.V14I1.606">https://doi.org/10.57773/HANET.V14I1.606</a>'
  bibtex: '@article{Thomas_2025, title={Rezension: Hanna Meretoja: Die Nacht der alten
    Feuer}, volume={14}, DOI={<a href="https://doi.org/10.57773/HANET.V14I1.606">10.57773/HANET.V14I1.606</a>},
    number={1}, journal={HannahArendt.Net}, author={Thomas, Sven}, year={2025}, pages={237–239}
    }'
  chicago: 'Thomas, Sven. “Rezension: Hanna Meretoja: Die Nacht der alten Feuer.”
    <i>HannahArendt.Net</i> 14, no. 1 (2025): 237–239. <a href="https://doi.org/10.57773/HANET.V14I1.606">https://doi.org/10.57773/HANET.V14I1.606</a>.'
  ieee: 'S. Thomas, “Rezension: Hanna Meretoja: Die Nacht der alten Feuer,” <i>HannahArendt.Net</i>,
    vol. 14, no. 1, pp. 237–239, 2025, doi: <a href="https://doi.org/10.57773/HANET.V14I1.606">10.57773/HANET.V14I1.606</a>.'
  mla: 'Thomas, Sven. “Rezension: Hanna Meretoja: Die Nacht der alten Feuer.” <i>HannahArendt.Net</i>,
    vol. 14, no. 1, 2025, pp. 237–239, doi:<a href="https://doi.org/10.57773/HANET.V14I1.606">10.57773/HANET.V14I1.606</a>.'
  short: S. Thomas, HannahArendt.Net 14 (2025) 237–239.
date_created: 2025-03-27T06:48:27Z
date_updated: 2025-11-18T09:11:32Z
department:
- _id: '26'
- _id: '756'
doi: 10.57773/HANET.V14I1.606
intvolume: '14'
issue: '1'
language:
- iso: ger
main_file_link:
- open_access: '1'
  url: https://www.hannaharendt.net/index.php/han/article/view/606/961
oa: '1'
page: 237–239
publication: HannahArendt.Net
status: public
title: 'Rezension: Hanna Meretoja: Die Nacht der alten Feuer'
type: journal_article
user_id: '94561'
volume: 14
year: '2025'
...
---
_id: '61517'
author:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
citation:
  ama: 'Alpsancar S. Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der
    Computerisierung des Botanischen Gartens und Botanischen Museums Berlin. In: Hashagen
    U, Seising R, eds. <i>Algorithmische Wissenskulturen. Der Einfluss des Computers
    auf die Wissenschaftsentwicklung</i>. Die blaue Stunde der Informatik. Springer;
    2025:327–365. doi:<a href="https://doi.org/10.1007/978-3-658-35560-9_14">10.1007/978-3-658-35560-9_14</a>'
  apa: Alpsancar, S. (2025). Algorithmische Kulturen des Pflanzensammelns? Das Beispiel
    der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin. In
    U. Hashagen &#38; R. Seising (Eds.), <i>Algorithmische Wissenskulturen. Der Einfluss
    des Computers auf die Wissenschaftsentwicklung</i> (pp. 327–365). Springer. <a
    href="https://doi.org/10.1007/978-3-658-35560-9_14">https://doi.org/10.1007/978-3-658-35560-9_14</a>
  bibtex: '@inbook{Alpsancar_2025, place={Wiesbaden}, series={Die blaue Stunde der
    Informatik}, title={Algorithmische Kulturen des Pflanzensammelns? Das Beispiel
    der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin},
    DOI={<a href="https://doi.org/10.1007/978-3-658-35560-9_14">10.1007/978-3-658-35560-9_14</a>},
    booktitle={Algorithmische Wissenskulturen. Der Einfluss des Computers auf die
    Wissenschaftsentwicklung}, publisher={Springer}, author={Alpsancar, Suzana}, editor={Hashagen,
    Ulf and Seising, Rudolf}, year={2025}, pages={327–365}, collection={Die blaue
    Stunde der Informatik} }'
  chicago: 'Alpsancar, Suzana. “Algorithmische Kulturen des Pflanzensammelns? Das
    Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums
    Berlin.” In <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf
    die Wissenschaftsentwicklung</i>, edited by Ulf Hashagen and Rudolf Seising, 327–365.
    Die blaue Stunde der Informatik. Wiesbaden: Springer, 2025. <a href="https://doi.org/10.1007/978-3-658-35560-9_14">https://doi.org/10.1007/978-3-658-35560-9_14</a>.'
  ieee: 'S. Alpsancar, “Algorithmische Kulturen des Pflanzensammelns? Das Beispiel
    der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin,”
    in <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i>,
    U. Hashagen and R. Seising, Eds. Wiesbaden: Springer, 2025, pp. 327–365.'
  mla: Alpsancar, Suzana. “Algorithmische Kulturen des Pflanzensammelns? Das Beispiel
    der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin.”
    <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i>,
    edited by Ulf Hashagen and Rudolf Seising, Springer, 2025, pp. 327–365, doi:<a
    href="https://doi.org/10.1007/978-3-658-35560-9_14">10.1007/978-3-658-35560-9_14</a>.
  short: 'S. Alpsancar, in: U. Hashagen, R. Seising (Eds.), Algorithmische Wissenskulturen.
    Der Einfluss des Computers auf die Wissenschaftsentwicklung, Springer, Wiesbaden,
    2025, pp. 327–365.'
date_created: 2025-10-05T15:29:31Z
date_updated: 2025-11-18T09:31:23Z
department:
- _id: '26'
- _id: '756'
doi: 10.1007/978-3-658-35560-9_14
editor:
- first_name: Ulf
  full_name: Hashagen, Ulf
  last_name: Hashagen
- first_name: Rudolf
  full_name: Seising, Rudolf
  last_name: Seising
language:
- iso: ger
page: 327–365
place: Wiesbaden
publication: Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung
publication_identifier:
  isbn:
  - '9783658355593'
  - '9783658355609'
  issn:
  - 2730-7425
  - 2730-7433
publication_status: published
publisher: Springer
series_title: Die blaue Stunde der Informatik
status: public
title: Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung
  des Botanischen Gartens und Botanischen Museums Berlin
type: book_chapter
user_id: '93637'
year: '2025'
...
---
_id: '59917'
abstract:
- lang: eng
  text: Under the slogan of trustworthy AI, much of contemporary AI research is focused
    on designing AI systems and usage practices that inspire human trust and, thus,
    enhance the adoption of AI systems. However, a person affected by an AI system
    may not be convinced by AI system design alone; nor should they be, if the AI
    system is embedded in a social context that gives good reason to believe that
    it is used in tension with the person’s interests. In such cases, distrust in
    the system may be justified and necessary to build meaningful trust in the first
    place. We propose the term “healthy distrust” to describe such a justified, careful
    stance towards certain AI usage practices. We investigate prior notions of trust
    and distrust in computer science, sociology, history, psychology, and philosophy;
    outline a remaining gap that healthy distrust might fill; and conceptualize healthy
    distrust as a crucial part of AI usage that respects human autonomy.
author:
- first_name: Benjamin
  full_name: Paaßen, Benjamin
  last_name: Paaßen
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: Paaßen B, Alpsancar S, Matzner T, Scharlau I. Healthy Distrust in AI systems.
    <i>arXiv</i>. Published online 2025.
  apa: Paaßen, B., Alpsancar, S., Matzner, T., &#38; Scharlau, I. (2025). Healthy
    Distrust in AI systems. In <i>arXiv</i>.
  bibtex: '@article{Paaßen_Alpsancar_Matzner_Scharlau_2025, title={Healthy Distrust
    in AI systems}, journal={arXiv}, author={Paaßen, Benjamin and Alpsancar, Suzana
    and Matzner, Tobias and Scharlau, Ingrid}, year={2025} }'
  chicago: Paaßen, Benjamin, Suzana Alpsancar, Tobias Matzner, and Ingrid Scharlau.
    “Healthy Distrust in AI Systems.” <i>ArXiv</i>, 2025.
  ieee: B. Paaßen, S. Alpsancar, T. Matzner, and I. Scharlau, “Healthy Distrust in
    AI systems,” <i>arXiv</i>. 2025.
  mla: Paaßen, Benjamin, et al. “Healthy Distrust in AI Systems.” <i>ArXiv</i>, 2025.
  short: B. Paaßen, S. Alpsancar, T. Matzner, I. Scharlau, ArXiv (2025).
date_created: 2025-05-16T09:39:13Z
date_updated: 2025-11-18T09:38:01Z
department:
- _id: '424'
- _id: '26'
- _id: '756'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2505.09747
oa: '1'
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
- _id: '124'
  name: 'TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen'
- _id: '370'
  name: 'TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren
    KI'
publication: arXiv
status: public
title: Healthy Distrust in AI systems
type: preprint
user_id: '93637'
year: '2025'
...
---
_id: '62028'
abstract:
- lang: eng
  text: 'Explainable AI (XAI) methods can support the identification of biases in
    automated decision-making (ADM) systems. However, existing research does not sufficiently
    address whether these biases originate from the ADM system or mirror underlying
    societal inequalities. This distinction is important because it has major implications
    for how to act upon an explanation: while the societal bias produced by the ADM
    system can be algorithmically fixed, societal inequalities demand societal actions.
    To address this gap, we propose the RR-XAI-framework (recognition-redistribution
    through XAI) that builds on a distinction between socio-technical and societal
    bias and Nancy Fraser''s justice theory of recognition and redistribution. In
    our framework, explanations can play two distinct roles: as a socio-technical
    diagnosis when they reveal biases produced by the ADM system itself, or as a societal
    diagnosis when they expose biases that reflect broader societal inequalities.
    We then outline the operationalization of the framework and discuss its applicability
    for cases in algorithmic hiring and credit scoring. Based on our findings, we
    argue that the diagnostic functions of XAI are contingent on the provision of
    such explanations, the resources of the audiences, as well as the current limits
    of XAI techniques.'
article_type: original
author:
- first_name: Miriam
  full_name: Fahimi, Miriam
  id: '118059'
  last_name: Fahimi
  orcid: 0000-0002-0619-3160
- first_name: Laura
  full_name: State, Laura
  last_name: State
- first_name: Atoosa
  full_name: Kasirzadeh, Atoosa
  last_name: Kasirzadeh
citation:
  ama: 'Fahimi M, State L, Kasirzadeh A. From Explaining to Diagnosing: A Justice-Oriented
    Framework of Explainable AI for Bias Detection. <i>Proceedings of the AAAI/ACM
    Conference on AI, Ethics, and Society</i>. 2025;8(1):879-892. doi:<a href="https://doi.org/10.1609/aies.v8i1.36597">10.1609/aies.v8i1.36597</a>'
  apa: 'Fahimi, M., State, L., &#38; Kasirzadeh, A. (2025). From Explaining to Diagnosing:
    A Justice-Oriented Framework of Explainable AI for Bias Detection. <i>Proceedings
    of the AAAI/ACM Conference on AI, Ethics, and Society</i>, <i>8</i>(1), 879–892.
    <a href="https://doi.org/10.1609/aies.v8i1.36597">https://doi.org/10.1609/aies.v8i1.36597</a>'
  bibtex: '@article{Fahimi_State_Kasirzadeh_2025, title={From Explaining to Diagnosing:
    A Justice-Oriented Framework of Explainable AI for Bias Detection}, volume={8},
    DOI={<a href="https://doi.org/10.1609/aies.v8i1.36597">10.1609/aies.v8i1.36597</a>},
    number={1}, journal={Proceedings of the AAAI/ACM Conference on AI, Ethics, and
    Society}, publisher={Association for the Advancement of Artificial Intelligence
    (AAAI)}, author={Fahimi, Miriam and State, Laura and Kasirzadeh, Atoosa}, year={2025},
    pages={879–892} }'
  chicago: 'Fahimi, Miriam, Laura State, and Atoosa Kasirzadeh. “From Explaining to
    Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection.”
    <i>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</i> 8, no.
    1 (2025): 879–92. <a href="https://doi.org/10.1609/aies.v8i1.36597">https://doi.org/10.1609/aies.v8i1.36597</a>.'
  ieee: 'M. Fahimi, L. State, and A. Kasirzadeh, “From Explaining to Diagnosing: A
    Justice-Oriented Framework of Explainable AI for Bias Detection,” <i>Proceedings
    of the AAAI/ACM Conference on AI, Ethics, and Society</i>, vol. 8, no. 1, pp.
    879–892, 2025, doi: <a href="https://doi.org/10.1609/aies.v8i1.36597">10.1609/aies.v8i1.36597</a>.'
  mla: 'Fahimi, Miriam, et al. “From Explaining to Diagnosing: A Justice-Oriented
    Framework of Explainable AI for Bias Detection.” <i>Proceedings of the AAAI/ACM
    Conference on AI, Ethics, and Society</i>, vol. 8, no. 1, Association for the
    Advancement of Artificial Intelligence (AAAI), 2025, pp. 879–92, doi:<a href="https://doi.org/10.1609/aies.v8i1.36597">10.1609/aies.v8i1.36597</a>.'
  short: M. Fahimi, L. State, A. Kasirzadeh, Proceedings of the AAAI/ACM Conference
    on AI, Ethics, and Society 8 (2025) 879–892.
date_created: 2025-10-31T15:05:38Z
date_updated: 2025-11-18T10:09:40Z
department:
- _id: '756'
- _id: '26'
doi: 10.1609/aies.v8i1.36597
intvolume: '         8'
issue: '1'
language:
- iso: eng
page: 879-892
publication: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
publication_identifier:
  issn:
  - 3065-8365
publication_status: published
publisher: Association for the Advancement of Artificial Intelligence (AAAI)
status: public
title: 'From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable
  AI for Bias Detection'
type: journal_article
user_id: '118059'
volume: 8
year: '2025'
...
---
_id: '62229'
abstract:
- lang: eng
  text: In 2024, the EU adopted the AI Act, a new set of rules for trustworthy artificial
    intelligence. This legal instrument carves a large place for standardisation,
    a regulatory technique that consists in crafting so-called harmonised technical
    standards, to facilitate legal compliance by industry stakeholders. While EU technical
    standards have been used in the past for ensuring product safety, for the first
    time the AI Act relies on standardisation to facilitate compliance with fundamental
    rights, including the right to non-discrimination and equality. The attempt to
    translate inherently open-textured rights and ethical principles into operationalizable
    standards raises critical questions. In particular, how will standardisation practices
    under the new EU AI Act affect, transform, contest and stabilise notions of equality
    and non-discrimination in an increasingly algorithmic society? This paper proposes
    a research agenda to address this question and unpack the black box of AI standardisation.
author:
- first_name: Raphaële
  full_name: Xenidis, Raphaële
  last_name: Xenidis
- first_name: Miriam
  full_name: Fahimi, Miriam
  id: '118059'
  last_name: Fahimi
  orcid: 0000-0002-0619-3160
citation:
  ama: 'Xenidis R, Fahimi M. Standardising Equality in the Algorithmic Society? A
    Research Agenda. In: <i>Proceedings of Fourth European Workshop on Algorithmic
    Fairness</i>. PMLR; 2025:310–314.'
  apa: Xenidis, R., &#38; Fahimi, M. (2025). Standardising Equality in the Algorithmic
    Society? A Research Agenda. <i>Proceedings of Fourth European Workshop on Algorithmic
    Fairness</i>, 310–314.
  bibtex: '@inproceedings{Xenidis_Fahimi_2025, title={Standardising Equality in the
    Algorithmic Society? A Research Agenda}, booktitle={Proceedings of Fourth European
    Workshop on Algorithmic Fairness}, publisher={PMLR}, author={Xenidis, Raphaële
    and Fahimi, Miriam}, year={2025}, pages={310–314} }'
  chicago: Xenidis, Raphaële, and Miriam Fahimi. “Standardising Equality in the Algorithmic
    Society? A Research Agenda.” In <i>Proceedings of Fourth European Workshop on
    Algorithmic Fairness</i>, 310–314. PMLR, 2025.
  ieee: R. Xenidis and M. Fahimi, “Standardising Equality in the Algorithmic Society?
    A Research Agenda,” in <i>Proceedings of Fourth European Workshop on Algorithmic
    Fairness</i>, 2025, pp. 310–314.
  mla: Xenidis, Raphaële, and Miriam Fahimi. “Standardising Equality in the Algorithmic
    Society? A Research Agenda.” <i>Proceedings of Fourth European Workshop on Algorithmic
    Fairness</i>, PMLR, 2025, pp. 310–314.
  short: 'R. Xenidis, M. Fahimi, in: Proceedings of Fourth European Workshop on Algorithmic
    Fairness, PMLR, 2025, pp. 310–314.'
date_created: 2025-11-18T09:59:34Z
date_updated: 2025-11-18T10:02:20Z
department:
- _id: '756'
- _id: '26'
language:
- iso: eng
page: 310–314
publication: Proceedings of Fourth European Workshop on Algorithmic Fairness
publisher: PMLR
status: public
title: Standardising Equality in the Algorithmic Society? A Research Agenda
type: conference
user_id: '118059'
year: '2025'
...
---
_id: '57531'
author:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Alpsancar S, Buhl HM, Matzner T, Scharlau I. Explanation needs and ethical
    demands: unpacking the instrumental value of XAI. <i>AI and Ethics</i>. 2025;5:3015–3033.
    doi:<a href="https://doi.org/10.1007/s43681-024-00622-3">10.1007/s43681-024-00622-3</a>'
  apa: 'Alpsancar, S., Buhl, H. M., Matzner, T., &#38; Scharlau, I. (2025). Explanation
    needs and ethical demands: unpacking the instrumental value of XAI. <i>AI and
    Ethics</i>, <i>5</i>, 3015–3033. <a href="https://doi.org/10.1007/s43681-024-00622-3">https://doi.org/10.1007/s43681-024-00622-3</a>'
  bibtex: '@article{Alpsancar_Buhl_Matzner_Scharlau_2025, title={Explanation needs
    and ethical demands: unpacking the instrumental value of XAI}, volume={5}, DOI={<a
    href="https://doi.org/10.1007/s43681-024-00622-3">10.1007/s43681-024-00622-3</a>},
    journal={AI and Ethics}, publisher={Springer}, author={Alpsancar, Suzana and Buhl,
    Heike M. and Matzner, Tobias and Scharlau, Ingrid}, year={2025}, pages={3015–3033}
    }'
  chicago: 'Alpsancar, Suzana, Heike M. Buhl, Tobias Matzner, and Ingrid Scharlau.
    “Explanation Needs and Ethical Demands: Unpacking the Instrumental Value of XAI.”
    <i>AI and Ethics</i> 5 (2025): 3015–3033. <a href="https://doi.org/10.1007/s43681-024-00622-3">https://doi.org/10.1007/s43681-024-00622-3</a>.'
  ieee: 'S. Alpsancar, H. M. Buhl, T. Matzner, and I. Scharlau, “Explanation needs
    and ethical demands: unpacking the instrumental value of XAI,” <i>AI and Ethics</i>,
    vol. 5, pp. 3015–3033, 2025, doi: <a href="https://doi.org/10.1007/s43681-024-00622-3">10.1007/s43681-024-00622-3</a>.'
  mla: 'Alpsancar, Suzana, et al. “Explanation Needs and Ethical Demands: Unpacking
    the Instrumental Value of XAI.” <i>AI and Ethics</i>, vol. 5, Springer, 2025,
    pp. 3015–3033, doi:<a href="https://doi.org/10.1007/s43681-024-00622-3">10.1007/s43681-024-00622-3</a>.'
  short: S. Alpsancar, H.M. Buhl, T. Matzner, I. Scharlau, AI and Ethics 5 (2025)
    3015–3033.
date_created: 2024-12-02T08:32:00Z
date_updated: 2025-11-25T21:27:44Z
department:
- _id: '756'
- _id: '26'
doi: 10.1007/s43681-024-00622-3
intvolume: '         5'
language:
- iso: eng
main_file_link:
- open_access: '1'
oa: '1'
page: 3015–3033
project:
- _id: '370'
  name: 'TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren
    KI'
- _id: '111'
  name: 'TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)'
- _id: '114'
  name: 'TRR 318 - A04: TRR 318 - Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten (Teilprojekt A04)'
publication: AI and Ethics
publication_status: published
publisher: Springer
related_material:
  link:
  - relation: confirmation
    url: https://link.springer.com/10.1007/s43681-024-00622-3
status: public
title: 'Explanation needs and ethical demands: unpacking the instrumental value of
  XAI'
type: journal_article
user_id: '93637'
volume: 5
year: '2025'
...
---
_id: '62305'
author:
- first_name: Wessel
  full_name: Reijers, Wessel
  id: '102524'
  last_name: Reijers
  orcid: 0000-0003-2505-1587
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
citation:
  ama: 'Reijers W, Matzner T, Alpsancar S. Explainability and AI Governance. In: Farina
    M, Yu X, Chen J, eds. <i>Digital Development. Technology, Ethics and Governance</i>.
    Routledge; 2025. doi:<a href="https://doi.org/10.4324/9781003567622-22">10.4324/9781003567622-22</a>'
  apa: Reijers, W., Matzner, T., &#38; Alpsancar, S. (2025). Explainability and AI
    Governance. In M. Farina, X. Yu, &#38; J. Chen (Eds.), <i>Digital Development.
    Technology, Ethics and Governance</i>. Routledge. <a href="https://doi.org/10.4324/9781003567622-22">https://doi.org/10.4324/9781003567622-22</a>
  bibtex: '@inbook{Reijers_Matzner_Alpsancar_2025, place={New York}, title={Explainability
    and AI Governance}, DOI={<a href="https://doi.org/10.4324/9781003567622-22">10.4324/9781003567622-22</a>},
    booktitle={Digital Development. Technology, Ethics and Governance}, publisher={Routledge},
    author={Reijers, Wessel and Matzner, Tobias and Alpsancar, Suzana}, editor={Farina,
    Mirko and Yu, Xiao and Chen, Jin}, year={2025} }'
  chicago: 'Reijers, Wessel, Tobias Matzner, and Suzana Alpsancar. “Explainability
    and AI Governance.” In <i>Digital Development. Technology, Ethics and Governance</i>,
    edited by Mirko Farina, Xiao Yu, and Jin Chen. New York: Routledge, 2025. <a
    href="https://doi.org/10.4324/9781003567622-22">https://doi.org/10.4324/9781003567622-22</a>.'
  ieee: 'W. Reijers, T. Matzner, and S. Alpsancar, “Explainability and AI Governance,”
    in <i>Digital Development. Technology, Ethics and Governance</i>, M. Farina, X.
    Yu, and J. Chen, Eds. New York: Routledge, 2025.'
  mla: Reijers, Wessel, et al. “Explainability and AI Governance.” <i>Digital Development.
    Technology, Ethics and Governance</i>, edited by Mirko Farina et al., Routledge,
    2025, doi:<a href="https://doi.org/10.4324/9781003567622-22">10.4324/9781003567622-22</a>.
  short: 'W. Reijers, T. Matzner, S. Alpsancar, in: M. Farina, X. Yu, J. Chen (Eds.),
    Digital Development. Technology, Ethics and Governance, Routledge, New York, 2025.'
date_created: 2025-11-25T17:58:04Z
date_updated: 2025-11-25T21:25:31Z
department:
- _id: '26'
- _id: '756'
- _id: '660'
doi: 10.4324/9781003567622-22
editor:
- first_name: Mirko
  full_name: Farina, Mirko
  last_name: Farina
- first_name: Xiao
  full_name: Yu, Xiao
  last_name: Yu
- first_name: Jin
  full_name: Chen, Jin
  last_name: Chen
language:
- iso: eng
place: New York
project:
- _id: '370'
  name: 'TRR 318; TP B06: Ethik und Normativität der erklärbaren KI'
publication: Digital Development. Technology, Ethics and Governance
publication_identifier:
  isbn:
  - '9781003567622'
publication_status: published
publisher: Routledge
status: public
title: Explainability and AI Governance
type: book_chapter
user_id: '93637'
year: '2025'
...
---
_id: '55869'
author:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
citation:
  ama: 'Alpsancar S. Warum und wozu erklärbare KI? Über die Verschiedenheit dreier
    paradigmatischer Zwecksetzungen. In: Adolphi R, Alpsancar S, Hahn S, Kettner M,
    eds. <i> Philosophische Digitalisierungsforschung  Verantwortung, Verständigung,
    Vernunft, Macht</i>. transcript; 2024:55-113.'
  apa: Alpsancar, S. (2024). Warum und wozu erklärbare KI? Über die Verschiedenheit
    dreier paradigmatischer Zwecksetzungen. In R. Adolphi, S. Alpsancar, S. Hahn,
    &#38; M. Kettner (Eds.), <i> Philosophische Digitalisierungsforschung  Verantwortung,
    Verständigung, Vernunft, Macht</i> (pp. 55–113). transcript.
  bibtex: '@inbook{Alpsancar_2024, place={Bielefeld}, title={Warum und wozu erklärbare
    KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen}, booktitle={
    Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft,
    Macht}, publisher={transcript}, author={Alpsancar, Suzana}, editor={Adolphi, Rainer
    and Alpsancar, Suzana and Hahn, Susanne and Kettner, Matthias}, year={2024}, pages={55–113}
    }'
  chicago: 'Alpsancar, Suzana. “Warum Und Wozu Erklärbare KI? Über Die Verschiedenheit
    Dreier Paradigmatischer Zwecksetzungen.” In <i> Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht</i>, edited by Rainer Adolphi, Suzana
    Alpsancar, Susanne Hahn, and Matthias Kettner, 55–113. Bielefeld: transcript,
    2024.'
  ieee: 'S. Alpsancar, “Warum und wozu erklärbare KI? Über die Verschiedenheit dreier
    paradigmatischer Zwecksetzungen,” in <i> Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht</i>, R. Adolphi, S. Alpsancar, S.
    Hahn, and M. Kettner, Eds. Bielefeld: transcript, 2024, pp. 55–113.'
  mla: Alpsancar, Suzana. “Warum Und Wozu Erklärbare KI? Über Die Verschiedenheit
    Dreier Paradigmatischer Zwecksetzungen.” <i> Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht</i>, edited by Rainer Adolphi et
    al., transcript, 2024, pp. 55–113.
  short: 'S. Alpsancar, in: R. Adolphi, S. Alpsancar, S. Hahn, M. Kettner (Eds.),  Philosophische
    Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht, transcript,
    Bielefeld, 2024, pp. 55–113.'
date_created: 2024-08-28T18:50:46Z
date_updated: 2024-08-28T18:51:44Z
department:
- _id: '756'
editor:
- first_name: Rainer
  full_name: Adolphi, Rainer
  last_name: Adolphi
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Susanne
  full_name: Hahn, Susanne
  last_name: Hahn
- first_name: Matthias
  full_name: Kettner, Matthias
  last_name: Kettner
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.transcript-verlag.de/978-3-8376-7497-2/philosophische-digitalisierungsforschung/?number=978-3-8394-7497-6
oa: '1'
page: 55-113
place: Bielefeld
project:
- _id: '370'
  grant_number: '438445824'
  name: 'TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren
    KI'
publication: ' Philosophische Digitalisierungsforschung  Verantwortung, Verständigung,
  Vernunft, Macht'
publisher: transcript
quality_controlled: '1'
status: public
title: Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer
  Zwecksetzungen
type: book_chapter
user_id: '93637'
year: '2024'
...
---
_id: '57172'
author:
- first_name: Wessel
  full_name: Reijers, Wessel
  id: '102524'
  last_name: Reijers
  orcid: 0000-0003-2505-1587
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Martina
  full_name: Philippi, Martina
  id: '100856'
  last_name: Philippi
citation:
  ama: 'Reijers W, Matzner T, Alpsancar S, Philippi M. AI explainability, temporality,
    and civic virtue. In: <i>Smart Ethics in the Digital World: Proceedings of the
    ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts
    of ICT. Universidad de La Rioja, 2024.</i>; 2024.'
  apa: 'Reijers, W., Matzner, T., Alpsancar, S., &#38; Philippi, M. (2024). AI explainability,
    temporality, and civic virtue. <i>Smart Ethics in the Digital World: Proceedings
    of the ETHICOMP 2024. 21st International Conference on the Ethical and Social
    Impacts of ICT. Universidad de La Rioja, 2024.</i>'
  bibtex: '@inproceedings{Reijers_Matzner_Alpsancar_Philippi_2024, place={Logroño},
    title={AI explainability, temporality, and civic virtue}, booktitle={Smart Ethics
    in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference
    on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.}, author={Reijers,
    Wessel and Matzner, Tobias and Alpsancar, Suzana and Philippi, Martina}, year={2024}
    }'
  chicago: 'Reijers, Wessel, Tobias Matzner, Suzana Alpsancar, and Martina Philippi.
    “AI Explainability, Temporality, and Civic Virtue.” In <i>Smart Ethics in the
    Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference
    on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.</i> Logroño,
    2024.'
  ieee: W. Reijers, T. Matzner, S. Alpsancar, and M. Philippi, “AI explainability,
    temporality, and civic virtue,” 2024.
  mla: 'Reijers, Wessel, et al. “AI Explainability, Temporality, and Civic Virtue.”
    <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International
    Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja,
    2024.</i>, 2024.'
  short: 'W. Reijers, T. Matzner, S. Alpsancar, M. Philippi, in: Smart Ethics in the
    Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference
    on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024., Logroño,
    2024.'
date_created: 2024-11-18T10:06:46Z
date_updated: 2024-12-17T11:44:41Z
department:
- _id: '756'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://dialnet.unirioja.es/descarga/articulo/9326093.pdf
oa: '1'
place: Logroño
project:
- _id: '370'
  grant_number: '438445824'
  name: 'TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren
    KI'
publication: 'Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024.
  21st International Conference on the Ethical and Social Impacts of ICT. Universidad
  de La Rioja, 2024.'
publication_status: published
status: public
title: AI explainability, temporality, and civic virtue
type: conference_abstract
user_id: '93637'
year: '2024'
...
---
_id: '56217'
author:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Tobias
  full_name: Matzner, Tobias
  last_name: Matzner
- first_name: Martina
  full_name: Philippi, Martina
  last_name: Philippi
citation:
  ama: 'Alpsancar S, Matzner T, Philippi M. Unpacking the purposes of explainable
    AI. In: <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024.
    21st International Conference on the Ethical and Social Impacts of ICT</i>. Universidad
    de La Rioja; 2024:31-35.'
  apa: 'Alpsancar, S., Matzner, T., &#38; Philippi, M. (2024). Unpacking the purposes
    of explainable AI. <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP
    2024. 21st International Conference on the Ethical and Social Impacts of ICT</i>,
    31–35.'
  bibtex: '@inproceedings{Alpsancar_Matzner_Philippi_2024, title={Unpacking the purposes
    of explainable AI}, booktitle={Smart Ethics in the Digital World: Proceedings
    of the ETHICOMP 2024. 21st International Conference on the Ethical and Social
    Impacts of ICT}, publisher={Universidad de La Rioja}, author={Alpsancar, Suzana
    and Matzner, Tobias and Philippi, Martina}, year={2024}, pages={31–35} }'
  chicago: 'Alpsancar, Suzana, Tobias Matzner, and Martina Philippi. “Unpacking the
    Purposes of Explainable AI.” In <i>Smart Ethics in the Digital World: Proceedings
    of the ETHICOMP 2024. 21st International Conference on the Ethical and Social
    Impacts of ICT</i>, 31–35. Universidad de La Rioja, 2024.'
  ieee: 'S. Alpsancar, T. Matzner, and M. Philippi, “Unpacking the purposes of explainable
    AI,” in <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024.
    21st International Conference on the Ethical and Social Impacts of ICT</i>, 2024,
    pp. 31–35.'
  mla: 'Alpsancar, Suzana, et al. “Unpacking the Purposes of Explainable AI.” <i>Smart
    Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International
    Conference on the Ethical and Social Impacts of ICT</i>, Universidad de La Rioja,
    2024, pp. 31–35.'
  short: 'S. Alpsancar, T. Matzner, M. Philippi, in: Smart Ethics in the Digital World:
    Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical
    and Social Impacts of ICT, Universidad de La Rioja, 2024, pp. 31–35.'
date_created: 2024-09-23T19:17:41Z
date_updated: 2024-12-17T11:46:27Z
department:
- _id: '756'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://dialnet.unirioja.es/descarga/articulo/9326091.pdf
oa: '1'
page: 31-35
project:
- _id: '370'
  grant_number: '438445824'
  name: 'TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren
    KI'
publication: 'Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024.
  21st International Conference on the Ethical and Social Impacts of ICT'
publisher: Universidad de La Rioja
status: public
title: Unpacking the purposes of explainable AI
type: conference_abstract
user_id: '93637'
year: '2024'
...
---
_id: '57762'
citation:
  ama: Alpsancar S, Friedrich A, Gehring P, Kaminski A, Nordmann A, eds. <i>Der Sog
    des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>. Nomos
  apa: Alpsancar, S., Friedrich, A., Gehring, P., Kaminski, A., &#38; Nordmann, A.
    (Eds.). (n.d.). <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie
    2024</i>. Nomos.
  bibtex: '@book{Alpsancar_Friedrich_Gehring_Kaminski_Nordmann, place={Baden-Baden},
    title={Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie
    2024}, publisher={Nomos} }'
  chicago: 'Alpsancar, Suzana, Alexander Friedrich, Petra Gehring, Andreas Kaminski,
    and Alfred Nordmann, eds. <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch
    Technikphilosophie 2024</i>. Baden-Baden: Nomos, n.d.'
  ieee: 'S. Alpsancar, A. Friedrich, P. Gehring, A. Kaminski, and A. Nordmann, Eds.,
    <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>.
    Baden-Baden: Nomos.'
  mla: Alpsancar, Suzana, et al., editors. <i>Der Sog des Neuen (und der Schock des
    Alten). Jahrbuch Technikphilosophie 2024</i>. Nomos.
  short: S. Alpsancar, A. Friedrich, P. Gehring, A. Kaminski, A. Nordmann, eds., Der
    Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024, Nomos,
    Baden-Baden, n.d.
date_created: 2024-12-13T09:26:16Z
date_updated: 2024-12-17T11:43:20Z
department:
- _id: '756'
editor:
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Alexander
  full_name: Friedrich, Alexander
  last_name: Friedrich
- first_name: Petra
  full_name: Gehring, Petra
  last_name: Gehring
- first_name: Andreas
  full_name: Kaminski, Andreas
  last_name: Kaminski
- first_name: Alfred
  full_name: Nordmann, Alfred
  last_name: Nordmann
language:
- iso: ger
- iso: eng
place: Baden-Baden
publication_status: inpress
publisher: Nomos
status: public
title: Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024
type: book_editor
user_id: '93637'
year: '2024'
...
---
_id: '55868'
citation:
  ama: Adolphi R, Hahn S, Kettner M, eds. <i>Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht</i>. transcript; 2024.
  apa: Adolphi, R., Hahn, S., &#38; Kettner, M. (Eds.). (2024). <i>Philosophische
    Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>.
    transcript.
  bibtex: '@book{Adolphi_Hahn_Kettner_2024, place={Bielefeld}, title={Philosophische
    Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht}, publisher={transcript},
    year={2024} }'
  chicago: 'Adolphi, Rainer, Susanne Hahn, and Matthias Kettner, eds. <i>Philosophische
    Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>.
    Bielefeld: transcript, 2024.'
  ieee: 'R. Adolphi, S. Hahn, and M. Kettner, Eds., <i>Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht</i>. Bielefeld: transcript, 2024.'
  mla: Adolphi, Rainer, et al., editors. <i>Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht</i>. transcript, 2024.
  short: R. Adolphi, S. Hahn, M. Kettner, eds., Philosophische Digitalisierungsforschung 
    Verantwortung, Verständigung, Vernunft, Macht, transcript, Bielefeld, 2024.
date_created: 2024-08-28T18:47:16Z
date_updated: 2025-07-02T07:39:27Z
department:
- _id: '756'
- _id: '26'
editor:
- first_name: Rainer
  full_name: Adolphi, Rainer
  last_name: Adolphi
- first_name: Susanne
  full_name: Hahn, Susanne
  last_name: Hahn
- first_name: Matthias
  full_name: Kettner, Matthias
  last_name: Kettner
language:
- iso: ger
main_file_link:
- open_access: '1'
  url: https://www.transcript-verlag.de/978-3-8376-7497-2/philosophische-digitalisierungsforschung/?number=978-3-8394-7497-6
oa: '1'
page: '464'
place: Bielefeld
publication_identifier:
  isbn:
  - 978-3-8376-7497-2
publication_status: published
publisher: transcript
status: public
title: Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft,
  Macht
type: book_editor
user_id: '93637'
year: '2024'
...
---
_id: '62228'
abstract:
- lang: eng
  text: This chapter highlights the intricate nature of data and their profound social
    implications. It examines the acts of rendering data visible and the inherent
    power dynamics and imbalances that accompany such processes. Our dialogue unfolds
    in three interconnected parts, each focusing on the intersection of in/visibility
    and power. Part 1 attends to the challenges of producing knowledge about and with
    data, emphasizing the relativity, fluidity, and instability inherent in data.
    It explores frameworks that uncover the often invisible infrastructures of algorithms,
    rendering visible the actors, technologies, and divergent values involved in data
    manipulation. Part 2 presents empirical case studies that analyse the consequences
    of data visibility while contemplating the methodological opportunities and challenges
    of foregrounding the embedded values and norms within data. Part 3 discusses tool-based
    interventions aimed at bringing alternative data framings and narratives to the
    fore. It examines the complexities of tracing data across various contexts and
    the value, utility, and obstacles associated with creating visual representations
    of data and their flows. By critically engaging with the complexities of data
    in/visibility, this chapter challenges existing gatekeepers and fosters a deeper
    understanding of the multifaceted nature of data and its socio-political ramifications.
author:
- first_name: Miriam
  full_name: Fahimi, Miriam
  id: '118059'
  last_name: Fahimi
  orcid: 0000-0002-0619-3160
- first_name: Petter
  full_name: Falk, Petter
  last_name: Falk
- first_name: Jonathan W. Y.
  full_name: Gray, Jonathan W. Y.
  last_name: Gray
- first_name: Juliane
  full_name: Jarke, Juliane
  last_name: Jarke
- first_name: Katharina
  full_name: Kinder-Kurlanda, Katharina
  last_name: Kinder-Kurlanda
- first_name: Evan
  full_name: Light, Evan
  last_name: Light
- first_name: Ellouise
  full_name: McGeachey, Ellouise
  last_name: McGeachey
- first_name: Itzelle Medina
  full_name: Perea, Itzelle Medina
  last_name: Perea
- first_name: Nikolaus
  full_name: Poechhacker, Nikolaus
  last_name: Poechhacker
- first_name: Lindsay
  full_name: Poirier, Lindsay
  last_name: Poirier
- first_name: Theo
  full_name: Röhle, Theo
  last_name: Röhle
- first_name: Tamar
  full_name: Sharon, Tamar
  last_name: Sharon
- first_name: Marthe
  full_name: Stevens, Marthe
  last_name: Stevens
- first_name: Bernard van
  full_name: Gastel, Bernard van
  last_name: Gastel
- first_name: Quinn
  full_name: White, Quinn
  last_name: White
- first_name: Irina
  full_name: Zakharova, Irina
  last_name: Zakharova
citation:
  ama: 'Fahimi M, Falk P, Gray JWY, et al. In/visibilities in Data Studies: Methods,
    Tools, and Interventions. In: <i>Dialogues in Data Power</i>. Bristol University
    Press; 2024:52–79.'
  apa: 'Fahimi, M., Falk, P., Gray, J. W. Y., Jarke, J., Kinder-Kurlanda, K., Light,
    E., McGeachey, E., Perea, I. M., Poechhacker, N., Poirier, L., Röhle, T., Sharon,
    T., Stevens, M., Gastel, B. van, White, Q., &#38; Zakharova, I. (2024). In/visibilities
    in Data Studies: Methods, Tools, and Interventions. In <i>Dialogues in Data Power</i>
    (pp. 52–79). Bristol University Press.'
  bibtex: '@inbook{Fahimi_Falk_Gray_Jarke_Kinder-Kurlanda_Light_McGeachey_Perea_Poechhacker_Poirier_et
    al._2024, title={In/visibilities in Data Studies: Methods, Tools, and Interventions},
    booktitle={Dialogues in Data Power}, publisher={Bristol University Press}, author={Fahimi,
    Miriam and Falk, Petter and Gray, Jonathan W. Y. and Jarke, Juliane and Kinder-Kurlanda,
    Katharina and Light, Evan and McGeachey, Ellouise and Perea, Itzelle Medina and
    Poechhacker, Nikolaus and Poirier, Lindsay and et al.}, year={2024}, pages={52–79}
    }'
  chicago: 'Fahimi, Miriam, Petter Falk, Jonathan W. Y. Gray, Juliane Jarke, Katharina
    Kinder-Kurlanda, Evan Light, Ellouise McGeachey, et al. “In/Visibilities in Data
    Studies: Methods, Tools, and Interventions.” In <i>Dialogues in Data Power</i>,
    52–79. Bristol University Press, 2024.'
  ieee: 'M. Fahimi <i>et al.</i>, “In/visibilities in Data Studies: Methods, Tools,
    and Interventions,” in <i>Dialogues in Data Power</i>, Bristol University Press,
    2024, pp. 52–79.'
  mla: 'Fahimi, Miriam, et al. “In/Visibilities in Data Studies: Methods, Tools, and
    Interventions.” <i>Dialogues in Data Power</i>, Bristol University Press, 2024,
    pp. 52–79.'
  short: 'M. Fahimi, P. Falk, J.W.Y. Gray, J. Jarke, K. Kinder-Kurlanda, E. Light,
    E. McGeachey, I.M. Perea, N. Poechhacker, L. Poirier, T. Röhle, T. Sharon, M.
    Stevens, B. van Gastel, Q. White, I. Zakharova, in: Dialogues in Data Power, Bristol
    University Press, 2024, pp. 52–79.'
date_created: 2025-11-18T09:58:30Z
date_updated: 2025-11-18T10:02:15Z
department:
- _id: '756'
- _id: '26'
language:
- iso: eng
page: 52–79
publication: Dialogues in Data Power
publication_identifier:
  isbn:
  - 978-1-5292-3832-7
publisher: Bristol University Press
status: public
title: 'In/visibilities in Data Studies: Methods, Tools, and Interventions'
type: book_chapter
user_id: '118059'
year: '2024'
...
---
_id: '62230'
abstract:
- lang: eng
  text: 'Algorithms have risen to become one of, if not the, central technologies
    for producing, circulating, and evaluating knowledge in multiple societal arenas.
    In this book,
    scholars from the social sciences, humanities, and computer science argue that
    this shift has, and will continue to have, profound implications for how knowledge
    is produced and what and whose knowledge is valued and deemed valid. To attend
    to this fundamental change, the authors propose the concept of algorithmic regimes
    and demonstrate how they transform the epistemological, methodological, and political
    foundations of knowledge production, sensemaking, and decision-making in contemporary
    societies. Across sixteen chapters, the volume offers a diverse collection of
    contributions along three perspectives on algorithmic regimes: the methods necessary
    to research and design algorithmic regimes, the ways in which algorithmic regimes
    reconfigure sociotechnical interactions, and the politics engrained in algorithmic
    regimes.'
author:
- first_name: Katharina
  full_name: Kinder-Kurlanda, Katharina
  last_name: Kinder-Kurlanda
- first_name: Miriam
  full_name: Fahimi, Miriam
  id: '118059'
  last_name: Fahimi
  orcid: 0000-0002-0619-3160
citation:
  ama: 'Kinder-Kurlanda K, Fahimi M. Making Algorithms Fair: Ethnographic Insights
    from Machine Learning Interventions. In: Jarke J, Prietl B, Egbert S, Boeva Y,
    Heuer H, Arnold M, eds. <i>Algorithmic Regimes. Methods, Interactions, and Politics.</i>
    Amsterdam University Press; 2024:309–330.'
  apa: 'Kinder-Kurlanda, K., &#38; Fahimi, M. (2024). Making Algorithms Fair: Ethnographic
    Insights from Machine Learning Interventions. In J. Jarke, B. Prietl, S. Egbert,
    Y. Boeva, H. Heuer, &#38; M. Arnold (Eds.), <i>Algorithmic Regimes. Methods, Interactions,
    and Politics.</i> (pp. 309–330). Amsterdam University Press.'
  bibtex: '@inbook{Kinder-Kurlanda_Fahimi_2024, place={Amsterdam}, title={Making Algorithms
    Fair: Ethnographic Insights from Machine Learning Interventions}, booktitle={Algorithmic
    Regimes. Methods, Interactions, and Politics.}, publisher={Amsterdam University
    Press}, author={Kinder-Kurlanda, Katharina and Fahimi, Miriam}, editor={Jarke,
    Juliane and Prietl, Bianca and Egbert, Simon and Boeva, Yana and Heuer, Hendrik
    and Arnold, Maike}, year={2024}, pages={309–330} }'
  chicago: 'Kinder-Kurlanda, Katharina, and Miriam Fahimi. “Making Algorithms Fair:
    Ethnographic Insights from Machine Learning Interventions.” In <i>Algorithmic
    Regimes. Methods, Interactions, and Politics.</i>, edited by Juliane Jarke, Bianca
    Prietl, Simon Egbert, Yana Boeva, Hendrik Heuer, and Maike Arnold, 309–330. Amsterdam:
    Amsterdam University Press, 2024.'
  ieee: 'K. Kinder-Kurlanda and M. Fahimi, “Making Algorithms Fair: Ethnographic Insights
    from Machine Learning Interventions,” in <i>Algorithmic Regimes. Methods, Interactions,
    and Politics.</i>, J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, and M.
    Arnold, Eds. Amsterdam: Amsterdam University Press, 2024, pp. 309–330.'
  mla: 'Kinder-Kurlanda, Katharina, and Miriam Fahimi. “Making Algorithms Fair: Ethnographic
    Insights from Machine Learning Interventions.” <i>Algorithmic Regimes. Methods,
    Interactions, and Politics.</i>, edited by Juliane Jarke et al., Amsterdam University
    Press, 2024, pp. 309–330.'
  short: 'K. Kinder-Kurlanda, M. Fahimi, in: J. Jarke, B. Prietl, S. Egbert, Y. Boeva,
    H. Heuer, M. Arnold (Eds.), Algorithmic Regimes. Methods, Interactions, and Politics.,
    Amsterdam University Press, Amsterdam, 2024, pp. 309–330.'
date_created: 2025-11-18T10:00:38Z
date_updated: 2025-11-18T10:02:25Z
department:
- _id: '756'
- _id: '26'
editor:
- first_name: Juliane
  full_name: Jarke, Juliane
  last_name: Jarke
- first_name: Bianca
  full_name: Prietl, Bianca
  last_name: Prietl
- first_name: Simon
  full_name: Egbert, Simon
  last_name: Egbert
- first_name: Yana
  full_name: Boeva, Yana
  last_name: Boeva
- first_name: Hendrik
  full_name: Heuer, Hendrik
  last_name: Heuer
- first_name: Maike
  full_name: Arnold, Maike
  last_name: Arnold
language:
- iso: eng
page: 309–330
place: Amsterdam
publication: Algorithmic Regimes. Methods, Interactions, and Politics.
publication_identifier:
  isbn:
  - 978-94-6372-848-5
publisher: Amsterdam University Press
status: public
title: 'Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions'
type: book_chapter
user_id: '118059'
year: '2024'
...
