[{"_id":"65061","project":[{"name":"TRR 318: Erklärbarkeit konstruieren","_id":"109"},{"_id":"370","name":"TRR 318; TP B06: Ethik und Normativität der erklärbaren KI"}],"department":[{"_id":"26"},{"_id":"756"}],"user_id":"93637","language":[{"iso":"eng"}],"publication":"Social Explainable AI","type":"book_chapter","abstract":[{"lang":"eng","text":"<jats:title>Abstract</jats:title>\r\n                  <jats:p>\r\n                    One of the purposes for which XAI is often brought into play is to enable a user to act responsibly. However, responsibility is a complex normative and social phenomenon that we unfold in this chapter. We consider that the classical concepts of agency and responsibility do not fully capture what is needed for meaningful collaboration between human users and XAI. Advocating the perspective of sXAI, we argue that the growing adaptivity of AI systems will result in sXAI being considered as partners. Both partners adopt particular (dialogical) roles within a collaborative process and take responsibility for them. We expect that these roles lead to reactive attitudes toward the sXAI on the side of the human partners that make these roles relational. They resemble those reactive attitudes that we hold toward other human agents. For agents to exercise their responsibility, they need to possess agential capacities to fulfill their role with respect to the structure of a social interaction. Hence, sXAI can be expected to act responsibly. But because of XAI’s limited normative capacities, it might rather act as a marginal agent. We refer to marginal agents and show they can be scaffolded with regard to their agential capacities and their knowledge about the structure of a social interaction. The structure links the actions of the partners to each other in terms of a set of stimuli and responses to it in pursuit of a particular goal. 
Hence, it is important to differentiate between the different goals that a structure can impose for exercising responsibility. Therefore, we follow Shoemaker (Responsibility from the margins. Oxford University Press; 2015.\r\n                    <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\" xlink:href=\"https://doi.org/10.1093/acprof:oso/9780198715672.24001.0001\" ext-link-type=\"uri\">https://doi.org/10.1093/acprof:oso/9780198715672.24001.0001</jats:ext-link>\r\n                    ) and offer three structures that can help to organize responsibility for\r\n                    <jats:italic>decisions made</jats:italic>\r\n                    with the assistance of AI systems. These structures are attributability, answerability, and accountability. Our insights will inform the development and design process of XAI to meet the guiding principles of responsible research and innovation as well as trustworthy AI.\r\n                  </jats:p>"}],"status":"public","oa":"1","publisher":"Springer Nature Singapore","date_updated":"2026-03-19T11:53:01Z","date_created":"2026-03-19T10:59:18Z","author":[{"first_name":"Katharina J.","last_name":"Rohlfing","orcid":"0000-0002-5676-8233","id":"50352","full_name":"Rohlfing, Katharina J."},{"full_name":"Alpsancar, Suzana","id":"93637","last_name":"Alpsancar","first_name":"Suzana"},{"first_name":"Carsten","last_name":"Schulte","id":"60311","full_name":"Schulte, Carsten"}],"title":"Responsibilities in sXAI","doi":"10.1007/978-981-96-5290-7_9","main_file_link":[{"url":"https://doi.org/10.1007/978-981-96-5290-7_9","open_access":"1"}],"publication_identifier":{"isbn":["9789819652891","9789819652907"]},"publication_status":"published","year":"2026","place":"Singapore","page":"157-177","citation":{"apa":"Rohlfing, K. J., Alpsancar, S., &#38; Schulte, C. (2026). Responsibilities in sXAI. In <i>Social Explainable AI</i> (pp. 157–177). Springer Nature Singapore. 
<a href=\"https://doi.org/10.1007/978-981-96-5290-7_9\">https://doi.org/10.1007/978-981-96-5290-7_9</a>","short":"K.J. Rohlfing, S. Alpsancar, C. Schulte, in: Social Explainable AI, Springer Nature Singapore, Singapore, 2026, pp. 157–177.","bibtex":"@inbook{Rohlfing_Alpsancar_Schulte_2026, place={Singapore}, title={Responsibilities in sXAI}, DOI={<a href=\"https://doi.org/10.1007/978-981-96-5290-7_9\">10.1007/978-981-96-5290-7_9</a>}, booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Rohlfing, Katharina J. and Alpsancar, Suzana and Schulte, Carsten}, year={2026}, pages={157–177} }","mla":"Rohlfing, Katharina J., et al. “Responsibilities in SXAI.” <i>Social Explainable AI</i>, Springer Nature Singapore, 2026, pp. 157–77, doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_9\">10.1007/978-981-96-5290-7_9</a>.","ama":"Rohlfing KJ, Alpsancar S, Schulte C. Responsibilities in sXAI. In: <i>Social Explainable AI</i>. Springer Nature Singapore; 2026:157-177. doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_9\">10.1007/978-981-96-5290-7_9</a>","ieee":"K. J. Rohlfing, S. Alpsancar, and C. Schulte, “Responsibilities in sXAI,” in <i>Social Explainable AI</i>, Singapore: Springer Nature Singapore, 2026, pp. 157–177.","chicago":"Rohlfing, Katharina J., Suzana Alpsancar, and Carsten Schulte. “Responsibilities in SXAI.” In <i>Social Explainable AI</i>, 157–77. Singapore: Springer Nature Singapore, 2026. <a href=\"https://doi.org/10.1007/978-981-96-5290-7_9\">https://doi.org/10.1007/978-981-96-5290-7_9</a>."}},{"page":"557-581","citation":{"ieee":"S. Alpsancar and E. Stamboliev, “Tasking AI Fairly. How to Empower AI Practitioners With sXAI?,” in <i>Social Explainable AI</i>, Singapore: Springer Nature Singapore, 2026, pp. 557–581.","chicago":"Alpsancar, Suzana, and Eugenia Stamboliev. “Tasking AI Fairly. How to Empower AI Practitioners With SXAI?” In <i>Social Explainable AI</i>, 557–81. Singapore: Springer Nature Singapore, 2026. 
<a href=\"https://doi.org/10.1007/978-981-96-5290-7_29\">https://doi.org/10.1007/978-981-96-5290-7_29</a>.","short":"S. Alpsancar, E. Stamboliev, in: Social Explainable AI, Springer Nature Singapore, Singapore, 2026, pp. 557–581.","bibtex":"@inbook{Alpsancar_Stamboliev_2026, place={Singapore}, title={Tasking AI Fairly. How to Empower AI Practitioners With sXAI?}, DOI={<a href=\"https://doi.org/10.1007/978-981-96-5290-7_29\">10.1007/978-981-96-5290-7_29</a>}, booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Alpsancar, Suzana and Stamboliev, Eugenia}, year={2026}, pages={557–581} }","mla":"Alpsancar, Suzana, and Eugenia Stamboliev. “Tasking AI Fairly. How to Empower AI Practitioners With SXAI?” <i>Social Explainable AI</i>, Springer Nature Singapore, 2026, pp. 557–81, doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_29\">10.1007/978-981-96-5290-7_29</a>.","ama":"Alpsancar S, Stamboliev E. Tasking AI Fairly. How to Empower AI Practitioners With sXAI? In: <i>Social Explainable AI</i>. Springer Nature Singapore; 2026:557-581. doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_29\">10.1007/978-981-96-5290-7_29</a>","apa":"Alpsancar, S., &#38; Stamboliev, E. (2026). Tasking AI Fairly. How to Empower AI Practitioners With sXAI? In <i>Social Explainable AI</i> (pp. 557–581). Springer Nature Singapore. 
<a href=\"https://doi.org/10.1007/978-981-96-5290-7_29\">https://doi.org/10.1007/978-981-96-5290-7_29</a>"},"place":"Singapore","publication_identifier":{"isbn":["9789819652891","9789819652907"]},"publication_status":"published","doi":"10.1007/978-981-96-5290-7_29","main_file_link":[{"open_access":"1","url":"https://doi.org/10.1007/978-981-96-5290-7_29"}],"author":[{"first_name":"Suzana","last_name":"Alpsancar","full_name":"Alpsancar, Suzana","id":"93637"},{"last_name":"Stamboliev","full_name":"Stamboliev, Eugenia","first_name":"Eugenia"}],"date_updated":"2026-03-19T11:53:42Z","oa":"1","status":"public","type":"book_chapter","department":[{"_id":"26"},{"_id":"756"}],"user_id":"93637","_id":"65063","project":[{"_id":"370","name":"TRR 318; TP B06: Ethik und Normativität der erklärbaren KI"}],"year":"2026","title":"Tasking AI Fairly. How to Empower AI Practitioners With sXAI?","date_created":"2026-03-19T11:03:30Z","publisher":"Springer Nature Singapore","abstract":[{"text":"<jats:title>Abstract</jats:title>\r\n                  <jats:p>\r\n                    This chapter critically examines how social explainable AI (sXAI) can better support AI practitioners in ensuring fairness in AI-based decision-making. We argue for a fundamental shift: Fairness should be understood not as a technical property or an information problem, but as a matter of vulnerability—focusing on the real-world impacts of AI on individuals and groups, especially those most at risk. Hereby, we call for a shift in perspective: from fair AI to\r\n                    <jats:italic>tasking AI fairly</jats:italic>\r\n                    . To motivate our vulnerability approach, we review the “Dutch welfare fraud scandal” (system risk indication—SyRI) and current challenges in the field of fair AI/machine learning (ML). Vulnerability of a person or members of a definable group of persons is a complex relational notion, and not a technical property of a technical system. 
Accordingly, we suggest several nontechnical strategies that hold the promise to compensate for the insufficiency of purely technical approaches to fairness and other ethical issues in the practical use of AI-based systems. To discuss how sXAI, due to its interactive and adaptive social character, might better fulfill this role than current XAI techniques, we provide a toy scenario for how sXAI might support the virtuous AI practitioner in an ethical inquiry. Finally, we also address challenges and limits of our approach.\r\n                  </jats:p>","lang":"eng"}],"publication":"Social Explainable AI","language":[{"iso":"eng"}]},{"citation":{"mla":"Alpsancar, Suzana, and Michael Klenk. “The Risk of Manipulation and Deception in SXAI.” <i>Social Explainable AI</i>, Springer Nature Singapore, 2026, pp. 583–616, doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_30\">10.1007/978-981-96-5290-7_30</a>.","short":"S. Alpsancar, M. Klenk, in: Social Explainable AI, Springer Nature Singapore, Singapore, 2026, pp. 583–616.","bibtex":"@inbook{Alpsancar_Klenk_2026, place={Singapore}, title={The Risk of Manipulation and Deception in sXAI}, DOI={<a href=\"https://doi.org/10.1007/978-981-96-5290-7_30\">10.1007/978-981-96-5290-7_30</a>}, booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Alpsancar, Suzana and Klenk, Michael}, year={2026}, pages={583–616} }","apa":"Alpsancar, S., &#38; Klenk, M. (2026). The Risk of Manipulation and Deception in sXAI. In <i>Social Explainable AI</i> (pp. 583–616). Springer Nature Singapore. <a href=\"https://doi.org/10.1007/978-981-96-5290-7_30\">https://doi.org/10.1007/978-981-96-5290-7_30</a>","ama":"Alpsancar S, Klenk M. The Risk of Manipulation and Deception in sXAI. In: <i>Social Explainable AI</i>. Springer Nature Singapore; 2026:583-616. doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_30\">10.1007/978-981-96-5290-7_30</a>","chicago":"Alpsancar, Suzana, and Michael Klenk. 
“The Risk of Manipulation and Deception in SXAI.” In <i>Social Explainable AI</i>, 583–616. Singapore: Springer Nature Singapore, 2026. <a href=\"https://doi.org/10.1007/978-981-96-5290-7_30\">https://doi.org/10.1007/978-981-96-5290-7_30</a>.","ieee":"S. Alpsancar and M. Klenk, “The Risk of Manipulation and Deception in sXAI,” in <i>Social Explainable AI</i>, Singapore: Springer Nature Singapore, 2026, pp. 583–616."},"page":"583-616","year":"2026","place":"Singapore","publication_status":"published","publication_identifier":{"isbn":["9789819652891","9789819652907"]},"main_file_link":[{"open_access":"1","url":"https://doi.org/10.1007/978-981-96-5290-7_30"}],"doi":"10.1007/978-981-96-5290-7_30","title":"The Risk of Manipulation and Deception in sXAI","date_created":"2026-03-19T11:05:30Z","author":[{"full_name":"Alpsancar, Suzana","id":"93637","last_name":"Alpsancar","first_name":"Suzana"},{"first_name":"Michael","full_name":"Klenk, Michael","last_name":"Klenk"}],"oa":"1","publisher":"Springer Nature Singapore","date_updated":"2026-03-19T11:52:00Z","status":"public","abstract":[{"text":"<jats:title>Abstract</jats:title>\r\n                  <jats:p>XAI can minimize the risks of being manipulated and deceived by AI but in turn entails other specific risks. This also applies to sXAI, and the specifically social character of sXAI harbors particular risks that designers and developers should be aware of. In this chapter, we shall discuss the potential opportunities and risks of sXAI. We see a particularly positive potential in the social character of sXAI, which lies in the fact that skillful users, including those with “healthy distrust,” can use the adaptivity of sXAI to produce an explanation that is actually relevant and adequate for them. However, this requires a high level of skills on the part of the user and is thus in contrast to the general promise of efficiency in the use of AI. 
A potential risk of XAI is that it can be (even more) persuasive, as the interactive involvement and the anthropomorphism strengthen a trustworthy appearance/performance (independent of the adequacy of the sXAI performance).</jats:p>","lang":"eng"}],"type":"book_chapter","publication":"Social Explainable AI","language":[{"iso":"eng"}],"user_id":"93637","department":[{"_id":"26"},{"_id":"756"}],"project":[{"name":"TRR 318: Erklärbarkeit konstruieren","_id":"109"},{"name":"TRR 318; TP B06: Ethik und Normativität der erklärbaren KI","_id":"370"}],"_id":"65064"},{"language":[{"iso":"eng"}],"user_id":"93637","department":[{"_id":"26"},{"_id":"756"}],"_id":"62709","status":"public","editor":[{"last_name":"Rohlfing","full_name":"Rohlfing, Katharina","first_name":"Katharina"},{"full_name":"Främling, Kary","last_name":"Främling","first_name":"Kary"},{"first_name":"Brian","last_name":"Lim","full_name":"Lim, Brian"},{"last_name":"Alpsancar","full_name":"Alpsancar, Suzana","first_name":"Suzana"},{"first_name":"Kirsten","full_name":"Thommes, Kirsten","last_name":"Thommes"}],"type":"book_chapter","publication":"Social explainable AI. Communications of NII Shonan Meetings","main_file_link":[{"open_access":"1","url":"https://doi.org/10.1007/978-981-96-5290-7_10"}],"title":"Values and Norms in sXAI","author":[{"full_name":"Reijers, Wessel","id":"102524","orcid":"0000-0003-2505-1587","last_name":"Reijers","first_name":"Wessel"},{"first_name":"Suzana","full_name":"Alpsancar, Suzana","id":"93637","last_name":"Alpsancar"}],"date_created":"2025-11-30T07:54:44Z","publisher":"Springer","oa":"1","date_updated":"2026-03-19T10:58:47Z","citation":{"chicago":"Reijers, Wessel, and Suzana Alpsancar. “Values and Norms in SXAI.” In <i>Social Explainable AI. Communications of NII Shonan Meetings</i>, edited by Katharina Rohlfing, Kary Främling, Brian Lim, Suzana Alpsancar, and Kirsten Thommes, 179–95. Singapore: Springer, 2026.","ieee":"W. Reijers and S. 
Alpsancar, “Values and Norms in sXAI,” in <i>Social explainable AI. Communications of NII Shonan Meetings</i>, K. Rohlfing, K. Främling, B. Lim, S. Alpsancar, and K. Thommes, Eds. Singapore: Springer, 2026, pp. 179–195.","ama":"Reijers W, Alpsancar S. Values and Norms in sXAI. In: Rohlfing K, Främling K, Lim B, Alpsancar S, Thommes K, eds. <i>Social Explainable AI. Communications of NII Shonan Meetings</i>. Springer; 2026:179-195.","apa":"Reijers, W., &#38; Alpsancar, S. (2026). Values and Norms in sXAI. In K. Rohlfing, K. Främling, B. Lim, S. Alpsancar, &#38; K. Thommes (Eds.), <i>Social explainable AI. Communications of NII Shonan Meetings</i> (pp. 179–195). Springer.","short":"W. Reijers, S. Alpsancar, in: K. Rohlfing, K. Främling, B. Lim, S. Alpsancar, K. Thommes (Eds.), Social Explainable AI. Communications of NII Shonan Meetings, Springer, Singapore, 2026, pp. 179–195.","bibtex":"@inbook{Reijers_Alpsancar_2026, place={Singapore}, title={Values and Norms in sXAI}, booktitle={Social explainable AI. Communications of NII Shonan Meetings}, publisher={Springer}, author={Reijers, Wessel and Alpsancar, Suzana}, editor={Rohlfing, Katharina and Främling, Kary and Lim, Brian and Alpsancar, Suzana and Thommes, Kirsten}, year={2026}, pages={179–195} }","mla":"Reijers, Wessel, and Suzana Alpsancar. “Values and Norms in SXAI.” <i>Social Explainable AI. Communications of NII Shonan Meetings</i>, edited by Katharina Rohlfing et al., Springer, 2026, pp. 
179–95."},"page":"179-195","year":"2026","place":"Singapore","related_material":{"link":[{"url":"https://link.springer.com/book/9789819652891","relation":"confirmation"}]},"publication_status":"published","quality_controlled":"1"},{"language":[{"iso":"eng"}],"_id":"65065","project":[{"name":"TRR 318: Erklärbarkeit konstruieren","_id":"109"}],"department":[{"_id":"26"},{"_id":"756"}],"user_id":"93637","abstract":[{"text":"<jats:title>Abstract</jats:title>\r\n                  <jats:p>This introduction sets the stage for the present book. Whereas research in eXplainable AI (XAI) is motivated by societal changes and values, technology development largely ignores social aspects. This book aims to address this research gap with a systematic and comprehensive social view on explainable AI. Besides introducing many relevant concepts, the book offers first access to their possible implementation, thus advancing the development of more social XAI. The introduction starts by connecting the topic to the general research field of XAI. The second part defines the novel approach of social eXplainable AI (sXAI) along the three characteristics of social interaction such as patternedness, incrementality, and multimodality. Finally, the third part explains the structure followed by each chapter. The book offers insights not only for readers who work on technology development but also for those working in sociotechnical fields. 
Addressing an interdisciplinary readership, the book is an invitation for more exchange and further development of the sXAI field.</jats:p>","lang":"eng"}],"editor":[{"id":"50352","full_name":"Rohlfing, Katharina J.","orcid":"0000-0002-5676-8233","last_name":"Rohlfing","first_name":"Katharina J."},{"first_name":"Kary","last_name":"Främling","full_name":"Främling, Kary"},{"full_name":"Lim, Brian","last_name":"Lim","first_name":"Brian"},{"last_name":"Alpsancar","id":"93637","full_name":"Alpsancar, Suzana","first_name":"Suzana"},{"first_name":"Kirsten","last_name":"Thommes","full_name":"Thommes, Kirsten","id":"72497"}],"status":"public","type":"book_editor","title":"Social Explainable AI","doi":"10.1007/978-981-96-5290-7_1","main_file_link":[{"url":"https://link.springer.com/book/10.1007/978-981-96-5290-7","open_access":"1"}],"publisher":"Springer Nature Singapore","oa":"1","date_updated":"2026-03-19T11:59:42Z","date_created":"2026-03-19T11:55:17Z","year":"2026","place":"Singapore","citation":{"ieee":"K. J. Rohlfing, K. Främling, B. Lim, S. Alpsancar, and K. Thommes, Eds., <i>Social Explainable AI</i>. Singapore: Springer Nature Singapore, 2026.","chicago":"Rohlfing, Katharina J., Kary Främling, Brian Lim, Suzana Alpsancar, and Kirsten Thommes, eds. <i>Social Explainable AI</i>. Singapore: Springer Nature Singapore, 2026. <a href=\"https://doi.org/10.1007/978-981-96-5290-7_1\">https://doi.org/10.1007/978-981-96-5290-7_1</a>.","ama":"Rohlfing KJ, Främling K, Lim B, Alpsancar S, Thommes K, eds. <i>Social Explainable AI</i>. Springer Nature Singapore; 2026. doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_1\">10.1007/978-981-96-5290-7_1</a>","apa":"Rohlfing, K. J., Främling, K., Lim, B., Alpsancar, S., &#38; Thommes, K. (Eds.). (2026). <i>Social Explainable AI</i>. Springer Nature Singapore. <a href=\"https://doi.org/10.1007/978-981-96-5290-7_1\">https://doi.org/10.1007/978-981-96-5290-7_1</a>","short":"K.J. Rohlfing, K. Främling, B. Lim, S. Alpsancar, K. 
Thommes, eds., Social Explainable AI, Springer Nature Singapore, Singapore, 2026.","bibtex":"@book{Rohlfing_Främling_Lim_Alpsancar_Thommes_2026, place={Singapore}, title={Social Explainable AI}, DOI={<a href=\"https://doi.org/10.1007/978-981-96-5290-7_1\">10.1007/978-981-96-5290-7_1</a>}, publisher={Springer Nature Singapore}, year={2026} }","mla":"Rohlfing, Katharina J., et al., editors. <i>Social Explainable AI</i>. Springer Nature Singapore, 2026, doi:<a href=\"https://doi.org/10.1007/978-981-96-5290-7_1\">10.1007/978-981-96-5290-7_1</a>."},"publication_identifier":{"isbn":["9789819652891","9789819652907"]},"publication_status":"published"},{"issue":"1","publication_status":"published","citation":{"ama":"Thomas S. Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis. <i>HannahArendtNet</i>. 2025;14(1):240–242. doi:<a href=\"https://doi.org/10.57773/HANET.V14I1.607\">10.57773/HANET.V14I1.607</a>","chicago":"Thomas, Sven. “Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis.” <i>HannahArendt.Net</i> 14, no. 1 (2025): 240–242. <a href=\"https://doi.org/10.57773/HANET.V14I1.607\">https://doi.org/10.57773/HANET.V14I1.607</a>.","ieee":"S. Thomas, “Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis,” <i>HannahArendt.Net</i>, vol. 14, no. 1, pp. 240–242, 2025, doi: <a href=\"https://doi.org/10.57773/HANET.V14I1.607\">10.57773/HANET.V14I1.607</a>.","apa":"Thomas, S. (2025). Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis. <i>HannahArendt.Net</i>, <i>14</i>(1), 240–242. <a href=\"https://doi.org/10.57773/HANET.V14I1.607\">https://doi.org/10.57773/HANET.V14I1.607</a>","mla":"Thomas, Sven. “Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis.” <i>HannahArendt.Net</i>, vol. 14, no. 1, 2025, pp. 
240–242, doi:<a href=\"https://doi.org/10.57773/HANET.V14I1.607\">10.57773/HANET.V14I1.607</a>.","bibtex":"@article{Thomas_2025, title={Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis}, volume={14}, DOI={<a href=\"https://doi.org/10.57773/HANET.V14I1.607\">10.57773/HANET.V14I1.607</a>}, number={1}, journal={HannahArendt.Net}, author={Thomas, Sven}, year={2025}, pages={240–242} }","short":"S. Thomas, HannahArendt.Net 14 (2025) 240–242."},"intvolume":"        14","page":"240–242","year":"2025","date_created":"2025-03-27T06:49:51Z","author":[{"first_name":"Sven","id":"94561","full_name":"Thomas, Sven","last_name":"Thomas"}],"volume":14,"oa":"1","date_updated":"2026-01-22T06:22:42Z","main_file_link":[{"url":"https://www.hannaharendt.net/index.php/han/article/view/607/1022","open_access":"1"}],"doi":"10.57773/HANET.V14I1.607","title":"Rezension: Thomas Meyers neue Arendt Biographie. Sinnbild der Verstrickung von Theorie und Praxis","type":"journal_article","publication":"HannahArendt.Net","status":"public","user_id":"94561","department":[{"_id":"26"},{"_id":"756"}],"_id":"59167","language":[{"iso":"ger"}]},{"type":"journal_article","publication":"HannahArendt.Net","status":"public","_id":"59166","user_id":"94561","department":[{"_id":"26"},{"_id":"756"}],"language":[{"iso":"ger"}],"issue":"1","year":"2025","citation":{"chicago":"Thomas, Sven. “Rezension: Hanna Meretoja: Die Nacht der alten Feuer.” <i>HannahArendt.Net</i> 14, no. 1 (2025): 237–239. <a href=\"https://doi.org/10.57773/HANET.V14I1.606\">https://doi.org/10.57773/HANET.V14I1.606</a>.","ieee":"S. Thomas, “Rezension: Hanna Meretoja: Die Nacht der alten Feuer,” <i>HannahArendt.Net</i>, vol. 14, no. 1, pp. 237–239, 2025, doi: <a href=\"https://doi.org/10.57773/HANET.V14I1.606\">10.57773/HANET.V14I1.606</a>.","ama":"Thomas S. Rezension: Hanna Meretoja: Die Nacht der alten Feuer. <i>HannahArendtNet</i>. 2025;14(1):237–239. 
doi:<a href=\"https://doi.org/10.57773/HANET.V14I1.606\">10.57773/HANET.V14I1.606</a>","apa":"Thomas, S. (2025). Rezension: Hanna Meretoja: Die Nacht der alten Feuer. <i>HannahArendt.Net</i>, <i>14</i>(1), 237–239. <a href=\"https://doi.org/10.57773/HANET.V14I1.606\">https://doi.org/10.57773/HANET.V14I1.606</a>","short":"S. Thomas, HannahArendt.Net 14 (2025) 237–239.","mla":"Thomas, Sven. “Rezension: Hanna Meretoja: Die Nacht der alten Feuer.” <i>HannahArendt.Net</i>, vol. 14, no. 1, 2025, pp. 237–239, doi:<a href=\"https://doi.org/10.57773/HANET.V14I1.606\">10.57773/HANET.V14I1.606</a>.","bibtex":"@article{Thomas_2025, title={Rezension: Hanna Meretoja: Die Nacht der alten Feuer}, volume={14}, DOI={<a href=\"https://doi.org/10.57773/HANET.V14I1.606\">10.57773/HANET.V14I1.606</a>}, number={1}, journal={HannahArendt.Net}, author={Thomas, Sven}, year={2025}, pages={237–239} }"},"intvolume":"        14","page":"237–239","oa":"1","date_updated":"2025-11-18T09:11:32Z","date_created":"2025-03-27T06:48:27Z","author":[{"last_name":"Thomas","full_name":"Thomas, Sven","id":"94561","first_name":"Sven"}],"volume":14,"title":"Rezension: Hanna Meretoja: Die Nacht der alten Feuer","main_file_link":[{"url":"https://www.hannaharendt.net/index.php/han/article/view/606/961","open_access":"1"}],"doi":"10.57773/HANET.V14I1.606"},{"department":[{"_id":"26"},{"_id":"756"}],"user_id":"93637","series_title":"Die blaue Stunde der Informatik","_id":"61517","language":[{"iso":"ger"}],"publication":"Algorithmische Wissenskulturen. 
Der Einfluss des Computers auf die Wissenschaftsentwicklung","type":"book_chapter","status":"public","editor":[{"last_name":"Hashagen","full_name":"Hashagen, Ulf","first_name":"Ulf"},{"first_name":"Rudolf","full_name":"Seising, Rudolf","last_name":"Seising"}],"author":[{"last_name":"Alpsancar","full_name":"Alpsancar, Suzana","id":"93637","first_name":"Suzana"}],"date_created":"2025-10-05T15:29:31Z","publisher":"Springer","date_updated":"2025-11-18T09:31:23Z","doi":"10.1007/978-3-658-35560-9_14","title":"Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin","publication_identifier":{"isbn":["9783658355593","9783658355609"],"issn":["2730-7425","2730-7433"]},"publication_status":"published","page":"327–365","citation":{"ama":"Alpsancar S. Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin. In: Hashagen U, Seising R, eds. <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i>. Die blaue Stunde der Informatik. Springer; 2025:327–365. doi:<a href=\"https://doi.org/10.1007/978-3-658-35560-9_14\">10.1007/978-3-658-35560-9_14</a>","ieee":"S. Alpsancar, “Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin,” in <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i>, U. Hashagen and R. Seising, Eds. Wiesbaden: Springer, 2025, pp. 327–365.","chicago":"Alpsancar, Suzana. “Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin.” In <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i>, edited by Ulf Hashagen and Rudolf Seising, 327–365. Die blaue Stunde der Informatik. Wiesbaden: Springer, 2025. 
<a href=\"https://doi.org/10.1007/978-3-658-35560-9_14\">https://doi.org/10.1007/978-3-658-35560-9_14</a>.","apa":"Alpsancar, S. (2025). Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin. In U. Hashagen &#38; R. Seising (Eds.), <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i> (pp. 327–365). Springer. <a href=\"https://doi.org/10.1007/978-3-658-35560-9_14\">https://doi.org/10.1007/978-3-658-35560-9_14</a>","short":"S. Alpsancar, in: U. Hashagen, R. Seising (Eds.), Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung, Springer, Wiesbaden, 2025, pp. 327–365.","bibtex":"@inbook{Alpsancar_2025, place={Wiesbaden}, series={Die blaue Stunde der Informatik}, title={Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin}, DOI={<a href=\"https://doi.org/10.1007/978-3-658-35560-9_14\">10.1007/978-3-658-35560-9_14</a>}, booktitle={Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung}, publisher={Springer}, author={Alpsancar, Suzana}, editor={Hashagen, Ulf and Seising, Rudolf}, year={2025}, pages={327–365}, collection={Die blaue Stunde der Informatik} }","mla":"Alpsancar, Suzana. “Algorithmische Kulturen des Pflanzensammelns? Das Beispiel der Computerisierung des Botanischen Gartens und Botanischen Museums Berlin.” <i>Algorithmische Wissenskulturen. Der Einfluss des Computers auf die Wissenschaftsentwicklung</i>, edited by Ulf Hashagen and Rudolf Seising, Springer, 2025, pp. 
327–365, doi:<a href=\"https://doi.org/10.1007/978-3-658-35560-9_14\">10.1007/978-3-658-35560-9_14</a>."},"place":"Wiesbaden","year":"2025"},{"title":"Healthy Distrust in AI systems","main_file_link":[{"url":"https://arxiv.org/abs/2505.09747","open_access":"1"}],"oa":"1","date_updated":"2025-11-18T09:38:01Z","date_created":"2025-05-16T09:39:13Z","author":[{"first_name":"Benjamin","last_name":"Paaßen","full_name":"Paaßen, Benjamin"},{"first_name":"Suzana","last_name":"Alpsancar","id":"93637","full_name":"Alpsancar, Suzana"},{"last_name":"Matzner","full_name":"Matzner, Tobias","id":"65695","first_name":"Tobias"},{"first_name":"Ingrid","last_name":"Scharlau","orcid":"0000-0003-2364-9489","full_name":"Scharlau, Ingrid","id":"451"}],"year":"2025","citation":{"ama":"Paaßen B, Alpsancar S, Matzner T, Scharlau I. Healthy Distrust in AI systems. <i>arXiv</i>. Published online 2025.","chicago":"Paaßen, Benjamin, Suzana Alpsancar, Tobias Matzner, and Ingrid Scharlau. “Healthy Distrust in AI Systems.” <i>ArXiv</i>, 2025.","ieee":"B. Paaßen, S. Alpsancar, T. Matzner, and I. Scharlau, “Healthy Distrust in AI systems,” <i>arXiv</i>. 2025.","short":"B. Paaßen, S. Alpsancar, T. Matzner, I. Scharlau, ArXiv (2025).","mla":"Paaßen, Benjamin, et al. “Healthy Distrust in AI Systems.” <i>ArXiv</i>, 2025.","bibtex":"@article{Paaßen_Alpsancar_Matzner_Scharlau_2025, title={Healthy Distrust in AI systems}, journal={arXiv}, author={Paaßen, Benjamin and Alpsancar, Suzana and Matzner, Tobias and Scharlau, Ingrid}, year={2025} }","apa":"Paaßen, B., Alpsancar, S., Matzner, T., &#38; Scharlau, I. (2025). Healthy Distrust in AI systems. 
In <i>arXiv</i>."},"language":[{"iso":"eng"}],"project":[{"_id":"122","name":"TRR 318 - B3: TRR 318 - Subproject B3"},{"name":"TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen","_id":"124"},{"_id":"370","name":"TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren KI"}],"_id":"59917","user_id":"93637","department":[{"_id":"424"},{"_id":"26"},{"_id":"756"}],"abstract":[{"text":"Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and, thus, enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone---neither should they, if the AI system is embedded in a social context that gives good reason to believe that it is used in tension with a person’s interest. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term <i>healthy distrust</i> to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill and conceptualize healthy distrust as a crucial part for AI usage that respects human autonomy.","lang":"eng"}],"status":"public","type":"preprint","publication":"arXiv"},{"date_updated":"2025-11-18T10:09:40Z","author":[{"orcid":"0000-0002-0619-3160","last_name":"Fahimi","id":"118059","full_name":"Fahimi, Miriam","first_name":"Miriam"},{"first_name":"Laura","last_name":"State","full_name":"State, Laura"},{"first_name":"Atoosa","last_name":"Kasirzadeh","full_name":"Kasirzadeh, Atoosa"}],"volume":8,"doi":"10.1609/aies.v8i1.36597","publication_status":"published","publication_identifier":{"issn":["3065-8365"]},"citation":{"apa":"Fahimi, M., State, L., &#38; Kasirzadeh, A. (2025). 
From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection. <i>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</i>, <i>8</i>(1), 879–892. <a href=\"https://doi.org/10.1609/aies.v8i1.36597\">https://doi.org/10.1609/aies.v8i1.36597</a>","bibtex":"@article{Fahimi_State_Kasirzadeh_2025, title={From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection}, volume={8}, DOI={<a href=\"https://doi.org/10.1609/aies.v8i1.36597\">10.1609/aies.v8i1.36597</a>}, number={1}, journal={Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society}, publisher={Association for the Advancement of Artificial Intelligence (AAAI)}, author={Fahimi, Miriam and State, Laura and Kasirzadeh, Atoosa}, year={2025}, pages={879–892} }","mla":"Fahimi, Miriam, et al. “From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection.” <i>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</i>, vol. 8, no. 1, Association for the Advancement of Artificial Intelligence (AAAI), 2025, pp. 879–92, doi:<a href=\"https://doi.org/10.1609/aies.v8i1.36597\">10.1609/aies.v8i1.36597</a>.","short":"M. Fahimi, L. State, A. Kasirzadeh, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8 (2025) 879–892.","ama":"Fahimi M, State L, Kasirzadeh A. From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection. <i>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</i>. 2025;8(1):879-892. doi:<a href=\"https://doi.org/10.1609/aies.v8i1.36597\">10.1609/aies.v8i1.36597</a>","chicago":"Fahimi, Miriam, Laura State, and Atoosa Kasirzadeh. “From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection.” <i>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</i> 8, no. 1 (2025): 879–92. 
<a href=\"https://doi.org/10.1609/aies.v8i1.36597\">https://doi.org/10.1609/aies.v8i1.36597</a>.","ieee":"M. Fahimi, L. State, and A. Kasirzadeh, “From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection,” <i>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</i>, vol. 8, no. 1, pp. 879–892, 2025, doi: <a href=\"https://doi.org/10.1609/aies.v8i1.36597\">10.1609/aies.v8i1.36597</a>."},"page":"879-892","intvolume":"         8","_id":"62028","user_id":"118059","department":[{"_id":"756"},{"_id":"26"}],"article_type":"original","type":"journal_article","status":"public","publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","date_created":"2025-10-31T15:05:38Z","title":"From Explaining to Diagnosing: A Justice-Oriented Framework of Explainable AI for Bias Detection","issue":"1","year":"2025","language":[{"iso":"eng"}],"publication":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","abstract":[{"lang":"eng","text":"Explainable AI (XAI) methods can support the identification of biases in automated decision-making (ADM) systems. However, existing research does not sufficiently address whether these biases originate from the ADM system or mirror underlying societal inequalities. This distinction is important because it has major implications for how to act upon an explanation: while the societal bias produced by the ADM system can be algorithmically fixed, societal inequalities demand societal actions. To address this gap, we propose the RR-XAI-framework (recognition-redistribution through XAI) that builds on a distinction between socio-technical and societal bias and Nancy Fraser's justice theory of recognition and redistribution. In our framework, explanations can play two distinct roles: as a socio-technical diagnosis when they reveal biases produced by the ADM system itself, or as a societal diagnosis when they expose biases that reflect broader societal inequalities. 
We then outline the operationalization of the framework and discuss its applicability for cases in algorithmic hiring and credit scoring. Based on our findings, we argue that the diagnostic functions of XAI are contingent on the provision of such explanations, the resources of the audiences, as well as the current limits of XAI techniques."}]},{"date_updated":"2025-11-18T10:02:20Z","publisher":"PMLR","date_created":"2025-11-18T09:59:34Z","author":[{"first_name":"Raphaële","full_name":"Xenidis, Raphaële","last_name":"Xenidis"},{"first_name":"Miriam","id":"118059","full_name":"Fahimi, Miriam","last_name":"Fahimi","orcid":"0000-0002-0619-3160"}],"title":"Standardising Equality in the Algorithmic Society? A Research Agenda","year":"2025","page":"310–314","citation":{"ama":"Xenidis R, Fahimi M. Standardising Equality in the Algorithmic Society? A Research Agenda. In: <i>Proceedings of Fourth European Workshop on Algorithmic Fairness</i>. PMLR; 2025:310–314.","ieee":"R. Xenidis and M. Fahimi, “Standardising Equality in the Algorithmic Society? A Research Agenda,” in <i>Proceedings of Fourth European Workshop on Algorithmic Fairness</i>, 2025, pp. 310–314.","chicago":"Xenidis, Raphaële, and Miriam Fahimi. “Standardising Equality in the Algorithmic Society? A Research Agenda.” In <i>Proceedings of Fourth European Workshop on Algorithmic Fairness</i>, 310–314. PMLR, 2025.","apa":"Xenidis, R., &#38; Fahimi, M. (2025). Standardising Equality in the Algorithmic Society? A Research Agenda. <i>Proceedings of Fourth European Workshop on Algorithmic Fairness</i>, 310–314.","mla":"Xenidis, Raphaële, and Miriam Fahimi. “Standardising Equality in the Algorithmic Society? A Research Agenda.” <i>Proceedings of Fourth European Workshop on Algorithmic Fairness</i>, PMLR, 2025, pp. 310–314.","bibtex":"@inproceedings{Xenidis_Fahimi_2025, title={Standardising Equality in the Algorithmic Society? 
A Research Agenda}, booktitle={Proceedings of Fourth European Workshop on Algorithmic Fairness}, publisher={PMLR}, author={Xenidis, Raphaële and Fahimi, Miriam}, year={2025}, pages={310–314} }","short":"R. Xenidis, M. Fahimi, in: Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR, 2025, pp. 310–314."},"_id":"62229","department":[{"_id":"756"},{"_id":"26"}],"user_id":"118059","language":[{"iso":"eng"}],"publication":"Proceedings of Fourth European Workshop on Algorithmic Fairness","type":"conference","abstract":[{"lang":"eng","text":"In 2024, the EU adopted the AI Act, a new set of rules for trustworthy artificial intelligence. This legal instrument carves a large place for standardisation, a regulatory technique that consists in crafting so-called harmonised technical standards, to facilitate legal compliance by industry stakeholders. While EU technical standards have been used in the past for ensuring product safety, for the first time the AI Act relies on standardisation to facilitate compliance with fundamental rights, including the right to non-discrimination and equality. The attempt to translate inherently open-textured rights and ethical principles into operationalizable standards raises critical questions. In particular, how will standardisation practices under the new EU AI Act affect, transform, contest and stabilise notions of equality and non-discrimination in an increasingly algorithmic society? 
This paper proposes a research agenda to address this question and unpack the black box of AI standardisation."}],"status":"public"},{"publication":"AI and Ethics","language":[{"iso":"eng"}],"year":"2025","title":"Explanation needs and ethical demands: unpacking the instrumental value of XAI","date_created":"2024-12-02T08:32:00Z","publisher":"Springer","status":"public","type":"journal_article","department":[{"_id":"756"},{"_id":"26"}],"user_id":"93637","_id":"57531","project":[{"name":"TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren KI","_id":"370"},{"_id":"111","name":"TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)"},{"_id":"114","name":"TRR 318 - A04: TRR 318 - Integration des technischen Modells in das Partnermodell bei der Erklärung von digitalen Artefakten (Teilprojekt A04)"}],"page":"3015–3033","intvolume":"         5","citation":{"apa":"Alpsancar, S., Buhl, H. M., Matzner, T., &#38; Scharlau, I. (2025). Explanation needs and ethical demands: unpacking the instrumental value of XAI. <i>AI and Ethics</i>, <i>5</i>, 3015–3033. <a href=\"https://doi.org/10.1007/s43681-024-00622-3\">https://doi.org/10.1007/s43681-024-00622-3</a>","mla":"Alpsancar, Suzana, et al. “Explanation Needs and Ethical Demands: Unpacking the Instrumental Value of XAI.” <i>AI and Ethics</i>, vol. 5, Springer, 2025, pp. 3015–3033, doi:<a href=\"https://doi.org/10.1007/s43681-024-00622-3\">https://doi.org/10.1007/s43681-024-00622-3</a>.","bibtex":"@article{Alpsancar_Buhl_Matzner_Scharlau_2025, title={Explanation needs and ethical demands: unpacking the instrumental value of XAI}, volume={5}, DOI={<a href=\"https://doi.org/10.1007/s43681-024-00622-3\">https://doi.org/10.1007/s43681-024-00622-3</a>}, journal={AI and Ethics}, publisher={Springer}, author={Alpsancar, Suzana and Buhl, Heike M. and Matzner, Tobias and Scharlau, Ingrid}, year={2025}, pages={3015–3033} }","short":"S. Alpsancar, H.M. Buhl, T. Matzner, I. 
Scharlau, AI and Ethics 5 (2025) 3015–3033.","ieee":"S. Alpsancar, H. M. Buhl, T. Matzner, and I. Scharlau, “Explanation needs and ethical demands: unpacking the instrumental value of XAI,” <i>AI and Ethics</i>, vol. 5, pp. 3015–3033, 2025, doi: <a href=\"https://doi.org/10.1007/s43681-024-00622-3\">https://doi.org/10.1007/s43681-024-00622-3</a>.","chicago":"Alpsancar, Suzana, Heike M. Buhl, Tobias Matzner, and Ingrid Scharlau. “Explanation Needs and Ethical Demands: Unpacking the Instrumental Value of XAI.” <i>AI and Ethics</i> 5 (2025): 3015–3033. <a href=\"https://doi.org/10.1007/s43681-024-00622-3\">https://doi.org/10.1007/s43681-024-00622-3</a>.","ama":"Alpsancar S, Buhl HM, Matzner T, Scharlau I. Explanation needs and ethical demands: unpacking the instrumental value of XAI. <i>AI and Ethics</i>. 2025;5:3015–3033. doi:<a href=\"https://doi.org/10.1007/s43681-024-00622-3\">https://doi.org/10.1007/s43681-024-00622-3</a>"},"related_material":{"link":[{"url":"https://links.springernature.com/f/a/xjbXcT06ufIgbHT1duGaHQ~~/AABE5gA~/RgRpMhXcP0SiaHR0cHM6Ly9saW5rLnNwcmluZ2VyLmNvbS8xMC4xMDA3L3M0MzY4MS0wMjQtMDA2MjItMz91dG1fc291cmNlPXJjdF9jb25ncmF0ZW1haWx0JnV0bV9tZWRpdW09ZW1haWwmdXRtX2NhbXBhaWduPW9hXzIwMjQxMjAzJnV0bV9jb250ZW50PTEwLjEwMDcvczQzNjgxLTAyNC0wMDYyMi0zVwNzcGNCCmdG3JBPZxsDc2FSIXN1emFuYS5hbHBzYW5jYXJAdW5pLXBhZGVyYm9ybi5kZVgEAAAHLA~~","relation":"confirmation"}]},"publication_status":"published","doi":"https://doi.org/10.1007/s43681-024-00622-3","main_file_link":[{"open_access":"1"}],"volume":5,"author":[{"first_name":"Suzana","full_name":"Alpsancar, Suzana","id":"93637","last_name":"Alpsancar"},{"first_name":"Heike M.","last_name":"Buhl","full_name":"Buhl, Heike M.","id":"27152"},{"last_name":"Matzner","full_name":"Matzner, Tobias","id":"65695","first_name":"Tobias"},{"first_name":"Ingrid","orcid":"0000-0003-2364-9489","last_name":"Scharlau","full_name":"Scharlau, 
Ingrid","id":"451"}],"oa":"1","date_updated":"2025-11-25T21:27:44Z"},{"editor":[{"last_name":"Farina","full_name":"Farina, Mirko ","first_name":"Mirko "},{"first_name":"Xiao ","last_name":"Yu","full_name":"Yu, Xiao "},{"first_name":"Jin","full_name":"Chen, Jin","last_name":"Chen"}],"status":"public","type":"book_chapter","publication":"Digital Development. Technology, Ethics and Governance","language":[{"iso":"eng"}],"project":[{"name":"TRR 318; TP B06: Ethik und Normativität der erklärbaren KI","_id":"370"}],"_id":"62305","user_id":"93637","department":[{"_id":"26"},{"_id":"756"},{"_id":"660"}],"year":"2025","place":"New York","citation":{"chicago":"Reijers, Wessel, Tobias Matzner, and Suzana Alpsancar. “Explainability and AI Governance.” In <i>Digital Development. Technology, Ethics and Governance</i>, edited by Mirko  Farina, Xiao  Yu, and Jin Chen. New York: Routledge, 2025. <a href=\"https://doi.org/10.4324/9781003567622-22\">https://doi.org/10.4324/9781003567622-22</a>.","ieee":"W. Reijers, T. Matzner, and S. Alpsancar, “Explainability and AI Governance,” in <i>Digital Development. Technology, Ethics and Governance</i>, M. Farina, X. Yu, and J. Chen, Eds. New York: Routledge, 2025.","ama":"Reijers W, Matzner T, Alpsancar S. Explainability and AI Governance. In: Farina M, Yu X, Chen J, eds. <i>Digital Development. Technology, Ethics and Governance</i>. Routledge; 2025. doi:<a href=\"https://doi.org/10.4324/9781003567622-22\">10.4324/9781003567622-22</a>","apa":"Reijers, W., Matzner, T., &#38; Alpsancar, S. (2025). Explainability and AI Governance. In M. Farina, X. Yu, &#38; J. Chen (Eds.), <i>Digital Development. Technology, Ethics and Governance</i>. Routledge. <a href=\"https://doi.org/10.4324/9781003567622-22\">https://doi.org/10.4324/9781003567622-22</a>","short":"W. Reijers, T. Matzner, S. Alpsancar, in: M. Farina, X. Yu, J. Chen (Eds.), Digital Development. 
Technology, Ethics and Governance, Routledge, New York, 2025.","bibtex":"@inbook{Reijers_Matzner_Alpsancar_2025, place={New York}, title={Explainability and AI Governance}, DOI={<a href=\"https://doi.org/10.4324/9781003567622-22\">10.4324/9781003567622-22</a>}, booktitle={Digital Development. Technology, Ethics and Governance}, publisher={Routledge}, author={Reijers, Wessel and Matzner, Tobias and Alpsancar, Suzana}, editor={Farina, Mirko  and Yu, Xiao  and Chen, Jin}, year={2025} }","mla":"Reijers, Wessel, et al. “Explainability and AI Governance.” <i>Digital Development. Technology, Ethics and Governance</i>, edited by Mirko  Farina et al., Routledge, 2025, doi:<a href=\"https://doi.org/10.4324/9781003567622-22\">10.4324/9781003567622-22</a>."},"publication_status":"published","publication_identifier":{"isbn":["9781003567622"]},"title":"Explainability and AI Governance","doi":"10.4324/9781003567622-22","publisher":"Routledge","date_updated":"2025-11-25T21:25:31Z","author":[{"last_name":"Reijers","orcid":"0000-0003-2505-1587","id":"102524","full_name":"Reijers, Wessel","first_name":"Wessel"},{"first_name":"Tobias","id":"65695","full_name":"Matzner, Tobias","last_name":"Matzner"},{"first_name":"Suzana","last_name":"Alpsancar","full_name":"Alpsancar, Suzana","id":"93637"}],"date_created":"2025-11-25T17:58:04Z"},{"language":[{"iso":"eng"}],"project":[{"name":"TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren KI","_id":"370","grant_number":"438445824"}],"_id":"55869","user_id":"93637","department":[{"_id":"756"}],"editor":[{"first_name":"Rainer","last_name":"Adolphi","full_name":"Adolphi, Rainer"},{"last_name":"Alpsancar","full_name":"Alpsancar, Suzana","first_name":"Suzana"},{"first_name":"Susanne","last_name":"Hahn","full_name":"Hahn, Susanne"},{"first_name":"Matthias","last_name":"Kettner","full_name":"Kettner, Matthias"}],"status":"public","type":"book_chapter","publication":" Philosophische Digitalisierungsforschung  Verantwortung, 
Verständigung, Vernunft, Macht","title":"Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen","main_file_link":[{"open_access":"1","url":"https://www.transcript-verlag.de/978-3-8376-7497-2/philosophische-digitalisierungsforschung/?number=978-3-8394-7497-6"}],"publisher":"transcript","date_updated":"2024-08-28T18:51:44Z","oa":"1","author":[{"first_name":"Suzana","last_name":"Alpsancar","id":"93637","full_name":"Alpsancar, Suzana"}],"date_created":"2024-08-28T18:50:46Z","place":"Bielefeld","year":"2024","citation":{"apa":"Alpsancar, S. (2024). Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen. In R. Adolphi, S. Alpsancar, S. Hahn, &#38; M. Kettner (Eds.), <i> Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i> (pp. 55–113). transcript.","mla":"Alpsancar, Suzana. “Warum Und Wozu Erklärbare KI? Über Die Verschiedenheit Dreier Paradigmatischer Zwecksetzungen.” <i> Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>, edited by Rainer Adolphi et al., transcript, 2024, pp. 55–113.","short":"S. Alpsancar, in: R. Adolphi, S. Alpsancar, S. Hahn, M. Kettner (Eds.),  Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht, transcript, Bielefeld, 2024, pp. 55–113.","bibtex":"@inbook{Alpsancar_2024, place={Bielefeld}, title={Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen}, booktitle={ Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht}, publisher={transcript}, author={Alpsancar, Suzana}, editor={Adolphi, Rainer and Alpsancar, Suzana and Hahn, Susanne and Kettner, Matthias}, year={2024}, pages={55–113} }","ama":"Alpsancar S. Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen. In: Adolphi R, Alpsancar S, Hahn S, Kettner M, eds. 
<i> Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>. transcript; 2024:55-113.","chicago":"Alpsancar, Suzana. “Warum Und Wozu Erklärbare KI? Über Die Verschiedenheit Dreier Paradigmatischer Zwecksetzungen.” In <i> Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>, edited by Rainer Adolphi, Suzana Alpsancar, Susanne Hahn, and Matthias Kettner, 55–113. Bielefeld: transcript, 2024.","ieee":"S. Alpsancar, “Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen,” in <i> Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>, R. Adolphi, S. Alpsancar, S. Hahn, and M. Kettner, Eds. Bielefeld: transcript, 2024, pp. 55–113."},"page":"55-113","quality_controlled":"1"},{"status":"public","type":"conference_abstract","publication":"Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.","language":[{"iso":"eng"}],"project":[{"_id":"370","name":"TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren KI","grant_number":"438445824"}],"_id":"57172","user_id":"93637","department":[{"_id":"756"}],"year":"2024","place":"Longrono","citation":{"chicago":"Reijers, Wessel, Tobias Matzner, Suzana Alpsancar, and Martina Philippi. “AI Explainability, Temporality, and Civic Virtue.” In <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.</i> Longrono, 2024.","ieee":"W. Reijers, T. Matzner, S. Alpsancar, and M. Philippi, “AI explainability, temporality, and civic virtue,” 2024.","ama":"Reijers W, Matzner T, Alpsancar S, Philippi M. AI explainability, temporality, and civic virtue. In: <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 
21th International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.</i> ; 2024.","apa":"Reijers, W., Matzner, T., Alpsancar, S., &#38; Philippi, M. (2024). AI explainability, temporality, and civic virtue. <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.</i>","short":"W. Reijers, T. Matzner, S. Alpsancar, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024., Longrono, 2024.","mla":"Reijers, Wessel, et al. “AI Explainability, Temporality, and Civic Virtue.” <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.</i>, 2024.","bibtex":"@inproceedings{Reijers_Matzner_Alpsancar_Philippi_2024, place={Longrono}, title={AI explainability, temporality, and civic virtue}, booktitle={Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT. 
Universidad de La Rioja, 2024.}, author={Reijers, Wessel and Matzner, Tobias and Alpsancar, Suzana and Philippi, Martina}, year={2024} }"},"publication_status":"published","title":"AI explainability, temporality, and civic virtue","main_file_link":[{"open_access":"1","url":"https://dialnet.unirioja.es/descarga/articulo/9326093.pdf"}],"oa":"1","date_updated":"2024-12-17T11:44:41Z","author":[{"last_name":"Reijers","orcid":"0000-0003-2505-1587","id":"102524","full_name":"Reijers, Wessel","first_name":"Wessel"},{"first_name":"Tobias","last_name":"Matzner","full_name":"Matzner, Tobias","id":"65695"},{"full_name":"Alpsancar, Suzana","id":"93637","last_name":"Alpsancar","first_name":"Suzana"},{"last_name":"Philippi","full_name":"Philippi, Martina","id":"100856","first_name":"Martina"}],"date_created":"2024-11-18T10:06:46Z"},{"language":[{"iso":"eng"}],"project":[{"_id":"370","name":"TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren KI","grant_number":"438445824"}],"_id":"56217","user_id":"93637","department":[{"_id":"756"}],"status":"public","type":"conference_abstract","publication":"Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT","title":"Unpacking the purposes of explainable AI","main_file_link":[{"url":"https://dialnet.unirioja.es/descarga/articulo/9326091.pdf","open_access":"1"}],"date_updated":"2024-12-17T11:46:27Z","publisher":"Universidad de La Rioja","oa":"1","author":[{"first_name":"Suzana","full_name":"Alpsancar, Suzana","id":"93637","last_name":"Alpsancar"},{"first_name":"Tobias","last_name":"Matzner","full_name":"Matzner, Tobias"},{"full_name":"Philippi, Martina","last_name":"Philippi","first_name":"Martina"}],"date_created":"2024-09-23T19:17:41Z","year":"2024","citation":{"ieee":"S. Alpsancar, T. Matzner, and M. 
Philippi, “Unpacking the purposes of explainable AI,” in <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT</i>, 2024, pp. 31–35.","chicago":"Alpsancar, Suzana, Tobias Matzner, and Martina Philippi. “Unpacking the Purposes of Explainable AI.” In <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT</i>, 31–35. Universidad de La Rioja, 2024.","mla":"Alpsancar, Suzana, et al. “Unpacking the Purposes of Explainable AI.” <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT</i>, Universidad de La Rioja, 2024, pp. 31–35.","bibtex":"@inproceedings{Alpsancar_Matzner_Philippi_2024, title={Unpacking the purposes of explainable AI}, booktitle={Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT}, publisher={Universidad de La Rioja}, author={Alpsancar, Suzana and Matzner, Tobias and Philippi, Martina}, year={2024}, pages={31–35} }","short":"S. Alpsancar, T. Matzner, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT, Universidad de La Rioja, 2024, pp. 31–35.","apa":"Alpsancar, S., Matzner, T., &#38; Philippi, M. (2024). Unpacking the purposes of explainable AI. <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT</i>, 31–35.","ama":"Alpsancar S, Matzner T, Philippi M. Unpacking the purposes of explainable AI. In: <i>Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21th International Conference on the Ethical and Social Impacts of ICT</i>. 
Universidad de La Rioja; 2024:31-35."},"page":"31-35"},{"date_created":"2024-12-13T09:26:16Z","publisher":"Nomos","date_updated":"2024-12-17T11:43:20Z","title":"Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024","publication_status":"inpress","citation":{"mla":"Alpsancar, Suzana, et al., editors. <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>. Nomos.","bibtex":"@book{Alpsancar_Friedrich_Gehring_Kaminski_Nordmann, place={Baden Baden}, title={Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024}, publisher={Nomos} }","short":"S. Alpsancar, A. Friedrich, P. Gehring, A. Kaminski, A. Nordmann, eds., Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024, Nomos, Baden Baden, n.d.","apa":"Alpsancar, S., Friedrich, A., Gehring, P., Kaminski, A., &#38; Nordmann, A. (Eds.). (n.d.). <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>. Nomos.","ieee":"S. Alpsancar, A. Friedrich, P. Gehring, A. Kaminski, and A. Nordmann, Eds., <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>. Baden Baden: Nomos.","chicago":"Alpsancar, Suzana, Alexander Friedrich, Petra Gehring, Andreas Kaminski, and Alfred Nordmann, eds. <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>. Baden Baden: Nomos, n.d.","ama":"Alpsancar S, Friedrich A, Gehring P, Kaminski A, Nordmann A, eds. <i>Der Sog des Neuen (und der Schock des Alten). Jahrbuch Technikphilosophie 2024</i>. 
Nomos"},"year":"2024","place":"Baden Baden","department":[{"_id":"756"}],"user_id":"93637","_id":"57762","language":[{"iso":"ger"},{"iso":"eng"}],"type":"book_editor","status":"public","editor":[{"last_name":"Alpsancar","full_name":"Alpsancar, Suzana","first_name":"Suzana"},{"last_name":"Friedrich","full_name":"Friedrich, Alexander","first_name":"Alexander"},{"first_name":"Petra","last_name":"Gehring","full_name":"Gehring, Petra"},{"first_name":"Andreas","last_name":"Kaminski","full_name":"Kaminski, Andreas"},{"first_name":"Alfred","last_name":"Nordmann","full_name":"Nordmann, Alfred"}]},{"publication_status":"published","publication_identifier":{"isbn":["978-3-8376-7497-2"]},"place":"Bielefeld","year":"2024","citation":{"short":"R. Adolphi, S. Hahn, M. Kettner, eds., Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht, transcript, Bielefeld, 2024.","bibtex":"@book{Adolphi_Hahn_Kettner_2024, place={Bielefeld}, title={Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht}, publisher={transcript}, year={2024} }","mla":"Adolphi, Rainer, et al., editors. <i>Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>. transcript, 2024.","apa":"Adolphi, R., Hahn, S., &#38; Kettner, M. (Eds.). (2024). <i>Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>. transcript.","ama":"Adolphi R, Hahn S, Kettner M, eds. <i>Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>. transcript; 2024.","chicago":"Adolphi, Rainer, Susanne Hahn, and Matthias Kettner, eds. <i>Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>. Bielefeld: transcript, 2024.","ieee":"R. Adolphi, S. Hahn, and M. Kettner, Eds., <i>Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht</i>. 
Bielefeld: transcript, 2024."},"page":"464","date_updated":"2025-07-02T07:39:27Z","oa":"1","publisher":"transcript","date_created":"2024-08-28T18:47:16Z","title":"Philosophische Digitalisierungsforschung  Verantwortung, Verständigung, Vernunft, Macht","main_file_link":[{"open_access":"1","url":"https://www.transcript-verlag.de/978-3-8376-7497-2/philosophische-digitalisierungsforschung/?number=978-3-8394-7497-6"}],"type":"book_editor","editor":[{"first_name":"Rainer","full_name":"Adolphi, Rainer","last_name":"Adolphi"},{"last_name":"Hahn","full_name":"Hahn, Susanne","first_name":"Susanne"},{"last_name":"Kettner","full_name":"Kettner, Matthias","first_name":"Matthias"}],"status":"public","_id":"55868","user_id":"93637","department":[{"_id":"756"},{"_id":"26"}],"language":[{"iso":"ger"}]},{"publication_identifier":{"isbn":["978-1-5292-3832-7"]},"page":"52–79","citation":{"ama":"Fahimi M, Falk P, Gray JWY, et al. In/visibilities in Data Studies: Methods, Tools, and Interventions. In: <i>Dialogues in Data Power</i>. Bristol University Press; 2024:52–79.","chicago":"Fahimi, Miriam, Petter Falk, Jonathan W. Y. Gray, Juliane Jarke, Katharina Kinder-Kurlanda, Evan Light, Ellouise McGeachey, et al. “In/Visibilities in Data Studies: Methods, Tools, and Interventions.” In <i>Dialogues in Data Power</i>, 52–79. Bristol University Press, 2024.","ieee":"M. Fahimi <i>et al.</i>, “In/visibilities in Data Studies: Methods, Tools, and Interventions,” in <i>Dialogues in Data Power</i>, Bristol University Press, 2024, pp. 52–79.","apa":"Fahimi, M., Falk, P., Gray, J. W. Y., Jarke, J., Kinder-Kurlanda, K., Light, E., McGeachey, E., Perea, I. M., Poechhacker, N., Poirier, L., Röhle, T., Sharon, T., Stevens, M., Gastel, B. van, White, Q., &#38; Zakharova, I. (2024). In/visibilities in Data Studies: Methods, Tools, and Interventions. In <i>Dialogues in Data Power</i> (pp. 52–79). Bristol University Press.","short":"M. Fahimi, P. Falk, J.W.Y. Gray, J. Jarke, K. Kinder-Kurlanda, E. Light, E. 
McGeachey, I.M. Perea, N. Poechhacker, L. Poirier, T. Röhle, T. Sharon, M. Stevens, B. van Gastel, Q. White, I. Zakharova, in: Dialogues in Data Power, Bristol University Press, 2024, pp. 52–79.","mla":"Fahimi, Miriam, et al. “In/Visibilities in Data Studies: Methods, Tools, and Interventions.” <i>Dialogues in Data Power</i>, Bristol University Press, 2024, pp. 52–79.","bibtex":"@inbook{Fahimi_Falk_Gray_Jarke_Kinder-Kurlanda_Light_McGeachey_Perea_Poechhacker_Poirier_et al._2024, title={In/visibilities in Data Studies: Methods, Tools, and Interventions}, booktitle={Dialogues in Data Power}, publisher={Bristol University Press}, author={Fahimi, Miriam and Falk, Petter and Gray, Jonathan W. Y. and Jarke, Juliane and Kinder-Kurlanda, Katharina and Light, Evan and McGeachey, Ellouise and Perea, Itzelle Medina and Poechhacker, Nikolaus and Poirier, Lindsay and et al.}, year={2024}, pages={52–79} }"},"year":"2024","author":[{"first_name":"Miriam","id":"118059","full_name":"Fahimi, Miriam","last_name":"Fahimi","orcid":"0000-0002-0619-3160"},{"first_name":"Petter","last_name":"Falk","full_name":"Falk, Petter"},{"last_name":"Gray","full_name":"Gray, Jonathan W. Y.","first_name":"Jonathan W. Y."},{"first_name":"Juliane","last_name":"Jarke","full_name":"Jarke, Juliane"},{"full_name":"Kinder-Kurlanda, Katharina","last_name":"Kinder-Kurlanda","first_name":"Katharina"},{"last_name":"Light","full_name":"Light, Evan","first_name":"Evan"},{"last_name":"McGeachey","full_name":"McGeachey, Ellouise","first_name":"Ellouise"},{"first_name":"Itzelle Medina","full_name":"Perea, Itzelle Medina","last_name":"Perea"},{"first_name":"Nikolaus","full_name":"Poechhacker, Nikolaus","last_name":"Poechhacker"},{"last_name":"Poirier","full_name":"Poirier, Lindsay","first_name":"Lindsay"},{"full_name":"Röhle, Theo","last_name":"Röhle","first_name":"Theo"},{"last_name":"Sharon","full_name":"Sharon, Tamar","first_name":"Tamar"},{"first_name":"Marthe","last_name":"Stevens","full_name":"Stevens, Marthe"},{"first_name":"Bernard van","last_name":"Gastel","full_name":"Gastel, Bernard van"},{"first_name":"Quinn","last_name":"White","full_name":"White, Quinn"},{"full_name":"Zakharova, Irina","last_name":"Zakharova","first_name":"Irina"}],"date_created":"2025-11-18T09:58:30Z","date_updated":"2025-11-18T10:02:15Z","publisher":"Bristol University Press","title":"In/visibilities in Data Studies: Methods, Tools, and Interventions","publication":"Dialogues in Data Power","type":"book_chapter","status":"public","abstract":[{"text":"This chapter highlights the intricate nature of data and their profound social implications. It examines the acts of rendering data visible and the inherent power dynamics and imbalances that accompany such processes. Our dialogue unfolds in three interconnected parts, each focusing on the intersection of in/visibility and power. Part 1 attends to the challenges of producing knowledge about and with data, emphasizing the relativity, fluidity, and instability inherent in data. It explores frameworks that uncover the often invisible infrastructures of algorithms, rendering visible the actors, technologies, and divergent values involved in data manipulation. Part 2 presents empirical case studies that analyse the consequences of data visibility while contemplating the methodological opportunities and challenges of foregrounding the embedded values and norms within data. Part 3 discusses tool-based interventions aimed at bringing alternative data framings and narratives to the fore. It examines the complexities of tracing data across various contexts and the value, utility, and obstacles associated with creating visual representations of data and their flows. By critically engaging with the complexities of data in/visibility, this chapter challenges existing gatekeepers and fosters a deeper understanding of the multifaceted nature of data and its socio-political ramifications.","lang":"eng"}],"department":[{"_id":"756"},{"_id":"26"}],"user_id":"118059","_id":"62228","language":[{"iso":"eng"}]},{"title":"Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions","date_updated":"2025-11-18T10:02:25Z","publisher":"Amsterdam University Press","date_created":"2025-11-18T10:00:38Z","author":[{"full_name":"Kinder-Kurlanda, Katharina","last_name":"Kinder-Kurlanda","first_name":"Katharina"},{"orcid":"0000-0002-0619-3160","last_name":"Fahimi","full_name":"Fahimi, Miriam","id":"118059","first_name":"Miriam"}],"year":"2024","place":"Amsterdam","page":"309–330","citation":{"apa":"Kinder-Kurlanda, K., &#38; Fahimi, M. (2024). Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions. In J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, &#38; M. Arnold (Eds.), <i>Algorithmic Regimes. Methods, Interactions, and Politics.</i> (pp. 309–330). Amsterdam University Press.","mla":"Kinder-Kurlanda, Katharina, and Miriam Fahimi. “Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions.” <i>Algorithmic Regimes. Methods, Interactions, and Politics.</i>, edited by Juliane Jarke et al., Amsterdam University Press, 2024, pp. 309–330.","short":"K. Kinder-Kurlanda, M. Fahimi, in: J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, M. Arnold (Eds.), Algorithmic Regimes. Methods, Interactions, and Politics., Amsterdam University Press, Amsterdam, 2024, pp. 309–330.","bibtex":"@inbook{Kinder-Kurlanda_Fahimi_2024, place={Amsterdam}, title={Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions}, booktitle={Algorithmic Regimes. Methods, Interactions, and Politics.}, publisher={Amsterdam University Press}, author={Kinder-Kurlanda, Katharina and Fahimi, Miriam}, editor={Jarke, Juliane and Prietl, Bianca and Egbert, Simon and Boeva, Yana and Heuer, Hendrik and Arnold, Maike}, year={2024}, pages={309–330} }","ama":"Kinder-Kurlanda K, Fahimi M. Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions. In: Jarke J, Prietl B, Egbert S, Boeva Y, Heuer H, Arnold M, eds. <i>Algorithmic Regimes. Methods, Interactions, and Politics.</i> Amsterdam University Press; 2024:309–330.","chicago":"Kinder-Kurlanda, Katharina, and Miriam Fahimi. “Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions.” In <i>Algorithmic Regimes. Methods, Interactions, and Politics.</i>, edited by Juliane Jarke, Bianca Prietl, Simon Egbert, Yana Boeva, Hendrik Heuer, and Maike Arnold, 309–330. Amsterdam: Amsterdam University Press, 2024.","ieee":"K. Kinder-Kurlanda and M. Fahimi, “Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions,” in <i>Algorithmic Regimes. Methods, Interactions, and Politics.</i>, J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, and M. Arnold, Eds. Amsterdam: Amsterdam University Press, 2024, pp. 309–330."},"publication_identifier":{"isbn":["978-94-6372-848-5"]},"language":[{"iso":"eng"}],"_id":"62230","department":[{"_id":"756"},{"_id":"26"}],"user_id":"118059","abstract":[{"lang":"eng","text":"Algorithms have risen to become one, if not the central technology for producing, circulating, and evaluating knowledge in multiple societal arenas. In this book, scholars from the social sciences, humanities, and computer science argue that this shift has, and will continue to have, profound implications for how knowledge is produced and what and whose knowledge is valued and deemed valid. To attend to this fundamental change, the authors propose the concept of algorithmic regimes and demonstrate how they transform the epistemological, methodological, and political foundations of knowledge production, sensemaking, and decision-making in contemporary societies. Across sixteen chapters, the volume offers a diverse collection of contributions along three perspectives on algorithmic regimes: the methods necessary to research and design algorithmic regimes, the ways in which algorithmic regimes reconfigure sociotechnical interactions, and the politics engrained in algorithmic regimes."}],"editor":[{"first_name":"Juliane","full_name":"Jarke, Juliane","last_name":"Jarke"},{"first_name":"Bianca","last_name":"Prietl","full_name":"Prietl, Bianca"},{"first_name":"Simon","last_name":"Egbert","full_name":"Egbert, Simon"},{"full_name":"Boeva, Yana","last_name":"Boeva","first_name":"Yana"},{"first_name":"Hendrik","full_name":"Heuer, Hendrik","last_name":"Heuer"},{"full_name":"Arnold, Maike","last_name":"Arnold","first_name":"Maike"}],"status":"public","publication":"Algorithmic Regimes. Methods, Interactions, and Politics.","type":"book_chapter"}]
