{"date_updated":"2026-03-19T11:52:00Z","oa":"1","publisher":"Springer Nature Singapore","date_created":"2026-03-19T11:05:30Z","author":[{"first_name":"Suzana","id":"93637","full_name":"Alpsancar, Suzana","last_name":"Alpsancar"},{"first_name":"Michael","full_name":"Klenk, Michael","last_name":"Klenk"}],"title":"The Risk of Manipulation and Deception in sXAI","main_file_link":[{"open_access":"1","url":" https://doi.org/10.1007/978-981-96-5290-7_30"}],"doi":"10.1007/978-981-96-5290-7_30","publication_status":"published","publication_identifier":{"isbn":["9789819652891","9789819652907"]},"place":"Singapore","year":"2026","citation":{"bibtex":"@inbook{Alpsancar_Klenk_2026, place={Singapore}, title={The Risk of Manipulation and Deception in sXAI}, DOI={10.1007/978-981-96-5290-7_30}, booktitle={Social Explainable AI}, publisher={Springer Nature Singapore}, author={Alpsancar, Suzana and Klenk, Michael}, year={2026}, pages={583–616} }","short":"S. Alpsancar, M. Klenk, in: Social Explainable AI, Springer Nature Singapore, Singapore, 2026, pp. 583–616.","mla":"Alpsancar, Suzana, and Michael Klenk. “The Risk of Manipulation and Deception in SXAI.” Social Explainable AI, Springer Nature Singapore, 2026, pp. 583–616, doi:10.1007/978-981-96-5290-7_30.","apa":"Alpsancar, S., & Klenk, M. (2026). The Risk of Manipulation and Deception in sXAI. In Social Explainable AI (pp. 583–616). Springer Nature Singapore. https://doi.org/10.1007/978-981-96-5290-7_30","ieee":"S. Alpsancar and M. Klenk, “The Risk of Manipulation and Deception in sXAI,” in Social Explainable AI, Singapore: Springer Nature Singapore, 2026, pp. 583–616.","chicago":"Alpsancar, Suzana, and Michael Klenk. “The Risk of Manipulation and Deception in SXAI.” In Social Explainable AI, 583–616. Singapore: Springer Nature Singapore, 2026. https://doi.org/10.1007/978-981-96-5290-7_30.","ama":"Alpsancar S, Klenk M. The Risk of Manipulation and Deception in sXAI. In: Social Explainable AI. Springer Nature Singapore; 2026:583-616. doi:10.1007/978-981-96-5290-7_30"},"page":"583-616","project":[{"_id":"109","name":"TRR 318: Erklärbarkeit konstruieren"},{"_id":"370","name":"TRR 318; TP B06: Ethik und Normativität der erklärbaren KI"}],"_id":"65064","user_id":"93637","department":[{"_id":"26"},{"_id":"756"}],"language":[{"iso":"eng"}],"type":"book_chapter","publication":"Social Explainable AI","abstract":[{"lang":"eng","text":"Abstract\r\n XAI can minimize the risks of being manipulated and deceived by AI but in turn entails other specific risks. This also applies to sXAI, and the specifically social character of sXAI harbors particular risks that designers and developers should be aware of. In this chapter, we shall discuss the potential opportunities and risks of sXAI. We see a particularly positive potential in the social character of sXAI, which lies in the fact that skillful users, including those with “healthy distrust,” can use the adaptivity of sXAI to produce an explanation that is actually relevant and adequate for them. However, this requires a high level of skills on the part of the user and is thus in contrast to the general promise of efficiency in the use of AI. A potential risk of XAI is that it can be (even more) persuasive, as the interactive involvement and the anthropomorphism strengthen a trustworthy appearance/performance (independent of the adequacy of the sXAI performance)."}],"status":"public"}