---
_id: '55394'
abstract:
- lang: eng
  text: Nowadays, we deal with robots and AI more and more in our everyday lives.
    However, their behavior is not always apparent to most lay users, especially
    in error situations. This can lead to misconceptions about the behavior of the
    technologies in use, which in turn can lead to misuse and rejection by users.
    Explanation, for example through transparency, can address these misconceptions.
    However, explaining the entire software or hardware would be confusing and overwhelming
    for users. Therefore, this paper focuses on the ‘enabling’ architecture, i.e.
    those aspects of a robotic system that may need to be explained to enable someone
    to use the technology effectively. Furthermore, this paper deals with the ‘explanandum’,
    i.e. the corresponding misunderstandings or missing concepts of the enabling architecture
    that need to be clarified. Thus, we have developed and present an approach to
    determine the ‘enabling’ architecture and the resulting ‘explanandum’ of complex
    technologies.
article_type: original
author:
- first_name: Helen
  full_name: Beierling, Helen
  last_name: Beierling
- first_name: Phillip
  full_name: Richter, Phillip
  last_name: Richter
- first_name: Mara
  full_name: Brandt, Mara
  last_name: Brandt
- first_name: Lutz
  full_name: Terfloth, Lutz
  last_name: Terfloth
- first_name: Carsten
  full_name: Schulte, Carsten
  last_name: Schulte
- first_name: Heiko
  full_name: Wersing, Heiko
  last_name: Wersing
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
citation:
  ama: 'Beierling H, Richter P, Brandt M, et al. What you need to know about a learning
    robot: Identifying the enabling architecture of complex systems. <i>Cognitive
    Systems Research</i>. 2024;88.'
  apa: 'Beierling, H., Richter, P., Brandt, M., Terfloth, L., Schulte, C., Wersing,
    H., &#38; Vollmer, A.-L. (2024). What you need to know about a learning robot:
    Identifying the enabling architecture of complex systems. <i>Cognitive Systems
    Research</i>, <i>88</i>.'
  bibtex: '@article{Beierling_Richter_Brandt_Terfloth_Schulte_Wersing_Vollmer_2024,
    title={What you need to know about a learning robot: Identifying the enabling
    architecture of complex systems}, volume={88}, journal={Cognitive Systems Research},
    publisher={Elsevier}, author={Beierling, Helen and Richter, Phillip and Brandt,
    Mara and Terfloth, Lutz and Schulte, Carsten and Wersing, Heiko and Vollmer, Anna-Lisa},
    year={2024} }'
  chicago: 'Beierling, Helen, Phillip Richter, Mara Brandt, Lutz Terfloth, Carsten
    Schulte, Heiko Wersing, and Anna-Lisa Vollmer. “What You Need to Know about a
    Learning Robot: Identifying the Enabling Architecture of Complex Systems.” <i>Cognitive
    Systems Research</i> 88 (2024).'
  ieee: 'H. Beierling <i>et al.</i>, “What you need to know about a learning robot:
    Identifying the enabling architecture of complex systems,” <i>Cognitive Systems
    Research</i>, vol. 88, 2024.'
  mla: 'Beierling, Helen, et al. “What You Need to Know about a Learning Robot: Identifying
    the Enabling Architecture of Complex Systems.” <i>Cognitive Systems Research</i>,
    vol. 88, Elsevier, 2024.'
  short: H. Beierling, P. Richter, M. Brandt, L. Terfloth, C. Schulte, H. Wersing,
    A.-L. Vollmer, Cognitive Systems Research 88 (2024).
date_created: 2024-07-26T08:01:23Z
date_updated: 2025-09-17T13:32:31Z
ddc:
- '006'
file:
- access_level: closed
  content_type: application/pdf
  creator: helebeen
  date_created: 2025-09-17T13:31:11Z
  date_updated: 2025-09-17T13:31:11Z
  file_id: '61330'
  file_name: mentaleModelle.pdf
  file_size: 1577897
  relation: main_file
  success: 1
file_date_updated: 2025-09-17T13:31:11Z
funded_apc: '1'
has_accepted_license: '1'
intvolume: '88'
keyword:
- Robotics
- HRI
- Explainability
- Didactics
- Didactic reconstruction
language:
- iso: eng
main_file_link:
- open_access: '1'
oa: '1'
project:
- _id: '123'
  name: 'TRR 318 - B5: TRR 318 - Subproject B5'
publication: Cognitive Systems Research
publication_status: published
publisher: Elsevier
status: public
title: 'What you need to know about a learning robot: Identifying the enabling architecture
  of complex systems'
type: journal_article
user_id: '50995'
volume: 88
year: '2024'
...
---
_id: '44672'
abstract:
- lang: eng
  text: With enhancing digitalization, condition monitoring is used in an increasing
    number of application fields across various industrial sectors. By its application,
    increased reliability as well as reduced risks and costs can be achieved. Based
    on different approaches, technical systems are monitored and measured data is
    analyzed to enable condition-based or predictive maintenance. To this end, machine
    learning approaches are usually implemented to diagnose the health states or predict
    the health index of the monitored system. However, these trained models are often
    black-box models, not intuitively explainable for a human. To overcome this shortcoming,
    a model-based approach based on physics is developed for piezoelectric bending
    actuators. Such a model enables a transparent representation of the system. Moreover,
    the model-based approach is extended by a parameter estimation to account for
    sudden changes in behavior, e.g. caused by emerging cracks.
article_number: '114399'
article_type: original
author:
- first_name: Amelie
  full_name: Bender, Amelie
  id: '54290'
  last_name: Bender
citation:
  ama: 'Bender A. Model-based condition monitoring of piezoelectric bending actuators.
    <i>Sensors and Actuators A: Physical</i>. 2023;357. doi:<a href="https://doi.org/10.1016/j.sna.2023.114399">10.1016/j.sna.2023.114399</a>'
  apa: 'Bender, A. (2023). Model-based condition monitoring of piezoelectric bending
    actuators. <i>Sensors and Actuators A: Physical</i>, <i>357</i>, Article 114399.
    <a href="https://doi.org/10.1016/j.sna.2023.114399">https://doi.org/10.1016/j.sna.2023.114399</a>'
  bibtex: '@article{Bender_2023, title={Model-based condition monitoring of piezoelectric
    bending actuators}, volume={357}, DOI={<a href="https://doi.org/10.1016/j.sna.2023.114399">10.1016/j.sna.2023.114399</a>},
    number={114399}, journal={Sensors and Actuators A: Physical}, publisher={Elsevier
    BV}, author={Bender, Amelie}, year={2023} }'
  chicago: 'Bender, Amelie. “Model-Based Condition Monitoring of Piezoelectric Bending
    Actuators.” <i>Sensors and Actuators A: Physical</i> 357 (2023). <a href="https://doi.org/10.1016/j.sna.2023.114399">https://doi.org/10.1016/j.sna.2023.114399</a>.'
  ieee: 'A. Bender, “Model-based condition monitoring of piezoelectric bending actuators,”
    <i>Sensors and Actuators A: Physical</i>, vol. 357, Art. no. 114399, 2023, doi:
    <a href="https://doi.org/10.1016/j.sna.2023.114399">10.1016/j.sna.2023.114399</a>.'
  mla: 'Bender, Amelie. “Model-Based Condition Monitoring of Piezoelectric Bending
    Actuators.” <i>Sensors and Actuators A: Physical</i>, vol. 357, 114399, Elsevier
    BV, 2023, doi:<a href="https://doi.org/10.1016/j.sna.2023.114399">10.1016/j.sna.2023.114399</a>.'
  short: 'A. Bender, Sensors and Actuators A: Physical 357 (2023).'
date_created: 2023-05-09T09:49:44Z
date_updated: 2023-05-09T09:53:31Z
department:
- _id: '151'
doi: 10.1016/j.sna.2023.114399
intvolume: '357'
keyword:
- Condition Monitoring
- Model-based approach
- Diagnostics
- Varying conditions
- Explainability
- Piezoelectric bending actuators
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://authors.elsevier.com/a/1h2WV3IC9dF7Hm
oa: '1'
publication: 'Sensors and Actuators A: Physical'
publication_identifier:
  issn:
  - 0924-4247
publication_status: published
publisher: Elsevier BV
quality_controlled: '1'
status: public
title: Model-based condition monitoring of piezoelectric bending actuators
type: journal_article
user_id: '54290'
volume: 357
year: '2023'
...
---
_id: '24456'
abstract:
- lang: eng
  text: One objective of current research in explainable intelligent systems is to
    implement social aspects in order to increase the relevance of explanations. In
    this paper, we argue that a novel conceptual framework is needed to overcome shortcomings
    of existing AI systems, which pay little attention to processes of interaction
    and learning.
    Drawing from research in interaction and development, we first outline the novel
    conceptual framework that pushes the design of AI systems toward true interactivity
    with an emphasis on the role of the partner and social relevance. We propose that
    AI systems will be able to provide a meaningful and relevant explanation only
    if the process of explaining is extended to active contribution of both partners
    that brings about dynamics that is modulated by different levels of analysis.
    Accordingly, our conceptual framework comprises monitoring and scaffolding as
    key concepts and claims that the process of explaining is not only modulated by
    the interaction between explainee and explainer but is embedded into a larger
    social context in which conventionalized and routinized behaviors are established.
    We discuss our conceptual framework in relation to the established objectives
    of transparency and autonomy that are currently raised for the design of explainable
    AI systems.
article_type: original
author:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  last_name: Buschmeier
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Barbara
  full_name: Hammer, Barbara
  last_name: Hammer
- first_name: Reinhold
  full_name: Haeb-Umbach, Reinhold
  id: '242'
  last_name: Haeb-Umbach
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Eyke
  full_name: Hüllermeier, Eyke
  id: '48129'
  last_name: Hüllermeier
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
- first_name: Kirsten
  full_name: Thommes, Kirsten
  id: '72497'
  last_name: Thommes
- first_name: Axel-Cyrille
  full_name: Ngonga Ngomo, Axel-Cyrille
  id: '65716'
  last_name: Ngonga Ngomo
- first_name: Carsten
  full_name: Schulte, Carsten
  id: '60311'
  last_name: Schulte
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
- first_name: Petra
  full_name: Wagner, Petra
  last_name: Wagner
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: 'Rohlfing KJ, Cimiano P, Scharlau I, et al. Explanation as a Social Practice:
    Toward a Conceptual Framework for the Social Design of AI Systems. <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>. 2021;13(3):717-728. doi:<a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>'
  apa: 'Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier,
    H., Esposito, E., Grimminger, A., Hammer, B., Haeb-Umbach, R., Horwath, I., Hüllermeier,
    E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth,
    H., Wagner, P., &#38; Wrede, B. (2021). Explanation as a Social Practice: Toward
    a Conceptual Framework for the Social Design of AI Systems. <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>, <i>13</i>(3), 717–728. <a href="https://doi.org/10.1109/tcds.2020.3044366">https://doi.org/10.1109/tcds.2020.3044366</a>'
  bibtex: '@article{Rohlfing_Cimiano_Scharlau_Matzner_Buhl_Buschmeier_Esposito_Grimminger_Hammer_Haeb-Umbach_et
    al._2021, title={Explanation as a Social Practice: Toward a Conceptual Framework
    for the Social Design of AI Systems}, volume={13}, DOI={<a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>},
    number={3}, journal={IEEE Transactions on Cognitive and Developmental Systems},
    author={Rohlfing, Katharina J. and Cimiano, Philipp and Scharlau, Ingrid and Matzner,
    Tobias and Buhl, Heike M. and Buschmeier, Hendrik and Esposito, Elena and Grimminger,
    Angela and Hammer, Barbara and Haeb-Umbach, Reinhold and et al.}, year={2021},
    pages={717–728} }'
  chicago: 'Rohlfing, Katharina J., Philipp Cimiano, Ingrid Scharlau, Tobias Matzner,
    Heike M. Buhl, Hendrik Buschmeier, Elena Esposito, et al. “Explanation as a Social
    Practice: Toward a Conceptual Framework for the Social Design of AI Systems.”
    <i>IEEE Transactions on Cognitive and Developmental Systems</i> 13, no. 3 (2021):
    717–28. <a href="https://doi.org/10.1109/tcds.2020.3044366">https://doi.org/10.1109/tcds.2020.3044366</a>.'
  ieee: 'K. J. Rohlfing <i>et al.</i>, “Explanation as a Social Practice: Toward a
    Conceptual Framework for the Social Design of AI Systems,” <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>, vol. 13, no. 3, pp. 717–728, 2021,
    doi: <a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>.'
  mla: 'Rohlfing, Katharina J., et al. “Explanation as a Social Practice: Toward a
    Conceptual Framework for the Social Design of AI Systems.” <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>, vol. 13, no. 3, 2021, pp. 717–28,
    doi:<a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>.'
  short: K.J. Rohlfing, P. Cimiano, I. Scharlau, T. Matzner, H.M. Buhl, H. Buschmeier,
    E. Esposito, A. Grimminger, B. Hammer, R. Haeb-Umbach, I. Horwath, E. Hüllermeier,
    F. Kern, S. Kopp, K. Thommes, A.-C. Ngonga Ngomo, C. Schulte, H. Wachsmuth, P.
    Wagner, B. Wrede, IEEE Transactions on Cognitive and Developmental Systems 13
    (2021) 717–728.
date_created: 2021-09-14T20:52:57Z
date_updated: 2023-12-05T10:15:02Z
ddc:
- '300'
department:
- _id: '603'
- _id: '749'
- _id: '424'
- _id: '67'
- _id: '574'
- _id: '184'
- _id: '757'
- _id: '54'
- _id: '178'
doi: 10.1109/tcds.2020.3044366
file:
- access_level: open_access
  content_type: application/pdf
  creator: haebumb
  date_created: 2023-11-20T16:33:51Z
  date_updated: 2023-11-20T16:33:51Z
  file_id: '49081'
  file_name: 2020-12-01_explainability_final_version.pdf
  file_size: 626217
  relation: main_file
file_date_updated: 2023-11-20T16:33:51Z
has_accepted_license: '1'
intvolume: '13'
issue: '3'
keyword:
- Explainability
- process of explaining and understanding
- explainable artificial systems
language:
- iso: eng
oa: '1'
page: 717-728
project:
- _id: '109'
  grant_number: '438445824'
  name: 'TRR 318: TRR 318 - Erklärbarkeit konstruieren'
publication: IEEE Transactions on Cognitive and Developmental Systems
publication_identifier:
  issn:
  - 2379-8920
  - 2379-8939
publication_status: published
quality_controlled: '1'
status: public
title: 'Explanation as a Social Practice: Toward a Conceptual Framework for the Social
  Design of AI Systems'
type: journal_article
user_id: '42933'
volume: 13
year: '2021'
...
