TY - CONF
AB - Many applications require explainable node classification in knowledge graphs. Towards this end, a popular "white-box" approach is class expression learning: given sets of positive and negative nodes, class expressions in description logics are learned that separate the positive from the negative nodes. Most existing approaches are search-based: they generate many candidate class expressions and select the best one. However, they often take a long time to find suitable class expressions. In this paper, we cast class expression learning as a translation problem and propose a new family of class expression learning approaches, which we dub neural class expression synthesizers. Training examples are "translated" into class expressions in a fashion akin to machine translation. Consequently, our synthesizers are not subject to the runtime limitations of search-based approaches. We study three instances of this novel family of approaches, based on LSTMs, GRUs, and set transformers, respectively. An evaluation on four benchmark datasets suggests that our approach can effectively synthesize high-quality class expressions with respect to the input examples in approximately one second on average. Moreover, a comparison to state-of-the-art approaches suggests that we achieve better F-measures on large datasets.
For reproducibility purposes, we provide our implementation as well as pretrained models in our public GitHub repository at https://github.com/dice-group/NeuralClassExpressionSynthesis
AU - Kouagou, N'Dah Jean
AU - Heindorf, Stefan
AU - Demir, Caglar
AU - Ngonga Ngomo, Axel-Cyrille
ED - Pesquita, Catia
ED - Jimenez-Ruiz, Ernesto
ED - McCusker, Jamie
ED - Faria, Daniel
ED - Dragoni, Mauro
ED - Dimou, Anastasia
ED - Troncy, Raphael
ED - Hertling, Sven
ID - 33734
KW - Neural network
KW - Concept learning
KW - Description logics
T2 - The Semantic Web - 20th Extended Semantic Web Conference (ESWC 2023)
TI - Neural Class Expression Synthesis
VL - 13870
ER -
TY - GEN
AB - Knowledge bases are widely used for information management on the web, enabling high-impact applications such as web search, question answering, and natural language processing. They also serve as the backbone of automatic decision systems, e.g., for medical diagnostics and credit scoring. As stakeholders affected by these decisions would like to understand their situation and verify that decisions are fair, a number of explanation approaches have been proposed using concepts in description logics. However, the learned concepts can become long and difficult to fathom for non-experts, even when verbalized. Moreover, long concepts do not immediately provide a clear path of action to change one's situation. Counterfactuals answering the question "How must feature values be changed to obtain a different classification?" have been proposed as short, human-friendly explanations for tabular data. In this paper, we transfer the notion of counterfactuals to description logics and propose the first algorithm for generating counterfactual explanations in the description logic $\mathcal{ELH}$. Counterfactual candidates are generated from concepts, and the candidates with the fewest feature changes are selected as counterfactuals.
In the case of multiple counterfactuals, we rank them according to the likeliness of their feature combinations. For evaluation, we conduct a user survey to investigate which of the generated counterfactual candidates participants prefer as explanations. In a second study, we explore possible use cases for counterfactual explanations.
AU - Sieger, Leonie Nora
AU - Heindorf, Stefan
AU - Blübaum, Lukas
AU - Ngonga Ngomo, Axel-Cyrille
ID - 37937
T2 - arXiv:2301.05109
TI - Counterfactual Explanations for Concepts in ELH
ER -
TY - CONF
AU - Baci, Alkid
AU - Heindorf, Stefan
ID - 46575
T2 - CIKM
TI - Accelerating Concept Learning via Sampling
ER -
TY - CHAP
AB - Class expression learning in description logics has long been regarded as an iterative search problem in an infinite conceptual space. Each iteration of the search process invokes a reasoner and a heuristic function: the reasoner finds the instances of the current expression, and the heuristic function computes the information gain and decides on the next step to be taken. As the size of the background knowledge base grows, search-based approaches to class expression learning become prohibitively slow. Current neural class expression synthesis (NCES) approaches investigate the use of neural networks for class expression learning in the attributive language with complement (ALC). While they show significant improvements over search-based approaches in runtime and in the quality of the computed solutions, they rely on the availability of pretrained embeddings for the input knowledge base. Moreover, they are not applicable to ontologies in more expressive description logics. In this paper, we propose a novel NCES approach that extends the state of the art to the description logic ALCHIQ(D).
Our extension, dubbed NCES2, comes with an improved training data generator and does not require pretrained embeddings for the input knowledge base, as the embedding model and the class expression synthesizer are trained jointly. Empirical results on benchmark datasets suggest that our approach inherits the scalability of current NCES instances, with the additional advantage that it supports more complex learning problems. NCES2 achieves the highest overall performance compared to search-based approaches and to its predecessor NCES. We provide our source code, datasets, and pretrained models at https://github.com/dice-group/NCES2.
AU - Kouagou, N'Dah Jean
AU - Heindorf, Stefan
AU - Demir, Caglar
AU - Ngonga Ngomo, Axel-Cyrille
ID - 47421
SN - 0302-9743
T2 - Machine Learning and Knowledge Discovery in Databases: Research Track
TI - Neural Class Expression Synthesis in ALCHIQ(D)
ER -
TY - CHAP
AU - Ngonga Ngomo, Axel-Cyrille
AU - Demir, Caglar
AU - Kouagou, N'Dah Jean
AU - Heindorf, Stefan
AU - Karalis, Nikolaos
AU - Bigerl, Alexander
ID - 46460
T2 - Compendium of Neurosymbolic Artificial Intelligence
TI - Class Expression Learning with Multiple Representations
ER -
TY - JOUR
AU - Demir, Caglar
AU - Wiebesiek, Michel
AU - Lu, Renzhong
AU - Ngonga Ngomo, Axel-Cyrille
AU - Heindorf, Stefan
ID - 46248
JF - ECML PKDD
TI - LitCQD: Multi-Hop Reasoning in Incomplete Knowledge Graphs with Numeric Literals
ER -
TY - CHAP
AU - Kouagou, N'Dah Jean
AU - Heindorf, Stefan
AU - Demir, Caglar
AU - Ngonga Ngomo, Axel-Cyrille
ID - 33740
SN - 0302-9743
T2 - The Semantic Web
TI - Learning Concept Lengths Accelerates Concept Learning in ALC
ER -
TY - CONF
AB - Classifying nodes in knowledge graphs is an important task, e.g., predicting missing types of entities, predicting which molecules cause cancer, or predicting which drugs are promising treatment candidates.
While black-box models often achieve high predictive performance, they are only post-hoc and locally explainable, and they do not allow the learned model to be easily enriched with domain knowledge. Towards this end, learning description logic concepts from positive and negative examples has been proposed. However, learning such concepts often takes a long time, and state-of-the-art approaches provide limited support for literal data values, although these are crucial for many applications. In this paper, we propose EvoLearner - an evolutionary approach to learn ALCQ(D), which is the attributive language with complement (ALC) paired with qualified cardinality restrictions (Q) and data properties (D). We contribute a novel initialization method for the initial population: starting from positive examples (nodes in the knowledge graph), we perform biased random walks and translate them into description logic concepts. Moreover, we improve support for data properties by maximizing information gain when deciding where to split the data. We show that our approach significantly outperforms the state of the art on the benchmarking framework SML-Bench for structured machine learning. Our ablation study confirms that this is due to our novel initialization method and our support for data properties.
AU - Heindorf, Stefan
AU - Blübaum, Lukas
AU - Düsterhus, Nick
AU - Werner, Till
AU - Golani, Varun Nandkumar
AU - Demir, Caglar
AU - Ngonga Ngomo, Axel-Cyrille
ID - 29290
T2 - WWW
TI - EvoLearner: Learning Description Logics with Evolutionary Algorithms
ER -
TY - CONF
AB - Smart home systems contain plenty of features that enhance wellbeing in everyday life through artificial intelligence (AI). However, many users feel insecure because they do not understand the AI's functionality and do not feel in control of it. Combining technical, psychological, and philosophical views on AI, we rethink smart homes as interactive systems in which users can partake in an intelligent agent's learning.
Parallel to the goals of explainable AI (XAI), we explored user involvement in the supervised learning of a smart home as a first approach to improving acceptance, supporting subjective understanding, and increasing perceived control. We conducted two studies: in an online pre-study, we asked participants about their attitude towards teaching AI via a questionnaire. In the main study, we performed a Wizard of Oz laboratory experiment in which participants spent time in a prototypical smart home and taught activity recognition to the intelligent agent through supervised learning based on their behaviour. We found that involvement in the AI's learning phase enhanced the users' feeling of control, perceived understanding, and perceived usefulness of AI in general. The participants reported positive attitudes towards training a smart home AI and found the process understandable and controllable. We suggest that involving users in the learning phase could lead to better personalisation as well as increased understanding and control of intelligent agents for smart home automation.
AU - Sieger, Leonie Nora
AU - Hermann, Julia
AU - Schomäcker, Astrid
AU - Heindorf, Stefan
AU - Meske, Christian
AU - Hey, Celine-Chiara
AU - Doğangün, Ayşegül
ID - 34674
KW - human-agent interaction
KW - smart homes
KW - supervised learning
KW - participation
T2 - International Conference on Human-Agent Interaction
TI - User Involvement in Training Smart Home Agents
ER -
TY - CHAP
AU - Zahera, Hamada Mohamed Abdelsamee
AU - Heindorf, Stefan
AU - Balke, Stefan
AU - Haupt, Jonas
AU - Voigt, Martin
AU - Walter, Carolin
AU - Witter, Fabian
AU - Ngonga Ngomo, Axel-Cyrille
ID - 33738
SN - 0302-9743
T2 - The Semantic Web: ESWC 2022 Satellite Events
TI - Tab2Onto: Unsupervised Semantification with Knowledge Graph Embeddings
ER -