TY  - CHAP
AU  - Wehrheim, Heike
AU  - Hüllermeier, Eyke
AU  - Becker, Steffen
AU  - Becker, Matthias
AU  - Richter, Cedric
AU  - Sharma, Arnab
ED  - Haake, Claus-Jochen
ED  - Meyer auf der Heide, Friedhelm
ED  - Platzner, Marco
ED  - Wachsmuth, Henning
ED  - Wehrheim, Heike
ID  - 45886
T2  - On-The-Fly Computing -- Individualized IT-services in dynamic markets
TI  - Composition Analysis in Unknown Contexts
VL  - 412
ER  -

TY  - CONF
AB  - Testing is one of the most frequent means of quality assurance for software. Property-based testing aims at generating test suites for checking code against user-defined properties. Test input generation is, however, most often independent of the property to be checked, and is instead based on random or user-defined data generation. In this paper, we present property-driven unit testing of functions with numerical inputs and outputs. Like property-based testing, it allows users to define the properties to be tested for. Contrary to property-based testing, it also uses the property for a targeted generation of test inputs. Our approach is a form of learning-based testing in which we first learn a model of a given black-box function using standard machine learning algorithms and, in a second step, use the model and the property for test input generation. This allows us to test both predefined functions and machine-learned regression models. Our experimental evaluation shows that our property-driven approach is more effective than standard property-based testing techniques.
AU  - Sharma, Arnab
AU  - Melnikov, Vitaly
AU  - Hüllermeier, Eyke
AU  - Wehrheim, Heike
ID  - 32311
T2  - Proceedings of the 10th IEEE/ACM International Conference on Formal Methods in Software Engineering (FormaliSE)
TI  - Property-Driven Testing of Black-Box Functions
ER  -

TY  - JOUR
AU  - Sharma, Arnab
AU  - Demir, Caglar
AU  - Ngonga Ngomo, Axel-Cyrille
AU  - Wehrheim, Heike
ID  - 25213
JF  - CoRR
TI  - MLCheck - Property-Driven Testing of Machine Learning Models
VL  - abs/2105.00741
ER  -

TY  - CONF
AB  - In recent years, we have observed an increasing amount of software with machine learning components being deployed. This poses the question of quality assurance for such components: how can we validate whether specified requirements are fulfilled by machine-learned software? Current testing and verification approaches either focus on a single requirement (e.g., fairness) or specialize in a single type of machine learning model (e.g., neural networks). In this paper, we propose property-driven testing of machine learning models. Our approach MLCheck encompasses (1) a language for property specification and (2) a technique for systematic test case generation. The specification language is comparable to property-based testing languages. Test case generation employs advanced verification technology for a systematic, property-dependent construction of test suites, without additional user-supplied generator functions. We evaluate MLCheck using requirements and data sets from three different application areas (software discrimination, learning on knowledge graphs, and security). Our evaluation shows that, despite its generality, MLCheck can even outperform specialised testing approaches while having a comparable runtime.
AU  - Sharma, Arnab
AU  - Demir, Caglar
AU  - Ngonga Ngomo, Axel-Cyrille
AU  - Wehrheim, Heike
ID  - 28350
T2  - Proceedings of the 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
TI  - MLCheck - Property-Driven Testing of Machine Learning Classifiers
ER  -

TY  - CONF
AU  - Sharma, Arnab
AU  - Wehrheim, Heike
ID  - 19656
T2  - Proceedings of the 32nd IFIP International Conference on Testing Software and Systems (ICTSS)
TI  - Automatic Fairness Testing of Machine Learning Models
ER  -

TY  - JOUR
AU  - Sharma, Arnab
AU  - Wehrheim, Heike
ID  - 20279
JF  - CoRR
TI  - Testing Monotonicity of Machine Learning Models
VL  - abs/2002.12278
ER  -

TY  - CONF
AU  - Sharma, Arnab
AU  - Wehrheim, Heike
ID  - 16724
T2  - Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA)
TI  - Higher Income, Larger Loan? Monotonicity Testing of Machine Learning Models
ER  -

TY  - CONF
AB  - For optimal placement and orchestration of network services, it is crucial that their structure and semantics are specified clearly and comprehensively and are available to an orchestrator. Existing specification approaches are either ambiguous or miss important aspects regarding the behavior of the virtual network functions (VNFs) forming a service. We propose to formally and unambiguously specify the behavior of these functions and services using Queuing Petri Nets (QPNs). QPNs are an established method that allows expressing queuing, synchronization, stochastically distributed processing delays, and changing traffic volume and characteristics at each VNF. With QPNs, multiple VNFs can be connected to complete network services in any structure, even specifying bidirectional network services containing loops. We discuss how management and orchestration systems can benefit from our clear and comprehensive specification approach, leading to better placement of VNFs and improved Quality of Service. Another benefit of formally specifying network services with QPNs is the diverse analysis options, which allow valuable insights such as the distribution of end-to-end delay. We propose a tool-based workflow that supports the specification of network services and the automatic generation of corresponding simulation code to enable an in-depth analysis of their behavior and performance.
AU  - Schneider, Stefan Balthasar
AU  - Sharma, Arnab
AU  - Karl, Holger
AU  - Wehrheim, Heike
ID  - 3287
T2  - 2019 IFIP/IEEE International Symposium on Integrated Network Management (IM)
TI  - Specifying and Analyzing Virtual Network Services Using Queuing Petri Nets
ER  -

TY  - GEN
AU  - Sharma, Arnab
AU  - Wehrheim, Heike
ID  - 7752
SN  - 978-3-88579-686-2
T2  - Proceedings of the Software Engineering Conference (SE)
TI  - Testing Balancedness of ML Algorithms
VL  - P-292
ER  -

TY  - CONF
AU  - Sharma, Arnab
AU  - Wehrheim, Heike
ID  - 7635
T2  - IEEE International Conference on Software Testing, Verification and Validation (ICST)
TI  - Testing Machine Learning Algorithms for Balanced Data Usage
ER  -

TY  - CONF
AU  - Sharma, Arnab
AU  - Wehrheim, Heike
ED  - Becker, Steffen
ED  - Bogicevic, Ivan
ED  - Herzwurm, Georg
ED  - Wagner, Stefan
ID  - 10094
T2  - Software Engineering and Software Management, SE/SWM 2019, Stuttgart, Germany, February 18-22, 2019
TI  - Testing Balancedness of ML Algorithms
VL  - P-292
ER  -