TY - CONF
AU - Hotegni, Sedjro Salomon
AU - Mahabadi, Sepideh
AU - Vakilian, Ali
ID - 45695
KW - Fair range clustering
T2 - Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023.
TI - Approximation Algorithms for Fair Range Clustering
ER -

TY - CONF
AB - Tire and road wear are a major source of emissions of nonexhaust particulate matter (PM) and make up the largest share of microplastics in the environment. To reduce tire wear through numerical optimization of a vehicle's suspension system, fast simulations of the representative usage of a vehicle are needed. Therefore, this contribution evaluates if instead of a full simulation of a representative test drive, only specific driving maneuvers resulting from a clustering of the driving data can be used to predict tire wear. As a measure for tire wear, the friction work between tire and road is calculated. It is shown that enough clusters result in negligible deviations between the total friction work of the full simulation and the cluster simulations as well as between the distributions of the friction work over the tire width. The calculation time can be reduced to about 1% of the full simulation.
AU - Muth, Lars
AU - Noll, Christian
AU - Sextro, Walter
ED - Orlova, Anna
ED - Cole, David
ID - 29934
KW - Tire Wear
KW - Vehicle Dynamics
KW - Clustering
KW - Virtual Test
SN - 978-3-031-07304-5
T2 - Advances in Dynamics of Vehicles on Roads and Tracks II - Proceedings of the 27th Symposium of the International Association of Vehicle System Dynamics, IAVSD 2021
TI - Generation of a Reduced, Representative, Virtual Test Drive for Fast Evaluation of Tire Wear by Clustering of Driving Data
ER -

TY - CONF
AB - In the industry 4.0 era, there is a growing need to transform unstructured data acquired by a multitude of sources into information and subsequently into knowledge to improve the quality of manufactured products, to boost production, for predictive maintenance, etc. Data-driven approaches, such as machine learning techniques, are typically employed to model the underlying relationship from data. However, an increase in model accuracy with state-of-the-art methods, such as deep convolutional neural networks, results in less interpretability and transparency. Due to the ease of implementation, interpretation and transparency to both domain experts and non-experts, a rule-based method is proposed in this paper, for prognostics and health management (PHM) and specifically for diagnostics. The proposed method utilizes the most relevant sensor signals acquired via feature extraction and selection techniques and expert knowledge. As a case study, the presented method is evaluated on data from a real-world quality control set-up provided by the European prognostics and health management society (PHME) at the conference's 2021 data challenge. With the proposed method, our team took the third place, capable of successfully diagnosing different fault modes, irrespective of varying conditions.
AU - Aimiyekagbon, Osarenren Kennedy
AU - Muth, Lars
AU - Wohlleben, Meike Claudia
AU - Bender, Amelie
AU - Sextro, Walter
ED - Do, Phuc
ED - King, Steve
ED - Fink, Olga
ID - 27111
IS - 1
KW - PHME 2021
KW - Feature Selection Classification
KW - Feature Selection Clustering
KW - Interpretable Model
KW - Transparent Model
KW - Industry 4.0
KW - Real-World Diagnostics
KW - Quality Control
KW - Predictive Maintenance
T2 - Proceedings of the European Conference of the PHM Society 2021
TI - Rule-based Diagnostics of a Production Line
VL - 6
ER -

TY - CONF
AB - One of the most popular fuzzy clustering techniques is the fuzzy K-means algorithm (also known as fuzzy-c-means or FCM algorithm). In contrast to the K-means and K-median problem, the underlying fuzzy K-means problem has not been studied from a theoretical point of view. In particular, there are no algorithms with approximation guarantees similar to the famous K-means++ algorithm known for the fuzzy K-means problem. This work initiates the study of the fuzzy K-means problem from an algorithmic and complexity theoretic perspective. We show that optimal solutions for the fuzzy K-means problem cannot, in general, be expressed by radicals over the input points. Surprisingly, this already holds for simple inputs in one-dimensional space. Hence, one cannot expect to compute optimal solutions exactly. We give the first (1+eps)-approximation algorithms for the fuzzy K-means problem. First, we present a deterministic approximation algorithm whose runtime is polynomial in N and linear in the dimension D of the input set, given that K is constant, i.e. a polynomial time approximation scheme (PTAS) for fixed K. We achieve this result by showing that for each soft clustering there exists a hard clustering with similar properties. Second, by using techniques known from coreset constructions for the K-means problem, we develop a deterministic approximation algorithm that runs in time almost linear in N but exponential in the dimension D. We complement these results with a randomized algorithm which imposes some natural restrictions on the sought solution and whose runtime is comparable to some of the most efficient approximation algorithms for K-means, i.e. linear in the number of points and the dimension, but exponential in the number of clusters.
AU - Blömer, Johannes
AU - Brauer, Sascha
AU - Bujna, Kathrin
ID - 2367
KW - unsolvability by radicals
KW - clustering
KW - fuzzy k-means
KW - probabilistic method
KW - approximation algorithms
KW - randomized algorithms
SN - 9781509054732
T2 - 2016 IEEE 16th International Conference on Data Mining (ICDM)
TI - A Theoretical Analysis of the Fuzzy K-Means Problem
ER -

TY - JOUR
AU - Ackermann, Marcel R.
AU - Blömer, Johannes
AU - Sohler, Christian
ID - 2990
IS - 4
JF - ACM Trans. Algorithms
KW - k-means clustering
KW - k-median clustering
KW - Approximation algorithm
KW - Bregman divergences
KW - Itakura-Saito divergence
KW - Kullback-Leibler divergence
KW - Mahalanobis distance
KW - random sampling
SN - 1549-6325
TI - Clustering for Metric and Nonmetric Distance Measures
ER -

TY - CONF
AB - In this paper, we present a framework that supports experimenting with evolutionary hardware design. We describe the framework's modules for composing evolutionary optimizers and for setting up, controlling, and analyzing experiments. Two case studies demonstrate the usefulness of the framework: evolution of hash functions and evolution based on pre-engineered circuits.
AU - Kaufmann, Paul
AU - Platzner, Marco
ID - 6508
KW - integrated circuit design
KW - hardware evolution
KW - evolutionary hardware design
KW - evolutionary optimizers
KW - hash functions
KW - preengineered circuits
KW - Hardware
KW - Circuits
KW - Design optimization
KW - Visualization
KW - Genetic programming
KW - Genetic mutations
KW - Clustering algorithms
KW - Biological cells
KW - Field programmable gate arrays
KW - Routing
SN - 076952866X
T2 - Second NASA/ESA Conference on Adaptive Hardware and Systems (AHS 2007)
TI - MOVES: A Modular Framework for Hardware Evolution
ER -

TY - JOUR
AB - In this paper, it is shown that a correlation criterion is the appropriate criterion for bottom-up clustering to obtain broad phonetic class regression trees for maximum likelihood linear regression (MLLR)-based speaker adaptation. The correlation structure among speech units is estimated on the speaker-independent training data. In adaptation experiments the tree outperformed a regression tree obtained from clustering according to closeness in acoustic space and achieved results comparable with those of a manually designed broad phonetic class tree.
AU - Haeb-Umbach, Reinhold
ID - 11778
IS - 3
JF - IEEE Transactions on Speech and Audio Processing
KW - acoustic space
KW - adaptation experiments
KW - automatic generation
KW - bottom-up clustering
KW - broad phonetic class regression trees
KW - correlation criterion
KW - correlation methods
KW - maximum likelihood estimation
KW - maximum likelihood linear regression based speaker adaptation
KW - MLLR adaptation
KW - pattern clustering
KW - phonetic regression class trees
KW - speaker-independent training data
KW - speech recognition
KW - speech units
KW - statistical analysis
KW - trees (mathematics)
TI - Automatic generation of phonetic regression class trees for MLLR adaptation
VL - 9
ER -