TY - CONF
AU - Epple, Nico
AU - Dari, Simone
AU - Drees, Ludwig
AU - Protschky, Valentin
AU - Riener, Andreas
ID - 15009
SN - 9781728105604
T2 - 2019 IEEE Intelligent Vehicles Symposium (IV)
TI - Influence of Cruise Control on Driver Guidance - A Comparison between System Generations and Countries
ER -

TY - CONF
AU - Tornede, Alexander
AU - Wever, Marcel Dominik
AU - Hüllermeier, Eyke
ED - Hoffmann, Frank
ED - Hüllermeier, Eyke
ED - Mikut, Ralf
ID - 15011
SN - 978-3-7315-0979-0
T2 - Proceedings - 29. Workshop Computational Intelligence, Dortmund, 28. - 29. November 2019
TI - Algorithm Selection as Recommendation: From Collaborative Filtering to Dyad Ranking
ER -

TY - CONF
AU - Brinker, Klaus
AU - Hüllermeier, Eyke
ID - 15013
T2 - Proceedings ECML/PKDD, European Conference on Machine Learning and Knowledge Discovery in Databases
TI - A Reduction of Label Ranking to Multiclass Classification
ER -

TY - CONF
AU - Hüllermeier, Eyke
AU - Couso, Ines
AU - Destercke, Sebastien
ID - 15014
T2 - Proceedings SUM 2019, International Conference on Scalable Uncertainty Management
TI - Learning from Imprecise Data: Adjustments of Optimistic and Pessimistic Variants
ER -

TY - JOUR
AU - Henzgen, Sascha
AU - Hüllermeier, Eyke
ID - 15015
JF - ACM Transactions on Knowledge Discovery from Data
SN - 1556-4681
TI - Mining Rank Data
ER -

TY - JOUR
AU - Bengs, Viktor
AU - Eulert, Matthias
AU - Holzmann, Hajo
ID - 14027
JF - Journal of Multivariate Analysis
SN - 0047-259X
TI - Asymptotic confidence sets for the jump curve in bivariate regression problems
ER -

TY - JOUR
AU - Bengs, Viktor
AU - Holzmann, Hajo
ID - 14028
JF - Electronic Journal of Statistics
SN - 1935-7524
TI - Adaptive confidence sets for kink estimation
ER -

TY - GEN
AU - Mohr, Felix
AU - Wever, Marcel Dominik
AU - Tornede, Alexander
AU - Hüllermeier, Eyke
ID - 13132
T2 - INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik – Informatik für Gesellschaft
TI - From Automated to On-The-Fly Machine Learning
ER -

TY - CONF
AB - Existing tools for automated machine learning, such as Auto-WEKA, TPOT, auto-sklearn, and more recently ML-Plan, have shown impressive results for the tasks of single-label classification and regression. Yet, so far there is little work on other types of machine learning problems. In particular, there is almost no work on automating the engineering of machine learning solutions for multi-label classification (MLC). We show how the scope of ML-Plan, an AutoML tool for multi-class classification, can be extended towards MLC using MEKA, a multi-label extension of the well-known Java library WEKA. The resulting approach recursively refines MEKA's multi-label classifiers, nesting other multi-label classifiers for meta-algorithms and single-label classifiers provided by WEKA as base learners. In our evaluation, we find that the proposed approach yields strong results and performs significantly better than the set of baselines we compare with.
AU - Wever, Marcel Dominik
AU - Mohr, Felix
AU - Tornede, Alexander
AU - Hüllermeier, Eyke
ID - 10232
TI - Automating Multi-Label Classification Extending ML-Plan
ER -

TY - JOUR
AU - Rohlfing, Katharina
AU - Leonardi, Giuseppe
AU - Nomikou, Iris
AU - Rączaszek-Leonardi, Joanna
AU - Hüllermeier, Eyke
ID - 20243
JF - IEEE Transactions on Cognitive and Developmental Systems
TI - Multimodal Turn-Taking: Motivations, Methodological Challenges, and Novel Approaches
ER -

TY - CONF
AU - Mohr, Felix
AU - Wever, Marcel Dominik
AU - Hüllermeier, Eyke
AU - Faez, Amin
ID - 2479
T2 - SCC
TI - (WIP) Towards the Automated Composition of Machine Learning Services
ER -

TY - GEN
AB - Object ranking is an important problem in the realm of preference learning. On the basis of training data in the form of a set of rankings of objects, which are typically represented as feature vectors, the goal is to learn a ranking function that predicts a linear order of any new set of objects. Current approaches commonly focus on ranking by scoring, i.e., on learning an underlying latent utility function that seeks to capture the inherent utility of each object. These approaches, however, are not able to take possible effects of context-dependence into account, where context-dependence means that the utility or usefulness of an object may also depend on what other objects are available as alternatives. In this paper, we formalize the problem of context-dependent ranking and present two general approaches based on two natural representations of context-dependent ranking functions. Both approaches are instantiated by means of appropriate neural network architectures, which are evaluated on suitable benchmark tasks.
AU - Pfannschmidt, Karlson
AU - Gupta, Pritha
AU - Hüllermeier, Eyke
ID - 19524
T2 - arXiv:1803.05796
TI - Deep Architectures for Learning Context-dependent Ranking Functions
ER -

TY - CONF
AU - Mohr, Felix
AU - Lettmann, Theodor
AU - Hüllermeier, Eyke
AU - Wever, Marcel Dominik
ID - 2857
T2 - Proceedings of the 1st ICAPS Workshop on Hierarchical Planning
TI - Programmatic Task Network Planning
ER -

TY - JOUR
AU - Ramaswamy, Arunselvan
AU - Bhatnagar, Shalabh
ID - 24150
IS - 6
JF - IEEE Transactions on Automatic Control
TI - Stability of stochastic approximations with "controlled Markov" noise and temporal difference learning
VL - 64
ER -

TY - JOUR
AU - Demirel, Burak
AU - Ramaswamy, Arunselvan
AU - Quevedo, Daniel E
AU - Karl, Holger
ID - 24151
IS - 4
JF - IEEE Control Systems Letters
TI - DeepCAS: A deep reinforcement learning algorithm for control-aware scheduling
VL - 2
ER -

TY - CONF
AU - Mohr, Felix
AU - Wever, Marcel Dominik
AU - Hüllermeier, Eyke
ID - 2471
T2 - SCC
TI - On-The-Fly Service Construction with Prototypes
ER -

TY - JOUR
AB - In machine learning, so-called nested dichotomies are utilized as a reduction technique, i.e., to decompose a multi-class classification problem into a set of binary problems, which are solved using a simple binary classifier as a base learner. The performance of the (multi-class) classifier thus produced strongly depends on the structure of the decomposition. In this paper, we conduct an empirical study, in which we compare existing heuristics for selecting a suitable structure in the form of a nested dichotomy. Moreover, we propose two additional heuristics as natural completions. One of them is the Best-of-K heuristic, which picks the (presumably) best among K randomly generated nested dichotomies. Surprisingly, and in spite of its simplicity, it turns out to outperform the state of the art.
AU - Melnikov, Vitalik
AU - Hüllermeier, Eyke
ID - 3402
JF - Machine Learning
SN - 1573-0565
TI - On the effectiveness of heuristics for learning nested dichotomies: an empirical analysis
ER -

TY - JOUR
AB - Automated machine learning (AutoML) seeks to automatically select, compose, and parametrize machine learning algorithms, so as to achieve optimal performance on a given task (dataset). Although current approaches to AutoML have already produced impressive results, the field is still far from mature, and new techniques are still being developed. In this paper, we present ML-Plan, a new approach to AutoML based on hierarchical planning. To highlight the potential of this approach, we compare ML-Plan to the state-of-the-art frameworks Auto-WEKA, auto-sklearn, and TPOT. In an extensive series of experiments, we show that ML-Plan is highly competitive and often outperforms existing approaches.
AU - Mohr, Felix
AU - Wever, Marcel Dominik
AU - Hüllermeier, Eyke
ID - 3510
JF - Machine Learning
KW - AutoML
KW - Hierarchical Planning
KW - HTN planning
KW - ML-Plan
SN - 0885-6125
TI - ML-Plan: Automated Machine Learning via Hierarchical Planning
ER -

TY - CONF
AU - Mohr, Felix
AU - Wever, Marcel Dominik
AU - Hüllermeier, Eyke
ID - 3552
T2 - Proceedings of the Symposium on Intelligent Data Analysis
TI - Reduction Stumps for Multi-Class Classification
ER -

TY - CONF
AB - In automated machine learning (AutoML), the process of engineering machine learning applications with respect to a specific problem is (partially) automated. Various AutoML tools have already been introduced to provide out-of-the-box machine learning functionality. More specifically, by selecting machine learning algorithms and optimizing their hyperparameters, these tools produce a machine learning pipeline tailored to the problem at hand. Except for TPOT, all of these tools restrict the maximum number of processing steps of such a pipeline. However, as TPOT follows an evolutionary approach, it suffers from performance issues when dealing with larger datasets. In this paper, we present an alternative approach leveraging hierarchical planning to configure machine learning pipelines that are unlimited in length. We evaluate our approach and find its performance to be competitive with other AutoML tools, including TPOT.
AU - Wever, Marcel Dominik
AU - Mohr, Felix
AU - Hüllermeier, Eyke
ID - 3852
KW - automated machine learning
KW - complex pipelines
KW - hierarchical planning
T2 - ICML 2018 AutoML Workshop
TI - ML-Plan for Unlimited-Length Machine Learning Pipelines
ER -