{"publication":"2024 International Joint Conference on Neural Networks (IJCNN)","main_file_link":[{"open_access":"1","url":"https://ieeexplore.ieee.org/document/10650994"}],"year":"2024","citation":{"bibtex":"@inproceedings{Hotegni_Berkemeier_Peitz_2024, place={Yokohama, Japan}, title={Multi-Objective Optimization for Sparse Deep Multi-Task Learning}, DOI={10.1109/IJCNN60899.2024.10650994}, booktitle={2024 International Joint Conference on Neural Networks (IJCNN)}, publisher={IEEE}, author={Hotegni, Sedjro Salomon and Berkemeier, Manuel Bastian and Peitz, Sebastian}, year={2024}, pages={9} }","mla":"Hotegni, Sedjro Salomon, et al. “Multi-Objective Optimization for Sparse Deep Multi-Task Learning.” 2024 International Joint Conference on Neural Networks (IJCNN), IEEE, 2024, p. 9, doi:10.1109/IJCNN60899.2024.10650994.","ieee":"S. S. Hotegni, M. B. Berkemeier, and S. Peitz, “Multi-Objective Optimization for Sparse Deep Multi-Task Learning,” in 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 2024, p. 9, doi: 10.1109/IJCNN60899.2024.10650994.","ama":"Hotegni SS, Berkemeier MB, Peitz S. Multi-Objective Optimization for Sparse Deep Multi-Task Learning. In: 2024 International Joint Conference on Neural Networks (IJCNN). IEEE; 2024:9. doi:10.1109/IJCNN60899.2024.10650994","chicago":"Hotegni, Sedjro Salomon, Manuel Bastian Berkemeier, and Sebastian Peitz. “Multi-Objective Optimization for Sparse Deep Multi-Task Learning.” In 2024 International Joint Conference on Neural Networks (IJCNN), 9. Yokohama, Japan: IEEE, 2024. https://doi.org/10.1109/IJCNN60899.2024.10650994.","apa":"Hotegni, S. S., Berkemeier, M. B., & Peitz, S. (2024). Multi-Objective Optimization for Sparse Deep Multi-Task Learning. 2024 International Joint Conference on Neural Networks (IJCNN), 9. https://doi.org/10.1109/IJCNN60899.2024.10650994","short":"S.S. Hotegni, M.B. Berkemeier, S. Peitz, in: 2024 International Joint Conference on Neural Networks (IJCNN), IEEE, Yokohama, Japan, 2024, p. 9."},"_id":"46649","conference":{"start_date":"2024-06-30","end_date":"2024-07-05","location":"Yokohama, Japan","name":"2024 International Joint Conference on Neural Networks (IJCNN)"},"has_accepted_license":"1","type":"conference","date_created":"2023-08-24T07:44:36Z","place":"Yokohama, Japan","doi":"10.1109/IJCNN60899.2024.10650994","language":[{"iso":"eng"}],"oa":"1","status":"public","author":[{"last_name":"Hotegni","full_name":"Hotegni, Sedjro Salomon","id":"97995","first_name":"Sedjro Salomon"},{"full_name":"Berkemeier, Manuel Bastian","last_name":"Berkemeier","first_name":"Manuel Bastian","id":"51701"},{"id":"47427","first_name":"Sebastian","last_name":"Peitz","full_name":"Peitz, Sebastian","orcid":"0000-0002-3389-793X"}],"department":[{"_id":"655"}],"user_id":"97995","publisher":"IEEE","page":"9","title":"Multi-Objective Optimization for Sparse Deep Multi-Task Learning","publication_identifier":{"eissn":[" 2161-4407"],"eisbn":["979-8-3503-5931-2"]},"date_updated":"2024-09-27T10:24:22Z","publication_status":"published","abstract":[{"text":"Different conflicting optimization criteria arise naturally in various Deep\r\nLearning scenarios. These can address different main tasks (i.e., in the\r\nsetting of Multi-Task Learning), but also main and secondary tasks such as loss\r\nminimization versus sparsity. The usual approach is a simple weighting of the\r\ncriteria, which formally only works in the convex setting. 
In this paper, we present a Multi-Objective Optimization algorithm using a modified Weighted Chebyshev scalarization for training Deep Neural Networks (DNNs) with respect to several tasks. By employing this scalarization technique, the algorithm can identify all optimal solutions of the original problem while reducing its complexity to a sequence of single-objective problems. The simplified problems are then solved using an Augmented Lagrangian method, enabling the use of popular optimization techniques such as Adam and Stochastic Gradient Descent, while effectively handling constraints. Our work aims to address the (economic and also ecological) sustainability issue of DNN models, with a particular focus on Deep Multi-Task models, which are typically designed with a very large number of weights to perform equally well on multiple tasks. Through experiments conducted on two Machine Learning datasets, we demonstrate the possibility of adaptively sparsifying the model during training without significantly impacting its performance, if we are willing to apply task-specific adaptations to the network weights. Code is available at https://github.com/salomonhotegni/MDMTN.","lang":"eng"}]}
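The abstract's core idea, scalarizing conflicting training objectives with a weighted Chebyshev norm so that standard single-objective optimizers such as Adam still apply, can be illustrated with a minimal PyTorch sketch. Note this is a plain weighted Chebyshev scalarization, not the authors' modified variant or their MDMTN implementation, and it omits the Augmented Lagrangian constraint handling; the two objectives (task loss vs. an L1 sparsity penalty), the weight vector, and the reference point `z_ref` are illustrative assumptions.

```python
# Minimal sketch: weighted Chebyshev scalarization of two training objectives
# (task loss vs. sparsity). NOT the authors' modified variant from the paper
# or the MDMTN repo; weights and reference point are illustrative assumptions.
import torch

def weighted_chebyshev(losses: torch.Tensor,
                       weights: torch.Tensor,
                       z_ref: torch.Tensor) -> torch.Tensor:
    """Scalarize a vector of objectives: max_i w_i * (f_i - z_i*).

    Unlike a plain weighted sum, minimizing this over a sweep of weight
    vectors can also recover Pareto-optimal points on non-convex parts
    of the front.
    """
    return torch.max(weights * (losses - z_ref))

# Toy model and data
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

# Objective 1: task loss; objective 2: L1 sparsity penalty on the weights
task_loss = torch.nn.functional.mse_loss(model(x), y)
sparsity_loss = sum(p.abs().mean() for p in model.parameters())
losses = torch.stack([task_loss, sparsity_loss])

weights = torch.tensor([0.8, 0.2])  # trade-off preference (assumption)
z_ref = torch.zeros(2)              # ideal/utopian reference point (assumption)

# A standard optimizer applies to the scalarized problem
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
weighted_chebyshev(losses, weights, z_ref).backward()
opt.step()
```

Sweeping `weights` over the simplex and re-solving the scalarized problem is one way to trace out different performance/sparsity trade-offs; the paper's sequence-of-single-objective-problems structure follows this pattern, with its Augmented Lagrangian machinery handling the resulting constraints.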