{"language":[{"iso":"eng"}],"publication_status":"epub_ahead","has_accepted_license":"1","date_updated":"2022-10-21T12:27:16Z","title":"On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation","citation":{"bibtex":"@article{Bieker_Gebken_Peitz_2022, title={On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation}, volume={44}, DOI={10.1109/TPAMI.2021.3114962}, number={11}, journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, publisher={IEEE}, author={Bieker, Katharina and Gebken, Bennet and Peitz, Sebastian}, year={2022}, pages={7797–7808} }","chicago":"Bieker, Katharina, Bennet Gebken, and Sebastian Peitz. “On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation.” IEEE Transactions on Pattern Analysis and Machine Intelligence 44, no. 11 (2022): 7797–7808. https://doi.org/10.1109/TPAMI.2021.3114962.","ama":"Bieker K, Gebken B, Peitz S. On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2022;44(11):7797-7808. doi:10.1109/TPAMI.2021.3114962","short":"K. Bieker, B. Gebken, S. Peitz, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (2022) 7797–7808.","apa":"Bieker, K., Gebken, B., & Peitz, S. (2022). On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 7797–7808. https://doi.org/10.1109/TPAMI.2021.3114962","ieee":"K. Bieker, B. Gebken, and S. Peitz, “On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 11, pp. 7797–7808, 2022, doi: 10.1109/TPAMI.2021.3114962.","mla":"Bieker, Katharina, et al. “On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation.” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 11, IEEE, 2022, pp. 
7797–808, doi:10.1109/TPAMI.2021.3114962."},"publisher":"IEEE","user_id":"47427","department":[{"_id":"101"},{"_id":"530"},{"_id":"655"}],"article_type":"original","main_file_link":[{"open_access":"1","url":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9547772"}],"_id":"20731","intvolume":" 44","year":"2022","author":[{"last_name":"Bieker","full_name":"Bieker, Katharina","first_name":"Katharina","id":"32829"},{"full_name":"Gebken, Bennet","last_name":"Gebken","id":"32643","first_name":"Bennet"},{"full_name":"Peitz, Sebastian","last_name":"Peitz","orcid":"0000-0002-3389-793X","first_name":"Sebastian","id":"47427"}],"status":"public","publication":"IEEE Transactions on Pattern Analysis and Machine Intelligence","issue":"11","date_created":"2020-12-15T07:46:36Z","type":"journal_article","page":"7797-7808","file_date_updated":"2021-09-25T11:59:15Z","oa":"1","file":[{"file_name":"On_the_Treatment_of_Optimization_Problems_with_L1_Penalty_Terms_via_Multiobjective_Continuation.pdf","creator":"speitz","file_size":7990831,"date_updated":"2021-09-25T11:59:15Z","file_id":"25040","access_level":"closed","success":1,"date_created":"2021-09-25T11:59:15Z","content_type":"application/pdf","relation":"main_file"}],"volume":44,"doi":"10.1109/TPAMI.2021.3114962","abstract":[{"lang":"eng","text":"We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks). Sparsity is an important feature to ensure robustness against noisy data, but also to find models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the ℓ1-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when we minimize the main objective and the ℓ1-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method which is specifically tailored to MOPs with two objective functions one of which is the ℓ1-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples - including neural network training - demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach."}],"ddc":["510"]}
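The abstract contrasts the standard weighted ℓ1 penalty with the multiobjective formulation the paper solves directly. A minimal sketch of the two problems, where the generic objective f, variable x, and weight λ are assumed notation rather than symbols taken from the record:

% Scalarized (weighted-penalty) problem: one sparse solution per fixed weight lambda >= 0
\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \, \|x\|_1

% Multiobjective problem (MOP): minimize both objectives simultaneously and
% compute the entire Pareto set of optimal compromises between f and sparsity
\min_{x \in \mathbb{R}^n} \; \bigl( f(x), \; \|x\|_1 \bigr)

For non-convex f, sweeping λ in the first problem reaches only those Pareto-optimal points that admit a supporting hyperplane, i.e., the convex part of the front; this is the sense in which the abstract states that the weighting method "will fail to provide all optimal compromises". The continuation method instead traces the full Pareto set, generalizing the homotopy methods known from linear regression to nonlinear objectives.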