{"type":"book_chapter","doi":"10.1007/978-3-030-64580-9_7","language":[{"iso":"eng"}],"citation":{"mla":"Yegenoglu, Alper, et al. “Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-Performing Gradient Descent.” Lecture Notes in Computer Science, Springer International Publishing, 2021, doi:10.1007/978-3-030-64580-9_7.","apa":"Yegenoglu, A., Krajsek, K., Pier, S. D., & Herty, M. (2021). Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent. In Lecture Notes in Computer Science. Springer International Publishing. https://doi.org/10.1007/978-3-030-64580-9_7","bibtex":"@inbook{Yegenoglu_Krajsek_Pier_Herty_2021, place={Cham}, title={Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent}, DOI={10.1007/978-3-030-64580-9_7}, booktitle={Lecture Notes in Computer Science}, publisher={Springer International Publishing}, author={Yegenoglu, Alper and Krajsek, Kai and Pier, Sandra Diaz and Herty, Michael}, year={2021} }","ama":"Yegenoglu A, Krajsek K, Pier SD, Herty M. Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent. In: Lecture Notes in Computer Science. Springer International Publishing; 2021. doi:10.1007/978-3-030-64580-9_7","chicago":"Yegenoglu, Alper, Kai Krajsek, Sandra Diaz Pier, and Michael Herty. “Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-Performing Gradient Descent.” In Lecture Notes in Computer Science. Cham: Springer International Publishing, 2021. https://doi.org/10.1007/978-3-030-64580-9_7.","ieee":"A. Yegenoglu, K. Krajsek, S. D. Pier, and M. Herty, “Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent,” in Lecture Notes in Computer Science, Cham: Springer International Publishing, 2021.","short":"A. Yegenoglu, K. Krajsek, S.D. Pier, M. Herty, in: Lecture Notes in Computer Science, Springer International Publishing, Cham, 2021."},"author":[{"id":"117951","first_name":"Alper","orcid":"0000-0001-8869-215X","last_name":"Yegenoglu","full_name":"Yegenoglu, Alper"},{"first_name":"Kai","full_name":"Krajsek, Kai","last_name":"Krajsek"},{"full_name":"Pier, Sandra Diaz","last_name":"Pier","first_name":"Sandra Diaz"},{"first_name":"Michael","full_name":"Herty, Michael","last_name":"Herty"}],"publication_identifier":{"isbn":["9783030645793","9783030645809"],"issn":["0302-9743","1611-3349"]},"place":"Cham","_id":"60901","publication":"Lecture Notes in Computer Science","date_updated":"2025-08-08T11:36:59Z","user_id":"117951","publication_status":"published","status":"public","publisher":"Springer International Publishing","date_created":"2025-08-06T15:02:38Z","title":"Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent","year":"2021","abstract":[{"lang":"eng","text":"The successful training of deep neural networks is dependent on initialization schemes and choice of activation functions. Non-optimally chosen parameter settings lead to the known problem of exploding or vanishing gradients. This issue occurs when gradient descent and backpropagation are applied. For this setting the Ensemble Kalman Filter (EnKF) can be used as an alternative optimizer when training neural networks. 
The EnKF does not require the explicit calculation of gradients or adjoints, and we show that this resolves the exploding and vanishing gradient problem. We analyze different parameter initializations, propose a dynamic change in the ensembles, and compare our results to established methods."}]}