{"status":"public","_id":"42160","external_id":{"arxiv":["2302.07160"]},"date_created":"2023-02-15T20:57:20Z","user_id":"47427","main_file_link":[{"url":"https://arxiv.org/pdf/2302.07160","open_access":"1"}],"year":"2023","author":[{"first_name":"Stefan","last_name":"Werner","full_name":"Werner, Stefan"},{"orcid":"0000-0002-3389-793X","last_name":"Peitz","full_name":"Peitz, Sebastian","first_name":"Sebastian","id":"47427"}],"citation":{"apa":"Werner, S., & Peitz, S. (2023). Learning a model is paramount for sample efficiency in reinforcement  learning control of PDEs. In arXiv:2302.07160.","chicago":"Werner, Stefan, and Sebastian Peitz. “Learning a Model Is Paramount for Sample Efficiency in Reinforcement  Learning Control of PDEs.” ArXiv:2302.07160, 2023.","mla":"Werner, Stefan, and Sebastian Peitz. “Learning a Model Is Paramount for Sample Efficiency in Reinforcement  Learning Control of PDEs.” ArXiv:2302.07160, 2023.","ieee":"S. Werner and S. Peitz, “Learning a model is paramount for sample efficiency in reinforcement  learning control of PDEs,” arXiv:2302.07160. 2023.","bibtex":"@article{Werner_Peitz_2023, title={Learning a model is paramount for sample efficiency in reinforcement  learning control of PDEs}, journal={arXiv:2302.07160}, author={Werner, Stefan and Peitz, Sebastian}, year={2023} }","ama":"Werner S, Peitz S. Learning a model is paramount for sample efficiency in reinforcement  learning control of PDEs. arXiv:230207160. Published online 2023.","short":"S. Werner, S. Peitz, ArXiv:2302.07160 (2023)."},"oa":"1","type":"preprint","department":[{"_id":"655"}],"language":[{"iso":"eng"}],"date_updated":"2023-02-15T20:58:33Z","title":"Learning a model is paramount for sample efficiency in reinforcement learning control of PDEs","abstract":[{"lang":"eng","text":"The goal of this paper is to make a strong point for the usage of dynamical models when using reinforcement learning (RL) for feedback control of dynamical systems governed by partial differential equations (PDEs). To breach the gap between the immense promises we see in RL and the applicability in complex engineering systems, the main challenges are the massive requirements in terms of the training data, as well as the lack of performance guarantees. We present a solution for the first issue using a data-driven surrogate model in the form of a convolutional LSTM with actuation. We demonstrate that learning an actuated model in parallel to training the RL agent significantly reduces the total amount of required data sampled from the real system. Furthermore, we show that iteratively updating the model is of major importance to avoid biases in the RL training. Detailed ablation studies reveal the most important ingredients of the modeling process. We use the chaotic Kuramoto-Sivashinsky equation do demonstarte our findings."}],"publication":"arXiv:2302.07160"}