Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning

S. Peitz, J. Stenner, V. Chidananda, O. Wallscheid, S.L. Brunton, K. Taira, Physica D: Nonlinear Phenomena 461 (2024) 134096.

Journal Article | English
Author
Peitz, Sebastian; Stenner, Jan; Chidananda, Vikas; Wallscheid, Oliver; Brunton, Steven L.; Taira, Kunihiko
Abstract
We present a convolutional framework that significantly reduces the complexity, and thus the computational effort, for distributed reinforcement learning control of dynamical systems governed by partial differential equations (PDEs). Exploiting translational equivariances, the high-dimensional distributed control problem can be transformed into a multi-agent control problem with many identical, uncoupled agents. Furthermore, since information is in many cases transported with finite velocity, the dimension of the agents’ environment can be drastically reduced via a convolution operation over the state space of the PDE, by which we effectively tackle the curse of dimensionality otherwise present in deep reinforcement learning. In this setting, the complexity can be flexibly adjusted via the kernel width or by using a stride greater than one (meaning that we do not place an actuator at each sensor location). Moreover, scaling from smaller to larger domains – or transferring between different domains – becomes a straightforward task requiring little effort. We demonstrate the performance of the proposed framework using several PDE examples with increasing complexity, where stabilization is achieved by training a low-dimensional deep deterministic policy gradient agent using minimal computing resources.
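As a rough illustration of the idea described in the abstract (not the authors' implementation), the following Python sketch shows how a single shared low-dimensional policy can act on strided local windows of a discretized 1D periodic PDE state, turning distributed control into many identical, uncoupled agents. The names local_observations, shared_policy, kernel_width, and stride, as well as the linear actor standing in for the paper's DDPG network, are illustrative assumptions.

import numpy as np

def local_observations(state, kernel_width, stride):
    """Slide a window of size kernel_width over the 1D state with periodic
    boundary conditions, advancing stride grid points per agent."""
    n = state.shape[0]
    centers = np.arange(0, n, stride)              # one agent per actuator location
    half = kernel_width // 2
    obs = np.stack([
        state[(c - half + np.arange(kernel_width)) % n]   # periodic wrap-around
        for c in centers
    ])
    return centers, obs                            # obs shape: (num_agents, kernel_width)

def shared_policy(obs, weights):
    """Identical shared-weight actor applied to every local window; in the paper
    this role is played by a small deep deterministic policy gradient actor."""
    return np.tanh(obs @ weights)                  # one scalar action per agent

# Toy usage: 256-point periodic grid, 7-point sensor window, stride 4
rng = np.random.default_rng(0)
u = rng.standard_normal(256)                       # stand-in for the discretized PDE state
centers, obs = local_observations(u, kernel_width=7, stride=4)
actions = shared_policy(obs, weights=0.1 * rng.standard_normal(7))

# Scatter the local actions back onto the grid as a distributed forcing term
forcing = np.zeros_like(u)
forcing[centers] = actions
print(obs.shape, actions.shape)                    # (64, 7) (64,)

Because the policy weights are shared across all windows, the same trained agent can be reused when the domain is enlarged or transferred, which is the scaling property highlighted in the abstract.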
Publishing Year
2024
Journal Title
Physica D: Nonlinear Phenomena
Volume
461
Page
134096

Cite this

Peitz S, Stenner J, Chidananda V, Wallscheid O, Brunton SL, Taira K. Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning. Physica D: Nonlinear Phenomena. 2024;461:134096. doi:10.1016/j.physd.2024.134096
Peitz, S., Stenner, J., Chidananda, V., Wallscheid, O., Brunton, S. L., & Taira, K. (2024). Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning. Physica D: Nonlinear Phenomena, 461, 134096. https://doi.org/10.1016/j.physd.2024.134096
@article{Peitz_Stenner_Chidananda_Wallscheid_Brunton_Taira_2024, title={Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning}, volume={461}, DOI={10.1016/j.physd.2024.134096}, journal={Physica D: Nonlinear Phenomena}, publisher={Elsevier}, author={Peitz, Sebastian and Stenner, Jan and Chidananda, Vikas and Wallscheid, Oliver and Brunton, Steven L. and Taira, Kunihiko}, year={2024}, pages={134096} }
Peitz, Sebastian, Jan Stenner, Vikas Chidananda, Oliver Wallscheid, Steven L. Brunton, and Kunihiko Taira. “Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning.” Physica D: Nonlinear Phenomena 461 (2024): 134096. https://doi.org/10.1016/j.physd.2024.134096.
S. Peitz, J. Stenner, V. Chidananda, O. Wallscheid, S. L. Brunton, and K. Taira, “Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning,” Physica D: Nonlinear Phenomena, vol. 461, p. 134096, 2024, doi: 10.1016/j.physd.2024.134096.
Peitz, Sebastian, et al. “Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning.” Physica D: Nonlinear Phenomena, vol. 461, Elsevier, 2024, p. 134096, doi:10.1016/j.physd.2024.134096.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Restricted Closed Access
