Deep reinforcement learning for wireless sensor scheduling in cyber–physical systems

A.S. Leong, A. Ramaswamy, D.E. Quevedo, H. Karl, L. Shi, Automatica (2019).

Journal Article | Published | English
Author
Leong, Alex S.; Ramaswamy, Arunselvan; Quevedo, Daniel E.; Karl, Holger; Shi, Ling
Abstract
In many cyber–physical systems, we encounter the problem of remote state estimation of geographically distributed and remote physical processes. This paper studies the scheduling of sensor transmissions to estimate the states of multiple remote, dynamic processes. Information from the different sensors has to be transmitted to a central gateway over a wireless network for monitoring purposes, where typically fewer wireless channels are available than there are processes to be monitored. For effective estimation at the gateway, the sensors need to be scheduled appropriately, i.e., at each time instant one needs to decide which sensors have network access and which ones do not. To address this scheduling problem, we formulate an associated Markov decision process (MDP). This MDP is then solved using a Deep Q-Network, a recent deep reinforcement learning algorithm that is at once scalable and model-free. We compare our scheduling algorithm to popular scheduling algorithms such as round-robin and reduced-waiting-time, among others. Our algorithm is shown to significantly outperform these algorithms for many example scenarios.
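
The abstract only outlines the approach at a high level. As a rough illustration, the sketch below shows how such a scheduling MDP could be coupled to a small Deep Q-Network with epsilon-greedy exploration. It is a minimal sketch under stated assumptions, not the authors' implementation: the environment (SchedulingEnv), the single-channel setting, the age-of-information state, and the reward proxy are all illustrative choices, and the paper's Kalman-filter-based estimation error dynamics are not reproduced here.

# Minimal sketch of a DQN-based sensor scheduler (illustrative only).
# Assumptions not taken from the paper: N sensors sharing a single wireless
# channel, state = time since each sensor's last transmission, and a crude
# age-of-information proxy in place of the true estimation error covariance.

import random
import numpy as np
import torch
import torch.nn as nn

N_SENSORS = 4          # number of remote processes to monitor
GAMMA = 0.95           # discount factor
EPS = 0.1              # epsilon-greedy exploration rate


class SchedulingEnv:
    """Toy scheduling MDP: one sensor is granted channel access per step."""

    def __init__(self):
        self.age = np.zeros(N_SENSORS)   # steps since each sensor last transmitted

    def reset(self):
        self.age[:] = 0
        return self.age.copy()

    def step(self, action):
        self.age += 1
        self.age[action] = 0             # the scheduled sensor transmits
        # crude proxy for total estimation error: grows with information age
        error = float(self.age.sum())
        return self.age.copy(), -error


class QNet(nn.Module):
    """Small fully connected Q-network: state -> one Q-value per sensor."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS, 64), nn.ReLU(),
            nn.Linear(64, N_SENSORS))

    def forward(self, x):
        return self.net(x)


def train(episodes=200, horizon=50):
    env, qnet = SchedulingEnv(), QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    for _ in range(episodes):
        s = torch.tensor(env.reset(), dtype=torch.float32)
        for _ in range(horizon):
            # epsilon-greedy selection of which sensor gets the channel
            if random.random() < EPS:
                a = random.randrange(N_SENSORS)
            else:
                a = int(qnet(s).argmax())
            s_next, r = env.step(a)
            s_next = torch.tensor(s_next, dtype=torch.float32)
            # one-step TD target (no replay buffer or target network in this sketch)
            with torch.no_grad():
                target = r + GAMMA * qnet(s_next).max()
            loss = (qnet(s)[a] - target) ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
            s = s_next
    return qnet


if __name__ == "__main__":
    trained = train()
    print(trained(torch.zeros(N_SENSORS)))   # Q-values from the reset state

A full treatment, as described in the abstract, would also handle multiple simultaneous channels (actions over subsets of sensors) and compare against round-robin and reduced-waiting-time baselines; those elements are omitted here for brevity.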
Publishing Year
2019
Journal Title
Automatica
Article Number
108759
ISSN
LibreCat-ID

Cite this

Leong AS, Ramaswamy A, Quevedo DE, Karl H, Shi L. Deep reinforcement learning for wireless sensor scheduling in cyber–physical systems. Automatica. 2019. doi:10.1016/j.automatica.2019.108759
Leong, A. S., Ramaswamy, A., Quevedo, D. E., Karl, H., & Shi, L. (2019). Deep reinforcement learning for wireless sensor scheduling in cyber–physical systems. Automatica. https://doi.org/10.1016/j.automatica.2019.108759
@article{Leong_Ramaswamy_Quevedo_Karl_Shi_2019, title={Deep reinforcement learning for wireless sensor scheduling in cyber–physical systems}, DOI={10.1016/j.automatica.2019.108759}, number={108759}, journal={Automatica}, author={Leong, Alex S. and Ramaswamy, Arunselvan and Quevedo, Daniel E. and Karl, Holger and Shi, Ling}, year={2019} }
Leong, Alex S., Arunselvan Ramaswamy, Daniel E. Quevedo, Holger Karl, and Ling Shi. “Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber–Physical Systems.” Automatica, 2019. https://doi.org/10.1016/j.automatica.2019.108759.
A. S. Leong, A. Ramaswamy, D. E. Quevedo, H. Karl, and L. Shi, “Deep reinforcement learning for wireless sensor scheduling in cyber–physical systems,” Automatica, 2019.
Leong, Alex S., et al. “Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber–Physical Systems.” Automatica, 108759, 2019, doi:10.1016/j.automatica.2019.108759.
Main File(s)
File Name
leoram20a.pdf 675.38 KB
Access Level
Restricted (Closed Access)
Last Uploaded
2020-01-31T15:57:50Z

