{"type":"working_paper","user_id":"477","title":"DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning","date_updated":"2022-11-18T09:59:27Z","author":[{"orcid":"0000-0001-8210-4011","full_name":"Schneider, Stefan Balthasar","id":"35343","first_name":"Stefan Balthasar","last_name":"Schneider"},{"full_name":"Karl, Holger","last_name":"Karl","first_name":"Holger","id":"126"},{"last_name":"Khalili","first_name":"Ramin","full_name":"Khalili, Ramin"},{"full_name":"Hecker, Artur","first_name":"Artur","last_name":"Hecker"}],"status":"public","ddc":["004"],"date_created":"2022-10-20T16:44:19Z","abstract":[{"text":"Macrodiversity is a key technique to increase the capacity of mobile networks. It can be realized using coordinated multipoint (CoMP), simultaneously connecting users to multiple overlapping cells. Selecting which users to serve by how many and which cells is NP-hard but needs to happen continuously in real time as users move and channel state changes. Existing approaches often require strict assumptions about or perfect knowledge of the underlying radio system, its resource allocation scheme, or user movements, none of which is readily available in practice.\r\n\r\nInstead, we propose three novel self-learning and self-adapting approaches using model-free deep reinforcement learning (DRL): DeepCoMP, DD-CoMP, and D3-CoMP. DeepCoMP leverages central observations and control of all users to select cells almost optimally. DD-CoMP and D3-CoMP use multi-agent DRL, which allows distributed, robust, and highly scalable coordination. All three approaches learn from experience and self-adapt to varying scenarios, reaching 2x higher Quality of Experience than other approaches. They have very few built-in assumptions and do not need prior system knowledge, making them more robust to change and better applicable in practice than existing approaches.","lang":"eng"}],"language":[{"iso":"eng"}],"file":[{"date_updated":"2022-10-20T16:41:10Z","file_id":"33855","file_size":2521656,"creator":"stschn","relation":"main_file","date_created":"2022-10-20T16:41:10Z","access_level":"open_access","content_type":"application/pdf","file_name":"preprint.pdf"}],"citation":{"ama":"Schneider SB, Karl H, Khalili R, Hecker A. DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning.; 2021.","apa":"Schneider, S. B., Karl, H., Khalili, R., & Hecker, A. (2021). DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning.","ieee":"S. B. Schneider, H. Karl, R. Khalili, and A. Hecker, DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning. 2021.","short":"S.B. Schneider, H. Karl, R. Khalili, A. Hecker, DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning, 2021.","mla":"Schneider, Stefan Balthasar, et al. DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning. 2021.","chicago":"Schneider, Stefan Balthasar, Holger Karl, Ramin Khalili, and Artur Hecker. 
DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning, 2021.","bibtex":"@book{Schneider_Karl_Khalili_Hecker_2021, title={DeepCoMP: Coordinated Multipoint Using Multi-Agent Deep Reinforcement Learning}, author={Schneider, Stefan Balthasar and Karl, Holger and Khalili, Ramin and Hecker, Artur}, year={2021} }"},"year":"2021","_id":"33854","department":[{"_id":"75"}],"keyword":["mobility management","coordinated multipoint","CoMP","cell selection","resource management","reinforcement learning","multi agent","MARL","self-learning","self-adaptation","QoE"],"has_accepted_license":"1","file_date_updated":"2022-10-20T16:41:10Z","oa":"1","project":[{"name":"SFB 901 - C: SFB 901 - Project Area C","_id":"4"},{"_id":"16","name":"SFB 901 - C4: SFB 901 - Subproject C4"},{"_id":"1","name":"SFB 901: SFB 901"}]}