---
res:
  bibo_abstract:
  - Upcoming sensing applications (acoustic or video) will have high processing requirements
    that cannot be satisfied by a single node, or they will need input from multiple
    sources (e.g., speaker localization). Offloading these applications to the cloud
    or the mobile edge is an option, but when running in a wireless sensor network
    (WSN), it might entail needlessly high data rates and latency. An alternative
    is to spread processing inside the WSN, which is particularly attractive if the
    application comprises individual components. This scenario is typical for applications
    like acoustic signal processing. Mapping components to nodes can be formulated
    as a wireless version of the NP-hard Virtual Network Embedding (VNE) problem,
    for which various heuristics exist. We propose a Reinforcement Learning (RL) framework,
    which relies on Q-Learning and uses either Epsilon-Greedy or Epsilon Decay for
    exploration. We compare both exploration methods to the result of an optimization
    approach and show empirically that the RL framework achieves good results in terms
    of network delay within a few steps.@eng
  bibo_authorlist:
  - foaf_Person:
      foaf_givenName: Haitham
      foaf_name: Afifi, Haitham
      foaf_surname: Afifi
      foaf_workInfoHomepage: http://www.librecat.org/personId=65718
  - foaf_Person:
      foaf_givenName: Holger
      foaf_name: Karl, Holger
      foaf_surname: Karl
      foaf_workInfoHomepage: http://www.librecat.org/personId=126
  dct_date: 2020^xs_gYear
  dct_language: eng
  dct_title: Reinforcement Learning for Virtual Network Embedding in Wireless Sensor
    Networks@eng
...
