Deep Reinforcement Learning for Computation Offloading in Mobile Edge Computing
Master thesis
Permanent link: https://hdl.handle.net/11250/3001134
Publication date: 2022-06-01
Abstract
As 5G networks are deployed worldwide, mobile edge computing (MEC) has emerged to relieve applications of resource-intensive computations: IoT devices can offload their computation to an MEC server and receive the computed result. This offloading scheme can be viewed as an optimization problem whose complexity increases rapidly as more devices join the system. In this thesis, we solve the optimization problem and introduce different strategies that are compared to the optimal solution. The strategies implemented are full local computing, full offload to an MEC server, random search, the optimal solution, Q-learning, and a deep Q-network (DQN). The main objective of each strategy is to minimize the total cost of the system, where the cost is a combination of energy consumption and delay. However, as the number of devices in the system increases, the results reveal numerous challenges. This thesis shows that the performance of the random search, Q-learning, and DQN strategies is very close to the optimal solution for up to 20 devices, whereas the strategies that can handle more than 20 devices generally perform poorly. Finally, we discuss the performance and convergence of a DQN in MEC.
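The abstract only names the cost as a combination of energy consumption and delay. Below is a minimal sketch, assuming a standard MEC formulation of that trade-off: a weighted sum of energy and delay for local versus offloaded execution, with a per-device binary offloading decision. The function names, the effective switched-capacitance constant `kappa`, the weights `w_energy`/`w_delay`, and all parameter values are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical energy-delay cost model for binary computation offloading.
# All constants and weights below are assumptions for illustration only.

def local_cost(cycles, f_local, kappa=1e-27, w_energy=0.5, w_delay=0.5):
    """Cost of executing a task locally on the IoT device."""
    delay = cycles / f_local                 # local execution time [s]
    energy = kappa * (f_local ** 2) * cycles # dynamic CPU energy [J]
    return w_energy * energy + w_delay * delay

def offload_cost(data_bits, cycles, rate, p_tx, f_mec,
                 w_energy=0.5, w_delay=0.5):
    """Cost of uploading the task to the MEC server and executing it there."""
    tx_delay = data_bits / rate              # uplink transmission time [s]
    tx_energy = p_tx * tx_delay              # radio energy spent by device [J]
    exec_delay = cycles / f_mec              # server execution time [s]
    return w_energy * tx_energy + w_delay * (tx_delay + exec_delay)

def decide(data_bits, cycles, f_local, rate, p_tx, f_mec):
    """Per-device decision: choose whichever option has the lower cost."""
    c_local = local_cost(cycles, f_local)
    c_off = offload_cost(data_bits, cycles, rate, p_tx, f_mec)
    return ("offload", c_off) if c_off < c_local else ("local", c_local)

# Example: 1 Mbit task needing 1e9 CPU cycles, 1 GHz device CPU,
# 5 Mbit/s uplink at 0.1 W transmit power, 10 GHz MEC server.
print(decide(1e6, 1e9, 1e9, 5e6, 0.1, 10e9))
```

In the full problem, the devices' decisions are coupled through shared radio and server resources, which is why the cost of finding the optimal solution grows quickly with the number of devices and motivates the Q-learning and DQN strategies studied in the thesis.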