Show simple item record

dc.contributor.author: Kallestad, Jakob Vigerust
dc.contributor.author: Hasibi, Ramin
dc.contributor.author: Hemmati, Ahmad
dc.contributor.author: Sörensen, Kenneth
dc.date.accessioned: 2023-05-25T08:56:56Z
dc.date.available: 2023-05-25T08:56:56Z
dc.date.created: 2023-01-30T10:47:56Z
dc.date.issued: 2023
dc.identifier.issn: 0377-2217
dc.identifier.uri: https://hdl.handle.net/11250/3068956
dc.description.abstract: Many problem-specific heuristic frameworks have been developed to solve combinatorial optimization problems, but these frameworks do not generalize well to other problem domains. Metaheuristic frameworks aim to be more generalizable than traditional heuristics; however, their performance suffers from poor selection of low-level heuristics (operators) during the search process. An example of heuristic selection in a metaheuristic framework is the adaptive layer of the popular Adaptive Large Neighborhood Search (ALNS) framework. Here, we propose a selection hyperheuristic framework that uses Deep Reinforcement Learning (Deep RL) as an alternative to the adaptive layer of ALNS. Unlike the adaptive layer, which only considers heuristics' past performance for future selection, a Deep RL agent can take into account additional information from the search process, e.g., the difference in objective value between iterations, to make better decisions. This is due to the representation power of Deep Learning methods and the decision-making capability of the Deep RL agent, which can learn to adapt to different problems and instance characteristics. In this paper, by integrating the Deep RL agent into the ALNS framework, we introduce Deep Reinforcement Learning Hyperheuristic (DRLH), a general framework for solving a wide variety of combinatorial optimization problems, and show that our framework is better at selecting low-level heuristics at each step of the search process than ALNS and Uniform Random Selection (URS). Our experiments also show that while ALNS cannot properly handle a large pool of heuristics, DRLH is not negatively affected by increasing the number of heuristics.
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: A general deep reinforcement learning hyperheuristic framework for solving combinatorial optimization problems
dc.type: Journal article
dc.type: Peer reviewed
dc.description.version: publishedVersion
dc.rights.holder: Copyright 2023 the authors
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 2
dc.identifier.doi: 10.1016/j.ejor.2023.01.017
dc.identifier.cristin: 2118066
dc.source.journal: European Journal of Operational Research
dc.source.pagenumber: 446-468
dc.identifier.citation: European Journal of Operational Research. 2023, 309 (1), 446-468.
dc.source.volume: 309
dc.source.issue: 1
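
The abstract above outlines how DRLH replaces the adaptive layer of ALNS with an agent that selects a low-level heuristic (operator) at every iteration, using search-state information such as the change in objective value. The sketch below is a minimal illustration of that selection-hyperheuristic loop, not the authors' DRLH implementation: a simple softmax preference-update rule stands in for the Deep RL policy, and the toy objective, operators, acceptance rule, and all function names are assumptions made purely for illustration.

# Minimal sketch of a selection hyperheuristic loop (illustrative only,
# NOT the DRLH implementation from the paper). At each iteration an agent
# picks an operator, applies it, and is rewarded by the improvement in
# objective value; a softmax over learned preference scores stands in for
# the Deep RL policy described in the abstract.
import math
import random

def softmax(scores):
    # Numerically stable softmax over preference scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def run_hyperheuristic(objective, operators, x0, iters=1000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0] * len(operators)        # one preference score per operator
    x = x0
    best, best_val = x0, objective(x0)
    for _ in range(iters):
        probs = softmax(prefs)
        # Select an operator according to the current policy.
        k = rng.choices(range(len(operators)), weights=probs)[0]
        candidate = operators[k](x, rng)
        delta = objective(x) - objective(candidate)   # > 0 means improvement
        reward = max(delta, 0.0)
        prefs[k] += lr * (reward - prefs[k])          # update operator preference
        # Accept improving moves, and occasionally a worse move for diversification.
        if delta > 0 or rng.random() < 0.05:
            x = candidate
        cur = objective(x)
        if cur < best_val:
            best, best_val = x, cur
    return best, best_val

# Toy usage: minimize a 1-D quadratic with two operators of different step sizes.
if __name__ == "__main__":
    f = lambda x: (x - 3.0) ** 2
    ops = [lambda x, r: x + r.uniform(-0.1, 0.1),
           lambda x, r: x + r.uniform(-1.0, 1.0)]
    print(run_hyperheuristic(f, ops, x0=10.0))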


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International