Show simple item record

dc.contributor.author	Hansen, Viktor
dc.date.accessioned	2018-10-22T16:30:39Z
dc.date.available	2018-10-22T16:30:39Z
dc.date.issued	2018-06-26
dc.date.submitted	2018-06-25T22:00:11Z
dc.identifier.uri	http://hdl.handle.net/1956/18665
dc.description.abstract	Classical Multi-Armed Bandit (MAB) solutions often assume independent arms as a simplification of the problem. This simplification has produced strong results in many fields of practice, but may in some cases leave potential untapped. In this paper I explore network-based MAB solutions that use explore-exploit algorithms as nodes, in order to further minimize regret and take advantage of inter-bandit dependencies. I explore two network approaches, a hierarchical and a flat network, as well as a special case of the Bernoulli Bandit with dependent arms, referred to as the Symbiotic Bandit. The results show that, in terms of regret, some networked solutions outperform the single-node versions on both the Bernoulli Bandit and the Symbiotic Bandit.	en_US
dc.language.iso	eng	eng
dc.publisher	The University of Bergen	eng
dc.subject	Networks	eng
dc.subject	Online Learning	eng
dc.subject	MAB	eng
dc.title	Multi-Armed Bandit Networks: Exploring Online Learning with Networks	eng
dc.type	Master thesis	en_US
dc.date.updated	2018-06-25T22:00:11Z
dc.rights.holder	Copyright the author. All rights reserved.	en_US
dc.description.degree	Masteroppgave i informasjonsvitenskap
dc.description.localcode	INFO390
dc.subject.nus	735115	eng
fs.subjectcode	INFO390
fs.unitcode	15-17-0
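For readers unfamiliar with the explore-exploit setting the abstract refers to, the following is a minimal sketch of a single-node baseline: epsilon-greedy on a Bernoulli bandit with independent arms. It is purely illustrative; the function name, parameters, and strategy choice are assumptions for this example and are not taken from the thesis itself (which studies networks of such nodes and dependent-arm variants).

```python
import random

def epsilon_greedy_bernoulli(arm_probs, horizon, epsilon=0.1, rng=None):
    """Play a Bernoulli bandit for `horizon` rounds with epsilon-greedy.

    arm_probs: per-arm success probabilities (assumed independent arms).
    Returns (pulls, total_reward): how often each arm was played, and
    the cumulative reward collected.
    """
    rng = rng or random.Random()
    n = len(arm_probs)
    pulls = [0] * n        # times each arm has been played
    rewards = [0.0] * n    # cumulative reward per arm
    total = 0.0
    for _ in range(horizon):
        if 0 in pulls:
            # Initialization: play every arm once before exploiting.
            arm = pulls.index(0)
        elif rng.random() < epsilon:
            # Explore: pick a uniformly random arm.
            arm = rng.randrange(n)
        else:
            # Exploit: pick the arm with the best empirical mean.
            arm = max(range(n), key=lambda a: rewards[a] / pulls[a])
        # Bernoulli reward draw for the chosen arm.
        r = 1.0 if rng.random() < arm_probs[arm] else 0.0
        pulls[arm] += 1
        rewards[arm] += r
        total += r
    return pulls, total
```

The regret of such a node is the gap between its cumulative reward and that of always playing the best arm; the networked solutions in the thesis aim to shrink this gap by letting nodes exploit dependencies between bandits.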


