Multi-Armed Bandit Networks: Exploring Online Learning with Networks
Not peer reviewed
Classical Multi-Armed Bandit solutions often assume independent arms as a simplification of the problem. This assumption has produced strong results in many different fields of practice, but in some cases it presumably leaves untapped potential. In this paper I explore network-based MAB solutions that use explore-exploit algorithms as nodes, aiming to further minimize regret by taking advantage of inter-bandit dependencies. I explore two network approaches, a hierarchical and a flat network, as well as a special case of the Bernoulli Bandit with dependent arms, referred to as the Symbiotic Bandit. The results show that some networked solutions outperform the single-node versions in terms of regret on both the Bernoulli Bandit and the Symbiotic Bandit.
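The abstract does not specify which explore-exploit algorithm serves as a node, so as a minimal sketch of the baseline setting, the following shows an epsilon-greedy learner on a Bernoulli Bandit with independent arms, accumulating the expected regret that the networked solutions aim to reduce. All function and variable names here are illustrative, not taken from the thesis.

```python
import random

def epsilon_greedy_bernoulli(probs, horizon=10000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit with independent arms.

    probs: true success probability of each arm (unknown to the learner).
    Returns the cumulative expected (pseudo-)regret relative to always
    playing the best arm. Illustrative sketch, not the thesis's algorithm.
    """
    rng = random.Random(seed)
    n_arms = len(probs)
    counts = [0] * n_arms       # number of pulls per arm
    values = [0.0] * n_arms     # empirical mean reward per arm
    best = max(probs)
    regret = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the empirical mean for the pulled arm.
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - probs[arm]
    return regret

# Three independent arms; regret accrues mainly from forced exploration.
print(epsilon_greedy_bernoulli([0.3, 0.5, 0.7]))
```

With a fixed epsilon, forced exploration makes regret grow linearly in the horizon; dependent-arm settings such as the Symbiotic Bandit offer extra structure that a networked learner could exploit to do better.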
Publisher: The University of Bergen
Copyright the author. All rights reserved.