dc.contributor.author: Meland, Sander
dc.date.accessioned: 2024-07-02T23:51:46Z
dc.date.available: 2024-07-02T23:51:46Z
dc.date.issued: 2024-06-03
dc.date.submitted: 2024-06-03T10:02:23Z
dc.identifier: ENERGI399I 0 O ORD 2024 VÅR
dc.identifier.uri: https://hdl.handle.net/11250/3137553
dc.description.abstract: In recent years, the escalation of electricity generation from variable renewable energy sources, coupled with rising electricity consumption, has introduced greater instabilities within power grids. These instabilities increase the risk of power outages and other critical disruptions. Transmission System Operators (TSOs) are responsible for ensuring the stability of the grid frequency and depend on reserve markets to control these variations. There is a growing need for TSOs to broaden participation in the reserve markets to include smaller enterprises and households that are currently unable to participate due to market limitations. These smaller actors need an aggregator to combine the loads of multiple small actors, allowing them to engage in the markets through the aggregator, which can optimize bids in the reserve markets. This thesis first develops a mathematical model designed to maximize revenue within reserve markets, demonstrating the potential viability for aggregators to operate effectively. Using this model, the best bidding strategies for an aggregator in the Norwegian Reserve Markets are examined over two separate time frames, employing actual market prices, volumes, and customer consumption data. However, because real-world market conditions are unpredictable, with reserve-market prices, volumes, and load consumption unknown in advance, the mathematical model is not applicable in real time. Consequently, this thesis presents a Deep Reinforcement Learning (DRL) framework designed for optimal real-time bidding. The DRL framework employs various heuristics to submit bids to the reserve markets. These heuristics are evaluated using a matheuristic model, which both validates the heuristics and provides baseline values for the DRL framework.
To evaluate the DRL framework, four linear function approximation models and one Deep Q-Network (DQN) model are introduced to determine the feasibility of developing a DRL framework that can adeptly navigate the complexities of the Norwegian Reserve Markets. Experimental results indicate that most models employed within this framework can effectively learn from training data and generalize to unseen data. The DQN model generally demonstrates superior performance, delivering consistent results and nearly matching the highest theoretical revenue during the October test period, which highlights its potential for real-time participation in reserve markets.
dc.language.iso: eng
dc.publisher: The University of Bergen
dc.rights: Copyright the Author. All rights reserved
dc.subject: Reserve Markets
dc.subject: Mathematical Programming
dc.subject: Deep Q-Network
dc.subject: Deep Reinforcement Learning
dc.subject: Heuristics
dc.subject: Matheuristics
dc.title: Developing a Deep Reinforcement Learning Framework For Aggregators In The Norwegian Reserve Markets
dc.type: Master thesis
dc.date.updated: 2024-06-03T10:02:23Z
dc.rights.holder: Copyright the Author. All rights reserved
dc.description.degree: Masteroppgave i energi
dc.description.localcode: ENERGI399I
dc.description.localcode: 5MAMN-ENER
dc.subject.nus: 752903
fs.subjectcode: ENERGI399I
fs.unitcode: 12-44-0

