Reinforcement Learning-Based Solution to Power Grid Planning and Operation Under Uncertainties
Time: Friday, 13 November 2020, 11:25am - 11:50am EST
Description: With the ever-increasing stochastic and dynamic behavior observed in today's bulk power systems, securely and economically planning future operational scenarios that meet all reliability standards under uncertainties becomes a challenging computational task, one that typically involves searching a high-dimensional space for feasible, if suboptimal, solutions via massive numerical simulations. This paper presents a novel approach to achieving this goal by adopting a state-of-the-art reinforcement learning algorithm, soft actor-critic (SAC). First, the optimization problem of finding feasible solutions under uncertainties is formulated as a Markov decision process. Second, a general and flexible framework is developed to train SAC agents that adjust generator active power outputs in searching for feasible operating conditions. A software prototype is developed, and the effectiveness of the proposed approach is verified via numerical studies conducted on future planning cases of the SGCC Zhejiang Electric Power Company.
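To make the MDP formulation described above concrete, the following is a minimal, purely illustrative sketch: a toy environment whose state is the vector of generator active power outputs, whose action is a per-generator adjustment, and whose reward penalizes distance from a feasible (load-balanced) operating point. All class names, numbers, and the simple proportional policy (a stand-in for a trained SAC agent) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the feasibility search cast as an MDP.
# Names and parameters are illustrative, not from the paper.
import random

class GridPlanningEnv:
    """Toy MDP: the agent adjusts generator active power outputs (MW)
    until total generation matches load within a tolerance, a stand-in
    for the full set of reliability constraints."""

    def __init__(self, n_gens=3, load=300.0, tol=5.0, seed=0):
        self.n_gens = n_gens
        self.load = load          # total demand to be served (MW)
        self.tol = tol            # feasibility tolerance (MW)
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # State: current active power output of each generator.
        self.p = [self.rng.uniform(50.0, 150.0) for _ in range(self.n_gens)]
        return list(self.p)

    def step(self, action):
        # Action: per-generator adjustment, clipped to +/-10 MW per step.
        for i, a in enumerate(action):
            self.p[i] += max(-10.0, min(10.0, a))
        mismatch = abs(sum(self.p) - self.load)
        # Reward: negative mismatch, driving the agent toward a
        # feasible (balanced) operating condition.
        reward = -mismatch
        done = mismatch <= self.tol
        return list(self.p), reward, done

def rollout(env, max_steps=200):
    """Proportional policy as a placeholder for a trained SAC agent."""
    env.reset()
    for t in range(max_steps):
        mismatch = env.load - sum(env.p)
        action = [mismatch / env.n_gens] * env.n_gens
        _, _, done = env.step(action)
        if done:
            return t + 1  # steps until a feasible point was found
    return None

steps = rollout(GridPlanningEnv())
print(steps)
```

In a full implementation, the state would carry network quantities (bus voltages, line flows) from a power flow solver, and the reward would penalize each violated reliability constraint; SAC's entropy-regularized exploration is what makes it attractive for this high-dimensional continuous action space.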