SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Reinforcement Learning-Based Solution to Power Grid Planning and Operation Under Uncertainties


Workshop: AI4S: Workshop on Artificial Intelligence and Machine Learning for Scientific Applications

Authors: Xiumin Shang (Global Energy Interconnection Research Institute North America (GEIRI); University of California, Merced); Lin Ye and Jing Zhang (Zhejiang Electric Power Company); Jingping Yang, Jianping Xu, and Qin Lyu (Jinhua Electric Power Company); and Ruisheng Diao (Global Energy Interconnection Research Institute North America (GEIRI))


Abstract: With the ever-increasing stochastic and dynamic behavior observed in today’s bulk power systems, securely and economically planning future operational scenarios that meet all reliability standards under uncertainties becomes a challenging computational task, which typically involves searching for feasible, suboptimal solutions in a high-dimensional space via massive numerical simulations. This paper presents a novel approach to achieving this goal by adopting a state-of-the-art reinforcement learning algorithm, soft actor-critic (SAC). First, the optimization problem of finding feasible solutions under uncertainties is formulated as a Markov decision process (MDP). Second, a general and flexible framework is developed to train SAC agents that adjust generator active power outputs to search for feasible operating conditions. A software prototype is developed, and the effectiveness of the proposed approach is verified via numerical studies conducted on future planning cases of the SGCC Zhejiang Electric Power Company.
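The abstract itself contains no code, but as an illustration of the MDP formulation it describes, the following is a minimal sketch (assumed, not taken from the authors' implementation) of a grid-dispatch environment: the state is a vector of bus voltages, line loadings, and generator set points; the action is a bounded adjustment of each generator's active power output; and the reward penalizes operating-limit violations. The class name, dimensions, and the feasibility check are all hypothetical stand-ins for a real AC power-flow and security-assessment solver.

import numpy as np

class GridDispatchEnv:
    """Hypothetical MDP sketch: adjust generator active power toward a feasible
    operating point. The power-flow result below is a toy surrogate, not a real solver."""

    def __init__(self, n_gen=5, n_bus=10, p_min=0.0, p_max=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_gen, self.n_bus = n_gen, n_bus
        self.p_min, self.p_max = p_min, p_max
        self.p_gen = None  # generator active power set points (p.u.)

    def reset(self):
        # Start from a randomly perturbed planning case (uncertainty in load/renewables).
        self.p_gen = self.rng.uniform(self.p_min, self.p_max, self.n_gen)
        return self._observe()

    def step(self, action):
        # Action: bounded adjustment of each generator's active power output.
        self.p_gen = np.clip(self.p_gen + action, self.p_min, self.p_max)
        obs = self._observe()
        violation = self._constraint_violation(obs)
        reward = -violation            # fewer limit violations -> higher reward
        done = violation < 1e-3        # feasible operating condition found
        return obs, reward, done, {}

    def _observe(self):
        # Placeholder for solved power-flow results (bus voltages, line loadings).
        voltages = 1.0 + 0.05 * self.rng.standard_normal(self.n_bus)
        loadings = np.abs(self.rng.standard_normal(self.n_bus)) * self.p_gen.mean()
        return np.concatenate([voltages, loadings, self.p_gen])

    def _constraint_violation(self, obs):
        # Sum of voltage-band and thermal-limit violations (toy surrogate).
        voltages = obs[:self.n_bus]
        loadings = obs[self.n_bus:2 * self.n_bus]
        v_viol = np.maximum(0.0, np.abs(voltages - 1.0) - 0.05).sum()
        flow_viol = np.maximum(0.0, loadings - 1.0).sum()
        return v_viol + flow_viol

An off-the-shelf SAC implementation could be trained against an environment with this interface; in the paper, the agents are trained on actual utility planning cases rather than a toy surrogate.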




