This is the first official, stable release of the opfgym environment framework for learning the optimal power flow (OPF) with reinforcement learning (RL).
Features
- Gymnasium-compatible base class `OpfEnv`, which allows for easy creation of RL environments that represent OPF problems.
- Five benchmark RL-OPF environments representing different OPF problems (economic dispatch, voltage control, etc.).
- Various pre-implemented, selectable environment design options, such as different reward functions and observation spaces.
- Several advanced OPF features, such as multi-stage OPF, stochastic OPF, and discrete actions (see the examples).
- Easy creation of labeled datasets for supervised learning from any `OpfEnv` environment.
- Fully compatible with the Gymnasium API.
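Because every environment follows the Gymnasium API, training code interacts with it through the standard `reset`/`step` loop. The sketch below shows that loop against a minimal stand-in class; `DummyOpfEnv` and its dummy reward are purely illustrative placeholders (not part of opfgym), standing in for one of the five benchmark environments or your own `OpfEnv` subclass.

```python
import random

# Stand-in with the Gymnasium-style interface that opfgym environments
# expose. The "physics" here is a dummy placeholder: in practice you
# would instantiate a benchmark environment or an OpfEnv subclass.
class DummyOpfEnv:
    def __init__(self, n_actions=3, seed=None):
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def reset(self, seed=None):
        if seed is not None:
            self.rng.seed(seed)
        obs = [self.rng.random() for _ in range(self.n_actions)]
        return obs, {}  # Gymnasium convention: (observation, info)

    def step(self, action):
        # Dummy reward: negative quadratic "cost" of the action,
        # a placeholder for an objective computed from a power flow.
        reward = -sum(a * a for a in action)
        obs = [self.rng.random() for _ in range(self.n_actions)]
        terminated = True   # single-step episodes are common for OPF
        truncated = False
        return obs, reward, terminated, truncated, {}

# Standard Gymnasium interaction loop, identical for any compliant env:
env = DummyOpfEnv(seed=42)
obs, info = env.reset(seed=42)
action = [0.0] * env.n_actions  # a real RL agent would act on obs
obs, reward, terminated, truncated, info = env.step(action)
```

A real agent (e.g. from stable-baselines3 or any Gymnasium-compatible RL library) would plug into this loop unchanged, which is the practical payoff of the API compatibility listed above.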
Future Work
- Add more example environments to demonstrate the more advanced features.
- Add more convenience functionality to simplify tasks (e.g. action space definition or adding constraints).
- Add an advanced baseline OPF solver that can deal with discrete actions, multi-stage OPF, etc.
- Improve seeding to fully conform to the Gymnasium API.