DIP-QL: A Novel Reinforcement Learning Method for Constrained Industrial Systems
- Keywords: Aerospace electronics, Constrained action space, Costs, Distance-based update schemes, Industrial control systems, Informatics, Microgrid control, Microgrids, Optimization, Reinforcement learning, Safety
- Indexed in: SCIE, SCOPUS
- Publisher: IEEE Computer Society
- Publication year: 2022
- Publication type: Journal
- URI http://www.dcollection.net/handler/ewha/000000193479
- Language: English
- Published As https://doi.org/10.1109/TII.2022.3159570
Abstract
Existing reinforcement learning (RL) methods have limited applicability to real-world industrial control problems because of their various constraints. To overcome this challenge, we devise a novel RL method that enables the optimization of a policy while strictly satisfying the system constraints. By leveraging a value-based RL approach, our proposed method is not limited by the challenges faced when searching for a constrained policy. Our method has two main features. First, we devise two distance-based Q-value update schemes, incentive and penalty updates, which enable the agent to decide on controls in the feasible region by replacing an infeasible control with the nearest feasible continuous control. The proposed update schemes can adjust the values of both the continuous and the original infeasible controls. Second, we define the penalty cost as a shadow price-weighted penalty to achieve efficient, constrained policy learning. We apply our method to microgrid control, and the case study demonstrates its superiority.
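The two distance-based update schemes described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the paper's exact formulation: the function names, the box-shaped feasible region, the tabular Q representation, and the specific distance-based penalty form are all assumptions made here for clarity.

```python
import numpy as np

def nearest_feasible(action, lo, hi):
    """Project an action onto a box-shaped feasible region
    (a stand-in for the paper's feasibility projection)."""
    return float(np.clip(action, lo, hi))

def dip_ql_update(Q, s, a, reward, s_next, lo, hi,
                  alpha=0.1, gamma=0.99, shadow_price=2.0):
    """Hypothetical incentive/penalty Q-update: if `a` is infeasible,
    the nearest feasible control receives the (incentive) TD update,
    while the infeasible control itself is updated toward a target
    reduced by a shadow price-weighted, distance-based penalty cost."""
    a_proj = nearest_feasible(a, lo, hi)
    dist = abs(a - a_proj)
    target = reward + gamma * max(Q[s_next].values())
    # Incentive update: the nearest feasible control learns from the transition.
    Q[s][a_proj] += alpha * (target - Q[s][a_proj])
    if dist > 0:
        # Penalty update: the original infeasible control is discounted
        # in proportion to its distance from the feasible region.
        Q[s][a] += alpha * (target - shadow_price * dist - Q[s][a])
    return Q

# Toy usage: action 2.0 lies outside the feasible box [0, 1],
# so it is replaced by the nearest feasible control 1.0.
Q = {0: {0.0: 0.0, 1.0: 0.0, 2.0: 0.0},
     1: {0.0: 1.0, 1.0: 0.0, 2.0: 0.0}}
Q = dip_ql_update(Q, s=0, a=2.0, reward=1.0, s_next=1, lo=0.0, hi=1.0)
```

After the call, the feasible control's value rises while the infeasible control's value is held down by the penalty, steering the greedy policy toward the feasible region.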