Power distribution network overcurrent protection method based on deep reinforcement learning

A technology combining reinforcement learning and overcurrent protection, applied to neural learning methods, automatic-disconnection emergency protection devices, emergency protection circuit devices, etc.; it can solve problems such as reducing the correlation between training samples.

Active Publication Date: 2020-05-08
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

Each time the Actor-Critic main network interacts with the environment, it generates a set of samples and stores them in a memory bank; samples are then drawn from the memory bank at random when needed, which reduces the correlation between samples.
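The memory bank described above is a standard experience-replay buffer. A minimal sketch (class and parameter names are illustrative, not from the patent):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size memory bank; uniform random sampling breaks the
    temporal correlation between consecutive environment interactions."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest samples evicted first

    def push(self, state, action, reward, next_state):
        # Each Actor-Critic/environment interaction stores one transition.
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random draw: transitions in a batch are (nearly) uncorrelated.
        return random.sample(self.buffer, batch_size)

# Usage: store transitions while interacting, then train on random batches.
buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.push([t], t % 3, 1.0, [t + 1])
batch = buf.sample(8)
```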



Examples


Embodiment

[0071] The invention addresses the problem of maloperation and failure to operate of line overcurrent protection caused by the overly complex distribution of distributed power sources. It formulates this problem as a Markov decision process (MDP), introduces a deep reinforcement learning mechanism, and uses an intelligent agent that continuously interacts with the grid environment to obtain the optimal dynamic threshold-setting strategy.

[0072] Figure 1 shows a flow chart of the distribution network overcurrent protection method based on deep reinforcement learning. The method includes the following steps:

[0073] (1) Start the protection and judge whether the current quick-break protection operates within one cycle:

[0074] if the current quick-break protection does not operate, no optimization of the setting value is needed;

[0075] if the current quick-break protection operates, optimize the setting value;

[0076] (2) Determine the optimal setting value according to the MA-DDPG algorithm...
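The MA-DDPG actor network in step (2) produces a continuous setting value from grid measurements. A minimal single-layer sketch of that idea (all names, bounds, and the linear-plus-tanh form are assumptions; the patent does not specify the network here):

```python
import math
import random

# Assumed admissible range for the overcurrent pickup setting (per unit).
I_MIN, I_MAX = 1.2, 4.0

def actor(state, weights, bias):
    """Hypothetical actor: maps a measurement vector (e.g. per-feeder
    current magnitudes) to a continuous threshold inside [I_MIN, I_MAX]."""
    z = sum(w * s for w, s in zip(weights, state)) + bias
    # tanh squashes to (-1, 1); rescale into the admissible setting range.
    return I_MIN + (I_MAX - I_MIN) * (math.tanh(z) + 1) / 2

# Usage: in DDPG-style training the weights would be updated from the
# critic's gradient; here they are random for illustration only.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]
setting = actor([1.0, 0.8, 1.1], w, 0.0)
```

The tanh rescaling guarantees that every action the agent proposes is a physically admissible setting value, which is a common design choice for continuous-action actors with bounded outputs.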



Abstract

The invention discloses a power distribution network overcurrent protection method based on deep reinforcement learning. The method comprises the steps of: starting protection and judging whether the current quick-break protection acts within one cycle; if it does not act, determining that setting-value optimization is not needed; if it acts, carrying out setting-value optimization; determining the optimal setting value with the MA-DDPG algorithm after training is completed; comparing the RMS current value with the optimal setting value; if the RMS current is greater than the optimal setting value, operating the protection outlet; if it is less than or equal to the optimal setting value, comparing the current with the starting value; if the current is less than the starting value, returning the protection; otherwise, going back to the step of judging whether the current quick-break protection acts within one cycle, and repeating. The invention applies deep reinforcement learning to the field of relay protection for the first time, combining artificial intelligence with traditional relay protection technology to improve protection performance.
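The per-cycle decision logic described in the abstract can be sketched as a small function (names and return labels are illustrative; the threshold values would come from the trained MA-DDPG agent):

```python
def protection_decision(i_rms, optimal_setting, starting_value):
    """One cycle of the abstract's decision logic:
    trip when the RMS current exceeds the optimized setting; reset
    ('return') when it falls below the starting value; otherwise keep
    looping and re-check the quick-break protection next cycle."""
    if i_rms > optimal_setting:
        return "trip"      # protection outlet acts
    if i_rms < starting_value:
        return "return"    # protection returns (resets)
    return "loop"          # re-enter the judgment step next cycle

# Usage with illustrative per-unit values.
print(protection_decision(5.0, optimal_setting=3.0, starting_value=1.0))
```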

Description

Technical field

[0001] The invention relates to the technical field of electric power system relay protection, and in particular to a distribution network overcurrent protection method based on deep reinforcement learning.

Background technique

[0002] With growing energy and environmental problems, distributed generation (DG), characterized by high energy efficiency, environmental compatibility, and suitability for renewable energy, has increasingly become a research hotspot. The joint operation of DG with the large power grid brings social benefits such as flexible, reliable, and safe power supply, as well as economic benefits such as peak shaving and valley filling, reduced network loss, and improved utilization of existing equipment. On the other hand, the access of DG changes the radial, single-source structure of the distribution network and alters the operating status and fault levels of the power system, ther...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H02H1/00; H02H3/00; G06N3/08; G06N3/04
CPC: H02H1/0092; H02H3/006; G06N3/08; G06N3/045
Inventor: 李嘉文 (Li Jiawen), 余涛 (Yu Tao)
Owner: SOUTH CHINA UNIV OF TECH