
A distribution network overcurrent protection method based on deep reinforcement learning

A technology combining reinforcement learning and overcurrent protection, applied in neural learning methods, automatic-disconnection emergency protection devices, emergency protection circuit devices, etc. It addresses, among other issues, reducing the correlation between training samples.

Active Publication Date: 2021-02-19
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

Each time the Actor-Critic main network interacts with the environment, it generates a set of samples and stores them in a memory bank; samples are later drawn from the memory bank at random as needed, which reduces the correlation between samples.
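The memory-bank mechanism described above is the standard experience-replay buffer used in DDPG-family algorithms. A minimal sketch in Python (class and method names are illustrative, not from the patent):

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size memory bank for (state, action, reward, next_state) samples."""

    def __init__(self, capacity=10000):
        # Oldest samples are discarded automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        # Each interaction with the environment adds one sample.
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling (without replacement) breaks the temporal
        # correlation between consecutive samples.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Because minibatches are drawn uniformly rather than in the order they were generated, consecutive training samples are far less correlated, which stabilizes the Actor-Critic updates.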




Embodiment

[0071] The invention addresses the misoperation and refusal-to-operate of line overcurrent protection caused by the increasingly complex distribution of distributed power sources. It formulates this problem as a Markov decision process (MDP), introduces a deep reinforcement learning mechanism, and lets an intelligent agent obtain the optimal dynamic threshold-setting strategy by continuously interacting with the grid environment.
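In an MDP formulation of this kind, the agent's reward must penalize both failure modes named above. A minimal illustrative reward function (the patent excerpt does not disclose the exact reward definition, so this shape is an assumption):

```python
def reward(fault_present: bool, tripped: bool) -> float:
    """Illustrative reward for the threshold-setting agent.

    +1 for a correct decision; -1 for misoperation (tripping with no
    fault) or refusal (failing to trip on a real fault). The exact
    reward used in the patent is not disclosed in this excerpt.
    """
    return 1.0 if tripped == fault_present else -1.0
```

Maximizing the expected cumulative reward then drives the agent toward threshold settings that trip exactly when a genuine fault is present.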

[0072] Figure 1 shows a flow chart of the distribution network overcurrent protection method based on deep reinforcement learning. The method includes the following steps:

[0073] (1) Start the protection and judge whether the current quick-break protection operates within one cycle:

[0074] If the current quick-break protection does not operate, the setting value does not need to be optimized;

[0075] If the current quick-break protection operates, optimize the setting value;

[0076] (2) Determine the optimal setting value according to the MA-DDPG algorithm...
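The decision flow of steps (1)–(2), together with the trip/return logic described in the Abstract, can be sketched as a single protection cycle. The `agent.optimal_setting(...)` call stands in for the trained MA-DDPG policy; its interface and the return labels are assumptions for illustration:

```python
def protection_cycle(i_rms, i_start, quick_break_operated, agent):
    """One protection cycle following the flow chart described above.

    i_rms  -- current effective (RMS) value
    i_start -- protection starting value
    """
    if not quick_break_operated:
        return "no-optimization"          # step (1): no setting update needed
    i_set = agent.optimal_setting(i_rms)  # step (2): MA-DDPG optimal setting
    if i_rms > i_set:
        return "trip"                     # protection outlet acts
    if i_rms < i_start:
        return "return"                   # protection resets
    return "recheck"                      # loop back to step (1)
```

A trivial stand-in agent with a fixed setting value is enough to exercise all four branches of the cycle.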



Abstract

The invention discloses a distribution network overcurrent protection method based on deep reinforcement learning, comprising the steps of: starting the protection and judging whether the current quick-break protection operates within one cycle; if it does not operate, the setting value need not be optimized; if it operates, optimizing the setting value; determining the optimal setting value according to the trained MA-DDPG algorithm; judging the relationship between the current effective (RMS) value and the optimal setting value: if the current effective value is greater than the optimal setting value, the protection outlet acts; if it is less than or equal to the optimal setting value, judging the relationship between the current and the starting value: if the current is less than the starting value, the protection returns; otherwise, returning to the step of judging whether the current quick-break protection operates within one cycle, and repeating. The present invention applies deep reinforcement learning to the field of relay protection for the first time, combining artificial intelligence technology with traditional relay protection technology to improve the efficiency of protection.

Description

Technical Field

[0001] The invention relates to the technical field of electric power system relay protection, in particular to a distribution network overcurrent protection method based on deep reinforcement learning.

Background Technique

[0002] With increasing energy and environmental problems, distributed generation (DG), characterized by high energy efficiency, environmental compatibility, and adaptation to renewable energy, has increasingly become a research hotspot. The joint operation of DG with the large power grid offers social benefits such as power-supply flexibility, reliability, and safety, as well as economic benefits such as peak shaving and valley filling, reducing network loss, and improving the utilization rate of existing equipment. On the other hand, the access of DG has changed the radial single-source structure of the distribution network, and has also changed the operating status and fault level of the power system, ther...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H02H1/00; H02H3/00; G06N3/08; G06N3/04
CPC: H02H1/0092; H02H3/006; G06N3/08; G06N3/045
Inventors: 李嘉文, 余涛
Owner: SOUTH CHINA UNIV OF TECH