
Network autonomous intelligent management and control method based on deep reinforcement learning

A technology of reinforcement learning and autonomous intelligent control, applied in the field of artificial intelligence. It addresses problems such as overfitting caused by correlation in training data, limited discrete state and action spaces, and unsuitability for dynamic SDN network systems, achieving improved network performance.

Active Publication Date: 2021-08-31
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

However, using the traditional Q-learning algorithm in an SDN network may require huge storage space to maintain the Q table, and querying the Q table also adds time overhead.
The Deep Q-Network (DQN) method can combine the perception ability of deep learning with the decision-making ability of reinforcement learning to optimize the routing process, but it is limited to discrete state and action spaces and is therefore unsuitable for dynamic SDN network systems.
Policy-based reinforcement learning methods, such as the deterministic policy gradient (Deterministic Policy Gradient, DPG), can handle continuous action spaces, but they use linear functions as policy functions and suffer from overfitting caused by correlation in the training data.
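DDPG-style methods mitigate the correlation and instability issues above with slowly updated target copies of the actor and critic networks. As a minimal illustration only (not the patent's implementation), the soft target update rule θ′ ← τθ + (1−τ)θ′ can be sketched in NumPy:

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.005):
    """Polyak-average online weights into the target weights (DDPG-style)."""
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]

# Toy parameter tensors standing in for actor/critic network weights.
online = [np.ones((2, 2)), np.full((2,), 3.0)]
target = [np.zeros((2, 2)), np.zeros((2,))]

# After one update with tau=0.1, the target moves 10% toward the online weights.
target = soft_update(target, online, tau=0.1)
```

A small τ makes the target networks change slowly, which stabilizes the bootstrapped critic targets during training.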



Examples


Embodiment

[0047] This embodiment uses ONOS as the network controller. The SDN network environment is simulated with Mininet (a network emulator that connects virtual terminal nodes, switches, and routers), and Mininet's topology construction API is used to generate the experimental topology shown in Figure 1.

[0048] The topology consists of 24 switch nodes and 37 bidirectional links. Each switch is connected by default to one terminal host, numbered identically to its switch. The four performance parameters of each link, bandwidth, delay, jitter, and packet loss rate, are configured through Mininet's TCLink class. The rated bandwidth of each link is set to 10 Mbps, link delay ranges from 10 to 100 ms, delay jitter from 0 to 20 ms, and packet loss rate from 0 to 2%.
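The link parameters above can be generated programmatically before being handed to Mininet. The following is a hypothetical sketch that draws random values within the ranges the embodiment specifies; it is not the patent's code, and the dict keys mirror (but are not taken from) the TCLink parameter names:

```python
import random

def sample_link_params(rng=random):
    """Draw one link's TC parameters within the embodiment's stated ranges."""
    return {
        "bw": 10,                                  # rated bandwidth, Mbps
        "delay": f"{rng.uniform(10, 100):.1f}ms",  # link delay, 10-100 ms
        "jitter": f"{rng.uniform(0, 20):.1f}ms",   # delay jitter, 0-20 ms
        "loss": rng.uniform(0, 2),                 # packet loss rate, 0-2 %
    }

# One parameter set per bidirectional link in the 24-node / 37-link topology.
params = [sample_link_params() for _ in range(37)]
```

With Mininet installed, each dict could then be unpacked into the `addLink(..., cls=TCLink, ...)` call when building the topology.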

[0049] In this embodiment, the operation process of the DDPG agent is shown in Figure 2 and specifically includes the following steps:

[0050] S1. Initialize the current number of ...
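The per-iteration loop the abstract describes (measure state → output link weights → route → compute reward) can be mocked with a stub environment. Everything below is a hypothetical stand-in for the real ONOS/Mininet pipeline, including the reward weights, which the source does not specify:

```python
import random

N_LINKS = 37  # links in the experimental topology

def measure_state():
    """Stub: pretend to poll per-link metrics from the SDN controller."""
    return [random.random() for _ in range(N_LINKS)]

def agent_act(state):
    """Stub actor: map the state to one positive weight per link."""
    return [0.1 + s for s in state]

def env_step(weights):
    """Stub: pretend to install flow tables, then measure service QoS."""
    delay_ms = 10 + 90 * random.random()   # end-to-end delay in [10, 100] ms
    loss = 2 * random.random() / 100       # packet loss in [0, 0.02]
    return delay_ms, loss

def reward(delay_ms, loss, w_d=0.01, w_l=10.0):
    """Hypothetical reward: penalize both delay and packet loss."""
    return -(w_d * delay_ms + w_l * loss)

history = []
for episode in range(5):
    s = measure_state()
    w = agent_act(s)
    d, l = env_step(w)
    history.append(reward(d, l))
```

In the real system the measured state would come from controller statistics, and `env_step` would issue flow tables and probe the service path.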



Abstract

The invention belongs to the technical field of artificial intelligence and particularly relates to a network autonomous intelligent management and control method based on deep reinforcement learning. The method first constructs a network topology, then builds a routing decision model on the DDPG reinforcement learning algorithm, introducing a CNN, an LSTM layer, and a delayed-update strategy, and finally trains the model iteratively. In each training iteration, the agent derives an output action, a set of link weights, from the measured network state via its neural network, and computes service routes with a shortest-path algorithm using those link weights. Based on the routing result, the agent issues flow tables and collects the service's end-to-end delay and packet loss probability to compute the iteration's reward value. The algorithm converges well and can effectively reduce the end-to-end delay and packet loss rate of the service.
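The routing step in the abstract, a shortest path computed over the agent-produced link weights, can be illustrated with a plain Dijkstra over a weighted adjacency map. The tiny four-switch graph here is made up for illustration and is not the 24-node experimental topology:

```python
import heapq

def dijkstra_path(adj, src, dst):
    """Shortest path by summed link weight; adj maps node -> {neighbor: weight}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from dst to src to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy topology with agent-style link weights (lower weight = more preferred).
adj = {
    "s1": {"s2": 1.0, "s3": 4.0},
    "s2": {"s1": 1.0, "s4": 2.0},
    "s3": {"s1": 4.0, "s4": 1.0},
    "s4": {"s2": 2.0, "s3": 1.0},
}
path, cost = dijkstra_path(adj, "s1", "s4")  # → ['s1', 's2', 's4'], cost 3.0
```

Because the agent controls only the weights, the same standard shortest-path machinery can be reused unchanged while the learned policy steers traffic.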

Description

Technical Field

[0001] The invention belongs to the technical field of artificial intelligence and in particular relates to a network autonomous intelligent management and control method based on deep reinforcement learning.

Background

[0002] In recent years, with the expansion of network scale and the growth in application types, formulating intelligent routing strategies for services has become an important part of guaranteeing business service quality and realizing autonomous intelligent network management and control. The emergence of the software-defined network (Software Defined Network, SDN) has brought new ideas to the deployment of autonomous intelligent routing. Unlike the tightly coupled vertical structure of traditional networks, SDN separates the data plane from the control plane: the data plane is implemented by SDN switches that support the OpenFlow protocol, while the control plane is implemented in software to provide network programmability. ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L12/721, G06N3/04, G06N3/08, G06N20/00
CPC: H04L45/124, G06N3/049, G06N3/084, G06N20/00, G06N3/044
Inventors: 张梓强, 苏俭
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA