
Method for maximizing system benefits in dynamic environment based on deep reinforcement learning

A technology based on deep reinforcement learning in a dynamic environment, applied to neural learning methods, data processing applications, prediction, etc.; it addresses problems such as traditional mobile edge computing being unable to provide end users with the required computing services.

Publication status: Inactive | Publication date: 2019-11-08
NANJING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

However, with the exponential growth of end-user smart devices in recent years, the number of data service requests they generate has also surged, and traditional mobile edge computing services have been unable to provide end users with the required computing services.

Detailed Description of the Embodiments

[0058] The following further explains the relevant content of the present invention with reference to the method flowchart, system model diagram, and algorithm framework diagram in the accompanying drawings. It should be understood that these embodiments are intended only to illustrate the present invention and not to limit its scope; after reading the present invention, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the appended claims of the present application.

[0059] The present invention focuses on reasonable and efficient path planning for a UAV acting as a mobile edge server in an edge computing architecture, using a deep reinforcement learning algorithm so that the UAV can provide highly reliable, low-delay computing services to real-time mobile terminal users.
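To make the setting concrete, below is a minimal sketch of how the scenario in [0059] could be framed as a reinforcement-learning environment: one UAV edge server flying over several mobile ground users, with discrete headings as actions and a distance-based proxy for served computation as the reward. The class name UavMecEnv, the state layout, the energy costs, and the reward shape are illustrative assumptions, not the patent's exact formulation.

```python
# Illustrative sketch only: a toy MDP for UAV-assisted mobile edge computing.
# State: user positions + UAV position + remaining battery (all normalized).
# Action: one of five discrete headings (N, E, S, W, hover).
# Reward: a proxy for "system benefit" that grows as the UAV nears its users.
import numpy as np

class UavMecEnv:
    def __init__(self, n_users=4, area=100.0, battery=300.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_users, self.area, self.battery_init = n_users, area, battery
        self.reset()

    def reset(self):
        self.uav = np.array([self.area / 2, self.area / 2])
        self.users = self.rng.uniform(0, self.area, (self.n_users, 2))
        self.battery = self.battery_init
        return self._state()

    def _state(self):
        return np.concatenate([self.users.ravel() / self.area,
                               self.uav / self.area,
                               [self.battery / self.battery_init]])

    def step(self, action):
        moves = np.array([[0, 5], [5, 0], [0, -5], [-5, 0], [0, 0]], float)
        self.uav = np.clip(self.uav + moves[action], 0, self.area)
        # Hovering costs one energy unit per step; flying costs a bit more.
        self.battery -= 1.0 + (0.5 if np.any(moves[action]) else 0.0)
        dists = np.linalg.norm(self.users - self.uav, axis=1)
        reward = float(np.sum(1.0 / (1.0 + dists)))
        done = self.battery <= 0
        return self._state(), reward, done
```

In this toy version, the episode ends when the battery is exhausted, mirroring the battery-capacity state mentioned in the abstract; a faithful implementation would also model the UAV-user channel state and the actual task offloading rates.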

[0060] As an example, the method needs to consider: ...
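The enumeration in [0060] is truncated in the source. Independent of those specific considerations, the sketch below shows one way a deep Q-network could be trained against the toy UavMecEnv defined above; the network size, hyperparameters, and bare-bones training loop are illustrative assumptions and use PyTorch rather than any framework named in the patent.

```python
# Illustrative DQN training loop for the toy UavMecEnv above (PyTorch assumed).
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

def train(env, episodes=20, gamma=0.99, eps=0.1, lr=1e-3, batch=32):
    n_actions = 5                                  # matches UavMecEnv's headings
    q = QNetwork(env.reset().shape[0], n_actions)
    opt = torch.optim.Adam(q.parameters(), lr=lr)
    buffer = deque(maxlen=10_000)                  # simple replay memory

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy choice over the discrete UAV headings.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    a = int(q(torch.as_tensor(s, dtype=torch.float32)).argmax())
            s2, r, done = env.step(a)
            buffer.append((s, a, r, s2, done))
            s = s2

            if len(buffer) >= batch:
                sb, ab, rb, s2b, db = zip(*random.sample(buffer, batch))
                sb = torch.as_tensor(np.array(sb), dtype=torch.float32)
                s2b = torch.as_tensor(np.array(s2b), dtype=torch.float32)
                ab = torch.as_tensor(ab, dtype=torch.int64)
                rb = torch.as_tensor(rb, dtype=torch.float32)
                db = torch.as_tensor(db, dtype=torch.float32)
                # One-step temporal-difference target (no target network here).
                q_sa = q(sb).gather(1, ab.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    target = rb + gamma * (1 - db) * q(s2b).max(dim=1).values
                loss = nn.functional.mse_loss(q_sa, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return q
```

A more faithful version would add a separate target network and a decaying exploration rate, as is standard for DQN; this loop is only meant to show where the state, action, and long-term benefit from the patent's formulation plug in.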

Abstract

The invention designs an unmanned aerial vehicle (UAV) path planning method based on deep reinforcement learning for providing low-delay, high-reliability computing services to dynamic users under a mobile edge computing architecture. Because a UAV is convenient infrastructure, it can quickly establish a communication channel in a remote or disaster area and can also carry computing resources to serve terminal mobile users; the UAV is therefore employed as a mobile computing server that hovers above the terminal mobile users and provides them with efficient interactive services. The invention takes the real-time movement of the terminal users into account. The method comprises the following steps: establishing a Gauss-Markov mobility model; modeling the user position state, the UAV position state, the UAV battery capacity state, and the channel state between the UAV and the users; and planning the UAV path with a deep reinforcement learning algorithm so as to maximize the long-term benefit of the system.
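The abstract names a Gauss-Markov mobility model for the users' real-time movement. The sketch below implements the standard Gauss-Markov recursion, in which each user's speed and direction at every step are a weighted mix of their previous value, a long-run mean, and Gaussian noise; the memory parameter alpha, mean speed and direction, and noise scales are illustrative assumptions.

```python
# Gauss-Markov mobility sketch: generates one mobile user's (x, y) trajectory.
import numpy as np

def gauss_markov_trajectory(steps=200, alpha=0.85, mean_speed=1.0,
                            mean_dir=np.pi / 4, sigma_s=0.3, sigma_d=0.3,
                            seed=0):
    rng = np.random.default_rng(seed)
    pos = np.zeros((steps, 2))
    speed, direction = mean_speed, mean_dir
    for t in range(1, steps):
        # Memory level alpha controls how strongly the new speed/direction
        # follow the previous step versus the long-run mean plus noise.
        speed = (alpha * speed + (1 - alpha) * mean_speed
                 + np.sqrt(1 - alpha ** 2) * rng.normal(0.0, sigma_s))
        direction = (alpha * direction + (1 - alpha) * mean_dir
                     + np.sqrt(1 - alpha ** 2) * rng.normal(0.0, sigma_d))
        pos[t] = pos[t - 1] + speed * np.array([np.cos(direction),
                                                np.sin(direction)])
    return pos

# Trajectories like this would supply the user-position component of the state
# (alongside UAV position, battery capacity, and channel state) observed by the
# reinforcement-learning agent at each step.
print(gauss_markov_trajectory()[:3])
```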

Description

Technical Field

[0001] The present invention relates to the field of mobile edge computing in the communication industry, the emerging field of unmanned aerial vehicles, and the field of neural-network-based deep reinforcement learning algorithms in the computer industry.

Background Technology

[0002] With the rapid development of communication technology, Mobile Edge Computing (MEC), located at the edge of the network system, emerged as the times require in order to provide high-quality services to real-time mobile terminal users. Through the wireless access network, MEC can supply the high-performance, low-latency, high-bandwidth services that mobile terminal users require, so that end users can enjoy an uninterrupted, high-quality network experience. However, with the exponential growth of end-user smart devices in recent years, the number of data service requests they generate has also surged, and traditional mobile edge computing services have been unable to provide end users with the required computing services.

Application Information

IPC(8): G06Q10/04, G06N3/08
CPC: G06Q10/047, G06N3/08
Inventor: 刘倩, 丁冉, 邢志超, 吴平阳, 赵熙唯, 李骏, 桂林卿
Owner: NANJING UNIV OF SCI & TECH