
Unmanned vehicle reinforcement learning training environment construction method and training system thereof

A technology relating to reinforcement learning and environment-construction methods, applied to neural learning methods, neural architectures, and biological neural network models. It addresses the problem that reinforcement learning policies trained in simulation are difficult to transfer directly to the real world.

Inactive Publication Date: 2020-10-20
ZHEJIANG UNIV


Problems solved by technology

[0005] To overcome the defect in the prior art that a reinforcement learning strategy trained in a simulation environment is difficult to migrate directly to the real environment, the present invention provides a method for constructing an unmanned vehicle reinforcement learning training environment and a training system thereof. An image-domain conversion algorithm converts pictures from the training environment into simulated real-scene pictures, which are input to the reinforcement learning algorithm as the state; the algorithm then outputs the unmanned vehicle's decision at each step.
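The decision loop described in [0005] can be sketched as follows. The names `domain_transfer` and `policy` are hypothetical placeholders, not the patent's actual networks: in the invention both are trained neural networks, while here identity and threshold stand-ins merely illustrate the data flow from simulated camera frame to per-step action.

```python
import numpy as np

def domain_transfer(sim_image: np.ndarray) -> np.ndarray:
    """Placeholder for the image-domain conversion network.

    In the patent this is a trained sim-to-real image translator; the
    identity mapping here only marks where that conversion happens."""
    return sim_image

def policy(state: np.ndarray) -> int:
    """Toy stand-in for the RL policy: maps a state image to an action index."""
    return int(state.mean() > 127)  # arbitrary illustrative decision rule

# One decision step: sim camera frame -> simulated real-scene image -> action.
sim_obs = np.full((256, 256, 3), 200, dtype=np.uint8)  # mock camera frame
realistic_obs = domain_transfer(sim_obs)               # converted state
action = policy(realistic_obs)                         # per-step decision
```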




Embodiment Construction

[0019] Embodiments of the present invention are described in detail below. The embodiments are exemplary and intended to explain the invention, and should not be construed as limiting it. The technical features of the various implementations may be combined as long as they do not conflict with one another.

[0020] The present invention is further described below in conjunction with the accompanying drawings. As shown in figure 1, a method for constructing an unmanned vehicle reinforcement learning training environment includes the following steps:

[0021] Step 1: In the real unmanned-vehicle application scenario, use the vehicle's onboard camera to collect pictures as the real-domain data set. The images should be taken from the unmanned vehicle's first-person perspective; collect 1000 images and compress each to a size of 256 pixel × 256 pixel × 3 channels.
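The resize in Step 1 could be done with any image library; as a self-contained sketch, a nearest-neighbour resize to the stated 256×256×3 shape can be written with NumPy indexing alone (the function name and the 480×640 source resolution are illustrative assumptions, not from the patent):

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 256) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x 3 image to size x size x 3."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return img[rows][:, cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # e.g. one raw camera frame
compressed = resize_nearest(frame)               # 256 x 256 x 3 data-set image
```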

[0022] Step ...



Abstract

The invention discloses an unmanned vehicle reinforcement learning training environment construction method and a training system thereof, and belongs to the field of robot navigation and the field of robot simulation platforms. The method comprises the following steps: constructing real-scene and simulation-scene data sets; augmenting the data sets; training an image-domain conversion algorithm and storing the model; and establishing an API interface between the simulation environment model and the reinforcement learning algorithm. During training in the simulation environment, a camera on the unmanned vehicle model collects an observed simulation environment image, the image-domain conversion network converts it into a simulated real-scene image, the simulated real-scene image serves as the state input to the reinforcement learning network, and the network's decision yields an action instruction that is issued to the unmanned vehicle model at the simulation end. In practical application, the unmanned vehicle camera collects real-scene pictures, and because the simulated real-scene pictures used as input during training are very similar to these real-scene pictures, the trained algorithm can be migrated to the real scene directly or after fine-tuning.
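The API interface between the simulation environment and the reinforcement learning algorithm could take many forms; one plausible shape, shown here purely as an illustrative sketch (class and method names are assumptions modelled on common Gym-style environments, not the patent's actual interface), is a wrapper that passes every observation through the image-domain conversion model before the agent sees it:

```python
import numpy as np

class DomainTransferEnv:
    """Gym-style wrapper sketch: every observation from the simulator is
    passed through the sim-to-real image converter before reaching the agent."""

    def __init__(self, sim_env, transfer_fn):
        self.sim_env = sim_env          # unmanned-vehicle simulation end
        self.transfer_fn = transfer_fn  # trained sim->real image converter

    def reset(self):
        return self.transfer_fn(self.sim_env.reset())

    def step(self, action):
        obs, reward, done = self.sim_env.step(action)
        return self.transfer_fn(obs), reward, done

class ToySim:
    """Minimal stand-in simulator emitting blank 256x256x3 camera frames."""
    def reset(self):
        return np.zeros((256, 256, 3), dtype=np.uint8)
    def step(self, action):
        return np.zeros((256, 256, 3), dtype=np.uint8), 0.0, False

# Identity converter stands in for the trained image-domain network.
env = DomainTransferEnv(ToySim(), lambda img: img)
obs = env.reset()
obs2, reward, done = env.step(0)
```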

Description

Technical field

[0001] The invention relates to the field of robot navigation and robot simulation platforms, and in particular to a method for constructing an unmanned vehicle reinforcement learning training environment and a training system thereof.

Background technique

[0002] Robot navigation means moving a robot from an initial position to a target position without colliding with obstacles along the way. Traditional robot navigation is map-based. If the environment model (map) is known in advance, navigation is a global path planning problem; this approach places low demands on the robot's computing power and can find the optimal solution. If the environment model is unknown or only partially known in advance, the robot must perceive its surroundings during movement through sensors such as a lidar or camera mounted on it, modeling and correcting its view of the environment in real time; this is called the local path planning problem. Local pa...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G01C21/20, G06N3/04, G06N3/08
CPC: G01C21/20, G06N3/08, G06N3/045, Y02T10/40
Inventors: 蒋焕煜, 陈词, 马保建, 娄明照, 陆金科
Owner: ZHEJIANG UNIV