
Deep reinforcement learning-based low-speed vehicle following decision-making method

A deep reinforcement learning-based decision-making technology, applied to vehicle position/route/altitude control, motor vehicles, two-dimensional position/course control, etc. It addresses the gap between simulation-oriented car-following models and driving in the real environment, with the effects of improved fidelity, improved driving comfort and traffic safety, and strong versatility and flexibility.

Active Publication Date: 2019-01-15
SOUTHEAST UNIV
Cites: 2 · Cited by: 21
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

In cell-based (cellular automaton) car-following models, vehicle movement is discrete in both space and time. Such methods are mainly used for traffic simulation and differ considerably from driving in the real environment.
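For illustration only, the kind of cell-based model referred to above can be sketched as a generic Nagel-Schreckenberg-style update rule (a textbook example, not part of the present invention; the function name and parameter values are assumptions). The road is a row of cells, each car occupies one cell, and positions and speeds change only by whole cells per whole time step, which is why the behavior is coarse compared with continuous real-world driving:

import numpy as np

# Generic Nagel-Schreckenberg-style cellular automaton step (illustrative sketch only).
# road: integer array; -1 marks an empty cell, otherwise the value is the speed
# (in cells per step) of the car occupying that cell. The road is periodic.
def nasch_step(road, v_max=5, p_slow=0.3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    length = len(road)
    new_road = np.full(length, -1, dtype=int)
    for i in np.where(road >= 0)[0]:
        v = min(road[i] + 1, v_max)                 # 1) accelerate by one cell/step
        gap = 1
        while gap <= v and road[(i + gap) % length] < 0:
            gap += 1                                # distance to the next occupied cell
        v = min(v, gap - 1)                         # 2) brake to avoid the car ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                                  # 3) random slowdown
        new_road[(i + v) % length] = v              # 4) jump v whole cells forward
    return new_road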



Examples


Embodiment Construction

[0043] The present invention is described in further detail below in conjunction with the accompanying drawings and a specific embodiment:

[0044] The present invention provides a vehicle low-speed car-following decision-making method based on deep reinforcement learning. The method not only improves driving comfort and ensures traffic safety, but also improves the smoothness of traffic flow in congested lanes.

[0045] In this embodiment, the framework diagram shown in Figure 1 illustrates the specific process:

[0046] Step 101: receive in real time, through the Internet of Vehicles, the position, speed, and acceleration information of the front and rear vehicles, and express the current state and behavior of the unmanned vehicle as the environmental state, specifically including (an illustrative sketch of this state construction follows the list):

[0047] (1) The position, speed, and acceleration information of the three vehicles in front receiv...
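A minimal sketch of how such an environmental state could be assembled from the received messages is given below. The field names, the use of three leading vehicles plus one rear vehicle, and the choice of relative gaps and speed differences are illustrative assumptions, not the patent's exact representation:

from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class VehicleInfo:
    position: float       # longitudinal position along the lane (m)
    speed: float          # speed (m/s)
    acceleration: float   # acceleration (m/s^2)

def build_state(ego: VehicleInfo, front: List[VehicleInfo], rear: VehicleInfo,
                last_action: float) -> np.ndarray:
    """Flatten ego, leading-vehicle, and rear-vehicle information into one state vector.
    Relative gaps and speed differences are used so the state does not depend on
    absolute road coordinates."""
    features = [ego.speed, ego.acceleration, last_action]
    for veh in front:                                # e.g. the three vehicles ahead
        features += [veh.position - ego.position,    # gap to this leading vehicle
                     veh.speed - ego.speed,          # relative speed
                     veh.acceleration]
    features += [ego.position - rear.position,       # gap to the following vehicle
                 ego.speed - rear.speed,
                 rear.acceleration]
    return np.asarray(features, dtype=np.float32)

In practice each feature would typically be normalized before being fed to the networks.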



Abstract

The invention discloses a deep reinforcement learning-based low-speed vehicle following decision-making method, which is implemented in the following manner: first, position, speed, and acceleration information of the front and rear vehicles is received in real time through the Internet of Vehicles as the environmental state, and the current state and behavior of the unmanned vehicle are expressed; then, an Actor-Critic framework-based deep reinforcement learning structure is constructed; finally, the Actor selects an appropriate action according to the current environmental state and is continuously trained and improved through the evaluation made by the Critic, thereby obtaining an optimal control strategy that keeps the unmanned vehicle at a safe distance from the front and rear vehicles and enables automatic low-speed car-following under urban congestion conditions. The method improves driving comfort, ensures traffic safety, and further improves the smoothness of traffic flow in congested lanes.
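As a rough illustration of the Actor-Critic structure described in the abstract, the sketch below shows an actor that maps the environment state to a continuous acceleration command and a critic that scores state-action pairs; the actor is then improved in the direction the critic prefers. This is a minimal deterministic-policy sketch in PyTorch; the network sizes, learning rates, single-acceleration action, and the omission of target networks and experience replay are simplifications and assumptions, not the patented implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 15    # assumed size, matching the illustrative state vector sketched earlier
ACTION_DIM = 1    # a single continuous acceleration command

class Actor(nn.Module):
    """Maps the environment state to an acceleration command in [-1, 1] (scaled later)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Scores a (state, action) pair; its value estimate guides the actor's updates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(state, action, reward, next_state, gamma=0.99):
    """One illustrative training step on a batch of transitions."""
    # Critic: regress Q(s, a) toward the one-step bootstrapped target.
    with torch.no_grad():
        target = reward + gamma * critic(next_state, actor(next_state))
    critic_loss = F.mse_loss(critic(state, action), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of the value of its own actions.
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

In a full training loop the reward would encode the safe-gap and comfort objectives described above, and the state would come from the environmental-state construction of Step 101.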

Description

Technical Field

[0001] The invention relates to the field of automatic driving of automobiles, and in particular to a vehicle low-speed car-following decision-making method based on deep reinforcement learning.

Background Technology

[0002] With the development of cities and traffic, congestion often occurs on main road sections during the morning and evening peaks in many cities. In congested traffic, driving is mainly a stop-and-go activity. Driving on congested roads for a long time makes drivers irritable and fatigued, which leads to negligent or aggressive driving behavior and to collisions, rear-end accidents, and other traffic accidents; these further aggravate urban road congestion and bring great inconvenience to drivers.

[0003] The existing car-following technology based on advanced driving assistance technology mainly builds a car-following decision model based on the distance between the front and rear...

Claims


Application Information

IPC(8): G05D1/02
CPC: G05D1/0214; G05D1/0221; G05D1/0223; G05D1/0276
Inventors: 孙立博 (Sun Libo), 秦文虎 (Qin Wenhu), 翟金凤 (Zhai Jinfeng)
Owner: SOUTHEAST UNIV