
Emergency vehicle hybrid lane changing decision-making method based on reinforcement learning and avoidance strategies

A technology relating to emergency vehicles and reinforcement learning, applied to neural learning methods, biological neural network models, control devices, etc. It addresses the problems that existing approaches do not consider the influence on normal traffic flow, are difficult to generalize, and do not make full use of real-time traffic data.

Active Publication Date: 2021-02-26
TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

However, these macro-level approaches do not take full advantage of real-time traffic data, seldom provide micro-level control of self-driving emergency vehicles, hardly consider the impact on normal traffic flow, and ignore the delay of response time on straight roads.
[0003] In addition, a small number of studies have proposed deterministic algorithms for the micro-level control of autonomous emergency vehicles on straight roads, such as a series of car-following and lane-changing strategies, including some targeted avoidance strategies. However, compared with strategies obtained by deep reinforcement learning, these deterministic strategies are more difficult to generalize to diverse traffic scenarios and are not necessarily optimal for achieving higher speeds.


Examples


Embodiment 1

[0172] The effect of the present invention on road-section lane-changing decision-making for intelligent connected emergency vehicles is described in detail below through specific examples:

[0173] 1. First, the reinforcement learning part of the algorithm achieves good convergence: as shown in Figure 4, the loss function value tends towards zero after nearly 200,000 training steps;

[0174] 2. During training, the convergence of the DQN strategy and of the "DQN + avoidance" hybrid strategy is monitored. As shown in Figure 5, both converge to a transit time lower than that of the baseline (the default car-following model, shown by the dotted line in the figure);

[0175] 3. The hybrid strategy would be expected to be more stable, but as Figure 5 shows, this is not always the case; the scenario shown in Figure 6 often occurs, in which the vehicle in front continues to perform evasive actions due ...
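The monitoring described in items 1 and 2 (loss convergence and per-strategy transit times) could, purely as an illustration, be logged with a small helper like the following Python sketch; the strategy labels, window sizes and print format are assumptions and are not taken from the patent.

```python
from collections import deque

# Hypothetical training monitors: moving averages of the DQN loss and of the
# episode transit times for the DQN, "DQN + avoidance" hybrid and baseline
# (default car-following) strategies.
loss_window = deque(maxlen=1000)
transit = {
    "dqn": deque(maxlen=100),
    "hybrid": deque(maxlen=100),
    "baseline": deque(maxlen=100),
}

def log_step(step, loss):
    """Record one training-step loss and report a moving average periodically."""
    loss_window.append(loss)
    if step % 10_000 == 0 and loss_window:
        print(f"step {step}: mean loss {sum(loss_window) / len(loss_window):.4f}")

def log_episode(strategy, transit_time):
    """Record one episode's transit time for the given strategy."""
    transit[strategy].append(transit_time)
    avg = sum(transit[strategy]) / len(transit[strategy])
    print(f"{strategy}: mean transit time {avg:.1f} s "
          f"over last {len(transit[strategy])} episodes")
```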


Abstract

The invention relates to an emergency vehicle hybrid lane-changing decision-making method based on reinforcement learning and avoidance strategies. The method comprises the following steps: determining the optimized road section and the execution strategies of the ICCVs and of the ICEV to be planned; initializing a DQN network for the ICEV to be planned; obtaining the state space of the DQN network based on the state information of the ICEV to be planned, its six neighboring vehicles, and the avoidance-strategy execution status of the vehicle preceding the ICEV; obtaining an output value based on the state space of the DQN, and obtaining a preliminary decision and an action space based on the output value; establishing an action selection barrier, and verifying and selecting the obtained preliminary decision until the action finally selected from the output value or the action space complies with traffic rules and the physical road structure; defining a reward function for calculating the total reward corresponding to the action; and training the DQN network to obtain a trained DQN network. The method can be widely applied in the field of road lane-changing decision control.
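To make the decision pipeline in the abstract more concrete, the following is a minimal, hypothetical Python sketch of its main ingredients: a state vector built from the ego ICEV, its six neighboring vehicles and the preceding vehicle's avoidance status; a small Q-network; the "action selection barrier" implemented as a feasibility mask; and a placeholder reward. All names, dimensions, thresholds and the network architecture are illustrative assumptions, not the patent's exact design.

```python
import random
import numpy as np
import torch
import torch.nn as nn

# Assumed state layout: ego (speed, lane, position) + 6 neighbors
# (relative position, relative speed each) + 1 flag indicating whether the
# preceding vehicle is currently executing its avoidance strategy.
STATE_DIM = 3 + 6 * 2 + 1
ACTIONS = ["keep_lane", "change_left", "change_right"]  # assumed action space


def build_state(ego, neighbors, preceding_avoiding):
    """Assemble the DQN state vector (illustrative layout)."""
    feats = [ego["speed"], ego["lane"], ego["pos"]]
    for nb in neighbors:                       # expects exactly 6 neighbors
        feats += [nb["rel_pos"], nb["rel_speed"]]
    feats.append(1.0 if preceding_avoiding else 0.0)
    return torch.tensor(feats, dtype=torch.float32)


class DQN(nn.Module):
    """Small fully connected Q-network (architecture is an assumption)."""
    def __init__(self, state_dim=STATE_DIM, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def action_barrier(ego, road):
    """'Action selection barrier': mask out actions that would violate traffic
    rules or the physical road structure (simplified placeholder checks)."""
    mask = np.ones(len(ACTIONS), dtype=bool)
    if ego["lane"] == road["leftmost_lane"]:
        mask[ACTIONS.index("change_left")] = False
    if ego["lane"] == road["rightmost_lane"]:
        mask[ACTIONS.index("change_right")] = False
    return mask


def select_action(q_net, state, ego, road, epsilon=0.1):
    """Epsilon-greedy choice restricted to actions that pass the barrier."""
    mask = action_barrier(ego, road)
    allowed = np.flatnonzero(mask)
    if random.random() < epsilon:
        return int(random.choice(allowed))
    with torch.no_grad():
        q = q_net(state).numpy()
    q[~mask] = -np.inf                          # barrier: forbid invalid actions
    return int(np.argmax(q))


def reward(ego, collided, lane_changed):
    """Placeholder total reward: favor speed, penalize collisions and
    unnecessary lane changes (the patent's exact reward terms are not known)."""
    r = ego["speed"] / 30.0
    if collided:
        r -= 10.0
    if lane_changed:
        r -= 0.1
    return r
```

In a full implementation these pieces would be wired into a standard DQN training loop with experience replay and a target network; the sketch only shows how the state, barrier-masked action selection and reward described in the abstract might fit together.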

Description

Technical field
[0001] The invention belongs to the field of road lane-changing decision-making control, and in particular relates to a hybrid lane-changing decision-making method for emergency vehicles based on reinforcement learning and an avoidance strategy.
Background technique
[0002] At present, most of the relevant research on reducing the response time of emergency vehicles focuses on route optimization and traffic light control, trying to solve the problem from the perspective of macro-scheduling, for example with the Dijkstra algorithm, the ant colony algorithm (ACA), A* and the shuffled frog leaping algorithm (SFLA). However, these macro-level approaches do not take full advantage of real-time traffic data, seldom provide micro-level control of autonomous emergency vehicles, hardly consider the impact on normal traffic flow, and ignore the response-time delay on straight road sections.
[0003] In addition, a small number of studies have mentioned deterministic algorithms for micro-controlling...


Application Information

IPC(8): B60W30/18, B60W50/00, G06N3/04, G06N3/08
CPC: B60W30/18163, B60W50/00, G06N3/08, B60W2050/0019, G06N3/045
Inventors: 胡坚明, 牛浩懿, 裴欣, 张毅
Owner: TSINGHUA UNIV