
Method and system for vision-centric deep-learning-based road situation analysis

A deep-learning and computer-vision technology, applied in the field of image processing, that addresses sensor shortcomings such as limited lateral spatial information, degraded resolution at long range, and low resolution.

Active Publication Date: 2017-11-21
TCL CORPORATION
Cited by: 62

AI Technical Summary

Problems solved by technology

A single sensor cannot provide complete, robust, and accurate input.
Image sensors have weak depth perception, although their resolution is higher than that of lidar and radar.
Radar provides limited lateral spatial information: it cannot cover the full scene, its field of view is narrow, and its resolution degrades at long range.
Lidar has a wide field of view and avoids some of the problems above, but introduces others, such as low resolution, clustering errors, and recognition delays.




Embodiment Construction

[0037] To facilitate understanding of the present invention, it is described more fully below with reference to the associated drawings. Embodiments of the present disclosure are described with reference to the accompanying drawings; wherever possible, the same reference numbers are used for the same parts across the various drawings. The described embodiments are only some, not all, of the embodiments of the present invention. Based on the disclosed embodiments, those skilled in the art can derive other embodiments consistent with this disclosure, and all such embodiments belong to the protection scope of the present invention.

[0038] According to different embodiments of the disclosed subject matter, the present invention provides a vision-centric, deep-learning-based road condition analysis method and system.

[0039] The vision-centric, deep-learning-based road condition analysis system is also known ...



Abstract

A method and a system for vision-centric deep-learning-based road situation analysis are provided. The method can include: receiving real-time traffic environment visual input from a camera; determining, using a ROLO engine, at least one initial region of interest from the real-time traffic environment visual input by using a CNN training method; verifying the at least one initial region of interest to determine if a detected object in the at least one initial region of interest is a candidate object to be tracked; using LSTMs to track the detected object based on the real-time traffic environment visual input, and predicting a future status of the detected object by using the CNN training method; and determining if a warning signal is to be presented to a driver of a vehicle based on the predicted future status of the detected object.
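The pipeline described in the abstract can be illustrated with a minimal, hypothetical sketch. The CNN detection stage and the LSTM prediction stage are replaced here by simple stand-ins (a confidence filter and linear extrapolation of distance); every class, function name, and threshold below is an illustrative assumption, not the patent's actual implementation.

```python
# Hypothetical sketch of the abstract's pipeline: detect regions of
# interest, verify them as candidate objects, track the object across
# frames, predict its future status, and decide whether to warn the
# driver. CNN/LSTM stages are simple stand-ins for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x: float           # lateral offset of the object (m), 0 = straight ahead
    dist: float        # longitudinal distance to the object (m)
    confidence: float  # detector score in [0, 1]

def verify(rois: List[Detection], min_conf: float = 0.5) -> List[Detection]:
    """Keep only detections confident enough to be tracked
    (stand-in for the patent's verification step)."""
    return [d for d in rois if d.confidence >= min_conf]

def predict_future(track: List[Detection], horizon: int = 5) -> Detection:
    """Linearly extrapolate the object's distance over `horizon` frames,
    standing in for the LSTM-based future-status prediction."""
    if len(track) < 2:
        return track[-1]
    last, prev = track[-1], track[-2]
    velocity = last.dist - prev.dist  # meters per frame (negative = closing)
    return Detection(last.x, last.dist + horizon * velocity, last.confidence)

def should_warn(predicted: Detection, safe_dist: float = 10.0) -> bool:
    """Warn if the predicted future position falls inside the safety envelope."""
    return predicted.dist < safe_dist

def analyze(per_frame_rois: List[List[Detection]]) -> bool:
    """Run the whole pipeline over a sequence of frames' detections."""
    track: List[Detection] = []
    for rois in per_frame_rois:
        candidates = verify(rois)
        if candidates:
            # Simplistic association: follow the nearest verified object.
            track.append(min(candidates, key=lambda d: d.dist))
    if not track:
        return False
    return should_warn(predict_future(track))
```

For example, an object closing by 3 m per frame from 30 m away is predicted to be 9 m away after five more frames, which would trigger a warning, while a stationary object at 40 m would not.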

Description

technical field [0001] The present invention relates to the technical field of image processing, and in particular to a vision-centric, deep-learning-based road condition analysis method and system. Background technique [0002] To improve the quality of mobility on the road, driver assistance systems (DAS) offer a way to highlight and enhance active and integrated safety, among other things. Today, building advanced driver assistance systems (ADAS) that support rather than replace human drivers has become a trend in intelligent vehicle research. These systems support drivers by enhancing their perception, providing timely warnings to avoid mistakes, and reducing the driver's control workload. [0003] ADAS systems usually use more than one type of sensor, such as image sensors, lidar, and radar. A single sensor cannot provide complete, robust, and accurate input. Image sensors have weak depth perception, although their resolution is higher than tha...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08; G06K9/00; B60W50/00
CPC: B60W50/00; G06N3/08; B60W2050/0075; B60W2050/0043; B60W2554/80; B60W2554/00; B60W2555/60; G06V40/10; G06V20/582; G06V20/584; G06V20/588; G06V20/58; G06V2201/08; G06T2207/20084; G06T7/277; G06T2207/30261; G06T2207/20081; B60W2050/143; B60W2420/40; B60W2420/52; G06V20/56; G06V10/454; G06N3/044; G06N3/045; B60W50/14; B60W2050/146
Inventor: 宁广涵, 汪灏泓, 薄文强, 任小波
Owner: TCL CORPORATION