
Method for accurately positioning moving object based on deep learning

A deep-learning technology for the precise positioning of moving objects. It addresses problems such as a large amount of calculation, degraded real-time performance, and inapplicability to real-time processing, and achieves real-time positioning with high real-time performance.

Active Publication Date: 2019-06-25
GUIZHOU UNIV

AI Technical Summary

Problems solved by technology

This method is more fine-grained, but its computational cost is large, which degrades real-time performance and cannot meet the needs of systems with high real-time requirements.
At present, some researchers have proposed optical flow methods, but most of them are computationally complex and have poor noise robustness; without special hardware, they cannot be applied to real-time processing of full-frame video streams.



Examples


Embodiment 1

[0049] Example 1. A method for the precise positioning of moving objects based on deep learning, as shown in Figures 1-5, comprises the following steps:

[0050] a. Obtain the video sequence to be detected and the corresponding depth map;

[0051] b. Use darknet-yolo-v3 to detect the moving target in the video sequence and mark it with an identification box;
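The detection step above can be sketched in miniature. The function below is an illustrative stand-in for YOLOv3 post-processing only: it takes raw detections in normalized (center-x, center-y, width, height, confidence) form, filters by a confidence threshold, and converts the survivors to pixel-space identification boxes. Loading the actual darknet-yolo-v3 network (e.g. via OpenCV's DNN module) and running inference on frames is omitted; the threshold value and function name are assumptions, not from the patent.

```python
def detections_to_boxes(detections, frame_w, frame_h, conf_threshold=0.5):
    """Filter raw YOLO-style detections and return pixel-space boxes.

    Each detection is (cx, cy, w, h, conf) with coordinates normalized
    to [0, 1]; the result is a list of (x, y, w, h) identification boxes.
    """
    boxes = []
    for cx, cy, w, h, conf in detections:
        if conf < conf_threshold:
            continue  # reject low-confidence detections
        bw, bh = int(w * frame_w), int(h * frame_h)
        x = int(cx * frame_w - bw / 2)  # top-left corner from center
        y = int(cy * frame_h - bh / 2)
        boxes.append((x, y, bw, bh))
    return boxes
```

For example, a detection centered in a 640x480 frame with normalized size (0.2, 0.4) and confidence 0.9 yields the box (256, 144, 128, 192), while a detection with confidence 0.3 is dropped.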

[0052] c. Combining the depth-of-field information in the depth map, use the relevant OpenCV functions to find contours in the depth map and draw the bounding rectangle surrounding each contour, obtaining the rectangle of the region of interest;
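As a minimal sketch of step c, the function below computes the bounding rectangle of all depth-map pixels falling in a given depth range. It is a plain-Python stand-in for thresholding the depth map and calling `cv2.findContours` / `cv2.boundingRect` on the resulting contour; the depth range parameters are illustrative assumptions.

```python
def depth_roi_rect(depth_map, near, far):
    """Bounding rectangle (x, y, w, h) of pixels with depth in [near, far].

    depth_map is a 2-D list of depth values; returns None when no pixel
    falls inside the range (no region of interest).
    """
    xs, ys = [], []
    for y, row in enumerate(depth_map):
        for x, d in enumerate(row):
            if near <= d <= far:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    x0, y0 = min(xs), min(ys)
    return (x0, y0, max(xs) - x0 + 1, max(ys) - y0 + 1)
```

In an OpenCV pipeline the same result would come from `cv2.inRange` on the depth image followed by `cv2.boundingRect` on the largest contour.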

[0053] d. Calculate the area of the identification box, the center point of the identification box, the area of the rectangle, and the center point of the rectangle;

[0054] e. Match the area of the identification box and its center point against the area of the rectangle and its center point; when the two match within a preset threshold range, the position of the identification box is taken as the position of the moving target.
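Steps d and e can be sketched as follows. The helper computes the area and center of an (x, y, w, h) box; the matcher then accepts the identification box as the target position when the relative area difference and the center-point distance both fall within thresholds. The specific threshold values and the exact matching metric are illustrative assumptions; the patent only states that matching occurs "within a preset threshold range".

```python
def box_area_center(x, y, w, h):
    """Area and center point of an axis-aligned box given as (x, y, w, h)."""
    return w * h, (x + w / 2.0, y + h / 2.0)


def boxes_match(id_box, roi_rect, area_tol=0.2, center_tol=10.0):
    """True when the identification box and depth-map ROI rectangle agree.

    area_tol is the allowed relative area difference; center_tol is the
    allowed Euclidean distance between center points, in pixels.
    """
    a1, (cx1, cy1) = box_area_center(*id_box)
    a2, (cx2, cy2) = box_area_center(*roi_rect)
    area_ok = abs(a1 - a2) <= area_tol * max(a1, a2)
    dist = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2) ** 0.5
    return area_ok and dist <= center_tol
```

For instance, an identification box (10, 10, 20, 20) matches a nearby equal-sized ROI rectangle (12, 11, 20, 20), but not one at (100, 100, 20, 20) whose center is far away.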



Abstract

The invention discloses a method for accurately positioning a moving object based on deep learning. The method comprises the following steps: a, acquiring a to-be-detected video sequence and a corresponding depth map; b, adopting darknet-yolo-v3 to detect a moving target in the video sequence and marking it with an identification box; c, combining the depth-of-field information in the depth map, searching for contours in the depth map by adopting the relevant OpenCV functions, and drawing a rectangular boundary surrounding each contour to obtain a rectangle of the region of interest; d, calculating the area of the identification box, the center point of the identification box, the area of the rectangle, and the center point of the rectangle; and e, matching the area and center point of the identification box against the area and center point of the rectangle, and when the two match within a preset threshold range, determining the position of the identification box to be the position of the moving target. According to the method, the cavity phenomenon can be avoided, the real-time performance is high, and the recognition accuracy is high.

Description

Technical Field

[0001] The invention relates to a moving-object positioning method, in particular to a method for the precise positioning of a moving object based on deep learning.

Background Technique

[0002] Moving-object detection refers to the process of removing redundant information in time and space from video through computer vision and effectively extracting objects that change in spatial position. Research in this direction has long been an important topic in the field of computer vision. In the detection of moving targets in video streams, the precise positioning of moving objects has become one of the most challenging research directions in computer vision, involving many cutting-edge disciplines, such as deep learning, image processing, and pattern recognition; combining these disciplines has become a research hotspot.

[0003] In many scenarios, such as the security monitoring systems of important large places such as high-speed...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/246; G06N3/08
CPC: Y02T10/40
Inventor: 刘宇红, 何倩倩, 张荣芬, 林付春, 马治楠, 王曼曼
Owner: GUIZHOU UNIV