
Video semantic segmentation method based on optical flow feature fusion

A feature-fusion and semantic-segmentation technology in the field of video processing. It addresses the problems of numerous complex instances, large data volume, and high segmentation latency, and achieves faster inference, improved accuracy, and richer semantic information.

Active Publication Date: 2020-09-11
UNIV OF ELECTRONIC SCI & TECH OF CHINA

Problems solved by technology

[0004] First, in autonomous-driving applications, video data contains many complex object instances, which lowers the accuracy of video semantic segmentation algorithms.
[0005] Second, compared with image semantic segmentation, video semantic segmentation must process a much larger volume of data, which increases the computational load and causes high segmentation latency.


Embodiment Construction

[0057] As shown in Figure 1, the video semantic segmentation method based on optical flow feature fusion provided by the present invention comprises the following steps:

[0058] Step 1: determine whether the current video frame of the video sequence is a key frame or a non-key frame; if it is a key frame, perform Step 2; if it is a non-key frame, perform Step 3.

[0059] Step 2: extract a high-level semantic feature map of the current video frame that fuses position-dependent and channel-dependent information.

[0060] Step 3: obtain the high-level semantic feature map of the current video frame by computing the optical flow field.

[0061] Step 4: upsample the high-level semantic feature map obtained in Step 2 or Step 3 to obtain the semantic segmentation map.
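The four steps above can be sketched as a key-frame scheduling loop: expensive feature extraction runs only on key frames, and non-key frames reuse the key-frame features warped along the optical flow field. The sketch below is a hypothetical illustration, not the patent's implementation: the fixed key-frame interval, the placeholder `extract_features` and `estimate_flow` functions, and nearest-neighbour warping/upsampling are all assumptions.

```python
import numpy as np

KEY_FRAME_INTERVAL = 5  # assumed fixed interval; the patent does not specify the schedule


def is_key_frame(frame_index: int) -> bool:
    """Step 1: a simple fixed-interval key-frame test (illustrative only)."""
    return frame_index % KEY_FRAME_INTERVAL == 0


def extract_features(frame: np.ndarray) -> np.ndarray:
    """Step 2 placeholder: a CNN backbone with position/channel attention
    would go here. Returns an (H/8, W/8, C) high-level feature map."""
    h, w = frame.shape[:2]
    return np.zeros((h // 8, w // 8, 64), dtype=np.float32)


def estimate_flow(key_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Placeholder optical-flow estimator (e.g. a FlowNet-style network),
    producing a per-location 2D displacement at feature resolution."""
    h, w = frame.shape[:2]
    return np.zeros((h // 8, w // 8, 2), dtype=np.float32)


def warp_features(features: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Step 3: propagate key-frame features along the flow field by sampling
    each output location from its flow-displaced source (nearest neighbour)."""
    h, w, _ = features.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return features[src_y, src_x]


def upsample(features: np.ndarray, scale: int = 8) -> np.ndarray:
    """Step 4: nearest-neighbour upsampling back to full resolution; an
    argmax over channels would then give the per-pixel class map."""
    return features.repeat(scale, axis=0).repeat(scale, axis=1)


def segment_video(frames):
    key_frame, key_features = None, None
    for i, frame in enumerate(frames):
        if is_key_frame(i) or key_features is None:
            key_frame, key_features = frame, extract_features(frame)  # Step 2
            features = key_features
        else:
            flow = estimate_flow(key_frame, frame)                    # Step 3
            features = warp_features(key_features, flow)
        yield upsample(features)                                      # Step 4
```

The design point is that `extract_features` (the slow path) runs once per key frame, while `estimate_flow` plus `warp_features` (the fast path) handles the frames in between, which is what reduces the per-frame latency the background section complains about.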

[0062] The characteristics and performance of the present invention will be described in further detail below in conjunction with t...


Abstract

The invention discloses a video semantic segmentation method based on optical flow feature fusion. The method comprises the following steps: Step 1, judging whether the current video frame of a video sequence is a key frame or a non-key frame; if it is a key frame, executing Step 2, and if it is a non-key frame, executing Step 3; Step 2, extracting a high-level semantic feature map of the current frame that fuses position-dependent and channel-dependent information; Step 3, calculating an optical flow field to obtain the high-level semantic feature map of the current frame; and Step 4, upsampling the high-level semantic feature maps obtained in Steps 2 and 3 to obtain a semantic segmentation map. By combining the ideas of optical flow fields and attention mechanisms, the method improves both the speed and the accuracy of video semantic segmentation.
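The "fusion of position-dependent and channel-dependent information" in Step 2 suggests a dual-attention design in the spirit of modules such as DANet: one branch computes an affinity between spatial positions, the other between feature channels, and the branches are fused residually. The patent text does not disclose the exact module, so everything below is an illustrative assumption in plain NumPy.

```python
import numpy as np


def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def position_attention(feat: np.ndarray) -> np.ndarray:
    """Position-dependent branch: every spatial location attends over all
    other locations via an (HW x HW) spatial affinity matrix."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)            # N x C
    attn = softmax(x @ x.T, axis=-1)      # N x N spatial affinity
    return (attn @ x).reshape(h, w, c)


def channel_attention(feat: np.ndarray) -> np.ndarray:
    """Channel-dependent branch: every channel attends over all other
    channels via a (C x C) channel affinity matrix."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)
    attn = softmax(x.T @ x, axis=-1)      # C x C channel affinity
    return (x @ attn).reshape(h, w, c)


def fuse(feat: np.ndarray) -> np.ndarray:
    """Residual fusion of the two branches with the input feature map."""
    return feat + position_attention(feat) + channel_attention(feat)
```

In a real network the queries, keys, and values would come from learned 1x1 convolutions and the branch outputs would be scaled by learned weights; this sketch keeps only the structural idea of fusing spatial and channel affinities.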

Description

Technical field

[0001] The invention relates to the technical field of video processing, and in particular to a video semantic segmentation method based on optical flow feature fusion.

Background technique

[0002] With growing market demand for automotive active safety and intelligence, more and more companies and research institutions have devoted themselves to the research and development of autonomous driving systems. The environment perception technology in an autonomous driving system acts as the eyes and ears of the vehicle and supports its behavioral decision-making system. Within autonomous driving environment perception, fast and accurate semantic segmentation of the real-time video data collected by the vehicle camera is a crucial technology.

[0003] Self-driving cars perform semantic segmentation of real driving scenes; the core task is to extract road semantic information, improve t...


Application Information

IPC(8): G06K9/00; G06K9/34; G06K9/46; G06K9/62
CPC: G06V20/46; G06V20/48; G06V20/49; G06V20/56; G06V10/454; G06V10/267; G06F18/22; G06F18/253
Inventors: 周世杰, 王蒲, 程红蓉, 刘启和, 廖永建, 潘鸿韬
Owner UNIV OF ELECTRONIC SCI & TECH OF CHINA