
A video saliency target detection method based on a cascade convolutional network and optical flow

A convolutional network and object detection technology, applied in the field of video salient object detection based on a cascaded convolutional network and optical flow, which solves the problems of slow speed, high computational complexity, and easy loss of edge information, and achieves improved detection speed, clear edges, and a fine-grained saliency map.

Active Publication Date: 2019-05-21
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

However, in this method both static and dynamic salient object detection use the same deep fully convolutional network structure, which results in high computational complexity and slow speed; moreover, the saliency map is not fine-grained enough and edge information is easily lost.




Embodiment Construction

[0034] The present invention will now be further described in conjunction with the embodiments and the accompanying drawings:

[0035] Step 1: Build a cascaded network structure

[0036] The original image is down-sampled to obtain three images at different scales: the original image (high scale), the image down-sampled by a factor of 2 (medium scale), and the image down-sampled by a factor of 4 (low scale). The low-scale image passes through 5 convolutional blocks, each containing 3 convolutional layers; the last layer of each of the first three blocks is followed by a pooling layer with a stride of 2, yielding a 32-fold down-sampled feature map F1. After 2-fold up-sampling and a SoftMax layer, F1 yields the saliency map S1 of the low-scale image. The medium-scale image passes through 3 convolutional blocks, each containing 3 convolutional layers and a pooling layer with a stride of 2, and then a dilated convolutional layer with a step siz...
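As a reading aid only, the following is a minimal PyTorch sketch of the low-scale branch described in paragraph [0036]. Only the block/pooling/up-sampling layout follows the text; the channel widths, 3x3 kernels, padding, and ReLU activations are assumptions, since the excerpt does not specify them, and this is not the patented implementation.

```python
# Sketch of the low-scale branch in [0036]; channel widths, kernel sizes,
# padding and activations are ASSUMPTIONS, not taken from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch, pool):
    """Three 3x3 conv layers; optionally ends with a stride-2 max pool."""
    layers, ch = [], in_ch
    for _ in range(3):
        layers += [nn.Conv2d(ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)]
        ch = out_ch
    if pool:
        layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)


class LowScaleBranch(nn.Module):
    """Five conv blocks; the first three each end with stride-2 pooling, so the
    feature map F1 is 8x smaller than the (already 4x down-sampled) low-scale
    input, i.e. 32x smaller than the original frame."""
    def __init__(self):
        super().__init__()
        widths = [64, 128, 256, 512, 512]          # assumed channel widths
        blocks, in_ch = [], 3
        for i, w in enumerate(widths):
            blocks.append(conv_block(in_ch, w, pool=(i < 3)))
            in_ch = w
        self.blocks = nn.ModuleList(blocks)
        self.score = nn.Conv2d(in_ch, 2, 1)        # salient / non-salient logits

    def forward(self, x_low):
        for blk in self.blocks:
            x_low = blk(x_low)                     # feature map F1
        s = F.interpolate(self.score(x_low), scale_factor=2,
                          mode="bilinear", align_corners=False)
        return F.softmax(s, dim=1)                 # saliency map S1


if __name__ == "__main__":
    frame = torch.randn(1, 3, 256, 256)                     # original frame
    low = F.interpolate(frame, scale_factor=0.25,
                        mode="bilinear", align_corners=False)  # low-scale input
    print(LowScaleBranch()(low).shape)                      # torch.Size([1, 2, 16, 16])
```

The medium- and high-scale branches would follow the same pattern with fewer pooling layers and the dilated convolution mentioned in the (truncated) text, but the excerpt does not give enough detail to sketch them faithfully.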



Abstract

The invention relates to a video salient object detection method based on a cascaded convolutional network and optical flow. The method performs pixel-level saliency prediction on the current frame image at high, medium and low scales by means of a cascaded network structure. The cascaded network is trained with the MSRA10K image dataset, using saliency annotation maps as supervision and a cross-entropy loss function. After training, the trained cascaded network performs static saliency prediction on every frame of the video. The classic Lucas-Kanade algorithm is used to extract the optical flow field. A three-layer convolutional network is used to construct a dynamic optimization network; the static detection result and the optical-flow detection result of each frame are spliced to form the input of this optimization network. The optimization network is trained on the DAVIS video dataset, and pixel-level saliency classification of each video frame is carried out using the static detection result and the optical flow information.
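To make the flow-plus-splicing step concrete, here is a hedged sketch of a dense Lucas-Kanade flow field and of concatenating it with the static saliency map to form the refinement network's input. The window size, Sobel gradients, and the near-singular guard are assumptions; the abstract only states that the classic Lucas-Kanade algorithm is used and that the static result and flow field are spliced together.

```python
# Sketch only: dense Lucas-Kanade flow plus channel-wise splicing with the
# static saliency map. Window size and gradient operators are ASSUMPTIONS.
import cv2
import numpy as np


def dense_lucas_kanade(prev_gray, next_gray, win=15, eps=1e-6):
    """Solve the per-pixel 2x2 Lucas-Kanade normal equations, with gradient
    products summed over a win x win neighbourhood (unnormalized box filter)."""
    prev = prev_gray.astype(np.float32) / 255.0
    nxt = next_gray.astype(np.float32) / 255.0
    ix = cv2.Sobel(prev, cv2.CV_32F, 1, 0, ksize=3)
    iy = cv2.Sobel(prev, cv2.CV_32F, 0, 1, ksize=3)
    it = nxt - prev

    def box(a):                                    # local sums over the window
        return cv2.boxFilter(a, -1, (win, win), normalize=False)

    ixx, iyy, ixy = box(ix * ix), box(iy * iy), box(ix * iy)
    ixt, iyt = box(ix * it), box(iy * it)
    det = ixx * iyy - ixy * ixy
    det = np.where(np.abs(det) < eps, eps, det)    # guard near-singular pixels
    u = (-iyy * ixt + ixy * iyt) / det             # closed-form 2x2 inverse
    v = (ixy * ixt - ixx * iyt) / det
    return np.stack([u, v], axis=-1)               # flow field, shape (H, W, 2)


if __name__ == "__main__":
    h, w = 240, 320
    prev_gray = np.random.randint(0, 256, (h, w), np.uint8)  # stand-in frames
    next_gray = np.random.randint(0, 256, (h, w), np.uint8)
    static_sal = np.random.rand(h, w).astype(np.float32)     # stand-in static saliency
    flow = dense_lucas_kanade(prev_gray, next_gray)
    # Splice: static saliency (1 ch) + flow (2 ch) -> refinement network input
    refine_input = np.concatenate([static_sal[..., None], flow], axis=-1)
    print(refine_input.shape)                                # (240, 320, 3)
```

The spliced tensor would then be fed to the three-layer convolutional refinement network described in the abstract; its exact layer configuration is not given in this excerpt.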

Description

Technical field [0001] The invention belongs to the field of image processing and relates to a video salient object detection method based on a cascaded convolutional network and optical flow. Background technology [0002] A large number of image salient object detection algorithms have been proposed in recent years; they are based on bottom-up or top-down frameworks and rely mainly on hand-crafted features, whereas algorithms for video salient object detection remain relatively few. The biggest difference between video and image salient object detection is that image salient object detection can assume that the focus of the human visual attention mechanism lies at the center of the image, whereas for video salient object detection, human visual attention shifts as the salient object moves. If an image salient object detection algorithm is simply applied to video salient object detection, the motion information of the salient objects in the video cannot be fully utilize...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04
Inventors: 李映, 郑清萍, 刘凌毅, 崔凡
Owner: NORTHWESTERN POLYTECHNICAL UNIV