
Method for preprocessing pictures in video saliency detection task

A preprocessing technology for video saliency detection tasks, applied in the field of computer vision, which addresses problems such as the lack of an effective frame-reading scheme for the recognition model and the inability to handle multi-scene videos, and achieves the effects of easy packaging, easier model fitting, and improved accuracy.

Inactive Publication Date: 2021-02-19
SOUTHWEST PETROLEUM UNIV

AI Technical Summary

Problems solved by technology

[0006] The technical problem to be solved by the present invention is that existing video saliency detection technology lacks an effective scheme for the recognition model to read frames and therefore cannot be applied to multi-scene videos.



Examples


Embodiment Construction

[0034] The present application will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific implementations described here only explain the method and do not limit its use. It should also be noted that, for ease of description, only the parts related to the method are shown in the drawings. The present application is described in detail below with reference to the accompanying drawings.

[0035] Figure 1 is the technical roadmap of the method. The method can be used in a variety of deep learning video tasks and can improve the accuracy and robustness of a model without changing the model's parameters. It includes the following steps:

[0036] Step S100, read the preselected picture.

[0037] In this method, the fixed number of frames read by the subsequent model must be known before preprocessing the pictures, so that the search range of the preproc...
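As a rough illustration of this reading step, the sketch below (not taken from the patent; the clip length, the expansion factor, and the function name read_candidate_frames are assumptions) reads a window of frames wider than the fixed clip length the downstream model consumes, leaving room for redundant frames to be discarded later:

import cv2

CLIP_LEN = 16        # fixed number of frames the downstream model reads at once (assumed value)
SEARCH_FACTOR = 2    # how far beyond the clip length to search for usable frames (assumed value)

def read_candidate_frames(video_path, start_index=0):
    # Read up to CLIP_LEN * SEARCH_FACTOR frames starting at start_index,
    # so that later filtering can drop frames without leaving the clip short.
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_index)
    frames = []
    while len(frames) < CLIP_LEN * SEARCH_FACTOR:
        ok, frame = cap.read()
        if not ok:          # end of the video
            break
        frames.append(frame)
    cap.release()
    return frames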



Abstract

The invention relates to a method for preprocessing pictures in a video saliency detection task, and in particular to a method that judges the correlation between the pictures read for the video task and the task itself and screens them before the video frames are input into a deep learning model; it belongs to the field of computer vision. Aiming at the problems that existing video saliency identification technology lacks an effective frame-reading scheme for the identification model and is not suitable for multi-scene video, the method performs redundant picture detection and scene switching identification during picture reading so that the model can adaptively read pictures, which improves the accuracy of the model. The method is packaged, and the packaged module can be added to any deep learning model that processes video tasks, improving the robustness of the model across various video scenes. The pictures to be tested are condensed before being input into the model, and pictures irrelevant to the task are removed, so the deep learning model fits more easily.
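For concreteness, the sketch below shows one plausible way (not the patented implementation) to realize the two checks named in the abstract, redundant picture detection and scene switching identification, before a clip is handed to the model; the grayscale mean-absolute-difference test, the histogram-correlation test, and both threshold values are assumptions:

import cv2
import numpy as np

REDUNDANT_THRESH = 2.0   # mean absolute pixel difference below which a frame is treated as redundant (assumed)
SCENE_CUT_THRESH = 0.5   # histogram correlation below which a scene switch is assumed (assumed)

def filter_frames(frames):
    # Keep frames that are neither near-duplicates of the last kept frame
    # nor located after a detected scene switch.
    if not frames:
        return []
    kept = [frames[0]]
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    prev_hist = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
    cv2.normalize(prev_hist, prev_hist)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Redundant picture detection: almost no pixel change since the last kept frame.
        if np.mean(cv2.absdiff(gray, prev_gray)) < REDUNDANT_THRESH:
            continue
        # Scene switching identification: histogram correlation collapses across a hard cut.
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        cv2.normalize(hist, hist)
        if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < SCENE_CUT_THRESH:
            break            # stop the clip at the scene boundary
        kept.append(frame)
        prev_gray, prev_hist = gray, hist
    return kept

A module of this kind could be called on the frames returned by the reading step, with the surviving frames then trimmed or padded to the model's fixed clip length.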

Description

Technical field

[0001] The invention relates to a method for image preprocessing in a video saliency detection task, and belongs to the field of computer vision.

Background technology

[0002] When humans see rich and changing scenes, the human visual system can quickly locate key areas and blur the other parts. The goal of video saliency detection is to simulate this characteristic of the human eye with deep learning models. Such methods can locate key areas or key frames in large amounts of video data, effectively eliminate a large amount of redundant data, and speed up deep learning algorithms, so they are widely used in computer vision tasks such as video surveillance, video extraction, video compression, and scene segmentation.

[0003] Thanks to the advancement of artificial intelligence technology, especially the vigorous development of deep learning technology in recent years, many video saliency detection algorithms have been dev...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/41; G06N3/045; G06F18/214
Inventors: 王杨, 吴尚睿, 庄月圆
Owner: SOUTHWEST PETROLEUM UNIV