
Unsupervised video segmentation method integrated with temporal-spatial multi-feature representation

A video segmentation and multi-feature technology, applied in image analysis, image data processing, instruments, etc. It addresses problems such as inaccurate motion information, blurred targets, and low edge-segmentation accuracy, achieving improved robustness and more accurate edge segmentation.

Status: Inactive
Publication Date: 2018-02-02
NANJING UNIV OF INFORMATION SCI & TECH

AI Technical Summary

Problems solved by technology

In general, the difficulty of segmentation lies in the irregular motion and deformation of the target, the rapidly changing complex background, inaccurate motion information, and blurring of the target; yet obtaining accurate motion information in turn requires an accurate segmentation result, so the problem becomes circular.
So far there is no general, reliable unsupervised segmentation algorithm that can be applied to all complex, changing scenes. At present, most of the video segmentation algorithms proposed by scholars at home and abroad are aimed at a specific application or a specific type of video.

Embodiment Construction

[0024] The present invention will be further described below in conjunction with the accompanying drawings, so that those skilled in the art can implement it by referring to the text of the description.

[0025] As shown in Figure 1, the present invention provides an unsupervised video segmentation method based on non-local spatio-temporal feature learning. It comprises: obtaining the video sequence to be segmented; processing the video sequence with superpixel segmentation; matching information between adjacent frames with optical flow; obtaining the approximate range of the moving target from the optical flow of adjacent frames; optimizing the matching result with non-local spatio-temporal information; building a graph model, solving it, and outputting the superpixel-level segmentation result; using the superpixel-level segmentation result as a prior to train a Gaussian mixture model; and using the trained Gaussian mixture model to perform pixel-level segmentation, which is combined with the superpixel-level result to obtain the final segmentation.
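As an illustration of the early stages of the pipeline described in [0025], the sketch below combines superpixel segmentation with dense optical flow to obtain a rough mask of the moving target. It is only a minimal sketch of that stage: SLIC and Farneback flow stand in for whichever superpixel and optical-flow algorithms the patent actually uses, and the function name and the `n_segments` and `motion_thresh` parameters are illustrative assumptions rather than values from the disclosure.

```python
# Minimal sketch of the superpixel + optical-flow stage described in [0025].
# SLIC and Farneback flow are stand-ins; parameter values are illustrative.
import cv2
import numpy as np
from skimage.segmentation import slic

def superpixel_motion_prior(frame_prev, frame_curr, n_segments=400, motion_thresh=1.0):
    """Rough foreground mask: superpixels whose mean optical-flow magnitude is large."""
    prev_gray = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between the two consecutive frames (Farneback here).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # Superpixel segmentation of the current frame (SLIC here).
    labels = slic(frame_curr, n_segments=n_segments, compactness=10, start_label=0)

    # A superpixel whose average motion exceeds the threshold is treated as part
    # of the approximate range of the moving target.
    mask = np.zeros(labels.shape, dtype=bool)
    for sp in np.unique(labels):
        region = labels == sp
        if magnitude[region].mean() > motion_thresh:
            mask[region] = True
    return labels, mask
```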


Abstract

The invention discloses an unsupervised video segmentation method integrated with temporal-spatial multi-feature representation. The features of a target are extracted and identified according to the motion information, saliency and color features of the target, and the target is segmented stably and accurately through a Gaussian mixture model. The method includes the following steps: superpixel segmentation; optical flow matching; optimizing the matching result; building a graph model and calculating the superpixel-level segmentation result; using the segmentation result to train the parameters of a Gaussian mixture model; calculating the pixel-level segmentation result; and obtaining a final segmentation result from the superpixel-level and pixel-level segmentation results. Superpixel segmentation of each frame greatly reduces the computational complexity. Optimizing the optical flow matching information with non-local temporal-spatial information improves the robustness of segmentation. The Gaussian mixture model compensates for the large edge matching error in the superpixel segmentation process, and the saliency feature further improves the accuracy and credibility of the segmentation result.
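The refinement and fusion steps listed in the abstract can be sketched in the same spirit. Below, the superpixel-level mask is used as a prior to fit foreground and background colour Gaussian mixture models, which then relabel every pixel; a simple intersection stands in for the patent's fusion rule, which is not visible in this excerpt. The function name `gmm_refine`, the value of `n_components` and the use of scikit-learn's `GaussianMixture` are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of the Gaussian-mixture refinement step from the abstract.
# Component count, library choice and the fusion rule are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_refine(frame, superpixel_mask, n_components=5):
    """Pixel-level mask from colour GMMs trained with the superpixel result as prior."""
    pixels = frame.reshape(-1, 3).astype(np.float64)
    prior = superpixel_mask.reshape(-1)

    # One colour GMM for the (prior) foreground, one for the background.
    fg_gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels[prior])
    bg_gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels[~prior])

    # A pixel is foreground if the foreground model explains its colour better.
    pixel_mask = (fg_gmm.score_samples(pixels) >
                  bg_gmm.score_samples(pixels)).reshape(superpixel_mask.shape)

    # Placeholder fusion of the two results: keep pixels where the superpixel-level
    # and pixel-level segmentations agree.
    final_mask = pixel_mask & superpixel_mask
    return pixel_mask, final_mask
```

In a per-frame loop, the motion prior from the previous sketch would feed `gmm_refine`, yielding the superpixel-level and pixel-level results that the abstract says are combined into the final segmentation.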

Description

technical field
[0001] The invention relates to an unsupervised video segmentation method that combines temporal and spatial multi-feature representations. It belongs to the field of computer vision, and specifically to video segmentation in image processing.
Background technique
[0002] A video is an image sequence composed of a series of continuous single images, and usually also contains text, audio and other information. To facilitate transmission and use, it is usually necessary to segment the video, remove regions that are not of interest to the user, and obtain the data characteristics of the target content for subsequent feature extraction and analysis.
[0003] Video segmentation, also known as motion segmentation, refers to dividing an image sequence into multiple regions according to a certain criterion; its purpose is to separate meaningful entities from the video sequence. In image processing technology, image and ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T7/215, G06T7/246
CPC: G06T7/215, G06T7/251, G06T2207/10016, G06T2207/20081
Inventor: 张开华, 李雪君, 宋慧慧
Owner: NANJING UNIV OF INFORMATION SCI & TECH