
Method for video labeling and device for video labeling

A video labeling technology applied in image data processing, instrumentation and computing. It addresses the problems that labeled data is scarce and that a single feature carries limited expressive information, improving labeling efficiency and reducing the loss of image information.

Publication status: Inactive; publication date: 2013-04-24
ANKE SMART CITY TECH PRC

AI Technical Summary

Problems solved by technology

[0006] To overcome the disadvantages of the prior art, namely that labeled data is scarce while unlabeled data is abundant, and that a single feature carries limited expressive information, the present invention provides a video labeling method and device that introduce unlabeled data into kernel density estimation and comprehensively use the feature information of both labeled and unlabeled samples, thereby improving the efficiency of video labeling and the accuracy of kernel density estimation.
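The summary above does not spell out the estimator, so as a rough, non-authoritative illustration of the idea of folding unlabeled samples into kernel density estimation, the Python sketch below assigns soft class labels to unlabeled feature vectors with an EM-style iteration. The Gaussian kernel, bandwidth h, iteration count and function names are assumptions for illustration, not the patented algorithm.

```python
import numpy as np

def gaussian_kernel(x, xi, h=1.0):
    """Isotropic Gaussian kernel between feature vectors x and xi (bandwidth h)."""
    d = np.asarray(x, dtype=float) - np.asarray(xi, dtype=float)
    return np.exp(-0.5 * np.dot(d, d) / h ** 2)

def semi_supervised_kde(X_l, y_l, X_u, n_classes, h=1.0, n_iters=10):
    """Assumed sketch: estimate class labels for unlabeled samples X_u by
    kernel density estimation that uses both the labeled samples (X_l, y_l)
    and the unlabeled samples themselves, weighted by their current soft labels."""
    n_u = len(X_u)
    q = np.full((n_u, n_classes), 1.0 / n_classes)  # start from uniform soft labels
    for _ in range(n_iters):
        dens = np.zeros((n_u, n_classes))
        for i, x in enumerate(X_u):
            for c in range(n_classes):
                # contribution of labeled samples of class c
                k_l = sum(gaussian_kernel(x, xi, h)
                          for xi, yi in zip(X_l, y_l) if yi == c)
                # contribution of the other unlabeled samples, weighted by
                # how strongly they currently belong to class c
                k_u = sum(q[j, c] * gaussian_kernel(x, xj, h)
                          for j, xj in enumerate(X_u) if j != i)
                dens[i, c] = k_l + k_u
        q = dens / np.maximum(dens.sum(axis=1, keepdims=True), 1e-12)
    return q.argmax(axis=1), q
```

The hard label returned by `q.argmax(axis=1)` would then be attached to the key frame that produced each unlabeled feature vector.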




Embodiment Construction

[0054] To make the technical problems to be solved, the technical solutions and the beneficial effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0055] Figure 1 shows a flow chart of a video labeling method provided by an embodiment of the present invention. The method includes the following steps:

[0056] S101: perform shot segmentation on the video;

[0057] S102: extract a set of key frames from each segmented shot;

[0058] Specifically, the content of the frames within a single shot usually has considerable redundancy, so a frame that reflects the main information content of the shot can be selected as a key frame to represent the shot concisely. In the case of determining the...
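The visible text of steps S101 and S102 does not give concrete algorithms, so the sketch below shows only one plausible reading, not the patented method: shot boundaries are placed where the color-histogram difference between consecutive frames exceeds a threshold, and the key frame is the frame closest to the shot's mean histogram. The function names, threshold value and histogram distance are illustrative assumptions.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Normalized per-channel color histogram of an HxWx3 uint8 frame."""
    h = np.concatenate([np.histogram(frame[..., c], bins=bins,
                                     range=(0, 256))[0] for c in range(3)])
    return h / max(h.sum(), 1)

def segment_shots(frames, threshold=0.4):
    """S101 (assumed sketch): mark a shot boundary wherever the L1 distance
    between consecutive frame histograms exceeds a threshold; return the index
    of the first frame of every shot."""
    boundaries = [0]
    prev = color_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = color_histogram(frames[i])
        if np.abs(cur - prev).sum() > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

def select_key_frame(shot_frames):
    """S102 (assumed sketch): pick the frame whose histogram is closest to the
    mean histogram of the shot, as a proxy for the frame that best reflects
    the main information content of the shot."""
    feats = np.stack([color_histogram(f) for f in shot_frames])
    centroid = feats.mean(axis=0)
    return int(np.argmin(np.linalg.norm(feats - centroid, axis=1)))
```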



Abstract

The invention discloses a method and a device for video labeling, belonging to the field of video processing. The method comprises: performing shot segmentation on a video; extracting a set of key frames from each segmented shot; extracting the relevant low-level feature vectors of each set of key frames; and using a semi-supervised kernel density estimation algorithm to assign a category label to each unlabeled sample and to label the key frames corresponding to the unlabeled samples with that category. Because feature vectors combining various low-level characteristics of an image are used to represent the key frames, the loss of image information is reduced. The semi-supervised kernel density estimation algorithm labels each unlabeled sample with a category, unlabeled data is introduced into the kernel density estimation, and the feature information of labeled and unlabeled samples is applied comprehensively, which improves the efficiency of video labeling and the accuracy of kernel density estimation.
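The abstract states that each key frame is represented by a feature vector combining several low-level characteristics of the image, but does not list the characteristics here. As a hedged example only, the sketch below concatenates a color histogram with a crude gradient-orientation histogram; the specific descriptors and their dimensions are assumptions, not taken from the patent.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Normalized per-channel color histogram of an HxWx3 uint8 frame."""
    h = np.concatenate([np.histogram(frame[..., c], bins=bins,
                                     range=(0, 256))[0] for c in range(3)])
    return h / max(h.sum(), 1)

def gradient_orientation_histogram(frame, bins=8):
    """Crude texture/edge descriptor: histogram of gradient orientations."""
    gray = frame.mean(axis=2)
    gy, gx = np.gradient(gray)
    angles = np.arctan2(gy, gx)
    h, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return h / max(h.sum(), 1)

def key_frame_feature(frame):
    """Combined low-level feature vector for one key frame: the individual
    descriptors are concatenated into a single vector."""
    return np.concatenate([color_histogram(frame),
                           gradient_orientation_histogram(frame)])
```

A feature vector like this, computed for every key frame, would be the input to the semi-supervised kernel density estimation step described above.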

Description

Technical field

[0001] The invention relates to the fields of video processing and machine learning, and in particular to a video labeling method and device.

Background technique

[0002] With the development of computer and network technology, ordinary users can access more and more video data. Video data provides a great deal of useful information, and its content is richer, more intuitive and more vivid than other forms of data. On the one hand, the massive amount of information contained in rich video data is unmatched by other media; on the other hand, its ever-growing volume, unstructured form and ambiguity of content create obstacles for user interaction and prevent it from being used more effectively.

[0003] In order to mine the potential value of large video collections, users need to be able to efficiently retrieve the desired video clips. Video annotation is a technology that links text with the semantic content of video. It is ...


Application Information

IPC(8): G06T7/00
Inventors: 秦兴德, 吴金勇, 王一科, 王军, 钟翔宇
Owner: ANKE SMART CITY TECH PRC