
Video feature learning method, device, electronic device and readable storage medium

A video feature learning technology, applied in the field of computer technology, which solves the problems of manual labeling and the resulting resource and cost consumption, and achieves the effects of strong versatility and good adaptability

Active Publication Date: 2020-07-28
XIAMEN MEITUZHIJIA TECH

AI Technical Summary

Problems solved by technology

Current video feature learning methods are mainly based on video tags and classification information, both of which require manual labeling; in actual business scenarios with huge amounts of data, this labeling consumes significant resources and cost.




Embodiment Construction

[0051] In the course of implementing the technical solutions provided by the embodiments of the present invention, the inventors found that the supervised video feature learning methods currently in use are based on video labels and classification information, which require manual labeling; in actual business scenarios with huge amounts of data, this consumes significant resources and cost. Although existing unsupervised video feature learning methods can alleviate this problem to some extent, the inventors found on closer study that these methods mainly rely on the continuous motion information of the main object in the video to learn its visual properties without supervision. Because they depend on the motion of objects in the video, they perform poorly when the video picture changes little or not at all...
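To make this limitation concrete, the following sketch (an illustration only, not taken from the patent; the frame-difference "motion energy" measure and the synthetic clips are assumptions) compares the motion signal available to a motion-dependent method on a nearly static clip and on a clip with a moving object. When the picture barely changes, the signal such methods depend on all but vanishes.

import numpy as np

def motion_energy(frames):
    # Mean absolute frame-to-frame difference: a simple proxy for the
    # continuous motion information that motion-based methods rely on.
    return float(np.abs(np.diff(frames.astype(np.float32), axis=0)).mean())

rng = np.random.default_rng(0)
background = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Nearly static clip: the same background repeated in every frame.
static_clip = np.stack([background] * 16)

# Clip with motion: a bright square sliding across the background.
moving_clip = np.stack([background.copy() for _ in range(16)])
for t in range(16):
    moving_clip[t, 20:30, t * 2:t * 2 + 10, :] = 255

print("static clip motion energy:", motion_energy(static_clip))   # close to 0
print("moving clip motion energy:", motion_energy(moving_clip))   # clearly above 0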



Abstract

The embodiment of the invention provides a video feature learning method and device, an electronic device, and a readable storage medium. The method comprises: acquiring a to-be-trained video sample, where the video sample comprises a plurality of frames of images; segmenting the video sample to obtain a plurality of continuous video segments; extracting the visual features of each video segment and calculating the number of motion elements of each video segment; extracting the visual features of the video sample and calculating the number of motion elements of the video sample; and training a target classification model based on the number of motion elements of each video segment, the number of motion elements of the video sample, and preset constraint conditions, so as to obtain a trained target classification model and thereby achieve learning of the video features. Unsupervised learning of video features can thus be achieved without knowing the tags and classification information of a video, resource and cost consumption can be reduced, and the method can be applied to a wide range of video scenes.
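The abstract describes a pipeline: segment the video, extract visual features and count motion elements per segment and for the whole sample, then train a classification model under preset constraints. The sketch below is a minimal, hypothetical rendering of that pipeline in Python; the helper names, the pixel-difference definition of a "motion element", the toy feature extractor, and the synthetic input are all assumptions made for illustration and are not specified by the patent text.

import numpy as np

def split_into_segments(frames, num_segments):
    # Split a (T, H, W, C) frame array into contiguous segments.
    return np.array_split(frames, num_segments, axis=0)

def count_motion_elements(frames, threshold=10.0):
    # Count pixels whose frame-to-frame change exceeds a threshold;
    # a stand-in for the patent's "number of motion elements".
    if len(frames) < 2:
        return 0
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return int((diffs.mean(axis=-1) > threshold).sum())

def extract_visual_features(frames):
    # Toy visual feature: per-channel mean and standard deviation over the
    # segment. A real system would use a learned encoder here.
    f = frames.astype(np.float32)
    return np.concatenate([f.mean(axis=(0, 1, 2)), f.std(axis=(0, 1, 2))])

# Synthetic "video sample": 32 frames of 64x64 RGB noise.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(32, 64, 64, 3), dtype=np.uint8)

segments = split_into_segments(video, num_segments=4)
segment_features = [extract_visual_features(s) for s in segments]
segment_motion = [count_motion_elements(s) for s in segments]
video_features = extract_visual_features(video)
video_motion = count_motion_elements(video)

# In the patent's scheme, the per-segment and whole-sample motion counts,
# together with preset constraint conditions, would drive the training of a
# target classification model without any manual labels.
print("per-segment motion counts:", segment_motion)
print("whole-sample motion count:", video_motion)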

Description

Technical Field
[0001] The present invention relates to the field of computer technology, and in particular to a video feature learning method and device, an electronic device, and a readable storage medium.

Background Art
[0002] Video feature learning has a wide range of applications, such as video classification, similar-video retrieval, and video matching. Current video feature learning methods are mainly based on video tags and classification information, both of which require manual labeling; in actual business scenarios with huge amounts of data, this consumes significant resources and cost.

Summary of the Invention
[0003] In order to overcome the above-mentioned deficiencies in the prior art, the purpose of the present invention is to provide a video feature learning method and device, an electronic device, and a readable storage medium, which can realize unsupervised learning of video features without knowing ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/78, G06F16/783, G06N3/08
CPC: G06F16/783, G06F16/7847, G06N3/088
Inventors: 丁大钧, 赵丽丽, 刘旭
Owner: XIAMEN MEITUZHIJIA TECH