
Video UCL semantic indexing method and device based on deep learning

A technology combining deep learning and video processing, applied in the Internet field, that addresses problems such as poor high-level semantic extraction performance, difficulty in the unified management of video resources, and overly limited model input methods

Active Publication Date: 2021-03-19
SOUTHEAST UNIV
Cites: 7 · Cited by: 0

AI Technical Summary

Problems solved by technology

However, current video feature extraction and indexing still suffer from the following problems. First, traditional video feature extraction methods perform poorly at extracting high-level video semantics; if semantic features are instead extracted manually, time efficiency is low, differences in personal judgment lead to inconsistent semantic features, and it is difficult to generate high-level video semantic features under a unified standard framework. Second, deep-learning-based video semantic extraction methods still need improvement in description accuracy; for example, the S2VT (Sequence to Sequence - Video to Text) video natural-language description model (Venugopalan S, et al. Sequence to Sequence -- Video to Text [C]. IEEE International Conference on Computer Vision (ICCV). IEEE, 2015) pays little attention to surrounding video fragments, and its input method is too limited. Finally, the multimedia content description interface MPEG-7 can index video features, but it standardizes only visual features such as color, texture, and shape, together with some semantic features; other semantic features require the user to define a new description scheme.
Without a unified video content coding specification, a system must design a separate method for obtaining the specified video features of each coding format; the universality of video features across recommendation systems cannot be guaranteed, and unified management of video resources becomes difficult.

Method used



Embodiment Construction

[0032] The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention, not to limit its scope; after reading this disclosure, those skilled in the art will understand that all equivalent modifications of the invention fall within the scope defined by the appended claims of this application.

[0033] As shown in figure 1, the deep-learning-based video UCL semantic indexing method disclosed in the embodiment of the present invention is implemented in the following steps:

[0034] Step 1: low-level semantic feature extraction and video segmentation. A neural network is used to extract the low-level semantic features of the video; then, based on these features, a video segmentation algorithm using backward search is designed to segment the video...
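The excerpt does not disclose the details of the backward-search segmentation algorithm. A minimal sketch of one plausible reading, under the assumptions that each frame is represented by a low-level feature vector and that a new segment starts when the current frame is dissimilar to every frame in a backward lookback window (the cosine-similarity threshold and window size are illustrative, not from the patent):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_video(frame_features, threshold=0.85, window=5):
    """Backward-search segmentation sketch: walk the frames in order and,
    for each frame, search backward over a small window of recent frames;
    open a new segment when the frame matches none of them."""
    boundaries = [0]  # each entry is the index of a segment's first frame
    for i in range(1, len(frame_features)):
        start = max(boundaries[-1], i - window)
        recent = frame_features[start:i]
        if all(cosine_sim(frame_features[i], f) < threshold for f in recent):
            boundaries.append(i)
    return boundaries
```

On a synthetic video whose features switch abruptly at frame 10, the sketch reports segment starts at frames 0 and 10.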



Abstract

The invention discloses a video UCL semantic indexing method and device based on deep learning. First, a neural network is used to extract the low-level semantic features of the video. Then, based on flexible feature sampling and an attention mechanism, the S2VT video natural-language description model is improved into the S2VT-FFSA model, which takes the low-level semantic features of the video as input and outputs natural-language description features; these are combined with speech natural-language description features to generate high-level semantic features such as video keywords, which alleviates the problem of insufficient semantic feature extraction. Finally, UCL is used to index the rich semantic features, and a UCL indexing method for video content is proposed, making video indexing more standardized. The invention can not only extract rich video semantic features accurately, but also index these features objectively and in a standardized way.
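The abstract names an attention mechanism over video features as one of the improvements behind S2VT-FFSA, but gives no formulas. A minimal sketch of generic soft attention, assuming per-frame feature vectors, a decoder hidden state, and a learned bilinear score matrix `W` (all names and the bilinear scoring form are assumptions, not the patent's disclosed design):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(frame_features, decoder_state, W):
    """Soft attention sketch: score each frame feature against the
    decoder state with a bilinear form, normalize the scores, and
    return the attention-weighted context vector plus the weights."""
    scores = np.array([decoder_state @ W @ f for f in frame_features])
    weights = softmax(scores)
    context = np.sum(weights[:, None] * np.array(frame_features), axis=0)
    return context, weights
```

The context vector lets each decoding step draw on the frames most relevant to the word being generated, which is how attention addresses S2VT's weak use of surrounding video fragments.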

Description

Technical Field

[0001] The invention relates to a video UCL semantic indexing method and device based on deep learning, which uses deep learning technology to automatically extract low-level and high-level video features and indexes video semantic features based on the UCL national standard GB/T 35304-2017; it belongs to the field of Internet technology.

Background Technique

[0002] With the rapid development of computer and information technology, video production and upload channels have become increasingly convenient, resulting in massive video resources on the Internet. To address video information overload, major video portals provide users with video search and recommendation. To manage video resources effectively and realize these functions efficiently, accurate extraction and standardized indexing of video features are particularly important. However, the following problems still ex...
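The description says semantic features are indexed according to the UCL national standard GB/T 35304-2017 but does not reproduce the label format here. Purely as an illustration of what a UCL-style index entry for a video might hold, here is a sketch in which every field name is an assumption; GB/T 35304-2017 defines the authoritative structure:

```python
def build_ucl_entry(video_id, keywords, description, segments):
    """Assemble an illustrative UCL-style index entry for one video.
    Field names are hypothetical; the real UCL format is defined by
    GB/T 35304-2017."""
    return {
        "ucl_code": video_id,            # unique identifier for the video
        "properties": {
            "keywords": keywords,        # high-level semantic keywords
            "description": description,  # generated natural-language caption
            "segments": segments,        # segment boundary frame indices
        },
    }
```

Such an entry bundles the extracted high-level semantics with the video identifier so that recommendation and search systems can consume the features under one schema, which is the unification problem the background section raises.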

Claims


Application Information

Patent Type & Authority Patents(China)
IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 20/42, G06V 20/46, G06F 18/22
Inventor 杨鹏张晓刚李幼平余少波徐镜媛
Owner SOUTHEAST UNIV