Video content semantic understanding method based on a recurrent convolutional neural network

A recurrent convolutional neural network technology, applied in the field of computer vision, which addresses problems such as information loss, the curse of dimensionality, and poor robustness to scene switching, and achieves accurate recognition results, fast computation, and a small memory footprint.

Inactive Publication Date: 2019-04-12
SHANDONG UNIV

Problems solved by technology

[0007] First, accurate feature representation by a convolutional neural network requires a high-dimensional model output, while the computational cost of training and applying a recurrent neural network requires low-dimensional input data. This contradiction creates a critical bottleneck in methods that cascade the two, and a large amount of key information is lost. As a result, the inter-frame relationships of the video cannot be used effectively to supervise the training of the neural network model, and in practical applications it is difficult to detect the overall content of a video accurately.
[0008] Second, such traditional methods center on object



Examples


Embodiment 1

[0043] A method for semantic understanding of video content based on a recursive convolutional neural network. The model is a recursive convolutional neural network, as shown in Figure 1, in which a convolutional neural network serves as the core of the recurrent neural network. In this method, the initial frame of the video is input to the recurrent neural network and concatenated, along the depth dimension of the image, with an initial variable representing the initial state of the video. The network performs feature extraction with the convolutional neural network, and the resulting feature output becomes the new hidden-layer data characterizing the video state, which is passed to the next time step; the above operation is then repeated. On this basis, the hidden-layer state of the recurrent neural network is taken as the output and provided to a fully connected neural network classifier. After feature reorganization by the fully connected classifier, the category output of t...
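The recurrent loop described in this embodiment can be sketched in NumPy as follows. All shapes, the use of a 1x1 convolution as the operation C, and every variable name here are illustrative assumptions for exposition, not the patented implementation: each step concatenates the video state with the next frame along the depth (channel) dimension, convolves to produce the new state, and the final state feeds a fully connected softmax classifier.

```python
# Toy sketch of the recursive convolutional loop (assumed shapes:
# 8x8 frames, 3 input channels, a 4-channel hidden state, 5 classes).
import numpy as np

rng = np.random.default_rng(0)

H_CH, F_CH, SIZE, N_CLASSES = 4, 3, 8, 5
# A shared 1x1 "convolution" kernel C mapping (H_CH + F_CH) -> H_CH channels;
# a simplification of the full convolution operation in the patent.
C = rng.standard_normal((H_CH, H_CH + F_CH)) * 0.1
W = rng.standard_normal((N_CLASSES, H_CH * SIZE * SIZE)) * 0.1

def step(H, F):
    """One time step: concatenate state and frame along the depth
    (channel) dimension, then convolve to obtain the new state."""
    x = np.concatenate([H, F], axis=0)              # (H_CH + F_CH, SIZE, SIZE)
    return np.tanh(np.einsum('oc,chw->ohw', C, x))  # 1x1 conv -> (H_CH, SIZE, SIZE)

def classify(H):
    """Fully connected classifier with softmax over the final state."""
    logits = W @ H.reshape(-1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

frames = rng.standard_normal((6, F_CH, SIZE, SIZE))  # a 6-frame toy "video"
H = np.zeros((H_CH, SIZE, SIZE))                     # initial video state
for F in frames:
    H = step(H, F)                                   # repeated recursive update
probs = classify(H)                                  # per-class probabilities
```

Note that the hidden state keeps the spatial layout of the frames, which is what lets the same convolution combine object appearance (from the frame) with temporal context (from the state) at every position.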

Embodiment 2

[0057] According to the method for semantic understanding of video content based on recursive convolutional neural network described in embodiment 1, the difference lies in:

[0058] In step (3), after the recursive convolutional neural network receives a given frame of video data, it combines that frame with the state data passed from the previous moment and performs feature extraction on the current frame, as shown in formula (I):

[0059] Ht+1 = C{Ht : Ft+1}    (I)

[0060] In formula (I), Ft+1 represents frame t+1 of the video, Ht is the video state represented by the hidden-layer state of the previous time step, and C represents the convolution operation;

[0061] Step (5): after the final output of the sixth layer of the recursive convolutional neural network passes through the neural network classifier, the probability distribution of the data over the action categories is computed by a softmax operation, as shown in formula (II):

[0062] Prediction = softmax{W·Hn}    (II)

[0063] ...
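Formula (II) can be rendered directly in code. In this sketch, W and the flattened final hidden state Hn are random stand-ins with hypothetical dimensions; only the softmax{W·Hn} structure comes from the patent.

```python
# Formula (II): Prediction = softmax{W . Hn}, with assumed dimensions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
n_classes, state_dim = 4, 16                     # hypothetical sizes
W = rng.standard_normal((n_classes, state_dim))  # classifier weight matrix W
H_n = rng.standard_normal(state_dim)             # flattened final state Hn
prediction = softmax(W @ H_n)                    # probability per action class
```

The output is a proper probability distribution: non-negative entries summing to one, so the largest entry can be read off as the predicted action category.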



Abstract

The invention relates to a video content semantic understanding method based on a recurrent convolutional neural network. The method performs content analysis and classification on video data such as network videos and surveillance videos. In this method, a convolutional neural network is placed inside a recurrent neural network to serve as its kernel; by introducing the concept of a video state, an organic combination of object detection and inter-frame association in the video data is realized. By performing a recursive convolution operation between video frames, accurate and efficient extraction of video features is achieved and video representations with richer semantics are obtained; on this basis, tasks such as video classification, event detection, and scene recognition are completed with a fully connected artificial neural network classifier. The method overcomes problems of traditional approaches such as information loss, poor feature representation capability, and difficult training convergence, and is an accurate, efficient, and advanced method with broad application prospects.

Description

Technical field

[0001] The invention relates to a method for semantic understanding of video content based on a recursive convolutional neural network, and belongs to the technical field of computer vision.

Background technique

[0002] Video content understanding is one of the important basic problems in computer vision. Its goal is to extract features from the images in a video and to model the relationships between video frames, finally obtaining a feature representation of the entire video to facilitate subsequent image analysis. Semantic understanding of video can be applied in technical fields such as autonomous driving, real-time intelligent analysis of surveillance video, and network video auditing.

[0003] Traditional video content processing methods include simple single-frame image processing, the optical flow method, feature extraction based on convolutional neural networks, feature extraction based on recurrent neural networks, or a combination of multiple m...

Claims


Application Information

IPC(8): G06K 9/00, G06K 9/62
CPC: G06V 20/41, G06V 20/46, G06F 18/24, G06F 18/214
Inventor: 李玉军 (Li Yujun), 冀先朋 (Ji Xianpeng), 邓媛洁 (Deng Yuanjie), 马宝森 (Ma Baosen)
Owner SHANDONG UNIV