
Double-flow convolution behavior recognition method based on 3D time flow and parallel spatial flow

A behavior recognition method using spatial-flow technology, applied in character and pattern recognition, neural learning methods, and instruments. It addresses problems such as the high storage and computation cost of optical flow images, accuracy insufficient for practical scenarios, and feature-information extraction that needs improvement, achieving the effects of improved prediction accuracy, improved recognition accuracy, and a reduced probability of recognition error.

Active Publication Date: 2021-01-05
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

However, this method requires optical flow to be extracted in advance, so the storage and computation costs of the optical flow images are too high; the accuracy is insufficient for application to practical scenes; and the extraction of feature information needs improvement.
In addition, factors such as light intensity and complex scenes in the video scene also affect the accuracy of the model to a certain extent.




Embodiment Construction

[0046] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the drawings in the embodiments of the present invention.

[0047] The present invention provides a dual-stream convolution behavior recognition method based on a 3D temporal stream and parallel spatial streams, as shown in Figure 1. A specific embodiment is as follows:

[0048] 1. Video processing

[0049] (1) From the input video, several video frames in positive (forward temporal) order are randomly selected for optical flow extraction to form multiple optical flow blocks, as follows:

[0050] Randomly select 8 video frames from the input video, perform bidirectional optical flow extraction on these 8 frames, and stack the results in order to obtain 8 optical flow blocks, each holding 8 optical flow maps. The optical flow is calculated as follows:

[0051]

[0052] where,

[0053] u=[1:w],v=[1:h],k=[-L+1:L...
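The sampling-and-stacking procedure of paragraph [0050] can be sketched in plain Python. Since the patent's exact flow formula is truncated above, `toy_flow` below is a labeled stand-in (simple brightness differences), and the per-block layout (a forward and a backward flow map per sampled frame) is an assumption about the "bidirectional" stacking, not a claim about the patent's exact implementation:

```python
import random

def toy_flow(prev, curr):
    """Stand-in for a real optical-flow estimator (the patent's exact
    formula is not reproduced here): returns per-pixel (u, v) components,
    approximated here by simple finite differences of brightness."""
    h, w = len(prev), len(prev[0])
    u = [[curr[y][x] - prev[y][x] for x in range(w)] for y in range(h)]
    v = [[prev[y][x] - curr[y][x] for x in range(w)] for y in range(h)]
    return u, v

def extract_flow_blocks(video, n_frames=8):
    """Randomly pick n_frames interior frames, keep them in temporal
    (positive) order, compute bidirectional flow around each, and
    collect the results as optical flow blocks."""
    idx = sorted(random.sample(range(1, len(video) - 1), n_frames))
    blocks = []
    for i in idx:
        fwd = toy_flow(video[i], video[i + 1])   # forward flow
        bwd = toy_flow(video[i], video[i - 1])   # backward flow
        blocks.append({"forward": fwd, "backward": bwd})
    return blocks
```

In practice the stand-in estimator would be replaced by a dense optical flow method (e.g. a TV-L1 or Farneback implementation) and the stacked blocks fed to the 3D convolutional temporal stream.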



Abstract

The invention discloses a double-flow convolution behavior recognition method based on 3D time flow and parallel spatial flow. The method comprises the following steps: first, extracting optical flow blocks from an input video; second, segmenting the input video, extracting video frames, and cropping out the human body part; then inputting the optical flow blocks into a 3D convolutional neural network and the cropped frames into a parallel spatial-flow convolutional network; finally, fusing the classification results of the parallel spatial streams, splicing them with the scores of the time stream to form a fully connected layer, and outputting the recognition result through an output layer. According to the invention, human-body cropping and the parallel spatial-flow network perform single-frame recognition, improving single-frame accuracy in the spatial domain; the 3D convolutional neural network extracts action features from the optical flow, improving the recognition accuracy of the time-flow part; and a final single-layer neural network performs decision fusion combining the spatial appearance features with the temporal action features, improving the overall recognition effect.
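The decision-fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: averaging is assumed as the fusion rule for the parallel spatial streams (the abstract does not specify one), and `weights`/`bias` stand for the learned parameters of the final single fully connected layer:

```python
def fuse_scores(spatial_scores, temporal_scores, weights, bias):
    """Fuse per-stream class scores: average the parallel spatial
    streams, splice the result with the temporal-stream scores, and
    apply one fully connected layer (out = W @ fused + b)."""
    n_classes = len(spatial_scores[0])
    # fuse the parallel spatial streams (assumed: simple averaging)
    spatial_avg = [sum(s[c] for s in spatial_scores) / len(spatial_scores)
                   for c in range(n_classes)]
    # splice spatial and temporal scores into one input vector
    fused = spatial_avg + temporal_scores
    # single fully connected layer producing the final class scores
    return [sum(w_i * x_i for w_i, x_i in zip(row, fused)) + b
            for row, b in zip(weights, bias)]
```

The output layer (e.g. a softmax over these scores) would then yield the recognized action class.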

Description

Technical field

[0001] The invention relates to the technical field of human behavior recognition, in particular to a dual-stream convolution behavior recognition method based on 3D time stream and parallel space stream.

Background technique

[0002] With the development of Internet multimedia, and especially the rapid commercialization of 5G technology, large-scale camera networks generate and transmit enormous numbers of videos every moment, which puts great pressure on public security monitoring. To cope with this information explosion, analyzing and processing the video is necessary and urgent. Human action recognition in videos is an important branch of computer vision and is crucial for public safety analysis and smart city construction.

[0003] Before the convolutional neural network was proposed in 2012, video behavior recognition algorithms were mainly traditional methods. Among them, the improved dense optical flow method achieved the best re...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/32, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/049, G06N3/08, G06V40/20, G06V20/49, G06V20/41, G06V10/255, G06V2201/07, G06N3/045, G06F18/253
Inventors: 熊海良, 周智伟, 许玉丹, 王宏蕊, 张雅琪, 沈航宇
Owner SHANDONG UNIV