
Dynamic gesture recognition method based on two-pass deep convolutional neural network

A deep convolutional neural network technology, applied in the field of computer vision and machine learning. It addresses the sensitivity of prior methods to variations in gesture speed, direction, and size, and achieves the effects of richer feature information, elimination of such variations, and a high recognition rate.

Active Publication Date: 2019-12-10
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] Methods that manually extract three-dimensional features have significant limitations: they usually require prior knowledge, experience, and extensive manual tuning, and the recognition rate of the algorithm model is easily affected by variations in the speed, direction, and size of the dynamic gesture.

Method used



Examples


Embodiment

[0022] As shown in Figure 1, the dynamic gesture recognition method of this embodiment, based on a dual-channel deep convolutional neural network, comprises the following steps:

[0023] S1. Collect the image sequences of a dynamic gesture from a depth camera, comprising a depth image sequence and a color (RGB) image sequence;

[0024] The data output by the depth camera includes depth and color (RGB) image sequences; the resolution can be 640×480 or 320×240 pixels.

[0025] S2. Perform preprocessing operations on the depth image sequence and the color image sequence to obtain a 16-frame depth foreground image sequence and a 16-frame color foreground image sequence of the dynamic gesture;

[0026] The preprocessing of an image sequence includes obtaining the foreground image sequence by subtracting the pixel values of consecutive frames (the calculation formula is given in (1)), which repre...
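The frame-differencing step described above can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: the threshold value and toy data are assumptions, and formula (1) is truncated in the source, so a plain absolute difference with thresholding is used as the stand-in.

```python
import numpy as np

def foreground_sequence(frames, threshold=15):
    """Subtract consecutive frames to suppress the static background,
    keeping only pixels that changed between frames (the moving hand)."""
    frames = np.asarray(frames, dtype=np.int16)  # signed type so subtraction cannot wrap
    diffs = np.abs(frames[1:] - frames[:-1])     # |I_t - I_(t-1)| for each adjacent pair
    return np.where(diffs > threshold, diffs, 0).astype(np.uint8)

# Toy sequence: a static background with a bright block that moves each frame.
seq = np.zeros((17, 8, 8), dtype=np.uint8)
for t in range(17):
    seq[t, 2:4, t % 6] = 200  # the "hand" shifts one column per frame

fg = foreground_sequence(seq)  # 17 input frames yield 16 foreground frames
```

Differencing N+1 raw frames yields the N-frame foreground sequence, which matches the 16-frame sequences fed to the network in step S2.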



Abstract

The invention discloses a dynamic gesture recognition method based on a dual-channel deep convolutional neural network. First, a depth image sequence and a color image sequence of a dynamic gesture are collected from a depth camera and preprocessed to obtain a depth foreground image sequence and a color foreground image sequence of the dynamic gesture. A dual-channel deep convolutional neural network is then designed; the depth foreground image sequence and the color foreground image sequence are input into the network, which extracts the temporal and spatial features of the dynamic gesture in depth space and color space. These features are fused and input to a softmax classifier, whose output gives the final gesture recognition result. By adopting a dual-channel deep convolutional neural network model that extracts and fuses features in the color and depth spaces of dynamic gestures, the invention greatly improves the recognition rate of dynamic gestures.
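The dual-channel pipeline in the abstract (per-channel feature extraction, feature fusion, softmax classification) can be sketched schematically. This is only a shape-level illustration under assumed dimensions: the `channel_features` function is a hypothetical stand-in for the patent's deep convolutional channels, and the random weights, 32×32 resolution, and 10 gesture classes are not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over class scores.
    e = np.exp(z - z.max())
    return e / e.sum()

def channel_features(seq, w):
    # Hypothetical stand-in for one deep CNN channel: flatten the
    # 16-frame sequence and project it to a 64-dim feature vector.
    return np.tanh(seq.reshape(-1) @ w)

n_classes = 10
depth_seq = rng.random((16, 32, 32))    # 16-frame depth foreground sequence
rgb_seq = rng.random((16, 32, 32, 3))   # 16-frame color foreground sequence

w_depth = rng.standard_normal((depth_seq.size, 64)) * 0.01
w_color = rng.standard_normal((rgb_seq.size, 64)) * 0.01

# Feature fusion: concatenate the depth-space and color-space features.
fused = np.concatenate([channel_features(depth_seq, w_depth),
                        channel_features(rgb_seq, w_color)])

w_out = rng.standard_normal((fused.size, n_classes)) * 0.01
probs = softmax(fused @ w_out)   # softmax classifier over gesture classes
pred = int(np.argmax(probs))     # final gesture recognition result
```

In the actual method each channel is a deep convolutional network trained end to end; the sketch only shows how the two feature vectors are fused before the softmax layer.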

Description

Technical field

[0001] The invention belongs to the technical field of computer vision and machine learning, and in particular relates to a dynamic gesture recognition method based on a dual-channel deep convolutional neural network.

Background technique

[0002] Gesture recognition is divided into static gesture recognition and dynamic gesture recognition. Compared with static gesture recognition, dynamic gesture recognition offers richer interaction methods and a richer interactive experience. Dynamic gesture recognition has become an important research hotspot in computer vision because the technology can be applied to many real-world fields, such as robot navigation, video surveillance, and games. Although the industry has invested a great deal of time and effort in vision-based dynamic gesture recognition over the past few decades, it remains a challenging research direction, because there are many categories in visual dynamic ge...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC (8): G06K9/00; G06K9/40; G06K9/62; G06N3/04; G06T7/55
CPC: G06T7/55; G06V40/28; G06V10/30; G06N3/045; G06F18/241
Inventor: 罗阳星, 徐向民, 邢晓芬
Owner: SOUTH CHINA UNIV OF TECH