
Gesture recognition method based on convolutional neural network and 3D estimation

The invention relates to gesture recognition based on a convolutional neural network and 3D estimation. It addresses the problems of traditional methods, namely a model parameter scale limited by manual parameter setting, large uncertainty in hand-crafted features, and restricted application scenarios, and achieves the effect of improved precision and reliability.

Pending Publication Date: 2019-12-10
CHINA UNIV OF GEOSCIENCES (WUHAN)


Problems solved by technology

[0009] The above traditional machine learning methods have achieved considerable development, but deficiencies remain: (1) the feature extraction of most algorithms or models is designed for specific gestures, and feature selection depends on the researcher's own experience, which introduces large uncertainty; the manual parameter-setting mechanism limits the scale of the model parameters and restricts the application scenarios; (2) the samples used are neither numerous nor diverse enough, so the robustness and adaptability of the algorithms are insufficient.
2D joint-point estimation works well for simple, clearly visible hand poses, but it cannot cope with self-occlusion of the fingers or with finger overlaps caused by similar skin colors.
Effectively combining hand pose estimation with deep learning and applying it to gesture recognition is both challenging and novel.

Method used



Examples


Embodiment 1

[0055] Example 1: Reconstruction results of test samples after CBN-VAE network training

[0056] The data set used in this embodiment is a virtual data set (Rendered Hand Pose Dataset, RHD) synthesized from 3D animated character models and natural backgrounds. 16 characters are randomly selected, and the samples of the 31 actions they perform form the training set, which contains 41,258 images of 320×320×3, denoted RHD_train. The samples of the remaining 4 characters performing 8 other actions form the test set, which contains 2,728 images of 320×320×3, denoted RHD_test.
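The key property of the split above is that it is disjoint by character: no character seen in training appears in testing. A minimal sketch of how such a split could be organized is below; the sample metadata layout (`character`/`action` keys) is an assumption for illustration, not the RHD dataset's actual API.

```python
# Sketch of a character-disjoint train/test split like the RHD split above.
# The metadata layout is hypothetical; RHD itself ships pre-split.
import random

def split_by_character(samples, n_train_chars=16, seed=0):
    """samples: list of dicts with 'character' and 'action' keys."""
    rng = random.Random(seed)
    characters = sorted({s["character"] for s in samples})
    rng.shuffle(characters)
    train_chars = set(characters[:n_train_chars])
    train = [s for s in samples if s["character"] in train_chars]
    test = [s for s in samples if s["character"] not in train_chars]
    return train, test

# toy usage: 20 characters performing 2 actions each
samples = [{"character": c, "action": a} for c in range(20) for a in range(2)]
train, test = split_by_character(samples)
print(len(train), len(test))  # 32 8
```

Splitting by character (rather than randomly over images) is what makes the test a measure of generalization to unseen hands.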

[0057] The SegNet-base and SegNet-prop networks are trained, and their performance is evaluated on the RHD_test set; apart from resizing, no other operation is applied to the test data. The evaluation results are shown in Table 5.
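The patent does not state the exact segmentation metric used in Table 5; a common choice for binary hand masks is pixel intersection-over-union, sketched below as a hedged stand-in.

```python
# Hedged sketch: pixel IoU for binary hand masks, one common way a
# segmentation network like those above could be scored.
import numpy as np

def mask_iou(pred, gt):
    """pred, gt: boolean HxW arrays (True = hand pixel)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else float(inter) / float(union)

# toy example: predicted mask is a subset of the ground-truth mask
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 pixels
gt = np.zeros((4, 4), dtype=bool); gt[1:4, 1:4] = True      # 9 pixels
print(round(mask_iou(pred, gt), 3))  # 4/9 -> 0.444
```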

[0058] Table 5 RHD dataset performance comparison

[0059]

[0060] The evaluation results show that the Se...

Embodiment 2

[0061] Embodiment 2: train on the RHD_train and STB_train data sets respectively, then evaluate on the RHD_test and STB_test data sets; the evaluation results are shown in Table 6 and Figure 6. Table 6 shows the EPE differences between the two; the endpoint errors in the table are in pixels. The AUC over the error-threshold range 0 to 30 is also computed. The arrows indicate how performance relates to the value: an upward arrow means a larger value is better, a downward arrow means a smaller value is better.

[0062] Table 6 DetectNet performance evaluation

[0063]
            EPE mean (px)↓   EPE median (px)↓   AUC(0~30)↑
RHD_test    11.619           9.427              0.814
STB_test    8.823            8.164              0.917

[0064] According to the PCK curves of the DetectNet network under different endpoint error thresholds, it can be seen that when the error threshold is less than 15 pixels, the PCK value of the model increases rapidly; when the err...
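The three metrics used above can be stated compactly: per-joint endpoint error (EPE) is the Euclidean distance between predicted and ground-truth joints in pixels, PCK at a threshold is the fraction of joints with error below that threshold, and the AUC is the normalized area under the PCK curve over thresholds 0 to 30 px. A minimal sketch:

```python
# Hedged sketch of the evaluation metrics in Table 6: EPE, PCK, and the
# AUC of the PCK curve over the 0-30 px threshold range described above.
import numpy as np

def epe(pred, gt):
    """pred, gt: (N, 2) arrays of 2D joint positions; returns per-joint errors (px)."""
    return np.linalg.norm(pred - gt, axis=-1)

def pck(errors, threshold):
    """Fraction of joints with endpoint error at or below the threshold."""
    return float(np.mean(errors <= threshold))

def auc_pck(errors, t_max=30.0, steps=301):
    """Area under the PCK curve over [0, t_max], normalized to [0, 1]."""
    thresholds = np.linspace(0.0, t_max, steps)
    curve = np.array([pck(errors, t) for t in thresholds])
    widths = np.diff(thresholds)
    area = np.sum((curve[:-1] + curve[1:]) / 2.0 * widths)  # trapezoidal rule
    return float(area / t_max)

gt = np.array([[0.0, 0.0], [10.0, 0.0]])
pred = np.array([[3.0, 4.0], [10.0, 5.0]])  # errors: 5 px and 5 px
errs = epe(pred, gt)
print(errs.tolist())   # [5.0, 5.0]
print(pck(errs, 15))   # 1.0
print(auc_pck(errs))   # roughly (30 - 5) / 30
```

The rapid rise of the PCK curve below 15 px noted in the text corresponds directly to a large AUC contribution from the low-threshold region.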

Embodiment 3

[0065] Example 3: Evaluation of a complete 2D hand joint detection network

[0066] A complete hand joint detection network is formed by cascading the hand segmentation network SegNet-prop and the joint detection network DetectNet; it is denoted PoseNet2D here. First, the hand segmentation network is trained only on RHD_train, i.e., the conclusion of the previous example is adopted. A well-performing DetectNet model is then obtained on the Joint_train training set, and finally the performance of the PoseNet2D network is tested on the RHD_test, STB_test, and Dexter data sets. As shown in Figure 7, the solid curves are the PCK curves of the DetectNet model on the RHD_test and STB_test sets, and the dotted curves are the PCK curves of the PoseNet2D model on the RHD_train, STB_test, and Dexter test sets.
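The cascade just described can be sketched as a two-stage pipeline: the segmentation stage produces a hand mask, the masked image is fed to the detection stage. The stand-in stages below are simple callables, not the patent's actual architectures; only the data flow follows the text.

```python
# Minimal sketch of the SegNet-prop -> DetectNet cascade (PoseNet2D).
# The two stages here are toy stand-ins; only the wiring is from the text.
import numpy as np

class PoseNet2D:
    def __init__(self, segment, detect):
        self.segment = segment  # image -> boolean HxW hand mask
        self.detect = detect    # masked image -> (N, 2) joint coordinates

    def __call__(self, image):
        mask = self.segment(image)
        masked = image * mask[..., None]  # zero out non-hand pixels
        return self.detect(masked)

# toy stand-ins: threshold-based "segmentation", nonzero-pixel "detection"
segment = lambda img: img.mean(axis=-1) > 0.5
detect = lambda masked: np.argwhere(masked.any(axis=-1)).astype(float)

img = np.zeros((8, 8, 3)); img[2:4, 2:4] = 1.0  # a 2x2 bright "hand"
joints = PoseNet2D(segment, detect)(img)
print(joints.shape)  # (4, 2)
```

Masking before detection is what lets the joint detector ignore background clutter, at the cost that segmentation errors propagate to the second stage, which is why the PoseNet2D curves in Figure 7 sit below the DetectNet-only curves.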

[0067] The results of the examples show that: the PoseNet2D model is suitable for RHD and STB datasets, and performs well, a...



Abstract

The invention provides a gesture recognition method based on a convolutional neural network and 3D estimation. The method comprises the steps of: processing a to-be-recognized image with a SegNet-base network model to extract a hand mask feature map from the image; constructing a supervised deep convolutional network, DetectNet, and locating the hand joint points in the hand mask feature map to obtain a 2D-estimated gesture recognition result; and processing the 2D-estimated gesture recognition result with a PoseNormNet model based on canonical-frame and viewpoint estimation to obtain a 3D-estimated gesture recognition result.
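The abstract's three stages form a linear pipeline: hand mask, 2D joints, 3D lift. The sketch below shows only that data flow; the stage internals are placeholders, and the 21-joint hand model is an assumption (a common convention, not stated in the abstract).

```python
# Hedged sketch of the three-stage pipeline in the abstract:
# SegNet (hand mask) -> DetectNet (2D joints) -> PoseNormNet (3D lift).
# All stage bodies are placeholders; only the data flow follows the text.
import numpy as np

def run_pipeline(image, segnet, detectnet, posenormnet):
    mask = segnet(image)                # H x W hand mask
    joints_2d = detectnet(image, mask)  # (N, 2) pixel coordinates
    joints_3d = posenormnet(joints_2d)  # (N, 3) canonical-frame coordinates
    return joints_3d

# placeholder stages with the right shapes (21 joints assumed)
segnet = lambda img: np.ones(img.shape[:2], dtype=bool)
detectnet = lambda img, m: np.zeros((21, 2))
posenormnet = lambda j2d: np.concatenate([j2d, np.zeros((len(j2d), 1))], axis=1)

out = run_pipeline(np.zeros((320, 320, 3)), segnet, detectnet, posenormnet)
print(out.shape)  # (21, 3)
```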

Description

technical field [0001] The invention relates to the technical field of artificial intelligence, and in particular to a gesture recognition method based on a convolutional neural network and 3D estimation. Background technique [0002] A gesture is an action performed by the hand, including palm, finger, or arm movements, and gesture recognition, as the core of new human-computer interaction, is central to the entire interactive system. A gesture-controlled interface is intuitive and easy to operate, greatly improves the user experience, and has wide applications in daily life, including the fields of virtual reality, smart homes, and medicine. Gesture recognition is also of great practical significance for assisted driving, sign language translation, and other fields. The recognition of simple static hand movements or complex dynamic hand movements is an important interface for human-...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V40/113, G06V40/28, G06F18/24, G06F18/214
Inventor 陈分雄, 蒋伟, 王晓莉, 熊鹏涛, 韩荣, 叶佳慧, 王杰
Owner CHINA UNIV OF GEOSCIENCES (WUHAN)