
A monocular static gesture recognition method based on multi-feature fusion

A multi-feature fusion gesture recognition technology, applied in the field of image recognition, which solves the problems of low recognition accuracy, reliance on a single gesture feature, and limited adoption of existing methods, and achieves the effects of easy promotion and application, high recognition accuracy, and low equipment cost.

Publication Date: 2019-01-11 (Inactive)
Applicant: SOUTH CHINA UNIV OF TECH
Cites: 5 · Cited by: 19

AI Technical Summary

Problems solved by technology

Because the Kinect camera is expensive, it is not widely used, so gesture recognition methods that depend on it cannot be popularized and applied.
Existing monocular static gesture recognition methods use a single gesture feature, which leads to weak robustness of the gesture recognition system and low recognition accuracy.



Examples


Embodiment

[0071] As shown in Figure 1, the monocular static gesture recognition method based on multi-feature fusion proceeds as follows: a gesture image acquisition step, an image preprocessing step, a gesture feature extraction step, and a gesture recognition step.
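The four stages can be pictured as the following skeleton (a minimal sketch; the function names acquire_frame, preprocess, extract_features, and classify are illustrative stand-ins, not names used in the patent):

```python
# Hypothetical end-to-end driver mirroring the four stages named above.
import numpy as np

def acquire_frame() -> np.ndarray:
    """S1: return an RGB frame from the monocular camera."""
    raise NotImplementedError

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """S2: skin color segmentation, morphology, palm location, arm removal."""
    raise NotImplementedError

def extract_features(mask: np.ndarray) -> np.ndarray:
    """S3: perimeter-to-area ratio, Hu moments, Fourier descriptors."""
    raise NotImplementedError

def classify(features: np.ndarray) -> int:
    """S4: trained BP (back-propagation) network returns the gesture class."""
    raise NotImplementedError

def recognize_once() -> int:
    # The four steps run in sequence on a single captured frame.
    return classify(extract_features(preprocess(acquire_frame())))
```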

[0072] S1, the gesture image acquisition step:

[0073] A monocular camera is used to collect RGB images containing gestures. The camera should be located directly in front of the human body, so that in the collected images the face and the hand are the two largest of all skin-colored and skin-like regions.
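A minimal acquisition sketch under this setup (OpenCV is assumed as the capture library, and camera index 0 is an assumption):

```python
import cv2

cap = cv2.VideoCapture(0)            # the monocular camera, assumed at index 0
ok, frame_bgr = cap.read()           # OpenCV returns frames in BGR channel order
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the camera")
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
# With the camera directly in front of the user, the face and the hand are
# expected to be the two largest skin-colored regions in frame_rgb.
```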

[0074] S2, the image preprocessing step:

[0075] As shown in Figure 2, the image preprocessing step proceeds as follows:

[0076] S201, skin color segmentation, the specific process is as follows:

[0077] S2011, color space conversion: convert the input image from the RGB color space to the YCr'Cb' color space; the conversion formula is as follows:

[0078] y=0.299×r+0.587×g+0.1...
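The exact YCr'Cb' formulas are cut off above, so the following sketch substitutes OpenCV's standard YCrCb conversion and a commonly used Cr/Cb skin range; the function name skin_mask, the thresholds, and the morphological kernel size are assumptions, not values from the patent:

```python
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary (uint8) mask of skin-colored pixels."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological open/close to suppress skin-like noise and fill small holes,
    # in the spirit of the morphological processing described in the abstract.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```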



Abstract

The invention discloses a monocular static gesture recognition method based on multi-feature fusion. The method comprises the following steps: gesture image collection: collecting an RGB image containing a gesture with a monocular camera; image preprocessing: using human skin color information for skin color segmentation, applying morphological processing combined with the geometric characteristics of the hand to separate the hand from a complex background, and locating the palm center and removing the arm region through a distance transformation operation to obtain a binary gesture image; gesture feature extraction: calculating the perimeter-to-area ratio, Hu moments, and Fourier descriptor features of the gesture and forming a gesture feature vector; gesture recognition: using the input gesture feature vectors to train a BP neural network to achieve static gesture classification. The invention combines skin color information with the geometric characteristics of the hand, and realizes accurate gesture segmentation under monocular vision by means of morphological processing and distance transformation. By combining multiple gesture features and training a BP neural network, a gesture classifier with strong robustness and high accuracy is obtained.
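The palm location, feature vector, and classifier described in the abstract could look roughly like the sketch below; OpenCV and scikit-learn's MLPClassifier stand in for the patent's own implementation and BP network, and the number of Fourier descriptors and the hidden-layer size are assumed values:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for a BP network

def palm_center(mask: np.ndarray) -> tuple:
    """Locate the palm center as the point farthest from the background,
    i.e. the maximum of the distance transform, as the abstract describes.
    `mask` is a binary uint8 gesture image."""
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, _, _, max_loc = cv2.minMaxLoc(dist)
    return max_loc  # (x, y)

def gesture_features(mask: np.ndarray, n_fourier: int = 10) -> np.ndarray:
    """Build the fused feature vector: perimeter/area ratio, 7 Hu moments,
    and n_fourier Fourier descriptors of the hand contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)          # the hand region
    perimeter = cv2.arcLength(contour, closed=True)
    area = cv2.contourArea(contour)
    ratio = np.array([perimeter / max(area, 1.0)])         # perimeter-to-area ratio
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()     # 7 Hu moments
    # Fourier descriptors of the boundary, made scale-invariant by normalising
    # with the first non-DC coefficient (a common convention, assumed here).
    pts = contour[:, 0, 0] + 1j * contour[:, 0, 1]
    spectrum = np.fft.fft(pts)
    fd = np.abs(spectrum[1:n_fourier + 1]) / (np.abs(spectrum[1]) + 1e-9)
    return np.concatenate([ratio, hu, fd])

# Training on pre-computed feature vectors X and gesture labels y
# (hidden-layer size and iteration budget are illustrative choices):
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
# pred = clf.predict([gesture_features(new_mask)])
```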

Description

Technical Field

[0001] The invention relates to the field of image recognition, and in particular to a monocular static gesture recognition method based on multi-feature fusion.

Background Technique

[0002] Gesture, as a natural and intuitive human-computer interaction mode, has gradually developed into a research hotspot in the field of human-computer interaction and has been widely used in somatosensory games, robot control, computer control, and so on. Compared with data-glove-based gesture recognition technology, vision-based gesture recognition technology has the advantages of low equipment requirements and natural interaction, and has become the mainstream method of gesture recognition.

[0003] Gesture segmentation is a key link in vision-based gesture recognition. The effect of segmentation affects feature extraction, which in turn affects the gesture classification results. In the static gesture recognition method based on monocular vision, the result of gesture segmentation i...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/46, G06K9/62, G06T7/11, G06T7/136
CPC: G06T7/11, G06T7/136, G06V40/113, G06V10/56, G06F18/241
Inventors: 周智恒, 许冰媛
Owner: SOUTH CHINA UNIV OF TECH