
Deep information based sign language recognition method

A depth-information-based recognition method, applied in character and pattern recognition, instruments, and computing, addressing problems such as target tracking and segmentation

Active Publication Date: 2015-10-28
SHANDONG UNIV
Cites: 4 · Cited by: 21

AI Technical Summary

Problems solved by technology

However, this method is easily affected by factors such as illumination, and its target tracking and segmentation are considerably more complicated, involving many key technologies in the field of digital image processing.



Examples


Embodiment 1

[0066] A sign language recognition method based on depth information comprises the following specific steps:

[0067] (1) Multi-threshold gesture segmentation based on depth information: use the Kinect camera to obtain the user's depth data and skeleton data, perform multi-threshold segmentation of the gesture, and obtain a scale-normalized binary image of the right hand; at the same time, extract the skeletal space coordinates of four skeleton points: the right hand, the right index finger, the right wrist, and the right shoulder;
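As a sketch of the depth-band idea behind step (1), the mask below keeps only pixels whose depth lies near the tracked wrist depth. The function name, band width, and toy frame are illustrative assumptions, not the patent's actual thresholds.

```python
import numpy as np

def segment_hand(depth_mm, wrist_depth_mm, band_mm=80):
    """Keep pixels within a depth band around the tracked wrist.

    depth_mm: 2-D array of per-pixel depth in millimetres (0 = no reading).
    Returns a binary (0/1) hand mask.
    """
    near = wrist_depth_mm - band_mm
    far = wrist_depth_mm + band_mm
    mask = (depth_mm >= near) & (depth_mm <= far)
    return mask.astype(np.uint8)

# toy 4x4 depth frame: hand at ~900 mm, torso at ~1500 mm, background at 3000 mm
depth = np.array([[3000, 3000,  900,  910],
                  [3000, 1500,  905,  900],
                  [1500, 1500, 3000, 3000],
                  [1500, 3000, 3000, 3000]])
hand = segment_hand(depth, wrist_depth_mm=900)
```

In a real pipeline the wrist depth would come from the skeleton stream, so the band follows the hand as it moves.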

[0068] (2) Using a 5×5 window, perform median filtering and a morphological closing operation (i.e., smoothing) on the binary right-hand image obtained in step (1), and extract the gesture contour with the nearest-neighbor method;
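Step (2)'s smoothing can be sketched with SciPy's morphology routines; the boundary-via-erosion contour below is a simple stand-in for the patent's nearest-neighbor contour extraction, and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def smooth_and_outline(mask):
    """Median-filter and morphologically close a binary hand mask with a
    5x5 window, then take the contour as the mask minus its erosion."""
    smoothed = ndimage.median_filter(mask, size=5)
    closed = ndimage.binary_closing(smoothed, structure=np.ones((5, 5)))
    interior = ndimage.binary_erosion(closed)
    contour = closed & ~interior          # boundary pixels only
    return closed.astype(np.uint8), contour.astype(np.uint8)

mask = np.zeros((11, 11), dtype=np.uint8)
mask[3:8, 3:8] = 1                        # toy 5x5 "hand" blob
closed, contour = smooth_and_outline(mask)
```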

[0069] (3) Obtain the feature values of the hand shape with the improved SURF algorithm;

[0070] (4) Motion trajectory feature extraction based on angular velocity and distance:...
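Since step (4) is truncated here, the following is only a guess at the flavor of angular-velocity-and-distance features: per-frame displacement lengths and the rate of change of the movement direction. The function name and the 30 fps frame rate are hypothetical.

```python
import numpy as np

def trajectory_features(points, dt=1 / 30):
    """points: (N, 2) array of wrist positions, one row per frame.
    Returns per-step distances and angular velocities (rad/s)."""
    diffs = np.diff(points, axis=0)               # frame-to-frame displacement
    dists = np.linalg.norm(diffs, axis=1)         # distance travelled per step
    angles = np.arctan2(diffs[:, 1], diffs[:, 0]) # direction of each step
    ang_vel = np.diff(angles) / dt                # change of direction per second
    return dists, ang_vel

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # right, then up
d, w = trajectory_features(pts)
```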

Embodiment 2

[0075] The depth-information-based sign language recognition method according to Embodiment 1, differing in that the multi-threshold gesture segmentation based on depth information comprises the following specific steps:

[0076] a. Perform an AND operation between the user depth data obtained by the Kinect camera and the PlayerIndexBitmask (default value 7) to obtain the user index value; according to the user index value, separate the human body from the background;

[0077] b. When there are multiple human bodies within the effective line of sight of the Kinect camera (1.2 m-3.5 m), select a threshold T of 2.5 m-3.5 m to further segment the depth image obtained in step a;

[0078] c. In the depth image processed in step b, if there are still multiple human bodies within the threshold T, calculate the average value of the depth data of each human body, an...
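Steps a-c can be sketched as picking, among the labelled bodies that remain inside the threshold, the one with the smallest mean depth. Since paragraph [0078] is truncated, the helper below is only an illustrative reading of step c, with made-up names and data.

```python
import numpy as np

def nearest_body(depth_mm, labels):
    """Among labelled bodies (label 0 = background), return the label
    whose pixels have the smallest mean depth, i.e. the closest user."""
    candidates = [l for l in np.unique(labels) if l != 0]
    means = {l: depth_mm[labels == l].mean() for l in candidates}
    return min(means, key=means.get)

# toy frame: body 1 averages 2600 mm, body 2 averages 3000 mm
depth = np.array([[2600, 2600, 3000, 3000],
                  [2600, 2600, 3000, 3000]])
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2]])
```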

Embodiment 3

[0087] The depth-information-based sign language recognition method according to Embodiment 1, differing in that in step a each depth pixel consists of 2 bytes (16 bits): the upper 13 bits represent the user's distance to the Kinect camera, and the lower 3 bits represent the user index value. Converting the binary user index value to decimal gives a value of 0-7: if the user index value is 0, the pixel is background; if it is 1 to 7, the pixel belongs to a human body.
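The 16-bit layout described in Embodiment 3 (upper 13 bits distance, lower 3 bits player index) decodes as in this sketch; the sample pixel value is made up for illustration.

```python
def decode_pixel(raw):
    """Kinect v1 depth pixel: upper 13 bits = distance in mm,
    lower 3 bits = player index (0 = background, 1-7 = a tracked user)."""
    depth_mm = raw >> 3      # drop the 3-bit player index
    index = raw & 0b111      # AND with PlayerIndexBitmask = 7
    return depth_mm, index

raw = (1850 << 3) | 5        # hypothetical pixel: 1850 mm, player 5
```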


PUM

No PUM available.

Abstract

The invention discloses a depth information based sign language recognition method. The method comprises the steps of: (1) recognition of a single gesture: dividing a sign language gesture into a hand shape and a motion track; performing depth-information-based multi-threshold gesture segmentation, and obtaining the feature value of the hand shape with an improved SURF algorithm; obtaining the feature value of the motion track from angular-velocity- and distance-based motion characteristics, and performing gesture recognition with the extracted hand-shape and motion-track feature values as the input of a BP neural network; and (2) correction of a gesture sequence: according to the recognized gestures, performing automatic reasoning and correction on gestures that have not been correctly recognized or that are ambiguous by using a Bayesian algorithm. The method performs gesture segmentation using the depth information obtained by a Kinect camera, thereby overcoming the interference caused by illumination in conventional vision-based gesture segmentation and improving the naturalness of human-computer interaction. The improved SURF algorithm reduces the amount of calculation and improves the recognition speed.
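The abstract's Bayesian correction step can be caricatured as choosing the gesture that maximizes prior × likelihood given the observed evidence. The gesture names and probabilities below are invented, and the patent's actual reasoning model is certainly richer than this toy.

```python
def correct_gesture(priors, likelihoods, observed):
    """Return argmax over gestures g of P(g) * P(observed | g)."""
    scores = {g: priors[g] * likelihoods[g].get(observed, 0.0) for g in priors}
    return max(scores, key=scores.get)

# hypothetical context-based priors and observation likelihoods
priors = {"thanks": 0.6, "goodbye": 0.4}
likelihoods = {"thanks": {"wave": 0.2}, "goodbye": {"wave": 0.7}}
```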

Description

technical field [0001] The invention relates to a sign language recognition method based on depth information, and belongs to the technical field of intelligent perception and intelligent computing. Background technique [0002] Sign language uses gestures to simulate actions, forming certain meanings or words from images or syllables according to the changes of the gestures. It is a language by which people with hearing impairments communicate and exchange ideas with one another, and for the hearing-impaired it is the primary communication tool. Sign language is mainly divided into finger language and sign language proper. Finger language is a form of expression in which changes of the fingers represent letters and words are spelled out in pinyin order; there are one-handed and two-handed finger alphabets. Sign language proper expresses thoughts and communicates through hand movements and facial expressions. There are more than 20 million deaf-mute...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V40/107
Inventor: 许宏吉, 曹海波, 刘琚, 党娟, 李石, 李文强
Owner SHANDONG UNIV