Improved gesture image feature extraction method based on DenseNet network

A gesture image feature extraction technology applied to neural learning methods, biometric recognition, and biological neural network models. It addresses the difficulty of accurately recognizing gesture targets of different proportions in complex backgrounds, and the feature-information redundancy that dense connections can cause, achieving suppression of overfitting, reduced feature redundancy, and faster network training.

Pending Publication Date: 2022-03-11
ZHEJIANG SCI-TECH UNIV

AI Technical Summary

Problems solved by technology

[0003] In view of the difficulty that features obtained by traditional gesture image feature fusion and recognition methods have in accurately identifying images containing gesture targets of different proportions in complex backgrounds, and the information redundancy that dense connections in the DenseNet network may cause, the purpose of the present invention is to propose an effective feature extraction and fusion method that adapts to images with gesture targets of different proportions, meeting the need for high-precision recognition of such targets in complex backgrounds.




Embodiment Construction

[0056] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0057] The flow chart of the technical scheme of the present invention is shown in Figure 1.

[0058] The data set of the present invention adopts the open-source ASL (American Sign Language) data set; part of the data is shown in Figure 2. It contains gesture images with different gesture-target proportions, angles, lighting, and background environments, comprising 28 gesture categories and 1 non-gesture category, for a total of 29 classification categories.

[0059] 1) Normalize the size of the original gesture image to a 224×224×3 RGB image, then normalize its pixel values by mapping each integer in [0, 255] to a floating-point value in [0, 1]; the result is used as input to the neural network.
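The normalization in step 1) can be sketched as follows. This is a minimal sketch using NumPy; the function name is illustrative, and resizing to 224×224 is assumed to have already been done by an image library, since the patent does not specify one.

```python
import numpy as np

def normalize_gesture_image(img_uint8: np.ndarray) -> np.ndarray:
    """Map a 224x224x3 uint8 RGB gesture image from [0, 255] to float32 in [0, 1]."""
    assert img_uint8.dtype == np.uint8 and img_uint8.shape == (224, 224, 3)
    return img_uint8.astype(np.float32) / 255.0

# Example: a uniform white image maps to all ones.
x = normalize_gesture_image(np.full((224, 224, 3), 255, dtype=np.uint8))
```

The division by 255 keeps relative intensities intact while placing inputs on a scale that is friendlier to gradient-based training.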

[0060] 2) Input the standardized gesture image into the downsampling network shown in Figure 1, where the d...
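The role of the downsampling network, producing feature maps at several depths of which a shallow and a deep tensor are kept, can be illustrated with a stand-in 2×2 average pooling. This is a hedged sketch only: the patent's actual network uses strided convolutions whose details are truncated above.

```python
import numpy as np

def avg_pool2x2(x: np.ndarray) -> np.ndarray:
    """2x2 average pooling with stride 2: halves spatial resolution.
    A stand-in for the strided convolutions of the downsampling network."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# Repeated pooling yields feature maps at several depths; the method keeps both
# a shallow (higher-resolution) and a deep (lower-resolution) tensor.
img = np.random.rand(224, 224, 3).astype(np.float32)
shallow = avg_pool2x2(avg_pool2x2(img))      # 56x56 spatial resolution
deep = avg_pool2x2(avg_pool2x2(shallow))     # 14x14 spatial resolution
```

Retaining both tensors is what lets the later fusion stage see large gesture targets (deep, coarse features) and small ones (shallow, fine features) at the same time.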



Abstract

The invention discloses an improved gesture image feature extraction method based on a DenseNet network. A gesture is captured to obtain an original gesture image. Convolution downsampling is performed through a convolution downsampling network structure, and shallow and deep feature tensors are selected and input into DenseNet-B modules fused with a Drop-Path module, yielding two feature tensors. After fusion, a multi-scale fused feature tensor is obtained, compressed through a transition layer, and input into another DenseNet-B module fused with the Drop-Path module, producing a high-dimensional feature tensor containing multiple scales. The classification result is then obtained through a global average pooling layer, a fully connected layer, and a softmax classifier. Because feature tensors from different depths of the downsampling network are included, both large and small target objects can be recognized accurately; meanwhile, fusing the Drop-Path module into the DenseNet network effectively reduces the parameter count without loss of precision, speeds up model training, prevents overfitting, and improves gesture recognition accuracy.
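The Drop-Path mechanism the abstract refers to can be sketched as stochastic branch dropping, a common reading of Drop-Path / stochastic depth: during training, an entire branch's output is zeroed with some probability and rescaled otherwise, so the expected activation matches inference. The exact placement inside the DenseNet-B module is an assumption here, not given by the abstract.

```python
import numpy as np

def drop_path(branch_out: np.ndarray, p: float, training: bool, rng=None) -> np.ndarray:
    """Drop-Path sketch: with probability p, drop an entire branch's output
    during training; otherwise rescale by 1/(1-p) to keep the expectation."""
    if not training or p == 0.0:
        return branch_out  # inference: the branch always contributes
    rng = rng or np.random.default_rng()
    keep = rng.random() >= p
    return branch_out / (1.0 - p) if keep else np.zeros_like(branch_out)

x = np.ones((2, 2), dtype=np.float32)
eval_out = drop_path(x, 0.5, training=False)              # unchanged at inference
train_out = drop_path(x, 0.5, training=True,
                      rng=np.random.default_rng(0))       # either all 0.0 or all 2.0
```

Dropping whole paths, rather than individual units as in standard dropout, is what lets the densely connected blocks shed redundant connections and train faster, as the abstract claims.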

Description

Technical field

[0001] The invention relates to a gesture image extraction method, in particular to an improved gesture image feature extraction method based on a DenseNet network.

Background technique

[0002] Gesture recognition is a popular human-computer interaction method in recent years and is widely used in fields such as sign language recognition, intelligent monitoring, and virtual reality. Early gesture recognition mainly used wearable devices to directly detect the angle and spatial position of each joint of the hand and arm. Although these devices provide good detection results, they are expensive to deploy in common applications. In recent years, gesture recognition technology has shifted from wearable devices to machine-vision and deep-learning-based methods. Although gesture recognition technology has made great progress, many challenges remain in real environments, such as lighting, target...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V40/10, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045, G06F18/2415, G06F18/253
Inventor: 周梓豪, 田秋红
Owner: ZHEJIANG SCI-TECH UNIV