
Human motion counting method based on deep convolutional neural network

A neural network and deep convolution technology, applied in the field of human motion counting based on deep convolutional neural networks; it addresses the inability of prior methods to count exercise automatically, and achieves the effect of discouraging non-standard ("lazy") movements.

Inactive Publication Date: 2019-08-23
NANJING SILICON INTELLIGENCE TECH CO LTD

AI Technical Summary

Problems solved by technology

[0008] In order to overcome the deficiencies of the above-mentioned prior art, the present invention aims to provide a human body movement counting method based on a deep convolutional neural network, so as to solve the problem that the prior art cannot automatically and efficiently count multiple movement types such as push-ups, sit-ups and pull-ups.



Examples


Embodiment 1

[0056] In a preferred embodiment, after the K-means clustering of step (2), a data augmentation process is applied to the key action frames used as training samples; the augmentation methods include translation, rotation, scale transformation and color jitter.
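The K-means key-frame selection named in step (2) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent does not specify the feature representation or the number of clusters, so flattened pixels and k=3 (matching the three key postures per motion named in the abstract) are assumptions.

```python
import numpy as np

def kmeans(x, k, iters=20):
    """Plain Lloyd's K-means on feature matrix x of shape (n, d)."""
    # Deterministic init: spread initial centres over the time-ordered frames.
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None]) ** 2).sum(-1)  # (n, k) squared distances
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(0)
    return centers

def key_frames(frames, k=3):
    """Pick, for each cluster centre, the frame nearest to it as a key
    action frame. Flattened pixels serve as the feature vector here; the
    patent does not specify the features, so this is an assumption."""
    feats = frames.reshape(len(frames), -1).astype(float)
    centers = kmeans(feats, k)
    d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
    return sorted(d.argmin(0).tolist())  # one frame index per cluster
```

On a video whose frames fall into a few visually distinct postures, this returns one representative frame index per posture cluster.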

[0057] The purpose of the data augmentation process is to make the features learned by the neural network robust. For this purpose, 60% of all collected samples are used as the training set and 40% as the test set. To balance the training data, the number of training samples for each type of exercise is kept approximately equal.
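The four augmentations named above can be sketched with plain NumPy. The parameter ranges (shift of up to 10%, 80-100% crop scale, ±20% per-channel gain) are illustrative guesses, not values from the patent, and the rotation is restricted to multiples of 90° to stay dependency-free (arbitrary angles would need OpenCV or SciPy); square frames are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(frame):
    """Translation, rotation, scale transformation and color jitter on one
    square key-action frame of shape (H, W, 3), dtype uint8."""
    h, w, _ = frame.shape
    # 1. Translation: circular shift by up to ~10% of each dimension.
    dy = int(rng.integers(-h // 10, h // 10 + 1))
    dx = int(rng.integers(-w // 10, w // 10 + 1))
    out = np.roll(frame, (dy, dx), axis=(0, 1))
    # 2. Rotation: a random multiple of 90 degrees (assumption; the patent
    #    does not give the angle range).
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # 3. Scale transformation: centre-crop 80-100%, then nearest-neighbour
    #    resize back to the original size via integer indexing.
    s = float(rng.uniform(0.8, 1.0))
    ch, cw = int(h * s), int(w * s)
    y0, x0 = (out.shape[0] - ch) // 2, (out.shape[1] - cw) // 2
    crop = out[y0:y0 + ch, x0:x0 + cw]
    yi = np.arange(h) * ch // h
    xi = np.arange(w) * cw // w
    out = crop[yi][:, xi]
    # 4. Color jitter: random per-channel gain.
    gain = rng.uniform(0.8, 1.2, size=3)
    return np.clip(out * gain, 0, 255).astype(np.uint8)

def split_indices(n, train_frac=0.6):
    """Shuffle and split sample indices 60/40, as described in the text."""
    idx = rng.permutation(n)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]
```

In practice one would also apply `split_indices` per exercise class so that, as the text requires, each class contributes a roughly equal number of training samples.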

Embodiment 2

[0059] In a preferred embodiment, the deep convolutional neural network used has the structure shown in Figure 3. Compared with the traditional AlexNet network, two fully connected layers are removed; only one convolutional layer B2, one pooling layer B3 and one fully connected output layer B4 are retained.

[0060] The video frame input is P1 in the figure, and the classification result output is P5 in the figure.

[0061] Through the above simplifications, the processing speed of the deep convolutional neural network reaches 33 FPS, which greatly improves the inference speed of the classification model and enables real-time operation even on a mobile phone.
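The layer sequence P1 → B2 (conv) → B3 (pool) → B4 (FC) → P5 can be sketched as a naive NumPy forward pass. Everything numeric here is an assumption for illustration: the patent gives neither the input resolution, kernel sizes, nor channel counts; the 16 output classes follow the 16-way classifier mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive valid convolution + ReLU. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, _, k, _ = w.shape
    _, h, ww = x.shape
    oh, ow = h - k + 1, ww - k + 1
    out = np.zeros((c_out, oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i:i + k, j:j + k]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return np.maximum(out, 0)

def maxpool(x, k=2):
    """2x2 max pooling."""
    c, h, w = x.shape
    return x[:, :h // k * k, :w // k * k].reshape(c, h // k, k, w // k, k).max(axis=(2, 4))

# P1: one input frame (3 x 32 x 32 here; the patent's input size is not given).
x = rng.standard_normal((3, 32, 32))
w_conv = rng.standard_normal((8, 3, 5, 5)) * 0.1    # B2: the single retained conv layer
w_fc = rng.standard_normal((16, 8 * 14 * 14)) * 0.1  # B4: FC output, 16 classes

h1 = conv2d(x, w_conv)       # B2 -> (8, 28, 28)
h2 = maxpool(h1)             # B3 -> (8, 14, 14)
logits = w_fc @ h2.ravel()   # B4/P5: one score per class
```

The point of the sketch is the depth, not the numbers: with a single conv/pool stage and one FC head, the parameter and FLOP count is a small fraction of full AlexNet's, which is what makes the reported 33 FPS on a phone plausible.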

[0062] The present invention adopts the fine-tuning technique, initializing the network with AlexNet weight coefficients pre-trained on the ImageNet data set. The advantage is that the trained weights can be reused without retraining the model from scratch each time, which greatly improves practical efficiency.
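The idea of fine-tuning can be shown in miniature: keep a "pretrained" feature extractor frozen and train only a new classification head on the exercise data. A random linear map stands in for the ImageNet-trained AlexNet stack here; this is a toy sketch of the technique, not the patent's training code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the pretrained AlexNet feature extractor: frozen during fine-tuning.
W_frozen = rng.standard_normal((64, 128)) * 0.1

def features(x):
    return np.maximum(W_frozen @ x, 0)  # frozen ReLU features

# New task head, trained from scratch on the exercise data (16 classes,
# matching the 16-way classifier in the abstract).
n_classes = 16
W_head = np.zeros((n_classes, 64))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def finetune_step(x, y, lr=0.1):
    """One SGD step of cross-entropy on the head only; W_frozen never updates."""
    global W_head
    f = features(x)
    p = softmax(W_head @ f)
    p[y] -= 1.0                     # gradient of cross-entropy w.r.t. the logits
    W_head -= lr * np.outer(p, f)

# Toy run: probability of the true class should rise as the head trains.
x, y = rng.standard_normal(128), 3
before = softmax(W_head @ features(x))[y]
for _ in range(50):
    finetune_step(x, y)
after = softmax(W_head @ features(x))[y]
```

Because only the small head is updated, each step is cheap and far less labeled data is needed than training the whole network, which is the efficiency gain the paragraph describes.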

Embodiment 3

[0070] In a more preferred embodiment, each key action posture sequence used in the judging process of step (5) must contain at least 3 video frames.

[0071] Considering that motion occurs continuously, the output results are smoothed to reduce the recognition error rate: only when a state appears continuously for more than 3 frames is the key action posture judged to have occurred.
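The smoothing rule plus repetition counting can be sketched as a debounce over the per-frame classifier output followed by a pass through the key-posture cycle. The 3-frame threshold is from the text; the posture names and the up/down/up cycle are illustrative placeholders, since the patent's actual posture labels are not given here.

```python
MIN_FRAMES = 3  # a posture must persist for more than this many frames

def debounce(labels, min_frames=MIN_FRAMES):
    """Collapse a per-frame label stream to stable postures: a posture is
    accepted only after appearing on more than min_frames consecutive
    frames (the smoothing rule in the text)."""
    stable, cand, streak = [], None, 0
    for lab in labels:
        streak = streak + 1 if lab == cand else 1
        cand = lab
        if streak > min_frames and (not stable or stable[-1] != lab):
            stable.append(lab)
    return stable

def count_reps(stable, cycle=("up", "down", "up")):
    """Count one repetition per pass through the key-posture cycle
    (three postures per motion, as in the abstract; names are
    illustrative)."""
    count, i = 0, 0
    for pose in stable:
        if pose == cycle[i]:
            i += 1
            if i == len(cycle):
                count += 1
                i = 1  # the final posture doubles as the next cycle's start
    return count
```

Short misclassification bursts (fewer than 4 frames) never reach the stable stream, so a flickering classifier output cannot inflate the count.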



Abstract

The invention discloses a human motion counting method based on a deep convolutional neural network. The method comprises the following steps: defining five basic human motion types and three key motion posture sequences for each motion type; having different people perform the five motions, recording video sequences of these motions and of other motion types, and taking the video sequences as training samples; combining the training samples with a 16-class classifier and performing action recognition training based on a deep convolutional neural network, outputting a classification model after training is completed; capturing video frames of human motion through a camera; inputting the frames into the trained classification model for classification; and judging the exerciser's action posture and exercise type from the classification result, adding one to the corresponding exercise count. Five kinds of motion can be automatically and efficiently identified and counted, so an exerciser can work out without distraction; the recognition process also checks whether a movement is performed to standard, and non-standard movements are excluded from the count.

Description

[0001] 【Technical field】

[0002] The invention relates to the technical field of motion information processing, and in particular to a human body motion counting method based on a deep convolutional neural network.

[0003] 【Background technique】

[0004] With the progress of society and the improvement of living standards, people pay more and more attention to healthy living, and are often keen on exercise and fitness; more and more people realize the importance of reasonable exercise. However, the amount of exercise is usually estimated visually, which gives no reasonable reference and cannot guarantee accuracy. In recent years, with the rapid development of multimedia technology and the continuous improvement of computer performance, image processing technology has become increasingly popular and has achieved fruitful results. It is widely used in traffic management, object tracking, human...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/20, G06V20/42, G06N3/045, G06F18/23213, G06F18/214
Inventor: 司马华鹏
Owner: NANJING SILICON INTELLIGENCE TECH CO LTD