
A human body posture visual recognition method for moving and carrying nursing robots

A human body posture visual recognition technology, applied in the field of human body posture visual recognition, which solves problems such as unrecognized or misidentified postures, heavy algorithmic computation, and inconvenient operation, and achieves good adaptability to the home environment, guaranteed human-machine safety, and high recognition accuracy.

Inactive Publication Date: 2020-12-25
HEBEI UNIV OF TECH


Problems solved by technology

[0003] At present, human body posture recognition is mostly contact-based: the posture is estimated from markers attached to the body. One approach uses an inertial tracker to detect the rotation and extension of the patient's forearm and wrist; Dalian University of Technology (China) patent 201611272616.X uses 12 data-acquisition nodes placed at different positions on the body to measure movement posture. Contact measurement requires the patient to wear various sensors or optical markers, which is not only troublesome but also restricts the patient's movement and causes psychological discomfort, and is therefore not conducive to practical, fully automatic detection.
Non-contact human body posture measurement is mainly vision-based. The color-image posture recognition method PAF (Part Affinity Fields) (Cao Z, Simon T, Wei S E, et al. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. 2016.) achieves high-precision, high-reliability human pose recognition, but it provides only 2D pixel coordinates, and the estimate cannot be transferred to 3D global coordinates. Zimmermann et al. (Zimmermann C, Welschehold T, Dornhege C, et al. 3D Human Pose Estimation in RGBD Images for Robotic Task Learning. 2018.) proposed a 3D convolutional network based on RGBD information for the grasping and teaching functions of service robots, but that algorithm is computationally heavy, which is unfavorable for lightweight hardware and real-time output.
The Kinect launched by Microsoft is currently the most common human body posture visual recognition product in the nursing industry: the lifting transfer robot RIBA (Mukai T, Hirano S, Yoshida M, et al. Tactile-based motion adjustment for the nursing-care assistant robot RIBA. IEEE International Conference on Robotics & Automation, 2011.) uses the Kinect camera and its SDK to recognize human body posture. However, the SDK's random-forest-based algorithm fails when the body is too close to the camera or partially occluded; moreover, the Kinect SDK cannot tell whether the user is facing toward or away from the camera, so correct left-right assignment of the user's joints cannot be guaranteed.

Method used



Examples


Embodiment

[0052] Define the human body posture as shown in figure 2, with a total of 15 joint points (the number of joint points can be set according to requirements). The technical solution for close-range posture visual recognition makes full use of RGBD information: a first-stage neural network estimates the pixel coordinates of the human joints in the color image, giving the joint-recognition algorithm adaptability to close-range human postures; a second-stage neural network based on the depth map and joint heat maps then lifts the first stage's 2D joint coordinate estimates to 3D and refines their accuracy, yielding the 3D human pose. The flow of the joint-recognition algorithm of the present invention is shown in figure 3.
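The dimension-lifting step can be illustrated with a minimal geometric sketch: given the 2D joint pixel coordinates from the first-stage network and an aligned depth map, each joint can be back-projected into 3D camera coordinates with a pinhole camera model. The intrinsics below are placeholder values, not from the patent, and the patent's second-stage network refines this lift rather than relying on raw geometry alone.

```python
import numpy as np

# Placeholder pinhole intrinsics (focal lengths fx, fy; principal point cx, cy).
# These are assumptions for illustration, not values from the patent.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def lift_joints_to_3d(joints_2d, depth_map):
    """Back-project 2D joint pixel coordinates (u, v) to 3D camera
    coordinates (x, y, z) using the depth map and a pinhole model.
    In the patent a second-stage network performs the refined lift;
    this shows only the underlying geometry."""
    joints_3d = []
    for (u, v) in joints_2d:
        z = depth_map[int(v), int(u)]   # depth (metres) at the joint pixel
        x = (u - CX) * z / FX           # pinhole back-projection
        y = (v - CY) * z / FY
        joints_3d.append((x, y, z))
    return np.array(joints_3d)
```

A joint at the principal point of a flat 1 m depth map, for example, back-projects to (0, 0, 1) in camera coordinates.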

[0053] In order to realize the estimation of the human body posture in the color image, the present invention adopts the ...



Abstract

The present invention is a human body posture visual recognition method for a transfer-and-carry nursing robot. The method uses a two-stage network based on RGBD data (color map RGB + depth map Depth), in which the first-stage neural network is PAF and the second-stage neural network is an improved ResNet, to achieve high-precision recognition at short distances. The ResNet network usually used for classification is improved by modifying its input and output structures so that it can handle the joint-recognition problem; used as the second-stage network, it obtains higher accuracy than existing methods. The nearest-neighbor algorithm proposed by the invention automatically tracks the underarm contour of the human body relying only on the coordinates of two joints, then repairs the underarm contour with a convex-hull algorithm, and finally obtains the center of the underarm region as the underarm (axillary) point. This recognition method improves accuracy while reducing the dependence of axillary points on joint locations.
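The underarm-point step can be sketched as follows: given contour pixels gathered around the axilla, repair the contour with a convex hull and take the centroid of the hull as the axillary point. The hull routine (Andrew's monotone chain) and the centroid choice are illustrative assumptions; the patent's nearest-neighbor contour tracking is not reproduced here.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone-chain convex hull. `points` is an iterable of
    (x, y) contour pixels; returns the hull vertices in order. Stands in
    for the patent's contour-repair step."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1])

def axilla_point(contour):
    """Approximate the axillary point as the centroid of the repaired
    (convex-hull) underarm contour -- an assumption for illustration."""
    hull = convex_hull(contour)
    return hull.mean(axis=0)
```

For a noisy unit-square contour with an interior point, the hull keeps only the four corners and the centroid lands at the square's center.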

Description

Technical field:
[0001] The invention belongs to the technical field of nursing robots, and in particular relates to visual recognition of human body postures for transfer-and-carry nursing robots.
Background technique:
[0002] China has entered an aging society: there are currently more than 230 million people over the age of 60, and the degree of aging continues to deepen. With the continued progress of China's economy and technology, market demand for intelligent transfer-and-carry nursing robots keeps growing. Intelligent perception and understanding of the environment is key to making such robots intelligent, and human body posture recognition is a key part of environmental perception.
[0003] At present, human body posture recognition is mostly contact-based recognition. The posture of the human body is estimated from attached markers. Inertial tracke...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/10, G06V40/28, G06N3/045, G06F18/24147
Inventor 陈梦倩李顺达郭士杰刘今越贾晓辉刘彦开
Owner HEBEI UNIV OF TECH