Behavior recognition method based on multi-modal data fusion

A data-fusion and recognition technology applied in the field of computer vision; it addresses the problems that single-modal behavior recognition struggles to meet high accuracy demands and that noise samples cause side effects, and achieves the effects of simplified time complexity and reduced overhead.

Pending Publication Date: 2022-08-05
HOHAI UNIV

AI Technical Summary

Problems solved by technology

However, because research on single-modal human behavior recognition has hit a bottleneck, and noise samples introduce side effects into experiments, the high accuracy demanded of behavior recognition is difficult to meet.




Embodiment Construction

[0042] Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings. The embodiments described with reference to the drawings are exemplary and serve only to explain the present invention; they are not to be construed as limiting it.

[0043] With the continuous development of computer vision, behavior recognition algorithms are widely used. However, research on single-modal human behavior recognition has hit a bottleneck, and noise samples introduce side effects into experiments, so the high accuracy demanded of behavior recognition is difficult to meet. In view of these problems, the present invention designs a behavior recognition method based on multimodal data fusion.

[0044] As shown in Figure 1, the behavior recognition method based on multimodal data fusion of the present invention includes the following steps:

[0045] 1. Depth data action feature extraction. ...
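The depth-feature step can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a DMI is an accumulated map of absolute inter-frame depth differences, and the function names and min-max normalization are choices made here for clarity.

```python
import numpy as np

def depth_motion_image(frames):
    """Accumulate absolute inter-frame differences of a depth sequence
    (T, H, W) into a single 2-D motion map (a DMI-style summary)."""
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))        # (T-1, H, W) frame-to-frame motion
    dmi = diffs.sum(axis=0)                        # accumulate motion energy per pixel
    # Normalize to [0, 1] so maps from clips of different lengths are comparable
    rng = dmi.max() - dmi.min()
    return (dmi - dmi.min()) / rng if rng > 0 else dmi

def spatio_temporal_dmi(frames):
    """Full-sequence DMI plus DMIs of the two equal-length halves,
    mirroring the subsequence split described in the method."""
    half = len(frames) // 2
    return (depth_motion_image(frames),
            depth_motion_image(frames[:half]),
            depth_motion_image(frames[half:]))
```

The three maps returned by `spatio_temporal_dmi` would then each be described with HOG features (e.g. via `skimage.feature.hog`) and concatenated, per the abstract.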



Abstract

The invention discloses a behavior recognition method based on multi-modal data fusion, applied to depth video and skeleton sequence data. For depth video data, the method first extracts a DMI (depth motion image) feature map from the video sequence; it then divides the original depth action sequence into two equal-length subsequences by frame count and extracts a sub-DMI feature map from each. The two sub-DMI feature maps and the full DMI image together form a spatio-temporal depth motion map, from which the depth features of the action are extracted with the HOG (Histogram of Oriented Gradients) algorithm. For skeleton sequence data, a spatio-temporal graph convolution feature extractor, improved from the ST-GCN (spatio-temporal graph convolutional network) model, processes the skeleton sequence directly and extracts the skeletal features of the action. Once action features have been obtained in both modalities, a high-credibility mean-sample fusion algorithm based on improved CCA (canonical correlation analysis) fuses the two kinds of features into a single fused feature. Finally, the fused features are classified with an SVM. By improving an existing behavior recognition model, the method overcomes the influence of single-modal data and noise samples on the experiments and raises recognition accuracy on existing public datasets.
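The CCA-based fusion step in the abstract can be sketched as follows. This is plain canonical correlation analysis in NumPy, not the patent's improved "high-credibility mean-sample" variant; the function names, the regularization term, and the concatenation-style fusion are assumptions made for illustration.

```python
import numpy as np

def cca_projections(X, Y, k, reg=1e-6):
    """Plain CCA: find k projection pairs that maximize correlation
    between two feature views (rows = samples)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])  # regularized within-view covariances
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n                             # cross-view covariance

    def inv_sqrt(S):                                # S^(-1/2) via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)         # singular values = canonical corrs
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]

def fuse_features(X, Y, k):
    """Project both views onto their canonical directions and concatenate;
    the fused vectors would then be fed to an SVM classifier."""
    A, B, _ = cca_projections(X, Y, k)
    return np.hstack([(X - X.mean(0)) @ A, (Y - Y.mean(0)) @ B])
```

On two views that share a common latent factor, the first canonical pair recovers that shared structure with high correlation, which is why concatenating the projected views gives a compact fused representation.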

Description

Technical Field

[0001] The present patent application relates to a behavior recognition method based on multimodal data fusion, and belongs to the field of computer vision.

Background

[0002] In recent years, artificial intelligence has developed rapidly, and technologies such as machine vision, pattern recognition, and natural language processing have been widely applied across social development, in fields including intelligent manufacturing, autonomous driving, and intelligent robotics. Machine vision plays a pivotal role in artificial intelligence: it seeks to realize the function of biological vision with computers, which greatly advances artificial intelligence. Machine vision is a comprehensive technology encompassing computer software, sensors, optical imaging, image processing, video processing, and electric lighting. The computer realizes the perception and understanding of t...

Claims


Application Information

IPC (IPC(8)): G06V20/40; G06K9/62; G06N3/04; G06N3/08; G06V10/50; G06V10/774; G06V10/80; G06V10/82
CPC: G06N3/08; G06N3/045; G06F18/253; G06F18/214
Inventors: 吴谦涵 (Wu Qianhan), 黄倩 (Huang Qian)
Owner HOHAI UNIV