
Method, device and equipment for identifying baby cry category through multi-feature fusion

A multi-feature fusion technology for identifying infant cries, applied to neural learning methods, character and pattern recognition, speech analysis and similar fields. It addresses the problem of low recognition accuracy, achieving the effect of improving accuracy and reducing misjudgment in cry detection.

Pending Publication Date: 2021-06-25
武汉星巡智能科技有限公司

AI Technical Summary

Problems solved by technology

[0003] In view of this, embodiments of the present invention provide a method, device and equipment for identifying the category of a baby's cry through multi-feature fusion, which are used to solve the technical problem of low accuracy when judging a baby's cry through speech recognition.



Examples


Embodiment 1

[0075] Referring to Figure 1, a schematic flow chart of the method for identifying the category of a baby's cry in Embodiment 1 of the present invention, the method includes:

[0076] S10: Obtain the electrical signal corresponding to the vibration of the baby's vocal cords when the baby is crying;

[0077] Specifically, when it is determined that the baby is crying, the electrical signal generated by the vibration of the vocal cords is obtained. The electrical signal can be obtained either by converting the vibration parameters of the vocal cords or by converting an optical image signal; the vibration signal is continuous and non-stationary. It should be noted that the vibration of the vocal cords can be collected by piezoelectric sensors, and the vibration parameters of the vocal cords can also be obtained by other optical means, such as infrared, radar waves, or video collected by cameras.
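The patent text does not specify how the continuous, non-stationary vibration signal is subsequently processed. As a hedged illustration only, the sketch below shows one conventional way to prepare such a signal for analysis, namely splitting it into short, windowed frames that can each be treated as locally stationary; the sampling rate, frame length and synthetic signal are placeholders, not values taken from the patent.

```python
# Illustrative sketch (not from the patent): short-time framing of a
# continuous, non-stationary vocal-cord vibration signal, e.g. one sampled
# from a piezoelectric sensor, so each frame can be analyzed as locally
# stationary. Sampling rate and frame sizes are assumptions.
import numpy as np

def frame_signal(signal: np.ndarray, frame_len: int, hop_len: int) -> np.ndarray:
    """Split a 1-D signal into overlapping, Hamming-windowed frames."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop_len)
    frames = np.stack([signal[i * hop_len: i * hop_len + frame_len]
                       for i in range(n_frames)])
    # A Hamming window reduces spectral leakage in later spectral analysis.
    return frames * np.hamming(frame_len)

# Synthetic stand-in for sensor data: a tone whose pitch drifts over time.
fs = 4000                                     # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
vibration = np.sin(2 * np.pi * 300 * t * (1 + 0.2 * t))
frames = frame_signal(vibration, frame_len=256, hop_len=128)
print(frames.shape)                           # (n_frames, 256)
```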

[0078] S11: out...

Embodiment 2

[0117] In Embodiment 1, the category of the baby's cry is determined from the vibration parameters corresponding to the vocal cord vibration. Since an infant's vocal cords are still at an early stage of development, differences in vocal cord vibration are small and the accuracy of the collected vibration parameters is low, which ultimately affects the accuracy of cry-category detection. Therefore, Embodiment 2 of the present invention further analyzes, on the basis of Embodiment 1, the audio signal produced by the baby's cry; please refer to Figure 7. The method includes:

[0118] S20: Obtain the audio characteristics of the baby's cry and the vibration spectrum corresponding to the vibration of the baby's vocal cords;

[0119] Specifically, when the baby is crying, the audio signal containing the crying sound and the vibration parameters of the corresponding vocal cords are collected; the audio features are obtained by processing the audio signal, and the v...
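As an illustrative sketch of step S20 only: the snippet below derives a set of audio features from the cry recording and a vibration spectrum from the vocal-cord signal. MFCCs and an STFT magnitude are common choices but are assumptions here, as are the placeholder file names `cry.wav` and `vibration.npy`; the patent does not name specific feature types.

```python
# Hedged sketch of step S20: audio features from the cry recording plus a
# vibration spectrum from the vocal-cord signal. Feature choices and file
# names are assumptions, not specified by the patent.
import numpy as np
import librosa

cry_audio, sr_audio = librosa.load("cry.wav", sr=16000)   # cry recording
vibration = np.load("vibration.npy")                       # sensor signal

# Audio features: 13 MFCCs per frame, averaged over time.
mfcc = librosa.feature.mfcc(y=cry_audio, sr=sr_audio, n_mfcc=13)
audio_features = mfcc.mean(axis=1)                          # shape (13,)

# Vibration spectrum: magnitude of the short-time Fourier transform.
vib_spectrum = np.abs(librosa.stft(vibration, n_fft=256, hop_length=128))
print(audio_features.shape, vib_spectrum.shape)
```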

Embodiment 3

[0170] In Embodiment 1 and Embodiment 2, the category of the baby's cry is determined from the vibration parameters corresponding to the vocal cord vibration and from the audio signal of the cry. Since an infant's vocal cords are still at an early stage of development and not yet fully formed, the vocal cord vibration and the cry can express the baby's needs over only a small range, so the number of samples that can be matched is limited, which can eventually lead to wrong judgments. Therefore, on the basis of Embodiment 1, posture information corresponding to the baby's crying state is introduced for further improvement; please refer to Figure 14. The method includes:

[0171] S30: Obtain the audio features of the baby's cry, the action feature values of the baby's posture actions in the image, and the vibration spectrum of the vocal cord vibration;

[0172] Specifically, when a baby is detected to be crying, the video stream incl...
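The abstract describes converting the action feature value of a posture into a "standard feature value", i.e. a probability value for each cry category represented by that posture action. The sketch below illustrates one minimal way such a mapping could look, as a lookup from a database-style table; the posture labels, cry categories and probabilities are invented placeholders, not values from the patent.

```python
# Hedged sketch of the posture-to-probability conversion described in the
# abstract. All labels and numbers below are illustrative placeholders.
CRY_CATEGORIES = ["hungry", "uncomfortable", "in_pain", "sleepy"]

POSTURE_TO_PROBABILITIES = {
    # posture action        hungry  uncomf.  pain   sleepy
    "hand_to_mouth":        [0.70,   0.10,   0.05,  0.15],
    "legs_pulled_up":       [0.10,   0.30,   0.50,  0.10],
    "eye_rubbing":          [0.05,   0.15,   0.05,  0.75],
}

def standard_feature_value(posture_action: str) -> list[float]:
    """Return the per-category probability vector for a detected posture."""
    # Fall back to a uniform distribution for unknown postures.
    uniform = [1.0 / len(CRY_CATEGORIES)] * len(CRY_CATEGORIES)
    return POSTURE_TO_PROBABILITIES.get(posture_action, uniform)

print(standard_feature_value("hand_to_mouth"))   # [0.7, 0.1, 0.05, 0.15]
```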



Abstract

The invention belongs to the technical field of voice recognition, solves the technical problem of low accuracy in judging a baby's cry through voice recognition, and provides a method, a device and equipment for identifying the category of a baby's cry through multi-feature fusion. The method comprises the following steps: acquiring an audio feature of the baby's cry, an action feature value of its posture action, and a vibration spectrum of the vocal cord vibration; converting the action feature value into a standard feature value in a database; based on the standard feature value, performing feature fusion on the audio feature and the vibration spectrum; and inputting the fused features into a preset neural network and obtaining the cry category of the baby according to a coding feature vector output by the neural network, wherein the standard feature value is a probability value of each cry category represented by the corresponding posture action. The invention further comprises a device and equipment for executing the method. According to the method, the expression of the baby's needs is reinforced by utilizing the posture features, so that misjudgment can be reduced and the accuracy of cry detection improved.
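Read end to end, the abstract outlines a pipeline of feature fusion followed by a preset neural network. The sketch below is a minimal, hedged illustration of that flow using simple concatenation and a two-layer network; the layer sizes, feature dimensions and fusion-by-concatenation choice are assumptions, since the patent only states that fusion is performed "based on the standard feature value".

```python
# Hedged sketch of the abstract's pipeline: fuse audio features, vibration
# spectrum and the posture-derived probability vector, then classify with a
# small neural network. Architecture and dimensions are assumptions.
import torch
import torch.nn as nn

class CryClassifier(nn.Module):
    def __init__(self, audio_dim=13, vib_dim=129, n_categories=4):
        super().__init__()
        fused_dim = audio_dim + vib_dim + n_categories
        self.net = nn.Sequential(
            nn.Linear(fused_dim, 64), nn.ReLU(),
            nn.Linear(64, n_categories),   # output vector -> category logits
        )

    def forward(self, audio_feat, vib_spec, posture_prob):
        # Fusion here is plain concatenation of the three feature vectors.
        fused = torch.cat([audio_feat, vib_spec, posture_prob], dim=-1)
        return self.net(fused)

model = CryClassifier()
audio_feat = torch.randn(1, 13)           # e.g., mean MFCCs
vib_spec = torch.randn(1, 129)            # e.g., time-averaged vibration spectrum
posture_prob = torch.tensor([[0.7, 0.1, 0.05, 0.15]])
category = model(audio_feat, vib_spec, posture_prob).argmax(dim=-1)
print(category)                           # predicted cry-category index
```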

Description

Technical field [0001] The present invention relates to the technical field of speech recognition, and in particular to a method, device and equipment for identifying the category of a baby's cry through multi-feature fusion. Background technique [0002] With the development of speech recognition technology, speech recognition is applied in more and more fields, for example recognizing the various types of a baby's cries to determine the baby's corresponding condition. For identifying a baby's cry, the general approach is: use voice-collection technology to collect the cry, match the collected cry against preset baby cries to determine whether it is a baby's cry, and then check whether the cry matches one of the preset cry categories. After the match succeeds, the cry category corresponding to the collected cry can be confirmed, and finally the specific meaning of the baby's cry is determined. However, due to the differences between individual babies and the different n...


Application Information

IPC(8): G10L25/63; G10L25/03; G10L25/18; G10L25/24; G10L25/30; G10L25/45; G10L25/57; G10L17/02; G06K9/62; G06N3/04; G06N3/08
CPC: G10L25/63; G10L25/03; G10L25/18; G10L25/24; G10L25/30; G10L25/45; G10L25/57; G10L17/02; G06N3/08; G06N3/045; G06F18/253; G06F18/214; Y02D10/00
Inventor: 陈辉, 张智, 谢鹏, 雷奇文, 艾伟, 胡国湖
Owner 武汉星巡智能科技有限公司