
Method and system for breath sound identification based on machine learning

A machine-learning-based breath sound identification technology, applied in the fields of neural learning methods, instruments, stethoscopes, etc., which enables accurate and intelligent disease analysis and identification and assists clinical research.

Active Publication Date: 2017-10-24
SUZHOU INST OF BIOMEDICAL ENG & TECH CHINESE ACADEMY OF SCI

AI Technical Summary

Problems solved by technology

[0004] In the prior art, an electronic auscultation system can collect and store audio data from multiple body sites of a user, but it cannot accurately and intelligently analyze and identify, in real time, the audio data collected from different users, different sites and different times.

Examples

Embodiment 1

[0047] As shown in Figure 1, the present invention provides a breath sound identification method based on machine learning, which comprises the following steps:

[0048] S10, collecting breath sound data at all auscultation points from users in multiple age groups;

[0049] S20, recording the relevant information matching each item of breath sound data, and packaging the matched relevant information and breath sound data into a breath sound data packet;

[0050] S30, performing deep learning classification on the breath sound data packets, and obtaining a breath sound machine learning classifier for each age group;

[0051] S40, obtaining an encapsulated real-time breath sound data packet, selecting the corresponding breath sound learning classifier according to the age group to which the real-time breath sound data packet belongs, performing data analysis, and obtaining an analysis result.

[0052] In the above embodiment, in step S10, considering that the probability of o...
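
The patent does not fix a particular feature representation or network architecture for step S30, so the following is only a minimal sketch of the S10–S40 flow, assuming a simple pooled-spectrum feature vector and scikit-learn's MLPClassifier as a stand-in for the deep learning classifier; the packet layout and function names are illustrative, not taken from the patent.

```python
# Minimal sketch of steps S10-S40: group packets by age group, train one
# classifier per group, and route real-time packets to the matching classifier.
# Packets are plain dicts here: {"audio": ndarray, "age_group": str, "label": str}.
from typing import Dict, List

import numpy as np
from sklearn.neural_network import MLPClassifier


def extract_features(audio: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Placeholder feature extractor: pool the magnitude spectrum into n_bins values."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])


def train_age_group_classifiers(packets: List[dict]) -> Dict[str, MLPClassifier]:
    """S30: obtain one breath sound machine learning classifier per age group."""
    classifiers: Dict[str, MLPClassifier] = {}
    for group in {p["age_group"] for p in packets}:
        group_packets = [p for p in packets if p["age_group"] == group]
        X = np.stack([extract_features(p["audio"]) for p in group_packets])
        y = [p["label"] for p in group_packets]
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
        clf.fit(X, y)
        classifiers[group] = clf
    return classifiers


def analyze_realtime_packet(packet: dict, classifiers: Dict[str, MLPClassifier]) -> str:
    """S40: select the classifier matching the packet's age group and analyze it."""
    clf = classifiers[packet["age_group"]]
    return clf.predict(extract_features(packet["audio"]).reshape(1, -1))[0]
```

In this sketch the age group acts as a routing key: training produces one classifier per key (S30), and analyze_realtime_packet simply looks up the key carried by the incoming real-time packet (S40).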

Embodiment 2

[0065] On the basis of Embodiment 1, an embodiment of the present invention provides a breath sound identification system based on machine learning. As shown in Figure 5, it includes an electronic stethoscope 10, a handheld operating terminal 20, a data analysis server 30 and a database 40.

[0066] The electronic stethoscope 10 is used to collect the breath sound data of all auscultation points of users in multiple age groups involved in step S10, and may also record the relevant information matching the collected breath sound data mentioned in step S20. For example, while collecting breath sounds at all auscultation points, the doctor enters, through the electronic stethoscope 10, the auscultated user's personal information (including at least age, sex, height, weight and the auscultation points), the detected health status information (including at least blood pressure, blood sugar, heart rate, blood oxygen, disease history and smoking history), as well as other inf...
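
As an illustration of how the relevant information listed above might be bundled with the audio in step S20, here is a hedged sketch of a breath sound data packet; the field names and types are assumptions for the sketch, not taken from the patent text.

```python
# Illustrative representation of a breath sound data packet (step S20): one
# breath sound recording bundled with its matching relevant information.
from dataclasses import asdict, dataclass
from typing import List


@dataclass
class BreathSoundPacket:
    audio_samples: List[float]      # breath sound data from one auscultation point
    auscultation_point: str         # which point was auscultated
    auscultation_time: str          # ISO-8601 timestamp
    # Personal information entered by the doctor on the electronic stethoscope 10
    age: int
    sex: str
    height_cm: float
    weight_kg: float
    # Detected health status information
    blood_pressure: str             # e.g. "120/80"
    blood_sugar: float
    heart_rate: int
    blood_oxygen: float
    disease_history: str
    smoking_history: str


def to_packet_dict(packet: BreathSoundPacket) -> dict:
    """Flatten the packet for storage or network transfer."""
    return asdict(packet)
```

The flattened dictionary could then be stored by the handheld operating terminal 20 in the database 40 or handed to the data analysis server 30; the exact storage format is not specified by the patent.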

Application Example 1

[0076] The electronic stethoscope 10 is provided with related information such as the user's age, sex, auscultation point and auscultation time, and transmits the breath sound data and the related information to the user's mobile phone (i.e., the user-side handheld operating terminal 21). The electronic stethoscope 10 communicates with the mobile phone via Bluetooth, with two-way data communication. A mobile APP installed on the phone accesses the Internet wirelessly and communicates with the data service server 50. The data analysis server 30, the database 40 and the data service server 50 are all integrated into a cloud server 60.

[0077] The electronic stethoscope 10 collects the breath sound data at the user's auscultation points; the data are first transmitted to the APP on the user's mobile phone, and then forwarded by that APP, through the data service server 50, to the doctor's mobile phone at the other end (the medical hand-held operating ter...
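
The patent does not specify the wire protocol between the mobile APP and the data service server 50, so the following sketch simply assumes a JSON payload sent over HTTP; the endpoint URL is hypothetical.

```python
# Sketch of the upload path in this example: the mobile APP serializes a breath
# sound data packet and forwards it to the data service server 50 over the
# Internet. HTTP/JSON and the URL below are assumptions made for this sketch.
import json
import urllib.request


def upload_packet(packet: dict,
                  server_url: str = "https://example.com/breath-sound/upload") -> int:
    """POST one breath sound data packet (as JSON) to the data service server."""
    body = json.dumps(packet).encode("utf-8")
    request = urllib.request.Request(
        server_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # The data service server would then route the packet to the data
        # analysis server 30 and to the doctor's handheld operating terminal.
        return response.status
```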

Abstract

The present invention discloses a method and system for breath sound identification based on machine learning. The method comprises the steps of: collecting breath sound data at all auscultation points for users in multiple age groups; recording the relevant information matching each item of breath sound data, and packaging the matched relevant information and breath sound data into a breath sound data packet; performing deep learning classification on the breath sound data packets to obtain a breath sound machine learning classifier for each age group; and, according to the age group to which a real-time breath sound data packet belongs, selecting the corresponding breath sound classifier to perform data analysis and obtain an analysis result. By packaging the breath sound data of different age groups together with the matching relevant information into breath sound data packets and performing deep learning classification on them, a breath sound machine learning classifier is obtained for each age group; real-time breath sound data packets can then be analyzed with the appropriate classifier, realizing accurate and intelligent disease analysis and identification and assisting doctors in clinical research.

Description

Technical field

[0001] The present invention relates to the technical field of electronic stethoscopes, and more specifically to a method and system for identifying breath sounds based on machine learning.

Background technique

[0002] Auscultation means that a doctor uses the ear or a stethoscope to listen to the sounds produced by the body (usually heart sounds, breath sounds, etc.) in order to diagnose whether there is disease in the relevant organs. A stethoscope amplifies the sounds of organ activity to a certain extent and can block out environmental noise; it can also pick up subcutaneous emphysema sounds, muscle fasciculation sounds, joint movement sounds, fracture-surface friction sounds, etc.

[0003] With the development of electronic technology, stethoscopes have evolved from acoustic stethoscopes to electronic stethoscope systems. An electronic auscultation system uses electronic technology to amplify body sounds and converts the collected so...

Application Information

IPC(8): G06K9/00, G06K9/62, G06K9/66, G06N3/08, A61B7/04
CPC: G06N3/08, A61B7/04, G06V40/10, G06V30/194, G06F18/241
Inventor: 耿辰, 佟宝同, 戴亚康, 舒林华, 许姜姜
Owner SUZHOU INST OF BIOMEDICAL ENG & TECH CHINESE ACADEMY OF SCI