
Audio event classification method and computer equipment based on stacking base sparse representation

A sparse-representation-based classification technology, applied in speech analysis, speech recognition, instruments, etc., addressing problems such as low classification accuracy, increased difficulty of audio event classification, and insufficient training samples.

Active Publication Date: 2020-05-05
SHANDONG NORMAL UNIV
Cites: 4 · Cited by: 0

AI Technical Summary

Problems solved by technology

Insufficient training samples and noise interference increase the difficulty of audio event classification, resulting in low classification accuracy.




Embodiment Construction

[0073] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0074] As shown in Figure 1, the audio scene recognition method proposed by the present invention is divided into two modules: a training process and a classification test process. The training process comprises three stages: audio frame processing of the training data, audio feature extraction, and construction of a large audio dictionary by stacking bases. The classification test process comprises four stages: audio frame processing, audio feature extraction, sparse representation coefficient extraction, and classification discrimination. Each part is introduced in detail below.
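The dictionary-stacking step of the training process can be sketched as follows. This is a minimal illustration in NumPy under assumed conventions (each per-class dictionary is a `feature_dim × atoms` matrix; the function name and data layout are hypothetical, not taken from the patent):

```python
import numpy as np

def stack_dictionaries(class_dicts):
    """Stack per-class audio dictionaries column-wise into one large
    dictionary, recording which columns (atoms) belong to which class.
    class_dicts: {class_id: ndarray of shape (feature_dim, n_atoms_k)}."""
    atoms = np.hstack(list(class_dicts.values()))
    labels = np.concatenate([
        np.full(d.shape[1], k) for k, d in class_dicts.items()
    ])
    return atoms, labels

# Toy usage: two classes with 5 and 7 random atoms over 20-dim features.
rng = np.random.default_rng(0)
D, labels = stack_dictionaries({0: rng.normal(size=(20, 5)),
                                1: rng.normal(size=(20, 7))})
# D has shape (20, 12); labels marks each atom's class.
```

Keeping the per-atom class labels alongside the stacked matrix is what later allows per-class confidences to be read off the sparse coefficient vector at test time.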

[0075] First introduce the training process:

[0076] (1) Audio frame processing

[0077] The training audio document is divided into frames, and each frame is regarded as an audio sample. As a rule of thumb, the present invention sets the frame length as...
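A generic framing step of this kind can be sketched as follows (a minimal NumPy version; the specific frame length and hop size are illustrative placeholders, since the patent excerpt truncates before giving its values):

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D audio signal into overlapping frames of length
    frame_len, advancing by hop samples between frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

# Toy usage: 1 s of a 16 kHz signal, 25 ms frames with a 10 ms hop
# (common speech-processing defaults, assumed here for illustration).
frames = frame_signal(np.arange(16000, dtype=float), frame_len=400, hop=160)
# frames[i] is one audio sample in the patent's sense of the word.
```

Each row of `frames` would then feed the feature-extraction stage.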



Abstract

The invention discloses an audio event classification method based on stacking-based sparse representation, and a computer device. The method comprises the following steps. At the training stage: first, create an audio dictionary for each kind of audio event; then construct a large-scale dictionary by stacking the audio dictionaries of all kinds of audio events. At the testing stage: extract the sparse representation coefficient of a tested audio sample with respect to the large-scale dictionary constructed at the training stage, and map the sparse representation coefficient through a softmax function; finally, construct the confidence degree of the tested audio file for each kind of audio event from the mapped coefficients, and carry out classification judgment according to the magnitude of the confidence degrees. The method innovatively constructs the large-scale dictionary by stacking bases and then obtains the sparse representation coefficient of the sample. The extracted sparse representation coefficient represents the audio event sample well, increases the inter-class difference of the samples, reduces the intra-class difference, and improves the classification accuracy.
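The test-stage steps described in the abstract (coefficient extraction over the stacked dictionary, softmax mapping, per-class confidence, argmax decision) can be sketched as follows. This is a hedged illustration, not the patent's implementation: ordinary least squares stands in for the patent's sparse solver, and the function and variable names are assumptions.

```python
import numpy as np

def classify(D, atom_labels, y, n_classes):
    """Sketch of the test stage: obtain a representation coefficient of
    test sample y over the stacked dictionary D (least squares used here
    as a stand-in for a true sparse-coding solver), map it through a
    softmax, sum the mapped coefficients over each class's atoms as that
    class's confidence, and return the class with the largest confidence."""
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    e = np.exp(coef - coef.max())          # numerically stable softmax
    p = e / e.sum()
    conf = np.array([p[atom_labels == k].sum() for k in range(n_classes)])
    return int(conf.argmax()), conf

# Toy usage: a 2-class stacked dictionary (5 + 7 atoms); the test sample
# is a scaled copy of a class-1 atom, so class 1 should win.
rng = np.random.default_rng(1)
D = rng.normal(size=(20, 12))
atom_labels = np.array([0] * 5 + [1] * 7)
pred, conf = classify(D, atom_labels, 3.0 * D[:, 5], n_classes=2)
```

Because the coefficient mass concentrates on atoms of the matching class, summing the softmax-mapped coefficients per class yields the confidence ordering the abstract describes.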

Description

Technical Field

[0001] The invention belongs to the field of audio event classification, and in particular relates to an audio event classification method and computer equipment based on stacking-based sparse representation.

Background Technique

[0002] As one of the important topics in audio information research, audio event classification has received extensive attention. Audio monitoring based on audio event classification can serve as an auxiliary means of video monitoring. Compared with video signals, audio signals are not affected by lighting or occlusion and protect personal privacy well, so they have a very wide range of applications. Audio event classification technology can be used in intelligent robots to help them better perceive the surrounding environment and make correct decisions; it can also be widely applied in fields such as urban planning, smart homes, and ecological acoustics.

[0003] The existing audio event c...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L15/02, G10L15/06, G10L15/08
CPC: G10L15/02, G10L15/063, G10L15/08
Inventor: 冷严周耐齐广慧徐新艳李登旺
Owner: SHANDONG NORMAL UNIV