
Audio scene recognition method combining deep neural network and topic model and system thereof

A deep neural network and topic model technology, applied in biological neural network models, neural learning methods, character and pattern recognition, and related fields.

Status: Inactive | Publication Date: 2019-03-08
SHANDONG NORMAL UNIV

AI Technical Summary

Problems solved by technology

Such methods rely entirely on neural networks and do not combine them with other effective models.



Examples


Embodiment 1

[0097] Embodiment 1: As shown in Figure 1, the audio scene recognition method proposed by the present invention is mainly divided into two modules: a training process and a classification/testing process. The training process includes DNN-based audio event classification model training, audio document topic model training, and DNN-based audio scene recognition model training. The classification/testing process includes DNN-based audio event classification, audio document topic analysis, and DNN-based audio scene recognition. Each part is introduced in detail below.

[0098] The training process is introduced first:

[0099] (1) DNN-based audio event classification model training

[0100] The training data in the training set consists of two parts: audio clips of individual audio events, and audio scene documents. The DNN-based audio event classification model is trained on clips of clean audio events. First, each clean audio event clip is divided into frames...
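As a rough sketch of this training step (not the patent's actual implementation), the code below frames each clean audio event clip, computes a simple log-spectral feature per frame as a stand-in for whatever frame features the method uses, and trains a small feed-forward network as the frame-level audio event classifier. The frame length, hop size, feature choice, and network sizes are all illustrative assumptions.

```python
# Illustrative sketch of DNN-based audio event classification model training.
# Frame length, hop size, feature choice, and network sizes are assumptions,
# not values taken from the patent.
import numpy as np
import torch
import torch.nn as nn

FRAME_LEN, HOP = 1024, 512   # assumed frame length / hop size in samples
N_EVENTS = 10                # assumed number of audio event classes

def frame_signal(x, frame_len=FRAME_LEN, hop=HOP):
    """Split a 1-D waveform into overlapping frames (clip assumed >= one frame)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])

def frame_features(frames):
    """Toy per-frame feature: log power spectrum (stand-in for e.g. MFCCs)."""
    spec = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    return np.log(spec ** 2 + 1e-10).astype(np.float32)

class EventDNN(nn.Module):
    """Frame-level audio event classifier; layer sizes are illustrative."""
    def __init__(self, n_in, n_events=N_EVENTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_events),
        )

    def forward(self, x):
        return self.net(x)

def train_event_dnn(clips, labels, epochs=20):
    """clips: list of 1-D numpy waveforms of clean audio events;
    labels: integer event class per clip (every frame inherits its clip's label)."""
    feats, targets = [], []
    for clip, lab in zip(clips, labels):
        f = frame_features(frame_signal(clip))
        feats.append(f)
        targets.append(np.full(len(f), lab, dtype=np.int64))
    X = torch.from_numpy(np.concatenate(feats))
    y = torch.from_numpy(np.concatenate(targets))
    model = EventDNN(X.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):            # simple full-batch training loop
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    return model
```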

Embodiment 2

[0115] Embodiment 2: The present disclosure also provides an audio scene recognition system combining a deep neural network and a topic model.

[0116] An audio scene recognition system combining a deep neural network and a topic model, comprising:

[0117] an audio event classification model training module, which uses training audio event clips to train a deep-neural-network-based audio event classification model;

[0118] a representation vector extraction module for the training audio scene documents, which inputs each training audio scene document into the trained deep-neural-network-based audio event classification model and outputs the representation vector of that document;

[0119] a topic distribution vector extraction module for the training audio scene documents, which uses the representation vectors of the training audio scene documents to train the topic model and, after training, outputs the topic distribution vector of each audio scene document; ...
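The following minimal sketch shows how the first two modules above might connect, assuming a frame-level event classifier like the one sketched under Embodiment 1: each training audio scene document is passed through the trained event DNN frame by frame, and the per-frame event posteriors are aggregated into one document-level representation vector. Summing the posteriors into soft event counts is an assumption made here for illustration.

```python
# Sketch: document-level representation vector for an audio scene document.
# Aggregating per-frame event posteriors by summation is an assumption made
# for illustration; it yields a soft "event count" vector per document.
import numpy as np
import torch

def document_representation(event_dnn, doc_frame_features):
    """doc_frame_features: (n_frames, n_features) array produced by the same
    framing/feature pipeline used for the audio event clips.
    Returns one vector with an entry per audio event class."""
    x = torch.from_numpy(doc_frame_features.astype(np.float32))
    with torch.no_grad():
        posteriors = torch.softmax(event_dnn(x), dim=1)   # per-frame event posteriors
    return posteriors.sum(dim=0).numpy()                  # document-level soft counts

# Stacking these vectors over all training scene documents gives the
# audio document x audio event co-occurrence matrix passed to the topic model:
#   C = np.stack([document_representation(event_dnn, f) for f in train_doc_feats])
```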

Embodiment 3

[0124] Embodiment 3: The present disclosure also provides an electronic device, including a memory, a processor, and computer instructions stored in the memory and executable on the processor. When the computer instructions are executed by the processor, each operation of the above method is carried out; for brevity, the details are not repeated here.

[0125] It should be understood that, in the present disclosure, the processor may be a central processing unit (CPU), and it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the like.

[0126] The memory may include read-only memory and random access memory, and provide instruction...



Abstract

The invention discloses an audio scene recognition method combining a deep neural network and a topic model, as well as a system thereof. In the training phase, the method trains an audio event classification DNN, a PLSA topic model, and an audio scene recognition DNN. In the testing phase, a test audio file is passed frame by frame through the audio event classification DNN; an audio file-audio event co-occurrence matrix is then constructed from the network's outputs, the co-occurrence matrix is decomposed by the PLSA topic model, and the decomposition yields the distribution of the test audio file over the latent topics; finally, the audio file-topic distribution is used as the input of the audio scene recognition DNN to obtain the recognition result. The method innovatively combines the deep neural network with the topic model; introducing the topic model provides more useful information to the deep neural network and improves the network's classification and recognition capability.
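To make the matrix-decomposition step concrete, below is a minimal PLSA sketch using the standard EM updates (not necessarily the patent's exact formulation). The audio file x audio event co-occurrence matrix is factorized into P(event|topic) and P(topic|document); the per-document topic distribution is what would be fed to the audio scene recognition DNN. The number of latent topics and the iteration count are illustrative assumptions.

```python
# Minimal PLSA via EM on an audio document x audio event co-occurrence matrix.
# Standard formulation; topic count and iteration count are assumptions.
import numpy as np

def plsa(C, n_topics=8, n_iter=100, seed=0):
    """C: (n_docs, n_events) nonnegative co-occurrence counts.
    Returns P(topic|doc) of shape (n_docs, n_topics) and
            P(event|topic) of shape (n_topics, n_events)."""
    rng = np.random.default_rng(seed)
    n_docs, n_events = C.shape
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_events))
    p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibility P(topic | doc, event) for every triple.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]       # (docs, topics, events)
        p_z_dw = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate both factors from expected counts.
        expected = C[:, None, :] * p_z_dw                    # (docs, topics, events)
        p_w_z = expected.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = expected.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# The rows of p_z_d (the topic distribution of each audio document) would then
# serve as input features of the scene-recognition DNN; at test time the same
# event DNN and a PLSA fold-in would produce the topic vector for a new file.
```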

Description

Technical Field

[0001] The present disclosure relates to the technical field of audio scene recognition, and in particular to an audio scene recognition method and system combining a deep neural network and a topic model.

Background

[0002] The statements in this section merely provide background related to the present disclosure and do not necessarily constitute prior art.

[0003] Audio scene recognition is an important research topic in the field of computational auditory analysis. It can be widely applied to intelligent security monitoring in public places, smart home engineering, and intelligent robots, and thus has very broad application value.

[0004] In recent years, some studies have applied deep learning technology to audio scene recognition. Such research usually uses audio documents as the input of a neural network and directly outputs the recognition result at the network's output. Such methods rely entirely on neural networks and do not combine them with other effective models...


Application Information

IPC(8): G10L15/02, G10L15/08, G10L15/16, G06K9/62, G06N3/08
CPC: G06N3/08, G10L15/02, G10L15/083, G10L15/16, G06F18/24
Inventor 冷严齐广慧李登旺华庆方敬
Owner: SHANDONG NORMAL UNIV