
Audio scene recognition method and system combining deep neural network and topic model

A deep neural network and topic model technology, applied to biological neural network models, neural learning methods, and character and pattern recognition, which avoids complex processing problems, improves analysis accuracy, and strengthens classification and recognition capability.

Status: Inactive · Publication Date: 2021-05-11
SHANDONG NORMAL UNIV
5 Cites · 0 Cited by

AI Technical Summary

Problems solved by technology

Such methods rely entirely on neural networks and are not combined with other complementary models.


Examples


Embodiment 1

[0097] Embodiment 1: As shown in Figure 1, the audio scene recognition method proposed by the present invention is divided into two modules: a training process and a classification test process. The training process includes DNN-based audio event classification model training, audio document topic model training, and DNN-based audio scene recognition model training. The classification test process includes DNN-based audio event classification, audio document topic analysis, and DNN-based audio scene recognition. Each part is introduced in detail below.
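Before the detailed walk-through, the two-phase flow described above can be wired together end to end in a toy sketch. In the code below, scikit-learn's MLPClassifier stands in for both DNNs and LatentDirichletAllocation stands in for the PLSA topic model; these substitutions, the synthetic data, and all sizes (number of events, topics, scenes, feature dimension) are illustrative assumptions, not values from the patent.

```python
# Toy end-to-end sketch of the two-phase pipeline on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
N_EVENTS, N_TOPICS, N_SCENES, FEAT_DIM = 8, 4, 3, 20

# --- Training phase -------------------------------------------------------
# (1) Frame-level audio event classifier trained on clean event clips.
frames = rng.normal(size=(500, FEAT_DIM))            # frame features (toy)
frame_event_labels = rng.integers(0, N_EVENTS, 500)
event_dnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
event_dnn.fit(frames, frame_event_labels)

# (2) Topic model on the "audio document - audio event" co-occurrence matrix.
doc_frames = [rng.normal(size=(rng.integers(50, 100), FEAT_DIM)) for _ in range(30)]
doc_scene_labels = rng.integers(0, N_SCENES, len(doc_frames))

def doc_event_counts(doc):
    # Count how often each event class is predicted across the document's frames.
    preds = event_dnn.predict(doc)
    return np.bincount(preds, minlength=N_EVENTS)

cooc = np.stack([doc_event_counts(d) for d in doc_frames])
topic_model = LatentDirichletAllocation(n_components=N_TOPICS, random_state=0)
doc_topic = topic_model.fit_transform(cooc)           # per-document topic distribution

# (3) Scene recognition classifier trained on the topic distributions.
scene_dnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
scene_dnn.fit(doc_topic, doc_scene_labels)

# --- Test phase -----------------------------------------------------------
test_doc = rng.normal(size=(80, FEAT_DIM))
test_topic = topic_model.transform(doc_event_counts(test_doc)[None, :])
print("predicted scene:", scene_dnn.predict(test_topic)[0])
```

The point of the sketch is only the wiring: frame-level event decisions are aggregated per document, factorized into a topic distribution, and that distribution, rather than the raw audio, is what the scene classifier sees.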

[0098] The training process is introduced first:

[0099] (1) DNN-based audio event classification model training

[0100] The training data in the training set consists of two parts: audio clips of audio events, and audio scene documents. The DNN-based audio event classification model is trained on clips of clean audio events. First, each clean audio event clip is divided into frames...
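Since the paragraph breaks off at the framing step, here is a minimal sketch of how a clean audio event clip could be framed into per-frame features and used to train a small frame-level DNN classifier. The 25 ms / 10 ms framing, the 40-dimensional MFCC features, the network shape, and all function names are assumptions for illustration, not specifics taken from the patent.

```python
# Minimal sketch: frame a clean audio-event clip into MFCC features and train
# a small frame-level DNN event classifier. All hyperparameters are assumptions.
import librosa
import numpy as np
import torch
import torch.nn as nn

def clip_to_frames(path, sr=16000, n_mfcc=40):
    """Load a clip and return per-frame MFCC features, shape (n_frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=sr)
    # 25 ms analysis windows with a 10 ms hop (assumed values).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=int(0.025 * sr), hop_length=int(0.010 * sr))
    return mfcc.T.astype(np.float32)

class FrameEventDNN(nn.Module):
    """Simple fully connected frame-level audio event classifier."""
    def __init__(self, n_mfcc=40, n_events=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mfcc, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_events),
        )

    def forward(self, x):
        return self.net(x)

def train_event_classifier(frame_feats, frame_labels, n_events, epochs=20):
    """frame_feats: (N, n_mfcc) float32; frame_labels: (N,) int64 event indices."""
    model = FrameEventDNN(frame_feats.shape[1], n_events)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.from_numpy(frame_feats)
    y = torch.from_numpy(frame_labels)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model
```

In this sketch the classifier is trained on individual frames, matching the frame-by-frame classification used later in the test phase.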

Embodiment 2

[0115] Embodiment 2: The present disclosure also provides an audio scene recognition system combining a deep neural network and a topic model;

[0116] An audio scene recognition system combining deep neural networks and topic models, including:

[0117] an audio event classification model training module, which uses training audio event clips to train a deep-neural-network-based audio event classification model;

[0118] a representation vector extraction module for the training audio scene documents, which inputs each training audio scene document into the trained deep-neural-network-based audio event classification model and outputs that document's representation vector;

[0119] a topic distribution vector extraction module for the training audio scene documents, which uses the representation vectors of the training audio scene documents to train the topic model and, once trained, outputs the topic distribution vector of each audio scene document; ...
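The topic model trained by this module is the PLSA model named in the abstract. A minimal NumPy version of PLSA trained with EM on an "audio document - audio event" count matrix, returning a topic distribution vector per document, might look as follows; this is a generic textbook PLSA sketch, not code from the patent, and the matrix sizes in the example are invented.

```python
# Minimal PLSA trained with EM on an "audio document - audio event" count
# matrix. Returns P(topic | document) for each training document.
import numpy as np

def train_plsa(counts, n_topics, n_iter=100, seed=0, eps=1e-12):
    """counts: (n_docs, n_events) nonnegative co-occurrence matrix."""
    rng = np.random.default_rng(seed)
    n_docs, n_events = counts.shape

    # Random initialisation of P(z|d) and P(w|z), each distribution normalised.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_events))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (n_docs, n_events, n_topics).
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / (joint.sum(axis=2, keepdims=True) + eps)

        # M-step: re-estimate P(w|z) and P(z|d) from expected counts.
        expected = counts[:, :, None] * resp               # n(d,w) * P(z|d,w)
        p_w_z = expected.sum(axis=0).T                     # (n_topics, n_events)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + eps
        p_z_d = expected.sum(axis=1)                       # (n_docs, n_topics)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + eps

    return p_z_d, p_w_z

# Example: 30 documents over 8 event classes, 4 latent topics.
counts = np.random.default_rng(1).integers(0, 20, size=(30, 8))
doc_topic, topic_event = train_plsa(counts, n_topics=4)
print(doc_topic.shape)   # (30, 4): topic distribution vector per audio document
```

The rows of `doc_topic` play the role of the topic distribution vectors that this module outputs for the subsequent scene recognition training.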

Embodiment 3

[0124] Embodiment 3: The present disclosure also provides an electronic device, including a memory, a processor, and computer instructions stored in the memory and executable on the processor. When the computer instructions are executed by the processor, each operation of the above method is completed; for the sake of brevity, the details are not repeated here.

[0125] It should be understood that in the present disclosure, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.

[0126] The memory may include read-only memory and random access memory, and provide instructions...



Abstract

The present disclosure provides an audio scene recognition method and system combining a deep neural network and a topic model. In the training phase, the method trains an audio event classification DNN, a PLSA topic model, and an audio scene recognition DNN. In the testing phase, the test audio document is first classified frame by frame by the audio event DNN; the network's outputs are then used to construct an "audio document - audio event" co-occurrence matrix, which the PLSA topic model decomposes to obtain the distribution of the test audio document over the latent topics; finally, this "audio document - topic" distribution is fed to the audio scene recognition DNN, which produces the recognition result. The invention innovatively combines the deep neural network with the topic model; introducing the topic model provides more useful information to the deep neural network and thereby improves the network's classification and recognition capability.
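The step that links the two networks in the test phase is the construction of the "audio document - audio event" co-occurrence matrix from the event DNN's frame-wise outputs. The small sketch below builds one row of that matrix by counting hard argmax decisions per frame; whether the patent counts hard decisions or accumulates soft posteriors is not specified in this extract, so the hard-count version is an assumption.

```python
# Sketch of building one row of the "audio document - audio event"
# co-occurrence matrix from the event DNN's frame-wise softmax outputs.
# Hard argmax counting is an assumption; accumulating the soft posteriors
# themselves would be an alternative.
import numpy as np

def document_event_counts(frame_posteriors, n_events):
    """frame_posteriors: (n_frames, n_events) softmax outputs for one document."""
    frame_events = frame_posteriors.argmax(axis=1)          # event label per frame
    return np.bincount(frame_events, minlength=n_events)    # co-occurrence row

# Example with random posteriors for a document of 200 frames and 8 event classes.
posteriors = np.random.default_rng(2).random((200, 8))
row = document_event_counts(posteriors, n_events=8)
print(row, row.sum())   # counts over the 8 audio events; sums to 200 frames
```

Stacking such rows over the training documents yields the matrix that the PLSA model factorizes; at test time, the topic distribution inferred for a single row is what is passed to the audio scene recognition DNN.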

Description

Technical Field

[0001] The present disclosure relates to the technical field of audio scene recognition, and in particular to an audio scene recognition method and system combining a deep neural network and a topic model.

Background Technique

[0002] The statements in this section merely provide background related to the present disclosure and do not necessarily constitute prior art.

[0003] Audio scene recognition is an important research topic in the field of computational hearing. It can be widely applied to intelligent security monitoring in public places, smart home engineering, and intelligent robots, and therefore has a very wide range of application value.

[0004] In recent years, some studies have applied deep learning technology to audio scene recognition. Such research typically uses audio documents as the input of a neural network and outputs the recognition result directly at the network's output. Such methods rely entirely on neural networks without combining them with other models...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G10L15/02, G10L15/08, G10L15/16, G06K9/62, G06N3/08
CPC: G06N3/08, G10L15/02, G10L15/083, G10L15/16, G06F18/24
Inventor: 冷严齐广慧李登旺华庆方敬
Owner: SHANDONG NORMAL UNIV