
Deep-learning-based automatic audio annotation method

A deep-learning audio technology, applied in audio data retrieval, audio data indexing, speech analysis, and similar fields. It addresses the problems that traditional methods cannot fully describe audio details, cannot achieve automatic annotation, and have low accuracy, thereby improving both the efficiency and the accuracy of audio annotation.

Active Publication Date: 2018-05-18
成都潜在人工智能科技有限公司
Cites: 18 · Cited by: 27

AI Technical Summary

Problems solved by technology

Traditional methods that rely on experts to extract timbre, melody, and rhythm features cannot fully describe audio details, cannot realize automatic annotation, and suffer from low accuracy.

Method used




Embodiment Construction

[0042] The present invention is described in further detail below with reference to test examples and specific embodiments. This should not be understood as limiting the scope of the above subject matter to the following embodiments; all technology realized based on the content of the present invention falls within the scope of the present invention.

[0043] Referring to figure 1, a deep-learning-based automatic audio annotation method includes the following steps (a code sketch of the preprocessing follows the list):

[0044] S1. Input the original audio file, and obtain several original spectrogram segments through audio preprocessing;

[0045] S2. Input the original spectrogram segments into a convolutional neural network for training, to build a deep learning model;

[0046] S3. Input the audio file to be annotated, and obtain several spectrogram segments to be annotated through audio preprocessing;

[0047] S4. Based on the deep learning model, perform audio annotation on the spectrogram segments to be annotated.
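As a rough illustration of the preprocessing in steps S1 and S3, a minimal sketch follows. The patent does not publish code, so the sample rate, mel resolution, hop length, and segment width below are assumptions; librosa is used only as a convenient spectrogram library.

    # Hypothetical preprocessing for S1/S3: parameters are illustrative,
    # not taken from the patent.
    import numpy as np
    import librosa

    def audio_to_spectrogram_segments(path, sr=22050, n_fft=2048,
                                      hop_length=512, n_mels=128,
                                      segment_frames=128):
        """Cut a log-mel spectrogram into fixed-width, image-like segments."""
        y, _ = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                             hop_length=hop_length,
                                             n_mels=n_mels)
        log_mel = librosa.power_to_db(mel, ref=np.max)
        # Drop the trailing partial window so every segment has the same shape.
        n_segments = log_mel.shape[1] // segment_frames
        return [log_mel[:, i * segment_frames:(i + 1) * segment_frames]
                for i in range(n_segments)]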



Abstract

The invention relates to an audio annotation method, in particular to a deep-learning-based automatic audio annotation method, which comprises the following steps: original audio files are input, and multiple original spectrogram segments are obtained through audio preprocessing; the original spectrogram segments are input into a convolutional neural network for training, and a deep learning model is built; audio files to be annotated are input, and multiple spectrogram segments to be annotated are obtained through audio preprocessing; on the basis of the deep learning model, the spectrogram segments to be annotated are subjected to audio annotation. In this way, a convolutional neural network is used to train an audio deep learning network, and automatic audio annotation is realized; compared with the traditional manual annotation approach, both the annotation accuracy and the annotation efficiency are improved.
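For the training step (S2), a segment classifier could be sketched as follows. The excerpt does not disclose the actual network architecture, input shape, or training configuration, so every layer size and hyperparameter here is illustrative; Keras is used for brevity.

    # Hypothetical CNN for S2: architecture and shapes are assumptions.
    from tensorflow.keras import layers, models

    def build_segment_classifier(n_classes, input_shape=(128, 128, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(2),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(2),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

Training would then call model.fit on (segment, label) pairs produced by the preprocessing sketch above.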

Description

Technical Field

[0001] The present invention relates to an audio annotation method, in particular to a deep-learning-based automatic audio annotation method.

Background

[0002] The structural representation of audio is an important problem in MIR (Music Information Retrieval), which mainly extracts features from the audio signal itself to enable audio retrieval. Traditional approaches that rely on experts to extract timbre, melody, and rhythm features cannot fully describe audio details, cannot support automatic annotation, and yield low accuracy.

Contents of the Invention

[0003] The purpose of the present invention is to overcome the above deficiencies of the prior art and provide a method that trains an audio deep learning network with a convolutional neural network, constructs a deep learning model, and uses a maximum voting algorithm to realize automatic audio annotation.

[0004] In order to realize the above purpose of the invention, the p...
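Paragraph [0003] names a maximum voting algorithm for turning per-segment predictions into a file-level annotation (step S4). A minimal sketch, assuming the hypothetical model and preprocessing above:

    # Hypothetical S4: each segment is classified, and the most frequent
    # label becomes the file-level annotation (maximum voting).
    from collections import Counter
    import numpy as np

    def annotate_file(model, segments):
        batch = np.stack(segments)[..., np.newaxis]   # add channel axis
        per_segment = model.predict(batch).argmax(axis=1)
        return Counter(per_segment.tolist()).most_common(1)[0][0]

Voting over many segments makes the file-level label tolerant of occasional segment misclassifications.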

Claims


Application Information

IPC(8): G10L25/30; G10L25/48; G10L25/03; G06F17/30
CPC: G10L25/03; G10L25/30; G10L25/48; G06F16/61; G06F16/683
Inventor: 尹学渊, 江天宇
Owner: 成都潜在人工智能科技有限公司