
Voice feature matching method based on convolutional neural network

A convolutional neural network and voice feature technology, applied in the field of voice feature matching based on convolutional neural networks, which can solve problems such as poor software robustness, complex voice recognition systems, and low voice recognition accuracy, with the effect of enhancing software robustness and improving feature extraction efficiency.

Publication Date: 2019-10-25 (status: Inactive)
湖南检信智能科技有限公司

AI Technical Summary

Problems solved by technology

However, existing systems suffer from problems such as a low speech recognition accuracy rate, a relatively complex speech recognition system, and poor robustness of software operation.


Image

  • Voice feature matching method based on convolutional neural network

Examples


Embodiment 1

[0049] As shown in Figure 1, a speech feature matching method based on a convolutional neural network includes:

[0050] S1, preprocessing: extracting the Mel spectrogram of the audio signal, cutting it into image segments in the time domain, performing a Fourier transform on the image segments to obtain spectral signals, and extracting feature vectors;

[0051] S2, arranging the feature vectors of the audio samples in chronological order and performing pooling to form a voice record file, and converting the voice record file into a binary feature sequence (steps S1 and S2 are sketched in code after this embodiment);

[0052] S3, voice feature matching: comparing the voice query file with the voice record files and finding the voice record file whose content is the same as that of the voice query file;

[0053] S4, after classifying the matched voice record files, decoding and converting them into text information, and matching them against the corresponding emotion classification templates. After completing the emotion matching...
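The following Python sketch illustrates how steps S1 and S2 might be realized with off-the-shelf tools (librosa and numpy, neither of which the patent names). It computes a Mel spectrogram, cuts it into fixed-width image segments along the time axis, Fourier-transforms each segment, pools the per-segment feature vectors chronologically, and binarizes the pooled vector. The 16 kHz sample rate, segment width, max pooling, and threshold binarization are all assumptions, and the convolutional-network feature extractor implied by the title is not shown; treat this as a minimal sketch rather than the patented method.

```python
# Minimal sketch of steps S1-S2 (assumptions: librosa/numpy, 16 kHz audio,
# 64-frame segments, max pooling, mean-threshold binarization).
import numpy as np
import librosa


def extract_binary_feature_sequence(audio_path, segment_frames=64, n_mels=64):
    # S1: load the audio and compute its (log-scaled) Mel spectrogram.
    y, sr = librosa.load(audio_path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)                # shape: (n_mels, n_frames)

    # S1: cut the spectrogram into fixed-width image segments along the time
    # axis, Fourier-transform each segment to get its spectral signal, and
    # reduce it to a 1-D feature vector.
    features = []
    n_frames = log_mel.shape[1]
    for start in range(0, n_frames - segment_frames + 1, segment_frames):
        segment = log_mel[:, start:start + segment_frames]
        spectrum = np.abs(np.fft.rfft2(segment))      # spectral signal
        features.append(spectrum.mean(axis=1))        # per-segment feature vector

    # S2: arrange the per-segment vectors chronologically, pool them over time,
    # and binarize the pooled vector to obtain the binary feature sequence.
    feature_matrix = np.stack(features)               # (n_segments, n_mels)
    pooled = feature_matrix.max(axis=0)               # max pooling over time
    return (pooled > pooled.mean()).astype(np.uint8)
```

A full implementation would replace the hand-crafted spectrum features with the output of the convolutional network, but the embodiment does not give enough detail to reproduce that part here.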



Abstract

The invention discloses a voice feature matching method based on a convolutional neural network. The method comprises the steps of: S1, carrying out preprocessing, extracting Mel spectrograms from audio signals, cutting the Mel spectrograms into image segments in the time domain, carrying out a Fourier transform on the image segments to obtain spectrum signals, and extracting feature vectors; S2, arranging the feature vectors of audio samples in chronological order, carrying out pooling to form voice record files, and converting the voice record files into binary feature sequences; and S3, carrying out voice feature matching, comparing voice query files with the voice record files, and searching for the voice record files which have the same content as the voice query files. According to the method, the voice recognition accuracy rate is improved, the complexity of the voice recognition system is reduced, and software robustness is improved.
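As a rough illustration of the matching step (S3), the sketch below compares a query's binary feature sequence against stored record sequences using the normalized Hamming distance. The distance metric, the match threshold, and the dictionary-of-records layout (with hypothetical file names) are assumptions, since the abstract only states that query files are compared with record files to find those with the same content.

```python
# Illustrative sketch of step S3, assuming equal-length binary feature
# sequences produced as in steps S1-S2. The Hamming-distance metric and the
# 0.2 match threshold are assumptions, not taken from the patent text.
import numpy as np


def match_query(query_bits, record_files, threshold=0.2):
    """Return names of record files whose binary sequence matches the query."""
    matches = []
    for name, record_bits in record_files.items():
        distance = np.mean(query_bits != record_bits)  # fraction of differing bits
        if distance <= threshold:
            matches.append((name, distance))
    # Closest records first.
    return [name for name, _ in sorted(matches, key=lambda item: item[1])]


# Toy usage with 8-bit sequences (hypothetical file names):
records = {
    "record_a.wav": np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8),
    "record_b.wav": np.array([0, 1, 0, 0, 1, 1, 0, 1], dtype=np.uint8),
}
query = np.array([1, 0, 1, 1, 0, 0, 1, 1], dtype=np.uint8)
print(match_query(query, records))  # -> ['record_a.wav']
```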

Description

Technical Field

[0001] The present invention relates to the technical field of speech recognition, and more specifically to a speech feature matching method based on a convolutional neural network.

Background Technique

[0002] Voice is an important tool for people to communicate, for example in voice calls, voice chats, and voice function prompts. With the in-depth development of the information age, voice interaction technology has received extensive attention in recent years.

[0003] In the existing speech processing technology, for example, the Chinese patent with publication number CN103236260B discloses a speech recognition system, including: a storage unit for storing the speech model of at least one user; a speech collection and preprocessing unit for collecting the speech signal to be recognized and performing format conversion and encoding on the speech signal to be recognized; and a feature extraction unit for extracting speech feature parameters from the enc...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/16, G10L15/02, G10L15/26
CPC: G10L15/02, G10L15/16, G10L15/26
Inventor: 李剑峰
Owner: 湖南检信智能科技有限公司