
Fusion method based on end-to-end speech recognition model and language model

A technology relating to speech recognition models and language models, applied in speech recognition, speech analysis, instruments, etc. It addresses the problem that existing fusion algorithms are not self-adaptive and achieves the effect of improving recognition performance after fusion.

Pending Publication Date: 2022-06-07
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the deficiencies of existing language model fusion and internal language model estimation techniques, in particular that the existing algorithms are not self-adaptive, the present invention proposes a fusion method based on an end-to-end speech recognition model and a language model.


Image

  • Figures 1 to 3: Fusion method based on end-to-end speech recognition model and language model

Examples


Embodiment 1

[0027] As shown in Figure 1, Figure 2 and Figure 3, a fusion method based on an end-to-end speech recognition model and a language model is provided. The speech recognition model used in this embodiment consists of a Conformer encoder, an additive attention mechanism and an LSTM decoder. The Conformer encoder has 12 layers, each layer is 512 dimensions wide, and the encoder uses eight self-attention heads. Random dropout is used during training to prevent the model from overfitting. The decoder is composed of a two-layer long short-term memory (LSTM) network, and each layer is 2048 units wide. The language model is an RNN language model consisting of a three-layer LSTM network whose hidden layers are 2048 dimensions wide; random dropout is likewise used during its training to prevent overfitting. In this embodiment, a general-purpose Chinese data set is used to train the speech recognition ...
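For reference, the hyperparameters listed in this embodiment (12-layer, 512-dimensional Conformer encoder with eight attention heads; two-layer, 2048-wide LSTM decoder; three-layer, 2048-wide LSTM language model) can be gathered into a single configuration object. The sketch below is illustrative only and assumes a generic Python training setup; the class name, field names and the dropout value are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class AsrFusionConfig:
    """Hyperparameters of Embodiment 1 (illustrative names; values from the text)."""
    # Conformer encoder
    encoder_layers: int = 12          # 12 Conformer blocks
    encoder_dim: int = 512            # width of each encoder layer
    encoder_attention_heads: int = 8  # self-attention heads
    # LSTM decoder
    decoder_layers: int = 2           # two-layer LSTM decoder
    decoder_dim: int = 2048           # width of each decoder layer
    # External RNN language model
    lm_layers: int = 3                # three-layer LSTM language model
    lm_hidden_dim: int = 2048         # hidden width of the LSTM LM
    # Regularisation
    dropout: float = 0.1              # rate not given in the text; 0.1 is an assumed placeholder

config = AsrFusionConfig()
print(config)
```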

Embodiment 2

[0055] The trained end-to-end speech recognition model used in this embodiment can be expressed by the following structure. The speech to be recognized is extracted into a feature sequence and input into the encoder of the speech recognition model. The encoder output is passed to the attention mechanism module and cached for later use. The decoder operates autoregressively and integrates the attention mechanism to compute the predicted output at the current time step. The specific formulas are as follows:

[0056] H = Conformer(X)

[0057]

[0058] q_i = FNN_1(s_i)

[0059] c_i = Attention(H, q_i)

[0060]

[0061] where X = [x_1, x_2, …, x_t, …, x_T] is the audio feature sequence to be recognized, x_t denotes the audio feature of the t-th frame, X ∈ R^(T×d), T is the length of the audio sequence and d is the feature dimension; H = [h_1, h_2, …, h_t, …, h_T] is the output encoded by the encoder, with h_t being the encoded output correspo...
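The formulas above describe one autoregressive decoding step of an attention-based encoder-decoder. The PyTorch sketch below traces that data flow, H = Conformer(X), q_i = FNN_1(s_i), c_i = Attention(H, q_i), using additive attention. It is a minimal sketch under several assumptions: the encoder is reduced to a placeholder linear layer rather than a real Conformer, a single LSTMCell stands in for the two-layer decoder, and all module names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """c_i = Attention(H, q_i): additive (Bahdanau-style) attention over encoder frames."""
    def __init__(self, enc_dim, query_dim, attn_dim):
        super().__init__()
        self.w_h = nn.Linear(enc_dim, attn_dim)
        self.w_q = nn.Linear(query_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, H, q):
        # H: (B, T, enc_dim), q: (B, query_dim)
        scores = self.v(torch.tanh(self.w_h(H) + self.w_q(q).unsqueeze(1))).squeeze(-1)  # (B, T)
        alpha = torch.softmax(scores, dim=-1)              # attention weights over frames
        return torch.bmm(alpha.unsqueeze(1), H).squeeze(1) # context vector c_i: (B, enc_dim)

# Illustrative sizes: 512-dim encoder output and a 2048-wide LSTM decoder, as in the embodiment.
feat_dim, enc_dim, dec_dim, attn_dim, T = 80, 512, 2048, 256, 100

encoder = nn.Linear(feat_dim, enc_dim)      # placeholder for H = Conformer(X)
fnn1 = nn.Linear(dec_dim, attn_dim)         # q_i = FNN_1(s_i)
attention = AdditiveAttention(enc_dim, attn_dim, attn_dim)
decoder = nn.LSTMCell(enc_dim, dec_dim)     # stand-in for the two-layer LSTM decoder

X = torch.randn(1, T, feat_dim)             # audio feature sequence X in R^(T x d), batch of one
H = encoder(X)                              # encoded output H = [h_1, ..., h_T]

s = torch.zeros(1, dec_dim)                 # decoder hidden state s_i
m = torch.zeros(1, dec_dim)                 # LSTM cell state
for _ in range(5):                          # a few autoregressive steps
    q = fnn1(s)                             # q_i = FNN_1(s_i)
    c = attention(H, q)                     # c_i = Attention(H, q_i)
    s, m = decoder(c, (s, m))               # update the decoder state from the context
```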

Embodiment 3

[0067] This embodiment implements a fusion method based on an end-to-end speech recognition model and a language model, comprising the following steps (a code sketch of the internal language model estimation follows the list):

[0068] S1. Use speech-text pairs to train an end-to-end speech recognition model, and use text data to train an external language model. The end-to-end speech recognition model includes an encoder, a decoder and an attention mechanism, and the decoder obtains the acoustic information processed by the encoder through the attention mechanism;

[0069] The encoder is a Conformer encoder, a BLSTM encoder or a Transformer encoder;

[0070] The decoder is an LSTM decoder or a Transformer decoder;

[0071] The attention mechanism is an additive attention mechanism, a position-sensitive attention mechanism, or a monotonic attention mechanism. The end-to-end speech recognition model can also be formed by combining the above modules.

[0072] S2. Take out the decoder of the trained end-to-end speech recognition model separately, and replace the...
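Steps S2 and S3 take the decoder of the trained recognition model out on its own and train it on text alone, so that after convergence it approximates the internal language model the decoder has learned. The exact replacement applied to the decoder's acoustic input is cut off in this extract; the sketch below assumes, as one common choice, that the attention context vector is replaced with zeros so the decoder can be driven by text only. All class, module and variable names are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class DecoderAsInternalLM(nn.Module):
    """Wrap the ASR decoder as a standalone LM by feeding a fixed (here: zero) context
    in place of the attention output, so it can be trained on text alone (assumed variant)."""
    def __init__(self, decoder_cell: nn.LSTMCell, embed: nn.Embedding,
                 out_proj: nn.Linear, ctx_dim: int):
        super().__init__()
        self.decoder_cell = decoder_cell   # taken from the trained recognition model
        self.embed = embed                 # token embedding taken from the recognition model
        self.out_proj = out_proj           # output projection taken from the recognition model
        self.ctx_dim = ctx_dim

    def forward(self, tokens):             # tokens: (B, L) token ids
        B, L = tokens.shape
        dec_dim = self.decoder_cell.hidden_size
        s = tokens.new_zeros(B, dec_dim, dtype=torch.float)        # decoder hidden state
        m = tokens.new_zeros(B, dec_dim, dtype=torch.float)        # LSTM cell state
        zero_ctx = tokens.new_zeros(B, self.ctx_dim, dtype=torch.float)  # replaces the context
        logits = []
        for i in range(L):
            step_in = torch.cat([self.embed(tokens[:, i]), zero_ctx], dim=-1)
            s, m = self.decoder_cell(step_in, (s, m))
            logits.append(self.out_proj(s))
        return torch.stack(logits, dim=1)   # (B, L, vocab) next-token scores

# Illustrative instantiation: vocab 5000, 256-dim embeddings, 512-dim context, 2048-wide decoder.
vocab, emb_dim, ctx_dim, dec_dim = 5000, 256, 512, 2048
internal_lm = DecoderAsInternalLM(
    decoder_cell=nn.LSTMCell(emb_dim + ctx_dim, dec_dim),  # in practice: the trained ASR decoder
    embed=nn.Embedding(vocab, emb_dim),
    out_proj=nn.Linear(dec_dim, vocab),
    ctx_dim=ctx_dim,
)
tokens = torch.randint(0, vocab, (4, 12))                  # a batch of text-only training data
loss = nn.CrossEntropyLoss()(internal_lm(tokens)[:, :-1].reshape(-1, vocab),
                             tokens[:, 1:].reshape(-1))    # standard next-token LM objective
```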



Abstract

The invention belongs to the technical field of end-to-end speech recognition, and discloses a fusion method based on an end-to-end speech recognition model and a language model. The method comprises the following steps: S1, training an end-to-end speech recognition model using speech-text pairs, and training an external language model using text data; S2, taking out the decoder part of the trained speech recognition model separately to form an independent model; S3, training the independent model separately on the text of the training data, and obtaining an estimation model of the internal language model after convergence; and S4, decoding with the fused scores of the speech recognition model, the external language model and the internal language model estimation model to obtain the decoding result. The algorithm improves recognition accuracy after the speech recognition model and the language model are fused, and has broad application prospects in the field of speech recognition.
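Step S4 fuses three scores at decoding time: the speech recognition model's score, the external language model's score, and the internal language model estimate. The function below is a minimal sketch of one common log-linear form of such a fusion, in which the external LM score is added and the internal LM estimate is subtracted so the decoder's built-in language bias is not counted twice; the weight names and values (lambda_ext, lambda_int) are illustrative assumptions, and the patent's exact fusion formula is not shown in this extract.

```python
import math

def fused_score(log_p_asr: float, log_p_ext_lm: float, log_p_int_lm: float,
                lambda_ext: float = 0.3, lambda_int: float = 0.2) -> float:
    """Log-linear score fusion for one candidate token during beam search (step S4, assumed form):
    add the external LM score, subtract the internal LM estimate."""
    return log_p_asr + lambda_ext * log_p_ext_lm - lambda_int * log_p_int_lm

# Toy example: pick the better of two candidate tokens for one beam step.
candidates = {
    "今天": (math.log(0.5), math.log(0.4), math.log(0.45)),
    "金天": (math.log(0.4), math.log(0.01), math.log(0.2)),
}
best = max(candidates, key=lambda tok: fused_score(*candidates[tok]))
print(best)  # the fused score favours the candidate that the external LM also supports
```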

Description

Technical field

[0001] The invention belongs to the technical field of speech recognition, and in particular relates to a fusion method based on an end-to-end speech recognition model and a language model.

Background technique

[0002] At present, the most classic speech recognition method is based on the combination of the Hidden Markov Model (HMM) and the Deep Neural Network (DNN). Although this method makes good use of the short-term stationarity of speech signals, it still has shortcomings such as the multi-model cascade of acoustic model, pronunciation dictionary and language model, inconsistent training objectives across models, and a large decoding space. The invention of end-to-end speech recognition simplifies the entire speech recognition process, with simple and consistent training objectives.

[0003] At present, end-to-end speech recognition models can be mainly divided into three categories: the connectionist temporal classification model (Co...

Claims


Application Information

IPC(8): G10L15/06, G10L15/183, G10L15/26, G10L19/16
CPC: G10L15/063, G10L15/183, G10L15/26, G10L19/16, G10L2015/0631
Inventor: 柳宇非, 张伟彬, 邢晓芬, 徐向民
Owner: SOUTH CHINA UNIV OF TECH