Voice frequency signal frame loss compensation method and device

A frame loss compensation method and device for speech and audio signals, applied in the field of speech and audio coding and decoding, addressing problems such as weak transform-domain frame loss compensation, high computational complexity, and mediocre compensation quality.

Active Publication Date: 2013-04-24
ZTE CORP

AI Technical Summary

Problems solved by technology

Although this method is simple to implement and introduces no delay, its compensation quality is mediocre. Other compensation methods, such as GAPES (Gap Data Amplitude and Phase Estimation), need to convert the MDCT coefficients into DSTFT (Discrete Short-Time Fourier Transform) coefficients before compensation; this approach has high computational complexity and consumes a lot of memory. Another method uses shaped-noise insertion to compensate for speech and audio frame loss...
[0004] To sum up, most published transform-domain frame loss compensation techniques either produce no obvious improvement, suffer from high computational complexity and long delay, or compensate poorly for certain signals.

Method used

the structure of the environmentally friendly knitted fabric provided by the present invention; figure 2 Flow chart of the yarn wrapping machine for environmentally friendly knitted fabrics and storage devices; image 3 Is the parameter map of the yarn covering machine
View more

Examples


Embodiment 1

[0086] This embodiment describes the compensation method when the first frame following the correctly received frame is lost. As shown in Figure 1, the method includes the following steps:

[0087] Step 101: Determine the type of the first lost frame, and execute step 102 when the first lost frame is a non-multiharmonic frame, otherwise execute step 104;

[0088] Step 102: When the first lost frame is a non-multiharmonic frame, use the MDCT coefficients of one or several frames preceding the first lost frame to calculate the MDCT coefficients of the first lost frame, obtain a time-domain signal of the first lost frame from those MDCT coefficients, and use that time-domain signal as the initial compensation signal of the first lost frame;

[0089] The following methods can be used to calculate the MDCT coefficient values of the first lost frame: for example, the weighted average of the MDCT coefficients of the previous frames can be used, and the value after...
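As a rough illustration of paragraph [0089], the sketch below (Python with NumPy) estimates a lost frame's MDCT coefficients as an attenuated weighted average of the previous frames' coefficients. The weighting scheme, the attenuation factor of 0.8, and the helper name estimate_lost_mdct are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def estimate_lost_mdct(prev_mdct_frames, weights=None, attenuation=0.8):
    """Estimate the lost frame's MDCT coefficients from previous frames.

    prev_mdct_frames: list of 1-D arrays of equal length, oldest first.
    """
    frames = np.asarray(prev_mdct_frames, dtype=float)
    if weights is None:
        # Assumption: weight more recent frames more heavily.
        weights = np.arange(1, len(frames) + 1, dtype=float)
    weighted_avg = np.average(frames, axis=0, weights=weights)
    # Attenuate slightly so the compensated frame does not add energy.
    return attenuation * weighted_avg

# Example with two history frames of four MDCT coefficients each.
prev = [np.array([0.5, -0.2, 0.1, 0.0]),
        np.array([0.4, -0.1, 0.2, 0.1])]
lost_mdct = estimate_lost_mdct(prev)
# The decoder's usual IMDCT and overlap-add would then turn these
# coefficients into the time-domain initial compensation signal of step 102.
```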

Embodiment 2

[0166] This embodiment describes the compensation method when two or more consecutive frames immediately following the correctly received frame are lost. As shown in Figure 6, the method includes the following steps:

[0167] Step 201: determine the type of the lost frame, and execute step 202 when the lost frame is a non-multiharmonic frame, otherwise execute step 204;

[0168] Step 202: When the lost frame is a non-multiharmonic frame, calculate the MDCT coefficient values of the current lost frame using the MDCT coefficients of one or several frames preceding it, then obtain a time-domain signal of the current lost frame from those MDCT coefficients, and use that time-domain signal as the initial compensation signal;

[0169] Preferably, the weighted average of the MDCT coefficients of the previous frames, after appropriate attenuation, can be used as the MDCT coefficients of the current lost frame,...
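A minimal sketch of how the attenuation mentioned in paragraph [0169] might be applied over a run of consecutive lost frames; the equal-weight average, the base attenuation of 0.8, and the per-frame decay of 0.9 are assumptions chosen for illustration only, not values from the patent.

```python
import numpy as np

def estimate_consecutive_lost_mdct(prev_mdct_frames, n_already_lost,
                                   base_attenuation=0.8, decay=0.9):
    """MDCT estimate for the current lost frame in a run of losses.

    n_already_lost: number of frames already lost before the current one.
    """
    frames = np.asarray(prev_mdct_frames, dtype=float)
    avg = frames.mean(axis=0)  # equal weights, for simplicity
    # Attenuate more strongly the longer the loss run, so the output
    # fades instead of repeating stale spectral content at full level.
    gain = base_attenuation * decay ** n_already_lost
    return gain * avg

# Third frame in a row lost: two frames already lost before it.
history = [np.array([0.4, -0.1, 0.2, 0.1]),
           np.array([0.5, -0.2, 0.1, 0.0])]
mdct_est = estimate_consecutive_lost_mdct(history, n_already_lost=2)
```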

Embodiment 3

[0176] This embodiment describes the recovery processing flow after frame loss when only one non-multiharmonic frame is lost during the frame loss period; when multiple frames are lost, or when the lost frame is a multiharmonic frame, this processing is not required. As shown in Figure 7, in this embodiment the first lost frame immediately follows a correctly received frame, the first lost frame is a non-multiharmonic frame, and the first lost frame is itself immediately followed by a correctly received frame. The flow includes the following steps:

[0177] Step 301: Decode the correctly received frame to obtain its time-domain signal;

[0178] Step 302: Adjust the estimated value of the pitch period used when compensating for the first lost frame. The specific adjustment methods include:

[0179] Denote the estimated value of the pitch period used when compensating for the first lost frame as T, and s...
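Paragraph [0179] is truncated here, so the following is only a generic, hedged illustration of refining a pitch-period estimate T by searching near T for the lag that maximizes the normalized cross-correlation of the decoded time-domain signal; the window length, search range, and function name are assumptions and do not come from the patent text.

```python
import numpy as np

def refine_pitch_period(signal, T, search_range=8, window=160):
    """Return the lag near T that best matches the signal's recent waveform.

    signal: 1-D array of decoded time-domain samples.
    T:      prior pitch-period estimate, in samples.
    """
    tail = signal[-window:]
    best_lag, best_score = T, -np.inf
    for lag in range(max(16, T - search_range), T + search_range + 1):
        if lag + window > len(signal):
            break
        ref = signal[-(lag + window):-lag]
        norm = np.sqrt(np.dot(tail, tail) * np.dot(ref, ref)) + 1e-12
        score = np.dot(tail, ref) / norm
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy usage: a noisy periodic signal with true period of about 50 samples.
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * np.arange(2000) / 50) + 0.05 * rng.standard_normal(2000)
print(refine_pitch_period(sig, T=48))  # typically prints a lag close to 50
```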



Abstract

The invention discloses a voice frequency signal frame loss compensation method and a voice frequency signal frame loss compensation device, which achieve a better compensation effect while guaranteeing zero delay and low complexity. In the method, when the first frame following correctly received frames is lost, the frame type of the first lost frame is judged; when the first lost frame is a non-multiharmonic frame, the modified discrete cosine transform (MDCT) coefficients of the first lost frame are calculated from the MDCT coefficients of one frame or of several frames prior to the first lost frame; an initial compensation signal of the first lost frame is obtained from the MDCT coefficients of the first lost frame; the initial compensation signal then undergoes a first-type waveform adjustment, and the time-domain signal obtained after the adjustment is used as the time-domain signal of the first lost frame. The device comprises a frame type judging module, an MDCT coefficient obtaining module, an initial compensation signal obtaining module and an adjustment module. Compared with the prior art, the method and device have the advantages of zero delay, small computation and storage requirements, easy implementation and a good compensation effect.
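To make the flow in the abstract easier to follow, here is a hedged, high-level sketch of the non-multiharmonic branch; the callable parameters stand in for the frame type judging, MDCT coefficient obtaining, initial compensation signal obtaining, and adjustment modules, and their names are illustrative only.

```python
def compensate_first_lost_frame(prev_mdct_frames, is_multiharmonic,
                                estimate_mdct, imdct, first_type_adjust):
    """Return the time-domain signal used in place of the first lost frame.

    The three callables are placeholders for the MDCT coefficient obtaining
    module, the decoder's IMDCT/overlap-add, and the adjustment module.
    """
    if is_multiharmonic:
        # The multiharmonic branch of the patent is not reproduced here.
        raise NotImplementedError("multiharmonic compensation not sketched")
    # Non-multiharmonic branch described in the abstract:
    mdct_estimate = estimate_mdct(prev_mdct_frames)   # from frame history
    initial_signal = imdct(mdct_estimate)             # initial compensation signal
    return first_type_adjust(initial_signal)          # first-type waveform adjustment
```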

Description

technical field

[0001] The present invention relates to the field of speech and audio coding and decoding, and in particular to a frame loss compensation method and device for speech and audio signals in the MDCT (Modified Discrete Cosine Transform) domain.

Background technique

[0002] In network communication, packet technology is widely used. Various forms of information, such as voice or audio data, are encoded and then transmitted over the network using packet technology, for example VoIP (Voice over Internet Protocol). When the transmission capacity of the sending end is limited, when a packet's information frame does not reach the receiving buffer within the specified delay, or when frame information is lost due to network congestion, the synthesized sound quality at the decoding end declines sharply, so compensation technology is needed to compensate for the lost frame data...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L19/02
CPC: G10L19/005; G10L19/0212; G10L19/00; G10L21/00; G10L19/02
Inventors: 关旭, 袁浩, 彭科, 黎家力
Owner: ZTE CORP