
Method and device for compensating drop frame after start frame of voiced sound

A compensation method for frame loss after the voiced onset frame, applied in the field of speech coding and decoding, which addresses the problem that the sound quality of the compensated speech is not guaranteed.

Inactive Publication Date: 2013-02-06
ZTE CORP

AI Technical Summary

Problems solved by technology

Existing schemes select different frame loss compensation methods according to the type of the frames adjacent to (preceding) the lost frame. However, a frame lost after a voiced onset frame is usually compensated with a method similar to that used for a frame lost after an ordinary voiced frame, so the sound quality of the compensation is not guaranteed when the loss occurs immediately after the voiced onset frame.



Examples


Embodiment 1

[0032] This embodiment describes a method for compensating the loss of the first frame immediately following the voiced onset frame, as shown in Figure 1, comprising the following steps:

[0033] Step 101: the voiced onset frame is correctly received, and it is judged whether the first frame following it (hereinafter referred to as the first lost frame) has been lost; if it has, step 102 is executed, otherwise the process ends;

[0034] Step 102: a pitch delay estimation method is selected according to whether the voiced onset frame meets the stability condition, and is used to infer the pitch delay of the first lost frame;

[0035] Specifically, if the voiced onset frame meets the stability condition, the pitch delay of the first lost frame is inferred as follows: the integer part of the pitch delay of the last subframe in the voiced onset frame (T₋₁) is used as the pitch delay of each subframe in the first...
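To make the stable-onset branch of step 102 concrete, the short Python sketch below copies the integer part of T₋₁ into every subframe of the first lost frame. This is only an illustration of the idea in the excerpt; the function name and the assumption of 4 subframes per frame are not specified in the patent text.

```python
# Minimal sketch (not the patent's normative implementation) of step 102 when
# the voiced onset frame meets the stability condition: the integer part of the
# pitch delay of the last subframe of the onset frame, T_-1, is reused as the
# pitch delay of every subframe of the first lost frame.

SUBFRAMES_PER_FRAME = 4  # assumed subframe count per frame

def infer_pitch_delay_stable(t_minus_1: float) -> list[int]:
    """Pitch delay of each subframe of the first lost frame (stable onset case)."""
    t_int = int(t_minus_1)  # integer part of T_-1
    return [t_int] * SUBFRAMES_PER_FRAME

# Example: the last subframe of the onset frame had pitch delay 57.25
print(infer_pitch_delay_stable(57.25))  # -> [57, 57, 57, 57]
```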

Embodiment 2

[0103] This embodiment describes a method for compensating the loss of the first frame immediately following the voiced onset frame; it differs from Embodiment 1 in that a second correction process is added.

[0104] Step 201 is the same as step 101 in embodiment 1;

[0105] Step 202: this step differs from step 102 mainly in that, when the voiced onset frame does not meet the stability condition, T₋₁ is first corrected using the first correction amount, a second correction process is then applied to the corrected T₋₁, and the result of that correction process is used as the final estimate of the pitch delay of each subframe of the first lost frame.

[0106] Specifically, the second correction process is as follows:

[0107] It is judged whether the following two conditions are met; if they are, T₋₁ is taken as the median value of the pitch delay. Condition 1: the absolute value of the difference between the corrected T₋₁ (i.e. Tc = T₋₁ + fs*fm) and T₋₁ ...
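As an illustration of the shape of this second correction, the sketch below computes the corrected value Tc = T₋₁ + fs*fm and falls back to T₋₁ when the correction moves too far from it. Because paragraph [0107] is truncated, the comparison threshold and the handling of condition 2 are assumptions, not the patented rule; only the formula for Tc comes from the text.

```python
# Illustrative sketch of the correction in step 202 / paragraph [0107]. The
# excerpt is truncated, so the threshold and the second condition are
# placeholders (assumptions). Only Tc = T_-1 + fs * fm is taken from the text.

def correct_pitch_delay(t_minus_1: float, f_s: float, f_m: float,
                        diff_threshold: float = 10.0) -> float:
    """Apply the first correction to T_-1 and decide whether to keep it."""
    t_c = t_minus_1 + f_s * f_m          # first correction (from the text)
    # Condition 1 (partially specified): compare |Tc - T_-1| against a bound.
    # Condition 2 is not visible in the excerpt, so it is omitted here.
    if abs(t_c - t_minus_1) > diff_threshold:
        return t_minus_1                 # fall back to T_-1 (the median value)
    return t_c
```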

Embodiment 3

[0138] This embodiment describes a method for compensating the loss of two or more frames immediately following the voiced onset frame, where the lost frames comprise the first lost frame and one or more lost frames immediately after it, as shown in Figure 4, comprising the following steps:

[0139] Step 301, using the method in embodiment 1 or embodiment 2 to infer the pitch delay and adaptive codebook gain of the first lost frame;

[0140] Step 302, for one or more lost frames following the first lost frame, use the pitch delay of the previous lost frame of the current lost frame as the pitch delay of the current lost frame;

[0141] Step 303: the adaptive codebook gain value obtained by attenuating and interpolating the estimated adaptive codebook gain of the last subframe of the lost frame preceding the current lost frame is used as the adaptive codebook gain of each subframe in the current lost frame; ...
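A rough Python sketch of steps 302 and 303 follows. The attenuation factor, the subframe count, and the linear interpolation rule are assumptions for illustration only, since the excerpt does not spell them out.

```python
# Illustrative sketch of steps 302-303 for a lost frame that follows the first
# lost frame: the pitch delay of the previous lost frame is reused, and the
# adaptive codebook gain is obtained by attenuating the previous frame's
# last-subframe gain and interpolating per subframe.

ATTENUATION = 0.9   # assumed per-frame attenuation factor
SUBFRAMES = 4       # assumed subframes per frame

def compensate_following_lost_frame(prev_pitch_delays: list[int],
                                    prev_last_gain: float):
    pitch_delays = list(prev_pitch_delays)        # step 302: reuse pitch delay
    target_gain = prev_last_gain * ATTENUATION    # step 303: attenuated gain
    # Interpolate from the previous gain toward the attenuated target,
    # one value per subframe (the interpolation rule is an assumption).
    gains = [prev_last_gain + (target_gain - prev_last_gain) * (i + 1) / SUBFRAMES
             for i in range(SUBFRAMES)]
    return pitch_delays, gains

# Example with made-up values
print(compensate_following_lost_frame([57, 57, 57, 57], 0.8))
```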


Abstract

The invention discloses a method and a device for compensating a frame dropped after the voiced onset frame, so that the sound quality of the compensation for such a dropped frame is guaranteed. The method includes: selecting different ways to infer the pitch delay of the first dropped frame following the voiced onset frame according to whether the onset frame meets a stability condition; inferring the adaptive codebook gain of the first dropped frame from the adaptive codebooks of one or more subframes received before it, or from the energy change of the time-domain speech signal of the voiced onset frame; and compensating the first dropped frame with the inferred pitch delay and adaptive codebook gain. After compensation, each subframe of the first correctly received frame following the voiced onset frame is decoded to obtain its adaptive codebook gain, this gain is multiplied by a scale factor to obtain a new adaptive codebook gain for the corresponding subframe, and the new gain replaces the decoded one in speech synthesis. In this way, error propagation caused by the dropped frame is reduced and the energy used for speech synthesis is controlled.
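The gain-rescaling step mentioned at the end of the abstract can be pictured with the short sketch below. How the scale factor is derived is not described in this excerpt, so it is simply a parameter here, and the numbers in the usage example are made up.

```python
# Minimal sketch of the post-recovery step from the abstract: for the first
# correctly received frame after the compensated frames, each decoded adaptive
# codebook gain is multiplied by a scale factor before speech synthesis.

def rescale_adaptive_gains(decoded_gains: list[float],
                           scale_factor: float) -> list[float]:
    """Replace decoded adaptive codebook gains with scaled values."""
    return [g * scale_factor for g in decoded_gains]

# Example usage with made-up per-subframe gains and scale factor
print(rescale_adaptive_gains([0.8, 0.75, 0.9, 0.85], 0.7))
```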

Description

Technical field

[0001] The invention relates to the technical field of speech coding and decoding, and in particular to a method and device for compensating frame loss after the voiced onset frame.

Background technique

[0002] When voice frames are transmitted over a channel such as a wireless environment or an IP network, various complex factors involved in the transmission process may cause frames to be lost at the receiver, which seriously degrades the quality of the synthesized speech at the receiving end. The purpose of frame loss compensation technology is to reduce the speech quality degradation caused by frame loss, so as to improve the listener's subjective experience.

[0003] CELP (Code Excited Linear Prediction) speech codecs are widely used in practical communication systems because they can provide good speech quality at medium and low bit rates. A CELP codec is a prediction-based speech codec: the speech frame currently being decoded depends not only ...


Application Information

IPC(8): G10L19/008; G10L21/003
CPC: G10L19/005; G10L19/09; G10L19/008; G10L21/003
Inventor: 关旭 (Guan Xu), 袁浩 (Yuan Hao), 彭科 (Peng Ke), 黎家力 (Li Jiali)
Owner: ZTE CORP