
Method for converting lip image sequence into voice coding parameters

A speech-coding and image-sequence technology, applied in speech analysis, speech synthesis, and computer components; it addresses the problem of an overly complex conversion process and achieves the effect of easier construction and training.

Active Publication Date: 2018-10-12
SHANGHAI UNIVERSITY OF ELECTRIC POWER


Problems solved by technology

Compared with earlier techniques, the prior art achieves a higher recognition rate, but its conversion process is also very complicated.



Examples


Embodiment 1

[0050] The following is a specific implementation; the methods and principles of the present invention are not limited to the specific values given here.

[0051] (1) The predictor can be realized by an artificial neural network, or built using other machine-learning techniques. In the following process, the predictor is a deep artificial neural network; that is, "predictor" here is equivalent to a deep artificial neural network.

[0052] As shown in Figure 3, the artificial neural network consists mainly of three convolutional LSTM layers (ConvLSTM2D) and two fully connected layers (Dense) connected in sequence. Each ConvLSTM2D is followed by a pooling layer (MaxPooling2D), and each of the two Dense layers is preceded by a dropout layer (Dropout); for clarity of the structure, these are not drawn in Figure 3.

[0053] Each of the three convolutional LSTM layers has 80 neurons, ...
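The architecture described in [0052]-[0053] can be sketched in Keras as follows. Only the overall layer order (three ConvLSTM2D layers with pooling, then two Dense layers with dropout) and the 80 ConvLSTM filters come from the text; the kernel size, pooling factors, dropout rate, window length k, image size, and number of output coding parameters are illustrative assumptions, not the patent's values.

```python
# Sketch of the predictor network, assuming tf.keras is available.
# 3 x ConvLSTM2D (each followed by pooling) -> 2 x (Dropout + Dense).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_predictor(k=5, height=32, width=32, channels=1, n_coding_params=18):
    model = models.Sequential([
        layers.Input(shape=(k, height, width, channels)),
        layers.ConvLSTM2D(80, (3, 3), padding="same", return_sequences=True),
        layers.MaxPooling3D(pool_size=(1, 2, 2)),        # pool spatial dims only
        layers.ConvLSTM2D(80, (3, 3), padding="same", return_sequences=True),
        layers.MaxPooling3D(pool_size=(1, 2, 2)),
        layers.ConvLSTM2D(80, (3, 3), padding="same"),   # last layer drops the time axis
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.3),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_coding_params),  # one coding-parameter vector per window
    ])
    return model

model = build_predictor()
```

The network maps one short-time window of k lip images to a single coding-parameter vector, which matches the converter's per-frame prediction step.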



Abstract

The invention relates to a method for converting a lip image sequence into speech coding parameters. The method comprises the following steps: (1) a speech-coding-parameter converter is built, comprising an input buffer and a predictor with configured parameters; (2) lip images are received in time order and stored in the converter's input buffer; (3) at regular intervals, the k most recently buffered lip images are sent to the predictor as a short-time image sequence, yielding a prediction result, which is the coding-parameter vector of one speech frame; and (4) the converter outputs the prediction result. Compared with the prior art, the method converts directly, requires no intermediate text conversion, and is easy to construct and train.
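Steps (1)-(4) above amount to a sliding-window loop over the incoming frames. The sketch below illustrates that control flow only; the window length, parameter-vector size, and the dummy predictor are illustrative assumptions standing in for the trained network.

```python
# Minimal sketch of the converter loop: a bounded input buffer receives
# lip images in time order, and each time the buffer holds k frames the
# k most recent frames are passed to a predictor that returns one
# speech-frame coding-parameter vector.
from collections import deque
import numpy as np

K = 5          # assumed window length (k most recent lip images)
N_PARAMS = 18  # assumed length of one coding-parameter vector

def dummy_predictor(window):
    """Stand-in for the trained network: k images -> one parameter vector."""
    stacked = np.stack(window)                 # (k, H, W)
    return np.full(N_PARAMS, stacked.mean())   # placeholder output

def convert(frames, k=K, predictor=dummy_predictor):
    buf = deque(maxlen=k)        # step (1): input buffer of the converter
    outputs = []
    for frame in frames:         # step (2): receive frames in time order
        buf.append(frame)
        if len(buf) == k:        # step (3): predict on the k latest images
            outputs.append(predictor(list(buf)))
    return outputs               # step (4): output the coding parameters

frames = [np.random.rand(32, 32) for _ in range(20)]
params = convert(frames)
# 20 frames with a window of 5 yields 16 parameter vectors
```

Because the buffer is bounded (`maxlen=k`), old frames are discarded automatically, so the converter runs in constant memory regardless of sequence length.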

Description

technical field

[0001] The invention relates to the technical fields of computer vision, digital image processing and microelectronics, in particular to a method for converting a lip image sequence into speech coding parameters.

Background technique

[0002] Lip recognition generates corresponding text from lip videos. Existing related technical solutions include the following:

[0003] (1) CN107122646A, title of invention: a method for realizing lip-language unlocking. The principle is to compare lip features collected in real time with pre-stored lip features to determine identity; only lip features can be obtained.

[0004] (2) CN107437019A, title of invention: identity-verification method and device for lip-language recognition. The principle is similar to (1); the difference lies in the use of 3D images.

[0005] (3) CN106504751A, title of invention: adaptive lip-language interaction method and interaction device. The principle...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L13/08; G10L13/027; G10L25/30; G10L25/57; G06K9/00
CPC: G10L13/027; G10L13/08; G10L25/30; G10L25/57; G06V40/20
Inventor: 贾振堂
Owner: SHANGHAI UNIVERSITY OF ELECTRIC POWER