Method for converting lip image feature into speech coding parameter

A technology relating to speech coding and image features, applied in speech analysis, speech synthesis, neural-network learning methods, and similar fields. It solves the problem of an overly complex conversion process and achieves a converter that is convenient to construct and train.

Active Publication Date: 2018-09-14
SHANGHAI UNIVERSITY OF ELECTRIC POWER

AI Technical Summary

Problems solved by technology

Compared with earlier approaches, the prior art achieves a higher recognition rate, but its conversion process is also very complicated.

Examples

Embodiment 1

[0048] The following is a specific implementation; the method and principle of the present invention are not limited to the specific values given in it.

[0049] (1) The predictor can be realized with an artificial neural network; it can also be constructed with other machine learning techniques. In the following description the predictor is an artificial neural network, that is, "predictor" and "artificial neural network" are used interchangeably.

[0050] In this embodiment, the neural network consists of 3 LSTM layers followed by 2 fully connected (Dense) layers, connected in sequence. A Dropout layer is inserted between every two layers and on the internal feedback (recurrent) connections of each LSTM; for clarity, these Dropout layers are not drawn in the figure. As shown in Figure 3:

[0051] Specifically, the three LSTM layers have 80 neurons each, and the first two use the "return_sequences" mode. The two Dense layers have 100 and 14 neurons respectively.
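As a concrete illustration, here is a minimal sketch of this network in Keras. It is an assumption-laden reconstruction, not code from the patent: the window length k, the feature dimension D, the dropout rate, the Dense activation, and the loss and optimizer are illustrative choices; only the layer and neuron counts (3 LSTM layers of 80 units, the first two with return_sequences, then Dense layers of 100 and 14 units, with Dropout between layers and on the LSTM feedback connections) come from the text above.

    from tensorflow.keras import Input
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense, Dropout

    k, D = 10, 32   # assumed: k cached lip feature vectors of dimension D per input

    model = Sequential([
        Input(shape=(k, D)),
        # 3 LSTM layers of 80 neurons; the first two return full sequences.
        # recurrent_dropout stands in for dropout on the internal feedback links.
        LSTM(80, return_sequences=True, recurrent_dropout=0.2),
        Dropout(0.2),
        LSTM(80, return_sequences=True, recurrent_dropout=0.2),
        Dropout(0.2),
        LSTM(80, recurrent_dropout=0.2),   # last LSTM keeps only its final output
        Dropout(0.2),
        # 2 fully connected (Dense) layers: 100 neurons, then 14 outputs,
        # i.e. one speech coding parameter vector per prediction.
        Dense(100, activation="relu"),     # activation is an assumption
        Dense(14),
    ])
    model.compile(optimizer="adam", loss="mse")   # assumed training setup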

[0052] The first...

Abstract

The invention relates to a method for converting lip image features into speech coding parameters, comprising the following steps: 1) constructing a speech coding parameter converter that includes an input cache and a trained predictor, receiving lip feature vectors successively in chronological order and storing them in the converter's input cache; 2) at regular intervals, feeding the k latest cached lip feature vectors into the predictor as a short-time vector sequence and obtaining, as the predicted result, the coding parameter vector of one speech frame; and 3) having the speech coding parameter converter output the predicted result. Compared with the prior art, the method requires no intermediate text representation, has high conversion efficiency, and is easy to train.
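The three steps above map naturally onto a small wrapper class. The sketch below assumes a Keras-style predictor exposing predict() and illustrative values of k and the feature dimension D; every name in it is hypothetical, chosen only to mirror the Abstract's steps.

    from collections import deque
    import numpy as np

    class LipToSpeechConverter:
        def __init__(self, predictor, k):
            self.predictor = predictor     # trained predictor: (1, k, D) -> (1, 14)
            self.k = k
            self.cache = deque(maxlen=k)   # input cache holding the k latest vectors

        def receive(self, lip_feature_vector):
            # Step 1: lip feature vectors arrive successively in chronological order.
            self.cache.append(np.asarray(lip_feature_vector))

        def step(self):
            # Steps 2-3: at each speech-frame interval, feed the k latest cached
            # vectors to the predictor; the result is one frame's coding parameters.
            if len(self.cache) < self.k:
                return None                # not enough history cached yet
            seq = np.stack(self.cache)[np.newaxis, ...]   # shape (1, k, D)
            return self.predictor.predict(seq, verbose=0)[0]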

Description

Technical Field

[0001] The invention relates to the fields of computer vision, digital image processing, and microelectronics, and in particular to a method for converting lip image features into speech coding parameters.

Background

[0002] Lip language recognition generates corresponding text from lip videos. Related existing technical solutions include the following:

[0003] (1) CN107122646A, "A method for realizing lip language unlocking". Its principle is to compare lip features collected in real time against pre-stored lip features to verify identity; however, only lip features can be obtained.

[0004] (2) CN107437019A, "Identity verification method and device for lip recognition". Its principle is similar to (1), except that 3D images are used.

[0005] (3) CN106504751A, "Adaptive lip language interaction method and interaction device". The principle is sti...

Application Information

IPC(8): G10L13/08; G10L13/027; G10L25/57; G10L25/30; G06N3/08; G06N3/04; G06K9/62; G06K9/46; G06K9/00
CPC: G06N3/084; G10L13/027; G10L13/08; G10L25/30; G10L25/57; G06V40/20; G06V20/41; G06V20/46; G06V10/44; G06N3/045; G06F18/214
Inventor 贾振堂
Owner SHANGHAI UNIVERSITY OF ELECTRIC POWER