
Voice synthesis method based on voice radar and video

A speech synthesis technology in the field of radar. It addresses the absence of methods for synthesizing speech from radar signals combined with image information, and achieves natural pronunciation, strong noise immunity, and a wide range of application scenarios.

Active Publication Date: 2019-05-17
NANJING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Biomedical radar technology has extended the acquisition of speech signals, and the quality of the speech signal obtained is comparable to that of a microphone signal; fusing digital image processing with speech information features improves speech recognition under background noise. However, the prior art contains no method that combines radar signals with image information for speech synthesis.


Examples


[0016] With reference to the accompanying drawings, the speech synthesis method based on speech radar and video of the present invention comprises the following steps:

[0017] Step 1. Obtain the fundamental frequency information of the speech from the radar echo signal. Specifically, the non-contact speech radar transmits a continuous sine wave toward the speaker, the receiving antenna receives the echo signal, and the received echo is then preprocessed, decomposed into its fundamental-frequency and higher-harmonic modes, and subjected to time-frequency signal processing, so as to obtain the time-varying vocal-cord vibration frequency, that is, the fundamental frequency of the speech signal;

[0018] The radar echo signal is the speaker's vocal-cord vibration signal collected via the radar echo; the speaker's utterance is the sound of a particular character.
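The patent describes Step 1 only at a high level (preprocessing, fundamental and higher-harmonic mode decomposition, time-frequency processing). As a rough sketch of that idea, the snippet below estimates a time-varying fundamental frequency from a demodulated radar baseband signal using bandpass filtering and STFT peak picking; `radar_echo` and `fs` are hypothetical inputs, and the simple filtering here merely stands in for whatever mode-decomposition method the patent actually uses.

```python
# Rough sketch (not the patent's exact algorithm): estimate the time-varying
# vocal-cord vibration frequency, i.e. the speech fundamental frequency, from
# a radar echo. `radar_echo` is a hypothetical 1-D array of demodulated radar
# baseband samples and `fs` is its sampling rate in Hz.
import numpy as np
from scipy import signal

def estimate_f0_from_radar(radar_echo, fs, f_lo=60.0, f_hi=400.0):
    # Preprocessing: remove the DC offset and keep the band where vocal-cord
    # vibration is expected (typical speech F0 spans roughly 60-400 Hz).
    b, a = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    vib = signal.filtfilt(b, a, radar_echo - np.mean(radar_echo))

    # Time-frequency processing: short-time Fourier transform, then pick the
    # strongest spectral peak in each frame as the instantaneous F0.
    f, t, Z = signal.stft(vib, fs=fs, nperseg=int(0.04 * fs))
    band = (f >= f_lo) & (f <= f_hi)
    f0_track = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    return t, f0_track  # frame times (s) and estimated F0 per frame (Hz)
```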

[0019] Step 2. Fit the time-varying motion features extracted from the lip video information captured while the speaker is pronouncing and...
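Step 2 is truncated in the source text; based on the abstract and paragraph [0041], it fits lip motion features against formants measured from synchronized microphone speech. The sketch below assumes that reading and uses a plain linear least-squares map as the "empirical formula"; `lip_feats` and `formants` are hypothetical, time-aligned arrays, and the true functional form used in the patent may differ.

```python
# Minimal sketch of the fitting in Step 2, assuming the lip motion features and
# the synchronized formants have already been extracted and time-aligned.
# `lip_feats` (n_frames x n_features, e.g. lip width/height/opening area) and
# `formants` (n_frames x 3, F1-F3 in Hz from the microphone signal) are
# hypothetical arrays; a linear least-squares map stands in for the patent's
# unspecified "empirical formula".
import numpy as np

def fit_lip_to_formant_map(lip_feats, formants):
    # Append a constant column so the map includes a bias term.
    X = np.hstack([lip_feats, np.ones((lip_feats.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, formants, rcond=None)  # (n_features + 1) x 3
    return W

def apply_lip_to_formant_map(W, lip_feats):
    # Predict time-varying F1-F3 for a new speaker's lip video features.
    X = np.hstack([lip_feats, np.ones((lip_feats.shape[0], 1))])
    return X @ W
```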

Embodiment

[0040] In this embodiment, an adult man pronounces the English character "A". The fundamental frequency information of the speaker's voice while pronouncing "A" is obtained from the radar echo signal: the non-contact speech radar transmits a continuous sine wave toward the speaker, the receiving antenna receives the echo, and the echo is preprocessed, decomposed into its fundamental-frequency and higher-harmonic modes, and subjected to time-frequency signal processing, so as to obtain the time-varying vocal-cord vibration frequency, that is, the fundamental frequency of the speech signal.

[0041] The motion features extracted from the lip video information of other speakers pronouncing "A" are fitted to the formants extracted from the speech signals synchronously acquired by a microphone, yielding an empirical formula for the mapping relationship between the lip motion features and the three groups of formants. Using this empirical formula, and taking the lip video information of the speaker to be synthesized as i...



Abstract

The invention discloses a voice synthesis method based on voice radar and video. The method comprises the following steps: acquiring the vocal-cord vibration frequency from a radar echo signal as the fundamental frequency of the voice; fitting movement characteristics extracted from lip video information of a speaker while talking with formants (resonance peaks) extracted from voice signals synchronously acquired by a microphone, to obtain an empirical equation for the mapping relationship between the lip movement characteristics and the formants; taking a lip video of a test speaker while talking as input to obtain time-varying formants; and finally, performing voice synthesis from the obtained fundamental frequency and the time-varying formants. With the disclosed method, the voice radar can be combined with image information without contacting the speaker, and voice synthesis can be achieved.
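The abstract's final step, synthesizing speech from the radar-derived fundamental frequency and the video-derived time-varying formants, is consistent with a classical source-filter formant synthesizer. The following sketch is a generic implementation of that idea, not the patent's disclosed procedure: an impulse train at the estimated F0 is passed through cascaded second-order resonators at the predicted F1-F3; `f0_track`, `formant_track`, and the frame length are hypothetical inputs.

```python
# Generic source-filter sketch of the final synthesis step (not the patent's
# disclosed procedure): excite cascaded second-order resonators, one per
# formant F1-F3, with an impulse train at the radar-derived fundamental
# frequency. `f0_track` (Hz per frame) and `formant_track` (n_frames x 3, Hz)
# are hypothetical, frame-aligned inputs.
import numpy as np
from scipy import signal

def resonator(f_c, bw, fs):
    # Two-pole IIR resonator centred at f_c with bandwidth bw (both in Hz).
    r = np.exp(-np.pi * bw / fs)
    theta = 2.0 * np.pi * f_c / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [1.0 - r]  # rough gain scaling; the output is normalised below anyway
    return b, a

def synthesize(f0_track, formant_track, fs=16000, frame_len=0.02, bw=80.0):
    frames = []
    n = int(frame_len * fs)
    for f0, formants in zip(f0_track, formant_track):
        # Glottal-like excitation: impulse train at this frame's F0.
        exc = np.zeros(n)
        period = max(int(fs / max(f0, 1.0)), 1)
        exc[np.arange(0, n, period)] = 1.0
        frame = exc
        # Cascade the three formant resonators.
        for f_c in formants:
            b, a = resonator(f_c, bw, fs)
            frame = signal.lfilter(b, a, frame)
        frames.append(frame)
    y = np.concatenate(frames)
    return y / (np.max(np.abs(y)) + 1e-9)  # normalise to [-1, 1]
```

A glottal pulse model, voiced/unvoiced decisions, and per-formant bandwidths would make the output more natural; the impulse-train excitation above only keeps the example minimal.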

Description

Technical field

[0001] The invention belongs to the technical field of radar, and in particular relates to a novel speech synthesis method based on speech radar.

Background technique

[0002] Voice is one of the most effective ways for humans to communicate. Speech reconstruction and restoration have long been studied by researchers. Biomedical radar technology has extended the acquisition of speech signals, and the quality of the speech signal obtained is comparable to that of a microphone signal; in recent years, many researchers in computer technology have fused digital image processing with speech information features, improving speech recognition under background noise. However, there is no method in the prior art that combines radar signals with image information for speech synthesis.

Summary of the invention

[0003] The purpose of the present invention is to provide a novel speech synthesis method based on speech radar...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L13/02; G06K9/00; G01S7/41
Inventors: 洪弘, 李慧, 顾陈, 赵恒, 顾旭, 高茜, 奚梦婷, 李彧晟, 孙理, 朱晓华
Owner: NANJING UNIV OF SCI & TECH