A Speech Synthesis Method Based on Speech Radar and Video

A speech synthesis technology, applied in the field of radar, which addresses the problem of synthesizing speech from radar signals and image information, achieving natural pronunciation, wide application scenarios and strong noise robustness.

Active Publication Date: 2021-02-12
NANJING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Biomedical radar technology has been extended to voice signal detection, and the quality of the recovered voice signal is comparable to that of a microphone signal; fusing radar features with speech information improves speech recognition under background noise. However, the prior art contains no method that combines radar signals with image information for speech synthesis.



Examples


[0016] With reference to the accompanying drawings, the speech synthesis method based on speech radar and video of the present invention comprises the following steps:

[0017] Step 1. Obtain the fundamental frequency information of the speech from the radar echo signal. Specifically, the non-contact speech radar transmits a continuous sine wave toward the speaker and the receiving antenna receives the echo signal; the received echo is then preprocessed, decomposed into its fundamental and higher-order harmonic modes, and processed in the time-frequency domain to obtain the time-varying vocal-fold vibration frequency, that is, the fundamental frequency of the speech signal;

[0018] The radar echo signal carries the vocal-fold vibration of the speaker as collected by the radar; the speaker's utterance is the sound of a given character.
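The patent gives only the outline above for this step, so the following Python sketch is an illustration rather than the claimed implementation: it estimates a time-varying fundamental frequency from a demodulated radar echo using a band-pass filter and a short-time Fourier transform with frame-wise peak picking. The function name, the assumed F0 search band, and the use of an STFT in place of the patent's fundamental/harmonic mode decomposition are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def radar_fundamental_frequency(echo, fs, f_lo=60.0, f_hi=400.0):
    """Estimate the time-varying vocal-fold vibration frequency (speech F0)
    from a demodulated radar echo signal.

    echo : 1-D array, demodulated radar displacement signal (assumed input)
    fs   : sampling rate in Hz
    f_lo, f_hi : assumed band containing the fundamental (typical adult F0 range)
    """
    # Band-pass filter to isolate the fundamental component from
    # higher-order harmonics and slow body motion.
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    fundamental = sosfiltfilt(sos, echo)

    # Time-frequency analysis: short-time Fourier transform (40 ms frames).
    f, t, Z = stft(fundamental, fs=fs, nperseg=int(0.04 * fs))

    # Pick the dominant frequency in each frame as the instantaneous F0.
    band = (f >= f_lo) & (f <= f_hi)
    f0_track = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    return t, f0_track
```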

[0019] Step 2. Fit the time-varying motion features extracted from the lip video recorded while the speaker is talking against the time-varying formants extracted from the speech signal captured synchronously by a microphone, so as to obtain an empirical formula for the mapping between lip motion features and formants; the lip video of the speaker whose voice is to be synthesized is then fed into this formula to obtain the time-varying formants of that speaker's voice.
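The exact form of the empirical mapping formula is not specified in this excerpt. The sketch below assumes frame-aligned lip features and formant tracks and fits a simple per-feature polynomial by least squares as a stand-in; all function names and the polynomial form are assumptions.

```python
import numpy as np

def fit_lip_to_formants(lip_features, formants, degree=2):
    """Fit an empirical mapping from lip motion features to formants.

    lip_features : (n_frames, n_feats) time-varying lip features from video
    formants     : (n_frames, 3) F1/F2/F3 tracks from synchronous microphone speech
    degree       : polynomial degree of the assumed empirical formula

    Returns a weight matrix W such that phi(lip) @ W approximates the formants.
    """
    phi = _poly_design(lip_features, degree)
    W, *_ = np.linalg.lstsq(phi, formants, rcond=None)
    return W

def predict_formants(lip_features, W, degree=2):
    """Apply the fitted mapping to the lip features of a new speaker's video."""
    return _poly_design(lip_features, degree) @ W

def _poly_design(X, degree):
    # Design matrix [1, X, X**2, ...] built feature-wise (no cross terms).
    cols = [np.ones((X.shape[0], 1))]
    for d in range(1, degree + 1):
        cols.append(X ** d)
    return np.hstack(cols)
```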

Embodiment

[0040] In this embodiment, an adult male speaker pronounces the English character "A", and the fundamental frequency information of the voice is obtained from the radar echo signal recorded while "A" is spoken: the antenna receives the echo, which is preprocessed, decomposed into its fundamental and higher-order harmonic modes, and processed in the time-frequency domain to obtain the time-varying vocal-fold vibration frequency, that is, the fundamental frequency of the speech signal.

[0041] The motion features extracted from lip video and the formants extracted from the voice signal recorded synchronously by a microphone while other speakers pronounce "A" are fitted to obtain an empirical formula for the mapping between lip motion features and three groups of formants; the lip video of the speaker whose voice is to be synthesized is then fed into this formula, and the output is three sets of time-varying formants of the speaker's voice...
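The final step, combining the radar-derived fundamental frequency with the video-derived formants as the abstract describes, is not detailed in this excerpt. One plausible realization is a classic source-filter scheme: an impulse train at the F0 drives a cascade of second-order resonators centred on the three formants. The frame length, formant bandwidths, and function name below are assumptions, and the filter state is not carried across frames, which a real implementation would handle.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize(f0_track, formant_tracks, fs=16000, frame_len=0.01,
               bandwidths=(80.0, 120.0, 160.0)):
    """Source-filter speech synthesis from a fundamental-frequency track
    and three time-varying formant tracks.

    f0_track       : (n_frames,) F0 in Hz (radar-derived)
    formant_tracks : (n_frames, 3) F1/F2/F3 in Hz (video-derived)
    """
    n_samp = int(frame_len * fs)
    out = []
    phase = 0.0
    for f0, formants in zip(f0_track, formant_tracks):
        # Source: impulse train at the fundamental frequency.
        frame = np.zeros(n_samp)
        if f0 > 0:
            period = fs / f0
            while phase < n_samp:
                frame[int(phase)] = 1.0
                phase += period
            phase -= n_samp
        # Filter: cascade of second-order resonators, one per formant.
        for fc, bw in zip(formants, bandwidths):
            r = np.exp(-np.pi * bw / fs)
            theta = 2 * np.pi * fc / fs
            a = [1.0, -2 * r * np.cos(theta), r ** 2]
            frame = lfilter([1.0 - r], a, frame)
        out.append(frame)
    speech = np.concatenate(out)
    return speech / (np.abs(speech).max() + 1e-9)
```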



Abstract

The invention discloses a new speech synthesis method based on speech radar. The processing steps are as follows: the vocal-fold vibration frequency obtained from the radar echo signal is taken as the fundamental frequency of the speech; the motion features extracted from the lip video and the formants extracted from the speech signal recorded synchronously by a microphone while the speaker talks are fitted to obtain an empirical formula for the mapping between lip motion features and formants; the lip video of the tester is then used as input to obtain the time-varying formants; finally, speech is synthesized from the obtained fundamental frequency and time-varying formants. With the method of the invention, speech synthesis combining speech radar and image information can be realized without contacting the speaker.
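Tying the earlier sketches together, a hypothetical end-to-end run (using the assumed helper functions defined above and dummy stand-in data, since the real inputs would come from the radar front end, a lip tracker, and a synchronously recorded microphone corpus) might look like this:

```python
import numpy as np

# Dummy stand-in data for illustration only.
radar_fs = 2000
echo = np.random.randn(radar_fs * 2)                   # 2 s of demodulated radar echo
train_lip = np.random.rand(200, 4)                     # 4 lip features per video frame
train_formants = np.random.rand(200, 3) * 2000 + 300   # F1/F2/F3 in Hz
test_lip = np.random.rand(200, 4)                      # tester's lip video features

t, f0 = radar_fundamental_frequency(echo, fs=radar_fs)  # step 1: radar -> F0
W = fit_lip_to_formants(train_lip, train_formants)      # step 2: fit empirical mapping
formants = predict_formants(test_lip, W)                 #          predict formants from video
n = min(len(f0), len(formants))                          # crude frame alignment
speech = synthesize(f0[:n], formants[:n], fs=16000)      # step 3: source-filter synthesis
```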

Description

Technical field

[0001] The invention belongs to the technical field of radar, and in particular relates to a novel speech synthesis method based on speech radar.

Background technique

[0002] Speech is one of the most effective ways for human beings to communicate. Speech reconstruction and restoration have long been studied. Biomedical radar technology has been extended to voice signal detection, and the quality of the recovered voice signal is comparable to that of a microphone signal; fusing radar features with speech information improves speech recognition under background noise. However, the prior art contains no method that combines radar signals with image information for speech synthesis.

Contents of the invention

[0003] The purpose of the present invention is to provide a novel speech synthesis method based on speech radar.

[0004] The technical solution that realizes the object of the present invention is: a kind of novel speech synthesis method based ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L13/02; G06K9/00; G01S7/41
Inventors: 洪弘, 李慧, 顾陈, 赵恒, 顾旭, 高茜, 奚梦婷, 李彧晟, 孙理, 朱晓华
Owner: NANJING UNIV OF SCI & TECH