
Many-to-many speaker conversion method based on Perceptual STARGAN

A speaker conversion method, applied in speech analysis, speech recognition, instruments, etc., that addresses problems such as network degradation.

Pending Publication Date: 2019-12-20
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0007] Purpose of the invention: the technical problem to be solved by the present invention is to provide a perceptual-STARGAN-based many-to-many speaker conversion method. The method solves the network-degradation problem in the training process of existing methods, reduces the difficulty of learning semantics in the encoding network, enables the model to learn deep spectral features, and improves the spectrum-generation quality of the decoding network. It also avoids the information loss and noise introduced by batch normalization, and learns the semantic features and the speaker's personalized features more fully, thereby greatly improving the personality similarity and voice quality of the converted speech.




Embodiment Construction

[0070] In the image field, a perceptual network is used to compute a perceptual loss, which helps the converted image retain finer details and edge features. The present invention applies the idea of the perceptual network to the field of speech conversion. In the image field, however, a pre-trained network is used to extract high-dimensional image information, and no comparable general-purpose pre-trained network exists in the speech field. The present invention therefore creatively uses the discriminator itself as the perceptual network to compute the perceptual loss, extracting high-dimensional information from the spectrum, improving the model's ability to extract the semantic features and personality features of the spectrum, and improving the quality of the generated speech. The present invention treats part of the network structure of the discriminator D as a perceptual network, and uses this perceptual network to calculate the perceptual loss of the deep sem...
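The idea above — reusing the discriminator's early layers as a feature extractor and penalizing the distance between deep features of real and converted spectra — can be sketched as follows. This is a minimal illustration, not the patented implementation: the layer widths, the random stand-in weights, and the 36-dimensional spectral-envelope input are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for the first few (shared) discriminator layers.
# In the actual system these would be the trained parameters of D.
W1 = rng.standard_normal((36, 64)) * 0.1   # 36 spectral coefficients -> 64 units
W2 = rng.standard_normal((64, 32)) * 0.1   # 64 units -> 32 deep features

def perceptual_features(spectrum):
    """Pass a (frames, 36) spectral envelope through the shared layers of D."""
    h = np.maximum(spectrum @ W1, 0.0)      # ReLU
    return np.maximum(h @ W2, 0.0)

def perceptual_loss(real, generated):
    """Mean squared distance between deep features of real and converted spectra."""
    f_real = perceptual_features(real)
    f_gen = perceptual_features(generated)
    return float(np.mean((f_real - f_gen) ** 2))

real = rng.standard_normal((128, 36))                    # 128 frames of a real spectrum
fake = real + 0.05 * rng.standard_normal((128, 36))      # a slightly-off converted spectrum
loss = perceptual_loss(real, fake)
```

Because the features come from the discriminator rather than a fixed pre-trained network, the loss adapts as D learns which spectral details distinguish speakers.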


Abstract

The invention discloses a many-to-many speaker conversion method based on perceptual STARGAN. The method comprises a training stage and a conversion stage, and combines STARGAN with a perceptual network to realize a speech conversion system. The perceptual loss, computed with the perceptual network, improves the model's ability to extract the deep semantic features and personal features of the speech spectrum. This improves the model's learning of the semantics and personal features of the spectrum, thereby improving the personal similarity and speech quality of the converted speech, solving the STARGAN problem of poor similarity and naturalness in the generated speech, and implementing a high-quality voice conversion method. The method realizes voice conversion under non-parallel text conditions; no alignment process is required during training; the conversion systems of multiple source-target speaker pairs can be integrated into one conversion model; model complexity is lowered; and multi-speaker to multi-speaker conversion is realized.
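The many-to-many property summarized above comes from conditioning a single generator on a target-speaker label, so one model covers every source-target pair. A minimal sketch of that conditioning step, with assumed dimensions (36 spectral features, 4 speakers) and a hypothetical helper name:

```python
import numpy as np

NUM_SPEAKERS = 4   # assumed number of speakers in the training set
FEAT_DIM = 36      # assumed per-frame spectral feature dimension

def condition_on_speaker(spectrum, target_id, num_speakers=NUM_SPEAKERS):
    """Concatenate a one-hot target-speaker label onto every frame,
    so a single generator can map any source voice to any target voice."""
    frames = spectrum.shape[0]
    label = np.zeros((frames, num_speakers))
    label[:, target_id] = 1.0
    return np.concatenate([spectrum, label], axis=1)

x = np.random.default_rng(1).standard_normal((100, FEAT_DIM))
g_in = condition_on_speaker(x, target_id=2)   # generator input for target speaker 2
```

Switching targets only changes the label vector, which is why one model replaces the quadratic number of pairwise converters that parallel-data methods require.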

Description

Technical Field

[0001] The invention relates to a many-to-many speaker conversion method, and in particular to a many-to-many speaker conversion method based on perceptual STARGAN.

Background Technique

[0002] Speech conversion is a research branch of speech signal processing, developed and extended on the basis of speech analysis, recognition, and synthesis. The goal of voice conversion is to change the voice personality of the source speaker so that it takes on the voice personality of the target speaker while retaining the semantic information; that is, after conversion, the voice of the source speaker sounds like the voice of the target speaker.

[0003] After years of research on voice conversion technology, many classic conversion methods have emerged. These include Gaussian Mixture Models (GMMs) and Neural Networks (NNs), including Restricted Boltzmann Machines (RBMs), Feed-forward NNs (FNNs), Recurrent NNs (RNNs), Convolutional NNs (CNNs), and Long Short-Term ...

Claims


Application Information

IPC(8): G10L21/013 G10L25/18 G10L25/24 G10L25/30 G10L13/04 G10L13/08 G10L15/06 G10L15/16
CPC: G10L21/013 G10L25/24 G10L25/18 G10L25/30 G10L15/063 G10L13/08 G10L15/16 G10L2021/0135 G10L13/00
Inventors: Li Yanping (李燕萍), Xu Dongxiang (徐东祥), Zhang Yan (张燕)
Owner NANJING UNIV OF POSTS & TELECOMM