
Personalized speech synthesis model construction method and device, speech synthesis method and device, and personalized speech synthesis model test method and device

A speech synthesis model construction technology, applicable to speech synthesis, speech analysis, speech recognition, etc. It addresses the problem that a multi-speaker model cannot synthesize speech for a specific speaker, achieving the effect of improved user experience.

Pending Publication Date: 2021-05-28
ALIBABA GRP HLDG LTD

AI Technical Summary

Problems solved by technology

[0004] Given any speaker in the training set, the multi-speaker Neural TTS model can be used to synthesize that speaker's voice; however, for a specific speaker who is not in the training set, the model is unable to synthesize speech in that particular speaker's style.



Examples


Embodiment 1

[0177] Assume that the training set data of the multiple speakers of the multi-speaker speech synthesis model includes speaker A, speaker B, speaker C, speaker D, and speaker E, whose IDs in the multi-speaker speech synthesis model are ID1, ID2, ID3, ID4, and ID5, respectively. The training data of these speakers are used to train the multi-speaker Neural TTS model to obtain a trained multi-speaker Neural TTS model.
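The per-speaker IDs above typically index a trainable embedding table that conditions the model on speaker identity. The following is a minimal, hypothetical sketch of such a lookup; the table size, dimensions, and random initialization are illustrative assumptions, not details from the patent:

```python
# Sketch: a speaker-ID embedding table, as commonly used to condition a
# multi-speaker Neural TTS model on speaker identity. Illustrative only.
import random

# Speaker letters mapped to the IDs used in the embodiment.
SPEAKER_IDS = {"A": "ID1", "B": "ID2", "C": "ID3", "D": "ID4", "E": "ID5"}

def make_embedding_table(speaker_ids, dim=8, seed=0):
    """One trainable vector per speaker ID (here: random initialization)."""
    rng = random.Random(seed)
    return {sid: [rng.gauss(0.0, 0.1) for _ in range(dim)]
            for sid in speaker_ids.values()}

table = make_embedding_table(SPEAKER_IDS)
# During training or synthesis, the model looks up the conditioning
# vector for the requested speaker by ID:
cond = table[SPEAKER_IDS["C"]]
```

In a real Neural TTS system these vectors would be learned jointly with the rest of the network rather than fixed at initialization.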

[0178] Suppose there is now a personalized speaker, speaker F. Referring to the flowchart shown in Figure 6A, speaker F's personalized data, that is, speech data and text, are respectively subjected to automatic labeling and speech data preprocessing, and the corresponding linguistic features and acoustic features are extracted.
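As an illustration of the speech data preprocessing step, the sketch below frames a synthetic waveform and computes a per-frame log-energy feature. This is only a stand-in for the acoustic features (e.g., mel spectrograms) a real TTS pipeline would extract; the sample rate, frame length, and hop size are assumptions, not values from the patent:

```python
# Sketch: framing a waveform and computing a simple per-frame acoustic
# feature (log energy), standing in for real mel-spectrogram extraction.
import math

def frame_signal(samples, frame_len=400, hop=160):
    """Split samples into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frames.append(samples[start:start + frame_len])
    return frames

def log_energy(frame, eps=1e-10):
    """Log of the frame's total energy; eps avoids log(0) on silence."""
    return math.log(sum(s * s for s in frame) + eps)

# One second of a 220 Hz tone at a 16 kHz sample rate, as toy input.
wave = [math.sin(2 * math.pi * 220 * n / 16000) for n in range(16000)]
features = [log_energy(f) for f in frame_signal(wave)]
```

A production pipeline would instead use an STFT followed by a mel filterbank, but the framing logic shown here is the same first step.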

[0179] Figure 6B shows the process of extracting linguistic features from text and speech. For example, the pronunciation annotations and prosodic annotations in the text are first extracted through the TTS front end, and...

Embodiment 2

[0185] Similar to Embodiment 1, the training set data of the multiple speakers of the multi-speaker speech synthesis model in Embodiment 2 includes speaker A, speaker B, speaker C, speaker D, and speaker E, whose IDs in the multi-speaker speech synthesis model are ID1, ID2, ID3, ID4, and ID5, respectively. The training data of these speakers are used to train the multi-speaker Neural TTS model to obtain a trained multi-speaker Neural TTS model.

[0186] Suppose there is now a personalized speaker, speaker F. Referring to the flowchart shown in Figure 7A, speaker F's personalized data, that is, speech data and text, are respectively subjected to automatic labeling and speech data preprocessing, and the corresponding linguistic features and acoustic features are extracted. For details of the extraction, refer to Figure 6B and Figure 6C.

[0187] The difference from Embodiment 1 is that corresponding vectors are calculated for each sent...
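The paragraph above is truncated, but one plausible reading of per-sentence vector computation is: derive one vector per sentence of speaker F's data, then average them into a single speaker-level vector. A toy sketch under that assumption (the 2-D vectors are invented for illustration):

```python
# Sketch: averaging per-sentence vectors into one speaker-level vector.
# This is an assumed interpretation of the truncated paragraph, not a
# confirmed description of the patent's method.
def mean_vector(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

# Toy 2-D embeddings, one per sentence of speaker F's personalized data.
sentence_vecs_F = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
speaker_vec_F = mean_vector(sentence_vecs_F)
```

Such a pooled vector could then be compared against the training-set speakers' vectors when selecting similar speakers.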



Abstract

The invention discloses a personalized speech synthesis model construction method and device, a speech synthesis method and device, and a personalized speech synthesis model test method and device. The construction method of the personalized speech synthesis model comprises the following steps: determining, from the training set data of a plurality of speakers of a multi-speaker speech synthesis model, training data similar to a user; selecting, from the plurality of speakers other than the speaker to which the similar training data belongs, similar speakers belonging to the same category as the user; and training the multi-speaker speech synthesis model according to the training data similar to the user and the selected similar speakers, to obtain a personalized speech synthesis model of the user. Speech in the user's specific speaking style can thus be synthesized, improving the user experience.
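The selection step in the abstract can be sketched as follows, under the assumption that each speaker is summarized by an embedding vector: rank the training-set speakers by cosine similarity to the user's vector and keep the top k, whose data would then be combined with the user's own data to fine-tune the multi-speaker model. All names, vectors, and the choice of cosine similarity are illustrative assumptions:

```python
# Sketch: selecting training-set speakers similar to the target user by
# cosine similarity between speaker-level embedding vectors. Illustrative
# stand-in for the patent's selection step, not its confirmed algorithm.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similar_speakers(user_vec, speaker_vecs, k=2):
    """Return the IDs of the k speakers most similar to the user."""
    ranked = sorted(speaker_vecs,
                    key=lambda sid: cosine(user_vec, speaker_vecs[sid]),
                    reverse=True)
    return ranked[:k]

# Toy 2-D speaker vectors keyed by the IDs used in the embodiments.
speakers = {"ID1": [1.0, 0.0], "ID2": [0.9, 0.1], "ID3": [0.0, 1.0]}
user = [1.0, 0.05]
picked = similar_speakers(user, speakers)  # the two closest speakers
```

The fine-tuning itself would then proceed as ordinary continued training of the multi-speaker Neural TTS model on the selected speakers' data plus the user's personalized data.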

Description

technical field

[0001] The invention relates to the technical field of artificial intelligence, and in particular to a personalized speech synthesis model construction method, a speech synthesis method, a test method, and corresponding devices.

Background technique

[0002] Voice interaction scenarios in artificial intelligence technology require personalized speech synthesis. Personalized speech synthesis is a strong business demand and one of the future trends in the field of speech synthesis.

[0003] In traditional speech synthesis technology, a multi-speaker speech synthesis system can be constructed from massive data, such as hundreds of hours of training data from hundreds of speakers. Specifically, a multi-speaker speech synthesis model can be used, for example a neural-network-based text-to-speech (Neural TTS) model; in the training data of such a model, the speech data of a single speaker often ranges from a few hours to doz...

Claims


Application Information

IPC(8): G10L13/02, G10L13/047, G10L13/10, G10L15/05
CPC: G10L13/02, G10L13/047, G10L13/10, G10L15/05
Inventor: 黄智颖, 雷鸣
Owner ALIBABA GRP HLDG LTD