Method and device for synchronizing portrait mouth shape and audio, and storage medium

A technology for synchronizing a portrait with audio, applied in the field of audio and video synthesis. It addresses the problem that existing approaches cannot balance the character's visual quality against production difficulty, and achieves the effects of reducing production difficulty and implementation cost while preserving a good mouth-shape image effect.

Pending Publication Date: 2022-07-22
北京有限元科技有限公司

AI Technical Summary

Problems solved by technology

[0005] Embodiments of the present disclosure provide a method, a device, and a storage medium for synchronizing a portrait's mouth shape with audio, so as to at least solve the prior-art problem that, when generating a virtual portrait, the character's visual quality and the production difficulty cannot both be taken into account.

Examples

Embodiment 1

[0021] This embodiment also provides a method for synchronizing a portrait's mouth shape with audio. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that, although the flowcharts show a logical order, in some cases the steps shown or described may be performed in an order different from the one given here.

[0022] The method embodiments provided here may be executed on a server or a similar computing device. Figure 1 shows a block diagram of the hardware structure of a computing device for implementing a method for synchronizing a portrait's mouth shape with audio. As shown in Figure 1, the computing device may include one or more processors (including, but not limited to, processing means such as a microprocessor (MCU) or a programmable logic device (FPGA)), memory for storing data, and memory for com...

Embodiment 2

[0065] Figure 5 shows an apparatus 500 for synchronizing a portrait's mouth shape with audio according to this embodiment; the apparatus 500 corresponds to the method according to the first aspect of Embodiment 1. Referring to Figure 5, the apparatus 500 includes: a pronunciation determining module 510, configured to determine a plurality of pronunciations contained in the target audio and the time nodes at which those pronunciations are emitted in the target audio; a mouth image determining module 520, configured to acquire, from a preset resource library, a plurality of pronunciation mouth-shape images corresponding to the plurality of pronunciations, wherein the resource library is used to store the pronunciation mouth-shape images; and a synchronous rendering module 530, configured to render the plurality of pronunciation mouth-shape images into the lip region of a preset portrait video according to the time nodes, synchronized with the target audio.
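
Purely as an illustration of how the three modules named in this embodiment could be organized in code, the following Python sketch mirrors the roles of modules 510, 520 and 530. Every name in it (TimedPronunciation, determine, lookup, render, the dict-based resource library) is a hypothetical choice made for this sketch and is not specified by the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class TimedPronunciation:
    """A pronunciation unit and the time node (in seconds) at which it is emitted."""
    pronunciation: str
    time_node: float


class PronunciationDeterminingModule:
    """Role of module 510: determine the pronunciations contained in the target audio
    and their time nodes (placeholder; a real system would analyse the audio)."""

    def determine(self, target_audio: bytes) -> List[TimedPronunciation]:
        raise NotImplementedError


class MouthImageDeterminingModule:
    """Role of module 520: look up one mouth-shape image per pronunciation in a
    preset resource library (modelled here as a dict from label to image)."""

    def __init__(self, resource_library: Dict[str, object]):
        self.resource_library = resource_library

    def lookup(self, timed: List[TimedPronunciation]) -> List[Tuple[float, object]]:
        return [(t.time_node, self.resource_library[t.pronunciation]) for t in timed]


class SynchronousRenderingModule:
    """Role of module 530: render each mouth image into the lip region of the preset
    portrait video at its time node, keeping the result in sync with the target audio."""

    def render(self, timed_images: List[Tuple[float, object]],
               portrait_video: object, target_audio: bytes) -> object:
        raise NotImplementedError
```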

[0066] Optionally, the mouth shape image determinati...

Embodiment 3

[0073] Figure 6 shows an apparatus 600 for synchronizing a portrait's mouth shape with audio according to this embodiment; the apparatus 600 corresponds to the method according to the first aspect of Embodiment 1. Referring to Figure 6, the apparatus 600 includes: a processor 610; and a memory 620, connected to the processor 610, for providing the processor 610 with instructions for the following processing steps: determining a plurality of pronunciations contained in the target audio and the time nodes at which those pronunciations are emitted in the target audio; acquiring, from a preset resource library, a plurality of pronunciation mouth-shape images corresponding to the plurality of pronunciations, wherein the resource library is used to store the pronunciation mouth-shape images; and rendering the plurality of pronunciation mouth-shape images into the lip region of a preset portrait video according to the time nodes, synchronized with the target audio.
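
The following sketch illustrates, under stated assumptions rather than as the patented implementation, how the final processing step might paste a pronunciation mouth-shape image into the lip region of the frame addressed by its time node. It assumes the preset portrait video is already decoded into numpy frames, the lip region is a fixed rectangle, and each mouth image already matches that rectangle's size; the function and variable names are hypothetical.

```python
import numpy as np


def composite_mouth_images(frames, fps, timed_mouth_images, lip_region):
    """Paste each pronunciation mouth-shape image into the lip region of the frame
    whose timestamp corresponds to its time node.

    frames:             list of HxWx3 uint8 numpy arrays (the preset portrait video)
    fps:                frame rate of that video
    timed_mouth_images: iterable of (time_node_in_seconds, mouth_image) pairs, where
                        each mouth_image already matches the lip-region size (assumption)
    lip_region:         (top, left, height, width) of the lip area in every frame
    """
    top, left, h, w = lip_region
    for time_node, mouth_img in timed_mouth_images:
        idx = int(round(time_node * fps))        # map the time node to a frame index
        if 0 <= idx < len(frames):
            frames[idx][top:top + h, left:left + w] = mouth_img
    return frames


if __name__ == "__main__":
    # Hypothetical dummy data, just to show the call shape.
    frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(50)]
    mouth = np.full((60, 90, 3), 255, dtype=np.uint8)   # stand-in mouth-shape image
    composite_mouth_images(frames, fps=25,
                           timed_mouth_images=[(0.4, mouth)],
                           lip_region=(300, 275, 60, 90))
```

Keeping the result in sync with the target audio would then amount to muxing the modified frames, at the same frame rate, with the original audio track (for example with an external tool such as ffmpeg); the patent does not prescribe any particular muxing mechanism.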

[0074] Optionally, obtain multiple pronunciation mouth shape images corresponding to multiple pronunciations from the preset resour...

Abstract

The invention discloses a method, a device, and a storage medium for synchronizing a portrait's mouth shape with audio. The method comprises the following steps: determining a plurality of pronunciations contained in a target audio and the time nodes at which the pronunciations are emitted in the target audio; acquiring a plurality of pronunciation mouth-shape images corresponding to the plurality of pronunciations from a preset resource library, wherein the resource library is used for storing the pronunciation mouth-shape images; and rendering the plurality of pronunciation mouth-shape images into the lip region of a preset portrait video according to the time nodes, synchronized with the target audio.
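
The abstract does not say how the pronunciations and their time nodes are determined. As a hedged sketch only, the snippet below assumes that some external phoneme or forced-alignment step has already produced (label, start, end) intervals for the target audio and merely reduces them to the (pronunciation, time node) pairs the method operates on; both the function name and the interval format are assumptions of this sketch.

```python
from typing import Iterable, List, Tuple


def to_time_nodes(aligned_intervals: Iterable[Tuple[str, float, float]]) -> List[Tuple[str, float]]:
    """Reduce (pronunciation_label, start_sec, end_sec) intervals, e.g. from an
    external phoneme/forced-alignment step, to (pronunciation, time_node) pairs,
    taking each interval's start as the time node at which that pronunciation
    is emitted in the target audio."""
    return [(label, start) for (label, start, _end) in aligned_intervals]


# Hypothetical example: two syllables aligned at 0.00 s and 0.18 s.
print(to_time_nodes([("ni", 0.00, 0.18), ("hao", 0.18, 0.42)]))
# -> [('ni', 0.0), ('hao', 0.18)]
```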

Description

Technical Field

[0001] The present application relates to the technical field of audio and video synthesis, and in particular to a method, a device, and a storage medium for synchronizing a portrait's mouth shape with audio.

Background

[0002] Virtual portraits are currently widely used in scenarios such as video games, social entertainment, business marketing, daily life, and smart cities; demand for interactive applications such as virtual anchors and virtual customer service is particularly common.

[0003] Current implementations of interactive virtual portrait applications fall broadly into two categories. The first presets several simple videos (speaking, smiling, waiting, and so on) and switches among the preset videos according to the scene flow. The second combines deep-learning neural networks with computer graphics, enabling the computer to understand the speech content and finely drive the lip movements, f...


Application Information

IPC (8): H04N5/262; H04N21/43; H04N21/44
CPC: H04N5/262; H04N21/44012; H04N21/4307
Inventor: 张磊, 井绪海, 夏溧, 吴海英, 王洪斌, 蒋宁
Owner: 北京有限元科技有限公司