
Three-dimensional face modeling method based on dual-branch network

A three-dimensional face modeling technology based on a dual-branch network, which addresses the problems that facial texture map data for training are limited in quantity, difficult to obtain, and costly, all of which reduce the accuracy of three-dimensional face reconstruction.

Active Publication Date: 2021-01-29
WUHAN UNIV

AI Technical Summary

Problems solved by technology

However, the amount of data available for training facial texture maps is very limited, and such data are difficult to obtain and expensive, which restricts the range of applications of this approach.
[0005] To sum up, existing methods either use a neural network to estimate the parameters of a linear 3DMM, whose expressive ability is limited, or use an unconstrained three-dimensional face representation without specifically handling the attributes of facial expression. The latter causes prediction errors under large expressions and thereby reduces the final 3D face reconstruction accuracy.
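
For context, the linear 3DMM referred to above represents any face shape as a mean shape plus linear combinations of fixed identity and expression bases, with a network regressing the low-dimensional coefficients; the fixed linear bases are what limits its expressive ability. The notation below is the generic one from the 3DMM literature, not the patent's own:

```latex
% Generic linear 3DMM shape model (standard notation, not the patent's):
% \bar{S} is the mean face shape, A_{id} and A_{exp} are fixed identity and
% expression basis matrices, and \alpha, \beta are low-dimensional
% coefficients regressed from the input image by a neural network.
S = \bar{S} + A_{\mathrm{id}}\,\alpha + A_{\mathrm{exp}}\,\beta
```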




Detailed Description of the Embodiments

[0075] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

[0076] Figure 1 is a flow chart of the present invention. After preprocessing, the input face image enters the dual-branch network. For the facial shape reconstruction branch, a shape encoder first extracts convolutional features from the input face image to obtain its hidden encoding feature vector; then an identity space map decoder, an expression space map decoder and a parameter decoder are constructed. The identity space map decoder and the expression space map decoder are used to predict the identity space map and the expression space map respectively...
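
As an illustration of the architecture just described, the following is a minimal PyTorch-style sketch of the shape reconstruction branch. The layer sizes, the vertex count, and the use of plain MLP decoders in place of the patent's space map decoders are assumptions made for readability, not the patent's actual design:

```python
# Illustrative sketch only: encoder plus identity/expression/parameter decoders
# in the spirit of paragraph [0076]. All names and sizes are assumptions.
import torch
import torch.nn as nn

class ShapeBranch(nn.Module):
    def __init__(self, latent_dim=256, num_vertices=35709):
        super().__init__()
        self.num_vertices = num_vertices
        # Shape encoder: convolutional features -> hidden encoding feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Identity-space and expression-space decoders (simplified to MLPs here)
        self.identity_decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )
        self.expression_decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )
        # Parameter decoder, e.g. for pose/camera parameters (output size is a guess)
        self.param_decoder = nn.Linear(latent_dim, 12)

    def forward(self, image):
        z = self.encoder(image)  # hidden encoding feature vector
        identity_map = self.identity_decoder(z).view(-1, self.num_vertices, 3)
        expression_map = self.expression_decoder(z).view(-1, self.num_vertices, 3)
        params = self.param_decoder(z)
        # Compose identity and expression components into the face shape
        shape = identity_map + expression_map
        return shape, identity_map, expression_map, params
```

A batch of preprocessed face images of shape (B, 3, H, W) yields per-vertex identity and expression components, which are summed to form the reconstructed face shape, alongside the auxiliary parameters from the parameter decoder.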



Abstract

The invention discloses a three-dimensional face modeling method based on a dual-branch network, comprising the following steps: 1) constructing a face shape reconstruction branch based on an auto-encoder structure and obtaining an identity space map and an expression space map; 2) expressing an identity attribute component and an expression attribute component with the identity space map and the expression space map respectively; 3) constructing a facial texture reconstruction branch based on semantic regions to obtain a facial texture map with high fidelity; and 4) constructing a fusion module that combines the reconstructed face shape model and the reconstructed face texture map according to a predefined three-dimensional vertex topological relation to obtain the final reconstructed three-dimensional face model. By decoupling the learning of the identity and expression attributes of the human face and processing the face shape and the face texture with the dual-branch network, the invention achieves accurate three-dimensional face reconstruction under large expressions.
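
As a rough illustration of step 4 of the abstract, the sketch below fuses a reconstructed shape with a reconstructed UV-space texture map through a fixed per-vertex UV-coordinate table standing in for the predefined three-dimensional vertex topological relation. The nearest-neighbour colour lookup is an illustrative simplification, not the patent's exact procedure:

```python
# Illustrative fusion-module sketch: attach the reconstructed texture map to the
# reconstructed shape via a predefined per-vertex UV table (an assumption here).
import numpy as np

def fuse_shape_and_texture(vertices, texture_map, vertex_uv):
    """
    vertices:    (N, 3) reconstructed face shape (identity + expression components)
    texture_map: (H, W, 3) reconstructed facial texture map in UV space
    vertex_uv:   (N, 2) predefined UV coordinates per vertex, values in [0, 1]
    returns:     (N, 6) per-vertex position and colour of the fused 3D face model
    """
    h, w, _ = texture_map.shape
    cols = np.clip(np.round(vertex_uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.round(vertex_uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    vertex_colors = texture_map[rows, cols]  # sample a colour for each vertex
    return np.concatenate([vertices, vertex_colors], axis=1)
```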

Description

Technical Field

[0001] The invention belongs to the fields of computer vision and computer graphics, and in particular relates to a three-dimensional face modeling method based on a dual-branch network.

Background Art

[0002] Monocular face reconstruction aims to recover the corresponding 3D face model from a single face image. Because of its wide practical applications, such as face alignment, face editing, and virtual reality, a great deal of research has been devoted to this field in recent years.

[0003] However, reconstructing accurate face geometry and recovering a realistic facial texture map from a single image is very challenging. One unavoidable challenge is the diversity of facial expressions, an innate human property. The diversity and ambiguity of facial expressions have become key problems that must be solved in the process of 3D face reconstruction.

[0004] In order to improve the effect of face reconstruction, many methods have...


Application Information

IPC (8): G06T17/00, G06T15/04, G06K9/00, G06N3/04, G06N3/08
CPC: G06T17/00, G06T15/04, G06N3/08, G06V40/168, G06N3/045
Inventor: 陈军, 柴笑宇, 梁超, 徐东曙, 孙志宏, 李希希, 邱焰升, 姚红豆
Owner: WUHAN UNIV