
A Multi-RGB-D Full Face Material Restoration Method Based on Deep Learning

A deep-learning-based restoration method, applied in the field of 3D face reconstruction, addresses problems such as the lack of standardization of data sets and texture data, the inability of existing methods to cover the sides and back of the head, and the difficulty of restoring material from texture images; it expands the usable data range, improves the optimization quality, and has strong practicality.

Active Publication Date: 2022-04-29
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

With only a single RGB image as input, only the geometry and material of the frontal face can be reconstructed; the sides and back of the head cannot be covered.
Moreover, current reconstruction methods that take multiple RGB-D frames as input still struggle to restore material from the mapped texture image.
Algorithms for image processing and material restoration over the full face range remain scarce, and there is no effective standardization of data sets and texture data.

Method used



Examples


Embodiment 1

[0077] The inventors tested the effectiveness of the differentiable rendering optimization module of step 2 on a simulated data set. As shown in Figure 8, panel (A) is the original image, panel (B) the result at iteration 0, panel (C) the result at iteration 10, and panel (D) the result at iteration 150. As the number of iterations increases, the optimized material data approaches the ground truth more closely than the initial result obtained with the material estimation module alone.
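The iterative behavior described above can be mimicked with a minimal gradient-descent sketch. The patent optimizes full texture maps through a full-face rendering equation; the single-texel model, shading value, and learning rate below are purely illustrative.

```python
# Toy differentiable-rendering refinement (cf. Embodiment 1): for one texel
# with known shading s and observed pixel value `target`, refine the albedo
# estimate a by gradient descent on the rendering loss (a*s - target)^2.
s = 0.8                          # assumed known shading at the texel
true_albedo = 0.6
target = true_albedo * s         # "captured" pixel value

a = 0.2                          # coarse initial estimate (stand-in for the
                                 # material estimation module's output)
lr = 0.5
errs = []
for it in range(151):
    if it in (0, 10, 150):       # sample the error at the iterations of Fig. 8
        errs.append(abs(a - true_albedo))
    grad = 2 * s * (a * s - target)   # analytic d(loss)/da
    a -= lr * grad

# The recorded errors shrink monotonically: the optimized albedo moves from
# the coarse initial estimate toward the ground-truth value.
```

The same structure scales to real texture maps when the renderer is differentiable: the gradient of the image loss with respect to every texel is obtained by backpropagation instead of the hand-derived expression used here.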

Embodiment 2

[0079] The inventors tested, on the simulated data set, the effectiveness of the improved loss function in the differentiable rendering optimization module of step 2. Figure 9 shows one group of test samples: panel (A) is the input image, panel (B) the rendering result before the improvement, panel (C) the rendering result after the improvement, panel (D) the ground-truth albedo map, panel (E) the albedo result before the improvement, and panel (F) the albedo result after the improvement. Before the loss function is improved, the rendering error is small but the error of the restored texture is large; after the improvement, the rendering error is almost unchanged while the restoration of the albedo texture is significantly better.
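The symptom described here (good rendering, bad albedo) is the classic albedo/illumination ambiguity: scaling albedo up and shading down renders identically. A loss term that constrains the albedo itself breaks the tie. The patent does not disclose its exact improved loss; the reference value and weight below are hypothetical placeholders for whatever prior it uses.

```python
# Sketch of why a rendering term alone is insufficient (cf. Embodiment 2).
target = 0.48                    # observed pixel value
albedo_ref, w = 0.6, 1.0         # hypothetical skin-albedo prior and weight

def loss_plain(a, s):
    """Rendering loss only: cannot distinguish factorizations of a*s."""
    return (a * s - target) ** 2

def loss_improved(a, s):
    """Rendering loss plus a prior pulling albedo toward a reference."""
    return (a * s - target) ** 2 + w * (a - albedo_ref) ** 2

good = (0.6, 0.8)                # correct albedo/shading factorization
bad = (0.96, 0.5)                # renders identically, but albedo is wrong

# Both pairs make loss_plain (near) zero, so the plain loss cannot prefer
# the correct albedo; loss_improved penalizes the wrong factorization.
```

This mirrors the reported result: the rendering error stays essentially unchanged for the correct solution, while solutions with a small rendering error but a wrong albedo are now penalized.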

Embodiment 3

[0081] The inventors tested the effectiveness of the method on real samples. Figure 10 compares material restoration results on a real capture: panel (A) shows the collected photographs, panel (B) the texture image synthesized by the capture device, panel (C) the optimized rendering result, and panel (D) the result composited back into the original image for comparison. The method restores the full-face texture range, including the ears and neck, and the restored material data has high fidelity.



Abstract

The invention discloses a multi-RGB-D full-face material restoration method based on deep learning, comprising two steps: image-based estimation of face material information, and gradient optimization based on differentiable rendering. Step 1 first preprocesses the geometry and texture data to generate a mask covering the skin region of the whole face; then builds the texture estimation and illumination estimation modules and generates a simulated training data set; finally, the material-texture and illumination estimation modules, combined with the simulated training data set, produce initial values for the texture information and illumination coefficients. Step 2 first processes the scanned geometric data and extends it to realize a full-face rendering equation; then improves the loss function to obtain the optimization result; finally, details are further optimized for special regions. The invention expands the data range of face material recovery technology and improves the optimization quality of material restoration.
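The abstract does not detail how the full-face skin mask of step 1 is generated. As a toy illustration of the idea only (selecting plausible skin texels over the whole texture), one could threshold texel brightness; the function name `skin_mask` and the thresholds are hypothetical, not from the patent.

```python
import numpy as np

def skin_mask(texture, lo=0.25, hi=0.95):
    """Toy full-face skin mask: keep texels whose mean brightness falls in a
    plausible skin range, rejecting dark background/hair and blown-out
    highlights. Thresholds are illustrative placeholders."""
    brightness = texture.mean(axis=-1)          # per-texel mean over RGB
    return (brightness > lo) & (brightness < hi)

# Tiny 2x2 RGB texture: a skin-like texel, a dark (hair-like) texel,
# a saturated highlight, and a mid-tone skin texel.
tex = np.array([[[0.8, 0.6, 0.5], [0.05, 0.05, 0.05]],
                [[1.0, 1.0, 1.0], [0.5, 0.4, 0.35]]])
mask = skin_mask(tex)   # True only for the two skin-like texels
```

In practice such a mask would more likely come from a parametric UV layout or a segmentation network, but the downstream role is the same: restricting the material estimation and optimization to valid skin texels.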

Description

Technical Field

[0001] The present invention relates to the field of three-dimensional face reconstruction, and in particular to a multi-RGB-D full-face material restoration method based on deep learning.

[0002] Technical Background

[0003] With the rapid development of smartphone entertainment applications, face applications benefit from obtaining geometric and texture information through 3D face reconstruction. A 3D face reconstruction pipeline generally comprises three modules: face geometry reconstruction, face texture mapping, and texture material restoration. Current 3D face reconstruction techniques can reconstruct geometry and texture from one or more input RGB images, and can obtain more refined geometry and texture mapping results from RGB-D input.

[0004] However, the algorithms implemented so far still have deficiencies. Only a single ...
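A common image-formation model behind such texture material restoration is a Lambertian surface lit by low-order spherical harmonics, with the image factored as albedo times shading. The excerpt does not name the patent's exact rendering equation, so the following is a generic sketch under that assumption; the first-order SH basis and all coefficient values are illustrative.

```python
import numpy as np

def sh_shading(normals, sh_coeffs):
    """First-order spherical-harmonics shading: one ambient term plus three
    directional terms, evaluated per vertex/texel normal."""
    n = np.asarray(normals, dtype=float)
    basis = np.concatenate([np.ones((len(n), 1)), n], axis=1)  # [1, nx, ny, nz]
    return basis @ sh_coeffs

def render(albedo, normals, sh_coeffs):
    """Lambertian image formation: pixel = albedo * shading."""
    return albedo * sh_shading(normals, sh_coeffs)[:, None]

normals = np.array([[0.0, 0.0, 1.0],    # facing the camera
                    [0.0, 1.0, 0.0]])   # facing up
sh = np.array([0.5, 0.0, 0.2, 0.3])     # ambient + directional coefficients
albedo = np.array([[0.8, 0.6, 0.5],
                   [0.8, 0.6, 0.5]])    # identical skin albedo at both texels
img = render(albedo, normals, sh)       # same albedo, different shading
```

Under this model, material restoration is the inverse problem: given `img` and geometry (`normals`), recover `albedo` and `sh` — which is exactly where the estimation modules and differentiable-rendering optimization of the claimed method come in.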

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T17/00, G06T15/00
CPC: G06T17/00, G06T15/005, G06T2200/04
Inventors: Ren Zhong (任重), Yu Hang (於航), Weng Yanlin (翁彦琳), Zhou Kun (周昆)
Owner: ZHEJIANG UNIV