
Face visible region analysis and segmentation method, face makeup method and mobile terminal

A face region analysis technology, applied in the fields of face makeup, mobile terminals and computer-readable storage media. It addresses problems such as high time consumption, difficulty in achieving real-time performance, and insufficient robustness to occlusion, with the effects of improved computing efficiency, stable prediction results and improved real-time performance.

Pending Publication Date: 2021-04-06
XIAMEN MEITUZHIJIA TECH
Cites: 0 | Cited by: 1

AI Technical Summary

Problems solved by technology

[0003] However, existing face parsing methods mainly have the following disadvantages. First, they are time-consuming and almost all rely on powerful GPUs such as the Nvidia 1080 Ti or Nvidia Titan; they are not developed for mobile terminals (running iOS or Android, where hardware capabilities, especially on Android, are weaker and real-time performance is harder to achieve). Second, they focus too heavily on academic datasets and are not robust enough to the various occlusion situations that can occur in practice.



Examples


Example 1

[0049] First embodiment (face segmentation method)

[0050] As shown in Figure 1 and Figure 2, this embodiment provides a face visible region analysis and segmentation method, which includes the following steps:

[0051] A. Obtain the face region image of a sample image to obtain an initial image;

[0052] B. Randomly select one or more occluders from a material library, and add the occluder(s) to a random region of the face region image to obtain an occluded image;

[0053] C. Use the initial image and the occluded image to form an image pair (such as Figure 3-a and Figure 3-b), and train on the image pairs with a U-Net network to obtain a face parsing model;

[0054] D. Use the face parsing model to predict the image to be processed (as shown in Figure 2) to obtain a segmentation result map of the image to be processed (such as Figure 4-a and Figure 4-b).
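The excerpt does not include implementation details for steps A-D; the following is a minimal sketch under stated assumptions: PyTorch and Pillow, RGBA occluder PNGs in a hypothetical materials/ directory (the material library of step B), a single-output-channel U-Net passed in as model, and a ground-truth visible-face mask face_mask for each clean crop. It only illustrates the paired clean/occluded training of step C and the prediction of step D; it is not the patent's actual implementation.

```python
# Illustrative sketch only; assumptions noted above, not the patent's implementation.
import random
from pathlib import Path

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

OCCLUDERS = list(Path("materials").glob("*.png"))    # hypothetical material library (step B)

def to_tensor(img):
    """RGB PIL image -> 3xHxW float tensor in [0, 1]."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(arr).permute(2, 0, 1)

def add_random_occluder(face_img):
    """Step B: paste a randomly chosen occluder at a random position on the face crop;
    also return the occluder's coverage mask so the occluded target can be derived."""
    occ = Image.open(random.choice(OCCLUDERS)).convert("RGBA")
    scale = random.uniform(0.2, 0.6)                  # assumed occluder size range
    occ = occ.resize((max(1, int(face_img.width * scale)),
                      max(1, int(face_img.height * scale))))
    x = random.randint(0, face_img.width - occ.width)
    y = random.randint(0, face_img.height - occ.height)
    occluded = face_img.copy()
    occluded.paste(occ, (x, y), occ)                  # alpha channel acts as the paste mask
    cover = Image.new("L", face_img.size, 0)
    cover.paste(occ.getchannel("A"), (x, y))
    return occluded, cover

def train_step(model, optimizer, face_img, face_mask):
    """Step C: one training step on an (initial, occluded) image pair with a shared U-Net.
    face_mask: 1xHxW float tensor, 1 = visible face pixel in the clean crop."""
    occluded_img, cover = add_random_occluder(face_img)
    cover_t = torch.from_numpy(np.asarray(cover, dtype=np.float32) / 255.0)[None]
    inputs = torch.stack([to_tensor(face_img), to_tensor(occluded_img)])
    targets = torch.stack([face_mask, face_mask * (1.0 - cover_t)])   # occluder hides the face
    logits = model(inputs)                                            # U-Net forward pass
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(model, image):
    """Step D: predict the visible-region map of an image to be processed."""
    prob = torch.sigmoid(model(to_tensor(image)[None]))
    return prob[0, 0].cpu().numpy()                   # grayscale confidence map in [0, 1]
```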

[0055] In this embodiment, the segmentation r...

Example 2

[0073] Second embodiment (face makeup method)

[0074] The accuracy of face parsing is very important for face-centered analysis, which can be applied in facial expression analysis, virtual reality, makeup special effects and other fields. On the basis of the face visible region analysis and segmentation method, this embodiment also provides a face makeup method, which includes the following steps:

[0075] Obtain the segmentation result map (as shown in Figure 10) of the image to be processed (as shown in Figure 8);

[0076] Superimpose the segmentation result map and the image to be processed to obtain the region to which makeup is to be applied (as shown in Figure 11);

[0077] Superimpose the makeup material map (as shown in Figure 9) and the region to which makeup is to be applied to obtain a makeup effect map (as shown in Figure 12).

[0078] Wherein, the segmentation result map is a grayscale image, the value of each pixel lies in the range 0 to 1, and 1 represents the confidence of no...
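The excerpt describes the superimposition only at this level of detail; below is a minimal sketch of one way the grayscale segmentation result map could gate the blending of the makeup material map into the image to be processed, assuming NumPy float arrays and a simple per-pixel alpha composite (the blending formula and the strength parameter are assumptions, not from the patent).

```python
import numpy as np

def apply_makeup(image, seg_map, makeup_material, strength=1.0):
    """Blend a makeup material map into an image, gated by the segmentation result map.

    image           : HxWx3 float array in [0, 1], image to be processed (Figure 8)
    seg_map         : HxW   float array in [0, 1], visible-face confidence (Figure 10)
    makeup_material : HxWx3 float array in [0, 1], makeup material map (Figure 9)
    strength        : hypothetical global opacity for the makeup layer
    """
    mask = np.clip(seg_map, 0.0, 1.0)[..., None] * strength   # region to be made up (Figure 11)
    effect = image * (1.0 - mask) + makeup_material * mask    # makeup effect map (Figure 12)
    return np.clip(effect, 0.0, 1.0)
```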

Example 3

[0080] Third embodiment (mobile terminal with an image processing function or virtual makeup function)

[0081] This embodiment also provides a mobile terminal. The mobile terminal includes a memory, a processor, and an image processing program stored in the memory and executable on the processor. When the image processing program is executed by the processor, the steps of the face visible region analysis and segmentation method described in any one of the above and/or the steps of the face makeup method described above are implemented.

[0082] The mobile terminal includes: a mobile terminal with a camera function, such as a mobile phone, a digital camera or a tablet computer; a mobile terminal with an image processing function; or a mobile terminal with an image display function. The mobile terminal may include components such as a memory, a processor, an input unit, a display unit and a power supply.

[0083] Wherein, the memory can be used to store soft...
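The excerpt stresses real-time operation on mobile hardware but does not name an inference runtime. As one illustrative assumption only, the trained face parsing model could be exported to a lightweight on-device format such as TensorFlow Lite; the model file name and preprocessing below are hypothetical.

```python
# Hypothetical on-device inference sketch; the patent does not specify a runtime or model format.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="face_parsing_unet.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def segment_on_device(rgb_frame: np.ndarray) -> np.ndarray:
    """Run the face parsing model on a camera frame; returns an HxW map in [0, 1]."""
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    x = tf.image.resize(rgb_frame[None].astype(np.float32) / 255.0, (h, w))
    interpreter.set_tensor(inp["index"], x.numpy())
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0, ..., 0]
```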


Abstract

The invention discloses a face visible region analysis and segmentation method, a face makeup method and a mobile terminal. The method comprises the steps of: obtaining a face region image of a sample image to obtain an initial image; randomly selecting one or more occluders from a material library and adding the occluder(s) to a random region of the face region image to obtain an occluded image; forming an image pair from the initial image and the occluded image, and training on the image pairs with a U-Net network to obtain a face parsing model; and predicting an image to be processed with the face parsing model to obtain a segmentation result map of the image to be processed. The invention can be applied to scenarios such as applying makeup to the image to be processed according to the prediction result; it greatly improves the real-time performance and robustness of face parsing and segmentation, and has a wider range of application.

Description

Technical field

[0001] The present invention relates to the technical field of image processing, and in particular to a face visible region analysis and segmentation method, a face makeup method applying the method, a mobile terminal and a computer-readable storage medium.

Background technique

[0002] Face parsing (face analysis) decomposes an image of the human head, including the facial features, into parsing results for each facial region, including but not limited to the following 17 semantic regions: background, face, skin, left/right eyebrow, left/right eye, nose, upper lip/inside of mouth/lower lip, left/right ear, neck, glasses and sunglasses; different parts are marked with different colors.

[0003] However, existing face parsing methods mainly have the following disadvantages. First, they are time-consuming and almost all rely on powerful GPUs such as the Nvidia 1080 Ti or Nvidia Titan; they are not developed for mobile terminals (running iOS or Android, where hardware capabilities, especially on Android, are weaker and real-time performance is harder to achieve). Second, they focus too heavily on academic datasets and are not robust enough to the various occlusion situations that can occur in practice.
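For reference, the semantic regions named in paragraph [0002] can be written down as a label enumeration. The identifier names and integer indices below are assumptions for illustration; the excerpt names the regions but does not assign numeric labels (and the translated list resolves to the names below, even though the text counts 17 regions).

```python
from enum import IntEnum

class FaceRegion(IntEnum):
    """Semantic regions listed in [0002]; index values are illustrative assumptions."""
    BACKGROUND = 0
    FACE = 1
    SKIN = 2
    LEFT_EYEBROW = 3
    RIGHT_EYEBROW = 4
    LEFT_EYE = 5
    RIGHT_EYE = 6
    NOSE = 7
    UPPER_LIP = 8
    INSIDE_OF_MOUTH = 9
    LOWER_LIP = 10
    LEFT_EAR = 11
    RIGHT_EAR = 12
    NECK = 13
    GLASSES = 14
    SUNGLASSES = 15
```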


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06T7/11; G06T11/60
CPC: G06T7/11; G06T11/60; G06T2207/20132; G06T2207/20081; G06T2207/20084; G06T2207/30201; G06V40/171
Inventors: 林煜, 苏灿平
Owner: XIAMEN MEITUZHIJIA TECH