
Face detection model training method, face key point detection method and device

A face key point and face detection technology applied in the field of data processing, which addresses the problem of inaccurate face key point detection and achieves the effects of enriching application scenarios, improving model performance, and improving detection accuracy.

Active Publication Date: 2019-03-22
BIGO TECH PTE LTD

AI Technical Summary

Problems solved by technology

[0005] Embodiments of the present invention provide a face detection model training method, a face key point detection method, a face detection model training device, a face key point detection device, equipment, and a storage medium, in order to solve the problem of inaccurate face key point detection in existing face key point detection methods and thereby improve the accuracy of face key point detection.



Examples


Embodiment 1

[0045] Figure 1 is a flowchart of a face detection model training method provided by Embodiment 1 of the present invention. This embodiment is applicable to situations in which a face detection model is trained to generate a UV coordinate map containing three-dimensional coordinates. The method may be performed by a training device for a face detection model, which can be implemented in software and/or hardware and integrated in the device that performs the method. Specifically, as shown in Figure 1, the method may include the following steps:

[0046] S101. Acquire a training face image.

[0047] Specifically, the training face image may be a two-dimensional image containing a human face, and the two-dimensional image may be stored in a format such as BMP, JPG, PNG, or TIF. Among these, BMP (Bitmap) is a standard image file format in the Windows operating system; BMP uses a bitmap storage format, and the image depth of a BMP file...
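As a concrete illustration of S101, the sketch below loads a training face image in any of the formats mentioned above and converts it to a fixed-size RGB array. The file path, target resolution, normalization, and the use of Pillow and NumPy are assumptions made for illustration; the patent does not prescribe a particular loading pipeline.

```python
# Minimal sketch of S101: acquiring a training face image (assumed tooling: Pillow + NumPy).
# The path, target size, and normalization are illustrative assumptions, not part of the patent.
import numpy as np
from PIL import Image

def load_training_face_image(path: str, size: int = 256) -> np.ndarray:
    """Load a BMP/JPG/PNG/TIF face image and return an HxWx3 float32 RGB array in [0, 1]."""
    img = Image.open(path).convert("RGB")            # decode any supported 2D image format
    img = img.resize((size, size), Image.BILINEAR)   # fixed network input resolution (assumed)
    return np.asarray(img, dtype=np.float32) / 255.0

# Usage (hypothetical file name):
# face = load_training_face_image("train_face_0001.jpg")
# print(face.shape)  # (256, 256, 3)
```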

Embodiment 2

[0060] Figure 2A is a flowchart of a face detection model training method provided by Embodiment 2 of the present invention. On the basis of Embodiment 1, this embodiment refines the three-dimensional reconstruction step and the generation of the training UV coordinate map. Specifically, as shown in Figure 2A, the method may include the following steps:

[0061] S201. Acquire a training face image.

[0062] S202. Select M three-dimensional face models.

[0063] In this embodiment of the present invention, M three-dimensional face models can be selected from a preset three-dimensional face model library, the selected models can be preprocessed, and the preprocessed models can then be aligned using the optical flow method to obtain aligned 3D face models.
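Paragraph [0063] names the optical flow method for alignment but does not spell out the procedure. One common way to realize such an alignment, shown below as an assumption for illustration rather than the patent's prescribed implementation, is to unwrap or render each 3D model into a common 2D parameterization, estimate dense optical flow against a reference model, and warp accordingly so that the models fall into pixel-wise (and hence vertex-wise) correspondence:

```python
# Hedged sketch: aligning face models via dense 2D optical flow (OpenCV + NumPy).
# Assumption: each model has already been unwrapped/rendered into an 8-bit BGR map
# (e.g. a cylindrical or UV texture projection) of identical resolution; this choice
# of parameterization is illustrative, not mandated by the patent.
import cv2
import numpy as np

def align_to_reference(ref_map: np.ndarray, src_map: np.ndarray) -> np.ndarray:
    """Warp src_map onto ref_map using Farneback dense optical flow."""
    ref_gray = cv2.cvtColor(ref_map, cv2.COLOR_BGR2GRAY)
    src_gray = cv2.cvtColor(src_map, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(ref_gray, src_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample src_map at the flow-displaced positions so it lines up with ref_map.
    return cv2.remap(src_map, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```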

[0064] A 3D face model is generated by 3D scanning, and different scanners have different imaging principles; there may be miss...
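Embodiment 2 refines how the training UV coordinate map is generated from the reconstructed 3D face model. The sketch below shows one straightforward way to rasterize such a map: each vertex of the aligned model is assumed to carry a fixed (u, v) position in the UV plane, and its (x, y, z) coordinates are written into the corresponding pixel of an H×W×3 array. The per-vertex UV assignment, the array names, and the resolution are assumptions made for illustration.

```python
# Hedged sketch: rasterizing a training UV coordinate map (NumPy).
# Assumption: `vertices` is an (N, 3) array of 3D coordinates from the reconstructed
# face model, and `uv_coords` is an (N, 2) array of per-vertex UV positions in [0, 1]
# taken from the model's fixed parameterization; both names are hypothetical.
import numpy as np

def make_uv_coordinate_map(vertices: np.ndarray,
                           uv_coords: np.ndarray,
                           size: int = 256) -> np.ndarray:
    """Return a (size, size, 3) map whose pixels store 3D coordinates of the face surface."""
    uv_map = np.zeros((size, size, 3), dtype=np.float32)
    cols = np.clip((uv_coords[:, 0] * (size - 1)).round().astype(int), 0, size - 1)
    rows = np.clip((uv_coords[:, 1] * (size - 1)).round().astype(int), 0, size - 1)
    uv_map[rows, cols] = vertices   # nearest-pixel splat; a full pipeline would typically
    return uv_map                   # rasterize triangles to fill the gaps between vertices
```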

Embodiment 3

[0119] Figure 3A is a flowchart of a face key point detection method provided by Embodiment 3 of the present invention. This embodiment is applicable to detecting the key points of a human face from a face image. The method may be performed by a face key point detection device, which can be implemented in software and/or hardware and integrated in the device that performs the method. Specifically, as shown in Figure 3A, the method may include the following steps:

[0120] S301. Acquire a target face image.

[0121] In this embodiment of the present invention, the target face image may be a face image to which video special effects are to be added. For example, during live video streaming or short video recording, when the user selects an operation such as color contact lenses, adding stickers, or face slimming to add a video special effect, the live video app detects the user's operation and intercepts a frame from the video frames collected ...
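To make the detection flow concrete, the sketch below runs a trained model of the kind described above on a captured frame: the network predicts a UV coordinate map for the target face image, and key points are read off at a set of predefined UV locations. The model interface, the key point count, and the index array are assumptions for illustration, not details taken from the patent.

```python
# Hedged sketch: detecting face key points from a target face image (PyTorch + NumPy).
# `model` is assumed to be a trained network mapping a 1x3xHxW image tensor to a
# 1x3xHxW UV coordinate map; `kpt_uv_indices` is a hypothetical (K, 2) integer array
# of row/column positions in the UV plane where the key points live.
import numpy as np
import torch

def detect_keypoints(model: torch.nn.Module,
                     face: np.ndarray,
                     kpt_uv_indices: np.ndarray) -> np.ndarray:
    """Return a (K, 3) array of 3D key point coordinates for one HxWx3 face image."""
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(face).permute(2, 0, 1).unsqueeze(0).float()  # 1x3xHxW
        uv_map = model(x)[0].permute(1, 2, 0).cpu().numpy()               # HxWx3
    rows, cols = kpt_uv_indices[:, 0], kpt_uv_indices[:, 1]
    return uv_map[rows, cols]   # 3D coordinates sampled at the key point UV positions
```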


Abstract

The embodiments of the invention disclose a face detection model training method, a face key point detection method, corresponding devices, equipment, and a storage medium. A training face image is three-dimensionally reconstructed based on a preset three-dimensional face model to obtain a training three-dimensional face model; a training UV coordinate map containing the three-dimensional coordinates of the training three-dimensional face model is generated from that model; and the training face image and the training UV coordinate map are used to train a semantic segmentation network, so that the face detection model is obtained. Because the embodiments of the invention do not require the training face image and the training UV coordinate map to be manually labeled, the invention solves the problem that manually estimated and labeled training data are inaccurate and cause the CNN to output inaccurate face key point coordinates, and it improves the performance of the face detection model and the accuracy of face key point detection.
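The abstract describes training a semantic segmentation network on pairs of training face images and automatically generated UV coordinate maps. The sketch below shows what such a training step could look like with a small encoder-decoder network that regresses an H×W×3 UV coordinate map from an input face image; the architecture, loss choice, and hyperparameters are illustrative assumptions, not the patent's specified configuration.

```python
# Hedged sketch: training a segmentation-style network to regress UV coordinate maps (PyTorch).
# The tiny encoder-decoder, L1 loss, and Adam settings are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVRegressor(nn.Module):
    """Minimal encoder-decoder that maps a 3xHxW face image to a 3xHxW UV coordinate map."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def train_step(model, optimizer, images, uv_maps):
    """One optimization step on a batch of (face image, generated UV coordinate map) pairs."""
    model.train()
    optimizer.zero_grad()
    pred = model(images)                 # Bx3xHxW predicted UV coordinate map
    loss = F.l1_loss(pred, uv_maps)      # regression loss against the auto-generated target
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data:
# model = UVRegressor(); opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# imgs, maps = torch.rand(4, 3, 256, 256), torch.rand(4, 3, 256, 256)
# print(train_step(model, opt, imgs, maps))
```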

Description

Technical Field

[0001] The embodiments of the present invention relate to the technical field of data processing, and in particular to a face detection model training method, a face key point detection method, a face detection model training device, a face key point detection device, equipment, and a storage medium.

Background

[0002] With the development of Internet technology, various video applications have appeared, through which people can communicate more intuitively.

[0003] When live streaming or recording short videos, users usually need to apply special effects to the video, such as adding beautification, stickers, and other effects to the faces in the video. Adding these effects depends on key points such as the eyes, mouth, and nose; therefore, the accuracy of face key point detection is especially important for the processing of special effect...


Application Information

IPC (IPC8): G06K9/00; G06V10/764; G06V10/774
CPC: G06V20/64; G06V40/161; G06V10/454; G06V10/82; G06V10/764; G06V10/7715; G06V10/774; G06F18/2413; G06N20/00; G06F18/214; G06F18/2135
Inventor: 陈德健
Owner: BIGO TECH PTE LTD