
Real person verification method via videos

A video-based real-person verification technology, applied in the field of real-person verification, which addresses problems such as long verification time, limited applicability of face identity authentication systems, and forged faces, and achieves the effect of improving verification authenticity.

Status: Inactive; Publication Date: 2018-04-20
CHENGDU REMARK TECH CO LTD +1

AI Technical Summary

Problems solved by technology

[0003] However, face attack technology has severely restricted the application of face identity authentication systems. In order to solve this problem, face liveness detection technology has become a research hotspot.
Current liveness recognition technology mostly relies on face key point detection and requires the user to cooperate by performing specified actions; the operation is complicated and verification takes a long time. Moreover, this approach cannot solve the problem of faces forged with videos and 3D models.



Examples


Embodiment 1

[0032] A method of real-person verification via video, as shown in Figure 1, mainly includes the following steps:

[0033] Step A1: Collect continuous video, convert the continuous video frames in the video stream into multiple single-channel images, and then combine the multiple single-channel images into one multi-channel image;

[0034] Step A2: Input the three-channel image synthesized in Step A1 into the trained deep learning model and extract deep features;

[0035] Step A3: Use the liveness judgment method to determine whether the person in the current image is a live person, and output the result.
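
For illustration only, a minimal sketch of steps A1 to A3 in Python with OpenCV and Caffe's Python interface, assuming a fine-tuned model stored in the hypothetical files living_deploy.prototxt and living.caffemodel with a softmax output blob named prob; the input size, preprocessing, and the meaning of the two output classes are assumptions, not details from the patent:

    import cv2
    import numpy as np
    import caffe

    # Hypothetical file names for the fine-tuned liveness model of Embodiment 2.
    net = caffe.Net('living_deploy.prototxt', 'living.caffemodel', caffe.TEST)

    def grab_multichannel_image(cap, size=(224, 224)):
        # Step A1: take three consecutive frames, convert each to a single-channel
        # grayscale image, and stack them into one 3-channel image (C, H, W).
        grays = []
        for _ in range(3):
            ok, frame = cap.read()
            if not ok:
                return None
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            grays.append(cv2.resize(gray, size))
        return np.stack(grays, axis=0).astype(np.float32)

    def is_live(image):
        # Steps A2-A3: forward the synthesized image through the network and read
        # the liveness probability from the softmax output (mean subtraction and
        # other preprocessing are omitted for brevity).
        net.blobs['data'].reshape(1, *image.shape)
        net.blobs['data'].data[...] = image
        prob = net.forward()['prob'][0]
        return prob[1] > 0.5  # assumption: index 1 = living body, index 0 = non-living

    cap = cv2.VideoCapture(0)
    image = grab_multichannel_image(cap)
    if image is not None:
        print('live person' if is_live(image) else 'not a live person')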

[0036] In the present invention, if repeated detection is successful for 3 consecutive times, the living body verification is successful; otherwise, the living body verification fails.
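
Read literally, this rule could be implemented as a short loop over the helpers sketched above (required_successes is an illustrative parameter name, not from the patent):

    def verify_person(cap, required_successes=3):
        # Liveness verification succeeds only if detection succeeds three
        # consecutive times; any single failure ends the attempt.
        for _ in range(required_successes):
            image = grab_multichannel_image(cap)
            if image is None or not is_live(image):
                return False
        return True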

[0037] The present invention uses a camera to capture continuous video of the person to be verified, converts the continuous video frames in the video stream into multiple single-channel ...

Embodiment 2

[0040] This embodiment is a further optimization of Embodiment 1. As shown in Figure 2, the generation of the training model mainly includes the following steps:

[0041] Step A31: Collect multiple video clips whose liveness is real and known, label the living-body clips and the non-living-body clips, and combine each video clip into a multi-channel image, with non-living-body images labeled 0;

[0042] Step A32: Input the labeled multi-channel image information from step A31 into the modified VGG Face model to obtain a fine-tuned face recognition model;

[0043] Step A33: After several iterations, a living body recognition model is obtained, that is, a trained VGG Face deep learning model.
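
One plausible way to prepare the labeled data of step A31 for Caffe is a plain "image_path label" list consumed by an ImageData layer; saving the synthesized multi-channel images to two folders and labeling living-body images 1 and non-living-body images 0 is an assumption consistent with the two-class fc8_living output described below:

    import os

    def write_label_list(live_dir, spoof_dir, out_path='train.txt'):
        # One line per synthesized multi-channel image: "image_path label".
        # Assumption: living-body images get label 1, non-living-body images label 0.
        with open(out_path, 'w') as f:
            for name in sorted(os.listdir(live_dir)):
                f.write('{} 1\n'.format(os.path.join(live_dir, name)))
            for name in sorted(os.listdir(spoof_dir)):
                f.write('{} 0\n'.format(os.path.join(spoof_dir, name)))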

[0044] The deep learning model in step A2 is a VGG Face deep learning model; the output parameter num_output of fc8 in the VGG Face model is set to 2, and the name parameter of fc8 is set to fc8_living; in the Caffe environment, use the labeled ...
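
A sketch of the fine-tuning step under Caffe's Python interface, assuming hypothetical solver and network definition files in which the last fully connected layer of VGG Face has been renamed to fc8_living with num_output set to 2; renaming the layer keeps its weights from being copied from the pretrained model, so the two-class classifier is learned from scratch:

    import caffe

    caffe.set_mode_gpu()  # or caffe.set_mode_cpu()

    # solver_living.prototxt (hypothetical) points to a train/val prototxt in which
    # fc8 has been renamed to fc8_living and its num_output changed from 2622 to 2.
    solver = caffe.SGDSolver('solver_living.prototxt')

    # Layers whose names match the pretrained VGG Face model are initialized from
    # its weights; the renamed fc8_living layer starts from fresh weights.
    solver.net.copy_from('VGG_FACE.caffemodel')

    # Several iterations of fine-tuning yield the liveness recognition model.
    solver.solve()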

Embodiment 3

[0047] This embodiment is a further optimization of Embodiment 1 or 2. The method of processing the video stream into a multi-channel image is as follows: first collect continuous video of the person to be verified through the camera, then extract multiple frames of color images from the video stream and convert each color image into a single-channel grayscale image; three single-channel grayscale images are then combined into one three-channel color image.

[0048] The method for synthesizing multi-channel images in the present invention is to first extract multiple frames of color images from the video stream and convert the color images into single-channel grayscale images; three single-channel grayscale images are combined to form a three-channel color image, wherein the first image is used as the B channel of the color image, the second image is used as the G channel of the color image, and the third image is used as the R channel of ...
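
A minimal sketch of this synthesis with OpenCV, where the first, second, and third grayscale frames become the B, G, and R channels of the output image (reading the frames directly from the capture device is an assumption):

    import cv2

    def synthesize_three_channel(cap):
        # Convert three consecutive color frames to single-channel grayscale images
        # and merge them: frame 1 -> B channel, frame 2 -> G channel, frame 3 -> R channel.
        grays = []
        for _ in range(3):
            ok, frame = cap.read()
            if not ok:
                raise RuntimeError('could not read a frame from the camera')
            grays.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        b, g, r = grays
        return cv2.merge([b, g, r])  # OpenCV stores color images in B, G, R order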



Abstract

The invention discloses a real person verification method via videos. Video is collected continuously, continuous video frames in the video stream are converted into single-channel images, the single-channel images are combined into one multi-channel image, a trained deep learning model is used to process the synthesized multi-channel image and extract deep features, a liveness determination method is used to determine whether there is a real person in the image, and the determination result is output. Thus, a real person can be identified effectively. Liveness is detected online via deep learning, and the problem of faces being faked with a picture, video, or 3D model can be solved effectively.

Description

Technical Field

[0001] The invention belongs to the technical field of real person verification, and in particular relates to a method for real person verification through video.

Background Technique

[0002] Traditional identity authentication methods include passwords, ID cards, smart cards, etc. With the development of technology, identity authentication methods such as face, fingerprint, retina, iris, palm vein, and finger vein recognition have appeared. At the same time, attacks against these new technologies have gradually emerged, such as forging human faces through photos, videos, and even 3D models, and forging irises through high-resolution photos and contact lenses.

[0003] However, face attack technology has severely restricted the application of face identity authentication systems. In order to solve this problem, face liveness detection technology has become a research hotspot. The current living body recognition technology basically adopts the method of face key point detect...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V40/172, G06V20/41, G06V40/45
Inventor: 王飞
Owner: CHENGDU REMARK TECH CO LTD