
Video face recognition method combining deep Q learning and attention model

A technology combining an attention model and face recognition, applied in the field of video face recognition, which can solve the problem of insufficient video face matching accuracy.

Publication status: Inactive
Publication date: 2019-09-27
HOHAI UNIV

AI Technical Summary

Problems solved by technology

[0004] In order to solve the above problems, the present invention proposes a video face recognition method combining deep Q-learning and an attention model, which realizes video face recognition and solves the technical problem of insufficient video face matching accuracy.



Examples


Embodiment

[0072] A video face recognition method combining deep Q-learning and an attention model, as shown in Figure 1, specifically includes the following steps:

[0073] Step S1, video feature extraction: a convolutional neural network (CNN) is trained on the video data, and different feature maps are extracted and combined into the multi-dimensional features of the video.

[0074] In step S1, labeled video sample data are used to train the convolutional neural network, and the trained convolution model is used to extract features from the video data. The matrix representation computed as each convolution kernel slides over the input data is called a feature map; multiple convolution kernels perform convolution computations to generate multiple feature maps, and multiple groups of feature maps are combined to form the multi-dimensional features of the video. There are no neuron connections between feature maps. The output of the feature maps of the previous layer...
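As a point of reference, here is a minimal PyTorch sketch of the feature-extraction idea in step S1. The excerpt does not specify the network architecture, so the layer sizes and input resolution below are assumptions chosen purely for illustration.

    # Minimal sketch of Step S1 (assumed architecture: the patent excerpt does
    # not specify layer sizes, so the values below are illustrative).
    # Each Conv2d kernel sliding over the input produces one feature map;
    # stacking the maps from several kernels and layers yields the
    # multi-dimensional video features described above.
    import torch
    import torch.nn as nn

    class VideoFeatureCNN(nn.Module):
        def __init__(self, in_channels: int = 3, num_maps: int = 32):
            super().__init__()
            # Two convolutional stages; each output channel is one feature map.
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, num_maps, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(num_maps, num_maps * 2, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            # frames: (batch * time, channels, height, width) -- one video is
            # a sequence of frames, each passed through the shared convolutions.
            return self.features(frames)

    # Usage: extract per-frame feature maps for a 16-frame clip of 112x112 frames.
    clip = torch.randn(16, 3, 112, 112)
    maps = VideoFeatureCNN()(clip)          # shape: (16, 64, 28, 28)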



Abstract

The invention discloses a video face recognition method combining deep Q-learning and an attention model. The method comprises five steps: video feature extraction, video temporal-continuity information extraction, local face localization, optimal video frame sequence ranking, and video face recognition matching. It achieves video face recognition and solves the problem of insufficient video face matching accuracy.
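The abstract only names the five stages; a hypothetical skeleton of how they chain is sketched below. Every function name and signature is an assumption, since the excerpt does not disclose the actual interfaces.

    # Hypothetical skeleton of the five stages named in the abstract. Each
    # stub stands in for a real component that the excerpt does not detail.
    from typing import Any, List

    def extract_cnn_features(video: Any) -> Any:        # Step 1 (see [0073]-[0074])
        raise NotImplementedError

    def extract_temporal_info(features: Any) -> Any:    # Step 2: temporal continuity
        raise NotImplementedError

    def locate_local_faces(temporal: Any) -> Any:       # Step 3: attention-based localization
        raise NotImplementedError

    def rank_optimal_frames(faces: Any) -> List[Any]:   # Step 4: deep-Q-learning frame ranking
        raise NotImplementedError

    def match_faces(frames: List[Any], gallery: Any) -> Any:  # Step 5: recognition matching
        raise NotImplementedError

    def recognize(video: Any, gallery: Any) -> Any:
        """Chain the five stages in the order the abstract lists them."""
        features = extract_cnn_features(video)
        temporal = extract_temporal_info(features)
        faces = locate_local_faces(temporal)
        best = rank_optimal_frames(faces)
        return match_faces(best, gallery)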

Description

Technical field

[0001] The invention belongs to the technical field of video face recognition, and in particular relates to a video face recognition method combining deep Q-learning and an attention model.

Background technique

[0002] Video face recognition can be divided into two categories: matching video against still images, and matching video against other video. In real application scenarios, the ease of collecting and storing still images has made still-image matching the common approach to face recognition. However, in many scenarios, such as identifying criminal suspects in the public security system, video-to-video matching is often used instead: the temporal information of the video itself serves as an important element of the analysis, improving matching accuracy and compensating for the impact of poor facial pose, insufficient lighting, and the like. In the video-to-video matching process, video information processing has a very important impact...
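For context on why temporal aggregation helps, a common video-to-video matching baseline (not the patent's method, which replaces naive pooling with attention-based localization and deep-Q-learning frame selection) averages per-frame face embeddings over time and compares the pooled descriptors:

    # Generic video-to-video matching baseline, for illustration only.
    import numpy as np

    def pool_embeddings(frame_embeddings: np.ndarray) -> np.ndarray:
        """Average T per-frame embeddings (T, D) into one unit-norm video descriptor (D,)."""
        video_vec = frame_embeddings.mean(axis=0)
        return video_vec / np.linalg.norm(video_vec)

    def video_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
        """Cosine similarity between two pooled video descriptors."""
        return float(pool_embeddings(emb_a) @ pool_embeddings(emb_b))

    # Usage with random stand-in embeddings (T frames, D-dimensional features).
    probe = np.random.randn(30, 128)
    gallery_clip = np.random.randn(45, 128)
    score = video_similarity(probe, gallery_clip)   # higher = more likely same face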


Application Information

IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/168; G06V20/46; G06N3/044; G06N3/045
Inventors: 刘惠义, 郑秋文, 居明宇
Owner: HOHAI UNIV