
Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence

A technology combining artificial intelligence with a control method, applied to neural learning methods, computer components, cathode-ray-tube indicators, etc., which can solve problems such as dry facial skin, eye diseases, and corneal dryness.

Pending Publication Date: 2021-03-26
宋彦震
Cites: 0 | Cited by: 5

AI Technical Summary

Problems solved by technology

Multiple high-brightness computer screens not only increase electromagnetic radiation but also make the eyes more prone to fatigue. Long-term use fatigues and relaxes the eyeball muscles and the intraocular lens muscles and dries the cornea on the outside of the eyeball, which is likely to cause eye diseases; at the same time, the facial skin is prone to dryness, leading to spots and wrinkles.



Examples


Example 1

[0024] The first embodiment, as shown in Figure 1 and Figure 3.

[0025] The training process of the neural network can be divided into the following steps.

[0026] Step 1: Define the neural network, including some learnable parameters or weights.

[0027] Step 2: Input the data into the network for training and calculate the loss value.

[0028] Step 3: Backpropagate the gradient to the parameters or weights of the network, update the weights of the network accordingly, and train again.

[0029] Step 4: Save the final trained model.
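The four steps above could look roughly like the following PyTorch sketch. The model, the DataLoader of labelled face images, and the hyperparameters are placeholders for illustration; the patent does not disclose a specific framework or training configuration.

```python
# Minimal sketch of steps 1-4 in PyTorch. The model and train_loader are
# assumed placeholders, not part of the original disclosure.
import torch
import torch.nn as nn
import torch.optim as optim

def train(model: nn.Module, train_loader, epochs: int = 10, lr: float = 0.01):
    criterion = nn.CrossEntropyLoss()                # loss over the 9 orientation classes
    optimizer = optim.SGD(model.parameters(), lr=lr)

    for epoch in range(epochs):
        for images, labels in train_loader:          # Step 2: feed data through the network
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)        # Step 2: compute the loss value
            loss.backward()                          # Step 3: backpropagate the gradients
            optimizer.step()                         # Step 3: update the weights

    torch.save(model.state_dict(), "face_orientation.pt")  # Step 4: save the trained model
```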

[0030] After the face image is input into the neural network, features are extracted through convolution operations, and the result is passed to the next layer after a down-sampling operation. After multiple convolution, activation, and pooling layers, the output is fed to the fully connected layer, which maps it to the final classification result: one of nine face orientations.
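A minimal sketch of such a network in PyTorch, assuming 64 × 64 RGB inputs and two convolution-activation-pooling blocks before the fully connected classifier over the nine orientations; the layer sizes are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class FaceOrientationNet(nn.Module):
    """Illustrative CNN: convolution -> activation -> pooling blocks, then a
    fully connected classifier over the nine face orientations (64x64 RGB input assumed)."""
    def __init__(self, num_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 60 -> 30
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 30 -> 26 -> 13
        )
        self.classifier = nn.Linear(16 * 13 * 13, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)       # flatten feature maps before the fully connected layer
        return self.classifier(x)

# quick shape check with a dummy batch
logits = FaceOrientationNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 9]) -> scores for the nine orientations
```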

[0031] Load the model for facial orientation recogn...

Example 2

[0034] Because the original image acquired by the camera device is subject to various conditions and random interference, it often cannot be used directly and must be preprocessed in the early stage of image processing, for example with grayscale correction and noise filtering. The preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, filtering, and sharpening of the face images. Put simply, the captured images are refined and the detected faces are cropped into pictures of a fixed size for easy recognition and processing.
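A rough preprocessing sketch with OpenCV, covering light compensation (histogram equalization of the luminance channel), noise filtering, resizing to a fixed size, and normalization; the 64 × 64 size and the YCrCb-based equalization are assumptions, not details from the patent.

```python
import cv2
import numpy as np

def preprocess_face(image_bgr: np.ndarray, size: int = 64) -> np.ndarray:
    """Illustrative preprocessing: equalize the luminance channel (light
    compensation), filter noise, resize to a fixed size, and normalize."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.equalizeHist(y)                              # histogram equalization on luminance only
    image_bgr = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
    image_bgr = cv2.GaussianBlur(image_bgr, (3, 3), 0)   # simple noise filtering
    face = cv2.resize(image_bgr, (size, size))           # fixed size for recognition
    return face.astype(np.float32) / 255.0               # normalize pixel values to [0, 1]
```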

[0035] In the neural network, an image of size W × H × 3 is first input and convolved with the convolution kernels, followed by activation and pooling operations. The convolution process uses 6 convolution kernels, each with the three channels R, G, and B; the number of channels of a convolution kernel is the same as th...
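The channel matching can be illustrated with a single PyTorch convolution layer: six kernels over a W × H × 3 input, where each kernel carries three channels to match the R, G, and B planes of the input. The 64 × 64 input size and 5 × 5 kernel size are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Six kernels over a W x H x 3 input: each kernel has 3 channels (R, G, B),
# matching the input depth, and each kernel produces one feature map.
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
x = torch.randn(1, 3, 64, 64)              # one W x H x 3 image (64 x 64 assumed)
y = F.max_pool2d(torch.relu(conv(x)), 2)   # activation, then pooling

print(conv.weight.shape)  # torch.Size([6, 3, 5, 5]): 6 kernels x 3 channels x 5 x 5
print(y.shape)            # torch.Size([1, 6, 30, 30]): six feature maps, downsampled
```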

Example 3

[0044] The third embodiment, as shown in Figures 1 to 3.

[0045] The present invention includes the following steps.

[0046] Step S11: Enter the parameter-collection stage before neural network training and learning: divide the face directions into nine types, namely up, down, left, right, upper left, upper right, lower left, lower right, and middle, and collect images of these nine facial directions as training-set samples. The image information of the various facial orientations is used as the input parameters before neural network training and learning, and the nine facial orientations are used as the output parameters.

[0047] Step S12: Establish a face orientation-display ID comparison table.

[0048] Step S13: Enter the neural network training and learning stage: the neural network is trained on the known input and output parameters, and the learning results are used as hidden layer data i...
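A minimal sketch of the artifacts from steps S11 and S12: the nine orientation labels used as output classes, and one possible face orientation-display ID comparison table. The specific display IDs, and the assumption of a 3 × 3 screen arrangement, are purely illustrative.

```python
# Nine face orientations used as output classes in step S11.
ORIENTATIONS = [
    "up", "down", "left", "right",
    "upper_left", "upper_right", "lower_left", "lower_right", "middle",
]
LABELS = {name: index for index, name in enumerate(ORIENTATIONS)}

# Face orientation -> display screen ID comparison table from step S12.
# The IDs below assume, purely for illustration, a 3 x 3 arrangement of screens.
ORIENTATION_TO_DISPLAY = {
    "upper_left": 0, "up": 1, "upper_right": 2,
    "left": 3,       "middle": 4, "right": 5,
    "lower_left": 6, "down": 7, "lower_right": 8,
}
```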



Abstract

According to the invention, the face directions are divided into nine types. The image information of the various face directions is used as the input parameters before neural network training and learning, and the nine face directions are used as the output parameters. The neural network is trained on the known input and output parameters, and the learning result is used as the hidden-layer data of the neural network in the use stage. Photographic information of a user, collected by the camera device in real time, is input into the trained neural network; the network automatically matches the new input-layer information against the known learning results of the hidden layer and outputs the current user's face direction at the output layer. After the processor obtains the face direction, it first reads the face direction-display screen ID comparison table, then raises the brightness of the display screen corresponding to the current face direction and reduces the brightness of the remaining display screens. The brightness of each display screen is thus adjusted in real time according to the user's needs, reducing the harm of high-brightness display screens and electromagnetic radiation to human eyes.
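As a rough sketch of the control loop in the abstract: once the network outputs the current face direction, the processor reads the comparison table, raises the brightness of the matching screen, and dims the rest. The set_brightness helper and the example table below are hypothetical placeholders, since actual brightness control depends on the operating system and monitor interface (for example DDC/CI for external displays).

```python
def set_brightness(display_id, level):
    """Hypothetical placeholder: a real implementation would use an OS- or
    monitor-specific mechanism (e.g. DDC/CI) to change the backlight level."""
    print(f"display {display_id} -> brightness {level}%")

def adjust_screens(face_orientation, orientation_to_display, display_ids,
                   high=90, low=30):
    """Raise the brightness of the screen the user is facing; dim the others."""
    active = orientation_to_display.get(face_orientation)
    for display_id in display_ids:
        set_brightness(display_id, high if display_id == active else low)

# Example: the user looks to the upper left; the table and IDs are illustrative.
table = {"upper_left": 0, "up": 1, "middle": 4}
adjust_screens("upper_left", table, display_ids=range(9))
```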

Description

Technical field

[0001] The invention relates to the technical field of computer display screen expansion, in particular to a control method for adjusting multi-screen brightness through face tracking based on artificial intelligence.

Background technique

[0002] There are four ways to display the computer screen: computer-screen-only mode, copy mode, extend mode, and second-screen-only mode.

[0003] Extend mode: the computer desktop is extended to the external screen. The extended desktop shown by the projection has no desktop icons or taskbar shortcuts, only a blank desktop; content displayed on the computer screen can be dragged onto the projection screen, which expands the area of the computer desktop and makes it convenient for users to work and study with multiple screens.

[0004] In extend mode, the computer sends the information to be displayed to the external display screen through the data cable.

[0005] Users can expand multiple display screens ac...
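In extend mode, the method needs a stable ID for each attached display when building the comparison table. As a minimal sketch, the displays of an extended desktop could be enumerated with the third-party screeninfo package; the package choice and the sequential display IDs are assumptions, not something the patent specifies.

```python
# Minimal sketch: enumerate the displays of an extended desktop with the
# third-party screeninfo package (assumed here; the patent names no library).
from screeninfo import get_monitors

for display_id, monitor in enumerate(get_monitors()):
    # Sequential display_id values are an illustrative convention for the
    # face orientation -> display ID comparison table.
    print(display_id, monitor.name, f"{monitor.width}x{monitor.height}",
          f"at ({monitor.x}, {monitor.y})")
```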

Claims


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08; G09G5/10
CPC: G06N3/084; G09G5/10; G09G2300/02; G06V40/172; G06N3/045; G06F18/214; G06F18/2414
Inventor 宋彦震
Owner 宋彦震