
Model training and scene recognition method and device, equipment and medium

A model training and scene recognition technology, applied in the field of image processing, which addresses the problem that the scene has a large impact on the accuracy of machine-review results, and achieves the effects of improved recognition accuracy and a high feature extraction capability.

Pending Publication Date: 2022-02-15
BIGO TECH PTE LTD

AI Technical Summary

Problems solved by technology

Scene recognition has a great influence on the accuracy of the review results produced by machine review.


Examples


Embodiment 1

[0058] Figure 1 is a schematic diagram of the scene recognition model training process provided by an embodiment of the present invention. The process includes the following steps:

[0059] S101: Using the first scene label of the sample image and the standard cross-entropy loss, train to obtain the parameters of the core feature extraction layer and the global information feature extraction layer.

[0060] S102: Train the weight parameters of the LCS modules at each level according to the feature maps output by the LCS modules at each level and the loss values calculated pixel by pixel from the first scene label of the sample image.

[0061] S103: Using the first scene label of the sample image and the standard cross-entropy loss, train to obtain the parameters of the fully connected decision layer.

[0062] Wherein, the scene recognition model includes a core feature extraction layer, a global information feature extraction layer connected to the core feature extracti...
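Read as pseudocode, S101-S103 describe three optimization passes over the same network, each updating a different subset of parameters while the rest stay frozen. The following is a minimal PyTorch sketch under that reading; the submodule names (core, global_info, lcs_modules, fc_decision) and the per-level pixel head lcs_pixel_logits() are illustrative assumptions, not names taken from the patent.

```python
# Minimal sketch of the three-stage schedule in S101-S103, assuming a
# PyTorch model that exposes the named layers as submodules. Attribute
# names and lcs_pixel_logits() are illustrative, not from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

def set_trainable(module: nn.Module, flag: bool) -> None:
    """Freeze or unfreeze every parameter of a submodule."""
    for p in module.parameters():
        p.requires_grad = flag

def run_stage(params, loss_fn, loader, epochs=1, lr=1e-3):
    """One training stage: optimize only `params` under `loss_fn`."""
    opt = torch.optim.SGD(list(params), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(images, labels).backward()
            opt.step()

def train_scene_model(model, loader):
    def ce(x, y):  # standard cross-entropy on the model's class scores
        return F.cross_entropy(model(x), y)

    # S101: core + global-information layers, standard cross-entropy.
    set_trainable(model, False)
    set_trainable(model.core, True)
    set_trainable(model.global_info, True)
    run_stage(list(model.core.parameters())
              + list(model.global_info.parameters()), ce, loader)

    # S102: only the LCS weights. The single image-level scene label is
    # broadcast to every spatial position of each level's LCS output,
    # matching the "loss calculated pixel by pixel" wording.
    set_trainable(model, False)
    set_trainable(model.lcs_modules, True)

    def pixelwise(x, y):
        total = 0.0
        for logits in model.lcs_pixel_logits(x):  # (B, classes, H, W)
            b, _, h, w = logits.shape
            target = y.view(b, 1, 1).expand(b, h, w).contiguous()
            total = total + F.cross_entropy(logits, target)
        return total

    run_stage(model.lcs_modules.parameters(), pixelwise, loader)

    # S103: finally, the fully connected decision layer alone.
    set_trainable(model, False)
    set_trainable(model.fc_decision, True)
    run_stage(model.fc_decision.parameters(), ce, loader)
```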

Embodiment 2

[0077] The core feature extraction layer includes a first-type grouped multi-receptive-field residual convolution module and a second-type grouped multi-receptive-field residual convolution module;

[0078] The first-type grouped multi-receptive-field residual convolution module includes a first group, a second group and a third group; the convolution kernel sizes of the three groups differ, and each of the first, second and third groups contains a residual-calculation bypass structure. Each group outputs a feature map through a convolution operation and residual calculation; the feature maps output by the groups are concatenated in the channel dimension and channel-shuffled, and after convolutional fusion the result is output to the next module (a code sketch follows this embodiment);

[0079] The second-type grouped multi-receptive-field residual convolution module includes a fourth group, a fifth group and a sixth group, the...
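Paragraph [0078] describes a split-transform-merge block: the channels are split into groups with different receptive fields, each group keeps a residual bypass, and the group outputs are concatenated, channel-shuffled and convolutionally fused. A minimal PyTorch sketch of the first-type module under that reading follows; the kernel sizes (3, 5, 7) and the 1x1 fusion convolution are illustrative choices the text does not fix.

```python
# Sketch of the first-type grouped multi-receptive-field residual
# convolution module. Kernel sizes and 1x1 fusion are assumptions.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups (as in ShuffleNet)."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class GroupedMultiRFResidualBlock(nn.Module):
    """Per-group convolutions with residual bypasses, channel concat,
    channel shuffle, then convolutional fusion."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        self.split = channels // len(kernel_sizes)
        # One convolution per group; padding keeps H and W unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(self.split, self.split, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # convolutional fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        groups = torch.split(x, self.split, dim=1)
        # Residual bypass inside each group: conv output + group input.
        outs = [conv(g) + g for conv, g in zip(self.branches, groups)]
        y = torch.cat(outs, dim=1)                # concatenate on channels
        y = channel_shuffle(y, len(self.branches))
        return self.fuse(y)                       # output to the next module

# Example: a 48-channel map passes through with its shape preserved.
# out = GroupedMultiRFResidualBlock(48)(torch.randn(2, 48, 32, 32))
```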

Embodiment 3

[0082] Using the first scene label of the sample image and the standard cross-entropy loss, the parameters of the core feature extraction layer and the global information feature extraction layer are obtained through training as follows:

[0083] The feature maps of different levels in the core feature extraction layer are upsampled by deconvolution operations with different expansion factors, and a bilinear interpolation algorithm is used to align the number of channels in the channel dimension. The feature maps of all levels are then added and merged channel by channel, the merged feature-map group is convolutionally fused, and a global information feature vector is obtained by channel-by-channel global average pooling. The global information feature vector and the fully connected layer (FC) feature vector are spliced, and the standard cross-entropy loss is used to train the parameters of the core feature extraction layer and the global information feature extraction layer, which are obtai...
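A minimal PyTorch sketch of this fusion path follows, assuming three levels of core feature maps at spatial sizes H, H/2 and H/4. The channel counts, the strides used as expansion factors, and the 1D linear resampling that stands in for the bilinear channel alignment are all illustrative assumptions; the text does not give exact sizes.

```python
# Sketch of the multi-level fusion in [0083]. Channel counts, strides
# and the channel-alignment resampling are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def align_channels(x: torch.Tensor, target_c: int) -> torch.Tensor:
    """Resample the channel dimension to target_c channels (1D linear)."""
    b, c, h, w = x.shape
    flat = x.permute(0, 2, 3, 1).reshape(b * h * w, 1, c)
    flat = F.interpolate(flat, size=target_c, mode="linear",
                         align_corners=False)
    return flat.reshape(b, h, w, target_c).permute(0, 3, 1, 2)

class GlobalInfoFusion(nn.Module):
    """Upsample each level by deconvolution, align channel counts, sum
    channel by channel, fuse by convolution, pool globally, then splice
    with the FC feature vector."""
    def __init__(self, in_channels=(128, 256, 512), strides=(1, 2, 4),
                 out_channels=128):
        super().__init__()
        # One deconvolution per level; the stride is its expansion
        # factor, chosen so levels at H, H/2, H/4 all reach size H.
        self.deconvs = nn.ModuleList(
            nn.ConvTranspose2d(c, c, kernel_size=s, stride=s)
            for c, s in zip(in_channels, strides)
        )
        self.fuse = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, feats, fc_vector):
        up = [d(f) for d, f in zip(self.deconvs, feats)]   # common H, W
        aligned = [align_channels(f, self.fuse.in_channels) for f in up]
        merged = torch.stack(aligned).sum(dim=0)           # channel-wise add
        fused = self.fuse(merged)
        gvec = fused.mean(dim=(2, 3))                      # global avg pooling
        return torch.cat([gvec, fc_vector], dim=1)         # splice with FC
```

Summing the aligned maps channel by channel keeps the fused tensor the same size as a single level, so the global average pool yields a fixed-length vector regardless of how many levels are merged.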


Abstract

The invention discloses a model training and scene recognition method and device, equipment and a medium. The method comprises the following steps: during training of the scene recognition model, the first scene label of a sample image and the standard cross-entropy loss are used to obtain the parameters of a core feature extraction layer and a global information feature extraction layer through training; then, the weight parameters of the LCS module at each level are trained according to the feature map output by the LCS module of each level and a loss value calculated pixel by pixel from the first scene label of the sample image; finally, the parameters of a fully connected decision layer of the scene recognition model are obtained through training. The scene recognition model thus has a high-richness feature extraction capability; scene recognition performed based on this model is greatly improved in accuracy.

Description

Technical field

[0001] The present invention relates to the technical field of image processing, and in particular to a model training and scene recognition method, device, equipment and medium.

Background technique

[0002] Machine review technology (referred to as machine review) is more and more widely used in large-scale short video / picture review. Illegal pictures identified by machine review are pushed to staff for review (referred to as human review), which finally determines whether a picture violates regulations. The emergence of machine review has greatly improved the efficiency of picture review. However, machine review tends to rely on the visual commonality of images to make violation judgments, ignoring changes in review results caused by changes in the general environment. For example, in the review of gun violations, when machine review recognizes a gun in an image, it will generally consider the picture to be a violation, but the accuracy of such ...


Application Information

IPC(8): G06V20/40; G06V10/774; G06V10/82; G06N3/04; G06N3/08
CPC: G06N3/084; G06N3/048; G06N3/045; G06F18/214
Inventors: 罗雄文, 卢江虎, 项伟
Owner: BIGO TECH PTE LTD