
Model training method, device and equipment and storage medium

A model training technology, applied in the field of deep learning.

Pending Publication Date: 2020-11-13
GUANGDONG OPPO MOBILE TELECOMM CORP LTD

AI Technical Summary

Problems solved by technology

[0003] However, many features in a video are related, and recognizing one feature in isolation while ignoring the features associated with it has a negative effect on recognition. To avoid this, a convolutional neural network and a recurrent neural network can be used at the same time to recognize the associated features, and how to train the convolutional neural network and the recurrent neural network in this scenario has become an urgent problem to be solved.




Embodiment Construction

[0022] In order to make the purpose, technical solution, and advantages of the present application clearer, the embodiments of the present application are described in further detail below in conjunction with the accompanying drawings.

[0023] In practical applications, features of video content that is not sequential in nature can usually be recognized by a convolutional neural network. For example, the individual video frames in a video are not sequential, so their features can usually be recognized by a convolutional neural network. Features of sequential content in the video can usually be recognized by a recurrent neural network. For example, the audio contained in the video is sequential, so its features can usually be recognized by a recurrent neural network.
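As an illustration of this split, the following is a minimal PyTorch-style sketch, not taken from the patent: a convolutional network labels individual frames, while a recurrent network labels the audio feature sequence. The module names, layer sizes, and tensor shapes are assumptions made for the example.

import torch
import torch.nn as nn

# Sketch: a CNN labels a single video frame (no temporal order needed),
# while an RNN labels the audio track, whose feature vectors form a sequence.
# All sizes and names here are illustrative assumptions, not from the patent.

class FrameCNN(nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool spatial dimensions to 1x1
        )
        self.head = nn.Linear(16, num_labels)

    def forward(self, frames):          # frames: (batch, 3, H, W)
        return self.head(self.features(frames).flatten(1))

class AudioRNN(nn.Module):
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, num_labels)

    def forward(self, audio):            # audio: (batch, time, feat_dim)
        _, h = self.rnn(audio)           # final hidden state summarizes the sequence
        return self.head(h[-1])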

[0024] In the related art, according to the features to be recognized, either the convolutional neural network can be selected on its own to recognize the video, or the recurrent neural network can be selected on its own to recognize the video.



Abstract

The invention discloses a model training method, device, equipment, and storage medium, and belongs to the technical field of deep learning. The method comprises the following steps: obtaining a training video sample, wherein the training video sample comprises a training video and at least two real tags having an association relationship; inputting the training video into an initial convolutional neural network and an initial recurrent neural network respectively, to obtain at least two training tags, output by the initial convolutional neural network and the initial recurrent neural network, that have an association relationship; and training the initial convolutional neural network and the initial recurrent neural network based on the difference between the at least two training tags and the difference between the at least two training tags and the at least two real tags, to obtain a trained target convolutional neural network and a trained target recurrent neural network. The technical scheme provided by the embodiments of the invention thus provides a method for training a convolutional neural network and a recurrent neural network that can simultaneously recognize associated features in a video.
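The training described above can be read as a joint optimization of the two networks with two loss terms: a supervised term comparing each training tag with the corresponding real tag, and an association term comparing the two training tags with each other. The sketch below is one possible reading of that scheme, not the patent's implementation; it reuses the hypothetical FrameCNN and AudioRNN modules from the earlier example, assumes both networks predict over a shared label vocabulary, and treats the choice of cross-entropy, KL divergence, and the weighting factor as assumptions of this sketch.

import torch
import torch.nn.functional as F

def train_step(cnn, rnn, optimizer, frames, audio, real_tag_cnn, real_tag_rnn,
               assoc_weight: float = 0.5):
    # One joint update of both networks on a single batch of training video samples.
    optimizer.zero_grad()

    train_tag_cnn = cnn(frames)   # training tag predicted from the video frames
    train_tag_rnn = rnn(audio)    # training tag predicted from the audio sequence

    # Difference between each training tag and its associated real tag.
    loss_sup = (F.cross_entropy(train_tag_cnn, real_tag_cnn)
                + F.cross_entropy(train_tag_rnn, real_tag_rnn))

    # Difference between the two training tags themselves, which couples the two
    # networks because the real tags are associated (modeled here as the KL
    # divergence between the two predicted distributions, an assumption of this sketch).
    loss_assoc = F.kl_div(F.log_softmax(train_tag_cnn, dim=-1),
                          F.softmax(train_tag_rnn, dim=-1),
                          reduction="batchmean")

    loss = loss_sup + assoc_weight * loss_assoc
    loss.backward()
    optimizer.step()
    return loss.item()

A single optimizer covering both parameter sets, for example torch.optim.Adam(list(cnn.parameters()) + list(rnn.parameters())), would then be stepped repeatedly over the training video samples until the target convolutional neural network and the target recurrent neural network are obtained.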

Description

Technical Field

[0001] The present application relates to the technical field of deep learning, and in particular to a model training method, device, equipment, and storage medium.

Background Technique

[0002] In practical applications, some features in a video can be recognized by a convolutional neural network, while other features in the video can be recognized by a recurrent neural network. In the related art, according to the features to be recognized, either the convolutional neural network or the recurrent neural network is selected on its own to recognize the video.

[0003] However, many features in a video are related, and recognizing one feature in isolation while ignoring the features associated with it has a negative effect on recognition. To avoid this, a convolutional neural network and a recurrent neural network can be used at the same time to recognize the associated features, and how to train the convolutional neural network and the recurrent neural network in this scenario has become an urgent problem to be solved.


Application Information

IPC (8): G06K 9/00; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06V 20/40; G06N 3/045
Inventors: 崔志佳, 范泽华
Owner: GUANGDONG OPPO MOBILE TELECOMM CORP LTD