
RNN-based multi-task learning method

A multi-task learning technology in the field of RNN-based multi-task learning, which addresses problems such as gradient vanishing and complex model structure, and achieves the effects of mitigating gradient vanishing and realizing information sharing between tasks.

Pending Publication Date: 2018-06-22
HRG INT INST FOR RES & INNOVATION

AI Technical Summary

Problems solved by technology

Most existing multi-task learning methods have complex structures, such as LSTM models, and are prone to gradient vanishing during backpropagation. On this basis, the present invention provides an RNN-based multi-task learning method. The RNN uses a GRU structure, which effectively prevents the gradient-vanishing problem and is simpler than the LSTM structure, making the obtained features more accurate.
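To illustrate why a GRU mitigates gradient vanishing, the following is a minimal sketch of a single GRU cell in NumPy. The shapes, parameter names, and initialization are illustrative assumptions, not taken from the patent; the point is the update-gate structure, where the `(1 - z) * h_prev` term gives gradients a near-identity path through time that a plain RNN lacks.

```python
# Minimal GRU cell in NumPy. The update gate z makes the new state a convex
# combination of the old state and a candidate; the (1 - z) * h_prev path is
# what mitigates gradient vanishing relative to a vanilla RNN, with fewer
# gates/parameters than an LSTM. All names and shapes here are illustrative.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step. x: (input_dim,), h_prev: (hidden_dim,)."""
    Wz, Uz, bz = params["z"]   # update gate
    Wr, Ur, br = params["r"]   # reset gate
    Wh, Uh, bh = params["h"]   # candidate state
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)
    r = sigmoid(Wr @ x + Ur @ h_prev + br)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)
    # Convex combination: gradients flow through (1 - z) * h_prev largely
    # unattenuated, which is the gradient-vanishing "shortcut".
    return (1.0 - z) * h_prev + z * h_tilde

rng = np.random.default_rng(0)
D, H = 4, 3  # input dim, hidden dim (arbitrary for the sketch)
params = {k: (rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H)), np.zeros(H))
          for k in ("z", "r", "h")}
h = np.zeros(H)
for t in range(5):                      # run a short sequence
    h = gru_cell(rng.normal(size=D), h, params)
print(h.shape)  # (3,)
```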




Embodiment Construction

[0024] In order to make the technical solution of the present invention clearer, it will be further described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.

[0025] Figure 2 is a schematic diagram of generating the predicted label vector during RNN multi-task learning based on public-feature compensation in the present invention. The specific method is as follows:

[0026] Define X_r = {x&lt;r,i&gt;, i = 1, ..., N_r} as the samples under task r, where N_r denotes the number of samples and M_r denotes the dimensionality of the samples. We assume the same number of samples for each task, so N_r is denoted by N. The samples of the different views in each task are then represented as follows:

[0027] X_r = {x&lt;r,1&gt;, x&lt;r,2&gt;, ..., x&lt;r,N&gt;}, r = 1, ..., R

[0028] We divide the samples into two parts, one with N_l labeled samples for ...
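The per-task data layout described above can be sketched as follows. This is an illustrative construction, assuming R tasks with N samples each, task r having dimensionality M_r, and the first N_l samples of each task carrying labels; all variable names and the random data are assumptions, not from the patent.

```python
# Illustrative layout of the per-task samples: R tasks, N samples per task,
# task r with dimensionality M[r]; the first N_l samples of each task are
# labeled, the remainder unlabeled. Data is random for demonstration only.
import numpy as np

rng = np.random.default_rng(42)
R, N, N_l = 3, 10, 6                 # tasks, samples per task, labeled count
M = [5, 4, 6]                        # per-task sample dimensionality M_r

# X[r] has shape (N, M[r]); y[r] holds labels for the first N_l samples only.
X = [rng.normal(size=(N, M[r])) for r in range(R)]
y = [rng.integers(0, 2, size=N_l) for r in range(R)]

labeled = [(X[r][:N_l], y[r]) for r in range(R)]
unlabeled = [X[r][N_l:] for r in range(R)]
print(labeled[0][0].shape, unlabeled[0].shape)  # (6, 5) (4, 5)
```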


Abstract

The invention provides an RNN-based multi-task learning method comprising the following steps:

S1. Initialize the system parameters θ = (W, U, B, V).
S2. Input samples x&lt;1,i&gt; ... x&lt;R,i&gt;, learn the public information X_co, and compensate the public information into the training of the single tasks.
S3. Calculate the predicted label vector output of each neural network and calculate the loss L_r of task &lt;r,i&gt;.
S4. Solve the gradient of θ = (W, U, B, V) according to gradient descent and the BPTT algorithm, and determine the gradient of task r with respect to the public information X_co.
S5. Determine a learning rate η and update each weight gradient: W = W − η·ΔW.
S6. Judge whether the neural network has reached stability; if yes, go to step S7; if not, return to step S2 and iteratively update the model parameters.
S7. Output the optimized model.

According to the invention, the RNN effectively exploits the public features among multiple tasks and feeds them into the learning of the single tasks, so that information sharing is realized. In addition, by introducing a GRU structure into the RNN, the gradient-vanishing problem is effectively mitigated.
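The control flow of steps S1–S7 can be sketched as below. This is a deliberately simplified stand-in: the patent's method uses GRU-based RNNs trained with BPTT, whereas here per-task linear predictors and plain gradient descent on a squared loss take their place, so only the S1–S7 skeleton (shared public information compensated into each task, per-task losses, joint gradient updates, iteration to stability) is faithful to the abstract. All names and dimensions are assumptions.

```python
# Hedged skeleton of steps S1-S7 with simplified models: per-task linear
# predictors plus a shared "public information" scalar x_co that is
# compensated into every task's input as an extra feature column. The real
# method uses GRU-based RNNs and BPTT; this only mirrors the control flow.
import numpy as np

rng = np.random.default_rng(0)
R, N, M = 3, 50, 4                       # tasks, samples per task, feature dim
X = [rng.normal(size=(N, M)) for _ in range(R)]
true_w = [rng.normal(size=M + 1) for _ in range(R)]
y = [X[r] @ true_w[r][:M] + true_w[r][M] * 0.5 for r in range(R)]

W = [np.zeros(M + 1) for _ in range(R)]  # S1: initialize parameters
x_co = np.ones(1)                        # shared public information
eta = 0.05                               # S5: learning rate

for epoch in range(500):                 # S6: iterate until stable
    grad_co = 0.0
    for r in range(R):
        # S2: compensate the public information into task r's input
        inp = np.hstack([X[r], np.full((N, 1), x_co[0])])
        pred = inp @ W[r]                # S3: predicted label vector
        err = pred - y[r]                # S3: per-task residual (loss term)
        grad_w = inp.T @ err / N         # S4: gradient of task r's parameters
        grad_co += (err * W[r][M]).mean()  # S4: gradient w.r.t. x_co
        W[r] -= eta * grad_w             # S5: update weights
    x_co -= eta * grad_co / R            # S5: update shared information

final_loss = np.mean([(np.hstack([X[r], np.full((N, 1), x_co[0])]) @ W[r]
                       - y[r]) ** 2 for r in range(R)])
print(round(float(final_loss), 6))       # S7: converged model's mean error
```

The design point mirrored here is that every task's gradient contributes to the shared X_co (averaged in S4/S5), which is how information sharing across tasks is realized.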

Description

Technical Field

[0001] The invention relates to the field of neural-network multi-task learning, and in particular to an RNN-based multi-task learning method.

Background Technique

[0002] In real-world applications, different tasks can be related to each other in different ways, and multi-task learning has advantages over single-task learning. For example, when only a small amount of data is available for each task, multi-task learning can combine the data of multiple related tasks for learning. Tasks may also be linked by some underlying common representation: in object recognition, for instance, the first few stages of the human visual system learn a common set of features to represent all objects. Most previous multi-task learning methods model the relationship between tasks through a functional concept.

[0003] For the modeling of sequence data, models based on neural networks ha...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/04; G06N3/08
CPC: G06N3/084; G06N3/044; G06N3/045
Inventors: 王磊, 翟荣安, 王纯配, 顾仓, 王毓, 刘晶晶, 王飞, 于振中, 李文兴
Owner HRG INT INST FOR RES & INNOVATION