
Multilayer neural network language model training method and device based on knowledge distillation

A multi-layer neural network and language model technology, applied to biological neural network models, neural learning methods, knowledge representation, and related fields. It addresses problems such as large, complex network structures and slow training, achieving fast training speed, good encoding ability, and improved accuracy.

Active Publication Date: 2020-09-01
HUAIYIN INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

However, most current pre-trained text language models suffer from shortcomings such as large, complex network structures and slow training speed.



Detailed Description of the Embodiments

[0026] In order to clearly illustrate the technical solution of the present invention, the related technologies involved are briefly described below.

[0027] BERT (Bidirectional Encoder Representations from Transformers) language model: BERT uses a masked language model to achieve bidirectionality, demonstrating the importance of bidirectionality for language-representation pre-training. The BERT model is a truly bidirectional language model, in which every word can use its left and right context simultaneously. BERT was the first fine-tuning-based model to achieve the best results on both sentence-level and token-level natural language tasks, showing that pre-trained representations can reduce the need to design task-specific model architectures. BERT achieved the best results on 11 natural language processing tasks. And in BERT's extens...
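
To make the masked-model idea concrete, the following is a minimal sketch, assuming a toy vocabulary and standard PyTorch modules rather than the actual BERT implementation; it only illustrates how a transformer encoder lets every position attend to both left and right context when predicting a masked token, and all sizes and the mask id are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the patent's teacher model uses six transformer layers.
VOCAB, HIDDEN, LAYERS, HEADS, MASK_ID = 30000, 768, 6, 12, 103

class TinyMaskedLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=HEADS, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=LAYERS)
        self.lm_head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, token_ids):
        # Every position attends to both left and right context (bidirectional).
        h = self.encoder(self.embed(token_ids))
        return self.lm_head(h)               # logits over the vocabulary

model = TinyMaskedLM()
tokens = torch.randint(0, VOCAB, (1, 16))    # a toy sentence of 16 token ids
masked = tokens.clone()
masked[0, 5] = MASK_ID                       # hide one token
logits = model(masked)
# The masked position is predicted from the surrounding (two-sided) context.
loss = nn.functional.cross_entropy(logits[0, 5:6], tokens[0, 5:6])
```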



Abstract

The invention discloses a multilayer neural network language model training method and device based on knowledge distillation. The method first constructs a BERT language model and a multi-layer BiLSTM model to serve as the teacher model and the student model, respectively; the constructed BERT language model comprises six transformer layers, and the multi-layer BiLSTM model comprises three BiLSTM layers. Then, after the text corpus set is preprocessed, the BERT language model is trained to obtain a trained teacher model; the preprocessed text corpus set is input into the multi-layer BiLSTM model to train the student model based on knowledge distillation, and different spatial representations are computed through linear transformations when learning the embedding layer, hidden layer, and output layer of the teacher model. Based on the trained student model, text can be converted into vectors, and a downstream network can then be trained to classify the text more effectively. The method can effectively improve text pre-training efficiency and the accuracy of text classification tasks.
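
The sketch below is one possible reading of this distillation setup, not the patent's exact implementation: a three-layer BiLSTM student is trained against a six-layer-transformer teacher, with linear projections aligning the student's embedding, hidden, and output representations with the teacher's. The dimensions, module names, and the use of simple MSE losses are illustrative assumptions.

```python
import torch
import torch.nn as nn

HIDDEN_T, HIDDEN_S, VOCAB = 768, 300, 30000   # assumed teacher/student sizes

class BiLSTMStudent(nn.Module):
    """Three-layer BiLSTM student (illustrative)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN_S)
        self.bilstm = nn.LSTM(HIDDEN_S, HIDDEN_S, num_layers=3,
                              bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * HIDDEN_S, VOCAB)

    def forward(self, ids):
        emb = self.embed(ids)
        hid, _ = self.bilstm(emb)
        return emb, hid, self.out(hid)

# Linear transformations that map the student's spaces into the teacher's spaces,
# so the embedding, hidden, and output layers can be compared directly.
proj_emb = nn.Linear(HIDDEN_S, HIDDEN_T)
proj_hid = nn.Linear(2 * HIDDEN_S, HIDDEN_T)
mse = nn.MSELoss()

def distillation_loss(teacher_emb, teacher_hid, teacher_logits, ids, student):
    s_emb, s_hid, s_logits = student(ids)
    return (mse(proj_emb(s_emb), teacher_emb) +
            mse(proj_hid(s_hid), teacher_hid) +
            mse(s_logits, teacher_logits))
```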

Description

technical field
[0001] The invention relates to the fields of unsupervised text pre-training and deep learning, and in particular to a multi-layer neural network language model training method and device based on knowledge distillation.
Background technique
[0002] With the rapid growth of online text data on the Internet, language models play a vital role in information processing. They are a key technology for processing large-scale text and drive information processing toward automation. A language model is simply a probability distribution over a sequence of words. Building a reasonable pre-trained language model can solve many current text-processing problems, such as text classification, text similarity, and reading comprehension, and can make efficient use of the large amount of text corpus data on the Internet to provide people with more convenient services. However, most of the current text pre-train...
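
As a small illustration of the statement that a language model is a probability distribution over a sequence of words, the toy numbers below apply the usual chain-rule factorization; the sentence and probabilities are invented for illustration only.

```python
import math

# P(w1..wn) = P(w1) * P(w2 | w1) * ... * P(wn | w1..wn-1)
# Toy conditional probabilities for the sentence "the model works".
cond_probs = [0.2,   # P("the")
              0.1,   # P("model" | "the")
              0.05]  # P("works" | "the model")
log_prob = sum(math.log(p) for p in cond_probs)
print(math.exp(log_prob))  # joint probability of the sequence: 0.001
```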

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/35; G06F40/289; G06F40/30; G06F40/211; G06N3/04; G06N3/08; G06N5/02
CPC: G06F16/355; G06F40/289; G06F40/30; G06F40/211; G06N3/049; G06N3/08; G06N5/02; G06N3/045
Inventor 高尚兵李文婷李伟王通阳姚宁波周泓朱全银相林于坤陈晓兵张正伟
Owner HUAIYIN INSTITUTE OF TECHNOLOGY