
Multi-language model compression method and device based on knowledge distillation

A multi-language model technology applied in the field of machine learning, which addresses problems such as a complex model structure and an excessive number of model parameters, and achieves fewer model parameters, a simplified structure, and shorter training time.

Pending Publication Date: 2020-04-24
Applicant: 北京知道智慧信息技术有限公司

AI Technical Summary

Problems solved by technology

[0003] The purpose of the embodiments of the present application is to provide a multilingual model compression method and device based on knowledge distillation, an electronic device and a storage medium, so as to address the above-mentioned problem of a complex model structure and excessive model parameters.




Detailed Description of Embodiments

[0026] The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.

[0027] At present, there are two ways to train a multilingual model. The first is to prepare a large amount of corpus in multiple languages to form a large vocabulary, so that the model learns semantic representations of multiple languages in a single training process. The second is, given a model that has already been trained on one language, to dynamically add the vocabulary of a new language: the new vocabulary is mapped to the weight matrix of the hidden layer, the weight matrix of the original model is retained, a weight matrix corresponding to the new vocabulary is added and initialized, and the model is then trained as a language model on the corpus of the new language. However, both of the above methods increase the parameters of the model to be trained, making the model structure complex and the number of model parameters excessive.
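As a rough illustration of the second approach described above, the sketch below shows how an embedding (weight) matrix can be extended for a new language's vocabulary while retaining the original model's weights. This is a minimal sketch assuming a PyTorch embedding layer; the function name `extend_embedding` and the vocabulary sizes are hypothetical and not taken from the patent.

```python
import torch
import torch.nn as nn

def extend_embedding(old_embedding: nn.Embedding, num_new_tokens: int) -> nn.Embedding:
    # Keep the weight rows of the original vocabulary and append newly
    # initialized rows for the added language's vocabulary.
    old_vocab_size, hidden_dim = old_embedding.weight.shape
    new_embedding = nn.Embedding(old_vocab_size + num_new_tokens, hidden_dim)
    with torch.no_grad():
        # Retain the weight matrix of the original model.
        new_embedding.weight[:old_vocab_size] = old_embedding.weight
        # Initialize the weight matrix corresponding to the new vocabulary.
        nn.init.normal_(new_embedding.weight[old_vocab_size:], mean=0.0, std=0.02)
    return new_embedding

# Hypothetical sizes: 30000 original tokens, 21000 tokens in the new language.
old_emb = nn.Embedding(30000, 768)
new_emb = extend_embedding(old_emb, num_new_tokens=21000)
print(new_emb.weight.shape)  # torch.Size([51000, 768])
```

The extended model is then further trained as a language model on the new language's corpus, which is exactly why the parameter count grows with each added language.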



Abstract

The invention provides a multi-language model compression method and device based on knowledge distillation, an electronic device and a storage medium, and belongs to the technical field of machine learning. The method comprises the following steps: taking N trained language models of different languages as teacher models; and training a student model using the teacher models through a knowledge distillation algorithm, wherein the vocabulary of the student model comprises all of the vocabularies of the teacher models. In the embodiment of the invention, the N trained language models of different languages are taken as teacher models, and the student model is trained using the teacher models through the knowledge distillation algorithm. Compared with multi-language models in the prior art, the student model obtained by the training has fewer model parameters and a simplified structure, and the performance and effect of the model can be guaranteed even with fewer parameters.
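To make the abstract concrete, the sketch below shows one possible shape of such a distillation step in PyTorch. It is a minimal sketch under stated assumptions, not the patent's actual implementation: the function `distillation_step`, the soft/hard loss weighting, and the assumption that each teacher's logits are already aligned to the student's union vocabulary are all illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teachers, batch, language_id, optimizer,
                      temperature=2.0, alpha=0.5):
    # Pick the teacher model trained on the language of this batch.
    teacher = teachers[language_id]
    with torch.no_grad():
        # Soft targets from the teacher; assumed to be already mapped
        # onto the student's union vocabulary.
        teacher_logits = teacher(batch["input_ids"])
    student_logits = student(batch["input_ids"])

    # Soft-label loss: match the teacher's softened output distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-label loss on the language-modeling targets.
    hard_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        batch["labels"].view(-1),
    )

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In each step, a batch drawn from the corpus of language i is distilled against the i-th teacher, so a single student model with one shared vocabulary absorbs all N teachers instead of carrying N separate parameter sets.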

Description

Technical Field

[0001] The present application relates to the technical field of machine learning, and in particular to a multilingual model compression method and device based on knowledge distillation, an electronic device and a storage medium.

Background

[0002] In recent years, dynamic word-vector training models based on language models have performed well in NLP (Natural Language Processing) tasks. When multiple languages are required in some scenarios, multilingual models need to be used. At present, there are two ways to train a multilingual model. The first is to prepare a large amount of corpus in multiple languages to form a large vocabulary, so that the model can learn semantic representations of multiple languages in a single training process. The second is, given a model that has already been trained on one language, to dynamically add the vocabulary of a new language by mapping it to the weight matrix of the hidden layer, retaining the weight matrix of the original model, and adding and initializing a weight matrix corresponding to the new vocabulary, after which the model is trained on the corpus of the new language.


Application Information

IPC(8): G06N20/00
CPC: G06N20/00
Inventor: 杨焱麒
Owner: 北京知道智慧信息技术有限公司