
Compression method and device of neural network model

A neural network model compression method and device, applied in the field of neural network model compression, which can solve the problems of large parameter counts, parameter redundancy, and the large size of translation models, and achieve the effect of reducing model size and parameter count.

Pending Publication Date: 2021-10-22
BEIJING KINGSOFT DIGITAL ENTERTAINMENT CO LTD

AI Technical Summary

Problems solved by technology

[0003] The parameters of the Transformer translation model are concentrated mainly in the embedding layer. To improve translation performance, the vocabulary in the embedding layer typically contains tens of thousands of entries, and the parameter count grows linearly with vocabulary size. In addition, each feed-forward network layer consists of two linear layers in a bottleneck structure; across the model, the feed-forward network layers account for roughly 25 million parameters. The Transformer translation model therefore has a huge number of parameters, with considerable redundancy among them, resulting in a very large model size.
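As a rough sanity check on these figures, the parameter counts can be reproduced with standard Transformer-base hyperparameters. The concrete numbers below are assumptions chosen for illustration, not values taken from the patent:

```python
# Hypothetical Transformer-base hyperparameters (assumed for illustration;
# the patent does not specify them).
vocab_size = 37000   # tens of thousands of vocabulary entries
d_model    = 512     # embedding / hidden dimension
d_ff       = 2048    # feed-forward inner ("bottleneck") dimension
n_layers   = 6       # encoding layers (and likewise 6 decoding layers)

# Embedding layer: one row of d_model weights per vocabulary entry,
# so parameters grow linearly with vocabulary size.
embedding_params = vocab_size * d_model

# Each feed-forward network layer is two linear maps:
# d_model -> d_ff and d_ff -> d_model (biases ignored).
ffn_per_layer = 2 * d_model * d_ff
ffn_total = 2 * n_layers * ffn_per_layer   # encoder + decoder stacks

print(f"embedding: {embedding_params:,}")       # embedding: 18,944,000
print(f"feed-forward total: {ffn_total:,}")     # feed-forward total: 25,165,824
```

With these assumed sizes, the feed-forward layers total about 25 million parameters, consistent with the figure cited above, and the embedding table alone contributes a comparable amount.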




Embodiment Construction

[0095] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the application. However, the present application can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from the essence of the present application. Therefore, the present application is not limited to the specific implementations disclosed below.

[0096] Terms used in one or more embodiments of the present application are for the purpose of describing specific embodiments only, and are not intended to limit the one or more embodiments of the present application. As used in one or more embodiments of this application and the appended claims, the singular forms "a", "an", and "the" are also intended to include the plural forms unless the context clearly dictates otherwise. It should also be understood that the term "and/or" used in one or more embodiments of th...



Abstract

The invention provides a compression method and device for a neural network model. The compression method comprises the following steps: acquiring a parameter matrix in an embedding layer of the neural network model; receiving an order-raising instruction, and raising the order of the parameter matrix according to the instruction to obtain a high-order tensor; decomposing the high-order tensor to obtain a decomposition result; and updating the parameter matrix according to the decomposition result to obtain a compressed neural network model. While preserving the performance of the neural network model, the method effectively reduces the parameter count of the embedding layer and significantly reduces the size of the model.
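The pipeline in the abstract — reshape the embedding parameter matrix into a high-order tensor ("order raising"), decompose it, and keep only the factors — can be sketched as follows. The patent does not name the decomposition it uses, so this sketch assumes a tensor-train (TT-SVD) factorization; the vocabulary size, embedding dimension, axis factorization, rank, and function names are all illustrative assumptions:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """TT-SVD sketch: split a high-order tensor into a chain of 3-way cores
    using sequential truncated SVDs."""
    shape = tensor.shape
    cores, rank = [], 1
    C = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        C = C.reshape(rank * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        C = S[:r, None] * Vt[:r]   # carry the truncated remainder forward
        rank = r
    cores.append(C.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a dense tensor (for checking)."""
    T = cores[0]
    for core in cores[1:]:
        r = core.shape[0]
        T = T.reshape(-1, r) @ core.reshape(r, -1)
    return T.reshape(tuple(c.shape[1] for c in cores))

# Hypothetical small embedding: 4096-word vocabulary, dimension 64.
W = np.random.randn(4096, 64)
# "Order raising": reshape the 2-D parameter matrix into a 6th-order tensor
# (4096 = 16*16*16, 64 = 4*4*4; this factorization of the axes is a choice).
T = W.reshape(16, 16, 16, 4, 4, 4)
cores = tt_decompose(T, max_rank=8)
print(W.size, sum(c.size for c in cores))  # prints: 262144 2576
```

Storing the cores instead of the dense matrix drops the embedding parameters from 262,144 to 2,576 in this toy setting, at the cost of a rank-truncation error; the decomposition is exact when `max_rank` is large enough that no singular values are discarded.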

Description

Technical Field

[0001] The present application relates to the field of computer technology, and in particular to a neural network model compression method and device, computing equipment, and a computer-readable storage medium.

Background Technique

[0002] With the development of computer technology, neural network models are increasingly widely used; the Transformer translation model is one example. The Transformer translation model includes an encoder and a decoder, each of which contains an embedding layer and six network layers of identical structure: the encoder includes six encoding layers, and the decoder includes six decoding layers. Each encoding or decoding layer includes a self-attention layer and a feed-forward network layer; the six encoding layers are connected in series, as are the six decoding layers.

[0003] The parameter quantity in the Transformer translation model is mainly concentrated in the embedding layer. ...

Claims


Application Information

IPC(8): G06N3/08, G06N3/04, G06F40/58
CPC: G06N3/082, G06N3/045
Inventors: 李长亮 (Li Changliang), 王怡然 (Wang Yiran), 郭馨泽 (Guo Xinze)
Owner BEIJING KINGSOFT DIGITAL ENTERTAINMENT CO LTD