
Gradient Compression Method for Distributed DNN Training in Edge Computing Environment

A technology combining edge computing and a gradient compression method, applied in the field of edge computing, which addresses the problems of poor communication efficiency and poorly optimized model accuracy, and achieves the effect of reducing communication cost.

Active Publication Date: 2022-06-28
HOHAI UNIV +1

AI Technical Summary

Problems solved by technology

Since quantization and sparsification trade model accuracy for communication efficiency, gradient compression schemes with different degrees of quantization and sparsification differ significantly in that trade-off, and a poorly chosen scheme optimizes model accuracy poorly.




Embodiment Construction

[0047] The present invention will be further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments serve only to illustrate the present invention and not to limit its scope; equivalent modifications likewise fall within the scope defined by the appended claims of this application.

[0048] This embodiment presents an adaptive sparse ternary gradient compression method for distributed DNN training in a multi-edge computing environment. By establishing a selection criterion based on the number of gradients, designing an entropy-based sparsification threshold selection algorithm, introducing gradient residuals and momentum correction to offset the loss of model accuracy caused by sparse compression, and combining ternary gradient quantization with lossless coding techniques, the method effectively reduces the per-iteration communication cost of edge distributed training.
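As a rough sketch of how these steps could fit together, the Python snippet below combines a layer-screening rule based on gradient count, an entropy-based threshold, residual accumulation with momentum correction, and ternary quantization. The patent text does not disclose concrete formulas, so every function and parameter here (should_compress, entropy_threshold, num_bins, beta, the keep-fraction bounds) is a hypothetical illustration rather than the claimed implementation.

```python
import numpy as np

def should_compress(grad, min_params=10_000):
    """Hypothetical 'selection standard based on the number of gradients':
    only layers with enough parameters are worth sparsifying; small layers
    (e.g. biases) would be sent uncompressed."""
    return grad.size >= min_params

def entropy_threshold(acc, num_bins=64, keep_bounds=(0.001, 0.1)):
    """Pick the sparsification threshold adaptively from the entropy of the
    gradient-magnitude histogram: higher entropy (magnitudes more evenly
    spread, importance less concentrated) keeps a larger fraction."""
    mags = np.abs(acc).ravel()
    hist, _ = np.histogram(mags, bins=num_bins)
    p = hist[hist > 0] / mags.size
    entropy = -np.sum(p * np.log2(p))            # gradient entropy in bits
    lo, hi = keep_bounds
    keep_frac = lo + (hi - lo) * entropy / np.log2(num_bins)
    return np.quantile(mags, 1.0 - keep_frac)

def compress_layer(grad, state, beta=0.9):
    """One compression step for one layer: momentum correction, residual
    accumulation, entropy-based sparsification, ternary quantization.
    Returns (ternary tensor with entries in {-1, 0, +1}, per-layer scale)."""
    state["velocity"] = beta * state["velocity"] + grad   # momentum correction
    acc = state["residual"] + state["velocity"]           # add carried residual

    thr = entropy_threshold(acc)
    mask = np.abs(acc) > thr

    scale = float(np.abs(acc[mask]).mean()) if mask.any() else 0.0
    ternary = np.where(mask, np.sign(acc), 0.0).astype(np.int8)

    # Untransmitted gradient mass stays local and is retried next iteration.
    state["residual"] = np.where(mask, 0.0, acc)
    state["velocity"] = np.where(mask, 0.0, state["velocity"])
    return ternary, scale
```

Here state is a per-layer dict initialized as {"velocity": np.zeros_like(grad), "residual": np.zeros_like(grad)}, and the receiver would reconstruct the update as scale * ternary.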

[0049] Figure 1 shows an application scenario of the adaptive sparse ternary gradient compression method...



Abstract

The invention discloses a gradient compression method for distributed DNN training in an edge computing environment. The method establishes a selection standard based on the number of gradients and screens the network layers whose gradients meet the model compression standard; evaluates the importance of gradients according to gradient entropy, adaptively selects the gradient sparsification threshold, and sparsely compresses the gradients on the basis of this flexible threshold; accumulates and optimizes the gradient residual according to a gradient residual and momentum correction mechanism, reducing the performance loss of the training model caused by gradient sparsification; quantizes the sparse gradient according to a ternary quantization compression scheme to obtain a sparse ternary tensor; and, according to lossless coding technology, records the distance of the non-zero gradients in the transferred tensor and performs optimized encoding to output the sparse ternary gradient. The sparse ternary gradient compression algorithm based on gradient quantity and gradient entropy of the present invention can adaptively compress the gradients exchanged during the gradient exchange stage of distributed DNN training and effectively improve the communication efficiency of distributed DNN training.
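The final encoding step, recording the distance of the non-zero gradients in the transferred tensor, reads like an index-delta (run-length style) encoding of the sparse ternary tensor. Below is a minimal sketch under that assumption; encode_ternary/decode_ternary and the chosen integer widths are illustrative, not taken from the patent.

```python
import numpy as np

def encode_ternary(ternary):
    """Encode a sparse ternary tensor (entries in {-1, 0, +1}) as the
    distances (gaps) between consecutive non-zero entries plus their signs,
    so only non-zeros are transmitted."""
    flat = ternary.ravel()
    idx = np.flatnonzero(flat)
    gaps = np.diff(idx, prepend=-1).astype(np.uint32)  # distance to previous non-zero
    signs = flat[idx].astype(np.int8)                  # -1 or +1
    return gaps, signs, flat.size

def decode_ternary(gaps, signs, size):
    """Rebuild the dense ternary tensor from (gaps, signs)."""
    flat = np.zeros(size, dtype=np.int8)
    flat[np.cumsum(gaps.astype(np.int64)) - 1] = signs
    return flat

# Round-trip example on a toy ternary gradient pattern.
g = np.array([0, 0, 1, 0, -1, 0, 0, 1], dtype=np.int8)
gaps, signs, n = encode_ternary(g)
assert np.array_equal(decode_ternary(gaps, signs, n), g)
```

A further lossless pass (e.g. entropy coding of the gap stream) would be a natural reading of the "optimized encoding" the abstract mentions, but the patent extract does not specify it.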

Description

Technical field

[0001] The invention relates to a gradient compression method for distributed DNN training in an edge computing environment, in particular to an adaptive sparse ternary gradient compression method for distributed DNN training in an edge computing environment, and belongs to the technical field of edge computing.

Background technique

[0002] With the rapid development of artificial intelligence, deep neural networks (DNNs) are widely used in various intelligent fields, including computer vision, natural language processing, and big data analysis. In every domain, the high accuracy of deep learning comes at the expense of high computational and storage requirements during the training phase. Moreover, DNN models need to iteratively optimize millions of parameters over many epochs, which makes training deep neural network models expensive in both time and computation. Edge computing can meet the training requirements of DNN to a certain extent,...


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06F9/50G06N3/04G06N3/08
CPCG06F9/5072G06N3/08G06N3/045
Inventor 毛莺池吴俊聂华黄建新徐淑芳屠子健戚荣志郭宏乐
Owner HOHAI UNIV