
Distributed training gradient compression acceleration method based on AllReduce

A distributed-training gradient compression technique applied in the field of deep learning

Pending Publication Date: 2021-03-19
Assignee: BEIJING UNISOUND INFORMATION TECH +1

AI Technical Summary

Problems solved by technology

[0003] The present invention provides an AllReduce-based distributed training gradient compression acceleration method, which can alleviate the synchronous-communication bandwidth problem that arises when training models with large-scale parameters.




Embodiment Construction

[0018] The principles and features of the present invention are described below in conjunction with the accompanying drawings; the examples given are intended only to explain the present invention, not to limit its scope.

[0019] An embodiment of the present invention provides an AllReduce-based distributed training gradient compression acceleration method, which is explained in detail below.

[0020] Figure 1 shows a distributed deep gradient compression training architecture with a Params Server (PS) structure, and Figure 2 shows the AllReduce-based distributed deep gradient compression training framework of the embodiment of the present invention. In the PS architecture, the GPUs of each machine form a closed loop to transmit the gradients after intra-node compression; there is no communication connection between the working machines, and the gradients after inter-node compression are transmitted between the w...
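The closed-loop (ring) gradient exchange described above can be illustrated with a minimal NumPy sketch. This is not code from the patent: the function name, the chunking scheme, and the sequential simulation of simultaneous sends are illustrative assumptions; it shows only the standard reduce-scatter / all-gather pattern that ring AllReduce uses.

```python
import numpy as np

def ring_allreduce(worker_grads):
    """Sum equal-length gradient arrays, one per worker, over a ring.

    Each gradient is split into n chunks. In the reduce-scatter phase
    every worker ends up holding the full sum of one chunk; in the
    all-gather phase the reduced chunks circulate until every worker
    holds the complete summed gradient.
    """
    n = len(worker_grads)
    chunks = [list(np.array_split(g.astype(np.float64), n))
              for g in worker_grads]

    # Reduce-scatter: at each step, worker i sends chunk (i - step) % n
    # to its right neighbour, which adds it to its own copy. Payloads
    # are snapshotted first to mimic simultaneous sends.
    for step in range(n - 1):
        sends = [(i, (i - step) % n) for i in range(n)]
        payloads = [chunks[i][c].copy() for i, c in sends]
        for (i, c), payload in zip(sends, payloads):
            chunks[(i + 1) % n][c] += payload

    # Now worker i holds the full sum of chunk (i + 1) % n.
    # All-gather: circulate the reduced chunks so everyone has all of them.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n) for i in range(n)]
        payloads = [chunks[i][c].copy() for i, c in sends]
        for (i, c), payload in zip(sends, payloads):
            chunks[(i + 1) % n][c] = payload

    return [np.concatenate(ch) for ch in chunks]
```

With n workers and a gradient of G bytes, each link carries roughly 2(n-1)/n · G bytes per iteration, nearly independent of n, which is the property that avoids the central-server bandwidth bottleneck.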



Abstract

The invention relates to a distributed training gradient compression acceleration method based on AllReduce. For intra-node communication, FP32 gradients are converted to FP16 and compressed with an EF-SGD method, which loses less gradient information than sparsification-based methods; and compared with a Params Server communication structure, the AllReduce architecture eliminates the bandwidth bottleneck.
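As a rough illustration of the abstract's two ingredients, lossy FP16 conversion plus error feedback, the sketch below stores the quantization error as a residual and adds it back before the next compression round. The class name and interface are hypothetical; the patent does not publish its exact EF-SGD formulation, so this shows only the generic error-feedback pattern.

```python
import numpy as np

class ErrorFeedbackFP16:
    """Generic error-feedback compressor (hypothetical interface).

    The FP32 -> FP16 cast is lossy; instead of discarding the rounding
    error, it is kept as a residual and added to the next gradient, so
    the error is corrected over later steps rather than lost. This is
    the core idea behind EF-SGD-style compression.
    """

    def __init__(self, shape):
        self.residual = np.zeros(shape, dtype=np.float32)

    def compress(self, grad_fp32):
        corrected = grad_fp32.astype(np.float32) + self.residual
        compressed = corrected.astype(np.float16)   # lossy cast
        self.residual = corrected - compressed.astype(np.float32)
        return compressed
```

A value exactly representable in FP16 (such as 1.0) leaves a zero residual, while a value like 0.1 leaves its rounding error behind to be folded into the next round.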

Description

Technical field

[0001] The invention relates to the technical field of deep learning, and in particular to an AllReduce-based distributed training gradient compression acceleration method.

Background technique

[0002] Existing methods have several problems, whether in centralized distributed training based on the parameter server approach or in the selection of partial gradient values based on sparsification. Sparsification-based methods lose a relatively large amount of gradient information; using the same gradient compression method both intra-node and inter-node increases that loss further; and the Params Server communication structure itself has a bandwidth bottleneck compared with AllReduce.

Contents of the invention

[0003] The invention provides an AllReduce-based distributed training gradient compression acceleration method, which can solve the problem of synchronous communication bandwidth for training large-scale mo...
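The bandwidth argument in [0002] can be made concrete with a back-of-the-envelope comparison (the formulas and numbers below are mine, not the patent's): in a Params Server setup the server's link must carry every worker's gradient up and the updated parameters back, so its traffic grows linearly with the number of workers, whereas in ring AllReduce the traffic per link stays below twice the gradient size regardless of worker count.

```python
def per_step_traffic(n_workers: int, grad_bytes: int):
    """Per-iteration traffic on the busiest link (dense synchronous
    training assumed, no compression or overlap).

    Params Server: the server link uploads and downloads a full
    gradient/parameter copy with every worker.
    Ring AllReduce: each link carries 2*(n-1)/n of the gradient size.
    """
    ps_server_link = 2 * n_workers * grad_bytes
    ring_per_link = 2 * (n_workers - 1) / n_workers * grad_bytes
    return ps_server_link, ring_per_link

# Example: 8 workers, 100 MB of gradients per step.
ps, ring = per_step_traffic(8, 100 * 1024 * 1024)
```

Here the PS server link carries 1.6 GB per step while each AllReduce link carries about 175 MB, and the gap widens as workers are added.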

Claims


Application Information

Patent Timeline
no application
IPC(8): G06N3/04, G06N3/08
CPC: G06N3/04, G06N3/08
Inventors: 谢远东, 梁家恩
Owner BEIJING UNISOUND INFORMATION TECH