Graph execution pipeline parallel method and device for neural network model calculation

A neural network model execution technology in the field of deep learning. It addresses the high execution cost and low resource utilization of existing graph execution systems, which limit the speedup ratio and throughput of distributed systems, and achieves easy distributed training with a low barrier to use.

Pending Publication Date: 2022-05-27
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

[0002] With the rapid development of industrial applications of artificial intelligence, the demand for large models in practical application scenarios has become increasingly urgent, and the structure of machine learning workloads has grown increasingly complex, making graph execution for large-model computation very expensive. Moreover, most existing graph execution methods for neural network model computation are based on synchronous approaches, resulting in low resource utilization of the entire graph execution system, which limits the speedup ratio and throughput of distributed systems.



Examples


Embodiment

[0073] Referring to Figure 4, the physical computation graph is constructed as forward operator x -> forward operator y -> forward operator z and backward operator Z -> backward operator Y -> backward operator X. For each operator, an execution body that runs that operator's kernel function is created, which yields the execution computation graph execution body a -> execution body b -> execution body c -> execution body C -> execution body B -> execution body A. The execution bodies run the entire computation graph in parallel.
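The construction can be pictured with a minimal Python sketch; the class name ExecutionBody, the stub kernels, and the wiring code below are our illustrative assumptions, not the patent's implementation:

```python
# Sketch of [0073]: each operator of the physical computation graph gets
# one execution body that runs that operator's kernel function.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ExecutionBody:
    name: str                                    # e.g. "a" for forward op x
    kernel: Callable                             # the operator's kernel fn
    downstream: Optional["ExecutionBody"] = None  # next body in the chain

# physical graph: forward x -> y -> z, then backward Z -> Y -> X
operators = ["x", "y", "z", "Z", "Y", "X"]
bodies = {}
prev = None
for op, label in zip(operators, "abcCBA"):
    # stub kernel stands in for the operator's real kernel function
    body = ExecutionBody(name=label, kernel=lambda t, op=op: t)
    if prev is not None:
        prev.downstream = body                   # wire a->b->c->C->B->A
    bodies[label] = body
    prev = body
```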

[0074] At time T1:

[0075] The first batch of data is input: execution body a runs the kernel function of forward operator x and writes the output tensor of the result into free memory block r11.

[0076] Execution bodies b, c, C, B, and A have no readable input tensor data, so they remain in the waiting state. ...
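The T1 step can be reproduced with a small sketch that extends the execution-body idea with free memory blocks and waiting semantics; the names (MemBlock, PipelinedBody, step) and the block-recycling scheme are our assumptions for illustration:

```python
# Sketch of [0074]-[0076]: a body fires only when an input tensor is
# readable and a free memory block is available; otherwise it waits.
from collections import deque

class MemBlock:
    """A reusable output slot owned by one execution body."""
    def __init__(self, name, owner):
        self.name, self.owner, self.tensor = name, owner, None

class PipelinedBody:
    def __init__(self, name, kernel, block_names):
        self.name, self.kernel = name, kernel
        self.free = deque(MemBlock(n, self) for n in block_names)
        self.inbox = deque()   # readable inputs: raw batches or MemBlocks
        self.downstream = None

    def step(self):
        if not self.inbox:
            return f"{self.name}: waiting (no readable input tensor)"
        if not self.free:
            return f"{self.name}: waiting (no free memory block)"
        src = self.inbox.popleft()
        data = src.tensor if isinstance(src, MemBlock) else src
        blk = self.free.popleft()
        blk.tensor = self.kernel(data)        # run the operator's kernel
        if isinstance(src, MemBlock):         # input consumed: hand the
            src.tensor = None                 # block back to its producer
            src.owner.free.append(src)
        if self.downstream is not None:
            self.downstream.inbox.append(blk)  # publish the output tensor
        else:
            blk.tensor = None                 # sink stage: recycle block
            self.free.append(blk)
        return f"{self.name}: wrote result into {blk.name}"

# T1: only body a has readable input, so a runs forward operator x's
# kernel and writes into free block r11; b, c, C, B, A report waiting.
chain = [PipelinedBody(n, lambda t: t, [f"r{i+1}1", f"r{i+1}2"])
         for i, n in enumerate("abcCBA")]
for up, down in zip(chain, chain[1:]):
    up.downstream = down
chain[0].inbox.append("batch-1")
for body in reversed(chain):   # step back-to-front: outputs produced at
    print(body.step())         # tick T1 are consumed at tick T2
```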



Abstract

The invention provides a graph execution pipeline parallel method and device for neural network model calculation in a deep learning training system. It comprises the graph execution process for neural network model calculation and the cooperative work of the functional modules involved. In the method, graph execution bodies on the local machine are created according to the physical computation graph compiled and generated by the deep learning framework, and several free memory blocks are allocated to each graph execution body, so that the entire computation graph participates in the deep learning training tasks of different batches of data at the same time in a pipeline-parallel manner, which substantially improves memory utilization and data parallelism.
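As a rough illustration of this pipeline-parallel claim, a driver loop over the PipelinedBody sketch from the embodiment above lets several batches occupy different execution bodies in the same tick; the round-based scheduler here is our simplification, not the patent's actual scheduling:

```python
# With two free memory blocks per execution body, batch k+1 can enter
# body a while batch k is still flowing through b..A.
def pipeline_run(chain, batches):
    """Each tick steps every execution body once, back-to-front, so
    outputs produced in one tick are consumed in the next."""
    chain[0].inbox.extend(batches)            # queue all training batches
    for tick in range(len(batches) + len(chain)):
        for body in reversed(chain):
            body.step()

# e.g. pipeline_run(chain, ["batch-2", "batch-3"]) continues the example
# above: on the next tick, body b processes batch 1 while body a already
# starts batch 2 in its second free block r12.
```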

Description

Technical field

[0001] The present invention relates to the technical field of deep learning, and in particular to a graph execution pipeline parallel method and device for neural network model calculation.

Background technique

[0002] With the rapid development of industrial applications of artificial intelligence, the demand for large models in practical application scenarios has become increasingly urgent, and the structure of machine learning workloads has grown increasingly complex, making the graph execution used for large-model computation very expensive. Most existing graph execution methods for neural network model computation are based on synchronous approaches, resulting in low resource utilization of the entire graph execution system, which limits the speedup ratio and throughput of the distributed system.

[0003] In order to solve the above problems, the graph execution pipeline parallel method for neural network mode...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/08; G06F12/02
CPC: G06N3/08; G06F12/0253; G06N3/045; Y02D10/00
Inventors: 王宏升, 谭博文, 鲍虎军, 陈光
Owner: ZHEJIANG LAB