
GPU cluster deep learning task parallelization method, device and electronic equipment

A GPU cluster and deep learning technology, applied in the Internet field. It addresses problems of existing schedulers, such as ignoring the physical characteristics of resources and of the tasks themselves, reducing per-node GPU utilization, and failing to make full use of GPU resources; its effects include improved utilization and execution efficiency, avoidance of unbalanced resource allocation, and improved resource utilization of the cluster.

Active Publication Date: 2022-01-21
BEIJING UNIV OF POSTS & TELECOMM
Cites: 10 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0005] Although this method realizes the parallelization of deep learning tasks to a certain extent, it mainly considers resource usage and ignores both the physical characteristics of the resources and the characteristics of the task itself. It therefore cannot parallelize deep learning tasks efficiently, which reduces the execution efficiency of deep learning workloads. At the same time, this method does not support fine-grained allocation of multiple tasks to a single GPU and cannot make full use of the GPU resources on a node, which hinders the efficient execution of deep learning tasks, reduces the GPU utilization of the node, and thus lowers the resource utilization of the GPU cluster.



Examples


Embodiment Construction

[0070] The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.

[0071] The embodiments of the present application disclose a GPU cluster deep learning task parallelization method, a device, electronic equipment, a storage medium, and a computer program product comprising instructions, each of which is described below.

[0072] An embodiment of the present application provides a GPU cluster deep learning task parallelization method; see Figure 1, which is a schematic diagram of the GPU cluster deep learning task parallelization method of the...

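Paragraph [0072] breaks off in the source, but the abstract below outlines the method's first stage: selecting a target computing node by analyzing the similarity between the pending task and each node's current workload. The following minimal Python sketch shows one way such a similarity-driven choice could work; the cosine metric over resource-usage vectors, the "least similar node wins" rule, and every name and number here are illustrative assumptions, not taken from the patent.

import math

# Minimal sketch of stage 1, node selection by workload similarity.
# ASSUMPTION: cosine similarity over resource-usage vectors and the
# "least similar node wins" rule are illustrative, not the patent's text.
def cosine(u, v):
    # Cosine similarity of two equal-length resource-usage vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def select_target_node(task_profile, node_profiles):
    # Pick the node whose aggregate workload is least similar to the task:
    # dissimilar workloads are less likely to contend for the same resources.
    return min(node_profiles, key=lambda n: cosine(task_profile, node_profiles[n]))

# Profiles as (GPU util, GPU memory, PCIe bandwidth, CPU util) fractions:
task = (0.9, 0.6, 0.2, 0.3)              # compute-heavy pending task
nodes = {
    "node-a": (0.8, 0.7, 0.1, 0.2),      # already compute-heavy
    "node-b": (0.2, 0.3, 0.9, 0.8),      # mostly I/O- and CPU-bound
}
print(select_target_node(task, nodes))   # node-b, the lower-contention choice

The intuition behind this reading of the abstract is that two compute-bound jobs on one node fight over the same resources, while a compute-bound job coexists more peacefully with an I/O-bound one.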


Abstract

The GPU cluster deep learning task parallelization method, device, and electronic equipment provided in the embodiments of the present application relate to the field of Internet technology. By analyzing the similarity between a pending deep learning task and each computing node of the GPU cluster, a target computing node in the cluster is determined for the task, reducing the possibility of resource contention on the node and thereby improving system resource utilization and the execution efficiency of deep learning tasks. The pending task is then divided into multiple target subtasks according to the number of GPUs it requires, and the interference level and communication cost of those subtasks are analyzed to determine each subtask's target GPU within the target computing node. This avoids unbalanced resource allocation across the GPUs of a computing node, achieves a high degree of parallelization of deep learning tasks, improves the resource utilization of the GPU cluster, and improves the execution efficiency of deep learning tasks.
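To make the abstract's second stage concrete: the task is split into one subtask per required GPU, and candidate GPU sets inside the target node are scored by interference with co-located work plus inter-GPU communication cost. The sketch below assumes a weighted-sum score and a toy PCIe topology; split_task, place_subtasks, the alpha/beta weights, and both cost estimators are hypothetical illustrations, as the patent text shown here gives no concrete formulas.

from itertools import combinations

# Minimal sketch of stage 2, subtask placement inside the chosen node.
# ASSUMPTION: the weighted-sum score and both estimators below are
# illustrative stand-ins; the visible patent text gives no formulas.
def split_task(task_name, gpus_required):
    # Divide the pending task into one subtask per required GPU.
    return [f"{task_name}/sub{i}" for i in range(gpus_required)]

def interference(load):
    # Toy interference estimate: grows with tasks already sharing the GPU.
    return load + 1

def comm_cost(gpu_a, gpu_b, same_switch):
    # Toy communication cost: cheaper when two GPUs share a PCIe switch.
    return 1 if same_switch(gpu_a, gpu_b) else 4

def place_subtasks(subtasks, gpu_loads, same_switch, alpha=1.0, beta=1.0):
    # Choose the GPU set minimizing alpha*interference + beta*communication.
    def score(gpus):
        inter = sum(interference(gpu_loads[g]) for g in gpus)
        comm = sum(comm_cost(a, b, same_switch) for a, b in combinations(gpus, 2))
        return alpha * inter + beta * comm
    best = min(combinations(gpu_loads, len(subtasks)), key=score)
    return dict(zip(subtasks, best))

# Example node: 4 GPUs, where GPUs 0-1 and 2-3 each share a PCIe switch.
loads = {0: 2, 1: 0, 2: 0, 3: 1}         # tasks already running per GPU
same_switch = lambda a, b: a // 2 == b // 2
print(place_subtasks(split_task("job42", 2), loads, same_switch))
# {'job42/sub0': 2, 'job42/sub1': 3}: near-idle GPUs on one switch beat the
# two idlest GPUs (1 and 2), whose cross-switch link would cost more.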

Description

Technical Field

[0001] The present application relates to the field of Internet technology, and in particular to a GPU cluster deep learning task parallelization method, device, and electronic equipment.

Background Technique

[0002] With the deepening of deep learning research, deep learning technology has achieved fruitful results in computer vision, speech recognition, text processing, and other fields, bringing great convenience to people's lives. However, the complex structure of neural network models and the huge amounts of data involved place higher demands on computing power. A GPU (Graphics Processing Unit) cluster integrates the computing resources of multiple GPUs, provides powerful and efficient parallel computing capability for compute-intensive deep learning tasks, and effectively meets the computing needs of multiple deep learning tasks.

[0003] However, when a deep learning task runs on a resource-sharing GPU cloud platform, its execution eff...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/50; G06N3/00
CPC: G06F9/5027; G06N3/006
Inventors: 张海涛 (Zhang Haitao), 耿欣 (Geng Xin), 马华东 (Ma Huadong)
Owner: BEIJING UNIV OF POSTS & TELECOMM