A cluster GPU resource scheduling system and method

A resource-scheduling technology applied to resource allocation and multi-programming devices, addressing problems such as low scheduling efficiency, the inability of a single GPU to carry complex computing tasks, and the lack of plug-and-play support for GPU cards in a cluster.

Active Publication Date: 2014-10-29
XIAMEN MEIYA PICO INFORMATION

Problems solved by technology

[0005] In view of this, the present invention provides a cluster GPU resource scheduling system and method to solve the problems that a single GPU cannot carry complex computing tasks, that existing cluster GPU resource scheduling methods are inefficient, and that GPU cards in a cluster cannot be used plug-and-play.



Embodiment Construction

[0040] To solve the problems in the prior art, embodiments of the present invention provide a cluster GPU resource scheduling system and method. In the provided solution, all GPU resources form a cluster, and a master node uniformly schedules each child node in the cluster. Each child node only needs to be assigned a unique ID number and a computing-power value and to send its own information to the master node; the master node then classifies the GPU resources according to the information received from each node. For an input task, the master node performs a coarse division of the task and distributes the sub-tasks to the child nodes, and each scheduled child node further divides its sub-task into fine blocks to match the GPU's parallel computing model.
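The registration-then-proportional-division flow described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation; all names (`ChildNode`, `Master`, `compute_power`, etc.) are assumptions, and the coarse division here simply allocates work units in proportion to each node's reported computing power.

```python
# Hypothetical sketch of the scheduling flow described in [0040]:
# child nodes register an ID and a computing-power value with the master,
# the master records them, and an incoming task is coarsely divided in
# proportion to each node's reported power.
from dataclasses import dataclass


@dataclass
class ChildNode:
    node_id: str
    compute_power: int  # relative capability reported at registration


class Master:
    def __init__(self):
        self.nodes = []

    def register(self, node: ChildNode):
        # The master classifies GPU resources from each node's reported info.
        self.nodes.append(node)

    def schedule(self, task_units: int):
        # Coarse division: allocate sub-task units proportionally to power.
        total = sum(n.compute_power for n in self.nodes)
        plan = {}
        assigned = 0
        for n in self.nodes:
            share = task_units * n.compute_power // total
            plan[n.node_id] = share
            assigned += share
        # Give any integer-division remainder to the most powerful node.
        if self.nodes and assigned < task_units:
            best = max(self.nodes, key=lambda n: n.compute_power).node_id
            plan[best] += task_units - assigned
        return plan


master = Master()
master.register(ChildNode("gpu-0", compute_power=3))
master.register(ChildNode("gpu-1", compute_power=1))
print(master.schedule(100))  # → {'gpu-0': 75, 'gpu-1': 25}
```

In the patent's scheme each child node would then subdivide its allocation into fine blocks for the GPU's parallel execution; that second level of division is omitted here.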

[0041] Embodiments of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0042] Figure 1 is a schematic structural diagram of a cluster GPU resource scheduling system...



Abstract

The invention provides a cluster GPU resource scheduling system. The system comprises a cluster initialization module, a GPU master node, and a plurality of GPU child nodes. The cluster initialization module is used to initialize the GPU master node and the GPU child nodes; the GPU master node receives a task input by a user, divides the task into a plurality of sub-tasks, and allocates the sub-tasks to the GPU child nodes by scheduling them; and the GPU child nodes execute the sub-tasks and return the execution results to the GPU master node. The cluster GPU resource scheduling system and method provided by the invention make full use of GPU resources to execute multiple computing tasks in parallel. In addition, the method enables plug-and-play operation of each child-node GPU in the cluster.

Description

Technical field

[0001] The present invention relates to the technical field of computer networks, and in particular to a cluster GPU resource scheduling system and method.

Background technique

[0002] In recent years, the graphics processing unit (Graphics Processing Unit, GPU) has developed rapidly in hardware architecture and has evolved into a highly parallel, multi-threaded processor with many processing cores and powerful computing capability. Compared with the central processing unit (Central Processing Unit, CPU), the GPU's single instruction multiple thread (Single Instruction Multiple Thread, SIMT) architecture increases the flexibility of programming. GPUs are designed to solve problems that can be expressed as data-parallel computations, in which most data elements follow the same execution path; this yields extremely high computational density (the ratio of arithmetic operations to memory operations), which hides memory access latency. With its powerful computing capabilitie...
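The data-parallel pattern the background describes, where every element follows the same code path, can be illustrated with a toy example. This is a pure-Python stand-in (function names and the SAXPY operation are my illustrative choice, not from the patent); the point is that the work splits into independent fixed-size blocks, which is exactly the kind of fine-grained division a GPU's thread blocks exploit.

```python
# Toy illustration of data-parallel computation: the same operation
# (y = a*x + y) is applied to every element, so the work divides into
# independent blocks that could each run on a separate GPU thread block.

def saxpy_block(a, x_block, y_block):
    # Identical code path for every element in the block.
    return [a * xi + yi for xi, yi in zip(x_block, y_block)]


def saxpy_parallelizable(a, x, y, block_size=4):
    # Split the input into fixed-size blocks; blocks are independent,
    # so they could execute concurrently with no coordination.
    out = []
    for i in range(0, len(x), block_size):
        out.extend(saxpy_block(a, x[i:i + block_size], y[i:i + block_size]))
    return out


print(saxpy_parallelizable(2, [1, 2, 3, 4, 5], [10, 10, 10, 10, 10]))
# → [12, 14, 16, 18, 20]
```

Because the blocks never interact, adding more processors shortens the wall-clock time almost linearly, which is why such workloads map well onto GPUs.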


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/46, G06F9/50
Inventors: 汤伟宾, 吴鸿伟, 罗佳
Owner: XIAMEN MEIYA PICO INFORMATION