
Dynamic scheduling method and system for collaborative computing between CPU and GPU based on two-level scheduling

A dynamic, two-level (global plus node-level) scheduling technology applied in the field of distributed computing. It addresses problems such as computing nodes finishing at inconsistent times, cluster computing capacity not being fully utilized, and task completion time not reaching the minimum, so as to shorten task processing time, realize pipelined processing, and ensure that the CPU and GPU do not wait for each other.

Active Publication Date: 2019-01-22
NARI TECH CO LTD +4

AI Technical Summary

Problems solved by technology

The prior-art approach of one-time task allocation based on predicted computing power has obvious shortcomings: the prediction may not be accurate enough, so the computing nodes finish at inconsistent times; some nodes exhibit a long-tail phenomenon while other nodes sit idle in the final stage; the computing capacity of the cluster is not fully utilized, and the task completion time does not reach the minimum.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0059] Figure 1 shows the software framework diagram of the dynamic scheduling method of the present invention. In Figure 1, the global scheduling module can be deployed on any node and adopts active/standby redundancy to ensure reliability, while a node scheduling module runs on each node. The global scheduling module is responsible for distributing data blocks in the system according to the computing power of each node. Each node maintains two data storage queues: a current processing queue and a data cache queue. The current processing queue holds the data blocks currently being processed by the local CPU and GPU; the data cache queue holds pending data blocks transferred to the node over the network.
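The two per-node queues described above can be sketched as follows. This is an illustrative sketch only; the class and method names are not from the patent, and the refill policy (promote all cached blocks once the processing queue drains) is an assumption about how the pipelining between network transfer and computation might work.

```python
from collections import deque


class NodeQueues:
    """Sketch of the two per-node queues: one for blocks being processed
    by the local CPU/GPU, one caching blocks arriving over the network."""

    def __init__(self):
        self.processing = deque()  # blocks currently being computed
        self.cache = deque()       # blocks received, waiting to be processed

    def refill(self):
        """When the processing queue drains, promote all cached blocks so
        computation and network transfer overlap (pipelined processing)."""
        if not self.processing:
            while self.cache:
                self.processing.append(self.cache.popleft())

    def needs_more_data(self):
        """The node scheduler would request the next batch from the global
        scheduler when the cache is empty (so the node never stalls)."""
        return not self.cache
```

Keeping the cache one batch ahead of the processing queue is what lets each node hide network latency behind computation.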

[0060] Figure 2 is a schematic diagram of data distribution by the global scheduling module. In Figure 2, the global scheduling module first determines the computing power weight of each node according to parameters such as the processor (includi...

Embodiment 2

[0066] Based on the same inventive concept as Embodiment 1, this embodiment of the present invention provides a dynamic scheduling system for CPU and GPU collaborative computing based on two-level scheduling, comprising:

[0067] A system-level real-time resource monitoring module, which monitors the relevant parameters of the CPU and GPU on each node in real time; the relevant parameters include the CPU model, clock frequency, number of cores, and average idle rate, as well as the GPU model and number of stream processors;

[0068] A global scheduling module, which receives the information sent by the real-time resource monitoring module, estimates the processing capacity of each node in the system, and, in batches and on request from the node scheduling module in each node, uses the estimated processing capacity of each node to dynamically di...
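The node-level half of the two-level scheme can be sketched as a single step: pull a new batch when the local queue is empty, then split blocks between GPU and CPU by their relative throughput. This is an assumption-laden sketch, not the patent's implementation; the function names, the callback interface, and the fixed `gpu_speedup` ratio are all illustrative.

```python
def node_scheduler_step(processing_queue, request_next_batch, dispatch,
                        gpu_speedup=3.0):
    """One node-level scheduling step (illustrative sketch).

    processing_queue   -- list of data blocks waiting on this node
    request_next_batch -- callback to the global scheduler for more data
    dispatch           -- callback dispatch(device, blocks)
    gpu_speedup        -- assumed GPU-to-CPU throughput ratio
    """
    # If the queue holding data to be processed is empty, request the
    # next batch from the global scheduling module.
    if not processing_queue:
        processing_queue.extend(request_next_batch())
    # Split the batch between GPU and CPU in proportion to throughput,
    # so neither processor waits for the other to finish.
    gpu_share = gpu_speedup / (gpu_speedup + 1.0)
    blocks = list(processing_queue)
    processing_queue.clear()
    n_gpu = round(len(blocks) * gpu_share)
    dispatch("gpu", blocks[:n_gpu])
    dispatch("cpu", blocks[n_gpu:])
```

Sizing the CPU and GPU shares so both finish a batch at about the same time is what realizes the "no mutual waiting" effect claimed for the method.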



Abstract

The invention discloses a dynamic scheduling method and system for collaborative computing between a CPU and a GPU based on two-level scheduling. In the method, the processing capacity of each node in the system is estimated; a global scheduling module dynamically distributes data to each node in batches, according to each node's processing capacity and the requests of the node scheduling module on that node; when a node scheduling module finds that the queue holding data to be processed is empty, it requests the next batch of data from the global scheduling module, and tasks are dynamically scheduled within the node according to the CPU and GPU processing capacity. By accounting for the heterogeneity of system resources, the invention lets weak nodes take on fewer tasks and strong nodes process more tasks, which improves the overall concurrency of the CPU/GPU heterogeneous hybrid parallel system and reduces the task completion time.

Description

Technical Field [0001] The invention belongs to the technical field of distributed computing, and in particular relates to a dynamic scheduling method for CPU and GPU cooperative computing based on two-level scheduling. Background Art [0002] The CPU/GPU heterogeneous hybrid parallel system has become a new type of high-performance computing platform owing to its strong computing power, high cost-effectiveness and low energy consumption. However, its complex architecture also poses a huge challenge for parallel computing research. In the prior art, research on task scheduling in CPU/GPU heterogeneous hybrid parallel systems generally predicts the computing power of the various types of hardware, or the running time of tasks on the various processors, and then performs a one-time task allocation. This method has obvious shortcomings: the prediction may not be accurate enough, which causes the end times of the computing nodes to be inconsistent and causes some node...

Claims


Application Information

Patent Timeline
no application
IPC(8): G06F9/50
CPC: G06F9/5044; G06F9/5066; Y02D10/00
Inventor 高原, 顾文杰, 李华东, 张磊, 陈泊宇, 张用, 顾雯轩, 陈素红, 丁雨恒
Owner NARI TECH CO LTD