
Dynamic resource scheduling method based on analysis technology and data flow information on heterogeneous many-cores

A dynamic resource scheduling method, applied in resource allocation, electrical digital data processing, program control design, etc. It addresses problems such as communication delay and low utilization of heterogeneous core resources, achieving the effects of reduced communication delay, improved parallelism and computation efficiency, and load balancing.

Pending Publication Date: 2022-01-04
SOUTHWEST UNIV OF SCI & TECH
Cites: 0 · Cited by: 1

AI Technical Summary

Problems solved by technology

[0009] In current resource scheduling schemes for heterogeneous platforms, programmers must manually allocate the number of GPUs, and the task allocation must also be divided manually by programmers. However, GPUs are suited to data-computing tasks that are highly uniform in type and independent of each other, so only this type of data can be assigned to the GPU to run, which leads to low utilization of heterogeneous core resources.
In addition, most existing dynamic scheduling schemes fetch work from the CPU and distribute it to a GPU only after that GPU is detected to be idle, which causes problems such as communication delay.
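For context, the reactive pull model criticized above can be sketched as follows. This is an illustrative toy, not part of the invention: `reactive_schedule` and the round-trip counter are assumptions chosen to show that each task is dispatched only after the CPU observes an idle GPU, so every task pays a CPU-GPU round trip before any computation starts.

```python
from collections import deque

def reactive_schedule(tasks, num_gpus):
    """Baseline pull model: the CPU ships one task to a GPU only after
    noticing that GPU is idle, paying one round trip per dispatch."""
    queue = deque(tasks)
    round_trips = 0
    done = [[] for _ in range(num_gpus)]
    gpu = 0
    while queue:
        # CPU detects GPU `gpu` is idle, then sends it the next task.
        round_trips += 1
        done[gpu].append(queue.popleft())
        gpu = (gpu + 1) % num_gpus
    return done, round_trips

done, trips = reactive_schedule(list(range(8)), 2)
print(trips)  # 8 -> one CPU-GPU round trip per task
```

With eight tasks on two GPUs, eight separate idle-detection round trips occur; scheduling driven by data flow information can instead assign work ahead of time and avoid most of this latency.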

Method used




Embodiment Construction

[0031] The present invention is based on the MGPUSim heterogeneous platform; a resource scheduling method is added to the platform to illustrate the purpose, advantages and key technical features of the invention.

[0032] MGPUSim is a highly configurable heterogeneous platform: users can freely choose a unified or a discrete multi-GPU model and configure the number of GPUs. However, having the user configure the GPU count wastes GPU resources, and tasks with existing data dependences cannot be manually assigned to the GPU to run, so GPU resource utilization may be low and resources unbalanced. Therefore, the steps of the resource scheduling method based on profiling technology and data flow information on the heterogeneous simulator MGPUSim are as follows:

[0033] Step 1: Analyze the program to determine the execution count of the parallelizable loop body.

[0034] Using offline analysis technology, select the loop ...
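A minimal offline-profiling sketch of this step (assumed for illustration; the loop identifier `"loop_A"` and the `profiled` wrapper are not from the patent text): a profiling run wraps each candidate parallel loop and records its trip count, which a later step compares against a threshold.

```python
from collections import Counter

# Trip counts recorded during one offline profiling run.
trip_counts = Counter()

def profiled(loop_id, iterable):
    """Yield from `iterable` while recording how many iterations ran."""
    for item in iterable:
        trip_counts[loop_id] += 1
        yield item

# Profiling run over sample input: the candidate loop executes 512 times.
total = 0
for x in profiled("loop_A", range(512)):
    total += x

print(trip_counts["loop_A"])  # 512
```

The recorded execution count is then available before the real run, so the scheduler can size GPU resources without programmer intervention.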



Abstract

The invention discloses a dynamic resource scheduling method based on analysis technology and data flow information on heterogeneous many-cores, and relates to the field of heterogeneous many-core systems. The method comprises the following steps: analyzing a program and determining the execution count of a parallelizable loop body; determining a data flow diagram of the loop body; setting a threshold value and calculating the number of required GPUs; dividing GPU task sizes according to data dependences; distributing the tasks to different GPUs according to their data flows; and checking whether the platform is load-balanced. The invention mainly aims to provide a resource scheduling method based on program-analysis information and program data flow information, addressing the current situation in which programmers must set the number of GPUs and manually divide tasks on a heterogeneous platform. Using the execution counts and data dependences of the loop statements obtained through analysis, a threshold is set to determine the number of GPUs; the thread granularity of task division with data dependences is increased; tasks are distributed to each GPU for running using data flow information; and active detection is combined with a work-stealing algorithm to solve platform load balancing. The result is a dynamic resource scheduling method that automatically sets the number of GPUs, improves the utilization of GPU computing resources, and actively detects and realizes load balancing.
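The threshold and partitioning steps of the abstract can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function names, the particular threshold rule (one GPU per `threshold` profiled iterations, capped at the platform maximum), and the contiguous chunking are all assumptions.

```python
from math import ceil

def required_gpus(iterations: int, threshold: int, max_gpus: int) -> int:
    """Derive the GPU count from the profiled loop trip count:
    one GPU per `threshold` iterations, capped at the platform limit."""
    return min(max_gpus, max(1, ceil(iterations / threshold)))

def partition(iterations: int, num_gpus: int):
    """Split the iteration space into contiguous, near-equal chunks,
    one per GPU; dependent iterations stay together inside a chunk."""
    base, rem = divmod(iterations, num_gpus)
    chunks, start = [], 0
    for g in range(num_gpus):
        size = base + (1 if g < rem else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

# Example: a loop profiled at 10,000 iterations on a 4-GPU platform.
n_iter = 10_000
gpus = required_gpus(n_iter, threshold=4096, max_gpus=4)
print(gpus)                                       # 3
print([len(c) for c in partition(n_iter, gpus)])  # [3334, 3333, 3333]
```

After partitioning, each chunk would be dispatched to its GPU according to the data flow graph, and a work-stealing pass would rebalance any GPU that finishes early.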

Description

Technical Field

[0001] The invention mainly relates to the field of heterogeneous many-core systems, in particular to a dynamic resource scheduling method based on analysis technology and data flow information on the heterogeneous many-core simulator MGPUSim.

Background Technique

[0002] The many-core processor structure is an exploration of the development path of micro-processing chip architecture, with on-chip scalability reaching thousands of cores and computing power reaching trillions of operations. Its computing-resource density is higher, its on-chip communication overhead is significantly reduced, it can achieve high scalability of chip structure and performance, and it copes well with the power-consumption, wire-delay and design-complexity problems of nanotechnology-generation chip design.

[0003] Graphics processing units (GPUs) have far surpassed general-purpose processors in terms of integration and compute-intensive problem-processing capability, ...

Claims


Application Information

IPC(8): G06F9/50
CPC: G06F9/505; G06F9/5083
Inventor: 王耀彬, 王欣夷, 唐苹苹, 孟慧玲
Owner: SOUTHWEST UNIV OF SCI & TECH