
GPU resource elastic scheduling method based on heterogeneous application platform

A technology relating to application platforms and scheduling methods, applied in resource allocation, inter-program communication, instruments, and similar fields. It addresses problems such as inconsistent GPU resource scheduling information, poor GPU resource utilization, and lack of support for heterogeneous resources and heterogeneous application expansion, with the effect of maximizing GPU resource utilization.

Active Publication Date: 2021-04-23
SHANDONG COMP SCI CENTNAT SUPERCOMP CENT IN JINAN +1

AI Technical Summary

Problems solved by technology

However, this approach requires that the scheduling system be unique on a given platform; otherwise the GPU resource scheduling information becomes inconsistent and resource occupation conflicts arise. Commonly used physical node-level scheduling platforms, like hardware-level scheduling platforms, assume that all GPU computing nodes belong to a single resource management platform and do not involve sharing or multiplexing. Their elastic expansion is limited to the interior of the platform and does not support heterogeneous resources or heterogeneous application expansion, which is not conducive to improving the overall GPU resource utilization of the platform.


Image

Patent drawings: schematic figures for the GPU resource elastic scheduling method based on heterogeneous application platform.


Detailed Description of the Embodiments

[0038] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0039] As shown in Figure 1, a schematic diagram of the overall elastic scheduling of GPU resources in the present invention is given. The three application platforms are a high-performance computing application platform, a cloud computing application platform, and a container application platform, with identification IDs 1, 2, and 3 respectively. In addition, there is a public GPU node resource pool used for elastic scheduling and dynamic scaling. The core platform for elastic resource scaling is mainly composed of a configuration management module, an elastic scheduling module, a resource allocation module, a resource recycling module, an initialization module, and a collection module. The specific functions of each module are as follows:

[0040] The configuration management module is used to configure the management platform scheduling info...
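To make the architecture of paragraphs [0039]-[0040] easier to follow, the sketch below models the three application platforms (IDs 1-3), the shared GPU node resource pool, and the per-node state that the core scaling platform's modules operate on. All class and field names here (GpuNode, ApplicationPlatform, SharedGpuPool, NodeState) are illustrative assumptions, not structures defined in the patent.

```python
# Minimal sketch (assumed names/structures, not the patent's implementation):
# three application platforms identified by IDs 1-3, a shared GPU node pool,
# and the node state tracked by the core elastic-scaling platform.

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class NodeState(Enum):
    IDLE = "idle"
    BUSY = "busy"
    LOCKED = "locked"      # node temporarily locked, e.g. jobs still draining
    OFFLINE = "offline"    # removed from its platform, back in the shared pool


@dataclass
class GpuNode:
    name: str
    gpu_count: int
    utilization: float = 0.0          # 0.0 - 1.0, filled in by the collection module
    state: NodeState = NodeState.IDLE


@dataclass
class ApplicationPlatform:
    platform_id: int                  # 1 = HPC, 2 = cloud computing, 3 = container
    name: str
    nodes: Dict[str, GpuNode] = field(default_factory=dict)

    def gpu_utilization(self) -> float:
        """Average GPU utilization over the platform's nodes (0 if empty)."""
        if not self.nodes:
            return 0.0
        return sum(n.utilization for n in self.nodes.values()) / len(self.nodes)


@dataclass
class SharedGpuPool:
    """Public GPU node resource pool used for elastic scheduling and dynamic scaling."""
    free_nodes: List[GpuNode] = field(default_factory=list)
```

In this reading, the collection module would periodically refresh each node's utilization through the platforms' monitoring interfaces, and the elastic scheduling module would compare the resulting per-platform averages against the configured thresholds described in the abstract.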



Abstract

The invention discloses a GPU resource elastic scheduling method based on a heterogeneous application platform. The method comprises the following steps: a) obtaining GPU resource utilization information; b) setting a trigger threshold and a trigger count; c) screening and sorting the queue of platforms to be scaled down; d) screening and sorting the queue of platforms to be scaled up; and, for scaling down, the steps of: 1) selecting a platform to be scaled down; 2) establishing a GPU node list; 3) processing nodes in a locked state; 4) taking the nodes to be migrated offline; 5) adding them to the resource queue; and 6) judging whether the scale-down is finished. With this GPU resource elastic scheduling method, resources can be adjusted flexibly according to the GPU load of the whole platform, so that platform GPU resources are utilized to the greatest extent. Scheduling within each platform is mainly carried out by that platform's existing lower-layer scheduling component, while dynamic resource monitoring, information collection, and the issuing of execution operations are achieved through interface calls. The method supports rapid and flexible deployment of cloud computing, big data, artificial intelligence, and high-performance computing platforms.
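The steps listed in the abstract can be read as two cooperating procedures: a classification pass (steps a-d) that builds the scale-down and scale-up platform queues, and a scale-down pass (steps 1-6) that migrates nodes from an under-utilized platform into the shared resource queue. The sketch below is a minimal, hypothetical rendering of that flow; the thresholds, field names, and helper functions are assumptions chosen for illustration and do not come from the patent text.

```python
# Hypothetical sketch of the classification pass (steps a-d) and the
# scale-down pass (steps 1-6) from the abstract. Thresholds and field
# names are illustrative assumptions.

from typing import Dict, List

SCALE_DOWN_THRESHOLD = 0.2   # assumed: utilization below this may trigger scale-down
SCALE_UP_THRESHOLD = 0.8     # assumed: utilization above this may trigger scale-up
TRIGGER_COUNT = 3            # assumed: threshold must be crossed this many times in a row


def classify_platforms(platforms: List[Dict], trigger_counts: Dict[int, int]):
    """Steps a)-d): read utilization, apply threshold and count, and build the
    scale-down and scale-up platform queues, sorted by utilization."""
    shrink_queue, expand_queue = [], []
    for p in platforms:
        util = p["utilization"]                          # step a) collected utilization
        if util < SCALE_DOWN_THRESHOLD:
            trigger_counts[p["id"]] = trigger_counts.get(p["id"], 0) + 1
            if trigger_counts[p["id"]] >= TRIGGER_COUNT:  # step b) threshold + count
                shrink_queue.append(p)
        elif util > SCALE_UP_THRESHOLD:
            expand_queue.append(p)
        else:
            trigger_counts[p["id"]] = 0
    shrink_queue.sort(key=lambda p: p["utilization"])                # step c)
    expand_queue.sort(key=lambda p: p["utilization"], reverse=True)  # step d)
    return shrink_queue, expand_queue


def scale_down(platform: Dict, shared_pool: List[Dict], nodes_to_release: int) -> bool:
    """Steps 1)-6): shrink one selected platform; return True when finished."""
    node_list = sorted(platform["nodes"], key=lambda n: n["utilization"])  # step 2)
    released = 0
    for node in node_list:
        if released >= nodes_to_release:
            break
        if node["state"] == "locked":          # step 3) skip or defer locked nodes
            continue
        node["state"] = "offline"              # step 4) take the node offline
        platform["nodes"].remove(node)
        shared_pool.append(node)               # step 5) add to the shared resource queue
        released += 1
    return released >= nodes_to_release        # step 6) is the scale-down complete?
```

The corresponding scale-up path would take nodes back out of the shared queue, run the initialization module's platform-specific setup, and hand them to the expanding platform's existing lower-layer scheduling component through its interfaces, as the abstract describes.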

Description

Technical Field

[0001] The present invention relates to a method for elastic scheduling of GPU resources, and more specifically, to a method for elastic scheduling of GPU resources based on a heterogeneous application platform.

Background Technology

[0002] Graphics processing unit (GPU) resources have been increasingly used in cloud computing, artificial intelligence, and high-performance computing in recent years owing to their excellent parallel computing capabilities, higher bandwidth, and higher clock frequencies. At the same time, because GPU resources are generally more expensive than CPUs, they are scarce resources across these computing application scenarios. Improving the utilization rate of GPU resources is therefore generally achieved through resource scheduling.

[0003] GPU resource scheduling can generally be divided into task-level scheduling, hardware-level scheduling, and node-level scheduling. The task-level schedulin...
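The abstract states that dynamic resource monitoring and information collection are carried out through interface calls, but the description excerpt does not name a specific interface. As one hypothetical, node-local realization, the sketch below reads per-GPU utilization through NVIDIA's NVML bindings (pynvml); the function name and the returned fields are assumptions chosen for illustration, not part of the patent.

```python
# Hypothetical node-local collection of GPU utilization via NVML (pynvml).
# A collection module could aggregate these readings per platform.

import pynvml


def collect_gpu_utilization() -> list:
    """Return per-GPU utilization and memory usage for the local node."""
    pynvml.nvmlInit()
    try:
        stats = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            stats.append({
                "index": i,
                "gpu_util_percent": util.gpu,
                "mem_used_bytes": mem.used,
                "mem_total_bytes": mem.total,
            })
        return stats
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    for gpu in collect_gpu_utilization():
        print(gpu)
```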

Claims


Application Information

IPC(8): G06F9/50; G06F9/4401; G06F9/54
CPC: G06F9/5011; G06F9/5027; G06F9/546; G06F9/4403; G06F2209/5012; G06F2209/548
Inventor: 王继彬, 刘鑫, 郭莹, 杨美红
Owner SHANDONG COMP SCI CENTNAT SUPERCOMP CENT IN JINAN