
Method for sharing GPU (graphics processing unit) by multiple tasks based on CUDA (compute unified device architecture)

A multi-task implementation method, applied in the field of multi-programming devices and resource allocation. It addresses the absence of prior work on multi-task GPU sharing, and achieves simple multi-task sharing, simplified programming work, and good performance.

Inactive Publication Date: 2012-10-03
HUAWEI TECH CO LTD +1

Problems solved by technology

[0005] At present, no patent or literature has been found that discusses multi-task sharing on the GPU.



Embodiment Construction

[0059] The following specific example further illustrates the present invention. It should be noted that the embodiments are published to aid understanding of the invention; those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention is not limited to the content disclosed in the embodiments, and the claimed scope of protection is defined by the claims.

[0060] A specific example involves 3 computation tasks (the specific content of the tasks is irrelevant here).

[0061] The tasks have the following constraint: task 1 must complete after task 0, because task 1 needs the result of task 0; task 2 has no constraint relationship with task 0 or task 1. (Attachment 3(a), the circle repr...


Abstract

The invention discloses a method for sharing a GPU (graphics processing unit) among multiple tasks based on CUDA (compute unified device architecture). The method includes: creating a mapping table in Global Memory that determines, for each Block in the merged Kernel, the corresponding task number and task block number; launching N Blocks with one Kernel each time, where N equals the sum of the task block counts of all tasks; satisfying the constraint relations among the original tasks by a marking and blocking-wait method; and sharing the Shared Memory among the multiple tasks through pre-application and static allocation. With this method, multi-task sharing can be realized simply and conveniently on the existing GPU hardware architecture, programming work in practical applications is simplified, and good performance is obtained under certain conditions.

Description

Technical field

[0001] The invention relates to a method for implementing multi-task sharing of a GPU, and in particular to a method for merging multiple tasks under NVIDIA's CUDA architecture to realize task parallelism. It belongs to the field of GPGPU computing.

Background technique

[0002] GPGPU (general-purpose computing on graphics processing units) is a technology that uses the GPU to perform large-scale computation. CUDA is the GPGPU architecture provided by NVIDIA; since its introduction, it has become a widely used form of many-core parallel computing.

[0003] The GPU has much higher floating-point computing power and memory bandwidth than the CPU (see figure 1), and due to its high degree of parallelism it is very well suited to large-scale data processing.

[0004] However, due to the hardware design of the GPU, programming on the GPU differs from parallel programming on the CPU. A significant difference is that the GPU does not support multi-task sharing: each task ru...


Application Information

IPC(8): G06F9/50
Inventor: 黄锟, 陈一峯, 蒋吴军
Owner: HUAWEI TECH CO LTD