
CUDA-based Gridding algorithm optimization method and device

An optimization method and device, applied in the field of parallelization, that solve problems such as low timeliness and achieve the effects of improved operating efficiency, reduced economic cost, and saved time.

Inactive Publication Date: 2019-08-30
Applicant: PLA Strategic Support Force Information Engineering University (PLA SSF IEU)


Problems solved by technology

[0004] Aiming at the problem of low timeliness in the application of existing gridding algorithms, the present invention proposes a CUDA-based gridding algorithm optimization method and device.

Method used



Examples


Embodiment 1

[0040] As shown in Figure 1, a CUDA-based Gridding algorithm optimization method includes the following steps:

[0041] Step S101: According to the number of function calls in the Gridding algorithm, obtain the top M most-called functions. Specifically, the main optimizable functions in the Gridding algorithm and their call counts are shown in Table 1.

[0042] Table 1: The main optimizable functions in the Gridding algorithm and their call counts

[0043]
    Function                    Call count
    grdsf                              370
    anti_aliasing_calculate            185
    convolutional_degrid                47
    convolutional_grid                  44
    weight_gridding                      2
    gridder                              0
    gridder_numba                        0

[0045] As Table 1 shows, grdsf is called the most times, reaching 370; anti_aliasing_calculate (the anti-aliasing function) is called 185 times, convolutional_degrid (the convolution degridding function) 47 times, convolutional_grid (the convolution gridding function) 44 times, and weight_gridding twice; gridder and gridder_numba are not called. In this embodiment, the top three...
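Step S101 amounts to profiling the pipeline and keeping the M most frequently called functions. The patent does not say how the counts in Table 1 were gathered; a minimal sketch using Python's built-in cProfile (the `workload` callable and the helper name `top_called_functions` are illustrative, not from the source):

```python
import cProfile
import pstats

def top_called_functions(workload, m=3):
    """Profile workload() and return (name, call_count) for the m
    most frequently called functions, mirroring Step S101."""
    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()
    stats = pstats.Stats(profiler).stats
    # Each entry maps (file, line, name) -> (call_count, total_calls, tt, ct, callers)
    ranked = sorted(stats.items(), key=lambda kv: kv[1][0], reverse=True)
    return [(func[2], data[0]) for func, data in ranked[:m]]
```

Applied to the ARL Gridding pipeline, such a profile would surface grdsf, anti_aliasing_calculate, and convolutional_degrid as the top three, matching Table 1.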

Embodiment 2

[0082] As shown in Figure 5, a CUDA-based Gridding algorithm optimization device includes:

[0083] The comparison module 201, which is used to obtain the top M most-called functions from the function call counts in the Gridding algorithm;

[0084] The parallelization module 202 is configured to perform GPU parallelization processing on the M functions in CUDA.

[0085] Specifically, it also includes:

[0086] The replacement module 203 is configured to implement a GPU-based Gridding algorithm in CUDA, and replace the Gridding algorithm in the ARL algorithm library.

[0087] Specifically, as shown in Figure 6, the parallelization module 202 includes:

[0088] The allocation sub-module 2021, which is used to allocate arrays in video memory (GPU device memory) for the M functions that require GPU parallelization;

[0089] The transmission sub-module 2022, which is used to transfer the data to be processed from host memory to video memory;

[0090] D...
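The module layout above can be mirrored as plain classes. This is only a structural sketch: device memory is simulated with host-side lists, since the source does not detail the actual CUDA allocation and copy calls (which in a real implementation would go through a CUDA binding, e.g. cudaMalloc / cudaMemcpy).

```python
class ComparisonModule:
    """Module 201: pick the top-M most-called functions from call counts."""
    def top_m(self, call_counts, m):
        return sorted(call_counts, key=call_counts.get, reverse=True)[:m]

class ParallelizationModule:
    """Module 202, with sub-modules 2021 (allocation) and 2022 (transfer).
    Video memory is simulated here with host lists for illustration only."""
    def __init__(self):
        self.device_buffers = {}  # stands in for GPU device memory

    def allocate(self, name, size):
        # Sub-module 2021: reserve a device array for a parallelized function.
        self.device_buffers[name] = [0.0] * size

    def transfer(self, name, host_data):
        # Sub-module 2022: copy input data from host memory into the device array.
        buf = self.device_buffers[name]
        buf[:len(host_data)] = host_data
```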



Abstract

The invention relates to the technical field of parallelization and discloses a CUDA-based Gridding algorithm optimization method comprising the steps of: 1) obtaining the top M functions with the highest call counts from the function call counts in the Gridding algorithm; and 2) carrying out GPU parallel processing of the M functions in CUDA through a matrix vectorization method. The invention also discloses a CUDA-based Gridding algorithm optimization device comprising a comparison module and a parallelization module. The method and device shorten the overall running time of the Gridding algorithm and improve operating efficiency without affecting the format or size of the input and output data.
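The "matrix vectorization method" named in the abstract is not spelled out in this extract. The general idea it refers to is replacing a per-sample loop with one whole-array operation, which on a GPU maps to a kernel launch with one thread per element. A toy sketch of that transformation (function names are illustrative, not from the patent):

```python
def apply_kernel_loop(kernel, samples):
    # Scalar form: one multiply per loop iteration (CPU-style).
    out = []
    for k, s in zip(kernel, samples):
        out.append(k * s)
    return out

def apply_kernel_batched(kernel, samples):
    # Batched form: the whole elementwise product is expressed as a single
    # array operation; in CUDA this becomes one kernel launch with one
    # thread per element instead of a sequential loop.
    return [k * s for k, s in zip(kernel, samples)]
```

Both forms produce identical results; the payoff of the batched form is that each element is independent, so the work parallelizes across GPU threads.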

Description

Technical Field

[0001] The invention relates to the technical field of parallelization, and in particular to a CUDA-based Gridding algorithm optimization method and device.

Background

[0002] Astronomical data is growing at an astonishing rate in volume, complexity, and growth rate. Over the past few decades, research and development of radio telescopes have greatly improved their sensitivity, data quality, and image resolution, and the rate at which these telescopes collect data is very high. As the largest synthetic-aperture radio telescope in the world, the SKA plans to collect more than 12 Tb of data per second, equivalent to 3.5 times China's international Internet export bandwidth at the end of 2013 and 30 times Google's annual data volume. Processing such scientific data requires exascale supercomputers; this processing speed is equivalent to 52 times the performance of Tianhe-2, the world's fastest supercomputer...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F 9/50; G06T 1/20
CPC: G06F 9/5072; G06T 1/20
Inventors: 胡馨艺, 赵亚群, 赵志诚
Owner: PLA Strategic Support Force Information Engineering University (PLA SSF IEU)