
Data parallel processing method and system

A parallel data processing technology, applied in electrical digital data processing, resource allocation, and program control design. It addresses problems such as the lack of support for high-performance processors, the failure to use GPU high-speed computing cards to accelerate computation, and weak fault-tolerance mechanisms, with the effect of improving execution efficiency and enhancing data computing and processing capability.

Active Publication Date: 2016-06-15
SHANGHAI JIAO TONG UNIV +1

AI Technical Summary

Problems solved by technology

This method proposes a construction strategy for a distributed parallel computing platform applied to power system simulation, but the architecture uses only multiple single-core CPUs and does not employ the now technically mature GPU high-speed computing cards with supercomputing capability to accelerate computation. At the same time, it lacks a sound fault-tolerance mechanism: once an error occurs, the platform cannot be quickly reconfigured to restore computing capacity.
[0005] Generally speaking, current high-performance large-scale data parallel processing methods for many-core processors lack strong software support for hardware such as GPU high-speed computing cards and high-performance processors. On the other hand, iterative computation still fails to exploit large memory and fast in-memory computing and must continuously read from and write to disk; node job scheduling strategies and data distribution are unreasonable, fault-tolerance mechanisms are weak, and the parallelism of computing nodes needs further improvement.



Embodiment Construction

[0040] The present invention will be described in detail below with specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be noted that those skilled in the art can make several changes and improvements without departing from the concept of the present invention; these all fall within the protection scope of the present invention.

[0041] The technical scheme of the data parallel processing method of the present invention is as follows:

[0042] (1) Task and data division for high-performance data processing of many-core processors

[0043] The core concept of the large-scale data parallel processing framework for many-core processors is as follows: through the use of advanced and mature semiconductor process technology, distributed computer architecture, GPU high-speed computing cards, many-core design and other technologies, a large amount of cost is relatively T...
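By way of illustration only, the minimal Python sketch below shows one possible realization of step (1): a master node splitting a data set into blocks proportional to each work node's free GPU capacity. The class and function names (`WorkerNode`, `partition_by_gpu_load`) and the proportional-split heuristic are assumptions made for this sketch, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WorkerNode:
    """Hypothetical description of a work computing node."""
    name: str
    allocatable_gpus: int   # GPUs the node can devote to this job
    gpu_load: float         # current utilisation in [0, 1]

def partition_by_gpu_load(records: List[dict], nodes: List[WorkerNode]) -> dict:
    """Split records among nodes in proportion to their free GPU capacity."""
    # Free capacity = number of allocatable GPUs scaled by how idle they are.
    capacity = [n.allocatable_gpus * (1.0 - n.gpu_load) for n in nodes]
    total = sum(capacity) or 1.0

    partitions, start = {}, 0
    for node, cap in zip(nodes, capacity):
        count = round(len(records) * cap / total)
        partitions[node.name] = records[start:start + count]
        start += count
    # Any remainder left over from rounding goes to the last node.
    if start < len(records):
        partitions[nodes[-1].name].extend(records[start:])
    return partitions

if __name__ == "__main__":
    nodes = [WorkerNode("node-a", 4, 0.25), WorkerNode("node-b", 2, 0.50)]
    data = [{"id": i} for i in range(100)]
    parts = partition_by_gpu_load(data, nodes)
    print({k: len(v) for k, v in parts.items()})  # e.g. {'node-a': 75, 'node-b': 25}
```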



Abstract

The invention provides a data parallel processing method comprising the following steps: (1) a main management node receives data and acquires the incidence relations of the data; (2) the main management node calculates the allocatable GPUs and GPU workloads of the work computing nodes; (3) the main management node partitions the data and distributes the partitioned data to all the work computing nodes; (4) the work computing nodes process the received data in parallel and transmit the processing results back to the main management node; (5) the main management node merges the results and then outputs them. The method has the following advantages: a master-slave architectural pattern is adopted for high-performance large-scale data parallel processing; the specific operations converted from application programs are partitioned into operation stages according to DNA feature modeling, and node-granularity operation deployment is performed according to the partition result; and the execution efficiency of parallel data-flow tasks within a single node is improved by a thread-level parallel optimization mechanism that fully utilizes the multiple computing kernels.
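To make the five steps above concrete, the sketch below walks through the master-slave flow in plain Python: the master hands each work node its partition (for example, one produced by the hypothetical `partition_by_gpu_load` helper sketched earlier), the nodes process their partitions in parallel, and the master merges the returned results. Threads stand in for distributed work computing nodes, and `process_partition` is a toy placeholder for the actual GPU computation; none of these names come from the patent itself.

```python
from concurrent.futures import ThreadPoolExecutor

def process_partition(partition: list) -> list:
    """Stand-in for the per-node GPU computation of step 4 (toy transformation)."""
    return [record * 2 for record in partition]

def master_run(partitions: dict) -> list:
    """Distribute partitions, process them in parallel, and merge (steps 3-5).

    `partitions` maps a work-node name to its share of the records, e.g. the
    output of the hypothetical partition_by_gpu_load() shown earlier.
    """
    # Step 4: each work node processes its partition; threads stand in for nodes.
    with ThreadPoolExecutor(max_workers=max(len(partitions), 1)) as pool:
        futures = {name: pool.submit(process_partition, part)
                   for name, part in partitions.items()}
        results = {name: fut.result() for name, fut in futures.items()}
    # Step 5: the master merges the per-node results and outputs them.
    return [value for name in partitions for value in results[name]]

if __name__ == "__main__":
    data = list(range(10))
    parts = {"node-a": data[:7], "node-b": data[7:]}   # pretend step 3 already ran
    print(master_run(parts))                            # [0, 2, 4, ..., 18]
```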

Description

Technical field

[0001] The invention belongs to the field of computer systems and high-performance computing, and specifically relates to a high-performance parallel processing method for large-scale data on a many-core processor architecture; it offers a useful way of efficiently processing computation-intensive big data and of supporting auxiliary decision-making.

Background technique

[0002] As scientific research, e-commerce, social networking, mobile communications and other industries generate large amounts of data all the time, the types of these data are becoming increasingly complex and their quantity keeps growing; the processing scale has evolved from the TB level to the PB level and, today, to the exabyte level, which poses a severe challenge to the efficiency and real-time performance of big data processing. Semiconductor process technology and architecture continue to develop, and processor functions, distributed storage technology, GPU high-speed computing cards, microprocessor s...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06F9/50
CPC: G06F9/5066
Inventor: 黄林鹏, 吴仁克, 李素敏, 周浩杰, 余鹏, 沈凯, 石秋伟
Owner: SHANGHAI JIAO TONG UNIV