
Buffer method and system for multilevel pipeline parallel computing

A buffering method for multi-stage pipeline parallel computing, applied in the field of data processing. It addresses the prior-art problems of wasted time and power consumption and achieves data transfer that is simple, efficient and less error-prone.

Status: Inactive
Publication Date: 2017-11-28
Applicant: INST OF MICROELECTRONICS CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0006] The present invention provides a buffering method and system for multi-stage pipeline parallel computing, which solves the prior-art problem that transferring data between subtask processing modules by copying wastes time and power consumption.


Examples


Embodiment Construction

[0044] The embodiments of the present invention are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote elements that are the same or similar, or that have the same or similar functions, throughout. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended only to explain the present invention and are not to be construed as limiting it.

[0045] Traditional serial computing processes sequential tasks one by one. The processing of each task includes the processing of several subtasks, and each subtask has its own processing module. In the following, dividing each task of the pipeline into six levels of subtasks is taken as an example. As shown in Figure 1(a), a task Tn is processed through S1, S2, S3, S4, S5 and S6 in turn, and only then is the next task Tn+1 processed. If the processing end time point of task Tn is earlier than the arrival time point of task Tn+1...
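As a rough illustration of the serial case of Figure 1(a), the following C sketch walks each task through the six subtask levels S1 to S6 in order, so that task Tn+1 is not touched until task Tn has left the last level. The sketch is not taken from the patent; the names task_data_t, process_stage and run_serial are hypothetical.

#include <stddef.h>

#define NUM_STAGES 6

/* hypothetical stand-in for one task's working data */
typedef struct {
    int payload;
} task_data_t;

/* one processing module per subtask level; the body is a placeholder */
void process_stage(int level, task_data_t *d) {
    d->payload += level;
}

/* serial computing: task T(n+1) is started only after T(n) has passed
 * through all six stages S1..S6 */
void run_serial(task_data_t *tasks, size_t num_tasks) {
    for (size_t n = 0; n < num_tasks; ++n) {
        for (int level = 1; level <= NUM_STAGES; ++level) {
            process_stage(level, &tasks[n]);
        }
    }
}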



Abstract

The invention provides a buffer method and system for multilevel pipeline parallel computing. The method includes the following steps: all tasks in a pipeline are divided in advance into multiple levels of subtasks, and the subtasks at the same level of different tasks in the pipeline are processed by the same subtask processing module; a fixed independent data space is set for each task of the pipeline, and each independent data space has a fixed address; the subtasks at different levels of the same task share that task's independent data space, and after a prior-level subtask has been processed, the subtask processing module of the next-level subtask inherits the data in the independent data space of the same task by updating the address of a pointer, so that data transfer is achieved. Because data transfer is achieved by updating the address of a pointer, the time and power consumption that copying would waste are avoided.
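A minimal sketch of this buffering scheme is given below, assuming a software pipeline; the patent text itself contains no code, and the names task_space_t, pool, stage_ptr, admit_task and hand_over are hypothetical. Each in-flight task is bound to one fixed, independently addressed data space, and a stage module takes over a task from the previous stage by updating a pointer rather than by copying the data.

#include <stdint.h>
#include <stddef.h>

#define NUM_STAGES 6
#define NUM_SLOTS  8   /* fixed pool: at most 8 tasks in flight at once */

/* one task's independent data space; its address never changes */
typedef struct {
    uint8_t buf[1024];
} task_space_t;

/* the fixed data spaces, allocated once */
task_space_t pool[NUM_SLOTS];

/* each subtask processing module holds only a pointer to the data space
 * of the task it is currently working on */
task_space_t *stage_ptr[NUM_STAGES];

/* a new task entering the pipeline is bound to a free slot of the pool */
void admit_task(int slot) {
    stage_ptr[0] = &pool[slot];
}

/* "transfer" of a task from stage s-1 to stage s: only the pointer is
 * updated, no data is copied */
void hand_over(int s) {
    stage_ptr[s]     = stage_ptr[s - 1];
    stage_ptr[s - 1] = NULL;   /* the previous stage releases the task */
}

Because only a pointer value changes hands, moving a task between stages costs a constant amount of work independent of the size of the task's data, which is the time and power saving the abstract refers to.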

Description

Technical Field

[0001] The invention relates to the field of data processing, and in particular to a buffering method and system for multi-stage pipeline parallel computing.

Background Art

[0002] Traditional serial computing processes each sequential task one by one. The processing of each task includes the processing of several subtasks, and each subtask has its own processing module. A task Tn is processed by each subtask processing module in sequence, and task Tn+1 is processed only after all levels of subtasks of Tn have been processed. If the processing end time point of task Tn is earlier than the arrival time point of task Tn+1, this serial computing method is feasible.

[0003] However, if the processing end time of task Tn is later than the arrival time of task Tn+1, this serial computing method is not feasible. In that case parallel computing is needed, so that the processing of task Tn+1 can begin before the processing of task Tn is completed...
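To make the contrast with serial computing concrete, the following sketch (illustrative only, not from the patent) prints one possible schedule of a six-stage pipeline: at step k, stage Si works on task T(k-i+1), so once the pipeline is full all six subtask processing modules are busy on different tasks at the same time.

#include <stdio.h>

#define NUM_STAGES 6

int main(void) {
    int num_tasks = 8;

    /* run until the last task has left the last stage */
    for (int k = 0; k < num_tasks + NUM_STAGES - 1; ++k) {
        printf("step %d:", k);
        for (int s = 0; s < NUM_STAGES; ++s) {
            int task = k - s;   /* task index handled by stage s at step k */
            if (task >= 0 && task < num_tasks)
                printf("  S%d->T%d", s + 1, task);
        }
        printf("\n");
    }
    return 0;
}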

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48
CPC: G06F9/4843; G06F2209/483; Y02D10/00
Inventor: 吴玉平
Owner: INST OF MICROELECTRONICS CHINESE ACAD OF SCI