
Method for conducting instruction dispatching based on network load features in data stream architecture

A network-load-aware instruction scheduling technology, applied to concurrent instruction execution, electrical digital data processing, machine execution devices, etc. It addresses problems such as computing-node stalls, idle pipelines, and execution-speed bottlenecks, improving component utilization, network bandwidth utilization efficiency, network resource utilization, and throughput.

Inactive Publication Date: 2018-02-06
北京中科睿芯智能计算产业研究院有限公司 (Beijing Zhongke Ruixin Intelligent Computing Industry Research Institute Co., Ltd.)
Cites: 1 · Cited by: 2

AI Technical Summary

Problems solved by technology

If instructions are mapped so that traffic concentrates locally in one direction, the processing speed of the original on-chip router makes that direction a bottleneck for execution speed.
In addition, if instructions are still scheduled by the original round-robin polling strategy, the pipeline inside a computing node will stall and sit idle.

Method used




Embodiment Construction

[0027] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0028] The present invention provides a method for instruction scheduling based on network load characteristics in a data flow architecture, which includes the following steps:

[0029] A congestion detection unit is set inside each on-chip router, an output buffer unit is set at each exit of each on-chip router, and an instruction selection unit is set inside each computing node;
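The selection rule described above (dispatch only instructions that are ready and whose exit direction is not congested) can be sketched as follows. This is a minimal illustration, not the patent's implementation; all class and field names (`Direction`, `Instruction`, `InstructionSelector`, `slots`) are assumptions introduced for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    NORTH = "N"
    SOUTH = "S"
    EAST = "E"
    WEST = "W"

@dataclass
class Instruction:
    opcode: str
    direction: Direction  # exit direction toward the instruction's target
    ready: bool = False   # becomes True once all operands have arrived

class InstructionSelector:
    """Sketch of the instruction selection unit inside a computing node (PE).

    Picks the next instruction to dispatch: it must be in the 'ready'
    state AND its exit direction must currently be reported as
    non-blocked by the router's congestion detection unit.
    """
    def __init__(self, slots):
        self.slots = list(slots)  # the PE's instruction slot buffer

    def select(self, blocked_directions):
        for inst in self.slots:
            if inst.ready and inst.direction not in blocked_directions:
                self.slots.remove(inst)
                return inst
        return None  # nothing dispatchable this cycle; the PE waits
```

Under a plain round-robin scheme the selector would take the next ready instruction regardless of congestion; the `blocked_directions` check is what lets the PE skip past instructions headed into a congested port instead of stalling behind them.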

[0030] Figure 4 is a schematic diagram of the output buffer components in the on-chip rout...



Abstract

The invention discloses a method for instruction dispatching based on network load features in a data stream architecture. The method comprises the following steps: arranging a congestion detection unit inside each on-chip router, arranging an output buffer unit at each outlet of each on-chip router, and arranging an instruction selection unit inside each processing element; having each congestion detection unit detect, in real time, the congestion condition of each outlet of its on-chip router and report it to the corresponding processing element, the condition being either a blocked or a non-blocked state; having each instruction selection unit select a priority-dispatch instruction from its instruction slot and send it to the corresponding on-chip router, where selection requires that the instruction be in the 'ready' state and that the outlet direction corresponding to the instruction be in the non-blocked state; having each on-chip router temporarily store the received priority-dispatch instruction in the output buffer unit of the corresponding direction; and having each on-chip router send the instruction to the next on-chip router according to the target direction of the priority-dispatch instruction temporarily stored in the corresponding output buffer unit.
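The router-side steps of the abstract (buffer the received instruction per outlet, report "blocked" when full, forward toward the next router) can be sketched as below. This is an illustrative model only; the class name `OutputBuffer`, the `capacity` parameter, and the FIFO policy are assumptions, not details taken from the patent.

```python
from collections import deque

class OutputBuffer:
    """Sketch of the output buffer unit at one outlet of an on-chip router.

    The congestion detection unit reports this outlet as 'blocked'
    when the buffer is full, so the upstream processing element avoids
    dispatching instructions that would exit in this direction.
    """
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.queue = deque()

    def is_blocked(self):
        # Congestion condition reported to the processing element.
        return len(self.queue) >= self.capacity

    def accept(self, inst):
        # The router temporarily stores a received instruction here.
        if self.is_blocked():
            return False
        self.queue.append(inst)
        return True

    def forward(self):
        # Send the oldest buffered instruction toward the next router.
        return self.queue.popleft() if self.queue else None
```

With one such buffer per outlet direction, the blocked/non-blocked state fed back to the PE is simply whether the buffer in that direction has room, which is what couples the router's load to the PE's instruction selection.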

Description

technical field [0001] The invention relates to an instruction scheduling method in a data flow architecture, and in particular to a method for performing instruction scheduling based on network load characteristics in a data flow architecture. Background technique [0002] With the development of computer architecture, domain-specific computer architecture has become the main development trend. When oriented to a specific application, a special-purpose architecture exploits the application's characteristics to optimize its structure accordingly, so as to better realize the computing performance of the hardware. In the field of high-performance computing, data flow computing is an important branch of domain-specific computing architectures, and it has shown good performance and applicability. A data flow architecture usually includes several to more than a dozen computing nodes (called processing elements, or PEs for short), and each computing node is ...

Claims


Application Information

Patent Timeline
IPC(8): G06F9/38
CPC: G06F9/3867
Inventor: 冯煜晶, 张浩, 吴冬冬, 叶笑春 (Feng Yujing, Zhang Hao, Wu Dongdong, Ye Xiaochun)
Owner: 北京中科睿芯智能计算产业研究院有限公司 (Beijing Zhongke Ruixin Intelligent Computing Industry Research Institute Co., Ltd.)