Accelerating core virtual scratch pad memory method based on heterogeneous multi-core platform

A heterogeneous multi-core and memory technology applied in the field of memory access optimization for heterogeneous multi-core platforms. It addresses problems such as limited data bus bandwidth, slow memory access speed, and their impact on the overall performance of heterogeneous multi-core platforms, with the effects of saving SPM capacity, improving shared-data interaction speed, and reducing hardware cost.

Publication status: Inactive
Publication date: 2013-08-28
Owner: ZHEJIANG UNIV
Cites: 3 | Cited by: 11

AI Technical Summary

Problems solved by technology

However, this brings a new problem: shared-data interaction between the general-purpose processing core and the acceleration core on a heterogeneous multi-core platform requires the data to be copied several times, which involves multiple memory accesses. Compared with accessing the SPM, these memory accesses are very slow, which seriously slows down overall execution.
In addition, the bandwidth of the data bus between the general-purpose processing core and the acceleration core is limited, so transferring large amounts of data incurs a large delay, which also affects the overall performance of the heterogeneous multi-core platform to some extent.

Embodiment Construction

[0021] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments.

[0022] Figure 1 shows the storage hierarchy of the heterogeneous multi-core platform. The L1 Cache is the first-level cache, private to each general-purpose processing core. The L2 Cache is the second-level cache, shared by all general-purpose processing cores; because the memory access characteristics of the acceleration core differ greatly from those of the general-purpose processing cores, the acceleration core does not participate in the shared L2 Cache. SPM is short for Scratch Pad Memory; it serves as the local memory of the acceleration core, stores locally generated data, and acts as a buffer between the acceleration core and the memory.
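To make the hierarchy described above concrete, the following is a minimal sketch in C. The core count and all capacities (L1, L2, SPM sizes) are illustrative assumptions rather than values given in the patent; the sketch only models which level is private, which is shared by the general-purpose cores, and which belongs to the acceleration core.

```c
#include <stdio.h>

/* Illustrative parameters only; the patent does not specify them. */
#define NUM_CPU_CORES 4        /* general-purpose processing cores            */
#define L1_SIZE_KB    32       /* private L1 per general-purpose core         */
#define L2_SIZE_KB    2048     /* L2 shared by the general-purpose cores only */
#define SPM_SIZE_KB   64       /* scratch pad local to the acceleration core  */

typedef struct {
    int l1_kb[NUM_CPU_CORES];  /* one private L1 per general-purpose core      */
    int l2_kb;                 /* shared L2; the acceleration core does not
                                  participate in it                            */
    int spm_kb;                /* SPM: local store of the acceleration core,
                                  buffering data between the core and memory   */
} mem_hierarchy_t;

int main(void) {
    mem_hierarchy_t h = { .l2_kb = L2_SIZE_KB, .spm_kb = SPM_SIZE_KB };
    for (int i = 0; i < NUM_CPU_CORES; i++)
        h.l1_kb[i] = L1_SIZE_KB;
    printf("L1 (per CPU core): %d KB, shared L2: %d KB, accelerator SPM: %d KB\n",
           h.l1_kb[0], h.l2_kb, h.spm_kb);
    return 0;
}
```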

[0023] As shown in Figure 2, the present invention takes the storage hierarchy of Figure 1 as its basis and makes some optimizati...


Abstract

The invention discloses a virtual scratch pad memory method for the acceleration core of a heterogeneous multi-core platform. The method includes the following steps: (1) logically dividing the shared L2 Cache into two portions, a common L2 Cache and a virtual scratch pad memory (virtual SPM); (2) providing a virtual SPM access interface; (3) resetting the replacement policy of the common L2 Cache and the virtual SPM; (4) addressing the virtual SPM and the memory uniformly; and (5) defining MIPS assembly instructions for requesting and releasing virtual SPM space. By locally optimizing the storage cache subsystem of the heterogeneous multi-core platform, data interaction between the general-purpose processing core and the acceleration core is no longer carried out through the memory but is completed by sharing data in the virtual SPM. The method effectively increases the speed of shared-data interaction between the general-purpose processing core and the acceleration core, and noticeably improves the overall performance of the heterogeneous multi-core platform. Meanwhile, the virtual SPM can partially replace the acceleration core's own SPM, saving SPM capacity on the acceleration core and reducing hardware cost.
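As a rough illustration of steps (1), (4) and (5), the following C sketch models a shared L2 cache whose ways are logically split into a common part and a virtual SPM part, with the virtual SPM mapped at a fixed range of a unified address space and managed through a request/release interface. The names (vspm_alloc, vspm_free_all), the way counts, the base address, and the bump-pointer allocator are hypothetical stand-ins for the patent's MIPS instructions and hardware mechanism, not the actual implementation.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define L2_WAYS        16          /* assumed L2 associativity              */
#define VSPM_WAYS       4          /* ways carved out as virtual SPM        */
#define WAY_SIZE      (64 * 1024)  /* assumed bytes per way                 */
#define VSPM_SIZE     (VSPM_WAYS * WAY_SIZE)
#define VSPM_BASE     0xC0000000u  /* assumed base in the unified address map */

static size_t vspm_brk;            /* simple bump-pointer allocator state   */

/* Stand-in for the "virtual SPM space request" instruction:
 * reserve a region of the virtual SPM and return its address. */
static uint32_t vspm_alloc(size_t bytes) {
    if (vspm_brk + bytes > VSPM_SIZE)
        return 0;                              /* virtual SPM exhausted */
    uint32_t addr = VSPM_BASE + (uint32_t)vspm_brk;
    vspm_brk += bytes;
    return addr;
}

/* Stand-in for the "virtual SPM space release" instruction. */
static void vspm_free_all(void) { vspm_brk = 0; }

int main(void) {
    /* A buffer shared by the general-purpose core and the acceleration core
     * is placed in the virtual SPM once, instead of being copied through
     * main memory by each side. */
    uint32_t shared = vspm_alloc(4096);
    printf("common L2 ways: %d, virtual SPM ways: %d, shared buffer @ 0x%08x\n",
           L2_WAYS - VSPM_WAYS, VSPM_WAYS, (unsigned)shared);
    vspm_free_all();
    return 0;
}
```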

Description

Technical Field

[0001] The invention belongs to the field of memory access optimization for heterogeneous multi-core platforms in computer architecture, and specifically relates to a virtual scratch pad memory method for the acceleration core of a heterogeneous multi-core platform.

Background Technique

[0002] In recent decades, driven by the development of semiconductor technology and the demand for high-performance computing, computer architecture has advanced rapidly. Semiconductor technology has followed Moore's Law, and the number of transistors integrated on a processor chip keeps growing. The chip manufacturing process has progressed from 10 μm in 1971 to the current 22 nm, and Intel is expected to launch a 14 nm processor chip in 2014. Processor architecture has evolved from single-core to multi-core, from simple to complex.

[0003] The number of general-purpose cores integrated on the current mainstream mul...


Application Information

IPC(8): G06F13/16; G06F9/455
Inventors: 陈天洲, 潘平, 袁明敏, 孟静磊, 吴斌斌
Owner: ZHEJIANG UNIV