
Multi-core shared final stage cache management method and device for mixed memory

A last-level cache management technology, applied to memory systems, electrical digital data processing, and instruments, achieving the effect of reducing inter-core interference

Active Publication Date: 2017-06-30
SUZHOU LANGCHAO INTELLIGENT TECH CO LTD


Problems solved by technology

[0008] In view of the above technical problems, the object of the present invention is to provide a hybrid main-memory-oriented multi-core shared last-level cache management method and device. The method comprehensively considers the differences in physical characteristics between the different main-memory media in a hybrid main-memory system and optimizes the traditional LRU replacement algorithm, which aims only at reducing the number of misses. It thereby reduces storage energy consumption, reduces inter-core interference, improves the hit rate, and effectively improves the memory-access performance of the last-level cache.


Examples


Embodiment 1

[0072] Referring to figure 1, a hybrid main-memory-oriented multi-core shared last-level cache management method provided by the present invention is shown. The hybrid main memory includes DRAM and NVM; the last-level cache is divided into multiple cache sets, each cache set including multiple cache lines; and the data in the hybrid main memory and the last-level cache have a multi-way set-associative mapping relationship. The management method includes the following steps:

[0073] S101: Obtain the scheme by which the last-level cache ways are partitioned among the processor cores.

[0074] S102: Determine whether the access request received by the last-level cache hits a cache line of the last-level cache.

[0075] If it hits, proceed to step S103 to execute the cache line promotion policy (Promotion Policy);

[0076] If it misses, the data must be fetched from the upper-level cache or main memory; proceed directly to step S104 to execute the cache line insertion...
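The hit/miss dispatch of steps S101 to S104 can be sketched as follows. This is a minimal illustration only: the class and method names (CacheSet, promote, insert) and the placeholder LRU-style promotion are assumptions for clarity, not details taken from the patent.

```python
from enum import Enum

class Medium(Enum):
    DRAM = 0
    NVM = 1

class CacheLine:
    def __init__(self, tag, medium, dirty=False):
        self.tag = tag
        self.medium = medium   # which main-memory medium backs this line
        self.dirty = dirty

class CacheSet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.lines = []        # ordered: index 0 is the eviction candidate

    def lookup(self, tag):
        for line in self.lines:
            if line.tag == tag:
                return line
        return None

    def access(self, tag, medium):
        line = self.lookup(tag)
        if line is not None:
            self.promote(line)          # S103: hit -> promotion policy
            return "hit"
        line = CacheLine(tag, medium)   # fetch from upper-level cache / main memory
        self.insert(line)               # S104: miss -> insertion policy
        return "miss"

    def promote(self, line):
        # Placeholder promotion: move the hit line to the protected end
        # (an LRU-style MRU move; the patent's actual policy may differ).
        self.lines.remove(line)
        self.lines.append(line)

    def insert(self, line):
        if len(self.lines) >= self.num_ways:
            self.lines.pop(0)           # evict the lowest-priority line
        self.lines.append(line)
```

A two-way set, for example, reports "miss" on the first access to a tag and "hit" on a repeated access, with the promotion and insertion hooks marking where the patent's policies would plug in.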

Embodiment 2

[0092] Referring to figure 2, another hybrid main-memory-oriented multi-core shared last-level cache management method provided by the present invention is shown. The hybrid main memory includes DRAM and NVM; the last-level cache is divided into multiple cache sets, each cache set including multiple cache lines; and the data in the hybrid main memory and the last-level cache have a multi-way set-associative mapping relationship. The management method includes the following steps:

[0093] S201: Obtain the scheme by which the last-level cache ways are partitioned among the processor cores.

[0094] S202: Divide the cache lines in the last-level cache (Last Level Cache, LLC) into four types: dirty NVM data (Dirty-NVM, denoted DN), dirty DRAM data (Dirty-DRAM, denoted DD), clean NVM data (Clean-NVM, denoted CN), and clean DRAM data (Clean-DRAM, denoted CD); the priorities of the four cache line types DN, DD, CN, and CD are respective...
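The four-way classification of step S202 is a function of two bits: which medium backs the line and whether it is dirty. A sketch follows; note that the excerpt truncates before stating the actual priority ordering, so the ordering shown here (DN > DD > CN > CD, on the assumption that writing a dirty line back to NVM is the slowest and most energy-hungry case) is a plausible guess, not the patent's stated order.

```python
from enum import Enum

class Medium(Enum):
    DRAM = 0
    NVM = 1

def classify(medium, dirty):
    """Four-way cache line class of step S202: DN, DD, CN, or CD."""
    if dirty:
        return "DN" if medium is Medium.NVM else "DD"
    return "CN" if medium is Medium.NVM else "CD"

# Assumed priority ordering (higher = more costly to evict); the
# patent's actual ordering is truncated in this excerpt.
PRIORITY = {"DN": 3, "DD": 2, "CN": 1, "CD": 0}
```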

Implementation

[0125] Referring to image 3, a schematic diagram of the overall system architecture provided by this embodiment is shown. The main memory of the system is composed of DRAM and NVM, which reside in the same linear address space. The on-chip cache system has a multi-level hierarchical structure, and the last-level cache is shared by the cores (core1 and core2). In addition, the present invention sets an AFM for each processor core to identify the memory-access characteristics of the application program running on that core, so as to obtain the hit behavior of the cache lines corresponding to that application.
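The statement that DRAM and NVM share one linear address space implies that the backing medium of a physical address can be determined by a simple range check. The sketch below illustrates this; the split point (DRAM_SIZE) is an assumed parameter for illustration, not a value from the patent.

```python
# Assumed layout: the first DRAM_SIZE bytes of the linear physical
# address space are backed by DRAM, the remainder by NVM.
DRAM_SIZE = 4 << 30   # assume 4 GiB of DRAM (illustrative only)

def backing_medium(phys_addr):
    """Which main-memory medium serves this physical address."""
    return "DRAM" if phys_addr < DRAM_SIZE else "NVM"
```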

[0126] Referring to Figure 4, a schematic diagram of the internal structure of the AFM provided by this embodiment is shown. The period during which the total number of instructions executed by the processor's cores grows from zero to 100 million is taken as one counting cycle. At the beginning of each counting cycle, 32 cache sets are selected as the monitoring sample of the acc...
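The counting-cycle mechanism can be sketched as follows. The 100-million-instruction cycle length and the 32 sampled sets come from the paragraph above; the evenly spaced sampling rule and the class/method names (AFM, retire, record_access) are assumptions, since the excerpt truncates before describing how the sample sets are chosen.

```python
CYCLE_INSNS = 100_000_000   # one counting cycle = 100M retired instructions
SAMPLE_SETS = 32            # sets monitored per cycle

class AFM:
    def __init__(self, total_sets):
        self.total_sets = total_sets
        self.insn_total = 0            # instructions retired this cycle, all cores
        self.samples = self.pick_samples()
        self.hits = 0
        self.accesses = 0

    def pick_samples(self):
        # Assumed rule: evenly spaced sets across the cache.
        stride = max(1, self.total_sets // SAMPLE_SETS)
        return {i * stride for i in range(SAMPLE_SETS)}

    def retire(self, insns):
        self.insn_total += insns
        if self.insn_total >= CYCLE_INSNS:   # cycle boundary reached
            self.insn_total = 0
            self.hits = self.accesses = 0    # restart monitoring
            self.samples = self.pick_samples()

    def record_access(self, set_index, hit):
        # Only accesses to the sampled sets contribute to the statistics.
        if set_index in self.samples:
            self.accesses += 1
            self.hits += hit
```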



Abstract

The invention relates to the technical field of computer storage, in particular to a multi-core shared last-level cache management method and device for a hybrid memory. The disclosed management method comprises the following steps: obtaining the way-partitioning scheme of the last-level cache among the processor cores, and judging whether an access request received by the last-level cache hits a cache line of the last-level cache. The invention also discloses a multi-core shared last-level cache management device for the hybrid memory, comprising a way-partitioning module and a judgment module. The method and device comprehensively consider the physical characteristics of the different main-memory media in a hybrid memory system and optimize the traditional LRU replacement algorithm, which aims only at reducing the number of misses, thereby reducing storage energy overhead, reducing inter-core interference, improving the hit rate, and effectively improving the memory-access performance of the last-level cache.

Description

Technical field
[0001] The invention relates to the technical field of computer storage, in particular to a hybrid main-memory-oriented multi-core shared last-level cache management method and device.
Background technique
[0002] As the scale of the data sets processed by applications (such as search engines and machine learning) continues to expand and the number of on-chip processor cores continues to increase, SRAM/DRAM-based storage systems have gradually become the bottleneck of system energy consumption and scalability. Emerging non-volatile memories (NVM, Non-Volatile Memory), such as magnetoresistive random access memory (Magnetic Random Access Memory, MRAM), spin-transfer torque magnetoresistive memory (Spin-transfer-torque Magnetic Random Access Memory, STT-MRAM), resistive random access memory (ReRAM), and phase-change random access memory (PCM), are considered very competitive memories in the next-generation stora...

Claims


Application Information

Patent Timeline
IPC(8): G06F12/0811; G06F12/126; G06F12/128; G06F12/0842; G06F12/0897
CPC: G06F12/0811; G06F12/0842; G06F12/0897; G06F12/126; G06F12/128; Y02D10/00
Inventor 张德闪
Owner SUZHOU LANGCHAO INTELLIGENT TECH CO LTD