
A Cache Realization Method Based on Interleaved Storage

A Cache implementation method based on interleaved storage, in the field of integrated circuit design. It addresses the problems of large area overhead, high power consumption, and an excessive number of small memory blocks, and achieves correct valid-bit indication, reduced read power consumption, and small area.

Active Publication Date: 2020-07-24
XIAN MICROELECTRONICS TECH INST

Problems solved by technology

[0006] To solve this problem, two solutions come to mind. The first is to use one memory bank per way of the Cache: with M ways there are M memories, and every bank's bit width equals the size of a Cache line, i.e., N words. The biggest disadvantage of this scheme is its power consumption: in a read cycle, all N words of the indexed row must be read from all M ways (M*N words in total), after which the K words required by the processor are selected. If K is much smaller than N, a great deal of power is wasted reading useless data. This organization is also hard to realize, because a memory of the required capacity with such a large bit width does not necessarily exist in the selected process library.

The second solution subdivides each memory bank of the first: for example, each bank can be split into N memories of unchanged depth and a bit width of 1 word, or, since N is an integer multiple of K, into N/K memories of unchanged depth and a bit width of K words. The biggest disadvantage of this scheme is that it produces too many small blocks (memory banks), which incurs excessive area overhead.
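The tradeoff between the two straw-man schemes above can be sketched with a small cost model. This is an illustration only: the function names and the example parameters (M=4 ways, N=8 words per line, K=2 words per access) are hypothetical, not taken from the patent.

```python
# Hypothetical cost model for the two straw-man Cache organizations above.
# M = number of ways, N = words per Cache line, K = words per pipeline access.

def scheme1_costs(M, N, K):
    """One bank per way, each bank N words wide."""
    banks = M
    words_read_per_access = M * N  # read full lines of all ways, then select K
    return banks, words_read_per_access

def scheme2_costs(M, N, K):
    """Each way subdivided into N/K banks, each K words wide."""
    assert N % K == 0
    banks = M * (N // K)           # many small blocks -> area overhead
    words_read_per_access = M * K  # one K-word bank per way at the indexed offset
    return banks, words_read_per_access

# Example: a 4-way Cache, 8-word lines, 2-word pipeline bandwidth.
M, N, K = 4, 8, 2
print(scheme1_costs(M, N, K))  # (4, 32): few banks, but 32 words read to use 2
print(scheme2_costs(M, N, K))  # (16, 8): low read power, but 16 small banks
```

The model makes the text's point concrete: scheme 1 wastes read power by a factor of M*N/K, while scheme 2 multiplies the bank count by N/K.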
[0007] The question, then, is what organizational structure the cache memory should adopt so that a whole Cache line of one way can be updated in a single cycle (ensuring consistency with the valid bits), so that the K words at the same low-order address of all ways can be fetched in the same cycle, and so that both the power consumption and the hardware area overhead are smaller than in the two schemes above. A search of the relevant literature and patents found no solution to this problem.


Embodiment Construction

[0033] The present invention provides a Cache implementation method based on interleaved storage. When the required condition relating M, N, and K is satisfied, where N is the size of the Cache line in words, K is the data bit width between the pipeline and the Cache (N being an integer multiple of K), and M is the number of Cache ways, all N words of a cache line can be filled in one cycle, and at the same time the same address can be used to read the K words corresponding to all M ways within the hit-judgment cycle, meeting the pipeline's timing requirements for Cache access.

[0034] The interleaved-storage Cache implementation method of the present invention comprises the following steps:

[0035] S1. Determine the organization of the DATA memory and the TAG memory according to the number of Cache ways fixed in the design, the size of each Cache line, the data bandwidth between the pipeline and the Cache, and the Cache capacity;
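One plausible interleaved DATA-memory mapping for step S1 can be sketched as follows. This is a sketch under assumptions: the bank count B and the `bank_of` mapping are illustrative choices consistent with the stated goals (one-cycle line fill, same-cycle read of all ways), not the patent's exact organization, which is not fully reproduced in this summary.

```python
# Illustrative interleaved bank mapping (assumed, not from the patent).
# B banks, each K words wide, chosen so that both a full line fill
# (N/K sub-blocks of one way) and a same-offset read of all M ways
# land in distinct banks and thus complete in one cycle.

M, N, K = 4, 8, 2          # ways, words per line, words per access (example)
B = max(M, N // K)         # number of banks (assumption)

def bank_of(way, word_offset):
    """Bank holding the K-word sub-block at word_offset of a given way."""
    return (way + word_offset // K) % B

# A line fill of way 0 touches N/K distinct banks -> one-cycle fill:
fill_banks = {bank_of(0, off) for off in range(0, N, K)}
print(len(fill_banks))      # 4 distinct banks, no conflict

# Reading the K words at one offset from all M ways touches M distinct banks:
read_banks = {bank_of(w, 0) for w in range(M)}
print(len(read_banks))      # 4 distinct banks, no conflict
```

Because the way index rotates the bank assignment, a whole line of one way and a same-offset slice across all ways each map to pairwise distinct banks, which is exactly the conflict-freedom the method requires.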

[0036] For example, the Cac...


Abstract

The invention discloses a Cache implementation method based on interleaved storage. Under the stated condition, all N words of a Cache line are filled in one cycle, and at the same time the same address is used to read the K words corresponding to all M ways during the hit-judgment cycle, meeting the timing requirements of the pipeline for Cache access; here N is the size of the Cache line, K is the data bit width between the pipeline and the Cache, N is an integer multiple of K, and M is the number of Cache ways. The invention ensures that all the data of one Cache line of the same way can be written simultaneously and that the data at the same address in all ways can be read out at the same time, fully utilizing the data bandwidth of a high-performance on-chip bus and satisfying the processor pipeline's timing requirements for the Cache.

Description

Technical field

[0001] The invention belongs to the technical field of integrated circuit design, and in particular relates to a Cache implementation method based on interleaved storage.

Background technique

[0002] A high-performance processor usually uses a hierarchical multi-level cache as a buffer for data and instructions to bridge the speed gap between the processor and the memory. The first-level Cache is usually located inside the processor core, cooperates closely with the pipeline, has a small access delay, and is basically matched to the execution rate of the processor. To obtain parallel access to instructions and data, the Harvard structure is usually used, i.e., the Cache is divided into an independent instruction cache and data cache. The level-1 Cache is always an important factor affecting processor performance. The main memory of an embedded high-performance processor is usually a fast memory with a burst access mode, coupled with a high-bandwidth...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F12/0871
CPC: G06F12/0871; G06F12/0851
Inventors: 崔媛媛, 李红桥, 郭娜娜, 谢琰瑾, 杨博
Owner: XIAN MICROELECTRONICS TECH INST