
Cache memory and method for handling effects of external snoops colliding with in-flight operations internally to the cache

A cache memory and internal snoop-handling technology, applied in the field of cache memories in microprocessors. It addresses the problem of cache coherence and the increased timing and complexity of cache control logic needed to handle a cancelled in-flight operation, and achieves the effects of improving processor cycle timing and reducing the complexity of the other caches.

Active Publication Date: 2006-03-23
IP FIRST

AI Technical Summary

Benefits of technology

[0021] An advantage of the present invention is that the cache keeps the results of the snoop collision completely contained within itself. This potentially improves processor cycle timing, particularly by eliminating the problems associated with inter-cache communications across the processor integrated circuit previously needed by the conventional approach to handle cancellation of an in-flight operation whose address collided with an external snoop operation. Additionally, it reduces the complexity of the other caches in the processor that initiate the in-flight operation.

Problems solved by technology

The presence of multiple processors, each having its own cache that caches data from a shared memory, introduces a problem of cache coherence. A snoop that collides with the address of a castout while the castout is in flight introduces significant design problems that must be addressed. The conventional approach is to cancel the in-flight operation; however, this approach has negative side effects. It increases the timing and complexity of the cache control logic, which must be able to handle the cancelled in-flight operation. The longer the L1 must wait to overwrite the castout line, the more complicated the process to back out and/or retry the operation, and the added delay may adversely affect performance. Furthermore, the cancellation and handshaking communication between the caches may take place on signals that are relatively long and have significant propagation delay if the two cache blocks are a relatively great distance from one another, which may consequently create critical timing paths.
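To make the drawback concrete, the following is a minimal C sketch of the conventional cancellation handshake described above; every identifier (L1Castout, l2_handle_snoop_conventional, and so on) is an illustrative assumption, not a name from the patent.

```c
/* Sketch of the CONVENTIONAL approach the patent improves upon:
 * on an address collision, L2 cancels the in-flight castout and
 * signals L1 to back out and retry. All names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t line_addr;  /* address of the line being cast out to L2 */
    bool     in_flight;  /* castout issued but not yet completed     */
} L1Castout;

/* Returns true if the snoop collided with the in-flight castout.
 * The cancel message and the later retry handshake must cross the
 * die between the L2 and L1 blocks -- the long, slow signals that
 * the text above identifies as a source of critical timing paths. */
bool l2_handle_snoop_conventional(L1Castout *castout, uint32_t snoop_addr)
{
    if (castout->in_flight && castout->line_addr == snoop_addr) {
        castout->in_flight = false;  /* cancel the in-flight castout */
        printf("cancel -> L1: back out line 0x%08x and retry\n",
               (unsigned)castout->line_addr);
        return true;
    }
    return false;
}
```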

Method used




Embodiment Construction

[0030] Referring now to FIG. 1, a block diagram illustrating a cache hierarchy in a microprocessor 100 according to the present invention is shown.

[0031] Microprocessor 100 comprises a cache hierarchy that includes a level-one instruction (L1I) cache 102, a level-one data (L1D) cache 104, and a level-two (L2) cache 106. The L1I 102 and L1D 104 cache instructions and data, respectively, and L2 cache 106 caches both instructions and data, in order to reduce the time required for microprocessor 100 to fetch instructions and data. L2 cache 106 is between the system memory and the L1I 102 and L1D 104 in the memory hierarchy of the system. The L1I 102, L1D 104, and L2 cache 106 are coupled together. The L1I 102 and L2 cache 106 transfer cache lines between one another, and the L1D 104 and L2 cache 106 transfer cache lines between one another. For example, the L1I 102 and L1D 104 may cast out cache lines to or load cache lines from L2 cache 106.
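A minimal C sketch of this hierarchy follows, assuming illustrative line counts and a direct struct-of-structs model; none of the sizes, type names, or function names below come from the patent.

```c
/* Toy model of the FIG. 1 cache hierarchy: two L1 caches and a
 * unified L2 between them and system memory. Sizes are assumptions. */
#include <stdint.h>
#include <string.h>

#define L1_LINES   512
#define L2_LINES   4096
#define LINE_BYTES 64

typedef struct {
    uint32_t tag[L1_LINES];
    uint8_t  data[L1_LINES][LINE_BYTES];
} L1Cache;

typedef struct {
    uint32_t tag[L2_LINES];
    uint8_t  data[L2_LINES][LINE_BYTES];
} L2Cache;

typedef struct {
    L1Cache l1i;  /* level-one instruction cache (102) */
    L1Cache l1d;  /* level-one data cache (104)        */
    L2Cache l2;   /* level-two unified cache (106)     */
} CacheHierarchy;

/* One of the transfers paragraph [0031] mentions: the L1D casting a
 * line out to the L2. Placement is simplified to a toy direct map. */
void l1d_castout_to_l2(CacheHierarchy *h, int l1_idx)
{
    int l2_idx = (int)(h->l1d.tag[l1_idx] % L2_LINES);
    h->l2.tag[l2_idx] = h->l1d.tag[l1_idx];
    memcpy(h->l2.data[l2_idx], h->l1d.data[l1_idx], LINE_BYTES);
}
```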

[0032] Microprocessor 100 also includes a bu...



Abstract

A cache memory that completes an in-flight operation with another cache that collides with a snoop operation, rather than canceling the in-flight operation. Operations to the cache comprise a query pass and one or more finish passes. When the cache detects a snoop query intervening between the query pass and a finish pass of the in-flight operation, the cache generates a more up-to-date status for the snoop query that takes into account the tag status to which the in-flight finish pass will update the implicated cache line. This is necessary because otherwise the snoop query might not see the effect of the in-flight finish pass status update. This allows the in-flight finish pass to complete instead of being cancelled, allows the snoop finish pass to correctly update the status after the in-flight finish pass completes, and allows the cache to provide modified data from the cache line to the externally snooped transaction.
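The following minimal C sketch illustrates the status-merge idea from the abstract, assuming MESI-style line states for concreteness (the excerpt does not specify the coherence protocol); all identifiers are hypothetical.

```c
/* Sketch of the invention's key idea: when a snoop query collides
 * with an operation whose finish pass is still in flight, report the
 * status the finish pass WILL write instead of the stale tag status. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } LineState;

typedef struct {
    bool      valid;         /* query pass done, finish pass pending */
    uint32_t  line_addr;
    LineState pending_state; /* status the finish pass will write    */
} InFlightOp;

/* Snoop query pass: merge in the pending status on a collision so the
 * snoop's finish pass updates the line correctly AFTER the in-flight
 * finish pass completes -- no cancellation, no inter-cache handshake. */
LineState snoop_query_status(LineState tag_state,
                             const InFlightOp *op,
                             uint32_t snoop_addr)
{
    if (op->valid && op->line_addr == snoop_addr)
        return op->pending_state;  /* more up-to-date than the tag */
    return tag_state;
}
```

The snoop thus sees, for example, MODIFIED rather than a stale EXCLUSIVE, so its finish pass can supply the modified data to the externally snooped transaction once the in-flight finish pass has completed.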

Description

[0001] This application claims priority based on U.S. Provisional Application, Serial No. 60/375469, filed Apr. 24, 2002, entitled METHOD FOR HANDLING AFFECTS OF EXTERNAL SNOOPS INTERNALLY TO L2 CACHE.

FIELD OF THE INVENTION

[0002] This invention relates in general to the field of cache memories in microprocessors, and particularly to multi-pass pipelined caches and the effects of external snoop operations thereon.

BACKGROUND OF THE INVENTION

[0003] Many modern computer systems are multi-processor systems. That is, they include multiple processors coupled together on a common bus that share the computing load of the system. In addition, the multiple processors typically share a common system memory. Still further, each of the processors includes a cache memory, or typically a hierarchy of cache memories.

[0004] A cache memory, or cache, is a memory internal to the processor that stores a subset of the data in the system memory and is typically much smaller than the system memory. Tra...


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G06F12/00; G06F12/08
CPC: G06F12/0831
Inventor: HARDAGE, JAMES N. JR.
Owner: IP FIRST