
Handling cache miss in an instruction crossing a cache line boundary

A cache and memory-line technology, applied in memory systems, instruction analysis, and concurrent instruction execution, addressing problems such as increased complexity in the fetch stage.

Inactive Publication Date: 2008-07-16
QUALCOMM INC

AI Technical Summary

Problems solved by technology

However, such solutions add complexity in the construction of the fetch stage, in its interconnection with other memory resources, and in the management of the instruction flow to and through the fetch stage.




Embodiment Construction

[0022] In the following detailed description, numerous specific details are set forth by way of example in order to provide a thorough understanding of the related teachings. However, it will be apparent to one skilled in the art that the present teachings may be practiced without such details. In other instances, well-known methods, procedures, components, and circuits have been described at a relatively high level and without detail in order to avoid unnecessarily obscuring aspects of the present teachings.

[0023] As discussed herein, examples of systems or portions of a processor intended to fetch instructions for the processor include an instruction cache and a plurality of processing stages. Thus, the fetch part itself is usually formed by a processing stage pipeline. Instructions are allowed to cross cache line boundaries. When the stage from which a request for higher level memory is issued has a first portion of an instruction that crosses a cache line boundary, the...
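As a rough illustration of the boundary-crossing condition described above, the following sketch checks whether a fetch spans two cache lines and returns the line-aligned addresses the pipeline would need. The 64-byte line size is an assumed value, not taken from the patent.

```python
LINE_SIZE = 64  # bytes per cache line (assumed for illustration)

def crosses_line_boundary(addr: int, length: int, line_size: int = LINE_SIZE) -> bool:
    """Return True if an instruction starting at `addr` with `length`
    bytes spans two cache lines."""
    first_line = addr // line_size
    last_line = (addr + length - 1) // line_size
    return first_line != last_line

def line_addresses(addr: int, length: int, line_size: int = LINE_SIZE):
    """Return the one or two line-aligned addresses the fetch requires."""
    first = (addr // line_size) * line_size
    if crosses_line_boundary(addr, length, line_size):
        return [first, first + line_size]
    return [first]
```

A fetch of a 4-byte instruction at address 62 spans lines 0 and 64, so the pipeline must process two line addresses; at address 60 it fits in one line.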


Abstract

A fetch section of a processor comprises an instruction cache and a pipeline of several stages for obtaining instructions. Instructions may cross cache line boundaries. The pipeline stages process two addresses to recover a complete boundary-crossing instruction. During such processing, if the second piece of the instruction is not in the cache, the fetch with regard to the first line is invalidated and recycled. On this first pass, processing of the address for the second part of the instruction is treated as a pre-fetch request to load instruction data to the cache from higher level memory, without passing any of that data to the later stages of the processor. When the first line address passes through the fetch stages again, the second line address follows in the normal order, and both pieces of the instruction can be fetched from the cache and combined in the normal manner.
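The invalidate-and-recycle policy in the abstract can be modeled behaviorally. This is a minimal sketch under assumed names (`FetchModel`, `prefetch`), not the patented hardware implementation: on the first pass, a miss on the second line downgrades that access to a prefetch and recycles the first-line fetch; on the second pass both halves hit and are combined.

```python
class FetchModel:
    """Behavioral model of a fetch section handling a boundary-crossing
    instruction (names and structure are illustrative assumptions)."""

    def __init__(self, cache_lines):
        self.cache = set(cache_lines)  # line addresses present in the cache
        self.output = []               # instruction pieces passed downstream

    def prefetch(self, line):
        # Load the line from higher-level memory into the cache without
        # forwarding any of its data to later pipeline stages.
        self.cache.add(line)

    def fetch_crossing(self, line1, line2):
        """Fetch an instruction split across line1 and line2.
        Returns the number of passes through the fetch stages."""
        passes = 1
        if line2 not in self.cache:
            # First pass: invalidate and recycle the line1 fetch; treat
            # the line2 access as a prefetch only.
            self.prefetch(line2)
            if line1 not in self.cache:
                self.prefetch(line1)
            passes = 2
        # Final pass: both halves hit in the cache and are combined.
        self.output.append((line1, line2))
        return passes
```

With line 0 cached and line 64 absent, the first fetch takes two passes (miss, prefetch, recycle); a repeat fetch hits in one pass.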

Description

Technical Field

[0001] The present subject matter relates to efficiently handling the fetching of an instruction that crosses a cache line boundary, especially when a second portion of the instruction is not already in the cache from which the processor attempted to fetch the instruction (a cache miss).

Background

[0002] Modern microprocessors and other programmable processor circuits utilize memory hierarchies to store and serve instructions. Common levels include the instruction cache, or L1 cache, relatively close to the core of the processor, e.g., on the processor chip. Instructions are loaded into the L1 instruction cache from a slightly more distant L2 cache, which stores both instructions and data. One or both cache memories are loaded with instructions from main memory, and the main memory may in turn be loaded from a more distant source, such as a disk drive of the device incorporating the processor. Cache memory improves performance. Fetching instructions from, fo...
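The hierarchy the background describes (L1 instruction cache backed by L2, backed by main memory, each level filled from the one below) can be sketched as follows. The latency figures are illustrative assumptions, not values from the patent.

```python
class Hierarchy:
    """Minimal two-level cache model over a main-memory backing store.
    Cycle costs are assumed for illustration only."""

    L1_COST, L2_COST, MEM_COST = 1, 10, 100  # access latencies in cycles

    def __init__(self):
        self.l1 = {}      # L1 instruction cache: line address -> data
        self.l2 = {}      # L2 cache (instructions and data)
        self.memory = {}  # main memory backing store

    def fetch(self, line):
        """Return (data, cycles), filling caches on the way back."""
        if line in self.l1:
            return self.l1[line], self.L1_COST
        if line in self.l2:
            data = self.l2[line]
            self.l1[line] = data              # fill L1 on an L2 hit
            return data, self.L2_COST
        data = self.memory[line]              # miss in both: main memory
        self.l2[line] = self.l1[line] = data  # fill both caches
        return data, self.MEM_COST
```

A cold fetch pays the full main-memory latency; the same line then hits in L1, which is why keeping both halves of a boundary-crossing instruction cached matters.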

Claims


Application Information

IPC(8): G06F9/38; G06F12/08
CPC: G06F12/0886; G06F9/3816; G06F9/3802; G06F9/30149; G06F12/0875; G06F2212/655; G06F9/30047; G06F9/30181; G06F9/3814; G06F9/3875; G06F9/38; G06F12/08
Inventor: Brian Michael Stempel, Jeffrey Todd Bridges, Rodney Wayne Smith, Thomas Andrew Sartorius
Owner QUALCOMM INC