46 results about "Trace Cache" patented technology

Trace Cache (also known as execution trace cache) is a specialized cache that stores a dynamic stream of instructions known as a trace. By holding traces of instructions that have already been fetched and decoded, it increases instruction fetch bandwidth and reduces power consumption (as in the Intel Pentium 4). A Trace Processor is an architecture designed around the trace cache that processes instructions at trace-level granularity.
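The idea above can be illustrated with a minimal sketch: a cache keyed by a trace's start address that returns the already-decoded operations on a hit, so the front end can skip re-fetching and re-decoding. The class, method names, and FIFO eviction policy are illustrative assumptions, not taken from any real design.

```python
class TraceCache:
    """Toy trace cache: start address -> list of decoded micro-ops."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}   # start_address -> decoded trace
        self.order = []     # FIFO eviction order

    def fill(self, start_address, decoded_ops):
        """Store a freshly built trace, evicting the oldest entry on overflow."""
        if start_address not in self.entries and len(self.entries) >= self.capacity:
            victim = self.order.pop(0)
            del self.entries[victim]
        if start_address not in self.entries:
            self.order.append(start_address)
        self.entries[start_address] = list(decoded_ops)

    def lookup(self, start_address):
        """Return the cached trace on a hit, or None on a miss."""
        return self.entries.get(start_address)

tc = TraceCache()
tc.fill(0x400, ["load", "add", "branch"])
print(tc.lookup(0x400))   # hit: the decoded trace, no re-decode needed
print(tc.lookup(0x800))   # miss: None
```

On a hit the decoded trace is delivered directly; on a miss the front end would fall back to the instruction cache and decoder.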

Method and apparatus for dynamic branch prediction utilizing multiple stew algorithms for indexing a global history

Toggling between two ways of indexing the global history: when a trace is read out of the trace cache, an entry in the global history is accessed with a stew created from the branch predictions implied by the ordering of instructions within that trace; when a trace within the trace cache contains more than one branch instruction and at least a second branch instruction is read out, an entry is accessed with repeatable variations of that stew.
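The "stew" described above can be sketched as a hash that folds the outcomes of the branches inside a trace into an index for a global-history table, so each additional branch read out yields a repeatable variation of the index. The shift/XOR mixing and table size below are generic assumptions, not Intel's actual algorithm.

```python
GH_BITS = 10  # assumed width of the global-history index

def stew_index(trace_start, branch_outcomes):
    """Fold a trace's start address and per-branch outcomes into one index."""
    stew = trace_start
    for taken in branch_outcomes:
        stew = ((stew << 1) ^ int(taken)) & ((1 << GH_BITS) - 1)
    return stew

# One branch read out vs. two branches read out from the same trace
# produce different, but repeatable, indices into the global history.
print(stew_index(0x3F, [True]))
print(stew_index(0x3F, [True, False]))
```

Because the mixing is deterministic, re-reading the same trace reproduces the same sequence of indices, which is what lets the predictor toggle between the variants consistently.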
Owner:INTEL CORP

Transitioning from instruction cache to trace cache on label boundaries

Various embodiments of methods and systems for implementing a microprocessor that includes a trace cache and attempts to transition fetching from instruction cache to trace cache only on label boundaries are disclosed. In one embodiment, a microprocessor may include an instruction cache, a branch prediction unit, a prefetch unit, and a trace cache. The prefetch unit may fetch instructions from the instruction cache until the branch prediction unit outputs a predicted target address for a branch instruction. When the branch prediction unit outputs a predicted target address, the prefetch unit may check for an entry matching the predicted target address in the trace cache. If a match is found, the prefetch unit may fetch one or more traces from the trace cache in lieu of fetching instructions from the instruction cache.
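The fetch-steering decision described above can be sketched as a single step: fetch from the instruction cache until the predictor produces a target, then probe the trace cache for an entry matching that target (a label boundary) and switch over only on a hit. The dictionaries standing in for the predictor, trace cache, and instruction cache are illustrative assumptions.

```python
def fetch_step(pc, predicted_targets, trace_entries, icache_lines):
    """One fetch decision.

    predicted_targets: pc -> predicted branch target (absent if no branch)
    trace_entries:     start address -> cached trace
    icache_lines:      address -> instruction cache contents
    """
    target = predicted_targets.get(pc)
    if target is not None:
        trace = trace_entries.get(target)
        if trace is not None:
            return ("trace", trace)        # label boundary hit: switch sources
        pc = target                        # miss: redirect instruction fetch
    return ("icache", icache_lines.get(pc))

# Branch at 0x10 predicted to 0x40, which has a cached trace:
print(fetch_step(0x10, {0x10: 0x40}, {0x40: ["uop1", "uop2"]}, {}))
```

Restricting the transition to predicted targets means the trace cache is only probed at addresses that can legitimately start a trace, avoiding mid-trace entry points.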
Owner:MEDIATEK INC

Mechanism and method for two level adaptive trace prediction

A trace cache system is provided comprising:
  • a trace start address cache for storing trace start addresses with successor trace start addresses;
  • a trace cache for storing traces of executed instructions;
  • a trace history table (THT) for storing trace numbers in rows;
  • a branch history shift register (BHSR) or a trace history shift register (THSR) that stores the history of branches or traces executed, respectively;
  • a THT row selector for selecting a trace number row from the THT, the selection derived from a combination of a trace start address and history information from the BHSR or THSR; and
  • a trace number selector for selecting a trace number from the selected row and outputting it as the predicted trace number.
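The two-level lookup above can be sketched in a few lines: the trace start address is combined (here by XOR) with the history register to select a THT row, and a second selector picks a trace number within that row. The table dimensions, the combining function, and the column selection are illustrative assumptions.

```python
ROWS, COLS = 16, 4  # assumed THT geometry

def predict_trace(tht, start_address, history):
    """Two-level adaptive trace prediction sketch."""
    row = (start_address ^ history) % ROWS   # THT row selector
    col = history % COLS                     # trace number selector
    return tht[row][col]                     # predicted trace number

# A THT filled with distinct trace numbers for demonstration:
tht = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]
print(predict_trace(tht, start_address=0x2A, history=0b0110))
```

As in two-level branch predictors, the first level (the history register) captures recent control flow and the second level (the THT) maps each observed pattern to its most likely successor trace.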
Owner:IBM CORP

Hypertransport exception detection and processing

In accordance with the present invention, a system for detecting transaction errors in a system comprising a plurality of data processing devices that use a common system interconnect bus comprises a node controller operably connected to the system interconnect bus and a plurality of interface agents communicatively coupled to the node controller. Errors corresponding to transactions between the interface agents and other processing modules in the system are directed to the node controller, and transaction errors that would not normally be communicated to the system interconnect bus are communicated by the node controller to the system interconnect bus so that they are available for detection. In an embodiment of the present invention, the interface agents operate in accordance with the HyperTransport protocol. A system control and debug unit and a trace cache operably connected to the system bus can be used to diagnose and store error conditions.
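The error-forwarding path can be sketched minimally: a transaction error raised by an interface agent, which would otherwise stay local, is routed through the node controller onto the system bus, where a debug unit can record it in a trace cache. All class and method names here are hypothetical stand-ins for the hardware blocks.

```python
class NodeController:
    """Forwards interface-agent errors onto the shared interconnect."""

    def __init__(self, system_bus):
        self.system_bus = system_bus

    def report_error(self, agent_id, error):
        # Surface an otherwise-local agent error on the system bus.
        self.system_bus.append((agent_id, error))

system_bus = []   # stands in for the system interconnect bus
error_log = []    # stands in for the debug unit's trace cache

nc = NodeController(system_bus)
nc.report_error("ht_agent_0", "posted-write timeout")
error_log.extend(system_bus)   # debug unit observes the bus and records errors
print(error_log)
```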
Owner:AVAGO TECH INT SALES PTE LTD

Virtualized-platform-based method for swapping in disk pages

The invention discloses a method for swapping in disk pages on a virtualized platform, which comprises the following steps: (1) establishing a trace cache used to trace the page swap-in operations of the various processes, where the trace cache comprises a plurality of items and each item records information about a process's most recent page swap-in operation; (2) tracing the page swap-in operation of each guest process and, on a page fault, finding the matching item in the trace cache according to the identifier of the faulting process; and (3) calculating from the obtained item the number of pages to prefetch for the current swap-in. The dynamic page swap-in calculation method realized by the invention improves the page continuity of each guest's memory through a page-swapping mechanism based on the guest's memory state, and can dynamically change the number of pages swapped in on each fault, so that disk I/O (input/output) access frequency is substantially lowered and system efficiency is improved while preserving the disk swap cache hit rate.
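Steps (1) through (3) above can be sketched as follows: a per-process trace cache records each process's last swap-in, and on a page fault the matching item decides how many pages to prefetch. The specific growth/shrink policy here (double the count on sequential access, reset to one otherwise) is an assumption for illustration, not the patent's exact formula.

```python
trace_cache = {}   # process id -> (last faulting page, last prefetch count)

def pages_to_prefetch(pid, faulting_page, max_prefetch=8):
    """Return how many pages to swap in for this fault, updating the trace cache."""
    last_page, last_count = trace_cache.get(pid, (None, 1))
    if last_page is not None and faulting_page == last_page + last_count:
        count = min(last_count * 2, max_prefetch)   # sequential: prefetch more
    else:
        count = 1                                   # non-sequential: back off
    trace_cache[pid] = (faulting_page, count)
    return count

print(pages_to_prefetch(42, 100))   # first fault: 1
print(pages_to_prefetch(42, 101))   # sequential: 2
print(pages_to_prefetch(42, 103))   # sequential again: 4
```

Ramping the count up only while faults stay sequential is what keeps disk I/O low without wasting prefetches on random access patterns.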
Owner:ZHEJIANG UNIV