234 results about "Speculative execution" patented technology

Speculative execution is an optimization technique where a computer system performs some task that may not be needed. Work is done before it is known whether it is actually needed, so as to prevent a delay that would have to be incurred by doing the work after it is known that it is needed. If it turns out the work was not needed after all, most changes made by the work are reverted and the results are ignored.
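
A minimal C sketch of this discard-on-miss pattern, assuming a hypothetical expensive_computation() and a guessed input; it illustrates the general idea only, not any particular patented mechanism.

#include <stdio.h>

/* Stand-in for costly work that we would like to start early. */
static int expensive_computation(int x) { return x * x + 1; }

int main(void) {
    int predicted_input = 7;                    /* guess made before the real input is known */
    int speculative_result = expensive_computation(predicted_input);   /* work done early */

    int actual_input = 7;                       /* becomes known only later */
    if (actual_input == predicted_input) {
        /* speculation hit: the early work is reused, hiding its latency */
        printf("hit: result %d\n", speculative_result);
    } else {
        /* speculation miss: the early result is discarded and the work redone */
        printf("miss: result %d\n", expensive_computation(actual_input));
    }
    return 0;
}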

System and Method for Performing Incremental Register Checkpointing in Transactional Memory

Systems and methods described herein for performing incremental register checkpointing may employ a special register to indicate which registers have already been checkpointed. This register may include one bit per register. These systems may also include a special pointer register whose value identifies a location in user memory or in dedicated on-chip storage at which a copy of a register's value should be saved by a checkpointing operation. Only registers modified during speculative execution or execution of a transaction may be checkpointed (e.g., when register modifying instructions are encountered) and subsequently restored (e.g., due to misspeculation or transaction abort), rather than all of the registers of the processor. Each register may be checkpointed at most once for a given speculative episode or atomic transaction. Setting a bit in the special register may prevent checkpointing of the corresponding register. Setting all of the bits in the special register may disable checkpointing.
Owner:ORACLE INT CORP
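
A minimal software sketch of the incremental checkpointing scheme in this abstract, for a hypothetical 8-register machine: the "checkpointed" bitmask stands in for the patent's special register, and the save area stands in for user memory or dedicated on-chip storage. Names and sizes are illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

#define NREGS 8

static uint64_t regs[NREGS];        /* architectural registers */
static uint64_t save_area[NREGS];   /* checkpoint storage */
static uint8_t  checkpointed;       /* one bit per register: already checkpointed this episode? */

/* Write a register inside a transaction: checkpoint its old value at most once. */
static void spec_write(int r, uint64_t value) {
    if (!(checkpointed & (1u << r))) {   /* first modification in this episode */
        save_area[r] = regs[r];          /* save the old value */
        checkpointed |= (1u << r);       /* mark the register as checkpointed */
    }
    regs[r] = value;
}

/* Abort: restore only the registers that were actually modified. */
static void abort_transaction(void) {
    for (int r = 0; r < NREGS; r++)
        if (checkpointed & (1u << r))
            regs[r] = save_area[r];
    checkpointed = 0;
}

/* Commit: discard the checkpoints and keep the new values. */
static void commit_transaction(void) { checkpointed = 0; }

int main(void) {
    regs[2] = 10;
    spec_write(2, 99);     /* checkpointed once */
    spec_write(2, 123);    /* already checkpointed, not saved again */
    abort_transaction();   /* misspeculation: roll back modified registers only */
    printf("r2 after abort = %llu\n", (unsigned long long)regs[2]);   /* prints 10 */
    commit_transaction();  /* no-op here; shown for completeness */
    return 0;
}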

Paralleling processing method, system and program

A paralleling processing system and method are disclosed. When clusters are formed based on strongly connected components, a single cluster (a fat cluster) having at least a predetermined number of blocks, or an expected processing time exceeding a predetermined threshold, is formed. The fat cluster is subjected to an unrolling process to make multiple copies of its processing and to assign the copies to individual processors, and processing of the fat cluster is executed by the multiple processor devices in a pipelined manner. If a fat cluster to be iteratively executed cannot be executed in the pipelined manner because the processing result of an nth iteration of the fat cluster depends on the processing result of the preceding iteration, an input value needed for execution of the fat cluster is generated based on a prediction, and the fat cluster is speculatively executed.
Owner:IBM CORP
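
A minimal sequential C sketch of the value-prediction fallback described in this abstract: iteration n+1 of the fat cluster is launched on a predicted input before iteration n completes, and its result is kept only if the prediction matches the real value. run_cluster() and predict_output() are illustrative stand-ins, not the patented implementation.

#include <stdio.h>

/* The fat cluster's body; its behaviour changes for larger inputs, so the
 * simple predictor below will eventually guess wrong. */
static int run_cluster(int input)    { return input < 10 ? 2 * input + 1 : input + 1; }

/* Value predictor: guess an iteration's output from its (already known) input. */
static int predict_output(int input) { return 2 * input + 1; }

int main(void) {
    int input = 1;                                   /* input to iteration 1 (= iteration 0's result) */
    for (int n = 1; n <= 4; n++) {
        /* Iteration n+1 is launched on another processor copy while iteration n is
         * still running, so its input (iteration n's output) must be predicted. */
        int predicted = predict_output(input);
        int speculative_next = run_cluster(predicted);   /* speculative run of iteration n+1 */

        int output = run_cluster(input);                 /* iteration n actually completes */
        if (output == predicted)
            printf("iter %d: hit, iteration %d result %d is usable\n", n, n + 1, speculative_next);
        else
            printf("iter %d: miss, iteration %d is squashed and re-executed\n", n, n + 1);

        input = output;                                  /* feed the verified result forward */
    }
    return 0;
}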

Processor, multiprocessor system and method for speculatively executing memory operations using memory target addresses of the memory operations to index into a speculative execution result history storage means to predict the outcome of the memory operation

When a processor executes a memory operation instruction using data dependence speculative execution, it refers to a speculative execution result history table, which stores history information on the success/failure results of past speculative executions of memory operation instructions, and thereby predicts whether the speculative execution will succeed or fail. For the prediction, the target address of the memory operation instruction is converted by a hash function circuit into an entry number of the speculative execution result history table (allowing the existence of aliases), and the entry of the table designated by that number is consulted. If the prediction is "success", the memory operation instruction is executed speculatively and out of order (with regard to the data dependence relationships between the instructions). If the prediction is "failure", the speculative execution is cancelled and the memory operation instruction is executed later, in program order, non-speculatively. Whether the speculative execution of a memory operation instruction has succeeded or failed is judged by detecting the data dependence relationships between the memory operation instructions, and the speculative execution result history table is updated to take this judgment into account.
Owner:NEC CORP
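
A minimal C sketch of the prediction structure this abstract describes: a history table indexed by a hash of the memory operation's target address (aliases permitted) that predicts whether data-dependence speculation will succeed. The table size, the hash, and the use of 2-bit saturating counters are assumptions for illustration; the abstract only specifies that success/failure history is stored and updated.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 256

static uint8_t history[TABLE_SIZE];   /* 2-bit saturating counters; >= 2 means "predict success" */

static unsigned hash_addr(uint64_t addr) {
    return (unsigned)((addr ^ (addr >> 12)) & (TABLE_SIZE - 1));   /* simple hash; aliases allowed */
}

/* Predict whether speculative out-of-order execution of this memory op will succeed. */
static bool predict_success(uint64_t target_addr) {
    return history[hash_addr(target_addr)] >= 2;
}

/* After the data-dependence check, record whether the speculation actually succeeded. */
static void update_history(uint64_t target_addr, bool succeeded) {
    uint8_t *ctr = &history[hash_addr(target_addr)];
    if (succeeded) { if (*ctr < 3) (*ctr)++; }
    else           { if (*ctr > 0) (*ctr)--; }
}

int main(void) {
    uint64_t addr = 0x7fff1234;
    for (int i = 0; i < TABLE_SIZE; i++) history[i] = 2;   /* start weakly predicting success */

    if (predict_success(addr))
        printf("execute the load speculatively, out of order\n");
    else
        printf("execute the load later, in program order\n");

    update_history(addr, false);   /* dependence violation detected: record the failure */
    update_history(addr, false);
    printf("next prediction: %s\n", predict_success(addr) ? "success" : "failure");
    return 0;
}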

Tracing speculatively executed instructions

A trace unit is disclosed for generating items of trace data indicative of the processing activities of a processor executing a stream of instructions, the stream comprising a plurality of groups of instructions, at least some of which the processor executes speculatively. The trace unit comprises: trace circuitry for monitoring the behaviour of the processor; storage circuitry for storing current trace control data for controlling the trace circuitry; and a data store for storing at least some of the trace control data. The trace circuitry is configured to store the trace control data in the data store in response to detecting execution of a group of instructions. In response to detecting the processor cancelling at least one group of the speculatively executed instructions, the trace circuitry retrieves at least some of the trace control data stored in the data store for the group of instructions executed before the cancelled speculatively executed instructions, and stores the retrieved trace control data in the storage circuitry.
Owner:ARM LTD
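
A minimal software model of the save/restore behaviour described above, assuming a simple per-group snapshot of trace control data; the structure and field names (TraceControl, events_enabled, timestamping) are illustrative, not taken from the patent.

#include <stdio.h>

#define MAX_GROUPS 16

typedef struct { int events_enabled; int timestamping; } TraceControl;

static TraceControl current;             /* "storage circuitry": the active trace control data */
static TraceControl saved[MAX_GROUPS];   /* "data store": one snapshot per instruction group */

/* Snapshot the control data when execution of a group is detected. */
static void on_group_executed(int group_id) {
    saved[group_id] = current;
}

/* When speculatively executed groups are cancelled, restore the control data
 * that was in force for the last group that survives. */
static void on_groups_cancelled(int first_cancelled_group) {
    if (first_cancelled_group > 0)
        current = saved[first_cancelled_group - 1];
}

int main(void) {
    current = (TraceControl){ .events_enabled = 1, .timestamping = 0 };
    on_group_executed(0);

    current.timestamping = 1;            /* control data changes during speculative group 1 */
    on_group_executed(1);

    on_groups_cancelled(1);              /* group 1 was misspeculated and cancelled */
    printf("timestamping after cancel: %d\n", current.timestamping);   /* back to 0 */
    return 0;
}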

Load lookahead prefetch for microprocessors

The present invention allows a microprocessor to identify and speculatively execute future load instructions during a stall condition. This allows forward progress to be made through the instruction stream during a stall condition that would otherwise cause the microprocessor or thread of execution to be idle. The data for such future load instructions can be prefetched from a distant cache or main memory, so that when the load instruction is re-executed (non-speculatively executed) after the stall condition expires, its data will either reside in the L1 cache or be en route to the processor, resulting in reduced execution latency. When an extended stall condition is detected, load lookahead prefetch is started, allowing speculative execution of instructions that would normally have been stalled. In this speculative mode, instruction operands may be invalid due to source loads that miss the L1 cache, due to facilities not available in speculative execution mode, or due to speculative instruction results that are not available via forwarding and are not written to the architected registers. A set of status bits is used to dynamically keep track of the dependencies between instructions in the pipeline, and a bit vector tracks invalid architected facilities with respect to the speculative instruction stream. Both sources of information are used to identify load instructions with valid operands for calculating the load address. If the operands are valid, a load prefetch operation is started to retrieve data from the cache ahead of time so that it can be available for the load instruction when it is non-speculatively executed.
Owner:INTEL CORP
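
A minimal C sketch of the lookahead bookkeeping described above: a bit vector marks architected registers whose values cannot be trusted in speculative mode, and a prefetch is issued only for loads whose address operands remain valid. The instruction encoding, the prefetch() stub, and the register count are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NREGS 32

typedef enum { OP_LOAD, OP_ADD } Opcode;
typedef struct { Opcode op; int dest; int src1; int src2; bool result_available; } Instr;

static uint32_t invalid_regs;   /* bit i set => architected register i is invalid for lookahead */

static void prefetch(uint64_t addr) {
    printf("prefetch cache line for address 0x%llx\n", (unsigned long long)addr);
}

static void lookahead(const Instr *code, int n, const uint64_t *regfile) {
    for (int i = 0; i < n; i++) {
        const Instr *in = &code[i];
        bool srcs_valid = !((invalid_regs >> in->src1) & 1) &&
                          !((invalid_regs >> in->src2) & 1);

        /* Only loads whose address operands are valid get a lookahead prefetch. */
        if (in->op == OP_LOAD && srcs_valid)
            prefetch(regfile[in->src1] + regfile[in->src2]);   /* base + index register */

        /* If this instruction's result cannot be produced or forwarded in speculative
         * mode, its destination becomes invalid; a valid result clears the bit. */
        if (!srcs_valid || !in->result_available)
            invalid_regs |=  (1u << in->dest);
        else
            invalid_regs &= ~(1u << in->dest);
    }
}

int main(void) {
    uint64_t regfile[NREGS] = {0};
    regfile[1] = 0x1000; regfile[2] = 0x40;
    Instr code[] = {
        { OP_LOAD, 3, 1, 2, false },   /* load r3 <- [r1+r2]; misses L1, so r3 becomes invalid */
        { OP_ADD,  4, 3, 2, true  },   /* r4 depends on invalid r3, so r4 is invalid too */
        { OP_LOAD, 5, 1, 2, true  },   /* operands valid: address computable, prefetch issued */
        { OP_LOAD, 6, 4, 2, true  },   /* operand r4 invalid: no prefetch */
    };
    lookahead(code, 4, regfile);
    return 0;
}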

Preventing duplicate entries in a non-blocking TLB structure that supports multiple page sizes

One embodiment provides a system that prevents duplicate entries in a non-blocking TLB that supports multiple page sizes and speculative execution. During operation, after a request for translation of a virtual address misses in the non-blocking TLB, the system receives a TLB fill. Next, the system determines a page size associated with the TLB fill, and uses this page size to determine a set of bits in the virtual address that identify the virtual page associated with the TLB fill. The system then compares this set of bits with the corresponding bits of other virtual addresses associated with pending translation requests. If the system detects that a second virtual address for another pending translation request is also satisfied by the TLB fill, the system invalidates the duplicate translation request associated with the second virtual address.
Owner:ORACLE INT CORP
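
A minimal C sketch of the duplicate-suppression check described above: the page size of the fill selects which virtual-address bits identify the page, and any other pending translation request that falls in the same virtual page is invalidated rather than being allowed to install a duplicate entry. The pending-request queue and the 2 MiB page size are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t vaddr; bool valid; } PendingReq;

/* Keep only the virtual-address bits that identify the page for this page size. */
static uint64_t page_base(uint64_t vaddr, uint64_t page_size) {
    return vaddr & ~(page_size - 1);
}

/* Called when a TLB fill returns for `fill_vaddr` with page size `page_size`. */
static void on_tlb_fill(uint64_t fill_vaddr, uint64_t page_size,
                        PendingReq *pending, int n) {
    uint64_t fill_page = page_base(fill_vaddr, page_size);
    for (int i = 0; i < n; i++) {
        if (pending[i].valid &&
            page_base(pending[i].vaddr, page_size) == fill_page &&
            pending[i].vaddr != fill_vaddr) {
            pending[i].valid = false;   /* satisfied by the same fill: drop the duplicate request */
            printf("invalidated duplicate request for 0x%llx\n",
                   (unsigned long long)pending[i].vaddr);
        }
    }
}

int main(void) {
    PendingReq pending[] = {
        { 0x400123ULL, true },   /* same 2 MiB page as the fill: duplicate */
        { 0x900456ULL, true },   /* different page: left pending */
    };
    on_tlb_fill(0x400000ULL, 2ULL << 20, pending, 2);   /* fill for a 2 MiB page */
    return 0;
}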

Suppression of control transfer instructions on incorrect speculative execution paths

Techniques are disclosed relating to a processor that is configured to execute control transfer instructions (CTIs). In some embodiments, the processor includes a mechanism that suppresses the results of mispredicted younger CTIs on a speculative execution path. This mechanism allows the branch predictor to maintain its fidelity and eliminates spurious flushes of the pipeline. In one embodiment, a misprediction bit is used to indicate that a misprediction has occurred, and CTIs younger than the mispredicted CTI are suppressed. In some embodiments, the processor may be configured to execute instruction streams from multiple threads, with each thread including a misprediction indication. CTIs in each thread may execute in program order with respect to other CTIs of that thread, while instructions other than CTIs may execute out of program order.
Owner:ORACLE INT CORP
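
A minimal per-thread C sketch of the suppression mechanism described above: once a CTI resolves as mispredicted, a per-thread flag suppresses younger wrong-path CTIs so they neither update the branch predictor nor trigger further flushes, and the flag clears when the flush for the original misprediction completes. Function and variable names are illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>

#define NTHREADS 2

static bool mispredict_pending[NTHREADS];   /* one misprediction indication per thread */

static void update_predictor(int tid, bool taken) { printf("T%d: predictor updated (taken=%d)\n", tid, taken); }
static void request_flush(int tid)                { printf("T%d: flush/redirect requested\n", tid); }

/* Called when a CTI of thread `tid` resolves; CTIs of a thread resolve in program order. */
static void on_cti_resolved(int tid, bool mispredicted, bool taken) {
    if (mispredict_pending[tid]) {
        /* Younger CTI on the wrong speculative path: suppressed entirely. */
        printf("T%d: CTI suppressed (no predictor update, no flush)\n", tid);
        return;
    }
    update_predictor(tid, taken);            /* predictor fidelity is preserved */
    if (mispredicted) {
        mispredict_pending[tid] = true;      /* younger in-flight CTIs will now be suppressed */
        request_flush(tid);
    }
}

/* Called when the flush for the mispredicted CTI completes. */
static void on_flush_complete(int tid) { mispredict_pending[tid] = false; }

int main(void) {
    on_cti_resolved(0, false, true);    /* correct-path CTI: normal predictor update */
    on_cti_resolved(0, true,  false);   /* mispredicted CTI: flag thread 0, request a flush */
    on_cti_resolved(0, false, true);    /* younger wrong-path CTI: suppressed */
    on_cti_resolved(1, false, true);    /* the other thread is unaffected */
    on_flush_complete(0);               /* redirect done: thread 0 resumes normal handling */
    on_cti_resolved(0, false, true);
    return 0;
}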