
71 results about "Microprocessor architecture" patented technology

Hard Object: Hardware Protection for Software Objects

Active | US20080222397A1 | Efficiently implement enforceable separation of program | Digital computer details | Analogue secrecy/subscription systems | Processor register | Physical address
In accordance with one embodiment, additions to the standard computer microprocessor architecture hardware are disclosed, comprising novel page table entry fields 015 062, special registers 021 022, instructions for modifying these fields 120 122 and registers 124 126, and hardware-implemented 038 runtime checks and operations involving these fields and registers. More specifically, in the above embodiment of a Hard Object system, there is additional meta-data 061 in each page table entry beyond what it commonly holds, and each time a data load or store is issued from the CPU and the virtual address 032 is translated to the physical address 034, the Hard Object system uses its additional PTE meta-data 061 to perform memory access checks in addition to those done in current systems. Together with changes to software, these access checks can be arranged carefully to provide more fine-grained access control for data than current systems do.
Owner:WILKERSON DANIEL SHAWCROSS +1
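
As a rough illustration of the access check this abstract describes, the C sketch below adds hypothetical ownership meta-data to a page table entry and checks it on each store after address translation. The field names (owner_module, integrity_only), the "current module" register, and the policy are assumptions made for the example, not the patent's actual PTE layout or check logic.

    /* Illustrative sketch only: hypothetical PTE meta-data and an access check
     * run on every data store after virtual-to-physical translation.
     * Field names and policy are assumptions, not the patent's design. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t phys_frame;      /* ordinary translation result */
        uint32_t owner_module;    /* hypothetical extra meta-data: owning code module */
        bool     integrity_only;  /* hypothetical flag: only the owner may write */
    } pte_t;

    /* Hypothetical special register naming the currently executing module. */
    static uint32_t current_module;

    /* Extra check performed in addition to the usual present/permission checks. */
    static bool hard_object_check(const pte_t *pte, bool is_store) {
        if (is_store && pte->integrity_only && pte->owner_module != current_module)
            return false;   /* store to data owned by another module: deny */
        return true;
    }

    int main(void) {
        pte_t page = { .phys_frame = 0x1234, .owner_module = 7, .integrity_only = true };
        current_module = 3;
        printf("store allowed? %d\n", hard_object_check(&page, true));   /* 0: denied */
        current_module = 7;
        printf("store allowed? %d\n", hard_object_check(&page, true));   /* 1: owner */
        return 0;
    }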

Automatic configuration of a microprocessor

A method for automatically configuring a microprocessor architecture so that it is able to efficiently exploit instruction level parallelism in a particular application. Executable code for another microprocessor type is translated into the specialised instruction set of the configured microprocessor. The configured microprocessor may then be used as a coprocessor in a system containing another microprocessor running the original executable code.
Owner:CRITICAL BLUE
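
The translation step this abstract mentions can be pictured with a minimal table-driven sketch in C. The opcodes and the one-to-one mapping are hypothetical; the patent does not specify any particular encoding, and a real translator would operate on whole executables rather than single opcodes.

    /* Minimal sketch: map host-ISA opcodes to a specialised coprocessor ISA.
     * Opcode values and the table contents are hypothetical examples. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t host_opcode; uint16_t custom_opcode; } xlat_entry_t;

    static const xlat_entry_t xlat_table[] = {
        { 0x01, 0x100 },   /* e.g. host ADD  -> custom fused op */
        { 0x02, 0x101 },   /* e.g. host LOAD -> custom wide load */
    };

    static int translate(const uint8_t *host_code, size_t n, uint16_t *out) {
        size_t emitted = 0;
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < sizeof xlat_table / sizeof xlat_table[0]; j++)
                if (xlat_table[j].host_opcode == host_code[i]) {
                    out[emitted++] = xlat_table[j].custom_opcode;
                    break;
                }
        return (int)emitted;
    }

    int main(void) {
        uint8_t host[] = { 0x01, 0x02, 0x01 };
        uint16_t coproc[8];
        int n = translate(host, sizeof host, coproc);
        for (int i = 0; i < n; i++) printf("0x%03x\n", (unsigned)coproc[i]);
        return 0;
    }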

High-performance superscalar-based computer system with out-of-order instruction execution and concurrent results distribution

The high-performance, RISC core based microprocessor architecture includes an instruction fetch unit for fetching instruction sets from an instruction store and an execution unit that implements the concurrent execution of a plurality of instructions through a parallel array of functional units. The fetch unit generally maintains a predetermined number of instructions in an instruction buffer. The execution unit includes an instruction selection unit, coupled to the instruction buffer, for selecting instructions for execution, and a plurality of functional units for performing instruction specified functional operations. A unified instruction scheduler, within the instruction selection unit, initiates the processing of instructions through the functional units when instructions are determined to be available for execution and for which at least one of the functional units implementing a necessary computational function is available. Unified scheduling is performed across multiple execution data paths, where each execution data path, and corresponding functional units, is generally optimized for the type of computational function that is to be performed on the data: integer, floating point, and boolean. The number, type and computational specifics of the functional units provided in each data path, and as between data paths, are mutually independent.
Owner:SAMSUNG ELECTRONICS CO LTD
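
The scheduling condition described in this abstract, issue an instruction only when its operands are available and a free functional unit of the required class exists, can be sketched as a simple C loop. The data structures, unit counts, and single-pass loop are assumptions for illustration, not the patented scheduler.

    /* Sketch of the unified-scheduler condition: ready operands plus a free
     * functional unit of the required class (integer, floating point, boolean).
     * Structures and counts are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { FU_INT, FU_FP, FU_BOOL, FU_CLASSES } fu_class_t;

    typedef struct {
        fu_class_t needs;      /* class of functional unit the op requires */
        bool operands_ready;   /* all source operands available? */
        bool issued;
    } instr_t;

    /* Free functional units per class (hypothetical counts per data path). */
    static int free_units[FU_CLASSES] = { 2, 1, 1 };

    /* One scheduling pass over the instruction buffer. */
    static void schedule(instr_t *buf, int n) {
        for (int i = 0; i < n; i++)
            if (!buf[i].issued && buf[i].operands_ready && free_units[buf[i].needs] > 0) {
                free_units[buf[i].needs]--;
                buf[i].issued = true;
                printf("issued instruction %d to unit class %d\n", i, (int)buf[i].needs);
            }
    }

    int main(void) {
        instr_t buf[] = {
            { FU_INT,  true,  false },
            { FU_FP,   false, false },   /* waits for operands */
            { FU_INT,  true,  false },
            { FU_BOOL, true,  false },
        };
        schedule(buf, 4);
        return 0;
    }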

Microprocessor architecture including unified cache debug unit

A microprocessor architecture including a unified cache debug unit. A debug unit on the processor chip receives data/command signals from a unit of the execute stage of the multi-stage instruction pipeline of the processor and returns information to the execute stage unit. The cache debug unit is operatively connected to both instruction and data cache units of the microprocessor. The memory subsystem of the processor may be accessed by the cache debug unit through either the instruction or the data cache unit. By unifying the cache debug in a separate structure, the need for redundant debug structure in both cache units is obviated. Also, the unified cache debug unit can be powered down when not accessed by the instruction pipeline, thereby saving power.
Owner:ARC INT LTD
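
A minimal sketch of the routing idea in this abstract: one shared debug unit services requests through either cache and is powered only while handling them. The interface, the path enum, and the power flag are assumptions for illustration, not the patent's actual implementation.

    /* Sketch: a single debug unit reaches memory through either the instruction
     * or the data cache and is power-gated while idle. Hypothetical interface. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { VIA_ICACHE, VIA_DCACHE } debug_path_t;

    typedef struct { bool powered; } cache_debug_unit_t;

    /* Stand-ins for debug-mode reads through each cache (hypothetical). */
    static uint32_t icache_debug_read(uint32_t addr) { return 0xAAAA0000u | addr; }
    static uint32_t dcache_debug_read(uint32_t addr) { return 0xDDDD0000u | addr; }

    /* A request from the execute stage wakes the unit, routes through the
     * selected cache, returns the data, then lets the unit power down again. */
    static uint32_t debug_access(cache_debug_unit_t *u, debug_path_t path, uint32_t addr) {
        u->powered = true;
        uint32_t data = (path == VIA_ICACHE) ? icache_debug_read(addr)
                                             : dcache_debug_read(addr);
        u->powered = false;
        return data;
    }

    int main(void) {
        cache_debug_unit_t dbg = { false };
        printf("0x%08x\n", (unsigned)debug_access(&dbg, VIA_ICACHE, 0x40));
        printf("0x%08x\n", (unsigned)debug_access(&dbg, VIA_DCACHE, 0x80));
        return 0;
    }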

Configurable microprocessor architecture incorporating direct execution unit connectivity

A highly configurable and scalable microprocessor architecture designed to exploit instruction level parallelism in specific application code. It consists of a number of execution units with configurable connectivity between them and a means to copy data through execution units under software control.
Owner:CRITICAL BLUE
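
One way to picture the configurable connectivity and software-controlled copies described above is the C sketch below, where a connection matrix records which execution-unit outputs may feed which inputs. The matrix representation and register arrays are assumptions for the example, not the patented datapath.

    /* Sketch: configured links between execution units, and a software-
     * controlled move along a configured link. Representation is illustrative. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define N_UNITS 4

    static bool connected[N_UNITS][N_UNITS];   /* set during configuration */
    static uint32_t unit_out[N_UNITS];         /* latest result from each unit */
    static uint32_t unit_in[N_UNITS];          /* operand latched at each unit's input */

    /* Copy one unit's result to another unit's input, allowed only if the
     * configuration created that link. */
    static bool move(int from, int to) {
        if (!connected[from][to]) return false;
        unit_in[to] = unit_out[from];
        return true;
    }

    int main(void) {
        connected[0][2] = true;    /* configuration step: allow unit 0 -> unit 2 */
        unit_out[0] = 42;
        printf("move 0->2: %s, unit2 input = %u\n",
               move(0, 2) ? "ok" : "blocked", (unsigned)unit_in[2]);
        printf("move 1->3: %s\n", move(1, 3) ? "ok" : "blocked");
        return 0;
    }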

Microprocessor and method for register addressing therein

A microprocessor architecture comprising a microprocessor operably coupled to a plurality of registers and arranged to execute at least one instruction. The microprocessor is arranged to determine a class of data operand. The at least one instruction comprises one or more codes in a register specifier that indicates whether relative addressing or absolute addressing is used in accessing a register. In this manner, absolute and relative register addressing is supported within a single instruction word.
Owner:NXP USA INC
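
The register-specifier decoding this abstract describes can be sketched in a few lines of C: one code bit in the specifier selects absolute or relative (base plus offset) register addressing. The bit position and field width are assumptions for illustration; the patent does not fix a particular encoding.

    /* Sketch: decode a register specifier whose flag bit selects absolute or
     * relative register addressing. Bit layout is a hypothetical example. */
    #include <stdint.h>
    #include <stdio.h>

    #define REL_FLAG   0x80u   /* hypothetical: top bit selects relative addressing */
    #define INDEX_MASK 0x7Fu   /* hypothetical: remaining bits hold index or offset */

    /* Resolve a specifier to a register number. */
    static unsigned resolve_register(uint8_t spec, unsigned base_reg) {
        if (spec & REL_FLAG)
            return base_reg + (spec & INDEX_MASK);  /* relative to a base register */
        return spec & INDEX_MASK;                   /* absolute register number */
    }

    int main(void) {
        unsigned base = 16;
        printf("absolute 0x05 -> r%u\n", resolve_register(0x05, base));  /* r5  */
        printf("relative 0x85 -> r%u\n", resolve_register(0x85, base));  /* r21 */
        return 0;
    }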
