57 results about "Shared memory multiprocessor" patented technology

Shared-Memory Multiprocessors. Symmetric multiprocessors (SMPs) are the most common multiprocessors. They provide a shared address space, and each processor has its own cache. All processors and memories attach to the same interconnect, usually a shared bus. SMPs dominate the server market and are the building blocks for larger systems.

Method and apparatus for deterministic replay of Java multithreaded programs on multiprocessors

A method (and apparatus) for deterministically replaying the observable run-time behavior of distributed multi-threaded programs on multiprocessors in a shared-memory multiprocessor environment, wherein the run-time behavior of a program includes sequences of events, each sequence associated with one of a plurality of execution threads. The method includes identifying an execution order of the critical events of the program (the program comprising critical and non-critical events), generating groups of critical events, generating for each execution thread a logical thread schedule that identifies the sequence of groups associated with that thread, and storing the logical thread schedule for subsequent reuse.
Owner:IBM CORP
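
As a point of reference only (a minimal sketch, not IBM's patented method; record_critical_event, the interval merging, and all sizes are invented), one way to build a logical thread schedule is to stamp each critical event with a global counter and collapse consecutive stamps owned by the same thread into a group. A replayer would then force each thread to wait until the counter reaches the start of its next recorded group.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long g_clock;        /* one tick per critical event, program-wide */

typedef struct { long first, last; } Interval;
typedef struct {
    Interval groups[64];           /* the "groups of critical events" */
    int      ngroups;
    long     prev;                 /* clock value at this thread's last event */
} Schedule;

static void record_critical_event(Schedule *s)
{
    long t = atomic_fetch_add(&g_clock, 1);
    if (s->ngroups > 0 && t == s->prev + 1)
        s->groups[s->ngroups - 1].last = t;           /* extend current group */
    else
        s->groups[s->ngroups++] = (Interval){ t, t }; /* start a new group */
    s->prev = t;
}

static void *worker(void *arg)
{
    for (int i = 0; i < 5; i++)
        record_critical_event(arg);   /* stands in for lock acquires etc. */
    return NULL;
}

int main(void)
{
    static Schedule sched[2];
    pthread_t tid[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &sched[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    for (int i = 0; i < 2; i++) {     /* the stored logical thread schedules */
        printf("thread %d:", i);
        for (int g = 0; g < sched[i].ngroups; g++)
            printf(" [%ld,%ld]", sched[i].groups[g].first,
                   sched[i].groups[g].last);
        printf("\n");
    }
    return 0;
}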

Method and apparatus for managing memory operations in a data processing system using a store buffer

A shared memory multiprocessor (SMP) data processing system includes a store buffer implemented in a memory controller for temporarily storing recently accessed memory data within the data processing system. The memory controller includes control logic for maintaining coherency between the memory controller's store buffer and memory. The memory controller's store buffer is configured into one or more arrays sufficiently mapped to handle I/O and CPU bandwidth requirements. The combination of the store buffer and the control logic operates as a front end within the memory controller, in that all memory requests are first processed by the control-logic/store-buffer combination, reducing memory latency and increasing effective memory bandwidth by eliminating certain memory read and write operations.
Owner:GOOGLE LLC
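
Purely as an illustration of the front-end idea (a toy sketch with invented sizes; a real controller also tracks coherency state, which this omits): writes coalesce into a small buffer, and a read that hits the buffer never touches memory.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS  1024
#define SB_ENTRIES 4

static uint32_t memory[MEM_WORDS];   /* stand-in for DRAM */

typedef struct { uint32_t addr, data; bool valid; } SBEntry;
static SBEntry sb[SB_ENTRIES];       /* the memory controller's store buffer */

/* Front end for writes: coalesce into the buffer, spilling an entry
   to memory only when no slot is free. */
static void mc_write(uint32_t addr, uint32_t data)
{
    for (int i = 0; i < SB_ENTRIES; i++)
        if (sb[i].valid && sb[i].addr == addr) { sb[i].data = data; return; }
    for (int i = 0; i < SB_ENTRIES; i++)
        if (!sb[i].valid) { sb[i] = (SBEntry){ addr, data, true }; return; }
    memory[sb[0].addr] = sb[0].data;  /* evict slot 0 to memory */
    sb[0] = (SBEntry){ addr, data, true };
}

/* Front end for reads: a buffer hit eliminates the memory read. */
static uint32_t mc_read(uint32_t addr)
{
    for (int i = 0; i < SB_ENTRIES; i++)
        if (sb[i].valid && sb[i].addr == addr) return sb[i].data;
    return memory[addr];
}

int main(void)
{
    mc_write(7, 42);
    printf("%u\n", mc_read(7));       /* 42, served from the store buffer */
    return 0;
}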

Self-scheduled real time software using real time asynchronous messaging

TICC™ (Technology for Integrated Computation and Communication), a patented technology [1], provides a high-speed message-passing interface for parallel processes. TICC™ performs high-speed asynchronous message passing with latencies on the nanosecond scale in shared-memory multiprocessors and on the microsecond scale over the distributed-memory local area TICCNET™ (Patent Pending, [2]). Ticc-Ppde (Ticc-based Parallel Program Development and Execution platform, Patent Pending, [3]) provides a component-based parallel program development environment, along with the infrastructure for dynamic debugging and updating of Ticc-based parallel programs, self-monitoring, self-diagnosis and self-repair. Ticc-Rtas (Ticc-based Real Time Application System) provides the system architecture for developing self-scheduled real-time distributed parallel processing software with real-time asynchronous messaging, using Ticc-Ppde. Implementing a Ticc-Rtas real-time application with Ticc-Ppde automatically generates the self-monitoring system for the Rtas. This self-monitoring system may be used to monitor the Rtas during its operation, in parallel with its operation, to recognize and report a priori specified observable events that may occur in the application, or to recognize and report system malfunctions, without interfering with the timing requirements of the Ticc-Rtas. The structure, the innovations underlying its operation, and details on developing an Rtas using Ticc-Ppde and TICCNET™ are presented here, together with three illustrative examples: one on sensor fusion, another on image fusion, and the third on power transmission control in a fuel-cell-powered automobile.
Owner:EDSS INC
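
The abstract gives no internals, so as background only: lock-free asynchronous messaging in shared memory is typically built on a single-producer/single-consumer ring buffer like the sketch below. This is the generic pattern, not TICC™ itself; all names and sizes are invented.

#include <stdatomic.h>
#include <stdio.h>

#define SLOTS 8   /* power of two */

typedef struct {
    long        buf[SLOTS];
    atomic_uint head;        /* next slot the reader consumes */
    atomic_uint tail;        /* next slot the writer fills */
} Mailbox;

static int mb_send(Mailbox *m, long msg)
{
    unsigned t = atomic_load_explicit(&m->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&m->head, memory_order_acquire);
    if (t - h == SLOTS) return 0;             /* full: caller retries later */
    m->buf[t % SLOTS] = msg;
    atomic_store_explicit(&m->tail, t + 1, memory_order_release);
    return 1;
}

static int mb_recv(Mailbox *m, long *msg)
{
    unsigned h = atomic_load_explicit(&m->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&m->tail, memory_order_acquire);
    if (h == t) return 0;                     /* empty */
    *msg = m->buf[h % SLOTS];
    atomic_store_explicit(&m->head, h + 1, memory_order_release);
    return 1;
}

int main(void)
{
    static Mailbox m;
    long v;
    mb_send(&m, 123);
    if (mb_recv(&m, &v)) printf("got %ld\n", v);
    return 0;
}

The acquire/release pairing is what lets sender and receiver proceed without locks: the release store of tail publishes the message, and the acquire load on the other side observes it.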

Using broadcast-based TLB sharing to reduce address-translation latency in a shared-memory system with electrical interconnect

The disclosed embodiments provide a system that uses broadcast-based TLB-sharing techniques to reduce address-translation latency in a shared-memory multiprocessor system with two or more nodes that are connected by an electrical interconnect. During operation, a first node receives a memory operation that includes a virtual address. Upon determining that one or more TLB levels of the first node will miss for the virtual address, the first node uses the electrical interconnect to broadcast a TLB request to one or more additional nodes of the shared-memory multiprocessor in parallel with scheduling a speculative page-table walk for the virtual address. If the first node receives a TLB entry from another node of the shared-memory multiprocessor via the electrical interconnect in response to the TLB request, the first node cancels the speculative page-table walk. Otherwise, if no response is received, the first node instead waits for the completion of the page-table walk.
Owner:ORACLE INT CORP
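
A sketch of the decision flow under stated assumptions: the simulation below is sequential, whereas in hardware the broadcast and the speculative page-table walk run in parallel, and the TLB organization and all names here are invented.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NODES     4
#define TLB_SLOTS 16

typedef struct { uint64_t vpn, pfn; bool valid; } TlbEntry;
static TlbEntry tlb[NODES][TLB_SLOTS];   /* one toy TLB per node */

static bool tlb_lookup(int node, uint64_t vpn, uint64_t *pfn)
{
    TlbEntry *e = &tlb[node][vpn % TLB_SLOTS];
    if (e->valid && e->vpn == vpn) { *pfn = e->pfn; return true; }
    return false;
}

static uint64_t page_table_walk(uint64_t vpn)
{
    return vpn + 0x1000;   /* fake translation standing in for a slow walk */
}

/* On a miss, node `me` broadcasts to all peers; in hardware the walk is
   scheduled in parallel and cancelled if a peer answers first. */
static uint64_t translate(int me, uint64_t vpn)
{
    uint64_t pfn;
    if (tlb_lookup(me, vpn, &pfn)) return pfn;
    bool walk_pending = true;                 /* speculative walk scheduled */
    for (int n = 0; n < NODES; n++) {
        if (n == me) continue;
        if (tlb_lookup(n, vpn, &pfn)) {       /* peer responds over interconnect */
            walk_pending = false;             /* cancel the speculative walk */
            break;
        }
    }
    if (walk_pending)
        pfn = page_table_walk(vpn);           /* no response: wait for the walk */
    tlb[me][vpn % TLB_SLOTS] = (TlbEntry){ vpn, pfn, true };
    return pfn;
}

int main(void)
{
    tlb[2][5 % TLB_SLOTS] = (TlbEntry){ 5, 0x2005, true }; /* node 2 holds vpn 5 */
    printf("node 0 -> %#llx (from node 2)\n",
           (unsigned long long)translate(0, 5));
    printf("node 0 -> %#llx (via walk)\n",
           (unsigned long long)translate(0, 9));
    return 0;
}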

Cache coherency in a shared-memory multiprocessor system

A method of making cache memories of a plurality of processors coherent with a shared memory includes one of the processors determining whether an external memory operation is needed for data that is to be maintained coherent. If so, the processor transmits a cache coherency request to a traffic-monitoring device. The traffic-monitoring device transmits memory operation information to the plurality of processors, which includes an address of the data. Each of the processors determines whether the data is in its cache memory and whether a memory operation is needed to make the data coherent. Each processor also transmits to the traffic-monitoring device a message that indicates a state of the data and the memory operation that it will perform on the data. The processors then perform the memory operations on the data. The traffic-monitoring device performs the transmitted memory operations in a fixed order that is based on the states of the data in the processors' cache memories.
Owner:SK HYNIX INC
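
A toy rendering of the report-then-order step, assuming MESI-like states for illustration (the abstract does not name a state set): each processor reports its line state and planned operation, and the monitor applies the operations in a fixed, state-based order, with modified data draining before shared copies are invalidated.

#include <stdio.h>

#define NPROC 3

typedef enum { INVALID, SHARED, MODIFIED } State;
typedef enum { OP_NONE, OP_WRITEBACK, OP_INVALIDATE } Op;

typedef struct { State state; Op op; } Report;

static Op plan_op(State s)        /* what each processor reports it will do */
{
    switch (s) {
    case MODIFIED: return OP_WRITEBACK;
    case SHARED:   return OP_INVALIDATE;
    default:       return OP_NONE;
    }
}

int main(void)
{
    State caches[NPROC] = { SHARED, MODIFIED, INVALID };
    Report r[NPROC];
    for (int p = 0; p < NPROC; p++)           /* processors respond */
        r[p] = (Report){ caches[p], plan_op(caches[p]) };

    /* Fixed order: drain Modified copies first, then invalidate Shared. */
    for (int p = 0; p < NPROC; p++)
        if (r[p].op == OP_WRITEBACK)
            printf("P%d writes back its Modified copy\n", p);
    for (int p = 0; p < NPROC; p++)
        if (r[p].op == OP_INVALIDATE)
            printf("P%d invalidates its Shared copy\n", p);
    return 0;
}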

Mechanism for handling load lock/store conditional primitives in directory-based distributed shared memory multiprocessors

Each processor in a distributed shared memory system has an associated memory and a coherence directory. The processor that controls a memory is the Home processor. Under certain conditions, another processor may obtain exclusive control of a data block by issuing a Load Lock instruction and obtaining a writeable copy of the data block, which is stored in the cache of the Owner processor. If the Owner processor does not complete operations on the writeable copy before the data block is displaced from its cache, it issues a Victim To Shared message, indicating to the Home processor that it should remain a sharer of the data block. If another processor seeks exclusive rights to the same data block, the Home processor issues an Invalidate message to the Owner processor. When the Owner processor is ready to resume operation on the data block, it again obtains exclusive control by issuing a Read-with-Modify-Intent Store Conditional instruction to the Home processor. If the Owner processor is still a sharer, a writeable copy of the data block is sent to it; the Owner processor completes its modification and returns the block to the Home processor with a Store Conditional instruction.
Owner:HEWLETT-PACKARD ENTERPRISE DEV LP
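
The directory traffic above is the hardware side of load-lock/store-conditional. From software, the pair behaves like the CAS-based retry loop commonly used to emulate LL/SC, sketched below; this illustrates the primitive's semantics, not the patent's directory mechanism.

#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter;

static long atomic_increment(atomic_long *p)
{
    long old;
    do {
        old = atomic_load(p);     /* "load lock": read the current value */
        /* The CAS below fails, like a failed store-conditional, if
           another processor gained exclusive ownership and changed
           the line in the meantime; we then retry. */
    } while (!atomic_compare_exchange_weak(p, &old, old + 1));
    return old + 1;
}

int main(void)
{
    printf("%ld\n", atomic_increment(&counter));   /* prints 1 */
    return 0;
}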

Prefetch Termination at Powered Down Memory Bank Boundary in Shared Memory Controller

A prefetch scheme in a shared memory multiprocessor disables prefetching when an address falls within a powered-down memory bank. A register stores a bit for each independently powered memory bank, indicating whether that bank is prefetchable. When a memory bank is powered down, all bits corresponding to the pages in that bank are masked so that they appear as non-prefetchable pages to the prefetch access generation engine, preventing accesses to any page in the bank. A powered-down status bit corresponding to the memory bank masks the output of the prefetch enable register; the prefetch enable register itself is left unmodified. This also seamlessly restores the prefetch property of the memory banks when the corresponding bank is powered up.
Owner:TEXAS INSTR INC
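
The masking described reduces to a few bit operations. A minimal sketch, with register widths and names invented:

#include <stdint.h>
#include <stdio.h>

/* One bit per independently powered bank. The prefetch enable register
   is never rewritten; its output is masked by the power-down status, so
   the old enables reappear as soon as a bank powers back up. */
static uint32_t prefetch_enable = 0xFFFFFFFFu;  /* software-set policy */
static uint32_t powered_down    = 0;            /* 1 = bank is off */

static int bank_prefetchable(int bank)
{
    uint32_t effective = prefetch_enable & ~powered_down;
    return (effective >> bank) & 1u;
}

int main(void)
{
    powered_down |= 1u << 3;                       /* power down bank 3 */
    printf("bank 3: %d\n", bank_prefetchable(3));  /* 0: masked off */
    powered_down &= ~(1u << 3);                    /* power it back up */
    printf("bank 3: %d\n", bank_prefetchable(3));  /* 1: restored */
    return 0;
}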

Shared-Memory Multiprocessor System and Method for Processing Information

Large-scale table data stored in a shared memory are sorted by a plurality of processors in parallel. According to the present invention, the records subjected to processing are first divided for allocation to the plurality of processors. Then, each processor counts the number of local occurrences of the field-value sequence numbers associated with the records it is to process. The numbers of local occurrences counted by each processor are then converted into global cumulative numbers, i.e., cumulative numbers used in common by the plurality of processors. Finally, each processor uses the global cumulative numbers as pointers to rearrange the order of its allocated records.
Owner:ESS HLDG CO LTD
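
This is the classic three-phase parallel counting-sort pattern. A compact sketch with the processors simulated sequentially (sizes and data invented):

#include <stdio.h>

#define P 2   /* processors */
#define K 4   /* distinct field-value sequence numbers */
#define N 8   /* records */

int main(void)
{
    int key[N] = { 2, 0, 3, 1, 2, 2, 0, 1 };   /* field-value numbers */
    int out[N];
    int local[P][K] = { { 0 } };
    int per = N / P;

    /* Phase 1: each processor counts occurrences in its record slice. */
    for (int p = 0; p < P; p++)
        for (int i = p * per; i < (p + 1) * per; i++)
            local[p][key[i]]++;

    /* Phase 2: convert local counts into global cumulative numbers
       (exclusive prefix over key, then over processor within a key). */
    int ptr[P][K], base = 0;
    for (int k = 0; k < K; k++)
        for (int p = 0; p < P; p++) { ptr[p][k] = base; base += local[p][k]; }

    /* Phase 3: each processor scatters its records using its pointers. */
    for (int p = 0; p < P; p++)
        for (int i = p * per; i < (p + 1) * per; i++)
            out[ptr[p][key[i]]++] = key[i];

    for (int i = 0; i < N; i++) printf("%d ", out[i]);  /* sorted keys */
    printf("\n");
    return 0;
}

In a real run, phases 1 and 3 execute on all processors concurrently, with a barrier around phase 2; the per-processor pointer ranges are disjoint, so phase 3 needs no locking.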

System and method for limited fanout daisy chaining of cache invalidation requests in a shared-memory multiprocessor system

A protocol engine is for use in each node of a computer system having a plurality of nodes. Each node includes an interface to a local memory subsystem that stores memory lines of information, a directory, and a memory cache. The directory includes an entry associated with a memory line of information stored in the local memory subsystem. The directory entry includes an identification field for identifying sharer nodes that potentially cache the memory line of information. The identification field has a plurality of bits at associated positions within the identification field. Each respective bit of the identification field is associated with one or more nodes. The protocol engine furthermore sets each bit in the identification field for which the memory line is cached in at least one of the associated nodes. In response to a request for exclusive ownership of a memory line, the protocol engine sends an initial invalidation request to no more than a first predefined number of the nodes associated with set bits in the identification field of the directory entry associated with the memory line.
Owner:SK HYNIX INC
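
A sketch of the limited-fanout idea under one plausible chain-splitting policy (the patent does not fix this detail): the home protocol engine sends no more than FANOUT initial requests, and each recipient forwards the invalidation to its successor in its chain.

#include <stdio.h>

#define NODES  8
#define FANOUT 2   /* max initial requests sent by the home node */

int main(void)
{
    /* Example directory entry: bits 1, 2, 4, 5, 7 mark sharer nodes. */
    unsigned sharers = 0xB6;
    int list[NODES], n = 0;
    for (int i = 0; i < NODES; i++)
        if ((sharers >> i) & 1u) list[n++] = i;

    /* Split the sharer list into FANOUT chains; the home node contacts
       only the head of each chain, and nodes forward down their chain. */
    for (int c = 0; c < FANOUT && c < n; c++) {
        printf("home -> node %d (initial invalidation)\n", list[c]);
        for (int i = c; i + FANOUT < n; i += FANOUT)
            printf("node %d -> node %d (daisy-chained)\n",
                   list[i], list[i + FANOUT]);
    }
    return 0;
}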