4,180 results about "Program synchronisation" patented technology

Clustered file management for network resources

Methods for operating a network as a clustered file system are disclosed. The methods involve client load rebalancing, distributed Input and Output (I/O), and resource load rebalancing. Client load rebalancing refers to the ability of a client enabled with processes in accordance with the current invention to remap a path through a plurality of nodes to a resource. Distributed I/O refers to methods on the network which provide concurrent input/output through a plurality of nodes to resources. Resource rebalancing includes remapping of pathways between nodes, e.g. servers, and resources, e.g. volumes/file systems. The network includes client nodes, server nodes, and resources. Each of the resources couples to at least two of the server nodes. The method comprises the acts of: redirecting an I/O request for a resource from a first server node coupled to the resource to a second server node coupled to the resource; and splitting the I/O request at the second server node into an access portion and a data transfer portion, passing the access portion to a corresponding administrative server node for the resource, and completing at the second server node, subsequent to receipt of an access grant from the corresponding administrative server node, a data transfer for the resource. In an alternate embodiment of the invention the methods may additionally include the acts of: detecting a change in the availability of the server nodes; and rebalancing the network by applying a load balancing function to re-assign each of the available resources to a corresponding available administrative server node responsive to the detecting act.
Owner:HEWLETT-PACKARD ENTERPRISE DEV LP
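The split between the access portion and the data transfer portion of a distributed I/O request can be illustrated with a short sketch. This is a simplified, hypothetical model (the node classes and the `grant_access` / `handle_io` names are assumptions for illustration, not the patent's interfaces): a server node forwards the access check to the administrative node that owns the resource and completes the data transfer only after the grant arrives.

```python
# Hypothetical sketch of splitting a distributed I/O request into an
# access portion (handled by the administrative node for the resource)
# and a data transfer portion (completed by the receiving server node).

class AdminNode:
    """Administrative server node: owns metadata/locking for a resource."""
    def grant_access(self, resource, mode):
        # A real system would check locks, quotas, and consistency here.
        return {"resource": resource, "mode": mode, "granted": True}

class ServerNode:
    """Server node coupled to the resource; performs the data transfer."""
    def __init__(self, name, admin_for):
        self.name = name
        self.admin_for = admin_for          # resource -> AdminNode

    def handle_io(self, resource, mode):
        # Access portion: forwarded to the administrative node.
        grant = self.admin_for[resource].grant_access(resource, mode)
        if not grant["granted"]:
            raise PermissionError(f"access denied for {resource}")
        # Data transfer portion: completed locally once the grant arrives.
        return f"{self.name}: transferred {mode} for {resource}"

admin = AdminNode()
second_server = ServerNode("server-2", {"vol0": admin})
# An I/O request redirected from server-1 is completed by server-2.
print(second_server.handle_io("vol0", "read"))
```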

Dynamic load balancing of a network of client and server computers

Methods for load rebalancing by clients in a network are disclosed. Client load rebalancing allows the clients to optimize throughput between themselves and the resources accessed by the nodes. A network which implements this embodiment of the invention can dynamically rebalance itself to optimize throughput by migrating client I/O requests from overutilized pathways to underutilized pathways. Client load rebalancing refers to the ability of a client enabled with processes in accordance with the current invention to remap a path through a plurality of nodes to a resource. The remapping may take place in response to a redirection command emanating from an overloaded node, e.g. server. The disclosed embodiments allow more efficient, robust communication between a plurality of clients and a plurality of resources via a plurality of nodes. In an embodiment of the invention a method for load balancing on a network is disclosed. The network includes at least one client node coupled to a plurality of server nodes, and at least one resource coupled to at least a first and a second server node of the plurality of server nodes. The method comprises the acts of: receiving at a first server node among the plurality of server nodes a request for the at least one resource; determining a utilization condition of the first server node; and re-directing subsequent requests for the at least one resource to a second server node among the plurality of server nodes in response to the determining act. In another embodiment of the invention the method comprises the acts of: sending an I/O request from the at least one client to the first server node for the at least one resource; determining an I/O failure of the first server node; and re-directing subsequent requests from the at least one client for the at least one resource to another among the plurality of server nodes in response to the determining act.
Owner:HEWLETT-PACKARD ENTERPRISE DEV LP
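A minimal sketch of the redirect-and-remap behavior described above, assuming invented names (the `OVERLOAD_THRESHOLD`, `Server`, and `Client` classes are illustrative, not the patented protocol): an overloaded server answers with a redirection, and the client remaps its path so subsequent requests go to the suggested server.

```python
# Minimal sketch of client load rebalancing: an overloaded server answers
# with a redirect, and the client remaps its path to the suggested server.

OVERLOAD_THRESHOLD = 0.8   # assumed utilization cutoff

class Server:
    def __init__(self, name, utilization, peers=()):
        self.name = name
        self.utilization = utilization
        self.peers = list(peers)

    def handle(self, request):
        if self.utilization > OVERLOAD_THRESHOLD and self.peers:
            # Redirection command emanating from an overloaded node.
            target = min(self.peers, key=lambda s: s.utilization)
            return {"redirect": target}
        return {"data": f"{self.name} served {request}"}

class Client:
    def __init__(self, server):
        self.server = server                # current path to the resource

    def request(self, what):
        reply = self.server.handle(what)
        if "redirect" in reply:
            self.server = reply["redirect"]  # remap the path
            reply = self.server.handle(what)
        return reply["data"]

s2 = Server("server-2", utilization=0.2)
s1 = Server("server-1", utilization=0.9, peers=[s2])
client = Client(s1)
print(client.request("read vol0"))   # transparently served by server-2
```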

Novel massively parallel supercomputer

A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that optimally maximize packet communications throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance, and may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and used for Input / Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplane and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.
Owner:INT BUSINESS MASCH CORP
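As a software analogue of the global barrier function mentioned above (not the hardware network itself), the following sketch uses Python's standard `threading.Barrier` to show the synchronization pattern: every node must reach the barrier before any node proceeds to the next phase of the parallel algorithm.

```python
# Software analogue of a global barrier: all workers must reach the barrier
# before any of them proceeds to the next phase of the parallel algorithm.
import threading

NUM_NODES = 4
barrier = threading.Barrier(NUM_NODES)

def node(rank):
    print(f"node {rank}: finished compute phase")
    barrier.wait()                      # global barrier across all nodes
    print(f"node {rank}: entering communication phase")

threads = [threading.Thread(target=node, args=(r,)) for r in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```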

Apparatus, system, and method for storage space recovery after reaching a read count limit

An apparatus, system, and method are disclosed for storage space recovery after reaching a read count limit. A read module reads data in a storage division of solid-state storage. A read counter module then increments a read counter corresponding to the storage division. A read counter limit module determines if the read count exceeds a maximum read threshold, and if so, a storage division selection module selects the corresponding storage division for recovery. A data recovery module reads valid data packets from the selected storage division, stores the valid data packets in another storage division of the solid-state storage, and updates a logical index with a new physical address of the valid data.
Owner:UNIFICATION TECH LLC
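The read-count-driven recovery loop can be sketched briefly. The threshold value, the tuple-based packet representation, and the dict-based logical index below are assumptions made for illustration, not details taken from the patent.

```python
# Hedged sketch of storage space recovery after reaching a read count limit.

MAX_READ_THRESHOLD = 3          # tiny value so the example triggers recovery

read_counters = {"div0": 0}
divisions = {
    "div0": [("obj:A", "valid"), ("obj:B", "stale"), ("obj:C", "valid")],
    "div1": [],                 # empty division used as the recovery target
}
logical_index = {"obj:A": ("div0", 0), "obj:C": ("div0", 2)}

def read(division_id):
    """Read a division, count the read, and recover the division if the
    read counter exceeds the maximum read threshold."""
    read_counters[division_id] += 1
    if read_counters[division_id] > MAX_READ_THRESHOLD:
        recover(division_id, target="div1")
        read_counters[division_id] = 0
    return divisions[division_id]

def recover(division_id, target):
    """Copy valid packets to another division, update the logical index with
    their new physical addresses, then reclaim the old division."""
    for logical_id, state in divisions[division_id]:
        if state == "valid":
            divisions[target].append((logical_id, state))
            logical_index[logical_id] = (target, len(divisions[target]) - 1)
    divisions[division_id] = []          # division reclaimed for reuse

for _ in range(4):
    read("div0")
print(logical_index)    # valid data now mapped into div1
```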

Apparatus, system, and method for redundant write caching

An apparatus, system, and method are disclosed for redundant write caching. The apparatus, system, and method are provided with a plurality of modules including a write request module, a first cache write module, a second cache write module, and a trim module. The write request module detects a write request to store data on a storage device. The first cache write module writes data of the write request to a first cache. The second cache write module writes the data to a second cache. The trim module trims the data from one of the first cache and the second cache in response to an indicator that the storage device stores the data. The data remains available in the other of the first cache and the second cache to service read requests.
Owner:SANDISK TECH LLC
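A short sketch of the redundant write/trim sequence, using plain dictionaries as stand-ins for the two caches and the storage device (an assumed structure, not the patented implementation): the data is written to both caches, and one copy is trimmed once the backing device confirms that it holds the data.

```python
# Simplified sketch of redundant write caching with a trim step.

first_cache, second_cache, storage = {}, {}, {}

def write(key, data):
    # Write the data redundantly to both caches before acknowledging.
    first_cache[key] = data
    second_cache[key] = data
    storage[key] = data                 # stand-in for the (slower) device write

def on_storage_confirmed(key):
    # Indicator that the storage device stores the data: trim one copy.
    first_cache.pop(key, None)
    # The other copy remains available to service read requests.

def read(key):
    if key in second_cache:
        return second_cache[key]
    if key in first_cache:
        return first_cache[key]
    return storage[key]

write("blk7", b"payload")
on_storage_confirmed("blk7")
print(read("blk7"))                     # still served from the second cache
```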

Apparatus, system, and method for managing data using a data pipeline

An apparatus, system, and method are disclosed for managing data in a solid-state storage device. A solid-state storage and a solid-state storage controller are included. The solid-state storage controller includes a write data pipeline and a read data pipeline. The write data pipeline includes a packetizer and an ECC generator. The packetizer receives a data segment and creates one or more data packets sized for the solid-state storage. The ECC generator generates one or more error-correcting codes ("ECC") for the data packets received from the packetizer. The read data pipeline includes an ECC correction module, a depacketizer, and an alignment module. The ECC correction module reads a data packet from solid-state storage, determines if a data error exists using the corresponding ECC, and corrects errors. The depacketizer checks and removes one or more packet headers. The alignment module removes unwanted data and re-formats the data as data segments of an object.
Owner:UNIFICATION TECH LLC
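The packetize-then-protect flow of the write pipeline and the verify-then-strip flow of the read pipeline can be illustrated with a small sketch. The fixed packet size and the hash used as a stand-in for a real error-correcting code (a hash detects but cannot correct errors) are assumptions for the example only.

```python
# Illustrative sketch of a write/read data pipeline with a packetizer and an
# ECC-like stage.
import hashlib

PACKET_SIZE = 8     # assumed packet payload size in bytes

def packetize(segment: bytes):
    """Split a data segment into packets sized for the storage medium."""
    return [segment[i:i + PACKET_SIZE] for i in range(0, len(segment), PACKET_SIZE)]

def add_code(packet: bytes):
    """Attach a per-packet code (here a short hash, standing in for real ECC)."""
    return {"data": packet, "code": hashlib.sha256(packet).digest()[:4]}

def check_and_strip(stored):
    """Read-pipeline step: verify the code, then hand back the raw payload."""
    if hashlib.sha256(stored["data"]).digest()[:4] != stored["code"]:
        raise IOError("uncorrectable data error")
    return stored["data"]

segment = b"hello solid-state world"
stored_packets = [add_code(p) for p in packetize(segment)]          # write path
recovered = b"".join(check_and_strip(p) for p in stored_packets)    # read path
assert recovered == segment
```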

Hardware multithreading systems and methods

According to some embodiments, a multithreaded microcontroller includes a thread control unit comprising thread control hardware (logic) configured to perform a number of multithreading system calls essentially in real time, e.g. in one or a few clock cycles. System calls can include mutex lock, wait condition, and signal instructions. The thread controller includes a number of thread state, mutex, and condition variable registers used for executing the multithreading system calls. Threads can transition between several states including free, run, ready and wait. The wait state includes interrupt, condition, mutex, I-cache, and memory substates. A thread state transition controller controls thread states, while a thread instructions execution unit executes multithreading system calls and manages thread priorities to avoid priority inversion. A thread scheduler schedules threads according to their priorities. A hardware thread profiler including global, run and wait profiler registers is used to monitor thread performance to facilitate software development.
Owner:GEO SEMICONDUCTOR INC
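As a software model of the behavior described above (not the hardware design), the following sketch shows a thread moving into a mutex-wait state when it requests a held lock, and the highest-priority waiter being made ready again on release; the state names and classes are assumptions for illustration.

```python
# Software model of thread states and a mutex-lock system call: a thread
# requesting a held mutex transitions from RUN/READY to a WAIT(mutex) state.
from enum import Enum, auto

class State(Enum):
    FREE = auto()
    READY = auto()
    RUN = auto()
    WAIT_MUTEX = auto()

class Thread:
    def __init__(self, tid, priority):
        self.tid, self.priority, self.state = tid, priority, State.READY

class Mutex:
    def __init__(self):
        self.owner, self.waiters = None, []

    def lock(self, thread):
        if self.owner is None:
            self.owner = thread              # lock acquired immediately
        else:
            thread.state = State.WAIT_MUTEX  # thread parked in a wait state
            self.waiters.append(thread)

    def unlock(self):
        if self.waiters:
            # Wake the highest-priority waiter (simple priority scheduling).
            nxt = max(self.waiters, key=lambda t: t.priority)
            self.waiters.remove(nxt)
            nxt.state = State.READY
            self.owner = nxt
        else:
            self.owner = None

m = Mutex()
a, b = Thread(1, priority=2), Thread(2, priority=5)
m.lock(a)
m.lock(b)                    # b must wait: mutex owned by a
m.unlock()                   # b (higher priority) becomes READY and owns m
print(b.state, m.owner.tid)
```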

Method and system for leasing storage

A method and system for leasing storage locations in a distributed processing system is provided. Consistent with this method and system, a client requests access to storage locations for a period of time (lease period) from a server, such as the file system manager. Responsive to this request, the server invokes a lease period algorithm, which considers various factors to determine a lease period during which time the client may access the storage locations. After a lease is granted, the server sends an object to the client that advises the client of the lease period and that provides the client with behavior to modify the lease, such as canceling or renewing it. The server supports concurrent leases, exact leases, and leases for various types of access. After all leases to a storage location expire, the server reclaims the storage location.
Owner:SUN MICROSYSTEMS INC
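The grant/renew/cancel/reclaim lifecycle can be sketched as follows. The class names and the fixed "cap at 60 seconds" rule standing in for the lease period algorithm are assumptions for illustration only.

```python
# Hedged sketch of lease-based access to storage locations: the server grants
# a lease for a computed period, the client can renew or cancel it, and the
# server reclaims locations once their leases have expired.
import time

class LeaseServer:
    def __init__(self):
        self.leases = {}                       # location -> expiry timestamp

    def request_lease(self, location, requested_seconds):
        # Stand-in for the lease period algorithm: never grant more than 60 s.
        granted = min(requested_seconds, 60)
        self.leases[location] = time.time() + granted
        return granted

    def renew(self, location, requested_seconds):
        return self.request_lease(location, requested_seconds)

    def cancel(self, location):
        self.leases.pop(location, None)

    def reclaim_expired(self):
        now = time.time()
        expired = [loc for loc, exp in self.leases.items() if exp <= now]
        for loc in expired:
            del self.leases[loc]               # storage location reclaimed
        return expired

server = LeaseServer()
print(server.request_lease("blk:42", requested_seconds=120))   # capped at 60
server.cancel("blk:42")
print(server.reclaim_expired())
```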

Method for optimizing locks in computer programs

A method and several variants for using information about the scope of access of objects acted upon by mutual exclusion, or mutex, locks to transform a computer program by eliminating locking operations from the program or simplifying the locking operations, while strictly preserving the semantics of the original program. In particular, if it can be determined by a compiler that the object locked can only be accessed by a single thread, it is not necessary to perform the "acquire" or "release" part of the locking operation, and only its side effects must be performed. Likewise, if it can be determined that the side effects of a locking operation acting on a variable which is locked in multiple threads are not needed, then only the locking operation, and not the side effects, needs to be performed. This simplifies the locking operation and leads to faster programs which use fewer computer processor resources to execute and which perform fewer shared memory accesses; this in turn causes not only the optimized program but also other programs executing on the same computing machine to execute faster. The method also describes how information about the semantics of the locking operation side effects, together with the information about the scope of access, can be used to eliminate the side effect parts of the locking operation, thereby completely eliminating the locking operation. The method also describes how to analyze the program to compute the necessary information about the scope of access. Variants of the method show how one or several of the features of the method may be performed.
Owner:IBM CORP
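A toy illustration of the per-lock-site decision described above (not the patented analysis itself; the flags summarizing the scope-of-access information are assumed inputs): if the locked object is reachable from a single thread, only the lock's side effects need to be kept, and if those side effects are themselves unnecessary, the lock can be removed entirely.

```python
# Toy sketch of the decision a compiler makes for each lock site, given
# scope-of-access information computed by prior analysis.

def optimize_lock(site):
    single_threaded = site["accessed_by_single_thread"]
    side_effects_needed = site["side_effects_needed"]
    if single_threaded and not side_effects_needed:
        return "eliminate lock completely"
    if single_threaded:
        return "drop acquire/release, keep side effects only"
    if not side_effects_needed:
        return "keep acquire/release, drop side effects"
    return "keep full lock"

sites = [
    {"obj": "localBuffer", "accessed_by_single_thread": True,  "side_effects_needed": False},
    {"obj": "logQueue",    "accessed_by_single_thread": False, "side_effects_needed": True},
]
for s in sites:
    print(s["obj"], "->", optimize_lock(s))
```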