
706 results about "Memory pool" patented technology

Memory pools, also called fixed-size block allocation, use pools for memory management and allow dynamic memory allocation comparable to malloc or C++'s operator new. Because those general-purpose allocators suffer from fragmentation caused by variable block sizes, they are not recommended in real-time systems for performance reasons. A more efficient solution is to preallocate a number of memory blocks of the same size, called a memory pool. The application can then allocate, access, and free blocks, represented by handles, at run time.
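For illustration, a minimal fixed-size block pool in C might look like the following; the block size, count, and names are assumptions for the sketch, not taken from any patent below.

```c
#include <stddef.h>
#include <stdint.h>

/* A minimal fixed-size block pool: blocks are preallocated in one
 * contiguous buffer and chained into a free list. */
#define BLOCK_SIZE  64    /* must be >= sizeof(void *) */
#define BLOCK_COUNT 128

typedef struct pool {
    uint8_t  storage[BLOCK_SIZE * BLOCK_COUNT];
    void    *free_list;               /* head of the chain of free blocks */
} pool_t;

static void pool_init(pool_t *p)
{
    p->free_list = NULL;
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        void *block = p->storage + i * BLOCK_SIZE;
        *(void **)block = p->free_list;   /* link block into free list */
        p->free_list = block;
    }
}

static void *pool_alloc(pool_t *p)        /* O(1), no fragmentation */
{
    void *block = p->free_list;
    if (block)
        p->free_list = *(void **)block;   /* pop the head block */
    return block;                         /* NULL when the pool is empty */
}

static void pool_free(pool_t *p, void *block)   /* O(1) */
{
    *(void **)block = p->free_list;       /* push back onto the free list */
    p->free_list = block;
}
```

Because every block has the same size, allocation and release are constant-time pointer swaps and the pool can never fragment, which is what makes this scheme attractive for real-time systems.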

ATM architecture and switching element

An ATM switching system architecture of the switch-fabric type is built of a plurality of ATM switch element circuits and routing table circuits, one for each physical connection to/from the switch fabric. A shared pool of memory is employed to eliminate the need to provide memory at every crosspoint. Each routing table maintains a marked interrupt linked list that stores information about which of its virtual channels are experiencing congestion. This linked list is available to a processor in the external workstation to alert the processor when a congestion condition exists in one of the virtual channels. The switch element circuit typically has up to eight 4-bit-wide nibble inputs and eight 4-bit-wide nibble outputs and can connect cells received at any of its inputs to any of its outputs, based on the information in a routing tag uniquely associated with each cell.
Owner:PMC SIERRA
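
A hypothetical sketch of such a marked congestion list in C (all names are invented; the patent publishes behavior, not code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Each routing-table entry for a virtual channel can be linked into a
 * list that the external workstation processor walks when a congestion
 * interrupt is raised. The "marked" flag keeps a VC from being queued twice. */
typedef struct vc_entry {
    unsigned         vci;        /* virtual channel identifier */
    bool             marked;     /* already on the congestion list? */
    struct vc_entry *next;
} vc_entry_t;

static vc_entry_t *congested_head = NULL;

/* Called when a VC's queue depth crosses a congestion threshold. */
static void mark_congested(vc_entry_t *vc)
{
    if (!vc->marked) {
        vc->marked = true;
        vc->next = congested_head;
        congested_head = vc;
        /* raise_interrupt(); -- alert the workstation processor */
    }
}

/* Called by the workstation processor to drain the list. */
static vc_entry_t *pop_congested(void)
{
    vc_entry_t *vc = congested_head;
    if (vc) {
        congested_head = vc->next;
        vc->marked = false;
    }
    return vc;
}
```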

Memory control method for embedded systems

The invention provides a memory control method for embedded systems. The method includes: requesting memory from the operating system and using part of it as a memory pool and the rest as a reserved memory area; establishing a cache for each thread through pool management of the memory pool and thread-caching techniques; managing the reserved memory area with the TLSF (two-level segregated fit) algorithm; dividing the memory pool into memory blocks of different sizes, linking blocks of the same size into a doubly linked list, adding a memory management unit for each block, and placing the memory management units and the memory blocks in separate memory areas; and establishing a memory statistics linked list for each thread that links the blocks the thread has allocated, which facilitates troubleshooting of memory leaks. In addition, a memory overwrite checking mechanism is added without increasing the method's overhead.
Owner:WUHAN POST & TELECOMM RES INST CO LTD
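
The layout this abstract describes could be sketched in C as follows; this is one reading of the abstract with invented names, not the patented implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Blocks of the same size are chained into a doubly linked list, and each
 * block's management unit lives in a memory area separate from the block
 * payload, so a payload overrun cannot corrupt the allocator's bookkeeping. */
typedef struct mgmt_unit {
    struct mgmt_unit *prev, *next;   /* doubly linked list of same-size blocks */
    void             *payload;       /* block data, in a different memory area */
    uint32_t          owner_tid;     /* allocating thread, for leak tracking */
    uint32_t          canary;        /* checked on free to detect overwrites */
} mgmt_unit_t;

typedef struct size_class {
    size_t       block_size;         /* e.g. 32, 64, 128, ... bytes */
    mgmt_unit_t *free_head;          /* doubly linked free list for this size */
} size_class_t;

/* On free, a mismatched canary indicates the payload overran its block. */
static int block_is_intact(const mgmt_unit_t *mu)
{
    return mu->canary == 0xDEADBEEFu;
}
```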

Method for realizing local file system through object storage system

Inactive · CN107045530A · Benefits: reduces the number of interactions; improves the performance of accessing the swift storage system · Classifications: special data processing applications; application server; file system
The invention discloses a method for realizing a local file system through an object storage system. A file-system metadata cache algorithm and an in-memory description structure reduce the number of interactions between an application and the back end of a swift storage system, improving the application's performance when accessing that system. A policy of pre-allocating a memory pool and reclaiming idle memory blocks in delayed batches improves the efficiency of traversing a directory that contains a large number of subdirectories and files. An in-memory description structure for open file handles lets the application perform file read/write operations efficiently. A read-ahead policy effectively reduces the number of network interactions between the application server and the swift storage back end, improving the file system's read performance. Finally, a zero-copy, block-write policy ensures that no data copying or caching occurs during file writes and that each write system call is a complete block-write operation, improving write efficiency.
Owner:HUAZHONG UNIV OF SCI & TECH
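
The delayed batch-reclaim policy mentioned above could look roughly like this in C; the thresholds and names are assumptions for the sketch, none of them come from the patent:

```c
#include <stddef.h>

/* Freed metadata blocks are parked on an idle list and only returned to
 * the underlying pool in batches once a high-water mark is crossed, so a
 * large directory traversal reuses blocks instead of churning the pool. */
#define IDLE_HIGH_WATER 1024
#define RECLAIM_BATCH    256

typedef struct node { struct node *next; } node_t;

static node_t *idle_list  = NULL;
static size_t  idle_count = 0;

static void release_block(node_t *blk)
{
    blk->next = idle_list;            /* cheap O(1) "free" onto the idle list */
    idle_list = blk;
    if (++idle_count >= IDLE_HIGH_WATER) {
        for (size_t i = 0; i < RECLAIM_BATCH && idle_list; i++) {
            node_t *victim = idle_list;
            idle_list = victim->next;
            idle_count--;
            /* pool_return(victim); -- hand back to the shared pool */
        }
    }
}
```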

Method of scheduling jobs using database management system for real-time processing

A method of scheduling jobs in real time using a database management system is provided. An application task classifies each job as one of two transaction types: hot or normal. A processing area in a memory pool, a common resource, is allocated to the application task, and the job is transferred to a database job manager through a client application program interface (API). The job manager places the job's request node in the list for the corresponding transaction type within the DB task's mailbox, which separates job request nodes into hot and normal lists so that nodes can be scheduled in units of transactions. The job manager then hands the request nodes in the mailbox to the DB task one by one, processing them in units of transactions such that a job of the hot transaction type takes priority over a job of the normal type, and jobs of the same transaction type are processed in order of request time. Through the job manager, the DB task places the processing result in the mailbox of the application task that requested the job, so that the application task can use the result later.
Owner:FUSIONSOFT
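
A minimal model of this two-level mailbox in C might look like the following; the types and names are assumed for illustration. Hot jobs always run first, and within one type jobs run in request order because each list is a FIFO:

```c
#include <stddef.h>

typedef struct job {
    struct job *next;
    /* ... job payload ... */
} job_t;

typedef struct fifo { job_t *head, *tail; } fifo_t;

typedef struct mailbox {
    fifo_t hot;      /* hot-type transactions */
    fifo_t normal;   /* normal-type transactions */
} mailbox_t;

static void fifo_push(fifo_t *q, job_t *j)   /* enqueue at tail */
{
    j->next = NULL;
    if (q->tail) q->tail->next = j; else q->head = j;
    q->tail = j;
}

static job_t *fifo_pop(fifo_t *q)            /* dequeue from head */
{
    job_t *j = q->head;
    if (j && !(q->head = j->next))
        q->tail = NULL;
    return j;
}

/* Dispatcher: drain hot jobs before touching normal jobs. */
static job_t *next_job(mailbox_t *mb)
{
    job_t *j = fifo_pop(&mb->hot);
    return j ? j : fifo_pop(&mb->normal);
}
```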

NUMA-aware thread and memory resource optimization method and system for high-performance computers

Active · CN104375899A · Benefits: solves the problem of excessively coarse memory-management granularity; meets fine-grained memory access requirements · Classifications: resource allocation; computer architecture; performance computing
The invention discloses a NUMA-aware thread and memory resource optimization method and system for high-performance computers. The system comprises a runtime environment detection module that detects the hardware resources and the number of parallel processes on a compute node; a computing resource allocation and management module that allocates computing resources to the parallel processes and builds the mapping between those processes and threads and the processor cores and physical memory; and a parallel programming interface and thread-binding module that provides the parallel programming interface, obtains each thread's binding position mask from the mapping, and binds the executing thread to the corresponding CPU core. The invention further discloses a NUMA-aware multi-threaded memory manager and its memory management method. The manager comprises a DSM memory management module and an SMP-module memory pool, which respectively manage the SMP modules that the MPI processes belong to and the allocation and release of memory within a single SMP module. The design reduces the frequency of system calls for memory operations, improves memory management performance, reduces applications' remote memory accesses, and improves application performance.
Owner:INST OF APPLIED PHYSICS & COMPUTATIONAL MATHEMATICS
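
The thread-binding step can be illustrated with the Linux CPU-affinity API; this is a minimal sketch, since the patent specifies the behavior rather than this particular interface:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Given the CPU core chosen by the process-to-core mapping, build an
 * affinity mask and pin the calling thread to that core, keeping its
 * memory accesses local to the core's NUMA node. */
static int bind_self_to_core(int core_id)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core_id, &mask);          /* the "binding position mask" */
    return pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
}
```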

Multi-tenant memory service for memory pool architectures

A memory management service occupies a configurable portion of an overall memory system in a disaggregated compute environment. The service provides optimized data organization capabilities over the pool of real memory accessible to the system. It enables various types of data stores to be implemented in hardware, including at the data structure level. Storage capacity is conserved by creating and managing high-performance, reusable data structure implementations across the memory pool, then using analytics (e.g., multi-tenant similarity and duplicate detection) to determine when those data organizations should be used. The service may also re-align memory to different data structures that are more efficient given data usage and distribution patterns, and it manages automated backups efficiently.
Owner:IBM CORP
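
Duplicate detection over a shared pool is commonly hash-based; a toy pass in C could look like this. The chunk size and the FNV-1a hash are chosen purely for illustration and are not taken from the patent:

```c
#include <stddef.h>
#include <stdint.h>

#define CHUNK 4096   /* assumed granularity of comparison */

/* FNV-1a 64-bit hash, used here only as a cheap fingerprint. */
static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    while (n--) { h ^= *p++; h *= 0x100000001b3ULL; }
    return h;
}

/* Counts chunk pairs in [base, base + chunks*CHUNK) whose hashes collide;
 * such pairs are candidates for sharing a single backing copy (a real
 * system would confirm with a byte-wise compare before deduplicating). */
static size_t count_duplicate_candidates(const uint8_t *base, size_t chunks,
                                         uint64_t *hashes /* chunks entries */)
{
    size_t dups = 0;
    for (size_t i = 0; i < chunks; i++)
        hashes[i] = fnv1a(base + i * CHUNK, CHUNK);
    for (size_t i = 0; i < chunks; i++)
        for (size_t j = i + 1; j < chunks; j++)
            if (hashes[i] == hashes[j])
                dups++;
    return dups;
}
```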

Dynamic memory management of unallocated memory in a logical partitioned data processing system

A method, system, and program for dynamic memory management of unallocated memory in a logical partitioned data processing system. A logical partitioned data processing system typically includes multiple memory units, processors, I/O adapters, and other resources that can be allocated to multiple logical partitions. A partition manager operating within the data processing system manages allocation of the resources to each logical partition. In particular, the partition manager manages allocation of a first portion of the multiple memory units to at least one logical partition. In addition, the partition manager manages a memory pool of unallocated memory from among the multiple memory units. In response to a request for a memory loan from one of the allocated logical partitions, a second selection of memory units from the memory pool is loaned to the requesting partition. The partition manager, however, can reclaim the loaned memory units from the requesting partition at any time.
Owner:IBM CORP
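
One way to picture the loan/reclaim protocol in C; this is a hypothetical interface, since the patent defines behavior rather than code:

```c
#include <stdbool.h>
#include <stddef.h>

/* The partition manager tracks a pool of unallocated memory units, lends
 * some to a partition on request, and may reclaim the loan at any time. */
typedef struct {
    size_t free_units;       /* unallocated memory units in the pool */
} partition_mgr_t;

typedef struct {
    int    borrower;         /* logical partition id */
    size_t units;            /* memory units on loan */
    bool   active;
} mem_loan_t;

static bool pm_loan(partition_mgr_t *pm, mem_loan_t *loan,
                    int partition_id, size_t units)
{
    if (pm->free_units < units)
        return false;                 /* nothing left to lend */
    pm->free_units -= units;
    loan->borrower = partition_id;
    loan->units    = units;
    loan->active   = true;
    return true;
}

/* The manager may call this at any time to take the memory back. */
static void pm_reclaim(partition_mgr_t *pm, mem_loan_t *loan)
{
    if (loan->active) {
        pm->free_units += loan->units;
        loan->active = false;
    }
}
```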

Distributed switch memory architecture

A distributed memory switch system for transmitting packets from source ports to destination ports, comprising: a plurality of ports including a source port and a destination port wherein a packet is transmitted from the source port to the destination port; a memory pool; and an interconnection stage coupled between the plurality of ports and the memory pool such that the interconnection stage permits a packet to be transmitted from the source port to the destination port via the memory pool.
Owner:INTEL CORP
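
A toy model of the claim in C, showing a packet staged in pooled memory between source and destination ports; all names and sizes are invented for the sketch:

```c
#include <stddef.h>
#include <string.h>

typedef struct { unsigned char data[1518]; size_t len; } packet_t;

typedef struct {
    packet_t slots[64];     /* the shared memory pool */
    int      used[64];
} mem_pool_t;

static packet_t *pool_get(mem_pool_t *mp)
{
    for (int i = 0; i < 64; i++)
        if (!mp->used[i]) { mp->used[i] = 1; return &mp->slots[i]; }
    return NULL;            /* pool exhausted */
}

/* The interconnection stage copies a packet from the source port into a
 * buffer drawn from the shared pool rather than a per-crosspoint buffer;
 * the destination port later consumes the staged buffer. */
static packet_t *interconnect_stage(mem_pool_t *mp, const packet_t *in)
{
    packet_t *staged = pool_get(mp);
    if (staged)
        memcpy(staged, in, sizeof *in);
    return staged;
}
```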