194 results about "Memory object" patented technology

Object memory involves processing features of an object or material such as texture, color, size, and orientation. It is processed mainly in the ventral regions of the brain. A few studies have shown that, on average, most people can recall up to four items, each with a set of four different visual qualities.

System and method for memory reclamation

A method for memory reclamation is disclosed that includes marking a memory object when an attempt to alter a reference to the memory object is detected by a software write barrier. Marking may be performed using representations ("black," "white," "gray") stored as part of a garbage collection information data structure associated with each memory object. Initially, all allocated memory objects are marked white. Objects are then processed such that objects referenced by pointers are marked gray. Each object marked gray is then processed to determine the objects referenced by pointers within it, and those objects are marked gray. When all objects referenced by pointers in the selected gray object have been processed, the selected gray object is marked black. When all processing has been completed, objects still marked white may be reclaimed. Also described is a garbage collector that runs as a task concurrently with other tasks. The priority of the garbage collector may be increased to prevent interruption during certain parts of the garbage collection procedure.
Owner:WIND RIVER SYSTEMS
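The tri-color marking scheme described in the abstract can be pictured with a minimal sketch. The names (MemoryObject, Color, collect) and the simple object-graph model are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of tri-color mark-and-sweep, assuming a simple object graph
# where each object records its outgoing references; names are illustrative.
from enum import Enum

class Color(Enum):
    WHITE = 0   # not yet visited; candidate for reclamation
    GRAY = 1    # reachable, but its references are not yet scanned
    BLACK = 2   # reachable and fully scanned

class MemoryObject:
    def __init__(self, name, refs=None):
        self.name = name
        self.refs = refs or []          # outgoing pointers
        self.color = Color.WHITE        # all allocated objects start white

def collect(roots, heap):
    # Objects referenced from the roots are marked gray.
    for obj in roots:
        obj.color = Color.GRAY
    gray = list(roots)
    # Scan gray objects: their referents become gray, they become black.
    while gray:
        obj = gray.pop()
        for ref in obj.refs:
            if ref.color is Color.WHITE:
                ref.color = Color.GRAY
                gray.append(ref)
        obj.color = Color.BLACK
    # Anything still white is unreachable and may be reclaimed.
    return [obj for obj in heap if obj.color is Color.WHITE]
```

The software write barrier in the patent would additionally re-mark an object when a reference to it is altered during concurrent collection; that step is omitted in this sketch.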

Dynamic adaptive tenuring of objects

Run-time sampling techniques have been developed whereby representative object lifetime statistics may be obtained and employed to adaptively affect tenuring decisions, memory object promotion, and / or storage location selection. In some realizations, object allocation functionality is dynamically varied to achieve desired behavior on an object category-by-category basis. In some realizations, phase behavior affects sampled lifetimes, e.g., for objects allocated at different phases of program execution, and the dynamic facilities described herein provide phase-specific adaptation of tenuring decisions, memory object promotion, and / or storage location selection. In some realizations, reversal of such decisions is provided.
Owner:ORACLE INT CORP
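One way to picture sampled-lifetime-driven tenuring is the sketch below, which adapts a per-category tenuring age from sampled object lifetimes. The averaging rule, thresholds, and names are assumptions for illustration, not the patented mechanism.

```python
# Sketch: adapt a per-category tenuring age from sampled object lifetimes.
# The averaging rule and the survival thresholds are illustrative assumptions.
from collections import defaultdict

SAMPLE_LIFETIMES = defaultdict(list)   # category -> sampled lifetimes (in GC cycles)
TENURING_AGE = defaultdict(lambda: 2)  # category -> survivals required before promotion

def record_sample(category, lifetime_cycles):
    SAMPLE_LIFETIMES[category].append(lifetime_cycles)

def adapt_tenuring(category):
    samples = SAMPLE_LIFETIMES[category]
    if not samples:
        return TENURING_AGE[category]
    mean_life = sum(samples) / len(samples)
    # Long-lived categories get promoted sooner; short-lived ones stay in the
    # young generation longer so they can die cheaply there.
    TENURING_AGE[category] = 1 if mean_life > 4 else 3
    return TENURING_AGE[category]

def should_promote(category, survived_cycles):
    return survived_cycles >= TENURING_AGE[category]
```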

Implementation of an object memory centric cloud

Embodiments of the invention provide systems and methods to implement an object memory fabric comprising hardware-based processing nodes with memory modules that store and manage memory objects created natively within, and managed by, the memory modules at a memory layer. Physical addresses of memory and storage are managed with the memory objects based on an object address space that is allocated on a per-object basis with an object addressing scheme. Each node may utilize the object addressing scheme to couple to additional nodes and operate as a set of nodes, so that all memory objects of the set are accessible based on the object addressing scheme, which defines invariant object addresses for the memory objects that remain invariant with respect to physical memory storage locations and storage location changes of the memory objects, both within a memory module and across all modules interfacing with the object memory fabric.
Owner:ULTRATA LLC
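A toy model of the invariant-address idea: each memory object keeps the same object address while its physical placement can move between modules or nodes. The class, field names, and directory structure below are hypothetical, used only to illustrate the concept.

```python
# Toy model: invariant object addresses resolved to (node, physical offset)
# through a fabric-wide directory; names and structure are assumptions.
class ObjectMemoryFabric:
    def __init__(self):
        self.directory = {}   # object address -> (node_id, physical_offset)

    def create_object(self, obj_addr, node_id, phys_offset):
        # The object address is allocated per object and never changes.
        self.directory[obj_addr] = (node_id, phys_offset)

    def migrate(self, obj_addr, new_node_id, new_phys_offset):
        # Physical placement changes; the address clients use does not.
        self.directory[obj_addr] = (new_node_id, new_phys_offset)

    def resolve(self, obj_addr):
        return self.directory[obj_addr]

fabric = ObjectMemoryFabric()
fabric.create_object(obj_addr=0x1000, node_id="node-a", phys_offset=0x40)
fabric.migrate(obj_addr=0x1000, new_node_id="node-b", new_phys_offset=0x80)
assert fabric.resolve(0x1000) == ("node-b", 0x80)   # same address, new location
```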

Object memory fabric performance acceleration

Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. Embodiments described herein can provide transparent and dynamic performance acceleration, especially for big data or other memory-intensive applications, by reducing or eliminating the overhead typically associated with memory management, storage management, networking, and data directories. Instead, embodiments manage memory objects at the memory level, which can significantly shorten the pathways between storage and memory and between memory and processing, thereby eliminating the associated overhead at each boundary.
Owner:ULTRATA LLC

Universal single level object memory address space

Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. Embodiments described herein can eliminate the size constraints typically imposed on the memory space of commodity servers and other commodity hardware by address sizes. Instead, physical addressing can be managed within the memory objects themselves, and the objects can in turn be accessed and managed through the object name space.
Owner:ULTRATA LLC
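To illustrate how a per-object name space sidesteps the address-size limit of any single server, the sketch below keeps physical placement inside each object record and looks objects up by name rather than by a flat machine address. The classes and fields are illustrative assumptions.

```python
# Sketch: a single-level object name space where each object carries its own
# physical placement, so no node-wide address size limits total capacity.
# Names and fields are illustrative assumptions.
class ObjectRecord:
    def __init__(self, name, placements):
        self.name = name
        # Physical addressing is kept inside the object itself:
        # a list of (module_id, physical_offset, length) extents.
        self.placements = placements

class ObjectNameSpace:
    def __init__(self):
        self._objects = {}

    def put(self, record):
        self._objects[record.name] = record

    def get(self, name):
        # Access is by object name, not by a flat machine address.
        return self._objects[name]

ns = ObjectNameSpace()
ns.put(ObjectRecord("sensor/readings/2024", [("module-3", 0x0, 4096)]))
print(ns.get("sensor/readings/2024").placements)
```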

Extensible memory object storage system based on heterogeneous memory

The invention relates to an extensible memory object storage system based on heterogeneous memory, which comprises a DRAM (Dynamic Random Access Memory) and an NVM (Non-Volatile Memory) and is configured to: execute memory allocation operations through a Slab-based memory allocation mechanism and divide each Slab Class into a DRAM domain and an NVM domain; monitor access heat information for each memory object at the application layer level; and dynamically adjust the storage area of the corresponding key-value data in each Slab Class based on the access heat information of each memory object, storing the key-value data of memory objects with relatively high access heat in the DRAM domain of each Slab Class and the key-value data of memory objects with relatively low access heat in the NVM domain. Dynamic use of the DRAM / NVM heterogeneous memory is thereby achieved, and compared with the traditional approach of monitoring application access heat at the hardware or operating system level, the large hardware and operating system overhead is eliminated.
Owner:HUAZHONG UNIV OF SCI & TECH
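A rough sketch of the hot/cold placement policy: each slab class has a DRAM domain and an NVM domain, and key-value items are placed by access heat tracked at the application level. The heat counter, threshold, and class layout are assumptions for illustration.

```python
# Sketch: per-slab-class DRAM and NVM domains with heat-driven placement.
# The heat counter and the migration threshold are illustrative assumptions.
class SlabClass:
    def __init__(self, item_size, hot_threshold=8):
        self.item_size = item_size
        self.hot_threshold = hot_threshold
        self.dram = {}          # key -> value (hot items)
        self.nvm = {}           # key -> value (cold items)
        self.heat = {}          # key -> access count, tracked at application level

    def put(self, key, value):
        self.heat.setdefault(key, 0)
        self.nvm[key] = value   # new items start in the NVM domain

    def get(self, key):
        self.heat[key] = self.heat.get(key, 0) + 1
        self._rebalance(key)
        return self.dram.get(key, self.nvm.get(key))

    def _rebalance(self, key):
        # Promote hot keys to DRAM; demotion of cooled keys works analogously.
        if self.heat[key] >= self.hot_threshold and key in self.nvm:
            self.dram[key] = self.nvm.pop(key)
```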

Method to customize function behavior based on cache and scheduling parameters of a memory argument

Disclosed are a method, a system, and a computer program product for operating a data processing system that can include or be coupled to multiple processor cores. In one or more embodiments, each of multiple memory objects can be populated with work items and can be associated with attributes that include information which can be used to describe data of each memory object and / or which can be used to process data of each memory object. The attributes can be used to indicate one or more of a cache policy, a cache size, and a cache line size, among others. In one or more embodiments, the attributes can be used as a history of how each memory object is used. The attributes can be used to indicate cache history statistics (e.g., a hit rate, a miss rate, etc.).
Owner:IBM CORP
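The attribute idea can be pictured as a small record attached to each memory object, combining cache hints with accumulated usage statistics that a runtime could consult when scheduling work. The field names, default values, and decision rule below are assumptions, not the patented structure.

```python
# Sketch: attributes attached to a memory object, combining cache hints with
# a usage history a runtime could consult when scheduling work items.
# Field names and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MemoryObjectAttributes:
    cache_policy: str = "write-back"   # hint: desired cache policy
    cache_size: int = 256 * 1024       # hint: working-set / cache size in bytes
    cache_line_size: int = 64          # hint: cache line size in bytes
    hits: int = 0                      # history: observed cache hits
    misses: int = 0                    # history: observed cache misses

    def record_access(self, hit: bool):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# A runtime might prefer cores whose local cache already holds the object
# when its observed hit rate is high (assumed policy, for illustration).
attrs = MemoryObjectAttributes()
attrs.record_access(hit=True)
attrs.record_access(hit=False)
print(attrs.hit_rate())   # 0.5
```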

Methods and systems for structured ASIC electronic design automation

Electronic design automation ("EDA") methods and systems for structured ASICs include accessing or receiving objects representative of source code for a structured ASIC. The objects are flattened to remove hierarchies associated with the source code, such as functional RTL hierarchies. The flattened objects are clustered to accommodate design constraints associated with the structured ASIC. The clustered objects are floorplanned within a design area of the structured ASIC. The objects are then placed within the portions of the design area assigned to the corresponding clusters. The objects optionally include logic objects and one or more memory objects and / or proprietary objects, wherein the one or more memory objects and / or proprietary objects are placed concurrently with the logic objects.
Owner:TERA SYST
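The flatten → cluster → floorplan → place flow described above can be outlined as a simple pipeline. Every function, data shape, and clustering rule here is a placeholder for illustration, not a real EDA API.

```python
# Outline of the flow described above: flatten, cluster, floorplan, then place.
# All functions and data shapes are placeholders, not a real EDA API.
def flatten(objects):
    # Remove functional RTL hierarchy so every object sits at one level.
    return [leaf for obj in objects for leaf in obj.get("children", [obj])]

def cluster(flat_objects, constraints):
    # Group flattened objects to meet design constraints (placeholder rule).
    size = constraints.get("cluster_size", 4)
    return [flat_objects[i:i + size] for i in range(0, len(flat_objects), size)]

def floorplan(clusters, design_area):
    # Assign each cluster a region of the structured-ASIC design area.
    return {i: region for i, (region, _) in enumerate(zip(design_area, clusters))}

def place(clusters, regions):
    # Place logic, memory, and proprietary objects within their cluster's region.
    return [(obj["name"], regions[i]) for i, c in enumerate(clusters) for obj in c]

def run_flow(objects, constraints, design_area):
    flat = flatten(objects)
    clusters = cluster(flat, constraints)
    regions = floorplan(clusters, design_area)
    return place(clusters, regions)
```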