81 results about "DRAM cache" patented technology

DRAM typically serves as a system's main memory. Cache memory is typically a small amount of expensive, high-performance SRAM that the CPU can read and write much faster than main memory.

Programmable SRAM and DRAM cache interface with preset access priorities

A cache interface that supports both Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM) is disclosed. The cache interface preferably comprises two portions, one on the processor and one on the cache. A designer simply selects which type of RAM to use for the cache, and the cache interface portion on the processor configures the processor for that type of RAM. When used with DRAM, the cache-side portion is kept simple: a busy indication is asserted so that the processor knows when an access it generates collides with the DRAM cache. Such a collision occurs when the DRAM cache cannot read or write data because it is in a precharge, initialization, refresh, or standby state. When the interface is used with an SRAM cache, the busy indication is preferably ignored by the processor and its cache interface portion. Additionally, the cache's speed and size requirements can be programmed into the interface, so the interface does not have to be redesigned for caches of different sizes or speeds.
Owner:IBM CORP
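
The abstract describes the busy indication only at the protocol level; the following is a minimal sketch, assuming a hypothetical memory-mapped register layout that is not taken from the patent, of how a processor-side interface portion might retry on a DRAM-cache collision while ignoring the flag when configured for SRAM.

```c
#include <stdint.h>

/* Hypothetical register layout -- not taken from the patent text. */
#define CACHE_BUSY_BIT (1u << 0)  /* set by the DRAM cache during precharge,
                                     initialization, refresh or standby */

typedef enum { CACHE_RAM_SRAM, CACHE_RAM_DRAM } cache_ram_type;

typedef struct {
    volatile uint32_t *status_reg;  /* busy indication from the cache      */
    volatile uint32_t *data_reg;    /* data port of the cache interface    */
    cache_ram_type     ram_type;    /* programmed by the designer          */
} cache_iface;

/* Read one word through the cache interface.  For DRAM, wait while the
 * cache reports an access collision; for SRAM, the busy bit is ignored. */
static uint32_t cache_read(const cache_iface *c)
{
    if (c->ram_type == CACHE_RAM_DRAM) {
        while (*c->status_reg & CACHE_BUSY_BIT)
            ;  /* collision with precharge/refresh: wait and retry */
    }
    return *c->data_reg;
}
```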

DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory access method and system adopting software and hardware collaborative management

The invention provides a DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory system with software-hardware collaborative management. In the system, the NVM is used as large-capacity main memory while the DRAM serves as a cache to the NVM. By making effective use of certain reserved bits in the TLB (translation lookaside buffer) and the page table structure, the hardware overhead of a traditional hardware-managed hierarchical heterogeneous memory architecture is eliminated, cache management of the heterogeneous memory system is moved to the software level, and the memory access delay after a last-level cache miss is reduced. Because many applications in big data environments have poor data locality, and a traditional demand-based data prefetching strategy in the DRAM cache can aggravate cache pollution, the DRAM-NVM hierarchical heterogeneous memory system adopts a utility-based data prefetching mechanism: whether data in the NVM are cached into the DRAM is decided according to the current memory pressure and the memory access characteristics of the applications, which improves the use efficiency of the DRAM cache and of the bandwidth from the NVM main memory to the DRAM cache.
Owner:HUAZHONG UNIV OF SCI & TECH
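
The utility-based fetching idea, caching an NVM page into DRAM only when memory pressure and the application's access pattern justify it, can be sketched as a simple policy function; the threshold values and the statistics fields below are illustrative assumptions, not figures from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-page statistics; field names are assumptions. */
struct page_stats {
    uint32_t recent_accesses;  /* accesses to this NVM page in the last epoch */
    bool     sequential;       /* part of a detected sequential stream        */
};

/* Decide whether an NVM page should be cached in DRAM.  Under high DRAM
 * pressure only hot or streaming pages are promoted, so cold, poorly
 * localized pages are served from NVM directly and do not pollute the
 * DRAM cache. */
static bool should_cache_in_dram(const struct page_stats *s,
                                 double dram_utilization /* 0.0 .. 1.0 */)
{
    const uint32_t hot_threshold = dram_utilization > 0.9 ? 8 :
                                   dram_utilization > 0.7 ? 4 : 1;
    return s->sequential || s->recent_accesses >= hot_threshold;
}
```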

Dram/nvm hierarchical heterogeneous memory access method and system with software-hardware cooperative management

Active · US20170277640A1 · Eliminates hardware · Reduces memory access delay · Memory architecture accessing/allocation · Memory systems · Term memory · Page table
The present invention provides a DRAM/NVM hierarchical heterogeneous memory system with software-hardware cooperative management schemes. In the system, NVM is used as large-capacity main memory, and DRAM is used as a cache to the NVM. Some reserved bits in the data structures of the TLB and last-level page table are employed effectively to eliminate hardware costs of the conventional hardware-managed hierarchical memory architecture. Cache management in such a heterogeneous memory system is pushed to the software level. Moreover, the invention is able to reduce memory access latency in the case of last-level cache misses. Considering that many applications have relatively poor data locality in big data application environments, the conventional demand-based data fetching policy for the DRAM cache can aggravate cache pollution. In the present invention, a utility-based data fetching mechanism is adopted in the DRAM/NVM hierarchical memory system; it determines whether data in the NVM should be cached in the DRAM according to current DRAM memory utilization and the application's memory access patterns. This improves the efficiency of the DRAM cache and of the bandwidth usage between the NVM main memory and the DRAM cache.
Owner:HUAZHONG UNIV OF SCI & TECH
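
One way to read the "reserved bits" idea is that each last-level page-table entry records, in bits the hardware otherwise ignores, whether the page currently has a copy in the DRAM cache and where; the bit positions below are invented for illustration and do not come from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative 64-bit page-table entry layout.  The cached/slot fields reuse
 * bits that are architecturally ignored, as the patent proposes, but the
 * exact positions here are assumptions. */
#define PTE_DRAM_CACHED   (1ull << 52)            /* page has a DRAM copy   */
#define PTE_DRAM_SLOT(x)  (((x) >> 53) & 0x3ff)   /* index of the DRAM slot */

/* Consult the PTE: on a hit, return the DRAM slot so the access can be
 * redirected to the DRAM cache; on a miss, the access goes to NVM. */
static bool lookup_in_dram_cache(uint64_t pte, uint64_t *slot_out)
{
    if (pte & PTE_DRAM_CACHED) {
        *slot_out = PTE_DRAM_SLOT(pte);
        return true;
    }
    return false;
}
```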

NVMM: An Extremely Large, Logically Unified, Sequentially Consistent Main-Memory System

Embodiments of both a non-volatile main memory (NVMM) single node and a multi-node computing system are disclosed. One embodiment of the NVMM single-node system has a cache subsystem composed entirely of DRAM and a large main memory subsystem composed entirely of NAND flash, and provides different address-mapping policies for each software application. The NVMM memory controller provides high, sustained bandwidths for client processor requests by managing the DRAM cache as a large, highly banked system with multiple ranks and multiple DRAM channels, and by using large cache blocks to accommodate large NAND flash pages. Multi-node systems organize the NVMM single nodes into a large, interconnected, low-latency cache/flash main-memory network. The entire interconnected flash system exports a single address space to the client processors and, like a unified cache, is shared in a way that can be divided unevenly among its client processors: processors that need more memory resources receive them at the expense of processors that need less storage. Multi-node systems have numerous configurations, from board-area networks to multi-board networks, with all nodes connected in various Moore graph topologies. Overall, the disclosed memory architecture dissipates less power per GB than traditional DRAM architectures, provides an extremely large solid-state capacity of a terabyte or more of main memory per CPU socket, at a cost per bit approaching that of NAND flash memory and with performance approaching that of an all-DRAM system.
Owner:JACOB BRUCE LEDLEY
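
As a rough illustration of managing the DRAM cache with blocks sized to NAND flash pages, the sketch below maps a physical address to a DRAM channel, bank, and block frame; the 16 KiB block size and the channel/bank counts are assumptions chosen only to make the arithmetic concrete, not parameters from the patent.

```c
#include <stdint.h>

/* Assumed geometry (not from the patent): 16 KiB cache blocks matching a
 * flash page, 8 DRAM channels, 16 banks per channel. */
#define BLOCK_BYTES   (16u * 1024u)
#define NUM_CHANNELS  8u
#define BANKS_PER_CH  16u

struct dram_cache_loc {
    uint32_t channel;
    uint32_t bank;
    uint64_t frame;   /* block frame within the bank */
};

/* Spread consecutive flash-page-sized blocks across channels and banks so
 * that independent requests can proceed in parallel. */
static struct dram_cache_loc map_block(uint64_t paddr)
{
    uint64_t block = paddr / BLOCK_BYTES;
    struct dram_cache_loc loc = {
        .channel = (uint32_t)(block % NUM_CHANNELS),
        .bank    = (uint32_t)((block / NUM_CHANNELS) % BANKS_PER_CH),
        .frame   = block / (NUM_CHANNELS * BANKS_PER_CH),
    };
    return loc;
}
```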

SSD controller based on read-write cache separation of STT-MRAM

Inactive · CN105550127A · Addresses inadequate data protection · Improves reliability · Digital storage · Memory systems · Capacitance · DRAM cache
The invention belongs to the technical field of computer memory equipment and particularly relates to an SSD controller based on read-write cache separation using STT-MRAM. The SSD controller comprises a control logic module, a read-write cache module, an error correction module and a read-write driver module, wherein the read-write cache module comprises an STT-MRAM and a DRAM; the STT-MRAM caches all data required to be written to the FLASH memory array as well as an LBA modification increment table; and the DRAM caches all data required to be read from the FLASH memory array, an LBA mapping table, the controller program and the user configuration. The SSD controller has the following beneficial effects: the STT-MRAM is adopted as a write cache in the SSD controller; by exploiting the high-speed read-write performance and the power-failure non-volatility of the STT-MRAM, the problem of insufficient data protection after power failure in the SSD controller is solved; and the original power-failure protection capacitor and power-failure detection circuit in the controller can be removed, so that system reliability is improved and the complexity of the system design is lowered.
Owner:CETHIK GRP
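
The read-write separation reduces to a routing decision in the controller: data waiting to be programmed into flash and the LBA modification increment table go through the non-volatile STT-MRAM, while read buffers, the LBA mapping table, the controller program, and user configuration stay in DRAM. The sketch below is an assumed simplification of that decision, not the controller's actual datapath.

```c
enum cache_medium { MEDIUM_STT_MRAM, MEDIUM_DRAM };

enum req_kind {
    REQ_HOST_WRITE,   /* data to be programmed into the FLASH array */
    REQ_LBA_DELTA,    /* LBA modification increment table update    */
    REQ_HOST_READ,    /* data read back from the FLASH array        */
    REQ_LBA_MAP,      /* LBA mapping table access                    */
    REQ_FW_OR_CONFIG  /* controller program / user configuration     */
};

/* Writes must survive power loss before they reach flash, so they are
 * buffered in non-volatile STT-MRAM; data that can be rebuilt from flash
 * is kept in cheaper DRAM. */
static enum cache_medium route_request(enum req_kind k)
{
    switch (k) {
    case REQ_HOST_WRITE:
    case REQ_LBA_DELTA:
        return MEDIUM_STT_MRAM;
    default:
        return MEDIUM_DRAM;
    }
}
```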

A new USB protocol based computer acceleration device using multi I/O channel SLC NAND and DRAM cache

This study presents a new USB protocol based computer acceleration device that uses multi-channel single-level cell NAND flash memory (SLC NAND) and a dynamic random-access memory (DRAM) cache. The device includes a main controller chip, at least one SLC NAND module, and a USB interface to connect the device to a computer. It creates and assigns a cache file in SLC NAND and DRAM for the computer's cache system, caches commonly used applications, and reads and pre-reads frequently used files. The device driver improves the USB protocol, optimizes the BOT protocol of the traditional USB interface protocol, and optimizes resource allocation for the USB transport protocol. The algorithm and framework of the device employ the following design:
1. The device virtualizes application programs so that all program files and the system environment files required by those programs are pre-stored in the device.
2. The device works in multi-I/O-channel mode: an array module integrates an array of SLC NAND chips and uses a main controller chip that can handle multiple I/O channels.
3. By monitoring long-term user habits, the data that the system will use can be estimated and pre-stored in the device (see the sketch after this entry).
4. The device intelligently compresses and automatically releases system memory in the background.
Owner:WEIJIA ZHANG
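
The "long-term user habits" step amounts to keeping per-file access statistics and pre-staging the hottest files into the device's cache; the sketch below assumes a simple frequency counter, a fixed table size, and a threshold, all of which are illustrative choices rather than details from the patent.

```c
#include <stdint.h>
#include <string.h>

#define MAX_TRACKED 256

/* Illustrative access-frequency table; a real driver would persist this. */
struct file_stat {
    char     path[128];
    uint32_t hits;
};

static struct file_stat table[MAX_TRACKED];

/* Record one access to a file so that frequently used files can later be
 * pre-stored in the device's SLC NAND / DRAM cache. */
static void record_access(const char *path)
{
    for (int i = 0; i < MAX_TRACKED; i++) {
        if (table[i].hits && strcmp(table[i].path, path) == 0) {
            table[i].hits++;
            return;
        }
    }
    for (int i = 0; i < MAX_TRACKED; i++) {
        if (table[i].hits == 0) {           /* first free slot */
            strncpy(table[i].path, path, sizeof table[i].path - 1);
            table[i].hits = 1;
            return;
        }
    }
}

/* A file qualifies for pre-staging once it has been seen often enough;
 * the threshold is an assumption. */
static int should_prestage(const struct file_stat *f)
{
    return f->hits >= 16;
}
```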

Novel USB protocol computer accelerating device based on multi-channel SLC NAND and DRAM cache memory

The invention relates to a novel USB protocol computer accelerating device based on multi-channel SLC NAND and DRAM cache memory. The device comprises a main control chip and an SLC NAND module and is provided with a USB interface connected to a computer. Cache files for the computer's cache system and for regularly used application programs are established and distributed in the SLC NAND and a DRAM, and scattered files that are frequently read and written are used as a high-speed cache. Meanwhile, a device driver improves the USB protocol: the BOT protocol of the traditional USB interface protocol is optimized, and resource distribution for the USB transport protocol is optimized. The algorithm and framework of the device adopt the following design:
1. The device virtualizes application programs so that all program files and the system environment files needed by the programs are pre-stored in the device.
2. A multi-channel mode is adopted; an array module integrates multiple SLC NAND chips and uses a multi-channel main controller (a striping sketch follows this entry).
3. By monitoring user habits over a long period, the system judges which data will be used and pre-stores it in the device.
4. System memory is intelligently compressed and automatically released in the background.
Owner:张维加
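
The multi-channel point can be illustrated by striping a transfer across the SLC NAND channels so the controller keeps all of them busy; the channel count, chunk size, and the per-channel program routine below are hypothetical names and values used only for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_CHANNELS 4u     /* assumed number of SLC NAND channels */
#define CHUNK_BYTES  4096u  /* assumed per-channel transfer unit   */

/* Hypothetical per-channel program routine provided by the controller. */
void nand_channel_program(unsigned channel, uint64_t offset,
                          const uint8_t *buf, size_t len);

/* Stripe a buffer across all channels in round-robin CHUNK_BYTES units so
 * that programming proceeds on every channel in parallel. */
static void striped_write(uint64_t offset, const uint8_t *buf, size_t len)
{
    for (size_t done = 0; done < len; done += CHUNK_BYTES) {
        size_t   n  = (len - done < CHUNK_BYTES) ? len - done : CHUNK_BYTES;
        unsigned ch = (unsigned)((done / CHUNK_BYTES) % NUM_CHANNELS);
        nand_channel_program(ch, offset + done, buf + done, n);
    }
}
```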

Methods for Caching and Reading Data to be Programmed into a Storage Unit and Apparatuses Using the Same

A method for caching and reading data to be programmed into a storage unit, performed by a processing unit, includes at least the following steps. A write command for programming at least a data page into a first address is received from a master device via an access interface. It is determined whether a block of data to be programmed has been collected, where the block contains a specified number of pages. When the block of data to be programmed has not yet been collected, the data page is stored in a DRAM (Dynamic Random Access Memory) and cache information is updated to indicate that the data page has not been programmed into the storage unit and to record the DRAM location caching the data page.
Owner:SILICON MOTION INC (TW)
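
The control flow in the abstract, buffering incoming pages in DRAM until a full block has been collected and only then programming the block, can be sketched as below; the page and block sizes and the structure fields are illustrative assumptions, not values from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGES_PER_BLOCK 64u    /* "specified number of pages" (assumed) */
#define PAGE_BYTES      4096u  /* assumed page size                     */

struct write_block {
    uint8_t  dram_buf[PAGES_PER_BLOCK][PAGE_BYTES]; /* DRAM cache area      */
    bool     pending[PAGES_PER_BLOCK];  /* cache info: page not yet in flash */
    uint32_t collected;
};

/* Handle one write command from the master device: stash the page in DRAM,
 * record that it has not been programmed yet, and report whether the whole
 * block has now been collected and is ready to program.  page_idx is the
 * page's position within the block (assumed < PAGES_PER_BLOCK). */
static bool handle_write(struct write_block *b, uint32_t page_idx,
                         const uint8_t *page_data)
{
    memcpy(b->dram_buf[page_idx], page_data, PAGE_BYTES);
    if (!b->pending[page_idx]) {
        b->pending[page_idx] = true;
        b->collected++;
    }
    return b->collected == PAGES_PER_BLOCK;
}
```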

Static random access memory (SRAM) compatible, high availability memory array and method employing synchronous dynamic random access memory (DRAM) in conjunction with a single DRAM cache and tag

A static random access memory (SRAM) compatible, high availability memory array and method employing synchronous dynamic random access memory (DRAM) in conjunction with a single DRAM cache and tag provides a memory architecture comprising low cost DRAM memory cells that is available for system accesses 100% of the time and is capable of executing refreshes frequently enough to prevent data loss. Any subarray of the memory can be written from cache or refreshed at the same time any other subarray is read or written externally.
Owner:UNITED MEMORIES +1
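
The availability claim rests on the observation that a refresh (or a write-back from the cache) occupies only one subarray at a time, so an external access that targets a different subarray, or that hits in the single DRAM cache, can proceed in the same cycle. A minimal sketch of that scheduling decision, with invented structure and field names, follows.

```c
#include <stdbool.h>
#include <stdint.h>

struct array_state {
    uint32_t busy_subarray;    /* subarray being refreshed or written back */
    uint32_t cached_subarray;  /* location held in the single DRAM cache   */
    uint32_t cached_row;
    bool     cache_valid;
};

/* An external access can be serviced this cycle if it hits the DRAM cache
 * or targets a subarray other than the one currently being refreshed or
 * written back from the cache. */
static bool access_can_proceed(const struct array_state *s,
                               uint32_t subarray, uint32_t row)
{
    if (s->cache_valid && s->cached_subarray == subarray &&
        s->cached_row == row)
        return true;                      /* served from the DRAM cache      */
    return subarray != s->busy_subarray;  /* different subarray: no conflict */
}
```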