258 results about "Memory processing" patented technology

Cache memory background preprocessing

A cache memory preprocessor prepares a cache memory for use by a processor. The processor accesses a main memory via a cache memory, which serves as a data cache for the main memory. The cache memory preprocessor consists of a command inputter, which receives a multiple-way cache memory processing command from the processor, and a command implementer, which performs background processing on multiple ways of the cache memory in order to carry out the cache memory processing command received by the command inputter.
Owner:ANALOG DEVICES INC
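The abstract above splits the preprocessor into a command inputter and a command implementer that works through all cache ways off the processor's critical path. A minimal sketch of that split, assuming a single background worker thread and treating a "command" as a callable applied to each way (both assumptions, not the patent's design):

```python
from concurrent.futures import ThreadPoolExecutor

class CacheMemoryPreprocessor:
    """Illustrative command-inputter / command-implementer split."""

    def __init__(self, num_ways: int):
        self.num_ways = num_ways
        self._worker = ThreadPoolExecutor(max_workers=1)  # background processing

    def input_command(self, apply_to_way):
        """Command inputter: accept a multiple-way cache processing command."""
        return self._worker.submit(self._implement, apply_to_way)

    def _implement(self, apply_to_way):
        """Command implementer: apply the command to every way in the background."""
        for way in range(self.num_ways):
            apply_to_way(way)

# Example: warm all ways of an 8-way cache before the processor needs them.
preprocessor = CacheMemoryPreprocessor(num_ways=8)
preprocessor.input_command(lambda way: print(f"preparing way {way}"))
```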

Big-data parallel computing method and system based on distributed columnar storage

Inactive | CN107329982A | Reduce the frequency of read and write operations | Short time | Resource allocation | Special data processing applications | Parallel computing | Large screen
The invention discloses a big-data parallel computing method and system based on distributed columnar storage. The most frequently accessed data is stored in memory-based NoSQL columnar storage, which provides cache optimization and fast data query. A distributed cluster architecture meets big-data storage demands and makes the data storage capacity dynamically scalable. Combined with a Spark-based parallel computing framework, data analysis and the parallel operations of the business layer are carried out, increasing computing speed. A chart-and-graph engine delivers real-time data visualization for large-screen rolling analysis. The method and system fully exploit the in-memory processing performance and parallel computing advantages of a distributed cloud server, overcome the bottlenecks of a single server and serial computing, avoid redundant data transmission between data nodes, improve the system's real-time response speed, and achieve fast big-data analysis.
Owner:SOUTH CHINA UNIV OF TECH
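A minimal PySpark-style sketch of the caching idea in the abstract above: keep the hottest columnar data resident in memory and run the business-layer aggregation as a parallel job. The path, table layout and column names are invented for illustration, and Parquet stands in for the memory-based NoSQL columnar store.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hot-data-cache-sketch").getOrCreate()

# Load columnar data (Parquet as a stand-in for NoSQL columnar storage).
events = spark.read.parquet("hdfs:///warehouse/events")  # hypothetical path

# Keep the most frequently accessed subset in memory so repeated
# queries avoid re-reading distributed storage.
hot = events.filter(events["is_recent"]).cache()

# Parallel aggregation for the business layer; the result would feed the
# chart engine behind the large-screen rolling analysis.
summary = hot.groupBy("region").count()
summary.show()
```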

Memory centric computing

A hybrid memory system. The system can include a processor coupled to a hybrid memory buffer (HMB), which in turn is coupled to a plurality of DRAM modules and a plurality of flash memory modules. The HMB module can include a Memory Storage Controller (MSC) module and a Near-Memory-Processing (NMP) module coupled by a SerDes (serializer/deserializer) interface. The system can utilize a hybrid (mixed-memory-type) architecture suitable for supporting low-latency DRAM devices and low-cost NAND flash devices within the same memory sub-system of an industry-standard computer system.
Owner:RAMBUS INC
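The entry above describes placing low-latency DRAM and low-cost flash behind one buffer. A toy software model of that tiering decision follows, with an invented capacity-based eviction policy rather than the patent's MSC/NMP design:

```python
class HybridMemoryBuffer:
    """Toy DRAM + flash tiering model (illustrative only)."""

    def __init__(self, dram_capacity: int):
        self.dram_capacity = dram_capacity
        self.dram = {}   # low-latency tier
        self.flash = {}  # low-cost, high-capacity tier

    def write(self, addr: int, value: bytes) -> None:
        # Keep the working set in DRAM; spill the oldest entry to flash when full.
        if addr not in self.dram and len(self.dram) >= self.dram_capacity:
            victim = next(iter(self.dram))
            self.flash[victim] = self.dram.pop(victim)
        self.dram[addr] = value

    def read(self, addr: int) -> bytes:
        # Serve from DRAM if resident, otherwise promote the data from flash.
        if addr in self.dram:
            return self.dram[addr]
        value = self.flash.pop(addr)
        self.write(addr, value)
        return value
```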

Non-aligned memory access processing method

A non-aligned memory access processing method includes: setting a translation threshold according to the target instruction set architecture; instrumenting the memory access instructions in the translator to collect information about non-aligned memory access instructions; when a translation unit's execution count exceeds the translation threshold, using the collected non-aligned access information to guide the translator in selecting suitable instructions when translating the translation unit into native code; and, for non-aligned memory accesses not caught by the instrumentation, generating the corresponding non-aligned access instruction sequence in the exception handling mechanism, inserting it at the exception handling address, and embedding it in the executed code. The method greatly reduces the number of non-aligned access exceptions raised in a binary translator and improves the translator's efficiency; it also better handles the non-aligned access exceptions that appear in applications whose code behavior varies with different input sets, effectively improving the operating efficiency of the binary translation system.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
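A hedged sketch of the threshold-driven retranslation flow in the abstract above. The counter, the threshold value and the translate_fn callback are assumptions used for illustration, not the patent's implementation:

```python
TRANSLATION_THRESHOLD = 1000  # assumed "hot translation unit" threshold

class TranslationUnit:
    def __init__(self, guest_code):
        self.guest_code = guest_code
        self.exec_count = 0
        self.unaligned_pcs = set()  # filled in by instrumented memory accesses

    def record_unaligned(self, pc):
        """Instrumentation hook: remember which accesses were non-aligned."""
        self.unaligned_pcs.add(pc)

def maybe_retranslate(unit, translate_fn):
    """Once a unit is hot, retranslate it using the non-aligned-access profile,
    so unaligned-safe host code is emitted only where it is actually needed
    instead of trapping on every unaligned access."""
    unit.exec_count += 1
    if unit.exec_count > TRANSLATION_THRESHOLD:
        return translate_fn(unit.guest_code, unaligned_at=unit.unaligned_pcs)
    return None
```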

Mixed size data processing operation

A data processing system 2 includes a processor core 4 and a memory 6. The processor core 4 includes processing circuitry 12, 14, 16, 18, 26 controlled by control signals generated by decoder circuitry 24, which decodes program instructions. The program instructions include mixed operand size instructions (either load/store instructions or arithmetic instructions) which have a first input operand of a first operand size and a second input operand of a second operand size, where the second operand size is smaller than the first operand size. The processing first converts the second operand so that it has the first operand size, then generates a third operand using as inputs the first operand of the first operand size and the second operand now converted to the first operand size.
Owner:ARM LTD
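A minimal worked example of the mixed-operand-size behaviour described above: the narrower second operand is first widened to the first operand's size, then the operation runs entirely at that size. Sign extension and addition are assumed here purely for illustration:

```python
def sign_extend(value: int, from_bits: int, to_bits: int) -> int:
    """Widen a two's-complement value from from_bits to to_bits."""
    value &= (1 << from_bits) - 1
    if value & (1 << (from_bits - 1)):       # negative in the narrow width
        value -= 1 << from_bits
    return value & ((1 << to_bits) - 1)

def mixed_size_add(op1: int, op1_bits: int, op2: int, op2_bits: int) -> int:
    widened = sign_extend(op2, op2_bits, op1_bits)   # second operand -> first size
    return (op1 + widened) & ((1 << op1_bits) - 1)   # third operand at the first size

# A 32-bit operand plus an 8-bit operand holding -1 (0xFF).
assert mixed_size_add(10, 32, 0xFF, 8) == 9
```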

Virtual machine live migration memory processing method, device and system

The invention provides a virtual machine live-migration memory processing method, device and system. The method comprises the following steps: compressing the current to-be-transmitted first memory page block of a virtual machine on the source physical machine with a first compression algorithm, and storing compression information of the first memory page block; judging, from the compression information of the first memory page block and the compression information of the N-1 memory page blocks transmitted before it, whether the N memory page blocks including the first memory page block meet a preset compression performance after compression; if they do not, the M memory page blocks following the first memory page block are not compressed and are transmitted directly to the target physical machine, where N is greater than 1 and M is greater than 1. The method, device and system improve virtual machine live-migration performance, reduce the CPU (central processing unit) overhead of the source physical machine and save processing resources.
Owner:HUAWEI TECH CO LTD
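A sketch of the adaptive-compression decision the abstract describes: compress and record compression information per page block, judge the last N blocks, and fall back to direct transmission for the next M blocks when compression is not paying off. zlib stands in for "the first compression algorithm", and N, M and the savings threshold are assumed values:

```python
import zlib
from collections import deque

N, M = 8, 16                  # window of judged blocks / blocks to send uncompressed
MIN_SAVINGS = 0.30            # required average space saving over the window

recent_ratios = deque(maxlen=N)
skip_remaining = 0            # when > 0, transmit blocks without compressing

def prepare_block(block: bytes):
    """Return (payload, compressed_flag) for one memory page block."""
    global skip_remaining
    if skip_remaining > 0:
        skip_remaining -= 1
        return block, False                      # transmit directly
    compressed = zlib.compress(block)
    recent_ratios.append(1 - len(compressed) / len(block))
    # Judge the N most recent blocks; if compression is not worthwhile,
    # skip it for the next M blocks to save source-host CPU.
    if len(recent_ratios) == N and sum(recent_ratios) / N < MIN_SAVINGS:
        skip_remaining = M
        recent_ratios.clear()
    return compressed, True
```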

In-memory calculation method based on coarse-grained reconfigurable array

The invention relates to an in-memory processing system based on a coarse-grained reconfigurable array (CGRA). The system comprises a central processing unit, a main memory, a reconfigurable array and a global instruction register. A 3D-stacking arrangement is adopted: each main memory block corresponds to a logic layer, and the logic layers are connected directly to the memory dies through TSVs (through-silicon vias). Each processing unit of the reconfigurable array is configured as either a storage unit or an arithmetic logic unit; a storage unit exchanges data with the memory, while an arithmetic logic unit performs calculations on register data, data from nearby storage units and the configuration information. The system offers clear performance and broad applicability advantages: its architecture can be functionally simulated on a simulation platform, applied to specific data-intensive algorithms, adapted to more algorithm applications, and is highly flexible. The global instruction memories of the reconfigurable array are designed asymmetrically, which greatly improves the transmission efficiency of configuration data inside the reconfigurable array.
Owner:SHANGHAI JIAO TONG UNIV
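A toy model of the configurable processing element described above, in which each element acts either as a storage unit exchanging data with memory or as an arithmetic logic unit combining register data with a neighbour's data. The configuration format and the two operations are invented for illustration:

```python
class ProcessingElement:
    """Toy CGRA processing element: either a storage unit or an ALU."""

    def __init__(self, role: str):
        assert role in ("storage", "alu")
        self.role = role
        self.local = 0                       # register / buffered value

    def load(self, memory, addr):
        """Storage-unit role: exchange data with the stacked memory."""
        assert self.role == "storage"
        self.local = memory[addr]

    def compute(self, neighbor, config):
        """ALU role: operate on register data and a nearby storage unit's data."""
        assert self.role == "alu"
        if config == "add":
            self.local += neighbor.local
        elif config == "mul":
            self.local *= neighbor.local
        return self.local

# A 1x2 slice of the array: one storage unit feeding one ALU.
memory = {0: 7}
store, alu = ProcessingElement("storage"), ProcessingElement("alu")
store.load(memory, 0)
alu.local = 3
print(alu.compute(store, "add"))  # 10
```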

Electronic document incremental storage processing method

The present invention relates to a method for generating an incremental document after an electronic document has been edited. The method comprises the following steps: reading the data of a reference document into memory and dividing it into blocks; calculating and associating the index value corresponding to each divided reference data block; sequentially reading the edited data blocks into memory and calculating their corresponding index values; and comparing the index value of each reference data block with the index value of each edited data block: if the two index values match, writing the matching position in the reference document and a match marker into the incremental document, and otherwise writing the unmatched segment of the edited document, its length and an unmatched marker into the incremental document. The invention also relates to a method for recovering the edited document from the incremental document. The invention establishes a mapping between the documents before and after editing and generates the incremental document from the recorded mapping. Only the incremental document needs to be processed in place of the edited document for data storage, archiving, backup and so on, further reducing the storage burden or network transmission burden.
Owner:BEIJING 21VIANET DATA CENT
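A hedged sketch of the block-index matching that the abstract describes: index the reference document's blocks, then emit either a match record or the raw unmatched data for each edited block. Fixed-size blocks and MD5 digests are assumptions here; production delta tools typically add rolling hashes to handle unaligned edits:

```python
import hashlib

BLOCK = 4096

def make_delta(reference: bytes, edited: bytes):
    # Index every reference block by its digest and remember its position.
    index = {}
    for pos in range(0, len(reference), BLOCK):
        digest = hashlib.md5(reference[pos:pos + BLOCK]).hexdigest()
        index.setdefault(digest, pos)

    delta = []
    for pos in range(0, len(edited), BLOCK):
        block = edited[pos:pos + BLOCK]
        digest = hashlib.md5(block).hexdigest()
        if digest in index:
            delta.append(("MATCH", index[digest]))     # matching position + marker
        else:
            delta.append(("RAW", len(block), block))   # unmatched segment + length + marker
    return delta

def apply_delta(reference: bytes, delta) -> bytes:
    # Recover the edited document from the reference document and the delta.
    out = bytearray()
    for record in delta:
        if record[0] == "MATCH":
            out += reference[record[1]:record[1] + BLOCK]
        else:
            out += record[2]
    return bytes(out)
```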

Memory processing method and apparatus, computer apparatus and computer readable storage medium

Embodiments of the invention disclose a memory processing method and apparatus, a computer apparatus and a computer-readable storage medium, which relate to the technical field of computers and address the difficulty, in the prior art, of making full use of defragmentation to expand the available memory space. The method comprises: performing multiple recycling attempts on multiple physical pages in a terminal's memory; and, when the memory capacity released by the recycling attempts exceeds a release threshold, defragmenting the memory. One recycling attempt on the physical pages comprises: judging whether the activity degree of a physical page is higher than a recycling standard, where the activity degree marks the page's activity level and is positively correlated with it; if it is, reducing the page's activity degree, and otherwise recycling the physical page.
Owner:MEIZU TECH CO LTD
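A minimal sketch of the recycle-then-defragment flow above: each recycling attempt ages active pages and reclaims inactive ones, and defragmentation only runs once enough memory has been released. The page representation, the thresholds and the defragment placeholder are assumptions, not the patent's implementation:

```python
PAGE_SIZE = 4096
RECYCLE_STANDARD = 2                 # pages at or below this activity level are reclaimed
RELEASE_THRESHOLD = 64 * PAGE_SIZE   # only defragment once this much is freed

def recycling_attempt(pages) -> int:
    """One pass: lower the activity of busy pages, recycle quiet ones; returns bytes freed."""
    released = 0
    for page in pages:
        if page["activity"] > RECYCLE_STANDARD:
            page["activity"] -= 1        # still active: just demote it
        elif not page["free"]:
            page["free"] = True          # inactive: recycle the physical page
            released += PAGE_SIZE
    return released

def defragment(pages) -> None:
    # Placeholder: gather the freed physical pages into contiguous runs.
    pages.sort(key=lambda p: not p["free"])

def process_memory(pages, attempts: int = 4) -> None:
    released = sum(recycling_attempt(pages) for _ in range(attempts))
    if released > RELEASE_THRESHOLD:
        defragment(pages)
```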