
Quick direct memory access (DMA) ping-pong caching method

A fast ping-pong caching technique, applied in the field of information processing, that reduces the amount of data moved by DMA

Publication Date: 2010-02-10 (status: Inactive)
ZTE CORP

AI Technical Summary

Problems solved by the technology

For example, in video compression, the existing ping-pong buffering method starts a DMA transfer to move the 48 lines of reference data required for the next macroblock line into on-chip BUFFER_B while motion estimation is being performed on the current macroblock line, as shown in Figure 1; the current macroblock line uses BUFFER_A, which was filled by DMA while the previous macroblock line was being processed. From the data structure of the reference frame used for motion estimation in video coding, it can be seen that 2/3 of the 32 rows of pixels in two adjacent DMA transfers are exactly the same, so the existing DMA ping-pong processing method moves a large amount of redundant data.
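
To put that redundancy in concrete terms using the figures above: if two adjacent DMA transfers share 2/3 of their 32 rows of pixels, only about 32 × (1 − 2/3) ≈ 11 rows of each transfer are genuinely new data, so roughly two thirds of every conventional ping-pong transfer re-moves rows that are already in on-chip memory.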




Embodiment Construction

[0014] The method of the present invention will be described in further detail below in conjunction with the accompanying drawings.

[0015] With reference to Figures 2 and 3, and for the sake of simplicity, assume that the CPU processes data blocks of M bytes at a time, that the destination cache starts at address Addr and is N bytes in size, and that when the ping-pong cache is set up the amount of identical data between two adjacent DMA-moved data blocks is B bytes, with the ratio a = B / M. The number of DMA transfers needed to cover the N-byte destination cache once is then

[0016] L=floor((N-M) / (M(1-a)))+1, where floor() represents rounding down.
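
As a rough numeric check of this formula, here is a minimal C sketch; the function name and the buffer sizes in the example are hypothetical values chosen for illustration, not figures from the patent.

    /* Compute L, the number of DMA transfers needed to cover an N-byte
       destination cache, when each transfer handles an M-byte block and
       adjacent blocks share B identical bytes (a = B / M). */
    #include <stdio.h>
    #include <math.h>

    static unsigned transfers_to_cover(unsigned N, unsigned M, unsigned B)
    {
        double a = (double)B / (double)M;   /* shared fraction a = B / M */
        return (unsigned)floor((double)(N - M) / ((double)M * (1.0 - a))) + 1;
    }

    int main(void)
    {
        /* Hypothetical example: 3072-byte destination cache, 1024-byte blocks,
           roughly 2/3 of each block identical to the previous one (B = 682). */
        printf("L = %u\n", transfers_to_cover(3072, 1024, 682));  /* prints L = 6 */
        return 0;
    }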

[0017] The present invention is accomplished according to the following steps:

[0018] The first step: start a DMA transfer of an M-byte data block; the destination address is Addr, which is the first address of the destination cache, and the address interval moved to the destination c...



Abstract

The invention relates to a quick direct memory access (DMA) ping-pong caching method, used for moving data blocks in which part of the data is the same as in the adjacent data block. The method comprises the following steps: the DMA first moves into the target cache a data block of the byte quantity that the CPU can process, and then sequentially moves further data blocks into the target cache until the target cache is completely covered and the round of data moving is completed, the byte quantity of each sequentially moved data block being equal to that of the portion of data that differs between adjacent data blocks among the data blocks to be moved. The method reduces the redundant handling of the identical portion of adjacent data blocks, thereby reducing the quantity of data moved by the DMA each time and decreasing the waiting time of the CPU.
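
Below is a minimal C sketch of the caching scheme the abstract describes: the first transfer moves a full CPU-processable block, and every later transfer moves only the bytes that differ from the previous block until the destination cache is covered. The dma_copy() routine, the linear source addressing and all names are assumptions for illustration only; a real implementation would program the platform's DMA controller and overlap the transfers with CPU processing.

    #include <stddef.h>
    #include <string.h>

    /* Placeholder for an asynchronous DMA-start call on the target platform. */
    static void dma_copy(void *dst, const void *src, size_t len)
    {
        memcpy(dst, src, len);
    }

    /* Fill the N-byte destination cache at dst from src.
       M = bytes the CPU processes per block, B = bytes shared between two
       adjacent source blocks (assumes B < M and N >= M). */
    static void pingpong_fill(unsigned char *dst, const unsigned char *src,
                              size_t N, size_t M, size_t B)
    {
        size_t step = M - B;          /* new (non-redundant) bytes per transfer */
        size_t filled;

        dma_copy(dst, src, M);        /* first transfer: a full M-byte block    */
        for (filled = M; filled < N; filled += step) {
            size_t len = (N - filled < step) ? (N - filled) : step;
            dma_copy(dst + filled, src + filled, len);  /* only the new bytes   */
        }
    }

Compared with the conventional scheme, each transfer after the first moves M − B bytes instead of M, which is where the claimed reduction in DMA traffic and in CPU waiting time comes from.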

Description

Technical Field

[0001] The invention relates to the technical field of information processing, and in particular to a fast DMA (Direct Memory Access) ping-pong caching method.

Background

[0002] In existing mainstream chip processors (such as DSPs, FPGAs and ASICs), on-chip memory space is limited, and the difference in access speed between on-chip and off-chip memory is considerable. Therefore, when processing data-intensive signals (such as audio and video), the amount of data is so large that the relatively slow off-chip memory must be used; moreover, having the CPU (central processing unit) access the off-chip memory directly incurs losses in data reading and writing, which leads to low processing efficiency. DMA is a component that can work independently of the CPU, and in existing mainstream processors it is an indispensable component. [0003] In order to reduce the losses in data reading and writing, a commonly used proc...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F13/28
Inventor: 陈晨航
Owner: ZTE CORP