
Low-power consumption high-performance repeating data deleting system

A high-performance data deduplication technology, applicable to redundancy-based error detection in computing, transmission systems, electrical digital data processing, etc. Advantages: low cost, low implementation cost, good versatility.

Status: Inactive · Publication Date: 2011-08-17
NANKAI UNIV
Cites: 2 · Cited by: 26

AI Technical Summary

Benefits of technology

The method described in this patent achieves higher deduplication throughput than conventional approaches while also reducing cost and power consumption. Instead of relying on expensive specialized acceleration hardware, it uses a commodity CPU/GPU combination: the special coprocessor instructions of the VIA processor accelerate digest calculation and data encryption, while the GPU parallelizes data compression and the Bloom filter computation. Together with the pipelined organization of the deduplication process, these measures improve performance without sacrificing the low power budget, improving the overall effectiveness of the proposed solution.

Problems solved by technology

The technological problem addressed by this patent is the efficiency of data deduplication as storage capacity and the associated processing requirements grow. A deduplication system must remove redundant data without deleting unique data, while keeping the retained data properly organized and retrievable. In addition, operations such as fingerprint (hash) calculation and data compression demand significant computational resources, which makes them challenging to implement efficiently.



Examples


Embodiment 1

[0061] Referring to Figure 3, which shows the data batch collection algorithm of the present invention. The specific principles and operation steps are as follows:

[0062] Step S301: generate the corresponding metadata for a data block received from the network.

[0063] Step S302: attach the metadata generated in step S301 to the corresponding global metadata linked list for use by subsequent threads.

[0064] Step S303: obtain the size of the data block and store it at the corresponding location in the global data buffer.

[0065] Step S304: put the data block into the corresponding position of the global data buffer.

[0066] Batch encapsulation of the data stream is essentially data preprocessing that turns the traditional serial deduplication process into a pipeline mechanism. At the same time, encapsulating the data stream in batches and storing data and metadata separately prepares the data for the subsequent GPU parallel computation.
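The steps above can be pictured with a minimal host-side sketch. The names (BlockMeta, Batch, batch_collect) and the layout in which each block's size is written immediately before its payload in the global buffer are assumptions made for illustration, consistent with steps S303/S304 and the GPU stages below, not identifiers taken from the patent.

```cuda
// Minimal host-side sketch of the batch-collection steps S301–S304.
// All structure and function names are illustrative assumptions.
#include <cstdint>
#include <cstdlib>
#include <cstring>

struct BlockMeta {                 // S301: metadata for one block from the network
    uint32_t   offset;             // where the block starts in the global buffer
    uint32_t   size;               // payload length in bytes
    BlockMeta *next;               // S302: link into the global metadata list
};

struct Batch {                     // one batch later handed to the GPU stage
    char      *data;               // global data buffer (S303/S304)
    uint32_t   used;               // bytes filled so far
    uint32_t   capacity;
    BlockMeta *meta_head;          // global metadata linked list
};

// Append one data block received from the network into the current batch.
bool batch_collect(Batch *b, const char *block, uint32_t size) {
    if (b->used + sizeof(uint32_t) + size > b->capacity)
        return false;                              // batch full: hand it to the next stage

    BlockMeta *m = (BlockMeta *)malloc(sizeof(BlockMeta));   // S301: build metadata
    m->offset = b->used;
    m->size   = size;
    m->next   = b->meta_head;                                 // S302: prepend to the list
    b->meta_head = m;

    // S303: store the block size at the block's position in the global buffer
    memcpy(b->data + b->used, &size, sizeof(size));
    // S304: copy the block payload right after its size field
    memcpy(b->data + b->used + sizeof(size), block, size);
    b->used += sizeof(size) + size;
    return true;
}
```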

Embodiment 2

[0068] Referring to Figure 4, which shows the flow of the GPU compression algorithm of the present invention. The specific principles and operation steps are as follows:

[0069] Step S401: obtain the ID of the current thread.

[0070] Step S402: because the data was pre-organized during batch collection, the position of the data to be processed by this thread within the overall data block can be obtained from the thread ID acquired in the previous step.

[0071] Step S403: obtain the size of the data block to be processed by this thread from the starting position of the data block, which is the pointer position obtained in the previous step.

[0072] Step S404: compress the data block using a certain compression algorithm.
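A hedged CUDA sketch of such a kernel follows. The per-block offsets table, the fixed output stride, and the trivial run-length encoder standing in for "a certain compression algorithm" are all assumptions made for illustration, not details taken from the patent.

```cuda
// Illustrative CUDA kernel for the compression stage S401–S404.
#include <cstdint>
#include <cstring>

// Stand-in compressor: a trivial run-length encoder so the sketch is
// self-contained. Any real per-block compressor could be substituted.
__device__ uint32_t compress_block(const char *in, uint32_t size, char *out) {
    uint32_t o = 0;
    for (uint32_t i = 0; i < size; ) {
        char     c   = in[i];
        uint32_t run = 1;
        while (i + run < size && in[i + run] == c && run < 255) ++run;
        out[o++] = (char)run;                 // run length
        out[o++] = c;                         // repeated byte
        i += run;
    }
    return o;                                 // compressed length
}

__global__ void gpu_compress(const char *batch,        // global data buffer
                             const uint32_t *offsets,  // per-block start offsets
                             char *out,                 // per-block output area
                             uint32_t out_stride,       // bytes reserved per block (must cover worst case)
                             uint32_t n_blocks) {
    // S401: obtain the ID of the current thread
    uint32_t tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n_blocks) return;

    // S402: data is pre-organized, so this thread's block is located by its ID
    const char *block = batch + offsets[tid];

    // S403: the block size sits at the start of the block
    uint32_t size;
    memcpy(&size, block, sizeof(size));

    // S404: compress the payload that follows the size field
    compress_block(block + sizeof(size), size, out + (size_t)tid * out_stride);
}
```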

Embodiment 3

[0074] Referring to Figure 5, which shows the process of performing the Bloom filter computation on the GPU in the present invention. The specific principles and operation steps are as follows:

[0075] Step S501: obtain the ID of the current thread.

[0076] Step S502: obtain the data to be processed by this thread. Because the data is pre-organized and has a fixed length (160 bits), the starting address of this thread's data within the overall data block can be obtained from the current thread ID.

[0077] Step S503: perform the Bloom filter calculation on this block of data using a certain algorithm.
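The same addressing pattern can be sketched for the Bloom filter kernel. Treating the fingerprints as 160-bit digests packed back to back and deriving the bit positions from 32-bit slices of each digest are assumptions made for illustration, since the patent only states that "a certain algorithm" is used.

```cuda
// Illustrative CUDA kernel for the Bloom filter stage S501–S503.
#include <cstdint>
#include <cstring>

#define FP_BYTES   20     // fixed 160-bit fingerprint length (S502)
#define NUM_HASHES 4      // bit positions set per fingerprint (assumed)

__global__ void gpu_bloom_insert(const uint8_t *fingerprints, // packed digests
                                 unsigned int *filter_bits,   // Bloom filter bit array
                                 unsigned int filter_words,   // 32-bit words in the array
                                 unsigned int n_fingerprints) {
    // S501: obtain the ID of the current thread
    unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n_fingerprints) return;

    // S502: fixed-length data, so this thread's digest starts at tid * FP_BYTES
    const uint8_t *fp = fingerprints + (size_t)tid * FP_BYTES;

    // S503: derive NUM_HASHES bit positions from 32-bit slices of the digest
    // and set them in the shared Bloom filter bit array
    for (int h = 0; h < NUM_HASHES; ++h) {
        unsigned int word;
        memcpy(&word, fp + h * 4, sizeof(word));
        unsigned int bit = word % (filter_words * 32u);
        atomicOr(&filter_bits[bit / 32u], 1u << (bit % 32u));
    }
}
```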



Abstract

The invention provides a low-power consumption high-performance repeating data deleting (data deduplication) system, comprising a production center, a computation center and a backup center. The production center copies user request data and sends it to the computation center; the computation center deletes the duplicate data and sends the non-duplicate data to the backup center; and the backup center stores the received data. The computation center uses a VIA processor to reduce the operating power consumption of the system. The performance of the system is improved via the following policies: (1) the special assembly instructions of the coprocessor module provided by the VIA processor are used for digest calculation and data encryption, improving system performance through hardware; (2) the computation center uses a Graphics Processing Unit (GPU) to accelerate the data compression procedure and the Bloom filter computation in the deduplication system, and the concurrent processing capability of the GPU improves the operating efficiency of the system; and (3) the system performance is further improved by using two pipeline mechanisms.
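As a rough illustration of the pipeline mechanisms mentioned in point (3), the following hedged sketch shows how a batch-collection stage might hand finished batches to a GPU-processing stage through a bounded queue so the two stages overlap in time. BatchQueue, collect_stage, and gpu_stage are hypothetical names introduced for this sketch, not part of the patent.

```cuda
// Hedged host-side sketch of one pipeline stage boundary.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

struct Batch { };                  // placeholder for a batch of blocks plus metadata (S301–S304)

class BatchQueue {                 // bounded hand-off buffer between two pipeline stages
public:
    explicit BatchQueue(size_t cap) : cap_(cap) {}
    void push(Batch *b) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(b);
        not_empty_.notify_one();
    }
    Batch *pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        Batch *b = q_.front();
        q_.pop();
        not_full_.notify_one();
        return b;
    }
private:
    std::queue<Batch *> q_;
    size_t cap_;
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
};

// Producer stage: builds batches (in the real system, steps S301–S304).
void collect_stage(BatchQueue &out, int n_batches) {
    for (int i = 0; i < n_batches; ++i)
        out.push(new Batch{});
    out.push(nullptr);                 // end-of-stream marker
}

// Consumer stage: would run digest/compression/Bloom filter kernels per batch.
void gpu_stage(BatchQueue &in) {
    while (Batch *b = in.pop())
        delete b;                      // placeholder for GPU processing
}

int main() {
    BatchQueue handoff(4);             // keep a few batches in flight between the stages
    std::thread producer(collect_stage, std::ref(handoff), 16);
    std::thread consumer(gpu_stage, std::ref(handoff));
    producer.join();
    consumer.join();
    return 0;
}
```

The bounded queue keeps the collection stage a few batches ahead of the GPU stage, which is the essence of turning the serial deduplication process into a pipeline.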



Application Information

Owner: NANKAI UNIV