
Cache data access method and system based on TCMU virtual block device

A virtual block device and cache data technology, applied in the field of memory systems, electrical digital data processing, instruments, etc.; it addresses the problem of low TCMU read and write performance, improving read and write performance and preventing cache pollution.

Inactive Publication Date: 2017-09-05
深圳市联云港科技有限公司

AI Technical Summary

Problems solved by technology

[0005] The technical problem to be solved by the present invention is to provide a cache data access method and system based on a TCMU virtual block device, so as to solve the problem of low TCMU read and write performance caused by existing cache methods.



Examples


Embodiment 1

[0086] This embodiment provides a cache data access method based on a TCMU virtual block device, in which the access is a write access, as shown in Figure 4 and Figure 5. The method includes the following steps (a code sketch of the write path is given after the steps):

[0087] S10. When a write request carrying (offset, length, data) is received, distribute the write request to its corresponding processing pool according to the length, where the processing pools are in one-to-one correspondence with 4K, 8K, 16K, 32K, 64K and 128K;

[0088] S20. Search for the offset in the cache hash table corresponding to the processing pool; if it is not found, execute step S30, and if it is found, execute step S40;

[0089] S30. Hash the offset into the cache hash table as the key, record its access count as 1, and execute step S80; at this point the cache hash table does not cache the data;

[0090] S40. Add 1 to the access count corresponding to the offset, and cache the data in the cache hash table;

[0091] S50. Send offset and count to the cache linked list corresponding to the processin...
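The following is a minimal C sketch of the write path in steps S10 to S50, assuming one chained hash table per processing pool and assuming that a block's data is cached only once its offset has been written more than once (the "secondary confirmation" described in the abstract). The identifiers (pool_for_length, cache_entry, handle_write), the NBUCKETS table size and the hard-coded pool sizes are illustrative assumptions rather than details taken from the patent; the cache linked list of step S50 onward is only noted in comments.

/*
 * Illustrative sketch of the Embodiment 1 write path (S10-S50).
 * Sizes, names and the bypass behaviour for unsupported lengths
 * are assumptions made for this example only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define NBUCKETS 1024                       /* assumed buckets per pool          */
#define NPOOLS   6                          /* pools for 4K..128K requests       */

struct cache_entry {
    uint64_t offset;                        /* key: request offset               */
    unsigned count;                         /* accesses seen for this offset     */
    void    *data;                          /* cached payload, NULL until cached */
    size_t   len;
    struct cache_entry *next;               /* collision chaining                */
};

struct pool {
    size_t block_size;                      /* 4K, 8K, 16K, 32K, 64K or 128K     */
    struct cache_entry *buckets[NBUCKETS];  /* the pool's cache hash table       */
};

static struct pool pools[NPOOLS] = {
    { 4096 }, { 8192 }, { 16384 }, { 32768 }, { 65536 }, { 131072 }
};

/* S10: pick the pool whose block size matches the request length. */
static struct pool *pool_for_length(size_t length)
{
    for (int i = 0; i < NPOOLS; i++)
        if (pools[i].block_size == length)
            return &pools[i];
    return NULL;                            /* other lengths bypass the cache (assumed) */
}

/* S20-S50: count the access and cache the data only on a repeated write. */
static void handle_write(uint64_t offset, size_t length, const void *data)
{
    struct pool *p = pool_for_length(length);
    if (!p)
        return;                             /* would go straight to the backend  */

    struct cache_entry **head = &p->buckets[(offset / p->block_size) % NBUCKETS];
    struct cache_entry *e;
    for (e = *head; e; e = e->next)         /* S20: search the cache hash table  */
        if (e->offset == offset)
            break;

    if (!e) {                               /* S30: first access, count=1, no data cached */
        e = calloc(1, sizeof(*e));
        e->offset = offset;
        e->count  = 1;
        e->next   = *head;
        *head     = e;
        return;
    }

    e->count++;                             /* S40: repeated access, cache the data */
    free(e->data);
    e->data = malloc(length);
    memcpy(e->data, data, length);
    e->len  = length;
    /* S50: (offset, count) would now be sent to the pool's cache linked list. */
}

int main(void)
{
    char buf[4096] = "hello";
    struct cache_entry *e;

    handle_write(8192, sizeof(buf), buf);   /* first write: counted only */
    handle_write(8192, sizeof(buf), buf);   /* second write: data cached */

    e = pools[0].buckets[(8192 / 4096) % NBUCKETS];
    printf("count=%u cached=%s\n", e->count, e->data ? "yes" : "no");
    return 0;
}

Keeping the first write out of the cache is what filters large numbers of one-off writes and avoids the cache pollution mentioned in the abstract.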

Embodiment 2

[0096] This embodiment provides a cache data access method based on a TCMU virtual block device, in which the access is a read access, as shown in Figure 4 and Figure 6. The method includes the following steps (a code sketch of the read path is given after the steps):

[0097] M10. When a read request carrying (offset, length, buffer) is received, distribute the read request to its corresponding processing pool according to the length, where the processing pools are in one-to-one correspondence with 4K, 8K, 16K, 32K, 64K and 128K;

[0098] M20. Search for the offset in the cache hash table corresponding to the processing pool; if it is not found, execute step M30, and if it is found, execute step M40;

[0099] M30. Read the data from the back-end storage device over the network, hash the offset into the cache hash table as the key, and record the access count for that position as 1; at this point the cache hash table does not cache the data;

[0100] M40. Compare the access count with 1; if it equals 1, step M50 is not executed; if it is gre...
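A corresponding C sketch of the read path in steps M10 to M40 is given below. The excerpt is truncated after M40, so the promotion rule used here, namely that a block read for the second time is fetched from the back end once more and then kept in the cache hash table, is an assumption consistent with the abstract's secondary confirmation; backend_read() is a hypothetical stand-in for the networked back-end storage device, and the structure definitions repeat those of the previous sketch so the example is self-contained.

/*
 * Illustrative sketch of the Embodiment 2 read path (M10-M40).
 * backend_read() and the promotion-on-second-read rule are assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define NBUCKETS 1024
#define NPOOLS   6

struct cache_entry {
    uint64_t offset;
    unsigned count;                         /* reads seen for this offset         */
    void    *data;                          /* NULL while the block is still cold */
    size_t   len;
    struct cache_entry *next;
};

struct pool {
    size_t block_size;
    struct cache_entry *buckets[NBUCKETS];
};

static struct pool pools[NPOOLS] = {
    { 4096 }, { 8192 }, { 16384 }, { 32768 }, { 65536 }, { 131072 }
};

/* Hypothetical stand-in for reading a block from the networked back end. */
static void backend_read(uint64_t offset, void *buf, size_t len)
{
    memset(buf, (int)(offset & 0xff), len);
}

static struct pool *pool_for_length(size_t length)
{
    for (int i = 0; i < NPOOLS; i++)
        if (pools[i].block_size == length)
            return &pools[i];
    return NULL;
}

/* M10-M40: serve from the cache only once the offset has proven hot. */
static void handle_read(uint64_t offset, size_t length, void *buffer)
{
    struct pool *p = pool_for_length(length);
    if (!p) {
        backend_read(offset, buffer, length);
        return;
    }

    struct cache_entry **head = &p->buckets[(offset / p->block_size) % NBUCKETS];
    struct cache_entry *e;
    for (e = *head; e; e = e->next)         /* M20: search the cache hash table */
        if (e->offset == offset)
            break;

    if (!e) {                               /* M30: miss - read the back end, count=1, no caching */
        backend_read(offset, buffer, length);
        e = calloc(1, sizeof(*e));
        e->offset = offset;
        e->count  = 1;
        e->next   = *head;
        *head     = e;
        return;
    }

    e->count++;                             /* M40: repeated read */
    if (e->data) {                          /* already promoted: serve from the cache */
        memcpy(buffer, e->data, length);
        return;
    }
    backend_read(offset, buffer, length);   /* second read: promote into the cache (assumed) */
    e->data = malloc(length);
    memcpy(e->data, buffer, length);
    e->len  = length;
}

int main(void)
{
    char buf[4096];
    handle_read(4096, sizeof(buf), buf);    /* miss: counted, not cached */
    handle_read(4096, sizeof(buf), buf);    /* second read: promoted     */
    handle_read(4096, sizeof(buf), buf);    /* served from the cache     */
    printf("count=%u\n", pools[0].buckets[(4096 / 4096) % NBUCKETS]->count);
    return 0;
}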



Abstract

The invention discloses a cache data access method and system based on a TCMU virtual block device. The method includes the following steps: an access request from an application program is received; the processing pool to which the data in the access request belongs is determined according to the length, and the cache hash table corresponding to that processing pool is searched for the access count of the position; when the access request is a write request and the access count is greater than or equal to 1, the access count is increased by 1, the data corresponding to the write request is stored in the cache hash table, and the cache linked list corresponding to the processing pool is searched for the position; when the position is not found, the position and its access count are inserted at the head of the cache linked list. Because the method follows the IO read-write characteristics of TCMU, read and write performance is improved; by means of the hash table with secondary confirmation and the cache linked list with access-count-based elimination, hotspot data accessed many times recently can be effectively cached, large numbers of one-off data accesses can be filtered out, and cache pollution is avoided.
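The per-pool cache linked list described above, holding (offset, access count) pairs with new positions inserted at the head and elimination driven by the access count, might be maintained along the lines of the following C sketch. The list capacity, the cache_list_update name and the evict-the-smallest-count rule are assumptions made for illustration; the excerpt does not spell out the exact elimination rule.

/*
 * Illustrative sketch of one processing pool's cache linked list:
 * new positions go to the head, and when the list is full the node
 * with the smallest access count is eliminated (assumed rule).
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define LIST_CAPACITY 4                     /* small so that elimination is visible */

struct list_node {
    uint64_t offset;                        /* position in the virtual block device     */
    unsigned count;                         /* access count copied from the hash table  */
    struct list_node *next;
};

static struct list_node *cache_list;        /* head of one pool's cache linked list */
static unsigned list_size;

/* Insert or refresh (offset, count); eliminate the coldest node when full. */
static void cache_list_update(uint64_t offset, unsigned count)
{
    for (struct list_node *n = cache_list; n; n = n->next)
        if (n->offset == offset) {          /* position already tracked: refresh its count */
            n->count = count;
            return;
        }

    struct list_node *n = malloc(sizeof(*n));
    n->offset = offset;                     /* position not found: insert at the head */
    n->count  = count;
    n->next   = cache_list;
    cache_list = n;

    if (++list_size <= LIST_CAPACITY)
        return;

    /* Eliminate the node with the smallest access count (assumed rule); a real
     * implementation would also drop its cached data from the hash table. */
    struct list_node **victim = &cache_list;
    for (struct list_node **pp = &cache_list; *pp; pp = &(*pp)->next)
        if ((*pp)->count < (*victim)->count)
            victim = pp;
    struct list_node *gone = *victim;
    *victim = gone->next;
    free(gone);
    list_size--;
}

int main(void)
{
    cache_list_update(0,     3);
    cache_list_update(4096,  1);
    cache_list_update(8192,  5);
    cache_list_update(12288, 2);
    cache_list_update(16384, 4);            /* forces elimination of offset 4096 */

    for (struct list_node *n = cache_list; n; n = n->next)
        printf("offset=%llu count=%u\n", (unsigned long long)n->offset, n->count);
    return 0;
}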

Description

Technical field

[0001] The present invention relates to the technical field of intelligent terminals, and in particular to a cache data access method and system based on a TCMU virtual block device.

Background technique

[0002] Today, with the vigorous development of big data and the maturing of cloud computing, virtualization, networking and other technologies, storage, as the cornerstone of the entire ecosystem, is playing an increasingly important role. In cloud computing, storage is mostly presented in distributed form. Among the many attempts to generate virtual block devices, TCMU, incorporated into Linux kernel 3.18, provides a relatively satisfactory way of generating virtual block devices.

[0003] As a kernel module of the Linux operating system, TCMU requires its requests to be processed in time, otherwise the Linux kernel may crash. How to design a high-speed and effective cache system for TCMU has therefore become a key issue in performance optimi...


Application Information

IPC(8): G06F12/0875, G06F12/0877
CPC: G06F12/0875, G06F12/0877
Inventor 文畅
Owner 深圳市联云港科技有限公司