
Caching method and device

A caching and buffer-area technology, applied in the field of network communication, that addresses the problems of queue congestion, wasted cache usage, and ineffective handling of QoS scheduling failure, with the effect of guaranteeing QoS service quality and a good user experience.

Active Publication Date: 2022-06-24
NEW H3C BIG DATA TECH CO LTD

AI Technical Summary

Problems solved by technology

However, the first solution has drawbacks: cache reservation can support only a small number of queues, cannot fix abnormal WRED thresholds, and cannot effectively solve QoS scheduling failure.
This solution can solve QoS scheduling failure, but it wastes a great deal of cache. In live-network applications the cache is shared by all queues, yet the queues are rarely all congested at once; if only half of the queues are congested, the cache resources reserved for the other half go unused. Moreover, if the total shared cache is not large, the average cache allocated to each queue may not reach the minimum required for QoS scheduling, so even after the cache is divided evenly among all queues, QoS scheduling failure still cannot be solved.


Examples


Embodiment 1

[0028] Specifically, as shown in Figure 3, the flow of this embodiment of the present disclosure is as follows:

[0029] A caching method, comprising the following steps:

[0030] S1. Divide the shared cache into multiple cache areas, each of which corresponds to a different priority type;

The multiple buffer areas are used to buffer and forward the corresponding message queues, and there may be two or more of them. For example, with three buffer areas, the first buffer area buffers the no-packet-loss queue, the second buffer area buffers the high-priority queue, and the third buffer area buffers the low-priority queue. Together, the first, second, and third buffer areas occupy at most 100% of the shared cache.

[0032] S2. Send each message queue to the corresponding buffer area according to its sending priority. The sending priority may include: no packet loss, high priority, and low priority.
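As a rough illustration of steps S1 and S2, the sketch below partitions a shared cache into three priority areas and routes messages to the area matching their sending priority. All names (SharedCache, BufferArea, Enqueue) and the 10/50/30 split are illustrative assumptions, not taken from the patent.

```go
package main

import "fmt"

// Priority is the sending-priority type from step S2.
type Priority int

const (
	NoPacketLoss Priority = iota // lossless internal-protocol queues
	HighPriority
	LowPriority
)

// BufferArea is one partition of the shared cache (step S1); each area
// serves exactly one priority type.
type BufferArea struct {
	Capacity int // bytes reserved out of the shared cache
	Used     int
}

// SharedCache splits its total size among three areas whose proportions
// must sum to at most 100%, per paragraph [0031].
type SharedCache struct {
	areas map[Priority]*BufferArea
}

func NewSharedCache(total int, pNoLoss, pHigh, pLow float64) (*SharedCache, error) {
	if pNoLoss+pHigh+pLow > 1.0 {
		return nil, fmt.Errorf("area proportions exceed the shared cache")
	}
	return &SharedCache{areas: map[Priority]*BufferArea{
		NoPacketLoss: {Capacity: int(float64(total) * pNoLoss)},
		HighPriority: {Capacity: int(float64(total) * pHigh)},
		LowPriority:  {Capacity: int(float64(total) * pLow)},
	}}, nil
}

// Enqueue routes a message of the given size to the area matching its
// sending priority (step S2); it fails if that area is full.
func (c *SharedCache) Enqueue(p Priority, size int) bool {
	a := c.areas[p]
	if a.Used+size > a.Capacity {
		return false // area full; overflow cannot spill into the other areas
	}
	a.Used += size
	return true
}

func main() {
	cache, _ := NewSharedCache(1<<20, 0.1, 0.5, 0.3) // 1 MiB split 10/50/30
	fmt.Println(cache.Enqueue(HighPriority, 1500))   // true: fits in the high-priority area
}
```

Because each priority type owns its own area, low-priority traffic can never exhaust the space that high-priority or no-packet-loss packets need to enter their queues.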

Embodiment 2

[0040] Specifically, as shown in Figure 4, the flow of this embodiment of the present disclosure is as follows:

[0041] First, the shared cache is divided into multiple buffer areas, each corresponding to a different priority type; then each message queue is sent to the corresponding buffer area according to its sending priority. Based on queue priority and likely congestion state, the sending priority of a queue is divided into three types: no-packet-loss queue, high-priority queue, and low-priority queue.

[0042] No-packet-loss queue: these queues are set as queue aggregation group Group A. They are few in number and carry the device's internal protocol packets. To avoid protocol flapping, this part of the queues can neither lose packets nor incur excessive delay, so they generally do not participate in QoS scheduling. As long as the packets are queued, t...
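A minimal sketch of that three-way classification follows. The traffic-class names (InternalProtocol, UserRealtime, UserBestEffort) and the classify helper are hypothetical; the patent only fixes the three output priorities and that Group A carries internal protocol packets.

```go
package main

import "fmt"

// QueueClass is a hypothetical traffic classification for a queue.
type QueueClass int

const (
	InternalProtocol QueueClass = iota // device-internal protocol packets (Group A)
	UserRealtime                       // latency-sensitive user traffic (assumed)
	UserBestEffort                     // bulk, best-effort traffic (assumed)
)

// classify maps a queue's traffic class to one of the three sending
// priorities described in paragraph [0041].
func classify(c QueueClass) string {
	switch c {
	case InternalProtocol:
		// Must neither drop packets nor add delay (protocol flapping),
		// so these queues bypass QoS scheduling entirely.
		return "no-packet-loss"
	case UserRealtime:
		return "high-priority"
	default:
		return "low-priority"
	}
}

func main() {
	fmt.Println(classify(InternalProtocol)) // no-packet-loss
}
```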

Embodiment 3

[0061] As shown in Figure 6, a cache device includes:

[0062] The partition module 401 is configured to divide the shared cache into multiple cache areas, each corresponding to a different priority type. The multiple buffer areas are used to buffer and forward the corresponding message queues: the first buffer area buffers the no-packet-loss queue, the second buffer area buffers the high-priority queue, and the third buffer area buffers the low-priority queue. Together, the first, second, and third buffer areas occupy at most 100% of the shared cache.

[0063] The sending module 402 is configured to send each message queue to the corresponding buffer area according to its sending priority. Sending priorities include: no packet loss, high priority, and low priority.

[0064] If the ratio of the length of the message queue to the capacity of the corresponding buffer area...
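To make the module structure concrete, here is a hedged sketch of the device: Partition stands in for module 401 and Send for module 402. The fill-ratio computation is only a placeholder, since paragraph [0064] is cut off before it states what the ratio triggers; all identifiers are illustrative.

```go
package main

import "fmt"

// Area is one per-priority partition of the shared cache.
type Area struct {
	Capacity    int // bytes reserved for this priority type
	QueueLength int // current length of the message queue buffered here
}

// CacheDevice mirrors the device of Figure 6: a partition module (401)
// and a sending module (402).
type CacheDevice struct {
	areas map[string]*Area // keyed by priority type
}

// Partition plays the role of module 401: divide the shared cache into
// areas whose shares sum to at most 100%.
func (d *CacheDevice) Partition(total int, shares map[string]float64) {
	d.areas = make(map[string]*Area)
	for prio, share := range shares {
		d.areas[prio] = &Area{Capacity: int(float64(total) * share)}
	}
}

// Send plays the role of module 402: route a message to the area
// matching its sending priority and report the resulting fill ratio.
func (d *CacheDevice) Send(prio string, size int) {
	a := d.areas[prio]
	a.QueueLength += size
	// Paragraph [0064] keys further handling off this ratio; the source
	// text truncates there, so this sketch only computes it.
	fmt.Printf("%s fill ratio: %.2f\n", prio, float64(a.QueueLength)/float64(a.Capacity))
}

func main() {
	d := &CacheDevice{}
	d.Partition(1<<20, map[string]float64{"no-loss": 0.1, "high": 0.5, "low": 0.4})
	d.Send("high", 1500)
}
```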


Abstract

The present disclosure provides a caching method and device applied to the shared cache of network communication equipment. The method comprises the following steps: dividing the shared cache into multiple cache areas, each corresponding to a different priority type; and sending each message queue to the corresponding buffer area according to its sending priority. By using shared cache resources more reasonably and efficiently, the disclosure guarantees priority scheduling for queues of different service types and, even when the total cache is small, prevents excessive low-priority messages from occupying the queue space that high-priority packets need, thereby ensuring QoS service quality and giving users a better online experience.

Description

Technical field

[0001] The present disclosure relates to the technical field of network communication, and in particular to a caching method and device.

Background technique

[0002] WRED (Weighted Random Early Detection) is a flow-control mechanism that monitors the usage of network resources (such as queues or memory buffers) and actively discards packets as congestion tends to intensify, relieving network overload. Its implementation depends on the size of the buffer area, and in reality no buffer area can be unlimited; once the total buffer size is exceeded, the WRED mechanism behaves abnormally, and the QoS (Quality of Service) queue scheduling that WRED supports also skews.

[0003] Normally forwarded queues buffer almost no packets, and traffic is scheduled according to a specific QoS priority, as shown in Figure 1. Each queue sets a maximum buffer usage limit, that is, the total lengt...
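Since the background leans on WRED's standard behavior, the sketch below shows the textbook WRED drop decision: no drops below a minimum average-queue threshold, certain drops above a maximum, and a linearly rising drop probability in between. The thresholds and maximum drop probability are illustrative, not values from the patent.

```go
package main

import (
	"fmt"
	"math/rand"
)

// wredDrop reports whether a newly arriving packet should be discarded,
// given the average queue length and the queue's WRED parameters.
func wredDrop(avgQueueLen, minTh, maxTh, maxDropProb float64) bool {
	switch {
	case avgQueueLen < minTh:
		return false // no congestion: always enqueue
	case avgQueueLen >= maxTh:
		return true // severe congestion: drop everything (tail-drop region)
	default:
		// Between the thresholds the drop probability climbs linearly
		// from 0 to maxDropProb, discarding packets early and at random.
		p := maxDropProb * (avgQueueLen - minTh) / (maxTh - minTh)
		return rand.Float64() < p
	}
}

func main() {
	// With average length 70 between thresholds 40 and 80, the drop
	// probability is 0.1 * (70-40)/(80-40) = 7.5%.
	fmt.Println(wredDrop(70, 40, 80, 0.1))
}
```

This dependence on fixed per-queue thresholds is exactly what breaks once the shared buffer is exhausted, which motivates the per-priority partitioning above.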


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04L47/24; H04L47/50; H04L47/6275
CPC: H04L47/24; H04L47/6215; H04L47/6275
Inventor: 寇远芳
Owner: NEW H3C BIG DATA TECH CO LTD