Sharing cache dynamic threshold early drop device for supporting multi queue

A dynamic-threshold, shared-cache technology applied in digital transmission systems, electrical components, transmission systems, etc. It addresses the problems of difficult parameter setting, heavy resource consumption, and the difficulty of high-speed hardware implementation, and achieves the effect of avoiding floating-point operations.

Inactive Publication Date: 2006-05-24
TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

However, the RED mechanism has three main disadvantages:
[0007] (1) Its parameters are difficult to set, and its performance is sensitive to changes in parameters and network conditions;
[0008] (2) Parameters must be set and calculations performed separately for each queue, which scales poorly as the number of queues or network flows grows;
[0009] (3) The RED algorithm contains many multiplication operations, which consume substantial resources and are difficult to implement at high speed in hardware.



Embodiment Construction

[0063] The Internet Engineering Task Force (IETF) proposed AQM technology and recommended the RED mechanism. When a router implementing the RED algorithm detects a precursor of congestion, it randomly discards some packets from the buffer queue in advance, instead of discarding all new packets only after the buffer is full. When the average queue length of the cache of an intermediate node device in the network exceeds a specified minimum threshold min_th, a congestion precursor is considered to exist, and the router drops an arriving packet with a certain probability p_a, where p_a is a function of the average queue length avg(t):

[0064] p_b = p_max * (avg(t) - min_th) / (max_th - min_th)
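For reference, the sketch below implements the textbook RED drop decision described in [0063]-[0064]. It is a minimal software illustration, not the patent's hardware circuit: the struct layout, the helper name red_should_drop, and the EWMA update with weight w_q are assumptions for the example, and the floating-point arithmetic shown here is exactly what the patented device avoids.

```c
/* Minimal sketch of the textbook RED drop decision referenced in
 * [0063]-[0064].  Field names and the EWMA update are illustrative
 * assumptions, not parameters or circuitry taken from the patent. */
#include <stdbool.h>
#include <stdlib.h>

typedef struct {
    double min_th;   /* minimum average-queue-length threshold        */
    double max_th;   /* maximum average-queue-length threshold        */
    double max_p;    /* p_max: drop probability when avg reaches max_th */
    double w_q;      /* EWMA weight used to update avg(t)             */
    double avg;      /* running average queue length avg(t)           */
    int    count;    /* packets enqueued since the last early drop    */
} red_state;

/* Update avg(t) from the instantaneous queue length and decide whether
 * the arriving packet should be dropped early. */
static bool red_should_drop(red_state *s, int cur_qlen)
{
    s->avg = (1.0 - s->w_q) * s->avg + s->w_q * (double)cur_qlen;

    if (s->avg < s->min_th) {            /* no congestion precursor   */
        s->count = -1;
        return false;
    }
    if (s->avg >= s->max_th)             /* severe congestion: always drop */
        return true;

    /* p_b = p_max * (avg(t) - min_th) / (max_th - min_th)   (cf. [0064]) */
    double p_b = s->max_p * (s->avg - s->min_th) / (s->max_th - s->min_th);

    /* Spread early drops out over time: p_a = p_b / (1 - count * p_b) */
    s->count++;
    double p_a = (s->count * p_b >= 1.0) ? 1.0
                                         : p_b / (1.0 - s->count * p_b);

    if ((double)rand() / (double)RAND_MAX < p_a) {
        s->count = 0;
        return true;
    }
    return false;
}
```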



Abstract

The invention belongs to the field of IP technology and is implemented on a field programmable gate array (FPGA). The device comprises an IP packet circuit, a dynamic-threshold early-drop circuit, a cell counting circuit, a free-block management circuit, a DDR controller, a queue scheduling circuit, and external DDR storage. Based on the current average queue length of each active queue and the average queue length of the whole shared buffer, the invention adjusts the parameters of the random early detection (RED) algorithm, yielding a dynamic-threshold early-drop method that supports multiple queues sharing one buffer. The method retains the advantages of both the RED mechanism and dynamic thresholding, and its cascaded drop-curve approximation makes it well suited to FPGA implementation. Its features are a lower packet-loss rate, higher buffer utilization, and a good compromise on fairness.
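The "cascaded discarding curve approximation" mentioned in the abstract can be pictured as a small table of thresholds, each paired with a power-of-two drop probability, so that a drop decision needs only comparisons and a mask test instead of multiplications or floating-point operations. The sketch below is a hypothetical illustration under that assumption; the number of steps, the threshold values, the shift amounts, and the names curve and stepwise_drop are not taken from the patent.

```c
/* Hypothetical sketch of a stepwise (cascaded) drop-curve approximation
 * using only integer compares and a mask test, in the spirit of the
 * abstract's claim of avoiding floating-point operations.  All values
 * and names are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define RED_STEPS 4

/* Each step maps a queue-length band to a drop probability expressed as
 * a power-of-two reciprocal (drop roughly 1 packet in 2^shift), so the
 * random test is a mask-and-compare instead of a multiply. */
typedef struct {
    uint32_t threshold;   /* drop applies once avg exceeds this length */
    uint32_t shift;       /* drop probability = 1 / (1 << shift)       */
} drop_step;

static const drop_step curve[RED_STEPS] = {
    {  64, 6 },  /* avg >  64 cells: drop ~1/64 */
    { 128, 4 },  /* avg > 128 cells: drop ~1/16 */
    { 192, 2 },  /* avg > 192 cells: drop ~1/4  */
    { 256, 0 },  /* avg > 256 cells: drop all   */
};

/* lfsr: pseudo-random word, e.g. from a hardware LFSR. */
static bool stepwise_drop(uint32_t avg_qlen, uint32_t lfsr)
{
    for (int i = RED_STEPS - 1; i >= 0; i--) {
        if (avg_qlen > curve[i].threshold) {
            uint32_t mask = (1u << curve[i].shift) - 1u;
            return (lfsr & mask) == 0;   /* true with prob. 1/2^shift */
        }
    }
    return false;   /* below the first threshold: never drop early */
}
```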

Description

technical field

[0001] The present invention belongs to the field of IP technology.

Background technique

[0002] Current Internet intermediate node devices, such as routers, influence and control network congestion by controlling the length of their internal cache queues. The traditional cache queue management mechanism found in current equipment is the tail drop (TailDrop) mechanism: a length threshold is set for each queue; if the queue length has not reached the set threshold, all arriving packets are accepted into the queue; otherwise, arriving packets are discarded. This mechanism is simple to implement, but it has three serious flaws:

[0003] (1) persistently full queues (Full Queues);

[0004] (2) lock-out (Lock Out) of service flows from the cache;

[0005] (3) global synchronization of traffic (Global Synchronization).

[0006] The currently generally accepted remedy is to add an enhanced function at the intermediate node: Active Queue Management (AQM) ...
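For comparison with AQM, a minimal sketch of the tail-drop policy described in paragraph [0002] is given below, assuming a simple per-queue counter; the type and function names are illustrative.

```c
/* Minimal sketch of the tail-drop policy described in [0002]: one fixed
 * length threshold per queue, arrivals are accepted until the queue
 * reaches it and discarded afterwards.  Names are illustrative. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t length;      /* current number of queued packets */
    uint32_t threshold;   /* fixed per-queue length limit     */
} drop_tail_queue;

/* Returns true if the arriving packet is accepted into the queue. */
static bool tail_drop_enqueue(drop_tail_queue *q)
{
    if (q->length >= q->threshold)
        return false;     /* queue at its limit: drop the arrival */
    q->length++;
    return true;
}
```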

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L12/861
Inventor: 胡成臣, 刘斌, 陈雪飞, 陈洪明
Owner: TSINGHUA UNIV