
Method and device for achieving interface caching dynamic allocation

A dynamic allocation and dynamic configuration technology, applied to memory architecture access/allocation, memory systems, digital transmission systems, and the like. It solves problems such as traffic turbulence, damage to system stability, and system traffic bursts, achieving smooth transitions and the effect of dynamic cache sharing.

Publication status: Inactive · Publication date: 2015-06-17
Applicant: SANECHIPS TECH CO LTD

AI Technical Summary

Problems solved by technology

However, when multiple interfaces are connected at the same time, the linked-list cache-sharing structure cannot accurately limit the flow of each input interface.
This causes the packet-send and stop-send operations of all interfaces to become synchronized, producing sudden, turbulent system traffic that greatly damages the stability of the system.



Examples


Embodiment 1

[0046] Figure 1 is a schematic flow diagram of the method for realizing dynamic allocation of interface cache according to an embodiment of the present invention. As shown in Figure 1, the method includes:

[0047] Step 101: Set, in advance or while the system is running, the docking relationship between the interfaces to be accessed in the application and the free cache blocks, and then transmit the data packets input from each interface to its cache block;

[0048] Here, assume the total number of interfaces supported by the system is M and there are N cache blocks for receiving data (M ≥ N). In actual use the number of interfaces used at the same time will often not exceed N, so N caches can support M interfaces. The N cache blocks may be obtained by equally dividing the entire interface cache.

[0049] In the embodiment of the present invention, a multi-input multi-output cross matrix (input quant...
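As a software analogy to the cross matrix and the docking of step 101, below is a minimal C sketch of a table that maps up to M interfaces onto N shared cache blocks, with attach/detach operations that can run while the system is up. The sizes (M = 16, N = 8) and every name here (crossbar_attach, crossbar_detach) are invented for illustration; the patent describes a hardware structure, not this API.

```c
#include <stdbool.h>
#include <string.h>

#define M_INTERFACES 16   /* total interfaces the system supports (M) */
#define N_BLOCKS      8   /* cache blocks for receiving data (N), M >= N */
#define UNMAPPED     -1

/* Crossbar state: which cache block each interface currently feeds,
 * and whether each block is in use. Names are illustrative. */
typedef struct {
    int  iface_to_block[M_INTERFACES]; /* UNMAPPED if not docked */
    bool block_in_use[N_BLOCKS];
} crossbar_t;

void crossbar_init(crossbar_t *xb) {
    for (int i = 0; i < M_INTERFACES; i++)
        xb->iface_to_block[i] = UNMAPPED;
    memset(xb->block_in_use, 0, sizeof xb->block_in_use);
}

/* Dock an interface to any free cache block (step 101).
 * Returns the block index, or UNMAPPED if all N blocks are busy. */
int crossbar_attach(crossbar_t *xb, int iface) {
    if (xb->iface_to_block[iface] != UNMAPPED)
        return xb->iface_to_block[iface];      /* already docked */
    for (int b = 0; b < N_BLOCKS; b++) {
        if (!xb->block_in_use[b]) {
            xb->block_in_use[b] = true;
            xb->iface_to_block[iface] = b;
            return b;
        }
    }
    return UNMAPPED;
}

/* Revoke an interface's docking so its block can be re-allocated
 * to another interface while the system is running. */
void crossbar_detach(crossbar_t *xb, int iface) {
    int b = xb->iface_to_block[iface];
    if (b != UNMAPPED) {
        xb->block_in_use[b] = false;
        xb->iface_to_block[iface] = UNMAPPED;
    }
}
```

Because M ≥ N, crossbar_attach can fail when all N blocks are taken; that is precisely the trade-off the scheme accepts in exchange for not reserving an exclusive buffer for all M interfaces.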

Embodiment 2

[0065] In order to clearly describe the present invention, and to contrast this embodiment with the linked-list cache-sharing structure, the description is based on a scenario in which multiple interfaces receive messages simultaneously. As shown in Figure 2, which depicts a cache-sharing structure in the form of a linked list, handling simultaneous input from multiple interfaces with the linked-list method requires a two-level cache structure. The first-level interface cache is small and is mainly used for multi-interface aggregation; the second-level shared cache has a large capacity and serves as the actual storage body for the data. At the same time, each interface may send packet slices to the first-level interface cache, and each receiving cache checks the integrity of the packet, for example: whether the packet is a complete packet, and/or a packet type that is allowed to enter, and/or ultra-long and ultra-short...
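The integrity checks named above (completeness, permitted type, length bounds) can be pictured with a small C sketch. The thresholds, field names, and type whitelist below are assumptions for illustration, not values taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed length bounds for the ultra-long / ultra-short check. */
#define PKT_MIN_LEN   64
#define PKT_MAX_LEN 9600

typedef struct {
    uint16_t type;     /* packet type identifier */
    size_t   length;   /* total packet length in bytes */
    bool     has_sop;  /* start-of-packet slice seen */
    bool     has_eop;  /* end-of-packet slice seen */
} packet_meta_t;

static bool type_allowed(uint16_t type) {
    /* Placeholder policy: accept a small whitelist of types. */
    return type == 0x0800 || type == 0x86DD; /* e.g. IPv4 / IPv6 */
}

/* Returns true if a reassembled packet may leave the first-level
 * interface cache and enter the second-level shared cache. */
bool packet_integrity_ok(const packet_meta_t *m) {
    if (!m->has_sop || !m->has_eop) return false; /* incomplete packet  */
    if (!type_allowed(m->type))     return false; /* disallowed type    */
    if (m->length < PKT_MIN_LEN ||
        m->length > PKT_MAX_LEN)    return false; /* ultra-short/-long  */
    return true;
}
```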

Embodiment 3

[0074] An embodiment of the present invention provides an implementation in a concrete application scenario. As shown in Figure 5, it includes the following steps:

[0075] Step 501: According to the needs of the application scenario, determine the interfaces that need to be connected, perform docking settings from cache blocks to interfaces, and select an idle cache block to connect to each interface;

[0076] Step 502: Configure the working mode of each interface, for example: full-packet mode or interleaved mode, and, in interleaved mode, the ID numbers of the specific data packets received by the relevant cache block;

[0077] Step 503: According to these settings, the corresponding cache block stores the messages for subsequent modules to call;

[0078] Step 504: Schedule and output the data in all cache blocks according to certain scheduling rules, such as round robin (RR) or strict priority (SP), and send the output messages to the next stage for processing (see the sketch after these steps)...
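As referenced above, here is a minimal C sketch, under stated assumptions, of what steps 502 and 504 might look like in software: a per-interface working-mode record (full-packet vs. interleaved, with the packet IDs accepted in interleaved mode) and one round-robin pass over the cache blocks. All names (iface_config_t, block_has_data, emit_packet_from) and sizes are invented for illustration; RR and SP are named in the patent only as example scheduling rules.

```c
#include <stdbool.h>
#include <stdint.h>

#define N_BLOCKS 8   /* illustrative number of cache blocks */
#define MAX_IDS  4   /* assumed cap on packet IDs per interleaved block */

/* Step 502: per-interface working mode (names are assumptions). */
typedef enum { MODE_FULL_PACKET, MODE_INTERLEAVED } iface_mode_t;

typedef struct {
    iface_mode_t mode;
    uint16_t     accepted_ids[MAX_IDS]; /* interleaved mode only */
    int          num_ids;
} iface_config_t;

/* Stand-ins for the real dequeue path (assumed, not the patent's API). */
extern bool block_has_data(int block);
extern void emit_packet_from(int block); /* send one packet downstream */

/* Step 504: one round-robin (RR) scheduling pass over the cache blocks.
 * Returns the block served, or -1 if every block is empty this pass. */
int rr_schedule_once(void) {
    static int cursor = 0;                 /* where the last pass stopped */
    for (int step = 0; step < N_BLOCKS; step++) {
        int b = (cursor + step) % N_BLOCKS;
        if (block_has_data(b)) {
            emit_packet_from(b);
            cursor = (b + 1) % N_BLOCKS;   /* resume after served block */
            return b;
        }
    }
    return -1;
}
```

A strict-priority (SP) variant would instead always scan from block 0 upward, so lower-numbered blocks preempt higher-numbered ones.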



Abstract

A method for implementing dynamic allocation of interface cache is disclosed in the present invention. The method includes: setting, in advance or while the system is running, the correspondence between free cache blocks and the interfaces required to be accessed in the application, and then sending the data packets input from each interface to its cache block; and, while the system is running, if the interfaces required to be accessed need to be added, revoked, or modified, adjusting in real time the correspondence between the changed interfaces and their cache blocks. A device and a computer storage medium implementing the method are also disclosed in the present invention.

Description

Technical Field

[0001] The invention relates to the field of network transmission control, and in particular to a method and device for realizing dynamic allocation of interface cache.

Background Technique

[0002] With the continuous upgrading of network capacity, the number of interfaces supported by routers keeps increasing, and so do the requirements for flexibility, in order to meet the needs of different application scenarios; different application scenarios may require different combinations of interfaces. This requires that the current design support all interfaces of every possible application scenario. For simultaneous access by multiple interfaces, an exclusive buffer must be assigned to each accessed interface to achieve simultaneous buffering and reception of data; however, if an exclusive buffer is allocated to all supported interfaces, the number and capacity of the caches will inevitably increase as the number of interfaces ...

Claims


Application Information

IPC(8): H04L12/861
CPC: G06F12/084; G06F13/128; G06F12/0844; G06F2212/154; G06F2212/6042; H04L67/568; G06F12/0813
Inventors: 廖智勇 (Liao Zhiyong), 于忠前 (Yu Zhongqian)
Owner: SANECHIPS TECH CO LTD