70 results about How to "Reduce queuing delay" patented technology

TDMA based long propagation delay wireless link time slot distribution method

The invention discloses a TDMA-based time slot allocation method for long-propagation-delay wireless links, mainly intended to solve the low throughput and low channel utilization of existing long-propagation-delay wireless ad hoc networks. The implementation scheme comprises the following steps: 1, a node is initialized; 2, the node judges whether it has received a synchronization frame; if so, it synchronizes and joins the network, otherwise it establishes a network using its local clock as the reference; 3, after synchronization is completed, the node automatically generates a superframe structure and divides local time into multiple time slots with different functions; and 4, the node judges whether the current time slot is a service slot; if so, it sends and receives data, otherwise it updates network information. Because TDMA is adopted as the channel access mode, nodes can access the channel without collisions; an interactive sending mechanism and a data-frame queue scheduling mechanism increase network throughput and channel utilization and reduce the queuing delay of data frames. The method can be used in time division multiple access ad hoc networks.
Owner:XIDIAN UNIV
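Below is a minimal Python sketch of the superframe slot-dispatch idea described in the abstract above: local time is mapped onto a repeating superframe of typed slots, and the node either exchanges data (service slot) or updates network information. The slot layout, slot duration and function names are illustrative assumptions, not the patented implementation.

```python
from enum import Enum

class SlotType(Enum):
    SYNC = "sync"          # synchronization frames
    CONTROL = "control"    # network-information updates
    SERVICE = "service"    # data transmission/reception

# Assumed superframe layout: 1 sync slot, 3 control slots, 12 service slots.
SUPERFRAME = [SlotType.SYNC] + [SlotType.CONTROL] * 3 + [SlotType.SERVICE] * 12
SLOT_DURATION_MS = 10  # assumed slot length

def current_slot(local_time_ms: int) -> SlotType:
    """Map the node's local clock onto the repeating superframe."""
    index = (local_time_ms // SLOT_DURATION_MS) % len(SUPERFRAME)
    return SUPERFRAME[index]

def on_slot(local_time_ms: int, tx_queue: list) -> str:
    """Service slots send/receive queued frames; other slots update network info."""
    slot = current_slot(local_time_ms)
    if slot is SlotType.SERVICE:
        frame = tx_queue.pop(0) if tx_queue else None
        return f"send/receive data (frame={frame})"
    return "update network information"

# Example: walk a few slots of the superframe.
queue = ["frame-A", "frame-B"]
for t in range(0, 60, 10):
    print(t, on_slot(t, queue))
```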

Vehicle scheduling method for junction without signal lights in autonomous driving environment

The invention provides a vehicle scheduling method for a junction without signal lights in an autonomous driving environment. Vehicles transmit communication requests to an RSU (roadside unit), which collects them into communication request sets sorted in ascending order of the scheduled time at which each vehicle reaches the junction. The RSU updates each vehicle's intermediate time according to the most recently allocated vehicle time and the road segment that vehicle belongs to, and constructs the communication request sets to optimize the vehicle allocation times. The RSU then traverses the communication request sets, calculates the allocated time at which each vehicle passes through the junction, and updates the vehicle's intermediate time. The head end of a vehicle reaches the junction at the time allocated by the RSU and the vehicle then leaves, after which the RSU updates the vehicle allocation time and the road segment the vehicle belongs to. In this way the RSU schedules all vehicles corresponding to the requests in the communication request sets through the junction. Compared with the prior art, the method reduces the overall time vehicles need to pass through the junction.
Owner:WUHAN UNIV
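The following sketch illustrates the general flavour of the RSU-side scheduling described above: requests are served in ascending order of scheduled arrival time, and each vehicle receives a conflict-free pass time. The safety-gap values and the greedy allocation rule are assumptions for illustration, not the patented optimization.

```python
from dataclasses import dataclass

@dataclass
class Request:
    vehicle_id: str
    arrival_time: float  # scheduled time the vehicle reaches the junction
    segment: int         # road segment the vehicle approaches from

# Assumed safety gaps (seconds): vehicles from the same segment can follow
# more closely than vehicles from conflicting segments.
SAME_SEGMENT_GAP = 1.0
CROSS_SEGMENT_GAP = 2.5

def schedule(requests):
    """Greedy RSU-style sketch: serve requests in ascending order of
    scheduled arrival time and assign each a conflict-free pass time."""
    requests = sorted(requests, key=lambda r: r.arrival_time)
    last_time, last_segment = float("-inf"), None
    allocation = {}
    for req in requests:
        gap = SAME_SEGMENT_GAP if req.segment == last_segment else CROSS_SEGMENT_GAP
        pass_time = max(req.arrival_time, last_time + gap)
        allocation[req.vehicle_id] = pass_time
        last_time, last_segment = pass_time, req.segment
    return allocation

print(schedule([Request("v1", 3.0, 1), Request("v2", 3.2, 2), Request("v3", 2.8, 1)]))
```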

Adaptive rate control method based on mobility and DSRC/WAVE network relevance feedback

The invention, which belongs to the technical field of vehicular networking communication, relates to an adaptive rate control method based on mobility and DSRC/WAVE network relevance feedback. The method comprises a traffic flow density prediction module, a communication interference calculation module for time t+1, an SINR calculation module, an available link bandwidth calculation module for time t+1, a channel congestion cost calculation module and an adaptive message generation rate calculation module. The traffic flow density at the next time step is predicted; based on that density, the transmit power and the rate, an interference model of the communication process is established, the signal-to-interference-plus-noise ratio is calculated, and the available link bandwidth of a node at the next time step is predicted. Based on the mismatch of the transmission rate and the mismatch of the transmission queue length, a channel congestion cost model is established so that the message generation rate at the next time step is adjusted adaptively. By adjusting the rate in advance using prediction, channel congestion is avoided, and low communication delay and a high packet delivery rate are guaranteed at low computation time and cost.
Owner:DALIAN UNIV OF TECH
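A toy Python sketch of the prediction-then-adapt loop described above: a predicted vehicle density drives an interference/SINR estimate, which bounds the available link bandwidth, and the message generation rate is corrected from the rate and queue-length mismatches. All constants, gains and function names are illustrative assumptions, not the patent's models.

```python
import math

def predicted_bandwidth(density, tx_power_w, noise_w=1e-9, bandwidth_hz=10e6,
                        interference_per_vehicle_w=2e-9):
    """Toy Shannon-style estimate: more vehicles -> more interference ->
    lower SINR -> less usable link capacity. Constants are assumptions."""
    interference = density * interference_per_vehicle_w
    sinr = tx_power_w / (noise_w + interference)
    return bandwidth_hz * math.log2(1 + sinr)  # bits/s

def next_message_rate(current_rate, msg_size_bits, density_next, tx_power_w,
                      queue_len, target_queue_len=10, k_rate=0.5, k_queue=0.2):
    """Adjust the message generation rate from two mismatch terms:
    (1) offered load vs. predicted capacity, (2) queue length vs. target."""
    capacity = predicted_bandwidth(density_next, tx_power_w)
    rate_mismatch = current_rate * msg_size_bits / capacity - 1.0
    queue_mismatch = (queue_len - target_queue_len) / max(target_queue_len, 1)
    cost = k_rate * rate_mismatch + k_queue * queue_mismatch
    return max(1.0, current_rate * (1.0 - cost))

rate = 10.0  # messages per second
for step in range(3):
    rate = next_message_rate(rate, 4000, density_next=80 + 10 * step,
                             tx_power_w=0.1, queue_len=12)
    print(f"t+{step + 1}: rate = {rate:.2f} msg/s")
```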

Adaptive congestion control method for communication network

The invention relates to an adaptive congestion control method for a communication network and belongs to the technical field of network engineering. The method comprises the following steps: calculating a queue difference e(k) in each sampling period; measuring the input flow rate r(k) of data packets; calculating the flow-rate difference x(k); calculating the rate of change Δr(k) of the data flow rate; obtaining the average Δ*(k) of that rate of change; comparing the transient queue error e(k) with a preset error threshold e_th; calculating a modified price pr(k) and a probability conversion coefficient μ(k) according to the selected control method; and calculating a drop probability p(k) from the price pr(k) and the conversion coefficient μ(k), then performing the packet-drop operation. The method has a simple structure and strong extensibility, effectively overcomes technical problems in REM, reduces router queuing delay and jitter, ensures high link utilization, and adapts well to complicated dynamic environments.
Owner:SHANGHAI JIAO TONG UNIV
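Since the abstract positions the method against REM, the sketch below shows the standard REM-style mapping from a link price to a drop probability, with a simplified price update driven by the queue error e(k) and a rate mismatch. The gains and the normalization are assumptions and stand in for, rather than reproduce, the patent's modified price pr(k) and conversion coefficient μ(k).

```python
def update_price(price, queue_len, target_queue, input_rate, capacity,
                 gamma=0.01, alpha=0.1):
    """REM-like price update from the transient queue error e(k) and a
    rate mismatch x(k) normalized by capacity; gamma and alpha are
    illustrative gains, not the patent's values."""
    e_k = queue_len - target_queue
    x_k = (input_rate - capacity) / capacity
    return max(0.0, price + gamma * (alpha * e_k + x_k))

def drop_probability(price, phi=1.05):
    """Classic REM mapping from link price to drop probability:
    p = 1 - phi ** (-price)."""
    return 1.0 - phi ** (-price)

price = 0.0
for k, (queue_len, rate) in enumerate([(120, 9.5e6), (150, 1.1e7), (90, 8.0e6)]):
    price = update_price(price, queue_len, target_queue=100,
                         input_rate=rate, capacity=1e7)
    print(f"k={k}: price={price:.4f}, drop probability={drop_probability(price):.5f}")
```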

Load balancing edge cooperative caching method for Internet scene differentiated service

Status: Active · Publication: CN112039943A · Benefits: meets the needs of different service levels, reduces queuing delay · Topics: transmission, differentiated service, queuing delay
The invention discloses a load-balancing edge cooperative caching method for differentiated services in Internet scenarios. The method comprises the following steps: S1, defining the response actions and caching parameters of edge nodes in an edge cooperative caching system after a user sends an application service request; and S2, initializing parameters, executing the edge cooperative caching process, and invoking a load balancing strategy and a differentiated service strategy. The differentiated service strategy allows the edge cooperative caching system to meet the requirements of different service levels of different users in an Internet scenario, while the load balancing strategy reduces the queuing delay of user requests, improves the stability of node response delay, and improves user experience.
Owner:SUN YAT SEN UNIV
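A small Python sketch of how a load-balancing, service-differentiated edge cache might route requests: prefer nodes that already hold the content, break ties by the shortest request queue, and weight waiting time by service level. The node structure, service-level weights and eviction rule are assumptions for illustration.

```python
class EdgeNode:
    def __init__(self, name, capacity):
        self.name = name
        self.cache = set()
        self.capacity = capacity
        self.queue_len = 0  # pending requests, proxy for queuing delay

def pick_node(nodes, content_id):
    """Load-balancing sketch: prefer nodes that already cache the content,
    breaking ties by the shortest request queue."""
    candidates = [n for n in nodes if content_id in n.cache] or nodes
    return min(candidates, key=lambda n: n.queue_len)

def serve(nodes, content_id, service_level):
    """Differentiated-service sketch: higher levels see a smaller effective
    wait (weights are illustrative assumptions)."""
    node = pick_node(nodes, content_id)
    weight = {"gold": 0.25, "silver": 0.5, "bronze": 1.0}[service_level]
    effective_wait = node.queue_len * weight
    node.queue_len += 1
    if content_id not in node.cache:
        if len(node.cache) >= node.capacity:
            node.cache.pop()  # simplistic eviction, not the patented policy
        node.cache.add(content_id)
    return node.name, effective_wait

nodes = [EdgeNode("edge-1", 2), EdgeNode("edge-2", 2)]
for cid, lvl in [("video-7", "gold"), ("video-7", "bronze"), ("doc-3", "silver")]:
    print(cid, lvl, serve(nodes, cid, lvl))
```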

Non-blocking content caching method and device for content router

The embodiment of the invention provides a non-blocking content caching method and device for a content router. The method comprises the following steps: parsing the packet header of a received first target interest packet request to obtain a keyword and an offset; judging, with a Bloom filter, whether content corresponding to the keyword exists in the CS (Content Store); if the content exists, judging whether the I/O waiting queue of the CS is shorter than a preset threshold; if it is, pushing a first target data packet to the server that sent the first target interest packet request; if it is not, judging whether the keyword exists in the PIT (pending interest table); if the keyword does not exist in the PIT, sending the keyword to the FIB (forwarding information base), so that an upstream router performs routing and forwarding on the keyword using the FIB; receiving the returned second target data packet; sending the second target data packet to the server corresponding to the port and then deleting the mapping relationship record; and adding the keyword to the data structure of the Bloom filter. The method alleviates frequent congestion of the CS.
Owner:BEIJING UNIV OF POSTS & TELECOMM
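The decision path in the abstract lends itself to a compact sketch: a Bloom filter screens Content Store lookups, an I/O queue threshold keeps the CS non-blocking, and misses fall through to the PIT and FIB. The Bloom filter parameters, threshold value and data-structure names below are assumptions, not the patented device.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter used only to avoid useless Content Store lookups."""
    def __init__(self, size=1024, hashes=3):
        self.bits = [False] * size
        self.size, self.hashes = size, hashes

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        return all(self.bits[pos] for pos in self._positions(key))

IO_QUEUE_THRESHOLD = 8  # assumed limit on pending CS disk reads

def handle_interest(key, bloom, cs_io_queue, pit, fib):
    """Decision path sketched from the abstract:
    Bloom filter -> CS I/O queue check -> PIT -> FIB forwarding."""
    if bloom.might_contain(key) and len(cs_io_queue) < IO_QUEUE_THRESHOLD:
        cs_io_queue.append(key)
        return "serve from Content Store"
    if key in pit:
        pit[key].append("requesting-face")
        return "aggregate in PIT"
    pit[key] = ["requesting-face"]
    return f"forward via FIB to {fib.get(key, 'default-route')}"

bloom = BloomFilter()
bloom.add("/video/clip-1")
print(handle_interest("/video/clip-1", bloom, [], {}, {}))
print(handle_interest("/doc/2", bloom, [], {}, {"/doc/2": "upstream-1"}))
```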

Fair network flow control method and device

The invention discloses a fair network flow control method and device. The method comprises the following steps: (1) when a packet of length l arrives, the flow controller determines whether to allow the packet to pass according to the token occupancy of the flow the packet belongs to and the number of tokens in the current global token bucket; if the packet is allowed to pass, the token occupancy of the flow is increased by l via a Count-Min Sketch; and if the packet is allowed to pass and the token occupancy of its flow was 0 before the packet arrived, the flow is inserted at the tail of the active-flow linked list; and (2) tokens are generated at a preset rate; whenever a token is generated, the number of tokens in the global token bucket is increased by 1, a flow is taken from the head of the active-flow linked list, and the number of tokens occupied by that flow is decreased by 1 through the Count-Min Sketch structure; at that moment, if the token occupancy of the flow is still greater than 0, the flow is re-inserted at the tail of the active-flow linked list. In this way tokens are allocated fairly to each active flow, so every flow passing through the flow controller shares bandwidth resources fairly.
Owner:XI AN JIAOTONG UNIV
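A runnable Python sketch of the mechanism described above: a global token bucket gates admission, a Count-Min Sketch tracks per-flow token occupancy, and an active-flow list refunds generated tokens round-robin. The exact admission rule is not spelled out in the abstract, so the one used here (admit when the global bucket can cover the packet) is an assumption, as are the bucket and sketch parameters.

```python
import random
from collections import deque

class CountMinSketch:
    """Approximate per-flow counters (small toy dimensions for illustration)."""
    def __init__(self, width=64, depth=3):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        self.seeds = [random.randrange(1 << 30) for _ in range(depth)]

    def _idx(self, row, key):
        return hash((self.seeds[row], key)) % self.width

    def add(self, key, delta):
        for row in range(self.depth):
            self.table[row][self._idx(row, key)] += delta

    def estimate(self, key):
        return min(self.table[row][self._idx(row, key)] for row in range(self.depth))

class FairTokenLimiter:
    def __init__(self, bucket_capacity=100):
        self.tokens = bucket_capacity          # global token bucket
        self.capacity = bucket_capacity
        self.occupancy = CountMinSketch()      # tokens currently held per flow
        self.active_flows = deque()            # flows awaiting token refunds

    def on_packet(self, flow_id, length):
        """Admit a packet if the global bucket has enough tokens; charge the
        flow's occupancy and register it as active on its first token."""
        if self.tokens < length:
            return False
        had_none = self.occupancy.estimate(flow_id) == 0
        self.tokens -= length
        self.occupancy.add(flow_id, length)
        if had_none:
            self.active_flows.append(flow_id)
        return True

    def on_token_generated(self):
        """Return one token to the bucket and refund it round-robin to the
        flow at the head of the active-flow list."""
        self.tokens = min(self.capacity, self.tokens + 1)
        if not self.active_flows:
            return
        flow_id = self.active_flows.popleft()
        self.occupancy.add(flow_id, -1)
        if self.occupancy.estimate(flow_id) > 0:
            self.active_flows.append(flow_id)

limiter = FairTokenLimiter()
print(limiter.on_packet("flow-A", 3), limiter.on_packet("flow-B", 5))
for _ in range(4):
    limiter.on_token_generated()
print(limiter.tokens, limiter.occupancy.estimate("flow-A"))
```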

New energy automobile charging control system

The invention discloses a new energy vehicle charging control system comprising new energy vehicles (1), a control center (2), user phones (3), charging piles (4) and a power grid (5). Charging vehicles are accurately assigned to charging piles in time order, charging is standardized and controllable, and charging and discharging control efficiency is fully exploited. During the whole vehicle dispatching process, vehicles do not need to queue and wait, which reduces queuing delay and shortens the time required for charging and discharging. Because voice or video call data packets serve as channel carriers, a channel can be established between a vehicle user and the control center at any time; hidden addresses are embedded in the payload of the data packets, which greatly increases channel capacity and makes the channels unlikely to interfere with each other when many vehicles are present.
Owner:宁波市鄞州智伴信息科技有限公司
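The queue-free dispatching claim above amounts to assigning each vehicle to the earliest-available charging pile; the sketch below shows one such assignment under assumed arrival times and charge durations (the communication-channel aspects of the system are not modelled).

```python
import heapq

def assign_charging_slots(requests, num_piles):
    """Earliest-available-pile assignment sketch: each vehicle gets a pile and
    a start time so that no vehicle waits in a physical queue. Inputs are
    (vehicle_id, arrival_time, charge_duration); all values illustrative."""
    piles = [(0.0, i) for i in range(num_piles)]  # (time pile becomes free, pile index)
    heapq.heapify(piles)
    schedule = []
    for vehicle_id, arrival, duration in sorted(requests, key=lambda r: r[1]):
        free_at, pile = heapq.heappop(piles)
        start = max(arrival, free_at)
        schedule.append((vehicle_id, pile, start, start + duration))
        heapq.heappush(piles, (start + duration, pile))
    return schedule

requests = [("ev-1", 0.0, 30), ("ev-2", 5.0, 20), ("ev-3", 6.0, 45)]
for row in assign_charging_slots(requests, num_piles=2):
    print(row)
```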

Single-fiber bidirectional transmission multi-wavelength optical network system capable of realizing partial wavelength reuse

A multi-wavelength optical network that uses single-fibre bidirectional transmission to realize partial wavelength reuse is composed of a link/control module, two bidirectional star couplers and 2(N-1) nodes. Each bidirectional star coupler is connected to the link/control module via a port and an optical fibre, and is connected to (N-1) nodes via one optical fibre per node. Its advantages are a doubled number of nodes, high network throughput, short time delay, and half as many optical fibres.
Owner:SHANGHAI BELL

AFDX (avionics full-duplex switched Ethernet) network switch with time-space separation characteristic

The invention discloses an AFDX (avionics full-duplex switched Ethernet) network switch with a time-space separation characteristic. The switch comprises N input ports, N output ports, 2N DMAs (direct memory access engines), a switch controller and N memories. Each input port is connected to one memory through one DMA, and that memory is further connected to an output port through another DMA; all 2N DMAs are connected to the switch controller. The time-space-separated switch for AFDX systems provided by the invention is easy to realize in software and hardware; while maintaining 100% throughput, the delay of a data packet passing through the switch, the queuing delay of the packet in the switch buffer, and the buffer queue size are all greatly reduced.
Owner:SHANGHAI JIAO TONG UNIV
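An illustrative, software-only model of the time-space separation idea: each input port owns a private memory with per-output FIFOs, and the output side services those memories in turn, so there is no shared central buffer to contend for. The class and method names are assumptions; the patent describes a hardware switch, not this Python model.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PortMemory:
    """One private buffer per input port; per-output FIFOs inside it model the
    space separation (no shared central queue)."""
    queues: dict = field(default_factory=dict)

    def enqueue(self, out_port, frame):
        self.queues.setdefault(out_port, deque()).append(frame)

    def dequeue(self, out_port):
        q = self.queues.get(out_port)
        return q.popleft() if q else None

class TimeSpaceSwitch:
    def __init__(self, n_ports):
        self.memories = [PortMemory() for _ in range(n_ports)]  # one per input port

    def ingress(self, in_port, out_port, frame):
        """Input-side DMA writes the frame into that port's own memory."""
        self.memories[in_port].enqueue(out_port, frame)

    def egress_cycle(self, out_port):
        """Output-side DMA services the per-port memories in turn,
        emulating the time separation enforced by the switch controller."""
        sent = []
        for mem in self.memories:
            frame = mem.dequeue(out_port)
            if frame is not None:
                sent.append(frame)
        return sent

sw = TimeSpaceSwitch(n_ports=4)
sw.ingress(0, 2, "VL-17 frame")
sw.ingress(1, 2, "VL-42 frame")
print(sw.egress_cycle(2))
```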