296 results about "Effective bandwidth" patented technology

Method and system to increase the throughput of a communications system that uses an electrical power distribution system as a communications pathway

A method and system to increase the throughput of a communications system that uses an electrical power distribution system as a communications pathway determines the phase of the power-distribution power cycle and compares this phase to predetermined regions of the power cycle. If the power cycle is within a predetermined region, a particular communication scheme is used for transmitting and receiving information. The power cycle can have two or more predetermined regions. Optionally, the throughput for any or all regions can be measured so that the associated communication scheme can be modified if the throughput falls outside a predetermined range.
Owner:AMPERION
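
The region-based scheme selection lends itself to a short sketch. The example below assumes a 60 Hz mains frequency, two illustrative regions, and placeholder scheme names; the patent only requires that the power cycle have two or more predetermined regions, each tied to a particular communication scheme.

```python
# Sketch: select a communication scheme based on where the AC power cycle
# currently is. Region boundaries and scheme names are assumed for
# illustration only.

def phase_of_power_cycle(t, frequency_hz=60.0):
    """Return the phase of the power cycle at time t (seconds), in degrees [0, 360)."""
    return (t * frequency_hz * 360.0) % 360.0

# Hypothetical regions: (start_deg, end_deg) -> scheme identifier.
REGIONS = {
    (0.0, 180.0): "scheme_low_noise",     # e.g. higher-rate modulation
    (180.0, 360.0): "scheme_high_noise",  # e.g. more robust, lower-rate modulation
}

def select_scheme(t):
    """Pick the transmit/receive scheme for the current point in the cycle."""
    phase = phase_of_power_cycle(t)
    for (start, end), scheme in REGIONS.items():
        if start <= phase < end:
            return scheme
    return "scheme_default"

if __name__ == "__main__":
    for t in (0.001, 0.010):
        print(t, select_scheme(t))
```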

Dynamic bandwidth allocation and service differentiation for broadband passive optical networks

A dynamic upstream bandwidth allocation scheme, limited sharing with traffic prediction (LSTP), is disclosed to improve the bandwidth efficiency of upstream transmission over passive optical networks (PONs). LSTP adopts the PON MAC control messages and dynamically allocates bandwidth according to the online traffic load. The optical network unit (ONU) bandwidth requirement includes the already buffered data and a prediction of the incoming data, thus reducing the frame delay and alleviating data loss. ONUs are served by the optical line terminal (OLT) in a fixed order in LSTP to facilitate the traffic prediction. Each ONU classifies its local traffic into three classes with descending priorities: expedited forwarding (EF), assured forwarding (AF), and best effort (BE). Data with higher priority replace data with lower priority when the buffer is full. To alleviate uncontrolled delay and unfair dropping of lower-priority data, priority-based scheduling is employed to deliver the buffered data in a particular transmission timeslot. The bandwidth allocation incorporates the service level agreements (SLAs) and the online traffic dynamics. The basic LSTP scheme is extended to serve the classified network traffic.
Owner:NEW JERSEY INSTITUTE OF TECHNOLOGY
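
A minimal sketch of the ONU-side behaviour described above, assuming a simple moving-average traffic predictor and byte-count buffers; the abstract specifies the EF/AF/BE classes and the buffered-plus-predicted request, not the prediction model or data structures used here.

```python
from collections import deque

PRIORITIES = ("EF", "AF", "BE")  # descending priority

class ONU:
    def __init__(self, buffer_limit_bytes, history=4):
        self.buffers = {cls: deque() for cls in PRIORITIES}   # frame sizes per class
        self.buffer_limit = buffer_limit_bytes
        self.arrival_history = deque(maxlen=history)          # bytes arrived per cycle

    def buffered_bytes(self):
        return sum(sum(q) for q in self.buffers.values())

    def enqueue(self, cls, size):
        """Queue a frame; when the buffer is full, lower-priority data is displaced first."""
        while self.buffered_bytes() + size > self.buffer_limit:
            victim = self._lowest_nonempty_class()
            if victim is None or PRIORITIES.index(victim) <= PRIORITIES.index(cls):
                return False  # nothing of lower priority to displace
            self.buffers[victim].pop()
        self.buffers[cls].append(size)
        return True

    def _lowest_nonempty_class(self):
        for cls in reversed(PRIORITIES):
            if self.buffers[cls]:
                return cls
        return None

    def record_cycle_arrivals(self, arrived_bytes):
        self.arrival_history.append(arrived_bytes)

    def bandwidth_request(self):
        """Buffered data plus predicted arrivals (moving average of recent cycles)."""
        predicted = (sum(self.arrival_history) / len(self.arrival_history)
                     if self.arrival_history else 0)
        return self.buffered_bytes() + int(predicted)
```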

Method and system for providing a secure peer-to-peer file delivery network

A method and system for electronically delivering files over a public network is disclosed. The network includes a plurality of computers including at least one server node and multiple client nodes. In a first aspect of the present invention, the method and system enable secure and reliable peer-to-peer file sharing between two client nodes. First, a digital fingerprint is generated and associated with a file in response to the file being selected for publication on a first client node. An entry for the file is then added to a searchable index of shared files on the server node, and the fingerprint for the file is also stored on the server. In response to a second client selecting the file from the search list on the server node, the file is automatically transferred from the first client node directly to the second client node. The second client node then generates a new fingerprint for the file and compares the new fingerprint with the fingerprint from the server node, thereby verifying the authenticity of the file and its publisher. In a second aspect of the present invention, the method and system also enable subscription-based decentralized file downloads to the client nodes. First, the client nodes are allowed to subscribe with the server node to periodically receive copies of one of the files. To provide a current subscribing client node with the file, the geographically closest client node containing the file is located, and the file is transferred from the closest node directly to the current subscribing node, thereby efficiently utilizing bandwidth.
Owner:QURIO HLDG
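
A minimal sketch of the fingerprint verification step on the receiving client, assuming the digital fingerprint is a cryptographic hash of the file contents (SHA-256 is used here for illustration; the abstract does not name an algorithm).

```python
import hashlib

def fingerprint(path, chunk_size=1 << 20):
    """Hash the file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(local_path, fingerprint_from_server):
    """Recompute the fingerprint on the receiving node and compare it with the
    value the server stored when the file was published."""
    return fingerprint(local_path) == fingerprint_from_server
```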

Data center network architecture

Data center network architectures are disclosed that can reduce the cost and complexity of data center networks. The data center network architectures can employ optical network topologies and optical nodes to efficiently allocate bandwidth within the data center networks, while reducing the physical interconnectivity requirements of the data center networks. The data center network architectures also allow computing resources within data center networks to be controlled and provisioned based at least in part on a combined network topology and application component topology, thereby enhancing overall application program performance.
Owner:HEWLETT-PACKARD ENTERPRISE DEV LP

Method and system for adaptively obtaining bandwidth allocation requests

A method and apparatus for adaptively obtaining bandwidth requests in a broadband wireless communication system. The method and apparatus include dynamically varying combinations of techniques that enable a plurality of users to efficiently request bandwidth from a shared base station. A user may “piggyback” a new bandwidth request upon, or set a “poll-me bit” within, presently allocated bandwidth. A base station may poll users, individually or in groups, by allocating unrequested bandwidth for new requests. Polling may respond to a “poll-me bit,” and/or it may be adaptively periodic at a rate based on communication status parameters, such as recent communication activity and connection QoS levels. Group polling permits a possibility of collisions. Polling policies may be established for dynamically varying user groups, or may be determined for each user. Dynamic selection of appropriate polling techniques makes use of the efficiency benefits associated with each technique.
Owner:WI LAN INC
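
A minimal sketch of how a base station might choose among the request mechanisms named above (piggybacking, the poll-me bit, individual polling, group polling). The thresholds, field names, and QoS encoding are assumptions; the abstract states only that polling adapts to communication status such as recent activity and connection QoS.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    has_allocation: bool         # user currently holds granted bandwidth
    poll_me_bit: bool            # user set the poll-me bit in a recent burst
    recent_activity: float       # e.g. bytes sent in the last window
    qos_latency_sensitive: bool  # connection QoS requires low request latency

def choose_request_mechanism(user: UserState) -> str:
    if user.has_allocation:
        # New requests ride on already-granted bandwidth at no extra cost.
        return "piggyback"
    if user.poll_me_bit:
        # Explicit signal: allocate unrequested bandwidth for an individual poll.
        return "individual_poll"
    if user.qos_latency_sensitive or user.recent_activity > 0:
        # Active or delay-sensitive connections get periodic individual polls,
        # at a rate scaled to how busy they have been.
        return "individual_poll_periodic"
    # Idle, delay-tolerant users share contention slots; collisions are possible.
    return "group_poll"
```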

DMA engine for protocol processing

Inactive | US20060206635A1 | Determinism and uniformity in operation | Predictable performance gain | Electric digital data processing | Data transmission | Protocol processing
A DMA engine includes, in part, a DMA controller, an associative memory buffer, a request FIFO accepting data transfer requests from a programmable engine, such as a CPU, and a response FIFO that returns the completion status of the transfer requests to the CPU. Each request includes, in part, a target external memory address from which data is to be loaded or to which data is to be stored; a block size, specifying the amount of data to be transferred; and context information. The associative buffer holds data fetched from the external memory and provides the data to the CPUs for processing. Loading into and storing from the associative buffer is done under the control of the DMA controller. When a request to fetch data from the external memory is processed, the DMA controller allocates a block within the associative buffer and loads the data into the allocated block.
Owner:PMC-SIERRA
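
A minimal sketch of the request/response flow described above, with a dictionary standing in for external memory and for the associative buffer; the allocation policy and field names are assumptions for illustration.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class DmaRequest:
    address: int      # target external-memory address
    block_size: int   # amount of data to transfer
    context: object   # opaque context returned with the completion status

class DmaEngine:
    def __init__(self, external_memory):
        self.external_memory = external_memory   # address -> bytes
        self.request_fifo = deque()
        self.response_fifo = deque()
        self.assoc_buffer = {}                   # address -> cached block

    def submit(self, request: DmaRequest):
        """CPU side: post a transfer request to the request FIFO."""
        self.request_fifo.append(request)

    def process_one(self):
        """Controller step: pop a request, allocate a buffer block, load the data,
        and post the completion status to the response FIFO."""
        if not self.request_fifo:
            return
        req = self.request_fifo.popleft()
        data = self.external_memory.get(req.address, b"\x00" * req.block_size)
        self.assoc_buffer[req.address] = data[:req.block_size]  # allocated block
        self.response_fifo.append((req.context, "done"))

    def read_block(self, address):
        """CPU-side lookup into the associative buffer after completion."""
        return self.assoc_buffer.get(address)
```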

Method and system for assuring near uniform capacity and quality of channels in cells of wireless communications systems having cellular architectures

Inactive | US6011970A | Maximized cellular concept of frequency reuse | Reduce power output | Radio/inductive link selection arrangements | Transmission monitoring | Cellular architecture | Signal-to-noise ratio (imaging)
A method and system for use with wireless communication systems having a cellular architecture with at least a first and a second cell. The method and system ensure near uniform capacity and quality of channels within the second cell via the following steps. The noise signal power in unused data channels within the second cell is monitored. When a request for channel access is received, a determination is made as to whether the request is a request for handoff from the first cell into the second cell. If the request is not a request for handoff, a determination is made as to whether idle channels exist to satisfy the request. If either the request is a request for handoff, or the request is not a request for handoff and idle channels exist to satisfy it, the received signal power of the mobile subscriber unit making the request is measured. One of the unused channels in the second cell is then preferentially assigned to the mobile subscriber unit, the preference being to assign a noisy channel, provided that the signal-to-noise ratio calculated from the measured received signal power and the monitored noise signal power of that channel meets or exceeds a required signal-to-noise ratio.
Owner:NORTEL NETWORKS LTD
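
A minimal sketch of the channel-assignment step described above. SNR is computed from the measured received signal power of the requesting mobile and the monitored noise power on each unused channel; the preference toward the noisiest channel that still meets the required SNR is one reading of the abstract, not a confirmed implementation detail.

```python
import math

def snr_db(signal_power_w, noise_power_w):
    """Signal-to-noise ratio in dB from powers expressed in watts."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)

def assign_channel(unused_channel_noise_w, received_signal_power_w, required_snr_db):
    """unused_channel_noise_w: {channel_id: monitored noise power in watts}.
    Returns the channel to assign, or None if no channel meets the requirement."""
    # Consider the noisiest channels first so quieter channels stay available.
    for channel, noise in sorted(unused_channel_noise_w.items(),
                                 key=lambda kv: kv[1], reverse=True):
        if snr_db(received_signal_power_w, noise) >= required_snr_db:
            return channel
    return None
```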

Method of maximizing use of bandwidth for communicating with mobile platforms

Active | US7187690B2 | Maximize bandwidth usage | Maximize bandwidth for communicating | Time-division multiplex | Radio transmission | Data content | Transfer mode
A system and method for switching between transmission modes provides more efficient use of available bandwidth. A content delivery system determines, based upon a predetermined limit, whether to broadcast content to a plurality of mobile platforms or unicast the data content via a point-to-point communication link. Acknowledgment signals from the mobile platforms are used to determine if the predetermined limit has been exceeded. A specific number or percentage defines the predetermined limit within a specified time period. The number of acknowledgment signals received is compared to the limit to determine if an exceedance condition has occurred.
Owner:THE BOEING CO
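
A minimal sketch of the mode decision described above: acknowledgment signals received within a time window are counted and compared against the predetermined limit. The limit value, the window length, and the broadcast-on-exceedance rule are assumptions consistent with the abstract.

```python
import time

class DeliveryModeSelector:
    def __init__(self, ack_limit, window_seconds):
        self.ack_limit = ack_limit            # predetermined limit (count)
        self.window_seconds = window_seconds  # specified time period
        self.ack_times = []

    def record_ack(self, t=None):
        """Record an acknowledgment signal from a mobile platform."""
        self.ack_times.append(time.monotonic() if t is None else t)

    def select_mode(self, now=None):
        """Broadcast while demand exceeds the limit within the window;
        otherwise unicast over a point-to-point link."""
        now = time.monotonic() if now is None else now
        self.ack_times = [t for t in self.ack_times if now - t <= self.window_seconds]
        return "broadcast" if len(self.ack_times) > self.ack_limit else "unicast"
```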

Packet prioritization and associated bandwidth and buffer management techniques for audio over IP

The present invention is directed to voice communication devices in which an audio stream is divided into a sequence of individual packets, each of which is routed via pathways that can vary depending on the availability of network resources. All embodiments of the invention rely on an acoustic prioritization agent that assigns a priority value to the packets. The priority value is based on factors such as whether the packet contains voice activity and the degree of acoustic similarity between this packet and adjacent packets in the sequence. A confidence level, associated with the priority value, may also be assigned. In one embodiment, network congestion is reduced by deliberately failing to transmit packets that are judged to be acoustically similar to adjacent packets; the expectation is that, under these circumstances, traditional packet loss concealment algorithms in the receiving device will construct an acceptably accurate replica of the missing packet. In another embodiment, the receiving device can reduce the number of packets stored in its jitter buffer, and therefore the latency of the speech signal, by selectively deleting one or more packets within sustained silences or non-varying speech events. In both embodiments, the ability of the system to drop appropriate packets may be enhanced by taking into account the confidence levels associated with the priority assessments.
Owner:AVAYA INC
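
A minimal sketch of the acoustic prioritization agent described above. The similarity measure, thresholds, and drop rule are assumptions; the abstract specifies only that priority reflects voice activity and acoustic similarity to adjacent packets, with an optional confidence level.

```python
from dataclasses import dataclass

@dataclass
class AudioPacket:
    seq: int
    has_voice: bool
    similarity_to_prev: float  # 0.0 (very different) .. 1.0 (nearly identical)

def prioritize(packet: AudioPacket):
    """Return (priority, confidence); higher priority = more important to deliver."""
    if not packet.has_voice:
        return 0.1, 0.9  # silence: low priority, high confidence it can be dropped
    # Packets that closely resemble a neighbour are less critical, because
    # receiver-side packet loss concealment can reconstruct them.
    priority = 1.0 - packet.similarity_to_prev
    # Confidence is highest when similarity is clearly high or clearly low.
    confidence = abs(packet.similarity_to_prev - 0.5) * 2.0
    return priority, confidence

def should_drop(packet: AudioPacket, congested: bool,
                priority_threshold=0.3, confidence_threshold=0.7):
    """Under congestion, skip transmitting packets the receiver can likely conceal."""
    priority, confidence = prioritize(packet)
    return congested and priority < priority_threshold and confidence >= confidence_threshold
```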