470 results for "Wire speed" patented technology

In computer networking, wire speed or wirespeed refers to the hypothetical peak physical-layer net bit rate (useful information rate) of a cable (optical fiber or copper wire) combined with a particular digital communication device, interface, or port. For example, the wire speed of Fast Ethernet is 100 Mbit/s; this figure is also known as the peak bit rate, connection speed, useful bit rate, information rate, or digital bandwidth capacity. Wire speed is the data transfer rate that a telecommunications standard provides at a reference point between the physical layer and the data link layer.
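
To make the distinction between wire speed and usable payload throughput concrete, here is a short Python sketch (not drawn from any patent below; the framing constants are the standard IEEE 802.3 values) that estimates how much of Fast Ethernet's 100 Mbit/s wire speed is left for payload at several frame sizes.

WIRE_SPEED_BPS = 100_000_000            # Fast Ethernet wire speed, bits per second
HEADER_FCS_BYTES = 14 + 4               # Ethernet MAC header + frame check sequence
PREAMBLE_IFG_BYTES = 7 + 1 + 12         # preamble + start-of-frame delimiter + inter-frame gap

def payload_throughput(payload_bytes: int) -> float:
    """Payload bits per second achievable at wire speed for a given payload size."""
    bytes_on_wire = payload_bytes + HEADER_FCS_BYTES + PREAMBLE_IFG_BYTES
    return WIRE_SPEED_BPS * payload_bytes / bytes_on_wire

for size in (64, 512, 1500):
    print(f"{size:>5}-byte payload: {payload_throughput(size) / 1e6:.1f} Mbit/s")

For 1500-byte payloads this comes to roughly 97.5 Mbit/s, which is why observed goodput sits slightly below the nominal wire speed even on an otherwise idle link.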

Fibre channel switch

A Fibre Channel switch is presented that tracks the congestion status of destination ports in an XOFF mask at each input. A mapping is maintained between virtual channels on an ISL and the destination ports, allowing changes in the XOFF mask to trigger a primitive to an upstream port that provides virtual channel flow control. The XOFF mask is also used to avoid sending frames to a congested port. Instead, these frames are stored in a single deferred queue and later processed in a manner designed to maintain frame ordering. A routing system is provided that applies multiple routing rules in parallel to perform line-speed routing. The preferred switch fabric is cell-based, with techniques used to manage path maintenance for variable-length frames and to adapt to varying transmission rates in the system. Finally, the switch allows data and microprocessor communication to share the same crossbar network.
Owner:MCDATA SERVICES CORP +1
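
As a rough illustration of the two mechanisms this abstract emphasizes, the per-input XOFF mask and the single deferred queue, the Python sketch below models them in software; the class and method names are assumptions for readability, not the patent's design, and the real switch implements this logic in hardware.

from collections import deque

class InputPort:
    """Illustrative model of one switch input: an XOFF mask plus a deferred queue."""

    def __init__(self, num_dest_ports: int):
        self.xoff_mask = [False] * num_dest_ports   # True = destination port congested
        self.deferred = deque()                     # frames held back, in arrival order

    def set_congested(self, dest: int, congested: bool) -> None:
        # In the patent, this update also triggers a flow-control primitive upstream.
        self.xoff_mask[dest] = congested

    def forward(self, frame, dest: int, transmit) -> None:
        # Defer if the destination is congested, or if earlier frames to the same
        # destination are already deferred (so per-destination ordering is kept).
        if self.xoff_mask[dest] or any(d == dest for _, d in self.deferred):
            self.deferred.append((frame, dest))
        else:
            transmit(frame, dest)

    def drain_deferred(self, transmit) -> None:
        # Re-examine the single deferred queue in order, sending frames whose
        # destination is no longer marked XOFF and keeping the rest in order.
        still_deferred = deque()
        while self.deferred:
            frame, dest = self.deferred.popleft()
            if self.xoff_mask[dest] or any(d == dest for _, d in still_deferred):
                still_deferred.append((frame, dest))
            else:
                transmit(frame, dest)
        self.deferred = still_deferred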

Method and apparatus for wire-speed application layer classification of upstream and downstream data packets

A data packet classifier to classify a plurality of N-bit input tuples, said classifier comprising a hash address generator, a memory, and a comparison unit. The hash address generator generates a plurality of M-bit hash addresses from said plurality of N-bit input tuples, wherein M is significantly smaller than N. The memory has a plurality of memory entries and is addressable by said plurality of M-bit hash addresses, each such address corresponding to a plurality of memory entries, each of said plurality of memory entries capable of storing one of said plurality of N-bit tuples and associated process flow information. The comparison unit determines whether an incoming N-bit tuple can be matched with a stored N-bit tuple. The associated process flow information is output if a match is found, and a new entry is created in the memory for the incoming N-bit tuple if no match is found.
Owner:CISCO SYST ISRAEL
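
The lookup flow described above maps naturally onto a bucketed hash table; the Python sketch below is a software approximation under assumed parameters (M_BITS, blake2b as the hash function), not the patented hardware classifier.

import hashlib

M_BITS = 16                                    # assumed hash-address width (M)
BUCKETS = [[] for _ in range(1 << M_BITS)]     # each bucket holds (tuple, flow_info) pairs

def hash_address(tuple_bytes: bytes) -> int:
    """Map an N-bit tuple to an M-bit memory address (any uniform hash will do here)."""
    digest = hashlib.blake2b(tuple_bytes, digest_size=4).digest()
    return int.from_bytes(digest, "big") & ((1 << M_BITS) - 1)

def classify(tuple_bytes: bytes, default_flow_info=None):
    bucket = BUCKETS[hash_address(tuple_bytes)]
    for stored_tuple, flow_info in bucket:
        if stored_tuple == tuple_bytes:        # exact match against the full N-bit tuple
            return flow_info
    # No match: install a new entry for this tuple, as the abstract describes.
    bucket.append((tuple_bytes, default_flow_info))
    return default_flow_info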

Real-time network monitoring and security

There is provided a hardware device for monitoring and intercepting packetized data traffic at full line rate. In preferred high-bandwidth embodiments, full line rate corresponds to rates that exceed 100 Mbytes/s and in some cases 1000 Mbytes/s. Monitoring and intercepting software alone is not able to operate on such volumes of data in real time. A preferred embodiment comprises: a data delay buffer (208) with multiple delay outputs (216); search engine logic (210) for implementing a set of basic search tools that operate in real time on the data traffic; a programmable gate array (206); an interface (212) for passing data quickly to software sub-systems; and control means for implementing software control of the operation of the search tools. The programmable gate array (206) inserts the data packets into the delay buffer (208), extracts them for searching at the delay outputs, and formats and schedules the operation of the search engine logic (210). One preferred embodiment uses an IP co-processor as the search engine logic.
Owner:BAE SYSTEMS PLC
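
One way to picture the data delay buffer (208) with multiple delay outputs (216) is as a ring buffer with fixed read taps; the Python sketch below models that idea with assumed names (DelayBuffer, tap_offsets) and is not the patent's hardware implementation.

from collections import deque

class DelayBuffer:
    """Illustrative delay buffer: newest packet at index 0, taps read older packets."""

    def __init__(self, depth: int, tap_offsets):
        self.taps = tuple(tap_offsets)        # e.g. (4, 16, 64) packets of delay
        self.buf = deque(maxlen=depth)

    def insert(self, packet) -> None:
        self.buf.appendleft(packet)

    def tap_outputs(self):
        # Yield (offset, packet) for every tap that already has data behind it,
        # so the search logic can scan the stream while it is still delayed.
        for offset in self.taps:
            if offset < len(self.buf):
                yield offset, self.buf[offset]

A match raised at a shallow tap can then trigger interception before the same packet emerges from the deepest delay output.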

Apparatus and method for storage processing through scalable port processors

A system including a storage processing device with an input/output module. The input/output module has port processors to receive and transmit network traffic. The input/output module also has a switch connecting the port processors. Each port processor categorizes the network traffic as fast path network traffic or control path network traffic. The switch routes fast path network traffic from an ingress port processor to a specified egress port processor. The storage processing device also includes a control module to process the control path network traffic received from the ingress port processor. The control module routes processed control path network traffic to the switch for routing to a defined egress port processor. The control module is connected to the input/output module. The input/output module and the control module are configured to interactively support data virtualization, data migration, data journaling, and snapshotting. The distributed control and fast path processors achieve scaling of storage network software. The storage processors provide line-speed processing of storage data using a rich set of storage-optimized hardware acceleration engines. The multi-protocol switching fabric provides a low-latency, protocol-neutral interconnect that integrally links all components with any-to-any non-blocking throughput.
Owner:AVAGO TECH INT SALES PTE LTD
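
The fast-path/control-path split can be pictured with a small software model; the Python sketch below uses assumed names (Frame, FAST_PATH_OPCODES, ingress) and an arbitrary opcode-based rule, not the patent's actual categorization criteria.

from dataclasses import dataclass

@dataclass
class Frame:
    opcode: str           # e.g. "READ", "WRITE", "LOGIN", "ABORT"
    egress_port: int

FAST_PATH_OPCODES = {"READ", "WRITE"}          # assumption: routine I/O takes the fast path

def ingress(frame: Frame, switch_to, control_module):
    """Model of a port processor's categorization step."""
    if frame.opcode in FAST_PATH_OPCODES:
        switch_to(frame.egress_port, frame)          # fast path: straight through the switch
    else:
        processed = control_module(frame)            # control path: processed first,
        switch_to(processed.egress_port, processed)  # then routed back through the switch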

Wire Speed Monitoring and Control of Electronic Financial Transactions

Status: Inactive · Publication: US20140289094A1 · Concepts: faster message processing, speed maximization, finance, computer hardware, traffic capacity
An in-line hardware message filter device inspects incoming securities transactions. The invention is implemented as an integrated circuit (IC) device which contains computer code in the form of on-chip hardware instructions. Data messages comprising orders enter the device in exchange-specific formats. Messages that satisfy pre-determined risk assessment filters are allowed to pass through the device to the appropriate securities exchange for execution. The system functions as a passive device for all legitimate network traffic passing directly or indirectly between a customer's computer and a securities exchange's order-acceptance computer. Advantageously, the invention allows the broker-dealer to check and pass messages or orders as they come through the system without having to store the full message before making a risk assessment decision. The hardware-only nature of the invention serves to maximize the speed of order validation and to perform pre-trade checks in a cut-through or store-and-forward mode.
Owner:DEUTSCHE BANK
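
To illustrate the cut-through style of pre-trade checking described above, the Python sketch below validates order fields as they arrive rather than buffering the whole message; the field names and limits (quantity, notional, MAX_ORDER_QTY) are assumptions for illustration, not the patent's filters.

MAX_ORDER_QTY = 10_000           # assumed per-order quantity limit
MAX_NOTIONAL = 1_000_000.0       # assumed per-order notional limit

def field_ok(name: str, value: str) -> bool:
    """Return False as soon as any risk filter fails for a parsed field."""
    if name == "quantity" and int(value) > MAX_ORDER_QTY:
        return False
    if name == "notional" and float(value) > MAX_NOTIONAL:
        return False
    return True

def cut_through_filter(fields, forward_field, reject):
    # 'fields' yields (name, value) pairs in wire order; each field is forwarded
    # as soon as it clears its checks, so no full-message buffering is needed.
    for name, value in fields:
        if not field_ok(name, value):
            reject()             # a hardware device would typically invalidate the
            return               # partially forwarded frame at this point
        forward_field(name, value)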

Apparatus and method for data virtualization in a storage processing device

A system including a storage processing device with an input/output module. The input/output module has port processors to receive and transmit network traffic. The input/output module also has a switch connecting the port processors. Each port processor categorizes the network traffic as fast path network traffic or control path network traffic. The switch routes fast path network traffic from an ingress port processor to a specified egress port processor. The storage processing device also includes a control module to process the control path network traffic received from the ingress port processor. The control module routes processed control path network traffic to the switch for routing to a defined egress port processor. The control module is connected to the input/output module. The input/output module and the control module are configured to interactively support data virtualization, data migration, data journaling, and snapshotting. The distributed control and fast path processors achieve scaling of storage network software. The storage processors provide line-speed processing of storage data using a rich set of storage-optimized hardware acceleration engines. The multi-protocol switching fabric provides a low-latency, protocol-neutral interconnect that integrally links all components with any-to-any non-blocking throughput.
Owner:AVAGO TECH INT SALES PTE LTD
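
The abstract repeats the preceding entry, but this patent's title centers on data virtualization, so the Python sketch below illustrates the general idea of a virtual-to-physical extent mapping that a fast-path processor could consult per I/O; the map layout and names are assumptions, not taken from the patent.

VIRTUAL_EXTENT_SIZE = 1 << 20    # assumed 1 MiB extents

# virtual extent index -> (physical device id, physical extent index)
extent_map = {
    0: ("disk-A", 17),
    1: ("disk-B", 4),
    2: ("disk-A", 18),
}

def resolve(virtual_lba: int, block_size: int = 512):
    """Translate a virtual logical block address to (device, physical LBA)."""
    byte_offset = virtual_lba * block_size
    extent, offset_in_extent = divmod(byte_offset, VIRTUAL_EXTENT_SIZE)
    device, physical_extent = extent_map[extent]
    physical_lba = (physical_extent * VIRTUAL_EXTENT_SIZE + offset_in_extent) // block_size
    return device, physical_lba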

High-speed traffic measurement and analysis methodologies and protocols

Status: Inactive · Publication: US20050220023A1 · Concepts: minimal communication overhead, more bandwidth is required, error prevention, frequency-division multiplex details, nodal, wire speed
We formulate the network-wide traffic measurement/analysis problem as a series of set-cardinality-determination (SCD) problems. By leveraging recent advances in probabilistic distinct sample counting techniques, the set cardinalities, and thus the network-wide traffic measurements of interest, can be computed in a distributed manner via the exchange of extremely lightweight traffic digests (TDs) amongst the network nodes, i.e., the routers. A TD for N packets requires only O(loglog N) bits of memory storage. The computation of such an O(loglog N)-sized TD is also amenable to efficient hardware implementation at wire speeds of 10 Gbps and beyond. Given the small size of the TDs, it is possible to distribute nodal TDs to all routers within a domain by piggybacking them as opaque data objects inside existing control messages, such as OSPF link-state packets (LSPs) or I-BGP control messages. Once the required TDs are received, a router can estimate the traffic measurements of interest for each of its local links by solving a series of set-cardinality-determination problems. The traffic measurements of interest are typically in the form of per-link, per-traffic-aggregate packet counts (or flow counts), where an aggregate is defined as the group of packets sharing the same originating and/or destination nodes (or links) and/or some intermediate nodes (or links). The local measurement results are then distributed within the domain so that each router can construct a network-wide view of the routes/flow patterns of different traffic commodities, where a commodity is defined as a group of packets sharing the same origination and/or termination nodes or links. After the initial network-wide traffic measurements are received, each router can further reduce the associated measurement/estimation errors by locally conducting a minimum square error (MSE) optimization based on network-wide commodity-flow conservation constraints.
Owner:LUCENT TECH INC
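
The traffic-digest idea can be illustrated with a simplified Flajolet-Martin style distinct counter in Python; this stand-in uses O(log N) bits rather than the O(loglog N) digests the abstract cites, but it shares the property that matters here: digests from different routers merge with a bitwise OR, so union cardinalities, and via inclusion-exclusion intersection cardinalities, can be estimated without exchanging packet lists.

import hashlib

PHI = 0.77351                      # standard Flajolet-Martin correction constant
DIGEST_BITS = 32

def _rank(h: int) -> int:
    """Position of the least significant 1 bit of h (0-based)."""
    return (h & -h).bit_length() - 1 if h else DIGEST_BITS - 1

def make_digest(packet_ids) -> int:
    """Build a traffic digest (a small bitmap) over a stream of packet identifiers."""
    bitmap = 0
    for pid in packet_ids:
        h = int.from_bytes(hashlib.blake2b(str(pid).encode(), digest_size=4).digest(), "big")
        bitmap |= 1 << _rank(h)
    return bitmap

def estimate(bitmap: int) -> float:
    """Estimate the number of distinct packets summarized by a digest."""
    r = 0
    while bitmap >> r & 1:          # lowest zero bit position
        r += 1
    return (2 ** r) / PHI

# Digests from two routers merge with OR; inclusion-exclusion then estimates the
# number of packets seen by both routers. A single bitmap is a noisy estimator;
# practical schemes average many such registers.
d_a, d_b = make_digest(range(0, 6000)), make_digest(range(3000, 9000))
common = estimate(d_a) + estimate(d_b) - estimate(d_a | d_b)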