1819 results about "Goodput" patented technology

In computer networks, goodput (a portmanteau of good and throughput) is the application-level throughput of a communication, i.e., the number of useful information bits delivered by the network to a certain destination per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. The time interval considered runs from when the first bit of the first packet is sent (or delivered) until the last bit of the last packet is delivered.
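The definition above amounts to a simple ratio. A minimal sketch, with hypothetical example numbers (the function name and values are illustrative, not from any source):

```python
def goodput_bps(useful_payload_bytes: int, transfer_time_s: float) -> float:
    """Application-level throughput: useful information bits delivered
    per second. Headers and retransmitted packets are excluded from
    the byte count by the caller."""
    return useful_payload_bytes * 8 / transfer_time_s

# Example: a 1 MiB file delivered in 2.0 s, regardless of how many
# header bytes or retransmissions the transfer actually required.
print(goodput_bps(1_048_576, 2.0))  # 4194304.0 bits/s
```

Note that the same transfer measured including headers and retransmissions would yield the (higher) raw throughput, which is why goodput is always at most the link throughput.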

Novel massively parallel supercomputer

A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet-communication throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm to optimize processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance and may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplanes and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.
Owner:INT BUSINESS MASCH CORP

Ultrascalable petaflop parallel supercomputer

Inactive · US20090006808A1 · Massive level of scalability · Unprecedented level of scalability · Program control using stored programs · Architecture with multiple processing units · Message passing · Packet communication
A novel massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. Novel use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
Owner:IBM CORP

Antenna/Beam Selection Training in MIMO Wireless LANs with Different Sounding Frames

A method selects antennas in a multiple-input, multiple-output (MIMO) wireless local area network (WLAN) that includes a plurality of stations, and each station includes a set of antennas. Plural consecutive packets, received at a station, include plural consecutive sounding packets. Each sounding packet corresponds to a different subset of the set of antennas, and at least one of the plural consecutive packets includes a high throughput (HT) control field including a signal to initiate antenna selection and a number N indicative of a number of sounding packets which follow the at least one packet including the HT control field and which are to be used for antenna selection. A channel matrix is estimated based on a characteristic of the channel as indicated by the received N sounding packets, and a subset of antennas is selected according to the channel matrix. Station and computer program product embodiments include similar features.
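The selection step described above can be sketched as follows. The abstract does not fix the quality metric applied to the estimated channel matrix, so this sketch uses the Frobenius norm as an illustrative stand-in; the function names and example values are hypothetical:

```python
def frob_norm_sq(matrix):
    """Sum of squared entries of a channel matrix: a simple
    channel-quality proxy (illustrative choice of metric)."""
    return sum(x * x for row in matrix for x in row)

def select_antenna_subset(channel_estimates):
    """Pick the antenna subset whose channel matrix, estimated from
    that subset's sounding packet, scores highest on the metric."""
    return max(channel_estimates, key=lambda s: frob_norm_sq(channel_estimates[s]))

# Hypothetical estimates from N = 2 sounding packets, one per
# candidate subset of a 4-antenna station.
estimates = {
    (0, 1): [[0.4, 0.1], [0.2, 0.3]],
    (2, 3): [[0.9, 0.2], [0.1, 0.8]],
}
print(select_antenna_subset(estimates))  # (2, 3)
```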
Owner:FREEDOM PATENTS LLC

System and method of base station performance enhancement using coordinated antenna array

In a wireless system, a group of base stations (BTSs) can be managed by a centralized network management entity or can self-organize by communicating with each other via wireless air interfaces or wired interfaces. Femtocell systems are one such example. When the BTSs use the same frequency for transmitting and receiving with relatively large transmit power, and when they are close to each other, system performance and user throughput or QoS (Quality of Service) degrade due to interference between the BTSs and among the users. Smart antenna techniques can be used in a coordinated way among a group of BTSs, such as Femtocells, to avoid, reduce, or manage interference and thereby achieve performance enhancements such as higher system throughput or better QoS for individual applications.
Owner:AIRHOP COMMUNICATIONS

Congestion control for internet protocol storage

A network system for actively controlling congestion to optimize throughput is provided. The network system includes a sending host which is configured to send packet traffic at a set rate. The network system also includes a sending switch for receiving the packet traffic. The sending switch includes an input buffer for receiving the packet traffic at the set rate where the input buffer is actively monitored to ascertain a capacity level. The sending switch also includes code for setting a probability factor that is correlated to the capacity level where the probability factor increases as the capacity level increases and decreases as the capacity level decreases. The sending switch also has code for randomly generating a value where the value is indicative of whether packets being sent by the sending switch are to be marked with a congestion indicator. The sending switch also includes transmit code that forwards the packet traffic out of the sending switch where the packet traffic includes one of marked packets and unmarked packets. The network system also has a receiving end which is the recipient of the packet traffic and also generates acknowledgment packets back to the sending host where the acknowledgment packets are marked with the congestion indicator when receiving marked packets and are not marked with the congestion indicator when receiving unmarked packets. In another example, the sending host is configured to monitor the acknowledgment packets and to adjust the set rate based on whether the acknowledgment packets are marked with the congestion indicator. In a further example, the set rate is decreased every time one of the marked packets is detected and increased when no marked packets are detected per round trip time (PRTT).
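The marking and rate-adjustment behavior described above resembles ECN-style random marking. A minimal sketch under stated assumptions (the linear probability mapping and the halving/additive-increase values are hypothetical choices, not taken from the patent):

```python
import random

def mark_probability(buffer_fill: float) -> float:
    """Probability factor correlated with the input buffer's capacity
    level: it rises as the buffer fills and falls as it drains.
    (A linear mapping is an illustrative choice.)"""
    return min(max(buffer_fill, 0.0), 1.0)

def forward_packet(packet: dict, buffer_fill: float, rng=random.random) -> dict:
    """Randomly decide whether to set the congestion indicator on an
    outgoing packet, then forward it."""
    packet = dict(packet)
    packet["congestion_marked"] = rng() < mark_probability(buffer_fill)
    return packet

def adjust_rate(rate: float, saw_marked_ack: bool) -> float:
    """Sender-side reaction per round trip time: decrease the set rate
    when a marked acknowledgment is seen, increase it otherwise.
    (The specific decrease/increase amounts are hypothetical.)"""
    return rate * 0.5 if saw_marked_ack else rate + 1.0

# An empty buffer never marks a packet; a full buffer always does.
print(forward_packet({"seq": 1}, buffer_fill=0.0)["congestion_marked"])  # False
```

Because the receiver echoes the mark in its acknowledgments, the sender learns about congestion a round trip later, which is why the rate adjustment above is applied per round trip time.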
Owner:ADAPTEC +1

Flexible DMA engine for packet header modification

A pipelined linecard architecture for receiving, modifying, switching, buffering, queuing and dequeuing packets for transmission in a communications network. The linecard has two paths: the receive path, which carries packets into the switch device from the network, and the transmit path, which carries packets from the switch to the network. In the receive path, received packets are processed and switched in an asynchronous, multi-stage pipeline utilizing programmable data structures for fast table lookup and linked list traversal. The pipelined switch operates on several packets in parallel while determining each packet's routing destination. Once that determination is made, each packet is modified to contain new routing information as well as additional header data to help speed it through the switch. Each packet is then buffered and enqueued for transmission over the switching fabric to the linecard attached to the proper destination port. The destination linecard may be the same physical linecard as that receiving the inbound packet or a different physical linecard. The transmit path consists of a buffer/queuing circuit similar to that used in the receive path. Both enqueuing and dequeuing of packets are accomplished using CoS-based decision-making apparatus and congestion-avoidance and dequeue-management hardware. The architecture of the present invention has the advantages of high throughput and the ability to rapidly implement new features and capabilities.
Owner:CISCO TECH INC

Selection of routing paths based upon path qualities of a wireless routes within a wireless mesh network

The invention includes an apparatus and method for determining an optimal route based upon the path quality of routes to an access node of a wireless mesh network. The method includes receiving routing packets at the access node through at least one wireless route, each routing packet including route information that identifies the wireless route of the routing packet. A success ratio, the number of successfully received routing packets versus the number of transmitted routing packets, is determined over a period of time T1 for each wireless route. The wireless route having the greatest success ratio is first selected, as are other wireless routes whose success ratios fall within a predetermined amount of the greatest success ratio. Routing packets are then received at the access node through the first selected routes, each again including route information that identifies its wireless route. A long success ratio of successfully received to transmitted routing packets is determined over a period of time T2, wherein T2 is substantially greater than T1, for each first selected route. The wireless route having the greatest long success ratio is second selected, as are other wireless routes whose long success ratios fall within a second predetermined amount of the greatest long success ratio. The second selected routes having the greatest throughput are third selected, and an optimal wireless route is determined based upon the third selected routes.
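The staged selection can be sketched as below. The margin values, function names, and example ratios are hypothetical; the patent leaves the "predetermined amounts" unspecified:

```python
def select_routes(success_ratios, margin):
    """One selection stage: keep the route with the greatest success
    ratio plus every route within `margin` of it."""
    best = max(success_ratios.values())
    return {r for r, s in success_ratios.items() if s >= best - margin}

def pick_optimal(ratios_t1, ratios_t2, throughput, m1=0.1, m2=0.05):
    """Three-stage sketch: short-window ratios over T1, long-window
    ratios over T2 on the survivors, then greatest throughput.
    Margins m1 and m2 are hypothetical values."""
    first = select_routes(ratios_t1, m1)
    second = select_routes({r: ratios_t2[r] for r in first}, m2)
    return max(second, key=lambda r: throughput[r])

# Hypothetical measurements for three candidate routes A, B, C.
routes_t1 = {"A": 0.95, "B": 0.90, "C": 0.60}   # over T1
routes_t2 = {"A": 0.92, "B": 0.91, "C": 0.70}   # over T2 >> T1
tput = {"A": 4.0, "B": 6.0, "C": 2.0}
print(pick_optimal(routes_t1, routes_t2, tput))  # B
```

In this example C is eliminated in the first stage, A and B survive both ratio stages, and B wins on throughput.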
Owner:ABB POWER GRIDS SWITZERLAND AG

Method and device for configuring DMRS (demodulation reference signal) scrambling code sequence

Inactive · CN102340382A · Guaranteed signal demodulation performance · Reduce signal demodulation performance · Wireless communication · Error prevention/detection by diversity reception · Multiplexing · MIMO transmission
The invention relates to the field of communication and discloses a method and device for configuring a DMRS (demodulation reference signal) scrambling code sequence, so as to reduce signal interference between UEs (user equipments) adopting DMRS multiplexing and guarantee the signal demodulation performance of the UEs. In the method, the base station, according to the current application scenario, indicates through signaling the initial-value configuration of the downlink DMRS scrambling code sequence used by a single UE or a UE group in the current transmission, so that inter-cell or intra-cell UEs can flexibly perform MU-MIMO (multi-user multiple-input multiple-output) multiplexing, thereby reducing mutual interference between the multiplexed UEs and guaranteeing their signal demodulation performance. Meanwhile, the overhead of the PDCCH (physical downlink control channel) is reduced through joint coding; the same DMRS scrambling code sequence can be flexibly configured for UEs of different cells, and different DMRS scrambling code sequences can be flexibly configured for UEs of the same cell, thereby supporting high-rank MU-MIMO transmission and effectively increasing system throughput.
Owner:CHINA ACAD OF TELECOMM TECH

Automatic pipelining of noc channels to meet timing and/or performance

Systems and methods for automatically generating a Network on Chip (NoC) interconnect architecture with pipeline stages are described. The present disclosure includes example implementations directed to automatically determining the number and placement of pipeline stages for each channel in the NoC. Example implementations may also adjust the buffer at one or more routers based on the pipeline stages and configure throughput for virtual channels.
Owner:INTEL CORP

Method and apparatus for a parallel data storage and processing server

The present invention concerns a parallel multiprocessor-multidisk storage server which offers low delays and high throughput when accessing and processing one-dimensional and multi-dimensional file data such as pixmap images, text, sound or graphics. The invented parallel multiprocessor-multidisk storage server may be used as a server offering its services to a host computer, to client stations residing on a network or to a parallel host system to which it is connected. The parallel storage server comprises (a) a server interface processor interfacing the storage system with a host computer, with a network or with a parallel computing system; (b) an array of disk nodes, each disk node being composed of one processor electrically connected to at least one disk and (c) an interconnection network for connecting the server interface processor with the array of disk nodes. Multi-dimensional data files such as 3-d images (for example tomographic images) and 2-d images (for example scanned aerial photographs) are segmented into 3-d and 2-d file extents respectively, the extents being striped onto different disks. One-dimensional files are segmented into 1-d file extents. File extents of a given file may have a fixed or a variable size. The storage server is based on a parallel image and multiple media file storage system. This file storage system includes a file server process which receives from the high level storage server process file creation, file opening, file closing and file deleting commands. It further includes extent serving processes running on disk node processors, which receive from the file server process commands to update directory entries and to open existing files, and from the storage interface server process commands to read data from a file or to write data into a file.
It also includes operation processes responsible for applying in parallel geometric transformations and image processing operations to data read from the disks, and a redundancy file creation process responsible for creating redundant parity extent files for selected data files.
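The extent striping described above can be illustrated with a simple placement policy. Round-robin assignment is an illustrative choice here, not the patent's specified method, and the function name is hypothetical:

```python
def stripe_extents(num_extents: int, num_disks: int) -> dict:
    """Assign consecutive file extents to disk nodes round-robin, so
    that a multi-extent read or write engages many disks in parallel."""
    placement = {}
    for extent in range(num_extents):
        placement.setdefault(extent % num_disks, []).append(extent)
    return placement

# Six 2-d image extents striped across three disk nodes.
print(stripe_extents(6, 3))  # {0: [0, 3], 1: [1, 4], 2: [2, 5]}
```

Striping is what converts the array of disk nodes into aggregate bandwidth: each disk serves only a fraction of the extents of any one file.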
Owner:AXS TECH

Bandwidth efficient source tracing (BEST) routing protocol for wireless networks

A bandwidth efficient routing protocol for wireless ad-hoc networks. This protocol can be used in ad-hoc networks because it considerably reduces control overhead, thus increasing available bandwidth and conserving power at mobile stations. It also gives very good results in terms of the throughput seen by the user. The protocol is a table-driven distance-vector routing protocol that uses the same constraints used in on-demand routing protocols, i.e., paths are used as long as they are valid and updates are only sent when a path becomes invalid. The paths used by neighbors are maintained and this allows the design of a distance-vector protocol with non-optimum routing and event-driven updates, resulting in reduced control overhead.
Owner:RGT UNIV OF CALIFORNIA