37 results for patented technology related to "Latency"

Apparatuses, Systems, and Methods for Apparatus Operation and Remote Sensing

A method and system for controlling an apparatus. The method includes receiving data indicative of an actual state of the apparatus; defining a first viewpoint relative to at least one of the environment and the apparatus; determining a first predicted state of the apparatus at time T and a first predicted state of the environment at time T; producing a first virtualized view from the first viewpoint; and sending a first control signal to the apparatus after producing the first virtualized view. The method further includes defining a second viewpoint relative to at least one of the apparatus and the environment; determining a second predicted state of the apparatus and of the environment at time T + delta T; producing a second virtualized view from the second viewpoint; sending a second control signal to the apparatus after producing the second virtualized view; and changing the actual state of the apparatus based on the first control signal.
Owner:CARNEGIE MELLON UNIV
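
The abstract above describes a predict-then-render control loop. The following is a minimal sketch under simplifying assumptions (a 1-D constant-velocity model); the names `State`, `predict_state`, `render_virtualized_view`, and `control_step` are hypothetical and only illustrate how a predicted state at time T can drive a virtualized view before a control signal is sent.

```python
from dataclasses import dataclass

@dataclass
class State:
    position: float   # 1-D position (illustrative toy model, not the patented system)
    velocity: float

def predict_state(actual: State, dt: float) -> State:
    """Predict the state dt seconds ahead using a constant-velocity model."""
    return State(position=actual.position + actual.velocity * dt,
                 velocity=actual.velocity)

def render_virtualized_view(viewpoint: float, apparatus: State, environment: State) -> str:
    """Produce a textual 'virtualized view' of the predicted apparatus and environment."""
    return (f"view@{viewpoint:+.1f}: apparatus at {apparatus.position:.2f}, "
            f"environment at {environment.position:.2f}")

def control_step(actual: State, env: State, viewpoint: float, t_ahead: float) -> tuple[str, float]:
    """One cycle: predict, render the virtualized view, then derive a control signal."""
    predicted_app = predict_state(actual, t_ahead)
    predicted_env = predict_state(env, t_ahead)
    view = render_virtualized_view(viewpoint, predicted_app, predicted_env)
    # The control signal is computed only after the virtualized view is produced,
    # e.g. steer toward the predicted environment position.
    control = predicted_env.position - predicted_app.position
    return view, control

if __name__ == "__main__":
    apparatus = State(position=0.0, velocity=1.0)
    environment = State(position=5.0, velocity=0.0)
    # First viewpoint / prediction at time T, second at time T + delta T.
    for viewpoint, t_ahead in [(0.0, 1.0), (0.5, 1.5)]:
        view, control = control_step(apparatus, environment, viewpoint, t_ahead)
        print(view, "-> control signal:", round(control, 2))
        # Apply the control signal to change the actual state of the apparatus.
        apparatus.velocity += 0.1 * control
```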

System and method for decreasing latency in locating routes between nodes in a wireless communication network

A system and method for controlling the dissemination of routing packets and decreasing the latency in finding routes between nodes. The system and method provide message exchanges between wireless devices to determine optimized communication routes with a minimum of overhead messages and buffered data. Exchanged messages are reduced to a specific series of exchanges indicating the destination, destination node detection, and the route, preferably using a series of IAP devices. Routes are discovered in an efficient manner and the latency in finding routes between nodes is reduced, thereby reducing the amount of information buffered at individual devices.
Owner:STRONG FORCE IOT
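
A toy sketch, not the patented protocol, of the relationship the abstract relies on: a source must buffer outgoing data until route discovery completes, so fewer exchanged discovery messages mean less buffered data. The topology and the helper `discover_route` are hypothetical.

```python
from collections import deque

# Toy adjacency map of wireless nodes; intermediate access points (IAPs)
# relay discovery messages, as in the description above.
TOPOLOGY = {
    "A": ["IAP1"],
    "IAP1": ["A", "IAP2"],
    "IAP2": ["IAP1", "B"],
    "B": ["IAP2"],
}

def discover_route(source: str, destination: str) -> tuple[list[str], int]:
    """Breadth-first route discovery; returns the route and the number of
    discovery messages exchanged (a stand-in for discovery latency)."""
    frontier = deque([[source]])
    visited = {source}
    messages = 0
    while frontier:
        path = frontier.popleft()
        for neighbor in TOPOLOGY[path[-1]]:
            messages += 1              # each probe counts as one exchanged message
            if neighbor == destination:
                return path + [neighbor], messages
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return [], messages

if __name__ == "__main__":
    route, messages = discover_route("A", "B")
    print("route:", " -> ".join(route), "| discovery messages:", messages)
    # While discovery is in flight the source must buffer outgoing data,
    # so fewer messages (lower latency) directly reduces buffered data levels.
```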

Microprocessor for executing speculative load instructions with retry of speculative load instruction without calling any recovery procedures

Inactive · US6918030B2 · Gain in efficiency and resource utilization · Latency · Digital computer details · Concurrent instruction execution · Compiler · Instruction stream
A system, method and apparatus are provided that split a microprocessor load instruction into two parts: a speculative load instruction and a check speculative load instruction. The speculative load instruction can be moved ahead in the instruction stream by the compiler as soon as the address and result registers are available, even when the data to be loaded is not yet actually required. The speculative load instruction will not cause a fault if the memory access is invalid; instead, the load misses and a token bit is set. The check speculative load instruction causes the speculative load instruction to be retried if the token bit was set to one. In this manner, the latency associated with branching to an interrupt routine is eliminated a significant amount of the time: the reasons for invalidating the speculative load operation may no longer be present (e.g. a page that was not present in memory has since been brought in), and the load will be allowed to complete. Substantial gains in efficiency and resource utilization can therefore be achieved by deferring the branch to recovery routines until after the speculative load is retried.
Owner:IBM CORP
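
A minimal simulation of the two-part load described above, assuming a toy memory model: a speculative load that fails sets a token bit instead of faulting, and the check instruction retries the load rather than branching to a recovery routine. The class `ToyCPU` and its method names are hypothetical, not the patented microarchitecture.

```python
class ToyCPU:
    """Toy model of speculative load / check-speculative-load semantics."""

    def __init__(self, memory: dict[int, int]):
        self.memory = memory                 # address -> value; missing keys model invalid accesses
        self.regs: dict[str, int] = {}
        self.token: dict[str, bool] = {}     # per-register deferred-fault token bit

    def speculative_load(self, reg: str, addr: int) -> None:
        """Hoisted load: never faults; on an invalid access it sets the token bit."""
        if addr in self.memory:
            self.regs[reg] = self.memory[addr]
            self.token[reg] = False
        else:
            self.regs[reg] = 0
            self.token[reg] = True           # remember that the load did not complete

    def check_speculative_load(self, reg: str, addr: int) -> int:
        """If the token bit is set, simply retry the load; the invalidating
        condition (e.g. a page that was not present) may have cleared by now."""
        if self.token.get(reg):
            self.speculative_load(reg, addr)
            if self.token[reg]:
                raise MemoryError(f"load of {addr:#x} still invalid after retry")
        return self.regs[reg]

if __name__ == "__main__":
    cpu = ToyCPU(memory={0x100: 42})
    cpu.speculative_load("r1", 0x200)    # hoisted early by the compiler; address invalid
    cpu.memory[0x200] = 7                # e.g. the page is brought into memory later
    print(cpu.check_speculative_load("r1", 0x200))   # retry succeeds, prints 7
```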

Apparatus and method for communicating between computer systems using a sliding send window for ordered messages in a clustered computing environment

A clustered computer system includes multiple computer systems (or nodes) coupled together via one or more networks; the nodes can become members of a group to work on a particular task. Each node includes a cluster engine, a cluster communication mechanism that includes a sliding send window, and one or more service tasks that process messages. The sliding send window allows a node to send out multiple messages without waiting for an individual acknowledgment to each message, and allows a node that received the multiple messages to send a single acknowledgment message for multiple received messages. By using a sliding send window to communicate with other computer systems in the cluster, the communication traffic in the cluster is greatly reduced, thereby enhancing the overall performance of the cluster. In addition, the latency between multiple messages sent concurrently is dramatically reduced.
Owner:IBM CORP
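
A minimal sketch of a sliding send window with cumulative acknowledgments, under the assumption that messages are delivered in order; `SlidingSendWindow` is a hypothetical name and not the patent's implementation.

```python
class SlidingSendWindow:
    """Toy model: send up to `size` unacknowledged messages; one cumulative ack frees them all."""

    def __init__(self, size: int):
        self.size = size
        self.next_seq = 0
        self.unacked: dict[int, str] = {}   # sequence number -> message payload

    def can_send(self) -> bool:
        return len(self.unacked) < self.size

    def send(self, payload: str) -> int:
        if not self.can_send():
            raise RuntimeError("window full: wait for an acknowledgment")
        seq = self.next_seq
        self.unacked[seq] = payload
        self.next_seq += 1
        return seq

    def receive_cumulative_ack(self, acked_through: int) -> None:
        """A single ack covers every message with sequence number <= acked_through."""
        for seq in list(self.unacked):
            if seq <= acked_through:
                del self.unacked[seq]

if __name__ == "__main__":
    window = SlidingSendWindow(size=4)
    for msg in ["m0", "m1", "m2", "m3"]:       # sent back-to-back, no per-message ack
        window.send(msg)
    print("in flight:", len(window.unacked))                  # 4
    window.receive_cumulative_ack(3)                          # receiver acks all four at once
    print("in flight after one ack:", len(window.unacked))    # 0
```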

Method and apparatus for managing data object size in a multi-user environment

One or more embodiments of the invention enable improved communication with a database comprising multiple clients utilizing multiple large data objects concurrently. For example, when a client system interacts with a server with respect to a data object that is over a threshold size, the system may utilize a communication methodology that minimizes system resource usage such as CPU utilization and network utilization. In one embodiment, when a client request for an object falls within the relevant size threshold, the object is segmented into smaller chunks. The server is therefore not required to assemble all data associated with a request at once, but is instead able to immediately start transmitting smaller segments of data. Transmitting smaller data chunks prevents the server from allocating large blocks of memory to one object; although the server may be required to handle more memory allocations, each allocation is smaller and can therefore be processed much faster. The chunk size is determined by inherent system resources, such as the amount of server memory and the available network bandwidth, and by environmental factors such as the time of day, the day of the week, the number of users, the number of users predicted for a given time and day based on historical logging, and the current and predicted network utilization for that time and day. One or more embodiments obtain the chunk size, and optionally a chunk transfer size, from a server that may alter these quantities dynamically in order to minimize resource utilization.
Owner:REGIONAL RESOURCES LTD
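
A small sketch of threshold-based chunking, assuming illustrative values for the threshold and a simple resource-dependent heuristic; `choose_chunk_size` and `iter_chunks` are hypothetical helpers, not the patented method.

```python
from typing import Iterator

THRESHOLD_BYTES = 1 << 20   # objects above ~1 MiB are chunked (illustrative value)

def choose_chunk_size(free_memory: int, bandwidth_bps: int, active_users: int) -> int:
    """Pick a chunk size from current resources; a real system would also weigh
    time of day and predicted load, as the description above notes."""
    base = min(free_memory // 64, bandwidth_bps // 8)
    return max(64 * 1024, base // max(active_users, 1))

def iter_chunks(payload: bytes, chunk_size: int) -> Iterator[bytes]:
    """Yield the object as smaller segments so transmission can start immediately."""
    for offset in range(0, len(payload), chunk_size):
        yield payload[offset:offset + chunk_size]

if __name__ == "__main__":
    obj = bytes(3 * THRESHOLD_BYTES)        # a 3 MiB object, over the threshold
    if len(obj) > THRESHOLD_BYTES:
        chunk = choose_chunk_size(free_memory=256 << 20,
                                  bandwidth_bps=100_000_000,
                                  active_users=50)
        sent = sum(len(c) for c in iter_chunks(obj, chunk))
        print(f"chunk size {chunk} bytes, {sent} bytes sent in segments")
    else:
        print("small object: send in one response")
```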

Pipelined intra-prediction hardware architecture for video coding

As the quality and quantity of shared video content increases, video encoding standards and techniques are being developed and improved to reduce bandwidth consumption over telecommunication and other networks. One technique to reduce bandwidth consumption is intra-prediction, which exploits spatial redundancies within video frames. Each video frame may be segmented into blocks, and intra-prediction may be applied to the blocks. However, intra-prediction of some blocks may rely upon the completion (e.g., reconstruction) of other blocks, which can make parallel processing challenging. Provided are exemplary techniques for improving the efficiency and throughput associated with the intra-prediction of multiple blocks.
Owner:QUALCOMM INC
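
A sketch, not QUALCOMM's architecture, of the dependency problem the abstract describes: each block's intra-prediction can start only after the neighbors it predicts from (left and above, in this simplification) are reconstructed, which yields a wavefront of blocks that can be processed in parallel. `wavefront_schedule` is a hypothetical helper.

```python
def wavefront_schedule(rows: int, cols: int) -> list[list[tuple[int, int]]]:
    """Group blocks into stages; every block in a stage depends only on blocks
    from earlier stages (its left and above neighbors), so a stage's blocks
    could be intra-predicted in parallel pipelines."""
    stages: dict[int, list[tuple[int, int]]] = {}
    for r in range(rows):
        for c in range(cols):
            stages.setdefault(r + c, []).append((r, c))   # anti-diagonal index
    return [stages[k] for k in sorted(stages)]

if __name__ == "__main__":
    for stage, blocks in enumerate(wavefront_schedule(rows=3, cols=4)):
        print(f"stage {stage}: {blocks}")
    # Blocks within a stage have no left/above dependency on each other,
    # so a pipelined hardware design can overlap their prediction and reconstruction.
```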

Reducing write I/O latency using asynchronous fibre channel exchange

An FCP initiator sends an FCP write command to an FCP target within a second FC Exchange, and the target sends one or more FC write control IUs to the initiator within a first FC Exchange to request a transfer of data associated with the write command. The first and second FC Exchanges are distinct from one another. A payload of each write control IU includes the OX_ID value with which the initiator originated the second Exchange and the RX_ID value assigned by the FCP target for the second Exchange. The two Exchanges yield a full-duplex communication environment between the initiator and target that enables the reduction or elimination of latencies incurred in a conventional FCP write I/O operation due to the half-duplex nature of a single FC Exchange. The write control IU may be an enhanced FCP_XFER_RDY IU or a new FC IU previously undefined by the FCP standard.
Owner:AVAGO TECH INT SALES PTE LTD
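
A rough sketch, using hypothetical message dataclasses rather than real FC frame formats, of how a write-control IU sent in its own Exchange can carry the OX_ID/RX_ID of the write command's Exchange so the target can request data transfers while other traffic flows in the opposite direction.

```python
from dataclasses import dataclass

@dataclass
class ExchangeIds:
    ox_id: int    # originator exchange ID
    rx_id: int    # responder exchange ID

@dataclass
class WriteCommand:
    exchange: ExchangeIds        # the second Exchange, originated by the initiator
    length: int                  # bytes of write data

@dataclass
class WriteControlIU:
    exchange: ExchangeIds        # the first Exchange, originated by the target
    refers_to: ExchangeIds       # payload names the write command's OX_ID / RX_ID
    burst_offset: int
    burst_length: int

def target_request_data(cmd: WriteCommand, control_exchange: ExchangeIds,
                        burst: int) -> list[WriteControlIU]:
    """The target asks for the write data in bursts over its own Exchange,
    so requests and data can flow in both directions at once (full duplex)."""
    return [WriteControlIU(exchange=control_exchange,
                           refers_to=cmd.exchange,
                           burst_offset=off,
                           burst_length=min(burst, cmd.length - off))
            for off in range(0, cmd.length, burst)]

if __name__ == "__main__":
    write_cmd = WriteCommand(exchange=ExchangeIds(ox_id=0x10, rx_id=0x80), length=4096)
    control = ExchangeIds(ox_id=0x20, rx_id=0x90)     # distinct, target-originated Exchange
    for iu in target_request_data(write_cmd, control, burst=2048):
        print(iu)
```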

System for interactively distributing information services

An interactive information distribution system includes service provider equipment for generating an information stream that is coupled to an information channel and transmitted to subscriber equipment. The service provider also generates a command signal that is coupled to a command channel and transmitted to the subscriber equipment. The service provider also receives information manipulation requests from the subscriber via a back channel. A communication network supporting the information channel, command channel and back channel is coupled between the service provider equipment and the subscriber equipment.
Owner:COMCAST IP HLDG I
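
A bare-bones sketch of the three logical channels described above, using in-process queues as stand-ins for the communication network; `InteractiveSession` and its methods are hypothetical names for illustration only.

```python
from queue import Queue

class InteractiveSession:
    """Toy model of the three logical channels between provider and subscriber:
    an information stream, command signals, and a back channel for requests."""

    def __init__(self):
        self.information_channel = Queue()   # provider -> subscriber content
        self.command_channel = Queue()       # provider -> subscriber control
        self.back_channel = Queue()          # subscriber -> provider requests

    # Service-provider side
    def send_information(self, stream_chunk: str) -> None:
        self.information_channel.put(stream_chunk)

    def handle_requests(self) -> None:
        while not self.back_channel.empty():
            request = self.back_channel.get()
            self.command_channel.put(f"ack:{request}")   # e.g. acknowledge a pause/seek

    # Subscriber side
    def request(self, manipulation: str) -> None:
        self.back_channel.put(manipulation)

if __name__ == "__main__":
    session = InteractiveSession()
    session.send_information("frame-0001")
    session.request("pause")
    session.handle_requests()
    print(session.information_channel.get(), session.command_channel.get())
```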

Method for measuring and eliminating delay between sounding data and positioning data of unmanned ship

Active · CN112902931A · Eliminate sounding delay · Eliminate propagation time · Synchronisation arrangement · Measuring open water depth · Timestamp · Marine engineering
The invention discloses a method for measuring and eliminating the delay between the sounding data and the positioning data of an unmanned ship. An anechoic pool is provided for single-beam precision measurement: the unmanned ship's transducer is mounted on a support in the pool so that it transmits beams transversely, and the ship's positioning module is fixed at the top of the support. As the unmanned ship moves, each position corresponds to a unique water depth value, so the positioning data and the water depth data change synchronously under known precision and range conditions. During GNSS data acquisition, a timestamp signal is transmitted to the depth finder and the time at which the ping signal is transmitted is recorded; after the depth finder completes the ping search and returns depth data, the depth data is packaged with the positioning data that was waiting, thereby eliminating the propagation time of the sound waves and the processing delay of the depth finder.
Owner:SHANGHAI HUACE NAVIGATION TECH
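
A simplified sketch of the timestamp-matching step, assuming hypothetical record structures (`GnssFix`, `DepthReturn`, `pair_and_correct`): the GNSS fix waits until the depth finder returns the depth for the ping triggered at that timestamp, and the measured round trip minus the acoustic two-way travel time gives the depth finder's processing delay.

```python
from dataclasses import dataclass

SPEED_OF_SOUND = 1500.0   # m/s in water (nominal value, illustrative)

@dataclass
class GnssFix:
    timestamp: float          # s, time the ping was triggered
    x: float
    y: float

@dataclass
class DepthReturn:
    ping_timestamp: float     # echoes the timestamp sent with the ping
    return_time: float        # s, when the depth finder delivered the value
    depth: float              # m

def pair_and_correct(fix: GnssFix, echo: DepthReturn) -> dict:
    """Package the waiting positioning data with the returned depth and
    separate acoustic propagation time from depth-finder processing delay."""
    assert fix.timestamp == echo.ping_timestamp, "ping/position mismatch"
    propagation = 2 * echo.depth / SPEED_OF_SOUND          # two-way travel time
    processing_delay = (echo.return_time - fix.timestamp) - propagation
    return {"x": fix.x, "y": fix.y, "depth": echo.depth,
            "propagation_s": round(propagation, 4),
            "processing_delay_s": round(processing_delay, 4)}

if __name__ == "__main__":
    fix = GnssFix(timestamp=100.000, x=12.5, y=48.2)
    echo = DepthReturn(ping_timestamp=100.000, return_time=100.070, depth=30.0)
    print(pair_and_correct(fix, echo))
```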