
180 results about "Network processing unit" patented technology

Network processors are typically software-programmable devices with generic characteristics similar to the general-purpose central processing units commonly used in many different types of equipment and products.

Packet routing and switching device

A method for routing and switching data packets from one or more incoming links to one or more outgoing links of a router. The method comprises receiving a data packet from the incoming link, assigning at least one outgoing link to the data packet based on the destination address of the data packet, and, after the assigning operation, storing the data packet in a switching memory based on the assigned outgoing link. The data packet is then extracted from the switching memory and transmitted along the assigned outgoing link. The router may include a network processing unit having one or more systolic array pipelines for performing the assigning operation.
Owner:CISCO TECH INC
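The assigning operation described above can be sketched as a longest-prefix-match lookup on the destination address. This is a minimal illustration in plain Python; the table contents and link names are invented, and the patent's systolic array pipelines are not modeled:

```python
import ipaddress

# Toy forwarding table: prefix -> outgoing link (illustrative values, not from the patent)
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "link-1",
    ipaddress.ip_network("10.1.0.0/16"): "link-2",
    ipaddress.ip_network("0.0.0.0/0"): "link-0",  # default route
}

def assign_outgoing_link(dst: str) -> str:
    """Pick the most specific (longest) matching prefix for the destination."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

print(assign_outgoing_link("10.1.2.3"))    # most specific prefix (/16) wins: link-2
print(assign_outgoing_link("192.168.0.1")) # only the default route matches: link-0
```

A real router would use a trie or TCAM rather than a linear scan, but the selection rule — longest matching prefix wins — is the same.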

Method and system for network processor scheduling based on service levels

Inactive · US20020023168A1 · Error prevention · Transmission systems · Maximum burst size · Coming out
A system and method of moving information units from an output flow control toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to service level, based on a weighted fair queue in which a flow's position in the queue is adjusted after each service according to a weight factor and the length of the frame. Interaction between different calendar types is used to provide minimum bandwidth, best-effort bandwidth, weighted fair queuing service, best-effort peak bandwidth, and maximum burst size specifications. These services can be combined to create different QoS specifications. The "base" services offered to a customer in the example described in this patent application are minimum bandwidth, best effort, peak, and maximum burst size (MBS), which may be combined as desired. For example, a user could specify minimum bandwidth plus best-effort additional bandwidth, and the system would provide this capability by placing the flow queue in both the NLS and WFQ calendars. The system includes tests, applied when a flow queue is in multiple calendars, to determine when it must come out.
Owner:IBM CORP
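The queue-position rule above — a flow's position advances by frame length divided by its weight after each service — can be sketched as a generic weighted-fair-queuing loop. Flow names, weights, and frame lengths are invented, and the patent's calendar interactions (NLS, MBS tests) are not modeled:

```python
import heapq

def wfq_schedule(frames):
    """frames: list of (flow, weight, frame_lengths). Returns the service order.

    Each flow's next-service position (its 'virtual finish time') advances by
    frame_length / weight after each service — a common WFQ formulation; the
    patent's multi-calendar details differ.
    """
    heap = []
    for flow, weight, lengths in frames:
        it = iter(lengths)
        first = next(it)  # assumes each flow has at least one frame
        heapq.heappush(heap, (first / weight, flow, first, weight, it))
    order = []
    while heap:
        finish, flow, length, weight, it = heapq.heappop(heap)
        order.append(flow)
        nxt = next(it, None)
        if nxt is not None:  # re-queue the flow, advanced by length/weight
            heapq.heappush(heap, (finish + nxt / weight, flow, nxt, weight, it))
    return order

# Flow A has twice the weight of B, so with equal frame lengths it is
# served twice as often:
print(wfq_schedule([("A", 2, [100] * 4), ("B", 1, [100] * 2)]))
```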

Method and system for network processor scheduling outputs using queueing

A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include low-latency service, minimum bandwidth, weighted fair queueing, and a system for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a weighted fair queueing system in which the position of the next service, within a best-efforts system for using bandwidth not consumed by committed bandwidth, is determined based on the length of the frame and the weight of the particular flow. A "back pressure" system keeps a flow from being selected if its output cannot accept an additional frame because the current level of that port queue exceeds a threshold.
Owner:IBM CORP
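The "back pressure" rule above — skip any flow whose output port queue is over its threshold — can be sketched as follows. The threshold value, field names, and priority scheme are illustrative, not taken from the patent:

```python
PORT_THRESHOLD = 4  # illustrative per-port depth limit, not from the patent

def select_flow(flows, port_depth):
    """Pick the highest-priority flow whose output port is not backed up.

    flows: list of dicts with "name", "port", "priority" (lower = higher priority).
    port_depth: current queue depth per port. Flows whose port exceeds the
    threshold are ineligible ('back pressure'); returns None if none qualify.
    """
    eligible = [f for f in flows if port_depth[f["port"]] <= PORT_THRESHOLD]
    return min(eligible, key=lambda f: f["priority"], default=None)

flows = [
    {"name": "voice", "port": "p0", "priority": 0},
    {"name": "bulk", "port": "p1", "priority": 2},
]
# voice's port p0 is congested, so the lower-priority bulk flow is chosen:
print(select_flow(flows, {"p0": 10, "p1": 1})["name"])
```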

Method and apparatus for performing network processing functions

A novel network architecture that integrates the functions of an internet protocol (IP) router into a network processing unit (NPU) that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances. The NPU appears logically separate from the host computer even though, in one embodiment, it is sharing the same chip.
Owner:NVIDIA CORP

Architecture for combining media processing with networking

Systems and methods for processing media streams for transport over a network based on network conditions. An integrated circuit comprises a media processing unit coupled to receive feedback from a network processing unit. The media processing unit converts a media stream from a compressed input stream to a compressed output stream such that the compressed output stream has characteristics that are best suited for the network conditions. Network conditions can include, for example, characteristics of the network (e.g., latency or bandwidth) or characteristics of the remote playback devices (e.g., playback resolution). Changes in the network conditions can result in a change in the conversion process.
Owner:NXP USA INC
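The feedback loop above — the media processing unit adapting its compressed output to conditions reported by the network processing unit — might look roughly like this. The feedback fields, headroom factor, and bitrate ladder are all invented for illustration:

```python
def choose_output_bitrate(feedback, ladder=(250, 500, 1000, 2000, 4000)):
    """Pick the highest output bitrate (kbit/s) the reported network can sustain.

    `feedback` mimics what a network processing unit might report back to the
    media processing unit; the 20% headroom and latency cutoff are arbitrary
    illustrative policy, not the patent's algorithm.
    """
    budget = feedback["bandwidth_kbps"] * 0.8   # keep 20% headroom
    if feedback.get("latency_ms", 0) > 200:     # congested path: be conservative
        budget *= 0.5
    candidates = [b for b in ladder if b <= budget]
    return max(candidates, default=ladder[0])   # fall back to the lowest rung

print(choose_output_bitrate({"bandwidth_kbps": 3000}))                    # 2000
print(choose_output_bitrate({"bandwidth_kbps": 3000, "latency_ms": 250})) # 1000
```

A change in the reported conditions (for example, rising latency) changes the selected rung, which is the "changes in the network conditions can result in a change in the conversion process" behavior the abstract describes.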

Power consumption information acquisition system safety isolation gateway and application method thereof

Inactive · CN106941494A · Guaranteed uptime · Perfect and effective safety protection measures · Transmission · Network processing unit · Computer terminal
The invention relates to a power consumption information acquisition system safety isolation gateway and an application method thereof. The safety isolation gateway comprises the following units: an internal network processing unit for receiving messages sent by an acquisition server, sending packaged pure application data to an isolation exchange unit, and receiving data transmitted by an external network processing unit from the isolation exchange unit; an external network processing unit for receiving messages sent by the acquisition terminal, sending packaged pure application data to the isolation exchange unit, and receiving data transmitted by the internal network processing unit from the isolation exchange unit; an isolation exchange unit arranged between the internal and external network processing units and used for storing the pure application data transmitted by both, thus realizing controllable exchange of pure application data between the internal and external network processing units; and a cipher processing unit for carrying out cryptographic protocol inspection on the data processed by the isolation exchange unit in a flow-pass mode, and providing encryption and decryption services.
Owner:CHINA ELECTRIC POWER RES INST +1
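The controllable exchange of "pure application data" through an isolation unit can be sketched as a one-slot store-and-forward buffer that only ever carries application payloads, never protocol headers. This is a toy software model of the idea, not the patent's hardware design; all names are illustrative:

```python
class IsolationExchange:
    """One-slot store-and-forward buffer between inner and outer units.

    Exactly one payload may be in flight at a time, which is what makes the
    exchange 'controllable': a writer must wait until the reader has drained
    the slot.
    """
    def __init__(self):
        self._slot = None

    def write(self, payload: bytes):
        if self._slot is not None:
            raise BufferError("slot busy: one message at a time")
        self._slot = payload

    def read(self):
        payload, self._slot = self._slot, None  # drain the slot
        return payload

def strip_headers(message: dict) -> bytes:
    """Inner/outer units forward only the payload, never protocol headers."""
    return message["payload"]

ex = IsolationExchange()
ex.write(strip_headers({"src": "meter-17", "proto": "tcp", "payload": b"reading=42"}))
print(ex.read())  # only b"reading=42" ever crossed the boundary
```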

Efficient conversion method and device for deep learning model

Active · CN107480789A · Decreased structural correlation · Achieve early optimization · Fuzzy logic based systems · Algorithm · Network processing unit
An efficient conversion method for a deep learning model provided by an embodiment of the invention solves the technical problem of low development and operating efficiency in deep learning models. The method includes the following steps: building a data standardization framework corresponding to an NPU (Neural-Network Processing Unit) model according to a general deep learning framework; using the data standardization framework to convert the parameters of a deep learning model into the standard parameters of the data standardization framework; and converting the standard parameters into the parameters of the NPU model. According to the invention, a unified data standardization framework is built for a specific processor according to the parameter structures of general deep learning frameworks. Standard data can be formed using the unified data structure of the data standardization framework from the parameters of a deep learning model produced by any general deep learning framework. Thus, the processor's data analysis depends far less on the structure of the deep learning model, and development of the processor's processing pipeline can be separated from development of the deep learning model. A corresponding efficient conversion device is also provided.
Owner:VIMICRO CORP
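The two-step conversion above — framework parameters into a standardized form, then the standardized form into NPU parameters — can be sketched as follows. The framework names, key mappings, and NPU slot names are all hypothetical:

```python
def to_standard(framework: str, params: dict) -> dict:
    """Normalize framework-specific parameter names into one standard layout."""
    key_maps = {
        "framework_a": {"W": "weight", "b": "bias"},
        "framework_b": {"kernel": "weight", "offset": "bias"},
    }
    return {key_maps[framework][k]: v for k, v in params.items()}

def to_npu(standard: dict) -> dict:
    """Map the standard layout onto (hypothetical) NPU parameter slots.

    This side never needs to know which framework produced the model —
    the decoupling the abstract describes.
    """
    return {"npu_weight": standard["weight"], "npu_bias": standard["bias"]}

# Either framework reaches the same NPU form through the standard layer:
a = to_npu(to_standard("framework_a", {"W": [1, 2], "b": [0]}))
b = to_npu(to_standard("framework_b", {"kernel": [1, 2], "offset": [0]}))
print(a == b)  # True
```

Adding support for a new framework then means adding one entry to `key_maps`, with no change to the NPU-side code.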

Communication Device

There is provided a communication device which can search for a desired communication device and request a service, without regard to the power supply status of other communication devices on the network, while reducing power consumption. A communication device 100 includes a main processing unit 110 to process the main service provided to other communication devices, a network processing unit 120 to transmit and receive request packets and response packets among other communication devices, and an integrated power supply unit 150 to stop supplying power to the main processing unit 110, while remaining able to resume the supply, and to keep supplying power to the network processing unit 120. The network processing unit 120 is provided with an automatic responding unit 703 which determines whether it can respond to a received request packet by itself and, when a response is possible, transmits the response packet to the requesting communication device, and a power supply controlling unit 704 which, when the response is not possible, controls a main power supply unit 151 to supply power to the main processing unit 110.
Owner:PANASONIC CORP
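The split between the automatic responding unit and the power supply controlling unit can be sketched as: answer simple requests from the network unit alone, and power up the main unit only for requests it cannot handle. Request names and responses are invented for illustration:

```python
class CommunicationDevice:
    """Toy model of the device: the network unit answers simple requests by
    itself; anything else wakes the main processing unit (illustrative names)."""

    SIMPLE = {"ping": "pong", "status": "sleeping"}  # answerable without the main unit

    def __init__(self):
        self.main_powered = False  # main processing unit starts powered down

    def handle(self, request: str) -> str:
        if request in self.SIMPLE:      # automatic responding unit: reply alone
            return self.SIMPLE[request]
        self.main_powered = True        # power supply controlling unit wakes main
        return f"main:{request}"        # main unit services the request

d = CommunicationDevice()
print(d.handle("ping"), d.main_powered)           # pong False  (main stays asleep)
print(d.handle("file-transfer"), d.main_powered)  # main:file-transfer True
```

The power saving comes from the first branch: as long as peers only probe reachability or status, the main unit never powers up.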

High speed regular expression matching hybrid system and method based on FPGA and NPU (field programmable gate array and network processing unit)

The invention provides a high-speed regular expression matching hybrid system and method based on an FPGA (field-programmable gate array) and an NPU (network processing unit). The system is mainly composed of an FPGA chip and a multicore NPU; a plurality of parallel hardware matching engines are implemented on the FPGA chip, a plurality of software matching engines are instantiated on the NPU, and the hardware and software engines operate in a pipelined manner. The method proceeds as follows: first, a high-speed RAM (random-access memory) on the FPGA chip and an off-chip DDR3 SDRAM (double data rate type three synchronous dynamic random-access memory) are used to construct a two-level storage architecture; second, a regular expression rule set is compiled to generate a hybrid automaton; third, the state table entries of the hybrid automaton are configured; fourth, network messages are processed. The system and method greatly improve matching performance under complex rule sets, solving the problem of poor performance on such rule sets.
Owner:NAT UNIV OF DEFENSE TECH
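The hardware/software split can be approximated in software as two-stage matching: a cheap literal prefilter (standing in for the FPGA hardware engines) passes candidate messages on to a full regular expression engine (standing in for the NPU software engines). The rules below are illustrative, not from the patent:

```python
import re

# Each rule: name, a cheap literal for stage 1, and the full regex for stage 2.
RULES = [
    ("sql-injection", "union", re.compile(r"union\s+select", re.I)),
    ("path-traversal", "..", re.compile(r"(\.\./){2,}")),
]

def match_message(msg: str):
    """Return the names of all rules the message matches.

    Stage 1 (literal scan) is fast and filters out most traffic; stage 2
    (full regex) runs only on the survivors — the same division of labor
    as the FPGA/NPU pipeline, collapsed into one process for illustration.
    """
    hits = []
    lowered = msg.lower()
    for name, literal, rx in RULES:
        if literal in lowered:          # stage 1: cheap prefilter
            if rx.search(msg):          # stage 2: exact regular expression
                hits.append(name)
    return hits

print(match_message("GET /?q=1 UNION SELECT password"))  # ['sql-injection']
print(match_message("../../../etc/passwd"))              # ['path-traversal']
```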