
Efficient network IO processing method based on NUMA and hardware assisted technology

A hardware-assisted processing method applied in the field of network virtualization. It addresses problems such as the unknowable running state of virtual CPUs, delayed processing of interrupt information, and time-consuming interrupt notification, thereby reducing VM-Exit operations, improving interrupt processing efficiency, and reducing interrupt delivery delay.

Active Publication Date: 2017-08-11
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

However, when delivering interrupts at the hardware level, Intel APICv technology does not take into account the server's underlying NUMA architecture, i.e. the NUMA affinity among the network card, the CPU where the virtual machine management software resides, and the interrupt's destination CPU. In addition, virtual CPU load balancing has no unified mechanism for cooperating with the Posted-Interrupt delivery mechanism. As a result, the notification interrupt of the Posted-Interrupt mechanism on a multi-core server runs into the following two situations:
[0006] 1. When the CPU where the virtual machine management software resides and the CPU where the virtual machine resides are not on the same NUMA node, notifying the interrupt takes a long time;
[0007] 2. After the interrupt information is written into the posted-interrupt descriptor of the destination virtual CPU and the notification interrupt has been sent, the virtual CPU may be migrated or rescheduled to another physical CPU by the CFS scheduler, delaying the handling of the interrupt information.
[0008] Moreover, the two situations aggravate each other: the greater the NUMA "distance" between the two CPUs involved in the notification interrupt, the worse their affinity and the more likely the virtual CPU is to be migrated away by the scheduler. This ignorance of the underlying NUMA architecture and of the virtual CPU's running state therefore limits the network I/O processing efficiency achievable with the Intel APICv hardware technology.
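
To make the affinity problem concrete, the following sketch (not part of the patent text; it assumes Linux with libnuma installed, and the vCPU's host CPU number is a hypothetical placeholder) shows how management software could check whether its own CPU and the physical CPU hosting a vCPU sit on the same NUMA node, and how far apart they are:

```c
/* Hypothetical illustration: check NUMA affinity between the CPU running the
 * VMM thread and the physical CPU currently hosting a vCPU.
 * Build with: gcc -o numa_check numa_check.c -lnuma   (Linux + libnuma assumed)
 */
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this system\n");
        return 1;
    }

    int vmm_cpu  = sched_getcpu();   /* CPU running the management software */
    int vcpu_cpu = 12;               /* hypothetical CPU hosting the target vCPU */

    int vmm_node  = numa_node_of_cpu(vmm_cpu);
    int vcpu_node = numa_node_of_cpu(vcpu_cpu);

    /* numa_distance() returns 10 for the local node; larger means farther. */
    int dist = numa_distance(vmm_node, vcpu_node);

    printf("VMM on CPU %d (node %d), vCPU on CPU %d (node %d), distance %d\n",
           vmm_cpu, vmm_node, vcpu_cpu, vcpu_node, dist);

    if (vmm_node != vcpu_node)
        printf("Cross-node notification: posted-interrupt delivery will be slower.\n");
    return 0;
}
```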



Examples


Embodiment Construction

[0029] As shown in Figure 1, the virtual machine management software maintains a set of emulated devices for each virtual machine: the front end of each device runs inside the virtual machine, while its back end runs in the virtual machine management software and communicates with the real physical device through a virtual network bridge. The arrows in the figure trace the path of a packet received by a virtual machine. The virtual machine management software receives a new packet from the physical network card; the virtual network bridge classifies the packet to identify which virtual machine it belongs to, forwards it into the memory shared by that virtual machine's back-end device and front-end device, and then triggers an interrupt to notify the front-end device to process the packet. On different CPUs, this notification interrupt is often an ...
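
A minimal user-space sketch of this back-end/front-end hand-off is shown below. It is an illustration only: the ring layout, the function names, and the use of an eventfd to stand in for the notification interrupt are assumptions, not the patent's implementation.

```c
/* Illustrative sketch of a paravirtualized receive path: the back end copies a
 * packet into a shared ring and signals the front end through an eventfd,
 * standing in for the notification interrupt. Names and layout are hypothetical.
 */
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

#define RING_SLOTS 256
#define SLOT_SIZE  2048

struct rx_ring {                    /* memory shared by back end and front end */
    volatile uint32_t head, tail;
    uint8_t  slots[RING_SLOTS][SLOT_SIZE];
    uint16_t lens[RING_SLOTS];
};

/* Back end: enqueue the packet, then kick the front end via the eventfd. */
static int backend_deliver(struct rx_ring *ring, int kick_fd,
                           const void *pkt, uint16_t len)
{
    uint32_t next = (ring->head + 1) % RING_SLOTS;
    if (next == ring->tail || len > SLOT_SIZE)
        return -1;                          /* ring full or packet too large */

    memcpy(ring->slots[ring->head], pkt, len);
    ring->lens[ring->head] = len;
    ring->head = next;

    uint64_t one = 1;                       /* the "interrupt" notification */
    return write(kick_fd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}

int main(void)
{
    static struct rx_ring ring;
    int kick_fd = eventfd(0, 0);
    const char pkt[] = "example payload";

    if (backend_deliver(&ring, kick_fd, pkt, sizeof(pkt)) == 0) {
        uint64_t events;
        read(kick_fd, &events, sizeof(events));   /* front end wakes up here */
        printf("front end notified, %u packet(s) pending\n", (unsigned)events);
    }
    return 0;
}
```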



Abstract

The present invention discloses an efficient network I/O processing method based on NUMA and hardware-assisted technology. In a virtualized environment, when an SR-IOV (Single-Root I/O Virtualization) pass-through device or a paravirtualized device generates a physical interrupt, the method analyzes the NUMA (Non-Uniform Memory Access) node affinity among the CPU that handles the physical interrupt, the interrupt's destination CPU, and the node where the underlying network card is located. Combined with the running state of the virtual CPU, it optimizes the interrupt processing efficiency of the Intel APICv hardware technology and the Posted-Interrupt mechanism on multi-core servers. While substantially reducing the context-switch overhead caused by VM-Exits, it effectively eliminates the delivery and scheduling delays between the generation of an interrupt and its processing by the virtual machine, so that the I/O response rate of the virtual machine is greatly improved and the packet processing efficiency of the data center network is greatly optimized.
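
Conceptually, the dispatch decision the abstract describes can be pictured as in the sketch below. This is a hedged illustration only: the structure fields, the `deliver_*` stubs, and the branching order are assumptions for exposition, not the claimed implementation.

```c
/* Conceptual sketch of NUMA- and state-aware interrupt dispatch.
 * All names are hypothetical; deliver_* functions are stand-in stubs.
 */
#include <stdbool.h>
#include <stdio.h>

struct irq_ctx {
    int  irq_cpu_node;     /* NUMA node of the CPU handling the physical interrupt */
    int  dest_vcpu_node;   /* NUMA node of the physical CPU hosting the target vCPU */
    int  nic_node;         /* NUMA node of the underlying network card */
    bool vcpu_running;     /* is the target vCPU currently executing in guest mode? */
};

static void deliver_posted_interrupt(void) { puts("post interrupt directly (no VM-Exit)"); }
static void queue_for_next_entry(void)     { puts("record in descriptor; deliver on next VM entry"); }
static void rebalance_vcpu_near_nic(void)  { puts("suggest migrating vCPU toward the NIC's node"); }

static void dispatch(const struct irq_ctx *c)
{
    /* Far from the NIC: delivery will be slow and the vCPU is likely to be
     * migrated anyway, so hint the scheduler before posting. */
    if (c->dest_vcpu_node != c->nic_node)
        rebalance_vcpu_near_nic();

    if (c->vcpu_running && c->irq_cpu_node == c->dest_vcpu_node)
        deliver_posted_interrupt();   /* same node and running: cheapest path */
    else
        queue_for_next_entry();       /* avoid a cross-node IPI or a wasted notification */
}

int main(void)
{
    struct irq_ctx local_running  = { 0, 0, 0, true  };
    struct irq_ctx remote_stopped = { 0, 1, 0, false };
    dispatch(&local_running);
    dispatch(&remote_stopped);
    return 0;
}
```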

Description

Technical field

[0001] The invention relates to the field of network virtualization, and in particular to an efficient network I/O processing method based on NUMA and hardware-assisted technology.

Background technique

[0002] Under the current cloud computing infrastructure, the processing efficiency of the virtual network functions integrated into virtual machines is crucial to efficient network I/O processing. Existing DMA technology and zero-copy memory sharing mechanisms largely eliminate the data-plane overhead of network I/O, so the main performance bottleneck is now concentrated in the network I/O control plane. NUMA (Non-Uniform Memory Access) is a relatively mature multi-core processor architecture that divides a multi-core server's hardware resources into multiple nodes (Nodes), each with its own processor and memory resources, where the speed at which a processor accesses the memory of its own node ...
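
For reference, the node layout described above can be inspected from user space; a minimal sketch follows (assuming Linux with libnuma, built with `-lnuma`), printing each node's memory and the inter-node distance matrix:

```c
/* Minimal NUMA topology dump using libnuma (Linux; link with -lnuma).
 * Illustrative only: prints nodes, their memory sizes, and inter-node distances.
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "this system is not NUMA-aware\n");
        return 1;
    }

    int max_node = numa_max_node();
    printf("configured NUMA nodes: %d\n", numa_num_configured_nodes());

    for (int n = 0; n <= max_node; n++) {
        long long free_mem = 0;
        long long total = numa_node_size64(n, &free_mem);
        printf("node %d: %lld MiB total, %lld MiB free\n",
               n, total >> 20, free_mem >> 20);
    }

    /* Distance matrix: 10 means local; larger values mean slower remote access. */
    for (int a = 0; a <= max_node; a++) {
        for (int b = 0; b <= max_node; b++)
            printf("%4d", numa_distance(a, b));
        printf("\n");
    }
    return 0;
}
```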

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/455; G06F9/50
CPC: G06F9/45558; G06F9/5027; G06F2009/4557; G06F2009/45579
Inventors: 李健, 张望, 管海兵, 马汝辉, 胡小康
Owner: SHANGHAI JIAO TONG UNIV