
A virtual CPU scheduling optimization method for NUMA architecture

An optimization method in the field of virtualization. It addresses problems in existing approaches, such as breaking the transparency of the virtualization layer, failing to meet requirements, and failing to optimize shared-resource contention and remote memory access overhead. The method achieves low overall overhead, improves application performance, and optimizes system performance.

Active Publication Date: 2018-05-11
HUAZHONG UNIV OF SCI & TECH


Problems solved by technology

None of the existing resource scheduling methods in virtualized environments accurately optimizes shared-resource contention and remote memory access overhead through VCPU scheduling. In addition, some related research optimizes performance at the operating system or application level, but requires the virtual machine monitor to expose the underlying NUMA architecture information to the virtual machine, which destroys the transparency of the virtualization layer and therefore fails to meet the requirements.




Embodiment Construction

[0018] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.

[0019] As shown in Figure 1, under the NUMA architecture each node has an independent memory block, a memory access controller and a shared cache, and data is transmitted between nodes over an interconnect bus. In a virtualized environment, the virtual machine monitor (VMM), which sits between the underlying hardware and the upper-level guest operating systems, is the core of virtualization technology. The VMM is responsible for allocating and managing the underlying hardware resources and can support multiple independent virtual machines running on the same physical machine. Each virtual machine has its own VCPUs, which run the applications inside the virtual machine. In particular, the VCPU scheduler in the VMM is responsible for...
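The topology described above (nodes with independent memory blocks, CPUs belonging to nodes, VCPUs whose pages live on some node) can be modeled with a minimal sketch. The class names, fields, and the `home_node` notion are illustrative assumptions, not structures defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class NumaNode:
    node_id: int
    cpus: list        # ids of the physical CPUs on this node
    memory_mb: int    # size of the node's independent memory block

@dataclass
class Vcpu:
    vcpu_id: int
    home_node: int    # node holding most of this VCPU's pages (assumed notion)

def is_local_access(vcpu: Vcpu, cpu: int, topology: list) -> bool:
    """True when the CPU running the VCPU belongs to the node that holds
    the VCPU's memory; otherwise the access crosses the interconnect bus."""
    node = next(n for n in topology if cpu in n.cpus)
    return node.node_id == vcpu.home_node
```

For example, with two nodes of two CPUs each, a VCPU whose memory lives on node 0 accesses memory locally only when it runs on a CPU of node 0.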



Abstract

The invention discloses a VCPU scheduling optimization method oriented to the NUMA architecture. Specifically: the memory access information of each VCPU is collected, and the memory access characteristics of each VCPU are analyzed and computed; according to the location of each VCPU's memory block and its access type, memory-intensive VCPUs are distributed evenly across the NUMA nodes so as to maximize local memory access; when a CPU is idle, a suitable VCPU is selected to run on it according to the CPU's load and the node to which the CPU belongs. The invention targets the performance problems of memory-access-intensive applications in a NUMA-based virtualized environment and optimizes the VCPU allocation and migration mechanisms according to the memory access characteristics of the VCPUs. While preserving the transparency of the virtualization layer, it effectively reduces remote memory access and alleviates shared-resource contention, thereby improving the performance of memory-intensive applications.
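The three steps of the abstract (classify VCPUs by memory intensity, spread the memory-intensive ones evenly across nodes, and pick work for an idle CPU by load and node) can be sketched as follows. The threshold value, the round-robin placement, and the stealing order are assumptions for illustration; the patent text does not fix these details:

```python
from collections import defaultdict

# Hypothetical threshold (the patent gives no concrete value): VCPUs whose
# measured miss rate exceeds it are treated as memory-intensive.
MEM_INTENSIVE_THRESHOLD = 10.0

def classify(miss_rates):
    """Split VCPUs into memory-intensive and ordinary sets by miss rate."""
    hot = [v for v, r in miss_rates.items() if r >= MEM_INTENSIVE_THRESHOLD]
    cold = [v for v, r in miss_rates.items() if r < MEM_INTENSIVE_THRESHOLD]
    return hot, cold

def place_evenly(hot_vcpus, num_nodes):
    """Distribute memory-intensive VCPUs round-robin over the NUMA nodes
    so that no single node's memory controller is saturated."""
    placement = defaultdict(list)
    for i, v in enumerate(sorted(hot_vcpus)):
        placement[i % num_nodes].append(v)
    return dict(placement)

def pick_for_idle_cpu(idle_node, runqueues, cpu_node):
    """When a CPU becomes idle, steal a VCPU from the most loaded CPU,
    preferring CPUs on the same node to preserve local memory access."""
    order = sorted(runqueues, key=lambda c: (cpu_node[c] != idle_node,
                                             -len(runqueues[c])))
    for c in order:
        if runqueues[c]:
            return runqueues[c].pop(0)
    return None
```

`pick_for_idle_cpu` sorts candidate CPUs so that same-node CPUs come first and, within a node, the longest run queue wins, which is one plausible reading of "according to the CPU load size and the node information".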

Description

Technical Field

[0001] The invention belongs to the field of virtualization and, more particularly, relates to optimizing virtual CPU (VCPU) scheduling to improve the performance of memory-intensive applications in a virtualized environment based on the NUMA architecture.

Background

[0002] With the development of multi-core architectures, the number of processor cores keeps increasing, and contention for the single memory access controller of the traditional UMA architecture becomes ever more serious, so the NUMA architecture emerged. A server based on the NUMA architecture contains multiple NUMA nodes (nodes for short), and each node has multiple physical CPUs (CPUs for short), an independent memory block and a memory access controller. For a given CPU or memory block, the node where it resides is called its local node, and the other nodes are called remote nodes. Data is transmitted between the nodes through the...
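Given the local/remote distinction above, one simple per-VCPU "memory access characteristic" is the fraction of sampled accesses that hit the local node. This metric and the sampling format are illustrative assumptions; the patent text does not specify how accesses are sampled:

```python
def local_access_ratio(sampled_nodes, home_node):
    """Fraction of sampled memory accesses that hit the VCPU's local node.

    `sampled_nodes` is a hypothetical trace listing, for each sampled
    access, the id of the NUMA node whose memory was touched.
    """
    if not sampled_nodes:
        return 0.0
    local = sum(1 for n in sampled_nodes if n == home_node)
    return local / len(sampled_nodes)
```

A low ratio would indicate that a VCPU mostly performs remote accesses and is a candidate for migration toward the node holding its memory.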


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F 9/50; G06F 9/455
Inventors: 吴松, 金海, 孙华华, 周理科
Owner: HUAZHONG UNIV OF SCI & TECH