
Micro-architecture sensitive thread scheduling (MSTS) method

A technology relating to computer architecture and operating systems, applied in the direction of multi-programming devices. It addresses the difficulty of analyzing cache miss rates and memory access addresses without a unified user-facing interface, with the effects of alleviating cache thrashing and mutual eviction among threads, reducing memory access latency, and improving the cache hit rate.

Inactive Publication Date: 2011-06-01
NAT UNIV OF DEFENSE TECH

Problems solved by technology

Because a thread scheduling algorithm must be simple and efficient, it is often difficult to analyze cache miss rates and memory access addresses without hardware support.
Modern processors generally provide a Performance Monitoring Unit (PMU), a set of counters that record low-level processor events. Although most processors' counters are architecture-specific and do not expose a unified interface to users, most can supply basic statistics such as cache miss rates and sampled memory access addresses. How to fully exploit the PMU to perceive this low-level information in real time and guide the operating system's thread scheduling has attracted increasing attention from the academic community.
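The intra-node idea sketched above can be illustrated with a small, hypothetical pairing heuristic: given per-thread last-level-cache miss rates (as a PMU would supply), match the most cache-hungry thread with the least cache-hungry one so that no two heavy contenders share a cache simultaneously. The function name and the specific pairing rule are assumptions for illustration, not the patent's exact algorithm.

```python
# Hypothetical illustration: pairing threads that will share a last-level
# cache, using per-thread miss rates such as a PMU would supply. The
# pairing heuristic and all names are assumptions, not the patent's own.

def pair_threads_by_miss_rate(miss_rates):
    """Pair cache-hungry threads with cache-light ones.

    miss_rates: dict mapping thread id -> LLC miss rate (misses/insn).
    Returns (heavy, light) pairs: highest miss rate with the lowest,
    second highest with the second lowest, and so on.
    """
    ordered = sorted(miss_rates, key=miss_rates.get)  # ascending miss rate
    pairs = []
    while len(ordered) >= 2:
        light = ordered.pop(0)   # least cache-hungry remaining thread
        heavy = ordered.pop(-1)  # most cache-hungry remaining thread
        pairs.append((heavy, light))
    return pairs

# Example: four threads with sampled miss rates.
rates = {"t0": 0.08, "t1": 0.01, "t2": 0.05, "t3": 0.02}
print(pair_threads_by_miss_rate(rates))  # [('t0', 't1'), ('t2', 't3')]
```

A real implementation would read the miss rates from architecture-specific PMU counters (for example via Linux's `perf_event_open`) rather than from a dictionary.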

Method used




Detailed Description of the Embodiments

[0033] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0034] Traditional uniprocessors cannot meet the needs of modern commercial and scientific computing because they cannot overcome obstacles such as the memory wall and the power wall. To reduce power consumption and improve memory access speed and bandwidth, most current systems adopt a ccNUMA parallel processor structure composed of CMP nodes. Figure 1 shows a parallel processor architecture consisting of two CMP nodes. Each core has its own local cache; all cores on a CMP node share a larger last-level cache, and a built-in memory controller improves memory access speed. Each node is connected to the other CMPs through a high-speed interconnect to realize high-speed data transfer. In order to improve the scalability of the system, the inter...
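Why binding a thread to the node that holds its data matters on such a ccNUMA structure can be seen with a toy access-cost model. The latency numbers below are illustrative assumptions (local DRAM versus one interconnect hop), not measurements from the patent.

```python
# Toy ccNUMA access-cost model. Latencies are illustrative assumptions
# (local DRAM vs. one interconnect hop), not figures from the patent.

LOCAL_NS = 80    # assumed local memory access latency, nanoseconds
REMOTE_NS = 160  # assumed remote (one-hop) access latency, nanoseconds

def avg_access_ns(local_fraction):
    """Average memory latency when `local_fraction` of a thread's
    accesses hit its own node's memory and the rest go remote."""
    return local_fraction * LOCAL_NS + (1.0 - local_fraction) * REMOTE_NS

# A thread placed on the node holding 75% of its pages vs. only 25%:
print(avg_access_ns(0.75))  # 100.0
print(avg_access_ns(0.25))  # 140.0
```

Under these assumed latencies, moving a thread from the "wrong" node to the node owning most of its pages cuts average access latency substantially, which is the motivation for the inter-node level of the scheduling strategy.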



Abstract

The invention relates to a micro-architecture sensitive thread scheduling (MSTS) method. The method comprises a two-level scheduling strategy. Within a node, according to the structural characteristics of the CMP and using cache-miss information acquired in real time, threads that compete excessively for the shared cache are run staggered in the time and space dimensions, while threads that can effectively share data with one another are run simultaneously or successively. Between nodes, by sampling the memory regions each thread accesses most frequently, threads are bound as far as possible to the nodes where their data reside, reducing remote memory accesses and inter-chip communication. The method reduces the access latency caused by simultaneously running multiple threads with heavy shared-cache contention, and improves thread execution speed and system throughput.
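The inter-node step of the abstract can be sketched as follows: given memory addresses sampled for a thread, count how many fall on pages owned by each node and prefer the node touched most often. The page size, the page-ownership map, and all names here are assumptions for illustration, not details specified by the patent.

```python
# Sketch of the inter-node binding decision: given sampled memory
# addresses for one thread, pick the node owning most of its pages.
# Page size, ownership map, and names are illustrative assumptions.

PAGE_SIZE = 4096

def preferred_node(sampled_addrs, page_to_node):
    """Return the NUMA node whose memory the thread touched most often.

    sampled_addrs: iterable of sampled memory addresses (e.g. from a PMU).
    page_to_node:  dict mapping page number -> owning node id.
    """
    counts = {}
    for addr in sampled_addrs:
        node = page_to_node[addr // PAGE_SIZE]
        counts[node] = counts.get(node, 0) + 1
    return max(counts, key=counts.get)

# Example: pages 0-1 live on node 0, page 2 on node 1.
owner = {0: 0, 1: 0, 2: 1}
samples = [0x10, 0x1008, 0x2100, 0x18, 0x2F00]
print(preferred_node(samples, owner))  # 0
```

In a real system the scheduler would then bind the thread to that node, for example with a CPU-affinity call, so its accesses stay local.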

Description

Technical Field

[0001] The invention mainly relates to the field of thread scheduling in operating systems, in particular to thread scheduling within shared-cache chip multiprocessor (CMP) nodes and among the nodes of a distributed shared-memory system composed of multiple such chips. Specifically, it refers to an operating system thread scheduling method that is aware of micro-architecture information.

Background

[0002] Current multi-core processor technology alleviates, to a certain extent, the contradiction between rapidly growing processor performance and slowly growing memory performance, but memory speed is still a key factor restricting processor performance. Modern computer systems implement functions such as out-of-order execution, multiple-instruction issue, and loop unrolling in hardware, which largely hides processor memory access latency. At the operating system level, the priorit...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/46
Inventors: Yang Guogui (阳国贵), Yu Fei (余飞), Jiang Bo (姜波)
Owner: NAT UNIV OF DEFENSE TECH