
Execution thread allocation method for a multi-core, multi-central-processing-unit system

A technology concerning central processing units and execution threads, applied in the field of multi-programming devices. It addresses problems such as time slices that cannot be synchronized, calls that cannot be made in the designed order, and multiple execution threads waiting on the same request, with the effect of improving running speed and efficiency and avoiding resource conflict problems.

Active Publication Date: 2008-02-27
ZHIGU HLDG

AI Technical Summary

Problems solved by technology

These problems arise because a single execution thread is scheduled by different CPUs in turn at runtime, so its time slices cannot be synchronized and calls cannot be made in the order the design intends.
[0006] Known test (Diagnostic) platforms provide the option of binding an entire process to a single central processing unit (a minimal sketch of this approach is given after this section). Such handling can only reduce, not eliminate, problems such as resource contention, and it leaves the other central processing units with no load, so it is not a good solution.
[0007] In particular, in some applications such as testing (Diagnostic), the characteristics of the test item or of the hardware device usually allow only one test execution thread to use a specific hardware device or the same resource at any one time, so multiple execution threads easily end up waiting on the same request.
The above problems exist not only in stress tests of multi-core, multi-CPU systems but also in ordinary software calls on multi-core, multi-CPU computer systems.
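The workaround mentioned in [0006] amounts to pinning the whole process to one processor. The following is a minimal illustrative sketch, not taken from the patent, assuming a Linux system with the GNU scheduling extensions; every thread spawned after the call inherits the affinity mask, so the entire process shares CPU 0 while the remaining cores stay idle:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Known workaround from [0006]: confine the calling thread, and every
 * thread it spawns afterwards (they inherit this mask), to CPU 0. */
static int pin_process_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                       /* only CPU 0 is allowed */

    /* pid 0 means the calling thread */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

This avoids a single thread being bounced between CPUs, but at the cost described above: the other execution cores carry no load.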



Examples


Embodiment Construction

[0043] When multiple execution cores of multiple central processing units need to call upper-level modules, that is, modules in the software layer, the known technique usually makes these calls by scheduling execution threads, so each execution thread can be regarded as being assigned to run on a single execution core of a single central processing unit. For example, in the embodiment shown in FIG. 2, it is relatively simple to assign the started execution threads in sequence to the first execution core 10 of the first central processing unit 1, the second execution core 12 of the first central processing unit 1, the first execution core 14 of the second central processing unit 2, and the second execution core 16 of the second central processing unit 2. However, under such a scheme a dynamic link library (Dynamic Link Library, DLL) may provide multiple execution threads that can be called separately, and memory or global variables will be shared between these execution threads...
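A minimal sketch of the round-robin assignment described in [0043], assuming a Linux system with the GNU pthread affinity extensions; cores 0 to 3 stand in for execution cores 10, 12, 14 and 16 of FIG. 2, and the worker body and thread count are placeholders rather than anything from the patent:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES   4   /* two CPUs x two execution cores, as in FIG. 2      */
#define NUM_THREADS 8   /* hypothetical number of started execution threads  */

static void *worker(void *arg)
{
    /* Placeholder for the execution thread's actual work. */
    printf("thread %ld runs on core %d\n",
           (long)(intptr_t)arg, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++) {
        /* Round-robin: the i-th started thread is bound to core i mod 4,
         * i.e. core 0, 1, 2, 3, 0, 1, ... in turn. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(i % NUM_CORES, &set);

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

        pthread_create(&threads[i], &attr, worker, (void *)(intptr_t)i);
        pthread_attr_destroy(&attr);
    }
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

As [0043] notes, such a purely positional assignment ignores the fact that threads provided by the same DLL may share memory or global variables, which is exactly the resource conflict the claimed method aims to avoid.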



Abstract

This invention discloses an execution thread allocation method for multi-core, multi-central-processing-unit systems. First, a list of correspondences between each execution item and its source code is established. When the execution threads of each execution item are started, a list of correspondences between the execution item and its execution thread identification codes is generated, and execution threads are then allocated according to these two lists. In addition, a priority designation field can be added to the correspondence list between execution items and source code to identify the priority designation type of each execution item's thread group, or an interface for declaring potential resource conflicts can be provided, so that the priority designation or the potential resource conflict settings are considered first when grouping the execution threads.
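A minimal sketch of the two correspondence lists the abstract describes, using hypothetical names and fields (item_source_entry, priority_group, conflict_group, and so on); it only illustrates the idea of keeping the threads of one execution item, or of items flagged for priority or potential conflict, together on one core, and is not the patent's own data layout:

#include <string.h>

#define NUM_CORES 4

/* Correspondence list 1: execution item <-> source code, extended with the
 * optional priority-designation and potential-resource-conflict fields. */
struct item_source_entry {
    const char *item_name;      /* execution item (e.g. a test item)          */
    const char *source_code;    /* source code / module it originates from    */
    int         priority_group; /* optional priority designation; -1 = none   */
    int         conflict_group; /* potential resource conflict tag; -1 = none */
};

/* Correspondence list 2: execution item <-> execution thread ID,
 * generated when the item's execution threads are started. */
struct item_thread_entry {
    const char   *item_name;
    unsigned long thread_id;
};

/* Pick a core for the threads of one execution item: a priority or conflict
 * group keeps related threads on the same core; otherwise the item's index
 * is simply spread across the cores. */
static int allocate_core(const struct item_source_entry *items,
                         int item_count, const char *item_name)
{
    for (int i = 0; i < item_count; i++) {
        if (strcmp(items[i].item_name, item_name) != 0)
            continue;
        if (items[i].priority_group >= 0)
            return items[i].priority_group % NUM_CORES;
        if (items[i].conflict_group >= 0)
            return items[i].conflict_group % NUM_CORES;
        return i % NUM_CORES;
    }
    return 0;                    /* unknown item: fall back to core 0 */
}

/* Allocate a core for one execution thread: list 2 maps the thread ID back
 * to its execution item, and list 1 then supplies the priority / conflict
 * information used for grouping. */
static int allocate_core_for_thread(const struct item_source_entry *items,
                                    int item_count,
                                    const struct item_thread_entry *threads,
                                    int thread_count,
                                    unsigned long thread_id)
{
    for (int t = 0; t < thread_count; t++)
        if (threads[t].thread_id == thread_id)
            return allocate_core(items, item_count, threads[t].item_name);
    return 0;                    /* unknown thread: fall back to core 0 */
}

In this sketch the priority designation and the conflict tag are checked before the default placement, mirroring the abstract's point that priority designation or potential resource conflict settings are considered first when grouping execution threads.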

Description

technical field
[0001] The present invention relates to an execution thread scheduling method for processors, and in particular to an execution thread allocation method proposed for the architecture of multi-core, multi-central-processing-unit (Central Processing Unit, CPU) systems.
Background technique
[0002] At present, facing the rapidly growing globalization of streaming media and network applications, enterprises and consumers demand that computer processors provide greater convenience and more obvious advantages, so servers based on multi-core central processing units (CPUs) have emerged. A multi-core central processing unit is a central processing unit with two or more processor cores on a single CPU substrate, a new generation of computer central processing unit aimed mainly at professional users or home multimedia users. Moreover, with the development of science and tec...


Application Information

IPC(8): G06F9/46
Inventors: 段秋月, 陈镇, 陈玄同, 刘文涵
Owner: ZHIGU HLDG