
A task processing method and server

A task processing method and server in the field of resource optimization, addressing problems such as degraded user performance experience, reduced operation-center efficiency, and unbalanced resource distribution, so as to improve resource utilization, avoid congestion, and improve overall performance.

Active Publication Date: 2019-07-26
LENOVO (BEIJING) LTD

AI Technical Summary

Problems solved by technology

Unreasonable resource distribution (for example, too many tasks with large memory-bandwidth overhead concentrated on one node) degrades the user's performance experience and reduces the efficiency of the operation center.
At present, no software-system solution balances memory bandwidth to provide users with better service.



Examples


Embodiment 1

[0022] Figure 1 is a schematic flow chart of a task processing method in an embodiment of the present invention. As shown in Figure 1, the method includes:

[0023] Step 101: collecting first resource characteristic parameters corresponding to at least two central processing units;

[0024] Here, the method described in this embodiment can be applied to a server or a server cluster. Specifically, when the method is applied to a server, as shown in Figure 2, the server may include at least two central processing units (CPUs). In that case, the method can migrate tasks between the server's CPUs, balancing the memory-bandwidth load across them and improving the server's overall performance.
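The patent does not specify how these resource characteristic parameters are gathered. As a rough sketch, assuming the per-CPU memory bandwidth (the "first" parameters) and the per-task bandwidth (the "second" parameters) arrive as plain measurement dictionaries, the two collection steps might look like:

```python
# Hedged sketch of the two collection steps: all names, units (MB/s),
# and data shapes below are hypothetical -- the patent does not define them.

def collect_first_params(cpu_bandwidth_samples):
    """Reduce each CPU's recent memory-bandwidth samples (MB/s) to one
    characteristic value per CPU (here simply the sample mean)."""
    return {cpu: sum(s) / len(s) for cpu, s in cpu_bandwidth_samples.items()}

def collect_second_params(tasks):
    """Group the bandwidth currently occupied by each executing task
    under the CPU it is running on."""
    per_cpu = {}
    for t in tasks:
        per_cpu.setdefault(t["cpu"], []).append((t["name"], t["bandwidth"]))
    return per_cpu
```

In a real system the samples would come from hardware performance counters or a bandwidth-monitoring facility; the dictionaries above only stand in for that measurement layer.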

[0025] Or, when the method described in the embodiment of the present invention is applied to a server cl...

Embodiment 2

[0037] Figure 4 is a schematic flow chart of a second task processing method in an embodiment of the present invention. As shown in Figure 4, the method includes:

[0038] Step 401: collecting first resource characteristic parameters corresponding to at least two central processing units;

[0039] Here, as in Embodiment 1, the method described in this embodiment can be applied to a server or a server cluster. Specifically, when the method is applied to a server, as shown in Figure 2, the server may include at least two central processing units (CPUs). In that case, the method can migrate tasks between the server's CPUs, balancing the memory-bandwidth load across them and improving the server's overall performance.

[0040] Or, when the method described in the embodiment of the present inven...

Embodiment 3

[0066] This embodiment provides a server. As shown in Figure 6, the server includes:

[0067] The collection unit 61 is configured to collect first resource characteristic parameters corresponding to at least two central processing units, and to collect second resource characteristic parameters currently occupied by at least one execution task corresponding to the at least two central processing units;

[0068] The processing unit 62 is configured to determine, based on the first and second resource characteristic parameters, whether a task migration condition is satisfied; and, when the task migration condition is met, to migrate at least one execution task from at least one first central processing unit of the at least two central processing units to at least one second central processing unit of the at least two central processing units.

[0069] In an embodiment, the processing unit 62 is further configured to group the at least two central processing units when the task migration condition is satisfied, to obtain a first group ...
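The paragraph above is truncated, but it suggests the CPUs are partitioned into a first group (heavily loaded, migration sources) and a second group (lightly loaded, migration targets). A minimal sketch of such a grouping step, using the mean bandwidth as a hypothetical cut-off:

```python
def group_cpus(first_params):
    """Partition CPUs into a heavily loaded 'first group' and a lightly
    loaded 'second group'.  Using the mean bandwidth as the threshold is
    an assumption; the patent's actual grouping rule is not visible here."""
    mean = sum(first_params.values()) / len(first_params)
    first_group = sorted(c for c, bw in first_params.items() if bw > mean)
    second_group = sorted(c for c, bw in first_params.items() if bw <= mean)
    return first_group, second_group
```

Any monotone load metric would work as the partition key; the mean is chosen only because it needs no tuning parameter.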



Abstract

An embodiment of the invention discloses a task processing method. The method comprises the steps of acquiring first resource characteristic parameters corresponding to at least two central processing units; acquiring second resource characteristic parameters occupied by at least one execution task currently corresponding to the at least two central processing units; judging whether a task migration condition is met or not based on the first resource characteristic parameters corresponding to the at least two central processing units and the second resource characteristic parameters occupied by the at least one execution task currently corresponding to the at least two central processing units; and when the task migration condition is met, migrating the at least one execution task in at least one first central processing unit of the at least two central processing units into at least one second central processing unit of the at least two central processing units. An embodiment of the invention furthermore discloses a server.
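As an illustration only, the judge-and-migrate steps of the abstract could be sketched as follows. Both the migration condition (the busiest CPU exceeding the least busy by a fixed ratio) and the pick-the-smallest-task policy are assumptions, since the claims are not visible in this record:

```python
def should_migrate(first_params, ratio=2.0):
    """Hypothetical migration condition: the busiest CPU's memory
    bandwidth exceeds the least busy CPU's by more than `ratio` times."""
    src = max(first_params, key=first_params.get)
    dst = min(first_params, key=first_params.get)
    return first_params[src] > ratio * first_params[dst], src, dst

def migrate_one_task(second_params, src, dst):
    """Move the smallest-bandwidth task from `src` to `dst`
    (hypothetical policy; `second_params` maps CPU -> [(task, MB/s)])."""
    task = min(second_params[src], key=lambda t: t[1])
    second_params[src].remove(task)
    second_params.setdefault(dst, []).append(task)
    return task
```

Migrating the smallest task first keeps each rebalancing step conservative; a real scheduler would also account for migration cost and cache locality.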

Description

Technical Field

[0001] The invention relates to resource optimization technology, and in particular to a task processing method and server.

Background

[0002] Memory bandwidth is one of the main performance factors in big data analysis and scientific and engineering computing. In a cluster environment, different users and different workloads have different requirements for memory bandwidth and latency. Unreasonable resource distribution (for example, too many tasks with large memory-bandwidth overhead concentrated on one node) degrades the user's performance experience and reduces the efficiency of the operation center. At present, no software-system solution balances memory bandwidth to provide users with better service.

Contents of the Invention

[0003] To solve the above technical problems, the embodiments of the present invention provide a task processing method and a server.

[0004] The technical scheme of the embodim...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC(8): G06F9/50
CPC: G06F9/5088
Inventor: 刘显, 杨立中, 张振
Owner LENOVO (BEIJING) LTD