
Scheduling method and system for relieving memory pressure in distributed data processing systems

A scheduling method and system in the field of distributed data processing that addresses the problem that existing methods cannot be applied to service-oriented data processing systems, and achieves the effects of avoiding unnecessary task waiting, reducing memory overflow, and reducing disk reads and writes.

Publication date: 2017-08-18 (Inactive)
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0005] In view of the defects of the prior art, the purpose of the present invention is to provide a scheduling method for alleviating memory pressure in a distributed data processing system, so as to solve the technical problem that existing methods cannot be applied to service-oriented data processing systems.
[0008] 1. The present invention can solve the technical problem that existing methods cannot be applied to service-oriented data processing systems because they handle all tasks in the same way. By adopting steps (1) to (6), the invention uses the execution information of each task in the data processing system, such as the input and output sizes from the task information and the memory usage from the memory information, to calculate the memory occupation growth rate of each task and to evaluate, from that growth rate, each task's impact on memory pressure. The invention can therefore clearly distinguish the impact of different tasks on memory pressure, process the tasks with a small memory occupation growth rate first, relieve memory pressure in the data processing system, and avoid long waits for tasks that have little impact on memory pressure.
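As a concrete illustration of the ranking step described above, the following Java sketch computes a per-task memory occupation growth rate from two memory samples and orders tasks so that those with the smallest growth rate run first. The TaskStats fields and the sampling scheme are illustrative assumptions, not the patent's exact data structures.

```java
import java.util.Comparator;
import java.util.List;

/** Hypothetical per-task execution statistics sampled by the scheduler. */
class TaskStats {
    final String taskId;
    final long memUsedPrevBytes;   // memory occupied at the previous sample
    final long memUsedNowBytes;    // memory occupied at the current sample
    final long intervalMillis;     // time elapsed between the two samples

    TaskStats(String taskId, long prevBytes, long nowBytes, long intervalMillis) {
        this.taskId = taskId;
        this.memUsedPrevBytes = prevBytes;
        this.memUsedNowBytes = nowBytes;
        this.intervalMillis = intervalMillis;
    }

    /** Memory occupation growth rate in bytes per millisecond. */
    double growthRate() {
        return (double) (memUsedNowBytes - memUsedPrevBytes) / intervalMillis;
    }
}

class GrowthRateRanker {
    /**
     * Order tasks so that those with the smallest memory occupation growth
     * rate come first; processing these first relieves memory pressure while
     * avoiding long waits for low-impact tasks.
     */
    static void rankByGrowthRate(List<TaskStats> tasks) {
        tasks.sort(Comparator.comparingDouble(TaskStats::growthRate));
    }
}
```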




Embodiment Construction

[0014] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific implementations described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.

[0015] As shown in Figure 1, the scheduling method for alleviating memory pressure in a distributed data processing system of the present invention includes the following steps:

[0016] (1) Obtain all task information and memory information in the data processing system;

[0017] Specifically, the task information is located in the data processing system, and the memory information...
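Steps (1) and (2) gather the task information and memory information of the running system; the paragraph above is truncated, so the snapshot structures below are only an assumed shape for that collection step. The field names and the use of the JVM Runtime API are assumptions for illustration, not the patent's specification.

```java
import java.util.ArrayList;
import java.util.List;

/** Assumed snapshot of one task's information gathered in step (1). */
class TaskInfo {
    String taskId;
    long inputBytes;     // size of the task's input data
    long outputBytes;    // size of the output produced so far
    long memUsedBytes;   // memory currently occupied by the task
}

/** Assumed snapshot of the worker's memory information. */
class MemoryInfo {
    long heapUsedBytes;
    long heapMaxBytes;

    /** Fraction of the heap in use, used here as a simple memory-pressure signal. */
    double pressure() {
        return (double) heapUsedBytes / heapMaxBytes;
    }
}

class InfoCollector {
    /** Placeholder for querying the data processing system for its running tasks. */
    static List<TaskInfo> collectTaskInfo() {
        return new ArrayList<>(); // a real collector would query the executors
    }

    /** Collects JVM heap usage as the worker's memory information. */
    static MemoryInfo collectMemoryInfo() {
        Runtime rt = Runtime.getRuntime();
        MemoryInfo info = new MemoryInfo();
        info.heapUsedBytes = rt.totalMemory() - rt.freeMemory();
        info.heapMaxBytes = rt.maxMemory();
        return info;
    }
}
```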



Abstract

The invention discloses a scheduling method for relieving memory pressure in distributed data processing systems. The scheduling method comprises the following steps: analyzing the memory usage pattern according to the characteristics of the operations performed on key-value pairs by each user programming interface, and establishing a memory usage model for the user programming interface in the data processing system; inferring the memory usage models of tasks according to the sequence in which the tasks call the programming interfaces; distinguishing the different models by means of the memory occupation growth rate; and estimating each task's influence on memory pressure according to its memory usage model and the size of the data it is currently processing, then suspending the tasks with high influence until the tasks with low influence have finished executing or the memory pressure is relieved. The method monitors and analyzes in real time the influence of all running tasks on memory pressure in the data processing system, thereby improving the scalability of service systems.
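The abstract's final step suspends the high-influence tasks until the low-influence tasks finish or memory pressure eases. The sketch below shows one way such a suspend/resume decision could look; the impact estimate, the median-based split, the pressure threshold, and the SchedulableTask interface are illustrative assumptions rather than the claimed implementation.

```java
import java.util.List;

/** Illustrative task handle exposing an estimated impact on memory pressure. */
interface SchedulableTask {
    double estimatedImpact();   // e.g. growth rate multiplied by remaining input size
    boolean isFinished();
    void suspend();
    void resume();
}

class PressureScheduler {
    private final double pressureThreshold; // e.g. 0.85 of the heap

    PressureScheduler(double pressureThreshold) {
        this.pressureThreshold = pressureThreshold;
    }

    /**
     * When memory pressure is high, suspend the tasks whose estimated impact is
     * above the median and resume them once the low-impact tasks have finished
     * or the pressure has dropped back below the threshold.
     */
    void schedule(List<SchedulableTask> tasks, double currentPressure) {
        double median = tasks.stream()
                .mapToDouble(SchedulableTask::estimatedImpact)
                .sorted()
                .skip(tasks.size() / 2)
                .findFirst()
                .orElse(Double.MAX_VALUE);

        boolean lowImpactDone = tasks.stream()
                .filter(t -> t.estimatedImpact() < median)
                .allMatch(SchedulableTask::isFinished);

        for (SchedulableTask t : tasks) {
            if (currentPressure > pressureThreshold
                    && t.estimatedImpact() >= median
                    && !lowImpactDone) {
                t.suspend();   // high-impact task waits while pressure is high
            } else {
                t.resume();    // low-impact tasks (or all, once pressure eases) proceed
            }
        }
    }
}
```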

Description

Technical Field

[0001] The invention belongs to the field of distributed systems, and more specifically relates to a scheduling method and system for alleviating memory pressure in a distributed data processing system.

Background

[0002] Distributed data processing systems are used ever more widely in big data processing and are developing rapidly. Most such systems are developed in high-level object-oriented languages such as Java and C#. However, they are constrained on the one hand by the available hardware memory, and on the other hand these languages run in managed execution environments such as the JVM and .NET: data is stored in memory in the form of objects, which introduces additional data structures such as references and makes the problem of memory bloat prominent. At the same time, the managed environment automatically m...


Application Information

IPC(8): G06F9/46, G06F9/50
CPC: G06F9/465, G06F9/5016
Inventors: 石宣化, 金海, 张雄, 柯志祥
Owner HUAZHONG UNIV OF SCI & TECH