
Parallel Computing Scheduling Method in Heterogeneous Environment

A parallel computing scheduling technology, applied in the fields of resource allocation and multiprogramming, that addresses the low resource utilization of batch data-parallel models and achieves increased job throughput, faster execution, and improved resource utilization.

Active Publication Date: 2016-09-21
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0004] The present invention addresses the low resource utilization of existing batch data-parallel models and provides a novel parallel computing scheduling method for heterogeneous environments.



Examples


Embodiment 1

[0034] A parallel computing scheduling method in a heterogeneous environment, the process of which is shown in Figure 1, includes the following steps:

[0035] 1) Build multiple JVM task slots on a heterogeneous cluster. The JVM task slots consist of memory spaces of different or equal sizes. The heterogeneous cluster includes a master node and slave nodes, and the JVM task slots are located on the slave nodes;
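The patent excerpt does not give a concrete representation for a JVM task slot; the Java sketch below is a minimal illustration, assuming each slot records its host slave node, its fixed memory size, and whether it is idle. The class and field names are illustrative, not taken from the patent.

```java
// Hypothetical model of a JVM task slot as described in step 1).
// Names (TaskSlot, SlotState) are illustrative, not from the patent.
public class TaskSlot {
    public enum SlotState { IDLE, BUSY }

    private final String slaveNode;   // slave node hosting this slot
    private final long memoryMB;      // JVM heap size assigned to this slot
    private SlotState state = SlotState.IDLE;

    public TaskSlot(String slaveNode, long memoryMB) {
        this.slaveNode = slaveNode;
        this.memoryMB = memoryMB;
    }

    public String getSlaveNode() { return slaveNode; }
    public long getMemoryMB()    { return memoryMB; }
    public SlotState getState()  { return state; }
    public void setState(SlotState s) { state = s; }
}
```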

[0036] 2) The master node monitors the I/O utilization and CPU utilization of all slave nodes and builds idle task slot arrays Q1 and Q2; each of the idle task slot arrays Q1 and Q2 consists of one or more JVM task slots;
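The exact membership rule for Q1 and Q2 is not spelled out in this excerpt. The sketch below assumes one plausible reading: Q1 gathers idle slots on slave nodes with low I/O utilization (candidates for I/O-intensive tasks), and Q2 gathers idle slots on nodes with low CPU utilization (candidates for CPU-intensive tasks). The thresholds and the NodeStats helper are assumptions, and TaskSlot refers to the sketch above.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of step 2): the master node groups idle slots into
// two arrays Q1 and Q2 based on monitored node utilization. The thresholds
// and the NodeStats record are illustrative assumptions, not patent details.
public class IdleSlotArrays {
    public record NodeStats(double ioUtilization, double cpuUtilization) {}

    public static void build(Map<String, NodeStats> statsByNode,
                             List<TaskSlot> allSlots,
                             List<TaskSlot> q1,   // slots suited to I/O-intensive tasks
                             List<TaskSlot> q2,   // slots suited to CPU-intensive tasks
                             double ioThreshold,
                             double cpuThreshold) {
        for (TaskSlot slot : allSlots) {
            if (slot.getState() != TaskSlot.SlotState.IDLE) continue;
            NodeStats stats = statsByNode.get(slot.getSlaveNode());
            if (stats == null) continue;
            if (stats.ioUtilization() < ioThreshold)   q1.add(slot);
            if (stats.cpuUtilization() < cpuThreshold) q2.add(slot);
        }
    }
}
```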

[0037] 3) The distributed file system built on the heterogeneous cluster receives the to-be-processed input data uploaded by the user and stores it in the form of data blocks on the nodes of the heterogeneous cluster; the distributed file system receives the parallel data subm...
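Step 3) relies on a distributed file system that splits the uploaded input into data blocks spread over the cluster nodes. The sketch below illustrates the idea only; the 128 MB block size and round-robin placement are assumptions, not details from the patent.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of step 3): splitting user input into fixed-size
// data blocks and placing them across cluster nodes. Block size and placement
// policy are assumptions for illustration only.
public class BlockPlacement {
    public record Block(long offset, long length, String node) {}

    public static List<Block> place(long fileSizeBytes, List<String> slaveNodes) {
        final long blockSize = 128L * 1024 * 1024; // assumed block size
        List<Block> blocks = new ArrayList<>();
        int nodeIndex = 0;
        for (long offset = 0; offset < fileSizeBytes; offset += blockSize) {
            long length = Math.min(blockSize, fileSizeBytes - offset);
            blocks.add(new Block(offset, length, slaveNodes.get(nodeIndex)));
            nodeIndex = (nodeIndex + 1) % slaveNodes.size(); // round-robin placement
        }
        return blocks;
    }
}
```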

Embodiment 2

[0053] Following the steps listed in Embodiment 1, a parallel computing scheduling test is carried out as follows:

[0054] Step 1): The heterogeneous cluster contains four physical nodes: one master node and three slave nodes (slave node 1, slave node 2, and slave node 3). Each node has a single-core 64-bit Xeon processor with a clock frequency of 2.00 GHz, and the memory of the three slave nodes is 4 GB, 8 GB, and 8 GB respectively. All machines are connected to the same Gigabit LAN and have the same disk read and write speeds. Two JVM task slots are constructed on each slave node, and the memory of each JVM task slot, calculated according to the formula, is 768 MB, 1792 MB, and 1792 MB on the three slave nodes respectively;

[0055] Step 2): At a certain moment, the master node observes that the I/O utilization rates of the slave nodes are 40%, 70%, and 60%, and that their CPU utilization rates are 40%, 30%, and 20%, respectively. At this time, all task slots are in the idle stat...
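Assuming all six slots are idle, as stated above, the grouping rule sketched earlier can be worked through on these measurements. The ranking below (ascending I/O utilization for Q1, ascending CPU utilization for Q2) is an illustrative assumption, not the patent's exact formula.

```java
import java.util.*;

// Worked illustration of step 2) of Embodiment 2 under the assumptions used
// in the earlier sketches: all slots are idle, and nodes are ranked by their
// measured utilization. The ordering rule itself is an assumption.
public class Embodiment2Example {
    public static void main(String[] args) {
        Map<String, double[]> stats = new LinkedHashMap<>();
        stats.put("slave1", new double[]{0.40, 0.40}); // {I/O utilization, CPU utilization}
        stats.put("slave2", new double[]{0.70, 0.30});
        stats.put("slave3", new double[]{0.60, 0.20});

        // Q1: nodes ordered by ascending I/O utilization (best hosts for I/O-intensive tasks).
        List<String> q1Order = new ArrayList<>(stats.keySet());
        q1Order.sort(Comparator.comparingDouble((String n) -> stats.get(n)[0]));

        // Q2: nodes ordered by ascending CPU utilization (best hosts for CPU-intensive tasks).
        List<String> q2Order = new ArrayList<>(stats.keySet());
        q2Order.sort(Comparator.comparingDouble((String n) -> stats.get(n)[1]));

        System.out.println("Q1 node order: " + q1Order); // [slave1, slave3, slave2]
        System.out.println("Q2 node order: " + q2Order); // [slave3, slave2, slave1]
    }
}
```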



Abstract

The invention relates to the field of parallel computing and discloses a parallel computing scheduling method in a heterogeneous environment. By constructing a variety of JVM task slots with different memory sizes and arrays of idle task slots, the method divides the tasks of a parallel computing job into I/O-intensive and CPU-intensive tasks and assigns them to suitable task slots for computation, thereby optimizing parallel computing efficiency in heterogeneous environments. The invention dynamically determines the size and type of memory required by a task, improves the resource utilization of heterogeneous clusters, reduces the overall running time of parallel computing jobs, and avoids memory overflow during task execution.
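The abstract describes the core scheduling decision: classify a task as I/O-intensive or CPU-intensive and hand it an idle slot from the matching array. The sketch below illustrates that decision under the same assumptions as the earlier sketches; the TaskType enum, the deque-backed queues, and the null fallback are illustrative choices, not the patent's implementation.

```java
import java.util.Deque;

// Hypothetical sketch of the scheduling decision: pick an idle slot from Q1
// for I/O-intensive tasks or from Q2 for CPU-intensive tasks. Reuses the
// TaskSlot sketch from Embodiment 1 above.
public class Scheduler {
    public enum TaskType { IO_INTENSIVE, CPU_INTENSIVE }

    private final Deque<TaskSlot> q1; // idle slots suited to I/O-intensive tasks
    private final Deque<TaskSlot> q2; // idle slots suited to CPU-intensive tasks

    public Scheduler(Deque<TaskSlot> q1, Deque<TaskSlot> q2) {
        this.q1 = q1;
        this.q2 = q2;
    }

    /** Returns a slot for the task, or null if no suitable slot is currently idle. */
    public TaskSlot assign(TaskType type) {
        Deque<TaskSlot> preferred = (type == TaskType.IO_INTENSIVE) ? q1 : q2;
        TaskSlot slot = preferred.pollFirst();
        if (slot != null) {
            slot.setState(TaskSlot.SlotState.BUSY);
        }
        return slot;
    }
}
```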

Description

Technical Field

[0001] The invention relates to the field of parallel computing, and in particular to a scheduling method for parallel computing in a heterogeneous environment.

Background Art

[0002] With the continuous emergence of new information publishing channels, represented by social networking sites (SNS) and location-based services (LBS), and the rise of technologies such as cloud computing, the Internet of Things, and mobile computing, data is growing and accumulating at an unprecedented rate; we have entered the era of big data. According to statistics, Baidu processes 10-100 PB of data every day, and the New York Stock Exchange generates about 1 TB of transaction data every day. In the era of big data, a single machine cannot meet the performance and time requirements of data processing tasks such as data mining and building inverted indexes, so multi-machine parallel processing technology has emerged. Big data analysis technology is a process of c...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F9/50
Inventors: 吴朝晖, 何延彰, 姜晓红, 黄鹏, 毛宇
Owner: ZHEJIANG UNIV