
Job scheduling method and device

A job scheduling method and device, applied in the field of cloud computing resource management, which solves problems such as low job execution efficiency and low resource utilization, and achieves the effects of reducing resource waste, improving resource utilization, and improving job execution efficiency.

Status: Inactive · Publication Date: 2014-01-22
HAINAN UNIVERSITY
6 Cites · 30 Cited by

AI Technical Summary

Problems solved by technology

[0006] The present invention provides a job scheduling method and device to solve the problems of low job execution efficiency and low resource utilization caused by suspending the scheduling of non-local tasks when an existing Hadoop cluster performs job scheduling.



Examples


Embodiment 1

[0025] Referring to figure 1, a flow chart of the steps of a job scheduling method according to Embodiment 1 of the present invention is shown.

[0026] The steps of the job scheduling method in this embodiment are as follows:

[0027] Step S102: During the execution of the current job, filter out a set of candidate computing nodes according to a first set rule, and filter out an unexecuted map task in the current job according to a second set rule as the map task to be executed.

[0028] Wherein, a job includes at least one map task.

[0029] The first set rule and the second set rule in this embodiment can be set by those skilled in the art according to actual needs. For example, the first set rule can be set as follows: when there are idle computing nodes in the Hadoop cluster executing the current job, all idle computing nodes form the set of candidate computing nodes; when there are no idle computing nodes, the computing nodes that are executing map tasks ...
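As an illustration of how such rules might look in code, below is a minimal Python sketch of one possible first set rule and one possible second set rule. The node and task structures, and the preference for a task whose input block already sits on a candidate node, are assumptions made only for illustration; the patent leaves the concrete rules to the practitioner.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    name: str
    idle: bool                                       # node has a free map slot
    local_blocks: set = field(default_factory=set)   # IDs of data blocks stored on this node

@dataclass
class MapTask:
    task_id: int
    block_id: str            # input data block this map task reads
    executed: bool = False

def first_set_rule(nodes):
    """Example first set rule: every idle node joins the candidate set."""
    return [n for n in nodes if n.idle]

def second_set_rule(tasks, candidates):
    """Example second set rule: prefer an unexecuted map task whose input block
    already resides on a candidate node; otherwise take any pending task."""
    candidate_blocks = {b for n in candidates for b in n.local_blocks}
    pending = [t for t in tasks if not t.executed]
    for t in pending:
        if t.block_id in candidate_blocks:
            return t
    return pending[0] if pending else None
```

Any other rule satisfying the same interface (a candidate-node set and one pending map task) would fit the method equally well.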

Embodiment 2

[0044] Referring to figure 2, a flow chart of the steps of a job scheduling method according to Embodiment 2 of the present invention is shown.

[0045] The specific steps of the job scheduling method in this embodiment include:

[0046] Step S202: During the execution of the current job, the master node judges whether there is an idle computing node in the Hadoop cluster executing the current job. If yes, execute step S204; if not, execute step S208.

[0047] Step S204: When there are idle computing nodes in the Hadoop cluster executing the current job, the master node takes all idle computing nodes as a set of candidate computing nodes.

[0048] Step S206: Select an idle computing node from the set of candidate computing nodes as a computing node meeting the set criteria.

[0049] It should be noted that when selecting an idle computing node as a computing node that meets the set criteria, any idle computing node can be selected, or a local node of a subsequently selected map task ...
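To make the control flow of steps S202 through S206 concrete, here is a rough Python sketch that reuses the ComputeNode and MapTask structures from the Embodiment 1 sketch above. The preference for the task's local node follows the note in paragraph [0049]; the branch taken when no node is idle (step S208) is truncated in the excerpt, so it is only stubbed out here.

```python
def select_node_meeting_criteria(nodes, task_to_run=None):
    """Sketch of steps S202-S206; the locality preference is an assumption per [0049]."""
    idle_nodes = [n for n in nodes if n.idle]          # step S202: any idle node in the cluster?
    if idle_nodes:                                     # step S204: idle nodes form the candidate set
        if task_to_run is not None:
            for n in idle_nodes:                       # step S206: prefer the task's local node
                if task_to_run.block_id in n.local_blocks:
                    return n
        return idle_nodes[0]                           # otherwise any idle node will do
    # step S208: handling when no node is idle (not shown in the excerpt above)
    return None
```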

Embodiment 3

[0079] Referring to figure 3, a flow chart of the steps of a job scheduling method according to Embodiment 3 of the present invention is shown.

[0080] In this embodiment, the job scheduling method of the present invention is described in detail on the premise that every computing node in the Hadoop cluster is busy executing a map task of the current job.

[0081] In this embodiment, it is assumed that there is a Hadoop cluster C composed of N TaskTrackers (TT, computing nodes) and one JobTracker (JT, master node); the cluster can be expressed as C = {JT, TT_i | i ∈ [0, N)}. Suppose a job J consists of m map tasks and r reduce tasks. Since reduce tasks are generally considered to have no data locality problem, they are not considered in this embodiment, and the job can be simplified as J = {M_i | i ∈ [0, m)}. Because Hadoop by default assigns one map task to one input data block, the input data of the job can be expressed as I = {B_i | i ∈ [0, m)}. Then ...
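The notation above maps naturally onto simple data structures. The following Python sketch merely illustrates the model C = {JT, TT_i}, J = {M_i}, I = {B_i}, with one input block per map task as in Hadoop's default; the class names are hypothetical.

```python
from dataclasses import dataclass

# C = {JT, TT_i | i in [0, N)} : one JobTracker plus N TaskTrackers
# J = {M_i  | i in [0, m)}     : m map tasks (reduce tasks ignored here)
# I = {B_i  | i in [0, m)}     : one input data block per map task

@dataclass
class TaskTracker:
    index: int                  # TT_i

@dataclass
class MapTaskSpec:
    index: int                  # M_i
    block_index: int            # B_i read by this task

def build_model(num_trackers: int, num_map_tasks: int):
    cluster = [TaskTracker(i) for i in range(num_trackers)]
    job = [MapTaskSpec(i, i) for i in range(num_map_tasks)]   # M_i reads B_i
    return cluster, job
```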



Abstract

The invention provides a job scheduling method and device. The job scheduling method comprises the following steps: during the execution of the current job, a set of candidate computing nodes is screened out according to a first set rule, and an unexecuted map task in the current job is screened out according to a second set rule as the map task to be executed; a computing node that meets a set standard is selected from the set of candidate computing nodes; whether the map task to be executed is a local task of that computing node is judged; when the map task to be executed is not a local task of that computing node, the data block corresponding to the map task to be executed is transmitted from its source computing node to the computing node that meets the set standard and stored there; and when the computing node that meets the set standard requests a map task, the screened map task to be executed is distributed to it. The job scheduling method and device improve resource utilization and job execution efficiency.
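Read as pseudocode, the abstract describes one scheduling round roughly like the standalone Python sketch below. The dictionary shapes, the copy_block_to helper, and the choice of the first candidate node are illustrative assumptions, not APIs or rules taken from the patent.

```python
def schedule_one_map_task(candidate_nodes, pending_tasks, block_locations):
    """One scheduling round as the abstract describes it (illustrative only).

    candidate_nodes: node names already filtered by the first set rule
    pending_tasks:   {task_id: block_id}, already filtered by the second set rule
    block_locations: {block_id: set of node names currently holding that block}
    """
    if not candidate_nodes or not pending_tasks:
        return None
    node = candidate_nodes[0]                              # node meeting the set standard
    task_id, block_id = next(iter(pending_tasks.items()))  # map task to be executed
    if node not in block_locations.get(block_id, set()):   # non-local task?
        copy_block_to(block_id, node)                      # pre-transfer from a source node
        block_locations.setdefault(block_id, set()).add(node)
    return task_id, node                                   # assigned when the node requests a task

def copy_block_to(block_id, node):
    """Placeholder for transferring a data block to `node`; not an API from the patent."""
    pass
```

Pre-copying the input block to the selected node, rather than suspending the scheduling of a non-local task, is precisely the behaviour that the problem statement in paragraph [0006] targets.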

Description

Technical field

[0001] The present invention relates to the technical field of cloud computing resource management, and in particular to a job scheduling method and device.

Background technique

[0002] With the rapid development of the Internet, cloud computing has developed and evolved rapidly under the impetus of IT (Information Technology) giants, especially with the advent of the era of big data. Hadoop, developed by Doug Cutting, the creator of the Apache Lucene search engine, is an open-source framework for large-scale parallel data processing and is currently the most widely used open-source cloud computing platform. Hadoop mainly includes two core parts: HDFS (Hadoop Distributed File System) and MapReduce. MapReduce is a distributed programming model proposed by Google and a distributed processing framework for processing and generating large-scale data sets, which is the basis...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48
Inventor: 黄梦醒, 万兵, 段茜, 冯文龙
Owner: HAINAN UNIVERSITY