
Model-free data center resource scheduling algorithm based on reinforcement learning

A data center resource scheduling technology, applied in resource allocation, ensemble learning, digital data processing, etc. It addresses the problems that existing cloud computing scheduling algorithms cannot adapt to changing environments, that modeling is difficult, and that task allocation is not sufficiently scientific and reasonable, achieving more scientific and rational task allocation, avoiding the difficulty of modeling the environment, and enabling efficient utilization of resources.

Pending Publication Date: 2019-10-18
白紫星

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to provide a model-free data center resource scheduling algorithm based on reinforcement learning, in order to solve the problems that existing cloud computing scheduling algorithms cannot adapt to changing environments, that modeling is difficult, and that task allocation is not sufficiently scientific and reasonable.



Examples


Embodiment 1

[0027] Embodiment 1: A model-free data center resource scheduling algorithm based on reinforcement learning, comprising an environment model and a DRL model. The environment model includes a time model, a VM model, and a Task model; the Task model is used to store tasks that have not yet been executed, and the VM model is used to execute tasks. The DRL model includes an Agent1 model and an Agent2 model; the Agent1 model is used to judge whether a task is executed, and the Agent2 model is used to add or remove virtual machines to achieve load balancing. The Agent1 model and the Agent2 model are each composed of four parts: a state space, an action space, a reward function, and a deep neural network;
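As a minimal sketch of how this environment / two-agent structure could be organized in code (the class and method names TaskModel, VMModel, Agent1, Agent2, scale, etc. are illustrative assumptions, not taken from the patent text):

```python
import random
from collections import deque

class TaskModel:
    """Holds tasks that have not yet been executed (a waiting queue)."""
    def __init__(self):
        self.queue = deque()

    def add(self, task):
        self.queue.append(task)

class VMModel:
    """A pool of virtual machines that execute tasks."""
    def __init__(self, n_vms):
        self.vms = [{"busy": False} for _ in range(n_vms)]

    def busy_ratio(self):
        busy = sum(vm["busy"] for vm in self.vms)
        return busy / max(len(self.vms), 1)

    def scale(self, delta):
        # Agent2's action: add or remove VMs to balance the load.
        if delta > 0:
            self.vms.extend({"busy": False} for _ in range(delta))
        elif delta < 0:
            idle = [vm for vm in self.vms if not vm["busy"]]
            for vm in idle[:abs(delta)]:
                self.vms.remove(vm)

class Agent1:
    """Decides, for the task at the head of the queue, whether to execute it now."""
    def act(self, state):
        return random.choice([0, 1])  # placeholder policy; a trained DQN would go here

class Agent2:
    """Decides whether to add, remove, or keep the current number of VMs."""
    def act(self, state):
        return random.choice([-1, 0, 1])  # placeholder policy
```

The placeholder policies stand in for the deep neural networks mentioned in the embodiment; the point of the sketch is only the division of responsibilities between the two agents and the environment.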

[0028] The state space 1 of the Agent1 model is (e_t, c_t, m_t, n_t), where e_t is the execution time of the task, c_t is the cost of the task, and n_t is the proportion of busy virtual machines; the state space 2 of the Agent2 model is ..., where the omitted quantity represents the load of the environment ...
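As a hedged illustration, the Agent1 state described above could be assembled as a flat feature vector; the function name and the interpretation of m_t below are assumptions (the excerpt truncates its definition):

```python
import numpy as np

def agent1_state(exec_time, cost, m_feature, busy_ratio):
    """Assembles the Agent1 state (e_t, c_t, m_t, n_t) as a flat vector.

    exec_time  -> e_t, the task's execution time
    cost       -> c_t, the task's priority value
    m_feature  -> m_t; its definition is truncated in the excerpt, so it is
                  passed through here as an opaque placeholder
    busy_ratio -> n_t, the proportion of busy virtual machines
    """
    return np.array([exec_time, cost, m_feature, busy_ratio], dtype=np.float32)
```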

Embodiment 2

[0035] Embodiment 2: A model-free data center resource scheduling algorithm based on reinforcement learning, comprising an environment model and a DRL model. The environment model includes a time model, a VM model, and a Task model; the Task model is used to store tasks that have not yet been executed, and the VM model is used to execute tasks. The DRL model includes an Agent1 model and an Agent2 model; the Agent1 model is used to judge whether a task is executed, and the Agent2 model is used to add or remove virtual machines to achieve load balancing. The Agent1 model and the Agent2 model are each composed of four parts: a state space, an action space, a reward function, and a deep neural network;

[0036] The Agent1 model needs to obtain the priority value cost of the task before judging whether the task is executed: cost = (e_t + d_t) / e_t, where e_t indicates the execution time of the task and d_t indicates the waiting time of the task in the queue;
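For concreteness, the priority value above is a one-line computation; the function below is only a sketch of that formula:

```python
def task_cost(exec_time: float, wait_time: float) -> float:
    """Priority value cost = (e_t + d_t) / e_t.

    exec_time: e_t, the task's execution time
    wait_time: d_t, the time the task has waited in the queue
    A task that has waited long relative to its execution time gets a
    larger cost, i.e. a higher scheduling priority.
    """
    return (exec_time + wait_time) / exec_time
```

For example, task_cost(10.0, 5.0) returns 1.5, while task_cost(2.0, 5.0) returns 3.5: short tasks that have been waiting a long time rise in priority.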

[0037] The state space 1 of the ...

Embodiment 3

[0044] Embodiment 3: A model-free data center resource scheduling algorithm based on reinforcement learning, comprising an environment model and a DRL model. The environment model includes a time model, a VM model, and a Task model; the Task model is used to store tasks that have not yet been executed, and the VM model is used to execute tasks. The DRL model includes an Agent1 model and an Agent2 model; the Agent1 model is used to judge whether a task is executed, and the Agent2 model is used to add or remove virtual machines to achieve load balancing. The Agent1 model and the Agent2 model are each composed of four parts: a state space, an action space, a reward function, and a deep neural network;

[0045] The Agent1 model needs to obtain the priority value cost of the task before judging whether the task is executed: cost = (e_t + d_t) / e_t, where e_t indicates the execution time of the task and d_t indicates the waiting time of the task in the queue;

[0046] The state space 1 of the ...



Abstract

The invention discloses a model-free data center resource scheduling algorithm based on reinforcement learning. The algorithm comprises an environment model and a DRL model. The environment model includes a time model, a VM model, and a Task model. The Task model is used for storing tasks which have not yet been executed, and the VM model is used for executing tasks. The DRL model comprises an Agent1 model and an Agent2 model. The Agent1 model is used for judging whether a task is executed, the Agent2 model is used for increasing or decreasing virtual machines, and the Agent1 model and the Agent2 model each comprise a state space, an action space, a reward function, and a deep neural network. Because the size of tasks arriving at the data center fluctuates widely, a cost value is provided to measure the waiting time of a task. Compared with traditional fair scheduling, the shortest-task-first strategy, and the first-come-first-executed strategy, task allocation is more scientific and reasonable; at the same time, to address the resource waste caused by changes in the number of arriving tasks, the number of VMs in the cluster is dynamically adjusted, and efficient utilization and load balancing of data center resources are realized.
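A hedged sketch of how one scheduling step might tie the pieces of the abstract together; the function signature, data layout, and stand-in policies are assumptions for illustration, not the patent's implementation:

```python
from collections import deque

def scheduling_step(task_queue, vms, agent1_act, agent2_act):
    """One illustrative decision step, assuming:
      task_queue : deque of dicts with 'exec_time' and 'cost' fields
      vms        : list of dicts with a boolean 'busy' flag
      agent1_act : callable, state -> 0/1 (defer / execute the head task)
      agent2_act : callable, state -> -1/0/+1 (remove / keep / add a VM)
    """
    busy_ratio = sum(vm["busy"] for vm in vms) / max(len(vms), 1)

    # Agent1: decide whether the task at the head of the queue runs now.
    if task_queue:
        task = task_queue[0]
        state1 = (task["exec_time"], task["cost"], busy_ratio)
        if agent1_act(state1) == 1:
            task_queue.popleft()
            idle_vm = next((vm for vm in vms if not vm["busy"]), None)
            if idle_vm is not None:
                idle_vm["busy"] = True  # dispatch the task to an idle VM

    # Agent2: grow or shrink the VM pool toward load balance.
    delta = agent2_act((busy_ratio,))
    if delta > 0:
        vms.append({"busy": False})
    elif delta < 0:
        idle_vm = next((vm for vm in vms if not vm["busy"]), None)
        if idle_vm is not None:
            vms.remove(idle_vm)

# Example usage with trivial stand-in policies:
queue = deque([{"exec_time": 10.0, "cost": 1.5}])
pool = [{"busy": False}, {"busy": False}]
scheduling_step(queue, pool, agent1_act=lambda s: 1, agent2_act=lambda s: 0)
```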

Description

Technical field
[0001] The invention belongs to the technical field of data resource scheduling, and specifically relates to a model-free data center resource scheduling algorithm based on reinforcement learning.
Background technique
[0002] With the development of the times, big data and cloud computing are becoming more and more important. The development and maturity of big data and cloud computing technologies have promoted the construction of domestic data centers. However, as data centers grow in scale and their environments become more complex, traditional resource allocation schemes can no longer cope with the changing environment of the data center. Most traditional resource allocation schemes are based on heuristic algorithms, such as fair scheduling, the shortest-task-first strategy, and the first-come-first-executed strategy. In order to cope with the complex data center environment, a traditional heuristic algorithm needs to careful...
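To make the baselines named above concrete, here is a minimal, hedged illustration of how the shortest-task-first and first-come-first-executed orderings differ; the task fields are assumptions:

```python
# Each task is a dict with an arrival order and an estimated execution time.
tasks = [
    {"id": "A", "arrival": 0, "exec_time": 30.0},
    {"id": "B", "arrival": 1, "exec_time": 5.0},
    {"id": "C", "arrival": 2, "exec_time": 12.0},
]

# First-come-first-executed: run tasks strictly in arrival order.
fcfs_order = sorted(tasks, key=lambda t: t["arrival"])

# Shortest-task-first: run the task with the smallest execution time first.
sjf_order = sorted(tasks, key=lambda t: t["exec_time"])

print([t["id"] for t in fcfs_order])  # ['A', 'B', 'C']
print([t["id"] for t in sjf_order])   # ['B', 'C', 'A']

# Both rules are fixed heuristics: neither adapts its ordering when the
# data center's load or the mix of arriving tasks changes, which is the
# limitation the reinforcement-learning approach is meant to address.
```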


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06F9/455, G06F9/50, G06N20/20
CPC: G06F9/45558, G06F9/5027, G06N20/20, G06F2009/45562
Inventor: 白紫星
Owner: 白紫星