
Method for realizing dynamic deployment of a high-performance server based on cluster structure

A dynamic deployment and server technology, applied in multi-programming devices, digital transmission systems, electrical components, etc.

Active Publication Date: 2006-03-08
LANGCHAO ELECTRONIC INFORMATION IND CO LTD +1
Cites: 0 · Cited by: 10

AI Technical Summary

Problems solved by technology

However, since a node's computing characteristics are determined by its local operating system and system software, nodes in different functional partitions cannot change the applications they support. Even if a node from a lightly utilized functional partition is added to a heavily utilized one, it cannot necessarily take over tasks in that heavily utilized partition.

Examples


Embodiment 1

[0028] As shown in Figure 2, the cluster supports three types of applications, and its nodes run in functional partitions 1, 2, and 3 respectively. Assume that the workload in functional partition 1 is currently very heavy, while the workload in functional partition 2 is light. Dynamic deployment proceeds as follows (a minimal sketch of this flow is given after the steps):

[0029] 1. Monitor computing resources newly added to the cluster as backup computing resources;

[0030] 2. Monitor the workload of the nodes in each functional partition of the cluster, and designate nodes with lighter workloads as backup computing resources;

[0032] 3. Extract a computing node, for example C2, from functional partition 2;

[0033] 4. Dynamically bind the image S1 required by functional partition 1 to C2;

[0034] 5. C2 boots and executes the image S1, building a new computing node C' that supports functional partition 1;

[0035] 6. Add C' to functional partition 1 to increase its computing power...
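
The patent does not publish code; the following is a minimal Python sketch of the rebalancing flow in steps 1-6, with hypothetical names such as Node, Partition, bind_image, and rebalance.

```python
# Minimal sketch of the rebalancing flow in steps 1-6 (hypothetical names;
# the patent does not publish an implementation).
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float = 0.0          # e.g. CPU utilization in [0, 1]
    image: str | None = None   # identifier of the bound system image

@dataclass
class Partition:
    name: str
    image: str                 # system image this partition's nodes must run
    nodes: list[Node] = field(default_factory=list)

def bind_image(node: Node, image: str) -> Node:
    """Steps 4-5: bind the image required by the target partition to the node
    and boot it from that image, producing a new computing node (C2 -> C')."""
    node.image = image
    node.load = 0.0            # a freshly booted node starts idle
    return node

def rebalance(heavy: Partition, light: Partition) -> None:
    """Steps 2-6: take the least-loaded node from the lightly loaded partition,
    rebind it to the heavy partition's image, and add it to that partition."""
    if not light.nodes:
        return
    donor = min(light.nodes, key=lambda n: n.load)        # e.g. node C2
    light.nodes.remove(donor)
    heavy.nodes.append(bind_image(donor, heavy.image))    # C2 becomes C'

# Example matching the embodiment: partition 1 is heavy, partition 2 is light.
p1 = Partition("partition-1", image="S1", nodes=[Node("C1", load=0.95)])
p2 = Partition("partition-2", image="S2",
               nodes=[Node("C2", load=0.10), Node("C3", load=0.20)])
rebalance(p1, p2)
print([n.name for n in p1.nodes])   # ['C1', 'C2'] -- C2 now serves partition 1
```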

Embodiment 2

[0039] Figure 3 shows a dynamic deployment console; it controls the interaction among the console, the computing nodes, and the NFS server while a node newly added to the cluster boots from an image stored on the NFS server. If computing nodes are built only from images on the NFS server, no SAN equipment is needed, which saves the investment in expensive SAN hardware. However, using the NFS server as centralized storage introduces an I/O access bottleneck, so building nodes from storage images on the NFS server should be regarded as an inexpensive solution for scenarios with low performance requirements.
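
As an illustration only (not taken from the patent), a deployment console could point a new node at an NFS-exported image using standard PXE/NFS-root conventions; the server address, paths, and function names below are hypothetical.

```python
# Illustrative sketch: generate a per-node PXE boot entry whose kernel command
# line mounts the node's root file system from the NFS server.
from pathlib import Path

TFTP_ROOT = Path("/var/lib/tftpboot")   # hypothetical tftp root directory
NFS_SERVER = "10.0.0.1"                 # hypothetical NFS server address

def write_pxe_entry(mac: str, image_path: str) -> Path:
    """Write a pxelinux config so the node boots its kernel over tftp and
    mounts its root file system from the NFS-exported image."""
    # pxelinux looks for a file named after the MAC address, e.g. 01-aa-bb-...
    cfg = TFTP_ROOT / "pxelinux.cfg" / ("01-" + mac.lower().replace(":", "-"))
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(
        "DEFAULT nfsboot\n"
        "LABEL nfsboot\n"
        "  KERNEL vmlinuz\n"
        "  APPEND initrd=initrd.img root=/dev/nfs "
        f"nfsroot={NFS_SERVER}:{image_path} ip=dhcp rw\n"
    )
    return cfg

# Example: bind node C2 (by MAC address) to image S1 exported at /exports/images/S1
write_pxe_entry("AA:BB:CC:DD:EE:02", "/exports/images/S1")
```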

[0040] In Figure 3, steps 1-14, apart from steps 4-10, are the same as the process of setting up a diskless workstation with the NFS, PXE, and tftp tools. The purpose of steps 4-10 is to check whether the computing resources match the bound image resources, to prevent the differences caused...
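
The patent does not detail how this check is performed; the following is a hypothetical sketch of the kind of comparison steps 4-10 describe, with invented names such as NodeInfo and ImageRequirements.

```python
# Hypothetical sketch of the check in steps 4-10 of Figure 3: compare a node's
# hardware description with the requirements recorded for the bound image, and
# refuse the binding if they do not match.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class NodeInfo:
    arch: str            # e.g. "x86_64"
    memory_mb: int
    nics: set[str]       # network interface types present on the node

@dataclass
class ImageRequirements:
    arch: str
    min_memory_mb: int
    required_nics: set[str]   # interfaces the image's drivers expect

def image_matches_node(node: NodeInfo, req: ImageRequirements) -> list[str]:
    """Return a list of mismatch reasons; an empty list means the node may
    boot the image (steps 4-10 succeed)."""
    problems = []
    if node.arch != req.arch:
        problems.append(f"architecture {node.arch} != {req.arch}")
    if node.memory_mb < req.min_memory_mb:
        problems.append(f"memory {node.memory_mb} MB < {req.min_memory_mb} MB")
    missing = req.required_nics - node.nics
    if missing:
        problems.append(f"missing interfaces: {sorted(missing)}")
    return problems

node = NodeInfo(arch="x86_64", memory_mb=4096, nics={"ethernet"})
req = ImageRequirements(arch="x86_64", min_memory_mb=2048, required_nics={"ethernet"})
assert image_matches_node(node, req) == []   # binding allowed
```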

Embodiment 3

[0042] As shown in Figure 4, when the dynamic deployment console controls a node newly added to the cluster so that it starts from an image on the SAN, the key is to use the two-stage boot process provided by the operating system itself. In the first stage, the boot function of the Ethernet card is used to load the operating system kernel over Ethernet together with the driver for the storage card (steps 12 and 13). In the second stage, after the operating system startup phase has finished and before the root file system is fixed, the storage card, which has no network boot function of its own, is recognized as a storage device, and the root file system is switched to the identified SAN storage (step 14). Although the node boots over low-speed Ethernet, during operation it can use a high-speed network (such as InfiniBand or a Fibre Channel network) for high-speed communication and network storage.
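
The control flow of this two-stage boot can be modelled as follows; this is a hypothetical Python sketch of the logic (a real system would do this inside the initial ramdisk, not in user-level Python), and the function names are invented.

```python
# Illustrative model of the two-stage boot in Embodiment 3.
# Stage 1 boots the kernel over Ethernet and loads the SAN storage driver;
# stage 2 finds the SAN device and switches the root file system to it.

def stage1_network_boot(load_kernel, load_driver):
    """Steps 12-13: use the Ethernet card's boot function to fetch the kernel
    over the network, then load the driver for the storage card."""
    load_kernel()           # kernel and initial ramdisk fetched over Ethernet
    load_driver("san_hba")  # driver for the storage card that cannot netboot

def stage2_switch_root(scan_storage, switch_root):
    """Step 14: after OS startup, but before the root file system is fixed,
    identify the SAN storage device and switch the root file system to it."""
    device = scan_storage()             # e.g. a volume exposed by the SAN
    if device is None:
        raise RuntimeError("bound SAN volume not found")
    switch_root(device)                 # root now lives on high-speed storage

# Toy usage with stand-in callables.
stage1_network_boot(lambda: print("kernel loaded over Ethernet"),
                    lambda name: print(f"driver {name} loaded"))
stage2_switch_root(lambda: "/dev/sda",
                   lambda dev: print(f"root switched to {dev}"))
```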

[0043] It can be seen from the above-mentioned embodiments that the dy...


Abstract

The method comprises the following steps: separating computing resources from storage resources in a computer cluster and assigning them distinct identifiers; dynamically binding computing resources and storage resources with different identifiers to build new computing nodes; and changing the computing characteristics of the new computing nodes and adding them to functional partitions that are short of resources, so as to raise the overall utilization of the server. The method changes the computing characteristics of nodes quickly and easily, and dynamically adds idle resources with changed computing characteristics to heavily loaded functional partitions, thereby raising server utilization.
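
A minimal sketch of the identifier scheme described in the abstract, with hypothetical type names: computing resources and storage resources carry distinct identifiers, and dynamically binding one of each yields a new computing node.

```python
# Hypothetical data model for the abstract's identifier/binding scheme.
from dataclasses import dataclass

@dataclass(frozen=True)
class ComputeResource:
    compute_id: str          # identifies bare hardware (CPU, memory, NIC)

@dataclass(frozen=True)
class StorageResource:
    storage_id: str          # identifies a bootable system image

@dataclass(frozen=True)
class ComputingNode:
    compute: ComputeResource
    storage: StorageResource  # the binding decides which applications the node supports

def bind(compute: ComputeResource, storage: StorageResource) -> ComputingNode:
    """Dynamically bind a compute resource to a storage resource; rebinding the
    same hardware to a different image changes its computing role."""
    return ComputingNode(compute, storage)

node = bind(ComputeResource("C2"), StorageResource("S1"))
```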

Description

Technical field

[0001] The invention relates to the field of high-performance server architecture, and in particular to a method for realizing dynamic deployment of high-performance servers based on a cluster structure.

Background technique

[0002] With the rapid development of network technology, 10 Gigabit Ethernet and 10 Gb InfiniBand networks have matured and entered use one after another, making high-speed interconnection between computing resources and storage resources possible. As operating systems have evolved toward networked distributed systems, network protocol support has become an essential feature of modern operating system software, and support for network booting has been added to the design of network cards and other components. As a result, the carrier of the computing environment can be selected at a certain stage of the system startup process, and it becomes possible to establish the computing environment on network storage resources. ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/46, H04L12/24
Inventors: 王恩东, 李景山, 魏健, 王守昊, 胡雷钧, 董小社, 伍卫国
Owner: LANGCHAO ELECTRONIC INFORMATION IND CO LTD