
Distributed decision making supporting massive high-concurrency access I/O (Input/output) server load balancing system

A distributed server technology in the computer field, intended to solve problems such as the absence of, and imbalance in, load distribution among I/O servers, and to achieve effects such as a minimized probability of I/O access conflicts.

Inactive Publication Date: 2013-07-03
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

Related prior work includes: a hierarchical storage method for massive load data (application number 200710118116.5); a metadata service system with metadata synchronization and write-server update methods (200810224708.X); a dynamic data distribution method for cluster file systems (201210184965.1); a method and device for extracting file layout from a parallel file system (201110003511.5); a dynamic load balancing method for cluster servers (03118565.7); and a server load balancing method with weighted least-connection allocation (03147308.3). However, existing mature parallel-file-system load balancing technology and patent research, both domestic and foreign, focus on load balancing among computing nodes and lack load balancing among large-scale I/O servers in parallel file systems, especially under the combination of "large-scale I/O servers" and "complex I/O access patterns" (such as small-file I/O access in PB-scale environments). For higher-performance computers, research on load balancing among large-scale I/O servers in parallel file systems is scarce or even absent. Finally, traditional load balancing methods are insufficient to solve the load-imbalance problems in such machines caused by the significant increase in the number of servers and by complex I/O access patterns.
There is currently no server load balancing system for large-scale, high-concurrency-access I/O servers in high-performance computer file systems that supports distributed decision making based on the basic load-balancing workflow. In particular, no existing research or patent covers a load balancing method and system that simultaneously supports three functions: file-level striping for high-concurrency file access requests, file striping that minimizes I/O access conflicts, and dynamic adaptive load balancing based on a distributed architecture.




Embodiment Construction

[0039] To express the purpose, technical solutions, and advantages of the present invention more clearly, the invention is described in further detail below, taking PVFS (a typical parallel file system platform) and data server load balancing as examples, in conjunction with the accompanying drawings (Figures 1-5) and specific embodiments, which do not constitute a limitation of the invention. The specific implementation is as follows:

[0040] First, the mathematical symbols used in this example are described in Table 1.

[0041] Table 1. Mathematical symbols involved and their actual meanings

[0042]

[0043] As shown in Figure 1, based on the parallel I/O system architecture (from top to bottom: application layer -> parallel file system layer -> physical hardware layer), the typical parallel file system architecture (client, metadata server, and data server), and the load balancing processi...
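The embodiment above is truncated, but the PVFS-style file-level striping it builds on can be illustrated with a minimal sketch. This is not the patent's own algorithm; the function name, stripe size, and server count below are illustrative assumptions showing how a byte offset in a striped file maps to a data server under plain round-robin striping:

```python
def locate_stripe(offset, stripe_size, num_servers):
    """Map a file byte offset to (data-server index, offset within that
    server's local object) under round-robin striping, the default
    distribution used by PVFS-style parallel file systems."""
    stripe_index = offset // stripe_size        # which stripe the byte falls in
    server = stripe_index % num_servers         # round-robin server assignment
    local_stripe = stripe_index // num_servers  # stripes this server already holds
    local_offset = local_stripe * stripe_size + offset % stripe_size
    return server, local_offset

# Example: 64 KiB stripes over 4 data servers.
print(locate_stripe(0, 65536, 4))       # (0, 0)
print(locate_stripe(65536, 65536, 4))   # (1, 0)
print(locate_stripe(262144, 65536, 4))  # (0, 65536)
```

Under this mapping, a highly concurrent sequential read naturally fans out across all data servers, which is the property the patent's file-level striping module exploits and refines.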



Abstract

The invention discloses a server load balancing system that supports distributed decision making for massive high-concurrency I/O access. It addresses shortcomings of high-performance computers: the lack of file-level striping for highly concurrent data access, of a stripe-oriented file allocation method that fully accounts for dynamic file-access characteristics, and of load balancing that supports distributed decision making. The system provides a novel, highly scalable dynamic load-balancing mechanism that fully considers network delay and migration cost. Modularly, it mainly comprises file-level striping for high-concurrency access requests, file allocation that minimizes I/O access conflicts, and load balancing based on distributed decision making. The system meets the application needs of load balancing in high-performance computers with massive concurrency and distributed decision making, and therefore has broad application prospects and the potential for remarkable economic benefit.
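The abstract's second module, file allocation that minimizes I/O access conflicts, is not spelled out in the text available here. As a hedged illustration of the general idea only (the "heat" metric, greedy rule, and function name are assumptions, not the patent's method), a conflict-minimizing allocator can be sketched as a greedy min-load placement that spreads the hottest files across different servers:

```python
import heapq

def allocate_files(file_heats, num_servers):
    """Greedy sketch of conflict-minimizing file allocation: place each
    file (hottest first) on the server with the least accumulated access
    heat, so that concurrently accessed files tend to land on different
    servers. Illustrative only; not the patent's published algorithm."""
    heap = [(0.0, s) for s in range(num_servers)]  # (accumulated heat, server id)
    heapq.heapify(heap)
    placement = {}
    for fname, heat in sorted(file_heats.items(), key=lambda kv: -kv[1]):
        load, server = heapq.heappop(heap)         # least-loaded server so far
        placement[fname] = server
        heapq.heappush(heap, (load + heat, server))
    return placement

# Four files with access "heat" 9, 7, 4, 2 over two servers:
print(allocate_files({"a": 9, "b": 7, "c": 4, "d": 2}, 2))
# {'a': 0, 'b': 1, 'c': 1, 'd': 0}  -> both servers end at heat 11
```

The greedy rule is the classic longest-processing-time heuristic; the patent's actual allocator additionally weighs dynamic access features, network delay, and migration cost, which this sketch omits.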

Description

Technical Field [0001] The invention discloses a large-scale I/O server load balancing system, in particular a load balancing system for large-scale, high-concurrency-access I/O servers in a high-performance computer file system that supports distributed decision making. I/O refers to input/output. The invention belongs to the field of computer technology. Background [0002] The parallel input/output (I/O) system has long been an active research direction in computer system architecture; it aims to create multiple data paths between memory and disk to alleviate the I/O performance bottleneck of computer systems. As one of the core software components of a parallel I/O system, the parallel file system not only provides the semantics and interfaces required for parallel access to file data, but also uses file segmentation, file allocation, and dynamic load balancing among data servers to ensure the aggregate access speed of file data. ...
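The third mechanism named above, dynamic load balancing among data servers with distributed decision making, means each server decides locally rather than consulting a central coordinator. A minimal sketch of such a per-server decision, in the diffusion style and under assumed thresholds and an assumed cost model (none of these constants come from the patent), looks like this:

```python
def migration_decision(local_load, neighbor_loads,
                       threshold=0.2, migration_cost=1.0, delay=0.1):
    """Sketch of a distributed load-balancing decision: a server compares
    its own load with its neighbors' average and migrates work only when
    the imbalance outweighs migration cost plus network delay. Returns the
    amount of load to shed (0.0 means stay put). Illustrative assumptions
    throughout; not the patent's decision rule."""
    avg = sum(neighbor_loads) / len(neighbor_loads)
    imbalance = local_load - avg
    if imbalance <= threshold * max(avg, 1e-9):
        return 0.0                 # balanced enough: no migration
    benefit = imbalance / 2        # load shed by equalizing toward the average
    if benefit <= migration_cost + delay:
        return 0.0                 # imbalance too small to pay the overhead
    return benefit

# An overloaded server among lightly loaded neighbors migrates half the gap:
print(migration_decision(10.0, [2.0, 2.0, 2.0]))  # 4.0
print(migration_decision(2.0, [2.0, 2.0, 2.0]))   # 0.0
```

Because every server runs this check independently on locally gathered neighbor state, there is no single decision point to saturate, which is the scalability argument the patent makes for distributed decision making.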

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L29/08; G06F9/50
Inventors: 阮利 (Ruan Li), 董斌 (Dong Bin), 肖利民 (Xiao Limin), 祝明发 (Zhu Mingfa)
Owner: BEIHANG UNIV