
System and method for network traffic management and load balancing

A network traffic management and load balancing technology, applied in the field of network traffic management and load balancing. The technology requires no special hardware or software to purchase and nothing to install or maintain, and achieves the effects of improving web application performance, scalability and availability.

Publication Date: 2010-09-02 (Inactive)
YOTTAA
82 Cites | 447 Cited by

AI Technical Summary

Benefits of technology

[0025]In general, in another aspect, the invention features a system for providing load balancing among a set of computing nodes running a network accessible computer service. The system includes a first network providing network connections between a set of computing nodes and a plurality of clients, a computer service that is hosted at one or more servers comprised in the set of computing nodes and is accessible to clients via the first network and a second network comprising a plurality of traffic processing nodes and load balancing means. The load balancing means is configured to provide load balancing among the set of computing nodes running the computer service. The system also includes means for redirecting network traffic comprising client requests to access the computer service from the first network to the second network, means for selecting a traffic processing node of the second network for receiving the redirected network traffic, means for determining for every client request for access to the computer service an optimal computing node among the set of computing nodes running the computer service by the traffic processing node via the load balancing means, and means for routing the client request to the optimal computing node by the traffic processing node via the second network. The system also includes real-time monitoring means that provide real-time status data for selecting optimal traffic processing nodes and optimal computing nodes during traffic routing, thereby minimizing service disruption caused by the failure of individual nodes.
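As a rough, non-authoritative illustration of the node-selection step described in paragraph [0025], the sketch below shows how real-time status data from the monitoring means might drive the choice of an optimal computing node. The names (NodeStatus, pick_optimal_node) and the least-load, lowest-latency heuristic are assumptions for illustration only, not the patented algorithm.

    # Hypothetical sketch: choosing an "optimal" computing node from
    # real-time monitoring data. The scoring heuristic is an assumption.
    import time
    from dataclasses import dataclass

    @dataclass
    class NodeStatus:
        node_id: str
        healthy: bool
        load: float        # fraction of capacity in use, 0.0 to 1.0
        latency_ms: float  # recent response latency seen by the monitors
        updated_at: float  # timestamp of the last status report

    def pick_optimal_node(statuses, max_age_s=10.0):
        """Return the healthiest, least-loaded node with fresh status data."""
        now = time.time()
        fresh = [s for s in statuses
                 if s.healthy and (now - s.updated_at) <= max_age_s]
        if not fresh:
            raise RuntimeError("no healthy computing node available")
        # Prefer low load; break ties with low observed latency.
        return min(fresh, key=lambda s: (s.load, s.latency_ms))

Skipping stale or unhealthy entries is one way the goal of minimizing service disruption caused by the failure of individual nodes could be realized.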
[0026]Among the advantages of the invention may be one or more of the following. The present invention deploys software onto commodity hardware (instead of special hardware devices) and provides a service that performs global traffic management. Because it is provided as a web-delivered service, it is much easier to adopt and much easier to maintain. There is no special hardware or software to purchase, and there is nothing to install and maintain. Compared to load balancing approaches in the prior art, the system of the present invention is much more cost-effective and flexible in general. Unlike load balancing techniques for content delivery networks, the present invention is designed to provide traffic management for dynamic web applications whose content cannot be cached. The server nodes could be within one data center, multiple data centers, or distributed over distant geographic locations. Furthermore, some of these server nodes may be “Virtual Machines” running in a cloud computing environment.
[0027]The present invention is a scalable, fault-tolerant traffic management system that performs load balancing and failover. Failure of individual nodes within the traffic management system does not cause the failure of the system. The present invention is designed to run on commodity hardware and is provided as a service delivered over the Internet. The system is horizontally scalable. Computing power can be increased by just adding more traffic processing nodes to the system. The system is particularly suitable for traffic management and load balancing for a computing environment where node stopping and starting is a common occurrence, such as a cloud computing environment.
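One way the failover behavior mentioned above could look in code is sketched below; the retry-in-preference-order policy and the helper names are assumptions for illustration, not taken from the patent text.

    # Hypothetical failover sketch: try computing nodes in preference order
    # until one of them serves the request.
    def route_with_failover(request, ranked_nodes, send):
        """Try nodes in order; return the first successful response."""
        last_error = None
        for node in ranked_nodes:
            try:
                return send(node, request)
            except ConnectionError as err:
                last_error = err  # this node failed; fall through to the next
        raise RuntimeError("all computing nodes failed") from last_error

    # Example: the second node answers after the first is unreachable.
    def fake_send(node, request):
        if node == "app-1.example.com":
            raise ConnectionError("node down")
        return f"{node} served {request}"

    print(route_with_failover("/checkout",
                              ["app-1.example.com", "app-2.example.com"],
                              fake_send))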
[0028]Furthermore, the present invention also takes session stickiness into consideration so that requests from the same client session can be routed to the same computing node persistently when session stickiness is required. Session stickiness, also known as “IP address persistence” or “server affinity” in the art, means that different requests from the same client session will always be routed to the same server in a multi-server environment. “Session stickiness” is required for a variety of web applications to function correctly.
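A minimal sketch of session stickiness, assuming the session key is hashed to pick a server, is shown below; the hashing scheme and server names are illustrative assumptions rather than the patent's mechanism.

    # Hypothetical stickiness sketch: the same session key always maps to
    # the same server while the server list is unchanged.
    import hashlib

    def sticky_route(session_key, servers):
        """Deterministically map a client session to one server."""
        if not servers:
            raise ValueError("no servers available")
        digest = hashlib.sha256(session_key.encode("utf-8")).hexdigest()
        return servers[int(digest, 16) % len(servers)]

    servers = ["app-1.example.com", "app-2.example.com", "app-3.example.com"]
    assert sticky_route("client-42-session", servers) == \
           sticky_route("client-42-session", servers)

Note that simple modulo hashing reshuffles sessions when the server list changes; a production system would typically combine stickiness with the monitoring and failover behavior described elsewhere in this document.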
[0030]The present invention may also be used to provide an on-demand service delivered over the Internet to web site operators to help them improve their web application performance, scalability and availability, as shown in FIG. 20. Service provider H00 manages and operates a global infrastructure H40 providing web performance-related services, including monitoring, load balancing, traffic management, scaling and failover, among others. The global infrastructure has a management and configuration user interface (UI) H30, as shown in FIG. 20, for customers to purchase, configure and manage services from the service provider. Customers include web operator H10, who owns and manages web application H50. Web application H50 may be deployed in one or more data centers, in one or multiple locations, or run as virtual machines in a distributed cloud computing environment. H40 provides services including monitoring, traffic management, load balancing and failover to web application H50, which results in delivering better performance, better scalability and better availability to web users H20. In return for using the service, web operator H10 pays a fee to service provider H00.

Problems solved by technology

Prior-art load balancing approaches require special hardware or software to be purchased, installed, and maintained.



Examples


Embodiment Construction

[0055]The present invention utilizes an overlay virtual network to provide traffic management and load balancing for networked computer services that have multiple replicated instances running on different servers in the same data center or in different data centers.

[0056]Traffic processing nodes are deployed on the physical network through which client traffic travels to data centers where a network application is running. These traffic processing nodes are called “Traffic Processing Units” (TPU). TPUs are deployed at different locations, with each location forming a computing cloud. All the TPUs together form a “virtual network”, referred to as a “cloud routing network”. A traffic management mechanism intercepts all client traffic directed to the network application and redirects it to the TPUs. The TPUs perform load balancing and direct the traffic to an appropriate server that runs the network application. Each TPU has a certain amount of bandwidth and processing capacity. These...
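The excerpt above does not spell out how client traffic is intercepted; one common mechanism, sketched below purely as an assumption, is DNS-style redirection in which the application's hostname resolves to the address of a nearby TPU so that all client traffic enters the cloud routing network first. The region-to-TPU table and hostnames are illustrative.

    # Hypothetical redirection sketch: answer a client's lookup of the
    # application hostname with the TPU closest to that client.
    TPU_BY_REGION = {
        "us-east": "tpu-east.example.net",
        "us-west": "tpu-west.example.net",
        "eu":      "tpu-eu.example.net",
    }

    def resolve_for_client(client_region, default_region="us-east"):
        """Pick the TPU that should receive this client's redirected traffic."""
        return TPU_BY_REGION.get(client_region, TPU_BY_REGION[default_region])

    print(resolve_for_client("eu"))  # -> tpu-eu.example.net

The selected TPU would then apply the load balancing described in paragraphs [0055] and [0056] and forward the request to an appropriate server.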



Abstract

A method for providing load balancing and failover among a set of computing nodes running a network accessible computer service includes providing a computer service that is hosted at one or more servers comprised in a set of computing nodes and is accessible to clients via a first network. Providing a second network including a plurality of traffic processing nodes and load balancing means. The load balancing means is configured to provide load balancing among the set of computing nodes running the computer service. Providing means for redirecting network traffic comprising client requests to access the computer service from the first network to the second network. Providing means for selecting a traffic processing node of the second network for receiving the redirected network traffic comprising the client requests to access the computer service and redirecting the network traffic to the traffic processing node via the means for redirecting network traffic. For every client request for access to the computer service, determining an optimal computing node among the set of computing nodes running the computer service by the traffic processing node via the load balancing means, and then routing the client request to the optimal computing node by the traffic processing node via the second network.

Description

CROSS REFERENCE TO RELATED CO-PENDING APPLICATIONS
[0001]This application claims the benefit of U.S. provisional application Ser. No. 61/156,050 filed on Feb. 27, 2009 and entitled METHOD AND SYSTEM FOR SCALABLE, FAULT-TOLERANT TRAFFIC MANAGEMENT AND LOAD BALANCING, which is commonly assigned and the contents of which are expressly incorporated herein by reference.
[0002]This application claims the benefit of U.S. provisional application Ser. No. 61/165,250 filed on Mar. 31, 2009 and entitled CLOUD ROUTING NETWORK FOR BETTER INTERNET PERFORMANCE, RELIABILITY AND SECURITY, which is commonly assigned and the contents of which are expressly incorporated herein by reference.
FIELD OF THE INVENTION
[0003]The present invention relates to network traffic management and load balancing in a distributed computing environment.
BACKGROUND OF THE INVENTION
[0004]The World Wide Web was initially created for serving static documents such as Hyper-Text Markup Language (HTML) pages, text files, images, au...


Application Information

IPC(8): G06F15/16, G06F15/177, G06F9/455, G06F15/173
CPC: H04L29/04, H04L67/1023, H04L45/126, H04L47/125, H04L61/1511, H04L63/1433, H04W4/02, H04L67/1097, H04L67/1008, H04L67/1027, H04L67/18, H04L67/1002, H04L67/327, H04L67/1004, H04L67/1017, H04L29/12066, H04L61/4511, H04L67/1001, H04L67/52, H04L67/63, H04L69/14
Inventor: WEI, COACH
Owner: YOTTAA