Load Balancing for Network Servers

From Computing and Software Wiki

Revision as of 20:44, 12 April 2009

Load balancing refers to methods used to distribute network traffic amongst multiple hosts. This can be done by having different hosts used for different tasks (for example, separate servers for image and text content for a website) or by using a pool of redundant servers from which a load balancer can choose a single host to use for a given connection. Load balancing is usually achieved transparently to the client—that is, the service requested by the client appears to come from one place, even though it may be coming from multiple servers or a server at a different IP address.
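The pool-of-redundant-servers approach can be sketched with a simple round-robin policy. This is a minimal illustration in Python, not any particular product's algorithm, and the host addresses are hypothetical:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand each new connection to the next server in a fixed rotation."""

    def __init__(self, hosts):
        self._pool = cycle(hosts)

    def pick_host(self):
        # The client always connects to the balancer's single public
        # address; which backend actually serves it is transparent.
        return next(self._pool)

# Hypothetical backend pool.
balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
assignments = [balancer.pick_host() for _ in range(6)]
```

Six consecutive connections cycle through the three backends twice, so each host sees an equal share of the traffic. Real balancers often weight this rotation by server capacity or current load.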

Effects of Load Balancing

Load balancing is motivated by the need to distribute load across enough resources to service network requests. It can be used to ease network traffic by routing requests to various destination hosts, to make use of the increased computing power available in multiple hosts, or to increase storage capacity by allowing the use of more than one host. These three problems can frequently go hand-in-hand and effective load balancing can solve all three.

Common Scenarios

Load balancing is used in many network-related applications, but perhaps its best-documented use is in web applications. Web sites generally run on commodity server hardware, since more powerful machines (such as supercomputers) are extremely expensive, and many of the issues involved with using (relatively) cheap hardware have been solved through research in distributed computing, of which load balancing is a part. Extremely popular websites such as Google, Slashdot, or Facebook each field hundreds (if not thousands) of requests per second, and having a single server service these requests is simply untenable. Bandwidth, compute power, and storage are all bottlenecks here. Even if enough bandwidth were available, a single server would not have enough RAM to handle the overhead of maintaining that many TCP connections.

Large websites frequently use load balancing in three ways:

  1. Offloading bandwidth-intensive content such as images, video, and other media to Content Delivery Networks, discussed briefly below.
  2. Redirecting a request to a server that is physically close to the user and/or is not at its load capacity.
  3. Splitting databases into segments stored on different servers. Heuristics can determine which parts of each database are stored on which server, or segments can be distributed randomly. The idea is to reduce the number of requests that reach each database server.
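The third technique, splitting a database across servers, is often called sharding. One common heuristic is to hash a record's key and use the hash to pick a server, so each shard receives a roughly equal fraction of the queries. A minimal sketch in Python, with hypothetical shard hostnames:

```python
import hashlib

# Hypothetical database servers, each holding one segment of the data.
SHARDS = ["db1.example.com", "db2.example.com", "db3.example.com"]

def shard_for(key: str) -> str:
    """Map a record key to one database server deterministically,
    so every request for the same key goes to the same shard."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping is deterministic, no central lookup table is needed; any front-end server can compute which shard holds a given record. The trade-off is that adding or removing a shard remaps most keys, which is why production systems often use consistent hashing instead.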

Methods of Load Balancing

Dividing Servers Based On Use

One of the simplest forms of load balancing, this method divides servers based on intended use. In the simplest configuration, one or more servers host or generate HTML pages, while a separate server (or several) hosts any images displayed on the site. For example, a user might access www.example.com, which serves HTML containing embedded image tags that point to images.example.com, a different host.
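The mechanism is simply a matter of which hostnames the generated HTML references. A small illustrative sketch in Python (the hostnames and helper function are hypothetical):

```python
# Hypothetical dedicated image host; DNS points it at different machines
# than www.example.com.
MEDIA_HOST = "http://images.example.com"

def render_page(image_names):
    """Generate HTML whose <img> tags reference the dedicated image host,
    so the main server only has to deliver the (small) HTML document."""
    tags = "\n".join(f'<img src="{MEDIA_HOST}/{name}">' for name in image_names)
    return f"<html><body>\n{tags}\n</body></html>"

page = render_page(["logo.png", "banner.jpg"])
```

The browser then fetches the page from one host and the images from another, splitting the bandwidth load between them with no client-side configuration.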

Content Delivery Networks are an extension of this concept. A CDN is a global network of servers owned by a third party such as Akamai which hosts content for client web sites. This content is frequently larger files such as images, video, and other media. The CDN determines the server closest to the user and serves the requested content from there. CDNs are used to increase the speed at which users can access content by decreasing bandwidth demands on client (or "source") sites. The operation of CDNs is a very complex topic and is beyond the scope of this article. See below for more information.

Web sites frequently separate their operations into two or three functional tiers or layers:

  1. Presentation tier – this tier takes care of rendering the web page. (For example, in a LAMP stack it executes PHP code to generate HTML and CSS.)
  2. Business tier – this tier involves processing data supplied by the data tier (below) and supplying the end result to the presentation tier for display to the user.
  3. Data tier – this tier is responsible for maintaining the data underlying the application. This frequently involves maintaining and querying databases.

This topology, known as a three-tier architecture, not only enforces a separation of concerns (an important design principle for maintainable code) but also, by assigning each server a specific task, allows computation to be easily distributed among servers.
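The three tiers can be sketched as three layers of functions, each calling only the layer below it. This is a toy illustration in Python (the data, names, and rendering are invented for the example; in a real deployment each tier would run on separate servers):

```python
# Data tier: owns the underlying records (a dict standing in for a database).
DB = {"42": {"name": "Ada", "visits": 3}}

def data_tier_fetch(user_id):
    """Query the data store for a raw record."""
    return DB.get(user_id)

# Business tier: applies application logic to raw data.
def business_tier_profile(user_id):
    record = data_tier_fetch(user_id)
    if record is None:
        return None
    # Hypothetical business rule: three or more visits means "loyal".
    return {"name": record["name"], "loyal": record["visits"] >= 3}

# Presentation tier: renders the result for the user.
def presentation_tier_render(user_id):
    profile = business_tier_profile(user_id)
    if profile is None:
        return "<p>User not found</p>"
    badge = " (loyal customer)" if profile["loyal"] else ""
    return f"<p>Hello, {profile['name']}{badge}</p>"
```

Because each tier talks only to its neighbor, any tier can be scaled out (or load balanced) independently: more presentation servers for rendering load, more data servers for query load.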

Multiple Redundant Servers

Traditional Load Balancing

Optimizing the Return Trip

Caching

Caching is used to reduce load on servers in the data tier and sometimes also in the business tier. The principle is that results of common database queries are cached in RAM so that fewer requests require a read from disk. Traditional, platter-based hard drives are usually the slowest component in any server, which creates a significant bottleneck in applications that rely heavily on databases. Additionally, some database operations, such as JOINs, are computationally very expensive. If the results of these costly queries can be cached in RAM, fewer disk accesses are required in normal operation, so the application is faster and load on the database server is reduced.

In LAMP stacks, one of the most popular caching tools is Memcache. Memcache implements what is essentially a very large hash map, allowing very quick access to cached data. Client libraries exist for PHP, Java, Ruby, Python, and several other languages.
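The usual pattern is cache-aside: check the cache first and only fall through to the database on a miss. A minimal sketch in Python, using a plain dict to stand in for Memcache's in-RAM key/value store (the query function is invented for the example):

```python
cache = {}      # stands in for Memcache's in-RAM hash map
db_reads = 0    # counts how often we fall through to the (slow) database

def expensive_query(key):
    """Simulate a costly database operation such as a JOIN."""
    global db_reads
    db_reads += 1
    return f"result-for-{key}"

def cached_query(key):
    # Cache-aside: serve from RAM when possible; on a miss, hit the
    # database once and store the result for later requests.
    if key in cache:
        return cache[key]
    value = expensive_query(key)
    cache[key] = value
    return value

first = cached_query("top-posts")    # miss: reads the database
second = cached_query("top-posts")   # hit: served from RAM
```

Repeated requests for the same key cost only one database read; every subsequent lookup is a RAM access. Real Memcache deployments add expiry times and explicit invalidation when the underlying data changes, which this sketch omits.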

Case Study: Facebook's Photo System

See Also

Content delivery network

References

  1. Tony Bourke: Server Load Balancing, O'Reilly, ISBN 0-596-00050-2
  2. Matthew Syme, Philip Goldie: Optimizing network performance with content switching, Prentice Hall PTR, ISBN 0-13-101468-4
  3. Jason Sobel: Needle in a Haystack: Efficient Storage of Billions of Photos
  4. Aditya Agarwal: Facebook: Science and the Social Graph