
Load Balancing and Its Benefits

What is load balancing?

Load balancing is the practice of distributing a workload across multiple computers to improve performance. Depending on the algorithm used, work is spread among resources so that no single resource is overloaded and each one performs better as a result. Network traffic, SSL requests, database queries, and even hardware resources such as memory can all be load balanced. The practice is common in server farms, where multiple physical machines are coordinated to fulfill the requests of many end users.

How is load balancing accomplished?

Load balancing can be accomplished in software, but it is usually handled by a dedicated hardware appliance because of the speed required. A load balancer can use many different algorithms. The most basic is round-robin, which sends each incoming request to the next server in the cluster in turn. More sophisticated techniques make decisions based on CPU usage, the number of queued requests, average response time, or even the number of lost packets. One complication is that a user's session is usually not replicated across the whole server cluster, so a single user's requests are typically made "sticky" and always routed to the same backend server.
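To make those two routing strategies concrete, here is a minimal Python sketch of round-robin selection and hash-based sticky routing. The backend addresses, session ids, and function names are illustrative only; a real load balancer would also track health, weights, and connection counts.

```python
import itertools
import hashlib

# Hypothetical pool of backend servers.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Plain round-robin: hand out backends in order, wrapping around.
_round_robin = itertools.cycle(backends)

def pick_round_robin() -> str:
    return next(_round_robin)

# "Sticky" routing: hash the session id so the same user always lands
# on the same backend, preserving any in-memory session state there.
def pick_sticky(session_id: str) -> str:
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

if __name__ == "__main__":
    print([pick_round_robin() for _ in range(4)])           # cycles .1, .2, .3, .1
    print(pick_sticky("user-42"), pick_sticky("user-42"))   # same backend both times
```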

Another, more global, method of load balancing is Anycast. Anycast is an addressing strategy, often used for DNS, in which hosts in multiple geographic locations share the same IP address. This balances the workload because each request is routed to the closest host (as determined by the routing protocol). If one of the hosts goes offline, requests are routed to the next closest available host. Routing requests to the nearest host has the added benefits of reducing response times and improving reliability.
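In practice this nearest-host selection happens in the routing layer (e.g. BGP), not in application code, but the logic it implements amounts to "pick the closest healthy host, fall back to the next closest." The sketch below simulates that behavior with made-up host names and distances, purely to illustrate the failover step described above.

```python
# Illustrative anycast-style selection: not how anycast is implemented,
# just the effective routing decision it produces.
hosts = [
    {"name": "us-east", "distance": 12, "healthy": True},
    {"name": "eu-west", "distance": 40, "healthy": True},
    {"name": "ap-south", "distance": 95, "healthy": True},
]

def route(hosts):
    # Prefer the nearest host; skip any that are offline.
    candidates = sorted((h for h in hosts if h["healthy"]),
                        key=lambda h: h["distance"])
    if not candidates:
        raise RuntimeError("no healthy hosts available")
    return candidates[0]["name"]

print(route(hosts))              # us-east (closest)
hosts[0]["healthy"] = False
print(route(hosts))              # eu-west, after the nearest host goes offline
```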

Why load balance?

Simply put, work gets done more efficiently when resources are not pushed to their capacity, and as a result user response times are generally better. Often, though, the best reason is failover: the ability of a system to remain operational while one or more components have failed or gone down. Say a clumsy friend trips over a power cord and takes out one of the three nodes in a cluster hosting a website. Users opening new connections to the site will not notice, because the load balancer detects that the node is not responding and routes all requests to the other two nodes until the first one is responsive again. The only users who might notice are those whose sessions were sticky to that node. Failover is also useful for maintenance: if something has to be upgraded across all nodes, the nodes can be taken down and upgraded one at a time without any downtime for the service the cluster is hosting.
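The "detects that the node is not responding" step is typically a periodic health check. Below is a small sketch of that idea, assuming a hypothetical node list and a TCP connect on port 80 as the liveness probe; real load balancers offer richer checks (HTTP status, response time thresholds, retry counts).

```python
import socket

# Hypothetical cluster nodes and the set currently receiving traffic.
nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
in_rotation = set(nodes)

def is_responsive(host: str, port: int = 80, timeout: float = 1.0) -> bool:
    # A TCP connect is a crude but common liveness probe.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_health_checks() -> None:
    for node in nodes:
        if is_responsive(node):
            in_rotation.add(node)        # node recovered: return it to rotation
        else:
            in_rotation.discard(node)    # node down: stop routing requests to it
```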

Load balancing and failover are powerful concepts that give network administrators some peace of mind and let users keep on working.