This article provides a detailed explanation of server-side load balancing. The following is a brief overview to get you started.
- Load balancing refers to efficiently distributing the incoming network traffic across a group of backend servers
- Load balancing is of two types:
  - Server-side load balancing
  - Client-side load balancing
Scope of the Article
This article covers the basic definition of load balancing, explains server-side load balancing, describes its advantages, and notes some consequences to keep in mind when using it.
In server-side load balancing, instances of the service are deployed on multiple servers, and a load balancer, often a dedicated hardware appliance, is placed in front of them. All incoming request traffic first arrives at this load balancer, which acts as a middle component. The load balancer then decides, based on some algorithm, which server a particular request should be directed to.
Put another way: the load balancer accepts incoming network and application traffic and distributes it across multiple backend servers using various methods. This middle component is responsible for distributing client requests among the servers.
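The middle-component idea above can be sketched in a few lines of Python. This is a minimal illustration using a round-robin algorithm; the backend addresses and the `RoundRobinBalancer` class are hypothetical names for this example, not part of any real product.

```python
from itertools import cycle

# Hypothetical backend pool; in practice these would be real server addresses.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

class RoundRobinBalancer:
    """Middle component that picks the next backend for each incoming request."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self):
        # Each request is handed to the next server in the rotation.
        return next(self._pool)

balancer = RoundRobinBalancer(BACKENDS)
targets = [balancer.route() for _ in range(4)]
# The fourth request wraps around to the first backend again.
```

Round-robin is only one of the "various methods" mentioned above; real load balancers also offer strategies such as least-connections or hash-based selection.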
How Does Server Load Balancing Work?
Server load balancing works within two main types of load balancing:
- Transport-level (Layer 4) load balancing makes decisions from network information such as IP addresses and TCP/UDP ports, independently of the application payload.
- Application-level (Layer 7) load balancing inspects the content of the traffic, such as HTTP URLs, headers, or cookies, to make balancing decisions.
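The contrast between the two levels can be sketched as follows. This is an illustrative sketch only: the function names, pool names, and route table are invented for the example, and a stable CRC32 hash stands in for whatever connection-hashing scheme a real balancer uses.

```python
import zlib

def l4_pick(client_ip, client_port, backends):
    """Transport-level: hash connection info (IP and port); the payload is never read."""
    key = f"{client_ip}:{client_port}".encode()
    return backends[zlib.crc32(key) % len(backends)]

def l7_pick(http_path, backends_by_route, default):
    """Application-level: route on request content, here the URL path prefix."""
    for prefix, backend in backends_by_route.items():
        if http_path.startswith(prefix):
            return backend
    return default
```

Note that `l4_pick` returns the same backend for the same connection without ever parsing the request, while `l7_pick` cannot decide anything until it has read the HTTP request line.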
Advantages of Server Load Balancing
Distributing incoming network traffic across multiple servers through a web server load balancer increases the efficiency of application delivery to end users and provides a more reliable application experience. IT teams increasingly rely on server load balancers to:
- Increase scalability: during spikes in traffic, load balancers direct requests to the pool of servers best suited to handle them and, paired with autoscaling, allow server resources to be spun up or down, keeping application performance optimized.
- Provide redundancy: using multiple web servers to deliver applications or websites provides a safeguard against inevitable hardware failure and application downtime. When server load balancers are in place, they can automatically shift traffic from failed servers to working ones with little to no impact on the end user.
- Simplify maintenance and improve performance: businesses with web servers distributed across multiple locations and a variety of cloud environments can schedule maintenance at any time with minimal impact on application uptime, because server load balancers redirect traffic away from resources undergoing maintenance.
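The redundancy and maintenance points above both come down to the same mechanism: the balancer keeps a health view of the pool and only routes to servers it believes are up. A minimal sketch, with an invented `FailoverBalancer` class standing in for the health-check logic a real product provides:

```python
class FailoverBalancer:
    """Routes only to servers currently marked healthy."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(backends)

    def mark_down(self, backend):
        # A failed health check (or scheduled maintenance) removes the
        # server from rotation; clients are never sent to it.
        self.healthy.discard(backend)

    def mark_up(self, backend):
        # A recovered server rejoins the pool automatically.
        self.healthy.add(backend)

    def route(self):
        for backend in self.backends:
            if backend in self.healthy:
                return backend
        raise RuntimeError("no healthy backends")
```

In practice the `mark_down`/`mark_up` transitions are driven by periodic health probes rather than manual calls, but the routing consequence is the same: traffic silently moves to working servers.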
Some Consequences of Using Server-Side Load Balancing
- The server-side load balancer is a single point of failure: if it fails, all instances of the microservice become inaccessible, because only the load balancer holds the list of servers.
- Since each microservice typically gets its own load balancer, the overall complexity of the system increases and it becomes harder to manage.
- Network latency increases because each request now takes two hops instead of one: one from the client to the load balancer, and another from the load balancer to the microservice.
Conclusion
- Server Load Balancing (SLB) is a technology that distributes traffic for high-traffic sites among several servers using a network-based hardware or software-defined appliance.
- The servers can be on-premises in a company's own data centers, or hosted in a private or public cloud.
- Server load balancing distributes client traffic to servers to ensure consistent, high-performance application delivery.
- In doing so, it provides scalability, reliability, and high availability for application delivery.