Understanding Load Balancers: What They Are and How They Function
A load balancer is a crucial component of distributed systems, enabling horizontal scaling by distributing traffic efficiently across multiple servers. It provides a single point of contact for clients to interact with the backend systems.
Why Use a Load Balancer?
Without a load balancer, users would need to know the IP addresses of individual servers to communicate with them. This setup is not only cumbersome but also impractical when dealing with dynamic server environments.
A load balancer has either a static IP address or a static DNS name, which allows clients to interact with the backend seamlessly. This abstraction ensures that clients don’t need to worry about server details or changes.
Key Role of a Load Balancer
The load balancer acts as the intermediary between clients and backend servers. It:
Hides the backend servers from clients.
Enables the addition of servers without the client knowing.
Provides horizontal scalability, as more servers can be added to the pool easily.
Request Flow
Here’s how the load balancer manages traffic:
Client Request: The client sends an API call to the load balancer using its IP or domain name (e.g., http://xyz.com/api/Dashboard).
Load Balancer Processing: The load balancer forwards the request to one of the backend servers based on its algorithm.
Backend Response: The selected server processes the request and sends the response back to the load balancer.
Client Response: The load balancer returns the response to the client.
This process applies not only to user requests but also to inter-server communication within the backend.
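The four steps above can be sketched in a few lines of Python. This is a simplified simulation, not a real proxy: the backends are plain functions standing in for servers, and the `LoadBalancer` class, `backend_a`, and `backend_b` names are hypothetical.

```python
import itertools

class LoadBalancer:
    """Minimal sketch: receives a request, picks a backend, relays the response."""

    def __init__(self, servers):
        # Rotate through backends; selection strategies are covered below.
        self._servers = itertools.cycle(servers)

    def handle(self, request):
        server = next(self._servers)   # pick a backend server
        response = server(request)     # forward the request, await the response
        return response                # relay the response to the client

# Hypothetical backends: functions standing in for real servers.
def backend_a(req): return f"A handled {req}"
def backend_b(req): return f"B handled {req}"

lb = LoadBalancer([backend_a, backend_b])
print(lb.handle("/api/Dashboard"))  # → A handled /api/Dashboard
print(lb.handle("/api/Dashboard"))  # → B handled /api/Dashboard
```

The client only ever talks to `lb`; which backend actually served the request stays hidden, which is exactly the abstraction described above.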
Load Balancer Algorithms
The load balancer uses various algorithms to determine how to distribute requests:
1. Round Robin
Distributes requests sequentially across servers in a uniform manner. Ideal for systems with servers of similar specifications.
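A round-robin selector can be sketched with a counter and the modulo operator. The server addresses here are hypothetical placeholders.

```python
# Round robin sketch: each request goes to the next server in order,
# wrapping around at the end of the list.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical addresses

counter = 0

def next_server():
    global counter
    server = servers[counter % len(servers)]  # wrap around with modulo
    counter += 1
    return server

print([next_server() for _ in range(5)])
# → ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```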
2. Weighted Round Robin
Distributes requests in rotation but accounts for server capacity by assigning weights: a server with a higher weight receives proportionally more requests. Useful when servers have different specifications (e.g., varying RAM, CPU, or GPU).
3. Least Connections
Routes requests to the server with the least active connections. Best suited for scenarios where response times vary significantly, as it prioritizes relatively idle servers.
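Least connections requires the balancer to track how many connections each backend currently has open. A minimal sketch, with hypothetical server names:

```python
# Least connections sketch: route each request to the server with the
# fewest active connections right now.
active = {"server-a": 0, "server-b": 0, "server-c": 0}

def route():
    server = min(active, key=active.get)  # server with fewest connections
    active[server] += 1                   # a connection opens on it
    return server

def finish(server):
    active[server] -= 1                   # the connection closes

print(route())       # → server-a
print(route())       # → server-b  (server-a now has 1 connection)
finish("server-a")   # server-a's request completes
print(route())       # → server-a  (back to zero connections, picked again)
```

Unlike round robin, a backend stuck on a slow request naturally stops receiving new traffic until it catches up.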
4. Hash-Based Routing
Uses a hash of a parameter (e.g., user ID) and assigns requests to a specific server deterministically. This ensures sticky sessions, where a user consistently interacts with the same server.
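Hashing a request attribute and taking it modulo the server count gives a deterministic mapping. The `server_for` helper and the server count below are hypothetical; note that with plain modulo, changing the number of servers remaps most users, which is why real systems often use consistent hashing instead.

```python
import hashlib

NUM_SERVERS = 4  # hypothetical pool size

def server_for(user_id: str) -> int:
    # Hash the user ID, then map the digest onto a server index.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SERVERS

# The same user always lands on the same server (sticky session).
print(server_for("user-42") == server_for("user-42"))  # → True
```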
Advantages of Load Balancers
Scalability
Easily scale horizontally by adding more servers to the backend without affecting the client experience.
Availability
If a server crashes or becomes unhealthy, the load balancer stops routing traffic to it and forwards requests to healthy servers, ensuring continuous availability.
Improved Performance
By distributing traffic effectively, load balancers prevent overloading any single server, ensuring optimal system performance.