Load Balancer | System Design Interview Concept

For more comprehensive system design content and to try out the design questions yourself, check out System Design School.

What is a Load Balancer?

Load balancing means "efficiently distributing incoming network traffic across a group of backend servers" [1]. It is a means of horizontally scaling a web application.

Layer 4 vs Layer 7 Load Balancing

In terms of the OSI networking model, there are two ways to load balance: Layer 4 and Layer 7 load balancing.

Layer 4 is the Transport Layer: Layer 4 load balancing operates at the packet level and requires access to lower-level routing devices. Layer 7 is the Application Layer: Layer 7 load balancing works entirely at the HTTP level. There are pros and cons to load balancing at each level.

Software engineers mostly deal with Layer 7, since the web applications we develop live at Layer 7. So don't worry if you are not familiar with the lower-level networking terminology (that is more the job of network engineers and perhaps SREs). In the following sections, we will explore Layer 7 load balancing with Nginx.

How Load Balancers Work

In the simplest terms, a load balancer sits in front of a number of web servers and distributes incoming traffic among them according to predefined rules. The servers' addresses (IPs) and the load balancing algorithm are defined in the load balancer's config file.

[Figure: how load balancers work]

Types of Load Balancing Algorithms

The following load balancing methods are supported in Nginx:

  • (default) round-robin: requests are distributed across the application servers in circular order, so each server receives an equal share of the traffic. For example, R1 (Request 1) is sent to Server 1, R2 to Server 2, R3 to Server 3, and R4 wraps around to Server 1 (there is no next server, so we return to the first one), and so on. A short sketch of all three methods follows this list.

[Figure: load balancing algorithms]

  • least-connected: the next request is assigned to the server with the fewest active connections.
  • ip-hash: a hash function applied to the client's IP address determines which server handles the request, so requests from the same client consistently go to the same server.

[Figure: ip-hash load balancing algorithm]
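
To make the selection logic concrete, here is a minimal Python sketch of the three strategies. The server list and function names (pick_round_robin, pick_least_connected, pick_ip_hash) are illustrative, not an Nginx API:

# Illustrative sketch of the three selection strategies
import hashlib
from itertools import cycle

servers = ['localhost:8001', 'localhost:8002', 'localhost:8003']

# round-robin: walk the server list in circular order
rr = cycle(servers)
def pick_round_robin():
    return next(rr)

# least-connected: pick the server with the fewest active connections
# (a real load balancer would maintain these counts as requests open and close)
active_connections = {s: 0 for s in servers}
def pick_least_connected():
    return min(servers, key=lambda s: active_connections[s])

# ip-hash: hash the client IP so the same client always maps to the same server
def pick_ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]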

Load Balancer Demo with Nginx

We will use a demo to illustrate how a load balancer works and why we need load balancing: an Nginx load balancer distributing traffic to three web servers. If you want to try this demo on your computer, you need Flask and Nginx installed.

Here is a simple Flask web server. All it does is return Hello, {port number}! as the response.

# demo.py
from flask import Flask
import sys

app = Flask('demo')
port = 8000  # default; overwritten by the command-line argument below

@app.route('/')
def hello_world():
    # Report the port so we can tell which server handled the request
    return f'Hello, {port}!\n'

if __name__ == '__main__':
    port = int(sys.argv[1])  # e.g. python demo.py 8001
    app.run(host='127.0.0.1', port=port)

Now we can start three server instances. Each command blocks, so run them in separate terminals (or append & to background them).

python demo.py 8001
python demo.py 8002
python demo.py 8003
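
Before adding the load balancer, you can hit each server directly to confirm it is up (assuming you have curl installed):

curl http://localhost:8001/   # should print: Hello, 8001!
curl http://localhost:8002/   # should print: Hello, 8002!
curl http://localhost:8003/   # should print: Hello, 8003!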

Next, we configure Nginx as a load balancer to distribute traffic to the three web servers. We create a new config file at /etc/nginx/conf.d/demo.conf:

upstream demo {
    # pool of backend servers; round-robin by default
    server localhost:8001;
    server localhost:8002;
    server localhost:8003;
}

server {
    listen 8080;

    location / {
        # forward every request to the upstream pool above
        proxy_pass http://demo;
    }
}
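
This upstream block uses the default round-robin method. To switch algorithms, name one at the top of the block; least_conn and ip_hash are the Nginx directives for the other two methods described above:

upstream demo {
    least_conn;   # or: ip_hash; to pin each client IP to one server
    server localhost:8001;
    server localhost:8002;
    server localhost:8003;
}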

Then use sudo systemctl restart nginx to restart the Nginx service.
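
If Nginx fails to restart, run sudo nginx -t to validate the configuration; it prints the location of any syntax error in the file.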

Now you should be able to access http://localhost:8080/. The default load balancing method is round-robin, so successive responses from the load balancer should be Hello, 8001!, Hello, 8002!, and Hello, 8003! in order.
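
You can watch the rotation from the command line (again assuming curl):

for i in 1 2 3 4; do curl http://localhost:8080/; done

The four responses should be Hello, 8001!, Hello, 8002!, Hello, 8003!, and then Hello, 8001! again as the rotation wraps around.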

Load Balancer and High Availability

To emulate servers going offline, we stop the web servers running on ports 8002 and 8003 and visit our load balancer at http://localhost:8080/. It should still be accessible and always return Hello, 8001!, the message from the first server. This is how a load balancer provides high availability: two of the three web servers are down, yet we keep serving requests from the remaining one without users noticing any downtime. To scale a service, we add more web servers and put them behind the load balancer.
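
This works because Nginx passively health-checks its upstream servers: when a connection to a backend fails, Nginx temporarily takes it out of rotation and retries the request on another server. The max_fails and fail_timeout parameters tune this behavior (the values below are illustrative, not part of the original demo):

upstream demo {
    # after 3 failed attempts within 30s, skip the server for 30s
    server localhost:8001 max_fails=3 fail_timeout=30s;
    server localhost:8002 max_fails=3 fail_timeout=30s;
    server localhost:8003 max_fails=3 fail_timeout=30s;
}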

Here is a video demo on Debian 11: Nginx Load Balancing Demo

  1. https://www.nginx.com/resources/glossary/load-balancing/
