Load Balancing Magic: Unleashing the Potential of NGINX.

Load Balance Multiple servers with Nginx. Run multiple servers (containers) using Docker Compose.

Sagar
7 min read · Apr 22, 2023

What is NGINX:

Nginx (pronounced engine x) is an open source HTTP and reverse proxy server.

What is a Load Balancer:

A load balancer sits in front of all your web servers and distributes client requests among them. By doing so, it reduces the chance of service failure and exposes a single entry point to clients.

(Image source: Nginx)
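To see how the failure-reduction part works, here is a toy Python sketch of a dispatcher that skips unhealthy servers; the backend names and the is_alive check are hypothetical stand-ins for a real balancer's health probes:

```python
def pick_backend(backends, is_alive, start=0):
    """Return the first healthy backend, trying them in order from `start`.

    `backends` is a list of server names; `is_alive` is a health-check
    callback (hypothetical here; a real balancer actively probes servers).
    """
    for i in range(len(backends)):
        candidate = backends[(start + i) % len(backends)]
        if is_alive(candidate):
            return candidate
    raise RuntimeError("no healthy backend available")

backends = ["server1", "server2", "server3"]

# all servers healthy: the first one is picked
print(pick_backend(backends, lambda s: True))            # server1

# server1 is down: the dispatcher quietly moves on to server2
print(pick_backend(backends, lambda s: s != "server1"))  # server2
```

Clients only ever talk to the balancer's single address, so a failed backend costs a retry inside the balancer rather than a failed request.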

In this article we’ll see how to use NGINX as a load balancer and reverse proxy.

All the project code is available in this GitHub repository, which contains two projects: the Node.js project uses a local Nginx installation, and the Python app runs Nginx as a container.


Prerequisites:

  • Docker
  • Nginx

Example 1: Using Node.js, Docker and Nginx:

Let’s install Nginx and Node.js using the commands below:

sudo apt update && sudo apt install nginx -y
sudo apt install nodejs

Node.js is required only to run the application locally; you can skip this step if you want.

Building our application:

The server.js is our main application; it runs on port 2000 and returns the hostname when requested.

To run the application locally, you can use “npm run start”.

Build and run the container:

Since we want to see load balancing across the same application, we need to create multiple instances of the app.

We’ll use Docker to spin up 3 servers, and our setup will distribute requests to all of them; we should see the hostname change with each request we make.


# Using node image as base
FROM node:18

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
RUN npm i express
# If you are building your code for production
# RUN npm ci --omit=dev

# Bundle app source
COPY . .

CMD [ "npm", "run", "start"]

To build the Docker image and run it, use the commands below:

docker build -t nlb:1.0 .
docker run -d -p 2000:2000 nlb:1.0

Let’s run more containers on different ports:

docker run -d -p 3000:2000 nlb:1.0
docker run -d -p 4000:2000 nlb:1.0


Now browse the IP or URL on ports 2000, 3000 and 4000. We should see a response containing the hostname.

Note: The hostnames here are the IDs of the Docker containers.

But wait, that’s nothing exciting; we don’t have any load balancing yet. That’s right: we’ll set up Nginx to act as a load balancer in front of the 3 containers, and we should see the hostname vary behind a single IP (the default load balancing algorithm is round robin).
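Round robin, the default, simply hands each request to the next server in the list and wraps around at the end. A minimal Python sketch of the idea (the addresses below just mirror our three container ports):

```python
from itertools import cycle

# three backends, like our containers published on ports 2000, 3000 and 4000
backends = ["localhost:2000", "localhost:3000", "localhost:4000"]

# cycle() walks the list forever, wrapping around at the end
rr = cycle(backends)

# six consecutive requests are spread evenly across the three backends
assignments = [next(rr) for _ in range(6)]
print(assignments)
# ['localhost:2000', 'localhost:3000', 'localhost:4000',
#  'localhost:2000', 'localhost:3000', 'localhost:4000']
```

That even rotation is exactly why the hostname changes on every refresh once Nginx sits in front of the containers.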

To configure Nginx as a load balancer, we must edit the nginx.conf file under /etc/nginx. We need an http block with an upstream group, like below:

http {

    upstream beservers {
        server localhost:2000;
        server localhost:3000;
        server localhost:4000;
    }

    # ... rest of the existing http block ...
}

Note: "beservers" is just an identifier that we’ll reference in our virtual host configuration.

Now all the requests can be distributed across these different servers (ports, in this example) through “beservers”. But how do incoming requests reach this upstream group? For that we’ll use proxy_pass.

Add the following block to the default site or create a new site.

location / {
    proxy_pass http://beservers/;
}

I’m going to create a new site under /etc/nginx/sites-available called example.com (it can be any name) and create a symlink to it from sites-enabled.


server {
    listen 80;
    server_name example.com; # this is your website url
    root /usr/share/nginx/html;

    location / {
        proxy_pass http://beservers/;
    }
}

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

After linking, test the configuration and reload Nginx with “sudo nginx -t” and “sudo systemctl reload nginx”.

Now, let’s browse our application. The hostnames are coming from the Docker containers; use “docker ps” to verify their IDs against the ones shown here.


Note: The “sites-available” and “sites-enabled” directories are only applicable to Debian/Ubuntu and their derivatives. In the Red Hat family of Linux, these configurations go under the conf.d directory.

If you want, however, you can imitate the Debian/Ubuntu-like directory structure by creating “sites-available” and “sites-enabled” and including “sites-enabled” in nginx.conf.

Create the directories:

sudo mkdir /etc/nginx/sites-available /etc/nginx/sites-enabled

sudo vim /etc/nginx/nginx.conf

Add this line inside the http block:

include /etc/nginx/sites-enabled/*.conf;

Example 2: Using Python and Docker Compose:


Prerequisites:

  1. Docker and Docker Compose.

What is Docker Compose:

Compose is a tool for defining and running multi-container Docker applications.

Let’s discuss the application.

Same as in the first example, hello.py is our main application; it runs on the defined port and returns the hostname.
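The actual code is in the repository; as a sketch, such a hello.py could look like the following, assuming Flask (which the Dockerfile installs alongside gunicorn):

```python
# hello.py -- a minimal sketch; the repository's real code may differ
import socket

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # inside a container, gethostname() returns the container ID
    return f"Hello from {socket.gethostname()}\n"
```

gunicorn then serves this same `app` object inside the container.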


# using alpine variant of python image
FROM python:alpine3.17

# define working directory
WORKDIR /flaskapp

# copy everything to working directory
COPY . .

# install required python packages (flask and gunicorn for this example)
RUN pip install -r requirements.txt

# start application with gunicorn, listening on port 3000
CMD gunicorn --bind 0.0.0.0:3000 hello:app


The nginx.conf that we’ll mount into the container:

events {
    worker_connections 100;
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://flaskapp:3000/;
        }
    }
}

In this example we don’t need a local Nginx instance; in fact, it’s better to stop Nginx if it’s running on your host, since we are exposing the Nginx container on port 80 and it would conflict otherwise.

Here proxy_pass forwards requests to flaskapp:3000, which is our Python application.


version: '3'

services:
  flaskapp:
    build:
      context: app
    ports:
      - "3000"

  nginx:
    image: nginx:1.24.0
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - flaskapp
    ports:
      - "80:80"

Here we are launching 2 services: our Python application (flaskapp) and Nginx. You might ask why there is no host port defined for the first service. Since we are going to launch multiple instances of our app, pinning a specific host port would try to bind all containers to the same port, which is not possible; after the first instance starts, the next 2 would fail.

Not specifying a host port means the containers will be assigned random host ports. Now comes another problem: if we don’t know the ports, how can we configure the upstream servers? In this case Docker’s internal DNS resolves the service name to the container IPs, so Nginx can reach the apps at flaskapp:3000 regardless of the published host ports.

  flaskapp:
    build:
      context: app
    ports:
      - "3000"

The 2nd service downloads and starts the Nginx container. Here a volume is used to mount a file from our host to a path in the container. The depends_on option tells Compose to start the Nginx container only after the flaskapp containers have been started.

Alright, let’s build it. To build and start everything with Docker Compose we use “docker compose up”, and to stop and remove the containers we use “docker compose down”.

With docker compose, we can also scale the number of containers using the --scale option.

docker compose up -d --scale flaskapp=3

This launches 3 instances of the “flaskapp” container and an Nginx container.

Check all the running containers with docker ps:

docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED              STATUS              PORTS                                         NAMES
3479b1e2e677   nginx:1.24.0        "/docker-entrypoint.…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, :::80->80/tcp             flaskapp-nginx-1
64f5725ab5b8   flaskapp-flaskapp   "/bin/sh -c 'gunicor…"   About a minute ago   Up About a minute   0.0.0.0:32773->3000/tcp, :::32773->3000/tcp   flaskapp-flaskapp-1
41ff26ba1ff8   flaskapp-flaskapp   "/bin/sh -c 'gunicor…"   About a minute ago   Up About a minute   0.0.0.0:32774->3000/tcp, :::32774->3000/tcp   flaskapp-flaskapp-2
1e201d31fbb7   flaskapp-flaskapp   "/bin/sh -c 'gunicor…"   About a minute ago   Up About a minute   0.0.0.0:32775->3000/tcp, :::32775->3000/tcp   flaskapp-flaskapp-3

Time to browse our application. Every refresh shows a different hostname from the above list of container IDs (except the first one, since it’s the Nginx container).

Congratulations !!

Reference: YouTube 1, 2 | Docker Docs | Nodejs

More on Docker : Build and Push your first Image

Read Further on Nginx: