Configuring HAProxy using Ansible on a VM and on the AWS cloud

Harshil Shah
6 min read · Mar 26, 2021

Configuring a reverse proxy on Linux using Ansible, first on an Oracle VM and then on the AWS cloud.

GitHub repo link: https://github.com/iamShahHarshil/haproxy-with-ansible

HAProxy is a software program that we will use to configure a reverse proxy and load balancer for Apache web servers.

Configuring HAProxy is a 3-step process.

  1. Install the haproxy package on the node that we want to act as the load balancer.
  2. Edit HAProxy's config file.
  3. Start the haproxy service.

We could do all of these steps manually, but since this article is about how to do it with Ansible, we will let Ansible do the work.

Step-1 Install service

Using the package module, we will install the haproxy package, as shown in the sketch below.
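A minimal sketch of this step, assuming the load balancer host sits in an inventory group named lb (the group name is an assumption, not something fixed by HAProxy or Ansible):

- hosts: lb
  become: yes
  tasks:
    # install the haproxy package with the OS package manager
    - name: Install HAProxy
      package:
        name: haproxy
        state: present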

Step-2 Edit config file

HAProxy’s config file is located at → /etc/haproxy/haproxy.cfg

We need this file on the target node that is supposed to act as the load balancer. Using the Linux cp command, we first copy this file to the directory where our playbook is located.
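For example, on a node where haproxy is already installed, something like this would do (the workspace path is just an illustration):

cp /etc/haproxy/haproxy.cfg /root/haproxy-ws/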

The image above shows what the haproxy.cfg file looks like. The first highlighted part that you can see,

bind *:8080

is the port for the frontend. So when a client connects to the load balancer, it uses port number 8080. We can change the port number to any value from 1 to 65535 (2¹⁶ - 1). So if a client wishes to connect to a web app or website, they will type → www.example.com:8080 in the browser and be directed to the load balancer. The client never connects to a web server directly.

The load balancer then connects to the different web servers (one at a time) and distributes the load. This part of the haproxy.cfg file comes under the section highlighted as backend app.

The IPs you see in the red box are the IPs of the different web servers.
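For reference, the relevant frontend and backend sections of haproxy.cfg look roughly like this (the section names and the backend IPs shown here are placeholders):

frontend main
    bind *:8080
    default_backend app

backend app
    balance roundrobin
    server app1 192.168.1.20:80 check
    server app2 192.168.1.21:80 check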

Finally, after editing, we have to copy this file back to the same location → /etc/haproxy/haproxy.cfg on the target node (the load balancer node).

But there is a problem: whenever a new web server comes online, we have to manually edit the haproxy.cfg file and add its IP under the backend app section. To overcome this, we need to bring some intelligence into the file.

Jinja:

Using Jinja we can overcome this problem. Jinja is a modern and designer-friendly templating language for Python, modelled after Django templates. Jinja is essentially a template engine; it is often used to generate HTML or XML returned to the user via an HTTP response, but here we will use it to generate our HAProxy config file.

Being a Python library, Jinja requires Python to run and to expose its features.

The copy module we use in Ansible does not parse the data it copies, whereas the template module does. For the copy module, everything is a string; with the template module, we tell it which parts of the file are variables, loops, conditions, etc.

{{ }} → whatever is inside these curly braces, the template module will evaluate and substitute with its value. Behind the template module, Ansible uses delimiters like {{ }}, {% %} and {# #}, and these come from a framework known as Jinja.

So once you have copied the file to the directory where the playbook is located, we will edit this file using Jinja.

As you can see, I have made this file more dynamic by first replacing the frontend port number with a variable. We then only have to change the value of this variable in the playbook and run the playbook again, and the port number will change.

Second, the red box you see at the end contains a for loop written in Jinja syntax. This loop goes to the inventory file, takes the first IP under the host group named → “webserver”, and stores it in the variable i. It continues until there are no IPs left under the “webserver” group.

groups is an Ansible variable that always takes its values from the inventory file, and loop.index prints a counter starting from 1 (use loop.index0 if you want it to start from 0).
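Putting these pieces together, a sketch of the edited sections in haproxy.cfg.j2 could look like this (the variable name port_no, the group name webserver and the backend port 80 are assumptions, not necessarily the exact names in the repo):

frontend main
    bind *:{{ port_no }}
    default_backend app

backend app
    balance roundrobin
{% for i in groups['webserver'] %}
    server app{{ loop.index }} {{ i }}:80 check
{% endfor %}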

Now, since this file uses Jinja, we have to save it with the .j2 extension. So I have renamed the file that we copied earlier, using the command below.

mv haproxy.cfg haproxy.cfg.j2

Finally, our config file is ready. We will use the template module to copy it to the target node and then start the service.

final play for haproxy.

NOTE: I have written the play to configure the web servers in the same playbook; a sketch follows below.
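A minimal sketch of such a playbook, assuming inventory groups named webserver and lb, a port_no variable and Apache (httpd) on the web servers; it is not the exact playbook from the repo:

- hosts: webserver
  become: yes
  tasks:
    # web server play: install Apache, deploy a test page, start the service
    - name: Install Apache
      package:
        name: httpd
        state: present
    - name: Deploy a sample page
      copy:
        content: "served from {{ ansible_hostname }}\n"
        dest: /var/www/html/index.html
    - name: Start Apache
      service:
        name: httpd
        state: started

- hosts: lb
  become: yes
  vars:
    port_no: 8080
  tasks:
    # load balancer play: install HAProxy, render the Jinja template, restart the service
    - name: Install HAProxy
      package:
        name: haproxy
        state: present
    - name: Render haproxy.cfg from the Jinja template
      template:
        src: haproxy.cfg.j2
        dest: /etc/haproxy/haproxy.cfg
    - name: Restart HAProxy to pick up the new config
      service:
        name: haproxy
        state: restarted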

As you can see in the video above, I have configured an instance with IP → 192.168.1.15 as the reverse proxy server.

Now I will configure the same setup on the AWS cloud using instances there.

→ First, launch 3 instances. Here I am using Amazon Linux. I will configure the load balancer on my controller node itself and use the remaining two instances as web servers. I am launching only 3 instances because I am using a free tier AWS account, which has a storage limit of 30 GB.

Note: All my instances are Amazon Linux instances, and I installed Ansible with the following steps:

  1. Download the EPEL repository package

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

  2. Install the EPEL repo

yum install epel-release-latest-7.noarch.rpm

  3. Update packages

yum update -y

  4. Install Ansible

pip3 install ansible

→ Performing the same steps as I did previously, but here, when configuring Ansible on the cloud instance, my ansible.cfg file looks something like the sketch below.
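A representative sketch with placeholder values; Ansible reads this from ./ansible.cfg next to the playbook or from /etc/ansible/ansible.cfg:

[defaults]
inventory = /root/ip.txt
remote_user = ec2-user
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root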

→ In the inventory file, instead of a password, we have to point to the SSH private key. For that, first copy the private key (.pem) file to the AWS instance using WinSCP.

Note: If you get error code 3, i.e. server permission denied, when connecting to the AWS instance while creating a folder or transferring a file, then grant the required permission. For example, if you cannot access the root folder, run the following in the terminal:

#chmod -R 777 /root

Now you can view the root directory, create folders in it, and transfer files as well.

inventory file → ip.txt
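A representative sketch of ip.txt (the IPs and the key path are placeholders; the lb group points at the controller node itself):

[webserver]
172.31.40.180 ansible_user=ec2-user ansible_ssh_private_key_file=/root/mykey.pem
172.31.33.25 ansible_user=ec2-user ansible_ssh_private_key_file=/root/mykey.pem

[lb]
localhost ansible_connection=local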

→ The playbook looks something like this. It is similar to the one used previously. I copied it to the AWS instance using WinSCP.

playbook → loadbalancer.yml

→ running the playbook successfully.

→ Now when I connect to the load balancer instance using its public IP and the frontend port number 8080, you can see that it first connects to the backend web server with IP 172.31.40.180, and when I refresh the page, it connects to another web server.
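You can run the same check from a terminal, replacing the placeholder with the load balancer's public IP:

curl http://<load-balancer-public-ip>:8080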

So that is it for this practical. See you in the next one.

Thank you for reading.
