In this article we’ll show how to configure HAProxy as a load balancer for two Nginx web servers (Apache would work just as well). CentOS is used as the host operating system in all cases.
HAProxy is installed on a separate server that accepts client requests and redirects them to Nginx web servers. You can see the general system architecture below:
Nginx Configuration on Backend Servers
We start with the installation and configuration of Nginx on the web servers the load will be balanced between. Install the EPEL repository and nginx using yum (or dnf on RHEL/CentOS 8):
#yum install epel-release -y
#yum install nginx -y
Then, in the nginx.conf file on each backend, specify that the server must accept requests only from the HAProxy server and the other backend:
Backend server 1:
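A minimal sketch of what the server block could look like, assuming hypothetical addresses: backend 1 at 192.168.1.100, backend 2 at 192.168.1.101 and HAProxy at 192.168.1.102:

```nginx
server {
    listen 192.168.1.100:80;
    server_name backend1;

    location / {
        root /usr/share/nginx/html;
        allow 192.168.1.102;   # the HAProxy server
        allow 192.168.1.101;   # the other backend
        deny all;              # reject everyone else
    }
}
```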
Backend server 2:
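The second backend mirrors the first; a sketch with the same hypothetical addresses (192.168.1.101 for this server, 192.168.1.100 for the other backend, 192.168.1.102 for HAProxy):

```nginx
server {
    listen 192.168.1.101:80;
    server_name backend2;

    location / {
        root /usr/share/nginx/html;
        allow 192.168.1.102;   # the HAProxy server
        allow 192.168.1.100;   # the other backend
        deny all;              # reject everyone else
    }
}
```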
The nginx configuration file is otherwise default: we have only added the server IP to the listen directive and denied access to everyone except our servers using the allow and deny directives.
To allow the web server to accept requests, open the HTTP port in the firewall using firewalld (or iptables):
#firewall-cmd --permanent --add-service=http
#firewall-cmd --reload
Perform a test check on any of your backend servers:
# curl IP_of_the_backend_server
The server has returned the standard nginx index page. To make checking more convenient, I changed the contents of the index file on each backend server so that the browser shows which server processed the current request.
The nginx index file is located in /usr/share/nginx/html/.
HAProxy Load Balancer Configuration
Let’s install and configure HAProxy on the server that will be used as a load balancer.
Install HAProxy:
#yum install epel-release -y
#yum install haproxy -y

Note that on CentOS the haproxy service is managed by systemd, so no extra step is needed here; adding ENABLED=1 to /etc/default/haproxy applies only to older Debian/Ubuntu packages.
Now let’s move on to HAProxy configuration. In our simplest configuration, the load balancer server will process all HTTP requests and send them in turn to backend servers:
#nano /etc/haproxy/haproxy.cfg
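A minimal sketch of the frontend/backend part of the configuration, keeping the global and defaults sections shipped with the package and assuming hypothetical backend addresses 192.168.1.100 and 192.168.1.101:

```haproxy
frontend http_front
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server backend1 192.168.1.100:80 check
    server backend2 192.168.1.101:80 check
```

The check option enables periodic health checks, so requests stop being sent to a backend that goes down.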
After saving your configuration, check the haproxy.cfg syntax:
#haproxy -f /etc/haproxy/haproxy.cfg -c
If it is OK, you will get a message like this:
Then restart HAProxy, enable it at system startup, and open the HTTP port in the firewall:
#systemctl restart haproxy
#systemctl enable haproxy
#firewall-cmd --permanent --add-service=http
#firewall-cmd --reload
Thus, the load balancer has been configured. Let’s check it by opening the HAProxy server IP address in a browser:
Haproxy.cfg Configuration File Parameters
Let’s consider the main HAProxy balancing algorithms:
roundrobin
— the default algorithm; sends requests to the servers in turn. We used this method in our example.
leastconn
— selects the server with the fewest active connections. It is recommended for projects with long-lived sessions.
source
— selects a server based on a hash of the client IP address. In this mode, a client connects to the same web server as long as its IP address remains unchanged.
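The algorithm is selected with the balance directive in a backend or listen section; a sketch with hypothetical server names and addresses:

```haproxy
backend web_servers
    balance leastconn    # or: roundrobin, source
    server backend1 192.168.1.100:80 check
    server backend2 192.168.1.101:80 check
```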
Let’s describe some configuration file parameters.
The global block:
log
— writes the log to /dev/log, using local0 as the facility
chroot
— a security setting that locks HAProxy into the specified directory
maxconn
— the maximum number of concurrent connections per process
daemon
— runs the process as a daemon
The defaults block. This block sets the default parameters for all other sections following it:
log
— sets which log the entries are written to (in this case, global means the parameters set in the global section are used)
mode
— sets the communication protocol; one of tcp, http or health
retries
— the number of attempts to connect to a server in case of failure
option httplog
— the log format used when HAProxy proxies HTTP requests
option redispatch
— allows a session to be terminated and redispatched to another server in case of a server failure
contimeout
— the maximum time to wait for a connection to a server to be established (deprecated in recent HAProxy versions in favour of timeout connect)
There are also a lot of parameters related to different timeouts.
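Putting the parameters described above together, the global and defaults sections could look like this; the values are illustrative, and the modern timeout connect/client/server syntax is used in place of the older contimeout:

```haproxy
global
    log /dev/log local0
    chroot /var/lib/haproxy
    maxconn 4000
    daemon

defaults
    log global
    mode http
    retries 3
    option httplog
    option redispatch
    timeout connect 5s
    timeout client 50s
    timeout server 50s
```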
Collecting HAProxy Stats
Add the stats block to the configuration file:
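A sketch of such a block, matching the port and URI used below; the admin:password credentials are placeholders:

```haproxy
listen stats
    bind *:10001
    stats enable
    stats uri /haproxy_stats
    stats auth admin:password
```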
Description:
bind
— the port the statistics are available on
stats enable
— enables statistics reports
stats uri
— sets the statistics page address
stats auth
— the login and password used to access the statistics page

Allow incoming connections to the port specified above in your firewall:
#firewall-cmd --permanent --add-port=10001/tcp
#firewall-cmd --reload
To view the HAProxy reports, follow this link:
http://hostname_haproxy:10001/haproxy_stats
Open the balancer IP address in your browser and press F5 repeatedly. The HAProxy statistics will change.
In this article we have covered a basic HAProxy configuration. There are many more use cases for HAProxy.
In our scheme, the load-balancing HAProxy server becomes a single point of failure. To increase the fault tolerance of your web service, you can add a second HAProxy server and implement a high-availability load balancer configuration using Keepalived. You will get a scheme like this: