Clustered HAProxy for load balancing web sites


In this setup I configure two clustered HAProxy instances on CentOS 7 as the frontend of a web application.

I set static IPs on both nodes and add them to /etc/hosts (note that /etc/hosts entries take plain addresses, without a netmask):

10.0.0.1 haproxy01
10.0.0.2 haproxy02

Disable firewalld:

# systemctl stop firewalld.service
# systemctl mask firewalld.service

Set SELinux to permissive mode:

# setenforce 0

Make the change persistent in /etc/selinux/config:

SELINUX=permissive

Add to /etc/sysctl.d/haproxy.conf, so that HAProxy can bind to the virtual IPs even on the node that does not currently hold them:

net.ipv4.ip_nonlocal_bind = 1
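The setting can be applied without a reboot. A quick way (run on both nodes, assuming the file above is in place) is:

```shell
# Reload all sysctl configuration files, including /etc/sysctl.d/haproxy.conf
sysctl --system

# Confirm the value took effect
sysctl net.ipv4.ip_nonlocal_bind
```

Without this setting, HAProxy fails to start on the node that does not currently own the virtual IPs, because it cannot bind them.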

Install the required packages

# yum install pacemaker corosync haproxy pcs fence-agents-all

pcsd is in charge of synchronizing the cluster configuration across the nodes. http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_setup.html

# passwd hacluster
# systemctl enable pcsd.service
# systemctl start pcsd.service
# pcs cluster auth haproxy01 haproxy02
# pcs cluster setup --start --name http-cluster haproxy01 haproxy02
# pcs cluster enable --all

Run passwd hacluster and the systemctl commands on both nodes. Note that haproxy.service is not enabled at boot: Pacemaker starts and stops it through the systemd:haproxy resource, and pcs cluster enable --all already takes care of starting corosync and pacemaker.

We check if everything is all right: http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_verify_corosync_installation.html

# corosync-cfgtool -s
# corosync-cmapctl | grep members
# pcs status corosync
# pcs status

STONITH fencing is disabled for this example (http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/ch05.html):

# pcs property set stonith-enabled=false

Since we only have two nodes, we tell the cluster to ignore quorum (http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_perform_a_failover.html):

# pcs property set no-quorum-policy=ignore

To prevent a resource from failing back when a node recovers:

# pcs resource defaults resource-stickiness=100

Is the config OK?

# crm_verify -L -V

Add the cluster resources:

# pcs resource create ClusterIP-01 ocf:heartbeat:IPaddr2 ip=10.0.0.3 cidr_netmask=24 op monitor interval=5s
# pcs resource create ClusterIP-02 ocf:heartbeat:IPaddr2 ip=10.0.0.4 cidr_netmask=24 op monitor interval=5s
# pcs resource create HAproxy systemd:haproxy op monitor interval=5s

We group the IPs together:

# pcs resource group add HAproxyIPs ClusterIP-01 ClusterIP-02
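After creating the resources it is worth confirming that they actually started, and on which node they landed. A quick check might look like:

```shell
# Show all resources and where they are running
pcs status resources

# The virtual IPs should now answer on the active node
ip addr show | grep -E '10\.0\.0\.[34]'
```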

Add constraints so the IPs move to the other host together with HAProxy when it goes down:

# pcs constraint colocation add HAproxy HAproxyIPs INFINITY
# pcs constraint order HAproxyIPs then HAproxy
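A quick failover test (a sketch, run from either node): put the node currently holding the resources in standby, watch the IPs and HAProxy move, then bring it back.

```shell
# Verify the constraints are in place
pcs constraint

# Put the node currently running HAProxy into standby
pcs cluster standby haproxy01

# Resources should now be running on haproxy02
pcs status

# Bring the node back; resource-stickiness=100 keeps the
# resources on haproxy02 instead of failing back
pcs cluster unstandby haproxy01
```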

Finally, I configured HAProxy for the two web applications with different backends, over both HTTP and HTTPS. This /etc/haproxy/haproxy.cfg must be identical on both servers.

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     100000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    ssl-server-verify none
    tune.ssl.default-dh-param 2048

defaults
    log                     global
    mode                    http
    option                  httplog
    option                  dontlognull
    option                  redispatch
    option forwardfor       except 127.0.0.0/8
    option http-server-close
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 50000

peers ha-web
    peer haproxy01 10.0.0.1:1024
    peer haproxy02 10.0.0.2:1024

# Frontend servers

listen admin
    bind *:8080
    stats enable

frontend web-http
    bind *:80
    default_backend apache-80

frontend www1-https
    bind 10.0.0.3:443 ssl crt /etc/haproxy/www1.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend apache-www1-443

frontend laboratorios-https
    bind 10.0.0.4:443 ssl crt /etc/haproxy/www2.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend apache-www2-443

# Backend servers

backend apache-80
    stick-table type ip size 20k peers ha-web
    stick on src
    balance roundrobin
    option httpchk GET /server-status
    fullconn    10000
    server apache1 10.0.10.1:80 check maxconn 5000
    server apache2 10.0.10.2:80 check maxconn 5000

backend apache-www1-443
    stick-table type ip size 20k peers ha-web
    stick on src
    balance roundrobin
    #option ssl-hello-chk
    option httpchk GET /server-status
    fullconn    10000
    server apache1 10.0.10.1:443 check port 80 ssl verify none maxconn 5000
    server apache2 10.0.10.2:443 check port 80 ssl verify none maxconn 5000

backend apache-www2-443
    stick-table type ip size 20k peers ha-web
    stick on src
    balance roundrobin
    #option ssl-hello-chk
    option httpchk GET /server-status
    fullconn    10000
    server apache3 10.0.10.3:443 check port 80 ssl verify none maxconn 5000
    server apache4 10.0.10.4:443 check port 80 ssl verify none maxconn 5000
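Since the file has to be identical on both servers, a simple routine (a sketch, using the paths and hostnames above) is to validate the configuration, copy it to the peer, and let the cluster restart HAProxy:

```shell
# Check the configuration syntax before doing anything else
haproxy -c -f /etc/haproxy/haproxy.cfg

# Copy it to the other node and validate there too
scp /etc/haproxy/haproxy.cfg haproxy02:/etc/haproxy/haproxy.cfg
ssh haproxy02 haproxy -c -f /etc/haproxy/haproxy.cfg

# Let the cluster restart HAProxy so the new config is picked up
pcs resource restart HAproxy
```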

See also