When deploying SAFR in a cluster of 3 or more servers, it's important to use an external load balancer in order to realize the full benefits of a SAFR cluster.  While it's possible to use the software load balancing available in the SAFR Primary server, a failure of the primary will bring down the system until traffic is routed to the node elected as the new primary server or the original primary server is restored.


This article provides guidance on setting up an external load balancer for a SAFR cluster and implementing healthchecks.  It assumes the reader is familiar with load balancing and has access to a hardware or virtual load balancer.


Several ports need to be load balanced for each node in the backend SAFR server cluster.  Each service has both an HTTP and an HTTPS listener.

 

Most of the services use /version as the healthcheck URL; the exception is the Virga API service (ports 8084/8085), which uses /health.  A short script that exercises these healthcheck endpoints follows the list.

    •    COVI API service

    ◦    HTTP Port: 8080

    ◦    HTTPS Port: 8081

    ◦    Healthcheck URL: /version

    •    Event API service

    ◦    HTTP Port: 8082

    ◦    HTTPS Port: 8083

    ◦    Healthcheck URL: /version

    •    Object Storage API service

    ◦    HTTP Port: 8086

    ◦    HTTPS Port: 8087

    ◦    Healthcheck URL: /version

    •    Virga API service

    ◦    HTTP Port: 8084

    ◦    HTTPS Port: 8085

    ◦    Healthcheck URL: /health
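
As a quick way to verify each backend node before adding it to the load balancer pool (and to mirror what the load balancer's healthchecks will do), the sketch below polls every healthcheck URL over the HTTP ports using only the Python standard library.  The hostname safr-node-1.example.com is a placeholder; substitute the address of one of your SAFR servers.

    # Minimal healthcheck probe for a single SAFR node.
    # Polls each service's healthcheck URL over its HTTP port and reports UP/DOWN.
    import urllib.error
    import urllib.request

    NODE = "safr-node-1.example.com"  # placeholder: replace with your SAFR node's address

    # (service name, HTTP port, healthcheck path) as listed above
    SERVICES = [
        ("COVI API",           8080, "/version"),
        ("Event API",          8082, "/version"),
        ("Object Storage API", 8086, "/version"),
        ("Virga API",          8084, "/health"),
    ]

    def check(host, port, path, timeout=5):
        """Return True if the healthcheck URL answers with HTTP 200."""
        url = f"http://{host}:{port}{path}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        for name, port, path in SERVICES:
            status = "UP" if check(NODE, port, path) else "DOWN"
            print(f"{name:<20} {NODE}:{port}{path:<10} {status}")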

We assume the load balancer will listen for HTTPS on the ports listed above, terminate the HTTPS session, and host the SSL certificate.  You can then load balance requests to the backend servers over either the HTTP or HTTPS ports.

 

Round robin or least connection traffic routing is fine, and stickiness/affinity for clients is not required.

 

The backend servers' HTTPS ports host a self-signed certificate by default, or you can install your own certificate.
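
If you point healthchecks or your own monitoring at the backend HTTPS ports while the default self-signed certificate is still in place, certificate verification has to be relaxed.  The snippet below is a minimal sketch of such a probe; the hostname is again a placeholder, and the relaxed SSL context should only be used against the default self-signed certificate.

    # Probe a backend HTTPS healthcheck endpoint while the node still serves
    # the default self-signed certificate (hostname is a placeholder).
    import ssl
    import urllib.request

    NODE = "safr-node-1.example.com"      # placeholder: replace with your SAFR node's address
    URL = f"https://{NODE}:8081/version"  # COVI API service over its HTTPS port

    # Skip certificate verification for the default self-signed certificate only;
    # once you install your own trusted certificate, use ssl.create_default_context() as-is.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen(URL, timeout=5, context=context) as resp:
        print(resp.status, resp.read().decode("utf-8", errors="replace"))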