
However, just having multiple instances up and running doesn’t make the environment highly available yet. Processes and tools need to be in place so that, while performing maintenance on a node or service, the environment can continue serving requests and stay available.
In this blog I will explain how to update a clustered WSO2 environment with Ansible, where the environment stays available for incoming traffic during the update. In operational terms this is called a rolling update (or rolling deployment).
Environment architecture
To be able to have an environment that is always available, we need a clustered setup. The following picture gives a rough idea of the type of setup we are talking about:

From the top, we have a virtual IP (VIP) pointing to an HAProxy failover setup supported by PCS (Pacemaker / Corosync). HAProxy has a backend server configuration that load balances in a round-robin style across three WSO2 Enterprise Integrator (EI) nodes, aptly named node1, node2 and node3. The WSO2 EI cluster points to an “AlwaysOn” database cluster.
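Not shown in detail in the picture is the HAProxy frontend that accepts the traffic arriving on the VIP and hands it to the backend. As a minimal sketch in haproxy.cfg (the frontend name, bind port and certificate path below are assumptions about your environment, not part of the original setup):

```
# Hypothetical frontend section; adapt the name, port and certificate to your setup
frontend wso2ei_https
   bind *:443 ssl crt /etc/haproxy/certs/wso2ei.pem
   mode http
   default_backend PRD_wso2ei
```

In this sketch HAProxy terminates TLS on the frontend and re-encrypts traffic towards the WSO2 EI nodes, which matches the HTTPS server lines shown later in this blog.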
Make HAProxy manageable
In order to be able to disable and enable backend servers in the load balancer, we have to enable admin-level access to the HAProxy Unix stats socket in the “haproxy.cfg” file, like this:
# turn on stats unix socket
stats socket /var/lib/haproxy/stats mode 600 level admin
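Once the socket is enabled (and HAProxy restarted), you can drive it by piping admin commands into it, for example with socat. The sketch below only prints the commands (a dry run); socat, the socket path, and the backend/server names (which match the haproxy.cfg configuration shown later in this blog) are assumptions about your setup:

```shell
# Dry run: print the HAProxy admin commands for draining and re-enabling node1.
# To apply them for real, pipe each line into the stats socket, e.g.:
#   echo "disable server PRD_wso2ei/node1" | socat stdio /var/lib/haproxy/stats
BACKEND="PRD_wso2ei"   # must match the backend name in haproxy.cfg
SERVER="node1"         # must match the server name in haproxy.cfg
echo "disable server ${BACKEND}/${SERVER}"
echo "enable server ${BACKEND}/${SERVER}"
```

This is exactly what the Ansible haproxy module does for us later on, so it is a useful manual check that the socket works before automating it.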
Rolling updates with Ansible
Now we are ready to implement all the routines needed for a rolling update. I assume you already use Ansible and have all the roles and playbooks available for provisioning a clustered environment with, for instance, WSO2 EI — but it can be any product provisioned by Ansible.
Inventory
First, we define the host inventory like this:
---
all:
  children:
    wso2base:
      children:
        wso2ei:
          children:
            wso2ei_node1:
              hosts:
                wso2ei01.yenlo.com:
                  ansible_host: wso2ei01.yenlo.com
            wso2ei_node2:
              hosts:
                wso2ei02.yenlo.com:
                  ansible_host: wso2ei02.yenlo.com
            wso2ei_node3:
              hosts:
                wso2ei03.yenlo.com:
                  ansible_host: wso2ei03.yenlo.com
    lbservers:
      hosts:
        haproxy01.yenlo.com:
          ansible_host: haproxy01.yenlo.com
        haproxy02.yenlo.com:
          ansible_host: haproxy02.yenlo.com
We split the WSO2 nodes and the HAProxy load balancer nodes into separate groups (wso2ei and lbservers).
For each WSO2 EI node we define host variables like this:
wso2ei_lb_backend: "PRD_wso2ei"
wso2ei_lb_server: "node1"
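For example, these variables could live in one host_vars file per node; the file path below is a hypothetical convention, with the values for the second node:

```yaml
# host_vars/wso2ei02.yenlo.com.yml (hypothetical location)
wso2ei_lb_backend: "PRD_wso2ei"
wso2ei_lb_server: "node2"
```

This way each host automatically knows which HAProxy backend and server entry represents it.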
The values correspond to the names defined in haproxy.cfg:
backend PRD_wso2ei
   balance roundrobin
   server  node1 wso2ei01.yenlo.com:8243 check ssl verify none
   server  node2 wso2ei02.yenlo.com:8243 check ssl verify none
   server  node3 wso2ei03.yenlo.com:8243 check ssl verify none
Each backend call is forwarded over HTTPS to the WSO2 EI nodes to reach the backend services. For the purpose of this blog I disabled SSL certificate validation. In production it’s recommended to switch this on for security’s sake.
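To turn certificate validation back on, the server lines can verify the WSO2 certificates against a CA bundle. A sketch, where the ca-file path (and the assumption that the node certificates are signed by that CA) is specific to your environment:

```
backend PRD_wso2ei
   balance roundrobin
   server  node1 wso2ei01.yenlo.com:8243 check ssl verify required ca-file /etc/haproxy/wso2-ca.pem
   server  node2 wso2ei02.yenlo.com:8243 check ssl verify required ca-file /etc/haproxy/wso2-ca.pem
   server  node3 wso2ei03.yenlo.com:8243 check ssl verify required ca-file /etc/haproxy/wso2-ca.pem
```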
Ansible playbook
We can set up the Ansible playbook like this:
---
- hosts: wso2ei
  # Make sure the playbook will execute on one WSO2 node at a time
  serial: 1
  become: yes

  # Before updating WSO2 we have to disable the server node in HAProxy
  pre_tasks:
    - name: Disable WSO2 in HAProxy
      haproxy:
        state: disabled
        host: "{{ wso2ei_lb_server }}"
        socket: /var/lib/haproxy/stats
        backend: "{{ wso2ei_lb_backend }}"
        wait: yes
      delegate_to: "{{ item }}" # This task will be executed on the lbservers
      delegate_facts: True
      loop: "{{ groups['lbservers'] }}"
      ignore_errors: yes

  # Here we will do the actual provisioning of WSO2
  roles:
    - role: yenlo.wso2ei

  # After provisioning is finished we are ready to make the server available
  # in the backend cluster again
  post_tasks:
    - name: Wait for WSO2 to come up
      wait_for:
        host: "{{ inventory_hostname }}"
        port: 8243
        delay: 20
        state: started
        timeout: 120 # We assume WSO2 doesn't need more than 120 seconds to start

    - name: Enable WSO2 in HAProxy
      haproxy:
        state: enabled
        host: "{{ wso2ei_lb_server }}"
        socket: /var/lib/haproxy/stats
        backend: "{{ wso2ei_lb_backend }}"
        wait: yes
      delegate_to: "{{ item }}" # This task will be executed on the lbservers
      delegate_facts: True
      loop: "{{ groups['lbservers'] }}"
      ignore_errors: yes
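The serial: 1 setting is what makes this a rolling update: Ansible completes the whole pre_tasks/roles/post_tasks cycle on one node before moving to the next. Ansible also accepts percentages or batch lists for serial, and max_fail_percentage can abort the run before a bad update takes down the whole cluster. A sketch of such a variation (the batch sizes and threshold are examples, not recommendations):

```yaml
- hosts: wso2ei
  # Update one node first as a canary, then the rest in batches of two
  serial:
    - 1
    - 2
  # Abort the rolling update as soon as any host in a batch fails
  max_fail_percentage: 0
  become: yes
```

The playbook can then be run with something like ansible-playbook -i inventory.yml rolling_update.yml (both file names are hypothetical here).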
Conclusion
Ansible is a great tool for building structured and reliable CI/CD pipelines, especially when an environment needs to be always available to its user group. In this blog we used HAProxy as the example load balancer, but with Ansible it’s possible to manage any kind of load balancer technology the same way. Nor are we bound to WSO2 technology: the playbook structure is modular, meaning you can apply or leave out any part.
If you have any questions about this blog post we encourage you to leave a comment below or contact us. In addition, you can also view our WSO2 Training, Managed Services Support, webinars or white papers for more information.