In early June 2019, WSO2 held a webinar on running WSO2 API Manager in a Kubernetes environment. In a nutshell, Kubernetes (K8s) is a tool for managing containerized workloads and services that provides autoscaling, self-healing, workload management, health monitoring, and network routing for a containerized environment. These capabilities make K8s a very powerful platform to host your software components on. Up until recently there was no K8s support in the WSO2 platform. The missing part was support in the clustering mechanism for using Kubernetes for its membership change notifications. With K8s support coming to the WSO2 platform, the Carbon nodes can now use K8s to detect other members in the WSO2 cluster, and various WSO2 clustering features can be used. As K8s support is progressively being added to the WSO2 integration stack, I decided to give WSO2 Enterprise Integrator on Kubernetes a spin. In this blog I’ll describe the steps I took to set up a WSO2 Enterprise Integrator cluster and share the problems and solutions I encountered along the way.
I have written this blog in a beginner-friendly manner, explaining all the steps I took to get to a K8s cluster with a WSO2 Enterprise Integrator deployment, including the issues I encountered along the way. I purged my local K8s setup beforehand, so the process starts from an empty K8s cluster.
Please be aware that to follow these steps you need a WSO2 support subscription, as the (WSO2-supplied) images come from the WSO2 Docker registry, which requires a subscription to access.
WSO2 has posted information on GitHub on how to run WSO2 EI on Kubernetes. I was looking for a simple setup to start with, so I decided to try out the ‘scalable-integrator’ Helm chart [4]. The goal is to set up a cluster of two EI instances and see how that works out on K8s.
I’m executing these steps on Docker Desktop with a local Kubernetes cluster. My Docker Desktop has been configured with 6 GB max memory and 2 CPU cores. In most cases this is sufficient to run a two-node cluster of a WSO2 product and have some sidecar containers running if needed.
My first step was to have a look at the sources provided in the GitHub repo. Looking through the Helm chart sources, I noticed that I needed an NFS server for storing the CAR files and for tenant-specific configuration files.
I did not want to set up NFS for this playground setup, so I decided to change the provided Helm chart to use a hostPath persistent volume instead of the NFS-based one.
I changed the persistent-volumes.yaml file and replaced the NFS configuration with a hostPath alternative.
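For illustration, a minimal hostPath alternative could look like the sketch below. The volume name, capacity, and host directory are my own placeholders rather than the chart’s exact values; they need to line up with what the chart’s persistent volume claims request.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wso2ei-scalable-integrator-shared-pv   # placeholder name
spec:
  capacity:
    storage: 1Gi                               # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/wso2/shared                     # placeholder directory on the Docker Desktop VM

Note that a hostPath volume lives on the node itself, which is fine for a single-node playground like Docker Desktop but not for a real multi-node cluster.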
After that I started the actions needed to set everything up on the K8s cluster.
I first installed Helm and Tiller. WSO2 provides Helm charts that hold all the configuration we need for this K8s setup.
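For reference, a typical Helm v2 bootstrap on an RBAC-enabled cluster looks like the sketch below; on Docker Desktop a plain ‘helm init’ is often enough, so treat the service-account steps as optional rather than something this setup strictly required.

# Give Tiller a service account with cluster-admin rights (common on RBAC-enabled clusters)
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# Install Tiller into the cluster using that service account
helm init --service-account tiller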
I installed Helm using the binary resources from [3]. Directly after installing Helm, I initiated the installation of MySQL through Helm:
helm install --name wso2ei-scalable-integrator-rdbms-service -f ../mysql/values.yaml stable/mysql --namespace wso2
This resulted in an error stating ‘could not find a ready tiller pod’.
The error seems to indicate what’s wrong, so let’s check the pod status:
kubectl get pods --namespace kube-system
The output of the above was (shortened for brevity):
tiller-deploy-74497878f7-ccjll 1/1 Running 0 5m
Tiller appears to be running; could it be that I was just impatient? Let’s try installing MySQL again.
Great, that worked as Helm indicated ‘STATUS: DEPLOYED’.
Doing a quick check using kubectl showed that MySQL is running.
kubectl get pods --namespace wso2

NAME READY STATUS RESTARTS AGE
wso2ei-scalable-integrator-rdbms-service-mysql-89b8b4fb4-dnc4d 1/1 Running 0 58s
The next step was to deploy Enterprise Integrator.
helm install --name wso2ei-scalable-integrator ./ --namespace wso2
Almost instantaneously there was again a successful response, STATUS: DEPLOYED.
Now let’s see whether everything is OK by checking the pod status.
Hmm, that does not look good. Kubernetes is unable to pull the WSO2 images properly…
To find out what is wrong with my pods, I want to see the pod details:
kubectl describe pod wso2ei-scalable-integrator-deployment -n wso2
The last few lines show the Events, and there I see an error about accessing WSO2’s Docker registry: ‘Get https://docker.wso2.com/v2/wso2ei-integrator/manifests/6.2.0: unauthorized: authentication required’.
Let’s try to fix that…
Reading up on the WSO2 documentation, I realized that I did not specify my WSO2 subscription credentials, so I adjusted the appropriate config file (values.yaml) and needed to delete and recreate the integrator instance.
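Under the hood this boils down to a standard Kubernetes image pull secret for docker.wso2.com; the chart generates one from the values.yaml credentials. A manual equivalent would be something like the following, where the secret name ‘wso2creds’ is my own placeholder:

# Create an image pull secret holding the WSO2 subscription credentials
kubectl create secret docker-registry wso2creds \
  --docker-server=docker.wso2.com \
  --docker-username=<your-wso2-username> \
  --docker-password=<your-wso2-password> \
  --docker-email=<your-wso2-email> \
  --namespace wso2

A deployment then references such a secret through its imagePullSecrets field.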
As we’re using Helm to manage all of that, I decided to delete the Helm release and recreate it.
helm ls --all --short | grep wso2
This gave me two results:
wso2ei-scalable-integrator
wso2ei-scalable-integrator-rdbms-service
I deleted the integrator release and recreated it:
helm delete --purge wso2ei-scalable-integrator
helm install --name wso2ei-scalable-integrator ./ --namespace wso2
Checking the pod status again using
kubectl describe pod wso2ei-scalable-integrator-deployment -n wso2
I now see that the last event says the image is being pulled from the Docker repository. This is looking hopeful!
Checking the pod status shows that the containers are being created at the moment:
kubectl get pods -n wso2
NAME READY STATUS RESTARTS AGE
wso2ei-scalable-integrator-deployment-7dbc5df68b-bfc77 0/1 ContainerCreating 0 2m
wso2ei-scalable-integrator-deployment-7dbc5df68b-g9cnq 0/1 ContainerCreating 0 2m
wso2ei-scalable-integrator-rdbms-service-mysql-89b8b4fb4-dnc4d 1/1 Running 0 24m
Let’s wait for a while and see what happens.
Pulling the image could take a while, as it’s quite big: nearly 1.5 GB.
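Since the Kubernetes cluster in Docker Desktop uses the local Docker daemon’s image cache, one way to speed this up (and to verify the registry credentials) is to pre-pull the image manually; the tag below is the one the chart referenced earlier:

# Log in to the WSO2 registry and pull the EI image into the local cache
docker login docker.wso2.com
docker pull docker.wso2.com/wso2ei-integrator:6.2.0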
As I did not see any change in my K8s pod statuses for a while, I decided to check Docker to see if the image was already there. The Docker image list showed that there was a (new) image:
docker image ls | grep wso2

docker.wso2.com/wso2ei-integrator 6.2.0 14cb7eacbb76 18 hours ago
After about 6 minutes the container was finally started.
kubectl get pods -n wso2
NAME READY STATUS RESTARTS AGE
wso2ei-scalable-integrator-deployment-7dbc5df68b-bfc77 1/1 Running 0 7m
wso2ei-scalable-integrator-deployment-7dbc5df68b-g9cnq 1/1 Running 0 7m
wso2ei-scalable-integrator-rdbms-service-mysql-89b8b4fb4-dnc4d 1/1 Running 0 28m
There are now three pods running: one MySQL instance and two WSO2 EI instances.
Let’s see whether we can access the admin console of one of those WSO2 EI instances.
To know how to access the instances we need to determine the ingress network addresses.
This can be shown through:
kubectl get ing -n wso2
NAME HOSTS ADDRESS PORTS AGE
wso2ei-scalable-integrator-gateway-tls-ingress wso2ei-scalable-integrator-gateway 80, 443 8m
wso2ei-scalable-integrator-ingress wso2ei-scalable-integrator 80, 443 8m
Hmm, no address. Now what to do? We can’t access our instances, as the ingress hosts are not bound to any local address.
I then noticed that I had not set up the Nginx ingress controller, which was recommended at the beginning of the WSO2 page. We’ll have to do that now:
helm install stable/nginx-ingress --name nginx-wso2ei --set rbac.create=true
However, after installing Nginx, the ingresses still did not show any addresses.
I first brought the number of replicas down to see whether a deployment restart might help.
I get the name of the deployment:
kubectl get deployments -n wso2
This shows wso2ei-scalable-integrator-deployment as the name, and I bring the number of replicas down to 0 to effectively stop all pods.
kubectl scale deployment --replicas=0 wso2ei-scalable-integrator-deployment -n wso2
After the scaling command we can see that there are now 0 instances.
kubectl get deployments -n wso2
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wso2ei-scalable-integrator-deployment 0 0 0 0 22m
Let’s scale it back up again to start the containers and see whether this influences the ingress addresses.
kubectl scale deployment --replicas=2 wso2ei-scalable-integrator-deployment -n wso2
kubectl get deployments -n wso2
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wso2ei-scalable-integrator-deployment 2 2 2 0 25m
Do you see the AVAILABLE column? There is a 0 there, while I expected a 2…
Checking the pod statuses showed that there were two pods already:
kubectl get pods -n wso2
NAME READY STATUS RESTARTS AGE
wso2ei-scalable-integrator-deployment-7dbc5df68b-lqklh 0/1 Running 0 47s
wso2ei-scalable-integrator-deployment-7dbc5df68b-m69gb 0/1 Running 0 47s
They are just not ready yet, so again, let’s see whether something is wrong or I’m just being impatient.
kubectl describe pod wso2ei-scalable-integrator-deployment-7dbc5df68b-lqklh -n wso2
The bottom event line states:
Normal Started 55s kubelet, docker-for-desktop Started container
This indicates the container has started and I’m apparently just impatient.
After about 3 minutes the deployment information shows that all replicas are now available:
wso2ei-scalable-integrator-deployment 2 2 2 2 27m
Getting the ingress addresses still shows no address.
Unfortunately, I did not come any further here. For some reason I couldn’t get K8s to expose my service through an external IP address.
In the K8s documentation [1] on Ingress types we can see that there are various types of ingress.
If we look at the Helm charts that WSO2 created, we can determine that the name-based virtual hosting type was used, as the integrator-ingress.yaml contains a host value (wso2ei-scalable-integrator) in its rules:
spec:
  tls:
    - hosts:
        - wso2ei-scalable-integrator
  rules:
    - host: wso2ei-scalable-integrator
      http:
        paths:
          - path: /
            backend:
              serviceName: wso2ei-scalable-integrator-service
              servicePort: 9443
This means that we must include this hostname in all requests we send to this ingress endpoint, as the load balancer needs it to determine the route.
In this setup there are two ingress endpoints: one for the administrative APIs (wso2ei-scalable-integrator) and one for the services deployed on the EI instance (wso2ei-scalable-integrator-gateway).
If we look at the cluster configuration, we can see that there is a LoadBalancer service listening on localhost.
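A service listing shows it; for example (any way of listing the services will do):

kubectl get all --all-namespaces | grep ingress-controller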
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-wso2ei-nginx-ingress-controller LoadBalancer 10.106.232.56 localhost 80:30706/TCP,443:30784/TCP 1h
Using this endpoint, together with the name-based virtual hosting routes, we may be able to access the admin console and the services.
Testing if this will work can be done using curl or by editing the hosts file on your machine.
Let’s try curl first:
curl -k https://localhost/carbon -H 'Host: wso2ei-scalable-integrator' -v
Again, we’re running out of luck…
We get an empty HTTP/2 response.
This indicates that we do end up on our ingress controller but that within that controller something is going wrong.
Let’s see what is going wrong. We can inspect the ingress controller’s pod logs: first determine the pod name, then open its logs.
kubectl get pods -n wso2
NAME READY STATUS RESTARTS AGE
nginx-wso2ei-nginx-ingress-controller-6bd6db7876-4m7gd 1/1 Running 0 3m
As I want to analyze what is going on, I open a terminal and continuously tail the ingress controller logs.
kubectl logs -n wso2 nginx-wso2ei-nginx-ingress-controller-6bd6db7876-4m7gd -f
When doing the curl command again, I see that the backend service is sending an unexpected response.
‘…upstream sent no valid HTTP/1.0 header while reading response header from upstream…’ is the error I see.
It appears that the request to the WSO2 EI instance results in some erroneous response, causing Nginx to throw this error. Let’s open the WSO2 EI logs too.
First, I need to know the pod names:
kubectl get pods -n wso2
pod/wso2ei-scalable-integrator-deployment-7dbc5df68b-p997c 1/1 Running 0 22m
pod/wso2ei-scalable-integrator-deployment-7dbc5df68b-szmq2 1/1 Running 0 22m
To open the logs of the first pod:
kubectl logs -n wso2 wso2ei-scalable-integrator-deployment-7dbc5df68b-p997c -f
and in another terminal the other node’s logs:
kubectl logs -n wso2 wso2ei-scalable-integrator-deployment-7dbc5df68b-szmq2 -f
Doing the curl again shows… nothing.
It appears the ingress controller is not able to communicate effectively with the backend services.
Our ingress controller has been configured to only pass SSL traffic through, as the Ingress configuration (integrator-ingress.yaml) shows exactly that.
Could it be that there is an SSL handshake issue, and should we change the Nginx configuration to handle SSL traffic properly?
The documentation for ingress-nginx [2] has a large set of annotations which we can choose from that influence SSL/TLS and HTTPS.
Searching for annotations related to HTTPS traffic, one caught my eye: ‘backend-protocol’. Using this annotation, we can tell Nginx how to handle backend communication; as stated there, Nginx assumes HTTP by default.
Let’s edit the ingress configuration and add this annotation to see what happens.
As I want to make sure that my configuration changes survive restarts and redeployments, I’m editing the Helm configuration for this change: in integrator-gateway-ingress.yaml and integrator-ingress.yaml I add the following annotation in the annotations section of these config files:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Then I delete and recreate the Helm release:
helm delete --purge wso2ei-scalable-integrator
helm install --name wso2ei-scalable-integrator . --namespace wso2
After several seconds, the K8s cluster information showed that the deployment was up and running again.
I open the kubectl logs again for the Nginx ingress controller and the two WSO2 EI instances to see what will happen when I retry. Once I see that the instances are fully started, I execute curl again.
curl -k https://localhost/carbon -H 'Host: wso2ei-scalable-integrator' -v
This curl command shows several lines, among which:
< HTTP/2 302
and
< location: https://wso2ei-scalable-integrator/carbon/admin/index.jsp
It works! As there is a redirect location coming back, I can deduce that we have now successfully communicated with a WSO2 EI instance, since that redirect is initiated by the WSO2 login process.
Testing another curl to see if we can access the services endpoints as well shows:
curl -k https://localhost/services -H 'Host: wso2ei-scalable-integrator-gateway' -vvv

< HTTP/2 200
< content-type: text/html
and in the body, we see an HTML page showing the deployed services.
Great! We have now successfully been able to request the admin console endpoint and the services endpoint.
Making this work in a browser is easy: just edit your hosts file and add this hostname for the 127.0.0.1 IP address. And while you’re at it, also add the other hostname we need to access services on the WSO2 Enterprise Integrator instance (wso2ei-scalable-integrator-gateway):
127.0.0.1 localhost wso2ei-scalable-integrator wso2ei-scalable-integrator-gateway
Now, opening a browser with https://wso2ei-scalable-integrator/carbon as the URL, we are presented with the login screen of the EI instance.
This successfully concludes my actions: there is now a two-node WSO2 Enterprise Integrator cluster running in a Kubernetes cluster.
Summary
This blog describes the steps I took to set up a Kubernetes cluster running a WSO2 Enterprise Integrator service. During the journey of configuring the K8s cluster I encountered various issues, and I have described the path to solving them, along with the commands I used to find out where a problem was or where a solution could be found.
This setup is far from production-ready, as several things are missing: the NFS persistent volume mapping was replaced with a simple hostPath one, and the WSO2 Enterprise Integrator image is left fully default, without any customizations for security or performance.
However, it does show the approach to get it up and running.
Links:
[1]: https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress
[2]: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md
[3]: https://github.com/helm/helm/releases/tag/v2.14.1
Do you want to learn more about WSO2? Join one of our trainings!