
Starting with Kubernetes Dashboard UI

Thijs Volders
Strategic Technology Officer

I ran into a strange problem where I could not start my WSO2 Identity Server Docker container because one of its designated ports was already in use on my system. In this blog I will show you the problem and the solution, and how the Kubernetes Dashboard UI can help.

When starting the IS container I got an error stating that a bind port was already in use, specifically the 9443 admin console port. I immediately turned to my terminal and asked the (Mac)OS which process was claiming this port:

sudo lsof -n -i4TCP:9443 | grep LISTEN

The result was… nothing. That's strange; I'd never seen that happen before.

Kubernetes cluster

I realized that I had created a deployment on my local Kubernetes cluster a while ago. Maybe that was still running and thus claiming the port. Quickly disabling my local Kubernetes cluster showed that I was then able to start the WSO2 Identity Server container successfully. Apparently, Kubernetes was claiming the port after all.

Disabling the local Kubernetes cluster was a workaround but not a solution, so I started the cluster again and set out to find the culprit deployment or service that was claiming this port.

After starting the cluster again, I did a

kubectl get deployments --all-namespaces=true

which showed me that there were some deployments that defined the use of this particular port. I had to execute several commands to determine which service was actually claiming the port I needed, and I ended up removing the deployment altogether as it had become obsolete anyway.
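
For reference, commands along the following lines help track down what is occupying a port; the exact jsonpath and grep filters here are my own sketch rather than the exact ones I ran, and <deployment-name> and <namespace> are placeholders:

# List services across all namespaces and filter on the port
kubectl get services --all-namespaces -o wide | grep 9443

# List deployments with the container ports they declare and filter on 9443
kubectl get deployments --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].ports[*].containerPort}{"\n"}{end}' | grep 9443

# Once the obsolete deployment is identified, it can simply be removed
kubectl delete deployment <deployment-name> -n <namespace>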

Going through kubectl commands to get the job done can be a bit daunting for many, and having a user interface to find the information you're looking for can be quite valuable.

For Docker there is a well-known UI called Portainer, and there is a Kubernetes alternative too. It's called the Kubernetes Dashboard and is available as an importable Kubernetes configuration. A few actions must be performed to get access to the dashboard.

Starting the dashboard itself is as simple as adding the dashboard configuration to your cluster using:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml

This will add a deployment and a service to the kubernetes-dashboard namespace in the Kubernetes cluster.
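
If you want to check that everything came up as expected, a quick optional look at that namespace will show the new resources:

kubectl get deployments,services,pods -n kubernetes-dashboard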

Then start kubectl proxy to be able to access the UI:

kubectl proxy

Voilà, the dashboard should be available at:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
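
By default, kubectl proxy listens on port 8001. If that port happens to be taken as well, the proxy can be bound to another port, in which case the port in the URL above changes accordingly. For example:

kubectl proxy --port=8080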

When opening the dashboard, a login screen is presented.  In this screen, a kubeconfig file or an access token must be provided.


There are two ways to log in to the dashboard.

One is to retrieve the secret token using several kubectl commands; the other is to generate a kubeconfig file that contains the login info. The first gives insight into how the service account, its roles and its secrets are constructed; the second is simpler in the long run, as a kubeconfig file is persistent and can be reused more easily over time.

Token

To start with the first option, logging in to the Kubernetes dashboard with a token, you have to either reuse or create a service account, provide it with a cluster role binding, and then find its secret token value. The token value can then be pasted into the password field of the login screen, and login should succeed.

To create that service account, which we'll call dashuser, the following command can be used. The account is created in the default namespace for convenience.

kubectl create serviceaccount dashuser -n default

To get the most out of the UI, it's simplest to have an account that can administer the cluster, which means giving dashuser root-like access.

This is done by attaching a ClusterRoleBinding with the cluster-admin role to the dashuser account.

kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashuser

This creates the ClusterRoleBinding named dashboard-admin with the cluster role cluster-admin attached to it, and applies the binding to the service account default:dashuser. (ClusterRoleBindings are cluster-scoped, so the -n default flag has no real effect here.)

This should be sufficient to access the dashboard with this user.
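
To double-check that the binding is in place, you can optionally describe it; this shows the attached role and the subjects it applies to:

kubectl describe clusterrolebinding dashboard-admin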

To get into the dashboard, the access token of this service account is needed. This access token lives in a secret belonging to dashuser; it was generated automatically when the service account was created, and you must retrieve it to log in.

To determine the secret name for the service account, get its information:

kubectl get serviceaccount dashuser -o json

This will show us (abbreviated):

...
"secrets": [
  {
    "name": "dashuser-token-np6c6"
  }
]
...

Then, to get the value of this secret, use this name as a parameter in the following command:

kubectl get secret dashuser-token-np6c6 -o json

In the output there should be a data field with a token element that contains the base64-encoded value of the required token. Copy the value and base64-decode it; the result is the token that can be supplied in the dashboard login screen.
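
As an illustration of that manual decode step (the value below is just a placeholder for the base64 string copied from the secret's data.token field):

echo "<base64-encoded token>" | base64 --decode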

The following command is a simple dynamic one-liner that combines the commands above into a single statement:

kubectl get secret $(kubectl get serviceaccount dashuser -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

There you have it: the token to log in to the dashboard. Copy the value, paste it into the input field of the dashboard in your browser, and login should succeed.

For any follow-up logins to the dashboard, you need to execute the last command again to retrieve the token value and paste it into the login screen.

Kubeconfig

The other option is to create a kubeconfig file that holds the information to log in to the dashboard. To create a kubeconfig file, several of the commands above need to be reused and the results fitted into a YAML structure, which is then placed in the kubeconfig file.

On Stack Overflow [https://stackoverflow.com/a/47776588/2417422] there is a very valuable snippet for creating this kubeconfig file from a simple shell script. I've amended it slightly to be a little more user-friendly for my liking. In this version you can specify the service account name instead of the token name.

# your server name goes here
server=https://localhost:8443
# the name of the service account goes here
name=dashuser

# Enable this line if you've used a custom namespace for the user. This will switch the current namespace to the designated one.
#kubectl config set-context $(kubectl config current-context) --namespace=default

secretname=$(kubectl get serviceaccount $name -o jsonpath="{.secrets[0].name}")
ca=$(kubectl get secret/$secretname -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$secretname -o jsonpath='{.data.token}' | base64 --decode)
#namespace=$(kubectl get secret/$secretname -o jsonpath='{.data.namespace}' | base64 --decode)

echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > sa.kubeconfig

This script will generate a file named sa.kubeconfig in the current folder. Go back to the browser, select the Kubeconfig option and supply this file to log in.
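
For example, the script can be saved and run along these lines (the filename gen-kubeconfig.sh is just my own choice), and the generated file can optionally be sanity-checked against the cluster before using it in the browser:

# Save the script as gen-kubeconfig.sh, make it executable and run it
chmod +x gen-kubeconfig.sh
./gen-kubeconfig.sh

# Optional sanity check: the generated kubeconfig should be able to talk to the cluster
kubectl --kubeconfig=sa.kubeconfig get pods -n kubernetes-dashboard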

If you get an error at this point, the kubeconfig file is most likely incomplete. Check that the script above was provided with the proper service account name and that the correct namespace was used to find the service account and its secret info.

Dashboard

The dashboard provides an overview of the complete cluster configuration. Information like namespaces, deployments, services and replica sets is visible there.
It's a valuable tool to have in your toolbox when you're running Kubernetes, as it gives you a quick overview of the components in the cluster and lets you perform some management actions too.


Interested in reading more about Kubernetes? Have a look at my other blog about Kubernetes and WSO2 Enterprise Integrator as well. Any questions? Don't hesitate to leave a comment below.

