As microservices are increasingly being deployed on Kubernetes (K8s), the need to expose these microservices as well-documented, easy-to-consume, managed APIs becomes important for developing great applications. Operators are software extensions of K8s that use custom resources to deploy packages and manage applications. Operators hide the complexity of deployment and also remove the requirement of domain-specific knowledge for application management.
API Operator for Kubernetes
API Operator is an extension for K8s that communicates with the Kubernetes API server to deploy APIs on the Kubernetes cluster in the most convenient way. It makes APIs a first-class citizen in the K8s ecosystem.
- API Operator can be used to deploy an individual API for an individual microservice on a Kubernetes cluster.
- API Operator can be used to deploy a single API for multiple microservices on a Kubernetes cluster.
- API Operator helps users to expose their microservices as managed APIs (with features such as security, rate limiting, a marketplace for API discovery, and API documentation) in the Kubernetes environment without any additional work.
- API Operator provides a fully automated experience for cloud native API management.
- OpenAPI (Swagger) definition of an API for a given microservice or group of microservices is the single source of truth.
- API Operator handles automatic scaling for API Gateways.
- API Operator helps in the deployment and management of backend services.
- Easy to promote APIs between the environments (Dev, QA, and Production).
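To illustrate the single-source-of-truth idea, a minimal OpenAPI definition for one of the microservices might look like the sketch below. The service name, path, and endpoint URL are illustrative; the `x-wso2-production-endpoints` vendor extension is how WSO2 API Microgateway wires the API to its backend service, but check the operator's GitHub samples for the exact schema your version expects.

```yaml
openapi: 3.0.0
info:
  title: Inventory API          # illustrative API name
  version: v1
# WSO2 vendor extension pointing the gateway at the backend service.
# The URL below is an assumption for this example.
x-wso2-production-endpoints:
  urls:
    - http://inventory-service:8080
paths:
  /inventory/{id}:
    get:
      responses:
        "200":
          description: Inventory item details
```

This one file drives everything the operator deploys for the API, which is what makes promotion between environments straightforward.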
API Operator Ecosystem:
In the example above, we have three different microservices: Product Microservice, Inventory Microservice and Review Microservice.
- Kubernetes without API Operator: In this scenario, consumers have direct access to the microservices deployed in the Kubernetes environment. There is no access control for the services.
- Kubernetes with API Operator: In this scenario, an API Gateway is deployed, and consumers don’t access the services directly, but via the API Gateway. This helps enforce API management features such as security, rate limiting, monitoring, mediation, et cetera.
In this ecosystem, we have three planes in the Kubernetes environment, and these planes have various components, as shown in the above diagram.
- Data Plane: It consists of various Microservices and API gateway.
- Control Plane: It consists of the Key Manager (STS), which is a token generation service, and the Traffic Manager, which helps with rate limiting.
- Management Plane: It consists of API Publisher where API Developers and Product Managers come together for API lifecycle management, Developer Portal where Application Developers and External users discover and subscribe to an API, and Business Insights which are implemented using API Analytics server.
API Operator Overview:
APICTL: This is a command line tool developed by WSO2 to interact with Kubernetes to deploy an API using OpenAPI definition.
apictl add api -n <API name> --from-file <filepath>/<filename>.yaml
Once the above command is executed, an API Custom Resource Definition (CRD) is added to Kubernetes along with a ConfigMap which contains the OpenAPI definition for the deployed API.
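As a rough sketch, the custom resource created by the command above might look like the following. The field names follow the API kind from the operator's `wso2.com/v1alpha1` group as published on GitHub, but the resource name, ConfigMap name, and exact spec layout here are assumptions for illustration; verify against the samples for your operator version.

```yaml
apiVersion: wso2.com/v1alpha1    # API group/version used by the API Operator
kind: API
metadata:
  name: inventory-api            # illustrative API name
spec:
  definition:
    swaggerConfigmapNames:
      - inventory-api-swagger    # ConfigMap holding the OpenAPI definition
  mode: privateJet               # Microgateway deployment mode
  replicas: 1                    # initial replica count for the gateway
```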
There are four different types of Custom Resource Definition (CRD):
- API: This definition holds API-related information. The API kind takes the Swagger definition as a ConfigMap, along with the replica count and the Microgateway deployment mode. More details can be found on GitHub.
- Target Endpoints: When deploying APIs created with the apim operator, it is sometimes necessary to deploy the endpoint services associated with those APIs. The TargetEndpoint kind provides the flexibility to deploy the backend services by specifying the relevant Docker images and parameters. More details can be found on GitHub.
- Security: APIs created with the Kubernetes apim operator can be secured by defining security with the Security kind. It supports Basic, JWT, and OAuth2 security types. More details can be found on GitHub.
- Rate Limiting: Rate limiting policies can be applied to the APIs created with the Kubernetes operator, to throttle out requests according to the desired limit. More details can be found on GitHub.
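For example, a JWT security definition might look roughly like this. The field names under `spec` are an assumption based on the Security kind described above (the issuer URL, audience, and certificate secret name are placeholders); consult the operator's GitHub samples for the exact schema.

```yaml
apiVersion: wso2.com/v1alpha1
kind: Security
metadata:
  name: jwt-security               # illustrative name, referenced from the OpenAPI definition
spec:
  type: JWT                        # Basic | JWT | OAuth2 are the supported types
  securityConfig:
    - issuer: https://sts.example.com/oauth2/token   # assumed token issuer
      audience: http://org.wso2.apimgt/gateway        # assumed audience claim
      certificate: sts-cert                           # secret holding the signing certificate
```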
Each of these Custom Resource Definitions (CRDs) aligns with a Custom Resource Controller:
- API → API Controller
- Target Endpoint → Endpoint Controller
- Security → Security Controller
- Rate Limiting → Rate Limiting Controller
Effectively, when an OpenAPI (Swagger) definition is deployed on Kubernetes, these CRDs are deployed and consumed by the corresponding custom controllers and acted upon.
API Controller: As shown in the diagram above, the API Controller performs the operations below:
- Runs Kaniko: Kaniko is a Google utility that builds Docker images in Kubernetes. This creates an image for the API Microgateway based on your OpenAPI (Swagger) definition and pushes it to Docker Hub.
- Creates resources such as a Deployment, Service, and HPA (Horizontal Pod Autoscaler) for the API Microgateway and deploys them in the Kubernetes cluster.
Once the above step is done, we get a pod in Kubernetes with an API Microgateway container. More information on each of the Custom Controllers can be found on GitHub.
In a nutshell, running the above apictl command with OpenAPI (Swagger) definition gives us a pod with a running API Microgateway container.
Deployment Modes for APIs
Private Jet: In this mode, we have one pod for the API Microgateway container and separate pods for the microservice containers, so the gateway and the microservice can be scaled independently. This mode also ensures a dedicated gateway for the API.
Sidecar: In this mode, there is one pod for both the API Microgateway container and the microservice container. This deployment pattern is useful if you want to scale the API Microgateway and the microservice together. This mode also ensures a dedicated API Microgateway for a microservice.
Shared: In this mode, you can have two or more APIs in the same API Microgateway container, as opposed to Private Jet mode.
Note: Shared mode is not available for release version WSO2 API Manager 3.1.0 but will be available in the next release.
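The deployment mode is selected per API. As a sketch, it would be carried as a field of the API custom resource, along the lines of the fragment below; the exact field name and value spellings are assumptions based on the mode names above, so check the operator documentation for your version.

```yaml
spec:
  mode: sidecar    # privateJet | sidecar (shared not yet available in APIM 3.1.0)
  replicas: 2      # in privateJet mode, the gateway scales independently of the service
```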
API Operator Installation
- To try out API Operator, you can install minikube on your machine based on your operating system by following the instructions provided on the Kubernetes website.
- Once a single-node Kubernetes cluster is running on your machine (installed with minikube), follow the steps outlined on GitHub.
- You can try out various sample scenarios that are provided at the end of the installation instructions.
- You can also install the K8s API operator using Operator HUB.
As microservices are largely deployed on Kubernetes clusters, exposing them as managed APIs is an important concern for application developers and business owners. The API Operator makes APIs a first-class citizen in the Kubernetes ecosystem, which makes it easier to deploy APIs on a Kubernetes cluster using just the API definition and the API controller (apictl) CLI tool. Hence, developers don’t have to worry about API management logic, deployment-related details, scalability, et cetera; they can focus solely on business logic. It also simplifies the promotion of APIs from the development environment to the production environment.
The API Operator is ideal for anyone looking for a fast, scalable, and robust API management solution on a Kubernetes cluster that both exposes microservices and makes their APIs easy to manage.