Hi all,
This thread is to initiate the discussion on providing native API
Management support for K8s.
The intention is to make it convenient to manage APIs in a Kubernetes
cluster in a cloud-native manner.
This email contains the basic design and flow of the process.
The following diagram depicts the overall design of the intended
implementation.
[image: apimoperatos.jpg]
*Design overview*
WSO2 API Microgateway [1] will be used as the backend for managing APIs
(proxying, throttling, security, etc.).
To cater to the above, four custom resource definitions (CRDs) will be
introduced to the K8s cluster, as described below (minimal sketches of
these resources follow the list).
1. API Kind
    - This deploys an API from a user-given swagger definition. It passes
the swagger definition, along with the API name, to the micro-gateway
toolkit so that the given API is exposed in the Kubernetes cluster via the
micro-gateway.
2. Endpoint Kind
    - The endpoint can be given either as an endpoint URL or as a docker
image. If it is defined as a docker image, the controller will create the
k8s artefacts (deployments/services) for the endpoint using the details
given in the Endpoint kind. The endpoint is referenced from the API
swagger definition using vendor extensions.
3. Rate-limiting Kind
    - Contains the throttle policy details, which ultimately produce a
policies.yaml file. Once mounted into the micro-gateway project (toolkit),
this generates the necessary policy source files.
4. Security Kind
    - Defines the API security. It accepts user credentials and
certificates; the credentials are added to the micro-gateway config file
and the certificates to the micro-gateway trust store.
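To make the discussion concrete, a few minimal sketches of these custom
resources are given below. Note that the group/version (wso2.com/v1alpha1)
and all field names are illustrative assumptions for this thread, not a
finalized spec:

    # Endpoint given as a docker image; the controller would create the
    # deployment/service for it (all names/fields are hypothetical)
    apiVersion: wso2.com/v1alpha1
    kind: Endpoint
    metadata:
      name: products-backend
    spec:
      deploy:
        dockerImage: myrepo/products-backend:1.0.0
        count: 1
    ---
    # Throttle policy details that would end up in policies.yaml
    apiVersion: wso2.com/v1alpha1
    kind: RateLimiting
    metadata:
      name: fivereqpolicy
    spec:
      requestCount:
        limit: 5
        unitTime: 1
        timeUnit: min
    ---
    # Security definition; credentials go to the micro-gateway config
    # file and certificates to the micro-gateway trust store
    apiVersion: wso2.com/v1alpha1
    kind: Security
    metadata:
      name: products-basic-auth
    spec:
      type: basic
      credentialsSecretName: products-credentials

The endpoint would then be picked up from the API swagger definition via a
vendor extension (for example an x-wso2 extension naming the Endpoint
resource above; the exact extension name is also an assumption here).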
After applying all the above resources, including the API operator, in the
k8s cluster, an API can be exposed with a single command such as
*kubectl add api <api_name> --from-file=<path to swagger.json>*.
The ultimate result would be k8s deployments using the micro-gateway docker
image, exposing a service for the given API definitions.
Once the final micro-gateway docker image is built, it will be pushed to a
docker registry so that it can be reused across different environments (QA,
Dev, Pre-prod, Production, etc.) to spin up the necessary APIs in the k8s
cluster efficiently.
Since docker images are built inside the k8s cluster, we are using the
Google container tool Kaniko [2]. Kaniko builds container images from a
Dockerfile, inside a container of a Kubernetes cluster.
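As a rough sketch, the operator could launch a Kaniko pod along the
following lines. The executor image and its --dockerfile/--context/
--destination arguments are standard Kaniko usage; the volume, config map,
and secret names are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kaniko-products
    spec:
      restartPolicy: Never
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args:
        - --dockerfile=/workspace/Dockerfile
        - --context=dir:///workspace/
        - --destination=<docker_registry>/products:v1
        volumeMounts:
        # swagger, micro-gw.conf, policies.yaml, and certificates are
        # mounted here by the operator as the build context
        - name: build-context
          mountPath: /workspace
        # registry credentials for pushing the built image
        - name: docker-config
          mountPath: /kaniko/.docker
      volumes:
      - name: build-context
        configMap:
          name: products-build-context
      - name: docker-config
        secret:
          secretName: registry-credentials
          items:
          - key: .dockerconfigjson
            path: config.json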
*Implementation details of the APIM controller/operator:*
1. The operator reads the swagger definition from the config map and
resolves any references to other kinds (such as endpoints, security, etc.).
2. The resolved swagger definition will be mounted to the Kaniko [2] pod
along with the other necessary artefacts (micro-gw.conf, policies.yaml,
certificates) to generate the micro-gateway image (the micro-gateway
executable packaged with the micro-gateway runtime).
3. The Kaniko container uses the Dockerfile, build context, etc. to build
the final docker image and pushes it to the destination registry.
    - The Dockerfile used in Kaniko will be a multi-staged docker file (a
sketch is given after this list).
    - Stage 1:
        - Run the micro-gateway toolkit and generate the micro-gateway
executable file.
    - Stage 2:
        - Pass the generated executable file to the micro-gateway runtime
and start the service.
    - The created docker image's name would be in the format of
*<docker_registry>/<api_name>:<api_version>*
4. If the relevant docker image is already available in the registry, the
operator avoids running the Kaniko pod. Instead, it creates the k8s
deployments and services using the already-available docker image.
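For illustration, the multi-staged Dockerfile could look roughly as
follows. The toolkit/runtime image names, project layout, and build
command are assumptions based on the current micro-gateway toolkit, not a
finalized Dockerfile:

    # Stage 1: run the micro-gateway toolkit and build the executable
    FROM wso2/wso2micro-gw-toolkit AS build
    # project layout and paths below are illustrative
    COPY swagger.json /project/products/api_definitions/
    COPY policies.yaml /project/products/
    RUN micro-gw build products

    # Stage 2: hand the generated executable to the micro-gateway runtime
    FROM wso2/wso2micro-gw
    COPY --from=build /project/products/target/products.balx /home/exec/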
[1] https://wso2.com/api-management/api-microgateway/
[2] https://github.com/GoogleContainerTools/kaniko/blob/master/README.md
Thanks,
DinushaD
--
*Dinusha Dissanayake* | Senior Software Engineer | WSO2 Inc
(m) +94 71 293 9439 | (e) [email protected]