lucmichea commented on issue #2171:
URL: https://github.com/apache/apisix-ingress-controller/issues/2171#issuecomment-2046001720

   @shantanu10 Hi! I was having a similar issue when working with the 
'standalone' (composite) deployment of the ingress-controller. 
   
   We found out that this is due to the way the schemas (used for configuration 
validation) are loaded by the ingress-controller. Here is an approximate 
trail through the code to follow (if interested):
   
   1. [Standalone Config 
Condition](https://github.com/apache/apisix-ingress-controller/blob/master/pkg/apisix/cluster.go#L172)
 
   2. [Loading the 
controller](https://github.com/apache/apisix-ingress-controller/blob/master/pkg/apisix/cluster.go#L178)
 (standalone)
   3. [Loading of 
schemas](https://github.com/apache/apisix-ingress-controller/blob/master/pkg/apisix/cluster.go#L190)
   4. [In Memory Validation of 
schema](https://github.com/apache/apisix-ingress-controller/blob/master/pkg/apisix/route.go#L272)
 (standalone)
   
   Long story short, as a temporary fix I mounted an additional file, 
`apisix-schema.json`, from a ConfigMap that contains the schemas 
of all plugins. To do so, you need to:
   
   1. Create a ConfigMap with all schemas
   
   You can obtain all the schemas (including those of any custom plugins you 
created) by calling an endpoint of the `apisix` container in the 
composite deployment. To do so, port-forward to the deployment:
   
   `kubectl port-forward deployment/ingress-apisix-composite-deployment -n 
ingress-apisix 9090:9090`
   
   Then call the endpoint and save the result in a file:
   
   `curl http://localhost:9090/v1/schema > /tmp/schema.json`
   
   (for readability, I recommend piping the result through `jq` before saving 
it to the file)
   
   Finally, apply the ConfigMap:
   
   `kubectl create configmap plugin-schema-json -n ingress-apisix 
--from-file=apisix-schema.json=/tmp/schema.json`
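   The save-and-validate part of step 1 can be sketched as a short shell snippet. The JSON below is only a hypothetical stand-in for the real `/v1/schema` response (in practice, the file comes from the `curl` call above); the `plugins` key in the `jq` filter is an assumption about the response shape.

   ```shell
   # Hypothetical sample mimicking the shape of the /v1/schema response;
   # replace this with the real output of the curl call above.
   cat > /tmp/schema.json <<'EOF'
   {
     "main": { "route": { "type": "object" } },
     "plugins": { "limit-count": { "type": "object" } }
   }
   EOF

   # jq exits non-zero on invalid JSON, so this doubles as a sanity check
   # before the file is baked into the ConfigMap.
   jq . /tmp/schema.json > /dev/null && echo "schema.json is valid JSON"

   # List the plugin names whose schemas the controller will be able to validate.
   jq -r '.plugins | keys[]' /tmp/schema.json
   ```

   Validating the file before creating the ConfigMap is worthwhile: a truncated or malformed `apisix-schema.json` would only surface later as validation failures inside the controller.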
   
   2. Modify the composite deployment
   
   For the sake of brevity I added `# [...]` to represent skipped sections.
   
   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: ingress-apisix-composite-deployment
     namespace: ingress-apisix
   spec:
     replicas: 1
     selector:
       matchLabels:
         app.kubernetes.io/name: ingress-apisix-composite-deployment
     template:
       metadata:
         labels:
           app.kubernetes.io/name: ingress-apisix-composite-deployment
       spec:
         volumes:
           - name: apisix-config-yaml-configmap
             configMap:
               name: apisix-gw-config.yaml
            - name: plugin-schema-json
              configMap:
                name: plugin-schema-json
         containers:
           - livenessProbe:
               tcpSocket:
                 port: 8080
               # [...]
             # [...]
             name: ingress-apisix
             image: apache/apisix-ingress-controller:1.7.0
             volumeMounts:
               - mountPath: /ingress-apisix/conf/apisix-schema.json
                 name: plugin-schema-json
                 subPath: apisix-schema.json
   # [...]
   ```
   
   ---
   
   Hope it helps! Looking forward to a better solution from the contributors 
😸. I would propose that standalone mode load all the schemas through the 
APISIX API rather than from a static file; the ConfigMap solution above, which 
stores the result of a call to that API, is only a placeholder. In the 
composite deployment, the `apisix-ingress-controller` can already call the API 
on `localhost:9090` without any issue (this can be reproduced by opening a 
shell in the `ingress-controller` container and running `apk add curl 
&& curl http://localhost:9090/v1/schema`).
   
   
   

