lsergio commented on issue #5069:
URL: https://github.com/apache/camel-k/issues/5069#issuecomment-1900711844

   During more experimentation, I ended up with a scenario that might have the 
same root cause:
   
   I created an Integration and gave it fewer resources than it needs in the Container trait (only 20Mi of memory for a JVM integration):
   ```
   apiVersion: camel.apache.org/v1
   kind: Integration
   metadata:
     name: test
   spec:
     sources:
     - name: main.groovy
       content: |-
         from('rest://GET:/test')
             .to("direct:start")
   
         from("direct:start")
           .to("https://httpbin.org/delay/2?bridgeEndpoint=true")
           .to("log:info")
     traits:
       container:
         requestCPU: "200m"
         requestMemory: 20Mi
         limitMemory: 20Mi
       quarkus:
         buildMode:
         - jvm
       affinity:
         enabled: true
         nodeAffinityLabels:
           - "karpenter=false"
       knative-service:
         minScale: 1
   ```
   After applying it, I see the pod in a CrashLoopBackOff due to an OOM error, and the Integration is in the Error phase:
   ```
   k get pod
   NAME                                     READY   STATUS             RESTARTS      AGE
   camel-k-operator-6cbc656bbd-bmmwp        1/1     Running            0             156m
   test-00001-deployment-8456cb79f9-9pxbl   0/2     CrashLoopBackOff   1 (14s ago)   16s
   k get it
   NAME   PHASE   RUNTIME PROVIDER   RUNTIME VERSION   KIT                        REPLICAS
   test   Error   quarkus            3.2.3             kit-cml7drk14nurfir6ingg   1
   ```
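   One way to confirm the OOM (plain kubectl; the grep is only there to narrow the output) is to look at the last container state of the failing pod, which should report reason OOMKilled and exit code 137:
   ```
   k describe pod test-00001-deployment-8456cb79f9-9pxbl | grep -A 3 'Last State'
   ```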
   Then I fixed the amount of memory in the Container trait and updated the 
Integration.
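   The change was just raising the memory values in the Container trait, along these lines (the values shown here are only an example, not necessarily the exact ones I used):
   ```
     traits:
       container:
         requestCPU: "200m"
         requestMemory: 512Mi
         limitMemory: 512Mi
   ```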
   Now I see 2 pods for my Integration:
   ```
   k get pod
   NAME                                     READY   STATUS             RESTARTS      AGE
   camel-k-operator-6cbc656bbd-bmmwp        1/1     Running            0             158m
   test-00001-deployment-8456cb79f9-9pxbl   0/2     CrashLoopBackOff   4 (55s ago)   2m25s
   test-00002-deployment-6ffff56fb5-r9wqc   1/2     Running            0             7s
   ```
   where the second one is running correctly.
   But the Integration is still in the Error phase:
   ```
   NAME   PHASE   RUNTIME PROVIDER   RUNTIME VERSION   KIT                        REPLICAS
   test   Error   quarkus            3.2.3             kit-cml7drk14nurfir6ingg   2
   ```
   As it shows 2 replicas, I assume both pods are being considered when determining that the Integration is failing.
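   To double-check that, the Integration status can be dumped and inspected directly; the conditions block there should explain why the phase is still Error:
   ```
   # status.conditions shows which condition is keeping the phase at Error
   k get it test -o yaml
   ```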
   At this point, the ksvc state is:
   ```
   k get ksvc
   NAME   URL                                              LATESTCREATED   LATESTREADY   READY   REASON
   test   http://test.sensedia.poc-luis.sensedia-eng.com   test-00002      test-00002    True
   ```
   And there are 2 knative revisions:
   ```
   k get revision
   NAME         CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON        ACTUAL REPLICAS   DESIRED REPLICAS
   test-00001   test                             1            False   ExitCode137   0
   test-00002   test                             2            True                  1                 1
   ```
   After around 10 minutes, the old pod disappeared and the Integration moved to a success state.
   The revision list now shows:
   ```
   NAME         CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON                     ACTUAL REPLICAS   DESIRED REPLICAS
   test-00001   test                             1            False   ProgressDeadlineExceeded   0
   test-00002   test                             2            True                               1                 1
   ```