surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-939395365


   I have gone ahead and added the ability to include additional `Container`s. If a System Administrator takes issue with this functionality, they can disable it with the provided flag.
   
   I shall go ahead and add the overwriting merge for the `Volume Mounts` and `Ports`, which are essential for sidecars and various other support-container patterns.
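   
   Roughly, the overwriting merge I have in mind looks like the sketch below: entries Heron generates replace entries from the Pod Template that share the same key (for example a port name or a mount path), and everything else is kept. The `mergeListsByKey` helper and the Kubernetes Java client model classes are used here purely for illustration and are not necessarily what the final code will look like.
   
   ```java
   import io.kubernetes.client.openapi.models.V1ContainerPort;
   import io.kubernetes.client.openapi.models.V1VolumeMount;
   
   import java.util.ArrayList;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.function.Function;
   
   final class PodTemplateMergeSketch {
   
     // Generic overwriting merge: items in <overrides> replace items in <base>
     // that map to the same key; all other items are kept in insertion order.
     static <T, K> List<T> mergeListsByKey(List<T> base, List<T> overrides,
                                           Function<T, K> keyExtractor) {
       final Map<K, T> merged = new LinkedHashMap<>();
       if (base != null) {
         base.forEach(item -> merged.put(keyExtractor.apply(item), item));
       }
       if (overrides != null) {
         overrides.forEach(item -> merged.put(keyExtractor.apply(item), item));
       }
       return new ArrayList<>(merged.values());
     }
   
     // Heron-generated ports win over Pod Template ports with the same name.
     static List<V1ContainerPort> mergePorts(List<V1ContainerPort> template,
                                             List<V1ContainerPort> heron) {
       return mergeListsByKey(template, heron, V1ContainerPort::getName);
     }
   
     // Heron-generated mounts win over Pod Template mounts at the same path.
     static List<V1VolumeMount> mergeMounts(List<V1VolumeMount> template,
                                            List<V1VolumeMount> heron) {
       return mergeListsByKey(template, heron, V1VolumeMount::getMountPath);
     }
   }
   ```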
   
   The Pod Template below is simple, with the sidecar loading an `alpine` image. While testing the merging of spec lists I encountered a bug, which I have since squashed. All changes remain on `dev` pending a merge.
   
   I am working on a desktop that is CPU constrained, so the Pods will remain in a `Pending` state, as shown in the `describe pods` output below.
   
   <details><summary>Pod Template</summary>
   
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: pod-template-example
     namespace: default
   template:
     metadata:
       name: acking-pod-template-example
     spec:
       containers:
         # Executor container
         - name: executor
           securityContext:
             allowPrivilegeEscalation: false
           env:
           - name: Var_One
             value: "First Variable"
           - name: Var_Two
             value: "Second Variable"
           - name: Var_Three
             value: "Third Variable"
           - name: POD_NAME
             value: "MUST BE OVERWRITTEN"
           - name: HOST
             value: "REPLACED WITH ACTUAL HOST"
   
         # Sidecar container
         - name: sidecar-container
           image: alpine
   ```
   
   </details>
   
   <details><summary>describe pods acking-0</summary>
   
   ```bash
   Name:           acking-0
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-74f89d8bd9
                   statefulset.kubernetes.io/pod-name=acking-0
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6008/TCP, 6001/TCP, 6002/TCP, 6009/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP, 6003/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0-3560430130176919824.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking196b29f8-aad9-42c2-a6c8-7987ef4602e9 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:        (v1:status.podIP)
         POD_NAME:   acking-0 (v1:metadata.name)
         Var_One:    First Variable
         Var_Three:  Third Variable
         Var_Two:    Second Variable
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h62wk (ro)
     sidecar-container:
       Image:        alpine
       Port:         <none>
       Host Port:    <none>
       Environment:  <none>
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h62wk (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-h62wk:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Burstable
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age                 From               Message
     ----     ------            ----                ----               -------
     Warning  FailedScheduling  38s (x2 over 115s)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   ```
   
   </details>
   

