surahman edited a comment on pull request #3725:
URL: https://github.com/apache/incubator-heron/pull/3725#issuecomment-961440275


   Deployment testing: the `Persistent Volume Claim`, `Volume`, and `Volume Mounts` were all created successfully. I have not configured a `StorageClass`, which is why the Pod's status shows an error.
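   The `Pending` status should clear once the referenced `StorageClass` exists before submission. A minimal sketch of a placeholder manifest, assuming no dynamic provisioner is available (`storage-class-name` matches the example submit command; the provisioner is a stand-in for your cluster's CSI driver):

```shell
# Write a placeholder StorageClass manifest. The name must match the
# ...storageClassName config property passed to `heron submit`.
cat > storage-class.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/no-provisioner  # stand-in; use your cluster's CSI provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

# Then, before submitting the topology:
# kubectl apply -f storage-class.yaml
```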
   
   Manual cleanup of the `Persistent Volume Claim` is required, and submission will fail with a `conflict` error if the PVC name clashes with an existing claim.
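   The cleanup amounts to deleting the claim by name once the topology is killed (a sketch; the claim name and namespace are the ones used in the example submit command):

```shell
# Remove the leftover claim; otherwise resubmitting with the same
# claim name fails with a `conflict` error.
kubectl delete persistentvolumeclaim volume-claim-name --namespace default
```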
   
   <details><summary>Commands</summary>
   
   ```bash
   ~/bin/heron submit kubernetes ~/.heron/examples/heron-api-examples.jar \
   --verbose \
   --config-property heron.kubernetes.pod.template.configmap.name=pod-templ-cf-map.pod-template.yaml \
   org.apache.heron.examples.api.AckingTopology acking \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.claimName=volume-claim-name \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.storageClassName=storage-class-name \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.accessModes=ReadWriteOnce,ReadOnlyMany \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.sizeLimit=555Gi \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.volumeMode=Block \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.path=path/to/mount \
   --config-property heron.kubernetes.volumes.persistentVolumeClaim.volumenameofchoice.subPath=sub/path/to/mount
   ```
   
   </details>
   
   <details><summary>Persistent Volume Claim</summary>
   
   ```bash
   Name:          volume-claim-name
   Namespace:     default
   StorageClass:  storage-class-name
   Status:        Pending
   Volume:        volumenameofchoice
   Labels:        <none>
   Annotations:   <none>
   Finalizers:    [kubernetes.io/pvc-protection]
   Capacity:      0
   Access Modes:  
   VolumeMode:    Block
   Used By:       acking-0
                  acking-1
                  acking-2
   Events:        <none>
   
   ```
   
   </details>
   
   <details><summary>Describe Pod</summary>
   
   ```bash
   Name:           acking-1
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-7fd8fb9fbd
                   statefulset.kubernetes.io/pod-name=acking-1
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
    Ports:       5555/TCP, 5556/UDP, 6001/TCP, 6002/TCP, 6003/TCP, 6004/TCP, 6005/TCP, 6006/TCP, 6007/TCP, 6008/TCP, 6009/TCP
    Host Ports:  0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
      ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--543424291072273987.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking32cfd6f1-a5f7-4472-a251-311397a61f3f --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:        (v1:status.podIP)
         POD_NAME:   acking-1 (v1:metadata.name)
         var_one:    variable one
         var_three:  variable three
         var_two:    variable two
       Mounts:
         /shared_volume from shared-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8scq5 (ro)
         path/to/mount from volumenameofchoice (rw,path="sub/path/to/mount")
     sidecar-container:
       Image:        alpine
       Port:         <none>
       Host Port:    <none>
       Environment:  <none>
       Mounts:
         /shared_volume from shared-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8scq5 (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     shared-volume:
       Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
       Medium:     
       SizeLimit:  <unset>
     volumenameofchoice:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
       ClaimName:  volume-claim-name
       ReadOnly:   false
     kube-api-access-8scq5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Burstable
   Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 10s
   Events:
     Type     Reason            Age                  From               Message
     ----     ------            ----                 ----               -------
  Warning  FailedScheduling  52s (x3 over 3m20s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
   ```
   
   </details>
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

