surahman edited a comment on pull request #3725:
URL: https://github.com/apache/incubator-heron/pull/3725#issuecomment-965805550


   > One potential issue or design consideration... it seems to only make a 
single PVC for the entire topology. I was assuming it would be a PVC per pod. 
Here is the naming convention from the [Spark 
code](https://github.com/apache/spark/blob/de59e01aa4853ef951da080c0d1908d53d133ebe/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStep.scala#L76).
 It seems they have an index `$i` on the end as they loop through?
   
   The design is such that a single PVC is generated per dynamically allocated/backed volume for the entire `StatefulSet`. All of the `executor` pods will have entries added for the `Volume Mounts`, and the pod template will receive an entry for the shared `Volume`. Since the `Volume` is shared across the `StatefulSet`, my assumption is that this is intended to support, amongst many other things, a shared disk between supporting containers, which makes perfect sense. You may allocate as many PVCs as you want for a topology, though.
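   For illustration, here is a minimal sketch of the objects this design produces, assuming the official Kubernetes Java client (`io.kubernetes:client-java`); the topology name, volume name, storage class, size, and mount path below are hypothetical placeholders:

```java
import java.util.Collections;

import io.kubernetes.client.custom.Quantity;
import io.kubernetes.client.openapi.models.V1ObjectMeta;
import io.kubernetes.client.openapi.models.V1PersistentVolumeClaim;
import io.kubernetes.client.openapi.models.V1PersistentVolumeClaimSpec;
import io.kubernetes.client.openapi.models.V1PersistentVolumeClaimVolumeSource;
import io.kubernetes.client.openapi.models.V1ResourceRequirements;
import io.kubernetes.client.openapi.models.V1Volume;
import io.kubernetes.client.openapi.models.V1VolumeMount;
import io.kubernetes.client.util.Yaml;

public class SharedPvcSketch {
  public static void main(String[] args) {
    // Hypothetical names; the real values come from the topology configs.
    String topologyName = "my-topology";
    String volumeName = "shared-vol";
    String claimName = String.format("ondemand-%s-%s", topologyName, volumeName);

    // A single PVC generated for the dynamically backed volume of the StatefulSet.
    V1PersistentVolumeClaim claim = new V1PersistentVolumeClaim()
        .metadata(new V1ObjectMeta().name(claimName))
        .spec(new V1PersistentVolumeClaimSpec()
            .accessModes(Collections.singletonList("ReadWriteOnce"))
            .storageClassName("standard")
            .resources(new V1ResourceRequirements()
                .requests(Collections.singletonMap("storage", Quantity.fromString("5Gi")))));

    // The shared Volume entry added once to the pod template, backed by the PVC above.
    V1Volume sharedVolume = new V1Volume()
        .name(volumeName)
        .persistentVolumeClaim(
            new V1PersistentVolumeClaimVolumeSource().claimName(claimName));

    // The Volume Mount entry added to each executor container (path is a placeholder).
    V1VolumeMount mount = new V1VolumeMount()
        .name(volumeName)
        .mountPath("/data/" + volumeName);

    System.out.println(Yaml.dump(claim));
    System.out.println(Yaml.dump(sharedVolume));
    System.out.println(Yaml.dump(mount));
  }
}
```

   Since every replica of the `StatefulSet` is stamped from the same pod template, all of them mount the same claim.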
   
   **_Edit:_** If we can use variables in the K8s Pod Template which K8s can resolve, specifically for the Pod number, then this can be done. Say we have the variable `$pod.number`; I would then only need to update the following two entries in the configs:
   `PersistentVolumeClaim.metadata.name`: `ondemand-[topology-name]-[volume-name]-$pod.number`
   `Volumes.name`: `[volume-name]-$pod.number`
   With this, we place the above field entries, containing the variables, into the Pod Templates. When the PVCs are deployed to K8s, we append the `$pod.number` suffix to the PVC names on the fly. This all depends on the ability to extract the Pod number via a variable.
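   As a rough sketch of that suffixing step, under the assumption above that the Pod number can be recovered at deploy time, the templated names could be resolved per pod on the fly; the `resolve` helper and the concrete names below are hypothetical:

```java
public class PodScopedNames {
  // Hypothetical helper: swap the "$pod.number" placeholder for a concrete pod ordinal.
  static String resolve(String templatedName, int podNumber) {
    return templatedName.replace("$pod.number", Integer.toString(podNumber));
  }

  public static void main(String[] args) {
    // Templated entries as they would appear in the configs, with [topology-name]
    // and [volume-name] substituted by hypothetical values.
    String pvcNameTemplate = "ondemand-my-topology-shared-vol-$pod.number";
    String volumeNameTemplate = "shared-vol-$pod.number";

    for (int pod = 0; pod < 3; pod++) {
      // e.g. pod 0 -> pvc=ondemand-my-topology-shared-vol-0, volume=shared-vol-0
      System.out.printf("pod %d: pvc=%s, volume=%s%n",
          pod, resolve(pvcNameTemplate, pod), resolve(volumeNameTemplate, pod));
    }
  }
}
```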
   

