nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-920537024


   Looking over the code, I think I failed to point out an important aspect of 
what Spark is doing with this feature; the code linked below is key to 
understanding the goal. When the Spark scheduler defines the Pod, it checks for 
a PodTemplate and, if none is configured, creates a default. Currently the 
Heron scheduler only ever starts from a default Pod that it builds from scratch.
   
   This [Spark 
code](https://github.com/apache/spark/blob/0494dc90af48ce7da0625485a4dc6917a244d580/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesDriverBuilder.scala#L30-L38)
 illustrates the branching logic that checks for an existing PodTemplate config 
item.
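   As a rough sketch of how the equivalent branch could look on the Heron side 
(plain Java; `loadPodFromTemplate`, the config key, and the string stand-in for 
a real `V1Pod` are all hypothetical names for illustration, not actual Heron or 
Kubernetes-client APIs):

```java
import java.util.Optional;

public class PodBuilderSketch {
  // Hypothetical config key; Heron would define its own.
  static final String POD_TEMPLATE_LOCATION_KEY = "heron.kubernetes.pod.template";

  // Stand-in for building a real V1Pod from the Kubernetes Java client.
  static String buildPod(Optional<String> podTemplateLocation) {
    return podTemplateLocation
        .map(PodBuilderSketch::loadPodFromTemplate) // template configured: start from it
        .orElseGet(PodBuilderSketch::defaultPod);   // otherwise: build the default from scratch
  }

  static String loadPodFromTemplate(String location) {
    // A real implementation would parse the template YAML into a pod spec.
    return "pod-from-template:" + location;
  }

  static String defaultPod() {
    return "default-pod";
  }

  public static void main(String[] args) {
    System.out.println(buildPod(Optional.of("template.yaml")));
    System.out.println(buildPod(Optional.empty()));
  }
}
```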
   
   This is the 
[method](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L96)
 Spark uses to actually load the template and build the pod spec.
   
   Mounting the ConfigMap as a volume is actually Spark-specific behaviour 
that we might not need in Heron. Spark has the concept of a driver pod, which 
in turn creates the executor pods; that is why Spark mounts the Executor's 
PodTemplate ConfigMap into the Driver pod. If I'm not mistaken, our version of 
`loadPodFromTemplate` could look up the PodTemplate ConfigMap directly, 
without mounting the ConfigMap into the pod.
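   A rough sketch of what that direct lookup could look like, with the API 
server modelled as a plain map of ConfigMap name to key/value data, and all 
names (`loadPodTemplate`, the `pod-template` data key) hypothetical rather 
than actual Heron or Kubernetes-client APIs; a real implementation would call 
something like the Kubernetes Java client's read-ConfigMap API instead:

```java
import java.util.Map;
import java.util.Optional;

public class ConfigMapLookupSketch {
  // Hypothetical key under which the PodTemplate YAML is stored in the ConfigMap.
  static final String POD_TEMPLATE_KEY = "pod-template";

  // Stand-in for fetching a ConfigMap from the API server; here the
  // "API server" is just a map from ConfigMap name to its data entries.
  static Optional<String> loadPodTemplate(Map<String, Map<String, String>> apiServer,
                                          String configMapName) {
    return Optional.ofNullable(apiServer.get(configMapName))
        .map(data -> data.get(POD_TEMPLATE_KEY));
  }

  public static void main(String[] args) {
    Map<String, Map<String, String>> apiServer =
        Map.of("executor-pod-template", Map.of(POD_TEMPLATE_KEY, "apiVersion: v1\nkind: Pod"));
    System.out.println(loadPodTemplate(apiServer, "executor-pod-template").orElse("<none>"));
    System.out.println(loadPodTemplate(apiServer, "missing").orElse("<none>"));
  }
}
```

The point of the sketch is that the scheduler already talks to the API server, 
so it can read the ConfigMap contents itself instead of relying on a volume 
mount inside the pod.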
   
   

