surahman commented on pull request #3725:
URL: https://github.com/apache/incubator-heron/pull/3725#issuecomment-962005835


   > Does this support mounting pre-existing PVCs and also dynamically creating 
the PVC? In the Spark implementation they had a special claim name `OnDemand` 
that would signify to create a dynamic PVC. With the current implementation, it 
seems we always create a new PVC. If so, this would preclude the ability to 
mount existing PVCs.
   
   Correct, this implementation always creates a new dynamic PVC, but adding the 
`OnDemand` functionality should be straightforward. It needs a CLI parameter for 
the `OnDemand` option and an addition to the `createPersistentVolumeClaims` 
routine that skips PVC generation for pre-existing claims, so no new PVC is 
submitted to the K8s cluster for them. The mounts and volumes would still be 
added as they are now.
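   The filtering step could look something like the sketch below. All names here 
(`PvcPlanner`, `VolumeClaim`, `claimsToCreate`) are hypothetical illustrations, 
not actual Heron identifiers; only the `OnDemand` sentinel follows the Spark 
convention mentioned above.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch only: class, record, and method names are hypothetical,
// not taken from the Heron codebase.
public class PvcPlanner {
  // Spark-style sentinel claim name meaning "dynamically provision a PVC".
  static final String ON_DEMAND_CLAIM_NAME = "OnDemand";

  record VolumeClaim(String claimName, String volumeName) { }

  /**
   * Returns only the claims that should actually be submitted to the
   * K8s cluster. Claims with any other name are assumed to reference a
   * pre-existing PVC: they are skipped here but would still get their
   * mounts and volumes added to the pod spec as before.
   */
  static List<VolumeClaim> claimsToCreate(List<VolumeClaim> requested) {
    return requested.stream()
        .filter(c -> ON_DEMAND_CLAIM_NAME.equalsIgnoreCase(c.claimName()))
        .collect(Collectors.toList());
  }
}
```

With this split, the deploy path only ever calls the K8s API to create PVCs for 
the `OnDemand` entries, which is what preserves the ability to mount existing 
claims.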
   
   > For the dynamic PVCs, I think it's ok that the PVC is not cleaned up 
dynamically when the analytic is killed. Perhaps that can be a config item on 
the scheduler that toggles the behavior of whether it should clean up any 
dynamically provisioned PVCs? We can always add this feature later. Not a 
blocker for merging this current Pull Request.
   
   I was looking into this, and we need a solid understanding of the various 
components within Heron before making changes that record which resources were 
created in the K8s cluster.
   
   I am also working on validating the input for these configs (names and 
required parameters), because debugging a submission with bad configs is a 
painful experience. On the other hand, perhaps we should not do any of this and 
it should instead be the user's responsibility to get the commands right; what 
do you think? These checks add complexity and execution overhead to the submit 
process.
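   One middle ground is to validate only what K8s itself will reject anyway, so 
bad names fail fast at submit time with a clear message instead of a later API 
server error. K8s requires PVC and volume names to be valid RFC 1123 DNS 
subdomains (lowercase alphanumerics, `-` and `.`, at most 253 characters). A 
minimal sketch, with hypothetical class and method names:

```java
import java.util.regex.Pattern;

// Sketch only: ConfigValidator and isValidResourceName are hypothetical
// names, not actual Heron code.
public class ConfigValidator {
  // One RFC 1123 DNS label: lowercase alphanumerics and '-', not at the edges.
  private static final Pattern DNS_LABEL =
      Pattern.compile("[a-z0-9]([-a-z0-9]*[a-z0-9])?");

  /**
   * Checks that a name is a valid RFC 1123 DNS subdomain, which is what
   * the K8s API server requires for PVC and volume object names.
   */
  static boolean isValidResourceName(String name) {
    if (name == null || name.isEmpty() || name.length() > 253) {
      return false;
    }
    // A subdomain is one or more dot-separated labels.
    for (String label : name.split("\\.", -1)) {
      if (!DNS_LABEL.matcher(label).matches()) {
        return false;
      }
    }
    return true;
  }
}
```

A check like this is cheap (a single regex pass at submit time) while still 
catching the most common misconfigurations before anything reaches the cluster.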


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


