Hello everyone, I am wondering if there is a way to mount a Kubernetes ConfigMap into a directory in a Spark executor running on Kubernetes. Poking around the docs, the only volume mounting options I can find are a PersistentVolumeClaim, a hostPath directory on the host machine, and an emptyDir volume. I am trying to pass in configuration files that alter the startup of the container for a specialized Spark executor image, and a ConfigMap seems like the natural Kubernetes solution for storing and accessing these files in the cluster. However, I have no way for the Spark executor to access them.
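For context, here is a sketch of how the volume types I did find are configured, via the `spark.kubernetes.executor.volumes.*` properties (the volume names `data` and `scratch`, the claim name, and the mount paths are placeholders of mine, not anything from my actual setup):

```shell
# Mount a PersistentVolumeClaim into each executor pod
# (volume name "data", claim name "my-claim", and the path are placeholders)
spark-submit \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=my-claim \
  ...

# Mount an emptyDir scratch volume
spark-submit \
  --conf spark.kubernetes.executor.volumes.emptyDir.scratch.mount.path=/tmp/scratch \
  ...
```

As far as I can tell, `configMap` is not an accepted volume type in these properties, which is exactly the gap I am running into.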
I would appreciate any help or insight the user base can offer on this issue. Thanks, Steven Stetzler