Hi Mridul,
If you are using Spark on Kubernetes, you can use an admission
controller to validate or mutate the configs set in the spark-defaults
ConfigMap. This approach only works for cluster deploy mode, though, not
for client mode.
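As a rough illustration of the idea, a mutating admission webhook could patch missing keys into the spark-defaults ConfigMap before it is persisted. This is only a sketch of the handler logic (the enforced keys, the ConfigMap layout, and the function name are all assumptions, not a real deployment):

```python
# Hypothetical mutating-webhook handler: given an AdmissionReview for a
# ConfigMap holding spark-defaults.conf, build a JSONPatch that appends
# any enforced defaults the admin wants but the user did not set.
import base64
import json

# Assumed admin-enforced defaults; purely illustrative.
ENFORCED_DEFAULTS = {
    "spark.eventLog.enabled": "true",
}

def mutate_spark_defaults(admission_review):
    """Return an AdmissionReview response patching spark-defaults.conf."""
    request = admission_review["request"]
    conf = request["object"].get("data", {}).get("spark-defaults.conf", "")
    lines = [l for l in conf.splitlines() if l.strip()]
    # Keys already present in the user's file win; we only add missing ones.
    present = {l.split(None, 1)[0] for l in lines}
    for key, value in ENFORCED_DEFAULTS.items():
        if key not in present:
            lines.append(f"{key} {value}")
    patch = [{
        "op": "replace",
        "path": "/data/spark-defaults.conf",
        "value": "\n".join(lines),
    }]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```

The same shape would sit behind an HTTPS endpoint registered via a MutatingWebhookConfiguration scoped to the relevant ConfigMaps.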
Regards,
Shrikant
On Fri, 12 Aug 2022 at 12:26 AM, Tom Grave wrote:
A few years ago, when I was doing more deployment management, I kicked around
the idea of having different types of configs, or different ways to specify
them. One of the problems at the time, though, was that users specifying
their own properties file would not pick up the spark-defaults.
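The problem above can be sketched as follows; this is a minimal model of the precedence behavior, not Spark's actual implementation, and the function name is illustrative:

```python
# Illustrative sketch of the behavior described: an explicit
# --properties-file replaces conf/spark-defaults.conf entirely
# rather than layering on top of it, so admin defaults are lost.
def effective_defaults(user_properties, spark_defaults):
    """Return the defaults spark-submit would start from."""
    if user_properties is not None:
        # --properties-file given: spark-defaults.conf is skipped.
        return dict(user_properties)
    return dict(spark_defaults)
```

Under this model, any key set only in spark-defaults.conf silently disappears the moment a user supplies their own properties file.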
Hi,
Wenchen, it would be great if you could chime in with your thoughts, given
the feedback you originally had on the PR.
It would also be great to hear feedback from others on this, particularly
folks managing Spark deployments: how is this mitigated/avoided in your
case, and are there any other pain points with c
I find there's substantial value in being able to set defaults, and I think
we can see that the community finds value in it as well, given the handful
of "default"-like configs that exist today as mentioned in Shardul's email.
The mismatch of conventions used today (suffix with ".defaultList", chan