Dear Spark Users,
We have run into an issue: with Spark 3.3.2, auto scaling with
STS works fine, but with 3.4.2 or 3.5.2 executors are being left
behind and not scaling down.
The driver makes a call to remove the executor, but some (not all)
executors never get removed.
Has anyone else run into this?
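For context, executor scale-down on K8s is typically governed by dynamic allocation settings along these lines (a sketch only; the values and the shuffle-tracking choice are assumptions, not taken from the report above):

```properties
# Dynamic allocation basics
spark.dynamicAllocation.enabled              true
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         20
# Idle executors become candidates for removal after this timeout
spark.dynamicAllocation.executorIdleTimeout  60s
# On K8s (no external shuffle service), shuffle tracking keeps executors
# alive while they still hold shuffle data; a timeout bounds that hold
spark.dynamicAllocation.shuffleTracking.enabled  true
spark.dynamicAllocation.shuffleTracking.timeout  300s
```

If shuffle tracking is enabled without a timeout, executors holding shuffle state can legitimately stay up, which can look like a failure to scale down.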
Hi Jon,
Using IAM, as Jorn suggested, is the best approach.
We recently moved our Spark workloads from HDP to Spark on K8s and are
utilizing IAM.
It will save you from secret-management headaches, gives you a lot
more flexibility in access control, and makes it easy to allow access to
multiple S3 buckets.
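On EKS, the IAM approach usually means IAM Roles for Service Accounts: the pod gets a web-identity token, and S3A picks up credentials from it, so no keys appear in the Spark config. A minimal sketch (the provider class is the standard AWS SDK one; everything else about your cluster is assumed):

```properties
# S3A resolves credentials from the pod's web-identity token (IRSA);
# no access key / secret key in the config
spark.hadoop.fs.s3a.aws.credentials.provider  com.amazonaws.auth.WebIdentityTokenCredentialsProvider
# The driver/executor pods must run under a service account annotated
# with the IAM role, e.g. (assumption):
# eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<spark-role>
spark.kubernetes.authenticate.driver.serviceAccountName  spark
```

Bucket-level access control then lives entirely in the IAM role policy, which is what makes multi-bucket access straightforward.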
Greetings Everyone!
We need to ship Spark (driver and executor) logs (not Spark event
logs) from K8s to a cloud bucket (ADLS/S3).
Using Fluent Bit we are able to ship the log files, but only to one single
path, container/logs/.
This will cause a huge number of files in a single folder and will create
problems. Any help would be really
appreciated.
Regards
Jayabindu Singh
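Fluent Bit's S3 output can split the destination path per pod and per day using the record tag and timestamp via `s3_key_format`, which avoids piling everything into one folder. A sketch, assuming the default `kube.var.log.containers.<pod>...` tag layout and a placeholder bucket:

```ini
[OUTPUT]
    Name            s3
    Match           kube.*
    bucket          my-spark-logs            # assumption: your bucket
    region          us-east-1                # assumption: your region
    # $TAG[4] is the pod-name segment of the kubernetes tag;
    # %Y/%m/%d partitions by date, $UUID keeps object names unique
    s3_key_format   /spark/logs/$TAG[4]/%Y/%m/%d/$UUID.log
    total_file_size 50M
    upload_timeout  10m
```

For ADLS there is no native Fluent Bit output, so a common pattern is to ship to an intermediate store or use the Azure Blob output with a similar key template.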
Here you go. Please update the values for your specific bucket.
spark-defaults.conf - to make sure event logs go to ADLS
spark.eventLog.enabled true
spark.eventLog.dir
abfss://containen...@storageaccount.dfs.core.windows.net/tenant/spark/eventlogs
spark.h
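Writing event logs to an abfss:// path also requires the ABFS connector to authenticate to the storage account. One common setup uses a service principal via OAuth; a sketch, where the account name and all credential values are placeholders you would substitute:

```properties
# ABFS auth via Azure AD service principal (placeholders throughout)
spark.hadoop.fs.azure.account.auth.type.storageaccount.dfs.core.windows.net  OAuth
spark.hadoop.fs.azure.account.oauth.provider.type.storageaccount.dfs.core.windows.net  org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.oauth2.client.id.storageaccount.dfs.core.windows.net        <app-client-id>
spark.hadoop.fs.azure.account.oauth2.client.secret.storageaccount.dfs.core.windows.net    <app-client-secret>
spark.hadoop.fs.azure.account.oauth2.client.endpoint.storageaccount.dfs.core.windows.net  https://login.microsoftonline.com/<tenant-id>/oauth2/token
```

Managed identity is the other common option and avoids storing the client secret in the config at all.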