Setting fs.s3a.aws.credentials.provider through a connect server.

2023-11-17 Thread Leandro Martelli
Hi all! Has anyone been through this already? I have Spark Docker images that are used in two different environments, and each one requires a different credentials provider for s3a. That parameter is the only difference between them. When passed via --conf, it works as expected. When --conf is
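For context, a minimal sketch of how the provider is typically switched per environment on the command line. The provider class names below are standard Hadoop/AWS SDK ones, but the application file and the choice of provider per environment are illustrative assumptions, not from the original message:

```shell
# Environment A: static access keys (illustrative)
# Hadoop properties are passed through spark-submit with the spark.hadoop. prefix
spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider \
  app.py

# Environment B: EC2 instance-profile credentials (illustrative)
spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  app.py
```

Because only this one --conf value differs, the same image can serve both environments as long as the flag is supplied at submit time.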

Re: Spark-submit without access to HDFS

2023-11-17 Thread Mich Talebzadeh
Hi, how are you submitting your Spark job from your client? Your files can either be on HDFS or on an HCFS such as gs, s3, etc. With reference to --py-files hdfs://yarn-master-url hdfs://foo.py', I assume you want your spark-submit --verbose \ --deploy-mode cluster \
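A sketch of what such a cluster-mode submission could look like, assuming the application and its dependencies are staged on HDFS first. The paths, dependency archive, and master setting here are illustrative assumptions filling in the truncated command:

```shell
# Stage the application and its Python dependencies on HDFS (paths illustrative)
hdfs dfs -put foo.py /apps/foo.py
hdfs dfs -put deps.zip /apps/deps.zip

# Submit in cluster mode: the driver runs inside the cluster and fetches
# the files from HDFS itself, so the submitting client needs no local copies
spark-submit --verbose \
  --deploy-mode cluster \
  --master yarn \
  --py-files hdfs:///apps/deps.zip \
  hdfs:///apps/foo.py
```

In cluster mode the client only needs connectivity to the resource manager, not to HDFS data nodes, which is the usual way around a client without HDFS access.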