Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14601
@steveloughran Please feel free to reopen this PR. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14601
I know this hasn't been updated, but it is still important. I can take it
on if all it needs is a test case
Github user agsachin commented on the issue:
https://github.com/apache/spark/pull/14601
Thanks @jiangxb1987, I will add this test case by tomorrow and will update
the PR with results.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14601
gentle ping @agsachin.
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14601
Testing should not be too hard. Here's my *untested* attempt:
```scala
// Build a SparkConf carrying a bare fs.* option, then check it lands in the Hadoop conf.
val sconf = new SparkConf(false)
sconf.set("fs.example.value", "true")
val conf = new Configuration(false)
SparkHadoopUtil.get.appendS3AndSparkHadoopConfigurations(sconf, conf)
assert(conf.get("fs.example.value") == "true")
```
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14601
test this please
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14601
add to whitelist
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14601
retest this please
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14601
@agsachin Are you still working on this? If so, would you please update the
description and provide some snapshots to demo the behavior before & after
the changes?
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14601
1. It's good to have some tests.
2. I note that `appendS3AndSparkHadoopConfigurations()` has a weakness in
how it propagates env vars: no propagation of the session environment
variable `AWS_SESSION_TOKEN`.
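A minimal sketch of the env-var propagation being discussed, written as a pure function so it is easy to test. The helper name and the mapping onto `fs.s3a.*` keys are illustrative, not Spark's actual code; the point is that the session token needs a row in the mapping just like the access and secret keys:

```scala
// Illustrative only: maps AWS credential env vars (including the session
// token mentioned above) onto s3a configuration keys. Vars absent from the
// environment are simply skipped.
def awsEnvToHadoopConf(env: Map[String, String]): Map[String, String] = {
  val mapping = Map(
    "AWS_ACCESS_KEY_ID"     -> "fs.s3a.access.key",
    "AWS_SECRET_ACCESS_KEY" -> "fs.s3a.secret.key",
    "AWS_SESSION_TOKEN"     -> "fs.s3a.session.token"
  )
  for ((envKey, confKey) <- mapping; value <- env.get(envKey))
    yield confKey -> value
}
```

In real code the input would be `sys.env` and the output would be applied to the Hadoop `Configuration`.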
Github user agsachin commented on the issue:
https://github.com/apache/spark/pull/14601
@steveloughran I have updated the pull request to fs.*
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14601
spark.hadoop.fs.* would work.
The Azure Data Lake FS (not yet shipped in ASF code) has, for reasons I
don't know and have only just noticed, adopted "dfs.adl" as its prefix.
Github user agsachin commented on the issue:
https://github.com/apache/spark/pull/14601
@lresende can we go ahead with key.startsWith("fs."), so that we don't need
to check so many conditions?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14601
(gentle ping @agsachin)
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14601
@agsachin Are you planning to address these updates on this PR? It would
be good to have this as part of Spark, as it affects multiple usage scenarios on
cloud platforms and other cases as well.
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14601
Could an automated test be done here? Propagation can be tested with a
function run on the executor (such as a map) which fails if the required
properties are missing.
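The executor-side check described above could look roughly like the following. Everything here (the function name, the idea of running it inside a `map`) is a sketch of the testing approach, not code from the PR; only the fail-fast helper itself is shown, since that part is plain Scala:

```scala
// Sketch: fail fast if any required property is absent from a conf map.
// In a real Spark test this would run inside rdd.map { _ => checkRequired(...) }
// so that a failure surfaces on the executor, proving propagation happened.
def checkRequired(conf: Map[String, String], required: Seq[String]): Unit = {
  val missing = required.filterNot(conf.contains)
  require(missing.isEmpty, s"Missing required properties: ${missing.mkString(", ")}")
}
```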
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14601
I'd like to propose that the list of filesystem properties to propagate is
actually defined as a list in a Spark property; the default could be "fs.s3a,
fs.s3n, fs.s3, fs.swift, fs.wasb".
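The prefix-list idea could be sketched like this. The default prefix string is taken from the comment above; the function name and signature are hypothetical, standing in for wherever Spark would read the property and filter the conf:

```scala
// Sketch: keep only conf entries whose key starts with one of the configured
// filesystem prefixes, given as a comma-separated list such as
// "fs.s3a, fs.s3n, fs.s3, fs.swift, fs.wasb".
def filterFsProps(conf: Map[String, String], prefixCsv: String): Map[String, String] = {
  val prefixes = prefixCsv.split(",").map(_.trim).filter(_.nonEmpty)
  conf.filter { case (key, _) => prefixes.exists(p => key.startsWith(p)) }
}
```

This keeps the propagation policy configurable without hard-coding a growing list of conditions in the code.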