Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19103
Looks good (since the master PR didn't merge). Merging to 2.2. @redsanket
please close the PR manually.
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19103
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19103
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81522/
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19103
**[Test build #81522 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81522/testReport)** for PR 19103 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19103
**[Test build #81522 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81522/testReport)** for PR 19103 at commit
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/19103
Jenkins, test this please
---
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/19103
@vanzin @tgravescs sorry for the delay; I will put up a PR against master, and we can move further discussion of the suggested improvements there. This PR was put up just as a workaround.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/19103
Yes, the user stated he will be opening one for master, but that is quite a bit different due to the credentials stuff moving around, so I think this one will have to stay open anyway. But I
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19103
BTW Oozie can also disable the HDFS provider (`spark.yarn.security.credentials.hadoopfs.enabled=false`, I think). But it would be nice if Spark was able to do that by itself if the current UGI does
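As a sketch of what that Oozie-side workaround could look like (the `spark.yarn.security.credentials.<service>.enabled` keys are as documented for Spark 2.2 on YARN; the application class and jar are placeholders, and which providers to disable depends on the deployment):

```shell
# The launcher already holds the delegation tokens the app needs, so
# turn off Spark's built-in credential providers instead of having it
# try to fetch fresh ones.
spark-submit \
  --master yarn \
  --conf spark.yarn.security.credentials.hadoopfs.enabled=false \
  --conf spark.yarn.security.credentials.hive.enabled=false \
  --conf spark.yarn.security.credentials.hbase.enabled=false \
  --class com.example.MyApp myapp.jar
```

In Spark 2.2 these `spark.yarn.security.credentials.<service>.enabled` keys supersede the older `spark.yarn.security.tokens.<service>.enabled` form that comes up elsewhere in the thread.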
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/19103
Hive and HBase token fetch can be turned off (i.e. `spark.yarn.security.tokens.hive.enabled=false`). I thought they didn't work the same as HDFS core as far as not fetching one if you already have one, but would
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19103
I don't know; perhaps they'll fail, which is why I think the correct
behavior would be to skip this credential manager code altogether if a TGT
doesn't exist.
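A hedged sketch of that precondition from a launcher's point of view: `klist -s` exits zero only when the caller has a valid Kerberos ticket cache, so a wrapper script could branch on it before deciding whether fresh token fetching can work at all (an illustration, not what the patch itself does):

```shell
#!/bin/sh
# klist -s is silent; exit status 0 means a valid TGT exists.
if klist -s 2>/dev/null; then
  echo "TGT present: Spark can obtain fresh delegation tokens"
else
  echo "no TGT: rely on pre-fetched tokens and disable the providers"
fi
```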
But that would at least be the
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19103
@vanzin From my understanding it seems like a workaround to avoid issuing new HDFS tokens (since the user's credentials already have HDFS tokens). But how to handle the HBase/Hive thing without
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19103
That's when using principal / keytab and generating new tokens; it's
separate from the code path being changed here. The initial tokens are obtained
in `Client.scala` with the current user's
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19103
@tgravescs, I think it is in `AMCredentialRenewer` that we explicitly create a new `Credentials` object every time new tokens are issued.
```
// HACK:
// HDFS will not issue new
```
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19103
In general it feels like this code shouldn't even be running if the current
user doesn't have a TGT to start with.
But this patch restores the behavior from Spark 2.1, so if the PR is opened
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19103
> Oozie client gets the necessary tokens the application needs before
launching. It passes those tokens along to the oozie launcher job (MR job)
which will then actually call the Spark client to
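For context, the hand-off described above typically relies on Hadoop's standard token-file mechanism: the launcher writes the tokens it fetched to a file, and child processes pick them up through the `HADOOP_TOKEN_FILE_LOCATION` environment variable (a general Hadoop convention; the path, class, and jar below are placeholders, and the exact Oozie plumbing may differ):

```shell
# Tokens fetched up front are passed to the child process via Hadoop's
# token-file environment variable; UGI loads them automatically.
export HADOOP_TOKEN_FILE_LOCATION=/tmp/container_tokens
spark-submit --master yarn --class com.example.MyApp myapp.jar
```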
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19103
This needs to be opened against master first.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19103
cc @vanzin @mgummelt
---