Github user arinconstrio commented on the issue:
https://github.com/apache/spark/pull/16788
We are going to continue working on our solution, which involves much more
than a Mesos-only feature, so I am closing this PR and will create a new one in
the near future.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16788
Hello all, if you're not going to update this PR then it should be closed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/16788
>Trying to put it differently: if Spark had its own, secure method for
distributing the initial set of delegation tokens needed by the executors (+ AM
in case of YARN), then the YARN backend would
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16788
> Ah, I didn't realize that the --keytab parameter was expected to be an
HDFS location. Thanks.
It's not. It's just how the YARN module chose to distribute the keytab.
But the main p
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/16788
Ah, I didn't realize that the --keytab parameter was expected to be an HDFS
location. Thanks.
---
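The --keytab staging that the next comments discuss can be sketched as a toy model. This is not Spark's actual code; `stage_keytab` and all paths are hypothetical, illustrating only the idea that the backend copies the user's local keytab into the application's staging directory, so the user never supplies an HDFS location themselves.

```python
# Hypothetical sketch (not Spark's real implementation): the YARN backend
# copies the local keytab given via --keytab into the job's staging
# directory, and it is that staged copy the AM later reads.
import shutil
import tempfile
from pathlib import Path

def stage_keytab(local_keytab: str, staging_dir: str) -> str:
    """Copy the local keytab into the staging dir; return the staged path."""
    dest = Path(staging_dir) / Path(local_keytab).name
    shutil.copyfile(local_keytab, dest)
    return str(dest)

# Demo with throwaway temp directories standing in for local disk and HDFS.
src_dir = tempfile.mkdtemp()
staging = tempfile.mkdtemp()
local = Path(src_dir) / "user.keytab"
local.write_bytes(b"\x05\x02keytab-bytes")
staged = stage_keytab(str(local), staging)
```

The point of the model: "--keytab is an HDFS location" is not part of the interface; staging is an internal choice of the YARN module.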
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16788
> In order to renew delegation tokens, the ApplicationMaster needs access
to the keytab, right?
Yes.
> So why must the driver send delegation tokens to the ApplicationMaster,
if th
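The renewal point above can be sketched as a toy model (all class and function names here are hypothetical, not Hadoop's API): only the party holding the keytab can mint fresh delegation tokens, so the renewal loop has to run wherever the keytab lives, which on YARN is the ApplicationMaster.

```python
# Toy model of delegation-token renewal. KeytabLogin stands in for a
# keytab-based Kerberos login; each cycle mints a fresh token and pushes
# it to every executor, which is why the renewer needs the keytab.
import itertools

class KeytabLogin:
    """Hypothetical stand-in for a UserGroupInformation-style login."""
    def __init__(self, principal):
        self.principal = principal
        self._serial = itertools.count(1)

    def obtain_delegation_token(self):
        # Each renewal yields a token with a later expiry (modeled as a
        # fresh serial number).
        return f"token-{self.principal}-{next(self._serial)}"

def renewal_cycle(login, executors, rounds):
    """Each round: mint a new token and broadcast it to all executors."""
    for _ in range(rounds):
        token = login.obtain_delegation_token()
        for ex in executors:
            ex["current_token"] = token

executors = [{"id": 0}, {"id": 1}]
renewal_cycle(KeytabLogin("spark/host@REALM"), executors, rounds=3)
```

Without the keytab, the renewer could only re-use the tokens it was handed at submission, which eventually expire.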
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/16788
In order to renew delegation tokens, the `ApplicationMaster` needs access
to the keytab, right?
https://github.com/apache/spark/blob/master/resource-managers/yarn/src/main/scala/org/apache/spark/de
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16788
> I take it you mean that the driver logs in via Kerberos, and submits the
resulting token (TGT?) via amContainer.setTokens
No. `amContainer.setTokens` is used to distribute delegation tokens
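The distinction vanzin draws can be sketched with a toy model (not the real YARN API; `AMContainerContext` and the serialization helpers are hypothetical): the client attaches already-obtained delegation tokens to the AM's launch context, so the AM receives tokens without ever seeing the TGT or keytab.

```python
# Toy model of what `amContainer.setTokens` accomplishes: the client
# serializes a map of service -> delegation token into an opaque blob on
# the container launch context, and the AM recovers it at startup.
import base64
import json

def serialize_tokens(tokens: dict) -> str:
    """Pack the token map into an opaque blob, as the client would."""
    return base64.b64encode(json.dumps(tokens).encode()).decode()

def deserialize_tokens(blob: str) -> dict:
    """What the AM does on startup: recover tokens from its context."""
    return json.loads(base64.b64decode(blob.encode()).decode())

class AMContainerContext:
    """Hypothetical stand-in for a YARN ContainerLaunchContext."""
    def __init__(self):
        self.tokens = None

    def set_tokens(self, blob: str):
        self.tokens = blob

# Client side: obtain delegation tokens once, attach them to the AM.
ctx = AMContainerContext()
ctx.set_tokens(serialize_tokens({"hdfs": "HDFS_DELEGATION_TOKEN_42"}))

# AM side: delegation tokens are available; no TGT was ever shipped.
recovered = deserialize_tokens(ctx.tokens)
```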
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/16788
@vanzin Consolidating a Mesos and YARN kerberos solution sounds nice, but
it does worry me. I'm worried it's going to a) be quite a chore to factor out
the YARN Kerberos code, and more importantly
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/16788
@vanzin When you say "distributing the principal's credentials", I take it
you mean that the driver logs in via Kerberos, and submits the resulting token
(TGT?) via `amContainer.setTokens`. That's
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16788
I took a very quick look at this; Mridul and Saisai have already raised
good questions about this.
But I'm a little worried that this is creating yet another way of dealing
with Kerberos tha
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/16788
Shipping tokens with tasks may have a big issue, as discussed with @mridulm
before. Some Spark applications have long-running out-of-band operations that
communicate with HDFS, like Spark Streaming's WAL,
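The problem described above can be illustrated with a toy model (all names hypothetical): a token shipped with a task has a fixed expiry, but an out-of-band component such as a streaming WAL writer keeps touching HDFS long after the task that delivered the token has finished.

```python
# Toy illustration of why per-task token shipping breaks long-running
# out-of-band HDFS work: the writer outlives the token it was given.
class Token:
    def __init__(self, expires_at):
        self.expires_at = expires_at

    def valid_at(self, now):
        return now < self.expires_at

def wal_write(token, now):
    """An out-of-band HDFS append guarded by the token's validity."""
    if not token.valid_at(now):
        raise PermissionError("delegation token expired")
    return "appended"

task_token = Token(expires_at=100)       # shipped with the task at t=0
during_task = wal_write(task_token, now=50)   # while the task runs: fine

# Much later, the WAL writer is still running but the token is stale.
try:
    wal_write(task_token, now=500)
    expired = False
except PermissionError:
    expired = True
```

A renewal mechanism independent of task lifetimes avoids this failure mode.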
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/16788
@tgravescs, @vanzin - this PR for Mesos fundamentally changes how Spark handles
Kerberos tokens; it would be good to have your views.
+CC @jerryshao to also look at the PR, since you have work
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/16788
A few high-level issues:
- The title of this PR seems misleading. This is about Kerberos support in
general, not just proxy user support.
- There's no delegation token renewer. Execut