Ah, man, there are a few known issues with KMS delegation tokens. The main
one we've run into is HADOOP-14445, but it's only fixed in new versions of
Hadoop. I wouldn't expect you guys to be running those, but if you are, it
would be good to know.

In our forks we added a hack to work around that issue, maybe you can try
it out:
https://github.com/cloudera/spark/commit/108c1312d3a2b52090cb2713e7f8d68b9a0be8b1#diff-585a75e78c688c892d640281cfc56fed
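
In case it helps, here's a rough, untested sketch of the general idea behind
that kind of workaround (I'm not claiming this is what the commit does):
re-login from the keytab and ask the KMS for a brand-new delegation token via
the Hadoop KeyProvider API. The KMS URI below is a made-up placeholder:

    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.crypto.key.{KeyProviderDelegationTokenExtension, KeyProviderFactory}
    import org.apache.hadoop.security.{Credentials, UserGroupInformation}

    // Placeholder KMS address -- substitute your cluster's
    // hadoop.security.key.provider.path value.
    val conf = new Configuration()
    val provider = KeyProviderFactory.get(new URI("kms://http@kms-host:16000/kms"), conf)

    // Re-login from the keytab, then request a brand-new kms-dt instead of
    // trying to renew one that has already hit its max lifetime.
    UserGroupInformation.getLoginUser.checkTGTAndReloginFromKeytab()
    val creds = new Credentials()
    KeyProviderDelegationTokenExtension
      .createKeyProviderDelegationTokenExtension(provider)
      .addDelegationTokens(UserGroupInformation.getLoginUser.getShortUserName, creds)

    // Make the fresh token visible in this JVM; shipping it to already
    // running executors is the part Spark normally handles for you.
    UserGroupInformation.getCurrentUser.addCredentials(creds)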


On Thu, Jan 3, 2019 at 10:12 AM Paolo Platter <paolo.plat...@agilelab.it>
wrote:

> Hi,
>
>
>
> Spark's default behaviour is to request a brand-new token every 24
> hours rather than renewing existing delegation tokens, and that is the
> better approach for long-running applications like streaming ones.
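>
> As a quick sanity check, something like this (just a diagnostic sketch,
> nothing Spark-specific) shows which tokens the driver currently holds:
>
>     import org.apache.hadoop.security.UserGroupInformation
>     import scala.collection.JavaConverters._
>
>     // List the delegation tokens attached to the current user; with a
>     // working keytab login you should see fresh HDFS_DELEGATION_TOKEN
>     // (and, on encrypted clusters, kms-dt) entries roughly every 24h.
>     val ugi = UserGroupInformation.getCurrentUser
>     ugi.getTokens.asScala.foreach(t => println(s"${t.getKind} @ ${t.getService}"))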
>
>
>
> In our use case, using the keytab and principal works fine for
> hdfs_delegation_token but NOT for “kms-dt”.
>
>
>
> Does anyone know why this is happening? Any suggestions to make it work
> with KMS?
>
>
>
> Thanks
>
>
>
>
>
>
>
>
> *Paolo Platter*
>
> *CTO*
>
> E-mail:        paolo.plat...@agilelab.it
>
> Web Site:   www.agilelab.it
>
>
>
>
> ------------------------------
> *From:* Marcelo Vanzin <van...@cloudera.com.INVALID>
> *Sent:* Thursday, January 3, 2019 7:03:22 PM
> *To:* alinazem...@gmail.com
> *Cc:* user
> *Subject:* Re: How to reissue a delegated token after max lifetime passes
> for a spark streaming application on a Kerberized cluster
>
> If you are using the principal / keytab params, Spark should create new
> tokens as needed. If it isn't, something else is going wrong, and only
> looking at the app's full logs would help.
> On Wed, Jan 2, 2019 at 5:09 PM Ali Nazemian <alinazem...@gmail.com> wrote:
> >
> > Hi,
> >
> > We are using a headless keytab to run our long-running Spark streaming
> > application. The token is renewed automatically every day until it hits
> > the max-lifetime limit. The problem is that the token expires after the
> > max lifetime (7 days) and we then need to restart the job. Is there any
> > way to re-issue the token and pass it to a job that is already running?
> > It doesn't feel right to have to restart the job every 7 days just
> > because of token expiry.
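> >
> > For reference, those two windows come from the NameNode side; here is a
> > minimal sketch reading them (the config names are the standard HDFS
> > ones, values in milliseconds, defaults shown as fallbacks):
> >
> >     import org.apache.hadoop.conf.Configuration
> >
> >     // 1-day renew interval and 7-day hard cap for HDFS delegation tokens.
> >     val conf = new Configuration()
> >     val renew = conf.getLong("dfs.namenode.delegation.token.renew-interval", 86400000L)
> >     val maxLife = conf.getLong("dfs.namenode.delegation.token.max-lifetime", 604800000L)
> >     println(s"renew-interval=$renew ms, max-lifetime=$maxLife ms")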
> >
> > P.S: We use "--keytab /path/to/the/headless-keytab", "--principal
> > principalNameAsPerTheKeytab" and "--conf
> > spark.hadoop.fs.hdfs.impl.disable.cache=true" as the arguments for the
> > spark-submit command.
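> >
> > (For what it's worth, disabling the FileSystem cache just means every
> > FileSystem.get call builds a fresh client that picks up whatever tokens
> > the current user holds; rough sketch, the NameNode address is made up:)
> >
> >     import java.net.URI
> >     import org.apache.hadoop.conf.Configuration
> >     import org.apache.hadoop.fs.FileSystem
> >
> >     // With fs.hdfs.impl.disable.cache=true, get() stops returning the
> >     // shared cached instance, so new tokens are picked up on next use.
> >     val conf = new Configuration()
> >     conf.setBoolean("fs.hdfs.impl.disable.cache", true)
> >     val fs1 = FileSystem.get(new URI("hdfs://namenode:8020"), conf)
> >     val fs2 = FileSystem.get(new URI("hdfs://namenode:8020"), conf)
> >     assert(fs1 ne fs2)  // distinct clients, not one cached instance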
> >
> > Thanks,
> > Ali
>
>
>
> --
> Marcelo
>
>
>

-- 
Marcelo
