Did you try disabling ticket cache usage in the JAAS Client entry? For example:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="xxx""
  principal="dddd"
  storeKey=true
  useTicketCache=false
  serviceName="zookeeper";
};
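
If you go that route, the JVM also has to be pointed at that JAAS file. In NiFi that is normally done with an extra java.arg entry in conf/bootstrap.conf; a minimal sketch (the arg number and path below are just placeholders, adjust to your install):

# conf/bootstrap.conf -- tell the JVM where the JAAS config above lives
# (arg number and file path are examples only)
java.arg.16=-Djava.security.auth.login.config=/opt/nifi/conf/jaas.conf

It can also be worth running klist -kt against the keytab beforehand to confirm the principal in the JAAS entry actually matches what the keytab contains, then restarting NiFi so the property takes effect.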

On Wed, 10 Jan 2018 at 12:57 Schneider, Jonathan <j...@sclhs.net> wrote:

> For reference, the specific error I get is:
>
> 2018-01-10 09:55:55,988 ERROR [Timer-Driven Process Thread-10]
> o.apache.nifi.processors.hive.PutHiveQL
> PutHiveQL[id=3a4f82fd-015f-1000-0000-00005aa22fb2] Failed to update Hive
> for
> StandardFlowFileRecord[uuid=7ba71cdb-7557-4eab-bd2d-bd89add1c73f,claim=StandardContentClaim
> [resourceClaim=StandardResourceClaim[id=1515205062419-12378,
> container=default, section=90], offset=342160,
> length=247],offset=0,name=vp_employmentstat.orc,size=247] due to
> java.sql.SQLException: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException; it is possible that
> retrying the operation will succeed, so routing to retry:
> java.sql.SQLException: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException
> java.sql.SQLException: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException
>         at
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:308)
>         at
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:241)
>         at
> org.apache.hive.jdbc.HivePreparedStatement.execute(HivePreparedStatement.java:98)
>         at
> org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
>         at
> org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
>         at
> org.apache.nifi.processors.hive.PutHiveQL.lambda$null$3(PutHiveQL.java:218)
>         at
> org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
>         at
> org.apache.nifi.processors.hive.PutHiveQL.lambda$new$4(PutHiveQL.java:199)
>         at
> org.apache.nifi.processor.util.pattern.Put.putFlowFiles(Put.java:59)
>         at
> org.apache.nifi.processor.util.pattern.Put.onTrigger(Put.java:101)
>         at
> org.apache.nifi.processors.hive.PutHiveQL.lambda$onTrigger$6(PutHiveQL.java:255)
>         at
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
>         at
> org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
>         at
> org.apache.nifi.processors.hive.PutHiveQL.onTrigger(PutHiveQL.java:255)
>         at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1118)
>         at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>         at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>         at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException:
> org.apache.http.client.ClientProtocolException
>         at
> org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:297)
>         at
> org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
>         at
> org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
>         at
> org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
>         at
> org.apache.hive.service.cli.thrift.TCLIService$Client.send_ExecuteStatement(TCLIService.java:223)
>         at
> org.apache.hive.service.cli.thrift.TCLIService$Client.ExecuteStatement(TCLIService.java:215)
>         at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1374)
>         at com.sun.proxy.$Proxy174.ExecuteStatement(Unknown Source)
>         at
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:299)
>         ... 24 common frames omitted
> Caused by: org.apache.http.client.ClientProtocolException: null
>         at
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)
>         at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:118)
>         at
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>         at
> org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:251)
>         ... 35 common frames omitted
> Caused by: org.apache.http.HttpException: null
>         at
> org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:86)
>         at
> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
>         at
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:183)
>         at
> org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
>         at
> org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:85)
>         at
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
>         at
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>         ... 38 common frames omitted
> Caused by: org.apache.http.HttpException: null
>         at
> org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:68)
>         at
> org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:74)
>         ... 44 common frames omitted
> Caused by: java.lang.reflect.UndeclaredThrowableException: null
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1884)
>         at
> org.apache.hive.service.auth.HttpAuthUtils.getKerberosServiceTicket(HttpAuthUtils.java:83)
>         at
> org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:62)
>         ... 45 common frames omitted
> Caused by: org.ietf.jgss.GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)
>         at
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>         at
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>         at
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>         at
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>         at
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
>         at
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
>         at
> org.apache.hive.service.auth.HttpAuthUtils$HttpKerberosClientAction.run(HttpAuthUtils.java:183)
>         at
> org.apache.hive.service.auth.HttpAuthUtils$HttpKerberosClientAction.run(HttpAuthUtils.java:151)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>         ... 47 common frames omitted
>
> Jonathan Schneider
> Hadoop/UNIX Administrator, STSC
> SCL Health
> 17501 W. 98th St, Pillars 25-33
> Lenexa, KS  66219
> P: 913.895.2999
> j...@sclhs.net
> www.sclhealthsystem.org
>
>
>
>
> -----Original Message-----
> From: Joe Witt [mailto:joe.w...@gmail.com]
> Sent: Wednesday, January 10, 2018 9:55 AM
> To: users@nifi.apache.org
> Subject: Re: [EXTERNAL EMAIL]Re: Kerberos hive failure to renew tickets
>
> Cool.  This is probably fixed in Apache NiFi 1.5.0 but please share the
> stack dump when it is stuck.
>
> bin/nifi.sh dump
>
> Then send us the logs dir content.
>
> Thanks
>
> On Wed, Jan 10, 2018 at 8:54 AM, Schneider, Jonathan <j...@sclhs.net>
> wrote:
> > Joe,
> >
> > I can reproduce this easily.  Set up a connection to a kerberized Hive
> instance.  After 24 hours you will get errors about an expired TGT.
> Restarting the NiFi process is the only way I've found to get it to renew
> the TGT.
> >
> > Jonathan Schneider
> > Hadoop/UNIX Administrator, STSC
> > SCL Health
> > 17501 W. 98th St, Pillars 25-33
> > Lenexa, KS  66219
> > P: 913.895.2999
> > j...@sclhs.net
> > www.sclhealthsystem.org
> >
> >
> >
> >
> > -----Original Message-----
> > From: Joe Witt [mailto:joe.w...@gmail.com]
> > Sent: Wednesday, January 10, 2018 9:53 AM
> > To: users@nifi.apache.org
> > Subject: [EXTERNAL EMAIL]Re: Kerberos hive failure to renew tickets
> >
> > Georg
> >
> > We'd need to see what you mean to really understand.  Can you please
> share NiFi logs directory content and if the flow is stuck/locked up please
> share a nifi thread dump which will be in the logs if you first run
> bin/nifi.sh dump.
> >
> > thanks
> >
> > On Wed, Jan 10, 2018 at 8:50 AM, Georg Heiler <georg.kf.hei...@gmail.com>
> wrote:
> >> Hi
> >> In production I observe problems with ticket renewal for the nifi
> >> hive processor.
> >>
> >> A workaround is to restart the hive service but that doesn't seem right.
> >>
> >> Is there a real fix for this problem?
> >>
> >> Best Georg
>
>
