Hello,

We are having problems with delegation tokens in a secure (Kerberos) cluster; the job fails with:
"Delegation Token can be issued only with kerberos or web authentication"

We have a Spark process that generates the HFiles to be bulk-loaded into
HBase. To generate these HFiles we use the method
HBaseRDDFunctions.hbaseBulkLoadThinRows (from a back-ported version of the
latest hbase/spark code).
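
For reference, the driver-side call looks roughly like the sketch below. It is a
simplified, self-contained example rather than our real pipeline: the table name,
column family and the tiny in-memory RDD are placeholders (our real input is a
parquet-backed RDD).

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.spark.{ByteArrayWrapper, FamiliesQualifiersValues, HBaseContext}
import org.apache.hadoop.hbase.spark.HBaseRDDFunctions._
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

// Simplified sketch of the driver-side bulk load; table, family and input data
// below are placeholders, not our production code.
object BulkLoadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("bulkload-sketch"))
    val hbaseConf: Configuration = HBaseConfiguration.create()
    val hbaseContext = new HBaseContext(sc, hbaseConf)

    // Placeholder input: (rowKey, value) pairs instead of our parquet-backed RDD.
    val rowRdd = sc.parallelize(Seq(("row1", "v1"), ("row2", "v2")))

    rowRdd.hbaseBulkLoadThinRows(
      hbaseContext,
      TableName.valueOf("cards:transactions"),   // placeholder table name
      (row: (String, String)) => {
        // Build one "thin row": all cells for a row key in a single object.
        val fqv = new FamiliesQualifiersValues
        fqv += (Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(row._2))
        (new ByteArrayWrapper(Bytes.toBytes(row._1)), fqv)
      },
      "/tmp/hfile-staging")                      // HDFS staging dir for the generated HFiles
  }
}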

I think the problem is in the piece of code below. This function is executed
for every partition of the RDD; when the executors try to run it, they do not
have a valid Kerberos credential and cannot execute anything. A rough sketch
of a workaround we are considering follows the snippet.

  private def hbaseForeachPartition[T](
      configBroadcast: Broadcast[SerializableWritable[Configuration]],
      it: Iterator[T],
      f: (Iterator[T], Connection) => Unit) = {

    val config = getConf(configBroadcast)

    applyCreds
    // specify that this is a proxy user
    val smartConn = HBaseConnectionCache.getConnection(config)
    f(it, smartConn.connection)
    smartConn.close()
  }

I have attached the spark-submit script and the complete error log trace. Has
anyone faced this problem before?

Thanks in advance.

Regards,
Abel.
-- 
Un saludo - Best Regards.
Abel
Script for submitting the Spark action
#!/bin/bash

SPARK_CONF_DIR=conf-hbase spark-submit --master yarn-cluster \
  --executor-memory 6G \
  --num-executors 10 \
  --queue cards \
  --executor-cores 4 \
  --driver-java-options "-Dlog4j.configuration=file:log4j.properties" \
  --driver-class-path "$2" \
  --jars file:/opt/orange/lib/rocksdbjni-4.5.1.jar \
  --conf "spark.driver.extraClassPath=/var/cloudera/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar:/var/cloudera/parcels/CDH/jars/hbase-server-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/jars/hbase-common-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-client-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-protocol-1.0.0-cdh5.5.4.jar:/opt/orange/lib/rocksdbjni-4.5.1.jar:/var/cloudera/parcels/CLABS_PHOENIX-4.5.2-1.clabs_phoenix1.2.0.p0.774/lib/phoenix/lib/phoenix-core-1.2.0.jar:/var/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.5.4.jar" \
  --conf "spark.executor.extraClassPath=/var/cloudera/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar:/var/cloudera/parcels/CDH/jars/hbase-server-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/jars/hbase-common-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-client-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-protocol-1.0.0-cdh5.5.4.jar:/opt/orange/lib/rocksdbjni-4.5.1.jar:/var/cloudera/parcels/CLABS_PHOENIX-4.5.2-1.clabs_phoenix1.2.0.p0.774/lib/phoenix/lib/phoenix-core-1.2.0.jar:/var/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.5.4.jar" \
  --principal hb...@company.corp \
  --keytab /opt/company/conf/hbase.keytab \
  --files "owl.properties,conf-hbase/log4j.properties,conf-hbase/hbase-site.xml,conf-hbase/core-site.xml,$2" \
  --class $1 \
  cards-batch-$3-jar-with-dependencies.jar $2
Complete log
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner: +++ Cleaning closure 
<function1> 
(org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoadThinRows$1) +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared fields: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public static final long 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoadThinRows$1.serialVersionUID
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      private final 
scala.Function1 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoadThinRows$1.mapFunction$1
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared methods: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final 
java.lang.Object 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoadThinRows$1.apply(java.lang.Object)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final scala.Tuple2 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoadThinRows$1.apply(java.lang.Object)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + inner classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer objects: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + populating accessed fields 
because this is the starting closure
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + fields accessed by starting 
closure: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + there are no enclosing 
objects!
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  +++ closure <function1> 
(org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoadThinRows$1) is now 
cleaned +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner: +++ Cleaning closure 
<function1> 
(org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1) +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared fields: 3
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public static final long 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.serialVersionUID
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      private final 
org.apache.hadoop.hbase.spark.HBaseContext 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.$outer
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      private final 
scala.Function2 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.f$1
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared methods: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final 
java.lang.Object 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.apply(java.lang.Object)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final void 
org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.apply(scala.collection.Iterator)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + inner classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer classes: 1
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      
org.apache.hadoop.hbase.spark.HBaseContext
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer objects: 1
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      
org.apache.hadoop.hbase.spark.HBaseContext@5ec594de
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + populating accessed fields 
because this is the starting closure
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + fields accessed by starting 
closure: 1
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      (class 
org.apache.hadoop.hbase.spark.HBaseContext,Set(tmpHdfsConfgFile, 
tmpHdfsConfiguration, appliedCredentials, broadcastedConf, credentialsConf, 
credentials))
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outermost object is not a 
closure, so do not clone it: (class 
org.apache.hadoop.hbase.spark.HBaseContext,org.apache.hadoop.hbase.spark.HBaseContext@5ec594de)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  +++ closure <function1> 
(org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1) is now 
cleaned +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner: +++ Cleaning closure 
<function1> 
(org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29) +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared fields: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public static final long 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.serialVersionUID
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      private final 
scala.Function1 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.cleanF$10
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared methods: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final 
java.lang.Object 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(java.lang.Object)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final void 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(scala.collection.Iterator)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + inner classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer objects: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + populating accessed fields 
because this is the starting closure
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + fields accessed by starting 
closure: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + there are no enclosing 
objects!
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  +++ closure <function1> 
(org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29) is now 
cleaned +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner: +++ Cleaning closure 
<function2> (org.apache.spark.SparkContext$$anonfun$runJob$5) +++
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared fields: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public static final long 
org.apache.spark.SparkContext$$anonfun$runJob$5.serialVersionUID
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      private final 
scala.Function1 org.apache.spark.SparkContext$$anonfun$runJob$5.cleanedFunc$1
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + declared methods: 2
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final 
java.lang.Object 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(java.lang.Object,java.lang.Object)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:      public final 
java.lang.Object 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + inner classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer classes: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + outer objects: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + populating accessed fields 
because this is the starting closure
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + fields accessed by starting 
closure: 0
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  + there are no enclosing 
objects!
16/11/18 10:42:34 DEBUG [Driver] ClosureCleaner:  +++ closure <function2> 
(org.apache.spark.SparkContext$$anonfun$runJob$5) is now cleaned +++
16/11/18 10:42:34 INFO [Driver] SparkContext: Starting job: foreachPartition at 
HBaseContext.scala:104
16/11/18 10:42:34 DEBUG [IPC Parameter Sending Thread #1] Client: IPC Client 
(558031577) connection to quorum3/<ip_address>:8020 from hbase sending #23
16/11/18 10:42:34 DEBUG [IPC Client (558031577) connection to 
quorum3/<ip_address>:8020 from hbase] Client: IPC Client (558031577) connection 
to quorum3/<ip_address>:8020 from hbase got value #23
16/11/18 10:42:34 WARN [dag-scheduler-event-loop] DAGScheduler: Creating new 
stage failed due to exception - job: 0
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
can be issued only with kerberos or web authentication
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7469)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:541)
        at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getDelegationToken(AuthorizationProviderProxyClientProtocol.java:661)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:964)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy14.getDelegationToken(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:909)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy15.getDelegationToken(Unknown Source)
        at 
org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1061)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1451)
        at 
org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:538)
        at 
org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:516)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2137)
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:129)
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:110)
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
        at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:193)
        at 
parquet.hadoop.ParquetInputFormat.listStatus(ParquetInputFormat.java:343)
        at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
        at 
parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:299)
        at 
org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:115)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:82)
        at 
org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:78)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:226)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:224)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.dependencies(RDD.scala:224)
        at 
org.apache.spark.scheduler.DAGScheduler.visit$2(DAGScheduler.scala:388)
        at 
org.apache.spark.scheduler.DAGScheduler.getAncestorShuffleDependencies(DAGScheduler.scala:405)
        at 
org.apache.spark.scheduler.DAGScheduler.registerShuffleDependencies(DAGScheduler.scala:370)
        at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getShuffleMapStage(DAGScheduler.scala:253)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$visit$1$1.apply(DAGScheduler.scala:354)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$visit$1$1.apply(DAGScheduler.scala:351)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at 
org.apache.spark.scheduler.DAGScheduler.visit$1(DAGScheduler.scala:351)
        at 
org.apache.spark.scheduler.DAGScheduler.getParentStages(DAGScheduler.scala:363)
        at 
org.apache.spark.scheduler.DAGScheduler.getParentStagesAndId(DAGScheduler.scala:266)
        at 
org.apache.spark.scheduler.DAGScheduler.newResultStage(DAGScheduler.scala:300)
        at 
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:734)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1477)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1469)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
16/11/18 10:42:34 DEBUG [sparkDriver-akka.actor.default-dispatcher-25] 
AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message 
AkkaMessage(ReviveOffers,false) from Actor[akka://sparkDriver/deadLetters]
16/11/18 10:42:34 DEBUG [sparkDriver-akka.actor.default-dispatcher-25] 
AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: 
AkkaMessage(ReviveOffers,false)
16/11/18 10:42:34 DEBUG [sparkDriver-akka.actor.default-dispatcher-25] 
AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message 
(0.719944 ms) AkkaMessage(ReviveOffers,false) from 
Actor[akka://sparkDriver/deadLetters]
16/11/18 10:42:34 INFO [Driver] DAGScheduler: Job 0 failed: foreachPartition at 
HBaseContext.scala:104, took 0.017348 s
16/11/18 10:42:34 ERROR [Driver] ApplicationMaster: User class threw exception: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
can be issued only with kerberos or web authentication
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7469)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:541)
        at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getDelegationToken(AuthorizationProviderProxyClientProtocol.java:661)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:964)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
can be issued only with kerberos or web authentication
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7469)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:541)
        at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getDelegationToken(AuthorizationProviderProxyClientProtocol.java:661)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:964)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy14.getDelegationToken(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:909)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy15.getDelegationToken(Unknown Source)
        at 
org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1061)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1451)
        at 
org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:538)
        at 
org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:516)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2137)
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:129)
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:110)
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:84)
        at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:193)
        at 
parquet.hadoop.ParquetInputFormat.listStatus(ParquetInputFormat.java:343)
        at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
        at 
parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:299)
        at 
org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:115)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:82)
        at 
org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:78)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:226)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:224)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.dependencies(RDD.scala:224)
        at 
org.apache.spark.scheduler.DAGScheduler.visit$2(DAGScheduler.scala:388)
        at 
org.apache.spark.scheduler.DAGScheduler.getAncestorShuffleDependencies(DAGScheduler.scala:405)
        at 
org.apache.spark.scheduler.DAGScheduler.registerShuffleDependencies(DAGScheduler.scala:370)
        at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getShuffleMapStage(DAGScheduler.scala:253)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$visit$1$1.apply(DAGScheduler.scala:354)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$visit$1$1.apply(DAGScheduler.scala:351)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at 
org.apache.spark.scheduler.DAGScheduler.visit$1(DAGScheduler.scala:351)
        at 
org.apache.spark.scheduler.DAGScheduler.getParentStages(DAGScheduler.scala:363)
        at 
org.apache.spark.scheduler.DAGScheduler.getParentStagesAndId(DAGScheduler.scala:266)
        at 
org.apache.spark.scheduler.DAGScheduler.newResultStage(DAGScheduler.scala:300)
        at 
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:734)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1477)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1469)
        at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:898)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:896)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:896)
        at 
org.apache.hadoop.hbase.spark.HBaseContext.foreachPartition(HBaseContext.scala:104)
        at 
org.apache.hadoop.hbase.spark.HBaseRDDFunctions$GenericHBaseRDDFunctions.hbaseForeachPartition(HBaseRDDFunctions.scala:141)
        at 
org.apache.hadoop.hbase.spark.HBaseContext.bulkLoadThinRows(HBaseContext.scala:772)
        at 
org.apache.hadoop.hbase.spark.HBaseRDDFunctions$GenericHBaseRDDFunctions.hbaseBulkLoadThinRows(HBaseRDDFunctions.scala:246)
        at 
com.company.di.owl.batch.spark.rdd.PhoenixRDDFunctions$PhoenixRDDFunctions.saveHFileToPhoenix(PhoenixRDDFunctions.scala:133)
        at 
com.company.di.card.batch.transactions.sinks.TransactionLoaderSink.sink(TransactionLoaderSink.scala:149)
        at 
com.company.di.card.batch.transactions.pipelines.TransactionsPipeline$TransactionsPipeline.runPipeline(TransactionsPipeline.scala:73)
        at 
com.company.di.card.batch.transactions.pipelines.TransactionsPipeline$class.runPipeline(TransactionsPipeline.scala:27)
        at 
com.company.di.card.batch.transactions.driver.AuthTxnBatchDriver$$anon$1.runPipeline(AuthTxnBatchDriver.scala:45)
        at 
com.company.di.card.batch.transactions.driver.AuthTxnBatchDriver$delayedInit$body.apply(AuthTxnBatchDriver.scala:42)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
        at 
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at 
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
        at scala.App$class.main(App.scala:71)
        at 
com.company.di.card.batch.transactions.driver.AuthTxnBatchDriver$.main(AuthTxnBatchDriver.scala:23)
        at 
com.company.di.card.batch.transactions.driver.AuthTxnBatchDriver.main(AuthTxnBatchDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)
16/11/18 10:42:34 INFO [Driver] ApplicationMaster: Final app status: FAILED, 
exitCode: 15, (reason: User class threw exception: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
can be issued only with kerberos or web authentication
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7469)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:541)
        at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getDelegationToken(AuthorizationProviderProxyClientProtocol.java:661)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:964)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
)
