Hi,
I'm testing the Livy server with Hue 3.9 and Spark 1.6.0 on a kerberized HDP 2.4 cluster. When I run the following command:


/usr/java/jdk1.7.0_71/bin/java -Dhdp.version=2.4.0.0-169 \
  -cp /usr/hdp/2.4.0.0-169/spark/conf/:/usr/hdp/2.4.0.0-169/spark/lib/spark-assembly-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-core-3.2.10.jar:/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/hdp/2.4.0.0-169/spark/lib/datanucleus-api-jdo-3.2.6.jar:/etc/hadoop/conf/:/usr/hdp/2.4.0.0-169/hadoop/lib/hadoop-lzo-0.6.0.2.4.0.0-169.jar \
  -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit \
  --master yarn-cluster \
  --conf spark.livy.port=0 \
  --conf spark.livy.callbackUrl=http://172.16.24.26:8998/sessions/0/callback \
  --conf spark.driver.extraJavaOptions=-Dhdp.version=2.4.0.0-169 \
  --class com.cloudera.hue.livy.repl.Main \
  --name Livy \
  --proxy-user luca.rea \
  /var/cloudera_hue/apps/spark/java/livy-assembly/target/scala-2.10/livy-assembly-0.2.0-SNAPSHOT.jar \
  spark


it fails while renewing the HDFS delegation token and returns the error below:


16/04/13 09:34:52 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/13 09:34:53 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user spark-pantagr...@contactlab.lan using keytab file /etc/security/keytabs/spark.headless.keytab
16/04/13 09:34:54 INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl: Timeline service address: http://pg-master04.contactlab.lan:8188/ws/v1/timeline/
16/04/13 09:34:54 WARN org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/04/13 09:34:55 INFO org.apache.hadoop.hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 2135943 for luca.rea on ha-hdfs:pgha
Exception in thread "main" org.apache.hadoop.security.AccessControlException: luca.rea tries to renew a token with renewer spark
        at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:481)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewDelegationToken(FSNamesystem.java:6793)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewDelegationToken(NameNodeRpcServer.java:635)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:1005)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1147)
        at org.apache.hadoop.security.token.Token.renew(Token.java:385)
        at org.apache.spark.deploy.yarn.Client.getTokenRenewalInterval(Client.scala:593)
        at org.apache.spark.deploy.yarn.Client.setupLaunchEnv(Client.scala:621)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:721)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:142)
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1065)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1125)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:161)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): luca.rea tries to renew a token with renewer spark
        at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:481)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewDelegationToken(FSNamesystem.java:6793)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewDelegationToken(NameNodeRpcServer.java:635)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:1005)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

        at org.apache.hadoop.ipc.Client.call(Client.java:1427)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEng
        at com.sun.proxy.$Proxy22.renewDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryI
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocat
        at com.sun.proxy.$Proxy23.renewDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1145)
        ... 22 more
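
Tracing through it: the renewal that fails is Spark's own sanity renewal in Client.getTokenRenewalInterval (Client.scala:593 above). If I read the trace right, the HDFS delegation token is obtained for the proxied user luca.rea with the renewer set to the login principal's short name ("spark", via the auth_to_local rule below), but the renew() call itself runs as luca.rea inside the --proxy-user doAs block, and the NameNode only lets the recorded renewer renew a token. The mechanism, as a minimal standalone sketch against the plain Hadoop 2.7 client API (the demo class and flow are mine, not Livy/Spark code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.token.Token;

    public class RenewerDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Credentials creds = new Credentials();
            // The renewer name is recorded inside the token at
            // creation time; here it is the short name "spark".
            Token<?>[] tokens = fs.addDelegationTokens("spark", creds);
            for (Token<?> token : tokens) {
                // renew() is an RPC to the NameNode; it throws
                // AccessControlException ("X tries to renew a token
                // with renewer spark") unless the caller's short name
                // matches the recorded renewer.
                token.renew(conf);
            }
        }
    }

So the token owner (luca.rea) and the renewer ("spark") ending up different is, as far as I can tell, exactly what trips the check.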





spark-defaults.conf:

spark.yarn.principal spark-pantagr...@contactlab.lan
spark.yarn.keytab /etc/security/keytabs/spark.headless.keytab
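
These two settings are what produce the "Login successful for user spark-pantagr...@contactlab.lan using keytab file ..." line in the log: SparkSubmit logs in from the keytab before requesting any tokens. Roughly equivalent to this sketch (my own demo class against the Hadoop 2.7 UserGroupInformation API, not Spark's actual code path):

    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginDemo {
        public static void main(String[] args) throws Exception {
            // Same effect as spark.yarn.principal / spark.yarn.keytab:
            // the JVM's login user becomes the spark principal
            // (elided here exactly as in my config above).
            UserGroupInformation.loginUserFromKeytab(
                "spark-pantagr...@contactlab.lan",
                "/etc/security/keytabs/spark.headless.keytab");
            // Prints "spark" once the auth_to_local rule below applies.
            System.out.println(
                UserGroupInformation.getLoginUser().getShortUserName());
        }
    }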



core-site.xml:

    <property>
      <name>hadoop.proxyuser.spark.groups</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.spark.hosts</name>
      <value>*</value>
    </property>

...

    <property>
      <name>hadoop.security.auth_to_local</name>
      <value>
RULE:[1:$1@$0](spark-pantagr...@contactlab.lan)s/.*/spark/
DEFAULT
      </value>
    </property>
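
The two proxyuser properties above are what authorize the spark principal to impersonate other users; --proxy-user luca.rea in the launch command ends up in the standard Hadoop impersonation pattern, something like this sketch (the demo class is mine; anonymous inner class because this runs on Java 7):

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class ProxyUserDemo {
        public static void main(String[] args) throws Exception {
            // The real, kerberos-authenticated user (spark principal).
            UserGroupInformation realUser =
                UserGroupInformation.getLoginUser();
            // The impersonated user, as passed via --proxy-user.
            UserGroupInformation proxy =
                UserGroupInformation.createProxyUser("luca.rea", realUser);
            proxy.doAs(new PrivilegedExceptionAction<Void>() {
                public Void run() throws Exception {
                    // Everything in here runs as luca.rea, so any
                    // delegation tokens obtained here are owned by
                    // luca.rea, not by spark.
                    FileSystem.get(new Configuration()).getStatus();
                    return null;
                }
            });
        }
    }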


"spark" is present as local user in all servers.
 

What am I missing here?




