Github user djdean commented on the pull request:

    https://github.com/apache/spark/pull/9875#issuecomment-160785941
  
    After applying the provided patch, things still do not work. While doing some debugging I found some additional information. When it works, two tokens are created, and the renewal interval is set from the first one using the "getTokenRenewalInterval(stagingDirPath)" function in Client.scala. The second time around (after stopping and restarting the context), however, it prints a message saying one token was created, but no renewal interval is set. Finally, it dies saying the token can't be found in the cache. The relevant output is below (ip/hostnames removed):
    
    
    ---------------Successful run--------------
    15/11/30 14:18:57 INFO yarn.Client: Credentials file set to: credentials-372be24e-9614-48d4-9f51-4cf275c51f46
    15/11/30 14:18:57 INFO yarn.YarnSparkHadoopUtil: delegTokenRenewer: hadoop
    15/11/30 14:18:57 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://HOSTNAME:9000/user/hadoop/.sparkStaging/application_1446695132208_0114
    15/11/30 14:18:57 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 142 for hadoop on xxx.xxx.xxx.xxx
    15/11/30 14:18:57 INFO yarn.Client: Renewal Interval set to 86400400
    15/11/30 14:18:57 INFO yarn.Client: Preparing resources for our AM container
    15/11/30 14:18:57 INFO yarn.YarnSparkHadoopUtil: delegTokenRenewer: rm/HOSTNAME
    15/11/30 14:18:57 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://HOSTNAME:9000/user/hadoop/.sparkStaging/application_1446695132208_0114
    15/11/30 14:18:57 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 143 for hadoop on xxx.xxx.xxx.xxx
    15/11/30 14:18:58 INFO yarn.YarnSparkHadoopUtil: Hive class not found java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
    15/11/30 14:18:58 INFO yarn.Client: To enable the AM to login from keytab, credentials are being copied over to the AM via the YARN Secure Distributed Cache.
    15/11/30 14:18:58 INFO yarn.Client: Uploading resource file:/etc/security/keytabs/hadoop.keytab -> <HOSTNAME>/user/hadoop/.sparkStaging/application_1446695132208_0114/hadoop.keytab
    --------End successful run--------------
    --------Failed run------------
    15/11/30 14:19:46 INFO yarn.Client: Credentials file set to: credentials-b91660b6-a7c4-49f1-b869-ded70fec1641
    15/11/30 14:19:46 INFO yarn.Client: Preparing resources for our AM container
    15/11/30 14:19:46 INFO yarn.YarnSparkHadoopUtil: Called with conf: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
    15/11/30 14:19:46 INFO yarn.YarnSparkHadoopUtil: delegTokenRenewer: rm/HOSTNAME
    15/11/30 14:19:46 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://HOSTNAME:9000/user/hadoop/.sparkStaging/application_1446695132208_0115
    15/11/30 14:19:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 144 for hadoop on xxx.xxx.xxx.xxx
    15/11/30 14:19:46 INFO yarn.YarnSparkHadoopUtil: Hive class not found java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
    15/11/30 14:19:46 INFO yarn.Client: To enable the AM to login from keytab, credentials are being copied over to the AM via the YARN Secure Distributed Cache.
    15/11/30 14:19:46 INFO yarn.Client: Uploading resource file:/etc/security/keytabs/hadoop.keytab -> hdfs://HOSTNAME:9000/user/hadoop/.sparkStaging/application_1446695132208_0115/hadoop.keytab
    15/11/30 14:19:46 INFO yarn.Client: Uploading resource file:/var/tmp/spark-1.6.0-SNAPSHOT-bin-patch-8/lib/spark-assembly-1.6.0-SNAPSHOT-hadoop2.7.1.jar -> hdfs://HOSTNAME/user/hadoop/.sparkStaging/application_1446695132208_0115/spark-assembly-1.6.0-SNAPSHOT-hadoop2.7.1.jar
    15/11/30 14:19:58 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 143 for hadoop) can't be found in cache
    15/11/30 14:19:58 WARN hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_-695293104_13] for 30 seconds.  Will retry shortly ...

    org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 143 for hadoop) can't be found in cache
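
    For reference, here is a rough Scala sketch of how I understand the renewal interval gets derived on the successful run: request a fresh HDFS delegation token with a renewer we are permitted to use, renew it once, and take the new expiration minus the token's issue date. The method name, the "principal" parameter, and the exact Hadoop calls below are my own assumptions for illustration; the real logic in Client.scala may differ.

    import java.io.{ByteArrayInputStream, DataInputStream}
    import scala.collection.JavaConverters._

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier
    import org.apache.hadoop.security.Credentials

    // Hypothetical sketch only; not the actual Client.scala implementation.
    def renewalIntervalSketch(stagingDirPath: Path,
        hadoopConf: Configuration,
        principal: String): Long = {
      // Ask the namenode backing the staging dir for a fresh HDFS delegation token,
      // with the keytab principal as renewer so we are allowed to renew it ourselves.
      val creds = new Credentials()
      val fs = stagingDirPath.getFileSystem(hadoopConf)
      fs.addDelegationTokens(principal, creds)

      val hdfsToken = creds.getAllTokens.asScala
        .find(_.getKind == DelegationTokenIdentifier.HDFS_DELEGATION_KIND)
        .getOrElse(sys.error("no HDFS delegation token was obtained"))

      // renew() returns the token's next expiration time in ms since the epoch.
      val newExpiration = hdfsToken.renew(hadoopConf)

      // Decode the token identifier so we can read its issue date.
      val ident = new DelegationTokenIdentifier()
      ident.readFields(new DataInputStream(new ByteArrayInputStream(hdfsToken.getIdentifier)))

      // The "Renewal Interval set to 86400400" line above would be this difference.
      newExpiration - ident.getIssueDate
    }

    If that is roughly what happens, then the failed run never gets as far as this computation: it only reports one token (144, with renewer rm/HOSTNAME) and never logs a "Renewal Interval set to" line, while the old token 143 it keeps trying to use is no longer in the namenode's cache.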


