Hi

I am using Spark 1.5.1 with CDH 5.4.2. My cluster is Kerberos-protected.

Here is pseudocode for what I am trying to do (elided arguments marked with …):
val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("foo", "…")
ugi.doAs(new PrivilegedExceptionAction[HiveContext]() {
  override def run(): HiveContext = {
    val sparkConf: SparkConf = createSparkConf(…)       // our own conf builder
    val sparkContext = new JavaSparkContext(sparkConf)  // boots up fine
    new HiveContext(sparkContext.sc)                    // fails here
  }
})

The SparkContext boots up fine under the UGI, but the HiveContext creation fails
with the message below. If I first run kinit manually in the same shell, the
code works. Any thoughts?
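
One thing I plan to try next is making the keytab login the process-wide login
user instead of holding a separate UGI, on the theory that the Hive metastore
client authenticates off the JVM's login user (which kinit also affects) rather
than the doAs subject. A minimal, self-contained sketch; the principal, keytab
path, and app name below are placeholders, not our real values:

import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.SparkConf
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.sql.hive.HiveContext

object KeytabLoginSketch {
  def main(args: Array[String]): Unit = {
    // Replaces the static login user for the whole JVM, so code that
    // consults UserGroupInformation.getLoginUser (not just the current
    // doAs subject) sees the keytab credentials. Placeholder values.
    UserGroupInformation.loginUserFromKeytab("foo@EXAMPLE.COM", "/path/to/foo.keytab")

    val sparkConf    = new SparkConf().setAppName("keytab-login-sketch")
    val sparkContext = new JavaSparkContext(sparkConf)
    val hiveContext  = new HiveContext(sparkContext.sc)

    // Touch the metastore so the connection is actually exercised.
    hiveContext.sql("SHOW DATABASES").collect().foreach(println)
  }
}

If that variant works, it would line up with the kinit observation: both put
credentials where the login-user / ticket-cache lookup path finds them, instead
of only on the doAs subject.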

15/12/08 04:12:27 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
15/12/08 04:12:27 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
15/12/08 04:12:27 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
15/12/08 04:12:27 INFO cluster.YarnClientSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkexecu...@ip-10-222-0-230.us-west-2.compute.internal:48544/user/Executor#684269808]) with ID 5
15/12/08 04:12:27 INFO hive.metastore: Trying to connect to metastore with URI thrift://ip-10-222-0-145.us-west-2.compute.internal:9083
15/12/08 04:12:27 INFO hive.metastore: Connected to metastore.
15/12/08 04:12:27 INFO session.SessionState: Created local directory: /tmp/01726939-f7f4-45a4-a027-feb0ab9b0b68_resources
15/12/08 04:12:27 INFO session.SessionState: Created HDFS directory: /tmp/hive/dev.baseline/01726939-f7f4-45a4-a027-feb0ab9b0b68
15/12/08 04:12:27 INFO session.SessionState: Created local directory: /tmp/developer/01726939-f7f4-45a4-a027-feb0ab9b0b68
15/12/08 04:12:27 INFO cluster.YarnClientSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkexecu...@ip-10-222-0-230.us-west-2.compute.internal:47786/user/Executor#-468362093]) with ID 9
15/12/08 04:12:27 INFO session.SessionState: Created HDFS directory: /tmp/hive/dev.baseline/01726939-f7f4-45a4-a027-feb0ab9b0b68/_tmp_space.db
15/12/08 04:12:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager ip-10-222-0-230.us-west-2.compute.internal:38220 with 530.0 MB RAM, BlockManagerId(1, ip-10-222-0-230.us-west-2.compute.internal, 38220)
15/12/08 04:12:28 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
15/12/08 04:12:28 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
15/12/08 04:12:28 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
15/12/08 04:12:28 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
15/12/08 04:12:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager ip-10-222-0-230.us-west-2.compute.internal:56661 with 530.0 MB RAM, BlockManagerId(5, ip-10-222-0-230.us-west-2.compute.internal, 56661)
15/12/08 04:12:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager ip-10-222-0-230.us-west-2.compute.internal:43290 with 530.0 MB RAM, BlockManagerId(9, ip-10-222-0-230.us-west-2.compute.internal, 43290)
15/12/08 04:12:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/08 04:12:28 INFO hive.metastore: Trying to connect to metastore with URI thrift://ip-10-222-0-145.us-west-2.compute.internal:9083
15/12/08 04:12:28 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1234)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
        at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:183)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:179)
        at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:226)
        at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:185)
        at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:392)
        at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:235)
        at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:234)
        at scala.collection.Iterator$class.foreach(Iterator.scala:742)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:234)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:72)
        at com.workday.bds.dfserver.SparkServiceProvider$.com$workday$bds$dfserver$SparkServiceProvider$$createHiveContext(SparkServiceProvider.scala:86)
        at com.workday.bds.dfserver.SparkServiceProvider$$anon$1.run(SparkServiceProvider.scala:52)
        at com.workday.bds.dfserver.SparkServiceProvider$$anon$1.run(SparkServiceProvider.scala:50)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at com.workday.bds.dfserver.SparkServiceProvider$.init(SparkServiceProvider.scala:50)
        at com.workday.bds.dfserver.Spas$.delayedEndpoint$com$workday$bds$dfserver$Spas$1(Spas.scala:72)
        at com.workday.bds.dfserver.Spas$delayedInit$body.apply(Spas.scala:20)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
        at scala.App$class.main(App.scala:76)
        at com.workday.bds.dfserver.Spas$.main(Spas.scala:20)
        at com.workday.bds.dfserver.Spas.main(Spas.scala)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 62 more
15/12/08 04:12:28 WARN hive.metastore: Failed to connect to the MetaStore Server...
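
For completeness, here is the kind of check I can drop inside the doAs block,
right before creating the HiveContext, to see which credentials that code path
actually observes (a sketch; ugi is the UGI returned by
loginUserFromKeytabAndReturnUGI above):

// Diagnostic: compare the UGI we logged in with against what Hadoop
// reports as the current and login users inside doAs.
println(s"ugi               = $ugi, hasKerberosCredentials=${ugi.hasKerberosCredentials}")
println(s"getCurrentUser    = ${UserGroupInformation.getCurrentUser}")
println(s"getLoginUser      = ${UserGroupInformation.getLoginUser}")
println(s"isSecurityEnabled = ${UserGroupInformation.isSecurityEnabled}")

My understanding is that if getLoginUser still shows the OS user with no
Kerberos credentials, that would explain why the metastore client finds no TGT
until a manual kinit populates the ticket cache.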
