[ https://issues.apache.org/jira/browse/SPARK-20709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-20709.
------------------------------------
    Resolution: Duplicate

> spark-shell use proxy-user failed
> ---------------------------------
>
>                 Key: SPARK-20709
>                 URL: https://issues.apache.org/jira/browse/SPARK-20709
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.1.0
>            Reporter: fangfengbin
>
> Command: spark-shell --master yarn-client --proxy-user leoB
> Thrown exception: Failed to find any Kerberos tgt (see the reproduction sketch after the quoted log)
> Log is:
> 17/05/11 15:56:21 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
> 17/05/11 15:56:21 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
> 17/05/11 15:56:21 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
> 17/05/11 15:56:21 DEBUG MetricsSystemImpl: UgiMetrics, User and group related metrics
> 17/05/11 15:56:22 DEBUG Shell: setsid exited with exit code 0
> 17/05/11 15:56:22 DEBUG Groups:  Creating new Groups object
> 17/05/11 15:56:22 DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...
> 17/05/11 15:56:22 DEBUG NativeCodeLoader: Loaded the native-hadoop library
> 17/05/11 15:56:22 DEBUG JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
> 17/05/11 15:56:22 DEBUG JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> 17/05/11 15:56:22 DEBUG Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> 17/05/11 15:56:22 DEBUG UserGroupInformation: hadoop login
> 17/05/11 15:56:22 DEBUG UserGroupInformation: hadoop login commit
> 17/05/11 15:56:22 DEBUG UserGroupInformation: using kerberos user:sp...@hadoop.com
> 17/05/11 15:56:22 DEBUG UserGroupInformation: Using user: "sp...@hadoop.com" with name sp...@hadoop.com
> 17/05/11 15:56:22 DEBUG UserGroupInformation: User entry: "sp...@hadoop.com"
> 17/05/11 15:56:22 DEBUG UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
> 17/05/11 15:56:22 DEBUG UserGroupInformation: UGI loginUser:sp...@hadoop.com (auth:KERBEROS)
> 17/05/11 15:56:22 DEBUG UserGroupInformation: Current time is 1494489382449
> 17/05/11 15:56:22 DEBUG UserGroupInformation: Next refresh is 1494541210600
> 17/05/11 15:56:22 DEBUG UserGroupInformation: PrivilegedAction as:leoB (auth:PROXY) via sp...@hadoop.com (auth:KERBEROS) from:org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> [INFO] Unable to bind key for unsupported operation: backward-delete-word
> [INFO] Unable to bind key for unsupported operation: backward-delete-word
> [INFO] Unable to bind key for unsupported operation: down-history
> [INFO] Unable to bind key for unsupported operation: up-history
> [INFO] Unable to bind key for unsupported operation: up-history
> [INFO] Unable to bind key for unsupported operation: down-history
> [INFO] Unable to bind key for unsupported operation: up-history
> [INFO] Unable to bind key for unsupported operation: down-history
> [INFO] Unable to bind key for unsupported operation: up-history
> [INFO] Unable to bind key for unsupported operation: down-history
> [INFO] Unable to bind key for unsupported operation: up-history
> [INFO] Unable to bind key for unsupported operation: down-history
> 17/05/11 15:56:29 WARN SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
> 17/05/11 15:56:56 WARN SessionState: load mapred-default.xml, HIVE_CONF_DIR env not found!
> 17/05/11 15:56:56 ERROR TSaslTransport: SASL negotiation failure
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
>       at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>       at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
>       at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
>       at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>       at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>       at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1769)
>       at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>       at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:513)
>       at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:249)
>       at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)
>       at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
>       at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
>       at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
>       at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3126)
>       at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3145)
>       at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3370)
>       at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:176)
>       at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:168)
>       at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:519)
>       at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:487)
>       at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:192)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
>       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:361)
>       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:262)
>       at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:65)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:166)
>       at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86)
>       at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
>       at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
>       at scala.Option.getOrElse(Option.scala:121)
>       at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101)
>       at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100)
>       at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:159)
>       at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978)
>       at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
>       at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
>       at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
>       at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
>       at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>       at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>       at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>       at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>       at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
>       at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:878)
>       at org.apache.spark.repl.Main$.createSparkSession(Main.scala:96)
>       at $line3.$read$$iw$$iw.<init>(<console>:15)
>       at $line3.$read$$iw.<init>(<console>:42)
>       at $line3.$read.<init>(<console>:44)
>       at $line3.$read$.<init>(<console>:48)
>       at $line3.$read$.<clinit>(<console>)
>       at $line3.$eval$.$print$lzycompute(<console>:7)
>       at $line3.$eval$.$print(<console>:6)
>       at $line3.$eval.$print(<console>)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
>       at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
>       at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
>       at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
>       at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
>       at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
>       at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
>       at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
>       at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
>       at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
>       at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
>       at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
>       at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
>       at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
>       at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
>       at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
>       at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
>       at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:105)
>       at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
>       at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
>       at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
>       at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
>       at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
>       at org.apache.spark.repl.Main$.doMain(Main.scala:69)
>       at org.apache.spark.repl.Main$.main(Main.scala:52)
>       at org.apache.spark.repl.Main.main(Main.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:760)
>       at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:172)
>       at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:170)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1769)
>       at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
>       at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:215)
>       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:129)
>       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
>       at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>       at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>       at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>       at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>       at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
>       at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
>       at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
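
For reference: with --proxy-user on a kerberized cluster, the real (submitting) user normally needs a valid Kerberos TGT before spark-shell is launched, and the cluster's core-site.xml has to allow that user to impersonate the proxy user (hadoop.proxyuser.<user>.hosts / hadoop.proxyuser.<user>.groups). A minimal reproduction sketch, assuming a hypothetical principal spark@HADOOP.COM for the submitting user (leoB is the proxy user from the report):

    # obtain and verify a TGT for the real (submitting) user
    kinit spark@HADOOP.COM
    klist

    # reported command; it fails during the Hive metastore SASL handshake when no usable TGT is found
    spark-shell --master yarn-client --proxy-user leoB

The SaslException in the quoted log is raised while HiveMetaStoreClient opens its Thrift transport inside the proxied UGI's doAs, which suggests the Kerberos credentials are not visible to that proxied context at connection time.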



