On 17 Nov 2015, at 15:39, Nikhil Gs 
<gsnikhil1432...@gmail.com<mailto:gsnikhil1432...@gmail.com>> wrote:

Hello Everyone,

Firstly, thank you so much for the response. In our cluster, we are using Spark 
1.3.0 and our cluster version is CDH 5.4.1. Yes, we are also using Kerberos in 
our cluster; the Kerberos version is 1.10.3.

The error "GSS initiate failed [Caused by GSSException: No valid credentials 
provided]" was occurring when we were trying to load data from a Kafka topic 
into HBase using Spark classes and a spark-submit job.

My question is: we also have another project, named XXX, in our cluster which 
is successfully deployed and running. The scenario for that project is 
Flume + spark-submit + HBase table, and it works fine in our Kerberos cluster. 
Why does it not work for Kafka topic + spark-submit + HBase table?

Are we doing anything wrong? We are not able to figure it out. Please advise.


You are probably heading into Kerberos debug mode. That's not something anyone 
enjoys (*)

There are some logging options you can turn up in the JVM codebase:

https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/secrets.html

Then turn up the org.apache.hadoop.security log to DEBUG in the HBase 
server as well as in your client code.
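As a concrete sketch of what that looks like: the JDK's own Kerberos tracing is enabled with the `sun.security.krb5.debug` system property, `HADOOP_JAAS_DEBUG` turns on Hadoop's JAAS diagnostics, and the Hadoop security logger is raised in log4j. The spark-submit arguments shown as `...` and the log4j file locations are placeholders for your own job and cluster layout:

```shell
# JAAS-level diagnostics from Hadoop's login code
export HADOOP_JAAS_DEBUG=true

# JDK Kerberos/GSS tracing on both the driver and the executors
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dsun.security.krb5.debug=true" \
  --conf "spark.executor.extraJavaOptions=-Dsun.security.krb5.debug=true" \
  ...

# And in the log4j.properties used by your client and the HBase servers:
# log4j.logger.org.apache.hadoop.security=DEBUG
```

The executor-side property matters here: in a Kafka-to-HBase streaming job the authentication failure may be happening on the executors rather than the driver, and without the executor option you will never see the trace.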


(*) This is why I recommend deploying Kerberos apps at 09:00 on a Tuesday. All 
the big timeout events will happen on Tuesday morning, afternoon or 
evening; the 24h timeout on Wednesday morning; the 72h one on Friday; and the 
7-day one the following week. You don't want to be fielding support calls on a 
Saturday evening because the application (or indeed the entire HDFS filesystem) 
deployed on a Friday is failing one node at a time.
