Hi Nkechi,

Thanks for your quick response.

I am currently specifying the principal and the keytab in the spark-submit;
the keytab is in the same location on every node manager. The full command
is below, followed by a sketch of an executor-side keytab login we could
try as a fallback.

SPARK_CONF_DIR=conf-hbase spark-submit --master yarn-cluster \
  --executor-memory 6G \
  --num-executors 10 \
  --queue cards \
  --executor-cores 4 \
  --driver-java-options "-Dlog4j.configuration=file:log4j.properties" \
  --driver-class-path "$2" \
  --jars file:/opt/orange/lib/rocksdbjni-4.5.1.jar \
  --conf "spark.driver.extraClassPath=/var/cloudera/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar:/var/cloudera/parcels/CDH/jars/hbase-server-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/jars/hbase-common-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-client-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-protocol-1.0.0-cdh5.5.4.jar:/opt/orange/lib/rocksdbjni-4.5.1.jar:/var/cloudera/parcels/CLABS_PHOENIX-4.5.2-1.clabs_phoenix1.2.0.p0.774/lib/phoenix/lib/phoenix-core-1.2.0.jar:/var/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.5.4.jar" \
  --conf "spark.executor.extraClassPath=/var/cloudera/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar:/var/cloudera/parcels/CDH/jars/hbase-server-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/jars/hbase-common-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-client-1.0.0-cdh5.5.4.jar:/var/cloudera/parcels/CDH/lib/hbase/lib/hbase-protocol-1.0.0-cdh5.5.4.jar:/opt/orange/lib/rocksdbjni-4.5.1.jar:/var/cloudera/parcels/CLABS_PHOENIX-4.5.2-1.clabs_phoenix1.2.0.p0.774/lib/phoenix/lib/phoenix-core-1.2.0.jar:/var/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.5.4.jar" \
  --principal hb...@company.corp \
  --keytab /opt/company/conf/hbase.keytab \
  --files "owl.properties,conf-hbase/log4j.properties,conf-hbase/hbase-site.xml,conf-hbase/core-site.xml,$2" \
  --class $1 \
  cards-batch-$3-jar-with-dependencies.jar $2
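
Since the keytab is at the same path on every node manager, the fallback
mentioned above would be an explicit keytab login inside each partition
before touching HBase. This is only a minimal sketch, not our current
code: the principal and the RDD name are placeholders, and it relies on
the standard Hadoop UserGroupInformation API.

import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

rdd.foreachPartition { partition =>
  // log in from the keytab that is present on every node manager
  val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
    "hbase-user@COMPANY.CORP",        // placeholder principal
    "/opt/company/conf/hbase.keytab") // same path on every node manager
  ugi.doAs(new PrivilegedExceptionAction[Unit] {
    override def run(): Unit = {
      // open the HBase connection and write the partition here,
      // now running with a fresh Kerberos TGT
    }
  })
}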

On Fri, 18 Nov 2016 at 14:01 Nkechi Achara <nkach...@googlemail.com> wrote:

> Can you use the principal and keytab options in Spark submit? These should
> circumvent this issue.
>
> On 18 Nov 2016 1:01 p.m., "Abel Fernández" <mevsmys...@gmail.com> wrote:
>
> > Hello,
> >
> > We are having problems with delegation token issuance in a secure
> > cluster: "Delegation Token can be issued only with kerberos or web
> > authentication".
> >
> > We have a Spark process which generates the HFiles to be loaded into
> > HBase. To generate these HFiles we are using the method
> > HBaseRDDFunctions.hbaseBulkLoadThinRows (from a back-ported version of
> > the latest hbase/spark code).
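> >
> > For context, this is roughly how we invoke it. This is a minimal
> > sketch only, assuming the back-port matches the upstream hbase-spark
> > API; the table name, staging directory, and RDD shape are made up for
> > illustration.
> >
> > import org.apache.hadoop.hbase.TableName
> > import org.apache.hadoop.hbase.spark.{ByteArrayWrapper,
> >   FamiliesQualifiersValues, HBaseContext}
> > import org.apache.hadoop.hbase.spark.HBaseRDDFunctions._
> >
> > val hbaseContext = new HBaseContext(sc, config)
> >
> > // rowRdd: RDD[(Array[Byte], Seq[(Array[Byte], Array[Byte],
> > // Array[Byte])])], i.e. rowKey -> (family, qualifier, value) cells
> > rowRdd.hbaseBulkLoadThinRows(hbaseContext,
> >   TableName.valueOf("cards:transactions"), // illustrative table name
> >   row => {
> >     val cells = new FamiliesQualifiersValues
> >     row._2.foreach { case (family, qualifier, value) =>
> >       cells += (family, qualifier, value)
> >     }
> >     (new ByteArrayWrapper(row._1), cells)
> >   },
> >   "/tmp/hfiles-staging") // illustrative staging directory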
> >
> > I think the problem is in the piece of code below. This function is
> > executed in every partition of the RDD; when the executors try to run
> > it, they do not have a valid Kerberos credential and cannot execute
> > anything.
> >
> > private def hbaseForeachPartition[T](
> >     configBroadcast: Broadcast[SerializableWritable[Configuration]],
> >     it: Iterator[T],
> >     f: (Iterator[T], Connection) => Unit) = {
> >
> >   val config = getConf(configBroadcast)
> >
> >   applyCreds
> >   // specify that this is a proxy user
> >   val smartConn = HBaseConnectionCache.getConnection(config)
> >   f(it, smartConn.connection)
> >   smartConn.close()
> > }
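> >
> > To confirm which identity the executors actually have at that point, a
> > quick diagnostic could be dropped in just before the connection is
> > opened (a sketch using the standard Hadoop UGI API, not part of the
> > back-port):
> >
> > import org.apache.hadoop.security.UserGroupInformation
> >
> > // print the executor-side identity before connecting to HBase
> > val ugi = UserGroupInformation.getCurrentUser
> > println(s"user=${ugi.getUserName} " +
> >   s"auth=${ugi.getAuthenticationMethod} " +
> >   s"hasKerberosCredentials=${ugi.hasKerberosCredentials}")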
> >
> > I have attached the spark-submit command and the complete error log
> > trace. Has anyone faced this problem before?
> >
> > Thanks in advance.
> >
> > Regards,
> > Abel.
> > --
> > Un saludo - Best Regards.
> > Abel
> >
>
-- 
Un saludo - Best Regards.
Abel
