Spark on Kubernetes doesn't yet support mounting ConfigMaps. I'm not very 
familiar with how HBase is configured; does it use the Hadoop configuration 
system? If so, you can set Hadoop config options through Spark configuration 
properties with the prefix "spark.hadoop.*". Spark automatically strips that 
prefix before applying the options. If HBase doesn't use the Hadoop 
configuration system, you can use the Spark Operator 
(https://github.com/GoogleCloudPlatform/spark-on-k8s-operator), which 
supports mounting ConfigMaps through a mutating admission webhook. See the 
documentation at 
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#mounting-configmaps.
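
For instance, an HBase client setting could be passed on the spark-submit 
command line like this (just a sketch: the master URL, image name, and 
property values are placeholders, and it assumes the HBase client reads its 
settings through the Hadoop Configuration object):

```shell
# Each spark.hadoop.<key> property becomes Hadoop config key <key>
# in the driver and executors; Spark strips the spark.hadoop. prefix.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.hadoop.hbase.zookeeper.quorum=<zk-host-1>,<zk-host-2> \
  --conf spark.hadoop.hbase.zookeeper.property.clientPort=2181 \
  <your-application-jar>
```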

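If you go the operator route, the user guide linked above describes declaring 
a ConfigMap-backed volume and mounting it into the driver and executor pods. 
Roughly like this (the ConfigMap name, mount path, and surrounding fields are 
placeholders, and the apiVersion may differ depending on your operator 
version):

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-hbase-app
spec:
  # ... type, mode, image, mainApplicationFile, etc. ...
  volumes:
    - name: hbase-config
      configMap:
        name: hbase-site-configmap   # ConfigMap holding hbase-site.xml
  driver:
    volumeMounts:
      - name: hbase-config
        mountPath: /etc/hbase/conf
  executor:
    volumeMounts:
      - name: hbase-config
        mountPath: /etc/hbase/conf
```

The webhook watches for these volumeMounts and injects them into the pods 
the operator creates, which is how it works around the missing ConfigMap 
support in Spark itself.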
On Friday, September 14, 2018 at 8:27:36 AM UTC-7, R Rao wrote:
>
> hi guys,
>    I'm trying to figure out how to run a Spark job that talks to my HBase.
> I do not want to bake/hardcode the HBase config into the driver or 
> executor images. I want the configuration to be available via a 
> ConfigMap.
> Can anybody please help? I'm new to this.
> Thanks

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.