[jira] [Commented] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
[ https://issues.apache.org/jira/browse/SPARK-34293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17276913#comment-17276913 ] Manohar Chamaraju commented on SPARK-34293:
---
Update:
# In client mode, adding fs.defaultFS to core-site.xml fixed the issue for me.
# What did not work was using the hadoop-conf configmap in client mode.

> kubernetes executor pod unable to access secure hdfs
>
> Key: SPARK-34293
> URL: https://issues.apache.org/jira/browse/SPARK-34293
> Project: Spark
> Issue Type: Bug
> Components: Kubernetes
> Affects Versions: 3.0.1
> Reporter: Manohar Chamaraju
> Priority: Major
> Attachments: driver.log, executor.log,
> image-2021-01-30-00-13-18-234.png, image-2021-01-30-00-14-14-329.png,
> image-2021-01-30-00-14-45-335.png, image-2021-01-30-00-20-54-620.png,
> image-2021-01-30-00-33-02-109.png, image-2021-01-30-00-34-05-946.png
>
> Steps to reproduce
> # Configure a secure HDFS (Kerberos) cluster running as containers in Kubernetes.
> # Configure a KDC on CentOS and create a keytab for the user principal hdfs, in hdfsuser.keytab.
> # Generate the Spark image (v3.0.1) and spawn a container from it.
> # Inside the Spark container, run export HADOOP_CONF_DIR=/etc/hadoop/conf/ with the core-site.xml configuration shown below.
> !image-2021-01-30-00-13-18-234.png!
> # Create configmap krb5-conf
> !image-2021-01-30-00-14-14-329.png!
> # Run the command:
> /opt/spark/bin/spark-submit \
>   --deploy-mode client \
>   --executor-memory 1g \
>   --executor-cores 1 \
>   --class org.apache.spark.examples.HdfsTest \
>   --conf spark.kubernetes.namespace=arcsight-installer-lh7fm \
>   --master k8s://https://172.17.17.1:443 \
>   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
>   --conf spark.app.name=spark-hdfs \
>   --conf spark.executor.instances=1 \
>   --conf spark.kubernetes.node.selector.spark=yes \
>   --conf spark.kubernetes.node.selector.Worker=label \
>   --conf spark.kubernetes.container.image=manohar/spark:v3.0.1 \
>   --conf spark.kubernetes.kerberos.enabled=true \
>   --conf spark.kubernetes.kerberos.krb5.configMapName=krb5-conf \
>   --conf spark.kerberos.keytab=/data/hdfsuser.keytab \
>   --conf spark.kerberos.principal=h...@dom047600.lab \
>   local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar \
>   hdfs://hdfs-namenode:30820/staging-directory
> # On running this command, the driver is able to connect to HDFS with Kerberos, but the executor fails to connect to secure HDFS; the logs are below.
> !image-2021-01-30-00-34-05-946.png!
> # Some observations:
> ## In client mode, --conf spark.kubernetes.hadoop.configMapName=hadoop-conf has no effect; it only works after HADOOP_CONF_DIR is set. Below are the contents of the hadoop-conf configmap.
> !image-2021-01-30-00-20-54-620.png!
> ## Ran the command in cluster mode as well; in cluster mode the executor also could not connect to secure HDFS.
>
--
This message was sent by Atlassian Jira (v8.3.4#803005)
-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
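The fix reported in the comment above (adding fs.defaultFS to core-site.xml, then exporting HADOOP_CONF_DIR) can be sketched as shell. This is a hypothetical reconstruction: the namenode host/port are taken from the spark-submit command, the directory /tmp/hadoop-conf is illustrative, and the exact core-site.xml from the attachments is not visible here.

```shell
# Sketch of the client-mode workaround: write a minimal core-site.xml
# with fs.defaultFS set, then point Spark at it via HADOOP_CONF_DIR.
# Paths and the extra security property are illustrative assumptions.
mkdir -p /tmp/hadoop-conf
cat > /tmp/hadoop-conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- Per the comment, setting fs.defaultFS fixed client mode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hdfs-namenode:30820</value>
  </property>
  <!-- Secure HDFS additionally requires Kerberos authentication -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
</configuration>
EOF

# In client mode this environment variable is what made the config take
# effect, per the observation that the hadoop-conf configmap alone did not.
export HADOOP_CONF_DIR=/tmp/hadoop-conf
```

Note the design point the observations imply: in client mode the driver reads HADOOP_CONF_DIR from its own environment, so a configmap mounted only into executor pods does not help the driver-side configuration.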
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Attachment: image-2021-01-30-00-34-05-946.png
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Description updated.
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Attachment: image-2021-01-30-00-33-02-109.png
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Attachment: driver.log
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Attachment: executor.log
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Description updated.
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Attachment: image-2021-01-30-00-20-54-620.png
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju updated SPARK-34293: Description updated.
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
[ https://issues.apache.org/jira/browse/SPARK-34293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manohar Chamaraju updated SPARK-34293: -- Description: Steps to reproduce
# Configure a secure HDFS (Kerberos) cluster running as containers in Kubernetes.
# Configure a KDC on CentOS and create a keytab for the user principal hdfs, in hdfsuser.keytab.
# Generate a Spark image (v3.0.1) to spawn containers from.
# Inside the Spark container, run export HADOOP_CONF_DIR=/etc/hadoop/conf/ with the core-site.xml configuration as below !image-2021-01-30-00-13-18-234.png!
# Create configmap krb5-conf !image-2021-01-30-00-14-14-329.png!
# Run the command:
/opt/spark/bin/spark-submit \
--deploy-mode client \
--executor-memory 1g \
--executor-cores 1 \
--class org.apache.spark.examples.HdfsTest \
--conf spark.kubernetes.namespace=arcsight-installer-lh7fm \
--master k8s://https://172.17.17.1:443 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=spark-hdfs \
--conf spark.executor.instances=1 \
--conf spark.kubernetes.node.selector.spark=yes \
--conf spark.kubernetes.node.selector.Worker=label \
--conf spark.kubernetes.container.image=manohar/spark:v3.0.1 \
--conf spark.kubernetes.kerberos.enabled=true \
--conf spark.kubernetes.kerberos.krb5.configMapName=krb5-conf \
--conf spark.kerberos.keytab=/data/hdfsuser.keytab \
--conf spark.kerberos.principal=h...@dom047600.lab \
local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar \
hdfs://hdfs-namenode:30820/staging-directory
# On running this command the driver is able to connect to HDFS with Kerberos, but the executor fails to connect to secure HDFS; logs below !image-2021-01-30-00-14-45-335.png!
# Some observations
## In client mode, --conf spark.kubernetes.hadoop.configMapName=hadoop-conf has no effect; it only works after HADOOP_CONF_DIR is set.
## Ran the command in cluster mode as well; in cluster mode the executor also could not connect to secure HDFS.
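The core-site.xml in the steps above survives only as an attached screenshot. Per the comment on this issue, explicitly setting fs.defaultFS in core-site.xml fixed client mode, so a minimal sketch of such a file might look as follows (property values are assumptions matching the namenode address in the submit command, not taken from the screenshot):

```
<?xml version="1.0"?>
<!-- Hypothetical core-site.xml sketch; hostname, port, and the auth setting
     are assumptions, not values recovered from the attached image. -->
<configuration>
  <property>
    <!-- Executors resolve the default filesystem from this value. -->
    <name>fs.defaultFS</name>
    <value>hdfs://hdfs-namenode:30820</value>
  </property>
  <property>
    <!-- Require Kerberos authentication for Hadoop clients. -->
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
</configuration>
```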
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
[ https://issues.apache.org/jira/browse/SPARK-34293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manohar Chamaraju updated SPARK-34293: -- Attachment: image-2021-01-30-00-14-14-329.png
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
[ https://issues.apache.org/jira/browse/SPARK-34293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manohar Chamaraju updated SPARK-34293: -- Attachment: image-2021-01-30-00-14-45-335.png
[jira] [Updated] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
[ https://issues.apache.org/jira/browse/SPARK-34293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manohar Chamaraju updated SPARK-34293: -- Attachment: image-2021-01-30-00-13-18-234.png
[jira] [Created] (SPARK-34293) kubernetes executor pod unable to access secure hdfs
Manohar Chamaraju created SPARK-34293: - Summary: kubernetes executor pod unable to access secure hdfs Key: SPARK-34293 URL: https://issues.apache.org/jira/browse/SPARK-34293 Project: Spark Issue Type: Bug Components: Kubernetes Affects Versions: 3.0.1 Reporter: Manohar Chamaraju
Steps to reproduce
# Configure a secure HDFS cluster running as containers in Kubernetes.
# Configure a KDC on CentOS and create a keytab for the user principal hdfs, in hdfsuser.keytab.
# Generate a Spark image (v3.0.1) to spawn containers from.
# Inside the Spark container, run export HADOOP_CONF_DIR=/etc/hadoop/conf/ with the core-site.xml configuration as below !image-2021-01-30-00-03-41-506.png!
# Create configmap krb5-conf !image-2021-01-30-00-08-52-903.png!
# Run the command:
/opt/spark/bin/spark-submit \
--deploy-mode client \
--executor-memory 1g \
--executor-cores 1 \
--class org.apache.spark.examples.HdfsTest \
--conf spark.kubernetes.namespace=arcsight-installer-lh7fm \
--master k8s://https://172.17.17.1:443 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=spark-hdfs \
--conf spark.executor.instances=1 \
--conf spark.kubernetes.node.selector.spark=yes \
--conf spark.kubernetes.node.selector.Worker=label \
--conf spark.kubernetes.container.image=manohar/spark:v3.0.1 \
--conf spark.kubernetes.kerberos.enabled=true \
--conf spark.kubernetes.kerberos.krb5.configMapName=krb5-conf \
--conf spark.kerberos.keytab=/data/hdfsuser.keytab \
--conf spark.kerberos.principal=h...@dom047600.lab \
local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar \
hdfs://hdfs-namenode:30820/staging-directory
# On running this command the driver is able to connect to HDFS with Kerberos, but the executor fails to connect to secure HDFS; logs below !image-2021-01-30-00-11-22-401.png!
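The krb5-conf ConfigMap referenced by spark.kubernetes.kerberos.krb5.configMapName is likewise only attached as a screenshot. A hedged sketch of such a ConfigMap is below; the realm is inferred from the masked principal's domain, and the KDC hostname is a placeholder assumption, not a value from the report:

```
# Hypothetical krb5-conf ConfigMap sketch; the KDC hostname is an assumption.
apiVersion: v1
kind: ConfigMap
metadata:
  name: krb5-conf
  namespace: arcsight-installer-lh7fm
data:
  krb5.conf: |
    [libdefaults]
      default_realm = DOM047600.LAB
    [realms]
      DOM047600.LAB = {
        kdc = kdc.dom047600.lab
      }
```

Spark mounts this ConfigMap as /etc/krb5.conf in the driver and executor pods, so the realm and KDC here must match the cluster's actual Kerberos setup.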
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org