Github user sarutak commented on the issue:

    https://github.com/apache/spark/pull/13738
  
    @tgravescs I reproduced this under the following conditions:
    (1) Made `spark-defaults.conf` empty.
    (2) Set only `HADOOP_CONF_DIR=/path/to/hadoop-conf` in `spark-env.sh`.
    (3) NameNode HA is enabled, and its settings are in `hdfs-site.xml` on the client where I ran `spark-submit`.
    (4) Used a standalone cluster.
    (5) Submitted the job with the following command:
    
    ```shell
    spark-submit \
      --master spark://host:port \
      --deploy-mode client \
      --class ReproduceApp \
      reproduceapp.jar \
      <input_path_to_hdfs>
    ```
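
    For reference, a minimal `hdfs-site.xml` HA block of the kind assumed in (3) might look like the sketch below. The nameservice name `mycluster`, the NameNode IDs, and the hosts are placeholders for illustration, not the actual values I used:

    ```xml
    <configuration>
      <!-- Logical name of the HA nameservice -->
      <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
      </property>
      <!-- The NameNodes backing the nameservice -->
      <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>namenode1:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>namenode2:8020</value>
      </property>
      <!-- Client-side proxy so hdfs://mycluster resolves to the active NameNode -->
      <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
    </configuration>
    ```

    With such a config, passing an `hdfs://mycluster/...` path as `<input_path_to_hdfs>` exercises the HA name resolution on the client.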
    
    The following code reproduces the issue:
    ```scala
    import org.apache.spark._
    
    object ReproduceApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
        val sc = new SparkContext(conf)
        val fileName = args(0)
        // Reading from the HA-enabled HDFS path triggers the problem.
        sc.textFile(fileName).collect()
        sc.stop()
      }
    }
    ```

