Github user squito commented on the issue:

    https://github.com/apache/spark/pull/20203
  
    btw another way you could test out having a bad host would be something 
like this (untested):
    
    ```scala
    import java.net.InetAddress

    // pick one hostname from the cluster to treat as the "bad" host
    val hosts = sc.parallelize(1 to 10000, 100)
      .map { _ => InetAddress.getLocalHost.getHostName }
      .collect().toSet
    val badHost = hosts.head

    // any task that lands on that host fails; everything else computes normally
    sc.parallelize(1 to 10000, 10).map { x =>
      if (InetAddress.getLocalHost.getHostName == badHost) {
        throw new RuntimeException("Bad host")
      } else {
        (x % 3, x)
      }
    }.reduceByKey((a, b) => a + b).collect()
    ```
    
    that way you make sure the failures consistently happen on one host, rather than depending on the higher executor ids getting concentrated on one host.
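
    for that to actually exercise node blacklisting (rather than just failing the job), you'd presumably run with the blacklist confs turned on, something like this (also untested, and whether the default thresholds kick in fast enough for your test is something you'd have to check):

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}

    // enable blacklisting so the repeated task failures on badHost can get
    // the node blacklisted; the spark.blacklist.* thresholds keep their defaults
    val conf = new SparkConf()
      .setAppName("bad-host-blacklist-test")
      .set("spark.blacklist.enabled", "true")
    val sc = new SparkContext(conf)
    ```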

