Github user galv commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21494#discussion_r193953345
  
    --- Diff: core/src/main/scala/org/apache/spark/util/RpcUtils.scala ---
    @@ -44,7 +44,7 @@ private[spark] object RpcUtils {
     
       /** Returns the default Spark timeout to use for RPC ask operations. */
       def askRpcTimeout(conf: SparkConf): RpcTimeout = {
    -    RpcTimeout(conf, Seq("spark.rpc.askTimeout", "spark.network.timeout"), "120s")
    +    RpcTimeout(conf, Seq("spark.rpc.askTimeout", "spark.network.timeout"), "900s")
    --- End diff ---
    
    Why hard-code this change? Couldn't you have set this at runtime if you
    needed it increased? I'm concerned that it could break backwards
    compatibility for jobs that, for whatever reason, depend on the 120-second
    timeout.


---
