Github user varunsaxena commented on the pull request:

    https://github.com/apache/spark/pull/3562#issuecomment-65761296
  
    @rxin , I will just summarize the configuration defaults I have used. I put a value of 100 in the initial pull request with the intention of having a further discussion on appropriate defaults.
    
    There are two possible approaches. We can keep the same defaults as before, in which case spark.network.timeout would effectively have a different default value for each setting it replaces. Or we can decide on a single fixed default value. I think the latter should be done, but an appropriate value has to be decided. The defaults used earlier were:
    1. spark.core.connection.ack.wait.timeout - default value of 60s.
    2. spark.shuffle.io.connectionTimeout - default value of 120s.
    3. spark.akka.timeout - default value of 100s.
    4. spark.storage.blockManagerSlaveTimeoutMs - default was 3 times spark.executor.heartbeatInterval or 45s, whichever is higher (see the sketch after this list).
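
    To make case 4 concrete, here is a minimal Scala sketch of that rule (assuming SparkConf.getLong and a 10s heartbeat default; a hypothetical illustration, not the PR's actual code):

        import org.apache.spark.SparkConf

        val conf = new SparkConf()
        // Assumed default of 10s (10000 ms) for the executor heartbeat interval.
        val heartbeatMs = conf.getLong("spark.executor.heartbeatInterval", 10000L)
        // Case 4's rule: 3x the heartbeat interval, floored at 45 seconds.
        val slaveTimeoutMs = conf.getLong("spark.storage.blockManagerSlaveTimeoutMs",
          math.max(3 * heartbeatMs, 45000L))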
    
    I think based on these cases we can fix a default timeout value of 120 sec. for spark.network.timeout.
    The only issue I can see is in case 4, but 120 sec. should be a good enough upper cap, I think.
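
    For illustration, a user-facing sketch of the proposed knob, assuming the fallback semantics above (values in seconds, matching the existing settings; the last line just shows that an explicitly set per-component timeout would still take precedence):

        import org.apache.spark.SparkConf

        val conf = new SparkConf()
          .setAppName("timeout-defaults-demo")
          // Proposed unified default covering cases 1-4:
          .set("spark.network.timeout", "120")
          // A per-component setting, if present, would still win:
          .set("spark.core.connection.ack.wait.timeout", "60")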

