[ https://issues.apache.org/jira/browse/SPARK-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15090464#comment-15090464 ]

Shivaram Venkataraman commented on SPARK-12609:
-----------------------------------------------

There are other ways for the user to check whether the JVM is alive, such as 
the web UI or monitoring tools like Ganglia, especially for really long-running 
jobs. IMHO we don't need to add another mechanism for failure detection here. 
However, it might be interesting to see how other projects like py4j handle 
this issue. 

> Make R to JVM timeout configurable 
> -----------------------------------
>
>                 Key: SPARK-12609
>                 URL: https://issues.apache.org/jira/browse/SPARK-12609
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>            Reporter: Shivaram Venkataraman
>
> The timeout from R to the JVM is hardcoded at 6000 seconds in 
> https://github.com/apache/spark/blob/6c5bbd628aaedb6efb44c15f816fea8fb600decc/R/pkg/R/client.R#L22
> This means any Spark job that takes more than 100 minutes (6000 seconds) 
> will always fail. We should make this timeout configurable through SparkConf.
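
Not the actual patch, just a minimal sketch of what making the timeout 
configurable in R/pkg/R/client.R might look like. The environment variable 
name SPARKR_BACKEND_CONNECTION_TIMEOUT and the fallback logic are assumptions 
for illustration; the real property name would be decided in the patch:

    # Sketch only, not the committed fix. Reads the R-to-JVM connection
    # timeout from an environment variable instead of hardcoding 6000 seconds.
    # The variable name SPARKR_BACKEND_CONNECTION_TIMEOUT is a placeholder.
    getBackendTimeout <- function(default = 6000) {
      # SparkConf is not available before the backend connection exists,
      # so at connect time we fall back to an env var, then to the default.
      fromEnv <- Sys.getenv("SPARKR_BACKEND_CONNECTION_TIMEOUT", unset = "")
      if (nzchar(fromEnv)) as.integer(fromEnv) else default
    }

    connectBackend <- function(hostname, port, timeout = getBackendTimeout()) {
      # Open the socket to the JVM backend with the configurable timeout.
      socketConnection(host = hostname, port = port, server = FALSE,
                       blocking = TRUE, open = "wb", timeout = timeout)
    }

The driver launcher would then only need to export the variable before 
starting R for the setting to take effect.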


