[ https://issues.apache.org/jira/browse/SPARK-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512209#comment-14512209 ]
Bryan Cutler commented on SPARK-6980:
-------------------------------------

Thanks for the clarification [~imranr], that makes sense now. I was pretty sure it had something to do with {{AkkaUtils.askWithReply}}, but I wasn't sure whether involving Spark jobs would somehow change the flow of messages/timeouts. I'll try to put together another example and we can see if it's ok.

> Akka timeout exceptions indicate which conf controls them
> ---------------------------------------------------------
>
>                 Key: SPARK-6980
>                 URL: https://issues.apache.org/jira/browse/SPARK-6980
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Imran Rashid
>            Assignee: Harsh Gupta
>            Priority: Minor
>              Labels: starter
>         Attachments: Spark-6980-Test.scala
>
>
> If you hit one of the akka timeouts, you just get an exception like
> {code}
> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
> {code}
> The exception doesn't indicate how to change the timeout, though there is
> usually (always?) a corresponding setting in {{SparkConf}}. It would be
> nice if the exception included the relevant setting.
> I think this should be pretty easy to do -- we just need to create something
> like a {{NamedTimeout}}. It would have its own {{await}} method that catches
> the akka timeout and throws its own exception.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
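The {{NamedTimeout}} idea described in the issue could be sketched roughly as below. This is a minimal illustration, not actual Spark code: the class name {{ConfigAwareTimeout}}, the conf key used in the demo, and the exception wording are all assumptions for the example.

```scala
import java.util.concurrent.TimeoutException
import scala.concurrent.{Await, Awaitable, Promise}
import scala.concurrent.duration._

// Hypothetical "NamedTimeout"-style wrapper: pairs a duration with the
// SparkConf key that controls it, so timeout errors say how to tune them.
case class ConfigAwareTimeout(duration: FiniteDuration, confKey: String) {
  // Await the result, rethrowing any TimeoutException with a message
  // that names the controlling configuration property.
  def await[T](awaitable: Awaitable[T]): T =
    try {
      Await.result(awaitable, duration)
    } catch {
      case e: TimeoutException =>
        throw new TimeoutException(
          s"${e.getMessage}. This timeout is controlled by $confKey")
    }
}

object NamedTimeoutDemo extends App {
  // spark.rpc.askTimeout is used here purely as an example key.
  val timeout = ConfigAwareTimeout(1.second, "spark.rpc.askTimeout")
  val never = Promise[Int]().future // a future that never completes
  try {
    timeout.await(never)
  } catch {
    case e: TimeoutException => println(e.getMessage)
  }
}
```

Running the demo times out after one second and prints a message ending with the configuration key, which is the behavior the issue asks for.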