[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281402#comment-15281402 ]

jianbo li commented on SPARK-2677:
----------------------------------

It seems this issue can still occur on the Spark 1.5.2 release version.

> BasicBlockFetchIterator#next can wait forever
> ---------------------------------------------
>
>                 Key: SPARK-2677
>                 URL: https://issues.apache.org/jira/browse/SPARK-2677
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.9.2, 1.0.0, 1.0.1
>            Reporter: Kousuke Saruta
>            Assignee: Kousuke Saruta
>            Priority: Blocker
>             Fix For: 1.1.0
>
> In BasicBlockFetchIterator#next, it waits for the fetch result on results.take().
> {code}
> override def next(): (BlockId, Option[Iterator[Any]]) = {
>   resultsGotten += 1
>   val startFetchWait = System.currentTimeMillis()
>   val result = results.take()
>   val stopFetchWait = System.currentTimeMillis()
>   _fetchWaitTime += (stopFetchWait - startFetchWait)
>   if (!result.failed) bytesInFlight -= result.size
>   while (!fetchRequests.isEmpty &&
>       (bytesInFlight == 0 || bytesInFlight + fetchRequests.front.size <= maxBytesInFlight)) {
>     sendRequest(fetchRequests.dequeue())
>   }
>   (result.blockId, if (result.failed) None else Some(result.deserialize()))
> }
> {code}
> But results is implemented as a LinkedBlockingQueue, so if the remote executor hangs, the fetching executor waits forever.
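To make the failure mode concrete, here is a minimal, self-contained sketch (illustrative only, not Spark code): take() on an empty LinkedBlockingQueue blocks indefinitely, so if the producer side (the remote executor) dies before offering a result, the consuming thread never returns.

{code}
// Illustrative sketch of the hang described above, not Spark code.
// take() blocks until an element is available; with no producer,
// the main thread below waits forever.
import java.util.concurrent.LinkedBlockingQueue

object TakeBlocksForever {
  def main(args: Array[String]): Unit = {
    val results = new LinkedBlockingQueue[String]()
    // Nothing is ever offered to the queue, standing in for a remote
    // executor that hung before sending its fetch result.
    println("waiting on results.take() ...")
    val r = results.take() // blocks here indefinitely
    println(s"never reached: $r")
  }
}
{code}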
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255009#comment-14255009 ]

Derrick Burns commented on SPARK-2677:
--------------------------------------

Appears to still happen in 1.1.1:

{code}
2014-12-20 22:54:00,574 INFO [connection-manager-thread] network.ConnectionManager (Logging.scala:logInfo(59)) - Key not valid ? sun.nio.ch.SelectionKeyImpl@6045fa68
2014-12-20 22:54:00,574 INFO [handle-read-write-executor-0] network.ConnectionManager (Logging.scala:logInfo(59)) - Removing SendingConnection to ConnectionManagerId(ip-10-89-134-186.us-west-2.compute.internal,49171)
2014-12-20 22:54:00,574 INFO [handle-read-write-executor-2] network.ConnectionManager (Logging.scala:logInfo(59)) - Removing ReceivingConnection to ConnectionManagerId(ip-10-89-134-186.us-west-2.compute.internal,49171)
2014-12-20 22:54:00,575 INFO [sparkDriver-akka.actor.default-dispatcher-14] cluster.YarnClientSchedulerBackend (Logging.scala:logInfo(59)) - Executor 7 disconnected, so removing it
2014-12-20 22:54:00,576 ERROR [handle-read-write-executor-2] network.ConnectionManager (Logging.scala:logError(75)) - Corresponding SendingConnection to ConnectionManagerId(ip-10-89-134-186.us-west-2.compute.internal,49171) not found
2014-12-20 22:54:00,576 ERROR [sparkDriver-akka.actor.default-dispatcher-14] cluster.YarnClientClusterScheduler (Logging.scala:logError(75)) - Lost executor 7 on ip-10-89-134-186.us-west-2.compute.internal: remote Akka client disassociated
2014-12-20 22:54:00,576 INFO [connection-manager-thread] network.ConnectionManager (Logging.scala:logInfo(80)) - key already cancelled ? sun.nio.ch.SelectionKeyImpl@6045fa68
java.nio.channels.CancelledKeyException
    at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:392)
    at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:145)
{code}
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092355#comment-14092355 ]

Kousuke Saruta commented on SPARK-2677:
---------------------------------------

SPARK-2538 was resolved, but this issue still remains. I tried to resolve this issue in https://github.com/apache/spark/pull/1632
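For context, a hedged sketch of opting into an ack timeout from application code, so that a fetch from a hung remote eventually fails instead of blocking forever. The spark.core.connection.ack.wait.timeout property and its seconds-based unit are taken from Spark 1.1-era configuration docs and should be verified against your Spark version; this is not a claim about what the pull request above implements.

{code}
// Hedged sketch: cap how long the connection manager waits for an ack.
// The property name and its seconds unit are assumptions taken from the
// Spark 1.1-era configuration documentation.
import org.apache.spark.{SparkConf, SparkContext}

object AckTimeoutExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("ack-timeout-example")
      .set("spark.core.connection.ack.wait.timeout", "60") // seconds
    val sc = new SparkContext(conf)
    // ... shuffle-heavy work; a hung remote now surfaces as a fetch failure ...
    sc.stop()
  }
}
{code}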
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14077439#comment-14077439 ]

Apache Spark commented on SPARK-2677:
-------------------------------------

User 'sarutak' has created a pull request for this issue:
https://github.com/apache/spark/pull/1632
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076033#comment-14076033 ]

Guoqiang Li commented on SPARK-2677:
------------------------------------

[~pwendell], [~sarutak] How about the following solution?
https://github.com/witgo/spark/compare/SPARK-2677
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076278#comment-14076278 ]

Apache Spark commented on SPARK-2677:
-------------------------------------

User 'witgo' has created a pull request for this issue:
https://github.com/apache/spark/pull/1619
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075672#comment-14075672 ]

Patrick Wendell commented on SPARK-2677:
----------------------------------------

Just as an FYI - this has also been observed in several earlier versions of Spark. I think one issue is that we don't have timeouts in the connection manager code. If a JVM goes into GC thrashing and becomes unresponsive (but still alive), then you can get stuck here.
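To illustrate the kind of timeout this comment calls for, a minimal sketch (not Spark's actual code) of bounding the wait on the results queue so a hung remote surfaces as an error rather than an indefinite block. The FetchResult shape, the FetchTimeoutException name, and the 60-second bound are illustrative assumptions.

{code}
// Hedged sketch, not Spark's implementation: poll() with a timeout returns
// null if nothing arrives, unlike take(), which would block forever when
// the remote executor hangs without ever enqueuing a result.
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

case class FetchResult(blockId: String, failed: Boolean)

class FetchTimeoutException(msg: String) extends RuntimeException(msg)

class BoundedFetchWait(results: LinkedBlockingQueue[FetchResult]) {
  def nextResult(): FetchResult = {
    val result = results.poll(60, TimeUnit.SECONDS)
    if (result == null) {
      // Fail fast so the task can be retried elsewhere instead of hanging.
      throw new FetchTimeoutException("no fetch result within 60 seconds")
    }
    result
  }
}
{code}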
[jira] [Commented] (SPARK-2677) BasicBlockFetchIterator#next can wait forever
[ https://issues.apache.org/jira/browse/SPARK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14075673#comment-14075673 ]

Guoqiang Li commented on SPARK-2677:
------------------------------------

If {{yarn.scheduler.fair.preemption}} is set to true in YARN, this issue appears frequently.