[jira] [Assigned] (SPARK-21902) BlockManager.doPut will hide the actual exception when an exception is thrown in the finally block

2017-09-14 Thread Saisai Shao (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-21902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Saisai Shao reassigned SPARK-21902:
---

Assignee: zhoukang

> BlockManager.doPut will hide the actual exception when an exception is thrown in the finally block
> --
>
> Key: SPARK-21902
> URL: https://issues.apache.org/jira/browse/SPARK-21902
> Project: Spark
> Issue Type: Wish
> Components: Block Manager
> Affects Versions: 2.1.0
> Reporter: zhoukang
> Assignee: zhoukang
> Fix For: 2.3.0
>
>
> As the log below shows, the actual exception is hidden when removeBlockInternal
> throws an exception from the finally block of doPut.
> {code:java}
> 2017-08-31,10:26:57,733 WARN org.apache.spark.storage.BlockManager: Putting block broadcast_110 failed due to an exception
> 2017-08-31,10:26:57,734 WARN org.apache.spark.broadcast.BroadcastManager: Failed to create a new broadcast in 1 attempts
> java.io.IOException: Failed to create local dir in /tmp/blockmgr-5bb5ac1e-c494-434a-ab89-bd1808c6b9ed/2e.
> at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
> at org.apache.spark.storage.DiskStore.remove(DiskStore.scala:115)
> at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1339)
> at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:910)
> at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:948)
> at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:726)
> at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1233)
> at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:122)
> at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
> at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> at org.apache.spark.broadcast.BroadcastManager$$anonfun$newBroadcast$1.apply$mcVI$sp(BroadcastManager.scala:60)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
> at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:58)
> at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1415)
> at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1002)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:924)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:771)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:770)
> at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:770)
> at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1235)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1662)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1609)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> {code}
> I would like to log the original exception first for troubleshooting, or maybe we
> should not throw an exception at all when removing blocks.
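> For illustration only, here is a minimal, self-contained sketch of the masking behaviour and of the logging workaround suggested above. It is not the actual BlockManager code: doPutLikeOperation, doWorkThatFails and removeBlockLikeCleanup are made-up names standing in for doPut, the body of the put, and removeBlockInternal.
> {code:scala}
> // Minimal sketch (not Spark's real code). All names here are hypothetical.
> object FinallyMaskingSketch {
>
>   // Stands in for the body of the put, which fails with the real root cause.
>   def doWorkThatFails(): Unit =
>     throw new RuntimeException("real root cause of the put failure")
>
>   // Stands in for removeBlockInternal: cleanup that can itself fail.
>   def removeBlockLikeCleanup(): Unit =
>     throw new java.io.IOException("Failed to create local dir")
>
>   // Current shape: an exception thrown from the finally block replaces the
>   // original one, so callers only ever see the cleanup failure.
>   def doPutLikeOperation(): Unit = {
>     var succeeded = false
>     try {
>       doWorkThatFails()
>       succeeded = true
>     } finally {
>       if (!succeeded) {
>         removeBlockLikeCleanup() // throws -> masks the RuntimeException above
>       }
>     }
>   }
>
>   // Proposed shape: catch and log the cleanup failure inside the finally
>   // block so the original exception keeps propagating.
>   def doPutLikeOperationLoggingFirst(): Unit = {
>     var succeeded = false
>     try {
>       doWorkThatFails()
>       succeeded = true
>     } finally {
>       if (!succeeded) {
>         try {
>           removeBlockLikeCleanup()
>         } catch {
>           case t: Throwable =>
>             println(s"WARN cleanup after failed put also failed: $t")
>         }
>       }
>     }
>   }
>
>   def main(args: Array[String]): Unit = {
>     try doPutLikeOperation()
>     catch { case t: Throwable => println(s"caller sees: $t") } // IOException only
>
>     try doPutLikeOperationLoggingFirst()
>     catch { case t: Throwable => println(s"caller sees: $t") } // RuntimeException, the real cause
>   }
> }
> {code}
> Running the sketch, the first call surfaces only the IOException from the cleanup, while the second surfaces the original RuntimeException, which is the behaviour we would like doPut to have.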



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-21902) BlockManager.doPut will hide the actual exception when an exception is thrown in the finally block

2017-09-05 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-21902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-21902:


Assignee: Apache Spark






[jira] [Assigned] (SPARK-21902) BlockManager.doPut will hide the actual exception when an exception is thrown in the finally block

2017-09-05 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-21902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-21902:


Assignee: (was: Apache Spark)



