[ https://issues.apache.org/jira/browse/SPARK-706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14309733#comment-14309733 ]

Nicholas Chammas commented on SPARK-706:
----------------------------------------

[~rxin] Is this issue still valid?

> Failures in block manager put leads to task hanging
> ---------------------------------------------------
>
>                 Key: SPARK-706
>                 URL: https://issues.apache.org/jira/browse/SPARK-706
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 0.6.0, 0.6.1, 0.7.0, 0.6.2
>            Reporter: Reynold Xin
>
> Reported in this thread:
> https://groups.google.com/forum/?fromgroups=#!topic/shark-users/Q_SiIDzVtZw
>
> The following exception in block manager leaves the block marked as pending.
> {code}
> 13/02/26 06:14:56 ERROR executor.Executor: Exception in task ID 39
> com.esotericsoftware.kryo.SerializationException: Buffer limit exceeded writing object of type: shark.ColumnarWritable
>         at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:492)
>         at spark.KryoSerializationStream.writeObject(KryoSerializer.scala:78)
>         at spark.serializer.SerializationStream$class.writeAll(Serializer.scala:58)
>         at spark.KryoSerializationStream.writeAll(KryoSerializer.scala:73)
>         at spark.storage.DiskStore.putValues(DiskStore.scala:63)
>         at spark.storage.BlockManager.dropFromMemory(BlockManager.scala:779)
>         at spark.storage.MemoryStore.tryToPut(MemoryStore.scala:162)
>         at spark.storage.MemoryStore.putValues(MemoryStore.scala:57)
>         at spark.storage.BlockManager.put(BlockManager.scala:582)
>         at spark.CacheTracker.getOrCompute(CacheTracker.scala:215)
>         at spark.RDD.iterator(RDD.scala:159)
>         at spark.scheduler.ResultTask.run(ResultTask.scala:18)
>         at spark.executor.Executor$TaskRunner.run(Executor.scala:76)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
> {code}
> When the block is read, the task is stuck in BlockInfo.waitForReady().
> We should propagate the error back to the master instead of hanging the slave node.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
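The hang pattern described above can be sketched in miniature. The names below (`BlockInfo`, `waitForReady`, `markReady`, `markFailed`) are hypothetical simplifications, not Spark's actual API: the point is only that a reader blocked on a ready flag hangs forever unless a failed put also records the failure and wakes waiters, which is the fix the issue proposes.

```java
// Minimal sketch (hypothetical names, not Spark's real BlockInfo) of the
// SPARK-706 hang and its fix. A writer that fails mid-put must mark the
// block failed and notify, or readers wait on the ready flag forever.
class BlockInfo {
    private boolean ready = false;
    private boolean failed = false;

    // Blocks until the block is ready or the put has failed.
    // Returns true iff the block became ready.
    synchronized boolean waitForReady() throws InterruptedException {
        while (!ready && !failed) {
            wait();  // loop guards against spurious wakeups
        }
        return ready;
    }

    synchronized void markReady() {
        ready = true;
        notifyAll();
    }

    // The proposed fix: on a put failure, record it and wake all waiters
    // instead of leaving the block pending.
    synchronized void markFailed() {
        failed = true;
        notifyAll();
    }
}

public class Spark706Sketch {
    public static void main(String[] args) throws Exception {
        BlockInfo info = new BlockInfo();

        Thread reader = new Thread(() -> {
            try {
                boolean ok = info.waitForReady();
                System.out.println("reader unblocked, ready=" + ok);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        // Simulate a serialization error during the put. Without this
        // call, the reader thread above would hang indefinitely.
        info.markFailed();
        reader.join();
    }
}
```

With the `markFailed()` call removed, `reader.join()` never returns, which mirrors the reported task hang; with it, the reader unblocks and the error can be reported upstream.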