ivoson commented on code in PR #39459:
URL: https://github.com/apache/spark/pull/39459#discussion_r1096526943


##########
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##########
@@ -1325,14 +1325,47 @@ private[spark] class BlockManager(
     blockInfoManager.releaseAllLocksForTask(taskAttemptId)
   }
 
+  /**
+   * Retrieve the given rdd block if it exists and is visible, otherwise call the provided
+   * `makeIterator` method to compute the block, persist it, and return its values.
+   *
+   * @return either a BlockResult if the block was successfully cached, or an iterator if the block
+   *         could not be cached.
+   */
+  def getOrElseUpdateRDDBlock[T](
+      taskId: Long,
+      blockId: RDDBlockId,
+      level: StorageLevel,
+      classTag: ClassTag[T],
+      makeIterator: () => Iterator[T]): Either[BlockResult, Iterator[T]] = {
+    val isCacheVisible = isRDDBlockVisible(blockId)
+    var computed: Boolean = false
+    val getIterator = () => {
+      computed = true
+      makeIterator()
+    }
+
+    val res = getOrElseUpdate(blockId, level, classTag, getIterator)
+    if (res.isLeft && !isCacheVisible) {
+      if (!computed) {
+        // Loaded from cache, re-compute to update accumulators.
+        makeIterator()
+      }

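For context on the snippet above: the `computed` flag lets the caller tell a cache hit apart from a fresh computation, and on a cache hit the computation is re-run purely for its side effects (in Spark, updating task accumulators). A minimal, standalone sketch of that pattern, with hypothetical names rather than the real Spark API:

```scala
object ComputeFlagSketch {
  // Simplified stand-in for BlockManager.getOrElseUpdate: return the cached
  // value when present, otherwise run the producer (Option.getOrElse takes its
  // default by name, so produce() only runs on a cache miss).
  def getOrElseUpdate[T](cached: Option[T], produce: () => T): T =
    cached.getOrElse(produce())

  def main(args: Array[String]): Unit = {
    var computed = false
    val produce = () => { computed = true; Seq(1, 2, 3) }

    // Simulate a cache hit: produce() is never invoked, so computed stays false.
    val res = getOrElseUpdate(Some(Seq(9)), produce)
    if (!computed) {
      // Mirrors the diff above: the value came from the cache, so re-run the
      // computation purely for its side effects.
      produce()
    }
    println(s"res=$res, recomputed=$computed")
  }
}
```
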
Review Comment:
   > Right, in this case, block locations holding different data can be attached to the same RDD block id, so a reader could get different data for the same RDD block, which makes the RDD block data indeterminate as well.
   
   @Ngone51 do you think we should solve this issue in this PR? It looks like a more general problem with indeterminate computation.
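   To illustrate the concern: if the block's computation is not deterministic, each (re)computation can yield different data that ends up registered under the same block id. A minimal sketch of that scenario (hypothetical names, not code from this PR):
   
   ```scala
   import scala.util.Random
   
   object IndeterminateBlockSketch {
     // Stand-in for a task's makeIterator: an indeterminate computation that
     // yields different data on each evaluation.
     def makeIterator(): Iterator[Int] = Iterator.fill(3)(Random.nextInt(100))
   
     def main(args: Array[String]): Unit = {
       val firstRun  = makeIterator().toSeq  // cached on executor A as, say, rdd_0_0
       val secondRun = makeIterator().toSeq  // block lost, recomputed and cached on executor B
       // Both results would sit behind the same RDD block id, so what a reader
       // sees depends on which replica location it happens to fetch from.
       println(s"first run:  $firstRun")
       println(s"second run: $secondRun")
     }
   }
   ```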


