ivoson commented on code in PR #39459:
URL: https://github.com/apache/spark/pull/39459#discussion_r1100333545


##########
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala:
##########
@@ -77,6 +77,11 @@ class BlockManagerMasterEndpoint(
   // Mapping from block id to the set of block managers that have the block.
   private val blockLocations = new JHashMap[BlockId, mutable.HashSet[BlockManagerId]]
 
+  // Mapping from task id to the set of rdd blocks which are generated from the task.
+  private val tidToRddBlockIds = new mutable.HashMap[Long, mutable.HashSet[RDDBlockId]]
+  // Record the visible RDD blocks which have been generated at least from one successful task.
+  private val visibleRDDBlocks = new mutable.HashSet[RDDBlockId]

Review Comment:
   Let me explain this further. If we track visible blocks, we always know explicitly which blocks are visible.
   
   If we track invisible blocks instead, a block is considered visible only when at least one replica of it exists and it is not in the invisible list. So if all existing replicas are lost, we lose that information. The next time the cache is re-computed, we have to go through the whole process again: first put the block into the invisible list, then promote it to visible by removing it from the list once a task finishes successfully. Only after repeating that process does the cache become visible again.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

