GitHub user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5286#discussion_r27455173
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -439,14 +439,10 @@ private[spark] class BlockManager(
         // As an optimization for map output fetches, if the block is for a shuffle, return it
         // without acquiring a lock; the disk store never deletes (recent) items so this should work
         if (blockId.isShuffle) {
    -      val shuffleBlockManager = shuffleManager.shuffleBlockManager
    -      shuffleBlockManager.getBytes(blockId.asInstanceOf[ShuffleBlockId]) match {
    -        case Some(bytes) =>
    -          Some(bytes)
    -        case None =>
    -          throw new BlockException(
    -            blockId, s"Block $blockId not found on disk, though it should be")
    -      }
    +      val shuffleBlockManager = shuffleManager.shuffleBlockResolver
    +      // TODO: This should gracefully handle case where local block is not available. Currently
    +      // downstream code will throw an exception.
    +      Some(shuffleBlockManager.getBlockData(blockId.asInstanceOf[ShuffleBlockId]).nioByteBuffer())
    --- End diff ---
    
    can you change Some -> Option, just in case nioByteBuffer returns null?
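
    For reference, a minimal standalone sketch of the Some vs Option distinction the
    comment is getting at. The object name and the local variable are illustrative
    only, not part of the PR; the last line is a sketch of how the changed line in
    the diff above might then read.

    // Some(x) wraps x even when x is null, while Option(x) converts null to None.
    object SomeVsOption {
      def main(args: Array[String]): Unit = {
        val maybeNull: java.nio.ByteBuffer = null
        println(Some(maybeNull))    // Some(null) -- a "defined" Option that leaks null downstream
        println(Option(maybeNull))  // None -- the null is absorbed into an empty Option
      }
    }

    // With the suggested change, the line in the diff would read:
    // Option(shuffleBlockManager.getBlockData(blockId.asInstanceOf[ShuffleBlockId]).nioByteBuffer())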


