Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21451#discussion_r209042028
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -404,6 +405,47 @@ private[spark] class BlockManager(
         putBytes(blockId, new ChunkedByteBuffer(data.nioByteBuffer()), level)(classTag)
       }
     
    +  override def putBlockDataAsStream(
    +      blockId: BlockId,
    +      level: StorageLevel,
    +      classTag: ClassTag[_]): StreamCallbackWithID = {
    +    // TODO if we're going to only put the data in the disk store, we should just write it directly
    +    // to the final location, but that would require a deeper refactor of this code. So instead
    +    // we just write to a temp file, and call putBytes on the data in that file.
    +    val tmpFile = diskBlockManager.createTempLocalBlock()._2
    +    new StreamCallbackWithID {
    +      val channel: WritableByteChannel = Channels.newChannel(new FileOutputStream(tmpFile))
    --- End diff --
    
    Yeah, sure looks like it.
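For context on the pattern in the diff: the callback wraps a `FileOutputStream` in a `WritableByteChannel` so that incoming `ByteBuffer` chunks can be appended to a temp file until the stream completes. The sketch below illustrates that idea outside of Spark; `SimpleStreamCallback` and `tempFileCallback` are hypothetical stand-ins for Spark's internal `StreamCallbackWithID`, not the actual API.

```scala
import java.io.{File, FileOutputStream}
import java.nio.ByteBuffer
import java.nio.channels.{Channels, WritableByteChannel}
import java.nio.file.Files

// Hypothetical stand-in for Spark's StreamCallbackWithID: chunks arrive
// as ByteBuffers and are appended to a temp file; completion yields the file.
trait SimpleStreamCallback {
  def onData(chunk: ByteBuffer): Unit
  def onComplete(): File
}

def tempFileCallback(): SimpleStreamCallback = new SimpleStreamCallback {
  // Stand-in for diskBlockManager.createTempLocalBlock()._2 in the diff.
  private val tmpFile: File = File.createTempFile("block-", ".tmp")
  private val channel: WritableByteChannel =
    Channels.newChannel(new FileOutputStream(tmpFile))

  override def onData(chunk: ByteBuffer): Unit = {
    // A WritableByteChannel may perform partial writes; loop until drained.
    while (chunk.hasRemaining) channel.write(chunk)
  }

  override def onComplete(): File = {
    channel.close()
    tmpFile
  }
}

// Usage: feed two chunks, then read the assembled file back.
val cb = tempFileCallback()
cb.onData(ByteBuffer.wrap("hello ".getBytes("UTF-8")))
cb.onData(ByteBuffer.wrap("world".getBytes("UTF-8")))
val f = cb.onComplete()
val contents = new String(Files.readAllBytes(f.toPath), "UTF-8")
println(contents) // prints "hello world"
f.delete()
```

Writing through a channel rather than the stream directly makes the `ByteBuffer`-based `onData` handoff natural, which is presumably why the diff takes that route before handing the temp file to `putBytes`.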


---
