GitHub user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20191#discussion_r160296722
  
    --- Diff: common/unsafe/src/main/java/org/apache/spark/unsafe/memory/UnsafeMemoryAllocator.java ---
    @@ -38,9 +38,20 @@ public MemoryBlock allocate(long size) throws OutOfMemoryError {
       public void free(MemoryBlock memory) {
         assert (memory.obj == null) :
           "baseObject not null; are you trying to use the off-heap allocator to free on-heap memory?";
    +    assert (memory.pageNumber != MemoryBlock.FREED_IN_ALLOCATOR_PAGE_NUMBER) :
    +      "page has already been freed";
    +    assert ((memory.pageNumber == MemoryBlock.NO_PAGE_NUMBER)
    +            || (memory.pageNumber == MemoryBlock.FREED_IN_TMM_PAGE_NUMBER)) :
    +      "TMM-allocated pages must be freed via TMM.freePage(), not directly in allocator free()";
    +
         if (MemoryAllocator.MEMORY_DEBUG_FILL_ENABLED) {
           memory.fill(MemoryAllocator.MEMORY_DEBUG_FILL_FREED_VALUE);
         }
         Platform.freeMemory(memory.offset);
    +    // As an additional layer of defense against use-after-free bugs, we mutate the
    +    // MemoryBlock to reset its pointer.
    +    memory.offset = 0;
    --- End diff --
    
    I think that it depends on whether the direct memory address stored in `memory.offset` still corresponds to memory that is allocated to the JVM.
    
    If we've freed the underlying Unsafe / direct memory, then I would expect a SIGSEGV. But if the address has been re-used by a subsequent allocation (or if we've done some buffer pooling and recycled the underlying direct memory without freeing it), then the stale address could point to valid JVM memory that is currently in use by another Spark task, in which case reads or writes through it would silently corrupt data rather than crash.
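    
    To make that second failure mode concrete, here is a minimal sketch (not part of this PR; it assumes `sun.misc.Unsafe` is reflectively accessible, as on Java 8) showing how a dangling address can alias a later allocation instead of faulting:
    
    ```java
    import java.lang.reflect.Field;
    import sun.misc.Unsafe;
    
    public class StaleAddressSketch {
      public static void main(String[] args) throws Exception {
        // Grab the Unsafe instance reflectively (the usual trick outside the JDK).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);
    
        long stale = unsafe.allocateMemory(8);
        unsafe.putLong(stale, 42L);
        unsafe.freeMemory(stale);
        // `stale` now dangles: dereferencing it *may* SIGSEGV, but the native
        // allocator is free to hand the same address back out.
    
        long fresh = unsafe.allocateMemory(8);
        unsafe.putLong(fresh, 7L);
    
        if (stale == fresh) {
          // The dangling pointer silently aliases live memory: no crash, just a
          // read of data now owned by someone else (prints 7, not 42).
          System.out.println(unsafe.getLong(stale));
        }
        unsafe.freeMemory(fresh);
      }
    }
    ```
    
    Resetting `memory.offset` to 0 in `free()` turns the aliasing case into an access at address zero, which fails fast instead of quietly reading or clobbering another task's page.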

