GitHub user cxzl25 commented on the issue:

    https://github.com/apache/spark/pull/21311
  
    @cloud-fan 
    In LongToUnsafeRowMap#append(key: Long, row: UnsafeRow), when 
row.getSizeInBytes exceeds the new page size (oldPage.length * 8L * 2), the 
code still allocates a page of that new size.
    Even though the new page is too small to hold the entire row, 
Platform.copyMemory is still called and raises no error, so the trailing 
bytes of the row are silently discarded.
    When the data is later read back from this buffer by offset and length, 
those trailing bytes are unpredictable.
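    A minimal Python sketch of the failure mode described above (this is 
not Spark's actual Scala code; the page sizes, function names, and the 
bounded-copy stand-in for Platform.copyMemory are all illustrative):

```python
def append_buggy(page: bytearray, cursor: int, row: bytes):
    """Doubles the page once, then copies only what fits -- modeling a
    Platform.copyMemory-style write that truncates without error."""
    needed = cursor + len(row)
    if needed > len(page):
        # grow once, like the new page size of oldPage.length * 2
        new_page = bytearray(len(page) * 2)
        new_page[:cursor] = page[:cursor]
        page = new_page
    # copy only the bytes that fit; the row's tail is silently dropped
    writable = min(len(row), len(page) - cursor)
    page[cursor:cursor + writable] = row[:writable]
    return page, needed  # cursor still advances by the full row length


def append_fixed(page: bytearray, cursor: int, row: bytes):
    """Keeps doubling the page until the whole row fits before copying."""
    needed = cursor + len(row)
    new_len = len(page)
    while new_len < needed:
        new_len *= 2
    if new_len != len(page):
        new_page = bytearray(new_len)
        new_page[:cursor] = page[:cursor]
        page = new_page
    page[cursor:needed] = row
    return page, needed


row = bytes(i % 256 for i in range(200))  # row larger than one doubling

page, cur = append_buggy(bytearray(64), 0, row)
# reading back by offset and length does not return the original row
print(page[cur - len(row):cur] == row)   # False: tail bytes were lost

page, cur = append_fixed(bytearray(64), 0, row)
print(page[cur - len(row):cur] == row)   # True: row survives intact
```

    The key point the sketch shows: because the recorded offset/length 
still describe the full row, a later read silently returns truncated or 
garbage bytes rather than failing fast at append time.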

