Github user squito commented on the issue: https://github.com/apache/spark/pull/23058

> causes any performance degradation compared to memory mapping

@ankuriitg good question, though if you look at what the old code was doing, it wasn't memory-mapping the file; it was reading it into memory from a regular input stream. Take a look at [`ChunkedByteBuffer.fromFile`](https://github.com/apache/spark/blob/fa0d4bf69929c5acd676d602e758a969713d19d8/core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala#L192-L212): it is basically doing the same thing this is doing now, but without the extra memory overhead.
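To illustrate the pattern being discussed, here is a minimal, hypothetical Java sketch of reading a file into fixed-size in-memory chunks from a plain input stream (roughly the approach of `ChunkedByteBuffer.fromFile`, simplified; the class and method names here are made up for illustration and are not Spark APIs):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class ChunkedReadSketch {
    // Hypothetical helper: copy a file into a list of heap ByteBuffers,
    // at most chunkSize bytes each, using an ordinary input stream
    // rather than memory-mapping the file.
    static List<ByteBuffer> readChunked(File file, int chunkSize) throws IOException {
        List<ByteBuffer> chunks = new ArrayList<>();
        try (InputStream in = new FileInputStream(file)) {
            byte[] buf = new byte[chunkSize];
            int n;
            while ((n = in.read(buf)) != -1) {
                // Each read becomes one chunk; a real implementation would
                // fill each chunk completely before starting the next one.
                ByteBuffer chunk = ByteBuffer.allocate(n);
                chunk.put(buf, 0, n);
                chunk.flip();
                chunks.add(chunk);
            }
        }
        return chunks;
    }
}
```

The trade-off the comment is pointing at: a memory-mapped file lets the OS page data in lazily, while the chunked-stream approach above copies the whole file onto the heap up front, so doing that copy twice (once in a wrapper and once in the buffer) would add avoidable overhead.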