[ https://issues.apache.org/jira/browse/SPARK-37593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456269#comment-17456269 ]
Apache Spark commented on SPARK-37593:
--------------------------------------

User 'WangGuangxin' has created a pull request for this issue:
https://github.com/apache/spark/pull/34846

> Optimize HeapMemoryAllocator to avoid memory waste in humongous allocation when using G1GC
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-37593
>                 URL: https://issues.apache.org/jira/browse/SPARK-37593
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, SQL
>    Affects Versions: 3.3.0
>            Reporter: EdisonWang
>            Priority: Minor
>             Fix For: 3.3.0
>
>
> In G1GC, an allocation larger than 50% of the region size is treated as a
> humongous allocation.
> Spark's Tungsten memory model usually allocates memory one `page` at a time,
> and each page is backed by a long[pageSizeBytes/8] array created in
> HeapMemoryAllocator.allocate.
> Remember that a Java long array carries an extra object header (usually 16
> bytes on a 64-bit system), so the actual allocation size is pageSizeBytes +
> 16 bytes.
> Assume that G1HeapRegionSize is 4M and pageSizeBytes is 4M as well. Every
> allocation then requests 4M + 16 bytes, so two regions are used, and the
> second region holds only 16 bytes. That wastes about 50% of the memory.
> This can happen under many combinations of G1HeapRegionSize (which varies
> from 1M to 32M) and pageSizeBytes (which varies from 1M to 64M).
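For illustration, below is a minimal, self-contained Java sketch of the
arithmetic described in the issue. It is not Spark's actual
HeapMemoryAllocator code; the class and method names (HumongousWasteSketch,
actualAllocationBytes, regionsUsed) and the 16-byte header constant are
assumptions, chosen only to show how the array object header pushes a
page-sized allocation just past a G1 region boundary.

    public class HumongousWasteSketch {
        // Assumed long[] object header size on a typical 64-bit JVM.
        static final long ARRAY_HEADER_BYTES = 16;

        // Bytes the JVM actually needs to back long[pageSizeBytes / 8].
        static long actualAllocationBytes(long pageSizeBytes) {
            return (pageSizeBytes / 8) * 8 + ARRAY_HEADER_BYTES;
        }

        // G1 reserves whole regions for a humongous allocation (anything
        // larger than half a region), so round up to full regions.
        static long regionsUsed(long allocationBytes, long regionSizeBytes) {
            return (allocationBytes + regionSizeBytes - 1) / regionSizeBytes;
        }

        public static void main(String[] args) {
            long regionSize = 4L * 1024 * 1024; // e.g. -XX:G1HeapRegionSize=4m
            long pageSize   = 4L * 1024 * 1024; // e.g. pageSizeBytes = 4M

            long actual   = actualAllocationBytes(pageSize);  // 4M + 16
            long regions  = regionsUsed(actual, regionSize);  // 2 regions
            long reserved = regions * regionSize;             // 8M reserved
            double wastePct = 100.0 * (reserved - actual) / reserved;

            System.out.printf("actual=%d regions=%d reserved=%d waste=%.1f%%%n",
                    actual, regions, reserved, wastePct);
            // Prints waste close to 50.0%: the second 4M region holds 16 bytes.
        }
    }

Along the same lines, one plausible mitigation (a sketch of the direction the
ticket suggests, not necessarily what the linked pull request implements) is
to size the backing array so that data plus header fits within the page
budget, e.g. long[(pageSizeBytes - ARRAY_HEADER_BYTES) / 8], keeping the whole
object inside a single region when pageSizeBytes equals the region size.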