yes-un commented on issue #10099:
URL: https://github.com/apache/incubator-gluten/issues/10099#issuecomment-3026642635
> > However, Velox is still [reading the off-heap memory settings from SparkConf](https://github.com/apache/incubator-gluten/blob/f1fc5697d086a5b4a518edba318f44b7c7bf0860/cpp/velox/compute/WholeStageResultIterator.cc#L485).
>
> This code should only be effective when there're hash aggregations in the query plan. Have you already faced issues with hash aggregation's memory usage?

@zhztheplayer The query itself doesn't include any hash aggregations. The error we got is below (we used a ResourceProfile to request 4G of off-heap memory, but the task is still restricted by the SparkConf off-heap size):

```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 420 in stage 0.0 failed 4 times, most recent failure: Lost task 420.3 in stage 0.0 (TID 499) (100.67.231.198 executor 11): org.apache.gluten.exception.GlutenException: org.apache.gluten.exception.GlutenException: Error during calling Java code from native code: org.apache.gluten.memory.memtarget.ThrowOnOomMemoryTarget$OutOfMemoryException: Not enough spark off-heap execution memory. Acquired: 80.0 MiB, granted: 24.0 MiB. Try tweaking config option spark.memory.offHeap.size to get larger space to run this application (if spark.gluten.memory.dynamic.offHeap.sizing.enabled is not enabled).
Current config settings:
  spark.gluten.memory.offHeap.size.in.bytes=256.0 MiB
  spark.gluten.memory.task.offHeap.size.in.bytes=128.0 MiB
  spark.gluten.memory.conservative.task.offHeap.size.in.bytes=64.0 MiB
  spark.memory.offHeap.enabled=true
  spark.gluten.memory.dynamic.offHeap.sizing.enabled=false
Memory consumer stats:
  Task.499: Current used bytes: 104.0 MiB, peak bytes: N/A
  \- Gluten.Tree.346: Current used bytes: 104.0 MiB, peak bytes: 128.0 MiB
     \- root.346: Current used bytes: 104.0 MiB, peak bytes: 128.0 MiB
        +- NativePlanEvaluator-346.0: Current used bytes: 104.0 MiB, peak bytes: 128.0 MiB
        |  \- single: Current used bytes: 104.0 MiB, peak bytes: 104.0 MiB
        |     +- root: Current used bytes: 97.9 MiB, peak bytes: 104.0 MiB
        |     |  +- task.Gluten_Stage_0_TID_499_VTID_346: Current used bytes: 97.9 MiB, peak bytes: 104.0 MiB
        |     |  |  +- node.0: Current used bytes: 97.9 MiB, peak bytes: 104.0 MiB
        |     |  |  |  +- op.0.0.0.TableScan: Current used bytes: 97.9 MiB, peak bytes: 97.9 MiB
        |     |  |  |  \- op.0.0.0.TableScan.test-hive: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     |  |  +- node.2: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     |  |  |  \- op.2.0.0.Limit: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     |  |  \- node.1: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     |  |     \- op.1.0.0.FilterProject: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     |  \- default_leaf: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     \- gluten::MemoryAllocator: Current used bytes: 0.0 B, peak bytes: 0.0 B
        +- NativePlanEvaluator-346.0.OverAcquire.0: Current used bytes: 0.0 B, peak bytes: 24.0 MiB
        +- CelebornShuffleWriter.346: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |  \- single: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     +- gluten::MemoryAllocator: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     \- root: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |        \- default_leaf: Current used bytes: 0.0 B, peak bytes: 0.0 B
        +- CelebornShuffleWriter.346.OverAcquire.0: Current used bytes: 0.0 B, peak bytes: 0.0 B
        +- IteratorMetrics.346: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |  \- single: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     +- gluten::MemoryAllocator: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |     \- root: Current used bytes: 0.0 B, peak bytes: 0.0 B
        |        \- default_leaf: Current used bytes: 0.0 B, peak bytes: 0.0 B
        \- IteratorMetrics.346.OverAcquire.0: Current used bytes: 0.0 B, peak bytes: 0.0 B
```
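For context, this is roughly how we ask for the larger off-heap allocation via stage-level scheduling (a simplified sketch assuming Spark 3.1+, not our exact job code; the app name and dataset here are illustrative):

```scala
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder}
import org.apache.spark.sql.SparkSession

object OffHeapResourceProfileSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative session; in our jobs spark.memory.offHeap.enabled=true and
    // a small spark.memory.offHeap.size (256 MiB above) are already set in SparkConf.
    val spark = SparkSession.builder().appName("offheap-rp-sketch").getOrCreate()

    // Request 4 GiB of off-heap memory for the executors running this stage.
    val ereqs = new ExecutorResourceRequests().offHeapMemory("4g")
    val profile = new ResourceProfileBuilder().require(ereqs).build()

    // ResourceProfiles attach at the RDD level (stage-level scheduling,
    // which generally also needs dynamic allocation enabled).
    val rdd = spark.range(0L, 1000000L).rdd.withResources(profile)
    println(rdd.count())

    spark.stop()
  }
}
```

Even with this profile applied, the native side still sizes its pool from `spark.memory.offHeap.size` in SparkConf, which is why the task above is capped at 256 MiB (128 MiB per task) rather than the 4 GiB the profile requested.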
