[ https://issues.apache.org/jira/browse/DRILL-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347288#comment-16347288 ]
ASF GitHub Bot commented on DRILL-6032:
---------------------------------------

Github user ilooner commented on a diff in the pull request:

    https://github.com/apache/drill/pull/1101#discussion_r165136635

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
    @@ -397,11 +384,9 @@ private void delayedSetup() {
         }
         numPartitions = BaseAllocator.nextPowerOfTwo(numPartitions); // in case not a power of 2
    -    if ( schema == null ) { estValuesBatchSize = estOutgoingAllocSize = estMaxBatchSize = 0; } // incoming was an empty batch
    --- End diff --

    All the unit and functional tests passed without an NPE. The null check was redundant because the code in **doWork** that calls **delayedSetup** sets the schema if it is null:

    ```
    // This would be called only once - first time actual data arrives on incoming
    if ( schema == null && incoming.getRecordCount() > 0 ) {
      this.schema = incoming.getSchema();
      currentBatchRecordCount = incoming.getRecordCount(); // initialize for first non-empty batch
      // Calculate the number of partitions based on actual incoming data
      delayedSetup();
    }
    ```

    So schema will never be null when **delayedSetup** is called.

> Use RecordBatchSizer to estimate size of columns in HashAgg
> -----------------------------------------------------------
>
>                 Key: DRILL-6032
>                 URL: https://issues.apache.org/jira/browse/DRILL-6032
>             Project: Apache Drill
>          Issue Type: Improvement
>            Reporter: Timothy Farkas
>            Assignee: Timothy Farkas
>            Priority: Major
>             Fix For: 1.13.0
>
>
> We need to use the RecordBatchSizer to estimate the size of the columns in the
> partition batches created by HashAgg.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
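The argument above is a caller-ensures-invariant pattern: `doWork` establishes `schema != null` before it ever invokes `delayedSetup`, which makes the null branch inside `delayedSetup` dead code. A minimal, self-contained sketch of that pattern (all names here are hypothetical stand-ins, not the actual Drill classes or fields):

```java
// Sketch of the invariant discussed in the review comment: the caller sets
// schema before calling delayedSetup, so a null check inside delayedSetup
// can never fire and is safe to delete.
public class SchemaInvariantSketch {
    static String schema = null;     // stands in for HashAggTemplate's schema field
    static int estMaxBatchSize = -1; // stands in for the batch-size estimate

    // Stand-in for delayedSetup(): relies on the caller's guarantee.
    static void delayedSetup() {
        if (schema == null) {
            // Unreachable given the calling convention below.
            throw new IllegalStateException("invariant violated: schema not set");
        }
        estMaxBatchSize = schema.length(); // some estimate derived from the schema
    }

    // Stand-in for doWork(): sets schema from the first non-empty batch,
    // then (and only then) calls delayedSetup.
    static void doWork(String incomingSchema, int recordCount) {
        if (schema == null && recordCount > 0) {
            schema = incomingSchema;
            delayedSetup();
        }
    }

    public static void main(String[] args) {
        doWork("a,b,c", 0);  // empty batch: delayedSetup is never reached
        doWork("a,b,c", 10); // first real batch: schema is set before delayedSetup runs
        System.out.println(estMaxBatchSize); // prints 5 ("a,b,c".length())
    }
}
```

Because every path into `delayedSetup` first assigns `schema`, the removed `if ( schema == null )` branch could never execute, which is what the passing unit and functional tests confirm.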