[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15529561#comment-15529561 ]

Khurram Faraaz commented on DRILL-4387:
---------------------------------------

I will track the wrong-results issue separately. As for this JIRA, DRILL-4387, the fix is now verified. With the fix on Drill 1.6.0 (git commit ID c67d070b), the query takes 60.88 seconds.

{noformat}
0: jdbc:drill:schema=dfs.tmp> SELECT DISTINCT dir1 FROM `DRILL_4589`;
+-------+
| dir1  |
+-------+
| null  |
| Q2    |
| Q1    |
| Q3    |
| Q4    |
+-------+
5 rows selected (60.883 seconds)
{noformat}

Without the fix, the same query takes 106.069 seconds on Drill 1.6.0 git commit 1d890ff9, which is the commit immediately before the one above.

> Improve execution side when it handles skipAll query
> -----------------------------------------------------
>
>                 Key: DRILL-4387
>                 URL: https://issues.apache.org/jira/browse/DRILL-4387
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Jinfeng Ni
>            Assignee: Jinfeng Ni
>             Fix For: 1.6.0
>
>
> DRILL-4279 changed the planner side and the RecordReader on the execution
> side to handle skipAll queries. However, there are other places in the
> codebase that do not handle skipAll queries efficiently. In particular, in
> GroupScan or ScanBatchCreator, we replace a NULL or empty column list with
> the star column. This essentially forces the execution side (RecordReader)
> to fetch all the columns from the data source. Such behavior leads to a
> significant performance overhead in the SCAN operator.
>
> To improve Drill's performance, we should change those places as well, as a
> follow-up to DRILL-4279.
>
> One simple example of this problem is:
> {code}
> SELECT DISTINCT substring(dir1, 5) FROM dfs.`/Path/To/ParquetTable`;
> {code}
> The query does not require any regular column from the parquet file. However,
> ParquetRowGroupScan and ParquetScanBatchCreator will put the star column in
> the column list. If the table has dozens or hundreds of columns, this makes
> the SCAN operator much more expensive than necessary.
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526884#comment-15526884 ]

Jinfeng Ni commented on DRILL-4387:
-----------------------------------

[~khfaraaz], the incorrect query result is a related but separate issue. Could you please try on 1.5.0, before DRILL-4387 was merged, and see if it shows the same behavior? You may open a different JIRA to track the incorrect-result issue. Thanks.
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15342463#comment-15342463 ]

Khurram Faraaz commented on DRILL-4387:
---------------------------------------

The queries below return wrong results (the problem seems to have been there for quite some time).

{noformat}
Directory structure is:

[root@centos-01 DRILL_4589]# ls
1990  1992  1994  1996  1998  2000  2002  2004  2006  2008  2010  2012  2014
1991  1993  1995  1997  1999  2001  2003  2005  2007  2009  2011  2013  2015
[root@centos-01 DRILL_4589]# cd 1990
[root@centos-01 1990]# ls
Q1  Q2  Q3  Q4

and so on...

The two queries below return 0; I don't think the results are correct. Please review.

0: jdbc:drill:schema=dfs.tmp> select count(dir0) from `DRILL_4589`;
+---------+
| EXPR$0  |
+---------+
| 0       |
+---------+
1 row selected (9.117 seconds)

0: jdbc:drill:schema=dfs.tmp> select count(dir1) from `DRILL_4589`;
+---------+
| EXPR$0  |
+---------+
| 0       |
+---------+
1 row selected (8.97 seconds)

0: jdbc:drill:schema=dfs.tmp> explain plan for select count(dir0) from `DRILL_4589`;
+------+------+
| text | json |
+------+------+
| 00-00    Screen
00-01      Project(EXPR$0=[$0])
00-02        Project(EXPR$0=[$0])
00-03          Scan(groupscan=[org.apache.drill.exec.store.pojo.PojoRecordReader@5275c59a[columns = null, isStarQuery = false, isSkipQuery = false]])

0: jdbc:drill:schema=dfs.tmp> explain plan for select count(dir1) from `DRILL_4589`;
+------+------+
| text | json |
+------+------+
| 00-00    Screen
00-01      Project(EXPR$0=[$0])
00-02        Project(EXPR$0=[$0])
00-03          Scan(groupscan=[org.apache.drill.exec.store.pojo.PojoRecordReader@337121ac[columns = null, isStarQuery = false, isSkipQuery = false]])
{noformat}
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15250394#comment-15250394 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user jinfengni closed the pull request at:

    https://github.com/apache/drill/pull/379
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15250368#comment-15250368 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user jaltekruse commented on the pull request:

    https://github.com/apache/drill/pull/379#issuecomment-212534349

    @jinfengni looks like this was merged, can you close the PR?
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153168#comment-15153168 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53387614

    --- Diff: contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseGroupScan.java ---
    @@ -34,6 +34,7 @@
     import java.util.concurrent.TimeUnit;

     import com.fasterxml.jackson.annotation.JsonCreator;
    +import com.google.common.base.Objects;
    --- End diff --

    Right, I'll remove these unused imports. Thanks.
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153159#comment-15153159 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user amansinha100 commented on the pull request:

    https://github.com/apache/drill/pull/379#issuecomment-185934019

    LGTM +1.
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153156#comment-15153156 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user amansinha100 commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53386823

    --- Diff: contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseGroupScan.java ---
    @@ -34,6 +34,7 @@
     import java.util.concurrent.TimeUnit;

     import com.fasterxml.jackson.annotation.JsonCreator;
    +import com.google.common.base.Objects;
    --- End diff --

    Unnecessary import?
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152896#comment-15152896 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53366236

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetScanBatchCreator.java ---
    @@ -87,9 +87,6 @@ public ScanBatch getBatch(FragmentContext context, ParquetRowGroupScan rowGroupS
               newColumns.add(column);
             }
           }
    -      if (newColumns.isEmpty()) {
    --- End diff --

    @amansinha100, I made a slight change to the patch to address the comments. Could you please take another look? Thanks!
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152837#comment-15152837 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53361870

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetScanBatchCreator.java ---
    @@ -87,9 +87,6 @@ public ScanBatch getBatch(FragmentContext context, ParquetRowGroupScan rowGroupS
               newColumns.add(column);
             }
           }
    -      if (newColumns.isEmpty()) {
    --- End diff --

    I went through all the ScanBatchCreators in Drill's code base. ParquetScanBatchCreator seems to be the only one that converts an empty column list to ALL_COLUMNS. Looking at the history, it seems DRILL-1845 added that code, probably just to make skipAll queries work for parquet. With the patch for DRILL-4279, the parquet record reader is able to handle an empty column list.

    Besides ParquetScanBatchCreator, this patch also modifies HBaseGroupScan and EasyGroupScan, which originally interpreted empty column lists as ALL_COLUMNS.

    I'll add comments to the code to clarify the different meanings of a NULL and an empty column list.
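To make the NULL-vs-empty distinction discussed above concrete, here is a minimal sketch. The class and helper are hypothetical, not Drill's actual ScanBatchCreator code, and it assumes Drill's common module is on the classpath for SchemaPath (whose getSimplePath("*") usage appears in the GroupScan diff in this thread).

{code}
import java.util.Collections;
import java.util.List;

import org.apache.drill.common.expression.SchemaPath;

public class ColumnListSemanticsSketch {

  // Mirrors the idea of GroupScan.ALL_COLUMNS: a single star column.
  static final List<SchemaPath> ALL_COLUMNS =
      Collections.singletonList(SchemaPath.getSimplePath("*"));

  /**
   * Hypothetical helper showing the intended semantics:
   *   NULL  -> old hand-written plans with no "columns" section: expand to star (read everything).
   *   empty -> planner-generated skipAll query: pass through unchanged so the
   *            RecordReader can avoid materializing any real column.
   */
  static List<SchemaPath> resolveColumns(List<SchemaPath> requested) {
    if (requested == null) {
      return ALL_COLUMNS;   // backward compatibility for old physical plans
    }
    return requested;       // an empty list is NOT expanded to star
  }
}
{code}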
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152820#comment-15152820 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53360482

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/base/GroupScan.java ---
    @@ -35,6 +35,8 @@ public interface GroupScan extends Scan, HasAffinity{
       public static final List<SchemaPath> ALL_COLUMNS = ImmutableList.of(SchemaPath.getSimplePath("*"));

    +  public static final List<SchemaPath> EMPTY_COLUMNS = ImmutableList.of();
    --- End diff --

    Nice catch. It's no longer needed. (Originally, I intended to convert NULL to EMPTY_COLUMNS, but now that's not necessary.) I'll remove it. Thanks.
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152573#comment-15152573 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user amansinha100 commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53340898

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetScanBatchCreator.java ---
    @@ -87,9 +87,6 @@ public ScanBatch getBatch(FragmentContext context, ParquetRowGroupScan rowGroupS
               newColumns.add(column);
             }
           }
    -      if (newColumns.isEmpty()) {
    --- End diff --

    So, to clarify: the reason you removed the check for newColumns.isEmpty() is that if the column list is empty, the underlying ParquetRecordReader will handle it correctly by producing 1 default column (probably a NullableInt column)? Was this check for isEmpty() only present in the Parquet scan, or do other readers need modification too?

    I think it would be good to add comments about how the NULL and empty column lists are handled by each data source.
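As background on the "1 default column" behavior referenced above: the idea is that a scan asked for zero real columns still emits one placeholder column of nulls, so downstream operators see a row count without any column data being read. A self-contained toy illustration follows; the class and method names are hypothetical and use no Drill APIs.

{code}
import java.util.ArrayList;
import java.util.List;

// Toy illustration of a "skip-all" scan result: no real columns are read, but
// a single placeholder column of nulls is emitted so downstream operators
// (e.g. COUNT, or DISTINCT on partition columns) still observe row counts.
class SkipAllBatchSketch {

  static List<Integer[]> buildPlaceholderBatch(int rowCount) {
    List<Integer[]> batch = new ArrayList<>();
    Integer[] placeholder = new Integer[rowCount];  // all nulls, analogous to a NullableInt vector
    batch.add(placeholder);
    return batch;
  }

  public static void main(String[] args) {
    // A row group with 1000 rows and no requested columns still yields a batch
    // whose single placeholder column has 1000 (null) entries.
    System.out.println(buildPlaceholderBatch(1000).get(0).length);  // prints 1000
  }
}
{code}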
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15152497#comment-15152497 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

Github user amansinha100 commented on a diff in the pull request:

    https://github.com/apache/drill/pull/379#discussion_r53335185

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/base/GroupScan.java ---
    @@ -35,6 +35,8 @@ public interface GroupScan extends Scan, HasAffinity{
       public static final List<SchemaPath> ALL_COLUMNS = ImmutableList.of(SchemaPath.getSimplePath("*"));

    +  public static final List<SchemaPath> EMPTY_COLUMNS = ImmutableList.of();
    --- End diff --

    This static constant does not seem to be referenced anywhere?
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150797#comment-15150797 ]

Jinfeng Ni commented on DRILL-4387:
-----------------------------------

I did a simple performance comparison on my Mac, using the TPC-H SF10 lineitem parquet table. Both numbers are measured with a warm cache.

Without the patch:
{code}
0: jdbc:drill:zk=local> select distinct dir1 from dfs.`/drill/testdata/tpch-sf10/lineitem`;
+-------+
| dir1  |
+-------+
| null  |
+-------+
1 row selected (18.958 seconds)
{code}

With the patch:
{code}
0: jdbc:drill:zk=local> select distinct dir1 from dfs.`/drill/testdata/tpch-sf10/lineitem`;
+-------+
| dir1  |
+-------+
| null  |
+-------+
1 row selected (2.255 seconds)
{code}

Basically, the query time is reduced from 18.9 seconds to 2.2 seconds. The profile also shows that the memory used by the Scan operator is reduced from 54M to 2M.
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150792#comment-15150792 ]

Jinfeng Ni commented on DRILL-4387:
-----------------------------------

A couple of comments I would like to make:

1. If a physical plan comes from the SQL query planner, then after DRILL-4279 the column list should be empty, instead of NULL, for a skipAll query. The empty column list goes through GroupScan and ScanBatchCreator; it is in the RecordReader that the different ways of handling a skipAll query are applied.

2. If a physical plan does not come from the query planner, it is possible that the "columns" section is missing, leading to NULL for that field. This mainly comes from old, manually written physical plans in many unit tests from long ago. When the column list is NULL, Drill still uses the "no words means all columns" policy, to ensure compatibility with those old physical plans.
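As a rough illustration of point 2, a JSON-deserialized scan spec could preserve the distinction between a missing "columns" section and an explicit empty list roughly as below. This is a hypothetical class, not one of Drill's actual GroupScan implementations, and it assumes Jackson annotations and Drill's common module on the classpath.

{code}
import java.util.List;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

import org.apache.drill.common.expression.SchemaPath;

// Hypothetical scan spec: if a hand-written physical plan omits the "columns"
// section, Jackson passes null and we fall back to "all columns" for
// compatibility; a planner-generated skipAll plan sends an explicit empty
// list, which is kept as-is so the reader can skip real columns.
public class ExampleScanSpec {

  private final List<SchemaPath> columns;

  @JsonCreator
  public ExampleScanSpec(@JsonProperty("columns") List<SchemaPath> columns) {
    this.columns = columns;
  }

  public boolean isStarQuery() {
    return columns == null;                         // missing "columns" => select all
  }

  public boolean isSkipQuery() {
    return columns != null && columns.isEmpty();    // explicit empty list => skipAll
  }
}
{code}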
[jira] [Commented] (DRILL-4387) Improve execution side when it handles skipAll query
[ https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150631#comment-15150631 ]

ASF GitHub Bot commented on DRILL-4387:
---------------------------------------

GitHub user jinfengni opened a pull request:

    https://github.com/apache/drill/pull/379

    DRILL-4387: GroupScan or ScanBatchCreator should not use star column in case of skipAll query

    The skipAll query should be handled in RecordReader.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jinfengni/incubator-drill DRILL-4387

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/drill/pull/379.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #379

----
commit 5c1edc42dcad6c3b5943424b9a8373cf6ff51753
Author: Jinfeng Ni
Date:   2016-02-12T22:18:59Z

    DRILL-4387: GroupScan or ScanBatchCreator should not use star column in case of skipAll query. The skipAll query should be handled in RecordReader.