[jira] [Assigned] (DRILL-1142) Add Jenkins build status to GitHub README
[ https://issues.apache.org/jira/browse/DRILL-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sudheesh Katkam reassigned DRILL-1142:
--------------------------------------

    Assignee: Sudheesh Katkam  (was: Jacques Nadeau)

> Add Jenkins build status to GitHub README
> -----------------------------------------
>
>                 Key: DRILL-1142
>                 URL: https://issues.apache.org/jira/browse/DRILL-1142
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Tools, Build & Test
>            Reporter: Sudheesh Katkam
>            Assignee: Sudheesh Katkam
>            Priority: Minor
>             Fix For: Future
>
>
> I do not have the link to the embeddable status (no permission).
> Here's how-to:
> https://wiki.jenkins-ci.org/display/JENKINS/Embeddable+Build+Status+Plugin

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
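For reference, the Embeddable Build Status plugin linked above serves a status icon per job at `<jenkins>/buildStatus/icon?job=<name>`, so the eventual README entry would look roughly like the following. The job name `drill-scm` here is a placeholder, not the actual Apache Jenkins job; whoever has the permissions would substitute the real one:

```markdown
[![Build Status](https://builds.apache.org/buildStatus/icon?job=drill-scm)](https://builds.apache.org/job/drill-scm/)
```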
[jira] [Closed] (DRILL-2082) nested arrays of strings returned wrong results
[ https://issues.apache.org/jira/browse/DRILL-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chun Chang closed DRILL-2082. - Assignee: Chun Chang (was: Mehant Baid) > nested arrays of strings returned wrong results > --- > > Key: DRILL-2082 > URL: https://issues.apache.org/jira/browse/DRILL-2082 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Data Types >Affects Versions: 0.8.0 >Reporter: Chun Chang >Assignee: Chun Chang >Priority: Critical > Fix For: 0.8.0 > > > #Mon Jan 26 14:10:51 PST 2015 > git.commit.id.abbrev=3c6d0ef > Querying Complex JSON data type nested array of strings returned wrong > results when data size is large (1 million row). Smaller data size (a few > rows) returned correct results. Test data can be accessed at > http://apache-drill.s3.amazonaws.com/files/complex.json.gz > For small data size, I got correct results: > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select t.id, t.aaa from > `aaa.json` t; > +++ > | id |aaa | > +++ > | 1 | [[["aa0 1"],["ab0 1"]],[["ba0 1"],["bb0 1"]],[["ca0 1","ca1 > 1"],["cb0 1","cb1 1","cb2 1"]]] | > | 2 | [[["aa0 2"],["ab0 2"]],[["ba0 2"],["bb0 2"]],[["ca0 2","ca1 > 2"],["cb0 2","cb1 2","cb2 2"]]] | > +++ > {code} > But large data size returned wrong results: > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select t.id, t.aaa from > `complex.json` t where t.id=1 limit 1; > +++ > | id |aaa | > +++ > | 1 | [[["ba0 56"],["bb0 56"],["ca0 56","ca1 56"],["cb0 56","cb1 > 56","cb2 56"],["aa0 91"],["ab0 91"],["aa0 125"],["ab0 125"],["aa0 140"],["ab0 > 140"],["aa0 142"],["ab0 142"],["aa0 146"],["ab0 146"],["ba0 402"],["bb0 > 402"],["ca0 402","ca1 402"],["cb0 402","cb1 402","cb2 402"],["aa0 403"],["ab0 > 403"],["ba0 403"],["bb0 403"],["ca0 403","ca1 403"],["cb0 403","cb1 403","cb2 > 403"],["aa0 404"],["ab0 404"],["ba0 404"],["bb0 404"],["ca0 404","ca1 > 404"],["cb0 404","cb1 404","cb2 404"],["aa0 405"],["ab0 405"],["ba0 > 405"],["bb0 405"],["ca0 405","ca1 
405"],["cb0 405","cb1 405","cb2 405"],["aa0 > 437"],["ab0 437"],["aa0 485"],["ab0 485"],["aa0 503"],["ab0 503"],["aa0 > 569"],["ab0 569"],["aa0 581"],["ab0 581"],["aa0 620"],["ab0 620"],["aa0 > 632"],["ab0 632"],["aa0 640"],["ab0 640"],["aa0 650"],["ab0 650"],["aa0 > 669"],["ab0 669"],["aa0 671"],["ab0 671"],["aa0 728"],["ab0 728"],["aa0 > 735"],["ab0 735"],["aa0 772"],["ab0 772"],["aa0 784"],["ab0 784"],["aa0 > 811"],["ab0 811"],["aa0 817"],["ab0 817"],["aa0 836"],["ab0 836"],["aa0 > 881"],["ab0 881"],["aa0 891"],["ab0 891"],["aa0 924"],["ab0 924"],["aa0 > 1005"],["ab0 1005"],["aa0 1057"],["ab0 1057"],["aa0 1086"],["ab0 1086"],["aa0 > 1089"],["ab0 1089"],["aa0 1097"],["ab0 1097"],["aa0 1133"],["ab0 1133"],["aa0 > 1136"],["ab0 1136"],["aa0 1146"],["ab0 1146"],["aa0 1169"],["ab0 1169"],["aa0 > 1178"],["ab0 1178"],["aa0 1184"],["ab0 1184"],["aa0 1189"],["ab0 1189"],["aa0 > 1223"],["ab0 1223"],["aa0 1275"],["ab0 1275"],["aa0 1290"],["ab0 1290"],["aa0 > 1295"],["ab0 1295"],["aa0 1320"],["ab0 1320"],["aa0 1343"],["ab0 1343"],["aa0 > 1400"],["ab0 1400"],["aa0 1426"],["ab0 1426"],["aa0 1442"],["ab0 1442"],["aa0 > 1455"],["ab0 1455"],["aa0 1499"],["ab0 1499"],["aa0 1521"],["ab0 1521"],["aa0 > 1541"],["ab0 1541"],["aa0 1557"],["ab0 1557"],["aa0 1578"],["ab0 1578"],["aa0 > 1633"],["ab0 1633"],["aa0 1635"],["ab0 1635"],["aa0 1651"],["ab0 1651"],["aa0 > 1665"],["ab0 1665"],["aa0 1689"],["ab0 1689"],["aa0 1760"],["ab0 1760"],["aa0 > 1784"],["ab0 1784"],["aa0 1796"],["ab0 1796"],["aa0 1801"],["ab0 1801"],["aa0 > 1817"],["ab0 1817"],["aa0 1861"],["ab0 1861"],["aa0 1872"],["ab0 1872"],["aa0 > 1895"],["ab0 1895"],["aa0 1897"],["ab0 1897"],["aa0 1911"],["ab0 1911"],["aa0 > 1975"],["ab0 1975"],["aa0 1983"],["ab0 1983"],["aa0 1996"],["ab0 1996"],["aa0 > 2005"],["ab0 2005"],["aa0 2048"],["ab0 2048"],["aa0 2063"],["ab0 2063"],["aa0 > 2150"],["ab0 2150"],["aa0 2159"],["ab0 2159"],["aa0 2214"],["ab0 2214"],["aa0 > 2218"],["ab0 2218"],["aa0 2220"],["ab0 2220"],["aa0 2250"],["ab0 
2250"],["aa0 > 2256"],["ab0 2256"],["aa0 2265"],["ab0 2265"],["aa0 2296"],["ab0 2296"],["aa0 > 2319"],["ab0 2319"],["aa0 2327"],["ab0 2327"],["aa0 2333"],["ab0 2333"],["aa0 > 2361"],["ab0 2361"],["aa0 2392"],["ab0 2392"],["aa0 2399"],["ab0 2399"],["aa0 > 2424"],["ab0 2424"],["aa0 2466"],["ab0 2466"],["aa0 2473"],["ab0 2473"],["aa0 > 2508"],["ab0 2508"],["aa0 2524"],["ab0 2524"],["aa0 2550"],["ab0 2550"],["aa0 > 2553"],["ab0 2553"],["aa0 2560"],["ab0 2560"],["aa0 2563"],["ab0 2563"],["aa0 > 2574"],["ab0 2574"],["aa0 2592"],["ab0 2592"],["aa0 2600"],["ab0 2600"],["aa0 > 2606"],["ab0
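The report above notes that a small file returns correct results while the 1-million-row complex.json does not. A quick way to reproduce the data shape for a repro of any size (this only mirrors the two sample rows shown in the ticket; the actual generator behind complex.json is not shown there) is to emit newline-delimited JSON, one object per line, as Drill's JSON reader expects:

```python
import json

def make_row(i):
    # One record shaped like the sample output in this report: an "id" plus a
    # nested array-of-arrays-of-strings column "aaa". Field names and layout
    # are inferred from the two rows shown above, nothing more.
    return {
        "id": i,
        "aaa": [
            [["aa0 %d" % i], ["ab0 %d" % i]],
            [["ba0 %d" % i], ["bb0 %d" % i]],
            [["ca0 %d" % i, "ca1 %d" % i],
             ["cb0 %d" % i, "cb1 %d" % i, "cb2 %d" % i]],
        ],
    }

def write_rows(path, n):
    # Newline-delimited JSON: one object per line, no enclosing array.
    with open(path, "w") as f:
        for i in range(1, n + 1):
            f.write(json.dumps(make_row(i)) + "\n")
```

Writing a 2-row file versus a 1,000,000-row file with the same generator is then a one-argument change, which matches the small-vs-large comparison the reporter describes.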
[jira] [Commented] (DRILL-1988) join returned maps are all empty
[ https://issues.apache.org/jira/browse/DRILL-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387735#comment-14387735 ]

Chun Chang commented on DRILL-1988:
-----------------------------------

{code}
0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select * from sys.version;
+-----------+----------------+-------------+-------------+------------+
| commit_id | commit_message | commit_time | build_email | build_time |
+-----------+----------------+-------------+-------------+------------+
| 462e50ce9c4b829c2a4bafdeb9763bfba677c726 | DRILL-2575: FragmentExecutor.cancel() blasts through state transitions regardless of current state | 25.03.2015 @ 21:11:23 PDT
1 row selected (0.054 seconds)
0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select a.id, a.soa[3].str, b.soa[3].str, a.ooa[1].fl from `complex.json` a inner join `complex.json` b on a.soa[3].str=b.soa[3].str order by a.id limit 10;
+----+----------------------------+----------------------------+-----------------------------+
| id | EXPR$1                     | EXPR$2                     | EXPR$3                      |
+----+----------------------------+----------------------------+-----------------------------+
| 1  | here is a string at row 1  | here is a string at row 1  | {"f1":1.6789,"f2":54331.0}  |
| 2  | here is a string at row 2  | here is a string at row 2  | {}                          |
| 3  | here is a string at row 3  | here is a string at row 3  | {"f1":3.6789,"f2":54351.0}  |
| 4  | here is a string at row 4  | here is a string at row 4  | {"f1":4.6789,"f2":54361.0}  |
| 5  | here is a string at row 5  | here is a string at row 5  | {"f1":5.6789,"f2":54371.0}  |
| 6  | here is a string at row 6  | here is a string at row 6  | {}                          |
| 7  | here is a string at row 7  | here is a string at row 7  | {"f1":7.6789,"f2":54391.0}  |
| 8  | here is a string at row 8  | here is a string at row 8  | {}                          |
| 9  | here is a string at row 9  | here is a string at row 9  | {"f1":9.6789,"f2":54411.0}  |
| 10 | here is a string at row 10 | here is a string at row 10 | {"f1":10.6789,"f2":54421.0} |
+----+----------------------------+----------------------------+-----------------------------+
{code}

> join returned maps are all empty
> --------------------------------
>
>                 Key: DRILL-1988
>                 URL: https://issues.apache.org/jira/browse/DRILL-1988
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Data Types
>    Affects Versions: 0.8.0
>            Reporter: Chun Chang
>            Assignee: Mehant Baid
>            Priority: Critical
>         Attachments: DRILL-1988.patch
>
> > #Fri Jan 09 20:39:31 EST 2015 > git.commit.id.abbrev=487d98e > For complex json type, a join query returned all maps with empty value. The > actual data has empty maps for some rows, but mostly with value. Data can be > downloaded from: > https://s3.amazonaws.com/apache-drill/files/complex.json.gz > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select a.id, a.soa[3].str, > b.soa[3].str, a.ooa[1].fl from `complex.json` a inner join `complex.json` b > on a.soa[3].str=b.soa[3].str order by a.id limit 10; > +++++ > | id | EXPR$1 | EXPR$2 | EXPR$3 | > +++++ > | 1 | here is a string at row 1 | here is a string at row 1 | {} > | > | 2 | here is a string at row 2 | here is a string at row 2 | {} > | > | 3 | here is a string at row 3 | here is a string at row 3 | {} > | > | 4 | here is a string at row 4 | here is a string at row 4 | {} > | > | 5 | here is a string at row 5 | here is a string at row 5 | {} > | > | 6 | here is a string at row 6 | here is a string at row 6 | {} > | > | 7 | here is a string at row 7 | here is a string at row 7 | {} > | > | 8 | here is a string at row 8 | here is a string at row 8 | {} > | > | 9 | here is a string at row 9 | here is a string at row 9 | {} > | > | 10 | here is a string at row 10 | here is a string at row 10 | {} > | > +++++ > {code} > As you can see from the following query, maps is not empty for most of the > row IDs. > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select a.id, a.ooa[1].fl from > `complex.json` a limit 10; > +++ > | id | EXPR$1 | > +++ > | 1 | {"f1":1.6789,"f2":54331.0} | > | 2 | {} | > | 3 | {"f1":3.6789,"f2":54351.0} | > | 4 | {"f1":4.6789,"f2":54361.0} | > | 5 | {"f1":5.6789,"f2":54371.0} | > | 6 | {} | > | 7 | {"f1":7.6789,"f2":54391.0} | > | 8 | {} | > | 9 | {"f1":9.6789,"f2":54411.0} | > | 10 | {"f1":10.6789,"f2":54421.0} | > +++ > {
[jira] [Closed] (DRILL-1988) join returned maps are all empty
[ https://issues.apache.org/jira/browse/DRILL-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chun Chang closed DRILL-1988. - Assignee: Chun Chang (was: Mehant Baid) verified. > join returned maps are all empty > > > Key: DRILL-1988 > URL: https://issues.apache.org/jira/browse/DRILL-1988 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Data Types >Affects Versions: 0.8.0 >Reporter: Chun Chang >Assignee: Chun Chang >Priority: Critical > Attachments: DRILL-1988.patch > > > #Fri Jan 09 20:39:31 EST 2015 > git.commit.id.abbrev=487d98e > For complex json type, a join query returned all maps with empty value. The > actual data has empty maps for some rows, but mostly with value. Data can be > downloaded from: > https://s3.amazonaws.com/apache-drill/files/complex.json.gz > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select a.id, a.soa[3].str, > b.soa[3].str, a.ooa[1].fl from `complex.json` a inner join `complex.json` b > on a.soa[3].str=b.soa[3].str order by a.id limit 10; > +++++ > | id | EXPR$1 | EXPR$2 | EXPR$3 | > +++++ > | 1 | here is a string at row 1 | here is a string at row 1 | {} > | > | 2 | here is a string at row 2 | here is a string at row 2 | {} > | > | 3 | here is a string at row 3 | here is a string at row 3 | {} > | > | 4 | here is a string at row 4 | here is a string at row 4 | {} > | > | 5 | here is a string at row 5 | here is a string at row 5 | {} > | > | 6 | here is a string at row 6 | here is a string at row 6 | {} > | > | 7 | here is a string at row 7 | here is a string at row 7 | {} > | > | 8 | here is a string at row 8 | here is a string at row 8 | {} > | > | 9 | here is a string at row 9 | here is a string at row 9 | {} > | > | 10 | here is a string at row 10 | here is a string at row 10 | {} > | > +++++ > {code} > As you can see from the following query, maps is not empty for most of the > row IDs. 
> {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select a.id, a.ooa[1].fl from > `complex.json` a limit 10; > +++ > | id | EXPR$1 | > +++ > | 1 | {"f1":1.6789,"f2":54331.0} | > | 2 | {} | > | 3 | {"f1":3.6789,"f2":54351.0} | > | 4 | {"f1":4.6789,"f2":54361.0} | > | 5 | {"f1":5.6789,"f2":54371.0} | > | 6 | {} | > | 7 | {"f1":7.6789,"f2":54391.0} | > | 8 | {} | > | 9 | {"f1":9.6789,"f2":54411.0} | > | 10 | {"f1":10.6789,"f2":54421.0} | > +++ > {code} > physical plan: > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> explain plan for select a.id, > a.soa[3].str, b.soa[3].str, a.ooa[1].fl from `complex.json` a inner join > `complex.json` b on a.soa[3].str=b.soa[3].str order by a.id limit 10; > +++ > |text|json| > +++ > | 00-00Screen > 00-01 Project(id=[$0], EXPR$1=[$1], EXPR$2=[$2], EXPR$3=[$3]) > 00-02SelectionVectorRemover > 00-03 Limit(fetch=[10]) > 00-04SingleMergeExchange(sort0=[0 ASC]) > 01-01 SelectionVectorRemover > 01-02TopN(limit=[10]) > 01-03 HashToRandomExchange(dist0=[[$0]]) > 02-01Project(id=[$0], EXPR$1=[$2], EXPR$2=[$5], > EXPR$3=[$3]) > 02-02 HashJoin(condition=[=($1, $4)], joinType=[inner]) > 02-04HashToRandomExchange(dist0=[[$1]]) > 03-01 Project(id=[$2], $f4=[ITEM(ITEM($1, 3), > 'str')], ITEM=[ITEM(ITEM($1, 3), 'str')], ITEM3=[ITEM(ITEM($0, 1), 'fl')]) > 03-02Scan(groupscan=[EasyGroupScan > [selectionRoot=/drill/testdata/complex_type/json/complex.json, numFiles=1, > columns=[`id`, `soa`[3].`str`, `ooa`[1].`fl`], > files=[maprfs:/drill/testdata/complex_type/json/complex.json]]]) > 02-03Project($f40=[$0], ITEM0=[$1]) > 02-05 HashToRandomExchange(dist0=[[$0]]) > 04-01Project($f4=[ITEM(ITEM($0, 3), 'str')], > ITEM=[ITEM(ITEM($0, 3), 'str')]) > 04-02 Scan(groupscan=[EasyGroupScan > [selectionRoot=/drill/testdata/complex_type/json/complex.json, numFiles=1, > columns=[`soa`[3].`str`], > files=[maprfs:/drill/testdata/complex_type/json/complex.json]]]) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332
[jira] [Updated] (DRILL-2408) Invalid (0 length) parquet file created by CTAS
[ https://issues.apache.org/jira/browse/DRILL-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deneche A. Hakim updated DRILL-2408:
------------------------------------
    Attachment:     (was: DRILL-2408.2.patch.txt)

> Invalid (0 length) parquet file created by CTAS
> -----------------------------------------------
>
>                 Key: DRILL-2408
>                 URL: https://issues.apache.org/jira/browse/DRILL-2408
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Writer
>    Affects Versions: 0.8.0
>            Reporter: Aman Sinha
>            Assignee: Deneche A. Hakim
>            Priority: Critical
>             Fix For: 0.9.0
>
>         Attachments: DRILL-2408.1.patch.txt, DRILL-2408.2.patch.txt, DRILL-2408.3.patch.txt
>
>
> We should not be creating 0 length parquet files; subsequent queries on these will fail with the error shown below.
> {code}
> 0: jdbc:drill:zk=local> create table tt5 as select * from cp.`tpch/region.parquet` where 1=0;
> +----------+---------------------------+
> | Fragment | Number of records written |
> +----------+---------------------------+
> | 0_0      | 0                         |
> +----------+---------------------------+
> 1 row selected (0.8 seconds)
> 0: jdbc:drill:zk=local> select count(*) from tt5;
> Query failed: RuntimeException: file:/tmp/tt5/0_0_0.parquet is not a Parquet file (too small)
> Error: exception while executing query: Failure while executing query.
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (DRILL-2408) Invalid (0 length) parquet file created by CTAS
[ https://issues.apache.org/jira/browse/DRILL-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deneche A. Hakim updated DRILL-2408:
------------------------------------
    Attachment: DRILL-2408.3.patch.txt

rebased patch on top of master

> Invalid (0 length) parquet file created by CTAS
> -----------------------------------------------
>
>                 Key: DRILL-2408
>                 URL: https://issues.apache.org/jira/browse/DRILL-2408
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Writer
>    Affects Versions: 0.8.0
>            Reporter: Aman Sinha
>            Assignee: Deneche A. Hakim
>            Priority: Critical
>             Fix For: 0.9.0
>
>         Attachments: DRILL-2408.1.patch.txt, DRILL-2408.2.patch.txt, DRILL-2408.3.patch.txt
>
>
> We should not be creating 0 length parquet files; subsequent queries on these will fail with the error shown below.
> {code}
> 0: jdbc:drill:zk=local> create table tt5 as select * from cp.`tpch/region.parquet` where 1=0;
> +----------+---------------------------+
> | Fragment | Number of records written |
> +----------+---------------------------+
> | 0_0      | 0                         |
> +----------+---------------------------+
> 1 row selected (0.8 seconds)
> 0: jdbc:drill:zk=local> select count(*) from tt5;
> Query failed: RuntimeException: file:/tmp/tt5/0_0_0.parquet is not a Parquet file (too small)
> Error: exception while executing query: Failure while executing query.
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (DRILL-2408) Invalid (0 length) parquet file created by CTAS
[ https://issues.apache.org/jira/browse/DRILL-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deneche A. Hakim updated DRILL-2408:
------------------------------------
    Attachment: DRILL-2408.2.patch.txt

rebased the patch on top of master

> Invalid (0 length) parquet file created by CTAS
> -----------------------------------------------
>
>                 Key: DRILL-2408
>                 URL: https://issues.apache.org/jira/browse/DRILL-2408
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Writer
>    Affects Versions: 0.8.0
>            Reporter: Aman Sinha
>            Assignee: Deneche A. Hakim
>            Priority: Critical
>             Fix For: 0.9.0
>
>         Attachments: DRILL-2408.1.patch.txt, DRILL-2408.2.patch.txt, DRILL-2408.2.patch.txt
>
>
> We should not be creating 0 length parquet files; subsequent queries on these will fail with the error shown below.
> {code}
> 0: jdbc:drill:zk=local> create table tt5 as select * from cp.`tpch/region.parquet` where 1=0;
> +----------+---------------------------+
> | Fragment | Number of records written |
> +----------+---------------------------+
> | 0_0      | 0                         |
> +----------+---------------------------+
> 1 row selected (0.8 seconds)
> 0: jdbc:drill:zk=local> select count(*) from tt5;
> Query failed: RuntimeException: file:/tmp/tt5/0_0_0.parquet is not a Parquet file (too small)
> Error: exception while executing query: Failure while executing query.
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
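For context on the "not a Parquet file (too small)" error above: a structurally complete Parquet file begins with the 4-byte magic `PAR1` and ends with a 4-byte footer length followed by `PAR1` again, so nothing under 12 bytes can possibly be valid, and a 0-byte CTAS output certainly is not. A minimal sanity check along those lines (a sketch only; Drill's actual fix belongs in the Parquet writer, not in a post-hoc checker) could look like:

```python
PARQUET_MAGIC = b"PAR1"

def looks_like_parquet(path):
    # A well-formed Parquet file is: magic + row groups/footer + 4-byte
    # footer length + magic, so 12 bytes is the absolute minimum size.
    # This checks only size and magic bytes, not the footer contents.
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 12:
        return False
    return data[:4] == PARQUET_MAGIC and data[-4:] == PARQUET_MAGIC
```

Against the repro above, the 0-byte `/tmp/tt5/0_0_0.parquet` fails the length gate immediately, which is exactly the condition the reader reports as "too small".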
[jira] [Updated] (DRILL-2630) Merge join over inputs with complex type hit run-time code compiler error
[ https://issues.apache.org/jira/browse/DRILL-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jinfeng Ni updated DRILL-2630:
------------------------------
    Attachment: complex_1.json

Attaching the sample JSON file used in the query.

> Merge join over inputs with complex type hit run-time code compiler error
> -------------------------------------------------------------------------
>
>                 Key: DRILL-2630
>                 URL: https://issues.apache.org/jira/browse/DRILL-2630
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Relational Operators
>            Reporter: Jinfeng Ni
>            Assignee: Chris Westin
>         Attachments: complex_1.json
>
>
> We hit a run-time code compiler error when a merge join's inputs contain a complex type.
> {code}
> select * from sys.version;
> +-----------+----------------+-------------+-------------+------------+
> | commit_id | commit_message | commit_time | build_email | build_time |
> +-----------+----------------+-------------+-------------+------------+
> | 0fbcddba14405ec94d51b0ba3512925168efb433 | DRILL-2375: implement reader reset mechanism and reset reader before accessing it during projection | 30.03.2015 @ 10:27:02 PDT | j...@maprtech.com | 30.03.2015 @ 16:50:01 PDT |
> +-----------+----------------+-------------+-------------+------------+
> {code}
> {code}
> alter session set `planner.enable_hashjoin` = false;
> {code}
> {code}
> select a.id, b.oooi.oa.oab.oabc oabc, b.ooof.oa.oab oab from dfs.`/tmp/complex_1.json` a left outer join cp.`/tmp/complex_1.json` b on a.id=b.id order by a.id;
> {code}
> {code}
> +----+------+-----+
> | id | oabc | oab |
> +----+------+-----+
> Query failed: Query stopped., Line 49, Column 32: No applicable constructor/method found for actual parameters "int, int, org.apache.drill.exec.vector.complex.MapVector"; candidates are: "public void org.apache.drill.exec.vector.NullableTinyIntVector.copyFromSafe(int, int, org.apache.drill.exec.vector.NullableTinyIntVector)", "public void org.apache.drill.exec.vector.NullableTinyIntVector.copyFromSafe(int, int, org.apache.drill.exec.vector.TinyIntVector)" [ e5905a74-98d0-46d4-8090-bcf0cc710e8a on 10.250.0.8:31010 ]
> {code}
> If I switch to hash join, the query works fine. Therefore, it looks like the Merge Join operator has a bug in handling complex types.
> {code}
> alter session set `planner.enable_hashjoin` = true;
> +------+----------------------------------+
> | ok   | summary                          |
> +------+----------------------------------+
> | true | planner.enable_hashjoin updated. |
> +------+----------------------------------+
> 1 row selected (0.058 seconds)
> 0: jdbc:drill:zk=local> select a.id, b.oooi.oa.oab.oabc oabc, b.ooof.oa.oab oab from dfs.`/tmp/complex_1.json` a left outer join dfs.`/tmp/complex_1.json` b on a.id=b.id order by a.id;
> +----+------+-----------------+
> | id | oabc | oab             |
> +----+------+-----------------+
> | 1  | 1    | {"oabc":1.5678} |
> | 2  | 2    | {"oabc":2.5678} |
> +----+------+-----------------+
> 2 rows selected (0.73 seconds)
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (DRILL-2629) Initial concurrent queries executed on separate Connections fail
[ https://issues.apache.org/jira/browse/DRILL-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387664#comment-14387664 ]

Kunal Khatua commented on DRILL-2629:
-------------------------------------

The number of failures increases with the number of concurrent threads firing queries:

[root@ucs-node1 drillTester]# grep -c "ERROR PipSQuawkling executeQuery" concurrent_8493713_SF100_*thread*out | less
concurrent_8493713_SF100_1thread_20150325_1804.out:0
concurrent_8493713_SF100_2thread_20150325_1906.out:1
concurrent_8493713_SF100_4thread_20150325_2012.out:2
concurrent_8493713_SF100_8thread_20150325_2101.out:3
concurrent_8493713_SF100_12thread_20150325_2231.out:7
concurrent_8493713_SF100_16thread_20150325_2354.out:8

> Initial concurrent queries executed on separate Connections fail
> ----------------------------------------------------------------
>
>                 Key: DRILL-2629
>                 URL: https://issues.apache.org/jira/browse/DRILL-2629
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 0.8.0
>         Environment: RHEL 6.4
>            Reporter: Kunal Khatua
>             Fix For: 0.9.0
>
>
> When launching concurrent queries on multiple connections (1 query per connection) for the 1st time, some queries (which otherwise run without issue) fail with IndexOutOfBoundsException.
> Here is a sample case where 2 threads (PipSQuawkling.java) executed 2 different queries on separate SQL Connection objects.
> 2015-03-25 19:07:20 [pip1] INFO  PipSQuawkling executeTest - [ 0 / 03_par100 ] Executing query...
> Query failed: IndexOutOfBoundsException: Index: 10, Size: 7
> 2015-03-25 19:07:23 [pip1] ERROR PipSQuawkling executeQuery - [ 0 / 03_par100 ] exception while executing query: Failure while executing query.
> java.sql.SQLException: exception while executing query: Failure while executing query.
> at net.hydromatic.avatica.Helper.createException(Helper.java:40) > at > net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:406) > at > net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351) > at > net.hydromatic.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:78) > at PipSQuawkling.executeQuery(PipSQuawkling.java:284) > at PipSQuawkling.executeTest(PipSQuawkling.java:144) > at PipSQuawkling.run(PipSQuawkling.java:76) > Caused by: java.sql.SQLException: Failure while executing query. > at org.apache.drill.jdbc.DrillCursor.next(DrillCursor.java:144) > at > org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:110) > at > org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:49) > at > net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404) > ... 5 more > Caused by: org.apache.drill.exec.rpc.RpcException: IndexOutOfBoundsException: > Index: 10, Size: 7 > at > org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:157) > at > org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:93) > at > org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52) > at > org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34) > at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:57) > at > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:194) > at > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:173) > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:161) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.n
[jira] [Updated] (DRILL-2629) Initial concurrent queries executed on separate Connections fail
[ https://issues.apache.org/jira/browse/DRILL-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Khatua updated DRILL-2629:
--------------------------------
    Summary: Initial concurrent queries executed on separate Connections fail  (was: Initial concurrenct query fails)

> Initial concurrent queries executed on separate Connections fail
> ----------------------------------------------------------------
>
>                 Key: DRILL-2629
>                 URL: https://issues.apache.org/jira/browse/DRILL-2629
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 0.8.0
>         Environment: RHEL 6.4
>            Reporter: Kunal Khatua
>             Fix For: 0.9.0
>
>
> When launching concurrent queries on multiple connections (1 query per connection) for the 1st time, some queries (which otherwise run without issue) fail with IndexOutOfBoundsException.
> Here is a sample case where 2 threads (PipSQuawkling.java) executed 2 different queries on separate SQL Connection objects.
> 2015-03-25 19:07:20 [pip1] INFO  PipSQuawkling executeTest - [ 0 / 03_par100 ] Executing query...
> Query failed: IndexOutOfBoundsException: Index: 10, Size: 7
> 2015-03-25 19:07:23 [pip1] ERROR PipSQuawkling executeQuery - [ 0 / 03_par100 ] exception while executing query: Failure while executing query.
> java.sql.SQLException: exception while executing query: Failure while executing query.
>         at net.hydromatic.avatica.Helper.createException(Helper.java:40)
>         at net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:406)
>         at net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351)
>         at net.hydromatic.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:78)
>         at PipSQuawkling.executeQuery(PipSQuawkling.java:284)
>         at PipSQuawkling.executeTest(PipSQuawkling.java:144)
>         at PipSQuawkling.run(PipSQuawkling.java:76)
> Caused by: java.sql.SQLException: Failure while executing query.
> at org.apache.drill.jdbc.DrillCursor.next(DrillCursor.java:144) > at > org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:110) > at > org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:49) > at > net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404) > ... 5 more > Caused by: org.apache.drill.exec.rpc.RpcException: IndexOutOfBoundsException: > Index: 10, Size: 7 > at > org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:157) > at > org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:93) > at > org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52) > at > org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34) > at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:57) > at > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:194) > at > org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:173) > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:161) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > 
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > at > io.netty.channel.nio.NioEventLoop.p
[jira] [Created] (DRILL-2629) Initial concurrenct query fails
Kunal Khatua created DRILL-2629:
-----------------------------------

             Summary: Initial concurrenct query fails
                 Key: DRILL-2629
                 URL: https://issues.apache.org/jira/browse/DRILL-2629
             Project: Apache Drill
          Issue Type: Bug
    Affects Versions: 0.8.0
         Environment: RHEL 6.4
            Reporter: Kunal Khatua
             Fix For: 0.9.0


When launching concurrent queries on multiple connections (1 query per connection) for the 1st time, some queries (which otherwise run without issue) fail with IndexOutOfBoundsException.

Here is a sample case where 2 threads (PipSQuawkling.java) executed 2 different queries on separate SQL Connection objects.

2015-03-25 19:07:20 [pip1] INFO  PipSQuawkling executeTest - [ 0 / 03_par100 ] Executing query...
Query failed: IndexOutOfBoundsException: Index: 10, Size: 7
2015-03-25 19:07:23 [pip1] ERROR PipSQuawkling executeQuery - [ 0 / 03_par100 ] exception while executing query: Failure while executing query.
java.sql.SQLException: exception while executing query: Failure while executing query.
        at net.hydromatic.avatica.Helper.createException(Helper.java:40)
        at net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:406)
        at net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351)
        at net.hydromatic.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:78)
        at PipSQuawkling.executeQuery(PipSQuawkling.java:284)
        at PipSQuawkling.executeTest(PipSQuawkling.java:144)
        at PipSQuawkling.run(PipSQuawkling.java:76)
Caused by: java.sql.SQLException: Failure while executing query.
        at org.apache.drill.jdbc.DrillCursor.next(DrillCursor.java:144)
        at org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:110)
        at org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:49)
        at net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404)
        ... 
5 more Caused by: org.apache.drill.exec.rpc.RpcException: IndexOutOfBoundsException: Index: 10, Size: 7 at org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:157) at org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:93) at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52) at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34) at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:57) at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:194) at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:173) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:161) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) at java.lang.Thread.run(Thread.java:744) 2015-03-25 19:07:23 [pip1] INFO PipSQuawkling executeQuery - [ 0 / 03_par100 ] Executed in 2369 msec 2015-03-25 19:07:23 [
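The scenario above (multiple threads, one query per connection, only the first concurrent run failing) can be driven by a harness along these lines. This is a generic sketch with placeholder tasks, not the reporter's PipSQuawkling code; in a real reproduction each Callable would open its own Drill JDBC connection and execute one query, so any SQLException surfaces from Future.get():

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// One task per thread, mirroring "1 query per connection". The task
// bodies here are placeholders; in a real run each one would open its
// own java.sql.Connection to Drill and run a single query.
public class ConcurrentQueries {

    static List<String> runConcurrently(List<Callable<String>> tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(tasks.size());
        try {
            List<String> results = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                results.add(f.get()); // rethrows any exception a worker hit
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```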
[jira] [Created] (DRILL-2630) Merge join over inputs with complex type hit run-time code compiler error
Jinfeng Ni created DRILL-2630: - Summary: Merge join over inputs with complex type hit run-time code compiler error Key: DRILL-2630 URL: https://issues.apache.org/jira/browse/DRILL-2630 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Reporter: Jinfeng Ni Assignee: Chris Westin Hit a run-time code compiler error if we have a merge join whose inputs contain a complex type. {code} select * from sys.version; +++-+-++ | commit_id | commit_message | commit_time | build_email | build_time | +++-+-++ | 0fbcddba14405ec94d51b0ba3512925168efb433 | DRILL-2375: implement reader reset mechanism and reset reader before accessing it during projection | 30.03.2015 @ 10:27:02 PDT | j...@maprtech.com | 30.03.2015 @ 16:50:01 PDT | +++-+-++ {code} {code} alter session set `planner.enable_hashjoin` = false; {code} {code} select a.id, b.oooi.oa.oab.oabc oabc, b.ooof.oa.oab oab from dfs.`/tmp/complex_1.json` a left outer join cp.`/tmp/complex_1.json` b on a.id=b.id order by a.id; {code} {code} ++++ | id |oabc|oab | ++++ Query failed: Query stopped., Line 49, Column 32: No applicable constructor/method found for actual parameters "int, int, org.apache.drill.exec.vector.complex.MapVector"; candidates are: "public void org.apache.drill.exec.vector.NullableTinyIntVector.copyFromSafe(int, int, org.apache.drill.exec.vector.NullableTinyIntVector)", "public void org.apache.drill.exec.vector.NullableTinyIntVector.copyFromSafe(int, int, org.apache.drill.exec.vector.TinyIntVector)" [ e5905a74-98d0-46d4-8090-bcf0cc710e8a on 10.250.0.8:31010 ] {code} If I switch to hash join, the query works fine. Therefore, it looks like the Merge Join operator has a bug in handling complex types. {code} alter session set `planner.enable_hashjoin` = true; +++ | ok | summary | +++ | true | planner.enable_hashjoin updated. 
| +++ 1 row selected (0.058 seconds) 0: jdbc:drill:zk=local> select a.id, b.oooi.oa.oab.oabc oabc, b.ooof.oa.oab oab from dfs.`/tmp/complex_1.json` a left outer join dfs.`/tmp/complex_1.json` b on a.id=b.id order by a.id; ++++ | id |oabc|oab | ++++ | 1 | 1 | {"oabc":1.5678} | | 2 | 2 | {"oabc":2.5678} | ++++ 2 rows selected (0.73 seconds) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2571) convert_from fails with ' Wrong length 1(1-0) in the buffer '1', expected 4.'
[ https://issues.apache.org/jira/browse/DRILL-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rahul Challapalli updated DRILL-2571: - Description: git.commit.id.abbrev=f1b59ed Hbase : {code} create 'fewtypes_null', 'types' put 'fewtypes_null', 1, 'types:int_col', 1 {code} Now from Drill : {code} select * from fewtypes_null; +++ | row_key | types| +++ | [B@2461ae9c | {"int_col":"MQ=="} | {code} The below query fails : {code} select convert_from(a.types.int_col, 'INT') from fewtypes_null a; Query failed: RemoteRpcException: Failure while running fragment., Wrong length 1(1-0) in the buffer '1', expected 4. [ f9a3bb31-bb19-428c-8c7d-99e1898e66e7 on qa-node114.qa.lab:31010 ] [ f9a3bb31-bb19-428c-8c7d-99e1898e66e7 on qa-node114.qa.lab:31010 ] {code} I attached the complete error from the logs. Let me know if you need anything else was: git.commit.id.abbrev=f1b59ed Hbase : {code} create 'fewtypes_null', 'types' put 'fewtypes_null', 1, 'types:int_col', 1 {code} Now from Drill : {code} select * from fewtypes_null; +++ | row_key | types| +++ | [B@2461ae9c | {"int_col":"MQ=="} | The below query fails : select convert_from(a.types.int_col, 'INT') from fewtypes_null a; Query failed: RemoteRpcException: Failure while running fragment., Wrong length 1(1-0) in the buffer '1', expected 4. [ f9a3bb31-bb19-428c-8c7d-99e1898e66e7 on qa-node114.qa.lab:31010 ] [ f9a3bb31-bb19-428c-8c7d-99e1898e66e7 on qa-node114.qa.lab:31010 ] {code} I attached the complete error from the logs. Let me know if you need anything else > convert_from fails with ' Wrong length 1(1-0) in the buffer '1', expected 4.' 
> - > > Key: DRILL-2571 > URL: https://issues.apache.org/jira/browse/DRILL-2571 > Project: Apache Drill > Issue Type: Bug > Components: Storage - HBase >Reporter: Rahul Challapalli >Assignee: Aditya Kishore >Priority: Critical > Attachments: dataload.hql, error.log > > > git.commit.id.abbrev=f1b59ed > Hbase : > {code} > create 'fewtypes_null', 'types' > put 'fewtypes_null', 1, 'types:int_col', 1 > {code} > Now from Drill : > {code} > select * from fewtypes_null; > +++ > | row_key | types| > +++ > | [B@2461ae9c | {"int_col":"MQ=="} | > {code} > The below query fails : > {code} > select convert_from(a.types.int_col, 'INT') from fewtypes_null a; > Query failed: RemoteRpcException: Failure while running fragment., Wrong > length 1(1-0) in the buffer '1', expected 4. [ > f9a3bb31-bb19-428c-8c7d-99e1898e66e7 on qa-node114.qa.lab:31010 ] > [ f9a3bb31-bb19-428c-8c7d-99e1898e66e7 on qa-node114.qa.lab:31010 ] > {code} > I attached the complete error from the logs. Let me know if you need anything > else -- This message was sent by Atlassian JIRA (v6.3.4#6332)
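The reported length mismatch is visible in the scanned value itself: the base64 string `MQ==` in the `select *` output decodes to the single ASCII byte `'1'`, while `convert_from(..., 'INT')` expects a 4-byte binary integer. The HBase `put` above appears to have stored the string "1", not a serialized int. A quick check, for illustration:

```python
import base64

# The scanned HBase cell from the output above: {"int_col":"MQ=="}
raw = base64.b64decode("MQ==")
print(raw, len(raw))   # b'1' 1 -- a single ASCII byte, not a 4-byte integer
```

Storing the column with a 4-byte encoding on the HBase side (e.g. via `Bytes.toBytes(1)` instead of the shell's string literal) would be expected to satisfy the length check.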
[jira] [Created] (DRILL-2628) sqlline hangs and then asserts when trying to execute anything on a dead JDBC connection
Victoria Markman created DRILL-2628: --- Summary: sqlline hangs and then asserts when trying to execute anything on a dead JDBC connection Key: DRILL-2628 URL: https://issues.apache.org/jira/browse/DRILL-2628 Project: Apache Drill Issue Type: Bug Components: Client - JDBC Affects Versions: 0.8.0 Reporter: Victoria Markman Assignee: Daniel Barclay (Drill) Here is what I'm observing: 1. Start drill 2. Start sqlline 3. Run couple of queries 4. Bounce drill 5. Run in sqlline: "use dfs.temp" It hangs and after some time throws an exception: {code} 0: jdbc:drill:schema=dfs> use dfs.joins_views; java.lang.AssertionError at org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:111) at org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:49) at net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404) at net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351) at net.hydromatic.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:338) at net.hydromatic.avatica.AvaticaStatement.execute(AvaticaStatement.java:69) at sqlline.SqlLine$Commands.execute(SqlLine.java:3755) at sqlline.SqlLine$Commands.sql(SqlLine.java:3663) at sqlline.SqlLine.dispatch(SqlLine.java:889) at sqlline.SqlLine.begin(SqlLine.java:763) at sqlline.SqlLine.start(SqlLine.java:498) at sqlline.SqlLine.main(SqlLine.java:460) {code} Comment on a dead JDBC connection is a speculation on my part. I don't know what is actually happening. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-1784) Ignore boolean type enforcement on filter conditions during validation
[ https://issues.apache.org/jira/browse/DRILL-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanifi Gunes updated DRILL-1784: Assignee: Jinfeng Ni (was: Hanifi Gunes) > Ignore boolean type enforcement on filter conditions during validation > -- > > Key: DRILL-1784 > URL: https://issues.apache.org/jira/browse/DRILL-1784 > Project: Apache Drill > Issue Type: Improvement > Components: Query Planning & Optimization, SQL Parser >Reporter: Hanifi Gunes >Assignee: Jinfeng Ni >Priority: Minor > Fix For: 0.9.0 > > > The title should be self describing. To give some more context on this, it > would be nice if we stop boolean type enforcement on filter conditions as it > is possible to create a scenario where we don't have a concrete return type > but later bind it during execution. Currently we will need to `cast` > condition to boolean explicitly. This does not reflect the flexibility of > execution engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-1784) Ignore boolean type enforcement on filter conditions during validation
[ https://issues.apache.org/jira/browse/DRILL-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanifi Gunes updated DRILL-1784: Component/s: Query Planning & Optimization > Ignore boolean type enforcement on filter conditions during validation > -- > > Key: DRILL-1784 > URL: https://issues.apache.org/jira/browse/DRILL-1784 > Project: Apache Drill > Issue Type: Improvement > Components: Query Planning & Optimization, SQL Parser >Reporter: Hanifi Gunes >Assignee: Hanifi Gunes >Priority: Minor > Fix For: 0.9.0 > > > The title should be self describing. To give some more context on this, it > would be nice if we stop boolean type enforcement on filter conditions as it > is possible to create a scenario where we don't have a concrete return type > but later bind it during execution. Currently we will need to `cast` > condition to boolean explicitly. This does not reflect the flexibility of > execution engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
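As a concrete illustration of the enforcement being discussed (table and column names here are hypothetical), the explicit cast the description says is currently required looks like:

```sql
-- Rejected during validation today when the condition's return type is
-- not yet concrete:
--   select * from t where t.flag_col;
-- Current workaround per the description: cast the condition explicitly.
select * from t where cast(t.flag_col as boolean);
```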
[jira] [Updated] (DRILL-2627) Full outer join does not work in views when order by is present
[ https://issues.apache.org/jira/browse/DRILL-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aman Sinha updated DRILL-2627: -- Fix Version/s: 0.9.0 > Full outer join does not work in views when order by is present > --- > > Key: DRILL-2627 > URL: https://issues.apache.org/jira/browse/DRILL-2627 > Project: Apache Drill > Issue Type: New Feature > Components: Query Planning & Optimization >Affects Versions: 0.8.0 >Reporter: Victoria Markman >Assignee: Aman Sinha > Fix For: 0.9.0 > > > {code} > 0: jdbc:drill:schema=dfs> select * from t1; > ++++ > | a1 | b1 | c1 | > ++++ > | 1 | 2015-03-01 | a | > | 2 | 2015-03-02 | b | > | null | null | null | > ++++ > 3 rows selected (0.074 seconds) > 0: jdbc:drill:schema=dfs> select * from t2; > ++++ > | a2 | b2 | c2 | > ++++ > | 5 | 2017-03-01 | a | > ++++ > 1 row selected (0.056 seconds) > 0: jdbc:drill:schema=dfs> select * from t1 full outer join t2 on (t1.a1 = > t2.a2); > +++++++ > | a1 | b1 | c1 | a2 | b2 | c2 > | > +++++++ > | 1 | 2015-03-01 | a | null | null | null > | > | 2 | 2015-03-02 | b | null | null | null > | > | null | null | null | null | null | null > | > | null | null | null | 5 | 2017-03-01 | a > | > +++++++ > 4 rows selected (0.277 seconds) > 0: jdbc:drill:schema=dfs> create or replace view v2 as select cast(a2 as > integer) a2, cast(b2 as date) as b2, cast(c2 as varchar(30)) as c2 from t2 > order by a2, b2, c2; > +++ > | ok | summary | > +++ > | true | View 'v2' replaced successfully in 'dfs.test' schema | > +++ > 1 row selected (0.1 seconds) > 0: jdbc:drill:schema=dfs> create or replace view v1 as select cast(a1 as > integer) a1, cast(b1 as date) as b1, cast(c1 as varchar(30)) as c1 from t1 > order by a1, b1, c1; > +++ > | ok | summary | > +++ > | true | View 'v1' replaced successfully in 'dfs.test' schema | > +++ > 1 row selected (0.104 seconds) > {code} > Merge join plan is planned because input is sorted (order by in both views). 
> Since full outer join is not supported with merge join, we get an error. > {code} > 0: jdbc:drill:schema=dfs> select * from v1 full outer join v2 on (v1.a1 = > v2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > {code} > or subqueries > {code} > 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by > a1, b1, c1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 > order by a2, b2,c2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by > a1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 order by > a2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (DRILL-2627) Full outer join does not work in views when order by is present
[ https://issues.apache.org/jira/browse/DRILL-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387377#comment-14387377 ] Aman Sinha commented on DRILL-2627: --- I'll take a look. We should not even try to generate a merge join plan for full outer joins. > Full outer join does not work in views when order by is present > --- > > Key: DRILL-2627 > URL: https://issues.apache.org/jira/browse/DRILL-2627 > Project: Apache Drill > Issue Type: New Feature > Components: Query Planning & Optimization >Affects Versions: 0.8.0 >Reporter: Victoria Markman >Assignee: Jinfeng Ni > > {code} > 0: jdbc:drill:schema=dfs> select * from t1; > ++++ > | a1 | b1 | c1 | > ++++ > | 1 | 2015-03-01 | a | > | 2 | 2015-03-02 | b | > | null | null | null | > ++++ > 3 rows selected (0.074 seconds) > 0: jdbc:drill:schema=dfs> select * from t2; > ++++ > | a2 | b2 | c2 | > ++++ > | 5 | 2017-03-01 | a | > ++++ > 1 row selected (0.056 seconds) > 0: jdbc:drill:schema=dfs> select * from t1 full outer join t2 on (t1.a1 = > t2.a2); > +++++++ > | a1 | b1 | c1 | a2 | b2 | c2 > | > +++++++ > | 1 | 2015-03-01 | a | null | null | null > | > | 2 | 2015-03-02 | b | null | null | null > | > | null | null | null | null | null | null > | > | null | null | null | 5 | 2017-03-01 | a > | > +++++++ > 4 rows selected (0.277 seconds) > 0: jdbc:drill:schema=dfs> create or replace view v2 as select cast(a2 as > integer) a2, cast(b2 as date) as b2, cast(c2 as varchar(30)) as c2 from t2 > order by a2, b2, c2; > +++ > | ok | summary | > +++ > | true | View 'v2' replaced successfully in 'dfs.test' schema | > +++ > 1 row selected (0.1 seconds) > 0: jdbc:drill:schema=dfs> create or replace view v1 as select cast(a1 as > integer) a1, cast(b1 as date) as b1, cast(c1 as varchar(30)) as c1 from t1 > order by a1, b1, c1; > +++ > | ok | summary | > +++ > | true | View 'v1' replaced successfully in 'dfs.test' schema | > +++ > 1 row selected (0.104 seconds) > {code} > Merge join plan is 
planned because input is sorted (order by in both views). > Since full outer join is not supported with merge join, we get an error. > {code} > 0: jdbc:drill:schema=dfs> select * from v1 full outer join v2 on (v1.a1 = > v2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > {code} > or subqueries > {code} > 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by > a1, b1, c1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 > order by a2, b2,c2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by > a1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 order by > a2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (DRILL-2627) Full outer join does not work in views when order by is present
[ https://issues.apache.org/jira/browse/DRILL-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aman Sinha reassigned DRILL-2627: - Assignee: Aman Sinha (was: Jinfeng Ni) > Full outer join does not work in views when order by is present > --- > > Key: DRILL-2627 > URL: https://issues.apache.org/jira/browse/DRILL-2627 > Project: Apache Drill > Issue Type: New Feature > Components: Query Planning & Optimization >Affects Versions: 0.8.0 >Reporter: Victoria Markman >Assignee: Aman Sinha > > {code} > 0: jdbc:drill:schema=dfs> select * from t1; > ++++ > | a1 | b1 | c1 | > ++++ > | 1 | 2015-03-01 | a | > | 2 | 2015-03-02 | b | > | null | null | null | > ++++ > 3 rows selected (0.074 seconds) > 0: jdbc:drill:schema=dfs> select * from t2; > ++++ > | a2 | b2 | c2 | > ++++ > | 5 | 2017-03-01 | a | > ++++ > 1 row selected (0.056 seconds) > 0: jdbc:drill:schema=dfs> select * from t1 full outer join t2 on (t1.a1 = > t2.a2); > +++++++ > | a1 | b1 | c1 | a2 | b2 | c2 > | > +++++++ > | 1 | 2015-03-01 | a | null | null | null > | > | 2 | 2015-03-02 | b | null | null | null > | > | null | null | null | null | null | null > | > | null | null | null | 5 | 2017-03-01 | a > | > +++++++ > 4 rows selected (0.277 seconds) > 0: jdbc:drill:schema=dfs> create or replace view v2 as select cast(a2 as > integer) a2, cast(b2 as date) as b2, cast(c2 as varchar(30)) as c2 from t2 > order by a2, b2, c2; > +++ > | ok | summary | > +++ > | true | View 'v2' replaced successfully in 'dfs.test' schema | > +++ > 1 row selected (0.1 seconds) > 0: jdbc:drill:schema=dfs> create or replace view v1 as select cast(a1 as > integer) a1, cast(b1 as date) as b1, cast(c1 as varchar(30)) as c1 from t1 > order by a1, b1, c1; > +++ > | ok | summary | > +++ > | true | View 'v1' replaced successfully in 'dfs.test' schema | > +++ > 1 row selected (0.104 seconds) > {code} > Merge join plan is planned because input is sorted (order by in both views). 
> Since full outer join is not supported with merge join, we get an error. > {code} > 0: jdbc:drill:schema=dfs> select * from v1 full outer join v2 on (v1.a1 = > v2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > {code} > or subqueries > {code} > 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by > a1, b1, c1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 > order by a2, b2,c2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by > a1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 order by > a2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); > Query failed: IllegalArgumentException: Full outer join not currently > supported > Error: exception while executing query: Failure while executing query. > (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (DRILL-2625) org.apache.drill.common.StackTrace should follow standard stacktrace format
[ https://issues.apache.org/jira/browse/DRILL-2625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387342#comment-14387342 ] Daniel Barclay (Drill) commented on DRILL-2625: --- Eclipse. (And Emacs, and presumably other IDEs too.) > org.apache.drill.common.StackTrace should follow standard stacktrace format > --- > > Key: DRILL-2625 > URL: https://issues.apache.org/jira/browse/DRILL-2625 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > org.apache.drill.common.StackTrace uses a different textual format than JDK's > standard format for stack traces. > It should probably use the standard format so that its stack trace output can > be used by tools that already can parse the standard format to provide > functionality such as displaying the corresponding source. > (After correcting for DRILL-2624, StackTrace formats stack traces like this: > org.apache.drill.common.StackTrace.&lt;init&gt;:1 > org.apache.drill.exec.server.Drillbit.run:20 > org.apache.drill.jdbc.DrillConnectionImpl.&lt;init&gt;:232 > The normal form is like this: > at > org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) > at > org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) > at com.google.common.io.Closeables.close(Closeables.java:77) > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
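The "normal form" quoted in the issue is exactly what `StackTraceElement.toString()` already produces, so a standard-format writer reduces to a few lines. This is a minimal sketch of the idea, not the actual fix for Drill's StackTrace class:

```java
// Renders frames the way Throwable.printStackTrace() does:
// "\tat pkg.Class.method(File.java:123)" per line, which Eclipse, Emacs,
// and similar tools can parse to jump to the corresponding source.
public class StandardTrace {

    static String format(StackTraceElement[] frames) {
        StringBuilder sb = new StringBuilder();
        for (StackTraceElement e : frames) {
            // StackTraceElement.toString() yields the standard
            // "pkg.Class.method(File.java:123)" form.
            sb.append("\tat ").append(e).append('\n');
        }
        return sb.toString();
    }
}
```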
[jira] [Created] (DRILL-2627) Full outer join does not work in views when order by is present
Victoria Markman created DRILL-2627: --- Summary: Full outer join does not work in views when order by is present Key: DRILL-2627 URL: https://issues.apache.org/jira/browse/DRILL-2627 Project: Apache Drill Issue Type: New Feature Components: Query Planning & Optimization Affects Versions: 0.8.0 Reporter: Victoria Markman Assignee: Jinfeng Ni {code} 0: jdbc:drill:schema=dfs> select * from t1; ++++ | a1 | b1 | c1 | ++++ | 1 | 2015-03-01 | a | | 2 | 2015-03-02 | b | | null | null | null | ++++ 3 rows selected (0.074 seconds) 0: jdbc:drill:schema=dfs> select * from t2; ++++ | a2 | b2 | c2 | ++++ | 5 | 2017-03-01 | a | ++++ 1 row selected (0.056 seconds) 0: jdbc:drill:schema=dfs> select * from t1 full outer join t2 on (t1.a1 = t2.a2); +++++++ | a1 | b1 | c1 | a2 | b2 | c2 | +++++++ | 1 | 2015-03-01 | a | null | null | null | | 2 | 2015-03-02 | b | null | null | null | | null | null | null | null | null | null | | null | null | null | 5 | 2017-03-01 | a | +++++++ 4 rows selected (0.277 seconds) 0: jdbc:drill:schema=dfs> create or replace view v2 as select cast(a2 as integer) a2, cast(b2 as date) as b2, cast(c2 as varchar(30)) as c2 from t2 order by a2, b2, c2; +++ | ok | summary | +++ | true | View 'v2' replaced successfully in 'dfs.test' schema | +++ 1 row selected (0.1 seconds) 0: jdbc:drill:schema=dfs> create or replace view v1 as select cast(a1 as integer) a1, cast(b1 as date) as b1, cast(c1 as varchar(30)) as c1 from t1 order by a1, b1, c1; +++ | ok | summary | +++ | true | View 'v1' replaced successfully in 'dfs.test' schema | +++ 1 row selected (0.104 seconds) {code} Merge join plan is planned because input is sorted (order by in both views). Since full outer join is not supported with merge join, we get an error. {code} 0: jdbc:drill:schema=dfs> select * from v1 full outer join v2 on (v1.a1 = v2.a2); Query failed: IllegalArgumentException: Full outer join not currently supported Error: exception while executing query: Failure while executing query. 
(state=,code=0) {code} or subqueries {code} 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by a1, b1, c1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 order by a2, b2,c2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); Query failed: IllegalArgumentException: Full outer join not currently supported Error: exception while executing query: Failure while executing query. (state=,code=0) 0: jdbc:drill:schema=dfs> select * from (select a1, b1, c1 from t1 order by a1) as sq1(a1, b1, c1) full outer join (select a2, b2, c2 from t2 order by a2) as sq2(a2,b2,c2) on (sq1.a1 = sq2.a2); Query failed: IllegalArgumentException: Full outer join not currently supported Error: exception while executing query: Failure while executing query. (state=,code=0) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
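Since the merge-join plan is chosen only because the views' ORDER BY makes the inputs sorted, one plausible session-level workaround (untested here, and assuming `planner.enable_mergejoin` is the relevant option) is to disable merge join so the planner falls back to hash join:

```sql
alter session set `planner.enable_mergejoin` = false;
select * from v1 full outer join v2 on (v1.a1 = v2.a2);
```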
[jira] [Updated] (DRILL-2586) document data type formatting functions
[ https://issues.apache.org/jira/browse/DRILL-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristine Hahn updated DRILL-2586: - Description: Add Bridget's MicroStrategy docs/images, add Kris's function files, Daniel's reorg of developer info, misc. fixes (was: Add Bridget's MicroStrategy docs/images, add Kris's function files, misc. fixes) > document data type formatting functions > --- > > Key: DRILL-2586 > URL: https://issues.apache.org/jira/browse/DRILL-2586 > Project: Apache Drill > Issue Type: Task > Components: Documentation >Affects Versions: 0.8.0 >Reporter: Kristine Hahn >Assignee: Kristine Hahn > > Add Bridget's MicroStrategy docs/images, add Kris's function files, Daniel's > reorg of developer info, misc. fixes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2586) document data type formatting functions
[ https://issues.apache.org/jira/browse/DRILL-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristine Hahn updated DRILL-2586: - Description: Add Bridget's MicroStrategy docs/images, add Kris's data type formatting functions, Daniel's reorg of developer info, misc. fixes (was: Add Bridget's MicroStrategy docs/images, add Kris's function files, Daniel's reorg of developer info, misc. fixes) > document data type formatting functions > --- > > Key: DRILL-2586 > URL: https://issues.apache.org/jira/browse/DRILL-2586 > Project: Apache Drill > Issue Type: Task > Components: Documentation >Affects Versions: 0.8.0 >Reporter: Kristine Hahn >Assignee: Kristine Hahn > > Add Bridget's MicroStrategy docs/images, add Kris's data type formatting > functions, Daniel's reorg of developer info, misc. fixes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2586) document data type formatting functions
[ https://issues.apache.org/jira/browse/DRILL-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristine Hahn updated DRILL-2586: - Description: Add Bridget's MicroStrategy docs/images, add Kris's function files, misc. fixes > document data type formatting functions > --- > > Key: DRILL-2586 > URL: https://issues.apache.org/jira/browse/DRILL-2586 > Project: Apache Drill > Issue Type: Task > Components: Documentation >Affects Versions: 0.8.0 >Reporter: Kristine Hahn >Assignee: Kristine Hahn > > Add Bridget's MicroStrategy docs/images, add Kris's function files, misc. > fixes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (DRILL-2375) project more than one column from nested array causes indexoutofbounds exception
[ https://issues.apache.org/jira/browse/DRILL-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mehant Baid resolved DRILL-2375. Resolution: Fixed Fixed in 0fbcddba14405ec94d51b0ba3512925168efb433 > project more than one column from nested array causes indexoutofbounds > exception > > > Key: DRILL-2375 > URL: https://issues.apache.org/jira/browse/DRILL-2375 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Data Types >Affects Versions: 0.8.0 >Reporter: Chun Chang >Assignee: Hanifi Gunes >Priority: Blocker > Fix For: 0.9.0 > > > #Wed Feb 25 17:07:31 EST 2015 > git.commit.id.abbrev=f7ef5ec > I have nested array in a json file looks like this: > {code} > "aaa":[[["aa0 1"], ["ab0 1"]], [["ba0 1"], ["bb0 1"]],[["ca0 1", "ca1 > 1"],["cb0 1", "cb1 1", "cb2 1"]]] > {code} > Following query causes index out of bound exception: > {code} > 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select t.id, t.aaa[0], > t.aaa[1] from `complex.json` t limit 5; > Query failed: RemoteRpcException: Failure while running fragment., index: -4, > length: 4 (expected: range(0, 16384)) [ cc383967-6db8-459d-86fe-564d57f7c016 > on qa-node120.qa.lab:31010 ] > [ cc383967-6db8-459d-86fe-564d57f7c016 on qa-node120.qa.lab:31010 ] > Error: exception while executing query: Failure while executing query. 
> (state=,code=0) > {code} > drillbit.log > {code} > 2015-03-03 18:37:17,650 [2b099022-67e8-74b5-f68a-950fe3fe9375:frag:0:0] WARN > o.a.d.e.w.fragment.FragmentExecutor - Error while initializing or executing > fragment > java.lang.IndexOutOfBoundsException: index: -4, length: 4 (expected: range(0, > 16384)) > at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:156) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:4.0.24.Final] > at io.netty.buffer.DrillBuf.chk(DrillBuf.java:178) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:4.0.24.Final] > at io.netty.buffer.DrillBuf.getInt(DrillBuf.java:447) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:4.0.24.Final] > at > org.apache.drill.exec.vector.UInt4Vector$Accessor.get(UInt4Vector.java:309) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.vector.complex.RepeatedListVector$RepeatedListAccessor.get(RepeatedListVector.java:195) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.vector.complex.impl.RepeatedListReaderImpl.setPosition(RepeatedListReaderImpl.java:79) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.vector.complex.impl.RepeatedListReaderImpl.setPosition(RepeatedListReaderImpl.java:86) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.test.generated.ProjectorGen80550.doEval(ProjectorTemplate.java:106) > ~[na:na] > at > org.apache.drill.exec.test.generated.ProjectorGen80550.projectRecords(ProjectorTemplate.java:62) > ~[na:na] > at > org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:174) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:113) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142) > ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-S
[jira] [Assigned] (DRILL-2625) org.apache.drill.common.StackTrace should follow standard stacktrace format
[ https://issues.apache.org/jira/browse/DRILL-2625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Westin reassigned DRILL-2625: --- Assignee: Chris Westin > org.apache.drill.common.StackTrace should follow standard stacktrace format > --- > > Key: DRILL-2625 > URL: https://issues.apache.org/jira/browse/DRILL-2625 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > org.apache.drill.common.StackTrace uses a different textual format than JDK's > standard format for stack traces. > It should probably use the standard format so that its stack trace output can > be used by tools that already can parse the standard format to provide > functionality such as displaying the corresponding source. > (After correcting for DRILL-2624, StackTrace formats stack traces like this: > org.apache.drill.common.StackTrace.&lt;init&gt;:1 > org.apache.drill.exec.server.Drillbit.run:20 > org.apache.drill.jdbc.DrillConnectionImpl.&lt;init&gt;:232 > The normal form is like this: > at > org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) > at > org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) > at com.google.common.io.Closeables.close(Closeables.java:77) > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (DRILL-2624) org.apache.drill.common.StackTrace prints garbage for line numbers
[ https://issues.apache.org/jira/browse/DRILL-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Westin reassigned DRILL-2624: --- Assignee: Chris Westin > org.apache.drill.common.StackTrace prints garbage for line numbers > -- > > Key: DRILL-2624 > URL: https://issues.apache.org/jira/browse/DRILL-2624 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > org.apache.drill.common.StackTrace's write(...) method prints irrelevant > characters instead of line numbers, for example: > org.apache.drill.common.StackTrace.:$ > org.apache.drill.exec.server.Drillbit.run:ᅢᄉ > org.apache.drill.jdbc.DrillConnectionImpl.:[ > org.apache.drill.jdbc.DrillJdbc41Factory$DrillJdbc41Connection.:^ > org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection:9 > org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection:^A > org.apache.drill.jdbc.DrillFactory.newConnection:6 > net.hydromatic.avatica.UnregisteredDriver.connect:~ > java.sql.DriverManager.getConnection:ᄏ > java.sql.DriverManager.getConnection:ᅡᄏ > ... > The problem is that somebody passed a line number to Writer.write(int > c)--which takes an integer _representing a character_, *not* an integer to > represent as a string of characters. (Writer's write(...) methods are not > like PrintWriter's and PrintStream's print(...) methods.) > Additionally, a meta-problem is that apparently it was never verified that > the code actually worked. We need to execute the code and verify that it > works *at least once* before checking it in. > A second meta-problem is that there is no unit test for the code. We should > have unit tests for most code--especially code that is isolated and easy to > test as this class seems to be. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
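The Writer.write(int) pitfall described in this report is easy to reproduce in isolation. A minimal sketch using plain JDK classes (not Drill code) contrasting the buggy call with the fix:

```java
import java.io.StringWriter;
import java.io.Writer;

public class WriteIntPitfall {
    public static void main(String[] args) throws Exception {
        int lineNumber = 65;

        // Buggy: Writer.write(int) interprets its argument as a character
        // code, so the line number 65 comes out as the letter 'A'.
        Writer buggy = new StringWriter();
        buggy.write(lineNumber);
        System.out.println(buggy);   // prints "A"

        // Fixed: convert the number to its decimal string first, then write it.
        Writer fixed = new StringWriter();
        fixed.write(Integer.toString(lineNumber));
        System.out.println(fixed);   // prints "65"
    }
}
```

This is exactly the distinction the report draws: PrintWriter.print(int) formats the number as digits, while Writer.write(int) writes a single character.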
[jira] [Commented] (DRILL-2625) org.apache.drill.common.StackTrace should follow standard stacktrace format
[ https://issues.apache.org/jira/browse/DRILL-2625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387243#comment-14387243 ] Chris Westin commented on DRILL-2625: - What's an example of a tool that parses stack traces? > org.apache.drill.common.StackTrace should follow standard stacktrace format > --- > > Key: DRILL-2625 > URL: https://issues.apache.org/jira/browse/DRILL-2625 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > org.apache.drill.common.StackTrace uses a different textual format than JDK's > standard format for stack traces. > It should probably use the standard format so that its stack trace output can > be used by tools that already can parse the standard format to provide > functionality such as displaying the corresponding source. > (After correcting for DRILL-2624, StackTrace formats stack traces like this: > org.apache.drill.common.StackTrace.:1 > org.apache.drill.exec.server.Drillbit.run:20 > org.apache.drill.jdbc.DrillConnectionImpl.:232 > The normal form is like this: > at > org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) > at > org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) > at com.google.common.io.Closeables.close(Closeables.java:77) > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
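One practical answer to the question above: IDE consoles and log analyzers hyperlink frames to source by pattern-matching the standard format. A rough sketch of such a parser (the regex is illustrative, not taken from any particular tool):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FrameParser {
    // Standard-format frame: whitespace, "at", declaring class, ".",
    // method, "(", file, ":", line number, ")".
    static final Pattern FRAME =
        Pattern.compile("\\s*at\\s+([\\w.$]+)\\.([\\w$<>]+)\\(([^:()]+):(\\d+)\\)");

    public static void main(String[] args) {
        Matcher m = FRAME.matcher(
            "\tat org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162)");
        if (m.matches()) {
            System.out.println(m.group(1)); // declaring class
            System.out.println(m.group(2)); // method: close
            System.out.println(m.group(3)); // file: TopLevelAllocator.java
            System.out.println(m.group(4)); // line: 162
        }
    }
}
```

The custom "Class.method:line" format StackTrace emits would not match such patterns, which is the interoperability cost the issue describes.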
[jira] [Updated] (DRILL-2625) org.apache.drill.common.StackTrace should follow standard stacktrace format
[ https://issues.apache.org/jira/browse/DRILL-2625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Westin updated DRILL-2625: Affects Version/s: 0.8.0 > org.apache.drill.common.StackTrace should follow standard stacktrace format > --- > > Key: DRILL-2625 > URL: https://issues.apache.org/jira/browse/DRILL-2625 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > org.apache.drill.common.StackTrace uses a different textual format than JDK's > standard format for stack traces. > It should probably use the standard format so that its stack trace output can > be used by tools that already can parse the standard format to provide > functionality such as displaying the corresponding source. > (After correcting for DRILL-2624, StackTrace formats stack traces like this: > org.apache.drill.common.StackTrace.:1 > org.apache.drill.exec.server.Drillbit.run:20 > org.apache.drill.jdbc.DrillConnectionImpl.:232 > The normal form is like this: > at > org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) > at > org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) > at com.google.common.io.Closeables.close(Closeables.java:77) > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2624) org.apache.drill.common.StackTrace prints garbage for line numbers
[ https://issues.apache.org/jira/browse/DRILL-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Westin updated DRILL-2624: Affects Version/s: 0.8.0 > org.apache.drill.common.StackTrace prints garbage for line numbers > -- > > Key: DRILL-2624 > URL: https://issues.apache.org/jira/browse/DRILL-2624 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > org.apache.drill.common.StackTrace's write(...) method prints irrelevant > characters instead of line numbers, for example: > org.apache.drill.common.StackTrace.:$ > org.apache.drill.exec.server.Drillbit.run:ᅢᄉ > org.apache.drill.jdbc.DrillConnectionImpl.:[ > org.apache.drill.jdbc.DrillJdbc41Factory$DrillJdbc41Connection.:^ > org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection:9 > org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection:^A > org.apache.drill.jdbc.DrillFactory.newConnection:6 > net.hydromatic.avatica.UnregisteredDriver.connect:~ > java.sql.DriverManager.getConnection:ᄏ > java.sql.DriverManager.getConnection:ᅡᄏ > ... > The problem is that somebody passed a line number to Writer.write(int > c)--which takes an integer _representing a character_, *not* an integer to > represent as a string of characters. (Writer's write(...) methods are not > like PrintWriter's and PrintStream's print(...) methods.) > Additionally, a meta-problem is that apparently it was never verified that > the code actually worked. We need to execute the code and verify that it > works *at least once* before checking it in. > A second meta-problem is that there is no unit test for the code. We should > have unit tests for most code--especially code that is isolated and easy to > test as this class seems to be. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (DRILL-2626) org.apache.drill.common.StackTrace seems to have duplicate code; should we re-use Throwable's code?
[ https://issues.apache.org/jira/browse/DRILL-2626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Westin reassigned DRILL-2626: --- Assignee: Chris Westin > org.apache.drill.common.StackTrace seems to have duplicate code; should we > re-use Throwable's code? > --- > > Key: DRILL-2626 > URL: https://issues.apache.org/jira/browse/DRILL-2626 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > It seems that class org.apache.drill.common.StackTrace needlessly duplicates > code that's already in the JDK. > In particular, it has code to format the stack trace. That seems at least > mostly redundant with the formatting code already in java.lang.Throwable. > StackTrace does have a comment about eliminating the StackTrace constructor > from the stack trace. However, StackTrace does _not_ actually eliminate its > constructor from the stack trace (e.g., its stack traces start with > "org.apache.drill.common.StackTrace.:..."). > Should StackTrace be implemented by simply subclassing Throwable? > That would eliminate StackTrace's current formatting code (which would also > eliminate the difference between StackTrace's format and the standard format). > That should also eliminate having the StackTrace constructor's stack frame > show up in the stack trace. (Throwable's constructor/fillInStackTrace > already handles that.) > (Having "StackTrace extends Throwable" isn't ideal, since StackTrace is not > intended to be a kind of exception, but that would probably be better than > the current form, given the bugs StackTrace has/has had (DRILL-2624, > DRILL-2625). > That non-ideal subclassing could be eliminated by having a member variable of > type Throwable that is constructed during StackTrace's construction, although > that would either cause the StackTrace constructor to re-appear in the stack > trace or require a non-trivial workaround to re-eliminate it. 
> Perhaps client code should simply use "new Throwable()" to capture the stack > trace and static methods on a utility class to format the stack trace into > a String.) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
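The last suggestion above can be sketched in a few lines; the class and method names here are hypothetical illustrations, not existing Drill APIs:

```java
// Hypothetical utility along the lines suggested above: callers capture with
// "new Throwable()" at the point of interest, and a static helper renders it.
public final class StackTraces {
    private StackTraces() {}

    // Renders each frame in the JDK's standard "\tat Class.method(File.java:123)"
    // form by reusing StackTraceElement.toString(), so no custom formatting
    // (and none of the DRILL-2624/2625 style bugs) is needed.
    public static String format(Throwable capture) {
        StringBuilder sb = new StringBuilder();
        for (StackTraceElement frame : capture.getStackTrace()) {
            sb.append("\tat ").append(frame).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(format(new Throwable()));
    }
}
```

Because the capture site is the caller's "new Throwable()", the StackTrace constructor frame never appears; skipping additional frames, if desired, is a simple loop-index change.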
[jira] [Updated] (DRILL-2626) org.apache.drill.common.StackTrace seems to have duplicate code; should we re-use Throwable's code?
[ https://issues.apache.org/jira/browse/DRILL-2626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Westin updated DRILL-2626: Affects Version/s: 0.8.0 > org.apache.drill.common.StackTrace seems to have duplicate code; should we > re-use Throwable's code? > --- > > Key: DRILL-2626 > URL: https://issues.apache.org/jira/browse/DRILL-2626 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.8.0 >Reporter: Daniel Barclay (Drill) >Assignee: Chris Westin > > It seems that class org.apache.drill.common.StackTrace needlessly duplicates > code that's already in the JDK. > In particular, it has code to format the stack trace. That seems at least > mostly redundant with the formatting code already in java.lang.Throwable. > StackTrace does have a comment about eliminating the StackTrace constructor > from the stack trace. However, StackTrace does _not_ actually eliminate its > constructor from the stack trace (e.g., its stack traces start with > "org.apache.drill.common.StackTrace.:..."). > Should StackTrace be implemented by simply subclassing Throwable? > That would eliminate StackTrace's current formatting code (which would also > eliminate the difference between StackTrace's format and the standard format). > That should also eliminate having the StackTrace constructor's stack frame > show up in the stack trace. (Throwable's constructor/fillInStackTrace > already handles that.) > (Having "StackTrace extends Throwable" isn't ideal, since StackTrace is not > intended to be a kind of exception, but that would probably be better than > the current form, given the bugs StackTrace has/has had (DRILL-2624, > DRILL-2625). > That non-ideal subclassing could be eliminated by having a member variable of > type Throwable that is constructed during StackTrace's construction, although > that would either cause the StackTrace constructor to re-appear in the stack > trace or require a non-trivial workaround to re-eliminate it. 
> Perhaps client code should simply use "new Throwable()" to capture the stack > trace and static methods on a utility class to format the stack trace into > a String.) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2383) add exception and pause injections for testing drillbit stability
[ https://issues.apache.org/jira/browse/DRILL-2383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheesh Katkam updated DRILL-2383: --- Description: Use the exception injection mechanism to add exception injections to test a variety of distributed failure scenarios. Here are some scenarios we've worked out before: 1. Cancellation: TC1: cancel before any result set is returned TC2: cancel in the middle of fetching result set TC3: cancel after all result set are produced but not all are fetched TC4: cancel after everything is completed and fetched As test setup, we need: - query dataset large enough to be sent to different drillbits, e.g., TPCH 100 - queries that force multiple drillbits to work on them; e.g., count ... group by 2. Completed (in each case check all drillbits are still up and running): TC1: success TC2: failed query - before query is executed - while sql parsing TC3: failed query - before query is executed - while sending fragments to other drillbits for execution TC4: failed query - during query execution It is currently not possible to create a scenario in which a query may hang. To check all drillbits up and running and in a clean state, run: -select count(*) from sys.drillbits;- {code} select count(*) from sys.memory; {code} was: Use the exception injection mechanism to add exception injections to test a variety of distributed failure scenarios. Here are some scenarios we've worked out before: 1. Cancellation: TC1: cancel before any result set is returned TC2: cancel in the middle of fetching result set TC3: cancel after all result set are produced but not all are fetched TC4: cancel after everything is completed and fetched As test setup, we need: - query dataset large enough to be sent to different drillbits, e.g., TPCH 100 - queries that force multiple drillbits to work on them; e.g., count ... group by 2. 
Completed (in each case check all drillbits are still up and running): TC1: success TC2: failed query - before query is executed - while sql parsing TC3: failed query - before query is executed - while sending fragments to other drillbits for execution TC4: failed query - during query execution It is currently not possible to create a scenario in which a query may hang. To check all drillbits up and running and in a clean state, run: select count(*) from sys.drillbits; > add exception and pause injections for testing drillbit stability > - > > Key: DRILL-2383 > URL: https://issues.apache.org/jira/browse/DRILL-2383 > Project: Apache Drill > Issue Type: New Feature > Components: Execution - Flow >Reporter: Chris Westin >Assignee: Sudheesh Katkam > Fix For: 0.9.0 > > > Use the exception injection mechanism to add exception injections to test a > variety of distributed failure scenarios. > Here are some scenarios we've worked out before: > 1. Cancellation: > TC1: cancel before any result set is returned > TC2: cancel in the middle of fetching result set > TC3: cancel after all result set are produced but not all are fetched > TC4: cancel after everything is completed and fetched > As test setup, we need: > - query dataset large enough to be sent to different drillbits, e.g., TPCH > 100 > - queries that force multiple drillbits to work on them; e.g., count ... > group by > 2. Completed (in each case check all drillbits are still up and running): > TC1: success > TC2: failed query - before query is executed - while sql parsing > TC3: failed query - before query is executed - while sending fragments to > other drillbits for execution > TC4: failed query - during query execution > It is currently not possible to create a scenario in which a query may hang. > To check all drillbits up and running and in a clean state, run: > -select count(*) from sys.drillbits;- > {code} > select count(*) from sys.memory; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (DRILL-2625) org.apache.drill.common.StackTrace should follow standard stacktrace format
Daniel Barclay (Drill) created DRILL-2625: - Summary: org.apache.drill.common.StackTrace should follow standard stacktrace format Key: DRILL-2625 URL: https://issues.apache.org/jira/browse/DRILL-2625 Project: Apache Drill Issue Type: Bug Reporter: Daniel Barclay (Drill) org.apache.drill.common.StackTrace uses a different textual format than JDK's standard format for stack traces. It should probably use the standard format so that its stack trace output can be used by tools that already can parse the standard format to provide functionality such as displaying the corresponding source. (After correcting for DRILL-, StackTrace formats stack traces like this: org.apache.drill.common.StackTrace.:1 org.apache.drill.exec.server.Drillbit.run:20 org.apache.drill.jdbc.DrillConnectionImpl.:232 The normal form is like this: at org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) at org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) at com.google.common.io.Closeables.close(Closeables.java:77) ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2626) org.apache.drill.common.StackTrace seems to have duplicate code; should we re-use Throwable's code?
[ https://issues.apache.org/jira/browse/DRILL-2626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Barclay (Drill) updated DRILL-2626: -- Description: It seems that class org.apache.drill.common.StackTrace needlessly duplicates code that's already in the JDK. In particular, it has code to format the stack trace. That seems at least mostly redundant with the formatting code already in java.lang.Throwable. StackTrace does have a comment about eliminating the StackTrace constructor from the stack trace. However, StackTrace does _not_ actually eliminate its constructor from the stack trace (e.g., its stack traces start with "org.apache.drill.common.StackTrace.:..."). Should StackTrace be implemented by simply subclassing Throwable? That would eliminate StackTrace's current formatting code (which would also eliminate the difference between StackTrace's format and the standard format). That should also eliminate having the StackTrace constructor's stack frame show up in the stack trace. (Throwable's constructor/fillInStackTrace already handles that.) (Having "StackTrace extends Throwable" isn't ideal, since StackTrace is not intended to be a kind of exception, but that would probably be better than the current form, given the bugs StackTrace has/has had (DRILL-2624, DRILL-2625). That non-ideal subclassing could be eliminated by having a member variable of type Throwable that is constructed during StackTrace's construction, although that would either cause the StackTrace constructor to re-appear in the stack trace or require a non-trivial workaround to re-eliminate it. Perhaps client code should simply use "new Throwable()" to capture the stack trace and static methods on a utility class to format the stack trace into a String.) was: It seems that class org.apache.drill.common.StackTrace needlessly duplicates code that's already in the JDK. In particular, it has code to format the stack trace. 
That seems at least mostly redundant with the formatting code already in java.lang.Throwable. StackTrace does have a comment about eliminating the StackTrace constructor from the stack trace. However, StackTrace does _not_ actually eliminate its constructor from the stack trace (e.g., its stack traces start with "org.apache.drill.common.StackTrace.:..."). Should StackTrace be implemented by simply subclassing Throwable? That would eliminate StackTrace's current formatting code (which would also eliminate the difference between StackTrace's format and the standard format). That should also eliminate having the StackTrace constructor's stack frame show up in the stack trace. (Throwable's constructor/fillInStackTrace already handles that.) (Having "StackTrace extends Throwable" isn't ideal, since StackTrace is not intended to be a kind of exception, but that would probably be better than the current form, given the bugs StackTrace has/has had (DRILL-x, DRILL-). That non-ideal subclassing could be eliminated by having a member variable of type Throwable that is constructed during StackTrace's construction, although that would either cause the StackTrace constructor to re-appear in the stack trace or require a non-trivial workaround to re-eliminate it. Perhaps client code should simply use "new Throwable()" to capture the stack trace and static methods on a utility class to format the stack trace into a String.) > org.apache.drill.common.StackTrace seems to have duplicate code; should we > re-use Throwable's code? > --- > > Key: DRILL-2626 > URL: https://issues.apache.org/jira/browse/DRILL-2626 > Project: Apache Drill > Issue Type: Bug >Reporter: Daniel Barclay (Drill) > > It seems that class org.apache.drill.common.StackTrace needlessly duplicates > code that's already in the JDK. > In particular, it has code to format the stack trace. That seems at least > mostly redundant with the formatting code already in java.lang.Throwable. 
> StackTrace does have a comment about eliminating the StackTrace constructor > from the stack trace. However, StackTrace does _not_ actually eliminate its > constructor from the stack trace (e.g., its stack traces start with > "org.apache.drill.common.StackTrace.:..."). > Should StackTrace be implemented by simply subclassing Throwable? > That would eliminate StackTrace's current formatting code (which would also > eliminate the difference between StackTrace's format and the standard format). > That should also eliminate having the StackTrace constructor's stack frame > show up in the stack trace. (Throwable's constructor/fillInStackTrace > already handles that.) > (Having "StackTrace extends Throwable" isn't ideal, since StackTrace is not > intended to be a kind of excep
[jira] [Created] (DRILL-2624) org.apache.drill.common.StackTrace prints garbage for line numbers
Daniel Barclay (Drill) created DRILL-2624: - Summary: org.apache.drill.common.StackTrace prints garbage for line numbers Key: DRILL-2624 URL: https://issues.apache.org/jira/browse/DRILL-2624 Project: Apache Drill Issue Type: Bug Reporter: Daniel Barclay (Drill) org.apache.drill.common.StackTrace's write(...) method prints irrelevant characters instead of line numbers, for example: org.apache.drill.common.StackTrace.:$ org.apache.drill.exec.server.Drillbit.run:ᅢᄉ org.apache.drill.jdbc.DrillConnectionImpl.:[ org.apache.drill.jdbc.DrillJdbc41Factory$DrillJdbc41Connection.:^ org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection:9 org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection:^A org.apache.drill.jdbc.DrillFactory.newConnection:6 net.hydromatic.avatica.UnregisteredDriver.connect:~ java.sql.DriverManager.getConnection:ᄏ java.sql.DriverManager.getConnection:ᅡᄏ ... The problem is that somebody passed a line number to Writer.write(int c)--which takes an integer _representing a character_, *not* an integer to represent as a string of characters. (Writer's write(...) methods are not like PrintWriter's and PrintStream's print(...) methods.) Additionally, a meta-problem is that apparently it was never verified that the code actually worked. We need to execute the code and verify that it works *at least once* before checking it in. A second meta-problem is that there is no unit test for the code. We should have unit tests for most code--especially code that is isolated and easy to test as this class seems to be. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (DRILL-2626) org.apache.drill.common.StackTrace seems to have duplicate code; should we re-use Throwable's code?
Daniel Barclay (Drill) created DRILL-2626: - Summary: org.apache.drill.common.StackTrace seems to have duplicate code; should we re-use Throwable's code? Key: DRILL-2626 URL: https://issues.apache.org/jira/browse/DRILL-2626 Project: Apache Drill Issue Type: Bug Reporter: Daniel Barclay (Drill) It seems that class org.apache.drill.common.StackTrace needlessly duplicates code that's already in the JDK. In particular, it has code to format the stack trace. That seems at least mostly redundant with the formatting code already in java.lang.Throwable. StackTrace does have a comment about eliminating the StackTrace constructor from the stack trace. However, StackTrace does _not_ actually eliminate its constructor from the stack trace (e.g., its stack traces start with "org.apache.drill.common.StackTrace.:..."). Should StackTrace be implemented by simply subclassing Throwable? That would eliminate StackTrace's current formatting code (which would also eliminate the difference between StackTrace's format and the standard format). That should also eliminate having the StackTrace constructor's stack frame show up in the stack trace. (Throwable's constructor/fillInStackTrace already handles that.) (Having "StackTrace extends Throwable" isn't ideal, since StackTrace is not intended to be a kind of exception, but that would probably be better than the current form, given the bugs StackTrace has/has had (DRILL-x, DRILL-). That non-ideal subclassing could be eliminated by having a member variable of type Throwable that is constructed during StackTrace's construction, although that would either cause the StackTrace constructor to re-appear in the stack trace or require a non-trivial workaround to re-eliminate it. Perhaps client code should simply use "new Throwable()" to capture the stack trace and static methods on a utility class to format the stack trace into a String.) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2625) org.apache.drill.common.StackTrace should follow standard stacktrace format
[ https://issues.apache.org/jira/browse/DRILL-2625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Barclay (Drill) updated DRILL-2625: -- Description: org.apache.drill.common.StackTrace uses a different textual format than JDK's standard format for stack traces. It should probably use the standard format so that its stack trace output can be used by tools that already can parse the standard format to provide functionality such as displaying the corresponding source. (After correcting for DRILL-2624, StackTrace formats stack traces like this: org.apache.drill.common.StackTrace.:1 org.apache.drill.exec.server.Drillbit.run:20 org.apache.drill.jdbc.DrillConnectionImpl.:232 The normal form is like this: at org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) at org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) at com.google.common.io.Closeables.close(Closeables.java:77) ) was: org.apache.drill.common.StackTrace uses a different textual format than JDK's standard format for stack traces. It should probably use the standard format so that its stack trace output can be used by tools that already can parse the standard format to provide functionality such as displaying the corresponding source. 
(After correcting for DRILL-, StackTrace formats stack traces like this: org.apache.drill.common.StackTrace.:1 org.apache.drill.exec.server.Drillbit.run:20 org.apache.drill.jdbc.DrillConnectionImpl.:232 The normal form is like this: at org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) at org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) at com.google.common.io.Closeables.close(Closeables.java:77) ) > org.apache.drill.common.StackTrace should follow standard stacktrace format > --- > > Key: DRILL-2625 > URL: https://issues.apache.org/jira/browse/DRILL-2625 > Project: Apache Drill > Issue Type: Bug >Reporter: Daniel Barclay (Drill) > > org.apache.drill.common.StackTrace uses a different textual format than JDK's > standard format for stack traces. > It should probably use the standard format so that its stack trace output can > be used by tools that already can parse the standard format to provide > functionality such as displaying the corresponding source. > (After correcting for DRILL-2624, StackTrace formats stack traces like this: > org.apache.drill.common.StackTrace.:1 > org.apache.drill.exec.server.Drillbit.run:20 > org.apache.drill.jdbc.DrillConnectionImpl.:232 > The normal form is like this: > at > org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:162) > at > org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:75) > at com.google.common.io.Closeables.close(Closeables.java:77) > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (DRILL-2599) Wrong results while using stddev with views
[ https://issues.apache.org/jira/browse/DRILL-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mehant Baid resolved DRILL-2599. Resolution: Fixed Fixed in 96d51bdedbeab2f95075ca5e40cdc7b65b1c8e99 > Wrong results while using stddev with views > --- > > Key: DRILL-2599 > URL: https://issues.apache.org/jira/browse/DRILL-2599 > Project: Apache Drill > Issue Type: Bug > Components: Query Planning & Optimization >Reporter: Mehant Baid >Assignee: Mehant Baid > Fix For: 0.9.0 > > Attachments: DRILL-2599.patch > > > We seem to be injecting an additional cast in the DrillReduceAggregate rule in > the case where we know the input type of the aggregate function. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (DRILL-2601) Print SQL query text along with query id in drillbit.log
[ https://issues.apache.org/jira/browse/DRILL-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sudheesh Katkam reassigned DRILL-2601: -- Assignee: Sudheesh Katkam (was: Jacques Nadeau) > Print SQL query text along with query id in drillbit.log > > > Key: DRILL-2601 > URL: https://issues.apache.org/jira/browse/DRILL-2601 > Project: Apache Drill > Issue Type: Improvement > Components: Storage - Other >Reporter: Victoria Markman >Assignee: Sudheesh Katkam > > This is a request to print the text of a query into drillbit.log in the default > non-verbose output. It includes all changes to session-level parameters > and anything else that might help reproduce an issue on a customer site. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (DRILL-2620) Casting to float is changing the value slightly
[ https://issues.apache.org/jira/browse/DRILL-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386941#comment-14386941 ] Ted Dunning commented on DRILL-2620: What did you expect to see? In SQL the default precision of a FLOAT is implementation defined. I strongly suspect that in Drill the default is 24 (i.e. single precision). If you care (and you seem to), you might be better served by specifying DOUBLE as the type or FLOAT(53). Single precision floating point (aka float) only provides 6 digits of precision. You, as the lucky person you are, got 7. http://en.wikipedia.org/wiki/Single-precision_floating-point_format > Casting to float is changing the value slightly > --- > > Key: DRILL-2620 > URL: https://issues.apache.org/jira/browse/DRILL-2620 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Data Types >Reporter: Rahul Challapalli >Assignee: Daniel Barclay (Drill) > > git.commit.id.abbrev=c11fcf7 > Data Set : > {code} > 2345552345.5342 > 4784.5735 > {code} > Drill Query : > {code} > select cast(columns[0] as float) from `abc.tbl`; > ++ > | EXPR$0 | > ++ > | 2.34555238E9 | > | 4784.5737 | > ++ > {code} > I am not sure whether this is a known limitation or a bug -- This message was sent by Atlassian JIRA (v6.3.4#6332)
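Ted's point can be checked directly in Java, where a float carries a 24-bit significand (roughly 7 decimal digits). A small sketch using the literals from the report above:

```java
public class FloatPrecision {
    public static void main(String[] args) {
        // Neither literal from the report fits in a float's 24-bit
        // significand, so casting rounds to the nearest representable float.
        double a = 2345552345.5342;
        double b = 4784.5735;

        System.out.println((float) a);                 // rounds to roughly 2.3455524E9
        System.out.println((float) b);                 // rounds to roughly 4784.5737
        System.out.println((double) (float) b == b);   // prints "false": precision was lost

        // A double (FLOAT(53) in SQL terms) has a 53-bit significand and
        // keeps all the digits shown in the input.
        System.out.println(b);                         // prints "4784.5735"
    }
}
```

So the query result matches expected single-precision behavior; specifying DOUBLE, as suggested, preserves the input digits.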
[jira] [Created] (DRILL-2623) Expose only "productized" system/session configuration parameters
Victoria Markman created DRILL-2623: --- Summary: Expose only "productized" system/session configuration parameters Key: DRILL-2623 URL: https://issues.apache.org/jira/browse/DRILL-2623 Project: Apache Drill Issue Type: New Feature Reporter: Victoria Markman Assignee: Jacques Nadeau This is an enhancement request to expose only well-tested and useful parameters to the end user. For example, we don't want to allow all users to change internal configuration parameters, like exec.min_hash_table_size, or enable features that are not ready for prime time (store.parquet.enable_dictionary_encoding for example). However, sometimes in order to achieve optimal performance some configuration fiddling will be absolutely necessary. We can allow users with different privileges the ability to change settings. One of the proposals to achieve this is to create a view on top of sys.options that will be created on Drill startup and have privileged access a) to the information in the view - show only things that a particular user is allowed to see b) to the "ALTER SYSTEM/SESSION" commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2622) C++ Client valgrind errors in sync API
[ https://issues.apache.org/jira/browse/DRILL-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Parth Chandra updated DRILL-2622: - Attachment: qs-vg-log-sync-true-Q_4-1-20150327-165217.xml Valgrind output attached. > C++ Client valgrind errors in sync API > -- > > Key: DRILL-2622 > URL: https://issues.apache.org/jira/browse/DRILL-2622 > Project: Apache Drill > Issue Type: Bug > Components: Client - C++ >Reporter: Parth Chandra >Assignee: Parth Chandra > Fix For: Future, 1.1.0 > > Attachments: qs-vg-log-sync-true-Q_4-1-20150327-165217.xml > > > The synchronous version of the C++ client API shows some valgrind errors in > the case with many parallel queries and cancel requests. > This is caused by a synchronization issue where it appears the > m_pDrillClientqueryResult member is being accessed after being deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (DRILL-2622) C++ Client valgrind errors in sync API
Parth Chandra created DRILL-2622: Summary: C++ Client valgrind errors in sync API Key: DRILL-2622 URL: https://issues.apache.org/jira/browse/DRILL-2622 Project: Apache Drill Issue Type: Bug Components: Client - C++ Reporter: Parth Chandra Assignee: Parth Chandra Fix For: Future, 1.1.0 The synchronous version of the C++ client API shows some valgrind errors in the case with many parallel queries and cancel requests. This is caused by a synchronization issue where it appears the m_pDrillClientqueryResult member is being accessed after being deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2616) strings loaded incorrectly from parquet files
[ https://issues.apache.org/jira/browse/DRILL-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Altekruse updated DRILL-2616: --- Priority: Critical (was: Major) > strings loaded incorrectly from parquet files > - > > Key: DRILL-2616 > URL: https://issues.apache.org/jira/browse/DRILL-2616 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Jack Crawford >Assignee: Jason Altekruse >Priority: Critical > Labels: parquet > > When loading string columns from parquet data sources, some rows have their > string values replaced with the value from other rows. > Example parquet for which the problem occurs: > https://drive.google.com/file/d/0B2JGBdceNMxdeFlJcW1FUElOdXc/view?usp=sharing -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (DRILL-2616) strings loaded incorrectly from parquet files
[ https://issues.apache.org/jira/browse/DRILL-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386904#comment-14386904 ] Jason Altekruse commented on DRILL-2616: Does a select star produce this problem for you? I tried reading the file both with the standard parquet java implementation tools (https://github.com/apache/incubator-parquet-mr/tree/master/parquet-tools) and Drill and it didn't look like any of the data was out of place from inspecting the data in each column visually. Could you post a query along with the incorrect output you are seeing? > strings loaded incorrectly from parquet files > - > > Key: DRILL-2616 > URL: https://issues.apache.org/jira/browse/DRILL-2616 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Jack Crawford > Labels: parquet > > When loading string columns from parquet data sources, some rows have their > string values replaced with the value from other rows. > Example parquet for which the problem occurs: > https://drive.google.com/file/d/0B2JGBdceNMxdeFlJcW1FUElOdXc/view?usp=sharing -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (DRILL-2616) strings loaded incorrectly from parquet files
[ https://issues.apache.org/jira/browse/DRILL-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Altekruse reassigned DRILL-2616: -- Assignee: Jason Altekruse > strings loaded incorrectly from parquet files > - > > Key: DRILL-2616 > URL: https://issues.apache.org/jira/browse/DRILL-2616 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Jack Crawford >Assignee: Jason Altekruse > Labels: parquet > > When loading string columns from parquet data sources, some rows have their > string values replaced with the value from other rows. > Example parquet for which the problem occurs: > https://drive.google.com/file/d/0B2JGBdceNMxdeFlJcW1FUElOdXc/view?usp=sharing -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (DRILL-2616) strings loaded incorrectly from parquet files
[ https://issues.apache.org/jira/browse/DRILL-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386907#comment-14386907 ] Jason Altekruse commented on DRILL-2616: Changed priority to critical because of wrong results. > strings loaded incorrectly from parquet files > - > > Key: DRILL-2616 > URL: https://issues.apache.org/jira/browse/DRILL-2616 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Jack Crawford >Assignee: Jason Altekruse >Priority: Critical > Labels: parquet > > When loading string columns from parquet data sources, some rows have their > string values replaced with the value from other rows. > Example parquet for which the problem occurs: > https://drive.google.com/file/d/0B2JGBdceNMxdeFlJcW1FUElOdXc/view?usp=sharing -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2621) When view is created on top of parquet file, underlying data types should be automatically converted to SQL type
[ https://issues.apache.org/jira/browse/DRILL-2621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Victoria Markman updated DRILL-2621: Description: Since parquet types are known to us, we need to make sure to expose these types without making user explicitly cast each column. "create view v1 as select c_integer from j1; " produces view with column of ANY data type. {code} 0: jdbc:drill:schema=dfs> describe v1; +-++-+ | COLUMN_NAME | DATA_TYPE | IS_NULLABLE | +-++-+ | c_integer | ANY| YES | +-++-+ 1 row selected (0.091 seconds) {code} I think we need to extend "CREATE VIEW" document for GA explaining this fact. was: Since parquet types are known to us, we need to make sure to expose these types without making user explicitly cast each column. "create view v1 as select c_integer from j1; " produces view with column of ANY data type. {code} 0: jdbc:drill:schema=dfs> describe v1; +-++-+ | COLUMN_NAME | DATA_TYPE | IS_NULLABLE | +-++-+ | c_integer | ANY| YES | +-++-+ 1 row selected (0.091 seconds) {code} I think we need to expend "CREATE VIEW" document for GA explaining this fact. > When view is created on top of parquet file, underlying data types should be > automatically converted to SQL type > > > Key: DRILL-2621 > URL: https://issues.apache.org/jira/browse/DRILL-2621 > Project: Apache Drill > Issue Type: New Feature >Reporter: Victoria Markman > > Since parquet types are known to us, we need to make sure to expose these > types without making user explicitly cast each column. > "create view v1 as select c_integer from j1; " produces view with column of > ANY data type. > {code} > 0: jdbc:drill:schema=dfs> describe v1; > +-++-+ > | COLUMN_NAME | DATA_TYPE | IS_NULLABLE | > +-++-+ > | c_integer | ANY| YES | > +-++-+ > 1 row selected (0.091 seconds) > {code} > I think we need to extend "CREATE VIEW" document for GA explaining this fact. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (DRILL-2621) When view is created on top of parquet file, underlying data types should be automatically converted to SQL type
Victoria Markman created DRILL-2621: --- Summary: When view is created on top of parquet file, underlying data types should be automatically converted to SQL type Key: DRILL-2621 URL: https://issues.apache.org/jira/browse/DRILL-2621 Project: Apache Drill Issue Type: New Feature Reporter: Victoria Markman Since parquet types are known to us, we need to make sure to expose these types without making the user explicitly cast each column. "create view v1 as select c_integer from j1; " produces a view with a column of ANY data type. {code} 0: jdbc:drill:schema=dfs> describe v1; +-++-+ | COLUMN_NAME | DATA_TYPE | IS_NULLABLE | +-++-+ | c_integer | ANY| YES | +-++-+ 1 row selected (0.091 seconds) {code} I think we need to extend the "CREATE VIEW" document for GA explaining this fact. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (DRILL-2620) Casting to float is changing the value slightly
Rahul Challapalli created DRILL-2620: Summary: Casting to float is changing the value slightly Key: DRILL-2620 URL: https://issues.apache.org/jira/browse/DRILL-2620 Project: Apache Drill Issue Type: Bug Components: Execution - Data Types Reporter: Rahul Challapalli Assignee: Daniel Barclay (Drill) git.commit.id.abbrev=c11fcf7 Data Set : {code} 2345552345.5342 4784.5735 {code} Drill Query : {code} select cast(columns[0] as float) from `abc.tbl`; ++ | EXPR$0 | ++ | 2.34555238E9 | | 4784.5737 | ++ {code} I am not sure whether this is a known limitation or a bug -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (DRILL-2619) Unsupported implicit casts should throw a proper error message
[ https://issues.apache.org/jira/browse/DRILL-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rahul Challapalli updated DRILL-2619: - Priority: Minor (was: Major) > Unsupported implicit casts should throw a proper error message > -- > > Key: DRILL-2619 > URL: https://issues.apache.org/jira/browse/DRILL-2619 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Data Types >Reporter: Rahul Challapalli >Assignee: Daniel Barclay (Drill) >Priority: Minor > Labels: error_message_must_fix > > git.commit.id.abbrev=c11fcf7 > When I have a where clause with an implicit cast, I get back a weird message > which does not indicate the problem > {code} > select columns[9] from `fewtypes_null.tbl` where columns[0] = 6; > Query failed: RemoteRpcException: Failure while running fragment., null [ 0faf63c5-cdca-4b5b-a2ab-5f3ef02d5c9b on qa-node191.qa.lab:31010 ] > [ 0faf63c5-cdca-4b5b-a2ab-5f3ef02d5c9b on qa-node191.qa.lab:31010 ] > {code} > Attached the stacktrace. Let me know if you need anything more. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (DRILL-2619) Unsupported implicit casts should throw a proper error message
Rahul Challapalli created DRILL-2619: Summary: Unsupported implicit casts should throw a proper error message Key: DRILL-2619 URL: https://issues.apache.org/jira/browse/DRILL-2619 Project: Apache Drill Issue Type: Bug Components: Execution - Data Types Reporter: Rahul Challapalli Assignee: Daniel Barclay (Drill) git.commit.id.abbrev=c11fcf7 When I have a where clause with an implicit cast, I get back a weird message which does not indicate the problem {code} select columns[9] from `fewtypes_null.tbl` where columns[0] = 6; Query failed: RemoteRpcException: Failure while running fragment., null [ 0faf63c5-cdca-4b5b-a2ab-5f3ef02d5c9b on qa-node191.qa.lab:31010 ] [ 0faf63c5-cdca-4b5b-a2ab-5f3ef02d5c9b on qa-node191.qa.lab:31010 ] {code} Attached the stacktrace. Let me know if you need anything more. -- This message was sent by Atlassian JIRA (v6.3.4#6332)