[jira] [Commented] (DRILL-6453) TPC-DS query 72 has regressed

2018-07-13 Thread Vlad Rozov (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543983#comment-16543983
 ] 

Vlad Rozov commented on DRILL-6453:
---

A deadlock when an operator such as hash join switches between reading from 
the left and right sides is caused by the following:

- Drill senders can send only one batch at a time. For senders such as 
broadcast or hash partitioner, this means that if one of the receivers has not 
acknowledged 3 batches, the sender cannot send to any of its receivers and 
blocks until that receiver acknowledges the previously sent batches.
- On the receiving side, if, for example, hash join flips between reading from 
the left and right sides, it may lead to a condition where for one minor 
fragment the left side is empty while for another minor fragment the right 
side is empty.
- Drill does not allow probing whether a receiver queue is empty, so the first 
minor fragment blocks waiting for the left side to become non-empty, while the 
second minor fragment blocks on the same condition for the right side.
- Because hash join reads from the left and right sides on the same thread, 
when it blocks reading from the left side, the right side may fill up, after 
which no more acknowledgments are sent to the sender. The same holds for the 
second minor fragment with left and right flipped. Drill is then deadlocked, 
as neither receiver nor sender can proceed.
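The circular wait described above can be modeled with two bounded queues per minor fragment. This is a standalone sketch with illustrative names, not Drill's actual RPC code; a poll timeout stands in for the real, indefinite block:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative model of the circular wait (NOT Drill's actual RPC code):
// each minor fragment owns two bounded receiver queues and one join thread
// that reads left, then right.
public class DeadlockSketch {
    public static void main(String[] args) throws InterruptedException {
        // Fragment 1: left queue empty, right queue full (3 unacked batches).
        ArrayBlockingQueue<String> f1Left = new ArrayBlockingQueue<>(3);
        ArrayBlockingQueue<String> f1Right = new ArrayBlockingQueue<>(3);
        f1Right.add("batch"); f1Right.add("batch"); f1Right.add("batch");

        // Fragment 2 is the mirror image: right empty, left full.
        ArrayBlockingQueue<String> f2Left = new ArrayBlockingQueue<>(3);
        ArrayBlockingQueue<String> f2Right = new ArrayBlockingQueue<>(3);
        f2Left.add("batch"); f2Left.add("batch"); f2Left.add("batch");

        // Fragment 1 blocks on its empty left side, so it never drains (and
        // never acknowledges) its full right side; fragment 2 does the same
        // with sides flipped. The sender, needing an ack before it can send
        // again, can feed neither empty queue: a circular wait.
        System.out.println("fragment1 left read: " + f1Left.poll(100, TimeUnit.MILLISECONDS));
        System.out.println("fragment2 right read: " + f2Right.poll(100, TimeUnit.MILLISECONDS));
    }
}
```

Both polls return null: each fragment is stuck on its empty side while holding the other side full, which is exactly the deadlock shape described above.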
 

> TPC-DS query 72 has regressed
> -
>
> Key: DRILL-6453
> URL: https://issues.apache.org/jira/browse/DRILL-6453
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.14.0
>Reporter: Khurram Faraaz
>Assignee: Boaz Ben-Zvi
>Priority: Blocker
> Fix For: 1.14.0
>
> Attachments: 24f75b18-014a-fb58-21d2-baeab5c3352c.sys.drill, 
> jstack_29173_June_10_2018.txt, jstack_29173_June_10_2018_b.txt, 
> jstack_29173_June_10_2018_c.txt, jstack_29173_June_10_2018_d.txt, 
> jstack_29173_June_10_2018_e.txt
>
>
> TPC-DS query 72 seems to have regressed; the query profile for the case where 
> it was canceled after 2 hours on Drill 1.14.0 is attached here.
> {noformat}
> On, Drill 1.14.0-SNAPSHOT 
> commit : 931b43e (TPC-DS query 72 executed successfully on this commit, took 
> around 55 seconds to execute)
> SF1 parquet data on 4 nodes; 
> planner.memory.max_query_memory_per_node = 10737418240. 
> drill.exec.hashagg.fallback.enabled = true
> TPC-DS query 72 executed successfully & took 47 seconds to complete execution.
> {noformat}
> {noformat}
> TPC-DS data in the below run has date values stored as DATE datatype and not 
> VARCHAR type
> On, Drill 1.14.0-SNAPSHOT
> commit : 82e1a12
> SF1 parquet data on 4 nodes; 
> planner.memory.max_query_memory_per_node = 10737418240. 
> drill.exec.hashagg.fallback.enabled = true
> and
> alter system set `exec.hashjoin.num_partitions` = 1;
> TPC-DS query 72 executed for 2 hrs and 11 mins and did not complete; I had to 
> cancel it by stopping the Foreman drillbit.
> As a result, several minor fragments are reported to be in 
> CANCELLATION_REQUESTED state on the UI.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6606) Hash Join returns incorrect data types when joining subqueries with limit 0

2018-07-13 Thread Timothy Farkas (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543918#comment-16543918
 ] 

Timothy Farkas commented on DRILL-6606:
---

The root cause is not a fundamental issue with sniffing batches; it is just a 
minor logic error. The upstream operators send the join an OK_NEW_SCHEMA 
(without data) and then NONE, also without data. This is expected, since we are 
doing LIMIT 0 in the subqueries. However, in this case we don't build the 
schema for the HashJoin operator, due to the if statement in the first line of 
HashJoinBatch.buildSchema(). The fix is simply to build the schema in this 
case, which we can do because we received the schema from the upstream 
operators when they sent OK_NEW_SCHEMA.
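A minimal model of the logic error described above. This is illustrative, not the actual HashJoinBatch code: with LIMIT 0 subqueries the upstream reports OK_NEW_SCHEMA and then NONE, and a guard like the buggy one skips building the schema:

```java
// Minimal model (illustrative, NOT the actual HashJoinBatch code) of the
// schema-building decision when the upstream sends OK_NEW_SCHEMA then NONE.
public class BuildSchemaDemo {
    enum Outcome { OK_NEW_SCHEMA, OK, NONE }

    // Buggy guard: refuses to build when NONE follows immediately.
    static boolean buggyBuildsSchema(Outcome first, Outcome next) {
        return first == Outcome.OK_NEW_SCHEMA && next != Outcome.NONE;
    }

    // Fixed guard: OK_NEW_SCHEMA alone already carries the schema, so build
    // regardless of what follows (the LIMIT 0 case sends NONE right after).
    static boolean fixedBuildsSchema(Outcome first, Outcome next) {
        return first == Outcome.OK_NEW_SCHEMA;
    }

    public static void main(String[] args) {
        System.out.println("buggy builds schema: "
            + buggyBuildsSchema(Outcome.OK_NEW_SCHEMA, Outcome.NONE));
        System.out.println("fixed builds schema: "
            + fixedBuildsSchema(Outcome.OK_NEW_SCHEMA, Outcome.NONE));
    }
}
```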

> Hash Join returns incorrect data types when joining subqueries with limit 0
> ---
>
> Key: DRILL-6606
> URL: https://issues.apache.org/jira/browse/DRILL-6606
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Bohdan Kazydub
>Assignee: Timothy Farkas
>Priority: Blocker
> Fix For: 1.14.0
>
>
> PreparedStatement for query
> {code:sql}
> SELECT l.l_quantity, l.l_shipdate, o.o_custkey
> FROM (SELECT * FROM cp.`tpch/lineitem.parquet` LIMIT 0) l
>     JOIN (SELECT * FROM cp.`tpch/orders.parquet` LIMIT 0) o 
>     ON l.l_orderkey = o.o_orderkey
> LIMIT 0
> {code}
>  is created with wrong types (nullable INTEGER) for all selected columns, no 
> matter what their actual type is. This behavior reproduces with hash join 
> only and is very likely to be caused by DRILL-6027 as the query works fine 
> before this feature was implemented.
> To reproduce the problem you can put the aforementioned query into 
> TestPreparedStatementProvider#joinOrderByQuery() test method.





[jira] [Commented] (DRILL-6373) Refactor the Result Set Loader to prepare for Union, List support

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543911#comment-16543911
 ] 

ASF GitHub Bot commented on DRILL-6373:
---

paul-rogers commented on issue #1244: DRILL-6373: Refactor Result Set Loader 
for Union, List support
URL: https://github.com/apache/drill/pull/1244#issuecomment-404987892
 
 
   @sohami, the cost of my proposed fix has exceeded its benefit -- we're just 
not converging. I've closed that PR and will look for another solution. Rather 
than use the `MaterializedField` to get the type, I'll add code that does a 
switch statement on the vector type to learn the "real" type, leaving the 
`MaterializedField` to hold the pretend type for the `values` field of a 
`Nullable` vector.
   
   This should not be hard: we already have generated code that parses the 
vector type. I can use this to manufacture a `MajorType` that matches the 
vector class, bypassing the need for correct `MaterializedField` data and thus 
eliminating the need to change the vector code.
   
   A revision will be posted soon.
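A sketch of the switch-on-vector-type idea. The names and the string-based stand-in for `MajorType` are illustrative assumptions, not Drill's actual TypeHelper API: the point is that the concrete vector class itself reveals the "real" mode, whatever the `MaterializedField` claims:

```java
// Sketch (illustrative names, NOT Drill's actual TypeHelper/MajorType API):
// derive the "real" type from the concrete vector class name rather than
// trusting the MaterializedField, which may carry a pretend type for the
// internal `values` vector of a Nullable vector.
public class VectorTypeSwitch {
    static String majorTypeOf(String vectorClassName) {
        if (vectorClassName.startsWith("Nullable")) {
            return "OPTIONAL " + minorType(vectorClassName.substring("Nullable".length()));
        }
        if (vectorClassName.startsWith("Repeated")) {
            return "REPEATED " + minorType(vectorClassName.substring("Repeated".length()));
        }
        return "REQUIRED " + minorType(vectorClassName);
    }

    // "IntVector" -> "INT"
    private static String minorType(String name) {
        return name.replace("Vector", "").toUpperCase();
    }

    public static void main(String[] args) {
        // The values vector inside a NullableIntVector is a plain IntVector,
        // so the class itself reports REQUIRED regardless of the metadata.
        System.out.println(majorTypeOf("NullableIntVector"));
        System.out.println(majorTypeOf("IntVector"));
    }
}
```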


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Refactor the Result Set Loader to prepare for Union, List support
> -
>
> Key: DRILL-6373
> URL: https://issues.apache.org/jira/browse/DRILL-6373
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.13.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Major
> Attachments: 6373_Functional_Fail_07_13_1300.txt, 
> drill-6373-with-6585-fix-functional-failure.txt
>
>
> As the next step in merging the "batch sizing" enhancements, refactor the 
> {{ResultSetLoader}} and related classes to prepare for Union and List 
> support. This fix follows the refactoring of the column accessors for the 
> same purpose. Actual Union and List support is to follow in a separate PR.





[jira] [Commented] (DRILL-6585) PartitionSender clones vectors, but shares field metdata

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543905#comment-16543905
 ] 

ASF GitHub Bot commented on DRILL-6585:
---

paul-rogers commented on issue #1367: DRILL-6585: PartitionSender clones 
vectors, but shares field metdata
URL: https://github.com/apache/drill/pull/1367#issuecomment-404987313
 
 
   Closing this PR. Will try a different approach since we're not converging on 
a solution.




> PartitionSender clones vectors, but shares field metdata
> 
>
> Key: DRILL-6585
> URL: https://issues.apache.org/jira/browse/DRILL-6585
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Major
>
> See the discussion for [PR #1244 for 
> DRILL-6373|https://github.com/apache/drill/pull/1244].
> The PartitionSender clones vectors. But, it does so by reusing the 
> {{MaterializedField}} from the original vector. Though the original authors 
> of {{MaterializedField}} apparently meant it to be immutable, later changes 
> for maps and unions ended up changing it to add members.
> When cloning a map, we get the original map materialized field, then start 
> doctoring it up as we add the cloned map members. This screws up the original 
> map vector's metadata.
> The solution is to clone an empty version of the materialized field when 
> creating a new vector.
> But, since much code creates vectors by giving a perfectly valid, unique 
> materialized field, we want to add a new method for use by the ill-behaved 
> uses, such as PartitionSender, that ask to create a new vector without 
> cloning the materialized field.
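The aliasing bug described above can be reduced to a few lines. This is an illustrative model, not Drill's `MaterializedField`: a shallow clone shares the mutable child list, so doctoring up the clone corrupts the original's metadata, while an empty clone keeps the two independent:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model (NOT Drill's MaterializedField) of shared, mutable
// field metadata between a cloned vector and the original.
public class SharedMetadataDemo {
    static class Field {
        final String name;
        final List<Field> children;
        Field(String name, List<Field> children) { this.name = name; this.children = children; }
        // Buggy pattern: the "clone" shares the original's child list.
        Field shallowClone() { return new Field(name, children); }
        // The fix: a fresh field with an empty child list.
        Field cloneEmpty() { return new Field(name, new ArrayList<>()); }
    }

    public static void main(String[] args) {
        Field original = new Field("m", new ArrayList<>());
        Field bad = original.shallowClone();
        bad.children.add(new Field("a", new ArrayList<>()));
        // Mutating the clone also mutated the original: corruption.
        System.out.println("shared metadata, original children: " + original.children.size());

        Field original2 = new Field("m", new ArrayList<>());
        Field good = original2.cloneEmpty();
        good.children.add(new Field("b", new ArrayList<>()));
        // The original is untouched.
        System.out.println("cloneEmpty, original children: " + original2.children.size());
    }
}
```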





[jira] [Commented] (DRILL-6585) PartitionSender clones vectors, but shares field metdata

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543906#comment-16543906
 ] 

ASF GitHub Bot commented on DRILL-6585:
---

paul-rogers closed pull request #1367: DRILL-6585: PartitionSender clones 
vectors, but shares field metdata
URL: https://github.com/apache/drill/pull/1367
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerTemplate.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerTemplate.java
index 0d52b53efd0..64aabfa5cd3 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerTemplate.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerTemplate.java
@@ -376,9 +376,8 @@ public void updateStats(FragmentWritableBatch writableBatch) {
    */
   public void initializeBatch() {
     for (VectorWrapper<?> v : incoming) {
-      // create new vector
-      @SuppressWarnings("resource")
-      ValueVector outgoingVector = TypeHelper.getNewVector(v.getField(), allocator);
+      // create new vector by cloning the incoming vector's type
+      ValueVector outgoingVector = TypeHelper.getNewVector(v.getField().cloneEmpty(), allocator);
       outgoingVector.setInitialCapacity(outgoingRecordBatchSize);
       vectorContainer.add(outgoingVector);
     }


 









[jira] [Commented] (DRILL-6585) PartitionSender clones vectors, but shares field metdata

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543904#comment-16543904
 ] 

ASF GitHub Bot commented on DRILL-6585:
---

paul-rogers commented on issue #1367: DRILL-6585: PartitionSender clones 
vectors, but shares field metdata
URL: https://github.com/apache/drill/pull/1367#issuecomment-404987283
 
 
   @sohami, thanks for your comments and questions. Unfortunately, I cannot 
debug the use case and so you may have a deeper understanding than I do. I'm 
working from experience gained some six months ago when working with the result 
set loader, and that knowledge is getting rusty.
   
   > In original PR there is a change for NullableValueVectors to add the 
values and bits vector materialized field as child field of parent vector 
field. ... From your comment it looks like because the internal values 
ValueVector mode needs to be required so you are creating another Materialized 
Field with that mode for internal values vector and adding it as child of 
parent vector field.
   
   The reason for that change is that the result set loader code that clones a 
vector needs to know the actual type. That code walks the vector tree, using 
the `MaterializedField` to get the type. If a `values` vector (which has no 
`bits` vector) reports its type as `Nullable`, then the clone will create a 
`bits` vector, which causes havoc.
   
   I'm thinking that I should change the cloning code. Rather than believing 
the `MaterializedField`, I can use the vector class type itself. That will be 
clunkier and slower, but it will eliminate the need to change the existing 
vector code.
   
   Given how long this discussion has gone on, that I can't do the required 
tests, and that we can't discuss this in person, I'm thinking that the 
alternative approach may be more expedient.
   
   I suppose a larger question is whether the final bits of the result set 
loader are even still useful. Much work has been done on batch sizing since 
this work started. Is it still worthwhile to finish up this code so we can 
control the batch size for readers? Parquet has its own solution. Is it worth 
worrying about the others?









[jira] [Commented] (DRILL-6603) Filter pushdown for a null value eliminates all except one rowgroup

2018-07-13 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543895#comment-16543895
 ] 

Kunal Khatua commented on DRILL-6603:
-

This is a planner bug: pruning is done too early.
Physical plan for {code:sql}str_var is null{code}:
{code:bash}
00-00    Screen : rowType = RecordType(BIGINT EXPR$0): rowcount = 1.0, cumulative cost = {80963.85 rows, 301771.35 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2841
00-01      Project(EXPR$0=[$0]) : rowType = RecordType(BIGINT EXPR$0): rowcount = 1.0, cumulative cost = {80963.75 rows, 301771.25 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2840
00-02        StreamAgg(group=[{}], EXPR$0=[COUNT()]) : rowType = RecordType(BIGINT EXPR$0): rowcount = 1.0, cumulative cost = {80962.75 rows, 301770.25 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2839
00-03          Project($f0=[0]) : rowType = RecordType(INTEGER $f0): rowcount = 7360.25, cumulative cost = {73602.5 rows, 213447.25 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2838
00-04            SelectionVectorRemover : rowType = RecordType(ANY str_var): rowcount = 7360.25, cumulative cost = {66242.25 rows, 184006.25 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2837
00-05              Filter(condition=[IS NULL($0)]) : rowType = RecordType(ANY str_var): rowcount = 7360.25, cumulative cost = {58882.0 rows, 176646.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2836
00-06                Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=/widestrings/0_0_2.parquet]], selectionRoot=maprfs:/widestrings, numFiles=1, numRowGroups=1, usedMetadataFile=false, columns=[`str_var`]]]) : rowType = RecordType(ANY str_var): rowcount = 29441.0, cumulative cost = {29441.0 rows, 29441.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2835
Query Profile : 24b6bf5b-5769-b2cb-4d38-bcbe68e140e7
{code}
 

Physical plan for {code:sql}dec_var_prec5_sc2 between 10 and 15{code}:

{code:bash}
00-00    Screen : rowType = RecordType(BIGINT EXPR$0): rowcount = 1.0, cumulative cost = {31.1 rows, 1425001.1 cpu, 0.0 io, 1.024E8 network, 0.0 memory}, id = 4328
00-01      Project(EXPR$0=[$0]) : rowType = RecordType(BIGINT EXPR$0): rowcount = 1.0, cumulative cost = {31.0 rows, 1425001.0 cpu, 0.0 io, 1.024E8 network, 0.0 memory}, id = 4327
00-02        StreamAgg(group=[{}], EXPR$0=[COUNT()]) : rowType = RecordType(BIGINT EXPR$0): rowcount = 1.0, cumulative cost = {30.0 rows, 1425000.0 cpu, 0.0 io, 1.024E8 network, 0.0 memory}, id = 4326
00-03          UnionExchange : rowType = RecordType(INTEGER $f0): rowcount = 25000.0, cumulative cost = {275000.0 rows, 1125000.0 cpu, 0.0 io, 1.024E8 network, 0.0 memory}, id = 4325
01-01            Project($f0=[0]) : rowType = RecordType(INTEGER $f0): rowcount = 25000.0, cumulative cost = {25.0 rows, 925000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4324
01-02              SelectionVectorRemover : rowType = RecordType(ANY dec_var_prec5_sc2): rowcount = 25000.0, cumulative cost = {225000.0 rows, 825000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4323
01-03                Filter(condition=[AND(>=($0, 10), <=($0, 15))]) : rowType = RecordType(ANY dec_var_prec5_sc2): rowcount = 25000.0, cumulative cost = {20.0 rows, 80.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4322
01-04                  Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:///widestrings]], selectionRoot=maprfs:/widestrings, numFiles=1, numRowGroups=4, usedMetadataFile=false, columns=[`dec_var_prec5_sc2`]]]) : rowType = RecordType(ANY dec_var_prec5_sc2): rowcount = 10.0, cumulative cost = {10.0 rows, 10.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4321
{code}

> Filter pushdown for a null value eliminates all except one rowgroup
> ---
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
>  
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> {code:sql}
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> {code}
> This query should return 5 rows. It is missing 3 rows.
> {code:bash}
> 1664 IaYIEviH tJHD 
> 6nF33QQJn1p4uuTELHOR2z0FCzMK35JkNeDRKCduYKUiPaXFgwftf4Ciidk2d7IXxyrCoX56Vsb 
> ITcI9yxPpd3Gu6zkk2kktmZv9oHxMVE1ccVh2iGzU7greQuUEJ1oYFHGzGN9MEeKc5DqbHHT0F65NF1LE88CAudZW5bv6AiIj2D714q72g8ULd2WaazavWBQ6PgdKax
>  
> 

[jira] [Updated] (DRILL-6603) Filter pushdown for a null value eliminates all except one rowgroup

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6603:

Summary: Filter pushdown for a null value eliminates all except one 
rowgroup  (was: Query does not return enough rows)

> Filter pushdown for a null value eliminates all except one rowgroup
> ---
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
>  
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> {code:sql}
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> {code}
> This query should return 5 rows. It is missing 3 rows.

[jira] [Commented] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543892#comment-16543892
 ] 

Kunal Khatua commented on DRILL-6603:
-

I narrowed down the issue to this:
{code:sql}
0: jdbc:drill:schema=dfs.root> select count(*) from dfs.root.`widestrings` 
where str_var is null;
+---------+
| EXPR$0  |
+---------+
| 477     |
+---------+
1 row selected (0.507 seconds)
{code}
Actual counts are 
|| File || Filtered RowCount ||
| 0_0_0.parquet | 476 |
| 0_0_1.parquet | 449 |
| 0_0_2.parquet | 477 |
| 0_0_3.parquet | 186 |
| EXPECTED  | 1588|

The range filter, however, works fine:
{code:sql}
0: jdbc:drill:schema=dfs.root> select count(*) from dfs.root.`widestrings` 
where dec_var_prec5_sc2 between 10 and 15;
+---------+
| EXPR$0  |
+---------+
| 688     |
+---------+
1 row selected (0.479 seconds)
{code}
 Actual counts:
|| File || Filtered RowCount ||
| 0_0_0.parquet | 210 |
| 0_0_1.parquet | 194 |
| 0_0_2.parquet | 212 |
| 0_0_3.parquet | 72 |
| EXPECTED  | 688|
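Summing the per-file counts in the two tables above reproduces the EXPECTED totals (a standalone arithmetic check, not Drill code):

```java
// Arithmetic check of the EXPECTED totals in the two tables above.
public class RowCountCheck {
    public static void main(String[] args) {
        int[] nullCounts = {476, 449, 477, 186};   // str_var is null, per file
        int[] rangeCounts = {210, 194, 212, 72};   // dec_var_prec5_sc2 between 10 and 15
        int nullTotal = 0, rangeTotal = 0;
        for (int c : nullCounts) nullTotal += c;
        for (int c : rangeCounts) rangeTotal += c;
        System.out.println("null filter expected: " + nullTotal);
        System.out.println("range filter expected: " + rangeTotal);
    }
}
```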




> Query does not return enough rows
> -
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
>  
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> {code:sql}
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> {code}
> This query should return 5 rows. It is missing 3 rows.

[jira] [Commented] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543889#comment-16543889
 ] 

Kunal Khatua commented on DRILL-6603:
-

[~arina]

I tested the branch and it *does not* fix the issue.
3 of the 4 files contribute rows, for a total of 5.

There is a pruning bug causing this.
{code:sql}
0: jdbc:drill:schema=dfs.root> select * from sys.version;
+-+---++-+-++
| version | commit_id | commit_message | commit_time | build_email | build_time |
+-+---++-+-++
| 1.14.0-SNAPSHOT | dc7ce0920b692db36da04e02cb7aff42c9dd63c3 | DRILL-5796 : implement ROWS_MATCH enum to keep inside rowgroup the filter result information, used to prune the filter if all rows match. | 13.07.2018 @ 02:31:53 PDT | kkha...@mapr.com | 13.07.2018 @ 17:07:39 PDT |
+-+---++-+-++
0: jdbc:drill:schema=dfs.root> show files in widestrings;
++--+-++++--++--+
| name | isDirectory | isFile | length | owner | group | permissions | accessTime | modificationTime |
++--+-++++--++--+
| 0_0_2.parquet | false | true | 536615681 | root | root | rw-r--r-- | 2018-07-13 16:56:00.0 | 2018-07-13 16:56:08.709 |
| 0_0_1.parquet | false | true | 536721212 | root | root | rw-r--r-- | 2018-07-13 16:55:52.0 | 2018-07-13 16:56:00.019 |
| 0_0_3.parquet | false | true | 213050551 | root | root | rw-r--r-- | 2018-07-13 16:56:08.0 | 2018-07-13 16:56:11.851 |
| 0_0_0.parquet | false | true | 536746838 | root | root | rw-r--r-- | 2018-07-13 16:55:43.0 | 2018-07-13 16:55:52.07 |
++--+-++++--++--+
4 rows selected (0.122 seconds)
0: jdbc:drill:schema=dfs.root> select count(*) from dfs.root.`widestrings` 
where str_var is null and dec_var_prec5_sc2 between 10 and 15;
+-+
| EXPR$0 |
+-+
| 2 |
+-+
1 row selected (0.533 seconds)
0: jdbc:drill:schema=dfs.root> select count(*) from 
dfs.root.`widestrings/0_0_0.parquet` where str_var is null and 
dec_var_prec5_sc2 between 10 and 15;
+-+
| EXPR$0 |
+-+
| 2 |
+-+
1 row selected (0.475 seconds)
0: jdbc:drill:schema=dfs.root> select count(*) from 
dfs.root.`widestrings/0_0_1.parquet` where str_var is null and 
dec_var_prec5_sc2 between 10 and 15;
+-+
| EXPR$0 |
+-+
| 0 |
+-+
1 row selected (0.52 seconds)
0: jdbc:drill:schema=dfs.root> select count(*) from 
dfs.root.`widestrings/0_0_2.parquet` where str_var is null and 
dec_var_prec5_sc2 between 10 and 15;
+-+
| EXPR$0 |
+-+
| 2 |
+-+
1 row selected (0.496 seconds)
0: jdbc:drill:schema=dfs.root> select count(*) from 
dfs.root.`widestrings/0_0_3.parquet` where str_var is null and 
dec_var_prec5_sc2 between 10 and 15;
+-+
| EXPR$0 |
+-+
| 1 |
+-+
1 row selected (0.327 seconds)
{code}

> Query does not return enough rows
> -
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
>  
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> {code:sql}
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> {code}
> This query should return 5 rows. It is missing 3 rows.
> {code:bash}
> 1664 IaYIEviH tJHD 
> 6nF33QQJn1p4uuTELHOR2z0FCzMK35JkNeDRKCduYKUiPaXFgwftf4Ciidk2d7IXxyrCoX56Vsb 
> ITcI9yxPpd3Gu6zkk2kktmZv9oHxMVE1ccVh2iGzU7greQuUEJ1oYFHGzGN9MEeKc5DqbHHT0F65NF1LE88CAudZW5bv6AiIj2D714q72g8ULd2WaazavWBQ6PgdKax
>  
> 5kVvGkt9czWgZOH9CfT0ApOWUWZlQcvtVC2UumK6Q8tmE5f5yjKhTqvXOiistNIMo4K1NqG8U5t9V33b3h9Hk1ymyeGNMrb5Is1jB5nL9zlpyx3y46WoxV9GornIyrLw
>  W4wxtVsbj2yFYuU65RdDzkNKezE0LsPtpXeEpJeFoFSP 
> 

[jira] [Assigned] (DRILL-5796) Filter pruning for multi rowgroup parquet file

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reassigned DRILL-5796:
---

Assignee: Kunal Khatua  (was: Jean-Blas IMBERT)

> Filter pruning for multi rowgroup parquet file
> --
>
> Key: DRILL-5796
> URL: https://issues.apache.org/jira/browse/DRILL-5796
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Reporter: Damien Profeta
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>
> Today, filter pruning uses the file name as the partitioning key. This means 
> a partition can be pruned only if the whole file belongs to the same 
> partition. With Parquet, the filter can instead be pruned per rowgroup, 
> making the rowgroup, not the file, the unit of work whenever the rowgroups 
> partition the dataset.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6610) Add support for Minimum TLS support

2018-07-13 Thread Rob Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Wu updated DRILL-6610:
--
Description: 
Add support for specifying a minimum TLS version.

Currently, the TLSProtocol parameter only allows a single, specific TLS 
version to be used.

 

Investigation:

Setting the default SSL context method to be sslv23 with default sslv2 and 
sslv3 turned off would allow us to restrict the protocol to be TLS only.

Additional flags can be applied to further restrict the minimum TLS version:

For example:

Minimum TLS 1.0 - Sets NO_SSLv2 and NO_SSLv3

Minimum TLS 1.1 - Sets NO_SSLv2 and NO_SSLv3 and NO_TLSv1

Minimum TLS 1.2 - Sets NO_SSLv2 and NO_SSLv3 and NO_TLSv1 and NO_TLSv1_1
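The NO_* flag scheme above can be sketched as a small mapping. This is illustrative only: the Drill client in question is C++ (the flag names follow the OpenSSL/Boost.Asio convention), and the `MinTlsFlags` class and `flagsFor` method here are hypothetical names used to model the logic in Java.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: given a minimum TLS version, compute which
// protocol-disabling flags to set on an sslv23-style context.
class MinTlsFlags {
  // Protocols from oldest to newest; a minimum version disables all older ones.
  private static final String[] ORDERED = {"SSLv2", "SSLv3", "TLSv1", "TLSv1_1", "TLSv1_2"};

  static List<String> flagsFor(String minVersion) {
    int floor = Arrays.asList(ORDERED).indexOf(minVersion);
    if (floor < 0) {
      throw new IllegalArgumentException("unknown protocol: " + minVersion);
    }
    List<String> flags = new ArrayList<>();
    for (int i = 0; i < floor; i++) {
      flags.add("NO_" + ORDERED[i]);  // e.g. NO_SSLv3
    }
    return flags;
  }

  public static void main(String[] args) {
    System.out.println(flagsFor("TLSv1"));    // [NO_SSLv2, NO_SSLv3]
    System.out.println(flagsFor("TLSv1_2"));  // [NO_SSLv2, NO_SSLv3, NO_TLSv1, NO_TLSv1_1]
  }
}
```

This reproduces the table above: minimum TLS 1.0 disables only the SSL protocols, while each higher minimum additionally disables the TLS versions below it.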

  was:
Add support for minimum TLS support.

Currently, the TLSProtocol parameter only supports a specific version of TLS to 
be used.


> Add support for Minimum TLS support
> ---
>
> Key: DRILL-6610
> URL: https://issues.apache.org/jira/browse/DRILL-6610
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - C++
>Affects Versions: 1.12.0
>Reporter: Rob Wu
>Priority: Major
>
> Add support for specifying a minimum TLS version.
> Currently, the TLSProtocol parameter only allows a single, specific TLS 
> version to be used.
>  
> Investigation:
> Setting the default SSL context method to be sslv23 with default sslv2 and 
> sslv3 turned off would allow us to restrict the protocol to be TLS only.
> Additional flags can be applied to further restrict the minimum TLS version:
> For example:
> Minimum TLS 1.0 - Sets NO_SSLv2 and NO_SSLv3
> Minimum TLS 1.1 - Sets NO_SSLv2 and NO_SSLv3 and NO_TLSv1
> Minimum TLS 1.2 - Sets NO_SSLv2 and NO_SSLv3 and NO_TLSv1 and NO_TLSv1_1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-6610) Add support for Minimum TLS support

2018-07-13 Thread Rob Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Wu reassigned DRILL-6610:
-

Assignee: Rob Wu

> Add support for Minimum TLS support
> ---
>
> Key: DRILL-6610
> URL: https://issues.apache.org/jira/browse/DRILL-6610
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - C++
>Affects Versions: 1.12.0
>Reporter: Rob Wu
>Assignee: Rob Wu
>Priority: Major
>
> Add support for specifying a minimum TLS version.
> Currently, the TLSProtocol parameter only allows a single, specific TLS 
> version to be used.
>  
> Investigation:
> Setting the default SSL context method to be sslv23 with default sslv2 and 
> sslv3 turned off would allow us to restrict the protocol to be TLS only.
> Additional flags can be applied to further restrict the minimum TLS version:
> For example:
> Minimum TLS 1.0 - Sets NO_SSLv2 and NO_SSLv3
> Minimum TLS 1.1 - Sets NO_SSLv2 and NO_SSLv3 and NO_TLSv1
> Minimum TLS 1.2 - Sets NO_SSLv2 and NO_SSLv3 and NO_TLSv1 and NO_TLSv1_1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543874#comment-16543874
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on issue #1296: DRILL-5365: Prevent plugin config from 
changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#issuecomment-404977655
 
 
   @vdiravka addressed your comments, and  I pushed a few more changes after 
your last pass. I added the DrillFileSystemCache class to use instead of the 
hadoop FileSystem cache.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6610) Add support for Minimum TLS support

2018-07-13 Thread Rob Wu (JIRA)
Rob Wu created DRILL-6610:
-

 Summary: Add support for Minimum TLS support
 Key: DRILL-6610
 URL: https://issues.apache.org/jira/browse/DRILL-6610
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - C++
Affects Versions: 1.12.0
Reporter: Rob Wu


Add support for specifying a minimum TLS version.

Currently, the TLSProtocol parameter only allows a single, specific TLS 
version to be used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543867#comment-16543867
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on issue #1296: DRILL-5365: Prevent plugin config from 
changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#issuecomment-404976675
 
 
   Created a jira for the HiveDrillNativeParquetRowGroupScan issue here 
https://issues.apache.org/jira/browse/DRILL-6609


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6609) Investigate Creation of FileSystem Configuration for Hive Parquet Files

2018-07-13 Thread Timothy Farkas (JIRA)
Timothy Farkas created DRILL-6609:
-

 Summary: Investigate Creation of FileSystem Configuration for Hive 
Parquet Files
 Key: DRILL-6609
 URL: https://issues.apache.org/jira/browse/DRILL-6609
 Project: Apache Drill
  Issue Type: Task
Reporter: Timothy Farkas


Currently, when reading a Parquet file in Hive, we try to speed things up by 
doing a native Parquet scan with HiveDrillNativeParquetRowGroupScan. When 
retrieving the FileSystem Configuration in 
HiveDrillNativeParquetRowGroupScan.getFsConf, we use all the properties 
defined for the HiveStoragePlugin. This allows a misconfiguration in the 
HiveStoragePlugin to influence the configuration of our FileSystem.

Currently it is unclear if this was desired behavior or not. If it is desired 
we need to document why it was done. If it is not desired we need to fix the 
issue.
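The hazard described here is that a mutable configuration shared with a plugin can change the file system's view after the fact. A defensive copy at construction time (the approach taken in the DRILL-5365 pull request for DrillFileSystem) isolates the consumer. Below is a minimal sketch with a plain `Map` standing in for Hadoop's mutable `Configuration`; the `FsView` class is a hypothetical name, not Drill code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: copying the configuration at construction time isolates the
// file-system view from later plugin-side mutation.
class FsView {
  private final Map<String, String> conf;

  FsView(Map<String, String> conf) {
    this.conf = new HashMap<>(conf);  // defensive copy
  }

  String defaultFs() {
    return conf.getOrDefault("fs.default.name", "file:///");
  }

  public static void main(String[] args) {
    Map<String, String> pluginConf = new HashMap<>();
    pluginConf.put("fs.default.name", "hdfs://namenode:8020");
    FsView fs = new FsView(pluginConf);
    pluginConf.put("fs.default.name", "file:///");  // later plugin mutation
    System.out.println(fs.defaultFs());             // still hdfs://namenode:8020
  }
}
```

Without the copy, the mutation after construction would silently redirect the consumer's default file system, which is exactly the misconfiguration path this ticket asks to investigate.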



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543861#comment-16543861
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on issue #1296: DRILL-5365: Prevent plugin config from 
changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#issuecomment-404975183
 
 
   For some reason github won't let me respond next to one of your comments, so 
putting response here:
   
   Searched for usages of **fs.default.name** there was one other usage in 
ExternalSortBatch, so I replaced it to use the new constant in DrillFileSystem.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543858#comment-16543858
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on a change in pull request #1296: DRILL-5365: Prevent plugin 
config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202491880
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
 ##
 @@ -83,28 +87,63 @@
   private final OperatorStats operatorStats;
   private final CompressionCodecFactory codecFactory;
 
+  private boolean initialized = false;
+
   public DrillFileSystem(Configuration fsConf) throws IOException {
 this(fsConf, null);
   }
 
   public DrillFileSystem(Configuration fsConf, OperatorStats operatorStats) 
throws IOException {
+Preconditions.checkNotNull(fsConf);
+
+// Configuration objects are mutable, and the underlying FileSystem object 
may directly use a passed in Configuration.
+// In order to avoid scenarios where a Configuration can change after a 
DrillFileSystem is created, we make a copy
+// of the Configuration.
+fsConf = new Configuration(fsConf);
 this.underlyingFs = FileSystem.get(fsConf);
 
 Review comment:
   Added a TODO to the javadoc in DrillFileSystemCache.java and created 
https://issues.apache.org/jira/browse/DRILL-6608.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543860#comment-16543860
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on a change in pull request #1296: DRILL-5365: Prevent plugin 
config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202491914
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java
 ##
 @@ -76,6 +79,16 @@ public FileSystemPlugin(FileSystemConfig config, 
DrillbitContext context, String
   fsConf.set(s, config.config.get(s));
 }
   }
+
+  logger.info("Original FileSystem default fs configuration {} {}",
+fsConf.getTrimmed(FS_DEFAULT_NAME),
+fsConf.getTrimmed(FileSystem.FS_DEFAULT_NAME_KEY));
+
+  if (logger.isInfoEnabled()) {
+logger.info("Who made me? {}", new RuntimeException("Who made me?"));
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6608) Properly Handle Creation and Closure of DrillFileSystems

2018-07-13 Thread Timothy Farkas (JIRA)
Timothy Farkas created DRILL-6608:
-

 Summary: Properly Handle Creation and Closure of DrillFileSystems
 Key: DRILL-6608
 URL: https://issues.apache.org/jira/browse/DRILL-6608
 Project: Apache Drill
  Issue Type: Task
Reporter: Timothy Farkas


Currently the strategy Drill uses for creating file systems is to create a 
DrillFileSystem for readers and writers and then never close it. In order to 
prevent the proliferation of underlying file system objects used by 
DrillFileSystem, the underlying filesystems are cached.

This is not ideal; we should properly close our file system objects instead of 
caching them and keeping them in memory forever.
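One common alternative to caching handles forever is reference counting: the underlying handle is closed when its last user releases it. The sketch below is illustrative only (not Drill's code; `RefCountedCache` and a string stand-in for the file-system handle are hypothetical), but shows the shape such a change could take.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a reference-counted cache that closes a shared handle when the
// last user releases it, instead of keeping it cached forever.
class RefCountedCache {
  private final Map<String, Integer> refs = new HashMap<>();
  private final Map<String, String> handles = new HashMap<>();

  synchronized String acquire(String key) {
    refs.merge(key, 1, Integer::sum);
    return handles.computeIfAbsent(key, k -> "fs-handle:" + k);
  }

  synchronized void release(String key) {
    int remaining = refs.merge(key, -1, Integer::sum);
    if (remaining <= 0) {
      refs.remove(key);
      handles.remove(key);  // here a real implementation would close the handle
    }
  }

  synchronized boolean isOpen(String key) {
    return handles.containsKey(key);
  }
}
```

The trade-off is that every reader/writer must reliably call `release` (e.g. via try-with-resources), which is the bookkeeping the current cache-forever strategy avoids.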



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6475) Unnest: Null fieldId Pointer

2018-07-13 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6475:
-
Reviewer: Aman Sinha

> Unnest: Null fieldId Pointer 
> -
>
> Key: DRILL-6475
> URL: https://issues.apache.org/jira/browse/DRILL-6475
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Boaz Ben-Zvi
>Assignee: Hanumath Rao Maduri
>Priority: Major
> Fix For: 1.14.0
>
>
>  Executing the following (in TestE2EUnnestAndLateral.java) causes an NPE as 
> `fieldId` is null in `schemaChanged()`: 
> {code}
> @Test
> public void testMultipleBatchesLateral_twoUnnests() throws Exception {
>  String sql = "SELECT t5.l_quantity FROM dfs.`lateraljoin/multipleFiles/` t, 
> LATERAL " +
>  "(SELECT t2.ordrs FROM UNNEST(t.c_orders) t2(ordrs)) t3(ordrs), LATERAL " +
>  "(SELECT t4.l_quantity FROM UNNEST(t3.ordrs) t4(l_quantity)) t5";
>  test(sql);
> }
> {code}
>  
> And the error is:
> {code}
> Error: SYSTEM ERROR: NullPointerException
> Fragment 0:0
> [Error Id: 25f42765-8f68-418e-840a-ffe65788e1e2 on 10.254.130.25:31020]
> (java.lang.NullPointerException) null
>  
> org.apache.drill.exec.physical.impl.unnest.UnnestRecordBatch.schemaChanged():381
>  org.apache.drill.exec.physical.impl.unnest.UnnestRecordBatch.innerNext():199
>  org.apache.drill.exec.record.AbstractRecordBatch.next():172
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.record.AbstractRecordBatch.next():119
>  
> org.apache.drill.exec.physical.impl.join.LateralJoinBatch.prefetchFirstBatchFromBothSides():241
>  org.apache.drill.exec.physical.impl.join.LateralJoinBatch.buildSchema():264
>  org.apache.drill.exec.record.AbstractRecordBatch.next():152
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.record.AbstractRecordBatch.next():119
>  org.apache.drill.exec.record.AbstractRecordBatch.next():109
>  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>  
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():137
>  org.apache.drill.exec.record.AbstractRecordBatch.next():172
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.record.AbstractRecordBatch.next():119
>  org.apache.drill.exec.record.AbstractRecordBatch.next():109
>  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>  
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():137
>  org.apache.drill.exec.record.AbstractRecordBatch.next():172
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.physical.impl.BaseRootExec.next():103
>  org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
>  org.apache.drill.exec.physical.impl.BaseRootExec.next():93
>  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():292
>  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():279
>  java.security.AccessController.doPrivileged():-2
>  javax.security.auth.Subject.doAs():422
>  org.apache.hadoop.security.UserGroupInformation.doAs():1657
>  org.apache.drill.exec.work.fragment.FragmentExecutor.run():279
>  org.apache.drill.common.SelfCleaningRunnable.run():38
>  java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>  java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>  java.lang.Thread.run():745 (state=,code=0)
> {code} 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6373) Refactor the Result Set Loader to prepare for Union, List support

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543809#comment-16543809
 ] 

ASF GitHub Bot commented on DRILL-6373:
---

sohami commented on issue #1244: DRILL-6373: Refactor Result Set Loader for 
Union, List support
URL: https://github.com/apache/drill/pull/1244#issuecomment-404968576
 
 
   @paul-rogers - I have posted couple of questions for this change on other PR 
(https://github.com/apache/drill/pull/1367)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Refactor the Result Set Loader to prepare for Union, List support
> -
>
> Key: DRILL-6373
> URL: https://issues.apache.org/jira/browse/DRILL-6373
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.13.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Major
> Attachments: 6373_Functional_Fail_07_13_1300.txt, 
> drill-6373-with-6585-fix-functional-failure.txt
>
>
> As the next step in merging the "batch sizing" enhancements, refactor the 
> {{ResultSetLoader}} and related classes to prepare for Union and List 
> support. This fix follows the refactoring of the column accessors for the 
> same purpose. Actual Union and List support is to follow in a separate PR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543806#comment-16543806
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

sohami closed pull request #1379: DRILL-6591: Show Exception for failed queries 
submitted in WebUI
URL: https://github.com/apache/drill/pull/1379
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git 
a/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
 
b/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
index cf749371034..1dac0db705f 100644
--- 
a/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
+++ 
b/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
@@ -22,6 +22,7 @@
 import com.google.common.collect.Maps;
 
 import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.exceptions.UserRemoteException;
 import org.apache.drill.exec.proto.UserBitShared.QueryId;
 import org.apache.drill.exec.proto.UserBitShared.QueryResult.QueryState;
 import org.apache.drill.exec.proto.UserBitShared.QueryType;
@@ -86,9 +87,8 @@ public QueryResult run(final WorkManager workManager, final 
WebUserConnection we
 logger.debug("Wait until the query execution is complete or there is error 
submitting the query");
 do {
   try {
-isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
/*periodically timeout to check heap*/
-  } catch (Exception e) { }
-
+isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
//periodically timeout 1 sec to check heap
+  } catch (InterruptedException e) {}
   usagePercent = getHeapUsage();
   if (usagePercent >  HEAP_MEMORY_FAILURE_THRESHOLD) {
 nearlyOutOfHeapSpace = true;
@@ -97,21 +97,22 @@ public QueryResult run(final WorkManager workManager, final 
WebUserConnection we
 
 //Fail if nearly out of heap space
 if (nearlyOutOfHeapSpace) {
+  UserException almostOutOfHeapException = UserException.resourceError()
+  .message("There is not enough heap memory to run this query using 
the web interface. ")
+  .addContext("Please try a query with fewer columns or with a filter 
or limit condition to limit the data returned. ")
+  .addContext("You can also try an ODBC/JDBC client. ")
+  .build(logger);
+  //Add event
   workManager.getBee().getForemanForQueryId(queryId)
-.addToEventQueue(QueryState.FAILED,
-UserException.resourceError(
-new Throwable(
-"There is not enough heap memory to run this query using 
the web interface. "
-+ "Please try a query with fewer columns or with a filter 
or limit condition to limit the data returned. "
-+ "You can also try an ODBC/JDBC client. "
-)
-)
-  .build(logger)
-);
+.addToEventQueue(QueryState.FAILED, almostOutOfHeapException);
+  //Return NearlyOutOfHeap exception
+  throw almostOutOfHeapException;
 }
 
-if (logger.isTraceEnabled()) {
-  logger.trace("Query {} is completed ", queryId);
+logger.trace("Query {} is completed ", queryId);
+
+if (webUserConnection.getError() != null) {
+  throw new UserRemoteException(webUserConnection.getError());
 }
 
 if (webUserConnection.results.isEmpty()) {


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When a query fails on the Web UI, the result page shows no error, only "No 
> result found." Screenshot attached. Drill should display the error message 
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6605) TPCDS-84 Query does not return any rows

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6605:

Description: 
Query is:
Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql

This uses the hive parquet reader.
{code:sql}
SELECT c_customer_id   AS customer_id,
c_last_name
|| ', '
|| c_first_name AS customername
FROM   customer,
customer_address,
customer_demographics,
household_demographics,
income_band,
store_returns
WHERE  ca_city = 'Green Acres'
AND c_current_addr_sk = ca_address_sk
AND ib_lower_bound >= 54986
AND ib_upper_bound <= 54986 + 5
AND ib_income_band_sk = hd_income_band_sk
AND cd_demo_sk = c_current_cdemo_sk
AND hd_demo_sk = c_current_hdemo_sk
AND sr_cdemo_sk = cd_demo_sk
ORDER  BY c_customer_id
LIMIT 100
{code}

This query should return 100 rows

commit id is:
1.14.0-SNAPSHOT  a77fd142d86dd5648cda8866b8ff3af39c7b6b11  DRILL-6516: EMIT support in streaming agg  11.07.2018 @ 18:40:03 PDT  Unknown  12.07.2018 @ 01:50:37 PDT



  was:
Query is:
Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql

This uses the hive parquet reader.
{code:sql}
SELECT c_customer_id   AS customer_id,
c_last_name
\|\| ', '
\|\| c_first_name AS customername
FROM   customer,
customer_address,
customer_demographics,
household_demographics,
income_band,
store_returns
WHERE  ca_city = 'Green Acres'
AND c_current_addr_sk = ca_address_sk
AND ib_lower_bound >= 54986
AND ib_upper_bound <= 54986 + 5
AND ib_income_band_sk = hd_income_band_sk
AND cd_demo_sk = c_current_cdemo_sk
AND hd_demo_sk = c_current_hdemo_sk
AND sr_cdemo_sk = cd_demo_sk
ORDER  BY c_customer_id
LIMIT 100
{code}

This query should return 100 rows

commit id is:
1.14.0-SNAPSHOT  a77fd142d86dd5648cda8866b8ff3af39c7b6b11  DRILL-6516: EMIT support in streaming agg  11.07.2018 @ 18:40:03 PDT  Unknown  12.07.2018 @ 01:50:37 PDT




> TPCDS-84 Query does not return any rows
> ---
>
> Key: DRILL-6605
> URL: https://issues.apache.org/jira/browse/DRILL-6605
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql
> This uses the hive parquet reader.
> {code:sql}
> SELECT c_customer_id   AS customer_id,
> c_last_name
> || ', '
> || c_first_name AS customername
> FROM   customer,
> customer_address,
> customer_demographics,
> household_demographics,
> income_band,
> store_returns
> WHERE  ca_city = 'Green Acres'
> AND c_current_addr_sk = ca_address_sk
> AND ib_lower_bound >= 54986
> AND ib_upper_bound <= 54986 + 5
> AND ib_income_band_sk = hd_income_band_sk
> AND cd_demo_sk = c_current_cdemo_sk
> AND hd_demo_sk = c_current_hdemo_sk
> AND sr_cdemo_sk = cd_demo_sk
> ORDER  BY c_customer_id
> LIMIT 100
> {code}
> This query should return 100 rows
> commit id is:
> 1.14.0-SNAPSHOT   a77fd142d86dd5648cda8866b8ff3af39c7b6b11
> DRILL-6516: EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT 
>   Unknown 12.07.2018 @ 01:50:37 PDT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6496:
-
Labels:   (was: ready-to-commit)

> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.
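The bug class described above (a formatted representation is built but never handed to the logger) can be sketched as follows. This is an illustrative-only model, not Drill's actual `VectorUtil` code: the class and method names (`VectorContentLogger`, `formatRow`) and the formatting scheme are assumptions.

```java
import java.util.StringJoiner;

// Hypothetical sketch of the kind of defect behind DRILL-6496: the method
// formats the vector content row by row, and the fix is making sure the
// formatted string actually reaches a log/print call instead of being dropped.
public class VectorContentLogger {

  // Pads or truncates each cell to the requested column width and joins the
  // cells into one printable row.
  static String formatRow(String[] values, int columnWidth) {
    StringJoiner row = new StringJoiner(" | ");
    for (String v : values) {
      row.add(String.format("%-" + columnWidth + "." + columnWidth + "s", v));
    }
    return row.toString();
  }

  public static void main(String[] args) {
    String line = formatRow(new String[]{"a", "bb"}, 4);
    // The missing step in the reported bug: computing `line` and then
    // forgetting the statement below, so nothing is ever emitted.
    System.out.println(line);
  }
}
```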





[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543799#comment-16543799
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

sohami commented on issue #1336: DRILL-6496: Added missing logging statement in 
VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#issuecomment-404967213
 
 
   @arina-ielchiieva / @ilooner 
   Removing the ready-to-commit tag until the compilation issue is fixed. Please add it 
back once the issue is resolved.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543794#comment-16543794
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on a change in pull request #1296: DRILL-5365: Prevent plugin 
config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202485289
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
 ##
 @@ -179,9 +182,16 @@ public FSDataInputStream open(Path f) throws IOException {
 return new DrillFSDataInputStream(underlyingFs.open(f), operatorStats);
   }
 
+  /**
+   * This method should never be used on {@link DrillFileSystem} since {@link 
DrillFileSystem} is immutable.
+   * @param name
 
 Review comment:
   done




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543795#comment-16543795
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on a change in pull request #1296: DRILL-5365: Prevent plugin 
config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202485302
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
 ##
 @@ -102,13 +102,13 @@ public DrillFileSystem(Configuration fsConf, 
OperatorStats operatorStats) throws
 fsConf = new Configuration(fsConf);
 this.underlyingFs = FileSystem.get(fsConf);
 
-logger.trace("Configuration for the DrillFileSystem {} {}, underlyingFs: 
{}",
+logger.info("Configuration for the DrillFileSystem {} {}, underlyingFs: 
{}",
   fsConf.getTrimmed(FS_DEFAULT_NAME),
   fsConf.getTrimmed(FS_DEFAULT_NAME_KEY),
   this.underlyingFs.getUri());
 
-if (logger.isTraceEnabled()) {
-  logger.trace("Who made me? {}", new RuntimeException("Who made me?"));
+if (logger.isInfoEnabled()) {
+  logger.info("Who made me? {}", new RuntimeException("Who made me?"));
 
 Review comment:
   removed
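The hunk above toggles the standard level-guard idiom between trace and info. As a minimal self-contained sketch of that idiom, using `java.util.logging` instead of Drill's SLF4J logger (class and method names here are illustrative): the guard skips constructing an expensive log argument, in this case a throwable built solely to capture a stack trace, unless the level is actually enabled.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the guarded-logging pattern the diff modifies. With the JDK's
// default configuration the effective logger level is INFO, so FINEST-level
// work is skipped entirely and INFO-level work is performed.
public class GuardDemo {
  private static final Logger logger = Logger.getLogger(GuardDemo.class.getName());

  // Returns true only when the expensive argument was built and logged.
  static boolean buildExpensiveArg(Level level) {
    if (!logger.isLoggable(level)) {
      return false; // guard: avoid allocating the throwable at all
    }
    // Expensive argument: a throwable created only to record "who made me".
    logger.log(level, "Who made me?", new RuntimeException("Who made me?"));
    return true;
  }

  public static void main(String[] args) {
    System.out.println(buildExpensiveArg(Level.FINEST)); // guarded out
    System.out.println(buildExpensiveArg(Level.INFO));   // logged
  }
}
```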




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543796#comment-16543796
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

ilooner commented on a change in pull request #1296: DRILL-5365: Prevent plugin 
config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202485318
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java
 ##
 @@ -80,12 +80,12 @@ public FileSystemPlugin(FileSystemConfig config, 
DrillbitContext context, String
 }
   }
 
-  logger.trace("Original FileSystem default fs configuration {} {}",
+  logger.info("Original FileSystem default fs configuration {} {}",
 fsConf.getTrimmed(FS_DEFAULT_NAME),
 fsConf.getTrimmed(FileSystem.FS_DEFAULT_NAME_KEY));
 
-  if (logger.isTraceEnabled()) {
-logger.trace("Who made me? {}", new RuntimeException("Who made me?"));
+  if (logger.isInfoEnabled()) {
+logger.info("Who made me? {}", new RuntimeException("Who made me?"));
 
 Review comment:
   removed




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Updated] (DRILL-6605) TPCDS-84 Query does not return any rows

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6605:

Description: 
Query is:
Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql

This uses the hive parquet reader.
{code:sql}
SELECT c_customer_id   AS customer_id,
c_last_name
|| ', '
|| c_first_name AS customername
FROM   customer,
customer_address,
customer_demographics,
household_demographics,
income_band,
store_returns
WHERE  ca_city = 'Green Acres'
AND c_current_addr_sk = ca_address_sk
AND ib_lower_bound >= 54986
AND ib_upper_bound <= 54986 + 5
AND ib_income_band_sk = hd_income_band_sk
AND cd_demo_sk = c_current_cdemo_sk
AND hd_demo_sk = c_current_hdemo_sk
AND sr_cdemo_sk = cd_demo_sk
ORDER  BY c_customer_id
LIMIT 100
{code}

This query should return 100 rows

commit id is:
1.14.0-SNAPSHOT a77fd142d86dd5648cda8866b8ff3af39c7b6b11 DRILL-6516:
EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT   Unknown 
12.07.2018 @ 01:50:37 PDT



  was:
Query is:
Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql

This uses the hive parquet reader.

SELECT c_customer_id   AS customer_id,
c_last_name
|| ', '
|| c_first_name AS customername
FROM   customer,
customer_address,
customer_demographics,
household_demographics,
income_band,
store_returns
WHERE  ca_city = 'Green Acres'
AND c_current_addr_sk = ca_address_sk
AND ib_lower_bound >= 54986
AND ib_upper_bound <= 54986 + 5
AND ib_income_band_sk = hd_income_band_sk
AND cd_demo_sk = c_current_cdemo_sk
AND hd_demo_sk = c_current_hdemo_sk
AND sr_cdemo_sk = cd_demo_sk
ORDER  BY c_customer_id
LIMIT 100

This query should return 100 rows

commit id is:
1.14.0-SNAPSHOT a77fd142d86dd5648cda8866b8ff3af39c7b6b11 DRILL-6516:
EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT   Unknown 
12.07.2018 @ 01:50:37 PDT




> TPCDS-84 Query does not return any rows
> ---
>
> Key: DRILL-6605
> URL: https://issues.apache.org/jira/browse/DRILL-6605
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql
> This uses the hive parquet reader.
> {code:sql}
> SELECT c_customer_id   AS customer_id,
> c_last_name
> || ', '
> || c_first_name AS customername
> FROM   customer,
> customer_address,
> customer_demographics,
> household_demographics,
> income_band,
> store_returns
> WHERE  ca_city = 'Green Acres'
> AND c_current_addr_sk = ca_address_sk
> AND ib_lower_bound >= 54986
> AND ib_upper_bound <= 54986 + 5
> AND ib_income_band_sk = hd_income_band_sk
> AND cd_demo_sk = c_current_cdemo_sk
> AND hd_demo_sk = c_current_hdemo_sk
> AND sr_cdemo_sk = cd_demo_sk
> ORDER  BY c_customer_id
> LIMIT 100
> {code}
> This query should return 100 rows
> commit id is:
> 1.14.0-SNAPSHOT   a77fd142d86dd5648cda8866b8ff3af39c7b6b11
> DRILL-6516: EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT 
>   Unknown 12.07.2018 @ 01:50:37 PDT





[jira] [Updated] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6603:

Description: 
Query is:
 
/root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q

{code:sql}
select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
10 and 15
{code}

This query should return 5 rows. It is missing 3 rows.
{code:bash}
1664 IaYIEviH tJHD 
6nF33QQJn1p4uuTELHOR2z0FCzMK35JkNeDRKCduYKUiPaXFgwftf4Ciidk2d7IXxyrCoX56Vsb 
ITcI9yxPpd3Gu6zkk2kktmZv9oHxMVE1ccVh2iGzU7greQuUEJ1oYFHGzGN9MEeKc5DqbHHT0F65NF1LE88CAudZW5bv6AiIj2D714q72g8ULd2WaazavWBQ6PgdKax
 
5kVvGkt9czWgZOH9CfT0ApOWUWZlQcvtVC2UumK6Q8tmE5f5yjKhTqvXOiistNIMo4K1NqG8U5t9V33b3h9Hk1ymyeGNMrb5Is1jB5nL9zlpyx3y46WoxV9GornIyrLw
 W4wxtVsbj2yFYuU65RdDzkNKezE0LsPtpXeEpJeFoFSP 
lF0wj8xSQg1wx5cfOMXBGNA1nvqTELCPCEzUvFj8hXQ3gANHJ9bOt7QFZhxWLlBhCevbqA40IgJntlf0cAJM6V562fpGd16Trt3mI4YQUOkf3luTVRcBJRpIdoP3ZzgvhnVrgfblboAFMZ8CzCaH7QrZf02fPtYJlBAdoJB6DMjqh6mbkphod1QGYOkE0jqLMCnKoZSpOG9Rk9dIFdlkIrvea0f1KDGAuAlYiTTsdgU4R6CowbVNfEyjIv0Wp1CXC6SzM1Vex6Ye7CrRptvn92SOQCsAElScXa1EuErruEAyIEvtWraXL5X42RxTBsH3TZTR6NVuUcpObKbVIx0kLTdbxIElf33x31QwXUfUVZ
 T4zHEpu6f4mLR6N9uLVG0Fza 
Glq3UxixhgxPXgZpQt9GqT3HJXHEn9F0KGaxhC9VCqSk119HrrJuMpHiYS34MCkw1iFhGFUsRKI3fTFaByicJeCIkjFwn2cr74lONdco4AAFdGGVN1cMgJmlOxUZE0Okv68DocVXUMSXCdcTBBmGL2h2gDIagThjo8sVXORponMNTrXEP068Zy7pNkVJyW10EoZwqE2IIcoKdixYsJvPc0mRWnk3gfSmB6uHWgKvgGq4yzzbGp3NT01z8IRYKbmSXTmLyk9rJjUYatoIi
 
757C2F0Yq0gceouo3LMaz9h4eyiC9psNiL3aoxquqrisayOjPs5esQzoY2iVmVZ7evrVCfxhe2AATFgTvk8Ek78y8s4nVNztlyluIrckfLbnOa25r1h9emJzooVV0Xj945xj5jAUHTZU9kCHKnmkcpEo0a7BdELbL0IvQlitXxbZBS86PlCltLGpLs
 fmYeUzJfpp0Cql3MAECSQQbW4ErwWScaZ5D 
rPfbbDZbF2m2ZtSPNn81G5zZBxfHgpuSm4UVrdd24NlLeG1mxwv 
zU1PbpjSCqbn8rUCWqn5LFafTrmSdtrCuFaknTpqmk1wR9cLnPF3cD xvh0EqSwvCmCTK9xCpZkJF 
4WnBX6w5vg7gQkjvF1GOqP3LeV3qbJc 
SO68S2UrCBNYQKdWyq4HeGG3TTuFF4x74nWkPPi0txEGiGDoYRxPvEQzWyhZ8SHpHZ3 
0UpHpuLWEXIO6VZlPJd4uC IaDEIaB 
rkCJ8TaIVvaBIf0t8FGY8MgXTWzKdUBkOcQawbODXRLEtdGABTnOqftRSfUSpdojmlwRIs8xJIKaxK9wSL67DKahL6E7CvDBaQx20G0o7u
 
rMaponV4OZmHE45vaeAqfLSyWlNL4UvOstiDPaDd8nI08g9MSKFtYYxt3RxvydGxCtaYfgsl3KxjN5VHnAxkvChVlvdS2Yd8IBA
 0dZwblnKUBibdQSgxcypDbRCPeAaOr169L9mrMv82w0V1Ndyt3qK 
wcpv5nKeO8P9kbVlWY9bGi9nxCVs804WBZMA9vc7AT4h7Jp0OsaHbJx0qyFyAnXP lu 
MMsOa28VxSW8thiTfIcx2qkdFN1KXrXpU4uo lxUOcJhH0HlyX6kLKhCnVqpG 
tFP93c5jJ7FdeSujFvxPgo1rQSN9DHXk4DR6nytgBrn2oGcM58zadRNaqoIL2wmWygQsnk7Euzypbg4KhlTICBl1mpb0JwbI7uaCudGcDNWIBMerY
 WgjahuC3QjIFd48o78CQSgqgQjzpHzdELrqMCKaKfdW4ihpHCA0sqNBYGQxxd 
T8iTWorOODkg5Kc7m4gPut8tuzEMOQus1xdajv9PqS8F7xwzAWyhymyYBJ8505HxZDuSFqBXSkpxGDh21fiBHkeKBC9RZp7r
 yD7i6xvRh47Vln0IxvnwcpahLltLr12yL0sDu9LXxHNAHU4gyvHud5J5xXJPD7r5xHXvtNOSiXVl 
hkBBib1k4IO9YjCgModazXNudTx2Mr8ccq6 
kNLKwnrwGdssm3JYyjBsUcXyLMHpS7vncUeKSw2rov4Hg4gTZU8sJMJMAJvu8d6IDJYMHULwrawKOhK8rDTP6sk9Hv27mCG8Gf9inG38Pik7AfnEtUIiZZozEsiSkWvAA7YiHlNDUuL3OX2FRgt2qu9T7zXtQkhon8uSv5FncUq17XB9idflAO0rWIK57HoilaXgIDrzG61kfSKZXpdKuwBVsRNmgJVDSedRsSihlcVDdZ7bmqsgzbvKhFri8lSh8ez6ttlXgF8h4wJ2985bVw5PUmLdeGjlbfrLF0f22vqGi11qz2GUltrjBmmBSrbCLpFUkwqqpATRoQEwo27qi5XwHYWWBqPN9rxF
 
orktFM5SRwG2IJmx8li8sRRchYnNYQgH7iuwKqd69jJJTwwdYla2296Lhw88YHzL60aq2XomN0BNNSoY8cALvy0QIHZpCFd3EmBojr46d6c8nBYMXJLlgKNzklk8vMTKrjAgBQevUH4U7gbQpOIWVf7Tx2BIXkdRGwQYHAuJzU5gtDuDqhuddXkGdACMmp0tgJVP2tpMW05Z3OGs6jYKb5xtqHotIJd7tUM33J85fRYOEIoGOaRblZr7RF82nSOSpPQnDgnVUhJ1j
 mCY1ofeqG7QqeV6LTdRyRPgiiPwHF1Xgpb3feAJ804NmX7xOkDPvw0WeqxrSVMCto r8E64UsRFypZ 
wtzVAlTJKgTMpzA4xeuVXuk85mpEJTIQpNxPjU3vgAacENiejcRs68Y85Ncb5ymC3fD0WAyh23VIsy 
GqaCV9hIFrAs tMM2zlkqpoBsSwgODBEsizaJkb4ZOWJj3Z2Wttr08YPpXSO6 
IhQKD5SHqNXEDNar2UVZwFZbg1YJccvsjWEtfm0AUZ 
3KHMUb3X1F3tWqIYrZucrsjUp2xfaGtqnsij4q7CRWhRucucjyKcKmiaGE7XllzVGPeHWmbtAFku355JLB2OlBXdsgWMVZFcaCOHff6OlSECOgdLGBSL297kgCVKLzDEvxS
 
T4rb5neHQffvmAHOzdIuDGw1559XGVHwzz5lLoc3iSicYlwZTKN2VUOQPHRSqTI1hMJmgTcUaO3LEHyxL2so3EedaU9BSaTaA3kPefKSdu
 ibaW3h1 
WKkznSnlmVjhLzq5e5ywYzwA26EusRtJmAAiiSrYG20uO7ejp1AlorSgOAfM9B5qxQAqaDqQMUlvhlu7SjK46egz5kK3xtcoUfyxyUwAonh3iv
 
VJPXdvxm8ZuZbnm82xLkh4MeWbClb0jH5E42m9aFp8GrSQzAwhzciocZJABwerP1sfITnG6EMyPKdl7FBIjJKjNcFOVabzQX966h6WYnAOKuaYdJWNGgKOISIcR6OwHIaUWjqV9w84VYxXutZJ1rRlbeUPT8ygTZmFk2FK2Ix02rBzt0nFkiTNmoZSilSzSOxSF
 iwtXmtDRtjrQPQCVKlZM3KrYjiJfOem8PIOA8wadL0lHN87gpEqUsrvpohZ8FRW 
ILoeDeWeBYO94JOrYv7JdirgNH7MBdmrMQOrBPpY6bdX3is62JWMm9c0Xv7jyEVdq3hkSsJLWEr4Gu8TZBfjrd9rVX0gqjlQZsk30UwEDjvtfufkYcJj2sGbJ3HzJdIh1MCHIoPb1YyacfzEvnQsnlQagfRu51vSF8qehDJ2AtCezy6hOdwberI4qgP8HMuBKRjoyN91ipykonft9himO44rJtkiREFA9opJA9jKWM8kYzICDmE2
 D3pZcmMGyUEyCY K7IEITWxzmISenhl1Ext2wzZxJoQcfLNU 8rmXNFLwxnJCEYq4bNrEn9IQw 
6xhgjw8roQVEgL8NZTxtlcve8RAyLILFdfNsvvg7qa700PCc 
ZDX5BRZtdW9eweK3icrBR6bxsnbXqnwk6ZDIe8qx 
Gd0lbF5OFc5q1hgpSCi5VUAZr3qepzmwhsYGXILwKrDobtkjkHyacqBOjPVDSqukvmdzjKfPRBi 
2GyOpnS6kvXjOUESvWBH1c AyOybXn4Zo55XF8ssFbjte6VzBTX 
{code}

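The q67.q predicate above combines an IS NULL test with an inclusive BETWEEN range, and the bug report is that rows satisfying both conditions are silently dropped. The expected semantics can be modeled as follows; the `Row` type and sample values are made up for illustration, not Drill code.

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

// Illustrative model of: str_var IS NULL AND dec_var_prec5_sc2 BETWEEN 10 AND 15.
// The key points are that BETWEEN is inclusive on both bounds and that the
// IS NULL condition must not be lost by the reader or filter.
public class PredicateSketch {
  record Row(String strVar, BigDecimal dec) {}

  static long matching(List<Row> rows) {
    BigDecimal lo = new BigDecimal("10");
    BigDecimal hi = new BigDecimal("15");
    return rows.stream()
        .filter(r -> r.strVar() == null)                  // str_var IS NULL
        .filter(r -> r.dec().compareTo(lo) >= 0
                  && r.dec().compareTo(hi) <= 0)          // BETWEEN 10 AND 15
        .count();
  }

  public static void main(String[] args) {
    List<Row> rows = Arrays.asList(
        new Row(null, new BigDecimal("12.50")),   // matches
        new Row(null, new BigDecimal("15.00")),   // matches: upper bound is inclusive
        new Row("x",  new BigDecimal("12.50")),   // str_var not null
        new Row(null, new BigDecimal("15.01")));  // out of range
    System.out.println(matching(rows)); // prints 2
  }
}
```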
[jira] [Updated] (DRILL-6605) TPCDS-84 Query does not return any rows

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6605:

Summary: TPCDS-84 Query does not return any rows  (was: Query does not 
return any rows)

> TPCDS-84 Query does not return any rows
> ---
>
> Key: DRILL-6605
> URL: https://issues.apache.org/jira/browse/DRILL-6605
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql
> This uses the hive parquet reader.
> SELECT c_customer_id   AS customer_id,
> c_last_name
> || ', '
> || c_first_name AS customername
> FROM   customer,
> customer_address,
> customer_demographics,
> household_demographics,
> income_band,
> store_returns
> WHERE  ca_city = 'Green Acres'
> AND c_current_addr_sk = ca_address_sk
> AND ib_lower_bound >= 54986
> AND ib_upper_bound <= 54986 + 5
> AND ib_income_band_sk = hd_income_band_sk
> AND cd_demo_sk = c_current_cdemo_sk
> AND hd_demo_sk = c_current_hdemo_sk
> AND sr_cdemo_sk = cd_demo_sk
> ORDER  BY c_customer_id
> LIMIT 100
> This query should return 100 rows
> commit id is:
> 1.14.0-SNAPSHOT   a77fd142d86dd5648cda8866b8ff3af39c7b6b11
> DRILL-6516: EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT 
>   Unknown 12.07.2018 @ 01:50:37 PDT





[jira] [Assigned] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-6603:


Assignee: Arina Ielchiieva  (was: Pritesh Maker)

> Query does not return enough rows
> -
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> This query should return 5 rows.  It is missing 3 rows.

[jira] [Assigned] (DRILL-6605) Query does not return any rows

2018-07-13 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-6605:


Assignee: Arina Ielchiieva  (was: Pritesh Maker)

> Query does not return any rows
> --
>
> Key: DRILL-6605
> URL: https://issues.apache.org/jira/browse/DRILL-6605
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Robert Hou
>Assignee: Arina Ielchiieva
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql
> This uses the hive parquet reader.
> SELECT c_customer_id   AS customer_id,
> c_last_name
> || ', '
> || c_first_name AS customername
> FROM   customer,
> customer_address,
> customer_demographics,
> household_demographics,
> income_band,
> store_returns
> WHERE  ca_city = 'Green Acres'
> AND c_current_addr_sk = ca_address_sk
> AND ib_lower_bound >= 54986
> AND ib_upper_bound <= 54986 + 5
> AND ib_income_band_sk = hd_income_band_sk
> AND cd_demo_sk = c_current_cdemo_sk
> AND hd_demo_sk = c_current_hdemo_sk
> AND sr_cdemo_sk = cd_demo_sk
> ORDER  BY c_customer_id
> LIMIT 100
> This query should return 100 rows
> commit id is:
> 1.14.0-SNAPSHOT   a77fd142d86dd5648cda8866b8ff3af39c7b6b11
> DRILL-6516: EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT 
>   Unknown 12.07.2018 @ 01:50:37 PDT





[jira] [Updated] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6603:

Priority: Blocker  (was: Major)

> Query does not return enough rows
> -
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> This query should return 5 rows.  It is missing 3 rows.

[jira] [Updated] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6603:

Fix Version/s: 1.14.0

> Query does not return enough rows
> -
>
> Key: DRILL-6603
> URL: https://issues.apache.org/jira/browse/DRILL-6603
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> /root/drillAutomation/framework-master/framework/resources/Advanced/data-shapes/wide-columns/5000/10rows/parquet/q67.q
> select * from widestrings where str_var is null and dec_var_prec5_sc2 between 
> 10 and 15
> This query should return 5 rows.  It is missing 3 rows.
> 1664  IaYIEviH tJHD 
> 6nF33QQJn1p4uuTELHOR2z0FCzMK35JkNeDRKCduYKUiPaXFgwftf4Ciidk2d7IXxyrCoX56Vsb 
> ITcI9yxPpd3Gu6zkk2kktmZv9oHxMVE1ccVh2iGzU7greQuUEJ1oYFHGzGN9MEeKc5DqbHHT0F65NF1LE88CAudZW5bv6AiIj2D714q72g8ULd2WaazavWBQ6PgdKax
>  
> 5kVvGkt9czWgZOH9CfT0ApOWUWZlQcvtVC2UumK6Q8tmE5f5yjKhTqvXOiistNIMo4K1NqG8U5t9V33b3h9Hk1ymyeGNMrb5Is1jB5nL9zlpyx3y46WoxV9GornIyrLw
>  W4wxtVsbj2yFYuU65RdDzkNKezE0LsPtpXeEpJeFoFSP 
> lF0wj8xSQg1wx5cfOMXBGNA1nvqTELCPCEzUvFj8hXQ3gANHJ9bOt7QFZhxWLlBhCevbqA40IgJntlf0cAJM6V562fpGd16Trt3mI4YQUOkf3luTVRcBJRpIdoP3ZzgvhnVrgfblboAFMZ8CzCaH7QrZf02fPtYJlBAdoJB6DMjqh6mbkphod1QGYOkE0jqLMCnKoZSpOG9Rk9dIFdlkIrvea0f1KDGAuAlYiTTsdgU4R6CowbVNfEyjIv0Wp1CXC6SzM1Vex6Ye7CrRptvn92SOQCsAElScXa1EuErruEAyIEvtWraXL5X42RxTBsH3TZTR6NVuUcpObKbVIx0kLTdbxIElf33x31QwXUfUVZ
>  T4zHEpu6f4mLR6N9uLVG0Fza 
> Glq3UxixhgxPXgZpQt9GqT3HJXHEn9F0KGaxhC9VCqSk119HrrJuMpHiYS34MCkw1iFhGFUsRKI3fTFaByicJeCIkjFwn2cr74lONdco4AAFdGGVN1cMgJmlOxUZE0Okv68DocVXUMSXCdcTBBmGL2h2gDIagThjo8sVXORponMNTrXEP068Zy7pNkVJyW10EoZwqE2IIcoKdixYsJvPc0mRWnk3gfSmB6uHWgKvgGq4yzzbGp3NT01z8IRYKbmSXTmLyk9rJjUYatoIi
>  
> 757C2F0Yq0gceouo3LMaz9h4eyiC9psNiL3aoxquqrisayOjPs5esQzoY2iVmVZ7evrVCfxhe2AATFgTvk8Ek78y8s4nVNztlyluIrckfLbnOa25r1h9emJzooVV0Xj945xj5jAUHTZU9kCHKnmkcpEo0a7BdELbL0IvQlitXxbZBS86PlCltLGpLs
>  fmYeUzJfpp0Cql3MAECSQQbW4ErwWScaZ5D 
> rPfbbDZbF2m2ZtSPNn81G5zZBxfHgpuSm4UVrdd24NlLeG1mxwv 
> zU1PbpjSCqbn8rUCWqn5LFafTrmSdtrCuFaknTpqmk1wR9cLnPF3cD xvh0EqSwvCmCTK9xCpZkJF 
> 4WnBX6w5vg7gQkjvF1GOqP3LeV3qbJc 
> SO68S2UrCBNYQKdWyq4HeGG3TTuFF4x74nWkPPi0txEGiGDoYRxPvEQzWyhZ8SHpHZ3 
> 0UpHpuLWEXIO6VZlPJd4uC IaDEIaB 
> rkCJ8TaIVvaBIf0t8FGY8MgXTWzKdUBkOcQawbODXRLEtdGABTnOqftRSfUSpdojmlwRIs8xJIKaxK9wSL67DKahL6E7CvDBaQx20G0o7u
>  
> rMaponV4OZmHE45vaeAqfLSyWlNL4UvOstiDPaDd8nI08g9MSKFtYYxt3RxvydGxCtaYfgsl3KxjN5VHnAxkvChVlvdS2Yd8IBA
>  0dZwblnKUBibdQSgxcypDbRCPeAaOr169L9mrMv82w0V1Ndyt3qK 
> wcpv5nKeO8P9kbVlWY9bGi9nxCVs804WBZMA9vc7AT4h7Jp0OsaHbJx0qyFyAnXP lu 
> MMsOa28VxSW8thiTfIcx2qkdFN1KXrXpU4uo lxUOcJhH0HlyX6kLKhCnVqpG 
> tFP93c5jJ7FdeSujFvxPgo1rQSN9DHXk4DR6nytgBrn2oGcM58zadRNaqoIL2wmWygQsnk7Euzypbg4KhlTICBl1mpb0JwbI7uaCudGcDNWIBMerY
>  WgjahuC3QjIFd48o78CQSgqgQjzpHzdELrqMCKaKfdW4ihpHCA0sqNBYGQxxd 
> T8iTWorOODkg5Kc7m4gPut8tuzEMOQus1xdajv9PqS8F7xwzAWyhymyYBJ8505HxZDuSFqBXSkpxGDh21fiBHkeKBC9RZp7r
>  yD7i6xvRh47Vln0IxvnwcpahLltLr12yL0sDu9LXxHNAHU4gyvHud5J5xXJPD7r5xHXvtNOSiXVl 
> hkBBib1k4IO9YjCgModazXNudTx2Mr8ccq6 
> kNLKwnrwGdssm3JYyjBsUcXyLMHpS7vncUeKSw2rov4Hg4gTZU8sJMJMAJvu8d6IDJYMHULwrawKOhK8rDTP6sk9Hv27mCG8Gf9inG38Pik7AfnEtUIiZZozEsiSkWvAA7YiHlNDUuL3OX2FRgt2qu9T7zXtQkhon8uSv5FncUq17XB9idflAO0rWIK57HoilaXgIDrzG61kfSKZXpdKuwBVsRNmgJVDSedRsSihlcVDdZ7bmqsgzbvKhFri8lSh8ez6ttlXgF8h4wJ2985bVw5PUmLdeGjlbfrLF0f22vqGi11qz2GUltrjBmmBSrbCLpFUkwqqpATRoQEwo27qi5XwHYWWBqPN9rxF
>  
> orktFM5SRwG2IJmx8li8sRRchYnNYQgH7iuwKqd69jJJTwwdYla2296Lhw88YHzL60aq2XomN0BNNSoY8cALvy0QIHZpCFd3EmBojr46d6c8nBYMXJLlgKNzklk8vMTKrjAgBQevUH4U7gbQpOIWVf7Tx2BIXkdRGwQYHAuJzU5gtDuDqhuddXkGdACMmp0tgJVP2tpMW05Z3OGs6jYKb5xtqHotIJd7tUM33J85fRYOEIoGOaRblZr7RF82nSOSpPQnDgnVUhJ1j
>  mCY1ofeqG7QqeV6LTdRyRPgiiPwHF1Xgpb3feAJ804NmX7xOkDPvw0WeqxrSVMCto 
> r8E64UsRFypZ 
> wtzVAlTJKgTMpzA4xeuVXuk85mpEJTIQpNxPjU3vgAacENiejcRs68Y85Ncb5ymC3fD0WAyh23VIsy
>  GqaCV9hIFrAs tMM2zlkqpoBsSwgODBEsizaJkb4ZOWJj3Z2Wttr08YPpXSO6 
> IhQKD5SHqNXEDNar2UVZwFZbg1YJccvsjWEtfm0AUZ 
> 3KHMUb3X1F3tWqIYrZucrsjUp2xfaGtqnsij4q7CRWhRucucjyKcKmiaGE7XllzVGPeHWmbtAFku355JLB2OlBXdsgWMVZFcaCOHff6OlSECOgdLGBSL297kgCVKLzDEvxS
>  
> T4rb5neHQffvmAHOzdIuDGw1559XGVHwzz5lLoc3iSicYlwZTKN2VUOQPHRSqTI1hMJmgTcUaO3LEHyxL2so3EedaU9BSaTaA3kPefKSdu
>  ibaW3h1 
> WKkznSnlmVjhLzq5e5ywYzwA26EusRtJmAAiiSrYG20uO7ejp1AlorSgOAfM9B5qxQAqaDqQMUlvhlu7SjK46egz5kK3xtcoUfyxyUwAonh3iv
>  
> VJPXdvxm8ZuZbnm82xLkh4MeWbClb0jH5E42m9aFp8GrSQzAwhzciocZJABwerP1sfITnG6EMyPKdl7FBIjJKjNcFOVabzQX966h6WYnAOKuaYdJWNGgKOISIcR6OwHIaUWjqV9w84VYxXutZJ1rRlbeUPT8ygTZmFk2FK2Ix02rBzt0nFkiTNmoZSilSzSOxSF
>  iwtXmtDRtjrQPQCVKlZM3KrYjiJfOem8PIOA8wadL0lHN87gpEqUsrvpohZ8FRW 
> 

[jira] [Updated] (DRILL-6605) Query does not return any rows

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6605:

Priority: Blocker  (was: Major)

> Query does not return any rows
> --
>
> Key: DRILL-6605
> URL: https://issues.apache.org/jira/browse/DRILL-6605
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Robert Hou
>Assignee: Pritesh Maker
>Priority: Blocker
> Fix For: 1.14.0
>
>
> Query is:
> Advanced/tpcds/tpcds_sf100/hive/parquet/query84.sql
> This uses the hive parquet reader.
> SELECT c_customer_id   AS customer_id,
> c_last_name
> || ', '
> || c_first_name AS customername
> FROM   customer,
> customer_address,
> customer_demographics,
> household_demographics,
> income_band,
> store_returns
> WHERE  ca_city = 'Green Acres'
> AND c_current_addr_sk = ca_address_sk
> AND ib_lower_bound >= 54986
> AND ib_upper_bound <= 54986 + 5
> AND ib_income_band_sk = hd_income_band_sk
> AND cd_demo_sk = c_current_cdemo_sk
> AND hd_demo_sk = c_current_hdemo_sk
> AND sr_cdemo_sk = cd_demo_sk
> ORDER  BY c_customer_id
> LIMIT 100
> This query should return 100 rows
> commit id is:
> 1.14.0-SNAPSHOT   a77fd142d86dd5648cda8866b8ff3af39c7b6b11
> DRILL-6516: EMIT support in streaming agg   11.07.2018 @ 18:40:03 PDT 
>   Unknown 12.07.2018 @ 01:50:37 PDT



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6605) Query does not return any rows

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6605:

Affects Version/s: (was: 1.13.0)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6605) Query does not return any rows

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6605:

Fix Version/s: 1.14.0  (was: 1.15.0)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6605) Query does not return any rows

2018-07-13 Thread Robert Hou (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543685#comment-16543685
 ] 

Robert Hou commented on DRILL-6605:
---

Yes, this is a regression.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6603) Query does not return enough rows

2018-07-13 Thread Robert Hou (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543684#comment-16543684
 ] 

Robert Hou commented on DRILL-6603:
---

Yes, this is a regression for the Apache Advanced tests.

We can check when the PR is merged.


[jira] [Created] (DRILL-6607) Index Out of Bounds Error in string_binary function

2018-07-13 Thread John Omernik (JIRA)
John Omernik created DRILL-6607:
---

 Summary: Index Out of Bounds Error in string_binary function
 Key: DRILL-6607
 URL: https://issues.apache.org/jira/browse/DRILL-6607
 Project: Apache Drill
  Issue Type: Bug
  Components:  Server
Affects Versions: 1.13.0
Reporter: John Omernik


I am running a query with the pcap plugin. When I run:

 

select `type`, `timestamp`, `src_ip`, `dst_ip`, `src_port`, `dst_port`, 
`tcp_parsed_flags`, `packet_length`, `data`
from dfs.root.`user/jomernik/bf2_7306.pcap` where `type` <> 'ARP' limit 10

 

It returns properly. However, when I run:

select `type`, `timestamp`, `src_ip`, `dst_ip`, `src_port`, `dst_port`, 
`tcp_parsed_flags`, `packet_length`, `data`, string_binary(`data`) as mydata
from dfs.root.`user/jomernik/bf2_7306.pcap` where `type` <> 'ARP' limit 10

 

SYSTEM ERROR: IndexOutOfBoundsException: index: 0, length: 1472 (expected: 
range(0, 256)) Fragment 0:0 [Error Id: 2b804cdf-16c3-4f55-80f5-1cf3b9b6610b on 
zeta3.brewingintel.com:20005]
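The exception suggests that string_binary copies the full packet payload into an output buffer sized for only 256 bytes, so any packet larger than that trips the bounds check. A minimal sketch of that failure mode (the class, method, and buffer sizes are illustrative, not Drill's actual code; the check mirrors the style of Netty's `AbstractByteBuf.checkIndex`):

```java
// Hypothetical reproduction of the failing bounds check; not Drill's real code.
public class BoundsDemo {

    // Refuses to copy when the source does not fit in the destination,
    // producing the same message shape as Netty's checkIndex0.
    static void setBytes(byte[] dst, int index, byte[] src) {
        if (index < 0 || src.length > dst.length - index) {
            throw new IndexOutOfBoundsException(String.format(
                "index: %d, length: %d (expected: range(0, %d))",
                index, src.length, dst.length));
        }
        System.arraycopy(src, 0, dst, index, src.length);
    }

    public static void main(String[] args) {
        byte[] dst = new byte[256];     // capacity the function appears to assume
        byte[] packet = new byte[1472]; // a full-size Ethernet payload from the pcap
        try {
            setBytes(dst, 0, packet);
        } catch (IndexOutOfBoundsException e) {
            // index: 0, length: 1472 (expected: range(0, 256))
            System.out.println(e.getMessage());
        }
    }
}
```

Note the printed message matches the reported error exactly, which is consistent with a fixed 256-byte output allocation meeting a 1472-byte packet.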

Full Error:

2018-07-13 15:41:33,187 [24b6f183-8db2-a1ce-3fdb-293cc9d45b9b:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 24b6f183-8db2-a1ce-3fdb-293cc9d45b9b:0:0: State change requested RUNNING --> FAILED

2018-07-13 15:41:33,188 [24b6f183-8db2-a1ce-3fdb-293cc9d45b9b:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 24b6f183-8db2-a1ce-3fdb-293cc9d45b9b:0:0: State change requested FAILED --> FINISHED

2018-07-13 15:41:33,191 [24b6f183-8db2-a1ce-3fdb-293cc9d45b9b:frag:0:0] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IndexOutOfBoundsException: index: 0, length: 1472 (expected: range(0, 256))

Fragment 0:0

[Error Id: 2b804cdf-16c3-4f55-80f5-1cf3b9b6610b on zeta3.brewingintel.com:20005]

org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IndexOutOfBoundsException: index: 0, length: 1472 (expected: range(0, 256))

Fragment 0:0

[Error Id: 2b804cdf-16c3-4f55-80f5-1cf3b9b6610b on zeta3.brewingintel.com:20005]

 at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:300) [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:266) [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.13.0-mapr.jar:1.13.0-mapr]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: java.lang.IndexOutOfBoundsException: index: 0, length: 1472 (expected: range(0, 256))
 at io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1125) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
 at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1120) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
 at io.netty.buffer.UnsafeByteBufUtil.setBytes(UnsafeByteBufUtil.java:349) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
 at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:199) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
 at io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:397) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
 at io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:37) ~[drill-memory-base-1.13.0-mapr.jar:4.0.48.Final]
 at io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:767) ~[drill-memory-base-1.13.0-mapr.jar:4.0.48.Final]
 at io.netty.buffer.AbstractByteBuf.setBytes(AbstractByteBuf.java:528) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
 at org.apache.drill.exec.test.generated.ProjectorGen2.doEval(ProjectorTemplate.java:77) ~[na:na]
 at org.apache.drill.exec.test.generated.ProjectorGen2.projectRecords(ProjectorTemplate.java:67) ~[na:na]
 at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:198) ~[drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:97) ~[drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134) ~[drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) ~[drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
 at 

[jira] [Updated] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6496:

Labels: ready-to-commit  (was: )

> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543671#comment-16543671
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

arina-ielchiieva commented on issue #1336: DRILL-6496: Added missing logging 
statement in VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
columnWidths)
URL: https://github.com/apache/drill/pull/1336#issuecomment-404946291
 
 
   Looks like there are some compilation errors:
   ```
   [ERROR] COMPILATION ERROR : 
   [INFO] -
   [ERROR] 
/home/travis/build/apache/drill/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/lateraljoin/TestE2EUnnestAndLateral.java:[396,6]
 error: cannot find symbol
   [ERROR]   symbol:   method test(String)
 location: class TestE2EUnnestAndLateral
   
/home/travis/build/apache/drill/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/lateraljoin/TestE2EUnnestAndLateral.java:[432,6]
 error: cannot find symbol
   ```
   The changes look good though, thanks for making changes after code review, 
Putting +1 here, can be merged when compilation errors are addressed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6496:

Reviewer: Arina Ielchiieva  (was: Volodymyr Vysotskyi)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6588) System table columns incorrectly marked as non-nullable

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6588:

Labels: ready-to-commit  (was: )

> System table columns incorrectly marked as non-nullable 
> 
>
> Key: DRILL-6588
> URL: https://issues.apache.org/jira/browse/DRILL-6588
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.13.0
>Reporter: Aman Sinha
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
>
> System table columns can contain null values but they are incorrectly marked 
> as non-nullable as shown in example table below:  
> {noformat}
> 0: jdbc:drill:drillbit=10.10.10.191> describe sys.boot;
> +-------------------+--------------------+--------------+
> |    COLUMN_NAME    |     DATA_TYPE      | IS_NULLABLE  |
> +-------------------+--------------------+--------------+
> | name              | CHARACTER VARYING  | NO           |
> | kind              | CHARACTER VARYING  | NO           |
> | accessibleScopes  | CHARACTER VARYING  | NO           |
> | optionScope       | CHARACTER VARYING  | NO           |
> | status            | CHARACTER VARYING  | NO           |
> | num_val           | BIGINT             | NO           |
> | string_val        | CHARACTER VARYING  | NO           |
> | bool_val          | BOOLEAN            | NO           |
> | float_val         | DOUBLE             | NO           |
> +-------------------+--------------------+--------------+{noformat}
>  
> Note that several columns are nulls: 
> {noformat}
> +----------------------------------------------------+---------+------------------+-------------+--------+---------+------------+----------+-----------+
> |                        name                        |  kind   | accessibleScopes | optionScope | status | num_val | string_val | bool_val | float_val |
> +----------------------------------------------------+---------+------------------+-------------+--------+---------+------------+----------+-----------+
> | drill.exec.options.exec.udf.enable_dynamic_support | BOOLEAN | BOOT             | BOOT        | BOOT   | null    | null       | true     | null      |
> +----------------------------------------------------+---------+------------------+-------------+--------+---------+------------+----------+-----------+{noformat}
>  
> Because of the not-null metadata, predicates on these tables such as 
> `WHERE <column> IS NULL` evaluate to FALSE, which is incorrect. 
>  
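The consequence described above can be illustrated with a minimal sketch of a null-reduction planner rule (the `Column` type and `simplifyIsNull` helper are hypothetical, not Drill's planner code): when metadata claims a column is non-nullable, `IS NULL` can be constant-folded to FALSE, silently filtering out rows that really are null whenever the metadata is wrong.

```java
// Hypothetical sketch of metadata-driven IS NULL constant folding.
public class NullabilityDemo {

    static final class Column {
        final String name;
        final boolean nullable;
        Column(String name, boolean nullable) {
            this.name = name;
            this.nullable = nullable;
        }
    }

    // If metadata says the column can never be null, "col IS NULL" is
    // provably false, so a planner may fold the predicate away entirely.
    static String simplifyIsNull(Column c) {
        return c.nullable ? c.name + " IS NULL" : "FALSE";
    }

    public static void main(String[] args) {
        // sys.boot reports IS_NULLABLE = NO for num_val, so the filter folds:
        System.out.println(simplifyIsNull(new Column("num_val", false))); // FALSE
        // With correct (nullable) metadata the predicate would survive:
        System.out.println(simplifyIsNull(new Column("num_val", true)));  // num_val IS NULL
    }
}
```

This is why fixing the metadata (making the sys table columns nullable) restores correct `IS NULL` behavior without touching predicate evaluation itself.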



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6588) System table columns incorrectly marked as non-nullable

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543665#comment-16543665
 ] 

ASF GitHub Bot commented on DRILL-6588:
---

arina-ielchiieva commented on a change in pull request #1371: DRILL-6588: Make 
Sys tables of nullable datatypes
URL: https://github.com/apache/drill/pull/1371#discussion_r202464481
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/exec/store/sys/TestSystemTable.java
 ##
 @@ -90,4 +92,11 @@ public void testProfilesLimitPushDown() throws Exception {
 String numFilesPattern = "maxRecordsToRead=10";
 testPlanMatchingPatterns(query, new String[] {numFilesPattern}, new 
String[] {});
   }
+
+  @Test
+  public void testColumnNullability() throws Exception {
+String query = " select distinct is_nullable, count(*) from 
INFORMATION_SCHEMA.`COLUMNS` where table_schema = 'sys' group by is_nullable";
 
 Review comment:
   `" select` -> please remove space


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6588) System table columns incorrectly marked as non-nullable

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543667#comment-16543667
 ] 

ASF GitHub Bot commented on DRILL-6588:
---

arina-ielchiieva commented on issue #1371: DRILL-6588: Make Sys tables of 
nullable datatypes
URL: https://github.com/apache/drill/pull/1371#issuecomment-404945249
 
 
   One minor comment to remove the space; also please squash the commits. +1, LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6591:

Labels: ready-to-commit  (was: )

> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When a query fails on the Web UI, the result page shows no error, only "No result 
> found." Screenshot attached. Drill should display the error message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543661#comment-16543661
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

arina-ielchiieva commented on issue #1379: DRILL-6591: Show Exception for 
failed queries submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#issuecomment-404944699
 
 
   +1, LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When a query fails on the Web UI, the result page shows no error, only "No result 
> found." Screenshot attached. Drill should display the error message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6373) Refactor the Result Set Loader to prepare for Union, List support

2018-07-13 Thread Karthikeyan Manivannan (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthikeyan Manivannan updated DRILL-6373:
--
Attachment: 6373_Functional_Fail_07_13_1300.txt

> Refactor the Result Set Loader to prepare for Union, List support
> -
>
> Key: DRILL-6373
> URL: https://issues.apache.org/jira/browse/DRILL-6373
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.13.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Major
> Attachments: 6373_Functional_Fail_07_13_1300.txt, 
> drill-6373-with-6585-fix-functional-failure.txt
>
>
> As the next step in merging the "batch sizing" enhancements, refactor the 
> {{ResultSetLoader}} and related classes to prepare for Union and List 
> support. This fix follows the refactoring of the column accessors for the 
> same purpose. Actual Union and List support is to follow in a separate PR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6373) Refactor the Result Set Loader to prepare for Union, List support

2018-07-13 Thread Karthikeyan Manivannan (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543642#comment-16543642
 ] 

Karthikeyan Manivannan commented on DRILL-6373:
---

[~paul-rogers] The functional test failed with some plan verification failures, 
but I doubt it is because of your change. The log is attached: 
[^6373_Functional_Fail_07_13_1300.txt]

> Refactor the Result Set Loader to prepare for Union, List support
> -
>
> Key: DRILL-6373
> URL: https://issues.apache.org/jira/browse/DRILL-6373
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.13.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Major
> Attachments: 6373_Functional_Fail_07_13_1300.txt, 
> drill-6373-with-6585-fix-functional-failure.txt
>
>
> As the next step in merging the "batch sizing" enhancements, refactor the 
> {{ResultSetLoader}} and related classes to prepare for Union and List 
> support. This fix follows the refactoring of the column accessors for the 
> same purpose. Actual Union and List support is to follow in a separate PR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6346) Create an Official Drill Docker Container

2018-07-13 Thread Bridget Bevens (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543634#comment-16543634
 ] 

Bridget Bevens commented on DRILL-6346:
---

Talked to Abhishek and created a rough draft of a doc 
[here|https://docs.google.com/document/d/1E10NTIBIY7SOS33M5XTXvzagefufaM1sSFnS0lKQmSc/edit?usp=sharing].
I'll update the doc with any review comments and then post to Apache Drill docs 
when complete.
Thanks,
Bridget

> Create an Official Drill Docker Container
> -
>
> Key: DRILL-6346
> URL: https://issues.apache.org/jira/browse/DRILL-6346
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Abhishek Girish
>Priority: Major
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543626#comment-16543626
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on issue #1336: DRILL-6496: Added missing logging statement 
in VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
columnWidths)
URL: https://github.com/apache/drill/pull/1336#issuecomment-404932368
 
 
   @arina-ielchiieva Thanks for catching mistakes, I have applied the review 
comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543623#comment-16543623
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202452062
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/PrintingUtils.java
 ##
 @@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.test;
+
+import ch.qos.logback.classic.Level;
+import org.apache.drill.exec.client.LoggingResultsListener;
+import org.apache.drill.exec.util.VectorUtil;
+
+import java.util.function.Supplier;
+
+/**
+ * 
+ *   This class contains utility methods to run lambda functions with the 
necessary {@link org.apache.drill.test.LogFixture}
+ *   boilerplate to print results to stdout for debugging purposes.
+ * 
+ *
+ * 
+ *   If you need to enable printing for more classes, simply add them to the 
{@link org.apache.drill.test.LogFixture}
+ *   constructed in {@link #printAndThrow(CheckedSupplier)}.
+ * 
+ */
+public final class PrintingUtils {
+  /**
+   * The Java standard library does not provide a lambda function interface 
for functions that take no arguments,
+   * but that throw an exception. So, we have to define our own here for use 
in {@link #printAndThrow(CheckedSupplier)}.
+   * @param  The return type of the lambda function.
+   * @param  The type of exception thrown by the lambda function.
+   */
+  @FunctionalInterface
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6453) TPC-DS query 72 has regressed

2018-07-13 Thread Khurram Faraaz (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543613#comment-16543613
 ] 

Khurram Faraaz commented on DRILL-6453:
---

Results of executing the simplified version of TPC-DS query 72 containing the 
first three joins, starting from the leaf level in the plan.
The query below took 7 min 46.719 sec to complete.

{noformat}
SELECT
 Count(*) total_cnt 
FROM catalog_sales 
JOIN inventory 
 ON ( cs_item_sk = inv_item_sk ) 
JOIN customer_demographics 
 ON ( cs_bill_cdemo_sk = cd_demo_sk ) 
JOIN household_demographics 
 ON ( cs_bill_hdemo_sk = hd_demo_sk ) 
WHERE inv_quantity_on_hand < cs_quantity 
 AND hd_buy_potential = '501-1000' 
 AND cd_marital_status = 'M' 
LIMIT 100
{noformat}

{noformat}
00-00 Screen : rowType = RecordType(BIGINT total_cnt): rowcount = 100.0, 
cumulative cost = \{9.7136055E7 rows, 6.08208382E8 cpu, 0.0 io, 9.4473289728E10 
network, 3.04611648E7 memory}, id = 2694
00-01 Project(total_cnt=[$0]) : rowType = RecordType(BIGINT total_cnt): 
rowcount = 100.0, cumulative cost = \{9.7136045E7 rows, 6.08208372E8 cpu, 0.0 
io, 9.4473289728E10 network, 3.04611648E7 memory}, id = 2693
00-02 SelectionVectorRemover : rowType = RecordType(BIGINT total_cnt): rowcount 
= 100.0, cumulative cost = \{9.7135945E7 rows, 6.08208272E8 cpu, 0.0 io, 
9.4473289728E10 network, 3.04611648E7 memory}, id = 2692
00-03 Limit(fetch=[100]) : rowType = RecordType(BIGINT total_cnt): rowcount = 
100.0, cumulative cost = \{9.7135845E7 rows, 6.08208172E8 cpu, 0.0 io, 
9.4473289728E10 network, 3.04611648E7 memory}, id = 2691
00-04 StreamAgg(group=[{}], total_cnt=[$SUM0($0)]) : rowType = 
RecordType(BIGINT total_cnt): rowcount = 1.0, cumulative cost = \{9.7135745E7 
rows, 6.08207772E8 cpu, 0.0 io, 9.4473289728E10 network, 3.04611648E7 memory}, 
id = 2690
00-05 StreamAgg(group=[{}], total_cnt=[COUNT()]) : rowType = RecordType(BIGINT 
total_cnt): rowcount = 1.0, cumulative cost = \{9.7135744E7 rows, 6.0820776E8 
cpu, 0.0 io, 9.4473289728E10 network, 3.04611648E7 memory}, id = 2689
00-06 Project($f0=[0]) : rowType = RecordType(INTEGER $f0): rowcount = 
5872500.0, cumulative cost = \{9.1263244E7 rows, 5.3773776E8 cpu, 0.0 io, 
9.4473289728E10 network, 3.04611648E7 memory}, id = 2688
00-07 HashJoin(condition=[=($0, $1)], joinType=[inner]) : rowType = 
RecordType(ANY cs_bill_hdemo_sk, ANY hd_demo_sk): rowcount = 5872500.0, 
cumulative cost = \{8.5390744E7 rows, 5.1424776E8 cpu, 0.0 io, 9.4473289728E10 
network, 3.04611648E7 memory}, id = 2687
00-09 Project(cs_bill_hdemo_sk=[$1]) : rowType = RecordType(ANY 
cs_bill_hdemo_sk): rowcount = 5872500.0, cumulative cost = \{7.9500604E7 rows, 
4.4371944E8 cpu, 0.0 io, 9.4473289728E10 network, 3.04421568E7 memory}, id = 
2682
00-11 HashJoin(condition=[=($0, $2)], joinType=[inner]) : rowType = 
RecordType(ANY cs_bill_cdemo_sk, ANY cs_bill_hdemo_sk, ANY cd_demo_sk): 
rowcount = 5872500.0, cumulative cost = \{7.3628104E7 rows, 4.3784694E8 cpu, 
0.0 io, 9.4473289728E10 network, 3.04421568E7 memory}, id = 2681
00-14 Project(cs_bill_cdemo_sk=[$0], cs_bill_hdemo_sk=[$1]) : rowType = 
RecordType(ANY cs_bill_cdemo_sk, ANY cs_bill_hdemo_sk): rowcount = 5872500.0, 
cumulative cost = \{6.3049644E7 rows, 3.5181846E8 cpu, 0.0 io, 9.4473289728E10 
network, 2.53712448E7 memory}, id = 2676
00-17 SelectionVectorRemover : rowType = RecordType(ANY cs_bill_cdemo_sk, ANY 
cs_bill_hdemo_sk, ANY cs_item_sk, ANY cs_quantity, ANY inv_item_sk, ANY 
inv_quantity_on_hand): rowcount = 5872500.0, cumulative cost = \{5.7177144E7 
rows, 3.4007346E8 cpu, 0.0 io, 9.4473289728E10 network, 2.53712448E7 memory}, 
id = 2675
00-19 Filter(condition=[<($5, $3)]) : rowType = RecordType(ANY 
cs_bill_cdemo_sk, ANY cs_bill_hdemo_sk, ANY cs_item_sk, ANY cs_quantity, ANY 
inv_item_sk, ANY inv_quantity_on_hand): rowcount = 5872500.0, cumulative cost = 
\{5.1304644E7 rows, 3.3420096E8 cpu, 0.0 io, 9.4473289728E10 network, 
2.53712448E7 memory}, id = 2674
00-21 Project(cs_bill_cdemo_sk=[$2], cs_bill_hdemo_sk=[$3], cs_item_sk=[$4], 
cs_quantity=[$5], inv_item_sk=[$0], inv_quantity_on_hand=[$1]) : rowType = 
RecordType(ANY cs_bill_cdemo_sk, ANY cs_bill_hdemo_sk, ANY cs_item_sk, ANY 
cs_quantity, ANY inv_item_sk, ANY inv_quantity_on_hand): rowcount = 1.1745E7, 
cumulative cost = \{3.9559644E7 rows, 2.6373096E8 cpu, 0.0 io, 9.4473289728E10 
network, 2.53712448E7 memory}, id = 2673
00-22 HashJoin(condition=[=($4, $0)], joinType=[inner]) : rowType = 
RecordType(ANY inv_item_sk, ANY inv_quantity_on_hand, ANY cs_bill_cdemo_sk, ANY 
cs_bill_hdemo_sk, ANY cs_item_sk, ANY cs_quantity): rowcount = 1.1745E7, 
cumulative cost = \{2.7814644E7 rows, 1.9326096E8 cpu, 0.0 io, 9.4473289728E10 
network, 2.53712448E7 memory}, id = 2672
00-24 Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath 
[path=/drill/testdata/tpcds_sf1/parquet/inventory]], 
selectionRoot=/drill/testdata/tpcds_sf1/parquet/inventory, numFiles=1, 
numRowGroups=1, 

[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543599#comment-16543599
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202444031
 
 

 ##
 File path: 
contrib/storage-hbase/src/test/java/org/apache/drill/hbase/BaseHBaseTest.java
 ##
 @@ -93,7 +93,7 @@ protected void runHBaseSQLVerifyCount(String sql, int 
expectedRowCount) throws E
   }
 
   private void printResultAndVerifyRowCount(List results, int 
expectedRowCount) throws SchemaChangeException {
 
 Review comment:
   Fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543596#comment-16543596
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202443689
 
 

 ##
 File path: 
contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/KafkaTestBase.java
 ##
 @@ -71,7 +71,7 @@ public void runKafkaSQLVerifyCount(String sql, int 
expectedRowCount) throws Exce
 
   public void printResultAndVerifyRowCount(List results, int 
expectedRowCount)
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543597#comment-16543597
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202443712
 
 

 ##
 File path: 
contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/MongoTestBase.java
 ##
 @@ -69,7 +69,7 @@ public void runMongoSQLVerifyCount(String sql, int 
expectedRowCount)
 
   public void printResultAndVerifyRowCount(List results,
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543595#comment-16543595
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202443154
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/QueryTestUtil.java
 ##
 @@ -100,36 +96,125 @@ public static String normalizeQuery(final String query) {
   }
 
   /**
-   * Execute a SQL query, and print the results.
+   * Execute a SQL query, and output the results.
*
* @param drillClient drill client to use
* @param type type of the query
* @param queryString query string
+   * @param print True to output results to stdout. False to log results.
+   *
* @return number of rows returned
* @throws Exception
*/
-  public static int testRunAndPrint(
-  final DrillClient drillClient, final QueryType type, final String 
queryString) throws Exception {
+  private static int testRunAndOutput(final DrillClient drillClient,
+  final QueryType type,
+  final String queryString,
+  final boolean print) throws Exception {
 final String query = normalizeQuery(queryString);
 DrillConfig config = drillClient.getConfig();
 AwaitableUserResultsListener resultListener =
-new AwaitableUserResultsListener(
-config.getBoolean(TEST_QUERY_PRINTING_SILENT) ?
-new SilentListener() :
-new PrintingResultsListener(config, Format.TSV, 
VectorUtil.DEFAULT_COLUMN_WIDTH)
-);
+  new AwaitableUserResultsListener(print ?
+  new PrintingResultsListener(config, Format.TSV, 
VectorUtil.DEFAULT_COLUMN_WIDTH):
+  new LoggingResultsListener(config, Format.TSV, 
VectorUtil.DEFAULT_COLUMN_WIDTH));
 drillClient.runQuery(type, query, resultListener);
 return resultListener.await();
   }
 
+  /**
+   * Execute one or more queries separated by semicolons, and output the 
results.
+   *
+   * @param drillClient drill client to use
+   * @param queryString the query string
+   * @param print True to output results to stdout. False to log results.
+   * @throws Exception
+   */
+  public static void testRunAndOutput(final DrillClient drillClient,
+  final String queryString,
+  final boolean print) throws Exception{
+final String query = normalizeQuery(queryString);
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543594#comment-16543594
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202443140
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/QueryTestUtil.java
 ##
 @@ -100,36 +96,125 @@ public static String normalizeQuery(final String query) {
   }
 
   /**
-   * Execute a SQL query, and print the results.
+   * Execute a SQL query, and output the results.
*
* @param drillClient drill client to use
* @param type type of the query
* @param queryString query string
+   * @param print True to output results to stdout. False to log results.
+   *
* @return number of rows returned
* @throws Exception
*/
-  public static int testRunAndPrint(
-  final DrillClient drillClient, final QueryType type, final String 
queryString) throws Exception {
+  private static int testRunAndOutput(final DrillClient drillClient,
+  final QueryType type,
+  final String queryString,
+  final boolean print) throws Exception {
 final String query = normalizeQuery(queryString);
 DrillConfig config = drillClient.getConfig();
 AwaitableUserResultsListener resultListener =
-new AwaitableUserResultsListener(
-config.getBoolean(TEST_QUERY_PRINTING_SILENT) ?
-new SilentListener() :
-new PrintingResultsListener(config, Format.TSV, 
VectorUtil.DEFAULT_COLUMN_WIDTH)
-);
+  new AwaitableUserResultsListener(print ?
+  new PrintingResultsListener(config, Format.TSV, 
VectorUtil.DEFAULT_COLUMN_WIDTH):
+  new LoggingResultsListener(config, Format.TSV, 
VectorUtil.DEFAULT_COLUMN_WIDTH));
 drillClient.runQuery(type, query, resultListener);
 return resultListener.await();
   }
 
+  /**
+   * Execute one or more queries separated by semicolons, and output the 
results.
+   *
+   * @param drillClient drill client to use
+   * @param queryString the query string
+   * @param print True to output results to stdout. False to log results.
+   * @throws Exception
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543574#comment-16543574
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202437164
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/QueryTestUtil.java
 ##
 @@ -47,9 +46,6 @@
  * Utilities useful for tests that issue SQL queries.
  */
 public class QueryTestUtil {
-
-  public static final String TEST_QUERY_PRINTING_SILENT = 
"drill.test.query.printing.silent";
 
 Review comment:
   Grepped for it and removed references in pom.xml and Testing.md


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6496) VectorUtil.showVectorAccessibleContent does not log vector content

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543573#comment-16543573
 ] 

ASF GitHub Bot commented on DRILL-6496:
---

ilooner commented on a change in pull request #1336: DRILL-6496: Added missing 
logging statement in VectorUtil.showVectorAccessibleContent(VectorAccessible 
va, int[] columnWidths)
URL: https://github.com/apache/drill/pull/1336#discussion_r202437046
 
 

 ##
 File path: 
contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/json/BaseJsonTest.java
 ##
 @@ -59,7 +59,7 @@ protected void runSQLAndVerifyCount(String sql, int 
expectedRowCount) throws Exc
   }
 
   private void printResultAndVerifyRowCount(List results, int 
expectedRowCount) throws SchemaChangeException {
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> VectorUtil.showVectorAccessibleContent does not log vector content
> --
>
> Key: DRILL-6496
> URL: https://issues.apache.org/jira/browse/DRILL-6496
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Arina Ielchiieva
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> {{VectorUtil.showVectorAccessibleContent(VectorAccessible va, int[] 
> columnWidths)}} does not log vector content. Introduced after DRILL-6438.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6588) System table columns incorrectly marked as non-nullable

2018-07-13 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6588:

Reviewer: Arina Ielchiieva  (was: Aman Sinha)

> System table columns incorrectly marked as non-nullable 
> 
>
> Key: DRILL-6588
> URL: https://issues.apache.org/jira/browse/DRILL-6588
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.13.0
>Reporter: Aman Sinha
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>
> System table columns can contain null values, but they are incorrectly marked 
> as non-nullable, as shown in the example table below:  
> {noformat}
> 0: jdbc:drill:drillbit=10.10.10.191> describe sys.boot;
> +-------------------+--------------------+--------------+
> |    COLUMN_NAME    |     DATA_TYPE      | IS_NULLABLE  |
> +-------------------+--------------------+--------------+
> | name              | CHARACTER VARYING  | NO           |
> | kind              | CHARACTER VARYING  | NO           |
> | accessibleScopes  | CHARACTER VARYING  | NO           |
> | optionScope       | CHARACTER VARYING  | NO           |
> | status            | CHARACTER VARYING  | NO           |
> | num_val           | BIGINT             | NO           |
> | string_val        | CHARACTER VARYING  | NO           |
> | bool_val          | BOOLEAN            | NO           |
> | float_val         | DOUBLE             | NO           |
> +-------------------+--------------------+--------------+{noformat}
>  
> Note that several column values are null: 
> {noformat}
> +-----------------------------------------------------+----------+------------------+-------------+--------+---------+------------+----------+-----------+
> |                          name                       |   kind   | accessibleScopes | optionScope | status | num_val | string_val | bool_val | float_val |
> +-----------------------------------------------------+----------+------------------+-------------+--------+---------+------------+----------+-----------+
> | drill.exec.options.exec.udf.enable_dynamic_support  | BOOLEAN  | BOOT             | BOOT        | BOOT   | null    | null       | true     | null      |
> +-----------------------------------------------------+----------+------------------+-------------+--------+---------+------------+----------+-----------+{noformat}
>  
> Because of the not-null metadata, predicates on these tables such as 
> `WHERE <column> IS NULL` incorrectly evaluate to FALSE. 
>  
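To illustrate why the incorrect nullability metadata matters: a hypothetical planner sketch (illustrative names, not Drill's actual code) showing how a NOT NULL declaration lets `col IS NULL` be constant-folded to FALSE, silently dropping every row when the metadata is wrong:

```java
// Hypothetical sketch: how a planner might constant-fold an IS NULL
// predicate using column nullability metadata. Names are illustrative.
public class IsNullFolding {

  enum Tri { TRUE, FALSE, UNKNOWN }

  // Fold "col IS NULL" given the column's declared nullability.
  // If metadata says NOT NULL, the predicate folds to FALSE for every
  // row -- which silently filters out all rows when the metadata lies.
  static Tri foldIsNull(boolean columnIsNullable) {
    return columnIsNullable ? Tri.UNKNOWN  // must be evaluated per row
                            : Tri.FALSE;   // constant-folded away
  }

  public static void main(String[] args) {
    // sys.boot columns were declared NOT NULL, so IS NULL folded to FALSE
    System.out.println(foldIsNull(false)); // FALSE
    // after DRILL-6588 the columns are nullable and checked at runtime
    System.out.println(foldIsNull(true));  // UNKNOWN
  }
}
```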





[jira] [Commented] (DRILL-6453) TPC-DS query 72 has regressed

2018-07-13 Thread Khurram Faraaz (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543559#comment-16543559
 ] 

Khurram Faraaz commented on DRILL-6453:
---

[~amansinha100] I am working on it, executing the simplified query with the 
first three joins starting from the leaf level in the plan.

> TPC-DS query 72 has regressed
> -
>
> Key: DRILL-6453
> URL: https://issues.apache.org/jira/browse/DRILL-6453
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.14.0
>Reporter: Khurram Faraaz
>Assignee: Boaz Ben-Zvi
>Priority: Blocker
> Fix For: 1.14.0
>
> Attachments: 24f75b18-014a-fb58-21d2-baeab5c3352c.sys.drill, 
> jstack_29173_June_10_2018.txt, jstack_29173_June_10_2018.txt, 
> jstack_29173_June_10_2018_b.txt, jstack_29173_June_10_2018_b.txt, 
> jstack_29173_June_10_2018_c.txt, jstack_29173_June_10_2018_c.txt, 
> jstack_29173_June_10_2018_d.txt, jstack_29173_June_10_2018_d.txt, 
> jstack_29173_June_10_2018_e.txt, jstack_29173_June_10_2018_e.txt
>
>
> TPC-DS query 72 seems to have regressed; the query profile for the case where 
> it was canceled after 2 hours on Drill 1.14.0 is attached here.
> {noformat}
> On, Drill 1.14.0-SNAPSHOT 
> commit : 931b43e (TPC-DS query 72 executed successfully on this commit, took 
> around 55 seconds to execute)
> SF1 parquet data on 4 nodes; 
> planner.memory.max_query_memory_per_node = 10737418240. 
> drill.exec.hashagg.fallback.enabled = true
> TPC-DS query 72 executed successfully & took 47 seconds to complete execution.
> {noformat}
> {noformat}
> TPC-DS data in the below run has date values stored as DATE datatype and not 
> VARCHAR type
> On, Drill 1.14.0-SNAPSHOT
> commit : 82e1a12
> SF1 parquet data on 4 nodes; 
> planner.memory.max_query_memory_per_node = 10737418240. 
> drill.exec.hashagg.fallback.enabled = true
> and
> alter system set `exec.hashjoin.num_partitions` = 1;
> TPC-DS query 72 executed for 2 hrs and 11 mins and did not complete; I had to 
> cancel it by stopping the Foreman drillbit.
> As a result, several minor fragments are reported to be in 
> CANCELLATION_REQUESTED state on the UI.
> {noformat}





[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543556#comment-16543556
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

kkhatua commented on a change in pull request #1379: DRILL-6591: Show Exception 
for failed queries submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#discussion_r202432457
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
 ##
 @@ -97,21 +97,22 @@ public QueryResult run(final WorkManager workManager, 
final WebUserConnection we
 
 //Fail if nearly out of heap space
 if (nearlyOutOfHeapSpace) {
+  UserException almostOutOfHeapException = UserException.resourceError()
+  .message("There is not enough heap memory to run this query using 
the web interface. ")
+  .addContext("Please try a query with fewer columns or with a filter 
or limit condition to limit the data returned. ")
+  .addContext("You can also try an ODBC/JDBC client. ")
+  .build(logger);
+  //Add event
   workManager.getBee().getForemanForQueryId(queryId)
-.addToEventQueue(QueryState.FAILED,
-UserException.resourceError(
-new Throwable(
-"There is not enough heap memory to run this query using 
the web interface. "
-+ "Please try a query with fewer columns or with a filter 
or limit condition to limit the data returned. "
-+ "You can also try an ODBC/JDBC client. "
-)
-)
-  .build(logger)
-);
+.addToEventQueue(QueryState.FAILED, almostOutOfHeapException);
+  //Return NearlyOutOfHeap exception
+  throw almostOutOfHeapException;
 
 Review comment:
   I originally added the exception to the event queue, but I'm not sure that 
will necessarily propagate the exception back. If it isn't thrown back, there 
is a possibility that before the event queue is handled, the resultSet will 
make it back to the WebServer, which will start constructing the 
JSONResponse object and run out of memory there.
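A minimal sketch of the race described above (hypothetical names, not Drill's actual classes): queueing a failure event is asynchronous, so the request thread may keep building the large response before the failure is observed, while throwing fails fast on the request thread:

```java
// Sketch of the race the reviewer describes. Names are illustrative.
import java.util.concurrent.ConcurrentLinkedQueue;

public class FailFastSketch {
  static final ConcurrentLinkedQueue<String> eventQueue =
      new ConcurrentLinkedQueue<>();

  // Queue-only: the caller continues and may exhaust heap building the
  // response before the FAILED event is ever processed.
  static String queueOnly() {
    eventQueue.add("FAILED");
    return buildResponse(); // still runs -- this is the race
  }

  // Queue and throw: the request thread stops immediately; the FAILED
  // event is still delivered to the queue for state bookkeeping.
  static String queueAndThrow() {
    eventQueue.add("FAILED");
    throw new IllegalStateException("not enough heap");
  }

  static String buildResponse() { return "large JSON response"; }

  public static void main(String[] args) {
    System.out.println(queueOnly()); // response is built despite the event
    try {
      queueAndThrow();
    } catch (IllegalStateException e) {
      System.out.println("failed fast: " + e.getMessage());
    }
  }
}
```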


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When query fails on Web UI result page no error is shown, only "No result 
> found." Screenshot attached. Drill should display error message instead.





[jira] [Commented] (DRILL-6588) System table columns incorrectly marked as non-nullable

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543542#comment-16543542
 ] 

ASF GitHub Bot commented on DRILL-6588:
---

kkhatua commented on issue #1371: DRILL-6588: Make Sys tables of nullable 
datatypes
URL: https://github.com/apache/drill/pull/1371#issuecomment-404909514
 
 
   Added a unit test that groups and counts the IS_NULLABLE values. 
Originally, the query would have returned only one group (`IS_NULLABLE = false`). 
After the patch, there are two groups, one for each boolean value.




> System table columns incorrectly marked as non-nullable 
> 
>
> Key: DRILL-6588
> URL: https://issues.apache.org/jira/browse/DRILL-6588
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.13.0
>Reporter: Aman Sinha
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>
> System table columns can contain null values, but they are incorrectly marked 
> as non-nullable, as shown in the example table below:  
> {noformat}
> 0: jdbc:drill:drillbit=10.10.10.191> describe sys.boot;
> +-------------------+--------------------+--------------+
> |    COLUMN_NAME    |     DATA_TYPE      | IS_NULLABLE  |
> +-------------------+--------------------+--------------+
> | name              | CHARACTER VARYING  | NO           |
> | kind              | CHARACTER VARYING  | NO           |
> | accessibleScopes  | CHARACTER VARYING  | NO           |
> | optionScope       | CHARACTER VARYING  | NO           |
> | status            | CHARACTER VARYING  | NO           |
> | num_val           | BIGINT             | NO           |
> | string_val        | CHARACTER VARYING  | NO           |
> | bool_val          | BOOLEAN            | NO           |
> | float_val         | DOUBLE             | NO           |
> +-------------------+--------------------+--------------+{noformat}
>  
> Note that several columns contain nulls: 
> {noformat}
> +-----------------------------------------------------+---------+------------------+-------------+--------+---------+------------+----------+-----------+
> | name                                                | kind    | accessibleScopes | optionScope | status | num_val | string_val | bool_val | float_val |
> +-----------------------------------------------------+---------+------------------+-------------+--------+---------+------------+----------+-----------+
> | drill.exec.options.exec.udf.enable_dynamic_support | BOOLEAN | BOOT             | BOOT        | BOOT   | null    | null       | true     | null      |
> +-----------------------------------------------------+---------+------------------+-------------+--------+---------+------------+----------+-----------+{noformat}
>  
> Because of the not-null metadata, predicates on these tables such as 
> `WHERE <column> IS NULL` incorrectly evaluate to FALSE. 
>  





[jira] [Commented] (DRILL-6606) Hash Join returns incorrect data types when joining subqueries with limit 0

2018-07-13 Thread Aman Sinha (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543541#comment-16543541
 ] 

Aman Sinha commented on DRILL-6606:
---

Agree that a subquery with a filter producing 0 rows is common and we should 
address the issue. I was mainly referring to the Tableau-generated LIMIT 0 
queries. 

> Hash Join returns incorrect data types when joining subqueries with limit 0
> ---
>
> Key: DRILL-6606
> URL: https://issues.apache.org/jira/browse/DRILL-6606
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Bohdan Kazydub
>Assignee: Timothy Farkas
>Priority: Blocker
> Fix For: 1.14.0
>
>
> PreparedStatement for query
> {code:sql}
> SELECT l.l_quantity, l.l_shipdate, o.o_custkey
> FROM (SELECT * FROM cp.`tpch/lineitem.parquet` LIMIT 0) l
>     JOIN (SELECT * FROM cp.`tpch/orders.parquet` LIMIT 0) o 
>     ON l.l_orderkey = o.o_orderkey
> LIMIT 0
> {code}
>  is created with the wrong types (nullable INTEGER) for all selected columns, 
> no matter what their actual type is. This behavior reproduces with hash join 
> only and is very likely caused by DRILL-6027, as the query worked fine 
> before that feature was implemented.
> To reproduce the problem, put the aforementioned query into the 
> TestPreparedStatementProvider#joinOrderByQuery() test method.





[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543517#comment-16543517
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

arina-ielchiieva commented on a change in pull request #1379: DRILL-6591: Show 
Exception for failed queries submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#discussion_r202427605
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
 ##
 @@ -97,21 +97,22 @@ public QueryResult run(final WorkManager workManager, 
final WebUserConnection we
 
 //Fail if nearly out of heap space
 if (nearlyOutOfHeapSpace) {
+  UserException almostOutOfHeapException = UserException.resourceError()
+  .message("There is not enough heap memory to run this query using 
the web interface. ")
+  .addContext("Please try a query with fewer columns or with a filter 
or limit condition to limit the data returned. ")
+  .addContext("You can also try an ODBC/JDBC client. ")
+  .build(logger);
+  //Add event
   workManager.getBee().getForemanForQueryId(queryId)
-.addToEventQueue(QueryState.FAILED,
-UserException.resourceError(
-new Throwable(
-"There is not enough heap memory to run this query using 
the web interface. "
-+ "Please try a query with fewer columns or with a filter 
or limit condition to limit the data returned. "
-+ "You can also try an ODBC/JDBC client. "
-)
-)
-  .build(logger)
-);
+.addToEventQueue(QueryState.FAILED, almostOutOfHeapException);
+  //Return NearlyOutOfHeap exception
+  throw almostOutOfHeapException;
 
 Review comment:
   We did not throw the exception before; why are we throwing it now?




> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When query fails on Web UI result page no error is shown, only "No result 
> found." Screenshot attached. Drill should display error message instead.





[jira] [Commented] (DRILL-6517) IllegalStateException: Record count not set for this vector container

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543516#comment-16543516
 ] 

ASF GitHub Bot commented on DRILL-6517:
---

ilooner commented on a change in pull request #1373: DRILL-6517: Hash-Join: If 
not OK, exit early from prefetchFirstBatchFromBothSides
URL: https://github.com/apache/drill/pull/1373#discussion_r202425486
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinOutcome.java
 ##
 @@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.physical.impl.join;
+
+import com.google.common.collect.Lists;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.drill.categories.OperatorTest;
+import org.apache.drill.common.expression.FieldReference;
+import org.apache.drill.common.logical.data.JoinCondition;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.memory.BufferAllocator;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.physical.config.HashJoinPOP;
+import org.apache.drill.exec.physical.impl.MockRecordBatch;
+import org.apache.drill.exec.physical.unit.PhysicalOpUnitTestBase;
+import org.apache.drill.exec.record.BatchSchema;
+import org.apache.drill.exec.record.RecordBatch;
+import org.apache.drill.exec.record.VectorContainer;
+import org.apache.drill.exec.record.metadata.TupleSchema;
+import org.apache.drill.exec.store.mock.MockStorePOP;
+import org.apache.drill.test.rowSet.RowSet;
+import org.apache.drill.test.rowSet.schema.SchemaBuilder;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.util.ArrayList;
+import java.util.List;
+
+// import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+/**
+ *  Unit tests of the Hash Join getting various outcomes as input
+ *  with uninitialized vector containers
+ */
+@Category(OperatorTest.class)
+public class TestHashJoinOutcome extends PhysicalOpUnitTestBase {
+
 
 Review comment:
   Thanks for adding tests.




> IllegalStateException: Record count not set for this vector container
> -
>
> Key: DRILL-6517
> URL: https://issues.apache.org/jira/browse/DRILL-6517
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.14.0
>Reporter: Khurram Faraaz
>Assignee: Boaz Ben-Zvi
>Priority: Critical
> Fix For: 1.14.0
>
> Attachments: 24d7b377-7589-7928-f34f-57d02061acef.sys.drill
>
>
> The TPC-DS query was canceled after 2 hrs and 47 mins, and we see an 
> IllegalStateException: Record count not set for this vector container, in 
> drillbit.log.
> Steps to reproduce the problem and the query profile 
> (24d7b377-7589-7928-f34f-57d02061acef) are attached here.
> {noformat}
> In drill-env.sh set max direct memory to 12G on all 4 nodes in cluster
> export DRILL_MAX_DIRECT_MEMORY=${DRILL_MAX_DIRECT_MEMORY:-"12G"}
> and set these options from sqlline,
> alter system set `planner.memory.max_query_memory_per_node` = 10737418240;
> alter system set `drill.exec.hashagg.fallback.enabled` = true;
> To run the query (replace IP-ADDRESS with your foreman node's IP address)
> cd /opt/mapr/drill/drill-1.14.0/bin
> ./sqlline -u 
> "jdbc:drill:schema=dfs.tpcds_sf1_parquet_views;drillbit=<IP-ADDRESS>" -f 
> /root/query72.sql
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2018-06-18 20:08:51,912 [24d7b377-7589-7928-f34f-57d02061acef:frag:4:49] 
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: 
> 

[jira] [Commented] (DRILL-6517) IllegalStateException: Record count not set for this vector container

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543515#comment-16543515
 ] 

ASF GitHub Bot commented on DRILL-6517:
---

ilooner commented on a change in pull request #1373: DRILL-6517: Hash-Join: If 
not OK, exit early from prefetchFirstBatchFromBothSides
URL: https://github.com/apache/drill/pull/1373#discussion_r202423357
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java
 ##
 @@ -289,7 +283,13 @@ private IterOutcome sniffNonEmptyBatch(int inputIndex, 
RecordBatch recordBatch)
   if (recordBatch.getRecordCount() == 0) {
 continue;
   }
-  // We got a non empty batch
+  // We got a non empty batch; update the memory manager
+  final boolean isBuildSide = inputIndex == 1;
+  final int side = isBuildSide ? RIGHT_INDEX : LEFT_INDEX;
 
 Review comment:
   Aren't 0/1 and LEFT_INDEX/RIGHT_INDEX the same thing? Similarly, isn't 
**side** the same thing as **inputIndex**? Could we make these consistent to 
avoid confusion? Or, if there is a good reason for using different names, 
could you add a comment explaining the difference?




> IllegalStateException: Record count not set for this vector container
> -
>
> Key: DRILL-6517
> URL: https://issues.apache.org/jira/browse/DRILL-6517
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.14.0
>Reporter: Khurram Faraaz
>Assignee: Boaz Ben-Zvi
>Priority: Critical
> Fix For: 1.14.0
>
> Attachments: 24d7b377-7589-7928-f34f-57d02061acef.sys.drill
>
>
> The TPC-DS query was canceled after 2 hrs and 47 mins, and we see an 
> IllegalStateException: Record count not set for this vector container, in 
> drillbit.log.
> Steps to reproduce the problem and the query profile 
> (24d7b377-7589-7928-f34f-57d02061acef) are attached here.
> {noformat}
> In drill-env.sh set max direct memory to 12G on all 4 nodes in cluster
> export DRILL_MAX_DIRECT_MEMORY=${DRILL_MAX_DIRECT_MEMORY:-"12G"}
> and set these options from sqlline,
> alter system set `planner.memory.max_query_memory_per_node` = 10737418240;
> alter system set `drill.exec.hashagg.fallback.enabled` = true;
> To run the query (replace IP-ADDRESS with your foreman node's IP address)
> cd /opt/mapr/drill/drill-1.14.0/bin
> ./sqlline -u 
> "jdbc:drill:schema=dfs.tpcds_sf1_parquet_views;drillbit=<IP-ADDRESS>" -f 
> /root/query72.sql
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2018-06-18 20:08:51,912 [24d7b377-7589-7928-f34f-57d02061acef:frag:4:49] 
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: 
> IllegalStateException: Record count not set for this vector container
> Fragment 4:49
> [Error Id: 73177a1c-f7aa-4c9e-99e1-d6e1280e3f27 on qa102-45.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> IllegalStateException: Record count not set for this vector container
> Fragment 4:49
> [Error Id: 73177a1c-f7aa-4c9e-99e1-d6e1280e3f27 on qa102-45.qa.lab:31010]
>  at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[drill-common-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:361)
>  [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:216)
>  [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:327)
>  [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
>  at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_161]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_161]
>  at java.lang.Thread.run(Thread.java:748) [na:1.8.0_161]
> Caused by: java.lang.IllegalStateException: Record count not set for this 
> vector container
>  at com.google.common.base.Preconditions.checkState(Preconditions.java:173) 
> ~[guava-18.0.jar:na]
>  at 
> org.apache.drill.exec.record.VectorContainer.getRecordCount(VectorContainer.java:394)
>  

[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543507#comment-16543507
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

kkhatua commented on issue #1379: DRILL-6591: Show Exception for failed queries 
submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#issuecomment-404905251
 
 
   Done with the change




> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When query fails on Web UI result page no error is shown, only "No result 
> found." Screenshot attached. Drill should display error message instead.





[jira] [Commented] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543499#comment-16543499
 ] 

ASF GitHub Bot commented on DRILL-5495:
---

arina-ielchiieva closed pull request #1382: DRILL-5495: convert_from function 
on top of int96 data results in Arr…
URL: https://github.com/apache/drill/pull/1382
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/NullableFixedByteAlignedReaders.java
 
b/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/NullableFixedByteAlignedReaders.java
index 6a09bd64259..89aa8083fb2 100644
--- 
a/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/NullableFixedByteAlignedReaders.java
+++ 
b/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/NullableFixedByteAlignedReaders.java
@@ -81,17 +81,16 @@ protected void readField(long recordsToReadInThisPass) {
   if (usingDictionary) {
 NullableVarBinaryVector.Mutator mutator =  valueVec.getMutator();
 Binary currDictValToWrite;
-for (int i = 0; i < recordsReadInThisIteration; i++){
+for (int i = 0; i < recordsToReadInThisPass; i++) {
   currDictValToWrite = pageReader.dictionaryValueReader.readBytes();
   ByteBuffer buf = currDictValToWrite.toByteBuffer();
-  mutator.setSafe(valuesReadInCurrentPass + i, buf, buf.position(),
-  currDictValToWrite.length());
+  mutator.setSafe(valuesReadInCurrentPass + i, buf, buf.position(), 
currDictValToWrite.length());
 }
 // Set the write Index. The next page that gets read might be a page 
that does not use dictionary encoding
 // and we will go into the else condition below. The readField method 
of the parent class requires the
 // writer index to be set correctly.
 int writerIndex = castedBaseVector.getBuffer().writerIndex();
-castedBaseVector.getBuffer().setIndex(0, writerIndex + 
(int)readLength);
+castedBaseVector.getBuffer().setIndex(0, writerIndex + (int) 
readLength);
   } else {
 super.readField(recordsToReadInThisPass);
 // TODO - replace this with fixed binary type in drill


 




> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>
> git.commit.id.abbrev=1e0a14c
> The data set used is generated from spark and contains a timestamp stored as 
> int96
> {code}
> [root@qa-node190 framework]# /home/parquet-tools-1.5.1-SNAPSHOT/parquet-meta 
> /home/framework/framework/resources/Datasources/parquet_date/spark_generated/d4/part-r-0-08c5c621-62ea-4fee-b690-11576eddc39c.snappy.parquet
>  
> creator: parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d) 
> extra:   org.apache.spark.sql.parquet.row.metadata = 
> {"type":"struct","fields":[{"name":"a","type":"integer","nullable":true,"metadata":{}},{"name":"b","type":"strin
>  [more]...
> file schema: spark_schema 
> ---
> a:   OPTIONAL INT32 R:0 D:1
> b:   OPTIONAL BINARY O:UTF8 R:0 D:1
> c:   OPTIONAL INT32 O:DATE R:0 D:1
> d:   OPTIONAL INT96 R:0 D:1
> row group 1: RC:1 TS:8661 
> ---
> a:INT32 SNAPPY DO:0 FPO:4 SZ:2367/2571/1.09 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> b:BINARY SNAPPY DO:0 FPO:2371 SZ:2329/2843/1.22 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> c: 

[jira] [Commented] (DRILL-5796) Filter pruning for multi rowgroup parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543492#comment-16543492
 ] 

ASF GitHub Bot commented on DRILL-5796:
---

vrozov commented on a change in pull request #1298: DRILL-5796: Filter pruning 
for multi rowgroup parquet file
URL: https://github.com/apache/drill/pull/1298#discussion_r202422357
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/stat/ParquetIsPredicate.java
 ##
 @@ -124,8 +124,7 @@ private static LogicalExpression 
createIsTruePredicate(LogicalExpression expr) {
*/
   private static LogicalExpression createIsFalsePredicate(LogicalExpression 
expr) {
 return new ParquetIsPredicate(expr, (exprStat, evaluator) ->
-//if min value is not false or if there are all nulls  -> canDrop
-isAllNulls(exprStat, evaluator.getRowCount()) || 
exprStat.hasNonNullValue() && ((BooleanStatistics) exprStat).getMin()
+  exprStat.hasNonNullValue() && ((BooleanStatistics) exprStat).getMin() || 
isAllNulls(exprStat, evaluator.getRowCount()) ? RowsMatch.NONE : 
checkNull(exprStat)
 
 Review comment:
   @jbimbert
   - If all rows are null, what are the values for min and max, should not 
`hasNonNullValue` be false?
   - Please point me to the specific test that validates that condition.
   - I would prefer to see a unit test, not an integration test. For this 
particular case, the integration test validates the results of a query, but 
it does not validate what the result of 
`((ParquetFilterPredicate)createIsFalsePredicate(expr)).canDrop(evaluator)` is.
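For reference, a minimal sketch (illustrative names, not Drill's API) of the `IS FALSE` pruning logic under review: a row group can be dropped when every non-null value is TRUE (boolean min is true) or when every value is null, since no row can satisfy `col IS FALSE` in either case:

```java
// Sketch of row-group pruning for a "col IS FALSE" predicate using
// boolean min/max statistics. Names are illustrative, not Drill's code.
public class IsFalsePruning {

  // Returns true when the whole row group can be skipped: either every
  // non-null value is TRUE (min == true), or every value is null.
  static boolean canDrop(boolean hasNonNullValue, boolean min,
                         long nullCount, long rowCount) {
    boolean allNulls = nullCount == rowCount;
    return (hasNonNullValue && min) || allNulls;
  }

  public static void main(String[] args) {
    System.out.println(canDrop(true, true, 0, 100));     // all TRUE -> drop
    System.out.println(canDrop(true, false, 0, 100));    // some FALSE -> keep
    System.out.println(canDrop(false, false, 100, 100)); // all nulls -> drop
  }
}
```

This also shows the reviewer's first point: when all rows are null, `hasNonNullValue` should be false, so the all-nulls check is what makes the row group droppable.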




> Filter pruning for multi rowgroup parquet file
> --
>
> Key: DRILL-5796
> URL: https://issues.apache.org/jira/browse/DRILL-5796
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Reporter: Damien Profeta
>Assignee: Jean-Blas IMBERT
>Priority: Major
> Fix For: 1.14.0
>
>
> Today, filter pruning uses the file name as the partitioning key. This means 
> a partition can be removed only if the whole file belongs to the same 
> partition. With Parquet, the filter could be pruned at the row-group level 
> when the row groups partition the dataset, making the unit of work the row 
> group rather than the file.





[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2018-07-13 Thread Vitalii Diravka (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543482#comment-16543482
 ] 

Vitalii Diravka commented on DRILL-4337:


I have reproduced the issue only with the dataset from DRILL-5495. The issue 
is solved in the context of that Jira.

> Drill fails to read INT96 fields from hive generated parquet files
> --
>
> Key: DRILL-4337
> URL: https://issues.apache.org/jira/browse/DRILL-4337
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Blocker
> Fix For: 1.14.0
>
> Attachments: hive1_fewtypes_null.parquet
>
>
> git.commit.id.abbrev=576271d
> Cluster : 2 nodes running MaprFS 4.1
> The data file used in the below table is generated from hive. Below is output 
> from running the same query multiple times. 
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> Error: SYSTEM ERROR: NegativeArraySizeException
> Fragment 0:0
> [Error Id: 5517e983-ccae-4c96-b09c-30f331919e56 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> Error: SYSTEM ERROR: IllegalArgumentException: Reading past RLE/BitPacking 
> stream.
> Fragment 0:0
> [Error Id: 94ed5996-d2ac-438d-b460-c2d2e41bdcc3 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 0:0
> [Error Id: 41dca093-571e-49e5-a2ab-fd69210b143d on qa-node191.qa.lab:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> ++
> | timestamp_col  |
> ++
> | null   |
> | [B@7c766115|
> | [B@3fdfe989|
> | null   |
> | [B@55d4222 |
> | [B@2da0c8ee|
> | [B@16e798a9|
> | [B@3ed78afe|
> | [B@38e649ed|
> | [B@16ff83ca|
> | [B@61254e91|
> | [B@5849436a|
> | [B@31e9116e|
> | [B@3c77665b|
> | [B@42e0ff60|
> | [B@419e19ed|
> | [B@72b83842|
> | [B@1c75afe5|
> | [B@726ef1fb|
> | [B@51d0d06e|
> | [B@64240fb8|
> +
> {code}
> Attached the log, hive ddl used to generate the parquet file and the parquet 
> file itself





[jira] [Commented] (DRILL-4742) Using convert_from timestamp_impala gives a random error

2018-07-13 Thread Vitalii Diravka (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543480#comment-16543480
 ] 

Vitalii Diravka commented on DRILL-4742:


I have reproduced the issue only with the dataset from DRILL-5495. The issue is 
solved in the context of that Jira.

> Using convert_from timestamp_impala gives a random error
> 
>
> Key: DRILL-4742
> URL: https://issues.apache.org/jira/browse/DRILL-4742
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Critical
> Fix For: 1.14.0
>
> Attachments: error.txt, temp.parquet
>
>
> Drill Commit # fbdd20e54351879200184b478c2a32f238bf2176
> The following query randomly generates the below error. 
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> dfs.`/drill/testdata/temp.parquet`;
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 0:0
> [Error Id: 9fe53a95-c4ae-424d-8c6d-489abab2d2ca on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> The underlying parquet file is generated using hive. Below is the metadata 
> information
> {code}
> /root/parquet-tools-1.5.1-SNAPSHOT/parquet-meta temp.parquet 
> creator:  parquet-mr version 1.6.0 
> file schema:  hive_schema 
> 
> voter_id: OPTIONAL INT32 R:0 D:1
> name: OPTIONAL BINARY O:UTF8 R:0 D:1
> age:  OPTIONAL INT32 R:0 D:1
> registration: OPTIONAL BINARY O:UTF8 R:0 D:1
> contributions:OPTIONAL FLOAT R:0 D:1
> voterzone:OPTIONAL INT32 R:0 D:1
> create_timestamp: OPTIONAL INT96 R:0 D:1
> create_date:  OPTIONAL INT32 O:DATE R:0 D:1
> row group 1:  RC:200 TS:9902 
> 
> voter_id:  INT32 UNCOMPRESSED DO:0 FPO:4 SZ:843/843/1.00 VC:200 
> ENC:RLE,BIT_PACKED,PLAIN
> name:  BINARY UNCOMPRESSED DO:0 FPO:847 SZ:3214/3214/1.00 VC:200 
> ENC:PLAIN_DICTIONARY,RLE,BIT_PACKED
> age:   INT32 UNCOMPRESSED DO:0 FPO:4061 SZ:438/438/1.00 VC:200 
> ENC:PLAIN_DICTIONARY,RLE,BIT_PACKED
> registration:  BINARY UNCOMPRESSED DO:0 FPO:4499 SZ:241/241/1.00 VC:200 
> ENC:PLAIN_DICTIONARY,RLE,BIT_PACKED
> contributions: FLOAT UNCOMPRESSED DO:0 FPO:4740 SZ:843/843/1.00 VC:200 
> ENC:RLE,BIT_PACKED,PLAIN
> voterzone: INT32 UNCOMPRESSED DO:0 FPO:5583 SZ:843/843/1.00 VC:200 
> ENC:RLE,BIT_PACKED,PLAIN
> create_timestamp:  INT96 UNCOMPRESSED DO:0 FPO:6426 SZ:2642/2642/1.00 VC:200 
> ENC:PLAIN_DICTIONARY,RLE,BIT_PACKED
> create_date:   INT32 UNCOMPRESSED DO:0 FPO:9068 SZ:838/838/1.00 VC:200 
> ENC:RLE,BIT_PACKED,PLAIN
> {code}
> I attached the log file and the data file





[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543478#comment-16543478
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

kkhatua commented on a change in pull request #1379: DRILL-6591: Show Exception 
for failed queries submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#discussion_r202420484
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
 ##
 @@ -97,23 +99,29 @@ public QueryResult run(final WorkManager workManager, 
final WebUserConnection we
 
 //Fail if nearly out of heap space
 if (nearlyOutOfHeapSpace) {
+  UserException almostOutOfHeapException = UserException.resourceError(
+  new Throwable(
 
 Review comment:
  I think I just read the available methods and applied one of them. I will use 
addContext to build it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When a query fails on the Web UI, the result page shows no error, only "No 
> result found." Screenshot attached. Drill should display the error message instead.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543468#comment-16543468
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

vdiravka commented on a change in pull request #1296: DRILL-5365: Prevent 
plugin config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202386993
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java
 ##
 @@ -54,6 +54,9 @@
  * references to the FileSystem configuration and path management.
  */
 public class FileSystemPlugin extends AbstractStoragePlugin {
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(FileSystemPlugin.class);
+
+  public static final String FS_DEFAULT_NAME = "fs.default.name";
 
 Review comment:
  It makes sense. Possibly we should find all `fs.default.name` properties in 
the Drill project and replace them.
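For illustration, a minimal sketch of the replacement being discussed — preferring Hadoop's current key `fs.defaultFS` (the value of `FileSystem.FS_DEFAULT_NAME_KEY`) over the deprecated `fs.default.name` — using a plain map as a hypothetical stand-in for a Hadoop `Configuration`:

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultFsKeyDemo {
    // Prefer the modern key; fall back to the deprecated one only when unset.
    static String defaultFs(Map<String, String> conf) {
        String v = conf.get("fs.defaultFS");               // preferred key
        return v != null ? v : conf.get("fs.default.name"); // deprecated fallback
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.default.name", "file:///");           // only the old key is set
        System.out.println(defaultFs(conf));               // prints file:///
    }
}
```

Hadoop itself handles this via its deprecated-property mapping; the sketch only shows the lookup order a manual replacement would preserve.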




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543471#comment-16543471
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

vdiravka commented on a change in pull request #1296: DRILL-5365: Prevent 
plugin config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202418755
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java
 ##
 @@ -76,6 +79,16 @@ public FileSystemPlugin(FileSystemConfig config, 
DrillbitContext context, String
   fsConf.set(s, config.config.get(s));
 }
   }
+
+  logger.info("Original FileSystem default fs configuration {} {}",
+fsConf.getTrimmed(FS_DEFAULT_NAME),
+fsConf.getTrimmed(FileSystem.FS_DEFAULT_NAME_KEY));
+
+  if (logger.isInfoEnabled()) {
+logger.info("Who made me? {}", new RuntimeException("Who made me?"));
 
 Review comment:
  Is this the proper message, or did you just forget to delete it?




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543470#comment-16543470
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

vdiravka commented on a change in pull request #1296: DRILL-5365: Prevent 
plugin config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202385699
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
 ##
 @@ -179,9 +182,16 @@ public FSDataInputStream open(Path f) throws IOException {
 return new DrillFSDataInputStream(underlyingFs.open(f), operatorStats);
   }
 
+  /**
+   * This method should never be used on {@link DrillFileSystem} since {@link 
DrillFileSystem} is immutable.
+   * @param name
 
 Review comment:
  Please fill in the Javadoc parameter descriptions.




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543469#comment-16543469
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

vdiravka commented on a change in pull request #1296: DRILL-5365: Prevent 
plugin config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202386359
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java
 ##
 @@ -80,12 +80,12 @@ public FileSystemPlugin(FileSystemConfig config, 
DrillbitContext context, String
 }
   }
 
-  logger.trace("Original FileSystem default fs configuration {} {}",
+  logger.info("Original FileSystem default fs configuration {} {}",
 fsConf.getTrimmed(FS_DEFAULT_NAME),
 fsConf.getTrimmed(FileSystem.FS_DEFAULT_NAME_KEY));
 
-  if (logger.isTraceEnabled()) {
-logger.trace("Who made me? {}", new RuntimeException("Who made me?"));
+  if (logger.isInfoEnabled()) {
+logger.info("Who made me? {}", new RuntimeException("Who made me?"));
 
 Review comment:
  The same question as above.




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543467#comment-16543467
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

vdiravka commented on a change in pull request #1296: DRILL-5365: Prevent 
plugin config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202418392
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
 ##
 @@ -83,28 +87,63 @@
   private final OperatorStats operatorStats;
   private final CompressionCodecFactory codecFactory;
 
+  private boolean initialized = false;
+
   public DrillFileSystem(Configuration fsConf) throws IOException {
 this(fsConf, null);
   }
 
   public DrillFileSystem(Configuration fsConf, OperatorStats operatorStats) 
throws IOException {
+Preconditions.checkNotNull(fsConf);
+
+// Configuration objects are mutable, and the underlying FileSystem object 
may directly use a passed in Configuration.
+// In order to avoid scenarios where a Configuration can change after a 
DrillFileSystem is created, we make a copy
+// of the Configuration.
+fsConf = new Configuration(fsConf);
 this.underlyingFs = FileSystem.get(fsConf);
 
 Review comment:
  Agreed. Just leave a TODO here with a note and the Jira number.
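The defensive-copy pattern in the diff above can be sketched as follows; `Config` and `ImmutableFs` here are hypothetical stand-ins for Hadoop's mutable `Configuration` and for `DrillFileSystem`:

```java
import java.util.HashMap;
import java.util.Map;

public class DefensiveCopyDemo {
    // Hypothetical stand-in for a mutable Hadoop Configuration.
    static class Config {
        private final Map<String, String> props = new HashMap<>();
        Config() {}
        Config(Config other) { props.putAll(other.props); } // copy constructor
        void set(String k, String v) { props.put(k, v); }
        String get(String k) { return props.get(k); }
    }

    // Hypothetical stand-in for DrillFileSystem: snapshots the config on construction.
    static class ImmutableFs {
        private final Config conf;
        ImmutableFs(Config conf) {
            // Defensive copy: later mutations by the caller are not visible here.
            this.conf = new Config(conf);
        }
        String defaultFs() { return conf.get("fs.default.name"); }
    }

    public static void main(String[] args) {
        Config c = new Config();
        c.set("fs.default.name", "file:///");
        ImmutableFs fs = new ImmutableFs(c);
        c.set("fs.default.name", "hdfs://other:9000"); // mutate after construction
        System.out.println(fs.defaultFs());            // prints file:///
    }
}
```

The copy costs one map clone per construction but removes a whole class of "plugin config changed the default fs under us" bugs, which is the trade-off the PR makes.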




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-5365) FileNotFoundException when reading a parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543466#comment-16543466
 ] 

ASF GitHub Bot commented on DRILL-5365:
---

vdiravka commented on a change in pull request #1296: DRILL-5365: Prevent 
plugin config from changing default fs. Make DrillFileSystem Immutable.
URL: https://github.com/apache/drill/pull/1296#discussion_r202386141
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
 ##
 @@ -102,13 +102,13 @@ public DrillFileSystem(Configuration fsConf, 
OperatorStats operatorStats) throws
 fsConf = new Configuration(fsConf);
 this.underlyingFs = FileSystem.get(fsConf);
 
-logger.trace("Configuration for the DrillFileSystem {} {}, underlyingFs: 
{}",
+logger.info("Configuration for the DrillFileSystem {} {}, underlyingFs: 
{}",
   fsConf.getTrimmed(FS_DEFAULT_NAME),
   fsConf.getTrimmed(FS_DEFAULT_NAME_KEY),
   this.underlyingFs.getUri());
 
-if (logger.isTraceEnabled()) {
-  logger.trace("Who made me? {}", new RuntimeException("Who made me?"));
+if (logger.isInfoEnabled()) {
+  logger.info("Who made me? {}", new RuntimeException("Who made me?"));
 
 Review comment:
  Is this the proper message, or did you just forget to delete it?




> FileNotFoundException when reading a parquet file
> -
>
> Key: DRILL-5365
> URL: https://issues.apache.org/jira/browse/DRILL-5365
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.10.0
>Reporter: Chun Chang
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> The parquet file is generated through the following CTAS.
> To reproduce the issue: 1) two or more nodes cluster; 2) enable 
> impersonation; 3) set "fs.default.name": "file:///" in hive storage plugin; 
> 4) restart drillbits; 5) as a regular user, on node A, drop the table/file; 
> 6) ctas from a large enough hive table as source to recreate the table/file; 
> 7) query the table from node A should work; 8) query from node B as same user 
> should reproduce the issue.





[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543457#comment-16543457
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

kkhatua commented on a change in pull request #1379: DRILL-6591: Show Exception 
for failed queries submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#discussion_r202417597
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
 ##
 @@ -83,12 +84,13 @@ public QueryResult run(final WorkManager workManager, 
final WebUserConnection we
 float usagePercent = getHeapUsage();
 
 // Wait until the query execution is complete or there is error submitting 
the query
-logger.debug("Wait until the query execution is complete or there is error 
submitting the query");
+if (logger.isDebugEnabled()) {
+  logger.debug("Wait until the query execution is complete or there is 
error submitting the query");
+}
 do {
   try {
-isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
/*periodically timeout to check heap*/
-  } catch (Exception e) { }
-
+isComplete = 
webUserConnection.await/*timedWait*/(TimeUnit.SECONDS.toMillis(1)); 
//periodically timeout 1sec to check heap
 
 Review comment:
   My bad. That was actually a method I introduced before and reverted in the 
last commit.
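The `isDebugEnabled()` guard shown in the diff above matters when building the log message is itself costly. A minimal self-contained sketch (the `Logger` here is a hypothetical stand-in for an slf4j logger with debug disabled):

```java
import java.util.ArrayList;
import java.util.List;

public class LogGuardDemo {
    // Hypothetical stand-in for an slf4j Logger.
    static class Logger {
        private final boolean debugEnabled;
        final List<String> lines = new ArrayList<>();
        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        boolean isDebugEnabled() { return debugEnabled; }
        void debug(String msg) { if (debugEnabled) lines.add(msg); }
    }

    static int expensiveCalls = 0;

    // Simulates an argument that is costly to build (concatenation, stack traces, ...).
    static String expensiveContext() { expensiveCalls++; return "ctx"; }

    public static void main(String[] args) {
        Logger logger = new Logger(false);

        logger.debug("state=" + expensiveContext()); // unguarded: cost paid even with debug off
        if (logger.isDebugEnabled()) {               // guarded: argument never built
            logger.debug("state=" + expensiveContext());
        }
        System.out.println(expensiveCalls); // prints 1: only the unguarded call built the argument
    }
}
```

Note that slf4j's parameterized form (`logger.debug("state={}", arg)`) defers formatting but still evaluates `arg`, which is why the explicit guard can still pay off for expensive arguments.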




> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When a query fails on the Web UI, the result page shows no error, only "No 
> result found." Screenshot attached. Drill should display the error message instead.





[jira] [Commented] (DRILL-6591) When query fails on Web UI, result page does not show any error

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543458#comment-16543458
 ] 

ASF GitHub Bot commented on DRILL-6591:
---

kkhatua commented on a change in pull request #1379: DRILL-6591: Show Exception 
for failed queries submitted in WebUI
URL: https://github.com/apache/drill/pull/1379#discussion_r202417667
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
 ##
 @@ -83,12 +84,13 @@ public QueryResult run(final WorkManager workManager, 
final WebUserConnection we
 float usagePercent = getHeapUsage();
 
 // Wait until the query execution is complete or there is error submitting 
the query
-logger.debug("Wait until the query execution is complete or there is error 
submitting the query");
+if (logger.isDebugEnabled()) {
 
 Review comment:
   Ok.




> When query fails on Web UI, result page does not show any error
> ---
>
> Key: DRILL-6591
> URL: https://issues.apache.org/jira/browse/DRILL-6591
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: no_result_found.JPG
>
>
> When a query fails on the Web UI, the result page shows no error, only "No 
> result found." Screenshot attached. Drill should display the error message instead.





[jira] [Updated] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5495:

Reviewer: Arina Ielchiieva

> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>
> git.commit.id.abbrev=1e0a14c
> The data set used is generated from spark and contains a timestamp stored as 
> int96
> {code}
> [root@qa-node190 framework]# /home/parquet-tools-1.5.1-SNAPSHOT/parquet-meta 
> /home/framework/framework/resources/Datasources/parquet_date/spark_generated/d4/part-r-0-08c5c621-62ea-4fee-b690-11576eddc39c.snappy.parquet
>  
> creator: parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d) 
> extra:   org.apache.spark.sql.parquet.row.metadata = 
> {"type":"struct","fields":[{"name":"a","type":"integer","nullable":true,"metadata":{}},{"name":"b","type":"strin
>  [more]...
> file schema: spark_schema 
> ---
> a:   OPTIONAL INT32 R:0 D:1
> b:   OPTIONAL BINARY O:UTF8 R:0 D:1
> c:   OPTIONAL INT32 O:DATE R:0 D:1
> d:   OPTIONAL INT96 R:0 D:1
> row group 1: RC:1 TS:8661 
> ---
> a:INT32 SNAPPY DO:0 FPO:4 SZ:2367/2571/1.09 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> b:BINARY SNAPPY DO:0 FPO:2371 SZ:2329/2843/1.22 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> c:INT32 SNAPPY DO:0 FPO:4700 SZ:1374/1507/1.10 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> d:INT96 SNAPPY DO:0 FPO:6074 SZ:1597/1740/1.09 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> {code}
> The below query fails with an ArrayIndexOutOfBoundsException
> {code}
> select convert_from(d, 'TIMESTAMP_IMPALA') from 
> dfs.`/drill/testdata/resource-manager/d4`;
> Fails with below error after displaying a bunch of records
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 1:0
> [Error Id: f963f6c0-3306-49a6-9d98-a193c5e7cfee on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Attached the logs, profiles and data files
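For context, the INT96 column `d` above uses the Impala/Hive timestamp layout: eight little-endian bytes of nanoseconds within the day followed by four little-endian bytes of Julian day number. A minimal decoding sketch (an illustration of the format, not Drill's actual reader code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.time.Instant;

public class Int96Demo {
    static final long JULIAN_DAY_OF_EPOCH = 2440588L; // Julian day of 1970-01-01

    // Decode a 12-byte Impala-style INT96 timestamp:
    // bytes 0-7: nanoseconds within the day (little-endian),
    // bytes 8-11: Julian day number (little-endian).
    static Instant fromInt96(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();
        long julianDay = Integer.toUnsignedLong(buf.getInt());
        long epochSeconds = (julianDay - JULIAN_DAY_OF_EPOCH) * 86400L
                + nanosOfDay / 1_000_000_000L;
        return Instant.ofEpochSecond(epochSeconds, nanosOfDay % 1_000_000_000L);
    }

    public static void main(String[] args) {
        // 1970-01-01T00:00:00Z encoded as INT96: nanos = 0, Julian day = 2440588.
        ByteBuffer buf = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(0L).putInt(2440588);
        System.out.println(fromInt96(buf.array())); // prints 1970-01-01T00:00:00Z
    }
}
```

Misreading this layout (e.g. wrong byte order or treating the value as a plain integer) is the kind of error that surfaces as the ArrayIndexOutOfBoundsException and garbage `[B@...` binary output reported above.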





[jira] [Assigned] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva reassigned DRILL-5495:
---

Assignee: Vitalii Diravka  (was: Arina Ielchiieva)

> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>
> git.commit.id.abbrev=1e0a14c
> The data set used is generated from spark and contains a timestamp stored as 
> int96
> {code}
> [root@qa-node190 framework]# /home/parquet-tools-1.5.1-SNAPSHOT/parquet-meta 
> /home/framework/framework/resources/Datasources/parquet_date/spark_generated/d4/part-r-0-08c5c621-62ea-4fee-b690-11576eddc39c.snappy.parquet
>  
> creator: parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d) 
> extra:   org.apache.spark.sql.parquet.row.metadata = 
> {"type":"struct","fields":[{"name":"a","type":"integer","nullable":true,"metadata":{}},{"name":"b","type":"strin
>  [more]...
> file schema: spark_schema 
> ---
> a:   OPTIONAL INT32 R:0 D:1
> b:   OPTIONAL BINARY O:UTF8 R:0 D:1
> c:   OPTIONAL INT32 O:DATE R:0 D:1
> d:   OPTIONAL INT96 R:0 D:1
> row group 1: RC:1 TS:8661 
> ---
> a:INT32 SNAPPY DO:0 FPO:4 SZ:2367/2571/1.09 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> b:BINARY SNAPPY DO:0 FPO:2371 SZ:2329/2843/1.22 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> c:INT32 SNAPPY DO:0 FPO:4700 SZ:1374/1507/1.10 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> d:INT96 SNAPPY DO:0 FPO:6074 SZ:1597/1740/1.09 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> {code}
> The below query fails with an ArrayIndexOutOfBoundsException
> {code}
> select convert_from(d, 'TIMESTAMP_IMPALA') from 
> dfs.`/drill/testdata/resource-manager/d4`;
> Fails with below error after displaying a bunch of records
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 1:0
> [Error Id: f963f6c0-3306-49a6-9d98-a193c5e7cfee on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Attached the logs, profiles and data files





[jira] [Updated] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5495:

Labels: ready-to-commit  (was: )

> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>
> git.commit.id.abbrev=1e0a14c
> The data set used is generated from spark and contains a timestamp stored as 
> int96
> {code}
> [root@qa-node190 framework]# /home/parquet-tools-1.5.1-SNAPSHOT/parquet-meta 
> /home/framework/framework/resources/Datasources/parquet_date/spark_generated/d4/part-r-0-08c5c621-62ea-4fee-b690-11576eddc39c.snappy.parquet
>  
> creator: parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d) 
> extra:   org.apache.spark.sql.parquet.row.metadata = 
> {"type":"struct","fields":[{"name":"a","type":"integer","nullable":true,"metadata":{}},{"name":"b","type":"strin
>  [more]...
> file schema: spark_schema 
> ---
> a:   OPTIONAL INT32 R:0 D:1
> b:   OPTIONAL BINARY O:UTF8 R:0 D:1
> c:   OPTIONAL INT32 O:DATE R:0 D:1
> d:   OPTIONAL INT96 R:0 D:1
> row group 1: RC:1 TS:8661 
> ---
> a:INT32 SNAPPY DO:0 FPO:4 SZ:2367/2571/1.09 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> b:BINARY SNAPPY DO:0 FPO:2371 SZ:2329/2843/1.22 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> c:INT32 SNAPPY DO:0 FPO:4700 SZ:1374/1507/1.10 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> d:INT96 SNAPPY DO:0 FPO:6074 SZ:1597/1740/1.09 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> {code}
> The below query fails with an ArrayIndexOutOfBoundsException
> {code}
> select convert_from(d, 'TIMESTAMP_IMPALA') from 
> dfs.`/drill/testdata/resource-manager/d4`;
> Fails with below error after displaying a bunch of records
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 1:0
> [Error Id: f963f6c0-3306-49a6-9d98-a193c5e7cfee on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Attached the logs, profiles and data files





[jira] [Assigned] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva reassigned DRILL-5495:
---

Assignee: Arina Ielchiieva  (was: Vitalii Diravka)

> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Arina Ielchiieva
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>
> git.commit.id.abbrev=1e0a14c
> The data set used is generated from spark and contains a timestamp stored as 
> int96
> {code}
> [root@qa-node190 framework]# /home/parquet-tools-1.5.1-SNAPSHOT/parquet-meta 
> /home/framework/framework/resources/Datasources/parquet_date/spark_generated/d4/part-r-0-08c5c621-62ea-4fee-b690-11576eddc39c.snappy.parquet
>  
> creator: parquet-mr (build 32c46643845ea8a705c35d4ec8fc654cc8ff816d) 
> extra:   org.apache.spark.sql.parquet.row.metadata = 
> {"type":"struct","fields":[{"name":"a","type":"integer","nullable":true,"metadata":{}},{"name":"b","type":"strin
>  [more]...
> file schema: spark_schema 
> ---
> a:   OPTIONAL INT32 R:0 D:1
> b:   OPTIONAL BINARY O:UTF8 R:0 D:1
> c:   OPTIONAL INT32 O:DATE R:0 D:1
> d:   OPTIONAL INT96 R:0 D:1
> row group 1: RC:1 TS:8661 
> ---
> a:INT32 SNAPPY DO:0 FPO:4 SZ:2367/2571/1.09 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> b:BINARY SNAPPY DO:0 FPO:2371 SZ:2329/2843/1.22 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> c:INT32 SNAPPY DO:0 FPO:4700 SZ:1374/1507/1.10 VC:1 
> ENC:RLE,PLAIN,BIT_PACKED
> d:INT96 SNAPPY DO:0 FPO:6074 SZ:1597/1740/1.09 VC:1 
> ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED
> {code}
> The below query fails with an ArrayIndexOutOfBoundsException
> {code}
> select convert_from(d, 'TIMESTAMP_IMPALA') from 
> dfs.`/drill/testdata/resource-manager/d4`;
> Fails with below error after displaying a bunch of records
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 1:0
> [Error Id: f963f6c0-3306-49a6-9d98-a193c5e7cfee on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Attached the logs, profiles and data files





[jira] [Commented] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543448#comment-16543448
 ] 

ASF GitHub Bot commented on DRILL-5495:
---

arina-ielchiieva commented on issue #1382: DRILL-5495: convert_from function on 
top of int96 data results in Arr…
URL: https://github.com/apache/drill/pull/1382#issuecomment-404895013
 
 
   @vdiravka, thanks for the explanation. LGTM, +1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>





[jira] [Commented] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543439#comment-16543439
 ] 

ASF GitHub Bot commented on DRILL-5495:
---

vdiravka commented on issue #1382: DRILL-5495: convert_from function on top of 
int96 data results in Arr…
URL: https://github.com/apache/drill/pull/1382#issuecomment-404892801
 
 
   @arina-ielchiieva Yes, it is a mechanical issue.
   `recordsToReadInThisPass` is `numNonNullValues` (see 
`NullableColumnReader#processPagesBulk():284`),
   but `recordsReadInThisIteration` is `numNullValues` + `numNonNullValues`.
   In `NullableColumnReader#readField()` only `numNonNullValues` should be used.
   
   It is hard to reproduce the issue with one small file. But since it was a 
mechanical error, I think it is fine not to add a unit test for this issue. I 
have updated the PR.
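The off-by-count pattern described above can be sketched as follows. The class and helper below are illustrative (only the names `numNullValues`/`numNonNullValues` and `readField` come from the discussion), not Drill's actual `NullableColumnReader` internals:

```java
// Illustrative sketch of the fix discussed above, not Drill's actual reader.
public class NullableReadSketch {

    // valueBuffer holds only the values that were actually encoded on the
    // page, i.e. one slot per non-null record.
    static long[] readField(long[] valueBuffer, int numNullValues, int numNonNullValues) {
        long[] out = new long[numNonNullValues];
        // Correct: iterate over numNonNullValues (recordsToReadInThisPass).
        // The bug iterated over numNullValues + numNonNullValues
        // (recordsReadInThisIteration), indexing past the end of valueBuffer
        // and raising ArrayIndexOutOfBoundsException.
        for (int i = 0; i < numNonNullValues; i++) {
            out[i] = valueBuffer[i];
        }
        return out;
    }

    public static void main(String[] args) {
        long[] encoded = {10L, 20L, 30L};       // 3 non-null values on the page
        long[] read = readField(encoded, 2, 3); // 2 nulls + 3 non-nulls = 5 records
        if (read.length != 3 || read[2] != 30L) {
            throw new AssertionError("unexpected read result");
        }
        System.out.println("read " + read.length + " non-null values");
    }
}
```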
   
   




> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>





[jira] [Commented] (DRILL-5796) Filter pruning for multi rowgroup parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543418#comment-16543418
 ] 

ASF GitHub Bot commented on DRILL-5796:
---

jbimbert commented on a change in pull request #1298: DRILL-5796: Filter 
pruning for multi rowgroup parquet file
URL: https://github.com/apache/drill/pull/1298#discussion_r202408237
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/stat/ParquetIsPredicate.java
 ##
 @@ -124,8 +124,7 @@ private static LogicalExpression 
createIsTruePredicate(LogicalExpression expr) {
*/
   private static LogicalExpression createIsFalsePredicate(LogicalExpression 
expr) {
 return new ParquetIsPredicate(expr, (exprStat, evaluator) ->
-//if min value is not false or if there are all nulls  -> canDrop
-isAllNulls(exprStat, evaluator.getRowCount()) || 
exprStat.hasNonNullValue() && ((BooleanStatistics) exprStat).getMin()
+  exprStat.hasNonNullValue() && ((BooleanStatistics) exprStat).getMin() || 
isAllNulls(exprStat, evaluator.getRowCount()) ? RowsMatch.NONE : 
checkNull(exprStat)
 
 Review comment:
   `hasNonNullValue` = true if min and max exist;
   `isAllNulls` = true if all rows are null values.
   This case is exercised by `testBooleanPredicate` with file 0_0_3.parquet 
(which contains only 3 null values).
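Given those semantics, the pruning decision for a `col IS FALSE` predicate can be sketched roughly as follows. The `RowsMatch` values mirror the diff above, while the method and its parameters are illustrative simplifications, not Drill's exact `ParquetIsPredicate` API:

```java
// Hedged sketch of rowgroup pruning for "col IS FALSE" using boolean
// column statistics; the method and parameters are illustrative.
public class IsFalsePruneSketch {

    enum RowsMatch { ALL, SOME, NONE }

    static RowsMatch isFalseMatch(boolean hasNonNullValue, boolean min, boolean max,
                                  long nullCount, long rowCount) {
        boolean allNulls = nullCount == rowCount;
        // min == true means every non-null value is true, so no row satisfies
        // "IS FALSE"; an all-null rowgroup matches nothing either.
        if ((hasNonNullValue && min) || allNulls) {
            return RowsMatch.NONE;
        }
        // max == false with no nulls means every row is false.
        if (hasNonNullValue && !max && nullCount == 0) {
            return RowsMatch.ALL;
        }
        return RowsMatch.SOME; // mixed values or some nulls: cannot prune
    }

    public static void main(String[] args) {
        // All-true rowgroup: prune it.
        System.out.println(isFalseMatch(true, true, true, 0, 100));   // NONE
        // All-null rowgroup (like 0_0_3.parquet in the test): prune it.
        System.out.println(isFalseMatch(false, false, false, 3, 3));  // NONE
        // All-false rowgroup with no nulls: every row matches.
        System.out.println(isFalseMatch(true, false, false, 0, 100)); // ALL
    }
}
```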




> Filter pruning for multi rowgroup parquet file
> --
>
> Key: DRILL-5796
> URL: https://issues.apache.org/jira/browse/DRILL-5796
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Reporter: Damien Profeta
>Assignee: Jean-Blas IMBERT
>Priority: Major
> Fix For: 1.14.0
>
>
> Today, filter pruning uses the file name as the partitioning key. This means 
> a partition can be pruned only if the whole file belongs to the same 
> partition. With Parquet, the filter could be pruned at the rowgroup level 
> instead, if the unit of work were the rowgroup rather than the file.





[jira] [Commented] (DRILL-5796) Filter pruning for multi rowgroup parquet file

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543399#comment-16543399
 ] 

ASF GitHub Bot commented on DRILL-5796:
---

vrozov commented on a change in pull request #1298: DRILL-5796: Filter pruning 
for multi rowgroup parquet file
URL: https://github.com/apache/drill/pull/1298#discussion_r202402885
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/stat/ParquetIsPredicate.java
 ##
 @@ -124,8 +124,7 @@ private static LogicalExpression 
createIsTruePredicate(LogicalExpression expr) {
*/
   private static LogicalExpression createIsFalsePredicate(LogicalExpression 
expr) {
 return new ParquetIsPredicate(expr, (exprStat, evaluator) ->
-//if min value is not false or if there are all nulls  -> canDrop
-isAllNulls(exprStat, evaluator.getRowCount()) || 
exprStat.hasNonNullValue() && ((BooleanStatistics) exprStat).getMin()
+  exprStat.hasNonNullValue() && ((BooleanStatistics) exprStat).getMin() || 
isAllNulls(exprStat, evaluator.getRowCount()) ? RowsMatch.NONE : 
checkNull(exprStat)
 
 Review comment:
   Under what condition `hasNonNullValue() && isAllNulls()` will be `true`? 
What unit test covers this use case?




> Filter pruning for multi rowgroup parquet file
> --
>
> Key: DRILL-5796
> URL: https://issues.apache.org/jira/browse/DRILL-5796
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Reporter: Damien Profeta
>Assignee: Jean-Blas IMBERT
>Priority: Major
> Fix For: 1.14.0
>
>





[jira] [Commented] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543381#comment-16543381
 ] 

ASF GitHub Bot commented on DRILL-5495:
---

arina-ielchiieva commented on issue #1382: DRILL-5495: convert_from function on 
top of int96 data results in Arr…
URL: https://github.com/apache/drill/pull/1382#issuecomment-404876965
 
 
   Looks like it was a mechanical error. @vdiravka nice catch! Though I am not 
sure that validating this requires adding that many files, with 10 rows in 
total. Can we create a unit test using only one small file? If not, I would 
say we might consider removing the test.




> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>





[jira] [Commented] (DRILL-5495) convert_from function on top of int96 data results in ArrayIndexOutOfBoundsException

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543364#comment-16543364
 ] 

ASF GitHub Bot commented on DRILL-5495:
---

vdiravka opened a new pull request #1382: DRILL-5495: convert_from function on 
top of int96 data results in Arr…
URL: https://github.com/apache/drill/pull/1382
 
 
   …ayIndexOutOfBoundsException
   
   The only issue is that the wrong parameter is used for iteration when 
reading the values of a nullable fixed binary field.




> convert_from function on top of int96 data results in 
> ArrayIndexOutOfBoundsException
> 
>
> Key: DRILL-5495
> URL: https://issues.apache.org/jira/browse/DRILL-5495
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 1.14.0
>
> Attachments: 26edf56f-6bc6-1e1f-5aa4-d98aec858a4a.sys.drill, 
> d4.tar.gz, drillbit.log
>
>





[jira] [Commented] (DRILL-6606) Hash Join returns incorrect data types when joining subqueries with limit 0

2018-07-13 Thread Volodymyr Vysotskyi (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543352#comment-16543352
 ] 

Volodymyr Vysotskyi commented on DRILL-6606:


{{limit 0}} in subqueries is a good way of discovering the schema without 
joining data.

But the problem is more general. For example, if both subqueries have filters 
that filter out all the input data, the schema information will also be lost 
in the case of a hash join. I think this case is more common than the case 
with {{limit 0}}.

I agree with Boaz that the problem is in "early sniffing": the schema is 
built only once some data has arrived.

Columns have the same types as the expected types in the 
{{TestPreparedStatementProvider#joinOrderByQuery()}} test: {{DOUBLE}}, 
{{DATE}}, {{INTEGER}}.

> Hash Join returns incorrect data types when joining subqueries with limit 0
> ---
>
> Key: DRILL-6606
> URL: https://issues.apache.org/jira/browse/DRILL-6606
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Bohdan Kazydub
>Assignee: Timothy Farkas
>Priority: Blocker
> Fix For: 1.14.0
>
>
> PreparedStatement for query
> {code:sql}
> SELECT l.l_quantity, l.l_shipdate, o.o_custkey
> FROM (SELECT * FROM cp.`tpch/lineitem.parquet` LIMIT 0) l
>     JOIN (SELECT * FROM cp.`tpch/orders.parquet` LIMIT 0) o 
>     ON l.l_orderkey = o.o_orderkey
> LIMIT 0
> {code}
> is created with wrong types (nullable INTEGER) for all selected columns, no 
> matter what their actual types are. This behavior reproduces with hash join 
> only and is very likely to be caused by DRILL-6027, as the query worked fine 
> before this feature was implemented.
> To reproduce the problem you can put the aforementioned query into 
> TestPreparedStatementProvider#joinOrderByQuery() test method.





[jira] [Commented] (DRILL-6475) Unnest: Null fieldId Pointer

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543358#comment-16543358
 ] 

ASF GitHub Bot commented on DRILL-6475:
---

HanumathRao opened a new pull request #1381: DRILL-6475: Unnest: Null fieldId 
Pointer.
URL: https://github.com/apache/drill/pull/1381
 
 
   @amansinha100  Can you please review this PR.
   
   This PR includes changes related to updating the row type and also the 
correlated column for the unnest prel.




> Unnest: Null fieldId Pointer 
> -
>
> Key: DRILL-6475
> URL: https://issues.apache.org/jira/browse/DRILL-6475
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Reporter: Boaz Ben-Zvi
>Assignee: Hanumath Rao Maduri
>Priority: Major
> Fix For: 1.14.0
>
>
>  Executing the following (in TestE2EUnnestAndLateral.java) causes an NPE as 
> `fieldId` is null in `schemaChanged()`: 
> {code}
> @Test
> public void testMultipleBatchesLateral_twoUnnests() throws Exception {
>  String sql = "SELECT t5.l_quantity FROM dfs.`lateraljoin/multipleFiles/` t, 
> LATERAL " +
>  "(SELECT t2.ordrs FROM UNNEST(t.c_orders) t2(ordrs)) t3(ordrs), LATERAL " +
>  "(SELECT t4.l_quantity FROM UNNEST(t3.ordrs) t4(l_quantity)) t5";
>  test(sql);
> }
> {code}
>  
> And the error is:
> {code}
> Error: SYSTEM ERROR: NullPointerException
> Fragment 0:0
> [Error Id: 25f42765-8f68-418e-840a-ffe65788e1e2 on 10.254.130.25:31020]
> (java.lang.NullPointerException) null
>  
> org.apache.drill.exec.physical.impl.unnest.UnnestRecordBatch.schemaChanged():381
>  org.apache.drill.exec.physical.impl.unnest.UnnestRecordBatch.innerNext():199
>  org.apache.drill.exec.record.AbstractRecordBatch.next():172
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.record.AbstractRecordBatch.next():119
>  
> org.apache.drill.exec.physical.impl.join.LateralJoinBatch.prefetchFirstBatchFromBothSides():241
>  org.apache.drill.exec.physical.impl.join.LateralJoinBatch.buildSchema():264
>  org.apache.drill.exec.record.AbstractRecordBatch.next():152
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.record.AbstractRecordBatch.next():119
>  org.apache.drill.exec.record.AbstractRecordBatch.next():109
>  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>  
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():137
>  org.apache.drill.exec.record.AbstractRecordBatch.next():172
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.record.AbstractRecordBatch.next():119
>  org.apache.drill.exec.record.AbstractRecordBatch.next():109
>  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>  
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():137
>  org.apache.drill.exec.record.AbstractRecordBatch.next():172
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():229
>  org.apache.drill.exec.physical.impl.BaseRootExec.next():103
>  org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
>  org.apache.drill.exec.physical.impl.BaseRootExec.next():93
>  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():292
>  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():279
>  java.security.AccessController.doPrivileged():-2
>  javax.security.auth.Subject.doAs():422
>  org.apache.hadoop.security.UserGroupInformation.doAs():1657
>  org.apache.drill.exec.work.fragment.FragmentExecutor.run():279
>  org.apache.drill.common.SelfCleaningRunnable.run():38
>  java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>  java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>  java.lang.Thread.run():745 (state=,code=0)
> {code} 
>  





[jira] [Commented] (DRILL-6453) TPC-DS query 72 has regressed

2018-07-13 Thread Aman Sinha (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543351#comment-16543351
 ] 

Aman Sinha commented on DRILL-6453:
---

[~khfaraaz] can you also try running a simplified version of the query with, 
say, the first 3 joins (starting from the leaf level in the plan)? We should 
see what the behavior is with patterns like a hash-partitioned HJ followed by 
broadcast, broadcast HJ.

> TPC-DS query 72 has regressed
> -
>
> Key: DRILL-6453
> URL: https://issues.apache.org/jira/browse/DRILL-6453
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.14.0
>Reporter: Khurram Faraaz
>Assignee: Boaz Ben-Zvi
>Priority: Blocker
> Fix For: 1.14.0
>
> Attachments: 24f75b18-014a-fb58-21d2-baeab5c3352c.sys.drill, 
> jstack_29173_June_10_2018.txt, jstack_29173_June_10_2018.txt, 
> jstack_29173_June_10_2018_b.txt, jstack_29173_June_10_2018_b.txt, 
> jstack_29173_June_10_2018_c.txt, jstack_29173_June_10_2018_c.txt, 
> jstack_29173_June_10_2018_d.txt, jstack_29173_June_10_2018_d.txt, 
> jstack_29173_June_10_2018_e.txt, jstack_29173_June_10_2018_e.txt
>
>
> TPC-DS query 72 seems to have regressed; the query profile for the case where 
> it was canceled after 2 hours on Drill 1.14.0 is attached here.
> {noformat}
> On, Drill 1.14.0-SNAPSHOT 
> commit : 931b43e (TPC-DS query 72 executed successfully on this commit, took 
> around 55 seconds to execute)
> SF1 parquet data on 4 nodes; 
> planner.memory.max_query_memory_per_node = 10737418240. 
> drill.exec.hashagg.fallback.enabled = true
> TPC-DS query 72 executed successfully & took 47 seconds to complete execution.
> {noformat}
> {noformat}
> TPC-DS data in the below run has date values stored as DATE datatype and not 
> VARCHAR type
> On, Drill 1.14.0-SNAPSHOT
> commit : 82e1a12
> SF1 parquet data on 4 nodes; 
> planner.memory.max_query_memory_per_node = 10737418240. 
> drill.exec.hashagg.fallback.enabled = true
> and
> alter system set `exec.hashjoin.num_partitions` = 1;
> TPC-DS query 72 executed for 2 hrs and 11 mins and did not complete; I had to 
> cancel it by stopping the Foreman drillbit.
> As a result, several minor fragments are reported to be in the 
> CANCELLATION_REQUESTED state on the UI.
> {noformat}





[jira] [Commented] (DRILL-6606) Hash Join returns incorrect data types when joining subqueries with limit 0

2018-07-13 Thread Aman Sinha (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543323#comment-16543323
 ] 

Aman Sinha commented on DRILL-6606:
---

I don't think LIMIT 0 in the subqueries or views is common. For instance, 
Tableau generates a wrapper LIMIT 0 on the entire query, not within each 
subquery. What is the data type of the columns if you only have the outer 
LIMIT 0 after the join of the subqueries?

> Hash Join returns incorrect data types when joining subqueries with limit 0
> ---
>
> Key: DRILL-6606
> URL: https://issues.apache.org/jira/browse/DRILL-6606
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Bohdan Kazydub
>Assignee: Timothy Farkas
>Priority: Blocker
> Fix For: 1.14.0
>
>




