[jira] [Resolved] (DRILL-4205) Simple query hit IndexOutOfBoundException

2015-12-17 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4205.
--
Resolution: Fixed

>  Simple query hit IndexOutOfBoundException
> --
>
> Key: DRILL-4205
> URL: https://issues.apache.org/jira/browse/DRILL-4205
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.4.0
>Reporter: Dechang Gu
>Assignee: Parth Chandra
>
> The following query failed due to IOB:
> 0: jdbc:drill:schema=wf_pigprq100> select * from 
> `store_sales/part-m-00073.parquet`;
> Error: SYSTEM ERROR: IndexOutOfBoundsException: srcIndex: 1048587
> Fragment 0:0
> [Error Id: ad8d2bc0-259f-483c-9024-93865963541e on ucs-node4.perf.lab:31010]
>   (org.apache.drill.common.exceptions.DrillRuntimeException) Error in parquet 
> record reader.
> Message: 
> Hadoop path: /tpcdsPigParq/SF100/store_sales/part-m-00073.parquet
> Total records read: 135280
> Mock records read: 0
> Records to read: 1424
> Row group index: 0
> Records in row group: 3775712
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message pig_schema {
>   optional int64 ss_sold_date_sk;
>   optional int64 ss_sold_time_sk;
>   optional int64 ss_item_sk;
>   optional int64 ss_customer_sk;
>   optional int64 ss_cdemo_sk;
>   optional int64 ss_hdemo_sk;
>   optional int64 ss_addr_sk;
>   optional int64 ss_store_sk;
>   optional int64 ss_promo_sk;
>   optional int64 ss_ticket_number;
>   optional int64 ss_quantity;
>   optional double ss_wholesale_cost;
>   optional double ss_list_price;
>   optional double ss_sales_price;
>   optional double ss_ext_discount_amt;
>   optional double ss_ext_sales_price;
>   optional double ss_ext_wholesale_cost;
>   optional double ss_ext_list_price;
>   optional double ss_ext_tax;
>   optional double ss_coupon_amt;
>   optional double ss_net_paid;
>   optional double ss_net_paid_inc_tax;
>   optional double ss_net_profit;
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4313) C++ client should manage load balance of queries

2016-01-26 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4313:


 Summary: C++ client should manage load balance of queries
 Key: DRILL-4313
 URL: https://issues.apache.org/jira/browse/DRILL-4313
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


The current C++ client handles multiple parallel queries over the same 
connection, but that creates a bottleneck as the queries get sent to the same 
drillbit.
The client can manage this more effectively by choosing from a configurable 
pool of connections and round-robinning queries across them.
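The round-robin selection described above can be sketched as follows (Java for illustration, although the actual client is C++; all names here are hypothetical and not part of the Drill client API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: hand out endpoints from a configured pool in
// round-robin order, so parallel queries spread across drillbits instead
// of all landing on one connection.
class RoundRobinPool {
    private final List<String> endpoints;           // configured drillbit endpoints
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinPool(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Pick the next endpoint, wrapping around at the end of the pool.
    String nextEndpoint() {
        int i = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }
}
```

With three endpoints configured, successive queries would cycle through them in order, so no single drillbit becomes a bottleneck.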





[jira] [Resolved] (DRILL-4313) C++ client - Improve method of drillbit selection from cluster

2016-01-29 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4313.
--
Resolution: Fixed

Fixed in 576271d

> C++ client - Improve method of drillbit selection from cluster
> --
>
> Key: DRILL-4313
> URL: https://issues.apache.org/jira/browse/DRILL-4313
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>
> The current C++ client handles multiple parallel queries over the same 
> connection, but that creates a bottleneck as the queries get sent to the same 
> drillbit.
> The client can manage this more effectively by choosing from a configurable 
> pool of connections and round-robinning queries across them.





[jira] [Created] (DRILL-4380) Fix performance regression: in creation of FileSelection in ParquetFormatPlugin to not set files if metadata cache is available.

2016-02-09 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4380:


 Summary: Fix performance regression: in creation of FileSelection 
in ParquetFormatPlugin to not set files if metadata cache is available.
 Key: DRILL-4380
 URL: https://issues.apache.org/jira/browse/DRILL-4380
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra



The regression was caused by the changes in 
367d74a65ce2871a1452361cbd13bbd5f4a6cc95 (DRILL-2618: handle queries over empty 
folders consistently so that they report table not found rather than failing).

In ParquetFormatPlugin, the original code created a FileSelection object in the 
following code:
{code}
return new FileSelection(fileNames, metaRootPath.toString(), metadata, 
selection.getFileStatusList(fs));
{code}
The selection.getFileStatusList call made an inexpensive call to 
FileSelection.init(). The call was inexpensive because the FileSelection.files 
member was not set, so the code did not need to make an expensive call to fetch 
the file statuses corresponding to the files in that member.
In the new code, this is replaced by 
{code}
  final FileSelection newSelection = FileSelection.create(null, fileNames, 
metaRootPath.toString());
return ParquetFileSelection.create(newSelection, metadata);
{code}
This sets the FileSelection.files member but not the FileSelection.statuses 
member. A subsequent call to FileSelection.getStatuses (in ParquetGroupScan) 
now makes an expensive call to fetch all the statuses.

It appears that there was an implicit assumption that the 
FileSelection.statuses member should be set before the FileSelection.files 
member is set. This assumption is no longer true.
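The lazy behavior the original code relied on can be sketched as follows (illustrative Java with hypothetical names, not Drill's actual FileSelection class):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative sketch only (hypothetical names, not Drill's actual
// FileSelection): the per-file status lookup is expensive, so it runs
// lazily and is skipped entirely when statuses were supplied up front,
// which is the behavior the original ParquetFormatPlugin code relied on.
class LazySelection {
    private final List<String> files;
    private List<Long> statuses;                  // stand-in for List<FileStatus>
    private final Function<String, Long> statFn;  // expensive lookup, e.g. a namenode call

    LazySelection(List<String> files, List<Long> statuses, Function<String, Long> statFn) {
        this.files = files;
        this.statuses = statuses;
        this.statFn = statFn;
    }

    List<Long> getStatuses() {
        if (statuses == null) {                   // fetch once, and only on demand
            statuses = files.stream().map(statFn).collect(Collectors.toList());
        }
        return statuses;
    }
}
```

Constructing the selection with statuses already present makes getStatuses() free; constructing it with files but no statuses forces one expensive lookup per file, which is the regression described above.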






[jira] [Resolved] (DRILL-4380) Fix performance regression: in creation of FileSelection in ParquetFormatPlugin to not set files if metadata cache is available.

2016-02-09 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4380.
--
Resolution: Fixed

Fixed in 7bfcb40a0ffa49a1ed27e1ff1f57378aa1136bbd. Also see DRILL-4381

> Fix performance regression: in creation of FileSelection in 
> ParquetFormatPlugin to not set files if metadata cache is available.
> 
>
> Key: DRILL-4380
> URL: https://issues.apache.org/jira/browse/DRILL-4380
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Parth Chandra
>
> The regression was caused by the changes in 
> 367d74a65ce2871a1452361cbd13bbd5f4a6cc95 (DRILL-2618: handle queries over 
> empty folders consistently so that they report table not found rather than 
> failing).
> In ParquetFormatPlugin, the original code created a FileSelection object in 
> the following code:
> {code}
> return new FileSelection(fileNames, metaRootPath.toString(), metadata, 
> selection.getFileStatusList(fs));
> {code}
> The selection.getFileStatusList call made an inexpensive call to 
> FileSelection.init(). The call was inexpensive because the 
> FileSelection.files member was not set, so the code did not need to make an 
> expensive call to fetch the file statuses corresponding to the files in that 
> member.
> In the new code, this is replaced by 
> {code}
>   final FileSelection newSelection = FileSelection.create(null, fileNames, 
> metaRootPath.toString());
> return ParquetFileSelection.create(newSelection, metadata);
> {code}
> This sets the FileSelection.files member but not the FileSelection.statuses 
> member. A subsequent call to FileSelection.getStatuses (in 
> ParquetGroupScan) now makes an expensive call to fetch all the statuses.
> It appears that there was an implicit assumption that the 
> FileSelection.statuses member should be set before the FileSelection.files 
> member is set. This assumption is no longer true.





[jira] [Created] (DRILL-4480) Refactor unnecessary duplication of code in connect functions

2016-03-05 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4480:


 Summary: Refactor unnecessary duplication of code in connect 
functions
 Key: DRILL-4480
 URL: https://issues.apache.org/jira/browse/DRILL-4480
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - C++
Affects Versions: 1.5.0
Reporter: Parth Chandra
 Fix For: Future


The connect functions for DrillClientImpl and PooledDrillClientImpl duplicate a 
lot of code. This code should be refactored so that it is easier to understand.





[jira] [Resolved] (DRILL-4313) C++ client - Improve method of drillbit selection from cluster

2016-03-08 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4313.
--
Resolution: Fixed

Fixed in df0f0af3d963c1b65eb01c3141fe84532c53f5a5

> C++ client - Improve method of drillbit selection from cluster
> --
>
> Key: DRILL-4313
> URL: https://issues.apache.org/jira/browse/DRILL-4313
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 1.6.0
>
>
> The current C++ client handles multiple parallel queries over the same 
> connection, but that creates a bottleneck as the queries get sent to the same 
> drillbit.
> The client can manage this more effectively by choosing from a configurable 
> pool of connections and round-robinning queries across them.





[jira] [Created] (DRILL-4647) C++ client is not propagating a connection failed error when a drillbit goes down

2016-04-29 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4647:


 Summary: C++ client is not propagating a connection failed error 
when a drillbit goes down
 Key: DRILL-4647
 URL: https://issues.apache.org/jira/browse/DRILL-4647
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


When a drillbit goes down, there are two conditions under which the client does 
not propagate the error back to the application:
1) The application is in a submitQuery call: the ODBC driver expects the error 
to be reported through the query results listener, which has not yet been 
registered at the point the error is encountered.
2) A submitQuery call succeeded but never reached the drillbit because it was 
shut down. In this case the application holds a handle to a query and is 
listening for results that will never arrive. The heartbeat mechanism detects 
the failure but does not propagate the error to the query results listener.
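A sketch of the propagation both cases need (hypothetical names, Java for illustration; the real client is C++): an error detected before the listener is registered must be held and replayed at registration, and the heartbeat path must report through the same listener.

```java
// Hypothetical sketch, not the actual client code: a query handle that
// delivers connection errors to the results listener whether the failure
// is seen before or after the listener is registered.
class QueryHandle {
    interface ResultsListener { void onError(String err); }

    private ResultsListener listener;
    private String pendingError;                   // error seen before registration

    // Called from submitQuery or the heartbeat path when the drillbit is gone.
    synchronized void connectionFailed(String err) {
        if (listener != null) {
            listener.onError(err);                 // case 2: listener already waiting
        } else {
            pendingError = err;                    // case 1: hold until registration
        }
    }

    // Registering the listener replays any error that arrived first.
    synchronized void register(ResultsListener l) {
        this.listener = l;
        if (pendingError != null) {
            l.onError(pendingError);
            pendingError = null;
        }
    }
}
```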





[jira] [Created] (DRILL-4652) C++ client build breaks when trying to include commit messages with quotes

2016-05-03 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4652:


 Summary: C++ client build breaks when trying to include commit 
messages with quotes
 Key: DRILL-4652
 URL: https://issues.apache.org/jira/browse/DRILL-4652
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


The C++ client build generates a string based on git commit info to print to 
the log at startup time. This breaks if the commit message has quotes since the 
embedded quotes are not escaped.
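The underlying fix is standard string-literal escaping. A minimal sketch (hypothetical helper, Java for illustration; the real build step generates a C++ source file from git metadata):

```java
// Hypothetical helper, not Drill's actual build code: make a commit message
// safe to embed in a generated string literal by escaping backslashes first,
// then double quotes.
class CommitMessageEscaper {
    static String escapeForStringLiteral(String msg) {
        return msg.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}
```

Escaping backslashes before quotes matters; the reverse order would double-escape the backslashes introduced for the quotes.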





[jira] [Resolved] (DRILL-4652) C++ client build breaks when trying to include commit messages with quotes

2016-05-03 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4652.
--
Resolution: Fixed

> C++ client build breaks when trying to include commit messages with quotes
> --
>
> Key: DRILL-4652
> URL: https://issues.apache.org/jira/browse/DRILL-4652
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Parth Chandra
>
> The C++ client build generates a string based on git commit info to print to 
> the log at startup time. This breaks if the commit message has quotes since 
> the embedded quotes are not escaped.





[jira] [Created] (DRILL-4800) Improve parquet reader performance

2016-07-22 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4800:


 Summary: Improve parquet reader performance
 Key: DRILL-4800
 URL: https://issues.apache.org/jira/browse/DRILL-4800
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


Reported by a user in the field - 

We're generally getting read speeds of about 100-150 MB/s/node on the PARQUET 
scan operator. This seems a little low given the number of drives on the node 
(24). We're looking for ways to improve the performance of this operator, as 
most of our queries are I/O bound.







[jira] [Created] (DRILL-4826) Query against INFORMATION_SCHEMA.TABLES degrades as the number of views increases

2016-08-04 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4826:


 Summary: Query against INFORMATION_SCHEMA.TABLES degrades as the 
number of views increases
 Key: DRILL-4826
 URL: https://issues.apache.org/jira/browse/DRILL-4826
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
Assignee: Parth Chandra


Queries against INFORMATION_SCHEMA.TABLES and INFORMATION_SCHEMA.VIEWS slow 
down as the number of views increases. 

BI tools like Tableau issue a query like the following at connection time:
{code}
select TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE from 
INFORMATION_SCHEMA.`TABLES` WHERE TABLE_CATALOG LIKE 'DRILL' ESCAPE '\' AND 
TABLE_SCHEMA <> 'sys' AND TABLE_SCHEMA <> 'INFORMATION_SCHEMA' ORDER BY 
TABLE_TYPE, TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
{code}

The time to query the information schema tables degrades as the number of views 
increases. On a test system:

|| Views || Time(secs) ||
|500 | 6 |
|1000 | 19 |
|1500 | 33 |

This can result in a single connection taking more than a minute to establish.

The problem occurs because we read the view file for every view, and this 
reading appears to take most of the time.

Querying information_schema.tables does not, in fact, need to open the view 
files at all; it merely needs a listing of the view files. Eliminating the 
view file reads will speed up the query tremendously.
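The proposed optimization can be sketched as follows (hypothetical names; the `.view.drill` suffix is Drill's view-file extension): derive the table names from a single directory listing instead of opening each view file.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: INFORMATION_SCHEMA.TABLES needs only the view names,
// which can be derived from the file names returned by one cheap directory
// listing, with no per-view file reads or parses.
class ViewLister {
    // Pure helper: turn a stream of directory entries into sorted view names.
    static List<String> viewNamesFrom(Stream<String> fileNames) {
        final String suffix = ".view.drill";       // Drill's view-file extension
        return fileNames
                .filter(n -> n.endsWith(suffix))
                .map(n -> n.substring(0, n.length() - suffix.length()))
                .sorted()
                .collect(Collectors.toList());
    }
}
```

The cost becomes one listing per workspace rather than one file read per view, which is why the connection-time query stops degrading linearly with the number of views.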





[jira] [Created] (DRILL-4840) Sqlline prints log output to stdout on startup

2016-08-09 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4840:


 Summary: Sqlline prints log output to stdout on startup
 Key: DRILL-4840
 URL: https://issues.apache.org/jira/browse/DRILL-4840
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


With the refactoring of the Drill scripts, sqlline now prints logging messages 
to stdout when it starts up. This breaks some users' scripts that invoke 
sqlline.

See also DRILL-2798, which was logged because end users had scripts that broke 
as a result of the additional output.







[jira] [Resolved] (DRILL-1315) Allow specifying Zookeeper root znode and cluster-id as JDBC parameters

2015-06-18 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1315.
--
Resolution: Cannot Reproduce

I can no longer reproduce this. I successfully connected to two different 
clusters with different cluster ids from a client that had only sqlline 
installed (drill-override.conf had entries with different cluster ids). I also 
connected to cluster 2 from sqlline on cluster 1.


> Allow specifying Zookeeper root znode and cluster-id as JDBC parameters
> ---
>
> Key: DRILL-1315
> URL: https://issues.apache.org/jira/browse/DRILL-1315
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 0.5.0
>Reporter: Aditya Kishore
>Assignee: Parth Chandra
> Fix For: 1.1.0
>
>
> Currently there is no way to specify a different root z-node and cluster-id 
> to the Drill JDBC driver, and it always attempts to connect using the default 
> values (unless there is a {{drill-override.conf}} with the correct values 
> included early in the classpath).





[jira] [Resolved] (DRILL-2416) Zookeeper in sqlline connection string does not override the entry from drill-override.conf

2015-06-18 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2416.
--
Resolution: Cannot Reproduce

See DRILL-1315 as well. Please reopen if this issue recurs.

> Zookeeper in sqlline connection string does not override the entry from 
> drill-override.conf 
> 
>
> Key: DRILL-2416
> URL: https://issues.apache.org/jira/browse/DRILL-2416
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - CLI
>Affects Versions: 0.8.0
>Reporter: Krystal
>Assignee: Parth Chandra
> Fix For: 1.1.0
>
>
> git.commit.id=f658a3c513ddf7f2d1b0ad7aa1f3f65049a594fe
> On the sqlline JDBC connection string, I changed the zookeeper IP to point to 
> another cluster; however, sqlline kept connecting to the drillbits specified 
> in drill-override.conf. After I updated drill-override.conf with the other 
> zookeeper information, I was able to connect successfully to the drillbits on 
> the remote cluster.





[jira] [Created] (DRILL-3376) Reading individual files created by CTAS with partition causes an exception

2015-06-25 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3376:


 Summary: Reading individual files created by CTAS with partition 
causes an exception
 Key: DRILL-3376
 URL: https://issues.apache.org/jira/browse/DRILL-3376
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Writer
Affects Versions: 1.1.0
Reporter: Parth Chandra
Assignee: Steven Phillips
 Fix For: 1.1.0


Create a table using CTAS with partitioning:

{code}
create table `lineitem_part` partition by (l_moddate) as select l.*, l_shipdate 
- extract(day from l_shipdate) + 1 l_moddate from cp.`tpch/lineitem.parquet` l
{code}

Then the following query causes an exception
{code}
select distinct l_moddate from `lineitem_part/0_0_1.parquet` where l_moddate = 
date '1992-01-01';
{code}


Trace in the log file - 
{panel}
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of 
range: 0
at java.lang.String.charAt(String.java:658) ~[na:1.7.0_65]
at 
org.apache.drill.exec.planner.logical.partition.PruneScanRule$PathPartition.<init>(PruneScanRule.java:493)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.logical.partition.PruneScanRule.doOnMatch(PruneScanRule.java:385)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.logical.partition.PruneScanRule$4.onMatch(PruneScanRule.java:278)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:228)
 ~[calcite-core-1.1.0-drill-r9.jar:1.1.0-drill-r9]
... 13 common frames omitted
{panel}





[jira] [Created] (DRILL-3382) CTAS with order by clause fails with IOOB exception

2015-06-25 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3382:


 Summary: CTAS with order by clause fails with IOOB exception
 Key: DRILL-3382
 URL: https://issues.apache.org/jira/browse/DRILL-3382
 Project: Apache Drill
  Issue Type: Bug
  Components: SQL Parser
Affects Versions: 1.1.0
Reporter: Parth Chandra
Assignee: Aman Sinha
Priority: Critical


The query :
{panel}
create table `lineitem__5`  as select l_suppkey, l_partkey, l_linenumber from 
cp.`tpch/lineitem.parquet` l order by l_linenumber;
{panel}

fails with an IOOB exception

Trace in log - 

{panel}
[Error Id: 3351dcf3-032f-4d10-b2a4-c42959d0c06a on localhost:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
IndexOutOfBoundsException: index (2) must be less than size (2)


[Error Id: 3351dcf3-032f-4d10-b2a4-c42959d0c06a on localhost:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:523)
 ~[drill-common-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:737)
 [drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:839)
 [drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:781)
 [drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:73) 
[drill-common-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman$StateSwitch.moveToState(Foreman.java:783)
 [drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:892) 
[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:253) 
[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_65]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
exception during fragment initialization: Internal error: Error while applying 
rule ExpandConversionRule, args 
[rel#9013:AbstractConverter.PHYSICAL.SINGLETON([]).[2](input=rel#9011:Subset#7.PHYSICAL.ANY([]).[2],convention=PHYSICAL,DrillDistributionTraitDef=SINGLETON([]),sort=[2])]
... 4 common frames omitted
Caused by: java.lang.AssertionError: Internal error: Error while applying rule 
ExpandConversionRule, args 
[rel#9013:AbstractConverter.PHYSICAL.SINGLETON([]).[2](input=rel#9011:Subset#7.PHYSICAL.ANY([]).[2],convention=PHYSICAL,DrillDistributionTraitDef=SINGLETON([]),sort=[2])]
at org.apache.calcite.util.Util.newInternal(Util.java:790) 
~[calcite-core-1.1.0-drill-r9.jar:1.1.0-drill-r9]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:251)
 ~[calcite-core-1.1.0-drill-r9.jar:1.1.0-drill-r9]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:795)
 ~[calcite-core-1.1.0-drill-r9.jar:1.1.0-drill-r9]
at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:303) 
~[calcite-core-1.1.0-drill-r9.jar:1.1.0-drill-r9]
at org.apache.calcite.prepare.PlannerImpl.transform(PlannerImpl.java:316) 
~[calcite-core-1.1.0-drill-r9.jar:1.1.0-drill-r9]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:260)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.handlers.CreateTableHandler.convertToPrel(CreateTableHandler.java:120)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.handlers.CreateTableHandler.getPlan(CreateTableHandler.java:99)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:178)
 ~[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:903) 
[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:242) 
[drill-java-exec-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
... 3 common frames omitted
Caused by: java.lang.IndexOutOfBoundsException: index (2) must be less than 
size (2)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:305) 
~[guava-14.0.1.jar:na]
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:284) 
~[guava-14.0.1.jar:na]
at 
com.google.common.collect.RegularImmutableList.get(RegularImmutableList.java:81)
 ~[guava-14.0.1.

[jira] [Resolved] (DRILL-3199) GenericAccessor doesn't support isNull

2015-06-29 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-3199.
--
Resolution: Fixed

Resolved in commit: 6ad3577

> GenericAccessor doesn't support isNull
> --
>
> Key: DRILL-3199
> URL: https://issues.apache.org/jira/browse/DRILL-3199
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
> Environment: I found this problem when calling the driver's wasNull() 
> method on a field that represented a nested JSON object (one level below the 
> root object), using the 'dfs' storage plugin and pointing at my local 
> filesystem.
>Reporter: Matt Burgess
>Assignee: Parth Chandra
> Fix For: 1.1.0
>
> Attachments: DRILL-3199.patch.1, DRILL-3199.patch.2, 
> DRILL-3199.patch.3, DRILL-3199.patch.4
>
>
> GenericAccessor throws an UnsupportedOperationException when isNull() is 
> called. However for other methods it delegates to its ValueVector's accessor. 
> I think it should do the same for isNull().
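The suggested delegation, sketched with simplified hypothetical types (not Drill's actual accessor hierarchy):

```java
// Hypothetical, simplified types: isNull() delegates to the underlying
// accessor exactly like getObject() does, instead of throwing
// UnsupportedOperationException.
class GenericAccessorSketch {
    interface UnderlyingAccessor {
        Object getObject(int row);
        boolean isNull(int row);
    }

    private final UnderlyingAccessor underlying;

    GenericAccessorSketch(UnderlyingAccessor u) { this.underlying = u; }

    Object getObject(int row) { return underlying.getObject(row); }

    // The fix: the same delegation pattern as the other methods.
    boolean isNull(int row) { return underlying.isNull(row); }
}
```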





[jira] [Created] (DRILL-3428) Errors during text file reading should provide the file name in the error message

2015-06-30 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3428:


 Summary: Errors during text file reading should provide the file 
name in the error message
 Key: DRILL-3428
 URL: https://issues.apache.org/jira/browse/DRILL-3428
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Text & CSV
Affects Versions: 1.0.0
Reporter: Parth Chandra
Assignee: Steven Phillips
 Fix For: 1.2.0


If there is an exception during the reading of a text file, Drill prints an 
error message like:
...TextParsingException: Error processing input: Cannot use newline
character within quoted string, line=37, char=8855. Content parsed: [ ]

which does not include the name of the file. If there are thousands of files 
being read, printing the filename would help identify the problem.
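A minimal sketch of the requested change (hypothetical helper name, not the actual reader code): wrap the parsing error so the file path appears in the message.

```java
// Hypothetical helper: rethrow a parsing failure with the file path in the
// message, so the failing file can be identified among thousands of inputs.
class TextReaderErrors {
    static RuntimeException withFileContext(String path, RuntimeException cause) {
        return new RuntimeException(
                "Error processing input file " + path + ": " + cause.getMessage(),
                cause);
    }
}
```

Keeping the original exception as the cause preserves the full stack trace while the wrapped message carries the path.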





[jira] [Resolved] (DRILL-3016) Can't set Storage of Cloudera 5.4 HDFS with Apache Drill

2015-07-01 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-3016.
--
Resolution: Not A Problem

This doesn't look like a bug. Please continue the discussion on the drill-user 
list if you still need help.

> Can't set Storage of Cloudera 5.4 HDFS with Apache Drill
> 
>
> Key: DRILL-3016
> URL: https://issues.apache.org/jira/browse/DRILL-3016
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 0.9.0
>Reporter: Murtaza
>Assignee: Jacques Nadeau
>Priority: Blocker
> Attachments: sqlline.log
>
>
> I have been struggling to configure HDFS storage for my single-node Cloudera 
> CDH 5.3/5.4 cluster. Below is my storage configuration; beyond this I have not 
> done anything. Please suggest which steps are missing.
> {code}
> {
>   "type": "file",
>   "enabled": true,
>   "connection": "hdfs://demo.gethue.com:8020/",
>   "workspaces": {
> "root": {
>   "location": "/user/4qgbmrt",
>   "writable": false,
>   "defaultInputFormat": null
> }
>   },
>   "formats": {
> "csv": {
>   "type": "text",
>   "extensions": [
> "csv"
>   ],
>   "delimiter": ","
> }
>   }
> }
> {code}





[jira] [Resolved] (DRILL-3023) Drill ODBC Connection not working - attached logs in the details

2015-07-01 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-3023.
--
Resolution: Not A Problem

Closing as this appears to be a version mismatch issue. 

> Drill ODBC Connection not working - attached logs in the details
> 
>
> Key: DRILL-3023
> URL: https://issues.apache.org/jira/browse/DRILL-3023
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - ODBC
>Affects Versions: 0.9.0
>Reporter: Murtaza
>Assignee: Parth Chandra
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> I am getting the error below in the sqlline log file.
> Apache Drill 0.9 is installed on 64-bit CentOS with embedded drillbits. I am 
> trying to connect via ODBC from a Windows machine after installing both the 
> 32-bit and 64-bit ODBC drivers; both produce the same error.
> 2015-05-11 14:47:29,652 [UserServer-1] ERROR 
> o.a.drill.exec.rpc.user.UserServer - Error 
> 2845b981-804a-44d4-a9f1-88866f778c26 in Handling handshake request: 
> RPC_VERSION_MISMATCH, Invalid rpc version. Expected 5, actual 3.
> 2015-05-11 14:47:29,653 [UserServer-1] ERROR 
> o.a.d.exec.rpc.RpcExceptionHandler - Exception in pipeline.  Closing channel 
> between local /172.16.98.101:31010 and remote /10.16.17.114:61346
> io.netty.handler.codec.DecoderException: 
> org.apache.drill.exec.rpc.RpcException: Handshake request failed: Invalid rpc 
> version. Expected 5, actual 3.
> at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99)
>  [netty-codec-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>  [netty-codec-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:161)
>  [netty-codec-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) 
> [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>  [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
> [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
> [netty-transport-4.0.24.Final.jar:4.0.24.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>  [netty-common-4.0.24.Final.jar:4.0.24.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
> Caused by: org.apache.drill.exec.rpc.RpcException: Handshake request failed: 
> Invalid rpc version. Expected 5, actual 3.
> at 
> org.apache.drill.exec.rpc.user.UserServer$1.consumeHandshake(UserServer.java:181)
>  ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> 

[jira] [Resolved] (DRILL-2482) JDBC : calling getObject when the actual column type is 'NVARCHAR' results in NoClassDefFoundError

2015-07-01 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2482.
--
Resolution: Fixed

Closing per Daniel's comment.

> JDBC : calling getObject when the actual column type is 'NVARCHAR' results in 
> NoClassDefFoundError
> --
>
> Key: DRILL-2482
> URL: https://issues.apache.org/jira/browse/DRILL-2482
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Reporter: Rahul Challapalli
>Assignee: Rahul Challapalli
> Fix For: 1.2.0
>
>
> git.commit.id.abbrev=7b4c887
> I tried to call getObject(i) on a column which is of type varchar, and Drill 
> failed with the below error:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/io/Text
>   at 
> org.apache.drill.exec.vector.VarCharVector$Accessor.getObject(VarCharVector.java:407)
>   at 
> org.apache.drill.exec.vector.NullableVarCharVector$Accessor.getObject(NullableVarCharVector.java:386)
>   at 
> org.apache.drill.exec.vector.accessor.NullableVarCharAccessor.getObject(NullableVarCharAccessor.java:98)
>   at 
> org.apache.drill.exec.vector.accessor.BoundCheckingAccessor.getObject(BoundCheckingAccessor.java:137)
>   at 
> org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
>   at 
> net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
>   at Dummy.testComplexQuery(Dummy.java:94)
>   at Dummy.main(Dummy.java:30)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.Text
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 8 more
> {code}
> When the underlying type is a primitive, the getObject call succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (DRILL-1831) LIKE operator not working with SQL [charlist] Wildcard

2015-07-01 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1831.
--
Resolution: Won't Fix

Resolving on the assumption that a workaround is available.

> LIKE operator not working with SQL [charlist] Wildcard
> --
>
> Key: DRILL-1831
> URL: https://issues.apache.org/jira/browse/DRILL-1831
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Reporter: Rahul Challapalli
>Assignee: Rahul Challapalli
> Fix For: 1.2.0
>
>
> git.commit.id.abbrev=142e577
> Data :
> {code}
> abc
> def
> acf
> fgh
> qjh
> {code}
> The below query works fine
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDir> select columns[0] from 
> `data-shapes/temp.tbl` where columns[0] like '%b%';
> ++
> |   EXPR$0   |
> ++
> | abc|
> ++
> 1 row selected (0.122 seconds)
> {code}
> However the below query does not return anything
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDir> select columns[0] from 
> `data-shapes/temp.tbl` where columns[0] like '[aq]%';
> ++
> |   EXPR$0   |
> ++
> ++
> No rows selected (0.139 seconds)
> {code}
> The above query should return values which start with either 'a' or 'q'.
> Text Plan :
> {code}
> 00-00Screen
> 00-01  Project(EXPR$0=[$0])
> 00-02SelectionVectorRemover
> 00-03  Filter(condition=[LIKE($0, '[aq]%')])
> 00-04Project(ITEM=[ITEM($0, 0)])
> 00-05  Scan(groupscan=[EasyGroupScan 
> [selectionRoot=/drill/testdata/data-shapes/temp.tbl, numFiles=1, 
> columns=[`columns`[0]], files=[maprfs:/drill/testdata/data-shapes/temp.tbl]]])
> {code}
> Let me know if you need anything else
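The {{[charlist]}} wildcard is a T-SQL/MS Access extension; standard SQL LIKE (which Drill follows) defines only {{%}} and {{_}}, so a regex-based predicate is the usual workaround. A minimal Java sketch of the equivalent match, using the sample data and pattern from the report above (the translation {{'[aq]%'}} → {{^[aq].*}} is mine, not from the issue):

```java
import java.util.regex.Pattern;

public class CharlistLike {
    public static void main(String[] args) {
        String[] rows = {"abc", "def", "acf", "fgh", "qjh"};
        // LIKE '[aq]%' in the T-SQL dialect corresponds to the regex ^[aq].*
        Pattern p = Pattern.compile("^[aq].*");
        for (String row : rows) {
            if (p.matcher(row).matches()) {
                System.out.println(row); // prints abc, acf, qjh
            }
        }
    }
}
```

In Drill itself a SIMILAR TO or regex-matching predicate can express the same filter; the exact function name is version-dependent, so treat the regex above as the portable form.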





[jira] [Resolved] (DRILL-2935) Casting varchar to varbinary fails

2015-07-01 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2935.
--
Resolution: Won't Fix

> Casting varchar to varbinary fails
> --
>
> Key: DRILL-2935
> URL: https://issues.apache.org/jira/browse/DRILL-2935
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Reporter: Rahul Challapalli
>Assignee: Rahul Challapalli
> Fix For: 1.2.0
>
> Attachments: error.log
>
>
> git.commit.id.abbrev=5fbd274
> The below query fails :
> {code}
> select concat(cast(cast('apache ' as varchar(7)) as varbinary(20)), 'drill') 
> from `dummy.json`;
> Query failed: PARSE ERROR: From line 1, column 15 to line 1, column 66: Cast 
> function cannot convert value of type VARCHAR(7) to type VARBINARY(20)
> [4b5916d1-6b96-42a0-9afa-80706f2bd263 on qa-node191.qa.lab:31010]
> Error: exception while executing query: Failure while executing query. 
> (state=,code=0)
> {code}
> I attached the error log. 





[jira] [Created] (DRILL-3550) Incorrect results reading complex data with schema change

2015-07-23 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3550:


 Summary: Incorrect results reading complex data with schema change
 Key: DRILL-3550
 URL: https://issues.apache.org/jira/browse/DRILL-3550
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Affects Versions: 1.1.0
Reporter: Parth Chandra
Assignee: Hanifi Gunes
Priority: Critical
 Fix For: 1.2.0


Given the data : 

{"some":"yes","others":{"other":"true","all":"false","sometimes":"yes"}}
{"some":"yes","others":{"other":"true","all":"false","sometimes":"yes","additional":"last entries only"}}

The query 

select `some`, t.others, t.others.additional from `test.json` t;

 produces incorrect results - 

| yes  | {"additional":"last entries only"}  | last entries only  |

instead of 

| yes  | {"other":"true","all":"false","sometimes":"yes","additional":"last 
entries only"}  | last entries only  |






[jira] [Created] (DRILL-3551) CTAS from complex Json source with schema change is not written (and hence not read back ) correctly

2015-07-23 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3551:


 Summary: CTAS from complex Json source with schema change is not 
written (and hence not read back) correctly
 Key: DRILL-3551
 URL: https://issues.apache.org/jira/browse/DRILL-3551
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Affects Versions: 1.1.0
Reporter: Parth Chandra
Assignee: Hanifi Gunes
Priority: Critical
 Fix For: 1.2.0


The source data contains - 

20K rows with the following - 
{"some":"yes","others":{"other":"true","all":"false","sometimes":"yes"}}   

200 rows with the following - 
{"some":"yes","others":{"other":"true","all":"false","sometimes":"yes","additional":"last entries only"}}

Creating a table and reading it back returns incorrect data - 

CREATE TABLE testparquet as select * from `test.json`;
SELECT * from testparquet;

Yields 

| yes  | {"other":"true","all":"false","sometimes":"yes"}  |
| yes  | {"other":"true","all":"false","sometimes":"yes"}  |
| yes  | {"other":"true","all":"false","sometimes":"yes"}  |
| yes  | {"other":"true","all":"false","sometimes":"yes"}  |

The "additional" field is missing in all records

Parquet metadata for the created file does not have the 'additional' field 





[jira] [Resolved] (DRILL-3141) sqlline throws an exception when query is cancelled

2015-08-10 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-3141.
--
Resolution: Cannot Reproduce

Cannot reproduce this at all. Resolving this issue, as it may have been fixed by 
the concurrency fixes that went into 1.1.
If this recurs, please reopen and grab the server log, since the error occurred 
on the server.

> sqlline throws an exception when query is cancelled
> ---
>
> Key: DRILL-3141
> URL: https://issues.apache.org/jira/browse/DRILL-3141
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - CLI
>Affects Versions: 1.0.0
>Reporter: Victoria Markman
>Assignee: Parth Chandra
> Fix For: 1.2.0
>
>
> I cancelled query and got a sqlline exception. Was able to reproduce it only 
> twice. Don't have a reliable reproduction.
> {code}
> 0: jdbc:drill:schema=dfs> select * from web_sales ws, web_returns wr where 
> ws.ws_customer_sk = wr.wr_customer_sk limit 1;
> java.lang.RuntimeException: java.sql.SQLException: Unexpected 
> RuntimeException: java.util.ConcurrentModificationException
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at 
> sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:92)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:127)
> at sqlline.SqlLine.print(SqlLine.java:1583)
> at sqlline.Commands.execute(Commands.java:852)
> at sqlline.Commands.sql(Commands.java:751)
> at sqlline.SqlLine.dispatch(SqlLine.java:738)
> at sqlline.SqlLine.begin(SqlLine.java:612)
> at sqlline.SqlLine.start(SqlLine.java:366)
> at sqlline.SqlLine.main(SqlLine.java:259)
> {code}





[jira] [Resolved] (DRILL-2737) Sqlline throws Runtime exception when JDBC ResultSet throws a SQLException

2015-08-20 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2737.
--
Resolution: Fixed

Fixed in commit 718abac

> Sqlline throws Runtime exception when JDBC ResultSet throws a SQLException
> --
>
> Key: DRILL-2737
> URL: https://issues.apache.org/jira/browse/DRILL-2737
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - CLI
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 1.2.0
>
> Attachments: DRILL-2737.patch
>
>
> This is a tracking bug to provide a patch to Sqlline.





[jira] [Created] (DRILL-3711) Windows unit test failure on 1.2 snapshot

2015-08-25 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3711:


 Summary: Windows unit test failure on 1.2 snapshot 
 Key: DRILL-3711
 URL: https://issues.apache.org/jira/browse/DRILL-3711
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 1.2.0


Windows unit tests are failing with the following error: 

org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
PatternSyntaxException: Unexpected internal error near index 1
\
 ^

The underlying exception is - 

Caused by: java.util.regex.PatternSyntaxException: Unexpected internal error 
near index 1
\
 ^
at java.util.regex.Pattern.error(Pattern.java:1924) ~[na:1.7.0_79]
at java.util.regex.Pattern.compile(Pattern.java:1671) ~[na:1.7.0_79]
at java.util.regex.Pattern.<init>(Pattern.java:1337) ~[na:1.7.0_79]
at java.util.regex.Pattern.compile(Pattern.java:1022) ~[na:1.7.0_79]
at java.lang.String.split(String.java:2313) ~[na:1.7.0_79]
at java.lang.String.split(String.java:2355) ~[na:1.7.0_79]
at 
org.apache.drill.exec.planner.sql.HivePartitionLocation.<init>(HivePartitionLocation.java:37)
 ~[classes/:1.2.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.HivePartitionDescriptor.getPartitions(HivePartitionDescriptor.java:118)
 ~[classes/:1.2.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.logical.partition.PruneScanRule.doOnMatch(PruneScanRule.java:184)
 ~[drill-java-exec-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2.onMatch(HivePushPartitionFilterIntoScan.java:92)
 ~[classes/:na]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:228)
 ~[calcite-core-1.1.0-drill-r16.jar:1.1.0-drill-r16]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:795)
 ~[calcite-core-1.1.0-drill-r16.jar:1.1.0-drill-r16]
at 
org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:303) 
~[calcite-core-1.1.0-drill-r16.jar:1.1.0-drill-r16]
at 
org.apache.calcite.prepare.PlannerImpl.transform(PlannerImpl.java:316) 
~[calcite-core-1.1.0-drill-r16.jar:1.1.0-drill-r16]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:262)
 ~[drill-java-exec-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.handlers.ExplainHandler.getPlan(ExplainHandler.java:69)
 ~[drill-java-exec-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:178)
 ~[drill-java-exec-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:903) 
[drill-java-exec-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:242) 
[drill-java-exec-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_79]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
... 11 more
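The trace points at String.split inside the HivePartitionLocation constructor. String.split takes a regular expression, and a bare Windows separator "\" is not a valid pattern on its own, which produces exactly the PatternSyntaxException above. The following is an illustration of the failure mode and the usual fix (quoting the separator), offered as my reading of the trace rather than the committed patch:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class SplitOnSeparator {
    public static void main(String[] args) {
        String path = "a\\b\\c"; // a Windows-style path: a\b\c
        try {
            path.split("\\");    // a lone "\" is an invalid regex
        } catch (PatternSyntaxException e) {
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
        // Fix: quote the separator so it is treated as a literal string.
        String[] parts = path.split(Pattern.quote("\\"));
        System.out.println(parts.length); // 3
    }
}
```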






[jira] [Resolved] (DRILL-3578) UnsupportedOperationException: Unable to get value vector class for minor type [FIXEDBINARY] and mode [OPTIONAL]

2015-09-29 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-3578.
--
Resolution: Fixed

Resolved in DRILL-2908

> UnsupportedOperationException: Unable to get value vector class for minor 
> type [FIXEDBINARY] and mode [OPTIONAL]
> 
>
> Key: DRILL-3578
> URL: https://issues.apache.org/jira/browse/DRILL-3578
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.1.0
>Reporter: Hao Zhu
>Assignee: Parth Chandra
>Priority: Critical
> Fix For: 1.3.0
>
>
> The issue is that Drill fails to read the "timestamp" type in parquet files 
> generated by Hive.
> How to reproduce:
> 1. Create a external Hive CSV table in hive 1.0:
> {code}
> create external table type_test_csv
> (
>   id1 int,
>   id2 string,
>   id3 timestamp,
>   id4 double
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> STORED AS TEXTFILE
> LOCATION '/xxx/testcsv';
> {code}
> 2. Put sample data for above external table:
> {code}
> 1,One,2015-01-01 00:01:00,1.0
> 2,Two,2015-01-02 00:02:00,2.0
> {code}
> 3. Create a parquet hive table:
> {code}
> create external table type_test
> (
>   id1 int,
>   id2 string,
>   id3 timestamp,
>   id4 double
> )
> STORED AS PARQUET
> LOCATION '/xxx/type_test';
> INSERT OVERWRITE TABLE type_test
>   SELECT * FROM type_test_csv;
> {code}
> 4. Then querying the parquet file directly through filesystem storage plugin:
> {code}
> > select * from dfs.`xxx/type_test`;
> Error: SYSTEM ERROR: UnsupportedOperationException: Unable to get value 
> vector class for minor type [FIXEDBINARY] and mode [OPTIONAL]
> Fragment 0:0
> [Error Id: fccfe8b2-6427-46e5-8bfd-cac639e526e8 on h3.poc.com:31010] 
> (state=,code=0)
> {code}
> 5. If the sample data is only 1 row:
> {code}
> 1,One,2015-01-01 00:01:00,1.0
> {code}
> Then the error message would become:
> {code}
> > select * from dfs.`xxx/type_test`;
> Error: SYSTEM ERROR: UnsupportedOperationException: Unsupported type:INT96
> [Error Id: b52b5d46-63a8-4be6-a11d-999a1b46c7c2 on h3.poc.com:31010] 
> (state=,code=0)
> {code}
> Using Hive storage plugin works fine. This issue only applies to filesystem 
> storage plugin.
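Background for the FIXEDBINARY/INT96 error above: Hive writes timestamps as INT96, a 12-byte value laid out as nanoseconds-of-day (little-endian int64) followed by a Julian day number (little-endian int32). A hedged decoding sketch, not Drill's actual reader code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Int96Timestamp {
    // Julian day number of the Unix epoch, 1970-01-01.
    static final long JULIAN_DAY_OF_EPOCH = 2440588L;

    static long toEpochMillis(byte[] int96) {
        ByteBuffer buf = ByteBuffer.wrap(int96).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();            // first 8 bytes
        long julianDay = buf.getInt() & 0xFFFFFFFFL; // last 4 bytes, unsigned
        return (julianDay - JULIAN_DAY_OF_EPOCH) * 86_400_000L
                + nanosOfDay / 1_000_000L;
    }

    public static void main(String[] args) {
        // Midnight on the epoch day encodes as nanos=0, julianDay=2440588.
        byte[] epoch = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN)
                .putLong(0L).putInt(2440588).array();
        System.out.println(toEpochMillis(epoch)); // 0, i.e. 1970-01-01T00:00:00Z
    }
}
```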





[jira] [Created] (DRILL-3853) Get off Sqlline fork

2015-09-29 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-3853:


 Summary: Get off Sqlline fork
 Key: DRILL-3853
 URL: https://issues.apache.org/jira/browse/DRILL-3853
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


Drill has its own forked version of sqlline that includes customizations for 
displaying the Drill version and Drill QOTD, removing names of unsupported 
commands, and removing JDBC drivers not shipped with Drill.

To get off the fork, we need to parameterize these features in sqlline and have 
them driven from a properties file. The changes should be merged back into 
sqlline and Drill packaging should then provide a properties file to customize 
the stock sqlline distribution.








[jira] [Created] (DRILL-4053) Reduce metadata cache file size

2015-11-09 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4053:


 Summary: Reduce metadata cache file size
 Key: DRILL-4053
 URL: https://issues.apache.org/jira/browse/DRILL-4053
 Project: Apache Drill
  Issue Type: Improvement
  Components: Metadata
Affects Versions: 1.3.0
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 1.4.0


The parquet metadata cache file has a fair amount of redundant metadata that 
causes the size of the cache file to bloat. Two things we can reduce:
1) The schema is repeated for every row group. We can keep a merged schema 
(similar to what was discussed for the insert-into functionality).
2) The max and min values in the stats are used for partition pruning only when 
the values are the same. We can keep maxValue alone, and only when it equals 
minValue.
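The second idea can be sketched as follows, with hypothetical names (the real cache writer differs): serialize mxValue only when it equals the minimum, since that is the only case partition pruning can use.

```java
import java.util.Objects;

public class StatsCompaction {
    // Returns the value to store in the cache, or null to omit the field
    // entirely (ranges with min != max are useless for pruning).
    static Object compactMax(Object minValue, Object maxValue) {
        return Objects.equals(minValue, maxValue) ? maxValue : null;
    }

    public static void main(String[] args) {
        System.out.println(compactMax(68797.22, 68797.22)); // single-valued: kept
        System.out.println(compactMax(1.0, 2.0));           // range: omitted (null)
    }
}
```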






[jira] [Created] (DRILL-4152) Add additional logging and metrics to the Parquet reader

2015-12-02 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4152:


 Summary: Add additional logging and metrics to the Parquet reader
 Key: DRILL-4152
 URL: https://issues.apache.org/jira/browse/DRILL-4152
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet
Reporter: Parth Chandra
Assignee: Parth Chandra


In some cases, we see the Parquet reader as the bottleneck in reading from the 
file system. RWSpeedTest is able to read 10x faster than the Parquet reader so 
reading from disk is not the issue. This issue is to add more instrumentation 
to the Parquet reader so speed bottlenecks can be better diagnosed.





[jira] [Resolved] (DRILL-4154) Metadata Caching : Upgrading cache to v2 from v1 corrupts the cache in some scenarios

2015-12-08 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4154.
--
Resolution: Fixed

Updated the migration tool.

> Metadata Caching : Upgrading cache to v2 from v1 corrupts the cache in some 
> scenarios
> -
>
> Key: DRILL-4154
> URL: https://issues.apache.org/jira/browse/DRILL-4154
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Rahul Challapalli
>Assignee: Parth Chandra
>Priority: Critical
> Attachments: broken-cache.txt, fewtypes_varcharpartition.tar.tgz, 
> old-cache.txt
>
>
> git.commit.id.abbrev=46c47a2
> I copied the data along with the cache file onto maprfs. Now I ran the 
> upgrade tool (https://github.com/parthchandra/drill-upgrade). Now I ran the 
> metadata_caching suite from the functional tests (concurrency 10) without the 
> datagen phase. I see 3 test failures and when I looked at the cache file it 
> seems to be containing wrong information for the varchar column. 
> Sample from the cache :
> {code}
>   {
> "name" : [ "varchar_col" ]
>   }, {
> "name" : [ "float_col" ],
> "mxValue" : 68797.22,
> "nulls" : 0
>   }
> {code}
> Now I followed the same steps and instead of running the suites I executed 
> the "REFRESH TABLE METADATA" command or any query on that folder,  the cache 
> file seems to be created properly
> I attached the data and cache files required. Let me know if you need anything





[jira] [Created] (DRILL-5004) Parquet date correction gives null pointer exception if there is no createdBy entry in the metadata

2016-11-04 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5004:


 Summary: Parquet date correction gives null pointer exception if 
there is no createdBy entry in the metadata
 Key: DRILL-5004
 URL: https://issues.apache.org/jira/browse/DRILL-5004
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


If the Parquet metadata does not contain a createdBy entry, the date corruption 
detection code gives an NPE.
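A sketch of the null guard (all names are hypothetical, and whether a missing createdBy should be treated as potentially corrupt is a policy choice this issue does not specify): the corruption check parses the createdBy string, so a missing entry must short-circuit before parsing.

```java
public class CreatedByGuard {
    // Hypothetical guard: decide whether the date-corruption check applies.
    static boolean needsDateCorruptionCheck(String createdBy) {
        if (createdBy == null) {
            return true; // no writer info: stay conservative instead of throwing NPE
        }
        // Writers known to have written dates with the old, corrupt encoding.
        return createdBy.startsWith("parquet-mr");
    }

    public static void main(String[] args) {
        System.out.println(needsDateCorruptionCheck(null));                       // true, no NPE
        System.out.println(needsDateCorruptionCheck("parquet-mr version 1.8.1")); // true
        System.out.println(needsDateCorruptionCheck("drill version 1.9.0"));      // false
    }
}
```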
 





[jira] [Resolved] (DRILL-5004) Parquet date correction gives null pointer exception if there is no createdBy entry in the metadata

2016-11-04 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5004.
--
Resolution: Fixed

Fixed in a459e4d

> Parquet date correction gives null pointer exception if there is no createdBy 
> entry in the metadata
> ---
>
> Key: DRILL-5004
> URL: https://issues.apache.org/jira/browse/DRILL-5004
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Parth Chandra
>Assignee: Vitalii Diravka
> Attachments: DRILL-5004.parquet
>
>
> If the Parquet metadata does not contain a createdBy entry, the date 
> corruption detection code gives a NPE
>  





[jira] [Resolved] (DRILL-4800) Improve parquet reader performance

2016-11-04 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4800.
--
Resolution: Fixed

Fixed in fe2334e, f9a443d, 7f5acf8, ee3489c

> Improve parquet reader performance
> --
>
> Key: DRILL-4800
> URL: https://issues.apache.org/jira/browse/DRILL-4800
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>Assignee: Parth Chandra
>  Labels: doc-impacting
>
> Reported by a user in the field - 
> We're generally getting read speeds of about 100-150 MB/s/node on PARQUET 
> scan operator. This seems a little low given the number of drives on the node 
> - 24. We're looking for options we can improve the performance of this 
> operator as most of our queries are I/O bound. 





[jira] [Created] (DRILL-5050) C++ client library has symbol resolution issues when loaded by a process that already uses boost::asio

2016-11-17 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5050:


 Summary: C++ client library has symbol resolution issues when 
loaded by a process that already uses boost::asio
 Key: DRILL-5050
 URL: https://issues.apache.org/jira/browse/DRILL-5050
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - C++
Affects Versions: 1.6.0
 Environment: MacOs
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 2.0.0


h4. Summary

On MacOS, the Drill ODBC driver hangs when loaded by any process that might 
also be using {{boost::asio}}. This is observed in trying to connect to Drill 
via the ODBC driver using Tableau.


h4. Analysis
The problem is seen in the Drill client library on MacOS, in the method 
{{DrillClientImpl::recvHandshake}}:
{code}
.
.
m_io_service.reset();
if (DrillClientConfig::getHandshakeTimeout() > 0){
    m_deadlineTimer.expires_from_now(boost::posix_time::seconds(DrillClientConfig::getHandshakeTimeout()));
    m_deadlineTimer.async_wait(boost::bind(
        &DrillClientImpl::handleHShakeReadTimeout,
        this,
        boost::asio::placeholders::error
        ));
    DRILL_MT_LOG(DRILL_LOG(LOG_TRACE) << "Started new handshake wait timer with "
        << DrillClientConfig::getHandshakeTimeout() << " seconds." << std::endl;)
}

async_read(
    this->m_socket,
    boost::asio::buffer(m_rbuf, LEN_PREFIX_BUFLEN),
    boost::bind(
        &DrillClientImpl::handleHandshake,
        this,
        m_rbuf,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred)
    );
DRILL_MT_LOG(DRILL_LOG(LOG_DEBUG) << "DrillClientImpl::recvHandshake: async read waiting for server handshake response.\n";)
m_io_service.run();
.
.
{code}

The call to {{io_service::run}} returns without invoking any of the handlers 
that have been registered. The {{io_service}} object has two tasks in its 
queue, the timer task, and the socket read task. However, in the run method, 
the state of the {{io_service}} object appears to change and the number of 
outstanding tasks becomes zero. The run method therefore returns immediately. 
Subsequently, any query request sent to the server hangs as data is never 
pulled off the socket.

This is bizarre behaviour and typically points to build problems. 

More investigation revealed a more interesting thing. {{boost::asio}} is a 
header only library. In other words, there is no actual library 
{{libboost_asio}}. All the code is included into the binary that includes the 
headers of {{boost::asio}}. It so happens that the Tableau process has a 
library (libtabquery) that uses {{boost::asio}} so the code for {{boost::asio}} 
is already loaded into process memory. When the drill client library (via the 
ODBC driver) is loaded by the loader, the drill client library loads its own 
copy of the {{boost::asio}} code. At runtime, the drill client code jumps to an 
address that resolves to an address inside the libtabquery copy of 
{{boost::asio}}. And that code returns incorrectly.

Really? How is that even allowed? Two copies of {{boost::asio}} in the same 
process? Even if that is allowed, since the code is included at compile time, 
calls to the {{boost::asio}} library should be resolved using internal linkage. 
And if the call to {{boost::asio}} is not resolved statically, the dynamic 
loader would encounter two symbols with the same name and would give us an 
error. And even if the linker picks one of the symbols, as long as the code is 
the same (for example if both libraries use the same version of boost) can that 
cause a problem? Even more importantly, how do we fix that?

h4. Some assembly required

The disassembled libdrillClient shows this code inside recvHandshake
{code}
0003dd8f    movq    -0xb0(%rbp), %rdi
0003dd96    addq    $0xc0, %rdi
0003dd9d    callq   0x1bff42        ## symbol stub for: __ZN5boost4asio10io_service3runEv
0003dda2    movq    -0xb0(%rbp), %rdi
0003dda9    cmpq    $0x0, 0x190(%rdi)
0003ddb4    movq    %rax, -0x158(%rbp)
{code}

and later in the code 
{code}
00057216    retq
00057217    nopw    (%rax,%rax)
__ZN5boost4asio10io_service3runEv:      ## definition of io_service::run
00057220    pushq   %rbp
00057221    movq    %rsp, %rbp
00057224    subq    $0x30, %rsp
00057228    leaq    -0x18(%rbp), %rax
0005722c    movq    %rdi, -0x8(%rbp)
00057230    movq    -0x8(%rbp), %rdi
00057234    movq    %rdi, -0x28(%rbp)
{code}


Note that in recvHandshake the call instruction jumps to an address that is an 
offset (0x1bff42). This offset happens to be beyond the end of the lib

[jira] [Created] (DRILL-5207) Improve Parquet scan pipelining

2017-01-19 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5207:


 Summary: Improve Parquet scan pipelining
 Key: DRILL-5207
 URL: https://issues.apache.org/jira/browse/DRILL-5207
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet
Affects Versions: 1.9.0
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 1.10


The parquet reader's async page reader is not efficiently pipelined. 
The default size of the disk read buffer is 4MB, while the page reader reads 
~1MB at a time and the Parquet decoder also processes ~1MB at a time. This means 
the disk is idle while the data is being processed. Reducing the buffer to 1MB 
will reduce the time the processing thread waits for the disk read thread.
Additionally, since the data needed to process a page may be more or less than 
1MB, a queue of pages will help: the disk scan then does not block waiting for 
the processing thread until the queue is full.
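The queue idea can be sketched with a bounded BlockingQueue (the names below are illustrative, not the Drill classes): the disk thread reads ahead until the queue fills, while the decode thread drains it, so neither side idles while the other works.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PageQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks once 4 pages are buffered ahead.
        BlockingQueue<byte[]> pages = new ArrayBlockingQueue<>(4);
        Thread reader = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    pages.put(new byte[1 << 20]); // ~1MB "page" read from disk
                }
                pages.put(new byte[0]);           // poison pill: end of row group
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();
        int decoded = 0;
        // Decode each page while the reader fetches the next ones.
        for (byte[] page = pages.take(); page.length > 0; page = pages.take()) {
            decoded++;
        }
        reader.join();
        System.out.println("decoded pages: " + decoded); // decoded pages: 10
    }
}
```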
Additionally, the BufferedDirectBufInputStream class reads from disk as soon as 
it is initialized. Since this happens at setup time, it increases the setup time 
for the query, and query execution does not begin until the read completes.
There are a few other inefficiencies as well: for example, options are read 
every time a page reader is created, and reading options can be expensive.






[jira] [Created] (DRILL-5240) Parquet reader creates unnecessary objects when checking for nullability in var length columns

2017-02-02 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5240:


 Summary: Parquet reader creates unnecessary objects when checking 
for nullability in var length columns
 Key: DRILL-5240
 URL: https://issues.apache.org/jira/browse/DRILL-5240
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet
Reporter: Parth Chandra
Assignee: Parth Chandra


In {{NullableVarLengthValuesColumn.java}} we have the following line of code:
{code}
currentValNull = variableWidthVector.getAccessor().getObject(valuesReadInCurrentPass) == null;
{code}
which creates an object to check nullability of every (nullable var char) value 
being read. This object creation is expensive and can be replaced with a simple 
check for the null bit.
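The null-bit check can be done directly against the validity bitmap. A sketch, assuming the usual one-bit-per-value layout with 1 = set (non-null); the method name and raw byte[] are illustrative, not Drill's vector API:

```java
public class NullBitCheck {
    // index >> 3 selects the byte, index & 7 the bit within it.
    static boolean isNull(byte[] validityBits, int index) {
        return ((validityBits[index >> 3] >> (index & 7)) & 1) == 0;
    }

    public static void main(String[] args) {
        byte[] bits = {0b101};               // values 0 and 2 set (non-null)
        System.out.println(isNull(bits, 0)); // false
        System.out.println(isNull(bits, 1)); // true
        System.out.println(isNull(bits, 2)); // false
    }
}
```

No object is allocated per value, which is the point of the proposed change.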




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (DRILL-5241) JDBC proxy driver: Do not put null value in map

2017-02-07 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5241.
--
Resolution: Fixed

> JDBC proxy driver: Do not put null value in map
> ---
>
> Key: DRILL-5241
> URL: https://issues.apache.org/jira/browse/DRILL-5241
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.9.0
>Reporter: David Haller
>Priority: Minor
>  Labels: ready-to-commit
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> See GitHub pull request: https://github.com/apache/drill/pull/724
> Hello everyone,
> proxyReturnClass is always null, so interfacesToProxyClassesMap will contain 
> null values only. Adding newProxyReturnClass should be correct.
> This bug does not affect functionality, but probably decreases performance 
> because you get "cache misses" all the time.
> Best regards,
> David.





[jira] [Created] (DRILL-5349) TestParquetWriter unit tests fail with synchronous parquet reader

2017-03-10 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5349:


 Summary: TestParquetWriter unit tests fail with synchronous 
parquet reader
 Key: DRILL-5349
 URL: https://issues.apache.org/jira/browse/DRILL-5349
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


It appears that some lines of code were unnecessarily removed. 





[jira] [Created] (DRILL-5351) Excessive bounds checking in the Parquet reader

2017-03-13 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5351:


 Summary: Excessive bounds checking in the Parquet reader 
 Key: DRILL-5351
 URL: https://issues.apache.org/jira/browse/DRILL-5351
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


In profiling the Parquet reader, the variable-length decoding appears to be a 
major bottleneck, making the reader CPU bound rather than disk bound.
A YourKit profile indicates that the following methods are severe bottlenecks:

VarLenBinaryReader.determineSizeSerial(long)
  NullableVarBinaryVector$Mutator.setSafe(int, int, int, int, DrillBuf)
  DrillBuf.chk(int, int)
  NullableVarBinaryVector$Mutator.fillEmpties()

The problem is that each of these methods does some form of bounds checking, and, 
of course, the actual write to the ByteBuf is eventually bounds checked as well.

DrillBuf.chk can be disabled by a configuration setting. Disabling this does 
improve performance of TPCH queries. In addition, all regression, unit, and 
TPCH-SF100 tests pass. 

I would recommend we allow users to turn this check off for 
performance-critical queries.

Removing the bounds checking at every level is going to be a fair amount of 
work. In the meantime, it appears that a few simple changes to variable-length 
vectors improve query performance by about 10% across the board. 
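The pattern being disabled can be sketched like this. It is only an illustration of a flag-gated per-write check in the spirit of DrillBuf.chk; the flag and class names are assumptions, and Drill reads its equivalent from a configuration setting:

```java
// Sketch of a bounds check that can be turned off by a flag, illustrating the
// layered checking the issue describes (each write re-validates its range).
public class CheckedBuffer {
    public static boolean boundsCheckingEnabled = true;

    private final byte[] data;

    public CheckedBuffer(int capacity) {
        data = new byte[capacity];
    }

    // Per-write range check, one of several layers on the write path.
    private void chk(int index, int length) {
        if (boundsCheckingEnabled && (index < 0 || index + length > data.length)) {
            throw new IndexOutOfBoundsException("index: " + index + ", length: " + length);
        }
    }

    public void setByte(int index, byte value) {
        chk(index, 1);
        data[index] = value;
    }

    public byte getByte(int index) {
        chk(index, 1);
        return data[index];
    }
}
```

With the flag off, the check compiles down to a branch on a static boolean, which the JIT can largely eliminate.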







[jira] [Resolved] (DRILL-5459) Extend physical operator test framework to test mini plans consisting of multiple operators

2017-05-13 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5459.
--
Resolution: Fixed

> Extend physical operator test framework to test mini plans consisting of 
> multiple operators
> ---
>
> Key: DRILL-5459
> URL: https://issues.apache.org/jira/browse/DRILL-5459
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Reporter: Jinfeng Ni
>Assignee: Jinfeng Ni
>  Labels: ready-to-commit
>
> DRILL-4437 introduced a unit test framework to test a non-scan physical 
> operator. A JSON reader is implicitly used to specify the inputs to the 
> physical operator under test. 
> There is a need to extend this unit test framework for two scenarios.
> 1. We need a way to test the scan operator with different record readers. Drill 
> supports a variety of data sources, and it's important to make sure every 
> record reader works properly according to the defined protocol.
> 2. We need a way to test a so-called mini-plan (aka plan fragment) consisting 
> of multiple non-scan operators. 
> For the second need, an alternative is to leverage a SQL statement and the query 
> planner. However, such an approach has a direct dependency on the query planner: 
> 1) any planner change may impact the test case and lead to a different plan; 2) 
> it's not always easy to force the planner to produce a desired plan fragment 
> for testing.
> In particular, it would be good to have a relatively easy way to specify a 
> mini-plan with a couple of targeted physical operators. 
> This JIRA is created to track the work to extend the unit test framework in 
> DRILL-4437.
>  
> Related work: DRILL-5318 introduced a sub-operator test fixture, which mainly 
> targets testing at the sub-operator level. The framework in DRILL-4437 and the 
> extension would focus on the operator level, or multiple operator levels, where 
> execution goes through RecordBatch API calls. 
> As in DRILL-4437, we are going to use mockit to mock required objects such 
> as the fragment context, operator context, etc. 





[jira] [Created] (DRILL-5545) Add findbugs to build

2017-05-26 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5545:


 Summary: Add findbugs to build 
 Key: DRILL-5545
 URL: https://issues.apache.org/jira/browse/DRILL-5545
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


We should allow the manual invocation of FindBugs on the code base so that 
developers can check that they are not introducing hard-to-find bugs. 
FindBugs can take a long time and a lot of memory, so the invocation should be 
manual so as not to slow the build down.





[jira] [Resolved] (DRILL-5560) Create configuration file for distribution specific configuration

2017-06-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5560.
--
Resolution: Fixed

> Create configuration file for distribution specific configuration
> -
>
> Key: DRILL-5560
> URL: https://issues.apache.org/jira/browse/DRILL-5560
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>  Labels: ready-to-commit
> Fix For: 1.11.0
>
>
> Create a configuration file for distribution-specific settings, 
> "drill-distrib.conf". 
> This will be used to add distribution-specific configuration. 
> The order in which configuration gets loaded and overridden is 
> "drill-default.conf", the per-module configuration files "drill-module.conf", 
> "drill-distrib.conf", and "drill-override.conf".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (DRILL-5541) C++ Client Crashes During Simple "Man in the Middle" Attack Test with Exploitable Write AV

2017-06-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5541.
--
Resolution: Fixed

> C++ Client Crashes During Simple "Man in the Middle" Attack Test with 
> Exploitable Write AV
> --
>
> Key: DRILL-5541
> URL: https://issues.apache.org/jira/browse/DRILL-5541
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.10.0
>Reporter: Rob Wu
>Priority: Minor
>  Labels: ready-to-commit
>
> drillClient!boost_sb::shared_ptr::reset+0xa7:
> 07fe`c292f827 f0ff4b08lock dec dword ptr [rbx+8] 
> ds:07fe`c2b3de78=c29e6060
> Exploitability Classification: EXPLOITABLE
> Recommended Bug Title: Exploitable - User Mode Write AV starting at 
> drillClient!boost_sb::shared_ptr::reset+0x00a7
>  (Hash=0x4ae7fdff.0xb15af658)
> User mode write access violations that are not near NULL are exploitable.
> ==
> Stack Trace:
> Child-SP  RetAddr   Call Site
> `030df630 07fe`c295bca1 
> drillClient!boost_sb::shared_ptr::reset+0xa7
>  
> [c:\users\bamboo\desktop\make_win_drill\sb_boost\include\boost-1_57\boost\smart_ptr\shared_ptr.hpp
>  @ 620]
> `030df680 07fe`c295433c 
> drillClient!Drill::DrillClientImpl::processSchemasResult+0x281 
> [c:\users\bamboo\desktop\make_win_drill\drill-1.10.0.1\drill-1.10.0.1\contrib\native\client\src\clientlib\drillclientimpl.cpp
>  @ 1227]
> `030df7a0 07fe`c294cbf6 
> drillClient!Drill::DrillClientImpl::handleRead+0x75c 
> [c:\users\bamboo\desktop\make_win_drill\drill-1.10.0.1\drill-1.10.0.1\contrib\native\client\src\clientlib\drillclientimpl.cpp
>  @ 1555]
> `030df9c0 07fe`c294ce9f 
> drillClient!boost_sb::asio::detail::win_iocp_socket_recv_op
>  
> >,boost_sb::asio::mutable_buffers_1,boost_sb::asio::detail::transfer_all_t,boost_sb::_bi::bind_t  char * __ptr64,boost_sb::system::error_code const & __ptr64,unsigned 
> __int64>,boost_sb::_bi::list4 __ptr64>,boost_sb::_bi::value __ptr64>,boost_sb::arg<1>,boost_sb::arg<2> > > > >::do_complete+0x166 
> [c:\users\bamboo\desktop\make_win_drill\sb_boost\include\boost-1_57\boost\asio\detail\win_iocp_socket_recv_op.hpp
>  @ 97]
> `030dfa90 07fe`c296009d 
> drillClient!boost_sb::asio::detail::win_iocp_io_service::do_one+0x27f 
> [c:\users\bamboo\desktop\make_win_drill\sb_boost\include\boost-1_57\boost\asio\detail\impl\win_iocp_io_service.ipp
>  @ 406]
> `030dfb70 07fe`c295ffc9 
> drillClient!boost_sb::asio::detail::win_iocp_io_service::run+0xad 
> [c:\users\bamboo\desktop\make_win_drill\sb_boost\include\boost-1_57\boost\asio\detail\impl\win_iocp_io_service.ipp
>  @ 164]
> `030dfbd0 07fe`c2aa5b53 
> drillClient!boost_sb::asio::io_service::run+0x29 
> [c:\users\bamboo\desktop\make_win_drill\sb_boost\include\boost-1_57\boost\asio\impl\io_service.ipp
>  @ 60]
> `030dfc10 07fe`c2ad3e03 drillClient!boost_sb::`anonymous 
> namespace'::thread_start_function+0x43
> `030dfc50 07fe`c2ad404e drillClient!_callthreadstartex+0x17 
> [f:\dd\vctools\crt\crtw32\startup\threadex.c @ 376]
> `030dfc80 `779e59cd drillClient!_threadstartex+0x102 
> [f:\dd\vctools\crt\crtw32\startup\threadex.c @ 354]
> `030dfcb0 `77c1a561 kernel32!BaseThreadInitThunk+0xd
> `030dfce0 ` ntdll!RtlUserThreadStart+0x1d
> ==
> Register:
> rax=0284bae0 rbx=07fec2b3de70 rcx=027ec210
> rdx=027ec210 rsi=027f2638 rdi=027f25d0
> rip=07fec292f827 rsp=030df630 rbp=027ec210
>  r8=027ec210  r9= r10=027d32fc
> r11=27eb001b0003 r12= r13=028035a0
> r14=027ec210 r15=
> iopl=0 nv up ei pl nz na pe nc
> cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b efl=00010200
> drillClient!boost_sb::shared_ptr::reset+0xa7:
> 07fe`c292f827 f0ff4b08lock dec dword ptr [rbx+8] 
> ds:07fe`c2b3de78=c29e6060





[jira] [Resolved] (DRILL-5545) Add findbugs to build

2017-06-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5545.
--
Resolution: Fixed

> Add findbugs to build 
> --
>
> Key: DRILL-5545
> URL: https://issues.apache.org/jira/browse/DRILL-5545
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>  Labels: ready-to-commit
>
> We should allow the manual invocation of FindBugs on the code base so that 
> developers can check that they are not introducing hard-to-find 
> bugs. FindBugs can take a long time and a lot of memory, so the invocation 
> should be manual so as not to slow the build down.





[jira] [Created] (DRILL-5727) Update release profile to generate SHA-512 checksum.

2017-08-17 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5727:


 Summary: Update release profile to generate SHA-512 checksum.
 Key: DRILL-5727
 URL: https://issues.apache.org/jira/browse/DRILL-5727
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


Per the latest release guidelines, we should generate a SHA-512 checksum with the 
release artifacts.





[jira] [Created] (DRILL-5818) Querying from system options is broken

2017-09-25 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5818:


 Summary: Querying from system options is broken
 Key: DRILL-5818
 URL: https://issues.apache.org/jira/browse/DRILL-5818
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


Querying from system options is broken.


{quote}0: jdbc:drill:schema=dfs.work> select * from sys.boot where name like 
'%abc%';
Error: SYSTEM ERROR: UnrecognizedPropertyException: Unrecognized field "type" 
(class org.apache.drill.exec.server.options.OptionValue), not marked as 
ignorable (8 known properties: "string_val", "kind", "accessibleScopes", 
"num_val", "name", "bool_val", "float_val", "scope"])
 at [Source: [B@1d7f3cd4; line: 6, column: 2] (through reference chain: 
org.apache.drill.exec.server.options.OptionValue["type"])

Fragment 0:0

[Error Id: b71b8550-9eff-4fdd-98f5-02242ce0e6b8 on PChandra-E776:31010] 
(state=,code=0)
0: jdbc:drill:schema=dfs.work> select * from sys.options;
Error: SYSTEM ERROR: UnrecognizedPropertyException: Unrecognized field "type" 
(class org.apache.drill.exec.server.options.OptionValue), not marked as 
ignorable (8 known properties: "string_val", "kind", "accessibleScopes", 
"num_val", "name", "bool_val", "float_val", "scope"])
 at [Source: [B@1d7f3cd4; line: 6, column: 2] (through reference chain: 
org.apache.drill.exec.server.options.OptionValue["type"])

Fragment 0:0

[Error Id: 7c9e264f-032c-4892-9941-28db0a068ecf on PChandra-E776:31010] 
(state=,code=0)
{quote}

This seems to be caused by the fix for DRILL-5723.

[~timothyfarkas] can you take a look?
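The failure mode in the error above can be sketched in plain Java: the serialized form contains a field ("type") that the deserializer's known-property set does not include, so deserialization fails. The names below mirror the error message; the parsing is deliberately simplified and is not Jackson itself:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the mismatch: the writer emits "type" but the reader only knows
// the eight properties listed in the error, and unknown fields are not
// marked as ignorable, so reading throws.
public class OptionValueReader {
    static final Set<String> KNOWN = new HashSet<>(Arrays.asList(
        "string_val", "kind", "accessibleScopes", "num_val",
        "name", "bool_val", "float_val", "scope"));

    public static void read(Map<String, String> fields, boolean ignoreUnknown) {
        for (String field : fields.keySet()) {
            if (!KNOWN.contains(field) && !ignoreUnknown) {
                throw new IllegalArgumentException("Unrecognized field \"" + field + "\"");
            }
        }
    }
}
```

The fix is either to keep the serialized and deserializable property sets in sync, or to mark unknown properties as ignorable on the target class.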





[jira] [Created] (DRILL-5865) build broken with commit fe79a63

2017-10-11 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5865:


 Summary: build broken with commit fe79a63
 Key: DRILL-5865
 URL: https://issues.apache.org/jira/browse/DRILL-5865
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


Looks like the combination of fe79a63 and 42f7af2 broke the build. 





[jira] [Resolved] (DRILL-2496) Add SSL support to C++ client

2017-10-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2496.
--
Resolution: Fixed

Done in DRILL-5431

> Add SSL support to C++ client
> -
>
> Key: DRILL-2496
> URL: https://issues.apache.org/jira/browse/DRILL-2496
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - C++
>Reporter: Parth Chandra
>Assignee: Parth Chandra
>  Labels: security
> Fix For: Future
>
>
> Needed for impersonation, where the username and password are sent over the wire 
> to the server.





[jira] [Created] (DRILL-5876) Remove netty-tcnative inclusion in java-exec/pom.xml

2017-10-13 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5876:


 Summary: Remove netty-tcnative inclusion in java-exec/pom.xml
 Key: DRILL-5876
 URL: https://issues.apache.org/jira/browse/DRILL-5876
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


The inclusion of netty-tcnative is causing all kinds of problems. The OS-specific 
classifier required is determined by a Maven extension, which in turn 
requires an additional Eclipse plugin. The Eclipse plugin has a problem that 
may corrupt the current workspace.
It is safe not to include the dependency, since it is required only at runtime. 
The only case in which it is required is when a developer has to debug 
SSL/OpenSSL issues in the Java client or the server when launching from within 
an IDE. In that case, the dependency can be enabled by uncommenting the 
relevant lines in the pom file.





[jira] [Created] (DRILL-5877) Fix Travis build

2017-10-16 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5877:


 Summary: Fix Travis build 
 Key: DRILL-5877
 URL: https://issues.apache.org/jira/browse/DRILL-5877
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


The Travis build is broken because of an Apache Rat check. Rat is complaining 
about a binary file (*.p12) not having an Apache header (even though it detects 
the file as binary).







[jira] [Resolved] (DRILL-5877) Fix Travis build

2017-10-16 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5877.
--
Resolution: Fixed

> Fix Travis build 
> -
>
> Key: DRILL-5877
> URL: https://issues.apache.org/jira/browse/DRILL-5877
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Parth Chandra
>
> The Travis build is broken because of an Apache Rat check. Rat is complaining 
> about a binary file (*.p12) not having an Apache header (even though it detects 
> the file as binary).





[jira] [Created] (DRILL-5888) jdbc-all-jar unit tests broken because of dependency on hadoop.security

2017-10-17 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5888:


 Summary: jdbc-all-jar unit tests broken because of dependency on 
hadoop.security
 Key: DRILL-5888
 URL: https://issues.apache.org/jira/browse/DRILL-5888
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


In some of the build profiles, jdbc-all-jar is built with all the 
Hadoop classes excluded. The changes for DRILL-5431 introduced a new dependency 
on Hadoop security, and because those classes are not available in 
jdbc-all-jar, the unit tests fail.





[jira] [Resolved] (DRILL-5873) Drill C++ Client should throw proper/complete error message for the ODBC driver to consume

2017-10-24 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5873.
--
Resolution: Fixed

> Drill C++ Client should throw proper/complete error message for the ODBC 
> driver to consume
> --
>
> Key: DRILL-5873
> URL: https://issues.apache.org/jira/browse/DRILL-5873
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Reporter: Krystal
>Assignee: Parth Chandra
>  Labels: ready-to-commit
>
> The Drill C++ Client should throw a proper/complete error message for the 
> driver to utilize.
> The ODBC driver is directly outputting the exception message thrown by the 
> client by calling the getError() API after the connect() API has failed with 
> an error status.
> For the Java client, similar logic is hard coded at 
> https://github.com/apache/drill/blob/1.11.0/exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserClient.java#L247.





[jira] [Created] (DRILL-5964) Do not allow queries to access paths outside the current workspace root

2017-11-14 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5964:


 Summary: Do not allow queries to access paths outside the current 
workspace root
 Key: DRILL-5964
 URL: https://issues.apache.org/jira/browse/DRILL-5964
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


Workspace definitions in the dfs plugin are intended to provide a convenient 
shortcut to long directory paths. However, some users may wish to disallow 
access to paths outside the root of the workspace, possibly to prevent 
accidental access. Note that this is a convenience option and not a substitute 
for permissions on the file system.
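The containment check the issue describes can be sketched with java.nio.file: resolve the query path against the workspace root, normalize away any ".." segments, and require that the result still starts with the root. This is illustrative only; Drill's dfs plugin works with Hadoop Path objects rather than java.nio:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of rejecting query paths that escape the workspace root.
public class WorkspaceGuard {
    public static boolean isUnderRoot(String workspaceRoot, String queryPath) {
        Path rootPath = Paths.get(workspaceRoot).normalize();
        // Resolve relative to the root, then collapse "." and ".." segments.
        Path resolved = rootPath.resolve(queryPath).normalize();
        // After normalization the path must still live under the root.
        return resolved.startsWith(rootPath);
    }
}
```

A relative path such as "../etc/passwd" normalizes to a location outside the root and is rejected, while ordinary workspace-relative paths pass.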





[jira] [Created] (DRILL-5968) C++ Client should set service_host to the connected host if service_host is empty

2017-11-14 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5968:


 Summary: C++ Client should set service_host to the connected host 
if service_host is empty
 Key: DRILL-5968
 URL: https://issues.apache.org/jira/browse/DRILL-5968
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


In the ODBC driver - 

The krbSpnConfigurationsRequired parameter in odbc.ini is not working as 
expected.  If I set the following:
AuthenticationType=Kerberos
UID=
PWD=
DelegationUID=
KrbServiceName=
KrbServiceHost=maprsasl
krbSpnConfigurationsRequired=1

I could connect successfully.  I was expecting to get an error message.  If 
either KrbServiceHost alone is missing, or both KrbServiceHost and KrbServiceName 
are missing, then I get the expected error message.

Turning off the parameter, I was able to connect using the following setting:
AuthenticationType=Kerberos
UID=
PWD=
DelegationUID=
KrbServiceName=
KrbServiceHost=maprsasl
krbSpnConfigurationsRequired=0

However, if KrbServiceHost, or both KrbServiceHost and KrbServiceName, are 
missing, I get the following error message:
1: SQLDriverConnect = [MapR][Drill] (30)  User authentication failed. Server 
message: DrillClientImpl::handleAuthentication: Authentication failed. 
[Details:  Encryption: enabled ,MaxWrappedSize: 32768 ,WrapSizeLimit: 0, Error: 
-1]. Check connection parameters? (30) SQLSTATE=28000
1: ODBC_Connect = [MapR][Drill] (30)  User authentication failed. Server 
message: DrillClientImpl::handleAuthentication: Authentication failed. 
[Details:  Encryption: enabled ,MaxWrappedSize: 32768 ,WrapSizeLimit: 0, Error: 
-1]. Check connection parameters? (30) SQLSTATE=28000


The Drill C++ Client should set service_host to the connected host (if direct) 
if service_host is empty (with similar logic for a ZooKeeper connection). Going 
through the source code, it looks like this logic was removed by this commit: 
https://github.com/apache/drill/commit/f246c3cad7f44baeb8153913052ebc963c62276a#diff-8e6df071d8ca863fcfa578892944c1dcL123







[jira] [Created] (DRILL-5971) Fix INT64, INT32 logical types in complex parquet reader

2017-11-15 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5971:


 Summary: Fix INT64, INT32 logical types in complex parquet reader
 Key: DRILL-5971
 URL: https://issues.apache.org/jira/browse/DRILL-5971
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


The 'complex' Parquet reader does not recognize the Parquet logical types 
INT64 and INT32. 
It should be a simple change to add these logical types.
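The kind of change described amounts to adding cases when mapping Parquet logical (converted) types to Drill types. The enum, method, and type-name strings below are illustrative stand-ins, not the actual reader code:

```java
// Sketch of adding INT_64 / INT_32 handling to a logical-type mapping.
public class LogicalTypeMapping {
    public enum ConvertedType { UTF8, INT_64, INT_32, OTHER }

    public static String toDrillType(ConvertedType type) {
        switch (type) {
            case UTF8:   return "VARCHAR";
            case INT_64: return "BIGINT";  // newly handled case
            case INT_32: return "INT";     // newly handled case
            default:
                throw new UnsupportedOperationException("Unsupported logical type: " + type);
        }
    }
}
```

Before the change, INT_64 and INT_32 would fall through to the unsupported-type branch even though the underlying physical types are perfectly readable.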





[jira] [Created] (DRILL-6188) Fix C++ client build on Centos 7 and OSX

2018-02-27 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6188:


 Summary: Fix C++ client build on Centos 7 and OSX 
 Key: DRILL-6188
 URL: https://issues.apache.org/jira/browse/DRILL-6188
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


Compile issue on CentOS 7:
{quote}In file included from 
/root/default/private-drill/contrib/native/client/src/clientlib/utils.cpp:22:0:
 /root/default/private-drill/contrib/native/client/src/clientlib/logger.hpp: In 
constructor 'Drill::Logger::Logger()':
 
/root/default/private-drill/contrib/native/client/src/clientlib/logger.hpp:38:29:
 error: 'cout' is not a member of 'std'
 m_pOutStream = &std::cout;
 ^
 make[2]: *** [src/clientlib/CMakeFiles/drillClient.dir/utils.cpp.o] Error 1
 make[1]: *** [src/clientlib/CMakeFiles/drillClient.dir/all] Error 2
 make: *** [all] Error 2
{quote}
OSX has this compile error:
{quote}In file included from 
/Users/mapr/private-drill/contrib/native/client/src/clientlib/drillClientImpl.cpp:34:
 
/Users/mapr/private-drill/contrib/native/client/src/clientlib/drillClientImpl.hpp:185:39:
 error: 'm_bHasError' is a private member of 'Drill::DrillClientQueryHandle'
 void setHasError(bool hasError)
Unknown macro: \{ m_bHasError = hasError; }
^
 
/Users/mapr/private-drill/contrib/native/client/src/clientlib/drillClientImpl.hpp:158:10:
 note: declared private here
 bool m_bHasError;
 ^
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6218) Update release profile to not generate MD5 checksum

2018-03-07 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6218:


 Summary: Update release profile to not generate MD5 checksum
 Key: DRILL-6218
 URL: https://issues.apache.org/jira/browse/DRILL-6218
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
Assignee: Parth Chandra


The latest release guidelines require that we not generate an MD5 checksum.





[jira] [Created] (DRILL-6321) Lateral Join: Planning changes - enable submitting physical plan

2018-04-10 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6321:


 Summary: Lateral Join: Planning changes - enable submitting 
physical plan
 Key: DRILL-6321
 URL: https://issues.apache.org/jira/browse/DRILL-6321
 Project: Apache Drill
  Issue Type: Task
Reporter: Parth Chandra


Implement changes to enable submitting a physical plan containing lateral and 
unnest.





[jira] [Created] (DRILL-6322) Lateral Join: Common changes - Add new iterOutcome, Operator types, MockRecordBatch for testing

2018-04-10 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6322:


 Summary: Lateral Join: Common changes - Add new iterOutcome, 
Operator types, MockRecordBatch for testing
 Key: DRILL-6322
 URL: https://issues.apache.org/jira/browse/DRILL-6322
 Project: Apache Drill
  Issue Type: Task
Reporter: Parth Chandra


Add a new IterOutcome (EMIT), operator types (LATERAL and UNNEST), and a 
MockRecordBatch for testing.





[jira] [Created] (DRILL-6323) Lateral Join - Initial implementation

2018-04-10 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6323:


 Summary: Lateral Join - Initial implementation
 Key: DRILL-6323
 URL: https://issues.apache.org/jira/browse/DRILL-6323
 Project: Apache Drill
  Issue Type: Task
Reporter: Parth Chandra


Implementation of Lateral Join with unit tests using MockRecordBatch





[jira] [Created] (DRILL-6324) Unnest - Initial Implementation

2018-04-10 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6324:


 Summary: Unnest - Initial Implementation
 Key: DRILL-6324
 URL: https://issues.apache.org/jira/browse/DRILL-6324
 Project: Apache Drill
  Issue Type: Task
Reporter: Parth Chandra








[jira] [Created] (DRILL-6327) Update unary operators to handle IterOutcome.EMIT

2018-04-12 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6327:


 Summary: Update unary operators to handle IterOutcome.EMIT
 Key: DRILL-6327
 URL: https://issues.apache.org/jira/browse/DRILL-6327
 Project: Apache Drill
  Issue Type: Task
Reporter: Parth Chandra


IterOutcome.EMIT is a new state introduced by the Lateral Join implementation. 
All operators need to be updated to handle it.

This Jira is to track the subtask of updating the unary operators (derived from 
AbstractSingleRecordBatch).
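The pass-through pattern for a unary operator can be sketched as follows. The enum and method shape are illustrative, not the actual AbstractSingleRecordBatch API:

```java
// Sketch of a unary operator that recognizes EMIT: it processes the incoming
// batch as usual but propagates the EMIT outcome downstream, so operators
// such as Lateral Join can see the boundary between sets of rows.
public class EmitHandling {
    public enum IterOutcome { OK, EMIT, NONE }

    public static IterOutcome innerNext(IterOutcome upstream) {
        switch (upstream) {
            case OK:
            case EMIT:
                // Process the incoming batch here, then report the same
                // outcome so the EMIT boundary is preserved downstream.
                return upstream;
            case NONE:
            default:
                return IterOutcome.NONE;
        }
    }
}
```

An operator that treated EMIT as an unknown state would either fail or silently merge row sets across the boundary; passing the outcome through avoids both.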

 





[jira] [Resolved] (DRILL-5968) C++ Client should set service_host to the connected host if service_host is empty

2018-04-30 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5968.
--
Resolution: Fixed

> C++ Client should set service_host to the connected host if service_host is 
> empty
> -
>
> Key: DRILL-5968
> URL: https://issues.apache.org/jira/browse/DRILL-5968
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Parth Chandra
>Priority: Major
>
> In the ODBC driver - 
> The krbSpnConfigurationsRequired parameter in odbc.ini is not working as 
> expected.  If I set the following:
> AuthenticationType=Kerberos
> UID=
> PWD=
> DelegationUID=
> KrbServiceName=
> KrbServiceHost=maprsasl
> krbSpnConfigurationsRequired=1
> I could connect successfully.  I was expecting to get an error message.  If 
> either KrbServiceHost alone is missing, or both KrbServiceHost and 
> KrbServiceName are missing, then I get the expected error message.
> Turning off the parameter, I was able to connect using the following setting:
> AuthenticationType=Kerberos
> UID=
> PWD=
> DelegationUID=
> KrbServiceName=
> KrbServiceHost=maprsasl
> krbSpnConfigurationsRequired=0
> However, if KrbServiceHost, or both KrbServiceHost and KrbServiceName, 
> are missing, I get the following error message:
> 1: SQLDriverConnect = [MapR][Drill] (30)  User authentication failed. Server 
> message: DrillClientImpl::handleAuthentication: Authentication failed. 
> [Details:  Encryption: enabled ,MaxWrappedSize: 32768 ,WrapSizeLimit: 0, 
> Error: -1]. Check connection parameters? (30) SQLSTATE=28000
> 1: ODBC_Connect = [MapR][Drill] (30)  User authentication failed. Server 
> message: DrillClientImpl::handleAuthentication: Authentication failed. 
> [Details:  Encryption: enabled ,MaxWrappedSize: 32768 ,WrapSizeLimit: 0, 
> Error: -1]. Check connection parameters? (30) SQLSTATE=28000
> The Drill C++ Client should set service_host to the connected host (if 
> direct) if service_host is empty (with similar logic for a ZooKeeper 
> connection). Going through the source code, it looks like this logic was 
> removed by this commit: 
> https://github.com/apache/drill/commit/f246c3cad7f44baeb8153913052ebc963c62276a#diff-8e6df071d8ca863fcfa578892944c1dcL123





[jira] [Created] (DRILL-6440) Fix ignored unit tests in unnest

2018-05-22 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6440:


 Summary: Fix ignored unit tests in unnest
 Key: DRILL-6440
 URL: https://issues.apache.org/jira/browse/DRILL-6440
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra
Assignee: Parth Chandra








[jira] [Resolved] (DRILL-5584) When Compiling Apache Drill C++ Client, versioning information are not present in the binary

2018-06-04 Thread Parth Chandra (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-5584.
--
   Resolution: Fixed
Fix Version/s: 1.14.0

> When Compiling Apache Drill C++ Client, versioning information are not 
> present in the binary
> 
>
> Key: DRILL-5584
> URL: https://issues.apache.org/jira/browse/DRILL-5584
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - C++
>Affects Versions: 1.10.0
>Reporter: Rob Wu
>Priority: Minor
> Fix For: 1.14.0
>
>
> We should add support for generating an RC file containing the versioning 
> information so this manual task can be automated.
> Current workaround:
> Compile the C++ Client DLL.
> Open the DLL and manually add a Version Resource with the following 
> information:
> FILEVERSION   1,10,0,0
> PRODUCTVERSION 1,10,0,0
> CompanyName
> FileDescription Apache Drill C++ Client
> FileVersion   1.10.0.0
> InternalNamedrillClient.dll
> LegalCopyright Copyright (c) 2013-2017 The Apache Software 
> Foundation
> OriginalFilename  drillClient.dll
> ProductName   Apache Drill C++ Client
> ProductVersion 1.10.0.0





[jira] [Created] (DRILL-6516) Support for EMIT outcome in streaming agg

2018-06-18 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6516:


 Summary: Support for EMIT outcome in streaming agg
 Key: DRILL-6516
 URL: https://issues.apache.org/jira/browse/DRILL-6516
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


Update the streaming aggregator to recognize the EMIT outcome





[jira] [Created] (DRILL-6576) Unnest reports incoming record counts incorrectly

2018-07-02 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6576:


 Summary: Unnest reports incoming record counts incorrectly
 Key: DRILL-6576
 URL: https://issues.apache.org/jira/browse/DRILL-6576
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
Assignee: Parth Chandra








[jira] [Created] (DRILL-6592) Unnest perf improvements - record batch sizer is called too frequently

2018-07-11 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6592:


 Summary: Unnest perf improvements - record batch sizer is called 
too frequently
 Key: DRILL-6592
 URL: https://issues.apache.org/jira/browse/DRILL-6592
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
Assignee: Parth Chandra


Unnest calls the RecordBatchSizer in every call to next/doWork, which is called 
by Lateral Join for every record. It needs to be called only once, at the 
beginning of each incoming batch.
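The fix amounts to measuring once per incoming batch and reusing the result on every per-record call. The sketch below illustrates the caching idea; the class names and the identity-based "same batch" test are simplifications, not the actual Unnest code:

```java
// Sketch of caching an expensive per-batch measurement so repeated
// per-record calls do not re-measure the same incoming batch.
public class CachedSizer {
    private Object lastBatch;      // identity of the batch last measured
    private int cachedRowWidth;
    public int measurements;       // exposed only for illustration/testing

    public int rowWidth(Object incomingBatch) {
        if (incomingBatch != lastBatch) {
            // New incoming batch: measure once and remember the result.
            cachedRowWidth = expensiveMeasure(incomingBatch);
            lastBatch = incomingBatch;
            measurements++;
        }
        return cachedRowWidth;     // per-record calls hit the cache
    }

    // Stand-in for the real RecordBatchSizer work.
    private int expensiveMeasure(Object batch) {
        return 64;
    }
}
```

For a batch of N records, the measurement cost drops from N calls to one.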





[jira] [Created] (DRILL-6596) Variable length vectors use unnecessary emptyByteArray to fill empties

2018-07-11 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6596:


 Summary: Variable length vectors use unnecessary emptyByteArray to 
fill empties
 Key: DRILL-6596
 URL: https://issues.apache.org/jira/browse/DRILL-6596
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


When writing to an index beyond the last index written to, variable length 
vectors set the 'empties' by writing a zero-length byte array to each index 
that was skipped.

This is, as it turns out, sometimes an expensive operation, and it is completely 
unnecessary: all that needs to be done is to set the offset vector correctly.

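A minimal Python model of the proposed fix may help. The vector layout below is a simplification, not Drill's actual VarCharVector: skipped indexes become empties purely by repeating the last offset, with no zero-length writes into the data buffer.

```python
# Simplified variable-length vector: offsets[i+1] - offsets[i] is the length
# of value i. Filling empties only touches the offset vector.

class VarCharVector:
    def __init__(self):
        self.data = bytearray()
        self.offsets = [0]

    def set(self, index, value: bytes):
        last = len(self.offsets) - 1
        # Fill empties: repeat the last offset for each skipped index.
        # No byte-array writes to self.data are needed for them.
        self.offsets.extend(self.offsets[-1] for _ in range(index - last))
        self.data.extend(value)
        self.offsets.append(len(self.data))

    def get(self, index):
        return bytes(self.data[self.offsets[index]:self.offsets[index + 1]])

v = VarCharVector()
v.set(0, b"a")
v.set(5, b"b")                  # indexes 1..4 were skipped
print(v.get(3), v.get(5))       # b'' b'b'
```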
 

 





[jira] [Created] (DRILL-6631) Wrong result from LateralUnnest query with aggregation and order by

2018-07-24 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6631:


 Summary: Wrong result from LateralUnnest query with aggregation 
and order by
 Key: DRILL-6631
 URL: https://issues.apache.org/jira/browse/DRILL-6631
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Parth Chandra


Reported by Chun:

The following query gives correct result:
{noformat}
0: jdbc:drill:zk=10.10.30.166:5181> select customer.c_custkey, customer.c_name, 
orders.totalprice from customer, lateral (select sum(t.o.o_totalprice) as 
totalprice from unnest(customer.c_orders) t(o) WHERE t.o.o_totalprice in 
(89230.03,270087.44,246408.53,82657.72,153941.38,65277.06,180309.76)) orders 
where customer.c_custkey = 101276;
++-+-+
| c_custkey  |   c_name| totalprice  |
++-+-+
| 101276 | Customer#000101276  | 82657.72|
++-+-+
1 row selected (6.184 seconds)
{noformat}
But if I remove the where clause and replace it with order by and limit, I get 
the following empty result set, which is wrong.
{noformat}
0: jdbc:drill:zk=10.10.30.166:5181> select customer.c_custkey, customer.c_name, 
orders.totalprice from customer, lateral (select sum(t.o.o_totalprice) as 
totalprice from unnest(customer.c_orders) t(o) WHERE t.o.o_totalprice in 
(89230.03,270087.44,246408.53,82657.72,153941.38,65277.06,180309.76)) orders 
order by customer.c_custkey limit 50;
++-+-+
| c_custkey  | c_name  | totalprice  |
++-+-+
++-+-+
No rows selected (2.753 seconds)
{noformat}
Here is the plan for the query giving the correct result:
{noformat}
00-00Screen : rowType = RecordType(ANY c_custkey, ANY c_name, ANY 
totalprice): rowcount = 472783.35, cumulative cost = {8242193.734985 rows, 
4.10218543349E7 cpu, 0.0 io, 5.80956180479E9 network, 0.0 memory}, id = 
14410
00-01  Project(c_custkey=[$0], c_name=[$1], totalprice=[$2]) : rowType = 
RecordType(ANY c_custkey, ANY c_name, ANY totalprice): rowcount = 472783.35, 
cumulative cost = {8194915.399985 rows, 4.0974575E7 cpu, 0.0 io, 
5.80956180479E9 network, 0.0 memory}, id = 14409
00-02UnionExchange : rowType = RecordType(ANY c_custkey, ANY c_name, 
ANY totalprice): rowcount = 472783.35, cumulative cost = {7722132.04999 
rows, 3.955622594996E7 cpu, 0.0 io, 5.80956180479E9 network, 0.0 
memory}, id = 14408
01-01  LateralJoin(correlation=[$cor1], joinType=[inner], 
requiredColumns=[{0}], column excluded from output: =[`c_orders`]) : rowType = 
RecordType(ANY c_custkey, ANY c_name, ANY totalprice): rowcount = 472783.35, 
cumulative cost = {7249348.6 rows, 3.577395915E7 cpu, 0.0 io, 0.0 
network, 0.0 memory}, id = 14407
01-03SelectionVectorRemover : rowType = RecordType(ANY c_orders, 
ANY c_custkey, ANY c_name): rowcount = 472783.35, cumulative cost = {6776561.35 
rows, 2.442713975E7 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 14403
01-05  Filter(condition=[=($1, 101276)]) : rowType = RecordType(ANY 
c_orders, ANY c_custkey, ANY c_name): rowcount = 472783.35, cumulative cost = 
{6303778.0 rows, 2.39543564E7 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 14402
01-07Scan(groupscan=[EasyGroupScan 
[selectionRoot=maprfs:/drill/testdata/lateral/tpchsf1/json/customer, 
numFiles=10, columns=[`c_orders`, `c_custkey`, `c_name`], 
files=[maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_6.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_4.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_3.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_7.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_5.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_2.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_0.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_8.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_1.json, 
maprfs:///drill/testdata/lateral/tpchsf1/json/customer/0_0_9.json]]]) : rowType 
= RecordType(ANY c_orders, ANY c_custkey, ANY c_name): rowcount = 3151889.0, 
cumulative cost = {3151889.0 rows, 9455667.0 cpu, 0.0 io, 0.0 network, 0.0 
memory}, id = 14401
01-02StreamAgg(group=[{}], totalprice=[SUM($0)]) : rowType = 
RecordType(ANY totalprice): rowcount = 1.0, cumulative cost = {4.0 rows, 19.0 
cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 14406
01-04  Filter(condition=[OR(=($0, 89230.03), =($0, 270087.44), 
=($0, 246408.53), =($0, 82657.72), =($0, 153941.38), =($0, 65277.06), =($0, 
180309.76))]) : rowType = RecordType(ANY ITEM): rowcount = 1.0, cumulative cost 
= {3.0 rows, 7.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 14405
01-0
{noformat}

[jira] [Created] (DRILL-6657) Unnest reports one batch less than the actual number of batches

2018-07-31 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-6657:


 Summary: Unnest reports one batch less than the actual number of 
batches
 Key: DRILL-6657
 URL: https://issues.apache.org/jira/browse/DRILL-6657
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra


Unnest doesn't count the first batch that comes in. 





[jira] [Created] (DRILL-1800) Drillbit is not sending COMPLETED status to the client when the query completes.

2014-12-02 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1800:


 Summary: Drillbit is not sending COMPLETED status to the client 
when the query completes.
 Key: DRILL-1800
 URL: https://issues.apache.org/jira/browse/DRILL-1800
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Parth Chandra
Priority: Blocker
 Fix For: 0.7.0


The C++ client is expecting the drillbit to send a record batch with the query 
state set to either COMPLETED or FAILED. The drillbit is not sending the 
COMPLETED message so the C++ client hangs waiting for more messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-1848) Query involving external sort runs out of memory

2014-12-11 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1848:


 Summary: Query involving external sort runs out of memory 
 Key: DRILL-1848
 URL: https://issues.apache.org/jira/browse/DRILL-1848
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
Assignee: Parth Chandra


The external sort operator has a memory limit based on, among other things, the 
MAX_QUERY_MEMORY_PER_NODE constant. This constant is 2048, and callers are 
assumed to multiply it by the appropriate factor to get a value in bytes.
The setting for external sort does not do so.
It would be better to change the constant to specify the exact value in bytes.
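The unit bug described above can be sketched in a few lines of Python. The constant name follows the issue text, but the code is illustrative, not Drill's actual implementation:

```python
# Hypothetical sketch: a limit of 2048 is meant as megabytes, but a caller
# that forgets the conversion ends up with a 2048-*byte* budget.

MAX_QUERY_MEMORY_PER_NODE_MB = 2048

def sort_memory_limit_bytes():
    # The proposed fix: express the limit in bytes explicitly so no caller
    # can forget the unit conversion.
    return MAX_QUERY_MEMORY_PER_NODE_MB * 1024 * 1024

buggy_limit = MAX_QUERY_MEMORY_PER_NODE_MB   # 2048 bytes: far too small
print(sort_memory_limit_bytes())             # 2147483648 (2 GiB)
```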





[jira] [Created] (DRILL-1869) CPP Client does not handle record batches that contain nullable varchars with only null values.

2014-12-15 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1869:


 Summary: CPP Client does not handle record batches that contain 
nullable varchars with only null values.
 Key: DRILL-1869
 URL: https://issues.apache.org/jira/browse/DRILL-1869
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - C++
Reporter: Parth Chandra


If the record batch has a nullable varchar value vector with only null values, 
the length of the buffer containing the varchar data is zero and the CPP client 
has an assertion failure.
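The missing guard can be sketched in Python (an illustration of the logic, not the C++ client's actual code): an all-null nullable varchar column legitimately has a zero-length data buffer, so the reader must not assert on emptiness but simply return nulls.

```python
# Hypothetical reader for a nullable varchar column: `validity` marks non-null
# values, `data` is the packed bytes, `offsets` delimits each value.

def read_varchar_column(validity, data, offsets):
    # `data` may be b"" when every value is null; that is not an error.
    out = []
    for i, valid in enumerate(validity):
        out.append(data[offsets[i]:offsets[i + 1]].decode() if valid else None)
    return out

print(read_varchar_column([0, 0], b"", [0, 0, 0]))    # [None, None]
print(read_varchar_column([1, 0], b"x", [0, 1, 1]))   # ['x', None]
```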





[jira] [Created] (DRILL-1871) JSON reader cannot read compressed files

2014-12-15 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1871:


 Summary: JSON reader cannot read compressed files 
 Key: DRILL-1871
 URL: https://issues.apache.org/jira/browse/DRILL-1871
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
Reporter: Parth Chandra
Assignee: Parth Chandra


GZip a JSON file and then try to query it. This gives the following error:

ERROR [HY000] [MapR][Drill] (1040) Drill failed to execute the query: SELECT * 
FROM `dfs`.`part-m-0001.json.gz`
[30024]Query execution error. Details:[
Query stopped., Illegal character ((CTRL-CHAR, code 31)): only regular white 
space (\r, \n, \t) is allowed between tokens
at [Source: org.apache.drill.exec.vector.complex.fn.JsonReader@6a375274; line: 
0, column: 2] [ d83909cd-89b7-43a2-aebc-5ebba74570db on vmx0754:31010 ] 
]

The JSON reader is supposed to be able to handle compressed files.
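A minimal Python illustration of what handling compressed input involves: wrap the raw stream in a decompressor before parsing, so the JSON parser never sees raw gzip bytes (the reported "CTRL-CHAR, code 31" is 0x1f, the first byte of the gzip magic number).

```python
import gzip
import io
import json

# Simulate a gzipped newline-delimited JSON file in memory.
raw = gzip.compress(b'{"trans_id": 1}\n{"trans_id": 2}\n')
assert raw[0] == 0x1f            # the "code 31" byte from the error message

records = []
# Decompress transparently, then parse line by line.
with gzip.open(io.BytesIO(raw), "rt") as f:
    for line in f:
        records.append(json.loads(line))
print(records)                   # [{'trans_id': 1}, {'trans_id': 2}]
```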






[jira] [Created] (DRILL-1886) NPE in queries with a project on a subquery with Union all

2014-12-17 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1886:


 Summary: NPE in queries with a project on a subquery with Union all
 Key: DRILL-1886
 URL: https://issues.apache.org/jira/browse/DRILL-1886
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 0.8.0


The following query (data file attached) causes a NPE:

select trans_id from 
(
   select trans_id, trans_id as ti from dfs.`mobile-small-copy.json.gz` where 
trans_id = 1 
union all 
  select trans_id, trans_id as ti from dfs.`mobile-small-copy.json.gz` where 
trans_id = 19998
)






[jira] [Resolved] (DRILL-1849) Parquet reader should return unknown columns as nullable int instead of nullable bit to be consistent with behavior elsewhere

2015-01-02 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1849.
--
Resolution: Fixed

Fixed in b83be2e

> Parquet reader should return unknown columns as nullable int instead of 
> nullable bit to be consistent with behavior elsewhere
> -
>
> Key: DRILL-1849
> URL: https://issues.apache.org/jira/browse/DRILL-1849
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Jason Altekruse
>Assignee: Jason Altekruse
> Fix For: Future
>
> Attachments: DRILL_1849.patch
>
>






[jira] [Resolved] (DRILL-1584) Update C++ client to handle updated RpcException pattern

2015-01-02 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1584.
--
Resolution: Fixed

C++ Client already handles this. 

> Update C++ client to handle updated RpcException pattern
> 
>
> Key: DRILL-1584
> URL: https://issues.apache.org/jira/browse/DRILL-1584
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jacques Nadeau
>Assignee: Parth Chandra
> Fix For: 0.8.0
>
>
> Move was from use of RpcFailure to DrillPBError as protobuf body for rpc 
> exception.





[jira] [Created] (DRILL-1943) Handle aliases and column names that differ in case only

2015-01-06 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1943:


 Summary: Handle aliases and column names that differ in case only
 Key: DRILL-1943
 URL: https://issues.apache.org/jira/browse/DRILL-1943
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Reporter: Parth Chandra
Assignee: Jinfeng Ni


1) Consider the query 
  select a, a from foo.
For this query we return the columns a and a0.
For the query 
  select a, A from foo
we return only one column and also leak memory. (see DRILL-1911).

The same behaviour exists if the query uses aliases. This is not correct. 
Aliases are explicitly specified names to remove ambiguity in column names and 
should be unique (ignoring case).

A query like :
  select A as a1, B as A1 from foo 
should give a syntax error.

This should be the behaviour in subqueries, view creation and CTAS queries as 
well.

2) If a subquery (or view) has column names that differ only in case, the use 
of the subquery or view should result in an error if the top level query 
references the ambiguous column.
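The requested validation can be sketched as follows. This is assumed logic, not Drill's planner code: output column names, including aliases, must be unique ignoring case.

```python
def check_output_columns(names):
    # Reject any pair of output column names that differ only in case.
    seen = {}
    for name in names:
        key = name.lower()
        if key in seen:
            raise ValueError(
                f"Ambiguous column: '{name}' conflicts with '{seen[key]}' "
                "(names must be unique ignoring case)")
        seen[key] = name

check_output_columns(["a", "b"])            # fine
try:
    check_output_columns(["a1", "A1"])      # select A as a1, B as A1 from foo
except ValueError as e:
    print("rejected:", e)
```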



[jira] [Resolved] (DRILL-1533) C++ Drill Client always sets hasSchemaChanged to true for every new record batch

2015-01-07 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1533.
--
Resolution: Fixed

+1

Resolved in commit 07f276d

> C++ Drill Client always sets hasSchemaChanged to true for every new record 
> batch
> 
>
> Key: DRILL-1533
> URL: https://issues.apache.org/jira/browse/DRILL-1533
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Reporter: Norris Lee
>Assignee: Parth Chandra
> Fix For: 0.8.0
>
>
> hasSchemaChanged is always set to true for every record batch after the 
> first, regardless of whether the schema has actually changed. This includes 
> cases where specific columns are projected, where a schema change should 
> never happen.





[jira] [Resolved] (DRILL-1498) Drill Client to handle handshake messages in handleRead and to ignore spurious results

2015-01-07 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1498.
--
Resolution: Fixed

+1

Resolved in commit 4304b25

> Drill Client to handle handshake messages in handleRead and to ignore 
> spurious results
> --
>
> Key: DRILL-1498
> URL: https://issues.apache.org/jira/browse/DRILL-1498
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Reporter: Norris Lee
>Assignee: Norris Lee
> Fix For: 0.8.0
>
>
> Occasionally the client will receive handshake messages from the server. 
> Requests should be responded to and responses should be ignored. Spurious 
> Query_Handle and Query_Result messages (where coordination id and query id = 
> 0, respectively) should also be ignored.





[jira] [Created] (DRILL-1955) C++ Client - Drill client should provide a clean method for detecting query completion in the async API.

2015-01-07 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-1955:


 Summary: C++ Client - Drill client should provide a clean method 
for detecting query completion in the async API.
 Key: DRILL-1955
 URL: https://issues.apache.org/jira/browse/DRILL-1955
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - C++
Reporter: Parth Chandra
Assignee: Parth Chandra


The C++ client swallows the query_completed status message because it has 
already signaled the end of data through the ls_last_chunk flag.
However, it may be too early for the application (or ODBC driver) to free 
resources.
The API should provide a clean method for detecting the completion of the 
query. This may include calling the listener callback one more time with no 
records, but with the query state set to completed.
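The proposed API behaviour can be sketched in Python (an assumed shape, not the real C++ client): after the last data chunk, the listener is invoked once more with no records and state COMPLETED, so the application knows it may free resources.

```python
events = []

def listener(records, state):
    # Application callback: records plus the current query state.
    events.append((len(records), state))

def deliver(all_batches):
    for batch in all_batches:
        listener(batch, "RUNNING")
    # The extra, final notification: no records, query state COMPLETED.
    listener([], "COMPLETED")

deliver([[1, 2], [3]])
print(events[-1])      # (0, 'COMPLETED')
```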






[jira] [Created] (DRILL-2038) C++ Client synchronous API has a concurrency issue with multiple parallel queries

2015-01-19 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-2038:


 Summary: C++ Client synchronous API has a concurrency issue with 
multiple parallel queries
 Key: DRILL-2038
 URL: https://issues.apache.org/jira/browse/DRILL-2038
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - C++
Affects Versions: 0.8.0
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 0.8.0


Issuing about 30 parallel queries to the C++ client (through the query submitter 
test program), some of the queries return failure. The client application gets 
into a state where the query state on the client is still running even though 
the server has completed the query and is no longer sending results back.

The issue appears to be related to the handling of the QUERY_COMPLETED state. 
The synchronous API assumes that if the error object associated with the query 
result is not null, the query must have failed.
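The faulty assumption can be sketched in Python (illustrative logic only, not the C++ client's code): failure should be decided from the terminal query state, not merely from whether an error object is attached to the result.

```python
QUERY_COMPLETED, QUERY_FAILED = "COMPLETED", "FAILED"

def query_failed_buggy(state, error):
    # The buggy assumption: any attached error object means failure.
    return error is not None

def query_failed_fixed(state, error):
    # Decide by the terminal state; a completed query may still carry a
    # non-null (informational) error object.
    return state == QUERY_FAILED

# A QUERY_COMPLETED result that happens to carry an error object:
print(query_failed_buggy(QUERY_COMPLETED, "warning"))   # True  (wrong)
print(query_failed_fixed(QUERY_COMPLETED, "warning"))   # False (right)
```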





[jira] [Resolved] (DRILL-1952) Inconsistent result with function mod() on float

2015-01-21 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1952.
--
Resolution: Won't Fix

Per Aman's comment, this is expected behaviour for Float and Double. 
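This is inherent to binary floating point: values like 10.01 have no exact double representation, so mod() returns the remainder of the nearest representable value rather than exactly 0.01. A quick Python check (math.fmod has the same semantics as % for positive doubles):

```python
import math

# 10.01 is stored as roughly 10.009999999999999787, so the remainder
# modulo 10 is close to 0.01 but not exactly 0.01.
r = math.fmod(10.01, 10)
print(r)
```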

> Inconsistent result with function mod() on float
> 
>
> Key: DRILL-1952
> URL: https://issues.apache.org/jira/browse/DRILL-1952
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 0.8.0
>Reporter: Chun Chang
>Assignee: Daniel Barclay (Drill/MapR)
>
> #Fri Jan 02 21:20:47 EST 2015
> git.commit.id.abbrev=b491cdb
> mod() operation on float gives inconsistent results. Test data can be accessed 
> at https://s3.amazonaws.com/apache-drill/files/complex.json.gz
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirMondrian> select t.sfa[1], mod(t.sfa[1], 
> 10) sfamod from `complex.json` t limit 20;
> +------------+------------+
> |   EXPR$0   |   sfamod   |
> +------------+------------+
> | 1.01       | 1.01       |
> | 2.01       | 2.01       |
> | 3.01       | 3.01       |
> | 4.01       | 4.01       |
> | 5.01       | 5.01       |
> | 6.01       | 6.01       |
> | 7.01       | 7.01       |
> | 8.01       | 8.01       |
> | 9.01       | 9.01       |
> | 10.01      | 0.009787   |
> | 11.01      | 1.0098     |
> | 12.01      | 2.01       |
> | 13.01      | 3.01       |
> | 14.01      | 4.01       |
> | 15.01      | 5.01       |
> | 16.01      | 6.012      |
> | 17.01      | 7.012      |
> | 18.01      | 8.012      |
> | 19.01      | 9.012      |
> | 20.01      | 0.011563   |
> +------------+------------+
> 20 rows selected (0.112 seconds)
> {code}
> physical plan
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirMondrian> explain plan for select 
> t.sfa[1], mod(t.sfa[1], 10) sfamod from `complex.json` t limit 20;
> +------+------+
> | text | json |
> +------+------+
> | 00-00Screen
> 00-01  Project(EXPR$0=[$0], sfamod=[$1])
> 00-02SelectionVectorRemover
> 00-03  Limit(fetch=[20])
> 00-04Project(EXPR$0=[ITEM($0, 1)], sfamod=[MOD(ITEM($0, 1), 10)])
> 00-05  Scan(groupscan=[EasyGroupScan 
> [selectionRoot=/drill/testdata/complex_type/json/complex.json, numFiles=1, 
> columns=[`sfa`[1]], 
> files=[maprfs:/drill/testdata/complex_type/json/complex.json]]])
> {code}





[jira] [Created] (DRILL-2109) Parquet reader does not return correct data for required dictionary encoded fields

2015-01-28 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-2109:


 Summary: Parquet reader does not return correct data for required 
dictionary encoded fields
 Key: DRILL-2109
 URL: https://issues.apache.org/jira/browse/DRILL-2109
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
Assignee: Jason Altekruse


The reader returns all the data for the column but the records are in the wrong 
order. If you read more than one column, then the values do not match up.







[jira] [Resolved] (DRILL-1466) Drill Client receiving Handshake rpc calls from server

2015-01-30 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1466.
--
Resolution: Cannot Reproduce

Can no longer see this issue. 

> Drill Client receiving Handshake rpc calls from server
> --
>
> Key: DRILL-1466
> URL: https://issues.apache.org/jira/browse/DRILL-1466
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Reporter: Norris Lee
>Assignee: Parth Chandra
> Fix For: 0.8.0
>
>
> Occasionally in handleRead, the rpc type of the message coming back will be 
> of type HANDSHAKE. Subsequent msgs coming back afterwards will be 
> abnormal/out of order.
> Case 1: the next call will be rpc type QUERY_RESULT where the query id is 
> 0:0. QUERY_HANDLE was not called before this. This leads to the error 
> ERR_QRY_OUTOFORDER with message: "Query result received before query id. 
> Aborting …"
> Case 2: the next call will be rpc type QUERY_HANDLE with coordination id 0. 
> This leads to error ERR_QRY_INVQUERYID with message "Cannot find query Id in 
> internal structure"





[jira] [Created] (DRILL-2149) Projection drops column in select * query with json files containing nulls

2015-02-03 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-2149:


 Summary: Projection drops column in select * query with json files 
containing nulls
 Key: DRILL-2149
 URL: https://issues.apache.org/jira/browse/DRILL-2149
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
Affects Versions: 0.7.0
Reporter: Parth Chandra
Assignee: Steven Phillips


With the following data -

st_1.json - 
{"a":"xyzzy", "b":null}

st_2.json -
{"a":"xyzzy", "b":"You are inside a building, a well house for a large spring."}

select a,b from st_*.json   - returns both a and b.

select * from st_*.json  - returns only column a.
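The behaviour suggests the star-column schema is being taken from a single file instead of merged across files. A Python sketch of the expected merge (illustrative logic, not Drill's schema handling): a column that is all-null in one file has no observed type yet, but it must survive the merged schema and let another file resolve its type.

```python
def merge_star_schema(file_schemas):
    # Union the per-file schemas; None means "all values null, type unknown".
    merged = {}
    for schema in file_schemas:
        for col, typ in schema.items():
            if merged.get(col) is None:
                merged[col] = typ
    return merged

# st_1.json sees b only as null; st_2.json sees b as a varchar.
s = merge_star_schema([{"a": "varchar", "b": None},
                       {"a": "varchar", "b": "varchar"}])
print(s)    # {'a': 'varchar', 'b': 'varchar'}
```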








[jira] [Created] (DRILL-2188) JDBC should default to getting complex data as JSON

2015-02-09 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-2188:


 Summary: JDBC should default to getting complex data as JSON
 Key: DRILL-2188
 URL: https://issues.apache.org/jira/browse/DRILL-2188
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - JDBC
Reporter: Parth Chandra
Assignee: Daniel Barclay (Drill/MapR)
Priority: Minor
 Fix For: 0.8.0


Currently the ODBC driver gets complex data as a JSON string while the JDBC 
driver gets complex data as a complex type which it then converts to JSON. The 
conversion to JSON in the JDBC path uses an expensive method that also consumes 
excessive amounts of CPU.
Since client applications are unable to consume complex data, the default 
should be to get JSON data, and there should be a client-side setting (session 
parameter) to revert to getting complex data.





[jira] [Created] (DRILL-2189) Query profile has inconsistencies in data reported

2015-02-09 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-2189:


 Summary: Query profile has inconsistencies in data reported
 Key: DRILL-2189
 URL: https://issues.apache.org/jira/browse/DRILL-2189
 Project: Apache Drill
  Issue Type: Bug
Reporter: Parth Chandra
 Fix For: 0.9.0


In the attached query profile, the stats for the screen operator are merged 
into the project operator. 
Also, some of the operators are counting the same minor fragment more than 
once, which causes the average time to be reported incorrectly.





[jira] [Resolved] (DRILL-1538) Memory leak observed with TPCH queries on MapR-DB data (SF-100)

2015-02-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1538.
--
Resolution: Fixed

Fixed by Drill-1480

> Memory leak observed with TPCH queries on MapR-DB data (SF-100)
> ---
>
> Key: DRILL-1538
> URL: https://issues.apache.org/jira/browse/DRILL-1538
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Abhishek Girish
>Assignee: Parth Chandra
>Priority: Critical
> Fix For: 0.8.0
>
> Attachments: drillbit - OutOfMemory error.log
>
>
> Possible memory leak was observed while executing TPCH queries on MapR-DB 
> data (SF-100). 
> Setup - 4 nodes on MapR 3.1.1
> Drill-env:
> DRILL_MAX_DIRECT_MEMORY="32G"
> DRILL_MAX_HEAP="16G"
> export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP 
> -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M 
> -XX:ReservedCodeCacheSize=1G -ea"
> # Class unloading is disabled by default in Java 7
> # 
> http://hg.openjdk.java.net/jdk7u/jdk7u60/hotspot/file/tip/src/share/vm/runtime/globals.hpp#l1622
> export SERVER_GC_OPTS="-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC"
> export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS 
> -Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf 
> -Dzookeeper.sasl.client=false "
> export DRILL_LOG_DIR="/var/log/drill"
> Log attached. 





[jira] [Resolved] (DRILL-1139) Drillbit fails with OutOfMemoryError Exception when Drill-smoke test is run for a long time

2015-02-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1139.
--
Resolution: Fixed

Fixed by Drill-1480

> Drillbit fails with OutOfMemoryError Exception when Drill-smoke test is run 
> for a long time
> ---
>
> Key: DRILL-1139
> URL: https://issues.apache.org/jira/browse/DRILL-1139
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Amit Katti
>Assignee: Parth Chandra
>Priority: Critical
> Fix For: 0.8.0
>
>
> I ran the Drill-smoke test in an infinite loop on a cluster with 2 drillbits.
> After about 11 hours of running successfully, the smoke test started to fail 
> and both drillbits went down.
> I had also put in the below option in the /etc/drill/conf/drill-env.sh file:
> export DRILL_JAVA_OPTS="-Xms$DRILL_INIT_HEAP -Xmx$DRILL_MAX_HEAP 
> -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -ea -XX:MaxPermSize=512M 
> -XX:+UseConcMarkSweepGC -XX:ReservedCodeCacheSize=1G 
> -XX:+CMSClassUnloadingEnabled"
> The error message at the smoke test was:
> {code}
> 2014-07-12 05:36:34 INFO  ClientCnxn:852 - Socket connection established to 
> 10.10.30.156/10.10.30.156:5181, initiating session
> 2014-07-12 05:36:34 ERROR ConnectionState:201 - Connection timed out for 
> connection string (10.10.30.156:5181) and timeout (5000) / elapsed (5003)
> org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = 
> ConnectionLoss
>   at 
> org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:198)
>   at 
> org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88)
>   at 
> org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
>   at 
> org.apache.curator.utils.EnsurePath$InitialHelper$1.call(EnsurePath.java:148)
>   at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
>   at 
> org.apache.curator.utils.EnsurePath$InitialHelper.ensure(EnsurePath.java:140)
>   at org.apache.curator.utils.EnsurePath.ensure(EnsurePath.java:99)
>   at 
> org.apache.curator.framework.imps.NamespaceImpl.fixForNamespace(NamespaceImpl.java:74)
>   at 
> org.apache.curator.framework.imps.NamespaceImpl.newNamespaceAwareEnsurePath(NamespaceImpl.java:87)
>   at 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.newNamespaceAwareEnsurePath(CuratorFrameworkImpl.java:468)
>   at 
> org.apache.curator.framework.recipes.cache.PathChildrenCache.(PathChildrenCache.java:223)
>   at 
> org.apache.curator.framework.recipes.cache.PathChildrenCache.(PathChildrenCache.java:182)
>   at 
> org.apache.curator.x.discovery.details.ServiceCacheImpl.(ServiceCacheImpl.java:65)
>   at 
> org.apache.curator.x.discovery.details.ServiceCacheBuilderImpl.build(ServiceCacheBuilderImpl.java:47)
>   at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.(ZKClusterCoordinator.java:81)
>   at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:144)
>   at 
> org.apache.drill.jdbc.DrillConnectionImpl.(DrillConnectionImpl.java:90)
>   at 
> org.apache.drill.jdbc.DrillJdbc41Factory$DrillJdbc41Connection.(DrillJdbc41Factory.java:87)
>   at 
> org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:56)
>   at 
> org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:43)
>   at 
> org.apache.drill.jdbc.DrillFactory.newConnection(DrillFactory.java:51)
>   at 
> net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126)
>   at java.sql.DriverManager.getConnection(DriverManager.java:571)
>   at java.sql.DriverManager.getConnection(DriverManager.java:233)
>   at 
> org.apache.drill.test.framework.DrillTestBase.runTest(DrillTestBase.java:172)
>   at 
> org.apache.drill.test.framework.DrillTests.positiveTests(DrillTests.java:32)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
>   at org.testng.internal.Invoker.invokeMethod(Invoker.java:701)
>   at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:893)
>   at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1218)
>   at 
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
>   at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
>   at org.testng.TestRunner.privateRun(TestRunner.java:758)
>   at org.testng.TestRunner.run(Te

[jira] [Resolved] (DRILL-647) memory leak

2015-02-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-647.
-
Resolution: Fixed

Fixed by Drill-1480

> memory leak
> ---
>
> Key: DRILL-647
> URL: https://issues.apache.org/jira/browse/DRILL-647
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Chun Chang
>Assignee: Parth Chandra
>Priority: Critical
> Fix For: 0.8.0
>
>
> After processing thousands of queries (it may occur earlier), 
> any new query will fail due to memory allocation failure. 
> org.apache.drill.exec.memory.OutOfMemoryException: You attempted to create a 
> new child allocator with initial reservation 2000 but only 9934592 bytes 
> of memory were available.
>   
> org.apache.drill.exec.memory.TopLevelAllocator.getChildAllocator(TopLevelAllocator.java:68)
>  
> ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   
> org.apache.drill.exec.ops.FragmentContext.(FragmentContext.java:86) 
> ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   
> org.apache.drill.exec.work.foreman.QueryManager.runFragments(QueryManager.java:87)
>  
> ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:310) 
> [drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:324) 
> [drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:175) 
> [drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_45]
>   
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_45]
>   java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]





[jira] [Resolved] (DRILL-1442) C++ Client - Synchronous API appears to hang when running many queries in parallel

2015-02-12 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-1442.
--
Resolution: Fixed

Fixed in commit c051bbd

> C++ Client - Synchronous API appears to hang when running many queries in 
> parallel
> --
>
> Key: DRILL-1442
> URL: https://issues.apache.org/jira/browse/DRILL-1442
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 0.5.0
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 0.9.0
>
>
> The C++ client library has a synchronous version that allows a client 
> application to submit multiple queries asynchronously but retrieve results 
> synchronously.
> A situation may occur where the application submits several large queries 
> and then chooses to process the results of the last submitted query first. In 
> this case the client library buffers the results of the first few queries 
> and may hit its memory allocation limit before the last query's results are 
> retrieved. 
> The client app then deadlocks: the last query waits for more memory while the 
> first few queries wait for the app to consume their results.
> Technically this qualifies as a client application bug, but the client 
> library should prevent or break the deadlock if it can.
> At the very least, the querySubmitter example program should not suffer from 
> this issue.
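
The deadlock condition described above can be made concrete with a small model: a shared memory pool for all in-flight queries, where the application drains only the last query. All numbers and names here are hypothetical; this sketches the failure condition, not the client library's code.

```java
import java.util.concurrent.Semaphore;

// Model: a shared pool of 8 result-buffer slots, three queries in flight.
// The app intends to read query 3 first, but queries 1 and 2 fill the pool
// with buffered results before query 3 can stage anything.
public class DeadlockSketch {
    public static void main(String[] args) {
        Semaphore resultPool = new Semaphore(8);

        resultPool.acquireUninterruptibly(4); // query 1 buffers 4 slots
        resultPool.acquireUninterruptibly(4); // query 2 buffers 4 slots

        // Query 3 now needs a slot, but the app is blocked waiting on
        // query 3's results and will never drain queries 1 or 2:
        boolean q3CanProceed = resultPool.tryAcquire();
        System.out.println("query 3 can proceed: " + q3CanProceed); // false -> deadlock

        // Breaking the cycle: once the app consumes (and so releases) any
        // buffered result from an earlier query, query 3 is unblocked.
        resultPool.release(1);
        System.out.println("after one release: " + resultPool.tryAcquire()); // true
    }
}
```

One mitigation this suggests is a per-query buffering cap chosen so that (queries - 1) * cap stays below the global limit, which is roughly what bounding the synchronous API's internal buffering achieves.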





[jira] [Resolved] (DRILL-2132) ResultSetMetaData.getColumnClassName(...) returns "none" (rather than a class name)

2015-02-16 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2132.
--
Resolution: Duplicate

Duplicate of DRILL-2137

> ResultSetMetaData.getColumnClassName(...) returns "none" (rather than a class 
> name)
> ---
>
> Key: DRILL-2132
> URL: https://issues.apache.org/jira/browse/DRILL-2132
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Reporter: Daniel Barclay (Drill/MapR)
>Assignee: Daniel Barclay (Drill/MapR)
>
> {{ResultSetMetaData}}'s {{getColumnClassName(...)}} returns the string 
> {{"none"}} (rather than the name of a class), at least for the result set 
> returned by {{DatabaseMetaData.getColumns(...)}}.





[jira] [Resolved] (DRILL-2169) Disable embedded web server for tests that do not depend on it

2015-02-17 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-2169.
--
Resolution: Fixed

Resolved in d9b61fa

> Disable embedded web server for tests that do not depend on it
> --
>
> Key: DRILL-2169
> URL: https://issues.apache.org/jira/browse/DRILL-2169
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Hanifi Gunes
>Assignee: Hanifi Gunes
>Priority: Minor
> Attachments: DRILL-2169.1.patch.txt
>
>
> Starting the embedded web server takes a couple of seconds each time. We 
> should avoid this whenever possible. This issue proposes turning off the 
> embedded web server by default for all test cases. Individual tests that 
> depend on the web server can set an enable flag.
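
A minimal sketch of such an opt-in flag, assuming a system property controls it (the property key and class below are hypothetical, not Drill's actual configuration):

```java
// Gate an expensive test fixture behind an opt-in flag: tests that need
// the embedded web server set the property; everything else skips the
// multi-second startup.
public class TestWebServerGate {
    static final String ENABLE_PROP = "drill.test.webserver.enabled"; // hypothetical key
    private boolean started = false;

    void startIfEnabled() {
        // Boolean.getBoolean returns true only if the system property
        // exists and equals "true"; absent means stay disabled.
        if (Boolean.getBoolean(ENABLE_PROP)) {
            started = true; // stand-in for actually starting the embedded server
        }
    }

    boolean isStarted() {
        return started;
    }
}
```

Tests that genuinely exercise the web UI would set `-Ddrill.test.webserver.enabled=true` (or the equivalent in their setup method); all others pay no startup cost.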




