[jira] [Updated] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5917:

Labels: ready-to-commit  (was: )

> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure these libraries do not pull in any dependencies on the 
> org.json:json lib, and we should ban this lib in the main pom.xml file.
> This issue is critical: the Apache release cannot happen until we verify that 
> the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).
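Bans like this are typically enforced with the maven-enforcer-plugin's bannedDependencies rule. A minimal sketch of what the main pom.xml entry could look like (plugin placement and execution id are assumptions for illustration, not the actual Drill change):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>ban-org-json</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <!-- Fails the build if org.json:json appears anywhere in the
                   dependency tree, including transitive dependencies. -->
              <exclude>org.json:json</exclude>
            </excludes>
            <searchTransitive>true</searchTransitive>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With such a rule in place, the MapR dependencies shown in the tree above would need explicit exclusions for org.json:json (or a license-compatible replacement) before the build passes.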



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5917:

Reviewer: Paul Rogers  (was: Arina Ielchiieva)

> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure these libraries do not pull in any dependencies on the 
> org.json:json lib, and we should ban this lib in the main pom.xml file.
> This issue is critical: the Apache release cannot happen until we verify that 
> the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).





[jira] [Updated] (DRILL-5923) State of a successfully completed query shown as "COMPLETED"

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5923:

Labels: ready-to-commit  (was: )

> State of a successfully completed query shown as "COMPLETED"
> 
>
> Key: DRILL-5923
> URL: https://issues.apache.org/jira/browse/DRILL-5923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Drill UI currently lists a successfully completed query as "COMPLETED". 
> Successfully completed, failed and canceled queries are all grouped as 
> Completed queries. 
> It would be better to list the state of a successfully completed query as 
> "Succeeded" to avoid confusion.





[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249220#comment-16249220
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/774
  
+1, LGTM.


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from the (metric=warp.speed.test) table, as in the 
> previous query, but with the alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return values aggregated and grouped by standard Drill functions from the 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample





[jira] [Updated] (DRILL-5337) OpenTSDB storage plugin

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5337:

Labels: doc-impacting ready-to-commit  (was: features)

> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from the (metric=warp.speed.test) table, as in the 
> previous query, but with the alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return values aggregated and grouped by standard Drill functions from the 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample





[jira] [Commented] (DRILL-5717) change some date time unit cases with specific timezone or Local

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249227#comment-16249227
 ] 

ASF GitHub Bot commented on DRILL-5717:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/904
  
+1, LGTM.


> change some date time unit cases with specific timezone or Local
> 
>
> Key: DRILL-5717
> URL: https://issues.apache.org/jira/browse/DRILL-5717
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.9.0, 1.11.0
>Reporter: weijie.tong
>Assignee: weijie.tong
>  Labels: ready-to-commit
>
> Some date-time test cases, such as JodaDateValidatorTest, are not locale 
> independent. This causes the test phase to fail for users in other locales. 
> We should make these test cases independent of the local environment.
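The fix the issue calls for, pinning a locale instead of relying on the JVM default, can be sketched as follows (hypothetical test helper for illustration, not the actual Drill test code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class LocaleSafeTest {

    // Building the formatter with an explicit Locale (instead of the JVM
    // default) keeps expected test values stable on any machine.
    static String monthName(LocalDate date, Locale locale) {
        return DateTimeFormatter.ofPattern("MMMM", locale).format(date);
    }

    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2017, 11, 13);
        // With a pinned locale the result no longer depends on the
        // environment running the build.
        System.out.println(monthName(d, Locale.US));     // November
        System.out.println(monthName(d, Locale.FRENCH)); // novembre
    }
}
```

A test that hard-codes "November" but formats with the default locale passes on an en_US machine and fails on a fr_FR one; passing the Locale explicitly removes that dependence.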





[jira] [Commented] (DRILL-5956) Add storage plugin for Druid

2017-11-13 Thread weijie.tong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249318#comment-16249318
 ] 

weijie.tong commented on DRILL-5956:


I plan to do this work in the coming days, but it may take a long time to 
finish since I will be working on it in my free time.

> Add storage plugin for Druid
> 
>
> Key: DRILL-5956
> URL: https://issues.apache.org/jira/browse/DRILL-5956
> Project: Apache Drill
>  Issue Type: Wish
>  Components: Storage - Other
>Reporter: Jiaqi Liu
>
> As more and more companies use Druid for mission-critical industrial 
> products, Drill could gain much more popularity with Druid as one of its 
> supported storage plugins, since users could easily bind a Druid cluster to a 
> running Drill instance





[jira] [Assigned] (DRILL-5956) Add storage plugin for Druid

2017-11-13 Thread weijie.tong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weijie.tong reassigned DRILL-5956:
--

Assignee: weijie.tong

> Add storage plugin for Druid
> 
>
> Key: DRILL-5956
> URL: https://issues.apache.org/jira/browse/DRILL-5956
> Project: Apache Drill
>  Issue Type: Wish
>  Components: Storage - Other
>Reporter: Jiaqi Liu
>Assignee: weijie.tong
>
> As more and more companies use Druid for mission-critical industrial 
> products, Drill could gain much more popularity with Druid as one of its 
> supported storage plugins, since users could easily bind a Druid cluster to a 
> running Drill instance





[jira] [Commented] (DRILL-5771) Fix serDe errors for format plugins

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249355#comment-16249355
 ] 

ASF GitHub Bot commented on DRILL-5771:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1014
  
@ilooner Paul has correctly described the race condition issue. At planning 
time, once the plugin configuration is defined, we want to make sure it stays 
the same during query execution. Relying on the plugin configuration name is 
unsafe, since the config under that particular name can change, so one fragment 
of the query could be executed with one configuration and another fragment with 
a different one (a typical race condition).

Regarding your last question, during physical plan creation I don't think we 
deserialize DrillTable; rather, we deserialize query fragments (as can be seen 
from the json profile). You can also start looking, for example, from here:

https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/work/QueryWorkUnit.java#L59



> Fix serDe errors for format plugins
> ---
>
> Key: DRILL-5771
> URL: https://issues.apache.org/jira/browse/DRILL-5771
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Minor
> Fix For: 1.12.0
>
>
> Create unit tests to check that all storage format plugins can be 
> successfully serialized  / deserialized.
> Usually this happens when query has several major fragments. 
> One way to check serde is to generate physical plan (generated as json) and 
> then submit it back to Drill.
> One example of found errors is described in the first comment. Another 
> example is described in DRILL-5166.
> *Serde issues:*
> 1. Could not obtain format plugin during deserialization
> A format plugin is created based on the format plugin configuration or its 
> name. On Drill start-up we load information about the available plugins (this 
> is reloaded each time a storage plugin is updated, which can be done only by 
> an admin).
> When a query is parsed, we try to get the plugin from the available ones; if 
> we cannot find one, we try to [create 
> one|https://github.com/apache/drill/blob/3e8b01d5b0d3013e3811913f0fd6028b22c1ac3f/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java#L136-L144]
> but at other query execution stages we always assume that the [plugin exists 
> based on the 
> configuration|https://github.com/apache/drill/blob/3e8b01d5b0d3013e3811913f0fd6028b22c1ac3f/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java#L156-L162].
> For example, during query parsing we had to create a format plugin on one 
> node based on the format configuration. Then we sent a major fragment to a 
> different node; there, we could not get a format plugin based on that format 
> configuration, and deserialization failed.
> To fix this problem we need to create the format plugin during query 
> deserialization if it is absent.
>   
> 2. Absent hashCode and equals.
> Format plugins are stored in a hash map where the key is the format plugin 
> config.
> Since some format plugin configs did not override hashCode and equals, we 
> could not find a format plugin based on its configuration.
> 3. Named format plugin usage
> Named format plugin configs allow getting a format plugin by its name, for 
> configurations shared among all drillbits.
> They are used as aliases for pre-configured format plugins. A user with admin 
> privileges can modify them at runtime.
> Named format plugin configs are used instead of sending all non-default 
> parameters of the format plugin config; in this case only the name is sent.
> Their usage in a distributed system may cause race conditions.
> For example:
> 1. A query is submitted.
> 2. A Parquet format plugin is created with the following configuration 
> (autoCorrectCorruptDates=>true).
> 3. The named format plugin config is serialized with the name 'parquet'.
> 4. A major fragment is sent to a different node.
> 5. An admin changes the parquet configuration for the alias 'parquet' on all 
> nodes to autoCorrectCorruptDates=>false.
> 6. The named format config is deserialized on the different node into a 
> parquet format plugin with the configuration (autoCorrectCorruptDates=>false).
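The hashCode/equals problem described in point 2 can be illustrated with a minimal sketch (ExampleFormatConfig is a hypothetical class for illustration; Drill's real format plugin configs have more fields, but the principle is the same):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical format plugin config used as a HashMap key.
public class ExampleFormatConfig {
    private final boolean autoCorrectCorruptDates;

    public ExampleFormatConfig(boolean autoCorrectCorruptDates) {
        this.autoCorrectCorruptDates = autoCorrectCorruptDates;
    }

    // Without these overrides, two configs with identical settings are
    // distinct HashMap keys, so looking a plugin up by its config fails.
    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof ExampleFormatConfig)) {
            return false;
        }
        return autoCorrectCorruptDates == ((ExampleFormatConfig) o).autoCorrectCorruptDates;
    }

    @Override
    public int hashCode() {
        return Objects.hash(autoCorrectCorruptDates);
    }

    public static void main(String[] args) {
        Map<ExampleFormatConfig, String> plugins = new HashMap<>();
        plugins.put(new ExampleFormatConfig(true), "parquet format plugin");
        // An equal-but-distinct config instance now finds the registered
        // plugin; without the overrides this lookup would return null.
        System.out.println(plugins.get(new ExampleFormatConfig(true)));
    }
}
```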





[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249361#comment-16249361
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user vladimirtkach commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150500059
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -502,6 +502,8 @@ drill.exec.options: {
 store.format: "parquet",
 store.hive.optimize_scan_with_native_readers: false,
 store.json.all_text_mode: false,
+store.json.writer.non_numeric_numbers: false,
+store.json.reader.non_numeric_numbers: false,
--- End diff --

No, we don't, and we should test them. I think we have to create a separate 
jira for testing math functions, because the code changes in this PR don't 
affect the logic of any math function, and while testing math functions there 
will be other issues not connected with this functionality (for example, the 
BigInteger constructor doesn't accept NaN, Infinity and the like).


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals such as NaN, Infinity and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended signature of {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding second optional parameter that enables read/write NaN and 
> Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); will result with 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); will parse 
> NaN as a number.
> {noformat}





[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249362#comment-16249362
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user vladimirtkach commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150500204
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -502,6 +502,8 @@ drill.exec.options: {
 store.format: "parquet",
 store.hive.optimize_scan_with_native_readers: false,
 store.json.all_text_mode: false,
+store.json.writer.non_numeric_numbers: false,
+store.json.reader.non_numeric_numbers: false,
--- End diff --

Thanks, allow_nan_inf is definitely better.


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals such as NaN, Infinity and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended signature of {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding second optional parameter that enables read/write NaN and 
> Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); will result with 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); will parse 
> NaN as a number.
> {noformat}





[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249365#comment-16249365
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user vladimirtkach commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150500913
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -502,6 +502,8 @@ drill.exec.options: {
 store.format: "parquet",
 store.hive.optimize_scan_with_native_readers: false,
 store.json.all_text_mode: false,
+store.json.writer.non_numeric_numbers: false,
+store.json.reader.non_numeric_numbers: false,
--- End diff --

I think we should stick to the JSON standard and leave them switched off by 
default. If the user gets an exception, we can show the name of the option they 
need to switch on.


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals such as NaN, Infinity and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended signature of {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding second optional parameter that enables read/write NaN and 
> Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); will result with 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); will parse 
> NaN as a number.
> {noformat}





[jira] [Created] (DRILL-5959) Unable to apply Union on multiple select query expressions

2017-11-13 Thread Rajeshwar Rao (JIRA)
Rajeshwar Rao created DRILL-5959:


 Summary: Unable to apply Union on  multiple select query 
expressions
 Key: DRILL-5959
 URL: https://issues.apache.org/jira/browse/DRILL-5959
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Affects Versions: 1.11.0, 1.10.0
Reporter: Rajeshwar Rao
Priority: Minor
 Fix For: Future


Unable to combine the result sets of more than two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
 union all
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
 union all
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

getting the following error : 

Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0





[jira] [Updated] (DRILL-5959) Unable to apply Union on multiple select query expressions

2017-11-13 Thread Rajeshwar Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshwar Rao updated DRILL-5959:
-
Description: 
Unable to combine the result sets of  more than  two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#f6c342} union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:red}{color:#f6c342} union all{color}{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}getting the following error :{color} 

Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0

  was:
Unable to combine the result sets of  more than  two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
 union all
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
 union all
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

getting the following error : 

Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0


> Unable to apply Union on  multiple select query expressions
> ---
>
> Key: DRILL-5959
> URL: https://issues.apache.org/jira/browse/DRILL-5959
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rajeshwar Rao
>Priority: Minor
> Fix For: Future
>
>
> Unable to combine the result sets of  more than  two separate query 
> expressions
> example :
> select color, Paul  , 'Paul' name
>  from mysql.devcdp25.pivtest
> {color:#f6c342} union all{color}
>  select color, John , 'John' name
>  from mysql.devcdp25.pivtest
> {color:red}{color:#f6c342} union all{color}{color}
>  select color, Tim , 'Tim' name
>  from mysql.devcdp25.pivtest
> {color:#d04437}getting the following error :{color} 
> Query Failed: An Error Occurred
> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
> JDBC storage plugin failed while trying setup the SQL query. sql SELECT * 
> FROM (SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` 
> UNION ALL SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) 
> UNION ALL SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` 
> plugin mysql Fragment 0:0





[jira] [Updated] (DRILL-5959) Unable to apply Union on multiple select query expressions

2017-11-13 Thread Rajeshwar Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshwar Rao updated DRILL-5959:
-
Description: 
Unable to combine the result sets of  more than  two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#f6c342} union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#f6c342} union all{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0

  was:
Unable to combine the result sets of  more than  two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#f6c342} union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:red}{color:#f6c342} union all{color}{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}getting the following error :{color} 

Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0


> Unable to apply Union on  multiple select query expressions
> ---
>
> Key: DRILL-5959
> URL: https://issues.apache.org/jira/browse/DRILL-5959
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rajeshwar Rao
>Priority: Minor
> Fix For: Future
>
>
> Unable to combine the result sets of  more than  two separate query 
> expressions
> example :
> select color, Paul  , 'Paul' name
>  from mysql.devcdp25.pivtest
> {color:#f6c342} union all{color}
>  select color, John , 'John' name
>  from mysql.devcdp25.pivtest
> {color:#f6c342} union all{color}
>  select color, Tim , 'Tim' name
>  from mysql.devcdp25.pivtest
> {color:#d04437}Query Failed: An Error Occurred{color}
> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
> JDBC storage plugin failed while trying setup the SQL query. sql SELECT * 
> FROM (SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` 
> UNION ALL SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) 
> UNION ALL SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` 
> plugin mysql Fragment 0:0





[jira] [Updated] (DRILL-5959) Unable to apply Union on multiple select query expressions

2017-11-13 Thread Rajeshwar Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshwar Rao updated DRILL-5959:
-
Description: 
Unable to combine the result sets of  more than  two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#f6c342} {color:#205081}union all{color}{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#f6c342} {color:#205081}union all{color}{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0

  was:
Unable to combine the result sets of  more than  two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#f6c342} union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#f6c342} union all{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0


> Unable to apply Union on  multiple select query expressions
> ---
>
> Key: DRILL-5959
> URL: https://issues.apache.org/jira/browse/DRILL-5959
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rajeshwar Rao
>Priority: Minor
> Fix For: Future
>
>
> Unable to combine the result sets of  more than  two separate query 
> expressions
> example :
> select color, Paul  , 'Paul' name
>  from mysql.devcdp25.pivtest
> {color:#f6c342} {color:#205081}union all{color}{color}
>  select color, John , 'John' name
>  from mysql.devcdp25.pivtest
> {color:#f6c342} {color:#205081}union all{color}{color}
>  select color, Tim , 'Tim' name
>  from mysql.devcdp25.pivtest
> {color:#d04437}Query Failed: An Error Occurred{color}
> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
> JDBC storage plugin failed while trying setup the SQL query. sql SELECT * 
> FROM (SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` 
> UNION ALL SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) 
> UNION ALL SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` 
> plugin mysql Fragment 0:0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5959) Unable to apply Union on multiple select query expressions

2017-11-13 Thread Rajeshwar Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshwar Rao updated DRILL-5959:
-
Description: 
Unable to combine the result sets of more than two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#205081}union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#205081}union all{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0

  was:
Unable to combine the result sets of more than two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#f6c342} {color:#205081}union all{color}{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#f6c342} {color:#205081}union all{color}{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0


> Unable to apply Union on  multiple select query expressions
> ---
>
> Key: DRILL-5959
> URL: https://issues.apache.org/jira/browse/DRILL-5959
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rajeshwar Rao
>Priority: Minor
> Fix For: Future
>
>
> Unable to combine the result sets of  more than  two separate query 
> expressions
> example :
> select color, Paul  , 'Paul' name
>  from mysql.devcdp25.pivtest
> {color:#205081}union all{color}
>  select color, John , 'John' name
>  from mysql.devcdp25.pivtest
> {color:#205081}union all{color}
>  select color, Tim , 'Tim' name
>  from mysql.devcdp25.pivtest
> {color:#d04437}Query Failed: An Error Occurred{color}
> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
> JDBC storage plugin failed while trying setup the SQL query. sql SELECT * 
> FROM (SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` 
> UNION ALL SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) 
> UNION ALL SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` 
> plugin mysql Fragment 0:0





[jira] [Updated] (DRILL-5959) Unable to apply Union on multiple select query expressions

2017-11-13 Thread Rajeshwar Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshwar Rao updated DRILL-5959:
-
Description: 
Unable to combine the result sets of more than two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#205081}union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#205081}union all{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0

It is desirable to support combining multiple result sets in a single query.
Note: the same query works fine in MySQL.

  was:
Unable to combine the result sets of more than two separate query expressions

example :

select color, Paul  , 'Paul' name
 from mysql.devcdp25.pivtest
{color:#205081}union all{color}
 select color, John , 'John' name
 from mysql.devcdp25.pivtest
{color:#205081}union all{color}
 select color, Tim , 'Tim' name
 from mysql.devcdp25.pivtest

{color:#d04437}Query Failed: An Error Occurred{color}
org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
JDBC storage plugin failed while trying setup the SQL query. sql SELECT * FROM 
(SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` UNION ALL 
SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) UNION ALL 
SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` plugin mysql 
Fragment 0:0


> Unable to apply Union on  multiple select query expressions
> ---
>
> Key: DRILL-5959
> URL: https://issues.apache.org/jira/browse/DRILL-5959
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rajeshwar Rao
>Priority: Minor
> Fix For: Future
>
>
> Unable to combine the result sets of  more than  two separate query 
> expressions
> example :
> select color, Paul  , 'Paul' name
>  from mysql.devcdp25.pivtest
> {color:#205081}union all{color}
>  select color, John , 'John' name
>  from mysql.devcdp25.pivtest
> {color:#205081}union all{color}
>  select color, Tim , 'Tim' name
>  from mysql.devcdp25.pivtest
> {color:#d04437}Query Failed: An Error Occurred{color}
> org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: The 
> JDBC storage plugin failed while trying setup the SQL query. sql SELECT * 
> FROM (SELECT `color`, `Paul`, 'Paul' AS `name` FROM `devcdp25`.`pivtest` 
> UNION ALL SELECT `color`, `John`, 'John' AS `name` FROM `devcdp25`.`pivtest`) 
> UNION ALL SELECT `color`, `Tim`, 'Tim' AS `name` FROM `devcdp25`.`pivtest` 
> plugin mysql Fragment 0:0
> It is desirable to support combining multiple result sets in a single query.
> Note: the same query works fine in MySQL.





[jira] [Updated] (DRILL-5952) Implement "CREATE TABLE IF NOT EXISTS"

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5952:

Fix Version/s: (was: 1.12.0)
   Future

> Implement "CREATE TABLE IF NOT EXISTS"
> --
>
> Key: DRILL-5952
> URL: https://issues.apache.org/jira/browse/DRILL-5952
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: Future
>
>
> Currently, if a table/view with the same name exists, CREATE TABLE fails with 
> a VALIDATION ERROR.
> Having "IF NOT EXISTS" support for CREATE TABLE will ensure that the query 
> succeeds.





[jira] [Commented] (DRILL-5952) Implement "CREATE TABLE IF NOT EXISTS"

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249428#comment-16249428
 ] 

ASF GitHub Bot commented on DRILL-5952:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1033#discussion_r150512392
  
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestCTAS.java ---
@@ -313,6 +313,59 @@ public void createTableWithCustomUmask() throws Exception {
 }
   }
 
+  @Test // DRILL-5951
+  public void testCreateTableIfNotExistsWhenTableWithSameNameAlreadyExists() throws Exception{
+    final String newTblName = "createTableIfNotExistsWhenATableWithSameNameAlreadyExists";
+
+    try {
+      test(String.format("CREATE TABLE %s.%s AS SELECT * from cp.`region.json`", TEMP_SCHEMA, newTblName));
+
+      final String ctasQuery =
+        String.format("CREATE TABLE IF NOT EXISTS %s.%s AS SELECT * FROM cp.`employee.json`", TEMP_SCHEMA, newTblName);
+
+      testBuilder()
+        .sqlQuery(ctasQuery)
+        .unOrdered()
+        .baselineColumns("ok", "summary")
+        .baselineValues(false, String.format("A table or view with given name [%s] already exists in schema [%s]", newTblName, TEMP_SCHEMA))
+        .go();
+    } finally {
+      test(String.format("DROP TABLE %s.%s", TEMP_SCHEMA, newTblName));
--- End diff --

Please use `drop table if exists` here, since the test may fail before it 
creates the table, and we don't want one more exception that is unrelated to the 
actual problem.
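The cleanup being suggested can be sketched outside the Drill harness. `TEMP_SCHEMA` and the `test()` helper are Drill test-framework names from the diff above; the standalone stand-in below only shows the SQL string being built:

```java
// Sketch of the suggested cleanup: `IF EXISTS` makes the DROP a no-op when the
// earlier CREATE failed, so the finally block cannot mask the real test failure.
class DropCleanupSketch {
    static String dropSql(String schema, String table) {
        return String.format("DROP TABLE IF EXISTS %s.%s", schema, table);
    }

    public static void main(String[] args) {
        // The finally block would then pass this string to the harness's test() helper.
        System.out.println(dropSql("dfs.tmp", "myTable"));
    }
}
```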


> Implement "CREATE TABLE IF NOT EXISTS"
> --
>
> Key: DRILL-5952
> URL: https://issues.apache.org/jira/browse/DRILL-5952
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: Future
>
>
> Currently, if a table/view with the same name exists, CREATE TABLE fails with 
> a VALIDATION ERROR.
> Having "IF NOT EXISTS" support for CREATE TABLE will ensure that the query 
> succeeds.





[jira] [Commented] (DRILL-5952) Implement "CREATE TABLE IF NOT EXISTS"

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249427#comment-16249427
 ] 

ASF GitHub Bot commented on DRILL-5952:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1033#discussion_r150512175
  
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestCTAS.java ---
@@ -313,6 +313,59 @@ public void createTableWithCustomUmask() throws Exception {
 }
   }
 
+  @Test // DRILL-5951
+  public void testCreateTableIfNotExistsWhenTableWithSameNameAlreadyExists() throws Exception{
+    final String newTblName = "createTableIfNotExistsWhenATableWithSameNameAlreadyExists";
+
+    try {
+      test(String.format("CREATE TABLE %s.%s AS SELECT * from cp.`region.json`", TEMP_SCHEMA, newTblName));
--- End diff --

The `test` method already applies `String.format`, so you can use 
`test("CREATE TABLE %s.%s AS SELECT * from cp.`region.json`", TEMP_SCHEMA, newTblName)`.
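The varargs shape being suggested can be sketched in isolation. The real `test` method lives in Drill's test harness; the helper below is a hypothetical stand-in that only demonstrates the formatting behavior:

```java
// Hypothetical stand-in for a test(query, args...) helper that applies
// String.format itself, letting callers drop their explicit String.format call.
class FormatHelperSketch {
    static String test(String query, Object... args) {
        // Guard the zero-arg case so queries containing a literal '%' are untouched.
        return args.length == 0 ? query : String.format(query, args);
    }

    public static void main(String[] args) {
        System.out.println(test("CREATE TABLE %s.%s AS SELECT * from cp.`region.json`", "dfs.tmp", "t1"));
    }
}
```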


> Implement "CREATE TABLE IF NOT EXISTS"
> --
>
> Key: DRILL-5952
> URL: https://issues.apache.org/jira/browse/DRILL-5952
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: Future
>
>
> Currently, if a table/view with the same name exists, CREATE TABLE fails with 
> a VALIDATION ERROR.
> Having "IF NOT EXISTS" support for CREATE TABLE will ensure that the query 
> succeeds.





[jira] [Commented] (DRILL-5952) Implement "CREATE TABLE IF NOT EXISTS"

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249431#comment-16249431
 ] 

ASF GitHub Bot commented on DRILL-5952:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1033#discussion_r150512516
  
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestCTAS.java ---
@@ -313,6 +313,59 @@ public void createTableWithCustomUmask() throws Exception {
 }
   }
 
+  @Test // DRILL-5951
+  public void testCreateTableIfNotExistsWhenTableWithSameNameAlreadyExists() throws Exception{
+    final String newTblName = "createTableIfNotExistsWhenATableWithSameNameAlreadyExists";
+
+    try {
+      test(String.format("CREATE TABLE %s.%s AS SELECT * from cp.`region.json`", TEMP_SCHEMA, newTblName));
+
+      final String ctasQuery =
+        String.format("CREATE TABLE IF NOT EXISTS %s.%s AS SELECT * FROM cp.`employee.json`", TEMP_SCHEMA, newTblName);
+
+      testBuilder()
+        .sqlQuery(ctasQuery)
+        .unOrdered()
+        .baselineColumns("ok", "summary")
+        .baselineValues(false, String.format("A table or view with given name [%s] already exists in schema [%s]", newTblName, TEMP_SCHEMA))
+        .go();
+    } finally {
+      test(String.format("DROP TABLE %s.%s", TEMP_SCHEMA, newTblName));
+    }
+  }
+
+  @Test // DRILL-5951
+  public void testCreateTableIfNotExistsWhenViewWithSameNameAlreadyExists() throws Exception{
+    final String newTblName = "createTableIfNotExistsWhenAViewWithSameNameAlreadyExists";
+
+    try {
+      test(String.format("CREATE VIEW %s.%s AS SELECT * from cp.`region.json`", TEMP_SCHEMA, newTblName));
+
+      final String ctasQuery =
+        String.format("CREATE TABLE IF NOT EXISTS %s.%s AS SELECT * FROM cp.`employee.json`", TEMP_SCHEMA, newTblName);
+
+      testBuilder()
+        .sqlQuery(ctasQuery)
+        .unOrdered()
+        .baselineColumns("ok", "summary")
+        .baselineValues(false, String.format("A table or view with given name [%s] already exists in schema [%s]", newTblName, TEMP_SCHEMA))
+        .go();
+    } finally {
+      test(String.format("DROP VIEW %s.%s", TEMP_SCHEMA, newTblName));
+    }
+  }
+
+  @Test // DRILL-5951
+  public void testCreateTableIfNotExistsWhenTableWithSameNameDoesNotExist() throws Exception{
+    final String newTblName = "createTableIfNotExistsWhenATableWithSameNameDoesNotExist";
+
+    try {
+      test(String.format("CREATE TABLE IF NOT EXISTS %s.%s AS SELECT * FROM cp.`employee.json`", TEMP_SCHEMA, newTblName));
--- End diff --

You might want to check the baseline here as well, but only for `true`, to 
confirm that the table was actually created.


> Implement "CREATE TABLE IF NOT EXISTS"
> --
>
> Key: DRILL-5952
> URL: https://issues.apache.org/jira/browse/DRILL-5952
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: Future
>
>
> Currently, if a table/view with the same name exists, CREATE TABLE fails with 
> a VALIDATION ERROR.
> Having "IF NOT EXISTS" support for CREATE TABLE will ensure that the query 
> succeeds.





[jira] [Commented] (DRILL-5952) Implement "CREATE TABLE IF NOT EXISTS"

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249430#comment-16249430
 ] 

ASF GitHub Bot commented on DRILL-5952:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1033#discussion_r150513171
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/handlers/CreateTableHandler.java ---
@@ -83,7 +84,20 @@ public PhysicalPlan getPlan(SqlNode sqlNode) throws ValidationException, RelConv
     final DrillConfig drillConfig = context.getConfig();
     final AbstractSchema drillSchema = resolveSchema(sqlCreateTable, config.getConverter().getDefaultSchema(), drillConfig);
 
-    checkDuplicatedObjectExistence(drillSchema, originalTableName, drillConfig, context.getSession());
+    String schemaPath = drillSchema.getFullSchemaName();
+
+    // Check duplicate object existence
+    boolean isTemporaryTable = context.getSession().isTemporaryTable(drillSchema, drillConfig, originalTableName);
--- End diff --

1. Please add unit tests for temp tables as well, in the `TestCTTAS` class.
2. Should we consider similar functionality for views?


> Implement "CREATE TABLE IF NOT EXISTS"
> --
>
> Key: DRILL-5952
> URL: https://issues.apache.org/jira/browse/DRILL-5952
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: Future
>
>
> Currently, if a table/view with the same name exists, CREATE TABLE fails with 
> a VALIDATION ERROR.
> Having "IF NOT EXISTS" support for CREATE TABLE will ensure that the query 
> succeeds.





[jira] [Commented] (DRILL-5952) Implement "CREATE TABLE IF NOT EXISTS"

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249429#comment-16249429
 ] 

ASF GitHub Bot commented on DRILL-5952:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1033#discussion_r150512944
  
--- Diff: exec/java-exec/src/main/codegen/includes/parserImpls.ftl ---
@@ -241,8 +243,8 @@ SqlNode SqlCreateTable() :
 
 query = OrderedQueryOrExpr(ExprContext.ACCEPT_QUERY)
 {
-        return new SqlCreateTable(pos,tblName, fieldList, partitionFieldList, query, SqlLiteral.createBoolean(isTemporary, getPos()));
+        return new SqlCreateTable(pos, SqlLiteral.createBoolean(tableNonExistenceCheck, getPos()), tblName, fieldList,
--- End diff --

Not sure it is good practice to add the new parameter at the beginning of the 
constructor rather than at the end.
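One common way to sidestep this concern is to append the new parameter at the end and keep a delegating constructor with the old signature. The class below is purely illustrative, not Drill's actual SqlCreateTable:

```java
// Illustrative only: adding the new flag as the LAST constructor parameter and
// delegating from the old signature keeps existing call sites compiling unchanged.
class CtorOrderSketch {
    final String tblName;
    final boolean tableNonExistenceCheck;

    // Old signature, preserved for existing callers.
    CtorOrderSketch(String tblName) {
        this(tblName, false); // default: no IF NOT EXISTS check
    }

    // New signature: the new parameter is appended at the end.
    CtorOrderSketch(String tblName, boolean tableNonExistenceCheck) {
        this.tblName = tblName;
        this.tableNonExistenceCheck = tableNonExistenceCheck;
    }
}
```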


> Implement "CREATE TABLE IF NOT EXISTS"
> --
>
> Key: DRILL-5952
> URL: https://issues.apache.org/jira/browse/DRILL-5952
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
> Fix For: Future
>
>
> Currently, if a table/view with the same name exists CREATE TABLE fails with 
> VALIDATION ERROR
> Having "IF NOT EXISTS" support for CREATE TABLE will ensure that query 
> succeeds 





[jira] [Commented] (DRILL-5948) The wrong number of batches is displayed

2017-11-13 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249437#comment-16249437
 ] 

Arina Ielchiieva commented on DRILL-5948:
-

[~vstorona] you definitely should not close the Jira :) The description you 
have provided is correct: it states the problem from the user's point of view, 
while Paul explained, from the dev perspective, where the root cause lies. So 
this Jira should remain open until the issue is fixed.

> The wrong number of batches is displayed
> 
>
> Key: DRILL-5948
> URL: https://issues.apache.org/jira/browse/DRILL-5948
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Vlad
> Attachments: json_profile.json
>
>
> I suppose that when you execute a query with a small amount of data, Drill 
> should create 1 batch, but here you can see that Drill created 2 batches. I 
> think this is wrong behaviour for Drill. The full JSON file is in the 
> attachment.
> {code:html}
> "fragmentProfile": [
> {
> "majorFragmentId": 0,
> "minorFragmentProfile": [
> {
> "state": 3,
> "minorFragmentId": 0,
> "operatorProfile": [
> {
> "inputProfile": [
> {
> "records": 1,
> "batches": 2,
> "schemas": 1
> }
> ],
> "operatorId": 2,
> "operatorType": 29,
> "setupNanos": 0,
> "processNanos": 1767363740,
> "peakLocalMemoryAllocated": 639120,
> "waitNanos": 25787
> },
> {code}
> Step to reproduce:
> # Create JSON file with 1 row
> # Execute a star query with this file, for example 
> {code:sql}
> select * from dfs.`/path/to/your/file/example.json`
> {code}
> # Go to the Profile page on the UI, and open info about your query
> # Open JSON profile





[jira] [Commented] (DRILL-5899) Simple pattern matchers can work with DrillBuf directly

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249465#comment-16249465
 ] 

ASF GitHub Bot commented on DRILL-5899:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1015


> Simple pattern matchers can work with DrillBuf directly
> ---
>
> Key: DRILL-5899
> URL: https://issues.apache.org/jira/browse/DRILL-5899
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Flow
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>Priority: Critical
>  Labels: ready-to-commit
>
> For the 4 simple patterns we have, i.e. startsWith, endsWith, contains and 
> constant, we do not need the overhead of charSequenceWrapper. We can work 
> with DrillBuf directly. This will save us from doing the isAscii check and UTF8 
> decoding for each row.
> UTF-8 encoding ensures that no UTF-8 character is a prefix of any other valid 
> character. So, instead of decoding varChar from each row we are processing, 
> encode the patternString once during setup and do raw byte comparison. 
> Instead of bounds checking and reading one byte at a time, we get the whole 
> buffer in one shot and use that for comparison.
> This improved overall performance for filter operator by around 20%. 





[jira] [Commented] (DRILL-5923) State of a successfully completed query shown as "COMPLETED"

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249470#comment-16249470
 ] 

ASF GitHub Bot commented on DRILL-5923:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1021


> State of a successfully completed query shown as "COMPLETED"
> 
>
> Key: DRILL-5923
> URL: https://issues.apache.org/jira/browse/DRILL-5923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Drill UI currently lists a successfully completed query as "COMPLETED". 
> Successfully completed, failed and canceled queries are all grouped as 
> Completed queries. 
> It would be better to list the state of a successfully completed query as 
> "Succeeded" to avoid confusion.





[jira] [Commented] (DRILL-5822) The query with "SELECT *" with "ORDER BY" clause and `planner.slice_target`=1 doesn't preserve column order

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249468#comment-16249468
 ] 

ASF GitHub Bot commented on DRILL-5822:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1017


> The query with "SELECT *" with "ORDER BY" clause and `planner.slice_target`=1 
> doesn't preserve column order
> ---
>
> Key: DRILL-5822
> URL: https://issues.apache.org/jira/browse/DRILL-5822
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Vitalii Diravka
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Column ordering is not preserved for a star query with sorting when it is 
> planned into multiple fragments.
> Repro steps:
> 1) {code}alter session set `planner.slice_target`=1;{code}
> 2) ORDER BY clause in the query.
> Scenarios:
> {code}
> 0: jdbc:drill:zk=local> alter session reset `planner.slice_target`;
> +---++
> |  ok   |summary |
> +---++
> | true  | planner.slice_target updated.  |
> +---++
> 1 row selected (0.082 seconds)
> 0: jdbc:drill:zk=local> select * from cp.`tpch/nation.parquet` order by 
> n_name limit 1;
> +--+--+--+--+
> | n_nationkey  |  n_name  | n_regionkey  |  n_comment   |
> +--+--+--+--+
> | 0| ALGERIA  | 0|  haggle. carefully final deposits detect slyly agai  |
> +--+--+--+--+
> 1 row selected (0.141 seconds)
> 0: jdbc:drill:zk=local> alter session set `planner.slice_target`=1;
> +---++
> |  ok   |summary |
> +---++
> | true  | planner.slice_target updated.  |
> +---++
> 1 row selected (0.091 seconds)
> 0: jdbc:drill:zk=local> select * from cp.`tpch/nation.parquet` order by 
> n_name limit 1;
> +--+--+--+--+
> |  n_comment   |  n_name  | n_nationkey  | n_regionkey  |
> +--+--+--+--+
> |  haggle. carefully final deposits detect slyly agai  | ALGERIA  | 0| 0|
> +--+--+--+--+
> {code}





[jira] [Commented] (DRILL-5863) Sortable table incorrectly sorts minor fragments and time elements lexically instead of sorting by implicit value

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249464#comment-16249464
 ] 

ASF GitHub Bot commented on DRILL-5863:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/987


> Sortable table incorrectly sorts minor fragments and time elements lexically 
> instead of sorting by implicit value
> -
>
> Key: DRILL-5863
> URL: https://issues.apache.org/jira/browse/DRILL-5863
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.11.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> The fix for this is to use the dataTable library's {{data-order}} attribute for 
> the data elements that need to be sorted by an implicit value.
> ||Old order of Minor Fragment||New order of Minor Fragment||
> |...|...|
> |01-09-01  | 01-09-01|
> |01-10-01  | 01-10-01|
> |01-100-01 | 01-11-01|
> |01-101-01 | 01-12-01|
> |... | ... |
> ||Old order of Duration||New order of Duration||
> |...|...|
> |1m15s  | 55.03s|
> |55s  | 1m15s|
> |...|...|





[jira] [Commented] (DRILL-5337) OpenTSDB storage plugin

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249463#comment-16249463
 ] 

ASF GitHub Bot commented on DRILL-5337:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/774


> OpenTSDB storage plugin
> ---
>
> Key: DRILL-5337
> URL: https://issues.apache.org/jira/browse/DRILL-5337
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Reporter: Dmitriy Gavrilovych
>Assignee: Dmitriy Gavrilovych
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> Storage plugin for OpenTSDB
> The plugin uses REST API to work with TSDB. 
> Expected queries are listed below:
> SELECT * FROM openTSDB.`warp.speed.test`;
> Return all elements from warp.speed.test table with default aggregator SUM
> SELECT * FROM openTSDB.`(metric=warp.speed.test)`;
> Return all elements from the (metric=warp.speed.test) table, as in the previous 
> query, but with the alternative FROM syntax
> SELECT * FROM openTSDB.`(metric=warp.speed.test, aggregator=avg)`;
> Return all elements from warp.speed.test table, but with the custom aggregator
> SELECT `timestamp`, sum(`aggregated value`) FROM 
> openTSDB.`(metric=warp.speed.test, aggregator=avg)` GROUP BY `timestamp`;
> Return aggregated and grouped value by standard drill functions from 
> warp.speed.test table, but with the custom aggregator
> SELECT * FROM openTSDB.`(metric=warp.speed.test, downsample=5m-avg)`
> Return data limited by downsample





[jira] [Commented] (DRILL-5921) Counters metrics should be listed in table

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249469#comment-16249469
 ] 

ASF GitHub Bot commented on DRILL-5921:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1020


> Counters metrics should be listed in table
> --
>
> Key: DRILL-5921
> URL: https://issues.apache.org/jira/browse/DRILL-5921
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Counter metrics are currently displayed as a JSON string in the Drill UI. They 
> should be listed in a table, similar to other metrics.





[jira] [Commented] (DRILL-5909) need new JMX metrics for (FAILED and CANCELED) queries

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249467#comment-16249467
 ] 

ASF GitHub Bot commented on DRILL-5909:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1019


> need new JMX metrics for (FAILED and CANCELED) queries
> --
>
> Key: DRILL-5909
> URL: https://issues.apache.org/jira/browse/DRILL-5909
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Khurram Faraaz
>Assignee: Prasad Nagaraj Subramanya
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> we have these JMX metrics today
> {noformat}
> drill.queries.running
> drill.queries.completed
> {noformat}
> we need these new JMX metrics
> {noformat}
> drill.queries.failed
> drill.queries.canceled
> {noformat}
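The requested counters can be modeled with plain AtomicLongs. The metric names come from the description above; the registry wiring here is an assumption, since Drill's real metrics go through its metrics registry, which is not modeled in this sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Plain-Java stand-in for the requested failed/canceled query counters.
class QueryMetricsSketch {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    // Increment a named counter, creating it on first use, and return the new value.
    long inc(String name) {
        return counters.computeIfAbsent(name, k -> new AtomicLong()).incrementAndGet();
    }

    public static void main(String[] args) {
        QueryMetricsSketch m = new QueryMetricsSketch();
        m.inc("drill.queries.failed");    // requested new metric
        m.inc("drill.queries.canceled");  // requested new metric
        System.out.println(m.inc("drill.queries.failed")); // second failed increment
    }
}
```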





[jira] [Commented] (DRILL-5717) change some date time unit cases with specific timezone or Local

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249462#comment-16249462
 ] 

ASF GitHub Bot commented on DRILL-5717:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/904


> change some date time unit cases with specific timezone or Local
> 
>
> Key: DRILL-5717
> URL: https://issues.apache.org/jira/browse/DRILL-5717
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.9.0, 1.11.0
>Reporter: weijie.tong
>Assignee: weijie.tong
>  Labels: ready-to-commit
>
> Some date-time test cases, like JodaDateValidatorTest, are not locale 
> independent. This causes the test phase to fail for users in other locales. We 
> should make these test cases locale independent.





[jira] [Commented] (DRILL-5795) Filter pushdown for parquet handles multi rowgroup file

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249471#comment-16249471
 ] 

ASF GitHub Bot commented on DRILL-5795:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/949


> Filter pushdown for parquet handles multi rowgroup file
> ---
>
> Key: DRILL-5795
> URL: https://issues.apache.org/jira/browse/DRILL-5795
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Affects Versions: 1.11.0
>Reporter: Damien Profeta
>Assignee: Damien Profeta
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
> Attachments: multirowgroup_overlap.parquet
>
>
> DRILL-1950 implemented filter pushdown for parquet files, but only in the 
> case of one rowgroup per parquet file. In the case of multiple rowgroups per 
> file, it detects that a rowgroup can be pruned but then tells the drillbit 
> to read the whole file, which leads to performance issues.
> Having multiple rowgroups per file helps handle partitioned datasets and 
> still read only the relevant subset of data, without ending up with more 
> files than really needed.
> Let's say for instance you have a Parquet file composed of RG1 and RG2 with 
> only one column a. Min/max in RG1 are 1-2 and min/max in RG2 are 2-3.
> If I do "select a from file where a=3", today it will read the whole file; 
> with the patch it will only read RG2.
> *For documentation*
> Support / Other section in 
> https://drill.apache.org/docs/parquet-filter-pushdown/ should be updated.
> After the fix files with multiple row groups will be supported.
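The min/max pruning described above can be sketched as follows. `RowGroup` and `pruneForEquality` are hypothetical names, not Drill's actual classes, and a simple equality predicate on a single integer column is assumed:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of row-group pruning with min/max statistics.
class RowGroupPruning {
    // A row group's footer statistics for one column.
    static class RowGroup {
        final String name;
        final long min, max;
        RowGroup(String name, long min, long max) {
            this.name = name; this.min = min; this.max = max;
        }
    }

    // Keep only row groups whose [min, max] range can contain the value;
    // the others are pruned and never read by the drillbit.
    static List<RowGroup> pruneForEquality(List<RowGroup> groups, long value) {
        List<RowGroup> kept = new ArrayList<>();
        for (RowGroup rg : groups) {
            if (value >= rg.min && value <= rg.max) {
                kept.add(rg);
            }
        }
        return kept;
    }
}
```

With RG1 (min 1, max 2) and RG2 (min 2, max 3), the predicate a = 3 keeps only RG2, matching the example in the description.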



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5896) Handle vector creation in HbaseRecordReader to avoid NullableInt vectors later

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249466#comment-16249466
 ] 

ASF GitHub Bot commented on DRILL-5896:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1005


> Handle vector creation in HbaseRecordReader to avoid NullableInt vectors later
> --
>
> Key: DRILL-5896
> URL: https://issues.apache.org/jira/browse/DRILL-5896
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - HBase
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> When an HBase query projects both a column family and a column in the column 
> family, the vector for the column is not created in the HbaseRecordReader.
> So, in cases where the scan batch is empty, we create a NullableInt vector 
> for this column. We need to handle column creation in the reader.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5717) change some date time unit cases with specific timezone or Local

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5717:

Fix Version/s: 1.12.0

> change some date time unit cases with specific timezone or Local
> 
>
> Key: DRILL-5717
> URL: https://issues.apache.org/jira/browse/DRILL-5717
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.9.0, 1.11.0
>Reporter: weijie.tong
>Assignee: weijie.tong
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Some date-time test cases, such as JodaDateValidatorTest, are not locale 
> independent. This causes the test phase to fail for users in other locales. 
> We should make these test cases independent of the locale environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5899) Simple pattern matchers can work with DrillBuf directly

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5899:

Fix Version/s: 1.12.0

> Simple pattern matchers can work with DrillBuf directly
> ---
>
> Key: DRILL-5899
> URL: https://issues.apache.org/jira/browse/DRILL-5899
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Flow
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>Priority: Critical
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> For the 4 simple patterns we have, i.e. startsWith, endsWith, contains and 
> constant, we do not need the overhead of charSequenceWrapper. We can work 
> with DrillBuf directly. This will save us from doing the isAscii check and 
> UTF-8 decoding for each row.
> UTF-8 encoding ensures that no UTF-8 character is a prefix of any other valid 
> character. So, instead of decoding varChar from each row we are processing, 
> encode the patternString once during setup and do raw byte comparison. 
> Instead of bounds checking and reading one byte at a time, we get the whole 
> buffer in one shot and use that for comparison.
> This improved overall performance for filter operator by around 20%. 
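The raw byte comparison can be sketched as below. This is a simplified stand-in that operates on `byte[]` rather than DrillBuf, and the class name is hypothetical; it shows the core idea of encoding the pattern once at setup and matching rows without per-row decoding:

```java
import java.nio.charset.StandardCharsets;

// Simplified sketch of raw-byte pattern matching; Drill works on DrillBuf
// directly, but byte[] illustrates the same idea.
class BytePatternMatcher {
    private final byte[] pattern;  // encoded once at setup, not per row

    BytePatternMatcher(String patternString) {
        this.pattern = patternString.getBytes(StandardCharsets.UTF_8);
    }

    // "contains": slide the pattern over the row's raw UTF-8 bytes.
    // No per-row decoding or isAscii check is needed.
    boolean contains(byte[] value) {
        outer:
        for (int start = 0; start <= value.length - pattern.length; start++) {
            for (int i = 0; i < pattern.length; i++) {
                if (value[start + i] != pattern[i]) {
                    continue outer;
                }
            }
            return true;
        }
        return false;
    }

    // "startsWith": compare the first pattern.length bytes directly.
    boolean startsWith(byte[] value) {
        if (value.length < pattern.length) {
            return false;
        }
        for (int i = 0; i < pattern.length; i++) {
            if (value[i] != pattern[i]) {
                return false;
            }
        }
        return true;
    }
}
```

Because UTF-8 is self-synchronizing, a byte-level match of a valid encoded pattern cannot produce a false positive across character boundaries.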



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5909) need new JMX metrics for (FAILED and CANCELED) queries

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5909:

Labels: doc-impacting ready-to-commit  (was: ready-to-commit)

> need new JMX metrics for (FAILED and CANCELED) queries
> --
>
> Key: DRILL-5909
> URL: https://issues.apache.org/jira/browse/DRILL-5909
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Khurram Faraaz
>Assignee: Prasad Nagaraj Subramanya
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> we have these JMX metrics today
> {noformat}
> drill.queries.running
> drill.queries.completed
> {noformat}
> we need these new JMX metrics
> {noformat}
> drill.queries.failed
> drill.queries.canceled
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249493#comment-16249493
 ] 

ASF GitHub Bot commented on DRILL-5917:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1031
  
@vrozov when I deployed Drill with your changes on a test cluster, Drill 
startup failed with the following error:
```
2017-11-13 01:30:12,3367 ERROR JniCommon 
fs/client/fileclient/cc/jni_MapRClient.cc:616 Thread: 26996 Mismatch found for 
java and native libraries java build version 5.2.2.44680GA, native build 
version 5.2.2.44680.GA java patch vserion $Id: mapr-version: 5.2.2.44680GA 
44680:b0ba7dcefd80b9aaf39f05f3, native patch version $Id: mapr-version: 
5.2.2.44680.GA 44680:b0ba7dcefd80b9aaf39f05f
2017-11-13 01:30:12,3368 ERROR JniCommon 
fs/client/fileclient/cc/jni_MapRClient.cc:626 Thread: 26996 Client 
initialization failed.
Exception in thread "main" 
org.apache.drill.exec.exception.DrillbitStartupException: Failure during 
initial startup of Drillbit.
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:353)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:323)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:319)
Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: Error 
during udf area creation [/user/root/drill/udf/registry] on file system 
[maprfs:///]
at 
org.apache.drill.common.exceptions.DrillRuntimeException.format(DrillRuntimeException.java:49)
at 
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.createArea(RemoteFunctionRegistry.java:277)
at 
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.prepareAreas(RemoteFunctionRegistry.java:234)
at 
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.init(RemoteFunctionRegistry.java:105)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:167)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:349)
... 2 more
Caused by: java.io.IOException: Could not create FileClient
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:626)
at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:679)
at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:1239)
at com.mapr.fs.MapRFileSystem.mkdirs(MapRFileSystem.java:1276)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1917)
at 
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.createArea(RemoteFunctionRegistry.java:254)
... 6 more
```
Without your changes, startup was successful.


> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure we won't have any dependencies from these libs on the 
> org.json:json lib, and ban this lib in the main pom.xml file.
> The issue is critical since the Apache release won't happen until we make 
> sure the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).
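One common way to ban a dependency in the root pom.xml is the maven-enforcer-plugin's `bannedDependencies` rule. A sketch is below; the execution id and placement are illustrative, not the actual Drill commit:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <!-- Illustrative id; fails the build if org.json:json appears
           anywhere in the dependency tree, even transitively. -->
      <id>ban-org-json</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <exclude>org.json:json</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, the MapR artifacts above would need explicit `<exclusion>` entries for org.json:json in their dependency declarations for the build to pass.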



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5917:

Labels:   (was: ready-to-commit)

> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure we won't have any dependencies from these libs on the 
> org.json:json lib, and ban this lib in the main pom.xml file.
> The issue is critical since the Apache release won't happen until we make 
> sure the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5923) State of a successfully completed query shown as "COMPLETED"

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5923:

Labels: doc-impacting ready-to-commit  (was: ready-to-commit)

> State of a successfully completed query shown as "COMPLETED"
> 
>
> Key: DRILL-5923
> URL: https://issues.apache.org/jira/browse/DRILL-5923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> Drill UI currently lists a successfully completed query as "COMPLETED". 
> Successfully completed, failed and canceled queries are all grouped as 
> Completed queries. 
> It would be better to list the state of a successfully completed query as 
> "Succeeded" to avoid confusion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5783) Make code generation in the TopN operator more modular and test it

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5783:

Labels:   (was: ready-to-commit)

> Make code generation in the TopN operator more modular and test it
> --
>
> Key: DRILL-5783
> URL: https://issues.apache.org/jira/browse/DRILL-5783
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> Several other PRs have been batched together with this one. The full 
> description of the work is the following:
> DRILL-5783
> * A unit test is created for the priority queue in the TopN operator
> * The code generation classes passed around a completely unused function 
> registry reference in some places so I removed it.
> * The priority queue had unused parameters for some of its methods so I 
> removed them.
> DRILL-5841
> * There were many different ways in which temporary folders were created in 
> unit tests. I have unified the way these folders are created with the 
> DirTestWatcher, SubDirTestWatcher, and BaseDirTestWatcher. All the unit tests 
> have been updated to use these. The test watchers create temp directories in 
> ./target//. So all the files generated and used in the context of a test can 
> easily be found in the same consistent location.
> * This change should fix the sporadic hashagg test failures, as well as 
> failures caused by stray files in /tmp
> DRILL-5894
> * dfs_test is used as a storage plugin throughout the unit tests. This is 
> highly confusing and we can just use dfs instead.
> *Misc*
> * General code cleanup.
> * There are many places where String.format is used unnecessarily. The test 
> builder methods already use String.format for you when you pass them args. I 
> cleaned some of these up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5783) Make code generation in the TopN operator more modular and test it

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249500#comment-16249500
 ] 

ASF GitHub Bot commented on DRILL-5783:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/984
  
@ilooner changes in 361 files seem to be too much :) I suggest you split 
this PR into at least three according to the addressed issues, so we can merge 
smaller chunks.


> Make code generation in the TopN operator more modular and test it
> --
>
> Key: DRILL-5783
> URL: https://issues.apache.org/jira/browse/DRILL-5783
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> Several other PRs have been batched together with this one. The full 
> description of the work is the following:
> DRILL-5783
> * A unit test is created for the priority queue in the TopN operator
> * The code generation classes passed around a completely unused function 
> registry reference in some places so I removed it.
> * The priority queue had unused parameters for some of its methods so I 
> removed them.
> DRILL-5841
> * There were many different ways in which temporary folders were created in 
> unit tests. I have unified the way these folders are created with the 
> DirTestWatcher, SubDirTestWatcher, and BaseDirTestWatcher. All the unit tests 
> have been updated to use these. The test watchers create temp directories in 
> ./target//. So all the files generated and used in the context of a test can 
> easily be found in the same consistent location.
> * This change should fix the sporadic hashagg test failures, as well as 
> failures caused by stray files in /tmp
> DRILL-5894
> * dfs_test is used as a storage plugin throughout the unit tests. This is 
> highly confusing and we can just use dfs instead.
> *Misc*
> * General code cleanup.
> * There are many places where String.format is used unnecessarily. The test 
> builder methods already use String.format for you when you pass them args. I 
> cleaned some of these up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5922:

Labels:   (was: ready-to-commit)

> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>
> This issue was originally found by [~ben-zvi]. I am able to consistently 
> reproduce the error on my laptop by running the following unit test:
> org.apache.drill.exec.DrillSeparatePlanningTest#testMultiMinorFragmentComplexQuery
> {code}
> java.lang.IllegalStateException: Allocator[ROOT] closed with outstanding 
> child allocators.
> Allocator(ROOT) 0/1048576/10113536/3221225472 (res/actual/peak/limit)
>   child allocators: 1
> Allocator(query:26049b50-0cec-0a92-437c-bbe486e1fcbf) 
> 1048576/0/0/268435456 (res/actual/peak/limit)
>   child allocators: 0
>   ledgers: 0
>   reservations: 0
>   ledgers: 0
>   reservations: 0
>   at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:496) 
> ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> [classes/:na]
>   at 
> org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:256)
>  ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> [classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:205) 
> [classes/:na]
>   at org.apache.drill.BaseTestQuery.closeClient(BaseTestQuery.java:315) 
> [test-classes/:na]
>   at 
> org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:157) 
> [test-classes/:na]
>   at 
> org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:148) 
> [test-classes/:na]
>   at 
> org.apache.drill.exec.DrillSeparatePlanningTest.getFragmentsHelper(DrillSeparatePlanningTest.java:185)
>  [test-classes/:na]
>   at 
> org.apache.drill.exec.DrillSeparatePlanningTest.testMultiMinorFragmentComplexQuery(DrillSeparatePlanningTest.java:108)
>  [test-classes/:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_144]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_144]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  [junit-4.11.jar:na]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.11.jar:na]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  [junit-4.11.jar:na]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:120)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:65)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29)
>  [jmockit-1.3.jar:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_144]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_144]
>   at 
> mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) 
> [jmockit-1.3.jar:na]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  [junit-4.11.jar:na]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.11.jar:na]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  [junit-4.11.jar:na]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5771) Fix serDe errors for format plugins

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5771:

Priority: Major  (was: Minor)

> Fix serDe errors for format plugins
> ---
>
> Key: DRILL-5771
> URL: https://issues.apache.org/jira/browse/DRILL-5771
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Fix For: 1.12.0
>
>
> Create unit tests to check that all storage format plugins can be 
> successfully serialized / deserialized.
> Usually this happens when a query has several major fragments.
> One way to check serde is to generate the physical plan (generated as json) 
> and then submit it back to Drill.
> One example of found errors is described in the first comment. Another 
> example is described in DRILL-5166.
> *Serde issues:*
> 1. Could not obtain the format plugin during deserialization
> The format plugin is created based on the format plugin configuration or its 
> name. On Drill start up we load information about the available plugins (it 
> is reloaded each time a storage plugin is updated, which can be done only by 
> an admin).
> When a query is parsed, we try to get the plugin from the available ones; if 
> we cannot find one, we try to [create 
> one|https://github.com/apache/drill/blob/3e8b01d5b0d3013e3811913f0fd6028b22c1ac3f/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java#L136-L144]
> but at other query execution stages we always assume that the [plugin exists 
> based on the 
> configuration|https://github.com/apache/drill/blob/3e8b01d5b0d3013e3811913f0fd6028b22c1ac3f/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java#L156-L162].
> For example, during query parsing we had to create the format plugin on one 
> node based on the format configuration.
> Then we sent a major fragment to a different node; there we could not get 
> the format plugin based on this format configuration, and deserialization 
> failed.
> To fix this problem we need to create the format plugin during query 
> deserialization if it is absent.
>   
> 2. Absent hash code and equals.
> Format plugins are stored in a hash map where the key is the format plugin 
> config. Since some format plugin configs did not override hash code and 
> equals, we could not find the format plugin based on its configuration.
> 3. Named format plugin usage
> Named format plugin configs allow getting a format plugin by its name for a 
> configuration shared among all drillbits.
> They are used as aliases for pre-configured format plugins. A user with 
> admin privileges can modify them at runtime.
> Named format plugin configs are used instead of sending all non-default 
> parameters of a format plugin config; in this case only the name is sent.
> Their usage in a distributed system may cause race conditions.
> For example:
> 1. A query is submitted.
> 2. A parquet format plugin is created with the following configuration 
> (autoCorrectCorruptDates=>true).
> 3. The named format plugin config is serialized with the name 'parquet'.
> 4. A major fragment is sent to a different node.
> 5. An admin has changed the parquet configuration for the alias 'parquet' on 
> all nodes to autoCorrectCorruptDates=>false.
> 6. The named format is deserialized on the different node into a parquet 
> format plugin with configuration (autoCorrectCorruptDates=>false).
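Issue 2 above can be illustrated with a minimal config class. `TextFormatConfigExample` is a hypothetical stand-in for a real format plugin config; it shows why a hash-map lookup fails without overridden equals and hashCode:

```java
import java.util.Objects;

// Hypothetical format plugin config. Without the equals/hashCode overrides
// below, a freshly deserialized instance with identical fields would not
// match as a HashMap key, so the corresponding format plugin could not be
// found -- the failure mode described in issue 2.
class TextFormatConfigExample {
    final String extension;
    final char delimiter;

    TextFormatConfigExample(String extension, char delimiter) {
        this.extension = extension;
        this.delimiter = delimiter;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TextFormatConfigExample)) return false;
        TextFormatConfigExample that = (TextFormatConfigExample) o;
        return delimiter == that.delimiter
            && Objects.equals(extension, that.extension);
    }

    @Override
    public int hashCode() {
        return Objects.hash(extension, delimiter);
    }
}
```

With these overrides, two config instances built from the same serialized form hash to the same bucket and compare equal, so the plugin registry lookup succeeds.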



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5832) Migrate OperatorFixture to use SystemOptionManager rather than mock

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249634#comment-16249634
 ] 

ASF GitHub Bot commented on DRILL-5832:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/970
  
+1, LGTM.


> Migrate OperatorFixture to use SystemOptionManager rather than mock
> ---
>
> Key: DRILL-5832
> URL: https://issues.apache.org/jira/browse/DRILL-5832
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> The {{OperatorFixture}} provides structure for testing individual operators 
> and other "sub-operator" bits of code. To do that, the framework provides 
> mock network-free and server-free versions of the fragment context and 
> operator context.
> As part of the mock, the {{OperatorFixture}} provides a mock version of the 
> system option manager that provides a simple test-only implementation of an 
> option set.
> With the recent major changes to the system option manager, this mock 
> implementation has drifted out of sync with the system option manager. Rather 
> than upgrading the mock implementation, this ticket asks to use the system 
> option manager directly -- but configured for no ZK or file persistence of 
> options.
> The key reason for this change is that the system option manager has 
> implemented a sophisticated way to handle option defaults; it is better to 
> leverage that than to provide a mock implementation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249636#comment-16249636
 ] 

ASF GitHub Bot commented on DRILL-5842:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/978
  
+1, LGTM.


> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> Drill's execution engine has a "fragment context" that provides state for a 
> fragment as a whole, and an "operator context" which provides state for a 
> single operator. Historically, these have both been concrete classes that 
> make generous references to the Drillbit context, and hence need a full Drill 
> server in order to operate.
> Drill has historically made extensive use of system-level testing: build the 
> entire server and fire queries at it to test each component. Over time, we 
> are augmenting that approach with unit tests: the ability to test each 
> operator (or parts of an operator) in isolation.
> Since each operator requires access to both the operator and fragment 
> context, the fact that the contexts depend on the overall server creates a 
> large barrier to unit testing. An earlier checkin started down the path of 
> defining the contexts as interfaces that can have different run-time and 
> test-time implementations to enable testing.
> This ticket asks to refactor those interfaces: simplifying the operator 
> context and introducing an interface for the fragment context. New code will 
> use these new interfaces, while older code continues to use the concrete 
> implementations. Over time, as operators are enhanced, they can be modified 
> to allow unit-level testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-11-13 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5842:

Labels:   (was: ready-to-commit)

> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> Drill's execution engine has a "fragment context" that provides state for a 
> fragment as a whole, and an "operator context" which provides state for a 
> single operator. Historically, these have both been concrete classes that 
> make generous references to the Drillbit context, and hence need a full Drill 
> server in order to operate.
> Drill has historically made extensive use of system-level testing: build the 
> entire server and fire queries at it to test each component. Over time, we 
> are augmenting that approach with unit tests: the ability to test each 
> operator (or parts of an operator) in isolation.
> Since each operator requires access to both the operator and fragment 
> context, the fact that the contexts depend on the overall server creates a 
> large barrier to unit testing. An earlier checkin started down the path of 
> defining the contexts as interfaces that can have different run-time and 
> test-time implementations to enable testing.
> This ticket asks to refactor those interfaces: simplifying the operator 
> context and introducing an interface for the fragment context. New code will 
> use these new interfaces, while older code continues to use the concrete 
> implementations. Over time, as operators are enhanced, they can be modified 
> to allow unit-level testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5832) Migrate OperatorFixture to use SystemOptionManager rather than mock

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249642#comment-16249642
 ] 

ASF GitHub Bot commented on DRILL-5832:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/970


> Migrate OperatorFixture to use SystemOptionManager rather than mock
> ---
>
> Key: DRILL-5832
> URL: https://issues.apache.org/jira/browse/DRILL-5832
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> The {{OperatorFixture}} provides structure for testing individual operators 
> and other "sub-operator" bits of code. To do that, the framework provides 
> mock network-free and server-free versions of the fragment context and 
> operator context.
> As part of the mock, the {{OperatorFixture}} provides a mock version of the 
> system option manager that provides a simple test-only implementation of an 
> option set.
> With the recent major changes to the system option manager, this mock 
> implementation has drifted out of sync with the system option manager. Rather 
> than upgrading the mock implementation, this ticket asks to use the system 
> option manager directly -- but configured for no ZK or file persistence of 
> options.
> The key reason for this change is that the system option manager has 
> implemented a sophisticated way to handle option defaults; it is better to 
> leverage that than to provide a mock implementation.





[jira] [Commented] (DRILL-5941) Skip header / footer logic works incorrectly for Hive tables when file has several input splits

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249774#comment-16249774
 ] 

ASF GitHub Bot commented on DRILL-5941:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1030
  
@ppadma really good point! I did not realize that by the time readers are 
created, input splits can already be distributed among several drillbits. So I 
guess we should make sure during the planning stage that when a table has skip 
header / footer logic, input splits of the same file reside in one 
`HiveSubScan`. I'll have to make some changes and let you know once done.


> Skip header / footer logic works incorrectly for Hive tables when file has 
> several input splits
> ---
>
> Key: DRILL-5941
> URL: https://issues.apache.org/jira/browse/DRILL-5941
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Fix For: Future
>
>
> *To reproduce*
> 1. Create a csv file with two columns (key, value) and 329 rows, where the 
> first row is a header.
> The data file size should be greater than the chunk size of 256 MB. Copy the 
> file to the distributed file system.
> 2. Create table in Hive:
> {noformat}
> CREATE EXTERNAL TABLE `h_table`(
>   `key` bigint,
>   `value` string)
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY ','
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'maprfs:/tmp/h_table'
> TBLPROPERTIES (
>  'skip.header.line.count'='1');
> {noformat}
> 3. Execute query {{select * from hive.h_table}} in Drill (query data using 
> the Hive plugin). The result will return fewer rows than expected. The expected 
> result is 328 (total count minus one header row).
> *The root cause*
> Since the file is greater than the default chunk size, it is split into several 
> fragments, known as input splits. For example:
> {noformat}
> maprfs:/tmp/h_table/h_table.csv:0+268435456
> maprfs:/tmp/h_table/h_table.csv:268435457+492782112
> {noformat}
> TextHiveReader is responsible for handling the skip header and/or footer logic.
> Currently Drill creates a reader [for each input 
> split|https://github.com/apache/drill/blob/master/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveScanBatchCreator.java#L84]
>  and the skip header and/or footer logic is applied to each input split, 
> though ideally the above-mentioned input splits should be read by one 
> reader so that the skip header/footer logic is applied correctly.
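The row loss described in the root cause above can be illustrated with a small, self-contained sketch (the class and method names are hypothetical, not Drill's actual reader code):

```java
import java.util.Arrays;
import java.util.List;

// Illustrates the bug: when each input split gets its own reader and every
// reader skips `skip.header.line.count` lines, one data row is lost for every
// split after the first.
public class HeaderSkipDemo {

    // Naive per-split behavior: every reader skips the first line of its split.
    static long countRowsPerSplitSkip(List<List<String>> splits, int headerLines) {
        long rows = 0;
        for (List<String> split : splits) {
            rows += Math.max(0, split.size() - headerLines);
        }
        return rows;
    }

    // File-level behavior: only the reader of the first split skips the header.
    static long countRowsFileLevelSkip(List<List<String>> splits, int headerLines) {
        long rows = 0;
        boolean first = true;
        for (List<String> split : splits) {
            rows += first ? Math.max(0, split.size() - headerLines) : split.size();
            first = false;
        }
        return rows;
    }

    public static void main(String[] args) {
        // A file with 1 header line and 4 data rows, split into two input splits.
        List<List<String>> splits = Arrays.asList(
                Arrays.asList("key,value", "1,a", "2,b"),  // split 0: header + 2 rows
                Arrays.asList("3,c", "4,d"));              // split 1: 2 rows
        System.out.println(countRowsPerSplitSkip(splits, 1));   // loses a row from split 1
        System.out.println(countRowsFileLevelSkip(splits, 1));  // counts all 4 data rows
    }
}
```

The same arithmetic explains the reported numbers: 329 lines across two splits with per-split skipping yields 327 rows instead of the expected 328.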





[jira] [Commented] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249775#comment-16249775
 ] 

ASF GitHub Bot commented on DRILL-5917:
---

Github user vrozov commented on the issue:

https://github.com/apache/drill/pull/1031
  
@arina-ielchiieva Please attach the full log to JIRA.


> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure we have no dependencies from these libs on the 
> org.json:json lib and ban this lib in the main pom.xml file.
> The issue is critical since an Apache release won't happen until we make sure 
> the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).
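For reference, a ban like this is typically enforced with the standard Maven Enforcer Plugin's bannedDependencies rule; the fragment below is an illustrative sketch for the main pom.xml, not Drill's actual configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>ban-json-org</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <!-- Category X license, see https://www.apache.org/legal/resolved.html -->
              <exclude>org.json:json</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Since the rule also searches transitive dependencies by default, the org.json artifacts pulled in via the MapR jars would additionally need exclusions on those dependency declarations for the build to pass.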





[jira] [Commented] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249790#comment-16249790
 ] 

ASF GitHub Bot commented on DRILL-5917:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1031
  
@vrozov I have posted the full content of drillbit.out. drillbit.log just had 
a message that the remote function registry directory could not be created.


> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure we have no dependencies from these libs on the 
> org.json:json lib and ban this lib in the main pom.xml file.
> The issue is critical since an Apache release won't happen until we make sure 
> the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).





[jira] [Commented] (DRILL-5783) Make code generation in the TopN operator more modular and test it

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249959#comment-16249959
 ] 

ASF GitHub Bot commented on DRILL-5783:
---

Github user ilooner commented on the issue:

https://github.com/apache/drill/pull/984
  
@arina-ielchiieva I can put **DRILL-5783** in a separate PR. **DRILL-5841** 
and **DRILL-5894** are, however, tightly coupled and must be in the same PR. 
Unfortunately **DRILL-5841** and **DRILL-5894** touched many files because all 
the unit tests were updated, so the PR will still remain large. 


> Make code generation in the TopN operator more modular and test it
> --
>
> Key: DRILL-5783
> URL: https://issues.apache.org/jira/browse/DRILL-5783
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> The work for this PR has had several other PRs batched together with it. The 
> full description of work is the following:
> DRILL-5783
> * A unit test is created for the priority queue in the TopN operator
> * The code generation classes passed around a completely unused function 
> registry reference in some places so I removed it.
> * The priority queue had unused parameters for some of its methods so I 
> removed them.
> DRILL-5841
> * There were many different ways in which temporary folders were created in unit 
> tests. I have unified the way these folders are created with the 
> DirTestWatcher, SubDirTestWatcher, and BaseDirTestWatcher. All the unit tests 
> have been updated to use these. The test watchers create temp directories in 
> ./target//. So all the files generated and used in the context of a test can 
> easily be found in the same consistent location.
> * This change should fix the sporadic hashagg test failures, as well as 
> failures caused by stray files in /tmp
> DRILL-5894
> * dfs_test is used as a storage plugin throughout the unit tests. This is 
> highly confusing and we can just use dfs instead.
> *Misc*
> * General code cleanup.
> * There are many places where String.format is used unnecessarily. The test 
> builder methods already use String.format for you when you pass them args. I 
> cleaned some of these up.





[jira] [Issue Comment Deleted] (DRILL-5889) sqlline loses RPC connection

2017-11-13 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5889:
-
Comment: was deleted

(was: Github user sachouche commented on the issue:

https://github.com/apache/drill/pull/1015
  
Padma, DRILL-5889 is the wrong JIRA (sqlline loses RPC..); I think your 
JIRA is 5899


Regards,

Salim


From: Padma Penumarthy 
Sent: Wednesday, November 1, 2017 11:15:20 AM
To: apache/drill
Cc: Salim Achouche; Mention
Subject: Re: [apache/drill] DRILL-5889: Simple pattern matchers can work 
with DrillBuf directly (#1015)


@sachouche 
@paul-rogers can you please review ?


)

> sqlline loses RPC connection
> 
>
> Key: DRILL-5889
> URL: https://issues.apache.org/jira/browse/DRILL-5889
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
> Attachments: 26183ef9-44b2-ef32-adf8-cc2b5ba9f9c0.sys.drill, 
> drillbit.log
>
>
> Query is:
> {noformat}
> alter session set `planner.memory.max_query_memory_per_node` = 10737418240;
> select count(*), max(`filename`) from dfs.`/drill/testdata/hash-agg/data1` 
> group by no_nulls_col, nulls_col;
> {noformat}
> Error is:
> {noformat}
> 0: jdbc:drill:drillbit=10.10.100.190> select count(*), max(`filename`) from 
> dfs.`/drill/testdata/hash-agg/data1` group by no_nulls_col, nulls_col;
> Error: CONNECTION ERROR: Connection /10.10.100.190:45776 <--> 
> /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down?
> [Error Id: db4aea70-11e6-4e63-b0cc-13cdba0ee87a ] (state=,code=0)
> {noformat}
> From drillbit.log:
> 2017-10-18 14:04:23,044 [UserServer-1] INFO  
> o.a.drill.exec.rpc.user.UserServer - RPC connection /10.10.100.190:31010 <--> 
> /10.10.100.190:45776 (user server) timed out.  Timeout was set to 30 seconds. 
> Closing connection.
> Plan is:
> {noformat}
> | 00-00Screen
> 00-01  Project(EXPR$0=[$0], EXPR$1=[$1])
> 00-02UnionExchange
> 01-01  Project(EXPR$0=[$2], EXPR$1=[$3])
> 01-02HashAgg(group=[{0, 1}], EXPR$0=[$SUM0($2)], EXPR$1=[MAX($3)])
> 01-03  Project(no_nulls_col=[$0], nulls_col=[$1], EXPR$0=[$2], 
> EXPR$1=[$3])
> 01-04HashToRandomExchange(dist0=[[$0]], dist1=[[$1]])
> 02-01  UnorderedMuxExchange
> 03-01Project(no_nulls_col=[$0], nulls_col=[$1], 
> EXPR$0=[$2], EXPR$1=[$3], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 
> hash32AsDouble($0, 1301011))])
> 03-02  HashAgg(group=[{0, 1}], EXPR$0=[COUNT()], 
> EXPR$1=[MAX($2)])
> 03-03Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/hash-agg/data1]], 
> selectionRoot=maprfs:/drill/testdata/hash-agg/data1, numFiles=1, 
> usedMetadataFile=false, columns=[`no_nulls_col`, `nulls_col`, `filename`]]])
> {noformat}





[jira] [Issue Comment Deleted] (DRILL-5889) sqlline loses RPC connection

2017-11-13 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5889:
-
Comment: was deleted

(was: Github user ppadma commented on the issue:

https://github.com/apache/drill/pull/1015
  
@sachouche @paul-rogers can you please review ?
)

> sqlline loses RPC connection
> 
>
> Key: DRILL-5889
> URL: https://issues.apache.org/jira/browse/DRILL-5889
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
> Attachments: 26183ef9-44b2-ef32-adf8-cc2b5ba9f9c0.sys.drill, 
> drillbit.log
>
>
> Query is:
> {noformat}
> alter session set `planner.memory.max_query_memory_per_node` = 10737418240;
> select count(*), max(`filename`) from dfs.`/drill/testdata/hash-agg/data1` 
> group by no_nulls_col, nulls_col;
> {noformat}
> Error is:
> {noformat}
> 0: jdbc:drill:drillbit=10.10.100.190> select count(*), max(`filename`) from 
> dfs.`/drill/testdata/hash-agg/data1` group by no_nulls_col, nulls_col;
> Error: CONNECTION ERROR: Connection /10.10.100.190:45776 <--> 
> /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down?
> [Error Id: db4aea70-11e6-4e63-b0cc-13cdba0ee87a ] (state=,code=0)
> {noformat}
> From drillbit.log:
> 2017-10-18 14:04:23,044 [UserServer-1] INFO  
> o.a.drill.exec.rpc.user.UserServer - RPC connection /10.10.100.190:31010 <--> 
> /10.10.100.190:45776 (user server) timed out.  Timeout was set to 30 seconds. 
> Closing connection.
> Plan is:
> {noformat}
> | 00-00Screen
> 00-01  Project(EXPR$0=[$0], EXPR$1=[$1])
> 00-02UnionExchange
> 01-01  Project(EXPR$0=[$2], EXPR$1=[$3])
> 01-02HashAgg(group=[{0, 1}], EXPR$0=[$SUM0($2)], EXPR$1=[MAX($3)])
> 01-03  Project(no_nulls_col=[$0], nulls_col=[$1], EXPR$0=[$2], 
> EXPR$1=[$3])
> 01-04HashToRandomExchange(dist0=[[$0]], dist1=[[$1]])
> 02-01  UnorderedMuxExchange
> 03-01Project(no_nulls_col=[$0], nulls_col=[$1], 
> EXPR$0=[$2], EXPR$1=[$3], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 
> hash32AsDouble($0, 1301011))])
> 03-02  HashAgg(group=[{0, 1}], EXPR$0=[COUNT()], 
> EXPR$1=[MAX($2)])
> 03-03Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/hash-agg/data1]], 
> selectionRoot=maprfs:/drill/testdata/hash-agg/data1, numFiles=1, 
> usedMetadataFile=false, columns=[`no_nulls_col`, `nulls_col`, `filename`]]])
> {noformat}





[jira] [Issue Comment Deleted] (DRILL-5889) sqlline loses RPC connection

2017-11-13 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5889:
-
Comment: was deleted

(was: GitHub user ppadma opened a pull request:

https://github.com/apache/drill/pull/1015

DRILL-5889: Simple pattern matchers can work with DrillBuf directly

For the 4 simple patterns we have, i.e. startsWith, endsWith, contains and 
constant, we do not need the overhead of charSequenceWrapper. We can work with 
DrillBuf directly. This will save us from doing the isAscii check and UTF-8 
decoding for each row.
UTF-8 encoding ensures that no UTF-8 character is a prefix of any other 
valid character. So, instead of decoding the varChar from each row we are 
processing, we encode the patternString once during setup and do a raw byte 
comparison. 
This improved overall performance of the filter operator by around 20%.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppadma/drill DRILL-5899

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/1015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1015


commit ebe2c6f9110f0501a85cb5ae6e119e05254b9f3e
Author: Padma Penumarthy 
Date:   2017-10-25T20:37:25Z

DRILL-5889: Simple pattern matchers can work with DrillBuf directly


)
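The approach described in that pull request — encode the pattern to UTF-8 bytes once at setup, then compare raw bytes per row — can be sketched as follows. Plain byte arrays stand in for DrillBuf, and the class name is illustrative, not Drill's actual matcher:

```java
import java.nio.charset.StandardCharsets;

// Sketch of byte-level pattern matching: the pattern is UTF-8 encoded once,
// and each row's value bytes are compared directly with no per-row decoding.
// Because no UTF-8 sequence is a prefix of another valid sequence, a byte-wise
// match of the whole encoded pattern can only occur at a character boundary.
public class BytePatternMatcher {
    private final byte[] pattern;

    public BytePatternMatcher(String patternString) {
        this.pattern = patternString.getBytes(StandardCharsets.UTF_8);
    }

    public boolean contains(byte[] value) {
        outer:
        for (int start = 0; start <= value.length - pattern.length; start++) {
            for (int i = 0; i < pattern.length; i++) {
                if (value[start + i] != pattern[i]) {
                    continue outer;
                }
            }
            return true;
        }
        return false;
    }

    public boolean startsWith(byte[] value) {
        if (value.length < pattern.length) {
            return false;
        }
        for (int i = 0; i < pattern.length; i++) {
            if (value[i] != pattern[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BytePatternMatcher m = new BytePatternMatcher("bc");
        System.out.println(m.contains("abcd".getBytes(StandardCharsets.UTF_8))); // true
    }
}
```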

> sqlline loses RPC connection
> 
>
> Key: DRILL-5889
> URL: https://issues.apache.org/jira/browse/DRILL-5889
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
> Attachments: 26183ef9-44b2-ef32-adf8-cc2b5ba9f9c0.sys.drill, 
> drillbit.log
>
>
> Query is:
> {noformat}
> alter session set `planner.memory.max_query_memory_per_node` = 10737418240;
> select count(*), max(`filename`) from dfs.`/drill/testdata/hash-agg/data1` 
> group by no_nulls_col, nulls_col;
> {noformat}
> Error is:
> {noformat}
> 0: jdbc:drill:drillbit=10.10.100.190> select count(*), max(`filename`) from 
> dfs.`/drill/testdata/hash-agg/data1` group by no_nulls_col, nulls_col;
> Error: CONNECTION ERROR: Connection /10.10.100.190:45776 <--> 
> /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down?
> [Error Id: db4aea70-11e6-4e63-b0cc-13cdba0ee87a ] (state=,code=0)
> {noformat}
> From drillbit.log:
> 2017-10-18 14:04:23,044 [UserServer-1] INFO  
> o.a.drill.exec.rpc.user.UserServer - RPC connection /10.10.100.190:31010 <--> 
> /10.10.100.190:45776 (user server) timed out.  Timeout was set to 30 seconds. 
> Closing connection.
> Plan is:
> {noformat}
> | 00-00Screen
> 00-01  Project(EXPR$0=[$0], EXPR$1=[$1])
> 00-02UnionExchange
> 01-01  Project(EXPR$0=[$2], EXPR$1=[$3])
> 01-02HashAgg(group=[{0, 1}], EXPR$0=[$SUM0($2)], EXPR$1=[MAX($3)])
> 01-03  Project(no_nulls_col=[$0], nulls_col=[$1], EXPR$0=[$2], 
> EXPR$1=[$3])
> 01-04HashToRandomExchange(dist0=[[$0]], dist1=[[$1]])
> 02-01  UnorderedMuxExchange
> 03-01Project(no_nulls_col=[$0], nulls_col=[$1], 
> EXPR$0=[$2], EXPR$1=[$3], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 
> hash32AsDouble($0, 1301011))])
> 03-02  HashAgg(group=[{0, 1}], EXPR$0=[COUNT()], 
> EXPR$1=[MAX($2)])
> 03-03Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/hash-agg/data1]], 
> selectionRoot=maprfs:/drill/testdata/hash-agg/data1, numFiles=1, 
> usedMetadataFile=false, columns=[`no_nulls_col`, `nulls_col`, `filename`]]])
> {noformat}





[jira] [Created] (DRILL-5960) Add function STAsGeoJSON to extend GIS support

2017-11-13 Thread Chris Sandison (JIRA)
Chris Sandison created DRILL-5960:
-

 Summary: Add function STAsGeoJSON to extend GIS support
 Key: DRILL-5960
 URL: https://issues.apache.org/jira/browse/DRILL-5960
 Project: Apache Drill
  Issue Type: Bug
Reporter: Chris Sandison
Priority: Minor


Add a function as a wrapper around ESRI's `asGeoJson` functionality. 

The implementation is very similar to STAsText.





[jira] [Updated] (DRILL-5960) Add function STAsGeoJSON to extend GIS support

2017-11-13 Thread Chris Sandison (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sandison updated DRILL-5960:
--
Component/s: Functions - Drill

> Add function STAsGeoJSON to extend GIS support
> --
>
> Key: DRILL-5960
> URL: https://issues.apache.org/jira/browse/DRILL-5960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Reporter: Chris Sandison
>Priority: Minor
>  Labels: features, newbie
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Add a function as a wrapper around ESRI's `asGeoJson` functionality. 
> The implementation is very similar to STAsText.





[jira] [Commented] (DRILL-5941) Skip header / footer logic works incorrectly for Hive tables when file has several input splits

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250007#comment-16250007
 ] 

ASF GitHub Bot commented on DRILL-5941:
---

Github user ppadma commented on the issue:

https://github.com/apache/drill/pull/1030
  
@arina-ielchiieva That will have a performance impact. A better way would be 
to keep one reader per split and see if we can figure out a way to tell readers 
how many rows they should skip (for the header or footer).


> Skip header / footer logic works incorrectly for Hive tables when file has 
> several input splits
> ---
>
> Key: DRILL-5941
> URL: https://issues.apache.org/jira/browse/DRILL-5941
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Fix For: Future
>
>
> *To reproduce*
> 1. Create a csv file with two columns (key, value) and 329 rows, where the 
> first row is a header.
> The data file size should be greater than the chunk size of 256 MB. Copy the 
> file to the distributed file system.
> 2. Create table in Hive:
> {noformat}
> CREATE EXTERNAL TABLE `h_table`(
>   `key` bigint,
>   `value` string)
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY ','
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'maprfs:/tmp/h_table'
> TBLPROPERTIES (
>  'skip.header.line.count'='1');
> {noformat}
> 3. Execute query {{select * from hive.h_table}} in Drill (query data using 
> the Hive plugin). The result will return fewer rows than expected. The expected 
> result is 328 (total count minus one header row).
> *The root cause*
> Since the file is greater than the default chunk size, it is split into several 
> fragments, known as input splits. For example:
> {noformat}
> maprfs:/tmp/h_table/h_table.csv:0+268435456
> maprfs:/tmp/h_table/h_table.csv:268435457+492782112
> {noformat}
> TextHiveReader is responsible for handling the skip header and/or footer logic.
> Currently Drill creates a reader [for each input 
> split|https://github.com/apache/drill/blob/master/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveScanBatchCreator.java#L84]
>  and the skip header and/or footer logic is applied to each input split, 
> though ideally the above-mentioned input splits should be read by one 
> reader so that the skip header/footer logic is applied correctly.





[jira] [Commented] (DRILL-5917) Ban org.json:json library in Drill

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250006#comment-16250006
 ] 

ASF GitHub Bot commented on DRILL-5917:
---

Github user vrozov commented on the issue:

https://github.com/apache/drill/pull/1031
  
@arina-ielchiieva Please test with the latest commit. I changed the mapr 
version back to 5.2.1; note that 2.7.0-mapr-1707 is compiled with 5.2.2.


> Ban org.json:json library in Drill
> --
>
> Key: DRILL-5917
> URL: https://issues.apache.org/jira/browse/DRILL-5917
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Vlad Rozov
> Fix For: 1.12.0
>
>
> Apache Drill has dependencies on json.org lib indirectly from two libraries:
> com.mapr.hadoop:maprfs:jar:5.2.1-mapr
> com.mapr.fs:mapr-hbase:jar:5.2.1-mapr
> {noformat}
> [INFO] org.apache.drill.contrib:drill-format-mapr:jar:1.12.0-SNAPSHOT
> [INFO] +- com.mapr.hadoop:maprfs:jar:5.2.1-mapr:compile
> [INFO] |  \- org.json:json:jar:20080701:compile
> [INFO] \- com.mapr.fs:mapr-hbase:jar:5.2.1-mapr:compile
> [INFO]\- (org.json:json:jar:20080701:compile - omitted for duplicate)
> {noformat}
> We need to make sure we have no dependencies from these libs on the 
> org.json:json lib and ban this lib in the main pom.xml file.
> The issue is critical since an Apache release won't happen until we make sure 
> the org.json:json lib is not used (https://www.apache.org/legal/resolved.html).





[jira] [Commented] (DRILL-5960) Add function STAsGeoJSON to extend GIS support

2017-11-13 Thread Charles Givre (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250026#comment-16250026
 ] 

Charles Givre commented on DRILL-5960:
--

Hi [~ChrisSandison],
Will you be submitting a PR for this? I'm reviewing another one 
(https://github.com/apache/drill/pull/258) which implements a lot of GIS 
functionality. I'm hoping to get these into v1.12, and if you can get this 
submitted, I'd be happy to review it. 


> Add function STAsGeoJSON to extend GIS support
> --
>
> Key: DRILL-5960
> URL: https://issues.apache.org/jira/browse/DRILL-5960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Reporter: Chris Sandison
>Priority: Minor
>  Labels: features, newbie
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Add function as wrapper to ESRI's `asGeoJson` functionality. 
> Implementation is very similar to STAsText





[jira] [Commented] (DRILL-5941) Skip header / footer logic works incorrectly for Hive tables when file has several input splits

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250120#comment-16250120
 ] 

ASF GitHub Bot commented on DRILL-5941:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/1030
  
FWIW, the native CSV reader does the following:

* To read the header, it seeks to offset 0 in the file, regardless of the 
block being read, then reads the header, which may be a remote read.
* The reader then seeks to the start of its block. If this is the first 
block, it skips the header, else it searches for the start of the next record.

Since Hive has the same challenges, Hive must have solved this; we only 
need to research that existing solution.

One simple solution is:

* If block number is 0, skip the header.
* If block number is 1 or larger, look for the next record separator.


> Skip header / footer logic works incorrectly for Hive tables when file has 
> several input splits
> ---
>
> Key: DRILL-5941
> URL: https://issues.apache.org/jira/browse/DRILL-5941
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Fix For: Future
>
>
> *To reproduce*
> 1. Create a csv file with two columns (key, value) and 329 rows, where the 
> first row is a header.
> The data file size should be greater than the chunk size of 256 MB. Copy the 
> file to the distributed file system.
> 2. Create table in Hive:
> {noformat}
> CREATE EXTERNAL TABLE `h_table`(
>   `key` bigint,
>   `value` string)
> ROW FORMAT DELIMITED
>   FIELDS TERMINATED BY ','
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'maprfs:/tmp/h_table'
> TBLPROPERTIES (
>  'skip.header.line.count'='1');
> {noformat}
> 3. Execute query {{select * from hive.h_table}} in Drill (query data using 
> the Hive plugin). The result will return fewer rows than expected. The expected 
> result is 328 (total count minus one header row).
> *The root cause*
> Since the file is greater than the default chunk size, it is split into several 
> fragments, known as input splits. For example:
> {noformat}
> maprfs:/tmp/h_table/h_table.csv:0+268435456
> maprfs:/tmp/h_table/h_table.csv:268435457+492782112
> {noformat}
> TextHiveReader is responsible for handling the skip header and/or footer logic.
> Currently Drill creates a reader [for each input 
> split|https://github.com/apache/drill/blob/master/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveScanBatchCreator.java#L84]
>  and the skip header and/or footer logic is applied to each input split, 
> though ideally the above-mentioned input splits should be read by one 
> reader so that the skip header/footer logic is applied correctly.





[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250256#comment-16250256
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user vladimirtkach commented on the issue:

https://github.com/apache/drill/pull/1026
  
What do you think about having two functions instead: convertFromJSON and 
convertFromJSON plus some suffix? The second would be able to convert NaN and Infinity.


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals like NaN, Infinity, and -Infinity. By default these options will 
> be switched off; the user will be able to toggle them during the working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended the signature of the {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding a second optional parameter that enables reading/writing NaN 
> and Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); will result with 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); will parse 
> NaN as a number.
> {noformat}
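The effect of such a toggle can be illustrated with a minimal stand-alone sketch (this is not Drill's JSON reader; in Jackson-based readers the equivalent switch is the ALLOW_NON_NUMERIC_NUMBERS parser feature):

```java
// Minimal sketch of what a "non-numeric numbers" option does: with the option
// off, the tokens NaN / Infinity / -Infinity are rejected; with it on, they
// are mapped to the corresponding IEEE-754 double values.
public class LenientNumbers {

    static double readNumber(String token, boolean allowNonNumeric) {
        boolean nonNumeric = token.equals("NaN")
                || token.equals("Infinity")
                || token.equals("-Infinity");
        if (nonNumeric && !allowNonNumeric) {
            throw new IllegalArgumentException(
                    "Non-standard number literal not allowed: " + token);
        }
        // Java's Double.parseDouble already understands all three literals.
        return Double.parseDouble(token);
    }

    public static void main(String[] args) {
        System.out.println(readNumber("NaN", true));        // NaN
        System.out.println(readNumber("-Infinity", true));  // -Infinity
        System.out.println(readNumber("2.5", false));       // 2.5
    }
}
```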





[jira] [Updated] (DRILL-5922) Intermittent Memory Leaks in the ROOT allocator

2017-11-13 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5922:
-
Labels: ready-to-commit  (was: )

> Intermittent Memory Leaks in the ROOT allocator  
> -
>
> Key: DRILL-5922
> URL: https://issues.apache.org/jira/browse/DRILL-5922
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Minor
>  Labels: ready-to-commit
>
> This issue was originally found by [~ben-zvi]. I am able to consistently 
> reproduce the error on my laptop by running the following unit test:
> org.apache.drill.exec.DrillSeparatePlanningTest#testMultiMinorFragmentComplexQuery
> {code}
> java.lang.IllegalStateException: Allocator[ROOT] closed with outstanding 
> child allocators.
> Allocator(ROOT) 0/1048576/10113536/3221225472 (res/actual/peak/limit)
>   child allocators: 1
> Allocator(query:26049b50-0cec-0a92-437c-bbe486e1fcbf) 
> 1048576/0/0/268435456 (res/actual/peak/limit)
>   child allocators: 0
>   ledgers: 0
>   reservations: 0
>   ledgers: 0
>   reservations: 0
>   at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:496) 
> ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> [classes/:na]
>   at 
> org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:256)
>  ~[classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> [classes/:na]
>   at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> [classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:205) 
> [classes/:na]
>   at org.apache.drill.BaseTestQuery.closeClient(BaseTestQuery.java:315) 
> [test-classes/:na]
>   at 
> org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:157) 
> [test-classes/:na]
>   at 
> org.apache.drill.BaseTestQuery.updateTestCluster(BaseTestQuery.java:148) 
> [test-classes/:na]
>   at 
> org.apache.drill.exec.DrillSeparatePlanningTest.getFragmentsHelper(DrillSeparatePlanningTest.java:185)
>  [test-classes/:na]
>   at 
> org.apache.drill.exec.DrillSeparatePlanningTest.testMultiMinorFragmentComplexQuery(DrillSeparatePlanningTest.java:108)
>  [test-classes/:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_144]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_144]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  [junit-4.11.jar:na]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.11.jar:na]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  [junit-4.11.jar:na]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:120)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:65)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.integration.junit4.internal.MockFrameworkMethod.invokeExplosively(MockFrameworkMethod.java:29)
>  [jmockit-1.3.jar:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_144]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_144]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_144]
>   at 
> mockit.internal.util.MethodReflection.invokeWithCheckedThrows(MethodReflection.java:95)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.internal.annotations.MockMethodBridge.callMock(MockMethodBridge.java:76)
>  [jmockit-1.3.jar:na]
>   at 
> mockit.internal.annotations.MockMethodBridge.invoke(MockMethodBridge.java:41) 
> [jmockit-1.3.jar:na]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  [junit-4.11.jar:na]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.11.jar:na]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  [junit-4.11.jar:na]
> {code}
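The failure above is the ROOT allocator being closed while a query's child allocator is still registered. A minimal, self-contained sketch of that parent/child invariant (illustrative classes only, not Drill's actual BaseAllocator):

```java
// Illustrative sketch: a parent allocator must not be closed while child
// allocators are still open, mirroring the check that produced the
// IllegalStateException in the stack trace above.
import java.util.ArrayList;
import java.util.List;

public class AllocatorSketch {
  static class Allocator implements AutoCloseable {
    final String name;
    final List<Allocator> children = new ArrayList<>();

    Allocator(String name) { this.name = name; }

    Allocator newChild(String childName) {
      Allocator child = new Allocator(childName);
      children.add(child);
      return child;
    }

    // Called by the owner once a child has been closed.
    void release(Allocator child) { children.remove(child); }

    @Override
    public void close() {
      if (!children.isEmpty()) {
        throw new IllegalStateException(
            "Allocator[" + name + "] closed with outstanding child allocators.");
      }
    }
  }

  public static void main(String[] args) {
    Allocator root = new Allocator("ROOT");
    Allocator query = root.newChild("query:26049b50");
    // Correct teardown order: close and release the child before the parent.
    query.close();
    root.release(query);
    root.close();  // succeeds: no outstanding children
    System.out.println("closed cleanly");
  }
}
```

Closing in reverse order of creation is what `AutoCloseables.close` relies on; the intermittent leak appears when a query-level allocator outlives the Drillbit shutdown.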



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Updated] (DRILL-5936) Refactor MergingRecordBatch based on code inspection

2017-11-13 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5936:
-
Labels: ready-to-commit  (was: )

> Refactor MergingRecordBatch based on code inspection
> 
>
> Key: DRILL-5936
> URL: https://issues.apache.org/jira/browse/DRILL-5936
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Minor
>  Labels: ready-to-commit
>
> * Reorganize code to remove unnecessary {{pqueue.peek()}}
> * Reuse Node



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250406#comment-16250406
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150686404
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ops/FragmentContext.java ---
@@ -229,7 +229,7 @@ public DrillbitContext getDrillbitContext() {
 return context;
   }
 
-  public SchemaPlus getRootSchema() {
+  public SchemaPlus getFullRootSchema() {
--- End diff --

Comment to explain what a "full root schema" is? Apparently, this is both 
the plugin config name and workspace combined?


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin, while building the schema tree. This is done regardless of the actual 
> plugins involved within a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> being down, the overall query time taken increases drastically. Most likely 
> due to the attempt being made to register schemas from a faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server, and at one 
> point the underlying SQL Server db goes down, any Drill query starting to 
> execute at that point and beyond begin to slow down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250400#comment-16250400
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150682787
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicSchema.java
 ---
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import org.apache.calcite.jdbc.CalciteSchema;
+import org.apache.calcite.jdbc.SimpleCalciteSchema;
+import org.apache.calcite.schema.Schema;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+
+
+/**
+ * Unlike SimpleCalciteSchema, DynamicSchema could have an empty or partial schemaMap, but it can maintain a map of
+ * name->SchemaFactory, and only register a schema when the corresponding name is requested.
+ */
+public class DynamicSchema extends SimpleCalciteSchema {
+
+  protected SchemaConfig schemaConfig;
--- End diff --

Seems the schemaConfig is never set or used.


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin, while building the schema tree. This is done regardless of the actual 
> plugins involved within a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> being down, the overall query time taken increases drastically. Most likely 
> due to the attempt being made to register schemas from a faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server, and at one 
> point the underlying SQL Server db goes down, any Drill query starting to 
> execute at that point and beyond begin to slow down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250390#comment-16250390
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150679941
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig schemaConfig) {
+    super(null, new RootSchema(), "");
+    this.schemaConfig = schemaConfig;
+    this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean caseSensitive) {
+    CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+    if (retSchema == null) {
+      loadSchemaFactory(schemaName, caseSensitive);
+    }
+
+    retSchema = getSubSchemaMap().get(schemaName);
+    return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * Loads the schema factory (storage plugin) for the given schema name.
+   * @param schemaName name of the schema to load
+   * @param caseSensitive whether the lookup is case-sensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+try {
+  SchemaPlus thisPlus = this.plus();
+  StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
+  else {
+    // this schema name could be `dfs.tmp`, a 2nd-level schema under `dfs`
+    String[] paths = schemaName.split("\\.");
--- End diff --

Should this be done here in this simple way? How many other places do we do 
the same thing? Or, should we have a common function to split schema names so 
we can handle, say, escapes and other special cases that might come along?
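A common helper of the sort the review suggests might look like this (a hypothetical utility, not part of the Drill codebase): it splits `dfs.tmp` into its components but keeps a back-quoted component intact, which a plain `schemaName.split("\\.")` cannot do.

```java
// Hypothetical schema-path splitter: an unquoted dot separates components,
// while back-quotes protect dots inside a single component name.
import java.util.ArrayList;
import java.util.List;

public class SchemaPaths {
  public static List<String> split(String schemaName) {
    List<String> parts = new ArrayList<>();
    StringBuilder cur = new StringBuilder();
    boolean quoted = false;
    for (char c : schemaName.toCharArray()) {
      if (c == '`') {
        quoted = !quoted;             // toggle quoting, drop the back-quote
      } else if (c == '.' && !quoted) {
        parts.add(cur.toString());    // unquoted dot ends a component
        cur.setLength(0);
      } else {
        cur.append(c);
      }
    }
    parts.add(cur.toString());
    return parts;
  }

  public static void main(String[] args) {
    System.out.println(split("dfs.tmp"));      // [dfs, tmp]
    System.out.println(split("`dfs.a`.tmp"));  // [dfs.a, tmp]
  }
}
```

Centralizing the split in one place would let escape handling evolve without touching every call site.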


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250407#comment-16250407
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150685946
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
 ---
@@ -532,7 +572,10 @@ public boolean isMutable() {
 }
 
    public DrillFileSystem getFS() {
-      return fs;
+      if (this.fs == null) {
+        this.fs = ImpersonationUtil.createFileSystem(schemaConfig.getUserName(), fsConf);
+      }
+      return this.fs;
--- End diff --

No need for `this.fs`, just `fs` will do.
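The diff under review is a plain lazy-initialization pattern; written without the redundant `this.` qualifier it reads like the sketch below (a stand-in stub, with `createFileSystem` replacing `ImpersonationUtil.createFileSystem`):

```java
// Lazy initialization: the expensive resource is created on first use only,
// then reused on every later call. String stands in for DrillFileSystem.
public class LazyFs {
  private String fs;  // null until first getFS() call

  private static String createFileSystem(String user) {
    return "fs-for-" + user;  // stub for ImpersonationUtil.createFileSystem
  }

  public String getFS(String userName) {
    if (fs == null) {
      fs = createFileSystem(userName);  // created exactly once
    }
    return fs;
  }

  public static void main(String[] args) {
    LazyFs lazy = new LazyFs();
    System.out.println(lazy.getFS("alice"));  // fs-for-alice
    System.out.println(lazy.getFS("bob"));    // fs-for-alice (cached)
  }
}
```

Note the second caller gets the cached instance created for the first user, which is exactly why the real code keys the file system to the schema's configured user rather than the caller.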


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin, while building the schema tree. This is done regardless of the actual 
> plugins involved within a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> being down, the overall query time taken increases drastically. Most likely 
> due to the attempt being made to register schemas from a faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server, and at one 
> point the underlying SQL Server db goes down, any Drill query starting to 
> execute at that point and beyond begin to slow down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250393#comment-16250393
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150679636
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig schemaConfig) {
+    super(null, new RootSchema(), "");
+    this.schemaConfig = schemaConfig;
+    this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean caseSensitive) {
+    CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+    if (retSchema == null) {
+      loadSchemaFactory(schemaName, caseSensitive);
+    }
+
+    retSchema = getSubSchemaMap().get(schemaName);
+    return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * Loads the schema factory (storage plugin) for the given schema name.
+   * @param schemaName name of the schema to load
+   * @param caseSensitive whether the lookup is case-sensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+try {
+  SchemaPlus thisPlus = this.plus();
+  StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
--- End diff --

If the name is `dfs.test`, we first look up the compound name, then the 
parts? Why? Do we put the compound names in the map? Or can we have one schema 
named "dfs.test" and another called `dfs`.`test`? Or, can this code be 
restructured a bit?


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled sto

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250388#comment-16250388
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150678680
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig 
schemaConfig) {
+super(null, new RootSchema(), "");
+this.schemaConfig = schemaConfig;
+this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean 
caseSensitive) {
+CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+if (retSchema == null) {
+  loadSchemaFactory(schemaName, caseSensitive);
+}
+
+retSchema = getSubSchemaMap().get(schemaName);
+return retSchema;
--- End diff --

if the original call returns non-null, we make the same call a second time. 
Better:
```
retSchema = ...
if (retSchema != null) { return retSchema; }
loadSchemaFactory(...)
return getSubSchemaMap()...
```
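Applied to the method under review, the early-return form can be demonstrated with a plain map standing in for `getSubSchemaMap()` and a stub for `loadSchemaFactory` (a sketch of the suggested restructuring, not the merged change):

```java
// Early-return lookup: return on a cache hit, and fall through to the
// load-then-lookup path only on a miss, avoiding the redundant second get.
import java.util.HashMap;
import java.util.Map;

public class LazyLookup {
  static final Map<String, String> schemas = new HashMap<>();

  // Stand-in for loadSchemaFactory(): registers the schema on demand.
  static void loadSchemaFactory(String name) {
    schemas.put(name, "schema:" + name);
  }

  static String getSubSchema(String name) {
    String s = schemas.get(name);
    if (s != null) {
      return s;                  // hit: no second lookup, no load attempt
    }
    loadSchemaFactory(name);
    return schemas.get(name);    // single lookup after the lazy load
  }

  public static void main(String[] args) {
    System.out.println(getSubSchema("dfs"));  // loads, then returns schema:dfs
    System.out.println(getSubSchema("dfs"));  // cached: returns immediately
  }
}
```

The behavioral difference from the quoted code is only on the hit path: one map lookup instead of two.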


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin, while building the schema tree. This is done regardless of the actual 
> plugins involved within a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> being down, the overall query time taken increases drastically. Most likely 
> due to the attempt being made to register schemas from a faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server, and at one 
> point the underlying SQL Server db goes down, any Drill query starting to 
> execute at that point and beyond begin to slow down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250403#comment-16250403
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150685414
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
 ---
@@ -150,14 +152,30 @@ public WorkspaceSchemaFactory(
* @return True if the user has access. False otherwise.
*/
   public boolean accessible(final String userName) throws IOException {
-    final FileSystem fs = ImpersonationUtil.createFileSystem(userName, fsConf);
+    final DrillFileSystem fs = ImpersonationUtil.createFileSystem(userName, fsConf);
+    return accessible(fs);
+  }
+
+  /**
+   * Checks whether a FileSystem object has permission to list/read the workspace directory.
+   * @param fs a DrillFileSystem object that was created with a certain user's privileges
+   * @return True if the user has access. False otherwise.
+   * @throws IOException
+   */
+  public boolean accessible(DrillFileSystem fs) throws IOException {
 try {
-  // We have to rely on the listStatus as a FileSystem can have 
complicated controls such as regular unix style
-  // permissions, Access Control Lists (ACLs) or Access Control 
Expressions (ACE). Hadoop 2.7 version of FileSystem
-  // has a limited private API (FileSystem.access) to check the 
permissions directly
-  // (see https://issues.apache.org/jira/browse/HDFS-6570). Drill 
currently relies on Hadoop 2.5.0 version of
-  // FileClient. TODO: Update this when DRILL-3749 is fixed.
-  fs.listStatus(wsPath);
+      /**
+       * For the Windows local file system, fs.access ends up using DeprecatedRawLocalFileStatus, which has
+       * TrustedInstaller as owner, and a member of the Administrators group could not satisfy the permission.
+       * In this case, we will still use the listStatus method.
+       * In other cases, we use the access method since it is cheaper.
+       */
+      if (SystemUtils.IS_OS_WINDOWS && fs.getUri().getScheme().equalsIgnoreCase("file")) {
--- End diff --

HDFS probably defines a constant for "file". Should we reference that?
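The check in question is a plain URI-scheme comparison; with `java.net.URI` it reduces to the sketch below. Hadoop does expose the local scheme through `FsConstants.LOCAL_FS_URI` (`file:///`), which could replace the string literal as the reviewer suggests, though the sketch keeps the literal to stay self-contained.

```java
// Scheme check for the local file system, as in the diff under review.
// "file" is the local-filesystem URI scheme; hdfs://, s3a://, etc. are remote.
import java.net.URI;

public class SchemeCheck {
  static boolean isLocalFile(URI uri) {
    return "file".equalsIgnoreCase(uri.getScheme());
  }

  public static void main(String[] args) {
    System.out.println(isLocalFile(URI.create("file:///tmp/ws")));    // true
    System.out.println(isLocalFile(URI.create("hdfs://nn:8020/ws"))); // false
  }
}
```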


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin, while building the schema tree. This is done regardless of the actual 
> plugins involved within a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> being down, the overall query time taken increases drastically. Most likely 
> due to the attempt being made to register schemas from a faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server, and at one 
> point the underlying SQL Server db goes down, any Drill query starting to 
> execute at that point and beyond begin to slow down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250405#comment-16250405
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150684042
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
 ---
@@ -52,9 +55,20 @@
 
   private List<WorkspaceSchemaFactory> factories;
   private String schemaName;
+  protected FileSystemPlugin plugin;
 
   public FileSystemSchemaFactory(String schemaName, List<WorkspaceSchemaFactory> factories) {
     super();
+    if (factories.size() > 0) {
+      this.plugin = factories.get(0).getPlugin();
--- End diff --

A comment would be helpful: what is special about plugin 0 that it should 
be the default? Is plugin 0 the default of some sort?


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin, while building the schema tree. This is done regardless of the actual 
> plugins involved within a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> being down, the overall query time taken increases drastically. Most likely 
> due to the attempt being made to register schemas from a faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server, and at one 
> point the underlying SQL Server db goes down, any Drill query starting to 
> execute at that point and beyond begin to slow down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250397#comment-16250397
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150682252
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig 
schemaConfig) {
+super(null, new RootSchema(), "");
+this.schemaConfig = schemaConfig;
+this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean 
caseSensitive) {
+CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+if (retSchema == null) {
+  loadSchemaFactory(schemaName, caseSensitive);
+}
+
+retSchema = getSubSchemaMap().get(schemaName);
+return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * Loads the schema factory (storage plugin) for the given schema name.
+   * @param schemaName name of the schema (plugin) to load
+   * @param caseSensitive whether the lookup is case sensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+try {
+  SchemaPlus thisPlus = this.plus();
+  StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
+  else {
+// This schema name could be `dfs.tmp`, a second-level schema under 'dfs'.
+String[] paths = schemaName.split("\\.");
+if (paths.length == 2) {
+  plugin = getSchemaFactories().getPlugin(paths[0]);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
+
+  final CalciteSchema secondLevelSchema = 
getSubSchemaMap().get(paths[0]).getSubSchema(paths[1], caseSensitive);
+  if (secondLevelSchema != null) {
+SchemaPlus secondlevel = secondLevelSchema.plus();
+org.apache.drill.exec.store.AbstractSchema drillSchema =
--- End diff --

Fully qualified name needed? Or, will an import work?
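
For readers following along: the fully qualified `org.apache.drill.exec.store.AbstractSchema` is likely needed because Calcite's same-named `AbstractSchema` is already imported in this file, and Java permits only one single-type import per simple name. A standalone sketch of the same situation, using the JDK's two `Date` classes:

```java
import java.util.Date;

public class FqnDemo {
  public static String kinds() {
    Date utilDate = new Date(0L);                   // the imported java.util.Date
    java.sql.Date sqlDate = new java.sql.Date(0L);  // clashing simple name: must stay fully qualified
    return utilDate.getClass().getSimpleName() + "/" + sqlDate.getClass().getSimpleName();
  }

  public static void main(String[] args) {
    System.out.println(kinds());  // Date/Date
  }
}
```

So an import only helps when the second `AbstractSchema` is not referenced in the same file; otherwise one of the two must remain fully qualified.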


> Skip initializing all enabled storage plugins for every query

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250399#comment-16250399
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150684114
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
 ---
@@ -52,9 +55,20 @@
 
  private List<WorkspaceSchemaFactory> factories;
  private String schemaName;
+  protected FileSystemPlugin plugin;
 
  public FileSystemSchemaFactory(String schemaName, List<WorkspaceSchemaFactory> factories) {
    super();
+    if (factories.size() > 0) {
+      this.plugin = factories.get(0).getPlugin();
+    }
+    this.schemaName = schemaName;
+    this.factories = factories;
+  }
+
+  public FileSystemSchemaFactory(FileSystemPlugin plugin, String schemaName, List<WorkspaceSchemaFactory> factories) {
+    super();
--- End diff --

Omit; Java does this by default.
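
The point about `super()` can be shown in isolation: per the Java Language Specification, a constructor with no explicit superclass constructor call gets an implicit `super()` inserted by the compiler, so writing it adds nothing.

```java
// Demonstrates that an explicit no-arg super() call is redundant: both
// subclasses run the base constructor in exactly the same way.
public class ImplicitSuperDemo {
  static final StringBuilder LOG = new StringBuilder();

  static class Base {
    Base() { LOG.append("base;"); }
  }

  static class Explicit extends Base {
    Explicit() {
      super();               // redundant: same as writing nothing
      LOG.append("explicit;");
    }
  }

  static class Implicit extends Base {
    Implicit() {             // no super() written; the compiler inserts it
      LOG.append("implicit;");
    }
  }

  public static String run() {
    LOG.setLength(0);
    new Explicit();
    new Implicit();
    return LOG.toString();
  }

  public static void main(String[] args) {
    System.out.println(run());  // base;explicit;base;implicit;
  }
}
```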


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage
> plugin while building the schema tree. This happens regardless of which
> plugins the query actually involves.
> When one or more enabled storage plugins have issues - whether from
> misconfiguration or because the underlying data source is slow or down - the
> overall query time increases drastically, most likely due to the attempt to
> register schemas from the faulty plugin.
> For example, when a JDBC plugin is configured against SQL Server and the
> underlying SQL Server database goes down, any Drill query that starts
> executing from that point on slows down drastically.
> We must skip registering schemas (and workspaces) unrelated to a query.





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250396#comment-16250396
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150683680
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/SchemaTreeProvider.java
 ---
@@ -105,12 +106,36 @@ public SchemaPlus createRootSchema(final String 
userName, final SchemaConfigInfo
* @return
*/
   public SchemaPlus createRootSchema(SchemaConfig schemaConfig) {
+    final SchemaPlus rootSchema = DynamicSchema.createRootSchema(dContext.getStorage(), schemaConfig);
+    schemaTreesToClose.add(rootSchema);
+    return rootSchema;
+  }
+
+  /**
+   * Return full root schema with schema owner as the given user.
+   *
+   * @param userName Name of the user who is accessing the storage sources.
+   * @param provider {@link SchemaConfigInfoProvider} instance
+   * @return Root of the schema tree.
+   */
+  public SchemaPlus createFullRootSchema(final String userName, final SchemaConfigInfoProvider provider) {
+    final String schemaUser = isImpersonationEnabled ? userName : ImpersonationUtil.getProcessUserName();
--- End diff --

Should this be factored out somewhere? Seems this magic stanza will be 
needed in many places: better to have one copy than several. Maybe a method in 
`ImpersonationUtil`?
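
A hedged sketch of the kind of helper the reviewer suggests factoring into `ImpersonationUtil`; `resolveUserName` is a hypothetical method name, and the process-user lookup is stubbed with a system property so the example stands alone:

```java
public class ImpersonationUtilSketch {
  /** Stand-in for the real process-user lookup. */
  public static String processUserName() {
    return System.getProperty("user.name", "drill");
  }

  /** Returns the query user when impersonation is enabled, else the process user. */
  public static String resolveUserName(boolean impersonationEnabled, String queryUserName) {
    return impersonationEnabled ? queryUserName : processUserName();
  }

  public static void main(String[] args) {
    System.out.println(resolveUserName(true, "alice"));   // alice
    System.out.println(resolveUserName(false, "alice"));  // the process user's name
  }
}
```

Centralizing the ternary in one method means every caller picks up the same impersonation policy, instead of each site repeating the stanza.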


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250409#comment-16250409
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150682963
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/SchemaTreeProvider.java
 ---
@@ -20,12 +20,13 @@
 import java.io.IOException;
 import java.util.List;
 
-import org.apache.calcite.jdbc.SimpleCalciteSchema;
+//import org.apache.calcite.jdbc.SimpleCalciteSchema;
--- End diff --

Remove


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250392#comment-16250392
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150680454
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig 
schemaConfig) {
+super(null, new RootSchema(), "");
+this.schemaConfig = schemaConfig;
+this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean 
caseSensitive) {
+CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+if (retSchema == null) {
+  loadSchemaFactory(schemaName, caseSensitive);
+}
+
+retSchema = getSubSchemaMap().get(schemaName);
+return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * Loads the schema factory (storage plugin) for the given schema name.
+   * @param schemaName name of the schema (plugin) to load
+   * @param caseSensitive whether the lookup is case sensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+try {
+  SchemaPlus thisPlus = this.plus();
+  StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
+  else {
+// This schema name could be `dfs.tmp`, a second-level schema under 'dfs'.
+String[] paths = schemaName.split("\\.");
+if (paths.length == 2) {
+  plugin = getSchemaFactories().getPlugin(paths[0]);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
+
+  final CalciteSchema secondLevelSchema = 
getSubSchemaMap().get(paths[0]).getSubSchema(paths[1], caseSensitive);
+  if (secondLevelSchema != null) {
+SchemaPlus secondlevel = secondLevelSchema.plus();
+org.apache.drill.exec.store.AbstractSchema drillSchema =
+
secondlevel.unwrap(org.apache.drill.exec.store.AbstractSchema.class);
+SubSchemaWrapper wrapper = new SubSchemaWrapper(drillSchema);

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250404#comment-16250404
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150685672
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
 ---
@@ -175,6 +193,21 @@ public WorkspaceSchema createSchema(List<String> parentSchemaPath, SchemaConfig
     return new WorkspaceSchema(parentSchemaPath, schemaName, schemaConfig);
   }
 
+  public WorkspaceSchema createSchema(List<String> parentSchemaPath, SchemaConfig schemaConfig, DrillFileSystem fs) throws IOException {
+    if (!accessible(fs)) {
--- End diff --

Is returning null sufficient to tell the user that they don't have 
permission to do this operation?
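
To illustrate the trade-off behind this question, here is a hedged sketch (simplified stand-ins, not Drill's actual schema or permission API) contrasting a quiet null-style return with failing fast:

```java
import java.util.Optional;

public class PermissionCheckSketch {
  /** Quiet variant: inaccessible workspaces simply vanish from the schema tree. */
  public static Optional<String> createSchemaQuietly(boolean accessible, String name) {
    return accessible ? Optional.of(name) : Optional.empty();
  }

  /** Fail-fast variant: the user is told why the workspace is missing. */
  public static String createSchemaOrFail(boolean accessible, String name) {
    if (!accessible) {
      throw new IllegalStateException("Permission denied creating workspace: " + name);
    }
    return name;
  }

  public static void main(String[] args) {
    System.out.println(createSchemaQuietly(false, "dfs.tmp").isPresent());  // false
  }
}
```

With the quiet variant, a user without access later sees only "schema not found", which is the ambiguity the reviewer is probing.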


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250408#comment-16250408
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150686157
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/WorkspaceSchemaFactory.java
 ---
@@ -532,7 +572,10 @@ public boolean isMutable() {
 }
 
 public DrillFileSystem getFS() {
-  return fs;
+      if (this.fs == null) {
+        this.fs = ImpersonationUtil.createFileSystem(schemaConfig.getUserName(), fsConf);
+      }
+      return this.fs;
--- End diff --

This class caches the file system, which is good. The other classes in this 
PR do not; they create the fs as needed.

Does Calcite allow some kind of session state in which we can cache the fs 
for the query (plan) rather than creating it on the fly each time we need it?


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250387#comment-16250387
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150678803
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig 
schemaConfig) {
+super(null, new RootSchema(), "");
+this.schemaConfig = schemaConfig;
+this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean 
caseSensitive) {
+CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+if (retSchema == null) {
+  loadSchemaFactory(schemaName, caseSensitive);
+}
+
+retSchema = getSubSchemaMap().get(schemaName);
+return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
--- End diff --

Should be case insensitive?
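
The case-sensitivity concern can be illustrated with plain JDK collections: a `HashSet` lookup is case sensitive, while a `TreeSet` ordered by `String.CASE_INSENSITIVE_ORDER` matches a plugin name regardless of case. A minimal sketch:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class CaseInsensitiveNames {
  /** Returns {contains-in-HashSet, contains-in-case-insensitive-TreeSet} for the name. */
  public static boolean[] lookup(String name) {
    Set<String> sensitive = new HashSet<>();
    Set<String> insensitive = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
    sensitive.add("dfs");
    insensitive.add("dfs");
    return new boolean[] { sensitive.contains(name), insensitive.contains(name) };
  }

  public static void main(String[] args) {
    boolean[] r = lookup("DFS");
    System.out.println(r[0] + " " + r[1]);  // false true
  }
}
```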


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250401#comment-16250401
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150684557
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
 ---
@@ -73,9 +87,10 @@ public void registerSchemas(SchemaConfig schemaConfig, 
SchemaPlus parent) throws
 
     public FileSystemSchema(String name, SchemaConfig schemaConfig) throws IOException {
       super(ImmutableList.<String>of(), name);
+      final DrillFileSystem fs = ImpersonationUtil.createFileSystem(schemaConfig.getUserName(), plugin.getFsConf());
--- End diff --

Comment to explain what's happening here?


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>





[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250391#comment-16250391
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150682074
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig 
schemaConfig) {
+super(null, new RootSchema(), "");
+this.schemaConfig = schemaConfig;
+this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean 
caseSensitive) {
+CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+if (retSchema == null) {
+  loadSchemaFactory(schemaName, caseSensitive);
+}
+
+retSchema = getSubSchemaMap().get(schemaName);
+return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * Loads the schema factory (storage plugin) for the given schema name.
+   * @param schemaName name of the schema (plugin) to load
+   * @param caseSensitive whether the lookup is case sensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+try {
+  SchemaPlus thisPlus = this.plus();
+  StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
+  else {
+// This schema name could be `dfs.tmp`, a second-level schema under 'dfs'.
+String[] paths = schemaName.split("\\.");
+if (paths.length == 2) {
+  plugin = getSchemaFactories().getPlugin(paths[0]);
+  if (plugin != null) {
+plugin.registerSchemas(schemaConfig, thisPlus);
+  }
--- End diff --

What if the plugin is null? Should we fail with an error, or just keep 
going?
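
A hedged sketch of the fail-fast alternative this comment raises, with a plain `Map` standing in for Drill's `StoragePluginRegistry` (names and error text are illustrative, not the project's actual behavior):

```java
import java.util.Map;

public class UnknownPluginSketch {
  /** Looks up the first path segment; raises a clear error when no plugin matches. */
  public static String registerOrFail(Map<String, String> registry, String schemaName) {
    String plugin = registry.get(schemaName.split("\\.")[0]);
    if (plugin == null) {
      // Failing fast yields "unknown storage plugin" instead of a confusing
      // "table not found" much later in planning.
      throw new IllegalArgumentException("Unknown storage plugin for schema: " + schemaName);
    }
    return plugin;
  }

  public static void main(String[] args) {
    System.out.println(registerOrFail(Map.of("dfs", "FileSystemPlugin"), "dfs.tmp"));  // FileSystemPlugin
  }
}
```

Silently continuing, as the diff does, keeps unrelated queries working but defers the error to a less informative point.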


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250394#comment-16250394
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150680951
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
--- End diff --

Class would benefit from general comments: purpose, references, structure.


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin while building the schema tree. This is done regardless of the actual 
> plugins involved in a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> down - the overall query time increases drastically, most likely due to 
> the attempt to register schemas from the faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server and at some 
> point the underlying SQL Server db goes down, any Drill query that starts 
> executing at that point or later slows down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250389#comment-16250389
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150680095
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+    implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig schemaConfig) {
+    super(null, new RootSchema(), "");
+    this.schemaConfig = schemaConfig;
+    this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean caseSensitive) {
+    CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+    if (retSchema == null) {
+      loadSchemaFactory(schemaName, caseSensitive);
+    }
+
+    retSchema = getSubSchemaMap().get(schemaName);
+    return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * load schema factory(storage plugin) for schemaName
+   * @param schemaName
+   * @param caseSensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+    try {
+      SchemaPlus thisPlus = this.plus();
+      StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+      if (plugin != null) {
+        plugin.registerSchemas(schemaConfig, thisPlus);
+      } else {
+        // this schema name could be `dfs.tmp`, a 2nd-level schema under 'dfs'
+        String[] paths = schemaName.split("\\.");
+        if (paths.length == 2) {
--- End diff --

Do we support only 1- and 2-part names? Should we assert that the length <= 
2?
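For illustration, a guard like the one the comment suggests could look as follows (standalone sketch with hypothetical names, not code from the patch):

```java
// Hypothetical guard against schema names with more than two parts.
// Not from the patch; class and method names are illustrative only.
public class SchemaNameGuard {
    static String[] splitSchemaName(String schemaName) {
        String[] parts = schemaName.split("\\.");
        if (parts.length > 2) {
            throw new IllegalArgumentException(
                "Only 1- and 2-part schema names are supported: " + schemaName);
        }
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(splitSchemaName("dfs.tmp").length);  // prints 2
        System.out.println(splitSchemaName("cp").length);       // prints 1
    }
}
```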


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made 

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250395#comment-16250395
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150680632
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicRootSchema.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+import org.apache.calcite.DataContext;
+import org.apache.calcite.jdbc.CalciteRootSchema;
+import org.apache.calcite.jdbc.CalciteSchema;
+
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.linq4j.tree.Expressions;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.Compatible;
+import org.apache.drill.common.exceptions.ExecutionSetupException;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.SubSchemaWrapper;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+public class DynamicRootSchema extends DynamicSchema
+    implements CalciteRootSchema {
+
+  /** Creates a root schema. */
+  DynamicRootSchema(StoragePluginRegistry storages, SchemaConfig schemaConfig) {
+    super(null, new RootSchema(), "");
+    this.schemaConfig = schemaConfig;
+    this.storages = storages;
+  }
+
+  @Override
+  public CalciteSchema getSubSchema(String schemaName, boolean caseSensitive) {
+    CalciteSchema retSchema = getSubSchemaMap().get(schemaName);
+
+    if (retSchema == null) {
+      loadSchemaFactory(schemaName, caseSensitive);
+    }
+
+    retSchema = getSubSchemaMap().get(schemaName);
+    return retSchema;
+  }
+
+  @Override
+  public NavigableSet<String> getTableNames() {
+    Set<String> pluginNames = Sets.newHashSet();
+    for (Map.Entry<String, StoragePlugin> storageEntry : getSchemaFactories()) {
+      pluginNames.add(storageEntry.getKey());
+    }
+    return Compatible.INSTANCE.navigableSet(
+        ImmutableSortedSet.copyOf(
+            Sets.union(pluginNames, getSubSchemaMap().keySet())));
+  }
+
+  /**
+   * load schema factory(storage plugin) for schemaName
+   * @param schemaName
+   * @param caseSensitive
+   */
+  public void loadSchemaFactory(String schemaName, boolean caseSensitive) {
+    try {
+      SchemaPlus thisPlus = this.plus();
+      StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
+      if (plugin != null) {
+        plugin.registerSchemas(schemaConfig, thisPlus);
+      } else {
+        // this schema name could be `dfs.tmp`, a 2nd-level schema under 'dfs'
+        String[] paths = schemaName.split("\\.");
+        if (paths.length == 2) {
+          plugin = getSchemaFactories().getPlugin(paths[0]);
+          if (plugin != null) {
+            plugin.registerSchemas(schemaConfig, thisPlus);
+          }
+
+          final CalciteSchema secondLevelSchema =
+              getSubSchemaMap().get(paths[0]).getSubSchema(paths[1], caseSensitive);
+          if (secondLevelSchema != null) {
+            SchemaPlus secondlevel = secondLevelSchema.plus();
+            org.apache.drill.exec.store.AbstractSchema drillSchema =
+                secondlevel.unwrap(org.apache.drill.exec.store.AbstractSchema.class);
+            SubSchemaWrapper wrapper = new SubSc

[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250398#comment-16250398
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150682678
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DynamicSchema.java
 ---
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import org.apache.calcite.jdbc.CalciteSchema;
+import org.apache.calcite.jdbc.SimpleCalciteSchema;
+import org.apache.calcite.schema.Schema;
+import org.apache.calcite.schema.SchemaPlus;
+import org.apache.drill.exec.store.SchemaConfig;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+
+
+/**
+ * Unlike SimpleCalciteSchema, DynamicSchema may have an empty or partial schemaMap; it maintains a map of
+ * name->SchemaFactory and registers a schema only when the corresponding name is requested.
+ */
+public class DynamicSchema extends SimpleCalciteSchema {
+
+  protected SchemaConfig schemaConfig;
+  protected StoragePluginRegistry storages;
--- End diff --

Seems that storages is never initialized or used except in the accessor.


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin while building the schema tree. This is done regardless of the actual 
> plugins involved in a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> down - the overall query time increases drastically, most likely due to 
> the attempt to register schemas from the faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server and at some 
> point the underlying SQL Server db goes down, any Drill query that starts 
> executing at that point or later slows down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5089) Skip initializing all enabled storage plugins for every query

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250402#comment-16250402
 ] 

ASF GitHub Bot commented on DRILL-5089:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1032#discussion_r150685113
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemSchemaFactory.java
 ---
@@ -73,9 +87,10 @@ public void registerSchemas(SchemaConfig schemaConfig, 
SchemaPlus parent) throws
 
 public FileSystemSchema(String name, SchemaConfig schemaConfig) throws IOException {
   super(ImmutableList.of(), name);
+  final DrillFileSystem fs = ImpersonationUtil.createFileSystem(schemaConfig.getUserName(), plugin.getFsConf());
   for (WorkspaceSchemaFactory f : factories) {
-    if (f.accessible(schemaConfig.getUserName())) {
-      WorkspaceSchema s = f.createSchema(getSchemaPath(), schemaConfig);
+    WorkspaceSchema s = f.createSchema(getSchemaPath(), schemaConfig, fs);
+    if (s != null) {
--- End diff --

Here we iterate over a list of workspace schema factories. For each, we 
resolve a schemaConfig against the file system.

Under what situations would we have multiple factories? Selecting from two 
distinct storage plugins?

Calcite tends to resolve the same things over and over. Will this method be 
called multiple times?
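The load-on-miss caching that addresses this concern in the patch's `getSubSchema()` can be sketched generically (hypothetical names, not Drill code): look up a cached entry and invoke the expensive factory only on a miss, so repeated Calcite resolutions of the same name hit the cache.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Generic load-on-miss registry: the factory runs at most once per key,
// even if Calcite asks for the same name repeatedly. Illustrative only.
public class LazyRegistry<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> factory;

    public LazyRegistry(Function<K, V> factory) {
        this.factory = factory;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {            // miss: load once
            value = factory.apply(key);
            if (value != null) {
                cache.put(key, value);
            }
        }
        return value;                   // later calls reuse the cache
    }
}
```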


> Skip initializing all enabled storage plugins for every query
> -
>
> Key: DRILL-5089
> URL: https://issues.apache.org/jira/browse/DRILL-5089
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
>Reporter: Abhishek Girish
>Assignee: Chunhui Shi
>Priority: Critical
>
> In a query's lifecycle, an attempt is made to initialize each enabled storage 
> plugin while building the schema tree. This is done regardless of the actual 
> plugins involved in a query. 
> Sometimes, when one or more of the enabled storage plugins have issues - 
> either due to misconfiguration or the underlying datasource being slow or 
> down - the overall query time increases drastically, most likely due to 
> the attempt to register schemas from the faulty plugin.
> For example, when a jdbc plugin is configured with SQL Server and at some 
> point the underlying SQL Server db goes down, any Drill query that starts 
> executing at that point or later slows down drastically. 
> We must skip registering unrelated schemas (& workspaces) for a query. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5954) ListVector shadows "offsets" from BaseRepeatedValueVector

2017-11-13 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5954:
---
Summary: ListVector shadows "offsets" from BaseRepeatedValueVector  (was: 
ListVector derives from BaseRepeatedValueVector, shadows offsets)

> ListVector shadows "offsets" from BaseRepeatedValueVector
> -
>
> Key: DRILL-5954
> URL: https://issues.apache.org/jira/browse/DRILL-5954
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.13.0
>
>
> The Drill vector class {{ListVector}} derives from 
> {{BaseRepeatedValueVector}}:
> {code}
> public class ListVector extends BaseRepeatedValueVector {
>   private UInt4Vector offsets;
> ...
> {code}
> Note that the {{offsets}} member shadows a member of the same type in the 
> super class:
> {code}
> public abstract class BaseRepeatedValueVector ... {
>   protected final UInt4Vector offsets;
> ...
> {code}
> In Java, shadowing an existing field is considered bad practice as it is 
> never clear which field any particular bit of code references.
> In this case, it appears to be that the {{ListVector}} version is simply a 
> reference to the base class version. Perhaps because someone didn't 
> understand {{protected}} mode in Java?
> {code}
>   public ListVector(MaterializedField field, BufferAllocator allocator, 
> CallBack callBack) {
> ...
> offsets = getOffsetVector();
> ...
> {code}
> A quick experiment shows that the {{ListVector}} version can simply be 
> removed.
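The shadowing pitfall described above can be reproduced in a few lines (illustrative class names, not the actual Drill vectors): which field is read depends on whether the access resolves against the subclass or the superclass.

```java
// Self-contained demo of Java field shadowing. Base/Derived stand in for
// BaseRepeatedValueVector/ListVector; names are illustrative only.
class Base {
    protected final String offsets = "base-offsets";
}

class Derived extends Base {
    // Shadows Base.offsets: fields are resolved statically, not virtually.
    private final String offsets = "derived-offsets";

    String viaDerived() { return this.offsets; }
    String viaBase()    { return super.offsets; }
}

public class ShadowDemo {
    public static void main(String[] args) {
        Derived d = new Derived();
        System.out.println(d.viaDerived()); // prints derived-offsets
        System.out.println(d.viaBase());    // prints base-offsets
    }
}
```

Because both fields exist simultaneously, code in the subclass can silently read a different value than code in the superclass, which is exactly why removing the shadow is safe when the subclass field merely aliases the inherited one.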



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250431#comment-16250431
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150688725
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -502,6 +502,8 @@ drill.exec.options: {
 store.format: "parquet",
 store.hive.optimize_scan_with_native_readers: false,
 store.json.all_text_mode: false,
+store.json.writer.non_numeric_numbers: false,
+store.json.reader.non_numeric_numbers: false,
--- End diff --

As noted in DRILL-5949, options like this should be part of the plugin 
config (as in CSV), not session options. For now, let's leave them as session 
options and I'll migrate them to plugin config options along with the others.


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals like NaN, Infinity, and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended the signatures of the {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding a second optional parameter that enables reading/writing 
> NaN and Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); fails with a 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); parses 
> NaN as a number.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250432#comment-16250432
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150688949
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -502,6 +502,8 @@ drill.exec.options: {
 store.format: "parquet",
 store.hive.optimize_scan_with_native_readers: false,
 store.json.all_text_mode: false,
+store.json.writer.non_numeric_numbers: false,
+store.json.reader.non_numeric_numbers: false,
--- End diff --

See discussion below. If they are off, then queries that use NaN and 
Infinity will fail until the user turns them on. Queries that don't use NaN and 
Infinity won't care. So, what is the advantage to failing queries unnecessarily?


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals like NaN, Infinity, and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended the signatures of the {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding a second optional parameter that enables reading/writing 
> NaN and Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); fails with a 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); parses 
> NaN as a number.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250433#comment-16250433
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150689617
  
--- Diff: 
contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/MongoRecordReader.java
 ---
@@ -73,6 +73,7 @@
   private final MongoStoragePlugin plugin;
 
   private final boolean enableAllTextMode;
+  private final boolean enableNonNumericNumbers;
--- End diff --

The JSON parser we use does use the term "non-numeric numbers", but not 
sure we want to keep that naming in Drill itself.


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals like NaN, Infinity, and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended the signatures of the {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding a second optional parameter that enables reading/writing 
> NaN and Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); fails with a 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); parses 
> NaN as a number.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250434#comment-16250434
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1026#discussion_r150689236
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -502,6 +502,8 @@ drill.exec.options: {
 store.format: "parquet",
 store.hive.optimize_scan_with_native_readers: false,
 store.json.all_text_mode: false,
+store.json.writer.non_numeric_numbers: false,
+store.json.reader.non_numeric_numbers: false,
--- End diff --

OK. So, if the rest of Drill either does not support (or we don't know if 
it supports) NaN and Inf, should we introduce the change here that potentially 
leads to failures elsewhere?

Do we know if JDBC and ODBC support these values? (I suppose they do as 
they are features of Java's float/double primitives...)
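A quick stdlib check confirms the parenthetical: Java doubles do carry NaN and Infinity, even though strict JSON (RFC 8259) has no literal for them, which is why parsers gate them behind an option (illustrative snippet, not Drill code):

```java
// Demonstrates that NaN and Infinity are first-class double values in Java,
// independent of whether a JSON dialect can represent them.
public class NonNumericDoubles {
    public static void main(String[] args) {
        double nan = Double.NaN;
        double inf = Double.POSITIVE_INFINITY;

        System.out.println(nan != nan);              // prints true: NaN never equals itself
        System.out.println(Double.isNaN(nan));       // prints true
        System.out.println(Double.isInfinite(inf));  // prints true
        System.out.println(1.0 / 0.0 == inf);        // prints true: overflow yields Infinity
    }
}
```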


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals like NaN, Infinity, and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended the signatures of the {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding a second optional parameter that enables reading/writing 
> NaN and Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); fails with a 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); parses 
> NaN as a number.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5919) Add non-numeric support for JSON processing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250437#comment-16250437
 ] 

ASF GitHub Bot commented on DRILL-5919:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/1026
  
On the two functions... Maybe just have one function that handles the 
Nan/Infinity case. As noted earlier, no matter what we do, JSON without these 
symbols will work. So, we need only consider JSON with the symbols. Either:

* The NaN/Infinity cases always work, or
* The NaN/Infinity cases work sometimes, fail others, depending on some 
option or argument.

I would vote for the first case: it is simpler. I just can't see how anyone 
would use Drill to validate JSON and would want a query to fail if it contains 
NaN or Infinity. Can you suggest a case where failing the query would help the 
user?


> Add non-numeric support for JSON processing
> ---
>
> Key: DRILL-5919
> URL: https://issues.apache.org/jira/browse/DRILL-5919
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: doc-impacting
> Fix For: Future
>
>
> Add session options to allow Drill to work with non-standard JSON number 
> literals like NaN, Infinity, and -Infinity. By default these options are 
> switched off; the user can toggle them during a working session.
> *For documentation*
> 1. Added two session options {{store.json.reader.non_numeric_numbers}} and 
> {{store.json.writer.non_numeric_numbers}} that allow reading/writing NaN and 
> Infinity as numbers. By default these options are set to false.
> 2. Extended the signatures of the {{convert_toJSON}} and {{convert_fromJSON}} 
> functions by adding a second optional parameter that enables reading/writing 
> NaN and Infinity.
> For example:
> {noformat}
> select convert_fromJSON('{"key": NaN}') from (values(1)); fails with a 
> JsonParseException, but
> select convert_fromJSON('{"key": NaN}', true) from (values(1)); parses 
> NaN as a number.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-4708) connection closed unexpectedly

2017-11-13 Thread Karthikeyan Manivannan (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250490#comment-16250490
 ] 

Karthikeyan Manivannan commented on DRILL-4708:
---

I don't see a failure with Apache 
master (7a2fc87ee20f706d85cb5c90cc441e6b44b71592). I ran this test in an 
embedded drillbit.

0: jdbc:drill:zk=local> select * from 
dfs.`/Users/karthik/work/bugs/DRILL-4708/data` order by `time` asc;
...
...
| signal=od2lkz4xzq  | pi8tly75osv1aq  | no-thing  | 207369  | 0.046113  |
| signal=mgj912yejm  | pi8tly75osv1aq  | no-thing  | 207369  | 0.023779  |
+--------------------+-----------------+-----------+---------+-----------+
4,354,749 rows selected (157.454 seconds)

0: jdbc:drill:zk=local> select count(*) from 
dfs.`/Users/karthik/work/bugs/DRILL-4708/data`;
+--+
|  EXPR$0  |
+--+
| 4354749  |
+--+


> connection closed unexpectedly
> --
>
> Key: DRILL-4708
> URL: https://issues.apache.org/jira/browse/DRILL-4708
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.7.0
>Reporter: Chun Chang
>Assignee: Karthikeyan Manivannan
>Priority: Critical
> Attachments: data.tgz
>
>
> Running DRILL functional automation, we often see queries fail randomly with 
> the following unexpected connection close error.
> {noformat}
> Execution Failures:
> /root/drillAutomation/framework/framework/resources/Functional/ctas/ctas_flatten/10rows/filter5.q
> Query: 
> select * from dfs.ctas_flatten.`filter5_10rows_ctas`
> Failed with exception
> java.sql.SQLException: CONNECTION ERROR: Connection /10.10.100.171:36185 <--> 
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly. 
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
>   at 
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
>   at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:321)
>   at 
> oadd.net.hydromatic.avatica.AvaticaResultSet.next(AvaticaResultSet.java:187)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:172)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:210)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:99)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: oadd.org.apache.drill.common.exceptions.UserException: CONNECTION 
> ERROR: Connection /10.10.100.171:36185 <--> 
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly. 
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
>   at 
> oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>   at 
> oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
>   at 
> oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
>   at 
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
>   at 
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>

[jira] [Resolved] (DRILL-4708) connection closed unexpectedly

2017-11-13 Thread Karthikeyan Manivannan (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthikeyan Manivannan resolved DRILL-4708.
---
Resolution: Works for Me

> connection closed unexpectedly
> --
>
> Key: DRILL-4708
> URL: https://issues.apache.org/jira/browse/DRILL-4708
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.7.0
>Reporter: Chun Chang
>Assignee: Karthikeyan Manivannan
>Priority: Critical
> Attachments: data.tgz
>
>
> Running DRILL functional automation, we often see query failed randomly due 
> to the following unexpected connection close error.
> {noformat}
> Execution Failures:
> /root/drillAutomation/framework/framework/resources/Functional/ctas/ctas_flatten/10rows/filter5.q
> Query: 
> select * from dfs.ctas_flatten.`filter5_10rows_ctas`
> Failed with exception
> java.sql.SQLException: CONNECTION ERROR: Connection /10.10.100.171:36185 <--> 
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly. 
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
>   at 
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
>   at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:321)
>   at 
> oadd.net.hydromatic.avatica.AvaticaResultSet.next(AvaticaResultSet.java:187)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:172)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:210)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:99)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: oadd.org.apache.drill.common.exceptions.UserException: CONNECTION 
> ERROR: Connection /10.10.100.171:36185 <--> 
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly. 
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
>   at 
> oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>   at 
> oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
>   at 
> oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
>   at 
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
>   at 
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   ... 1 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5842) Refactor and simplify the fragment, operator contexts for testing

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250531#comment-16250531
 ] 

ASF GitHub Bot commented on DRILL-5842:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/978


> Refactor and simplify the fragment, operator contexts for testing
> -
>
> Key: DRILL-5842
> URL: https://issues.apache.org/jira/browse/DRILL-5842
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> Drill's execution engine has a "fragment context" that provides state for a 
> fragment as a whole, and an "operator context" which provides state for a 
> single operator. Historically, these have both been concrete classes that 
> make generous references to the Drillbit context, and hence need a full Drill 
> server in order to operate.
> Drill has historically made extensive use of system-level testing: build the 
> entire server and fire queries at it to test each component. Over time, we 
> are augmenting that approach with unit tests: the ability to test each 
> operator (or parts of an operator) in isolation.
> Since each operator requires access to both the operator and fragment 
> context, the fact that the contexts depend on the overall server creates a 
> large barrier to unit testing. An earlier checkin started down the path of 
> defining the contexts as interfaces that can have different run-time and 
> test-time implementations to enable testing.
> This ticket asks to refactor those interfaces: simplifying the operator 
> context and introducing an interface for the fragment context. New code will 
> use these new interfaces, while older code continues to use the concrete 
> implementations. Over time, as operators are enhanced, they can be modified 
> to allow unit-level testing.
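
The interface-based refactoring described above can be sketched as follows. This is a minimal illustration only; the interface and class names here are hypothetical and do not reflect Drill's actual API. The point is that once the contexts are interfaces, a unit test can hand an operator a lightweight implementation instead of standing up a full Drillbit:

```java
// Hypothetical sketch: contexts as interfaces (names are illustrative,
// not Drill's real classes).
interface FragmentContextInterface {
    String getQueryId();
}

interface OperatorContextInterface {
    FragmentContextInterface fragmentContext();
    long allocate(long bytes);   // stand-in for buffer allocation
}

// Test-time implementation: no server, no ZooKeeper, just plain objects.
class TestOperatorContext implements OperatorContextInterface {
    private final FragmentContextInterface fragment;
    private long allocated;

    TestOperatorContext(FragmentContextInterface fragment) {
        this.fragment = fragment;
    }

    @Override
    public FragmentContextInterface fragmentContext() { return fragment; }

    @Override
    public long allocate(long bytes) {
        allocated += bytes;      // track total allocation for the test
        return allocated;
    }
}

public class ContextSketch {
    public static void main(String[] args) {
        // FragmentContextInterface has one method, so a lambda suffices.
        OperatorContextInterface ctx =
            new TestOperatorContext(() -> "test-query");
        System.out.println(ctx.fragmentContext().getQueryId());
        System.out.println(ctx.allocate(1024));
    }
}
```

An operator written against these interfaces can then be exercised in isolation, with the run-time implementations substituted in only inside the server.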



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5771) Fix serDe errors for format plugins

2017-11-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250595#comment-16250595
 ] 

ASF GitHub Bot commented on DRILL-5771:
---

Github user ilooner commented on the issue:

https://github.com/apache/drill/pull/1014
  
Thanks, I took a look.


> Fix serDe errors for format plugins
> ---
>
> Key: DRILL-5771
> URL: https://issues.apache.org/jira/browse/DRILL-5771
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Fix For: 1.12.0
>
>
> Create unit tests to check that all storage format plugins can be 
> successfully serialized / deserialized.
> Usually this matters when a query has several major fragments. 
> One way to check serde is to generate the physical plan (as JSON) and 
> then submit it back to Drill.
> One example of the errors found is described in the first comment. Another 
> example is described in DRILL-5166.
> *Serde issues:*
> 1. Could not obtain format plugin during deserialization
> A format plugin is created based on a format plugin configuration or its name. 
> On Drill start-up we load information about the available plugins (the list is 
> reloaded each time a storage plugin is updated, which only an admin can do).
> When a query is parsed, we try to get the plugin from the available ones; if we 
> cannot find one, we try to [create 
> one|https://github.com/apache/drill/blob/3e8b01d5b0d3013e3811913f0fd6028b22c1ac3f/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java#L136-L144],
> but at all other query execution stages we assume that the [plugin exists 
> based on the 
> configuration|https://github.com/apache/drill/blob/3e8b01d5b0d3013e3811913f0fd6028b22c1ac3f/exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/FileSystemPlugin.java#L156-L162].
> For example, during query parsing we had to create a format plugin on one node 
> based on a format configuration.
> We then sent a major fragment to a different node where that format 
> configuration was used; there we could not get the format plugin based on it, 
> and deserialization failed.
> To fix this problem we need to create the format plugin during query 
> deserialization if it is absent.
>   
> 2. Absent hashCode and equals
> Format plugins are stored in a hash map whose key is the format plugin config.
> Since some format plugin configs did not override hashCode and equals, we 
> could not find a format plugin based on its configuration.
> 3. Named format plugin usage
> Named format plugin configs allow getting a format plugin by name for a 
> configuration shared among all drillbits.
> They are used as aliases for pre-configured format plugins. A user with admin 
> privileges can modify them at runtime.
> Named format plugin configs are used instead of sending all the non-default 
> parameters of a format plugin config; in this case only the name is sent.
> Their usage in a distributed system may cause race conditions.
> For example: 
> 1. A query is submitted. 
> 2. A Parquet format plugin is created with the configuration 
> (autoCorrectCorruptDates=>true).
> 3. The named format plugin config is serialized with the name 'parquet'.
> 4. A major fragment is sent to a different node.
> 5. An admin changes the Parquet configuration for the alias 'parquet' on all 
> nodes to autoCorrectCorruptDates=>false.
> 6. The named format config is deserialized on the different node into a 
> Parquet format plugin with configuration (autoCorrectCorruptDates=>false).
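
Issue 2 above (missing hashCode/equals) can be shown with a minimal sketch. The class names here are hypothetical and only mimic the shape of a format plugin config; they are not Drill's actual classes. A config deserialized on another node is a different object instance, so the hash-map lookup only succeeds if equals and hashCode are overridden:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical stand-in for a format plugin config with one option.
class ParquetConfigSketch {
    final boolean autoCorrectCorruptDates;

    ParquetConfigSketch(boolean autoCorrectCorruptDates) {
        this.autoCorrectCorruptDates = autoCorrectCorruptDates;
    }

    // Without these two overrides, map lookup by a deserialized (distinct)
    // instance falls back to identity comparison and returns null.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ParquetConfigSketch)) return false;
        return autoCorrectCorruptDates
            == ((ParquetConfigSketch) o).autoCorrectCorruptDates;
    }

    @Override
    public int hashCode() {
        return Objects.hash(autoCorrectCorruptDates);
    }
}

public class FormatPluginLookup {
    public static void main(String[] args) {
        // Registry keyed by config, as described for format plugins.
        Map<ParquetConfigSketch, String> plugins = new HashMap<>();
        plugins.put(new ParquetConfigSketch(true), "parquet-plugin");

        // A freshly deserialized config is a new instance; the lookup
        // succeeds only because equals/hashCode compare field values.
        System.out.println(plugins.get(new ParquetConfigSketch(true)));
    }
}
```

The same value-equality contract is what a serde round-trip test would rely on: deserialize the plan JSON, rebuild the config, and expect it to hit the existing registry entry.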



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-4708) connection closed unexpectedly

2017-11-13 Thread Chun Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250625#comment-16250625
 ] 

Chun Chang commented on DRILL-4708:
---

We might have two issues here. One is the client-to-server connection; this is 
the test Lenin did. I downloaded and tested with Lenin's data on a distributed 
cluster, and it worked for me as well. I think we made some changes in that 
area, and what Lenin experienced might have been fixed (I tested with MapR 
1.11.0). The other is communication between drillbits. That issue still seems 
to be there. The following is from an automation run against the current 
Apache master (1.12.0). I am reopening the JIRA so we can investigate.

{noformat}
/root/drillAutomation/framework-master/framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q123.q
Query: 
select SUBSTRING(str_var, 50, 2) from widestrings
Failed with exception
java.sql.SQLException: SYSTEM ERROR: ChannelClosedException: Channel closed 
/10.10.104.85:60238 <--> atsqa6c86.qa.lab/10.10.104.86:31010.

Query submission to Drillbit failed.

[Error Id: 1df96d41-2dc1-49f5-8a3a-398ff0cb86d1 ]
at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1895)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:61)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:473)
at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1100)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:477)
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:181)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:110)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:130)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.executeQuery(DrillStatementImpl.java:112)
at 
org.apache.drill.test.framework.DrillTestJdbc.executeSetupQuery(DrillTestJdbc.java:195)
at 
org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:97)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: oadd.org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
ChannelClosedException: Channel closed /10.10.104.85:60238 <--> 
atsqa6c86.qa.lab/10.10.104.86:31010.

Query submission to Drillbit failed.

[Error Id: 1df96d41-2dc1-49f5-8a3a-398ff0cb86d1 ]
at 
oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
at 
oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$SubmissionListener.failed(QueryResultHandler.java:294)
at 
oadd.org.apache.drill.exec.rpc.RequestIdMap$RpcListener.setException(RequestIdMap.java:139)
at 
oadd.org.apache.drill.exec.rpc.RequestIdMap$SetExceptionProcedure.apply(RequestIdMap.java:76)
at 
oadd.org.apache.drill.exec.rpc.RequestIdMap$SetExceptionProcedure.apply(RequestIdMap.java:66)
at 
oadd.com.carrotsearch.hppc.IntObjectHashMap.forEach(IntObjectHashMap.java:692)
at 
oadd.org.apache.drill.exec.rpc.RequestIdMap.channelClosed(RequestIdMap.java:62)
at 
oadd.org.apache.drill.exec.rpc.AbstractRemoteConnection.channelClosed(AbstractRemoteConnection.java:192)
at 
oadd.org.apache.drill.exec.rpc.AbstractClientConnection.channelClosed(AbstractClientConnection.java:97)
at 
oadd.org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:167)
at 
oadd.org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:144)
at 
oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at 
oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
at 
oadd.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
at 
oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at 
oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at 
oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
at 
oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed

[jira] [Reopened] (DRILL-4708) connection closed unexpectedly

2017-11-13 Thread Chun Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chang reopened DRILL-4708:
---

> connection closed unexpectedly
> --
>
> Key: DRILL-4708
> URL: https://issues.apache.org/jira/browse/DRILL-4708
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.7.0
>Reporter: Chun Chang
>Assignee: Karthikeyan Manivannan
>Priority: Critical
> Attachments: data.tgz
>
>
> Running DRILL functional automation, we often see query failed randomly due 
> to the following unexpected connection close error.
> {noformat}
> Execution Failures:
> /root/drillAutomation/framework/framework/resources/Functional/ctas/ctas_flatten/10rows/filter5.q
> Query: 
> select * from dfs.ctas_flatten.`filter5_10rows_ctas`
> Failed with exception
> java.sql.SQLException: CONNECTION ERROR: Connection /10.10.100.171:36185 <--> 
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly. 
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
>   at 
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
>   at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:321)
>   at 
> oadd.net.hydromatic.avatica.AvaticaResultSet.next(AvaticaResultSet.java:187)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:172)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:210)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:99)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: oadd.org.apache.drill.common.exceptions.UserException: CONNECTION 
> ERROR: Connection /10.10.100.171:36185 <--> 
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly. 
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
>   at 
> oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>   at 
> oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
>   at 
> oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
>   at 
> oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
>   at 
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
>   at 
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   ... 1 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


  1   2   >