[jira] [Updated] (DRILL-5539) drillbit.sh script breaks if the working directory contains spaces

2017-09-25 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5539:

Affects Version/s: 1.9.0

> drillbit.sh script breaks if the working directory contains spaces
> --
>
> Key: DRILL-5539
> URL: https://issues.apache.org/jira/browse/DRILL-5539
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
> Environment: Linux
>Reporter: Muhammad Gelbana
>
> The following output occurred when we tried running the drillbit.sh script in 
> a path that contains spaces: */home/folder1/Folder Name/drill/bin*
> {noformat}
> [mgelbana@regression-sysops bin]$ ./drillbit.sh start
> ./drillbit.sh: line 114: [: /home/folder1/Folder: binary operator expected
> Starting drillbit, logging to /home/folder1/Folder Name/drill/log/drillbit.out
> ./drillbit.sh: line 147: $pid: ambiguous redirect
> [mgelbana@regression-sysops bin]$ pwd
> /home/folder1/Folder Name/drill/bin
> {noformat}
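The two errors at lines 114 and 147 are the classic symptoms of unquoted variable expansion in shell: a path containing a space splits into multiple words inside `[ ... ]` tests and redirections. A minimal sketch of the bug class and the usual fix (the variable names below are illustrative, not the actual drillbit.sh code):

```shell
#!/bin/sh
# Illustrative reproduction of the bug class (not the actual drillbit.sh lines).
dir="/tmp/Folder Name"
mkdir -p "$dir"
pidfile="$dir/drillbit.pid"

# Broken: an unquoted $pidfile splits into two words, so `[` sees
# "/tmp/Folder" and "Name/drillbit.pid" -> "binary operator expected".
#   if [ -f $pidfile ]; then ...
# An unquoted redirect target fails the same way -> "ambiguous redirect".

# Fixed: quoting keeps the path a single word, in tests and redirects alike.
if [ ! -f "$pidfile" ]; then
  echo "12345" > "$pidfile"
fi
cat "$pidfile"
```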



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5393) ALTER SESSION documentation page broken link

2017-09-25 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5393:

Affects Version/s: 1.11.0

> ALTER SESSION documentation page broken link
> 
>
> Key: DRILL-5393
> URL: https://issues.apache.org/jira/browse/DRILL-5393
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: Muhammad Gelbana
>Assignee: Bridget Bevens
>
> On [this 
> page|https://drill.apache.org/docs/modifying-query-planning-options/], there 
> is a link to the ALTER SESSION documentation page which points to this broken 
> link: https://drill.apache.org/docs/alter-session/
> I believe the correct link should be: https://drill.apache.org/docs/set/





[jira] [Updated] (DRILL-5606) Some tests fail after creating a fresh clone

2017-08-25 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5606:

Affects Version/s: 1.9.0

> Some tests fail after creating a fresh clone
> 
>
> Key: DRILL-5606
> URL: https://issues.apache.org/jira/browse/DRILL-5606
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.9.0, 1.11.0
> Environment: {noformat}
> $ uname -a
> Linux mg-mate 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> $ lsb_release -a
> No LSB modules are available.
> Distributor ID:   Ubuntu
> Description:  Ubuntu 16.04.2 LTS
> Release:  16.04
> Codename: xenial
> $ java -version
> openjdk version "1.8.0_131"
> OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)
> OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
> {noformat}
> Environment variables JAVA_HOME, JRE_HOME, and JDK_HOME aren't configured; the 
> Java executable is found through the PATH environment variable. I can provide 
> more details if needed.
>Reporter: Muhammad Gelbana
> Attachments: failing_tests.tar.gz, full_log.txt.tar.gz, 
> surefire-reports.tar.gz
>
>
> I cloned Drill from GitHub using this URL: 
> [https://github.com/apache/drill.git] and didn't change the branch 
> afterwards, so I'm using *master*.
> I then ran the following command:
> {noformat}
> mvn clean install
> {noformat}
> I attached the full log but here is a snippet indicating the failing tests:
> {noformat}
> Failed tests: 
>   TestExtendedTypes.checkReadWriteExtended:60 expected:<...ateDay" : 
> "1997-07-1[6"
>   },
>   "drill_timestamp" : {
> "$date" : "2009-02-23T08:00:00.000Z"
>   },
>   "time" : {
> "$time" : "19:20:30.450Z"
>   },
>   "interval" : {
> "$interval" : "PT26.400S"
>   },
>   "integer" : {
> "$numberLong" : 4
>   },
>   "inner" : {
> "bin" : {
>   "$binary" : "ZHJpbGw="
> },
> "drill_date" : {
>   "$dateDay" : "1997-07-16]"
> },
> "drill_...> but was:<...ateDay" : "1997-07-1[5"
>   },
>   "drill_timestamp" : {
> "$date" : "2009-02-23T08:00:00.000Z"
>   },
>   "time" : {
> "$time" : "19:20:30.450Z"
>   },
>   "interval" : {
> "$interval" : "PT26.400S"
>   },
>   "integer" : {
> "$numberLong" : 4
>   },
>   "inner" : {
> "bin" : {
>   "$binary" : "ZHJpbGw="
> },
> "drill_date" : {
>   "$dateDay" : "1997-07-15]"
> },
> "drill_...>
> Tests in error: 
>   TestCastFunctions.testToDateForTimeStamp:79 »  at position 0 column '`col`' 
> mi...
>   TestNewDateFunctions.testIsDate:61 »  After matching 0 records, did not 
> find e...
> Tests run: 2128, Failures: 1, Errors: 2, Skipped: 139
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Drill Root POM .. SUCCESS [ 19.805 
> s]
> [INFO] tools/Parent Pom ... SUCCESS [  0.605 
> s]
> [INFO] tools/freemarker codegen tooling ... SUCCESS [  7.077 
> s]
> [INFO] Drill Protocol . SUCCESS [  7.959 
> s]
> [INFO] Common (Logical Plan, Base expressions)  SUCCESS [  7.734 
> s]
> [INFO] Logical Plan, Base expressions . SUCCESS [  8.099 
> s]
> [INFO] exec/Parent Pom  SUCCESS [  0.575 
> s]
> [INFO] exec/memory/Parent Pom . SUCCESS [  0.513 
> s]
> [INFO] exec/memory/base ... SUCCESS [  4.666 
> s]
> [INFO] exec/rpc ... SUCCESS [  2.684 
> s]
> [INFO] exec/Vectors ... SUCCESS [01:11 
> min]
> [INFO] contrib/Parent Pom . SUCCESS [  0.547 
> s]
> [INFO] contrib/data/Parent Pom  SUCCESS [  0.496 
> s]
> [INFO] contrib/data/tpch-sample-data .. SUCCESS [  2.698 
> s]
> [INFO] exec/Java Execution Engine . FAILURE [19:09 
> min]
> {noformat}
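The failing assertion differs only in the day of the month (1997-07-16 expected vs. 1997-07-15 actual), which is the usual signature of a timezone-sensitive test. A quick way to check that hypothesis (this is a diagnostic sketch, not a confirmed root cause; the epoch value below is simply UTC midnight of the expected date, and the date syntax is GNU-specific):

```shell
# 869011200 is 1997-07-16T00:00:00Z. The same instant renders as a
# different calendar day in a timezone behind UTC:
TZ=UTC date -d @869011200 +%F                   # 1997-07-16
TZ=America/Los_Angeles date -d @869011200 +%F   # 1997-07-15

# If timezone sensitivity is the cause, re-running the failing tests with
# the JVM pinned to one timezone should behave consistently, e.g.:
#   mvn clean install -DargLine="-Duser.timezone=UTC"
```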





[jira] [Updated] (DRILL-5606) Some tests fail after creating a fresh clone

2017-08-25 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5606:

Affects Version/s: 1.11.0






[jira] [Updated] (DRILL-5735) UI options grouping and filtering & Metrics hints

2017-08-21 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5735:

Description: 
I'm thinking of some UI improvements that could make all the difference for 
users trying to optimize low-performing queries.

h2. Options
h3. Grouping
We can organize the options into groups by their scope of effect; this will 
help users easily locate the options they may need to tune.
h3. Filtering
Since there are many options, we can add a filtering mechanism (e.g. string 
search or group/scope filtering) so users can filter out the options they are 
not interested in. To provide more benefit than the grouping idea above, 
filtering could also match keywords, not just option names, since users may 
not know the name of the option they are looking for.

h2. Metrics
I'm referring here to the metrics page and the query execution plan page, 
which display the overview section and the major/minor fragment metrics. We 
can show hints for each metric, such as:
# What the metric represents, in more detail.
# Which option (or group of options) to tune, and whether to increase or 
decrease it, to improve the performance reported by this metric.
# Maybe even a small dialog to quickly modify the option(s) related to that 
metric

  was:
I can think of some UI improvements that could make all the difference for 
users trying to optimize low-performing queries.

h2. Options
h3. Grouping
We can organize the options to be grouped by their scope of effect, this will 
help users easily locate the options they may need to tune.
h3. Filtering
Since the options are a lot, we can add a filtering mechanism (i.e. string 
search or group\scope filtering) so the user can filter out the options he's 
not interested in. To provide more benefit than the grouping idea mentioned 
above, filtering may include keywords also and not just the option name, since 
the user may not be aware of the name of the option he's looking for.

h2. Metrics
I'm referring here to the metrics page and the query execution plan page that 
displays the overview section and major\minor fragments metrics. We can show 
hints for each metric such as:
# What does it represent in more details.
# What option\scope-of-options to tune (increase ? decrease ?) to improve the 
performance reported by this metric.
# May be even provide a small dialog to quickly allow the modification of the 
related option(s) to that metric


> UI options grouping and filtering & Metrics hints
> -
>
> Key: DRILL-5735
> URL: https://issues.apache.org/jira/browse/DRILL-5735
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.9.0, 1.10.0, 1.11.0
>Reporter: Muhammad Gelbana
>
> I'm thinking of some UI improvements that could make all the difference for 
> users trying to optimize low-performing queries.
> h2. Options
> h3. Grouping
> We can organize the options into groups by their scope of effect; this will 
> help users easily locate the options they may need to tune.
> h3. Filtering
> Since there are many options, we can add a filtering mechanism (e.g. string 
> search or group/scope filtering) so users can filter out the options they 
> are not interested in. To provide more benefit than the grouping idea above, 
> filtering could also match keywords, not just option names, since users may 
> not know the name of the option they are looking for.
> h2. Metrics
> I'm referring here to the metrics page and the query execution plan page, 
> which display the overview section and the major/minor fragment metrics. We 
> can show hints for each metric, such as:
> # What the metric represents, in more detail.
> # Which option (or group of options) to tune, and whether to increase or 
> decrease it, to improve the performance reported by this metric.
> # Maybe even a small dialog to quickly modify the option(s) related to that 
> metric
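The keyword-filtering idea can be sketched as a case-insensitive match anywhere in the option name, rather than an exact-name lookup. The option names below are a small illustrative sample, not the full list Drill exposes:

```shell
# Keyword filtering over option names: match the keyword anywhere in the
# name, case-insensitively (sample names only).
options='planner.memory.max_query_memory_per_node
planner.width.max_per_node
store.parquet.block-size
drill.exec.memory.operator.output_batch_size'

# A user searching for "memory" finds both memory-related options without
# knowing either exact name:
printf '%s\n' "$options" | grep -i 'memory'
```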





[jira] [Created] (DRILL-5735) UI options grouping and filtering & Metrics hints

2017-08-21 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5735:
---

 Summary: UI options grouping and filtering & Metrics hints
 Key: DRILL-5735
 URL: https://issues.apache.org/jira/browse/DRILL-5735
 Project: Apache Drill
  Issue Type: Improvement
  Components: Web Server
Affects Versions: 1.11.0, 1.10.0, 1.9.0
Reporter: Muhammad Gelbana


I can think of some UI improvements that could make all the difference for 
users trying to optimize low-performing queries.

h2. Options
h3. Grouping
We can organize the options into groups by their scope of effect; this will 
help users easily locate the options they may need to tune.
h3. Filtering
Since there are many options, we can add a filtering mechanism (e.g. string 
search or group/scope filtering) so users can filter out the options they are 
not interested in. To provide more benefit than the grouping idea above, 
filtering could also match keywords, not just option names, since users may 
not know the name of the option they are looking for.

h2. Metrics
I'm referring here to the metrics page and the query execution plan page, 
which display the overview section and the major/minor fragment metrics. We 
can show hints for each metric, such as:
# What the metric represents, in more detail.
# Which option (or group of options) to tune, and whether to increase or 
decrease it, to improve the performance reported by this metric.
# Maybe even a small dialog to quickly modify the option(s) related to that 
metric





[jira] [Updated] (DRILL-5718) java.lang.IllegalStateException: Memory was leaked by query

2017-08-13 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5718:

Environment: 
uname -a
Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 
2017 x86_64 x86_64 x86_64 GNU/Linux

48 cores
Drill is assigned a 25 GB heap and 200 GB of direct memory.
The machine has a total of 500 GB of RAM, 250 GB of which is assigned to 
another application. That application and Drill are the only significant 
applications on that machine.

  was:
uname -a
Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 
2017 x86_64 x86_64 x86_64 GNU/Linux

48 Cores
Assigned to Drill 25 GB heap and 200 GB direct memory


> java.lang.IllegalStateException: Memory was leaked by query
> ---
>
> Key: DRILL-5718
> URL: https://issues.apache.org/jira/browse/DRILL-5718
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow, Execution - RPC
>Affects Versions: 1.9.0, 1.11.0
> Environment: uname -a
> Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> 48 cores
> Drill is assigned a 25 GB heap and 200 GB of direct memory.
> The machine has a total of 500 GB of RAM, 250 GB of which is assigned to 
> another application. That application and Drill are the only significant 
> applications on that machine.
>Reporter: Muhammad Gelbana
> Attachments: drillbit.out.tar.gz
>
>
> Configurations
> {noformat}
> planner.memory.max_query_memory_per_node: 17179869184 (16 GB)
> planner.width.max_per_node: 48
> store.parquet.block-size: 134217728 (128 MB, this is the block size used to 
> create the parquet files)
> {noformat}
> {noformat}
> Fragment 0:0
> [Error Id: 05c39a1e-c8a8-4147-870f-e0cdbb454e53 on iWebStitchFixDev:31010]
> [BitServer-4] INFO org.apache.drill.exec.work.fragment.FragmentExecutor - 
> 267104f2-e48d-1d66-63f4-387848c1ccf2:1:10: State change requested RUNNING --> 
> CANCELLATION_REQUESTED
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> ChannelClosedException: Channel closed /127.0.0.1:31010 <--> /127.0.0.1:40404.
> Fragment 0:0
> [Error Id: 05c39a1e-c8a8-4147-870f-e0cdbb454e53 on iWebStitchFixDev:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550)
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:295)
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:264)
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.drill.exec.rpc.ChannelClosedException: Channel closed 
> /127.0.0.1:31010 <--> /127.0.0.1:40404.
> at 
> org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:164)
> at 
> org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:144)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> at 
> io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
> at 
> io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
> at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
> at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1099)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
> at 
> io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
> at 
> io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
> at 
> io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
> at 
> 

[jira] [Updated] (DRILL-5718) java.lang.IllegalStateException: Memory was leaked by query

2017-08-12 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5718:

Environment: 
uname -a
Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 
2017 x86_64 x86_64 x86_64 GNU/Linux

48 Cores
Assigned to Drill 25 GB heap and 200 GB direct memory

  was:
Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 
2017 x86_64 x86_64 x86_64 GNU/Linux

48 Cores
25 GB Heap
200 GB Direct memory


> java.lang.IllegalStateException: Memory was leaked by query
> ---
>
> Key: DRILL-5718
> URL: https://issues.apache.org/jira/browse/DRILL-5718
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow, Execution - RPC
>Affects Versions: 1.9.0, 1.11.0
> Environment: uname -a
> Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> 48 Cores
> Assigned to Drill 25 GB heap and 200 GB direct memory
>Reporter: Muhammad Gelbana
> Attachments: drillbit.out.tar.gz
>
>

[jira] [Updated] (DRILL-5718) java.lang.IllegalStateException: Memory was leaked by query

2017-08-12 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5718:

Attachment: drillbit.out.tar.gz

Full logs

> java.lang.IllegalStateException: Memory was leaked by query
> ---
>
> Key: DRILL-5718
> URL: https://issues.apache.org/jira/browse/DRILL-5718
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow, Execution - RPC
>Affects Versions: 1.9.0, 1.11.0
> Environment: Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP 
> Tue Apr 11 17:13:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> 48 Cores
> 25 GB Heap
> 200 GB Direct memory
>Reporter: Muhammad Gelbana
> Attachments: drillbit.out.tar.gz
>
>

[jira] [Created] (DRILL-5718) java.lang.IllegalStateException: Memory was leaked by query

2017-08-12 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5718:
---

 Summary: java.lang.IllegalStateException: Memory was leaked by 
query
 Key: DRILL-5718
 URL: https://issues.apache.org/jira/browse/DRILL-5718
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow, Execution - RPC
Affects Versions: 1.11.0, 1.9.0
 Environment: Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue 
Apr 11 17:13:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

48 Cores
25 GB Heap
200 GB Direct memory
Reporter: Muhammad Gelbana


Configurations
{noformat}
planner.memory.max_query_memory_per_node: 17179869184 (16 GB)
planner.width.max_per_node: 48
store.parquet.block-size: 134217728 (128 MB, this is the block size used to 
create the parquet files)
{noformat}
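For reference, the "25 GB heap and 200 GB direct memory" reported in the environment would typically be assigned in conf/drill-env.sh. A sketch, assuming the variable names read by the standard Drill launch scripts (the per-query 16 GB limit is a separate setting, shown in the Configurations block above):

```shell
# conf/drill-env.sh (sketch): the drillbit launch scripts read these.
# Values taken from this report.
export DRILL_HEAP="25G"
export DRILL_MAX_DIRECT_MEMORY="200G"
```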

{noformat}
Fragment 0:0

[Error Id: 05c39a1e-c8a8-4147-870f-e0cdbb454e53 on iWebStitchFixDev:31010]
[BitServer-4] INFO org.apache.drill.exec.work.fragment.FragmentExecutor - 
267104f2-e48d-1d66-63f4-387848c1ccf2:1:10: State change requested RUNNING --> 
CANCELLATION_REQUESTED
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
ChannelClosedException: Channel closed /127.0.0.1:31010 <--> /127.0.0.1:40404.

Fragment 0:0

[Error Id: 05c39a1e-c8a8-4147-870f-e0cdbb454e53 on iWebStitchFixDev:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:295)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:264)
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.drill.exec.rpc.ChannelClosedException: Channel closed 
/127.0.0.1:31010 <--> /127.0.0.1:40404.
at 
org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:164)
at 
org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:144)
at 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at 
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at 
io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
at 
io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
at 
io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
at 
io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
at 
io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1099)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
at 
io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
at 
io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
at 
io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
at 
io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:466)
at 
io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:187)
at 
org.apache.drill.exec.rpc.BasicServer$LoggingReadTimeoutHandler.readTimedOut(BasicServer.java:122)
at 
io.netty.handler.timeout.ReadTimeoutHandler$ReadTimeoutTask.run(ReadTimeoutHandler.java:212)
at 
io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at 
io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
Suppressed: org.apache.drill.exec.rpc.RpcException: Failure sending message.
at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:124)
at 
org.apache.drill.exec.rpc.user.UserServer$BitToUserConnection.sendData(UserServer.java:173)
at 

[jira] [Commented] (DRILL-4398) SYSTEM ERROR: IllegalStateException: Memory was leaked by query

2017-08-11 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16124376#comment-16124376
 ] 

Muhammad Gelbana commented on DRILL-4398:
-

Thanks [~zfong] for the response. I'll open a new issue shortly.
{noformat}
Fragment 0:0

[Error Id: 0403a63e-86cc-428e-929b-e8dcd561a6bf on iWebStitchFixDev:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.drill.exec.rpc.ChannelClosedException: Channel closed 
/72.55.136.6:31010 <--> /72.55.136.6:40834.
at 
org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:166)
at 
org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:146)
at 
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at 
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at 
io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
at 
io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
at 
io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
at 
io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
at 
io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1099)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
at 
io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
at 
io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
at 
io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
at 
io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:466)
at 
io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:187)
at 
org.apache.drill.exec.rpc.BasicServer$LogggingReadTimeoutHandler.readTimedOut(BasicServer.java:121)
at 
io.netty.handler.timeout.ReadTimeoutHandler$ReadTimeoutTask.run(ReadTimeoutHandler.java:212)
at 
io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at 
io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
Suppressed: org.apache.drill.exec.rpc.RpcException: Failure sending 
message.
at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:126)
at 
org.apache.drill.exec.rpc.user.UserServer$UserClientConnectionImpl.sendData(UserServer.java:285)
at 
org.apache.drill.exec.ops.AccountingUserConnection.sendData(AccountingUserConnection.java:42)
at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:118)
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
at 

[jira] [Created] (DRILL-5707) Non-scalar subquery fails the whole query if its aggregate column has an alias

2017-08-07 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5707:
---

 Summary: Non-scalar subquery fails the whole query if its 
aggregate column has an alias
 Key: DRILL-5707
 URL: https://issues.apache.org/jira/browse/DRILL-5707
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization, SQL Parser
Affects Versions: 1.11.0, 1.9.0
Reporter: Muhammad Gelbana


The following query can be handled by Drill
{code:sql}
SELECT b.marital_status, (SELECT SUM(position_id) FROM cp.`employee.json` a 
WHERE a.marital_status = b.marital_status ) AS max_a FROM cp.`employee.json` b
{code}

But if I add an alias to the aggregate function
{code:sql}
SELECT b.marital_status, (SELECT SUM(position_id) MY_ALIAS FROM 
cp.`employee.json` a WHERE a.marital_status = b.marital_status ) AS max_a FROM 
cp.`employee.json` b
{code}

Drill starts complaining that it can't handle non-scalar subqueries
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION 
ERROR: Non-scalar sub-query used in an expression See Apache Drill JIRA: 
DRILL-1937
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (DRILL-1937) Throw exception and give error message when Non-scalar sub-query used in an expression

2017-08-07 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-1937:

Comment: was deleted

(was: What's the problem with supporting non-scalar sub-queries? If a 
sub-query returns multiple rows, the result set can be grown so that the 
single outer value is joined with all the values returned from the non-scalar sub-query.

For example:
{code:sql}SELECT COL1, (SELECT COL2 FROM T2) FROM T1{code}

Would output the following result set
{noformat}
[COL1], [COL2]
Col1-Value, Col2-Value1
Col1-Value, Col2-Value2
Col1-Value, Col2-Value3
Col1-Value, Col2-Value4
{noformat}

Note that Col1 is the same for all rows; the column with the changing value is 
the one returned from Col2.)
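
The expansion described in the comment above amounts to joining each outer row with every row of the sub-query, i.e. a cross join; an illustrative sketch using the same hypothetical tables T1 and T2:

{code:sql}
SELECT T1.COL1, T2.COL2 FROM T1 CROSS JOIN T2
{code}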

> Throw exception and give error message when Non-scalar sub-query used in an 
> expression
> --
>
> Key: DRILL-1937
> URL: https://issues.apache.org/jira/browse/DRILL-1937
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 0.8.0
>Reporter: Victoria Markman
>Assignee: Aman Sinha
> Fix For: 0.8.0
>
> Attachments: DRILL-1937.1.patch
>
>
> {code}
> #Fri Jan 02 21:20:47 EST 2015
> git.commit.id.abbrev=b491cdb
> {code}
> It is dangerous to have an internal function be exposed to users.
> What if one day user decided to write a UDF with the same signature ?
> {code}
> 0: jdbc:drill:schema=dfs> select SINGLE_VALUE(1) from `t.json`;
> +--+
> |  |
> +--+
> +--+
> No rows selected (0.111 seconds)
> {code}



--


[jira] [Updated] (DRILL-5695) INTERVAL DAY multiplication isn't supported

2017-08-01 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5695:

Affects Version/s: 1.11.0

> INTERVAL DAY multiplication isn't supported
> ---
>
> Key: DRILL-5695
> URL: https://issues.apache.org/jira/browse/DRILL-5695
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.9.0, 1.11.0
>Reporter: Muhammad Gelbana
>
> I'm not sure if this is intended or a missing feature.
> The following query
> {code:sql}
> SELECT CUSTOM_DATE_TRUNC('day', CAST('1900-01-01' AS DATE) + CAST (NULL AS 
> INTERVAL DAY) * INTERVAL '1' DAY) + 1 * INTERVAL '1' YEAR FROM 
> `dfs`.`path_to_parquet` Calcs HAVING (COUNT(1) > 0) LIMIT 0
> {code}
> {noformat}
> 2017-07-30 13:12:15,439 [268240ef-eeea-04e2-cca2-b95033061af5:foreman] INFO  
> o.a.d.e.p.sql.TypeInferenceUtils - User Error Occurred
> org.apache.drill.common.exceptions.UserException: FUNCTION ERROR: * does not 
> support operand types (INTERVAL_DAY_TIME,INTERVAL_DAY_TIME)
> [Error Id: 50c2bd86-332c-4569-a5a2-76193e7eca41 ]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.planner.sql.TypeInferenceUtils.resolveDrillFuncHolder(TypeInferenceUtils.java:644)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.planner.sql.TypeInferenceUtils.access$1700(TypeInferenceUtils.java:57)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.planner.sql.TypeInferenceUtils$DrillDefaultSqlReturnTypeInference.inferReturnType(TypeInferenceUtils.java:260)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:468) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:435) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:507) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:493) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlOperator.constructArgTypeList(SqlOperator.java:581) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:240) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:222) 
> [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
>  [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
>  

[jira] [Updated] (DRILL-5583) Literal expression not handled

2017-08-01 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5583:

Affects Version/s: 1.11.0

> Literal expression not handled
> --
>
> Key: DRILL-5583
> URL: https://issues.apache.org/jira/browse/DRILL-5583
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.9.0, 1.11.0
>Reporter: Muhammad Gelbana
>
> The following query
> {code:sql}
> SELECT ((UNIX_TIMESTAMP(Calcs.`date0`, 'yyyy-MM-dd') / (60 * 60 * 24)) + (365 
> * 70 + 17)) `TEMP(Test)(64617177)(0)` FROM `dfs`.`path_to_parquet` Calcs 
> GROUP BY ((UNIX_TIMESTAMP(Calcs.`date0`, 'yyyy-MM-dd') / (60 * 60 * 24)) + 
> (365 * 70 + 17))
> {code}
> Throws the following exception
> {noformat}
> [Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> AssertionError: Internal error: invalid literal: 60 * 60 * 24
> [Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:825)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:935) 
> [drill-java-exec-1.9.0.jar:1.9.0]
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) 
> [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
> exception during fragment initialization: Internal error: invalid literal: 60 
> + 2
>   ... 4 common frames omitted
> Caused by: java.lang.AssertionError: Internal error: invalid literal: 60 + 2
>   at org.apache.calcite.util.Util.newInternal(Util.java:777) 
> ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at org.apache.calcite.sql.SqlLiteral.value(SqlLiteral.java:329) 
> ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlCallBinding.getOperandLiteralValue(SqlCallBinding.java:219)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlBinaryOperator.getMonotonicity(SqlBinaryOperator.java:188)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) 
> ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.SqlCallBinding.getOperandMonotonicity(SqlCallBinding.java:193)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.fun.SqlMonotonicBinaryOperator.getMonotonicity(SqlMonotonicBinaryOperator.java:59)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) 
> ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql.validate.SelectScope.getMonotonicity(SelectScope.java:154)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.createAggImpl(SqlToRelConverter.java:2476)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertAgg(SqlToRelConverter.java:2374)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:603)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:564)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:2769)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:518)
>  ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
>   at 
> org.apache.drill.exec.planner.sql.SqlConverter.toRel(SqlConverter.java:263) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
> 
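
Given the "invalid literal: 60 * 60 * 24" failure above, a hypothetical workaround is to fold the constant arithmetic by hand before planning sees it (86400 = 60 * 60 * 24, 25567 = 365 * 70 + 17; the format string 'yyyy-MM-dd' is assumed from context):

{code:sql}
SELECT ((UNIX_TIMESTAMP(Calcs.`date0`, 'yyyy-MM-dd') / 86400) + 25567) `TEMP(Test)(64617177)(0)`
FROM `dfs`.`path_to_parquet` Calcs
GROUP BY ((UNIX_TIMESTAMP(Calcs.`date0`, 'yyyy-MM-dd') / 86400) + 25567)
{code}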

[jira] [Created] (DRILL-5695) INTERVAL DAY multiplication isn't supported

2017-07-30 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5695:
---

 Summary: INTERVAL DAY multiplication isn't supported
 Key: DRILL-5695
 URL: https://issues.apache.org/jira/browse/DRILL-5695
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana


I'm not sure if this is intended or a missing feature.

The following query
{code:sql}
SELECT CUSTOM_DATE_TRUNC('day', CAST('1900-01-01' AS DATE) + CAST (NULL AS 
INTERVAL DAY) * INTERVAL '1' DAY) + 1 * INTERVAL '1' YEAR FROM 
`dfs`.`path_to_parquet` Calcs HAVING (COUNT(1) > 0) LIMIT 0
{code}

{noformat}
2017-07-30 13:12:15,439 [268240ef-eeea-04e2-cca2-b95033061af5:foreman] INFO  
o.a.d.e.p.sql.TypeInferenceUtils - User Error Occurred
org.apache.drill.common.exceptions.UserException: FUNCTION ERROR: * does not 
support operand types (INTERVAL_DAY_TIME,INTERVAL_DAY_TIME)


[Error Id: 50c2bd86-332c-4569-a5a2-76193e7eca41 ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.TypeInferenceUtils.resolveDrillFuncHolder(TypeInferenceUtils.java:644)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.TypeInferenceUtils.access$1700(TypeInferenceUtils.java:57)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.TypeInferenceUtils$DrillDefaultSqlReturnTypeInference.inferReturnType(TypeInferenceUtils.java:260)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:468) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:435) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:507) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:493) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlOperator.constructArgTypeList(SqlOperator.java:581) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:240) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:222) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
 [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
 

[jira] [Closed] (DRILL-5515) "IS NOT DISTINCT FROM" and its equivalent form aren't handled likewise

2017-07-14 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana closed DRILL-5515.
---
Resolution: Invalid

What I posted is NOT the equivalent form for the "IS DISTINCT FROM" clause.

> "IS NOT DISTINCT FROM" and its equivalent form aren't handled likewise
> --
>
> Key: DRILL-5515
> URL: https://issues.apache.org/jira/browse/DRILL-5515
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Muhammad Gelbana
>
> The following query fails to execute
> {code:sql}SELECT * FROM (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) 
> `t0` INNER JOIN (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t1` ON 
> (`t0`.`UserID` IS NOT DISTINCT FROM `t1`.`UserID`){code}
> and produces the following error message
> {noformat}org.apache.drill.common.exceptions.UserRemoteException: 
> UNSUPPORTED_OPERATION ERROR: This query cannot be planned possibly due to 
> either a cartesian join or an inequality join [Error Id: 
> 0bd41e06-ccd7-45d6-a038-3359bf5a4a7f on mgelbana-incorta:31010]{noformat}
> While the query's equivalent form runs fine
> {code:sql}SELECT * FROM (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) 
> `t0` INNER JOIN (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t1` ON 
> (`t0`.`UserID` = `t1`.`UserID` OR (`t0`.`UserID` IS NULL AND `t1`.`UserID` IS 
> NULL)){code}



--


[jira] [Commented] (DRILL-5606) Some tests fail after creating a fresh clone

2017-07-08 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16079131#comment-16079131
 ] 

Muhammad Gelbana commented on DRILL-5606:
-

For the 
*org.apache.drill.exec.fn.impl.TestCastFunctions.testToDateForTimeStamp()* 
method, I found that when the test query
{code:sql}select to_date(to_timestamp(-1)) as col from (values(1)){code}
is run against Drill, the result is
{noformat}1970-01-01T00:00:00.000+02:00{noformat}

If I set the timezone to *UTC* (i.e. *-Duser.timezone=UTC*), the result becomes
{noformat}1969-12-31T00:00:00.000Z{noformat}
which is what the test case expects, I guess.

I looked around for a test case that sets the timezone, but I only found a couple 
of test cases that were ignored because they rely on timezones!
Would someone please tell me how I can set the timezone for a test case?
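
One possibility for the question above (an assumption, not verified against Drill's POM) is to pass the JVM flag to the forked Surefire test JVMs through argLine:

{noformat}
mvn clean install -DargLine="-Duser.timezone=UTC"
{noformat}

If the project's POM already defines argLine (e.g. for memory settings), the flag may need to be appended there instead of overridden on the command line.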

> Some tests fail after creating a fresh clone
> 
>
> Key: DRILL-5606
> URL: https://issues.apache.org/jira/browse/DRILL-5606
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
> Environment: {noformat}
> $ uname -a
> Linux mg-mate 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> $ lsb_release -a
> No LSB modules are available.
> Distributor ID:   Ubuntu
> Description:  Ubuntu 16.04.2 LTS
> Release:  16.04
> Codename: xenial
> $ java -version
> openjdk version "1.8.0_131"
> OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)
> OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
> {noformat}
> Environment variables JAVA_HOME, JRE_HOME, JDK_HOME aren't configured. The Java 
> executable is found because the PATH environment variable points to it. I can 
> provide more details if needed.
>Reporter: Muhammad Gelbana
> Attachments: failing_tests.tar.gz, full_log.txt.tar.gz, 
> surefire-reports.tar.gz
>
>
> I cloned Drill from Github using this url: 
> [https://github.com/apache/drill.git] and I didn't change the branch 
> afterwards, so I'm using *master*.
> Afterwards, I ran the following command
> {noformat}
> mvn clean install
> {noformat}
> I attached the full log but here is a snippet indicating the failing tests:
> {noformat}
> Failed tests: 
>   TestExtendedTypes.checkReadWriteExtended:60 expected:<...ateDay" : 
> "1997-07-1[6"
>   },
>   "drill_timestamp" : {
> "$date" : "2009-02-23T08:00:00.000Z"
>   },
>   "time" : {
> "$time" : "19:20:30.450Z"
>   },
>   "interval" : {
> "$interval" : "PT26.400S"
>   },
>   "integer" : {
> "$numberLong" : 4
>   },
>   "inner" : {
> "bin" : {
>   "$binary" : "ZHJpbGw="
> },
> "drill_date" : {
>   "$dateDay" : "1997-07-16]"
> },
> "drill_...> but was:<...ateDay" : "1997-07-1[5"
>   },
>   "drill_timestamp" : {
> "$date" : "2009-02-23T08:00:00.000Z"
>   },
>   "time" : {
> "$time" : "19:20:30.450Z"
>   },
>   "interval" : {
> "$interval" : "PT26.400S"
>   },
>   "integer" : {
> "$numberLong" : 4
>   },
>   "inner" : {
> "bin" : {
>   "$binary" : "ZHJpbGw="
> },
> "drill_date" : {
>   "$dateDay" : "1997-07-15]"
> },
> "drill_...>
> Tests in error: 
>   TestCastFunctions.testToDateForTimeStamp:79 »  at position 0 column '`col`' 
> mi...
>   TestNewDateFunctions.testIsDate:61 »  After matching 0 records, did not 
> find e...
> Tests run: 2128, Failures: 1, Errors: 2, Skipped: 139
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Drill Root POM .. SUCCESS [ 19.805 
> s]
> [INFO] tools/Parent Pom ... SUCCESS [  0.605 
> s]
> [INFO] tools/freemarker codegen tooling ... SUCCESS [  7.077 
> s]
> [INFO] Drill Protocol . SUCCESS [  7.959 
> s]
> [INFO] Common (Logical Plan, Base expressions)  SUCCESS [  7.734 
> s]
> [INFO] Logical Plan, Base expressions . SUCCESS [  8.099 
> s]
> [INFO] exec/Parent Pom  SUCCESS [  0.575 
> s]
> [INFO] exec/memory/Parent Pom . SUCCESS [  0.513 
> s]
> [INFO] exec/memory/base ... SUCCESS [  4.666 
> s]
> [INFO] exec/rpc ... SUCCESS [  2.684 
> s]
> [INFO] exec/Vectors ... SUCCESS [01:11 
> min]
> [INFO] contrib/Parent Pom . SUCCESS [  0.547 
> s]
> [INFO] contrib/data/Parent Pom  SUCCESS [  0.496 
> s]
> [INFO] contrib/data/tpch-sample-data .. SUCCESS [  2.698 
> s]
> [INFO] exec/Java Execution Engine . FAILURE [19:09 
> min]
> {noformat}



--

[jira] [Commented] (DRILL-3993) Rebase Drill on Calcite master branch

2017-06-28 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066793#comment-16066793
 ] 

Muhammad Gelbana commented on DRILL-3993:
-

I believe we need to change that and target whatever the latest version of 
Calcite is. It's also unclear to me why Drill made a special build of Calcite 
in the first place.

> Rebase Drill on Calcite master branch
> -
>
> Key: DRILL-3993
> URL: https://issues.apache.org/jira/browse/DRILL-3993
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.2.0
>Reporter: Sudheesh Katkam
>
> Calcite keeps moving, and now we need to catch up to Calcite 1.5, and ensure 
> there are no regressions.
> Also, how do we resolve this 'catching up' issue in the long term?



--


[jira] [Updated] (DRILL-5606) Some tests fail after creating a fresh clone

2017-06-24 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5606:

Attachment: failing_tests.tar.gz
full_log.txt.tar.gz
surefire-reports.tar.gz

*surefire-reports.tar.gz*
This is the complete folder created by the surefire plugin

*full_log.txt.tar.gz*
This is the full log of the build process.

*failing_tests.tar.gz*
This archive contains 2 folders. Each folder contains 2 files outputted by one 
of the failing tests.

> Some tests fail after creating a fresh clone
> 
>
> Key: DRILL-5606
> URL: https://issues.apache.org/jira/browse/DRILL-5606
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
> Environment: {noformat}
> $ uname -a
> Linux mg-mate 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> $ lsb_release -a
> No LSB modules are available.
> Distributor ID:   Ubuntu
> Description:  Ubuntu 16.04.2 LTS
> Release:  16.04
> Codename: xenial
> $ java -version
> openjdk version "1.8.0_131"
> OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)
> OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
> {noformat}
> Environment variables JAVA_HOME, JRE_HOME and JDK_HOME aren't configured. The 
> Java executable is found through the PATH environment variable. I can 
> provide more details if needed.
>Reporter: Muhammad Gelbana
> Attachments: failing_tests.tar.gz, full_log.txt.tar.gz, 
> surefire-reports.tar.gz
>
>
> I cloned Drill from GitHub using this URL: 
> [https://github.com/apache/drill.git] and I didn't change the branch 
> afterwards, so I'm using *master*.
> Afterwards, I ran the following command
> {noformat}
> mvn clean install
> {noformat}
> I attached the full log but here is a snippet indicating the failing tests:
> {noformat}
> Failed tests: 
>   TestExtendedTypes.checkReadWriteExtended:60 expected:<...ateDay" : 
> "1997-07-1[6"
>   },
>   "drill_timestamp" : {
> "$date" : "2009-02-23T08:00:00.000Z"
>   },
>   "time" : {
> "$time" : "19:20:30.450Z"
>   },
>   "interval" : {
> "$interval" : "PT26.400S"
>   },
>   "integer" : {
> "$numberLong" : 4
>   },
>   "inner" : {
> "bin" : {
>   "$binary" : "ZHJpbGw="
> },
> "drill_date" : {
>   "$dateDay" : "1997-07-16]"
> },
> "drill_...> but was:<...ateDay" : "1997-07-1[5"
>   },
>   "drill_timestamp" : {
> "$date" : "2009-02-23T08:00:00.000Z"
>   },
>   "time" : {
> "$time" : "19:20:30.450Z"
>   },
>   "interval" : {
> "$interval" : "PT26.400S"
>   },
>   "integer" : {
> "$numberLong" : 4
>   },
>   "inner" : {
> "bin" : {
>   "$binary" : "ZHJpbGw="
> },
> "drill_date" : {
>   "$dateDay" : "1997-07-15]"
> },
> "drill_...>
> Tests in error: 
>   TestCastFunctions.testToDateForTimeStamp:79 »  at position 0 column '`col`' 
> mi...
>   TestNewDateFunctions.testIsDate:61 »  After matching 0 records, did not 
> find e...
> Tests run: 2128, Failures: 1, Errors: 2, Skipped: 139
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Drill Root POM .. SUCCESS [ 19.805 
> s]
> [INFO] tools/Parent Pom ... SUCCESS [  0.605 
> s]
> [INFO] tools/freemarker codegen tooling ... SUCCESS [  7.077 
> s]
> [INFO] Drill Protocol . SUCCESS [  7.959 
> s]
> [INFO] Common (Logical Plan, Base expressions)  SUCCESS [  7.734 
> s]
> [INFO] Logical Plan, Base expressions . SUCCESS [  8.099 
> s]
> [INFO] exec/Parent Pom  SUCCESS [  0.575 
> s]
> [INFO] exec/memory/Parent Pom . SUCCESS [  0.513 
> s]
> [INFO] exec/memory/base ... SUCCESS [  4.666 
> s]
> [INFO] exec/rpc ... SUCCESS [  2.684 
> s]
> [INFO] exec/Vectors ... SUCCESS [01:11 
> min]
> [INFO] contrib/Parent Pom . SUCCESS [  0.547 
> s]
> [INFO] contrib/data/Parent Pom  SUCCESS [  0.496 
> s]
> [INFO] contrib/data/tpch-sample-data .. SUCCESS [  2.698 
> s]
> [INFO] exec/Java Execution Engine . FAILURE [19:09 
> min]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (DRILL-5606) Some tests fail after creating a fresh clone

2017-06-24 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5606:
---

 Summary: Some tests fail after creating a fresh clone
 Key: DRILL-5606
 URL: https://issues.apache.org/jira/browse/DRILL-5606
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
 Environment: {noformat}
$ uname -a
Linux mg-mate 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 16.04.2 LTS
Release:16.04
Codename:   xenial

$ java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
{noformat}

Environment variables JAVA_HOME, JRE_HOME and JDK_HOME aren't configured. The 
Java executable is found through the PATH environment variable. I can 
provide more details if needed.
Reporter: Muhammad Gelbana


I cloned Drill from GitHub using this URL: 
[https://github.com/apache/drill.git] and I didn't change the branch 
afterwards, so I'm using *master*.

Afterwards, I ran the following command

{noformat}
mvn clean install
{noformat}

I attached the full log but here is a snippet indicating the failing tests:
{noformat}
Failed tests: 
  TestExtendedTypes.checkReadWriteExtended:60 expected:<...ateDay" : 
"1997-07-1[6"
  },
  "drill_timestamp" : {
"$date" : "2009-02-23T08:00:00.000Z"
  },
  "time" : {
"$time" : "19:20:30.450Z"
  },
  "interval" : {
"$interval" : "PT26.400S"
  },
  "integer" : {
"$numberLong" : 4
  },
  "inner" : {
"bin" : {
  "$binary" : "ZHJpbGw="
},
"drill_date" : {
  "$dateDay" : "1997-07-16]"
},
"drill_...> but was:<...ateDay" : "1997-07-1[5"
  },
  "drill_timestamp" : {
"$date" : "2009-02-23T08:00:00.000Z"
  },
  "time" : {
"$time" : "19:20:30.450Z"
  },
  "interval" : {
"$interval" : "PT26.400S"
  },
  "integer" : {
"$numberLong" : 4
  },
  "inner" : {
"bin" : {
  "$binary" : "ZHJpbGw="
},
"drill_date" : {
  "$dateDay" : "1997-07-15]"
},
"drill_...>

Tests in error: 
  TestCastFunctions.testToDateForTimeStamp:79 »  at position 0 column '`col`' 
mi...
  TestNewDateFunctions.testIsDate:61 »  After matching 0 records, did not find 
e...

Tests run: 2128, Failures: 1, Errors: 2, Skipped: 139

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Drill Root POM .. SUCCESS [ 19.805 s]
[INFO] tools/Parent Pom ... SUCCESS [  0.605 s]
[INFO] tools/freemarker codegen tooling ... SUCCESS [  7.077 s]
[INFO] Drill Protocol . SUCCESS [  7.959 s]
[INFO] Common (Logical Plan, Base expressions)  SUCCESS [  7.734 s]
[INFO] Logical Plan, Base expressions . SUCCESS [  8.099 s]
[INFO] exec/Parent Pom  SUCCESS [  0.575 s]
[INFO] exec/memory/Parent Pom . SUCCESS [  0.513 s]
[INFO] exec/memory/base ... SUCCESS [  4.666 s]
[INFO] exec/rpc ... SUCCESS [  2.684 s]
[INFO] exec/Vectors ... SUCCESS [01:11 min]
[INFO] contrib/Parent Pom . SUCCESS [  0.547 s]
[INFO] contrib/data/Parent Pom  SUCCESS [  0.496 s]
[INFO] contrib/data/tpch-sample-data .. SUCCESS [  2.698 s]
[INFO] exec/Java Execution Engine . FAILURE [19:09 min]
{noformat}
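The expected and actual values in the TestExtendedTypes failure differ only in 
the day of month (1997-07-16 vs 1997-07-15). A one-day shift like this is 
typical of timezone-dependent date handling; that is a guess on my part, the 
report does not diagnose a cause. With GNU date the shift can be seen directly:

```shell
# A UTC midnight instant renders as the previous calendar day in any
# timezone west of UTC (requires GNU date and tzdata zone names).
TZ=UTC date -d '1997-07-16 00:00:00 UTC' +%F                 # 1997-07-16
TZ=America/Los_Angeles date -d '1997-07-16 00:00:00 UTC' +%F # 1997-07-15
```

If this is the cause, the tests would pass or fail depending on the machine's 
default timezone rather than on the code under test.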



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5583) Literal expression not handled

2017-06-13 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5583:

Description: 
The following query
{code:sql}
SELECT ((UNIX_TIMESTAMP(Calcs.`date0`, '-MM-dd') / (60 * 60 * 24)) + (365 * 
70 + 17)) `TEMP(Test)(64617177)(0)` FROM `dfs`.`path_to_parquet` Calcs GROUP BY 
((UNIX_TIMESTAMP(Calcs.`date0`, '-MM-dd') / (60 * 60 * 24)) + (365 * 70 + 
17))
{code}

Throws the following exception
{noformat}
[Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: AssertionError: 
Internal error: invalid literal: 60 * 60 * 24


[Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:825)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:935) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
exception during fragment initialization: Internal error: invalid literal: 60 + 
2
... 4 common frames omitted
Caused by: java.lang.AssertionError: Internal error: invalid literal: 60 + 2
at org.apache.calcite.util.Util.newInternal(Util.java:777) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlLiteral.value(SqlLiteral.java:329) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlCallBinding.getOperandLiteralValue(SqlCallBinding.java:219)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlBinaryOperator.getMonotonicity(SqlBinaryOperator.java:188)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlCallBinding.getOperandMonotonicity(SqlCallBinding.java:193)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.fun.SqlMonotonicBinaryOperator.getMonotonicity(SqlMonotonicBinaryOperator.java:59)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SelectScope.getMonotonicity(SelectScope.java:154)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.createAggImpl(SqlToRelConverter.java:2476)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertAgg(SqlToRelConverter.java:2374)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:603)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:564)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:2769)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:518)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.drill.exec.planner.sql.SqlConverter.toRel(SqlConverter.java:263) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRel(DefaultSqlHandler.java:626)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:195)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:164)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 

[jira] [Created] (DRILL-5583) Literal expression not handled

2017-06-13 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5583:
---

 Summary: Literal expression not handled
 Key: DRILL-5583
 URL: https://issues.apache.org/jira/browse/DRILL-5583
 Project: Apache Drill
  Issue Type: Bug
  Components: SQL Parser
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana


The following query
{code:sql}
SELECT ((UNIX_TIMESTAMP(Calcs.`date0`, '-MM-dd') / (60 * 60 * 24)) + (365 * 
70 + 17)) `TEMP(Test)(64617177)(0)` FROM `dfs`.`path_to_parquet` Calcs GROUP BY 
((UNIX_TIMESTAMP(Calcs.`date0`, '-MM-dd') / (60 * 60 * 24)) + (365 * 70 + 
17))
{code}

Throws the following exception
{noformat}
[Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: AssertionError: 
Internal error: invalid literal: 60 + 2


[Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:825)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:935) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
exception during fragment initialization: Internal error: invalid literal: 60 + 
2
... 4 common frames omitted
Caused by: java.lang.AssertionError: Internal error: invalid literal: 60 + 2
at org.apache.calcite.util.Util.newInternal(Util.java:777) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.sql.SqlLiteral.value(SqlLiteral.java:329) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlCallBinding.getOperandLiteralValue(SqlCallBinding.java:219)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlBinaryOperator.getMonotonicity(SqlBinaryOperator.java:188)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.SqlCallBinding.getOperandMonotonicity(SqlCallBinding.java:193)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.fun.SqlMonotonicBinaryOperator.getMonotonicity(SqlMonotonicBinaryOperator.java:59)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) 
~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql.validate.SelectScope.getMonotonicity(SelectScope.java:154)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.createAggImpl(SqlToRelConverter.java:2476)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertAgg(SqlToRelConverter.java:2374)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:603)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:564)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:2769)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:518)
 ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at 
org.apache.drill.exec.planner.sql.SqlConverter.toRel(SqlConverter.java:263) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRel(DefaultSqlHandler.java:626)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:195)
 ~[drill-java-exec-1.9.0.jar:1.9.0]

[jira] [Created] (DRILL-5539) drillbit.sh script breaks if the working directory contains spaces

2017-05-25 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5539:
---

 Summary: drillbit.sh script breaks if the working directory 
contains spaces
 Key: DRILL-5539
 URL: https://issues.apache.org/jira/browse/DRILL-5539
 Project: Apache Drill
  Issue Type: Bug
 Environment: Linux
Reporter: Muhammad Gelbana


The following output occurred when we tried running the drillbit.sh script in a 
path that contains spaces: */home/folder1/Folder Name/drill/bin*

{noformat}
[mgelbana@regression-sysops bin]$ ./drillbit.sh start
./drillbit.sh: line 114: [: /home/folder1/Folder: binary operator expected
Starting drillbit, logging to /home/folder1/Folder Name/drill/log/drillbit.out
./drillbit.sh: line 147: $pid: ambiguous redirect
[mgelbana@regression-sysops bin]$ pwd
/home/folder1/Folder Name/drill/bin
{noformat}
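Both messages are classic unquoted-variable bugs in POSIX shell: an unquoted 
path word-splits inside `[ ... ]` ("binary operator expected") and inside a 
redirect target ("ambiguous redirect"). A minimal sketch of the pattern and 
its fix, using a hypothetical path rather than the actual drillbit.sh code:

```shell
#!/bin/sh
# Hypothetical reproduction of the quoting problem; not the real
# drillbit.sh logic, just the same shell pattern.
dir="/tmp/Folder Name"
mkdir -p "$dir"
pid="$dir/drillbit.pid"

# Unquoted, [ -d $dir ] expands to [ -d /tmp/Folder Name ], i.e. two
# arguments, and `[` reports "binary operator expected". Quoting keeps
# the path as a single word:
if [ -d "$dir" ]; then
  echo "directory check ok"
fi

# Likewise, `> $pid` is an "ambiguous redirect" in bash when $pid
# contains a space; quoting the redirect target fixes it:
echo "$$" > "$pid"
```

The fix in drillbit.sh would presumably be to double-quote every expansion of 
the variables holding DRILL_HOME-derived paths.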



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (DRILL-1937) Throw exception and give error message when Non-scalar sub-query used in an expression

2017-05-24 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022650#comment-16022650
 ] 

Muhammad Gelbana commented on DRILL-1937:
-

What's the problem with supporting non-scalar sub-queries? If a sub-query 
returns multiple rows, the result set can be expanded so that each outer-row 
value is joined with every value returned by the non-scalar sub-query.

For example:
{code:sql}SELECT COL1, (SELECT COL2 FROM T2) FROM T1{code}

Would output the following resultset
{noformat}
[COL1], [COL2]
Col1-Value, Col2-Value1
Col1-Value, Col2-Value2
Col1-Value, Col2-Value3
Col1-Value, Col2-Value4
{noformat}

Note that COL1 is the same for all rows; the changing value is the one 
returned from COL2.

> Throw exception and give error message when Non-scalar sub-query used in an 
> expression
> --
>
> Key: DRILL-1937
> URL: https://issues.apache.org/jira/browse/DRILL-1937
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 0.8.0
>Reporter: Victoria Markman
>Assignee: Aman Sinha
> Fix For: 0.8.0
>
> Attachments: DRILL-1937.1.patch
>
>
> {code}
> #Fri Jan 02 21:20:47 EST 2015
> git.commit.id.abbrev=b491cdb
> {code}
> It is dangerous to have an internal function exposed to users.
> What if one day a user decides to write a UDF with the same signature?
> {code}
> 0: jdbc:drill:schema=dfs> select SINGLE_VALUE(1) from `t.json`;
> +--+
> |  |
> +--+
> +--+
> No rows selected (0.111 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (DRILL-5515) "IS NOT DISTINCT FROM" and its equivalent form aren't handled alike

2017-05-16 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5515:
---

 Summary: "IS NOT DISTINCT FROM" and its equivalent form aren't 
handled alike
 Key: DRILL-5515
 URL: https://issues.apache.org/jira/browse/DRILL-5515
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Affects Versions: 1.10.0, 1.9.0
Reporter: Muhammad Gelbana


The following query fails to execute
{code:sql}SELECT * FROM (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t0` 
INNER JOIN (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t1` ON 
(`t0`.`UserID` IS NOT DISTINCT FROM `t1`.`UserID`){code}
and produces the following error message
{noformat}org.apache.drill.common.exceptions.UserRemoteException: 
UNSUPPORTED_OPERATION ERROR: This query cannot be planned possibly due to 
either a cartesian join or an inequality join [Error Id: 
0bd41e06-ccd7-45d6-a038-3359bf5a4a7f on mgelbana-incorta:31010]{noformat}
While the query's equivalent form runs fine
{code:sql}SELECT * FROM (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t0` 
INNER JOIN (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t1` ON 
(`t0`.`UserID` = `t1`.`UserID` OR (`t0`.`UserID` IS NULL AND `t1`.`UserID` IS 
NULL)){code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (DRILL-5300) SYSTEM ERROR: IllegalStateException: Memory was leaked by query while querying parquet files

2017-05-14 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009677#comment-16009677
 ] 

Muhammad Gelbana commented on DRILL-5300:
-

[~kkhatua], I tried it once v1.10 was released but the issue wasn't solved. I 
still had to clone this [repo|https://github.com/dain/snappy.git], build it, 
and include the resulting JAR with Drill in the jars/3rdparty folder. Forgive 
me for the late reply.

> SYSTEM ERROR: IllegalStateException: Memory was leaked by query while 
> querying parquet files
> 
>
> Key: DRILL-5300
> URL: https://issues.apache.org/jira/browse/DRILL-5300
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
> Environment: OS: Linux
>Reporter: Muhammad Gelbana
> Attachments: both_queries_logs.zip
>
>
> Running the following query against parquet files (I modified some values for 
> privacy reasons)
> {code:title=Query causing the long logs|borderStyle=solid}
> SELECT AL4.NAME, AL5.SEGMENT2, SUM(AL1.AMOUNT), AL2.ATTRIBUTE4, 
> AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, 
> AL11.NAME FROM 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA__TRX_LINE_GL_DIST_ALL`
>  AL1, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA_OMER_TRX_ALL`
>  AL2, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX`
>  AL3, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_HR_COMMON/HR_ALL_ORGANIZATION_UNITS`
>  AL4, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_CODE_COMBINATIONS`
>  AL5, 
> dfs.`/disk2/XXX/XXX//data/../parquet//XXAT_AR_MU_TAB` 
> AL8, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX`
>  AL11, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S`
>  AL12, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS`
>  AL13, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL`
>  AL14, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL`
>  AL15, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL`
>  AL16, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL`
>  AL17, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS`
>  AL18, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S`
>  AL19 WHERE (AL2.SHIP_TO__USE_ID = AL15._USE_ID AND 
> AL15.___ID = AL14.___ID AND AL14.X__ID = 
> AL12.X__ID AND AL12.LOCATION_ID = AL13.LOCATION_ID AND 
> AL17.___ID = AL16.___ID AND AL16.X__ID = 
> AL19.X__ID AND AL19.LOCATION_ID = AL18.LOCATION_ID AND 
> AL2.BILL_TO__USE_ID = AL17._USE_ID AND AL2.SET_OF_X_ID = 
> AL3.SET_OF_X_ID AND AL1.CODE_COMBINATION_ID = AL5.CODE_COMBINATION_ID AND 
> AL5.SEGMENT4 = AL8.MU AND AL1.SET_OF_X_ID = AL11.SET_OF_X_ID AND 
> AL2.ORG_ID = AL4.ORGANIZATION_ID AND AL2.OMER_TRX_ID = 
> AL1.OMER_TRX_ID) AND ((AL5.SEGMENT2 = '41' AND AL1.AMOUNT <> 0 AND 
> AL4.NAME IN ('XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 
> 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-') 
> AND AL3.NAME like '%-PR-%')) GROUP BY AL4.NAME, AL5.SEGMENT2, AL2.ATTRIBUTE4, 
> AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, 
> AL11.NAME
> {code}
> {code:title=Query causing the short logs|borderStyle=solid}
> SELECT AL11.NAME
> FROM
> dfs.`/XXX/XXX/XXX/data/../parquet/XXX_XXX_COMMON/GL_XXX` 
> LIMIT 10
> {code}
> This issue may be a duplicate of [this 
> one|https://issues.apache.org/jira/browse/DRILL-4398] but I created a new one 
> based on [this 
> suggestion|https://issues.apache.org/jira/browse/DRILL-4398?focusedCommentId=15884846=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15884846].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (DRILL-5452) Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled

2017-04-28 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5452:

Description: 
The following query
{code:sql}
SELECT * FROM (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 
2147483647) `t0` INNER JOIN (SELECT 'ABC' `UserID` FROM 
`dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t1` ON (`t0`.`UserID` IS NOT 
DISTINCT FROM `t1`.`UserID`) LIMIT 2147483647{code}

Leads to the following exception

{noformat}2017-04-28 16:59:11,722 
[26fca73f-92f0-4664-4dca-88bc48265c92:foreman] INFO  
o.a.d.e.planner.sql.DrillSqlWorker - User Error Occurred
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:107)
 [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.drill.exec.work.foreman.UnsupportedRelOperatorException: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:432)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:97)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 5 common frames omitted
2017-04-28 16:59:11,741 [USER-rpc-event-queue] ERROR 
o.a.d.exec.server.rest.QueryWrapper - Query Failed
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION 
ERROR: This query cannot be planned possibly due to either a cartesian join or 
an inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 on mgelbana:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:144) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
 [drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:65) 
[drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:363) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:240) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:245) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 [netty-codec-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
 [netty-handler-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 

[jira] [Updated] (DRILL-5452) Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled

2017-04-28 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5452:

Description: 
The following query
{code:sql}
SELECT * FROM (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 
2147483647) `t0` INNER JOIN (SELECT 'ABC' `UserID` FROM 
`dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t1` ON (`t0`.`UserID` IS NOT 
DISTINCT FROM `t1`.`UserID`) LIMIT 2147483647{code}

Leads to the following exception

{noformat}2017-04-28 16:59:11,722 
[26fca73f-92f0-4664-4dca-88bc48265c92:foreman] INFO  
o.a.d.e.planner.sql.DrillSqlWorker - User Error Occurred
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:107)
 [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.drill.exec.work.foreman.UnsupportedRelOperatorException: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:432)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:97)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 5 common frames omitted
2017-04-28 16:59:11,741 [USER-rpc-event-queue] ERROR 
o.a.d.exec.server.rest.QueryWrapper - Query Failed
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION 
ERROR: This query cannot be planned possibly due to either a cartesian join or 
an inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 on mgelbana-incorta:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:144) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
 [drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:65) 
[drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:363) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:240) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:245) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 [netty-codec-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
 [netty-handler-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 

[jira] [Updated] (DRILL-5452) Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled

2017-04-28 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5452:

Description: 
The following query
{code:sql}
SELECT * FROM (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 
2147483647) `t0` INNER JOIN (SELECT 'ABC' `UserID` FROM 
`dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t1` ON (`t0`.`UserID` IS NOT 
DISTINCT FROM `t1`.`UserID`) LIMIT 2147483647{code}

Leads to the following exception

{noformat}2017-04-28 16:59:11,722 
[26fca73f-92f0-4664-4dca-88bc48265c92:foreman] INFO  
o.a.d.e.planner.sql.DrillSqlWorker - User Error Occurred
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:107)
 [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.drill.exec.work.foreman.UnsupportedRelOperatorException: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:432)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:97)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 5 common frames omitted
2017-04-28 16:59:11,741 [USER-rpc-event-queue] ERROR 
o.a.d.exec.server.rest.QueryWrapper - Query Failed
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION 
ERROR: This query cannot be planned possibly due to either a cartesian join or 
an inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 on mgelbana-incorta:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:144) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
 [drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:65) 
[drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:363) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:240) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:245) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 [netty-codec-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
 [netty-handler-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 

[jira] [Updated] (DRILL-5452) Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled

2017-04-28 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5452:

Description: 
The following query
{code:sql}
SELECT * FROM (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 
2147483647) `t0` INNER JOIN (SELECT 'ABC' `UserID` FROM 
`dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t1` ON (`t0`.`UserID` IS NOT 
DISTINCT FROM `t1`.`UserID`) LIMIT 2147483647{code}

Leads to the following exception

{noformat}2017-04-28 16:59:11,722 
[26fca73f-92f0-4664-4dca-88bc48265c92:foreman] INFO  
o.a.d.e.planner.sql.DrillSqlWorker - User Error Occurred
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:107)
 [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.drill.exec.work.foreman.UnsupportedRelOperatorException: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:432)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:97)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 5 common frames omitted
2017-04-28 16:59:11,741 [USER-rpc-event-queue] ERROR 
o.a.d.exec.server.rest.QueryWrapper - Query Failed
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION 
ERROR: This query cannot be planned possibly due to either a cartesian join or 
an inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 on mgelbana-incorta:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:144) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
 [drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:65) 
[drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:363) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:240) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:245) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 [netty-codec-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
 [netty-handler-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 

[jira] [Created] (DRILL-5452) Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled

2017-04-28 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5452:
---

 Summary: Join query cannot be planned although all joins are 
enabled and "planner.enable_nljoin_for_scalar_only" is disabled
 Key: DRILL-5452
 URL: https://issues.apache.org/jira/browse/DRILL-5452
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Affects Versions: 1.10.0, 1.9.0
Reporter: Muhammad Gelbana


The following query
{code:sql}
SELECT * FROM (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 
2147483647) `t0` INNER JOIN (SELECT 'ABC' `UserID` FROM 
`dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t1` ON (`t0`.`UserID` IS NOT 
DISTINCT FROM `t1`.`UserID`) LIMIT 2147483647{code}

Leads to the following exception

{noformat}2017-04-28 16:59:11,722 
[26fca73f-92f0-4664-4dca-88bc48265c92:foreman] INFO  
o.a.d.e.planner.sql.DrillSqlWorker - User Error Occurred
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:107)
 [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) 
[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.drill.exec.work.foreman.UnsupportedRelOperatorException: 
This query cannot be planned possibly due to either a cartesian join or an 
inequality join
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:432)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:97)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 5 common frames omitted
2017-04-28 16:59:11,741 [USER-rpc-event-queue] ERROR 
o.a.d.exec.server.rest.QueryWrapper - Query Failed
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION 
ERROR: This query cannot be planned possibly due to either a cartesian join or 
an inequality join


[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 on mgelbana-incorta:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:144) 
[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
 [drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:65) 
[drill-rpc-1.9.0.jar:1.9.0]
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:363) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
 [drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:240) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:245) 
[drill-rpc-1.9.0.jar:1.9.0]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 [netty-codec-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
 [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at 

[jira] [Commented] (DRILL-5269) SYSTEM ERROR: JsonMappingException: No suitable constructor found for type [simple type, class org.apache.drill.exec.store.direct.DirectSubScan]

2017-04-28 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988691#comment-15988691
 ] 

Muhammad Gelbana commented on DRILL-5269:
-

[~sudheeshkatkam], have you had the chance to look into this? If possible, 
could you give me a hint on how to fix it, so I can try working out a patch or 
a pull request?
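As a hint toward the kind of fix Jackson expects here: the exception means the target class has neither a default constructor nor a constructor Jackson knows how to call. A minimal sketch of the usual `@JsonCreator`/`@JsonProperty` remedy follows — `ScanConfig` and its `cost` field are hypothetical stand-ins for illustration, not Drill's actual `DirectSubScan`:

```java
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SubScanCreatorSketch {
    // Hypothetical stand-in for a physical-operator config class.
    public static class ScanConfig {
        public final double cost;

        // Without this annotated constructor (and with no default one),
        // ObjectMapper.readValue(...) fails with JsonMappingException:
        // "No suitable constructor found for type ...".
        @JsonCreator
        public ScanConfig(@JsonProperty("cost") double cost) {
            this.cost = cost;
        }
    }

    public static void main(String[] args) throws Exception {
        ScanConfig c = new ObjectMapper()
                .readValue("{\"cost\": 10.0}", ScanConfig.class);
        System.out.println(c.cost); // prints 10.0
    }
}
```

Whether this is the right place to add the annotations in Drill itself would need confirmation from the maintainers.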

> SYSTEM ERROR: JsonMappingException: No suitable constructor found for type 
> [simple type, class org.apache.drill.exec.store.direct.DirectSubScan]
> 
>
> Key: DRILL-5269
> URL: https://issues.apache.org/jira/browse/DRILL-5269
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Anas
>Priority: Critical
> Attachments: tc_sm_parquet.tar.gz
>
>
> I ran a query that has nested joins. The query fails with the following 
> exception.
> {code}
> SYSTEM ERROR: JsonMappingException: No suitable constructor found for type 
> [simple type, class org.apache.drill.exec.store.direct.DirectSubScan]: can 
> not instantiate from JSON object (missing default constructor or creator, or 
> perhaps need to add/enable type information?)
>  at [Source: {
>   "pop" : "broadcast-sender",
>   "@id" : 0,
>   "receiver-major-fragment" : 1,
>   "child" : {
> "pop" : "selection-vector-remover",
> "@id" : 1,
> "child" : {
>   "pop" : "filter",
>   "@id" : 2,
>   "child" : {
> "pop" : "project",
> "@id" : 3,
> "exprs" : [ {
>   "ref" : "`__measure__10`",
>   "expr" : "`count`"
> } ],
> "child" : {
>   "pop" : "DirectSubScan",
>   "@id" : 4,
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "reader" : [ {
> "count" : 633
>   } ],
>   "cost" : 0.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 20.0
>   },
>   "expr" : "greater_than(`__measure__10`, 0) ",
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 10.0
>   },
>   "destinations" : [ {
> "minorFragmentId" : 0,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   }, {
> "minorFragmentId" : 1,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   } ],
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> }; line: 20, column: 11] (through reference chain: 
> org.apache.drill.exec.physical.config.BroadcastSender["child"]->org.apache.drill.exec.physical.config.SelectionVectorRemover["child"]->org.apache.drill.exec.physical.config.Filter["child"]->org.apache.drill.exec.physical.config.Project["child"])
> Fragment 3:0
> [Error Id: 9fb4ef4a-f118-4625-94f5-56c96dc7bdb4 on 192.168.0.100:31010]
>   (com.fasterxml.jackson.databind.JsonMappingException) No suitable 
> constructor found for type [simple type, class 
> org.apache.drill.exec.store.direct.DirectSubScan]: can not instantiate from 
> JSON object (missing default constructor or creator, or perhaps need to 
> add/enable type information?)
>  at [Source: {
>   "pop" : "broadcast-sender",
>   "@id" : 0,
>   "receiver-major-fragment" : 1,
>   "child" : {
> "pop" : "selection-vector-remover",
> "@id" : 1,
> "child" : {
>   "pop" : "filter",
>   "@id" : 2,
>   "child" : {
> "pop" : "project",
> "@id" : 3,
> "exprs" : [ {
>   "ref" : "`__measure__10`",
>   "expr" : "`count`"
> } ],
> "child" : {
>   "pop" : "DirectSubScan",
>   "@id" : 4,
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "reader" : [ {
> "count" : 633
>   } ],
>   "cost" : 0.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 20.0
>   },
>   "expr" : "greater_than(`__measure__10`, 0) ",
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 10.0
>   },
>   "destinations" : [ {
> "minorFragmentId" : 0,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   }, {
> "minorFragmentId" : 1,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   } ],
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> }; line: 20, column: 11] (through reference chain: 
> 

[jira] [Created] (DRILL-5393) ALTER SESSION documentation page broken link

2017-03-28 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5393:
---

 Summary: ALTER SESSION documentation page broken link
 Key: DRILL-5393
 URL: https://issues.apache.org/jira/browse/DRILL-5393
 Project: Apache Drill
  Issue Type: Bug
  Components: Documentation
Reporter: Muhammad Gelbana


On [this page|https://drill.apache.org/docs/modifying-query-planning-options/], 
there is a link to the ALTER SESSION documentation page which points to this 
broken link: https://drill.apache.org/docs/alter-session/

I believe the correct link should be: https://drill.apache.org/docs/set/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (DRILL-4818) Drill not pushing down joins to RDBS Storages

2017-03-24 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15940527#comment-15940527
 ] 

Muhammad Gelbana commented on DRILL-4818:
-

[~marcus.r...@gmail.com], have you made any progress with this, please?

> Drill not pushing down joins to RDBS Storages
> -
>
> Key: DRILL-4818
> URL: https://issues.apache.org/jira/browse/DRILL-4818
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - JDBC
>Affects Versions: 1.7.0
> Environment: Windows 7 and Linux Red Hat 6.1 server
>Reporter: Marcus Rehm
>Priority: Critical
> Attachments: drill pushdown rdbms.sql, Json Profile.txt, Physical 
> Plan.txt
>
>
> I'm trying to map our databases running on Oracle 11g. After trying some 
> queries, I realized that Drill takes longer to complete than a general SQL 
> client does. Looking at the execution plan, I saw that Drill is joining the 
> tables itself instead of pushing the join down to the database.
> My storage configuration is as:
> {
>   "type": "jdbc",
>   "driver": "oracle.jdbc.OracleDriver",
>   "url": "jdbc:oracle:thin:USER/PASS@server:1521/ORCL",
>   "username": null,
>   "password": null,
>   "enabled": true
> }
> I'm not able to reproduce the case with example tables, so I'm sending the 
> query and the physical plan Drill is generating.





[jira] [Commented] (DRILL-4177) select * from table;Node ran out of Heap memory, exiting.java.lang.OutOfMemoryError: GC overhead limit exceeded

2017-03-24 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15940493#comment-15940493
 ] 

Muhammad Gelbana commented on DRILL-4177:
-

[~arina], I followed your comment from 
[here|https://issues.apache.org/jira/browse/DRILL-4696?focusedCommentId=15351403&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15351403].

(I understand that "pushing down query filters" means that Drill lets the 
datasource perform the filtering, instead of receiving loads of data and 
filtering it itself. Please correct me if I'm wrong.)

Are you saying that if MySQL (or any datasource) is configured to stream out 
huge amounts of data when selected, Drill will attempt to push down query 
filters (and joins?)?
I see you've only configured the MySQL connection to enable useCursorFetch and 
set the default fetch size to 10K.

If that's what you meant, how would that influence Drill to push down query 
filters? Is there any other way to configure Drill to push down query and join 
filters?
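For reference, the two Connector/J settings mentioned above can be sketched as follows. This is a minimal, hedged example — the host, schema, and table name are placeholders — but `useCursorFetch` and `defaultFetchSize` are real MySQL Connector/J URL properties that make the driver stream rows with a server-side cursor instead of materializing the whole result set in heap (which is what triggers the GC-overhead error in this issue):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingFetchSketch {
    // Placeholder host/schema; the two query parameters are the point here.
    // useCursorFetch=true makes Connector/J use a server-side cursor;
    // defaultFetchSize bounds how many rows are buffered per round trip.
    public static final String URL =
        "jdbc:mysql://localhost:3306/mydb"
            + "?useCursorFetch=true&defaultFetchSize=10000";

    // Usage sketch: iterate the result set row by row; heap stays bounded
    // because the driver never holds the full table in memory.
    static void streamRows(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.setFetchSize(10_000); // rows fetched per driver round trip
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process one row at a time
                }
            }
        }
    }

    public static void main(String[] args) {
        // No live database in this sketch; just show the configured URL.
        System.out.println(URL);
    }
}
```

Whether these driver settings alone influence Drill's filter pushdown is exactly the open question above; the sketch only shows what the configuration itself does.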

> select * from table;Node ran out of Heap memory, 
> exiting.java.lang.OutOfMemoryError: GC overhead limit exceeded
> ---
>
> Key: DRILL-4177
> URL: https://issues.apache.org/jira/browse/DRILL-4177
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.3.0
> Environment: drill1.3 jdk7
>Reporter: david_hudavy
>  Labels: patch
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 0: jdbc:drill:zk=local> select * from table;
> Node ran out of Heap memory, exiting.
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2149)
> at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1956)
> at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3308)
> at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:463)
> at 
> com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3032)
> at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2280)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2673)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2546)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2504)
> at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1370)
> at 
> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
> at 
> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
> at 
> org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup(JdbcRecordReader.java:177)
> at 
> org.apache.drill.exec.physical.impl.ScanBatch.(ScanBatch.java:101)
> at 
> org.apache.drill.exec.physical.impl.ScanBatch.(ScanBatch.java:128)
> at 
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch(JdbcBatchCreator.java:40)
> at 
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch(JdbcBatchCreator.java:33)
> at 
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:151)
> at 
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:174)
> at 
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:131)
> at 
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:174)
> at 
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:105)
> at 
> org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:79)
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:230)
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)





[jira] [Updated] (DRILL-5300) SYSTEM ERROR: IllegalStateException: Memory was leaked by query while querying parquet files

2017-02-27 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5300:

Attachment: both_queries_logs.zip

> SYSTEM ERROR: IllegalStateException: Memory was leaked by query while 
> querying parquet files
> 
>
> Key: DRILL-5300
> URL: https://issues.apache.org/jira/browse/DRILL-5300
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
> Environment: OS: Linux
>Reporter: Muhammad Gelbana
> Attachments: both_queries_logs.zip
>
>
> Running the following query against parquet files (I modified some values for 
> privacy reasons)
> {code:title=Query causing the long logs|borderStyle=solid}
> SELECT AL4.NAME, AL5.SEGMENT2, SUM(AL1.AMOUNT), AL2.ATTRIBUTE4, 
> AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, 
> AL11.NAME FROM 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA__TRX_LINE_GL_DIST_ALL`
>  AL1, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA_OMER_TRX_ALL`
>  AL2, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX`
>  AL3, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_HR_COMMON/HR_ALL_ORGANIZATION_UNITS`
>  AL4, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_CODE_COMBINATIONS`
>  AL5, 
> dfs.`/disk2/XXX/XXX//data/../parquet//XXAT_AR_MU_TAB` 
> AL8, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX`
>  AL11, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S`
>  AL12, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS`
>  AL13, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL`
>  AL14, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL`
>  AL15, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL`
>  AL16, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL`
>  AL17, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS`
>  AL18, 
> dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S`
>  AL19 WHERE (AL2.SHIP_TO__USE_ID = AL15._USE_ID AND 
> AL15.___ID = AL14.___ID AND AL14.X__ID = 
> AL12.X__ID AND AL12.LOCATION_ID = AL13.LOCATION_ID AND 
> AL17.___ID = AL16.___ID AND AL16.X__ID = 
> AL19.X__ID AND AL19.LOCATION_ID = AL18.LOCATION_ID AND 
> AL2.BILL_TO__USE_ID = AL17._USE_ID AND AL2.SET_OF_X_ID = 
> AL3.SET_OF_X_ID AND AL1.CODE_COMBINATION_ID = AL5.CODE_COMBINATION_ID AND 
> AL5.SEGMENT4 = AL8.MU AND AL1.SET_OF_X_ID = AL11.SET_OF_X_ID AND 
> AL2.ORG_ID = AL4.ORGANIZATION_ID AND AL2.OMER_TRX_ID = 
> AL1.OMER_TRX_ID) AND ((AL5.SEGMENT2 = '41' AND AL1.AMOUNT <> 0 AND 
> AL4.NAME IN ('XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 
> 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-') 
> AND AL3.NAME like '%-PR-%')) GROUP BY AL4.NAME, AL5.SEGMENT2, AL2.ATTRIBUTE4, 
> AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, 
> AL11.NAME
> {code}
> {code:title=Query causing the short logs|borderStyle=solid}
> SELECT AL11.NAME
> FROM
> dfs.`/XXX/XXX/XXX/data/../parquet/XXX_XXX_COMMON/GL_XXX` 
> LIMIT 10
> {code}
> This issue may be a duplicate of [this 
> one|https://issues.apache.org/jira/browse/DRILL-4398] but I created a new one 
> based on [this 
> suggestion|https://issues.apache.org/jira/browse/DRILL-4398?focusedCommentId=15884846&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15884846].





[jira] [Created] (DRILL-5300) SYSTEM ERROR: IllegalStateException: Memory was leaked by query while querying parquet files

2017-02-27 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5300:
---

 Summary: SYSTEM ERROR: IllegalStateException: Memory was leaked by 
query while querying parquet files
 Key: DRILL-5300
 URL: https://issues.apache.org/jira/browse/DRILL-5300
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.9.0
 Environment: OS: Linux
Reporter: Muhammad Gelbana
 Attachments: both_queries_logs.zip

Running the following query against parquet files (I modified some values for 
privacy reasons)
{code:title=Query causing the long logs|borderStyle=solid}
SELECT AL4.NAME, AL5.SEGMENT2, SUM(AL1.AMOUNT), AL2.ATTRIBUTE4, 
AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, 
AL11.NAME FROM 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA__TRX_LINE_GL_DIST_ALL`
 AL1, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA_OMER_TRX_ALL`
 AL2, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX` 
AL3, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_HR_COMMON/HR_ALL_ORGANIZATION_UNITS`
 AL4, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_CODE_COMBINATIONS`
 AL5, 
dfs.`/disk2/XXX/XXX//data/../parquet//XXAT_AR_MU_TAB` 
AL8, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX` 
AL11, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S`
 AL12, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS`
 AL13, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL`
 AL14, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL`
 AL15, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL`
 AL16, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL`
 AL17, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS`
 AL18, 
dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S`
 AL19 WHERE (AL2.SHIP_TO__USE_ID = AL15._USE_ID AND 
AL15.___ID = AL14.___ID AND AL14.X__ID = 
AL12.X__ID AND AL12.LOCATION_ID = AL13.LOCATION_ID AND 
AL17.___ID = AL16.___ID AND AL16.X__ID = 
AL19.X__ID AND AL19.LOCATION_ID = AL18.LOCATION_ID AND 
AL2.BILL_TO__USE_ID = AL17._USE_ID AND AL2.SET_OF_X_ID = 
AL3.SET_OF_X_ID AND AL1.CODE_COMBINATION_ID = AL5.CODE_COMBINATION_ID AND 
AL5.SEGMENT4 = AL8.MU AND AL1.SET_OF_X_ID = AL11.SET_OF_X_ID AND 
AL2.ORG_ID = AL4.ORGANIZATION_ID AND AL2.OMER_TRX_ID = AL1.OMER_TRX_ID) 
AND ((AL5.SEGMENT2 = '41' AND AL1.AMOUNT <> 0 AND AL4.NAME IN 
('XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 
'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-') AND AL3.NAME like 
'%-PR-%')) GROUP BY AL4.NAME, AL5.SEGMENT2, AL2.ATTRIBUTE4, 
AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, 
AL11.NAME
{code}

{code:title=Query causing the short logs|borderStyle=solid}
SELECT AL11.NAME
FROM
dfs.`/XXX/XXX/XXX/data/../parquet/XXX_XXX_COMMON/GL_XXX` 
LIMIT 10
{code}
This issue may be a duplicate of [this 
one|https://issues.apache.org/jira/browse/DRILL-4398], but I created a new one 
based on [this 
suggestion|https://issues.apache.org/jira/browse/DRILL-4398?focusedCommentId=15884846&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15884846].





[jira] [Commented] (DRILL-4398) SYSTEM ERROR: IllegalStateException: Memory was leaked by query

2017-02-26 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884730#comment-15884730
 ] 

Muhammad Gelbana commented on DRILL-4398:
-

I'm facing this error on v1.9 after running very simple queries such as:
{code:sql}
SELECT A1.NAME
FROM
dfs.`/parquet_file_path` A1
LIMIT 10
{code}
Would someone kindly give an update on this, or perhaps suggest a workaround?



> SYSTEM ERROR: IllegalStateException: Memory was leaked by query
> ---
>
> Key: DRILL-4398
> URL: https://issues.apache.org/jira/browse/DRILL-4398
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - JDBC
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Several queries fail with memory leaked errors
> select tjoin2.rnum, tjoin1.c1, tjoin2.c1 as c1j2, tjoin2.c2 as c2j2 from 
> postgres.public.tjoin1 full outer join postgres.public.tjoin2 on tjoin1.c1 = 
> tjoin2.c1
> select tjoin1.rnum, tjoin1.c1, tjoin2.c1 as c1j2, tjoin2.c2 from 
> postgres.public.tjoin1, lateral ( select tjoin2.c1, tjoin2.c2 from 
> postgres.public.tjoin2 where tjoin1.c1=tjoin2.c1) tjoin2
> SYSTEM ERROR: IllegalStateException: Memory was leaked by query. Memory 
> leaked: (40960)
> Allocator(op:0:0:3:JdbcSubScan) 100/40960/135168/100 
> (res/actual/peak/limit)
> create table TJOIN1 (RNUM integer   not null , C1 integer, C2 integer);
> insert into TJOIN1 (RNUM, C1, C2) values ( 0, 10, 15);
> insert into TJOIN1 (RNUM, C1, C2) values ( 1, 20, 25);
> insert into TJOIN1 (RNUM, C1, C2) values ( 2, NULL, 50);
> create table TJOIN2 (RNUM integer   not null , C1 integer, C2 char(2));
> insert into TJOIN2 (RNUM, C1, C2) values ( 0, 10, 'BB');
> insert into TJOIN2 (RNUM, C1, C2) values ( 1, 15, 'DD');
> insert into TJOIN2 (RNUM, C1, C2) values ( 2, NULL, 'EE');
> insert into TJOIN2 (RNUM, C1, C2) values ( 3, 10, 'FF');





[jira] [Commented] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]

2017-01-27 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15842842#comment-15842842
 ] 

Muhammad Gelbana commented on DRILL-5197:
-

[~sharnyk], [~khfaraaz]
Now I'm facing the same problem with a parquet file, but the casting solution 
isn't working as expected. The weird part is that the following query against 
parquet doesn't fail if it's limited to 0 results (i.e. LIMIT 0), while 
removing the limit, or limiting it to 1 or more rows, causes the query to fail.

That's the opposite of what happens with the query mentioned in this issue's 
description, which always fails whether it's limited or not.

Could this be due to the specific nature of the data in the parquet file? I 
understand that the problem occurs before Drill knows anything about the data 
it's about to read.

This is the query we run against the parquet file (i.e. after changing the 
table and column names):
{code:sql}
SELECT CASE
 WHEN (
   CASE
  WHEN `mytable`.`column_1` = ' '
  THEN
 (
 CASE
WHEN `mytable`.`column_1` = 'XYZ'
THEN CAST(`mytable`.`column_1` AS VARCHAR)
ELSE NULL
END
 ) 
 ELSE NULL
 END
   ) <> 'ABC'
  THEN CAST(`mytable`.`column_2` AS DOUBLE) 
  WHEN `mytable`.`column_3` = 0
  THEN NULL
  ELSE NULL
   END
 `CASE_RESULT` 
FROM
   dfs.`/home/mgelbana/workspace/Parquet/mytable` `mytable` LIMIT 1
{code}

> CASE statement fails due to error: Unable to get value vector class for minor 
> type [NULL] and mode [OPTIONAL]
> -
>
> Key: DRILL-5197
> URL: https://issues.apache.org/jira/browse/DRILL-5197
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.9.0
>Reporter: Muhammad Gelbana
>
> The following query fails for no obvious reason
> {code:sql}
> SELECT
>CASE
>   WHEN `tname`.`full_name` = 'ABC' 
>   THEN
>  ( 
>  CASE
> WHEN `tname`.`full_name` = 'ABC' 
> THEN
>(
>   CASE
>  WHEN `tname`.`full_name` = ' ' 
>  THEN
> (
>CASE
>   WHEN `tname`.`full_name` = 'ABC' 
>   THEN `tname`.`full_name` 
>   ELSE NULL 
>END
> )
> ELSE NULL 
>   END
>)
>ELSE NULL 
>  END
>  )
>  WHEN `tname`.`full_name` = 'ABC' 
>  THEN NULL 
>  ELSE NULL 
>END
> FROM
>cp.`employee.json` `tname`
> {code}
> If the `THEN `tname`.`full_name`` statement is changed to `THEN 'ABC'`, the 
> error does not occur.
> Thrown exception
> {noformat}
> [Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.9.0.jar:1.9.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
> Caused by: java.lang.UnsupportedOperationException: Unable to get value 
> vector class for minor type [NULL] and mode [OPTIONAL]
>   at 
> org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441)
>  ~[vector-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> 

[jira] [Commented] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]

2017-01-16 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824237#comment-15824237
 ] 

Muhammad Gelbana commented on DRILL-5197:
-

I used casting to fix the problem because I can't change the default values in 
the query from NULL to anything else. Is there a better way to solve this?
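
The casting workaround can be sketched as follows, using the employee.json 
example from this issue's description. This is a hedged illustration (the 
`typed_result` alias is mine, not from the original report), not an official 
fix: giving every CASE branch, including the NULL default, an explicit CAST 
leaves no branch with the unsupported NULL/OPTIONAL type.

{code:sql}
-- Hedged sketch of the casting workaround: both branches of the CASE,
-- including the NULL default, carry an explicit VARCHAR type, so the planner
-- never has to build a value vector for minor type [NULL].
SELECT CASE
          WHEN `tname`.`full_name` = 'ABC'
          THEN CAST(`tname`.`full_name` AS VARCHAR)
          ELSE CAST(NULL AS VARCHAR)
       END AS `typed_result`
FROM cp.`employee.json` `tname`
LIMIT 1
{code}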

> CASE statement fails due to error: Unable to get value vector class for minor 
> type [NULL] and mode [OPTIONAL]
> -
>
> Key: DRILL-5197
> URL: https://issues.apache.org/jira/browse/DRILL-5197
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.9.0
>Reporter: Muhammad Gelbana
>
> The following query fails for no obvious reason
> {code:sql}
> SELECT
>CASE
>   WHEN `tname`.`full_name` = 'ABC' 
>   THEN
>  ( 
>  CASE
> WHEN `tname`.`full_name` = 'ABC' 
> THEN
>(
>   CASE
>  WHEN `tname`.`full_name` = ' ' 
>  THEN
> (
>CASE
>   WHEN `tname`.`full_name` = 'ABC' 
>   THEN `tname`.`full_name` 
>   ELSE NULL 
>END
> )
> ELSE NULL 
>   END
>)
>ELSE NULL 
>  END
>  )
>  WHEN `tname`.`full_name` = 'ABC' 
>  THEN NULL 
>  ELSE NULL 
>END
> FROM
>cp.`employee.json` `tname`
> {code}
> If the `THEN `tname`.`full_name`` statement is changed to `THEN 'ABC'`, the 
> error does not occur.
> Thrown exception
> {noformat}
> [Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.9.0.jar:1.9.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
> Caused by: java.lang.UnsupportedOperationException: Unable to get value 
> vector class for minor type [NULL] and mode [OPTIONAL]
>   at 
> org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441)
>  ~[vector-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_111]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_111]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>  

[jira] [Commented] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]

2017-01-16 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824001#comment-15824001
 ] 

Muhammad Gelbana commented on DRILL-5197:
-

[~sharnyk], how does that apply to the above query? If I replace the 
{code:sql}THEN `tname`.`full_name`{code} statement with a literal value, the 
query runs without errors. Why is this treated differently from returning a 
column value?

> CASE statement fails due to error: Unable to get value vector class for minor 
> type [NULL] and mode [OPTIONAL]
> -
>
> Key: DRILL-5197
> URL: https://issues.apache.org/jira/browse/DRILL-5197
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.9.0
>Reporter: Muhammad Gelbana
>
> The following query fails for no obvious reason
> {code:sql}
> SELECT
>CASE
>   WHEN `tname`.`full_name` = 'ABC' 
>   THEN
>  ( 
>  CASE
> WHEN `tname`.`full_name` = 'ABC' 
> THEN
>(
>   CASE
>  WHEN `tname`.`full_name` = ' ' 
>  THEN
> (
>CASE
>   WHEN `tname`.`full_name` = 'ABC' 
>   THEN `tname`.`full_name` 
>   ELSE NULL 
>END
> )
> ELSE NULL 
>   END
>)
>ELSE NULL 
>  END
>  )
>  WHEN `tname`.`full_name` = 'ABC' 
>  THEN NULL 
>  ELSE NULL 
>END
> FROM
>cp.`employee.json` `tname`
> {code}
> If the `THEN `tname`.`full_name`` statement is changed to `THEN 'ABC'`, the 
> error does not occur.
> Thrown exception
> {noformat}
> [Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.9.0.jar:1.9.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
> Caused by: java.lang.UnsupportedOperationException: Unable to get value 
> vector class for minor type [NULL] and mode [OPTIONAL]
>   at 
> org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441)
>  ~[vector-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_111]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_111]
>   at 
> 

[jira] [Updated] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]

2017-01-16 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5197:

Description: 
The following query fails for no obvious reason
{code:sql}
SELECT
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN
 ( 
 CASE
WHEN `tname`.`full_name` = 'ABC' 
THEN
   (
  CASE
 WHEN `tname`.`full_name` = ' ' 
 THEN
(
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN `tname`.`full_name` 
  ELSE NULL 
   END
)
ELSE NULL 
  END
   )
   ELSE NULL 
 END
 )
 WHEN `tname`.`full_name` = 'ABC' 
 THEN NULL 
 ELSE NULL 
   END
FROM
   cp.`employee.json` `tname`
{code}
If the `THEN `tname`.`full_name`` statement is changed to `THEN 'ABC'`, the 
error does not occur.

Thrown exception
{noformat}
[Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_111]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.UnsupportedOperationException: Unable to get value vector 
class for minor type [NULL] and mode [OPTIONAL]
at 
org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441)
 ~[vector-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.8.0_111]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_111]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 ~[hadoop-common-2.7.1.jar:na]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 4 common frames omitted
{noformat}

  was:
The following query fails for no obvious reason
{code:sql}
SELECT
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN
( 
 CASE
WHEN `tname`.`full_name` = 'ABC' 
THEN
   (
  CASE
 WHEN `tname`.`full_name` = ' ' 
 THEN
(
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN `tname`.`full_name` 
  ELSE NULL 
   END
)
ELSE NULL 
  END

[jira] [Updated] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]

2017-01-16 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5197:

Description: 
The following query fails for no obvious reason
{code:sql}
SELECT
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN
( 
 CASE
WHEN `tname`.`full_name` = 'ABC' 
THEN
   (
  CASE
 WHEN `tname`.`full_name` = ' ' 
 THEN
(
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN `tname`.`full_name` 
  ELSE NULL 
   END
)
ELSE NULL 
  END
   )
   ELSE NULL 
 END
 )
 WHEN `tname`.`full_name` = 'ABC' 
 THEN NULL 
 ELSE NULL 
   END
FROM
   cp.`employee.json` `tname`
{code}
If the `THEN `tname`.`full_name`` statement is changed to `THEN 'ABC'`, the 
error does not occur.

Thrown exception
{noformat}
[Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_111]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.UnsupportedOperationException: Unable to get value vector 
class for minor type [NULL] and mode [OPTIONAL]
at 
org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441)
 ~[vector-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.8.0_111]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_111]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 ~[hadoop-common-2.7.1.jar:na]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 4 common frames omitted
{noformat}

  was:
The following query fails for no obvious reason
{code:sql}
SELECT
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN
( 
 CASE
WHEN `tname`.`full_name` = 'ABC' 
THEN
   (
  CASE
 WHEN `tname`.`full_name` = ' ' 
 THEN
(
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN `tname`.`full_name` 
  ELSE NULL 
   END
)
ELSE NULL 
  END
 

[jira] [Created] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]

2017-01-15 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5197:
---

 Summary: CASE statement fails due to error: Unable to get value 
vector class for minor type [NULL] and mode [OPTIONAL]
 Key: DRILL-5197
 URL: https://issues.apache.org/jira/browse/DRILL-5197
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana


The following query fails for no obvious reason
{code:sql}
SELECT
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN
( 
 CASE
WHEN `tname`.`full_name` = 'ABC' 
THEN
   (
  CASE
 WHEN `tname`.`full_name` = ' ' 
 THEN
(
   CASE
  WHEN `tname`.`full_name` = 'ABC' 
  THEN `tname`.`full_name` 
  ELSE NULL 
   END
)
ELSE NULL 
  END
   )
   ELSE NULL 
 END
 )
 WHEN `tname`.`full_name` = 'ABC' 
 THEN NULL 
 ELSE NULL 
   END
FROM
   cp.`employee.json` `tname`
{code}
If the `THEN `tname`.`full_name`` statement is changed to `THEN 'ABC'`, the 
error does not occur.

Thrown exception
{quote}
[Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
 [drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.9.0.jar:1.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_111]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.UnsupportedOperationException: Unable to get value vector 
class for minor type [NULL] and mode [OPTIONAL]
at 
org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441)
 ~[vector-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.8.0_111]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_111]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
 ~[hadoop-common-2.7.1.jar:na]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
 [drill-java-exec-1.9.0.jar:1.9.0]
... 4 common frames omitted
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5194) UDF returns NULL as expected only if the input is a literal

2017-01-12 Thread Muhammad Gelbana (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820799#comment-15820799
 ] 

Muhammad Gelbana commented on DRILL-5194:
-

This issue duplicates this one: https://issues.apache.org/jira/browse/DRILL-5193

> UDF returns NULL as expected only if the input is a literal
> ---
>
> Key: DRILL-5194
> URL: https://issues.apache.org/jira/browse/DRILL-5194
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.9.0
>Reporter: Muhammad Gelbana
>
> I defined the following UDF
> {code:title=SplitPartFunc.java|borderStyle=solid}
> import javax.inject.Inject;
> import org.apache.drill.exec.expr.DrillSimpleFunc;
> import org.apache.drill.exec.expr.annotations.FunctionTemplate;
> import org.apache.drill.exec.expr.annotations.Output;
> import org.apache.drill.exec.expr.annotations.Param;
> import org.apache.drill.exec.expr.holders.IntHolder;
> import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
> import org.apache.drill.exec.expr.holders.VarCharHolder;
> import io.netty.buffer.DrillBuf;
> @FunctionTemplate(name = "split_string", scope = 
> FunctionTemplate.FunctionScope.SIMPLE, nulls = 
> FunctionTemplate.NullHandling.NULL_IF_NULL)
> public class SplitPartFunc implements DrillSimpleFunc {
> @Param
> VarCharHolder input;
> @Param(constant = true)
> VarCharHolder delimiter;
> @Param(constant = true)
> IntHolder field;
> @Output
> NullableVarCharHolder out;
> @Inject
> DrillBuf buffer;
> public void setup() {
> }
> public void eval() {
> String stringValue = 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start,
>  input.end, input.buffer);
> out.buffer = buffer; //If I return before this statement, a NPE is 
> thrown :(
> if(stringValue == null){
> return;
> }
> int fieldValue = field.value;
> if(fieldValue <= 0){
> return; 
> }
> String delimiterValue = 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start,
>  delimiter.end, delimiter.buffer);
> if(delimiterValue == null){
> return;
> }
> String[] splittedInput = stringValue.split(delimiterValue);
> if(splittedInput.length < fieldValue){
> return;
> }
> // put the output value in the out buffer
> String outputValue = splittedInput[fieldValue - 1];
> out.start = 0;
> out.end = outputValue.getBytes().length;
> buffer.setBytes(0, outputValue.getBytes());
> out.isSet = 1;
> }
> }
> {code}
> If I run the following query on the sample employee.json file (or actually a 
> parquet, after modifying the table and column names)
> {code:title=SQL Query|borderStyle=solid}SELECT full_name, 
> split_string(full_name, ' ', 4), split_string('Whatever', ' ', 4) FROM 
> cp.employee.json LIMIT 1{code}
> I get the following result
> !https://i.stack.imgur.com/L8uQW.png!
> Shouldn't I be getting the column value, and NULL for the other 2 columns?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
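A side note on the eval() quoted above: String.split treats its argument as a regular expression, so a delimiter such as "." or "|" would not split as intended; regex-quoting the delimiter avoids that. A minimal plain-Java sketch (the class and helper names are hypothetical, not part of the UDF or of Drill):

```java
import java.util.regex.Pattern;

public class SplitLiteralSketch {
    // Split on a literal delimiter by regex-quoting it first,
    // so String.split cannot misread it as a regular expression.
    public static String[] splitLiteral(String input, String delimiter) {
        return input.split(Pattern.quote(delimiter));
    }

    public static void main(String[] args) {
        // "a.b.c".split(".") yields an empty array, since "." matches any
        // character and trailing empty tokens are dropped; quoting the
        // delimiter gives the intended three parts.
        System.out.println("a.b.c".split(".").length);          // 0
        System.out.println(splitLiteral("a.b.c", ".").length);  // 3
    }
}
```

With a single-space delimiter, as in the queries below, the two behave the same, which is why the report does not hit this.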


[jira] [Updated] (DRILL-5193) UDF returns NULL as expected only if the input is a literal

2017-01-12 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana updated DRILL-5193:

Description: 
I defined the following UDF
{code:title=SplitPartFunc.java|borderStyle=solid}
import javax.inject.Inject;

import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.IntHolder;
import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;

import io.netty.buffer.DrillBuf;

@FunctionTemplate(name = "split_string", scope = 
FunctionTemplate.FunctionScope.SIMPLE, nulls = 
FunctionTemplate.NullHandling.NULL_IF_NULL)
public class SplitPartFunc implements DrillSimpleFunc {

@Param
VarCharHolder input;

@Param(constant = true)
VarCharHolder delimiter;

@Param(constant = true)
IntHolder field;

@Output
NullableVarCharHolder out;

@Inject
DrillBuf buffer;

public void setup() {
}

public void eval() {

String stringValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start,
 input.end, input.buffer);

out.buffer = buffer; // If I return before this statement, an NPE is 
thrown :(
if(stringValue == null){
return;
}

int fieldValue = field.value;
if(fieldValue <= 0){
return; 
}

String delimiterValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start,
 delimiter.end, delimiter.buffer);
if(delimiterValue == null){
return;
}

String[] splittedInput = stringValue.split(delimiterValue);
if(splittedInput.length < fieldValue){
return;
}

// put the output value in the out buffer
String outputValue = splittedInput[fieldValue - 1];
out.start = 0;
out.end = outputValue.getBytes().length;
buffer.setBytes(0, outputValue.getBytes());
out.isSet = 1;
}

}
{code}

If I run the following query on the sample employee.json file (or on a 
Parquet file, after modifying the table and column names)

{code:title=SQL Query|borderStyle=solid}SELECT full_name, 
split_string(full_name, ' ', 4), split_string('Whatever', ' ', 4) FROM 
cp.employee.json LIMIT 1{code}

I get the following result
!https://i.stack.imgur.com/L8uQW.png!

Shouldn't I be getting NULLs for the last 2 columns?

  was:
I defined the following UDF
{code:title=SplitPartFunc.java|borderStyle=solid}
import javax.inject.Inject;

import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.IntHolder;
import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;

import io.netty.buffer.DrillBuf;

@FunctionTemplate(name = "split_string", scope = 
FunctionTemplate.FunctionScope.SIMPLE, nulls = 
FunctionTemplate.NullHandling.NULL_IF_NULL)
public class SplitPartFunc implements DrillSimpleFunc {

@Param
VarCharHolder input;

@Param(constant = true)
VarCharHolder delimiter;

@Param(constant = true)
IntHolder field;

@Output
NullableVarCharHolder out;

@Inject
DrillBuf buffer;

public void setup() {
}

public void eval() {

String stringValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start,
 input.end, input.buffer);

out.buffer = buffer; // If I return before this statement, an NPE is 
thrown :(
if(stringValue == null){
return;
}

int fieldValue = field.value;
if(fieldValue <= 0){
return; 
}

String delimiterValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start,
 delimiter.end, delimiter.buffer);
if(delimiterValue == null){
return;
}

String[] splittedInput = stringValue.split(delimiterValue);
if(splittedInput.length < fieldValue){
return;
}

// put the output value in the out buffer
String outputValue = splittedInput[fieldValue - 1];
out.start = 0;
out.end = outputValue.getBytes().length;
buffer.setBytes(0, outputValue.getBytes());
out.isSet = 1;
}

}
{code}

If I run the following query on the sample employee.json file (or on a 
Parquet file, after modifying the table and column names)

{code:title=SQL Query|borderStyle=solid}SELECT full_name, 
split_string(full_name, 

[jira] [Closed] (DRILL-5194) UDF returns NULL as expected only if the input is a literal

2017-01-12 Thread Muhammad Gelbana (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Gelbana closed DRILL-5194.
---
Resolution: Duplicate

> UDF returns NULL as expected only if the input is a literal
> ---
>
> Key: DRILL-5194
> URL: https://issues.apache.org/jira/browse/DRILL-5194
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.9.0
>Reporter: Muhammad Gelbana
>
> I defined the following UDF
> {code:title=SplitPartFunc.java|borderStyle=solid}
> import javax.inject.Inject;
> import org.apache.drill.exec.expr.DrillSimpleFunc;
> import org.apache.drill.exec.expr.annotations.FunctionTemplate;
> import org.apache.drill.exec.expr.annotations.Output;
> import org.apache.drill.exec.expr.annotations.Param;
> import org.apache.drill.exec.expr.holders.IntHolder;
> import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
> import org.apache.drill.exec.expr.holders.VarCharHolder;
> import io.netty.buffer.DrillBuf;
> @FunctionTemplate(name = "split_string", scope = 
> FunctionTemplate.FunctionScope.SIMPLE, nulls = 
> FunctionTemplate.NullHandling.NULL_IF_NULL)
> public class SplitPartFunc implements DrillSimpleFunc {
> @Param
> VarCharHolder input;
> @Param(constant = true)
> VarCharHolder delimiter;
> @Param(constant = true)
> IntHolder field;
> @Output
> NullableVarCharHolder out;
> @Inject
> DrillBuf buffer;
> public void setup() {
> }
> public void eval() {
> String stringValue = 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start,
>  input.end, input.buffer);
> out.buffer = buffer; // If I return before this statement, an NPE is 
> thrown :(
> if(stringValue == null){
> return;
> }
> int fieldValue = field.value;
> if(fieldValue <= 0){
> return; 
> }
> String delimiterValue = 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start,
>  delimiter.end, delimiter.buffer);
> if(delimiterValue == null){
> return;
> }
> String[] splittedInput = stringValue.split(delimiterValue);
> if(splittedInput.length < fieldValue){
> return;
> }
> // put the output value in the out buffer
> String outputValue = splittedInput[fieldValue - 1];
> out.start = 0;
> out.end = outputValue.getBytes().length;
> buffer.setBytes(0, outputValue.getBytes());
> out.isSet = 1;
> }
> }
> {code}
> If I run the following query on the sample employee.json file (or on a 
> Parquet file, after modifying the table and column names)
> {code:title=SQL Query|borderStyle=solid}SELECT full_name, 
> split_string(full_name, ' ', 4), split_string('Whatever', ' ', 4) FROM 
> cp.employee.json LIMIT 1{code}
> I get the following result
> !https://i.stack.imgur.com/L8uQW.png!
> Shouldn't I be getting the column value, and NULL for the other 2 columns?





[jira] [Created] (DRILL-5194) UDF returns NULL as expected only if the input is a literal

2017-01-12 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5194:
---

 Summary: UDF returns NULL as expected only if the input is a 
literal
 Key: DRILL-5194
 URL: https://issues.apache.org/jira/browse/DRILL-5194
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana


I defined the following UDF
{code:title=SplitPartFunc.java|borderStyle=solid}
import javax.inject.Inject;

import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.IntHolder;
import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;

import io.netty.buffer.DrillBuf;

@FunctionTemplate(name = "split_string", scope = 
FunctionTemplate.FunctionScope.SIMPLE, nulls = 
FunctionTemplate.NullHandling.NULL_IF_NULL)
public class SplitPartFunc implements DrillSimpleFunc {

@Param
VarCharHolder input;

@Param(constant = true)
VarCharHolder delimiter;

@Param(constant = true)
IntHolder field;

@Output
NullableVarCharHolder out;

@Inject
DrillBuf buffer;

public void setup() {
}

public void eval() {

String stringValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start,
 input.end, input.buffer);

out.buffer = buffer; // If I return before this statement, an NPE is 
thrown :(
if(stringValue == null){
return;
}

int fieldValue = field.value;
if(fieldValue <= 0){
return; 
}

String delimiterValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start,
 delimiter.end, delimiter.buffer);
if(delimiterValue == null){
return;
}

String[] splittedInput = stringValue.split(delimiterValue);
if(splittedInput.length < fieldValue){
return;
}

// put the output value in the out buffer
String outputValue = splittedInput[fieldValue - 1];
out.start = 0;
out.end = outputValue.getBytes().length;
buffer.setBytes(0, outputValue.getBytes());
out.isSet = 1;
}

}
{code}

If I run the following query on the sample employee.json file (or on a 
Parquet file, after modifying the table and column names)

{code:title=SQL Query|borderStyle=solid}SELECT full_name, 
split_string(full_name, ' ', 4), split_string('Whatever', ' ', 4) FROM 
cp.employee.json LIMIT 1{code}

I get the following result
!https://i.stack.imgur.com/L8uQW.png!

Shouldn't I be getting the column value, and NULL for the other 2 columns?





[jira] [Created] (DRILL-5193) UDF returns NULL as expected only if the input is a literal

2017-01-12 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created DRILL-5193:
---

 Summary: UDF returns NULL as expected only if the input is a 
literal
 Key: DRILL-5193
 URL: https://issues.apache.org/jira/browse/DRILL-5193
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana


I defined the following UDF
{code:title=SplitPartFunc.java|borderStyle=solid}
import javax.inject.Inject;

import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.IntHolder;
import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;

import io.netty.buffer.DrillBuf;

@FunctionTemplate(name = "split_string", scope = 
FunctionTemplate.FunctionScope.SIMPLE, nulls = 
FunctionTemplate.NullHandling.NULL_IF_NULL)
public class SplitPartFunc implements DrillSimpleFunc {

@Param
VarCharHolder input;

@Param(constant = true)
VarCharHolder delimiter;

@Param(constant = true)
IntHolder field;

@Output
NullableVarCharHolder out;

@Inject
DrillBuf buffer;

public void setup() {
}

public void eval() {

String stringValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start,
 input.end, input.buffer);

out.buffer = buffer; // If I return before this statement, an NPE is 
thrown :(
if(stringValue == null){
return;
}

int fieldValue = field.value;
if(fieldValue <= 0){
return; 
}

String delimiterValue = 
org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start,
 delimiter.end, delimiter.buffer);
if(delimiterValue == null){
return;
}

String[] splittedInput = stringValue.split(delimiterValue);
if(splittedInput.length < fieldValue){
return;
}

// put the output value in the out buffer
String outputValue = splittedInput[fieldValue - 1];
out.start = 0;
out.end = outputValue.getBytes().length;
buffer.setBytes(0, outputValue.getBytes());
out.isSet = 1;
}

}
{code}

If I run the following query on the sample employee.json file (or on a 
Parquet file, after modifying the table and column names)

{code:title=SQL Query|borderStyle=solid}SELECT full_name, 
split_string(full_name, ' ', 4), split_string('Whatever', ' ', 4) FROM 
cp.employee.json LIMIT 1{code}

I get the following result
!https://i.stack.imgur.com/L8uQW.png!

Shouldn't I be getting the column value, and NULL for the other 2 columns?



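For reference, the null-on-missing-field behavior the reporter expects can be reduced to plain Java. The class and method names below are hypothetical, for illustration only (a real Drill UDF must go through the holder/buffer API shown in the report); returning null here corresponds to the UDF returning early with out.isSet left at 0:

```java
import java.util.regex.Pattern;

public class SplitPartSketch {
    // Returns the 1-based `field`-th part of `input` split on `delimiter`,
    // or null when the index is non-positive or out of range -- the case
    // the UDF signals by not setting out.isSet.
    public static String splitPart(String input, String delimiter, int field) {
        if (input == null || delimiter == null || field <= 0) {
            return null;
        }
        String[] parts = input.split(Pattern.quote(delimiter));
        return parts.length < field ? null : parts[field - 1];
    }

    public static void main(String[] args) {
        System.out.println(splitPart("Sheri Nowmer", " ", 1)); // Sheri
        System.out.println(splitPart("Sheri Nowmer", " ", 4)); // null
    }
}
```

Whether Drill actually surfaces that null for non-literal column inputs is the open question of these reports; NullHandling.NULL_IF_NULL only short-circuits when an *input* is SQL NULL, so an out-of-range field index must be handled inside eval() as above.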