[jira] [Commented] (DRILL-6381) Add capability to do index based planning and execution

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615592#comment-16615592
 ] 

ASF GitHub Bot commented on DRILL-6381:
---

Ben-Zvi commented on a change in pull request #1466: DRILL-6381: Add support 
for index based planning and execution
URL: https://github.com/apache/drill/pull/1466#discussion_r217874840
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/common/JoinControl.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.common;
+
+/**
+ * For the int type control,
+ * the meaning of each bit, starting from the lowest:
+ * bit 0: intersect or not, 0 -- default (no intersect), 1 -- INTERSECT (DISTINCT as default)
+ * bit 1: intersect type, 0 -- default (DISTINCT), 1 -- INTERSECT_ALL
+ */
+public class JoinControl {
 
 Review comment:
   Looks like JoinControl is currently not actually used by the 
runtime operators; is it just preparation for future use?
   
   (A little off the subject)  JoinControl is used to convey information from the 
planner to the operator. I was recently planning to pass other information 
(e.g., semi-join, anti-semi, broadcast, between lateral and unnest ...) -- any 
such new bit of information requires so many changes and extended method 
signatures; there should be an easier way to do that, like passing the whole 
plan (but that's another project).
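
   A minimal sketch of how such a bit-packed control word is typically defined 
and decoded; the constant and method names below are illustrative guesses, not 
necessarily Drill's actual JoinControl API:

{code:java}
// Illustrative sketch of a bit-packed join control word; names are guesses,
// not Drill's actual JoinControl API.
public final class JoinControlSketch {
  public static final int DEFAULT = 0;                  // no bits set
  public static final int INTERSECT_DISTINCT = 1;       // bit 0 set
  public static final int INTERSECT_ALL = 1 | (1 << 1); // bits 0 and 1 set

  private final int joinControl;

  public JoinControlSketch(int joinControl) { this.joinControl = joinControl; }

  // bit 0: is this an intersect at all?
  public boolean isIntersect() { return (joinControl & 1) != 0; }
  // bit 1 refines the intersect type: DISTINCT (0) vs ALL (1)
  public boolean isIntersectAll() { return isIntersect() && (joinControl & (1 << 1)) != 0; }
  public boolean isIntersectDistinct() { return isIntersect() && !isIntersectAll(); }
}
{code}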
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add capability to do index based planning and execution
> ---
>
> Key: DRILL-6381
> URL: https://issues.apache.org/jira/browse/DRILL-6381
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Execution - Relational Operators, Query Planning & 
> Optimization
>Reporter: Aman Sinha
>Assignee: Aman Sinha
>Priority: Major
> Fix For: 1.15.0
>
>
> If the underlying data source supports indexes (primary and secondary 
> indexes), Drill should leverage those during planning and execution in order 
> to improve query performance.  
> On the planning side, the Drill planner should be enhanced to provide an 
> abstraction layer which expresses the index metadata and statistics.  Further, 
> cost-based index selection is needed to decide which index(es) are 
> suitable.  
> On the execution side, appropriate operator enhancements would be needed to 
> handle different categories of indexes, such as covering and non-covering 
> indexes, taking into consideration that the index data may not be co-located 
> with the primary table, i.e., a global index.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-1248) Add support for using aliases in group by

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615524#comment-16615524
 ] 

ASF GitHub Bot commented on DRILL-1248:
---

sohami closed pull request #1461: DRILL-1248: Allow positional / named aliases 
in group by / having clauses
URL: https://github.com/apache/drill/pull/1461
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillConformance.java
 
b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillConformance.java
index e6efeb92d1b..4a6aefc86b4 100644
--- 
a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillConformance.java
+++ 
b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillConformance.java
@@ -21,10 +21,7 @@
 import org.apache.calcite.sql.validate.SqlDelegatingConformance;
 
 /**
- * Drill's SQL conformance is SqlConformanceEnum.DEFAULT except for method 
isApplyAllowed().
- * Since Drill is going to allow OUTER APPLY and CROSS APPLY to allow each row 
from left child of Join
- * to join with output of right side (sub-query or table function that will be 
invoked for each row).
- * Refer to DRILL-5999 for more information.
+ * Drill's SQL conformance is SqlConformanceEnum.DEFAULT with a couple of 
deviations.
  */
 public class DrillConformance extends SqlDelegatingConformance {
 
@@ -36,8 +33,28 @@ public DrillConformance(SqlConformanceEnum flavor) {
 super(flavor);
   }
 
+  /**
+   * Drill allows OUTER APPLY and CROSS APPLY to allow each row from left 
child of Join
+   * to join with output of right side (sub-query or table function that will 
be invoked for each row).
+   * Refer to DRILL-5999 for more information.
+   */
   @Override
   public boolean isApplyAllowed() {
 return true;
   }
+
+  @Override
+  public boolean isGroupByOrdinal() {
+return true;
+  }
+
+  @Override
+  public boolean isGroupByAlias() {
+return true;
+  }
+
+  @Override
+  public boolean isHavingAlias() {
+return true;
+  }
 }
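
An illustrative check (separate from the diff above) of what the new overrides
report; DrillConformance and its single-argument constructor come from the file
above, while the harness itself is only a sketch:

{code:java}
import org.apache.calcite.sql.validate.SqlConformanceEnum;
import org.apache.drill.exec.planner.sql.DrillConformance;

// Sketch (not part of the PR): with the overrides above, Drill's conformance
// now reports that aliases and ordinals are legal in GROUP BY / HAVING, so a
// query such as
//   SELECT extract(year FROM dt) AS y FROM t GROUP BY y
// passes SQL validation instead of failing with an "ambiguous alias" error.
public class ConformanceSketch {
  public static void main(String[] args) {
    DrillConformance conformance = new DrillConformance(SqlConformanceEnum.DEFAULT);
    System.out.println(conformance.isGroupByAlias());   // true
    System.out.println(conformance.isGroupByOrdinal()); // true
    System.out.println(conformance.isHavingAlias());    // true
  }
}
{code}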
diff --git 
a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
 
b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
index d9564b7d606..f1f74a683b8 100644
--- 
a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
+++ 
b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
@@ -447,7 +447,7 @@ public void testAggGroupByWithNullDecimal() throws 
Exception {
   alterSession(PlannerSettings.ENABLE_DECIMAL_DATA_TYPE_KEY, true);
   testBuilder()
   .sqlQuery("select sum(cast(a as decimal(9,0))) as s,\n" +
-  "avg(cast(a as decimal(9,0))) as a,\n" +
+  "avg(cast(a as decimal(9,0))) as av,\n" +
   "var_samp(cast(a as decimal(9,0))) as varSamp,\n" +
   "var_pop(cast(a as decimal(9,0))) as varPop,\n" +
   "stddev_pop(cast(a as decimal(9,0))) as stddevPop,\n" +
@@ -455,7 +455,7 @@ public void testAggGroupByWithNullDecimal() throws 
Exception {
   "max(cast(a as decimal(9,0))) as mx," +
 "min(cast(a as decimal(9,0))) as mn from dfs.`%s` t group by a", 
fileName)
   .unOrdered()
-  .baselineColumns("s", "a", "varSamp", "varPop", "stddevPop", 
"stddevSamp", "mx", "mn")
+  .baselineColumns("s", "av", "varSamp", "varPop", "stddevPop", 
"stddevSamp", "mx", "mn")
   .baselineValues(BigDecimal.valueOf(1), new BigDecimal("1.00"), 
new BigDecimal("0.00"),
   new BigDecimal("0.00"), new BigDecimal("0.00"), new 
BigDecimal("0.00"),
   BigDecimal.valueOf(1), BigDecimal.valueOf(1))
diff --git 
a/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestConformance.java 
b/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestConformance.java
index 4af1a84b84d..f058bd7aaa5 100644
--- 
a/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestConformance.java
+++ 
b/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestConformance.java
@@ -17,30 +17,82 @@
  */
 package org.apache.drill.exec.sql;
 
-import org.apache.drill.PlanTestBase;
 import org.apache.drill.categories.SqlTest;
-import org.apache.drill.test.BaseTestQuery;
+import org.apache.drill.test.ClusterFixture;
+import org.apache.drill.test.ClusterFixtureBuilder;
+import org.apache.drill.test.ClusterTest;
+import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
+import static org.junit.Assert.assertTrue;
+
 @Category(SqlTest.class)
-public class TestConformance extends BaseTestQuery {
+public 

[jira] [Commented] (DRILL-6732) Queries are runnable on disable plugins

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615526#comment-16615526
 ] 

ASF GitHub Bot commented on DRILL-6732:
---

sohami commented on issue #1460: DRILL-6732: Queries are runnable on disable 
plugins
URL: https://github.com/apache/drill/pull/1460#issuecomment-421513206
 
 
   Merged in master


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Queries are runnable on disable plugins
> ---
>
> Key: DRILL-6732
> URL: https://issues.apache.org/jira/browse/DRILL-6732
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 1.13.0, 1.14.0
>Reporter: shuifeng lu
>Assignee: shuifeng lu
>Priority: Critical
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> Queries should not be allowed to run on disabled plugins (1.10 works fine; 1.13 
> and 1.14 have the same problem; 1.11 and 1.12 were not checked):
> 1) it is not allowed, and
> 2) the plugin rules cannot be applied to such queries, yet the queries are still runnable



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-1248) Add support for using aliases in group by

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615523#comment-16615523
 ] 

ASF GitHub Bot commented on DRILL-1248:
---

sohami commented on issue #1461: DRILL-1248: Allow positional / named aliases 
in group by / having clauses
URL: https://github.com/apache/drill/pull/1461#issuecomment-421513156
 
 
   Merged in master


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for using aliases in group by
> -
>
> Key: DRILL-1248
> URL: https://issues.apache.org/jira/browse/DRILL-1248
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: SQL Parser
>Reporter: Jim Scott
>Assignee: Arina Ielchiieva
>Priority: Major
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.15.0
>
>
> When I select using a function and alias the resultant function value, the 
> query won't parse, saying the alias is ambiguous. I know that this is a 
> debatable / questionable topic, but with this engine being so flexible, it 
> seems that in order to support all of the formatting, casting, etc. that 
> will likely occur, having the group by support an alias would be a big deal. 
> This, in my opinion, is nothing like an ordinal group by. 
> This works:
> select extract(year from to_date(crimes.datetime, 'MM/DD/ hh:mm:ss a')) 
> from BLAH group by extract(year from to_date(crimes.datetime, 'MM/DD/ 
> hh:mm:ss a'));
> This doesn't:
> select extract(year from to_date(crimes.datetime, 'MM/DD/ hh:mm:ss a')) 
> as mygroup from BLAH group by mygroup



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6732) Queries are runnable on disable plugins

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615525#comment-16615525
 ] 

ASF GitHub Bot commented on DRILL-6732:
---

sohami closed pull request #1460: DRILL-6732: Queries are runnable on disable 
plugins
URL: https://github.com/apache/drill/pull/1460
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/exec/java-exec/src/main/java/org/apache/calcite/jdbc/DynamicRootSchema.java 
b/exec/java-exec/src/main/java/org/apache/calcite/jdbc/DynamicRootSchema.java
index 5fecfddb28b..6bef3d5aaa1 100644
--- 
a/exec/java-exec/src/main/java/org/apache/calcite/jdbc/DynamicRootSchema.java
+++ 
b/exec/java-exec/src/main/java/org/apache/calcite/jdbc/DynamicRootSchema.java
@@ -78,7 +78,7 @@ public void loadSchemaFactory(String schemaName, boolean 
caseSensitive) {
 try {
   SchemaPlus schemaPlus = this.plus();
   StoragePlugin plugin = getSchemaFactories().getPlugin(schemaName);
-  if (plugin != null) {
+  if (plugin != null && plugin.getConfig().isEnabled()) {
 plugin.registerSchemas(schemaConfig, schemaPlus);
 return;
   }
diff --git 
a/exec/java-exec/src/main/java/org/apache/drill/exec/store/StoragePluginRegistryImpl.java
 
b/exec/java-exec/src/main/java/org/apache/drill/exec/store/StoragePluginRegistryImpl.java
index cf8ea500cf9..c5554f8fd29 100644
--- 
a/exec/java-exec/src/main/java/org/apache/drill/exec/store/StoragePluginRegistryImpl.java
+++ 
b/exec/java-exec/src/main/java/org/apache/drill/exec/store/StoragePluginRegistryImpl.java
@@ -373,6 +373,7 @@ private StoragePlugins 
loadBootstrapPlugins(LogicalPlanPersistence lpPersistence
 logger.debug("Storage plugin name {} is not defined. Skipping 
plugin initialization.", annotatedClass.getClassName());
 continue;
   }
+  storagePlugin.getConfig().setEnabled(true);
   plugins.put(name, storagePlugin);
   isPluginInitialized = true;
 
diff --git 
a/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestDisabledPlugin.java
 
b/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestDisabledPlugin.java
new file mode 100644
index 000..0f31e579c19
--- /dev/null
+++ 
b/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestDisabledPlugin.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.store;
+
+import org.apache.drill.categories.SqlTest;
+import org.apache.drill.common.exceptions.UserRemoteException;
+import org.apache.drill.exec.proto.UserBitShared;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+import org.apache.drill.exec.store.dfs.FileSystemConfig;
+import org.apache.drill.test.ClusterFixture;
+import org.apache.drill.test.ClusterTest;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import static org.apache.drill.exec.util.StoragePluginTestUtils.CP_PLUGIN_NAME;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+@Category(SqlTest.class)
+public class TestDisabledPlugin extends ClusterTest {
+  private static StoragePluginRegistry pluginRegistry;
+  private static FileSystemConfig pluginConfig;
+
+  @BeforeClass
+  public static void setup() throws Exception {
+startCluster(ClusterFixture.builder(dirTestWatcher));
+pluginRegistry = cluster.drillbit().getContext().getStorage();
+pluginConfig = (FileSystemConfig) 
pluginRegistry.getPlugin(CP_PLUGIN_NAME).getConfig();
+pluginConfig.setEnabled(false);
+pluginRegistry.createOrUpdate(CP_PLUGIN_NAME, pluginConfig, true);
+  }
+
+  @AfterClass
+  public static void restore() throws Exception {
+pluginConfig.setEnabled(true);
+pluginRegistry.createOrUpdate(CP_PLUGIN_NAME, pluginConfig, true);
+  }
+
+  @Test
+  public 

[jira] [Commented] (DRILL-6733) Unit tests from KafkaFilterPushdownTest are failing in some environments.

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615522#comment-16615522
 ] 

ASF GitHub Bot commented on DRILL-6733:
---

sohami commented on issue #1464: DRILL-6733: Unit tests from 
KafkaFilterPushdownTest are failing in so…
URL: https://github.com/apache/drill/pull/1464#issuecomment-421513123
 
 
   Merged in master


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit tests from KafkaFilterPushdownTest are failing in some environments.
> -
>
> Key: DRILL-6733
> URL: https://issues.apache.org/jira/browse/DRILL-6733
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> *Steps:*
>  # Build the Drill project without skipping the unit tests:
> {noformat}
> mvn clean install
> {noformat}
> Alternatively, if the project was already built, run tests for Kafka:
> {noformat}
> mvn test -pl contrib/storage-kafka
> {noformat}
> *Expected results:*
> All tests pass.
> *Actual results:*
>  Tests from KafkaFilterPushdownTest are failing:
> {noformat}
> --- 
> T E S T S 
> --- 
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 
> -1,283,514.348 sec - in org.apache.drill.exec.store.kafka.MessageIteratorTest 
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 
> -1,283,513.783 sec - in org.apache.drill.exec.store.kafka.KafkaQueriesTest 
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: -1,283,512.35 
> sec - in org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest 
> Running org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec - 
> in org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Running org.apache.drill.exec.store.kafka.KafkaQueriesTest 
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.2 sec - 
> in org.apache.drill.exec.store.kafka.KafkaQueriesTest 
> Running org.apache.drill.exec.store.kafka.MessageIteratorTest 
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.036 sec - 
> in org.apache.drill.exec.store.kafka.MessageIteratorTest 
> Running org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.611 sec - 
> in org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest 
> 13:09:29.511 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 213 
> B(139.3 KiB), h: 20.0 MiB(719.0 MiB), nh: 794.5 KiB(120.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest) 
> java.lang.AssertionError: expected:<26> but was:<0> 
>    at 
> org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
>  ~[test-classes/:na] 
>    at 
> org.apache.drill.exec.store.kafka.KafkaTestBase.runKafkaSQLVerifyCount(KafkaTestBase.java:69)
>  ~[test-classes/:na] 
>    at 
> org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest.testPushdownWithOr(KafkaFilterPushdownTest.java:259)
>  ~[test-classes/:na] 
>    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181] 
> 13:09:33.307 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 377 
> B(139.7 KiB), h: 18.5 MiB(743.2 MiB), nh: 699.5 KiB(120.9 MiB)): 
> testPushdownWithAndOrCombo2(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
>  
> java.lang.AssertionError: expected:<4> but was:<0> 
>    at 
> org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
>  ~[test-classes/:na] 
>    at 
> org.apache.drill.exec.store.kafka.KafkaTestBase.runKafkaSQLVerifyCount(KafkaTestBase.java:69)
>  ~[test-classes/:na] 
>    at 
> org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest.testPushdownWithAndOrCombo2(KafkaFilterPushdownTest.java:316)
>  ~[test-classes/:na] 
>    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181] 
> 13:09:44.424 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 0 
> B(139.7 KiB), h: 11.7 MiB(774.6 MiB), nh: 537.1 KiB(122.3 MiB)): 
> testPushdownOnTimestamp(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
>  
> 

[jira] [Commented] (DRILL-6733) Unit tests from KafkaFilterPushdownTest are failing in some environments.

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615521#comment-16615521
 ] 

ASF GitHub Bot commented on DRILL-6733:
---

sohami closed pull request #1464: DRILL-6733: Unit tests from 
KafkaFilterPushdownTest are failing in so…
URL: https://github.com/apache/drill/pull/1464
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/KafkaTestBase.java
 
b/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/KafkaTestBase.java
index 24e6f6d68ae..9f066062165 100644
--- 
a/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/KafkaTestBase.java
+++ 
b/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/KafkaTestBase.java
@@ -86,7 +86,9 @@ public void testHelper(String query, String 
expectedExprInPlan, int expectedReco
 
   @AfterClass
   public static void tearDownKafkaTestBase() throws Exception {
-TestKafkaSuit.tearDownCluster();
+if (TestKafkaSuit.isRunningSuite()) {
+  TestKafkaSuit.tearDownCluster();
+}
   }
 
 }
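
The guard matters because the same test classes can run either through the
suite (which owns the shared embedded Kafka cluster) or standalone. A common
way such a suite flag is maintained is sketched below; this is illustrative,
not the actual TestKafkaSuit code:

{code:java}
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Illustrative only: the suite flips a flag so member tests can tell whether
// the suite (and its shared cluster) is active before tearing anything down.
@RunWith(Suite.class)
@Suite.SuiteClasses({ /* KafkaQueriesTest.class, KafkaFilterPushdownTest.class, ... */ })
public class KafkaSuiteSketch {
  private static volatile boolean runningSuite = false;

  public static boolean isRunningSuite() { return runningSuite; }

  @BeforeClass
  public static void initCluster() {
    runningSuite = true; // start the embedded Kafka cluster here
  }

  @AfterClass
  public static void tearDownCluster() {
    // stop the embedded Kafka cluster here
    runningSuite = false;
  }
}
{code}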


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit tests from KafkaFilterPushdownTest are failing in some environments.
> -
>
> Key: DRILL-6733
> URL: https://issues.apache.org/jira/browse/DRILL-6733
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> *Steps:*
>  # Build the Drill project without skipping the unit tests:
> {noformat}
> mvn clean install
> {noformat}
> Alternatively, if the project was already built, run tests for Kafka:
> {noformat}
> mvn test -pl contrib/storage-kafka
> {noformat}
> *Expected results:*
> All tests pass.
> *Actual results:*
>  Tests from KafkaFilterPushdownTest are failing:
> {noformat}
> --- 
> T E S T S 
> --- 
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 
> -1,283,514.348 sec - in org.apache.drill.exec.store.kafka.MessageIteratorTest 
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 
> -1,283,513.783 sec - in org.apache.drill.exec.store.kafka.KafkaQueriesTest 
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: -1,283,512.35 
> sec - in org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest 
> Running org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec - 
> in org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Running org.apache.drill.exec.store.kafka.KafkaQueriesTest 
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.2 sec - 
> in org.apache.drill.exec.store.kafka.KafkaQueriesTest 
> Running org.apache.drill.exec.store.kafka.MessageIteratorTest 
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.036 sec - 
> in org.apache.drill.exec.store.kafka.MessageIteratorTest 
> Running org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.611 sec - 
> in org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest 
> 13:09:29.511 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 213 
> B(139.3 KiB), h: 20.0 MiB(719.0 MiB), nh: 794.5 KiB(120.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest) 
> java.lang.AssertionError: expected:<26> but was:<0> 
>    at 
> org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
>  ~[test-classes/:na] 
>    at 
> org.apache.drill.exec.store.kafka.KafkaTestBase.runKafkaSQLVerifyCount(KafkaTestBase.java:69)
>  ~[test-classes/:na] 
>    at 
> org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest.testPushdownWithOr(KafkaFilterPushdownTest.java:259)
>  ~[test-classes/:na] 
>    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181] 
> 13:09:33.307 [main] ERROR org.apache.drill.TestReporter - Test 

[jira] [Updated] (DRILL-5940) Avro with schema registry support for Kafka

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5940:
-
Component/s: (was: Storage - Other)
 Storage - Kafka

> Avro with schema registry support for Kafka
> ---
>
> Key: DRILL-5940
> URL: https://issues.apache.org/jira/browse/DRILL-5940
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Avro, Storage - Kafka
>Reporter: B Anil Kumar
>Assignee: B Anil Kumar
>Priority: Major
>
> Support Avro messages with Schema Registry for the Kafka storage plugin



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-5977) predicate pushdown support kafkaMsgOffset

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5977:
-
Component/s: Storage - Kafka

> predicate pushdown support kafkaMsgOffset
> -
>
> Key: DRILL-5977
> URL: https://issues.apache.org/jira/browse/DRILL-5977
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Kafka
>Reporter: B Anil Kumar
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: doc-complete, ready-to-commit
> Fix For: 1.14.0
>
>
> As part of the Kafka storage plugin review, below is a suggestion from Paul.
> {noformat}
> Does it make sense to provide a way to select a range of messages: a starting 
> point or a count? Perhaps I want to run my query every five minutes, scanning 
> only those messages since the previous scan. Or, I want to limit my take to, 
> say, the next 1000 messages. Could we use a pseudo-column such as 
> "kafkaMsgOffset" for that purpose? Maybe
> SELECT * FROM  WHERE kafkaMsgOffset > 12345
> {noformat}
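
A sketch of the usage pattern from the suggestion. The JDBC plumbing and the
kafka.`drill-pushdown-topic` table name are assumptions; kafkaMsgOffset is the
proposed pseudo-column:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the suggested pattern: remember the highest offset seen and scan
// only newer messages on the next run. Connection URL and table name are
// assumptions for illustration.
public class IncrementalKafkaScan {
  public static void main(String[] args) throws Exception {
    long lastOffset = 12345L; // persisted from the previous run
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
         Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery(
             "SELECT * FROM kafka.`drill-pushdown-topic` WHERE kafkaMsgOffset > " + lastOffset)) {
      while (rs.next()) {
        lastOffset = Math.max(lastOffset, rs.getLong("kafkaMsgOffset"));
        // process the new message ...
      }
    }
  }
}
{code}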



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-4779) Kafka storage plugin support

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-4779:
-
Component/s: (was: Storage - Other)
 Storage - Kafka

> Kafka storage plugin support
> 
>
> Key: DRILL-4779
> URL: https://issues.apache.org/jira/browse/DRILL-4779
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Kafka
>Affects Versions: 1.11.0
>Reporter: B Anil Kumar
>Assignee: B Anil Kumar
>Priority: Major
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> Implementing a Kafka storage plugin will enable strong SQL support for Kafka.
> The initial implementation can target support for JSON and Avro message types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6625:
-
Component/s: Storage - Kafka

> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Kafka
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0):
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6625:
-
Component/s: (was: Storage - Other)

> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0):
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6625:
-
Reviewer: Timothy Farkas

> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0):
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-3846) Metadata Caching : A count(*) query took more time with the cache in place

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-3846:
-
Fix Version/s: (was: Future)

> Metadata Caching : A count(*) query took more time with the cache in place
> --
>
> Key: DRILL-3846
> URL: https://issues.apache.org/jira/browse/DRILL-3846
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Reporter: Rahul Challapalli
>Assignee: Venkata Jyothsna Donapati
>Priority: Critical
> Fix For: 1.15.0
>
>
> git.commit.id.abbrev=3c89b30
> I have a folder with 10k complex files. The generated cache file is around 
> 486 MB. The numbers below indicate that we regressed in terms of performance 
> once the metadata cache was generated:
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from 
> `complex_sparse_5files`;
> +--+
> |  EXPR$0  |
> +--+
> | 100  |
> +--+
> 1 row selected (30.835 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> refresh table metadata 
> `complex_sparse_5files`;
> +---+-+
> |  ok   |   summary   
> |
> +---+-+
> | true  | Successfully updated metadata for table complex_sparse_5files.  
> |
> +---+-+
> 1 row selected (10.69 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from 
> `complex_sparse_5files`;
> +--+
> |  EXPR$0  |
> +--+
> | 100  |
> +--+
> 1 row selected (47.614 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-6567) Jenkins Regression: TPCDS query 93 fails with INTERNAL_ERROR ERROR: java.lang.reflect.UndeclaredThrowableException.

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-6567:


Assignee: Robert Hou  (was: Khurram Faraaz)

> Jenkins Regression: TPCDS query 93 fails with INTERNAL_ERROR ERROR: 
> java.lang.reflect.UndeclaredThrowableException.
> ---
>
> Key: DRILL-6567
> URL: https://issues.apache.org/jira/browse/DRILL-6567
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.14.0
>Reporter: Robert Hou
>Assignee: Robert Hou
>Priority: Critical
> Fix For: 1.15.0
>
>
> This is TPCDS Query 93.
> Query: 
> /root/drillAutomation/framework-master/framework/resources/Advanced/tpcds/tpcds_sf100/hive/parquet/query93.sql
> SELECT ss_customer_sk,
> Sum(act_sales) sumsales
> FROM   (SELECT ss_item_sk,
> ss_ticket_number,
> ss_customer_sk,
> CASE
> WHEN sr_return_quantity IS NOT NULL THEN
> ( ss_quantity - sr_return_quantity ) * ss_sales_price
> ELSE ( ss_quantity * ss_sales_price )
> END act_sales
> FROM   store_sales
> LEFT OUTER JOIN store_returns
> ON ( sr_item_sk = ss_item_sk
> AND sr_ticket_number = ss_ticket_number ),
> reason
> WHERE  sr_reason_sk = r_reason_sk
> AND r_reason_desc = 'reason 38') t
> GROUP  BY ss_customer_sk
> ORDER  BY sumsales,
> ss_customer_sk
> LIMIT 100;
> Here is the stack trace:
> 2018-06-29 07:00:32 INFO  DrillTestLogger:348 - 
> Exception:
> java.sql.SQLException: INTERNAL_ERROR ERROR: 
> java.lang.reflect.UndeclaredThrowableException
> Setup failed for null
> Fragment 4:56
> [Error Id: 3c72c14d-9362-4a9b-affb-5cf937bed89e on atsqa6c82.qa.lab:31010]
>   (org.apache.drill.common.exceptions.ExecutionSetupException) 
> java.lang.reflect.UndeclaredThrowableException
> 
> org.apache.drill.common.exceptions.ExecutionSetupException.fromThrowable():30
> org.apache.drill.exec.store.hive.readers.HiveAbstractReader.setup():327
> org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():245
> org.apache.drill.exec.physical.impl.ScanBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> 
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.sniffNonEmptyBatch():276
> 
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.prefetchFirstBatchFromBothSides():238
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.buildSchema():218
> org.apache.drill.exec.record.AbstractRecordBatch.next():152
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():147
> org.apache.drill.exec.record.AbstractRecordBatch.next():172
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> 
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.sniffNonEmptyBatch():276
> 
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.prefetchFirstBatchFromBothSides():238
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.buildSchema():218
> org.apache.drill.exec.record.AbstractRecordBatch.next():152
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():147
> org.apache.drill.exec.record.AbstractRecordBatch.next():172
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> 
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.buildSchema():118
> org.apache.drill.exec.record.AbstractRecordBatch.next():152
> org.apache.drill.exec.physical.impl.BaseRootExec.next():103
> 
> org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderRootExec.innerNext():152
> org.apache.drill.exec.physical.impl.BaseRootExec.next():93
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():294
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():281
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():422
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():281
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1149
> java.util.concurrent.ThreadPoolExecutor$Worker.run():624
> java.lang.Thread.run():748
>   Caused By 

[jira] [Updated] (DRILL-2035) Add ability to cancel multiple queries

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-2035:
-
Fix Version/s: (was: 1.15.0)
   Future

> Add ability to cancel multiple queries
> --
>
> Key: DRILL-2035
> URL: https://issues.apache.org/jira/browse/DRILL-2035
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - HTTP, Web Server
>Reporter: Neeraja
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: Future
>
>
> Currently the Drill UI allows canceling one query at a time.
> This can be cumbersome to manage in scenarios with BI tools, which 
> generate multiple queries for a single action in the UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-3764) Support the ability to identify and/or skip records when a function evaluation fails

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-3764:
-
Fix Version/s: (was: 1.15.0)

> Support the ability to identify and/or skip records when a function 
> evaluation fails
> 
>
> Key: DRILL-3764
> URL: https://issues.apache.org/jira/browse/DRILL-3764
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.1.0
>Reporter: Aman Sinha
>Assignee: Pritesh Maker
>Priority: Major
> Fix For: Future
>
>
> Drill can point out the filename and location of corrupted records in a file 
> but it does not have a good mechanism to deal with the following scenario: 
> Consider a text file with 2 records:
> {code}
> $ cat t4.csv
> 10,2001
> 11,http://www.cnn.com
> {code}
> {code}
> 0: jdbc:drill:zk=local> alter session set `exec.errors.verbose` = true;
> 0: jdbc:drill:zk=local> select cast(columns[0] as init), cast(columns[1] as 
> bigint) from dfs.`t4.csv`;
> Error: SYSTEM ERROR: NumberFormatException: http://www.cnn.com
> Fragment 0:0
> [Error Id: 72aad22c-a345-4100-9a57-dcd8436105f7 on 10.250.56.140:31010]
>   (java.lang.NumberFormatException) http://www.cnn.com
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeL():91
> 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.varCharToLong():62
> org.apache.drill.exec.test.generated.ProjectorGen1.doEval():62
> org.apache.drill.exec.test.generated.ProjectorGen1.projectRecords():62
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork():172
> {code}
> The problem is that the user does not have the context of where the error 
> occurred -- either the file name or the record number.  This becomes a pain point 
> especially when CTAS is being used to do data conversion from (say) text 
> format to Parquet format.  The CTAS may be accessing thousands of files, and one 
> such cast (or other function) failure aborts the query. 
> It would substantially improve the user experience if we provided: 
> 1) the filename and record number where  this failure occurred
> 2) the ability to skip such records depending on a session option
> 3) the ability to write such records to a staging table for future ingestion
> Please see discussion on dev list: 
> http://mail-archives.apache.org/mod_mbox/drill-dev/201509.mbox/%3cCAFyDVvLuPLgTNZ56S6=J=9Vb=aBs=pdw7nrhkkdupbdxgfa...@mail.gmail.com%3e
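
A hypothetical sketch of what item 2 could look like conceptually: a per-record
guard, controlled by a session option, that skips records whose cast fails
while keeping the file/record context from item 1. This is plain Java for
illustration; none of these names are Drill APIs:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class SkipBadRecordsSketch {
  public static void main(String[] args) {
    String fileName = "t4.csv";
    String[] records = { "10,2001", "11,http://www.cnn.com" };
    boolean skipInvalidRecords = true; // would come from a session option
    List<Long> values = new ArrayList<>();
    for (int i = 0; i < records.length; i++) {
      String col1 = records[i].split(",")[1];
      try {
        values.add(Long.parseLong(col1)); // the cast that may fail
      } catch (NumberFormatException e) {
        if (!skipInvalidRecords) {
          throw new RuntimeException(
              "Cast failed in " + fileName + " at record " + (i + 1), e);
        }
        // Item 3 could additionally write the bad record to a staging table.
        System.err.println("Skipping " + fileName + ":" + (i + 1) + " -> " + col1);
      }
    }
    System.out.println(values); // prints [2001]
  }
}
{code}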



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-4456) Hive translate function is not working

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-4456:


Assignee: Volodymyr Vysotskyi

> Hive translate function is not working
> --
>
> Key: DRILL-4456
> URL: https://issues.apache.org/jira/browse/DRILL-4456
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Hive
>Affects Versions: 1.5.0
>Reporter: Arina Ielchiieva
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>
> In Hive "select translate(name, 'A', 'B') from users" works fine.
> But in Drill "select translate(name, 'A', 'B') from hive.`users`" returns the 
> following error:
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "," at line 1, column 22. Was expecting one of: "USING" ... "NOT" 
> ... "IN" ... "BETWEEN" ... "LIKE" ... "SIMILAR" ... "=" ... ">" ... "<" ... 
> "<=" ... ">=" ... "<>" ... "+" ... "-" ... "*" ... "/" ... "||" ... "AND" ... 
> "OR" ... "IS" ... "MEMBER" ... "SUBMULTISET" ... "MULTISET" ... "[" ... "." 
> ... "(" ... while parsing SQL query: select translate(name, 'A', 'B') from 
> hive.users ^ [Error Id: ba21956b-3285-4544-b3b2-fab68b95be1f on 
> localhost:31010]
> Root cause:
> Calcite follows the standard SQL reference.
> SQL reference, ISO/IEC 9075-2:2011(E), section 6.30:
> <character transliteration> ::=
>   TRANSLATE ( <character value expression> USING <transliteration name> )
> To fix:
> 1. add support for the translate(expression, from_string, to_string) alternative 
> syntax
> 2. add unit test in org.apache.drill.exec.fn.hive.TestInbuiltHiveUDFs
> Changes can be made directly in Calcite, followed by an upgrade to the 
> appropriate Calcite version. 
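
A sketch contrasting the two syntaxes with Calcite's plain SQL parser. Per the
description, a standard Calcite parser accepts the ISO form, while the
three-argument Hive form is the one the fix needs to add (so the second call
would throw until it is fixed); exact behavior depends on the Calcite version:

{code:java}
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;

public class TranslateParseSketch {
  public static void main(String[] args) throws SqlParseException {
    // ISO 9075 form: TRANSLATE(<expr> USING <transliteration name>)
    SqlParser.create("SELECT TRANSLATE(name USING utf8) FROM users").parseQuery();
    // Hive-style form: TRANSLATE(<expr>, <from_string>, <to_string>)
    SqlParser.create("SELECT TRANSLATE(name, 'A', 'B') FROM users").parseQuery();
  }
}
{code}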



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-4456) Hive translate function is not working

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-4456:
-
Fix Version/s: (was: 1.15.0)

> Hive translate function is not working
> --
>
> Key: DRILL-4456
> URL: https://issues.apache.org/jira/browse/DRILL-4456
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Hive
>Affects Versions: 1.5.0
>Reporter: Arina Ielchiieva
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>
> In Hive "select translate(name, 'A', 'B') from users" works fine.
> But in Drill "select translate(name, 'A', 'B') from hive.`users`" returns the 
> following error:
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "," at line 1, column 22. Was expecting one of: "USING" ... "NOT" 
> ... "IN" ... "BETWEEN" ... "LIKE" ... "SIMILAR" ... "=" ... ">" ... "<" ... 
> "<=" ... ">=" ... "<>" ... "+" ... "-" ... "*" ... "/" ... "||" ... "AND" ... 
> "OR" ... "IS" ... "MEMBER" ... "SUBMULTISET" ... "MULTISET" ... "[" ... "." 
> ... "(" ... while parsing SQL query: select translate(name, 'A', 'B') from 
> hive.users ^ [Error Id: ba21956b-3285-4544-b3b2-fab68b95be1f on 
> localhost:31010]
> Root cause:
> Calcite follows the standard SQL reference.
> SQL reference, ISO/IEC 9075-2:2011(E), section 6.30:
> <character transliteration> ::=
>   TRANSLATE ( <character value expression> USING <transliteration name> )
> To fix:
> 1. add support for the translate(expression, from_string, to_string) alternative 
> syntax
> 2. add unit test in org.apache.drill.exec.fn.hive.TestInbuiltHiveUDFs
> Changes can be made directly in Calcite, followed by an upgrade to the 
> appropriate Calcite version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-4309) Make this option store.hive.optimize_scan_with_native_readers=true default

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-4309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-4309:
-
Fix Version/s: (was: 1.15.0)

> Make this option store.hive.optimize_scan_with_native_readers=true default
> --
>
> Key: DRILL-4309
> URL: https://issues.apache.org/jira/browse/DRILL-4309
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0
>Reporter: Sean Hsuan-Yi Chu
>Priority: Major
>  Labels: doc-impacting
> Fix For: Future
>
>
> This new feature has been around and used/tested in many scenarios. 
> We should enable this feature by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-5897) Support Query Cancellation when WebConnection is closed on client side both for authenticated and unauthenticated user's

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5897:
-
Issue Type: Improvement  (was: Task)

> Support Query Cancellation when WebConnection is closed on client side both 
> for authenticated and unauthenticated user's
> 
>
> Key: DRILL-5897
> URL: https://issues.apache.org/jira/browse/DRILL-5897
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Reporter: Sorabh Hamirwasia
>Priority: Major
>
> Today there is no session created (using cookies) for an unauthenticated WebUser, 
> whereas for authenticated users a session is created. Also, when a user submits 
> a query, we wait until the entire result is gathered on the WebServer side and 
> then send the entire webpage in the response (probably that's how ftl works).
> For authenticated users, we only cancel the in-flight queries when the 
> session is invalidated (either by timeout or logout). However, in the absence of a 
> session we do nothing for unauthenticated users, so once a query is submitted 
> it will run until it fails or succeeds. The only way to explicitly 
> cancel a query is from the profile page, which will not work when profiles are 
> disabled.
> We should research whether it's possible to get the close event of the underlying 
> WebConnection (not session) and cancel the queries running as part of 
> that connection when it closes. Also, since today we wait for the entire query 
> to finish on the backend server before sending the response back, by the time a 
> bad connection is detected it doesn't make sense to cancel 
> (there is a 1:1 mapping between request and connection) since the query is already 
> completed. Instead, we could send the header followed by batches of data 
> (pagination); then we could detect early enough whether the connection is still 
> valid and cancel the query in response. More research is needed in this 
> area, along with knowledge of Jetty on how this can be achieved, to make our 
> WebServer more performant.
>  It would also be good to explore whether we can provide sessions for 
> unauthenticated user connections too, based on a timeout, and then handle 
> query cancellation as part of session timeout. This will also impact the way 
> we support the impersonation-without-authentication scenario, where we ask the 
> user to input the query user name for each request. If we support sessions, the 
> username should be handled at the session level rather than per request, which 
> can be achieved by logging the user in without a password (similar to the 
> authentication flow).
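
A rough sketch of the direction described above, using the standard
Servlet 3+ async API (which Jetty implements): detect the connection going
away and cancel the in-flight query. runQueryAsync() and cancelQuery() are
hypothetical placeholders, not Drill's actual WebServer code:

{code:java}
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class QueryCancelSketch extends HttpServlet {
  @Override
  protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
    AsyncContext async = req.startAsync();
    final String queryId = runQueryAsync(req.getParameter("query"));
    async.addListener(new AsyncListener() {
      // A broken connection surfaces here instead of after the query finishes.
      @Override public void onError(AsyncEvent e)      { cancelQuery(queryId); }
      @Override public void onTimeout(AsyncEvent e)    { cancelQuery(queryId); }
      @Override public void onComplete(AsyncEvent e)   { }
      @Override public void onStartAsync(AsyncEvent e) { }
    });
    // Streaming results in batches (pagination) from here would detect a bad
    // connection early enough for cancellation to still be useful.
  }

  private String runQueryAsync(String sql) { return "query-id"; } // placeholder
  private void cancelQuery(String queryId) { /* placeholder */ }
}
{code}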



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-5897) Support Query Cancellation when WebConnection is closed on client side both for authenticated and unauthenticated user's

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5897:
-
Fix Version/s: (was: 1.15.0)

> Support Query Cancellation when WebConnection is closed on client side both 
> for authenticated and unauthenticated user's
> 
>
> Key: DRILL-5897
> URL: https://issues.apache.org/jira/browse/DRILL-5897
> Project: Apache Drill
>  Issue Type: Task
>  Components: Web Server
>Reporter: Sorabh Hamirwasia
>Priority: Major
>
> Today there is no session created (using cookies) for an unauthenticated WebUser, 
> whereas for authenticated users a session is created. Also, when a user submits 
> a query, we wait until the entire result is gathered on the WebServer side and 
> then send the entire webpage in the response (probably that's how ftl works).
> For authenticated users, we only cancel the in-flight queries when the 
> session is invalidated (either by timeout or logout). However, in the absence of a 
> session we do nothing for unauthenticated users, so once a query is submitted 
> it will run until it fails or succeeds. The only way to explicitly 
> cancel a query is from the profile page, which will not work when profiles are 
> disabled.
> We should research whether it's possible to get the close event of the underlying 
> WebConnection (not session) and cancel the queries running as part of 
> that connection when it closes. Also, since today we wait for the entire query 
> to finish on the backend server before sending the response back, by the time a 
> bad connection is detected it doesn't make sense to cancel 
> (there is a 1:1 mapping between request and connection) since the query is already 
> completed. Instead, we could send the header followed by batches of data 
> (pagination); then we could detect early enough whether the connection is still 
> valid and cancel the query in response. More research is needed in this 
> area, along with knowledge of Jetty on how this can be achieved, to make our 
> WebServer more performant.
>  It would also be good to explore whether we can provide sessions for 
> unauthenticated user connections too, based on a timeout, and then handle 
> query cancellation as part of session timeout. This will also impact the way 
> we support the impersonation-without-authentication scenario, where we ask the 
> user to input the query user name for each request. If we support sessions, the 
> username should be handled at the session level rather than per request, which 
> can be achieved by logging the user in without a password (similar to the 
> authentication flow).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (DRILL-6527) Update option name for Drill Parquet native reader

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker resolved DRILL-6527.
--
Resolution: Fixed

> Update option name for Drill Parquet native reader
> --
>
> Key: DRILL-6527
> URL: https://issues.apache.org/jira/browse/DRILL-6527
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Hive, Storage - Parquet
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Bridget Bevens
>Priority: Minor
> Fix For: 1.15.0
>
>
> The old option name to enable the Drill Parquet reader is 
> "store.hive.optimize_scan_with_native_readers".
> Starting from DRILL-6454 a new native reader is introduced; therefore a more 
> precise option name is added for the Parquet native reader as well.
> The new option name for the Parquet reader is 
> "store.hive.parquet.optimize_scan_with_native_reader".
> The old one is deprecated and should be removed starting from the Drill 
> 1.15.0 release.
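
For illustration, the renamed option can be toggled per session like any other
Drill option (a usage sketch; the statement uses only the option name quoted
above):

{code:sql}
ALTER SESSION SET `store.hive.parquet.optimize_scan_with_native_reader` = true;
{code}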



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-6527) Update option name for Drill Parquet native reader

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-6527:


Assignee: Bridget Bevens

> Update option name for Drill Parquet native reader
> --
>
> Key: DRILL-6527
> URL: https://issues.apache.org/jira/browse/DRILL-6527
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Hive, Storage - Parquet
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Bridget Bevens
>Priority: Minor
> Fix For: 1.15.0
>
>
> The old option name to enable the Drill Parquet reader is 
> "store.hive.optimize_scan_with_native_readers".
> Starting from DRILL-6454 a new native reader is introduced; therefore a more 
> precise option name is added for the Parquet native reader as well.
> The new option name for the Parquet reader is 
> "store.hive.parquet.optimize_scan_with_native_reader".
> The old one is deprecated and should be removed starting from the Drill 
> 1.15.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-6630) Extra spaces are ignored while publishing results in Drill Web UI

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-6630:


Assignee: Anton Gozhiy

> Extra spaces are ignored while publishing results in Drill Web UI
> -
>
> Key: DRILL-6630
> URL: https://issues.apache.org/jira/browse/DRILL-6630
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Minor
> Fix For: 1.15.0
>
>
> *Prerequisites:*
> Use Drill Web UI to submit queries
> *Query:*
> {code:sql}
> select '   sdssada' from (values(1))
> {code}
> *Expected Result:*
> {noformat}
> "  sdssada"
> {noformat}
> *Actual Result:*
> {noformat}
> "sds sada"
> {noformat}
> *Note:* Inspecting the element using Chrome Developer Tools, you can see that 
> it contains the real string, so something should be done with the HTML 
> formatting.
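
HTML collapses runs of whitespace by default, which is consistent with the
symptom above; a plausible direction (an assumption, not the committed fix)
would be to HTML-escape the cell value and render it with CSS
{{white-space: pre-wrap}} (or inside a {{<pre>}} element) so consecutive spaces
survive.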



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6184) Add batch sizing information to query profile

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6184:
-
Fix Version/s: (was: 1.15.0)

> Add batch sizing information to query profile
> -
>
> Key: DRILL-6184
> URL: https://issues.apache.org/jira/browse/DRILL-6184
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Flow
>Affects Versions: 1.12.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>Priority: Major
>
> For debugging, we need batch sizing information for each operator in the 
> query profile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6133) RecordBatchSizer throws IndexOutOfBounds Exception for union vector

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6133:
-
Fix Version/s: (was: 1.15.0)

> RecordBatchSizer throws IndexOutOfBounds Exception for union vector
> ---
>
> Key: DRILL-6133
> URL: https://issues.apache.org/jira/browse/DRILL-6133
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.12.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>Priority: Minor
>
> RecordBatchSizer throws IndexOutOfBoundsException when trying to get payload 
> byte count of union vector. 
> [Error Id: 430026a7-a963-40f1-bae2-1850649e8434 on 172.30.8.158:31013]
>  at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:300)
>  [classes/:na]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [classes/:na]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:266)
>  [classes/:na]
>  at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [classes/:na]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>  at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.lang.IndexOutOfBoundsException: DrillBuf[2], udle: [1 0..0], 
> index: 4, length: 4 (expected: range(0, 0))
> DrillBuf[2], udle: [1 0..0]
>  at 
> org.apache.drill.exec.memory.BoundsChecking.checkIndex(BoundsChecking.java:80)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.memory.BoundsChecking.lengthCheck(BoundsChecking.java:86)
>  ~[classes/:na]
>  at io.netty.buffer.DrillBuf.chk(DrillBuf.java:114) ~[classes/:4.0.48.Final]
>  at io.netty.buffer.DrillBuf.getInt(DrillBuf.java:484) 
> ~[classes/:4.0.48.Final]
>  at 
> org.apache.drill.exec.vector.UInt4Vector$Accessor.get(UInt4Vector.java:432) 
> ~[classes/:na]
>  at 
> org.apache.drill.exec.vector.VarCharVector.getPayloadByteCount(VarCharVector.java:308)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.vector.NullableVarCharVector.getPayloadByteCount(NullableVarCharVector.java:256)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.vector.complex.AbstractMapVector.getPayloadByteCount(AbstractMapVector.java:303)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.vector.complex.UnionVector.getPayloadByteCount(UnionVector.java:574)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.physical.impl.spill.RecordBatchSizer$ColumnSize.<init>(RecordBatchSizer.java:147)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.physical.impl.spill.RecordBatchSizer.measureColumn(RecordBatchSizer.java:403)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.physical.impl.spill.RecordBatchSizer.<init>(RecordBatchSizer.java:350)
>  ~[classes/:na]
>  at 
> org.apache.drill.exec.physical.impl.spill.RecordBatchSizer.<init>(RecordBatchSizer.java:320)
>  ~[classes/:na]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6166) RecordBatchSizer does not handle hyper vectors

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6166:
-
Fix Version/s: (was: 1.15.0)

> RecordBatchSizer does not handle hyper vectors
> --
>
> Key: DRILL-6166
> URL: https://issues.apache.org/jira/browse/DRILL-6166
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.12.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
>Priority: Critical
>
> RecordBatchSizer throws an exception when incoming batch has hyper vector.
> (java.lang.UnsupportedOperationException) null
>  org.apache.drill.exec.record.HyperVectorWrapper.getValueVector():61
>  org.apache.drill.exec.record.RecordBatchSizer.<init>():346
>  org.apache.drill.exec.record.RecordBatchSizer.<init>():311
>  
> org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch$StreamingAggregateMemoryManager.update():198
>  
> org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():328
>  org.apache.drill.exec.record.AbstractRecordBatch.next():164
>  
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():228
>  org.apache.drill.exec.physical.impl.BaseRootExec.next():105
>  
> org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderRootExec.innerNext():155
>  org.apache.drill.exec.physical.impl.BaseRootExec.next():95
>  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():233
>  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():226
>  java.security.AccessController.doPrivileged():-2
>  javax.security.auth.Subject.doAs():422
>  org.apache.hadoop.security.UserGroupInformation.doAs():1657
>  org.apache.drill.exec.work.fragment.FragmentExecutor.run():226
>  org.apache.drill.common.SelfCleaningRunnable.run():38
>  java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>  java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>  java.lang.Thread.run():745



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6078) Query with INTERVAL in predicate does not return any rows

2018-09-14 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6078:
-
Fix Version/s: (was: 1.15.0)

> Query with INTERVAL in predicate does not return any rows
> -
>
> Key: DRILL-6078
> URL: https://issues.apache.org/jira/browse/DRILL-6078
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.12.0
>Reporter: Robert Hou
>Assignee: Chunhui Shi
>Priority: Major
>
> This query does not return any rows when accessing MapR DB tables.
> SELECT
>   C.C_CUSTKEY,
>   C.C_NAME,
>   SUM(L.L_EXTENDEDPRICE * (1 - L.L_DISCOUNT)) AS REVENUE,
>   C.C_ACCTBAL,
>   N.N_NAME,
>   C.C_ADDRESS,
>   C.C_PHONE,
>   C.C_COMMENT
> FROM
>   customer C,
>   orders O,
>   lineitem L,
>   nation N
> WHERE
>   C.C_CUSTKEY = O.O_CUSTKEY
>   AND L.L_ORDERKEY = O.O_ORDERKEY
>   AND O.O_ORDERDate >= DATE '1994-03-01'
>   AND O.O_ORDERDate < DATE '1994-03-01' + INTERVAL '3' MONTH
>   AND L.L_RETURNFLAG = 'R'
>   AND C.C_NATIONKEY = N.N_NATIONKEY
> GROUP BY
>   C.C_CUSTKEY,
>   C.C_NAME,
>   C.C_ACCTBAL,
>   C.C_PHONE,
>   N.N_NAME,
>   C.C_ADDRESS,
>   C.C_COMMENT
> ORDER BY
>   REVENUE DESC
> LIMIT 20
> This query works against JSON tables.  It should return 20 rows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615079#comment-16615079
 ] 

ASF GitHub Bot commented on DRILL-6625:
---

aravi5 commented on issue #1463: DRILL-6625: Intermittent failures in Kafka 
unit tests
URL: https://github.com/apache/drill/pull/1463#issuecomment-421418137
 
 
   The design of `TestKafkaSuit` is very similar to the design of `MongoTestSuit` 
and hence it needed changes similar to the ones made in 
[storage-mongo/pom.xml](f5dfa56#diff-e110e2cbfd77d27e85d5121529c612bfR83).
   
   The current behavior is that surefire runs the test classes twice - once as 
part of `TestKafkaSuit` and once by running the classes directly. To prevent 
the latter from happening, changes were made (by @ilooner) in `pom.xml` for the 
`storage-mongo` plugin. Similar changes are made in `storage-kafka`.
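
For context, a minimal self-contained sketch of the JUnit 4 suite pattern under
discussion (illustrative member classes, not Drill's actual Kafka tests): the
suite runs its members once, and the surefire configuration must exclude the
member classes from direct execution so they do not run a second time.

{code:java}
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// The suite executes MemberA and MemberB; if surefire also picks the member
// classes up directly, every test runs twice -- the behavior described above.
@RunWith(Suite.class)
@Suite.SuiteClasses({ TestSuiteSketch.MemberA.class, TestSuiteSketch.MemberB.class })
public class TestSuiteSketch {
  public static class MemberA {
    @Test public void passes() { }
  }
  public static class MemberB {
    @Test public void passes() { }
  }
}
{code}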


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, Maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0) :
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615078#comment-16615078
 ] 

ASF GitHub Bot commented on DRILL-6625:
---

aravi5 commented on issue #1463: DRILL-6625: Intermittent failures in Kafka 
unit tests
URL: https://github.com/apache/drill/pull/1463#issuecomment-421417416
 
 
   As a part of PR https://github.com/apache/drill/pull/1464, we discussed that 
some changes would go in as a part of this PR. Just checked in those changes.
   
   @Ben-Zvi  @vdiravka @ilooner - Please review the new changes. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, Maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0) :
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-3853) Get off Sqlline fork

2018-09-14 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-3853:

Labels: ready-to-commit  (was: )

> Get off Sqlline fork
> 
>
> Key: DRILL-3853
> URL: https://issues.apache.org/jira/browse/DRILL-3853
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>Assignee: Arina Ielchiieva
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> Drill has its own forked version of sqlline that includes customizations for 
> displaying the Drill version, the Drill QOTD, removing the names of 
> unsupported commands, and removing JDBC drivers not shipped with Drill.
> To get off the fork, we need to parameterize these features in sqlline and 
> have them driven from a properties file. The changes should be merged back 
> into sqlline, and Drill packaging should then provide a properties file to 
> customize the stock sqlline distribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615000#comment-16615000
 ] 

ASF GitHub Bot commented on DRILL-6625:
---

Ben-Zvi commented on issue #1463: DRILL-6625: Intermittent failures in Kafka 
unit tests
URL: https://github.com/apache/drill/pull/1463#issuecomment-421399980
 
 
   @aravi5 - Please also change the status of the jira 
(https://issues.apache.org/jira/browse/DRILL-6625) to 
**REVIEWABLE**   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, Maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0) :
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6625) Intermittent failures in Kafka unit tests

2018-09-14 Thread Boaz Ben-Zvi (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boaz Ben-Zvi updated DRILL-6625:

Labels: ready-to-commit  (was: )

> Intermittent failures in Kafka unit tests
> -
>
> Key: DRILL-6625
> URL: https://issues.apache.org/jira/browse/DRILL-6625
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.15.0
>
>
> The following failures have been seen (consistently on my Mac, or 
> occasionally on Jenkins) when running the unit tests in the Kafka test suite. 
> After the failure, Maven hangs for a long time.
>  Cost was 0.0 (instead of 26.0) :
> {code:java}
> Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest
> 16:46:57.748 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -65.3 KiB(73.6 KiB), h: -573.5 MiB(379.5 MiB), nh: 1.2 MiB(117.1 MiB)): 
> testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 26.0 in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 200,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:63751",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" : 0.0
>   }, {
> {code}
> Or occasionally:
> {code}
> ---
>  T E S T S
> ---
> 11:52:57.571 [main] ERROR o.a.d.e.s.k.KafkaMessageGenerator - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6381) Add capability to do index based planning and execution

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614902#comment-16614902
 ] 

ASF GitHub Bot commented on DRILL-6381:
---

amansinha100 commented on issue #1466: DRILL-6381: Add support for index based 
planning and execution
URL: https://github.com/apache/drill/pull/1466#issuecomment-421371288
 
 
   @Ben-Zvi since the code touches HashJoin and a few other runtime operators, 
could you help review the operator changes? In particular RowKeyJoin, 
RangePartition, ScanBatch and related changes. 
   
   @arina-ielchiieva do you think you can help review the planner changes and 
storage plugin changes? 
   
   Note: All unit and functional tests (including advanced tests) are passing 
with this branch. 
   
   Thanks in advance.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add capability to do index based planning and execution
> ---
>
> Key: DRILL-6381
> URL: https://issues.apache.org/jira/browse/DRILL-6381
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Execution - Relational Operators, Query Planning & 
> Optimization
>Reporter: Aman Sinha
>Assignee: Aman Sinha
>Priority: Major
> Fix For: 1.15.0
>
>
> If the underlying data source supports indexes (primary and secondary 
> indexes), Drill should leverage those during planning and execution in order 
> to improve query performance.  
> On the planning side, Drill planner should be enhanced to provide an 
> abstraction layer which express the index metadata and statistics.  Further, 
> a cost-based index selection is needed to decide which index(es) are 
> suitable.  
> On the execution side, appropriate operator enhancements would be needed to 
> handle different categories of indexes such as covering, non-covering 
> indexes, taking into consideration the index data may not be co-located with 
> the primary table, i.e a global index.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-4896) After a failed CTAS, the table both exists and does not exist

2018-09-14 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-4896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva reassigned DRILL-4896:
---

Assignee: (was: Arina Ielchiieva)

> After a failed CTAS, the table both exists and does not exist
> -
>
> Key: DRILL-4896
> URL: https://issues.apache.org/jira/browse/DRILL-4896
> Project: Apache Drill
>  Issue Type: Improvement
>  Components:  Server
>Affects Versions: 1.8.0
>Reporter: Boaz Ben-Zvi
>Priority: Major
>
>   After a CTAS failed (due to no space on the storage device), there were 
> (incomplete) Parquet files left.  A subsequent CTAS for the same table name 
> fails with "table exists", and a subsequent DROP on the same table name fails 
> with "table does not exist".
>   A possible enhancement: DROP should be able to clean up such a corrupted 
> table.
> 0: jdbc:drill:zk=local> create table `/drill/spill/tt1` as
> . . . . . . . . . . . >  select
> . . . . . . . . . . . >case when columns[2] = '' then cast(null as 
> varchar(100)) else cast(columns[2] as varchar(100)) end,
> . . . . . . . . . . . >case when columns[3] = '' then cast(null as 
> varchar(100)) else cast(columns[3] as varchar(100)) end,
> . . . . . . . . . . . >case when columns[4] = '' then cast(null as 
> varchar(100)) else cast(columns[4] as varchar(100)) end, 
> . . . . . . . . . . . >case when columns[5] = '' then cast(null as 
> varchar(100)) else cast(columns[5] as varchar(100)) end, 
> . . . . . . . . . . . >case when columns[0] = '' then cast(null as 
> varchar(100)) else cast(columns[0] as varchar(100)) end, 
> . . . . . . . . . . . >case when columns[8] = '' then cast(null as 
> varchar(100)) else cast(columns[8] as varchar(100)) end
> . . . . . . . . . . . > FROM 
> dfs.`/Users/boazben-zvi/data/store_sales/store_sales.dat`;
> Exception in thread "drill-executor-4" org.apache.hadoop.fs.FSError: 
> java.io.IOException: No space left on device
>   . 39 more
> Error: SYSTEM ERROR: IOException: The file being written is in an invalid 
> state. Probably caused by an error thrown previously. Current state: COLUMN
> Fragment 0:0
> [Error Id: de84c212-2400-4a08-a15c-8e3adb5ec774 on 10.250.57.63:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=local> create table `/drill/spill/tt1` as select * from 
> dfs.`/Users/boazben-zvi/data/store_sales/store_sales.dat`;
> Error: VALIDATION ERROR: A table or view with given name [/drill/spill/tt1] 
> already exists in schema [dfs.tmp]
> [Error Id: 0ef99a15-9d67-49ad-87fb-023105dece3c on 10.250.57.63:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=local> drop table `/drill/spill/tt1` ;
> Error: DATA_WRITE ERROR: Failed to drop table: File /drill/spill/tt1 does not 
> exist
> [Error Id: c22da79f-ecbd-423c-b5b2-4eae7d1263d7 on 10.250.57.63:31010] 
> (state=,code=0)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6743) H2O AI Library Causes Errors

2018-09-14 Thread Volodymyr Vysotskyi (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614834#comment-16614834
 ] 

Volodymyr Vysotskyi commented on DRILL-6743:


[~cgivre], it is not connected with a shaded module; the error comes from 
{{org.objectweb.asm.ClassReader.<init>}}, where the check for the class file 
version fails. It looks like something other than JDK 8 was used.
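
For reference, a standalone diagnostic sketch (not part of Drill) that reads
the class-file major version, assuming only the standard class-file layout;
major version 52 corresponds to Java 8, and ASM's {{ClassReader}} constructor
rejects versions it does not know with an {{IllegalArgumentException}} like the
one in the stack trace quoted below:

{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {
  public static void main(String[] args) throws IOException {
    try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
      int magic = in.readInt();           // should be 0xCAFEBABE
      int minor = in.readUnsignedShort();
      int major = in.readUnsignedShort(); // 52 = Java 8, 53 = Java 9, 55 = Java 11
      System.out.printf("magic=%x major=%d minor=%d%n", magic, major, minor);
    }
  }
}
{code}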

> H2O AI Library Causes Errors
> 
>
> Key: DRILL-6743
> URL: https://issues.apache.org/jira/browse/DRILL-6743
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.14.0, 1.15.0
> Environment: Mac OSX High Sierra
>Reporter: Charles Givre
>Priority: Major
>
> I've been working on a UDF that uses a POJO generated by H2O AI.  The basic 
> idea is that a person could train a machine learning model in H2O, generate a 
> POJO, and then use Drill and the POJO to make predictions from data in Drill.  
> The idea is similar to this: 
> [https://github.com/h2oai/h2o-tutorials/tree/master/tutorials/hive_udf_template/hive_multimojo_udf_template,]
>  but for Drill.
> This depends on importing the h2o-genmodel.jar 
> ([https://mvnrepository.com/artifact/ai.h2o/h2o-genmodel)].  However, when 
> you merely put that JAR file into Drill's site directory or into the 
> /jars/3rdParty directory, you get the following errors:
> I've been speaking with [~paul-rogers] about this, and he thinks it has to do 
> with a shaded module, but does anyone have any idea how to fix this?  
>  
> {{2018-09-13 23:38:54,163 ERROR 
> [2464d2b3-7a4a-4449-d08e-61283bce78e0:frag:0:0] 
> record.AbstractUnaryRecordBatch: Failure during query}}
>  {{org.apache.drill.exec.exception.SchemaChangeException: Failure while 
> attempting to load generated class}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:572)}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:582)}}
>  \{{ at 
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:101)}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:142)}}
>  \{{ at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:172)}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:83)}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)}}
>  \{{ at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:293)}}
>  \{{ at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:280)}}
>  \{{ at java.security.AccessController.doPrivileged(Native Method)}}
>  \{{ at javax.security.auth.Subject.doAs(Subject.java:422)}}
>  \{{ at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)}}
>  \{{ at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:280)}}
>  \{{ at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)}}
>  \{{ at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)}}
>  \{{ at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)}}
>  \{{ at java.lang.Thread.run(Thread.java:745)}}
>  {{Caused by: org.apache.drill.exec.exception.ClassTransformationException: 
> org.apache.drill.shaded.guava.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.lang.IllegalArgumentException}}
>  \{{ at 
> org.apache.drill.exec.compile.CodeCompiler.createInstances(CodeCompiler.java:197)}}
>  \{{ at 
> org.apache.drill.exec.compile.CodeCompiler.createInstance(CodeCompiler.java:163)}}
>  \{{ at 
> org.apache.drill.exec.ops.BaseFragmentContext.getImplementationClass(BaseFragmentContext.java:56)}}
>  \{{ at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:569)}}
>  \{{ ... 17 more}}
>  {{Caused by: 
> org.apache.drill.shaded.guava.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.lang.IllegalArgumentException}}
>  \{{ at 
> org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2218)}}
>  \{{ at 
> org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.get(LocalCache.java:4147)}}
>  \{{ at 
> org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4151)}}
>  \{{ at 
> org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5140)}}
>  \{{ at 
> 

[jira] [Updated] (DRILL-6743) H2O AI Library Causes Errors

2018-09-14 Thread Charles Givre (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Givre updated DRILL-6743:
-
Description: 
I've been working on a UDF that uses a POJO generated by H2O AI.  The basic 
idea is that a person could train a machine learning model in H2O, generate a 
POJO, and then use Drill and the POJO to make predictions from data in Drill.  

The idea is similar to this: 
[https://github.com/h2oai/h2o-tutorials/tree/master/tutorials/hive_udf_template/hive_multimojo_udf_template,]
 but for Drill.

This depends on importing the h2o-genmodel.jar 
([https://mvnrepository.com/artifact/ai.h2o/h2o-genmodel)].  However, when you 
merely put that JAR file into Drill's site directory or into the /jars/3rdParty 
directory, you get the following errors:

I've been speaking with [~paul-rogers] about this, and he thinks it has to do 
with a shaded module, but does anyone have any idea how to fix this?  

 

{{2018-09-13 23:38:54,163 ERROR [2464d2b3-7a4a-4449-d08e-61283bce78e0:frag:0:0] 
record.AbstractUnaryRecordBatch: Failure during query}}
 {{org.apache.drill.exec.exception.SchemaChangeException: Failure while 
attempting to load generated class}}
 \{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:572)}}
 \{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:582)}}
 \{{ at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:101)}}
 \{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:142)}}
 \{{ at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:172)}}
 \{{ at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)}}
 \{{ at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:83)}}
 \{{ at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)}}
 \{{ at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:293)}}
 \{{ at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:280)}}
 \{{ at java.security.AccessController.doPrivileged(Native Method)}}
 \{{ at javax.security.auth.Subject.doAs(Subject.java:422)}}
 \{{ at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)}}
 \{{ at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:280)}}
 \{{ at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)}}
 \{{ at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)}}
 \{{ at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)}}
 \{{ at java.lang.Thread.run(Thread.java:745)}}
 {{Caused by: org.apache.drill.exec.exception.ClassTransformationException: 
org.apache.drill.shaded.guava.com.google.common.util.concurrent.UncheckedExecutionException:
 java.lang.IllegalArgumentException}}
 \{{ at 
org.apache.drill.exec.compile.CodeCompiler.createInstances(CodeCompiler.java:197)}}
 \{{ at 
org.apache.drill.exec.compile.CodeCompiler.createInstance(CodeCompiler.java:163)}}
 \{{ at 
org.apache.drill.exec.ops.BaseFragmentContext.getImplementationClass(BaseFragmentContext.java:56)}}
 \{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:569)}}
 \{{ ... 17 more}}
 {{Caused by: 
org.apache.drill.shaded.guava.com.google.common.util.concurrent.UncheckedExecutionException:
 java.lang.IllegalArgumentException}}
 \{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2218)}}
 \{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.get(LocalCache.java:4147)}}
 \{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4151)}}
 \{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5140)}}
 \{{ at 
org.apache.drill.exec.compile.CodeCompiler.createInstances(CodeCompiler.java:186)}}
 \{{ ... 20 more}}
 {{Caused by: java.lang.IllegalArgumentException}}
 \{{ at org.objectweb.asm.ClassReader.<init>(Unknown Source)}}
 \{{ at org.objectweb.asm.ClassReader.<init>(Unknown Source)}}
 \{{ at org.apache.drill.exec.compile.AsmUtil.classFromBytes(AsmUtil.java:93)}}
 \{{ at org.apache.drill.exec.compile.AsmUtil.isClassBytesOk(AsmUtil.java:80)}}
 \{{ at 
org.apache.drill.exec.compile.MergeAdapter.getMergedClass(MergeAdapter.java:206)}}
 \{{ at 
org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:289)}}
 \{{ at 
org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:228)}}
 \{{ at 

[jira] [Created] (DRILL-6743) H2O AI Library Causes Errors

2018-09-14 Thread Charles Givre (JIRA)
Charles Givre created DRILL-6743:


 Summary: H2O AI Library Causes Errors
 Key: DRILL-6743
 URL: https://issues.apache.org/jira/browse/DRILL-6743
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 1.14.0, 1.15.0
 Environment: Mac OSX High Sierra
Reporter: Charles Givre


I've been working on a UDF that uses a POJO generated by H2O AI.  The basic 
idea is that a person could train a machine learning model in H2O, generate a 
POJO, and then use Drill and the POJO to make predictions from data in Drill.  
This depends on importing the h2o-genmodel.jar 
([https://mvnrepository.com/artifact/ai.h2o/h2o-genmodel)].  However, when you 
merely put that JAR file into Drill's site directory or into the /jars/3rdParty 
directory, you get the following errors:

I've been speaking with [~paul-rogers] about this, and he thinks it has to do 
with a shaded module, but does anyone have any idea how to fix this?  

 

{{2018-09-13 23:38:54,163 ERROR [2464d2b3-7a4a-4449-d08e-61283bce78e0:frag:0:0] 
record.AbstractUnaryRecordBatch: Failure during query}}
{{org.apache.drill.exec.exception.SchemaChangeException: Failure while 
attempting to load generated class}}
{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:572)}}
{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:582)}}
{{ at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:101)}}
{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:142)}}
{{ at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:172)}}
{{ at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)}}
{{ at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:83)}}
{{ at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)}}
{{ at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:293)}}
{{ at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:280)}}
{{ at java.security.AccessController.doPrivileged(Native Method)}}
{{ at javax.security.auth.Subject.doAs(Subject.java:422)}}
{{ at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)}}
{{ at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:280)}}
{{ at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)}}
{{ at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)}}
{{ at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)}}
{{ at java.lang.Thread.run(Thread.java:745)}}
{{Caused by: org.apache.drill.exec.exception.ClassTransformationException: 
org.apache.drill.shaded.guava.com.google.common.util.concurrent.UncheckedExecutionException:
 java.lang.IllegalArgumentException}}
{{ at 
org.apache.drill.exec.compile.CodeCompiler.createInstances(CodeCompiler.java:197)}}
{{ at 
org.apache.drill.exec.compile.CodeCompiler.createInstance(CodeCompiler.java:163)}}
{{ at 
org.apache.drill.exec.ops.BaseFragmentContext.getImplementationClass(BaseFragmentContext.java:56)}}
{{ at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:569)}}
{{ ... 17 more}}
{{Caused by: 
org.apache.drill.shaded.guava.com.google.common.util.concurrent.UncheckedExecutionException:
 java.lang.IllegalArgumentException}}
{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2218)}}
{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.get(LocalCache.java:4147)}}
{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4151)}}
{{ at 
org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5140)}}
{{ at 
org.apache.drill.exec.compile.CodeCompiler.createInstances(CodeCompiler.java:186)}}
{{ ... 20 more}}
{{Caused by: java.lang.IllegalArgumentException}}
{{ at org.objectweb.asm.ClassReader.<init>(Unknown Source)}}
{{ at org.objectweb.asm.ClassReader.<init>(Unknown Source)}}
{{ at org.apache.drill.exec.compile.AsmUtil.classFromBytes(AsmUtil.java:93)}}
{{ at org.apache.drill.exec.compile.AsmUtil.isClassBytesOk(AsmUtil.java:80)}}
{{ at 
org.apache.drill.exec.compile.MergeAdapter.getMergedClass(MergeAdapter.java:206)}}
{{ at 
org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:289)}}
{{ at 
org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:228)}}
{{ at 
org.apache.drill.exec.compile.CodeCompiler$CodeGenCompiler.compile(CodeCompiler.java:79)}}
{{ 

[jira] [Commented] (DRILL-6731) JPPD:Move aggregating the BF from the Foreman to the RuntimeFilter

2018-09-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614453#comment-16614453
 ] 

ASF GitHub Bot commented on DRILL-6731:
---

sohami commented on issue #1459: DRILL-6731: Move the BFs aggregating work from 
the Foreman to the RuntimeFi…
URL: https://github.com/apache/drill/pull/1459#issuecomment-421248457
 
 
   @weijietong  - Sorry I totally missed this PR. Will try to review it over 
weekend


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> JPPD:Move aggregating the BF from the Foreman to the RuntimeFilter
> --
>
> Key: DRILL-6731
> URL: https://issues.apache.org/jira/browse/DRILL-6731
> Project: Apache Drill
>  Issue Type: Improvement
>  Components:  Server
>Affects Versions: 1.15.0
>Reporter: weijie.tong
>Assignee: weijie.tong
>Priority: Major
>
> This PR moves the BloomFilter aggregating work from the Foreman to the 
> RuntimeFilter. Through this change, the RuntimeFilter can apply an incoming 
> BF as soon as possible.
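
To make the aggregation step concrete, a simplified sketch (single hash
function, fixed size; {{BloomFilterMergeSketch}} is illustrative, not Drill's
actual RuntimeFilter or BloomFilter classes): merging bloom filters built with
identical parameters is a bitwise OR of their bit sets, which is why partial
filters can be applied incrementally as they arrive rather than waiting for the
Foreman to combine them all.

{code:java}
import java.util.BitSet;

public class BloomFilterMergeSketch {
  private static final int SIZE = 1 << 20;
  private final BitSet bits = new BitSet(SIZE);

  // Each incoming partial filter is OR-ed in; the combined filter is usable
  // (with false positives only, never false negatives) after every merge.
  public synchronized void merge(BitSet partial) {
    bits.or(partial);
  }

  public boolean mightContain(int hash) {
    return bits.get(Math.floorMod(hash, SIZE));
  }
}
{code}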



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)