[jira] [Created] (DRILL-6751) Upgrade Apache parent POM to version 21

2018-09-21 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-6751:
---

 Summary: Upgrade Apache parent POM to version 21
 Key: DRILL-6751
 URL: https://issues.apache.org/jira/browse/DRILL-6751
 Project: Apache Drill
  Issue Type: Task
Affects Versions: 1.14.0
Reporter: Arina Ielchiieva
Assignee: Vitalii Diravka
 Fix For: 1.15.0


Upgrade Apache parent POM to version 21





[jira] [Created] (DRILL-6752) Surround Drill quotes with double quotes

2018-09-21 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-6752:
---

 Summary: Surround Drill quotes with double quotes
 Key: DRILL-6752
 URL: https://issues.apache.org/jira/browse/DRILL-6752
 Project: Apache Drill
  Issue Type: Task
Reporter: Arina Ielchiieva
Assignee: Arina Ielchiieva
 Fix For: 1.15.0


Surround Drill quotes with double quotes. Double quotes are missing after 
DRILL-3853.





[jira] [Created] (DRILL-6753) Fix show files command to return result the same way as before

2018-09-21 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-6753:
---

 Summary: Fix show files command to return result the same way as 
before
 Key: DRILL-6753
 URL: https://issues.apache.org/jira/browse/DRILL-6753
 Project: Apache Drill
  Issue Type: Bug
Reporter: Arina Ielchiieva
Assignee: Arina Ielchiieva
 Fix For: 1.15.0


After DRILL-6680 the show files command returns its result the same way as the new 
FILES table. After this change Drill Explorer cannot browse files, since the ODBC 
driver heavily depends on the previously used column names and types. 

To do:
rework the show files command result to keep the previous column names and types.





[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219225796
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/BaseRootExec.java
 ##
 @@ -125,17 +126,24 @@ public void receivingFragmentFinished(final FragmentHandle handle) {
 
   @Override
   public void dumpOperators() {
-    if (!operators.isEmpty()) {
-      logger.info("Operator dump started.");
-      for (CloseableRecordBatch batch : operators) {
-        batch.dump();
-        if (batch.isFailed()) {
-          // No need to proceed further as this batch is the one that failed
-          return;
+    final int numberOfOperatorsToDump = 2;
 
 Review comment:
   I meant dumping info for the last two batches, not operators. 
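
   A rough sketch of "dump only the last two batches" (not the actual PR change); it assumes 
`operators` is a `List` of batches as in the snippet above, and `RecordDumper` is a 
hypothetical stand-in for Drill's `CloseableRecordBatch`:

   ```java
   import java.util.List;

   // Hypothetical sketch only -- not the PR code. RecordDumper stands in for
   // Drill's CloseableRecordBatch; real field and logger names may differ.
   class OperatorDumpSketch {
     interface RecordDumper {
       void dump();
     }

     // Dump only the last `numberOfBatchesToDump` entries of the operator list.
     static void dumpLastBatches(List<? extends RecordDumper> operators, int numberOfBatchesToDump) {
       int from = Math.max(0, operators.size() - numberOfBatchesToDump);
       for (RecordDumper batch : operators.subList(from, operators.size())) {
         batch.dump();
       }
     }
   }
   ```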




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219233988
 
 

 ##
 File path: exec/java-exec/src/test/java/org/apache/drill/TestOperatorDump.java
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill;
+
+import ch.qos.logback.classic.spi.ILoggingEvent;
+import ch.qos.logback.core.ConsoleAppender;
+import org.apache.drill.common.exceptions.UserRemoteException;
+import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.exec.physical.impl.ScanBatch;
+import org.apache.drill.exec.physical.impl.join.HashJoinBatch;
+import org.apache.drill.exec.physical.impl.limit.LimitRecordBatch;
+import org.apache.drill.exec.testing.Controls;
+import org.apache.drill.exec.testing.ControlsInjectionUtil;
+import org.apache.drill.test.ClusterFixture;
+import org.apache.drill.test.ClusterFixtureBuilder;
+import org.apache.drill.test.ClusterTest;
+import org.apache.drill.test.LogFixture;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.junit.Assert.assertTrue;
+
+public class TestOperatorDump extends ClusterTest {
+
+  private static final String ENTRY_DUMP_COMPLETED = "Operator dump completed";
+  private static final String ENTRY_DUMP_STARTED = "Operator dump started";
+
+  private LogFixture logFixture;
+  private TestAppender appender;
+
+  @BeforeClass
+  public static void setupFiles() {
+dirTestWatcher.copyResourceToRoot(Paths.get("multilevel"));
+  }
+
+  @Before
+  public void setup() throws Exception {
+ClusterFixtureBuilder builder = ClusterFixture.builder(dirTestWatcher);
+appender = new TestAppender();
+logFixture = LogFixture.builder()
+.toConsole(appender, LogFixture.DEFAULT_CONSOLE_FORMAT)
+.build();
+startCluster(builder);
+  }
+
+  @After
+  public void tearUp(){
 
 Review comment:
   -> `tearDown`




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219225010
 
 

 ##
 File path: common/src/main/java/org/apache/drill/common/exceptions/UserExceptionContext.java
 ##
 @@ -141,7 +141,7 @@ String generateContextMessage(boolean includeErrorIdAndIdentity, boolean include
     }
 
     if (includeSeeLogsMessage) {
-      sb.append("Please, refer to logs for more information.\n");
+      sb.append("\nPlease, refer to logs for more information.\n");
 
 Review comment:
   `\n` works on Unix-based OSes. For Windows it is `\r\n`.
   Please use `System.lineSeparator()`:
   
http://docs.oracle.com/javase/7/docs/api/java/lang/System.html#lineSeparator%28%29
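
   A minimal sketch of the suggested change (illustrative only, not the exact PR code):

   ```java
   public class LineSeparatorDemo {
     public static void main(String[] args) {
       // System.lineSeparator() resolves to "\n" on Unix-like systems and "\r\n" on Windows.
       StringBuilder sb = new StringBuilder();
       sb.append(System.lineSeparator())
         .append("Please, refer to logs for more information.")
         .append(System.lineSeparator());
       System.out.print(sb);
     }
   }
   ```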




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219481519
 
 

 ##
 File path: exec/java-exec/src/test/java/org/apache/drill/test/LogFixture.java
 ##
 @@ -204,10 +210,14 @@ private void setupConsole(LogFixtureBuilder builder) {
     ple.setContext(lc);
     ple.start();
 
-    appender = new ConsoleAppender<>( );
+    if (builder.appender == null) {
 
 Review comment:
   `appender = builder.appender == null ? new ConsoleAppender<>() : builder.appender;`




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219230793
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/filter/RuntimeFilterRecordBatch.java
 ##
 @@ -250,7 +254,8 @@ private void computeBitSet(int fieldId, BloomFilter bloomFilter, BitSet bitSet)
 
   @Override
   public void dump() {
-    logger.info("RuntimeFilterRecordBatch[selectionVector={}, toFilterFields={}, originalRecordCount={}, " +
-        "batchSchema={}]", sv2, toFilterFields, originalRecordCount, incoming.getSchema());
+    logger.error("RuntimeFilterRecordBatch[container={}, selectionVector={}, toFilterFields={}, "
+        + "originalRecordCount={}, batchSchema={}]",
+        container, sv2, toFilterFields, originalRecordCount, incoming.getSchema());
   }
 }
 
 Review comment:
   Could you add a new line here, please?




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219463764
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/record/AbstractRecordBatch.java
 ##
 @@ -47,6 +47,10 @@
   protected final boolean unionTypeEnabled;
   protected BatchState state;
 
+  // Used for state dump
+  protected boolean failed;
 
 Review comment:
   We already have two states for failed batches, `BatchState.STOP` and 
`BatchState.OUT_OF_MEMORY`, in the `state` field. 
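
   A minimal sketch of deriving the failure flag from the existing `state` field (illustrative 
only; all `BatchState` values other than the two named above are elided):

   ```java
   // Illustrative sketch: reuse the existing state field instead of a separate boolean.
   abstract class RecordBatchStateSketch {
     enum BatchState { STOP, OUT_OF_MEMORY /* other states elided */ }

     protected BatchState state;

     public boolean isFailed() {
       return state == BatchState.STOP || state == BatchState.OUT_OF_MEMORY;
     }
   }
   ```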




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219480896
 
 

 ##
 File path: exec/java-exec/src/test/java/org/apache/drill/TestOperatorDump.java
 ##
 @@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill;
+
+import ch.qos.logback.classic.spi.ILoggingEvent;
+import ch.qos.logback.core.ConsoleAppender;
+import org.apache.drill.common.exceptions.UserRemoteException;
+import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.exec.physical.impl.ScanBatch;
+import org.apache.drill.exec.physical.impl.join.HashJoinBatch;
+import org.apache.drill.exec.physical.impl.limit.LimitRecordBatch;
+import org.apache.drill.exec.testing.Controls;
+import org.apache.drill.exec.testing.ControlsInjectionUtil;
+import org.apache.drill.test.ClusterFixture;
+import org.apache.drill.test.ClusterFixtureBuilder;
+import org.apache.drill.test.ClusterTest;
+import org.apache.drill.test.LogFixture;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.junit.Assert.assertTrue;
+
+public class TestOperatorDump extends ClusterTest {
+
+  private static final String ENTRY_DUMP_COMPLETED = "Operator dump completed";
+  private static final String ENTRY_DUMP_STARTED = "Operator dump started";
+
+  private LogFixture logFixture;
+  private TestAppender appender;
+
+  @BeforeClass
+  public static void setupFiles() {
+dirTestWatcher.copyResourceToRoot(Paths.get("multilevel"));
+  }
+
+  @Before
+  public void setup() throws Exception {
+ClusterFixtureBuilder builder = ClusterFixture.builder(dirTestWatcher);
+appender = new TestAppender();
+logFixture = LogFixture.builder()
+.toConsole(appender, LogFixture.DEFAULT_CONSOLE_FORMAT)
+.build();
+startCluster(builder);
+  }
+
+  @After
+  public void tearUp(){
+logFixture.close();
+  }
+
+  @Test(expected = UserRemoteException.class)
+  public void testScanBatchChecked() throws Exception {
+String exceptionDesc = "next-allocate";
+final String controls = Controls.newBuilder()
+.addException(ScanBatch.class, exceptionDesc, 
OutOfMemoryException.class, 0, 1)
+.build();
+ControlsInjectionUtil.setControls(client.client(), controls);
+String query = "select * from dfs.`multilevel/parquet` limit 100";
+try {
+  client.queryBuilder().sql(query).run();
+} catch (UserRemoteException e) {
+  assertTrue(e.getMessage().contains(exceptionDesc));
+
+  String[] expectedEntries = new String[] {ENTRY_DUMP_STARTED, 
ENTRY_DUMP_COMPLETED};
+  validateContainsEntries(expectedEntries, ScanBatch.class.getName());
+  throw e;
+}
+  }
+
+  @Test(expected = UserRemoteException.class)
+  public void testLimitRecordBatchUnchecked() throws Exception {
+String exceptionDesc = "limit-do-work";
+final String controls = Controls.newBuilder()
+.addException(LimitRecordBatch.class, exceptionDesc, 
IndexOutOfBoundsException.class, 0, 1)
+.build();
+ControlsInjectionUtil.setControls(client.client(), controls);
+String query = "select * from dfs.`multilevel/parquet` limit 5";
+try {
+  client.queryBuilder().sql(query).run();
+} catch (UserRemoteException e) {
+  assertTrue(e.getMessage().contains(exceptionDesc));
+
+  String[] expectedEntries = new String[] {ENTRY_DUMP_STARTED, 
ENTRY_DUMP_COMPLETED};
+  validateContainsEntries(expectedEntries, 
LimitRecordBatch.class.getName());
+  throw e;
+}
+  }
+
+  @Test(expected = UserRemoteException.class)
+  public void testHashJoinBatchUnchecked() throws Exception {
+String exceptionDesc = "hashjoin-innerNext";
+final String controls = Controls.newBuilder()
+.addException(HashJoinBatch.class, exceptionDesc, 
RuntimeException.class, 0, 1)
+.build();
+ 

[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219231084
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java
 ##
 @@ -435,6 +440,8 @@ public IterOutcome innerNext() {
     // Build the hash table, using the build side record batches.
     final IterOutcome buildExecuteTermination = executeBuildPhase();
 
+    injector.injectUnchecked(context.getExecutionControls(), "hashjoin-innerNext");
 
 Review comment:
   Could we use existing injection points, for instance in `ExternalSortBatch` 
and `ScanBatch`?
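
   For reference, the TestOperatorDump code quoted earlier in this thread already drives an 
existing ScanBatch injection site from inside a test method; reusing it looks roughly like 
this (`client` is the test's cluster client fixture):

   ```java
   // Mirrors the TestOperatorDump snippet quoted above: inject an OutOfMemoryException at
   // ScanBatch's existing "next-allocate" site instead of adding a new injection point.
   final String controls = Controls.newBuilder()
       .addException(ScanBatch.class, "next-allocate", OutOfMemoryException.class, 0, 1)
       .build();
   ControlsInjectionUtil.setControls(client.client(), controls);
   ```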




[GitHub] arina-ielchiieva opened a new pull request #1475: DRILL-6752: Surround Drill quotes with double quotes

2018-09-21 Thread GitBox
arina-ielchiieva opened a new pull request #1475: DRILL-6752: Surround Drill 
quotes with double quotes
URL: https://github.com/apache/drill/pull/1475
 
 
   1. Surround Drill quotes with double quotes.
   2. Remove drill-sqlline-test.conf, use drill-sqlline.conf for tests instead.
   
   Details in [DRILL-6752](https://issues.apache.org/jira/browse/DRILL-6752).




[GitHub] agozhiy opened a new pull request #1476: DRILL-5782: Web UI: do not attempt to build visualized plan when plan…

2018-09-21 Thread GitBox
agozhiy opened a new pull request #1476: DRILL-5782: Web UI: do not attempt to 
build visualized plan when plan…
URL: https://github.com/apache/drill/pull/1476
 
 
   … is absent
   
   Added checks for undefined values. Now there are no JS errors, and a 
JSON profile is also displayed for incorrect queries.




[GitHub] arina-ielchiieva commented on issue #1476: DRILL-5782: Web UI: do not attempt to build visualized plan when plan…

2018-09-21 Thread GitBox
arina-ielchiieva commented on issue #1476: DRILL-5782: Web UI: do not attempt 
to build visualized plan when plan…
URL: https://github.com/apache/drill/pull/1476#issuecomment-423565894
 
 
   +1




[GitHub] vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution

2018-09-21 Thread GitBox
vdiravka commented on a change in pull request #1455: DRILL-6724: Dump operator 
context to logs when error occurs during query execution
URL: https://github.com/apache/drill/pull/1455#discussion_r219549936
 
 

 ##
 File path: common/src/main/java/org/apache/drill/common/exceptions/UserExceptionContext.java
 ##
 @@ -141,7 +141,7 @@ String generateContextMessage(boolean includeErrorIdAndIdentity, boolean include
     }
 
     if (includeSeeLogsMessage) {
-      sb.append("Please, refer to logs for more information.\n");
+      sb.append("\nPlease, refer to logs for more information.\n");
 
 Review comment:
   Looks like on Windows some editors like Notepad don't treat `\n` as a line 
separator.
   For instance, the IntelliJ IDEA terminal on Windows shows:
   ```
   0: jdbc:drill:zk=local> select * from cp.`employee.json` where full_name = 5;
   Error: SYSTEM ERROR: NumberFormatException: Sheri Nowmer
   
   Fragment 0:0
   
   Please, refer to logs for more information.
   
   [Error Id: bdf70e30-4b36-48cf-8d97-39b2d31dbece on 10.2.14.127:31010] 
(state=,code=0)
   ```
   but `sqlline.log` in NotePad shows:
   ```
   2018-09-21 18:49:11,972 [245aed07-be42-1f73-4905-f1bbc400f50d:frag:0:0] 
ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: 
NumberFormatException: Sheri NowmerFragment 0:0Please, refer to logs for more 
information.[Error Id: bdf70e30-4b36-48cf-8d97-39b2d31dbece on 
10.2.14.127:31010]
   org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
NumberFormatException: Sheri NowmerFragment 0:0Please, refer to logs for more 
information. [Error Id: bdf70e30-4b36-48cf-8d97-39b2d31dbece on 
10.2.14.127:31010]
   ```




[GitHub] arina-ielchiieva opened a new pull request #1477: DRILL-6753: Fix show files command to return result the same way as before

2018-09-21 Thread GitBox
arina-ielchiieva opened a new pull request #1477: DRILL-6753: Fix show files 
command to return result the same way as before
URL: https://github.com/apache/drill/pull/1477
 
 
   1. Add ACCESS_TIME column.
   2. Re-write show files command to return result using ShowFilesCommandResult 
to maintain backward compatibility.
   
   Details in [DRILL-6753](https://issues.apache.org/jira/browse/DRILL-6753).




[GitHub] ilooner commented on a change in pull request #1472: DRILL-6747: Fixed IOB in HashJoin for inner and right joins on empty probe side

2018-09-21 Thread GitBox
ilooner commented on a change in pull request #1472: DRILL-6747: Fixed IOB in 
HashJoin for inner and right joins on empty probe side
URL: https://github.com/apache/drill/pull/1472#discussion_r219561941
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatch.java
 ##
 @@ -115,6 +115,14 @@
  *   This value will be returned only after {@link #OK_NEW_SCHEMA} has been
  *   returned at least once (not necessarily immediately after).
  * 
+ * 
+ *   Also after a RecordBatch returns NONE a RecordBatch should:
+ *   
+ * Contain the last valid schema seen by the operator.
+ * Contain a VectorContainer with empty columns corresponding to the last valid schema.
+ * Return a record count of 0.
+ *   
+ * 
  */
 NONE(false),
 
 Review comment:
   HashJoin needs to first sniff the first data holding batch from the probe 
side in order to compute stats for the memory calculations used to spill. If 
the probe side is empty we will hit NONE. This is unavoidable and this is where 
the problem begins.
   
   In the case of a right join HashJoin still builds HashTables, and the 
HashTables require build and probe batches to be updated simultaneously before 
being built. When the build and probe sides are updated the Hashtable expects a 
vector container with valid columns. If the columns are not present an IOB is 
thrown.
   
   Then later the probing logic is called in HashJoinProbeTemplate, which 
expects the upstream probeBatch to return a valid record count.
   
   There are a few ways to fix this problem:
   
 1. We can refactor the HashTable to update probe and build sides 
independently. This will have some corner cases though, and the HashTable has 
no unit tests, so it's risky. Or we can remove the code generation from the 
HashTable, which will also be somewhat time consuming. Or we could change the 
join logic to not build the HashTable in this case, which is another major 
change. Then we can also refactor HashJoinProbeTemplate to handle an empty 
probe side gracefully, but again HashJoinProbeTemplate is not unit tested so 
this will be time consuming.
   
 2. We leave the HashTable and HashJoinProbeTemplate as is, but use the 
last schema we got from the probe side to build dummy empty vector containers 
for building the HashTable, and for probing in HashJoinProbeTemplate. IMO this 
is a hack. Also there are some corner cases because some Drill operators 
violate the contract that they must first send OK_NEW_SCHEMA and then NONE. 
I've observed in some unit tests some readers directly skip to sending NONE 
without OK_NEW_SCHEMA if there is no data.
   
3. We define the state of a RecordBatch after it returns NONE to have a 
record count of zero, the last valid schema, and an empty VectorContainer. This 
would be defined should a downstream operator choose to query it. This resolves 
the issue in HashJoin without doing hacks and risky refactoring in a rush. It 
also defines state that was previously undefined, which IMO is a good thing. I 
would also do follow up PRs to make all operators obey this behavior, since it 
will be trivial to do.
   
-  Of the three options, option 1 is the strictly correct thing to do if 
you don't consider time costs. However, due to the technical debt we've 
accumulated it will be time consuming to do.
- Option 2 is a hack IMO and just makes things more confusing for the 
future generation of Drillers.
- Option 3 is a win-win (a minimal sketch follows below). We get a clean resolution to the issue, and we 
define the state of operators after they return NONE. Defining state that was 
previously undefined is always a good thing, especially when it simplifies the 
logic of the system. And I can go through the operators and make sure they do 
this, it should be a trivial change involving zeroing VectorContainers before 
returning NONE and updating record counts to be 0.
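
   A minimal sketch of option 3 with stand-in types (the real classes are Drill's 
RecordBatchLoader and VectorContainer; the diff merged for this PR, quoted later in this 
thread, does essentially this):

   ```java
   // Stand-in types only; the real change lives in RecordBatchLoader / VectorContainer.
   class BatchLoaderSketch {
     private final VectorContainerSketch container = new VectorContainerSketch();
     private int valueCount;

     // After NONE: keep the schema and empty vectors, and report a record count of 0.
     void zero() {
       container.zeroVectors();
       valueCount = 0;
     }

     static class VectorContainerSketch {
       void zeroVectors() {
         // release the data buffers but retain the column schema (empty vectors)
       }
     }
   }
   ```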
 




[GitHub] vdiravka commented on issue #1307: DRILL-6473: Upgrade Drill 1.14 with Hive 2.3 for mapr profile

2018-09-21 Thread GitBox
vdiravka commented on issue #1307: DRILL-6473: Upgrade Drill 1.14 with Hive 2.3 
for mapr profile
URL: https://github.com/apache/drill/pull/1307#issuecomment-423636602
 
 
   @KazydubB Other MapR core packages have been updated and should be used with 
MapR Hive 2.3 too.
   At least version 6.0.1 should be used (it is now public and available).
   Please update `mapr.release.version` and `ojai.version` in your PR as in [this PR 
#1466](https://github.com/apache/drill/pull/1466/files#diff-600376dffeb79835ede4a0b285078036R56),
 and update the Jira, the PR name and the commit message.




[jira] [Created] (DRILL-6754) Add a field to SV2 to indicate if the SV2 reorders the Record Batch

2018-09-21 Thread Karthikeyan Manivannan (JIRA)
Karthikeyan Manivannan created DRILL-6754:
-

 Summary: Add a field to SV2 to indicate if the SV2 reorders the 
Record Batch
 Key: DRILL-6754
 URL: https://issues.apache.org/jira/browse/DRILL-6754
 Project: Apache Drill
  Issue Type: Improvement
 Environment: The optimization in DRILL-6687 is not correct if an SV2 
is used to re-order rows in the record batch. Currently, this is not a problem 
because none of the reordering operators (SORT, TOPN) use an SV2. SORT has code 
for SV2 but it is disabled. 

Adding a field to SV2 to indicate if the SV2 reorders the Record Batch would 
allow the safe application of the DRILL-6687 optimization.

Reporter: Karthikeyan Manivannan
Assignee: Karthikeyan Manivannan
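
A hypothetical sketch of the proposed flag (names are illustrative, not Drill's actual 
SelectionVector2 API):

```java
// Hypothetical sketch only; the field and method names are illustrative.
public class SelectionVectorReorderFlagSketch {
  // True when the selection vector may emit rows in a different order than they
  // appear in the underlying record batch (e.g. if SORT/TOPN ever use an SV2).
  private boolean reordersRecordBatch;

  public boolean reordersRecordBatch() {
    return reordersRecordBatch;
  }

  public void setReordersRecordBatch(boolean reordersRecordBatch) {
    this.reordersRecordBatch = reordersRecordBatch;
  }
}
```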








[GitHub] KazydubB commented on issue #1307: DRILL-6473: Upgrade Drill 1.14 with Hive 2.3 for mapr profile

2018-09-21 Thread GitBox
KazydubB commented on issue #1307: DRILL-6473: Upgrade Drill 1.14 with Hive 2.3 
for mapr profile
URL: https://github.com/apache/drill/pull/1307#issuecomment-423644143
 
 
   @vdiravka updated.




[GitHub] sachouche commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sachouche commented on a change in pull request #1470: DRILL-6746: Query can 
hang when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219618697
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/UnlimitedRawBatchBuffer.java
 ##
 @@ -63,6 +64,15 @@ public RawFragmentBatch take() throws IOException, InterruptedException {
       return batch;
     }
 
+    @Override
+    public RawFragmentBatch poll(long timeout, TimeUnit timeUnit) throws InterruptedException, IOException {
+      RawFragmentBatch batch = buffer.poll(timeout, timeUnit);
 
 Review comment:
   Does poll return null when the timeout is reached? If not, then you should 
handle the timeout exception. 
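
   If the underlying queue is a `java.util.concurrent.BlockingQueue`, its `poll(timeout, unit)` 
returns null when the timeout elapses (sohami confirms this later in the thread); a tiny 
standalone demo:

   ```java
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.TimeUnit;

   // Demonstrates the BlockingQueue#poll(timeout, unit) contract: it returns null
   // when the timeout elapses without an element becoming available.
   public class PollTimeoutDemo {
     public static void main(String[] args) throws InterruptedException {
       BlockingQueue<String> queue = new LinkedBlockingQueue<>();
       String batch = queue.poll(100, TimeUnit.MILLISECONDS);
       if (batch == null) {
         System.out.println("Timed out: no element available yet.");
       }
     }
   }
   ```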




[GitHub] sachouche commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sachouche commented on a change in pull request #1470: DRILL-6746: Query can 
hang when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219618211
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/SpoolingRawBatchBuffer.java
 ##
 @@ -119,6 +119,19 @@ public RawFragmentBatch take() throws IOException, InterruptedException {
       return buffer.take().get();
     }
 
+    @Override
+    public RawFragmentBatch poll(long timeout, TimeUnit timeUnit) throws InterruptedException, IOException {
+      RawFragmentBatchWrapper batchWrapper = buffer.poll(timeout, timeUnit);
+      if (batchWrapper != null) {
+        try {
+          return batchWrapper.get();
+        } catch (InterruptedException e) {
 
 Review comment:
   You should not catch InterruptedException, as it is handled by the caller; 
the handling is also different:
   - The calling code has more logic for dealing with an interrupted 
exception
   - This code erases the interrupted flag, whereas the caller's code doesn't (see the sketch below)
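
   A minimal sketch of the suggested handling, using a simplified stand-in for the buffer class: 
declare `InterruptedException` and let it propagate instead of catching it.

   ```java
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.TimeUnit;

   // Simplified stand-in for the buffer classes discussed above: poll() declares
   // InterruptedException and does not catch it, so the caller's interrupt handling
   // (and the thread's interrupt status) is left intact.
   class PollingBufferSketch<T> {
     private final BlockingQueue<T> buffer;

     PollingBufferSketch(BlockingQueue<T> buffer) {
       this.buffer = buffer;
     }

     T poll(long timeout, TimeUnit timeUnit) throws InterruptedException {
       // No try/catch here: an interrupt surfaces to the caller unchanged.
       return buffer.poll(timeout, timeUnit);
     }
   }
   ```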




[GitHub] sachouche commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sachouche commented on a change in pull request #1470: DRILL-6746: Query can 
hang when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219619807
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/BaseRawBatchBuffer.java
 ##
 @@ -167,7 +169,25 @@ public RawFragmentBatch getNext() throws IOException {
 
     // if we didn't get a batch, block on waiting for queue.
     if (b == null && (!isTerminated() || !bufferQueue.isEmpty())) {
-      b = bufferQueue.take();
+      // We shouldn't block infinitely here. There can be a condition such that, due to a failure, the
+      // FragmentExecutor state is changed to FAILED and the queue is empty. Because of this the minor
+      // fragment main thread will block here waiting for the next batch to arrive. Meanwhile, when the
+      // next batch arrives, the enqueuing code sees the FragmentExecutor failure state, doesn't enqueue
+      // the batch and cleans up the buffer queue. Hence this thread would be stuck forever. So we poll
+      // for 5 seconds at a time until we get a batch or the FragmentExecutor state is in an error condition.
+      while (b == null) {
+        b = bufferQueue.poll(5, TimeUnit.SECONDS);
+        if (!context.getExecutorState().shouldContinue()) {
+          kill(context);
+          if (b != null) {
+            assertAckSent(b);
 
 Review comment:
   Can this assert fail when the minor fragment is being closed? My point is that 
assertions should be avoided in the cleanup path.




[jira] [Created] (DRILL-6755) HashJoin don't build hash tables when probe side is empty.

2018-09-21 Thread Timothy Farkas (JIRA)
Timothy Farkas created DRILL-6755:
-

 Summary: HashJoin don't build hash tables when probe side is empty.
 Key: DRILL-6755
 URL: https://issues.apache.org/jira/browse/DRILL-6755
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Timothy Farkas
Assignee: Boaz Ben-Zvi


Currently when doing an Inner or a Right join we still build hashtables when 
the probe side is empty. A performance optimization would be to not build them.





[jira] [Created] (DRILL-6756) HashTable Decouple updateIncoming for build and probe side.

2018-09-21 Thread Timothy Farkas (JIRA)
Timothy Farkas created DRILL-6756:
-

 Summary: HashTable Decouple updateIncoming for build and probe 
side.
 Key: DRILL-6756
 URL: https://issues.apache.org/jira/browse/DRILL-6756
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Timothy Farkas


We should decouple updateIncoming for the build-side and probe-side batches.





[GitHub] ilooner commented on a change in pull request #1472: DRILL-6747: Fixed IOB in HashJoin for inner and right joins on empty probe side

2018-09-21 Thread GitBox
ilooner commented on a change in pull request #1472: DRILL-6747: Fixed IOB in 
HashJoin for inner and right joins on empty probe side
URL: https://github.com/apache/drill/pull/1472#discussion_r219634563
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatch.java
 ##
 @@ -115,6 +115,14 @@
  *   This value will be returned only after {@link #OK_NEW_SCHEMA} has been
  *   returned at least once (not necessarily immediately after).
  * 
+ * 
+ *   Also after a RecordBatch returns NONE a RecordBatch should:
+ *   
+ * Contain the last valid schema seen by the operator.
+ * Contain a VectorContainer with empty columns corresponding to the last valid schema.
+ * Return a record count of 0.
+ *   
+ * 
  */
 NONE(false),
 
 Review comment:
   We discussed offline and agreed to go with this solution for now.




[jira] [Created] (DRILL-6757) Enforce Updated NONE Contract

2018-09-21 Thread Timothy Farkas (JIRA)
Timothy Farkas created DRILL-6757:
-

 Summary: Enforce Updated NONE Contract
 Key: DRILL-6757
 URL: https://issues.apache.org/jira/browse/DRILL-6757
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Timothy Farkas
Assignee: Timothy Farkas


DRILL-6747 updated the contract for RecordBatches after they return NONE. This 
new contract should be implemented in all the operators.





[GitHub] ilooner closed pull request #1472: DRILL-6747: Fixed IOB in HashJoin for inner and right joins on empty probe side

2018-09-21 Thread GitBox
ilooner closed pull request #1472: DRILL-6747: Fixed IOB in HashJoin for inner 
and right joins on empty probe side
URL: https://github.com/apache/drill/pull/1472
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unorderedreceiver/UnorderedReceiverBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unorderedreceiver/UnorderedReceiverBatch.java
index 424a733bb63..8cdc0a1c78a 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unorderedreceiver/UnorderedReceiverBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unorderedreceiver/UnorderedReceiverBatch.java
@@ -174,7 +174,7 @@ public IterOutcome next() {
       first = false;
 
       if (batch == null) {
-        batchLoader.clear();
+        batchLoader.zero();
         if (!context.getExecutorState().shouldContinue()) {
           return IterOutcome.STOP;
         }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatch.java
index b44a3629211..c65827c2be6 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatch.java
@@ -115,6 +115,14 @@
  *   This value will be returned only after {@link #OK_NEW_SCHEMA} has been
  *   returned at least once (not necessarily immediately after).
  * 
+ * 
+ *   Also after a RecordBatch returns NONE a RecordBatch should:
+ *   
+ * Contain the last valid schema seen by the operator.
+ * Contain a VectorContainer with empty columns corresponding to the last valid schema.
+ * Return a record count of 0.
+ *   
+ * 
  */
 NONE(false),
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchLoader.java b/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchLoader.java
index bea7cb503fe..e841f2f3bc0 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchLoader.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchLoader.java
@@ -276,6 +276,14 @@ public SelectionVector4 getSelectionVector4() {
 
   public void resetRecordCount() { valueCount = 0; }
 
+  /**
+   * Removes all data from the loader, but maintains the schema and empty vectors.
+   */
+  public void zero() {
+    container.zeroVectors();
+    resetRecordCount();
+  }
+
   /**
    * Clears this loader, which clears the internal vector container (see
    * {@link VectorContainer#clear}) and resets the record count to zero.


 




[GitHub] sohami commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sohami commented on a change in pull request #1470: DRILL-6746: Query can hang 
when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219643878
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/UnlimitedRawBatchBuffer.java
 ##
 @@ -63,6 +64,15 @@ public RawFragmentBatch take() throws IOException, InterruptedException {
       return batch;
     }
 
+    @Override
+    public RawFragmentBatch poll(long timeout, TimeUnit timeUnit) throws InterruptedException, IOException {
+      RawFragmentBatch batch = buffer.poll(timeout, timeUnit);
 
 Review comment:
   Yes, it returns null.




[GitHub] sohami commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sohami commented on a change in pull request #1470: DRILL-6746: Query can hang 
when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219639174
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/BaseRawBatchBuffer.java
 ##
 @@ -167,7 +169,25 @@ public RawFragmentBatch getNext() throws IOException {
 
     // if we didn't get a batch, block on waiting for queue.
    if (b == null && (!isTerminated() || !bufferQueue.isEmpty())) {
-      b = bufferQueue.take();
+      // We shouldn't block infinitely here. There can be a condition such that, due to a failure, the
+      // FragmentExecutor state is changed to FAILED and the queue is empty. Because of this the minor
+      // fragment main thread will block here waiting for the next batch to arrive. Meanwhile, when the
+      // next batch arrives, the enqueuing code sees the FragmentExecutor failure state, doesn't enqueue
+      // the batch and cleans up the buffer queue. Hence this thread would be stuck forever. So we poll
+      // for 5 seconds at a time until we get a batch or the FragmentExecutor state is in an error condition.
+      while (b == null) {
+        b = bufferQueue.poll(5, TimeUnit.SECONDS);
+        if (!context.getExecutorState().shouldContinue()) {
+          kill(context);
+          if (b != null) {
+            assertAckSent(b);
 
 Review comment:
   This path is not executed during cleanup. Also, the contract with poll is that if 
a batch is dequeued from the buffer queue, then an ack is sent for that batch. So this 
is to make sure that the state is correct, as is done in the non-failure path too.




[GitHub] sohami commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sohami commented on a change in pull request #1470: DRILL-6746: Query can hang 
when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219652726
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/SpoolingRawBatchBuffer.java
 ##
 @@ -119,6 +119,19 @@ public RawFragmentBatch take() throws IOException, InterruptedException {
       return buffer.take().get();
     }
 
+    @Override
+    public RawFragmentBatch poll(long timeout, TimeUnit timeUnit) throws InterruptedException, IOException {
+      RawFragmentBatchWrapper batchWrapper = buffer.poll(timeout, timeUnit);
+      if (batchWrapper != null) {
+        try {
+          return batchWrapper.get();
+        } catch (InterruptedException e) {
 
 Review comment:
   Thanks for catching this. Specifically, for the spooling buffer implementation 
even the `poll()` method was consuming the InterruptedException. I have made both 
polls throw `InterruptedException`.




[GitHub] sohami commented on issue #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sohami commented on issue #1470: DRILL-6746: Query can hang when 
PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#issuecomment-423701021
 
 
   @sachouche - Thanks for the review. I have addressed the review comments; please 
take another look.




[GitHub] sachouche commented on a change in pull request #1470: DRILL-6746: Query can hang when PartitionSender task thread sees a co…

2018-09-21 Thread GitBox
sachouche commented on a change in pull request #1470: DRILL-6746: Query can 
hang when PartitionSender task thread sees a co…
URL: https://github.com/apache/drill/pull/1470#discussion_r219654096
 
 

 ##
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/work/batch/BaseRawBatchBuffer.java
 ##
 @@ -129,17 +129,24 @@ public synchronized void kill(final FragmentContext context) {
    * responses pending
    */
   private void clearBufferWithBody() {
+    RawFragmentBatch batch;
     while (!bufferQueue.isEmpty()) {
-      final RawFragmentBatch batch;
+      batch = null;
       try {
         batch = bufferQueue.poll();
         assertAckSent(batch);
       } catch (IOException e) {
         context.getExecutorState().fail(e);
         continue;
-      }
-      if (batch.getBody() != null) {
-        batch.getBody().release();
+      } catch (InterruptedException e) {
 
 Review comment:
   Good catch!




[jira] [Created] (DRILL-6758) Hash Join should not return the join columns when they are not needed downstream

2018-09-21 Thread Boaz Ben-Zvi (JIRA)
Boaz Ben-Zvi created DRILL-6758:
---

 Summary: Hash Join should not return the join columns when they 
are not needed downstream
 Key: DRILL-6758
 URL: https://issues.apache.org/jira/browse/DRILL-6758
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators, Query Planning & 
Optimization
Affects Versions: 1.14.0
Reporter: Boaz Ben-Zvi
Assignee: Hanumath Rao Maduri
 Fix For: 1.15.0


Currently the Hash-Join operator returns all of its incoming columns (from both 
sides). In cases where the join columns are not used further downstream, this is a 
waste (allocating vectors, copying each value, etc.).

  Suggestion: Have the planner pass this information to the Hash-Join operator, 
to enable skipping the return of these columns.

 





[GitHub] weijietong commented on issue #1459: DRILL-6731: Move the BFs aggregating work from the Foreman to the RuntimeFi…

2018-09-21 Thread GitBox
weijietong commented on issue #1459: DRILL-6731: Move the BFs aggregating work 
from the Foreman to the RuntimeFi…
URL: https://github.com/apache/drill/pull/1459#issuecomment-423720474
 
 
   @sohami sorry for the late update. Besides fixing the thread-safety problem, this 
commit also changes the `RuntimeFilterVisitor` to support a query without an 
exchange, i.e. the single-fragment case. Please review, thanks.

