[jira] [Commented] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727945#comment-15727945
 ] 

ASF GitHub Bot commented on DRILL-5112:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/681#discussion_r91236287
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/pop/PopUnitTestBase.java ---
@@ -42,7 +46,17 @@
 
   @BeforeClass
   public static void setup() {
-CONFIG = DrillConfig.create();
+Properties props = new Properties();
--- End diff --

Per the TypeSafe config system, command line properties override any 
properties here or in drill-module.conf.
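
The precedence rule can be sketched with a plain-JDK analogy (illustrative only: the property name is borrowed from Drill, and a simple system-property lookup stands in for the TypeSafe config library's resolution):

{code}
import java.util.Properties;

public class ConfigPrecedence {
    public static void main(String[] args) {
        // A -D flag on the command line arrives as a JVM system property
        System.setProperty("drill.exec.http.enabled", "true");

        // Properties set programmatically in test code act only as defaults
        Properties testDefaults = new Properties();
        testDefaults.setProperty("drill.exec.http.enabled", "false");

        // Mirrors TypeSafe config resolution: the system property wins over
        // values supplied in code or in drill-module.conf
        String effective = System.getProperty("drill.exec.http.enabled",
                testDefaults.getProperty("drill.exec.http.enabled"));

        System.out.println(effective);
    }
}
{code}

This prints "true": the simulated command-line setting overrides the in-code default, which is why setting defaults in the test base class remains safe.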


> Unit tests derived from PopUnitTestBase fail in IDE due to config errors
> 
>
> Key: DRILL-5112
> URL: https://issues.apache.org/jira/browse/DRILL-5112
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Drill provides a wide variety of unit tests. Many derive from 
> {{PopUnitTestBase}} to test the Physical Operators.
> The tests use a default configuration:
> {code}
> protected static DrillConfig CONFIG;
>   @BeforeClass
>   public static void setup() {
> CONFIG = DrillConfig.create();
>   }
> {code}
> The tests rely on config settings specified in the {{pom.xml}} file (see note 
> below.) When run in Eclipse, no such config exists, so the tests use only the 
> default config. The defaults allow a web server to be started.
> Many tests start multiple Drillbits using the above config. When this occurs, 
> each tries to start a web server. The second one fails because the HTTP port 
> is already in use.
> The solution is to initialize the config using the same settings as used in 
> the {{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.
> As an aside, having multiple ways to set up the Drill config (and other 
> items) leads to much wasted time as each engineer must learn the quirks of 
> each test hierarchy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5114) Rationalize use of Logback logging in unit tests

2016-12-06 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727865#comment-15727865
 ] 

Paul Rogers commented on DRILL-5114:


A note to Eclipse users. When running tests, Maven puts only the 
src/test/resources folder of the sub-project under test on the class path. Hence 
we need a Logback config file in each sub-project.

But, Eclipse puts the /src/test/resources for *all* projects on the class path. 
This will lead Logback to complain about finding multiple config files. To 
solve this, add the following to the default JVM args for all projects:

* Eclipse --> Preferences
* Java --> Installed JREs
* Choose your default JRE/JDK
* Click Edit...
* Add the following as the default JVM arguments:

{code}
-ea -Dlogback.statusListenerClass=ch.qos.logback.core.status.NopStatusListener
{code}

The first turns on assertions, the second tells Logback not to display any of 
its own internal logging messages. 

> Rationalize use of Logback logging in unit tests
> 
>
> Key: DRILL-5114
> URL: https://issues.apache.org/jira/browse/DRILL-5114
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Priority: Minor
>
> Drill uses Logback as its logger. The logger is used in several tests to display 
> some test output. Test output is sent to stdout, rather than a log file. 
> Since Drill also uses Logback, that same configuration sends much Drill 
> logging output to stdout as well, cluttering test output.
> Logback requires that one Logback config file (either logback.xml or 
> logback-test.xml) exist on the class path. Tests store the config file in the 
> src/test/resources folder of each sub-project.
> These files set the default logging level to debug. While this setting is 
> fine when working with individual tests, the output is overwhelming for bulk 
> test runs.
> The first requested change is to set the default logging level to error.
> The existing config files are usually called "logback.xml." Change the name 
> of test files to "logback-test.xml" to make clear that they are, in fact, 
> test configs.
> The {{exec/java-exec/src/test/resources/logback.xml}} config file is a full 
> version of Drill's production config file. Replace this with a config 
> suitable for testing (that is, the same as other modules.)
> The java-exec project includes a production-like config file in its non-test 
> sources: {{exec/java-exec/src/main/resources/logback.xml}}. Remove this as it 
> is not needed. (Instead, rely on the one shipped in the distribution 
> subsystem, which is the one copied to the Drill distribution.)
> Since Logback complains bitterly (via many log messages) when it cannot find 
> a configuration file (and each sub-module must have its own test 
> configuration), add missing logging configuration files:
> * exec/memory/base/src/test/resources/logback-test.xml
> * logical/src/test/resources/logback-test.xml





[jira] [Created] (DRILL-5114) Rationalize use of Logback logging in unit tests

2016-12-06 Thread Paul Rogers (JIRA)
Paul Rogers created DRILL-5114:
--

 Summary: Rationalize use of Logback logging in unit tests
 Key: DRILL-5114
 URL: https://issues.apache.org/jira/browse/DRILL-5114
 Project: Apache Drill
  Issue Type: Improvement
Affects Versions: 1.8.0
Reporter: Paul Rogers
Priority: Minor


Drill uses Logback as its logger. The logger is used in several tests to display some 
test output. Test output is sent to stdout, rather than a log file. Since Drill 
also uses Logback, that same configuration sends much Drill logging output to 
stdout as well, cluttering test output.

Logback requires that one Logback config file (either logback.xml or 
logback-test.xml) exist on the class path. Tests store the config file in the 
src/test/resources folder of each sub-project.

These files set the default logging level to debug. While this setting is fine 
when working with individual tests, the output is overwhelming for bulk test 
runs.

The first requested change is to set the default logging level to error.

The existing config files are usually called "logback.xml." Change the name of 
test files to "logback-test.xml" to make clear that they are, in fact, test 
configs.

The {{exec/java-exec/src/test/resources/logback.xml}} config file is a full 
version of Drill's production config file. Replace this with a config suitable 
for testing (that is, the same as other modules.)

The java-exec project includes a production-like config file in its non-test 
sources: {{exec/java-exec/src/main/resources/logback.xml}}. Remove this as it 
is not needed. (Instead, rely on the one shipped in the distribution subsystem, 
which is the one copied to the Drill distribution.)

Since Logback complains bitterly (via many log messages) when it cannot find a 
configuration file (and each sub-module must have its own test configuration), 
add missing logging configuration files:

* exec/memory/base/src/test/resources/logback-test.xml
* logical/src/test/resources/logback-test.xml
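
For illustration, a minimal logback-test.xml of the kind proposed (error-level root logger to the console) could look like the following; the exact contents are an assumption, not the committed file:

{code}
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="error">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
{code}

An individual test run can still be made verbose by lowering the root level, or by adding a per-package logger element, without touching other modules.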





[jira] [Commented] (DRILL-5113) Upgrade Maven RAT plugin to avoid annoying XML errors

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727818#comment-15727818
 ] 

ASF GitHub Bot commented on DRILL-5113:
---

Github user chunhui-shi commented on the issue:

https://github.com/apache/drill/pull/682
  
+1


> Upgrade Maven RAT plugin to avoid annoying XML errors
> -
>
> Key: DRILL-5113
> URL: https://issues.apache.org/jira/browse/DRILL-5113
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> Build Drill with most Maven logging turned off. On every sub-project you will 
> see the following:
> {code}
> Compiler warnings:
>   WARNING:  'org.apache.xerces.jaxp.SAXParserImpl: Property 
> 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
> [INFO] Starting audit...
> Audit done.
> {code}
> The warning is a known issue with Java: 
> http://bugs.java.com/view_bug.do?bug_id=8016153
> The RAT folks seem to have done a patch: version 0.12 of the plugin no longer 
> has the warning. Upgrade Drill's {{pom.xml}} file to use this version instead 
> of the anonymous version currently used.





[jira] [Commented] (DRILL-5113) Upgrade Maven RAT plugin to avoid annoying XML errors

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727785#comment-15727785
 ] 

ASF GitHub Bot commented on DRILL-5113:
---

GitHub user paul-rogers opened a pull request:

https://github.com/apache/drill/pull/682

DRILL-5113: Upgrade Maven RAT plugin to avoid annoying XML errors

Upgrade to eliminate XML compiler warnings that appear for each sub-project 
in the Drill build.

Also fixes a Maven warning about a duplicated version error for the 
dependency plugin (version is inherited from root pom.xml). And removes some 
trailing spaces.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-rogers/drill DRILL-5113

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/682.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #682


commit 6824a52ab5e52e839a52db42dec2605c2b66b720
Author: Paul Rogers 
Date:   2016-12-07T05:26:58Z

DRILL-5113: Upgrade Maven RAT plugin to avoid annoying XML errors

Upgrade to eliminate XML compiler warnings that appear for each
sub-project in the Drill build.

Also fixes a Maven warning about a duplicated version error for the
dependency plugin (version is inherited from root pom.xml).




> Upgrade Maven RAT plugin to avoid annoying XML errors
> -
>
> Key: DRILL-5113
> URL: https://issues.apache.org/jira/browse/DRILL-5113
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> Build Drill with most Maven logging turned off. On every sub-project you will 
> see the following:
> {code}
> Compiler warnings:
>   WARNING:  'org.apache.xerces.jaxp.SAXParserImpl: Property 
> 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
> [INFO] Starting audit...
> Audit done.
> {code}
> The warning is a known issue with Java: 
> http://bugs.java.com/view_bug.do?bug_id=8016153
> The RAT folks seem to have done a patch: version 0.12 of the plugin no longer 
> has the warning. Upgrade Drill's {{pom.xml}} file to use this version instead 
> of the anonymous version currently used.





[jira] [Commented] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727760#comment-15727760
 ] 

ASF GitHub Bot commented on DRILL-5112:
---

GitHub user paul-rogers opened a pull request:

https://github.com/apache/drill/pull/681

DRILL-5112: Unit tests derived from PopUnitTestBase fail in IDE due to 
config errors

Tests rely on command-line settings in the pom.xml file. Those settings
are not available when tests are run in Eclipse. Replicated required
settings into the base test class (as in BaseTestQuery).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-rogers/drill DRILL-5112

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/681.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #681


commit ddccb6a6ad16e49175bb90468a7845d9fae8eaac
Author: Paul Rogers 
Date:   2016-12-07T05:11:56Z

DRILL-5112: Unit tests derived from PopUnitTestBase fail in IDE due to 
config errors

Tests rely on command-line settings in the pom.xml file. Those settings
are not available when tests are run in Eclipse. Replicated required
settings into the base test class (as in BaseTestQuery).




> Unit tests derived from PopUnitTestBase fail in IDE due to config errors
> 
>
> Key: DRILL-5112
> URL: https://issues.apache.org/jira/browse/DRILL-5112
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Drill provides a wide variety of unit tests. Many derive from 
> {{PopUnitTestBase}} to test the Physical Operators.
> The tests use a default configuration:
> {code}
> protected static DrillConfig CONFIG;
>   @BeforeClass
>   public static void setup() {
> CONFIG = DrillConfig.create();
>   }
> {code}
> The tests rely on config settings specified in the {{pom.xml}} file (see note 
> below.) When run in Eclipse, no such config exists, so the tests use only the 
> default config. The defaults allow a web server to be started.
> Many tests start multiple Drillbits using the above config. When this occurs, 
> each tries to start a web server. The second one fails because the HTTP port 
> is already in use.
> The solution is to initialize the config using the same settings as used in 
> the {{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.
> As an aside, having multiple ways to set up the Drill config (and other 
> items) leads to much wasted time as each engineer must learn the quirks of 
> each test hierarchy.





[jira] [Updated] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-06 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5112:
---
Description: 
Drill provides a wide variety of unit tests. Many derive from 
{{PopUnitTestBase}} to test the Physical Operators.

The tests use a default configuration:

{code}
protected static DrillConfig CONFIG;

  @BeforeClass
  public static void setup() {
CONFIG = DrillConfig.create();
  }
{code}

The tests rely on config settings specified in the {{pom.xml}} file (see note 
below.) When run in Eclipse, no such config exists, so the tests use only the 
default config. The defaults allow a web server to be started.

Many tests start multiple Drillbits using the above config. When this occurs, 
each tries to start a web server. The second one fails because the HTTP port is 
already in use.

The solution is to initialize the config using the same settings as used in the 
{{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.

As an aside, having multiple ways to set up the Drill config (and other items) 
leads to much wasted time as each engineer must learn the quirks of each test 
hierarchy.

  was:
Drill provides a wide variety of unit tests. Many derive from 
{{PopUnitTestBase}} to test the Physical Operators.

The tests use a default configuration:

{code}
protected static DrillConfig CONFIG;

  @BeforeClass
  public static void setup() {
CONFIG = DrillConfig.create();
  }
{code}

The default config tries to locate a {{drill-override.conf}} file somewhere on 
the class path.

When run in Eclipse, no such file exists. Instead, no override file is found 
and defaults are used. The defaults allow a web server to be started.

Many tests start multiple Drillbits using the above config. When this occurs, 
each tries to start a web server. The second one fails because the HTTP port is 
already in use.

It is not clear how these tests succeed when run from Maven. Perhaps in that 
scenario the required file is somehow placed onto the class path? No such file 
exists in the source path.

The solution is to initialize the config using the same settings as used in the 
{{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.

As an aside, having multiple ways to set up the Drill config (and other items) 
leads to much wasted time as each engineer must learn the quirks of each test 
hierarchy.


> Unit tests derived from PopUnitTestBase fail in IDE due to config errors
> 
>
> Key: DRILL-5112
> URL: https://issues.apache.org/jira/browse/DRILL-5112
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Drill provides a wide variety of unit tests. Many derive from 
> {{PopUnitTestBase}} to test the Physical Operators.
> The tests use a default configuration:
> {code}
> protected static DrillConfig CONFIG;
>   @BeforeClass
>   public static void setup() {
> CONFIG = DrillConfig.create();
>   }
> {code}
> The tests rely on config settings specified in the {{pom.xml}} file (see note 
> below.) When run in Eclipse, no such config exists, so the tests use only the 
> default config. The defaults allow a web server to be started.
> Many tests start multiple Drillbits using the above config. When this occurs, 
> each tries to start a web server. The second one fails because the HTTP port 
> is already in use.
> The solution is to initialize the config using the same settings as used in 
> the {{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.
> As an aside, having multiple ways to set up the Drill config (and other 
> items) leads to much wasted time as each engineer must learn the quirks of 
> each test hierarchy.





[jira] [Commented] (DRILL-4956) Temporary tables support

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727713#comment-15727713
 ] 

ASF GitHub Bot commented on DRILL-4956:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/666#discussion_r91227026
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserSession.java ---
@@ -207,25 +239,165 @@ public SchemaPlus getDefaultSchema(SchemaPlus 
rootSchema) {
   return null;
 }
 
-final SchemaPlus defaultSchema = SchemaUtilites.findSchema(rootSchema, 
defaultSchemaPath);
-
-if (defaultSchema == null) {
-  // If the current schema resolves to null, return root schema as the 
current default schema.
-  return defaultSchema;
-}
-
-return defaultSchema;
+return SchemaUtilites.findSchema(rootSchema, defaultSchemaPath);
   }
 
   public boolean setSessionOption(String name, String value) {
 return true;
   }
 
+  /**
+   * @return unique session identifier
+   */
+  public String getUuid() { return uuid; }
+
+  /**
+   * Adds temporary table to temporary tables cache.
+   *
+   * @param schema table schema
+   * @param tableName original table name
+   * @return generated temporary table name
+   */
+  public String registerTemporaryTable(AbstractSchema schema, String 
tableName) {
+return temporaryTablesCache.add(schema, tableName);
+  }
+
+  /**
+   * Looks for temporary table in temporary tables cache by its name in 
specified schema.
+   *
+   * @param fullSchemaName table full schema name (example, dfs.tmp)
+   * @param tableName original table name
+   * @return temporary table name if found, null otherwise
+   */
+  public String findTemporaryTable(String fullSchemaName, String 
tableName) {
+return temporaryTablesCache.find(fullSchemaName, tableName);
+  }
+
+  /**
+   * Before removing a temporary table from the temporary tables cache,
+   * checks if the table exists physically on disk and, if so, removes it.
+   *
+   * @param fullSchemaName full table schema name (example, dfs.tmp)
+   * @param tableName original table name
+   * @return true if table was physically removed, false otherwise
+   */
+  public boolean removeTemporaryTable(String fullSchemaName, String 
tableName) {
+final AtomicBoolean result = new AtomicBoolean();
+temporaryTablesCache.remove(fullSchemaName, tableName, new 
BiConsumer<AbstractSchema, String>() {
+  @Override
+  public void accept(AbstractSchema schema, String temporaryTableName) 
{
+if (schema.getTable(temporaryTableName) != null) {
+  schema.dropTable(temporaryTableName);
+  result.set(true);
+}
+  }
+});
+return result.get();
+  }
+
   private String getProp(String key) {
 return properties.get(key) != null ? properties.get(key) : "";
   }
 
   private void setProp(String key, String value) {
 properties.put(key, value);
   }
+
+  /**
+   * Temporary tables cache stores data keyed by full schema name (schema and 
workspace separated by a dot,
+   * example: dfs.tmp), with a map of generated temporary table 
names
+   * to their schemas, represented by {@link AbstractSchema}, as the value.
+   * Schemas represented by {@link AbstractSchema} are used to drop 
temporary tables.
+   * Generated temporary table names consist of the original table name and a 
unique session id.
+   * The cache is a {@link ConcurrentMap}, so it is thread-safe 
and can be used
+   * in a multi-threaded environment.
+   *
+   * The temporary tables cache is used to find a temporary table by its name 
and schema,
+   * to drop all existing temporary tables on session close,
+   * or to remove a temporary table from the cache on user demand.
+   */
+  public static class TemporaryTablesCache {
+
+private final String uuid;
+private final ConcurrentMap<String, Map<String, AbstractSchema>> temporaryTables;
+
+public TemporaryTablesCache(String uuid) {
+  this.uuid = uuid;
+  this.temporaryTables = Maps.newConcurrentMap();
+}
+
+/**
+ * Generates temporary table name using its original table name and 
unique session identifier.
+ * Caches generated table name and its schema in temporary table cache.
+ *
+ * @param schema table schema
+ * @param tableName original table name
+ * @return generated temporary table name
+ */
+public String add(AbstractSchema schema, String tableName) {
+  final String temporaryTableName = 
SqlHandlerUtil.generateTemporaryTableName(tableName, uuid);
+

[jira] [Commented] (DRILL-4956) Temporary tables support

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727712#comment-15727712
 ] 

ASF GitHub Bot commented on DRILL-4956:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/666#discussion_r91226400
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java
 ---
@@ -374,6 +381,13 @@ public void endRecord() throws IOException {
 
   @Override
   public void abort() throws IOException {
+cleanup();
--- End diff --

Does the new code call abort()? Searched master source and found no 
references. On the other hand, cleanup() is called from WriterRecordBatch. 
Should the file system cleanup stuff move there?


> Temporary tables support
> 
>
> Key: DRILL-4956
> URL: https://issues.apache.org/jira/browse/DRILL-4956
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>  Labels: doc-impacting
> Fix For: Future
>
>
> Link to design doc - 
> https://docs.google.com/document/d/1gSRo_w6q2WR5fPx7SsQ5IaVmJXJ6xCOJfYGyqpVOC-g/edit





[jira] [Commented] (DRILL-4956) Temporary tables support

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727715#comment-15727715
 ] 

ASF GitHub Bot commented on DRILL-4956:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/666#discussion_r91226488
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java
 ---
@@ -382,4 +396,20 @@ public void cleanup() throws IOException {
 
 codecFactory.release();
   }
+
+  /**
+   * Prepares location where files will be written to.
+   * Creates directory if not present, applies storage strategy.
+   *
+   * @return path to files location
+   * @throws IOException during directory creation or permission setting 
problems
+   */
+  private Path prepareLocationPath() throws IOException {
+if (locationPath == null) {
+  locationPath = new Path(location);
+  fs.mkdirs(locationPath);
+  storageStrategy.apply(fs, locationPath);
--- End diff --

This will apply the permissions only to the deepest of the directories 
created by mkdirs?
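
For context, here is a JDK-only illustration of the behavior the question points at, with java.nio standing in for the Hadoop FileSystem API (the paths and permissions are made up for the demo):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class DeepestDirOnly {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("perm-demo");
        Path deep = base.resolve("a").resolve("b").resolve("c");

        // Like fs.mkdirs(): creates a, a/b and a/b/c in one call; the
        // intermediate directories get default (umask-based) permissions
        Files.createDirectories(deep);

        // Applying permissions afterwards touches only the leaf directory
        Files.setPosixFilePermissions(deep,
                PosixFilePermissions.fromString("rwx------"));

        System.out.println(PosixFilePermissions.toString(
                Files.getPosixFilePermissions(deep)));
    }
}
{code}

Only the leaf reports "rwx------"; a/ and a/b keep their defaults, which is the concern about applying a storage strategy after mkdirs.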


> Temporary tables support
> 
>
> Key: DRILL-4956
> URL: https://issues.apache.org/jira/browse/DRILL-4956
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>  Labels: doc-impacting
> Fix For: Future
>
>
> Link to design doc - 
> https://docs.google.com/document/d/1gSRo_w6q2WR5fPx7SsQ5IaVmJXJ6xCOJfYGyqpVOC-g/edit





[jira] [Commented] (DRILL-4956) Temporary tables support

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727714#comment-15727714
 ] 

ASF GitHub Bot commented on DRILL-4956:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/666#discussion_r91227503
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/rpc/user/TemporaryTablesAutomaticDropTest.java
 ---
@@ -0,0 +1,74 @@
+/**
--- End diff --

Nice set of unit tests! Seems to cover the cases I could think of.


> Temporary tables support
> 
>
> Key: DRILL-4956
> URL: https://issues.apache.org/jira/browse/DRILL-4956
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>  Labels: doc-impacting
> Fix For: Future
>
>
> Link to design doc - 
> https://docs.google.com/document/d/1gSRo_w6q2WR5fPx7SsQ5IaVmJXJ6xCOJfYGyqpVOC-g/edit





[jira] [Created] (DRILL-5113) Upgrade Maven RAT plugin to avoid annoying XML errors

2016-12-06 Thread Paul Rogers (JIRA)
Paul Rogers created DRILL-5113:
--

 Summary: Upgrade Maven RAT plugin to avoid annoying XML errors
 Key: DRILL-5113
 URL: https://issues.apache.org/jira/browse/DRILL-5113
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.8.0
Reporter: Paul Rogers
Assignee: Paul Rogers
Priority: Minor


Build Drill with most Maven logging turned off. On every sub-project you will 
see the following:

{code}
Compiler warnings:
  WARNING:  'org.apache.xerces.jaxp.SAXParserImpl: Property 
'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
[INFO] Starting audit...
Audit done.
{code}

The warning is a known issue with Java: 
http://bugs.java.com/view_bug.do?bug_id=8016153

The RAT folks seem to have done a patch: version 0.12 of the plugin no longer 
has the warning. Upgrade Drill's {{pom.xml}} file to use this version instead 
of the anonymous version currently used.
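
The fix itself is a one-line version bump; the coordinates below use the plugin's standard Apache groupId and artifactId (shown for illustration):

{code}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <version>0.12</version>
</plugin>
{code}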





[jira] [Commented] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-06 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727611#comment-15727611
 ] 

Paul Rogers commented on DRILL-5112:


When run in an IDE, tests do not run with the Maven Surefire settings spelled 
out in {{drill-root/pom.xml}}:

{code}
<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.17</version>
  <configuration>
    <argLine>-Xms512m -Xmx3g -Ddrill.exec.http.enabled=false
      -Ddrill.exec.sys.store.provider.local.write=false
      -Dorg.apache.drill.exec.server.Drillbit.system_options="org.apache.drill.exec.compile.ClassTransformer.scalar_replacement=on"
      -Ddrill.test.query.printing.silent=true
      -Ddrill.catastrophic_to_standard_out=true</argLine>
  </configuration>
</plugin>
{code}

As a result, many tests will fail, or do strange things, when run in an IDE. 
The workaround is to copy all the config settings to each launch profile in 
Eclipse.

The proposed solution is, for the tests described in this bug, to set the 
properties in the test code (base class), as is already done in tests deriving 
from {{BaseTestQuery}}.

Note that even so, the properties can easily be overridden via Maven in the 
future since, under the TypeSafe config system, properties on the command line 
override those set internally.
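
The pattern amounts to this sketch (plain JDK for illustration; in Drill the Properties object would be handed to {{DrillConfig.create(props)}}, and the property names come from the pom.xml block above):

{code}
import java.util.Properties;

public class PomSettingsInCode {
    static Properties CONFIG;

    // Replicate the Surefire -D settings in the base class so that a
    // test sees the same configuration under Maven and under an IDE
    static void setup() {
        Properties props = new Properties();
        props.setProperty("drill.exec.http.enabled", "false");
        props.setProperty("drill.exec.sys.store.provider.local.write", "false");
        CONFIG = props;
    }

    public static void main(String[] args) {
        setup();
        System.out.println(CONFIG.getProperty("drill.exec.http.enabled"));
    }
}
{code}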

> Unit tests derived from PopUnitTestBase fail in IDE due to config errors
> 
>
> Key: DRILL-5112
> URL: https://issues.apache.org/jira/browse/DRILL-5112
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Drill provides a wide variety of unit tests. Many derive from 
> {{PopUnitTestBase}} to test the Physical Operators.
> The tests use a default configuration:
> {code}
> protected static DrillConfig CONFIG;
>   @BeforeClass
>   public static void setup() {
> CONFIG = DrillConfig.create();
>   }
> {code}
> The default config tries to locate a {{drill-override.conf}} file somewhere 
> on the class path.
> When run in Eclipse, no such file exists. Instead, no override file is found 
> and defaults are used. The defaults allow a web server to be started.
> Many tests start multiple Drillbits using the above config. When this occurs, 
> each tries to start a web server. The second one fails because the HTTP port 
> is already in use.
> It is not clear how these tests succeed when run from Maven. Perhaps in that 
> scenario the required file is somehow placed onto the class path? No such 
> file exists in the source path.
> The solution is to initialize the config using the same settings as used in 
> the {{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.
> As an aside, having multiple ways to set up the Drill config (and other 
> items) leads to much wasted time as each engineer must learn the quirks of 
> each test hierarchy.





[jira] [Updated] (DRILL-5108) Reduce output from Maven git-commit-id-plugin

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5108:

Reviewer: Sorabh Hamirwasia

Assigned Reviewer to [~shamirwasia]

> Reduce output from Maven git-commit-id-plugin
> -
>
> Key: DRILL-5108
> URL: https://issues.apache.org/jira/browse/DRILL-5108
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> The git-commit-id-plugin grabs information from Git to display during a 
> build. It prints many e-mail addresses and other generic project information. 
> As part of the effort to trim down unit test output, we propose to turn off 
> the verbose output from this plugin.
> Specific change:
> {code}
>   <plugin>
>     <groupId>pl.project13.maven</groupId>
>     <artifactId>git-commit-id-plugin</artifactId>
>     ...
>     <configuration>
>       <verbose>false</verbose>
> {code}
> That is, change the verbose setting from true to false.
> In the unlikely event that some build process depends on the verbose output, 
> we can make the setting a configurable parameter, defaulting to false.
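
Should some build actually need the verbose output, the fallback mentioned 
above could be wired up as a Maven property. The property name below is 
hypothetical, shown only to illustrate the idea:

{code}
<properties>
  <!-- Hypothetical switch; override with -Dgit.verbose=true -->
  <git.verbose>false</git.verbose>
</properties>
...
<plugin>
  <groupId>pl.project13.maven</groupId>
  <artifactId>git-commit-id-plugin</artifactId>
  <configuration>
    <verbose>${git.verbose}</verbose>
  </configuration>
</plugin>
{code}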





[jira] [Updated] (DRILL-5108) Reduce output from Maven git-commit-id-plugin

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5108:

Assignee: Paul Rogers

> Reduce output from Maven git-commit-id-plugin
> -
>
> Key: DRILL-5108
> URL: https://issues.apache.org/jira/browse/DRILL-5108
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>





[jira] [Commented] (DRILL-5108) Reduce output from Maven git-commit-id-plugin

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727572#comment-15727572
 ] 

ASF GitHub Bot commented on DRILL-5108:
---

GitHub user paul-rogers opened a pull request:

https://github.com/apache/drill/pull/680

DRILL-5108: Reduce output from Maven git-commit-id-plugin

The git-commit-id-plugin grabs information from Git to display during a
build. It prints many e-mail addresses and other generic project
information. As part of the effort to trim down unit test output, we
propose to turn off the verbose output from this plugin by default.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-rogers/drill DRILL-5108

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/680.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #680


commit f2c7d155f018fec597c6be797a31eb77a03c62ce
Author: Paul Rogers 
Date:   2016-12-07T03:38:38Z

DRILL-5108: Reduce output from Maven git-commit-id-plugin

The git-commit-id-plugin grabs information from Git to display during a
build. It prints many e-mail addresses and other generic project
information. As part of the effort to trim down unit test output, we
propose to turn off the verbose output from this plugin by default.




> Reduce output from Maven git-commit-id-plugin
> -
>
> Key: DRILL-5108
> URL: https://issues.apache.org/jira/browse/DRILL-5108
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Priority: Minor
>





[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727322#comment-15727322
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/665#discussion_r91210473
  
--- Diff: 
common/src/main/java/org/apache/drill/common/exceptions/UserException.java ---
@@ -549,7 +550,12 @@ public UserException build(final Logger logger) {
   if (isSystemError) {
 logger.error(newException.getMessage(), newException);
   } else {
-logger.info("User Error Occurred", newException);
+String msg = "User Error Occurred";
+if (message != null) {
+  msg += ": " + message; }
+if (cause != null) {
--- End diff --

The whole stack trace appears in the log. The item above is for those 
"helpful" users who just grep for the query ID and read only the lines that 
carry the log header (including the query ID).
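
For illustration, the single-line format under discussion can be sketched as a 
tiny helper. The class and method names are hypothetical, not Drill's actual 
code; the parenthesized cause suffix follows the expected message shown in the 
issue description:

```java
// Hypothetical helper mirroring the message construction in the diff
// above; builds one grep-friendly line per user error.
class UserErrorMessage {
  static String build(String message, Throwable cause) {
    String msg = "User Error Occurred";
    if (message != null) {
      msg += ": " + message;
    }
    if (cause != null && cause.getMessage() != null) {
      msg += " (" + cause.getMessage() + ")";
    }
    return msg;
  }
}
```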


> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.





[jira] [Created] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-06 Thread Paul Rogers (JIRA)
Paul Rogers created DRILL-5112:
--

 Summary: Unit tests derived from PopUnitTestBase fail in IDE due 
to config errors
 Key: DRILL-5112
 URL: https://issues.apache.org/jira/browse/DRILL-5112
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.8.0
Reporter: Paul Rogers
Assignee: Paul Rogers


Drill provides a wide variety of unit tests. Many derive from 
{{PopUnitTestBase}} to test the Physical OPerators (hence the "Pop" name).

The tests use a default configuration:

{code}
protected static DrillConfig CONFIG;

  @BeforeClass
  public static void setup() {
CONFIG = DrillConfig.create();
  }
{code}

The default config tries to locate a {{drill-override.conf}} file somewhere on 
the class path.

When run in Eclipse, no such file exists, so no override is applied and the 
defaults are used. The defaults allow a web server to be started.

Many tests start multiple Drillbits using the above config. When this occurs, 
each tries to start a web server. The second one fails because the HTTP port is 
already in use.

It is not clear how these tests succeed when run from Maven. Perhaps in that 
scenario the required file is somehow placed onto the class path? No such file 
exists in the source path.

The solution is to initialize the config using the same settings as used in the 
{{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.

As an aside, having multiple ways to set up the Drill config (and other items) 
leads to much wasted time as each engineer must learn the quirks of each test 
hierarchy.





[jira] [Updated] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5098:

Labels: doc-impacting ready-to-commit  (was: doc-impacting)

> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits in the connection string may die, and the client fails to connect 
> to the Foreman node if the random selection happens to pick a dead Drillbit.
> Even if ZooKeeper is used to select a random Drillbit from the registered 
> ones, there is a small window in which the client selects a Drillbit that 
> then goes down; the client fails to connect to that Drillbit and errors out.
> Instead, if we try multiple Drillbits (with a configurable tries count in 
> the connection string), the probability of hitting this error window shrinks 
> in both cases, improving fault tolerance. During further investigation it 
> was also found that an authentication failure is thrown as a generic 
> RpcException. We need to improve that as well, capturing this case 
> explicitly, since on an auth failure we do not want to try other Drillbits.
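
The retry policy described above can be sketched generically. This is a 
hypothetical helper, not Drill's client code: try up to min(tries, number of 
Drillbits) unique endpoints, treating authentication failure as fatal.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;

class ConnectRetry {
  // Marker for a non-retriable authentication failure (hypothetical).
  static class AuthFailure extends RuntimeException {
    AuthFailure(String m) { super(m); }
  }

  // Try up to min(tries, endpoints.size()) unique, randomly ordered
  // endpoints. Auth failures abort immediately; other failures move on
  // to the next endpoint.
  static <T> T connect(List<String> endpoints, int tries,
                       Function<String, T> dial) {
    List<String> order = new ArrayList<>(endpoints);
    Collections.shuffle(order);
    int attempts = Math.min(tries, order.size());
    RuntimeException last = new RuntimeException("no endpoints");
    for (int i = 0; i < attempts; i++) {
      try {
        return dial.apply(order.get(i));
      } catch (AuthFailure e) {
        throw e;          // bad credentials: retrying elsewhere won't help
      } catch (RuntimeException e) {
        last = e;         // likely a dead Drillbit: try the next one
      }
    }
    throw last;
  }
}
```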





[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-06 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727103#comment-15727103
 ] 

Zelaine Fong commented on DRILL-5098:
-

[~Paul.Rogers] - FYI, [~shamirwasia] has assigned this to you for review.

> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting
> Fix For: 1.10
>
>





[jira] [Updated] (DRILL-4301) OOM : Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.

2016-12-06 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-4301:
---
Issue Type: Sub-task  (was: Bug)
Parent: DRILL-5080

> OOM : Unable to allocate sv2 for 1000 records, and not enough batchGroups to 
> spill.
> ---
>
> Key: DRILL-4301
> URL: https://issues.apache.org/jira/browse/DRILL-4301
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Execution - Flow
>Affects Versions: 1.5.0
> Environment: 4 node cluster
>Reporter: Khurram Faraaz
>Assignee: Paul Rogers
>
> Query below in Functional tests, fails due to OOM 
> {code}
> select * from dfs.`/drill/testdata/metadata_caching/fewtypes_boolpartition` 
> where bool_col = true;
> {code}
> Drill version : drill-1.5.0
> JAVA_VERSION=1.8.0
> {noformat}
> version          1.5.0-SNAPSHOT
> commit_id        2f0e3f27e630d5ac15cdaef808564e01708c3c55
> commit_message   DRILL-4190 Don't hold on to batches from left side of merge join.
> commit_time      20.01.2016 @ 22:30:26 UTC
> build_email      Unknown
> build_time       20.01.2016 @ 23:48:33 UTC
> framework/framework/resources/Functional/metadata_caching/data/bool_partition1.q
>  (connection: 808078113)
> [#1378] Query failed: 
> oadd.org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: 
> One or more nodes ran out of memory while executing the query.
> Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.
> batchGroups.size 0
> spilledBatchGroups.size 0
> allocated memory 48326272
> allocator limit 46684427
> Fragment 0:0
> [Error Id: 97d58ea3-8aff-48cf-a25e-32363b8e0ecd on drill-demod2:31010]
>   at 
> oadd.org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
>   at 
> oadd.org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
>   at 
> oadd.org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
>   at 
> oadd.org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
>   at oadd.org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
>   at 
> oadd.org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
>   at 
> oadd.org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
>   at 
> oadd.org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
>   at 
> oadd.org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
>   at 
> oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
>   at 
> oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
>   at 
> oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> oadd.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> oadd.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> oadd.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
>   at 
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>   at 
> oadd.io.netty.channel.

[jira] [Updated] (DRILL-4272) When sort runs out of memory and query fails, resources are seemingly not freed

2016-12-06 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-4272:
---
Issue Type: Sub-task  (was: Bug)
Parent: DRILL-5080

> When sort runs out of memory and query fails, resources are seemingly not 
> freed
> ---
>
> Key: DRILL-4272
> URL: https://issues.apache.org/jira/browse/DRILL-4272
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Execution - Relational Operators
>Affects Versions: 1.5.0
>Reporter: Victoria Markman
>Assignee: Paul Rogers
>Priority: Critical
>
> Executed query11.sql from resources/Advanced/tpcds/tpcds_sf1/original/parquet
> Query runs out of memory:
> {code}
> Error: RESOURCE ERROR: One or more nodes ran out of memory while executing 
> the query.
> Unable to allocate sv2 for 32768 records, and not enough batchGroups to spill.
> batchGroups.size 1
> spilledBatchGroups.size 0
> allocated memory 19961472
> allocator limit 2000
> Fragment 19:0
> [Error Id: 87aa32b8-17eb-488e-90cb-5f5b9aec on atsqa4-133.qa.lab:31010] 
> (state=,code=0)
> {code}
> And leaves fragments running, holding resources:
> {code}
> 2016-01-14 22:46:32,435 [Drillbit-ShutdownHook#0] INFO  
> o.apache.drill.exec.server.Drillbit - Received shutdown request.
> 2016-01-14 22:46:32,546 [Curator-ServiceCache-0] WARN  
> o.a.d.e.w.fragment.FragmentExecutor - Foreman atsqa4-136.qa.lab no longer 
> active.  Cancelling fragment 2967db08-cd38-925a-4960-9e881f537af8:19:0.
> 2016-01-14 22:46:32,547 [Curator-ServiceCache-0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 2967db08-cd38-925a-4960-9e881f537af8:19:0: State change requested 
> CANCELLATION_REQUESTED --> CANCELLATION_REQUESTED
> 2016-01-14 22:46:32,547 [Curator-ServiceCache-0] WARN  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 2967db08-cd38-925a-4960-9e881f537af8:19:0: Ignoring unexpected state 
> transition CANCELLATION_REQUESTED --> CANCELLATION_REQUESTED
> 2016-01-14 22:46:32,547 [Curator-ServiceCache-0] WARN  
> o.a.d.e.w.fragment.FragmentExecutor - Foreman atsqa4-136.qa.lab no longer 
> active.  Cancelling fragment 2967db08-cd38-925a-4960-9e881f537af8:17:0.
> 2016-01-14 22:46:32,547 [Curator-ServiceCache-0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 2967db08-cd38-925a-4960-9e881f537af8:17:0: State change requested 
> CANCELLATION_REQUESTED --> CANCELLATION_REQUESTED
> 2016-01-14 22:46:32,547 [Curator-ServiceCache-0] WARN  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 2967db08-cd38-925a-4960-9e881f537af8:17:0: Ignoring unexpected state 
> transition CANCELLATION_REQUESTED --> CANCELLATION_REQUESTED
> 2016-01-14 22:46:33,563 [BitServer-1] INFO  
> o.a.d.exec.rpc.control.ControlClient - Channel closed /10.10.88.134:59069 
> <--> atsqa4-136.qa.lab/10.10.88.136:31011.
> 2016-01-14 22:46:33,563 [BitClient-1] INFO  
> o.a.drill.exec.rpc.data.DataClient - Channel closed /10.10.88.134:34802 <--> 
> atsqa4-136.qa.lab/10.10.88.136:31012.
> 2016-01-14 22:46:33,590 [BitClient-1] INFO  
> o.a.drill.exec.rpc.data.DataClient - Channel closed /10.10.88.134:36937 <--> 
> atsqa4-135.qa.lab/10.10.88.135:31012.
> 2016-01-14 22:46:33,595 [BitClient-1] INFO  
> o.a.drill.exec.rpc.data.DataClient - Channel closed /10.10.88.134:53860 <--> 
> atsqa4-133.qa.lab/10.10.88.133:31012.
> 2016-01-14 22:46:38,467 [BitClient-1] INFO  
> o.a.drill.exec.rpc.data.DataClient - Channel closed /10.10.88.134:48276 <--> 
> atsqa4-134.qa.lab/10.10.88.134:31012.
> 2016-01-14 22:46:39,470 [pool-6-thread-1] INFO  
> o.a.drill.exec.rpc.user.UserServer - closed eventLoopGroup 
> io.netty.channel.nio.NioEventLoopGroup@6fb32dfb in 1003 ms
> 2016-01-14 22:46:39,470 [pool-6-thread-2] INFO  
> o.a.drill.exec.rpc.data.DataServer - closed eventLoopGroup 
> io.netty.channel.nio.NioEventLoopGroup@5c93dd80 in 1003 ms
> 2016-01-14 22:46:39,470 [pool-6-thread-1] INFO  
> o.a.drill.exec.service.ServiceEngine - closed userServer in 1004 ms
> 2016-01-14 22:46:39,470 [pool-6-thread-2] INFO  
> o.a.drill.exec.service.ServiceEngine - closed dataPool in 1005 ms
> 2016-01-14 22:46:39,483 [Drillbit-ShutdownHook#0] WARN  
> o.apache.drill.exec.work.WorkManager - Closing WorkManager but there are 2 
> running fragments.
> 2016-01-14 22:46:41,489 [Drillbit-ShutdownHook#0] ERROR 
> o.a.d.exec.server.BootStrapContext - Pool did not terminate
> 2016-01-14 22:46:41,498 [Drillbit-ShutdownHook#0] WARN  
> o.apache.drill.exec.server.Drillbit - Failure on close()
> java.lang.RuntimeException: Exception while closing
> at 
> org.apache.drill.common.DrillAutoCloseables.closeNoChecked(DrillAutoCloseables.java:46)
>  ~[drill-common-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
> at 
> org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:127)
>  ~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAP

[jira] [Assigned] (DRILL-4272) When sort runs out of memory and query fails, resources are seemingly not freed

2016-12-06 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers reassigned DRILL-4272:
--

Assignee: Paul Rogers

> When sort runs out of memory and query fails, resources are seemingly not 
> freed
> ---
>
> Key: DRILL-4272
> URL: https://issues.apache.org/jira/browse/DRILL-4272
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.5.0
>Reporter: Victoria Markman
>Assignee: Paul Rogers
>Priority: Critical
>

[jira] [Assigned] (DRILL-4301) OOM : Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.

2016-12-06 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers reassigned DRILL-4301:
--

Assignee: Paul Rogers

> OOM : Unable to allocate sv2 for 1000 records, and not enough batchGroups to 
> spill.
> ---
>
> Key: DRILL-4301
> URL: https://issues.apache.org/jira/browse/DRILL-4301
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.5.0
> Environment: 4 node cluster
>Reporter: Khurram Faraaz
>Assignee: Paul Rogers
>

[jira] [Updated] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-06 Thread Sorabh Hamirwasia (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-5098:
-
Reviewer: Paul Rogers

> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time, some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to pick a dead 
> Drillbit. Even if ZooKeeper is used to select a random Drillbit from the 
> registered ones, there is a small window in which the client selects a 
> Drillbit that then goes down; the client will fail to connect to that 
> Drillbit and error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced 
> in both cases, improving fault tolerance. During further investigation it was 
> also found that an authentication failure is thrown as a generic 
> RpcException. We need to improve that as well, to capture this case 
> explicitly, since on an authentication failure we don't want to try other 
> Drillbits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727060#comment-15727060
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

GitHub user sohami opened a pull request:

https://github.com/apache/drill/pull/679

DRILL-5098: Improving fault tolerance for connection between client a…

…nd foreman node.

 Note: Adds a "tries" config option to the connection string,
   improving fault tolerance in the Drill client when making the
   first connection to the foreman. The client will try to connect to
   min(tries, num_drillbits) unique drillbits until a successful
   connection is established.
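The retry policy described in the note — attempt at most min(tries, num_drillbits) distinct, randomly ordered endpoints until one connection succeeds — can be sketched as below. This is an illustrative model only; `connectAny`, the endpoint strings, and the `Predicate`-based connect callback are hypothetical stand-ins, not Drill's actual client API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

public class RetryConnect {
    /**
     * Try up to min(tries, endpoints.size()) unique endpoints in random
     * order, stopping at the first successful connection.
     * Returns the endpoint that accepted the connection, or null.
     */
    static String connectAny(List<String> endpoints, int tries,
                             Predicate<String> connect) {
        List<String> shuffled = new ArrayList<>(endpoints);
        Collections.shuffle(shuffled);                 // random selection
        int attempts = Math.min(tries, shuffled.size());
        for (int i = 0; i < attempts; i++) {
            String candidate = shuffled.get(i);        // each one unique
            if (connect.test(candidate)) {
                return candidate;                      // first success wins
            }
        }
        return null;                                   // all attempts failed
    }

    public static void main(String[] args) {
        List<String> bits = List.of("drillbit1", "drillbit2", "drillbit3");
        // Pretend only drillbit3 is alive; attempts are capped at 3 endpoints.
        System.out.println(connectAny(bits, 5, e -> e.equals("drillbit3")));
        // drillbit3
    }
}
```

Because each attempt targets a distinct drillbit, a single dead node can fail at most one of the tries, which is what shrinks the error window the issue describes.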

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sohami/drill DRILL-5098

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/679.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #679


commit d13c6cc91b72b91b481bd1d428ca5490a8cf
Author: Sorabh Hamirwasia 
Date:   2016-12-01T22:58:00Z

DRILL-5098: Improving fault tolerance for connection between client and 
foreman node.
 Note: Adds a "tries" config option to the connection string,
   improving fault tolerance in the Drill client when making the
   first connection to the foreman. The client will try to connect to
   min(tries, num_drillbits) unique drillbits until a successful
   connection is established.




> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time, some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to pick a dead 
> Drillbit. Even if ZooKeeper is used to select a random Drillbit from the 
> registered ones, there is a small window in which the client selects a 
> Drillbit that then goes down; the client will fail to connect to that 
> Drillbit and error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced 
> in both cases, improving fault tolerance. During further investigation it was 
> also found that an authentication failure is thrown as a generic 
> RpcException. We need to improve that as well, to capture this case 
> explicitly, since on an authentication failure we don't want to try other 
> Drillbits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-06 Thread Sorabh Hamirwasia (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-5098:
-
Labels: doc-impacting  (was: )

> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time, some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to pick a dead 
> Drillbit. Even if ZooKeeper is used to select a random Drillbit from the 
> registered ones, there is a small window in which the client selects a 
> Drillbit that then goes down; the client will fail to connect to that 
> Drillbit and error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced 
> in both cases, improving fault tolerance. During further investigation it was 
> also found that an authentication failure is thrown as a generic 
> RpcException. We need to improve that as well, to capture this case 
> explicitly, since on an authentication failure we don't want to try other 
> Drillbits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5043) Function that returns a unique id per session/connection similar to MySQL's CONNECTION_ID()

2016-12-06 Thread Gautam Kumar Parai (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726877#comment-15726877
 ] 

Gautam Kumar Parai commented on DRILL-5043:
---

Hi Nagarajan,

Any updates? 

Regarding your questions:

{quote} I am not sure about one change I made in BitControl.java in the 
following block: {quote}
It should be 32 instead of 36

{quote} Also, I am not sure how to incorporate session_id into "descriptorData" 
static variable that is initialized at line number 9073 in BitControl.java. 
Please advise. {quote}
I think no change is required.

Can you please post the pull request? We can only start the review process when 
we have a pull request.

> Function that returns a unique id per session/connection similar to MySQL's 
> CONNECTION_ID()
> ---
>
> Key: DRILL-5043
> URL: https://issues.apache.org/jira/browse/DRILL-5043
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.8.0
>Reporter: Nagarajan Chinnasamy
>Priority: Minor
>  Labels: CONNECTION_ID, SESSION, UDF
> Attachments: 01_session_id_sqlline.png, 
> 02_session_id_webconsole_query.png, 03_session_id_webconsole_result.png
>
>
> Design and implement a function that returns a unique id per 
> session/connection similar to MySQL's CONNECTION_ID().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726528#comment-15726528
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/665#discussion_r91158494
  
--- Diff: 
common/src/main/java/org/apache/drill/common/exceptions/UserException.java ---
@@ -549,7 +550,12 @@ public UserException build(final Logger logger) {
   if (isSystemError) {
 logger.error(newException.getMessage(), newException);
   } else {
-logger.info("User Error Occurred", newException);
+String msg = "User Error Occurred";
+if (message != null) {
+  msg += ": " + message; }
+if (cause != null) {
--- End diff --

I wonder if the root cause (see line 540 above) may also be useful, in case 
it is deeper than, or different from, 'cause'?



> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.
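The message format the issue asks for — base text, then the builder's message, then the cause's message in parentheses — can be assembled roughly as below. This is a sketch of the intended formatting only, not the actual UserException builder code.

```java
public class UserErrorLog {
    /** Builds "User Error Occurred[: message][ (cause message)]". */
    static String logMessage(String message, Throwable cause) {
        StringBuilder sb = new StringBuilder("User Error Occurred");
        if (message != null) {
            sb.append(": ").append(message);          // builder-provided text
        }
        if (cause != null && cause.getMessage() != null) {
            sb.append(" (").append(cause.getMessage()).append(')');  // root detail
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Throwable e = new java.io.IOException("Disk write failed");
        System.out.println(logMessage(
            "External Sort encountered an error while spilling to disk", e));
        // User Error Occurred: External Sort encountered an error while
        // spilling to disk (Disk write failed)
    }
}
```

With both pieces present in the log line, the two spill-time throw sites become distinguishable without line numbers.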



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726480#comment-15726480
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/665#discussion_r91155444
  
--- Diff: 
common/src/main/java/org/apache/drill/common/exceptions/UserException.java ---
@@ -37,6 +37,7 @@
  *
  * @see org.apache.drill.exec.proto.UserBitShared.DrillPBError.ErrorType
  */
+@SuppressWarnings("serial")
--- End diff --

Should this class instead declare
 private static final long serialVersionUID = -3796081521525479249L;
as its super-class DrillRuntimeException does, and as do its super-classes 
RuntimeException, Exception, and Throwable (which implements Serializable)?
(Not sure how to pick the specific UID number ...) 
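For reference, pinning a serialVersionUID looks like the sketch below; the class name is hypothetical. Any constant works as long as it stays stable across releases — the JDK's `serialver` tool prints the value the JVM would otherwise compute, which is the one any already-serialized data expects.

```java
// RuntimeException is already Serializable, so subclasses inherit that;
// declaring serialVersionUID just pins the stream version explicitly
// instead of relying on the compiler-computed default (which changes
// whenever the class's shape changes).
public class MyUserException extends RuntimeException {
    private static final long serialVersionUID = 1L;  // arbitrary but stable

    public MyUserException(String message) {
        super(message);
    }
}
```

Declaring the field and suppressing the warning are both valid ways to silence it; the explicit field additionally guards deserialization compatibility.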



> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5032) Drill query on hive parquet table failed with OutOfMemoryError: Java heap space

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726450#comment-15726450
 ] 

ASF GitHub Bot commented on DRILL-5032:
---

Github user chunhui-shi commented on a diff in the pull request:

https://github.com/apache/drill/pull/654#discussion_r91153138
  
--- Diff: 
contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestColumnListCache.java
 ---
@@ -0,0 +1,78 @@
+/*
--- End diff --

Could you add a unit test to verify that when there are two (or more) kinds 
of columns across, e.g., 10 partitions, the physical plan text contains only 
two copies of the columns?


> Drill query on hive parquet table failed with OutOfMemoryError: Java heap 
> space
> ---
>
> Key: DRILL-5032
> URL: https://issues.apache.org/jira/browse/DRILL-5032
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Hive
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
> Attachments: plan, plan with fix
>
>
> Following query on hive parquet table failed with OOM Java heap space:
> {code}
> select distinct(businessdate) from vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:02:03,597 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 283938c3-fde8-0fc6-37e1-9a568c7f5913: select distinct(businessdate) from 
> vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:05:58,502 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 1 ms
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 3 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:05:58,664 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$1
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:09:42,355 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] ERROR 
> o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, 
> exiting. Information message: Unable to handle out of memory condition in 
> Foreman.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3332) ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:136) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:457) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:166) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:538) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:526) 
> ~[protobuf-java-2.5.0.jar:na]
> 

[jira] [Commented] (DRILL-5032) Drill query on hive parquet table failed with OutOfMemoryError: Java heap space

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726451#comment-15726451
 ] 

ASF GitHub Bot commented on DRILL-5032:
---

Github user chunhui-shi commented on a diff in the pull request:

https://github.com/apache/drill/pull/654#discussion_r91152465
  
--- Diff: 
contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveUtilities.java
 ---
@@ -398,17 +399,13 @@ public static void addConfToJob(final JobConf job, 
final Properties properties)
* Wrapper around {@link MetaStoreUtils#getPartitionMetadata(Partition, 
Table)} which also adds parameters from table
* to properties returned by {@link 
MetaStoreUtils#getPartitionMetadata(Partition, Table)}.
*
-   * @param partition {@link Partition} instance
-   * @param table {@link Table} instance
+   * @param partition the source of partition level parameters
+   * @param table the source of table level parameters
* @return properties
*/
-  public static Properties getPartitionMetadata(final Partition partition, 
final Table table) {
+  public static Properties getPartitionMetadata(final HivePartition 
partition, final HiveTable table) {
 final Properties properties;
-// exactly the same column lists for partitions and table
-// stored only in table to reduce physical plan serialization
-if (partition.getSd().getCols() == null) {
-  partition.getSd().setCols(table.getSd().getCols());
-}
+restoreColumns(table, partition);
--- End diff --

Could this restoreColumns defeat the purpose of the fix? Since it sets the 
columns back on each partition, if getPartitionMetadata is called before the 
final physical plan is generated, these columns could be printed into the 
plan again. Could you check this possibility?
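The serialization trick under discussion — carry the column list once on the table, null it out on partitions whose list is identical, and restore it after deserialization — can be modeled in isolation as below. `Table`, `Partition`, `compress`, and `restoreColumns` here are hypothetical stand-ins, not Drill's actual HiveTable/HivePartition classes.

```java
import java.util.List;

public class ColumnDedup {
    static class Table { List<String> cols; }
    static class Partition { List<String> cols; }

    /** Before serializing the plan: drop per-partition column lists that
     *  are identical to the table's, so the plan carries only one copy. */
    static void compress(Table t, List<Partition> parts) {
        for (Partition p : parts) {
            if (t.cols != null && t.cols.equals(p.cols)) {
                p.cols = null;
            }
        }
    }

    /** After deserializing the plan: put the shared list back on each
     *  partition. Calling this before the plan text is generated would
     *  reintroduce the duplication. */
    static void restoreColumns(Table t, List<Partition> parts) {
        for (Partition p : parts) {
            if (p.cols == null) {
                p.cols = t.cols;
            }
        }
    }

    public static void main(String[] args) {
        Table t = new Table();
        t.cols = List.of("l_orderkey", "l_shipdate");
        Partition p = new Partition();
        p.cols = List.of("l_orderkey", "l_shipdate");
        compress(t, List.of(p));
        System.out.println(p.cols);   // null: plan carries one copy
        restoreColumns(t, List.of(p));
        System.out.println(p.cols);   // [l_orderkey, l_shipdate]
    }
}
```

The ordering concern in the review maps directly onto this model: correctness depends on `compress` running after the last call that restores columns.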


> Drill query on hive parquet table failed with OutOfMemoryError: Java heap 
> space
> ---
>
> Key: DRILL-5032
> URL: https://issues.apache.org/jira/browse/DRILL-5032
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Hive
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
> Attachments: plan, plan with fix
>
>
> Following query on hive parquet table failed with OOM Java heap space:
> {code}
> select distinct(businessdate) from vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:02:03,597 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 283938c3-fde8-0fc6-37e1-9a568c7f5913: select distinct(businessdate) from 
> vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:05:58,502 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 1 ms
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 3 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:05:58,664 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$1
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:09:42,355 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] ERROR 
> o.a.drill.common.CatastrophicFailure - Catastrophic Failure

[jira] [Updated] (DRILL-4491) FormatPluginOptionsDescriptor requires FormatPluginConfig fields to be public

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4491:

Assignee: Aditya Kishore  (was: Parth Chandra)
Reviewer: Parth Chandra

Assigned Reviewer to [~parthc]

> FormatPluginOptionsDescriptor requires FormatPluginConfig fields to be public
> -
>
> Key: DRILL-4491
> URL: https://issues.apache.org/jira/browse/DRILL-4491
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Minor
> Fix For: Future
>
>
> The code uses {{getField()}} instead of {{getDeclaredField()}}, which returns 
> only the public fields.
> {code:title=FormatPluginOptionsDescriptor.java:165|borderStyle=solid}
> Field field = pluginConfigClass.getField(paramDef.name);
> {code}
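The behavior behind this bug — `Class.getField` resolves only public fields (including inherited ones), while `Class.getDeclaredField` resolves any field declared directly on the class — can be demonstrated with a small hypothetical config class:

```java
import java.lang.reflect.Field;

public class FieldLookup {
    static class Config {
        public String publicOpt = "a";
        private String privateOpt = "b";
    }

    public static void main(String[] args) throws Exception {
        Class<?> c = Config.class;
        // getField(): public fields only.
        System.out.println(c.getField("publicOpt").getName()); // publicOpt
        try {
            c.getField("privateOpt");                // not visible here
        } catch (NoSuchFieldException e) {
            System.out.println("getField cannot see private fields");
        }
        // getDeclaredField(): any field declared on this class.
        Field f = c.getDeclaredField("privateOpt");
        f.setAccessible(true);                       // allow reading it
        System.out.println(f.get(new Config()));     // b
    }
}
```

So a plugin config with non-public option fields silently disappears from a `getField`-based lookup; switching to `getDeclaredField` (plus `setAccessible`) finds it.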



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4831) Running refresh table metadata concurrently randomly fails with JsonParseException

2016-12-06 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15726005#comment-15726005
 ] 

Zelaine Fong commented on DRILL-4831:
-

Changed status to In Progress, as I believe there are still some issues 
[~ppenumarthy] needs to resolve.

> Running refresh table metadata concurrently randomly fails with 
> JsonParseException
> --
>
> Key: DRILL-4831
> URL: https://issues.apache.org/jira/browse/DRILL-4831
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.8.0
>Reporter: Rahul Challapalli
>Assignee: Padma Penumarthy
> Attachments: error.log, l_3level.tgz
>
>
> git.commit.id.abbrev=f476eb5
> Just run the command below concurrently from 10 different JDBC connections; 
> there is a likelihood that you will encounter the error below.
> Extracts from the log
> {code}
> Caused By (java.lang.AssertionError) Internal error: Error while applying 
> rule DrillPushProjIntoScan, args 
> [rel#189411:LogicalProject.NONE.ANY([]).[](input=rel#189289:Subset#3.ENUMERABLE.ANY([]).[],l_orderkey=$1,dir0=$2,dir1=$3,dir2=$4,l_shipdate=$5,l_extendedprice=$6,l_discount=$7),
>  rel#189233:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[dfs, 
> metadata_caching_pp, l_3level])]
> org.apache.calcite.util.Util.newInternal():792
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():251
> .
> .
>   java.lang.Thread.run():745
>   Caused By (org.apache.drill.common.exceptions.DrillRuntimeException) 
> com.fasterxml.jackson.core.JsonParseException: Illegal character ((CTRL-CHAR, 
> code 0)): only regular white space (\r, \n, \t) is allowed between tokens
>  at [Source: com.mapr.fs.MapRFsDataInputStream@57a574a8; line: 1, column: 2]
> org.apache.drill.exec.planner.logical.DrillPushProjIntoScan.onMatch():95
> {code}  
> Attached the complete log message and the data set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4280) Kerberos Authentication

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4280:

Assignee: Sudheesh Katkam  (was: Chunhui Shi)
Reviewer: Chunhui Shi

Assigned Reviewer to [~cshi]

> Kerberos Authentication
> ---
>
> Key: DRILL-4280
> URL: https://issues.apache.org/jira/browse/DRILL-4280
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Keys Botzum
>Assignee: Sudheesh Katkam
>  Labels: security
>
> Drill should support Kerberos based authentication from clients. This means 
> that both the ODBC and JDBC drivers as well as the web/REST interfaces should 
> support inbound Kerberos. For Web this would most likely be SPNEGO while for 
> ODBC and JDBC this will be more generic Kerberos.
> Since Hive and much of Hadoop supports Kerberos there is a potential for a 
> lot of reuse of ideas if not implementation.
> Note that this is related to but not the same as 
> https://issues.apache.org/jira/browse/DRILL-3584 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4842) SELECT * on JSON data results in NumberFormatException

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4842:

Assignee: Serhii Harnyk  (was: Chunhui Shi)
Reviewer: Chunhui Shi

Assigned Reviewer to [~cshi]

> SELECT * on JSON data results in NumberFormatException
> --
>
> Key: DRILL-4842
> URL: https://issues.apache.org/jira/browse/DRILL-4842
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Khurram Faraaz
>Assignee: Serhii Harnyk
> Attachments: tooManyNulls.json
>
>
> Note that doing SELECT c1 returns correct results; the failure is seen when 
> we do SELECT star. json.all_text_mode was set to true.
> JSON file tooManyNulls.json contains 4096 occurrences of key c1 with null 
> values; the 4097th occurrence of c1 has the value "Hello World".
> git commit ID : aaf220ff
> MapR Drill 1.8.0 RPM
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> alter session set 
> `store.json.all_text_mode`=true;
> +-------+------------------------------------+
> |  ok   |              summary               |
> +-------+------------------------------------+
> | true  | store.json.all_text_mode updated.  |
> +-------+------------------------------------+
> 1 row selected (0.27 seconds)
> 0: jdbc:drill:schema=dfs.tmp> SELECT c1 FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> +--------------+
> |      c1      |
> +--------------+
> | Hello World  |
> +--------------+
> 1 row selected (0.243 seconds)
> 0: jdbc:drill:schema=dfs.tmp> select * FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> Error: SYSTEM ERROR: NumberFormatException: Hello World
> Fragment 0:0
> [Error Id: 9cafb3f9-3d5c-478a-b55c-900602b8765e on centos-01.qa.lab:31010]
>  (java.lang.NumberFormatException) Hello World
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI():95
> 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.varTypesToInt():120
> org.apache.drill.exec.test.generated.FiltererGen1169.doSetup():45
> org.apache.drill.exec.test.generated.FiltererGen1169.setup():54
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.generateSV2Filterer():195
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.setupNewSchema():107
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():78
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():94
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745 (state=,code=0)
> 0: jdbc:drill:schema=dfs.tmp>
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.NumberFormatException: Hello World
> at 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI(StringFunctionHelpers.java:95)
>  ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.varTypesToInt(StringFunctionHelpers.java:120)
>  ~[drill-java-exec-1.8.0-SNAPSHOT

[jira] [Updated] (DRILL-4831) Running refresh table metadata concurrently randomly fails with JsonParseException

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4831:

Assignee: Padma Penumarthy  (was: Aman Sinha)
Reviewer: Aman Sinha

Assigned Reviewer to [~amansinha100]

> Running refresh table metadata concurrently randomly fails with 
> JsonParseException
> --
>
> Key: DRILL-4831
> URL: https://issues.apache.org/jira/browse/DRILL-4831
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.8.0
>Reporter: Rahul Challapalli
>Assignee: Padma Penumarthy
> Attachments: error.log, l_3level.tgz
>
>
> git.commit.id.abbrev=f476eb5
> Just run the command below concurrently from 10 different JDBC connections; 
> there is a likelihood that you will encounter the error below.
> Extracts from the log
> {code}
> Caused By (java.lang.AssertionError) Internal error: Error while applying 
> rule DrillPushProjIntoScan, args 
> [rel#189411:LogicalProject.NONE.ANY([]).[](input=rel#189289:Subset#3.ENUMERABLE.ANY([]).[],l_orderkey=$1,dir0=$2,dir1=$3,dir2=$4,l_shipdate=$5,l_extendedprice=$6,l_discount=$7),
>  rel#189233:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[dfs, 
> metadata_caching_pp, l_3level])]
> org.apache.calcite.util.Util.newInternal():792
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():251
> .
> .
>   java.lang.Thread.run():745
>   Caused By (org.apache.drill.common.exceptions.DrillRuntimeException) 
> com.fasterxml.jackson.core.JsonParseException: Illegal character ((CTRL-CHAR, 
> code 0)): only regular white space (\r, \n, \t) is allowed between tokens
>  at [Source: com.mapr.fs.MapRFsDataInputStream@57a574a8; line: 1, column: 2]
> org.apache.drill.exec.planner.logical.DrillPushProjIntoScan.onMatch():95
> {code}  
> Attached the complete log message and the data set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4309) Make this option store.hive.optimize_scan_with_native_readers=true default

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4309:

Assignee: Vitalii Diravka  (was: Parth Chandra)
Reviewer: Parth Chandra  (was: Rahul Challapalli)

Assigned Reviewer to [~parthc]

> Make this option store.hive.optimize_scan_with_native_readers=true default
> --
>
> Key: DRILL-4309
> URL: https://issues.apache.org/jira/browse/DRILL-4309
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0
>Reporter: Sean Hsuan-Yi Chu
>Assignee: Vitalii Diravka
>  Labels: doc-impacting
> Fix For: Future
>
>
> This new feature has been around and used/tested in many scenarios. 
> We should enable this feature by default.





[jira] [Commented] (DRILL-4987) Use ImpersonationUtil in RemoteFunctionRegistry

2016-12-06 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725980#comment-15725980
 ] 

Zelaine Fong commented on DRILL-4987:
-

@sudheesh - is this pull request ready to be checked in?

> Use ImpersonationUtil in RemoteFunctionRegistry
> ---
>
> Key: DRILL-4987
> URL: https://issues.apache.org/jira/browse/DRILL-4987
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Reporter: Sudheesh Katkam
>Assignee: Sudheesh Katkam
>Priority: Minor
> Fix For: 1.9.0
>
>
> + Use ImpersonationUtil#getProcessUserName rather than  
> UserGroupInformation#getCurrentUser#getUserName in RemoteFunctionRegistry
> + Expose process users' group info in ImpersonationUtil and use that in 
> RemoteFunctionRegistry, rather than 
> UserGroupInformation#getCurrentUser#getGroupNames





[jira] [Commented] (DRILL-4990) Use new HDFS API access instead of listStatus to check if users have permissions to access workspace.

2016-12-06 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725978#comment-15725978
 ] 

Zelaine Fong commented on DRILL-4990:
-

Changed status back to "In Progress".  I believe there are some test issues 
that still need to be resolved.

> Use new HDFS API access instead of listStatus to check if users have 
> permissions to access workspace.
> -
>
> Key: DRILL-4990
> URL: https://issues.apache.org/jira/browse/DRILL-4990
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0
>Reporter: Padma Penumarthy
>Assignee: Padma Penumarthy
> Fix For: 1.9.0
>
>
> For every query, we build the schema tree 
> (runSQL->getPlan->getNewDefaultSchema->getRootSchema). All workspaces in all 
> storage plugins are checked and are added to the schema tree if they are 
> accessible by the user who initiated the query. For the file system plugin, the 
> listStatus API is used to check whether the workspace is accessible by the user 
> (WorkspaceSchemaFactory.accessible). The idea seems to be that if the 
> user does not have access to file(s) in the workspace, listStatus will 
> throw an exception and we return false. But listStatus (which lists all 
> the entries of a directory) is an expensive operation when there is a large 
> number of files in the directory. A new API, access, was added in Hadoop 2.6 
> (HDFS-6570); it provides the ability to check whether the user has 
> permissions on a file/directory. Use this new API instead of listStatus. 
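The cost difference is the crux: listStatus materializes every directory entry, while the Hadoop 2.6 FileSystem#access call checks only a permission bit. A standard-library sketch of the contrast, using java.nio.file purely as an analogy (Drill's actual fix would call Hadoop's FileSystem#access, which throws AccessControlException when permission is denied; names below are illustrative):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class AccessCheckSketch {

    // Expensive check: enumerate the directory (analogous to listStatus)
    // just to learn whether we may read it.
    static boolean accessibleViaListing(Path dir) {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            entries.iterator().hasNext(); // forces at least one read
            return true;
        } catch (IOException | SecurityException e) {
            return false;
        }
    }

    // Cheap check: ask only for the permission bits (analogous to the
    // Hadoop 2.6 FileSystem#access API).
    static boolean accessibleViaAccessCheck(Path dir) {
        return Files.isReadable(dir) && Files.isExecutable(dir);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("workspace");
        System.out.println(accessibleViaListing(tmp));      // true
        System.out.println(accessibleViaAccessCheck(tmp));  // true
    }
}
```

The access-style check stays constant-time no matter how many files the workspace holds, which is the point of the change.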





[jira] [Updated] (DRILL-5034) Select timestamp from hive generated parquet always return in UTC

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5034:

Assignee: Vitalii Diravka  (was: Parth Chandra)
Reviewer: Parth Chandra  (was: Rahul Challapalli)

Assigned Reviewer to [~parthc]

> Select timestamp from hive generated parquet always return in UTC
> -
>
> Key: DRILL-5034
> URL: https://issues.apache.org/jira/browse/DRILL-5034
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Affects Versions: 1.9.0
>Reporter: Krystal
>Assignee: Vitalii Diravka
>
> commit id: 5cea9afa6278e21574c6a982ae5c3d82085ef904
> Reading timestamp data from a Hive-generated Parquet table in Drill automatically 
> converts the timestamps to UTC. 
> {code}
> SELECT TIMEOFDAY() FROM (VALUES(1));
> +----------------------------------------------+
> |                    EXPR$0                    |
> +----------------------------------------------+
> | 2016-11-10 12:33:26.547 America/Los_Angeles  |
> +----------------------------------------------+
> {code}
> data schema:
> {code}
> message hive_schema {
>   optional int32 voter_id;
>   optional binary name (UTF8);
>   optional int32 age;
>   optional binary registration (UTF8);
>   optional fixed_len_byte_array(3) contributions (DECIMAL(6,2));
>   optional int32 voterzone;
>   optional int96 create_timestamp;
>   optional int32 create_date (DATE);
> }
> {code}
> Using drill-1.8, the returned timestamps match the table data:
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> `/user/hive/warehouse/voter_hive_parquet` limit 5;
> ++
> | EXPR$0 |
> ++
> | 2016-10-23 20:03:58.0  |
> | null   |
> | 2016-09-09 12:01:18.0  |
> | 2017-03-06 20:35:55.0  |
> | 2017-01-20 22:32:43.0  |
> ++
> 5 rows selected (1.032 seconds)
> {code}
> If the user timezone is changed to UTC, then the timestamp data is returned in 
> UTC time.
> Using drill-1.9, the returned timestamps are converted to UTC even though the 
> user timezone is PST.
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> dfs.`/user/hive/warehouse/voter_hive_parquet` limit 5;
> ++
> | EXPR$0 |
> ++
> | 2016-10-24 03:03:58.0  |
> | null   |
> | 2016-09-09 19:01:18.0  |
> | 2017-03-07 04:35:55.0  |
> | 2017-01-21 06:32:43.0  |
> ++
> {code}
> {code}
> alter session set `store.parquet.reader.int96_as_timestamp`=true;
> +-------+---------------------------------------------------+
> |  ok   |                      summary                      |
> +-------+---------------------------------------------------+
> | true  | store.parquet.reader.int96_as_timestamp updated.  |
> +-------+---------------------------------------------------+
> select create_timestamp from dfs.`/user/hive/warehouse/voter_hive_parquet` 
> limit 5;
> ++
> |create_timestamp|
> ++
> | 2016-10-24 03:03:58.0  |
> | null   |
> | 2016-09-09 19:01:18.0  |
> | 2017-03-07 04:35:55.0  |
> | 2017-01-21 06:32:43.0  |
> ++
> {code}
>  
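For reference, the Impala/Hive int96 layout that TIMESTAMP_IMPALA decodes is nanoseconds-of-day (first 8 bytes, little-endian) followed by a Julian day number (last 4 bytes). A self-contained decoding sketch (helper names are illustrative, not Drill's code); the decoded value is a point on the UTC timeline, and whether it is then rendered in UTC or the session timezone is the behavior change this issue reports:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.time.Instant;

public class Int96TimestampSketch {
    // Julian day number of the Unix epoch, 1970-01-01.
    private static final long JULIAN_DAY_OF_EPOCH = 2440588L;

    // Decode a 12-byte Parquet int96 (Impala/Hive layout):
    // bytes 0-7 = nanoseconds within the day, bytes 8-11 = Julian day,
    // both little-endian.
    static Instant decodeInt96(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();
        long julianDay = buf.getInt() & 0xFFFFFFFFL;
        long epochSeconds = (julianDay - JULIAN_DAY_OF_EPOCH) * 86_400L
                + nanosOfDay / 1_000_000_000L;
        return Instant.ofEpochSecond(epochSeconds, nanosOfDay % 1_000_000_000L);
    }

    public static void main(String[] args) {
        // Julian day 2440589 = 1970-01-02, zero nanoseconds into the day.
        byte[] raw = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN)
                .putLong(0L).putInt(2440589).array();
        System.out.println(decodeInt96(raw)); // 1970-01-02T00:00:00Z
    }
}
```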





[jira] [Updated] (DRILL-5052) Option to debug generated Java code using an IDE

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5052:

Assignee: Paul Rogers  (was: Arina Ielchiieva)
Reviewer: Arina Ielchiieva

Assigned Reviewer to [~arina]

> Option to debug generated Java code using an IDE
> 
>
> Key: DRILL-5052
> URL: https://issues.apache.org/jira/browse/DRILL-5052
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Codegen
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> Drill makes extensive use of Java code generation to implement its operators. 
> Drill uses sophisticated techniques to blend generated code with pre-compiled 
> template code. An unfortunate side-effect of this behavior is that it is very 
> difficult to visualize and debug the generated code.
> As it turns out, Drill's code-merge facility is, in essence, a do-it-yourself 
> version of subclassing. The Drill "template" is the parent class, the 
> generated code is the subclass. But, rather than using plain-old subclassing, 
> Drill combines the code from the two classes into a single "artificial" 
> packet of byte codes for which no source exists.
> Modify the code generation path to optionally allow "plain-old Java" 
> compilation: the generated code is a subclass of the template. Compile the 
> generated code as a plain-old Java class with no byte-code fix-up. Write the 
> code to a known location that the IDE can search when looking for source 
> files.
> With this change, developers can turn on the above feature, set a breakpoint 
> in a template, then step directly into the generated Java code called from 
> the template.
> This feature should be an option, enabled by developers when needed. The 
> existing byte-code technique should be used for production code generation.
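The subclassing arrangement described above can be sketched with toy, hypothetical classes (Drill's real templates and generated classes are far larger; all names here are illustrative):

```java
// Hypothetical stand-in for a Drill "template": precompiled driver logic
// with a hole that code generation fills in.
public abstract class CopierTemplate {
    public int copyAll(int[] in, int[] out) {
        int copied = 0;
        for (int v : in) {
            if (accept(v)) {      // a breakpoint here can step into the subclass
                out[copied++] = v;
            }
        }
        return copied;
    }

    // The hole: generated code supplies this method.
    protected abstract boolean accept(int value);

    public static void main(String[] args) {
        int[] out = new int[4];
        int n = new GeneratedCopier().copyAll(new int[]{1, 2, 3, 4}, out);
        System.out.println(n); // 2 (keeps the even values 2 and 4)
    }
}

// What the generator would emit under the "plain-old Java" option: an
// ordinary subclass, compiled normally and written to a source path the
// IDE can search -- no byte-code fix-up required.
class GeneratedCopier extends CopierTemplate {
    @Override
    protected boolean accept(int value) {
        return value % 2 == 0; // example of generated per-query logic
    }
}
```

Because GeneratedCopier is a plain class with real source, the debugger can step from copyAll into accept, which is exactly the developer experience the option enables.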





[jira] [Updated] (DRILL-5051) Returning incorrect number of rows while querying using both nested select and offset

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5051:

Assignee: Hongze Zhang  (was: Sudheesh Katkam)
Reviewer: Sudheesh Katkam  (was: Jinfeng Ni)

Assigned Reviewer to [~sudheeshkatkam]

> Returning incorrect number of rows while querying using both nested select 
> and offset
> -
>
> Key: DRILL-5051
> URL: https://issues.apache.org/jira/browse/DRILL-5051
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0
> Environment: Fedora 24 / OpenJDK 8
>Reporter: Hongze Zhang
>Assignee: Hongze Zhang
> Fix For: Future
>
>
> My SQl:
> select count(1) from (select id from (select id from 
> cp.`tpch/lineitem.parquet` LIMIT 2) limit 1 offset 1) 
> This SQL returns nothing.
> Something goes wrong in LimitRecordBatch.java, and the cause is different 
> from [DRILL-4884|https://issues.apache.org/jira/browse/DRILL-4884?filter=-2]
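Per standard SQL semantics, the nested LIMIT/OFFSET should leave exactly one row, so the outer COUNT should return 1. A stream-based sketch of the expected behavior (row values are illustrative):

```java
import java.util.stream.Stream;

public class LimitOffsetSketch {
    public static void main(String[] args) {
        // Inner subquery: SELECT id ... LIMIT 2  -> two rows survive
        // Outer subquery: LIMIT 1 OFFSET 1       -> skip one row, keep one
        long count = Stream.of(101, 102)
                .skip(1)    // OFFSET 1
                .limit(1)   // LIMIT 1
                .count();   // SELECT COUNT(1) over the result
        System.out.println(count); // 1; the bug makes Drill return no rows
    }
}
```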





[jira] [Updated] (DRILL-5056) UserException does not write full message to log

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5056:

Reviewer: Boaz Ben-Zvi

Assigned Reviewer to [~ben-zvi]

> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parentheses is the cause of the error: the {{e.getMessage()}} in the 
> above code.





[jira] [Updated] (DRILL-4956) Temporary tables support

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4956:

Assignee: Arina Ielchiieva  (was: Paul Rogers)
Reviewer: Paul Rogers

Assigned Reviewer to [~Paul.Rogers]

> Temporary tables support
> 
>
> Key: DRILL-4956
> URL: https://issues.apache.org/jira/browse/DRILL-4956
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>  Labels: doc-impacting
> Fix For: Future
>
>
> Link to design doc - 
> https://docs.google.com/document/d/1gSRo_w6q2WR5fPx7SsQ5IaVmJXJ6xCOJfYGyqpVOC-g/edit





[jira] [Updated] (DRILL-4347) Planning time for query64 from TPCDS test suite has increased 10 times compared to 1.4 release

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4347:

Assignee: Aman Sinha  (was: Gautam Kumar Parai)
Reviewer: Gautam Kumar Parai  (was: Victoria Markman)

Assigned Reviewer to [~gparai]

> Planning time for query64 from TPCDS test suite has increased 10 times 
> compared to 1.4 release
> --
>
> Key: DRILL-4347
> URL: https://issues.apache.org/jira/browse/DRILL-4347
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.5.0
>Reporter: Victoria Markman
>Assignee: Aman Sinha
> Fix For: Future
>
> Attachments: 294e9fb9-cdda-a89f-d1a7-b852878926a1.sys.drill_1.4.0, 
> 294ea418-9fb8-3082-1725-74e3cfe38fe9.sys.drill_1.5.0, drill4347_jstack.txt
>
>
> mapr-drill-1.5.0.201602012001-1.noarch.rpm
> {code}
> 0: jdbc:drill:schema=dfs> WITH cs_ui
> . . . . . . . . . . . . >  AS (SELECT cs_item_sk,
> . . . . . . . . . . . . > Sum(cs_ext_list_price) AS sale,
> . . . . . . . . . . . . > Sum(cr_refunded_cash + 
> cr_reversed_charge
> . . . . . . . . . . . . > + cr_store_credit) AS refund
> . . . . . . . . . . . . >  FROM   catalog_sales,
> . . . . . . . . . . . . > catalog_returns
> . . . . . . . . . . . . >  WHERE  cs_item_sk = cr_item_sk
> . . . . . . . . . . . . > AND cs_order_number = 
> cr_order_number
> . . . . . . . . . . . . >  GROUP  BY cs_item_sk
> . . . . . . . . . . . . >  HAVING Sum(cs_ext_list_price) > 2 * Sum(
> . . . . . . . . . . . . > cr_refunded_cash + 
> cr_reversed_charge
> . . . . . . . . . . . . > + cr_store_credit)),
> . . . . . . . . . . . . >  cross_sales
> . . . . . . . . . . . . >  AS (SELECT i_product_name product_name,
> . . . . . . . . . . . . > i_item_sk  item_sk,
> . . . . . . . . . . . . > s_store_name   store_name,
> . . . . . . . . . . . . > s_zip  store_zip,
> . . . . . . . . . . . . > ad1.ca_street_number   
> b_street_number,
> . . . . . . . . . . . . > ad1.ca_street_name 
> b_streen_name,
> . . . . . . . . . . . . > ad1.ca_cityb_city,
> . . . . . . . . . . . . > ad1.ca_zip b_zip,
> . . . . . . . . . . . . > ad2.ca_street_number   
> c_street_number,
> . . . . . . . . . . . . > ad2.ca_street_name 
> c_street_name,
> . . . . . . . . . . . . > ad2.ca_cityc_city,
> . . . . . . . . . . . . > ad2.ca_zip c_zip,
> . . . . . . . . . . . . > d1.d_year  AS syear,
> . . . . . . . . . . . . > d2.d_year  AS fsyear,
> . . . . . . . . . . . . > d3.d_year  s2year,
> . . . . . . . . . . . . > Count(*)   cnt,
> . . . . . . . . . . . . > Sum(ss_wholesale_cost) s1,
> . . . . . . . . . . . . > Sum(ss_list_price) s2,
> . . . . . . . . . . . . > Sum(ss_coupon_amt) s3
> . . . . . . . . . . . . >  FROM   store_sales,
> . . . . . . . . . . . . > store_returns,
> . . . . . . . . . . . . > cs_ui,
> . . . . . . . . . . . . > date_dim d1,
> . . . . . . . . . . . . > date_dim d2,
> . . . . . . . . . . . . > date_dim d3,
> . . . . . . . . . . . . > store,
> . . . . . . . . . . . . > customer,
> . . . . . . . . . . . . > customer_demographics cd1,
> . . . . . . . . . . . . > customer_demographics cd2,
> . . . . . . . . . . . . > promotion,
> . . . . . . . . . . . . > household_demographics hd1,
> . . . . . . . . . . . . > household_demographics hd2,
> . . . . . . . . . . . . > customer_address ad1,
> . . . . . . . . . . . . > customer_address ad2,
> . . . . . . . . . . . . > income_band ib1,
> . . . . . . . . . . . . > income_band ib2,
> . . . . . . . . . . . . > item
> . . . . . . . . . . . . >  WHERE  ss_store_sk = s_store_sk
> . . . . . . . . . . . . > AND ss_sold_date_sk = d1.d_date_sk
> . . . . . . . . . . . . > AND ss_customer_sk = c_customer_sk
> . . . . . . . . . . . . > AND ss_cdemo_sk = cd1.cd_demo_sk
> . . . . . . . . . . . . > AND ss_hdemo_sk = hd1.hd_demo_sk
> . . . . . . .

[jira] [Updated] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5085:

Assignee: Arina Ielchiieva  (was: Paul Rogers)
Reviewer: Paul Rogers

Assigned Reviewer to [~Paul.Rogers]

> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Updated] (DRILL-4764) Parquet file with INT_16, etc. logical types not supported by simple SELECT

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4764:

Assignee: Serhii Harnyk  (was: Parth Chandra)
Reviewer: Parth Chandra

Assigned Reviewer to [~parthc]

> Parquet file with INT_16, etc. logical types not supported by simple SELECT
> ---
>
> Key: DRILL-4764
> URL: https://issues.apache.org/jira/browse/DRILL-4764
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.6.0
>Reporter: Paul Rogers
>Assignee: Serhii Harnyk
> Attachments: int_16.parquet, int_8.parquet, uint_16.parquet, 
> uint_32.parquet, uint_8.parquet
>
>
> Create a Parquet file with the following schema:
> message int16Data { required int32 index; required int32 value (INT_16); }
> Store it as int_16.parquet in the local file system. Query it with:
> SELECT * from `local`.`root`.`int_16.parquet`;
> The result, in the web UI, is this error:
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> UnsupportedOperationException: unsupported type: INT32 INT_16 Fragment 0:0 
> [Error Id: c63f66b4-e5a9-4a35-9ceb-546b74645dd4 on 172.30.1.28:31010]
> The INT_16 logical (or "original") type simply tells consumers of the file 
> that the data is actually a 16-bit signed int. Presumably, this should tell 
> Drill to use the SmallIntVector (or NullableSmallIntVector) class for 
> storage. Without supporting this annotation, even 16-bit integers must be 
> stored as 32-bits within Drill.
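A sketch of what honoring the annotation would mean, using a hypothetical helper (not Drill's actual reader code): the physical INT32 is narrowed to a Java short, which is what a 16-bit vector slot such as SmallIntVector would hold:

```java
public class Int16Annotation {
    // A Parquet INT_16 column is physically INT32; the logical type
    // promises the value fits in 16 signed bits, so a reader honoring
    // the annotation can safely narrow it to short.
    static short narrowInt16(int physical) {
        if (physical < Short.MIN_VALUE || physical > Short.MAX_VALUE) {
            throw new IllegalStateException(
                "value " + physical + " violates the INT_16 annotation");
        }
        return (short) physical;
    }

    public static void main(String[] args) {
        System.out.println(narrowInt16(-12345)); // -12345
    }
}
```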





[jira] [Updated] (DRILL-5091) JDBC unit test fail on Java 8

2016-12-06 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5091:

Assignee: Paul Rogers  (was: Padma Penumarthy)
Reviewer: Padma Penumarthy

Assigned code reviewer to [~ppenumarthy] in Reviewer field.

> JDBC unit test fail on Java 8
> -
>
> Key: DRILL-5091
> URL: https://issues.apache.org/jira/browse/DRILL-5091
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Java 8
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>
> Run the {{TestJDBCQuery}} unit tests. They will fail with errors relating to 
> the default name space.
> The problem is due to a failure (which is silently ignored; see DRILL-5090) to set 
> up the test DFS name space.
> The "dfs_test" storage plugin is not found in the plugin registry, resulting 
> in a null object and NPE.





[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725702#comment-15725702
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/672#discussion_r91085385
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/registry/RemoteFunctionRegistry.java
 ---
@@ -189,6 +188,7 @@ private void prepareStores(PersistentStoreProvider 
storeProvider, ClusterCoordin
* if not set, uses user home directory instead.
*/
   private void prepareAreas(DrillConfig config) {
+logger.info("Preparing three remote udf areas: staging, registry and 
tmp.");
--- End diff --

Sure.


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725703#comment-15725703
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/672#discussion_r91085335
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/FunctionImplementationRegistry.java
 ---
@@ -377,14 +374,12 @@ private ScanResult scan(ClassLoader classLoader, Path 
path, URL[] urls) throws I
* Creates local udf directory, if it doesn't exist.
* Checks if local udf directory is a directory and if current 
application has write rights on it.
* Attempts to clean up local udf directory in case jars were left after 
previous drillbit run.
-   * Local udf directory path is concatenated from drill temporary 
directory and ${drill.exec.udf.directory.local}.
*
* @param config drill config
* @return path to local udf directory
*/
   private Path getLocalUdfDir(DrillConfig config) {
-tmpDir = getTmpDir(config);
-File udfDir = new File(tmpDir, 
config.getString(ExecConstants.UDF_DIRECTORY_LOCAL));
+File udfDir = new 
File(config.getString(ExecConstants.UDF_DIRECTORY_LOCAL));
--- End diff --

Ok, reverted usage of generated temporary directory logic.


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725710#comment-15725710
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/672
  
@paul-rogers made changes after the second code review. Partially reverted the 
temporary-directory implementation logic to provide backward compatibility. Please review.


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725700#comment-15725700
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/672#discussion_r91078578
  
--- Diff: distribution/src/resources/drill-override-example.conf ---
@@ -170,7 +170,17 @@ drill.exec: {
 threadpool_size: 8,
 decode_threadpool_size: 1
   },
-  debug.error_on_leak: true
+  debug.error_on_leak: true,
+  udf: {
+retry-attempts: 10,
--- End diff --

Done.


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725701#comment-15725701
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/672#discussion_r91077966
  
--- Diff: distribution/src/resources/drill-override-example.conf ---
@@ -170,7 +170,17 @@ drill.exec: {
 threadpool_size: 8,
 decode_threadpool_size: 1
   },
-  debug.error_on_leak: true
+  debug.error_on_leak: true,
+  udf: {
--- End diff --

Done.


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725704#comment-15725704
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/672#discussion_r91085531
  
--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -45,11 +45,13 @@ drill.client: {
   supports-complex-types: true
 }
 
-// Directory is used as base for temporary storage of Dynamic UDF jars.
-// Set this property if you want to have custom temporary directory, 
instead of generated at runtime.
-// By default ${DRILL_TMP_DIR} is used if set.
-// drill.tmp-dir: "/tmp"
-// drill.tmp-dir: ${?DRILL_TMP_DIR}
+// Location Drill uses for temporary files, such as downloaded dynamic 
UDFs jars.
--- End diff --

1. Reverted generated directory logic.
2. "/tmp/drill/udf" + cluster-id is not a good idea, since the local udf 
directory would have a similar location to the remote one, and the remote directory 
is generated as cluster-id + /udf. So I suggest keeping it as is. The local registry 
has been reverted to its previous composition logic.
3. Removed "cluster-temp-dir"; we don't need it for now.
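For reference, the fallback behavior discussed in this thread (and visible in the drill-module.conf diff above) relies on HOCON's optional substitution. A sketch with illustrative values only, not the committed defaults:

```hocon
// Sketch only: ${?VAR} substitutes only when VAR is defined, so an
// unset DRILL_TMP_DIR leaves the earlier value in place.
drill.tmp-dir: "/tmp"
drill.tmp-dir: ${?DRILL_TMP_DIR}

// A dependent path can then be composed by value concatenation
// (the exact udf path here is illustrative):
drill.exec.udf.directory.local: ${drill.tmp-dir}"/drill/udf"
```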


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Paul Rogers
>Priority: Minor
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf





[jira] [Updated] (DRILL-4726) Dynamic UDFs support

2016-12-06 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-4726:

Description: 
Allow registering UDFs without a restart of Drillbits.
Design is described in the document below:

https://docs.google.com/document/d/1FfyJtWae5TLuyheHCfldYUpCdeIezR2RlNsrOTYyAB4/edit?usp=sharing
 

Gist - 
https://gist.github.com/arina-ielchiieva/a1c4cfa3890145c5ecb1b70a39cbff55#file-dynamicudfssupport-md
 

  was:
Allow register UDFs without  restart of Drillbits.
Design is described in document below:

https://docs.google.com/document/d/1FfyJtWae5TLuyheHCfldYUpCdeIezR2RlNsrOTYyAB4/edit?usp=sharing
 


> Dynamic UDFs support
> 
>
> Key: DRILL-4726
> URL: https://issues.apache.org/jira/browse/DRILL-4726
> Project: Apache Drill
>  Issue Type: New Feature
>Affects Versions: 1.6.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>  Labels: doc-impacting
> Fix For: 1.9.0
>
>
> Allow registering UDFs without a restart of Drillbits.
> Design is described in the document below:
> https://docs.google.com/document/d/1FfyJtWae5TLuyheHCfldYUpCdeIezR2RlNsrOTYyAB4/edit?usp=sharing
>  
> Gist - 
> https://gist.github.com/arina-ielchiieva/a1c4cfa3890145c5ecb1b70a39cbff55#file-dynamicudfssupport-md
>  





[jira] [Assigned] (DRILL-4941) UnsupportedOperationException : CASE WHEN true or null then 1 else 0 end

2016-12-06 Thread Serhii Harnyk (JIRA)

 [ https://issues.apache.org/jira/browse/DRILL-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Serhii Harnyk reassigned DRILL-4941:


Assignee: Serhii Harnyk

> UnsupportedOperationException : CASE WHEN true or null then 1 else 0 end
> 
>
> Key: DRILL-4941
> URL: https://issues.apache.org/jira/browse/DRILL-4941
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Khurram Faraaz
>Assignee: Serhii Harnyk
> Fix For: 1.9.0
>
>
> Below case expression results in UnsupportedOperationException on Drill 1.9.0 
> git commit ID: 4edabe7a
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> SELECT (CASE WHEN true or null then 1 else 0 end) from (VALUES(1));
> Error: VALIDATION ERROR: class org.apache.calcite.sql.SqlLiteral: NULL
> SQL Query null
> [Error Id: 822ec7b0-3630-478c-b82a-0acedc39a560 on centos-01.qa.lab:31010] (state=,code=0)
> -- changing null to "not null" in the search condition causes Drill to return results
> 0: jdbc:drill:schema=dfs.tmp> SELECT (CASE WHEN true or not null then 1 else 0 end) from (VALUES(1));
> +---------+
> | EXPR$0  |
> +---------+
> | 1       |
> +---------+
> 1 row selected (0.11 seconds)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.UnsupportedOperationException: class org.apache.calcite.sql.SqlLiteral: NULL
> at org.apache.calcite.util.Util.needToImplement(Util.java:920) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl.getValidatedNodeType(SqlValidatorImpl.java:1426) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlBinaryOperator.adjustType(SqlBinaryOperator.java:103) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:511) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.fun.SqlCaseOperator.checkOperandTypes(SqlCaseOperator.java:178) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:430) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.fun.SqlCaseOperator.deriveType(SqlCaseOperator.java:164) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:446) ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> {noformat}

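The failing query above hinges on SQL three-valued logic: under Kleene semantics, `true OR null` is TRUE, so the expression should validate and the CASE should return 1. The sketch below (plain Java, not Drill or Calcite code; the class and method names are illustrative) shows the OR truth table the validator is expected to honor, with Java's `null` standing in for SQL's UNKNOWN:

```java
// Illustrative only: SQL three-valued OR, with null modeling SQL UNKNOWN.
public class ThreeValuedOr {

    // TRUE dominates OR; otherwise any UNKNOWN operand propagates.
    static Boolean or(Boolean a, Boolean b) {
        if (Boolean.TRUE.equals(a) || Boolean.TRUE.equals(b)) {
            return Boolean.TRUE;
        }
        if (a == null || b == null) {
            return null; // UNKNOWN
        }
        return Boolean.FALSE;
    }

    public static void main(String[] args) {
        System.out.println(or(Boolean.TRUE, null));           // TRUE  -> CASE takes the THEN branch
        System.out.println(or(Boolean.FALSE, null));          // null  -> CASE would take ELSE
        System.out.println(or(Boolean.FALSE, Boolean.FALSE)); // FALSE
    }
}
```

Since `TRUE OR UNKNOWN` is TRUE, the query should return 1 rather than fail validation; the exception indicates Calcite's validator does not derive a type for a bare NULL literal in this boolean context.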

