[jira] [Commented] (DRILL-4831) Running refresh table metadata concurrently randomly fails with JsonParseException

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673119#comment-15673119
 ] 

ASF GitHub Bot commented on DRILL-4831:
---

Github user amansinha100 commented on a diff in the pull request:

https://github.com/apache/drill/pull/653#discussion_r88399438
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/Metadata.java ---
@@ -495,31 +499,75 @@ private ParquetFileMetadata_v3 getParquetFileMetadata_v3(ParquetTableMetadata_v3
    * @param p
    * @throws IOException
    */
-  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     module.addSerializer(ColumnMetadata_v3.class, new ColumnMetadata_v3.Serializer());
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
+    Path tmpPath = new Path(path, METADATA_FILENAME + "." + randomUUID);
+
+    FSDataOutputStream os = fs.create(tmpPath);
     mapper.writerWithDefaultPrettyPrinter().writeValue(os, parquetTableMetadata);
     os.flush();
     os.close();
+
+    // Use fileContext API as FileSystem rename is deprecated.
+    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
+    Path finalPath = new Path(path, METADATA_FILENAME);
+
+    try {
+      fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
+    } catch (Exception e) {
+      logger.info("Metadata cache file rename from {} to {} failed", tmpPath.toString(), finalPath.toString(), e);
+      throw new IOException("metadata cache file rename failed");
+    } finally {
+      if (fs.exists(tmpPath)) {
+        fs.delete(tmpPath, false);
+      }
+    }
   }
 
-  private void writeFile(ParquetTableMetadataDirs parquetTableMetadataDirs, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadataDirs parquetTableMetadataDirs, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
+    Path tmpPath = new Path(path, METADATA_DIRECTORIES_FILENAME + "." + randomUUID);
+
+    FSDataOutputStream os = fs.create(tmpPath);
     mapper.writerWithDefaultPrettyPrinter().writeValue(os, parquetTableMetadataDirs);
     os.flush();
     os.close();
+
+    // Use fileContext API as FileSystem rename is deprecated.
+    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
+    Path finalPath = new Path(path, METADATA_DIRECTORIES_FILENAME);
+
+    try {
+      fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
+    } catch (Exception e) {
+      logger.info("Metadata cache file rename from {} to {} failed", tmpPath.toString(), finalPath.toString(), e);
+      throw new IOException("metadata cache file rename failed");
--- End diff --

This IOException is masking the original exception e. Better to rethrow using the IOException(message, cause) constructor.
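
A minimal sketch of the suggested change, keeping the field and variable names from the diff above (illustrative only, not the committed fix):

{code}
try {
  fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
} catch (Exception e) {
  logger.info("Metadata cache file rename from {} to {} failed", tmpPath, finalPath, e);
  // Rethrow with the original failure attached as the cause instead of masking it.
  throw new IOException("metadata cache file rename failed", e);
} finally {
  if (fs.exists(tmpPath)) {
    fs.delete(tmpPath, false);
  }
}
{code}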


> Running refresh table metadata concurrently randomly fails with 
> JsonParseException
> --
>
> Key: DRILL-4831
> URL: https://issues.apache.org/jira/browse/DRILL-4831
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.8.0
>Reporter: Rahul Challapalli
>Assignee: Aman Sinha
> Attachments: error.log, l_3level.tgz
>
>
> git.commi

[jira] [Commented] (DRILL-4831) Running refresh table metadata concurrently randomly fails with JsonParseException

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673118#comment-15673118
 ] 

ASF GitHub Bot commented on DRILL-4831:
---

Github user amansinha100 commented on a diff in the pull request:

https://github.com/apache/drill/pull/653#discussion_r88399778
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/Metadata.java ---
@@ -495,31 +499,75 @@ private ParquetFileMetadata_v3 getParquetFileMetadata_v3(ParquetTableMetadata_v3
    * @param p
    * @throws IOException
    */
-  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     module.addSerializer(ColumnMetadata_v3.class, new ColumnMetadata_v3.Serializer());
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
+    Path tmpPath = new Path(path, METADATA_FILENAME + "." + randomUUID);
+
+    FSDataOutputStream os = fs.create(tmpPath);
     mapper.writerWithDefaultPrettyPrinter().writeValue(os, parquetTableMetadata);
     os.flush();
     os.close();
+
+    // Use fileContext API as FileSystem rename is deprecated.
+    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
+    Path finalPath = new Path(path, METADATA_FILENAME);
+
+    try {
+      fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
+    } catch (Exception e) {
+      logger.info("Metadata cache file rename from {} to {} failed", tmpPath.toString(), finalPath.toString(), e);
+      throw new IOException("metadata cache file rename failed");
+    } finally {
+      if (fs.exists(tmpPath)) {
+        fs.delete(tmpPath, false);
+      }
+    }
   }
 
-  private void writeFile(ParquetTableMetadataDirs parquetTableMetadataDirs, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadataDirs parquetTableMetadataDirs, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
+    Path tmpPath = new Path(path, METADATA_DIRECTORIES_FILENAME + "." + randomUUID);
+
+    FSDataOutputStream os = fs.create(tmpPath);
     mapper.writerWithDefaultPrettyPrinter().writeValue(os, parquetTableMetadataDirs);
     os.flush();
     os.close();
+
+    // Use fileContext API as FileSystem rename is deprecated.
+    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
--- End diff --

The creation and renaming of the temp file is common to both writeFile() methods except for METADATA_FILENAME vs. METADATA_DIRECTORIES_FILENAME. Can you create utility methods and call them from both places?
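
One possible shape for such a helper, sketched under assumptions: the class name MetadataWriteUtil, the method name writeAndRename, and the ObjectWriter parameter are hypothetical; only the Hadoop FileSystem/FileContext calls mirror the diff above.

{code}
import java.io.IOException;
import java.util.UUID;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

import com.fasterxml.jackson.databind.ObjectWriter;

public final class MetadataWriteUtil {

  /**
   * Serializes value into a unique temporary file under dir and then atomically
   * renames it to fileName, so concurrent writers never expose a partially written file.
   */
  static void writeAndRename(FileSystem fs, String dir, String fileName,
      ObjectWriter writer, Object value) throws IOException {
    // Unique temp name so concurrent REFRESH TABLE METADATA calls do not clobber each other.
    Path tmpPath = new Path(dir, fileName + "." + UUID.randomUUID());
    FSDataOutputStream os = fs.create(tmpPath);
    try {
      writer.writeValue(os, value);
      os.flush();
    } finally {
      os.close();
    }

    // FileContext.rename supports atomic overwrite; FileSystem.rename is deprecated.
    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
    Path finalPath = new Path(dir, fileName);
    try {
      fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
    } catch (Exception e) {
      // Keep the original failure as the cause (see the earlier review comment).
      throw new IOException("metadata cache file rename from " + tmpPath + " to " + finalPath + " failed", e);
    } finally {
      if (fs.exists(tmpPath)) {
        fs.delete(tmpPath, false);
      }
    }
  }
}
{code}

Each writeFile() overload would then reduce to building its ObjectMapper and calling, for example, writeAndRename(fs, path, METADATA_FILENAME, mapper.writerWithDefaultPrettyPrinter(), parquetTableMetadata).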


> Running refresh table metadata concurrently randomly fails with 
> JsonParseException
> --
>
> Key: DRILL-4831
> URL: https://issues.apache.org/jira/browse/DRILL-4831
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.8.0
>Reporter: Rahul Challapalli
>Assignee: Aman Sinha
> Attachments: error.log, l_3level.tgz
>
>
> git.commit.id.abbrev=f476eb5
> Just run the below command concurrently from 10 different JDBC connections. 
> There is a likelihood that you might encounter the below error
> Extracts from the log
> {code}
> Caused By (java.lang.AssertionError) Internal error: Error while applying 
> rule DrillPushProjIntoScan, args 

[jira] [Commented] (DRILL-4831) Running refresh table metadata concurrently randomly fails with JsonParseException

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673117#comment-15673117
 ] 

ASF GitHub Bot commented on DRILL-4831:
---

Github user amansinha100 commented on a diff in the pull request:

https://github.com/apache/drill/pull/653#discussion_r88404356
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/Metadata.java ---
@@ -495,31 +499,75 @@ private ParquetFileMetadata_v3 getParquetFileMetadata_v3(ParquetTableMetadata_v3
    * @param p
    * @throws IOException
    */
-  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     module.addSerializer(ColumnMetadata_v3.class, new ColumnMetadata_v3.Serializer());
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
--- End diff --

Is there a way to get the global queryId created by the Foreman? The name of the temp file would best be associated with the queryId. If that is not possible (I am not sure the query context is easily accessible from here), at the very least we should reuse the UUID for both METADATA_FILENAME and METADATA_DIRECTORIES_FILENAME. It is not a correctness issue, but it will avoid confusion.
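
A minimal sketch of the reuse idea; the shared tmpSuffix variable is hypothetical, while the filename constants come from the diff above:

{code}
// Generate one suffix per refresh (or use the Foreman's queryId, if it is reachable here)
// and reuse it for both cache files so the two temp files are obviously related.
String tmpSuffix = UUID.randomUUID().toString();
Path tmpMetadata = new Path(path, METADATA_FILENAME + "." + tmpSuffix);
Path tmpDirectories = new Path(path, METADATA_DIRECTORIES_FILENAME + "." + tmpSuffix);
{code}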


> Running refresh table metadata concurrently randomly fails with 
> JsonParseException
> --
>
> Key: DRILL-4831
> URL: https://issues.apache.org/jira/browse/DRILL-4831
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.8.0
>Reporter: Rahul Challapalli
>Assignee: Aman Sinha
> Attachments: error.log, l_3level.tgz
>
>
> git.commit.id.abbrev=f476eb5
> Just run the below command concurrently from 10 different JDBC connections. 
> There is a likelihood that you might encounter the below error
> Extracts from the log
> {code}
> Caused By (java.lang.AssertionError) Internal error: Error while applying 
> rule DrillPushProjIntoScan, args 
> [rel#189411:LogicalProject.NONE.ANY([]).[](input=rel#189289:Subset#3.ENUMERABLE.ANY([]).[],l_orderkey=$1,dir0=$2,dir1=$3,dir2=$4,l_shipdate=$5,l_extendedprice=$6,l_discount=$7),
>  rel#189233:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[dfs, 
> metadata_caching_pp, l_3level])]
> org.apache.calcite.util.Util.newInternal():792
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():251
> .
> .
>   java.lang.Thread.run():745
>   Caused By (org.apache.drill.common.exceptions.DrillRuntimeException) 
> com.fasterxml.jackson.core.JsonParseException: Illegal character ((CTRL-CHAR, 
> code 0)): only regular white space (\r, \n, \t) is allowed between tokens
>  at [Source: com.mapr.fs.MapRFsDataInputStream@57a574a8; line: 1, column: 2]
> org.apache.drill.exec.planner.logical.DrillPushProjIntoScan.onMatch():95
> {code}  
> Attached the complete log message and the data set



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-5047:
---

 Summary: When session option is string, query profile is displayed 
incorrectly on Web UI
 Key: DRILL-5047
 URL: https://issues.apache.org/jira/browse/DRILL-5047
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.9.0
Reporter: Arina Ielchiieva
Assignee: Arina Ielchiieva


When session option is string, query profile is displayed incorrectly on Web UI:

{noformat}
Name  Value
store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
a number or boolean, but this evaluated to a string (wrapper: 
f.t.SimpleScalar): ==> option.getValue() [in template 
"rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
means nesting-related): - Failed at: ${option.getValue()?c} [in template 
"rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
Reached through: @page_body [in template "rest/profile/*/generic.ftl" in macro 
"page_html" at line 89, column 9] - Reached through: @page_html [in template 
"rest/profile/profile.ftl" at line 247, column 1]  Java stack trace (for 
programmers):  freemarker.core.UnexpectedTypeException: [... Exception 
message was already printed; see it above ...] at 
freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
 at freemarker.core.Expression.eval(Expression.java:76) at 
freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.MixedContent.accept(MixedContent.java:57) at 
freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.MixedContent.accept(MixedContent.java:57) at 
freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.MixedContent.accept(MixedContent.java:57) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
freemarker.core.Environment.visit(Environment.java:686) at 
freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.MixedContent.accept(MixedContent.java:57) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
freemarker.core.Environment.visit(Environment.java:686) at 
freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.MixedContent.accept(MixedContent.java:57) at 
freemarker.core.Environment.visit(Environment.java:257) at 
freemarker.core.Environment.process(Environment.java:235) at 
freemarker.template.Template.process(Template.java:262) at 
org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
 at 
org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
 at 
org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
 at 
org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
 at 
org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
 at 
org.glassfish.jersey.server.mvc.internal.TemplateMethodInterceptor.aroundWriteTo(TemplateMethodInterceptor.java:77)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
 at 
org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:103)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
 at 
org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:88)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor

[jira] [Commented] (DRILL-4792) Include session options used for a query as part of the profile

2016-11-17 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673251#comment-15673251
 ] 

Arina Ielchiieva commented on DRILL-4792:
-

[~gparai] Yes, you are right. I have created a JIRA to track this issue: 
https://issues.apache.org/jira/browse/DRILL-5047

> Include session options used for a query as part of the profile
> ---
>
> Key: DRILL-4792
> URL: https://issues.apache.org/jira/browse/DRILL-4792
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 1.9.0
>
> Attachments: no_session_options.JPG, session_options_block.JPG, 
> session_options_collapsed.JPG, session_options_json.JPG
>
>
> Include session options used for a query as part of the profile.
> This will be very useful for debugging/diagnostics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5032) Drill query on hive parquet table failed with OutOfMemoryError: Java heap space

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673340#comment-15673340
 ] 

ASF GitHub Bot commented on DRILL-5032:
---

GitHub user Serhii-Harnyk opened a pull request:

https://github.com/apache/drill/pull/654

DRILL-5032: Drill query on hive parquet table failed with OutOfMemoryError



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Serhii-Harnyk/drill DRILL-5032

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/654.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #654


commit 90482d7a1be99293fc3afdf2a297ee08e8831f66
Author: Serhii-Harnyk 
Date:   2016-10-27T19:20:27Z

DRILL-5032 Drill query on hive parquet table failed with OutOfMemoryError: 
Java heap space




> Drill query on hive parquet table failed with OutOfMemoryError: Java heap 
> space
> ---
>
> Key: DRILL-5032
> URL: https://issues.apache.org/jira/browse/DRILL-5032
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Hive
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
>
> Following query on hive parquet table failed with OOM Java heap space:
> {code}
> select distinct(businessdate) from vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:02:03,597 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 283938c3-fde8-0fc6-37e1-9a568c7f5913: select distinct(businessdate) from 
> vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:05:58,502 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 1 ms
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 3 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:05:58,664 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$1
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:09:42,355 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] ERROR 
> o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, 
> exiting. Information message: Unable to handle out of memory condition in 
> Foreman.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3332) ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:136) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:457) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:166) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.jav

[jira] [Commented] (DRILL-5032) Drill query on hive parquet table failed with OutOfMemoryError: Java heap space

2016-11-17 Thread Serhii Harnyk (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673402#comment-15673402
 ] 

Serhii Harnyk commented on DRILL-5032:
--

The list of columns in every Hive partition increases the size of the serialized physical plan, which causes the OOM.
Since Hive allows every partition to have its own set of columns, the system now checks whether all partition column sets are equal; if they are, the list of columns is stored only at the Hive table level.
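
A hedged sketch of that check (the class and method names below are hypothetical and only illustrate the idea): if every partition reports exactly the same column list as the table, the plan can carry a single copy at the table level.

{code}
import java.util.List;
import java.util.Objects;

final class ColumnListDedup {

  /** Returns true when every partition has exactly the same column list as the table. */
  static <T> boolean allPartitionsMatchTable(List<T> tableColumns, List<List<T>> partitionColumns) {
    for (List<T> cols : partitionColumns) {
      if (!Objects.equals(tableColumns, cols)) {
        return false;  // at least one partition diverges; keep per-partition column lists
      }
    }
    return true;       // serialize the column list once, at the table level
  }
}
{code}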

> Drill query on hive parquet table failed with OutOfMemoryError: Java heap 
> space
> ---
>
> Key: DRILL-5032
> URL: https://issues.apache.org/jira/browse/DRILL-5032
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Hive
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
>
> Following query on hive parquet table failed with OOM Java heap space:
> {code}
> select distinct(businessdate) from vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:02:03,597 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 283938c3-fde8-0fc6-37e1-9a568c7f5913: select distinct(businessdate) from 
> vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:05:58,502 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 1 ms
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 3 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:05:58,664 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$1
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:09:42,355 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] ERROR 
> o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, 
> exiting. Information message: Unable to handle out of memory condition in 
> Foreman.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3332) ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:136) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:457) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:166) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:538) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:526) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:389) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:327) 
> ~[protobuf-java-2.5.0.jar:

[jira] [Created] (DRILL-5048) AssertionError when case statement is used with timestamp and null

2016-11-17 Thread Serhii Harnyk (JIRA)
Serhii Harnyk created DRILL-5048:


 Summary: AssertionError when case statement is used with timestamp 
and null
 Key: DRILL-5048
 URL: https://issues.apache.org/jira/browse/DRILL-5048
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.9.0
Reporter: Serhii Harnyk
Assignee: Serhii Harnyk
 Fix For: Future


AssertionError when we use case with timestamp and null:

{noformat}
0: jdbc:drill:schema=dfs.tmp> SELECT res, CASE res WHEN true THEN 
CAST('1990-10-10 22:40:50' AS TIMESTAMP) ELSE null END
. . . . . . . . . . . . . . > FROM
. . . . . . . . . . . . . . > (
. . . . . . . . . . . . . . > SELECT
. . . . . . . . . . . . . . > (CASE WHEN (false) THEN null ELSE 
CAST('1990-10-10 22:40:50' AS TIMESTAMP) END) res
. . . . . . . . . . . . . . > FROM (values(1)) foo
. . . . . . . . . . . . . . > ) foobar;
Error: SYSTEM ERROR: AssertionError: Type mismatch:
rowtype of new rel:
RecordType(TIMESTAMP(0) NOT NULL res, TIMESTAMP(0) EXPR$1) NOT NULL
rowtype of set:
RecordType(TIMESTAMP(0) res, TIMESTAMP(0) EXPR$1) NOT NULL


[Error Id: b56e0a4d-2f9e-4afd-8c60-5bc2f9d31f8f on centos-01.qa.lab:31010] 
(state=,code=0)
{noformat}

Stack trace from drillbit.log

{noformat}
Caused by: java.lang.AssertionError: Type mismatch:
rowtype of new rel:
RecordType(TIMESTAMP(0) NOT NULL res, TIMESTAMP(0) EXPR$1) NOT NULL
rowtype of set:
RecordType(TIMESTAMP(0) res, TIMESTAMP(0) EXPR$1) NOT NULL
at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1696) 
~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:295) 
~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:147) 
~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1818)
 ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1760)
 ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1017)
 ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1037)
 ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1940)
 ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:138)
 ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
... 16 common frames omitted
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4923) Use of CASE WHEN inside a sub-query results in AssertionError

2016-11-17 Thread Serhii Harnyk (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673457#comment-15673457
 ] 

Serhii Harnyk commented on DRILL-4923:
--

[~khfaraaz]
The problem with the CASE statement when using timestamp and null is not related to this JIRA, so I have created a separate JIRA to track it: DRILL-5048

> Use of CASE WHEN inside a sub-query results in AssertionError
> -
>
> Key: DRILL-4923
> URL: https://issues.apache.org/jira/browse/DRILL-4923
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
> Environment: 4 node cluster on CentOS
>Reporter: Khurram Faraaz
>Assignee: Serhii Harnyk
>Priority: Critical
> Attachments: 0_0_0.parquet
>
>
> Use of CASE WHEN inside a sub-query results in AssertionError
> Drill 1.9.0 git commit ID: f3c26e34
> parquet file used in test is attached
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select * from (SELECT CASE WHEN 'WA'='WA' THEN 
> '13' ELSE CAST(state as varchar(2)) end as state_nm from `emp_tbl` as a) 
> `LOG_FCT` where ('WA'=`LOG_FCT`.`state_nm`);
> Error: SYSTEM ERROR: AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" NOT NULL state_nm) NOT NULL
> rowtype of set:
> RecordType(VARCHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" state_nm) NOT NULL
> [Error Id: 59b8da55-cf01-41fe-ba7a-018f7fad2892 on centos-01.qa.lab:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2016-10-03 09:37:32,756 [280dd922-b97d-2ccd-551c-e425c075d91f:foreman] ERROR 
> o.a.drill.exec.work.foreman.Foreman - SYSTEM ERROR: AssertionError: Type 
> mismatch:
> rowtype of new rel:
> RecordType(CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" NOT NULL state_nm) NOT NULL
> rowtype of set:
> RecordType(VARCHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" state_nm) NOT NULL
> [Error Id: 59b8da55-cf01-41fe-ba7a-018f7fad2892 on centos-01.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" NOT NULL state_nm) NOT NULL
> rowtype of set:
> RecordType(VARCHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" state_nm) NOT NULL
> [Error Id: 59b8da55-cf01-41fe-ba7a-018f7fad2892 on centos-01.qa.lab:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0-SNAPSHOT.jar:1.9.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:825)
>  [drill-java-exec-1.9.0-SNAPSHOT.jar:1.9.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:935) 
> [drill-java-exec-1.9.0-SNAPSHOT.jar:1.9.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) 
> [drill-java-exec-1.9.0-SNAPSHOT.jar:1.9.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
> exception during fragment initialization: Internal error: Error while 
> applying rule ProjectMergeRule:force_mode, args 
> [rel#5:LogicalProject.NONE.ANY([]).[](input=rel#11103:Subset#6.NONE.ANY([]).[],state_nm='13'),
>  
> rel#11107:LogicalProject.NONE.ANY([]).[](input=rel#11095:Subset#4.NONE.ANY([]).[],state=$1)]
> ... 4 common frames omitted
> Caused by: java.lang.AssertionError: Internal error: Error while applying 
> rule ProjectMergeRule:force_mode, args 
> [rel#5:LogicalProject.NONE.ANY([]).[](input=rel#11103:Subset#6.NONE.ANY([]).[],state_nm='13'),
>  
> rel#11107:LogicalProject.NONE.ANY([]).[](input=rel#11095:Subset#4.NONE.ANY([]).[],state=$1)]
> at org.apache.calcite.util.Util.newInternal(Util.java:792) 
> ~[calcite-core-1.4.0-drill-r17.jar:1.4.0-drill-r17]
> at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:251)
>  ~[calcite-core-1.4.0-drill-r17.jar:1.4.0-drill-r17]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:808)
>  ~[calcite-core-1.4.0-drill-r17.jar:1.4.0-drill-r17]
> at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:303) 
> ~[calcite-core-1.4.0-drill-r17.jar:1.4.0-drill-r17]
> at 
>

[jira] [Commented] (DRILL-5048) AssertionError when case statement is used with timestamp and null

2016-11-17 Thread Serhii Harnyk (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673472#comment-15673472
 ] 

Serhii Harnyk commented on DRILL-5048:
--

Originally found by [~khfaraaz] in DRILL-4923

> AssertionError when case statement is used with timestamp and null
> --
>
> Key: DRILL-5048
> URL: https://issues.apache.org/jira/browse/DRILL-5048
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
> Fix For: Future
>
>
> AssertionError when we use case with timestamp and null:
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> SELECT res, CASE res WHEN true THEN 
> CAST('1990-10-10 22:40:50' AS TIMESTAMP) ELSE null END
> . . . . . . . . . . . . . . > FROM
> . . . . . . . . . . . . . . > (
> . . . . . . . . . . . . . . > SELECT
> . . . . . . . . . . . . . . > (CASE WHEN (false) THEN null ELSE 
> CAST('1990-10-10 22:40:50' AS TIMESTAMP) END) res
> . . . . . . . . . . . . . . > FROM (values(1)) foo
> . . . . . . . . . . . . . . > ) foobar;
> Error: SYSTEM ERROR: AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(TIMESTAMP(0) NOT NULL res, TIMESTAMP(0) EXPR$1) NOT NULL
> rowtype of set:
> RecordType(TIMESTAMP(0) res, TIMESTAMP(0) EXPR$1) NOT NULL
> [Error Id: b56e0a4d-2f9e-4afd-8c60-5bc2f9d31f8f on centos-01.qa.lab:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(TIMESTAMP(0) NOT NULL res, TIMESTAMP(0) EXPR$1) NOT NULL
> rowtype of set:
> RecordType(TIMESTAMP(0) res, TIMESTAMP(0) EXPR$1) NOT NULL
> at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1696) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:295) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:147) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1818)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1760)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1017)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1037)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1940)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:138)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> ... 16 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4842) SELECT * on JSON data results in NumberFormatException

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673490#comment-15673490
 ] 

ASF GitHub Bot commented on DRILL-4842:
---

Github user jinma1978 commented on the issue:

https://github.com/apache/drill/pull/594
  
@chunhui-shi I tried to play with different Java structures for the column path, but saw no real improvement in performance.


> SELECT * on JSON data results in NumberFormatException
> --
>
> Key: DRILL-4842
> URL: https://issues.apache.org/jira/browse/DRILL-4842
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Khurram Faraaz
>Assignee: Chunhui Shi
> Attachments: tooManyNulls.json
>
>
> Note that doing SELECT c1 returns correct results, the failure is seen when 
> we do SELECT star. json.all_text_mode was set to true.
> JSON file tooManyNulls.json has one key c1 with 4096 nulls as its value and 
> the 4097th key c1 has the value "Hello World"
> git commit ID : aaf220ff
> MapR Drill 1.8.0 RPM
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> alter session set 
> `store.json.all_text_mode`=true;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | store.json.all_text_mode updated.  |
> +---++
> 1 row selected (0.27 seconds)
> 0: jdbc:drill:schema=dfs.tmp> SELECT c1 FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> +--+
> |  c1  |
> +--+
> | Hello World  |
> +--+
> 1 row selected (0.243 seconds)
> 0: jdbc:drill:schema=dfs.tmp> select * FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> Error: SYSTEM ERROR: NumberFormatException: Hello World
> Fragment 0:0
> [Error Id: 9cafb3f9-3d5c-478a-b55c-900602b8765e on centos-01.qa.lab:31010]
>  (java.lang.NumberFormatException) Hello World
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI():95
> 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.varTypesToInt():120
> org.apache.drill.exec.test.generated.FiltererGen1169.doSetup():45
> org.apache.drill.exec.test.generated.FiltererGen1169.setup():54
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.generateSV2Filterer():195
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.setupNewSchema():107
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():78
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():94
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745 (state=,code=0)
> 0: jdbc:drill:schema=dfs.tmp>
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.NumberFormatException: Hello World
> at 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI(StringFunctionHelpers.java:95)
>  ~[drill-java-exec-1.8.0-SNAPSHOT.jar:

[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673653#comment-15673653
 ] 

Arina Ielchiieva commented on DRILL-5047:
-

It should look like the attached screenshot - session_options_all_types.JPG

> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
>  at 
> org.glassfish.jersey.server.mvc.internal.TemplateMethodInterceptor.aroundWriteTo(TemplateMethodInterceptor.java:77)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterIn

[jira] [Updated] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5047:

Attachment: session_options_all_types.JPG

> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
>  at 
> org.glassfish.jersey.server.mvc.internal.TemplateMethodInterceptor.aroundWriteTo(TemplateMethodInterceptor.java:77)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
>  at 
> org.glassfish.jersey.server.internal.J

[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673831#comment-15673831
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

GitHub user arina-ielchiieva opened a pull request:

https://github.com/apache/drill/pull/655

DRILL-5047: When session option is string, query profile is displayed…

… incorrectly on Web UI

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arina-ielchiieva/drill DRILL-5047

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/655.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #655


commit 686d820ca4216f3da4d3570f9158223707b2259a
Author: Arina Ielchiieva 
Date:   2016-11-17T12:44:34Z

DRILL-5047: When session option is string, query profile is displayed 
incorrectly on Web UI




> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter

[jira] [Assigned] (DRILL-5040) Interrupted CTAS should not succeed & should not create physical file on disk

2016-11-17 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva reassigned DRILL-5040:
---

Assignee: Arina Ielchiieva

> Interrupted CTAS should not succeed & should not create physical file on disk
> -
>
> Key: DRILL-5040
> URL: https://issues.apache.org/jira/browse/DRILL-5040
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.9.0
>Reporter: Khurram Faraaz
>Assignee: Arina Ielchiieva
>
> We should not allow CTAS to succeed (i.e. create a physical file on disk) in 
> the case where it was interrupted (via Ctrl-C).
> Drill 1.9.0
> git commit ID : db30854
> Consider the below CTAS that was interrupted using Ctrl-C
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> create table temp_t1 as select * from 
> `twoKeyJsn.json`; 
> [ issue Ctrl-C while the above CTAS is running ]
> No rows affected (7.694 seconds)
> {noformat}
> I verified that physical file was created on disk, even though the above CTAS 
> was Canceled
> {noformat}
> [root@centos-01 ~]# hadoop fs -ls /tmp/temp_t1*
> -rwxr-xr-x   3 root root   36713198 2016-11-14 10:51 
> /tmp/temp_t1/0_0_0.parquet
> {noformat}
> We are able to do a select on the CTAS table (above) that was Canceled.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from temp_t1;
> +--+
> |  EXPR$0  |
> +--+
> | 3747840  |
> +--+
> 1 row selected (0.183 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5040) Interrupted CTAS should not succeed & should not create physical file on disk

2016-11-17 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674070#comment-15674070
 ] 

Arina Ielchiieva commented on DRILL-5040:
-

During Jira verification , please consider checking clean up for different 
store.formats, ex: json, parquet, csv.

> Interrupted CTAS should not succeed & should not create physical file on disk
> -
>
> Key: DRILL-5040
> URL: https://issues.apache.org/jira/browse/DRILL-5040
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.9.0
>Reporter: Khurram Faraaz
>Assignee: Arina Ielchiieva
>
> We should not allow CTAS to succeed (i.e. create a physical file on disk) in 
> the case where it was interrupted (via Ctrl-C).
> Drill 1.9.0
> git commit ID : db30854
> Consider the below CTAS that was interrupted using Ctrl-C
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> create table temp_t1 as select * from 
> `twoKeyJsn.json`; 
> [ issue Ctrl-C while the above CTAS is running ]
> No rows affected (7.694 seconds)
> {noformat}
> I verified that physical file was created on disk, even though the above CTAS 
> was Canceled
> {noformat}
> [root@centos-01 ~]# hadoop fs -ls /tmp/temp_t1*
> -rwxr-xr-x   3 root root   36713198 2016-11-14 10:51 
> /tmp/temp_t1/0_0_0.parquet
> {noformat}
> We are able to do a select on the CTAS table (above) that was Canceled.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from temp_t1;
> +--+
> |  EXPR$0  |
> +--+
> | 3747840  |
> +--+
> 1 row selected (0.183 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (DRILL-5040) Interrupted CTAS should not succeed & should not create physical file on disk

2016-11-17 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674070#comment-15674070
 ] 

Arina Ielchiieva edited comment on DRILL-5040 at 11/17/16 4:02 PM:
---

During Jira verification, please consider checking clean up for different 
store.formats, ex: json, parquet, csv.


was (Author: arina):
During Jira verification , please consider checking clean up for different 
store.formats, ex: json, parquet, csv.

> Interrupted CTAS should not succeed & should not create physical file on disk
> -
>
> Key: DRILL-5040
> URL: https://issues.apache.org/jira/browse/DRILL-5040
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.9.0
>Reporter: Khurram Faraaz
>Assignee: Arina Ielchiieva
>
> We should not allow CTAS to succeed (i.e. create a physical file on disk) in 
> the case where it was interrupted (via Ctrl-C).
> Drill 1.9.0
> git commit ID : db30854
> Consider the below CTAS that was interrupted using Ctrl-C
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> create table temp_t1 as select * from 
> `twoKeyJsn.json`; 
> [ issue Ctrl-C while the above CTAS is running ]
> No rows affected (7.694 seconds)
> {noformat}
> I verified that physical file was created on disk, even though the above CTAS 
> was Canceled
> {noformat}
> [root@centos-01 ~]# hadoop fs -ls /tmp/temp_t1*
> -rwxr-xr-x   3 root root   36713198 2016-11-14 10:51 
> /tmp/temp_t1/0_0_0.parquet
> {noformat}
> We are able to do a select on the CTAS table (above) that was Canceled.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from temp_t1;
> +--+
> |  EXPR$0  |
> +--+
> | 3747840  |
> +--+
> 1 row selected (0.183 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (DRILL-5032) Drill query on hive parquet table failed with OutOfMemoryError: Java heap space

2016-11-17 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong reassigned DRILL-5032:
---

Assignee: Chunhui Shi  (was: Serhii Harnyk)

Assigning to [~cshi] for review.

> Drill query on hive parquet table failed with OutOfMemoryError: Java heap 
> space
> ---
>
> Key: DRILL-5032
> URL: https://issues.apache.org/jira/browse/DRILL-5032
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Hive
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Chunhui Shi
>
> Following query on hive parquet table failed with OOM Java heap space:
> {code}
> select distinct(businessdate) from vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:02:03,597 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 283938c3-fde8-0fc6-37e1-9a568c7f5913: select distinct(businessdate) from 
> vmdr_trades where trade_date='2016-04-12'
> 2016-08-31 08:05:58,502 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 1 ms
> 2016-08-31 08:05:58,506 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 3 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$2
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,663 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:05:58,664 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning 
> class: 
> org.apache.drill.exec.planner.sql.logical.HivePushPartitionFilterIntoScan$1
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze 
> filter tree: 0 ms
> 2016-08-31 08:05:58,665 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - No conditions were found eligible for 
> partition pruning.Total pruning elapsed time: 0 ms
> 2016-08-31 08:09:42,355 [283938c3-fde8-0fc6-37e1-9a568c7f5913:foreman] ERROR 
> o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, 
> exiting. Information message: Unable to handle out of memory condition in 
> Foreman.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3332) ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
>  ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:136) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:457) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:166) 
> ~[na:1.8.0_74]
> at java.lang.StringBuilder.append(StringBuilder.java:76) 
> ~[na:1.8.0_74]
> at 
> com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:538) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:526) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:389) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:327) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:286) 
> ~[protobuf-java-2.5.0.jar:na]
> at com.google.protobuf.TextFormat$Printer.print(TextFormat.java:273) 
> ~[protobuf-java-2.5.0.jar:na]
> at 
> com.goo

[jira] [Assigned] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong reassigned DRILL-5047:
---

Assignee: Gautam Kumar Parai  (was: Arina Ielchiieva)

Assigning to [~gparai] for review.

> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
>  at 
> org.glassfish.jersey.server.mvc.internal.TemplateMethodInterceptor.aroundWriteTo(TemplateMethodInterceptor.java:77)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162

[jira] [Created] (DRILL-5049) difference in results - correlated subquery interacting with null equality join

2016-11-17 Thread Khurram Faraaz (JIRA)
Khurram Faraaz created DRILL-5049:
-

 Summary: difference in results - correlated subquery interacting 
with null equality join
 Key: DRILL-5049
 URL: https://issues.apache.org/jira/browse/DRILL-5049
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Affects Versions: 1.9.0
Reporter: Khurram Faraaz


Here is a query that uses null equality join. Drill 1.9.0 returns 124 records, 
whereas Postgres 9.3 returns 145 records. I am on Drill 1.9.0 git commit id: 
db308549

I have attached the results from Drill 1.9.0 and Postgres, please review.

{noformat}
0: jdbc:drill:schema=dfs.tmp> explain plan for
. . . . . . . . . . . . . . > SELECT *
. . . . . . . . . . . . . . > FROM `t_alltype.parquet` t1
. . . . . . . . . . . . . . > WHERE EXISTS
. . . . . . . . . . . . . . > (
. . . . . . . . . . . . . . > SELECT *
. . . . . . . . . . . . . . > FROM `t_alltype.parquet` t2
. . . . . . . . . . . . . . > WHERE t1.c4 = t2.c4 OR (t1.c4 IS 
NULL AND t2.c4 IS NULL)
. . . . . . . . . . . . . . > );
+--+--+
| text | json |
+--+--+
| 00-00Screen
00-01  Project(*=[$0])
00-02Project(T30¦¦*=[$0])
00-03  HashJoin(condition=[AND(=($1, $2), =($1, $3))], joinType=[inner])
00-05Project(T30¦¦*=[$0], c4=[$1])
00-07  Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath 
[path=maprfs:///tmp/t_alltype.parquet]], 
selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
usedMetadataFile=false, columns=[`*`]]])
00-04HashAgg(group=[{0, 1}], agg#0=[MIN($2)])
00-06  Project(c40=[$1], c400=[$1], $f0=[true])
00-08HashJoin(condition=[IS NOT DISTINCT FROM($0, $1)], 
joinType=[inner])
00-10  Scan(groupscan=[ParquetGroupScan 
[entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
usedMetadataFile=false, columns=[`c4`]]])
00-09  Project(c40=[$0])
00-11HashAgg(group=[{0}])
00-12  Scan(groupscan=[ParquetGroupScan 
[entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
usedMetadataFile=false, columns=[`c4`]]])
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-5049) difference in results - correlated subquery interacting with null equality join

2016-11-17 Thread Khurram Faraaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khurram Faraaz updated DRILL-5049:
--
Attachment: nullEqJoin_17.postgres
nullEqJoin_17.drill_res
t_alltype.parquet

> difference in results - correlated subquery interacting with null equality 
> join
> ---
>
> Key: DRILL-5049
> URL: https://issues.apache.org/jira/browse/DRILL-5049
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.9.0
>Reporter: Khurram Faraaz
> Attachments: nullEqJoin_17.drill_res, nullEqJoin_17.postgres, 
> t_alltype.parquet
>
>
> Here is a query that uses null equality join. Drill 1.9.0 returns 124 
> records, whereas Postgres 9.3 returns 145 records. I am on Drill 1.9.0 git 
> commit id: db308549
> I have attached the results from Drill 1.9.0 and Postgres, please review.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> explain plan for
> . . . . . . . . . . . . . . > SELECT *
> . . . . . . . . . . . . . . > FROM `t_alltype.parquet` t1
> . . . . . . . . . . . . . . > WHERE EXISTS
> . . . . . . . . . . . . . . > (
> . . . . . . . . . . . . . . > SELECT *
> . . . . . . . . . . . . . . > FROM `t_alltype.parquet` t2
> . . . . . . . . . . . . . . > WHERE t1.c4 = t2.c4 OR (t1.c4 
> IS NULL AND t2.c4 IS NULL)
> . . . . . . . . . . . . . . > );
> +--+--+
> | text | json |
> +--+--+
> | 00-00Screen
> 00-01  Project(*=[$0])
> 00-02Project(T30¦¦*=[$0])
> 00-03  HashJoin(condition=[AND(=($1, $2), =($1, $3))], 
> joinType=[inner])
> 00-05Project(T30¦¦*=[$0], c4=[$1])
> 00-07  Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
> selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
> usedMetadataFile=false, columns=[`*`]]])
> 00-04HashAgg(group=[{0, 1}], agg#0=[MIN($2)])
> 00-06  Project(c40=[$1], c400=[$1], $f0=[true])
> 00-08HashJoin(condition=[IS NOT DISTINCT FROM($0, $1)], 
> joinType=[inner])
> 00-10  Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
> selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
> usedMetadataFile=false, columns=[`c4`]]])
> 00-09  Project(c40=[$0])
> 00-11HashAgg(group=[{0}])
> 00-12  Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
> selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
> usedMetadataFile=false, columns=[`c4`]]])
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5040) Interrupted CTAS should not succeed & should not create physical file on disk

2016-11-17 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674180#comment-15674180
 ] 

Khurram Faraaz commented on DRILL-5040:
---

Sure, will do.

> Interrupted CTAS should not succeed & should not create physical file on disk
> -
>
> Key: DRILL-5040
> URL: https://issues.apache.org/jira/browse/DRILL-5040
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.9.0
>Reporter: Khurram Faraaz
>Assignee: Arina Ielchiieva
>
> We should not allow CTAS to succeed (i.e. create a physical file on disk) in 
> the case where it was interrupted (via Ctrl-C).
> Drill 1.9.0
> git commit ID : db30854
> Consider the below CTAS that was interrupted using Ctrl-C
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> create table temp_t1 as select * from 
> `twoKeyJsn.json`; 
> [ issue Ctrl-C while the above CTAS is running ]
> No rows affected (7.694 seconds)
> {noformat}
> I verified that physical file was created on disk, even though the above CTAS 
> was Canceled
> {noformat}
> [root@centos-01 ~]# hadoop fs -ls /tmp/temp_t1*
> -rwxr-xr-x   3 root root   36713198 2016-11-14 10:51 
> /tmp/temp_t1/0_0_0.parquet
> {noformat}
> We are able to do a select on the CTAS table (above) that was Canceled.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from temp_t1;
> +--+
> |  EXPR$0  |
> +--+
> | 3747840  |
> +--+
> 1 row selected (0.183 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4842) SELECT * on JSON data results in NumberFormatException

2016-11-17 Thread Chunhui Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674364#comment-15674364
 ] 

Chunhui Shi commented on DRILL-4842:


I don't think this fix is aimed at improving performance. The performance 
numbers for LinkedHashSet and HashSet might differ, but we know for sure that 
adding or getting an item costs more with a LinkedHashSet, so unless it is 
really required, we should not add that extra cost.

By the way, have you tried reproducing this bug with fewer nulls, e.g. 1000? 
The problem seems to be that the schema could not be built from the first batch 
seen; could this issue also be seen with other formats/data sources that have 
null values?
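
For reference, a small self-contained generator makes it easy to vary the null 
count as suggested above. The class name, output file name and record layout 
below are assumptions modeled on the description of tooManyNulls.json in the 
report, not an attachment from the issue:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Writes N records of {"c1": null} followed by one {"c1": "Hello World"},
// mirroring the layout described for tooManyNulls.json in the report.
public class NullHeavyJsonGenerator {
  public static void main(String[] args) throws IOException {
    int nullCount = args.length > 0 ? Integer.parseInt(args[0]) : 4096;
    Path out = Paths.get(args.length > 1 ? args[1] : "tooManyNulls_test.json");

    List<String> records = new ArrayList<>(nullCount + 1);
    for (int i = 0; i < nullCount; i++) {
      records.add("{\"c1\": null}");           // rows seen in the first batch(es)
    }
    records.add("{\"c1\": \"Hello World\"}");  // the row that finally carries a string

    Files.write(out, records, StandardCharsets.UTF_8);
    System.out.println("Wrote " + records.size() + " records to " + out.toAbsolutePath());
  }
}
{code}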


> SELECT * on JSON data results in NumberFormatException
> --
>
> Key: DRILL-4842
> URL: https://issues.apache.org/jira/browse/DRILL-4842
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Khurram Faraaz
>Assignee: Chunhui Shi
> Attachments: tooManyNulls.json
>
>
> Note that doing SELECT c1 returns correct results; the failure is seen when 
> we do SELECT star. json.all_text_mode was set to true.
> The JSON file tooManyNulls.json has one key c1 whose value is null in the first 
> 4096 records, and in the 4097th record c1 has the value "Hello World".
> git commit ID : aaf220ff
> MapR Drill 1.8.0 RPM
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> alter session set 
> `store.json.all_text_mode`=true;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | store.json.all_text_mode updated.  |
> +---++
> 1 row selected (0.27 seconds)
> 0: jdbc:drill:schema=dfs.tmp> SELECT c1 FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> +--+
> |  c1  |
> +--+
> | Hello World  |
> +--+
> 1 row selected (0.243 seconds)
> 0: jdbc:drill:schema=dfs.tmp> select * FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> Error: SYSTEM ERROR: NumberFormatException: Hello World
> Fragment 0:0
> [Error Id: 9cafb3f9-3d5c-478a-b55c-900602b8765e on centos-01.qa.lab:31010]
>  (java.lang.NumberFormatException) Hello World
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI():95
> 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.varTypesToInt():120
> org.apache.drill.exec.test.generated.FiltererGen1169.doSetup():45
> org.apache.drill.exec.test.generated.FiltererGen1169.setup():54
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.generateSV2Filterer():195
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.setupNewSchema():107
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():78
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():94
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():

[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674373#comment-15674373
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

Github user Ben-Zvi commented on the issue:

https://github.com/apache/drill/pull/655
  
LGTM 


> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
>  at 
> org.glassfish.jersey.server.mvc.internal.TemplateMethodInterceptor.aroundWriteTo(TemplateMethodInterceptor.java:77)
>  at 
> org.glassfish.jersey.message.internal.Wri

[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674447#comment-15674447
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

Github user gparai commented on a diff in the pull request:

https://github.com/apache/drill/pull/655#discussion_r88514074
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -132,7 +132,7 @@
 <#list model.getOptionList() as option>
   
 ${option.getName()}
-${option.getValue()?c}
+${option.getValue()?string}
--- End diff --

I am not familiar with FreeMarker, so I do not understand the change. In 
DRILL-4792 you mentioned:
> Since org.apache.drill.exec.server.options.OptionValue.getValue() returns 
Object, the FreeMarker built-in `?c` is used to convert the Object to a string.

Could you please explain why that was not sufficient and how using `?string` 
changes that?


> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessage

[jira] [Commented] (DRILL-5043) Function that returns a unique id per session/connection similar to MySQL's CONNECTION_ID()

2016-11-17 Thread Nagarajan Chinnasamy (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674512#comment-15674512
 ] 

Nagarajan Chinnasamy commented on DRILL-5043:
-

Thanks for the references.

I went through QueryContext.java, UserSession.java and ContextFunctions.java and 
understood how the ContextInformation instance is injected into a function.

As per my understanding, there is no property in UserSession that provides a 
unique id representing a user session, so UserSession needs to be modified to 
introduce a new property for a session_id. I would like to know what 
considerations should go into adding such a property. Can it be a simple 
"static" variable? What should its datatype be? Are there any 
clustered-environment points I should take care of? Is there an existing 
unique-id generator utility function that I can make use of?

Does it have to be a UDF? Why not another one of the ContextFunctions?

Appreciate your inputs. Thanks.
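
For illustration only, here is a minimal sketch of the kind of per-session 
property being discussed. The class and method names are hypothetical and are 
not Drill's UserSession API; the sketch just shows why a static field would not 
give one id per session and why a UUID string is a workable datatype even 
across a cluster:

{code}
import java.util.UUID;

// Hypothetical session object -- NOT Drill's UserSession -- used only to
// illustrate the shape of the change being discussed.
public class SessionWithId {
  // One id per session instance. A static field would be shared by every
  // session on the Drillbit, so it could not distinguish connections.
  // A random UUID needs no coordination between nodes, which keeps ids
  // unique even when sessions are created on different Drillbits.
  private final String sessionId = UUID.randomUUID().toString();

  public String getSessionId() {
    return sessionId;
  }
}
{code}

Whether the value is then surfaced through a standalone UDF or another context 
function would mostly be a question of where it gets injected from, which is 
exactly the open question above.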

> Function that returns a unique id per session/connection similar to MySQL's 
> CONNECTION_ID()
> ---
>
> Key: DRILL-5043
> URL: https://issues.apache.org/jira/browse/DRILL-5043
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.8.0
>Reporter: Nagarajan Chinnasamy
>Priority: Minor
>  Labels: CONNECTION_ID, SESSION, UDF
>
> Design and implement a function that returns a unique id per 
> session/connection similar to MySQL's CONNECTION_ID().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674603#comment-15674603
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/655#discussion_r88528313
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -132,7 +132,7 @@
 <#list model.getOptionList() as option>
   
 ${option.getName()}
-${option.getValue()?c}
+${option.getValue()?string}
--- End diff --

Per [the documentation](http://freemarker.org/docs/ref_builtins.html), 
`?string` is deprecated for certain types. Why isn't `${option.getValue()}` or 
`${option.getValue().toString()}` sufficient?


> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:8

[jira] [Updated] (DRILL-5049) difference in results - correlated subquery interacting with null equality join

2016-11-17 Thread Khurram Faraaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khurram Faraaz updated DRILL-5049:
--
Priority: Critical  (was: Major)

> difference in results - correlated subquery interacting with null equality 
> join
> ---
>
> Key: DRILL-5049
> URL: https://issues.apache.org/jira/browse/DRILL-5049
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.9.0
>Reporter: Khurram Faraaz
>Priority: Critical
> Attachments: nullEqJoin_17.drill_res, nullEqJoin_17.postgres, 
> t_alltype.parquet
>
>
> Here is a query that uses null equality join. Drill 1.9.0 returns 124 
> records, whereas Postgres 9.3 returns 145 records. I am on Drill 1.9.0 git 
> commit id: db308549
> I have attached the results from Drill 1.9.0 and Postgres, please review.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> explain plan for
> . . . . . . . . . . . . . . > SELECT *
> . . . . . . . . . . . . . . > FROM `t_alltype.parquet` t1
> . . . . . . . . . . . . . . > WHERE EXISTS
> . . . . . . . . . . . . . . > (
> . . . . . . . . . . . . . . > SELECT *
> . . . . . . . . . . . . . . > FROM `t_alltype.parquet` t2
> . . . . . . . . . . . . . . > WHERE t1.c4 = t2.c4 OR (t1.c4 
> IS NULL AND t2.c4 IS NULL)
> . . . . . . . . . . . . . . > );
> +--+--+
> | text | json |
> +--+--+
> | 00-00Screen
> 00-01  Project(*=[$0])
> 00-02Project(T30¦¦*=[$0])
> 00-03  HashJoin(condition=[AND(=($1, $2), =($1, $3))], 
> joinType=[inner])
> 00-05Project(T30¦¦*=[$0], c4=[$1])
> 00-07  Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
> selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
> usedMetadataFile=false, columns=[`*`]]])
> 00-04HashAgg(group=[{0, 1}], agg#0=[MIN($2)])
> 00-06  Project(c40=[$1], c400=[$1], $f0=[true])
> 00-08HashJoin(condition=[IS NOT DISTINCT FROM($0, $1)], 
> joinType=[inner])
> 00-10  Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
> selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
> usedMetadataFile=false, columns=[`c4`]]])
> 00-09  Project(c40=[$0])
> 00-11HashAgg(group=[{0}])
> 00-12  Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=maprfs:///tmp/t_alltype.parquet]], 
> selectionRoot=maprfs:/tmp/t_alltype.parquet, numFiles=1, 
> usedMetadataFile=false, columns=[`c4`]]])
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5034) Select timestamp from hive generated parquet always return in UTC

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674649#comment-15674649
 ] 

ASF GitHub Bot commented on DRILL-5034:
---

GitHub user vdiravka opened a pull request:

https://github.com/apache/drill/pull/656

DRILL-5034: Select timestamp from hive generated parquet always return in 
UTC

- The TIMESTAMP_IMPALA function is reverted to retain the local timezone
- TIMESTAMP_IMPALA_LOCALTIMEZONE is deleted
- Retain the local timezone for INT96 timestamp values in parquet files while
  the PARQUET_READER_INT96_AS_TIMESTAMP option is on
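
As a point of reference (plain java.time, not Drill's Parquet reader code), the 
sketch below renders the same stored instant in UTC and in America/Los_Angeles; 
this is the 7-8 hour shift visible in the drill-1.8 vs drill-1.9 output quoted 
in the issue below:

{code}
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Renders one stored instant in UTC and in America/Los_Angeles to show the
// shift reported below (2016-10-24 03:03:58 UTC is 2016-10-23 20:03:58 PDT).
public class TimezoneRenderingDemo {
  public static void main(String[] args) {
    DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss z");
    Instant stored = Instant.parse("2016-10-24T03:03:58Z");

    ZonedDateTime asUtc = stored.atZone(ZoneId.of("UTC"));
    ZonedDateTime asLocal = stored.atZone(ZoneId.of("America/Los_Angeles"));

    System.out.println("Rendered in UTC:  " + fmt.format(asUtc));
    System.out.println("Rendered locally: " + fmt.format(asLocal));
  }
}
{code}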

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vdiravka/drill DRILL-5034

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/656.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #656


commit fa4029c493a25eccd6de0deadbaf3d3d749eafbe
Author: Vitalii Diravka 
Date:   2016-11-14T21:13:28Z

DRILL-5034: Select timestamp from hive generated parquet always return in 
UTC
- The TIMESTAMP_IMPALA function is reverted to retain the local timezone
- TIMESTAMP_IMPALA_LOCALTIMEZONE is deleted
- Retain the local timezone for INT96 timestamp values in parquet files while
  the PARQUET_READER_INT96_AS_TIMESTAMP option is on




> Select timestamp from hive generated parquet always return in UTC
> -
>
> Key: DRILL-5034
> URL: https://issues.apache.org/jira/browse/DRILL-5034
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Affects Versions: 1.9.0
>Reporter: Krystal
>Assignee: Vitalii Diravka
>
> commit id: 5cea9afa6278e21574c6a982ae5c3d82085ef904
> Reading timestamp data against a hive parquet table from drill automatically 
> converts the timestamp data to UTC. 
> {code}
> SELECT TIMEOFDAY() FROM (VALUES(1));
> +--+
> |EXPR$0|
> +--+
> | 2016-11-10 12:33:26.547 America/Los_Angeles  |
> +--+
> {code}
> data schema:
> {code}
> message hive_schema {
>   optional int32 voter_id;
>   optional binary name (UTF8);
>   optional int32 age;
>   optional binary registration (UTF8);
>   optional fixed_len_byte_array(3) contributions (DECIMAL(6,2));
>   optional int32 voterzone;
>   optional int96 create_timestamp;
>   optional int32 create_date (DATE);
> }
> {code}
> Using drill-1.8, the returned timestamps match the table data:
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> `/user/hive/warehouse/voter_hive_parquet` limit 5;
> ++
> | EXPR$0 |
> ++
> | 2016-10-23 20:03:58.0  |
> | null   |
> | 2016-09-09 12:01:18.0  |
> | 2017-03-06 20:35:55.0  |
> | 2017-01-20 22:32:43.0  |
> ++
> 5 rows selected (1.032 seconds)
> {code}
> If the user timezone is changed to UTC, then the timestamp data is returned in 
> UTC time.
> Using drill-1.9, the returned timestamps get converted to UTC even though the 
> user timezone is PST.
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> dfs.`/user/hive/warehouse/voter_hive_parquet` limit 5;
> ++
> | EXPR$0 |
> ++
> | 2016-10-24 03:03:58.0  |
> | null   |
> | 2016-09-09 19:01:18.0  |
> | 2017-03-07 04:35:55.0  |
> | 2017-01-21 06:32:43.0  |
> ++
> {code}
> {code}
> alter session set `store.parquet.reader.int96_as_timestamp`=true;
> +---+---+
> |  ok   |  summary  |
> +---+---+
> | true  | store.parquet.reader.int96_as_timestamp updated.  |
> +---+---+
> select create_timestamp from dfs.`/user/hive/warehouse/voter_hive_parquet` 
> limit 5;
> ++
> |create_timestamp|
> ++
> | 2016-10-24 03:03:58.0  |
> | null   |
> | 2016-09-09 19:01:18.0  |
> | 2017-03-07 04:35:55.0  |
> | 2017-01-21 06:32:43.0  |
> ++
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (DRILL-5034) Select timestamp from hive generated parquet always return in UTC

2016-11-17 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong reassigned DRILL-5034:
---

Assignee: Parth Chandra  (was: Vitalii Diravka)

Assigning to [~parthc] for review.

> Select timestamp from hive generated parquet always return in UTC
> -
>
> Key: DRILL-5034
> URL: https://issues.apache.org/jira/browse/DRILL-5034
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Affects Versions: 1.9.0
>Reporter: Krystal
>Assignee: Parth Chandra
>
> commit id: 5cea9afa6278e21574c6a982ae5c3d82085ef904
> Reading timestamp data against a hive parquet table from drill automatically 
> converts the timestamp data to UTC. 
> {code}
> SELECT TIMEOFDAY() FROM (VALUES(1));
> +--+
> |EXPR$0|
> +--+
> | 2016-11-10 12:33:26.547 America/Los_Angeles  |
> +--+
> {code}
> data schema:
> {code}
> message hive_schema {
>   optional int32 voter_id;
>   optional binary name (UTF8);
>   optional int32 age;
>   optional binary registration (UTF8);
>   optional fixed_len_byte_array(3) contributions (DECIMAL(6,2));
>   optional int32 voterzone;
>   optional int96 create_timestamp;
>   optional int32 create_date (DATE);
> }
> {code}
> Using drill-1.8, the returned timestamps match the table data:
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> `/user/hive/warehouse/voter_hive_parquet` limit 5;
> ++
> | EXPR$0 |
> ++
> | 2016-10-23 20:03:58.0  |
> | null   |
> | 2016-09-09 12:01:18.0  |
> | 2017-03-06 20:35:55.0  |
> | 2017-01-20 22:32:43.0  |
> ++
> 5 rows selected (1.032 seconds)
> {code}
> If the user timezone is changed to UTC, then the timestamp data is returned in 
> UTC time.
> Using drill-1.9, the returned timestamps get converted to UTC even though the 
> user timezone is PST.
> {code}
> select convert_from(create_timestamp, 'TIMESTAMP_IMPALA') from 
> dfs.`/user/hive/warehouse/voter_hive_parquet` limit 5;
> ++
> | EXPR$0 |
> ++
> | 2016-10-24 03:03:58.0  |
> | null   |
> | 2016-09-09 19:01:18.0  |
> | 2017-03-07 04:35:55.0  |
> | 2017-01-21 06:32:43.0  |
> ++
> {code}
> {code}
> alter session set `store.parquet.reader.int96_as_timestamp`=true;
> +---+---+
> |  ok   |  summary  |
> +---+---+
> | true  | store.parquet.reader.int96_as_timestamp updated.  |
> +---+---+
> select create_timestamp from dfs.`/user/hive/warehouse/voter_hive_parquet` 
> limit 5;
> ++
> |create_timestamp|
> ++
> | 2016-10-24 03:03:58.0  |
> | null   |
> | 2016-09-09 19:01:18.0  |
> | 2017-03-07 04:35:55.0  |
> | 2017-01-21 06:32:43.0  |
> ++
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674837#comment-15674837
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/655#discussion_r88547748
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -132,7 +132,7 @@
 <#list model.getOptionList() as option>
   
 ${option.getName()}
-${option.getValue()?c}
+${option.getValue()?string}
--- End diff --

According to the [documentation](), `?string` works only for numbers and 
formats them for "human display"; `?c` also works for numbers (and formats them 
as "C" language constants).

The challenge is that this template line must work for all data types. So the 
suggestion of using toString() is good. But since toString() may be used for 
debugging, perhaps add a toDisplayString() that formats the value as we 
want it to appear in the Web UI.

Another issue: are we sure that the value of option is always non-null at 
this point?
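
A minimal sketch of the display accessor suggested above; the class below is 
illustrative and is not Drill's actual OptionValue, but it shows how a single 
null-safe method would let the template avoid type-sensitive built-ins such as 
`?c` or `?string`:

{code}
// Illustrative only -- not Drill's actual OptionValue class.
public class OptionValueSketch {
  private final Object value;   // may hold a Boolean, Long, Double or String

  public OptionValueSketch(Object value) {
    this.value = value;
  }

  // Raw value, kept for programmatic use and debugging.
  public Object getValue() {
    return value;
  }

  // Formats the value for the Web UI; never returns null.
  public String toDisplayString() {
    return value == null ? "" : String.valueOf(value);
  }
}
{code}

The template line could then become `${option.toDisplayString()}` for every 
option type, which would also sidestep the non-null question above.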


> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerView

[jira] [Created] (DRILL-5050) C++ client library has symbol resolution issues when loaded by a process that already uses boost::asio

2016-11-17 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-5050:


 Summary: C++ client library has symbol resolution issues when 
loaded by a process that already uses boost::asio
 Key: DRILL-5050
 URL: https://issues.apache.org/jira/browse/DRILL-5050
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - C++
Affects Versions: 1.6.0
 Environment: MacOs
Reporter: Parth Chandra
Assignee: Parth Chandra
 Fix For: 2.0.0


h4. Summary

On MacOS, the Drill ODBC driver hangs when loaded by any process that might 
also be using {{boost::asio}}. This is observed in trying to connect to Drill 
via the ODBC driver using Tableau.


h4. Analysis
The problem is seen in the Drill client library on MacOS. In the method 
{code}
 DrillClientImpl::recvHandshake
.
.
    m_io_service.reset();
    if (DrillClientConfig::getHandshakeTimeout() > 0){
        m_deadlineTimer.expires_from_now(
            boost::posix_time::seconds(DrillClientConfig::getHandshakeTimeout()));
        m_deadlineTimer.async_wait(boost::bind(
            &DrillClientImpl::handleHShakeReadTimeout,
            this,
            boost::asio::placeholders::error
            ));
        DRILL_MT_LOG(DRILL_LOG(LOG_TRACE) << "Started new handshake wait timer with "
            << DrillClientConfig::getHandshakeTimeout() << " seconds." << std::endl;)
    }

    async_read(
        this->m_socket,
        boost::asio::buffer(m_rbuf, LEN_PREFIX_BUFLEN),
        boost::bind(
            &DrillClientImpl::handleHandshake,
            this,
            m_rbuf,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)
        );
    DRILL_MT_LOG(DRILL_LOG(LOG_DEBUG)
        << "DrillClientImpl::recvHandshake: async read waiting for server handshake response.\n";)
    m_io_service.run();
.
.

{code}

The call to {{io_service::run}} returns without invoking any of the handlers 
that have been registered. The {{io_service}} object has two tasks in its 
queue, the timer task, and the socket read task. However, in the run method, 
the state of the {{io_service}} object appears to change and the number of 
outstanding tasks becomes zero. The run method therefore returns immediately. 
Subsequently, any query request sent to the server hangs as data is never 
pulled off the socket.

This is bizarre behaviour and typically points to build problems. 

More investigation revealed a more interesting thing. {{boost::asio}} is a 
header only library. In other words, there is no actual library 
{{libboost_asio}}. All the code is included into the binary that includes the 
headers of {{boost::asio}}. It so happens that the Tableau process has a 
library (libtabquery) that uses {{boost::asio}} so the code for {{boost::asio}} 
is already loaded into process memory. When the Drill client library is loaded 
(via the ODBC driver), it loads its own copy of the {{boost::asio}} code. At 
runtime, the Drill client code jumps to an address that resolves into the 
libtabquery copy of {{boost::asio}}, and that code returns incorrectly.

Really? How is that even allowed? Two copies of {{boost::asio}} in the same 
process? Even if that is allowed, since the code is included at compile time, 
calls to the {{boost::asio}} library should be resolved using internal linkage. 
And if the call to {{boost::asio}} is not resolved statically, the dynamic 
loader would encounter two symbols with the same name and would give us an 
error. And even if the linker picks one of the symbols, as long as the code is 
the same (for example if both libraries use the same version of boost) can that 
cause a problem? Even more importantly, how do we fix that?

h4. Some assembly required

The disassembled libdrillClient shows this code inside recvHandshake
{code}
0003dd8f    movq    -0xb0(%rbp), %rdi
0003dd96    addq    $0xc0, %rdi
0003dd9d    callq   0x1bff42            ## symbol stub for: __ZN5boost4asio10io_service3runEv
0003dda2    movq    -0xb0(%rbp), %rdi
0003dda9    cmpq    $0x0, 0x190(%rdi)
0003ddb4    movq    %rax, -0x158(%rbp)
{code}

and later in the code 
{code}
00057216    retq
00057217    nopw    (%rax,%rax)
__ZN5boost4asio10io_service3runEv:      ## definition of io_service::run
00057220    pushq   %rbp
00057221    movq    %rsp, %rbp
00057224    subq    $0x30, %rsp
00057228    leaq    -0x18(%rbp), %rax
0005722c    movq    %rdi, -0x8(%rbp)
00057230    movq    -0x8(%rbp), %rdi
00057234    movq    %rdi, -0x28(%rbp)
{code}


Note that in recvHandshake the call instruction jumps to an address that is an 
offset (0x1bff42). This offset happens to be beyond the end of the lib

[jira] [Commented] (DRILL-4984) Limit 0 raises NullPointerException on JDBC storage sources

2016-11-17 Thread Holger Kiel (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675285#comment-15675285
 ] 

Holger Kiel commented on DRILL-4984:


This one is important for BI tools, as they use LIMIT 0 queries for schema 
discovery. Currently, JasperReports Studio, for example, no longer works.

> Limit 0 raises NullPointerException on JDBC storage sources
> ---
>
> Key: DRILL-4984
> URL: https://issues.apache.org/jira/browse/DRILL-4984
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.8.0, 1.9.0
> Environment: Latest 1.9 Snapshot, also 1.8 release version,
> mysql-connector-java-5.1.30, mysql-connector-java-5.1.40
>Reporter: Holger Kiel
>
> NullPointerExceptions occur when a query with 'limit 0' is executed on a jdbc 
> storage source (e.g. Mysql):
> {code}
> 0: jdbc:drill:zk=local> select * from mysql.sugarcrm.sales_person limit 0;
> Error: SYSTEM ERROR: NullPointerException
> [Error Id: 6cd676fc-6db9-40b3-81d5-c2db044aeb77 on localhost:31010]
>   (org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
> during fragment initialization: null
> org.apache.drill.exec.work.foreman.Foreman.run():281
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
>   Caused By (java.lang.NullPointerException) null
> 
> org.apache.drill.exec.planner.sql.handlers.FindHardDistributionScans.visit():55
> org.apache.calcite.rel.core.TableScan.accept():166
> org.apache.calcite.rel.RelShuttleImpl.visitChild():53
> org.apache.calcite.rel.RelShuttleImpl.visitChildren():68
> org.apache.calcite.rel.RelShuttleImpl.visit():126
> org.apache.calcite.rel.AbstractRelNode.accept():256
> org.apache.calcite.rel.RelShuttleImpl.visitChild():53
> org.apache.calcite.rel.RelShuttleImpl.visitChildren():68
> org.apache.calcite.rel.RelShuttleImpl.visit():126
> org.apache.calcite.rel.AbstractRelNode.accept():256
> 
> org.apache.drill.exec.planner.sql.handlers.FindHardDistributionScans.canForceSingleMode():45
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():262
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():290
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():168
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan():123
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():97
> org.apache.drill.exec.work.foreman.Foreman.runSQL():1008
> org.apache.drill.exec.work.foreman.Foreman.run():264
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745 (state=,code=0)
> 0: jdbc:drill:zk=local> select * from mysql.sugarcrm.sales_person limit 1;
> +-----+-------------+----------------+------------+-------------+
> | id  | first_name  |   last_name    | full_name  | manager_id  |
> +-----+-------------+----------------+------------+-------------+
> | 1   | null        | Administrator  | admin      | 0           |
> +-----+-------------+----------------+------------+-------------+
> 1 row selected (0,235 seconds)
> {code}
> Other datasources are okay:
> {code}
> 0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 0;
> +--+---+---+-+--++-++--+-+---++-++-++--+-+-+--+
> | fqn  | filename  | filepath  | suffix  | employee_id  | full_name  | 
> first_name  | last_name  | position_id  | position_title  | store_id  | 
> department_id  | birth_date  | hire_date  | salary  | supervisor_id  | 
> education_level  | marital_status  | gender  | management_role  |
> +--+---+---+-+--++-++--+-+---++-++-++--+-+-+--+
> +--+---+---+-+--++-++--+-+---++-++-++--+-+-+--+
> No rows selected (0,309 seconds)
> {code}





[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675381#comment-15675381
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

Github user kkhatua commented on the issue:

https://github.com/apache/drill/pull/655
  
@arina-ielchiieva Your fix will not conflict, but it is in a branch rebased 
off 4b1902c.
@sudheeshkatkam had reverted the commit for DRILL-4373 two days later. He is 
using the following branch to prepare for the Apache commit (including the 
invitation to vote).
[Ref: https://github.com/sudheeshkatkam/drill/commits/drill-1.9.0]
So, you'll need to rebase before making a pull request.


> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
>  at 
> org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
>  at 
> org.glassfish.jersey.message.internal.W

[jira] [Commented] (DRILL-5050) C++ client library has symbol resolution issues when loaded by a process that already uses boost::asio

2016-11-17 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675518#comment-15675518
 ] 

Laurent Goujon commented on DRILL-5050:
---

Nice find!

> C++ client library has symbol resolution issues when loaded by a process that 
> already uses boost::asio
> --
>
> Key: DRILL-5050
> URL: https://issues.apache.org/jira/browse/DRILL-5050
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.6.0
> Environment: MacOs
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 2.0.0
>
>
> h4. Summary
> On MacOS, the Drill ODBC driver hangs when loaded by any process that might 
> also be using {{boost::asio}}. This is observed in trying to connect to Drill 
> via the ODBC driver using Tableau.
> h4. Analysis
> The problem is seen in the Drill client library on MacOS. In the method 
> {code}
>  DrillClientImpl::recvHandshake
> .
> .
> m_io_service.reset();
> if (DrillClientConfig::getHandshakeTimeout() > 0){
> 
> m_deadlineTimer.expires_from_now(boost::posix_time::seconds(DrillClientConfig::getHandshakeTimeout()));
> m_deadlineTimer.async_wait(boost::bind(
> &DrillClientImpl::handleHShakeReadTimeout,
> this,
> boost::asio::placeholders::error
> ));
> DRILL_MT_LOG(DRILL_LOG(LOG_TRACE) << "Started new handshake wait 
> timer with "
> << DrillClientConfig::getHandshakeTimeout() << " seconds." << 
> std::endl;)
> }
> async_read(
> this->m_socket,
> boost::asio::buffer(m_rbuf, LEN_PREFIX_BUFLEN),
> boost::bind(
> &DrillClientImpl::handleHandshake,
> this,
> m_rbuf,
> boost::asio::placeholders::error,
> boost::asio::placeholders::bytes_transferred)
> );
> DRILL_MT_LOG(DRILL_LOG(LOG_DEBUG) << "DrillClientImpl::recvHandshake: 
> async read waiting for server handshake response.\n";)
> m_io_service.run();
> .
> .
> {code}
> The call to {{io_service::run}} returns without invoking any of the handlers 
> that have been registered. The {{io_service}} object has two tasks in its 
> queue, the timer task, and the socket read task. However, in the run method, 
> the state of the {{io_service}} object appears to change and the number of 
> outstanding tasks becomes zero. The run method therefore returns immediately. 
> Subsequently, any query request sent to the server hangs as data is never 
> pulled off the socket.
> This is bizarre behaviour and typically points to build problems. 
> More investigation revealed a more interesting thing. {{boost::asio}} is a 
> header only library. In other words, there is no actual library 
> {{libboost_asio}}. All the code is included into the binary that includes the 
> headers of {{boost::asio}}. It so happens that the Tableau process has a 
> library (libtabquery) that uses {{boost::asio}} so the code for 
> {{boost::asio}} is already loaded into process memory. When the drill client 
> library (via the ODBC driver) is loaded by the loader, the drill client 
> library loads its own copy of the {{boost:asio}} code.  At runtime, the drill 
> client code jumps to an address that resolves to an address inside the 
> libtabquery copy of {{boost::asio}}. And that code returns incorrectly.
> Really? How is that even allowed? Two copies of {{boost::asio}} in the same 
> process? Even if that is allowed, since the code is included at compile time, 
> calls to the {{boost::asio}} library should be resolved using internal 
> linkage. And if the call to {{boost::asio}} is not resolved statically, the 
> dynamic loader would encounter two symbols with the same name and would give 
> us an error. And even if the linker picks one of the symbols, as long as the 
> code is the same (for example if both libraries use the same version of 
> boost) can that cause a problem? Even more importantly, how do we fix that?
> h4. Some assembly required
> The disassembled libdrillClient shows this code inside recvHandshake
> {code}
> 0003dd8fmovq-0xb0(%rbp), %rdi   
> 0003dd96addq$0xc0, %rdi
> 0003dd9dcallq   0x1bff42## symbol stub for: 
> __ZN5boost4asio10io_service3runEv
> 0003dda2movq-0xb0(%rbp), %rdi
> 0003dda9cmpq$0x0, 0x190(%rdi)
> 0003ddb4movq%rax, -0x158(%rbp)
> {code}
> and later in the code 
> {code}
> 00057216retq
> 00057217nopw(%rax,%rax)
> __ZN5boost4asio10io_service3runEv: ## definition of 
> io_service::run
> 0

[jira] [Commented] (DRILL-5050) C++ client library has symbol resolution issues when loaded by a process that already uses boost::asio

2016-11-17 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675549#comment-15675549
 ] 

Laurent Goujon commented on DRILL-5050:
---

May be related to this issue: https://svn.boost.org/trac/boost/ticket/11070?

> C++ client library has symbol resolution issues when loaded by a process that 
> already uses boost::asio
> --
>
> Key: DRILL-5050
> URL: https://issues.apache.org/jira/browse/DRILL-5050
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.6.0
> Environment: MacOs
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 2.0.0
>
>
> h4. Summary
> On MacOS, the Drill ODBC driver hangs when loaded by any process that might 
> also be using {{boost::asio}}. This is observed in trying to connect to Drill 
> via the ODBC driver using Tableau.
> h4. Analysis
> The problem is seen in the Drill client library on MacOS. In the method 
> {code}
>  DrillClientImpl::recvHandshake
> .
> .
> m_io_service.reset();
> if (DrillClientConfig::getHandshakeTimeout() > 0){
> 
> m_deadlineTimer.expires_from_now(boost::posix_time::seconds(DrillClientConfig::getHandshakeTimeout()));
> m_deadlineTimer.async_wait(boost::bind(
> &DrillClientImpl::handleHShakeReadTimeout,
> this,
> boost::asio::placeholders::error
> ));
> DRILL_MT_LOG(DRILL_LOG(LOG_TRACE) << "Started new handshake wait 
> timer with "
> << DrillClientConfig::getHandshakeTimeout() << " seconds." << 
> std::endl;)
> }
> async_read(
> this->m_socket,
> boost::asio::buffer(m_rbuf, LEN_PREFIX_BUFLEN),
> boost::bind(
> &DrillClientImpl::handleHandshake,
> this,
> m_rbuf,
> boost::asio::placeholders::error,
> boost::asio::placeholders::bytes_transferred)
> );
> DRILL_MT_LOG(DRILL_LOG(LOG_DEBUG) << "DrillClientImpl::recvHandshake: 
> async read waiting for server handshake response.\n";)
> m_io_service.run();
> .
> .
> {code}
> The call to {{io_service::run}} returns without invoking any of the handlers 
> that have been registered. The {{io_service}} object has two tasks in its 
> queue, the timer task, and the socket read task. However, in the run method, 
> the state of the {{io_service}} object appears to change and the number of 
> outstanding tasks becomes zero. The run method therefore returns immediately. 
> Subsequently, any query request sent to the server hangs as data is never 
> pulled off the socket.
> This is bizarre behaviour and typically points to build problems. 
> More investigation revealed a more interesting thing. {{boost::asio}} is a 
> header only library. In other words, there is no actual library 
> {{libboost_asio}}. All the code is included into the binary that includes the 
> headers of {{boost::asio}}. It so happens that the Tableau process has a 
> library (libtabquery) that uses {{boost::asio}} so the code for 
> {{boost::asio}} is already loaded into process memory. When the drill client 
> library (via the ODBC driver) is loaded by the loader, the drill client 
> library loads its own copy of the {{boost:asio}} code.  At runtime, the drill 
> client code jumps to an address that resolves to an address inside the 
> libtabquery copy of {{boost::asio}}. And that code returns incorrectly.
> Really? How is that even allowed? Two copies of {{boost::asio}} in the same 
> process? Even if that is allowed, since the code is included at compile time, 
> calls to the {{boost::asio}} library should be resolved using internal 
> linkage. And if the call to {{boost::asio}} is not resolved statically, the 
> dynamic loader would encounter two symbols with the same name and would give 
> us an error. And even if the linker picks one of the symbols, as long as the 
> code is the same (for example if both libraries use the same version of 
> boost) can that cause a problem? Even more importantly, how do we fix that?
> h4. Some assembly required
> The disassembled libdrillClient shows this code inside recvHandshake
> {code}
> 0003dd8fmovq-0xb0(%rbp), %rdi   
> 0003dd96addq$0xc0, %rdi
> 0003dd9dcallq   0x1bff42## symbol stub for: 
> __ZN5boost4asio10io_service3runEv
> 0003dda2movq-0xb0(%rbp), %rdi
> 0003dda9cmpq$0x0, 0x190(%rdi)
> 0003ddb4movq%rax, -0x158(%rbp)
> {code}
> and later in the code 
> {code}
> 00057216retq
> 00057217nopw(%rax,%rax)
> __ZN5boost4asio10io_service3runE

[jira] [Commented] (DRILL-5050) C++ client library has symbol resolution issues when loaded by a process that already uses boost::asio

2016-11-17 Thread Parth Chandra (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675560#comment-15675560
 ] 

Parth Chandra commented on DRILL-5050:
--

No, I don't think this is the same issue. I did also try the -fvisibility flag, 
BTW.
I'll post boost build and other instructions for the Mac as a patch to resolve 
this issue.

> C++ client library has symbol resolution issues when loaded by a process that 
> already uses boost::asio
> --
>
> Key: DRILL-5050
> URL: https://issues.apache.org/jira/browse/DRILL-5050
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.6.0
> Environment: MacOs
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 2.0.0
>
>
> h4. Summary
> On MacOS, the Drill ODBC driver hangs when loaded by any process that might 
> also be using {{boost::asio}}. This is observed in trying to connect to Drill 
> via the ODBC driver using Tableau.
> h4. Analysis
> The problem is seen in the Drill client library on MacOS. In the method 
> {code}
>  DrillClientImpl::recvHandshake
> .
> .
> m_io_service.reset();
> if (DrillClientConfig::getHandshakeTimeout() > 0){
> 
> m_deadlineTimer.expires_from_now(boost::posix_time::seconds(DrillClientConfig::getHandshakeTimeout()));
> m_deadlineTimer.async_wait(boost::bind(
> &DrillClientImpl::handleHShakeReadTimeout,
> this,
> boost::asio::placeholders::error
> ));
> DRILL_MT_LOG(DRILL_LOG(LOG_TRACE) << "Started new handshake wait 
> timer with "
> << DrillClientConfig::getHandshakeTimeout() << " seconds." << 
> std::endl;)
> }
> async_read(
> this->m_socket,
> boost::asio::buffer(m_rbuf, LEN_PREFIX_BUFLEN),
> boost::bind(
> &DrillClientImpl::handleHandshake,
> this,
> m_rbuf,
> boost::asio::placeholders::error,
> boost::asio::placeholders::bytes_transferred)
> );
> DRILL_MT_LOG(DRILL_LOG(LOG_DEBUG) << "DrillClientImpl::recvHandshake: 
> async read waiting for server handshake response.\n";)
> m_io_service.run();
> .
> .
> {code}
> The call to {{io_service::run}} returns without invoking any of the handlers 
> that have been registered. The {{io_service}} object has two tasks in its 
> queue, the timer task, and the socket read task. However, in the run method, 
> the state of the {{io_service}} object appears to change and the number of 
> outstanding tasks becomes zero. The run method therefore returns immediately. 
> Subsequently, any query request sent to the server hangs as data is never 
> pulled off the socket.
> This is bizarre behaviour and typically points to build problems. 
> More investigation revealed a more interesting thing. {{boost::asio}} is a 
> header only library. In other words, there is no actual library 
> {{libboost_asio}}. All the code is included into the binary that includes the 
> headers of {{boost::asio}}. It so happens that the Tableau process has a 
> library (libtabquery) that uses {{boost::asio}} so the code for 
> {{boost::asio}} is already loaded into process memory. When the drill client 
> library (via the ODBC driver) is loaded by the loader, the drill client 
> library loads its own copy of the {{boost:asio}} code.  At runtime, the drill 
> client code jumps to an address that resolves to an address inside the 
> libtabquery copy of {{boost::asio}}. And that code returns incorrectly.
> Really? How is that even allowed? Two copies of {{boost::asio}} in the same 
> process? Even if that is allowed, since the code is included at compile time, 
> calls to the {{boost::asio}} library should be resolved using internal 
> linkage. And if the call to {{boost::asio}} is not resolved statically, the 
> dynamic loader would encounter two symbols with the same name and would give 
> us an error. And even if the linker picks one of the symbols, as long as the 
> code is the same (for example if both libraries use the same version of 
> boost) can that cause a problem? Even more importantly, how do we fix that?
> h4. Some assembly required
> The disassembled libdrillClient shows this code inside recvHandshake
> {code}
> 0003dd8fmovq-0xb0(%rbp), %rdi   
> 0003dd96addq$0xc0, %rdi
> 0003dd9dcallq   0x1bff42## symbol stub for: 
> __ZN5boost4asio10io_service3runEv
> 0003dda2movq-0xb0(%rbp), %rdi
> 0003dda9cmpq$0x0, 0x190(%rdi)
> 0003ddb4movq%rax, -0x158(%rbp)
> {code}
> and later in the code 
> {code}
> 0

[jira] [Commented] (DRILL-5050) C++ client library has symbol resolution issues when loaded by a process that already uses boost::asio

2016-11-17 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675607#comment-15675607
 ] 

Laurent Goujon commented on DRILL-5050:
---

If I understand correctly, -fvisibility doesn't work on Mac, as it still exports 
some symbols. And I'm guessing this is because the two libraries are using two 
different versions of boost, or would even the same boost version cause the 
same issue?

Linux probably has the same issue (I checked the library and lots of 
boost::asio symbols are exported there too), which means it could conflict with 
the version installed by the system (or brew). Or the opposite: building the 
client against system libraries and conflicting with a third party's copy.

> C++ client library has symbol resolution issues when loaded by a process that 
> already uses boost::asio
> --
>
> Key: DRILL-5050
> URL: https://issues.apache.org/jira/browse/DRILL-5050
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.6.0
> Environment: MacOs
>Reporter: Parth Chandra
>Assignee: Parth Chandra
> Fix For: 2.0.0
>
>
> h4. Summary
> On MacOS, the Drill ODBC driver hangs when loaded by any process that might 
> also be using {{boost::asio}}. This is observed in trying to connect to Drill 
> via the ODBC driver using Tableau.
> h4. Analysis
> The problem is seen in the Drill client library on MacOS. In the method 
> {code}
>  DrillClientImpl::recvHandshake
> .
> .
> m_io_service.reset();
> if (DrillClientConfig::getHandshakeTimeout() > 0){
> 
> m_deadlineTimer.expires_from_now(boost::posix_time::seconds(DrillClientConfig::getHandshakeTimeout()));
> m_deadlineTimer.async_wait(boost::bind(
> &DrillClientImpl::handleHShakeReadTimeout,
> this,
> boost::asio::placeholders::error
> ));
> DRILL_MT_LOG(DRILL_LOG(LOG_TRACE) << "Started new handshake wait 
> timer with "
> << DrillClientConfig::getHandshakeTimeout() << " seconds." << 
> std::endl;)
> }
> async_read(
> this->m_socket,
> boost::asio::buffer(m_rbuf, LEN_PREFIX_BUFLEN),
> boost::bind(
> &DrillClientImpl::handleHandshake,
> this,
> m_rbuf,
> boost::asio::placeholders::error,
> boost::asio::placeholders::bytes_transferred)
> );
> DRILL_MT_LOG(DRILL_LOG(LOG_DEBUG) << "DrillClientImpl::recvHandshake: 
> async read waiting for server handshake response.\n";)
> m_io_service.run();
> .
> .
> {code}
> The call to {{io_service::run}} returns without invoking any of the handlers 
> that have been registered. The {{io_service}} object has two tasks in its 
> queue, the timer task, and the socket read task. However, in the run method, 
> the state of the {{io_service}} object appears to change and the number of 
> outstanding tasks becomes zero. The run method therefore returns immediately. 
> Subsequently, any query request sent to the server hangs as data is never 
> pulled off the socket.
> This is bizarre behaviour and typically points to build problems. 
> More investigation revealed a more interesting thing. {{boost::asio}} is a 
> header only library. In other words, there is no actual library 
> {{libboost_asio}}. All the code is included into the binary that includes the 
> headers of {{boost::asio}}. It so happens that the Tableau process has a 
> library (libtabquery) that uses {{boost::asio}} so the code for 
> {{boost::asio}} is already loaded into process memory. When the drill client 
> library (via the ODBC driver) is loaded by the loader, the drill client 
> library loads its own copy of the {{boost:asio}} code.  At runtime, the drill 
> client code jumps to an address that resolves to an address inside the 
> libtabquery copy of {{boost::asio}}. And that code returns incorrectly.
> Really? How is that even allowed? Two copies of {{boost::asio}} in the same 
> process? Even if that is allowed, since the code is included at compile time, 
> calls to the {{boost::asio}} library should be resolved using internal 
> linkage. And if the call to {{boost::asio}} is not resolved statically, the 
> dynamic loader would encounter two symbols with the same name and would give 
> us an error. And even if the linker picks one of the symbols, as long as the 
> code is the same (for example if both libraries use the same version of 
> boost) can that cause a problem? Even more importantly, how do we fix that?
> h4. Some assembly required
> The disassembled libdrillClient shows this code inside recvHandshake
> {code}
> 0003dd8fmovq-0xb0(%rbp), %rd

[jira] [Commented] (DRILL-4462) Slow JOIN Query On Remote MongoDB

2016-11-17 Thread Mridul Chopra (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675764#comment-15675764
 ] 

Mridul Chopra commented on DRILL-4462:
--

Can you please share the link for the Drill 1.9.0 source code / executable?

> Slow JOIN Query On Remote MongoDB
> -
>
> Key: DRILL-4462
> URL: https://issues.apache.org/jira/browse/DRILL-4462
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - CLI, Storage - MongoDB
>Affects Versions: 1.5.0
>Reporter: Rifat Mahmud
> Attachments: fragmentprof.PNG
>
>
> Regardless of the number of collections in the MongoDB database, simple join 
> query, like select * from t1, t2 where t1.a=t2.b is taking on and around 27 
> seconds from drill-embedded running on a single machine.
> Here are the profiles:
> https://drive.google.com/open?id=0B-J_8-KYz50mZ1NjSzlUUjR3Q0U
> https://drive.google.com/open?id=0B-J_8-KYz50mcTFpRmxKOWdfak0
> Screenshot of fragment profile has been attached.





[jira] [Commented] (DRILL-4462) Slow JOIN Query On Remote MongoDB

2016-11-17 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675862#comment-15675862
 ] 

Khurram Faraaz commented on DRILL-4462:
---

You can get the source by doing a git clone from 
https://github.com/apache/drill, checking out the latest commit, and then 
building and deploying the binaries (they will be available under the 
distribution/target/ directory); that way you get Drill 1.9.0 binaries. Or you 
can download the already released version, Drill 1.8.0, from here - 
https://drill.apache.org/download/

> Slow JOIN Query On Remote MongoDB
> -
>
> Key: DRILL-4462
> URL: https://issues.apache.org/jira/browse/DRILL-4462
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - CLI, Storage - MongoDB
>Affects Versions: 1.5.0
>Reporter: Rifat Mahmud
> Attachments: fragmentprof.PNG
>
>
> Regardless of the number of collections in the MongoDB database, simple join 
> query, like select * from t1, t2 where t1.a=t2.b is taking on and around 27 
> seconds from drill-embedded running on a single machine.
> Here are the profiles:
> https://drive.google.com/open?id=0B-J_8-KYz50mZ1NjSzlUUjR3Q0U
> https://drive.google.com/open?id=0B-J_8-KYz50mcTFpRmxKOWdfak0
> Screenshot of fragment profile has been attached.





[jira] [Commented] (DRILL-5047) When session option is string, query profile is displayed incorrectly on Web UI

2016-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15675887#comment-15675887
 ] 

ASF GitHub Bot commented on DRILL-5047:
---

Github user kkhatua commented on a diff in the pull request:

https://github.com/apache/drill/pull/655#discussion_r88602974
  
--- Diff: exec/java-exec/src/main/resources/rest/profile/profile.ftl ---
@@ -132,7 +132,7 @@
 <#list model.getOptionList() as option>
   
 ${option.getName()}
-${option.getValue()?c}
+${option.getValue()?string}
--- End diff --

Tried this out. All the options take non-null values. Any attempt to set a 
property (a String, for example) to null failed. So, I guess as long as a 
property takes only non-null values, things should be fine.
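
If we want that guarantee to be explicit rather than accidental, a small guard 
at the point where option values are set would enforce it. The class below is 
only a sketch under that assumption, not existing Drill code:

{code}
import java.util.Objects;

// Hypothetical guard (illustration only): reject null option values at set
// time, so the Web UI and the FreeMarker template never see a null.
public final class OptionValueGuard {

  private OptionValueGuard() {}

  public static <T> T requireValue(String optionName, T value) {
    return Objects.requireNonNull(value,
        "Option '" + optionName + "' must not be null");
  }
}
{code}

With such a guard in place, the template only ever has to render a non-null 
value, whatever its type.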


> When session option is string, query profile is displayed incorrectly on Web 
> UI
> ---
>
> Key: DRILL-5047
> URL: https://issues.apache.org/jira/browse/DRILL-5047
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Gautam Kumar Parai
> Attachments: session_options_all_types.JPG
>
>
> When session option is string, query profile is displayed incorrectly on Web 
> UI:
> {noformat}
> Name  Value
> store.format  FreeMarker template error: For "?c" left-hand operand: Expected 
> a number or boolean, but this evaluated to a string (wrapper: 
> f.t.SimpleScalar): ==> option.getValue() [in template 
> "rest/profile/profile.ftl" at line 135, column 27]  FTL stack trace ("~" 
> means nesting-related): - Failed at: ${option.getValue()?c} [in template 
> "rest/profile/profile.ftl" in macro "page_body" at line 135, column 25] - 
> Reached through: @page_body [in template "rest/profile/*/generic.ftl" in 
> macro "page_html" at line 89, column 9] - Reached through: @page_html [in 
> template "rest/profile/profile.ftl" at line 247, column 1]  Java stack 
> trace (for programmers):  freemarker.core.UnexpectedTypeException: [... 
> Exception message was already printed; see it above ...] at 
> freemarker.core.BuiltInsForMultipleTypes$AbstractCBI._eval(BuiltInsForMultipleTypes.java:598)
>  at freemarker.core.Expression.eval(Expression.java:76) at 
> freemarker.core.Expression.evalAndCoerceToString(Expression.java:80) at 
> freemarker.core.DollarVariable.accept(DollarVariable.java:40) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:157) at 
> freemarker.core.Environment.visitIteratorBlock(Environment.java:501) at 
> freemarker.core.IteratorBlock.accept(IteratorBlock.java:67) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visitByHiddingParent(Environment.java:278) at 
> freemarker.core.ConditionalBlock.accept(ConditionalBlock.java:48) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Macro$Context.runMacro(Macro.java:173) at 
> freemarker.core.Environment.visit(Environment.java:686) at 
> freemarker.core.UnifiedCall.accept(UnifiedCall.java:80) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.MixedContent.accept(MixedContent.java:57) at 
> freemarker.core.Environment.visit(Environment.java:257) at 
> freemarker.core.Environment.process(Environment.java:235) at 
> freemarker.template.Template.process(Template.java:262) at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
>  at 
> org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
>  at 
> org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
>  at 
> org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.jav