[GitHub] drill pull request #1114: Drill-6104: Added Logfile Reader

2018-03-16 Thread paul-rogers
Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1114#discussion_r175244984
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/log/LogFormatPlugin.java
 ---
@@ -0,0 +1,151 @@
+package org.apache.drill.exec.store.log;
--- End diff --

The comment says fixed. Did you forget to commit the changes to your 
branch? Github still shows the original code...


---


[GitHub] drill pull request #1171: DRILL-6231: Fix memory allocation for repeated lis...

2018-03-16 Thread paul-rogers
Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/1171#discussion_r175244575
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/record/RecordBatchSizer.java 
---
@@ -395,11 +395,24 @@ private void allocateMap(AbstractMapVector map, int 
recordCount) {
   }
 }
 
+private void allocateRepeatedList(RepeatedListVector vector, int 
recordCount) {
+  vector.allocateOffsetsNew(recordCount);
+  recordCount *= getCardinality();
+  ColumnSize child = children.get(vector.getField().getName());
+  child.allocateVector(vector.getDataVector(), recordCount);
--- End diff --

One interesting feature of this vector is that the child can be null during 
reading for some time. That is, in JSON, we may see that the field is `foo: 
[[]]` but not yet know the inner type. So, for safety, allocate the inner 
vector only if `vector.getDataVector()` is non-null.

Also note that a repeated list can be of any dimension, so the inner 
vector can be another repeated list of lesser dimension. The code here handles 
that case. But does the sizer itself handle nested repeated lists? Do we have 
a unit test for a 2D and a 3D list?

We've never had to handle these before, because only JSON can produce such 
structures and we don't seem to exercise most operators with complex JSON 
structures. We probably should.
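
For illustration, a minimal sketch of the null-safe version (same names as the diff above; the non-null guard is the only addition):

```java
private void allocateRepeatedList(RepeatedListVector vector, int recordCount) {
  vector.allocateOffsetsNew(recordCount);
  recordCount *= getCardinality();
  // During JSON reading the inner vector may not exist yet (e.g. `foo: [[]]`
  // read before the inner type is known), so guard against a null data vector.
  if (vector.getDataVector() != null) {
    ColumnSize child = children.get(vector.getField().getName());
    child.allocateVector(vector.getDataVector(), recordCount);
  }
}
```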


---


[jira] [Created] (DRILL-6262) IndexOutOfBoundsException in RecordBatchSizer for empty VariableWidthVector

2018-03-16 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-6262:


 Summary: IndexOutOfBoundsException in RecordBatchSizer for empty 
VariableWidthVector
 Key: DRILL-6262
 URL: https://issues.apache.org/jira/browse/DRILL-6262
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.14.0


ColumnSize inside RecordBatchSizer throws an IndexOutOfBoundsException while 
computing totalDataSize for a VariableWidthVector when the underlying vector 
is empty, with no allocated memory.

This happens because totalDataSize is computed from the offsetVector value at 
index n, where n is the total number of records in the vector. When the vector 
is empty, n=0 and the offsetVector's DrillBuf is empty as well, so retrieving 
the value at index 0 from the offsetVector throws the exception.
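
A minimal sketch of the guard that avoids this (helper and variable names are 
illustrative, not the actual RecordBatchSizer code):

{code}
// The data size is read from the offset vector at index n = valueCount, so
// the empty case must be short-circuited before touching the buffer: with
// zero records there is no allocated offset buffer to read from.
private int safeTotalDataSize(int valueCount, UInt4Vector offsetVector) {
  if (valueCount == 0) {
    return 0;  // empty vector: nothing allocated, nothing to read
  }
  return offsetVector.getAccessor().get(valueCount);
}
{code}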





[GitHub] drill issue #1159: DRILL-6215: Changed Statement to PreparedStatement in Jdb...

2018-03-16 Thread kfaraaz
Github user kfaraaz commented on the issue:

https://github.com/apache/drill/pull/1159
  
I don't know about the other file, I didn't add it. Let me check.

Thanks,
Khurram


---


[jira] [Created] (DRILL-6261) logging "Waiting for X queries to complete before shutting down" even before shutdown request is triggered

2018-03-16 Thread Venkata Jyothsna Donapati (JIRA)
Venkata Jyothsna Donapati created DRILL-6261:


 Summary: logging "Waiting for X queries to complete before 
shutting down" even before shutdown request is triggered
 Key: DRILL-6261
 URL: https://issues.apache.org/jira/browse/DRILL-6261
 Project: Apache Drill
  Issue Type: Bug
Reporter: Venkata Jyothsna Donapati


After the https://issues.apache.org/jira/browse/DRILL-5922 changes, "Waiting 
for X queries to complete before shutting down" is logged every time a query 
runs, instead of only after a shutdown request has been triggered.
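
In other words, the message needs to be gated on the shutdown state. A sketch 
of the intended behavior (field and method names are hypothetical):

{code}
// Log the wait message only once a shutdown has actually been requested,
// not on every query completion.
private void maybeLogShutdownWait() {
  if (shutdownRequested) {
    logger.info("Waiting for {} queries to complete before shutting down",
        runningQueryCount());
  }
}
{code}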





[jira] [Resolved] (DRILL-2656) Add ability to specify options for clean shutdown of a Drillbit

2018-03-16 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker resolved DRILL-2656.
--
Resolution: Duplicate

Addressed by DRILL-4286

> Add ability to specify options for clean shutdown of a Drillbit
> ---
>
> Key: DRILL-2656
> URL: https://issues.apache.org/jira/browse/DRILL-2656
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Execution - Flow
>Affects Versions: 0.8.0
>Reporter: Chris Westin
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: Future
>
>
> When we shut down a Drillbit, we should provide some options similar to those 
> available from Oracle's shutdown command (see 
> https://docs.oracle.com/cd/B28359_01/server.111/b28310/start003.htm#ADMIN11156)
>  .
> At present, in order to avoid problems like DRILL-2654, we try to do a short 
> wait for executing queries, but that times out after 5 seconds, and doesn't 
> help with long-running queries.
> Someone who is running a long query might be unhappy about losing work for 
> something that was near completion, so we can do better.
> And, in order to avoid spurious cleanup problems and exceptions, we should 
> explicitly cancel any remaining queries before we complete the shutdown.
> As in the Oracle example, we might have a "shutdown immediate" option issue 
> cancellations to the running queries.  A clean shutdown might not have a 
> timeout, or might allow the specification of a longer timeout; even when 
> the timeout goes off, we should still cleanly cancel any remaining queries 
> and wait for the cancellations to complete.
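
A self-contained sketch of that sequence (all names hypothetical, not Drill's 
actual shutdown code):

{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class GracefulShutdownSketch {
  interface Query { boolean isDone(); void cancel(); }

  private final List<Query> runningQueries = new CopyOnWriteArrayList<>();

  public void shutdown(long gracePeriodMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + gracePeriodMillis;
    while (System.currentTimeMillis() < deadline
        && runningQueries.stream().anyMatch(q -> !q.isDone())) {
      Thread.sleep(250);  // poll: let near-complete queries finish on their own
    }
    // Explicitly cancel whatever remains to avoid spurious cleanup errors.
    runningQueries.stream().filter(q -> !q.isDone()).forEach(Query::cancel);
    while (runningQueries.stream().anyMatch(q -> !q.isDone())) {
      Thread.sleep(250);  // wait for the cancellations to complete
    }
  }
}
{code}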





[jira] [Resolved] (DRILL-4829) Configure the address to bind to

2018-03-16 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker resolved DRILL-4829.
--
Resolution: Duplicate

Addressed by DRILL-6005

> Configure the address to bind to
> 
>
> Key: DRILL-4829
> URL: https://issues.apache.org/jira/browse/DRILL-4829
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Daniel Stockton
>Assignee: Venkata Jyothsna Donapati
>Priority: Minor
> Fix For: 1.14.0
>
>
> 1.7 included the following patch to prevent Drillbits binding to the loopback 
> address: https://issues.apache.org/jira/browse/DRILL-4523
> "Drillbit is disallowed to bind to loopback address in distributed mode."
> It would be better if this were configurable rather than relying on /etc/hosts, 
> since it's common for the hostname to resolve to loopback.
> Would you accept a patch that adds this option to drill-override.conf?
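
For reference, an illustrative sketch of the DRILL-4523 restriction being 
discussed (not the actual Drillbit startup code):

{code}
import java.net.InetAddress;

// In distributed mode the Drillbit rejects an address that resolves to
// loopback, hence the request for a configurable bind address instead of
// /etc/hosts edits.
public class BindAddressCheck {
  public static void main(String[] args) throws Exception {
    InetAddress addr = InetAddress.getByName(InetAddress.getLocalHost().getHostName());
    if (addr.isLoopbackAddress()) {
      throw new IllegalStateException(
          "Drillbit is disallowed to bind to loopback address in distributed mode.");
    }
    System.out.println("OK to bind: " + addr.getHostAddress());
  }
}
{code}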





[jira] [Resolved] (DRILL-3928) OutOfMemoryException should not be derived from FragmentSetupException

2018-03-16 Thread Karthikeyan Manivannan (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthikeyan Manivannan resolved DRILL-3928.
---
Resolution: Not A Problem

OutOfMemoryException is not derived from FragmentSetupException.
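
For context, a schematic of the decoupled shape this reflects (illustrative 
only, not the verbatim Drill source):

{code}
// The allocator's OOM exception stands alone (a plain RuntimeException),
// so client-side code no longer pulls in Fragment/Foreman setup types
// just to catch it.
public class OutOfMemoryException extends RuntimeException {
  public OutOfMemoryException(String message) {
    super(message);
  }
}
{code}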

> OutOfMemoryException should not be derived from FragmentSetupException
> --
>
> Key: DRILL-3928
> URL: https://issues.apache.org/jira/browse/DRILL-3928
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Chris Westin
>Assignee: Karthikeyan Manivannan
>Priority: Major
>
> Discovered while working on DRILL-3927.
> The client and server both use the same direct memory allocator code. But the 
> allocator's OutOfMemoryException is derived from FragmentSetupException 
> (which is derived from ForemanException).
> Firstly, OOM situations don't only happen during setup.
> Secondly, Fragment and Foreman classes shouldn't exist on the client side. 
> (This causes the jdbc-all jar to have unnecessary dependencies on server-only 
> code.)
> There's nothing special in those base classes that OutOfMemoryException 
> depends on. This looks like it was just a cheap way to avoid extra catch 
> clauses in Foreman and FragmentExecutor by catching only the base classes.





[GitHub] drill issue #1173: DRILL-6259: Support parquet filter push down for complex ...

2018-03-16 Thread priteshm
Github user priteshm commented on the issue:

https://github.com/apache/drill/pull/1173
  
@parthchandra can you please review this change?


---


[GitHub] drill issue #1170: DRILL-6223: Fixed several Drillbit failures due to schema...

2018-03-16 Thread parthchandra
Github user parthchandra commented on the issue:

https://github.com/apache/drill/pull/1170
  
I added a comment in the JIRA - 
[DRILL-6223](https://issues.apache.org/jira/browse/DRILL-6223?focusedCommentId=16402223&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16402223)



---


[jira] [Created] (DRILL-6260) Query fails with "UNSUPPORTED_OPERATION ERROR: Non-scalar sub-query used in an expression" when it contains a cast expression around a scalar sub-query

2018-03-16 Thread Abhishek Girish (JIRA)
Abhishek Girish created DRILL-6260:
--

 Summary: Query fails with "UNSUPPORTED_OPERATION ERROR: Non-scalar 
sub-query used in an expression" when it contains a cast expression around a 
scalar sub-query 
 Key: DRILL-6260
 URL: https://issues.apache.org/jira/browse/DRILL-6260
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Affects Versions: 1.13.0
Reporter: Abhishek Girish


{code}
> explain plan for SELECT T1.b FROM `t1.json` T1  WHERE  T1.a = (SELECT 
> cast(max(T2.a) as varchar) FROM `t2.json` T2);

Error: UNSUPPORTED_OPERATION ERROR: Non-scalar sub-query used in an expression
See Apache Drill JIRA: DRILL-1937
{code}

Slightly different variants of the query work fine. 
{code}
> explain plan for SELECT T1.b FROM `t1.json` T1  WHERE  T1.a = (SELECT 
> max(cast(T2.a as varchar)) FROM `t2.json` T2);

+--+--+
| text | json |
+--+--+
| 00-00    Screen
00-01      Project(b=[$0])
00-02        Project(b=[$1])
00-03          SelectionVectorRemover
00-04            Filter(condition=[=($0, $2)])
00-05              NestedLoopJoin(condition=[true], joinType=[left])
00-07                Scan(table=[[si, tmp, t1.json]], groupscan=[EasyGroupScan 
[selectionRoot=maprfs:/tmp/t1.json, numFiles=1, columns=[`a`, `b`], 
files=[maprfs:///tmp/t1.json]]])
00-06                StreamAgg(group=[{}], EXPR$0=[MAX($0)])
00-08                  Project($f0=[CAST($0):VARCHAR(65535) CHARACTER SET 
"UTF-16LE" COLLATE "UTF-16LE$en_US$primary"])
00-09                    Scan(table=[[si, tmp, t2.json]], 
groupscan=[EasyGroupScan [selectionRoot=maprfs:/tmp/t2.json, numFiles=1, 
columns=[`a`], files=[maprfs:///tmp/t2.json]]])
{code}
{code}
> explain plan for SELECT T1.b FROM `t1.json` T1  WHERE  T1.a = (SELECT 
> max(T2.a) FROM `t2.json` T2);

+--+--+
| text | json |
+--+--+
| 00-00    Screen
00-01      Project(b=[$0])
00-02        Project(b=[$1])
00-03          SelectionVectorRemover
00-04            Filter(condition=[=($0, $2)])
00-05              NestedLoopJoin(condition=[true], joinType=[left])
00-07                Scan(table=[[si, tmp, t1.json]], groupscan=[EasyGroupScan 
[selectionRoot=maprfs:/tmp/t1.json, numFiles=1, columns=[`a`, `b`], 
files=[maprfs:///tmp/t1.json]]])
00-06                StreamAgg(group=[{}], EXPR$0=[MAX($0)])
00-08                  Scan(table=[[si, tmp, t2.json]], 
groupscan=[EasyGroupScan [selectionRoot=maprfs:/tmp/t2.json, numFiles=1, 
columns=[`a`], files=[maprfs:///tmp/t2.json]]])
{code}

File contents:
{code}
# cat t1.json 
{"a":1, "b":"V"}
{"a":2, "b":"W"}
{"a":3, "b":"X"}
{"a":4, "b":"Y"}
{"a":5, "b":"Z"}

# cat t2.json 
{"a":1, "b":"A"}
{"a":2, "b":"B"}
{"a":3, "b":"C"}
{"a":4, "b":"D"}
{"a":5, "b":"E"}
{code}





[GitHub] drill issue #1152: DRILL-6199: Add support for filter push down and partitio...

2018-03-16 Thread arina-ielchiieva
Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1152
  
@HanumathRao thanks for the review. Applied code review comment.


---


[GitHub] drill pull request #1152: DRILL-6199: Add support for filter push down and p...

2018-03-16 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1152#discussion_r175120182
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/DrillFilterItemStarReWriterRule.java
 ---
@@ -54,83 +44,189 @@
 import static 
org.apache.drill.exec.planner.logical.FieldsReWriterUtil.FieldsReWriter;
 
 /**
- * Rule will transform filter -> project -> scan call with item star 
fields in filter
- * into project -> filter -> project -> scan where item star fields are 
pushed into scan
- * and replaced with actual field references.
+ * Rule will transform item star fields in filter and replace them with 
actual field references.
  *
  * This will help partition pruning and push down rules to detect fields 
that can be pruned or pushed down.
  * Item star operator appears when sub-select or cte with star are used as 
source.
  */
-public class DrillFilterItemStarReWriterRule extends RelOptRule {
+public class DrillFilterItemStarReWriterRule {
 
-  public static final DrillFilterItemStarReWriterRule INSTANCE = new 
DrillFilterItemStarReWriterRule(
-  RelOptHelper.some(Filter.class, RelOptHelper.some(Project.class, 
RelOptHelper.any( TableScan.class))),
-  "DrillFilterItemStarReWriterRule");
+  public static final DrillFilterItemStarReWriterRule.ProjectOnScan 
PROJECT_ON_SCAN = new ProjectOnScan(
+  RelOptHelper.some(DrillProjectRel.class, 
RelOptHelper.any(DrillScanRel.class)),
+  "DrillFilterItemStarReWriterRule.ProjectOnScan");
 
-  private DrillFilterItemStarReWriterRule(RelOptRuleOperand operand, 
String id) {
-super(operand, id);
-  }
+  public static final DrillFilterItemStarReWriterRule.FilterOnScan 
FILTER_ON_SCAN = new FilterOnScan(
+  RelOptHelper.some(DrillFilterRel.class, 
RelOptHelper.any(DrillScanRel.class)),
+  "DrillFilterItemStarReWriterRule.FilterOnScan");
 
-  @Override
-  public void onMatch(RelOptRuleCall call) {
-Filter filterRel = call.rel(0);
-Project projectRel = call.rel(1);
-TableScan scanRel = call.rel(2);
+  public static final DrillFilterItemStarReWriterRule.FilterOnProject 
FILTER_ON_PROJECT = new FilterOnProject(
+  RelOptHelper.some(DrillFilterRel.class, 
RelOptHelper.some(DrillProjectRel.class, RelOptHelper.any(DrillScanRel.class))),
+  "DrillFilterItemStarReWriterRule.FilterOnProject");
 
-ItemStarFieldsVisitor itemStarFieldsVisitor = new 
ItemStarFieldsVisitor(filterRel.getRowType().getFieldNames());
-filterRel.getCondition().accept(itemStarFieldsVisitor);
 
-// there are no item fields, no need to proceed further
-if (!itemStarFieldsVisitor.hasItemStarFields()) {
-  return;
+  private static class ProjectOnScan extends RelOptRule {
+
+ProjectOnScan(RelOptRuleOperand operand, String id) {
+  super(operand, id);
 }
 
-Map itemStarFields = 
itemStarFieldsVisitor.getItemStarFields();
+@Override
+public boolean matches(RelOptRuleCall call) {
+  DrillScanRel scan = call.rel(1);
+  return scan.getGroupScan() instanceof ParquetGroupScan && 
super.matches(call);
+}
 
-// create new scan
-RelNode newScan = constructNewScan(scanRel, itemStarFields.keySet());
+@Override
+public void onMatch(RelOptRuleCall call) {
+  DrillProjectRel projectRel = call.rel(0);
+  DrillScanRel scanRel = call.rel(1);
+
+  ItemStarFieldsVisitor itemStarFieldsVisitor = new 
ItemStarFieldsVisitor(scanRel.getRowType().getFieldNames());
+  List projects = projectRel.getProjects();
+  for (RexNode project : projects) {
+project.accept(itemStarFieldsVisitor);
+  }
 
-// combine original and new projects
-List newProjects = new ArrayList<>(projectRel.getProjects());
+  Map itemStarFields = 
itemStarFieldsVisitor.getItemStarFields();
 
-// prepare node mapper to replace item star calls with new input field 
references
-Map fieldMapper = new HashMap<>();
+  // if there are no item fields, no need to proceed further
+  if (itemStarFieldsVisitor.hasNoItemStarFields()) {
--- End diff --

Sure, moved.


---


[GitHub] drill pull request #1152: DRILL-6199: Add support for filter push down and p...

2018-03-16 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1152#discussion_r175136589
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/DrillFilterItemStarReWriterRule.java
 ---
@@ -54,83 +44,189 @@
 import static 
org.apache.drill.exec.planner.logical.FieldsReWriterUtil.FieldsReWriter;
 
 /**
- * Rule will transform filter -> project -> scan call with item star 
fields in filter
- * into project -> filter -> project -> scan where item star fields are 
pushed into scan
- * and replaced with actual field references.
+ * Rule will transform item star fields in filter and replace them with 
actual field references.
  *
  * This will help partition pruning and push down rules to detect fields 
that can be pruned or pushed down.
  * Item star operator appears when sub-select or cte with star are used as 
source.
  */
-public class DrillFilterItemStarReWriterRule extends RelOptRule {
+public class DrillFilterItemStarReWriterRule {
 
-  public static final DrillFilterItemStarReWriterRule INSTANCE = new 
DrillFilterItemStarReWriterRule(
-  RelOptHelper.some(Filter.class, RelOptHelper.some(Project.class, 
RelOptHelper.any( TableScan.class))),
-  "DrillFilterItemStarReWriterRule");
+  public static final DrillFilterItemStarReWriterRule.ProjectOnScan 
PROJECT_ON_SCAN = new ProjectOnScan(
+  RelOptHelper.some(DrillProjectRel.class, 
RelOptHelper.any(DrillScanRel.class)),
+  "DrillFilterItemStarReWriterRule.ProjectOnScan");
 
-  private DrillFilterItemStarReWriterRule(RelOptRuleOperand operand, 
String id) {
-super(operand, id);
-  }
+  public static final DrillFilterItemStarReWriterRule.FilterOnScan 
FILTER_ON_SCAN = new FilterOnScan(
+  RelOptHelper.some(DrillFilterRel.class, 
RelOptHelper.any(DrillScanRel.class)),
+  "DrillFilterItemStarReWriterRule.FilterOnScan");
 
-  @Override
-  public void onMatch(RelOptRuleCall call) {
-Filter filterRel = call.rel(0);
-Project projectRel = call.rel(1);
-TableScan scanRel = call.rel(2);
+  public static final DrillFilterItemStarReWriterRule.FilterOnProject 
FILTER_ON_PROJECT = new FilterOnProject(
--- End diff --

Done.


---


[GitHub] drill pull request #1152: DRILL-6199: Add support for filter push down and p...

2018-03-16 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/1152#discussion_r175137249
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestPushDownAndPruningWithItemStar.java
 ---
@@ -180,4 +248,38 @@ public void testFilterPushDownMultipleConditions() 
throws Exception {
 .build();
   }
 
+  @Test
+  public void testFilterPushDownWithSeveralNestedStarSubQueries() throws 
Exception {
+String subQuery = String.format("select * from `%s`.`%s`", 
DFS_TMP_SCHEMA, TABLE_NAME);
+String query = String.format("select * from (select * from (select * 
from (%s))) where o_orderdate = date '1992-01-01'", subQuery);
+
+String[] expectedPlan = {"numFiles=1, numRowGroups=1, 
usedMetadataFile=false, columns=\\[`\\*\\*`, `o_orderdate`\\]"};
+String[] excludedPlan = {};
+
+PlanTestBase.testPlanMatchingPatterns(query, expectedPlan, 
excludedPlan);
+
+testBuilder()
+.sqlQuery(query)
+.unOrdered()
+.sqlBaselineQuery("select * from `%s`.`%s` where o_orderdate = 
date '1992-01-01'", DFS_TMP_SCHEMA, TABLE_NAME)
+.build();
+  }
+
+  @Test
+  public void 
testFilterPushDownWithSeveralNestedStarSubQueriesWithAdditionalColumns() throws 
Exception {
+String subQuery = String.format("select * from `%s`.`%s`", 
DFS_TMP_SCHEMA, TABLE_NAME);
+String query = String.format("select * from (select * from (select *, 
o_orderdate from (%s))) where o_orderdate = date '1992-01-01'", subQuery);
--- End diff --

Done.


---


[GitHub] drill pull request #1173: DRILL-6259: Support parquet filter push down for c...

2018-03-16 Thread arina-ielchiieva
GitHub user arina-ielchiieva opened a pull request:

https://github.com/apache/drill/pull/1173

DRILL-6259: Support parquet filter push down for complex types

Details in [DRILL-6259](https://issues.apache.org/jira/browse/DRILL-6259).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arina-ielchiieva/drill DRILL-6259

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/1173.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1173


commit 7a694cedc76d76ce062b393ddd30002e8a6ba11a
Author: Arina Ielchiieva 
Date:   2018-03-13T17:54:25Z

DRILL-6259: Support parquet filter push down for complex types




---


[jira] [Created] (DRILL-6259) Implement parquet filter push down for complex types

2018-03-16 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-6259:
---

 Summary: Implement parquet filter push down for complex types
 Key: DRILL-6259
 URL: https://issues.apache.org/jira/browse/DRILL-6259
 Project: Apache Drill
  Issue Type: Improvement
Affects Versions: 1.13.0
Reporter: Arina Ielchiieva
Assignee: Arina Ielchiieva
 Fix For: 1.14.0


Currently parquet filter push down does not work for complex types (including 
arrays).

This Jira aims to implement filter push down for complex types whose underlying 
type is among the simple types already supported for filter push down. For 
instance, Drill currently does not support filter push down for varchars, 
decimals, etc.; once Drill adds that support, it will apply to complex types 
automatically.

Complex fields will be pushed down the same way regular fields are, except for 
one case with arrays.

A query with the predicate {{where users.hobbies_ids[2] is null}} can't be 
pushed down, because we are not able to determine the exact number of nulls in 
array fields.

Consider {{[1, 2, 3]}} vs {{[1, 2]}}, where these arrays are in different files. 
Statistics for the second file won't show any nulls, but when querying both 
files, in terms of data the third value in that array is null.
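
A concrete illustration of why the array case is excluded (helper name is 
hypothetical):

{code}
// File A holds {"hobbies_ids": [1, 2, 3]}, file B holds {"hobbies_ids": [1, 2]}.
// B's statistics report zero nulls, yet for `hobbies_ids[2] is null` every row
// of B matches, because the third element simply does not exist. So a zero
// null count alone cannot justify dropping a row group when the filtered
// column is an array element.
static boolean canDropIsNullPredicate(long statisticsNullCount, boolean isArrayElement) {
  return statisticsNullCount == 0 && !isArrayElement;
}
{code}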

 





[GitHub] drill issue #1172: DRILL-6256: Remove references to java 7 from readme and o...

2018-03-16 Thread arina-ielchiieva
Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1172
  
+1


---


[RESULT] [VOTE] Apache Drill release 1.13.0 (RC0)

2018-03-16 Thread Parth Chandra
The vote passes. Thanks to everyone who tested the release candidate and
gave their comments and votes. Final tally:

Vote

+1 : 3 binding (Aman, Arina, Parth), 4 non-binding (Khurram, Sorabh,
Vitalii, Vova)
 0 : 1 non binding (Vlad)
-1 : none

I'll start the process for pushing the release artifacts and will send an
announcement once they have propagated.


Re: [VOTE] Apache Drill release 1.13.0 - RC0

2018-03-16 Thread Parth Chandra
Hi Vlad,

  These issues should be fixed in the next release. Can you please log
JIRAs for these?

Thanks

Parth

On Fri, Mar 16, 2018 at 8:38 AM, Vlad Rozov  wrote:

> 0 (non-binding)
>
> - verified release signature and hash
> - verified build "mvn -Dmaven.repo.local=/tmp/.m2/repository/ install
> -DskipTests"
> - verified NOTICE, LICENSE and README.md
> - checked DEPENDENCIES
>
> issues noted (in order of severity)
> - the source distribution contains binary jar files under
> exec/java-exec/src/test/resources/jars
> - dependency licensed under GNU LGPL
> - copyright year range in NOTICE needs to be updated
> - release is signed with the signature that is valid till 2018-10-09
>
> Thank you,
>
> Vlad
>
>
> On 3/15/18 15:31, Sorabh Hamirwasia wrote:
>
>>*   Downloaded binary tarball from [2] and deployed 2 node cluster
>>*   Ran some basic queries using sqlline and Web UI with and without
>> security enabled
>>*   Verified user to bit secure connections using Plain/Kerberos
>>*   Verified bit to bit secure connections using Kerberos.
>>   *   During testing I found an issue that local control message is
>> still creating a connection, but is not related to this release. The issue
>> is a regression because of state introduced in Drillbit endpoint as part of
>> shutdown feature in 1.12. I have opened DRILL-6255 
>> <https://issues.apache.org/jira/browse/DRILL-6255> for this issue with details in it.
>>*   Verified SPNEGO and FORM authentication for Web UI
>>*   Ran and verify queries against sys.connections table
>>*   Built C++ client on linux box using the source tarball from [2]
>> and ran queries to secure and unsecure Drillbit
>>
>>
>> LGTM +1 (non-binding)
>>
>>
>> Thanks,
>> Sorabh
>>
>> 
>> From: Vitalii Diravka 
>> Sent: Thursday, March 15, 2018 11:12 AM
>> To: dev@drill.apache.org
>> Subject: Re: [VOTE] Apache Drill release 1.13.0 - RC0
>>
>> * Downloaded sources tarball from [2].
>> Ran drill in embedded mode on local debian machine. Ran tpch queries with
>> joins, group by, order by, limit, order by with limit statements.
>> Looked through logs - looks good.
>> * Build drill for [4] with MapR profile. Ran drillbit in distributed mode
>> on centos VM with MapR core. Ran queries for Hive 1.2, ran queries for
>> Hive
>> 2.1 (transactional and non-transactional tables). Connected to this
>> drillbit from remote machine via JDBC with a java program and with SQuirrel
>> using different drivers (prebuild "drill-jdbc-all-1.12.0.jar" from [2]
>> tarball, the driver after build [4] sources with default and MapR profiles
>> too) and ran a simple query. Ran the same with enabled custom
>> authentication (jdbc driver which is build with MapR profile works good
>> too).
>> In the process of testing jdbc connection I found the issue - DRILL-6251.
>> It is a regression. I have described the case in Jira. But I suppose isn't
>> critical for current Drill release.
>> * All unit test were passed for [4]. Total time on my machine was: 42:10
>> min
>>
>> +1 (non-binding)
>>
>> Kind regards
>> Vitalii
>>
>> On Thu, Mar 15, 2018 at 4:49 PM, Vova Vysotskyi  wrote:
>>
>>> - Downloaded source tar at [2], ran unit tests and all tests are passed.
>>> - Downloaded built tar at [2], submitted several TPCH queries from UI,
>>> checked that profiles are displayed correctly.
>>> - Connected from SQuirrel, ran several queries; ran queries from a java
>>> application, no issues were found.
>>>
>>> +1 (non-binding)
>>>
>>>
>>> 2018-03-15 13:19 GMT+02:00 Arina Yelchiyeva:
>>>
>>> - Built from the source [4] on Linux, run unit test.
>>> - Downloaded the binary tarball [2], untarred and ran Drill in embedded
>>> mode on Windows.
>>> - Ran sample queries, checked system tables, profiles on Web UI, also
>>> logs and index page.
>>> - Created persistent and temporary tables, loaded custom UDFs.
>>>
>>> +1 (binding)
>>>
>>> Kind regards
>>> Arina
>>>
>>> On Thu, Mar 15, 2018 at 1:39 AM, Aman Sinha wrote:
> - Downloaded the source tarball from [2] on my Linux VM, built and ran
> the unit tests successfully
> - Downloaded the binary tarball onto my Macbook, untarred and ran Drill
> in embedded mode
> - Ran several queries against a TPC-DS SF1 data set, including CTAS
> statements with PARTITION BY and ran a few partition pruning queries
> - Tested query cancellation by cancelling a query that was taking long
> time due to expanding join
> - Examined the run-time query profiles of these queries with and
> without parallelism.
> - Checked the maven artifacts on [3].
>
> - Found one reference to JDK 7 : README.md says 'JDK 7' in the
> Prerequisites.  Ideally, this should be changed to JDK 8
>
> Overall, LGTM  +1 (binding)
>
> On Tue, Mar 13, 2018 at 3:58 AM, Parth Chandra wrote:

[GitHub] drill issue #1158: DRILL-6145: Implement Hive MapR-DB JSON handler

2018-03-16 Thread vdiravka
Github user vdiravka commented on the issue:

https://github.com/apache/drill/pull/1158
  
@priteshm I have created a Jira for the above-mentioned issue: 
[DRILL-6258](https://issues.apache.org/jira/browse/DRILL-6258)


---


[jira] [Created] (DRILL-6258) Jar files aren't downloaded if dependency is present only in profile section

2018-03-16 Thread Vitalii Diravka (JIRA)
Vitalii Diravka created DRILL-6258:
--

 Summary: Jar files aren't downloaded if dependency is present only 
in profile section
 Key: DRILL-6258
 URL: https://issues.apache.org/jira/browse/DRILL-6258
 Project: Apache Drill
  Issue Type: Improvement
  Components: Tools, Build & Test
Affects Versions: 1.13.0
Reporter: Vitalii Diravka
 Fix For: Future


Dependencies declared only in profile-specific sections of any module's POM 
files, when present in the distribution POM, should be downloaded as jars (with 
the appropriate profile enabled), just like dependencies from the common 
section of the POM files.

This would avoid having to create extra dependency sections or additional 
modules.

Currently, to have jar files downloaded for a specific profile, the dependency 
must also be added to the profile section of the distribution/pom file.





Re: [ANNOUNCE] New Committer: Volodymyr Vysotskyi

2018-03-16 Thread Parth Chandra
Congratulations Volodymyr!

On Fri, Mar 16, 2018 at 7:33 AM, Paul Rogers 
wrote:

> Congratulations! Thanks for the great contributions.
> - Paul
>
>
>
> On Thursday, March 15, 2018, 10:26:26 AM PDT, Khurram Faraaz <
> kfar...@mapr.com> wrote:
>
>  Congratulations Volodymyr!
>
> 
> From: Arina Ielchiieva 
> Sent: Thursday, March 15, 2018 10:16:51 AM
> To: dev@drill.apache.org
> Subject: [ANNOUNCE] New Committer: Volodymyr Vysotskyi
>
> The Project Management Committee (PMC) for Apache Drill has
> invited Volodymyr Vysotskyi to become a committer, and we are pleased to
> announce that he has accepted.
>
> Volodymyr has been contributing to Drill for over a year. He has contributed
> in different areas, including code generation, JSON processing, and function
> implementations.
> He also actively participated in the Calcite rebase and showed profound
> knowledge of the planning area.
> Currently he is working on decimal enhancements in Drill.
>
> Congratulations Volodymyr and thank you for your contributions!
>
> - Arina
> (on behalf of the Apache Drill PMC)
>


[jira] [Resolved] (DRILL-4493) Fixed issues in various POMs with MapR profile

2018-03-16 Thread Vitalii Diravka (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitalii Diravka resolved DRILL-4493.

Resolution: Fixed

It was merged into the master branch with commit id c047f04b507faec.

> Fixed issues in various POMs with MapR profile
> --
>
> Key: DRILL-4493
> URL: https://issues.apache.org/jira/browse/DRILL-4493
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.6.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Major
> Fix For: 1.6.0
>
>
> * Remove inclusion of some transitive dependencies from distribution pom.
> * Remove maprfs/json artifacts from "mapr" profile in drill-java-exec pom.
> * Set "hadoop-common"'s scope as test in jdbc pom (without this the jdbc-all 
> jar bloats to >60MB).
> * Revert HBase version to 0.98.12-mapr-1602-m7-5.1.0.
> * Exclude log4j and commons-logging from some HBase artifacts.





[GitHub] drill issue #1168: DRILL-6246: Reduced the size of the jdbc-all jar file

2018-03-16 Thread vvysotskyi
Github user vvysotskyi commented on the issue:

https://github.com/apache/drill/pull/1168
  
Classes from `avatica.metrics` are used in `JsonHandler`, `ProtobufHandler` 
and `LocalService`. If Drill does not use these classes, then I agree that we 
can exclude them from the `jdbc-all` jar. 
Regarding excluding `avatica/org/**`, it looks like the problem is in the 
Avatica pom files, since there are no dependencies on `org.apache.commons` and 
`org.apache.http`, yet they are shaded into the jar. Created Jira CALCITE-2215 
to fix this issue, but for now I think it's ok to exclude them.


---


Re: [Drill 1.12.0] : Suggestions on Downgrade to 1.11.0  & com.mysql.jdbc.exceptions.jdbc4.CommunicationsException

2018-03-16 Thread Anup Tiwari
Hi All,
We checked our MySQL max number of connections, which is set to 200, and I 
think this might be due to exceeding the max number of connections, as right 
now I can see 89 connections to MySQL.
I want to know the community's thoughts on whether I am heading in the right 
direction or not.
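
For what it's worth, the remedies named in the driver's error message map onto 
the commons-dbcp pool visible in the stack trace. A rough illustration only 
(property values are examples, not our actual plugin settings):

    import org.apache.commons.dbcp.BasicDataSource;

    BasicDataSource ds = new BasicDataSource();
    ds.setUrl("jdbc:mysql://host:3306/test?autoReconnect=true");
    ds.setValidationQuery("SELECT 1"); // test connection validity before use
    ds.setTestOnBorrow(true);          // evict stale connections instead of failing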






[Drill 1.12.0] : Suggestions on Downgrade to 1.11.0  & com.mysql.jdbc.exceptions.jdbc4.CommunicationsException

2018-03-16 Thread Anup Tiwari
Hi All,
We have been getting many different kinds of issues/errors after upgrading from 
Drill 1.10.0 to 1.12.0, which I am also asking about on the forum, so I just 
wanted to know whether downgrading to Drill 1.11.0 would help or not.
This time we got an exception related to the MySQL storage connection, and 
please note that this issue is not consistent, i.e. if I execute the query 
again after some time then it works. Please find the query and error logs below.
Query:
create table dfs.tmp.table_info as select * from mysql.test.table_info;
Error:
WARN o.a.d.e.store.jdbc.JdbcStoragePlugin - Failure while attempting to load JDBC schema.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 49,949,177 milliseconds ago. The last packet sent successfully to the server was 49,949,196 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_72]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_72]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_72]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_72]
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:389) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1038) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3609) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2417) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2531) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2489) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1446) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at com.mysql.jdbc.DatabaseMetaData.getCatalogs(DatabaseMetaData.java:2025) ~[mysql-connector-java-5.1.35-bin.jar:5.1.35]
    at org.apache.commons.dbcp.DelegatingDatabaseMetaData.getCatalogs(DelegatingDatabaseMetaData.java:190) ~[commons-dbcp-1.4.jar:1.4]
    at org.apache.drill.exec.store.jdbc.JdbcStoragePlugin$JdbcCatalogSchema.<init>(JdbcStoragePlugin.java:309) ~[drill-jdbc-storage-1.12.0.jar:1.12.0]
    at org.apache.drill.exec.store.jdbc.JdbcStoragePlugin.registerSchemas(JdbcStoragePlugin.java:430) [drill-jdbc-storage-1.12.0.jar:1.12.0]
    at org.apache.drill.exec.planner.sql.DynamicRootSchema.loadSchemaFactory(DynamicRootSchema.java:94) [drill-java-exec-1.12.0.jar:1.12.0]
    at org.apache.drill.exec.planner.sql.DynamicRootSchema.getSubSchema(DynamicRootSchema.java:74) [drill-java-exec-1.12.0.jar:1.12.0]
    at org.apache.calcite.prepare.CalciteCatalogReader.getSchema(CalciteCatalogReader.java:160) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.prepare.CalciteCatalogReader.getTableFrom(CalciteCatalogReader.java:114) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:108) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.drill.exec.planner.sql.SqlConverter$DrillCalciteCatalogReader.getTable(SqlConverter.java:493) [drill-java-exec-1.12.0.jar:1.12.0]
    at org.apache.drill.exec.planner.sql.SqlConverter$DrillCalciteCatalogReader.getTable(SqlConverter.java:434) [drill-java-exec-1.12.0.jar:1.12.0]
    at org.apache.calcite.sql.validate.EmptyScope.getTableNamespace(EmptyScope.java:75) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.sql.validate.DelegatingScope.getTableNamespace(DelegatingScope.java:124) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:104) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:86) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:886) [calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
    at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:872) [calcite-core-1.4.0-drill-r

Re: [VOTE] Apache Drill release 1.13.0 - RC0

2018-03-16 Thread Khurram Faraaz
Built from source and deployed binaries on a 4 node cluster.

Ran queries from sqlline and Web UI.

Performed basic sanity tests on the Web UI.


Looks good. (non-binding).


Thanks,

Khurram



From: Sorabh Hamirwasia 
Sent: Thursday, March 15, 2018 3:31:53 PM
To: dev@drill.apache.org
Subject: Re: [VOTE] Apache Drill release 1.13.0 - RC0

  *   Downloaded binary tarball from [2] and deployed 2 node cluster
  *   Ran some basic queries using sqlline and Web UI with and without security 
enabled
  *   Verified user to bit secure connections using Plain/Kerberos
  *   Verified bit to bit secure connections using Kerberos.
 *   During testing I found an issue that local control message is still 
creating a connection, but is not related to this release. The issue is a 
regression because of state introduced in Drillbit endpoint as part of shutdown 
feature in 1.12. I have opened 
DRILL-6255
 for this issue with details in it.
  *   Verified SPNEGO and FORM authentication for Web UI
  *   Ran and verify queries against sys.connections table
  *   Built C++ client on linux box using the source tarball from [2] and ran 
queries to secure and unsecure Drillbit


LGTM +1 (non-binding)


Thanks,
Sorabh


From: Vitalii Diravka 
Sent: Thursday, March 15, 2018 11:12 AM
To: dev@drill.apache.org
Subject: Re: [VOTE] Apache Drill release 1.13.0 - RC0

* Downloaded sources tarball from [2].
Ran drill in embedded mode on local debian machine. Ran tpch queries with
joins, group by, order by, limit, order by with limit statements.
Looked through logs - looks good.
* Build drill for [4] with MapR profile. Ran drillbit in distributed mode
on centos VM with MapR core. Ran queries for Hive 1.2, ran queries for Hive
2.1 (transactional and non-transactional tables). Connected to this
drillbit from remote machine via JDBC with a java program and with SQuirrel
using different drivers (prebuild "drill-jdbc-all-1.12.0.jar" from [2]
tarball, the driver after build [4] sources with default and MapR profiles
too) and ran a simple query. Ran the same with enabled custom
authentication (jdbc driver which is build with MapR profile works good
too).
In the process of testing jdbc connection I found the issue - DRILL-6251.
It is a regression. I have described the case in Jira. But I suppose isn't
critical for current Drill release.
* All unit test were passed for [4]. Total time on my machine was: 42:10 min

+1 (non-binding)

Kind regards
Vitalii

On Thu, Mar 15, 2018 at 4:49 PM, Vova Vysotskyi  wrote:

> - Downloaded source tar at [2], ran unit tests and all tests are passed.
> - Downloaded built tar at [2], submitted several TPCH queries from UI,
> checked that profiles are displayed correctly.
> - Connected from SQuirrel, ran several queries; ran queries from a java
> application, no issues were found.
>
> +1 (non-binding)
>
>
> 2018-03-15 13:19 GMT+02:00 Arina Yelchiyeva :
>
> >  - Built from the source [4] on Linux, run unit test.
> > - Downloaded the binary tarball [2], untarred and ran Drill in embedded
> > mode on Windows.
> > - Ran sample queries, checked system tables, profiles on Web UI, also
> logs
> > and index page.
> > - Created persistent and temporary tables, loaded custom UDFs.
> >
> > +1 (binding)
> >
> > Kind regards
> > Arina
> >
> > On Thu, Mar 15, 2018 at 1:39 AM, Aman Sinha wrote:
> >
> > > - Downloaded the source tarball from [2] on my Linux VM, built and ran
> > the
> > > unit tests successfully
> > > - Downloaded the binary tarball onto my Macbook, untarred and ran Drill
> > in
> > > embedded mode
> > > - Ran several queries  against a TPC-DS SF1 data set, including CTAS
> > > statements with PARTITION BY and ran a few partition pruning queries
> > > - Tested query cancellation by cancelling a query that was taking long
> > time
> > > due to expanding join
> > > - Examined the run-time query profiles of these queries with and
> without
> > > parallelism.
> > > - Checked the maven artifacts on [3].
> > >
> > >  - Found one reference to JDK 7 : README.md says 'JDK 7' in the
> > > Prerequisites.  Ideally, this should be changed to JDK 8
> > >
> > > Overall, LGTM  +1 (binding)
> > >
> > >
> > > On Tue, Mar 13, 2018 at 3:58 AM, Parth Chandra wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to propose the first release candidate (RC0) of Apache
> Drill,
> > > > version 1.13.0.
> > > >
> > > > The release candidate covers a total of 113 resolved JIRAs [1].
> Thanks
> > > > to everyone
> > > > who contributed to this release.
> > > >
> > > > The tarball artifacts are hosted at [2] and the maven artifacts are
> > > hosted
> > > > at
> > > > [3].
> > > >
> > > > This release candidate is based on comm